When a project is deployed, we cannot simply print everything to the console. Instead, we record the information to a log file, which not only lets us inspect the program at runtime but also, when the project fails, lets us quickly locate the problem from the logs generated at runtime.
1. Log level
The Python standard library module logging is used to record logs. By default it defines six logging levels (the corresponding numeric value in brackets): NOTSET (0), DEBUG (10), INFO (20), WARNING (30), ERROR (40), and CRITICAL (50). When defining a custom logging level, take care not to reuse a value of one of the default levels. When logging executes, it outputs log records whose level is greater than or equal to the configured level; for example, with the level set to INFO, logs of INFO, WARNING, ERROR, and CRITICAL level are output.
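As a quick sanity check (a minimal sketch using only the standard logging module; the logger name here is made up), you can ask a logger whether a given level would pass its threshold:

```python
import logging

# A throwaway logger set to INFO: DEBUG records fall below the threshold
demo = logging.getLogger("level_demo")
demo.setLevel(logging.INFO)

print(demo.isEnabledFor(logging.DEBUG))       # False: 10 < 20
print(demo.isEnabledFor(logging.INFO))        # True: 20 >= 20
print(logging.getLevelName(logging.WARNING))  # 'WARNING': numeric value maps to name
```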
2. Logging workflow
The official logging module workflow is shown below:
The figure involves the following Python types: Logger, LogRecord, Filter, Handler, and Formatter.
Type descriptions:
Logger: the logger, which exposes the logging functions to application code and determines, based on its level and filters, which log records are processed.
LogRecord: a single log record, which the logger passes to the appropriate handlers for processing.
Handler: the handler, which sends log records (created by loggers) to the appropriate destination.
Filter: a filter, which provides finer-grained control over which log records are output.
Formatter: the formatter, which specifies the layout of the log records in the final output.
- Determine whether the Logger object is enabled for the requested level; if so, execution continues, otherwise the flow ends.
- Create a LogRecord object; if a Filter object registered on the Logger object returns False after filtering, no log is recorded and the flow ends; otherwise execution continues.
- The LogRecord object is passed to the Handler objects of the current Logger object (the sub-flow in the figure). If the record's level is greater than or equal to the Handler object's level, it is then determined whether the Filter objects registered on that Handler return True after filtering; if so, the log message is emitted, otherwise it is not, and the flow ends.
- If the incoming record's level is greater than or equal to the level set on the Handler, i.e. the Handler is effective, it executes; otherwise the flow ends.
- Determine whether the Logger object has a parent Logger object. If not (the current Logger object is the top-level root Logger), the flow ends. Otherwise, set the Logger object to its parent Logger object, repeat steps 3 and 4 above, and emit the parent Logger object's output, until the root Logger is reached.
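The last step, walking up to the parent Logger, can be observed directly. A minimal sketch, assuming only standard-library behavior (the logger names are invented for illustration):

```python
import logging

# Create each level explicitly so the dotted-name hierarchy is materialized
app = logging.getLogger("app")
db = logging.getLogger("app.db")
query = logging.getLogger("app.db.query")

chain = []
node = query
while node is not None:  # the root logger's parent is None
    chain.append(node.name)
    node = node.parent   # the parent is derived from the dotted name

print(chain)  # ['app.db.query', 'app.db', 'app', 'root']
```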
3. Log output format
The log output format can be customized. The default format is `%(levelname)s:%(name)s:%(message)s`.
4. Basic use
logging is very simple to use: the basicConfig() method satisfies basic needs. If the method is called without arguments, it creates a Logger object with the default configuration; the default logging level is WARNING and the default output format is shown above. The optional parameters of this function are listed in the following table.
Parameter name | Parameter Description |
---|---|
filename | Name of the file the log is written to |
filemode | File open mode: r[+], w[+], a[+] (default "a") |
format | Format of the log output |
datefmt | Format of the date/time in the log |
style | Format placeholder style: "%", "{" or "$" (default "%") |
level | The log output level |
stream | The output stream used to initialize a StreamHandler object; cannot be used together with the filename parameter, otherwise a ValueError exception is raised |
handlers | A list of handlers used as the Handler objects; cannot be used together with the filename or stream parameters, otherwise a ValueError exception is also raised |
The sample code is as follows:
import logging

logging.basicConfig()
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
The output is as follows:
WARNING:root:This is a warning message
ERROR:root:This is an error message
CRITICAL:root:This is a critical message
Passing in common parameters, the sample code is as follows (the variables in the log format placeholders are introduced later):
import logging

logging.basicConfig(filename="test.log",  # placeholder file name
                    filemode="w",
                    format="%(asctime)s %(name)s:%(levelname)s:%(message)s",
                    datefmt="%d-%m-%Y %H:%M:%S",
                    level=logging.DEBUG)
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
The log file is generated with the following contents:
13-10-18 21:10:32 root:DEBUG:This is a debug message
13-10-18 21:10:32 root:INFO:This is an info message
13-10-18 21:10:32 root:WARNING:This is a warning message
13-10-18 21:10:32 root:ERROR:This is an error message
13-10-18 21:10:32 root:CRITICAL:This is a critical message
However, when an exception occurs, calling the debug(), info(), warning(), error(), or critical() methods directly with no extra arguments cannot record the exception information. You need to set the exc_info parameter to True, or use the exception() method; you can also use the log() method, but then you need to set both the log level and the exc_info parameter.
import logging

logging.basicConfig(filename="test.log",  # placeholder file name
                    filemode="w",
                    format="%(asctime)s %(name)s:%(levelname)s:%(message)s",
                    datefmt="%d-%m-%Y %H:%M:%S",
                    level=logging.DEBUG)
a = 5
b = 0
try:
    c = a / b
except Exception as e:
    # One of the following three options; the first is recommended
    logging.exception("Exception occurred")
    logging.error("Exception occurred", exc_info=True)
    logging.log(level=logging.DEBUG, msg="Exception occurred", exc_info=True)
5. Customizing the Logger
The above basic usage will get us up to speed with the logging module, but it's generally not enough for real-world use, and we'll need to customize the Logger.
There is only one Logger object in a system, and this object cannot be instantiated directly; yes, the singleton pattern is used here. The method to get the Logger object is getLogger().
Note: the singleton pattern here does not mean there is only one Logger object, but that there is only one root Logger object in the whole system. When a Logger object executes methods such as info() and error(), it actually calls the corresponding info(), error(), and other methods of the root Logger object.
We can create multiple Logger objects, but it is the root Logger object that actually outputs the log. Each Logger object can be given a name. If you set logger = logging.getLogger(__name__), __name__ is a special built-in Python variable representing the name of the current module (__main__ by default). The recommended naming for Logger objects is a namespace hierarchy with a dot as separator.
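A small sketch of this singleton behavior (the name "myapp" is arbitrary): getLogger() with the same name always returns the same object, and with no name it returns the root Logger.

```python
import logging

a = logging.getLogger("myapp")
b = logging.getLogger("myapp")
print(a is b)                    # True: one Logger object per name
print(logging.getLogger().name)  # 'root': no argument returns the root Logger
```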
A Logger object can have multiple Handler objects and Filter objects set on it, and a Handler object can have a Formatter object set on it; the Formatter object determines the concrete output format. The commonly used format variables are shown in the following table; all parameters are listed in the Python (3.7) official documentation:
Variable | Format | Description |
---|---|---|
asctime | %(asctime)s | Time of the log record in human-readable form, by default with milliseconds, e.g. 2018-10-13 23:24:57,832; the datefmt parameter can additionally specify the format of this variable |
name | %(name)s | Name of the logger |
filename | %(filename)s | File name without the path |
pathname | %(pathname)s | File name with the full path |
funcName | %(funcName)s | Name of the function containing the log call |
levelname | %(levelname)s | Level name of the log |
message | %(message)s | The log message itself |
lineno | %(lineno)d | Line number of the log call |
process | %(process)d | Current process ID |
processName | %(processName)s | Current process name |
thread | %(thread)d | Current thread ID |
threadName | %(threadName)s | Current thread name |
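To see these variables in action without configuring any handlers, a Formatter can be applied to a hand-built LogRecord (all field values here are invented for illustration):

```python
import logging

formatter = logging.Formatter("%(name)s %(levelname)s %(lineno)d %(message)s")

# Build a LogRecord by hand; normally the Logger creates these for you
record = logging.LogRecord(
    name="demo", level=logging.ERROR, pathname="demo.py",
    lineno=42, msg="something failed", args=None, exc_info=None)

print(formatter.format(record))  # demo ERROR 42 something failed
```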
Both Logger objects and Handler objects can be assigned a level. The default level of a Logger object is 30, i.e. WARNING, and the default level of a Handler object is 0, i.e. NOTSET. The logging module is designed this way for flexibility: sometimes, for example, we want to output DEBUG level logs in the console while also writing WARNING level logs to a file. This can be achieved with a single Logger object set to the lowest level plus two Handler objects with different levels:
import logging

logger = logging.getLogger("logger")

handler1 = logging.StreamHandler()
handler2 = logging.FileHandler(filename="test.log")

logger.setLevel(logging.DEBUG)
handler1.setLevel(logging.WARNING)
handler2.setLevel(logging.DEBUG)

formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
handler1.setFormatter(formatter)
handler2.setFormatter(formatter)

logger.addHandler(handler1)
logger.addHandler(handler2)

# 10, 30, 10 respectively
# print(logger.level)
# print(handler1.level)
# print(handler2.level)

logger.debug('This is a customer debug message')
logger.info('This is an customer info message')
logger.warning('This is a customer warning message')
logger.error('This is an customer error message')
logger.critical('This is a customer critical message')
The console output results in:
2018-10-13 23:24:57,832 logger WARNING This is a customer warning message
2018-10-13 23:24:57,832 logger ERROR This is an customer error message
2018-10-13 23:24:57,832 logger CRITICAL This is a customer critical message
The output in the file reads:
2018-10-13 23:44:59,817 logger DEBUG This is a customer debug message
2018-10-13 23:44:59,817 logger INFO This is an customer info message
2018-10-13 23:44:59,817 logger WARNING This is a customer warning message
2018-10-13 23:44:59,817 logger ERROR This is an customer error message
2018-10-13 23:44:59,817 logger CRITICAL This is a customer critical message
Once you have created a customized Logger object, do not also use the module-level log output functions in logging, which use the default configuration of the root Logger object; otherwise the log messages will be output twice.
import logging

logger = logging.getLogger("logger")

handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)

logger.debug('This is a customer debug message')
logging.info('This is an customer info message')  # module-level call, configures the root Logger
logger.warning('This is a customer warning message')
logger.error('This is an customer error message')
logger.critical('This is a customer critical message')
The output is as follows (you can see that the log message was output twice):
2018-10-13 22:21:35,873 logger WARNING This is a customer warning message
WARNING:logger:This is a customer warning message
2018-10-13 22:21:35,873 logger ERROR This is an customer error message
ERROR:logger:This is an customer error message
2018-10-13 22:21:35,873 logger CRITICAL This is a customer critical message
CRITICAL:logger:This is a customer critical message
Note: when a Python file that produces log output is imported (e.g. via import), the logs in the imported file are output once their level is greater than or equal to the currently configured level.
6. Logger Configuration
From the examples above we know the configuration needed to create a Logger object. There the configuration was hardcoded in the program, but it can also be obtained from a dictionary object or from a configuration file. Open the logging.config Python file and you can see the configuration parsing and conversion functions there.
Get configuration information from the dictionary:
import logging.config

config = {
    'version': 1,
    'formatters': {
        'simple': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        },
        # Other formatters
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'simple'
        },
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'test.log',  # placeholder file name
            'level': 'DEBUG',
            'formatter': 'simple'
        },
        # Other handlers
    },
    'loggers': {
        'StreamLogger': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
        'FileLogger': {
            # Both a console Handler and a file Handler
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
        },
        # Other loggers
    }
}

logging.config.dictConfig(config)
StreamLogger = logging.getLogger("StreamLogger")
FileLogger = logging.getLogger("FileLogger")
# Log output omitted
Get configuration information from the configuration file:
Common configuration files come in ini format, yaml format, JSON format, or can even be fetched over the network; as long as there is a corresponding parser for the configuration, it can be used. Below, only the ini-format and yaml-format configurations are shown.
The ini configuration file:
[loggers]
keys=root,sampleLogger

[handlers]
keys=consoleHandler

[formatters]
keys=sampleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_sampleLogger]
level=DEBUG
handlers=consoleHandler
qualname=sampleLogger
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=sampleFormatter
args=(sys.stdout,)

[formatter_sampleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
The code that loads it:
import logging.config

logging.config.fileConfig(fname='test.ini', disable_existing_loggers=False)
logger = logging.getLogger("sampleLogger")
# Log output omitted
The yaml configuration file:
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
loggers:
  simpleExample:
    handlers: [console]
    propagate: no
root:
  level: DEBUG
  handlers: [console]
The code that loads it:
import logging.config
# The PyYAML library needs to be installed
import yaml

with open('test.yaml', 'r') as f:
    config = yaml.safe_load(f.read())

logging.config.dictConfig(config)
logger = logging.getLogger("sampleLogger")
# Log output omitted
7. Problems in practice
1. Garbled Chinese characters
The log output in the examples above is all in English, so you will not notice until you try it that logs written to a file can contain garbled Chinese characters. How do we solve this problem? When creating a FileHandler object you can set the file encoding; if the file encoding is set to "utf-8" (utf-8 and utf8 are equivalent), the garbled-Chinese problem is solved. One way is to customize the Logger object, which requires writing a lot of configuration; the other is to use the default basicConfig() configuration and pass in a handlers list whose handler has the file encoding set. Many methods found online do not work; the key reference code is as follows:
# Custom Logger configuration
handler = logging.FileHandler(filename="test.log", encoding="utf-8")

# Default Logger configuration via basicConfig()
logging.basicConfig(handlers=[logging.FileHandler("test.log", encoding="utf-8")], level=logging.DEBUG)
2. Temporarily disabling log output
Sometimes we want to pause logging temporarily and turn it back on later. If we printed information with print(), we would have to comment out every print() call; with logging we get the "magic" of switching logging on and off in one place. One way, when using the default configuration, is to pass a level to the logging.disable() method, which disables log output at and below the given level. The other way, when customizing the Logger, is to set the Logger object's disabled attribute to True; its default value is False, meaning not disabled.
# Disable logs at and below the given level (the level shown is illustrative)
logging.disable(logging.CRITICAL)

# Or, for a customized Logger object:
logger.disabled = True
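A quick sketch of the first switch (the logger name is invented): logging.disable() affects every logger globally, and passing NOTSET lifts it again.

```python
import logging

logger = logging.getLogger("disable_demo")
logger.setLevel(logging.DEBUG)

logging.disable(logging.WARNING)  # globally disable WARNING and below
print(logger.isEnabledFor(logging.WARNING))  # False
print(logger.isEnabledFor(logging.ERROR))    # True: ERROR is above the cutoff

logging.disable(logging.NOTSET)  # lift the global disable
print(logger.isEnabledFor(logging.WARNING))  # True again
```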
3. Splitting log files by time or by size
If logs are saved in a file, then over time, or when there are many logs, a single log file becomes very large, which is bad for both backup and viewing. Can the log file be split by time or by size? The answer is yes, and it is easy: logging takes this into account. The logging.handlers file provides the TimedRotatingFileHandler and RotatingFileHandler classes, which split by time and by size, respectively. Open this handlers file and you can see other Handler classes with further capabilities, all inheriting from the base class BaseRotatingHandler.
# Constructor of the TimedRotatingFileHandler class
def __init__(self, filename, when='h', interval=1, backupCount=0,
             encoding=None, delay=False, utc=False, atTime=None):

# Constructor of the RotatingFileHandler class
def __init__(self, filename, mode='a', maxBytes=0, backupCount=0,
             encoding=None, delay=False):
The sample code is as follows:
# Split the log file every 1000 bytes, keeping 3 backup files
file_handler = logging.handlers.RotatingFileHandler(
    "test.log", mode="w", maxBytes=1000, backupCount=3, encoding="utf-8")

# Split the log file every hour; interval is the time interval; keep 10 backup files
handler2 = logging.handlers.TimedRotatingFileHandler(
    "test.log", when="H", interval=1, backupCount=10)
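A runnable sketch of the size-based variant (the file name and limits are illustrative): attach the handler to a logger, write enough to force rollovers, and numbered backups appear next to the live file.

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "rotate_demo.log")

# Roll the file over once it would exceed 200 bytes, keeping 2 backups
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=200, backupCount=2, encoding="utf-8")
logger = logging.getLogger("rotate_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for i in range(30):
    logger.debug("message number %d", i)  # well past 200 bytes in total
handler.close()

# The directory now holds the live file plus numbered backups (.1, .2)
print(sorted(os.listdir(log_dir)))
```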
Although the Python official site says the logging library is thread-safe, there are still issues worth considering in multi-process, multi-thread, and multi-process multi-thread environments, such as how to split logs into different log files per process (or thread), i.e. one process (or thread) corresponding to one file.
In summary: The Python logging library is designed to be very flexible. If you have special needs, you can improve on the basic logging library and create new Handler classes to solve problems in real development.
This concludes this 2022 summary of the latest on the Python logging library. For more content about Python's logging library, please search my previous articles or continue browsing the related articles; I hope you will continue to support me!