
Explaining the use of the python logging module

1 Introduction to the logging module

The logging module is a standard module built into Python, mainly used for outputting runtime logs. You can set the level of the output logs, the path where logs are saved, log file rotation, and so on. Compared with print, it has the following advantages:

1. You can set different logging levels so that the release version outputs only important information, without displaying a large amount of debugging information;
2. print sends everything to standard output, which seriously interferes with viewing other data there, whereas logging lets the developer decide where the information goes and how it is output.

There are four main components in the logging framework (a short sketch combining all four follows this list):

  1. Loggers: the interfaces that application code calls directly
  2. Handlers: send log records to the appropriate destination
  3. Filters: provide finer-grained control over which log records are output
  4. Formatters: specify the layout of the final log output
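
As a rough sketch of how these four components fit together, the snippet below attaches a Formatter and a Filter to a StreamHandler and adds the handler to a Logger; the logger name 'myapp.db' and the message are arbitrary placeholders for this example:

import logging

# Logger: the object application code calls
logger = logging.getLogger("myapp.db")
logger.setLevel(logging.DEBUG)

# Handler: decides where records go (here, the console)
handler = logging.StreamHandler()

# Formatter: controls the layout of each record
handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))

# Filter: this handler only accepts records from loggers under "myapp"
handler.addFilter(logging.Filter("myapp"))

logger.addHandler(handler)
logger.info("all four components in use")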

2 Logging Module Usage

2.1 Basic use

Configure logging's basic settings, then output logs to the console:

import logging
logging.basicConfig(level = logging.INFO, format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

("Start print log")
("Do something")
("Something maybe fail.")
("Finish")

At runtime, the console output is:

2016-10-09 19:11:19,434 - __main__ - INFO - Start print log
2016-10-09 19:11:19,434 - __main__ - WARNING - Something maybe fail.
2016-10-09 19:11:19,434 - __main__ - INFO - Finish

There are several message levels to choose from in logging, such as debug, info, warning, error, and critical. By assigning different levels to a logger or a handler, developers can output only error messages to a specific log file, or record only debug messages while debugging.

For example, let's change the level of the logger to DEBUG and observe the output:

logging.basicConfig(level = logging.DEBUG, format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s')

As you can see, the console output now includes the debug message:

2016-10-09 19:12:08,289 - __main__ - INFO - Start print log
2016-10-09 19:12:08,289 - __main__ - DEBUG - Do something
2016-10-09 19:12:08,289 - __main__ - WARNING - Something maybe fail.
2016-10-09 19:12:08,289 - __main__ - INFO - Finish

The parameters of the logging.basicConfig function are as follows:

filename: Specifies the log file name;

filemode: specifies the mode in which the log file is opened, 'w' or 'a', with the same meaning as in the open function;

format: specifies the format and content of the output; the format string can include a lot of useful information, listed below.

Parameter: Role

%(levelno)s: the numeric value of the log level
%(levelname)s: the name of the log level
%(pathname)s: the path of the currently executing program, i.e. sys.argv[0]
%(filename)s: the file name of the currently executing program
%(funcName)s: the function from which the log call was made
%(lineno)d: the line number at which the log call was made
%(asctime)s: the time at which the log was emitted
%(thread)d: the thread ID
%(threadName)s: the thread name
%(process)d: the process ID
%(message)s: the log message

datefmt: specifies the time format, with the same syntax as time.strftime();

level: sets the logging level; the default is logging.WARNING;

stream: specifies the output stream for the logs; output can go to sys.stderr, sys.stdout, or a file. The default is sys.stderr. When both stream and filename are given, stream is ignored.
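
The following is a minimal sketch combining several of these parameters; the file name app.log, the format string, and the date format are arbitrary choices for illustration:

import logging

# Write DEBUG and above to app.log, overwriting the file on each run,
# with a custom record layout and date format.
logging.basicConfig(filename = "app.log",
                    filemode = "w",
                    format = '%(asctime)s %(levelname)s %(filename)s:%(lineno)d %(message)s',
                    datefmt = '%Y-%m-%d %H:%M:%S',
                    level = logging.DEBUG)

logging.debug("written to app.log with the custom format")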

2.2 Writing logs to files

2.2.1 Writing Logs to Files

Set up the logger, create a FileHandler, set the format of the output messages, and add the handler to the logger; the logs will then be written to the specified file:

import logging
logger = logging.getLogger(__name__)
logger.setLevel(level = logging.INFO)
handler = logging.FileHandler("log.txt")
handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)

("Start print log")
("Do something")
("Something maybe fail.")
("Finish")

The log data written to log.txt:

2016-10-09 19:01:13,263 - __main__ - INFO - Start print log
2016-10-09 19:01:13,263 - __main__ - WARNING - Something maybe fail.
2016-10-09 19:01:13,263 - __main__ - INFO - Finish

2.2.2 Outputting logs to both screen and log file

Add a StreamHandler to the logger to output the logs to the screen as well:

import logging
logger = logging.getLogger(__name__)
logger.setLevel(level = logging.INFO)
handler = logging.FileHandler("log.txt")
handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

console = logging.StreamHandler()
console.setLevel(logging.INFO)

logger.addHandler(handler)
logger.addHandler(console)

("Start print log")
("Do something")
("Something maybe fail.")
("Finish")

The following can be seen both in log.txt and on the console:

2016-10-09 19:20:46,553 - __main__ - INFO - Start print log
2016-10-09 19:20:46,553 - __main__ - WARNING - Something maybe fail.
2016-10-09 19:20:46,553 - __main__ - INFO - Finish

As can be seen, logging uses one main object to handle logging, and additional handlers are attached through addHandler. The main handlers provided by logging are the following:

Handler name: module; role

StreamHandler: logging; outputs logs to a stream, which can be sys.stderr, sys.stdout, or a file
FileHandler: logging; outputs logs to a file
BaseRotatingHandler: logging.handlers; base class for handlers that rotate log files
RotatingFileHandler: logging.handlers; rotates log files, supporting a maximum file size and a maximum number of backup files
TimedRotatingFileHandler: logging.handlers; rotates log files at timed intervals
SocketHandler: logging.handlers; sends logs remotely to a TCP/IP socket
DatagramHandler: logging.handlers; sends logs remotely to a UDP socket
SMTPHandler: logging.handlers; sends logs remotely to an email address
SysLogHandler: logging.handlers; outputs logs to syslog
NTEventLogHandler: logging.handlers; sends logs remotely to the Windows NT/2000/XP event log
MemoryHandler: logging.handlers; buffers logs in a specified in-memory buffer
HTTPHandler: logging.handlers; sends logs remotely to an HTTP server via "GET" or "POST"
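
For instance, TimedRotatingFileHandler from the table rotates the log file on a time schedule instead of by size; the sketch below (the file name timed.log and the midnight schedule are arbitrary choices) keeps the last 7 daily files:

import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger(__name__)
logger.setLevel(level = logging.INFO)

# Rotate timed.log at midnight and keep at most 7 backup files.
tHandler = TimedRotatingFileHandler("timed.log", when = "midnight", backupCount = 7)
tHandler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger.addHandler(tHandler)

logger.info("this record goes to timed.log")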

2.2.3 Log rotation

Log rotation can be implemented with RotatingFileHandler:

import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger(__name__)
logger.setLevel(level = logging.INFO)
# Define a RotatingFileHandler that keeps at most 3 backup files, each at most 1 KB.
rHandler = RotatingFileHandler("log.txt", maxBytes = 1*1024, backupCount = 3)
rHandler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
rHandler.setFormatter(formatter)

console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(formatter)

logger.addHandler(rHandler)
logger.addHandler(console)

("Start print log")
("Do something")
("Something maybe fail.")
("Finish")

You can see the backup log files in the project directory:

2016/10/09  19:36               732 log.txt
2016/10/09  19:36               967 log.txt.1
2016/10/09  19:36               985 log.txt.2
2016/10/09  19:36               976 log.txt.3

2.3 Setting the level of the message

Different logging levels can be set to control the log output:

Log level: usage

FATAL: fatal error (in Python's logging module this is an alias for CRITICAL)
CRITICAL: something particularly bad happened, such as running out of memory or disk space; rarely used in practice
ERROR: an error occurred, such as a failed IO operation or a connection problem
WARNING: something important but not an error happened, such as a user entering an incorrect password
INFO: routine events, such as handling a request or a state change
DEBUG: used during debugging, e.g. to record the intermediate state of each loop iteration in an algorithm
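
Levels can be set on the logger and on each handler independently, and a record must pass both checks before it is emitted. A short sketch of this (the file name errors.log is a placeholder): everything from DEBUG upward goes to the console, while only ERROR and above reaches the file:

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)       # the logger lets everything through

console = logging.StreamHandler()
console.setLevel(logging.DEBUG)      # the console shows all records

errHandler = logging.FileHandler("errors.log")
errHandler.setLevel(logging.ERROR)   # the file only receives ERROR and CRITICAL

logger.addHandler(console)
logger.addHandler(errHandler)

logger.debug("console only")
logger.error("console and errors.log")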

2.4 Traceback Capture

Python's traceback module is used for tracking exception tracebacks, and the traceback can be recorded with logging.

The code:

import logging
logger = logging.getLogger(__name__)
logger.setLevel(level = logging.INFO)
handler = logging.FileHandler("log.txt")
handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

console = logging.StreamHandler()
console.setLevel(logging.INFO)

logger.addHandler(handler)
logger.addHandler(console)

("Start print log")
("Do something")
("Something maybe fail.")
try:
  open("","rb")
except (SystemExit,KeyboardInterrupt):
  raise
except Exception:
  ("Faild to open  from ",exc_info = True)

("Finish")

The output in the console and log file:

Start print log
Something maybe fail.
Failed to open the file
Traceback (most recent call last):
  File "G:\zhb7627\Code\Eclipse WorkSpace\PythonTest\", line 23, in <module>
    open("","rb")
IOError: [Errno 2] No such file or directory: ''
Finish

It is also possible to use logger.exception(msg, *args), which is equivalent to logger.error(msg, exc_info = True, *args).

That is, replace

("Faild to open  from ",exc_info = True)

with:

("Failed to open  from ")

The output in the console and log file is then:

Start print log
Something maybe fail.
Failed to open the file
Traceback (most recent call last):
  File "G:\zhb7627\Code\Eclipse WorkSpace\PythonTest\", line 23, in <module>
    open("","rb")
IOError: [Errno 2] No such file or directory: ''
Finish

2.5 Using logging with multiple modules

The main module:

import logging
import subModule
logger = ("mainModule")
(level = )
handler = ("")
()
formatter = ('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
(formatter)

console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(formatter)

logger.addHandler(handler)
logger.addHandler(console)


("creating an instance of ")
a = ()
("calling ")
()
("done with ")
("calling subModule.some_function")
subModule.som_function()
("done with subModule.some_function")

The submodule, subModule.py:

import logging

module_logger = ("")
class SubModuleClass(object):
  def __init__(self):
     = ("")
    ("creating an instance in SubModuleClass")
  def doSomething(self):
    ("do something in SubModule")
    a = []
    (1)
    ("list a = " + str(a))
    ("finish something in SubModuleClass")

def some_function():
  module_logger.info("call function some_function")

After execution, the output in the console and log file:

2016-10-09 20:25:42,276 - mainModule - INFO - creating an instance of subModule.SubModuleClass
2016-10-09 20:25:42,279 - mainModule.sub.module - INFO - creating an instance in SubModuleClass
2016-10-09 20:25:42,279 - mainModule - INFO - calling subModule.SubModuleClass.doSomething
2016-10-09 20:25:42,279 - mainModule.sub.module - INFO - do something in SubModule
2016-10-09 20:25:42,279 - mainModule.sub.module - INFO - finish something in SubModuleClass
2016-10-09 20:25:42,279 - mainModule - INFO - done with subModule.SubModuleClass.doSomething
2016-10-09 20:25:42,279 - mainModule - INFO - calling subModule.some_function
2016-10-09 20:25:42,279 - mainModule.sub - INFO - call function some_function
2016-10-09 20:25:42,279 - mainModule - INFO - done with subModule.some_function

First define and configure the logger 'mainModule' in the main module. Anywhere else inside the same interpreter process, getLogger('mainModule') returns the same logger object, so it can be used directly without being reconfigured. Child loggers defined under it share the parent logger's definition and configuration. The parent-child relationship is determined by the names: any logger whose name starts with 'mainModule' is its child, for example 'mainModule.sub'.

When developing a real application, you can first write the logging configuration for the application in a logging configuration file and create a root logger for the application, such as 'PythonAPP'. Load the configuration with fileConfig in the main function, and then, in the application's other modules, use child loggers of that root logger for logging, without having to define and configure a logger for each module separately.
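
As a sketch of that approach, a hypothetical configuration file logging.conf for a root application logger named 'PythonAPP' could look like this (the handler, formatter, and logger names are placeholders):

[loggers]
keys=root,PythonAPP

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=WARNING
handlers=consoleHandler

[logger_PythonAPP]
level=INFO
handlers=consoleHandler
qualname=PythonAPP
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s

The main function then calls logging.config.fileConfig("logging.conf") once, and any module can obtain a child logger whose name begins with 'PythonAPP.' via logging.getLogger and use it without further configuration.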

3 Configuring the logging module with a JSON or YAML file

Although logging can be configured in Python code, that is not flexible enough; a better way is to use a configuration file. In Python 2.7 and later, the logging configuration can be loaded from a dictionary, which means the configuration can be loaded from a JSON or YAML file.

3.1 Configuration via JSON file

The JSON configuration file (the file names info.log and errors.log used below are placeholders):

{
  "version":1,
  "disable_existing_loggers":false,
  "formatters":{
    "simple":{
      "format":"%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    }
  },
  "handlers":{
    "console":{
      "class":"",
      "level":"DEBUG",
      "formatter":"simple",
      "stream":"ext://"
    },
    "info_file_handler":{
      "class":"",
      "level":"INFO",
      "formatter":"simple",
      "filename":"",
      "maxBytes":"10485760",
      "backupCount":20,
      "encoding":"utf8"
    },
    "error_file_handler":{
      "class":"",
      "level":"ERROR",
      "formatter":"simple",
      "filename":"",
      "maxBytes":10485760,
      "backupCount":20,
      "encoding":"utf8"
    }
  },
  "loggers":{
    "my_module":{
      "level":"ERROR",
      "handlers":["info_file_handler"],
      "propagate":"no"
    }
  },
  "root":{
    "level":"INFO",
    "handlers":["console","info_file_handler","error_file_handler"]
  }
}

Load the configuration file with json, then configure logging with dictConfig:

import json
import logging
import logging.config
import os

def setup_logging(default_path = "logging.json", default_level = logging.INFO, env_key = "LOG_CFG"):
  path = default_path
  value = os.getenv(env_key, None)
  if value:
    path = value
  if os.path.exists(path):
    with open(path, "r") as f:
      config = json.load(f)
      logging.config.dictConfig(config)
  else:
    logging.basicConfig(level = default_level)

def func():
  ("start func")

  ("exec func")

  ("end func")

if __name__ == "__main__":
  setup_logging(default_path = "logging.json")
  func()

3.2 Configuration via YAML file

Configuration via a YAML file, which looks more concise and clearer than JSON (again, the file names below are placeholders):

version: 1
disable_existing_loggers: False
formatters:
    simple:
      format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
handlers:
  console:
      class: logging.StreamHandler
      level: DEBUG
      formatter: simple
      stream: ext://sys.stdout
  info_file_handler:
      class: logging.handlers.RotatingFileHandler
      level: INFO
      formatter: simple
      filename: info.log
      maxBytes: 10485760
      backupCount: 20
      encoding: utf8
  error_file_handler:
      class: logging.handlers.RotatingFileHandler
      level: ERROR
      formatter: simple
      filename: errors.log
      maxBytes: 10485760
      backupCount: 20
      encoding: utf8
loggers:
  my_module:
      level: ERROR
      handlers: [info_file_handler]
      propagate: no
root:
  level: INFO
  handlers: [console,info_file_handler,error_file_handler]

Load the configuration file with yaml, then configure logging with dictConfig:

import yaml
import logging
import logging.config
import os

def setup_logging(default_path = "logging.yaml", default_level = logging.INFO, env_key = "LOG_CFG"):
  path = default_path
  value = os.getenv(env_key, None)
  if value:
    path = value
  if os.path.exists(path):
    with open(path, "r") as f:
      config = yaml.safe_load(f)
      logging.config.dictConfig(config)
  else:
    logging.basicConfig(level = default_level)

def func():
  ("start func")

  ("exec func")

  ("end func")

if __name__ == "__main__":
  setup_logging(default_path = "logging.yaml")
  func()

This is the whole content of this article.