<center>

# The Ins and Outs of Logging in Python, Part 1

*Written by Cesar Uribe. Originally published 2022-01-21 on the [Monadical blog](https://monadical.com/blog.html).*

</center>

Using logging modules in Python has become essential to my work as a developer. It’s a great example of working smarter, not harder, because it takes so many tasks off my plate. This blog post will serve as an introduction to logging in Python. I want to share what I’ve learned about logging in Python by explaining its central ideas and demonstrating how I use logging with some simple examples. In Part Two, I will apply this module to a real case.

First off, let me tell you why logging modules are worth learning about:

1. **History log**: Logging’s event-based structure[^one] essentially leaves behind a trail of breadcrumbs as you work, so if a problem arises, you can easily trace it back to its source and determine how the error came to be. This is helpful for analyzing how the program is being used, and it makes it easier to document code so that other programmers (and your future self!) can get some clarity on your work.
2. **Failure checks**: Logging helps detect scenarios where code could fail, before it fails. It is difficult for developers to imagine every possible scenario in which this might happen, so logging can catch blind spots that they might otherwise miss.
3. **Output channels**: Logging uses different output channels, allowing users to send different messages to different destinations. You can filter messages by category or level and send them to the appropriate file or output. This is very useful, since you probably want INFO messages to be displayed in the system and ERROR or CRITICAL messages to trigger a notification email.
4. **Structured measuring**: Logging behavior, structure, and output channels can be configured according to the execution context, either with configuration files or directly within the code. This not only creates a more organized system for displaying messages, but also prepares your app to use a monitoring system or log catcher.

<center>
<img src="https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b61b.png">
<figcaption> <small> Taken from <a href="https://www.datadoghq.com/product/log-management/"> https://www.datadoghq.com/product/log-management/ </a> </small> </figcaption>
</center>

[^one]: If you want to learn more about how we built a poker platform using event-driven architecture, check out https://monadical.com/posts/event-driven-architecture-1.html.

## How the logging module works

To use the [logging module](https://docs.python.org/3/library/logging.html) in Python, it’s important to understand some key concepts. The logging module has five main components:

* **Logger**: The object a user interacts with.
* **Handler**: Manages the outputs depending on the configuration.
* **Filter**: Allows for a finer-grained selection of the logs.
* **Formatter**: Converts a LogRecord instance into its final string representation.
* **Log record**: An object created every time a logger is invoked; it packages all the information the other components need to process the event.

You can configure each of these components to suit your needs, making the logger quite versatile. It’s not necessary to use them all, but the ones that are used must communicate with each other, because they are components triggered by events and need to be aware of events occurring across the board.
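Before looking at each component on its own, here is a minimal sketch that wires all five of them together (the logger name, format string, and filter are purely illustrative); the rest of this post unpacks each piece:

```python
import logging

# Logger: the object you interact with
logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)

# Handler: decides where records go (here, the console)
handler = logging.StreamHandler()

# Formatter: turns each LogRecord into its string representation
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))

# Filter: fine-grained selection (here, drop DEBUG records)
handler.addFilter(lambda record: record.levelno > logging.DEBUG)

logger.addHandler(handler)

# A LogRecord is created internally on every call
logger.info("all five components at work")   # shown
logger.debug("dropped by the filter")        # filtered out
```

Since Python 3.2, any callable that accepts a record can act as a filter, which is what the lambda above relies on; a more explicit `Filter` subclass appears near the end of this post.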
If you don’t use one of them, the logging system resorts to its default behavior.

The following figure illustrates what happens when a log is issued. Notice the roles played by the five main components.

<center>
<img src="https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b61e.png">
</center>
<br>

When a call to logging is made, the logger checks whether the log level is allowed, according to its configuration. If it is allowed, it creates a LogRecord instance. If a filter exists, the LogRecord passes through it to check whether it complies with the filter. Once this is done, the LogRecord is passed to the handler, which validates the log level again, applies either the format set in the configuration or the default one, and sends the log message to its respective output.

You can also see in the diagram above that the logger checks whether propagation applies (i.e., whether there is a parent logger). If so, processing moves from the current logger (the child) to the parent logger. This is useful for logging in modular Python applications, where you can have several loggers arranged like a hierarchical tree.

### Logger and logging levels

The logging level allows users to set the type of message displayed on the screen, so that they can customize what essential information is available at a glance. Displayed messages indicate the type of information that we want to show[^two]. There are several logging levels:

| Level name | Numeric value | Purpose |
|------------|---------------|---------|
| CRITICAL   | 50            | Highlights critical problems that need immediate attention. |
| ERROR      | 40            | Shows errors and problems. |
| WARNING    | 30            | Warns that something is not working well. |
| INFO       | 20            | Provides an overview of general information. |
| DEBUG      | 10            | For debugging purposes. |
| NOTSET     | 0             | No level is specified. Messages are propagated to the logger’s parents until an explicit level is found or the root logger is reached. The root logger has WARNING as its default level. |

[^two]: The [loguru library](https://github.com/Delgan/loguru) adds extra levels (TRACE and SUCCESS) to the standard ones.

These levels are hierarchically ordered. If a level is set (for example, WARNING), the levels below it (INFO, DEBUG, etc.) will not be displayed, even though the calls remain in the code. It is possible to create a custom log level, but these levels cover the most typical scenarios.

To use logging, you first need to import the logging module, and begin displaying messages with your chosen logging level:

```python
# code
import logging

logging.info(" -> Info message")
logging.warning(" -> Warning message")
logging.error(" -> Error message")

# output
# WARNING:root: -> Warning message
# ERROR:root: -> Error message
```

From these few lines of code and their respective output, you can see that:

* The logging module allows you to choose the type of message or level name that you want to show: **logging.\<name-level\>(message)**.
* The output shows your message and other relevant info: **the level name, the logger name, and the message: `%(levelname)s:%(name)s:%(message)s`**.
* The INFO log is not shown. This is because the parent logger (known as the `root` logger) has the WARNING level by default.

This last point requires understanding the concept of a logger.
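To see this default for yourself, here is a quick check (a sketch; the values in the comments assume a freshly started, otherwise unconfigured interpreter):

```python
import logging

root = logging.getLogger()               # no name -> the root logger
print(root.getEffectiveLevel())          # 30 in a fresh interpreter
print(logging.getLevelName(root.level))  # 'WARNING'

logging.info("dropped: INFO (20) is below WARNING (30)")
logging.warning("shown")                 # WARNING:root:shown
```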
A logger is an object that provides a communication layer, offering a set of methods that allow the program or application to generate a log. For example:

```python
# CODE
import logging

# Check the logger
print(logging.getLogger())

# Check the logger class
print(logging.getLoggerClass())

# Check parents of RootLogger
print(logging.RootLogger.__bases__)

# OUTPUT
# <RootLogger root (WARNING)>
# <class 'logging.Logger'>
# (logging.Logger,)
```

If you review the output, you can see that the logging module instantiates a logger of type RootLogger by default, which has the WARNING level assigned.

<center>
<img src="https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b620.png" width="40%">
</center>
<br/>

If you want to create a logger instance, you can do it in the following way:

```python
## CODE
import logging

# To create a basic configuration
logging.basicConfig()

# The RootLogger is no longer used to display messages.
# The logger 'my-logger-instance' is used instead.
mylogger = logging.getLogger('my-logger-instance')

# The level changed to INFO
mylogger.setLevel(logging.INFO)

# Messages are displayed starting from the INFO level.
# Logs with the DEBUG level will not be displayed because
# they don't comply with the level set for the logger.
mylogger.debug('Debug message')
mylogger.info("Info message")
mylogger.warning("Warning message")
mylogger.error("Error message")
mylogger.critical("Critical message")

# To show info about the logger
print(mylogger)

# OUTPUT
# INFO:my-logger-instance:Info message
# WARNING:my-logger-instance:Warning message
# ERROR:my-logger-instance:Error message
# CRITICAL:my-logger-instance:Critical message
# <Logger my-logger-instance (INFO)>
```

The logger filters messages by level, emits the event to be processed, and shows the log. Once the event is emitted, the logger is internally in charge of creating the LogRecord object, which contains the event information:

```python
logging.LogRecord(
    name,        # name of the logger
    level,       # level of the logging event
    pathname,    # path of the source file where the event was emitted
    lineno,      # line number in the file where the event was emitted
    msg,         # the message description
    args,        # data to merge into the msg
    exc_info,    # exception information, if any
    func=None,   # name of the function where the event was emitted
    sinfo=None,  # stack info
)
```
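You will rarely build one of these by hand, but the following sketch shows a LogRecord created manually and handed to a logger with `Logger.handle()` (the logger name and message are made up for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("records-demo")

# Normally the logger builds this object for you on every call.
record = logging.LogRecord(
    name="records-demo",
    level=logging.INFO,
    pathname=__file__,
    lineno=1,
    msg="manually built record for %s",
    args=("illustration",),
    exc_info=None,
)

# Hand the record straight to the logger's handlers.
logger.handle(record)
# Expected output: INFO:records-demo:manually built record for illustration
```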
### Configuring logging

The logging module has a default parent logger called `root`, but you can configure loggers according to your needs. Creating your own logger instances results in a more verbose log system. Here’s how to set a logger name:

```python
# CODE
import logging

# To create a basic configuration
logging.basicConfig()

# Get logger instances
api_log = logging.getLogger('api')
database_log = logging.getLogger('database')

# Set levels
database_log.setLevel(logging.DEBUG)
api_log.setLevel(logging.ERROR)

api_log.debug(" -> Debug message")
api_log.info(" -> Info message")
api_log.warning(" -> Warning message")
api_log.error(" -> Error message")
api_log.critical(" -> Critical message")

print("-" * 30, flush=True)

database_log.debug(" -> Debug message")
database_log.info(" -> Info message")
database_log.warning(" -> Warning message")
database_log.error(" -> Error message")
database_log.critical(" -> Critical message")

# OUTPUT
# ERROR:api: -> Error message
# CRITICAL:api: -> Critical message
# ------------------------------
# DEBUG:database: -> Debug message
# INFO:database: -> Info message
# WARNING:database: -> Warning message
# ERROR:database: -> Error message
# CRITICAL:database: -> Critical message
```

In the code above, the loggers were created using the `getLogger` method. You can use this method to assign names or tags to a logger’s messages (for example, `api`, `database`). The messages displayed by these loggers are shown with their respective tag. Notice that:

* I used the `api_log` and `database_log` loggers instead of the default root logger.
* You can create several loggers in a program. In this case, two loggers were created, each with its own name: `api_log` is named `api`, and `database_log` is named `database`. If you create two loggers with the same name, the library creates only one logger; the second variable simply points to the same object in memory (see the quick check after this list).
* The level was set with the `setLevel` method. You can set a level for each logger, as the output shows: `api_log` only shows ERROR and CRITICAL messages, while `database_log` shows all messages.
* Every logger displays its name (next to the level) in the message printed to the terminal. This shows which logger is communicating the message.
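To verify that point about duplicate names, here is a quick sketch (the logger name is arbitrary):

```python
import logging

a = logging.getLogger("api")
b = logging.getLogger("api")

# getLogger returns the same object for the same name,
# so configuring one configures "both".
print(a is b)                      # True
a.setLevel(logging.ERROR)
print(b.level == logging.ERROR)    # True
```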
### Formatter: setting a format for the message

If you want log messages to show additional information related to the context of the message, such as where, when, and why, you can set a format for them. Standardizing logs also makes debugging code much easier!

The logging module offers several options for formatting messages and capturing context information about where the logger was executed (see [official doc - log record attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes)). You can also use the `basicConfig` method[^three] to configure several items at a time:

```python
# CODE
import logging

# Set the message display format:
# show the time, the module name where the logger was invoked,
# the line number, the level, and the message
logging.basicConfig(
    level=logging.DEBUG,
    format="[ %(asctime)s ] - [%(name)s-%(lineno)d] - [%(levelname)s] %(message)s"
)

log = logging.getLogger("mainapp.module")

log.debug(" Debug message")
log.info(" Info message")
log.warning(" Warning message")
log.error(" Error message")
log.critical(" Critical message")
log.log(logging.INFO, "General function to log")

# OUTPUT
# [ 2021-05-30 21:26:06,010 ] - [mainapp.module-12] - [DEBUG] Debug message
# [ 2021-05-30 21:26:06,012 ] - [mainapp.module-13] - [INFO] Info message
# [ 2021-05-30 21:26:06,014 ] - [mainapp.module-14] - [WARNING] Warning message
# [ 2021-05-30 21:26:06,015 ] - [mainapp.module-15] - [ERROR] Error message
# [ 2021-05-30 21:26:06,017 ] - [mainapp.module-16] - [CRITICAL] Critical message
# [ 2021-05-30 21:26:06,019 ] - [mainapp.module-17] - [INFO] General function to log
```

[^three]: If you want to see the `basicConfig` documentation in more detail, you can run `print(logging.basicConfig.__doc__)`, or have a look at https://docs.python.org/3/library/logging.html#logging.basicConfig.

Note that the name assigned to the logger has a dotted structure that indicates parent and child (`mainapp.module`). This is useful when code has a modular structure, because it creates a hierarchy among the loggers: in a modular app, every submodule can have its own logger. This makes it possible to trace the origin of a log message, improving traceability and providing more detailed output.
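As an illustration of that hierarchy, here is a small sketch (reusing the `mainapp` / `mainapp.module` names from above) showing that the dotted name really does create a parent-child relationship, and that child records propagate up to the parent's handlers:

```python
import logging

logging.basicConfig(format="[%(name)s] %(message)s")

parent = logging.getLogger("mainapp")
child = logging.getLogger("mainapp.module")

print(child.parent is parent)   # True: the dot creates the hierarchy
print(child.propagate)          # True by default

# The child has no handlers of its own, so the record travels up
# the tree and is emitted by the root handler set up by basicConfig.
child.warning("emitted by the child, handled further up the tree")
# Expected output: [mainapp.module] emitted by the child, handled further up the tree
```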
### Handler: setting the handler

You can manage different outputs according to their level by using handlers. A handler receives a LogRecord and, depending on its configuration, sends it to the corresponding output. For example, you can configure handlers to show the info logs on screen and send the error logs to a file.

The logging module offers a number of handlers (see [official doc - logging handlers](https://docs.python.org/3/library/logging.handlers.html)), but the most commonly used ones are **StreamHandler** and **FileHandler**. As an example, I will set the info logs to be printed to the terminal, and the error logs to be automatically stored in a file. Let’s walk through a few ways to set this configuration.

#### 1. Use logger’s functions

So far I have configured logging with `basicConfig`, but that is typically only used in small scripts. If you need an intermediate or more robust configuration, you can use the methods of the logger instance. For example:

```python
# CODE
import logging

# Set custom logger
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)

# Set handlers
log_handler = logging.StreamHandler()
error_handler = logging.FileHandler("errors.log")

# Set a level for each handler
log_handler.setLevel(logging.DEBUG)
error_handler.setLevel(logging.ERROR)

# Establish the desired string formats
log_format = logging.Formatter('[ %(asctime)s ][ %(levelname)s ] - %(message)s')
error_format = logging.Formatter('[ %(asctime)s ][ %(name)s-%(lineno)d ][ %(levelname)s ] %(message)s')

# Set the format for each handler
log_handler.setFormatter(log_format)
error_handler.setFormatter(error_format)

# Add each handler to the logger
log.addHandler(log_handler)
log.addHandler(error_handler)

log.debug(" Debug message")
log.info(" Info message")
log.warning(" Warning message")
log.error(" Error message")
log.critical(" Critical message")

# OUTPUT
# [ 2021-05-30 23:15:04,800 ][ INFO ] - Info message
# [ 2021-05-30 23:15:04,805 ][ WARNING ] - Warning message
# [ 2021-05-30 23:15:04,810 ][ ERROR ] - Error message
# [ 2021-05-30 23:15:04,814 ][ CRITICAL ] - Critical message

# FILE'S CONTENT - errors.log
# [ 2021-05-30 23:15:04,810 ][ __main__-31 ][ ERROR ] Error message
# [ 2021-05-30 23:15:04,814 ][ __main__-32 ][ CRITICAL ] Critical message
```

As you can see, only one logger was defined, but two handlers were assigned to it, and each handler gets its own format. This is a good option when you want a standard format for info messages but more detailed error messages. The messages handled by StreamHandler are shown on screen with one format, while those handled by FileHandler are written to a file with another format. All messages at INFO level and above are shown on screen, and the error and critical messages are additionally stored in the file.

Also note that the level set for StreamHandler is DEBUG, yet no debug logs are shown. This happens because the logger checks the level before sending the LogRecord to its handlers: since the logger’s level is set to INFO, DEBUG records never reach the handlers.
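To make that last point concrete, here is a small, self-contained sketch (logger and handler names are illustrative) showing that the check happens at the logger before any handler is consulted:

```python
import logging

log = logging.getLogger("levels-demo")
log.setLevel(logging.INFO)

handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)   # the handler itself would accept DEBUG
log.addHandler(handler)

# The logger's own level is checked first, so DEBUG records are
# discarded before they ever reach the handler.
print(log.isEnabledFor(logging.DEBUG))   # False
print(handler.level == logging.DEBUG)    # True

log.debug("never emitted")
log.info("emitted")
```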
#### 2. Using dictConfig

This method for configuring the logging system is similar to using a configuration file, except that you set the configuration parameters in a Python dictionary. For example:

```python
# CODE
import logging
import logging.config

config = {
    'disable_existing_loggers': False,
    'version': 1,
    'formatters': {
        'console_logs_fmt': {
            'format': '[ %(asctime)s ][ %(name)s ][ %(levelname)s ] - %(message)s'
        },
        'error_logs_fmt': {
            'format': '[ %(asctime)s ][ %(name)s-%(lineno)d ][ %(levelname)s ] %(message)s'
        }
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'formatter': 'console_logs_fmt',
            'class': 'logging.StreamHandler'
        },
        'errors': {
            'level': 'ERROR',
            'formatter': 'error_logs_fmt',
            'class': 'logging.FileHandler',
            'filename': './errors-3.log'
        },
        'other': {
            'level': 'DEBUG',
            'formatter': 'console_logs_fmt',
            'class': 'logging.StreamHandler'
        }
    },
    'loggers': {
        '': {
            'handlers': ['errors'],
            'level': 'INFO'
        },
        'other': {
            'handlers': ['other'],
            'level': 'DEBUG'
        }
    }
}

logging.config.dictConfig(config)

logger = logging.getLogger(__name__)
logger.debug(" Debug message")
logger.info(" Info message")
logger.warning(" Warning message")
logger.error(" Error message")
logger.critical(" Critical message")

other_logger = logging.getLogger('other')
other_logger.debug(" Debug message")
other_logger.info(" Info message")
other_logger.warning(" Warning message")
other_logger.error(" Error message")
other_logger.critical(" Critical message")

# OUTPUT
# [ 2021-05-31 02:37:47,752 ][ other ][ DEBUG ] - Debug message
# [ 2021-05-31 02:37:47,753 ][ other ][ INFO ] - Info message
# [ 2021-05-31 02:37:47,757 ][ other ][ WARNING ] - Warning message
# [ 2021-05-31 02:37:47,760 ][ other ][ ERROR ] - Error message
# [ 2021-05-31 02:37:47,764 ][ other ][ CRITICAL ] - Critical message

# FILE'S CONTENT
# [ 2021-05-31 02:37:47,752 ][ __main__-55 ][ ERROR ] Error message
# [ 2021-05-31 02:37:47,752 ][ __main__-56 ][ CRITICAL ] Critical message
# [ 2021-05-31 02:37:47,760 ][ other-62 ][ ERROR ] Error message
# [ 2021-05-31 02:37:47,764 ][ other-63 ][ CRITICAL ] Critical message
```

Notice that I have used a new option: `disable_existing_loggers`. Setting it to `False` keeps any loggers that already existed before the configuration call enabled, which matters when your app is wrapped by another application or system that has already set up its own loggers.

Here, I created two loggers and three handlers to illustrate how versatile a more complex configuration can be with this structure. Every logging component in the config dictionary is kept separate, which makes the configuration clearer. With this approach, the configuration is still written in the code; think of this method as an intermediate stage toward using a configuration file, where the dictionary is created by reading the file.

#### 3. Using a configuration file

There are several ways to set a configuration using a file. You can use a JSON, YAML, or INI file. If you use JSON or YAML, the only thing you need to do is read the file with the `json` or `PyYAML` library, which converts the content into a Python dictionary. Once you have that dictionary, you can load it with the `dictConfig` function used above.
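For instance, loading a JSON configuration could look like the following sketch (the `logging_config.json` filename is hypothetical; the file would contain the same structure as the dictionary above):

```python
import json
import logging
import logging.config

# Read the file and hand the resulting dictionary to dictConfig.
with open("logging_config.json") as config_file:
    config = json.load(config_file)

logging.config.dictConfig(config)

logger = logging.getLogger(__name__)
logger.info("configured from a JSON file")
```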
For that reason, we’ll focus on INI files here.

**`config.ini`**

```ini
[loggers]
keys=root

[handlers]
keys=console,error

[formatters]
keys=error_fmt,console_fmt

[logger_root]
level=INFO
handlers=console,error

[handler_console]
class=StreamHandler
formatter=console_fmt
level=INFO
args=(sys.stdout,)

[handler_error]
class=FileHandler
level=ERROR
formatter=error_fmt
args=('./errors-1.log',)

[formatter_console_fmt]
format=[ %(asctime)s ][ %(name)s ][ %(levelname)s ] - %(message)s

[formatter_error_fmt]
format=[ %(asctime)s ][ %(name)s-%(lineno)d ][ %(levelname)s ] %(message)s
```

**`example.py`**

```python
# CODE
import logging
import logging.config

logging.config.fileConfig(
    fname='./config.ini',
    disable_existing_loggers=False
)

logger = logging.getLogger(__name__)
logger.debug(" Debug message")
logger.info(" Info message")
logger.warning(" Warning message")
logger.error(" Error message")
logger.critical(" Critical message")

# OUTPUT
# [ 2021-05-31 12:17:53,281 ][ __main__ ][ INFO ] - Info message
# [ 2021-05-31 12:17:53,285 ][ __main__ ][ WARNING ] - Warning message
# [ 2021-05-31 12:17:53,287 ][ __main__ ][ ERROR ] - Error message
# [ 2021-05-31 12:17:53,290 ][ __main__ ][ CRITICAL ] - Critical message

# FILE'S CONTENT
# [ 2021-05-31 12:17:53,287 ][ __main__-14 ][ ERROR ] Error message
# [ 2021-05-31 12:17:53,290 ][ __main__-15 ][ CRITICAL ] Critical message
```

When using a configuration file, there is no need to change the code when you want to modify the logging configuration; you only need to change the configuration file.

### Filter: setting a fine-grained output

In the first example of the previous section (*use logger’s functions*), the error and critical logs were displayed in the terminal along with the info and warning messages, and were also written to a file. But how can you configure the logging system if you only want to display the info and warning logs in the terminal and have the rest written to a file? Maybe you are thinking that you can do this by just setting the level to INFO, but remember that then all logs at INFO level and above will be printed in the terminal, including the error and critical ones.
To solve that, you can use filters like this one:

```python
# CODE
import logging

# Define a custom filter
class FilterOnScreen(logging.Filter):
    def filter(self, record):
        return (
            record.levelno == logging.INFO
            or record.levelno == logging.WARNING
        )

# Set custom logger
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)

# Set handlers
log_handler = logging.StreamHandler()
error_handler = logging.FileHandler("errors-2.log")

# Set a level for each handler
log_handler.setLevel(logging.DEBUG)
error_handler.setLevel(logging.ERROR)

# Establish the desired string formats
log_format = logging.Formatter(
    '[ %(asctime)s ][ %(levelname)s ] - %(message)s'
)
error_format = logging.Formatter(
    '[ %(asctime)s ][ %(name)s-%(lineno)d ][ %(levelname)s ] %(message)s'
)

# Set the format for each handler
log_handler.setFormatter(log_format)
error_handler.setFormatter(error_format)

# Add the filter to the StreamHandler
log_handler.addFilter(FilterOnScreen())

# Add each handler to the logger
log.addHandler(log_handler)
log.addHandler(error_handler)

log.debug(" Debug message")
log.info(" Info message")
log.warning(" Warning message")
log.error(" Error message")
log.critical(" Critical message")

# OUTPUT
# [ 2021-05-31 01:47:37,350 ][ INFO ] - Info message
# [ 2021-05-31 01:47:37,351 ][ WARNING ] - Warning message

# FILE'S CONTENT
# [ 2021-05-31 01:47:37,351 ][ __main__-41 ][ ERROR ] Error message
# [ 2021-05-31 01:47:37,354 ][ __main__-42 ][ CRITICAL ] Critical message
```

Here, I created a class that inherits from the logging `Filter` class, with a `filter` method that receives the LogRecord and returns True only if the level is INFO or WARNING. As expected, only the info and warning logs are shown on the screen. This was a simple example to illustrate the use of filters, but note that a filter can also perform additional operations or transformations on a record before it is emitted.

## Concluding remarks

Logging gives you a much more detailed view of what is happening in your code, while keeping a clear structure for the events that occur in your app. Overall, implementing a good logging system will help you better manage your applications, resulting in more productivity, better communication, and less stress.

## Resources

* https://docs.python.org/3/library/logging.html
* https://docs.python.org/3/library/logging.config.html?#module-logging.config
* https://docs.python.org/3/howto/logging.html
* https://realpython.com/python-logging/
* https://realpython.com/python-logging-source-code/
* https://docs.python-guide.org/writing/logging/#logging-in-a-library
* https://monadical.com/posts/event-driven-architecture-1.html
* https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying

---

<center>
<img src="https://monadical.com/static/logo-black.png" style="height: 80px"/><br/>
Monadical.com | Full-Stack Consultancy

*We build software that outlasts us*
</center>


