Docker Compose Logs: Guide & Best Practices

Squadcast Community - Aug 14 '23 - Dev Community

Docker Compose is a tool for defining and running multi-container Docker applications. It allows developers to streamline the process of configuring, building, and running multiple containers as a single unit with a docker-compose.yml file. This configuration file specifies the services, networks, and volumes required for an application, along with their relationships and dependencies.
The docker-compose logs command displays the logs of all services defined in the docker-compose.yml file. It helps monitor and debug applications by providing insights into the behavior and performance of the various services.
The command aggregates the logs from all the containers specified in the docker-compose.yml file, presenting them in a unified view. By default, the logs are shown in the order they were generated, but you can filter or customize the output using various flags and options, such as:

--follow or -f: Follow log output (similar to tail -f).
--timestamps or -t: Show timestamps in the log output.
--tail: Show the last N lines of logs, where N is the number you provide.
SERVICE: Specify one or more services to display logs for instead of showing logs for all services.
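For example, a command like docker-compose logs -f -t --tail=100 web (where web is a placeholder service name) follows the last 100 lines of that service's output with timestamps added, which is a common starting point when watching a single service during debugging.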
This article will explore Docker Compose logging drivers and logging strategy best practices, practical examples of debugging and troubleshooting using Docker logs, and demonstrate how to set up log streaming.

Summary of key Docker Compose logs concepts

The table below summarizes key Docker Compose logs concepts this article will build upon.

| Concept | Description |
| --- | --- |
| docker-compose logs | docker-compose logs is a Docker command used to view the container logs for a system's defined services. |
| Logging drivers | Docker supports several logging drivers that define how your container logs are collected and stored. |
| Logging strategies | There are two log delivery modes in Docker: blocking and non-blocking. |
| Debugging with logs | The docker logs command allows you to inspect specific containers and review logs that could provide insight into the issue your application is facing. |
| Storing logs | Maintaining a healthy system requires a clear understanding of log locations and adherence to lifecycle policy guidelines. |

Docker logging drivers

Logging drivers are plugins that handle container logs in Docker. They define how logs are collected, processed, and stored for a container. Each driver provides different features and is designed to work with various logging services and platforms.
The default logging driver in Docker Compose is the json-file driver. This driver stores logs as JSON files on the host machine where the container is running. However, Docker supports several other logging drivers you can configure in your Docker Compose setup.
To configure Docker Compose to use separate logging drivers, specify the desired driver in the docker-compose.yml file using the logging configuration option for each service. Here's an example:
```yaml
version: '3'
services:
  web:
    image: my-web-app
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logstash-host:12201"
        tag: "my-web-app"

  db:
    image: my-db
    logging:
      driver: fluentd
      options:
        fluentd-address: "fluentd-host:24224"
        tag: "my-db"
```

In this example, the web service is configured to send logs to a Logstash server in the ELK stack using the gelf logging driver. The gelf-address option is set to the address of the Logstash server, which is configured to listen for GELF input on port 12201 (you need to configure Logstash accordingly).
The db service is configured to send logs to a Fluentd server using the fluentd logging driver. The fluentd-address option is set to the address of the Fluentd server, which listens on port 24224 (default Fluentd port).
Before using this configuration, ensure you have the ELK stack and Fluentd servers set up and properly configured to receive logs from your Docker containers.
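Once the stack is running, you can verify which driver a given container is actually using with docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container-id>, which prints the driver name (for example, gelf or fluentd).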

Docker Compose logging strategies

The log delivery mode in Docker determines how logs are transferred from the running containers to the specified log driver. There are two delivery modes:
**Blocking:** In this mode, a container's log writes block until the logging driver has accepted the message. Delivery is guaranteed, but if the driver cannot keep up with the volume of logs being written, the blocked writes can stall the container itself, which is a problem when that container is a critical part of your application.
**Non-blocking:** In this mode, Docker never blocks the container, even if the logging driver cannot keep up. Instead, Docker buffers the logs in memory and delivers them when the driver is ready. However, if the buffer fills up, Docker drops log messages.
To understand the risk of log loss with non-blocking mode, suppose you have a Docker Compose file (docker-compose.yml) that defines two services: a web server and a database. The web server logs important information to stdout or stderr within the container, and you expect these logs to be available for debugging or monitoring purposes.
```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:80"

  db:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=password
```

Starting the services with docker-compose up -d (detached mode) runs the containers in the background; note that detached mode is separate from the log delivery mode. If the services use non-blocking delivery and the logging driver falls behind, the in-memory buffer can fill up and Docker drops the overflow, so log lines you expected for debugging or monitoring may never be persisted. This is the potential log loss you accept in exchange for never blocking the application.
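The delivery mode itself is configured per service through the logging options. Below is a minimal sketch of opting into non-blocking delivery, assuming the default json-file driver (my-web-app is a placeholder image name):

```yaml
version: '3'
services:
  web:
    image: my-web-app  # placeholder image name
    logging:
      driver: json-file
      options:
        mode: non-blocking       # buffer log writes instead of blocking the container
        max-buffer-size: 4m      # once this buffer fills, further messages are dropped
```

With this configuration, the container never stalls on logging, at the cost of possible gaps in the log stream under heavy load.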

Four essential Docker Compose logs best practices

There is no one-size-fits-all Docker Compose logging strategy. However, several well-established best practices can help you define the right strategy for specific use cases. Here are four key Docker Compose logging best practices to consider:
Use a centralized logging solution: Instead of relying solely on docker-compose logs, consider using a centralized logging system to aggregate and manage logs from multiple containers and services. Popular options include the ELK Stack (Elasticsearch, Logstash, and Kibana), Graylog, or Fluentd.
Configure logging drivers: Docker provides various logging drivers that allow you to send container logs to different destinations. When using Docker Compose, you can specify a logging driver for each service in your docker-compose.yml file. Choose a logging driver that fits your needs, such as json-file, syslog, journald, or fluentd.
Consider log rotation and retention: Containers can generate a significant amount of logs, consuming disk space over time. Implement log rotation and retention strategies to manage log files effectively. For example, you can configure the maximum log file size (max-size) and the number of retained log files (max-file) in the logging driver options. This helps to control disk usage and prevents log files from growing indefinitely; a configuration sketch follows this list.
Include relevant information in logs: Ensure that the logs emitted by your application or services include sufficient information for troubleshooting. Include timestamps, request/response details, error codes, stack traces, and other relevant contextual information. This makes it easier to understand and diagnose issues when reviewing the logs.
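As a concrete illustration of the rotation practice above, here is a minimal sketch using the default json-file driver (my-web-app is a placeholder image name):

```yaml
version: '3'
services:
  web:
    image: my-web-app  # placeholder image name
    logging:
      driver: json-file
      options:
        max-size: "10m"  # rotate once the current log file reaches 10 MB
        max-file: "3"    # keep at most three log files in total
```

With these options, each container for this service keeps at most roughly 30 MB of logs on the host.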

How to debug with Docker Compose logs

Debugging a containerized application using Docker Compose logs can be very efficient, especially when dealing with HTTP 500 error codes. HTTP 500 is a generic error message indicating that the server encountered an unexpected condition that prevented it from fulfilling the request.
Here is a step-by-step guide for debugging HTTP 500 errors with Docker Compose logs.

  1. Identify the affected container: The first step is identifying the container causing the issue. If you're using Docker Compose, you can use the docker-compose ps command to list all running containers. The output will give you the container ID and its status.
    docker-compose ps

  2. View the logs: Once you have identified the problematic container, you can view its logs using the docker logs command followed by the container ID or name. This will display the logs for that particular container. Look for any error messages or stack traces related to the HTTP 500 error.
    docker logs <container-id>

  3. Filter the logs: If you're dealing with a large number of logs, it can be helpful to filter them. You can use the grep command to filter the logs for specific keywords. For example, to filter logs for "500", you can use:
    docker logs <container-id> 2>&1 | grep "500"
    This will display only the logs that contain the keyword "500".

  4. Continuous log tracking: If the issue is still occurring and you need to track logs continuously, you can use the -f or --follow option with the docker logs command:
    docker logs -f <container-id>
    docker logs --follow <container-id>
    This will continuously display the logs as they are generated. Look for any patterns or recurring error messages.

  5. Debug the issue: You can start debugging the issue based on the error messages and stack traces in the logs. This might involve checking your application code, configuration files, database connections, or external services with which your application interacts.

  6. Inspect the container: If you cannot diagnose the issue with logs alone, you can inspect the container's metadata for clues. Use the docker inspect command to get more information about the container, like its environment variables, volumes, network settings, etc.
    docker inspect <container-id>
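In practice, steps 3 and 4 are often combined. For example, docker logs --tail=200 <container-id> 2>&1 | grep "500" scans only the most recent output for the error code, keeping the noise manageable before you decide to follow the live stream with -f.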

Storing Docker Compose logs

Logs in Docker are typically stored on the host system where the Docker daemon runs. The exact location and format depend on the Docker logging driver. For example, if you're using the default json-file driver, the logs are stored in JSON format at the following location:
/var/lib/docker/containers/<container-id>/<container-id>-json.log
If you're using a different logging driver, like syslog or journald, the logs are stored in the location determined by that system's configuration.
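If you need the exact log file for a given container under the json-file driver, Docker records it in the container's metadata: docker inspect --format '{{.LogPath}}' <container-id> prints the full path.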

Lifecycle policy guidelines

Managing logs is a critical task in maintaining the health of your system. Here are some guidelines for creating a lifecycle policy based on the aggregate size of logs:
Monitor log sizes: Regularly monitor the aggregate size of your logs. Tools like du in Unix can help you with this task. You can also configure alerts in your monitoring system to notify you when logs reach a certain size.
Set a maximum log size: Determine a maximum size for your logs. This will depend on your system's capacity and how critical logs are to your operations. For example, you might decide that logs should never take up more than 50% of your disk space.
Implement log rotation: Log rotation involves renaming the current log file and starting a new one, which prevents individual log files from becoming too large. Docker has built-in rotation options for the json-file and local log drivers: you can specify the maximum file size and the number of files to keep, while the journald driver relies on systemd-journald's own rotation settings. A host-wide configuration example follows this list.
Archive important old logs: If old logs are still necessary (e.g., they are required for compliance), consider archiving them. Archived logs can be compressed to save space and moved to cheaper storage options.
Delete unimportant old logs: If old logs are not needed, delete them. This can be done manually or by using a log management tool. Be careful not to delete logs that might be needed for auditing or troubleshooting purposes.
Automate log management: Consider using log management tools or services that can automate these tasks. For example, Logrotate on Linux can automatically rotate, compress, and delete logs. Managed services like AWS CloudWatch or Google Stackdriver can also handle log lifecycle management.
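For example, rotation defaults can be set host-wide in the Docker daemon configuration (/etc/docker/daemon.json on most Linux systems) instead of per service. Note that the daemon must be restarted and the defaults only apply to containers created afterward:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```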

How to troubleshoot common issues with Docker Compose logs

If you're working with a multi-container Docker environment and experiencing issues, Docker Compose logs can be extremely useful for troubleshooting. Below is a step-by-step guide on troubleshooting with Docker Compose logs using a real-world example of a Python Flask application with a PostgreSQL database.
Assume your docker-compose.yml file looks like this:
```yaml
version: '3'
services:
  web:
    build: .
    command: python app.py
    volumes:
      - .:/code
    ports:
      - '5000:5000'
    depends_on:
      - db

  db:
    image: postgres:11
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: test_db
```

Suppose you notice that your application is not responding as expected, and you suspect an issue with the database connection. You can follow the steps below to debug.
  1. Check the logs for the web service: First, check the logs for the web service, which is where the application is running:
    docker-compose logs web
    If you see an error message related to the database connection, such as:
    OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
    this indicates that the application is unable to connect to the database.

  2. Check the logs for the database service: Next, check the logs for the database service to see if there are any issues there:
    docker-compose logs db

  3. Modify the Docker Compose file: Based on the error message, you may have specified the wrong password for the PostgreSQL user in your docker-compose.yml file. In this case, correct the password and run docker-compose up again.

  4. Follow the logs: If you're still experiencing issues, it can help to follow the logs in real time as you interact with the application. You can do this with the -f or --follow option:
    docker-compose logs -f

  5. Filter the logs: You can filter the logs using grep to find a specific error or message. For example, to find log entries that contain the word "error", you could run:
    docker-compose logs web | grep error
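Note that a "Connection refused" error can also occur when the web container starts before PostgreSQL is ready to accept connections, because depends_on only waits for the db container to start, not for the database inside it. A minimal sketch of one common remedy, a healthcheck paired with a depends_on condition (supported by Compose file format 2.1 and the newer Compose Specification, but not by classic version '3' files):

```yaml
services:
  db:
    image: postgres:11
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]  # succeeds once Postgres accepts connections
      interval: 5s
      timeout: 5s
      retries: 5
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy  # wait for the healthcheck to pass, not just container start
```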

Example of log streaming using Celery, Socket.IO, and containers

In this example, we provide a sample application demonstrating log streaming between containers and why it is important. We have two components:
A server that accepts HTTP POST requests
A client that sends HTTP POST requests
The client sends a POST request to the server to calculate the Fibonacci sequence. The server then computes the sequence and sends back the response. This simple model could be further extended to distribute all sorts of workloads as microservices.

Setup

The following docker-compose.yml file represents the setup described above.
```yaml
version: '3'
services:
  redis:
    image: redis:5
    ports:
      - "6379:6379"

  web:
    build: ./server
    command: flask run --host=0.0.0.0 --port=5001
    volumes:
      - ./server:/code
    ports:
      - "5001:5001"
    environment:
      - FLASK_APP=app_server.py
      - FLASK_RUN_HOST=0.0.0.0
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/1
    depends_on:
      - redis

  worker:
    build: ./server
    # command: celery -A tasks.celery worker --loglevel=info
    command: celery -A tasks worker --loglevel=info
    volumes:
      - ./server:/code
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/1
    depends_on:
      - web

  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    volumes:
      - ./client:/code
    depends_on:
      - worker
```

The docker-compose.yml file above creates four containers: redis, worker, and web constitute the server component, and the client is a separate component. For simplicity's sake, to avoid "404 not found" errors, we added the depends_on key to the client service so that the client sends its request only after the server is fully up.

Once all the containers are up, the client sends a POST request with a single parameter n to compute the first n Fibonacci numbers. For example, if the parameter n is 10, the sequence would be [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]. Each Fibonacci number the server calculates is sent to the client as a log emission. This behavior is helpful when a client issues a long-running command to the server: instead of leaving the client waiting at a blank screen, the server can emit feedback logs.

Celery workers are used to hand off compute-intensive or long-running tasks for asynchronous processing. Celery requires a message broker - in this case, Redis - to facilitate communication between the task producer (the client application) and the task consumer (the worker).

The complete code for the application above is available at: https://github.com/ashaik4/distributed_task_framework

Summary

Docker Compose enables the configuration, building, and running of multi-container Docker applications using a YAML configuration file, docker-compose.yml. Developers can monitor applications by displaying the logs of all defined services with the docker-compose logs command. Docker supports various logging drivers that process and store container logs, and its two log delivery modes, blocking and non-blocking, determine how logs are transferred from containers to the configured driver. Debugging with logs is an efficient way to identify issues such as HTTP 500 error codes. Logs are stored according to the logging driver in use, and a log lifecycle policy is essential for managing the aggregate size of logs. Docker Compose logs are also useful for troubleshooting in a multi-container environment, and real-time log streaming can provide client feedback for long-running tasks processed asynchronously by server-side Celery workers.