How do I send my container logs to multiple destinations in Amazon ECS on AWS Fargate?

I want my application container that runs on AWS Fargate to forward logs to multiple destinations, such as Amazon CloudWatch, Amazon Data Firehose, or Splunk.

Short description

An Amazon Elastic Container Service (Amazon ECS) task definition allows you to specify only a single log configuration object for a given container. This limit means that you can forward logs to only a single destination. To forward logs to multiple destinations in Amazon ECS on Fargate, you can use FireLens.

Note: FireLens works with both Fluent Bit and Fluentd log forwarders. The following resolution uses Fluent Bit because Fluent Bit is more resource-efficient than Fluentd.

Resolution

Prerequisites:

Review the following information:

  • FireLens uses the key-value pairs specified as options in the logConfiguration object from the ECS task definition to generate the Fluent Bit output definition. The destination where the logs are routed is specified in the [OUTPUT] definition section of a Fluent Bit configuration file. For more information, see Output on the Fluent Bit website.
  • FireLens creates a configuration file on your behalf, but you can also specify a custom configuration file. You can host this configuration file in Amazon Simple Storage Service (Amazon S3). Or, create a custom Fluent Bit Docker image with the custom output configuration file added to it.
  • If you use Amazon ECS on Fargate, then you can't pull a configuration file from Amazon S3. Instead, you must create a custom Docker image with the configuration file.
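To illustrate the first point above, a container's logConfiguration options with the awsfirelens log driver map directly to keys in the generated [OUTPUT] section. The following sketch uses placeholder values (log group and region are illustrative):

```json
"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "cloudwatch",
        "region": "us-east-1",
        "log_group_name": "my-app-logs",
        "log_stream_prefix": "from-fluent-bit",
        "auto_create_group": "true"
    }
}
```

Each key-value pair under options becomes a line in the Fluent Bit [OUTPUT] definition that FireLens generates.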

Create IAM permissions

Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshoot AWS CLI errors. Also, make sure that you're using the most recent AWS CLI version.

Create AWS Identity and Access Management (IAM) permissions to allow your task role to route your logs to different destinations. For example, if your destination is Data Firehose, then you must give the task permission to call the firehose:PutRecordBatch API.

Note: Fluent Bit supports several plugins as log destinations. Destinations like CloudWatch and Kinesis require permissions that include logs:CreateLogGroup, logs:CreateLogStream, logs:DescribeLogStreams, logs:PutLogEvents, and kinesis:PutRecords. For more information, see Permissions for CloudWatch and Kinesis on the GitHub website.
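As a sketch, a task role policy that allows delivery to both Data Firehose and CloudWatch Logs might look like the following. The account ID, Region, and delivery stream name are placeholders; scope the Resource values to your own ARNs:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "firehose:PutRecordBatch",
            "Resource": "arn:aws:firehose:us-west-2:111122223333:deliverystream/nginx-stream"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
```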

Create a Fluent Bit Docker image with a custom output configuration file

  1. Create a custom Fluent Bit configuration file called logDestinations.conf with your choice of [OUTPUT] definitions. For example, the following configuration file includes configurations defined for CloudWatch, Data Firehose, and Splunk:

    [OUTPUT]
        Name                firehose
        Match               YourContainerName*
        region              us-west-2
        delivery_stream     nginx-stream
    [OUTPUT]
        Name                cloudwatch
        Match               YourContainerName*
        region              us-east-1
        log_group_name      firelens-nginx-container
        log_stream_prefix   from-fluent-bit
        auto_create_group   true   
    [OUTPUT]
        Name                splunk
        Match               YourContainerName*
        Host                127.0.0.1
        Splunk_Token        xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx
        Splunk_Send_Raw     On

    Note: Different destinations require different fields to be specified in the [OUTPUT] definition. For examples, see amazon-ecs-firelens-examples on the GitHub website.

  2. Use the following Dockerfile example to create a Docker image with a custom Fluent Bit output configuration file:

    FROM amazon/aws-for-fluent-bit:latest
    ADD logDestinations.conf /logDestinations.conf

    Note: For more information, see Dockerfile reference on the Docker website.

  3. To build the custom fluent-bit Docker image from the Dockerfile that you created, run the following command:

    docker build -t custom-fluent-bit:latest .

    Important: Run the docker build command in the same location as the Dockerfile.

  4. To confirm that the Docker image is available to Amazon ECS, push your Docker image to Amazon Elastic Container Registry (Amazon ECR). Or, push your Docker image to your own Docker registry. For example, to push a local Docker image to Amazon ECR, run the following command:

    docker push aws_account_id.dkr.ecr.region.amazonaws.com/custom-fluent-bit:latest
  5. In your task definition, update the options for your FireLens configuration:

    {
      "containerDefinitions": [
        {
          "essential": true,
          "image": "aws_account_id.dkr.ecr.region.amazonaws.com/custom-fluent-bit:latest",
          "name": "log_router",
          "firelensConfiguration": {
            "type": "fluentbit",
            "options": {
              "config-file-type": "file",
              "config-file-value": "/logDestinations.conf"
            }
          }
        }
      ]
    }

    Note: To specify a custom configuration file, you must include the config-file-type and config-file-value options in your FireLens configuration. To include these options, use the AWS CLI or the Amazon ECS console to create the task definition.
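
    The task definition must also include at least one application container that uses the awsfirelens log driver. Otherwise, registration fails with "When a firelensConfiguration object is specified, at least one container has to be configured with the awsfirelens log driver." A minimal sketch of such a container definition (the image and container name are placeholders; the options can stay empty because the destinations are defined in the custom configuration file):

    ```json
    {
      "essential": true,
      "image": "nginx:latest",
      "name": "YourContainerName",
      "logConfiguration": {
        "logDriver": "awsfirelens"
      }
    }
    ```

    The container name must match the Match patterns in your [OUTPUT] definitions, such as YourContainerName* in the earlier example.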

  6. Modify the image property in the containerDefinition section of your configuration to reflect a valid Amazon ECR image location. To specify images in Amazon ECR repositories, use the full registry/repository:tag naming convention. For example:

    aws_account_id.dkr.ecr.region.amazonaws.com/custom-fluent-bit:latest

    To use other repositories, see the image property of the task definition.

AWS OFFICIAL
Updated 2 months ago
6 Comments

Hello!

It is not necessary to create your own image; you can connect the config file from S3.

"config-file-type": "s3", "config-file-value": "arn:aws:s3:::yourbucket/yourdirectory/extra.conf"

replied 24 days ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 24 days ago

I didn't find any mention of the logConfiguration setup at the container definition level with a logDriver of awsfirelens. Something like:

 "logConfiguration": {
     "logDriver": "awsfirelens",
     "options": {
         "compress": "gzip",
         "provider": "ecs",
         "dd_service": "prefix-service-service",
         "Host": "http-intake.logs.datadoghq.com",
         "TLS": "on",
         "dd_source": "python-grpc",
         "dd_tags": "env:dev, prefix-service-dev",
         "Name": "datadog"
     },
     "secretOptions": [
         {
             "name": "apikey",
             "valueFrom": "arn:aws:secretsmanager:us-east-2:12121212121:secret:datadog_dev:dd_api_key::"
         }
     ]
 }

Full details for the issue I am facing: https://stackoverflow.com/questions/78632920/aws-ecs-fargate-send-logs-to-multiple-destinations-aws-s3-and-datadog

sany2k8
replied 4 days ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 4 days ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 3 days ago

The article didn't mention the logConfiguration setup at the container definition level with a logDriver of awsfirelens. I was getting this error:

Error: failed creating ECS Task Definition (prefix-service-dev): ClientException: When a firelensConfiguration object is specified, at least one container has to be configured with the awsfirelens log driver.

sany2k8
replied a day ago