
[BUG] Possible bug - not all logs of the AWS EB application appear in the CW web.stdout.log


Bug Report:

Description: Not all logs from the deployed AWS Elastic Beanstalk application appear in the corresponding log group (/var/log/web.stdout.log) in AWS CloudWatch, nor in /var/log/web.stdout.log on the EC2 instance itself.

Steps to Reproduce:

  1. Deploy the demo application to AWS Elastic Beanstalk.
  2. Check the corresponding log group (/var/log/web.stdout.log) in AWS CloudWatch.
  3. Observe that some log lines are missing.

Expected Result: All logs of the deployed application should appear in the corresponding log group (/var/log/web.stdout.log) in AWS CloudWatch.

Actual Result: Not all logs of the deployed application appear in the corresponding log group (/var/log/web.stdout.log) in AWS CloudWatch.

AWS Beanstalk Environment Configuration:

  • Environment type: Single instance (also tried with multiple instances)
  • Platform: Corretto 8 running on 64bit Amazon Linux 2/3.4.6 (also tested with a couple of earlier versions)
  • Instance type: t3.micro (also tested on t2.medium and t2.small)
  • Log streaming to CloudWatch enabled

AWS CloudWatch Log Group:

  • Log group name: /var/log/web.stdout.log

Used workaround: None.

Severity: High

Additional Details (bug reproduction): The demo application is a simple Java application that writes 100,000 log lines to stdout from 10 threads (10,000 lines per thread). When the application is deployed to AWS Elastic Beanstalk and the logs are checked in AWS CloudWatch, not all log lines appear in the corresponding log group (/var/log/web.stdout.log).
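The demo app's source lives in the repository linked below; as a rough stand-in (assumption: the Java app does nothing more than print numbered lines from several threads), the same load can be sketched in shell:

```shell
#!/bin/bash
# generate_logs THREADS LINES_PER_THREAD
# Spawns THREADS background subshells, each printing LINES_PER_THREAD numbered
# lines to stdout -- a rough stand-in for the Java demo app, not its real code.
generate_logs() {
    local threads="${1:-10}" lines="${2:-10000}"
    for t in $(seq 1 "$threads"); do
        ( for i in $(seq 1 "$lines"); do echo "thread-$t line-$i"; done ) &
    done
    wait
}

# Usage: generate_logs 10 10000 > log.txt; wc -l log.txt
```

Run locally, `generate_logs 10 10000 > log.txt` followed by `wc -l log.txt` should report 100000 lines, matching the expected local result described below.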

Demo Application:

  • Repository link: https://github.com/grigoart/aws-eb-log-bug/
  • Build the jar by running gradle jar, or use application.jar from the repository (java version "1.8.0_151")
  • Running locally:
    • java -jar application.jar > log.txt
    • wait for ~1 min and stop the application
    • wc -l log.txt
    • output: "100000 log.txt" (as expected)
  • Running in AWS EB Environment:
    • deploy jar
    • wait for ~1 min
    • verify that AWS CW log group /var/log/web.stdout.log does not have all the logs (e.g. all numbers >1000 are missing). According to AWS CW log insights there are only 1000 log entries (see the image below)
    • alternatively:
      • connect to EC2 instance using SSH
      • wc -l /var/log/web.stdout.log
      • output: "2360 /var/log/web.stdout.log" (unexpected result, should be >= 100000)
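If rate limiting is the cause (as the accepted answer below found), journald typically leaves evidence in /var/log/messages in the form of "Suppressed N messages" lines. A quick check on the instance might look like the following; the sample line here is illustrative, not captured from a real instance, and the unit path is a guess:

```shell
# Illustrative journald suppression line (format based on systemd-journald's
# suppression notice; the PID, count, and unit path are made up).
sample='systemd-journald[123]: Suppressed 1042 messages from /system.slice/web.service'

# On the EC2 instance you would grep the real log file instead:
#   sudo grep -oE 'Suppressed [0-9]+ messages' /var/log/messages
echo "$sample" | grep -oE 'Suppressed [0-9]+ messages'
```

If the grep against /var/log/messages returns matches, journald is dropping lines before they ever reach /var/log/web.stdout.log.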

[Image: AWS CloudWatch Logs Insights query results]

asked 3 years ago · 1.8K views
3 Answers
Accepted Answer

Got the answer from tech support:

The rate-limiting errors were observed in the /var/log/messages file.

... the rate-limiting signals were being sent by the journald service.

Solved by configuring the rsyslog and journald services through .ebextensions (workaround solution provided by tech support):

files:
    /tmp/Test.txt:
        mode: "000644"
        owner: root
        group: root
        content: |
            $SystemLogRateLimitInterval 0
            $SystemLogRateLimitBurst    0
            $ImjournalRateLimitInterval 0

    /tmp/script.sh:
        mode: "000755"
        owner: root
        group: root
        content: |
            #!/bin/bash
            # Check whether the rate-limit overrides are already in place.
            value=$(grep -f /tmp/Test.txt /etc/rsyslog.conf)
            value1=$(grep -w 'RateLimitInterval=0' /etc/systemd/journald.conf)
            value2=$(grep -w 'RateLimitBurst=0' /etc/systemd/journald.conf)

            # Update /etc/rsyslog.conf
            if [ -n "$value" ]; then
                echo "Match Found"
            else
                echo "Modifying /etc/rsyslog.conf"
                echo -e '$SystemLogRateLimitInterval 0\n$SystemLogRateLimitBurst    0\n$ImjournalRateLimitInterval 0' >> /etc/rsyslog.conf
            fi

            # Update /etc/systemd/journald.conf
            if [ -n "$value1" ]; then
                echo "Match Found for RateLimitInterval"
            else
                echo "Adding RateLimitInterval=0 to /etc/systemd/journald.conf"
                echo "RateLimitInterval=0" >> /etc/systemd/journald.conf
            fi

            if [ -n "$value2" ]; then
                echo "Match Found for RateLimitBurst"
            else
                echo "Adding RateLimitBurst=0 to /etc/systemd/journald.conf"
                echo "RateLimitBurst=0" >> /etc/systemd/journald.conf
            fi

commands:
    01_run_script.sh: 
        command: /tmp/script.sh
    02_restart_journald:
        command: systemctl restart systemd-journald
    03_restart_rsyslog:
        command: systemctl restart rsyslog
answered 3 years ago

Did you enable streaming of the environment's logs to CloudWatch Logs? You can do it using the console (for detailed steps, see [2] and [3]), using the EB CLI [4], or using configuration files [5]. After completing these steps you should see the CloudWatch log groups, and the environment will start streaming the appropriate logs into them.

[1] Enabling Elastic Beanstalk enhanced health reporting: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/health-enhanced-enable.html
[2] Using Elastic Beanstalk with Amazon CloudWatch Logs: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html
[3] Instance log streaming using the Elastic Beanstalk console: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html#AWSHowTo.cloudwatchlogs.streaming.console
[4] Instance log streaming using the EB CLI: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html#AWSHowTo.cloudwatchlogs.streaming.ebcli
[5] Instance log streaming using configuration files: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html#AWSHowTo.cloudwatchlogs.files

AWS
answered 3 years ago
  • Did you enable streaming of the environment's logs to CloudWatch Logs?

    Yes I did, see the "AWS Beanstalk Environment Configuration" section of the question.

    After completing these steps you should be able to see these CloudWatch LogGroups and the environment will start streaming appropriate logs into them.

    I see some of my application's logs in the CW log group; the problem is that they are incomplete, i.e. some log lines are missing.

    I assume the problem is between the application and /var/log/web.stdout.log (within the EC2 instance) rather than between /var/log/web.stdout.log and CloudWatch.


After raising a support ticket, I found this answer, and it worked for us too. Still, I kept discussing the following with AWS support:

  1. Default Configuration: Why is this rate-limiting setting not overridden by default in Elastic Beanstalk environments? Given that it can cause significant issues with log visibility, it seems like an important default configuration.
  2. Documentation: Why isn’t this solution documented more prominently? Many developers using AWS Elastic Beanstalk could face this problem by default. Improved documentation could help others avoid similar issues.
  3. Usage and Best Practices: Is it common for developers using Elastic Beanstalk to face this issue? If so, it suggests that Elastic Beanstalk might not be as widely used or that many are unaware of this rate-limiting problem.

And here is the reply from them:

  1. Default Configuration: I understand that the rate-limiting setting is not overridden by default in Elastic Beanstalk environments and thus may impact log visibility in CloudWatch. As you mentioned, I agree that it could be an important default setting. Hence, to gather further insights, I have reached out to our internal Elastic Beanstalk team for more information around this setting. Please be assured that I will share any updates from the team as soon as possible. I highly appreciate your understanding here.
  2. Documentation: I understand that you believe this issue should be mentioned in the documentation so that developers are aware of the rate-limiting setting. I have requested our internal team to consider the same. However, since it may take some time for the documentation to get updated, I am unfortunately not able to share an ETA for when you might observe this change in the Elastic Beanstalk documentation. That being said, we appreciate your feedback as we strive to improve our products and services, and I have shared it with our internal Elastic Beanstalk teams.
  3. Usage and Best Practices: I understand that you would like to know if it is common for Elastic Beanstalk users to encounter this issue. Since this depends on the customer's application and the amount of logs it generates, not all Elastic Beanstalk customers encounter it. However, as you mentioned correctly, I believe this issue should be addressed and documented so that customers are aware of it.

And then the follow-up reply from them:

I am glad to inform you that I have received an update from the team. The internal team has informed me that the rate-limiting setting on Beanstalk environments is consistent with the default settings provided on Amazon Linux EC2 instances. These rate limits prevent logging from using excessive amounts of system resources. Therefore, completely disabling rate limiting can be risky, and it is recommended to tweak it as required.

Having said that, we understand that developers should be more aware of this issue. Therefore, the internal team has acknowledged that they are planning to update the Elastic Beanstalk documentation and instructions related to this configuration, and they will update the documentation to surface this in a more prominent location.

In a nutshell: they don't do this by default in the Elastic Beanstalk configuration to prevent excessive use of system resources, and they will document this in the Elastic Beanstalk guide!
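Following that advice, instead of zeroing the limits as in the accepted answer, one option is to raise them to fit the application's log volume. A sketch for /etc/systemd/journald.conf (the values are illustrative, not an AWS recommendation; note that newer systemd versions spell the interval key RateLimitIntervalSec):

```ini
# /etc/systemd/journald.conf (Amazon Linux 2 key names)
# Illustrative values: allow up to 50000 messages per service per 30 s
# before suppression kicks in, instead of disabling rate limiting entirely.
[Journal]
RateLimitInterval=30s
RateLimitBurst=50000
```

After changing the file, restart journald (systemctl restart systemd-journald) as in the .ebextensions commands from the accepted answer.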

I hope this helps others!

answered a year ago
