Got the answer from tech support:
The rate-limiting errors were observed in the /var/log/messages file.
... the rate-limiting signals were being sent by the journald service.
Solved by configuring the rsyslog and journald services through .ebextensions (workaround provided by tech support):
```yaml
files:
  "/tmp/Test.txt":
    mode: "000644"
    owner: root
    group: root
    content: |
      $SystemLogRateLimitInterval 0
      $SystemLogRateLimitBurst 0
      $ImjournalRateLimitInterval 0
  "/tmp/script.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # Fetch grep results to check whether the update has already been performed.
      value=$(grep -f /tmp/Test.txt /etc/rsyslog.conf)
      value1=$(grep -w 'RateLimitInterval=0' /etc/systemd/journald.conf)
      value2=$(grep -w 'RateLimitBurst=0' /etc/systemd/journald.conf)

      # Update /etc/rsyslog.conf
      if [ -n "$value" ]; then
        echo "Match Found"
      else
        echo "Modifying /etc/rsyslog.conf"
        echo -e '$SystemLogRateLimitInterval 0\n$SystemLogRateLimitBurst 0\n$ImjournalRateLimitInterval 0' >> /etc/rsyslog.conf
      fi

      # Update /etc/systemd/journald.conf
      if [ -n "$value1" ]; then
        echo "Match Found for RateLimitInterval"
      else
        echo "Adding RateLimitInterval=0 to /etc/systemd/journald.conf"
        echo "RateLimitInterval=0" >> /etc/systemd/journald.conf
      fi

      if [ -n "$value2" ]; then
        echo "Match Found for RateLimitBurst"
      else
        echo "Adding RateLimitBurst=0 to /etc/systemd/journald.conf"
        echo "RateLimitBurst=0" >> /etc/systemd/journald.conf
      fi

commands:
  01_run_script.sh:
    command: bash /tmp/script.sh
  02_restart_journald:
    command: systemctl restart systemd-journald
  03_restart_rsyslog:
    command: systemctl restart rsyslog
```
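The core of the script above is a "grep, then append only if missing" check that makes it safe to re-run on every deployment. A minimal, self-contained sketch of that idempotent-append pattern (using a temporary file instead of the real /etc/rsyslog.conf, so it is safe to run anywhere; `append_once` is an illustrative helper, not part of the support-provided script):

```shell
#!/bin/bash
# Idempotent-append pattern, as used by /tmp/script.sh above.
conf=$(mktemp)

append_once() {
  local line=$1 file=$2
  # -q: quiet, -x: match the whole line, -F: fixed string (no regex)
  if grep -qxF "$line" "$file"; then
    echo "Match Found: $line"
  else
    echo "Adding: $line"
    echo "$line" >> "$file"
  fi
}

append_once 'RateLimitInterval=0' "$conf"
append_once 'RateLimitInterval=0' "$conf"  # second call is a no-op

count=$(grep -c 'RateLimitInterval=0' "$conf")
echo "occurrences: $count"
rm -f "$conf"
```

Quoting the variables (`"$value1"` rather than a bare `$value1`) matters here: grep can return multiple words, which would break an unquoted `[ ! -z $value1 ]` test.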
Did you enable streaming of the environment's logs to CloudWatch Logs? You can do it using the console (for detailed steps, see [3]), using the EB CLI [4], or using configuration files [5]; [2] gives an overview. After completing these steps you should be able to see the CloudWatch log groups, and the environment will start streaming the appropriate logs into them.
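For the configuration-file route [5], the documented option namespace is `aws:elasticbeanstalk:cloudwatch:logs`; a minimal .ebextensions sketch (the retention value here is just an example):

```yaml
# .ebextensions/cloudwatch-logs.config
option_settings:
  aws:elasticbeanstalk:cloudwatch:logs:
    StreamLogs: true
    DeleteOnTerminate: false
    RetentionInDays: 7
```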
[1] Enabling Elastic Beanstalk enhanced health reporting — https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/health-enhanced-enable.html
[2] Using Elastic Beanstalk with Amazon CloudWatch Logs — https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html
[3] Instance log streaming using the Elastic Beanstalk console — https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html#AWSHowTo.cloudwatchlogs.streaming.console
[4] Instance log streaming using the EB CLI — https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html#AWSHowTo.cloudwatchlogs.streaming.ebcli
[5] Instance log streaming using configuration files — https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html#AWSHowTo.cloudwatchlogs.files
After raising a support ticket, I received this same answer, and it worked for us too. I kept discussing the following points with AWS Support:
- Default Configuration: Why is this rate-limiting setting not overridden by default in Elastic Beanstalk environments? Given that it can cause significant issues with log visibility, it seems like an important default configuration.
- Documentation: Why isn’t this solution documented more prominently? Many developers using AWS Elastic Beanstalk could face this problem by default. Improved documentation could help others avoid similar issues.
- Usage and Best Practices: Is it common for developers using Elastic Beanstalk to face this issue? If so, it suggests that Elastic Beanstalk might not be as widely used or that many are unaware of this rate-limiting problem.
And here is their reply:
- Default Configuration: I understand that rate-limiting setting is not overridden by default in Elastic Beanstalk environments and thus, this may impact log visibility in Cloudwatch. As you mentioned, I agree that it could be an important default setting. Hence, to gather further insights into the same, I have reached out to our internal Elastic Beanstalk team for sharing more information around this setting. Please be assured that I will share any updates from the team as soon as possible. I highly appreciate your understanding here.
- Documentation: I understand that you believe this issue should be mentioned in the documentations so that developers are aware of the rate-limiting setting. Please allow me to mention that I have requested our internal team to consider the same. However, since it may take some time for documentations to get updated, unfortunately, I would not be able to share an ETA for when you might be able to observe this change in the Elastic Beanstalk documentation. That being said, I would like to mention that we appreciate your feedback as we strive to improve our products and services and thus, please be assured that I have shared your feedback with our Elastic Beanstalk internal teams.
- Usage and Best Practices: I understand that you would like to know if it is common for Elastic Beanstalk users to encounter this issue. As you might already be aware, since Elastic Beanstalk usage is more dependent on customers' application and the amount of logs the application is generating, not all Elastic Beanstalk customers encounter this issue. However, as you mentioned correctly, I believe this issue should be addressed and documented so that customers are aware of this problem.
And then the follow-up reply from them:
I am glad to inform you that I have received an update from the team. The internal team has informed that the rate-limiting setting on Beanstalk environments is consistent with the default settings provided on Amazon Linux EC2 Instances. These rate limits prevent logging from using excessive amounts of system resources. Therefore, completely disabling rate limiting can be risky, and thus it's recommended to tweak it as required.
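"Tweaking rather than disabling" could look like the journald fragment below. The values are purely illustrative, not AWS guidance (also note that newer systemd versions spell the first key `RateLimitIntervalSec`):

```ini
# /etc/systemd/journald.conf -- illustrative values, not an AWS recommendation
[Journal]
# Allow up to 10000 messages per service in each 30 s window
# before journald starts suppressing them.
RateLimitInterval=30s
RateLimitBurst=10000
```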
Having said that, we understand that developers should be more aware of this issue. Therefore, the internal team has acknowledged that they are planning to update the Elastic Beanstalk documentation and instructions related to this configuration, and they will update the documentation to surface this in a more prominent location.
In a nutshell: they don't disable rate limiting by default in the Elastic Beanstalk configuration in order to prevent excessive use of system resources, and they will document this in the Elastic Beanstalk guide!
I hope this helps others!
Yes I did, see the "AWS Beanstalk Environment Configuration" section of the question.
I see some logs of my application in the CW log group, the problem is that they are partial/incomplete, i.e. some log lines are missing.
I assume the problem is somewhere between the application -> `/var/log/web.stdout.log` (within the EC2 instance) rather than between `/var/log/web.stdout.log` -> CloudWatch.