eb-docker-compose-log service keeps crashing

What is happening

Our EC2 instances degrade into a severe state every couple of hours because the eb-docker-compose-log service has crashed. To resolve it we either deploy new instances or manually restart the service from an SSH session, but in both cases the service crashes again after a couple of hours. The system logs show the system trying to restart the service on its own, but it fails every time.

Some context

Two environments running very different stacks are having the same issue. We are using Elastic Beanstalk to deploy instances, and they are configured to use the awslogs driver to send the logs to CloudWatch. We only configured the following:

    awslogs-group: << application name >>
    awslogs-region: eu-west-1
    awslogs-stream: << application name >>
    awslogs-create-group: "true"
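For reference, these options live under the service's `logging` key in `docker-compose.yml`; a minimal sketch of how we wired it up (the service and image names here are placeholders, not our real ones):

```yaml
services:
  app:                                   # placeholder service name
    image: my-app:latest                 # placeholder image
    logging:
      driver: awslogs
      options:
        awslogs-group: my-application    # << application name >>
        awslogs-region: eu-west-1
        awslogs-stream: my-application   # << application name >>
        awslogs-create-group: "true"
```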

This worked fine until about a month ago, but since then the service keeps crashing. CloudWatch still receives logs, so I am not sure why the eb-docker-compose-log service is needed, but the instance being in a severe state is problematic enough.

What have we tried

On one of the more problematic environments we configured the CloudWatch agent to monitor memory and disk usage.

  • Memory usage is very stable at 15%.
  • Disk usage and the number of inodes are very steady as well.
  • CPU is currently stable at about 5%, even during moments of degraded state.

We upgraded to a bigger instance with more memory and more CPU cores. We are still on the bigger instance, but it had no impact on the frequency of the issue.

Logs

The logs that do make it to CloudWatch do not hold any clues, but the system logs do. There are always a couple of these errors in the instance logs:

    error unmarshalling log entry (size=108554): proto: LogEntry: illegal tag 0 (wire type 6)

Followed by a couple of these:

    Error streaming logs: log message is too large (1937007727 > 1000000)

Note that the size of this log entry is almost 2 GB. While something large might erroneously be logged, the preceding unmarshalling error suggests a parsing problem, which could result in many log entries being concatenated into one.
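Both errors, in fact, point the same way. A protobuf message key is the varint `(field_number << 3) | wire_type`; field numbers start at 1 and wire types run 0-5, so "tag 0 (wire type 6)" can only come from a byte that was never a real key. And the "too large" size 1937007727, read as a big-endian 32-bit integer, is exactly the ASCII bytes "stdo" (the start of "stdout") - typical of a reader that lost frame alignment and parsed literal log text as a length header. A quick sanity check in plain Python, with no assumptions about the instance:

```python
import struct

# A protobuf message key is the varint (field_number << 3) | wire_type.
# Field numbers start at 1 and wire types run 0-5, so "tag 0 (wire type 6)"
# can only come from a byte that was never a real key (here, 0x06).
key = 0x06
print(key >> 3, key & 0x07)  # 0 6

# The "too large" size is not a length at all: as a big-endian uint32 it
# spells the ASCII text "stdo", the start of "stdout".
print(struct.pack(">I", 1937007727))  # b'stdo'
```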

Tech stack

That the issue is not with the application is supported by the fact that we encounter it on two different environments running different tech stacks (one is a custom Node.js application, while the other runs Kong API Gateway, a third-party application). The overlap between the two environments is:

  • Both are deployed using beanstalk with a docker compose configuration
  • On both we configured the awslogs driver
  • Both run nginx somewhere in their stack << this might be something
  • The issue does not seem to be with eb-docker-compose-log itself, which apparently is just a small script that pipes the output of docker compose logs to a destination that is probably configurable. I get the same error if I use docker logs <<container-id>>

    The logs are cached, and if I look them up I can sort the lines based on their byte size, like so: zcat container-cached.log.2.gz | awk '{ print length, $0 | "sort -n -r" }' | less

    Note the 2 in the filename; on my instance there are four gzipped cached logs and one uncompressed one that is actively being appended to. All but one of those files gave me no issues, but one of them (#2) produced this as the biggest line:

    11370455 ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@.................loooooads more

    I believe those are null bytes - more than 11 megabytes of them - so I think something is being corrupted. But I have no idea how to proceed. Is this an AWS configuration issue in how these docker compose logs are compressed? Is there a bug in there? Is the issue with Docker? I would be very glad if someone has any clues.
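In case anyone wants to check their own instances, here is a small sketch that measures the longest run of NUL bytes in a cached log. The filename matches the one from my instance above; the path will differ per instance.

```python
import gzip
import os
import re

def longest_nul_run(data: bytes) -> int:
    """Length of the longest consecutive run of NUL (0x00) bytes in data."""
    runs = re.findall(rb"\x00+", data)
    return max((len(r) for r in runs), default=0)

# Filename taken from my instance; adjust the path for yours.
path = "container-cached.log.2.gz"
if os.path.exists(path):
    with gzip.open(path, "rb") as f:
        print("longest NUL run:", longest_nul_run(f.read()), "bytes")
```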

No Answers
