How to influence deploy behaviour in single-instance single-container env



The scenario is that we have an app (YouTrack) running in a single Docker container on a single-instance EB environment. The app keeps its state on a mounted EFS filesystem. When we try to upgrade to the next version of the app, the deploy fails with an exit code of 1 from the container and a "YouTrack is already running" error message.

It appears that:

  • EB uploads the new container to the instance and attempts to start it.
  • While starting up, the new version finds that some of the filesystem resources are in use by the still-running old container.
  • It quits with the above error and the deploy fails.

I've read somewhere (but am not 100% sure) that this is expected behaviour: the old container is left running while the new one is started, and then the two are swapped.

How can I prevent that happening? Is there something about the deploy configuration I can change that will cause EB to stop the old container, wait for it to exit, and only then start the new one?

(I have tried ssh'ing into the instance and killing the old container manually, but something, apparently outside Docker itself, instantly restarts it every time.)
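For reference, the restart loop is visible from the instance itself. A rough sketch (the container ID is a placeholder; the eb-docker service name is what the answer below also refers to, but treat it as an assumption for your platform version):

```shell
# On the EB instance: the app container is supervised by the eb-docker
# service, so killing it directly only triggers a restart.
sudo docker ps                    # note the app container's ID
sudo docker kill <container-id>   # the container exits...
sudo docker ps                    # ...and a replacement appears moments later
```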

asked 4 years ago · 223 views
1 Answer

Solved the issue. The key was to run "sudo stop eb-docker" on the EB instance before the deploy: that stopped the old container, prevented the locking issue, and the deploy then went through cleanly.
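If you want to avoid the manual SSH step on every deploy, the same command can in principle be baked into the deploy configuration. A minimal sketch, assuming the upstart-based Amazon Linux platform (where "stop eb-docker" works) and that .ebextensions commands run early enough in the deploy, while the old container is still the one running — the file name and command key are illustrative:

```yaml
# .ebextensions/00-stop-old-container.config (hypothetical file name)
commands:
  01_stop_old_container:
    # Stop the supervised app container so the new version can claim the
    # EFS-backed resources; ignore the error if nothing is running yet.
    command: stop eb-docker
    ignoreErrors: true
```

Verify the timing on your platform version before relying on this; on newer Amazon Linux 2 platforms the service management differs and platform hooks may be the appropriate mechanism instead.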

answered 4 years ago
