Amazon Lightsail - exec format error and deployment took too long


I am trying to deploy a container to Lightsail. I have created a Docker image using:

docker build -t ai .

and the result is:

[+] Building 7.3s (13/13) FINISHED                                                                                                                           docker:desktop-linux
 => [internal] load .dockerignore                                                                                                                                            0.0s
 => => transferring context: 2B                                                                                                                                              0.0s
 => [internal] load build definition from Dockerfile                                                                                                                         0.0s
 => => transferring dockerfile: 433B                                                                                                                                         0.0s
 => [internal] load metadata for docker.io/library/python:3.10.10-slim-bullseye                                                                                              2.3s
 => [1/8] FROM docker.io/library/python:3.10.10-slim-bullseye@sha256:7b0a5cefbcdd085faa21533c21549e55a7e66f5aed40f8d1f4de13a017e352cd                                        0.0s
 => [internal] load build context                                                                                                                                            3.1s
 => => transferring context: 667.93MB                                                                                                                                        3.0s
 => CACHED [2/8] WORKDIR /code                                                                                                                                               0.0s
 => CACHED [3/8] COPY ./requirements.txt /code/requirements.txt                                                                                                              0.0s
 => CACHED [4/8] RUN apt-get update && apt-get -y install libc-dev                                                                                                           0.0s
 => CACHED [5/8] RUN pip install --upgrade pip                                                                                                                               0.0s
 => CACHED [6/8] RUN pip install --upgrade setuptools wheel                                                                                                                  0.0s
 => CACHED [7/8] RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt                                                                                          0.0s
 => [8/8] COPY ./app /code/app                                                                                                                                               0.8s
 => exporting to image                                                                                                                                                       1.0s
 => => exporting layers                                                                                                                                                      1.0s
 => => writing image sha256:87d92c2843f40364ef692fd36aa02070f54dc8b6f004d425c7c80b7dfc75f988                                                                                 0.0s
 => => naming to docker.io/library/ai                                                                                                                                        0.0s


Then, I pushed the image to the container service using aws lightsail push-container-image. The result is:

8292008e02ca: Pushed 
96bb1f4fb5d1: Layer already exists 
0556ac4e037e: Layer already exists 
de8c54c64f9c: Layer already exists 
8ccbcafbdddf: Layer already exists 
449dd841118f: Layer already exists 
e2390171833c: Layer already exists 
521648fffa22: Layer already exists 
09b72fddb658: Layer already exists 
bc460326c33d: Layer already exists 
bc01256d1a39: Layer already exists 
3804935bde62: Layer already exists 
Digest: sha256:086436c108365075af880453bfe0d00bb761790e2f5ef6320c480493a4f99c6f
Image "ai" registered.
Refer to this image as ":x-container-service.ai.82" in deployments.

After that, I chose the related image in Lightsail and tried to deploy, but the deployment fails with the following logs from the container:

[6/Eyl/2023:08:04:41] [deployment:58] Creating your deployment
[6/Eyl/2023:08:05:43] exec /usr/local/bin/uvicorn: exec format error
[6/Eyl/2023:08:07:16] [deployment:58] Started 1 new node
[6/Eyl/2023:08:08:07] exec /usr/local/bin/uvicorn: exec format error
[6/Eyl/2023:08:09:18] [deployment:58] Started 1 new node
[6/Eyl/2023:08:09:20] [deployment:58] Took too long

Container capacity is Micro (1 GB RAM, 0.25 vCPUs) x 1 node. I have an ML model with a size of 667 MB. Can that be the problem? How can I use this model with Lightsail? Are there any other possible ways to deploy and use it in the cloud?

Thanks in advance!

aeslan
asked a year ago · 1251 views
1 Answer

The error message exec /usr/local/bin/uvicorn: exec format error almost always indicates a CPU architecture mismatch: the image was built for a different architecture than the one it runs on. The docker:desktop-linux builder in your log suggests you may be building on an Apple Silicon Mac, which produces linux/arm64 images by default, while Lightsail container services run linux/amd64.
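If that is the case, you can verify the image's architecture and rebuild it explicitly for amd64 before pushing again. This is a sketch; it assumes your Dockerfile is in the current directory and that your local tag is ai, as in your build log:

```shell
# Inspect the OS/architecture the existing image was built for;
# an arm64 result here would explain the exec format error on Lightsail
docker image inspect ai --format '{{.Os}}/{{.Architecture}}'

# Rebuild explicitly for linux/amd64, then push again with
# aws lightsail push-container-image
docker buildx build --platform linux/amd64 -t ai .
```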

However, given the size of your ML model and the capacity of your container, it's also possible that you're running out of memory. The model itself is 667MB, and the Micro instance type has 1GB of RAM. After accounting for the operating system and other processes, there may not be enough memory left for your application to run. This could be causing the deployment to fail.
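To check whether memory is the real bottleneck, you could measure the peak memory your app actually uses when loading the model locally. A minimal sketch (the load_model call is a placeholder for however you load your 667 MB model; note that ru_maxrss is reported in kilobytes on Linux but in bytes on macOS):

```python
import resource

# Load your model here, e.g.:
# model = load_model("model.bin")

# Peak resident set size of this process so far
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"Peak memory: {peak_kb / 1024:.1f} MB")
```

If the peak comfortably exceeds what a Micro node leaves available after the OS and runtime overhead, a larger instance or a smaller model is needed regardless of the architecture fix.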

Here are some suggestions to troubleshoot and resolve the issue:

  1. Check your Dockerfile: Make sure the command to start uvicorn is correct. It should be something like CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "80"]. Also, ensure that uvicorn is installed in your Docker image and that the path /usr/local/bin/uvicorn is correct.

  2. Increase the container capacity: If possible, try using a larger instance type for your container. This will give your application more memory to work with and could resolve the issue if it's being caused by a lack of memory.

  3. Optimize your ML model: If you can't increase the container size, you might need to optimize your model to reduce its memory footprint. This could involve using a smaller model, applying model pruning, or using a lower precision data type for your model weights.

  4. Use AWS Elastic Container Service (ECS) or Elastic Kubernetes Service (EKS): If you continue to have issues with AWS Lightsail, you might want to consider using AWS ECS or EKS. These services are more flexible and might be better suited to your needs, especially if you're working with large ML models.

  5. Use AWS SageMaker: If you're deploying a machine learning model, you might want to consider using AWS SageMaker. SageMaker is specifically designed for deploying ML models and provides a lot of tools and features that can make the process easier.
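For point 1, a Dockerfile consistent with the build log in the question might look like the following. This is a sketch: the module path app.main:app and port 80 are assumptions, and uvicorn must be listed in requirements.txt for the exec-form CMD to find it at /usr/local/bin/uvicorn:

```dockerfile
FROM python:3.10.10-slim-bullseye
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN apt-get update && apt-get -y install libc-dev
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
# Exec-form CMD so uvicorn is PID 1 and receives signals directly
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
```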

Remember to also check the logs for any other errors or warnings that might help you diagnose the issue. If you're still having trouble, you might want to reach out to AWS Support for assistance.

Yusuf
answered a year ago
