Questions tagged with Containers
Hello,
After changing the machine type to a bigger one, some tasks on ECS were redeployed, since before the change they had too little memory to start. After that, the deployment was stuck in "in progress" status with both the old and the new tasks running and receiving traffic.

The fix was to force a redeploy of the service. After that, the tasks were back to normal.
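For reference, forcing a redeploy amounts to roughly the following API call (a sketch; the cluster and service names are placeholders):

```python
import boto3

# Force a new deployment so ECS replaces the stuck tasks.
# "my-cluster" and "my-service" are placeholder names.
ecs = boto3.client("ecs")
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    forceNewDeployment=True,
)
```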
# Expected behavior
I would expect ECS to scale properly and deploy the new version of the service without operator intervention.
Sometimes after I trigger a deployment of an ECS service, the new task set is stuck with 1 desired task and 0 pending. There are no new ECS events (the last one is "service xxx has reached a steady state."). Creating new deployments does not help; it just replaces the primary task set, which also gets stuck at 1 desired and 0 pending.

The service uses EC2 capacity providers and the ECS deployment controller. My settings are minimum 100% healthy, maximum 200%. There is 1 task running prior to the deployment. There are multiple container instances available, and the agent logs on the instances do not show anything unusual. CloudTrail does not show any failed calls for the ECS service.
Changing the desired count from 1 to 2 immediately creates 2 pending tasks.

Is there any extra information I can find? Is it possible that this is caused by a bug, since there is no trace in the ECS service events?
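For completeness, this is roughly how I inspect the deployments and service events (a sketch; cluster and service names are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# List the rolling deployments and recent service events.
# "my-cluster" and "my-service" are placeholder names.
resp = ecs.describe_services(cluster="my-cluster", services=["my-service"])
service = resp["services"][0]

for deployment in service["deployments"]:
    print(deployment["status"], deployment["desiredCount"],
          deployment["pendingCount"], deployment["runningCount"])

for event in service["events"][:5]:
    print(event["createdAt"], event["message"])
```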
I am trying to understand whether I can make the following scenario work with AWS Batch volumes:
I would like to use an EC2 instance that mounts a file system via fstab at /data, and then use a bind mount to make this file system available in my Batch container.
In an interactive session on the EC2 instance I can do this with `docker run -v /data:/data <container-name>`. I could not find this scenario in the bind-mounts section of the docs: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bind-mounts.html
All attempts I have tried lead to an empty folder being exposed inside the Docker container at Batch runtime. Is it currently possible to achieve this with Batch volumes, or do I need to mount directly inside the container?
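For reference, this is the kind of job definition I have been experimenting with (a boto3 sketch; the names, image URI, and resource sizes are placeholders):

```python
import boto3

batch = boto3.client("batch")

# Register a job definition that bind-mounts the host's /data
# directory into the container at /data. All names are placeholders.
batch.register_job_definition(
    jobDefinitionName="data-bind-mount-test",
    type="container",
    containerProperties={
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
        "resourceRequirements": [
            {"type": "VCPU", "value": "1"},
            {"type": "MEMORY", "value": "2048"},
        ],
        "volumes": [
            {"name": "data", "host": {"sourcePath": "/data"}},
        ],
        "mountPoints": [
            {"sourceVolume": "data", "containerPath": "/data", "readOnly": False},
        ],
    },
)
```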
I have an Angular and Spring Boot application that I'd like to deploy on EKS. Angular is served on port 80 with an Nginx image in front of it; Spring Boot is on port 8000. The container images are in ECR, and I want to use the Fargate compute class.
How do I go about deploying the application as a whole?
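To make the question concrete: I assume each tier needs its own Deployment and Service, along these lines for the frontend (a sketch using the kubernetes Python client; the image URI and names are made up, and the Spring Boot tier would get an analogous pair):

```python
from kubernetes import client, config

# Sketch: a Deployment for the Nginx/Angular image.
# Image URI, names, and namespace are placeholder assumptions.
config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="angular-frontend"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "angular-frontend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "angular-frontend"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="nginx",
                    image="123456789012.dkr.ecr.us-east-1.amazonaws.com/angular-app:latest",
                    ports=[client.V1ContainerPort(container_port=80)],
                ),
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```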
Hi,
I am trying to build a multi-node parallel job in AWS Batch running an R script. My R script independently runs a few statistical models for several users. Hence, I want to split this job and distribute it across a cluster of several servers to run in parallel for faster execution.
My question is about better understanding the architecture. My understanding is that at some point I have to prepare a containerized version of my R application code as a Docker image pushed to ECR. My questions are:
Should the parallel logic be placed inside the R code, while using the same image? If yes, how does Batch know how to split my job (into how many chunks)? Is a for-loop in the R code enough?
Or should I define the parallel logic somewhere in the Dockerfile, saying: container 1 runs the models for users 1-5, container 2 runs the models for users 6-10, etc.?
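For example, from what I have read about Batch array jobs, the split seems to be driven by an index that Batch injects into each child container; is something like this sketch the right direction (queue and job definition names are placeholders)?

```python
import boto3

# Submission side: an array job fans out N identical containers, each
# receiving its own AWS_BATCH_JOB_ARRAY_INDEX environment variable.
# Queue and job definition names are placeholders.
batch = boto3.client("batch")
batch.submit_job(
    jobName="r-models",
    jobQueue="my-job-queue",
    jobDefinition="r-models-jobdef",
    arrayProperties={"size": 4},  # 4 children, indices 0..3
)

# Inside each container, the R code (or a wrapper) would then read
# AWS_BATCH_JOB_ARRAY_INDEX and pick its chunk of users, e.g.
# index 0 -> users 1-5, index 1 -> users 6-10, and so on.
```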
Could you please share some ideas on this topic for a better understanding? Much appreciated.
Good day,
Yesterday from 22:00, there was an outage on our server system_eHelper, managed by Lightsail in Frankfurt with the static address ************: it suddenly stopped responding, and from the data we have, the RAM and hard disk are 90% full. Still, Lightsail itself worked the whole time.
At the mentioned time, 22:00 yesterday in the Czech Republic, the outage occurred and the Lightsail server stopped communicating.
We would like to ask you for information on how we should proceed now to get the system up and running again.
As soon as the system is operational, we will also increase the capacity of the Lightsail instance.
Since an emergency care system is running on this server, I would also like to ask you to activate HIGH AVAILABILITY mode, so that we are aware of all possible outages.
thank you for your response,
Ondřej Teichmann
Hi all. Seeing that EKS no longer supports Docker as of version 1.24, is it advisable to use cri-dockerd? How does one go about configuring it in EKS? And are there any other alternatives for this? Thanks in advance.
Hi Team,
I have created an AWS Batch job on an EKS cluster, and it succeeds. I'm looking for a logging tab on the job details console, like Fargate and ECS type jobs have, where I can just click to retrieve the logs right there, with no need to navigate to CloudWatch.
Because I have many jobs in Batch, it's hard to find the right logs by navigating to CloudWatch.
Did I miss any configuration for this? Any suggestions?
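Right now the only workaround I can think of is scripting the lookup, along these lines (a sketch; the log group name and stream prefix are assumptions that depend on how logging is set up for the cluster):

```python
import boto3

# Fetch recent log events for a Batch-on-EKS job from CloudWatch Logs.
# The log group and stream prefix below are placeholder assumptions.
logs = boto3.client("logs")
resp = logs.filter_log_events(
    logGroupName="/aws/eks/my-cluster/batch-jobs",
    logStreamNamePrefix="my-job",
    limit=50,
)
for event in resp["events"]:
    print(event["message"])
```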
When triggering a Batch job (Fargate job queue), the status goes to **FAILED** with the following error message:
> Cannotstartcontainererror: ResourceInitializationError: unable to create new container: mount callback failed on /tmp/containerd-mount3975084381: no users found
Unfortunately I can't find any similar errors online.
For reference, the Dockerfile that I'm building is simply the following:
```Docker
# Minimal image: install dependencies, copy the code, run the script.
FROM python:3.8-slim-buster
WORKDIR /app
USER root
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python3", "run.py"]
```
And the contents of run.py are as follows:
```python
print("Python script has run!")
```
The only other file in the image is requirements.txt, which contains just the line `requests`.
I temporarily gave the ECS runtime role full admin permissions to see if that would help, but the error still occurred (so I changed it back).
Any help here would be appreciated!
Hi,
Has anyone successfully run Greengrass V2 in a container on Windows? The guides say V1 is OK for Windows OS, but V2 only shows examples for Linux.
Thanks!
Hi,
I am currently deploying docker images using **Greengrass Core v2 (GGC)** to my edge devices. The docker images and GGC devices are located in the same account. This is working fine with the help of the `aws.greengrass.DockerApplicationManager` and `aws.greengrass.TokenExchangeService` components.
Now, I was wondering whether it is possible to **deploy or pull Docker images** from a **private ECR registry** in a **different AWS account** than the GGC device. I currently wouldn't know how and where to set the appropriate permissions to allow this.
As a workaround, I would otherwise consider the approach of [cross-account replication](https://docs.aws.amazon.com/AmazonECR/latest/userguide/replication.html). However, if there is a simpler way, I would be pleased to hear about it.
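For what it's worth, my best guess is that it would involve a cross-account repository policy on the ECR side, something like this sketch (the account ID, role name, and repository name are made up):

```python
import json
import boto3

# Sketch: allow a role in another account (e.g. the GGC device's token
# exchange role) to pull from this repository. The account ID, role
# name, and repository name are placeholder assumptions.
ecr = boto3.client("ecr")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CrossAccountPull",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/GreengrassV2TokenExchangeRole"},
        "Action": [
            "ecr:GetDownloadUrlForLayer",
            "ecr:BatchGetImage",
            "ecr:BatchCheckLayerAvailability",
        ],
    }],
}
ecr.set_repository_policy(
    repositoryName="my-greengrass-images",
    policyText=json.dumps(policy),
)
```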
Thanks in advance!
Hi all. We have recently started experiencing an increase in zombie pods when terminating them. Is anyone aware of the root cause of a pod becoming a zombie / getting stuck in the Terminating state? This is the error we keep getting:
> error killing pod: failed to "KillContainer" for "zombie-pod" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: cannot stop container: 803b8598080nbdkau8i0n2526be67302a3748dbcbe3066ad0fae55707d1: container 803b8598080 PID 14597 is zombie and can not be killed. Use the --init option when creating containers to run an init inside the container that forwards signals and reaps processes"
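As a side note, the `--init` option mentioned in the error is Docker's built-in init process; for a standalone container, the equivalent with the Docker Python SDK looks roughly like this (a sketch; the image and command are placeholders):

```python
import docker

# Run a container with Docker's built-in init (equivalent to
# `docker run --init`) so that PID 1 forwards signals and reaps
# zombie processes. Image and command are placeholders.
client = docker.from_env()
container = client.containers.run(
    "python:3.8-slim-buster",
    ["python3", "-c", "print('done')"],
    init=True,
    detach=True,
)
container.wait()
print(container.logs().decode())
```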