Questions tagged with Containers


Hello, I have deployed Greengrass as a Docker container from the official Docker Hub page: **amazon/aws-iot-greengrass:2.5.3-0**. I ran it and deployed the basic Greengrass components on it, and it runs fine; I also deployed the most recent IoT SiteWise components. However, the SiteWiseEdgeCollectorOpcua component logs this error when it starts:

```
OpcUaManagedConnection - {"message":"Failed to start OPC-UA Connection for Source (OPC-UA): Failed to obtain Secret from Greengrass IPC"}
```

I am fairly sure the rest of the IoT SiteWise deployment is correct, since it's quite basic and I have already deployed the same thing on a Linux-installed Greengrass, where it worked fine. I have tried to work out what this could be related to. It is not the permissions of the IAM role used by Greengrass, since that role currently has full permissions on my AWS account just for debugging this. To avoid networking issues I also run the container with the option `--network host` so that it can open all the local ports and connections it needs. My only suspicion right now is that the Docker version of Greengrass has not been updated in a year. Is it supposed to work? Maybe it is not compatible with the latest Greengrass components or the most recent SiteWise ones? Does anyone know the solution to this? Let me know if I need to provide more information. Thank you very much in advance for your support!
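One way to narrow this down is to check whether secrets are reachable over Greengrass IPC at all, independently of SiteWise. Below is a minimal diagnostic sketch, assuming the `awsiotsdk` Python package and a placeholder secret ARN that has already been added to the `aws.greengrass.SecretManager` component configuration and authorized for the calling component:

```python
# Minimal sketch: fetch a secret over Greengrass IPC from inside a custom component.
# SECRET_ARN is a placeholder; the component's access control policy must allow
# aws.greengrass.secretmanager:GetSecretValue on this secret for the call to work.
from awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2

SECRET_ARN = "arn:aws:secretsmanager:REGION:ACCOUNT:secret:NAME"

ipc_client = GreengrassCoreIPCClientV2()
response = ipc_client.get_secret_value(secret_id=SECRET_ARN)
print(response.secret_value.secret_string)
```

If this fails as well, the issue is likely in the secret manager / token exchange setup on the core device rather than in the SiteWise components or the Docker image version.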
0 answers · 0 votes · 33 views · asked a month ago
I have a container image that I built following the guidelines on how to create custom Docker containers for Lambda. It's definitely hooked up correctly, because I'm getting logs from the runs. The situation is that I have a Windows .exe binary file (provided by an unresponsive vendor) that I need to run, and Lambda sounded like a great tool for this. I installed Wine in the custom container, great. But every time the function fires, I look at that part of the CloudWatch logs and it's clear that Wine has crashed with a core dump in the `/tmp` directory. I have tried:

- just having Wine print its version and exit (same issue, so it happens on running the program at all),
- using different versions of Wine,
- using different `WINEPREFIX` and `WINEARCH` environment variables,
- building Wine from source myself as part of the Docker container build process, and
- posting in the Wine forums for help.

Questions:

- Is there any way to get one of those core dumps out?
- Could this be because all directories other than `/tmp` are read-only to the container? How would I figure this out?
- Is there any other way to run an .exe in Lambda?

Thanks in advance. I know this is a niche problem. See [this other question](https://repost.aws/questions/QUx5YSwOhxQNu6j7bwLtybCQ/segmentation-fault-when-running-wine-on-aws-lambda-container) for a similar unsolved question. It may be the same issue, but it is from months ago and has no activity.
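Since `/tmp` is the only writable path in a Lambda container, one thing worth checking is whether Wine is trying to write its prefix, cache, or `$HOME` somewhere read-only. A minimal handler sketch, assuming Wine and the vendor binary are already baked into the image at the (placeholder) paths shown:

```python
# Sketch of a Lambda handler that runs a Windows binary through Wine, forcing
# every writable location into /tmp (the only writable path in Lambda).
# /opt/vendor/tool.exe and the environment values are placeholders.
import os
import subprocess

def handler(event, context):
    env = dict(os.environ)
    env.update({
        "HOME": "/tmp",                   # Wine writes user config under $HOME
        "WINEPREFIX": "/tmp/wineprefix",  # keep the prefix on writable storage
        "XDG_CACHE_HOME": "/tmp/.cache",
        "TMPDIR": "/tmp",
    })
    result = subprocess.run(
        ["wine", "/opt/vendor/tool.exe"],
        env=env,
        capture_output=True,
        text=True,
        timeout=60,
    )
    # Surface stdout/stderr in CloudWatch so a crash leaves more than a core dump.
    print(result.stdout)
    print(result.stderr)
    return {"returncode": result.returncode}
```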
1 answer · 0 votes · 25 views · etdr · asked a month ago
Greetings, I have a Docker image of size ~8 GB. When creating a Task Definition I am prompted to fill in the Task size (CPU and Memory) and, optionally, the Container size (CPU and Memory). Even though I've read the info sections, the difference between the two is not clear to me. Should I set both the Container size and the Task size to 8 GB? If both sizes are less than the image size, will the container fail to run?
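For reference, the two settings live at different levels of the task definition: the task-level `cpu`/`memory` cap the whole task, while the per-container `memory`/`memoryReservation` are hard/soft limits for that one container and must fit inside the task-level value. These limits govern runtime RAM, not the image size on disk. A minimal boto3 sketch with illustrative names and numbers only (execution role and other required runtime settings omitted):

```python
# Illustrative sketch of where task-level vs container-level sizes go in an ECS
# task definition. Family name, image URI, and numbers are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="example-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",        # task-level: 1 vCPU for the whole task
    memory="8192",     # task-level: 8 GB shared by all containers in the task
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/example:latest",
            "memory": 8192,             # container hard limit (must fit in task memory)
            "memoryReservation": 4096,  # container soft limit
            "essential": True,
        }
    ],
)
```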
1 answer · 0 votes · 24 views · apssg · asked a month ago
Hi, we've been running an app in a container based on the tomcat:9.0-jdk8-corretto image. It has been running fine under Fargate with platform 1.3, but when we changed to platform 1.4, the app started having issues. The application in the container provides JWT tokens for authentication and access purposes. I notice in the Tomcat logs at startup under 1.4 that we get a message we don't see with 1.3: `INFO com.mchange.v2.uid.UidUtils - Failed to get local InetAddress for VMID. This is unlikely to matter. At all. We'll add some extra randomness java.net.UnknownHostException: bc7a746844e64dfd95a60014xxxxxxxx-yyyyyyyy: bc7a746844e64dfd95a60014xxxxxxxx-yyyyyyyy: Name or service not known` When we try to obtain a token, we see this in the application log: ``` Unexpected error reading request java.lang.NoSuchMethodError: io.jsonwebtoken.SignatureAlgorithm.assertValidSigningKey(Ljava/security/Key;)V ``` If I switch the exact same container back to 1.3, it works as expected. I've been reading up on the differences between platforms 1.3 and 1.4, but nothing is jumping out as a reason for these issues. Curious whether anyone else has run into something similar and whether there are ideas for what to try.
2 answers · 0 votes · 35 views · Greg · asked a month ago
Dear Community, please imagine the following scenario:

* I have multiple long-running computation tasks. I'm planning to package them as container images and use ECS Tasks to run them.
* I'm planning to have a serverless part for administrating the tasks.

Once a computation task starts, it takes its input data from an SQS queue and can begin its computation. All results also end up in an SQS queue for storage. So far, so good. Now the tricky bit: the computation task needs some human input in the middle of its computation, based on intermediate results. Simplified, the task says "I have the intermediate result of 42, should I resume with route A or route B?". Saving the state and resuming in a different container (based on A or B) is not an option; it just takes too long. Instead I would like to have a serverless input form, which sends the human input (A or B) to this specific container. What is the best way of doing this? My idea so far: each container creates its own SQS queue and includes the URL in its intermediate-result message. But this might result in many queues, and potentially abandoned queues should a container crash. There must be a better way to communicate with a single container. I have seen ECS Exec, but this seems more built for debugging purposes.
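To make the per-container reply-queue idea concrete, here is a minimal sketch of what the task side could look like, assuming boto3 and placeholder queue names; cleanup after a crash would still need something extra (e.g. a queue naming convention plus a periodic sweep):

```python
# Sketch of the "one reply queue per task" pattern described above.
# Queue names and the results queue URL are placeholders.
import json
import uuid
import boto3

sqs = boto3.client("sqs")
RESULTS_QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/results"

def ask_human(intermediate_result):
    # Create a short-lived reply queue unique to this task instance.
    reply_queue_url = sqs.create_queue(
        QueueName=f"human-input-{uuid.uuid4()}"
    )["QueueUrl"]
    try:
        # Publish the intermediate result together with the reply address.
        sqs.send_message(
            QueueUrl=RESULTS_QUEUE_URL,
            MessageBody=json.dumps({
                "intermediate_result": intermediate_result,
                "reply_queue_url": reply_queue_url,
            }),
        )
        # Poll until the serverless form posts the decision (A or B).
        while True:
            messages = sqs.receive_message(
                QueueUrl=reply_queue_url, WaitTimeSeconds=20, MaxNumberOfMessages=1
            ).get("Messages", [])
            if messages:
                return json.loads(messages[0]["Body"])["route"]
    finally:
        sqs.delete_queue(QueueUrl=reply_queue_url)
```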
3 answers · 0 votes · 41 views · stefan · asked a month ago
I built my Docker Django app with Django-cookiecutter (it creates a bootstrapped application that is ready for production). The application works fine on my local machine with no errors. I pushed the production images to AWS ECR and used `docker context ecs` to deploy the application. The problem I am facing is that it starts creating all the instances and, after a while, starts deleting them again. I can't figure out where the problem is coming from. I am a beginner at this and would appreciate any assistance. This is what my YAML file looks like:

```
version: '3'

volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  production_traefik: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: public.ecr.aws/t6g1j7b6/image_converter:django
    platform: linux/x86_64
    depends_on:
      - postgres
      - redis
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start
    networks:
      - proxy
      - default

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: public.ecr.aws/t6g1j7b6/image_converter:postgres
    volumes:
      - production_postgres_data:/var/lib/postgresql/data:Z
      - production_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.production/.postgres

  traefik:
    build:
      context: .
      dockerfile: ./compose/production/traefik/Dockerfile
    image: public.ecr.aws/t6g1j7b6/image_converter:traefik
    depends_on:
      - django
    volumes:
      - production_traefik:/etc/traefik/acme
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
      - "0.0.0.0:5555:5555"

  redis:
    image: public.ecr.aws/t6g1j7b6/image_converter:redis

  celeryworker:
    <<: *django
    image: public.ecr.aws/t6g1j7b6/image_converter:celeryworker
    command: /start-celeryworker

  celerybeat:
    <<: *django
    image: public.ecr.aws/t6g1j7b6/image_converter:celerybeat
    command: /start-celerybeat

  flower:
    <<: *django
    image: public.ecr.aws/t6g1j7b6/image_converter:flower
    command: /start-flower

networks:
  proxy:
```
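Since the ECS integration of `docker compose` drives a CloudFormation stack under the hood, a create-then-delete cycle usually means the stack is rolling back, and the failure reason is recorded in the stack events. A small boto3 sketch to pull them out (the stack name is a placeholder; it normally matches the Compose project name):

```python
# Sketch: list failed CloudFormation stack events to find the rollback reason.
# The stack name is a placeholder.
import boto3

cf = boto3.client("cloudformation")
events = cf.describe_stack_events(StackName="image_converter")["StackEvents"]
for event in events:
    if "FAILED" in event.get("ResourceStatus", ""):
        print(event["LogicalResourceId"], event.get("ResourceStatusReason"))
```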
1 answer · 0 votes · 68 views · asked a month ago
My question is simple: is it possible to call a Lambda function from an ECS Fargate task in a private subnet? If it is possible, how can I achieve that? Thank you.
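Calling Lambda from Fargate is an ordinary SDK call; the private-subnet part is purely networking, since the task needs a route to the Lambda API (a NAT gateway or a VPC interface endpoint for Lambda). A minimal boto3 sketch with a placeholder function name and payload:

```python
# Sketch: invoke a Lambda function from code running in an ECS Fargate task.
# The task role needs lambda:InvokeFunction, and the private subnet needs a route
# to the Lambda API (NAT gateway or a VPC interface endpoint for Lambda).
import json
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="my-function",           # placeholder name
    InvocationType="RequestResponse",     # or "Event" for async fire-and-forget
    Payload=json.dumps({"hello": "from-fargate"}),
)
print(json.loads(response["Payload"].read()))
```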
1 answer · 0 votes · 40 views · asked a month ago
Dear Support, we are using App Runner to run our Dockerized Spring Boot app and want to expose the particular port on which the service responds. However, the domain exposed by **App Runner** does not include the port that has been configured, which means it works on the default secure port, i.e. **443**. Is this a **bug**? And does this mean the App Runner service is secured even without the **basic authentication** set up? Please kindly clarify. Thanks and regards, Thulsi Doss Krishnan
2 answers · 0 votes · 65 views · asked a month ago
I have tried creating an ECS cluster using both Fargate and EC2 to run one of my prebuilt containers from ECR. I have done this using both the console and Terraform code based on this blog: https://medium.com/avmconsulting-blog/how-to-deploy-a-dockerised-node-js-application-on-aws-ecs-with-terraform-3e6bceb48785, but I still face target deregistration / unhealthy targets. For reference, my container listens on port 8000. Right now the container is hosted on EC2, but I'd like to use ECS for scalability. Is there a simple and foolproof guide I can follow for this?
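Unhealthy-target loops on ECS are very often a mismatch between the container port, the target group's port and health-check path, and the security group rules. As a checklist in code form, here is a hedged boto3 sketch of a target group lined up with a container listening on 8000 (names, VPC ID, and the `/health` path are placeholders for whatever the app actually serves):

```python
# Sketch: a target group whose port and health check line up with a container
# listening on port 8000. VPC ID, names, and the /health path are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_target_group(
    Name="app-8000",
    Protocol="HTTP",
    Port=8000,                       # must match the container port ECS registers
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",                 # "ip" for awsvpc/Fargate, "instance" for classic EC2 mode
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",       # must return 2xx from the app, or targets keep deregistering
    HealthCheckPort="traffic-port",
)
```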
1 answer · 0 votes · 20 views · asked a month ago
I am trying to deploy a Flask application to Elastic Beanstalk via Docker. When I follow the tutorials and create the application with `eb create dev-api-container`, I get an error: `ERROR: ServiceError - Configuration validation exception: Unknown or duplicate parameter: WSGIPath` I have found some information about an .ebextensions file where I should set this, but I am confused: if this is a Docker container, does Elastic Beanstalk even know this is a Python app? I am stumped on how to set or clear this flag. If I run the container locally, it runs as I would expect. I have used a default configuration and am trying to deploy this container into us-east-1. I have not configured WSGIPath for this application.
1 answer · 0 votes · 23 views · asked a month ago
Hi everyone! I've been working with a Fargate app recently and I've managed to make everything work (LB, listeners, routing, CNAME with Route 53, HTTPS and HTTP protocols), but for one thing.

The app itself is just a website, bundled into a Docker image that the ECS task deploys. It does not generally have any task running; it starts the task once a call to the website is received. If it receives a call while the task is not running, the user gets a 503 response from the website. As soon as that happens, the task spawns and takes about 5 to 7 seconds to actually get up and running; the website does respond after that and the target registers. After 15 seconds or so, it deregisters again (and again, anyone who comes to the website after that receives that 503). So the loop begins again.

There's this other Fargate app, done by John Doe, with the same logic, that keeps the task alive for longer once it's up and running (so far I've tested it with pings every 3 minutes and it keeps responding; still testing, and from the logs I realise it doesn't go down as soon as the response is given, it reaches an *idle* state whereas mine doesn't). The issue is that the task I've defined in the task definition takes about 5 to 7 seconds to spawn and the website shows a 503 response until it does. But once the task is up and running, the website responds correctly and shows the page I want it to show. The other app has the exact same configuration regarding idle timeout of the LB, target groups, inbound/outbound rules, scaling rules, and so on.

I don't think the problem is in the Load Balancer, nor in its target groups, because they are configured correctly and the targets do register. The only difference I've noticed is the behaviour of the task itself: the other one, once up and running, won't deregister after 15-30 seconds while mine does. I need to know how to make that "task lifecycle" longer, i.e. the time the actual task lives. I've read somewhere that tasks can actually run for as long as we want. So my questions are: How do I do that? How do I set up the "idle" time of a task in a Fargate app before it goes down again? If the task is failing health checks, how do I troubleshoot the task's health checks without going through the target group? Could it be an issue with ECS somehow deciding that the task is failing health checks and taking it down again? If so, is there a way of telling ECS to keep it alive? Thank you in advance!
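If the goal is simply to keep at least one task alive instead of letting the service scale to zero, the knob involved is the ECS service's desired count, plus the minimum capacity of any auto-scaling policy attached to it. A hedged boto3 sketch with placeholder cluster and service names:

```python
# Sketch: keep at least one task running by pinning the service's desired count.
# Cluster and service names are placeholders; if auto scaling is attached, its
# minimum capacity has to be raised too, or it will scale the service back down.
import boto3

ecs = boto3.client("ecs")
ecs.update_service(
    cluster="my-cluster",
    service="my-website-service",
    desiredCount=1,
)
```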
0 answers · 0 votes · 19 views · asked a month ago
Hi, we have a Docker image stored in ECR which we launch as an ECS task. Up until last Friday everything was going fine. Since Friday afternoon no deployments will successfully spin up, even if we redeploy a previously working image. No code that relates to Docker has been changed at all. When I build the image locally it works. I am unable to ssh into the container, as it starts up, fails to launch, then shuts down. Has something in ECS changed? The only error we get is "runc create failed: args must not be empty". This suggests an error in the Dockerfile, but it's a very basic file:

```
FROM node:18
RUN ["mkdir", "-p", "/app"]
WORKDIR "/app"
ENV "NODE_ENV" "production"
ENV "PORT" 4000
EXPOSE 4000
RUN ["npm", "install", "--global", "npm@8.x.x"]
COPY ["package.json", "package-lock.json", "/app/"]
RUN ["npm", "install", "--only", "production"]
COPY ["dist", "/app/dist/"]
ENTRYPOINT ["node", "--enable-source-maps", "--trace-deprecation", "--trace-warnings", "./dist/index.js"]
```

Thanks for any help.
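One way to narrow down "args must not be empty" is to confirm that the container ECS launches still ends up with something to run: the Dockerfile clearly sets an ENTRYPOINT, so it is worth checking whether the task definition overrides `entryPoint` or `command` with an empty list. A hedged boto3 sketch (the task definition family name is a placeholder):

```python
# Sketch: inspect the registered task definition for empty command/entryPoint
# overrides, which would leave the container with nothing to execute.
import boto3

ecs = boto3.client("ecs")
task_def = ecs.describe_task_definition(taskDefinition="my-task-family")["taskDefinition"]

for container in task_def["containerDefinitions"]:
    print(container["name"])
    print("  entryPoint:", container.get("entryPoint"))  # None means "use the image's ENTRYPOINT"
    print("  command:", container.get("command"))        # an empty list here is suspicious
```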
1 answer · 0 votes · 38 views · asked 2 months ago