Questions tagged with Containers
We have an application that requires clients to maintain long-lasting socket connections, and we have code in place on the application side to gracefully handle a SIGTERM event. It depends on connections remaining established, but my observation is that the SIGTERM arrives after the deregistration delay has elapsed, by which point all active connections have already been killed.
Is there a mechanism by which the SIGTERM can be sent before connection draining starts, or some other signal that would tell us that the application instance is about to be terminated?
Hello team, I would like to execute kubectl commands from a CloudFormation template. Any idea how I can achieve this?
For instance:
```
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
```
I was looking at the CloudFormation resource type AWSQS::Kubernetes::Resource.
Note: I'm not looking for a Helm example. Any references would help, TIA.
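For reference, applying a raw manifest through that registry extension looks roughly like the sketch below. This is a guess at the shape: the property names follow the AWSQS::Kubernetes::Resource schema published by the AWS Quick Start project, and the cluster name is a placeholder, so verify everything against the extension's schema in the CloudFormation registry before using it.

```yaml
# Sketch only: assumes the AWSQS::Kubernetes::Resource extension is
# activated in the account/region and its execution role can reach the
# cluster. "my-eks-cluster" is a placeholder.
CertManagerNamespace:
  Type: "AWSQS::Kubernetes::Resource"
  Properties:
    ClusterName: my-eks-cluster
    Namespace: cert-manager
    Manifest: |
      apiVersion: v1
      kind: Namespace
      metadata:
        name: cert-manager
```

The extension has to be activated in the CloudFormation registry for the account and region before a stack can reference this type.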
So I have spent the past two weeks learning about ECS and trying to set up a very basic task (on EC2) with an nginx container and a PHP app in another container. It seems like I'm very close, but I'm now getting this connection refused error from nginx:
```
[error] 29#29: *9 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.16.34, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://172.17.0.4:8000", host: "##########:48152"
```
Here is my task definition from my CloudFormation template:
```
ContainerDefinitions:
  - Name: nginx
    Cpu: 10
    Essential: true
    Image: ###################
    Memory: 128
    MountPoints:
      - ContainerPath: /var/www
        SourceVolume: my-vol
    PortMappings:
      - ContainerPort: 80
    Links:
      - app
  - Name: app
    Cpu: 10
    Essential: true
    Image: #############
    Memory: 128
    MountPoints:
      - ContainerPath: /var/www
        SourceVolume: my-vol
    PortMappings:
      - ContainerPort: 8000
Volumes:
  - Name: my-vol
    DockerVolumeConfiguration:
      Scope: task
      Driver: local
```
My nginx Dockerfile:
```
FROM nginx:alpine
RUN apk update && apk add bash
COPY ./default.conf /etc/nginx/conf.d/default.conf
```
The config file:
```
server {
    listen 80;
    listen 443;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:8000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
```
And my app Dockerfile:
```
FROM php:8.2-fpm
...
EXPOSE 8000
```
So, why am I using bridge mode, you ask? Well, all of the examples I could find were using bridge mode, and I know that this is supposed to be like using networks in Docker, which I actually got working locally, so this looked like the simplest solution. Also, yes, I know that using Links is deprecated, but I couldn't find any recommended alternative.
I can see that nginx is able to resolve the app host to the IP address of the container, so I'm guessing the problem is on the PHP-FPM side, although in my app's logs I see `fpm is running` and `ready to handle connections`. I don't want to just go making changes whose consequences I don't fully understand, so if anyone could explain what's going on, that'd be great.
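One detail that may be worth checking, as an assumption about the stock image rather than something visible in the question: the official `php:*-fpm` images configure PHP-FPM to listen on port 9000, and `EXPOSE 8000` in a Dockerfile only documents a port; it does not change what the FPM pool binds to. Unless the pool config was overridden, the effective setting inside the app container is:

```
; Default listen setting shipped in the php:8.2-fpm image
; (EXPOSE in the Dockerfile has no effect on this value).
listen = 9000
```

If that is the case here, `fastcgi_pass app:8000;` would be refused even though FPM logs `ready to handle connections`, since nothing is listening on 8000.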
I want to know the major difference between EC2 and Fargate. Why is Fargate more expensive, given that it is charged by the minute? Which one is better to consider for running an application?
Hello Community,
As per the subject, I am getting this error from time to time (it is not reproducible) from an ECS Fargate task, and the container doesn't start.
These containers are launched programmatically with the following attribute:
```
...
ecsTaskConfig.overrides.ephemeralStorage = {
  sizeInGiB: 21
};
...
```
I tried to find a solution, but so far no luck. I thought about implementing a background job to check whether the initiated task started or not, but I'm looking for a better solution.
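As a sketch of the polling fallback mentioned above, the AWS CLI can report a launched task's status; the cluster name and task ARN below are placeholders:

```shell
# Check whether the started task reached RUNNING; lastStatus moves
# through PROVISIONING/PENDING/RUNNING/STOPPED, and stoppedReason
# explains failures. Cluster name and task ARN are placeholders.
aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks "arn:aws:ecs:eu-central-1:111111111111:task/my-cluster/0123456789abcdef" \
  --query 'tasks[].[lastStatus,stoppedReason]' \
  --output text
```

This requires live AWS credentials, so it is only a shape for the background job, not something runnable standalone.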
Any tip/guidance would be helpful.
Thanks,
Faiz
Based on https://docs.aws.amazon.com/eks/latest/userguide/deploy-collector-advanced-configuration.html
```
demo % aws eks create-addon \
--cluster-name observability \
--region us-west-2 \
--addon-name adot \
--addon-version v0.66.0-eksbuild.1 \
--configuration-values configuration-values.json
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: --configuration-values, configuration-values.json
```
The CLI is not recognizing `--configuration-values`.
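One cause worth ruling out, purely as an assumption since the CLI version isn't shown: `--configuration-values` is a relatively recent addition to `eks create-addon`, and an older installed AWS CLI rejects unknown flags with exactly this `Unknown options` message. Checking the installed version is the cheap first step:

```shell
# A CLI build that predates the flag fails with "Unknown options";
# print the installed version, then compare against the CLI changelog.
aws --version
```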
You must use a valid fully-formed launch template. The t2.small instance type does not support an AMI with a UEFI boot mode. Only instance types built on the Nitro System support UEFI. Specify an instance type that supports UEFI.
This is so weird. Would ECS require UEFI?
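For what it's worth, the error is about the AMI in the launch template rather than an ECS requirement: the AMI is marked as UEFI-boot, and t2 instances are Xen-based, while UEFI boot is only supported on Nitro-based instance types. One way to confirm which boot mode the AMI declares (the AMI ID is a placeholder):

```shell
# Print the AMI's declared boot mode; "uefi" means non-Nitro types
# such as t2.small cannot launch it. The AMI ID is a placeholder.
aws ec2 describe-images \
  --image-ids ami-0123456789abcdef0 \
  --query 'Images[].BootMode' \
  --output text
```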
Hello,
We created a cluster that is using EC2 instances. We are considering moving to Fargate, but have some questions:
1. Does ECS on Fargate unload the container if it is not used?
2. Does it take a long time to load up again? Example scenario: a web API that needs to be online constantly on Fargate and scale up.
Thank you,
Mary
I'm getting confused between these 2 setups. We have EC2 machines running with proper configurations, along with Auto Scaling, which takes care of scaling. Whereas with ECS or EKS, the cluster manages the EC2 machines. What is the major difference between these 2 setups?
What do we gain if we migrate from plain Auto-Scaling-managed EC2 machines to cluster-managed machines? I can run containers on Auto-Scaling-managed machines as well.
Hello, I have not used Postgres, VMs, or Docker before, but I would like to use this Docker image to clone the API and host it on a cloud VM. Someone said I should set up a Postgres database on a cloud VM, install Docker on it, and link in the image. Can anyone tell me if this is the correct way to go, and how exactly I can do that? I see web services and Postgres databases as options on cloud providers, but I'm not sure if I need one or both, or how to proceed. Also, what is the minimum plan size in GB I need to run this for testing? The code is from https://github.com/0xProject/0x-api#database and the image is here on Docker Hub: https://hub.docker.com/r/0xorg/0x-api Thanks for any insights or help with getting this running!!
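As one possible starting point, and strictly a sketch: the network name, credentials, database name, and exposed port below are all assumptions for illustration, and the environment variables the 0x-api container actually expects should be taken from its README. The general shape of "Postgres plus the image, linked together" on a single VM with Docker is:

```shell
# Create a shared network so the API container can reach Postgres by
# hostname; all names, credentials, and ports here are example values.
docker network create zeroex-net
docker run -d --name db --network zeroex-net \
  -e POSTGRES_USER=api -e POSTGRES_PASSWORD=api -e POSTGRES_DB=api \
  postgres:14
# The API image runs on the same network; consult the 0x-api README
# for the connection-string variable and port it actually uses.
docker run -d --name zeroex-api --network zeroex-net \
  -p 3000:3000 \
  0xorg/0x-api
```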
I set up three buckets as described in the AWS doc (example.com, www.example.com, and logs.example.com), substituting my domain name. I set up example.com as static hosting and copied the bucket policy as described. I then set up www.example.com as static hosting with a redirect. No matter what I enter for the endpoint here, it changes to www.example.com. I also tried this with dummy bucket names, with the same result. When I click on the endpoint in the www bucket, I get "page not found". This is for a school project, so I don't want to incur costs.
Hello aws re:Post
I want to run my pods (network-wise) in a different subnet, and for that I make use of the custom CNI config for the AWS CNI plugin, which already works like a charm.
Now I want to automate the whole process.
I have already managed to create the ENIConfig custom resources and deploy them automatically, but now I'm stuck on automating the node annotation. As I could not find any useful content while searching re:Post or the internet, I assume the solution is rather simple.
I assume the solution is somewhere in the Launch Template, User Data, or via `KUBELET_EXTRA_ARGS`, but I'm just guessing.
**The Question**
How can I provide annotations like mine (below) to the nodes on launch or after they joined the cluster automatically?
```
kubectl annotate node ip-111-222-111-222.eu-central-1.compute.internal k8s.amazonaws.com/eniConfig=eu-central-1c
```
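One documented alternative that avoids touching each node individually, worth verifying against the installed VPC CNI version: the plugin can select the ENIConfig from a node label instead of an annotation. Setting the environment variable below on the `aws-node` DaemonSet and naming each ENIConfig after its availability zone lets new nodes pick the right config automatically from their zone label, with no per-node step:

```yaml
# Env fragment for the aws-node DaemonSet (kube-system). With this set,
# a node in eu-central-1c uses the ENIConfig named "eu-central-1c" via
# its topology.kubernetes.io/zone label -- no kubectl annotate needed.
env:
  - name: ENI_CONFIG_LABEL_DEF
    value: topology.kubernetes.io/zone
```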