Questions tagged with Containers
For EC2 there are clear explanations about network bandwidth for different instances. What about ECS Fargate?
So far I've only managed to find this article with some benchmarks: https://www.stormforge.io/blog/aws-fargate-network-performance/
What is the guaranteed and maximum network bandwidth for Fargate tasks? Does it depend on the number of vCPUs and the amount of memory?
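In the absence of official numbers, one way to get a data point is to measure between two Fargate tasks in the same VPC with iperf3. A rough sketch, assuming iperf3 is installed in the task image, the tasks can reach each other's private IPs, and the security group allows port 5201:
```
# On the "server" task:
iperf3 -s

# On the "client" task, against the server task's private IP (placeholder address):
iperf3 -c 10.0.1.23 -P 4 -t 30
```
Running several parallel streams (-P) for 30 seconds (-t) tends to give a steadier read than a single stream.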
Is it possible to extend an EKS cluster (on EC2) with on-prem nodes?
The on-prem nodes would ideally be connected securely to the VPC to avoid going over public internet.
The motivation behind this is to utilize existing servers on-prem for some of the workload, and during peak hours extend the capabilities of the cluster via autoscaling EKS on-demand.
Ideally everything would be centrally managed under AWS, therefore some EKS nodes would always be active for the control plane, data redundancy, etc.
In researching this topic so far I've only found resources on EKS via AWS Outposts, EKS Anywhere, joining federated clusters, etc. -- but it seems these solutions involve managing our own infrastructure, losing the benefits of fully-managed EKS on AWS. I can't find any information about extending AWS-managed EKS clusters with on-prem hardware (effectively allowing AWS to take ownership of the node/system and integrate it into the cluster). Has anyone accomplished this, or is it not viable/supported? I appreciate any feedback, thanks!
Hi. Is it possible to set up routing rules for pods in EKS using standard mesh plugins? I’m not able to install plugins like Calico.
I know how to release a host.
I am not trying to release a host.
I am trying to DELETE a host.
Why can't I DELETE a host? Are they designed to just stack up and accumulate ad nauseam or is there a way to get rid of them?
https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#Hosts:
I read that I can remotely debug an application in a Docker container by starting the container like this:
```
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -it <image_name>
```
However, I don't think I can run a Docker component with the `-it` (interactive) flag.
Without the `-it` flag, if I try to attach to a running process in the container, I receive the following error:
```
Unable to start debugging. Attaching to process 29966 with GDB failed because of insufficient privileges with error message 'ptrace: Operation not permitted.'.
```
How does anyone else debug inside a Greengrass container?
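For what it's worth, the `-it` flag only allocates an interactive TTY; the ptrace permission comes from `--cap-add` and `--security-opt`. A sketch of the same command run detached instead of interactively (image name is a placeholder, as in the original):
```
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -d <image_name>
```
If the container is launched by a Greengrass component, those two flags would have to be added to whatever `docker run` line the component's lifecycle executes.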
I am new to Lightsail and I'm trying to debug a failed deployment, and I'm at the point of shooting in the dark. Any ideas would be appreciated!
I have two images: a Flask/Gunicorn image built from Python-alpine and an Nginx image. Locally, I can spin them up with `docker-compose` and they work beautifully.
But in Lightsail, all I know is that my Flask image "took too long":
```
[17/Mar/2023:24:11:33] [deployment:14] Creating your deployment
[17/Mar/2023:24:13:05] [deployment:14] Started 1 new node
[17/Mar/2023:24:14:39] [deployment:14] Started 1 new node
[17/Mar/2023:24:15:54] [deployment:14] Started 1 new node
[17/Mar/2023:24:16:14] [deployment:14] Took too long
```
Things I've tried that haven't worked:
From https://repost.aws/questions/QUrqo_fzNTQ5i1E08tT1uM7g/lightsail-container-took-too-long-to-deploy-all-of-a-sudden-nothing-in-logs:
- Set Gunicorn's logging to DEBUG. Sometimes I can see the Gunicorn process being killed by SIGTERM, but the "too long" part above has no additional information.
- Set Health Check to 300 seconds in case that was the source of the SIGTERM. No effect.
- Increase capacity from "nano" to "micro" to "small". No effect.
From https://repost.aws/questions/QU8i3bF2BZQZiwKfxGw5CfgQ/how-to-deploy-amazon-linux-on-a-lightsail-container-service:
- Made sure I pasted my launch command into the appropriate "launch command" form input. No effect.
Perhaps I missed something obvious.
**Update**:
I have Nginx configured to proxy requests to gunicorn and to serve static content. Below are Dockerfiles and docker-compose:
Flask/Gunicorn Dockerfile:
```
FROM python:3.10-alpine
ENV POETRY_VERSION=1.2.2 \
POETRY_VIRTUALENVS_IN_PROJECT=true \
PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1
RUN apk add --no-cache curl \
&& curl -sSL https://install.python-poetry.org | POETRY_VERSION=$POETRY_VERSION python3 -
WORKDIR /src
# TODO: build wheel for pipeline
COPY . .
RUN /root/.local/bin/poetry install --only main
CMD . /src/.venv/bin/activate && gunicorn -w 2 --log-level debug --bind=0.0.0.0:8080 'app:app'
```
Nginx Dockerfile:
```
FROM nginx:alpine
COPY ./nginx.conf /etc/nginx/nginx.conf
```
docker-compose.yml:
```
version: "3.3"
services:
web:
image: myDockerHub/myImage
restart: always
volumes:
- static_volume:/src/my_project/static
ports:
- "8080:80"
nginx:
image: myDockerHub/nginx
restart: always
volumes:
- static_volume:/src/my_project/static
depends_on:
- web
ports:
- "80:80"
volumes:
static_volume:
```
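Two Lightsail-specific caveats that may matter here: Lightsail container services don't read docker-compose files, and, as far as I know, they don't support named volumes, so the shared `static_volume` won't exist there. Both images go into a single deployment, with the public endpoint (and its health check) pointed at one container. A sketch using the CLI, with the service name and health-check path as assumptions:
```
aws lightsail create-container-service-deployment \
  --service-name my-flask-service \
  --containers '{
    "web":   { "image": "myDockerHub/myImage", "ports": { "8080": "HTTP" } },
    "nginx": { "image": "myDockerHub/nginx",   "ports": { "80": "HTTP" } }
  }' \
  --public-endpoint '{
    "containerName": "nginx",
    "containerPort": 80,
    "healthCheck": { "path": "/", "timeoutSeconds": 60 }
  }'
```
Note that containers within one Lightsail deployment reach each other over localhost, so the nginx upstream would point at `localhost:8080` rather than the compose service name `web`. A "Took too long" result is often the health check never getting a 200 from the endpoint container.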
Hey there,
I've been trying to send the logs of my IoT Greengrass core device to CloudWatch following [this guide](https://docs.aws.amazon.com/greengrass/v2/developerguide/monitor-logs.html#enable-cloudwatch-logs). It says to create a deployment and insert this:
```
{
"logsUploaderConfiguration": {
"systemLogsConfiguration": {
"uploadToCloudWatch": "true"
}
}
}
```
and this, accordingly:
```
{
"logsUploaderConfiguration": {
"componentLogsConfigurationMap": {
"com.example.HelloWorld": {
}
}
}
}
```
I just don't get how and where I have to insert those; it's really not explained in much detail. My last attempt was to create a deployment.json file that looked as follows and push it to AWS through the CLI command `aws greengrassv2 create-deployment --cli-input-json file://deployment.json`, but that doesn't work:
```
{
"targetArn": "arn:aws:iot:<myregion>:<mynumber>:thinggroup/groupname",
"deploymentName": "test_deployments",
"components": {
"aws.greengrass.Cloudwatch": {
"componentVersion": "3.1.0"
},
"aws.greengrass.LogManager": {
"componentVersion": "2.3.1",
"logsUploaderConfiguration": {
"systemLogsConfiguration": {
"uploadToCloudWatch": "true"
},
"componentLogsConfigurationMap":{
"com.example.MyPrivateDockerComponent": {
}
}
}
},
"com.example.MyPrivateDockerComponent": {
"componentVersion": "1.1.5",
"runWith": {}
}
},
"deploymentPolicies": {
"failureHandlingPolicy": "ROLLBACK",
"componentUpdatePolicy": {
"timeoutInSeconds": 60,
"action": "NOTIFY_COMPONENTS"
}
},
"iotJobConfiguration": {
"jobExecutionsRolloutConfig": {
"maximumPerMinute": 1000
}
},
"tags": {}
}
```
Can anyone tell me where to place the "logsUploaderConfiguration" so that I can update my deployment to log to CloudWatch? Also, is this somehow possible through the AWS console in addition to the CLI?
Thanks a lot for your help!
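For reference, in a `create-deployment` payload component configuration doesn't sit directly in the component entry; it goes under a `configurationUpdate` key, whose `merge` value is the configuration JSON serialized as a string. A sketch of what the LogManager entry above might look like instead (component version copied from the question):
```
"aws.greengrass.LogManager": {
  "componentVersion": "2.3.1",
  "configurationUpdate": {
    "merge": "{\"logsUploaderConfiguration\":{\"systemLogsConfiguration\":{\"uploadToCloudWatch\":\"true\"},\"componentLogsConfigurationMap\":{\"com.example.MyPrivateDockerComponent\":{}}}}"
  }
}
```
The console appears to offer the same thing: the deployment wizard has a step for configuring selected components where the merge JSON can be pasted.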
I am trying to deploy SonarQube on ECS but have been receiving an "AccessDeniedException" on my /opt/sonarqube/data/es7 directory. When run locally on my Cloud9 instance the container works, and a directory listing shows everything as owned by the sonarqube user. When I deploy to ECS, however, I get the permission denied error and some directories appear to have been switched back to root ownership.
When deploying without the bind-mounted volumes the container will deploy, but I cannot see the environment variables, which are needed for the RDS connection. I'm not sure if this is related.
What am I missing?
Troubleshooting:
* Tried chmoding directories via the Command section in ContainerDefinitions, and received "permission denied" on the chmod command. I've also tried setting "Privileged" to "True", to no effect.
* Modified the SonarQube Dockerfile according to the bind-mount documentation: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bind-mounts.html. I may not have done this properly, so I've included the Dockerfile below.
* Removing the bind mounts allows the container to deploy, but it loses its state when the task restarts and will not import the environment variables.
My Dockerfile and JSON task definition are below. The Dockerfile was pulled from the SonarQube GitHub repo and modified (the stock file does not work on ECS).
Dockerfile:
```
FROM public.ecr.aws/amazonlinux/amazonlinux:latest
RUN yum install -y shadow-utils && yum clean all
RUN useradd sonarqube
RUN mkdir -p /opt/sonarqube/data && chown sonarqube:sonarqube /opt/sonarqube/data
RUN mkdir -p /opt/sonarqube/extensions && chown sonarqube:sonarqube /opt/sonarqube/extensions
USER sonarqube
VOLUME ["/opt/sonarqube/data"]
VOLUME ["/opt/sonarqube/extensions"]
FROM eclipse-temurin:17-jre
LABEL org.opencontainers.image.url=https://github.com/SonarSource/sonar-scanner-cli-docker
RUN set eux; \
groupadd --system --gid 1000 sonarqube; \
useradd --system --uid 1000 --gid sonarqube sonarqube;
#mkdir -p /opt/sonarqube/data && chown sonarqube:sonarqube /opt/sonarqube/data; \
#mkdir -p /opt/sonarqube/extensions && chown sonarqube:sonarqube /opt/sonarqube/extensions; \
#mkdir -p /opt/sonarqube/lib chown sonarqube:sonarqub/opt/sonarqube/lib; \
#mkdir -p /opt/sonarqube/logs && chown sonarqube:sonarqube /opt/sonarqube/logs; \
#mkdir -p /opt/sonarqube/temp && chown sonarqube:sonarqube /opt/sonarqube/temp;
ENV LANG='en_US.UTF-8' \
LANGUAGE='en_US:en' \
LC_ALL='en_US.UTF-8'
#FROM public.ecr.aws/amazonlinux/amazonlinux:latest
#RUN yum install -y shadow-utils && yum clean all
#RUN set -eux; \
#
# SonarQube setup
#
ARG SONARQUBE_VERSION=9.9.0.65466
ARG SONARQUBE_ZIP_URL=https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-${SONARQUBE_VERSION}.zip
ENV JAVA_HOME='/opt/java/openjdk' \
SONARQUBE_HOME=/opt/sonarqube \
SONAR_VERSION="${SONARQUBE_VERSION}" \
SQ_DATA_DIR="/opt/sonarqube/data" \
SQ_EXTENSIONS_DIR="/opt/sonarqube/extensions" \
SQ_LOGS_DIR="/opt/sonarqube/logs" \
SQ_TEMP_DIR="/opt/sonarqube/temp"
RUN set -eux; \
#groupadd --system --gid 1000 sonarqube; \
#useradd --system --uid 1000 --gid sonarqube sonarqube; \
whoami; \
apt-get update; \
apt-get install -y gnupg unzip curl bash fonts-dejavu; \
echo "networkaddress.cache.ttl=5" >> "${JAVA_HOME}/conf/security/java.security"; \
sed --in-place --expression="s?securerandom.source=file:/dev/random?securerandom.source=file:/dev/urandom?g" "${JAVA_HOME}/conf/security/java.security"; \
# pub 2048R/D26468DE 2015-05-25
# Key fingerprint = F118 2E81 C792 9289 21DB CAB4 CFCA 4A29 D264 68DE
# uid sonarsource_deployer (Sonarsource Deployer) <infra@sonarsource.com>
# sub 2048R/06855C1D 2015-05-25
for server in $(shuf -e hkps://keys.openpgp.org \
hkps://keyserver.ubuntu.com) ; do \
gpg --batch --keyserver "${server}" --recv-keys 679F1EE92B19609DE816FDE81DB198F93525EC1A && break || : ; \
done; \
mkdir --parents /opt; \
cd /opt; \
curl --fail --location --output sonarqube.zip --silent --show-error "${SONARQUBE_ZIP_URL}"; \
curl --fail --location --output sonarqube.zip.asc --silent --show-error "${SONARQUBE_ZIP_URL}.asc"; \
gpg --batch --verify sonarqube.zip.asc sonarqube.zip; \
unzip -q sonarqube.zip; \
mv "sonarqube-${SONARQUBE_VERSION}" sonarqube; \
rm sonarqube.zip*; \
rm -rf ${SONARQUBE_HOME}/bin/*; \
ln -s "${SONARQUBE_HOME}/lib/sonar-application-${SONARQUBE_VERSION}.jar" "${SONARQUBE_HOME}/lib/sonarqube.jar"; \
chmod -R 555 ${SONARQUBE_HOME}; \
chmod -R ugo+wrX "${SQ_DATA_DIR}" "${SQ_EXTENSIONS_DIR}" "${SQ_LOGS_DIR}" "${SQ_TEMP_DIR}"; \
apt-get remove -y gnupg unzip curl; \
rm -rf /var/lib/apt/lists/*;
COPY entrypoint.sh ${SONARQUBE_HOME}/docker/
WORKDIR ${SONARQUBE_HOME}
EXPOSE 9000
USER sonarqube
STOPSIGNAL SIGINT
ENTRYPOINT ["/opt/sonarqube/docker/entrypoint.sh"]
```
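One observation on the Dockerfile above (mine, not from the original post): in a multi-stage build only the final `FROM` stage ends up in the image, so everything the first `amazonlinux` stage sets up (the sonarqube-owned directories and the `VOLUME` declarations) is discarded, and the equivalent `mkdir`/`chown` lines in the final stage are commented out. A sketch of restoring that setup in the stage that actually ships:
```
# Inside the eclipse-temurin stage, after SonarQube has been unzipped into /opt/sonarqube:
RUN set -eux; \
    mkdir -p /opt/sonarqube/data /opt/sonarqube/extensions; \
    chown -R sonarqube:sonarqube /opt/sonarqube/data /opt/sonarqube/extensions
USER sonarqube
VOLUME ["/opt/sonarqube/data", "/opt/sonarqube/extensions"]
```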
JSON Task Definition
```
{
"taskDefinitionArn": "arn:aws:ecs:us-east-1:ACCOUNT:task-definition/sonar-task-def2:20",
"containerDefinitions": [
{
"name": "sonarqube",
"image": "ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/sonarqube:repost",
"cpu": 0,
"links": [],
"portMappings": [
{
"containerPort": 9000,
"hostPort": 9000,
"protocol": "tcp"
}
],
"essential": true,
"entryPoint": [],
"command": [],
"environment": [
{
"name": "SONARQUBE_JDBC_PASSWORD",
"value": "PASSWORD"
},
{
"name": "SONARQUBE_JDBC_URL",
"value": "LINK"
},
{
"name": "SONARQUBE_JDBC_USERNAME",
"value": "root"
}
],
"environmentFiles": [],
"mountPoints": [
{
"sourceVolume": "sonar-data",
"containerPath": "/opt/sonarqube/data"
},
{
"sourceVolume": "sonar-extensions",
"containerPath": "/opt/sonarqube/extensions"
}
],
"volumesFrom": [],
"secrets": [],
"privileged": true,
"dnsServers": [],
"dnsSearchDomains": [],
"extraHosts": [],
"dockerSecurityOptions": [],
"dockerLabels": {},
"ulimits": [
{
"name": "nofile",
"softLimit": 65535,
"hardLimit": 65535
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "sonar-config3",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs"
},
"secretOptions": []
},
"systemControls": []
}
],
"family": "sonar-task-def2",
"taskRoleArn": "arn:aws:iam::ACCOUNT:role/sonar-config3-EcsTaskExecutionRole-1UXV6AQJSBHAN",
"executionRoleArn": "arn:aws:iam::ACCOUNT:role/sonar-config3-EcsTaskExecutionRole-1UXV6AQJSBHAN",
"networkMode": "awsvpc",
"revision": 20,
"volumes": [
{
"name": "sonar-data",
"host": {
"sourcePath": "/opt/sonarqube/data"
}
},
{
"name": "sonar-extensions",
"host": {
"sourcePath": "/opt/sonarqube/extensions"
}
}
],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "com.amazonaws.ecs.capability.ecr-auth"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "com.amazonaws.ecs.capability.privileged-container"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
},
{
"name": "com.amazonaws.ecs.capability.task-iam-role"
},
{
"name": "ecs.capability.execution-role-ecr-pull"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2"
],
"requiresCompatibilities": [
"EC2"
],
"cpu": "2048",
"memory": "2048",
"registeredAt": "2023-03-16T13:49:38.721Z",
"registeredBy": "arn:aws:sts::ACCOUNT:assumed-role/altitude-products-engineer/",EMAIL
"tags": []
}
```
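On the task definition itself: the `host.sourcePath` volumes mount directories from the EC2 container instance, and those are typically root-owned on the host, which would explain ownership flipping back to root inside the task. On the EC2 launch type, one alternative (an assumption on my part, not something from the post) is Docker-managed volumes via `dockerVolumeConfiguration`, which persist across task restarts without a pre-created host path. A sketch of the `volumes` block with the same names:
```
"volumes": [
  {
    "name": "sonar-data",
    "dockerVolumeConfiguration": {
      "scope": "shared",
      "autoprovision": true,
      "driver": "local"
    }
  },
  {
    "name": "sonar-extensions",
    "dockerVolumeConfiguration": {
      "scope": "shared",
      "autoprovision": true,
      "driver": "local"
    }
  }
]
```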
Hi Folks,
I wanted to check if anyone has run into this with respect to service mesh. Based on our requirements we need Istio, but there is no managed Istio offering on AWS, and the team is concerned about upgrades and management. Any advice on making upgrades and management seamless to app owners?
Has anyone evaluated Tetrate or a similar product for service mesh?
Thanks
Sunil S
In AWS IoT Greengrass, there is a Rollback option for deployments under "Deployment Policies". If I understand correctly, it rolls back devices to their previous configuration if the deployment fails. I wanted to test this, so I purposely built a non-working ECR Docker image and deployed it through a Greengrass component. (I basically introduced a Python FileNotFoundError by having the Dockerfile run a nonexistent Python script.)
Before that, I had a working container running. What I would like to see is my device rolling back to the old (working) container after it realizes the new container failed. However, this doesn't happen; only the device state changes to unhealthy in the AWS console.
Now my question: what kinds of errors is this rollback function able to detect and handle? And do you have any suggestions on how I could achieve my goal of rolling back the device if the Docker CMD, or any file therein, throws an error?
Thanks a lot for your help!
Hi,
I am using AWS IoT Greengrass to deploy a Docker image from a private ECR repo to my Raspberry Pi. The deployment works fine. However, if I change the deployment (i.e. revise it) to run a different image and not the old one anymore, the old container keeps running locally. I obviously want the old container to stop once it's no longer included in my deployment, but that only happens if I shut down the RPi and restart it. How can I make sure the old container stops immediately?
My component recipe looks as follows, do I need to change anything therein?
For completeness: the Docker container runs a Python script that enters an infinite while loop, printing "Hello, world!" every second. Maybe the continuous loop is the problem, but I don't think so, as I am able to stop the container through `docker stop`.
```
{
"RecipeFormatVersion": "2020-01-25",
"ComponentName": "com.example.MyPrivateDockerComponent_revised",
"ComponentVersion": "1.0.4",
"ComponentType": "aws.greengrass.generic",
"ComponentDescription": "A component that runs a Docker container from a private Amazon ECR image revised.",
"ComponentPublisher": "Amazon",
"ComponentDependencies": {
"aws.greengrass.DockerApplicationManager": {
"VersionRequirement": ">=2.0.0 <2.1.0",
"DependencyType": "HARD"
},
"aws.greengrass.TokenExchangeService": {
"VersionRequirement": ">=2.0.0 <2.1.0",
"DependencyType": "HARD"
}
},
"Manifests": [
{
"Platform": {
"os": "all"
},
"Lifecycle": {
"Run": "docker run --sig-proxy=True 242944196659.dkr.ecr.eu-central-1.amazonaws.com/test_repo:latest",
"Stop": "docker stop $(docker ps -q --filter ancestor=242944196659.dkr.ecr.eu-central-1.amazonaws.com/test_repo:latest)",
"Destroy": "docker rm $(docker ps -a -q --filter ancestor=242944196659.dkr.ecr.eu-central-1.amazonaws.com/test_repo:latest)"
},
"Artifacts": [
{
"Uri": "docker:242944196659.dkr.ecr.eu-central-1.amazonaws.com/test_repo:latest",
"Unarchive": "NONE",
"Permission": {
"Read": "OWNER",
"Execute": "NONE"
}
}
]
}
],
"Lifecycle": {}
}
```
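One thing that stands out in the recipe (my reading of the Greengrass v2 recipe reference, so treat it as something to verify): the documented lifecycle steps are Install, Run, Startup, Shutdown, and Recover; `Stop` and `Destroy` are not lifecycle keys, so those commands probably never execute. A sketch with the cleanup moved into `Shutdown` (same image URI as above):
```
"Lifecycle": {
  "Run": "docker run --sig-proxy=True 242944196659.dkr.ecr.eu-central-1.amazonaws.com/test_repo:latest",
  "Shutdown": "docker stop $(docker ps -q --filter ancestor=242944196659.dkr.ecr.eu-central-1.amazonaws.com/test_repo:latest)"
}
```
With a Shutdown step in place, removing the component from a revised deployment should stop the container when Greengrass tears the component down.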
Hey there,
I tried to deploy a simple Docker image, previously uploaded to a private repository on ECR, on my Raspberry Pi. I uploaded the image using a different user than the one whose access keys are saved on the RPi. However, both users have full access to all ECR services. The following error occurred when I tried to deploy the Docker image:
```
GET_ECR_CREDENTIAL_ERROR: FAILED_NO_STATE_CHANGE: Failed to download artifact name: 'docker:242944196659.dkr.ecr.eu-central-1.amazonaws.com/test_repo:latest' for component com.example.MyPrivateDockerComponent-1.0.0, reason: Failed to get auth token for docker login. Failed to get credentials for ECR registry - 242944196659. User: arn:aws:sts::242944196659:assumed-role/GreengrassV2TokenExchangeRole/82ddfef99dfb0585b238481427e354b015fa33c72fd5cf52a6b5595df294438a is not authorized to perform: ecr:GetAuthorizationToken on resource: * because no identity-based policy allows the ecr:GetAuthorizationToken action (Service: Ecr, Status Code: 400, Request ID: 60278c5f-3049-4b01-b9b8-ac4b54e6cb0c)
```
It seems that my RPi is not authorized to download the private Docker image. Any suggestions on how I could solve this issue?
Thanks a lot in advance!
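For reference, the error message itself names the fix: the role the core device assumes (`GreengrassV2TokenExchangeRole`) lacks `ecr:GetAuthorizationToken`. A minimal policy sketch to attach to that role (region, account, and repository name taken from the error message):
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Resource": "arn:aws:ecr:eu-central-1:242944196659:repository/test_repo"
    }
  ]
}
```
The IAM user whose access keys are on the RPi isn't what matters here; the device exchanges its certificate for temporary credentials from the token exchange role, so that role is the one that needs the ECR permissions.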