
Containers

AWS container services offer a broad choice of ways to run your containers on global infrastructure, with 77 Availability Zones across 24 Regions. AWS also provides strong security isolation between your containers, ensures you are running the latest security updates, and lets you set granular access permissions for every container.

Recent questions


Using Elastic Beanstalk - Docker Platform with ECR - Specifying a tag via environment variable

Hi,

I am trying to develop a CI/CD process using Beanstalk's Docker platform with ECR. CodePipeline performs the builds and manages ECR tags and promotions; Terraform manages the infrastructure.

I am looking for an approach that lets us use the same Dockerfile/Dockerrun.aws.json in production and non-production environments, despite wanting different tags of the same image deployed, perhaps from different repositories (`repo_name_PROD` vs `repo_name_DEV`). Producing and moving Beanstalk bundles that differ only in a tag feels unnecessary, and dynamically rewriting Dockerfiles during the deployment process seems fragile.

What I was exploring was a simple environment variable: choose which tag (commit hash) of an image to deploy based on a Beanstalk environment variable:

```
FROM 00000000000.dkr.ecr.us-east-1.amazonaws.com/repoName:${TAG}
ADD entrypoint.sh /
EXPOSE 8080 8787 9990
ENTRYPOINT [ "/entrypoint.sh" ]
```

where `TAG` is the Git hash of the code repository from which the artifact was produced; CodeBuild has built the code and tagged the Docker image. I understand that Docker supports this:

```
ARG TAG
FROM 00000000000.dkr.ecr.us-east-1.amazonaws.com/repo_name:${TAG}
ADD entrypoint.sh /
EXPOSE 8080 8787 9990
ENTRYPOINT [ "/entrypoint.sh" ]
```

but it requires building the image like this: `docker build --build-arg TAG=SOME_TAG .`

Am I correct in assuming this will not work with the Docker platform? I do not believe the EB Docker platform exposes a way to specify the build arg. What is standard practice for managing tagged Docker images in Beanstalk? I am a little leery of the `latest` tag, as a poorly timed auto scaling event could pull an update before it should be deployed; that just does not work in my case. Updating my Dockerfile during deployment (via `sed`) seems like asking for trouble.
0
answers
0
votes
1
views
bchandley
asked a day ago
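A commonly suggested workaround for the question above, sketched under assumptions (the repository URI, tag value, and port are placeholders, not values from a real pipeline): since the EB Docker platform does not expose `--build-arg`, the pipeline can render `Dockerrun.aws.json` with the tag baked in at bundle time, so the bundle rather than the Dockerfile carries the tag.

```shell
#!/bin/sh
# Hypothetical pipeline step: render Dockerrun.aws.json with the image tag
# resolved at bundle time. REPO and TAG stand in for real pipeline variables.
REPO="00000000000.dkr.ecr.us-east-1.amazonaws.com/repo_name"
TAG="abc1234"   # e.g. the short Git commit hash produced by the build

cat > Dockerrun.aws.json <<EOF
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "${REPO}:${TAG}",
    "Update": "true"
  },
  "Ports": [{ "ContainerPort": 8080 }]
}
EOF

# sanity check: the rendered file should reference the pinned tag
grep "${TAG}" Dockerrun.aws.json
```

Pinning the tag in the rendered bundle also avoids the `latest`-tag race the question mentions: an auto scaling event can only ever pull the exact image the bundle names.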

aws-sdk V3 timeout in lambda

Hello,

I'm using a NodeJS 14.x Lambda to control an ECS service. As I do not need the ECS task to run permanently, I created a service inside the cluster so I can adjust the desired count to start or stop it at will. I also created two Lambdas: one for querying the current desired count and the current public IP, and another for updating the desired count (to 0 or 1, should I want to stop or start it).

I have packed aws-sdk v3 in a Lambda layer so I don't have to package it with each Lambda. That seems to work fine, as I was getting the runtime error

> Runtime.ImportModuleError: Error: Cannot find module '@aws-sdk/client-ecs'

but I do not anymore. The code also works fine from my workstation; I'm able to execute it locally and I get the desired result (the query to the ECS API works fine). But all I get when testing from the Lambdas are timeouts. It usually executes in less than 3 seconds on my local workstation, but even with the Lambda timeout set to 3 minutes, this is what I get:

```
START RequestId: XXXX-XX-XXXX Version: $LATEST
2022-01-11T23:57:59.528Z XXXX-XX-XXXX INFO before ecs client send
END RequestId: XXXX-XX-XXXX
REPORT RequestId: XXXX-XX-XXXX Duration: 195100.70 ms Billed Duration: 195000 ms Memory Size: 128 MB Max Memory Used: 126 MB Init Duration: 1051.68 ms
2022-01-12T00:01:14.533Z XXXX-XX-XXXX Task timed out after 195.10 seconds
```

The message `before ecs client send` is a console.log I placed just before the ecs.send request for debugging purposes.

I think I've set up the policy correctly, as well as the Lambda VPC with the default outbound rule allowing all protocols on all ports to 0.0.0.0/0, so I have no idea where to look now. I have not found any way to debug aws-sdk v3 calls like you would in v2 by adding a logger to the config. Maybe that could help in understanding the issue.
1
answers
0
votes
5
views
Tomazed
asked 5 days ago
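A common explanation for the symptom above, offered as a hypothesis rather than a confirmed diagnosis: when a Lambda function is attached to private subnets with no NAT gateway, the first SDK call hangs until the function times out, because an outbound security-group rule allows the traffic but there is no route to the public ECS endpoint. One fix is an interface VPC endpoint for ECS; all IDs and the region below are placeholders.

```shell
# Sketch only: create an interface VPC endpoint so the Lambda can reach the
# ECS API without internet access (replace the IDs and region with your own).
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.eu-west-1.ecs \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
```

Alternatively, routing the private subnets through a NAT gateway restores access to all public AWS endpoints at once.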

Selectively exposing a REST endpoint publicly in an AWS EKS cluster in a private VPC

**Cluster information:**
**Kubernetes version: 1.19**
**Cloud being used: AWS EKS**

So here is my configuration. I have a private VPC on AWS that hosts an AWS EKS cluster. This VPC has public-facing load balancers that are accessible only from specific IP addresses. A number of microservices are hosted on this EKS cluster, running in their own pods, and each of these pods exposes a REST endpoint.

Here is my requirement: out of all the REST endpoints we have, I would like to make only one publicly available from the internet. The remainder of our REST endpoints should remain private, accessible only from certain IP addresses. What would be the best approach to achieve this? So far, from what I have researched, here are my options:

1) Have another instance of the Ingress controller that deploys a public-facing load balancer to handle requests to this public REST endpoint. This would work; however, I am concerned about the security aspects. An attacker might get into our VPC and create havoc.
2) Have a completely new, public-facing EKS cluster where I deploy this single REST endpoint. This is something I would like to avoid.
3) Use something like AWS API Gateway. I am not sure if this is possible, as I have to research it more.

Does anyone have ideas on how this could be achieved securely? Any advice would be very much appreciated.

Regards,
Kiran Hegde
5
answers
0
votes
5
views
AWS-User-1971331
asked 5 days ago
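A minimal sketch of a narrower variant of option 1 from the question above, under stated assumptions: the service name, selector, and ports are illustrative, and the annotation assumes the AWS Load Balancer Controller is installed. The idea is to give only the one public endpoint its own internet-facing load balancer, leaving the existing internal ingress and all other services untouched.

```shell
#!/bin/sh
# Render an illustrative Service manifest that exposes a single app behind
# an internet-facing load balancer (names are hypothetical).
cat > public-endpoint-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: public-api
  annotations:
    # AWS Load Balancer Controller: make only this one LB internet-facing
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: public-api
  ports:
    - port: 443
      targetPort: 8080
EOF
```

The other services keep their internal load balancers, so the public surface is exactly one listener; `loadBalancerSourceRanges` can further restrict even that listener to known CIDRs if needed.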

Docker push doesn't work even docker login succeeded during AWS CodePipeline Build stage

Hello,

I'm preparing CI/CD using AWS CodePipeline. Unfortunately I have an error during the build stage. Below is the content of my buildspec.yml file, where:

AWS_DEFAULT_REGION = eu-central-1
CONTAINER_NAME = cicd-1-app
REPOSITORY_URI = <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com/cicd-1-app

```
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto11
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - TAG="$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - IMAGE_URI=${REPOSITORY_URI}:${TAG}
  build:
    commands:
      - echo Build started on `date`
      - echo $IMAGE_URI
      - mvn clean package -Ddockerfile.skip
      - docker build --tag $IMAGE_URI .
  post_build:
    commands:
      - printenv
      - echo Build completed on `date`
      - echo $(docker images)
      - echo Pushing docker image
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com
      - docker push $IMAGE_URI
      - echo push completed
      - printf '[{"name":"%s","imageUri":"%s"}]' $CONTAINER_NAME $IMAGE_URI > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
```

I got this error:

```
[Container] 2022/01/06 19:57:36 Running command aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

[Container] 2022/01/06 19:57:37 Running command docker push $IMAGE_URI
The push refers to repository [<ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/cicd-1-app]
37256fb2fd27: Preparing
fe6c1ddaab26: Preparing
d4dfab969171: Preparing
no basic auth credentials

[Container] 2022/01/06 19:57:37 Command did not exit successfully docker push $IMAGE_URI exit status 1
[Container] 2022/01/06 19:57:37 Phase complete: POST_BUILD State: FAILED
[Container] 2022/01/06 19:57:37 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker push $IMAGE_URI. Reason: exit status 1
```

Even though docker logged in successfully, there is a "no basic auth credentials" error. Do you know what the problem could be?

Best regards.
2
answers
0
votes
8
views
KM
asked 10 days ago
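One detail worth noting in the log above: the `docker login` targets `eu-central-1`, while the push goes to a `us-east-1` repository. Docker stores ECR credentials per registry host, so logging in to one region and pushing to another yields exactly "no basic auth credentials". A small guard for the buildspec, with placeholder values standing in for the real pipeline variables:

```shell
#!/bin/sh
# Placeholder values: in the real buildspec these come from the environment.
REPOSITORY_URI="123456789012.dkr.ecr.us-east-1.amazonaws.com/cicd-1-app"
AWS_DEFAULT_REGION="eu-central-1"

# The region is the 4th dot-separated field of an ECR repository URI.
REPO_REGION=$(echo "$REPOSITORY_URI" | cut -d. -f4)
if [ "$REPO_REGION" != "$AWS_DEFAULT_REGION" ]; then
  echo "Region mismatch: repository is in $REPO_REGION but login targets $AWS_DEFAULT_REGION" >&2
fi
```

Keeping the login registry derived from `$REPOSITORY_URI` itself, rather than hard-coding it, removes this class of mismatch entirely.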
