
Questions tagged with Amazon Elastic Container Registry (ECR)



1 answer · 0 votes · 6 views
danieljmann96 · asked a month ago

ECR Cross Account Private Link

Hi all,

I've been struggling with the following for a couple of days already. I've followed this guide: https://aws.amazon.com/blogs/networking-and-content-delivery/centralize-access-using-vpc-interface-endpoints/

I have set up AWS Organizations with separate accounts (nonprd, prd, ...) and the shared resources account:

- CIDR for shared: 10.40.0.0/16
- CIDR for nonprd: 10.0.0.0/16
- CIDR for prd: 10.1.0.0/16

In the shared resources account, I've created the 4 VPC endpoints for ECR (the shared resources account holds our ECR Docker repos for the other accounts): logs, dkr, api, and S3. I've set up VPC peering with my nonprd and prd accounts and created the route table entries so that all traffic flows from shared to the VPC peering connection's CIDR and vice versa.

The private DNS option for the VPC endpoints is disabled; instead I manually created private Route 53 records that match the ECR domains exactly. So I have 3 extra private records in the SHARED resources account:

- api.ecr.eu-west-1.amazonaws.com
- dkr.ecr.eu-west-1.amazonaws.com
- logs.eu-west-1.amazonaws.com

I've created the alias records pointing to the private hosted zones, and I've done the Route 53 associations for all the VPCs in nonprd and prd. I CAN resolve the DNS records. BUT... and now the problem arises... when I try to run the containers in any of the VPCs in my nonprd account, my tasks are given one of the following errors:

- ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 1 time(s): AccessDeniedException: User: arn:aws:sts::${AWS::AccountId}:assumed-rol...
- ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post https://api.ecr....
The policy on the VPC endpoints (complete snippet from my CloudFormation template):

```
EcrApiEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            AWS:
              - "arn:aws:iam::51*******0:root" # NonPrd
              - "arn:aws:iam::1******3:root" # Prd
              - !Sub arn:aws:iam::${AWS::AccountId}:root
          Action:
            - ecr:BatchGetImage
            - ecr:GetAuthorizationToken
            - ecr:GetDownloadUrlForLayer
            - ecr:BatchCheckLayerAvailability
            - ecr:PutImage
            - ecr:InitiateLayerUpload
            - ecr:UploadLayerPart
            - ecr:CompleteLayerUpload
          Resource:
            - !Sub "arn:aws:ecr:${AWS::Region}:${AWS::AccountId}:repository/*"
    VpcId: !FindInMap [Environments, !Ref Environment, VPC]
    VpcEndpointType: Interface
    PrivateDnsEnabled: false
    SecurityGroupIds:
      - !GetAtt VPCESecurityGroup.GroupId
    SubnetIds:
      - !Select [0, !FindInMap [Environments, !Ref Environment, PrivateSubnets]]
      - !Select [1, !FindInMap [Environments, !Ref Environment, PrivateSubnets]]
      - !Select [2, !FindInMap [Environments, !Ref Environment, PrivateSubnets]]
    ServiceName:
      Fn::Join:
        - ""
        - - "com.amazonaws."
          - !Ref "AWS::Region"
          - ".ecr.api"
```

So the VPC endpoints run in the private subnet of the SHARED resources account. The ECS Fargate service/task also has the correct permissions (everything works fine without the VPC endpoints). Can someone help... please...
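One detail worth checking in the endpoint policy above: `ecr:GetAuthorizationToken` does not support resource-level permissions, so listing it in a statement whose `Resource` is scoped to repository ARNs can produce exactly this kind of `AccessDeniedException`. A minimal sketch of a policy with the token action split into its own unscoped statement; the account IDs and ARNs here are placeholders, not the asker's real values:

```python
# Hypothetical sketch: split the VPC endpoint policy so that
# GetAuthorizationToken (which must use Resource "*") is separate
# from the repository-scoped actions.

def build_endpoint_policy(account_ids, repo_arn):
    """Return a VPC endpoint policy dict with the token action unscoped."""
    principals = [f"arn:aws:iam::{acct}:root" for acct in account_ids]
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # token retrieval cannot be repository-scoped
                "Effect": "Allow",
                "Principal": {"AWS": principals},
                "Action": "ecr:GetAuthorizationToken",
                "Resource": "*",
            },
            {   # repository-level actions can stay scoped
                "Effect": "Allow",
                "Principal": {"AWS": principals},
                "Action": [
                    "ecr:BatchGetImage",
                    "ecr:GetDownloadUrlForLayer",
                    "ecr:BatchCheckLayerAvailability",
                ],
                "Resource": repo_arn,
            },
        ],
    }

# placeholder accounts and region, for illustration only
policy = build_endpoint_policy(
    ["111111111111", "222222222222"],
    "arn:aws:ecr:eu-west-1:333333333333:repository/*",
)
```

Whether this is the root cause here is an assumption; the second (RequestError) failure mode points at routing/security-group reachability rather than policy.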
4 answers · 0 votes · 2 views
AWS-User-0938280 · asked a month ago

ECR delete image with terraform kreuzwerker/docker provider gets 405 Method Not Allowed. Worked until yesterday with no changes.

I have multiple builds set up in AWS CodeBuild that run Terraform code. I am using Terraform version 1.0.11 with kreuzwerker/docker provider 2.16 and AWS provider version 4.5.0. Yesterday, builds stopped working: when `docker_registry_image` deletes the old image I receive `Error: Got error getting registry image digest: Got bad response from registry: 405 Method Not Allowed`. I have not changed any code, and I'm using the same `aws/codebuild/standard:4.0` build image. Note that I have another API in a different region (`us-west-1`) with the exact same code, and it still works. Here should be enough code to figure out what's going on:

```
locals {
  ecr_address = format("%v.dkr.ecr.%v.amazonaws.com", data.aws_caller_identity.current.account_id, var.region)
  environment = terraform.workspace
  name        = "${local.environment}-${var.service}"
  os_check    = data.external.os.result.os == "Windows" ? "Windows" : "Unix"
}

variable "region" {
  default = "us-east-2"
}

provider "aws" {
  region = var.region
}

provider "docker" {
  host = local.os_check == "Windows" ? "npipe:////.//pipe//docker_engine" : null

  registry_auth {
    address  = local.ecr_address
    username = data.aws_ecr_authorization_token.token.user_name
    password = data.aws_ecr_authorization_token.token.password
  }
}

data "external" "git_hash" {
  program = local.os_check == "Windows" ? ["Powershell.exe", "./Scripts/get_sha.ps1"] : ["bash", "./Scripts/get_sha.sh"]
}

data "aws_caller_identity" "current" {}

data "aws_ecr_authorization_token" "token" {
  registry_id = data.aws_caller_identity.current.id
}

resource "aws_ecr_repository" "repo" {
  name                 = lower(local.name)
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }

  tags = merge(local.common_tags, tomap({ "Name" = local.name }))
}

resource "aws_ecr_lifecycle_policy" "policy" {
  repository = aws_ecr_repository.repo.name
  policy     = <<EOF
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep only last 10 images, expire all others",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": { "type": "expire" }
    }
  ]
}
EOF
}

resource "docker_registry_image" "image" {
  name = format("%v:%v", aws_ecr_repository.repo.repository_url, data.external.git_hash.result.sha)

  build {
    context    = replace(trimsuffix("${path.cwd}", "/Terraform"), "/${var.company}.${var.service}", "")
    dockerfile = "./${var.company}.${var.service}/Dockerfile"
  }

  lifecycle {
    create_before_destroy = true
  }
}
```
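For anyone reproducing this outside Terraform, the registry address and image name that the provider talks to are built by the `format()` calls in the `locals` and `docker_registry_image` blocks above. A minimal Python sketch of the same string construction (the account ID and SHA are made-up placeholders) can help verify which registry URL a failing build is actually hitting:

```python
# Mirror of Terraform's format("%v.dkr.ecr.%v.amazonaws.com", account, region)
def ecr_address(account_id: str, region: str) -> str:
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com"

# Mirror of format("%v:%v", repository_url, git_sha)
def image_name(repository_url: str, git_sha: str) -> str:
    return f"{repository_url}:{git_sha}"

addr = ecr_address("123456789012", "us-east-2")   # placeholder account
name = image_name(f"{addr}/dev-myservice", "a1b2c3d")  # placeholder repo/sha
```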
0 answers · 0 votes · 0 views
AWS-User-6890053 · asked 2 months ago

Running TagUI RPA as a Lambda Function

I am trying to run a simple TagUI flow as a Lambda function using container images. I have made a Dockerfile using the bootstrap and function.sh from [this tutorial](https://aripalo.com/blog/2020/aws-lambda-container-image-support/):

```
FROM amazon/aws-lambda-provided:al2
RUN yum install -y wget nano php java-1.8.0-openjdk unzip procps
RUN curl https://intoli.com/install-google-chrome.sh | bash
RUN wget https://github.com/kelaberetiv/TagUI/releases/download/v6.46.0/TagUI_Linux.zip \
    && unzip TagUI_Linux.zip \
    && rm TagUI_Linux.zip \
    && ln -sf /var/task/tagui/src/tagui /usr/local/bin/tagui \
    && tagui update
RUN sed -i 's/no_sandbox_switch=""/no_sandbox_switch="--no-sandbox"/' /var/task/tagui/src/tagui
ADD tr.tag /var/task/tagui/src/tr.tag
WORKDIR /var/runtime/
COPY bootstrap bootstrap
RUN chmod 755 bootstrap
WORKDIR /var/task/
COPY function.sh function.sh
RUN chmod 755 function.sh
CMD [ "function.sh.handler" ]
```

My function.sh:

```
function handler () {
  cp -r /var/task/tagui/src/* /tmp;
  chmod 755 /tmp/tagui;
  OUTPUT=$(/tmp/tagui /tmp/tr.tag -h);
  echo "${OUTPUT}";
}
```

Notes:

- The sed line is required to get TagUI running in Docker images.
- tr.tag is just a simple flow to do a password reset on a webapp, so I can confirm the container has run.
- Everything has to be run in /tmp, as that is the only folder Lambda can write to in the container and TagUI creates a load of temporary files during execution.

When I run it as a Lambda I get the error:

```
./tmp/tagui/src/tagui: line 398: 56 Trace/breakpoint trap (core dumped) $chrome_command --user-data-dir="$TAGUI_DIR/chrome/tagui_user_profile" $chrome_switches $window_size $headless_switch $no_sandbox_switch > /dev/null 2>&1
```

When I run the container from Docker it runs perfectly. I have tried increasing both the memory and the timeout of the function. The end goal I am trying to achieve is to have a Lambda function, triggered by an API Gateway, that can receive a TagUI RPA flow and run it.
1 answer · 0 votes · 1 view
AWS-User-5656755 · asked 2 months ago

Recommended batch automated workflow for updating docker containers

How do I update the Docker image for a Batch job definition using the CLI or API? It looks like the `RegisterJobDefinition` API is "create only": from what I can tell from the documentation, you can't update a job definition, so you can't change the reference to the Docker image. The job definition really wants to be defined in the CDK constructs area (or CFT) because it ties in a bunch of stuff I already have in the CDK, such as databases, EFS, and Secrets. That is fine, as all that stuff is fairly static. But Docker images are meant to change all the time, and quickly, as my devs iterate code. I really don't want to specify the final Docker image at creation time in CDK or CFT, but it looks like that's the only place to do it. I do not want to re-deploy a CDK/CFT instance just to change some code in a Docker container; that would be slow and bad practice.

Note: a [similar question was asked on the old forum](https://forums.aws.amazon.com/thread.jspa?threadID=257528&tstart=0) but didn't really get an answer. "Use :latest" isn't always the best answer for Docker version management. My devs need to be able to iterate quickly and not walk over each other. I would like my devs to be able to change to a new Docker image and then test a batch. How can they do this quickly and easily?

Note: here's a [blog post](https://stevelasker.blog/2018/03/01/docker-tagging-best-practices-for-tagging-and-versioning-docker-images/) on `stable` tagging versus `unique` tagging. For deployments they recommend unique, which isn't supported with Batch, AFAICT.
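One nuance about "create only": calling `RegisterJobDefinition` again with the same job definition name creates a new *revision*, and jobs submitted against the bare name resolve to the latest revision. A sketch of the request payload a CI step might build to roll a new image without touching CDK/CFT; the job name and image URI are hypothetical, and whether this fits the asker's revision-management needs is a judgment call:

```python
# Hypothetical sketch: build kwargs for
# boto3.client("batch").register_job_definition(**req),
# reusing the existing name so Batch creates a new revision.

def new_revision_request(name: str, image_uri: str, vcpus: int = 1, memory: int = 2048) -> dict:
    """Build a RegisterJobDefinition payload pointing at a new image tag."""
    return {
        "jobDefinitionName": name,  # same name => new revision, not a new definition
        "type": "container",
        "containerProperties": {
            "image": image_uri,  # unique tag per build, not :latest
            "resourceRequirements": [
                {"type": "VCPU", "value": str(vcpus)},
                {"type": "MEMORY", "value": str(memory)},
            ],
        },
    }

req = new_revision_request(
    "my-batch-job",  # placeholder name
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:gitsha-a1b2c3d",  # placeholder URI
)
```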
1 answer · 0 votes · 6 views
PaulSPNW · asked 2 months ago

Push a container to Lightsail with AssumeRole and MFA

We are using roles as a best practice to access our various environments. I have set up my `~/.aws/config` with the role:

```
[profile dev]
source_profile=default
role_arn=arn:aws:iam::987654321:role/MyRole
mfa_serial = arn:aws:iam::123456789:mfa/MyUser
```

This works fine: I am prompted for my MFA code when running CLI commands as expected and all is OK. However, when I run `aws lightsail push-container-image` with the Lightsail Control (lightsailctl) plugin I get an error:

```
AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
Command '['lightsailctl', '--plugin', '--input-stdin']' returned non-zero exit status 1.
```

I tried the other method of calling `aws lightsail register-container-image`. This requires the `--digest` flag, so I built and pushed my image to our GitLab image registry:

```
docker build -t registry.gitlab.com/myorg/myimage:latest .
docker push registry.gitlab.com/myorg/myimage:latest
```

I then get the digest using `docker images --digests`. But when I run `aws lightsail register-container-image` I get

```
An error occurred (NotFoundException) when calling the RegisterContainerImage operation: Image with digest "sha256:7494ec375bd1948670750289069cfbb0caa7c08eaae821674ee5a54b0ee422d5" not found.
```

I get the same `NotFoundException` if I push to AWS ECR. If I try to push to the Lightsail ECR reference after login, I cannot connect and the layers are stuck retrying...

```
PS > cat pwd.txt | docker login 585224773020.dkr.ecr.ap-southeast-2.amazonaws.com -u AWS --password-stdin
Login Succeeded
PS > docker build -t 585224773020.dkr.ecr.ap-southeast-2.amazonaws.com/myorg/myimage:latest .
[+] Building 3.3s (17/17) FINISHED
PS > docker push 585224773020.dkr.ecr.ap-southeast-2.amazonaws.com/myorg/myimage:latest
The push refers to repository [585224773020.dkr.ecr.ap-southeast-2.amazonaws.com/myorg/myimage]
a7cb1ff97502: Retrying in 10 seconds
762b147902c0: Retrying in 10 seconds
235e04e3592a: Retrying in 10 seconds
6173b6fa63db: Retrying in 10 seconds
9a94c4a55fe4: Retrying in 10 seconds
9a3a6af98e18: Waiting
7d0ebbe3f5d2: Waiting
EOF
```
2 answers · 0 votes · 8 views
MonkeyBites · asked 3 months ago

Lambda function working locally but crashing on AWS

I deploy my Lambda function code as a container image. I create a simple Python image from an alternative base image, the FEniCS Project stable image. When the dolfin module is imported, the following error message is displayed before crashing:

```
terminate called after throwing an instance of 'std::logic_error'
  what():  basic_string::_M_construct null not valid
Runtime exited with error: signal: aborted (core dumped) Runtime.ExitError
```

When I test my function locally with the runtime interface emulator and the same image, I don't have any error messages. After some research, I have found that the problem comes from the Python extension (.so) named "cpp" present in the dolfin package, but I don't understand why it works locally and not on AWS. Here are my files:

**Dockerfile**

```
ARG FUNCTION_DIR="/function"

FROM quay.io/fenicsproject/stable:current as build-image

# Include global arg in this stage of the build
ARG FUNCTION_DIR

# Install aws-lambda-cpp build dependencies
RUN sudo apt-get update -y && \
    sudo DEBIAN_FRONTEND=noninteractive apt-get install -y \
    g++ \
    make \
    cmake \
    unzip \
    libcurl4-openssl-dev

# Create function directory
RUN sudo mkdir -p ${FUNCTION_DIR}

# Copy function code
COPY /aws_documents/app ${FUNCTION_DIR}

RUN sudo pip install --upgrade pip && \
    sudo pip install \
    --target ${FUNCTION_DIR} \
    awslambdaric

# Multi-stage build: grab a fresh copy of the base image (to keep the image light)
FROM quay.io/fenicsproject/stable:current

# Include global arg in this stage of the build
ARG FUNCTION_DIR

# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}

# Copy in the build image dependencies
RUN sudo pip install --upgrade pip
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}

# Define property ENTRYPOINT to call runtime client interface
ENTRYPOINT [ "/usr/bin/python3.6", "-m", "awslambdaric" ]
CMD [ "app.handler" ]
```

**app.py**

```
import os

def handler(event, context):
    os.environ['XDG_CACHE_HOME'] = '/tmp/.cache'
    print("before import")
    import dolfin
    return("okay")
```
1 answer · 0 votes · 12 views
AWS-User-0926953 · asked 3 months ago

How to GET list of tags using registry-alias & repository for Public ECR?

I am attempting a REST API call to GET the list of images and their tags, providing registry-alias and repository-name as path parameters, for a public ECR registry. I tried the endpoints below and was unable to get a response. I am also unable to find any information on whether I am doing this right, either in the AWS API reference or in other online articles. Any guidance on getting this working is greatly appreciated.

**My expectation:** as a developer, I should be able to GET the list of images and tags over a REST API, even when I am not authenticated with an AWS account. It would be nice to have more documentation, for example on which endpoints are supported for public ECR, similar to [DockerHub](https://docs.docker.com/docker-hub/api/latest/#operation/GetNamespacesRepositoriesImages) and [Microsoft](https://github.com/microsoft/ContainerRegistry#browsing-mcr-content).

**REST APIs that I tried:**

```
GET https://public.ecr.aws/:registry-alias/:repository-name/tags/list
GET https://public.ecr.aws/v2/:registry-alias/:repository-name/tags/list
```

**Response I received:**

```
{ "errors": [ { "code": "DENIED", "message": "Not Authorized" } ] }
```

**References I went through:**

- https://aws.amazon.com/blogs/aws/amazon-ecr-public-a-new-public-container-registry/
- https://gallery.ecr.aws
- https://docs.aws.amazon.com/AmazonECR/latest/public/public-registries.html
- https://docs.aws.amazon.com/AmazonECRPublic/latest/APIReference/Welcome.html
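The `DENIED` response is consistent with the Docker Registry HTTP API v2 token flow: the `/v2/.../tags/list` endpoint rejects bare anonymous requests, and clients are expected to first fetch a bearer token and retry with it. A sketch of the two URLs involved; the `public.ecr.aws/token/` path follows the registry v2 token convention and is my assumption, not something I can point to in AWS documentation, so verify it against the `WWW-Authenticate` header the registry returns:

```python
# Hypothetical URL builders for the registry v2 token flow against public ECR.

def token_url(alias: str, repo: str) -> str:
    """Token endpoint a v2 client would hit before listing tags (assumed path)."""
    return f"https://public.ecr.aws/token/?scope=repository:{alias}/{repo}:pull"

def tags_list_url(alias: str, repo: str) -> str:
    """Standard registry v2 tags listing endpoint, called with a Bearer token."""
    return f"https://public.ecr.aws/v2/{alias}/{repo}/tags/list"
```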
1 answer · 0 votes · 1 view
Pranam · asked 4 months ago

Difficulties creating AppRunner service in second region

1. Can you create a new AppRunner service in a separate region from an ECR image? I read a bit about replication, but would like to get it working without additional complexity if possible. Does an additional region introduce any additional permissions issues?

Otherwise, here's my current setup: I have an AppRunner service running successfully in one region. I'm trying to spin up a service based off the same image in a second region, but I get problems similar to this [re:Post question](https://repost.aws/questions/QUGTq5l0sXT1S0wwlBMr8fAQ/cant-create-or-deploy-a-service-on-app-runner-since-it-cant-pull-a-private-ecr-image). Specifically, the service is created but stays in OPERATION_IN_PROGRESS for a while until it dies and goes to status "Create failed". Looking in the deployment logs for the event "Create service", I see:

```
01-25-2022 01:58:36 PM [AppRunner] Failed to pull your application image. Be sure you configure your service with a valid access role to your ECR repository.
01-25-2022 01:48:54 PM [AppRunner] Starting to pull your application image.
```

Following the advice in the other re:Post question, I tried looking at CloudTrail events originating from event source "ecr.amazonaws.com". I have tons of GetAuthorizationToken events, but looking at them doesn't give me much interesting information: they seem to pass and are using the role I expect them to.

A bit about permissions: I'm using the default AppRunnerECRAccessRole, which I created through the UI when creating an AWS service, and I'm reusing it to try to create different services. It has a policy with this JSON:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "*"
    }
  ]
}
```

Any additional debugging tips for this specific scenario? If there is more generic advice for question #1, I'll try to follow it. I would like to "create a new service with the same image in a region distinct from the image's region" if possible at the moment (even if that is inefficient long-term).
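On the replication angle raised in question #1: ECR private registry replication copies pushed images into a second region automatically, after which the second-region service can pull from a same-region repository URI. A sketch of the `replicationConfiguration` payload that `ecr.put_replication_configuration` accepts; the destination region and account ID are placeholders, and whether replication (versus a cross-region pull, which AppRunner may not support) is the right fix here is an assumption:

```python
# Hypothetical sketch of an ECR replication configuration payload
# (the dict shape passed to boto3's ecr.put_replication_configuration).

def replication_config(dest_region: str, registry_id: str) -> dict:
    """Replicate every pushed image to dest_region in the given registry."""
    return {
        "replicationConfiguration": {
            "rules": [
                {"destinations": [{"region": dest_region, "registryId": registry_id}]}
            ]
        }
    }

cfg = replication_config("us-west-2", "123456789012")  # placeholders
```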
1 answer · 0 votes · 6 views
howellz · asked 4 months ago

Docker cli won't push my image to ECR

I have created a registry and I am able to log in to ECR through the Docker CLI using

```
aws ecr get-login-password...
```

However, when I execute the docker push command...

```
docker push 593040300828.dkr.ecr.ap-southeast-2.amazonaws.com/tsystems-loader:latest
```

it doesn't work; the first 5 layers just keep retrying until I get an EOF...

```
The push refers to repository [593040300828.dkr.ecr.ap-southeast-2.amazonaws.com/tsystems-loader]
72bbc9ac96a6: Retrying in 1 second
f30814431bd3: Retrying in 1 second
5f70bf18a086: Retrying in 1 second
f329eecf10bd: Retrying in 1 second
01b48b68c200: Retrying in 1 second
1f981941d034: Waiting
ee9738662de6: Waiting
71ae3b9da9b3: Waiting
7fcb75871b21: Waiting
EOF
```

I have checked my firewall settings but I can't see any issues, and I'm unable to find/read any logs for Docker.

My system: macOS Mojave 10.14.6

```
Docker version 20.10.11, build dea9396
aws-cli/2.4.1 Python/3.8.8 Darwin/18.7.0 exe/x86_64 prompt/off
```

The Mac Console application has this to say:

```
default 22:38:00.957671 +1100 com.docker.hyperkit [364:11:38:00.965][I] guest: still waiting for osxfs-data after 3h45m0.0025302s
default 22:38:10.949001 +1100 com.docker.hyperkit [364:11:38:10.964][I] guest: still waiting for osxfs-data after 3h45m10.001184644s
default 22:38:20.941394 +1100 com.docker.hyperkit [364:11:38:20.942][I] guest: still waiting for osxfs-data after 3h45m20.001059404s
default 22:38:21.301292 +1100 com.docker.driver.amd64-linux proxy >> HEAD /_ping
default 22:38:21.304185 +1100 com.docker.driver.amd64-linux proxy << HEAD /_ping (2.90769ms)
default 22:38:21.357189 +1100 com.docker.driver.amd64-linux proxy >> HEAD /_ping
default 22:38:21.358910 +1100 com.docker.driver.amd64-linux proxy << HEAD /_ping (1.749045ms)
default 22:38:21.407791 +1100 com.docker.driver.amd64-linux proxy >> HEAD /_ping
default 22:38:21.409707 +1100 com.docker.driver.amd64-linux proxy << HEAD /_ping (1.946499ms)
error 22:38:21.444691 +1100 com.docker.cli nw_path_close_fd Failed to close guarded necp fd 8 [9: Bad file descriptor]
default 22:38:21.485138 +1100 docker-credential-osxkeychain UNIX error exception: 17
default 22:38:21.488531 +1100 docker-credential-osxkeychain UNIX error exception: 17
default 22:38:21.490367 +1100 docker-credential-osxkeychain UNIX error exception: 17
default 22:38:21.492620 +1100 docker-credential-osxkeychain UNIX error exception: 17
default 22:38:21.494611 +1100 docker-credential-osxkeychain UNIX error exception: 17
default 22:38:21.497331 +1100 docker-credential-osxkeychain UNIX error exception: 17
default 22:38:21.592730 +1100 com.docker.driver.amd64-linux (d42fb075) b586f281-DriverCMD C->S SwiftAPI POST /usage: {"command":"imagePushCliLinux","count":1}
default 22:38:21.620788 +1100 com.docker.driver.amd64-linux (d42fb075) b586f281-DriverCMD C<-S adaef121-SwiftAPI POST /usage (28.104391ms): OK
default 22:38:21.620914 +1100 com.docker.driver.amd64-linux usage imagePushCliLinux + 1
default 22:38:21.621002 +1100 com.docker.driver.amd64-linux proxy >> POST /v1.41/images/593040300828.dkr.ecr.ap-southeast-2.amazonaws.com/tsystems-loader/push?tag=latest
default 22:38:21.644493 +1100 com.docker.backend failed to lookup nlb1-8e7a241509b00b85.elb.ap-southeast-2.amazonaws.com.: name exists but no relevant records
default 22:38:30.935796 +1100 com.docker.hyperkit [364:11:38:30.944][I] guest: still waiting for osxfs-data after 3h45m30.002709212s
default 22:38:40.926193 +1100 com.docker.hyperkit [364:11:38:40.941][I] guest: still waiting for osxfs-data after 3h45m40.000274073s
default 22:38:50.919041 +1100 com.docker.hyperkit [364:11:38:50.920][I] guest: still waiting for osxfs-data after 3h45m50.000914141s
```
3 answers · 2 votes · 32 views
Michael Dausmann · asked 5 months ago

How to perform CodePipeline ECS deployment based on Git tag

Hi fellow AWS humans,

I am running an ECS application that is automatically built and deployed using CodeCommit, CodePipeline, and ECR. The infrastructure is managed with Terraform. My setup is fairly comparable to this tutorial: https://devops-ecs-fargate.workshop.aws/en/1-introduction.html

The current CI/CD workflow is as follows:

1. Git push to the CodeCommit repo main branch
2. CodePipeline builds a container image and pushes it to the ECR registry
3. The most recently built container is deployed to ECS and the service is updated

This is fine for very simple setups, and I'm OK doing trunk-based development (which, according to this blog post, is the suggested way when working with CodePipeline: https://aws.amazon.com/blogs/devops/multi-branch-codepipeline-strategy-with-event-driven-architecture/). However, **I don't want the most recent build to be pushed *straight to production***. What I'd like to achieve is a 2-step CI/CD process (2 pipelines, 2 separate target environments):

1. Git push to the CodeCommit repo main branch
2. CodePipeline builds a container image and pushes it to the ECR registry
3. The most recently built container is deployed in the ECS **dev environment**
4. Tagging a specific commit (using **git tag**) triggers a separate CodePipeline
5. The pipeline triggered in step 4 deploys the associated container to the **production environment**

It seems that the only way to use CodePipeline's built-in features for deployment is by specifying a fixed branch name from which all VCS commits trigger a new build/deployment; I see no way of specifying a git tag (and no way of specifying any wildcards either). This blog post (https://aws.amazon.com/blogs/devops/adding-custom-logic-to-aws-codepipeline-with-aws-lambda-and-amazon-cloudwatch-events/) suggests there are ways to circumvent this shortcoming by using a Lambda and CloudWatch Events.

My questions are:

- Is there any way to achieve the illustrated CI/CD setup with AWS CodePipeline?
- If it is possible, what would be a best practice to implement it?

Thanks for any pointers and your help! Kind regards and big thanks, Maik
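The Lambda/CloudWatch Events approach mentioned in the question typically hinges on an event pattern that matches CodeCommit tag creation and then starts the production pipeline. A minimal sketch of such a pattern as a Python dict; the repository ARN is a placeholder, and the exact detail field names should be checked against a sample "CodeCommit Repository State Change" event before relying on them:

```python
# Hypothetical EventBridge/CloudWatch Events pattern matching git tag
# creation on a CodeCommit repo; a rule with this pattern could target
# a Lambda that calls codepipeline.start_pipeline_execution for prod.

def tag_event_pattern(repository_arn: str) -> dict:
    return {
        "source": ["aws.codecommit"],
        "detail-type": ["CodeCommit Repository State Change"],
        "resources": [repository_arn],  # placeholder ARN in the example below
        "detail": {
            "event": ["referenceCreated"],  # fires when a new ref is created
            "referenceType": ["tag"],       # ...and only for tags, not branches
        },
    }

pattern = tag_event_pattern("arn:aws:codecommit:eu-central-1:123456789012:my-repo")
```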
2 answers · 0 votes · 13 views
maik · asked 5 months ago