
Questions tagged with AWS CodeBuild



Is it possible to use a non-default bridge network when running CodeBuild locally?

When I run CodeBuild locally, it creates the default Docker network and volumes, as in the output below. Is there a way to use a different bridge network and volumes instead of these default ones? I tried modifying the docker command in [codebuild_build.sh](https://github.com/aws/aws-codebuild-docker-images/blob/master/local_builds/codebuild_build.sh) to add a network (the build command output below shows `--network mylocaltestingnetwork`), but that didn't help. The reason I am trying to use a different network and volumes is that I am using [localstack](https://localstack.cloud/) alongside it, and the bridge network and volumes need to be accessible from the CodeBuild local container. If I configure localstack to use the `agent-resources_default` network created by local CodeBuild, then CodeBuild is able to access localstack. But I would like to keep the dependency external to both CodeBuild and localstack by using a separate bridge network.

```
$ ./codebuild_build.sh -i public.ecr.aws/codebuild/amazonlinux2-x86_64-standard:3.0 -a codebuild-output/ -b buildspec-local.yml -c -p localstack
Build Command:

docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=public.ecr.aws/codebuild/amazonlinux2-x86_64-standard:3.0" -e "ARTIFACTS=<repo path>/codebuild-output/" --network mylocaltestingnetwork -e "SOURCE=<repo path>" -e "BUILDSPEC=<repo path>/buildspec-local.yml" -e "AWS_CONFIGURATION=<homedir>/.aws" -e "AWS_PROFILE=localstack" -e "INITIATOR=<user>" public.ecr.aws/codebuild/local-builds:latest

Removing network agent-resources_default
Removing volume agent-resources_source_volume
Removing volume agent-resources_user_volume
Creating network "agent-resources_default" with the default driver
Creating volume "agent-resources_source_volume" with local driver
Creating volume "agent-resources_user_volume" with local driver
Creating agent-resources_agent_1 ... done
Creating agent-resources_build_1 ... done
Attaching to agent-resources_agent_1, agent-resources_build_1
agent_1  | [Container] 2022/01/14 15:57:15 Waiting for agent ping
agent_1  | [Container] 2022/01/14 15:57:17 Waiting for DOWNLOAD_SOURCE
agent_1  | [Container] 2022/01/14 15:57:23 Phase is DOWNLOAD_SOURCE
```
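For the localstack half of the setup, one way to keep the network external to both stacks is to pre-create the bridge network and declare it `external` in the localstack compose file. This is only a sketch under assumptions: the network name is the one from the question, but the compose layout itself is hypothetical, and the local CodeBuild agent's own compose project would still need its network overridden separately.

```
# docker-compose.yml sketch for localstack (hypothetical) -- assumes the
# network was created beforehand with:
#   docker network create mylocaltestingnetwork
version: "3.8"
services:
  localstack:
    image: localstack/localstack
    networks:
      - mylocaltestingnetwork
networks:
  mylocaltestingnetwork:
    external: true
```

With `external: true`, compose attaches the container to the pre-existing network instead of creating and tearing down its own, so the network's lifecycle stays independent of both projects.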
0 answers · 0 votes · 1 view · bbideep · asked 2 days ago

Docker push doesn't work even though docker login succeeded during AWS CodePipeline Build stage

Hello, I'm preparing CI/CD using AWS CodePipeline. Unfortunately I get an error during the build stage. Below is the content of my buildspec.yml file, where:

- AWS_DEFAULT_REGION = eu-central-1
- CONTAINER_NAME = cicd-1-app
- REPOSITORY_URI = <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com/cicd-1-app

```
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto11
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - TAG="$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - IMAGE_URI=${REPOSITORY_URI}:${TAG}
  build:
    commands:
      - echo Build started on `date`
      - echo $IMAGE_URI
      - mvn clean package -Ddockerfile.skip
      - docker build --tag $IMAGE_URI .
  post_build:
    commands:
      - printenv
      - echo Build completed on `date`
      - echo $(docker images)
      - echo Pushing docker image
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com
      - docker push $IMAGE_URI
      - echo push completed
      - printf '[{"name":"%s","imageUri":"%s"}]' $CONTAINER_NAME $IMAGE_URI > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
```

I got this error:

```
[Container] 2022/01/06 19:57:36 Running command aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[Container] 2022/01/06 19:57:37 Running command docker push $IMAGE_URI
The push refers to repository [<ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/cicd-1-app]
37256fb2fd27: Preparing
fe6c1ddaab26: Preparing
d4dfab969171: Preparing
no basic auth credentials
[Container] 2022/01/06 19:57:37 Command did not exit successfully docker push $IMAGE_URI exit status 1
[Container] 2022/01/06 19:57:37 Phase complete: POST_BUILD State: FAILED
[Container] 2022/01/06 19:57:37 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker push $IMAGE_URI. Reason: exit status 1
```

Even though docker logged in successfully, there is a "no basic auth credentials" error. Do you know what the problem could be? Best regards.
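The log above is worth a second look: the `docker login` targets the `eu-central-1` registry, while the push goes to a repository in `us-east-1`, and the registry being pushed to has no credentials. A hedged sketch (the account ID is a placeholder) of deriving both the registry host and the region from `REPOSITORY_URI` itself, so that login and push can never diverge:

```shell
# Hypothetical value; in the build this comes from the pipeline environment.
REPOSITORY_URI="111122223333.dkr.ecr.eu-central-1.amazonaws.com/cicd-1-app"

# The registry host is everything before the first "/",
# and the region is the 4th dot-separated label of that host.
REGISTRY_HOST="${REPOSITORY_URI%%/*}"
REGION="$(echo "$REGISTRY_HOST" | cut -d. -f4)"

echo "$REGISTRY_HOST"   # 111122223333.dkr.ecr.eu-central-1.amazonaws.com
echo "$REGION"          # eu-central-1

# The post_build login could then be written as (sketch, not verified
# against this particular pipeline):
#   aws ecr get-login-password --region "$REGION" | \
#     docker login --username AWS --password-stdin "$REGISTRY_HOST"
```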
2 answers · 0 votes · 8 views · KM · asked 10 days ago

codebuild not pulling bitbucket lfs large files

I am running CodePipeline with CodeBuild. I can pull the repo; however, I cannot pull the LFS large files. I have followed various guides on the internet but have come to realize that CodeStar is not trying to connect to Bitbucket LFS. In the Bitbucket history I see the regular clone request but do not see the git lfs request. CodeStar seems to be acting as a proxy. I am enclosing my buildspec.yml and some logs. It fails at the line `git lfs pull`.

```
phases:
  install:
    commands:
      - cd /tmp/
      - curl -OJL https://github.com/git-lfs/git-lfs/releases/download/v2.13.3/git-lfs-linux-amd64-v2.13.3.tar.gz
      - tar xzf git-lfs-linux-amd64-v2.13.3.tar.gz
      - ./install.sh
      - cd $CODEBUILD_SRC_DIR
      - git lfs pull
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -f ci/Dockerfile -t system1.datagen.tech:5000/dg-simulated-reality-ecr:teststeve .
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push system1.datagen.tech:5000/dg-simulated-reality-ecr:teststeve
      - echo Writing image definitions file...
```

AWS logs:

```
[Container] 2021/12/28 14:06:47 Running command git lfs pull
batch response: Authorization error: https://codestar-connections.us-east-1.amazonaws.com/git-http/ACCOUNTNUMBER/us-east-1/ARNSTUFF/MYCOMPANY/MYREPO.git/info/lfs/objects/batch
Check that you have proper access to the repository
........
[Container] 2021/12/28 14:06:47 Command did not exit successfully git lfs pull exit status 2
[Container] 2021/12/28 14:06:47 Phase complete: INSTALL State: FAILED
[Container] 2021/12/28 14:06:47 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: git lfs pull. Reason: exit status 2
```
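The log shows `git lfs pull` hitting the CodeStar Connections git-http proxy, which rejects the LFS batch endpoint. One commonly suggested workaround (a sketch, not a verified fix: the URL is built from the placeholders in the question, and credentials would still need to be supplied securely, e.g. from Secrets Manager, rather than committed) is to point LFS straight at Bitbucket instead of at the proxy:

```
# .lfsconfig sketch (hypothetical URL), committed at the repo root so that
# git-lfs resolves objects against Bitbucket directly, bypassing the
# CodeStar Connections proxy:
[lfs]
    url = https://bitbucket.org/MYCOMPANY/MYREPO.git/info/lfs
```

`lfs.url` in `.lfsconfig` is standard git-lfs configuration; the open question for this setup is authentication, since the CodeStar connection's credentials do not apply to the direct Bitbucket endpoint.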
0 answers · 0 votes · 3 views · Steve · asked 19 days ago

Deploy files to S3 with CodeBuild (AWS CLI)

I've tried the following pipeline:

```
CodeCommit (git) -> CodeBuild (build) -> Deploy S3 (deploy)
```

But the Deploy S3 CodePipeline action is super basic and quite unusable for anything but the most basic use cases.

Let's say that `CodeBuild (build)` outputs an `EXAMPLE.zip` artifact to `EXAMPLE_BUCKET/pipeline1/output`. The ZIP file has the following content:

```
index.html
code.<contenthash>.js
code.<contenthash>.js
code.<contenthash>.js
styles.<contenthash>.css
styles.<contenthash>.css
icons.<contenthash>.svg
```

# Issues

### 1. Different files need different metadata values (i.e. HTTP headers)

The issue here is that S3 Deploy allows only a single `Cache-Control` value for all the files, and no other custom metadata fields! Most file names are salted with a content hash, so they can be cached forever, e.g.:

```
Cache-Control: max-age=31536000
```

`index.html` is the entry point that references the most recent JS, CSS, etc. files. It has to stay fresh, e.g.:

```
Cache-Control: no-store
```

### 2. S3 Deploy automatically adds all build output artifact metadata values to extracted files

This is an issue because the metadata values automatically end up as HTTP headers, and AFAIK there's no way to opt out of them without edge compute (Lambda@Edge or CloudFront Functions), which would be a waste of time and money:

1. it's a pointless extra ~300 B of weight on each file's response
2. one of them is an ARN and leaks the account number, region, pipeline name, etc. to end users

```
x-amz-meta-codebuild-content-sha256 <loooong hash>
x-amz-meta-codebuild-buildarn <loooong arn>
x-amz-meta-codebuild-content-md5 <loooong hash>
```

These two reasons are enough for me to abandon the Deploy S3 route and use CodeBuild to manually upload files to S3. Here's pretty much what needs to happen:

1. Download the artifact ZIP file from `EXAMPLE_BUCKET/pipeline1/output` (how do I get the artifact name from the pipeline?)
2. Uncompress the ZIP file
3. Upload all files except `index.html` to `EXAMPLE_BUCKET/files` with the header `Cache-Control: max-age=31536000`
4. Upload `index.html` to `EXAMPLE_BUCKET/files` with the header `Cache-Control: no-store`

Could you help me with these 4 `buildspec.yml` commands? I'm quite new to the command line.
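The four steps above can be sketched as CodeBuild commands. This is a sketch under assumptions: when CodeBuild runs as a CodePipeline action, the input artifact is already downloaded and unzipped into the working directory (covering steps 1 and 2), `EXAMPLE_BUCKET` is the question's placeholder, and the `--cache-control` / `--exclude` flags are real `aws s3` options while the paths are hypothetical:

```
# buildspec.yml sketch -- steps 3 and 4
version: 0.2
phases:
  build:
    commands:
      # Step 3: everything except index.html, cacheable forever
      - aws s3 sync . s3://EXAMPLE_BUCKET/files --exclude "index.html" --cache-control "max-age=31536000"
      # Step 4: index.html, never cached
      - aws s3 cp index.html s3://EXAMPLE_BUCKET/files/index.html --cache-control "no-store"
```

Ordering matters here: uploading `index.html` last means it only starts referencing the new hashed assets once they are already in the bucket.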
0 answers · 0 votes · 5 views · Screen name · asked 24 days ago

Python cfn_tools module won't load in CodeBuild

I have been getting the following error in my CodeBuild execution: `ModuleNotFoundError: No module named 'cfn_tools'`. Interestingly, the first time I ran this through CodeBuild with this module I had no issues. It only started happening after my next GitHub push kicked off the pipeline. The files related to this didn't change, and the modifications in that push were to an unrelated section of the repo. I have since tried:

* `pip install cfn-tools` and `pip3 install cfn-tools`, which reported that the module was already installed. These were added to the BuildSpec section. No success; still got the error.
* I added a requirements.txt file (created using `pip freeze`, also within the BuildSpec). The module shows up in it, but I still get the error.
* Originally used runtime version 3.7 of Python, then tried 3.9, which still didn't work.

Any assistance would be appreciated.

UPDATE: To add more information: I download a .tar.gz file from S3 that contains the Python scripts I need for this build. I extract the .tar.gz, then run the script that is having the error. Here is the output from installing cfn-tools and running pip freeze. You will see below that cfn-tools installs and appears in the pip freeze output, yet when I run my script it gives me the above error.
```
[Container] 2021/12/18 20:02:33 Running command pip3 install cfn-tools
Collecting cfn-tools
  Downloading cfn-tools-0.1.6.tar.gz (3.9 kB)
Requirement already satisfied: Click>=6.0 in /root/.pyenv/versions/3.9.5/lib/python3.9/site-packages (from cfn-tools) (7.1.2)
Requirement already satisfied: boto3>=1.3.1 in /root/.pyenv/versions/3.9.5/lib/python3.9/site-packages (from cfn-tools) (1.18.58)
Requirement already satisfied: s3transfer<0.6.0,>=0.5.0 in /root/.pyenv/versions/3.9.5/lib/python3.9/site-packages (from boto3>=1.3.1->cfn-tools) (0.5.0)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /root/.pyenv/versions/3.9.5/lib/python3.9/site-packages (from boto3>=1.3.1->cfn-tools) (0.10.0)
Requirement already satisfied: botocore<1.22.0,>=1.21.58 in /root/.pyenv/versions/3.9.5/lib/python3.9/site-packages (from boto3>=1.3.1->cfn-tools) (1.21.58)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /root/.pyenv/versions/3.9.5/lib/python3.9/site-packages (from botocore<1.22.0,>=1.21.58->boto3>=1.3.1->cfn-tools) (2.8.2)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /root/.pyenv/versions/3.9.5/lib/python3.9/site-packages (from botocore<1.22.0,>=1.21.58->boto3>=1.3.1->cfn-tools) (1.26.7)
Requirement already satisfied: six>=1.5 in /root/.pyenv/versions/3.9.5/lib/python3.9/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.22.0,>=1.21.58->boto3>=1.3.1->cfn-tools) (1.16.0)
Building wheels for collected packages: cfn-tools
  Building wheel for cfn-tools (setup.py): started
  Building wheel for cfn-tools (setup.py): finished with status 'done'
  Created wheel for cfn-tools: filename=cfn_tools-0.1.6-py3-none-any.whl size=5456 sha256=9cd3471445f6552165508b0bd797498a535d3ef264059c9739cc6b72f7b96a26
  Stored in directory: /root/.cache/pip/wheels/51/1f/6f/f50a0600d46c29ca31519968efefdc4547e8cda7a756584837
Successfully built cfn-tools
Installing collected packages: cfn-tools
Successfully installed cfn-tools-0.1.6
WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 21.1.2; however, version 21.3.1 is available.
You should consider upgrading via the '/root/.pyenv/versions/3.9.5/bin/python3.9 -m pip install --upgrade pip' command.

[Container] 2021/12/18 20:02:36 Running command pip3 freeze
arrow==1.2.0
attrs==21.2.0
aws-lambda-builders==1.8.1
aws-sam-cli==1.33.0
aws-sam-translator==1.39.0
awscli==1.20.58
backports.entry-points-selectable==1.1.0
binaryornot==0.4.4
boto3==1.18.58
botocore==1.21.58
certifi==2021.10.8
**cfn-tools==0.1.6**
chardet==4.0.0
chevron==0.14.0
click==7.1.2
colorama==0.4.3
cookiecutter==1.7.3
dateparser==1.1.0
distlib==0.3.3
docker==4.2.2
docutils==0.15.2
filelock==3.3.0
Flask==1.1.4
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.3
jinja2-time==0.2.0
jmespath==0.10.0
jsonschema==3.2.0
MarkupSafe==2.0.1
pipenv==2021.5.29
platformdirs==2.4.0
poyo==0.5.0
pyasn1==0.4.8
pyrsistent==0.18.0
python-dateutil==2.8.2
python-slugify==5.0.2
pytz==2021.3
PyYAML==5.4.1
regex==2021.10.8
requests==2.25.1
rsa==4.7.2
s3transfer==0.5.0
serverlessrepo==0.1.10
six==1.16.0
text-unidecode==1.3
tomlkit==0.7.2
tzlocal==3.0
urllib3==1.26.7
virtualenv==20.8.1
virtualenv-clone==0.5.7
watchdog==2.1.2
websocket-client==1.2.1
Werkzeug==1.0.1
```
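One classic cause of this symptom is that `pip3` installs into a different interpreter than the one that later runs the script (note the pyenv paths and the "Running pip as root" warning in the log). A small, hedged debugging step is to have the build report which interpreter it resolves and whether the module is importable there; whether `cfn_tools` reports importable depends entirely on the environment it runs in. It may also be worth checking whether the script actually expects the `cfn_tools` module shipped by the `cfn-flip` package rather than the `cfn-tools` distribution, since the two are different projects.

```shell
# Show which interpreter and pip the build's PATH resolves to -- a mismatch
# between these two is a common cause of "installed but not importable".
command -v python3
command -v pip3

# Ask the interpreter itself whether 'cfn_tools' is importable
# (prints True or False without raising ImportError).
python3 -c 'import importlib.util; print(importlib.util.find_spec("cfn_tools") is not None)'
```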
2 answers · 0 votes · 2 views · AWS-User-5037180 · asked a month ago

Need help in configuring and accessing env variables in aws code build

Hi all, I have the following situation. I'm new to CodeBuild. What I'm trying to achieve is to define environment variables in the 'environment variables' tab in CodeBuild and use them in buildspec.yaml, so that the React app can access them via `process.env.REACT_APP_SOME_SPACE`.

buildspec.yaml:

```
env:
  variables:
    REACT_APP_SOME_TOKEN: '${SOME_TOKEN}' # I understand this is plain text
    REACT_APP_SOME_SPACE: ${SOME_SPACE}
    REACT_APP_BASE_URL: 'https://cf-api.oasissystems.com.au/api/v1'
    REACT_APP_REQUEST_TIMEOUT: '10000'
    REACT_APP_SERVICE_API_KEY: '${SERVICE_API_KEY}'
...
phases:
  install:
    commands:
      - echo "Building ${CODEBUILD_WEBHOOK_TRIGGER}"
```

What I see in `process.env.REACT_APP_SOME_TOKEN` is `${SOME_TOKEN}`, or whatever else is provided as plain text, but not the variable defined in the environment tab. I tried the following variations:

- `REACT_APP_SOME_SPACE: ${SOME_SPACE}`
- `REACT_APP_SOME_SPACE: '${SOME_SPACE}'`
- `REACT_APP_SOME_SPACE: {SOME_SPACE}`

Questions:

1. Is this the correct way of doing it? If not, please advise.
2. What are other ways of defining secret keys in the env variables, and how do I refer to them via process.env in the React app?

Please see the attachment for what I mean by the environment tab in CodeBuild. Please advise.

Edited by: baranit on May 20, 2020 2:32 AM
Edited by: baranit on May 21, 2020 1:27 AM
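The behavior described above matches how buildspec treats `env.variables`: the values are literal plain text, not interpolated. Variables set in the console's environment tab are already present in the shell that runs `commands`, so they don't need to be redeclared there, and secrets have their own `parameter-store` / `secrets-manager` sections. A sketch, with the SSM parameter name and build command assumed:

```
# buildspec.yml sketch -- hypothetical parameter name and build step
version: 0.2
env:
  parameter-store:
    # Resolved from SSM Parameter Store at build time
    REACT_APP_SERVICE_API_KEY: /my-app/service-api-key
phases:
  build:
    commands:
      # A console-defined variable such as SOME_SPACE is already in the
      # environment here; expose it under the CRA naming convention:
      - export REACT_APP_SOME_SPACE="$SOME_SPACE"
      - npm run build
```

Note that Create React App bakes `REACT_APP_*` values into the bundle at build time, so they only need to be present in the CodeBuild shell when `npm run build` executes.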
5 answers · 0 votes · 1 view · baranit · asked 2 years ago

Use of VPC times out when downloading source from s3.

Hello y'all! I have an AWS CodePipeline where the source comes from AWS CodeCommit and is then built using AWS CodeBuild. All my services are under a VPC, and that is the only way to reach Redis and PostgreSQL. What CodeBuild is working with is a NodeJS application created using CodeStar. I went ahead and created a couple of new endpoints on my NodeJS application, created the tests with the default test library, committed my changes, and pushed them.

First, my VPC has a route table pointing to an Internet Gateway. When I attach the VPC and both of my private subnets to the environment of my CodeBuild project and click "Validate VPC Settings", I get:

```
The VPC with ID vpc-XYZ might not have an internet connection because the provided subnet with ID subnet-XYZ is public. Provide a private subnet with the 0.0.0.0/0 destination for the target NAT gateway and try again.
```

After seeing this, I change the route table to point to a NAT gateway instead. I go back to the CodeBuild settings and get the following error:

```
The VPC with ID vpc-XYZ might not have an internet connection. CodeBuild cannot find the 0.0.0.0/0 destination for the target internet gateway with subnet ID subnet-XYZ.
```

This is because AWS does not allow me to have two routes with the same destination, in this case 0.0.0.0/0. With this problem, I keep getting an error in my CodeBuild details:

```
CLIENT_ERROR: RequestError: send request failed caused by: Get https://aws-codestar-us-west-2-USERid-admin-api-pipe.s3.us-west-2.amazonaws.com/data-admin-api-Pipel/data-admin/mYX264d: dial tcp 52.3.2.1:443: i/o timeout for primary source and source version arn:aws:s3:::aws-codestar-us-west-2-USEID-admin-api-pipe/data-admin-api-Pipel/data-admin/mYX264d
```

What am I doing wrong? Did I mess up my VPC? I can still access my services on my local machine.

Edited by: MrBaxt0rz on Apr 28, 2020 3:26 PM
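The two validation errors point at the same underlying issue: a single route table can only carry one `0.0.0.0/0` route, so switching that route between the Internet Gateway and a NAT gateway just moves the problem. The usual arrangement is two subnets with two separate route tables (a sketch of the intended layout, with hypothetical IDs):

```
Public subnet  (route table A):  0.0.0.0/0 -> igw-xxxxxxxx   # hosts the NAT gateway
Private subnet (route table B):  0.0.0.0/0 -> nat-xxxxxxxx   # attach CodeBuild here
```

With CodeBuild attached only to the private subnet, its requests to S3 (the timed-out source download above) egress through the NAT gateway sitting in the public subnet, while Redis and PostgreSQL remain reachable inside the VPC.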
2 answers · 0 votes · 1 view · MrBaxt0rz · asked 2 years ago