
Questions tagged with AWS CodePipeline


CodeBuild - Extremely long build times/caching

My project is a large JS/Yarn monorepo (using workspaces) that is contributed to dozens of times a day by multiple devs. Every time a PR is opened, we build, lint, and test the project, which requires running `yarn install` before anything else. Once a merge occurs to the main branch, any open PRs must merge with the new main branch, and the PR checker needs to run again before _that_ branch is merged. As you can imagine, there can be a pretty large backlog of open PRs when we're in crunch time.

I have successfully used both S3 and local caching before in smaller projects, but I can't get **local** caching working with our monorepo (this is a large project, so S3 caching is much too slow for us, as far as I can tell). When we try local caching, our build fails with:

```
EEXIST: file already exists, mkdir '/codebuild/output/src11111111/src/github.com/OUR_ORG/OUR_PROJECT/node_modules/@OUR_PACKAGE/common'
```

This behavior is documented across the web:

- https://github.com/aws-samples/aws-codebuild-samples/issues/8
- https://stackoverflow.com/questions/55890275/aws-codebuild-does-not-work-with-yarn-workspaces

Some of the remedies suggest S3 caching, but as mentioned this is a large project and we update packages at least once every two weeks (some weeks, multiple times), and the initial upload more than doubles our build time to 40 min. Downloading doesn't save us much time either.

- Are there any reasonable steps we can take to cache our node_modules directories so we don't have to pull from Yarn every time?
- Are there any other solutions to speed up these build times (we're currently at ~14 min after optimizing other parts of the build process)?
- Do you have any example (large) monorepos you can point to as a template?
- Any other tips to speed up our build times?
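A hedged sketch of one way around the EEXIST symlink problem: persist Yarn's package cache instead of the node_modules trees, so the workspace symlinks are recreated on every build and only package tarballs are cached. Paths and flags here are illustrative and assume local custom caching is enabled on the CodeBuild project:

```yaml
version: 0.2
phases:
  install:
    commands:
      # Point yarn at a cache directory that the buildspec cache persists.
      - yarn config set cache-folder /opt/yarn-cache
  build:
    commands:
      # Workspace symlinks under node_modules are rebuilt each run;
      # only downloaded tarballs come from the cache.
      - yarn install --frozen-lockfile --prefer-offline
      - yarn lint && yarn test
cache:
  paths:
    - /opt/yarn-cache/**/*
```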
1 answer · 1 vote · 8 views · asked 22 days ago

cdk destroy deletes the stacks from cdk.out/manifest.json, not my stage stacks

Hello AWS Community. My team has just one AWS account, so we deploy the **PreProd** and **Prod** stages to the same account. The problem: I want to delete all stacks produced by the **PreProd** stage before moving to the **Prod** stage, because of duplicate names etc. I tried the command

```
cdk destroy --app 'npx ts-node ./bin/AutBusBackend.ts' --all --force
```

but the destroy deletes the stacks *without* the **PreProd** prefix, and at that point those stacks don't exist anyway. Do you know how I can get around this and delete the stacks produced by the current stage, rather than the default stack names in cdk.out? Here is my pipeline code:

```
const repo = codecommit.Repository.fromRepositoryName(this, 'AutbusBackendRepo', 'AutbusBackend');

const pipeline = new CodePipeline(this, 'AutBusPipeline', {
  pipelineName: 'AutBusPipeline',
  synth: new ShellStep('Synth', {
    input: CodePipelineSource.codeCommit(repo, 'master'),
    commands: [
      'npm install -g npm',
      'npm install',
      'npm ci',
      'npm run build',
      'npm run cdk -- synth'
    ]
  })
});

const preProd = pipeline.addStage(new AppStage(this, 'PreProd', {
  env: { account: account, region: region }
}));

const step1 = new ShellStep('IntegrationTesting', {
  commands: [
    'npm install',
    'npm test'
  ]
});

const step2 = new ManualApprovalStep('Manual approval before Prod');

const step3 = new ShellStep('Delete deployed Stacks', {
  commands: [
    'npm install',
    'npm install -g aws-cdk',
    "cdk destroy --app 'npx ts-node ./bin/AutBusBackend.ts' --all --force"
  ]
});

// step2.addStepDependency(step1);
// step3.addStepDependency(step2);
// preProd.addPost(step3);
// preProd.addPost(step2);
// preProd.addPost(step1);

const prodStage = pipeline.addStage(new AppStage(this, 'Prod', {
  env: { account: account, region: region }
}));
```

Thanks in advance for any inspiring ideas!
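A hedged sketch of one possible fix, assuming the stage stacks appear in the synthesized assembly under stage-qualified names such as `PreProd/SomeStack` (which is what CDK Pipelines stages produce): select them with a wildcard pattern instead of `--all`, so only the PreProd stacks are destroyed:

```
# Destroy only the stacks belonging to the PreProd stage. The "PreProd/*"
# pattern matches stage-qualified stack names in the synthesized cloud
# assembly; adjust it to the actual stage id if it differs.
cdk destroy --app 'npx ts-node ./bin/AutBusBackend.ts' "PreProd/*" --force
```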
0 answers · 0 votes · 1 view · asked 2 months ago

Error: ModuleNotFoundError: No module named 'markupsafe' in AWS build

I can run the project on my local Mac, but when I use the pipeline to build it, I get this error: `Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-axjgd0da/MarkupSafe/`. The project was working well, and I did not add any new library to it. Even when I redeployed an old branch, I got the same error. Here are the build logs:

```
Collecting MarkupSafe==2.1.0 (from -r /usr/src/app/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/62/0f/52c009332fdadd484e898dc8f2acca0663c1031b3517070fd34ad9c1b64e/MarkupSafe-2.1.0.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-axjgd0da/MarkupSafe/setup.py", line 65, in <module>
        run_setup(True)
      File "/tmp/pip-build-axjgd0da/MarkupSafe/setup.py", line 44, in run_setup
        ext_modules=ext_modules if with_binary else [],
      File "/usr/lib/python3.7/site-packages/setuptools/__init__.py", line 129, in setup
        return distutils.core.setup(**attrs)
      File "/usr/lib64/python3.7/distutils/core.py", line 121, in setup
        dist.parse_config_files()
      File "/usr/lib/python3.7/site-packages/setuptools/dist.py", line 442, in parse_config_files
        ignore_option_errors=ignore_option_errors)
      File "/usr/lib/python3.7/site-packages/setuptools/config.py", line 106, in parse_configuration
        meta.parse()
      File "/usr/lib/python3.7/site-packages/setuptools/config.py", line 382, in parse
        section_parser_method(section_options)
      File "/usr/lib/python3.7/site-packages/setuptools/config.py", line 355, in parse_section
        self[name] = value
      File "/usr/lib/python3.7/site-packages/setuptools/config.py", line 173, in __setitem__
        value = parser(value)
      File "/usr/lib/python3.7/site-packages/setuptools/config.py", line 430, in _parse_version
        version = self._parse_attr(value)
      File "/usr/lib/python3.7/site-packages/setuptools/config.py", line 305, in _parse_attr
        module = import_module(module_name)
      File "/usr/lib64/python3.7/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
    ModuleNotFoundError: No module named 'markupsafe'
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-axjgd0da/MarkupSafe/
The command '/bin/sh -c pip3 install -r $DOCKER_APP_HOME/requirements.txt' returned a non-zero code: 1
make: *** [docker-build] Error 1
```
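For context, this traceback is setuptools failing to resolve MarkupSafe's `attr:`-style version from setup.cfg, which very old setuptools releases cannot do without importing the package, and it only triggers because pip is building from the sdist instead of installing a wheel. A hedged first thing to try in the Dockerfile (not verified against this project):

```
# Upgrade the build tooling before installing requirements, so pip can
# prefer prebuilt wheels and use a modern setuptools for source builds.
pip3 install --upgrade pip setuptools wheel
pip3 install -r $DOCKER_APP_HOME/requirements.txt
```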
1 answer · 0 votes · 13 views · asked 2 months ago

CDK stack: Failed to publish one or more assets (Access Denied)

Hi all. In my BuildProject/BuildSpec (in my STG account), I run this command:

- cdk deploy --require-approval never

It gives me this error:

```
myStack: deploying...
[0%] start: Publishing e988sdsf934da0d45effe675sdscb946f3e1sds68:current
[0%] check: Check s3://cdk-hnb65dds-assets-xxxxxxxx-cregion/assets/e9882ab1236873df4sdfeffe67sdfc8ce13bsdff3e1d6sdf8d68.zip
Call failed: listObjectsV2({"Bucket":"cdk-hnsd59fds-assets-xxxxxxxx-region","Prefix":"assets/e98ssdfsd87dsffsdffdsfcc8sdsdfdd6141fsdd68.zip","MaxKeys":1}) => Access Denied (code=AccessDenied)
[33%] fail: Access Denied
[33%] start: Publishing c24b999656e4fe6c609c31dfadffbcdfdfc2c86df:current
[33%] check: Check s3://cdk-hnb659fds-assets-xxxxxxxx-cregion/assets/c24b999656e4fe6c609c31dfadffbcdfdfc2c86df.zip
Call failed: listObjectsV2({"Bucket":"cdk-hnb659fds-assets-xxxxxxxx-cregion","Prefix":"assets/c24b999656e4fe6c609c31dfadffbcdfdfc2c86df.zip","MaxKeys":1}) => Access Denied (code=AccessDenied)
[66%] fail: Access Denied
[66%] start: Publishing werer56e4fe6c609c3ewrd17a4d9c3afwr6b8c2wer:current
[66%] check: Check s3://cdk-hnb659fds-assets-xxxxxxxx-cregion/assets/werer56e4fe6c609c3ewrd17a4d9c3afwr6b8c2wer.zip
Call failed: listObjectsV2({"Bucket":"cdk-hnb659fds-assets-xxxxxxxx-cregion","Prefix":"assets/werer56e4fe6c609c3ewrd17a4d9c3afwr6b8c2wer.zip","MaxKeys":1}) => Access Denied (code=AccessDenied)
[100%] fail: Access Denied

❌ myStack failed: Error: Failed to publish one or more assets. See the error messages above for more information.
    at publishAssets (/usr/local/lib/node_modules/aws-cdk/lib/util/asset-publishing.ts:27:11)
```

How can I give the CDK stack running from the BuildSpec permission to publish assets? I already added this policy to my CodeBuild service role, but I still have the same issue:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject*",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::cdk*"
            ]
        }
    ]
}
```

I also had this error:

```
User: arn:aws:sts::xxxxxx:assumed-role/codebuild-mybp-service-role/AWSCodeBuild-d1acsd11-4sad7-9sada6834ffsadbs is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:region:xxxxxxxx:function:myStack-CustomCDKBucketDeployment-l5dzxcszxA7assa because no identity-based policy allows the lambda:InvokeFunction action (Service: AWSLambda; Status Code: 403; Error Code: AccessDeniedException; Request ID: eedf2-03dfdf3-4ddsfd7-bfdg7-2dfsdff5c2dfgd0; Proxy: null)
```

I'm not sure which Lambda it wants to invoke here, or why. What are the right permissions for this? Thank you!
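A hedged note, assuming the account was bootstrapped with the modern (CDK v2) bootstrap stack: `cdk deploy` publishes assets by assuming the bootstrap roles (names like `cdk-hnb659fds-file-publishing-role-...`), not with the caller's own S3 permissions, so the usual fix is to let the CodeBuild role assume those roles rather than granting S3 actions directly:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::*:role/cdk-*"
        }
    ]
}
```

On the second error: `CustomCDKBucketDeployment...` is the custom-resource Lambda behind the CDK's `aws-s3-deployment` module (`BucketDeployment`); it is invoked while that construct deploys, and the error may also stop appearing once the deployment runs through the bootstrap deploy role instead of the CodeBuild role directly.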
2 answers · 0 votes · 28 views · asked 3 months ago

Build and deploy source from a git tag in another account

Hi Team, I have an AWS pipeline in my DEV account, and I created a second pipeline in my PROD account. I followed these articles:

1. https://prashant-48386.medium.com/cross-account-codepipeline-that-use-codecommit-from-another-aws-account-9d5ab4c892f6
2. https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html

to make the PROD pipeline use the repository of the DEV account. How can I build the source from a specific git tag rather than from a branch name? When I put the tag number in the pipeline source stage, it fails. I tried to edit the source stage in the pipeline and select the 'full clone' option, but I got this error:

`remote repository is empty for primary source and source version 63sdsde73f2e1f6sdsd7564f742csdsds91ssd1f7sdsa`

as I used a remote repository in another account (DEV). I also tried this in my buildspec:

```
...
git-credential-helper: yes
...
build:
  commands:
    - echo Build started on `date`
    - git config --global user.name $REPO_NAME
    - git config --global user.email "$REPO_NAME@xxxx.xxx"
    - git clone code_conit_remote_repo_dev_account_url/$REPO_NAME --branch=$TAG_VERSION
    - cd $REPO_NAME
```

but I got this error:

```
fatal: unable to access 'https://codecommit.region.amazonaws.com/xx/xx/xx/myRepoName/': The requested URL returned error: 403
Command did not exit successfully git clone https://codecommit.region.amazonaws.com/xx/xx/xx/$REPO_NAME --branch=$TAG_VERSION exit status 128
```

Thanks for your help.
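One hedged approach for the buildspec route: the 403 suggests the PROD build role has no pull permission on the DEV repository, so cloning by tag may work after assuming a role in the DEV account that grants `codecommit:GitPull` (the role name below is hypothetical, and it must trust the PROD CodeBuild service role):

```
# Swap temporary DEV-account credentials into the environment.
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::DEV_ACCOUNT_ID:role/CrossAccountRepoRole \
  --role-session-name clone-by-tag \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

# With git-credential-helper: yes, git signs CodeCommit requests with the
# credentials above, so a clone pinned to a tag can succeed.
git clone https://git-codecommit.REGION.amazonaws.com/v1/repos/$REPO_NAME --branch $TAG_VERSION
```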
1 answer · 0 votes · 11 views · asked 3 months ago

AWS::CodePipeline::Pipeline Action configuration field 1000 character limit

Setting up a CodeBuild action inside CodePipeline via a CloudFormation template (the AWS::CodePipeline::Pipeline resource), I keep running into a very limiting factor: the configuration fields are all limited to 1000 characters (see https://docs.aws.amazon.com/codepipeline/latest/userguide/limits.html):

```
Maximum length of the action configuration value (for example, the value of the RepositoryName configuration in the CodeCommit action configuration should be less than 1000 characters: "RepositoryName": "my-repo-name-less-than-1000-characters")
```

This limit is enough for most configuration fields, but when configuring a `CodeBuild` action, the `EnvironmentVariables` field [expects a JSON string](https://docs.aws.amazon.com/ja_jp/AWSCloudFormation/latest/UserGuide/aws-properties-codepipeline-pipeline-stages-actions.html#cfn-codepipeline-pipeline-stages-actions-configuration). That JSON string can reach 1000 characters very quickly, with as few as 10 environment variables, especially if those variables are extracted from `SECRETS_MANAGER`. For example, declaring just one variable like this:

```
{"name":"MYSERVICE_VARIABLE","value":"aws:secretsmanager:ap-northeast-1:123458087:secret:my-secret-staging-name:password","type":"SECRETS_MANAGER"}
```

is on its own 148 characters. If the pipeline requires just 5 of these secrets and maybe 2-3 more short ones, the limit is reached and deployment of the pipeline fails. I was wondering if there is any chance this limit can be reviewed once more and maybe increased to, say, a 1 MB JSON string? Failing that, this feature is usable only in the simplest of use cases... Regards, Julian.
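One workaround while the limit stands: declare the secrets in the CodeBuild project's buildspec rather than in the pipeline action's `EnvironmentVariables`, which keeps the action configuration tiny. A sketch using the buildspec's built-in `secrets-manager` mapping (the secret id and key here are illustrative):

```yaml
version: 0.2
env:
  secrets-manager:
    # VAR_NAME: "<secret-id>:<json-key>"
    MYSERVICE_VARIABLE: "my-secret-staging-name:password"
phases:
  build:
    commands:
      - echo "MYSERVICE_VARIABLE is resolved by CodeBuild at run time"
```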
2 answers · 0 votes · 3 views · asked 3 months ago

AWS CodeDeploy: STRING_VALUE can not be converted to an Integer

Using AWS CodePipeline with Source and Build stages and passing `taskdef.json` and `appspec.yaml` as artifacts, the `Amazon ECS (Blue/Green)` deployment action fails with the error:

```
STRING_VALUE can not be converted to an Integer
```

The error does not say where the problem occurs, which makes it hard to fix. For reference, the files look like this:

```yaml
# appspec.yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "my-project"
          ContainerPort: 3000
```

```json
// taskdef.json
{
  "family": "my-project-web",
  "taskRoleArn": "arn:aws:iam::1234567890:role/ecsTaskRole-role",
  "executionRoleArn": "arn:aws:iam::1234567890:role/ecsTaskExecutionRole-web",
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-project",
      "memory": "512",
      "image": "01234567890.dkr.ecr.us-east-1.amazonaws.com/my-project:a09b7d81",
      "environment": [],
      "secrets": [
        { "name": "APP_ENV", "valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:APP_ENV::" },
        { "name": "PORT", "valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:PORT::" },
        { "name": "APP_NAME", "valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:APP_NAME::" },
        { "name": "LOG_CHANNEL", "valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:LOG_CHANNEL::" },
        { "name": "APP_KEY", "valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:APP_KEY::" },
        { "name": "APP_DEBUG", "valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:APP_DEBUG::" }
      ],
      "essential": true,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "",
          "awslogs-region": "",
          "awslogs-stream-prefix": ""
        }
      },
      "portMappings": [
        { "hostPort": 3000, "protocol": "tcp", "containerPort": 3000 }
      ],
      "entryPoint": [ "web" ],
      "command": []
    }
  ],
  "requiresCompatibilities": [ "FARGATE", "EC2" ],
  "tags": [
    { "key": "project", "value": "my-project" }
  ]
}
```

Any insights on this issue are highly appreciated!
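A hedged observation rather than a confirmed diagnosis: in the ECS task definition schema, the task-level `"cpu"`/`"memory"` fields are strings, but the container-level `memory` (and `cpu`) fields must be JSON integers, and the container definition above quotes its memory value. Unquoting it is worth trying first:

```json
// Only the changed fragment of taskdef.json is shown here.
{
  "containerDefinitions": [
    {
      "name": "my-project",
      "memory": 512
    }
  ]
}
```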
2 answers · 0 votes · 7 views · asked 4 months ago

How to use AWS CDK to compile and deploy a TypeScript API with dependencies to Lambda?

I have an API in TypeScript that I would like to deploy to a Lambda function. The project has dependencies that need to be compiled on the platform it runs on, so I need a build step that can do the build on a Linux machine in AWS before deployment. How can I leverage the CDK and CodePipeline (or CodeBuild?) to do this? Following the [workshop](https://cdkworkshop.com/20-typescript.html) it's super easy to get going with a pipeline that can deploy a "hello world" Lambda, but I'm struggling to advance from there. How can I compile the TS code as a step in the pipeline, or how can I set up a step in the pipeline that pulls code from another repo and then compiles and deploys it?

I've tried the aws-cdk-lib/aws-lambda-nodejs construct, which claims to compile automatically. But so far the pipeline crashes on the second step (the build step in the pipeline's self-mutate process) because it can't find the dependencies in the Lambda.

/lambda
- package-lock.json
- package.json
- graph.js:

```
import { ApolloServer, gql } from 'apollo-server-lambda'
import { ApolloServerPluginLandingPageGraphQLPlayground } from 'apollo-server-core'

const typeDefs = gql`
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => 'Hello world!',
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  introspection: true,
  plugins: [ApolloServerPluginLandingPageGraphQLPlayground]
});

exports.handler = server.createHandler();
```

/lib:
- pipeline-stack

```
...imports...

export class PipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, '*ID', {
      pipelineName: '*NAME',
      synth: new CodeBuildStep('*ID', {
        input: CodePipelineSource.connection('*ghaccount/ghrepo', '*branch', {
          connectionArn: "*GHCONNECTION ARN"
        }),
        installCommands: ['npm install -g aws-cdk-lib'],
        commands: [
          'npm ci',
          'npm run build',
          'npx cdk synth'
        ]
      })
    })

    pipeline.addStage(new PipelineStage(this, 'Deploy', {
      env: {
        account: process.env.CDK_DEFAULT_ACCOUNT,
        region: process.env.CDK_DEFAULT_REGION
      }
    }))
  }
}
```

- pipeline-stage.ts

```
import * as cdk from 'aws-cdk-lib';
import { Construct } from "constructs";
import { LambdaStack } from './server-stack';

export class PipelineStage extends cdk.Stage {
  constructor(scope: Construct, id: string, props?: cdk.StageProps) {
    super(scope, id, props);

    // services / resources the app needs to run go here ->>
    const lambdaStack = new LambdaStack(this, 'LambdaStack');
  }
}
```

- server-stack.ts

```
export class LambdaStack extends cdk.Stack {
  public readonly api: apiGateway.RestApi

  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props)

    const graphqlLambda = new NodeLambda.NodejsFunction(this, 'adeegso-api-lambda', {
      entry: path.join(__dirname, '/../lambda/graph.ts'),
      handler: 'graph.handler',
      depsLockFilePath: "lambda/package-lock.json",
    })

    this.api = new apiGateway.LambdaRestApi(this, 'adeegso-api-endpoint', {
      handler: graphqlLambda,
    })
  }
}
```

Thanks in advance for any help provided!
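A hedged sketch of one likely remedy (not verified against this repo): `NodejsFunction` bundles the handler with esbuild when esbuild is available at synth time, and in a CDK pipeline that synth runs inside the CodeBuild step. Installing esbuild there, plus the handler's own dependencies so the bundler can resolve them, often makes this kind of "missing dependencies" failure go away. The synth step from pipeline-stack.ts above, adjusted:

```
synth: new CodeBuildStep('Synth', {
  input: CodePipelineSource.connection('*ghaccount/ghrepo', '*branch', {
    connectionArn: '*GHCONNECTION ARN'
  }),
  installCommands: [
    'npm install -g esbuild',       // enables local bundling for NodejsFunction
    'cd lambda && npm ci && cd ..'  // handler deps, resolvable by the bundler
  ],
  commands: ['npm ci', 'npm run build', 'npx cdk synth']
})
```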
1 answer · 0 votes · 14 views · asked 4 months ago

Using Elastic Beanstalk - Docker Platform with ECR - Specifying a tag via environment variable

Hi, I am trying to develop a CI/CD process using Beanstalk's Docker platform with ECR. CodePipeline performs the builds and manages ECR tags and promotions; Terraform manages the infrastructure. I am looking for an approach that allows us to use the same Dockerfile/Dockerrun.aws.json in production and non-production environments, despite wanting different tags of the same image deployed, perhaps even from different repositories (repo_name_PROD vs repo_name_DEV). Producing and moving Beanstalk bundles that only differ in a tag feels unnecessary, and the idea of dynamically rewriting Dockerfiles during the deployment process seems fragile.

What I was exploring was a simple environment variable: change which tag (commit hash) of an image is deployed based on a Beanstalk environment variable.

```
FROM 00000000000.dkr.ecr.us-east-1.amazonaws.com/repoName:${TAG}
ADD entrypoint.sh /
EXPOSE 8080 8787 9990
ENTRYPOINT [ "/entrypoint.sh" ]
```

Here TAG is the git hash of the code repository from which the artifact was produced; CodeBuild has built the code and tagged the Docker image. I understand that Docker supports this:

```
ARG TAG
FROM 00000000000.dkr.ecr.us-east-1.amazonaws.com/repo_name:${TAG}
ADD entrypoint.sh /
EXPOSE 8080 8787 9990
ENTRYPOINT [ "/entrypoint.sh" ]
```

but it requires building the image like this: `docker build --build-arg TAG=SOME_TAG .`

Am I correct in assuming this will not work with the Docker platform? I do not believe the EB Docker platform exposes a way to specify the build arg. What is standard practice for managing tagged Docker images in Beanstalk? I am a little leery of the `latest` tag, as a poorly timed auto scaling event could pull an update before it should be deployed; that just does not work in my case. And updating my Dockerfile during deployment (via `sed`) seems like asking for trouble.
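For what it's worth, producing one small bundle per tag is the common pattern, and it can be done mechanically so nothing is hand-edited: have CodeBuild render the fully qualified image URI, tag included, into a `Dockerrun.aws.json` and ship that as the Beanstalk bundle instead of templating the Dockerfile. A sketch (repository URI and port are illustrative):

```yaml
version: 0.2
phases:
  build:
    commands:
      - TAG="$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      # Render the deployment descriptor with the tag baked in.
      - >
        printf '{"AWSEBDockerrunVersion":"1","Image":{"Name":"%s:%s","Update":"true"},"Ports":[{"ContainerPort":8080}]}'
        "00000000000.dkr.ecr.us-east-1.amazonaws.com/repo_name" "$TAG"
        > Dockerrun.aws.json
artifacts:
  files:
    - Dockerrun.aws.json
```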
1 answer · 0 votes · 6 views · asked 4 months ago

Docker push fails even though docker login succeeded during AWS CodePipeline build stage

Hello, I'm preparing CI/CD using AWS CodePipeline. Unfortunately I get an error during the build stage. Below is the content of my buildspec.yml file, where:

- AWS_DEFAULT_REGION = eu-central-1
- CONTAINER_NAME = cicd-1-app
- REPOSITORY_URI = <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com/cicd-1-app

```
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto11
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - TAG="$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - IMAGE_URI=${REPOSITORY_URI}:${TAG}
  build:
    commands:
      - echo Build started on `date`
      - echo $IMAGE_URI
      - mvn clean package -Ddockerfile.skip
      - docker build --tag $IMAGE_URI .
  post_build:
    commands:
      - printenv
      - echo Build completed on `date`
      - echo $(docker images)
      - echo Pushing docker image
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com
      - docker push $IMAGE_URI
      - echo push completed
      - printf '[{"name":"%s","imageUri":"%s"}]' $CONTAINER_NAME $IMAGE_URI > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
```

I got this error:

```
[Container] 2022/01/06 19:57:36 Running command aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-central-1.amazonaws.com
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[Container] 2022/01/06 19:57:37 Running command docker push $IMAGE_URI
The push refers to repository [<ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/cicd-1-app]
37256fb2fd27: Preparing
fe6c1ddaab26: Preparing
d4dfab969171: Preparing
no basic auth credentials

[Container] 2022/01/06 19:57:37 Command did not exit successfully docker push $IMAGE_URI exit status 1
[Container] 2022/01/06 19:57:37 Phase complete: POST_BUILD State: FAILED
[Container] 2022/01/06 19:57:37 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker push $IMAGE_URI. Reason: exit status 1
```

Even though docker login succeeded, there is a "no basic auth credentials" error. Do you know what the problem could be? Best regards.
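One detail worth a close look before anything else: in the failure log, the login goes to `eu-central-1` but the push goes to a repository in `us-east-1`, so the registry receiving the push never got credentials; whatever value `REPOSITORY_URI` had at run time appears to point at `us-east-1`. A sketch that derives the login registry and region from the same variable the push uses, so the two cannot diverge:

```
- REGISTRY="${REPOSITORY_URI%%/*}"            # <ACCOUNT_ID>.dkr.ecr.<region>.amazonaws.com
- REGION="$(echo "$REGISTRY" | cut -d. -f4)"  # region parsed from the registry host
- aws ecr get-login-password --region "$REGION" | docker login --username AWS --password-stdin "$REGISTRY"
- docker push $IMAGE_URI
```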
2 answers · 0 votes · 233 views · asked 4 months ago

Amplify build error - Cannot find module '/codebuild/output/...'

Hi all. My Vue app runs fine locally and builds fine locally; however, I'm trying to build the app on Amplify using a link to my GitHub repo. The link and the clone work fine, but I'm getting an error during the build. Amplify push also works fine without problems. I've only ever used npm for all modules, along with the vue-cli and Amplify CLI, and I have no idea where to start with this. The main error seems to be:

`Cannot find module '/codebuild/output/src323788196/src/.yarn/releases/yarn-1.23.0-20210726.1745.cjs'`

I've tried `yarn install` but that does not help. I'm not sure what to do next because I've never used yarn at all in this project. My build config is standard:

```
version: 1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - npm install
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: dist
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```

The error I'm getting is:

```
[WARNING]: ✖ An error occurred when pushing the resources to the cloud
2022-01-04T06:47:49.986Z [WARNING]: ✖ There was an error initializing your environment.
2022-01-04T06:47:49.993Z [INFO]: Error: Packaging lambda function failed with the error
Command failed with exit code 1: yarn --production
internal/modules/cjs/loader.js:818
    throw err;
    ^
Error: Cannot find module '/codebuild/output/src323788196/src/.yarn/releases/yarn-1.23.0-20210726.1745.cjs'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:815:15)
    at Function.Module._load (internal/modules/cjs/loader.js:667:27)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)
    at internal/main/run_main_module.js:17:47 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}
    at runPackageManager (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-nodejs-function-runtime-provider/src/utils/legacyBuild.ts:66:13)
    at installDependencies (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-nodejs-function-runtime-provider/src/utils/legacyBuild.ts:40:3)
    at Object.buildResource [as build] (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-nodejs-function-runtime-provider/src/utils/legacyBuild.ts:13:5)
    at buildFunction (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-category-function/src/provider-utils/awscloudformation/utils/buildFunction.ts:41:36)
    at prepareResource (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:605:33)
    at async Promise.all (index 1)
    at prepareBuildableResources (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:601:10)
    at Object.run (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:173:5)
2022-01-04T06:47:50.024Z [ERROR]: !!! Build failed
2022-01-04T06:47:50.024Z [ERROR]: !!! Non-Zero Exit Code detected
```
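A guess based purely on the path in the error: something committed to the repo (a `.yarnrc` or `.yarn/` directory, perhaps added accidentally) pins a vendored Yarn release that was never checked in, and the Amplify CLI's function packaging (`yarn --production`) honours that pin. A first check, using only generic git commands:

```
# List anything yarn-related that is actually committed.
git ls-files | grep -iE '(^|/)\.yarnrc(\.yml)?$|(^|/)\.yarn/'
# If a .yarnrc points at .yarn/releases/yarn-1.23.0-20210726.1745.cjs,
# either commit that release file or remove the pin so the build image's
# default yarn (or npm) is used instead.
```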
0 answers · 0 votes · 8 views · asked 5 months ago

CodeBuild not pulling Bitbucket LFS large files

I am running CodePipeline with CodeBuild. I can pull the repo; however, I cannot pull the LFS large files. I have followed various guides on the internet, but have come to realize that CodeStar is not even trying to connect to Bitbucket LFS: in the Bitbucket history I see the regular clone request but do not see the git-lfs request. CodeStar seems to be acting as a proxy. I am enclosing my buildspec.yml and some logs. It fails at the line `git lfs pull`.

```
phases:
  install:
    commands:
      - cd /tmp/
      - curl -OJL https://github.com/git-lfs/git-lfs/releases/download/v2.13.3/git-lfs-linux-amd64-v2.13.3.tar.gz
      - tar xzf git-lfs-linux-amd64-v2.13.3.tar.gz
      - ./install.sh
      - cd $CODEBUILD_SRC_DIR
      - git lfs pull
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -f ci/Dockerfile -t system1.datagen.tech:5000/dg-simulated-reality-ecr:teststeve .
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push system1.datagen.tech:5000/dg-simulated-reality-ecr:teststeve
      - echo Writing image definitions file...
```

AWS logs:

```
[Container] 2021/12/28 14:06:47 Running command git lfs pull
batch response: Authorization error: https://codestar-connections.us-east-1.amazonaws.com/git-http/ACCOUNTNUMBER/us-east-1/ARNSTUFF/MYCOMPANY/MYREPO.git/info/lfs/objects/batch
Check that you have proper access to the repository
........
[Container] 2021/12/28 14:06:47 Command did not exit successfully git lfs pull exit status 2
[Container] 2021/12/28 14:06:47 Phase complete: INSTALL State: FAILED
[Container] 2021/12/28 14:06:47 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: git lfs pull. Reason: exit status 2
```
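If CodeStar Connections really is proxying regular git but not the LFS batch API (which is what the authorization error against `.../info/lfs/objects/batch` suggests), one workaround is to point git-lfs straight at Bitbucket with its own credentials. A sketch for the install phase, assuming a Bitbucket app password stored in Secrets Manager under a hypothetical secret named `bitbucket/lfs` with `user` and `password` keys:

```
      - BB_SECRET=$(aws secretsmanager get-secret-value --secret-id bitbucket/lfs --query SecretString --output text)
      - BB_USER=$(echo "$BB_SECRET" | python3 -c 'import json,sys; print(json.load(sys.stdin)["user"])')
      - BB_PASS=$(echo "$BB_SECRET" | python3 -c 'import json,sys; print(json.load(sys.stdin)["password"])')
      # Bypass the CodeStar proxy for LFS only; regular git objects still
      # come through the connection.
      - git config lfs.url "https://$BB_USER:$BB_PASS@bitbucket.org/MYCOMPANY/MYREPO.git/info/lfs"
      - git lfs pull
```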
0 answers · 0 votes · 7 views · asked 5 months ago

Deploy files to S3 with CodeBuild (AWS CLI)

I've tried the following pipeline:

```
CodeCommit (git) -> Codebuild (build) -> Deploy S3 (deploy)
```

But the Deploy S3 CodePipeline action is super basic and quite unusable for anything but the most basic use cases.

Let's say that `Codebuild (build)` outputs an `EXAMPLE.zip` artifact to `EXAMPLE_BUCKET/pipeline1/output`. The ZIP file has the following content:

```
index.html
code.<contenthash>.js
code.<contenthash>.js
code.<contenthash>.js
styles.<contenthash>.css
styles.<contenthash>.css
icons.<contenthash>.svg
```

# Issues

### 1. Different files need different metadata values (i.e. HTTP headers)

The issue here is that S3 Deploy allows only a single `Cache-Control` value for all the files, and no other custom metadata fields! Most file names are salted with a content hash, so they can be cached forever, e.g.:

```
Cache-Control: max-age=31536000
```

`index.html` is the entry point that references the most recent JS and CSS files, etc. It has to be fresh, e.g.:

```
Cache-Control: no-store
```

### 2. S3 Deploy automatically adds all build output artifact metadata values to extracted files

This is an issue because the metadata values automatically end up as HTTP headers and, AFAIK, there's no way to opt out of them without edge compute (Lambda@Edge or CloudFront Functions), which would be a waste of time and money!

1. It's a pointless extra ~300 bytes of weight on each file response.
2. One of them is an ARN and leaks the account number, region, pipeline name, etc. to end users.

```
x-amz-meta-codebuild-content-sha256 <loooong hash>
x-amz-meta-codebuild-buildarn <loooong arn>
x-amz-meta-codebuild-content-md5 <loooong hash>
```

These two reasons are enough for me to abandon the S3 Deploy route and use CodeBuild to manually upload the files to S3. Here's pretty much what needs to happen:

1. Download the artifact ZIP file from `EXAMPLE_BUCKET/pipeline1/output` (how do I get the artifact name from the pipeline?)
2. Uncompress the ZIP file
3. Upload all files except `index.html` to `EXAMPLE_BUCKET/files` with header `Cache-Control: max-age=31536000`
4. Upload `index.html` to `EXAMPLE_BUCKET/files` with header `Cache-Control: no-store`

Could you help me with these 4 `buildspec.yml` commands? I'm quite new to the command line.
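A sketch of roughly those four steps (the bucket name comes from the question; flags are illustrative). One design note: if the build output artifact is wired to this CodeBuild action as an input artifact, CodePipeline downloads and unzips it into the build's working directory automatically, which takes care of steps 1 and 2:

```yaml
version: 0.2
phases:
  build:
    commands:
      # 3. Everything except index.html: content-hashed, cache forever.
      - aws s3 cp . s3://EXAMPLE_BUCKET/files --recursive --exclude index.html --cache-control "max-age=31536000"
      # 4. index.html: never cache, so new deployments take effect immediately.
      - aws s3 cp index.html s3://EXAMPLE_BUCKET/files/index.html --cache-control "no-store"
```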
0 answers · 0 votes · 7 views · asked 5 months ago

How to perform CodePipeline ECS deployment based on Git tag

Hi fellow AWS humans, I am running an ECS application that is automatically built and deployed using CodeCommit, CodePipeline, and ECR. The infrastructure is managed with Terraform. My setup is fairly comparable to this tutorial: https://devops-ecs-fargate.workshop.aws/en/1-introduction.html

The current CI/CD workflow is as follows:

1. Git push to the CodeCommit repo main branch
2. CodePipeline builds a container image and pushes it to the ECR registry
3. The most recently built container is deployed to ECS and the service is updated

This is fine for very simple setups, and I'm OK doing trunk-based development (which, according to this blog post, is the suggested way when working with CodePipeline: https://aws.amazon.com/blogs/devops/multi-branch-codepipeline-strategy-with-event-driven-architecture/). However, **I don't want the most recent build to be pushed *straight to production***. What I'd like to achieve is a 2-step CI/CD process (2 pipelines, 2 separate target environments):

1. Git push to the CodeCommit repo main branch
2. CodePipeline builds a container image and pushes it to the ECR registry
3. The most recently built container is deployed to the ECS **dev environment**
4. Tagging a specific commit (using **git tag**) triggers a separate CodePipeline
5. The pipeline triggered in step 4 deploys the associated container to the **production environment**

It seems that the only way to use CodePipeline's built-in features for deployment is by specifying a fixed branch name from which all commits trigger a new build/deployment; I see no way of specifying a git tag (and no way of specifying wildcards either). This blog post (https://aws.amazon.com/blogs/devops/adding-custom-logic-to-aws-codepipeline-with-aws-lambda-and-amazon-cloudwatch-events/) suggests there are ways to circumvent this shortcoming using a Lambda and CloudWatch Events. My questions are:

- Is there any way to achieve the illustrated CI/CD setup with AWS CodePipeline?
- If it is possible: what would be a best practice to implement it?

Thanks for any pointers and your help! Kind regards and big thanks, Maik
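For reference, CodeCommit emits a "CodeCommit Repository State Change" event when a tag is created, so an EventBridge (CloudWatch Events) rule can start the production pipeline on tag pushes without the custom Lambda from the blog post. A sketch of the event pattern (the repository ARN is illustrative); the rule's target would be the production pipeline, with a role allowed to call `codepipeline:StartPipelineExecution`:

```json
{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Repository State Change"],
  "resources": ["arn:aws:codecommit:eu-central-1:111111111111:my-app-repo"],
  "detail": {
    "event": ["referenceCreated"],
    "referenceType": ["tag"]
  }
}
```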
2 answers · 0 votes · 145 views · asked 5 months ago