
Questions tagged with AWS CodeBuild



Is it possible to access/create other instance sizes

Hello, I guess this is more of a feature request than a question. In any case, the instance sizes available for Linux (small, medium, large) follow the typical pattern of each being roughly 2x the CPU and memory of the prior one. However, there's a *huge* gap between large and the only next available size, 2xlarge, which is roughly 10x larger (145 vs. 15 GB RAM, 72 vs. 8 vCPUs) and roughly 10x more expensive.

While almost all of our builds are just fine on the medium or large instances, we run some security scanning tools that, for one of our larger projects, need more horsepower, but not 10x more. So we're in a situation where we run builds for dozens of projects, but the single project that has to use the 2xlarge ends up representing something like 90% of our CodeBuild charges. Having a few more instance sizes between large and 2xlarge (e.g. 32 GB RAM/16 vCPUs, 64 GB RAM/32 vCPUs) would make this more manageable for us, and I've seen comments/complaints about this elsewhere from other users.

I imagine that just adding these new sizes would be the easiest fix in the short term. However, it might be nice to eventually allow execution on arbitrary instance types, with perhaps some setup/prerequisites along the lines of what's necessary to create custom build images. For example, for our one outlier project, certain aspects of the security scanning phase are single-threaded, so having the fastest possible CPU speeds things up considerably. We manually tested that build on a z1d-class instance and saw much better results.
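For context, a minimal sketch of where that size gap lives: CodeBuild exposes instance size only through a project's `computeType` enum, so moving the one heavy project up is a single environment change, but the next step up from large is the ~10x 2xlarge. The project name below is a hypothetical placeholder.

```
# Linux compute types CodeBuild currently offers:
#   BUILD_GENERAL1_SMALL    (2 vCPUs,   3 GB RAM)
#   BUILD_GENERAL1_MEDIUM   (4 vCPUs,   7 GB RAM)
#   BUILD_GENERAL1_LARGE    (8 vCPUs,  15 GB RAM)
#   BUILD_GENERAL1_2XLARGE  (72 vCPUs, 145 GB RAM)  <- the ~10x jump
# Bumping one project to the next size is a single update; note that
# update-project replaces the whole environment block, so type and image
# must be restated ("my-scan-project" is a placeholder name):
aws codebuild update-project \
  --name my-scan-project \
  --environment "type=LINUX_CONTAINER,image=aws/codebuild/standard:6.0,computeType=BUILD_GENERAL1_2XLARGE"
```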
1 answer · 0 votes · 6 views · asked 4 hours ago

Need help getting an AWS built tutorial pipeline to build

Hi, I am trying to get CodeBuild to work from the following AWS ML blog post: https://aws.amazon.com/blogs/machine-learning/automate-model-retraining-with-amazon-sagemaker-pipelines-when-drift-is-detected/

The article has a link to a CloudFormation stack that, when clicked, imports correctly into my account. When I follow the steps to run it, everything appears to build, but it then becomes clear that the SageMaker pipelines built as part of the stack failed to build. I reached out to the authors on Twitter, and they noted: "something went stale indeed: CDK dropped support for node v12 sometimes back. Quick and dirty fix: pin the CDK installed version in the CodeBuild ProjectSpec."

I navigated around and found that I could force a specific version of CDK in the CodeBuild buildspec for the failed pipeline build, changing the npm line from

```
"commands": [
  "npm install aws-cdk",
  "npm update",
  "python -m pip install -r requirements.txt"
]
```

to

```
"commands": [
  "npm install aws-cdk@1.5",  # arbitrary number; I was going to trial-and-error version numbers until something worked
  "npm update",
  "python -m pip install -r requirements.txt"
]
```

When I attempt to re-run the failed build, I get the following error:

```
Build failed to start
Build failed to start. The following error occurred: ArtifactsOverride must be set when using artifacts type CodePipelines
```

When I open the 'Build with overrides' button and select disable artifacts (the closest option I can find to the above suggestion), the build starts but still fails, presumably because it is not pulling in the necessary artifacts from a source. If there is another way to unstick this build I would be extremely grateful. This tutorial is greatly needed for a project I am working on; I am not very familiar with CodeBuild, but am trying to get to the materials in SageMaker, as that is the focus of what I am trying to fix, with some time sensitivity. ANY help you can give me would be greatly appreciated. If it is something else that is wrong, please do let me know.

Other options the author suggested: "Two possible paths here: **update node to v16, python to 3.10, and then change the project image to standard 6.0**. Alternative, pin CDK to an older version npm install cdk@x.x.xx. Not sure which version to suggest right now, it might need some trial and error."

If I try the first suggestion, I have to switch the environment from AL2 to Ubuntu, then look for standard 6.0. I also have to uncheck "Allow AWS CodeBuild to modify this service role so it can be used with this build project", otherwise I get the error "Role XXX trusts too many services, expected only 1." Unchecking that lets the changes save, but I hit the same ArtifactsOverride issue when trying to run the build.

Looking for the least-friction solution to getting this tutorial to build, as it has exactly what I need to finish a project. Please advise and thank you very much!

-----

![Build Failures in CodeBuild](/media/postImages/original/IMiX1geYsSTJOChgI88e9XXg)

Sample from the log with the error:

```
Running setup.py develop for amazon-sagemaker-drift-detection-deployment-pipeline
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
awscli 1.25.18 requires botocore==1.27.18, but you have botocore 1.23.54 which is incompatible.
awscli 1.25.18 requires s3transfer<0.7.0,>=0.6.0, but you have s3transfer 0.5.2 which is incompatible.
Successfully installed amazon-sagemaker-drift-detection-deployment-pipeline-0.0.1 aws-cdk.aws-applicationautoscaling-1.116.0 aws-cdk.aws-autoscaling-common-1.116.0 aws-cdk.aws-cloudwatch-1.116.0 aws-cdk.aws-iam-1.116.0 aws-cdk.aws-sagemaker-1.116.0 aws-cdk.cloud-assembly-schema-1.116.0 aws-cdk.core-1.116.0 aws-cdk.cx-api-1.116.0 aws-cdk.region-info-1.116.0 boto3-1.20.19 botocore-1.23.54 cattrs-22.1.0 constructs-3.4.67 exceptiongroup-1.0.0rc8 jsii-1.64.0 publication-0.0.3 s3transfer-0.5.2 typeguard-2.13.3
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[notice] A new release of pip available: 22.1.2 -> 22.2.2
[notice] To update, run: pip install --upgrade pip
[Container] 2022/08/13 16:20:07 Phase complete: INSTALL State: SUCCEEDED
[Container] 2022/08/13 16:20:07 Phase context status code: Message:
[Container] 2022/08/13 16:20:07 Entering phase PRE_BUILD
[Container] 2022/08/13 16:20:07 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2022/08/13 16:20:07 Phase context status code: Message:
[Container] 2022/08/13 16:20:07 Entering phase BUILD
[Container] 2022/08/13 16:20:07 Running command npx cdk synth -o dist --path-metadata false
Unexpected token '?'
[Container] 2022/08/13 16:20:07 Command did not exit successfully npx cdk synth -o dist --path-metadata false exit status 1
[Container] 2022/08/13 16:20:07 Phase complete: BUILD State: FAILED
[Container] 2022/08/13 16:20:07 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: npx cdk synth -o dist --path-metadata false. Reason: exit status 1
[Container] 2022/08/13 16:20:07 Entering phase POST_BUILD
[Container] 2022/08/13 16:20:07 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2022/08/13 16:20:07 Phase context status code: Message:
[Container] 2022/08/13 16:20:07 Expanding base directory path: dist
[Container] 2022/08/13 16:20:07 Assembling file list
[Container] 2022/08/13 16:20:07 Expanding dist
[Container] 2022/08/13 16:20:07 Skipping invalid file path dist
[Container] 2022/08/13 16:20:07 Phase complete: UPLOAD_ARTIFACTS State: FAILED
[Container] 2022/08/13 16:20:07 Phase context status code: CLIENT_ERROR Message: no matching base directory path found for dist
```
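For what it's worth, a hedged sketch of one way around the ArtifactsOverride error: when a project's artifacts type is CODEPIPELINE, a build started outside the pipeline has to override the artifacts setting (the console's "disable artifacts" option is this same override), and the CLI also lets you override the source so the build can pull code from somewhere other than the pipeline. The project name and S3 location below are hypothetical placeholders.

```
# Start the build directly, overriding the CODEPIPELINE-typed artifacts
# (and optionally the source), since those are only valid inside a pipeline.
# "drift-pipeline-build" and "my-bucket/source.zip" are placeholders.
aws codebuild start-build \
  --project-name drift-pipeline-build \
  --artifacts-override type=NO_ARTIFACTS \
  --source-type-override S3 \
  --source-location-override my-bucket/source.zip
```

Note the caveat from the question still applies: with artifacts disabled, the UPLOAD_ARTIFACTS phase is skipped, so this only helps diagnose the synth failure, not produce the pipeline's outputs.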
1 answer · 0 votes · 35 views · asked 5 days ago

Multi-arch Docker image deployment using CDK Pipelines

I'd like to build a multi-architecture Docker image, push it to the default CDK ECR repo, and then push it to different deployment stages (stacks in separate accounts) using CDK Pipelines. I create the image using something like the following:

```
IMAGE_TAG=${AWS_ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/cdk-hnb659fds-container-assets-${AWS_ACCOUNT}-${REGION}:myTag
docker buildx build --progress=plain \
  --platform linux/amd64,linux/arm64 --push \
  --tag ${IMAGE_TAG} \
  myDir/
```

This results in three things pushed to ECR: two images and an image index (manifest). I'm then attempting to use [cdk-ecr-deployment](https://github.com/cdklabs/cdk-ecr-deployment) to copy the image to a specific stack, for example:

```
cdk_ecr_deployment.ECRDeployment(
    self,
    "MultiArchImage",
    src=cdk_ecr_deployment.DockerImageName(f"{cdk_registry}:myTag"),
    dest=cdk_ecr_deployment.DockerImageName(f"{stack_registry}:myTag"),
)
```

However, this ends up copying only the image corresponding to the platform running the CDK deployment, instead of the two images plus the manifest. There's a [feature request](https://github.com/cdklabs/cdk-ecr-deployment/issues/192) open on `cdk-ecr-deployment` to support multi-arch images. I'm hoping someone might be able to suggest a modification to the above, or some alternative that achieves the same goal, which is to deploy the image to multiple environments using CDK Pipelines.

I also tried building the images + manifest into a tarball locally and then using the `aws_ecr_assets.TarballImageAsset` construct, but I encountered this [open issue](https://github.com/aws/aws-cdk/issues/18044) when attempting the deployment locally. I'm not sure whether `TarballImageAsset` supports a multi-arch image; it seems like `DockerImageAsset` doesn't. Any ideas?
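One hedged workaround sketch, assuming the host running the step is logged in to both ECR registries: `docker buildx imagetools create` can copy a manifest list between repositories without pulling the per-platform images, so a post-build shell step (in place of `cdk-ecr-deployment`) could mirror the full index to each stage's registry. The variable values and repo name below are placeholders.

```
# Copy the whole multi-arch index (manifest list plus references to both
# platform images) from the CDK assets repo to a stage-specific repo.
# SRC/DEST are hypothetical; both registries need a prior
# `aws ecr get-login-password | docker login` for this to succeed.
SRC=${AWS_ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/cdk-hnb659fds-container-assets-${AWS_ACCOUNT}-${REGION}:myTag
DEST=${STAGE_ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/my-stage-repo:myTag
docker buildx imagetools create --tag ${DEST} ${SRC}
```

The trade-off is that this runs outside the CDK construct tree, so it doesn't participate in CDK Pipelines' asset publishing; it would live in a pipeline shell step per stage.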
1 answer · 0 votes · 10 views · asked 6 days ago