
Questions tagged with AWS CloudFormation


Need help getting an AWS-built tutorial pipeline to build

Hi, I am trying to get CodeBuild to work from the following AWS ML blog post: https://aws.amazon.com/blogs/machine-learning/automate-model-retraining-with-amazon-sagemaker-pipelines-when-drift-is-detected/

The article links to a CloudFormation stack that, when clicked, imports correctly into my account. When I follow the steps to run it, everything appears to build, but it soon becomes clear that the SageMaker pipelines created as part of the stack failed to build. I reached out to the authors on Twitter, and they noted: "something went stale indeed: CDK dropped support for node v12 some time back. Quick and dirty fix: pin the CDK installed version in the CodeBuild ProjectSpec."

I navigated around and found that I could force a specific version of CDK in the CodeBuild buildspec for the failed pipeline build, changing the npm commands from

```
"commands": [
  "npm install aws-cdk",
  "npm update",
  "python -m pip install -r requirements.txt"
]
```

to

```
"commands": [
  "npm install aws-cdk@1.5",  # arbitrary number; I was going to trial-and-error version numbers until something worked
  "npm update",
  "python -m pip install -r requirements.txt"
]
```

When I attempt to re-run the failed build, I get the below error:

```
Build failed to start
Build failed to start. The following error occurred: ArtifactsOverride must be set when using artifacts type CodePipelines
```

When I open the 'Build with Overrides' button and select disable artifacts (the closest option I can find to meeting the above suggestion), the build starts but still fails, presumably because it is not pulling in the necessary artifacts from a source. If there is another way to unstick this build I would be extremely grateful. This tutorial is greatly needed for a project I am working on. I am not very familiar with CodeBuild, but I am trying to get to the materials in SageMaker with some time sensitivity, as that is the focus of what I am trying to fix. Any help you can give me would be greatly appreciated; if it is something else that is wrong, please do let me know.

Other options the author suggested: "Two possible paths here: **update node to v16, python to 3.10, and then change the project image to standard 6.0**. Alternatively, pin CDK to an older version: npm install cdk@x.x.xx. Not sure which version to suggest right now, it might need some trial and error."

If I try the first suggestion, I have to switch the environment from AL2 to Ubuntu, then look for Standard 6.0. I also have to uncheck "Allow AWS CodeBuild to modify this service role so it can be used with this build project", otherwise I get an error of "Role XXX trusts too many services, expected only 1." Unchecking that lets the changes save, but I hit the same ArtifactsOverride issue when trying to run the build.

Looking for the least-friction solution to getting this tutorial to build, as it has exactly what I need to finish a project. Please advise, and thank you very much!

-----

![Build Failures in CodeBuild](/media/postImages/original/IMiX1geYsSTJOChgI88e9XXg)

Sample from log with error:

```
Running setup.py develop for amazon-sagemaker-drift-detection-deployment-pipeline
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
awscli 1.25.18 requires botocore==1.27.18, but you have botocore 1.23.54 which is incompatible.
awscli 1.25.18 requires s3transfer<0.7.0,>=0.6.0, but you have s3transfer 0.5.2 which is incompatible.
Successfully installed amazon-sagemaker-drift-detection-deployment-pipeline-0.0.1 aws-cdk.aws-applicationautoscaling-1.116.0 aws-cdk.aws-autoscaling-common-1.116.0 aws-cdk.aws-cloudwatch-1.116.0 aws-cdk.aws-iam-1.116.0 aws-cdk.aws-sagemaker-1.116.0 aws-cdk.cloud-assembly-schema-1.116.0 aws-cdk.core-1.116.0 aws-cdk.cx-api-1.116.0 aws-cdk.region-info-1.116.0 boto3-1.20.19 botocore-1.23.54 cattrs-22.1.0 constructs-3.4.67 exceptiongroup-1.0.0rc8 jsii-1.64.0 publication-0.0.3 s3transfer-0.5.2 typeguard-2.13.3
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[notice] A new release of pip available: 22.1.2 -> 22.2.2
[notice] To update, run: pip install --upgrade pip
[Container] 2022/08/13 16:20:07 Phase complete: INSTALL State: SUCCEEDED
[Container] 2022/08/13 16:20:07 Phase context status code: Message:
[Container] 2022/08/13 16:20:07 Entering phase PRE_BUILD
[Container] 2022/08/13 16:20:07 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2022/08/13 16:20:07 Phase context status code: Message:
[Container] 2022/08/13 16:20:07 Entering phase BUILD
[Container] 2022/08/13 16:20:07 Running command npx cdk synth -o dist --path-metadata false
Unexpected token '?'
[Container] 2022/08/13 16:20:07 Command did not exit successfully npx cdk synth -o dist --path-metadata false exit status 1
[Container] 2022/08/13 16:20:07 Phase complete: BUILD State: FAILED
[Container] 2022/08/13 16:20:07 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: npx cdk synth -o dist --path-metadata false. Reason: exit status 1
[Container] 2022/08/13 16:20:07 Entering phase POST_BUILD
[Container] 2022/08/13 16:20:07 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2022/08/13 16:20:07 Phase context status code: Message:
[Container] 2022/08/13 16:20:07 Expanding base directory path: dist
[Container] 2022/08/13 16:20:07 Assembling file list
[Container] 2022/08/13 16:20:07 Expanding dist
[Container] 2022/08/13 16:20:07 Skipping invalid file path dist
[Container] 2022/08/13 16:20:07 Phase complete: UPLOAD_ARTIFACTS State: FAILED
[Container] 2022/08/13 16:20:07 Phase context status code: CLIENT_ERROR Message: no matching base directory path found for dist
```
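The log above points at the underlying failure: `Unexpected token '?'` from `cdk synth` is what an older Node runtime typically prints when it hits the `??` operator, which is unsupported before Node 14, so the unpinned `npm install aws-cdk` is pulling a CDK CLI too new for the build image. The Python dependencies in the same log resolve to CDK 1.116.0, so pinning the npm CLI to the matching v1 release is one plausible target. A minimal sketch of a pinned install phase, assuming a standard buildspec layout; the exact version is an assumption to confirm by trial, as the author suggested:

```
version: 0.2
phases:
  install:
    commands:
      # Pin the CDK CLI to the same 1.x release the project's Python
      # dependencies resolve to in the log above (assumption: 1.116.0).
      - npm install aws-cdk@1.116.0
      # The original "npm update" is deliberately omitted here, since it
      # could move aws-cdk off the pinned version again.
      - python -m pip install -r requirements.txt
```

On the ArtifactsOverride error: a CodeBuild project whose artifacts type is CODEPIPELINE can only run as part of its pipeline, which is why a standalone "Retry build" demands an artifacts override. Re-running the build by clicking "Release change" on the pipeline in the CodePipeline console should avoid that error entirely.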
1 answer · 0 votes · 7 views · asked 12 hours ago

Multi-arch Docker image deployment using CDK Pipelines

I'd like to build a multi-architecture Docker image, push it to the default CDK ECR repo, and then push it to different deployment stages (stacks in separate accounts) using CDK Pipelines. I create the image using something like the following:

```
IMAGE_TAG=${AWS_ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/cdk-hnb659fds-container-assets-${AWS_ACCOUNT}-${REGION}:myTag
docker buildx build --progress=plain \
  --platform linux/amd64,linux/arm64 --push \
  --tag ${IMAGE_TAG} \
  myDir/
```

This results in three things pushed to ECR: two images and an image index (manifest). I'm then attempting to use [cdk-ecr-deployment](https://github.com/cdklabs/cdk-ecr-deployment) to copy the image to a specific stack, for example:

```
cdk_ecr_deployment.ECRDeployment(
    self,
    "MultiArchImage",
    src=cdk_ecr_deployment.DockerImageName(f"{cdk_registry}:myTag"),
    dest=cdk_ecr_deployment.DockerImageName(f"{stack_registry}:myTag"),
)
```

However, this ends up copying only the image corresponding to the platform running the CDK deployment, instead of the two images plus the manifest. There's a [feature request](https://github.com/cdklabs/cdk-ecr-deployment/issues/192) open on `cdk-ecr-deployment` to support multi-arch images. I'm hoping someone might be able to suggest a modification to the above, or some alternative that achieves the same goal: deploying the image to multiple environments using CDK Pipelines. I also tried building the images plus manifest into a tarball locally and then using the `aws_ecr_assets.TarballImageAsset` construct, but I encountered this [open issue](https://github.com/aws/aws-cdk/issues/18044) when attempting the deployment locally. I'm not sure whether `TarballImageAsset` supports a multi-arch image, as it seems like `DockerImageAsset` doesn't. Any ideas?
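Until cdk-ecr-deployment supports manifest lists, one possible workaround is to do the copy with `docker buildx imagetools create` inside a CodeBuild step of the pipeline: that command copies an image index together with every platform image it references. A rough buildspec sketch, assuming a Docker-capable CodeBuild image and that `SRC_IMAGE`, `DEST_IMAGE`, and the account variables are supplied by the pipeline (all names here are illustrative, not from the original post):

```
version: 0.2
phases:
  build:
    commands:
      # Authenticate to both the source (CDK assets) and destination registries.
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $SRC_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $DEST_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com
      # imagetools create copies the image index and all referenced
      # platform images, so the multi-arch manifest survives the copy.
      - docker buildx imagetools create --tag $DEST_IMAGE $SRC_IMAGE
```

For a cross-account copy, the destination repository's policy would also need to allow the pipeline's build role to push.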
0 answers · 0 votes · 3 views · asked a day ago

Restriction on CloudFormation StackSet with IAM condition cloudformation:TemplateUrl

I'm trying to restrict the S3 bucket used for **StackSet** templates with the IAM condition **cloudformation:TemplateUrl**, but it does not work as expected: the IAM policy applied always denies CreateStackSet. See the tested policy below. The [doc page](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#using-iam-template-conditions) explains that you can use the condition as usual, but there is a Note that is not clear to me:

![Note on the cloudformation:TemplateUrl condition from the documentation](/media/postImages/original/IMUjPviuTuSAaoxl5HvXktBQ)

For allowed CreateStackSet calls, the CloudTrail event includes the TemplateUrl in the context, so I don't understand why the condition does not work with StackSets. Thanks for your help!

```
{
    "eventVersion": "1.08",
    [...]
    "eventTime": "2022-08-09T15:42:50Z",
    "eventSource": "cloudformation.amazonaws.com",
    "eventName": "CreateStackSet",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "AWS Internal",
    "userAgent": "AWS Internal",
    "requestParameters": {
        "stackSetName": "test-deny1",
        "templateURL": "https://s3.amazonaws.com/trusted-bucket/EnableAWSCloudtrail.yml",
        "description": "Enable AWS CloudTrail. This template creates a CloudTrail trail, an Amazon S3 bucket where logs are published, and an Amazon SNS topic where notifications are sent.",
        "clientRequestToken": "1bd60a6d-f9dc-76a9-020a-f5a45f1bdf1e",
        "capabilities": [ "CAPABILITY_IAM" ]
    },
    "responseElements": {
        "stackSetId": "test-deny1:97054f39-3925-47eb-92fd-09779f32bcf6"
    },
    [...]
}
```

For reference, my IAM policy:

```
{
    "Sid": "TemplateFromTrustedBucket",
    "Effect": "Allow",
    "Action": [
        "cloudformation:CreateStackSet",
        "cloudformation:UpdateStackSet"
    ],
    "Resource": "*",
    "Condition": {
        "StringLike": {
            "cloudformation:TemplateURL": "https://s3.amazonaws.com/trusted-bucket/*"
        }
    }
}
```
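One mechanism worth noting: CloudTrail records what was in the API request, not what was in the IAM authorization context. If CloudFormation does not supply the `cloudformation:TemplateUrl` key in the authorization context for StackSet calls (which the Note in the docs may be getting at), a `StringLike` condition can never match, so the Allow never applies, and that would explain the consistent deny even though CloudTrail shows the URL. A common pattern to stay functional in that situation is to grant the actions unconditionally and add an explicit Deny that only fires when the key is present with an untrusted value, using `StringNotLikeIfExists`. A sketch in YAML policy form (verify whether the key is actually evaluated for your call path before relying on this):

```
# Deny-based variant: the Deny fires only when TemplateUrl is present in
# the request context AND points outside the trusted bucket.
- Sid: AllowStackSetActions
  Effect: Allow
  Action:
    - cloudformation:CreateStackSet
    - cloudformation:UpdateStackSet
  Resource: "*"
- Sid: DenyUntrustedTemplateBuckets
  Effect: Deny
  Action:
    - cloudformation:CreateStackSet
    - cloudformation:UpdateStackSet
  Resource: "*"
  Condition:
    StringNotLikeIfExists:
      cloudformation:TemplateUrl: https://s3.amazonaws.com/trusted-bucket/*
```

The tradeoff of `IfExists` is that requests where the key is absent (for example, a template passed by body rather than URL) are allowed, so it narrows rather than fully closes the gap.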
0 answers · 0 votes · 36 views · asked 4 days ago

Need help with security issues in a CloudFormation template

Hi folks, here is the attached CloudFormation template. Can anyone please help me by identifying the security issues inside the CF template?

```
AWSTemplateFormatVersion: '2010-09-09'
Description: Test for candidates
Parameters:
  InstanceType:
    Type: String
    Default: t2.small
    AllowedValues:
      - t2.nano
      - t2.micro
      - t2.small
  Environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - prod
  DBName:
    Description: Name of the Database
    Type: String
    Default: db1
  MasterDbUserPassword:
    Description: Database Password
    Type: String
    NoEcho: True
    MinLength: 1
    MaxLength: 8
    AllowedPattern: ^[a-zA-Z0-9]*$
  Password:
    NoEcho: True
    Type: String
    Description: New account password
    MinLength: '1'
    MaxLength: '41'
    ConstraintDescription: the password must be between 1 and 41 characters
  ImageId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
Resources:
  SpEc2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ImageId
      InstanceType: !Ref InstanceType
      SecurityGroups:
        - !Ref ServerSecurityGroup
      UserData:
        Fn::Base64: |
          #!/bin/bash
          sudo yum -y update
          sudo yum -y install httpd php php-mysqlnd
          sudo systemctl enable httpd
          sudo systemctl start httpd
          export AWS_ACCESS_KEY_ID=ESSEGINKULAKLARIVAAR  # [Hard coding of access key]
          export AWS_SECRET_ACCESS_KEY=wHenasieuFEek/34sfscC/jshsbvMAAKEYBOKBOK  # [Hard coding of secret access key]
          export AWS_DEFAULT_REGION=us-west-2
          echo "<h1>Deployed via CloudFormation</h1>" | sudo tee /var/www/html/index.html
  ServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: allow connections from specified CIDR ranges
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0  # [Traffic to the EC2 instance over HTTP is allowed from the internet]
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0  # [Traffic to the EC2 instance over SSH is allowed from the internet]
  BucketInfra:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: BucketOwnerFullControl
      BucketName: !Sub '${Environment}-file-sp-storage'
      PublicAccessBlockConfiguration:
        BlockPublicAcls: false  # [Amazon S3 will allow public ACLs for this bucket and objects in this bucket]
        IgnorePublicAcls: true
        BlockPublicPolicy: true
        RestrictPublicBuckets: true
  BucketInfraPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Ref BucketInfra
      PolicyDocument:
        Id: InfraAbc
        Version: 2012-10-17
        Statement:
          - Principal:
              AWS: '*'  # [This will allow all identities to perform any action on the S3 bucket]
            Effect: Allow
            Action: '*'
            Resource: !Sub
              - 'arn:aws:s3:::${bucketName}/*'
              - bucketName: !Ref BucketInfra
  SpRestApi:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Description: Example API
      EndpointConfiguration:
        Types:
          - REGIONAL
      Name: 'sp-api'
  SpApiGatewayMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: NONE  # [No authorization type is in place]
      HttpMethod: GET
      MethodResponses:
        - StatusCode: 200
        - StatusCode: 404
        - StatusCode: 422
        - StatusCode: 501
      ResourceId: !GetAtt SpRestApi.RootResourceId
      RestApiId: !Ref SpRestApi
  DefaultDB:
    Type: AWS::RDS::DBInstance
    Properties:
      DBName: !Ref DBName
      DBInstanceClass: db.t3.micro
      Engine: MySQL
      DeletionProtection: true
      AllocatedStorage: "20"
      MasterUsername: admin
      MasterUserPassword: !Ref MasterDbUserPassword
      MultiAZ: false
      PubliclyAccessible: true  # [Database access has been allowed publicly]
      BackupRetentionPeriod: 0  # [BackupRetentionPeriod is not set up]
  User:
    Type: AWS::IAM::User
    Properties:
      UserName: !Sub "sp-${Environment}-user"
      LoginProfile:
        Password: !Ref 'Password'
  UserPolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: simple_policy
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - "ec2:*"
              - "s3:*"
              - "cloudwatch:*"
            Resource: "*"
      Users:
        - !Ref User
  TaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      RequiresCompatibilities:
        - FARGATE
      NetworkMode: awsvpc
      ExecutionRoleArn: !Sub
        - 'arn:aws:iam::${a}:role/Role${r}ClusterExecution'
        - a: !Ref 'AWS::AccountId'
          r: !Ref ResourceTag
      TaskRoleArn: !Sub
        - 'arn:aws:iam::${a}:role/Role'
        - a: !Ref 'AWS::AccountId'
          r: !Ref ResourceTag
      ContainerDefinitions:
        - Name: !Sub
            - '${e}-${r}'
            - e: !Ref Environment
              r: !Ref ResourceTag
          Image: !Ref DockerImageId
          Environment:
            - Name: api_pass
              Value: "ssddhfkd23!!"
          Secrets:
            - Name: kvk_api_certificate
              ValueFrom: !Sub
                - 'arn:aws:ssm:${region}:${account}:parameter/kvk_api_key'
                - region: !Ref 'AWS::Region'
                  account: !Ref 'AWS::AccountId'
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Sub "{{resolve:ssm:/platform-kvk-dataloader/cfn/${ResourceTag}/LogGroup}}"
              awslogs-region: !Ref 'AWS::Region'
              awslogs-stream-prefix: ecs
  KMSKeyforRoleKvkDataloader:
    Type: AWS::KMS::Key
    Properties:
      EnableKeyRotation: true
      Description: Key used from specific role
      KeyPolicy:
        Version: '2012-10-17'
        Id: KEY-POLICY
        Statement:
          - Sid: Allow administration of the key
            Effect: Allow
            Principal: '*'  # [All of the KMS actions below are allowed to any identity]
            Action:
              - kms:Create*
              - kms:Describe*
              - kms:Enable*
              - kms:List*
              - kms:Put*
              - kms:Update*
              - kms:Revoke*
              - kms:Disable*
              - kms:Get*
              - kms:Delete*
              - kms:ScheduleKeyDeletion
              - kms:CancelKeyDeletion
              - kms:TagResource
            Resource: "*"
          - Sid: Allow use of the key
            Effect: Allow
            Principal:
              AWS:
                Fn::GetAtt:
                  - RoleClusterExecution
                  - Arn
            Action:
              - kms:Decrypt
              - kms:DescribeKey
            Resource: "*"
          - Sid: Allow use of the key
            Effect: Allow
            Principal:
              AWS:
                Fn::GetAtt:
                  - RoleKvkDataloader
                  - Arn
            Action:
              - kms:Decrypt
              - kms:DescribeKey
            Resource: "*"
  PolicyRoleKvkDataloaderClusterExecution:
    Type: 'AWS::IAM::ManagedPolicy'
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - 'ecr:BatchCheckLayerAvailability'
              - 'ecr:GetDownloadUrlForLayer'
              - 'ecr:BatchGetImage'
            Resource:
              - !Sub
                - 'arn:aws:ecr:${region}:${account}:repository/*'
                - region: !Ref 'AWS::Region'
                  account: !Ref ECRAccountId
          - Effect: Allow
            Action: '*'
            Resource: '*'
          - Effect: Allow
            Action:
              - 'logs:CreateLogStream'
              - 'logs:PutLogEvents'
            Resource:
              - !Sub
                - 'arn:aws:logs:${region}:${account}:log-group:Role${r}:*'
                - region: !Ref 'AWS::Region'
                  account: !Ref 'AWS::AccountId'
                  r: !Ref ResourceTag
          - Effect: Allow
            Action:
              - 'ssm:GetParameters'
              - 'ssm:GetParameter'
            Resource: '*'
```
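For a few of the issues flagged in the bracketed comments, here is a hedged sketch of possible remediations; the resource names are kept from the template, while the CIDR range and secret name are placeholders, not values from the original:

```
# Sketch of remediations for some of the flagged issues (placeholders marked).
ServerSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: allow HTTP from the internet, SSH only from a trusted range
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 10.0.0.0/16  # placeholder: your admin/VPN range instead of 0.0.0.0/0

DefaultDB:
  Type: AWS::RDS::DBInstance
  Properties:
    DBName: !Ref DBName
    DBInstanceClass: db.t3.micro
    Engine: MySQL
    AllocatedStorage: "20"
    MasterUsername: admin
    # Resolve the password from Secrets Manager at deploy time instead of
    # passing it as a template parameter ("MyDbSecret" is a placeholder).
    MasterUserPassword: '{{resolve:secretsmanager:MyDbSecret:SecretString:password}}'
    PubliclyAccessible: false  # keep the database off the public internet
    BackupRetentionPeriod: 7   # enable automated backups
```

The hard-coded access keys in the instance UserData are best replaced with an instance profile attached to the EC2 instance, and the `Principal: '*'` statements in the bucket and key policies scoped to specific roles.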
1 answer · 0 votes · 23 views · asked 6 days ago

S3 Access Denied 403 error

Hi AWS, I was learning about the App2Container service using this AWS workshop: https://catalog.us-east-1.prod.workshops.aws/workshops/2c1e5f50-0ebe-4c02-a957-8a71ba1e8c89/en-US. While deploying the infrastructure using the CloudFormation template provided in Step 1, I am experiencing this issue:

```
Resource handler returned message: "Your access has been denied by S3, please make sure your request credentials have permission to GetObject for application-migration-with-aws-workshop/lambda/4eb5dfa8efc17763bc41edb070cb9cd2. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: Lambda, Status Code: 403, Request ID: 95687072-37e7-4670-b715-7a0e5bdefd92)" (RequestToken: 09b159a9-c86b-72ef-5d6e-c18bbed29004, HandlerErrorCode: AccessDenied)
```

After that I updated the IAM user permissions with the following S3 actions; here is the policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::application-migration-with-aws-workshop",
                "arn:aws:s3:::application-migration-with-aws-workshop/lambda/4eb5dfa8efc17763bc41edb070cb9cd2",
                "arn:aws:s3:::application-migration-with-aws-workshop/lambda/438e5a43749a18ff0f4c7a7d0363e695"
            ]
        }
    ]
}
```

Please tell me the reason behind the failure. I know this is an Amazon-owned bucket, so what is missing from a permissions point of view? Thanks.
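Two observations that may help narrow this down. First, `s3:GetObject` only applies to object ARNs, so the bare bucket ARN in the `Resource` list has no effect (listing the bucket would need `s3:ListBucket` on the bucket ARN instead); that doesn't break anything by itself, but a cleaner split looks like the sketch below, in YAML policy form, with the bucket name taken from the error message. Second, since the bucket is Amazon-owned, its bucket policy (which you cannot change) must also allow your account's access, so identity-policy changes on your side may never be sufficient; checking the workshop's prerequisites, such as the region it expects you to deploy in, may be more productive.

```
# Sketch of the same grant with actions split by the ARN type they apply to.
Version: "2012-10-17"
Statement:
  - Sid: ReadWorkshopObjects
    Effect: Allow
    Action: s3:GetObject   # object-level action: needs object ARNs
    Resource:
      - arn:aws:s3:::application-migration-with-aws-workshop/lambda/*
  - Sid: ListWorkshopBucket
    Effect: Allow
    Action: s3:ListBucket  # bucket-level action: needs the bucket ARN
    Resource:
      - arn:aws:s3:::application-migration-with-aws-workshop
```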
2 answers · 0 votes · 56 views · asked 10 days ago