
Questions tagged with AWS CloudFormation


Annoying HLS Playback Problem On Windows But Not iOS

Hello All, I am getting up to speed with CloudFront and S3 for VOD. I have used the CloudFormation template, uploaded an MP4, and obtained the key for the m3u8 file. I create a distribution in CloudFront and embed it in my webpage. For the most part, it works great, but there is a significantly long buffering event during the first few seconds. This problem does not exist when I play the video on my iOS device. And strangely, it does not happen when I play it in Akamai's HLS tester on my Windows 11 PC using Chrome. The problem seems to only occur when I play it from my website, using any browser, on my Windows 11 PC.

Steps I take to provoke the issue: open an Incognito tab in Chrome and navigate to my website; my player is set to auto play, so it auto plays. The video starts out a bit fuzzy, then stops for a second, restarts with great resolution, and stays that way until the end of the video. If I play it again, no problems at all, but that is to be expected; I assume there is a local cache.

Steps I have tried to fix / clues: I have tried different segment lengths by modifying the Lambda function created when the stack was formed by the template. The default was 5. At that setting, the fuzzy aspect lasted the longest but the buffering event seemed slightly shorter. At 1 and 2, the fuzziness is far shorter but the buffering event is notably longer.

One thought: could this be related to the video player I am using? I wanted to use AWS IVS but could not get it working the first go around, so I tried amazon-ivs-videojs. That worked immediately, except for the buffering issue. And the buffering issue seems to go away when I test the distribution via the Akamai HLS tester.

As always, much appreciation for reading this question and any time spent pondering it.
0 answers · 0 votes · 4 views
Redbone · asked 2 days ago

CloudFormation AWSEBSecurityGroup VPCIdNotSpecified - even though VpcId is specified?

I am trying to create a CloudFormation stack with a template that another team has created. It creates an RDS instance, an Elastic Beanstalk application, Lambdas, and an API Gateway. Their template works for them, but they were creating a VPC + subnets + security groups in the template. I already have a VPC created, as well as 2 subnets that I need to use. This is the template code:

```yaml
MCTEBApp:
  Type: AWS::ElasticBeanstalk::Application
  Properties:
    Description: ""
MCTEBVersion:
  Type: AWS::ElasticBeanstalk::ApplicationVersion
  Properties:
    ApplicationName: !Ref MCTEBApp
    Description: ""
    SourceBundle:
      S3Bucket: !ImportValue
        'Fn::Sub': "${CICDStackName}-CodeBucket"
      S3Key: "web/docker-compose.yml"
MCTEBEnv:
  Type: AWS::ElasticBeanstalk::Environment
  Properties:
    ApplicationName: !Ref MCTEBApp
    Description: ""
    SolutionStackName: ""
    OptionSettings:
      - Namespace: aws:autoscaling:launchconfiguration
        OptionName: InstanceType
        Value: t1.micro
      - Namespace: aws:elasticbeanstalk:environment
        OptionName: EnvironmentType
        Value: SingleInstance
      - Namespace: aws:autoscaling:launchconfiguration
        OptionName: IamInstanceProfile
        Value: aws-elasticbeanstalk-ec2-role
      - Namespace: aws:elasticbeanstalk:environment
        OptionName: ServiceRole
        Value: aws-elasticbeanstalk-service-role
    Tier:
      Name: WebServer
      Type: Standard
    VersionLabel: !Ref MCTEBVersion
MCTEBConfig:
  Type: AWS::ElasticBeanstalk::ConfigurationTemplate
  Properties:
    ApplicationName: !Ref MCTEBApp
    Description: ""
    SolutionStackName: ""
    OptionSettings:
      - Namespace: aws:ec2:vpc
        OptionName: VPCId
        Value: vpc-###
      - Namespace: aws:ec2:vpc
        OptionName: Subnets
        Value: subnet-###
      - Namespace: aws:ec2:vpc
        OptionName: ELBSubnets
        Value: subnet-###
      - Namespace: aws:autoscaling:launchconfiguration
        OptionName: SecurityGroups
        Value: !Ref MCTEBSecurityGroup
MCTEBSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: RDS allows ingress from EC2 instances in this group.
    VpcId: vpc-###
```

The Elastic Beanstalk instance is failing to be created. The logical ID is AWSEBSecurityGroup and the status reason is:

> No default VPC for this user (Service: AmazonEC2; Status Code: 400; Error Code: VPCIdNotSpecified; Request ID: ###; Proxy: null)

I am not sure what I need to change to make this work. There is no option of re-creating a default VPC due to security restrictions.
2 answers · 0 votes · 10 views
AWS-User-7464390 · asked 4 days ago
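One thing that stands out in the template above, for anyone hitting the same error: the aws:ec2:vpc settings live on the MCTEBConfig configuration template, but MCTEBEnv never references it, so the environment launches with no VPC settings at all and Elastic Beanstalk falls back to looking for a (nonexistent) default VPC. A minimal sketch of the fix, assuming the rest of the template stays as-is; note that TemplateName and SolutionStackName are mutually exclusive on the environment, so SolutionStackName stays on the configuration template:

```yaml
MCTEBEnv:
  Type: AWS::ElasticBeanstalk::Environment
  Properties:
    ApplicationName: !Ref MCTEBApp
    TemplateName: !Ref MCTEBConfig  # pulls in the aws:ec2:vpc option settings
    Tier:
      Name: WebServer
      Type: Standard
    VersionLabel: !Ref MCTEBVersion
```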

How to connect a Load balancer and an Interface VPC Endpoint together using CDK?

Acronym legend:

* ALB - ApplicationLoadBalancer
* ATG - ApplicationTargetGroup aka Target Group
* VPC - Virtual Private Cloud

**Our situation:** Using the AWS Console manually, it was shown that using Route 53 to an ALB (Application Load Balancer) to a private Interface VPC Endpoint to a private REST API Gateway to a private Lambda works well. (The ALB and a gateway custom domain name exist due to HTTPS and the needed certificate.) The ALB needs a Target Group which targets the IP addresses of the Interface VPC Endpoint. (We tried using InstanceIdTarget with the endpoint's vpcEndpointId, but that failed with the error *Instance ID 'vpce-WITHWHATEVERHERE' is not valid*.)

Using CDK, we created the following (among other things) using the aws_elasticloadbalancingv2 module:

* ApplicationLoadBalancer (ALB)
* ApplicationTargetGroup (ATG) aka Target Group

We added a listener to the ALB. We added the Target Group to the listener.

**It's not clear how to get the IP addresses from the VPC endpoint. We want to add the IP addresses to the ATG aka Target Group using the targets property.** How to get the IP addresses of the Interface VPC Endpoint via CDK?

A sample of resources we've used:

* https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
* https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_elasticloadbalancingv2-readme.html
* https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_elasticloadbalancingv2.ApplicationLoadBalancer.html
* https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_elasticloadbalancingv2.ApplicationTargetGroup.html
* https://stackoverflow.com/questions/57267594/how-to-get-privateipaddress-of-vpc-endpoint-in-cdk
* https://medium.com/codex/aws-private-api-gateway-with-custom-domain-names-350fee48b406 - the approach we want in general.

We're using the latest available as of this writing (AWS CDK 2.5.0).
1 answer · 0 votes · 8 views
FinneyCanHelp · asked 5 days ago
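For anyone following up: one pattern that fits here (and matches the Stack Overflow thread linked above) is an AwsCustomResource that calls DescribeNetworkInterfaces on the endpoint's ENIs at deploy time, then feeds the resolved addresses to IpTarget. A sketch, assuming `endpoint` is the InterfaceVpcEndpoint and `targetGroup` the ATG from the question, and that the endpoint spans two subnets (one getResponseField per ENI):

```typescript
import { custom_resources as cr } from 'aws-cdk-lib';
import { IpTarget } from 'aws-cdk-lib/aws-elasticloadbalancingv2-targets';

// Resolve the endpoint's ENI private IPs at deploy time via the EC2 API.
const eniLookup = new cr.AwsCustomResource(this, 'EndpointEniIps', {
  onUpdate: {
    service: 'EC2',
    action: 'describeNetworkInterfaces',
    parameters: { NetworkInterfaceIds: endpoint.vpcEndpointNetworkInterfaceIds },
    physicalResourceId: cr.PhysicalResourceId.of('EndpointEniIps'),
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

// One IpTarget per ENI; the index count must match the endpoint's subnet count.
targetGroup.addTarget(
  new IpTarget(eniLookup.getResponseField('NetworkInterfaces.0.PrivateIpAddress')),
  new IpTarget(eniLookup.getResponseField('NetworkInterfaces.1.PrivateIpAddress')),
);
```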

Firehose to S3 with One Record Per Line

Hey all, on this [post](https://forums.aws.amazon.com/thread.jspa?threadID=244858&tstart=0&messageID=981022#981022) there is a solution to have a target rule on a Firehose add a newline char to every JSON event. However, the solution is for the JS CDK version and doesn't work for the Python version (1.134.0). We tried to find a way to implement this solution in Python, but it seems that the CDK doesn't map all the needed properties from JS to Python. For now, we have a very ugly workaround that manipulates the JSON template before sending it to CloudFormation.

To create the target Firehose we use the code below, where the problem is the RuleTargetInput, which has just a few options and doesn't enable a custom InputTransformerProperty:

```python
firehose_target = KinesisFirehoseStream(
    stream=self.delivery_stream,
    # Python CDK is not allowing Custom CfnRule.InputTransformerProperty
    # Makefile will make the workaround
    message=RuleTargetInput.from_text(f'{EventField.from_path("$")}'),
)
```

Piece of the JSON template generated by the CDK:

```json
"Targets": [
  {
    "Arn": { "Fn::GetAtt": ["firehose", "Arn"] },
    "Id": "Target0",
    "InputTransformer": {
      "InputPathsMap": { "f1": "$" },
      "InputTemplate": "\"<f1>\""
    },
    "RoleArn": { "Fn::GetAtt": ["firehoseEventsRole1814C701", "Arn"] }
  }
]
```

To manipulate the InputTransformer, we run the code below before sending it to CloudFormation:

```sh
jq -c . cdk.out/robotic-accounting-firehose.template.json \
  | sed -e 's/"InputTransformer":{"InputPathsMap":{"f1":"$$"},"InputTemplate":"\\"<f1>\\""}/"InputTransformer":{"InputPathsMap":{},"InputTemplate":"<aws.events.event>\\n"}/g' \
  | jq '.' > cdk.out/robotic-accounting-firehose.template.json.tmp
rm cdk.out/robotic-accounting-firehose.template.json
mv cdk.out/robotic-accounting-firehose.template.json.tmp cdk.out/robotic-accounting-firehose.template.json
```

That gives us the InputTransformer that we need and works:

```json
"Targets": [
  {
    "Arn": { "Fn::GetAtt": ["firehose", "Arn"] },
    "Id": "Target0",
    "InputTransformer": {
      "InputPathsMap": {},
      "InputTemplate": "<aws.events.event>\n"
    },
    "RoleArn": { "Fn::GetAtt": ["firehoseEventsRole1814C701", "Arn"] }
  }
]
```

We know, it's horrible, but it works. Does someone else have this problem and a better solution? Does CDK v2 solve this? Tks, Daniel
1 answer · 0 votes · 8 views
Daniel Ferrari · asked 6 days ago
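A less invasive alternative to rewriting cdk.out, for anyone stuck on the same thing: CDK's escape hatches work from Python too, so the InputTransformer can be overridden on the L1 CfnRule behind the rule instead of sed-ing the synthesized template. A sketch, assuming `rule` is the events.Rule the Firehose target was added to (not verified against 1.134.0 specifically):

```python
# Reach down to the L1 CfnRule and override the first target's
# InputTransformer with the raw-event-plus-newline template.
cfn_rule = rule.node.default_child
cfn_rule.add_property_override(
    "Targets.0.InputTransformer",
    {
        "InputPathsMap": {},
        "InputTemplate": "<aws.events.event>\n",
    },
)
```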

Container Insights on Amazon EKS Fluent Bit AccessDeniedException

I'm trying to add Container Insights to my EKS cluster but running into a bit of an issue when deploying. According to my logs, I'm getting the following:

```
[error] [output:cloudwatch_logs:cloudwatch_logs.2] CreateLogGroup API responded with error='AccessDeniedException'
[error] [output:cloudwatch_logs:cloudwatch_logs.2] Failed to create log group
```

The strange part about this is that the role it seems to be assuming is the same role found within my EC2 worker nodes, rather than the role for the service account I have created. I'm creating the service account, and can see it within AWS successfully, using the following command:

```sh
eksctl create iamserviceaccount \
  --region ${env:AWS_DEFAULT_REGION} \
  --name cloudwatch-agent \
  --namespace amazon-cloudwatch \
  --cluster ${env:CLUSTER_NAME} \
  --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
  --override-existing-serviceaccounts --approve
```

Despite the service account being created successfully, I continue to get my AccessDeniedException. One thing I found was that the logs work fine when I manually add the CloudWatchAgentServerPolicy to my worker nodes; however, this is not the implementation I would like, and I would rather use the automated approach of adding the service account and not touch the worker nodes directly if possible. The steps I followed can be found at the bottom of https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-prerequisites.html. Thanks so much!
0 answers · 0 votes · 3 views
AWS-User-8353451 · asked 9 days ago
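One detail worth checking here, since the error comes from the Fluent Bit output plugin rather than the CloudWatch agent: the quick-start manifest runs the Fluent Bit DaemonSet under its own `fluent-bit` service account in the same namespace, and if only `cloudwatch-agent` carries the IRSA annotation, Fluent Bit falls back to the node role, which is exactly the symptom above. A sketch of the extra command, mirroring the one in the question and assuming the default service-account name from the quick-start manifest:

```sh
eksctl create iamserviceaccount \
  --region ${env:AWS_DEFAULT_REGION} \
  --name fluent-bit \
  --namespace amazon-cloudwatch \
  --cluster ${env:CLUSTER_NAME} \
  --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
  --override-existing-serviceaccounts --approve
```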

AWS Lambda Applications and NodeJS

I noticed that NodeJS is the only runtime option when creating an application: https://us-east-2.console.aws.amazon.com/lambda/home?region=us-east-2#/create/application/configure

Is there a reason that NodeJS is the only option? I've heard that NodeJS is able to cold start faster than Java for Lambdas. I also noticed the example Java Lambda project defaults to 512 MB MemorySize and NodeJS defaults to 128 MB. Is Amazon trying to push us to NodeJS when building Lambda applications because it's a better language for the environment?

Is it possible to create a Java Lambda resource within the template.yml of an application? Do I need to build the class files and upload them manually? The `java-test` folder in my project has this structure:

```
java-test/src/main/java/example/Handler.java
java-test/src/main/resources
java-test/build.gradle
```

I've tried the following resource configuration, but the example.Handler class cannot be found:

```yaml
javaTest:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: java-test/
    Handler: example.Handler
    Runtime: java11
    Description: Java function
    MemorySize: 512
    Timeout: 10
    # Function's execution role
    Policies:
      - AWSLambdaBasicExecutionRole
      - AWSLambda_ReadOnlyAccess
      - AWSXrayWriteOnlyAccess
      - AWSLambdaVPCAccessExecutionRole
    Tracing: Active
```

I copied parts of the blank-java Lambda project: https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/blank-java

Here's the full build output:

```
docker ps
"C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd" build javaTest --template C:\Users\bensi\IdeaProjects\team-up\template.yml --build-dir C:\Users\bensi\IdeaProjects\team-up\.aws-sam\build --use-container
Starting Build inside a container
Building codeuri: C:\Users\bensi\IdeaProjects\team-up\java-test runtime: java11 metadata: {} architecture: x86_64 functions: ['javaTest']
Fetching public.ecr.aws/sam/build-java11:latest-x86_64 Docker container image......
Mounting C:\Users\bensi\IdeaProjects\team-up\java-test as /tmp/samcli/source:ro,delegated inside runtime container

Build Succeeded

Built Artifacts  : .aws-sam\build
Built Template   : .aws-sam\build\template.yaml

Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
[*] Deploy: sam deploy --guided

Running JavaGradleWorkflow:GradleBuild
Running JavaGradleWorkflow:CopyArtifacts

"C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd" local invoke javaTest --template C:\Users\bensi\IdeaProjects\team-up\.aws-sam\build\template.yaml --event "C:\Users\bensi\AppData\Local\Temp\[Local] javaTest-event5.json"
Invoking example.Handler (java11)
Skip pulling image and use local one: public.ecr.aws/sam/emulation-java11:rapid-1.36.0-x86_64.
Mounting C:\Users\bensi\IdeaProjects\team-up\.aws-sam\build\javaTest as /var/task:ro,delegated inside runtime container
START RequestId: 3e9debb6-a640-4ba2-bd6e-5f2d818d303e Version: $LATEST
{"errorMessage":"Class not found: example.Handler","errorType":"java.lang.ClassNotFoundException"}Class not found: example.Handler: java.lang.ClassNotFoundException
java.lang.ClassNotFoundException: example.Handler. Current classpath: file:/var/task/:file:/var/task/lib/aws-lambda-java-core-1.2.1.jar:file:/var/task/lib/gson-2.8.6.jar
END RequestId: 3e9debb6-a640-4ba2-bd6e-5f2d818d303e
REPORT RequestId: 3e9debb6-a640-4ba2-bd6e-5f2d818d303e Init Duration: 0.07 ms Duration: 271.19 ms Billed Duration: 272 ms Memory Size: 512 MB Max Memory Used: 512 MB
```
2 answers · 0 votes · 8 views
AWS-User-1 · asked 11 days ago

InvalidParameterValue Error in docker compose deploy

I am trying to deploy two Docker containers via docker compose to ECS. This already worked before. Now I'm getting the following error:

> **DatabasemongoService TaskFailedToStart: Unexpected EC2 error while attempting to tag the network interface: InvalidParameterValue**

I tried deleting all resources in my account and recreating a default VPC, which the docker compose uses to deploy. I tried tagging the network interface via the management web UI, which worked without trouble. I found this documentation about EC2 error codes: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html

> **InvalidParameterValue**: A value specified in a parameter is not valid, is unsupported, or cannot be used. Ensure that you specify a resource by using its full ID. The returned message provides an explanation of the error value.

I don't get any output besides the error above to put my search on a new trail. Also, there is this entry talking about the error:

> InvalidNetworkInterface.InUse: The specified interface is currently in use and cannot be deleted or attached to another instance. Ensure that you have detached the network interface first. If a network interface is in use, you may also receive the **InvalidParameterValue** error.

As the compose CLI handles creation and deletion of network interfaces automatically, I assume this is not the problem. Below is my docker-compose.yaml file. I start it via `docker compose --env-file=./config/.env.development up` in the ecs context.

```yaml
version: '3'
services:
  feathers:
    image: xxx
    build:
      context: ./app
      args:
        - BUILD_MODE=${MODE_ENV:-development}
    working_dir: /app
    container_name: 'feather-container'
    ports:
      - ${BE_PORT}:${BE_PORT}
    environment:
      - MODE=${MODE_ENV:-development}
    depends_on:
      - database-mongo
    networks:
      - backend
    env_file:
      - ./config/.env.${MODE_ENV}
  database-mongo:
    image: yyy
    build:
      context: ./database
    container_name: 'mongo-container'
    command: mongod --port ${MONGO_PORT} --bind_ip_all
    environment:
      - MONGO_INITDB_DATABASE=${MONGO_DATABASE}
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_PASSWORD}
    ports:
      - ${MONGO_PORT}:${MONGO_PORT}
    volumes:
      - mongo-data:/data
    networks:
      - backend
networks:
  backend:
    name: be-network
volumes:
  mongo-data:
```

Any help, idea, or pointer in the right direction is very appreciated!
0 answers · 0 votes · 6 views
jkonrath · asked 12 days ago

EC2 Launch Template doesn't start Spot Instance (but works for on-demand instance)

My EC2 launch template doesn't work when using it to launch a Spot instance. The launch template is set to launch a c5.xlarge instance **associated to a pre-existing Elastic Network Interface** @ index 0. When launching a Spot instance, I receive the following cryptic message, and the Spot request fails:

> c5.xlarge, ami-b2b55cd5, Linux/UNIX: A network interface may not specify both a network interface ID and a subnet

First off, how can a **network interface** specify a network interface id? I believe this error means to say "a spot instance may not specify both a network interface ID and a subnet", but I can't be sure. Secondly, my launch template *doesn't* specify a subnet directly - it only specifies a network interface ID, which in turn specifies the subnet.

As a troubleshooting step, I've tried launching an on-demand EC2 instance directly using the same launch template, via "**Launch Templates -> Actions -> Launch Instance from Template**" - when I do this, the EC2 instance launches successfully. I've been able to reproduce this error consistently for over 9 months now, and am surprised that no one else has brought this up. What gives?

Here is my Spot config:

```json
"MySpotFleet" : {
  "Type" : "AWS::EC2::SpotFleet",
  "Properties" : {
    "SpotFleetRequestConfigData" : {
      "AllocationStrategy" : "lowestPrice",
      "IamFleetRole" : { "Fn::GetAtt" : ["MyIAMFleetRole", "Arn"] },
      "InstanceInterruptionBehavior" : "stop",
      "LaunchTemplateConfigs" : [
        {
          "LaunchTemplateSpecification" : {
            "LaunchTemplateId" : { "Ref" : "MyLaunchTemplate" },
            "Version" : { "Fn::GetAtt" : [ "MyLaunchTemplate", "LatestVersionNumber" ] }
          }
        }
      ],
      "ReplaceUnhealthyInstances" : false,
      "SpotMaxTotalPrice" : "5.01",
      "SpotPrice" : "5.01",
      "TargetCapacity" : 1,
      "TerminateInstancesWithExpiration" : false,
      "Type" : "maintain",
      "ValidFrom" : "2021-01-01T00:00:00Z",
      "ValidUntil" : "2050-12-31T23:59:59Z"
    }
  },
  "DependsOn" : [ "MyLaunchTemplate" ]
}
```

If I replace the above Spot config with this on-demand instance config, it works:

```json
"MyInstance" : {
  "Type" : "AWS::EC2::Instance",
  "Properties" : {
    "LaunchTemplate" : {
      "LaunchTemplateId" : { "Ref" : "MyLaunchTemplate" },
      "Version" : { "Fn::GetAtt" : [ "MyLaunchTemplate", "LatestVersionNumber" ] }
    }
  },
  "DependsOn" : [ "MyLaunchTemplate" ]
}
```

If it helps, here is my launch template:

```json
"MyLaunchTemplate" : {
  "Type" : "AWS::EC2::LaunchTemplate",
  "Properties" : {
    "LaunchTemplateName" : "MyLaunchTemplate",
    "LaunchTemplateData" : {
      "IamInstanceProfile" : {
        "Arn" : { "Fn::GetAtt" : ["MyEC2IAMInstanceProfile", "Arn"] }
      },
      "ImageId" : "ami-b2b55cd5",
      "InstanceType" : "c5.xlarge",
      "NetworkInterfaces" : [
        {
          "NetworkInterfaceId" : { "Ref" : "MyENI00" },
          "DeviceIndex" : "0"
        }
      ],
      "InstanceInitiatedShutdownBehavior" : "stop",
      "KeyName" : "my-keypair"
    }
  }
}
```

And the ENI in question:

```json
"MyENI00" : {
  "Type" : "AWS::EC2::NetworkInterface",
  "Properties" : {
    "Description" : "MyENI00",
    "GroupSet" : [ { "Ref" : "MySecurityGroup" } ],
    "PrivateIpAddresses" : [
      { "Primary" : true, "PrivateIpAddress" : "172.16.0.100" },
      { "Primary" : false, "PrivateIpAddress" : "172.16.0.101" }
    ],
    "SourceDestCheck" : false,
    "SubnetId" : { "Ref" : "MySubnet" }
  }
}
```
0 answers · 0 votes · 4 views
AWS-User-7769226 · asked 13 days ago

AWS CDK 2: Package subpath './aws-cloudfront/lib/experimental' is not defined by "exports" in xxx/node_modules/aws-cdk-lib/package.json

I tried creating a demo for VueJS SSR using Lambda@Edge and AWS CDK v2. The code is below:

```typescript
import { CfnOutput, Duration, RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Bucket } from 'aws-cdk-lib/aws-s3';
import { BucketDeployment, Source } from 'aws-cdk-lib/aws-s3-deployment';
import { CloudFrontWebDistribution, LambdaEdgeEventType, OriginAccessIdentity } from 'aws-cdk-lib/aws-cloudfront';
import { Code, Function, Runtime } from 'aws-cdk-lib/aws-lambda';
import { EdgeFunction } from 'aws-cdk-lib/aws-cloudfront/lib/experimental';

export class SsrStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const bucket = new Bucket(this, 'DeploymentsBucket', {
      websiteIndexDocument: "index.html",
      websiteErrorDocument: "index.html",
      publicReadAccess: false,
      // only for demo, not to use in production
      removalPolicy: RemovalPolicy.DESTROY,
    });

    new BucketDeployment(this, "App", {
      sources: [Source.asset("../../web/dist/")],
      destinationBucket: bucket,
    });

    const originAccessIdentity = new OriginAccessIdentity(
      this,
      'DeploymentsOriginAccessIdentity',
    );
    bucket.grantRead(originAccessIdentity);

    const ssrEdgeFunction = new EdgeFunction(this, "ssrHandler", {
      runtime: Runtime.NODEJS_14_X,
      code: Code.fromAsset("../../lambda/ssr-at-edge/"),
      memorySize: 128,
      timeout: Duration.seconds(5),
      handler: "index.handler",
    });

    const distribution = new CloudFrontWebDistribution(
      this,
      'DeploymentsDistribution',
      {
        originConfigs: [
          {
            s3OriginSource: {
              s3BucketSource: bucket,
              originAccessIdentity: originAccessIdentity,
            },
            behaviors: [
              {
                isDefaultBehavior: true,
                lambdaFunctionAssociations: [
                  {
                    eventType: LambdaEdgeEventType.ORIGIN_REQUEST,
                    lambdaFunction: ssrEdgeFunction.currentVersion,
                  },
                ],
              },
            ],
          },
        ],
        errorConfigurations: [
          {
            errorCode: 403,
            responseCode: 200,
            responsePagePath: '/index.html',
            errorCachingMinTtl: 0,
          },
          {
            errorCode: 404,
            responseCode: 200,
            responsePagePath: '/index.html',
            errorCachingMinTtl: 0,
          },
        ],
      }
    );

    new CfnOutput(this, 'CloudFrontURL', {
      value: distribution.distributionDomainName,
    });
  }
}
```

However, when I tried deploying, it shows something like this:

```
Package subpath './aws-cloudfront/lib/experimental' is not defined by "exports" in /Users/petrabarus/Projects/kodingbarengpetra/vue-lambda-ssr/deployments/cdk/node_modules/aws-cdk-lib/package.json
```

Here's the content of the `package.json`:

```json
{
  "name": "ssr-at-edge",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "jest --verbose",
    "build": "tsc",
    "watch": "tsc -w",
    "start": "npm run build -- -w"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "@types/aws-lambda": "^8.10.89",
    "@types/node": "^17.0.5",
    "ts-node": "^10.4.0",
    "typescript": "^4.5.4"
  },
  "dependencies": {
    "vue": "^2.6.14",
    "vue-server-renderer": "^2.6.14"
  }
}
```

Is there anything I missed?
1 answer · 0 votes · 5 views
petrabarus · asked 18 days ago
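The deploy error here is Node's "exports" map at work: aws-cdk-lib only exposes its documented subpaths, and the deep ./aws-cloudfront/lib/experimental path isn't one of them. In CDK v2 the experimental Lambda@Edge construct is re-exported through the aws-cloudfront module itself, so a sketch of the import change (same construct, public path):

```typescript
// Import the experimental namespace from the public module path instead
// of reaching into aws-cdk-lib's internal lib/ directory.
import { experimental } from 'aws-cdk-lib/aws-cloudfront';

const ssrEdgeFunction = new experimental.EdgeFunction(this, 'ssrHandler', {
  runtime: Runtime.NODEJS_14_X,
  code: Code.fromAsset('../../lambda/ssr-at-edge/'),
  memorySize: 128,
  timeout: Duration.seconds(5),
  handler: 'index.handler',
});
```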

S3 bucket permissions to run CloudFormation from different accounts and create Lambda Functions

Not sure what I am missing, but I keep getting permission denied errors when I launch CloudFormation using the https URL. Here are the details.

I have an S3 bucket "mys3bucket" in ACCOUNT A. In this bucket, I have a CloudFormation template stored at s3://mys3bucket/project1/mycft.yml. The bucket is in us-east-1. It uses S3 server-side encryption with an S3 key [not KMS]. For this bucket, I have disabled ACLs; the bucket and all objects are private, but I have added a bucket policy, which is as below:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_B_NUMBER:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::mys3bucket",
        "arn:aws:s3:::mys3bucket/project1/*"
      ]
    }
  ]
}
```

Now, I log in to Account B --> CloudFormation --> Create new stack --> Template is ready --> Amazon S3 URL, and then I enter the object path to my template in this format: https://mys3bucket.s3.amazonaws.com/project1/mycft.yml

When I click next, I get the following message on the same page as a banner in red:

> S3 error: Access Denied. For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html

Also, just for your information, I am able to list the bucket and objects from Account B if I use Cloud9 and run:

```sh
aws s3 ls s3://mys3bucket/project1/mycft.yml
aws s3 cp s3://mys3bucket/project1/mycft.yml .
```

What am I missing? [I think this should work even when the bucket is private but the bucket policy allows cross-account access.] Does this use case require my bucket to be hosted as a static website?
2 answers · 0 votes · 8 views
Alexa · asked 21 days ago
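For what it's worth, cross-account S3 access needs an allow on both sides: the bucket policy in Account A (shown above) and the calling identity's own IAM policy in Account B. The Cloud9 test may be running under different credentials than the console user, which would explain why the CLI works while the console fetch fails. A sketch of the identity policy the console user or role in Account B would need (bucket and prefix names match the question):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::mys3bucket",
        "arn:aws:s3:::mys3bucket/project1/*"
      ]
    }
  ]
}
```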

Cost Anomaly Detection in CloudFormation

I'm trying to set up a Cost Anomaly Detection monitor + subscription in CloudFormation. Creating this via the AWS Console is very easy and user friendly. I set up a monitor with Linked Account, with a subscription that has a threshold of $100 with daily alert frequency, sending alerts to an e-mail.

Doing the above was not as clear when following the documentation and examples at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ce-anomalymonitor.html - the documentation does not explain what "dimension" or "type" means in this context, and those terms are not used in the AWS Console.

```yaml
Resources:
  AnomalyMonitor100Dollars:
    Type: AWS::CE::AnomalyMonitor
    Properties:
      MonitorName: AnomalyDetected_is_greater_than_100_dollars
      MonitorType: CUSTOM
      MonitorSpecification: !Sub '{ "Dimensions" : { "Key" : "LINKED_ACCOUNT", "Values" : [ "${AWS::AccountId}" ] } }'
  AnomalySubscription:
    Type: AWS::CE::AnomalySubscription
    Properties:
      SubscriptionName: AnomalyDetected_is_greater_than_100_dollars
      Threshold: 100
      Frequency: DAILY
      MonitorArnList:
        - !Ref AnomalyMonitor100Dollars
      Subscribers: [ { "Type": "EMAIL", "Address": "xx@example.com" } ]
```

Using the above, CloudFormation reports the error:

> "Linked accounts can only create AWS Services monitor (Service: CostExplorer, Status Code: 400..."

Guessing wildly, adding `MonitorDimension: SERVICE` to the monitor gives the error:

> "MonitorDimension must be null for Custom monitor (Service: CostExplorer, Status Code: 400..."

Guessing more wildly, trying to change to `MonitorType: DIMENSIONAL` gives the error:

> "Expression must be null for Dimensional monitor (Service: CostExplorer, Status Code: 400..."

No idea what expression this refers to. I'm sure this is logical once you know the implementation, but I have no idea how to do this the correct way. What am I missing?
1 answer · 0 votes · 5 views
Philip · asked a month ago
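Reading the three errors together, for anyone landing here: from a linked (member) account the service only allows an "AWS services" monitor, which maps to MonitorType: DIMENSIONAL with MonitorDimension: SERVICE, and in that mode the MonitorSpecification (the "expression" the last error refers to) must be omitted entirely. A hedged sketch of what the monitor would then look like; the subscription from the question should attach to it unchanged:

```yaml
AnomalyServiceMonitor:
  Type: AWS::CE::AnomalyMonitor
  Properties:
    MonitorName: AnomalyDetected_is_greater_than_100_dollars
    MonitorType: DIMENSIONAL
    MonitorDimension: SERVICE  # no MonitorSpecification for DIMENSIONAL monitors
```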

CloudFormation breaks on AWS::SQS::Queue with RedriveAllowPolicy property

We are specifying a RedriveAllowPolicy on our AWS::SQS::Queue in CloudFormation and are - again - receiving errors in CloudFormation without making any changes to our templates. This happened a few weeks ago, too, so it is the second breaking change for this property we're seeing, which is unfortunate. The old thread was: https://forums.aws.amazon.com/thread.jspa?messageID=1000934&tstart=0

So, in accordance with that thread, we changed our template definition to be:

```yaml
TestQueue:
  Type: AWS::SQS::Queue
  Properties:
    VisibilityTimeout: 450
    RedriveAllowPolicy: '{"redrivePermission":"denyAll"}'
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt TestDeadLetterQueue.Arn
      maxReceiveCount: 5
TestDeadLetterQueue:
  Type: AWS::SQS::Queue
  Properties:
    MessageRetentionPeriod: 1209600
```

This worked for a few weeks, but now CloudFormation is throwing the following error for this exact template:

> 2021-12-14 10:33:14 UTC+0100 TestQueue CREATE_FAILED
> Properties validation failed for resource TestQueue with message: #: extraneous key [RedriveAllowPolicy] is not permitted

Removing `RedriveAllowPolicy: '{"redrivePermission":"denyAll"}'` from the template solves the issue - but we want to set this policy, obviously. I hope we're following the documentation at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-redriveallowpolicy precisely. Any help appreciated. This is quite a big blocker in our process right now.

Full template file to reproduce the error:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: A prototype stack to test out CloudFormation definitions.
Metadata: {}
Transform: AWS::Serverless-2016-10-31
Resources:
  TestQueue:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 450
      RedriveAllowPolicy: '{"redrivePermission":"denyAll"}'
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt TestDeadLetterQueue.Arn
        maxReceiveCount: 5
  TestDeadLetterQueue:
    Type: AWS::SQS::Queue
    Properties:
      MessageRetentionPeriod: 1209600
```
1 answer · 0 votes · 11 views
janpapenbrock · asked a month ago

Create an Athena-queryable CloudTrail with CDK (or CloudFormation?)

I'm trying to create an app/stack/solution which, when deployed, sets up the necessary infrastructure to programmatically query CloudTrail logs: in particular, to find resource creation requests in some services by a given execution role. It seemed (e.g. from this [Querying CloudTrail Logs page](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html) in the Athena developer guide) like Athena would be a good solution here, but I'm struggling to get the setup automated properly.

Setting up the [Trail](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-cloudtrail-readme.html#trail) is pretty straightforward. However, my current attempt at mapping the [Athena manual partitioning instructions](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#create-cloudtrail-table) to CDK generating a Glue table seems to come up with a table with 0 partitions... And I don't really understand how the [partition projection instructions](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#create-cloudtrail-table-partition-projection) could translate to CDK?

There are definitely CloudTrail events in the source bucket/prefix - does anybody know how to make this work? I'm not that deep on either Glue or Athena yet. Current draft CDK for the Glue table below:

```typescript
const cloudTrailTable = new glue.Table(this, "CloudTrailGlueTable", {
  columns: [
    { name: "eventversion", type: glue.Schema.STRING },
    {
      name: "useridentity",
      type: glue.Schema.struct([
        { name: "type", type: glue.Schema.STRING },
        { name: "principalid", type: glue.Schema.STRING },
        { name: "arn", type: glue.Schema.STRING },
        { name: "accountid", type: glue.Schema.STRING },
        { name: "invokedby", type: glue.Schema.STRING },
        { name: "accesskeyid", type: glue.Schema.STRING },
        { name: "userName", type: glue.Schema.STRING },
        {
          name: "sessioncontext",
          type: glue.Schema.struct([
            {
              name: "attributes",
              type: glue.Schema.struct([
                { name: "mfaauthenticated", type: glue.Schema.STRING },
                { name: "creationdate", type: glue.Schema.STRING },
              ]),
            },
            {
              name: "sessionissuer",
              type: glue.Schema.struct([
                { name: "type", type: glue.Schema.STRING },
                { name: "principalId", type: glue.Schema.STRING },
                { name: "arn", type: glue.Schema.STRING },
                { name: "accountId", type: glue.Schema.STRING },
                { name: "userName", type: glue.Schema.STRING },
              ]),
            },
          ]),
        },
      ]),
    },
    { name: "eventtime", type: glue.Schema.STRING },
    { name: "eventsource", type: glue.Schema.STRING },
    { name: "eventname", type: glue.Schema.STRING },
    { name: "awsregion", type: glue.Schema.STRING },
    { name: "sourceipaddress", type: glue.Schema.STRING },
    { name: "useragent", type: glue.Schema.STRING },
    { name: "errorcode", type: glue.Schema.STRING },
    { name: "errormessage", type: glue.Schema.STRING },
    { name: "requestparameters", type: glue.Schema.STRING },
    { name: "responseelements", type: glue.Schema.STRING },
    { name: "additionaleventdata", type: glue.Schema.STRING },
    { name: "requestid", type: glue.Schema.STRING },
    { name: "eventid", type: glue.Schema.STRING },
    {
      name: "resources",
      type: glue.Schema.array(
        glue.Schema.struct([
          { name: "ARN", type: glue.Schema.STRING },
          { name: "accountId", type: glue.Schema.STRING },
          { name: "type", type: glue.Schema.STRING },
        ])
      ),
    },
    { name: "eventtype", type: glue.Schema.STRING },
    { name: "apiversion", type: glue.Schema.STRING },
    { name: "readonly", type: glue.Schema.STRING },
    { name: "recipientaccountid", type: glue.Schema.STRING },
    { name: "serviceeventdetails", type: glue.Schema.STRING },
    { name: "sharedeventid", type: glue.Schema.STRING },
    { name: "vpcendpointid", type: glue.Schema.STRING },
  ],
  dataFormat: glue.DataFormat.CLOUDTRAIL_LOGS,
  database: myGlueDatabase,
  tableName: "cloudtrail_table",
  bucket: myCloudTrailBucket,
  description: "CloudTrail Glue table",
  s3Prefix: `AWSLogs/${cdk.Stack.of(this).account}/CloudTrail/`,
  partitionKeys: [
    { name: "region", type: glue.Schema.STRING },
    { name: "year", type: glue.Schema.STRING },
    { name: "month", type: glue.Schema.STRING },
    { name: "day", type: glue.Schema.STRING },
  ],
});
```
1 answer · 0 votes · 17 views
EXPERT Alex_T · asked a month ago
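On the partition projection part: it can't be expressed through the L2 Table props, but the underlying CfnTable accepts raw table parameters via an escape hatch, and the property names come straight from the Athena partition-projection docs. A sketch, assuming `cloudTrailTable` is the construct from the question; the region list and date range are illustrative, and with projection the table's partitionKeys would collapse to `region` and `date` instead of region/year/month/day:

```typescript
import { Stack, aws_glue as glue_l1 } from 'aws-cdk-lib';

// Reach down to the L1 CfnTable and bolt partition-projection parameters on.
const cfnTable = cloudTrailTable.node.defaultChild as glue_l1.CfnTable;
cfnTable.addPropertyOverride('TableInput.Parameters', {
  'projection.enabled': 'true',
  'projection.region.type': 'enum',
  'projection.region.values': 'us-east-1,eu-west-1',
  'projection.date.type': 'date',
  'projection.date.range': '2021/01/01,NOW',
  'projection.date.format': 'yyyy/MM/dd',
  'projection.date.interval': '1',
  'projection.date.interval.unit': 'DAYS',
  // ${region} and ${date} are resolved by Athena, not CloudFormation,
  // so they must land literally in the template string.
  'storage.location.template':
    's3://' + myCloudTrailBucket.bucketName +
    '/AWSLogs/' + Stack.of(this).account + '/CloudTrail/${region}/${date}',
});
```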

How to create a Lambda function with Node.js from CloudFormation inline?

I am new to Lambda and Node.js and want to create a CF template to create a Lambda function inline, without passing the Lambda code from an S3 bucket zip file. When I tried the code below, it fails with the error `Cannot find module 'app'`, as the CFT didn't deploy it as a node package and app.js and the nodejs directory structure are missing from it. Is there a way to create such a Lambda function without manually creating the zip file and adding it to the CloudFormation template? I can easily do it in Python, but I'm not sure if it's a limitation for Lambda with Node.js in CloudFormation.

My CloudFormation template:

```yaml
Resources:
  OnConnectFunction:
    Type: "AWS::Lambda::Function"
    Properties:
      Description: OnConnectFunction
      Handler: app.handler
      MemorySize: 256
      Runtime: nodejs12.x
      Role: !GetAtt 'LambdaIAMRole.Arn'
      Timeout: 60
      Environment:
        Variables:
          TABLE_NAME:
            Ref: TableName
      Code:
        ZipFile: |
          const AWS = require('aws-sdk');
          const ddb = new AWS.DynamoDB.DocumentClient({ apiVersion: '2012-08-10', region: process.env.AWS_REGION });
          exports.handler = async event => {
            const putParams = {
              TableName: process.env.TABLE_NAME,
              Item: {
                connectionId: event.requestContext.connectionId
              }
            };
            try {
              await ddb.put(putParams).promise();
            } catch (err) {
              return { statusCode: 500, body: 'Failed to connect: ' + JSON.stringify(err) };
            }
            return { statusCode: 200, body: 'Connected.' };
          };
```

Error when the Lambda is invoked:

```json
{
  "errorType": "Runtime.ImportModuleError",
  "errorMessage": "Error: Cannot find module 'app'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
  "stack": [
    "Runtime.ImportModuleError: Error: Cannot find module 'app'",
    "Require stack:",
    "- /var/runtime/UserFunction.js",
    "- /var/runtime/index.js"
  ]
}
```
1 answer · 0 votes · 7 views
MODERATOR AWS-User-6747049 · asked 7 months ago
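The template above is actually fine as an inline function; the failure is just the handler name. Code delivered via ZipFile is written to a single file called index.js in the deployment package, so the handler has to reference index, not app - there is no app.js unless you ship one yourself. A sketch of the one-line change:

```yaml
OnConnectFunction:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: index.handler  # inline ZipFile code lands in index.js, not app.js
    # ...rest unchanged
```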

CloudFormation custom resource failing

The CloudFormation template below has started failing in the last 2 days. It uses `cfn-response` and the NodeJS10 runtime. I'd appreciate your inputs. The `cfn-response` success/failure is not being sent by the Lambda, and the template is stuck for 3 hours. Has there been any change to the SDK?

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: ENI extraction template for API Gateway VPC Endpoint
Parameters:
  VPCId:
    Type: AWS::EC2::VPC::Id
    Description: VPC for the Lambda and VPC endpoint
  SecurityGroupIds:
    Type: List<AWS::EC2::SecurityGroup::Id>
    Description: Security group Ids for the Lambda and VPC endpoint
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: Subnet Ids for the Lambda
Resources:
  # Get the VPCe ENI IPs using custom resource
  GetPrivateIPsRole:
    Type: AWS::IAM::Role
    Properties:
      Path: "/"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service: "lambda.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
      Policies:
        - PolicyName: "GetPrivateIPs"
          PolicyDocument:
            Statement:
              - Action:
                  - "ec2:DescribeNetworkInterfaces"
                  - "ec2:CreateNetworkInterface"
                  - "ec2:DeleteNetworkInterface"
                  - "ec2:DescribeInstances"
                  - "ec2:AttachNetworkInterface"
                Effect: "Allow"
                Resource: "*"
  GetPrivateIPsLambda:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile: |
          const AWS = require('aws-sdk');
          const response = require('cfn-response');
          exports.handler = function(event, context) {
            console.log("============================================================");
            console.log(event);
            console.log("============================================================");
            var networkInterfaceIdsList = event.ResourceProperties.NetworkInterfaceIds;
            var ec2 = new AWS.EC2();
            var params = { NetworkInterfaceIds: networkInterfaceIdsList };
            ec2.describeNetworkInterfaces(params, function(err, data) {
              if (err) { // an error occurred
                var responseData = {};
                responseData["IP0"] = "Fail";
                responseData["IP1"] = "Fail";
                responseData["IP2"] = "Fail";
                response.send(event, context, response.FAILED, responseData);
              } else { // successful response
                var networkInterfaceIPs = [];
                data.NetworkInterfaces.forEach(function getNetworkInterfaceIPs(item, index) {
                  var ip = item.PrivateIpAddress;
                  networkInterfaceIPs.push(ip);
                });
                var responseData = {};
                responseData["IP0"] = networkInterfaceIPs[0];
                responseData["IP1"] = networkInterfaceIPs[1];
                responseData["IP2"] = networkInterfaceIPs[2];
                response.send(event, context, response.SUCCESS, responseData);
              }
            });
          };
      VpcConfig:
        SecurityGroupIds: !Ref SecurityGroupIds
        SubnetIds: !Ref SubnetIds
      Handler: index.handler
      Description: Extract the VPCe ENI private IPs
      Role: !GetAtt GetPrivateIPsRole.Arn
      Runtime: nodejs10.x
      Timeout: 60
  # Get the VPCe private IPs for the NLB target group
  GetPrivateIPs:
    Type: Custom::GetPrivateIPs
    Properties:
      ServiceToken: !GetAtt GetPrivateIPsLambda.Arn
      NetworkInterfaceIds: !GetAtt ApiGatewayVPCEndpoint.NetworkInterfaceIds
  ApiGatewayVPCEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      VpcId: !Ref VPCId
      SubnetIds: !Ref SubnetIds
      ServiceName: !Sub 'com.amazonaws.${AWS::Region}.execute-api'
      SecurityGroupIds: !Ref SecurityGroupIds
      PrivateDnsEnabled: false
```
1 answer · 0 votes · 5 views
Suraj · asked a year ago
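A stack that hangs for the full three hours on a custom resource usually means the cfn-response callback never reached CloudFormation, not an SDK change. Here the function is VPC-attached, so its PUT to the pre-signed S3 callback URL needs a route out of the VPC (NAT gateway or S3 gateway endpoint); if the subnets lost that route, this is exactly the behavior you'd see. Since DescribeNetworkInterfaces is a regional API call, one option is simply dropping VpcConfig. A sketch of that change (also moving off the then-deprecated nodejs10.x runtime):

```yaml
GetPrivateIPsLambda:
  Type: AWS::Lambda::Function
  Properties:
    # No VpcConfig: the function only calls the EC2 API and must be able to
    # reach the pre-signed S3 URL that cfn-response uses to answer CloudFormation.
    Handler: index.handler
    Description: Extract the VPCe ENI private IPs
    Role: !GetAtt GetPrivateIPsRole.Arn
    Runtime: nodejs12.x
    Timeout: 60
    Code:
      ZipFile: |
        // ...unchanged from the template above
```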

Observations and questions around MWAA in Cloudformation

Hi, before I dive into my question, I first wanted to share some observations I made working with the CloudFormation template, which might be useful for other people, too. The template can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-mwaa-environment.html

**AirflowConfigurationOptions**: as mentioned in other threads, don't try to configure the Secrets Manager backend until a fix from the AWS team is in place; right now it does break the environment.

**DagS3Path/PluginsS3Path/RequirementsS3Path**: the latter two state that they need an S3 URI (s3://bucket/path/); that is not the case. They are just paths relative to the bucket defined in *SourceBucketArn*.

**EnvironmentClass**: the CloudFormation docs don't specify the possible values for this, and some of the screenshots in the MWAA docs show wrong keys in the UI. The right possibilities as of now are *mw1.small/mw1.medium/mw1.large*.

**LoggingConfiguration > ModuleLoggingConfiguration**: the docs say you can configure CloudWatchLogGroupArn; that did not work for me, it was silently ignored. In fact, at first my role did not have create-log-group permissions and I did get errors in CloudTrail. It will always try to create log groups following this pattern: arn:aws:logs:<region>:<account>:log-group:airflow-<environment-name>-[DagProcessing|WebServer|Task|Worker|Scheduler] - so make sure it has permissions to create and interact with these.

**WebserverAccessMode**: possible values are not in the CloudFormation docs, but can be found in the CLI docs: *PRIVATE_ONLY* and *PUBLIC_ONLY*.

**SourceBucketArn**: the bucket name needs to start with "airflow". This one was difficult to find; I ended up checking the raw CloudFormation schema for the pattern: `^arn:aws(-[latin char]*)?:s3:::airflow-[latin char|number|dash]*$`

Which brings me finally to my question, which relates to the next step, i.e., using WebserverAccessMode PRIVATE_ONLY and adding (in CloudFormation, or in my case CDK/CloudFormation) an Application Load Balancer, without needing to hardcode URLs after creation. The CloudFormation docs state there is a *parameter* WebserverUrl, which confuses me. How can this be a *parameter*? It feels like this should be a *return value* and show what is visible in the UI, either as public endpoint or private VPC endpoint. Has anybody worked with this? Following the docs to get the URL, I would most likely end up using a custom CloudFormation resource and calling get-environment to get the URL, because I cannot see how I could use WebserverUrl as a parameter in a meaningful way.

Also for context, I have not found a way to restrict the public endpoint created with PUBLIC_ONLY IP-wise. It is protected with IAM authentication, but it feels wrong to have an internal tool not restricted by an IP whitelist.

Edited by: andreaslang on Jan 14, 2021 1:57 AM - just added info about s3 bucket
2 answers · 0 votes · 0 views
andreaslang · asked a year ago
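On the WebserverUrl confusion: the resource does appear to expose it as a Fn::GetAtt return value (the docs page lists it under the return values, even though the surrounding wording about a "parameter" is muddled), which would avoid the custom resource entirely. A sketch, with `MwaaEnvironment` standing in for your environment's logical ID:

```yaml
Outputs:
  AirflowWebserverUrl:
    Description: Hostname of the Airflow UI (public endpoint or private VPC endpoint)
    Value: !GetAtt MwaaEnvironment.WebserverUrl
```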

ALB/Lambda CloudFormation circular dependency

I'm trying to create a CFn stack including a Lambda function being called by an ALB. Part of this requires [giving permission for the ALB to invoke the Lambda function](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html). However, when I do this in CloudFormation I get an error:

> API: elasticloadbalancingv2:RegisterTargets elasticloadbalancing principal does not have permission to invoke <Lambda ARN> from target group <ALB Target Group ARN>

This is happening because CFn is creating the ALB target group before the permission is being created. Normally, I'd put a DependsOn in the target group definition, but when I do that, CFn tells me that there is a circular dependency - and there is, because the permission references both the Lambda function and the target group.

I could remove the reference to the target group from the permission (and indeed this does fix the problem), but doing so would allow anything in the account to invoke the function **(please correct me if I'm wrong here)**. Is there a way around this? Briefest CFn template below:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ALB:
    Type: "AWS::ElasticLoadBalancingV2::LoadBalancer"
    Properties:
      Name: "test-alb"
      Scheme: "internal"
      Type: "application"
      Subnets:
        - "subnet-01c47f76"
        - "subnet-957eecf0"
  ALBListener:
    Type: "AWS::ElasticLoadBalancingV2::Listener"
    Properties:
      LoadBalancerArn: !Ref ALB
      Port: 80
      Protocol: "HTTP"
      DefaultActions:
        - Type: "forward"
          ForwardConfig:
            TargetGroups:
              - TargetGroupArn: !Ref ALBTargetGroup
  ALBLambdaPermission:
    DependsOn:
      - LambdaFunction
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt LambdaFunction.Arn
      Action: lambda:InvokeFunction
      Principal: elasticloadbalancing.amazonaws.com
      SourceArn: !Ref ALBTargetGroup
  ALBTargetGroup:
    DependsOn: ALBLambdaPermission
    Type: "AWS::ElasticLoadBalancingV2::TargetGroup"
    Properties:
      HealthCheckPath: "/"
      TargetType: "lambda"
      Targets:
        - Id: !GetAtt LambdaFunction.Arn
  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - !Sub "arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
  LambdaFunction:
    Type: "AWS::Lambda::Function"
    Properties:
      FunctionName: "alb-test-lambda"
      Handler: "index.lambda_handler"
      Code:
        ZipFile: |
          def lambda_handler(event, context):
              return {
                  "statusCode": 200,
                  "body": "Hello from Lambda!",
                  "headers": {
                      "Content-Type": "text/html"
                  }
              }
      Role: !GetAtt LambdaRole.Arn
      Runtime: "python3.6"
```
1 answer · 0 votes · 5 views
EXPERT Brettski@AWS · asked a year ago
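One way out of the cycle that keeps the permission scoped: drop the SourceArn reference (which is what closes the loop) and pin SourceAccount instead, so only the Elastic Load Balancing service principal acting on resources in this account can invoke the function, rather than "anything in the account". A sketch against the template above (whether ELB propagates the source account on invoke is worth verifying against current docs); ALBTargetGroup keeps its DependsOn on the permission, which is now acyclic:

```yaml
ALBLambdaPermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !GetAtt LambdaFunction.Arn
    Action: lambda:InvokeFunction
    Principal: elasticloadbalancing.amazonaws.com
    # No SourceArn (that reference created the cycle); SourceAccount still
    # limits invocation to ELB resources in this account.
    SourceAccount: !Ref AWS::AccountId
```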

RDS Aurora php_network_getaddresses: getaddrinfo failed in ENTRYPOINT

I am using CloudFormation templates to deploy a PHP Laravel app in a Docker container to Fargate, using an Aurora Serverless cluster for database needs. I assign the hostname of the Aurora cluster to my container's environment in the TaskDefinition, and it shows up in the container like this:

```
my-app-auroraserverlesscluster-*****-dbcluster-************.cluster-************.eu-central-1.rds.amazonaws.com
```

Once the CloudFormation update is through and everything is up and running, I can successfully connect to that database and fire queries. But during deployment - more specifically, in the ENTRYPOINT script of my container, while trying to run my database migrations - I consistently get the following error:

**SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known**

I did try to set timeout:30 in /etc/resolv.conf, and I also tried to run the migration in a retry loop for up to several minutes. But the error persists. Once the migration fails and the update completes, I can then successfully connect to the DB. Both VPC and RDS Aurora finish their update BEFORE the update of the app starts:

```
2020-07-21 09:31:13 UTC+0200  AppService                  UPDATE_IN_PROGRESS
2020-07-21 09:31:12 UTC+0200  AppTarget                   UPDATE_COMPLETE
2020-07-21 09:31:11 UTC+0200  AppTarget                   UPDATE_IN_PROGRESS
2020-07-21 09:31:11 UTC+0200  AlbListenerHttp             UPDATE_COMPLETE
2020-07-21 09:31:11 UTC+0200  AuroraServerlessCluster     UPDATE_COMPLETE
2020-07-21 09:31:10 UTC+0200  AlbListenerHttp             UPDATE_IN_PROGRESS
2020-07-21 09:31:10 UTC+0200  AuroraServerlessCluster     UPDATE_IN_PROGRESS
2020-07-21 09:31:10 UTC+0200  Alb                         UPDATE_COMPLETE
2020-07-21 09:31:10 UTC+0200  AuroraServerlessClientSg    UPDATE_COMPLETE
2020-07-21 09:31:10 UTC+0200  Alb                         UPDATE_IN_PROGRESS
2020-07-21 09:31:09 UTC+0200  AuroraServerlessClientSg    UPDATE_IN_PROGRESS
2020-07-21 09:31:09 UTC+0200  Vpc                         UPDATE_COMPLETE
2020-07-21 09:30:59 UTC+0200  DatabaseSecret              UPDATE_COMPLETE
2020-07-21 09:30:59 UTC+0200  DatabaseSecret              UPDATE_IN_PROGRESS
2020-07-21 09:30:58 UTC+0200  Key                         UPDATE_COMPLETE
2020-07-21 09:30:58 UTC+0200  Key                         UPDATE_IN_PROGRESS
2020-07-21 09:30:58 UTC+0200  Vpc                         UPDATE_IN_PROGRESS
2020-07-21 09:30:57 UTC+0200  Cluster                     UPDATE_COMPLETE
2020-07-21 09:30:57 UTC+0200  Alerting                    UPDATE_COMPLETE
2020-07-21 09:30:57 UTC+0200  Cluster                     UPDATE_IN_PROGRESS
2020-07-21 09:30:57 UTC+0200  Alerting                    UPDATE_IN_PROGRESS
```

I am new to AWS and not sure how to do my migrations - but I am used to normally doing them in the ENTRYPOINT, which does not seem possible to do in AWS? How should I be doing database migrations?
1 answer · 0 votes · 0 views
hooby · asked a year ago

How to create a Glue Workflow programmatically?

Is there a way to create a Glue workflow programmatically? I looked at CloudFormation, but the only thing I found is how to create an empty workflow (just workflow name, description and properties): https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-glue-workflow.html

I tried to look at the APIs as well (https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-workflow.html), and even if there are data types for all the structures, the only create API is again only adding the blank box. Am I missing something? How do we create the workflow from the blueprint in Lake Formation? Is it some sort of pre-assembled JSON file that we just link to the workflow in Glue? Can you do something similar, or do we need to wait for the customizable blueprints? Thank you

***UPDATE:*** As can be derived from the snippet of code in the accepted answer, the key is that it is actually the **AWS::Glue::Trigger** construct that helps you build the workflow. Specifically, you need to:

1. Create the workflow with AWS::Glue::Workflow
2. If needed, create the database and connection as well (AWS::Glue::Database, AWS::Glue::Connection)
3. Create any crawler and any job you want to add to the workflow using AWS::Glue::Crawler or AWS::Glue::Job
4. Create a first trigger (AWS::Glue::Trigger) with Type: ON_DEMAND, Actions set to the first crawler or job your workflow needs to launch, and WorkflowName referencing the workflow created at point 1
5. Create any other trigger with Type: CONDITIONAL

Below is an example (it creates a workflow that launches a crawler on an S3 bucket (cloudtraillogs) and, if successful, launches a Python script to change the table and partition schema to make them work with Athena). Hope this helps.

```yaml
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Creates cloudtrail crawler and catalog for Athena and a job to transform to Parquet
Parameters:
  CloudtrailS3:
    Type: String
    Description: Enter the unique bucket name where the cloud trails log are stored
  CloudtrailS3Path:
    Type: String
    Description: Enter the path/prefix that you want to crawl
  CloudtrailDataLakeS3:
    Type: String
    Description: Enter the unique bucket name for the data lake in which to store the logs in Parquet Format
Resources:
  CloudTrailGlueExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - glue.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole
  GluePolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Action:
              - s3:GetBucketLocation
              - s3:GetObject
              - s3:PutObject
              - s3:ListBucket
            Effect: Allow
            Resource:
              - !Join ['', ['arn:aws:s3:::', !Ref CloudtrailS3]]
              - !Join ['', ['arn:aws:s3:::', !Ref CloudtrailS3, '/*']]
              - !Join ['', ['arn:aws:s3:::', !Ref CloudtrailDataLakeS3]]
              - !Join ['', ['arn:aws:s3:::', !Ref CloudtrailDataLakeS3, '/*']]
          - Action:
              - s3:DeleteObject
            Effect: Allow
            Resource:
              - !Join ['', ['arn:aws:s3:::', !Ref CloudtrailDataLakeS3]]
              - !Join ['', ['arn:aws:s3:::', !Ref CloudtrailDataLakeS3, '/*']]
      PolicyName: glue_cloudtrail_S3_policy
      Roles:
        - Ref: CloudTrailGlueExecutionRole
  GlueWorkflow:
    Type: AWS::Glue::Workflow
    Properties:
      Description: Workflow to crawl the cloudtrail logs
      Name: cloudtrail_discovery_workflow
  GlueDatabaseCloudTrail:
    Type: AWS::Glue::Database
    Properties:
      # The database is created in the Data Catalog for your account
      CatalogId: !Ref AWS::AccountId
      DatabaseInput:
        # The name of the database is defined in the Parameters section above
        Name: cloudtrail_db
        Description: Database to hold tables for NY Philarmonica data
        LocationUri: !Ref CloudtrailDataLakeS3
  GlueCrawlerCTSource:
    Type: AWS::Glue::Crawler
    Properties:
      Name: cloudtrail_source_crawler
      Role: !GetAtt CloudTrailGlueExecutionRole.Arn
      # Classifiers: none, use the default classifier
      Description: AWS Glue crawler to crawl cloudtrail logs
      Schedule:
        ScheduleExpression: 'cron(0 9 * * ? *)'
      DatabaseName: !Ref GlueDatabaseCloudTrail
      Targets:
        S3Targets:
          - Path: !Sub
              - s3://${bucket}/${path}
              - { bucket: !Ref CloudtrailS3, path: !Ref CloudtrailS3Path }
            Exclusions:
              - '*/CloudTrail-Digest/**'
              - '*/Config/**'
      # TablePrefix: ''
      SchemaChangePolicy:
        UpdateBehavior: "UPDATE_IN_DATABASE"
        DeleteBehavior: "LOG"
      Configuration: "{\"Version\":1.0,\"CrawlerOutput\":{\"Partitions\":{\"AddOrUpdateBehavior\":\"InheritFromTable\"},\"Tables\":{\"AddOrUpdateBehavior\":\"MergeNewColumns\"}}}"
  GlueJobConvertTable:
    Type: AWS::Glue::Job
    Properties:
      Name: ct_change_table_schema
      Role:
        Fn::GetAtt: [CloudTrailGlueExecutionRole, Arn]
      ExecutionProperty:
        MaxConcurrentRuns: 1
      GlueVersion: 1.0
      Command:
        Name: pythonshell
        PythonVersion: 3
        ScriptLocation: !Sub
          - s3://${bucket}/python/ct_change_table_schema.py
          - { bucket: !Ref CloudtrailDataLakeS3 }
      DefaultArguments:
        '--TempDir': !Sub
          - s3://${bucket}/glue_tmp/
          - { bucket: !Ref CloudtrailDataLakeS3 }
        "--job-bookmark-option": "job-bookmark-disable"
        "--enable-metrics": ""
    DependsOn:
      - CloudTrailGlueExecutionRole
  GlueSourceCrawlerTrigger:
    Type: AWS::Glue::Trigger
    Properties:
      Name: ct_start_source_crawl_Trigger
      Type: ON_DEMAND
      Description: Source Crawler trigger
      WorkflowName: !Ref GlueWorkflow
      Actions:
        - CrawlerName:
            Ref: GlueCrawlerCTSource
    DependsOn:
      - GlueCrawlerCTSource
  GlueJobTrigger:
    Type: AWS::Glue::Trigger
    Properties:
      Name: ct_change_schema_Job_Trigger
      Type: CONDITIONAL
      Description: Job trigger
      WorkflowName: !Ref GlueWorkflow
      StartOnCreation: 'true'
      Actions:
        - JobName: !Ref GlueJobConvertTable
      Predicate:
        Conditions:
          - LogicalOperator: EQUALS
            CrawlerName: !Ref GlueCrawlerCTSource
            CrawlState: SUCCEEDED
        Logical: ANY
    DependsOn:
      - GlueJobConvertTable
```
1 answer · 0 votes · 4 views
EXPERT Fabrizio@AWS · asked 2 years ago

CloudFormation mismanages VPCGatewayAttachment; stack breaks

**_The setup_**

A pre-existing VPC and Internet Gateway (IG). The IG is attached to the VPC:

```
$ aws ec2 describe-internet-gateways
{
    "InternetGateways": [
        {
            "Attachments": [
                {
                    "State": "available",
                    "VpcId": "vpc-7a678b1f"
                }
            ],
            "InternetGatewayId": "igw-43d8c621",
            "OwnerId": "xxxxxxxxxxxx",
            "Tags": []
        }
    ]
}
```

Note: there are other resources in the VPC, but they are not pertinent to the problem.

A CloudFormation nested template that (amongst other things) creates a VPCGatewayAttachment using the id (igw-43d8c621) of the Internet Gateway. The parent stack is configured to _roll back upon failure_ and does this due to a mistake in a sibling nested template (operator fat fingers).

```yaml
InternetGatewayAttachment:
  Type: AWS::EC2::VPCGatewayAttachment
  Properties:
    InternetGatewayId: 'igw-43d8c621'
    VpcId: 'vpc-7a678b1f'
```

**_Expected behaviour_**

CloudFormation _should_ fail to create the VPCGatewayAttachment because the Internet Gateway it refers to is already attached to the VPC - _there is already an attachment present_. It _should_ have commenced rollback at this point. It _should not_ report having successfully created the attachment.

**_Actual behaviour_**

CloudFormation successfully creates the VPCGatewayAttachment and reports this. It then goes on to create the remaining resources in the template but ends up failing due to a mistake in a sibling template. A rollback is attempted; previously created resources are deleted (as expected), however it fails to delete the VPCGatewayAttachment it reported as having created:

```
DELETE_FAILED Network vpc-7a678b1f has some mapped public address(es). Please unmap those public address(es) before detaching the gateway. (Service: AmazonEC2; Status Code: 400; Error Code: DependencyViolation; Request ID: 0a790e5d-41d0-4c2f-9580-84eb81058b2a)
```

This nested stack is permanently in a DELETE_FAILED state. The parent stack is permanently in a ROLLBACK_FAILED state.

**_What is going on?_**

CloudFormation is such a black box, so my reasoning powers are limited. It reports a "Physical ID" - searc-Inter-19YDP23IEGX15 - but I have no idea what to do with it. None of the tools at my disposal accept this reference. My hypothesis is that CloudFormation has attempted to create a VPCGatewayAttachment as instructed and internally recorded having done so, even though no new attachment is present. This is confirmed by viewing the Internet Gateway's attachments (see the first code block). The pre-existing gateway attachment is now in CloudFormation's ledger of resources under its ownership and control; it mistakenly thinks that it created the VPCGatewayAttachment! Upon rollback it attempts to delete it but can't, due to there being publicly mapped IP addresses belonging to the VPC that this gateway attachment refers to.

**_Outcomes_**

I'm stalemated. I can neither update the stacks with the correct configuration nor delete them and start again. All progress on these stacks is halted. My only resort would be to manually delete the gateway attachment; however, this option is not available to me, as the production workload predicated on this attachment would be negatively impacted. An alternative would be to spin up a completely new stack sans attachment, but I find this too inelegant a solution, especially for production. Plz halp.
1 answer · 0 votes · 1 views
vincentr · asked 2 years ago
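For anyone stuck in the same DELETE_FAILED / ROLLBACK_FAILED deadlock: CloudFormation can be told to walk away from the one resource it wrongly claims to own, leaving the real attachment untouched. A sketch with placeholder stack names; `--retain-resources` only works on a stack already in DELETE_FAILED, and the nested-resource skip syntax is NestedStackLogicalId.ResourceLogicalId:

```sh
# Resume the parent's rollback, skipping the stuck attachment in the nested stack
aws cloudformation continue-update-rollback \
  --stack-name my-parent-stack \
  --resources-to-skip MyNestedStack.InternetGatewayAttachment

# Or, when deleting instead: drop the nested stack but retain the attachment
aws cloudformation delete-stack \
  --stack-name my-nested-stack \
  --retain-resources InternetGatewayAttachment
```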