
Questions tagged with AWS CloudFormation


In CDK, how do you enable `associatePublicIpAddress` in an AutoScalingGroup that has a `mixedInstancesPolicy`?

I'm using AWS CDK and am trying to enable the `associatePublicIpAddress` property for an AutoScalingGroup that's using a launch template. My first attempt was to just set `associatePublicIpAddress: true`, but I get this error (https://github.com/aws/aws-cdk/blob/master/packages/%40aws-cdk/aws-autoscaling/lib/auto-scaling-group.ts#L1526-L1528):

```typescript
// first attempt
new asg.AutoScalingGroup(this, 'ASG', {
  associatePublicIpAddress: true, // here
  minCapacity: 1,
  maxCapacity: 1,
  vpc,
  vpcSubnets: {
    subnetType: SubnetType.PUBLIC,
    onePerAz: true,
    availabilityZones: [availabilityZone],
  },
  mixedInstancesPolicy: {
    instancesDistribution: {
      spotMaxPrice: '1.00',
      onDemandPercentageAboveBaseCapacity: 0,
    },
    launchTemplate: new LaunchTemplate(this, 'LaunchTemplate', {
      securityGroup: this._securityGroup,
      role,
      instanceType,
      machineImage,
      userData: UserData.forLinux(),
    }),
    launchTemplateOverrides: [
      {
        instanceType: InstanceType.of(InstanceClass.T4G, InstanceSize.NANO),
      },
    ],
  },
  keyName,
})
```

```typescript
// I hit this error from the CDK
if (props.associatePublicIpAddress) {
  throw new Error('Setting \'associatePublicIpAddress\' must not be set when \'launchTemplate\' or \'mixedInstancesPolicy\' is set');
}
```

My second attempt was to not set `associatePublicIpAddress` and see if it gets set automatically because the AutoScalingGroup is in a public subnet with an internet gateway. However, it still doesn't provision a public IP address. Has anyone been able to create an Auto Scaling group with a mixed instances policy and an associated public IP?
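For reference, here is my rough, untested guess at the plain CloudFormation I think this needs underneath; the `NetworkInterfaces` block with `AssociatePublicIpAddress` is my assumption from the `AWS::EC2::LaunchTemplate` docs, and the parameter/resource names are placeholders, not something CDK generated for me:

```yaml
# Sketch only: request the public IP on the launch template's primary network
# interface instead of on the Auto Scaling group itself.
LaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      InstanceType: t4g.nano
      ImageId: !Ref LatestAmiId              # placeholder parameter
      NetworkInterfaces:
        - DeviceIndex: 0
          AssociatePublicIpAddress: true     # public IP requested here
          Groups:
            - !Ref InstanceSecurityGroup     # placeholder security group
```

If that is the right underlying shape, the remaining question is how to express it through the CDK `LaunchTemplate` construct (or an escape hatch) when a `mixedInstancesPolicy` is in play.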
1
answers
0
votes
13
views
asked 2 days ago

CloudFormation Template

I have received cloud formation template from AWS Professional service to create a VPC which creates subnet in 3 AZ with 3 TGW attachment and also attach it Transit gateway. Customer requires a VPC which should have 3 CIDRS and one CIDR for each AZ . Is this possible via CFT. Please help . Below is the template: --- # Creates a VPC. Supports various patterns through Conditions. # This template is hardcoded to use 3 AZs. # Depends on SNTO being active, as well as the Subnet Calculator solution. AWSTemplateFormatVersion: 2010-09-09 Description: Standard VPC network ############## # Parameters # ############## Parameters: VPCName: Description: Text to prefix in the VPC resource names Type: String VPCPattern: Description: VPC pattern to create Type: String Default: 1 public, 1 private, with Transit Gateway AllowedValues: - 1 public, 1 private, with Transit Gateway, dedicated NAT gateways - 1 public, 1 private, no Transit Gateway, dedicated NAT gateways - 1 public, 2 private, no Transit Gateway, dedicated NAT gateways - 1 public, 1 private, with Transit Gateway - 1 public, 2 private, with Transit Gateway - No public, 1 private in 3 AZs, with Transit Gateway - No public, 2 private in 3 AZs, with Transit Gateway VPCNetwork: Description: Network (WITHOUT the /prefix) to assign to the created VPC, eg. 10.123.0.0 Type: String AllowedPattern: ^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$ CIDRPrefix: Description: "VPC CIDR prefix. Approximate IPs: /24: 150 IPs -- /23: 350 IPs -- /22: 750 IPs -- /21: 1500 IPs -- /20: 3000 IPs" Type: Number Default: 24 AllowedValues: - 24 - 23 - 22 - 21 - 20 TransitGatewayConnectivity: Description: Connectivity requirements, configures the Transit Gateway route tables. Type: String AllowedValues: # For the Mappings to work, we can only have alphanumeric chars, -, and . # No spaces. - "None" - "Standard" - "Inspection" # Add more as needed. Default: "Standard" SSMEndpoints: Description: Create SSM VPC interface endpoints? This allows using Session Manager to access the instance without a public subnet. Type: String Default: "No" AllowedValues: - "Yes" - "No" PrivateDomainName: Description: Private hosted zone DNS name for this VPC Type: String Default: awslocal # Advanced options: Private1SubnetMask: Description: > (Advanced) Override automatic subnet prefix selection. If this is set, all subnet masks below need to be specified. Set to 0 if subnet does not exist for the pattern. Type: String Default: Automatic AllowedValues: [ Automatic, 0, 23, 24, 25, 26, 27, 28 ] Private2SubnetMask: Description: (Advanced) Override automatic subnet mask selection. Type: String Default: Automatic AllowedValues: [ Automatic, 0, 23, 24, 25, 26, 27, 28 ] Private3SubnetMask: Description: (Advanced) Override automatic subnet mask selection. Type: String Default: Automatic AllowedValues: [ Automatic, 0, 23, 24, 25, 26, 27, 28 ] PublicSubnetMask: Description: (Advanced) Override automatic subnet mask selection. Type: String Default: Automatic AllowedValues: [ Automatic, 0, 23, 24, 25, 26, 27, 28 ] # For informational purposes only, for users calculating the available space themselves: TGWSubnetMask: # Not used Description: (Advanced) Override automatic subnet mask selection. 
Type: String Default: "28" AllowedValues: [ "28" ] Private1SubnetLabel: Description: Name for the first private subnet Type: String Default: "Private Subnet 1" Private2SubnetLabel: Description: (if applicable) Name for the second private subnet Type: String Default: "Private Subnet 2" Private3SubnetLabel: Description: (if applicable) Name for the third private subnet Type: String Default: "Private Subnet 3" PublicSubnetLabel: Description: Name for the public subet Type: String Default: "Public Subnet" Rules: NoTransitGatewayPattern: RuleCondition: !Or - !Equals [ !Ref VPCPattern, "1 public, 1 private, no Transit Gateway, dedicated NAT gateways" ] - !Equals [ !Ref VPCPattern, "1 public, 2 private, no Transit Gateway, dedicated NAT gateways" ] Assertions: - Assert: !Equals [ !Ref TransitGatewayConnectivity, "None" ] AssertDescription: 'VPC pattern has no Transit Gateway' HasTransitGatewayPattern: RuleCondition: !Not - !Or - !Equals [ !Ref VPCPattern, "1 public, 1 private, no Transit Gateway, dedicated NAT gateways" ] - !Equals [ !Ref VPCPattern, "1 public, 2 private, no Transit Gateway, dedicated NAT gateways" ] Assertions: - Assert: !Not [ !Equals [ !Ref TransitGatewayConnectivity, "None" ] ] AssertDescription: 'Transit Gateway pattern required for given pattern.' Mappings: ############# # Variables # ############# Variables: TransitGatewayID: Value: tgw-01bb62fb90cf083e5 VPCFlowLogBucket: Value: controltower-vpc-flow-logs-124669510339-ap-southeast-2 VPCFlowLogPrefix: Value: vpc-flow-logs NetworkAccountID: Value: "124669510339" ################################ # Transit Gateway route tables # ############################### # These are based on Conditions that looks at !Ref TransitGatewayConnectivity TransitGatewayRouteTablePatterns: "Standard": AssociateWith: "Standard" PropagateTo: "Inspection,On-premises" "Inspection": AssociateWith: "Inspection" PropagateTo: "Inspection,On-premises" "None": AssociateWith: "" PropagateTo: "" # Default VPC Prefix to subnet prefix mapping # Subnet prefix needs to take into account 3 x /28 for TGW "24": OneSubnet: Private1: 26 TwoSubnets: Private1: 27 PublicOrPrivate2: 27 ThreeSubnets: Private1: 27 Private2: 28 PublicOrPrivate3: 28 "23": OneSubnet: Private1: 25 TwoSubnets: Private1: 26 PublicOrPrivate2: 26 ThreeSubnets: Private1: 26 Private2: 27 PublicOrPrivate3: 27 "22": OneSubnet: Private1: 24 TwoSubnets: Private1: 25 PublicOrPrivate2: 25 ThreeSubnets: Private1: 25 Private2: 25 PublicOrPrivate3: 26 "21": OneSubnet: Private1: 23 TwoSubnets: Private1: 24 PublicOrPrivate2: 24 ThreeSubnets: Private1: 24 Private2: 24 PublicOrPrivate3: 25 "20": OneSubnet: Private1: 22 TwoSubnets: Private1: 23 PublicOrPrivate2: 23 ThreeSubnets: Private1: 23 Private2: 23 PublicOrPrivate3: 24 Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: Required configuiraion Parameters: - VPCName - VPCNetwork - CIDRPrefix - Label: default: Networking configuration Parameters: - VPCPattern - TransitGatewayConnectivity - Label: default: Subnet naming Parameters: - Private1SubnetLabel - Private2SubnetLabel - Private3SubnetLabel - PublicSubnetLabel - Label: default: Advanced configuration Parameters: - Private1SubnetMask - Private2SubnetMask - Private3SubnetMask - PublicSubnetMask - TGWSubnetMask ParameterLabels: VPCNetwork: default: VPC IP network CIDRPrefix: default: VPC prefix/size VPCName: default: Name of the VPC. "VPC" will be appended to this name. 
VPCPattern: default: VPC pattern and subnets to create PrivateDomainName: default: Private DNS name TransitGatewayConnectivity: default: Transit Gateway connectivity requirements ############## # Conditions # ############## Conditions: AutomaticCidr: !Equals [ !Ref VPCNetwork, 'Automatic' ] SSMEndpoints: !Equals [ !Ref SSMEndpoints, "Yes" ] ConnectToTransitGateway: !Not - !Or - !Equals [ !Ref VPCPattern, '1 public, 1 private, no Transit Gateway, dedicated NAT gateways' ] - !Equals [ !Ref VPCPattern, '1 public, 2 private, no Transit Gateway, dedicated NAT gateways' ] UseDedicatedNATGateways: !Or - !Equals [ !Ref VPCPattern, '1 public, 1 private, with Transit Gateway, dedicated NAT gateways' ] - !Equals [ !Ref VPCPattern, '1 public, 1 private, no Transit Gateway, dedicated NAT gateways' ] - !Equals [ !Ref VPCPattern, '1 public, 2 private, no Transit Gateway, dedicated NAT gateways' ] HaveNATGatewaysPerAZ: !Or - !Equals [ !Ref VPCPattern, '1 public, 1 private, with Transit Gateway, dedicated NAT gateways' ] - !Equals [ !Ref VPCPattern, '1 public, 1 private, no Transit Gateway, dedicated NAT gateways' ] - !Equals [ !Ref VPCPattern, '1 public, 2 private, no Transit Gateway, dedicated NAT gateways' ] # These count the number of subnet sets (not counting TGW subnets) OneSubnet: !Equals [ !Ref VPCPattern, 'No public, 1 private in 3 AZs, with Transit Gateway' ] TwoSubnets: !Or - !Equals [ !Ref VPCPattern, '1 public, 1 private, with Transit Gateway' ] - !Equals [ !Ref VPCPattern, '1 public, 1 private, with Transit Gateway, dedicated NAT gateways' ] - !Equals [ !Ref VPCPattern, 'No public, 2 private in 3 AZs, with Transit Gateway' ] - !Equals [ !Ref VPCPattern, '1 public, 1 private, no Transit Gateway, dedicated NAT gateways' ] # ThreeSubnets: Any other patter not matching the above. 
CreatePrivate2Subnets: !Or - !Equals [ !Ref VPCPattern, 'No public, 2 private in 3 AZs, with Transit Gateway' ] - !Equals [ !Ref VPCPattern, 'No public, 3 private in 3 AZs, with Transit Gateway' ] - !Equals [ !Ref VPCPattern, '1 public, 2 private, with Transit Gateway' ] - !Equals [ !Ref VPCPattern, '1 public, 2 private, no Transit Gateway, dedicated NAT gateways' ] CreatePrivate3Subnets: !Equals [ !Ref VPCPattern, 'No public, 3 private in 3 AZs, with Transit Gateway' ] CreatePublicSubnets: !Or - !Equals [ !Ref VPCPattern, '1 public, 1 private, with Transit Gateway' ] - !Equals [ !Ref VPCPattern, '1 public, 1 private, with Transit Gateway, dedicated NAT gateways' ] - !Equals [ !Ref VPCPattern, '1 public, 2 private, with Transit Gateway' ] - !Equals [ !Ref VPCPattern, '1 public, 1 private, no Transit Gateway, dedicated NAT gateways' ] - !Equals [ !Ref VPCPattern, '1 public, 2 private, no Transit Gateway, dedicated NAT gateways' ] # Check if the subnet masks were overridden ManualSubnetMaskPrivate1: !Not [ !Equals [ !Ref Private1SubnetMask, 'Automatic' ] ] ManualSubnetMaskPrivate2: !Not [ !Equals [ !Ref Private2SubnetMask, 'Automatic' ] ] ManualSubnetMaskPrivate3: !Not [ !Equals [ !Ref Private3SubnetMask, 'Automatic' ] ] ManualSubnetMaskPublic: !Not [ !Equals [ !Ref PublicSubnetMask, 'Automatic' ] ] # Negative conditions: UseSharedNATGateways: !Not [ !Condition UseDedicatedNATGateways ] UserDefinedCidr: !Not [ !Condition AutomaticCidr ] # Compound Conditions: UseDedicatedNATGateways&HaveNATGatewaysPerAZ: !And - !Condition 'UseDedicatedNATGateways' - !Condition 'HaveNATGatewaysPerAZ' CreatePrivate2Subnets&UseDedicatedNATGateways: !And - !Condition 'CreatePrivate2Subnets' - !Condition 'UseDedicatedNATGateways' CreatePrivate2Subnets&UseDedicatedNATGateways&HaveNATGatewaysPerAZ: !And - !Condition 'CreatePrivate2Subnets' - !Condition 'UseDedicatedNATGateways' - !Condition 'HaveNATGatewaysPerAZ' CreatePrivate2Subnets&UseSharedNATGateways: !And - !Condition 'CreatePrivate2Subnets' - !Condition 'UseSharedNATGateways' CreatePrivate3Subnets&ConnectToTransitGateway: !And - !Condition 'CreatePrivate3Subnets' - !Condition 'ConnectToTransitGateway' Resources: ####### # VPC # ####### # This Custom resource reads the list of Subnets Labels+prefixes given, and returns a dictionary # with a key of the Label given, and the value of an array of 3 CIDRs (one for each AZ). # If the Prefix is set to 0, it's as though the prefix was not listed. # The custom resource lambda runs in the network account, exposed via SNS. SubnetCalculator: Type: Custom::SubnetCalculator Properties: ServiceToken: !Sub - arn:aws:sns:${AWS::Region}:${NetworkAccountID}:SubnetCalculatorV1 - NetworkAccountID: !FindInMap [ Variables, NetworkAccountID, Value ] VPCNetwork: !GetAtt VPC.CidrBlock AZs: 3 Subnets: # Always calculate the TGW subnet, so that it is easy to migrate from # a non-TGW pattern to a TGW one. It does not need to be used for non-TGW patterns. 
- Label: TGW Prefix: 28 - Label: Private1 Prefix: !If - ManualSubnetMaskPrivate1 # Then: - !Ref Private1SubnetMask # Else, automatic selection based on Mapping: - !If - OneSubnet - !FindInMap [ !Ref CIDRPrefix, "OneSubnet", Private1 ] - !If - TwoSubnets - !FindInMap [ !Ref CIDRPrefix, "TwoSubnets", Private1 ] # Else: - !FindInMap [ !Ref CIDRPrefix, "ThreeSubnets", Private1 ] - !If - CreatePrivate2Subnets - Label: Private2 Prefix: !If - ManualSubnetMaskPrivate2 # Then: - !Ref Private2SubnetMask # Else, automatic selection based on Mapping: - !If - TwoSubnets - !FindInMap [ !Ref CIDRPrefix, "TwoSubnets", PublicOrPrivate2 ] # Else - !FindInMap [ !Ref CIDRPrefix, "ThreeSubnets", Private2 ] # If not CreatePrivate2Subnets, skip this section: - !Ref AWS::NoValue - !If - CreatePrivate3Subnets - Label: Private3 Prefix: !If - ManualSubnetMaskPrivate3 # Then: - !Ref Private3SubnetMask # Else, automatic selection based on Mapping: - !FindInMap [ !Ref CIDRPrefix, "ThreeSubnets", PublicOrPrivate3 ] # If not CreatePrivate3Subnets, skip this section: - !Ref AWS::NoValue - !If - CreatePublicSubnets - Label: Public Prefix: !If - ManualSubnetMaskPublic # Then: - !Ref PublicSubnetMask # Else, automatic selection based on Mapping: - !If - TwoSubnets - !FindInMap [ !Ref CIDRPrefix, "TwoSu
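If the requirement is really just a VPC with three CIDR blocks (one to carve up per AZ), my reading of the docs is that secondary CIDRs can be attached with `AWS::EC2::VPCCidrBlock` and subnets carved from them, roughly like the untested sketch below (logical IDs and ranges are placeholders); what I can't see is how to fit that into the template above:

```yaml
# Sketch only: attach secondary CIDRs to the VPC, then take one CIDR per AZ.
SecondaryCidrA:
  Type: AWS::EC2::VPCCidrBlock
  Properties:
    VpcId: !Ref VPC
    CidrBlock: 10.123.1.0/24          # placeholder range for the second AZ
SecondaryCidrB:
  Type: AWS::EC2::VPCCidrBlock
  Properties:
    VpcId: !Ref VPC
    CidrBlock: 10.123.2.0/24          # placeholder range for the third AZ
SubnetAzB:
  Type: AWS::EC2::Subnet
  DependsOn: SecondaryCidrA           # the CIDR association must exist before the subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [ 1, !GetAZs '' ]
    CidrBlock: 10.123.1.0/24          # taken from the secondary CIDR
```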
1
answers
0
votes
23
views
asked 7 days ago

Best practice for restoring an RDS Aurora snapshot into a CloudFormation-built solution

Hi experts, I'm looking for best practices for restoring data into a CloudFormation-built system. I've got extensive CloudFormation that builds a solution, including an RDS Aurora Serverless database cluster. Now I want to restore that RDS cluster from a snapshot.

- I notice that restoring through the console creates a new cluster, which is no longer in the CloudFormation stack, so it doesn't get updates (plus my existing RDS instance is retained).
- I found the property `DbSnapshotIdentifier` in DBInstance along with this answer https://repost.aws/questions/QUGElgNYmhTEGzkgTUVP21oQ/restoring-rds-snapshot-with-cloud-formation, however I see in the docs that I can never change it after the initial deployment (it seems it will delete the DB if I do - see below). This means I could never restore more than once. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-dbsnapshotidentifier
- I also found a StackOverflow post from 6 years ago with the same question but no real answers. https://stackoverflow.com/questions/32255309/how-do-i-restore-rds-snapshot-into-a-cloudformation

For the `DbSnapshotIdentifier` point above, here's the relevant wording from the docs that concerns me:

> After you restore a DB instance with a DBSnapshotIdentifier property, you must specify the same DBSnapshotIdentifier property for any future updates to the DB instance. When you specify this property for an update, the DB instance is not restored from the DB snapshot again, and the data in the database is not changed. However, if you don't specify the DBSnapshotIdentifier property, an empty DB instance is created, and the original DB instance is deleted.

It seems this should be simple, but it's not. Please don't tell me I need to fall back to using `mysqlbackup` ¯\\_(ツ)\_/¯ Thanks in advance, Scott
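For Aurora specifically, the cluster-level equivalent appears to be `SnapshotIdentifier` on `AWS::RDS::DBCluster`. Here is the rough, untested shape I've been experimenting with (credentials, networking and the rest omitted); as far as I can tell it carries the same "never change it afterwards" caveat, which is exactly what I'm trying to get around:

```yaml
Parameters:
  RestoreSnapshotArn:
    Type: String
    Default: ""                       # empty means "create a fresh cluster"
Conditions:
  RestoreFromSnapshot: !Not [ !Equals [ !Ref RestoreSnapshotArn, "" ] ]
Resources:
  AuroraCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-mysql
      EngineMode: serverless
      # Only set when restoring; the docs suggest changing this later is unsafe
      SnapshotIdentifier: !If [ RestoreFromSnapshot, !Ref RestoreSnapshotArn, !Ref "AWS::NoValue" ]
```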
1
answers
0
votes
30
views
asked 9 days ago

RequestParameters for Api Event in Serverless::Function in JSON - how does it work?

I'm trying to add some query string parameters for a Lambda function, using a SAM template written in JSON. All the examples I can find are in YAML. Can anyone point out where I'm going wrong? Here's the snippet of the definition:

```
"AreaGet": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "SpeciesRecordLambda::SpeciesRecordLambda.Functions::AreaGet",
    "Runtime": "dotnet6",
    "CodeUri": "",
    "MemorySize": 256,
    "Timeout": 30,
    "Role": null,
    "Policies": [
      "AWSLambdaBasicExecutionRole"
    ],
    "Events": {
      "AreaGet": {
        "Type": "Api",
        "Properties": {
          "Path": "/",
          "Method": "GET",
          "RequestParameters": [
            "method.request.querystring.latlonl": {
              "Required": "true"
            },
            "method.request.querystring.latlonr": {
              "Required": "true"
            }
          ]
        }
      }
    }
  }
},
```

and here's the error message I get:

> Failed to create CloudFormation change set: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [AreaGet] is invalid. Event with id [AreaGet] is invalid. Invalid value for 'RequestParameters' property. Keys must be in the format 'method.request.[querystring|path|header].{value}', e.g 'method.request.header.Authorization'.

Sorry, I know this is a bit of a beginner's question, but I'm a bit lost as to what to do, as I can't find any information about this using JSON. Maybe you can't do it using JSON? Thanks, Andy.
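For what it's worth, my current guess is that each entry in the list needs to be wrapped in its own JSON object (and `Required` passed as a boolean), something like the untested snippet below, but I haven't found this spelled out anywhere for JSON templates:

```json
"RequestParameters": [
  { "method.request.querystring.latlonl": { "Required": true } },
  { "method.request.querystring.latlonr": { "Required": true } }
]
```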
1
answers
0
votes
31
views
asked 15 days ago

Best practice guidance to avoid "CloudFormation cannot update a stack when a custom-named resource requires replacing"

Hi, Over the years we have taken the approach of naming everything we deploy — it's clean, orderly and unambiguous. Since embracing infrastructure-as-code practices, our CloudFormation recipes have been written to name everything with the project's prefix and stage. For example, a VPC will be deployed as `projectname-vpc-dev`, and its subnets will be `projectname-subnet-a-dev`, etc.

Unfortunately, it seems some AWS resources won't update via CloudFormation if they are named — CloudFormation returns an error like this:

> `CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename <name> and update the stack again.`

How should we best overcome this? Should we simply avoid naming things? Can we use tags instead to avoid this? What's best practice? For reference, here's a snippet of CloudFormation that appears to be causing the issue above (with serverless.yml variables):

```
Type: AWS::EC2::SecurityGroup
Properties:
  GroupName: projectname-dev
  GroupDescription: Security group for projectname-dev
  ...
```

I also had the same problem previously with `AWS::RDS::DBCluster` for `DBClusterIdentifier`. Generally speaking, how do I know which CloudFormation settings block stack updates like this? It feels like a bit of whack-a-mole at present. For the above example, the docs at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-security-group.html say nothing of this behaviour, but they do say "update requires replacement" against the fields `GroupName` and `GroupDescription`. Is that what I need to look out for, or is that something different again? Thanks in advance... Scott
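The workaround I'm currently leaning towards (untested) is to drop the explicit name so CloudFormation can generate one, and keep the friendly name in a `Name` tag instead, roughly like this; I'd just like to know whether that is really the accepted practice:

```yaml
Type: AWS::EC2::SecurityGroup
Properties:
  # No GroupName: letting CloudFormation generate the name keeps replacement possible
  GroupDescription: Security group for projectname-dev
  Tags:
    - Key: Name
      Value: projectname-dev          # keep the friendly name as a tag
```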
1
answers
0
votes
20
views
asked 16 days ago

Issues Creating MediaConnect Flows with CloudFormation Template

Hi, I'm struggling to create MediaConnect flows using CloudFormation where the ingest protocol is not zixi-push. The documentation does state that srt-listener is not supported via CloudFormation (reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-mediaconnect-flowsource.html#cfn-mediaconnect-flowsource-maxlatency), but I'm trying "rtp" and it fails. Additionally, the error message is not very helpful: "Error occurred during operation 'AWS::MediaConnect::Flow'. (RequestToken: <redacted>, HandlerErrorCode: GeneralServiceException)". A working template (using Zixi on port 2088) looks like this:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Media Connect Flow Test",
  "Resources": {
    "MediaConnectFlowA": {
      "Type": "AWS::MediaConnect::Flow",
      "Properties": {
        "Name": "WIZARDA",
        "AvailabilityZone": "eu-west-1b",
        "Source": {
          "Name": "WIZARDASource",
          "StreamId": "A",
          "Description": "Media Connect Flow Test - WIZARDA",
          "Protocol": "zixi-push",
          "IngestPort": 2088,
          "WhitelistCidr": "<redacted>/32"
        }
      }
    }
  }
}
```

but keeping the protocol as Zixi and changing the ingress port results in failure (this could be by design, I guess, as it's a non-standard Zixi port). Similarly, and more importantly for what I want to do, trying to change the protocol to "rtp" fails, e.g.:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Media Connect Flow Test",
  "Resources": {
    "MediaConnectFlowA": {
      "Type": "AWS::MediaConnect::Flow",
      "Properties": {
        "Name": "WIZARDA",
        "AvailabilityZone": "eu-west-1b",
        "Source": {
          "Name": "WIZARDASource",
          "StreamId": "A",
          "Description": "Media Connect Flow Test - WIZARDA",
          "Protocol": "rtp",
          "IngestPort": 2088,
          "WhitelistCidr": "<redacted>/32"
        }
      }
    }
  }
}
```

Can anyone advise on the right construct to create a flow with an RTP source? (rtp-fec also failed.) For completeness, I've run this via the console and also using the CLI, e.g.:

```
aws --profile=aws-course --region=eu-west-1 cloudformation create-stack --stack-name="mediaconnect-rtp" --template-body file://..\MConly.json
```
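One thing I'm unsure about is whether `StreamId` is even valid outside the Zixi protocols; my next test (purely a guess on my part) is the source below with `StreamId` removed and a typical RTP port, but I'd appreciate confirmation either way:

```json
"Source": {
  "Name": "WIZARDASource",
  "Description": "Media Connect Flow Test - WIZARDA",
  "Protocol": "rtp",
  "IngestPort": 5000,
  "WhitelistCidr": "<redacted>/32"
}
```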
1
answers
0
votes
25
views
asked 25 days ago

Lambda Handler No Space On Device Error

I have a Lambda function that is throwing a "No space left on device" error. The Lambda function registers a custom resource handler defined within the Lambda Python code:

```
response = cfn.register_type(
    Type='RESOURCE',
    TypeName='AWSQS:MYCUSTOM::Manager',
    SchemaHandlerPackage="s3://xxx/yyy/awsqs-mycustom-manager.zip",
    LoggingConfig={"LogRoleArn": "xxx", "LogGroupName": "awsqs-mycustom-manager-logs"},
    ExecutionRoleArn="xxx"
)
```

The Lambda function is created with the following limits: 4 GB of memory and 4 GB of ephemeral storage. However, I was still receiving "no space on device" even though '/tmp/' is specified and that is plenty of space. Doing additional digging, I added a "df" output inside of the code/zip file. When the output prints, it shows that only 512 MB of space is available in /tmp:

```
Filesystem                                                          1K-blocks     Used Available Use% Mounted on
/mnt/root-rw/opt/amazon/asc/worker/tasks/rtfs/python3.7-amzn-201803  27190048 22513108   3293604  88% /
/dev/vdb                                                              1490800    14096   1460320   1% /dev
/dev/vdd                                                               538424      872    525716   1% /tmp
/dev/root                                                            10190100   552472   9621244   6% /var/rapid
/dev/vdc                                                                37120    37120         0 100% /var/task
```

It's like a new instance was created internally and did not adopt the size from the parent. Forgive me if my language is technically incorrect, as this is the first time busting this out and seeing this type of error. It just has me confused as to what is going on under the covers, and I can find no documentation on how to increase the ephemeral storage within the handler, even though the originating Lambda function in which this is defined has already had its limits increased.
1
answers
0
votes
42
views
asked a month ago

Lifecycle Configuration Standard --> Standard-IA --> Glacier Flexible Retrieval via CloudFormation

We do shared web hosting and my cPanel servers store backups in S3, each server with its own bucket. cPanel does not have a provision to select the storage class, so everything gets created as Standard. With around 9TB of backups being maintained, I would really like them to be stored as Standard-IA after the first couple of days, and then transition to Glacier after they have been in IA for 30 days. The logic here is that the backup most likely to be needed is the most recent. Currently we skip the step of transferring to IA and they go straight to Glacier after 30 days. According to this page, that kind of multi-staged transition should be OK, and it confirms that the class-to-class transitions I want are acceptable: https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html The examples on this page show a transition in days of 1, seeming to show that a newly created object stored in Standard can be transitioned immediately: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-lifecycleconfig.html My YAML template for CloudFormation has this section in it:

```
- Id: TransitionStorageType
  Status: Enabled
  Transitions:
    - StorageClass: "STANDARD_IA"
      TransitionInDays: 2
    - StorageClass: "GLACIER"
      TransitionInDays: 32
```

When I run the template, all of the buckets update with nice green check marks, then the whole stack rolls back without saying what the issue is. If I turn that into 2 separate rules like this:

```
- Id: TransitionStorageIA
  Status: Enabled
  Transitions:
    - StorageClass: "STANDARD_IA"
      TransitionInDays: 2
- Id: TransitionStorageGlacier
  Status: Enabled
  Transitions:
    - StorageClass: "GLACIER"
      TransitionInDays: 32
```

then each bucket being modified errors with: `Days' in Transition action must be greater than or equal to 30 for storageClass 'STANDARD_IA'` but if you look at the rules, it is in Standard-IA for 30 days, as it doesn't change to Glacier until day 32 and it transitions to Standard-IA at day 2. So that error does not make any sense. What do I need to do to make this work? My monthly bill is in serious need of some trimming. Thank you.
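Re-reading the error, as far as I can tell it is complaining about the Standard-IA transition itself: objects apparently have to sit in Standard for at least 30 days before they can move to Standard-IA, so `TransitionInDays: 2` is rejected regardless of the later Glacier rule. If that's right, the earliest schedule I could use would be something like this (untested):

```yaml
- Id: TransitionStorageType
  Status: Enabled
  Transitions:
    - StorageClass: "STANDARD_IA"
      TransitionInDays: 30            # minimum allowed for Standard-IA
    - StorageClass: "GLACIER"
      TransitionInDays: 60            # at least 30 days after the IA transition
```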
1
answers
0
votes
13
views
asked a month ago

How to ensure the latest Lambda layer version is used when deploying with CloudFormation and SAM?

Hi, we use CloudFormation and SAM to deploy our Lambda (Node.js) functions. All our Lambda functions have a layer set through `Globals`. When we make breaking changes in the layer code, we get errors during deployment because new Lambda functions are rolled out to production with the old layer, and after a few seconds *(~40 seconds in our case)* they start using the new layer. For example, let's say we add a new class to the layer and import it in the function code; then we get an error that says `NewClass is not found` for a few seconds during deployment *(this happens because the new function code still uses the old layer, which doesn't have `NewClass`)*. Is it possible to ensure a new Lambda function is always rolled out with the latest layer version?

Example CloudFormation template.yaml:

```
Globals:
  Function:
    Runtime: nodejs14.x
    Layers:
      - !Ref CoreLayer
Resources:
  CoreLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: core-layer
      ContentUri: packages/coreLayer/dist
      CompatibleRuntimes:
        - nodejs14.x
    Metadata:
      BuildMethod: nodejs14.x
  ExampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: example-function
      CodeUri: packages/exampleFunction/dist
```

SAM build: `sam build --base-dir . --template ./template.yaml`

SAM package: `sam package --s3-bucket example-lambda --output-template-file ./cf.yaml`

Example CloudFormation deployment events; as you can see, the new layer (`CoreLayer123abc456`) is created before the Lambda function is updated, so it should be available to use in the new function code, but for some reason the function is updated and deployed with the old layer version for a few seconds:

| Timestamp | Logical ID | Status | Status reason |
| --- | --- | --- | --- |
| 2022-05-23 16:26:54 | stack-name | UPDATE_COMPLETE | - |
| 2022-05-23 16:26:54 | CoreLayer789def456 | DELETE_SKIPPED | - |
| 2022-05-23 16:26:53 | v3uat-farthing | UPDATE_COMPLETE_CLEANUP_IN_PROGRESS | - |
| 2022-05-23 16:26:44 | ExampleFunction | UPDATE_COMPLETE | - |
| 2022-05-23 16:25:58 | ExampleFunction | UPDATE_IN_PROGRESS | - |
| 2022-05-23 16:25:53 | CoreLayer123abc456 | CREATE_COMPLETE | - |
| 2022-05-23 16:25:53 | CoreLayer123abc456 | CREATE_IN_PROGRESS | Resource creation Initiated |
| 2022-05-23 16:25:50 | CoreLayer123abc456 | CREATE_IN_PROGRESS | - |
| 2022-05-23 16:25:41 | stack-name | UPDATE_IN_PROGRESS | User Initiated |
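One idea we are considering (not yet verified) is publishing a version and alias with SAM so that traffic only moves to the new function version, which already references the new layer, once it is fully deployed; roughly:

```yaml
ExampleFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: example-function
    CodeUri: packages/exampleFunction/dist
    AutoPublishAlias: live            # publish a new version and repoint the alias
    DeploymentPreference:
      Type: AllAtOnce                 # shift all traffic once the new version exists
```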
3
answers
0
votes
77
views
asked a month ago

ApplicationLoadBalancedFargateService with listener on one port and health check on another fails health check

Hi, I have an ApplicationLoadBalancedFargateService that exposes a service on one port, but the health check runs on another. Unfortunately, the target fails the health check and terminates the task. Here's a snippet of my code:

```
const hostPort = 5701;
const healthCheckPort = 8080;

taskDefinition.addContainer(stackPrefix + 'Container', {
  image: ecs.ContainerImage.fromRegistry('hazelcast/hazelcast:3.12.6'),
  environment: {
    'JAVA_OPTS': `-Dhazelcast.local.publicAddress=localhost:${hostPort} -Dhazelcast.rest.enabled=true`,
    'LOGGING_LEVEL': 'DEBUG',
    'PROMETHEUS_PORT': `${healthCheckPort}`,
  },
  portMappings: [
    { containerPort: hostPort, hostPort: hostPort },
    { containerPort: healthCheckPort, hostPort: healthCheckPort },
  ],
  logging: ecs.LogDriver.awsLogs({ streamPrefix: stackPrefix, logRetention: logs.RetentionDays.ONE_DAY }),
});

const loadBalancedFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, stackPrefix + 'Service', {
  cluster,
  publicLoadBalancer: false,
  desiredCount: 1,
  listenerPort: hostPort,
  taskDefinition: taskDefinition,
  securityGroups: [fargateServiceSecurityGroup],
  domainName: env.getPrefixedRoute53(stackName),
  domainZone: env.getDomainZone(),
});

loadBalancedFargateService.targetGroup.configureHealthCheck({
  path: "/metrics",
  port: healthCheckPort.toString(),
  timeout: cdk.Duration.seconds(15),
  interval: cdk.Duration.seconds(30),
  healthyThresholdCount: 2,
  unhealthyThresholdCount: 5,
  healthyHttpCodes: '200-299'
});
```

Any suggestions on how I can get this to work? Thanks.
1
answers
0
votes
40
views
asked a month ago

Elemental MediaConvert job template for Video on Demand

I launched the fully managed Video on Demand template from here: https://aws.amazon.com/solutions/implementations/video-on-demand-on-aws/?did=sl_card&trk=sl_card. I have a bunch of questions on how to tailor this service to my use case, and I will ask separate questions for each.

Firstly, is it possible to use my own GUID as an identifier for the MediaConvert jobs and outputs? The default GUID tagged onto the videos in this workflow is independent of my application server, so it's difficult for the server to track who owns what video in the destination S3 bucket.

Secondly, I would like to compress the video input for cases where the resolution is higher than 1080p. For my service I don't want to process any videos higher than 1080p. Is there a way I can achieve this without adding a Lambda during the ingestion stage to compress it? I know it can be compressed on the client; I am hoping this can be achieved within this workflow, perhaps using MediaConvert.

Thirdly, based on some of the materials I came across about this service, aside from the HLS files MediaConvert generates, it's supposed to generate an MP4 version of my video for cases where a client wants to download the full video as opposed to streaming it. That is not the default behaviour, so how do I achieve it?

Lastly, how do I add watermarks to my videos in this workflow?

Forgive me if some of these questions feel like things I could have easily researched and found solutions to. I did do some research, but I failed to get a clear understanding of anything.
1
answers
0
votes
18
views
asked a month ago

Error creating CloudFormation stack when creating resources with a role specified

I am exploring how to delegate CloudFormation permissions to other users by testing stack creation with a role specified. I notice that some resources like VPC, IGW and EIP can be created, but an error is still reported. The created resources also cannot be deleted by the stack during rollback or stack deletion. For example, the following simple template creates a VPC:

```
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.3.9.0/24
```

I have created a role to specify during creation, with a policy that allows a long list of actions that I collected by querying CloudTrail using Athena. The following are already included: `"ec2:CreateVpc","ec2:DeleteVpc","ec2:ModifyVpcAttribute"`. However, the following occurs during creation:

> Resource handler returned message: "You are not authorized to perform this operation. (Service: Ec2, Status Code: 403, Request ID: bf28db5b-461e-48ff-9430-91cc05be77ef)" (RequestToken: bc6c6c87-a616-2e94-65eb-d4e5488a499a, HandlerErrorCode: AccessDenied)

It looks like some callback mechanism is used? The VPC was actually created. The deletion also failed (it did not succeed):

> Resource handler returned message: "You are not authorized to perform this operation. (Service: Ec2, Status Code: 403, Request ID: f1e43bf1-eb08-462a-9788-f183db2683ab)" (RequestToken: 80cc5412-ba28-772b-396e-37b12dbf8066, HandlerErrorCode: AccessDenied)

Any hints about this issue? Thanks.
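My next guess (and it is only a guess) is that the resource handler also makes describe and tagging calls on my behalf, so I plan to widen the role's policy along these lines and retest:

```yaml
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action:
        - ec2:CreateVpc
        - ec2:DeleteVpc
        - ec2:ModifyVpcAttribute
        - ec2:DescribeVpcs            # assumption: the handler reads the VPC back after creating it
        - ec2:CreateTags              # assumption: needed even without explicit tags in the template
      Resource: "*"
```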
2
answers
0
votes
48
views
asked 2 months ago

How can I build a CloudFormation secret out of another secret?

I have an image I deploy to ECS that expects an environment variable called `DATABASE_URL` which contains the username and password as the userinfo part of the url (e.g. `postgres://myusername:mypassword@mydb.foo.us-east-1.rds.amazonaws.com:5432/mydbname`). I cannot change the image. Using `DatabaseInstance.Builder.credentials(fromGeneratedSecret("myusername"))`, CDK creates a secret in Secrets Manager for me that has all of this information, but not as a single value:

```json
{
  "username": "myusername",
  "password": "mypassword",
  "engine": "postgres",
  "host": "mydb.foo.us-east-1.rds.amazonaws.com",
  "port": 5432,
  "dbInstanceIdentifier": "live-myproduct-db"
}
```

Somehow I need to synthesise that `DATABASE_URL` environment variable. I don't think I can do it in the ECS Task Definition - as far as I can tell the secret can only reference a single key in a secret. I thought I might be able to add an extra `url` key to the existing secret using references in CloudFormation - but I can't see how. Something like:

```java
secret.newBuilder()
    .addTemplatedKey(
        "url",
        "postgres://#{username}:#{password}@#{host}:#{port}/#{db}"
    )
    .build()
```

except that I just made that up... Alternatively I could use CDK to generate a new secret in either Secrets Manager or Systems Manager - but again I want to specify it as a template so that the real secret values don't get materialised in the CloudFormation template. Any thoughts? I'm hoping I'm just missing some way to use the API to build compound secrets...
3
answers
0
votes
17
views
asked 2 months ago

ApplicationLoadBalancedFargateService with load balancer, target groups, targets on non-standard port

I have an ECS service that exposes port 8080. I want to have the load balancer, target groups and target use that port as opposed to port 80. Here is a snippet of my code:

```
const servicePort = 8888;
const metricsPort = 8888;

const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef');

const repository = ecr.Repository.fromRepositoryName(this, 'cloud-config-server', 'cloud-config-server');

taskDefinition.addContainer('Config', {
  image: ecs.ContainerImage.fromEcrRepository(repository),
  portMappings: [{ containerPort: servicePort, hostPort: servicePort }],
});

const albFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'AlbConfigService', {
  cluster,
  publicLoadBalancer: false,
  taskDefinition: taskDefinition,
  desiredCount: 1,
});

const applicationTargetGroup = new elbv2.ApplicationTargetGroup(this, 'AlbConfigServiceTargetGroup', {
  targetType: elbv2.TargetType.IP,
  protocol: elbv2.ApplicationProtocol.HTTP,
  port: servicePort,
  vpc,
  healthCheck: { path: "/CloudConfigServer/actuator/env/profile", port: String(servicePort) },
});

const addApplicationTargetGroupsProps: elbv2.AddApplicationTargetGroupsProps = {
  targetGroups: [applicationTargetGroup],
};

albFargateService.loadBalancer.addListener('alb-listener', {
  protocol: elbv2.ApplicationProtocol.HTTP,
  port: servicePort,
  defaultTargetGroups: [applicationTargetGroup],
});
```

This does not work. The health check is taking place on port 80 with the default URL of "/", which fails, and the tasks are constantly recycled. A target group on port 8080, with the appropriate health check, is added, but it has no targets. What is the recommended way to achieve load balancing on a port other than 80? Thanks.
1
answers
0
votes
69
views
asked 2 months ago