My CFN stack failed with an "Internal Failure" and now I can't roll back or update it!
So, I tried to import existing resources into my CloudFormation stack and the process failed. It looks like it might have timed out, but the latest message is "IMPORT_ROLLBACK_FAILED" due to an "Internal Failure". Now my stack is stuck and I can't change it at all: there is no "continue rollback" option and no "update" option. The only thing I can do is delete the stack, which is something I do not want to do. I can't even ask AWS Support about it because we only have the "Basic" support plan on this account, and that only covers billing questions and quota increases. What options do I have? Pay more to unwedge CFN? That should be AWS's problem, not mine! Anyway, here's what I see in the CFN console. Does anyone have any thoughts on how to fix this?

```
2022-12-04 02:18:52 UTC-0500  datastore-sg  IMPORT_ROLLBACK_FAILED       Internal Failure
2022-12-04 02:18:51 UTC-0500  rdssngSG      UPDATE_FAILED                The security token included in the request is invalid
2022-12-04 02:18:50 UTC-0500  rdssngSG      UPDATE_IN_PROGRESS           Remove stack-level tags from imported resource if applicable.
2022-12-04 02:18:38 UTC-0500  datastore-sg  IMPORT_ROLLBACK_IN_PROGRESS  The security token included in the request is invalid
2022-12-04 02:16:45 UTC-0500  rdsSG         UPDATE_FAILED                The security token included in the request is invalid
2022-12-03 04:32:59 UTC-0500  rdsSG         UPDATE_IN_PROGRESS           Apply stack-level tags to imported resource if applicable.
2022-12-03 04:32:58 UTC-0500  rdsSG         IMPORT_COMPLETE              Resource import completed.
```
S3 ETag comparison on PutObject
Is there a way to use ETags as they are meant to be used with S3, where you can retrieve an object, modify it, and then, when you put it back, provide the ORIGINAL ETag so that your PutObject fails if someone has snuck another update in between your Get and your Put? I am looking to implement [optimistic updating](https://www.ibm.com/docs/en/odm/8.9.1?topic=entities-updating-retrieving-by-using-etag) as described [here](https://fideloper.com/etags-and-optimistic-concurrency-control), [here](https://event-driven.io/en/how_to_use_etag_header_for_optimistic_concurrency/), etc. Possible?
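For anyone unsure what is being asked: the read-modify-write pattern in those links can be sketched with a toy in-memory store (a hypothetical `ConditionalStore` class, not the real S3 API) to show where the conditional check would have to happen:

```python
import hashlib

# Toy simulation of optimistic concurrency via ETags (hypothetical store,
# NOT the S3 API): a put succeeds only if the caller supplies the ETag of
# the version it originally read.
class ConditionalStore:
    def __init__(self):
        self._objects = {}  # key -> (etag, body)

    def get(self, key):
        return self._objects.get(key, (None, b""))

    def put(self, key, body, if_match=None):
        current_etag, _ = self._objects.get(key, (None, None))
        if if_match is not None and if_match != current_etag:
            # The write would silently overwrite someone else's update.
            raise RuntimeError("PreconditionFailed: object changed since read")
        new_etag = hashlib.md5(body).hexdigest()
        self._objects[key] = (new_etag, body)
        return new_etag

store = ConditionalStore()
store.put("config.json", b"v1")

# Two readers fetch the same version and both try to update it.
etag_a, _ = store.get("config.json")
etag_b, _ = store.get("config.json")

store.put("config.json", b"v2-from-A", if_match=etag_a)  # succeeds
try:
    store.put("config.json", b"v2-from-B", if_match=etag_b)  # lost the race
except RuntimeError as e:
    print("conflict detected:", e)
```

The question is whether S3's real PutObject offers an equivalent of the `if_match` precondition here.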
Table doesn't update.
I'm creating a table with this query:

```
CREATE TABLE test_1 AS
SELECT id, created_at, transaction_type, customer_id, amount, description
FROM "db_1"."transactions"
WHERE (customer_id != xxxx AND customer_id != yyyy)
  AND transaction_type = 'type_4'
  AND description = 'Some Description'
UNION ALL
SELECT id, created_at, transaction_type, customer_id, amount, description
FROM "db_1"."transactions"
WHERE customer_id = xxxx
  AND (transaction_type = 'type_1' OR transaction_type = 'type_2' OR transaction_type = 'type_3')
```

The query gets me what I need, except that the result doesn't update itself. The issue could be that I'm building a **secondary table** from the **main table**: the main table comes directly from an S3 bucket, so when the bucket is updated, the **main table** updates as well, but the **secondary table** doesn't. So, what should I do differently? Make a reference to the S3 bucket instead of the tables? What would that look like? Thanks in advance for any suggestions.
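The staleness described above is inherent to `CREATE TABLE ... AS`: it materializes a copy of the rows at creation time, whereas a view re-runs its SELECT on every read. A toy illustration in plain Python (lists standing in for tables, not Athena):

```python
# CTAS vs. view, in miniature: a snapshot is copied once; a "view" is a
# query re-evaluated against the live data on every read.
main = [1, 2, 3]                       # the main table, fed by S3

snapshot = list(main)                  # CTAS: rows copied at creation time
view = lambda: [x for x in main if x > 1]  # view: re-runs the SELECT each read

main.append(4)                         # the S3 bucket gets new data

print(snapshot)   # the secondary table is frozen at creation time
print(view())     # the view reflects the new data
```

In Athena terms, replacing the CTAS with `CREATE VIEW test_1 AS SELECT ...` should give the live behavior, at the cost of re-scanning the underlying data on each query (stated as a sketch of the trade-off, not a full answer).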
Error while querying Athena
Hello, I'm currently redeploying a CI/CD pipeline from legacy Terraform to Terraform Cloud. The following error first appeared on the newly migrated pipelines:

```
HIVE_UNKNOWN_ERROR: com.amazonaws.services.lakeformation.model.InvalidInputException: Unable to assume role. Please verify Lake Formation has access to role arn:aws:iam::561######914:role/aws-reserved/sso.amazonaws.com/us-west-2/AWSReservedSSO_AdministratorAccess_0bb#####78e (Service: AWSLakeFormation; Status Code: 400; Error Code: InvalidInputException; Request ID: 73d56a83-6796-4cbe-befb-3e0b4e736773; Proxy: null)
```

After trying to grant permissions manually, we oscillated between this error appearing on all databases in the project and it appearing on only a few. We tried to grant permissions through *Data lake permissions*, with LF-Tags, and also directly on the databases, but without success. Any idea what to do?
AWS Batch with Amazon EKS
I am trying to use AWS Batch with Amazon EKS but cannot create the compute environment through AWS CDK. My current EKS cluster is configured with public and private access, with many CIDR blocks but not `0.0.0.0/0`. Here is the error:

```
12:05:13 PM | CREATE_FAILED | AWS::Batch::ComputeEnvironment | batchjobtest
Resource handler returned message: "EKS cluster must use API server endpoint that has public access and is accessible to the public internet."
```

I don't want to allow the `0.0.0.0/0` CIDR block on my EKS cluster. Is there any other way around it?
How do I select the longest string value?
Hi, I'm writing a query that selects different barcodes. Every barcode has a specific sequence, but in the database the barcode exists multiple times: every time a sequence is added, the barcode shows up in a new row. For example:

![Enter image description here](/media/postImages/original/IM3zpCRGaFRVSZq98cXYUe2g)

I don't want to use a date filter because sometimes two sequences happen on the same date. Right now I'm using three nested SELECTs, but I'm wondering if there is another way.

```
SELECT
    wc.dn_barcode,
    (
        SELECT da_waarnemingsequence
        FROM collo_dwh.collo wc2
        WHERE wc2.dn_barcode = wc.dn_barcode
          AND length(sequence) = (
              SELECT max(length(sequence))
              FROM collo_dwh.collo wc3
              WHERE wc3.dn_barcode = wc2.dn_barcode
          )
    ) AS sequence
FROM (database wc)
WHERE wc.da_datum_sortering1 BETWEEN date_add('day', -7, current_date) AND current_date
  AND wc.da_landcode_gea = 'NL'
  AND wc.sequence LIKE '%A1%B1%'
  AND wc.sequence NOT LIKE '%I%'
```
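To make the intent of the correlated subqueries concrete: for each barcode, keep only the row whose sequence string is longest. A plain-Python sketch with made-up data (the barcodes and sequences here are illustrative, not from the real table):

```python
# For each barcode, pick the row with the longest sequence string.
# This is what the nested SELECTs in the query above are computing.
rows = [
    ("BC1", "A1"),
    ("BC1", "A1B1"),
    ("BC2", "A1B1C1"),
    ("BC2", "A1"),
]

longest = {}
for barcode, seq in rows:
    # Keep the current candidate only if no longer sequence has been seen.
    if barcode not in longest or len(seq) > len(longest[barcode]):
        longest[barcode] = seq

print(longest)
```

In SQL engines that support window functions (Athena does), the same "longest per group" idea can often be expressed with a single `row_number() OVER (PARTITION BY dn_barcode ORDER BY length(sequence) DESC)` instead of the triple-nested SELECT.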
Elastic IP Addresses disappeared (EC2 Instances)
Hi, I've had a few Elastic IP addresses for years and associate them with specific EC2 instances, as required, in a specific Region. They have now disappeared, and I did not release them (100% positive). I tried allocating a new IP address to test, but the scope of this new public IP address is VPC only (I cannot edit/change it). Any thoughts?
Strategies for RDS MySQL migration to Aurora
RDS has automatically upgraded us to MySQL 8.0.30, and Aurora currently only supports migration from 8.0.28. AWS Support says they can't provide a timeline for 8.0.30 support and recommends either using DMS (which seems very lacking for this type of migration) or dumping the database and recreating it (difficult for a database in the hundreds-of-gigabytes range). Does anyone have any good strategies for migrating a database of this size to Aurora? The answer here -- https://repost.aws/questions/QUM2j4BPEQS5CBHVu4QbOLCA/migrate-rds-my-sql-8-0-28-to-aurora-my-sql -- doesn't work, since you can't create an Aurora read replica for unsupported MySQL versions...
"aws swf register-domain" output argument
My use case is a region-build service chain-reaction plugin driven by the AWS CLI. My objective is to reduce the number of tasks to automate: I'd like to execute one command and receive feedback about success or failure, so the workflow can either continue or enforce a retry. I am using `aws swf register-domain` for its intended purpose, and I'd like to understand whether it is at all possible to get a response from the command execution other than none. I look longingly at the `--output` argument, which, according to the documentation, lets one choose an output format. Does this imply that using the argument makes a sequence like the following return a JSON response? `aws --region xyz-northeast-1 swf register-domain --name unique-domain-name --workflow-execution-retention-period-in-days 90 --output json`
MWAA CloudWatch metrics - Billing too high
Hi, we're using MWAA and we're charged almost $300/month for CloudWatch metrics. For example, last month:

```
$0.30 per metric-month for the first 10,000 metrics - EU (Ireland)    907.597 Metrics    $272.28
```

But when I go to CloudWatch > Metrics > All metrics I can only see a total of 11,056 metrics, so why is the metric number in billing so high? Also, we don't actually use these metrics; they were automatically set up by MWAA. Can we disable them, since we just need the MWAA logs and don't care about the metrics? Thanks,
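For what it's worth, the invoice line is internally consistent if the quantity is read as metric-months rather than a raw count of distinct metrics (CloudWatch prorates a custom metric that only exists for part of the month, which is why the figure can be fractional):

```python
# Reproducing the invoice line above: quantity is in metric-months,
# so a metric that existed for only part of the month contributes a
# fraction, and the total need not match a point-in-time metric count.
metric_months = 907.597
rate_per_metric_month = 0.30  # USD, first-10,000-metrics tier, EU (Ireland)

charge = metric_months * rate_per_metric_month
print(f"${charge:.2f}")  # matches the $272.28 on the bill
```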