RDS Aurora MySQL - SELECT INTO OUTFILE S3 stopped working with error: No response body
When running `SELECT INTO OUTFILE S3` we are all of a sudden receiving this error:

```
ERROR 63994 (HY000): S3 API returned error: Unknown:No response body
```

This started two days ago, and the error response isn't mentioned in any of the documentation. It had been working perfectly for over a year since it was set up; suddenly multiple regions and RDS clusters are showing the same error. Has anyone seen the same error, or does anyone know how to troubleshoot it further? I've already followed the document here: https://aws.amazon.com/premiumsupport/knowledge-center/amazon-aurora-upload-data-S3/ and checked all our existing IAM roles etc. This is all automated by CDK, and nothing has changed recently as far as I'm aware. Thanks
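One way to split the problem in half is to reproduce the export and then probe the bucket independently. Below is a minimal sketch, assuming pymysql, and with the host, credentials, table, and bucket all as placeholders (not values from the post); if the direct S3 write also fails, the issue is on the S3/IAM side rather than in Aurora.

```python
# Hypothetical diagnostic sketch: reproduce the failing export, then verify
# the bucket is writable from this account independently of Aurora.
import pymysql
import boto3

conn = pymysql.connect(
    host="my-aurora.cluster-xxxx.us-east-1.rds.amazonaws.com",  # placeholder
    user="admin", password="...", database="mydb",
)
try:
    with conn.cursor() as cur:
        # The statement that started failing with "Unknown:No response body"
        cur.execute(
            "SELECT * FROM my_table "
            "INTO OUTFILE S3 's3://my-bucket/exports/my_table' "
            "FIELDS TERMINATED BY ',' OVERWRITE ON"
        )
finally:
    conn.close()

# Independent probe: can this account write to the same bucket at all?
s3 = boto3.client("s3")
s3.put_object(Bucket="my-bucket", Key="exports/permission-probe.txt", Body=b"probe")
```

If the probe succeeds but the export still fails, comparing the cluster's associated IAM role (`aws_default_s3_role` / `aurora_select_into_s3_role`) and any recently changed bucket policies would be the next step.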
Tagging of Aurora Auto Scaling Replicas
We are using AWS Aurora with automatic scaling of read replicas, which are launched with a generated name tag with the prefix "application-autoscaling-" but no other tags. Is there a way we could override the prefix and define additional tags (like the environment or application-id for which the cluster is used), as we can do for instances launched into EC2 Auto Scaling groups? I can imagine doing this with some Lambda function, but is there maybe an easier way? And if not, how are the costs for the autoscaled readers reported, in Cost Explorer for example? Are they counted towards the readers I've created manually in the cluster?
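In the absence of a built-in option, the Lambda idea from the question could look roughly like the sketch below: a function triggered by an EventBridge rule on RDS instance events that tags anything carrying the autoscaling prefix. The event fields, tag keys, and values are assumptions, not anything from the post.

```python
# Hypothetical EventBridge-triggered Lambda: tag Aurora read replicas created
# by Application Auto Scaling, identified by their generated name prefix.
import boto3

rds = boto3.client("rds")

def handler(event, context):
    # Assumes an EventBridge rule matching RDS DB instance creation events;
    # RDS events carry SourceArn / SourceIdentifier in the detail section.
    arn = event["detail"]["SourceArn"]
    instance_id = event["detail"]["SourceIdentifier"]
    if instance_id.startswith("application-autoscaling-"):
        rds.add_tags_to_resource(
            ResourceName=arn,
            Tags=[
                {"Key": "environment", "Value": "production"},   # placeholder
                {"Key": "application-id", "Value": "my-app"},    # placeholder
            ],
        )
```

On the cost side: as far as I know, autoscaled readers are billed as ordinary Aurora instance-hours, separate from the manually created readers, so tagging them (and activating those tags as cost allocation tags) is what makes them filterable in Cost Explorer.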
RDS Aurora VolumeBytesUsed value suddenly decreased
![VolumeBytesUsed metric in CloudWatch](/media/postImages/original/IM7NnmrmqUSziv4J3t70uSeA)

I checked CloudWatch and found that the VolumeBytesUsed metric suddenly dropped by around 700 GB without any apparent cause. I'm the only one in the company authorized to run queries on the live environment, and I didn't run any query like DELETE or DROP against the tables/data at the time of this drop. Can anyone who has encountered this situation explain it to me?
Does restoring an RDS snapshot also restore the transaction data stored in the tables within?
I'm new to using RDS, and I'm concerned about backing up my MySQL DB on RDS. I made one manual snapshot; when I need to restore my DB via the snapshot, is the transaction data stored in the tables also restored?
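For context on the mechanics: a snapshot is a point-in-time copy of the instance's storage, so everything committed at snapshot time, including rows written by past transactions, comes back when you restore. A minimal boto3 sketch of the restore call, with placeholder identifiers:

```python
# Hypothetical restore sketch: restoring a manual snapshot creates a NEW
# instance containing all data committed when the snapshot was taken.
import boto3

rds = boto3.client("rds")
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-restored",       # name for the new instance (placeholder)
    DBSnapshotIdentifier="my-manual-snapshot",  # the manual snapshot (placeholder)
)
# Transactions still in flight (uncommitted) at snapshot time are not
# included; the restored instance recovers to a consistent committed state.
```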
Uncaught RedisClusterException: Can't communicate with any node in the cluster in /home/cloudpanel/public_html/RedisCache.php
Hello everyone, we are getting this error: "Uncaught RedisClusterException: Can't communicate with any node in the cluster". We are using an ElastiCache Redis cluster with 2 nodes and cluster mode enabled, with this PHP library: https://github.com/cheprasov/php-redis-client. The error occurs randomly across our websites, and at the time of the error the ElastiCache load was normal. Is there any way to troubleshoot this issue?
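One way to narrow it down is to probe the cluster from the same host with an independent client, to see whether the intermittent failure reproduces outside the PHP library. A minimal sketch using the redis-py library (not the PHP client from the post), with a placeholder configuration endpoint:

```python
# Hypothetical connectivity probe with redis-py, to check whether the cluster
# itself is intermittently unreachable or the PHP client is at fault.
from redis.cluster import RedisCluster

rc = RedisCluster(
    host="my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com",  # placeholder
    port=6379,
    socket_timeout=2,           # fail fast instead of hanging
    socket_connect_timeout=2,
)
print(rc.ping())           # True if the contact node responds
print(rc.cluster_nodes())  # cluster topology as the client sees it
```

If the probe stays healthy while the PHP errors continue, client-side causes (connection pooling, DNS caching of the configuration endpoint, timeouts shorter than a failover) become the more likely suspects.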
Upgrade AWS Aurora Cluster MySQL from 5.7 to 8.0
Hi, I have to upgrade an Aurora MySQL cluster from 5.7 to 8.0. With DMS I can't migrate JSON fields from some tables. I tried to configure external replication, but after taking a snapshot and restoring it to a version 8 cluster, when I try to call mysql.rds_set_external_master I get the error "Failed purging old relay logs: Failed during log reset", and starting replication doesn't work. Edit: when one of the RDS procedures (like mysql.rds_set_external_source) fails or has problems, all of the similar procedures fail with the error above. I solved the situation by rebooting the replica instance.
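For reference, a sketch of the reset-then-configure sequence using the RDS stored procedures documented for external binlog replication, run here through pymysql. Host, credentials, and binlog coordinates are placeholders; resetting first clears stale relay-log state, which is consistent with the reboot workaround mentioned in the edit.

```python
# Hypothetical sketch: point the restored 8.0 cluster at the 5.7 source
# using the binlog coordinates captured when the snapshot was taken.
import pymysql

conn = pymysql.connect(
    host="target-cluster.cluster-xxxx.rds.amazonaws.com",  # placeholder
    user="admin", password="...",
)
with conn.cursor() as cur:
    # Clear any stale replication / relay-log state first.
    cur.execute("CALL mysql.rds_reset_external_master()")
    # (host, port, repl user, repl password, binlog file, position, SSL flag)
    cur.execute(
        "CALL mysql.rds_set_external_master("
        "'source-host', 3306, 'repl_user', 'repl_password', "
        "'mysql-bin-changelog.000123', 456, 0)"
    )
    cur.execute("CALL mysql.rds_start_replication()")
conn.close()
```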
Add reader endpoint for DB clusters when creating AWS::SecretsManager::SecretTargetAttachment
When creating a DB cluster (i.e. Aurora MySQL) with auto-rotated Secrets Manager credentials, and attaching connection data using `AWS::SecretsManager::SecretTargetAttachment`, the replicas' read-only endpoint is missing. It would be nice to include it under some `readersHost` key. Is it possible? Where should I ask for it?
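Until something like a `readersHost` key exists, one workaround is to look the reader endpoint up from the RDS API at connection time and merge it with the rotated credentials from the secret. A minimal boto3 sketch, with the secret name and cluster identifier as placeholders:

```python
# Hypothetical workaround: combine rotated credentials from the secret with
# the reader endpoint fetched from the RDS API.
import json
import boto3

secret = json.loads(
    boto3.client("secretsmanager")
    .get_secret_value(SecretId="my-db-secret")["SecretString"]  # placeholder
)
cluster = boto3.client("rds").describe_db_clusters(
    DBClusterIdentifier="my-aurora-cluster"  # placeholder
)["DBClusters"][0]

reader_host = cluster["ReaderEndpoint"]  # the endpoint missing from the secret
# connect with secret["username"] / secret["password"] against reader_host
```

As for where to request the feature: the CloudFormation coverage roadmap repository on GitHub (aws-cloudformation/cloudformation-coverage-roadmap) is, as far as I know, the usual place for resource-property requests like this.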
Why does my RDS Aurora Serverless V2 clone not finish...?
Hello, We have been working on migrating from self-managed MySQL databases (on EC2) to RDS Aurora Serverless V2 (for MySQL). The problem can be described by following these steps:

* we create an RDS instance
* we migrate a database from our self-managed EC2 database to the new RDS instance via a 3rd-party tool
* we then clone the RDS instance
* the new clone cluster is being created
* the new clone cluster is available
* the new clone cluster never gets any instances
* the new clone cluster has its two endpoints, both of which are in the "creating" state

We have gone through this same process a few times over the last month or so and have not had any problems whatsoever, with our cloned clusters up and running within 10-20 minutes. This time, we have waited for 12+ hours and the cluster still has zero instances. There are no warnings, errors, or notifications of any kind (after the initial notice that says the cloning process has started). **So... does anyone have any idea why our RDS Aurora Serverless V2 clones don't finish?** I'll be happy to provide any additional information if needed. Thanks in advance for any help. 🤓 Travis
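One thing worth checking: when a clone is made through the API or CLI (rather than the console wizard), only the cluster is created, and the writer instance has to be added as a separate step, which would leave the cluster "available" with zero instances and endpoints stuck in "creating". A hedged sketch of attaching a Serverless v2 instance to the clone, with placeholder identifiers:

```python
# Hypothetical sketch: add a writer instance to a clone cluster that is
# stuck with zero instances (identifiers are placeholders).
import boto3

rds = boto3.client("rds")
rds.create_db_instance(
    DBInstanceIdentifier="my-clone-writer",     # placeholder
    DBClusterIdentifier="my-clone-cluster",     # the stuck clone (placeholder)
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",            # Serverless v2 instances use this class
)
```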
Invoke Lambda Privilege MySQL issue
Hi guys, I have an Aurora v3 cluster, with MySQL, which has an INSERT trigger. When data is inserted, the trigger fires a Lambda. If I write to the table using the `admin` user of the DB everything works fine, but if I try to use another user the following message appears:

```
Unknown trigger has an error in its body: 'Access denied; you need (at least one of) the Invoke Lambda privilege(s) for this operation'
```

I can't figure out what I am missing. I've already set `log_bin_trust_function_creators=1` and these are the `GRANTS` associated with the user:

```sql
GRANT USAGE ON *.* TO `pippo`@`%`;
GRANT ALL PRIVILEGES ON `prod`.* TO `pippo`@`%`;
GRANT `AWS_LAMBDA_ACCESS`@`%`,`rds_superuser_role`@`%` TO `pippo`@`%`;
```

Thanks for the help
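A possible explanation: in MySQL 8.0 (which Aurora MySQL v3 is based on), granted roles are not active by default, so `AWS_LAMBDA_ACCESS` can be granted to `pippo` yet not be in effect for the session. A sketch of activating the roles by default, run here through pymysql with placeholder connection details:

```python
# Hypothetical sketch: make pippo's granted roles (including
# AWS_LAMBDA_ACCESS) active on every login.
import pymysql

conn = pymysql.connect(
    host="cluster.cluster-xxxx.rds.amazonaws.com",  # placeholder
    user="admin", password="...",
)
with conn.cursor() as cur:
    # MySQL 8.0 roles are inactive until selected; default-activate them all.
    cur.execute("SET DEFAULT ROLE ALL TO 'pippo'@'%'")
conn.commit()
conn.close()
```

Alternatively, the `activate_all_roles_on_login` parameter in the cluster parameter group achieves the same effect for all users.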
Incremental Copy of an RDS Aurora MySQL Table to Redshift
Hi!!! Can anyone share implementation steps to copy/incrementally copy data from Aurora MySQL 5.7 to Redshift? I couldn't do it with Data Pipeline, Glue, or DMS due to certificate issues at the DB.

1. Scheduled jobs that do S3 exports of initial/changed data. This is the least expensive option, as you can decide which tables to export data from and how often; you could do it every 10 minutes or so. AWS Batch can be used with Lambda to accomplish this (see the sketch below).

Any references to achieve this? Any other better-optimized ways? Thanks!!
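A minimal sketch of the scheduled-export idea from step 1: a Lambda that exports recently changed rows with `SELECT INTO OUTFILE S3`, then loads them into Redshift through the Redshift Data API. All names (tables, buckets, clusters, roles, the `updated_at` column) are placeholders, and the Aurora cluster is assumed to have an S3 export role attached.

```python
# Hypothetical scheduled Lambda: export rows changed since the last run from
# Aurora to S3, then COPY them into a Redshift staging table.
import pymysql
import boto3

def handler(event, context):
    # 1) Export recently changed rows from Aurora to S3
    #    (assumes an updated_at column and aurora_select_into_s3_role).
    conn = pymysql.connect(
        host="aurora.cluster-xxxx.rds.amazonaws.com",  # placeholder
        user="etl", password="...", database="mydb",
    )
    with conn.cursor() as cur:
        cur.execute(
            "SELECT * FROM orders "
            "WHERE updated_at > NOW() - INTERVAL 10 MINUTE "
            "INTO OUTFILE S3 's3://my-etl-bucket/orders/delta' "
            "FIELDS TERMINATED BY ',' OVERWRITE ON"
        )
    conn.close()

    # 2) Load the exported files into Redshift with COPY.
    boto3.client("redshift-data").execute_statement(
        ClusterIdentifier="my-redshift-cluster",  # placeholder
        Database="analytics",
        DbUser="etl",
        Sql=(
            "COPY staging.orders FROM 's3://my-etl-bucket/orders/delta' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role' "
            "FORMAT AS CSV"
        ),
    )
```

From the staging table, a scheduled MERGE/upsert into the target table completes the incremental load; an EventBridge schedule can drive the whole thing every 10 minutes.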