Questions tagged with Database
Content language: English
Sort by most recent
CloudFormation stack issues when removing it
Hi, we have an infrastructure CodePipeline with a number of stacks, such as VPC, RDS, ECS, CloudFront and DNS. We hit some odd errors in the pipeline, so I tried to remove all the CloudFormation stacks in order to recreate a fresh pipeline. However, the only stack left with errors after trying to remove it is the following RDS stack, and I cannot see how to fix it:

2022-11-23 14:52:07 UTC+0000  Crafter-RDS-Test  ROLLBACK_COMPLETE  -
2022-11-23 14:52:06 UTC+0000  RdsStack  DELETE_COMPLETE  -
2022-11-23 14:51:55 UTC+0000  RdsStack  DELETE_IN_PROGRESS  -
2022-11-23 14:51:42 UTC+0000  Crafter-RDS-Test  ROLLBACK_IN_PROGRESS  The following resource(s) failed to create: [RdsStack]. Rollback requested by user.
2022-11-23 14:51:42 UTC+0000  RdsStack  CREATE_FAILED  Embedded stack arn:aws:cloudformation:eu-west-1:XXXXXXXXXXXX:stack/Crafter-RDS-Test-RdsStack-1F1EM487W2KQ5/4364b050-6b3e-11ed-88aa-0664e7728df7 was not successfully created: The following resource(s) failed to create: [DBSubnetGroup].
2022-11-23 14:51:08 UTC+0000  RdsStack  CREATE_IN_PROGRESS  Resource creation Initiated
2022-11-23 14:51:07 UTC+0000  RdsStack  CREATE_IN_PROGRESS  -
2022-11-23 14:51:02 UTC+0000  Crafter-RDS-Test  CREATE_IN_PROGRESS  User Initiated

Can anyone help, please? Thanks in advance.
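A stack left in ROLLBACK_COMPLETE can normally be deleted outright. If the delete itself then fails and the stack moves to DELETE_FAILED, the DeleteStack API accepts a RetainResources list naming the stuck logical IDs, so the rest of the stack can be removed and the stragglers cleaned up by hand. A minimal boto3-style sketch (stack and resource names taken from the events above; the call itself is commented out since it requires AWS credentials):

```python
# Sketch only: parameters for CloudFormation DeleteStack, matching the
# stack names in the events above. The boto3 call is commented out.
delete_params = {
    "StackName": "Crafter-RDS-Test",
    # RetainResources is only valid once the stack is in DELETE_FAILED:
    # it skips the listed logical IDs so the delete can finish.
    # "RetainResources": ["RdsStack"],
}
# boto3.client("cloudformation").delete_stack(**delete_params)
```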
RDS for SQL Server ODBC linked server connection issues
We tried using the SSMS console to add linked servers, but the admin user does not have sysadmin access to do so. We then tried sp_addlinkedserver, following these articles: https://aws.amazon.com/blogs/database/implement-linked-servers-with-amazon-rds-for-microsoft-sql-server/ and https://aws.amazon.com/premiumsupport/knowledge-center/rds-sql-server-create-linked-server/, but have still been unsuccessful. Our linked servers use ODBC and connect easily on localhost. Please help.
Do non-projected GSI attributes use WRUs?
I'm new to AWS and NoSQL/DynamoDB. I was wondering whether a GSI with projection type KEYS_ONLY would use WRUs or incur extra cost when modifying non-key attributes in the base table. For example, if a table has PK "id" and GSI PK "email", would modifying the attribute "city" in the base table use extra WRUs or add cost? Thank you in advance.
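For reference, the setup being asked about can be sketched as hypothetical boto3-style CreateTable parameters (the names "id", "email", "city" and the KEYS_ONLY projection come from the question; the table and index names are illustrative). With KEYS_ONLY, only the key attributes are copied into the index, which is the detail the cost question hinges on:

```python
# Hypothetical CreateTable parameters matching the question: base table
# keyed on "id", a GSI keyed on "email" with a KEYS_ONLY projection.
# This is the shape boto3's dynamodb client create_table(**params)
# expects; no AWS call is made here.
params = {
    "TableName": "users",
    "AttributeDefinitions": [
        {"AttributeName": "id", "AttributeType": "S"},
        {"AttributeName": "email", "AttributeType": "S"},
    ],
    "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "email-index",
            "KeySchema": [{"AttributeName": "email", "KeyType": "HASH"}],
            # KEYS_ONLY: the index stores only "id" and "email"; a
            # non-key attribute like "city" is never written to it.
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}

# "city" is not part of the projection, so an update touching only
# "city" should not propagate to the index.
projected = {d["AttributeName"] for d in params["KeySchema"]} | {
    d["AttributeName"]
    for gsi in params["GlobalSecondaryIndexes"]
    for d in gsi["KeySchema"]
}
print("city" in projected)  # False
```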
App Runner to RDS connection issues
I'm deploying an App Runner service from an ECR image. The service is public (both outgoing and incoming), and the issue is that I can't connect to a public RDS database. The RDS database is public only for debugging purposes and rapid testing of the image, but my application can't reach that public database (ETIMEOUT). The database endpoint is public and the security group allows all inbound and all outbound traffic. The same image deployed in ECS Fargate works correctly, and it also works in my local environment when pointing at the public RDS instance. Is this an App Runner issue, or am I missing something?
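When debugging this kind of timeout, a minimal TCP reachability check run from inside the container can separate a networking problem from an application or driver problem. A small sketch (the endpoint and port in the commented example are placeholders for the real RDS values):

```python
import socket


def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False


# Hypothetical usage from inside the running container; a timeout at the
# database driver level should show up here as False, pointing at
# routing/security groups rather than credentials or the engine:
# can_connect("mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com", 3306)
```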
Redshift Serverless Data API SQL calls
Hi, I am new to Redshift Serverless and working on a Lambda function that connects to a serverless database using the Python Redshift Data API and executes a bunch of SQL statements and stored procedures stored in a Nexus repository artifact. I am seeing errors when I try to execute SQL statements read from a file as a string. Here is an example of a DDL from one of the scripts:

-- Table Definitions
-- ----------------------------------
-- test1
-- ----------------------------------
DROP TABLE IF EXISTS test1;
CREATE TABLE test1 (
    id varchar(32),
    name varchar(64) DEFAULT NULL,
    grade varchar(64) DEFAULT NULL,
    zip varchar(5) DEFAULT NULL
);

-- test2
-- ----------------------------------
DROP TABLE IF EXISTS test2;
CREATE TABLE test2 (
    id varchar(32),
    user_id varchar(32) DEFAULT NULL,
    hnum varchar(6),
    ts_created timestamp DEFAULT NULL,
    ts_updated timestamp DEFAULT NULL
);

-- and a few other tables in the same script

The function runs fine if I hard-code the SQL query in the code, and I don't see any syntax or other errors in the SQL file contents, since I can run those DDLs in the Redshift query editor by manually copying and pasting them. Am I missing anything, or is the Data API not the right approach for this use case?
Error and traceback from the Lambda function execution:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/runtime/bootstrap.py", line 60, in <module>
    main()
  File "/var/runtime/bootstrap.py", line 57, in main
    awslambdaricmain.main([os.environ["LAMBDA_TASK_ROOT"], os.environ["_HANDLER"]])
  File "/var/runtime/awslambdaric/__main__.py", line 21, in main
    bootstrap.run(app_root, handler, lambda_runtime_api_addr)
  File "/var/runtime/awslambdaric/bootstrap.py", line 405, in run
    handle_event_request(
  File "/var/runtime/awslambdaric/bootstrap.py", line 165, in handle_event_request
    xray_fault = make_xray_fault(etype.__name__, str(value), os.getcwd(), tb_tuples)
FileNotFoundError: [Errno 2] No such file or directory
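One common cause of this pattern (works when a single statement is hard-coded, fails when a whole file is sent) is that the Data API's `execute_statement` expects one statement at a time, while `batch_execute_statement` takes a list of statements. A minimal sketch of splitting a script into individual statements before submission, assuming the script contains no semicolons or `--` sequences inside string literals or stored-procedure bodies:

```python
def split_sql_script(script: str) -> list[str]:
    """Split a SQL script into individual statements.

    Strips "--" line comments and blank lines, then splits on ";".
    Assumes no semicolons or "--" occur inside string literals or
    stored-procedure bodies; scripts that contain those need a real
    SQL parser rather than this sketch.
    """
    lines = []
    for line in script.splitlines():
        code = line.split("--", 1)[0]  # drop line comments
        if code.strip():
            lines.append(code)
    joined = "\n".join(lines)
    return [s.strip() for s in joined.split(";") if s.strip()]


script = """
-- test1
DROP TABLE IF EXISTS test1;
CREATE TABLE test1 (
    id varchar(32),
    name varchar(64) DEFAULT NULL
);
"""
statements = split_sql_script(script)
# Each entry can then be passed to execute_statement, or all of them
# at once to batch_execute_statement, via boto3's redshift-data client.
print(len(statements))  # 2
```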
Need to move or clone an RDS SQL Server instance with DB data to another region
Hi, I have created an initial RDS instance with SQL Server and DB data, but it's too slow for me in the California region. I am planning to migrate the RDS instance and database to the Mumbai region using a CloudFormation template. Can anyone help me with the steps?
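Assuming the usual snapshot-based flow for a cross-region move (snapshot the instance, copy the snapshot to the target region, restore there), the parameters involved can be sketched as plain dicts. All identifiers, regions, and the account number are hypothetical; the commented lines show where the corresponding boto3 `rds` client calls would go, since they require AWS credentials:

```python
# Hypothetical snapshot-based cross-region move for an RDS SQL Server
# instance. All names and the account ID are placeholders.
source_region = "us-west-1"   # California
target_region = "ap-south-1"  # Mumbai

# 1) Snapshot the source instance in the source region.
snapshot_params = {
    "DBInstanceIdentifier": "my-sqlserver-db",
    "DBSnapshotIdentifier": "my-sqlserver-db-pre-move",
}
# boto3.client("rds", region_name=source_region).create_db_snapshot(**snapshot_params)

# 2) Copy the snapshot into the target region (run against the target region).
copy_params = {
    "SourceDBSnapshotIdentifier": (
        f"arn:aws:rds:{source_region}:123456789012:snapshot:"
        f"{snapshot_params['DBSnapshotIdentifier']}"
    ),
    "TargetDBSnapshotIdentifier": "my-sqlserver-db-mumbai",
    "SourceRegion": source_region,
}
# boto3.client("rds", region_name=target_region).copy_db_snapshot(**copy_params)

# 3) Restore a new instance from the copied snapshot in the target region.
restore_params = {
    "DBInstanceIdentifier": "my-sqlserver-db-mumbai",
    "DBSnapshotIdentifier": copy_params["TargetDBSnapshotIdentifier"],
}
# boto3.client("rds", region_name=target_region).restore_db_instance_from_db_snapshot(**restore_params)
```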
HIVE_FILESYSTEM_ERROR: Input path does not exist when querying a table created by a Glue crawler using Athena, possibly due to a missing slash in the S3 path
Hello. I am running into the following error when querying a Delta Lake table that was built using a Glue crawler. The S3 path has been modified to hide sensitive info. Notice that a slash is missing between the name of the Glue catalog table and the partition. I believe this is the cause of the error, but I do not think it is coming from our end or something we can fix ourselves:

"HIVE_FILESYSTEM_ERROR: Input path does not exist: s3://bucket_name_here/glue_catalog_db_name_here/**table_namepartition**/additional_partition/part_name.snappy.parquet This query ran against the "db_name" database, unless qualified by the query."
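The malformed path in the error, with the table name and partition fused together, looks like the result of concatenating path segments without a separator. A toy reproduction of the suspected bug, with all bucket, table, and partition names hypothetical:

```python
from posixpath import join

table_location = "s3://bucket_name_here/glue_catalog_db_name_here/table_name"
partition = "partition/additional_partition/part_name.snappy.parquet"

# Naive concatenation reproduces the fused "table_namepartition"
# segment seen in the Athena error:
bad = table_location + partition

# Joining with an explicit separator yields the path that should exist:
good = join(table_location, partition)

print("table_namepartition" in bad)    # True
print("table_name/partition" in good)  # True
```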
Cannot create a clone of Aurora Serverless v1; documentation says otherwise
I'm tasked with upgrading an Aurora Serverless v1 DB cluster from Aurora MySQL version 1 to Aurora MySQL version 2. According to the documentation at https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.MySQL56.EOL.html, I can perform an in-place upgrade, described in detail at https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.MajorVersionUpgrade.html#AuroraMySQL.Updates.MajorVersionUpgrade.1to2. Before I perform the in-place upgrade, I'd like to create a clone of the current Serverless v1 cluster and test the upgrade on that clone. According to the documentation at https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html, I can create an Aurora Serverless v1 clone from an Aurora Serverless v1 DB cluster. Further down in that document, a guide shows how to do this in the AWS Console: under RDS > Databases, you can select a Serverless cluster and a "Create clone" item should appear under the Actions drop-down menu. However, when I click the "Actions" button after selecting my v1 Serverless cluster, the "Create clone" action does not appear. Has this option been removed for Serverless v1?
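If the console action is missing, cloning can also be attempted through the underlying API: Aurora clones are created with `RestoreDBClusterToPointInTime` using `RestoreType` set to `copy-on-write`. A sketch of the parameters, without asserting that Serverless v1 still supports this (cluster identifiers are hypothetical; the commented line is where the boto3 call would go):

```python
# Hypothetical clone request for an Aurora cluster. "copy-on-write" is
# the RestoreType that distinguishes a clone from an ordinary
# point-in-time restore. Identifiers are placeholders.
clone_params = {
    "SourceDBClusterIdentifier": "my-serverless-v1-cluster",
    "DBClusterIdentifier": "my-serverless-v1-clone",
    "RestoreType": "copy-on-write",
    "UseLatestRestorableTime": True,
}
# boto3.client("rds").restore_db_cluster_to_point_in_time(**clone_params)
```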