Questions tagged with Database

Browse through the questions and answers listed below or filter and sort to narrow down your results.

I have an on-premises MySQL database that needs to be migrated to an AWS RDS MySQL database. The on-premises database will be updated regularly, and I want to update the RDS database with the latest records from the on-premises database daily at a scheduled time. The two databases have schema differences, and I need to modify the data from the on-premises database to match the schema of the RDS database. I will not be performing any analytics on the data, and the RDS database will be used as the database for a web application. Can you suggest an ideal approach for this scenario? Thanks in advance!
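Common managed options for this scenario are AWS DMS (full load plus ongoing replication with transformation rules) or a scheduled AWS Glue job. Purely as an illustration of the daily transform-and-load step, here is a minimal Python sketch that a cron job or an EventBridge-triggered Lambda could run; hosts, credentials, table and column names are hypothetical.

```
# Hypothetical daily sync: pull recently updated rows from the on-premises MySQL
# database, reshape them to the RDS schema, and upsert into RDS.
# Hosts, credentials, table and column names are placeholders.
import pymysql

SOURCE = dict(host="onprem.example.internal", user="sync", password="...", database="appdb")
TARGET = dict(host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com", user="sync", password="...", database="appdb")

def transform(row):
    # Example schema mapping: source (id, fname, lname, updated_at)
    # -> target (id, full_name, updated_at)
    return (row[0], f"{row[1]} {row[2]}", row[3])

def run_daily_sync(last_sync_ts):
    src = pymysql.connect(**SOURCE)
    dst = pymysql.connect(**TARGET)
    try:
        with src.cursor() as c:
            c.execute(
                "SELECT id, fname, lname, updated_at FROM customers WHERE updated_at > %s",
                (last_sync_ts,),
            )
            rows = [transform(r) for r in c.fetchall()]
        with dst.cursor() as c:
            # REPLACE INTO acts as a simple upsert keyed on the primary key.
            c.executemany(
                "REPLACE INTO customers (id, full_name, updated_at) VALUES (%s, %s, %s)",
                rows,
            )
        dst.commit()
    finally:
        src.close()
        dst.close()
```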
1
answers
0
votes
10
views
Jinali
asked 16 hours ago
I'm trying to find a way to make a Multi-AZ cluster read replica writable. Is it even possible, or is the only way of "promoting" the read replica to a writable state to fail over the primary instance?
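If this refers to a Multi-AZ DB cluster read replica, the usual path appears to be promoting the replica to a standalone, writable cluster rather than making it writable in place. A minimal boto3 sketch, with placeholder identifiers:

```
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Promote a Multi-AZ DB cluster read replica to a standalone, writable cluster.
# Note: promotion is one-way; the promoted cluster stops replicating from its source.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="my-cluster-replica")

# For a plain DB instance read replica, the equivalent call would be:
# rds.promote_read_replica(DBInstanceIdentifier="my-instance-replica")
```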
1
answers
0
votes
8
views
asked 2 days ago
We have been providing AR web services since March 17th. Normally the RDS CPU usage only reaches 4-5%, but today it spiked to 60%, so we are investigating the cause. The RDS instance is in a private subnet, and we understand it can only be accessed by EC2 instances created in the same VPC. We checked the access logs of the Ubuntu EC2 instance, but it seems only two workers accessed it from their IP addresses. We are wondering whether there is any other way to access the private RDS instance, and whether CPU can be consumed like this when automatic RDS backups are performed. The RDS instance is a db.m5.large running MariaDB, and the EC2 instance is a c5n.2xlarge running Ubuntu. Approximately 1 minute later, the CloudWatch logs showed: `[Warning] Aborted connection numbers to db: 'unconnected' user: 'rdsadmin' host: 'localhost' (Got an error reading communication packets)`.
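One way to narrow this down is to pull the per-minute RDS metrics around the spike and compare them with the configured backup window and the connection count. A boto3 sketch; the region and instance identifier are placeholders:

```
from datetime import datetime, timedelta
import boto3

cw = boto3.client("cloudwatch", region_name="ap-northeast-2")  # region is a placeholder

def rds_metric(metric, db_instance_id, hours=6):
    """Fetch 1-minute averages for an RDS CloudWatch metric over the last `hours`."""
    end = datetime.utcnow()
    resp = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": db_instance_id}],
        StartTime=end - timedelta(hours=hours),
        EndTime=end,
        Period=60,
        Statistics=["Average"],
    )
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])

for point in rds_metric("CPUUtilization", "my-mariadb-instance"):
    print(point["Timestamp"], round(point["Average"], 1))
# Compare against DatabaseConnections and the backup window to see whether the
# spike lines up with snapshot activity rather than client traffic.
```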
1
answers
0
votes
3
views
asked 2 days ago
A question for the AWS professionals. I recently worked with a dev team to create a web app for my business. The infrastructure is AWS RDS (db.m6i.large, 100% utilization, on-demand, Multi-AZ), S3, and Lightsail. Costs were estimated at $300 per month in the calculator, but we are being charged $1,000 per month. Does anyone know why we are being charged so much?
1
answers
0
votes
14
views
Andrew
asked 2 days ago
Redshift gives me an error for the following query:

```
select table_schema, table_name,
       LISTAGG(column_name, ', ') within group (order by ordinal_position asc)
from information_schema.columns
where table_name = 'abcde' and table_schema = 'xyz'
group by 1, 2
```

I also tried to create mytable:

```
insert into mytable
select table_schema, table_name, ordinal_position as colpos, column_name as ColumnName
from information_schema.columns
where table_name = 'abcde' and table_schema = 'xyz'
group by 1, 2
```

which gives me the error:

```
Function "has_column_privilege(oid,smallint,text)" not supported.
Function "has_column_privilege(oid,smallint,text)" not supported.
Function "has_table_privilege(oid,text)" not supported.
Function "has_table_privilege(oid,text)" not supported.
Function "has_table_privilege(oid,text)" not supported.
```

What I want to achieve (to be used later in my stored proc) is output like this:

```
table_schema | tablename | distkey | sortkey   | columns
xyz          | abcde     | col1    | col2,col3 | col1,col2,col3,col4,col5,col6,col7
```

I also tried:

```
select schema_name as databasename, table_name as tablename, ordinal_position as colpos, column_name
from pg_catalog.svv_all_columns
where database_name = 'prod123' and schema_name = 'xyz' and table_name = 'abcde'
order by 1, 2, 3, 4
```

and get the error:

```
Function "has_column_privilege(oid,smallint,text)" not supported.
Function "has_column_privilege(oid,smallint,text)" not supported.
Failed to get redshift columns from *******
```

Thanks, KN
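One workaround worth comparing against is to pull the raw column metadata from `pg_table_def` and do the LISTAGG-style aggregation client-side instead of in the query. A sketch using psycopg2 against the Redshift endpoint; connection details are placeholders, and note that `pg_table_def` only lists tables on the current search_path:

```
# Sketch: build the schema / table / distkey / sortkey / columns summary in Python
# instead of aggregating over information_schema in SQL.
# Connection details are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="prod123", user="admin", password="...",
)

with conn.cursor() as cur:
    cur.execute("set search_path to xyz")  # pg_table_def only shows tables on the search_path
    cur.execute(
        """
        select schemaname, tablename, "column", distkey, sortkey
        from pg_table_def
        where schemaname = 'xyz' and tablename = 'abcde'
        """
    )
    rows = cur.fetchall()

columns = [r[2] for r in rows]
distkeys = [r[2] for r in rows if r[3]]                                   # distkey flag is boolean
sortkeys = [r[2] for r in sorted(rows, key=lambda r: abs(r[4])) if r[4] != 0]  # sortkey position
print("xyz", "abcde", ",".join(distkeys), ",".join(sortkeys), ",".join(columns))
```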
0
answers
0
votes
5
views
KN
asked 2 days ago
Hi, I'd appreciate AWS Athena support for the TIMESTAMP data type with microsecond precision for all row formats and table engines. Currently, the support is very inconsistent. See the SQL script below.

```
drop table if exists test_csv;
create external table if not exists test_csv (
    id int,
    created_time timestamp
)
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
with serdeproperties ('separatorChar'=',', 'quoteChar'='"', 'escapeChar'='\\')
location 's3://my-bucket/tmp/timestamp_csv_test/';
-- result: OK

drop table if exists test_parquet;
create external table if not exists test_parquet (
    id int,
    created_time timestamp
)
row format serde 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
stored as
    inputformat 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
    outputformat 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
location 's3://my-bucket/tmp/timestamp_parquet_test/'
tblproperties ('parquet.compress' = 'snappy');
-- result: OK

drop table if exists test_iceberg;
create table if not exists test_iceberg (
    id int,
    created_time timestamp
)
location 's3://my-bucket/tmp/timestamp_iceberg_test/'
tblproperties ('table_type' = 'iceberg');
-- result: OK

insert into test_csv values (1, timestamp '2023-03-22 11:00:00.123456');
/* result: ERROR [HY000][100071] [Simba][AthenaJDBC](100071) An error has been thrown from the AWS Athena client.
   GENERIC_INTERNAL_ERROR: class org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector
   cannot be cast to class org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector
   (org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector and
   org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector are in unnamed module of loader
   io.trino.server.PluginClassLoader @1df1bd44). If a data manifest file was generated at
   's3://my-bucket/athena_results/ad44adee-2a80-4f41-906a-17aa5dc27730-manifest.csv', you may need to manually
   clean the data from locations specified in the manifest. Athena will not delete data in your account.
   [Execution ID: ***] */

insert into test_parquet values (1, timestamp '2023-03-22 11:00:00.123456');
-- result: OK

select * from test_parquet;
-- result: OK
-- DATA: 1,2023-03-22 11:00:00.123000  -- BUT THE TIMESTAMP VALUE IS TRUNCATED TO MILLISECONDS!

insert into test_iceberg values (1, timestamp '2023-03-22 11:00:00.123456');
-- result: OK

select * from test_csv;

select * from test_iceberg;
-- result: OK
-- DATA: 1,2023-03-22 11:00:00.123456  -- THIS IS FINE
```
0
answers
0
votes
17
views
asked 3 days ago
Hello, Im trying to setup DAX to handle caching for our DynamoDB logic for our existing kubernetes cluster. However, when I follow the guides, they are incomplete. From official doc here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.create-cluster.console.create-subnet-group.html 1. Open the **DynamoDB **console at https://console.aws.amazon.com/dynamodb/. 2. In the navigation pane, under **DAX, choose Subnet groups.** However there are NO such thing as "DAX" under DynamoDB. There is simply create table etc. When I search DAX in the console, I get no hits. How exactly am I to understand how this is to be done when the official guide itself isnt correct? Same with guides I've found, they simply do not align with how it looks in real life. Help much appreciated since our Prod enviroment is in dire need of this ASAP. Kind regards Olle
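In case it helps while the console entry point can't be found, the same subnet group and cluster can be created through the API. A minimal boto3 sketch; the region, names, subnet IDs, and role ARN are placeholders:

```
import boto3

dax = boto3.client("dax", region_name="eu-north-1")  # region is a placeholder

# Create the subnet group the console guide refers to.
dax.create_subnet_group(
    SubnetGroupName="my-dax-subnet-group",
    Description="Subnets for the DAX cluster",
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# Then create the cluster inside that subnet group.
dax.create_cluster(
    ClusterName="my-dax-cluster",
    NodeType="dax.t3.small",
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::123456789012:role/DAXServiceRole",
    SubnetGroupName="my-dax-subnet-group",
)
```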
1
answers
0
votes
9
views
asked 3 days ago
[This document](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html#USER_UpgradeDBInstance.PostgreSQL.MajorVersion.Process) indicates that PostGIS needs to be updated before a major version upgrade. And [this list](https://docs.aws.amazon.com/AmazonRDS/latest/PostgreSQLReleaseNotes/postgresql-extensions.html#postgresql-extensions-15x) shows the supported PostGIS version under PostgreSQL 15.2 is 3.3.2. However, the latest PostGIS version available on RDS under PostgreSQL 14.7 [appears to be 3.1.7](https://docs.aws.amazon.com/AmazonRDS/latest/PostgreSQLReleaseNotes/postgresql-extensions.html#postgresql-extensions-14x). Indeed, after running this:

`SELECT postgis_extensions_upgrade();`

and then attempting the upgrade, I get the following:

```
------------------------------------------------------------------
Upgrade could not be run on ...
------------------------------------------------------------------
The instance could not be upgraded from 14.7.R1 to 15.2.R1 because of following reasons. Please take appropriate action on databases that have usages incompatible with requested major engine version upgrade and try again.
- Following usages in database 'postgres' need to be corrected before upgrade:
-- The instance could not be upgraded because there are one or more databases with an older version of PostGIS extension or its dependent extensions (address_standardizer, address_standardizer_data_us, postgis_tiger_geocoder, postgis_topology, postgis_raster) installed. Please upgrade all installations of PostGIS and drop its dependent extensions and try again.
----------------------- END OF LOG ----------------------
```

Trying the specific target version also fails:

`ALTER EXTENSION postgis UPDATE TO '3.3.2';`

```
ERROR: extension "postgis" has no update path from version "3.1.7" to version "3.3.2"
SQL state: 22023
```

Any suggestions?
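A quick way to see which PostGIS versions the 14.7 instance actually ships, and which update paths exist (and hence why the ALTER fails), is to query the extension catalogs. A psycopg2 sketch with placeholder connection details:

```
# Sketch: list the PostGIS versions and update paths available on the current
# engine version before attempting the major version upgrade.
# Connection details are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
    port=5432, dbname="postgres", user="master", password="...",
)

with conn.cursor() as cur:
    cur.execute(
        "select name, version from pg_available_extension_versions "
        "where name = 'postgis' order by version"
    )
    print("available postgis versions:", cur.fetchall())

    cur.execute(
        "select source, target, path from pg_extension_update_paths('postgis') "
        "where path is not null"
    )
    print("usable update paths:", cur.fetchall())
```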
0
answers
1
votes
10
views
asked 3 days ago
Hi community, I am trying to perform an ETL job using AWS Glue. Our data is stored in MongoDB Atlas, inside a VPC, and our AWS account is connected to MongoDB Atlas using VPC peering. To perform the ETL job in AWS Glue I first created a connection using the VPC details and the MongoDB Atlas URI along with the username and password. The connection is used by the AWS Glue crawlers to extract the schema into AWS Data Catalog tables. This connection works! However, I am then attempting to perform the actual ETL job using the following PySpark code:

```
# My temp variables
source_database = "d*********a"
source_table_name = "main_businesses"
source_mongodb_db_name = "main"
source_mongodb_collection = "businesses"

glueContext.create_dynamic_frame.from_catalog(
    database=source_database,
    table_name=source_table_name,
    additional_options={"database": source_mongodb_db_name, "collection": source_mongodb_collection},
)
```

However, the connection times out and for some reason MongoDB Atlas is blocking the connection from the ETL job. It's as if the ETL job is using the connection differently than the crawler does. Maybe the ETL job is not able to run inside our AWS VPC that is connected to the MongoDB Atlas VPC (VPC peering is not possible?). Does anyone have any idea what might be going on or how I can fix this? Thank you!
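As a comparison point, here is a sketch that reads the collection through explicit connection options instead of the catalog table, which can help isolate whether the problem is the catalog/connection pairing or the networking. The URI, credentials, and option keys below are assumptions based on the documented Glue MongoDB connection options and would need to be adapted:

```
# Sketch: read the MongoDB Atlas collection via connection options rather than
# the Data Catalog table. All values are placeholders; option keys are assumed
# from the Glue "mongodb" connection type documentation.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

dyf = glue_context.create_dynamic_frame_from_options(
    connection_type="mongodb",
    connection_options={
        "uri": "mongodb+srv://cluster0.xxxxx.mongodb.net",  # placeholder Atlas URI
        "database": "main",
        "collection": "businesses",
        "username": "glue_user",
        "password": "...",
        "ssl": "true",
        "ssl.domain_match": "false",
    },
)
print(dyf.count())
```

Note that when reading via options rather than a catalog connection, the job may not inherit the VPC/ENI settings of that connection, which is one thing to check against the peering setup.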
1
answers
0
votes
18
views
asked 4 days ago
Hi, I'm getting this strange error:

```
ERROR 1227 (42000): Access denied; you need (at least one of) the GRANT OPTION privilege(s) for this operation
```

when trying to run this command on the "writer instance" of a "multi-az cluster":

```
grant replication slave, replication client, replication_slave_admin on *.* to 'repl'@'%';
```

What confuses me is that the same command works on the "multi-az" "writer instance".
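One thing worth checking is what the master user is actually allowed to grant on each deployment, since RDS withholds some privileges from the master account. A small pymysql sketch with a placeholder endpoint and credentials:

```
# Compare the master user's own grants on the Multi-AZ cluster vs. the Multi-AZ
# instance; look for "WITH GRANT OPTION" on the replication privileges.
# Endpoint and credentials are placeholders.
import pymysql

conn = pymysql.connect(
    host="my-multiaz-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin", password="...",
)
with conn.cursor() as cur:
    cur.execute("SHOW GRANTS FOR CURRENT_USER()")
    for (grant,) in cur.fetchall():
        print(grant)
```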
0
answers
0
votes
34
views
asked 5 days ago
Hi, logged on as the master user on an Oracle 19 RDS database, I would like to execute the following: `GRANT EXECUTE ON XDB.DBMS_XMLSCHEMA TO appl_owner;` This fails with ORA-01031: insufficient privileges. I tried this with the RDSADMIN package: `EXECUTE rdsadmin.rdsadmin_util.grant_sys_object(p_obj_name => 'DBMS_XMLSCHEMA', p_grantee => 'FWD_OWNER', p_privilege => 'EXECUTE');` But that grants privileges on SYS objects only, so it doesn't find objects in the XDB schema. How can I grant privileges on objects in a different schema (XDB) to a different user/schema? Thanks
1
answers
0
votes
7
views
asked 5 days ago
I want to perform database operations through a GUI; what is the best option for this project? To run my project I'm using XAMPP, and I would like to be able to run queries on my database and dynamically display, sort, add, remove, and update records based on site user input. Should I use an RDS database or EC2?
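Either way, the application code talks to MySQL the same way; only the host changes, so the choice is mostly about who manages the database (RDS is managed, MySQL on EC2 is self-managed). A rough illustration in Python (the same idea applies to PHP/PDO in a XAMPP app); the endpoint, credentials, and table are placeholders:

```
import pymysql

# Connecting to RDS looks the same as connecting to a local MySQL/XAMPP database;
# only the host differs. Endpoint, credentials, and table are placeholders.
conn = pymysql.connect(
    host="myapp-db.xxxxxxxx.us-east-1.rds.amazonaws.com",  # or "localhost" for XAMPP / MySQL on EC2
    user="appuser", password="...", database="appdb",
)
with conn.cursor() as cur:
    cur.execute("SELECT id, name FROM records ORDER BY name LIMIT %s", (20,))
    for row in cur.fetchall():
        print(row)
```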
1
answers
0
votes
38
views
asked 5 days ago