Questions tagged with Database
Hi All,
We just upgraded our ElastiCache for Redis cluster from v7.0.5 to v7.0.7 with a dual-stack connection, and it immediately went down and kept resetting all incoming connections. After investigating, we believe it is a bug that can be reproduced on any AWS account.
**expected output**: everything should work fine after the Redis upgrade
**actual output**: the Redis cluster keeps resetting all incoming connections
**steps to reproduce**:
1. Create a new Redis cluster with the following settings:
   - choose "dualstack" in the connection section instead of the default IPv4 option
   - choose Redis v7
2. Check that AWS selected v7.0.7; we can only reproduce this on v7.0.7, not on v7.0.5, v6.2, or v6.
3. Try to connect to this Redis cluster; every connection will be refused.

Thanks to everyone in AWS User Group Taiwan who helped us narrow down the issue.
Original post on Facebook in Traditional Chinese: https://www.facebook.com/groups/awsugtw/posts/5984294404980354/
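For anyone trying to reproduce this outside the console, here is a minimal sketch using boto3; the replication group name, subnet group, and region are placeholders (not from the original report), and the relevant part is `NetworkType="dual_stack"` combined with an engine version that resolves to 7.0.7:
```
import boto3

elasticache = boto3.client("elasticache", region_name="ap-northeast-1")

# Placeholder identifiers; the dual-stack network type plus engine 7.0.x is the
# combination described in the report above.
resp = elasticache.create_replication_group(
    ReplicationGroupId="repro-dualstack-redis",
    ReplicationGroupDescription="Repro: dual-stack Redis 7.0.x connection resets",
    Engine="redis",
    EngineVersion="7.0",               # AWS may resolve this to 7.0.7
    CacheNodeType="cache.t4g.micro",
    NumCacheClusters=1,
    CacheSubnetGroupName="my-dualstack-subnet-group",
    NetworkType="dual_stack",
)
print(resp["ReplicationGroup"]["Status"])
```
Once the group reports `available`, a plain `redis-cli` connection attempt against the cluster endpoint is enough to see whether connections are reset.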
Hi all, has anyone tried to configure CloudTrail for Redshift? We are trying to capture the activity of the IAM users who run queries in query editor v2.
We have found a few docs and followed the steps to configure CloudTrail, but we can't get the logs we are looking for:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-tutorial.html
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html
This is the doc we found showing that CloudTrail can integrate with Redshift and can capture events from query editor v2:
https://docs.aws.amazon.com/redshift/latest/mgmt/logging-with-cloudtrail.html
However, it doesn't show the steps for how to log those calls with CloudTrail.
Looking forward to guidance from you all, so that we can learn together.
Thanks.
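One way to check whether the trail is already capturing query editor v2 activity is to query the CloudTrail event history directly. A minimal sketch with boto3; the event source value is an assumption to verify against your own event history (query editor v2 actions are logged by CloudTrail, but confirm the exact source string):
```
import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Assumption: query editor v2 activity appears under the "sqlworkbench.amazonaws.com"
# event source -- confirm the exact value in your own CloudTrail event history first.
resp = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "sqlworkbench.amazonaws.com"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

for event in resp["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```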
Hi everyone,
I have a question about whether I need to make any setting or configuration change to prevent any impact from the daylight saving time change which, as we know, is not supposed to happen anymore.
I'm mainly interested in RDS, but it would also be good to know if any other service has this issue and needs to be configured.
Thanks in advance.
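As a starting point for RDS specifically: instances default to UTC, so a DST rule change only matters if a local time zone was set via the `time_zone` parameter in the DB parameter group. A minimal sketch for checking and, if needed, pinning that parameter with boto3; the parameter group name and zone are placeholders for a MySQL/MariaDB engine:
```
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder parameter group name; applies to MySQL/MariaDB-style engines.
paginator = rds.get_paginator("describe_db_parameters")
for page in paginator.paginate(DBParameterGroupName="my-mariadb-params"):
    for p in page["Parameters"]:
        if p["ParameterName"] == "time_zone":
            print("time_zone =", p.get("ParameterValue", "<default: UTC>"))

# To pin the instance to an explicit named zone (pending-reboot is always accepted):
rds.modify_db_parameter_group(
    DBParameterGroupName="my-mariadb-params",
    Parameters=[{
        "ParameterName": "time_zone",
        "ParameterValue": "America/Mexico_City",
        "ApplyMethod": "pending-reboot",
    }],
)
```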
Hi Team,
We created an Aurora PostgreSQL read replica for an RDS PostgreSQL instance and enabled Performance Insights on the Aurora read replica.
Performance Insights for this read replica is not tracking DB load (CPU), SQL statements, etc.
Can you please help us understand how to track these metrics correctly through Performance Insights?
Thanks,
Tushar
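One thing worth ruling out first is whether Performance Insights is actually enabled on the replica instance itself, since it is configured per DB instance rather than per cluster. A minimal sketch with boto3; the instance identifier is a placeholder:
```
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder identifier for the Aurora replica instance. Performance Insights is
# configured per DB instance, so each reader needs it enabled individually.
inst = rds.describe_db_instances(DBInstanceIdentifier="aurora-replica-instance-1")["DBInstances"][0]
print("Performance Insights enabled:", inst.get("PerformanceInsightsEnabled"))

# If it turns out to be disabled on the replica, it can be enabled in place:
if not inst.get("PerformanceInsightsEnabled"):
    rds.modify_db_instance(
        DBInstanceIdentifier="aurora-replica-instance-1",
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=7,  # days (free tier)
        ApplyImmediately=True,
    )
```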
Greetings.
A while ago I created an Aurora instance to migrate my database to, but it has had a lot of performance issues: responses are 2-3 times slower than on the current host (DigitalOcean). How can I improve this?
I have an on-premises MySQL database that needs to be migrated to an AWS RDS MySQL database. The on-premises database will be updated regularly, and I want to update the RDS database with the latest records from the on-premises database on a daily basis at a scheduled time. The two databases have schema differences, and I need to modify the data from the on-premises database to match the schema of the RDS database. I will not be performing any analytics on the data, and the RDS database will be used as the database for a web application.
Can you suggest an ideal approach for this scenario?
Thanks in Advance!
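Two common options are AWS DMS (full load plus scheduled or ongoing replication, with transformation rules for the schema differences) or a small scheduled ETL job (cron, EventBridge Scheduler, etc.) when the transformations are easier to express in code. As a rough illustration of the second option, here is a minimal sketch in Python with pymysql; the connection details, table names, columns, and transformation are all hypothetical placeholders:
```
import pymysql

# Hypothetical connection details, tables, and columns -- adjust to the real schemas.
SOURCE = dict(host="onprem-mysql.internal", user="etl", password="***", database="appdb")
TARGET = dict(host="myapp.xxxxxxxx.us-east-1.rds.amazonaws.com", user="etl", password="***", database="webapp")

def sync_daily():
    src = pymysql.connect(**SOURCE)
    dst = pymysql.connect(**TARGET)
    try:
        with src.cursor() as cur:
            # Pull only the last day's changes; assumes the source table has an updated_at column.
            cur.execute(
                "SELECT id, full_name, created_at FROM customers "
                "WHERE updated_at >= NOW() - INTERVAL 1 DAY"
            )
            rows = cur.fetchall()

        # Reshape each row to the RDS schema (placeholder transformation: split full_name).
        transformed = []
        for id_, full_name, created_at in rows:
            first, _, last = full_name.partition(" ")
            transformed.append((id_, first, last, created_at))

        with dst.cursor() as cur:
            cur.executemany(
                "REPLACE INTO customers (id, first_name, last_name, created_at) "
                "VALUES (%s, %s, %s, %s)",
                transformed,
            )
        dst.commit()
    finally:
        src.close()
        dst.close()

if __name__ == "__main__":
    sync_daily()
```
A job like this can be triggered daily by cron on an EC2 instance or by EventBridge Scheduler invoking a Lambda/ECS task, depending on data volume.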
I'm trying to find a way to make a Multi-AZ cluster read replica writable.
Is it even possible, or is the only way of "promoting" the read replica to be writable to fail over the primary instance?
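In case it helps the discussion: for read replicas that are themselves DB clusters, RDS exposes a promote call that detaches the replica from its source and makes it standalone and writable, which is different from failing over inside the source cluster. Whether it applies to this particular Multi-AZ cluster replica is an assumption to verify; a minimal sketch with a placeholder identifier:
```
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder identifier. Promotion detaches the replica from its replication
# source and makes it writable; it does not touch the original primary.
rds.promote_read_replica_db_cluster(
    DBClusterIdentifier="my-multiaz-replica-cluster"
)

# For a single-instance read replica the instance-level call would be used instead:
# rds.promote_read_replica(DBInstanceIdentifier="my-replica-instance")
```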
We have been running an AR web service since March 17th. Normally, RDS CPU usage is only around 4-5%, but today it spiked to 60%, so we are investigating the cause. The RDS instance was created in a private subnet, and we understand it can only be accessed by EC2 instances in the same VPC. We checked the access logs of the Ubuntu EC2 instance, but it seems only two workers accessed it from their IP addresses. We are wondering whether there is any other way the private RDS instance could be accessed, and whether CPU can be consumed like this when automatic RDS backups run. The RDS instance is a db.m5.large running MariaDB, and the EC2 instance is a c5n.2xlarge running Ubuntu. About one minute later, the CloudWatch logs showed warnings of the form: [Warning] Aborted connection <number> to db: 'unconnected' user: 'rdsadmin' host: 'localhost' (Got an error reading communication packets).
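One way to narrow this down is to pull the RDS CloudWatch metrics for the window around the spike and compare them with the backup window and the connection count. A minimal sketch with boto3; the instance identifier and region are placeholders:
```
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="ap-northeast-2")

# Placeholder instance identifier and region.
def rds_metric(name):
    return cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=name,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-mariadb-instance"}],
        StartTime=datetime.utcnow() - timedelta(hours=6),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Maximum"],
    )["Datapoints"]

# Compare the CPU spike with the number of client connections over the same window;
# a spike with flat connections points at internal work (backups, maintenance, a
# heavy query) rather than unexpected external access.
for name in ("CPUUtilization", "DatabaseConnections"):
    print(name)
    for dp in sorted(rds_metric(name), key=lambda d: d["Timestamp"]):
        print(" ", dp["Timestamp"], round(dp["Maximum"], 1))
```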
A question for the AWS professionals.
I recently worked with a dev team to create a web app for my business. The infrastructure is AWS RDS (db.m6i.large, 100% utilization, on-demand, Multi-AZ), S3, and Lightsail. Costs were estimated at $300 per month in the calculator, but we are being charged $1,000 per month. Does anyone know why we are being charged so much?
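A first step is usually to break the bill down by service in Cost Explorer and compare it line by line with the calculator estimate; Multi-AZ roughly doubles the RDS instance cost, and storage, backups, and data transfer are easy to underestimate. A minimal sketch with boto3; the date range is a placeholder:
```
import boto3

ce = boto3.client("ce", region_name="us-east-1")

# Placeholder date range covering one billing month.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-03-01", "End": "2023-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print every service that contributed more than a dollar to the bill.
for group in resp["ResultsByTime"][0]["Groups"]:
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 1:
        print(f'{group["Keys"][0]}: ${amount:,.2f}')
```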
Redshift gives me an error for the following query:
```
select table_schema, table_name, listagg(column_name, ', ')
  within group (order by ordinal_position asc)
from information_schema.columns
where table_name = 'abcde'
  and table_schema = 'xyz'
group by 1, 2
```
I also tried creating mytable and populating it:
```
insert into mytable
select table_schema, table_name, ordinal_position as colpos,
       column_name as ColumnName
from information_schema.columns
where table_name = 'abcde'
  and table_schema = 'xyz'
group by 1, 2
```
which gives me the error:
```
Function "has_column_privilege(oid,smallint,text)" not supported.
Function "has_column_privilege(oid,smallint,text)" not supported.
Function "has_table_privilege(oid,text)" not supported.
Function "has_table_privilege(oid,text)" not supported.
Function "has_table_privilege(oid,text)" not supported.
```
What I want to achieve is the following output, which will later be used in my stored procedure:
```
table_schema | tablename | distkey | sortkey   | columns
xyz          | abcde     | col1    | col2,col3 | col1,col2,col3,col4,col5,col6,col7
```
I also tried:
```
select schema_name as databasename, table_name as tablename, ordinal_position as colpos, column_name
from pg_catalog.svv_all_columns
where database_name = 'prod123' and schema_name = 'xyz' and table_name = 'abcde'
order by 1, 2, 3, 4
```
and get the error:
```
Function "has_column_privilege(oid,smallint,text)" not supported.
Function "has_column_privilege(oid,smallint,text)" not supported.
Failed to get redshift columns from *******
```
Thanks,
KN
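Not an answer to the unsupported-function errors themselves, but a possible workaround: `information_schema.columns` and `svv_all_columns` internally call privilege functions that only run on the leader node, while the raw catalog tables (`pg_class`, `pg_attribute`, `pg_namespace`) expose the column list along with Redshift's distkey/sortkey flags without them. A minimal sketch using the Redshift Data API from Python; the cluster, database, and user are placeholders, and rolling the columns up into comma-separated lists would then be done over a temp table or inside the stored procedure rather than against the catalog views:
```
import time
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

# Column metadata straight from the catalog tables. Redshift's pg_attribute carries
# attisdistkey and attsortkeyord, so dist/sort keys can be read without the
# information_schema views that call the unsupported privilege functions.
SQL = """
select n.nspname  as table_schema,
       c.relname  as table_name,
       a.attname  as column_name,
       a.attisdistkey,
       a.attsortkeyord
from pg_namespace n
join pg_class     c on c.relnamespace = n.oid
join pg_attribute a on a.attrelid = c.oid and a.attnum > 0
where n.nspname = 'xyz' and c.relname = 'abcde'
order by a.attnum
"""

# Placeholder cluster, database, and user (use WorkgroupName instead for Serverless).
stmt = redshift_data.execute_statement(
    ClusterIdentifier="prod123-cluster",
    Database="prod123",
    DbUser="awsuser",
    Sql=SQL,
)

while redshift_data.describe_statement(Id=stmt["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

for row in redshift_data.get_statement_result(Id=stmt["Id"])["Records"]:
    print([list(field.values())[0] for field in row])
```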
Hi, I'd appreciate AWS Athena support for TIMESTAMP data type with microsecond precision for all row formats and table engines. Currently, the support is very inconsistent. See the SQL script below.
```
drop table if exists test_csv;
create external table if not exists test_csv (
id int,
created_time timestamp
)
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
with serdeproperties('separatorChar'=',', 'quoteChar'='"', 'escapeChar'='\\')
location 's3://my-bucket/tmp/timestamp_csv_test/';
-- result: OK
drop table if exists test_parquet;
create external table if not exists test_parquet (
id int,
created_time timestamp
)
row format serde 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
stored as inputformat 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
outputformat 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
location 's3://my-bucket/tmp/timestamp_parquet_test/'
tblproperties ('parquet.compress' = 'snappy');
-- result: OK
drop table if exists test_iceberg;
create table if not exists test_iceberg (
id int,
created_time timestamp
)
location 's3://my-bucket/tmp/timestamp_iceberg_test/'
tblproperties ( 'table_type' ='iceberg');
-- result: OK
insert into test_csv values (1, timestamp '2023-03-22 11:00:00.123456');
/*
result: ERROR [HY000][100071] [Simba][AthenaJDBC](100071) An error has been thrown from the AWS Athena client. GENERIC_INTERNAL_ERROR: class org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector cannot be cast to class org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector (org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector and org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector are in unnamed module of loader io.trino.server.PluginClassLoader @1df1bd44). If a data manifest file was generated at 's3://my-bucket/athena_results/ad44adee-2a80-4f41-906a-17aa5dc27730-manifest.csv', you may need to manually clean the data from locations specified in the manifest. Athena will not delete data in your account. [Execution ID: ***]
*/
insert into test_parquet values (1, timestamp '2023-03-22 11:00:00.123456');
-- result: OK
select * from test_parquet;
-- result: OK DATA: 1,2023-03-22 11:00:00.123000 BUT THE TIMESTAMP VALUE IS TRUNCATED TO MILLISECONDS!
insert into test_iceberg values (1, timestamp '2023-03-22 11:00:00.123456');
-- result: OK
select * from test_csv;
select * from test_iceberg;
-- result: OK DATA: 1,2023-03-22 11:00:00.123456 THIS IS FINE
```
Hello,
I'm trying to set up DAX to handle caching for our DynamoDB logic in our existing Kubernetes cluster.
However, the guides I follow are incomplete.
From the official doc here:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.create-cluster.console.create-subnet-group.html
1. Open the **DynamoDB** console at https://console.aws.amazon.com/dynamodb/.
2. In the navigation pane, under **DAX**, choose **Subnet groups**.
However, there is no such thing as "DAX" under DynamoDB; there is simply Create table, etc. When I search for DAX in the console, I get no hits.
How exactly am I supposed to follow this when the official guide itself isn't correct?
The same goes for other guides I've found; they simply do not match how the console looks in real life.
Help is much appreciated, since our prod environment is in dire need of this ASAP.
Kind regards
Olle
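If the console entry point can't be found, the same resources can also be created through the API, which sidesteps the navigation issue entirely. A minimal sketch with boto3; all names, subnet IDs, the security group, and the IAM role ARN are placeholders (the role must allow DAX to access the DynamoDB tables):
```
import boto3

dax = boto3.client("dax", region_name="eu-north-1")

# Placeholder names, subnet IDs, security group, and IAM role ARN. The subnet group
# must cover VPC subnets that the Kubernetes worker nodes can reach.
dax.create_subnet_group(
    SubnetGroupName="dax-subnet-group",
    Description="Subnets for the DAX cluster",
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

dax.create_cluster(
    ClusterName="app-cache",
    NodeType="dax.t3.small",
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::111122223333:role/DAXServiceRole",
    SubnetGroupName="dax-subnet-group",
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```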