Questions tagged with Database
Someone deleted rows from a critical table in an RDS Aurora MySQL instance. There is nothing in the slow query log either. Is there any other way to find or trace all the queries that ran against the RDS Aurora instance?
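For anyone hitting this: after the fact there is little to recover if no logging was on, but one commonly suggested option is to enable Aurora MySQL Advanced Auditing so future queries (including DML) are logged. A minimal boto3 sketch, assuming a custom cluster parameter group named `my-aurora-params` (a placeholder) is attached to the cluster:

```python
def audit_parameters(events="QUERY,TABLE"):
    """Build the parameter list that turns on Aurora Advanced Auditing."""
    return [
        {"ParameterName": "server_audit_logging", "ParameterValue": "1",
         "ApplyMethod": "immediate"},
        {"ParameterName": "server_audit_events", "ParameterValue": events,
         "ApplyMethod": "immediate"},
    ]

def enable_auditing():  # needs AWS credentials; defined here but not called
    import boto3
    rds = boto3.client("rds")
    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="my-aurora-params",  # placeholder name
        Parameters=audit_parameters(),
    )
```

Once enabled, the audit logs appear under the cluster's log files and can be exported to CloudWatch Logs for searching.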
Multi-read/write databases with replication to each other, in Multi-AZ cluster / Aurora cluster setups
Hello, I want to implement "active-active" replication, as described here: https://workmarket.tech/zero-downtime-maintenances-on-mysql-rds-ba13b51103c2 , but using the read instances/replicas created by the Multi-AZ feature. As far as I understand, if I use plain "Multi-AZ" then I don't have access to the created "secondary" database. With a "Multi-AZ cluster" and an "Aurora cluster", I do have read access to the "secondary" database replicas. Is it possible to get command-line access to them? Is it possible to make manual changes with that access? Is it possible to get the binlog location and, using it, set up "active-active" replication between the "primary" and the "standby"? Or are there limitations that make this impossible? Thanks
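For reference, RDS and Aurora MySQL expose stored procedures for configuring external binlog replication rather than giving you a `CHANGE MASTER TO` prompt. A small sketch, assuming you have already read the binlog coordinates with `SHOW MASTER STATUS` on the source (all host/credential values below are placeholders, not anything from the post):

```python
def set_external_master_sql(host, user, password, log_file, log_pos,
                            port=3306, ssl=0):
    """Build the RDS stored-procedure call that points a replica at an
    external source, given binlog coordinates from SHOW MASTER STATUS."""
    return (
        "CALL mysql.rds_set_external_master("
        f"'{host}', {port}, '{user}', '{password}', "
        f"'{log_file}', {log_pos}, {ssl});"
    )
```

Note that the Multi-AZ standby (non-cluster) is not reachable at all, so this approach only applies to endpoints you can actually connect to, such as Aurora replicas or separate instances.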
I'm a newbie to AWS. When I looked at my bill today, the RDS backup services were the cause of the rise. There isn't a database, nor snapshots, nor anything else operating in the RDS dashboard, only the standard mysql:80. How can I terminate the RDS backup services?
All my instances, services, and RDS snapshots are deleted, yet my bill is still increasing.
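One thing worth checking: manual snapshots and retained automated backups of already-deleted instances keep accruing backup-storage charges even when the RDS dashboard looks empty, and they can live in a different region than the one the console is showing. A minimal boto3 sketch to enumerate both (uses whatever region your default configuration points at):

```python
def manual_snapshot_ids(snapshots):
    """Pick out manual snapshots; these persist until deleted explicitly."""
    return [s["DBSnapshotIdentifier"] for s in snapshots
            if s.get("SnapshotType") == "manual"]

def list_backup_sources():  # needs AWS credentials to actually run
    import boto3
    rds = boto3.client("rds")
    snaps = rds.describe_db_snapshots(IncludeShared=False)["DBSnapshots"]
    print("Manual snapshots:", manual_snapshot_ids(snaps))
    # Retained automated backups of deleted instances also incur charges:
    backups = rds.describe_db_instance_automated_backups()
    print("Retained automated backups:",
          backups["DBInstanceAutomatedBackups"])
```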
We recently upgraded an MSSQL RDS database instance from 12.00.6329.1.v1 to 15.00.4236.7.v1. The instance functions fine, but it supports a legacy application that sometimes encounters errors that leave us no choice but to roll back to a snapshot. We also copy snapshots to our QA account periodically to refresh our QA databases, and this procedure has increased by the same significant amount of time. Prior to upgrading, these snapshot restores took around 30 minutes for the instance to reach the Available state. After upgrading, it takes several hours; the most recent attempt took 6 hours. This is harmful to our business, as the increase in recovery time causes cascading effects. This feels like a possible AWS bug; I'm not sure what we could do differently to improve this outcome. Any ideas? Instance details: Instance class: db.r6i.4xlarge, Storage type: gp2, Storage size: 2000 GiB
I have set up a new instance in Lightsail with the Debian 11 LAMP stack. 1. Connected phpMyAdmin through a tunnelled PuTTY session and uploaded my data, which looks good. 2. Connected with FileZilla and uploaded some PHP, which works, and now I need to connect to MySQL from the command line. I have used mysql -u bitnami -p with the password in the PuTTY command line and get ERROR 1045 (28000): Access denied for user 'bitnami'@'localhost' (using password: YES). I have tried all the passwords I have used with this and am getting nowhere; not sure what I am missing. Can anyone help, please?
Where can I find the listener logs for AWS RDS Oracle databases? I can only find alert logs and audit logs in the AWS console, but not listener logs.
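The console only surfaces a subset of what the instance publishes; the API lists every available log file, so it is worth enumerating them and filtering by name. A boto3 sketch (the instance identifier is a placeholder):

```python
def matching_logs(log_files, needle="listener"):
    """Return the names of published log files whose name contains `needle`."""
    return [f["LogFileName"] for f in log_files
            if needle in f["LogFileName"].lower()]

def show_listener_logs():  # needs AWS credentials to actually run
    import boto3
    rds = boto3.client("rds")
    files = rds.describe_db_log_files(
        DBInstanceIdentifier="my-oracle-db")["DescribeDBLogFiles"]
    print(matching_logs(files))
    # Individual files can then be fetched with
    # rds.download_db_log_file_portion(...)
```

If nothing listener-related shows up, that particular log may simply not be published by RDS for Oracle, in which case a support case is the usual route.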
RDS databases have DNS names. The primary DB and read replicas have different DNS names. 1) Does my code need to have access to the read replicas, and if yes, in what way? 2) Once I have created my read replicas, is it automatic that my read throughput will be scaled out, or does one need to make other configuration changes?
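Briefly: for plain RDS read replicas, nothing is automatic — your application must connect to the replica endpoints itself for reads to be offloaded. A minimal sketch of the idea (the endpoints are placeholders; real applications usually keep one connection pool per endpoint rather than picking a hostname per query):

```python
# Placeholder endpoints, not real hosts.
PRIMARY = "mydb.cluster-abc123.us-east-1.rds.amazonaws.com"
REPLICA = "mydb-replica-1.abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(query):
    """Route read-only statements to the replica; anything that may write
    must go to the primary. Replicas are eventually consistent, so reads
    that need the very latest write should also use the primary."""
    is_read = query.lstrip().lower().startswith(("select", "show"))
    return REPLICA if is_read else PRIMARY
```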
Postgres database upgrade failed (11.6 -> 13.7) with error:
```
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 6052; 1259 720106 INDEX unq_nickname_ci dba
pg_restore: error: could not execute query: ERROR: function public.unaccent(unknown, text) does not exist
LINE 3: SELECT public.unaccent('public.unaccent', $1) -- schema-qua...
               ^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
```
After the first failed upgrade, the index and function were deleted, so they no longer exist in the catalog, but subsequent upgrades still end with the same error. Do you have any idea how to proceed?
While trying to update a table in one database using a table from another database, I'm getting an ASSERT error.
In Redshift, I'm trying to update a table using another table from another database. The error details:
```
SQL Error [XX000]: ERROR: Assert
Detail:
  error: Assert
  code: 1000
  context: scan->m_src_id == table_id
  query: 17277564
  location: xen_execute.cpp:5251
  process: padbmaster [pid=30866]
```
The context is not helpful. I have used a similar join-based approach for other tables, and there the update statement has been working fine. Update syntax used:
```
UPDATE ods.schema.tablename
SET "TimeStamp" = GETDATE(),
    "col" = S."col"
FROM ods.schema.tablename T
INNER JOIN stg.schema.tablename S
    ON T.Col = S.Col;
```
Is there a Boto3 Python script available that gives the date and time when a table in Amazon Redshift was last written to (INSERT, UPDATE, or DELETE)? I just need the date and time, not the content that was written.
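There is no single Boto3 call for this, but you can combine the Redshift Data API with the STL system tables. A sketch under some assumptions: `stl_insert` records inserts (Redshift implements UPDATE as delete + insert, so inserts cover updates too), STL tables only retain a few days of history, and the cluster/database/user names below are placeholders:

```python
def last_write_sql(schema, table):
    """SQL for the most recent time `schema.table` received an insert
    (which in Redshift also covers UPDATE)."""
    return (
        "SELECT MAX(q.endtime) AS last_write "
        "FROM stl_query q "
        "JOIN stl_insert i ON i.query = q.query "
        "JOIN svv_table_info t ON t.table_id = i.tbl "
        f"WHERE t.\"schema\" = '{schema}' AND t.\"table\" = '{table}';"
    )

def run(sql):  # needs AWS credentials to actually run
    import boto3
    client = boto3.client("redshift-data")
    return client.execute_statement(
        ClusterIdentifier="my-cluster", Database="dev",
        DbUser="awsuser", Sql=sql)
```

For history older than the STL retention window you would need to have been exporting audit logs already.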
The other day I went to apply the update to 5.7. It failed due to some things I missed. The log was:
```
1) MySQL 5.7 preupgrade check to catch orphan table issues
   For the following tables, either the datadir directory or frm file was removed or corrupted.
   More Information: https://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html
   [table_schema, table_name] DB_NAME,
   - Check the server logs or examine datadir to determine the issue and correct it before upgrading.
2) The DB instance must have enough space to rebuild the largest table that uses an old temporal data format.
   More Information: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html#USER_UpgradeDBInstance.MySQL.DateTime57
   You must rebuild tables that use an old temporal data format; it might take an extended period of time.
   - Scale storage so that there is enough space for the largest table that uses an old temporal data format, and make sure the storage type is gp2 or io1, if possible.
3) Check for heavy load or a high number of write operations on instance before upgrade
   * History list length
     No issues found.
   * Insert buffer size
     No issues found.
Errors: 1
Warnings: 1
Database Objects Affected: 2
----------------------- END OF LOG ----------------------
```
That day I did what should have been done. I dropped all the orphan tables (there was only one) and updated all the tables and data that had the old date format. I then moved the maintenance window to the next morning so the upgrade would be applied automatically, since, and this is the most annoying part, because I manually applied the recommendation and it failed, it now shows the recommendation as applied and I can no longer just click Apply in the console. Anyway, it didn't update the next morning.
The compatibility log was last written the first time I ran the check manually; there has been no update to it since. I then changed the maintenance window again to the next morning (this morning, the 15th of March). I got to work this morning and checked: it still hasn't been updated, and by the looks of the log, it hasn't even been attempted again. That said, I checked a few more things and realized I had forgotten about storage space (I know it says it in the log). I only had 18 GB left, so I have just increased it so there is enough space for the rebuild. I have again changed the maintenance window to tomorrow morning, so hopefully it will update, though I'm not very hopeful since the logs haven't been touched on the other attempts. Does anyone have any ideas on how I can manually update it if it doesn't work again? Or how I can get the recommendation back so I can use it?
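You don't have to wait for the maintenance window or the recommendation: a major version upgrade can be triggered directly by modifying the instance's engine version. A hedged boto3 sketch (instance identifier and target version are placeholders; check the exact available version with `describe_db_engine_versions` first, and take a manual snapshot before running it):

```python
def upgrade_kwargs(instance_id, target_version, immediately=True):
    """Arguments for modify_db_instance to request a major version upgrade."""
    return {
        "DBInstanceIdentifier": instance_id,
        "EngineVersion": target_version,
        "AllowMajorVersionUpgrade": True,   # required for e.g. 5.6 -> 5.7
        "ApplyImmediately": immediately,    # False = wait for the window
    }

def do_upgrade():  # needs AWS credentials to actually run
    import boto3
    rds = boto3.client("rds")
    rds.modify_db_instance(**upgrade_kwargs("mydb", "5.7.44"))
```

If the same precheck failures recur, the upgrade will fail again the same way, but the PrePatchCompatibility log should at least be rewritten so you can see the current state.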