Questions tagged with PostgreSQL
Hi! First question on this forum and it may seem trivial, but I'm trying to get my head around seamless reconnection of applications and interfaces after Point in Time Recovery. PITR of Aurora instances and DynamoDB tables always results in a new instance of either being created, and these could potentially be hosting connections from many targets. My experience (more than two decades) is with the likes of Oracle and SQL Server, where restores/recoveries happen in situ, so there is no need to repoint applications. How are people handling the change of database target if they have to perform such a recovery?
(We are thinking of going with RDS rather than Aurora because we can understand the process there, but this seems like a poor reason to choose one over the other.)
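To make the scenario concrete, here is roughly the restore call I mean (a boto3 sketch; cluster identifiers are placeholders). It always produces a new cluster with its own endpoint, which is exactly the repointing problem I'm asking about:

```
import boto3

rds = boto3.client("rds")

# Point-in-time restore of an Aurora cluster always creates a brand-new
# cluster with a new identifier (names below are placeholders).
response = rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="prod-aurora-cluster",
    DBClusterIdentifier="prod-aurora-cluster-restored",  # new cluster, new endpoint
    RestoreType="full-copy",
    UseLatestRestorableTime=True,
)

# The writer endpoint of the restored cluster differs from the original,
# so every consumer would need to be repointed at it.
print(response["DBCluster"]["Endpoint"])
```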
Thanks,
John
Hi,
I have a 13.4 DB with an app working fine against it. How can I test it on 14.4? What does AWS recommend? Should I create another cluster on 14.4 and test the app against that version?
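For example, is the recommended path something like the following sketch (boto3; cluster and snapshot identifiers are placeholders): restore a throwaway copy from a snapshot, upgrade only that copy to 14.4, and point the app at it for testing?

```
import boto3

rds = boto3.client("rds")

# 1) Restore a throwaway copy of the cluster from a snapshot
#    (snapshot and cluster identifiers are placeholders).
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="myapp-upgrade-test",
    SnapshotIdentifier="myapp-prod-snapshot",
    Engine="aurora-postgresql",
)
# (a DB instance would still need to be added to the restored cluster
#  with create_db_instance before connecting to it)

# 2) Upgrade only the test copy in place to 14.4 and run the app against it.
rds.modify_db_cluster(
    DBClusterIdentifier="myapp-upgrade-test",
    EngineVersion="14.4",
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,
)
```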
We are experiencing sudden spikes in read/write latency and queue depth on a production PostgreSQL instance.
There is no increase in connections, workload, web-layer traffic, cron jobs, etc.
We saw the physical device read/write IOs plummet in Enhanced Monitoring, so we suspect a degradation issue with the volume.
In our web app's APM monitoring, we found that things return to normal roughly every 5 minutes.
Can someone from AWS please look into this ASAP?
Giving an example to explain my scenario -
FileA has 100 attributes. FileB has 300 attributes. The attributes in Files A and B can come from multiple tables. These files are present in S3. We are going to import Files A and B into RDS Postgres. This Postgres instance will then have
Table A --> generated from File A (100 attributes) and
Table B --> generated from File B (300 attributes)
We now want to split Tables A & B into multiple smaller tables using AWS DMS.
Source Postgres (Tables A & B) --> AWS DMS --> Target Postgres (Tables C, D, E, F)
Is this possible?
Any advice on how to implement this scenario would be much appreciated.
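For reference, this is roughly the kind of DMS table-mapping rules I imagine would be involved (just a sketch; schema, table, and column names are placeholders): a selection rule for the source table, a rename to the target table name, and remove-column transformations for the attributes that don't belong in that target. My understanding is that something like this would be needed per target table.

```
import json

# Sketch of DMS table-mapping rules: select source Table A, rename it to
# Table C on the target, and strip a column that should live elsewhere.
# (schema/table/column names are placeholders)
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "select-table-a",
            "object-locator": {"schema-name": "public", "table-name": "table_a"},
            "rule-action": "include",
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "rename-to-table-c",
            "rule-target": "table",
            "object-locator": {"schema-name": "public", "table-name": "table_a"},
            "rule-action": "rename",
            "value": "table_c",
        },
        {
            "rule-type": "transformation",
            "rule-id": "3",
            "rule-name": "drop-column-not-needed-in-c",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "public",
                "table-name": "table_a",
                "column-name": "attr_not_needed_in_c",
            },
            "rule-action": "remove-column",
        },
    ]
}

print(json.dumps(table_mappings, indent=2))
```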
Hi there, is there a way to migrate an Aurora-based Postgres DB to an RDS Postgres instance? I have searched the AWS docs but couldn't find anything. Any pointer will be greatly appreciated. Thanks
Hello
I'm currently using DMS and got the following message:
2022-10-30T09:40:03 [SOURCE_CAPTURE ]E: Unable to get table definition for table '0' [1021802] (oracle_endpoint_capture.c:759)
and found the following info
https://community.qlik.com/t5/Official-Support-Articles/SOURCE-CAPTURE-E-Unable-to-get-table-definition-for-table-0/ta-p/1809603
I was looking into enabling source capture for DDL commands in SOURCE_CAPTURE.
Specifically, I'm looking for a way to enable tracing only for DDL commands (traceDdlChanges?) so I can dump the exact command that failed.
Activating a full debug trace for SOURCE_CAPTURE is not an option at all; it is far too verbose for an event that may happen only very occasionally.
I couldn't find any further information in:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.Logging.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.DDLHandling.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ErrorHandling.html
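The closest I can see in those settings is raising the severity of just the SOURCE_CAPTURE component in the task's logging settings, something like this sketch (boto3; the task ARN is a placeholder), but that is still far more than DDL-only tracing, which is exactly what I'm trying to avoid:

```
import json
import boto3

dms = boto3.client("dms")

# Raise logging severity only for the SOURCE_CAPTURE component of the task
# (the task ARN is a placeholder). This is still broader than "DDL only".
settings = {
    "Logging": {
        "EnableLogging": True,
        "LogComponents": [
            {"Id": "SOURCE_CAPTURE", "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"}
        ],
    }
}

dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:eu-west-1:123456789012:task:EXAMPLE",
    ReplicationTaskSettings=json.dumps(settings),
)
```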
Thanks
I have a stack configured in Cloudformation with RDS (Postgres) on a private subnet, to be accessed from Elastic Beanstalk which has ec2 instances on the public subnet.
I'd like to use [sqitch](https://sqitch.org) to manage my database migrations (updates). I have a GitHub repository specifically for my Postgres database, and each time there is a commit to the main branch I'd like a GitHub Action to deploy the migration to my RDS instance using the `sqitch` command.
How can I do this with RDS on the private subnet? Is there some way I could use `eb ssh` in a GitHub Action to create an ssh tunnel to RDS, such that `sqitch` can connect directly from GitHub to the DBMS on RDS and deploy the migration?
Should I set up a bastion host? Or should I have a GitHub Action that somehow creates an ephemeral EC2 instance to retrieve my database repository and deploy the migration to RDS on the private subnet?
Or are there alternatives I haven't thought of?
Apologies if I've mixed up some AWS/Cloudformation terminology, I'm pretty new to this.
I need to convert a timestamp into MM/DD/YY format:
2020-01-31 01:14:00.000 must be converted to 01/31/20.
The timestamp is stored in a table and I need to display it as MM/DD/YY.
Here is what I am trying to use:
`to_date(cast(timestamp_column as varchar), 'DDMMYY')` - it returns a completely different date.
Can anyone please help me out here ASAP?
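For context, here is roughly how I'm querying it (Python/psycopg2; connection details, table, and column names are placeholders). I suspect `to_char` is what I should be using instead of `to_date`, but I'd like confirmation:

```
import psycopg2

# Connection details, table, and column names are placeholders.
conn = psycopg2.connect(
    host="mydb.example.amazonaws.com",
    dbname="mydb",
    user="myuser",
    password="mypassword",
)

with conn, conn.cursor() as cur:
    # What I believe I need: format the stored timestamp for display,
    # e.g. 2020-01-31 01:14:00.000 -> 01/31/20
    cur.execute("SELECT to_char(timestamp_column, 'MM/DD/YY') FROM my_table")
    for (formatted,) in cur.fetchall():
        print(formatted)
```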
Good morning. I'm starting out with the AWS Elastic Beanstalk service, and I'd like to follow the architecture outlined in the LAMP documentation, except that I want to work with PostgreSQL rather than MySQL. Is it possible to replace MySQL with Postgres? I've already gone through the whole process, but the connections between the EC2 instances and RDS don't work for me.
Hello!
I recently found 3 RDS instances of mine with the Multi-AZ flag unexpectedly set to YES, even though I left Multi-AZ at NO (which is the default!) when I created them by restoring from snapshots using the AWS console.
Does anyone get the same issue?
I already contacted AWS support and they're investigating - they also told me that using CLI or "Restore to point in time" is ok instead.
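In the meantime, my understanding of the suggested workaround is to restore via the API/CLI and pass the Multi-AZ flag explicitly; a boto3 sketch (instance and snapshot identifiers are placeholders):

```
import boto3

rds = boto3.client("rds")

# Restore via the API instead of the console, setting MultiAZ explicitly
# (instance and snapshot identifiers are placeholders).
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-restored",
    DBSnapshotIdentifier="mydb-snapshot",
    MultiAZ=False,
)
```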
I have an active PostgreSQL DB in Lightsail. I have found that I should be able to copy it to a new database in order to have a consistent database for use in a different environment.
I have followed the steps to restore a snapshot to a new database. It ended with this alert:
CreateRelationalDatabaseFromSnapshot[eu-central-1]
The restoreTime must be on or before the latestRestorableTime for the specified source database.
InvalidParams
This happens even though the restore time is 10 minutes, or even a full day, before the current time.
I tried it once again, and it now ends with a different error:
CreateRelationalDatabaseFromSnapshot[eu-central-1]
Some names are already in use: (NEW NAME I HAVE USED FOR DB)
NameExists
And I do not see the DB in my list of databases even after 2 hours of waiting.
Has anyone seen the same issue before? What can we do to copy a DB using Lightsail services?
We have also tried taking a manual snapshot and then creating a new database from it - that does not work either; the error is the same.
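For reference, this is essentially what we are attempting, sketched with boto3 (database and snapshot names are placeholders): once as a point-in-time copy of the source database and once from the manual snapshot:

```
import boto3

lightsail = boto3.client("lightsail")

# Attempt 1: point-in-time copy of the existing database
# (database names are placeholders).
lightsail.create_relational_database_from_snapshot(
    relationalDatabaseName="mydb-copy",
    sourceRelationalDatabaseName="mydb",
    useLatestRestorableTime=True,
)

# Attempt 2: create the copy from a manual snapshot instead.
lightsail.create_relational_database_from_snapshot(
    relationalDatabaseName="mydb-copy-2",
    relationalDatabaseSnapshotName="mydb-manual-snapshot",
)
```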
_tl;dr;_ I want to create a separate DB user account for each tenant in a SaaS, to support multi-tenant setup for PostgreSQL db using Row Level Security (RLS). It seems this isn't possible or practical with RDS Proxy because the SDK doesn't allow for easy management of secrets / credentials associated with RDS Proxy. What am I missing? How can I achieve a multi-tenant RLS setup with RDS Proxy and PostgreSQL RLS?
I'm trying to create a SaaS with a multi-tenant DB setup. RDS Aurora Postgres. **Each tenant in the database === a DB account** (see: https://aws.amazon.com/blogs/database/multi-tenant-data-isolation-with-postgresql-row-level-security/).
This was going fairly well when I was in the PoC stage, because I ignorantly put off storing DB secrets in Secrets Manager and just had a few sample accounts set up to test things out.
That said, I've recently realized that with RDS Proxy you actually need to add each database credential to the proxy in order to be able to use that credential through the proxy... and that's not something that happens instantly - it can take an unknown amount of time for RDS Proxy to be updated - and frankly I'm not sure how well this would scale to adding potentially hundreds or even thousands of credentials to RDS Proxy.
I had hoped / thought _maybe_ that using the "IAM Authentication" would solve the issue, but although it doesn't seem super well documented / clear (at least not through the AWS console), I _think_ IAM Authentication doesn't do anything for us unless we're using SQL server:
> IAM Authentication. Choose whether to require, allow, or disallow IAM authentication for connections to your proxy. **The allow option is only valid for proxies for RDS for SQL Server**. The choice of IAM authentication or native database authentication applies to all DB users that access this proxy.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy-setup.html
If I'm misunderstanding something here I'd love to know, and would really appreciate any advice. I feel like I'm fighting a losing battle with my current approach and would love to know if there is something I'm missing that would salvage things!
If not, then I'm left to either:
1. Figure out how to programmatically add secrets / users to the DB Proxy - I think https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-rds/interfaces/modifydbproxyrequest.html#auth is perhaps _a_ mechanism I could use (see the sketch after this list), but again it doesn't feel like it was really built for this - each time a user registers, it looks like I'd have to update the entire proxy's auth list; I can't "just" add a single user.
2. Switch away from the "each user in the SaaS has a separate DB user" approach to something else, essentially putting the onus of security back on the application layer (which was my entire goal of using RLS originally).
3. ??
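For option 1, this is roughly what I think the call would look like (a boto3 sketch; the proxy name and secret ARNs are placeholders). As far as I can tell, `Auth` replaces the proxy's entire credential list, so I'd be re-sending every tenant's secret each time a new tenant registers:

```
import boto3

rds = boto3.client("rds")

# Sketch of option 1: re-sending the proxy's full Auth list with one more
# tenant secret appended (proxy name and secret ARNs are placeholders).
existing_auth = [
    {"AuthScheme": "SECRETS", "SecretArn": "arn:aws:secretsmanager:...:secret:tenant-a", "IAMAuth": "DISABLED"},
    {"AuthScheme": "SECRETS", "SecretArn": "arn:aws:secretsmanager:...:secret:tenant-b", "IAMAuth": "DISABLED"},
]
new_tenant_secret_arn = "arn:aws:secretsmanager:...:secret:tenant-c"

rds.modify_db_proxy(
    DBProxyName="my-rds-proxy",
    Auth=existing_auth
    + [{"AuthScheme": "SECRETS", "SecretArn": new_tenant_secret_arn, "IAMAuth": "DISABLED"}],
)
```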
Note that [the AWS documentation on RDS Proxy and adding database users](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy-managing.html#rds-proxy-new-db-user) of course says that you can certainly add DB users - this I know. **The issue is adding users at scale, dynamically, via the SDK** - it just doesn't feel like RDS Proxy is designed for this (for understandable reasons, I might add; I realize there is probably a fair amount of complexity hidden inside RDS Proxy).