Questions tagged with PostgreSQL
I created a PostgreSQL database instance and made sure to choose the free tier option. I put maybe 20 entries into one of my tables, but I got an email today saying my account has exceeded 85% of the usage limit for one or more AWS Free Tier-eligible services for the month of January.
My DMS task has been running for one week. When I examine the changes after the full load, only the **insert operations** are transferred to the target table, even though the source table contains data that has changed. Table statistics: 768 inserts, 0 updates, 0 deletes. Source: RDS PostgreSQL. Target: RDS PostgreSQL. All of our source tables have a PRIMARY KEY, and the DMS tasks run as full load + CDC. What could cause this problem, and which configurations should I check? Thanks for your help.
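When updates and deletes are missed after the full load, a common first check is whether logical decoding is actually enabled on the source. A minimal diagnostic sketch, assuming you can connect to the source RDS PostgreSQL instance with psql:

```
-- DMS CDC relies on logical decoding; wal_level must report 'logical'.
SHOW wal_level;

-- The slot used by the DMS task should exist and be active.
SELECT slot_name, plugin, active FROM pg_replication_slots;
```

On RDS, `wal_level` is controlled indirectly by setting `rds.logical_replication = 1` in the instance's parameter group (a reboot is required for it to take effect). It is also worth confirming in the DMS task settings that the task's target table preparation and CDC apply settings were not changed after the full load.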
Hi, I have some AWS Lambda functions which connect to a PostgreSQL database hosted on EC2 and an Oracle DB hosted outside of AWS. I have been facing intermittent connectivity issues when connecting to the DB hosted outside AWS. Based on some research, I also came across articles stating that connecting Lambda directly to a DB is not ideal for production workloads, since Lambda can't maintain a connection pool. There is the option of Amazon RDS Proxy to handle this, but that appears to work only for MySQL hosted on RDS. Any pointers on best practices for connecting Lambda to a relational database, both within and outside the AWS network? Regards, dbeings
Upgrading an Aurora PostgreSQL cluster from 12.12 to 14.6: why do we need to drop and recreate all replication slots? Post-upgrade, how do we ensure the replication slots resume with incremental changes only?
It appears that an Aurora PostgreSQL major version upgrade requires us to first drop all replication slots, then perform the upgrade, and then recreate the replication slots. We use logical slots for Debezium/Kafka replication in inter-process workflows. When we drop and recreate these replication slots as part of the major version upgrade, how can we ensure that replication restarts from where it left off (meaning replication resumes with incremental changes only) and does not force us to do a FULL sync? We cannot afford a FULL sync due to large table sizes.
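One way to reason about the resume point is to record each logical slot's position before dropping it, so it can be compared with the consumer's stored offsets afterwards. A hedged sketch (the slot name `debezium_slot` and the `pgoutput` plugin are illustrative; Debezium deployments may use a different plugin):

```
-- Before the upgrade: record each logical slot's flush position.
SELECT slot_name, plugin, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_type = 'logical';

-- After the upgrade: recreate the slot with the same name and plugin.
SELECT pg_create_logical_replication_slot('debezium_slot', 'pgoutput');
```

One caveat worth verifying: a newly created slot only decodes changes committed after its creation, so writes to the source tables must be quiesced between the drop and the recreate, or the changes in that window will never be streamed to the consumer.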
Following this documentation page: https://docs.aws.amazon.com/redshift/latest/dg/c_serial_isolation.html, it states: "A database snapshot is created within a transaction on the first occurrence of most SELECT statements, DML commands such as COPY, DELETE, INSERT, UPDATE". We have a requirement for Redshift snapshot isolation to also create a snapshot on RENAME commands. Is this supported? If not, how should we approach concurrent transactions that issue RENAME commands?
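If RENAME turns out not to be covered by the snapshot mechanism, one possible workaround is to serialize the competing transactions explicitly with Redshift's `LOCK` command, which blocks until conflicting transactions finish. A sketch under that assumption (the table names are illustrative):

```
-- Take an explicit lock so concurrent transactions queue up behind
-- the rename instead of racing it.
BEGIN;
LOCK TABLE staging_events;
ALTER TABLE staging_events RENAME TO events_old;
COMMIT;
```

The trade-off is reduced concurrency: every transaction that touches the table and follows this convention waits for the rename to commit, which is usually acceptable for short DDL operations.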
We have a requirement to sync data from an on-prem database to AWS RDS (PostgreSQL) at specific intervals (unlike a one-time data migration). Assume there is an Interconnect/VPN already established between the AWS and on-prem networks. The expected data volume is likely only about 1,000 rows, so I do not see the need to build an ETL pipeline with AWS Glue. Given that, what are the possible solution options to fetch the data? Can AWS Batch or a pg_cron job be considered here to execute a set of SELECT and UPDATE SQL statements? Alternatively, if AWS Lambda is a solution option for this requirement, how do we trigger it at certain intervals? Appreciate your input.
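For the pg_cron option mentioned above, a minimal sketch of what the scheduled sync could look like, assuming the `pg_cron` extension is enabled on the RDS PostgreSQL instance (it must be added to `shared_preload_libraries` via the parameter group) and the on-prem table is reachable through a `postgres_fdw` foreign table; all object names here are illustrative:

```
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Pull recently changed rows from the foreign table every 15 minutes.
SELECT cron.schedule(
  'onprem-sync',
  '*/15 * * * *',
  $$INSERT INTO local_orders
      SELECT * FROM onprem_orders_fdw src
      WHERE src.updated_at > now() - interval '15 minutes'
    ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload$$
);
```

For the Lambda alternative, the usual trigger is an Amazon EventBridge schedule (a rate or cron expression) that invokes the function at the chosen interval.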
Hi, I was going through the re:Invent video below, a deep dive on Amazon Aurora with PostgreSQL. I see mentions of "Concurrency: Remove Log buffer" and "Aurora PostgreSQL: Writing Less". Does this mean that Aurora PostgreSQL doesn't use the WAL buffer, or is there a change in the way it is used? https://www.youtube.com/watch?v=Ul-j5fKfv2k&t=334s Thanks,
1) I have created an EC2 Amazon Linux 2 instance, attached 4 extra EBS volumes (gp2 = 3 GB, gp3 = 3 GB, io1 = 4 GB, io2 = 4 GB), formatted and mounted them (/dev/xvdf at /data/db1, etc.), and installed a PostgreSQL database (v8.4.18) on the instance.
2) I created a number of sample tables in the database and filled the root EBS volume to 100%. Now, when I create more tables, I get an error that the disk is full.
3) I want to know how I can set a limit on the root volume, or arrange things so that even when the root volume is 100% full, further data is automatically stored on the other EBS volumes without increasing the size of the root volume. The new data should be stored on any of the attached volumes, or ideally be divided across all 4 EBS volumes.
4) In the PostgreSQL configuration, do I need to map the default data directory (i.e., /var/lib/postgresql/) to one of the attached EBS volumes, or how should those EBS volumes be combined with one another?
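PostgreSQL's own mechanism for directing data onto specific mount points is tablespaces, which have been available since well before 8.4. A sketch, assuming the EBS volumes are mounted at /data/db1 ... /data/db4 and owned by the postgres OS user (all names are illustrative):

```
-- One tablespace per mounted EBS volume.
CREATE TABLESPACE ebs1 LOCATION '/data/db1';
CREATE TABLESPACE ebs2 LOCATION '/data/db2';

-- Place a specific table on a chosen volume instead of the root volume.
CREATE TABLE sample_big (id int, payload text) TABLESPACE ebs1;

-- Or make one volume the default for all subsequently created objects.
ALTER DATABASE mydb SET default_tablespace = 'ebs1';
```

Note that PostgreSQL does not automatically spill onto another tablespace when one fills up, and it does not stripe a table across volumes; if you want the data divided across all four EBS volumes as a single pool, that is normally done below PostgreSQL with LVM or software RAID, with the data directory placed on the combined device.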
I had mistakenly launched a db.r6g.large PostgreSQL instance while under the Free Tier and deleted it instantly. I also deleted all the snapshots as well. Yet, I see a daily increase in the bill by a few cents. Does anyone know what to do here?
Setting a custom value works fine in Aurora. For example: ``` SET jwt.claims.email = 'firstname.lastname@example.org'; SET my.custom.setting = 'yes'; ``` In vanilla Postgres, one can also associate custom settings with a database object: ``` ALTER DATABASE mydb SET my.custom.setting = 'yes'; ``` But in Aurora, I get the response: ``` ERROR: permission denied to set parameter "my.custom.setting"; SQLState: 42501 ``` even though the current user is the database owner. Why does this disparity between standard Postgres and Aurora (and presumably RDS) exist? This breaks compatibility with a current setup that we're trying to migrate to AWS.
My database stopped responding and I rebooted it. Now it is stuck rebooting forever, and there is an error message in the log:
```
2023-01-11 15:39:16.131 GMT LOG: skipping missing configuration file "/rdsdbdata/config/recovery.conf"
2023-01-11 15:39:16 UTC::@::LOG: database system is shut down
```