Questions tagged with PostgreSQL


Hello, if I have RDS instances of Postgres running versions 9.5 and 9.6, what do I need to do to trigger an upgrade to a supported version that is not deprecated? Auto minor version upgrade is enabled. I know these versions are no longer supported and, honestly, I am surprised they haven't been automatically upgraded to a supported version by now.
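Worth noting: auto minor version upgrade only moves between minor releases and never crosses a major version boundary, so a major upgrade has to be requested explicitly. A minimal sketch of building the arguments for boto3's `rds.modify_db_instance` (the instance identifier and target version below are hypothetical, not taken from the question):

```python
def major_upgrade_args(instance_id: str, target_version: str, apply_now: bool = False) -> dict:
    """Keyword arguments for boto3's rds.modify_db_instance to request a major upgrade."""
    return {
        "DBInstanceIdentifier": instance_id,
        "EngineVersion": target_version,
        # Required when crossing a major version boundary:
        "AllowMajorVersionUpgrade": True,
        # False defers the upgrade to the next maintenance window:
        "ApplyImmediately": apply_now,
    }

# e.g. boto3.client("rds").modify_db_instance(**major_upgrade_args("mydb", "12.14"))
```

Not every target is reachable in one hop from 9.5/9.6; `aws rds describe-db-engine-versions --engine postgres --engine-version <current>` lists the valid upgrade targets for your current minor.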
1
answers
0
votes
20
views
asked a month ago
Dear Friends, I need to copy/replicate a specific table from **RDS (Postgres)** to **another AWS account's S3** to build **time-series** data for analytics or a data lake. I am looking for an option that is performant for millions of records without doing a pg export to CSV. Thanks in advance.
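One documented route is the RDS `aws_s3` extension, which exports query results to S3 server-side (no client-side dump). A sketch that builds the SQL, assuming `CREATE EXTENSION aws_s3 CASCADE;` has been run and the instance's IAM role is allowed to write to the cross-account bucket (table, bucket, and prefix names below are hypothetical):

```python
def export_to_s3_sql(table: str, bucket: str, prefix: str, region: str) -> str:
    """SQL for the aws_s3 extension: server-side export of a table to S3."""
    return (
        "SELECT aws_s3.query_export_to_s3("
        f"'SELECT * FROM {table}', "
        f"aws_commons.create_s3_uri('{bucket}', '{prefix}', '{region}'));"
    )

# e.g. run export_to_s3_sql("events", "lake-bucket", "raw/events", "us-east-1") via psql
```

For a continuously updated time-series feed (rather than one-off copies), DMS with an S3 target endpoint and CDC enabled is the other common option.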
0
answers
0
votes
9
views
asked a month ago
Hi, I have a PostgreSQL RDS instance (subscriber) that lives in the same AZ and VPC as an EC2 machine running PostgreSQL (publisher). About a month ago I established a logical replication subscription against the publisher, using a CNAME DNS record that points to the EC2 DNS name. The publisher is restricted to allow access only from private IPs in the VPC, and this worked until three days ago, when it suddenly stopped. I find no events that indicate what happened, but the cause is clear: the DNS name is suddenly resolving to the public IP at the RDS instance and is denied access by the publisher. I fail to understand how this can happen; it also seems to contradict the documentation. That is, I expect the DNS lookup to reliably resolve to the private IP. Any ideas what can explain this behaviour, and whether my expectations are right or wrong?
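Whether an EC2 public DNS name resolves privately or publicly depends on where the lookup happens, so a first diagnostic step is to check what the subscriber's side of the VPC actually sees. A small sketch (the hostname you pass would be your CNAME; names here are illustrative):

```python
import ipaddress
import socket

def resolve_and_classify(hostname: str) -> dict:
    """Map each IPv4 A record of `hostname` to True if it is a private (VPC-internal) address."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return {info[4][0]: ipaddress.ip_address(info[4][0]).is_private for info in infos}

# e.g. resolve_and_classify("publisher.example.internal") -> {"172.31.5.9": True}
```

Running this from an EC2 host in the same VPC as the RDS instance shows whether the resolver itself has started returning the public address, which narrows the problem to DNS rather than security groups.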
0
answers
2
votes
54
views
bjornha
asked a month ago
I am trying to find the latest DMS engine version number. Here is what I have found: "When you launch a new replication instance, it runs the latest AWS DMS engine version unless you specify otherwise." But this does not mention what the latest version number is. *Using a PostgreSQL database as an AWS DMS source* says: PostgreSQL source version 14.x requires AWS DMS version 3.4.7 and above. This at least confirms that a version 3.4.7 exists, but not that it is the latest. Today I created a DMS replication instance with DMS endpoints. The replication instance shows engine version 3.4.6, and I did not specify the engine version. But it seems there must be a version 3.4.7, because that is required when your source DB is PostgreSQL version 14.x. Where can I find the current, latest DMS engine version number?
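There is no single "latest" label in the console, but `aws dms describe-orderable-replication-instances` returns the engine versions you can launch in your region. Given that list of dotted version strings, picking the highest one is a small helper (the sample versions below are illustrative):

```python
def latest_engine_version(versions: list[str]) -> str:
    """Pick the numerically highest dotted version string (so '3.4.10' beats '3.4.9')."""
    return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))
```

Sorting as tuples of integers avoids the classic string-comparison trap where `"3.4.10" < "3.4.9"` lexicographically.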
4
answers
0
votes
64
views
asked a month ago
Hi, I recently migrated an EC2 Postgres database to RDS and I'm facing a problem with one table that has a sequence. When trying to insert, I'm getting a permissions error. I have already granted all permissions to the schema owner and to the user attempting the insert, but I still face the same error. Please advise how I can solve it; screenshots attached. ![table](/media/postImages/original/IMKXtxSJVaQry8xqfn3iFu3Q) ![insert and permissions](/media/postImages/original/IMhbLDGeaGTZ-4IeZ5gb6mng)
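One common cause after a migration: sequences carry their own ACLs separate from the table, so table-level grants are not enough for an INSERT whose default calls `nextval()`. A sketch of the typical grants (sequence, schema, and role names below are hypothetical):

```python
def sequence_grants(sequence: str, role: str, schema: str = "public") -> list[str]:
    """GRANTs a role typically needs to INSERT into a table whose default uses a sequence."""
    return [
        # Targeted grant for one sequence:
        f"GRANT USAGE, SELECT ON SEQUENCE {schema}.{sequence} TO {role};",
        # Blanket form covering every sequence in the schema:
        f"GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA {schema} TO {role};",
    ]
```

If the error persists after these grants, checking the sequence's owner (`\ds+` in psql) is the next step, since `pg_dump`/restore can leave sequences owned by a different role than expected.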
2
answers
0
votes
43
views
asked a month ago
I am trying to migrate data from an on-prem SQL Server DB to Aurora PostgreSQL using AWS DMS. The data migrates just fine, but tables that have a boolean column in the primary key fail during validation with the following error: ``` [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. ``` Upon checking the logs on the Postgres side, I find this: `ERROR: invalid input syntax for type boolean: "" at character <number>` `STATEMENT: SELECT cast ("pcode" as varchar(6)) , "other_columns" , "boolean_column" FROM "db"."table" WHERE ((("boolean_column" = ' ' AND "pcode" > 'L7L3V9') AND ("boolean_column" = ' ' AND "pcode" <= 'L8L4E8'))) ORDER BY "boolean_column" ASC , "pcode" ASC ` During validation, DMS fetches the records from Postgres in batches, and for each batch it uses the wrong value for the "boolean_column" (comparing it to ' ', a blank string). I am not sure why it does this, or how to influence this behaviour so that validation succeeds. The boolean column is:

- a bit field on the SQL Server side
- on the Postgres side, I have tried converting it to (both with the same validation error as above):
  - numeric with precision 1
  - boolean
0
answers
0
votes
26
views
asked a month ago
Hello friends! I have a big problem: I cannot restrict, per user, what Power BI reads from RDS when using the solution Amazon provides at this [link](https://docs.aws.amazon.com/pt_br/whitepapers/latest/using-power-bi-with-aws-cloud/connecting-the-microsoft-power-bi-service-to-aws-data-sources.html). According to the documentation, for Power BI to access RDS data in a VPC, it is necessary to install the [Microsoft gateway](https://powerbi.microsoft.com/pt-br/gateway/) on an EC2 machine in the VPC. The gateway then establishes an outbound connection to Microsoft Azure Service Bus over the Internet and is configured in Microsoft Power BI to connect to the data sources it can reach. With this gateway-to-RDS communication, I am unable to limit which tables Power BI can access depending on the source user. Has anyone experienced this and found a solution? Thanks!
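Since the gateway forwards a single configured database credential, per-user restriction generally has to happen inside Postgres: one least-privilege role per Power BI data source, granted only the allowed tables. A sketch generating such role DDL (role, schema, and table names below are hypothetical):

```python
def reporting_role_sql(role: str, tables: list[str], schema: str = "public") -> list[str]:
    """DDL for a least-privilege BI role that can SELECT only the listed tables."""
    stmts = [
        f"CREATE ROLE {role} LOGIN PASSWORD 'change-me';",  # placeholder password
        f"GRANT USAGE ON SCHEMA {schema} TO {role};",
    ]
    stmts += [f"GRANT SELECT ON {schema}.{t} TO {role};" for t in tables]
    return stmts
```

Each Power BI data source is then configured in the gateway with its own role's credentials, so the visible tables differ per source.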
0
answers
0
votes
20
views
Felipe
asked a month ago
I have an RDS (non-Aurora) postgres primary (multi-AZ) with associated readonly replica. We were seeing queries canceled on the readonly due to recovery conflicts, so I changed hot_standby_feedback=1 in the parameter group for the readonly replica. This resolved some conflicts, meaning we are seeing fewer of them, but it hasn't resolved them all. Today I saw this in the readonly replica's log: ERROR: canceling statement due to conflict with recovery DETAIL: User query might have needed to see row versions that must be removed. According to this document I found in the knowledge center (https://aws.amazon.com/premiumsupport/knowledge-center/rds-postgresql-error-conflict-recovery/), the solution for this is to set hot_standby_feedback=1 (true). This is also backed by postgres documentation, which states this: "Remedial possibilities exist if the number of standby-query cancellations is found to be unacceptable. The first option is to set the parameter hot_standby_feedback, which prevents VACUUM from removing recently-dead rows and so cleanup conflicts do not occur. If you do this, you should note that this will delay cleanup of dead rows on the primary, which may result in undesirable table bloat. However, the cleanup situation will be no worse than if the standby queries were running directly on the primary server, and you are still getting the benefit of off-loading execution onto the standby. If standby servers connect and disconnect frequently, you might want to make adjustments to handle the period when hot_standby_feedback feedback is not being provided. For example, consider increasing max_standby_archive_delay so that queries are not rapidly canceled by conflicts in WAL archive files during disconnected periods. You should also consider increasing max_standby_streaming_delay to avoid rapid cancellations by newly-arrived streaming WAL entries after reconnection." 
(see https://www.postgresql.org/docs/12/hot-standby.html#HOT-STANDBY-CONFLICT) I thought that by setting hot_standby_feedback=1 we should not encounter any more conflicts due to "User query might have needed to see row versions that must be removed". Why are we still seeing them? Has anyone else encountered this problem? Any thoughts on how to correct it?
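Following the quoted docs, the usual next lever is raising `max_standby_streaming_delay` / `max_standby_archive_delay` on the replica's parameter group so long-running queries get more time before cancellation. A sketch of the arguments for boto3's `rds.modify_db_parameter_group` (the group name and 60-second value below are illustrative, not recommendations):

```python
def standby_delay_params(group: str, delay_ms: int = 60_000) -> dict:
    """Arguments for rds.modify_db_parameter_group to tolerate longer standby queries."""
    return {
        "DBParameterGroupName": group,
        "Parameters": [
            {
                "ParameterName": name,
                "ParameterValue": str(delay_ms),
                "ApplyMethod": "immediate",  # both parameters are dynamic
            }
            for name in ("max_standby_streaming_delay", "max_standby_archive_delay")
        ],
    }

# e.g. boto3.client("rds").modify_db_parameter_group(**standby_delay_params("ro-replica-params"))
```

The trade-off is replication lag: a query allowed to run 60 s longer can hold back replay by up to that long.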
2
answers
0
votes
30
views
KrisO
asked 2 months ago
From January 31, 2024, Amazon Aurora PostgreSQL-Compatible Edition will no longer support major version 11.x. Newer major versions are available. Per the Aurora Version Policy [1], we are providing 12 months' notice to give you time to upgrade your database cluster(s). We recommend that you proactively upgrade your databases running Amazon Aurora PostgreSQL major version 11.x to Amazon Aurora PostgreSQL 12 or higher at your convenience before January 31, 2024. If you do not upgrade before then, Amazon Aurora will upgrade your Aurora PostgreSQL 11.x databases to the appropriate major version during a scheduled maintenance window on or after January 31, 2024.

***How to Determine Which Instances are Running Aurora PostgreSQL 11.x***

In the Amazon RDS console, you can see details about a database cluster, including the Aurora PostgreSQL version of instances in the cluster, by choosing Databases from the console's navigation pane. To view DB cluster information using the AWS CLI, use the describe-db-clusters command. To view DB cluster information using the Amazon RDS API, use the DescribeDBClusters operation. [2] You can also query a database directly for the version number via the aurora_version() system function, i.e., "SELECT * FROM aurora_version();".

***How to Upgrade to a New Major Version***

You can initiate an upgrade of your database instance, either immediately or during your next maintenance window, to a newer major version of Amazon Aurora PostgreSQL using the AWS Management Console or the AWS Command Line Interface (CLI). The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance; the instance may be restarted multiple times during the process.

While major version upgrades typically complete within the standard maintenance window, the duration of the upgrade depends on the number of objects within the database. To avoid any unplanned unavailability outside your maintenance window, we recommend that you first take a snapshot [3] or a fast database clone [4] of your database and test the upgrade to get an estimate of the duration. To learn more about upgrading PostgreSQL major versions in Aurora, review the Upgrading Database Versions page [5].

**Please be aware of the following timeline:**

* Now through January 31, 2024 - You can initiate upgrades of Amazon Aurora PostgreSQL 11.x instances to Amazon Aurora PostgreSQL 12 or higher at any time.
* Starting August 1, 2023, you will no longer be able to create new Aurora clusters or instances with Aurora PostgreSQL major version 11.x from either the AWS Console or the CLI. You can still add read replicas to existing Aurora PostgreSQL 11.x clusters and continue to apply changes to existing 11.x instances, such as migrating to a Graviton2 R6g instance or changing instance configuration, until January 31, 2024.
* Starting January 31, 2024, Amazon Aurora will upgrade your Aurora PostgreSQL 11.x databases to the appropriate major version during a scheduled maintenance window on or after that date. Restoration of Aurora PostgreSQL 11.x database snapshots will result in an automatic upgrade of the restored database to a version supported at that time.

If you have any questions or concerns, the AWS Support Team is available on AWS re:Post and via Premium Support [6].

[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.VersionPolicy.html
[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/accessing-monitoring.html
[3] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_CreateSnapshotCluster.html
[4] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html
[5] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Updates.html
[6] http://aws.amazon.com/support
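For fleets with many clusters, the `describe-db-clusters` output mentioned above can be filtered programmatically. A sketch assuming the documented response shape (each cluster entry carries `Engine`, `EngineVersion`, and `DBClusterIdentifier`):

```python
def clusters_on_v11(clusters: list[dict]) -> list[str]:
    """Identifiers of Aurora PostgreSQL clusters still on major version 11."""
    return [
        c["DBClusterIdentifier"]
        for c in clusters
        if c.get("Engine") == "aurora-postgresql"
        and c.get("EngineVersion", "").split(".")[0] == "11"
    ]

# e.g. clusters_on_v11(boto3.client("rds").describe_db_clusters()["DBClusters"])
```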
0
answers
0
votes
592
views
AWS
EXPERT
asked 2 months ago
On the "modify" page, the "DB engine version" list only shows the existing version; I can't select a version to upgrade to. https://stackoverflow.com/q/75319524/924597
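The modify page only offers versions RDS considers valid upgrade targets from the current minor, so an empty list usually means the running minor has no direct targets (upgrading the minor first often unlocks them). A helper over one entry of the `DescribeDBEngineVersions` response, whose `ValidUpgradeTarget` items carry `EngineVersion` and `IsMajorVersionUpgrade`:

```python
def upgrade_targets(engine_version_desc: dict, major_only: bool = False) -> list[str]:
    """Extract valid upgrade targets from one DescribeDBEngineVersions entry."""
    return [
        t["EngineVersion"]
        for t in engine_version_desc.get("ValidUpgradeTarget", [])
        if not major_only or t.get("IsMajorVersionUpgrade")
    ]
```

The equivalent CLI check: `aws rds describe-db-engine-versions --engine postgres --engine-version <current> --query "DBEngineVersions[].ValidUpgradeTarget"`.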
0
answers
0
votes
70
views
Shorn
asked 2 months ago
Hello, has anyone figured out a faster way of doing a Postgres 14 column type change in RDS? We currently have a table that is running out of int4 space, and multiple columns need to be converted to int8. The table also has quite a few foreign keys pointing to it and a lot of indexes. On top of this, the database cannot be taken down for maintenance for a long period of time. I tried doing the type conversion on a warmed-up copy of the database, but it takes almost 3 hours, which is way too much. Things already considered:

- take a snapshot, restore with a huge amount of IOPS, do the conversion, reduce IOPS (won't work, as a freshly restored RDS Postgres seems to be really slow with disk)
- increase current IOPS to max, do the conversion, reduce IOPS later (would cost a ton of money and probably won't work, as volume optimization reduces performance, etc.)
- export and re-import the table data (might be really fast, but then I would need to drop and re-create the indexes, otherwise the import would be slow)

Any other ideas? Are there some Postgres settings or tricks that could make the process faster?
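A common low-downtime alternative to a blocking `ALTER TABLE ... TYPE` is: add a bigint shadow column, backfill it in small batches, then swap in a single short transaction. A sketch generating that statement skeleton (table/column names hypothetical; foreign keys and indexes on the old column must still be re-pointed during the cutover):

```python
def widen_column_sql(table: str, col: str, batch: int = 10_000) -> list[str]:
    """Statement skeleton for an online int4 -> int8 migration."""
    new = f"{col}_big"
    return [
        # Metadata-only, near-instant:
        f"ALTER TABLE {table} ADD COLUMN {new} bigint;",
        # Run repeatedly until 0 rows updated; small batches keep locks short:
        f"UPDATE {table} SET {new} = {col} WHERE {new} IS NULL AND ctid IN "
        f"(SELECT ctid FROM {table} WHERE {new} IS NULL LIMIT {batch});",
        # Cutover inside one transaction (brief exclusive lock):
        f"ALTER TABLE {table} DROP COLUMN {col};",
        f"ALTER TABLE {table} RENAME COLUMN {new} TO {col};",
    ]
```

Indexes on the new column can be built with `CREATE INDEX CONCURRENTLY` before the swap, which moves most of the 3-hour cost outside any lock.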
1
answers
0
votes
41
views
asked 2 months ago
Why does it say I am using so much more space? I did a bit of searching, and it might be the temporary files. Can you tell me how to clear them? I haven't found a way to connect to RDS to view or clear those files.
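You can't browse RDS's filesystem directly, but `pg_stat_database` exposes cumulative temp-file counters per database, which shows whether temporary files are even the culprit. A minimal sketch of the query (these are standard Postgres statistics columns; temp files themselves are removed automatically when the queries that created them finish):

```python
TEMP_USAGE_SQL = """
SELECT datname,
       temp_files,                            -- number of temp files created
       pg_size_pretty(temp_bytes) AS temp_sz  -- total bytes written to temp files
FROM pg_stat_database
ORDER BY temp_bytes DESC;
"""
```

Note the counters are cumulative since the last statistics reset, so a large `temp_bytes` reflects history, not space currently held; persistent space growth is more often tables, indexes, or WAL.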
1
answers
0
votes
18
views
asked 2 months ago