Questions tagged with Aurora PostgreSQL

Hi, a framework I use requires access to pg_catalog.pg_largeobject, but the master user doesn't seem to have access to that table. `GRANT SELECT ON pg_catalog.pg_largeobject TO $MASTER_USER` also fails with "ERROR: role $MASTER_USER does not exist". Is there any way to access the table? I know there are some tables the master user can't access, such as pg_authid, but I want to confirm whether pg_largeobject is one of them. RDS: Aurora Serverless v1, PostgreSQL 10. Thanks
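As a first diagnostic, here is a minimal sketch of what could be run in a psql session as the master user. Note that psql does not expand shell-style variables, so a literal $MASTER_USER in the GRANT would produce exactly that "role does not exist" error; the role name below is a hypothetical placeholder, and whether the GRANT itself succeeds depends on what the engine allows the master user to do:

```
-- Check whether the current role can read pg_catalog.pg_largeobject
SELECT has_table_privilege(current_user, 'pg_catalog.pg_largeobject', 'SELECT');

-- The GRANT must name the role literally; "myadmin" is a hypothetical master user name
GRANT SELECT ON pg_catalog.pg_largeobject TO myadmin;
```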
1 answer · 0 votes · 26 views · amrael · asked 2 months ago
Hi, I am unable to find a good tutorial for deploying my Rails/PostgreSQL/React app onto the AWS free tier initially. I would be grateful if someone could guide me. Thanks
1 answer · 0 votes · 16 views · asked 2 months ago
Is it possible to set up a linked server connection between Aurora or RDS (PostgreSQL) and an on-prem MSSQL server using **Windows authentication** with the on-prem Active Directory (and its AD Connector on AWS)? The purpose of this setup is ongoing work, not a migration; ideally, we would like to create a single connection to the AWS database, which would in turn query the on-prem DB and fetch data from there for our query. I am familiar with this blog post: https://aws.amazon.com/blogs/database/implement-linked-servers-with-amazon-rds-for-microsoft-sql-server/ but it doesn't mention Windows authentication. Thanks!
1 answer · 0 votes · 25 views · asked 2 months ago
Hi, I was going through the re:Invent video below, a deep dive on Amazon Aurora with PostgreSQL. I see mentions of "Concurrency: Remove log buffer" and "Aurora PostgreSQL: Writing less". Does this mean that Aurora PostgreSQL doesn't use the WAL buffer, or is there a change in the way it is used? https://www.youtube.com/watch?v=Ul-j5fKfv2k&t=334s Thanks,
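For what it's worth, the engine still reports WAL-related settings that can be inspected from any session with standard PostgreSQL commands; this shows only the configured values, not how Aurora's storage layer actually uses or bypasses the buffer:

```
-- Inspect the WAL settings the engine reports (standard PostgreSQL)
SHOW wal_buffers;
SELECT name, setting, unit FROM pg_settings WHERE name LIKE 'wal%';
```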
1 answer · 0 votes · 40 views · Rakesh · asked 2 months ago
Newer versions of Amazon Aurora PostgreSQL-Compatible Edition are now available, and database clusters running Aurora PostgreSQL minor versions 11.12, 12.7, and 13.3 need to be upgraded by March 15, 2023. These newer minor versions include important updates that will improve the operations of your Aurora PostgreSQL instances and workloads. We strongly encourage you to upgrade to at least a recommended minimum minor version at your earliest convenience.

* For PostgreSQL minor version 11.12, the recommended minimum minor version is 11.17.
* For PostgreSQL minor version 12.7, the recommended minimum minor version is 12.12.
* For PostgreSQL minor version 13.3, the recommended minimum minor version is 13.8.

Starting on or after 12:00 PM PDT on March 15, 2023, if your Amazon Aurora PostgreSQL cluster has not been upgraded to a newer minor version, we will schedule the relevant recommended minimum minor version to be automatically applied during your next maintenance window. Changes will apply to your cluster during your next maintenance window even if auto minor version upgrade is disabled. Restoration of Amazon Aurora PostgreSQL 11.12, 12.7, and 13.3 database snapshots after March 15, 2023 will result in an automatic upgrade of the restored database to a version supported at that time.

How to Determine Which Instances are Running These Minor Versions

* In the Amazon RDS console, you can see details about a database cluster, including the Aurora PostgreSQL version of instances in the cluster, by choosing Databases from the console's navigation pane.
* To view DB cluster information using the AWS CLI, use the describe-db-clusters command.
* To view DB cluster information using the Amazon RDS API, use the DescribeDBClusters operation.
* You can also query a database directly for the version number with the aurora_version() system function, i.e., "SELECT * FROM aurora_version();".

How to Apply a New Minor Version

You can apply a new minor version in the AWS Management Console, via the AWS CLI, or via the RDS API. Customers using CloudFormation are advised to apply updates through CloudFormation. We advise you to take a manual snapshot before upgrading. For detailed upgrade procedures, please see the User Guide [1]. Please be aware that your cluster will experience a short period of downtime when the update is applied.

Visit the Aurora Version Policy [2] and the documentation [3] for more information and detailed release notes about minor versions, including existing supported versions. If you have any questions or concerns, the AWS Support Team is available on AWS re:Post and via Premium Support [4].

[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.VersionPolicy.html
[3] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Updates.20180305.html
[4] https://aws.amazon.com/support
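For example, the version check mentioned above can be run from any SQL client connected to the cluster; aurora_version() is the Aurora-specific function named in the announcement, and version() is standard PostgreSQL:

```
-- Report the Aurora engine version for this cluster
SELECT * FROM aurora_version();
-- The underlying PostgreSQL version string, for comparison
SELECT version();
```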
0 answers · 0 votes · 225 views · AWS · EXPERT · asked 3 months ago
I am seeing my Aurora PostgreSQL sequence values skip by 33 on a semi-consistent basis. I am aware of a thread on the PG mailing list saying that a sequence can skip when PG recovers. Also, if a large transaction rolls back, any sequence values consumed during that transaction remain at their new value and are not rolled back. I get that; it's the whole point of a sequence. But in my case nothing is happening, and boom, tomorrow morning the sequences have skipped ahead by 33. This article discusses other reasons that can cause a sequence skip: https://www.cybertec-postgresql.com/en/gaps-in-sequences-postgresql/ But I am not seeing any of those events. This appears to happen randomly. Is anyone else seeing this? I migrated from RDS for PostgreSQL and never experienced it there; it started only after the migration to Aurora PostgreSQL.
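One data point worth gathering for gaps like this is each sequence's cache setting, since cached values can be discarded when a backend exits or the engine restarts; a minimal check, assuming PostgreSQL 10 or later:

```
-- Show the cache size and current state of user-visible sequences
SELECT schemaname, sequencename, cache_size, last_value
FROM pg_sequences;
```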
2 answers · 0 votes · 37 views · cody · asked 4 months ago
Hi! First question on this forum, and it may seem trivial, but I'm trying to work out how to achieve seamless reconnection of applications and interfaces after point-in-time recovery. PITR of Aurora instances and DynamoDB tables always results in a new instance of either being created, and potentially these could be hosting connections from many targets. My experience (more than two decades) is with the likes of Oracle and SQL Server and in-situ restores/recoveries, so there was no need to repoint applications. How are people handling the change of database target if they have to perform such a recovery? (We are thinking of going RDS rather than Aurora because we understand the process there, but this seems like a poor reason to choose one over the other.) Thanks, John
1 answer · 0 votes · 36 views · asked 4 months ago
Hi, I have a 13.4 DB with an app working fine. How can I test it on 14.4? What does AWS recommend? Should I create another cluster on 14.4 and test the app against that version?
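If you do create a second cluster on 14.4 (for example, a clone or snapshot restore of the 13.4 cluster upgraded in place), one thing worth comparing before and after is the installed extensions, since major version upgrades can require extension updates; a small sketch:

```
-- List extensions installed in this database, with their versions
SELECT extname, extversion FROM pg_extension;

-- Compare installed versions against what this engine version offers
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE installed_version IS NOT NULL;
```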
2 answers · 0 votes · 59 views · asked 4 months ago
Is there any plan to support auto-sharding in Aurora for MySQL/PostgreSQL (similar to Citus for PostgreSQL)?
1 answer · 0 votes · 87 views · asked 4 months ago
Hello, I'm trying to run a Lake Formation blueprint for database ingestion (Aurora PostgreSQL, working Glue connection, snapshot mode), but I get the following error:
```
An error occurred while calling o471.pyWriteDynamicFrame. You may get a different result due to the upgrading of Spark 3.0: writing dates before 1582-10-15 or timestamps before 1900-01-01T00:00:00Z into Parquet INT96 files can be dangerous, as the files may be read by Spark 2.x or legacy versions of Hive later, which uses a legacy hybrid calendar that is different from Spark 3.0+'s Proleptic Gregorian calendar. See more details in SPARK-31404. You can set spark.sql.legacy.parquet.int96RebaseModeInWrite to 'LEGACY' to rebase the datetime values w.r.t. the calendar difference during writing, to get maximum interoperability. Or set spark.sql.legacy.parquet.int96RebaseModeInWrite to 'CORRECTED' to write the datetime values as it is, if you are 100% sure that the written files will only be read by Spark 3.0+ or other systems that use Proleptic Gregorian calendar.
```
I've found that adding `--conf spark.sql.legacy.parquet.int96RebaseModeInWrite=CORRECTED` should solve it. However, it is not possible to change the Glue ETL job (I get `putObject: AccessDenied: Access Denied`).
1 answer · 0 votes · 51 views · asked 4 months ago
Hi there, is there a way to migrate an Aurora-based PostgreSQL DB to an RDS for PostgreSQL instance? I have searched the AWS docs but couldn't find anything. Any pointer will be greatly appreciated. Thanks
1 answer · 0 votes · 88 views · asked 4 months ago
If we have 20 transactions per second on a table, will DMS, in the worst case, make 20 connections to track those transactions?
1 answer · 0 votes · 49 views · Sarath · asked 5 months ago