
Questions tagged with Aurora PostgreSQL


Upgrade Amazon Aurora PostgreSQL 10.13, 10.14, 10.16, 11.8, 11.11, 12.4, and 12.6 minor versions by July 15, 2022

Newer versions of Amazon Aurora PostgreSQL-Compatible Edition are now available, and database clusters running Aurora PostgreSQL minor versions 10.13, 10.14, 10.16, 11.8, 11.11, 12.4, and 12.6 need to be upgraded by July 15, 2022. These newer minor versions include important updates that will improve the operations of your Aurora PostgreSQL instances and workloads. We strongly encourage you to upgrade to at least a recommended minimum minor version at your earliest convenience:

* For minor versions 10.13, 10.14, and 10.16, the recommended minimum minor version is 10.17.
* For minor versions 11.8 and 11.11, the recommended minimum minor version is 11.12.
* For minor versions 12.4 and 12.6, the recommended minimum minor version is 12.7.

Starting on or after 12:00 PM PDT on July 15, 2022, if your Amazon Aurora PostgreSQL cluster has not been upgraded to a newer minor version, we will schedule the relevant recommended minimum minor version to be automatically applied during your next maintenance window. Changes will apply to your cluster during your next maintenance window even if auto minor version upgrade is disabled. Restoration of Amazon Aurora PostgreSQL 10.13, 10.14, 10.16, 11.8, 11.11, 12.4, and 12.6 database snapshots after July 15, 2022 will result in an automatic upgrade of the restored database to a version supported at that time.

*How to Determine Which Instances are Running These Minor Versions*

* In the Amazon RDS console, you can see details about a database cluster, including the Aurora PostgreSQL version of instances in the cluster, by choosing Databases from the console's navigation pane.
* To view DB cluster information by using the AWS CLI, use the describe-db-clusters command.
* To view DB cluster information using the Amazon RDS API, use the DescribeDBClusters operation.
* You can also query a database directly for the version number with the aurora_version() system function, e.g. "SELECT * FROM aurora_version();".

*How to Apply a New Minor Version*

You can apply a new minor version in the AWS Management Console, via the AWS CLI, or via the RDS API. Customers using CloudFormation are advised to apply updates in CloudFormation. We advise you to take a manual snapshot before upgrading. For detailed upgrade procedures, please see the User Guide [1]. Please be aware that your cluster will experience a short period of downtime when the update is applied.

Visit the Aurora Version Policy [2] and the documentation [3] for more information and detailed release notes about minor versions, including existing supported versions. If you have any questions or concerns, the AWS Support Team is available on AWS re:Post and via Premium Support [4].

[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.VersionPolicy.html
[3] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Updates.20180305.html
[4] https://aws.amazon.com/support
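For scripted fleet checks, the announcement's version mapping can be expressed as a small lookup table (a hypothetical helper; you would feed it the EngineVersion values returned by describe-db-clusters):

```python
from typing import Optional

# Recommended minimum minor versions, per the announcement above.
RECOMMENDED_MINIMUM = {
    "10.13": "10.17", "10.14": "10.17", "10.16": "10.17",
    "11.8": "11.12", "11.11": "11.12",
    "12.4": "12.7", "12.6": "12.7",
}

def upgrade_target(engine_version: str) -> Optional[str]:
    """Return the recommended minimum version if the given minor version
    is affected by the July 15, 2022 deadline, otherwise None."""
    return RECOMMENDED_MINIMUM.get(engine_version)
```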
0
answers
0
votes
6
views
asked 24 days ago

Cannot access encrypted files from RDS in S3 bucket

I export data from an Aurora Postgres instance to S3 via the `aws_s3.query_export_to_s3` function. The destination bucket does not have default encryption enabled. When I try to download one of the files I get the following error:

> The ciphertext refers to a customer mast3r key that does not exist, does not exist in this region, or you are not allowed to access.

Note: I had to change the word mast3r because this forum doesn't allow me to post it as it is a "non-inclusive" word... The reason seems to be that the files got encrypted with the AWS managed RDS key, which has the following policy:

```
{
    "Version": "2012-10-17",
    "Id": "auto-rds-2",
    "Statement": [
        {
            "Sid": "Allow access through RDS for all principals in the account that are authorized to use RDS",
            "Effect": "Allow",
            "Principal": { "AWS": "*" },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:DescribeKey"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "kms:CallerAccount": "123456789",
                    "kms:ViaService": "rds.eu-central-1.amazonaws.com"
                }
            }
        },
        {
            "Sid": "Allow direct access to key metadata to the account",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789:root" },
            "Action": [
                "kms:Describe*",
                "kms:Get*",
                "kms:List*",
                "kms:RevokeGrant"
            ],
            "Resource": "*"
        }
    ]
}
```

I assume that access doesn't work because of the `ViaService` condition when trying to decrypt the file via S3. I tried to access the files with the root user instead of an IAM user and it works. Is there any way to get access with an IAM user? As far as I know, you cannot modify the policy of an AWS managed key. I also don't understand why the root user can decrypt the file, as the policy doesn't explicitly grant decrypt permissions to it other than the permissions when called from RDS.
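For what it's worth, the symptom described matches how the key policy's first statement evaluates: both StringEquals conditions must hold for the grant to apply. A rough sketch of that condition logic (a simplification for illustration only, not the real IAM evaluator; the account and service values are the ones from the policy above):

```python
from typing import Optional

def rds_key_statement_allows(caller_account: str,
                             via_service: Optional[str]) -> bool:
    """Simplified check of the policy's first statement: the request must
    come from the key's own account AND arrive via the RDS service.
    A direct S3 GetObject decrypts via S3, so kms:ViaService won't match."""
    return (caller_account == "123456789"
            and via_service == "rds.eu-central-1.amazonaws.com")
```

Under this reading, a download through S3 presents kms:ViaService = s3.eu-central-1.amazonaws.com, so the statement simply doesn't grant decrypt to the IAM user; one common workaround is to encrypt exports with a customer managed key whose policy you control.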
1
answers
0
votes
4
views
asked a month ago

My Postgres RDS database got restarted, reportedly due to heavy memory consumption

My Postgres RDS database got restarted; the stated reason was heavy memory consumption, but around 20 GB of freeable memory was available at the time of the restart. When I checked the AAS graph, there was a lot of locking happening. Below are the logs.

```
	PL/pgSQL function evaluate_program_payout_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 141 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33366):pmli_bre_uat@dmsclientdb:[10065]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33366):pmli_bre_uat@dmsclientdb:[10065]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33366):pmli_bre_uat@dmsclientdb:[10065]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33366):pmli_bre_uat@dmsclientdb:[10065]:CONTEXT: SQL statement "delete FROM EvalSlabResult WHERE contextid = par_context_id and program_code = par_program_code and start_date = var_start_date"
	PL/pgSQL function evaluate_slab_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 139 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33300):pmli_bre_uat@dmsclientdb:[9065]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33300):pmli_bre_uat@dmsclientdb:[9065]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33300):pmli_bre_uat@dmsclientdb:[9065]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33286):pmli_bre_uat@dmsclientdb:[8793]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33286):pmli_bre_uat@dmsclientdb:[8793]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33286):pmli_bre_uat@dmsclientdb:[8793]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC::@:[13295]:FATAL: Can't handle storage runtime process crash
2022-03-08 11:28:42 UTC::@:[13295]:LOG: database system is shut down
2022-03-08 11:28:42 UTC:10.10.3.18(33448):pmli_bre_uat@dmsclientdb:[11092]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33448):pmli_bre_uat@dmsclientdb:[11092]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33448):pmli_bre_uat@dmsclientdb:[11092]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33448):pmli_bre_uat@dmsclientdb:[11092]:CONTEXT: SQL statement "delete FROM DVResult WHERE contextid = par_context_id and program_code = par_program_code and start_date = var_start_date"
	PL/pgSQL function evaluate_dv_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 143 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33378):pmli_bre_uat@dmsclientdb:[10069]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33378):pmli_bre_uat@dmsclientdb:[10069]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33378):pmli_bre_uat@dmsclientdb:[10069]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33378):pmli_bre_uat@dmsclientdb:[10069]:CONTEXT: SQL statement "SELECT ess.entityid, ess.version_id FROM evalslabsummary ess WHERE ess.contextid = par_contextid AND ess.program_code = par_program_code AND ess.start_date = var_start_date"
	PL/pgSQL function evaluate_slab(character varying,character varying,character varying,character varying,character varying) line 71 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33408):pmli_bre_uat@dmsclientdb:[10455]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33408):pmli_bre_uat@dmsclientdb:[10455]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33408):pmli_bre_uat@dmsclientdb:[10455]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33408):pmli_bre_uat@dmsclientdb:[10455]:CONTEXT: SQL statement "delete FROM DVResult WHERE contextid = par_context_id and program_code = par_program_code and start_date = var_start_date"
	PL/pgSQL function evaluate_dv_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 143 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33328):pmli_bre_uat@dmsclientdb:[9247]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33328):pmli_bre_uat@dmsclientdb:[9247]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33328):pmli_bre_uat@dmsclientdb:[9247]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33328):pmli_bre_uat@dmsclientdb:[9247]:CONTEXT: SQL statement "delete FROM ProgramPayoutResult WHERE contextid = par_context_id and program_code = par_program_code and start_date = var_start_date"
	PL/pgSQL function evaluate_program_payout_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 141 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33474):pmli_bre_uat@dmsclientdb:[12157]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33474):pmli_bre_uat@dmsclientdb:[12157]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33474):pmli_bre_uat@dmsclientdb:[12157]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33474):pmli_bre_uat@dmsclientdb:[12157]:CONTEXT: SQL statement "delete FROM ProgramPayoutResult WHERE contextid = par_context_id and program_code = par_program_code and start_date = var_start_date"
	PL/pgSQL function evaluate_program_payout_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 141 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33236):pmli_bre_uat@dmsclientdb:[7485]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33236):pmli_bre_uat@dmsclientdb:[7485]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33236):pmli_bre_uat@dmsclientdb:[7485]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
```
1
answers
0
votes
9
views
asked 2 months ago

Error connecting to Aurora PostgreSQL DB in .NET Core Lambda function

I'm attempting to create a Lambda where I can make calls to various stored procedures and functions in my Aurora PostgreSQL DB instance. I'm following the guide on this page: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.Connecting.NET.html

Eventually I want to connect this with Dapper, but for now I'm just trying to get the code from the above example to work. I am using the Npgsql package and can successfully retrieve the auth token via the RDSAuthTokenGenerator.GenerateAuthToken() function using the appropriate region endpoint, cluster endpoint, port number, and DB user. The problem comes when I use the auth token I retrieved earlier to create a connection to the server:

```
using NpgsqlConnection connection = new NpgsqlConnection($"Server=Cluster Endpoint;User Id=DB User;Password=AuthToken;Database=DB Instance name");
```

I am now getting this error:

> 28000: pg_hba.conf rejects connection for host "172.31.30.255", user "DB User", database "DB Instance Name", SSL off

I'm not sure what I need to do to get this to work. As far as I can tell, I've done everything exactly as I was supposed to according to the guide in the documentation. I also created a user role with the specific rds-db:connect permission for my specific DB user and DB instance id. My only guess is that I have failed to connect that authorization in some way to the actual DB user. I assigned that permission to a role with the same name, then created a DB user with that name in the database and granted it the rds_iam role, but it's not clear to me that the IAM user and the DB user are connected yet, and I haven't been able to find examples online of how to connect them. It would be great to get a little help with this one. Thanks!

Edit: I realized that my issue might be with the SSL certificate path that is required at the end of the connection string in the example I linked above. I will keep looking into this, but I'm wondering whether this will work in a Lambda if I have to reference a path to a certificate installed on my computer. Although, I might not be understanding how this works.
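A side note on the error itself: the pg_hba.conf rule behind IAM auth is a `hostssl` rule, so the "SSL off" at the end of the message suggests the client never negotiated SSL. A sketch of the connection-string shape (Python is used here only for illustration; the `SSL Mode` keyword follows Npgsql's connection-string syntax, and all values are placeholders):

```python
def build_npgsql_conn_string(server: str, user: str,
                             auth_token: str, database: str) -> str:
    """Assemble an Npgsql-style connection string that requires SSL,
    which the hostssl rule in pg_hba.conf expects for IAM logins."""
    parts = {
        "Server": server,
        "User Id": user,
        "Password": auth_token,   # token from GenerateAuthToken()
        "Database": database,
        "SSL Mode": "Require",    # avoids the "SSL off" rejection
    }
    return ";".join(f"{key}={value}" for key, value in parts.items())
```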
1
answers
0
votes
14
views
asked 2 months ago

Aurora Postgres upgrade from 11.13 to 12.8 failing - I assume due to PostGIS

Trying to upgrade our Aurora clusters finally. Got them recently updated to 11.13, but every attempt I make to upgrade to 12.8 fails with **"Database cluster is in a state that cannot be upgraded: Postgres cluster is in a state where pg_upgrade can not be completed successfully."**

Here are the logs, which I think point to the culprit:

```
2022-02-11 22:37:53.514 GMT [5276] ERROR: could not access file "$libdir/postgis-2.4": No such file or directory
2022-02-11 22:37:53.514 GMT [5276] STATEMENT: LOAD '$libdir/postgis-2.4'
2022-02-11 22:37:53.515 GMT [5276] ERROR: could not access file "$libdir/rtpostgis-2.4": No such file or directory
2022-02-11 22:37:53.515 GMT [5276] STATEMENT: LOAD '$libdir/rtpostgis-2.4'
command: "/rdsdbbin/aurora-12.8.12.8.0.5790.0/bin/pg_ctl" -w -D "/rdsdbdata/db" -o "--config_file=/rdsdbdata/config_new/postgresql.conf --survivable_cache_mode=off" -m fast stop >> "pg_upgrade_server.log" 2>&1
waiting for server to shut down....2022-02-11 22:37:53.541 GMT [5185] LOG: received fast shutdown request
2022-02-11 22:37:53.541 GMT [5185] LOG: aborting any active transactions
2022-02-11 22:37:53.542 GMT [5237] LOG: shutting down
................sh: /rdsdbbin/aurora-12.8.12.8.0.5790.0/bin/curl: /apollo/sbin/envroot: bad interpreter: No such file or directory
2022-02-11 22:38:10.305 GMT [5185] FATAL: Can't handle storage runtime process crash
2022-02-11 22:38:10.305 GMT [5185] LOG: database system is shut down
```

I found several other articles that point to issues with PostGIS, so I followed what they suggest, but no luck. Our cluster was running PostGIS 2.4.4, so I went ahead and updated it to 3.1.4, restarted the instance, and validated it's really using PostGIS 3; that all looks fine. Nothing helps though. If anyone has suggestions, I am happy to try. Thanks, Thomas
2
answers
0
votes
16
views
asked 3 months ago

Announcement: Amazon Aurora PostgreSQL 10.x end of support is January 31, 2023

From January 31, 2023, Amazon Aurora PostgreSQL-Compatible Edition will no longer support major version 10.x. Per the Aurora Version Policy [1], we are providing 12 months notice to give you time to upgrade your database cluster(s). We recommend that you proactively upgrade your databases running Amazon Aurora PostgreSQL major version 10.x to Amazon Aurora PostgreSQL 11 or higher at your convenience before January 31, 2023. If you do not upgrade before then, Amazon Aurora will upgrade your Aurora PostgreSQL 10.x databases to the appropriate major version during a scheduled maintenance window on or after January 31, 2023.

*How to Determine Which Instances are Running Aurora PostgreSQL 10.x*

In the Amazon RDS console, you can see details about a database cluster, including the Aurora PostgreSQL version of instances in the cluster, by choosing Databases from the console's navigation pane. To view DB cluster information by using the AWS CLI, use the describe-db-clusters command. To view DB cluster information using the Amazon RDS API, use the DescribeDBClusters operation. [2] You can also query a database directly for the version number with the aurora_version() system function, e.g. "SELECT * FROM aurora_version();".

*How to Upgrade to a New Major Version*

You can initiate an upgrade of your database instance — either immediately or during your next maintenance window — to a newer major version of Amazon Aurora PostgreSQL using the AWS Management Console or the AWS Command Line Interface (CLI). The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance; the instance may be restarted multiple times during the process. While major version upgrades typically complete within the standard maintenance window, the duration of the upgrade depends on the number of objects within the database. To avoid any unplanned unavailability outside your maintenance window, we recommend that you first take a snapshot [3] or a fast database clone [4] of your database and test the upgrade to get an estimate of the duration. To learn more about upgrading PostgreSQL major versions in Aurora, review the Upgrading Database Versions page [5].

Please be aware of the following timeline:

• Now through January 31, 2023 - You can initiate upgrades of Amazon Aurora PostgreSQL 10.x instances to Amazon Aurora PostgreSQL 11 or higher at any time.
• Starting August 1, 2022 - You will no longer be able to create new Aurora clusters or instances with Aurora PostgreSQL major version 10.x from either the AWS Console or the CLI. You can still add read replicas to existing Aurora PostgreSQL 10.x clusters and continue to apply changes to existing Aurora PostgreSQL 10.x instances, such as migrating to a Graviton2 R6g instance or changing instance configuration, until January 31, 2023.
• Starting January 31, 2023 - Amazon Aurora will upgrade your Aurora PostgreSQL 10.x databases to the appropriate major version during a scheduled maintenance window on or after that date. Restoration of Aurora PostgreSQL 10.x database snapshots will result in an automatic upgrade of the restored database to a version supported at the time.

[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.VersionPolicy.html
[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/accessing-monitoring.html
[3] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_CreateSnapshotCluster.html
[4] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html
[5] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Updates.html
0
answers
0
votes
216
views
asked 4 months ago

DB Log Processing through Kinesis Data streams and Time Series DB

Hi Team, I have an architecture question: how can PostgreSQL DB log processing be captured through AWS Lambda and Amazon Kinesis Data Streams, with the data finally loading into Amazon Timestream?

High-level draft data flow:

**Aurora PostgreSQL** --- DB log processing ---> **Lambda** --- ingestion ---> **Kinesis Data Streams** --- process, join context data, and insert ---> **Timestream**

I believe we can process and load AWS IoT (sensor/device) data into Timestream through Lambda, Kinesis Data Streams, and Kinesis Data Analytics, and then do analytics on the time-series data. But I am not sure how the PostgreSQL DB logs (write-ahead logs) can be processed through Lambda, ingested through Kinesis Data Streams, and finally loaded into Timestream. The flow also needs to join some tables, such as event-driven tables with the associated Account and Customer tables, before loading into the time-series database.

I would like to know whether the above flow is accurate, since we are not processing any sensor/device data (where sensor data captures all measures and dimensions from the device and loads into Timestream), so the time-series database would always be the primary database. If anyone can throw some light on how PostgreSQL DB logs can be integrated with Timestream through Kinesis Data Streams and Lambda, I'd appreciate your help. Thanks
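One way to picture the Lambda step in a flow like the one asked about above: parse a wal2json-style change record, join in account/customer context, and emit flat records in a Timestream-friendly shape. The field names and the in-memory context table below are purely illustrative assumptions, not a fixed schema:

```python
import json

# Hypothetical lookup standing in for the Account/Customer join.
ACCOUNT_CONTEXT = {"acct-1": {"customer": "Acme"}}

def wal_change_to_records(change_payload: str) -> list:
    """Turn one wal2json change message into flat, Timestream-shaped
    dicts, enriched with account context (illustrative schema only)."""
    message = json.loads(change_payload)
    records = []
    for change in message.get("change", []):
        row = dict(zip(change["columnnames"], change["columnvalues"]))
        context = ACCOUNT_CONTEXT.get(row.get("account_id"), {})
        records.append({
            "measure_name": change["table"],
            "time": row.get("event_time"),
            "dimensions": {"account_id": row.get("account_id"), **context},
            "measures": {k: v for k, v in row.items()
                         if k not in ("account_id", "event_time")},
        })
    return records
```

In a real pipeline the WAL changes would arrive via a logical replication consumer, and the writes would go through the Timestream ingest API rather than being returned as dicts.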
1
answers
0
votes
17
views
asked 5 months ago

Aurora PostgreSQL major version upgrade from 9.6 to 10.18

Seeing some errors in the PostgreSQL log file. But other than these reported errors, the DB instance is working fine and applications have no issues connecting, reading, and writing to the DB instance. We want some help from AWS to verify whether these errors require any further action. Thanks.

```
2021-12-07 03:29:32 UTC::@:[8182]:WARNING: unrecognized configuration parameter "rds.adaptive_autovacuum"
2021-12-07 03:29:32 UTC::@:[8182]:WARNING: unrecognized configuration parameter "rds.enable_plan_management"
2021-12-07 03:29:32 UTC::@:[8182]:LOG: database system is shut down
Postgres Shared Memory Value: 51704012800 bytes
LOG: skipping missing configuration file "/rdsdbdata/db/postgresql.auto.conf"
2021-12-07 03:29:35 UTC::@:[8776]:WARNING: unrecognized configuration parameter "rds.adaptive_autovacuum"
2021-12-07 03:29:35 UTC::@:[8776]:WARNING: unrecognized configuration parameter "rds.enable_plan_management"
2021-12-07 03:29:35 UTC::@:[8776]:LOG: redirecting log output to logging collector process
2021-12-07 03:29:35 UTC::@:[8776]:HINT: Future log output will appear in directory "/rdsdbdata/log/error".
2021-12-07 03:38:49 UTC::@:[8777]:LOG: skipping missing configuration file "/rdsdbdata/db/postgresql.auto.conf"
2021-12-07 03:51:10 UTC::@:[29750]:LOG: setting shmemname /mnt/hugetlbfs/aurora-postgres-buffer-cache
2021-12-07 03:51:10 UTC::@:[29743]:LOG: Waiting for runtime initialization complete...
2021-12-07 03:51:11 UTC:[local]:rdsadmin@rdsadmin:[30193]:FATAL: the database system is starting up
2021-12-07 03:51:11 UTC::@:[30192]:LOG: database system was interrupted; last known up at 2021-12-07 03:50:57 UTC
2021-12-07 03:51:11 UTC::@:[30192]:LOG: Outbound recovery is not required
2021-12-07 03:51:12 UTC:[local]:rdsadmin@rdsadmin:[30220]:FATAL: the database system is starting up
2021-12-07 03:51:13 UTC::@:[29743]:LOG: database system is ready to accept connections
2021-12-07 03:55:10 UTC::@:[29743]:LOG: received SIGHUP, reloading configuration files
2021-12-07 03:55:10 UTC::@:[29743]:LOG: parameter "unix_socket_permissions" cannot be changed without restarting the server
2021-12-07 03:55:10 UTC::@:[29743]:LOG: parameter "apg_critical_insights_enabled" changed to "on"
2021-12-07 03:55:10 UTC::@:[29743]:LOG: configuration file "/rdsdbdata/config/postgresql.conf" contains errors; unaffected changes were applied
2021-12-07 09:48:09 UTC:10.201.11.180(61119):postgresrds@spam:[17324]:LOG: could not receive data from client: Connection timed out
```
1
answers
0
votes
97
views
asked 5 months ago

Aurora Postgres 13 with IAM Auth - Unable to establish logical replication connection

Hello, I'm following the tutorial for using Postgres CDC via logical replication here: https://aws.amazon.com/blogs/database/stream-changes-from-amazon-rds-for-postgresql-using-amazon-kinesis-data-streams-and-aws-lambda/

The DB cluster parameter group has rds.logical_replication enabled, and I've verified that the user I intend to use is capable of IAM auth, and that logical replication slots can be created and queried:

```
CREATE ROLE replicate LOGIN;
GRANT rds_replication, rds_iam TO replicate;
GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO replicate;
...snip...IAM authenticate as replicate user...
Server: PostgreSQL 13.4
Version: 3.2.0
Home: http://pgcli.com
replicate@dev-db-host:mydb> SELECT pg_create_logical_replication_slot('mydb_replication_slot', 'wal2json');
replication slot "mydb_replication_slot" already exists
Time: 0.016s
replicate@dev-db-host:mydb> SELECT pg_create_logical_replication_slot('test_replication_slot', 'wal2json');
+--------------------------------------+
| pg_create_logical_replication_slot   |
|--------------------------------------|
| (test_replication_slot,0/5005318)    |
+--------------------------------------+
SELECT 1
Time: 0.044s
replicate@dev-db-host:mydb> SELECT * FROM pg_logical_slot_peek_changes('test_replication_slot', null, null);
+-------+-------+--------+
| lsn   | xid   | data   |
|-------+-------+--------|
+-------+-------+--------+
SELECT 0
Time: 0.033s
replicate@dev-db-host:mydb>
```

However, when I attempt to create a replication connection using the Python psycopg2 code in the blog post, Postgres tells me:

> FATAL: password authentication failed for user "replicate"

Someone else asked the psycopg devs, who've indicated it's an RDS issue: https://github.com/psycopg/psycopg2/issues/1391

Any ideas? Cheers, Jim

P.S. I've verified that the postgres logical replication connection (either the `replication=database` option on the command line or psycopg2's `LogicalReplicationConnection` type) is possible when using a plain old password instead of RDS IAM.
0
answers
0
votes
9
views
asked 5 months ago

How to investigate Aurora Postgres IAM authentication errors from rdsauthproxy

I have been using IAM database authentication on an Aurora for Postgres cluster for many months now and everything worked well. A few days ago I started getting login errors, and now it is impossible to log in at all. I am not sure about the timeline as we only use these accounts for individual user connections. Only accounts not using IAM can log in now. I am not aware of any change, and I cannot pinpoint the root cause of the error.

The error I am getting in Postgres clients is this:

```
Unable to connect to server:
FATAL: PAM authentication failed for user "<REDACTED_USERNAME>"
FATAL: pg_hba.conf rejects connection for host "<REDACTED_IP>", user "<REDACTED_USERNAME>", database "postgres", SSL off
```

If I look into the Postgres logs I get a little more detail:

```
* Trying <REDACTED_IP>:1108...
* Connected to rdsauthproxy (<REDACTED_IP>) port 1108 (#0)
> POST /authenticateRequest HTTP/1.1
Host: rdsauthproxy:1108
Accept: */*
Content-Length: 753
Content-Type: multipart/form-data; boundary=------------------------1f9a4da08078f511
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 403 Forbidden
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
<
* Connection #0 to host rdsauthproxy left intact
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:LOG: pam_authenticate failed: Permission denied
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:FATAL: PAM authentication failed for user "<REDACTED_USERNAME>"
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:DETAIL: Connection matched pg_hba.conf line 13: "hostssl all +rds_iam all pam"
2021-12-05 14:42:43 UTC:10.4.2.137(13615):<REDACTED_USERNAME>@postgres:[7488]:FATAL: pg_hba.conf rejects connection for host "<REDACTED_IP>", user "<REDACTED_USERNAME>", database "postgres", SSL off
```

So it seems to be rdsauthproxy that rejects the authentication. My understanding is that this proxy is part of the Aurora instance, and I have not found a way to get its logs, where hopefully I could find information on why the authentication is rejected. I checked the IAM configuration in case something changed, but it seems fine. The users have a policy like this:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": "arn:aws:rds-db:eu-west-3:<REDACTED_ACCOUNT_ID>:dbuser:*/<REDACTED_USERNAME>"
        }
    ]
}
```

The usernames match exactly between IAM and Postgres. In Postgres they all have the "rds_iam" role. Is there anything I could be missing? At least, is there a way to retrieve the logs of an Aurora rdsauthproxy instance that maybe could point me in the right direction?
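As a sanity check on the policy side, the Resource ARN's shape can be matched roughly the way IAM does (a simplified sketch; real IAM evaluation also involves conditions, explicit denies, and session context, none of which are modeled here):

```python
import fnmatch

def rds_db_connect_allows(resource_pattern: str, region: str, account: str,
                          db_resource_id: str, db_user: str) -> bool:
    """Check whether an rds-db:connect Resource ARN pattern covers a
    given cluster/DbiResourceId and DB user name (simplified)."""
    requested = (f"arn:aws:rds-db:{region}:{account}:"
                 f"dbuser:{db_resource_id}/{db_user}")
    return fnmatch.fnmatch(requested, resource_pattern)
```

A `dbuser:*/<username>` pattern like the one in the policy should match any cluster in that account and region for that exact user name, which is one reason to double-check the case sensitivity of the name and that the token is generated for the same region and account.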
1
answers
0
votes
91
views
asked 6 months ago

Does AWS DMS support ARRAY data type for RDS for PostgreSQL on EC2 to Aurora PostgreSQL migration?

I am currently migrating an Amazon RDS for PostgreSQL database from Amazon EC2 to Amazon Aurora PostgreSQL-Compatible Edition using AWS DMS, and I have encountered the following issue: one of the columns in a particular table stores the values of water pressure measured within a second. This column is an array of decimal numbers (e.g. {2.44, 5.66, 8.55}). I received the following error message from AWS DMS during the migration:

> 1 unsupported data type '_float4' on table 'data1', column 'pressure'

Does AWS DMS support the ARRAY data type for double or floating-point numbers? The AWS documentation indicates that arrays can't be migrated; however, further down on the same page, it's mentioned that AWS DMS supports arrays from a source RDS for PostgreSQL database and that arrays are mapped to CLOBs in AWS DMS. I'm looking for guidance on whether the ARRAY data type is supported by AWS DMS during migration. You can see below that the pressure column is of type real[]:

```
pipeminder=# \d data1
              Table "public.data1"
    Column     |           Type           | Modifiers
---------------+--------------------------+-----------
 device_id     | bigint                   | not null
 timestamp     | timestamp with time zone | not null
 pressure      | real[]                   | not null
 pressure_min  | real                     | not null
 pressure_mean | real                     | not null
 pressure_max  | real                     | not null
 flow          | real                     | not null
Indexes:
    "data1_unique_device_time" UNIQUE CONSTRAINT, btree (device_id, "timestamp")
```
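If the array does come across as a CLOB (i.e., text), the consumer side then has to parse PostgreSQL's array literal form. A minimal sketch for the simple numeric case shown above (handles only flat, unquoted, non-NULL arrays):

```python
def parse_pg_real_array(literal: str) -> list:
    """Parse a simple PostgreSQL real[] text literal such as
    '{2.44,5.66,8.55}' into a list of floats. Quoted elements,
    nested arrays, and NULLs are deliberately not handled."""
    body = literal.strip().strip("{}")
    return [float(item) for item in body.split(",")] if body else []
```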
1
answers
0
votes
28
views
asked 2 years ago