
Questions tagged with PostgreSQL


Autovacuum does not clear the database.

Autovacuum does not clear the database.

Database: middle-db, PostgreSQL 10.18 on AWS RDS (2 vCPU, 8 GB RAM, 1100 GiB gp2 SSD).

Statistics for table "spree_datafeed_products":

```
relid               | 16556
schemaname          | public
relname             | spree_datafeed_products
seq_scan            | 20
seq_tup_read        | 365522436
idx_scan            | 962072108
idx_tup_fetch       | 9929276855
n_tup_ins           | 2846455
n_tup_upd           | 35778058
n_tup_del           | 284291955
n_tup_hot_upd       | 0
n_live_tup          | 3546840
n_dead_tup          | 338790851
n_mod_since_analyze | 307930753
last_vacuum         |
last_autovacuum     |
last_analyze        |
last_autoanalyze    | 2022-04-29 13:01:43.985749+00
vacuum_count        | 0
autovacuum_count    | 0
analyze_count       | 0
autoanalyze_count   | 1
```

Table and index sizes:

```
indexname                                                    | size
index_spree_datafeed_products_on_updated_at                  | 48 GB
index_spree_datafeed_products_on_state                       | 35 GB
index_spree_datafeed_products_on_size_variant_field          | 40 GB
index_spree_datafeed_products_on_product_id                  | 32 GB
index_spree_datafeed_products_on_original_id                 | 31 GB
index_spree_datafeed_products_on_datafeed_id                 | 42 GB
index_spree_datafeed_products_on_datafeed_id_and_original_id | 31 GB
index_spree_datafeed_products_on_data_hash                   | 39 GB
spree_datafeed_products_pkey                                 | 18 GB
pg_size_pretty - 419 GB
```

Autovacuum worker:

```
datid            | 16404
datname          | milanstyle_production
pid              | 2274
backend_start    | 2022-05-01 19:52:00.066097+00
xact_start       | 2022-05-01 19:52:00.23692+00
query_start      | 2022-05-01 19:52:00.23692+00
state_change     | 2022-05-01 19:52:00.236921+00
wait_event_type  |
wait_event       |
state            | active
backend_xid      |
backend_xmin     | 1301636863
query            | autovacuum: VACUUM ANALYZE public.spree_datafeed_products
backend_type     | autovacuum worker
```

Settings:

```
autovacuum                          on
autovacuum_analyze_scale_factor     0.05
autovacuum_analyze_threshold        50
autovacuum_freeze_max_age           200000000
autovacuum_max_workers              3
autovacuum_multixact_freeze_max_age 400000000
autovacuum_naptime                  30
autovacuum_vacuum_cost_delay        20
autovacuum_vacuum_cost_limit        -1
autovacuum_vacuum_scale_factor      0.1
autovacuum_vacuum_threshold         50
```

Our garbage-cleaning script has deleted a huge number of rows, and we have been waiting for autovacuum to clear the dead tuples for more than a week. What is the problem? Why is the database failing to keep up?
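For a backlog like this, a quick way to check whether the long-running autovacuum worker is actually making progress, and to let it work through this one table faster, is sketched below. This is a minimal sketch, not a definitive fix: the per-table settings are illustrative values you would tune yourself, and `pg_stat_progress_vacuum` is available on PostgreSQL 10, which this instance runs.

```
-- Check whether the running autovacuum worker is making progress
-- (heap_blks_scanned should keep increasing between runs of this query).
SELECT p.pid, p.phase, p.heap_blks_total, p.heap_blks_scanned,
       p.heap_blks_vacuumed, p.index_vacuum_count, p.num_dead_tuples
FROM pg_stat_progress_vacuum p
WHERE p.relid = 'public.spree_datafeed_products'::regclass;

-- Reduce cost-based throttling for this table only
-- (the numbers here are illustrative, not recommendations).
ALTER TABLE public.spree_datafeed_products
  SET (autovacuum_vacuum_cost_delay = 0,
       autovacuum_vacuum_cost_limit = 2000);

-- Or reclaim the dead tuples once, manually, outside any transaction block:
VACUUM (VERBOSE, ANALYZE) public.spree_datafeed_products;
```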
0 answers · 0 votes · 2 views · asked 11 days ago

[Urgent Action Required] - Upgrade your RDS for PostgreSQL minor versions

This announcement is for customers running one or more Amazon RDS DB instances with a version of PostgreSQL that has been deprecated by Amazon RDS and requires attention. The RDS PostgreSQL minor versions listed in the table below are supported; any DB instances running earlier versions will be automatically upgraded to the version marked as "preferred" by RDS, no earlier than July 15, 2022 starting 12 AM PDT:

| Major Versions Supported | Minor Versions Supported |
| --- | --- |
| 14 | 14.1 and later |
| 13 | 13.3 and later |
| 12 | 12.7 and later |
| 11 | 11.12 and later |
| 10 | 10.17 and later |
| 9 | none |

Amazon RDS supports DB instances running the PostgreSQL minor versions listed above. Minor versions not included above do not meet our high quality, performance, and security bar. In the PostgreSQL versioning policy [1], the PostgreSQL community recommends that you always run the latest available minor release for whatever major version is in use. Additionally, we recommend that you monitor the PostgreSQL security page for documented vulnerabilities [2].

If you have automatic minor version upgrade enabled as part of your configuration settings, you will be upgraded automatically. Alternatively, you can take action yourself by performing the upgrade earlier. You can initiate an upgrade by going to the Modify DB Instance page in the AWS Management Console and changing the database version setting to a newer minor/major version of PostgreSQL, or you can use the AWS CLI to perform the upgrade. To learn more about upgrading PostgreSQL minor versions in RDS, review the 'Upgrading Database Versions' page [3]. The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance. The DB instance may restart multiple times during the process. If you choose the "Apply Immediately" option, the upgrade will be initiated immediately after clicking the "Modify DB Instance" button. If you choose not to apply the change immediately, the upgrade will be performed during your next maintenance window.

Starting no earlier than July 15, 2022 12 AM PDT, we will automatically upgrade DB instances running a deprecated minor version to the preferred minor version of the specific major version of your RDS PostgreSQL database. (For example, instances running RDS PostgreSQL 10.1 will be automatically upgraded to 10.17 starting no earlier than July 15, 2022 12 AM PDT.) Should you need to create new instances using the deprecated version(s) of the database, we recommend that you restore from a recent DB snapshot [4]. You can continue to run and modify existing instances/clusters using these versions until July 14, 2022 11:59 PM PDT, after which your DB instance will automatically be upgraded to the preferred minor version of the specific major version of your RDS PostgreSQL database. Starting no earlier than July 15, 2022 12 AM PDT, restoring the snapshot of a deprecated RDS PostgreSQL database instance will result in an automatic version upgrade of the restored database instance, using the same upgrade process as described above.

Should you have any questions or concerns, please see the RDS FAQs [5] or contact the AWS Support Team on the community forums and via AWS Support [6].
Sincerely, Amazon RDS

[1] https://www.postgresql.org/support/versioning/
[2] https://www.postgresql.org/support/security/
[3] http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[4] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
[5] https://aws.amazon.com/rds/faqs/ (search for "guidelines for deprecating database engine versions")
[6] https://aws.amazon.com/support
0 answers · 1 vote · 10 views · asked 14 days ago

How / Where do I upgrade my instance's PostgreSQL Version?

Hello! I am trying to deploy a Rails 7 app to Elastic Beanstalk, but my deploy keeps failing. In the logs I see `An error occurred while installing pg (1.3.5), and Bundler cannot continue` (installing the postgres gem is failing). I SSHed onto my instance, ran bundle manually, and see `Your PostgreSQL is too old. Either install an older version of this gem or upgrade your database to at least PostgreSQL-9.3.`

In my .elasticbeanstalk.packages.config file I have:

```
packages:
  yum:
    postgresql14-devel: []
```

But this seems to have no effect on the version of Postgres on my instance. Creating an RDS instance associated with my Beanstalk environment, with any version of PostgreSQL, does not seem to solve the problem. `postgres -V` is not a command on my Beanstalk instance. If I SSH onto my instance and cat `usr/bin/pg_config`, I think it may be set to version 9.2, but this file doesn't look like something I should be editing via SSH, and I don't see any references to manipulating how it gets generated. Any assistance would be greatly appreciated!

Update 1: .elasticbeanstalk.packages.config should be within `.ebextensions` instead. I made this change and still had the error. I found another thread on Stack Overflow that described the packages.config file as looking like:

```
packages:
  yum:
    amazon-linux-extras: []
commands:
  01_postgres_activate:
    command: sudo amazon-linux-extras enable postgresql10
  02_postgres_install:
    command: sudo yum install -y postgresql-devel
```

I updated the file and still had no luck. I terminated and rebuilt my app to try again, and manually ran `sudo amazon-linux-extras enable postgresql10` and `sudo yum install -y postgresql-devel` via SSH, which finally let me successfully bundle install. I am still working on making this work via the EB deploy instead of manually messing with the boxes.

Update 2: After making the above changes and creating a new application + environment, I am able to consistently get past the above issue.
1 answer · 1 vote · 6 views · asked 15 days ago

My ECS tasks (VPC A) can't connect to my RDS (VPC B) even though the VPCs are peered and networking is configured correctly

Hi,

As mentioned in the title, my ECS tasks cannot connect to my RDS. The ECS tasks try to resolve the RDS by name, and it resolves to the RDS public IP (the RDS instance has both public and private IPs). However, the security group on the RDS doesn't allow open access from all IPs, so the connection fails. I temporarily allowed all connections and could see that the ECS tasks are routing through the open internet to reach the RDS. Reachability Analyzer from a specific task's Elastic Network Interface to the RDS ENI is successful, using internal routing through the peering connection. At the same time, I have another server in VPC C that can connect to the RDS. All the configuration is similar between these two apps, including the peering connection, security group policies and route tables.

Any help is appreciated. Here are some details about the VPCs:

- VPC A - 15.2.0.0/16 (three subnets)
- VPC B - 111.30.0.0/16 (three subnets)
- VPC C - 15.0.0.0/16 (three subnets)
- Peering Connection 1 between A and B
- Peering Connection 2 between C and B

Route table for VPC A:
- 111.30.0.0/16: Peering Connection 1
- 15.2.0.0/16: Local
- 0.0.0.0/0: Internet Gateway

Route table for VPC C:
- 111.30.0.0/16: Peering Connection 2
- 15.2.0.0/16: Local
- 0.0.0.0/0: Internet Gateway

Security groups allow traffic to the RDS:
- Ingress: 15.0.0.0/16: Allow DB port
- Ingress: 15.2.0.0/16: Allow DB port
- Egress: 0.0.0.0/0: Allow all ports

When I add the rule 0.0.0.0/0 Allow DB Port to the RDS security group, ECS can connect to my RDS through its public IP.
1 answer · 2 votes · 5 views · asked 16 days ago

Not able to do a one-time load from Postgres to OpenSearch using DMS

Trying to migrate existing data from AWS RDS Postgres to AWS managed OpenSearch, but it is not working: no rows were migrated to OpenSearch. When checking the CloudWatch log I get the error below:

```
Bulk request failed. no retry. TotalRecordCount 4080, FailedRecordCount 4080 [1026400] (elasticsearch_bulk_utils.c:181)
```

DMS has the following task settings:

```
{
  "TargetMetadata": { "TargetSchema": "", "SupportLobs": false, "FullLobMode": false, "LobChunkSize": 0, "LimitedSizeLobMode": false, "LobMaxSize": 0, "InlineLobMaxSize": 0, "LoadMaxFileSize": 0, "ParallelLoadThreads": 5, "ParallelLoadBufferSize": 100, "BatchApplyEnabled": false, "TaskRecoveryTableEnabled": false, "ParallelLoadQueuesPerThread": 0, "ParallelApplyThreads": 0, "ParallelApplyBufferSize": 100, "ParallelApplyQueuesPerThread": 0 },
  "FullLoadSettings": { "TargetTablePrepMode": "DO_NOTHING", "CreatePkAfterFullLoad": false, "StopTaskCachedChangesApplied": false, "StopTaskCachedChangesNotApplied": false, "MaxFullLoadSubTasks": 8, "TransactionConsistencyTimeout": 600, "CommitRate": 50000 },
  "Logging": {
    "EnableLogging": true,
    "LogComponents": [
      { "Id": "TRANSFORMATION", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "SOURCE_UNLOAD", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "IO", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "TARGET_LOAD", "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG" },
      { "Id": "PERFORMANCE", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "SOURCE_CAPTURE", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "SORTER", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "REST_SERVER", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "VALIDATOR_EXT", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "TARGET_APPLY", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "TASK_MANAGER", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "TABLES_MANAGER", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "METADATA_MANAGER", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "FILE_FACTORY", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "COMMON", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "ADDONS", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "DATA_STRUCTURE", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "COMMUNICATION", "Severity": "LOGGER_SEVERITY_DEFAULT" },
      { "Id": "FILE_TRANSFER", "Severity": "LOGGER_SEVERITY_DEFAULT" }
    ],
    "CloudWatchLogGroup": null,
    "CloudWatchLogStream": null
  },
  "ControlTablesSettings": { "historyTimeslotInMinutes": 5, "ControlSchema": "", "HistoryTimeslotInMinutes": 5, "HistoryTableEnabled": true, "SuspendedTablesTableEnabled": false, "StatusTableEnabled": true, "FullLoadExceptionTableEnabled": false },
  "StreamBufferSettings": { "StreamBufferCount": 3, "StreamBufferSizeInMB": 8, "CtrlStreamBufferSizeInMB": 5 },
  "ChangeProcessingDdlHandlingPolicy": { "HandleSourceTableDropped": true, "HandleSourceTableTruncated": true, "HandleSourceTableAltered": true },
  "ErrorBehavior": { "DataErrorPolicy": "LOG_ERROR", "EventErrorPolicy": null, "DataTruncationErrorPolicy": "LOG_ERROR", "DataErrorEscalationPolicy": "SUSPEND_TABLE", "DataErrorEscalationCount": 0, "TableErrorPolicy": "SUSPEND_TABLE", "TableErrorEscalationPolicy": "STOP_TASK", "TableErrorEscalationCount": 0, "RecoverableErrorCount": -1, "RecoverableErrorInterval": 5, "RecoverableErrorThrottling": true, "RecoverableErrorThrottlingMax": 1800, "RecoverableErrorStopRetryAfterThrottlingMax": true, "ApplyErrorDeletePolicy": "IGNORE_RECORD", "ApplyErrorInsertPolicy": "LOG_ERROR", "ApplyErrorUpdatePolicy": "LOG_ERROR", "ApplyErrorEscalationPolicy": "LOG_ERROR", "ApplyErrorEscalationCount": 0, "ApplyErrorFailOnTruncationDdl": false, "FullLoadIgnoreConflicts": true, "FailOnTransactionConsistencyBreached": false, "FailOnNoTablesCaptured": true },
  "ChangeProcessingTuning": { "BatchApplyPreserveTransaction": true, "BatchApplyTimeoutMin": 1, "BatchApplyTimeoutMax": 30, "BatchApplyMemoryLimit": 500, "BatchSplitSize": 0, "MinTransactionSize": 1000, "CommitTimeout": 1, "MemoryLimitTotal": 1024, "MemoryKeepTime": 60, "StatementCacheSize": 50 },
  "PostProcessingRules": null,
  "CharacterSetSettings": null,
  "LoopbackPreventionSettings": null,
  "BeforeImageSettings": null,
  "FailTaskWhenCleanTaskResourceFailed": false,
  "TTSettings": null
}
```

The OpenSearch index has the following settings and mappings:

```
{
  "settings": {
    "index.max_ngram_diff": 8,
    "analysis": {
      "analyzer": { "my_ngram_analyzer": { "type": "custom", "tokenizer": "standard", "filter": [ "lowercase", "mynGram" ] } },
      "filter": { "mynGram": { "type": "nGram", "min_gram": 6, "max_gram": 14, "token_chars": [ "letter", "digit", "whitespace", "symbol" ] } }
    },
    "number_of_shards": 6,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "created_at": { "type": "date" },
      "id": { "type": "long" },
      "name": { "type": "text", "analyzer": "my_ngram_analyzer", "search_analyzer": "my_ngram_analyzer" },
      "phone": { "type": "text", "analyzer": "my_ngram_analyzer", "search_analyzer": "my_ngram_analyzer" },
      "updated_at": { "type": "date" }
    }
  }
}
```

I have tried inserting a sample document with the _bulk API from the OpenSearch console and it worked:

```
POST _bulk
{"index":{"_index":"contacts"}}
{"name": "name","phone" : "11111111","created_at" : "2021-12-21T12:12:59","updated_at" : "2021-12-21T12:12:59","id": 101}
```
1 answer · 0 votes · 10 views · asked 23 days ago

How to use psycopg2 to load data into Redshift tables with the copy command

I am trying to load data from an EC2 instance into Redshift tables but cannot figure out how to do this using the COPY command. I have tried the following to create the SQL queries:

```
def copy_query_creator(table_name, schema):
    copy_sql_template = sql.SQL("COPY {table_name} from stdin iam_role 'iam_role' DATEFORMAT 'MM-DD-YYYY' TIMEFORMAT 'MM-DD-YYYY HH12:MI:SS AM' ACCEPTINVCHARS fixedwidth {schema}").format(table_name = sql.Identifier(table_name), schema = schema)
    return copy_sql_template
```

and

```
def copy_query_creator_2(table_name, iam_role, schema):
    copy_sql_base = """
        COPY {} FROM STDIN
        iam_role {}
        DATEFORMAT 'MM-DD-YYYY'
        TIMEFORMAT 'MM-DD-YYYY HH12:MI:SS AM'
        ACCEPTINVCHARS
        fixedwidth {}""".format(table_name, iam_role, schema)
    print(copy_sql_base)
    return copy_sql_base
```

where schema is the fixedwidth_spec in the example snippet below:

```
copy table_name
from 's3://mybucket/prefix'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
fixedwidth 'fixedwidth_spec';
```

The function that uses the created query looks like so:

```
def copy_query(self, filepath):
    schema = Query.create_schema()      # returns the formatted fixedwidth_spec
    table_name = Query.get_table_def()  # returns the table_name
    print(copy_query_creator_2(table_name, iam_role, schema))
    self.connect()
    with self.connection.cursor() as cursor:
        try:
            with open(filepath) as f:
                cursor.copy_expert(copy_query_creator_2(table_name, iam_role, schema), f)
            print('copy worked')
            logging.info(f'{copy_query_creator_2(table_name, iam_role, schema)} ran; {cursor.rowcount} records copied.')
        except (Exception, psycopg2.Error) as error:
            logging.error(error)
            print(error)
```

The two attempts return errors. The first returns 'Composed elements must be Composable, got %r instead', while the latter returns 'error at or near STDIN'. Please help.
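For context on the second error: Amazon Redshift's COPY loads from Amazon S3 (or EMR, DynamoDB, or remote hosts over SSH), not from STDIN, which is why the psycopg2 `copy_expert` / `COPY ... FROM STDIN` path is rejected. Below is a minimal sketch of the statement shape Redshift does accept; the bucket, role ARN and fixed-width spec are placeholders, and you would run it with a plain `cursor.execute()` after staging the file in S3.

```
-- Hypothetical example: stage the EC2-local file in S3 first, then run a
-- plain COPY that Redshift can execute (no STDIN involved).
COPY my_schema.my_table
FROM 's3://my-bucket/prefix/datafile.txt'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FIXEDWIDTH 'col1:10,col2:8,col3:25'
DATEFORMAT 'MM-DD-YYYY'
TIMEFORMAT 'MM-DD-YYYY HH12:MI:SS AM'
ACCEPTINVCHARS;
```

If the data must flow straight from the EC2 host without an S3 hop, COPY from a remote host over SSH is the documented alternative.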
0 answers · 0 votes · 3 views · asked 25 days ago

Advice on the best database/data storage for historical data

Hi, I'm doing some research to find the best place to centralize the large volume of data logs generated by my application, considering pricing, performance and scalability. Today all my application data, including logs, is stored in an Oracle database, but I'm thinking of moving all the log-related data out of it to reduce its size and stop worrying about storage performance, etc. — just put everything on "infinite" storage apart from my actual database, using CDC or a regular batch process.

**Below are some needs:**
- Only inserts are necessary (no updates or deletes)
- Customers will need access to this historic data
- Well-defined access pattern (one or two indexes at maximum)
- Latency of a few seconds is OK
- Avoid infrastructure, DBA and performance bottlenecks in the long term
- Infinite retention period (meaning I don't want to worry about performance issues or storage size long term, but something that can handle a few terabytes of data)

**Use case example:**
Historical sales orders by item (id_item | id_customer | qty_sold | date_inserted ...), approx. 50 million records per day, where I would need to see the historical data by item and by customer, for example (two dimensions).

I've done some research on the options below; see the sketch after this list for the S3 + Athena layout I have in mind.

- **S3 + Athena** — Put everything on S3, no worries about infrastructure or performance issues; however, as I need to query by item and customer, it would probably be necessary to break files by item or customer and generate millions of partitions to avoid the high cost of scanning every file, etc.
- **Postgres** — Not sure if it could become a performance bottleneck once tables get too big, even with partitioning strategies.
- **DynamoDB** — Not sure if it's a good alternative for historical data pricing-wise, given that seconds of latency are OK.
- **MongoDB / DocumentDB** — Not very familiar with it (I'd prefer a SQL-type language), but I know it has good scalability.
- **Cassandra** — Don't know it very well.
- **Time-series DBs such as InfluxDB, Timestream, etc.** — Don't know them very well, but they seem appropriate for time series.

What option would you choose? Sorry in advance if I'm saying something wrong or impossible :) Thank you!
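As a concrete illustration of the S3 + Athena option, a partitioned external table over Parquet files keeps scans cheap when queries filter on the partition key. The sketch below is a minimal example under assumed names (bucket, table, a daily date partition rather than per-item/per-customer partitions), not a recommendation of one option over the others.

```
-- Hypothetical Athena DDL: daily-partitioned Parquet, so each day's ~50M rows
-- land in one partition and queries prune by date range.
CREATE EXTERNAL TABLE sales_history (
  id_item       bigint,
  id_customer   bigint,
  qty_sold      int,
  date_inserted timestamp
)
PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://my-log-bucket/sales_history/';

-- Queries that filter on the partition column only read the matching prefixes;
-- the second predicate is then a scan within those partitions only.
SELECT id_item, SUM(qty_sold) AS total_qty
FROM sales_history
WHERE dt BETWEEN '2022-01-01' AND '2022-01-31'
  AND id_customer = 12345
GROUP BY id_item;
```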
1 answer · 0 votes · 5 views · asked 2 months ago

My Postgres RDS database got restarted; the event reports heavy memory consumption.

My Postgres RDS database got restarted. The event says heavy consumption of memory, but around 20 GB of freeable memory was available at the time of the restart. Checking the AAS graph, there is a lot of locking happening. Below are the logs.

```
PL/pgSQL function evaluate_program_payout_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 141 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33366):pmli_bre_uat@dmsclientdb:[10065]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33366):pmli_bre_uat@dmsclientdb:[10065]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33366):pmli_bre_uat@dmsclientdb:[10065]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33366):pmli_bre_uat@dmsclientdb:[10065]:CONTEXT: SQL statement "delete FROM EvalSlabResult WHERE contextid = par_context_id and program_code = par_program_code and start_date = var_start_date"
PL/pgSQL function evaluate_slab_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 139 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33300):pmli_bre_uat@dmsclientdb:[9065]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33300):pmli_bre_uat@dmsclientdb:[9065]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33300):pmli_bre_uat@dmsclientdb:[9065]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33286):pmli_bre_uat@dmsclientdb:[8793]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33286):pmli_bre_uat@dmsclientdb:[8793]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33286):pmli_bre_uat@dmsclientdb:[8793]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC::@:[13295]:FATAL: Can't handle storage runtime process crash
2022-03-08 11:28:42 UTC::@:[13295]:LOG: database system is shut down
2022-03-08 11:28:42 UTC:10.10.3.18(33448):pmli_bre_uat@dmsclientdb:[11092]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33448):pmli_bre_uat@dmsclientdb:[11092]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33448):pmli_bre_uat@dmsclientdb:[11092]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33448):pmli_bre_uat@dmsclientdb:[11092]:CONTEXT: SQL statement "delete FROM DVResult WHERE contextid = par_context_id and program_code = par_program_code and start_date = var_start_date"
PL/pgSQL function evaluate_dv_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 143 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33378):pmli_bre_uat@dmsclientdb:[10069]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33378):pmli_bre_uat@dmsclientdb:[10069]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33378):pmli_bre_uat@dmsclientdb:[10069]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33378):pmli_bre_uat@dmsclientdb:[10069]:CONTEXT: SQL statement "SELECT ess.entityid, ess.version_id FROM evalslabsummary ess WHERE ess.contextid = par_contextid AND ess.program_code = par_program_code AND ess.start_date = var_start_date"
PL/pgSQL function evaluate_slab(character varying,character varying,character varying,character varying,character varying) line 71 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33408):pmli_bre_uat@dmsclientdb:[10455]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33408):pmli_bre_uat@dmsclientdb:[10455]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33408):pmli_bre_uat@dmsclientdb:[10455]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33408):pmli_bre_uat@dmsclientdb:[10455]:CONTEXT: SQL statement "delete FROM DVResult WHERE contextid = par_context_id and program_code = par_program_code and start_date = var_start_date"
PL/pgSQL function evaluate_dv_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 143 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33328):pmli_bre_uat@dmsclientdb:[9247]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33328):pmli_bre_uat@dmsclientdb:[9247]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33328):pmli_bre_uat@dmsclientdb:[9247]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33328):pmli_bre_uat@dmsclientdb:[9247]:CONTEXT: SQL statement "delete FROM ProgramPayoutResult WHERE contextid = par_context_id and program_code = par_program_code and start_date = var_start_date"
PL/pgSQL function evaluate_program_payout_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 141 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33474):pmli_bre_uat@dmsclientdb:[12157]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33474):pmli_bre_uat@dmsclientdb:[12157]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33474):pmli_bre_uat@dmsclientdb:[12157]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-03-08 11:28:42 UTC:10.10.3.18(33474):pmli_bre_uat@dmsclientdb:[12157]:CONTEXT: SQL statement "delete FROM ProgramPayoutResult WHERE contextid = par_context_id and program_code = par_program_code and start_date = var_start_date"
PL/pgSQL function evaluate_program_payout_version(character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying,character varying) line 141 at SQL statement
2022-03-08 11:28:42 UTC:10.10.3.18(33236):pmli_bre_uat@dmsclientdb:[7485]:WARNING: terminating connection because of crash of another server process
2022-03-08 11:28:42 UTC:10.10.3.18(33236):pmli_bre_uat@dmsclientdb:[7485]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-03-08 11:28:42 UTC:10.10.3.18(33236):pmli_bre_uat@dmsclientdb:[7485]:HINT: In a moment you should be able to reconnect to the database and repeat you
```
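Since the AAS graph points at lock contention in the lead-up to the restart, a query along the lines below can show which sessions are blocking which at a given moment (it uses `pg_blocking_pids()`, available since PostgreSQL 9.6). This is a general diagnostic sketch, not an explanation of the crash itself.

```
-- Show blocked sessions together with the sessions blocking them.
SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS blocked
JOIN LATERAL unnest(pg_blocking_pids(blocked.pid)) AS b(pid) ON true
JOIN pg_stat_activity AS blocking ON blocking.pid = b.pid;
```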
1 answer · 0 votes · 9 views · asked 2 months ago

Aurora Postgres upgrade from 11.13 to 12.8 failing - I assume due to PostGIS

Trying to upgrade our Aurora clusters finally. We got them recently updated to 11.13, but every attempt I make to upgrade to 12.8 fails with **"Database cluster is in a state that cannot be upgraded: Postgres cluster is in a state where pg_upgrade can not be completed successfully."**

Here are the logs, which I think point to the culprit:

```
2022-02-11 22:37:53.514 GMT [5276] ERROR: could not access file "$libdir/postgis-2.4": No such file or directory
2022-02-11 22:37:53.514 GMT [5276] STATEMENT: LOAD '$libdir/postgis-2.4'
2022-02-11 22:37:53.515 GMT [5276] ERROR: could not access file "$libdir/rtpostgis-2.4": No such file or directory
2022-02-11 22:37:53.515 GMT [5276] STATEMENT: LOAD '$libdir/rtpostgis-2.4'
command: "/rdsdbbin/aurora-12.8.12.8.0.5790.0/bin/pg_ctl" -w -D "/rdsdbdata/db" -o "--config_file=/rdsdbdata/config_new/postgresql.conf --survivable_cache_mode=off" -m fast stop >> "pg_upgrade_server.log" 2>&1
waiting for server to shut down....2022-02-11 22:37:53.541 GMT [5185] LOG: received fast shutdown request
2022-02-11 22:37:53.541 GMT [5185] LOG: aborting any active transactions
2022-02-11 22:37:53.542 GMT [5237] LOG: shutting down
................sh: /rdsdbbin/aurora-12.8.12.8.0.5790.0/bin/curl: /apollo/sbin/envroot: bad interpreter: No such file or directory
2022-02-11 22:38:10.305 GMT [5185] FATAL: Can't handle storage runtime process crash
2022-02-11 22:38:10.305 GMT [5185] LOG: database system is shut down
```

I found several other articles that point to issues with PostGIS, so I followed what they suggest, but no luck. Our cluster was running PostGIS 2.4.4, so I went ahead and updated it to 3.1.4, restarted the instance, and validated that it is really using PostGIS 3 — that all looks fine. Nothing helps, though. If anyone has suggestions, I am happy to try.

Thanks
Thomas
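For reference, a quick way to confirm which PostGIS extension version each database is actually on (and to update it in place before retrying pg_upgrade) is sketched below. This is a generic check to run in every database of the cluster, not a guaranteed fix for this specific failure.

```
-- Show the installed and default extension versions known to the server.
SELECT name, installed_version, default_version
FROM pg_available_extensions
WHERE name LIKE 'postgis%';

-- Show the library/version PostGIS itself reports (should say 3.x, not 2.4).
SELECT postgis_full_version();

-- If the extension is still on 2.4 in this database, update it before
-- retrying the engine upgrade.
ALTER EXTENSION postgis UPDATE TO '3.1.4';
```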
2 answers · 0 votes · 17 views · asked 3 months ago

RDS Postgres 12.7 Procedure Stopped Working - invalid transaction termination

We have a stored procedure that has been running without issue for months on PG 12.7 on RDS. The procedure is shown below. Last night around 6 PM MT this procedure stopped working. The error thrown is `ERROR: invalid transaction termination`. Nothing about our database configuration or schema has changed. Our RDS instance is no longer capable of running any commits inside a procedure, even from known-good code. We've even scraped the web for examples of simple commit procedures that work, and none do. Is it possible some patch was applied behind the scenes that broke our ability to run these procedures? Seems like a stretch, but it's all I've got.

```
CREATE OR REPLACE PROCEDURE public.me_cpro_session_insert(integer)
LANGUAGE plpgsql
AS $body$
begin
    insert into me_cpro_session(patient_guid, last_name, first_name, id_session, start_date, end_date, is_complete,
                                instrument, question, position, choice, range_choice, score, instrument_score, appointment_guid)
    select p.id_external as patient_guid,
           p.last_name,
           p.first_name,
           s.id as id_session,
           s.start_date,
           s.end_date - interval '5 hours' as end_date,
           s.is_complete,
           qs.name as instrument,
           t1.text as question,
           lq.position,
           t2.text as choice,
           r.range_choice,
           c.score,
           sc.score as instrument_score,
           s.id_appointment
    from response r
    join session s on r.id_session = s.id
    join patient p on p.id = s.id_patient
    join question_set qs on qs.id = r.id_question_set
    join link_question lq on lq.id_question = r.id_question and lq.id_question_set = r.id_question_set
    join question q on r.id_question = q.id
    left join choice c on r.id_choice = c.id
    join translation t1 on t1.id_expression = q.id_expression and t1.id_language = 2
    left join translation t2 on t2.id_expression = c.id_expression and t2.id_language = 2
    left join score sc on sc.id_session = s.id and sc.id_question_set = r.id_question_set
    where s.id = $1
    order by qs.id asc, lq.position asc;
    commit;
END;
$body$;
```
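One PostgreSQL behaviour worth ruling out: `COMMIT` inside a procedure raises exactly this `invalid transaction termination` error whenever the `CALL` itself runs inside an outer transaction block (for example, when a client or driver has autocommit turned off and wraps the call in `BEGIN ... COMMIT`). The sketch below reproduces and avoids the error; whether a client-side change is what happened here is an assumption to verify, not a diagnosis. The argument value is purely illustrative.

```
-- Fails with "invalid transaction termination": the CALL is inside an
-- explicit transaction block, so the procedure is not allowed to COMMIT.
BEGIN;
CALL public.me_cpro_session_insert(42);
COMMIT;

-- Works: the CALL is issued on its own (autocommit), so the procedure
-- may COMMIT internally.
CALL public.me_cpro_session_insert(42);
```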
1 answer · 0 votes · 5 views · asked 3 months ago

AWS DMS Postgres to OpenSearch LOB handling

Source: Postgres
Target: OpenSearch

I have a `text` column called `description` in one of my Postgres tables. Per the documentation, this data type is mapped to an `NCLOB`. Since OpenSearch does not offer LOB support, my `description` is missing from my OpenSearch documents. I tried using the mapping rule below, but it does not seem to be doing anything:

```
{
  "rule-type": "transformation",
  "rule-id": "3",
  "rule-name": "3",
  "rule-target": "column",
  "object-locator": {
    "schema-name": "public",
    "table-name": "jobs",
    "column-name": "description"
  },
  "rule-action": "change-data-type",
  "data-type": {
    "type": "string",
    "length": 500
  }
}
```

When I check the logs I see the following:

```
Column 'description' is unsupported in table def 'public.jobs' since the LOB support is disabled
```

However, I do have LOB support enabled under the task settings:

```
"TargetMetadata": {
  "ParallelApplyBufferSize": 0,
  "ParallelApplyQueuesPerThread": 0,
  "ParallelApplyThreads": 0,
  "TargetSchema": "",
  "InlineLobMaxSize": 0,
  "ParallelLoadQueuesPerThread": 0,
  "SupportLobs": true,
  "LobChunkSize": 10,
  "TaskRecoveryTableEnabled": false,
  "ParallelLoadThreads": 0,
  "BatchApplyEnabled": false,
  "FullLobMode": true,
  "LimitedSizeLobMode": false,
  "LoadMaxFileSize": 0,
  "ParallelLoadBufferSize": 0
},
```

Is that transformation rule supposed to work? Or will any LOB column be skipped because OpenSearch does not have LOB support? Any way to make this work? Thanks!
1 answer · 1 vote · 24 views · asked 4 months ago

Aurora Postgres 13 with IAM Auth - Unable to establish logical replication connection

Hello,

I'm following the tutorial for Postgres CDC using logical replication here: https://aws.amazon.com/blogs/database/stream-changes-from-amazon-rds-for-postgresql-using-amazon-kinesis-data-streams-and-aws-lambda/

The DB cluster parameter group has rds.logical_replication enabled, and I've verified that the user I intend to use is capable of IAM auth and that logical replication slots can be created and queried:

```
CREATE ROLE replicate LOGIN;
GRANT rds_replication, rds_iam TO replicate;
GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO replicate;
```

...snip... IAM authenticate as the replicate user...

```
Server: PostgreSQL 13.4
Version: 3.2.0
Home: http://pgcli.com
replicate@dev-db-host:mydb> SELECT pg_create_logical_replication_slot('mydb_replication_slot', 'wal2json');
replication slot "mydb_replication_slot" already exists
Time: 0.016s
replicate@dev-db-host:mydb> SELECT pg_create_logical_replication_slot('test_replication_slot', 'wal2json');
+--------------------------------------+
| pg_create_logical_replication_slot   |
|--------------------------------------|
| (test_replication_slot,0/5005318)    |
+--------------------------------------+
SELECT 1
Time: 0.044s
replicate@dev-db-host:mydb> SELECT * FROM pg_logical_slot_peek_changes('test_replication_slot', null, null);
+-------+-------+--------+
| lsn   | xid   | data   |
|-------+-------+--------|
+-------+-------+--------+
SELECT 0
Time: 0.033s
replicate@dev-db-host:mydb>
```

However, when I attempt to create a replication connection using the Python psycopg2 code in the blog post, Postgres tells me:

> FATAL: password authentication failed for user "replicate"

Someone else asked the psycopg devs, who've indicated it's an RDS issue: https://github.com/psycopg/psycopg2/issues/1391

Any ideas?

Cheers,
Jim

P.S. I've verified that the Postgres logical replication connection (either the `replication=database` option on the command line or psycopg2's `LogicalReplicationConnection` type) is possible when using a plain old password instead of RDS IAM.
0 answers · 0 votes · 9 views · asked 5 months ago

How to investigate Aurora Postgres IAM authentication errors from rdsauthproxy

I have been using IAM database authentication on an Aurora for PostgreSQL cluster for many months now, and everything worked well. A few days ago I started getting login errors, and now it is impossible to log in at all. I am not sure about the timeline, as we only use these accounts for individual user connections. Only accounts not using IAM can log in now. I am not aware of any change, but I cannot pinpoint the root cause of the error.

The error I am getting in Postgres clients is this:

```
Unable to connect to server:
FATAL: PAM authentication failed for user "<REDACTED_USERNAME>"
FATAL: pg_hba.conf rejects connection for host "<REDACTED_IP>", user "<REDACTED_USERNAME>", database "postgres", SSL off
```

If I look into the Postgres logs I get a little more detail:

```
* Trying <REDACTED_IP>:1108...
* Connected to rdsauthproxy (<REDACTED_IP>) port 1108 (#0)
> POST /authenticateRequest HTTP/1.1
Host: rdsauthproxy:1108
Accept: */*
Content-Length: 753
Content-Type: multipart/form-data; boundary=------------------------1f9a4da08078f511

* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 403 Forbidden
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
<
* Connection #0 to host rdsauthproxy left intact
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:LOG: pam_authenticate failed: Permission denied
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:FATAL: PAM authentication failed for user "<REDACTED_USERNAME>"
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:DETAIL: Connection matched pg_hba.conf line 13: "hostssl all +rds_iam all pam"
2021-12-05 14:42:43 UTC:10.4.2.137(13615):<REDACTED_USERNAME>@postgres:[7488]:FATAL: pg_hba.conf rejects connection for host "<REDACTED_IP>", user "<REDACTED_USERNAME>", database "postgres", SSL off
```

So it seems to be "rdsauthproxy" that rejects the authentication. My understanding is that this proxy is part of the Aurora instance, and I have not found a way to get its logs, where hopefully I could find information on why the authentication is rejected.

I checked the IAM configuration in case something changed, but it seems fine. The users have a policy like this:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": "arn:aws:rds-db:eu-west-3:<REDACTED_ACCOUNT_ID>:dbuser:*/<REDACTED_USERNAME>"
        }
    ]
}
```

The usernames match exactly between IAM and Postgres, and in Postgres they all have the "rds_iam" role. Is there anything I could be missing? At the very least, is there a way to retrieve the logs of the Aurora rdsauthproxy instance that might point me in the right direction?
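Since the pg_hba.conf line being matched is `hostssl all +rds_iam all pam`, a quick sanity check on the database side is to confirm which roles are actually members of `rds_iam` and that the client really connects over SSL (the second FATAL line suggests a retry without SSL, which that rule will always reject). A minimal sketch, assuming a psql session as the master user; the role name in the GRANT is a placeholder:

```
-- List the roles that are members of rds_iam; IAM auth only applies to these.
SELECT r.rolname
FROM pg_roles r
JOIN pg_auth_members m ON m.member = r.oid
JOIN pg_roles g ON g.oid = m.roleid
WHERE g.rolname = 'rds_iam';

-- Re-grant membership for a specific login role if it is missing.
GRANT rds_iam TO my_iam_user;   -- "my_iam_user" is a placeholder name

-- Confirm the current connection is using SSL.
SELECT ssl FROM pg_stat_ssl WHERE pid = pg_backend_pid();
```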
1 answer · 0 votes · 92 views · asked 6 months ago

Resolution for Fatal error when using DMS for on-going replication from RDS Postgres to S3

I attempted to configure my RDS Postgres instance for CDC using [Setting up an Amazon RDS PostgreSQL DB instance as a source][1]. I configured the DMS source endpoint to use a specific slotName, first using a slot that already existed in the DB and then using a slot I created with pg_create_logical_replication_slot. After successfully testing the endpoint to ensure connectivity, I started the replication task, which was to load existing data followed by ongoing changes. Both times, the replication failed with a fatal error such as the following, and no data (not even the existing data that should have loaded) gets replicated:

```
Last Error  Stream Component Fatal error. Task error notification received from subtask 0, thread 0 [reptask/replicationtask.c:2822] [1020101]
Error executing source loop; Stream component failed at subtask 0, component st_0_KVPUHIZBICJJRNGCC32HP5EFGGLIZLRF2YSRJ6Y ;
Stream component 'st_0_KVPUHIZBICJJRNGCC32HP5EFGGLIZLRF2YSRJ6Y' terminated [reptask/replicationtask.c:2829] [1020101]
Stop Reason FATAL_ERROR  Error Level FATAL
```

What configuration am I missing to enable CDC from RDS Postgres v11.8? I'm configuring DMS to use the RDS master user. A migrate-existing-data-only DMS task succeeds on this instance (if the slotName configuration is removed).

The relevant CloudWatch log entries appear to be these:

```
2021-02-03T22:03:05 [SOURCE_CAPTURE ]I: Slot has plugin 'test_decoding' (postgres_test_decoding.c:233)
2021-02-03T22:03:05 [SOURCE_CAPTURE ]I: Initial positioning requested is 'now' (postgres_endpoint_capture.c:511)
2021-02-03T22:03:05 [SOURCE_CAPTURE ]E: When working with Configured Slotname, user must specify LSN [1020101] (postgres_endpoint_capture.c:517)
2021-02-03T22:03:05 [TASK_MANAGER ]I: Task - W6AUC5OI3DNFMFUQ6ZDEYYNZ3NBSABQK3HFD2WQ is in ERROR state, updating starting status to AR_NOT_APPLICABLE (repository.c:5101)
2021-02-03T22:03:05 [TASK_MANAGER ]E: Task error notification received from subtask 0, thread 0 [1020101] (replicationtask.c:2822)
2021-02-03T22:03:05 [TASK_MANAGER ]E: Error executing source loop; Stream component failed at subtask 0, component st_0_KVPUHIZBICJJRNGCC32HP5EFGGLIZLRF2YSRJ6Y ; Stream component 'st_0_KVPUHIZBICJJRNGCC32HP5EFGGLIZLRF2YSRJ6Y' terminated [1020101] (replicationtask.c:2829)
2021-02-03T22:03:05 [TASK_MANAGER ]E: Task 'W6AUC5OI3DNFMFUQ6ZDEYYNZ3NBSABQK3HFD2WQ' encountered a fatal error (repository.c:5194)
2021-02-03T22:03:05 [SORTER ]I: Final saved task state. Stream position , Source id 0, next Target id 1, confirmed Target id 0, last source timestamp 0 (sorter.c:803)
```

Other items in the log appear to be informational, tracking progress.

[1]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.RDSPostgreSQL.CDC
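The line `When working with Configured Slotname, user must specify LSN` suggests the task needs an explicit CDC start position when a pre-created slot name is configured. Below is a minimal sketch of how candidate LSN values can be read from the source database (the instance is v11.8, so `pg_current_wal_lsn()` is available); which LSN to use, and where it is supplied in the DMS task's CDC start settings, is an assumption to confirm against the DMS documentation. The slot name is a placeholder.

```
-- Current write-ahead-log position of the source (PostgreSQL 10+).
SELECT pg_current_wal_lsn();

-- Positions tracked for the pre-created replication slot.
SELECT slot_name, plugin, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'my_dms_slot';   -- placeholder slot name
```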
1 answer · 0 votes · 221 views · asked a year ago