Questions tagged with Aurora PostgreSQL
Hello.
I tried to upgrade Aurora PostgreSQL from 10 to 11 via CloudFormation with the CDK.
At first, I simply changed the PostgreSQL version in CDK and ran `cdk deploy`.
I got this error:
```
UPDATE_ROLLBACK_COMPLETE: The specified DB Instance is a member of a cluster. Modify the DB engine version for the DB Cluster using the ModifyDbCluster API.
```
So I tried upgrading Aurora PostgreSQL manually in the AWS Console, and that succeeded.
But I want to sync this upgrade with the CloudFormation template.
I changed the PostgreSQL version in CDK again and ran `cdk deploy`.
I got this error:
```
UPDATE_ROLLBACK_COMPLETE: Resource handler returned message: "Cannot change VPC security group while doing a major version upgrade.(..."
```
I can't find anything about this latest error message via a Google search.
How do I upgrade the Aurora PostgreSQL major version with CloudFormation?
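For reference, the first error points at the cluster-level API rather than the instance. A minimal boto3 sketch of the call that error message refers to (the cluster identifier and target version are placeholders) — note this runs outside CloudFormation, so it would reintroduce the template drift the question is trying to avoid; it's only to illustrate what the error means:
```python
import boto3

rds = boto3.client("rds")

# Major version upgrades must be applied to the cluster, not the instance.
# "my-cluster" and the target version below are placeholders.
rds.modify_db_cluster(
    DBClusterIdentifier="my-cluster",
    EngineVersion="11.16",
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,
)
```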
Recently I have started working with AWS, and now I need to write a Python script that induces disk congestion on an Aurora RDS cluster using chaos engineering. I have gathered a fair amount of information, but when I use the Aurora volume status query, it gives me the number of disks and nodes present in my cluster. I have only 1 DB cluster with a reader and a writer node, yet the query reports 96 disks and 96 nodes. How is that possible? As per the documentation each cluster contains 6 nodes, 2 in each AZ, so why am I getting 96 nodes and 96 disks, and what exactly are these disks? How many disks can be present for a node?
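For context, a minimal sketch of running the volume status query from Python, assuming the query in question is the `aurora_show_volume_status()` function (connection settings are placeholders). The function reports on the distributed Aurora storage layer, which is separate from the reader/writer DB instances:
```python
import psycopg2  # assumes psycopg2-binary is installed

# Placeholder connection settings for the cluster endpoint.
conn = psycopg2.connect(
    host="my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="postgres",
    user="postgres",
    password="<password>",
)
with conn.cursor() as cur:
    # Reports disks/nodes of the distributed Aurora storage volume,
    # not the DB instances (readers/writers) in the cluster.
    cur.execute("SELECT * FROM aurora_show_volume_status();")
    print(cur.fetchall())
```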
I've created an empty Aurora DB inside AWS.
I backed up my SQL Server DB and uploaded that backup to an S3 bucket.
I thought I might be able to import that SQL Server backup either into my Aurora DB, or into a SQL Server instance in AWS and then migrate that to Aurora, but I can't see how to do this. I've tried creating an endpoint, thinking I could use that to link to the SQL Server backup file, but I can't see a way.
Is this possible or should I use a different approach?
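Not an authoritative answer, but a sketch of the second approach mentioned above: Aurora itself (MySQL/PostgreSQL compatible) cannot restore a SQL Server `.bak` file, whereas RDS for SQL Server can restore one natively from S3, provided the instance has the SQLSERVER_BACKUP_RESTORE option group. A hypothetical call via pyodbc (server, credentials, bucket, and database names are all placeholders); migrating the restored database onward to Aurora would then typically involve AWS SCT/DMS:
```python
import pyodbc  # assumes an ODBC driver for SQL Server is installed

# Placeholder connection string for an RDS for SQL Server instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-sqlserver.xxxx.us-east-1.rds.amazonaws.com,1433;"
    "UID=admin;PWD=<password>",
    autocommit=True,
)
cur = conn.cursor()
# Kick off a native restore of the .bak file from S3.
cur.execute(
    "exec msdb.dbo.rds_restore_database "
    "@restore_db_name='mydb', "
    "@s3_arn_to_restore_from='arn:aws:s3:::my-bucket/mydb.bak';"
)
# The restore runs asynchronously; msdb.dbo.rds_task_status can be
# polled to track progress.
```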
I'm trying to upgrade an RDS Aurora cluster via a CloudFormation template, but it fails with the error `You must explicitly specify a new DB instance parameter group, either default or custom, for the engine version upgrade.`. This error comes from the `DBParameterGroupName` definition on the `DBInstance` (AWS::RDS::DBInstance). The CloudFormation template below is a minimal test template to try out the Blue/Green deployment. It works quite well if I don't specify a `DBParameterGroupName` for the `AWS::RDS::DBInstance` resource. I do not modify the currently running parameter group, so I don't understand this error message. Is there any solution for this?
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  MajorVersionUpgrade:
    Type: String
    Description: Swap this between 'Blue' or 'Green' if we are doing a Major version upgrade
    AllowedValues:
      - Blue
      - Green
  EngineGreen:
    Description: 'Aurora engine and version'
    Type: String
    AllowedValues:
      - 'aurora-postgresql-10.14'
      - 'aurora-postgresql-11.16'
      - 'aurora-postgresql-12.11'
      - 'aurora-postgresql-13.4'
      - 'aurora-postgresql-13.7'
      - 'aurora-postgresql-14.3'
  EngineBlue:
    Description: 'Aurora engine and version'
    Type: String
    AllowedValues:
      - 'aurora-postgresql-10.14'
      - 'aurora-postgresql-11.16'
      - 'aurora-postgresql-12.11'
      - 'aurora-postgresql-13.4'
      - 'aurora-postgresql-13.7'
      - 'aurora-postgresql-14.3'
Mappings:
  EngineMap:
    'aurora-postgresql-10.14':
      Engine: 'aurora-postgresql'
      EngineVersion: '10.14'
      Port: 5432
      ClusterParameterGroupFamily: 'aurora-postgresql10'
      ParameterGroupFamily: 'aurora-postgresql10'
    'aurora-postgresql-11.16':
      Engine: 'aurora-postgresql'
      EngineVersion: '11.16'
      Port: 5432
      ClusterParameterGroupFamily: 'aurora-postgresql11'
      ParameterGroupFamily: 'aurora-postgresql11'
    'aurora-postgresql-12.11':
      Engine: 'aurora-postgresql'
      EngineVersion: '12.11'
      Port: 5432
      ClusterParameterGroupFamily: 'aurora-postgresql12'
      ParameterGroupFamily: 'aurora-postgresql12'
    'aurora-postgresql-13.4':
      Engine: 'aurora-postgresql'
      EngineVersion: '13.4'
      Port: 5432
      ClusterParameterGroupFamily: 'aurora-postgresql13'
      ParameterGroupFamily: 'aurora-postgresql13'
    'aurora-postgresql-13.7':
      Engine: 'aurora-postgresql'
      EngineVersion: '13.7'
      Port: 5432
      ClusterParameterGroupFamily: 'aurora-postgresql13'
      ParameterGroupFamily: 'aurora-postgresql13'
    'aurora-postgresql-14.3':
      Engine: 'aurora-postgresql'
      EngineVersion: '14.3'
      Port: 5432
      ClusterParameterGroupFamily: 'aurora-postgresql14'
      ParameterGroupFamily: 'aurora-postgresql14'
Conditions:
  BlueDeployment: !Equals [!Ref MajorVersionUpgrade, "Blue"]
  GreenDeployment: !Equals [!Ref MajorVersionUpgrade, "Green"]
Resources:
  DBClusterParameterGroupGreen:
    Type: "AWS::RDS::DBClusterParameterGroup"
    Properties:
      Description: !Ref 'AWS::StackName'
      Family: !FindInMap [EngineMap, !Ref EngineGreen, ClusterParameterGroupFamily]
      Parameters:
        client_encoding: 'UTF8'
  DBClusterParameterGroupBlue:
    Type: "AWS::RDS::DBClusterParameterGroup"
    Properties:
      Description: !Ref 'AWS::StackName'
      Family: !FindInMap [EngineMap, !Ref EngineBlue, ClusterParameterGroupFamily]
      Parameters:
        client_encoding: 'UTF8'
  DBParameterGroupBlue:
    Type: 'AWS::RDS::DBParameterGroup'
    Properties:
      Description: !Ref 'AWS::StackName'
      Family: !FindInMap [EngineMap, !Ref EngineBlue, ParameterGroupFamily]
  DBParameterGroupGreen:
    Type: 'AWS::RDS::DBParameterGroup'
    Properties:
      Description: !Ref 'AWS::StackName'
      Family: !FindInMap [EngineMap, !Ref EngineGreen, ParameterGroupFamily]
  DBCluster:
    DeletionPolicy: Snapshot
    UpdateReplacePolicy: Snapshot
    Type: 'AWS::RDS::DBCluster'
    Properties:
      DatabaseName: 'dbupgradetest'
      DBClusterParameterGroupName: !If [GreenDeployment, !Ref DBClusterParameterGroupGreen, !Ref DBClusterParameterGroupBlue]
      Engine: !If [GreenDeployment, !FindInMap [EngineMap, !Ref EngineGreen, Engine], !FindInMap [EngineMap, !Ref EngineBlue, Engine]]
      EngineMode: provisioned
      EngineVersion: !If [GreenDeployment, !FindInMap [EngineMap, !Ref EngineGreen, EngineVersion], !FindInMap [EngineMap, !Ref EngineBlue, EngineVersion]]
      MasterUsername: 'user'
      MasterUserPassword: 'password123'
      Port: !If [GreenDeployment, !FindInMap [EngineMap, !Ref EngineGreen, Port], !FindInMap [EngineMap, !Ref EngineBlue, Port]]
  DBInstance:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      AllowMajorVersionUpgrade: true
      AutoMinorVersionUpgrade: true
      DBClusterIdentifier: !Ref DBCluster
      DBInstanceClass: 'db.t3.medium'
      # DBParameterGroupName: !If [GreenDeployment, !Ref DBParameterGroupGreen, !Ref DBParameterGroupBlue] # <- this line / definition causes the error
      Engine: !If [GreenDeployment, !FindInMap [EngineMap, !Ref EngineGreen, Engine], !FindInMap [EngineMap, !Ref EngineBlue, Engine]]
```
Here is an example of the execution order. It only works if `DBParameterGroupName` is not set.
```
aws cloudformation create-stack --parameters ParameterKey=MajorVersionUpgrade,ParameterValue=Blue ParameterKey=EngineBlue,ParameterValue=aurora-postgresql-10.14 ParameterKey=EngineGreen,ParameterValue=aurora-postgresql-11.16 --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM --stack-name db-upgrade-test --template-url [path to template]
```
Now switch to version `11.16` by changing the `MajorVersionUpgrade` value from `Blue` to `Green`. Other parameters are not modified.
```
aws cloudformation update-stack --stack-name db-upgrade-test --use-previous-template --parameters ParameterKey=MajorVersionUpgrade,ParameterValue=Green ParameterKey=EngineBlue,ParameterValue=aurora-postgresql-10.14 ParameterKey=EngineGreen,ParameterValue=aurora-postgresql-11.16 --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM
```
Now switch to version `12.11` by changing the `MajorVersionUpgrade` value from `Green` to `Blue` and updating the value for `EngineBlue` to `aurora-postgresql-12.11`.
```
aws cloudformation update-stack --stack-name db-upgrade-test --use-previous-template --parameters ParameterKey=MajorVersionUpgrade,ParameterValue=Blue ParameterKey=EngineBlue,ParameterValue=aurora-postgresql-12.11 ParameterKey=EngineGreen,ParameterValue=aurora-postgresql-11.16 --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM
```
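In case it helps with diagnosing, here is a small boto3 sketch (the instance identifier is a placeholder) for checking which DB parameter group the instance is actually associated with before and after the update, and whether it is `in-sync` or `pending-reboot`:
```python
import boto3

rds = boto3.client("rds")

# "db-upgrade-test-instance" is a placeholder identifier.
inst = rds.describe_db_instances(
    DBInstanceIdentifier="db-upgrade-test-instance"
)["DBInstances"][0]

# Shows the associated parameter group(s) and their apply status
# after the engine upgrade.
for pg in inst["DBParameterGroups"]:
    print(pg["DBParameterGroupName"], pg["ParameterApplyStatus"])
```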
Hi,
I would like to create a ServerlessCluster with public access. I have successfully created a database and can access it privately, but I would really like public access for my users.
```
const cluster = new rds.ServerlessCluster(this, 'AnotherCluster', {
  engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
  parameterGroup: rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql10'),
  vpc: env.getVpc(),
  vpcSubnets: {
    subnetType: ec2.SubnetType.PUBLIC, // doc indicates that this should result in public access
  },
  // publiclyAccessible: true, // this option not available for ServerlessCluster
  credentials: rds.Credentials.fromGeneratedSecret('postgres'),
  enableDataApi: true,
  defaultDatabaseName: 'defaultDatabase'
});
```
The `ServerlessCluster` construct does not have a `publiclyAccessible` property, so that can't be configured. The docs indicate that specifying `subnetType: ec2.SubnetType.PUBLIC` should provide public access by default, but despite placing the database in the public subnets, it does not. The domain name resolves to a private address.
I can create a similar database from the console, specify public access, and that works, but I really need to do this from the CDK.
How can I get this to work?
thanks.
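Not a fix for the public endpoint itself, but since `enableDataApi: true` is already set in the snippet above, one interim way to reach the cluster without network access is the Data API. A boto3 sketch (the cluster ARN, secret ARN, and region are placeholders):
```python
import boto3

client = boto3.client("rds-data")

# Cluster and secret ARNs are placeholders; enableDataApi must be on.
response = client.execute_statement(
    resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:another-cluster",
    secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret",
    database="defaultDatabase",
    sql="SELECT current_user;",
)
print(response["records"])
```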
I'm wondering if Amazon has solved the connection pooling problem that is mentioned in StackOverflow posts dating back to 2017 when performing CRUD from a Lambda function. My application is Lambda and API Gateway driven. I don't like the idea of creating and destroying connections in Lambda functions that perform one operation/transaction. What are my options with Serverless V2?
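A common pattern here (a sketch, not the only option) is to create the connection once at module scope so warm invocations reuse it, and to point it at an RDS Proxy endpoint so that idle Lambda containers don't exhaust database connections. Endpoint and credentials below are placeholder environment variables:
```python
import os
import psycopg2  # assumes a psycopg2 Lambda layer or bundled dependency

# Created once per container and reused across warm invocations.
# PROXY_ENDPOINT would be an RDS Proxy endpoint rather than the
# cluster endpoint, so pooling happens outside the function.
conn = psycopg2.connect(
    host=os.environ["PROXY_ENDPOINT"],
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    connect_timeout=5,
)

def handler(event, context):
    with conn.cursor() as cur:
        cur.execute("SELECT 1;")
        return cur.fetchone()[0]
```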
Has anyone else experienced an issue with the RDS console whereby it doesn't show the instance type for existing instances? Our instance type is "db.r5.xlarge", but the console just shows a blank dropdown list with no values, and you are unable to save any changes to the instance settings because the instance type is a required field and can't be empty.
While trying to convert an Aurora PostgreSQL instance from On-Demand to Serverless and enabling IAM Authentication, I got stuck in **configuring-iam-database-auth**. I only have one instance (Single-AZ) and it just says "Rebooting". It has been stuck like that for almost 5 hours. I'm unable to modify or delete it because it is in an invalid state.
Any ideas?
As I understand it, the Data API is not yet supported for Aurora PostgreSQL Serverless v2. I couldn't find any detailed instructions on how to connect from a Lambda to Aurora PostgreSQL Serverless v2. Can I assume that the instructions here, [Connecting to RDS from Lambda](https://aws.amazon.com/premiumsupport/knowledge-center/connect-lambda-to-an-rds-instance/), are applicable? If not, can I be pointed to something similar?
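I can't confirm it's the official guidance, but the pattern in that knowledge-center article (attach the Lambda to the cluster's VPC, then use an ordinary PostgreSQL driver) does apply to Serverless v2, since it exposes a normal cluster endpoint. A sketch using pg8000, a pure-Python driver that avoids compiled dependencies (endpoint and credentials are placeholder environment variables):
```python
import os
import pg8000.native  # pure-Python driver, easy to package for Lambda

def handler(event, context):
    # The Lambda must be configured with VPC subnets/security groups
    # that can reach the cluster endpoint.
    con = pg8000.native.Connection(
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        host=os.environ["DB_HOST"],  # cluster endpoint, placeholder
        database=os.environ["DB_NAME"],
        port=5432,
    )
    rows = con.run("SELECT version();")
    con.close()
    return rows[0][0]
```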
Aurora Serverless V2 doesn't yet support the built-in SQL editor, and I cannot connect to it from pgAdmin or other IDEs. How do I run my DDL against it when I first need to create schemas and tables?
I'm using the Postgres flavour of this.
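One workaround sketch, assuming a host inside the same VPC (EC2, Cloud9, a bastion, etc.), since Serverless v2 needs ordinary network connectivity: run the DDL with a regular PostgreSQL client. The endpoint, credentials, and schema/table names below are placeholders:
```python
import psycopg2

# Run from a host inside the VPC (EC2, Cloud9, bastion, etc.).
conn = psycopg2.connect(
    host="my-sv2.cluster-xxxx.eu-west-1.rds.amazonaws.com",  # placeholder
    dbname="postgres",
    user="postgres",
    password="<password>",
)
conn.autocommit = True  # DDL takes effect immediately
with conn.cursor() as cur:
    cur.execute("CREATE SCHEMA IF NOT EXISTS app;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS app.users (
            id   bigserial PRIMARY KEY,
            name text NOT NULL
        );
    """)
conn.close()
```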
When I perform the failover_db_cluster operation using the Python boto3 API, passing the writer DB instance identifier, it gives me the error below. But I am able to successfully use other APIs like describe_db_clusters.
NOTE: I am running the code from an EC2 instance.
Sample code below -
```
import boto3
session = boto3.Session()
client = session.client('rds')
response = client.failover_db_cluster(DBClusterIdentifier="test-instance-2")
print(response)
```
`botocore.errorfactory.DBClusterNotFoundFault: An error occurred (DBClusterNotFoundFault) when calling the FailoverDBCluster operation: The source cluster could not be found or cannot be accessed: test-instance-2 `
Can anybody let me know why I am getting this issue and what the resolution should be?
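For context on the error above: `failover_db_cluster` expects a *cluster* identifier, while `test-instance-2` looks like an instance name, which would explain the DBClusterNotFoundFault. A sketch of looking up the cluster that owns the instance and failing that over (identifiers are taken from the question; the session must also be in the cluster's region):
```python
import boto3

session = boto3.Session()  # must target the same region as the cluster
client = session.client("rds")

# Find the cluster that the writer instance belongs to.
instance = client.describe_db_instances(
    DBInstanceIdentifier="test-instance-2"
)["DBInstances"][0]
cluster_id = instance["DBClusterIdentifier"]

# Fail over the cluster (not the instance).
response = client.failover_db_cluster(DBClusterIdentifier=cluster_id)
print(response["DBCluster"]["Status"])
```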
We have a PostgreSQL Aurora Serverless V2 cluster with 2 instances (1 reader and 1 writer). They are configured for 2-16 ACUs.
When we populate data into the DB, one or both instances get stuck. PostgreSQL connections time out without any errors. A stuck instance looks healthy in the AWS console, but AWS doesn't receive any metrics from it. Only an instance **reboot** fixes it temporarily, and that takes a long time (30-60 min).
Error logs contain the following line, repeated every 5-10 seconds:
```
2022-06-20 06:01:06 UTC::@:[31477]:WARNING: worker took too long to start; canceled
2022-06-20 06:01:11 UTC::@:[31477]:WARNING: worker took too long to start; canceled
```
We've checked all instance metrics before the freeze, but we've seen nothing interesting.
The load was quite heavy when we caused the database freezes. In the first run we used 1,000 Lambdas concurrently to insert data; it was working fine until the writer instance got stuck. Later we used only 100 Lambdas in parallel; the writer instance didn't have any issues, but the reader instance got stuck.
We chose the new Aurora Serverless V2 to be our production database. Currently it feels too unstable for us, and we are considering migrating to a more mature service.