
Questions tagged with Amazon Relational Database Service


[Urgent Action Required] - Upgrade your RDS for PostgreSQL minor versions

This announcement is for customers running one or more Amazon RDS DB instances on a version of PostgreSQL that has been deprecated by Amazon RDS and requires attention. The RDS PostgreSQL minor versions listed in the table below are supported; any DB instances running earlier versions will be automatically upgraded to the version marked as "preferred" by RDS, no earlier than July 15, 2022 starting 12 AM PDT:

| Major Versions Supported | Minor Versions Supported |
| --- | --- |
| 14 | 14.1 and later |
| 13 | 13.3 and later |
| 12 | 12.7 and later |
| 11 | 11.12 and later |
| 10 | 10.17 and later |
| 9 | none |

Amazon RDS supports DB instances running the PostgreSQL minor versions listed above. Minor versions not included above do not meet our high quality, performance, and security bar. In the PostgreSQL versioning policy [1], the PostgreSQL community recommends that you always run the latest available minor release for whatever major version is in use. Additionally, we recommend that you monitor the PostgreSQL security page for documented vulnerabilities [2].

If you have automatic minor version upgrade enabled as part of your configuration settings, you will be upgraded automatically. Alternatively, you can perform the upgrade earlier yourself. You can initiate an upgrade from the Modify DB Instance page in the AWS Management Console by changing the database version setting to a newer minor/major version of PostgreSQL, or you can use the AWS CLI. To learn more about upgrading PostgreSQL minor versions in RDS, review the 'Upgrading Database Versions' page [3].

The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance. The DB instance may restart multiple times during the process. If you choose the "Apply Immediately" option, the upgrade will be initiated immediately after clicking the "Modify DB Instance" button.
If you choose not to apply the change immediately, the upgrade will be performed during your next maintenance window. Starting no earlier than July 15, 2022 12 AM PDT, we will automatically upgrade DB instances running a deprecated minor version to the preferred minor version of the same major version of your RDS PostgreSQL database. (For example, instances running RDS PostgreSQL 10.1 will be automatically upgraded to 10.17 starting no earlier than July 15, 2022 12 AM PDT.)

Should you need to create new instances using the deprecated version(s) of the database, we recommend that you restore from a recent DB snapshot [4]. You can continue to run and modify existing instances/clusters using these versions until July 14, 2022 11:59 PM PDT, after which your DB instance will automatically be upgraded to the preferred minor version of the same major version. Starting no earlier than July 15, 2022 12 AM PDT, restoring a snapshot of a deprecated RDS PostgreSQL database instance will result in an automatic version upgrade of the restored instance, using the same upgrade process described above.

Should you have any questions or concerns, please see the RDS FAQs [5], or contact the AWS Support Team on the community forums and via AWS Support [6].

Sincerely,
Amazon RDS

[1] https://www.postgresql.org/support/versioning/
[2] https://www.postgresql.org/support/security/
[3] http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[4] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
[5] https://aws.amazon.com/rds/faqs/ [search for "guidelines for deprecating database engine versions"]
[6] https://aws.amazon.com/support
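The minimum-version table above translates directly into a small check. A minimal sketch (the helper name and version map are ours, mirroring the table in this announcement):

```python
# Minimum supported RDS PostgreSQL minor version per major version,
# mirroring the table in the announcement above.
MIN_SUPPORTED = {14: (14, 1), 13: (13, 3), 12: (12, 7), 11: (11, 12), 10: (10, 17)}

def needs_upgrade(version: str) -> bool:
    """Return True if `version` (e.g. "10.1") is below the supported minimum."""
    parts = tuple(int(p) for p in version.split("."))
    minimum = MIN_SUPPORTED.get(parts[0])
    if minimum is None:
        return True  # major version 9 and earlier: no supported minors
    return parts < minimum

# Instances on 10.1 fall under the auto-upgrade to 10.17; 13.3 is already supported.
print(needs_upgrade("10.1"), needs_upgrade("13.3"))
```

The tuple comparison handles the minor-version ordering without string tricks.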
0 answers · 1 vote · 4 views · AWS-User-8019255 · asked 9 days ago
1 answer · 0 votes · 5 views · gs-scooter · asked 14 days ago

AWS Glue trigger EventBatchingCondition/BatchWindow is not optional

Hi team, I have a Glue workflow: a trigger (type = "EVENT") that starts a Glue job (to take data from S3 and push it to MySQL RDS). I configured the trigger's triggering criteria to kick off the Glue job after 5 events are received. In the console it says:

> Specify the number of events received or maximum elapsed time before firing this trigger.
> Time delay in seconds (optional)

The AWS documentation also says it's not required:

```
BatchWindow
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 900.
Required: No
```

I want my trigger to fire only after 5 events are received, independent of "Time delay in seconds (optional)". Currently that delay is set to 900 by default, and my job starts after 900 s even if 5 events have not been received. That's not the behaviour we want: the job should start only when x events are received, and nothing else.

I tried to edit the trigger in the console and remove the 900 s value from the "Time delay in seconds (optional)" input, but I can't save until I put a value in it — it says it's optional, but it doesn't seem to be. Is there a workaround to make the trigger ignore the time delay and fire only when it has received x events? How can I make this input truly optional, given that the console forces me to put a value in it? Thank you.
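One avenue worth trying, since `BatchWindow` is optional at the API level per the quoted docs: update the trigger through the Glue `UpdateTrigger` API with an `EventBatchingCondition` that sets only `BatchSize`. This is a sketch, not verified against the console limitation, and the trigger name is hypothetical:

```python
# Shape of a Glue UpdateTrigger request that sets only BatchSize.
# Per the Glue API docs quoted above, BatchSize is required and
# BatchWindow is optional (server-side defaults apply if omitted).
trigger_update = {
    "EventBatchingCondition": {"BatchSize": 5},  # fire after 5 events
}

# With boto3 (hypothetical trigger name; uncomment to apply):
# import boto3
# boto3.client("glue").update_trigger(
#     Name="my-event-trigger", TriggerUpdate=trigger_update)
```

Whether the service then honors a pure event-count condition, or silently applies a default window, is worth testing with the CLI/SDK rather than the console.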
1 answer · 0 votes · 5 views · Jess · asked 18 days ago

Upgrade Amazon Aurora PostgreSQL 10.13, 10.14, 10.16, 11.8, 11.11, 12.4, and 12.6 minor versions by July 15, 2022

Newer versions of Amazon Aurora PostgreSQL-Compatible Edition are now available, and database cluster(s) running Aurora PostgreSQL minor versions 10.13, 10.14, 10.16, 11.8, 11.11, 12.4, and 12.6 need to be upgraded by July 15, 2022. These newer minor versions include important updates that will improve the operation of your Aurora PostgreSQL instances and workloads. We strongly encourage you to upgrade to at least a recommended minimum minor version at your earliest convenience:

* For PostgreSQL minor versions 10.13, 10.14, and 10.16, the recommended minimum minor version is 10.17.
* For PostgreSQL minor versions 11.8 and 11.11, the recommended minimum minor version is 11.12.
* For PostgreSQL minor versions 12.4 and 12.6, the recommended minimum minor version is 12.7.

Starting on or after 12:00 PM PDT on July 15, 2022, if your Amazon Aurora PostgreSQL cluster has not been upgraded to a newer minor version, we will schedule the relevant recommended minimum minor version to be applied automatically during your next maintenance window. Changes will apply to your cluster during your next maintenance window even if auto minor version upgrade is disabled. Restoring Amazon Aurora PostgreSQL 10.13, 10.14, 10.16, 11.8, 11.11, 12.4, or 12.6 database snapshots after July 15, 2022 will result in an automatic upgrade of the restored database to a supported version at that time.

*How to determine which instances are running these minor versions*

* In the Amazon RDS console, you can see details about a database cluster, including the Aurora PostgreSQL version of instances in the cluster, by choosing Databases from the console's navigation pane.
* To view DB cluster information using the AWS CLI, use the describe-db-clusters command.
* To view DB cluster information using the Amazon RDS API, use the DescribeDBClusters operation.
You can also query a database directly for the version number via the aurora_version() system function, i.e., `SELECT * FROM aurora_version();`.

*How to apply a new minor version*

You can apply a new minor version in the AWS Management Console, via the AWS CLI, or via the RDS API. Customers using CloudFormation are advised to apply updates in CloudFormation. We advise you to take a manual snapshot before upgrading. For detailed upgrade procedures, please see the User Guide [1]. Please be aware that your cluster will experience a short period of downtime when the update is applied.

Visit the Aurora version policy [2] and the documentation [3] for more information and detailed release notes about minor versions, including existing supported versions. If you have any questions or concerns, the AWS Support Team is available on AWS re:Post and via Premium Support [4].

[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.VersionPolicy.html
[3] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Updates.20180305.html
[4] https://aws.amazon.com/support
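The affected-version bullets above map each deprecated minor to its recommended minimum; that mapping can drive a scripted audit of your clusters. A sketch (the map mirrors the announcement; the helper name and the commented boto3 loop are ours):

```python
# Recommended minimum minor version per affected Aurora PostgreSQL minor,
# mirroring the bullets in the announcement above.
RECOMMENDED_MIN = {
    "10.13": "10.17", "10.14": "10.17", "10.16": "10.17",
    "11.8": "11.12", "11.11": "11.12",
    "12.4": "12.7", "12.6": "12.7",
}

def upgrade_target(engine_version):
    """Return the recommended minimum if this version is affected, else None."""
    return RECOMMENDED_MIN.get(engine_version)

# Against live clusters with boto3 (sketch, not run here):
# import boto3
# for c in boto3.client("rds").describe_db_clusters()["DBClusters"]:
#     if c["Engine"] == "aurora-postgresql" and upgrade_target(c["EngineVersion"]):
#         print(c["DBClusterIdentifier"], "->", upgrade_target(c["EngineVersion"]))
```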
0 answers · 0 votes · 3 views · AWS-AdamLevin · asked 19 days ago

AWS Glue job fails on escape character

Hi team, I tried to load a big CSV file from S3 to RDS MySQL using AWS Glue. I have an escape character in the file (a special character), and this escape char is also defined on the crawled CSV table. Each time, the job fails with an error:

`An error occurred while calling o122.pyWriteDynamicFrame. Duplicate entry '123456' for key 'MySQL table.PRIMARY'`

I don't have any duplicate keys in my file, and the table is truncated each time before running the job. I tried to narrow down the issue by dividing the file into chunks: **every chunk runs successfully and I get the full data**, but the whole file in a single job always fails with the above error. I can't figure out why. Is this a Glue issue or a data issue?

I know the issue is related to my escape character, because when I remove it the whole file passes, **or** when I replace my special escape character with "\" the whole file also passes. Is that because Glue doesn't support certain escape characters? (I have this issue with big files.) Not sure why the whole file with the escape character fails while every sub-chunk passes. Any ideas?
Glue script:

```
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "db_csv",
    table_name = "tbl_csvxx",
    transformation_ctx = "datasource0")
applymapping1 = ApplyMapping.apply(
    frame = datasource0,
    mappings = [("id", "string", "id", "string"),
                ("col1", "string", "col1", "string"),
                ("date1", "string", "date2", "timestamp"),
                ("col2", "string", "col2", "string"),
                ("col3", "string", "col3", "string"),
                ("col4", "string", "col24", "string"),
                ("col5", "string", "col5", "string"), ...],
    transformation_ctx = "applymapping1")
selectfields2 = SelectFields.apply(
    frame = applymapping1,
    paths = ["col1", "col2", "col3", "id", "col4", "col5", ...],
    transformation_ctx = "selectfields2")
datasink3 = glueContext.write_dynamic_frame.from_jdbc_conf(
    frame = selectfields2,
    catalog_connection = conn_name,
    connection_options = {"dbtable": "mysqltable", "database": db_name},
    transformation_ctx = "datasink3")
```

Sample data:

```
"123","2018-02-09 12:16:38.000","2018-02-09 12:16:38.000","addr1 ®" addr2®" addr3®"",,,"22","city1","121",,,,,"CC"
"456","2018-02-09 12:16:38.000","2018-02-09 12:16:38.000","sds, dssdds®"F®", sds sds, dwr, re2",,,"ree364","ABD","288",,,,,"N"
"789","2018-02-09 12:16:38.000","2018-02-09 12:16:38.000","Alle# 02, Sept# 06, sdsx,",,"SAP# ®"C®"","DPPK# 05","dssd","313","Alkl",,,"1547","P"
```

Thank you.
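For what it's worth, the escape convention in the sample rows (® escaping an embedded quote inside a quoted field) can be checked locally with Python's csv module before handing the file to Glue. This is only a sketch of how to validate the data outside Glue; it says nothing about how Glue's own reader behaves:

```python
import csv

# A reduced line in the same shape as the sample data:
# ® escapes the embedded double quotes inside the quoted field.
line = '"123","addr1 ®" addr2®" addr3®"","CC"'

rows = list(csv.reader([line], quotechar='"', escapechar='®'))
# The ®-escaped quotes come through as literal " characters in field 2.
print(rows[0])
```

Parsing the whole file this way (and counting rows and key values) can tell you whether the file itself is self-consistent, narrowing the problem to the Glue side.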
1 answer · 0 votes · 5 views · Jess · asked a month ago

Announcement: Amazon RDS for SQL Server ending support for Microsoft SQL Server 2012

Microsoft announced that it will end support for SQL Server 2012 on July 12, 2022. On that date, Microsoft will stop critical patch updates for SQL Server 2012. We strongly recommend that you upgrade your RDS for SQL Server 2012 database instances to a different major version at your earliest convenience [1].

Starting September 1, 2021, we will begin disabling the creation of new Amazon RDS for SQL Server database instances using Microsoft SQL Server 2012. Starting June 1, 2022, we plan to end support of Microsoft SQL Server 2012 on Amazon RDS for SQL Server. At that time, any remaining instances will be scheduled to migrate to SQL Server 2014 (latest minor version available) as described below.

We recommend that you upgrade your Microsoft SQL Server 2012 instances to Microsoft SQL Server 2014 or later at a time convenient to you. You can schedule an upgrade to a different major version by going to the instance modify page in the AWS Management Console and changing the database version to the desired value. If you choose the "Apply Immediately" option, the upgrade will be initiated immediately after exiting the modify page. If you choose not to apply the change immediately, the upgrade will be scheduled during your maintenance window.

Upgrade options: We support five (or, in some regions, four) different major/minor version combinations of SQL Server 2012. These database instances can be upgraded directly to the latest minor version of SQL Server 2014, 2016, 2017, and 2019. To find out more about upgrading, please reference this document [2]. You will still be able to restore a SQL Server 2012 database to any supported major-version instance on Amazon RDS for SQL Server, even after the deprecation. For more information on restoring a database in RDS, see here [3]. Should you have any questions or concerns, the AWS Support Team is available via AWS Premium Support [4].

[1] https://docs.microsoft.com/en-us/lifecycle/products/microsoft-sql-server-2012
[2] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.SQLServer.html
[3] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html
[4] https://aws.amazon.com/support
0 answers · 0 votes · 5 views · tobyxu · asked a month ago

CloudFormation RDS CreateInstance fails incompatible-parameters

We have been creating RDS MariaDB instances on an almost daily basis using CloudFormation and our scripts for a long time (years). However, for the last week or two, RDS CreateInstance has failed intermittently with the error below. Deleting the stack and retrying usually works, but of course that's not a suitable long-term solution for regular environment creation:

```
2022-04-14 15:26:20 UTC+0100 RdsInstance CREATE_FAILED DB Instance is in state: incompatible-parameters
```

If I view the RDS database in question, under the "Events" listing there are about 5 pages of the same event:

```
April 14, 2022, 3:10:13 PM UTC Your MySql memory setting is inappropriate for the usages
```

On that failure, it attempts to roll back, but that also fails because the delete-DB-instance call fails for (more or less) the same reason: the DB is not in an available state. However, the DB does eventually end up being available: after something like 20 minutes, the DB (having failed to be deleted) shows as "Available". We have not changed anything in the parameter group or DB engine. It is running MariaDB 10.3.31. Does anyone have any idea what might be causing this or what might have changed recently?

---

*EDIT*: Following on from the answers provided so far, the thing I'm most interested in is the intermittent nature of the issue, and the fact that it's just started happening, having run successfully for a long time previously. If there were an incorrect parameter for the DB type, I'd expect it to fail every time; the intermittent nature makes me think it's more likely a race condition or timing issue. I have reviewed the parameter group, and only one value has been changed from the default params for MariaDB 10.3: `max_connections` is now set to a fixed 1000 (rather than the default, which is calculated based on the size of the instance). This hasn't changed for a long time, and I can't see that it's causing an issue.
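As a side note on that last point: the stock RDS MariaDB parameter group derives `max_connections` from instance memory with the formula `{DBInstanceClassMemory/12582880}`, so it is easy to compare a fixed 1000 against what the engine would compute for a given instance class. A back-of-envelope sketch (instance sizes are illustrative, and DBInstanceClassMemory is approximated as the full instance memory, though RDS actually subtracts some reserved overhead):

```python
# Default RDS MariaDB max_connections formula: {DBInstanceClassMemory/12582880}
DIVISOR = 12582880

def default_max_connections(memory_gib):
    # Approximation: treats DBInstanceClassMemory as the full instance memory
    # in bytes; the real value is somewhat lower due to reserved overhead.
    return int(memory_gib * 1024**3) // DIVISOR

# Illustrative instance memory sizes in GiB:
print(default_max_connections(4), default_max_connections(16))
```

A fixed 1000 exceeds the computed default on smaller classes, which is one reason a memory-related event could surface only on certain instance sizes.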
2 answers · 0 votes · 5 views · robh · asked a month ago

Announcement: Amazon Relational Database Service (Amazon RDS) for MariaDB 10.2 End-of-Life date is October 15, 2022

Amazon RDS is starting the end-of-life (EOL) process for MariaDB major engine version 10.2, because the MariaDB community plans to discontinue support for MariaDB 10.2 on May 23, 2022 [1]. Amazon RDS for MariaDB 10.2 will reach end of life on October 15, 2022 00:00:01 AM UTC. While you will be able to run your Amazon RDS for MariaDB 10.2 databases between community MariaDB 10.2 EOL (May 23, 2022) and Amazon RDS for MariaDB 10.2 EOL (October 15, 2022), these databases will not receive any security patches during this extended availability period. We strongly recommend that you proactively upgrade your databases to major version 10.3 or greater before community EOL on May 23, 2022.

MariaDB 10.3 offers improved Oracle compatibility, support for querying historical states of the database, features that increase flexibility for developers and DBAs, and improved manageability [2]. Our most recent release, Amazon RDS for MariaDB 10.6, introduces multiple MariaDB features that enhance the performance, scalability, reliability, and manageability of your workloads, including the MyRocks storage engine, IAM integration, one-step multi-major upgrade, delayed replication, improved Oracle PL/SQL compatibility, and atomic DDL [3]. If you choose to upgrade to MariaDB 10.6, you can upgrade your MariaDB 10.2 instances seamlessly to Amazon RDS for MariaDB 10.6 in a single step, substantially reducing downtime. Both MariaDB 10.3 and 10.6 contain numerous fixes for software bugs in earlier versions of the database.

If you do not upgrade your databases before October 15, 2022, Amazon RDS will upgrade your MariaDB 10.2 databases to 10.3 during a scheduled maintenance window between October 15, 2022 00:00:01 UTC and November 15, 2022 00:00:01 UTC.
On January 15, 2023 00:00:01 AM UTC, any Amazon RDS for MariaDB 10.2 databases that remain will be upgraded to version 10.3 regardless of whether the instances are in a maintenance window.

You can initiate an upgrade of your database instance to a newer major version of MariaDB, either immediately or during your next maintenance window, using the AWS Management Console or the AWS Command Line Interface (CLI). The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance. The database instance may be restarted multiple times during the upgrade process. While major version upgrades typically complete within the standard maintenance window, the duration of the upgrade depends on the number of objects within the database. To avoid unplanned unavailability outside your maintenance window, we recommend that you first take a snapshot of your database and test the upgrade to get an estimate of its duration. If you are operating an Amazon RDS for MariaDB 10.2 database on one of the retired instance types (t1, m1, m2), you will need to migrate to a newer instance type before upgrading the engine major version. To learn more about upgrading MariaDB major versions in Amazon RDS, review the Upgrading Database Versions page [4].

We want to make you aware of the following additional milestones associated with upgrading databases that are reaching EOL.

**Now through October 15, 2022 00:00:01 AM UTC** - You can initiate upgrades of Amazon RDS for MariaDB 10.2 instances to MariaDB 10.3 or 10.6 at any time.

**July 15, 2022 00:00:01 AM UTC** - After this date and time, you cannot create new Amazon RDS instances with MariaDB 10.2 from either the AWS Console or the CLI. You can continue to restore your MariaDB 10.2 snapshots as well as create read replicas with version 10.2 until the October 15, 2022 end-of-support date.
**October 15, 2022 00:00:01 AM UTC** - Amazon RDS will automatically upgrade MariaDB 10.2 instances to version 10.3 within the earliest scheduled maintenance window that follows. After this date and time, any restoration of an Amazon RDS for MariaDB 10.2 database snapshot will result in an automatic upgrade of the restored database to a still-supported version at the time.

**January 15, 2023 00:00:01 AM UTC** - Amazon RDS will automatically upgrade any remaining MariaDB 10.2 instances to version 10.3, whether or not they are in a maintenance window.

If you have any questions or concerns, the AWS Support Team is available on AWS re:Post and via Premium Support [5].

[1] https://mariadb.org/about/#maintenance-policy
[2] https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-rds-now-supports-mariadb-10_3/
[3] https://aws.amazon.com/about-aws/whats-new/2022/02/amazon-rds-mariadb-supports-mariadb-10-6/
[4] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html
[5] http://aws.amazon.com/support
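The CLI-initiated major-version upgrade described in the announcement above boils down to a `ModifyDBInstance` call with the major-version flag set. A sketch of the request (the instance identifier and target version are illustrative; take a manual snapshot first, as the announcement advises):

```python
# Parameters for the major-version upgrade described above.
# Identifier and target version are illustrative, not real resources.
upgrade_params = {
    "DBInstanceIdentifier": "my-mariadb-10-2-instance",
    "EngineVersion": "10.6",               # or "10.3"
    "AllowMajorVersionUpgrade": True,      # required for a major-version jump
    "ApplyImmediately": True,              # otherwise: next maintenance window
}

# With boto3 (sketch; snapshot first, then modify):
# import boto3
# rds = boto3.client("rds")
# rds.create_db_snapshot(
#     DBInstanceIdentifier=upgrade_params["DBInstanceIdentifier"],
#     DBSnapshotIdentifier="pre-major-upgrade")
# rds.modify_db_instance(**upgrade_params)
```

Without `AllowMajorVersionUpgrade`, a major-version change is rejected; without `ApplyImmediately`, the change waits for the maintenance window, matching the two behaviors the announcement describes.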
0 answers · 0 votes · 9 views · palK · asked a month ago

Announcement: Amazon RDS for Oracle - End of Support Timeline for 12c Oracle Release 2 (12.2.0.1) and Oracle Release 1 (12.1.0.2) Major Version

Oracle Corporation has announced the end of support for Oracle Database version 12.1.0.2 on July 31, 2022, and version 12.2.0.1 on March 31, 2022 [1]. After these dates, Oracle Support will no longer release Critical Patch Updates for these database versions, and these versions will no longer be available in Amazon RDS for Oracle.

For Oracle Database version 12.2.0.1: Amazon RDS for Oracle will end support for version 12.2.0.1 (all editions) for both License Included (LI) and Bring Your Own License (BYOL) models on March 31, 2022.

For Oracle Database version 12.1.0.2: Amazon RDS for Oracle will end support for version 12.1.0.2 (all editions) for both License Included (LI) and Bring Your Own License (BYOL) models on July 31, 2022.

If you have already upgraded your DB instance(s) to 19c, no further action is required. We highly recommend you upgrade your existing 12.1.0.2 and 12.2.0.1 DB instances to version 19c and validate your applications before the automatic upgrades begin. Please keep in mind the following timeline for the 19c version:

• 19c: Premier Support for Oracle Database 19c ends on April 30, 2024, while Extended Support ends on April 30, 2027. Amazon RDS for Oracle plans to support Oracle Database 19c until April 30, 2027.

12.2.0.1 deprecation - timeline summary:

• Now - March 31, 2022 - You can upgrade 12.2.0.1 DB instances manually to the version of your choice.
• From February 14, 2022 - You can upgrade 12.2.0.1 snapshots manually to the version of your choice.
• From February 14, 2022 - Amazon RDS for Oracle disables new instance creation on 12.2.0.1, but you will be able to continue to restore 12.2.0.1 DB snapshots without being auto-upgraded until March 31, 2022.
• From April 1, 2022 - Amazon RDS for Oracle starts automatic upgrades of 12.2.0.1 DB instances to 19c.
• From April 1, 2022 - Amazon RDS for Oracle starts automatic upgrades to 19c of DB instances restored from 12.2.0.1 snapshots.
12.2.0.1 deprecation timeline by Amazon RDS for Oracle (for both BYOL and LI):

Starting on February 14, 2022, Amazon RDS for Oracle will disable the ability to create new DB instances using 12.2.0.1. If you are using 12.2.0.1, please upgrade your instance(s) to a later major version before March 31, 2022. Starting on April 1, 2022, Amazon RDS for Oracle will automatically upgrade DB instances from 12.2.0.1 to the latest Release Update (RU) on RDS for Oracle Database 19c. We highly recommend that you test your application on 19c, or whichever major version you plan to upgrade to. Starting on April 1, 2022, any 12.2.0.1 DB instance created from a snapshot restore or point-in-time restore will be automatically upgraded to the latest RU on RDS for Oracle 19c. Starting on February 14, 2022, you may upgrade your snapshots manually from 12.2.0.1 to a newer major engine version. For more information, see Upgrading an Oracle DB Snapshot [3]. If you have encrypted snapshots, plan to perform a manual snapshot upgrade.

12.1.0.2 deprecation - timeline summary:

• Now - July 31, 2022 - You can upgrade 12.1.0.2 DB instances manually to the version of your choice.
• From June 1, 2022 - You can upgrade 12.1.0.2 snapshots manually to the version of your choice.
• From June 1, 2022 - Amazon RDS for Oracle disables new instance creation on 12.1.0.2, but you will be able to continue to restore 12.1.0.2 DB snapshots without being auto-upgraded until July 31, 2022.
• From August 1, 2022 - Amazon RDS for Oracle starts automatic upgrades of 12.1.0.2 DB instances to 19c.
• From August 1, 2022 - Amazon RDS for Oracle starts automatic upgrades to 19c of DB instances restored from 12.1.0.2 snapshots.

12.1.0.2 deprecation timeline by Amazon RDS for Oracle (for both BYOL and LI):

Starting on June 1, 2022, Amazon RDS for Oracle will disable the ability to create new DB instances using 12.1.0.2. If you are using 12.1.0.2, please upgrade your instance(s) to a later major version before July 31, 2022.
Starting on August 1, 2022, Amazon RDS for Oracle will automatically upgrade DB instances from 12.1.0.2 to the latest Release Update (RU) on RDS for Oracle Database 19c. We highly recommend that you test your application on 19c, or whichever major version you plan to upgrade to. Starting on August 1, 2022, any 12.1.0.2 DB instance created from a snapshot restore or point-in-time restore will be automatically upgraded to the latest RU on RDS for Oracle 19c. Starting on June 1, 2022, you may upgrade your snapshots manually from 12.1.0.2 to a newer major engine version. For more information, see Upgrading an Oracle DB Snapshot [3]. If you have encrypted snapshots, you need to plan to perform a manual snapshot upgrade.

Additional notes: After the upgrade, if SQL statements perform in an unexpected manner due to plan changes by the 19c optimizer, you can use the OPTIMIZER_FEATURES_ENABLE parameter to retain the behavior of the 12c optimizer. There is no impact to your current Reserved Instances (RI) due to the engine version deprecation of 12.1.0.2 and 12.2.0.1. For versions in Extended Support, BYOL model customers must have purchased Extended Support agreements from Oracle Support, or upgrade to a version for which they have support. For details on the licensing and support requirements for BYOL model customers, refer to the Amazon RDS for Oracle FAQs [4]. To learn more about upgrading Oracle major versions in RDS, review Upgrading the Oracle DB engine [5]. Please contact us through AWS Support [6] should you have any questions or concerns.

Oracle Database version 19c: Extended Support for Oracle Database 19c ends on April 30, 2027. Amazon RDS for Oracle will support Oracle Database 19c until April 30, 2027. We recommend that you upgrade your existing 12.1.0.2 or 12.2.0.1 DB instances to version 19c [7], because it is the long-term support release.
Note: for versions in Extended Support, BYOL model customers must have purchased Extended Support agreements from Oracle Support, or upgrade to a version for which they have support. For details on the licensing and support requirements for BYOL model customers, refer to the Amazon RDS for Oracle FAQs [4]. Review the version-specific deprecation details on the AWS forum [2]. To learn more about the Amazon RDS policy for supporting database versions, please see the RDS FAQs and search for "guidelines for deprecating database engine versions" [8]. Please contact us through AWS Support [6] or the AWS Developer Forums [9] should you have any questions or concerns.

[1] https://www.oracle.com/us/assets/lifetime-support-technology-069183.pdf
[2] https://forums.aws.amazon.com/ann.jspa?annID=8593
[3] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBSnapshot.Oracle.html
[4] https://aws.amazon.com/rds/oracle/faqs/
[5] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Oracle.html
[6] https://aws.amazon.com/support
[7] https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-rds-for-oracle-now-supports-oracle-database-19c/
[8] https://aws.amazon.com/rds/faqs
[9] https://forums.aws.amazon.com/forum.jspa?forumID=60
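The OPTIMIZER_FEATURES_ENABLE fallback mentioned in the notes above is applied in RDS through a custom DB parameter group. A sketch of the parameter change (the group name is illustrative, and whether to pin the optimizer at all should be validated against your workload first):

```python
# Pinning 12c optimizer behavior after a 19c upgrade, via a DB parameter
# group change. The parameter group name is illustrative.
param_change = {
    "DBParameterGroupName": "my-oracle19-params",
    "Parameters": [{
        "ParameterName": "optimizer_features_enable",
        "ParameterValue": "12.1.0.2",
        "ApplyMethod": "immediate",
    }],
}

# With boto3 (sketch, not run here):
# import boto3
# boto3.client("rds").modify_db_parameter_group(**param_change)
```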
0 answers · 0 votes · 5 views · Vejey_AWS · asked a month ago

Understanding RDS throughput limits

I have trouble understanding what throughput limit(s) my RDS instance is supposed to have. Based on this [blog post](https://aws.amazon.com/blogs/database/making-better-decisions-about-amazon-rds-with-amazon-cloudwatch-metrics/):

> An Amazon RDS instance has two types of throughput limits: instance-level and EBS volume-level limits. You can monitor instance-level throughput with the metrics WriteThroughput and ReadThroughput. WriteThroughput is the average number of bytes written to disk per second. ReadThroughput is the average number of bytes read from disk per second. For example, a db.m4.16xlarge instance class supports 1,250-MB/s maximum throughput. The EBS volume throughput limit is 250 MiB/s for GP2 storage based on 16 KiB I/O size, and 1,000 MiB/s for the Provisioned IOPS storage type. If you experience degraded performance due to a throughput bottleneck, you should validate both of these limits and modify the instance as needed.

My RDS instance is of the db.r6g.8xlarge type, which according to https://aws.amazon.com/rds/instance-types/ has 9,000 Mbps (= 1,125 MB/s) of dedicated EBS bandwidth. On the other hand, according to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html, the underlying gp2 volume (5 TB size) has a 250 MiB/s throughput limit.

So how are these two limits applied? Should I be able to reach close to 1,125 MB/s, or am I restricted to 250 MiB/s because of the gp2 volume limit? In CloudWatch, during bulk write operations I have observed total (read + write) throughput momentarily reach ~1,000 MB/s, but mostly it was steady around 420 MB/s, i.e. somewhere in between the two limits.
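On the narrow question of how the two ceilings combine: sustained throughput is bounded by the lower of the instance's dedicated EBS bandwidth and the volume's own limit. A toy check with the figures quoted in the question (the min() framing is a simplification of ours; it ignores gp2 bursting and any volume striping RDS may do under the hood, either of which could explain observations above the single-volume cap):

```python
# Figures from the question, treating MiB/s ~ MB/s for a rough comparison.
instance_ebs_bandwidth = 9000 / 8  # db.r6g.8xlarge: 9,000 Mbps dedicated EBS bandwidth
gp2_volume_limit = 250             # single gp2 volume throughput cap

# The tighter of the two bounds is the one that applies to sustained I/O.
effective_limit = min(instance_ebs_bandwidth, gp2_volume_limit)
print(effective_limit)
```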
2 answers · 0 votes · 7 views · nikos64 · asked 2 months ago

Load CSV from S3 to Aurora MySQL using the MySQL JDBC driver

Hi team, I'm creating an AWS Glue job to load data from a CSV file on S3 into Aurora MySQL 8. I'm using a custom JDBC driver because, as I understand it, Glue connections don't support MySQL 8. Is there an example script showing how to load data from S3 to RDS (NOT RDS to RDS)? I found this helpful link: https://aws.amazon.com/blogs/big-data/building-aws-glue-spark-etl-jobs-by-bringing-your-own-jdbc-drivers-for-amazon-rds/ but it loads from RDS to RDS, and I'm not sure how to apply the same logic to load from S3 to RDS.

I'm currently using the code below, but the Glue job stops with this error:

```
An error occurred while calling o96.pyWriteDynamicFrame. The specified bucket does not exist (Service: Amazon S3; Status Code: 404; Error Code: NoSuchBucket; Request ID: HJsdDCNsdP96DsdT; S3 Extended Request ID: Fvv72asdLoYsdKCUT9UndlsdRosdfgddup+niZem3RP3sXo4Gp0Fsd5H6sd8TrKMysdanEk=; Proxy: null)
```

Code used:

```
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

spark.read.option("escapeChar", "®")

connection_mysql8_options = {
    "url": "jdbc:mysql://databhddfd8bb-180ewlrdhdfhi3ew.cluster-cqdhdfhddvbc.region.rds.amazonaws.com:3306/mydb",
    "dbtable": "mydbTable",
    "user": "root",
    "password": "WsdtbasdLjZasdVrsadtgGHDNJasd",
    "customJdbcDriverS3Path": "s3://myBucket/mysql-connector-java-8.0.28.jar",
    "customJdbcDriverClassName": "com.mysql.cj.jdbc.Driver"}

datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "csv_db", table_name = "mytable_csv", transformation_ctx = "datasource0")
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("id", "string", "id", "string"), ("created", "string", "created", "timestamp"), .....], transformation_ctx = "applymapping1")
selectfields2 = SelectFields.apply(frame = applymapping1, paths = ["created", "id", .....], transformation_ctx = "selectfields2")
##datasink5 = glueContext.write_dynamic_frame.from_options(frame = selectfields2, connection_type="mysql", connection_options=connection_mysql8_options, transformation_ctx = "datasink5")
datasink5 = glueContext.write_dynamic_frame.from_jdbc_conf(frame = selectfields2, catalog_connection = "myaccount-rds-edwextract-connection", connection_options = {
    "customJdbcDriverS3Path": "s3://myBucket/mysql-connector-java-8.0.21.jar",
    "customJdbcDriverClassName": "com.mysql.cj.jdbc.Driver",
    "user": "root",
    "password": "GJDMTRasdasdassdasd1AtsdasdLasdadasd",
    "url": "jdbc:mysql://daafb269d8bb-1asd0ewasfasfew.cluster-cqtsafasf.region.rds.amazonaws.com:3306/mydb",
    "connectionType": "mysql",
    "dbtable": "mydbTable",
    "database": "mydb"}, transformation_ctx = "datasink5")
job.commit()
```

I'd like to know the correct syntax to load a CSV from S3 into Aurora MySQL 8 via the JDBC driver. Thank you!!!
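A side note on the error itself: `NoSuchBucket` means S3 rejected the bucket name before any JDBC work happened, so it may be worth double-checking the bucket referenced in `customJdbcDriverS3Path` (and the job's temporary directory). A minimal sanity-check sketch, using only the standard library (the bucket and key are the placeholders from the question, not real resources):

```python
from urllib.parse import urlparse

def s3_bucket_and_key(s3_uri: str):
    """Split an s3:// URI into (bucket, key) so each part can be verified."""
    parsed = urlparse(s3_uri)
    if parsed.scheme != "s3":
        raise ValueError(f"not an s3 URI: {s3_uri}")
    return parsed.netloc, parsed.path.lstrip("/")

# The driver path from the question; confirm this bucket actually exists
# (e.g. `aws s3 ls s3://myBucket/`) before running the Glue job.
bucket, key = s3_bucket_and_key("s3://myBucket/mysql-connector-java-8.0.28.jar")
print(bucket, key)
```

Note that bucket names are case-sensitive and globally unique, so a typo or a wrong-account bucket in either the driver path or the temp directory produces exactly this 404.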
1
answers
0
votes
6
views
Jess
asked 2 months ago

Aurora upgrade 2 to 3 / MySql 5.7 to 8.0: potential bug in pre-check validation

We believe we have noticed some odd behavior in the AWS upgrade pre-checks for Aurora 2 to 3 / MySQL 5.7 to 8.0. We believe it is related to the AWS-specific rule ["There must be no queries and stored program definitions from MySQL 8.0.12 or lower that use ASC or DESC qualifiers for GROUP BY clauses"](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html#USER_UpgradeDBInstance.MySQL.57to80Prechecks), though we are not breaking this rule.

Our findings:

(SP1)

```
DELIMITER $$
CREATE PROCEDURE sp_1 ()
BEGIN
SELECT st.name as hdervascferef, max(st.status)
FROM SetupTableName st
GROUP BY st.name;
END$$
DELIMITER ;
```

produces a pre-check error:

```
{
    "level": "Error",
    "dbObject": "trax.sp_1",
    "description": "Obsolete procedure - trax.sp_1. Contains depreciated keywords."
},
```

But (SP2):

```
DELIMITER $$
CREATE PROCEDURE sp_2 ()
BEGIN
SELECT st.name as hdervasferef, max(st.status)
FROM SetupTableName st
GROUP BY st.name;
END$$
DELIMITER ;
```

does not produce an error. The only difference is the alias: `hdervascferef` in SP1 is an arbitrary string containing the substring `asc`, while `hdervasferef` in SP2 has the 'c' removed, does not contain the substring `asc`, and triggers no error.

We are running into this on many stored procedures because we have many tables with a column named `hasChilds`, which contains the `asc` substring and thus prevents these SPs from passing the pre-check. We have found that removing every occurrence of the letters `asc` from an SP makes the pre-check pass, but this is not a viable option for us, as the use of the `hasChilds` column in our stored procedures is vital to their function.

##### Replication steps

1. Add these two SPs to an AWS Aurora instance with engine = 5.7.mysql_aurora.2.07.2
2. Follow the instructions at [AWS RDS MySQL Testing an Upgrade](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html#USER_UpgradeDBInstance.MySQL.UpgradeTesting)
3. Verify that the pre-check fails for SP1 but not for SP2

We would appreciate any help / guidance that can be provided!

##### tl;dr

We think we are being flagged by the rule "There must be no queries and stored program definitions from MySQL 8.0.12 or lower that use ASC or DESC qualifiers for GROUP BY clauses" because our query has a `GROUP BY` and the substring 'asc' in it, despite not actually breaking the rule, and this prevents us from upgrading our Aurora instances because we fail the pre-check.
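The symptom above looks like a plain substring match on `asc`/`desc` rather than a token-level check. Purely as an illustration (we have no visibility into the actual pre-check implementation), the difference between the two matching strategies can be reproduced with two regexes over the SELECT statements from SP1 and SP2:

```python
import re

# The two SELECTs from the question; sp1's alias contains the substring "asc".
sp1 = "SELECT st.name as hdervascferef, max(st.status) FROM SetupTableName st GROUP BY st.name;"
sp2 = "SELECT st.name as hdervasferef, max(st.status) FROM SetupTableName st GROUP BY st.name;"

# Naive substring match: fires on "asc" anywhere, even inside an identifier.
naive = re.compile(r"asc|desc", re.IGNORECASE)
# Token-aware match: \b word boundaries only match ASC/DESC as whole keywords.
keyword = re.compile(r"\b(asc|desc)\b", re.IGNORECASE)

print(bool(naive.search(sp1)), bool(naive.search(sp2)))      # sp1 flagged, sp2 not
print(bool(keyword.search(sp1)), bool(keyword.search(sp2)))  # neither flagged
```

Under the naive pattern, SP1 is flagged and SP2 is not, matching the observed pre-check behavior; under the word-boundary pattern, neither is flagged, which is what the rule as documented would imply.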
1
answers
0
votes
8
views
joelaflop
asked 2 months ago