
Questions tagged with Amazon Aurora



Aurora MySQL crashing randomly

I am on my third Aurora cluster that is randomly failing, leading my application to die. AWS support team didn't answer my support case. Engine version: 8.0.mysql_aurora.3.01.0

```
/etc/rds/oscar-start-cmd: line 39: 2687 Killed /rdsdbbin/oscar/bin/mysqld --core-file --innodb_shared_buffer_pool_uses_huge_pages='1' "$@"
grover/runtime/overlay.cpp:2270: Assertion failed: err == 0
Stack trace:
/rdsdbbin/oscar/bin/mysqld() [0x2be2f08]
/rdsdbbin/oscar/bin/mysqld(_Z27log_grover_pid_from_page_nomm+0x1d) [0x2850bdd]
<inline> (in buf_page_t::set_grover_vol_pid(unsigned long, unsigned long) at /local/p4clients/pkgbuild-FRTaI/workspace/src/OscarMysql80/storage/innobase/include/ut0lock_free_hash.h:638)
/rdsdbbin/oscar/bin/mysqld() [0x2597395] (in buf_page_init at /local/p4clients/pkgbuild-FRTaI/workspace/src/OscarMysql80/storage/innobase/buf/buf0buf.cc:6645)
/rdsdbbin/oscar/bin/mysqld(_Z22buf_page_init_for_readP7dberr_tmRK9page_id_tRK11page_size_tm+0x2e0) [0x25a3cf0]
/rdsdbbin/oscar/bin/mysqld(_Z17buf_read_page_lowP7dberr_tbmmRK9page_id_tRK11page_size_tbbb+0x91) [0x25c6c91]
/rdsdbbin/oscar/bin/mysqld(_Z13buf_read_pageRK9page_id_tRK11page_size_tb+0x3c) [0x25c76bc]
/rdsdbbin/oscar/bin/mysqld(_ZN9Buf_fetchI16Buf_fetch_normalE9read_pageEv+0x27) [0x2597ce7]
/rdsdbbin/oscar/bin/mysqld(_ZN16Buf_fetch_normal3getERP11buf_block_t+0xb2) [0x259ed82]
/rdsdbbin/oscar/bin/mysqld(_ZN9Buf_fetchI16Buf_fetch_normalE11single_pageEv+0x4e) [0x25a654e]
/rdsdbbin/oscar/bin/mysqld(_Z16buf_page_get_genRK9page_id_tRK11page_size_tmP11buf_block_t10Page_fetchPKcmP5mtr_tb+0x1d9) [0x25a75a9]
/rdsdbbin/oscar/bin/mysqld() [0x2637bc1]
/rdsdbbin/oscar/bin/mysqld(_Z28fseg_alloc_free_page_generalPhjhmP5mtr_tS1_+0x1d0) [0x2639160]
/rdsdbbin/oscar/bin/mysqld(_Z14btr_page_allocP12dict_index_tjhmP5mtr_tS2_+0xd5) [0x256ecc5]
/rdsdbbin/oscar/bin/mysqld(_ZN3lob14alloc_lob_pageEP12dict_index_tP5mtr_tjb+0x216) [0x28bb676]
/rdsdbbin/oscar/bin/mysqld(_ZN3lob12first_page_t5allocEP5mtr_tb+0x24) [0x28ab0c4]
/rdsdbbin/oscar/bin/mysqld(_ZN3lob6insertEPNS_13InsertContextEP5trx_tRNS_5ref_tEP15big_rec_field_tm+0x14f) [0x28b78df]
/rdsdbbin/oscar/bin/mysqld(_ZN3lob31btr_store_big_rec_extern_fieldsEP5trx_tP10btr_pcur_tPK5upd_tPmPK9big_rec_tP5mtr_tNS_6opcodeE+0xb16) [0x26edbb6]
/rdsdbbin/oscar/bin/mysqld() [0x277331d]
/rdsdbbin/oscar/bin/mysqld(_Z29row_ins_clust_index_entry_lowjmP12dict_index_tmP8dtuple_tP10btr_pcur_tP9que_thr_tb+0x646) [0x2774906]
/rdsdbbin/oscar/bin/mysqld(_Z25row_ins_clust_index_entryP12dict_index_tP8dtuple_tP10btr_pcur_tP9que_thr_tb+0xe8) [0x277b158]
/rdsdbbin/oscar/bin/mysqld(_Z12row_ins_stepP9que_thr_t+0x274) [0x277b7d4]
/rdsdbbin/oscar/bin/mysqld() [0x278ca73]
/rdsdbbin/oscar/bin/mysqld(_ZN11ha_innobase9write_rowEPh+0x226) [0x268fac6]
/rdsdbbin/oscar/bin/mysqld(_ZN7handler12ha_write_rowEPh+0x177) [0x14a4867]
/rdsdbbin/oscar/bin/mysqld(_Z12write_recordP3THDP5TABLEP9COPY_INFOS4_+0x5d4) [0x172e3d4]
/rdsdbbin/oscar/bin/mysqld(_ZN21Sql_cmd_insert_values13execute_innerEP3THD+0xbaf) [0x173018f]
/rdsdbbin/oscar/bin/mysqld(_ZN11Sql_cmd_dml7executeEP3THD+0x6cc) [0x119905c]
/rdsdbbin/oscar/bin/mysqld(_Z30mysql_execute_command_internalP3THDb+0x1143) [0x1139f33]
/rdsdbbin/oscar/bin/mysqld(_Z21mysql_execute_commandP3THDb+0x17b) [0x113d31b]
/rdsdbbin/oscar/bin/mysqld(_Z20dispatch_sql_commandP3THDP12Parser_state+0x351) [0x113df91]
21:03:27 UTC - mysqld got signal 6 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
Thread pointer: 0x14652cf4e000
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 1465869fba9f thread_stack 0x40000
/rdsdbbin/oscar/bin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x2d) [0x246ac4d]
/rdsdbbin/oscar/bin/mysqld(handle_fatal_signal+0x532) [0x1310292]
/lib64/libpthread.so.0(+0x117df) [0x147cc9b707df]
/lib64/libc.so.6(gsignal+0x110) [0x147cc8ef3c20]
/lib64/libc.so.6(abort+0x147) [0x147cc8ef50c7]
/rdsdbbin/oscar/bin/mysqld() [0xf963d7]
/rdsdbbin/oscar/bin/mysqld() [0x2dba17a]
/rdsdbbin/oscar/bin/mysqld() [0x2dba333]
/rdsdbbin/oscar/bin/mysqld() [0x2be2f08]
/rdsdbbin/oscar/bin/mysqld(log_grover_pid_from_page_no(unsigned long, unsigned long)+0x1d) [0x2850bdd]
/rdsdbbin/oscar/bin/mysqld() [0x2597395]
/rdsdbbin/oscar/bin/mysqld(buf_page_init_for_read(dberr_t*, unsigned long, page_id_t const&, page_size_t const&, unsigned long)+0x2e0) [0x25a3cf0]
/rdsdbbin/oscar/bin/mysqld(buf_read_page_low(dberr_t*, bool, unsigned long, unsigned long, page_id_t const&, page_size_t const&, bool, bool, bool)+0x91) [0x25c6c91]
/rdsdbbin/oscar/bin/mysqld(buf_read_page(page_id_t const&, page_size_t const&, bool)+0x3c) [0x25c76bc]
/rdsdbbin/oscar/bin/mysqld(Buf_fetch<Buf_fetch_normal>::read_page()+0x27) [0x2597ce7]
/rdsdbbin/oscar/bin/mysqld(Buf_fetch_normal::get(buf_block_t*&)+0xb2) [0x259ed82]
/rdsdbbin/oscar/bin/mysqld(Buf_fetch<Buf_fetch_normal>::single_page()+0x4e) [0x25a654e]
/rdsdbbin/oscar/bin/mysqld(buf_page_get_gen(page_id_t const&, page_size_t const&, unsigned long, buf_block_t*, Page_fetch, char const*, unsigned long, mtr_t*, bool)+0x1d9) [0x25a75a9]
/rdsdbbin/oscar/bin/mysqld() [0x2637bc1]
/rdsdbbin/oscar/bin/mysqld(fseg_alloc_free_page_general(unsigned char*, unsigned int, unsigned char, unsigned long, mtr_t*, mtr_t*)+0x1d0) [0x2639160]
/rdsdbbin/oscar/bin/mysqld(btr_page_alloc(dict_index_t*, unsigned int, unsigned char, unsigned long, mtr_t*, mtr_t*)+0xd5) [0x256ecc5]
/rdsdbbin/oscar/bin/mysqld(lob::alloc_lob_page(dict_index_t*, mtr_t*, unsigned int, bool)+0x216) [0x28bb676]
/rdsdbbin/oscar/bin/mysqld(lob::first_page_t::alloc(mtr_t*, bool)+0x24) [0x28ab0c4]
/rdsdbbin/oscar/bin/mysqld(lob::insert(lob::InsertContext*, trx_t*, lob::ref_t&, big_rec_field_t*, unsigned long)+0x14f) [0x28b78df]
/rdsdbbin/oscar/bin/mysqld(lob::btr_store_big_rec_extern_fields(trx_t*, btr_pcur_t*, upd_t const*, unsigned long*, big_rec_t const*, mtr_t*, lob::opcode)+0xb16) [0x26edbb6]
/rdsdbbin/oscar/bin/mysqld() [0x277331d]
/rdsdbbin/oscar/bin/mysqld(row_ins_clust_index_entry_low(unsigned int, unsigned long, dict_index_t*, unsigned long, dtuple_t*, btr_pcur_t*, que_thr_t*, bool)+0x646) [0x2774906]
/rdsdbbin/oscar/bin/mysqld(row_ins_clust_index_entry(dict_index_t*, dtuple_t*, btr_pcur_t*, que_thr_t*, bool)+0xe8) [0x277b158]
/rdsdbbin/oscar/bin/mysqld(row_ins_step(que_thr_t*)+0x274) [0x277b7d4]
/rdsdbbin/oscar/bin/mysqld() [0x278ca73]
/rdsdbbin/oscar/bin/mysqld(ha_innobase::write_row(unsigned char*)+0x226) [0x268fac6]
/rdsdbbin/oscar/bin/mysqld(handler::ha_write_row(unsigned char*)+0x177) [0x14a4867]
/rdsdbbin/oscar/bin/mysqld(write_record(THD*, TABLE*, COPY_INFO*, COPY_INFO*)+0x5d4) [0x172e3d4]
/rdsdbbin/oscar/bin/mysqld(Sql_cmd_insert_values::execute_inner(THD*)+0xbaf) [0x173018f]
/rdsdbbin/oscar/bin/mysqld(Sql_cmd_dml::execute(THD*)+0x6cc) [0x119905c]
/rdsdbbin/oscar/bin/mysqld(mysql_execute_command_internal(THD*, bool)+0x1143) [0x1139f33]
/rdsdbbin/oscar/bin/mysqld(mysql_execute_command(THD*, bool)+0x17b) [0x113d31b]
/rdsdbbin/oscar/bin/mysqld(dispatch_sql_command(THD*, Parser_state*)+0x351) [0x113df91]
/rdsdbbin/oscar/bin/mysqld(dispatch_command(THD*, COM_DATA const*, enum_server_command)+0x1b39) [0x113ff99]
/rdsdbbin/oscar/bin/mysqld(do_command(THD*)+0x1c6) [0x1140f46]
/rdsdbbin/oscar/bin/mysqld(THD_task::process_connection()+0x134) [0x12fcfc4]
/rdsdbbin/oscar/bin/mysqld(Thread_pool::worker_loop()+0x180) [0x12fbc80]
/rdsdbbin/oscar/bin/mysqld(Thread_pool::worker_launch(void*)+0x20) [0x12fbea0]
/rdsdbbin/oscar/bin/mysqld() [0x296c531]
/lib64/libpthread.so.0(+0x740a) [0x147cc9b6640a]
/lib64/libc.so.6(clone+0x3e) [0x147cc8fad09e]
Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
Query (1479bc268028): [omitted]
Connection ID (thread ID): 45980
Status: NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
aurora backtrace compare flag : 1
Writing a core file
[...]
```
0 answers · 0 votes · 5 views · tobias · asked 2 days ago

Aurora MySQL 8 Provisioned - Low Freeable Memory Causing Failover Despite Restricted Parameters

We recently moved from provisioned Aurora 2.10.1 (MySQL 5.7) to Aurora 3.01.0 (MySQL 8.0). We have Enhanced Monitoring and Performance Insights enabled. We've been monitoring the database metrics and noticed that Freeable Memory consistently drops so low that Aurora executes a failover. There are no deadlocks, sleeping processes, or zombie processes running when this occurs. In Performance Insights, os.memory.free.avg decreases over time while os.memory.active.avg increases at the same rate (of course), and os.memory.pageTables.avg increases at the same rate as os.memory.active.avg.

The database is smaller than the total amount of RAM the instance supports, so the instance size should be big enough to hold all of it. We've even reduced innodb_buffer_pool_size to a value sufficiently smaller than the instance memory but still large enough to cache the data, and it doesn't help. The Buffer Cache Hit Ratio hovers around 100% and swap usage remains at zero. The reader's Freeable Memory stays consistent; it is the writer's Freeable Memory that drops, and when Aurora promotes a reader to writer, the same thing happens on the new writer. When I launch a new instance from a snapshot, I don't see the issue.

If MySQL is limited by the parameters in the parameter group, how can we determine whether the OS is using more memory than it's supposed to, and if so, how do we stop it from doing so?
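A minimal sketch (assumed region and a placeholder writer instance identifier) of pulling the same CloudWatch data programmatically, so the writer's FreeableMemory trend can be lined up against SwapUsage and the failover timestamps:

```
# Sketch only: "aurora-writer-instance" and the region are assumptions, not values from the question.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region


def rds_metric(instance_id: str, metric: str, hours: int = 24):
    """Return hourly averages of an RDS/Aurora instance-level metric."""
    now = datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance_id}],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
        Period=3600,
        Statistics=["Average"],
    )
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])


for point in rds_metric("aurora-writer-instance", "FreeableMemory"):
    print(point["Timestamp"], round(point["Average"] / 1024**3, 2), "GiB free")
```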
1 answer · 0 votes · 2 views · AWS-User-7986928 · asked 4 days ago

Does using stored procedures with Amazon Aurora reduce costs and/or speed up response time?

I'm making a web application that serves pages requiring data from multiple tables. The queries needed to fetch this data have been bundled into one stored procedure, so it now takes only one database call to receive the result sets from all of the tables; if I had executed all the queries separately, it would have meant making multiple database calls. I've read that queries executed from a stored procedure are just as fast as queries performed separately, but I can imagine that opening a new database connection or transaction for each separate query makes the whole process take a lot longer than when all of these queries are executed in one stored procedure over a single connection or transaction. That's why I decided to use stored procedures instead of separate database calls.

I also assumed that I would be charged per DB query when using Amazon Aurora. However, on the pricing page https://aws.amazon.com/rds/aurora/pricing/ I'm reading that I get charged per I/O operation instead, and if the query result can be served from the memory cache then I don't get charged for it. So I'm wondering whether using stored procedures instead of separate DB calls will really save me money or not.

I'm also wondering whether using stored procedures really is faster than simply running all the required queries separately. Let's assume that all the separate queries made by the application are done within one transaction; how would this compare to using a stored procedure that executes the same queries? I can imagine that each query has to travel from the EC2 server to the Amazon Aurora server and the result has to travel back again, which I would assume adds a few milliseconds of processing time to each query made separately, but I'm not sure, so that's why I'm asking.
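A minimal sketch (hypothetical table names, procedure name, and endpoint; assumes the PyMySQL driver) contrasting the two approaches described above - several round trips versus a single CALL that returns multiple result sets:

```
# Sketch only: "articles", "comments", and "get_page_data" are hypothetical names.
import pymysql

conn = pymysql.connect(
    host="my-aurora.cluster-xxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="app", password="***", database="appdb",
)


def fetch_with_separate_queries(page_id):
    """One network round trip per query."""
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM articles WHERE page_id = %s", (page_id,))
        articles = cur.fetchall()
        cur.execute("SELECT * FROM comments WHERE page_id = %s", (page_id,))
        comments = cur.fetchall()
    return articles, comments


def fetch_with_stored_procedure(page_id):
    """One round trip; the procedure is assumed to run the same SELECTs internally."""
    with conn.cursor() as cur:
        cur.callproc("get_page_data", (page_id,))  # hypothetical procedure name
        articles = cur.fetchall()
        cur.nextset()                              # advance to the second result set
        comments = cur.fetchall()
    return articles, comments
```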
1 answer · 0 votes · 5 views · Viridios · asked 8 days ago

My Aurora RDS just peaks for no reason

I have **two db.r5.12xlarge Aurora RDS (MySQL)** clusters [Engine version: 5.7.mysql_aurora.2.07.2] [Parameter group: default.aurora-mysql5.7]. I'll call them A (works fine) and B (has the problem). They ran for a month without any problems, but 1-2 days ago I started seeing very high response times in my load balancer monitoring, so I checked the monitoring on cluster B:

• 3 hours: https://i.imgur.com/dOMVuyA.png
• 2 weeks: https://i.imgur.com/V9faGpx.png
• 1 week Performance Insights (red arrow marks when the problem started): https://i.imgur.com/RCahLzV.png
• 6 hours load balancer: https://i.imgur.com/RIEcwLU.png
• 1 hour target group (when it returned to normal): https://i.imgur.com/8rd0isP.png
• Benchmark connection time (during the problem period): RDS A 0.0061 seconds, RDS B 0.3102 seconds

I checked all my EC2 instances: they work normally with no CPU peaks. I tried adding more instances to the load balancer, but that didn't help. Performance Insights shows no heavy query (and I didn't change my code or anything). My application connects to both clusters, and their workloads are almost the same; in fact RDS A works harder than RDS B (more queries, more inserts, more selects) yet has no problem. The same thing happened to me once earlier this month, and it suddenly returned to normal without me doing anything (this picture shows RDS B back to normal: https://i.imgur.com/kz5W4Wb.png). I really don't understand it. Thank you very much, and I'm sorry about my bad English.

Update: problem days (no special or routine operations on these days; everything is the same every day): 27 Nov, 3 Dec, 17 Dec, 27 Dec, 3 Jan, 10 Jan.
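A minimal sketch (placeholder endpoints and credentials, PyMySQL assumed) of the connection-time benchmark mentioned above, for comparing how long it takes to open a session against each cluster during the problem period:

```
# Sketch only: both endpoints and the "bench" user are placeholders.
import time

import pymysql

ENDPOINTS = {
    "rds-a": "cluster-a.cluster-xxxx.ap-southeast-1.rds.amazonaws.com",  # placeholder
    "rds-b": "cluster-b.cluster-xxxx.ap-southeast-1.rds.amazonaws.com",  # placeholder
}

for name, host in ENDPOINTS.items():
    start = time.perf_counter()
    conn = pymysql.connect(host=host, user="bench", password="***", connect_timeout=5)
    conn.ping()                           # round trip on the established session
    elapsed = time.perf_counter() - start
    conn.close()
    print(f"{name}: {elapsed:.4f} s to connect")
```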
0 answers · 0 votes · 8 views · WeakEnglishGuy · asked 18 days ago

How to investigate Aurora Postgres IAM authentication errors from rdsauthproxy

I have been using IAM database authentication on an Aurora PostgreSQL cluster for many months and everything worked well. A few days ago I started getting login errors, and now it is impossible to log in at all. I am not sure about the timeline, as we only use these accounts for individual user connections. Only accounts not using IAM can log in now. I am not aware of any change, and I cannot pinpoint the root cause of the error. The error I am getting in Postgres clients is this:

```
Unable to connect to server:
FATAL: PAM authentication failed for user "<REDACTED_USERNAME>"
FATAL: pg_hba.conf rejects connection for host "<REDACTED_IP>", user "<REDACTED_USERNAME>", database "postgres", SSL off
```

If I look into the Postgres logs I get a little more detail:

```
* Trying <REDACTED_IP>:1108...
* Connected to rdsauthproxy (<REDACTED_IP>) port 1108 (#0)
> POST /authenticateRequest HTTP/1.1
Host: rdsauthproxy:1108
Accept: */*
Content-Length: 753
Content-Type: multipart/form-data; boundary=------------------------1f9a4da08078f511
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 403 Forbidden
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
<
* Connection #0 to host rdsauthproxy left intact
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:LOG: pam_authenticate failed: Permission denied
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:FATAL: PAM authentication failed for user "<REDACTED_USERNAME>"
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:DETAIL: Connection matched pg_hba.conf line 13: "hostssl all +rds_iam all pam"
2021-12-05 14:42:43 UTC:10.4.2.137(13615):<REDACTED_USERNAME>@postgres:[7488]:FATAL: pg_hba.conf rejects connection for host "<REDACTED_IP>", user "<REDACTED_USERNAME>", database "postgres", SSL off
```

So it seems to be "rdsauthproxy" that rejects the authentication. My understanding is that this proxy is part of the Aurora instance, and I did not find a way to get its logs, where hopefully I could find why the authentication is rejected. I checked the IAM configuration in case something changed, but it seems fine. The users have a policy like this:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": "arn:aws:rds-db:eu-west-3:<REDACTED_ACCOUNT_ID>:dbuser:*/<REDACTED_USERNAME>"
        }
    ]
}
```

The usernames match exactly between IAM and Postgres, and in Postgres they all have the "rds_iam" role. Is there anything I could be missing? At the very least, is there a way to retrieve the logs of an Aurora rdsauthproxy instance that might point me in the right direction?
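A minimal sketch (placeholder endpoint, username, and region) of generating an IAM authentication token with boto3 and connecting with it over SSL, which can help rule out a client-side token or sslmode problem; note that the pg_hba.conf line in the logs ("hostssl ... +rds_iam ... pam") only matches SSL connections:

```
# Sketch only: HOST and USER are placeholders; the region matches the ARN in the policy above.
import boto3
import psycopg2

HOST = "my-cluster.cluster-xxxx.eu-west-3.rds.amazonaws.com"  # placeholder endpoint
USER = "iam_user"                                             # must hold the rds_iam role in Postgres
REGION = "eu-west-3"

rds = boto3.client("rds", region_name=REGION)
token = rds.generate_db_auth_token(
    DBHostname=HOST, Port=5432, DBUsername=USER, Region=REGION
)

# The "SSL off" FATAL line in the logs comes from the non-SSL fallback attempt,
# so the client must require SSL for the +rds_iam pg_hba.conf rule to match.
conn = psycopg2.connect(
    host=HOST,
    port=5432,
    dbname="postgres",
    user=USER,
    password=token,
    sslmode="require",
)
print(conn.get_dsn_parameters())
```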
1 answer · 0 votes · 5 views · Fran · asked a month ago

RDS GlobalDatabase Modify/Termination issue

I am using Aurora MySQL RDS for work, and I created a global database with 'GlobalClusterIdentifier': 'Production_Global-Cluster', which has an underscore in the name even though that is not supposed to be allowed. Now I am trying to terminate this global database; I have already removed all regional clusters and instances under it. When I try to remove the termination protection or modify the cluster in any way, I get the following error:

```
Traceback (most recent call last):
  File "dbtest.py", line 69, in <module>
    DeletionProtection=False
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 676, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidParameterValue) when calling the ModifyGlobalCluster operation: The parameter GlobalClusterIdentifier is not a valid identifier. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
```

The termination protection option is only available through the AWS CLI and Boto3, and both give me the same error; if I try to change the name directly from the AWS console, it gives me a similar error. I also tried describing all global clusters with Boto3 and piping the 'GlobalClusterIdentifier' output into the next command, and that gives the same error as well. Has anyone been in the same situation? Any idea how to terminate this, or at least change the name? Thanks in advance!
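A minimal sketch (assumed region) of the Boto3 calls being attempted; the identifier is taken directly from describe_global_clusters so that no re-typing of the name is involved:

```
# Sketch only: the region is an assumption; the cluster name is the one from the question.
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

for cluster in rds.describe_global_clusters()["GlobalClusters"]:
    identifier = cluster["GlobalClusterIdentifier"]
    print("found:", identifier, "deletion protection:", cluster.get("DeletionProtection"))
    if identifier == "Production_Global-Cluster":
        # ModifyGlobalCluster raises InvalidParameterValue as shown in the traceback,
        # because the existing identifier itself fails server-side validation;
        # DeleteGlobalCluster presumably hits the same validation.
        rds.modify_global_cluster(
            GlobalClusterIdentifier=identifier,
            DeletionProtection=False,
        )
        rds.delete_global_cluster(GlobalClusterIdentifier=identifier)
```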
1 answer · 0 votes · 17 views · AWSnoobX · asked a month ago

Upgrading Aurora Postgres from 10.16 to 11.11

I failed to upgrade the Postgres engine from 10.16 to 11.11 with an error message suggesting that I change wal_level to logical or higher, but I can't find a way to change that setting. Does anyone have experience with this? This is the full message:

```
-----------------------------------------------------------------
  pg_upgrade run on Tue Jul 20 08:31:19 2021
-----------------------------------------------------------------

sh: /rdsdbbin/aurora-11.9.3.4.3.8395.0/bin/curl: /apollo/sbin/envroot: bad interpreter: No such file or directory
command: [ -f /rdsdbbin/aurora-10.14.2.7.3.209608.0/bin/setup_env.sh ] && source /rdsdbbin/aurora-10.14.2.7.3.209608.0/bin/setup_env.sh ; "/rdsdbbin/aurora-10.14.2.7.3.209608.0/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/rdsdbdata/db_old" -o "-p 5432 -b --config_file=/rdsdbdata/config_old/postgresql.conf -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/rdsdbdata/log/upgrade'" start >> "pg_upgrade_server.log" 2>&1
waiting for server to start....2021-07-20 08:31:39.978 GMT [11484] LOG: skipping missing configuration file "/rdsdbdata/db_old/postgresql.auto.conf"
2021-07-20 08:31:39.984 GMT [11484] LOG: pgaudit extension initialized
2021-07-20 08:31:39.993 GMT [11484] LOG: listening on Unix socket "/rdsdbdata/log/upgrade/.s.PGSQL.5432"
.2021-07-20 08:31:40.998 GMT [11484] LOG: Waiting for runtime initialization complete...
2021-07-20 08:31:41.050 GMT [11509] LOG: setting shmemname /dev/shm/aurora-postgres-buffer-cache
.2021-07-20 08:31:42.220 GMT [12016] LOG: database system was shut down at 2021-07-20 08:30:28 GMT
2021-07-20 08:31:42.231 GMT [12016] FATAL: logical replication slot "q4qxxeimi2gz6hhd_00026367_4c69119a_f0a5_4aa7_9e2f_7c2ea95c59df" exists, but wal_level < logical
2021-07-20 08:31:42.231 GMT [12016] HINT: Change wal_level to be logical or higher.
2021-07-20 08:31:42.233 GMT [11484] LOG: startup process (PID 12016) exited with exit code 1
2021-07-20 08:31:42.233 GMT [11484] LOG: aborting startup due to startup process failure
2021-07-20 08:31:42.233 GMT [11484] LOG: database system is shut down
 stopped waiting
pg_ctl: could not start server
Examine the log output.
```
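For reference, a minimal sketch (placeholder connection details, psycopg2 assumed) of listing the logical replication slots referenced in the FATAL line; whether the named slot can safely be dropped before retrying the upgrade is an assumption that must be verified against whatever tool created it, since its consumer will stop receiving changes:

```
# Sketch only: endpoint and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="postgres",
    user="postgres",
    password="***",
)
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("SELECT slot_name, plugin, slot_type, active FROM pg_replication_slots;")
    for slot_name, plugin, slot_type, active in cur.fetchall():
        print(slot_name, plugin, slot_type, "active" if active else "inactive")

    # Uncomment only once the slot is confirmed unused; dropping an active slot fails.
    # cur.execute(
    #     "SELECT pg_drop_replication_slot(%s);",
    #     ("q4qxxeimi2gz6hhd_00026367_4c69119a_f0a5_4aa7_9e2f_7c2ea95c59df",),
    # )
```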
4 answers · 1 vote · 4 views · pyc19 · asked 6 months ago

DMS loads last column as 0, Source CSV file in S3, Target Aurora

I am facing an issue while loading data from CSV file(s) in S3 into Aurora MySQL with DMS. The load completes successfully with all rows, but the last column (Total Profit) is loaded as 0. The column is decimal type, and I have defined it as below in the S3 table mapping:

```
"ColumnType": "NUMERIC", "ColumnPrecision": "10", "ColumnScale": "2"
```

In the table the column is defined as decimal(10,2). I have other columns in the file of the same type and they load correctly, and I have used other methods to load the same file and it loads fine. In the log I can see only one warning, and it doesn't make sense either, as OrderID is an integer (defined as int(11) in the table):

```
2021-06-25T17:14:33 [TARGET_LOAD ]W: Invalid BC timestamp was encountered in column 'OrderID'. The value will be truncated on the target to the timestamp: 874708545 (csv_target.c:173)
```

Below is the S3 external table definition:

```
{
  "TableCount": "1",
  "Tables": [
    {
      "TableName": "orders",
      "TablePath": "div/yyyy-mm-dd/",
      "TableOwner": "div",
      "TableColumns": [
        { "ColumnName": "OrderID", "ColumnType": "INT4" },
        { "ColumnName": "Country", "ColumnType": "STRING", "ColumnLength": "50" },
        { "ColumnName": "Item Type", "ColumnType": "STRING", "ColumnLength": "30" },
        { "ColumnName": "Sales Channel", "ColumnType": "STRING", "ColumnLength": "10" },
        { "ColumnName": "Order Priority", "ColumnType": "STRING", "ColumnLength": "5" },
        { "ColumnName": "Order Date", "ColumnType": "DATE" },
        { "ColumnName": "Region", "ColumnType": "STRING", "ColumnLength": "80" },
        { "ColumnName": "Ship Date", "ColumnType": "DATE" },
        { "ColumnName": "Units Sold", "ColumnType": "INT2" },
        { "ColumnName": "Unit Price", "ColumnType": "NUMERIC", "ColumnPrecision": "5", "ColumnScale": "2" },
        { "ColumnName": "Unit Cost", "ColumnType": "NUMERIC", "ColumnPrecision": "5", "ColumnScale": "2" },
        { "ColumnName": "Total Revenue", "ColumnType": "NUMERIC", "ColumnPrecision": "10", "ColumnScale": "2" },
        { "ColumnName": "Total Cost", "ColumnType": "NUMERIC", "ColumnPrecision": "10", "ColumnScale": "2" },
        { "ColumnName": "Total Profit", "ColumnType": "NUMERIC", "ColumnPrecision": "10", "ColumnScale": "2" }
      ],
      "TableColumnsTotal": "14"
    }
  ]
}
```

Sample data:

```
535113847,Azerbaijan,Snacks,Online,C,2014-10-08,Middle East and North Africa,2014-10-23,934,152.58,97.44,142509.72,91008.96,51500.76
874708545,Panama,Cosmetics,Offline,L,2015-02-22,Central America and the Caribbean,2015-02-27,4551,437.2,263.33,1989697.2,1198414.83,791282.37
854349935,Sao Tome and Principe,Fruits,Offline,M,2015-12-09,Sub-Saharan Africa,2016-01-18,9986,9.33,6.92,93169.38,69103.12,24066.26
892836844,Sao Tome and Principe,Personal Care,Online,M,2014-09-17,Sub-Saharan Africa,2014-10-12,9118,81.73,56.67,745214.14,516717.06,228497.08
```

Table definition:

```
mysql> describe orders;
+----------------+---------------+------+-----+---------+-------+
| Field          | Type          | Null | Key | Default | Extra |
+----------------+---------------+------+-----+---------+-------+
| OrderID        | int(11)       | NO   |     | NULL    |       |
| Country        | varchar(50)   | NO   |     | NULL    |       |
| Item Type      | varchar(30)   | NO   |     | NULL    |       |
| Sales Channel  | varchar(10)   | NO   |     | NULL    |       |
| Order Priority | varchar(5)    | NO   |     | NULL    |       |
| Order Date     | date          | NO   |     | NULL    |       |
| Region         | varchar(80)   | NO   |     | NULL    |       |
| Ship Date      | date          | NO   |     | NULL    |       |
| Units Sold     | smallint(6)   | NO   |     | NULL    |       |
| Unit Price     | decimal(5,2)  | NO   |     | NULL    |       |
| Unit Cost      | decimal(5,2)  | NO   |     | NULL    |       |
| Total Revenue  | decimal(10,2) | NO   |     | NULL    |       |
| Total Cost     | decimal(10,2) | NO   |     | NULL    |       |
| Total Profit   | decimal(10,2) | NO   |     | NULL    |       |
+----------------+---------------+------+-----+---------+-------+
```

Post-load output:

```
mysql> select * from orders limit 3;
| OrderID   | Country               | Item Type | Sales Channel | Order Priority | Order Date | Region                            | Ship Date  | Units Sold | Unit Price | Unit Cost | Total Revenue | Total Cost | Total Profit |
| 535113847 | Azerbaijan            | Snacks    | Online        | C              | 2014-10-08 | Middle East and North Africa      | 2014-10-23 | 934        | 152.58     | 97.44     | 142509.72     | 91008.96   | 0.00         |
| 874708545 | Panama                | Cosmetics | Offline       | L              | 2015-02-22 | Central America and the Caribbean | 2015-02-27 | 4551       | 437.20     | 263.33    | 1989697.20    | 1198414.83 | 0.00         |
| 854349935 | Sao Tome and Principe | Fruits    | Offline       | M              | 2015-12-09 | Sub-Saharan Africa                | 2016-01-18 | 9986       | 9.33       | 6.92      | 93169.38      | 69103.12   | 0.00         |
3 rows in set (0.00 sec)
```

Kindly suggest.

Edited by: DivAWS on Jun 27, 2021 11:20 AM
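A minimal sketch (standard library only; assumes a local copy of the source file under the hypothetical name orders.csv) for checking that every row has 14 fields and that the last field parses as a decimal(10,2) value, which helps separate a file-format problem from a DMS column-mapping problem:

```
# Sketch only: "orders.csv" is a hypothetical local copy of the S3 object.
import csv
from decimal import Decimal, InvalidOperation

EXPECTED_COLUMNS = 14  # must match TableColumnsTotal in the external table definition

with open("orders.csv", newline="") as f:
    for lineno, row in enumerate(csv.reader(f), start=1):
        if len(row) != EXPECTED_COLUMNS:
            print(f"line {lineno}: {len(row)} fields instead of {EXPECTED_COLUMNS}")
            continue
        try:
            profit = Decimal(row[-1].strip())
        except InvalidOperation:
            print(f"line {lineno}: Total Profit {row[-1]!r} is not a valid decimal")
            continue
        if profit != profit.quantize(Decimal("0.01")):
            print(f"line {lineno}: Total Profit {profit} has more than 2 decimal places")
```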
1 answer · 0 votes · 0 views · DivAWS · asked 7 months ago

RDS Aurora Mysql Multi-master downtime and best practices

Hi all,

---Context---
A customer is currently using a large RDS Aurora cluster (12 replicas: 6x r5.12xlarge and 6x r5.4xlarge) for their production environment. This cluster is part of a monolith that is proactively (and slowly) being broken down into smaller applications with independent data stores. This will still take months/years to complete due to competing priorities on their end.

---Challenge---
Over the past few months the customer has performed several database restarts for engine upgrades or database parameter tuning. The customer would like to evaluate multi-master, or any other alternative, that mitigates service downtime as much as possible for future upgrades or restarts.

---Questions---
1) Is multi-master (2 nodes) plus 12 additional read replicas an option at all?
2) If we implement a multi-master approach while keeping the remaining replicas as readers, how does a database upgrade/restart affect the service? Are all the database nodes rebooted, as happens with a regular single-master cluster?
3) The customer application is not built for a multi-master active-active approach, as they won't be able to handle deadlocks at the application level. Is multi-master active-passive an option for failover?
4) Do we have any other recommendation/architecture for managing database upgrades/restarts that would help minimize downtime?

Thanks!
1 answer · 0 votes · 2 views · Miguel_V · asked 2 years ago

5.7.mysql_aurora.2.07.0 randomly crashes with mysqld got signal 11

We just switched a MySQL host to Aurora using the new 5.7.mysql_aurora.2.07.0 version. It ran fine overnight, but this morning we started getting random crashes. We will get one or two, then it will be fine for a few hours, then we will get a few more, and so on. These are the logs:

```
20:55:19 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
```

We are running Aurora on other databases with very similar workloads and queries with no problems. The only difference is that those run 5.7.12 instead of 5.7.mysql_aurora.2.07.0, so initially this looks like an issue with the newer version. Any help is appreciated.
2 answers · 0 votes · 0 views · gbird · asked 2 years ago

Best practices for faster seeding of cross-region replica Aurora cluster to enable failback

The customer currently has multiple databases in a single Aurora MySQL regional cluster and is planning their DR strategy. They have two choices:

1) Aurora MySQL Regional - cross-region replica based on binlog, a.k.a. logical replication
2) Aurora MySQL Global - physical replication

What is the best strategy, under either option, to reduce the time it takes to seed a replica cluster in the case of failback (not failover)? The customer has run experiments and found that, with no changes to their current database-to-cluster ratio, building a new replica cluster in a new region takes 3.75 hrs with Aurora Regional + logical replication and 3 hrs with Aurora Global. I did ask why the customer cares about reducing this duration, since failback typically has no SLA requirements (those usually apply to failover); but if there are no other options, I will emphasize this point. In the meantime, I would like your view on these options and considerations:

**Option 1**: Split the cluster to maintain a ratio of 1 DB per cluster. This reduces the overall cluster size, so bringing up a replica in another region would be faster regardless of whether they go with Aurora Regional or Aurora Global.

**Option 2**: It doesn't matter whether you split or not (which I doubt).

If Option 1 is the right way to go, the next question is the mechanism for splitting the DBs into their individual clusters:

**Option A**: Take a snapshot of the existing multi-DB cluster, restore it to either Aurora Regional or Aurora Global, and delete all the databases except one; repeat for as many databases as the cluster holds. Will this reduce the overall size of the cluster? I believe not, because Aurora storage does not shrink when tables or databases are deleted; it may reclaim that space for reuse (under certain conditions). So when a replica cluster is created in a new region, it will occupy the same storage space as it did before the databases were deleted. Is this true? If so, does the behavior differ between the Regional and Global offerings of Aurora?

**Option B**: The granularity of Aurora backup and restore is at the cluster level, so you cannot back up an individual database and restore it to a new cluster; instead, use some mechanism (DMS or other means) to migrate the contents of each database to its new cluster. This is not clear to me, so I need more clarity on what the AWS recommendation is here.
1 answer · 0 votes · 1 view · EXPERT Hrushi_G · asked 2 years ago

Can we configure separate binary logging (binlog) on Aurora MySQL's Read Replica instance?

A customer has enabled binlog and is expecting to get binlogs from a read replica instance of an Aurora MySQL cluster. These are the steps being followed: binlogs were enabled via the cluster parameter group with the format set to ROW, and the cluster was restarted to ensure the changes take effect. Logging into the writer node shows:

```
mysql> show master status;
+----------------------------+----------+--------------+------------------+-------------------+
| File                       | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+----------------------------+----------+--------------+------------------+-------------------+
| mysql-bin-changelog.000002 |      120 |              |                  |                   |
+----------------------------+----------+--------------+------------------+-------------------+
1 row in set (0.01 sec)
```

However, the reader node does not return any value:

```
mysql> show master status;
Empty set (0.01 sec)
```

On both reader and writer, binlog_format is shown as ROW:

```
MySQL [(none)]> SHOW VARIABLES LIKE 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW   |
+---------------+-------+
1 row in set (0.061 sec)
```

The backup retention value is also set to a non-zero value. My understanding is that in Aurora you can pull binlogs only from the writer node; the replica nodes use redo logs and share the storage layer with the writer, so the customer's expectation seems incorrect. Can I be doubly sure here?
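A minimal sketch (placeholder endpoints and credentials, PyMySQL assumed) that runs the same two checks against both the cluster (writer) endpoint and the reader endpoint, to confirm on which node binlog files actually exist:

```
# Sketch only: both endpoints and the "admin" user are placeholders.
import pymysql

ENDPOINTS = {
    "writer": "my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",     # placeholder
    "reader": "my-cluster.cluster-ro-xxxx.us-east-1.rds.amazonaws.com",  # placeholder
}

for role, host in ENDPOINTS.items():
    conn = pymysql.connect(host=host, user="admin", password="***")
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'binlog_format'")
        print(role, "binlog_format:", cur.fetchone())
        cur.execute("SHOW MASTER STATUS")
        print(role, "master status:", cur.fetchall() or "empty (no binlog on this node)")
    conn.close()
```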
1 answer · 0 votes · 1 view · EXPERT Hrushi_G · asked 2 years ago

Aurora Serverless Scaling

I received the question below from a customer; any advice would be appreciated.

We've been continuing with our benchmark testing this week but have hit a bit of a roadblock with regards to Aurora Serverless. The concern is the way Aurora scales versus the way database connection pools behave in our J2EE application servers. Each application server has a minimum and maximum number of connections that can be acquired based on the current load. As the load increases, the number of connections established from each application server increases to service the increasing transactions. The max setting is a static value, though, set when the application server is created. As the database scales, so does the max_connections value; the problem is that Aurora doesn't scale as fast as our application servers' connection pools, so we end up seeing quite a few errors where MySQL refuses connections because there are too many for the current size of the server. Do you have any suggestions about how to best manage the scaling between application servers and Aurora Serverless, or any libraries that might help?

The only mitigations I can think of are to implement a caching strategy or to limit the concurrent connections from the application servers to Aurora. That would mean their application servers couldn't scale up as much to make use of the extra performance gained by scaling Aurora - not ideal. Any thoughts?
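A minimal sketch (placeholder endpoint; the pool numbers are illustrative assumptions; SQLAlchemy + PyMySQL assumed rather than the customer's J2EE stack) of capping an application-side pool below what Aurora Serverless allows at its smallest capacity, so that bursts queue inside the application instead of being refused while the database is still scaling:

```
# Sketch only: endpoint, credentials, and pool sizes are assumptions to adapt per app server.
from sqlalchemy import create_engine, text

engine = create_engine(
    "mysql+pymysql://app:***@my-serverless.cluster-xxxx.us-east-1.rds.amazonaws.com/appdb",
    pool_size=10,        # steady-state connections held per app server
    max_overflow=5,      # short bursts beyond pool_size
    pool_timeout=30,     # wait up to 30 s for a free connection instead of opening more
    pool_recycle=300,    # recycle sessions so scale events don't leave stale connections
    pool_pre_ping=True,  # detect connections dropped during a scaling operation
)

with engine.connect() as conn:
    print("server max_connections:", conn.execute(text("SELECT @@max_connections")).scalar())
```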
1 answer · 0 votes · 4 views · AWS-User-6214894 · asked 3 years ago

Migrate MySQL 5.5 to AWS

A customer is running ~60 MySQL databases (the majority 5.5, the rest 5.6) on-prem and is using GCP as a DR/backup site (they do not perform local backups in their DC). Their databases vary in size and can reach up to 3 TB. They sync data with the DR site by backing up the on-prem databases with Percona XtraBackup, copying the backup files to the cloud, and importing them into the cloud databases. They then use statement-based replication (SBR) to keep the DR site updated continuously. The same mechanism is used to migrate data back to their data center when they need to recover a database from backup or to fail back after a DR event. For backups, they create volume snapshots at different intervals in the cloud (hourly for the first day, daily for the first week, weekly for the first month, and then monthly for 6 months).

They want to migrate the DR/backup site to AWS. They could take a similar approach with MySQL on EC2, but there is not much benefit for them in that, so they are willing to move only if they can use managed services (Aurora MySQL, RDS MySQL). I need to come up with a solution that addresses their requirements:

- Use Percona XtraBackup and not mysqldump, due to the format of the files and the time it takes to seed a database from dump files
- Continuous replication (to achieve near-zero RPO), using statement-based replication
- Ability to move back on-prem in a short time when needed
- Use managed services

If I understand correctly, they can use the same mechanism for 5.6 using Aurora: they can import into an Aurora cluster from XtraBackup files and then set up SBR from on-prem to AWS. However, this does not solve the other direction, since they can't back up Aurora (or RDS) using XtraBackup. They would either need to use mysqldump, which they don't want because it takes too long, or create a MySQL instance as a replica of Aurora and use it to create the backup.

Q1: Are there any other options to achieve what they need?
Q2: How long should it take to create an Aurora replica on EC2 from a large database?
Q3: Can the same mechanism be used to migrate from 5.5 on-prem to Aurora 5.6 (and vice versa)?
Q4: Assuming the answer to Q3 is yes, how backwards compatible is 5.6? Will the applications that use 5.5 work against 5.6, or will they need to be rewritten?

Assuming there are issues with replicating from 5.5 to 5.6, they will need to use RDS MySQL 5.5 on AWS. My understanding is that 5.5 does not support importing XtraBackup files.

Q5: Is there a way to use XtraBackup with 5.5?

If the answer to Q5 is no, they could use mysqldump or DMS to replicate from on-prem to the cloud. The initial seeding will be slower than with XtraBackup, if I understand correctly.

Q6: Is there a preferred solution for the initial seeding? DMS? mysqldump, mydumper, etc.? Can they use SBR after the initial seeding?
Q7: How do they move the data back from RDS 5.5?
Q8: I guess what I am asking is: what is the best solution for them for 5.5 (and maybe also 5.6)?
1 answer · 0 votes · 11 views · EXPERT Uri · asked 3 years ago

3 Aurora performance questions for Aurora database specialists

3 Aurora performance questions for Aurora database specialists:

- For our production RDS MySQL instances, we were very strongly encouraged by one of AWS's Principal Engineers to use provisioned IOPS instead of regular GP IOPS to ensure consistently good performance. Does Aurora also have a concept of different IOPS flavors, and if so, which ones are appropriate for production workloads? My thoughts: IOPS on Aurora is reported differently from standard RDS, as Aurora doesn't use traditional block storage. I presume the primary instance must have some disk attached to it, but doubt that there is any provisioned IOPS option, correct?

- Some of our queries do complex math (haversine distance calculations, etc.). What should we expect CPU-load-wise and performance-wise when executing these queries on Aurora vs stock MySQL? My thoughts: given the decoupled architecture of Aurora, I presume Aurora will perform better than a standard MySQL RDS instance, but I haven't been able to find any data to support this. (A sketch of the kind of per-row math involved follows after this list.)

- We want to test Aurora's performance under production load by adding an Aurora read replica underneath an existing MySQL primary, something like this:

```
        Existing MySQL Primary
              /         \
             /           \
            /             \
  Existing MySQL RR    New Aurora RR
```

Is a heterogeneous MySQL/Aurora mix of read replicas a good idea? What bad things might happen if we do this? My thoughts: given the way Aurora is architected, this doesn't seem like a good idea to me. Aurora has a primary instance that connects to shared storage, and you can then add read replicas (up to 15) that connect to the same underlying shared storage. So, while I like the out-of-the-box thinking, I don't think this really matches Aurora's architectural patterns - but I will check and report back!

Any insights are much appreciated!
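A minimal sketch (pure Python, standard library only) of the haversine calculation referred to in the second question - the per-row trigonometry that lands on the database CPU when it is expressed as a SQL expression rather than computed in the application tier:

```
# Sketch only: coordinates in the example are illustrative.
from math import asin, cos, radians, sin, sqrt


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius


# Example: Seattle to San Francisco, roughly 1,093 km.
print(round(haversine_km(47.6062, -122.3321, 37.7749, -122.4194), 1))
```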
1 answer · 0 votes · 4 views · AWS-User-7568766 · asked 3 years ago