Questions tagged with Aurora MySQL


Aurora MySQL crashing randomly

I am on my third Aurora cluster that is randomly failing, leading my application to die. AWS support team didn't answer my support case. Engine version: 8.0.mysql_aurora.3.01.0 ``` /etc/rds/oscar-start-cmd: line 39: 2687 Killed /rdsdbbin/oscar/bin/mysqld --core-file --innodb_shared_buffer_pool_uses_huge_pages='1' "$@" grover/runtime/overlay.cpp:2270: Assertion failed: err == 0 Stack trace: /rdsdbbin/oscar/bin/mysqld() [0x2be2f08] /rdsdbbin/oscar/bin/mysqld(_Z27log_grover_pid_from_page_nomm+0x1d) [0x2850bdd] <inline> (in buf_page_t::set_grover_vol_pid(unsigned long, unsigned long) at /local/p4clients/pkgbuild-FRTaI/workspace/src/OscarMysql80/storage/innobase/include/ut0lock_free_hash.h:638) /rdsdbbin/oscar/bin/mysqld() [0x2597395] (in buf_page_init at /local/p4clients/pkgbuild-FRTaI/workspace/src/OscarMysql80/storage/innobase/buf/buf0buf.cc:6645) /rdsdbbin/oscar/bin/mysqld(_Z22buf_page_init_for_readP7dberr_tmRK9page_id_tRK11page_size_tm+0x2e0) [0x25a3cf0] /rdsdbbin/oscar/bin/mysqld(_Z17buf_read_page_lowP7dberr_tbmmRK9page_id_tRK11page_size_tbbb+0x91) [0x25c6c91] /rdsdbbin/oscar/bin/mysqld(_Z13buf_read_pageRK9page_id_tRK11page_size_tb+0x3c) [0x25c76bc] /rdsdbbin/oscar/bin/mysqld(_ZN9Buf_fetchI16Buf_fetch_normalE9read_pageEv+0x27) [0x2597ce7] /rdsdbbin/oscar/bin/mysqld(_ZN16Buf_fetch_normal3getERP11buf_block_t+0xb2) [0x259ed82] /rdsdbbin/oscar/bin/mysqld(_ZN9Buf_fetchI16Buf_fetch_normalE11single_pageEv+0x4e) [0x25a654e] /rdsdbbin/oscar/bin/mysqld(_Z16buf_page_get_genRK9page_id_tRK11page_size_tmP11buf_block_t10Page_fetchPKcmP5mtr_tb+0x1d9) [0x25a75a9] /rdsdbbin/oscar/bin/mysqld() [0x2637bc1] /rdsdbbin/oscar/bin/mysqld(_Z28fseg_alloc_free_page_generalPhjhmP5mtr_tS1_+0x1d0) [0x2639160] /rdsdbbin/oscar/bin/mysqld(_Z14btr_page_allocP12dict_index_tjhmP5mtr_tS2_+0xd5) [0x256ecc5] /rdsdbbin/oscar/bin/mysqld(_ZN3lob14alloc_lob_pageEP12dict_index_tP5mtr_tjb+0x216) [0x28bb676] /rdsdbbin/oscar/bin/mysqld(_ZN3lob12first_page_t5allocEP5mtr_tb+0x24) [0x28ab0c4] /rdsdbbin/oscar/bin/mysqld(_ZN3lob6insertEPNS_13InsertContextEP5trx_tRNS_5ref_tEP15big_rec_field_tm+0x14f) [0x28b78df] /rdsdbbin/oscar/bin/mysqld(_ZN3lob31btr_store_big_rec_extern_fieldsEP5trx_tP10btr_pcur_tPK5upd_tPmPK9big_rec_tP5mtr_tNS_6opcodeE+0xb16) [0x26edbb6] /rdsdbbin/oscar/bin/mysqld() [0x277331d] /rdsdbbin/oscar/bin/mysqld(_Z29row_ins_clust_index_entry_lowjmP12dict_index_tmP8dtuple_tP10btr_pcur_tP9que_thr_tb+0x646) [0x2774906] /rdsdbbin/oscar/bin/mysqld(_Z25row_ins_clust_index_entryP12dict_index_tP8dtuple_tP10btr_pcur_tP9que_thr_tb+0xe8) [0x277b158] /rdsdbbin/oscar/bin/mysqld(_Z12row_ins_stepP9que_thr_t+0x274) [0x277b7d4] /rdsdbbin/oscar/bin/mysqld() [0x278ca73] /rdsdbbin/oscar/bin/mysqld(_ZN11ha_innobase9write_rowEPh+0x226) [0x268fac6] /rdsdbbin/oscar/bin/mysqld(_ZN7handler12ha_write_rowEPh+0x177) [0x14a4867] /rdsdbbin/oscar/bin/mysqld(_Z12write_recordP3THDP5TABLEP9COPY_INFOS4_+0x5d4) [0x172e3d4] /rdsdbbin/oscar/bin/mysqld(_ZN21Sql_cmd_insert_values13execute_innerEP3THD+0xbaf) [0x173018f] /rdsdbbin/oscar/bin/mysqld(_ZN11Sql_cmd_dml7executeEP3THD+0x6cc) [0x119905c] /rdsdbbin/oscar/bin/mysqld(_Z30mysql_execute_command_internalP3THDb+0x1143) [0x1139f33] /rdsdbbin/oscar/bin/mysqld(_Z21mysql_execute_commandP3THDb+0x17b) [0x113d31b] /rdsdbbin/oscar/bin/mysqld(_Z20dispatch_sql_commandP3THDP12Parser_state+0x351) [0x113df91] 21:03:27 UTC - mysqld got signal 6 ; Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware. Thread pointer: 0x14652cf4e000 Attempting backtrace. 
You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 1465869fba9f thread_stack 0x40000 /rdsdbbin/oscar/bin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x2d) [0x246ac4d] /rdsdbbin/oscar/bin/mysqld(handle_fatal_signal+0x532) [0x1310292] /lib64/libpthread.so.0(+0x117df) [0x147cc9b707df] /lib64/libc.so.6(gsignal+0x110) [0x147cc8ef3c20] /lib64/libc.so.6(abort+0x147) [0x147cc8ef50c7] /rdsdbbin/oscar/bin/mysqld() [0xf963d7] /rdsdbbin/oscar/bin/mysqld() [0x2dba17a] /rdsdbbin/oscar/bin/mysqld() [0x2dba333] /rdsdbbin/oscar/bin/mysqld() [0x2be2f08] /rdsdbbin/oscar/bin/mysqld(log_grover_pid_from_page_no(unsigned long, unsigned long)+0x1d) [0x2850bdd] /rdsdbbin/oscar/bin/mysqld() [0x2597395] /rdsdbbin/oscar/bin/mysqld(buf_page_init_for_read(dberr_t*, unsigned long, page_id_t const&, page_size_t const&, unsigned long)+0x2e0) [0x25a3cf0] /rdsdbbin/oscar/bin/mysqld(buf_read_page_low(dberr_t*, bool, unsigned long, unsigned long, page_id_t const&, page_size_t const&, bool, bool, bool)+0x91) [0x25c6c91] /rdsdbbin/oscar/bin/mysqld(buf_read_page(page_id_t const&, page_size_t const&, bool)+0x3c) [0x25c76bc] /rdsdbbin/oscar/bin/mysqld(Buf_fetch<Buf_fetch_normal>::read_page()+0x27) [0x2597ce7] /rdsdbbin/oscar/bin/mysqld(Buf_fetch_normal::get(buf_block_t*&)+0xb2) [0x259ed82] /rdsdbbin/oscar/bin/mysqld(Buf_fetch<Buf_fetch_normal>::single_page()+0x4e) [0x25a654e] /rdsdbbin/oscar/bin/mysqld(buf_page_get_gen(page_id_t const&, page_size_t const&, unsigned long, buf_block_t*, Page_fetch, char const*, unsigned long, mtr_t*, bool)+0x1d9) [0x25a75a9] /rdsdbbin/oscar/bin/mysqld() [0x2637bc1] /rdsdbbin/oscar/bin/mysqld(fseg_alloc_free_page_general(unsigned char*, unsigned int, unsigned char, unsigned long, mtr_t*, mtr_t*)+0x1d0) [0x2639160] /rdsdbbin/oscar/bin/mysqld(btr_page_alloc(dict_index_t*, unsigned int, unsigned char, unsigned long, mtr_t*, mtr_t*)+0xd5) [0x256ecc5] /rdsdbbin/oscar/bin/mysqld(lob::alloc_lob_page(dict_index_t*, mtr_t*, unsigned int, bool)+0x216) [0x28bb676] /rdsdbbin/oscar/bin/mysqld(lob::first_page_t::alloc(mtr_t*, bool)+0x24) [0x28ab0c4] /rdsdbbin/oscar/bin/mysqld(lob::insert(lob::InsertContext*, trx_t*, lob::ref_t&, big_rec_field_t*, unsigned long)+0x14f) [0x28b78df] /rdsdbbin/oscar/bin/mysqld(lob::btr_store_big_rec_extern_fields(trx_t*, btr_pcur_t*, upd_t const*, unsigned long*, big_rec_t const*, mtr_t*, lob::opcode)+0xb16) [0x26edbb6] /rdsdbbin/oscar/bin/mysqld() [0x277331d] /rdsdbbin/oscar/bin/mysqld(row_ins_clust_index_entry_low(unsigned int, unsigned long, dict_index_t*, unsigned long, dtuple_t*, btr_pcur_t*, que_thr_t*, bool)+0x646) [0x2774906] /rdsdbbin/oscar/bin/mysqld(row_ins_clust_index_entry(dict_index_t*, dtuple_t*, btr_pcur_t*, que_thr_t*, bool)+0xe8) [0x277b158] /rdsdbbin/oscar/bin/mysqld(row_ins_step(que_thr_t*)+0x274) [0x277b7d4] /rdsdbbin/oscar/bin/mysqld() [0x278ca73] /rdsdbbin/oscar/bin/mysqld(ha_innobase::write_row(unsigned char*)+0x226) [0x268fac6] /rdsdbbin/oscar/bin/mysqld(handler::ha_write_row(unsigned char*)+0x177) [0x14a4867] /rdsdbbin/oscar/bin/mysqld(write_record(THD*, TABLE*, COPY_INFO*, COPY_INFO*)+0x5d4) [0x172e3d4] /rdsdbbin/oscar/bin/mysqld(Sql_cmd_insert_values::execute_inner(THD*)+0xbaf) [0x173018f] /rdsdbbin/oscar/bin/mysqld(Sql_cmd_dml::execute(THD*)+0x6cc) [0x119905c] /rdsdbbin/oscar/bin/mysqld(mysql_execute_command_internal(THD*, bool)+0x1143) [0x1139f33] /rdsdbbin/oscar/bin/mysqld(mysql_execute_command(THD*, bool)+0x17b) [0x113d31b] 
/rdsdbbin/oscar/bin/mysqld(dispatch_sql_command(THD*, Parser_state*)+0x351) [0x113df91] /rdsdbbin/oscar/bin/mysqld(dispatch_command(THD*, COM_DATA const*, enum_server_command)+0x1b39) [0x113ff99] /rdsdbbin/oscar/bin/mysqld(do_command(THD*)+0x1c6) [0x1140f46] /rdsdbbin/oscar/bin/mysqld(THD_task::process_connection()+0x134) [0x12fcfc4] /rdsdbbin/oscar/bin/mysqld(Thread_pool::worker_loop()+0x180) [0x12fbc80] /rdsdbbin/oscar/bin/mysqld(Thread_pool::worker_launch(void*)+0x20) [0x12fbea0] /rdsdbbin/oscar/bin/mysqld() [0x296c531] /lib64/libpthread.so.0(+0x740a) [0x147cc9b6640a] /lib64/libc.so.6(clone+0x3e) [0x147cc8fad09e] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (1479bc268028): [omitted] Connection ID (thread ID): 45980 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. aurora backtrace compare flag : 1 Writing a core file [...] ```
0 answers · 0 votes · 5 views · tobias · asked 2 days ago

Aurora MySQL 8 Provisioned - Low Freeable Memory Causing Failover Despite Restricted Parameters

We recently moved from provisioned Aurora 2.10.1 (MySQL 5.7) to Aurora 3.01.0 (MySQL 8.0), with Enhanced Monitoring and Performance Insights enabled. While monitoring the database metrics we noticed that Freeable Memory consistently drops so low that Aurora performs a failover. There are no deadlocks, sleeping sessions, or zombie processes running when this occurs.

In Performance Insights, os.memory.free.avg decreases over time while os.memory.active.avg increases at the same rate, and os.memory.pageTables.avg grows at the same rate as os.memory.active.avg. The database is smaller than the instance's total RAM, so the instance size should be large enough to hold all of it. We've even reduced innodb_buffer_pool_size to a value well below the instance memory but still large enough to cache the data, and it doesn't help. The Buffer Cache Hit Ratio hovers around 100% and swap usage remains at zero.

The reader's Freeable Memory stays flat; it is the writer's Freeable Memory that drops, and when Aurora promotes a reader to writer, the same pattern repeats on the new writer. A new instance launched from a snapshot does not show the issue.

If MySQL is constrained by the parameters in the parameter group, how can we determine whether the OS is using more memory than it should, and if so, how do we stop it from doing so?
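
For reference, one way to watch the writer-versus-reader divergence outside the console is to pull the per-instance FreeableMemory metric from CloudWatch. The sketch below is a minimal example, not an official diagnostic: it assumes boto3, and the region and instance identifiers are placeholders rather than values from the question.

```
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

def freeable_memory(instance_id, hours=6):
    """Return (timestamp, bytes) samples of FreeableMemory for one DB instance."""
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="FreeableMemory",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance_id}],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    # Datapoints come back unordered, so sort them by timestamp.
    return sorted((p["Timestamp"], p["Average"]) for p in resp["Datapoints"])

# Compare the writer (where Freeable Memory drops) against a reader.
for instance in ("my-aurora-writer", "my-aurora-reader"):  # placeholder identifiers
    samples = freeable_memory(instance)
    if samples:
        print(instance, "latest FreeableMemory (GiB):", round(samples[-1][1] / 1024**3, 2))
```

Plotting the two series side by side makes it easier to see whether the writer's drop is a steady decline toward the failover point or a sudden step.
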
1 answer · 0 votes · 2 views · AWS-User-7986928 · asked 4 days ago

Does using stored procedures with Amazon Aurora reduce costs and/or speed up response time?

I'm making a web application that serves pages requiring data from multiple tables. The queries needed to fetch this data have been bundled into one stored procedure, so it now takes only one database call to that stored procedure to receive the result sets from all of the tables. If I had executed all the queries separately, it would have meant making multiple database calls.

I've read that queries executed from a stored procedure are just as fast as queries run separately, but I can imagine that opening a new database connection or transaction for each separate query takes much longer than executing all of those queries in one stored procedure over a single connection or transaction. That's why I decided to use stored procedures instead of separate database calls.

I also assumed I would be charged per DB query when using Amazon Aurora. However, the pricing page (https://aws.amazon.com/rds/aurora/pricing/) says I'm charged per I/O operation instead, and if a query result can be served from the memory cache, I'm not charged for it. So I'm wondering whether using stored procedures instead of separate DB calls will really save me money.

I'm also wondering whether stored procedures are really faster than simply running all the required queries separately. Let's assume all the separate queries made by the application are done within one transaction; how would that compare to a stored procedure that executes the same queries? I imagine each query has to travel from the EC2 server to the Aurora server and the result has to travel back, which I would expect to add a few milliseconds of processing time to each query made separately, but I'm not sure, so that's why I'm asking.
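
To make the round-trip comparison concrete, here is a minimal sketch assuming mysql-connector-python; the endpoint, credentials, the stored procedure name get_page_data, and the table names are hypothetical, not taken from the question. It contrasts one procedure call that returns several result sets with one network round trip per query.

```
import mysql.connector

# Placeholder endpoint and credentials.
conn = mysql.connector.connect(
    host="my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
    user="app", password="***", database="appdb",
)
cur = conn.cursor()

# One round trip: the procedure runs every query server-side and streams all
# of its result sets back over the same connection.
cur.callproc("get_page_data", (42,))                  # hypothetical procedure
page_data = [rs.fetchall() for rs in cur.stored_results()]

# The alternative: one round trip per statement. Each execute() pays the
# EC2-to-Aurora network latency separately, even inside a single transaction.
queries = [
    "SELECT * FROM articles WHERE page_id = 42",      # hypothetical tables
    "SELECT * FROM comments WHERE page_id = 42",
]
separate_data = []
for q in queries:
    cur.execute(q)
    separate_data.append(cur.fetchall())

conn.close()
```

If the pricing-page reading above is right (billing per storage I/O, with buffer-cache hits free), grouping the statements mainly changes network latency rather than the billed I/O.
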
1 answer · 0 votes · 5 views · Viridios · asked 8 days ago

Using AWS Lambda with RDS Proxy: Socket Ended Error

**Some background:** I have a Node 12 Lambda running an Apollo GraphQL server. It connects to a MySQL RDS database via RDS Proxy using IAM authentication. **Problem:** Since switching from a direct DB connection to RDS Proxy via IAM auth, we get intermittent socket ended errors, seen below:

```
Error: This socket has been ended by the other party
    at TLSSocket.writeAfterFIN [as write] (net.js:456:14)
    at PoolConnection.write (/var/task/node_modules/mysql2/lib/connection.js:363:20)
    at PoolConnection.writePacket (/var/task/node_modules/mysql2/lib/connection.js:294:12)
    at Query.start (/var/task/node_modules/mysql2/lib/commands/query.js:60:16)
    at Query.execute (/var/task/node_modules/mysql2/lib/commands/command.js:45:22)
    at PoolConnection.handlePacket (/var/task/node_modules/mysql2/lib/connection.js:456:32)
    at PoolConnection.addCommand (/var/task/node_modules/mysql2/lib/connection.js:478:12)
    at PoolConnection.query (/var/task/node_modules/mysql2/lib/connection.js:546:17)
    at MysqlQueryRunner.<anonymous> (/var/task/node_modules/typeorm/driver/mysql/MysqlQueryRunner.js:184:56)
    at step (/var/task/node_modules/typeorm/node_modules/tslib/tslib.js:143:27) {
  code: 'EPIPE',
  fatal: true
}
```

I know that this is far from enough info to debug, but rather than throwing up all possible config surrounding these services, I'll wait for more pointed questions. Happy to post settings, connection info, and any other config as needed to debug. Thanks in advance for any help!
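
The original stack is Node 12 with mysql2 and TypeORM; the following Python sketch (assuming boto3 and pymysql, with a placeholder proxy endpoint, user, and CA bundle path) only illustrates the same IAM-auth-to-proxy connection flow, not a fix for the EPIPE error.

```
import boto3
import pymysql

PROXY_HOST = "my-proxy.proxy-xxxx.us-east-1.rds.amazonaws.com"  # placeholder endpoint
rds = boto3.client("rds", region_name="us-east-1")

def connect():
    # IAM auth tokens are only valid for 15 minutes, so generate a fresh one for
    # each new connection instead of caching it with the Lambda container.
    token = rds.generate_db_auth_token(
        DBHostname=PROXY_HOST, Port=3306, DBUsername="app_user", Region="us-east-1")
    # IAM authentication requires TLS; the CA bundle path here is a placeholder.
    return pymysql.connect(host=PROXY_HOST, user="app_user", password=token,
                           database="appdb", ssl={"ca": "/opt/rds-ca-bundle.pem"},
                           connect_timeout=5)
```
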
2 answers · 0 votes · 6 views · fdizdar · asked 9 days ago

My Aurora RDS just peaks for no reason

I have **two db.r5.12xlarge Aurora RDS clusters** (**MySQL**) [Engine version: 5.7.mysql_aurora.2.07.2] [Parameter group: default.aurora-mysql5.7]. I'll call them A (working fine) and B (the one with the problem). They ran for a month without any problems, but 1-2 days ago I saw very high response times in my load balancer monitoring, so I checked the monitoring on the db.r5.12xlarge Aurora RDS (B):

- 3 hours: https://i.imgur.com/dOMVuyA.png
- 2 weeks: https://i.imgur.com/V9faGpx.png
- 1 week, Performance Insights (red arrow marks the start of the problem): https://i.imgur.com/RCahLzV.png
- 6 hours, load balancer: https://i.imgur.com/RIEcwLU.png
- 1 hour, target group (when things returned to normal): https://i.imgur.com/8rd0isP.png
- Benchmark connection time (during the problem period): RDS A 0.0061 seconds, RDS B 0.3102 seconds.

I checked my EC2 instances; they all work normally with no CPU spikes. I tried adding more instances to the load balancer, but it didn't help. Performance Insights shows no heavy query (and I didn't change my code or anything). My application has to connect to both Aurora clusters, and their workloads are almost the same; in fact RDS A works harder than RDS B (more queries, more inserts, more selects) but has no problem.

This happened to me once earlier this month, and it suddenly returned to normal without me doing anything about it. I really don't understand it (this is RDS B back to normal: https://i.imgur.com/kz5W4Wb.png). Thank you very much, and I'm sorry about my English.

**Update:** Problem days (no special or routine operations on these days; everything was the same every day): 27 Nov, 3 Dec, 17 Dec, 27 Dec, 3 Jan, 10 Jan.
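
For reference, a minimal sketch of the kind of connection-time benchmark mentioned above, assuming pymysql; the endpoints, user, and password are placeholders rather than the asker's real values.

```
import time
import pymysql

# Placeholder endpoints for the two clusters.
ENDPOINTS = {
    "rds-a": "cluster-a.cluster-xxxx.ap-southeast-1.rds.amazonaws.com",
    "rds-b": "cluster-b.cluster-xxxx.ap-southeast-1.rds.amazonaws.com",
}

def average_connect_time(host, attempts=10):
    """Average seconds to open and close a fresh connection to one endpoint."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        conn = pymysql.connect(host=host, user="bench", password="***",
                               database="mysql", connect_timeout=5)
        conn.close()
        total += time.perf_counter() - start
    return total / attempts

for name, host in ENDPOINTS.items():
    print(name, round(average_connect_time(host), 4), "seconds")
```

Comparing the two endpoints this way reproduces the 0.0061 s versus 0.3102 s gap described above without involving the application.
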
0 answers · 0 votes · 8 views · WeakEnglishGuy · asked 18 days ago

Inject a non-quoted CSV file into RDS via Glue

I have a PySpark script generated by my Glue job that aims to read data from a CSV file in an S3 bucket and write it to my SQL RDS table. My CSV file contains multi-line strings. If the strings are quoted, the job passes, but in my case the multi-line strings are not quoted, so the job cannot insert the data into my table.

I tried:
`spark.read.option("multiLine", "true").option("quoteChar", -1).option("header","true")`
but it doesn't work. I also tried:

```
datasink5 = glueContext.write_dynamic_frame.from_options(
    frame = dynamic_frame_write,
    connection_type = "s3",
    connection_options = {"path": "s3://mycsvFile"},
    format = "csv",
    format_options = {"quoteChar": -1, "separator": ","},
    transformation_ctx = "datasink5")
```

but this wrote the data back to S3, not to my RDS table.

This is my Glue job:

```
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
import pyspark.sql.functions as f

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
## spark.read.option("multiLine", "true").option("quoteChar", -1).option("header","true").option("escape","\'")
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

def otherTreatment(dfa):
    ...
    return dfa

datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "db_rds",
    table_name = "tbl_csv_extract",
    transformation_ctx = "datasource0")

applymapping1 = ApplyMapping.apply(
    frame = datasource0,
    mappings = [
        ("id", "string", "id", "string"),
        ("created", "string", "created", "timestamp"),
        ("name", "string", "name", "string"),
        ("high", "string", "high", "decimal(22,7)")],
    transformation_ctx = "applymapping1")

selectfields2 = SelectFields.apply(
    frame = applymapping1,
    paths = ["created", "name", "high", "id"],
    transformation_ctx = "selectfields2")

resolvechoice3 = ResolveChoice.apply(
    frame = selectfields2,
    choice = "MATCH_CATALOG",
    database = "db_rds_sql",
    table_name = "tbl_teststring",
    transformation_ctx = "resolvechoice3")

resolvechoice4 = ResolveChoice.apply(
    frame = resolvechoice3,
    choice = "make_cols",
    transformation_ctx = "resolvechoice4")

data_frame = resolvechoice4.toDF()
data_frame = otherTreatment(data_frame)
dynamic_frame_write = DynamicFrame.fromDF(data_frame, glueContext, "dynamic_frame_write")

datasink5 = glueContext.write_dynamic_frame.from_catalog(
    frame = dynamic_frame_write,
    database = "db_rds_sql",
    table_name = "tbl_teststring",
    transformation_ctx = "datasink5")

## The following writes the output back to S3, not to my SQL table
datasink5 = glueContext.write_dynamic_frame.from_options(
    frame = dynamic_frame_write,
    connection_type = "s3",
    connection_options = {"path": "s3://mycsvFile"},
    format = "csv",
    format_options = {"quoteChar": -1, "separator": ","},
    transformation_ctx = "datasink5")

job.commit()
```

Does anyone have any idea how I can load my CSV file with non-quoted multi-line fields into RDS using Glue PySpark?
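
As context for the sketch below: Spark's CSV parser has no way to tell an embedded newline from a record terminator unless the multi-line field is quoted (or the newline is escaped), so a reliable fix generally needs the file producer to quote or escape those fields. Assuming that is possible, this minimal sketch reuses the spark, glueContext, and DynamicFrame objects from the job above and the same (hypothetical) db_rds_sql / tbl_teststring catalog target.

```
# Read the CSV with the DataFrame reader, letting records span physical lines;
# this only works when the multi-line fields are quoted.
df = (spark.read
      .option("header", "true")
      .option("multiLine", "true")
      .option("quote", '"')
      .option("escape", '"')
      .csv("s3://mycsvFile"))

# Hand the DataFrame back to Glue and write it to the catalog JDBC target.
dyf = DynamicFrame.fromDF(df, glueContext, "dyf_from_quoted_csv")
glueContext.write_dynamic_frame.from_catalog(
    frame = dyf,
    database = "db_rds_sql",
    table_name = "tbl_teststring",
    transformation_ctx = "datasink_from_df")
```
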
1 answer · 0 votes · 5 views · AWS-User-5483303 · asked a month ago