
Questions tagged with Database


RDS MySQL stuck offline & upgrading

One of our DB instances on MySQL 8.0.29 was crashing, with regular "DB instance restarted" entries in the logs. I tried modifying it to 8.0.30, but now it's stuck upgrading, with regular "Error restarting mysql: MySQL process is not running and the restart attempt failed." entries in the logs. The error log contains the following over and over again:

```
2022-09-23T16:03:46.082336Z 0 [Warning] [MY-011068] [Server] The syntax 'log_slave_updates' is deprecated and will be removed in a future release. Please use log_replica_updates instead.
2022-09-23T16:03:46.082352Z 0 [Warning] [MY-011069] [Server] The syntax '--master-info-repository' is deprecated and will be removed in a future release.
2022-09-23T16:03:46.082362Z 0 [Warning] [MY-011069] [Server] The syntax '--master-info-repository' is deprecated and will be removed in a future release.
2022-09-23T16:03:46.082415Z 0 [Warning] [MY-011069] [Server] The syntax '--relay-log-info-file' is deprecated and will be removed in a future release.
2022-09-23T16:03:46.082425Z 0 [Warning] [MY-011069] [Server] The syntax '--relay-log-info-repository' is deprecated and will be removed in a future release.
2022-09-23T16:03:46.082457Z 0 [Warning] [MY-011068] [Server] The syntax 'skip_slave_start' is deprecated and will be removed in a future release. Please use skip_replica_start instead.
2022-09-23T16:03:46.082469Z 0 [Warning] [MY-011068] [Server] The syntax 'slave_exec_mode' is deprecated and will be removed in a future release. Please use replica_exec_mode instead.
2022-09-23T16:03:46.082483Z 0 [Warning] [MY-011068] [Server] The syntax 'slave_load_tmpdir' is deprecated and will be removed in a future release. Please use replica_load_tmpdir instead.
2022-09-23T16:03:46.082676Z 0 [Warning] [MY-010101] [Server] Insecure configuration for --secure-file-priv: Location is accessible to all OS users. Consider choosing a different directory.
2022-09-23T16:03:46.082703Z 0 [Warning] [MY-010918] [Server] 'default_authentication_plugin' is deprecated and will be removed in a future release. Please use authentication_policy instead.
2022-09-23T16:03:46.082719Z 0 [System] [MY-010116] [Server] /rdsdbbin/mysql/bin/mysqld (mysqld 8.0.29) starting as process 726
2022-09-23T16:03:46.085105Z 0 [Warning] [MY-010161] [Server] You need to use --log-bin to make --log-replica-updates work.
2022-09-23T16:03:46.089322Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-09-23T16:03:46.095259Z 1 [Warning] [MY-012191] [InnoDB] Scan path '/rdsdbdata/db/innodb' is ignored because it is a sub-directory of '/rdsdbdata/db/
2022-09-23T16:03:48.212117Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
InnoDB: Progress in percents: 1
16:03:49 UTC - mysqld got signal 11 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
Thread pointer: 0x1518a5406000
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 15189fcc63e8 thread_stack 0x40000
/rdsdbbin/mysql/bin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x2e) [0x2097dae]
/rdsdbbin/mysql/bin/mysqld(print_fatal_signal(int)+0x2bb) [0x102057b]
/rdsdbbin/mysql/bin/mysqld(handle_fatal_signal+0xa5) [0x1020625]
/lib64/libpthread.so.0(+0x118e0) [0x1518c0d458e0]
/rdsdbbin/mysql/bin/mysqld(page_cur_search_with_match_bytes(buf_block_t const*, dict_index_t const*, dtuple_t const*, page_cur_mode_t, unsigned long*, unsigned long*, unsigned long*, unsigned long*, page_cur_t*)+0x190) [0x22d69b0]
/rdsdbbin/mysql/bin/mysqld(btr_cur_search_to_nth_level(dict_index_t*, unsigned long, dtuple_t const*, page_cur_mode_t, unsigned long, btr_cur_t*, unsigned long, char const*, unsigned long, mtr_t*)+0x1ec2) [0x2406642]
/rdsdbbin/mysql/bin/mysqld(row_search_on_row_ref(btr_pcur_t*, unsigned long, dict_table_t*, dtuple_t const*, mtr_t*)+0xd2) [0x233d2c2]
/rdsdbbin/mysql/bin/mysqld(row_undo_search_clust_to_pcur(undo_node_t*)+0x15f) [0x234bd4f]
/rdsdbbin/mysql/bin/mysqld(row_undo_mod(undo_node_t*, que_thr_t*)+0x18f) [0x258461f]
/rdsdbbin/mysql/bin/mysqld(row_undo_step(que_thr_t*)+0x52) [0x234c542]
/rdsdbbin/mysql/bin/mysqld(que_run_threads(que_thr_t*)+0x5c8) [0x22f3da8]
/rdsdbbin/mysql/bin/mysqld(trx_rollback_or_clean_recovered(bool)+0x9a5) [0x23a4475]
/rdsdbbin/mysql/bin/mysqld(trx_recovery_rollback_thread()+0x1c) [0x23a53fc]
/rdsdbbin/mysql/bin/mysqld(std::thread::_State_impl<std::thread::_Invoker<std::tuple<Detached_thread, void (*)()> > >::_M_run()+0xb0) [0x2296fc0]
/rdsdbbin/mysql/bin/mysqld() [0x2baabf0]
/lib64/libpthread.so.0(+0x744b) [0x1518c0d3b44b]
/lib64/libc.so.6(clone+0x3f) [0x1518c052040f]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0):
Connection ID (thread ID): 0
Status: NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
```

Is there any way to fix this, or are we going to have to delete and recreate the instance?
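In case it affects the answer: the fallback we are considering is restoring a new instance from automated backups to a point just before the upgrade attempt, roughly like this (a minimal boto3 sketch; the identifiers and timestamp are placeholders, not our real values):

```python
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")

# Restore a brand-new instance from the stuck instance's automated backups,
# targeting a time just before the 8.0.30 upgrade was started.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="mydb",           # placeholder identifier
    TargetDBInstanceIdentifier="mydb-restored",  # placeholder identifier
    RestoreTime=datetime(2022, 9, 23, 15, 0, tzinfo=timezone.utc),
)
```

We'd rather avoid this, since it means new endpoints and losing any writes after the restore point.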
0 answers · 0 votes · 12 views · asked 10 days ago

Neptune and Cypher - Poor Query Performance

I want to use Neptune for an application with Cypher as my query language. I have a pretty small dataset of around ~8500 nodes and ~8500 edges. I am trying to run what seem to be fairly straightforward queries, but the latency is very high (~6-8 seconds for around 1000 rows). I have tried various instance types, enabling and disabling caches, and enabling and disabling the OSGP index, to no avail. I'm really at a loss as to why the query performance is so poor. Does anyone have any experience with poor query performance using Neptune? I feel I must be doing something incorrect to have such high query latency.

Here is some more detailed information on my graph structure and my query. I have a graph with 2 node types `A` and `B` and a single edge type `MAPS_TO`, which is always directed from an `A` node to a `B` node. The relation `MAPS_TO` is many-to-many, but with the current dataset it is primarily one-to-one, i.e. the graph is mainly disconnected subgraphs of the form:

```
(A)-[MAPS_TO]-(B)
```

What I would like to do is, for all `A` nodes, collect the distinct `B` nodes which they map to satisfying some conditions. I've experimented with my queries a bit, and the fastest one I've been able to arrive at is:

```
MATCH (a:A)
WHERE a.Owner = $owner AND a.IsPublic = true
WITH a
MATCH (a)-[r:MAPS_TO]->(b:B)
WHERE (b)<-[:MAPS_TO {CreationReason: "origin"}]-(:A {Owner: $owner})
   OR (b)<-[:MAPS_TO {CreationReason: "origin"}]-(:A {IsPublic: true})
WITH a, r, b
ORDER BY a.AId
SKIP 0 LIMIT 1000
RETURN a { .AId } AS A,
       collect(distinct b {
           B: {BId: b.BId, Name: b.Name, other properties on B nodes...},
           R: {CreationReason: r.CreationReason, other relation properties}
       })
```

The above query takes ~6 seconds (based on explain output and HTTP timing) on the `t4g.medium` instance type. I tried upping to an `r5d.2xlarge` instance type, and this cut the query time in half to 3-4 seconds. However, using such a large instance type seems quite excessive for such a small amount of data. Really, I am just trying to figure out why my query performs so poorly; given how little data I have, it seems to me that it should not be possible for any Neptune configuration to perform this badly.

EDIT for more info: We are using the `t4g.medium` instance type with 3 reader instances, and the queries go to the readers. Again, we have around ~8500 nodes split approximately equally into `A` nodes and `B` nodes, and around ~8500 edges of a single type `MAPS_TO`, all going from `A` to `B`. The output of the status endpoint for openCypher is:

```
{'status': 'healthy',
 'startTime': 'Mon Sep 19 18:56:50 UTC 2022',
 'dbEngineVersion': '1.1.1.0.R5',
 'role': 'reader',
 'dfeQueryEngine': 'viaQueryHint',
 'gremlin': {'version': 'tinkerpop-3.5.2'},
 'sparql': {'version': 'sparql-1.1'},
 'opencypher': {'version': 'Neptune-9.0.20190305-1.0'},
 'labMode': {'ObjectIndex': 'enabled', 'ReadWriteConflictDetection': 'enabled'},
 'features': {'ResultCache': {'status': 'enabled'},
              'IAMAuthentication': 'disabled',
              'Streams': 'disabled',
              'AuditLog': 'enabled'},
 'settings': {'clusterQueryTimeoutInMs': '120000'}}
```

I have tried this with the `ObjectIndex` enabled and disabled and do not see much difference in performance. I have also tried the query on a larger instance type, the `r5d.2xlarge`, to see if performance was improved by the result cache. The response time was roughly cut in half, but that still seems very slow, and a larger instance type than should be necessary.
The only things being run against the database currently are the above queries, so I do not see how it could be a concurrency issue. We have looked at the output of explain (too long to post), and it is not clear to me that there is a single place where the query spends a large amount of time. The `DFEPipelineJoin` operators taking the longest makes sense to me based on the description in the documentation; what is not clear is how to eliminate all of the `DFEPipelineJoin`s from the query.
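In case our approach is part of the problem, this is roughly how we are pulling the explain output (a minimal sketch using the `requests` library; the endpoint and parameter value are placeholders, and I'm assuming the standard openCypher HTTPS endpoint with the `explain` query parameter, since IAM auth is disabled on our cluster):

```python
import requests

# Placeholder reader endpoint -- not our real cluster name.
ENDPOINT = "https://example.cluster-ro-xxxxxxxx.us-east-1.neptune.amazonaws.com:8182"

query = """
MATCH (a:A)-[r:MAPS_TO]->(b:B)
WHERE a.Owner = $owner AND a.IsPublic = true
RETURN a, r, b LIMIT 10
"""

# POST to the openCypher endpoint with explain=details to get the full
# query plan, including the per-operator breakdown (DFEPipelineJoin etc.).
resp = requests.post(
    f"{ENDPOINT}/openCypher",
    data={
        "query": query,
        "parameters": '{"owner": "some-owner"}',  # placeholder parameter value
        "explain": "details",
    },
)
print(resp.text)
```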
2 answers · 0 votes · 39 views · asked 12 days ago