Assert error 1000 in Redshift


Hi, I made a simple SP that performs a deep copy of a table, as described here: https://docs.aws.amazon.com/redshift/latest/dg/performing-a-deep-copy.html. The SP is simple: it takes a schema name and a table name as input parameters, creates a new table via CREATE TABLE (LIKE), inserts the data from the old table into the new one, drops the old table, and finally renames the new table to the old name. When I ran the SP on a table for the first time, it worked fine. However, any subsequent run of the procedure on the same table gives an Assert 1000 error, and it is related to the insert step:

ERROR: Assert Detail: ----------------------------------------------- error: Assert code: 1000 context: table - query: 282270297 location: funcs_int.cpp:88 process: query0_99_282270297 [pid=16374] ----------------------------------------------- Where: SQL statement "INSERT INTO ...blabla..." [ErrorId: 1-64903d7e-333bc0482dcd82f552aab048]
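For reference, the procedure described above could be sketched roughly like this. This is only a minimal illustration of the steps (CREATE TABLE LIKE, INSERT, DROP, RENAME); the procedure name, parameter names, and `_new` suffix are my assumptions, not the actual code, and identifier quoting is omitted for brevity:

```sql
-- Hypothetical sketch of the deep-copy procedure described in the question.
-- Assumed names: deep_copy, sch, tbl, and the "_new" suffix.
CREATE OR REPLACE PROCEDURE deep_copy(sch VARCHAR, tbl VARCHAR)
AS $$
BEGIN
  -- 1. Create an empty copy with the same column definitions
  EXECUTE 'CREATE TABLE ' || sch || '.' || tbl || '_new (LIKE ' || sch || '.' || tbl || ')';
  -- 2. Copy the data over
  EXECUTE 'INSERT INTO ' || sch || '.' || tbl || '_new SELECT * FROM ' || sch || '.' || tbl;
  -- 3. Drop the original and take over its name
  EXECUTE 'DROP TABLE ' || sch || '.' || tbl;
  EXECUTE 'ALTER TABLE ' || sch || '.' || tbl || '_new RENAME TO ' || tbl;
END;
$$ LANGUAGE plpgsql;
```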

What is going on, and what should I do? The deep-copied table is accessible for querying, but any attempt to insert new data gives the assert error, so it is broken somehow. How do I fix this and perform a proper deep copy that works?

PS. It may be important that the subject table has an identity column, so when I insert data from the old table into the new one, I skip the identity column, i.e. I do not use INSERT INTO ... SELECT *, but INSERT INTO (col1, col2, ...) SELECT col1, col2, ...
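In other words, the insert skips the identity column so that Redshift regenerates its values in the new table (CREATE TABLE LIKE carries the IDENTITY attribute over). A sketch of that pattern, with made-up schema, table, and column names:

```sql
-- Hypothetical example: "id" is the identity column and is deliberately
-- omitted from both the column list and the SELECT, so Redshift
-- generates fresh identity values in the new table.
INSERT INTO myschema.mytable_new (col1, col2, col3)
SELECT col1, col2, col3
FROM myschema.mytable;
```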

asked 10 months ago · 1,044 views
1 Answer

You need to make sure that no other queries are accessing these tables during the deep copy operation. If that is confirmed and you are still facing the issue, try increasing the compute so the operation gets more resources and finishes faster. If that also fails, it is best to raise a Support ticket for troubleshooting and deeper investigation.

AWS
answered 10 months ago
  • The subject table was not accessed by any other query, and it was very small: about 20 columns and a couple of thousand rows. The first deep copy finishes fast, but any subsequent attempt to deep copy the same table gives the Assert 1000 error.

  • This is clearly a bug somewhere in Redshift internals, and I cannot raise tech support tickets. What are the options here?

  • It is recommended to have at least the Developer support plan; without support assistance, you will not be able to hand over logs for investigation. Another option is to see whether the version updates rolled out on a periodic basis resolve your issue. One other point worth mentioning: for better stability, it is recommended to run your production cluster on the Trailing track instead of the Current track.
