
Bug (Redshift SQL): UPDATE statement drops the leading character '0' from the table reference.


I have a temporary table whose name is based on a numeric ID. This table is sometimes created with a 0 as the leading character of the digits in the table name, such as "schema_name"."tmp_012345". Whenever I run an UPDATE statement on this table in which I cast a nested SUPER field to a VARCHAR:

SET enable_case_sensitive_identifier TO true;
UPDATE "schema_name"."tmp_012345" SET other_id = data."otherId"::VARCHAR(255) WHERE this_id IS NOT NULL;

I get: SQL Error [XX000]: ERROR: Query unsupported due to an internal error. Detail: SQL reparse error. Where: relation "tmp_12345" does not exist.

But the statement UPDATE "schema_name"."tmp_012345" SET updated_at = CURRENT_TIMESTAMP WHERE this_id IS NOT NULL; runs successfully.

I've tried running this with the Python Redshift client and with DBeaver, with the same result.

asked 18 days ago · 24 views
1 Answer

This issue appears to be related to how Amazon Redshift handles certain UPDATE statements involving SUPER data types and type casting. The problem you're experiencing is likely due to a combination of factors:

  1. The use of a SUPER data type field (data."otherId") in the UPDATE statement.
  2. The casting of this SUPER field to VARCHAR.
  3. The presence of a leading zero in the table name.

When Redshift processes the UPDATE statement with the cast from SUPER to VARCHAR, it appears to mishandle the quoted table name during an internal reparse and drop the leading zero; note that the error message references relation "tmp_12345" rather than "tmp_012345". This doesn't happen in the second UPDATE statement because it involves neither SUPER navigation nor a cast.
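
For context, here is a minimal sketch of the failing case. The column types below are assumptions; only the table name, the SUPER column data, and the "otherId" field come from the question.

    -- Hypothetical table shape; only the identifiers come from the question.
    CREATE TABLE "schema_name"."tmp_012345" (
        this_id    BIGINT,
        other_id   VARCHAR(255),
        updated_at TIMESTAMP,
        data       SUPER
    );

    SET enable_case_sensitive_identifier TO true;

    -- Fails: the internal reparse looks for relation "tmp_12345"
    UPDATE "schema_name"."tmp_012345"
    SET other_id = data."otherId"::VARCHAR(255)
    WHERE this_id IS NOT NULL;

    -- Succeeds: no SUPER navigation or cast involved
    UPDATE "schema_name"."tmp_012345"
    SET updated_at = CURRENT_TIMESTAMP
    WHERE this_id IS NOT NULL;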

To work around this issue, you could try the following approaches:

  1. Use a subquery to perform the casting:

    UPDATE "schema_name"."tmp_012345" SET other_id = (SELECT data."otherId"::VARCHAR(255)) WHERE this_id IS NOT NULL;

  2. Create a view of your temporary table and perform the UPDATE on the view:

    CREATE VIEW "schema_name"."v_tmp_012345" AS SELECT * FROM "schema_name"."tmp_012345";

    UPDATE "schema_name"."v_tmp_012345" SET other_id = data."otherId"::VARCHAR(255) WHERE this_id IS NOT NULL;

  3. If possible, avoid using leading zeros in table names (a rename sketch follows this list).

  4. Consider using a CTE (Common Table Expression) to perform the casting:

    WITH casted_data AS (
        SELECT this_id, data."otherId"::VARCHAR(255) AS casted_other_id
        FROM "schema_name"."tmp_012345"
        WHERE this_id IS NOT NULL
    )
    UPDATE "schema_name"."tmp_012345" t
    SET other_id = c.casted_other_id
    FROM casted_data c
    WHERE t.this_id = c.this_id;
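
For option 3, here is a sketch of renaming an existing table so its digits no longer start with a zero; the new name "tmp12345" is purely illustrative.

    -- Rename the table to drop the leading zero, then rerun the UPDATE.
    ALTER TABLE "schema_name"."tmp_012345" RENAME TO "tmp12345";

    UPDATE "schema_name"."tmp12345"
    SET other_id = data."otherId"::VARCHAR(255)
    WHERE this_id IS NOT NULL;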

If these workarounds don't resolve the issue, you may need to contact Amazon Redshift support, as this could be a bug in how Redshift handles certain UPDATE statements with SUPER data types and table names with leading zeros.

answered 18 days ago
reviewed 6 days ago by an EXPERT
  • Thank you, AI. I do have a workaround: changing the table name from "schema_name"."tmp_012345" to "schema_name"."tmp012345". This way I'm no longer bothered by the bug.
