This issue appears to be related to how Amazon Redshift handles certain UPDATE statements involving SUPER data types and type casting. The problem you're experiencing is likely due to a combination of factors:
- The use of a SUPER data type field (data."otherId") in the UPDATE statement.
- The casting of this SUPER field to VARCHAR.
- A leading zero in the numeric suffix of the table name ("tmp_012345").
When Redshift processes the UPDATE statement with the type casting from SUPER to VARCHAR, it seems to be incorrectly parsing the table name, dropping the leading zero. This doesn't happen in the second UPDATE statement because it doesn't involve SUPER data types or type casting.
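For reference, the failing statement presumably had the shape below (reconstructed from the workarounds that follow; the original question's exact statement isn't shown here, so treat this as an illustration):

```sql
-- Hypothetical reconstruction: a direct SUPER -> VARCHAR cast in an
-- UPDATE against a table whose name suffix starts with a zero.
UPDATE "schema_name"."tmp_012345"
SET other_id = data."otherId"::VARCHAR(255)
WHERE this_id IS NOT NULL;
```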
To work around this issue, you could try the following approaches:
- Use a subquery to perform the casting:
  UPDATE "schema_name"."tmp_012345"
  SET other_id = (SELECT data."otherId"::VARCHAR(255))
  WHERE this_id IS NOT NULL;
- Create a view over the temporary table and perform the UPDATE through the view (note that Redshift views are generally read-only, so this may not work in all cases):
  CREATE VIEW "schema_name"."v_tmp_012345" AS
  SELECT * FROM "schema_name"."tmp_012345";
  UPDATE "schema_name"."v_tmp_012345"
  SET other_id = data."otherId"::VARCHAR(255)
  WHERE this_id IS NOT NULL;
- If possible, avoid leading zeros in table names.
- Perform the cast in a derived table combined with UPDATE ... FROM. (Redshift allows a WITH clause in an UPDATE only inside a WHERE-clause subquery, not at the top level, so a derived table is used here instead of a CTE:)
  UPDATE "schema_name"."tmp_012345" t
  SET other_id = c.casted_other_id
  FROM (
      SELECT this_id, data."otherId"::VARCHAR(255) AS casted_other_id
      FROM "schema_name"."tmp_012345"
      WHERE this_id IS NOT NULL
  ) c
  WHERE t.this_id = c.this_id;
If these workarounds don't resolve the issue, you may need to contact Amazon Redshift support, as this could be a bug in how Redshift handles certain UPDATE statements with SUPER data types and table names with leading zeros.
Sources
UPDATE - Amazon Redshift
Examples with character types - Amazon Redshift
Thank you AI. I do have a workaround: changing the table name from "schema_name"."tmp_012345" to "schema_name"."tmp012345". This way I'm no longer bothered by the bug.
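For anyone taking the same route, the rename can be done in place rather than recreating the table; this sketch assumes nothing else references the table by its old name:

```sql
-- Rename the table to drop the leading zero from the suffix;
-- the original UPDATE then runs against the new name.
ALTER TABLE "schema_name"."tmp_012345" RENAME TO "tmp012345";
```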