Writing into Redshift with Glue 4.0 fails due to string lengths

I'm writing into Redshift and realized Glue 4.0 is probably optimizing the column sizes. Summary of the error:

py4j.protocol.Py4JJavaError: An error occurred while calling o236.pyWriteDynamicFrame.
: java.sql.SQLException: 
Error (code 1204) while loading data into Redshift: "String length exceeds DDL length"
Table name: "PUBLIC"."table_name"
Column name: column_a
Column type: varchar(256)

In previous Glue versions, string columns were always created as varchar(65535), but now my tables are created with varchar(256), and writes to some columns fail with this error. Will this also occur with other data types? How can I solve this within Glue 4.0?

asked a year ago · 890 views
1 Answer

The closest answer I've found is about the new Redshift driver for Spark, under 'Configuring the maximum size of string columns': https://docs.databricks.com/external-data/amazon-redshift.html#language-python

But this is with respect to Spark. How can I translate this to Glue? A sketch of one possible translation is shown below.
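For reference, here is a minimal sketch of how the 'maxlength' column-metadata option described in that Databricks page might be applied inside a Glue 4.0 job script. It assumes a standard Glue job where glueContext, args, and a DynamicFrame named dynamic_frame already exist; the column list, connection name, and table name are placeholders, and whether the metadata survives the conversion back to a DynamicFrame should be verified in your environment.

from awsglue.dynamicframe import DynamicFrame
from pyspark.sql.functions import col

# Work on a Spark DataFrame so per-column metadata can be attached.
df = dynamic_frame.toDF()

# Tag each wide string column with the connector's "maxlength" metadata key
# (65535 is the Redshift VARCHAR maximum) so it is not created as varchar(256).
for column_name in ["column_a"]:  # placeholder list of columns longer than 256 chars
    df = df.withColumn(
        column_name,
        col(column_name).alias(column_name, metadata={"maxlength": 65535}),
    )

# Convert back and write through the Glue Redshift connection as usual.
# If the metadata is dropped by this conversion, writing df directly with the
# Spark Redshift connector (as shown in the Databricks doc) is the fallback.
result = DynamicFrame.fromDF(df, glueContext, "with_maxlength")
glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=result,
    catalog_connection="redshift-connection",  # placeholder Glue connection name
    connection_options={"dbtable": "public.table_name", "database": "dev"},
    redshift_tmp_dir=args["TempDir"],
)

Alternatively, creating the target table up front with explicit varchar lengths avoids relying on the connector's defaults when it creates the table for you.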

answered a year ago
