AWS Glue S3 to Redshift ETL job - ignored `preactions` and `postactions`


I am building a data pipeline to load data into Redshift from an S3 data lake.

Data are stored in Parquet format on S3 and I would like to load them into the respective Redshift tables using an AWS Glue ETL job. The whole job should just DROP the existing table and replace it with the new data. My idea is to use the preactions field of the glueContext.write_dynamic_frame.from_options() method to execute the needed SQL commands.

Here is example code that mimics the actual job I would like to perform:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])  # needed below for job.init()
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

raw_dyf = glueContext.create_dynamic_frame.from_catalog(
    database="medium_test",
    table_name="orders_csv",
    transformation_ctx="AWSGlueDataCatalog_node",
)

transformed_dyf = ApplyMapping.apply(
    frame=raw_dyf,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("restaurant_id", "string", "restaurant_id", "string"),
        ("no_of_items", "long", "no_of_items", "long"),
        ("net_total", "long", "net_total", "long"),
        ("discount", "long", "discount", "long"),
        ("total", "long", "total", "long"),
        ("customer_id", "long", "customer_id", "long"),
        ("order_status", "string", "order_status", "string"),
        ("order_type", "string", "order_type", "string"),
        ("source", "string", "source", "string"),
        ("sales_date", "string", "sales_date", "string"),
    ],
    transformation_ctx="ChangeSchema_node",
)

datasink = glueContext.write_dynamic_frame.from_options(
    frame=transformed_dyf,
    connection_type="redshift",
    connection_options={
        "redshiftTmpDir": "s3://aws-glue-assets/temporary/",
        "useConnectionProperties": "true",
        "dbtable": "public.test_orders",
        "connectionName": "data_infra_glue_redshift_connection",
        "preactions": """DROP TABLE public.test_orders;
        CREATE TABLE IF NOT EXISTS public.test_orders (order_id VARCHAR(100), restaurant_id VARCHAR, no_of_items BIGINT, net_total BIGINT, discount BIGINT, total BIGINT, customer_id BIGINT, order_status VARCHAR, order_type VARCHAR, source VARCHAR, sales_date VARCHAR);
        TRUNCATE TABLE public.test_orders;
        """
    },
    transformation_ctx="AmazonRedshift_node",
)

job.commit()

Unfortunately, it turns out that when the job is executed the preactions are ignored and the data are appended to the existing table. As a result, running the job multiple times on the same data produces multiple copies of the data, which is not the intended behaviour: the table should be dropped and recreated on every run.

Does anyone see what I am missing? Is there a specific permission that the role running the script needs in order to execute SQL commands?

Thanks for your help!

Best regards, Alberto

Asked 5 months ago · Viewed 353 times

1 Answer

I believe the problem is that your preactions contain newline characters.

preactions: A semicolon-delimited list of SQL commands that are run before the COPY command. If the commands fail, then Amazon Redshift throws an exception.

Note: Be sure that the preactions parameter doesn't contain newline characters.

https://repost.aws/knowledge-center/sql-commands-redshift-glue-job
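
For example, you can collapse the whitespace before passing the string, which guarantees a single-line, semicolon-delimited value. This is just a sketch, not code from your job, and the DDL is shortened for illustration:

# Collapse all whitespace, including newlines, so that preactions becomes a
# single semicolon-delimited line as the documentation requires.
raw_sql = """
DROP TABLE IF EXISTS public.test_orders;
CREATE TABLE IF NOT EXISTS public.test_orders (order_id VARCHAR(100), total BIGINT);
"""
preactions_sql = " ".join(raw_sql.split())
# -> 'DROP TABLE IF EXISTS public.test_orders; CREATE TABLE IF NOT EXISTS ...'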

Also, you should avoid dropping and recreating the table on each run; just truncate it instead. Repeatedly dropping and recreating objects like this contributes to catalog bloat.

AWS Expert
Answered 5 months ago
  • Hi Jon, thanks a lot for taking the time to look into the issue. Unfortunately, that is not the case: I have now checked, and the behaviour is the same even with only a single line in the preactions:

    AmazonRedshift_node1701791820733 = glueContext.write_dynamic_frame.from_options(
        frame=transformed_dyf,
        connection_type="redshift",
        connection_options={
            "redshiftTmpDir": "s3://aws-glue-assets-938815263677-eu-south-1/temporary/",
            "useConnectionProperties": "true",
            "dbtable": "public.test_orders",
            "connectionName": "data_infra_glue_redshift_connection",
            "preactions": "TRUNCATE TABLE public.test_orders;"
        },
        transformation_ctx="AmazonRedshift_node",
    )
    

    Incidentally, the drop-and-recreate pattern is the one generated by Glue Studio Visual ETL scripting. But I understand your concern with it.

  • Can you try using from_jdbc_conf instead of from_options, as the linked article does? See the sketch below:
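
    A minimal sketch of what that could look like (untested; "dev" is a placeholder for the actual Redshift database name, and the other values are taken from the snippet above):

        # Write via from_jdbc_conf, passing the Glue catalog connection by name;
        # preactions stays a single semicolon-delimited line with no newlines.
        datasink = glueContext.write_dynamic_frame.from_jdbc_conf(
            frame=transformed_dyf,
            catalog_connection="data_infra_glue_redshift_connection",
            connection_options={
                "dbtable": "public.test_orders",
                "database": "dev",  # placeholder: your Redshift database name
                "preactions": "TRUNCATE TABLE public.test_orders;",
            },
            redshift_tmp_dir="s3://aws-glue-assets/temporary/",
            transformation_ctx="AmazonRedshift_node",
        )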
