Questions tagged with AWS Glue

How to reference a Glue job Python argument in CodeGenConfigurationNodes of the create_job() function in the boto3 Glue client

I would like to reference a Python parameter of a Glue job within the CodeGenConfigurationNodes of the create_job() function in the boto3 Glue client. For instance, I have an argument `--s3_location` that has to be referenced within the S3CsvSource node of my ETL job, as shown below:

```
s3_source_node1 = glueContext.create_dynamic_frame.from_options(
    format_options={
        "quoteChar": '"',
        "withHeader": True,
        "separator": ",",
        "optimizePerformance": False,
    },
    connection_type="s3",
    format="csv",
    connection_options={"paths": [args["s3_location"]]},
    transformation_ctx="s3_source_node1",
)
```

This has to be done via the `create_job()` function available as part of the Glue client in boto3. However, while defining `CodeGenConfigurationNodes` within `create_job()`, I was not able to reference `args["s3_location"]` in the `Paths` property of the `S3CsvSource` node. The current CodeGenConfigurationNodes for S3CsvSource is as follows:

```
CodeGenConfigurationNodes = {
    'node-1': {
        'S3CsvSource': {
            'Name': 's3_source',
            'Paths': [
                's3://my_bucket/sample_input.csv',
            ],
            'Separator': 'comma',
            'QuoteChar': 'quote',
            'WithHeader': True,
            'WriteHeader': True,
        },
    },
}
```

My expected result is something like:

```
CodeGenConfigurationNodes = {
    'node-1': {
        'S3CsvSource': {
            'Name': 's3_source',
            'Paths': [
                args["s3_location"],
            ],
            'Separator': 'comma',
            'QuoteChar': 'quote',
            'WithHeader': True,
            'WriteHeader': True,
        },
    },
}
```

where `args["s3_location"]` refers to the Glue job parameter `--s3_location`.
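For context, here is a rough, self-contained sketch of the kind of `create_job()` call being described. The job name, IAM role, script location, and bucket are hypothetical placeholders; the `--s3_location` value is passed through `DefaultArguments` and interpolated into `Paths` on the client side when the job is defined:

```
import boto3

glue = boto3.client("glue")

# Placeholder value for the job parameter; substitute the real S3 path.
s3_location = "s3://my_bucket/sample_input.csv"

response = glue.create_job(
    Name="my_csv_etl_job",       # hypothetical job name
    Role="MyGlueServiceRole",    # hypothetical IAM role
    GlueVersion="4.0",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my_bucket/scripts/my_csv_etl_job.py",  # placeholder
        "PythonVersion": "3",
    },
    # The script reads this at run time via getResolvedOptions as args["s3_location"].
    DefaultArguments={"--s3_location": s3_location},
    # The visual-node definition is built here in Python, so the path is filled in
    # at job-definition time rather than resolved by Glue from --s3_location at run time.
    CodeGenConfigurationNodes={
        "node-1": {
            "S3CsvSource": {
                "Name": "s3_source",
                "Paths": [s3_location],
                "Separator": "comma",
                "QuoteChar": "quote",
                "WithHeader": True,
                "WriteHeader": True,
            },
        },
    },
)
print(response["Name"])
```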
0 answers · 0 votes · 23 views · asked 20 days ago

Querying Latest Available Partition

I am building an ETL pipeline using primarily state machines, Athena, and the Glue catalog. In general, things work in the following way:

1. A table, partitioned by "version", exists in the Glue Catalog. The table represents the output destination of some ETL process.
2. A step function (managed by some other process) executes "INSERT INTO" Athena queries. The step function supplies a "version" that is used as part of the "INSERT INTO" query so that new data can be appended into the table defined in (1). The table contains all "versions" - it's a historical table that grows over time.

My question is: what is a good way of exposing a view/table that allows someone (or something) to query only the latest "version" partition for a given historically partitioned table?

I've looked into other table types AWS offers, including Governed tables and Iceberg tables. Each seems to have some incompatibility with our existing or planned future architecture:

1. Governed tables do not support writes via Athena INSERT queries. Only Glue ETL/Spark seems to be supported at the moment.
2. Iceberg tables do not support Lake Formation data filters (which we'd like to use in the future to control data access).
3. Iceberg tables also seem to have poor performance. Anecdotally, it can take several seconds to insert a very small handful of rows to a given Iceberg table. I'd worry about future performance when we want to insert a million rows.

Any guidance would be appreciated!
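One pattern worth sketching for the "latest partition" requirement is a view over the historical table that filters on the maximum "version": Athena evaluates the scalar subquery at query time, so readers always see the newest partition (assuming version values sort in insertion order). The database, table, and results-location names below are hypothetical placeholders:

```
import boto3

athena = boto3.client("athena")

# Hypothetical names; replace with the real Glue database/table and a
# writable S3 location for Athena query results.
DATABASE = "my_etl_db"
TABLE = "my_historical_table"
OUTPUT_LOCATION = "s3://my-athena-results/queries/"

# Define a view that exposes only rows from the latest "version" partition.
create_view_sql = f"""
CREATE OR REPLACE VIEW {TABLE}_latest AS
SELECT *
FROM {TABLE}
WHERE version = (SELECT max(version) FROM {TABLE})
"""

response = athena.start_query_execution(
    QueryString=create_view_sql,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
)
print(response["QueryExecutionId"])
```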
1 answer · 0 votes · 50 views · asked a month ago