Questions tagged with AWS Glue
I'm trying to build an ETL pipeline with AWS Glue, and the first step is to copy raw data from the original source to a staging bucket. The job is rather simple: the source is a Data Catalog table (from...
1 answer · 0 votes · 261 views · asked 4 months ago
Hello,
In a Glue ETL job made of the nodes Amazon S3, Change Schema, and AWS Glue Data Catalog, with the table "us_spending" backed by S3, I get the following error:
> Error Category: PERMISSION_ERROR;...
1 answer · 0 votes · 217 views · asked 4 months ago
I am looking for the best way to pass a parameter from one Glue job to another within a Step Function.
Each day I will receive a file containing data for certain dates. The first...
1 answer · 0 votes · 799 views · asked 4 months ago
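One common pattern for the question above is to keep the parameter in the Step Functions execution input and forward it to each Glue job via the `Arguments` of the `glue:startJobRun` integration. A minimal sketch of the state-machine definition, written as a Python dict; the job names and the `processing_date` field are placeholders, not anything from the question:

```python
# Sketch of an Amazon States Language definition (as a Python dict) that
# forwards a value from the execution input to two Glue jobs in sequence.
# "extract-job", "load-job", and "processing_date" are placeholder names.
state_machine = {
    "StartAt": "FirstGlueJob",
    "States": {
        "FirstGlueJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {
                "JobName": "extract-job",  # placeholder job name
                # The ".$" suffix tells Step Functions to resolve the
                # JSONPath against the state input at run time.
                "Arguments": {"--processing_date.$": "$.processing_date"},
            },
            # Discard the job-run output so the original input flows on.
            "ResultPath": None,
            "Next": "SecondGlueJob",
        },
        "SecondGlueJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {
                "JobName": "load-job",  # placeholder job name
                "Arguments": {"--processing_date.$": "$.processing_date"},
            },
            "End": True,
        },
    },
}
```

Inside each Glue job the forwarded value is then read with `getResolvedOptions(sys.argv, ["processing_date"])`.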
We have a use case where we want to export ~500 TB of DynamoDB data to S3; one possible approach I found is to use an AWS Glue job.
Also, while exporting the data to S3, we need to...
2 answers · 0 votes · 299 views · asked 4 months ago
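For table sizes in that range, an alternative worth knowing about is DynamoDB's native point-in-time export to S3 (`ExportTableToPointInTime`), which avoids scanning the table through a Glue job; Glue can then process the exported files. A minimal sketch of the request parameters; the table ARN, bucket, and prefix are placeholders:

```python
# Sketch: build the keyword arguments for DynamoDB's native export-to-S3 API.
# All names below (table ARN, bucket, prefix) are placeholders.

def build_export_request(table_arn: str, bucket: str, prefix: str) -> dict:
    """Build kwargs for DynamoDB's export_table_to_point_in_time call."""
    return {
        "TableArn": table_arn,
        "S3Bucket": bucket,
        "S3Prefix": prefix,
        # DYNAMODB_JSON keeps the native attribute-value encoding.
        "ExportFormat": "DYNAMODB_JSON",
    }

request = build_export_request(
    "arn:aws:dynamodb:us-east-1:123456789012:table/my-table",  # placeholder ARN
    "my-staging-bucket",
    "exports/my-table/",
)
# With boto3 this would be submitted as:
#   boto3.client("dynamodb").export_table_to_point_in_time(**request)
```

Note the export requires point-in-time recovery to be enabled on the table.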
I have an issue trying to set up a custom query in Glue Studio for BigQuery. For example, the query below works in BigQuery, but doesn't work as a custom query in Glue Studio.
```
SELECT * FROM...
1 answer · 0 votes · 65 views · asked 4 months ago
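A sketch of a likely cause, assuming the Glue BigQuery source is backed by the open-source spark-bigquery-connector: running an arbitrary SQL query (rather than reading a whole table) requires view materialization to be enabled and a dataset for the temporary results. The project and dataset names below are placeholders:

```python
# Sketch of BigQuery source options for running a custom query, assuming the
# spark-bigquery-connector. Project/dataset names are placeholders.
bigquery_options = {
    "parentProject": "my-gcp-project",        # placeholder GCP project
    "viewsEnabled": "true",                   # required before "query" works
    "materializationDataset": "tmp_dataset",  # placeholder dataset for temp tables
    "query": "SELECT * FROM my_dataset.my_table WHERE amount > 0",
}
# In a Glue/PySpark script these would be passed as the connection options of
# the BigQuery source node (or spark.read.format("bigquery").options(...)).
```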
I need to load data from my dataframe into a BigQuery table's JSON-type field.
There is a connector that, according to the documentation, supports this feature:...
0 answers · 0 votes · 52 views · asked 4 months ago
I'm working on a project that makes use of the Glue Record Matching transform which, by my best research through the AWS docs, is only supported in Glue 2.0 jobs (and additionally, the maximum Glue version I...
0 answers · 0 votes · 56 views · asked 4 months ago
I am trying to write a PySpark dataframe to S3 and the AWS Glue Data Catalog using the Iceberg format and pyspark.sql.DataFrameWriterV2 with the createOrReplace function. When I write the same...
1 answer · 0 votes · 632 views · asked 4 months ago
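For context on the question above: writing Iceberg through `DataFrameWriterV2` depends on a Spark catalog being configured to point at the Glue Data Catalog. A sketch of the usual session settings, where the catalog name `glue_catalog` and the warehouse bucket are placeholders:

```python
# Sketch of Spark session settings for Iceberg tables backed by the AWS Glue
# Data Catalog. "glue_catalog" and the warehouse path are placeholders.
iceberg_conf = {
    "spark.sql.extensions":
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
    "spark.sql.catalog.glue_catalog": "org.apache.iceberg.spark.SparkCatalog",
    "spark.sql.catalog.glue_catalog.catalog-impl":
        "org.apache.iceberg.aws.glue.GlueCatalog",
    "spark.sql.catalog.glue_catalog.io-impl":
        "org.apache.iceberg.aws.s3.S3FileIO",
    "spark.sql.catalog.glue_catalog.warehouse": "s3://my-bucket/warehouse/",  # placeholder
}
# Once applied (e.g. SparkSession.builder.config(k, v) for each pair), the V2
# writer targets the catalog by name:
#   df.writeTo("glue_catalog.my_db.my_table").using("iceberg").createOrReplace()
```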
Hi. I am trying to run an AWS Glue job where I transfer data from S3 to Amazon Redshift. However, I am receiving the following error:
```
Error Category: UNCLASSIFIED_ERROR; An error occurred while...
2 answers · 0 votes · 1079 views · asked 4 months ago
I have a data pipeline built in Redshift Serverless, with some final tables as the result. We are also running a web app, for which I have set up an Aurora Serverless Postgres DB. The idea...
0 answers · 0 votes · 120 views · asked 4 months ago
Can someone please help with this error? I have a CSV file in an S3 bucket and created a crawler to update a table in Glue; the crawler runs, but when I try to view the data in Athena I get this...
1 answer · 0 votes · 569 views · asked 4 months ago
Hi, this question is regarding corrupt or malformed records in Glue ETL.
Spark DataFrames have an option to designate a _corrupt_record column when this happens, and the entire record is...
1 answer · 0 votes · 207 views · asked 4 months ago
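For reference on the corrupt-record question: these are the plain-Spark reader options involved, shown here as a dict sketch with placeholder names. Glue DynamicFrames do not expose this option directly, so a common workaround is to read with `spark.read` and convert afterwards with `DynamicFrame.fromDF`:

```python
# Sketch of plain-Spark reader options for capturing malformed records while
# parsing CSV/JSON; column name and S3 path below are placeholders.
corrupt_record_options = {
    "mode": "PERMISSIVE",                           # keep bad rows instead of failing
    "columnNameOfCorruptRecord": "_corrupt_record", # raw text of each bad row lands here
}
# e.g. in a Glue script:
#   df = spark.read.options(**corrupt_record_options).schema(my_schema).json("s3://bucket/raw/")
#   bad = df.filter(df["_corrupt_record"].isNotNull())
```

Note the corrupt-record column must also be declared in the schema you pass, or Spark will drop it.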