Questions tagged with AWS Glue
Browse through the questions and answers listed below or filter and sort to narrow down your results.
I'm having issues creating a Glue Crawler, both via the console and through SAM. I can create other resources (S3 buckets, Lambda functions, Glue DB...) but when I try to create the Crawler I...
1 answer · 0 votes · 234 views · asked 3 months ago
My Glue jobs are no longer connecting to MariaDB. The scripts, database, and supporting infrastructure have not changed. I am unsure of the exact start date of the failures.
Interestingly, the crawler...
1 answer · 0 votes · 219 views · asked 3 months ago
We are encountering an issue where we're using the "super" datatype. The column in the Parquet file we receive has a maximum length of 192K. How should we handle this data? Are there alternative...
2 answers · 0 votes · 304 views · asked 3 months ago
I have a Glue ETL job in `us-east-1`. My CodeCommit repository is defined in `eu-central-1`. How can I configure the `us-east-1` ETL job to push its code to the `eu-central-1` code repository?
I...
1 answer · 0 votes · 137 views · asked 3 months ago
I have a Glue ETL job that runs in one region (`eu-central-1`) and successfully reads source data from an S3 bucket in a different region (`us-east-1`). I would like to write the output of the...
1 answer · 0 votes · 166 views · asked 3 months ago
I have Glue jobs and I want to export all of them programmatically. I don't know where to start. Please advise.
1 answer · 0 votes · 173 views · asked 3 months ago
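For the question above about exporting every Glue job programmatically, a minimal boto3 sketch might look like the following. The helper names (`export_glue_jobs`, `to_create_job_args`) are my own, not part of any AWS API; only `get_paginator("get_jobs")` and `create_job` are real Glue client calls.

```python
import json

def export_glue_jobs(region_name="us-east-1"):
    """Fetch every Glue job definition in a region via the paginated
    get_jobs API and return them keyed by job name."""
    import boto3  # imported here so the pure helper below works without AWS deps
    glue = boto3.client("glue", region_name=region_name)
    jobs = {}
    for page in glue.get_paginator("get_jobs").paginate():
        for job in page["Jobs"]:
            jobs[job["Name"]] = job
    return jobs

def to_create_job_args(job):
    """Strip read-only fields that get_jobs returns but create_job rejects,
    so an exported definition can be re-imported in another account/region.
    This set is a starting point; create_job may also reject some field
    combinations (e.g. MaxCapacity together with WorkerType)."""
    read_only = {"CreatedOn", "LastModifiedOn", "AllocatedCapacity"}
    return {k: v for k, v in job.items() if k not in read_only}
```

From there, `json.dump(export_glue_jobs(), f, default=str)` writes the definitions to a file, and each entry can later be replayed with `glue.create_job(**to_create_job_args(job))`.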
Example: s3://bucket1/mytable/ -> east-2 bucket folder with the same schema
s3://bucket2/mytable/ -> west-2 bucket folder with the same schema
Can we create a single table from these two...
3 answers · 0 votes · 571 views · asked 3 months ago
Hi Experts,
I have been experiencing issues with my AWS Glue PySpark job, so I enabled the Spark UI logging feature. However, when the job completed, I clicked on the Spark UI tab on the job run and it is...
1 answer · 0 votes · 353 views · asked 3 months ago
# Error while running UNLOAD to PARQUET query using column names with spaces in them
## Introduction
I have a table in Athena with the following column names ["column space 1", "column space 2"]. I...
1 answer · 0 votes · 652 views · asked 3 months ago
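A common cause of the UNLOAD failure above is that Parquet does not allow spaces in field names, so aliasing each column in the inner SELECT is one way around it. Below is a hedged pure-Python sketch that builds such a query; `build_unload_query` is a hypothetical helper of my own, and the bucket path is a placeholder.

```python
def build_unload_query(table, columns, s3_target):
    """Build an Athena UNLOAD statement that aliases column names
    containing spaces, since Parquet rejects spaces in field names."""
    aliased = ", ".join(
        '"{0}" AS {1}'.format(c, c.replace(" ", "_")) for c in columns
    )
    return (
        "UNLOAD (SELECT {0} FROM {1}) "
        "TO '{2}' WITH (format = 'PARQUET')"
    ).format(aliased, table, s3_target)
```

The resulting string, e.g. `build_unload_query("mydb.mytable", ["column space 1", "column space 2"], "s3://my-bucket/out/")`, could then be submitted through the usual Athena query path (console, or boto3's `start_query_execution`).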
I set up a replication task with AWS Database Migration Service to implement full load + CDC from an RDS instance to an S3 bucket. Since I want to use Athena to query the data in S3, I set the option...
2 answers · 0 votes · 243 views · asked 3 months ago
I managed to use a Glue crawler to crawl data (Parquet files) from S3; however, the column with type "boolean" is recognized as "string" when checking the data schema. Although I can edit the schema on...
1 answer · 0 votes · 598 views · asked 3 months ago
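For schema fixes like the boolean-vs-string question above, one option besides console edits is patching the table definition through the Glue API. A minimal sketch, assuming the table dict has already been fetched: `patch_column_types` is a hypothetical helper of my own, while `update_table` with a `TableInput` dict is the real boto3 Glue call the result would be passed to.

```python
def patch_column_types(table_input, overrides):
    """Return a copy of a Glue TableInput with selected column types
    replaced, e.g. {"is_active": "boolean"} to correct a column that a
    crawler inferred as string. The input dict is left unmodified."""
    sd = table_input["StorageDescriptor"]
    patched_cols = [
        {**col, "Type": overrides.get(col["Name"], col["Type"])}
        for col in sd["Columns"]
    ]
    return {**table_input, "StorageDescriptor": {**sd, "Columns": patched_cols}}
```

The patched dict would then go to `glue.update_table(DatabaseName=..., TableInput=patched)`; note that a subsequent crawler run can overwrite the change unless the crawler's schema change policy is configured to log rather than apply updates.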
I have a scenario where I need to move data from S3 to a Postgres database running on an EC2 instance. All of this is part of a CDK app, so I'm looking to add this as a step to the current step function....
1 answer · 0 votes · 293 views · asked 3 months ago