Questions tagged with Extract Transform & Load Data
Hello,
I have JSON files for transcription jobs created and extracted from Amazon Transcribe, and I need to add them to the CloudFormation stack of PCA (Post Call Analytics).
I found that on my PCA dashboard there is...
1 answer · 0 votes · 239 views · asked a year ago
AWS interface issue
I am trying to download a dataset from an S3 bucket using the AWS interface.
I have the security code and other details, but when I type the code for downloading the data, I get an error.
I am not even...
1 answer · 0 votes · 195 views · asked a year ago
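For the S3 download question above, a minimal sketch of fetching one object with boto3, assuming AWS credentials are already configured (via `aws configure` or environment variables); the bucket and key names are hypothetical, not the poster's:

```python
import os

def local_path_for(key: str, dest_dir: str = "data") -> str:
    """Mirror an S3 key under a local directory, creating parent folders."""
    path = os.path.join(dest_dir, *key.split("/"))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    return path

def download(bucket: str, key: str) -> str:
    """Fetch one object; requires AWS credentials and s3:GetObject permission."""
    import boto3  # imported lazily so the path helper works without it
    path = local_path_for(key)
    boto3.client("s3").download_file(bucket, key, path)
    return path

# Example call (hypothetical names): download("my-example-bucket", "datasets/train.csv")
print(local_path_for("datasets/train.csv"))  # prints data/datasets/train.csv
```

The original error message is truncated, so this only shows the happy path; a real "error when I type the code" usually surfaces as a `botocore` `ClientError` worth printing in full.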
```
2023-03-10 03:52:55,572 ERROR [stream execution thread for [id = bea7911a-d787-44e1-836b-8488a43e16ea, runId = ba5fb563-91a6-4d39-befb-f740a9be9a08]] streaming.MicroBatchExecution...
```
0 answers · 0 votes · 55 views · asked a year ago
Error when viewing table
After using a crawler to create a table, it shows the following error, which I don't know how to fix:
![Error](/media/postImages/original/IMr9UNFbk4Q-qth24BoQLcPQ)
1 answer · 0 votes · 242 views · asked a year ago
Hi,
I have noticed that when running the DataBrew recipe job for data masking, the job fails when the Parquet file has a single record in it.
The error is the following:
> "Exception: Error: Unable...
1 answer · 0 votes · 513 views · asked a year ago
New to Glue and Athena.
I have a great toy example by an AWS community builder working, but in my real use case I want to capture all the fields from an EventBridge event's 'detail' section and...
1 answer · 0 votes · 217 views · asked a year ago
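Capturing all the fields from an event's `detail` section usually comes down to flattening the nested JSON into one column per leaf before it lands in the table. A sketch of that step — the event shape here is an assumption for illustration, not the poster's actual schema:

```python
def flatten(obj: dict, prefix: str = "") -> dict:
    """Flatten nested JSON into dot-separated keys, one entry per leaf value."""
    out = {}
    for k, v in obj.items():
        key = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict):
            out.update(flatten(v, key))  # recurse into nested objects
        else:
            out[key] = v
    return out

# Hypothetical EventBridge 'detail' payload:
detail = {"order": {"id": "o-1", "total": 9.5}, "source_ip": "10.0.0.1"}
print(flatten(detail))
# prints {'order.id': 'o-1', 'order.total': 9.5, 'source_ip': '10.0.0.1'}
```

Flattened records like this map cleanly onto a Glue table, whereas a raw nested `detail` column often ends up typed as a single struct or string.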
Trying to surface daily CSVs in S3 in Redshift via AWS Glue Studio, but databases aren't showing up
I am trying to use AWS Glue Studio to build a simple ETL workflow. Basically, I have a bunch of `csv` files in different directories in S3.
I want those CSVs to be accessible via a database and...
1 answer · 0 votes · 211 views · asked a year ago
Hi all,
I have some issues when running my Glue job. I landed my pipe-delimited CSV file in an S3 bucket, and after running the crawler pointed at the folder where the file is placed, a Glue Catalog...
1 answer · 0 votes · 1578 views · asked a year ago
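Pipe-delimited files are often misread by the default CSV classifier, so one common fix for crawler issues like the one above is to attach a custom CSV classifier with `|` as the delimiter. A hedged sketch of the boto3 request body — the classifier name and the header assumption are hypothetical:

```python
def pipe_classifier_request(name: str = "pipe-delimited-csv") -> dict:
    """Build the request body for glue.create_classifier() with a '|' delimiter."""
    return {
        "CsvClassifier": {
            "Name": name,
            "Delimiter": "|",
            "ContainsHeader": "PRESENT",  # assumes the files have a header row
        }
    }

print(pipe_classifier_request())
# The actual call (needs AWS credentials and Glue permissions) would be:
#   boto3.client("glue").create_classifier(**pipe_classifier_request())
# then attach the classifier name to the crawler and re-run it.
```

A crawler only applies custom classifiers listed in its configuration, so re-running without attaching the classifier changes nothing.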
Looking for a little insight on this:
The ETL job fails with the following error: "Connection reset on https://xxxxxx.com:9200 https://xxxxxxx.com:9200"
Basic information:
* The ETL job is pulling data...
1 answer · 0 votes · 210 views · asked a year ago
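A "Connection reset" against a :9200 (Elasticsearch/OpenSearch-style) endpoint is often transient, so a common mitigation is wrapping the fetch in bounded retries with backoff. This is a generic sketch, not the poster's job code, and the retried call is a stand-in:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying on ConnectionError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * 2 ** attempt)

# Stand-in for the real network call:
print(with_retries(lambda: "fetched", attempts=3))  # prints fetched
```

If resets persist across retries, the cause is usually on the server or network side (idle timeouts, security-group rules) rather than the ETL code.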
Hey guys!
I am trying to read a large amount of data (about 45 GB across 5,500,000 files) in S3 and rewrite it into a partitioned folder (another folder inside the same bucket), but I am facing this...
1 answer · 0 votes · 425 views · asked a year ago
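With millions of small objects, the rewrite target is usually Hive-style partition prefixes (e.g. `year=2023/month=03/`) so downstream readers can prune. A minimal sketch of deriving such a destination key — the prefix layout and the date source are assumptions, not the poster's actual schema:

```python
from datetime import datetime

def partitioned_key(src_key: str, event_time: datetime, prefix: str = "partitioned") -> str:
    """Map a source object key to a Hive-style partitioned destination key."""
    filename = src_key.rsplit("/", 1)[-1]  # keep only the object's file name
    return (
        f"{prefix}/year={event_time.year}"
        f"/month={event_time.month:02d}"
        f"/day={event_time.day:02d}/{filename}"
    )

print(partitioned_key("raw/2023/file-000123.json", datetime(2023, 3, 10)))
# prints partitioned/year=2023/month=03/day=10/file-000123.json
```

At this scale the mapping would normally feed a Spark/Glue job (read, then `write.partitionBy(...)`) rather than per-object copies, which also compacts the 5.5 M small files into fewer, larger ones.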
I ran a crawler to load all my S3 CSV files into the Glue Data Catalog. Now I want to create a Glue job to execute ETL (create and drop temporary tables, select and insert data into tables in the Data Catalog), but...
0 answers · 0 votes · 55 views · asked a year ago
I can't find a proper way to set the correct data type for a timestamp attribute on my Athena table (**Parquet**) in order to query for time intervals.
I'm creating the table via a crawler on Parquet...
1 answer · 0 votes · 486 views · asked a year ago
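When a crawler infers the wrong type, one workaround is to declare the column as `timestamp` in an explicit DDL instead, assuming the Parquet files actually store a timestamp logical type (a column stored as string needs `parse_datetime`/`from_iso8601_timestamp` in the query instead). A sketch with hypothetical table and column names:

```python
def timestamp_table_ddl(table: str, location: str) -> str:
    """Athena DDL declaring the time column as a proper timestamp."""
    return (
        f"CREATE EXTERNAL TABLE {table} (\n"
        "  event_time timestamp,\n"   # assumes Parquet timestamp logical type
        "  payload string\n"
        ")\n"
        "STORED AS PARQUET\n"
        f"LOCATION '{location}'"
    )

print(timestamp_table_ddl("events", "s3://my-example-bucket/events/"))

# With the column typed as timestamp, interval queries use plain literals:
INTERVAL_QUERY = (
    "SELECT * FROM events\n"
    "WHERE event_time BETWEEN timestamp '2023-01-01 00:00:00'\n"
    "                     AND timestamp '2023-01-31 23:59:59'"
)
```

The DDL route also avoids the crawler silently reverting the type on its next run, since the table is no longer crawler-managed.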