Questions tagged with Extract Transform & Load Data
I'm trying to use a Glue Crawler to read CSV files from S3 and create a catalog table from them. The crawler runs successfully and creates the catalog table, but those tables are empty (without columns) if I have...
3 answers · 0 votes · 1023 views · asked a year ago
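One common cause: Glue's built-in CSV classifier needs at least two columns and two rows of data to infer a schema, so a headerless or single-column file can produce a table with no columns. A quick local sanity check, as a standard-library sketch (this mirrors the requirement only approximately, not the classifier's actual logic):

```python
import csv
import io

def csv_looks_crawlable(text: str) -> dict:
    """Rough local check of what the Glue CSV classifier needs:
    a detectable delimiter, at least two columns, and at least
    two rows of data."""
    sample = text[:4096]
    dialect = csv.Sniffer().sniff(sample)
    rows = list(csv.reader(io.StringIO(text), dialect))
    return {
        "delimiter": dialect.delimiter,
        "columns": len(rows[0]) if rows else 0,
        "data_rows": max(len(rows) - 1, 0),
    }
```

If `columns` comes back as 1 or `data_rows` as 0, that file is a likely reason the crawler produced an empty schema.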
When I started an ETL job, mapped one table to an S3 bucket, and changed some data types, two columns came out empty because they contained null values. How can I skip the null values in...
0 answers · 0 votes · 61 views · asked a year ago
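In Glue itself, transforms such as DropNullFields or a custom mapping function can deal with this; as a plain-Python sketch of the idea (the helper name is made up), the point is to leave null values untouched instead of letting the cast blank the whole column:

```python
def cast_skipping_nulls(rows, column, caster):
    """Cast `column` with `caster`, but skip records whose value is
    None or empty instead of failing (or blanking) the whole column."""
    out = []
    for row in rows:
        new = dict(row)
        value = row.get(column)
        if value not in (None, ""):
            new[column] = caster(value)
        out.append(new)
    return out
```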
I converted a CSV (from S3) to Parquet (to S3) using AWS Glue, and the resulting Parquet file was named randomly. How do I choose the name of the file that is to be converted to Parquet from...
1 answer · 0 votes · 784 views · asked a year ago
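Spark/Glue writers always emit generated part-xxxxx file names; a common workaround is to rename the output after the job. S3 has no rename operation, so "rename" means copy-then-delete. A sketch (bucket and key names are placeholders; `s3=` lets you inject a client):

```python
def rename_s3_object(bucket, src_key, dest_key, s3=None):
    """S3 has no rename, so copy the object to the desired key and
    delete the original. Run after the Glue job to turn a generated
    part-xxxxx key into a fixed name."""
    if s3 is None:
        import boto3  # assumed available where this runs, e.g. inside a Glue job
        s3 = boto3.client("s3")
    s3.copy_object(
        Bucket=bucket,
        CopySource={"Bucket": bucket, "Key": src_key},
        Key=dest_key,
    )
    s3.delete_object(Bucket=bucket, Key=src_key)
```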
Macie provides detailed positions of sensitive data in its output file, and I want to extract that data using those positions. Also, Macie reveals only 10 samples.
Is there any way to get more...
1 answer · 0 votes · 330 views · asked a year ago
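Macie caps the samples it reveals, but if the finding gives you offset positions you can slice the source object yourself. A minimal sketch (treating the positions as simple (start, end) character offsets is an assumption about your output format):

```python
def extract_spans(text, ranges):
    """Given (start, end) offsets, such as the positions reported in a
    Macie finding, pull the matching substrings out of the source text."""
    return [text[start:end] for start, end in ranges]
```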
I'm writing partitioned Parquet data using a Spark DataFrame with mode=overwrite to update stale partitions. I have this set: spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')
The...
1 answer · 0 votes · 876 views · asked a year ago
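For reference, a sketch of the dynamic-overwrite setup (a config fragment, not runnable on its own: it assumes an existing SparkSession `spark` and DataFrame `df`, and the bucket path and partition column are placeholders). The conf must be set before the write is triggered:

```python
# With "dynamic", mode("overwrite") rewrites only the partitions that
# appear in the incoming data; the default "static" wipes the whole table path.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

(df.write
   .mode("overwrite")
   .partitionBy("dt")                      # placeholder partition column
   .parquet("s3://my-bucket/my-table/"))   # placeholder path
```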
How can one set Execution Class = FLEX on a Jupyter job run? I'm using the %%configure magic cell like below, and I'm also setting the input argument --execution_class = FLEX.
But still...
2 answers · 0 votes · 610 views · asked a year ago
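As far as I know, ExecutionClass is a parameter of the Glue StartJobRun API, and interactive (Jupyter) sessions may not honor it; for a plain job run it can be requested like this (job name and arguments are placeholders):

```python
def flex_run_args(job_name, extra_args=None):
    """Build StartJobRun parameters requesting the Flex execution class."""
    return {
        "JobName": job_name,
        "ExecutionClass": "FLEX",
        "Arguments": dict(extra_args or {}),
    }

# To actually start the run (requires AWS credentials):
# import boto3
# boto3.client("glue").start_job_run(**flex_run_args("my-etl-job"))
```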
Hi, I'd appreciate AWS Athena support for the TIMESTAMP data type with microsecond precision across all row formats and table engines. Currently, support is very inconsistent. See the SQL script below....
0 answers · 0 votes · 155 views · asked a year ago
Started getting this error today when querying data from Athena in a table created from parquet files in our S3 bucket:
[image: screenshot of the Athena error message]
0 answers · 0 votes · 100 views · asked a year ago
Hi community,
I am trying to perform an ETL job using AWS Glue.
Our data is stored in MongoDB Atlas, inside a VPC.
Our AWS VPC is connected to our MongoDB Atlas VPC using VPC peering.
To perform the ETL...
1 answer · 1 vote · 459 views · asked a year ago
In Redshift, I'm trying to update a table using another table from another database. The error details:
SQL Error [XX000]: ERROR: Assert
Detail:
-----------------------------------------------
...
1 answer · 0 votes · 244 views · asked a year ago
I'm attempting to use AWS Data Pipeline to move a CSV file from my computer to an AWS data lake as a Parquet file. I'm unable to find the exact template to select to migrate from my local...
0 answers · 0 votes · 85 views · asked a year ago
I want to directly move a CSV file from my laptop to an AWS data lake using AWS Data Pipeline.
Is it possible to do so? If yes, how?
1 answer · 0 votes · 342 views · asked a year ago
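For a one-off copy from a laptop, Data Pipeline may be more machinery than needed: a plain S3 upload into the data-lake bucket does the same job. A sketch (file path, bucket, and key are placeholders):

```python
def upload_csv_args(local_path, bucket, key):
    """Arguments for boto3's S3 upload_file — the simplest way to land
    a local CSV in a data-lake bucket."""
    return {"Filename": local_path, "Bucket": bucket, "Key": key}

# With AWS credentials configured:
# import boto3
# boto3.client("s3").upload_file(**upload_csv_args("data.csv", "my-lake", "raw/data.csv"))
```

Once the object is in S3, a Glue job or crawler can pick it up for the Parquet conversion.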