Questions tagged with Extract Transform & Load Data
We're using S3 Select SelectObjectContent to convert CSV input to JSON output.
The input CSV files are very large, so we're passing chunks using ScanRange. Recently we ran into an issue with CSV files...
1
answers
0
votes
299
views
asked 4 months ago
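S3 Select processes any record that starts inside the given ScanRange, even if it extends past the range's end, so chunk boundaries don't have to fall on row boundaries. A minimal sketch of the chunking side, assuming boto3; bucket, key, and sizes are placeholders, not from the question:

```python
def scan_ranges(object_size, chunk_size):
    """Yield non-overlapping byte ranges covering an object of object_size bytes."""
    start = 0
    while start < object_size:
        end = min(start + chunk_size, object_size)
        yield {"Start": start, "End": end}
        start = end

# Hypothetical usage (needs real AWS credentials and an actual object):
# import boto3
# s3 = boto3.client("s3")
# for rng in scan_ranges(object_size=5_000_000_000, chunk_size=50_000_000):
#     resp = s3.select_object_content(
#         Bucket="my-bucket", Key="big.csv",
#         ExpressionType="SQL", Expression="SELECT * FROM S3Object",
#         InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
#         OutputSerialization={"JSON": {}},
#         ScanRange=rng,
#     )
```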
Hi,
I am considering Glue to connect to a third-party application's database (Oracle) and bring over a data set (in excess of 1M rows) obtained by joining multiple tables at the source end. The destination...
1
answers
0
votes
358
views
asked 4 months ago
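For a join of that size it usually pays to push the query down to Oracle rather than pull each table into Glue separately. A sketch of the JDBC connection options for such a pushdown, using Spark's inline-view `dbtable` syntax; the host, credentials, and query here are placeholders, and whether you hand these to `glueContext.create_dynamic_frame.from_options` or `spark.read.jdbc` depends on your Glue version:

```python
def oracle_jdbc_options(host, port, service, user, password, query):
    """Build JDBC options that push a join query down to Oracle by
    wrapping it as an inline view. All values are placeholders."""
    return {
        "url": f"jdbc:oracle:thin:@//{host}:{port}/{service}",
        "user": user,
        "password": password,
        # Spark's JDBC reader accepts a parenthesized subquery as dbtable
        "dbtable": f"({query}) src",
    }

opts = oracle_jdbc_options("db.example.com", 1521, "ORCL", "etl_user", "secret",
                           "SELECT a.id, b.val FROM a JOIN b ON a.id = b.a_id")
# In a Glue script (not runnable here):
# df = spark.read.format("jdbc").options(**opts).load()
```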
I have multiple Visual ETL jobs configured correctly, but if I go back to the previous screen and then try to view the job again, the visual editor loses the configuration and highlights some...
0
answers
0
votes
107
views
asked 4 months ago
I am working with a .sas7bdat file stored in my S3 bucket.
I want to convert the sas7bdat file to CSV, but in Glue Visual ETL I cannot see an option for the sas7bdat file format.
Can someone please help me...
1
answers
0
votes
310
views
asked 4 months ago
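Glue Visual ETL has no built-in sas7bdat reader, so a common workaround is a small Python shell (or PySpark) job that converts the file with pandas' `read_sas` before the visual pipeline picks it up. A sketch, with the pandas call left commented because it needs a real file (and s3fs for `s3://` paths); the bucket and file names are placeholders:

```python
def csv_key_for(sas_key):
    """Derive the output CSV key from a .sas7bdat key by simple rename."""
    assert sas_key.endswith(".sas7bdat")
    return sas_key[: -len(".sas7bdat")] + ".csv"

# Conversion sketch (requires pandas, and s3fs for s3:// paths):
# import pandas as pd
# df = pd.read_sas("s3://my-bucket/data/file.sas7bdat", format="sas7bdat")
# df.to_csv("s3://my-bucket/data/" + csv_key_for("file.sas7bdat"), index=False)
```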
Hello,
While building a job in AWS Glue (Amazon S3, Change Schema, AWS Glue Data Catalog), I was surprised that the data preview session (AWS Glue GlueInteractiveSession) accounted for 91% of the total...
1
answers
0
votes
212
views
asked 4 months ago
I am importing the data dump file that I have downloaded from S3.
```
-- load schema
DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA',...
```
1
answers
0
votes
1000
views
asked 4 months ago
Hello,
While trying to run this command `DELETE FROM "datasets"."us_spending"` in Athena, on a table from AWS Data Catalog, I had this error:
```
NOT_SUPPORTED: Cannot delete from non-managed Hive...
```
1
answers
0
votes
730
views
asked 4 months ago
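Athena's DELETE only works on managed (Iceberg) tables; for a Hive table backed by S3, one workaround is a CTAS that recreates the table without the unwanted rows. A sketch that only builds the SQL string; the filter predicate and new table name are hypothetical, and the query would then be submitted via `athena.start_query_execution`:

```python
def ctas_without_rows(src_table, new_table, filter_predicate):
    """Build an Athena CTAS statement that recreates src_table minus the
    rows matching filter_predicate (a DELETE workaround for Hive tables)."""
    return (
        f"CREATE TABLE {new_table} AS "
        f"SELECT * FROM {src_table} WHERE NOT ({filter_predicate})"
    )

# Table name from the question; predicate and target name are hypothetical:
sql = ctas_without_rows('"datasets"."us_spending"',
                        '"datasets"."us_spending_clean"',
                        "fiscal_year < 2000")
```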
Hello,
For an AWS Data Catalog table, I ran a Glue job (structure: Amazon S3 -> Change Schema -> AWS Glue Data Catalog) and populated the table with only string records. All the actions were done from the...
1
answers
0
votes
171
views
asked 4 months ago
Hello
I am using PySpark in a Glue job to do ETL on a table sourced from S3, with S3 populated from MySQL via DMS (table schema as below; columns 'op', 'row_updated_timestamp' & 'row_commit_timestamp' are...
1
answers
0
votes
133
views
asked 4 months ago
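With DMS change-data-capture output, a typical ETL step is to keep only the latest version of each row (by `row_updated_timestamp`) and then drop rows whose final `op` is a delete. In a real Glue job this would be a PySpark window function; the same logic in plain Python, with the key column name `id` assumed rather than taken from the question:

```python
def latest_per_key(records, key="id", ts="row_updated_timestamp"):
    """Keep the most recent record per key, then drop keys whose last op is 'D'."""
    latest = {}
    for rec in records:
        cur = latest.get(rec[key])
        if cur is None or rec[ts] > cur[ts]:
            latest[rec[key]] = rec
    return [r for r in latest.values() if r["op"] != "D"]

rows = [
    {"id": 1, "op": "I", "row_updated_timestamp": "2024-01-01T00:00:00"},
    {"id": 1, "op": "U", "row_updated_timestamp": "2024-01-02T00:00:00"},
    {"id": 2, "op": "I", "row_updated_timestamp": "2024-01-01T00:00:00"},
    {"id": 2, "op": "D", "row_updated_timestamp": "2024-01-03T00:00:00"},
]
```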
I'm trying to build an ETL pipeline with AWS Glue, and the first step is to copy raw data from the original source to a staging bucket. The job is rather simple: source is a data catalog table (from...
1
answers
0
votes
271
views
asked 4 months ago
Hello,
In a Glue ETL made of nodes: Amazon S3, Change Schema, AWS Glue Data Catalog with the table "us_spending" backed by S3, I have the following error:
> Error Category: PERMISSION_ERROR;...
1
answers
0
votes
221
views
asked 4 months ago
I am looking for the best way to pass a parameter from one Glue job to another within a Step Functions state machine.
Each day, I will receive a file. In the file there will be data for certain dates. The first...
1
answers
0
votes
838
views
asked 4 months ago
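Glue job runs don't return a payload to Step Functions directly, so the usual pattern is: job 1 writes the value somewhere the state machine can read (S3, SSM, or a Lambda that loads it into the execution state), and the state machine then forwards it to job 2 via `Arguments` with a `.$` path. A sketch of the second job's `startJobRun` task definition built as a Python dict; the job and parameter names are hypothetical:

```python
def glue_start_job_run_task(job_name, param_name, state_path):
    """Step Functions Task state that starts a Glue job synchronously and
    forwards a value from the execution state as a Glue job argument.
    job_name, param_name, and state_path are placeholders."""
    return {
        "Type": "Task",
        "Resource": "arn:aws:states:::glue:startJobRun.sync",
        "Parameters": {
            "JobName": job_name,
            "Arguments": {
                # the '.$' suffix makes Step Functions resolve the value
                # from the state at state_path instead of passing it literally
                f"--{param_name}.$": state_path,
            },
        },
        "End": True,
    }

task = glue_start_job_run_task("second-glue-job", "process_date", "$.dates.first")
```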