Questions tagged with Extract Transform & Load Data
Hi, I have run a crawler that connects over JDBC to Snowflake, and it creates a table in the Glue catalog database; that works nicely.
Now I want to use Glue Studio to take the source (AWS Glue Data...
2 answers · 0 votes · 455 views · asked a year ago
I am trying to migrate 3 tables, my selection rule looks like this, and I get an error:
where schema name is like 'EEOP_INTEG' and source table name is like '[EEOP_USER, EEOP_ORG, EEOP_USER_ORG]',...
2 answers · 0 votes · 490 views · asked a year ago
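DMS table mappings take one selection rule per table; a bracketed list inside `table-name` is not valid syntax. A sketch of the JSON mapping for the three tables above, assuming they all live under the EEOP_INTEG schema (rule names are placeholders):

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-eeop-user",
      "object-locator": { "schema-name": "EEOP_INTEG", "table-name": "EEOP_USER" },
      "rule-action": "include"
    },
    {
      "rule-type": "selection",
      "rule-id": "2",
      "rule-name": "include-eeop-org",
      "object-locator": { "schema-name": "EEOP_INTEG", "table-name": "EEOP_ORG" },
      "rule-action": "include"
    },
    {
      "rule-type": "selection",
      "rule-id": "3",
      "rule-name": "include-eeop-user-org",
      "object-locator": { "schema-name": "EEOP_INTEG", "table-name": "EEOP_USER_ORG" },
      "rule-action": "include"
    }
  ]
}
```

`table-name` also accepts `%` wildcards, so a single rule with `"table-name": "EEOP_%"` would work if no other tables share that prefix.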
When I am trying to upload a regular data table from the query, it is not being uploaded to QuickSight. The Full Refresh is also giving problems. When I run the query on Hubble, it works...
1 answer · 0 votes · 281 views · asked a year ago
Hello everybody,
I'm working for a small retail company where we want to move our data to AWS. Our data can come from different sources (a Postgres DB, a MySQL DB, or other unstructured data sources or...
2 answers · 0 votes · 327 views · asked a year ago
Issue importing custom Python modules in Jupyter notebooks with SparkMagic and AWS Glue endpoint
I'm encountering an issue while attempting to run ETL scripts in Jupyter notebooks using SparkMagic, which is connected to an AWS Glue endpoint via SSH. I followed the tutorial provided in the AWS...
2 answers · 0 votes · 703 views · asked a year ago
I am trying to create a **Custom Visual Transform** but unfortunately I am facing some issues.
My goal here is to truncate a **MySQL** table before loading the data into it, and I want to do it with the...
1 answer · 0 votes · 274 views · asked a year ago
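A minimal sketch of the truncate step such a transform could run before the load. The helper below only builds a guarded statement; the commented py4j calls show one way to execute it from inside a Glue job (the JDBC URL and credentials are placeholders, not taken from the question):

```python
import re


def truncate_sql(table_name: str) -> str:
    """Build a TRUNCATE statement, rejecting anything that is not a
    plain identifier so the table name cannot smuggle in extra SQL."""
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", table_name):
        raise ValueError(f"unsafe table name: {table_name!r}")
    return f"TRUNCATE TABLE {table_name}"


# Inside the custom visual transform you could then run the statement
# over JDBC via the JVM before returning the frame unchanged, e.g.:
#
#   conn = spark._jvm.java.sql.DriverManager.getConnection(jdbc_url, user, pwd)
#   conn.createStatement().execute(truncate_sql("my_table"))
#   conn.close()
```

Running the statement on the driver (rather than per-partition) matters here, so the table is truncated exactly once before any executor starts writing.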
Hi, I am using the G.2X worker type. During the data write operation the Glue job allocates only 5 executors and will not allocate more, which makes the process take very long. How can I resolve this issue?
3 answers · 0 votes · 491 views · asked a year ago
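On recent Glue versions each G.1X/G.2X worker hosts one Spark executor and one worker is reserved for the driver, so a cap of exactly 5 executors suggests the job is configured with `NumberOfWorkers` of 6; raising that value (or enabling auto scaling with a higher maximum) is the usual fix. A small sketch of the arithmetic (the CLI call in the comment is a placeholder, not from the question):

```python
def executors_for(workers: int) -> int:
    """One G.1X/G.2X worker hosts one Spark executor; one worker
    is reserved for the Spark driver."""
    if workers < 2:
        raise ValueError("a Glue job needs at least 2 workers")
    return workers - 1


# e.g. bumping the job to 21 workers would yield 20 executors:
#   aws glue update-job --job-name my-job \
#       --job-update '{"WorkerType": "G.2X", "NumberOfWorkers": 21, ...}'
```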
How do I fix this to execute jobs from AWS Glue to Redshift? Error: "JobName:s3-redshift and JobRunId:jr_30d8ac0c6b35d44f641b7d5b55819365897b18e6244c8a5559bff53efc8e23c1 failed to execute with...
2 answers · 0 votes · 256 views · asked a year ago
Delivering records to Security Lake custom source prefix is not possible with Kinesis Firehose?
I'm trying to set up custom sources for Security Lake and have a Kinesis Firehose delivery stream configured to deliver parquet files into the Security Lake bucket under the ext/ prefix.
The problem...
1 answer · 0 votes · 311 views · asked a year ago
Hi, I have a crawler that connects over JDBC to Snowflake, and the process finishes fine, but it doesn't create any table. Here is the CloudWatch output:
BENCHMARK : Classification complete,...
2 answers · 0 votes · 364 views · asked a year ago
At present, Glue supports the Iceberg framework, but I need the MERGE INTO syntax here. I set it up according to the official documentation, but I can't get it to work; there is always an error of...
0 answers · 0 votes · 74 views · asked a year ago
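One common cause of MERGE INTO failing on Glue with Iceberg is launching the job without the Iceberg SQL extensions. The AWS documentation has the job run with the `--datalake-formats iceberg` parameter plus Spark conf along these lines (bucket and prefix are placeholders):

```
--datalake-formats  iceberg
--conf  spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
        --conf spark.sql.catalog.glue_catalog=org.apache.iceberg.spark.SparkCatalog
        --conf spark.sql.catalog.glue_catalog.warehouse=s3://<bucket>/<prefix>/
        --conf spark.sql.catalog.glue_catalog.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog
        --conf spark.sql.catalog.glue_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO
```

With the extensions loaded, a statement like `spark.sql("MERGE INTO glue_catalog.db.tgt t USING src s ON t.id = s.id WHEN MATCHED THEN UPDATE SET * WHEN NOT MATCHED THEN INSERT *")` should parse; without them, Spark rejects MERGE INTO at parse time.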
I need to insert a DynamicFrame, created by reading a table from Redshift, into RDS, and the frame includes a column that has string values in Korean. My target DB's default encoding is not UTF-8, so I needed...
1 answer · 0 votes · 396 views · asked a year ago
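When the target database's default charset can't hold Hangul, the usual options are either setting the client encoding on the JDBC connection (for MySQL, Connector/J's `characterEncoding` URL parameter) or transcoding the column values before the write. A minimal sketch of the transcoding side, assuming an EUC-KR target (the encoding name and helper are illustrative, not from the question):

```python
def to_target_encoding(value: str, encoding: str = "euc-kr") -> bytes:
    """Re-encode a string for a non-UTF-8 target, replacing anything
    the target charset cannot represent instead of failing the load."""
    return value.encode(encoding, errors="replace")


sample = "사용자"  # Korean for "user"
encoded = to_target_encoding(sample)
# EUC-KR stores each Hangul syllable in 2 bytes, and the round trip
# is lossless for representable text:
assert encoded.decode("euc-kr") == sample
```

Fixing the connection encoding is generally preferable when possible, since it keeps the Glue job writing plain strings and leaves the conversion to the driver.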