Questions tagged with Extract Transform & Load Data
Hello,
While trying to run the command `DELETE FROM "datasets"."us_spending"` in Athena, on a table from the AWS Data Catalog, I got this error:
```
NOT_SUPPORTED: Cannot delete from non-managed Hive...
```
1 answer · 0 votes · 709 views · asked 4 months ago
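Athena's `DELETE` only works on tables Athena manages for transactional writes, such as Apache Iceberg tables; a plain Hive table registered in the Data Catalog is read-only from Athena's side. A minimal sketch of building a CTAS statement that copies the data into an Iceberg table (the destination table and bucket names are hypothetical), after which row-level `DELETE` works:

```python
def iceberg_ctas(db: str, src_table: str, dst_table: str, s3_location: str) -> str:
    """Build an Athena CTAS statement that copies a Hive table into an
    Apache Iceberg table, which supports row-level DELETE in Athena.
    The destination table name and S3 location are placeholders."""
    return (
        f'CREATE TABLE "{db}"."{dst_table}" '
        f"WITH (table_type = 'ICEBERG', location = '{s3_location}', is_external = false) "
        f'AS SELECT * FROM "{db}"."{src_table}"'
    )

query = iceberg_ctas("datasets", "us_spending", "us_spending_iceberg",
                     "s3://example-bucket/us_spending_iceberg/")
```

Running `query` through the Athena console or `start_query_execution` would produce a table against which `DELETE FROM "datasets"."us_spending_iceberg" ...` is accepted.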
Hello,
For an AWS Data Catalog table, I ran a Glue job (structure: Amazon S3 -> Change Schema -> AWS Glue Data Catalog) and populated the table with only string records. All the actions were done from the...
1 answer · 0 votes · 162 views · asked 4 months ago
Hello,
I am using PySpark in a Glue job to do ETL on a table sourced from S3, where S3 is in turn sourced from MySQL via DMS (table schema as below; the columns 'op', 'row_updated_timestamp' & 'row_commit_timestamp' are...
1 answer · 0 votes · 126 views · asked 4 months ago
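DMS change files typically carry an 'op' flag (I/U/D) plus commit timestamps, and the usual ETL step is to keep only the latest change per primary key and drop keys whose final operation is a delete. A plain-Python sketch of that dedup logic (the column names follow the question; `id` as the primary-key column is an assumption):

```python
def latest_per_key(rows, key="id"):
    """Keep the most recent change per key (by row_updated_timestamp),
    then drop keys whose final operation is a delete ('D').
    Using 'id' as the primary-key column is an assumption."""
    latest = {}
    for row in rows:
        k = row[key]
        if k not in latest or row["row_updated_timestamp"] > latest[k]["row_updated_timestamp"]:
            latest[k] = row
    return [r for r in latest.values() if r["op"] != "D"]

changes = [
    {"id": 1, "op": "I", "row_updated_timestamp": "2024-01-01T00:00:00"},
    {"id": 1, "op": "U", "row_updated_timestamp": "2024-01-02T00:00:00"},
    {"id": 2, "op": "I", "row_updated_timestamp": "2024-01-01T00:00:00"},
    {"id": 2, "op": "D", "row_updated_timestamp": "2024-01-03T00:00:00"},
]
result = latest_per_key(changes)  # only id 1 survives, with its latest 'U' row
```

In PySpark the same idea is usually expressed with a window ordered by the timestamp descending, taking row number 1 per key and filtering out `op == 'D'`.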
I'm trying to build an ETL pipeline with AWS Glue, and the first step is to copy raw data from the original source to a staging bucket. The job is rather simple: source is a data catalog table (from...
1 answer · 0 votes · 254 views · asked 4 months ago
Hello,
In a Glue ETL job made of the nodes Amazon S3, Change Schema, and AWS Glue Data Catalog, with the table "us_spending" backed by S3, I get the following error:
> Error Category: PERMISSION_ERROR;...
1 answer · 0 votes · 214 views · asked 4 months ago
I am looking for the best way to pass a parameter from one Glue job to another within a Step Functions state machine.
Each day, I will receive a file. In the file there will be data for certain dates. The first...
1 answer · 0 votes · 755 views · asked 4 months ago
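A common pattern here is to let the state machine pass the value into each `glue:startJobRun` task as a job argument: Glue job parameters are keyed with a leading `--`, and the script reads them back with `getResolvedOptions`. A small sketch of building that Arguments map (the parameter name `process_date` is hypothetical):

```python
def glue_job_arguments(params: dict) -> dict:
    """Build the Arguments map for a glue:startJobRun call.
    Glue job parameters must be keyed with a leading '--'."""
    return {f"--{name}": str(value) for name, value in params.items()}

args = glue_job_arguments({"process_date": "2024-05-01"})
# Inside the Glue script the value is read back with:
#   from awsglue.utils import getResolvedOptions
#   opts = getResolvedOptions(sys.argv, ["process_date"])
```

The first job can also write its result to the Step Functions output (e.g. via a Lambda or the job-run result), letting the state machine template the value into the next task's `Arguments` with a JsonPath reference.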
Hi. I am trying to run an AWS Glue job where I transfer data from S3 to Amazon Redshift. However, I am receiving the following error:
```
Error Category: UNCLASSIFIED_ERROR; An error occurred while...
```
2 answers · 0 votes · 1037 views · asked 4 months ago
```
from pyspark.sql.functions import input_file_name
from awsglue.dynamicframe import DynamicFrame

# Read the Parquet files and tag each row with its source file
df = spark.read.parquet("s3://folder/")
df = df.withColumn('filename', input_file_name())
AmazonS3_node1697616892615 = DynamicFrame.fromDF(df, glueContext, "s3sparkread")
```
If this is the code...
1 answer · 0 votes · 346 views · asked 4 months ago
I'm trying to implement change data capture (CDC) using AWS Glue and don't want to use DMS. I'm trying to transfer data between two Oracle RDS instances which are in different AWS accounts. Here I am trying to...
1 answer · 0 votes · 501 views · asked 4 months ago
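Without DMS, a Glue-only approach that often works is a high-watermark incremental pull: persist the last extracted timestamp (for example in SSM Parameter Store or S3) and query only newer rows from the source on each run. A sketch of the watermark query builder (the table and column names are hypothetical):

```python
def incremental_query(table: str, ts_column: str, last_watermark: str) -> str:
    """Build the incremental-extract SQL for a high-watermark CDC pull.
    Only rows changed after the stored watermark are selected; the
    caller stores max(ts_column) of the batch as the next watermark."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE {ts_column} > TIMESTAMP '{last_watermark}'"
    )

q = incremental_query("orders", "updated_at", "2024-05-01 00:00:00")
```

This captures inserts and updates but not hard deletes; detecting deletes without DMS requires soft-delete flags or periodic full-key comparisons.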
I'm trying to implement change data capture (CDC) using AWS Glue and don't want to use DMS. I'm trying to transfer data between two Oracle RDS instances which are in different AWS accounts. Here I am trying to...
1 answer · 0 votes · 476 views · asked 4 months ago
Is it possible to conveniently mask all columns in a Redshift table? The example in the [docs](https://docs.aws.amazon.com/redshift/latest/dg/t_ddm.html#ddm-example) only masks one column, but is there...
1 answer · 0 votes · 482 views · asked 4 months ago
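Redshift dynamic data masking attaches one policy per column, so "mask everything" in practice means generating an `ATTACH MASKING POLICY` statement per column (the column list can come from `svv_all_columns`). A sketch that generates the DDL from a column list, modeled on the attach syntax in the linked docs (the table, columns, and policy name here are hypothetical):

```python
def attach_masking_all(table: str, columns: list, policy: str = "mask_all") -> list:
    """Generate one ATTACH MASKING POLICY statement per column so the
    whole table is covered. The policy itself (e.g. one returning NULL
    or a fixed string) must be created first with CREATE MASKING POLICY."""
    return [
        f"ATTACH MASKING POLICY {policy} ON {table}({col}) TO PUBLIC;"
        for col in columns
    ]

stmts = attach_masking_all("customers", ["name", "email", "phone"])
```

Each generated statement is then executed against Redshift; note the single policy must be type-compatible with every column it is attached to.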
For a DeltaTarget, glue.create_crawler is not recognizing the parameter "CreateNativeDeltaTable":
```
Unknown parameter in Targets.DeltaTargets[0]: "CreateNativeDeltaTable", must be one of: DeltaTables, ConnectionName, WriteManifest.
```
However, the documentation shows the parameter (see...
1 answer · 0 votes · 170 views · asked 4 months ago
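That "Unknown parameter" message is raised client-side by the botocore service model, not by Glue itself: an older boto3/botocore release only knows `DeltaTables`, `ConnectionName` and `WriteManifest` for `DeltaTargets`, so upgrading boto3 is the usual fix. A sketch that builds the `Targets` payload and only sends the newer key when it is actually set (the S3 path is hypothetical):

```python
def delta_targets(delta_tables, connection_name=None,
                  write_manifest=False, create_native=None):
    """Build the Targets argument for glue.create_crawler with one
    DeltaTargets entry. CreateNativeDeltaTable is included only when
    explicitly set, since older botocore models reject the key."""
    target = {"DeltaTables": delta_tables, "WriteManifest": write_manifest}
    if connection_name is not None:
        target["ConnectionName"] = connection_name
    if create_native is not None:
        target["CreateNativeDeltaTable"] = create_native
    return {"DeltaTargets": [target]}

targets = delta_targets(["s3://example-bucket/delta/table1/"], create_native=True)
```

With the dict built, the call would be `glue.create_crawler(Name=..., Role=..., Targets=targets, ...)` on a boto3 version whose model includes the parameter.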