How to pass an S3 file (table) to DynamoDB as part of an ETL job


Hi, I am building an ETL job which should start by taking a file from S3 and copying it to DynamoDB. I tried to build it on the ETL canvas, but I was not able to find an option to pass the file to DynamoDB. How can I do this?

Manually, I used to import my S3 files to DynamoDB and then create a crawler to make the data visible (in the Data Catalog) to the ETL job. But now I want to do this as part of the ETL process itself, to avoid any manual human intervention.

I am not an expert on AWS Glue, so any advice will be greatly appreciated.

Thanks

Alejandro

2 Answers

I believe you know how to use Glue to import data into DynamoDB, but you are concerned about the manual intervention.

To avoid any manual intervention, you can use AWS Glue triggers. When fired, a trigger can start specified jobs and crawlers; a trigger fires on demand, on a schedule, or on a combination of events. This removes the need for you to manually crawl your S3 data, and the complete solution is handled by AWS Glue.
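As a rough sketch of that chaining, the snippet below builds the request bodies for two triggers via the boto3 Glue API: a `SCHEDULED` trigger that fires the crawler on a cron schedule, and a `CONDITIONAL` trigger that starts the job once the crawler succeeds. The crawler and job names are hypothetical placeholders; substitute your own.

```python
# Hypothetical names -- replace with your own crawler and job.
CRAWLER_NAME = "s3-source-crawler"
JOB_NAME = "s3-to-dynamodb-job"

def build_scheduled_trigger(name, schedule, crawler_name):
    """Request body for glue.create_trigger(): fire the crawler on a cron schedule."""
    return {
        "Name": name,
        "Type": "SCHEDULED",
        "Schedule": schedule,  # Glue cron syntax, e.g. every day at 02:00 UTC
        "Actions": [{"CrawlerName": crawler_name}],
        "StartOnCreation": True,
    }

def build_conditional_trigger(name, crawler_name, job_name):
    """Request body for a CONDITIONAL trigger: start the job after the crawler succeeds."""
    return {
        "Name": name,
        "Type": "CONDITIONAL",
        "Predicate": {
            "Logical": "AND",
            "Conditions": [{
                "LogicalOperator": "EQUALS",
                "CrawlerName": crawler_name,
                "CrawlState": "SUCCEEDED",
            }],
        },
        "Actions": [{"JobName": job_name}],
        "StartOnCreation": True,
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials with Glue permissions

    glue = boto3.client("glue")
    glue.create_trigger(**build_scheduled_trigger(
        "nightly-crawl", "cron(0 2 * * ? *)", CRAWLER_NAME))
    glue.create_trigger(**build_conditional_trigger(
        "run-job-after-crawl", CRAWLER_NAME, JOB_NAME))
```

You can also group the crawler and job into a Glue Workflow so the whole pipeline is visible and monitorable as a single unit.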

Moreover, if you are importing into new tables, I suggest using the DynamoDB Import from S3 feature; note, however, that it does not currently support importing into existing tables.

AWS
EXPERT
answered 2 years ago

The visual jobs don't have a DynamoDB target yet. What you can do is run a crawler on the DynamoDB table so it is linked with the catalog, and then create a visual Glue job that reads from S3 and saves to that catalog table.
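If you are willing to drop out of the visual editor into script mode, Glue ETL can also write straight to DynamoDB through the `dynamodb` connection type, with no catalog table in the sink path. The sketch below assumes a CSV source and placeholder names for the S3 path and target table; it only runs inside a Glue job environment, where the `awsglue` and `pyspark` modules are provided.

```python
def dynamodb_sink_options(table_name, write_percent=1.0):
    """connection_options for writing a DynamicFrame to DynamoDB.

    dynamodb.throughput.write.percent caps the fraction of the table's
    write capacity the job may consume (1.0 = up to 100%).
    """
    return {
        "dynamodb.output.tableName": table_name,
        "dynamodb.throughput.write.percent": str(write_percent),
    }

def run_job(s3_path, table_name):
    # Glue-managed imports: only available inside an AWS Glue job.
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext

    glue_context = GlueContext(SparkContext.getOrCreate())

    # Read the source file from S3 (CSV with a header row assumed here).
    frame = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": [s3_path]},
        format="csv",
        format_options={"withHeader": True},
    )

    # Write the DynamicFrame directly to the DynamoDB table.
    glue_context.write_dynamic_frame_from_options(
        frame=frame,
        connection_type="dynamodb",
        connection_options=dynamodb_sink_options(table_name),
    )
```

With this approach the crawler is only needed if you also want the DynamoDB table queryable through the Data Catalog, not for the write itself.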

AWS
EXPERT
answered 2 years ago
