1 Answer
In a visual job, you can use an S3 source and specify JSON format, with an optional JsonPath, or do the same by reading from a JSON table that you build with a crawler.
Once the source reads the data the way you need it, you can store it in DynamoDB to use as the source for other jobs.
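To make the JsonPath option concrete, here is a minimal sketch in plain Python of what a JsonPath such as `$.items[*]` selects from a nested document. The JSON structure and field names are made up for illustration; in a Glue visual job you would set the path in the S3 source node's format options instead of writing code.

```python
import json

# Sample nested JSON, standing in for the contents of an S3 object
# (this structure is hypothetical, chosen only for illustration).
raw = '''
{
  "items": [
    {"id": 1, "name": "alpha", "price": 9.5},
    {"id": 2, "name": "beta", "price": 3.2}
  ]
}
'''

doc = json.loads(raw)

# A JsonPath of "$.items[*]" selects each element of the "items" array;
# in plain Python that selection is simply:
records = doc["items"]

# Each selected element becomes one record (one row) for the job.
for rec in records:
    print(rec["id"], rec["name"], rec["price"])
```

The point is that the JsonPath lets you point the source at the repeated element inside the document, so each match becomes one row of the structured data the job works on.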
Thanks for your response! Sorry, I think it will help if I give you my example flow when using tabular (structured) data:
S3 (csv) [Sources] -> Select fields [Transforms] -> SQL Query [Transforms] -> S3 (csv) [targets]
But now the source data is in JSON format, which I need to transform into table/structured data so I can do the data transformation with my SQL Query.
I'm new to AWS; I usually use ETL tools like Pentaho/Talend, so I prefer a visual job over writing a script.
the "source" transfoms the data into a structured in memory table, which you can see in the "Data preview" panel and from then transform, run SQL or whatever you need