1 Answer
In a visual job, you can use an S3 source and specify JSON format, with an optional JsonPath, or do the same by reading from a JSON table that you can build with a crawler.
Once the source reads the data the way you need, you can store it in DynamoDB and use it as the source for other jobs.
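To illustrate what the JSON source step does conceptually, here is a minimal stdlib-only sketch (no AWS calls; the payload and field names are made up): a JsonPath like `$.records[*]` just tells the source which array holds the row-level objects, and each object then becomes one structured row for downstream transforms or a DynamoDB writer.

```python
import json

# Hypothetical payload standing in for a JSON object read from S3;
# the real visual job would configure this on the S3 source node.
payload = json.loads("""
{
  "records": [
    {"id": 1, "name": "alpha"},
    {"id": 2, "name": "beta"}
  ]
}
""")

# A JsonPath of "$.records[*]" would select the objects inside the
# "records" array; here we index into that array directly.
rows = payload["records"]

# Each JSON object is now one structured row that later transform
# steps (or a DynamoDB target) can consume.
for row in rows:
    print(row["id"], row["name"])
```

In the visual editor you never write this by hand; the point is only that the source node already yields row-shaped records once the format and JsonPath are set.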
Thanks for your response! But sorry, I think I should give you my example flow when using table-form (structured) data:
S3 (csv) [Sources] -> Select fields [Transforms] -> SQL Query [Transforms] -> S3 (csv) [Targets]
But now the source data is in JSON format, which means I need to transform it into table/structured data first so I can do my data transformation with the SQL Query.
I'm new to AWS; I usually just use ETL tools like Pentaho/Talend, so I prefer the visual job over the script one.
The "source" transforms the data into a structured in-memory table, which you can see in the "Data preview" panel, and from there you can transform it, run SQL, or do whatever else you need.
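As a rough stdlib analogy of that in-memory table (this is not the Glue API; the table name, fields, and values are invented for illustration), the JSON records can be loaded into a SQL-queryable table and then queried exactly like the SQL Query transform would:

```python
import json
import sqlite3

# Sample JSON records standing in for what the S3 source node reads.
raw = '[{"id": 1, "amount": 10.5}, {"id": 2, "amount": 4.0}]'
records = json.loads(raw)

# Load the structured rows into an in-memory table; conceptually this
# is what the source step has already done before SQL Query runs.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
con.executemany(
    "INSERT INTO orders (id, amount) VALUES (?, ?)",
    [(r["id"], r["amount"]) for r in records],
)

# The SQL Query transform then works against the table as usual.
total = con.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 14.5
```

So in the visual job you can keep the same flow as with CSV: point the S3 source at the JSON files (with JsonPath if the rows are nested), and the SQL Query transform sees a regular table.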