I don't believe you have the option to output only a single file when using Data Pipeline. You are using a pre-built solution based on the emr-dynamodb-connector, which limits your ability to customize the job. You can, of course, provide your own code to Data Pipeline, in which case you can achieve your goal of a single output file.
You could use AWS Glue to achieve this with Spark: before you write the data to S3, call `repartition` or `coalesce` to reduce the output to a single partition. If you understand Hadoop or Spark, you will know that reducing to one partition collapses the distribution of the job down to essentially a single reducer. This can cause problems if the table holds a lot of data, because a single node in your cluster must then hold the entire contents of the table, which can lead to storage or OOM issues.
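As a minimal sketch of the approach above, assuming a Glue (or any Spark) environment and hypothetical S3 paths — the real job would read the DynamoDB table through whatever connector you configure:

```python
from pyspark.sql import SparkSession

# Hypothetical job name and paths; adjust to your environment.
spark = SparkSession.builder.appName("ddb-export-single-file").getOrCreate()

# Assume the table data has been read into a DataFrame (here from a
# hypothetical JSON location; a Glue job could use its DynamoDB connector).
df = spark.read.json("s3://my-export-bucket/raw/")

# coalesce(1) merges all existing partitions into one without a full
# shuffle, so Spark writes a single part file. Note the trade-off from
# the paragraph above: every row now flows through one task on one
# executor, which is where the storage/OOM risk for large tables arises.
df.coalesce(1).write.mode("overwrite").json("s3://my-export-bucket/single/")
```

`coalesce(1)` is usually preferred over `repartition(1)` for this case, since `repartition` forces a full shuffle while `coalesce` only collapses the partitions that already exist.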
- Asked 1 year ago
- AWS OFFICIAL Updated 2 years ago