I don't believe you have the option to output only a single file when using Data Pipeline. You are using a pre-built solution based on the emr-dynamodb-connector, which limits your ability to customize the export. You can, of course, supply your own code to Data Pipeline, in which case a single-file output is achievable.
You could use AWS Glue to achieve this with Spark: before writing the data to S3, call `repartition(1)` or `coalesce(1)` to reduce the DataFrame to a single partition. If you have some understanding of Hadoop or Spark, you will recognize that reducing the partitions collapses the distributed job down to essentially a single reducer. This can cause problems if the table holds a lot of data, because a single node in your cluster must then hold the entire contents of the table, which can lead to storage or OOM issues.