I want to upload data in bulk to my Amazon DynamoDB table.
To upload data to DynamoDB in bulk, use one of the following options.
BatchWriteItem API operation
Use the BatchWriteItem API operation to group up to 25 PutItem requests into a single call. To make the data load faster, use parallel processes or threads in your code to issue multiple BatchWriteItem calls at the same time, as shown in the following sketch.
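The following is a minimal Python (Boto3) sketch of this approach. The table name, item layout, and thread count are placeholders; adjust them for your data and your table's write throughput.

import itertools
import time
from concurrent.futures import ThreadPoolExecutor

import boto3

# Low-level Boto3 clients are thread safe, so threads can share one client.
client = boto3.client("dynamodb")
TABLE_NAME = "MyTable"  # hypothetical table name

def chunks(items, size=25):
    """Yield lists of up to 25 items, the BatchWriteItem per-call limit."""
    it = iter(items)
    while True:
        chunk = list(itertools.islice(it, size))
        if not chunk:
            return
        yield chunk

def write_batch(batch):
    """Issue one BatchWriteItem call and retry any unprocessed items."""
    request_items = {TABLE_NAME: [{"PutRequest": {"Item": item}} for item in batch]}
    while request_items:
        response = client.batch_write_item(RequestItems=request_items)
        request_items = response.get("UnprocessedItems")
        if request_items:
            time.sleep(1)  # simple backoff before retrying throttled items

# Sample items in DynamoDB JSON format; replace with your own data.
items = [{"pk": {"S": f"item-{i}"}, "payload": {"S": "example"}} for i in range(1000)]

# Issue several BatchWriteItem calls in parallel threads to speed up the load.
with ThreadPoolExecutor(max_workers=8) as executor:
    list(executor.map(write_batch, chunks(items)))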
AWS Data Pipeline
If the data is in Amazon Simple Storage Service (Amazon S3), then you can use AWS Data Pipeline to import it into DynamoDB. Data Pipeline automates the process of creating an Amazon EMR cluster and loading your data from Amazon S3 into DynamoDB with parallel BatchWriteItem requests. When you use Data Pipeline, you don't have to write the code for the parallel transfer yourself. For more information, see Importing data from Amazon S3 to DynamoDB.
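If you prefer to drive Data Pipeline programmatically instead of through the console, the following Boto3 sketch shows the create, define, and activate steps. The pipeline name is hypothetical, and the pipeline definition itself is left as a placeholder because it's easiest to copy from the "Import DynamoDB backup data from S3" console template.

import boto3

datapipeline = boto3.client("datapipeline")

# Create an empty pipeline; the name and unique ID are hypothetical.
pipeline_id = datapipeline.create_pipeline(
    name="s3-to-dynamodb-import",
    uniqueId="s3-to-dynamodb-import-001",  # idempotency token
)["pipelineId"]

# The definition describes the EMR cluster, the S3 input, and the DynamoDB
# output. Placeholder here: copy the objects from the console template or
# author them from the Data Pipeline documentation before you activate.
pipeline_objects = []

datapipeline.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=pipeline_objects,
)
datapipeline.activate_pipeline(pipelineId=pipeline_id)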
AWS Glue
If the data that you want to upload was exported to Amazon S3 from another DynamoDB table with the DynamoDB export feature, then use AWS Glue. This option is efficient for large datasets because the export feature uses the DynamoDB backup functionality and doesn't scan the source table. As a result, AWS Glue doesn't affect the performance or availability of the source table. For more information, see Using AWS Glue with Amazon DynamoDB as source and sink.
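As a sketch of what such a job can look like, the following AWS Glue ETL script uses the DynamoDB export connector as the source and DynamoDB as the sink, following the pattern in that documentation. The table ARN, bucket, prefix, and table names are placeholders, and the export connector requires point-in-time recovery to be turned on for the source table.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table with the DynamoDB export connector. This runs an
# export to Amazon S3 instead of scanning the table.
source = glue_context.create_dynamic_frame.from_options(
    connection_type="dynamodb",
    connection_options={
        "dynamodb.export": "ddb",
        "dynamodb.tableArn": "arn:aws:dynamodb:us-east-1:111122223333:table/SourceTable",
        "dynamodb.s3.bucket": "my-export-bucket",
        "dynamodb.s3.prefix": "dynamodb-exports/",
        "dynamodb.unnestDDBJson": True,
    },
)

# Write to the target table. The sink issues parallel BatchWriteItem requests;
# dynamodb.throughput.write.percent controls how much write capacity to use.
glue_context.write_dynamic_frame_from_options(
    frame=source,
    connection_type="dynamodb",
    connection_options={
        "dynamodb.output.tableName": "TargetTable",
        "dynamodb.throughput.write.percent": "1.0",
    },
)

job.commit()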