How to read a DynamoDB table using an AWS Lambda function in Python?
I have a use-case where I have to read DynamoDB table data, convert it into a CSV file, and write it to an S3 bucket using an AWS Lambda function. The table's data is fairly small, around 2-3 MB, so I want to read the entire table every week. I am new to AWS and its services, and I want to know how to read the table without triggers and how to schedule the Lambda function so it runs every week.
Please share the steps to accomplish this task.
Any help would be appreciated.
1 Answer
To schedule a Lambda function to run on a regular basis, use CloudWatch Events.
Scanning a whole DynamoDB table may not be the most efficient way of doing things but here's some code anyway:
import boto3

ddbclient = boto3.client('dynamodb')

def lambda_handler(event, context):
    # Use the scan paginator so tables larger than a single 1 MB response are handled
    paginator = ddbclient.get_paginator('scan')
    iterator = paginator.paginate(TableName='YourTableNameHere')
    for page in iterator:
        for item in page['Items']:
            pass  # do some work here
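Since your goal is a CSV file in S3, the "work" inside that loop could be writing each item as a CSV row and then uploading the result with the S3 client. Here is a minimal sketch of that idea; the bucket name, object key, and the attribute names (id, name) are placeholders you would replace with your own, and it assumes the attributes are stored as DynamoDB string (S) values:

import csv
import io

import boto3

ddbclient = boto3.client('dynamodb')
s3client = boto3.client('s3')

def lambda_handler(event, context):
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(['id', 'name'])  # header row - adjust to your table's attributes

    # Scan the whole table, page by page
    paginator = ddbclient.get_paginator('scan')
    for page in paginator.paginate(TableName='YourTableNameHere'):
        for item in page['Items']:
            # DynamoDB returns typed values, e.g. {'id': {'S': '123'}}
            writer.writerow([item['id']['S'], item['name']['S']])

    # Upload the finished CSV to S3
    s3client.put_object(
        Bucket='your-bucket-name-here',
        Key='dynamodb-export.csv',
        Body=buffer.getvalue()
    )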
Use CloudWatch Events with a cron expression telling it when you want to run the function.
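For example, a weekly schedule could be set up with boto3 as sketched below; the rule name, region, account ID, and function name are placeholders, and the cron expression here fires every Monday at 00:00 UTC. You can also create the same rule from the CloudWatch Events (EventBridge) console without any code.

import boto3

events = boto3.client('events')
lambdaclient = boto3.client('lambda')

function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:YourFunctionNameHere'

# Create (or update) a rule that fires every Monday at 00:00 UTC
rule = events.put_rule(
    Name='weekly-dynamodb-export',
    ScheduleExpression='cron(0 0 ? * MON *)'
)

# Point the rule at the Lambda function
events.put_targets(
    Rule='weekly-dynamodb-export',
    Targets=[{'Id': 'weekly-dynamodb-export-target', 'Arn': function_arn}]
)

# Allow CloudWatch Events to invoke the function
lambdaclient.add_permission(
    FunctionName='YourFunctionNameHere',
    StatementId='weekly-dynamodb-export-permission',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)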
Okay, cool. And how can I schedule this Lambda function to run every week?