Create a Lambda execution role that has access to your S3 bucket and the location in that bucket where you will store the CSV file. Here is how to create a Lambda execution role; you'll have to add an additional policy to this role based on which resources your Lambda needs to access.
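For illustration, here is a rough boto3 sketch of that role setup. The role name, bucket, and prefix are hypothetical placeholders, and the inline policy only grants the `s3:PutObject` access the CSV upload needs:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical names -- replace with your own role, bucket, and prefix.
ROLE_NAME = "csv-export-lambda-role"
BUCKET = "my-csv-bucket"

# Trust policy that lets the Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Managed policy for CloudWatch Logs access (standard for Lambda).
iam.attach_role_policy(
    RoleName=ROLE_NAME,
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)

# Inline policy scoped to the bucket/prefix where the CSV will land.
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/exports/*",
    }],
}

iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="csv-bucket-write",
    PolicyDocument=json.dumps(s3_policy),
)
```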
Create a Lambda function with a Python runtime (python3.10 or so) and adapt your Python script in that Lambda function, importing all the required boto3 modules. Make sure that you use the role you set up in the first step while creating this function. Here is how to create a Lambda function from the console. Since your code may already be using a pandas DataFrame to get the data from the Google API, you may need to add the pandas module (or any other required modules) as a layer to your Lambda function. Here is how to add layers to Lambda functions.
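As a hedged sketch of what that function could look like, the handler below builds a pandas DataFrame and uploads it to S3 as CSV. `fetch_rows_from_google_api` is a stand-in for your existing Google API client code, and the bucket and key are placeholders:

```python
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

BUCKET = "my-csv-bucket"   # placeholder bucket name
KEY = "exports/data.csv"   # placeholder object key

def fetch_rows_from_google_api():
    # Stand-in for your existing Google API client code.
    return [{"id": 1, "value": "example"}]

def lambda_handler(event, context):
    df = pd.DataFrame(fetch_rows_from_google_api())

    # Serialize the DataFrame to CSV in memory, then upload to S3.
    buf = io.StringIO()
    df.to_csv(buf, index=False)
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=buf.getvalue())

    return {"rows_written": len(df)}
```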
Create an EventBridge rule with a cron schedule and this Lambda function as the target. Here is the tutorial Schedule AWS Lambda functions using EventBridge.
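If you prefer to script this step too, a rough boto3 sketch follows; the rule name, function name, and ARN are placeholders. Note the function also needs a resource policy allowing EventBridge to invoke it:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Placeholder identifiers -- substitute your own.
RULE_NAME = "daily-csv-export"
FUNCTION_NAME = "csv-export-function"
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:csv-export-function"

# Run every day at 12:00 UTC.
rule = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression="cron(0 12 * * ? *)",
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="allow-eventbridge-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the Lambda function.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "csv-export-lambda", "Arn": FUNCTION_ARN}],
)
```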
Hope you find this information helpful.
Comment here if you have additional questions; happy to help.
Abhishek
Abhishek's answer is spot on. If you want to automate the creation of the Lambda, you can use AWS SAM to automate the deployment of the function, as well as to create the EventBridge rule that schedules it. E.g., initialize a project with the SAM CLI and pick option "5. Scheduled Task"; this is the piece in the template.yaml that will run the function:
```yaml
# This example runs every hour.
Events:
  CloudWatchEvent:
    Type: Schedule
    Properties:
      Schedule: cron(0 * * * ? *)
```
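Note that EventBridge cron expressions use six fields (minutes, hours, day-of-month, month, day-of-week, year), so `cron(0 * * * ? *)` fires at the top of every hour. After editing the template, `sam build` and `sam deploy` will create the function and the schedule together.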