One method you could use to approach this would be a scheduled EventBridge rule that invokes a Lambda function every 24 hours, with the Lambda starting an S3 export task for the most recent day's log data.
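A minimal sketch of that Lambda handler, assuming hypothetical log group, bucket, and prefix names (replace them with your own; the bucket policy must allow CloudWatch Logs to write to it):

```python
# Sketch of the scheduled-export approach. The log group and bucket names
# below are hypothetical placeholders, not values from the original answer.
import time

LOG_GROUP = "/my/app/logs"          # hypothetical log group to export
DEST_BUCKET = "my-archive-bucket"   # hypothetical S3 bucket


def export_window(now_ms):
    """Return the (fromTime, to) pair covering the 24 hours before now_ms."""
    day_ms = 24 * 60 * 60 * 1000
    return now_ms - day_ms, now_ms


def handler(event, context):
    import boto3  # imported here so the module loads without AWS deps
    logs = boto3.client("logs")
    from_ms, to_ms = export_window(int(time.time() * 1000))
    # CreateExportTask copies the window's log events to the S3 bucket.
    resp = logs.create_export_task(
        logGroupName=LOG_GROUP,
        fromTime=from_ms,
        to=to_ms,
        destination=DEST_BUCKET,
        destinationPrefix="cloudwatch-exports",  # hypothetical prefix
    )
    return resp["taskId"]
```

One caveat with this approach: CloudWatch Logs allows only one active export task per account at a time, so the schedule needs to leave enough room for each export to finish.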
However, another way to ensure that log data is continually archived from CloudWatch to S3, without losing data that falls outside the Lambda's run window, is to attach a subscription filter to each log group you wish to archive and deliver the events to a Kinesis Data Firehose delivery stream with an S3 destination.
[CW Log Group Subscription Filter] -> [Kinesis Firehose] -> [S3]
By setting the subscription filter's filter pattern to match all logs, every log event ingested into the log group is forwarded to S3 via the Kinesis Firehose delivery stream, with no Lambdas needed to export data. Note that a subscription filter only forwards logs ingested after it is created; any logs ingested before that would still need to be exported.
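The subscription-filter step can be sketched as below. The ARNs, filter name, and function name are hypothetical placeholders; the IAM role passed as `roleArn` must allow CloudWatch Logs to call `firehose:PutRecord` on the delivery stream:

```python
# Sketch of attaching a subscription filter that forwards ALL log events
# from a log group to a Kinesis Data Firehose delivery stream.
FILTER_PATTERN = ""  # an empty filter pattern matches every log event


def attach_firehose_subscription(log_group, firehose_arn, role_arn):
    """Hypothetical helper: subscribe log_group to the Firehose stream."""
    import boto3  # imported here so the module loads without AWS deps
    logs = boto3.client("logs")
    logs.put_subscription_filter(
        logGroupName=log_group,
        filterName="archive-to-s3",   # hypothetical filter name
        filterPattern=FILTER_PATTERN,
        destinationArn=firehose_arn,  # the delivery stream's ARN
        roleArn=role_arn,             # role CloudWatch Logs assumes
    )
```

Firehose then buffers the incoming events and writes them to the configured S3 bucket in batches, so no per-day export step is needed.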
Resources for configuring this can be found in the AWS documentation on CloudWatch Logs subscription filters and on Kinesis Data Firehose delivery streams with S3 destinations.