Archiving CloudWatch logs without data loss


I'm trying to archive logs to S3 before they expire, and I have written Lambdas to achieve that. The issue now is that some of the log data is lost, because some of it is written after the archiving is done. I need help with how to architect this without any data loss, and if possible I would also like to know the exact time at which the retention policy's deletion triggers once the retention period is up.

Asked 2 years ago · 753 views

1 Answer

One approach would be to use a scheduled EventBridge rule to trigger the Lambdas automatically every 24 hours, creating an S3 export task for the most recent day's log data.
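As a rough sketch of what that export Lambda could look like, assuming a function role that allows logs:CreateExportTask and an S3 bucket whose bucket policy permits CloudWatch Logs to write to it (the log group and bucket names below are placeholders):

```python
import time
import boto3

logs = boto3.client("logs")

# Placeholder names -- substitute your own log group and bucket.
LOG_GROUP = "/aws/my-application"
DESTINATION_BUCKET = "my-log-archive-bucket"

def handler(event, context):
    # Export the previous 24 hours of log data. create_export_task
    # takes timestamps in milliseconds since the epoch.
    now_ms = int(time.time() * 1000)
    one_day_ms = 24 * 60 * 60 * 1000

    response = logs.create_export_task(
        taskName=f"daily-archive-{now_ms}",
        logGroupName=LOG_GROUP,
        fromTime=now_ms - one_day_ms,
        to=now_ms,
        destination=DESTINATION_BUCKET,
        destinationPrefix="cloudwatch-exports",
    )
    return response["taskId"]
```

Keep in mind that CloudWatch Logs allows only one active (pending or running) export task per account per region, so if you archive several log groups this way the export tasks need to run sequentially.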

However, another way to ensure that log data is continuously archived from CloudWatch to S3, without losing data written outside the window covered by the Lambdas, is to put a subscription filter on each log group you wish to archive and deliver the events to a Kinesis Firehose delivery stream with an S3 destination.

[CW Log Group Subscription Filter] -> [Kinesis Firehose] -> [S3]

By setting the filter pattern on the subscription filter to capture all logs (an empty pattern matches every event), all log data ingested into the log group is forwarded to S3 via the Kinesis Firehose delivery stream without needing to run the Lambdas to export data. Note that this only sends logs ingested after the creation of the subscription filter to S3; any logs ingested prior to the subscription filter's creation would still need to be exported. A sketch of creating such a filter follows below.
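As an illustrative sketch, assuming the Firehose delivery stream and an IAM role that lets CloudWatch Logs put records into it already exist (the ARNs below are placeholders):

```python
import boto3

logs = boto3.client("logs")

# Placeholder ARNs -- substitute your own delivery stream and role.
FIREHOSE_ARN = "arn:aws:firehose:us-east-1:123456789012:deliverystream/log-archive"
ROLE_ARN = "arn:aws:iam::123456789012:role/CWLtoFirehoseRole"

logs.put_subscription_filter(
    logGroupName="/aws/my-application",
    filterName="archive-all-to-s3",
    # An empty filter pattern matches every log event in the group.
    filterPattern="",
    destinationArn=FIREHOSE_ARN,
    roleArn=ROLE_ARN,
)
```

One thing to note: records that CloudWatch Logs delivers through a subscription filter arrive gzip-compressed, so depending on how you plan to consume the archived objects in S3 you may want a Firehose data transformation or decompression on read.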

Resources for configuring this can be found here:

Using CloudWatch Logs subscription filters - Example 3: Subscription filters with Amazon Kinesis Data Firehose

How do I create, configure, and troubleshoot a subscription filter to Kinesis using the CloudWatch console?

AWS
SUPPORT ENGINEER
answered 2 years ago
