Archiving CloudWatch logs without data loss

I'm trying to archive logs to S3 before they expire, and I have written Lambdas to achieve that. The issue is that some log data is lost, because some of it is written after the archiving has already run. I need help architecting this so there is no data loss, and if possible I would like to know the exact time at which the retention policy deletes the log events once their retention period is up.

Asked 2 years ago · Viewed 754 times

1 Answer

One approach would be a scheduled EventBridge rule that triggers the Lambdas automatically every 24 hours, starting an S3 export task for the most recent day's log data.
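
As a rough sketch of that first approach, the scheduled Lambda could call the CreateExportTask API for the preceding 24-hour window. The log group name and destination bucket below are placeholders, assumed for illustration only:

```python
import os
import time

import boto3

logs = boto3.client("logs")

# Placeholder names -- substitute your own log group and archive bucket.
LOG_GROUP = os.environ.get("LOG_GROUP", "/aws/lambda/my-app")
DEST_BUCKET = os.environ.get("DEST_BUCKET", "my-log-archive-bucket")


def handler(event, context):
    """Export the previous 24 hours of log data to S3.

    Intended to be invoked by a scheduled EventBridge rule,
    e.g. rate(24 hours). CloudWatch Logs timestamps are in
    milliseconds since the epoch.
    """
    now_ms = int(time.time() * 1000)
    day_ms = 24 * 60 * 60 * 1000

    response = logs.create_export_task(
        taskName=f"archive-{now_ms}",
        logGroupName=LOG_GROUP,
        fromTime=now_ms - day_ms,
        to=now_ms,
        destination=DEST_BUCKET,
        destinationPrefix=f"exports/{LOG_GROUP.strip('/')}",
    )
    return response["taskId"]
```

Note that CloudWatch Logs allows only one active export task per account per Region at a time, so archiving many log groups this way requires serializing the exports or retrying when a task is already running.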

However, another way to ensure that log data is continually archived from CloudWatch to S3, without losing data that arrives outside the Lambda export window, is to put a subscription filter on the log groups you wish to archive and deliver them to a Kinesis Data Firehose delivery stream with an S3 destination.

[CW Log Group Subscription Filter] -> [Kinesis Firehose] -> [S3]

By setting the filter pattern on the subscription filter to capture all logs, this forwards all log data ingested into the log group to S3 via the Kinesis Data Firehose delivery stream, without needing to run the Lambdas to export data. Note that this only sends logs ingested after the creation of the subscription filter to S3; any logs ingested before the subscription filter was created would still need to be exported.
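
A minimal sketch of wiring that up with boto3, assuming the Firehose delivery stream and the IAM role that lets CloudWatch Logs write to it already exist (the ARNs below are placeholders):

```python
import boto3

logs = boto3.client("logs")

# Placeholder ARNs -- substitute your delivery stream and IAM role.
FIREHOSE_ARN = "arn:aws:firehose:us-east-1:123456789012:deliverystream/log-archive"
ROLE_ARN = "arn:aws:iam::123456789012:role/CWLtoFirehoseRole"

logs.put_subscription_filter(
    logGroupName="/aws/lambda/my-app",
    filterName="archive-all",
    filterPattern="",  # an empty pattern matches every log event
    destinationArn=FIREHOSE_ARN,
    roleArn=ROLE_ARN,
)
```

The empty `filterPattern` is what makes this a full archive rather than a selective one; from that point on, every new log event is streamed to the S3 destination as it is ingested.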

Resources for configuring this can be found here:

Using CloudWatch Logs subscription filters - Example 3: Subscription filters with Amazon Kinesis Data Firehose

How do I create, configure, and troubleshoot a subscription filter to Kinesis using the CloudWatch console?

AWS
Support Engineer
Answered 2 years ago
