Archiving CloudWatch logs with no data loss


I'm trying to archive logs to S3 before they expire, and I have written Lambdas to do that. But the issue now is that some of the log data is lost, because some of it is written after the archiving is done. I need help with how to architect this without any data loss, and if possible I'd like to know the exact time at which the retention policy deletes the logs once the retention period is due.

Asked 2 years ago · 753 views

1 Answer

One approach would be to use a scheduled EventBridge rule to trigger the Lambdas automatically every 24 hours, starting an S3 export task for the most recent day's log data.
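As a sketch of that scheduled approach, the Lambda could compute the previous full UTC day's window and start an export task with boto3. The log group, bucket, and prefix names here are illustrative assumptions, not part of the original answer:

```python
from datetime import datetime, timedelta, timezone


def export_window(now: datetime) -> tuple[int, int]:
    """Return (fromTime, to) in epoch milliseconds covering the previous full UTC day."""
    day_start = now.astimezone(timezone.utc).replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    start = day_start - timedelta(days=1)
    return int(start.timestamp() * 1000), int(day_start.timestamp() * 1000)


def export_previous_day(log_group: str, bucket: str) -> str:
    """Start an S3 export task for yesterday's logs and return its task ID."""
    import boto3  # imported here so the window logic above stays dependency-free

    from_ms, to_ms = export_window(datetime.now(timezone.utc))
    logs = boto3.client("logs")
    response = logs.create_export_task(
        logGroupName=log_group,
        fromTime=from_ms,
        to=to_ms,
        destination=bucket,                      # destination S3 bucket name
        destinationPrefix=log_group.strip("/"),  # keep each group under its own prefix
    )
    return response["taskId"]
```

Note that only one export task can be active per account at a time, so a Lambda archiving several log groups would need to poll `describe_export_tasks` before starting the next export.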

However, another way to ensure that log data is continually archived from CloudWatch to S3 without losing data outside the window of running the Lambdas would be to utilize a subscription filter on the log groups you wish to archive, to be delivered to a Kinesis Firehose delivery stream with an S3 destination.

[CW Log Group Subscription Filter] -> [Kinesis Firehose] -> [S3]

By setting the filter pattern on the subscription filter to be empty (an empty pattern matches all log events), all log data ingested into the log group will be forwarded to S3 via the Kinesis Firehose delivery stream, without needing to run the Lambdas to export data. Note that this only sends logs ingested after the creation of the subscription filter to S3. Any logs ingested prior to the subscription filter's creation would still need to be exported.
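A minimal sketch of wiring this up with boto3 follows. The filter name and ARNs are assumptions for illustration; the IAM role must allow CloudWatch Logs to put records onto the delivery stream:

```python
def subscription_filter_params(log_group: str, firehose_arn: str, role_arn: str) -> dict:
    """Build the put_subscription_filter arguments that capture ALL log events."""
    return {
        "logGroupName": log_group,
        "filterName": "archive-to-s3",  # illustrative name
        "filterPattern": "",            # an empty pattern matches every log event
        "destinationArn": firehose_arn,
        "roleArn": role_arn,            # role CloudWatch Logs assumes to write to Firehose
    }


def attach_archive_filter(log_group: str, firehose_arn: str, role_arn: str) -> None:
    """Attach the all-events subscription filter to the log group."""
    import boto3  # imported here so the parameter builder stays dependency-free

    boto3.client("logs").put_subscription_filter(
        **subscription_filter_params(log_group, firehose_arn, role_arn)
    )
```

You could run `attach_archive_filter` once per log group you want archived; from then on, new log events flow to S3 continuously through the delivery stream.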

Resources for configuring this can be found here:

Using CloudWatch Logs subscription filters - Example 3: Subscription filters with Amazon Kinesis Data Firehose

How do I create, configure, and troubleshoot a subscription filter to Kinesis using the CloudWatch console?

AWS
SUPPORT ENGINEER
answered 2 years ago
