Is this a batch move or real-time streaming? You mention a move, but I believe subscription filters only monitor for new events from the point of configuration onward, not historical events. (Need to confirm.)
You could export logs directly to S3 from the console and then import them into Splunk from there. When I have done this in the past, I believe I could only export one or two log groups at a time.
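If you script the export route instead of using the console, a minimal sketch with boto3 might look like the following. The log group, bucket, and time range are placeholder values, and the actual API call is left commented out; note that `create_export_task` runs only one export task per account at a time, which lines up with the one-or-two-at-a-time limit mentioned above:

```python
# import boto3  # AWS SDK; the real call below is commented out

def build_export_task(log_group, bucket, start_ms, end_ms):
    """Build kwargs for logs.create_export_task.
    Times are epoch milliseconds; the prefix keeps exports per log group separate."""
    return {
        "logGroupName": log_group,
        "fromTime": start_ms,
        "to": end_ms,
        "destination": bucket,
        "destinationPrefix": log_group.strip("/").replace("/", "-"),
    }

# Placeholder names for illustration only:
params = build_export_task("/aws/lambda/my-fn", "my-log-archive-bucket",
                           1700000000000, 1700086400000)
# logs = boto3.client("logs")
# logs.create_export_task(**params)  # only one export task is active at a time
print(params["destinationPrefix"])
```

Exports are S3-only, so Splunk would still need to pick the objects up from the bucket afterwards.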
Send CloudWatch logs to Kinesis Firehose https://repost.aws/knowledge-center/cloudwatch-logs-stream-to-kinesis
Kinesis Firehose streams can be sent directly to Splunk for ingestion https://aws.amazon.com/kinesis/data-firehose/splunk/ https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureFirehose
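Wiring a log group to a Firehose stream is done with a subscription filter. A minimal sketch, with placeholder ARNs and the boto3 call itself commented out (the IAM role is an assumption and must allow CloudWatch Logs to write to the stream):

```python
# import boto3  # actual call commented out; ARNs below are placeholders

def build_subscription_filter(log_group, firehose_arn, role_arn):
    """Build kwargs for logs.put_subscription_filter targeting a Firehose stream.
    Subscription filters only forward events ingested AFTER the filter is
    created; historical events are not replayed."""
    return {
        "logGroupName": log_group,
        "filterName": "to-splunk-firehose",
        "filterPattern": "",  # empty pattern forwards every event
        "destinationArn": firehose_arn,
        "roleArn": role_arn,  # role CloudWatch Logs assumes to write to Firehose
    }

params = build_subscription_filter(
    "/aws/lambda/my-fn",
    "arn:aws:firehose:us-east-1:123456789012:deliverystream/to-splunk",
    "arn:aws:iam::123456789012:role/cwl-to-firehose",
)
# boto3.client("logs").put_subscription_filter(**params)
```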
As mentioned, Kinesis may not be usable here for certain reasons.
OK, fair enough: if Kinesis Firehose is considered part of the Kinesis product, then it's out. I was reading it too literally and treating them as separate products. And data can be sent from CloudWatch to Firehose without ever touching a Kinesis stream.
Why not try Splunk Addon for AWS as outlined here - https://docs.splunk.com/Documentation/AddOns/released/AWS/CloudWatchLogs
This is a "pull" approach, meaning Splunk fetches the logs by connecting to AWS. Since the volume of logs will be huge, the push mechanism is preferred: https://docs.splunk.com/Documentation/AddOns/released/AWS/UseCases
I would recommend considering using Amazon Kinesis Data Firehose to reliably deliver logs from CloudWatch Logs to Splunk.
Some key advantages of this approach:
- Kinesis Data Firehose can automatically deliver log data from CloudWatch Logs to Splunk with minimal code required. It handles log aggregation, compression, and secure transport at scale.
- Firehose delivers log data reliably to Splunk with options for data transformation along the way if needed. It can also handle high volumes of log data from CloudWatch Logs.
- This avoids the need to build out and manage your own log delivery infrastructure using Lambda, SNS/SQS, etc., which comes with additional operational overhead.
- Splunk has documentation on how to configure Firehose for log delivery directly to Splunk for ingestion.
To get started, you can create a Firehose delivery stream that sources data from a CloudWatch Logs group and delivers to your Splunk endpoint. The AWS documentation provides steps to set this up. Let me know if you have any other questions!
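As a rough sketch of that setup step, the delivery stream's Splunk destination can be defined as below. The HEC endpoint, token, bucket, and role are all placeholder values, and the boto3 call is commented out; `S3BackupMode="AllEvents"` is worth highlighting because it keeps an S3 copy of every record, which supports the no-lost-logs requirement (failed deliveries are retried and then backed up):

```python
# import boto3  # real call commented out; endpoint/token/ARNs are hypothetical

def build_splunk_stream(name, hec_endpoint, hec_token, backup_bucket_arn, role_arn):
    """Build kwargs for firehose.create_delivery_stream with a Splunk destination."""
    return {
        "DeliveryStreamName": name,
        "DeliveryStreamType": "DirectPut",
        "SplunkDestinationConfiguration": {
            "HECEndpoint": hec_endpoint,  # Splunk HTTP Event Collector URL
            "HECEndpointType": "Raw",
            "HECToken": hec_token,
            "S3BackupMode": "AllEvents",  # back up every record to S3
            "S3Configuration": {
                "RoleARN": role_arn,
                "BucketARN": backup_bucket_arn,
            },
        },
    }

cfg = build_splunk_stream(
    "to-splunk",
    "https://hec.example.com:8088",
    "00000000-0000-0000-0000-000000000000",
    "arn:aws:s3:::my-backup-bucket",
    "arn:aws:iam::123456789012:role/firehose-backup",
)
# boto3.client("firehose").create_delivery_stream(**cfg)
```

The CloudWatch Logs side is then connected with a subscription filter pointing at this stream's ARN, as in the earlier answers.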
The main objective here is reliability and no lost logs, so real-time vs. batch is not a constraint. As mentioned, I'm not a big fan of the export solution, and I know there are limitations with that approach.
Makes perfect sense! Ta