Hi, Olive.
Honestly, I am not confident this is possible with a metric filter alone with any accuracy, since Lambda logs are not structured and can vary in format.
Would you consider using another approach which is enabling Lambda Insights Metrics for the lambdas you want to have an alarm on?
With Lambda Insights Metrics enabled, you would be interested in the following metrics:
- total_memory: The amount of memory allocated to your Lambda function. This is the same as your function's memory size.
- memory_utilization: The maximum memory used, measured as a percentage of the memory allocated to the function.
I am not fully aware of your use case, but you could set up a CloudWatch alarm on the metrics in the LambdaInsights namespace. You could even explore CloudWatch Anomaly Detection.
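As a minimal sketch of that approach, the snippet below builds the parameter set for a CloudWatch alarm on the LambdaInsights memory_utilization metric. The function name "my-function" and the 80% threshold are placeholder assumptions; adjust them to your environment before passing the result to boto3's put_metric_alarm.

```python
import json


def insights_memory_alarm_params(function_name, threshold_pct=80.0):
    """Build PutMetricAlarm parameters for the Lambda Insights
    memory_utilization metric (LambdaInsights namespace).

    function_name and threshold_pct are placeholders -- set your own.
    """
    return {
        "AlarmName": f"{function_name}-memory-utilization",
        "Namespace": "LambdaInsights",
        "MetricName": "memory_utilization",
        "Dimensions": [{"Name": "function_name", "Value": function_name}],
        "Statistic": "Maximum",
        "Period": 300,               # 5-minute periods
        "EvaluationPeriods": 1,
        "Threshold": threshold_pct,  # alarm when max utilization exceeds this %
        "ComparisonOperator": "GreaterThanThreshold",
        "TreatMissingData": "notBreaching",
    }


params = insights_memory_alarm_params("my-function")
# To create the alarm for real:
#   boto3.client("cloudwatch").put_metric_alarm(**params)
print(json.dumps(params, indent=2))
```

Keeping the parameters in a plain dict makes it easy to review or template them before making the API call.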
Please accept this answer if it helps. Otherwise, let's discuss your use case and find a solution.
Regards, Pablo Silva
Have you tried troubleshooting for:
- Inspect recent log events to identify memory value positions
- Update the filter pattern to use named placeholders
- Check the underlying metric period vs the alarm period
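To check the first point, you can parse a recent REPORT line and extract the memory values by name instead of by position. The sample line below is illustrative (the RequestId is a placeholder); inspect your own log events to confirm the format matches.

```python
import re

# Illustrative Lambda REPORT log line -- replace with a real event from
# your log group; the RequestId here is a made-up placeholder.
sample = (
    "REPORT RequestId: 00000000-example Duration: 102.25 ms "
    "Billed Duration: 103 ms Memory Size: 128 MB Max Memory Used: 71 MB"
)

# Named groups recover the values regardless of their column position.
pattern = re.compile(
    r"Memory Size: (?P<size>\d+) MB\s+Max Memory Used: (?P<used>\d+) MB"
)
m = pattern.search(sample)
if m:
    print(m.group("size"), m.group("used"))
```

If this regex fails on your real log lines, the format differs from the assumed REPORT layout and the metric filter pattern would need the same adjustment.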
I did, and the memory value position is unchanged. I also noticed it went back to the expected default for some hours today, and it's back to having spikes that do not match the log stream.