Metric Filter for Lambda Logs Not Reflecting Accurate Memory Usage


I created a metric filter on a Lambda log group to monitor the allocated memory and maximum memory used by the Lambda function. The filter pattern, which is [...], was previously working correctly, capturing values with positional placeholders: $13 for allocated memory and $18 for max memory used, which were suggested here as being at fixed positions. These values accurately reflected memory usage in MB.

An alarm was configured on this metric with a threshold of >= 200 (assumed to be MB). However, two days ago the following issues began:

  • Alarm notifications started triggering for memory usage exceeding 6.7554799525E8, significantly higher than the 200 MB threshold.
  • Adding a unit (MB) to the metric resulted in a "unit type is none" error.
  • The displayed metric values are now inaccurate: allocated memory, which is a constant 256 MB, spikes to 430 MB in the monitoring graph, and max memory used goes even higher.
  • Log stream inspection confirms memory usage never exceeds 200 MB.

(Screenshot: graph of memory allocated and memory used)

Troubleshooting attempts:

  • Increased the alarm threshold to 200,000,000 (assuming bytes), but it reverted back to 200.
  • Deleted and recreated the metric filters the same way, but ran into the same issues.
  • Suspect the variable position for the metric value might have changed, but the limited output display (it only shows $1-$6) hinders confirmation.

(Screenshot: visible metric value variable)

Question: Is there a way to view more of the filter pattern output to verify the variable position? If not, what other potential causes could explain the inaccurate metric values?
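
For reference, the TestMetricFilter API will dump every value a pattern extracts from a sample log line, which gets around the truncated console preview. Below is a rough boto3 sketch; the sample REPORT line is an assumption, so substitute a real line copied from the log stream:

```python
# Sketch: dump every positional field the filter pattern extracts from a sample
# REPORT line, so the $13 / $18 positions can be verified without relying on the
# truncated console preview. The sample log line is an assumed standard REPORT
# layout; replace it with a real line from your log stream.
import json
import boto3

logs = boto3.client("logs")

sample_report_line = (
    "REPORT RequestId: 11111111-2222-3333-4444-555555555555 "
    "Duration: 3.45 ms Billed Duration: 4 ms "
    "Memory Size: 256 MB Max Memory Used: 80 MB"
)

response = logs.test_metric_filter(
    filterPattern="[...]",  # same space-delimited pattern used by the metric filter
    logEventMessages=[sample_report_line],
)

# Each match lists every extracted value keyed by its $n position, so you can
# check whether memory size is still $13 and max memory used is still $18.
for match in response["matches"]:
    print(json.dumps(match["extractedValues"], indent=2))
```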

  • Have you tried troubleshooting for:

    • Inspecting recent log events to identify the memory value positions
    • Updating the filter pattern to use named placeholders (see the sketch after these comments)
    • Checking the underlying metric period vs. the alarm period

  • I did, and the memory value position is unchanged. I also noticed it went back to the expected values for some hours today, and now it's back to having spikes that do not match the log stream.
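
As a rough illustration of the named-placeholder suggestion above, the sketch below recreates the filter with one name per field of a standard REPORT line instead of relying on $13/$18. The field layout, log group name, filter name, namespace, and unit are assumptions, and the exact pattern (including the trailing ellipsis) should be validated with TestMetricFilter, as in the earlier sketch, before replacing the existing filter:

```python
# Sketch of the named-placeholder approach: name each field of the REPORT line
# instead of relying on positional $13/$18. Field names, log group name, filter
# name, and namespace are hypothetical.
import boto3

logs = boto3.client("logs")

# One name per space-delimited token of an assumed standard REPORT line; the
# trailing "..." is intended to tolerate extra trailing fields such as
# "Init Duration" on cold starts (verify with TestMetricFilter first).
report_pattern = (
    "[report_label=REPORT, request_id_label, request_id, "
    "duration_label, duration, duration_unit, "
    "billed_label, billed_duration_label, billed_duration, billed_duration_unit, "
    "memory_label, size_label, memory_size, memory_size_unit, "
    "max_label, max_memory_label, used_label, max_memory_used, max_memory_used_unit, ...]"
)

logs.put_metric_filter(
    logGroupName="/aws/lambda/my-function",  # hypothetical log group
    filterName="max-memory-used-mb",         # hypothetical filter name
    filterPattern=report_pattern,
    metricTransformations=[
        {
            "metricName": "MaxMemoryUsedMB",
            "metricNamespace": "Custom/Lambda",
            "metricValue": "$max_memory_used",  # named field instead of $18
            "unit": "Megabytes",
        }
    ],
)
```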

Olive
asked a month ago · 141 views
1 Answer
Accepted Answer

Hi, Olive.

Honestly, I am not confident this can be done accurately with a metric filter alone, as the Lambda logs are not structured and can vary in format.

Would you consider another approach: enabling Lambda Insights metrics for the Lambdas you want to alarm on?

Among the Lambda Insights metrics, you would be interested in the following:

  • total_memory: The amount of memory allocated to your Lambda function. This is the same as your function's memory size.
  • memory_utilization: The maximum memory used, measured as a percentage of the memory allocated to the function.

I am not fully aware of your use case, but you could set up a CloudWatch alarm on the metrics in the LambdaInsights namespace. You could even explore using CloudWatch Anomaly Detection.
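
As a rough sketch of that approach, the example below creates an alarm on memory_utilization in the LambdaInsights namespace; because it is a percentage, the unit mismatch does not arise. It assumes Lambda Insights is already enabled for the function, and the function name, threshold, period, and SNS topic ARN are placeholders:

```python
# Sketch: alarm on the Lambda Insights memory_utilization metric (a percentage
# of allocated memory). Assumes Lambda Insights is enabled for the function;
# function name, threshold, period, and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="my-function-memory-utilization",
    Namespace="LambdaInsights",
    MetricName="memory_utilization",
    Dimensions=[{"Name": "function_name", "Value": "my-function"}],
    Statistic="Maximum",
    Period=300,                      # 5-minute evaluation window
    EvaluationPeriods=3,
    Threshold=80.0,                  # alarm when max memory used exceeds 80% of allocated
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:lambda-memory-alerts"],
)
```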

Please accept this answer if it helps. Otherwise, let's discuss your use case and find a solution.

Regards, Pablo Silva

AWS
answered a month ago
EXPERT reviewed a month ago
