Convert log fields into table columns with AWS CloudWatch Logs Insights


I have a Lambda function and I want a CloudWatch Logs Insights table with error and warning columns. With this query I was able to get an error/warning report per day:

parse "[E*]" as @error
| parse "[W*]" as @warning
| filter ispresent(@warning) or ispresent(@error)
| stats count(@error) as error, count(@warning) as warning by bin(15m)
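
(For what it's worth, bin(15m) gives 15-minute buckets; I think the same query with a daily bin is what actually gives one row per day, though I haven't double checked this variant:)

parse "[E*]" as @error
| parse "[W*]" as @warning
| filter ispresent(@warning) or ispresent(@error)
| stats count(@error) as error, count(@warning) as warning by bin(1d)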

Here are two example messages from the Lambda:

WARNING:

Field           Value
@ingestionTime  1653987507053
@log    XXXXXXX:/aws/lambda/lambda-name
@logStream 2022/05/31/[$LATEST]059106a15343448486b43f8b1168ec64
@message    2022-05-31T08:58:18.293Z b1266ad9-95aa-4c4e-9416-e86409f6455e WARN error catched and errorHandler configured, handling the error: Error: Error while executing handler: TypeError: Cannot read property 'replace' of undefined
@requestId  b1266ad9-95aa-4c4e-9416-e86409f6455e
@timestamp  1653987498296

ERROR:

Field           Value
@ingestionTime  1653917638480
@log    XXXXXXXX:/aws/lambda/lambda-name
@logStream 2022/05/30/[$LATEST]bf8ba722ecd442dbafeaeeb3e7251024
@message    2022-05-30T13:33:57.406Z 8b5ec77c-fb30-4eb3-bd38-04a10abae403 ERROR Invoke Error {"errorType":"Error","errorMessage":"Error while executing configured error handler: Error: No body found in handler event","stack":["Error: Error while executing configured error handler: Error: No body found in handler event"," at Runtime.<anonymous> (/var/task/index.js:3180:15)"]}
@requestId  8b5ec77c-fb30-4eb3-bd38-04a10abae403
@timestamp  1653917637407
errorMessage    Error while executing configured error handler: Error: No body found in handler event
errorType   Error
stack.0 Error: Error while executing configured error handler: Error: No body found in handler event
stack.1 at Runtime.<anonymous> (/var/task/index.js:3180:15)

How do I set up the query so that I get a table with the following columns: from @message, the timestamp, the requestId, the type (WARN or ERROR) and the errorMessage; and, if feasible, also the name of the Lambda from @log, plus the @logStream?

Can you help me understand how such a query is produced?

2 Answers

It seems that it doesn't work correctly (it does not recognize the type properly). If I remove the filter on the type, I get incorrectly parsed results and I don't understand why. For example, for this message:

@message 2022-06-07T08:01:26.897Z 65985471-edd9-44ba-99f8-8e9f0a5d4fa6 INFO saving evse statuses to database...

the type is in the third position, but the query (with the filter on ERROR and WARN removed) puts "statuses" as type and "to database..." as info.

answered 2 years ago
  • I may well have the parse wrong since I didn't have your data to work with. Check the pattern for your log events and modify accordingly. (What did the event look like that gave you the info noted above for the type and info fields?)

    Once you get the right data in the parse field though, I think this should work.
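
    If the glob pattern keeps mis-matching, one option (a sketch only, since I still don't have your data, so check it against your own events) is to anchor the parse with a regular expression and named capture groups, so the timestamp, request id and level are taken from fixed positions and everything after the level becomes info:

    parse @message /^(?<time>\S+)\s+(?<request>\S+)\s+(?<type>\S+)\s+(?<info>.*)$/
    | filter type in ["ERROR", "WARN"]
    | display @timestamp, @requestId, type, info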


If your messages always have these two formats, then you can extract the type (ERROR or WARN) as the third space-separated field. I used this and rewrote your first query to show you an alternative way of doing it that keeps the type in a single column, and how you can still use that column to create multiple series for a time chart (always useful to know some more tricks!):

parse @message "* * * *" as time, request, type, info
| filter type in ["ERROR", "WARN"]
| stats sum(type="ERROR") as error, sum(type="WARN") as warning by bin(15m)

You asked to see a table with columns for time, requestId, type, error message, log and log stream. Basing this off the above query, we can get all of these (apart from the error message) using the following:

parse @message "* * * *" as time, request, type, info
| filter type in ["ERROR", "WARN"]
| display @timestamp, @requestId, type, @log, @logStream

The last part is getting the error message. You have two formats of message here, so we need to handle both. I'm assuming your JSON event is ingested with the JSON fields extracted, and that you therefore have a field called errorMessage. The trick here is to use coalesce to take the JSON errorMessage if it exists and, if not, fall back to the info field from the first format of your events. You can then add this new field to the display command.

parse @message "* * * *" as time, request, type, info
| filter type in ["ERROR", "WARN"]
| fields coalesce(errorMessage, info) as msg
| display @timestamp, @requestId, type, msg, @log, @logStream
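
You also asked for the Lambda name. Based on the @log value shown in your question (it looks like accountId:/aws/lambda/function-name), a second parse, this time on @log, should pull the function name out. This is a sketch assuming that log group naming, so adjust the pattern if yours differs:

parse @message "* * * *" as time, request, type, info
| parse @log "*:/aws/lambda/*" as account, functionName
| filter type in ["ERROR", "WARN"]
| fields coalesce(errorMessage, info) as msg
| display @timestamp, @requestId, type, msg, functionName, @logStream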

If you want to see more on the Logs Insights syntax, see https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html.

A quick note on field names: they don't have to start with @, and in fact it is better if they don't. The fields CloudWatch discovers do start with @, so keeping your own names plain helps you distinguish between discovered fields and the ones you create. The exception is that fields discovered from JSON do not start with @. You can read more at https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData-discoverable-fields.html.
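
For example, the same parse with plain names (the names themselves are arbitrary, purely for illustration):

parse @message "* * * *" as logTime, reqId, level, detail
| filter level in ["ERROR", "WARN"]
| display @timestamp, level, detail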

answered 2 years ago
