
Questions tagged with CloudWatch Logs Insights



IAM policy for a user to access Enhanced Monitoring for RDS

I am trying to create an IAM user with least privileges to view Enhanced Monitoring for a particular RDS database. I have created a role (Enhanced-Monitoring) and attached the managed policy 'AmazonRDSEnhancedMonitoringRole' to it. This role is passed to the RDS database using the PassRole permission. The policy I am attaching to the IAM user is below:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:PutMetricData",
                "rds:*",
                "cloudwatch:GetMetricData",
                "iam:ListRoles",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:DeleteAnomalyDetector",
                "cloudwatch:ListMetrics",
                "cloudwatch:DescribeAnomalyDetectors",
                "cloudwatch:ListMetricStreams",
                "cloudwatch:DescribeAlarmsForMetric",
                "cloudwatch:ListDashboards",
                "ec2:*",
                "cloudwatch:PutAnomalyDetector",
                "cloudwatch:GetMetricWidgetImage"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "iam:GetRole",
                "iam:PassRole",
                "cloudwatch:*"
            ],
            "Resource": [
                "arn:aws:cloudwatch:*:accountnumber:insight-rule/*",
                "arn:aws:iam::accountnumber:role/Enhanced-Monitoring",
                "arn:aws:rds:us-east-1:accountnumber:db:dbidentifier"
            ]
        }
    ]
}
```

As you can see, I have given almost every permission to this user, yet I still get a 'Not Authorized' error on the RDS dashboard for Enhanced Monitoring as this IAM user, although CloudWatch Logs display normally. I am following this guide (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) for Enhanced Monitoring of RDS; refer to Example 2 on that page.
1 answer · 0 votes · 30 views · asked 11 days ago
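One thing worth checking for the question above (an assumption, not confirmed by the question itself): the Enhanced Monitoring view in the RDS console reads OS metrics from the CloudWatch Logs log group `RDSOSMetrics`, so the viewing user also needs read access to that log group. A minimal additional statement might look like the following, keeping the question's `accountnumber` placeholder:

```json
{
    "Sid": "ReadEnhancedMonitoringLogs",
    "Effect": "Allow",
    "Action": [
        "logs:DescribeLogStreams",
        "logs:GetLogEvents",
        "logs:FilterLogEvents"
    ],
    "Resource": "arn:aws:logs:*:accountnumber:log-group:RDSOSMetrics:*"
}
```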

Why are aggregate results in a Log Insights query nonsensical (count < count_distinct for the same variable)?

The following Logs Insights query on a single log group returns negative numbers for the variable `@distinct_unique_keys_delta`:

```sql
parse @message /(?<@unique_key>Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+)/
| filter @message like /Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+/
| stats count(@unique_key) - count_distinct(@unique_key) as @distinct_unique_keys_delta by datefloor(@timestamp, 1d) as @_datefloor
| sort @_datefloor asc
```

My understanding is that the number of unique values of a variable can never exceed the total number of values of that variable. When I ran this query I was concerned that I might be misunderstanding the correct usage of `datefloor`, so I tried this query:

```sql
parse @message /(?<@unique_key>Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+)/
| filter @message like /Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+/
| stats count(@unique_key) - count_distinct(@unique_key) as @distinct_unique_keys_delta
```

The result of this query for the time range I chose (a whole day) was -20,347 for `@distinct_unique_keys_delta`. To me this result seems completely nonsensical. Am I doing something wrong, interpreting the results incorrectly, or is there a bug in the code running this Logs Insights query?
1 answer · 0 votes · 12 views · asked 25 days ago
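One explanation worth checking against the CloudWatch Logs Insights documentation: `count_distinct` is documented as returning an approximate result when the field's cardinality is high, so on a day with tens of thousands of unique keys it can overestimate the true distinct count and drive the delta negative. Grouping by the parsed key counts duplicates exactly and avoids the approximation; a sketch reusing the question's own parse pattern:

```sql
parse @message /(?<@unique_key>Processing key: \w+\/[\w=_-]+\/\w+\.\d{4}-\d{2}-\d{2}-\d{2}\.[\w-]+\.\w+\.\w+)/
| filter ispresent(@unique_key)
| stats count(*) as occurrences by @unique_key
| sort occurrences desc
| limit 20
```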

Convert log fields into table columns with AWS CloudWatch Logs Insights

I have a Lambda function and I want a CloudWatch Logs table with error and warning columns. I was able to get an error/warning report per day with this query:

```
parse "[E*]" as @error
| parse "[W*]" as @warning
| filter ispresent(@warning) or ispresent(@error)
| stats count(@error) as error, count(@warning) as warning by bin(15m)
```

Here are two example messages from the Lambda:

WARNING:

```
Field           Value
@ingestionTime  1653987507053
@log            XXXXXXX:/aws/lambda/lambda-name
@logStream      2022/05/31/[$LATEST]059106a15343448486b43f8b1168ec64
@message        2022-05-31T08:58:18.293Z b1266ad9-95aa-4c4e-9416-e86409f6455e WARN error catched and errorHandler configured, handling the error: Error: Error while executing handler: TypeError: Cannot read property 'replace' of undefined
@requestId      b1266ad9-95aa-4c4e-9416-e86409f6455e
@timestamp      1653987498296
```

ERROR:

```
Field           Value
@ingestionTime  1653917638480
@log            XXXXXXXX:/aws/lambda/lambda-name
@logStream      2022/05/30/[$LATEST]bf8ba722ecd442dbafeaeeb3e7251024
@message        2022-05-30T13:33:57.406Z 8b5ec77c-fb30-4eb3-bd38-04a10abae403 ERROR Invoke Error {"errorType":"Error","errorMessage":"Error while executing configured error handler: Error: No body found in handler event","stack":["Error: Error while executing configured error handler: Error: No body found in handler event","    at Runtime.<anonymous> (/var/task/index.js:3180:15)"]}
@requestId      8b5ec77c-fb30-4eb3-bd38-04a10abae403
@timestamp      1653917637407
errorMessage    Error while executing configured error handler: Error: No body found in handler event
errorType       Error
stack.0         Error: Error while executing configured error handler: Error: No body found in handler event
stack.1         at Runtime.<anonymous> (/var/task/index.js:3180:15)
```

Can you help me understand how to set up the query so that the result is a table with the following columns and their values: from @message, the timestamp, requestId, type (WARN or ERROR), and errorMessage, and if feasible also the name of the Lambda from @log and the @logStream?
2 answers · 0 votes · 22 views · asked a month ago
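A sketch in the direction the question above asks. The regex and the extracted field names (`ts`, `reqId`, `level`, `detail`) are assumptions inferred from the two sample messages, not verified against this Lambda's actual log format:

```
fields @log, @logStream
| parse @message /^(?<ts>\S+) (?<reqId>[\w-]+) (?<level>WARN|ERROR)\s+(?<detail>.*)/
| filter ispresent(level)
| display ts, reqId, level, detail, @log, @logStream
| sort ts desc
```

Extracting `errorMessage` from the JSON payload inside ERROR messages would need a further `parse` on `detail`, since the structure differs between the WARN and ERROR samples.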

Proper conversion of AWS Log Insights to Metrics for visualization and monitoring

TL;DR: What is the proper way to create a metric so that it generates reliable information from Log Insights?

What is desired: The current Log Insights results look similar to the following: [![AWS Log insights][1]][1] It becomes easier to analyse these logs using metrics, mostly because you can have multiple sources of data in the same plot and even perform math operations between them.

Solution according to docs: Allegedly, a log can be converted to a metric filter following a guide like [this][2]. However, this approach does not seem to work entirely right (I guess because of the time frames that have to be imposed in the metric plots), providing incorrect information, for example: [![Dashboard][3]][3]

Issue with solution: In the previous image I created a dashboard containing the metric count (the number 7), corresponding to the sum of events every 5 minutes. I also added a preview of the Log Insights query corresponding to the information used to create the metric. However, as can be seen, the number of logs is 4, but the metric count displays 7. Changing the time frame in the metric generates other issues (e.g., selecting a very small time frame like 1 second won't retrieve any data, and a slightly smaller time frame provides another wrong number: 3, when there are 4 logs).

P.S.: I have also tried converting the Log Insights results to metrics using [this Lambda function][4], as suggested by [Danil Smirnov][5], to no avail, as it seems to generate the same issues.

[1]: https://i.stack.imgur.com/0pPdp.png
[2]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CountingLogEventsExample.html
[3]: https://i.stack.imgur.com/Dy5td.png
[4]: https://serverlessrepo.aws.amazon.com/#!/applications/arn:aws:serverlessrepo:us-east-1:085576722239:applications~logs-insights-to-metric
[5]: https://blog.smirnov.la/cloudwatch-logs-insights-to-metrics-a2d197aac379
0 answers · 0 votes · 11 views · asked 4 months ago
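For reference on the question above, the metric filter described in the linked guide can also be created with the AWS CLI. The log group name, filter name, pattern, and metric names below are placeholders, and this is a sketch of the setup rather than a fix for the count discrepancy. One detail worth noting: `defaultValue=0` makes the metric report 0 for periods with no matching events, which helps the dashboard widget's period and statistic (e.g. Sum over 5 minutes) line up with what the Insights query counts over the same window.

```
aws logs put-metric-filter \
  --log-group-name /aws/lambda/lambda-name \
  --filter-name CountMatchingEvents \
  --filter-pattern "ERROR" \
  --metric-transformations \
      metricName=MatchingEventCount,metricNamespace=MyApp,metricValue=1,defaultValue=0
```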