
Questions tagged with Amazon CloudWatch Logs



Restrict CloudWatch Logs

Hi there. We have an IAM user called mlops1. We would like mlops1 to be able to use the AWS console to view logs in CloudWatch, but only a certain log group. This is what the allowed actions look like in our IAM policy (note that the Account ID has been redacted):

```
{
  "Effect": "Allow",
  "Action": [
    "cloudwatch:Describe*",
    "cloudwatch:Get*",
    "cloudwatch:List*",
    "logs:Get*",
    "logs:List*",
    "logs:StartQuery",
    "logs:StopQuery",
    "logs:Describe*",
    "logs:TestMetricFilter",
    "logs:FilterLogEvents"
  ],
  "Resource": "arn:aws:logs:us-east-1:<account_id>:log-group:/aws/sagemaker/TrainingJobs:log-stream:*"
}
```

As you can see, we would like mlops1 to be able to access only the "/aws/sagemaker/TrainingJobs" log group. However, the user receives the following error message (again, Account ID has been redacted):

```
Error: User: arn:aws:iam::<account_id>:user/mlops1 is not authorized to perform: logs:DescribeLogGroups on resource: arn:aws:logs:us-east-1:<account_id>:log-group::log-stream: because no identity-based policy allows the logs:DescribeLogGroups action
```

This error message is puzzling, since the policy contains "logs:Describe*". We found that when we open the policy up to all resources (i.e. `*`), mlops1 can access the desired logs in CloudWatch. However, the user can then also access any other logs, which is not what we want. How can we limit the user's access to just the "/aws/sagemaker/TrainingJobs" log group? Is there some additional syntax required? Thank you in advance for your help!
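A likely explanation, sketched below under assumptions: the console calls `logs:DescribeLogGroups` against the account as a whole (note the error's resource is `log-group::log-stream:`, with no group name), so that action generally needs its own statement with a broader resource, while the event-level read actions stay scoped to the one log group. A minimal sketch in Python of such a split policy; the exact action list and the `<account_id>` placeholder are illustrative, not the asker's full policy:

```python
import json

account_id = "<account_id>"  # placeholder, as in the question
log_group_arn = (
    f"arn:aws:logs:us-east-1:{account_id}"
    ":log-group:/aws/sagemaker/TrainingJobs"
)

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # DescribeLogGroups lists groups across the account, so it
            # needs a wildcard log-group resource to work in the console.
            "Effect": "Allow",
            "Action": ["logs:DescribeLogGroups"],
            "Resource": f"arn:aws:logs:us-east-1:{account_id}:log-group:*",
        },
        {
            # Read-level actions scoped to the one group and its streams.
            "Effect": "Allow",
            "Action": [
                "logs:GetLogEvents",
                "logs:FilterLogEvents",
                "logs:DescribeLogStreams",
                "logs:StartQuery",
                "logs:StopQuery",
                "logs:GetQueryResults",
            ],
            "Resource": [log_group_arn, f"{log_group_arn}:*"],
        },
    ],
}
print(json.dumps(policy, indent=2))
```

The trade-off: the user can then see the names of other log groups in the console list, but can only open and query the SageMaker one.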
1 answer · 0 votes · 15 views · asked 4 days ago

CDK CodePipeline fails to output logs when deployed to a custom VPC, how to fix?

Hi everyone, help is very appreciated! I'm managing a CodePipeline with CDK, and when I deploy it to a custom VPC with an internet gateway (public subnet) I fail to see any logs in CodeBuild. Here is my CDK code:

```
const pipeline = new CodePipeline(this, id, {
  pipelineName: `Hubs-CDK-Pipeline-${id}`,
  selfMutation: false,
  synth: new ShellStep('Synth', {
    input: CodePipelineSource.gitHub(
      'stafflink-pty-ltd/sauron',
      id === Environment.STAGING ? 'aws' : 'main',
      {
        authentication: SecretValue.secretsManager('manavs-github-token', {
          jsonField: 'token'
        })
      }
    ),
    primaryOutputDirectory: 'cdk/cdk.out',
    commands: [
      `node -v`,
      `sudo npm i -g n --force`,
      `n lts`,
      `n prune`,
      `node -v`,
      `npm i -g yarn`,
      `yarn`,
      `yarn rw setup deploy serverless`,
      `rm -f .env`,
      `sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64`,
      `sudo chmod a+x /usr/local/bin/yq`,
      `yq eval-all -i '.provider.vpc.securityGroupIds |= ["${lambdaSG.securityGroupId}"]' api/replace.yml`,
      `yq eval-all -i '.provider.vpc.subnetIds |= ["${vpc.isolatedSubnets[0].subnetId}", "${vpc.isolatedSubnets[1].subnetId}","${vpc.isolatedSubnets[2].subnetId}"]' api/replace.yml`,
      `yq eval-all -i '.provider.iam.role.statements |= [{"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"], "Resource": ["${bucket.bucketArn}"]}]' api/replace.yml`,
      `yq eval-all -i '. as $item ireduce ({}; . * $item)' api/serverless.yml api/replace.yml`,
      `yq eval-all -i '. as $item ireduce ({}; . * $item)' web/serverless.yml web/replace.yml`,
      `cat api/serverless.yml`,
      `npm run ci:build`,
      `npm run ci:migrate`,
      `yarn rw deploy aws`,
      `cd cdk`,
      `npm i`,
      `npm i -g aws-cdk`,
      `cdk synth`,
      `cd ..`
    ]
  }),
  codeBuildDefaults: {
    vpc,
    subnetSelection: { subnetType: SubnetType.PUBLIC },
    securityGroups: [codePipeSG],
    rolePolicy: [
      new iam.PolicyStatement({
        effect: Effect.ALLOW || undefined,
        actions: [
          'logs:CreateLogGroup',
          'logs:CreateLogStream',
          'logs:PutLogEvents'
        ],
        resources: ['*']
      }),
      new iam.PolicyStatement({
        effect: Effect.ALLOW || undefined,
        actions: [
          's3:Abort*',
          's3:DeleteObject*',
          's3:GetBucket*',
          's3:GetObject*',
          's3:List*',
          's3:PutObject',
          's3:PutObjectLegalHold',
          's3:PutObjectRetention',
          's3:PutObjectTagging',
          's3:PutObjectVersionTagging'
        ],
        resources: ['*']
      })
    ],
    buildEnvironment: {
      computeType: ComputeType.LARGE,
      buildImage: LinuxBuildImage.STANDARD_5_0,
```

Here is my security group:

```
const codePipeSG = new SecurityGroup(this, 'code-pipeline-security-group', {
  vpc,
  allowAllOutbound: true,
  securityGroupName: `hubs-codepipe-${id}`
})
```

Here is my VPC:

```
const vpc = new Vpc(this, 'VPC', {
  cidr: id === Environment.PROD ? '10.1.0.0/16' : '10.0.0.0/16',
  natGateways: 0,
  maxAzs: 3,
  subnetConfiguration: [
    { name: `public-${id}-1`, subnetType: SubnetType.PUBLIC, cidrMask: 24 },
    { name: `isolated-${id}-1`, subnetType: SubnetType.PRIVATE_ISOLATED, cidrMask: 28 }
  ],
  vpcName: `hubs${id}`
})
```
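A commonly cited cause worth checking here (hedged, not verified against this stack): CodeBuild attaches its ENIs to the VPC without public IPs even in a PUBLIC subnet, so with `natGateways: 0` the build host has no route to the CloudWatch Logs endpoint and log delivery silently fails. The usual remedies are a NAT gateway plus private subnets, or interface VPC endpoints for `logs` (and `s3`/`codepipeline` as needed). As a sketch, the parameters one might hand to EC2's CreateVpcEndpoint call for a Logs endpoint; the region, VPC, subnet, and security-group IDs below are placeholders:

```python
def logs_endpoint_params(region, vpc_id, subnet_ids, security_group_ids):
    """Build CreateVpcEndpoint parameters for a CloudWatch Logs
    interface endpoint (sketch; e.g. for boto3's ec2.create_vpc_endpoint)."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        # Interface endpoint service names follow the documented
        # com.amazonaws.<region>.<service> pattern.
        "ServiceName": f"com.amazonaws.{region}.logs",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": security_group_ids,
        # Private DNS lets the default logs.<region>.amazonaws.com
        # hostname resolve to the endpoint inside the VPC.
        "PrivateDnsEnabled": True,
    }

params = logs_endpoint_params(
    "ap-southeast-2", "vpc-0abc", ["subnet-0a", "subnet-0b"], ["sg-0a"]
)
print(params["ServiceName"])
```

In CDK the equivalent would be `vpc.addInterfaceEndpoint(...)` with the builds moved to the isolated subnets, but the networking fact is the same either way: no public IP on the build ENI means no path to Logs without NAT or an endpoint.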
1 answer · 0 votes · 17 views · asked a month ago

Convert log fields into table columns with aws cloudwatch log insights

I have a Lambda function and I want a CloudWatch Logs table with error and warning columns. I was able, with this query, to get an error/warning report per day:

```
parse "[E*]" as @error
| parse "[W*]" as @warning
| filter ispresent(@warning) or ispresent(@error)
| stats count(@error) as error, count(@warning) as warning by bin(15m)
```

Here are two example messages from the Lambda:

WARNING:

```
Field           Value
@ingestionTime  1653987507053
@log            XXXXXXX:/aws/lambda/lambda-name
@logStream      2022/05/31/[$LATEST]059106a15343448486b43f8b1168ec64
@message        2022-05-31T08:58:18.293Z b1266ad9-95aa-4c4e-9416-e86409f6455e WARN error catched and errorHandler configured, handling the error: Error: Error while executing handler: TypeError: Cannot read property 'replace' of undefined
@requestId      b1266ad9-95aa-4c4e-9416-e86409f6455e
@timestamp      1653987498296
```

ERROR:

```
Field           Value
@ingestionTime  1653917638480
@log            XXXXXXXX:/aws/lambda/lambda-name
@logStream      2022/05/30/[$LATEST]bf8ba722ecd442dbafeaeeb3e7251024
@message        2022-05-30T13:33:57.406Z 8b5ec77c-fb30-4eb3-bd38-04a10abae403 ERROR Invoke Error {"errorType":"Error","errorMessage":"Error while executing configured error handler: Error: No body found in handler event","stack":["Error: Error while executing configured error handler: Error: No body found in handler event","    at Runtime.<anonymous> (/var/task/index.js:3180:15)"]}
@requestId      8b5ec77c-fb30-4eb3-bd38-04a10abae403
@timestamp      1653917637407
errorMessage    Error while executing configured error handler: Error: No body found in handler event
errorType       Error
stack.0         Error: Error while executing configured error handler: Error: No body found in handler event
stack.1         at Runtime.<anonymous> (/var/task/index.js:3180:15)
```

Can you help me understand how to set up the query so that the result is a table with the following columns and their values: from @message, the timestamp, requestId, type (WARN or ERROR), and errorMessage, and if feasible also the name of the Lambda from @log and the @logStream?
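One possible direction (a sketch, not a tested Insights query): since the timestamp, request id, and level sit at fixed positions at the start of `@message`, a single regex `parse` can pull out all the wanted columns at once. The Python block below holds the candidate query as a string and checks the same pattern shape (translated to Python's `(?P<...>)` named groups) against the two sample messages from the question:

```python
import re

# Hypothetical Logs Insights query (untested sketch): regex parse of @message.
insights_query = r"""
parse @message /(?<ts>\S+) (?<reqid>\S+) (?<level>WARN|ERROR) (?<msg>.*)/
| filter ispresent(level)
| display @log, @logStream, ts, reqid, level, msg
""".strip()

# Same pattern in Python regex syntax, exercised on the question's samples.
pattern = re.compile(r"(?P<ts>\S+) (?P<reqid>\S+) (?P<level>WARN|ERROR) (?P<msg>.*)")

warn_msg = ("2022-05-31T08:58:18.293Z b1266ad9-95aa-4c4e-9416-e86409f6455e "
            "WARN error catched and errorHandler configured, handling the error: ...")
error_msg = ("2022-05-30T13:33:57.406Z 8b5ec77c-fb30-4eb3-bd38-04a10abae403 "
             "ERROR Invoke Error {\"errorType\":\"Error\"}")

rows = [pattern.match(m).groupdict() for m in (warn_msg, error_msg)]
print(rows[0]["level"], rows[1]["level"])  # → WARN ERROR
```

The Lambda name is already embedded in `@log` (`<account>:/aws/lambda/<name>`), so a second `parse @log` stage could split it out into its own column as well.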
2 answers · 0 votes · 22 views · asked a month ago

AppSync request mapping template errors not logged in CloudWatch

I have a simple resolver that has a simple Lambda function as a data source. This function always throws an error (to test out logging). The resolver has a request mapping template enabled, and it is configured as follows:

```
$util.error("request mapping error 1")
```

The API has logging configured to be as verbose as possible, yet I cannot see this `request mapping error 1` in my CloudWatch logs under the `RequestMapping` log type:

```
{
  "logType": "RequestMapping",
  "path": ["singlePost"],
  "fieldName": "singlePost",
  "resolverArn": "xxx",
  "requestId": "bab942c6-9ae7-4771-ba45-7911afd262ac",
  "context": {
    "arguments": { "id": "123" },
    "stash": {},
    "outErrors": []
  },
  "fieldInError": false,
  "errors": [],
  "parentType": "Query",
  "graphQLAPIId": "xxx"
}
```

The error is not completely lost, because I can see it in the query response:

```
{
  "data": { "singlePost": null },
  "errors": [
    {
      "path": ["singlePost"],
      "data": null,
      "errorType": null,
      "errorInfo": null,
      "locations": [{ "line": 2, "column": 3, "sourceName": null }],
      "message": "request mapping error 1"
    }
  ]
}
```

When I add `$util.appendError("append request mapping error 1")` to the request mapping template so it looks like this:

```
$util.appendError("append request mapping error 1")
$util.error("request mapping error 1")
```

then the appended error appears in the `RequestMapping` log type, but the `errors` array is still empty:

```
{
  "logType": "RequestMapping",
  "path": ["singlePost"],
  "fieldName": "singlePost",
  "resolverArn": "xxx",
  "requestId": "f8eecff9-b211-44b7-8753-6cc6e269c938",
  "context": {
    "arguments": { "id": "123" },
    "stash": {},
    "outErrors": [
      { "message": "append request mapping error 1" }
    ]
  },
  "fieldInError": false,
  "errors": [],
  "parentType": "Query",
  "graphQLAPIId": "xxx"
}
```

When I do the same thing with the response mapping template, everything works as expected (the `errors` array contains the `$util.error(message)` message and the `outErrors` array contains the `$util.appendError(message)` messages).

1. Is this working as expected, so `$util.error(message)` will never show up in CloudWatch logs?
2. Under what conditions will the `errors` array in the `RequestMapping` log type be populated?
3. Bonus question: can the `errors` array contain more than one item for either the `RequestMapping` or `ResponseMapping` log types?
0 answers · 0 votes · 8 views · asked 2 months ago

ec2tagger: Unable to describe ec2 tags for initial retrieval: AuthFailure: AWS was not able to validate the provided access credentials / cloudwatch log agent, vpc endpoints

I got the error "ec2tagger: Unable to describe ec2 tags for initial retrieval: AuthFailure: AWS was not able to validate the provided access credentials" from the CloudWatch Logs agent on an EC2 instance that has:

1. CloudWatchAgentServerRole -- this is the default AWS managed role attached to the instance, and it already allows "ec2:DescribeTags" in its policy. <---- NOTE this
2. A NACL that allows all outbound traffic and allows all of the VPC's CIDR range inbound.
3. The correct region in the CloudWatch Logs agent config file.
4. Successful connections (Connected <..> Escape character is '^]') from `telnet ec2.us-east-2.amazonaws.com 443`, `telnet monitoring.us-east-2.amazonaws.com 443`, and `telnet logs.us-east-2.amazonaws.com 443` run on the instance.

I also created three interface VPC endpoints: logs (com.amazonaws.us-east-2.logs), monitoring (com.amazonaws.us-east-2.monitoring), and ec2 (com.amazonaws.us-east-2.ec2). They have a security group that allows all of the VPC's CIDR range inbound. The idea is to expose metrics to CloudWatch via VPC endpoints. Despite all of the above setup, I can't make the CloudWatch agent work: it keeps echoing the error above, complaining that the credentials are not valid, even though the region in the config file is correct and traffic between the instance and CloudWatch is allowed.
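Two avenues worth checking (hedged suggestions, not a confirmed diagnosis): first, `AuthFailure ... not able to validate the provided access credentials` is frequently caused by system clock skew, since SigV4 request signatures embed a timestamp; second, it helps to confirm what role credentials the instance actually holds by querying the instance metadata service, which is where the agent gets them. The sketch below only builds the documented IMDS URLs so it can be exercised anywhere; on the instance you would fetch them with curl or urllib:

```python
IMDS_BASE = "http://169.254.169.254/latest/meta-data/"

def imds_url(path):
    """Build an instance metadata URL, e.g. for the role-credentials
    document that the CloudWatch agent signs its API calls with."""
    return IMDS_BASE + path.lstrip("/")

# On the instance, fetching these shows the attached role name and
# whether fresh temporary credentials are being served:
role_list_url = imds_url("iam/security-credentials/")
region_url = imds_url("placement/region")
print(role_list_url)
```

If the credentials document is present and fresh but DescribeTags still fails, the next suspect is the ec2 interface endpoint itself (its endpoint policy and whether private DNS is resolving `ec2.us-east-2.amazonaws.com` to it).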
1 answer · 0 votes · 266 views · asked 2 months ago

Problem receiving IP 127.0.0.1 at service startup instead of local IP

**Context:** We've got a number of load-balanced web servers running on Windows in AWS using C# .NET 5. We have a web server application as well as a Windows Service running on the same machine, and we have problems with logging from the Windows Service.

**Problem description:** Since we have many servers running load balanced, we name the log stream with the private IP address in order to distinguish which machine potentially has problems. This private IP is extracted at startup of the application (for both the Windows Service and the web server). This is usually successful, but yesterday we had an incident where one Windows Service log stream was labeled with 127.0.0.1 instead of the local IP address. Eventually I was able to pinpoint which server it was and restarted the Windows Service, which made the private IP address appear in the new log stream name.

**Suggested reason with possible solution:** I'm guessing this is a race condition. The machine had not yet received its private IP address from the AWS network before our service asked for it. If so, we can wait for the real IP to appear just to make sure we get the right IP address in our log.

**Questions:**

1. Do you see any reason other than the one I suggested why the IP address 127.0.0.1 appears?
2. Is there a better solution available than the one I suggested?
3. Is there a way, using an AWS API of some sort, to get hold of the public IP for the server?

Here's how we extract the private IP address in this context:

```
var hostName = System.Net.Dns.GetHostName();
var ipAddresses = System.Net.Dns.GetHostAddresses(hostName);
var ipv4Address = ipAddresses.FirstOrDefault(ip => ip.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork);
```
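On question 3: the instance metadata service serves the private address at `http://169.254.169.254/latest/meta-data/local-ipv4` and the public one at `http://169.254.169.254/latest/meta-data/public-ipv4`, which avoids DNS entirely. The asker's own "wait for the real IP" idea is also workable; a sketch of that retry loop (shown in Python rather than C#, with `get_ip` standing in for whatever resolver the service uses):

```python
import time

def wait_for_non_loopback(get_ip, attempts=30, delay=1.0):
    """Poll get_ip() until it returns something other than a loopback or
    empty address; fall back to the last value seen after `attempts` tries."""
    ip = get_ip()
    for _ in range(attempts):
        if ip and not ip.startswith("127."):
            return ip
        time.sleep(delay)
        ip = get_ip()
    return ip

# Usage with a stand-in resolver that "receives" its address late,
# as in the incident described above:
answers = iter(["127.0.0.1", "127.0.0.1", "10.0.1.5"])
ip = wait_for_non_loopback(lambda: next(answers), attempts=5, delay=0.0)
print(ip)  # → 10.0.1.5
```

Bounding the retries and falling back to the last value keeps the service from hanging at boot if the network never comes up, at the cost of a possibly wrong stream name in that rare case.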
2 answers · 0 votes · 25 views · asked 3 months ago