All Questions

create-export-task | Filter CloudWatch logs using JMESPath

My objective is to create a mechanism for exporting CloudWatch logs to S3 on a case-by-case basis. Given my logs appear in the following format:

```
{ "level": "error", "message": "Oops", "errorCode": "MY_ERROR_CODE_1" }
{ "level": "info", "message": "All good" }
{ "level": "info", "message": "Something else" }
```

I'd like the export to **only** include the error logs. Using [create-export-task](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/logs/create-export-task.html), is it possible to use the `query` param to filter the response data given the above log structure? I'm not sure whether the log structure is incorrect for this use or whether I have misunderstood the purpose of the query param. My JMESPath attempts so far have been unsuccessful. Some attempts include:

```
aws logs create-export-task \
    --log-group-name myGroup \
    --log-stream-name-prefix myGroup-test \
    --from 1664537580000 \
    --to 1664537640000 \
    --destination myGroup-archive-ab1 \
    --destination-prefix test \
    --query '{Message: message, Error: errorCode}'
```

and the same command with the following query, `--query '{Message: .message, Error: .errorCode}'`, which produces the following error:

*Bad value for --query {Message: .message, Error: .errorCode}: invalid token: Parse error at column 10, token "." (DOT), for expression: "{Message: .message, Error: .errorCode}"*
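For context, the CLI's global `--query` option applies JMESPath to the command's own JSON response (for `create-export-task`, essentially just a `taskId`), not to the log events being exported, and JMESPath identifiers are written without a leading dot. Below is a minimal sketch of that distinction, plus a client-side way to pull only the error events with `filter-log-events` and a JSON filter pattern; it reuses the log group, stream prefix and time range from the question and is illustrative rather than an answer to the export requirement:

```
# --query filters the CLI response, not the logs; create-export-task returns {"taskId": "..."}
aws logs create-export-task \
    --log-group-name myGroup \
    --log-stream-name-prefix myGroup-test \
    --from 1664537580000 \
    --to 1664537640000 \
    --destination myGroup-archive-ab1 \
    --destination-prefix test \
    --query 'taskId'

# Sketch: retrieve only the error events client-side with a JSON filter pattern
aws logs filter-log-events \
    --log-group-name myGroup \
    --log-stream-name-prefix myGroup-test \
    --start-time 1664537580000 \
    --end-time 1664537640000 \
    --filter-pattern '{ $.level = "error" }' \
    --query 'events[].message'
```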
0 answers | 0 votes | 7 views | asked 14 hours ago

Trouble with the AWS Lambda Runtime API with a Docker image

## Short version

I am running a Lambda function in a Docker container, and all executions are marked as failures with a Runtime.ExitError, even though I am using the Runtime API and the Lambda added as the on_success destination is running.

## Longer version, with context

I have a setup with a bunch of functions chained using API invocations and destinations. One of them requires a custom runtime (the handler is a PHP command), so I have been using a Docker image for it. To get it running correctly, I retrieve the request ID in the entrypoint and, in the command, run both my command and a curl to the Runtime API, like so:

```
CMD ["/bin/bash", "-c", "/app/bin/my-super-command && curl --silent -X POST \"http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response\" -d 'SUCCESS'"]
```

I know the request ID is correct (I am printing it in the entrypoint), and at the end of the logs I am getting the following lines (edited, of course):

```
End of my-super-command
{"status":"OK"}
END RequestId: 123456-abcd-1234-abcd-12345678910
REPORT RequestId: 123456-abcd-1234-abcd-12345678910 Duration: 39626.80 ms Billed Duration: 39777 ms Memory Size: 384 MB Max Memory Used: 356 MB Init Duration: 149.26 ms
RequestId: 123456-abcd-1234-abcd-12345678910 Error: Runtime exited without providing a reason Runtime.ExitError
Beginning of the entrypoint
```

The first line is from my command, and the second line is the output from the curl (it looks like a success, and the API documentation seems to agree with me), but as we can see, the call is still marked as failed later.

The weird stuff:

* The Lambda logs a failure even though the Runtime API returns an OK to my success call
* The Lambda is marked as failed in the monitoring
* The function I put after this one in the workflow, as a destination with the `on_success` condition, runs!

The problems I have had and already worked through:

* I am getting the request ID with a combination of grep/sed/trim because there's a \r somewhere; that's not optimal, but I am printing it and it appears correctly (I have printed the full curl command too, just in case)
* I have had issues with timeout/OOM, but as you can see above, that is not the case here

Am I missing something here? Maybe I did not understand the usage of the Runtime API. As you can see, the next run seems to be launched but interrupted, so there might be some timing issue.
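For comparison, the documented custom-runtime pattern is a bootstrap process that loops: it polls the Runtime API for the next invocation, runs the handler, posts the result for that request ID, then polls again instead of exiting. A rough bash sketch of that loop follows; the handler path `/app/bin/my-super-command` is taken from the question, everything else is illustrative:

```
#!/bin/bash
# Sketch of a custom-runtime bootstrap loop; exiting after a single invocation
# is what tends to surface as Runtime.ExitError.
API="http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime"

while true; do
  HEADERS="$(mktemp)"
  # Fetch the next invocation; the event payload is the body, the request id is a header.
  EVENT_DATA="$(curl --silent --show-error -D "$HEADERS" "${API}/invocation/next")"
  REQUEST_ID="$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)"

  # Run the handler and report success or failure for this specific invocation.
  if /app/bin/my-super-command; then
    curl --silent -X POST "${API}/invocation/${REQUEST_ID}/response" -d 'SUCCESS'
  else
    curl --silent -X POST "${API}/invocation/${REQUEST_ID}/error" \
      -d '{"errorMessage":"handler failed","errorType":"Handler.Error"}'
  fi
  rm -f "$HEADERS"
done
```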
0 answers | 0 votes | 10 views | asked 15 hours ago

Problem with Application Load Balancer rules: health check only responds on the default rule

Hi everyone,

I have 3 microservices running on an **ECS cluster**. Each microservice is launched by a **Fargate task** and runs in its own Docker container.

* *Microservice A* responds on port 8083.
* *Microservice B* responds on port 8084.
* *Microservice C* responds on port 8085.

My configuration consists of two public subnets, two private subnets, an internet gateway and a NAT gateway, as well as two security groups, one for the Fargate services and one for the ALB. On the security groups I have enabled inbound traffic on all ports.

I have defined a listener for the ALB that responds on port 80 and wrote some path-based rules to route requests to the appropriate target group (*every target group is a Target type*):

![Enter image description here](/media/postImages/original/IM8oFOWQXjQEuDjdKe3PeGgw)

Only the health check of the target group that responds to the default rule succeeds (but I suspect it all happens randomly), and consequently only the service reachable on port 8083 works.

![Enter image description here](/media/postImages/original/IMtOk5-EqJRrmxLa49ium6hg)

The remaining target groups are **unreachable**. What I notice is that in the "*Registered targets*" section the assigned IP addresses change continuously. For example:

![Enter image description here](/media/postImages/original/IMkdJ_RNqsTJazJ3J8j4foqw)
![Enter image description here](/media/postImages/original/IMCm7LLgy1QJKk0JsLC3XlGg)

But every IP that is assigned generates a timeout. It can happen, quite randomly, that a certain IP address is registered correctly. These are the ECS configurations of one of the unresponsive services:

![Enter image description here](/media/postImages/original/IMOdt86JdpS_2paN_elspK5g)

What is the problem and how can I solve it? Thank you.

**UPDATE 1**

I tried to add a new instance for microservice A. For the new IP (10.0.0.137) the health check does not respond. After a few minutes, a new IP (10.0.0.151) is provisioned and registered correctly:

![Enter image description here](/media/postImages/original/IMUcZubrfCRrGo-fpqYAvSJQ)

**UPDATE 2**

It is really strange behavior. **All services are now connected correctly**, after several hours of failed attempts. It looks like an IP address assignment problem: before finding the correct address, AWS makes several attempts with different IP addresses until it randomly finds one that works.

These are the CIDRs of my subnets:

* private_subnets = ["10.0.0.128/28", "10.0.0.144/28"]
* public_subnets = ["10.0.0.0/28", "10.0.0.16/28"]

While these are the IPs that connected successfully:

1. 10.0.0.136 (microservice A, instance 1)
2. 10.0.0.151 (microservice A, instance 2)
3. 10.0.0.153 (microservice A, instance 3)
4. 10.0.0.152 (microservice B)
5. 10.0.0.142 (microservice C)
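For debugging setups like this, a couple of CLI calls can show the per-target health state (including the reason code for a failing target) and the health-check settings of the non-default target groups. This is only a sketch; the target group name `microservice-b-tg` is a placeholder, not taken from the question:

```
# Per-target health, including TargetHealth.Reason for unhealthy/timing-out targets
aws elbv2 describe-target-health \
    --target-group-arn "$(aws elbv2 describe-target-groups \
        --names microservice-b-tg \
        --query 'TargetGroups[0].TargetGroupArn' --output text)"

# Health-check settings (port, path, matcher) of the same target group
aws elbv2 describe-target-groups \
    --names microservice-b-tg \
    --query 'TargetGroups[].{Port:Port,HealthCheckPort:HealthCheckPort,HealthCheckPath:HealthCheckPath,Matcher:Matcher}'
```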
2 answers | 0 votes | 27 views | asked 17 hours ago