Questions in Microservices


MQTT Connection keeps getting disconnected/closed while publishing or subscribing to topics using LTE Modem

- I'm using a Quectel BG95 modem with a host MCU to connect to AWS IoT Core and publish and subscribe to topics.
- Occasionally an error would close the MQTT connection exactly during pub/sub operations and the connection had to be re-established, but that was very rare.
- However, for the last few days I have been running tests on multiple devices (using the same IoT Core endpoint) and have been getting this MQTT disconnection on every pub or sub operation. I am attaching a log for review.
- To me it seems like a server-side issue, since I have tried it with multiple modems and previous firmware versions.

```
[While publishing to topic]
;2022-05-08T02:29:41Z;28;-966233403;462863960;;RAK000121|-45,RAKTEST|-56
AT+QIDEACT=1
OK
[ 2022-05-08T02:29:41Z ] [FARM_IP][INFO] MDM_SET_DEACTIVATE_PDP-else
AT+QIACT=1
OK
AT+QMTOPEN=0,"a5u9klmd2viw3z-ats.iot.us-west-1.amazonaws.com",8883
OK
+QMTOPEN: 0,0                      --- [Opening MQTT Connection]
[ 2022-05-08T02:29:41Z ] [FARM_IP][INFO] Mqtt opened
AT+QMTCONN=0,"0123qwer786"
OK
+QMTCONN: 0,0,0                    --- [MQTT client connected]
AT+QMTPUB=0,1,1,0,"fm/1011",72     --- [Publishing to the MQTT Topic]
> ;2022-05-08T02:29:41Z;28;-966233403;462863960;;RAK000121|-45,RAKTEST|-56
OK
+QMTSTAT: 0,1                      --- [MQTT Connection Closed]
```

```
[While Subscribing to topic]
AT+QMTSUB=0,1,"imei/get_logs",0    --- [Subscribing to the MQTT Topic]
OK
+QMTSTAT: 0,1                      --- [MQTT Connection Closed]
[ ] [FARM_IP][INFO] Starting timer
AT+QMTSUB=0,1,"imei/get_logs",0    --- [Subscribing to the MQTT Topic]
OK
+QMTSTAT: 0,1                      --- [MQTT Connection Closed]
```
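`+QMTSTAT: 0,1` reports that the peer closed the connection. One cause worth ruling out when several test devices share one endpoint is a duplicated MQTT client ID (the `"0123qwer786"` in the log above): AWS IoT Core disconnects the older session when a second client connects with the same client ID, which on the modem shows up as a close right after a pub/sub. A sketch of deriving a stable per-device ID from the modem IMEI (the prefix and hashing scheme here are illustrative assumptions, not part of the original firmware):

```python
import hashlib

def unique_client_id(imei: str, prefix: str = "farm") -> str:
    """Derive a stable, per-device MQTT client ID from the modem IMEI.

    AWS IoT Core closes an existing connection when another client
    connects with the same client ID, so every device in a fleet that
    shares one endpoint needs its own ID.
    """
    digest = hashlib.sha256(imei.encode()).hexdigest()[:12]
    return f"{prefix}-{digest}"

# Two devices must never share a client ID, and the ID must be stable
# across reconnects of the same device:
id_a = unique_client_id("866425031234567")
id_b = unique_client_id("866425039876543")
assert id_a != id_b
assert id_a == unique_client_id("866425031234567")
```

The resulting string would then be passed to `AT+QMTCONN` in place of the shared literal.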
1
answers
0
votes
25
views
asked 21 days ago

Slow lambda responses when bigger load

Hi, I'm doing load testing with Gatling and I have an issue with my Lambdas. I have two Lambdas: one written in Java 8 and one in Python. My test makes one request with 120 concurrent users, ramps from 120 to 400 users over 1 minute, and then holds 400 constant users per second for 2 minutes. The behaviour is strange because the response times are very high, even though the Lambdas contain no logic and just return a String. Here are some screenshots of the Gatling reports:

[Java Report][1] [Python Report][2]

I can add that I ran tests with the Lambdas warmed up and the behaviour is the same. I'm invoking the Lambdas through API Gateway. Do you have any idea why the response times are so high? Sometimes I also receive an HTTP error:

```
i.n.h.s.SslHandshakeTimeoutException: handshake timed out after 10000ms
```

Here is my Gatling simulation code:

```java
public class OneEndpointSimulation extends Simulation {
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("url") // Here is the root for all relative URLs
        .acceptHeader("text/html,application/xhtml+xml,application/json,application/xml;q=0.9,*/*;q=0.8") // Here are the common headers
        .acceptEncodingHeader("gzip, deflate")
        .acceptLanguageHeader("en-US,en;q=0.5")
        .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0");

    ScenarioBuilder scn = scenario("Scenario 1 Workload 2")
        .exec(http("Get all activities").get("/dev")).pause(1);

    {
        setUp(scn.injectOpen(
            atOnceUsers(120),
            rampUsersPerSec(120).to(400).during(60),
            constantUsersPerSec(400).during(Duration.ofMinutes(1))
        ).protocols(httpProtocol));
    }
}
```

I also checked the logs and turned on X-Ray for API Gateway, but there was nothing there; the average latency for these services was 14 ms. What can be the reason for the slow Lambda responses?
[1]: https://i.stack.imgur.com/sCx9M.png [2]: https://i.stack.imgur.com/SuHU0.png
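For load profiles like this one, a back-of-the-envelope check with Little's law (concurrency = arrival rate × average duration) often explains the cliff: warm invocations at ~15 ms need only a handful of concurrent Lambda executions, but cold starts of around a second each push the requirement to the hundreds, where per-account and per-region burst concurrency limits start to bite. A minimal sketch (the durations are illustrative assumptions, not measurements from the question):

```python
def required_concurrency(requests_per_second: float, avg_duration_s: float) -> float:
    """Little's law: L = lambda * W.

    The number of in-flight (concurrent) executions equals the request
    arrival rate times the average time each request spends executing.
    """
    return requests_per_second * avg_duration_s

# Warm invocations at ~15 ms barely need any concurrency...
assert abs(required_concurrency(400, 0.015) - 6) < 1e-9
# ...but ~1 s cold starts at the same rate need 400 concurrent
# executions, which collides with burst/account concurrency limits.
assert required_concurrency(400, 1.0) == 400.0
```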
0
answers
0
votes
7
views
asked 2 months ago

How can I do Distributed Transaction with EventBridge?

I'm using the following scenario to explain the problem. I have an ecommerce app which allows customers to sign up and immediately get a coupon to use in the application. I want to use **EventBridge** and a few other resources, like a Microsoft SQL database and Lambdas. The coupon is retrieved from a third-party API which lives outside of AWS. The event flow is:

Customer -- *sends web form data* --> EventBridge Bus --> Lambda -- *creates customer in SQL DB* -- *gets a coupon from third-party API* -- *sends customer-created-successfully event* --> EventBridge Bus

Creating the customer in the SQL DB and getting the coupon from the third-party API should happen in a single transaction. There is a good chance that either can fail, due to a network error or bad customer input. Even if the customer provides correct data and a new customer row is created in the SQL DB, the third-party API call can still fail. The two operations should be committed only if both succeed.

Does EventBridge provide distributed transactions through its .NET SDK? In the example above, if the third-party call fails, the customer data created in the SQL database should be rolled back and the message returned to the queue so it can be retried later. I'm looking for something similar to [TransactionScope](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample06_Transactions.md), which is available in Azure. If that is not available, how can I achieve a distributed transaction with EventBridge, other AWS resources and third-party services which have a greater chance of failure, as a unit?
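EventBridge has no distributed-transaction support in any SDK; the usual substitute is the saga pattern: run each step, and on failure execute a compensating action (delete the customer row) and re-raise so the event can be retried or dead-lettered. A minimal sketch, with the individual steps passed in as hypothetical callables:

```python
class CouponApiError(Exception):
    """Raised when the third-party coupon API call fails."""

def signup_saga(create_customer, delete_customer, fetch_coupon):
    """Run the two steps; on coupon failure, compensate by deleting the
    customer row, then re-raise so the event can be retried or sent to
    a dead-letter queue."""
    customer_id = create_customer()
    try:
        return customer_id, fetch_coupon(customer_id)
    except CouponApiError:
        delete_customer(customer_id)  # compensating transaction
        raise

# Simulated run where the third-party API fails: the SQL insert is
# compensated and the error propagates for retry.
deleted = []
def failing_coupon_api(_customer_id):
    raise CouponApiError()

try:
    signup_saga(lambda: 42, deleted.append, failing_coupon_api)
except CouponApiError:
    pass
assert deleted == [42]
```

In the EventBridge flow from the question, the re-raised error would make the Lambda invocation fail, so the event retry/DLQ policy stands in for "putting the message back on the queue".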
3
answers
0
votes
17
views
asked 2 months ago

Load testing serverless stack using Gatling

Hi, I'm doing some load testing on my serverless app and I see that it is unable to handle some higher loads. I'm using API Gateway, Lambda (Java 8) and DynamoDB. The code I'm using is the same as in this [link](https://github.com/Aleksandr-Filichkin/aws-lambda-runtimes-performance/tree/main/java-graalvm-lambda/src/lambda-java). For load testing I'm using Gatling. The load I configured makes a request with 120 users, then ramps from 120 to 400 users over one minute, and then makes requests with 400 constant users per second for 2 minutes. The problem is that my stack is unable to handle 400 users per second. Is that normal? I thought serverless would scale nicely and work like a charm. Here is my Gatling simulation code:

```java
public class OneEndpointSimulation extends Simulation {
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("url") // Here is the root for all relative URLs
        .acceptHeader("text/html,application/xhtml+xml,application/json,application/xml;q=0.9,*/*;q=0.8") // Here are the common headers
        .acceptEncodingHeader("gzip, deflate")
        .acceptLanguageHeader("en-US,en;q=0.5")
        .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0");

    ScenarioBuilder scn = scenario("Scenario 1 Workload 2")
        .exec(http("Get all activities").get("/activitiesv2")).pause(1);

    {
        setUp(scn.injectOpen(
            atOnceUsers(120),
            rampUsersPerSec(120).to(400).during(60),
            constantUsersPerSec(400).during(Duration.ofMinutes(2))
        ).protocols(httpProtocol));
    }
}
```

Here are the Gatling report results: [Image link](https://ibb.co/68SYDsb)

I'm also receiving an error, usually for approximately 50 requests, when Gatling starts injecting 400 constant users per second:

```
i.n.h.s.SslHandshakeTimeoutException: handshake timed out after 10000ms
```

I'm wondering what could be wrong. Is it too much for API Gateway, Lambda and DynamoDB?
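One way to take cold starts out of the picture at the 400-users-per-second step is provisioned concurrency, which pre-initializes execution environments before the test begins. A hedged sketch that only assembles the request parameters (the function name and alias are placeholders; the dict would be sent with boto3's `put_provisioned_concurrency_config`):

```python
def provisioned_concurrency_request(function_name: str, qualifier: str, n: int) -> dict:
    """Parameters for Lambda's PutProvisionedConcurrencyConfig, which
    keeps n execution environments initialized so a traffic burst does
    not hit cold starts. Qualifier must be a published version or an
    alias -- $LATEST is not allowed."""
    return {
        "FunctionName": function_name,
        "Qualifier": qualifier,
        "ProvisionedConcurrentExecutions": n,
    }

# e.g. boto3.client("lambda").put_provisioned_concurrency_config(
#          **provisioned_concurrency_request("my-fn", "live", 400))
req = provisioned_concurrency_request("my-fn", "live", 400)
assert req["ProvisionedConcurrentExecutions"] == 400
```

Note that the `SslHandshakeTimeoutException` itself is raised on the Gatling side, so the client machine's capacity to open hundreds of new TLS connections at once is worth checking as well.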
2
answers
0
votes
9
views
asked 2 months ago

mongodb-org-4.0.repo: No such file or directory when installing the mongo shell on my AWS Cloud9

I'm trying to connect to my DocumentDB cluster on AWS from AWS Cloud9, following [this tutorial][1]. But every time I try, the connection fails after 6 attempts:

```
(scr_env) me:~/environment/sephora $ mongo --ssl --host xxxxxxxxxxxxx:xxxxx --sslCAFile rds-combined-ca-bundle.pem --username username --password mypassword
MongoDB shell version v3.6.3
connecting to: mongodb://xxxxxxxxxxxxx:xxxxx/
2022-03-22T23:12:38.725+0000 W NETWORK  [thread1] Failed to connect to xxx.xx.xx.xxx:xxxxx after 5000ms milliseconds, giving up.
2022-03-22T23:12:38.726+0000 E QUERY    [thread1] Error: couldn't connect to server xxxxxxxxxxxxx:xxxxx, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
```

Indeed, the VPC configuration seems to be missing, so I tried to follow [this documentation][2]. But I do not know how to install the mongo shell on my AWS Cloud9. It seems that I cannot create the repository file with `echo -e "[mongodb-org-4.0] \name=MongoDB repository baseurl=...`; it returns `mongodb-org-4.0.repo: No such file or directory`. Also, when I tried to install the mongo shell with `sudo yum install -y mongodb-org-shell`, which I did not have, it returned `repolist 0`.

[1]: https://www.youtube.com/watch?v=Ild9ay9U_vY [2]: https://stackoverflow.com/a/17793856/4764604
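The `No such file or directory` error and the `repolist 0` result both suggest that the multi-line `echo` never produced a well-formed file under `/etc/yum.repos.d/`, so yum sees no repository at all. Each `key=value` pair must sit on its own line. A sketch that just assembles the file content (write it with `sudo tee` or similar; the baseurl/gpgkey values are placeholders to be copied from MongoDB's install guide for Amazon Linux):

```python
# Assemble the yum repo definition the broken one-line echo was trying
# to create. The baseurl and gpgkey below are deliberate placeholders.
REPO_PATH = "/etc/yum.repos.d/mongodb-org-4.0.repo"
REPO_BODY = "\n".join([
    "[mongodb-org-4.0]",
    "name=MongoDB Repository",
    "baseurl=<copy from MongoDB install docs>",
    "gpgcheck=1",
    "enabled=1",
    "gpgkey=<copy from MongoDB install docs>",
]) + "\n"

# yum requires the section header first and one directive per line;
# the collapsed `echo -e "... \name=..."` form cannot be parsed.
assert REPO_BODY.startswith("[mongodb-org-4.0]\n")
assert "name=MongoDB Repository" in REPO_BODY.splitlines()
```

Once the file exists with this structure, `sudo yum install -y mongodb-org-shell` should find the repository.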
2
answers
0
votes
3
views
asked 2 months ago

Lambda function updating cannot be made atomic with RevisionId

A number of Lambda API calls accept a RevisionId argument to ensure that the operation only continues if the current revision of the function matches, very similar to an atomic compare-and-swap operation. However, this RevisionId appears to be useless for some atomic operations, for the following reason.

Suppose I want to update a function's code and then publish it, in 2 separate steps. (I know it can be done in 1 step, but that does not interest me, because I cannot set the description of a published version in a single update/publish step; it must be done in 2 steps.) The [update_function_code](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.update_function_code) call returns a RevisionId that corresponds to the "in progress" update of the function. This RevisionId cannot be used, because it changes once the function becomes active/updated, and the new RevisionId can only be obtained via [get_function](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.get_function):

```
Update code -> RevisionId A (in progress) -> RevisionId B (updated/active)
            -> Get Function -> RevisionId B -> Publish Function
```

There is a race condition because I must call `get_function` to obtain the current RevisionId before I continue with publishing my function. This race condition makes it impossible to create an atomic sequence of operations that includes an `update_function_code` operation, because the RevisionId it returns cannot be relied on and has to be refreshed with a `get_function` call. Concurrently, another operation could change the RevisionId, and you wouldn't know, because you're depending on `get_function` to return an unknown RevisionId.
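With boto3, the closest available sequence is: update, wait for the update to settle, re-read the RevisionId, and pass it to `publish_version`. The window the question describes remains, but pinning the publish to the re-read revision at least turns a concurrent modification into a `PreconditionFailedException` rather than a silent mis-publish. A sketch (the stub client is only for a dry run; in practice `lam = boto3.client("lambda")`):

```python
def update_and_publish(lam, fn_name: str, zip_bytes: bytes, description: str) -> dict:
    """Update code, wait for the update to settle, re-read the settled
    RevisionId, then publish pinned to it. The get->publish window is
    still unguarded, but a concurrent change makes publish_version fail
    loudly instead of publishing the wrong code."""
    lam.update_function_code(FunctionName=fn_name, ZipFile=zip_bytes)
    lam.get_waiter("function_updated").wait(FunctionName=fn_name)
    rev = lam.get_function_configuration(FunctionName=fn_name)["RevisionId"]
    return lam.publish_version(FunctionName=fn_name, Description=description,
                               RevisionId=rev)

# Dry run against a stub client that records the call order.
calls = []
class _Waiter:
    def wait(self, **kw): calls.append("wait")
class _StubLambda:
    def update_function_code(self, **kw): calls.append("update"); return {"RevisionId": "A"}
    def get_waiter(self, name): return _Waiter()
    def get_function_configuration(self, **kw): calls.append("get"); return {"RevisionId": "B"}
    def publish_version(self, **kw): calls.append("publish"); return kw

result = update_and_publish(_StubLambda(), "my-fn", b"", "release notes")
assert calls == ["update", "wait", "get", "publish"]
assert result["RevisionId"] == "B"  # the settled revision, not the in-progress one
```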
1
answers
0
votes
4
views
asked 2 months ago

How to access API Parameters of a node and add them as part of its own output JSON in AWS Step Functions?

Here's part of my Step Function: https://i.stack.imgur.com/4Jxd9.png

Here's the workflow for the "Parallel" node:

```
{
  "Type": "Parallel",
  "Branches": [
    {
      "StartAt": "InvokeEndpoint01",
      "States": {
        "InvokeEndpoint01": {
          "Type": "Task",
          "End": true,
          "Parameters": {
            "Body": "$.Input",
            "EndpointName": "dummy-endpoint-name1"
          },
          "Resource": "arn:aws:states:::aws-sdk:sagemakerruntime:invokeEndpoint"
        }
      }
    },
    {
      "StartAt": "InvokeEndpoint02",
      "States": {
        "InvokeEndpoint02": {
          "Type": "Task",
          "End": true,
          "Parameters": {
            "Body": "$.Input",
            "EndpointName": "dummy-endpoint-name2"
          },
          "Resource": "arn:aws:states:::aws-sdk:sagemakerruntime:invokeEndpoint"
        }
      }
    }
  ],
  "Next": "Lambda Invoke"
}
```

I would like to access the `EndpointName` of each node inside this Parallel block and add it as one of the keys of that node's output, without modifying the existing output's body and other headers. (In the JSON above, the `EndpointName` of the first node inside the Parallel can be found at `$.Branches[0].States.InvokeEndpoint01.Parameters.EndpointName`.)

Here's the output of one of the nodes inside the Parallel block:

```
{
  "Body": "{xxxx}",
  "ContentType": "application/json",
  "InvokedProductionVariant": "xxxx"
}
```

I would like to access the API Parameter and make the output something like this:

```
{
  "Body": "{xxxx}",
  "ContentType": "application/json",
  "InvokedProductionVariant": "xxxx",
  "EndpointName": "dummy-endpoint-name1"
}
```

How do I do this?
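One way to do this without an extra Lambda is a `ResultSelector` on each task: it reshapes a task's result and can mix literal values with fields copied from the raw output, so the endpoint name (already a literal in each branch) can simply be repeated there. A sketch that builds such a task state as a Python dict, keeping the question's `Parameters` shape unchanged:

```python
def invoke_endpoint_state(endpoint_name: str) -> dict:
    """Task state whose ResultSelector copies the SageMaker response
    fields through (keys with a .$ suffix are JSONPath references into
    the task result) and adds the literal EndpointName alongside them."""
    return {
        "Type": "Task",
        "End": True,
        "Resource": "arn:aws:states:::aws-sdk:sagemakerruntime:invokeEndpoint",
        "Parameters": {"Body": "$.Input", "EndpointName": endpoint_name},
        "ResultSelector": {
            "Body.$": "$.Body",
            "ContentType.$": "$.ContentType",
            "InvokedProductionVariant.$": "$.InvokedProductionVariant",
            "EndpointName": endpoint_name,  # literal value: no .$ suffix
        },
    }

state = invoke_endpoint_state("dummy-endpoint-name1")
assert state["ResultSelector"]["EndpointName"] == "dummy-endpoint-name1"
```

Serializing `invoke_endpoint_state("dummy-endpoint-name2")` for the second branch gives the same shape with the other endpoint, producing exactly the desired four-key output.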
2
answers
1
votes
7
views
asked 3 months ago

AppMesh mTLS - Unable to verify SSL encryption is established using SPIRE

I'm in the process of setting up a prototype mesh with mTLS. I've gotten to the point where my services are coupled with Envoy sidecars and the sidecars are receiving certificates from SPIRE. I've been following along with this [article](https://aws.amazon.com/blogs/containers/using-mtls-with-spiffe-spire-in-app-mesh-on-eks/) and am now running into an issue. In their steps, they perform a curl command from a container outside of the mesh and get some TLS negotiation messages. When I try to do the same thing, I get the following:

```
bash-4.2# curl -v -k https://grpc-client-service.grpc.svc.cluster.local:80/
*   Trying 10.100.152.100:80...
* Connected to grpc-client-service.grpc.svc.cluster.local (10.100.152.100) port 80 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
*   CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
* Closing connection 0
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
```

Any advice on where I should start troubleshooting this issue?

Here's a rough overview of my setup: there are two pods representing a client service and a server service. The client has a web interface that lets the user input text; the client submits that text to the server service, and the server responds with an echo message with some extra formatting so you know it came from the server. Both pods are wrapped in virtual services that connect directly to virtual nodes. I was able to test this successfully with a basic mesh setup before adding the SPIRE workload parameters to the services. Within the Envoy sidecars, I can see that the SPIRE server is indeed issuing certificates.
1
answers
0
votes
15
views
asked 3 months ago

HTTP API GW + API VPC Link + Cloudmap + Fargate - How does it load balance

I am using an infrastructure setup as described in the title. The setup is also shown in this picture: https://d2908q01vomqb2.cloudfront.net/1b6453892473a467d07372d45eb05abc2031647a/2021/02/04/5-CloudMap-example.png

The official AWS blog here, https://aws.amazon.com/blogs/compute/configuring-private-integrations-with-amazon-api-gateway-http-apis/, states the following about such a setup:

> As AWS Cloud Map provides client-side service discovery, you can replace the load balancer with a service registry. Now, connections are routed directly to backend resources, instead of being proxied. This involves fewer components, making deployments safer and with less management, and reducing complexity.

My question is simple: what load-balancing algorithm does the HTTP API Gateway use when distributing traffic to resources (the Fargate tasks) registered in a service registry? Is it round robin, as with an ALB? The only thing I was able to find is this:

> For integrations with AWS Cloud Map, API Gateway uses DiscoverInstances to identify resources. You can use query parameters to target specific resources. The registered resources' attributes must include IP addresses and ports. API Gateway distributes requests across healthy resources that are returned from DiscoverInstances.

https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-private.html#http-api-develop-integrations-private-Cloud-Map
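The quoted documentation only commits to distributing requests across the healthy instances returned by `DiscoverInstances`; it does not promise ALB-style round robin. The client-side pattern it describes is essentially a pick from the healthy set, sketched below with a random choice as an illustration (the attribute names are the standard Cloud Map ones; the boto3 call in the comment shows how the instance list would normally be fetched, and the namespace/service names in it are placeholders):

```python
import random

def pick_instance(instances, rng=random):
    """Client-side service discovery: choose one healthy registered
    instance, the way a DiscoverInstances caller would. Instances with
    no HealthStatus attribute are treated as healthy here."""
    healthy = [i for i in instances
               if i.get("HealthStatus", "HEALTHY") == "HEALTHY"]
    inst = rng.choice(healthy)
    attrs = inst["Attributes"]
    return attrs["AWS_INSTANCE_IPV4"], int(attrs["AWS_INSTANCE_PORT"])

# e.g. instances = boto3.client("servicediscovery").discover_instances(
#          NamespaceName="demo.local", ServiceName="api")["Instances"]
instances = [
    {"Attributes": {"AWS_INSTANCE_IPV4": "10.0.0.1", "AWS_INSTANCE_PORT": "8080"}},
    {"Attributes": {"AWS_INSTANCE_IPV4": "10.0.0.2", "AWS_INSTANCE_PORT": "8080"}},
]
ip, port = pick_instance(instances)
assert port == 8080 and ip in ("10.0.0.1", "10.0.0.2")
```

Whether API Gateway itself uses random selection, round robin, or something else over that healthy set is not specified in the page quoted above.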
2
answers
0
votes
31
views
asked 3 months ago