/Microservices/

Questions tagged with Microservices

Sort by most recent

Browse through the questions and answers listed below or filter and sort to narrow down your results.

How to get traffic from a public API Gateway to a private one?

I would like to use [private API Gateways](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html#api-gateway-api-endpoint-types-private) to organise Lambda functions into microservices, while keeping them invisible from the public internet. I would then like to expose specific calls using a public API Gateway. How do I get traffic from my public API Gateway to a private API Gateway?

**What I've looked at so far**

In the past, for **container-based resources**, I've used the following pattern:

*Internet -> API Gateway -> VPC Link -> VPC[NLB -> ECS]*

However, I can't find an equivalent bridge to get specific traffic to a private API Gateway, i.e.:

*Internet -> API Gateway -> ? -> Private Gateway -> Lambda*

My instinct tells me that a network-based solution should exist (equivalent to VPC Link), but so far the only suggestions I've had involve:

- Solving with compute ( *Internet -> API Gateway -> VPC[Lambda proxy] -> Private Gateway -> Lambda* )
- Solving with load balancers ( *Internet -> API Gateway -> VPC Link -> VPC[NLB -> ALB] -> Private Gateway -> Lambda* )

Both of these approaches strike me as using the wrong (and expensive!) tools for the job: compute where no computation is required, and (two!!) load balancers where no load balancing is required (as Lambda effectively load-balances itself).

**Alternative solutions**

Perhaps there's a better way (other than a private API Gateway) to organise collections of serverless resources into microservices. I'm attempting to use them to present a like-for-like interface to the one my container-based microservices would have, e.g. documentation (OpenAPI spec), authentication, traffic monitoring, etc. If using private API Gateways to wrap internal resources into microservices is actually a misuse, and there's a better way to do it, I'm happy to hear it.
1
answers
0
votes
20
views
asked 21 hours ago
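Of the options the question lists, the Lambda-proxy bridge is the one that can be sketched concretely. Below is a minimal, hypothetical sketch: the environment variable name `PRIVATE_API_URL` and the helper function are illustrative only, and the proxy Lambda would have to be attached to the VPC so it can reach the private API's VPC endpoint.

```python
import os
import urllib.request

# Assumption: the private API's invoke URL (reachable via its VPC endpoint)
# is supplied as configuration; PRIVATE_API_URL is an illustrative name.
PRIVATE_API_URL = os.environ.get(
    "PRIVATE_API_URL",
    "https://example.execute-api.us-east-1.amazonaws.com/prod",
)

def build_forward_request(event, base_url):
    """Map an API Gateway proxy event onto the private API's invoke URL."""
    path = event.get("path", "/")
    method = event.get("httpMethod", "GET")
    return base_url.rstrip("/") + path, method

def handler(event, context):
    url, method = build_forward_request(event, PRIVATE_API_URL)
    req = urllib.request.Request(url, method=method)
    with urllib.request.urlopen(req) as resp:  # forward and relay the reply
        return {"statusCode": resp.status, "body": resp.read().decode()}
```

This is exactly the "compute where no computation is required" trade-off the question objects to; it is shown only because, of the suggestions listed, it is the cheapest to prototype.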

Extracted FORMS keep order

Hello. I am using AWS Textract, specifically the FORMS functionality, to extract form fields. It works really well, but the extracted FORMS do not keep their natural order as they appear in the document. Is there any way to keep the natural order in the returned object? Or can I map back to the document order using the coordinates? This is how I run the extraction currently:

```
def ocr(document):
    job_id = start_job(client, BUCKET, document)
    is_job_complete(client, job_id)
    response = get_job_results(client, job_id)  # This is the full object of the OCR
    field_list = []
    doc = Document(response)
    start = 0
    for page in doc.pages:
        lst = []
        for field in page.form.fields:
            lst.append("Key: {} Value: {}".format(field.key, field.value))
        field_list.append(lst)
        start = start + 1
    text_list = []  # Also extract the raw text
    for i in range(0, len(response)):
        for item in response[i]["Blocks"]:
            if item["BlockType"] == "LINE":
                text_list.append(item["Text"])
    text = " ".join(text_list)
    return (field_list, text)
```

To put it in a real scenario, an example document contains the following FORMS:

```
A: 123
B: 432
C: 000
D: 126
```

But the above function returns:

```
B: 432
A: 123
D: 126
C: 000
```

Hence it does not keep the natural order of working from the top, left to right, down to the bottom of the document. Is there any setting I can alter earlier, or something I can change about my current function, to return the original/natural order?
3
answers
0
votes
43
views
asked 18 days ago
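The coordinate approach the question asks about can be sketched against the raw Textract response: each `KEY_VALUE_SET` block carries a `Geometry.BoundingBox`, so the key blocks can be sorted by vertical position and then left to right. The rounding tolerance below is an assumption, used to keep fields that share a visual line from being reordered by tiny vertical jitter.

```python
def sort_key_blocks(blocks, line_tolerance=0.01):
    """Return KEY blocks from a Textract Blocks list in reading order."""
    keys = [
        b for b in blocks
        if b.get("BlockType") == "KEY_VALUE_SET"
        and "KEY" in b.get("EntityTypes", [])
    ]

    def position(block):
        box = block["Geometry"]["BoundingBox"]
        # Bucket Top into bands so small vertical differences don't
        # reorder fields on the same line, then sort left to right.
        return (round(box["Top"] / line_tolerance), box["Left"])

    return sorted(keys, key=position)
```

The sorted key blocks can then be matched back to the `field` objects (or their values) by block `Id`, so the `field_list` built in the function above comes out in document order.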

Cognito Migration Trigger errors when Lambda execution time too high

I am currently validating the migration of a set of users to a Cognito user pool via the migration trigger. The essence of the Lambda function for the trigger can be boiled down to:

```
def lambda_handler(event, context):
    response = requests.post(external_auth_api_url, json_with_user_and_pass)
    if response.status_code == 200:
        event["response"] = {
            "userAttributes": {
                "username": event["userName"],
                "email": event["userName"],
                "email_verified": "true"
            },
            "finalUserStatus": "CONFIRMED",
            "messageAction": "SUPPRESS"
        }
        return event
```

This makes an external REST call to the old system the user was signing in through, as per the documentation, and returns a success response. The issue I noticed is that when the Lambda execution time is too long (for example, the average execution time of this Lambda for me right now via ngrok is about 5 seconds total), Cognito fails when I call InitiateAuth with the USERNAME_PASSWORD flow and returns the following:

```
botocore.errorfactory.UserNotFoundException: An error occurred (UserNotFoundException) when calling the InitiateAuth operation: Exception migrating user in app client xxxxxxxxxxxx
```

I managed to validate that this was the issue by simply returning a success response without making the external REST call, bringing the Lambda runtime down to milliseconds, in which case I got the tokens as expected and the user was successfully migrated. I also tested this with a Lambda function like:

```
def lambda_handler(event, context):
    time.sleep(5)
    event["response"] = {
        "userAttributes": {
            "username": event["userName"],
            "email": event["userName"],
            "email_verified": "true"
        },
        "finalUserStatus": "CONFIRMED",
        "messageAction": "SUPPRESS"
    }
    return event
```

This fails with the same error response as above.
If anyone can advise: I am not sure if there is an undocumented maximum time the migration trigger will wait. I wouldn't expect the trigger to have such a limit if its purpose is to make external REST calls, which may or may not be slow. Thanks in advance!
1
answers
2
votes
20
views
asked 20 days ago
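Assuming the observed behaviour above reflects a roughly 5-second response window for synchronous Cognito triggers, one mitigation is to give the outbound call an explicit timeout with headroom to spare, so the trigger fails fast rather than overrunning the window. This sketch uses the standard-library `urllib` instead of `requests`, and the legacy endpoint URL is a placeholder.

```python
import json
import urllib.error
import urllib.request

LEGACY_AUTH_URL = "https://legacy.example.com/auth"  # hypothetical endpoint

def compute_timeout(budget_seconds=5.0, overhead_seconds=1.0):
    # Leave headroom for Lambda init and Cognito's response handling;
    # the 5 s budget is an assumption based on the behaviour described above.
    return max(budget_seconds - overhead_seconds, 0.1)

def authenticate(payload):
    req = urllib.request.Request(
        LEGACY_AUTH_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=compute_timeout()) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False  # treat a slow legacy system as a failed migration
```

A failed-fast migration can then surface a clean authentication error instead of the opaque `UserNotFoundException` above.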

Best practices for lambda layer dependencies

Hi all, We recently started using the OpenTelemetry lambda layer for Python (https://aws-otel.github.io/docs/getting-started/lambda/lambda-python) in our serverless applications. We've encountered an issue with dependencies in a few of our projects, where the version of a particular dependency required by the Lambda layer conflicts with the version installed in the Lambda function itself. For example, the layer required protobuf>=3.15.0, whereas our application was using 3.13.0, causing the following error in the logs:

```
Configuration of configurator failed
Traceback (most recent call last):
  File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 105, in _load_configurators
    entry_point.load()().configure(auto_instrumentation_version=__version__)  # type: ignore
  File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 215, in configure
    self._configure(**kwargs)
  File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 231, in _configure
    _initialize_components(kwargs.get("auto_instrumentation_version"))
  File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 181, in _initialize_components
    trace_exporters, log_exporters = _import_exporters(
  File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 149, in _import_exporters
    for (exporter_name, exporter_impl,) in _import_config_components(
  File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 136, in _import_config_components
    component_impl = entry_point.load()
  File "/var/task/pkg_resources/__init__.py", line 2470, in load
    self.require(*args, **kwargs)
  File "/var/task/pkg_resources/__init__.py", line 2493, in require
    items = working_set.resolve(reqs, env, installer, extras=self.extras)
  File "/var/task/pkg_resources/__init__.py", line 800, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (protobuf 3.13.0 (/var/task), Requirement.parse('protobuf>=3.15.0'), {'googleapis-common-protos'})
Failed to auto initialize opentelemetry
Traceback (most recent call last):
  File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 123, in initialize
    _load_configurators()
  File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 109, in _load_configurators
    raise exc
  File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 105, in _load_configurators
    entry_point.load()().configure(auto_instrumentation_version=__version__)  # type: ignore
  File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 215, in configure
    self._configure(**kwargs)
  File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 231, in _configure
    _initialize_components(kwargs.get("auto_instrumentation_version"))
  File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 181, in _initialize_components
    trace_exporters, log_exporters = _import_exporters(
  File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 149, in _import_exporters
    for (exporter_name, exporter_impl,) in _import_config_components(
  File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 136, in _import_config_components
    component_impl = entry_point.load()
  File "/var/task/pkg_resources/__init__.py", line 2470, in load
    self.require(*args, **kwargs)
  File "/var/task/pkg_resources/__init__.py", line 2493, in require
    items = working_set.resolve(reqs, env, installer, extras=self.extras)
  File "/var/task/pkg_resources/__init__.py", line 800, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (protobuf 3.13.0 (/var/task), Requirement.parse('protobuf>=3.15.0'), {'googleapis-common-protos'})
```

We've encountered the same issue with other libraries. My question is whether there are any best practices or recommendations to deal with this issue a little better. Should the Lambda layer publish its list of dependencies for each version so that the service using it knows what to expect? I feel like this introduces a very loose dependency that is only caught at runtime, which seems problematic to me. Hope it makes sense. I searched older posts and couldn't find anything relevant. Many thanks in advance, Juan
0
answers
1
votes
44
views
asked 21 days ago
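Until a layer publishes its dependency list, one way to catch the conflict before deployment rather than at runtime is a small build-time guard that compares bundled package versions against the constraints the layer is known to need. The pin list below contains only the `protobuf>=3.15.0` constraint taken from the traceback above; anything else would have to come from the layer's release notes. Standard library only.

```python
from importlib import metadata

# Assumed minimum versions required by the layer; protobuf>=3.15.0 is the
# one surfaced in the ContextualVersionConflict above.
LAYER_REQUIREMENTS = {"protobuf": (3, 15, 0)}

def find_conflicts(requirements=LAYER_REQUIREMENTS):
    """Return (name, installed, minimum) for each bundled package that is too old."""
    conflicts = []
    for name, minimum in requirements.items():
        try:
            raw = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # not bundled with the function; the layer's copy wins
        parts = []
        for piece in raw.split(".")[:3]:
            if not piece.isdigit():
                break  # naive parse; ignore pre-release suffixes
            parts.append(int(piece))
        installed = tuple(parts)
        if installed < minimum:
            conflicts.append((name, installed, minimum))
    return conflicts
```

Running `find_conflicts()` in CI (inside the packaged environment) would have flagged the protobuf 3.13.0 bundle before it ever reached Lambda.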

MQTT Connection keeps getting disconnected/closed while publishing or subscribing to topics using LTE Modem

- I'm using a Quectel BG95 modem with a host MCU to connect to AWS IoT Core and publish and subscribe to topics.
- I used to occasionally get an error that closed the MQTT connection exactly while doing pub/sub operations, and the connection had to be re-established, but that was very rare.
- However, over the last few days I have been running tests on multiple devices (using the same IoT Core endpoint) and have been getting this MQTT disconnection on each pub or sub operation. I am attaching a log for review.
- To me it seems like a server-side issue, since I have tried it with multiple modems and previous versions of firmware.

```
[While publishing to topic]
;2022-05-08T02:29:41Z;28;-966233403;462863960;;RAK000121|-45,RAKTEST|-56
AT+QIDEACT=1
OK
[ 2022-05-08T02:29:41Z ] [FARM_IP][INFO] MDM_SET_DEACTIVATE_PDP-else
AT+QIACT=1
OK
AT+QMTOPEN=0,"a5u9klmd2viw3z-ats.iot.us-west-1.amazonaws.com",8883
OK
+QMTOPEN: 0,0                    --- [Opening MQTT Connection]
[ 2022-05-08T02:29:41Z ] [FARM_IP][INFO] Mqtt opened
AT+QMTCONN=0,"0123qwer786"
OK
+QMTCONN: 0,0,0                  --- [MQTT client connected]
AT+QMTPUB=0,1,1,0,"fm/1011",72   --- [Publishing to the MQTT Topic]
> ;2022-05-08T02:29:41Z;28;-966233403;462863960;;RAK000121|-45,RAKTEST|-56
OK
+QMTSTAT: 0,1                    --- [MQTT Connection Closed]
```

```
[While Subscribing to topic]
AT+QMTSUB=0,1,"imei/get_logs",0   --- [Subscribing to the MQTT Topic]
OK
+QMTSTAT: 0,1                     --- [MQTT Connection Closed]
[ ] [FARM_IP][INFO] Starting timer
AT+QMTSUB=0,1,"imei/get_logs",0   --- [Subscribing to the MQTT Topic]
OK
+QMTSTAT: 0,1                     --- [MQTT Connection Closed]
```
1
answers
0
votes
89
views
asked 2 months ago
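One cause worth ruling out when the server closes the connection exactly on publish or subscribe is an AWS IoT policy that does not authorise the client ID or topics in use: IoT Core drops the connection rather than returning an MQTT-level error. A hypothetical policy matching the client ID and topics visible in the logs above (the region is taken from the endpoint; the account ID `123456789012` is a placeholder) might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "arn:aws:iot:us-west-1:123456789012:client/0123qwer786"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:us-west-1:123456789012:topic/fm/1011"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Subscribe",
      "Resource": "arn:aws:iot:us-west-1:123456789012:topicfilter/imei/get_logs"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Receive",
      "Resource": "arn:aws:iot:us-west-1:123456789012:topic/imei/get_logs"
    }
  ]
}
```

If the certificate's attached policy already allows these, the CloudWatch Logs for AWS IoT (when enabled) will show the actual disconnect reason.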

Slow lambda responses when bigger load

Hi, Currently I'm doing load testing using Gatling and I have one issue with my Lambdas. I have two Lambdas: one written in Java 8 and one written in Python. My test does one request with 120 concurrent users, then ramps from 120 to 400 users over 1 minute, and then Gatling sends requests at a constant 400 users per second for 2 minutes.

There is weird behaviour in these Lambdas because the response times are very high, even though there is no logic in them; they just return a String. Here are some screenshots of the Gatling reports: [Java Report][1] [Python Report][2]. I can add that I ran some tests with the Lambdas warmed up and saw the same behaviour. I'm using API Gateway to invoke my Lambdas. Do you have any idea why the response times are so high? Sometimes I receive an HTTP error that says:

```
i.n.h.s.SslHandshakeTimeoutException: handshake timed out after 10000ms
```

Here is also my Gatling simulation code:

```
public class OneEndpointSimulation extends Simulation {
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("url") // Here is the root for all relative URLs
        .acceptHeader("text/html,application/xhtml+xml,application/json,application/xml;q=0.9,*/*;q=0.8") // Here are the common headers
        .acceptEncodingHeader("gzip, deflate")
        .acceptLanguageHeader("en-US,en;q=0.5")
        .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0");

    ScenarioBuilder scn = scenario("Scenario 1 Workload 2")
        .exec(http("Get all activities")
        .get("/dev")).pause(1);

    {
        setUp(scn.injectOpen(
            atOnceUsers(120),
            rampUsersPerSec(120).to(400).during(60),
            constantUsersPerSec(400).during(Duration.ofMinutes(1))
        ).protocols(httpProtocol));
    }
}
```

I also checked the logs and turned on X-Ray for API Gateway, but there was nothing there; the average latency for these services was 14 ms. What can be the reason for the slow Lambda responses?

[1]: https://i.stack.imgur.com/sCx9M.png
[2]: https://i.stack.imgur.com/SuHU0.png
0
answers
0
votes
8
views
asked 3 months ago
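As a rough sanity check for a load profile like this, the concurrency a Lambda needs is approximately requests per second multiplied by average duration in seconds; comparing that figure against the account's concurrent-execution limit can rule throttling in or out. The durations below are illustrative, not measured from the reports:

```python
def required_concurrency(requests_per_second, avg_duration_seconds):
    # Little's law: in-flight executions = arrival rate x service time.
    return requests_per_second * avg_duration_seconds

# 400 rps at a 50 ms warm duration needs only ~20 concurrent executions,
# but the same rate with multi-second cold starts needs far more, which
# is one reason ramp-up phases can look disproportionately slow.
```

If the required concurrency approaches the account limit, invocations queue behind throttles and client-side timeouts (like the TLS handshake timeout above) start to appear even though each individual execution stays fast.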

How can I do Distributed Transaction with EventBridge?

I'm using the following scenario to explain the problem. I have an ecommerce app which allows customers to sign up and get an immediate coupon to use in the application. I want to use **EventBridge** and a few other resources, like a Microsoft SQL database and Lambdas. The coupon is retrieved from a third-party API which exists outside of AWS. The event flow is:

Customer -- *sends web form data* --> EventBridge Bus --> Lambda -- *creates customer in SQL DB* -- *gets a coupon from third-party API* -- *sends customer-created-successfully event* --> EventBridge Bus

Creating the customer in the SQL DB and getting the coupon from the third-party API should happen in a single transaction. There is a good chance that either can fail, due to a network error or the information the customer provides. Even if the customer has provided correct data and a new customer is created in the SQL DB, the third-party API call can still fail. These two operations should be committed only if both succeed.

Does EventBridge provide distributed transactions through its .NET SDK? In the above example, if the third-party call fails, the customer data created in the SQL database should be rolled back, and the message returned to the queue so it can be retried later. I'm looking for something similar to [TransactionScope](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample06_Transactions.md), which is available in Azure. If that is not available, how can I achieve a distributed transaction as a unit with EventBridge, other AWS resources, and third-party services which have a greater chance of failure?
3
answers
0
votes
29
views
asked 3 months ago
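EventBridge itself exposes no distributed-transaction API; the usual pattern for a flow like the one described is a saga: run each step, and on failure run compensating actions for the steps that already succeeded (Step Functions is often used to orchestrate this). A minimal sketch, where every function name (`create_customer`, `fetch_coupon`, `delete_customer`) is a hypothetical placeholder injected by the caller:

```python
def sign_up_with_coupon(form_data, create_customer, fetch_coupon, delete_customer):
    """Saga sketch: create the customer, then fetch the coupon;
    undo the creation if the third-party call fails."""
    customer_id = create_customer(form_data)      # step 1: SQL insert
    try:
        coupon = fetch_coupon(customer_id)        # step 2: third-party API
    except Exception:
        delete_customer(customer_id)              # compensate step 1
        raise                                     # let the event be retried later
    return customer_id, coupon
```

Re-raising after compensation lets the Lambda's event source (a queue or EventBridge retry policy) redeliver the message, which is the rollback-and-retry behaviour the question asks for, just implemented explicitly instead of via a transaction manager.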