
Microservices

Microservices are an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs. Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features.

Recent questions


Latency in GET requests

Hello. I wrote a Python script that extracts data from the FTX exchange using their API. I am running the code on an AWS instance (Free Tier, t2.micro) located very close to the exchange's servers. The code is essentially an infinite loop: at each step it sends 3 `GET` requests, processes the responses, and then moves to the next step.

For the first few hundred iterations, the latency (defined at the end of the post) for each block of three requests is on the order of 0.3 seconds. After some time, it starts to grow, reaching values from 2 to 5 seconds. On my local computer, located in the US, the latency is fairly constant at 1 second. There are no rate limits in the FTX API for `GET` requests, so I should not expect any throttling from the server. Is AWS limiting the rate of `GET` requests that I can make?

I am trying to understand the origin of this extra latency. To do so, I have monitored the HTTPS traffic with `tcpdump` and modified the Python script so that it stops as soon as it experiences a latency > 2 seconds. This way I can isolate the last packets in the tcpdump output and try to understand where the delay comes from. However, I really don't know how to read the output (I uploaded it here: https://pastebin.com/tAhcicPU). Can anyone help me understand the origin of the latency?

104.18.33.31.443 is the IP of the FTX server; 172.31.9.8 is the IP of the machine where my code runs.

**Definition of latency used here**: this is the relevant part of the code where I compute the latency:

```
import requests

def total_latency(pairList):
    # pairList = ['BTC/USD', 'ETH/BTC', 'ETH/USD']
    latency = 0
    for pair in pairList:
        api = requests.get(f'https://ftx.com/api/markets/{pair}/orderbook?depth=20')
        latency += api.elapsed.total_seconds()
    return latency
```

So, it is the total sum of the latency returned by `requests.get` for each request.
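For reference, a minimal diagnostic sketch along these lines (the per-request print and the reuse of a single `requests.Session` are debugging additions, not part of the original script) logs each request's latency with a wall-clock timestamp, so the slow block can be lined up with the tail of the tcpdump capture and repeated TLS handshakes can be ruled out:

```
import time
import requests

PAIRS = ['BTC/USD', 'ETH/BTC', 'ETH/USD']

def measure_once(session):
    """Send one block of three GET requests and print per-request timings."""
    total = 0.0
    for pair in PAIRS:
        wall_start = time.time()
        resp = session.get(f'https://ftx.com/api/markets/{pair}/orderbook?depth=20',
                           timeout=10)
        elapsed = resp.elapsed.total_seconds()  # time until response headers arrive
        total += elapsed
        print(f'{time.strftime("%H:%M:%S")} {pair}: elapsed={elapsed:.3f}s '
              f'wall={time.time() - wall_start:.3f}s')
    return total

with requests.Session() as session:  # keep-alive: one TLS connection for all requests
    while True:
        if measure_once(session) > 2:
            break  # stop so the end of the tcpdump output matches the slow block
```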
1 answer · 0 votes · 18 views · asked 10 days ago

Not receiving traces sent by Dapr + OpenTelemetry X-Ray exporter setup

To collect traces from Dapr we are using the OpenTelemetry Collector with the Zipkin receiver (since that's what Dapr supports) and the `awsxray` exporter. We see the traces in the OpenTelemetry Collector log, however we do not see any traces in AWS X-Ray. Please let us know what could be the issue.

OpenTelemetry receiver configuration:

```
receivers:
  zipkin:
    endpoint: 0.0.0.0:9411
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  awsxray:
  awsxray/udp_endpoint:
    endpoint: "0.0.0.0:5678"
    transport: udp
  awsxray/proxy_server:
    proxy_server:
      endpoint: "0.0.0.0:1234"
    tls:
      insecure: true
    region: ap-south-1
```

Exporter configuration:

```
awsxray:
  region: ap-south-1
  index_all_attributes: true
  resource_arn: "arn:aws:iam::215977048474:role/EKS-NodeGroup"
  #role_arn: "arn:aws:iam::215977048474:role/MastBazaar-EKS"
  no_verify_ssl: true
```

Service configuration:

```
service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [zipkin]
      # List your exporter here.
      exporters: [awsxray, logging]
```

We also added the environment variable `OTEL_PROPAGATORS: "xray"` in the deployment, as seen below:

```
containers:
  - name: otel-collector
    image: otel/opentelemetry-collector-contrib-dev:latest
    env:
      - name: OTEL_PROPAGATORS
        value: "xray"
```

Sample log from the OpenTelemetry Collector:

```
2022-09-13T18:15:46.106Z INFO loggingexporter/logging_exporter.go:42 TracesExporter {"#spans": 1}
2022-09-13T18:15:46.106Z DEBUG loggingexporter/logging_exporter.go:51 ResourceSpans #0
Resource SchemaURL:
Resource labels:
     -> service.name: STRING(multiplyapp)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope
Span #0
    Trace ID       : 5db6a8eb0ade84a251ca547034b13c5f
    Parent ID      : 9987294085d81f24
    ID             : f88d24f02301bf28
    Name           : CallLocal/multiplyapp/multiply
    Kind           : SPAN_KIND_SERVER
    Start time     : 2022-09-13 18:15:44.031751 +0000 UTC
    End time       : 2022-09-13 18:15:45.034974 +0000 UTC
    Status code    : STATUS_CODE_UNSET
    Status message :
Attributes:
     -> dapr.api: STRING(/dapr.proto.internals.v1.ServiceInvocation/CallLocal)
     -> dapr.invoke_method: STRING(multiply)
     -> dapr.protocol: STRING(grpc)
     -> rpc.service: STRING(ServiceInvocation)
     -> net.host.ip: STRING(10.0.195.115)
2022-09-13T18:15:54.505Z INFO loggingexporter/logging_exporter.go:42 TracesExporter {"#spans": 1}
2022-09-13T18:15:54.505Z DEBUG loggingexporter/logging_exporter.go:51 ResourceSpans #0
Resource SchemaURL:
Resource labels:
     -> service.name: STRING(multiplyapp)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope
Span #0
    Trace ID       : c3f98c14589d4618899f4581503e4c3f
    Parent ID      : 642dd665bbc6c3dd
    ID             : 014fec0dfb862b5c
    Name           : CallLocal/multiplyapp/multiply
    Kind           : SPAN_KIND_SERVER
    Start time     : 2022-09-13 18:15:53.489635 +0000 UTC
    End time       : 2022-09-13 18:15:54.492579 +0000 UTC
    Status code    : STATUS_CODE_UNSET
    Status message :
Attributes:
     -> dapr.api: STRING(/dapr.proto.internals.v1.ServiceInvocation/CallLocal)
     -> dapr.invoke_method: STRING(multiply)
     -> dapr.protocol: STRING(grpc)
     -> rpc.service: STRING(ServiceInvocation)
     -> net.host.ip: STRING(10.0.195.115)
```

Any guidance will be very much appreciated.
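As a hedged verification step (not part of the setup above; it assumes credentials with `xray:GetTraceSummaries` in the same account), a short boto3 snippet can confirm whether any traces at all are reaching X-Ray in `ap-south-1`. If the collector log shows spans being exported but this count stays at zero, the problem is likely between the `awsxray` exporter and the X-Ray service (for example IAM permissions or the region) rather than in the Zipkin receiver:

```
from datetime import datetime, timedelta, timezone

import boto3

# Count the traces X-Ray has actually received in the last 15 minutes.
xray = boto3.client('xray', region_name='ap-south-1')

end = datetime.now(timezone.utc)
start = end - timedelta(minutes=15)

resp = xray.get_trace_summaries(StartTime=start, EndTime=end)
print(f"Traces received by X-Ray in the last 15 minutes: {len(resp['TraceSummaries'])}")
```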
1 answer · 0 votes · 20 views · asked 17 days ago

Amazon ECS / SQS / Lambda

Good morning everyone. I am just starting out in the AWS world and I have a challenge that I need to solve with the most appropriate tools AWS offers. The use case is the following: I have to process some PDF documents, add some images to them, and send them back. Currently I am doing this with a microservice that receives a PDF and returns it modified. When I run load tests, the queue receives 50 requests, the batch task gets blocked with 9 PDFs at the same time, and the ECS task crashes.

One solution is to increase the capacity of the ECS service so that the microservice can process more documents. But I have read that SQS can help me solve this, so I want to be sure I am applying the right architecture:

- I have a .NET Core microservice in Docker that produces requests and sends them to the queue.
- I have an SQS queue that receives requests and arranges them in order of arrival.
- I have a Lambda that listens to the SQS queue and, when a new request arrives, fires the event to the consuming microservice (the Lambda "fires" up to 10 times simultaneously and each "firing" lets only 1 document through; or is it recommended that each "firing" lets 10 documents through?).
- The consuming microservice receives a message from the Lambda and keeps processing the SQS requests until all of them are finished.
- When it finishes and the queue is empty, the Lambda again waits for the SQS queue to receive a new message and the cycle starts again.

Overview: the microservice is the publisher, the microservice is the consumer, the Lambda is the trigger, and SQS is the queue.
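As a rough illustration of the "Lambda as trigger" piece described above, here is a hedged Python sketch (the `PROCESSOR_URL` endpoint, the environment variable, and the message shape are assumptions, not from the post). With an SQS event source mapping, Lambda receives up to the configured batch size of messages per invocation and scales the number of concurrent invocations automatically:

```
import json
import os
import urllib.request

# Hypothetical endpoint of the consuming microservice (placeholder, not from the post).
PROCESSOR_URL = os.environ.get('PROCESSOR_URL', 'http://pdf-processor.internal/process')

def handler(event, context):
    """Forward each SQS message (a reference to one PDF) to the processing service."""
    failures = []
    for record in event['Records']:
        body = json.loads(record['body'])
        req = urllib.request.Request(
            PROCESSOR_URL,
            data=json.dumps(body).encode('utf-8'),
            headers={'Content-Type': 'application/json'},
        )
        try:
            urllib.request.urlopen(req, timeout=30)
        except Exception:
            # Report only the failed messages so SQS retries just those
            # (requires ReportBatchItemFailures on the event source mapping).
            failures.append({'itemIdentifier': record['messageId']})
    return {'batchItemFailures': failures}
```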
1 answer · 0 votes · 46 views · asked 21 days ago

Error in Lambda, no clue why

Hi, we are getting this error in the logs and have no clue why. We have a Lambda which does a Scan on DynamoDB; the Lambda seems to throw an error from time to time, and there seems to be nothing unusual in DynamoDB. This is the error; can anyone tell me why this is happening?

```
{
    "errorType": "ValidationException",
    "errorMessage": "Invalid FilterExpression: Expression size has exceeded the maximum allowed size; expression size: 4172",
    "code": "ValidationException",
    "message": "Invalid FilterExpression: Expression size has exceeded the maximum allowed size; expression size: 4172",
    "time": "2022-09-07T16:12:20.819Z",
    "requestId": "RG2TJNCJ36LIHKIHPJ8V7BQ3ARVV4KQNSO5AEMVJF66Q9ASUAAJG",
    "statusCode": 400,
    "retryable": false,
    "retryDelay": 6.469955607178434,
    "stack": [
        "ValidationException: Invalid FilterExpression: Expression size has exceeded the maximum allowed size; expression size: 4172",
        "    at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:52:27)",
        "    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)",
        "    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)",
        "    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:686:14)",
        "    at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
        "    at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
        "    at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
        "    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
        "    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:688:12)",
        "    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:116:18)"
    ]
}
```
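The error itself says the `FilterExpression` string exceeded DynamoDB's 4 KB limit for expressions (4172 bytes here), which usually happens when a long list of values is OR'ed into a single expression. For illustration only, a hedged Python/boto3 sketch of one workaround is below; the original Lambda is Node.js, and the table name and `status` attribute are placeholders, not taken from the post. It splits the value list into chunks and runs one Scan per chunk so each expression stays small:

```
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('my-table')  # placeholder table name

def scan_for_values(values, chunk_size=50):
    """Scan the table once per chunk of filter values to keep each expression small."""
    items = []
    for i in range(0, len(values), chunk_size):
        chunk = values[i:i + chunk_size]
        placeholders = [f':v{j}' for j in range(len(chunk))]
        kwargs = {
            'FilterExpression': f'#s IN ({", ".join(placeholders)})',
            'ExpressionAttributeNames': {'#s': 'status'},  # placeholder attribute
            'ExpressionAttributeValues': dict(zip(placeholders, chunk)),
        }
        while True:  # follow pagination for each chunked scan
            resp = table.scan(**kwargs)
            items.extend(resp['Items'])
            if 'LastEvaluatedKey' not in resp:
                break
            kwargs['ExclusiveStartKey'] = resp['LastEvaluatedKey']
    return items
```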
1 answer · 0 votes · 40 views · asked 23 days ago
