
Questions tagged with Serverless


Lambda: random long executions while running QLDB queries

I have a Lambda that is triggered by an SQS FIFO queue whenever there are messages on that queue. The Lambda reads the message from the queue and connects to QLDB through a VPC endpoint in order to run a simple SELECT query followed by an INSERT query. The table used in the SELECT has an index on the field referenced in the WHERE condition.

Flow (all of the services run "inside" a VPC):

`SQS -> Lambda -> VPC interface endpoint -> QLDB`

SELECT query:

`SELECT FIELD1, FIELD2 FROM TABLE1 WHERE FIELD3 = "ABCDE"`

INSERT query:

`INSERT INTO TABLE1 .....`

The Lambda uses a shared connection/session to QLDB, and this is how I'm connecting to it:

```
import { QldbDriver, RetryConfig } from 'amazon-qldb-driver-nodejs'

let driverQldb: QldbDriver
const ledgerName = 'MyLedger'

export function connectQLDB(): QldbDriver {
  if (!driverQldb) {
    const retryLimit = 4
    const retryConfig = new RetryConfig(retryLimit)
    const maxConcurrentTransactions = 1500
    driverQldb = new QldbDriver(ledgerName, {}, maxConcurrentTransactions, retryConfig)
  }
  return driverQldb
}
```

When I run a load test that sends around 200 requests/messages per second to that Lambda over a 15-minute interval, I start to see random long executions of the Lambda while it runs the queries on QLDB (mainly the SELECT query). Sometimes the same query returns data in around 100 ms, and sometimes it takes more than 40 seconds, which results in Lambda timeouts. I have raised the Lambda timeout to 1 minute, but that is not the best approach, and sometimes it is not enough either. The VPC endpoint metrics show around 250 active connections and 1,000 new connections during the load test.

Is there any QLDB metric that could help identify the root cause of this behavior? Could it be related to a QLDB limit (like the 1,500 active sessions described here: https://docs.aws.amazon.com/qldb/latest/developerguide/limits.html#limits.default) or to concurrent read/write IOPS?
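
For context, this is roughly the shape of the query path inside such a handler — a minimal sketch only, assuming the table and field names from the queries above, a hypothetical SQS message shape, and the shared `connectQLDB()` module shown above; the actual handler code is not included in the question.

```
import { SQSEvent } from 'aws-lambda'
import { TransactionExecutor } from 'amazon-qldb-driver-nodejs'
import { connectQLDB } from './connectQLDB' // hypothetical path to the module shown above

export const handler = async (event: SQSEvent): Promise<void> => {
  const driver = connectQLDB()

  for (const record of event.Records) {
    const message = JSON.parse(record.body) // hypothetical message shape: { field3: string, ... }

    // Both statements run in one optimistic transaction; the driver retries
    // on OCC conflicts up to the configured retry limit (4 here).
    await driver.executeLambda(async (txn: TransactionExecutor) => {
      // Parameterized SELECT on the indexed field
      const result = await txn.execute(
        'SELECT FIELD1, FIELD2 FROM TABLE1 WHERE FIELD3 = ?',
        message.field3
      )
      console.log(`SELECT returned ${result.getResultList().length} document(s)`)

      // Subsequent INSERT (document shape is hypothetical)
      await txn.execute('INSERT INTO TABLE1 ?', { FIELD3: message.field3 })
    })
  }
}
```
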
1 answer · 0 votes · 56 views · asked 17 days ago

API Gateway - How to accept Authorization with Bearer keyword - HTTP API

An API Gateway HTTP API using Cognito requires the JWT token to be included in the Authorization header. This is a problem when testing against Swagger Editor, which includes the "Bearer" keyword in the Authorization header. Is there a way to configure API Gateway to accept the JWT with the `Bearer` keyword?

**OpenAPI Schema:**

```
securitySchemes:
  AwsOAuth2:
    type: oauth2
    flows:
      implicit:
        authorizationUrl: https://auth.ourdomain.com/login
        scopes:
          aws.cognito.signin.user.admin: Gives you access to all the User Pool APIs that can be accessed using access tokens alone
          email: Grants access to the email and email_verified claims. This scope can only be requested with the openid scope.
          openid: Returns all user attributes in the ID token that are readable by the client. The ID token is not returned if the openid scope is not requested by the client.
          phone: Grants access to the phone_number and phone_number_verified claims. This scope can only be requested with the openid scope.
          profile: Grants access to all user attributes that are readable by the client. This scope can only be requested with the openid scope.
    x-amazon-apigateway-authorizer:
      identitySource: "$request.header.Authorization"
      jwtConfiguration:
        audience:
          - "xxxxxxxx"
        issuer: "https://cognito-idp.eu-west-1.amazonaws.com/eu-west-1_xxxxxxx"
      type: "jwt"
security:
  - AwsOAuth2: []
```

This generates the following curl request in the OpenAPI Swagger Editor:

```
curl -X 'GET' \
  'https://api.ourdomain.com/0.5/app-user/heyho' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer eyJraWQiOiJ1aVcwc3Exxxxxxxxxxxx'
```

The problem is that this request gets rejected by the HTTP API when it is integrated with Cognito. It requires a header like this (without Bearer):

```
-H 'Authorization: eyJraWQiOiJ1aVcwc3Exxxxxxxxxxxx'
```
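
A stop-gap while testing (not an API Gateway configuration change) is to strip the prefix on the client before the request is sent. Below is a minimal sketch, assuming Node 18+ with the built-in `fetch`, and reusing the placeholder URL and truncated token from the curl example above.

```
const apiUrl = 'https://api.ourdomain.com/0.5/app-user/heyho'

async function callApi(authorizationHeader: string): Promise<unknown> {
  // Remove a leading "Bearer " if a tool such as Swagger Editor added it
  const token = authorizationHeader.replace(/^Bearer\s+/i, '')

  const response = await fetch(apiUrl, {
    method: 'GET',
    headers: {
      accept: 'application/json',
      Authorization: token, // bare JWT, which the gateway currently expects
    },
  })

  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`)
  }
  return response.json()
}

// Example call with the truncated token from the question:
// callApi('Bearer eyJraWQiOiJ1aVcwc3Exxxxxxxxxxxx').then(console.log)
```
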
1 answer · 0 votes · 48 views · asked 18 days ago

Latency in GET requests

Hello. I wrote some Python code that extracts data from the FTX exchange using their API. I am running the code on an AWS instance (free plan, t2.micro) located very close to the exchange's servers. The code is essentially an infinite loop: at each step it sends 3 GET requests, processes the responses, and then moves on to the next step.

For the first few hundred iterations, the latency (defined at the end of this post) for each block of three requests is on the order of 0.3 seconds. After some time it starts to grow, reaching values from 2 to 5 seconds. On my local computer, located in the US, the latency is fairly constant at 1 second. There are no rate limits in the FTX API for `GET` requests, so I should not expect any throttling from the server. Is AWS limiting the rate of `GET` requests that I can make?

I am trying to understand the origin of this extra latency. To do so, I have monitored the HTTPS traffic with `tcpdump` and modified the Python script so that it stops as soon as it experiences a latency > 2 seconds. This way I can isolate the last packets in the tcpdump output and try to understand the origin of the delay. However, I really don't know how to read the output (I uploaded it here: https://pastebin.com/tAhcicPU). Can anyone help me understand the origin of the latency?

104.18.33.31.443 is the IP of the FTX server; 172.31.9.8 is the IP of the machine where my code runs.

**Definition of latency used here**: this is the relevant part of the code where I compute the latency:

```
import requests

def get_latency(pairList):  # pairList = ['BTC/USD', 'ETH/BTC', 'ETH/USD']
    latency = 0
    for pair in pairList:
        api = requests.get(f'https://ftx.com/api/markets/{pair}/orderbook?depth={20}')
        latency += api.elapsed.total_seconds()
    return latency
```

So it is the total sum of the elapsed time reported by requests.get over the three requests.
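
One thing that can help narrow this down is timing each of the three requests separately instead of only their sum, so a single slow call stands out from a uniformly slow block. Here is a minimal sketch of that idea, written in TypeScript for Node 18+ purely for illustration; the same per-request logging can just as well be added to the Python loop above.

```
// Logs per-request latency so one slow request can be told apart
// from a uniformly slow block of three.
const pairs = ['BTC/USD', 'ETH/BTC', 'ETH/USD']

async function measureOnce(): Promise<number> {
  let total = 0
  for (const pair of pairs) {
    const url = `https://ftx.com/api/markets/${pair}/orderbook?depth=20`
    const start = performance.now()
    const response = await fetch(url)
    await response.json() // include the body download in the measurement
    const elapsed = (performance.now() - start) / 1000
    console.log(`${pair}: ${elapsed.toFixed(3)} s`)
    total += elapsed
  }
  return total
}

measureOnce().then((total) => console.log(`block total: ${total.toFixed(3)} s`))
```
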
1 answer · 0 votes · 19 views · asked 18 days ago

Need help understanding Redshift Serverless costs

I'm using Redshift Serverless to run some tests, but I don't understand how it's being billed. I'm still on the $300 Free Tier credit, but I've already used almost $50 of it, and according to my calculations the cost should be less than $2 so far.

![Enter image description here](/media/postImages/original/IM7KWpN_prSgqg3s9BsfEJZQ)

I understand that Redshift Serverless is billed for RPUs and storage. But when I check the compute usage using:

```
select date_trunc('day', start_time) usage_date,
       sum(compute_seconds) total_compute_seconds,
       sum(compute_seconds)/(60*60) total_compute_hours,
       total_compute_hours*0.375 total_compute_cost
from sys_serverless_usage
group by date_trunc('day', start_time);
```

the result shows that the cost should be less than $2 so far:

![Enter image description here](/media/postImages/original/IM66nz2iuHQfesqSiwDkiArA)

Storage doesn't seem to be the cost either, as it comes out at $0 so far, using:

```
SELECT date_trunc('day', start_time) usage_date,
       SUM((data_storage/(1024*1024*1024))*(datediff(s, start_time, end_time)/3600.0)) AS GB_hours,
       GB_hours / 720 AS GB_months,
       GB_months*0.024 AS storage_cost_day
FROM sys_serverless_usage
GROUP BY 1
ORDER BY 1;
```

I need help understanding where the money is going: is there some fixed cost, or why is it draining so fast? I also tried to find Redshift Serverless in the Billing section, but it doesn't seem to be there (maybe because it's still under the Free Tier, although some services show up there even when the cost is $0).

Thanks in advance!
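
One way to see which line items are actually consuming the credit is to ask Cost Explorer for a daily cost breakdown grouped by usage type. Below is a rough sketch using the AWS SDK for JavaScript v3 (`@aws-sdk/client-cost-explorer`); the date range is only an example, and it assumes Redshift Serverless usage is reported under the "Amazon Redshift" service dimension.

```
import {
  CostExplorerClient,
  GetCostAndUsageCommand,
} from '@aws-sdk/client-cost-explorer'

// Daily unblended cost for Redshift, grouped by usage type, so RPU charges,
// storage, and anything unexpected show up as separate line items.
const client = new CostExplorerClient({ region: 'us-east-1' })

async function redshiftCostByUsageType(): Promise<void> {
  const command = new GetCostAndUsageCommand({
    TimePeriod: { Start: '2022-11-01', End: '2022-11-24' }, // example range
    Granularity: 'DAILY',
    Metrics: ['UnblendedCost'],
    Filter: {
      // Assumption: Serverless usage appears under the "Amazon Redshift" service
      Dimensions: { Key: 'SERVICE', Values: ['Amazon Redshift'] },
    },
    GroupBy: [{ Type: 'DIMENSION', Key: 'USAGE_TYPE' }],
  })

  const response = await client.send(command)
  for (const day of response.ResultsByTime ?? []) {
    for (const group of day.Groups ?? []) {
      const amount = group.Metrics?.UnblendedCost?.Amount ?? '0'
      console.log(`${day.TimePeriod?.Start} ${group.Keys?.join(' ')} $${amount}`)
    }
  }
}

redshiftCostByUsageType().catch(console.error)
```
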
1 answer · 0 votes · 32 views · asked 23 days ago