
Questions tagged with Amazon API Gateway


Browse through the questions and answers listed below or filter and sort to narrow down your results.

0 answers · 0 votes · 4 views · Ravi · asked 8 days ago

AWS SAM "No response from invoke container for" wrong function name

I've debugged my application and identified a problem. I have two REST API Gateway functions, and it seems that since they both bind to the same endpoint, the first one receives the call that the second one should handle.

Here's my template.yaml:

```yaml
Resources:
  mysampleapi1:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: packages/mysampleapi1/dist/index.handler
      Runtime: nodejs14.x
      CodeUri: .
      Description: ''
      MemorySize: 1024
      Timeout: 30
      Role: >-
        arn:aws:iam:: [PRIVATE]
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /users
            Method: ANY
      Environment:
        Variables:
          NODE_ENV: local
      Tags:
        STAGE: local
  mysampleapi2:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: packages/mysampleapi2/dist/index.handler
      Runtime: nodejs14.x
      CodeUri: .
      Description: ''
      MemorySize: 1024
      Timeout: 30
      Role: >-
        arn:aws:iam:: [PRIVATE]
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /wallet
            Method: ANY
      Environment:
        Variables:
          NODE_ENV: local
      Tags:
        STAGE: local
```

When I send an HTTP request for `mysampleapi2`, here's what happens in the logs, using the startup command `sam local start-api --port 3001 --log-file /tmp/server-output.log --profile personal --debug`:

```log
2022-04-27 18:2:34,953 | Mounting /home/mathieu_auclair/Documents/Project/repositories/server as /var/task:ro,delegated inside runtime container
2022-04-27 18:20:35,481 | Starting a timer for 30 seconds for function 'mysampleapi1'
2022-04-27 18:21:05,484 | Function 'mysampleapi1' timed out after 30 seconds
2022-04-27 18:21:46,732 | Container was not created. Skipping deletion
2022-04-27 18:21:46,732 | Cleaning all decompressed code dirs
2022-04-27 18:21:46,733 | No response from invoke container for mysampleapi1
2022-04-27 18:21:46,733 | Invalid lambda response received: Lambda response must be valid json
```

Why is `mysampleapi2` not picking up the HTTP call? If I run the functions in separate template files using different ports, it works. Why is that?

Re-post of my question on StackOverflow: https://stackoverflow.com/questions/72036152/aws-sam-no-response-from-invoke-container-for-wrong-function-name
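A possible direction, sketched below under the assumption that both routes should live on one shared REST API: declaring an explicit `AWS::Serverless::Api` and pointing each function's `Api` event at it with `RestApiId` makes the path-to-function mapping explicit for `sam local start-api`. The resource name `SharedApi` is illustrative, several properties from the original template are omitted for brevity, and this is a sketch rather than a confirmed fix.

```yaml
Resources:
  SharedApi:
    Type: 'AWS::Serverless::Api'   # one explicit API so both routes share a single route table
    Properties:
      StageName: local

  mysampleapi1:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: packages/mysampleapi1/dist/index.handler
      Runtime: nodejs14.x
      CodeUri: .
      Events:
        Api1:
          Type: Api
          Properties:
            RestApiId: !Ref SharedApi   # bind /users to mysampleapi1 on the shared API
            Path: /users
            Method: ANY
      # Role, Timeout, Environment, Tags omitted for brevity

  mysampleapi2:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: packages/mysampleapi2/dist/index.handler
      Runtime: nodejs14.x
      CodeUri: .
      Events:
        Api1:
          Type: Api
          Properties:
            RestApiId: !Ref SharedApi   # bind /wallet to mysampleapi2 on the same API
            Path: /wallet
            Method: ANY
```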
1 answer · 1 vote · 3 views · Mathieu Auclair · asked 18 days ago

Should I use Cognito Identity Pool OIDC JWT Connect Tokens in the AWS API Gateway?

I noticed this question from 4 years ago: https://repost.aws/questions/QUjjIB-M4VT4WfOnqwik0l0w/verify-open-id-connect-token-generated-by-cognito-identity-pool

So I was curious and looked at the JWT token being returned from the Cognito Identity Pool. Its `aud` field was my identity pool id and its `iss` field was "https://cognito-identity.amazonaws.com", and it turns out that you can see the OIDC config at "https://cognito-identity.amazonaws.com/.well-known/openid-configuration" and grab the public keys at "https://cognito-identity.amazonaws.com/.well-known/jwks_uri".

Since I have access to the keys, that means I can freely validate OIDC tokens produced by the Cognito Identity Pool. Moreover, I should also be able to pass them into an API Gateway with a JWT authorizer. This would allow me to effectively gate my API Gateway behind a Cognito Identity Pool without any extra Lambda authorizers or needing IAM authentication.

Use case: I want to create a serverless Lambda app that's gated behind SAML authentication using Okta. Okta does not allow you to use their JWT authorizer without purchasing extra add-ons for some reason. I could use IAM authentication on the gateway instead, but I'm afraid of losing information such as the user's id, group, name, email, etc. Using the JWT directly preserves this information and passes it to the Lambda.

Is this a valid approach? Is there something I'm missing? Or is there a better way? Does the IAM method preserve user attributes?
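For context, a minimal sketch of where those token values would be wired up if the HTTP API JWT authorizer route were attempted with SAM. The audience value is a placeholder identity pool id, the resource names are illustrative, and whether API Gateway accepts `https://cognito-identity.amazonaws.com` as an issuer is precisely the open question above, so treat this as an illustration rather than a confirmed working setup.

```yaml
Resources:
  MyHttpApi:
    Type: 'AWS::Serverless::HttpApi'   # functions would attach via HttpApi events referencing this API (not shown)
    Properties:
      Auth:
        DefaultAuthorizer: IdentityPoolJwtAuthorizer
        Authorizers:
          IdentityPoolJwtAuthorizer:
            JwtConfiguration:
              issuer: "https://cognito-identity.amazonaws.com"
              audience:
                - "us-east-1:11111111-2222-3333-4444-555555555555"  # placeholder: the identity pool id seen in the token's aud claim
            IdentitySource: "$request.header.Authorization"
```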
0 answers · 0 votes · 2 views · Brandon-Ellis-AI · asked 19 days ago
1 answer · 0 votes · 7 views · AWS-User-5636405 · asked a month ago

AWS SAM: set the authorization cache TTL in the resource template (AWS::Serverless::Api)

Hi all,

I am using SAM to deploy my serverless application, which consists of a REST API and a Lambda authorizer. The REST API does not trigger a Lambda; it integrates other public services.

When declaring the [AWS::Serverless::Api](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-resource-api.html) and its [auth](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-property-api-apiauth.html) attribute, I cannot find a way to configure the authorization cache's TTL as in the [AWS::ApiGateway::Authorizer](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authorizerresultttlinseconds) resource. Am I missing something? If not, is there any reason the authorization cache's TTL configuration is not made available in the [AWS::Serverless::Api](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-resource-api.html) element?

This potentially missing feature is something minor for us and does not block our project. It is more of a nice-to-have, as I would prefer not to copy/paste the whole OpenAPI specification directly into the template file, but rather use the SAM feature to specify the API via the [AWS::Serverless::Api](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-resource-api.html#sam-api-definitionuri) resource's *DefinitionUri* attribute. This makes it possible to keep the API definition out of the template and embed it in a local file that is automatically uploaded to S3 during the SAM deploy step.

Thanks
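If I read the SAM property reference correctly, the Lambda authorizer's `Identity.ReauthorizeEvery` setting maps to the REST API's `authorizerResultTtlInSeconds`, so the cache TTL can be set directly in the `Auth` block. A minimal sketch, with `AuthorizerFunction` as a placeholder resource and without the `DefinitionUri` part of the setup (how SAM's `Auth` block interacts with an external OpenAPI file is a separate question), worth verifying against the current SAM docs:

```yaml
Resources:
  MyApi:
    Type: 'AWS::Serverless::Api'
    Properties:
      StageName: prod
      Auth:
        DefaultAuthorizer: MyLambdaTokenAuthorizer
        Authorizers:
          MyLambdaTokenAuthorizer:
            FunctionArn: !GetAtt AuthorizerFunction.Arn   # AuthorizerFunction is a placeholder, not shown here
            Identity:
              Header: Authorization
              ReauthorizeEvery: 300   # authorization cache TTL in seconds; 0 would disable caching
```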
1 answer · 0 votes · 1 view · AWS-User-2851010 · asked a month ago

Slow lambda responses when bigger load

Hi,

Currently I'm doing load testing using Gatling and I have an issue with my Lambdas. I have two Lambdas: one written in Java 8 and one written in Python.

In my test I make one request with 120 concurrent users, then ramp from 120 to 400 users over 1 minute, and then Gatling makes requests with 400 constant users per second for 2 minutes. There is weird behaviour in these Lambdas because the response times are very high, even though there is no logic in the Lambdas; they just return a String. Here are some screenshots of the Gatling reports: [Java Report][1] [Python Report][2]

I can add that I ran some tests with the Lambdas warmed up and saw the same behaviour. I'm using API Gateway to invoke my Lambdas. Do you have any idea why the response times are so high? Sometimes I'm also receiving an HTTP error that says: `i.n.h.s.SslHandshakeTimeoutException: handshake timed out after 10000ms`

Here is my Gatling simulation code:

```java
public class OneEndpointSimulation extends Simulation {
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("url") // Here is the root for all relative URLs
        .acceptHeader("text/html,application/xhtml+xml,application/json,application/xml;q=0.9,*/*;q=0.8") // Here are the common headers
        .acceptEncodingHeader("gzip, deflate")
        .acceptLanguageHeader("en-US,en;q=0.5")
        .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0");

    ScenarioBuilder scn = scenario("Scenario 1 Workload 2")
        .exec(http("Get all activities")
        .get("/dev")).pause(1);

    {
        setUp(scn.injectOpen(
            atOnceUsers(120),
            rampUsersPerSec(120).to(400).during(60),
            constantUsersPerSec(400).during(Duration.ofMinutes(1))
        ).protocols(httpProtocol));
    }
}
```

I also checked the logs and turned on X-Ray for API Gateway, but there was nothing there; the average latency for these services was 14 ms. What can be the reason for the slow Lambda responses?

[1]: https://i.stack.imgur.com/sCx9M.png
[2]: https://i.stack.imgur.com/SuHU0.png
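One hypothesis, not something the reports above confirm: when the ramp reaches 400 users per second, Lambda has to initialize hundreds of new execution environments at once, and a Java 8 function pays a multi-second JVM cold start per environment. A sketch of pre-warming with provisioned concurrency; the resource name, handler, and the value of 100 are placeholder assumptions.

```yaml
Resources:
  HelloFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Runtime: java8
      Handler: com.example.Handler::handleRequest   # placeholder handler
      CodeUri: .
      MemorySize: 1024
      AutoPublishAlias: live                         # publishes a version; provisioned concurrency attaches to it
      ProvisionedConcurrencyConfig:
        ProvisionedConcurrentExecutions: 100         # keep 100 environments initialized ahead of the ramp
      Events:
        LoadTestApi:
          Type: Api
          Properties:
            Path: /dev
            Method: GET
```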
0 answers · 0 votes · 5 views · Maryn · asked a month ago

Load testing serverless stack using Gatling

Hi,

I'm doing some load testing on my serverless app and I see that it is unable to handle higher loads. I'm using API Gateway, Lambda (Java 8) and DynamoDB. The code I'm using is the same as in this [link](https://github.com/Aleksandr-Filichkin/aws-lambda-runtimes-performance/tree/main/java-graalvm-lambda/src/lambda-java).

For the load testing I'm using Gatling. The load I configured is: a request with 120 users, then over one minute ramping from 120 to 400 users, and then for 2 minutes making requests with 400 constant users per second. The problem is that my stack is unable to handle 400 users per second. Is that normal? I thought serverless would scale nicely and work like a charm.

Here is my Gatling simulation code:

```java
public class OneEndpointSimulation extends Simulation {
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("url") // Here is the root for all relative URLs
        .acceptHeader("text/html,application/xhtml+xml,application/json,application/xml;q=0.9,*/*;q=0.8") // Here are the common headers
        .acceptEncodingHeader("gzip, deflate")
        .acceptLanguageHeader("en-US,en;q=0.5")
        .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0");

    ScenarioBuilder scn = scenario("Scenario 1 Workload 2")
        .exec(http("Get all activities")
        .get("/activitiesv2")).pause(1);

    {
        setUp(scn.injectOpen(
            atOnceUsers(120),
            rampUsersPerSec(120).to(400).during(60),
            constantUsersPerSec(400).during(Duration.ofMinutes(2))
        ).protocols(httpProtocol));
    }
}
```

Here are the Gatling report results: [Image link](https://ibb.co/68SYDsb)

I'm also receiving an error: **i.n.h.s.SslHandshakeTimeoutException: handshake timed out after 10000ms** (usually around 50 requests). It happens when Gatling starts to inject 400 constant users per second. I'm wondering what could be wrong. Is it too much for API Gateway, Lambda and DynamoDB?
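Alongside Lambda cold starts (see the provisioned-concurrency sketch under the previous question), DynamoDB provisioned throughput is another common ceiling at this request rate: a table with low provisioned RCU/WCU will throttle well below 400 requests per second regardless of how Lambda scales. A sketch of an on-demand table definition; the logical name and key schema are placeholders and may not match the linked project.

```yaml
Resources:
  ActivitiesTable:
    Type: 'AWS::DynamoDB::Table'
    Properties:
      BillingMode: PAY_PER_REQUEST      # on-demand capacity instead of provisioned RCU/WCU, so no per-second provisioned limit to throttle against
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
```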
2 answers · 0 votes · 3 views · Maryn · asked a month ago
1 answer · 0 votes · 10 views · AWS-User-9889113 · asked 2 months ago

Lambda Authorizer with API Key enabled on API Gateway

Hi,

I am trying to develop a Lambda authorizer that can authenticate both JWT tokens and API keys. I am currently using `Token` as the Lambda event payload type. The API key is encoded as a Basic token and put in the `Authorization` header. With API Gateway's API key feature enabled, I put the API key in the `UsageIdentifierKey` field of the response from the Lambda authorizer to API Gateway. However, it seems there is no way for API Gateway to automatically map the API key to its ID and pass both of them to my backend service. Is there a way to make this mapping happen at API Gateway?

An alternative I have explored is putting the API key in the `x-api-key` header, in which case API Gateway does the automatic mapping and passes both the API key and the API key ID to the backend service. In this case, I have to use `Request` as the Lambda event payload type so the Lambda authorizer can authenticate both JWT tokens and API keys. However, the problem is the setup of the `Identity Sources`: since the JWT token goes in the `Authorization` header and the API key in the `x-api-key` header, I have to declare both of them as `Identity Sources`. But according to the documentation, all the sources specified as `Identity Sources` must be non-empty (not null or nil). I also think this is related to caching of the authorization result. If I go this way, is there a way to make the `Identity Sources` behave as "at least one must be non-empty" instead of requiring all of them?

Thanks.
Jia
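On the second approach: as far as I understand, the rule that every identity source must be present is tied to authorization caching, so disabling the cache (TTL of 0) is one way to let requests through when only one of the two headers is set. A sketch in SAM; `AuthFunction`, the stage name, and the TTL value are illustrative assumptions, this trades away result caching, and the exact behaviour is worth verifying against the current API Gateway documentation.

```yaml
Resources:
  MyApi:
    Type: 'AWS::Serverless::Api'
    Properties:
      StageName: prod
      Auth:
        ApiKeyRequired: true                          # API Gateway still validates x-api-key against the usage plan
        DefaultAuthorizer: JwtOrApiKeyAuthorizer
        Authorizers:
          JwtOrApiKeyAuthorizer:
            FunctionArn: !GetAtt AuthFunction.Arn     # AuthFunction is a placeholder resource, not shown
            FunctionPayloadType: REQUEST              # full request event, so the code can inspect either header
            Identity:
              Headers:
                - Authorization
              ReauthorizeEvery: 0                     # disable result caching while two token types share one authorizer
```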
1 answer · 0 votes · 6 views · shijiakimi · asked 2 months ago

Which AWS service is best for a proxy http service (architecture strategy)

Hello all.

I need a small service that acts as an intermediate HTTP proxy between my clients (a mobile app) and a database server outside of AWS that receives HTTP requests. I can imagine a small Node.js function that accepts the clients' HTTP requests, routes them to the DB server, listens for the response coming back from the DB server and sends it back to the client.

The anticipated load is not much at all, maybe 1,000 such requests every day. There is no need for extraordinary security measures, load balancing, multi-region CDNs and such. I mostly need this solution to bypass a CORS limitation with the target DB service, which requires some middleware and can't serve my Angular app directly from my dev machine / mobile. There is also a secret token to be sent to the target DB server, which could perhaps be added by this intermediate proxy service instead of being included with the client request.

I can see multiple ways to implement this in AWS, differing in price and implementation, and I cannot decide which would serve me best with minimum costs (if any). Some examples I could think of:

* AWS API Gateway
* AWS API Gateway + Lambda
* Lambda (is it possible without API Gateway?)
* Node.js on an EC2 instance
* AWS Amplify (sounds like overkill for this use case?)
* Amazon Lightsail (looks expensive, though?)

Please advise on the most suitable service to use, in your mind, that will carry minimum costs and be relatively easy to configure/implement.

Thank you!
Mor
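Of the options listed, API Gateway on its own (no Lambda) may be the lightest fit: an HTTP API with an `HTTP_PROXY` integration can forward requests to the external DB endpoint, append the secret header server-side via parameter mapping, and handle CORS on the API itself. A sketch only; the resource names, route, target URL, and header name are placeholder assumptions, the secret is inlined purely for illustration, and the parameter-mapping syntax is worth double-checking against the HTTP API docs.

```yaml
Resources:
  ProxyApi:
    Type: 'AWS::ApiGatewayV2::Api'
    Properties:
      Name: db-proxy
      ProtocolType: HTTP

  DbIntegration:
    Type: 'AWS::ApiGatewayV2::Integration'
    Properties:
      ApiId: !Ref ProxyApi
      IntegrationType: HTTP_PROXY
      IntegrationMethod: POST
      IntegrationUri: 'https://db.example.com/query'        # placeholder external DB endpoint
      PayloadFormatVersion: '1.0'
      RequestParameters:
        'append:header.x-db-token': 'REPLACE_WITH_SECRET'   # secret stays server-side, never shipped to the client

  DbRoute:
    Type: 'AWS::ApiGatewayV2::Route'
    Properties:
      ApiId: !Ref ProxyApi
      RouteKey: 'POST /query'                               # placeholder route exposed to the mobile/Angular client
      Target: !Sub 'integrations/${DbIntegration}'

  DefaultStage:
    Type: 'AWS::ApiGatewayV2::Stage'
    Properties:
      ApiId: !Ref ProxyApi
      StageName: '$default'
      AutoDeploy: true
```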
2 answers · 0 votes · 17 views · mor · asked 2 months ago