
Questions tagged with Amazon API Gateway



How to get traffic from a public API Gateway to a private one?

I would like to use [private API Gateways](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html#api-gateway-api-endpoint-types-private) to organise Lambda functions into microservices, while keeping them invisible from the public internet. I would then like to expose specific calls using a public API Gateway. How do I get traffic from my public API Gateway to a private API Gateway?

**What I've looked at so far**

In the past, for **container-based resources**, I've used the following pattern:

*Internet -> API Gateway -> VPC Link -> VPC[NLB -> ECS]*

However, I can't find an equivalent bridge to get specific traffic to a private API Gateway, i.e.:

*Internet -> API Gateway -> ? -> Private Gateway -> Lambda*

My instinct tells me that a network-based solution should exist (equivalent to VPC Link), but so far the only suggestions I've had involve:

- Solving with compute (*Internet -> API Gateway -> VPC[Lambda proxy] -> Private Gateway -> Lambda*)
- Solving with load balancers (*Internet -> API Gateway -> VPC Link -> VPC[NLB -> ALB] -> Private Gateway -> Lambda*)

Both of these approaches strike me as using the wrong (and expensive!) tools for the job: compute where no computation is required, and two (!) load balancers where no load balancing is required (Lambda effectively load-balances itself).

**Alternative solutions**

Perhaps there's a better way (other than a private API Gateway) to organise collections of serverless resources into microservices. I'm attempting to use them to present a like-for-like interface with my container-based microservices, e.g. documentation (OpenAPI spec), authentication, traffic monitoring, etc. If using private API Gateways to wrap internal resources into microservices is actually a misuse, and there's a better way to do it, I'm happy to hear it.
1 answer · 0 votes · 20 views · asked 21 hours ago

StartCallAnalyticsJob : User is not authorized to access this resource

Hi everybody, I'd like to ask about AWS Transcribe Call Analytics. The API works fine with plain AWS Transcribe, but I also need sentiment analysis, so I'm trying to use Transcribe Call Analytics. Here is my code:

```
from __future__ import print_function
import time
import boto3

transcribe = boto3.client('transcribe', 'us-east-1')
job_name = "my-first-call-analytics-job"
job_uri = "PATH_S3_TO_WAV_WHO_HAD_WORD_FOR_AWS_TRANSCRIBE"
output_location = "PATH_TO_CREATED_FOLDER"
data_access_role = "arn:aws:s3:::MY_BUCKET_NAME_WHERE_WAV_FILES"

transcribe.start_call_analytics_job(
    CallAnalyticsJobName=job_name,
    Media={'MediaFileUri': job_uri},
    DataAccessRoleArn=data_access_role,
    OutputLocation=output_location,
    ChannelDefinitions=[
        {'ChannelId': 0, 'ParticipantRole': 'AGENT'},
        {'ChannelId': 1, 'ParticipantRole': 'CUSTOMER'}
    ]
)

while True:
    status = transcribe.get_call_analytics_job(CallAnalyticsJobName=job_name)
    if status['CallAnalyticsJob']['CallAnalyticsJobStatus'] in ['COMPLETED', 'FAILED']:
        break
    print("Not ready yet...")
    time.sleep(5)

print(status)
```

I have run `aws configure`, and I'm using an IAM user with AdministratorAccess.

> **botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the StartCallAnalyticsJob operation: User: MY_ARN_USER is not authorized to access this resource**

Any help please? Thank you very much!
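One detail worth noting in the snippet above: `DataAccessRoleArn` is set to an S3 bucket ARN (`arn:aws:s3:::...`), while the parameter name suggests an IAM role ARN (`arn:aws:iam::<account-id>:role/...`). A minimal sketch of a shape check (the helper and role name are hypothetical, not part of the AWS SDK):

```javascript
// Hypothetical helper: checks whether a string has the shape of an IAM role ARN,
// which is what DataAccessRoleArn-style parameters generally expect.
function looksLikeIamRoleArn(arn) {
  // arn:aws:iam::<12-digit account id>:role/<role name or path>
  return /^arn:aws:iam::\d{12}:role\/.+$/.test(arn);
}

console.log(looksLikeIamRoleArn("arn:aws:s3:::MY_BUCKET_NAME_WHERE_WAV_FILES")); // false
console.log(looksLikeIamRoleArn("arn:aws:iam::123456789012:role/TranscribeDataAccessRole")); // true
```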
1 answer · 0 votes · 20 views · asked 7 days ago

API GW HTTP API: Cross Account Access via IAM

Hi, I have an **API-GW HTTP API** (in account A) that uses **IAM** auth. I'm trying to invoke that API using an **IAM role** from another account (account B). I'm getting 403 responses when invoking the **API-GW** from account B, but I'm able to successfully invoke it from its own account (account A).

The IAM role in **account B** has the following **policy**:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "execute-api:Invoke",
            "Resource": [
                "arn:aws:execute-api:*:ACCOUNT-A-ID:*"
            ],
            "Effect": "Allow"
        }
    ]
}
```

I have a "CrossAccountInvocationRole" in **account A** with **policy**:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "execute-api:*",
            "Resource": "arn:aws:execute-api:*:ACCOUNT-A-ID:*/*/*/*"
        }
    ]
}
```

with **trusted entities**:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::ACCOUNT-B-ID:role/role-name"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

My **API-GW IAM role in account A** has the following **policy**:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::ACCOUNT-A-ID:role/CrossAccountInvocationRole",
            "Effect": "Allow"
        }
    ]
}
```

and **trusted entities**:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "apigateway.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

I tried attaching the above policy to the Lambda that is invoked by API-GW as well.

To test, I used the AWS CLI `sts assume-role` to get credentials for the IAM role, and then used those credentials in a Lambda in account B as well as in the Postman application. Both gave me 403 errors.

Questions:

1. Is it even possible to do cross-account invocation on an API-GW HTTP API with IAM?
2. If yes, what am I doing wrong?
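As a side note on the `Resource` strings above, the execute-api ARN has a fixed shape (`api-id/stage/METHOD/resource-path`), and an over- or under-scoped segment is a common source of 403s. A small sketch of building one (all names below are placeholders, not taken from the question):

```javascript
// Sketch: building an execute-api resource ARN for an invoke policy.
// Every value here is a placeholder.
function executeApiArn({ region, accountId, apiId, stage, method, path }) {
  // arn:aws:execute-api:<region>:<account>:<api-id>/<stage>/<METHOD>/<resource-path>
  return `arn:aws:execute-api:${region}:${accountId}:${apiId}/${stage}/${method}/${path}`;
}

const arn = executeApiArn({
  region: "us-east-1",
  accountId: "111122223333",
  apiId: "a1b2c3d4e5",
  stage: "$default",
  method: "GET",
  path: "pets",
});

console.log(arn); // arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/$default/GET/pets
```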
1 answer · 0 votes · 32 views · asked 10 days ago

EC2s Development and Production Environments, Isolation, VPN, API GW, Private and Public Endpoints with RDS and Data Sanitization

Hi everyone, I have the following idea for an infrastructure architecture in AWS, but I need some help clarifying several issues, and I believe the best answers will come from here. I am thinking about the following layout.

In production:

1. An EC2 instance with Apache that provides a service portal for web users
2. An RDS instance for the portal
3. Another EC2 instance with Apache and a business-logic PHP application as a CRM
4. The same RDS instance, also used by the CRM application

In development: the same layout, with one EC2 instance for web client services, one EC2 instance for developing the CRM, and an RDS instance for the data.

I thought about using two different VPCs for this deployment. I need data replication with sanitization from the production RDS to the development RDS (either via SQL procedures or some other method; I haven't decided yet, but I know I need it, since I have no desire to let my developers work with real client data).

- Both the production and development CRM EC2s expose Web APIs.
- Both the production and development service portals expose Web APIs.
- Both the production and development CRM and service portal are web accessible.

For the development environment, I want to enable access (Web and Web APIs) only through VPN; that is, I want my developers to connect with VPN clients to the development VPC and work against both EC2s on top of that connection. I also want them to be able to test all APIs, and I am thinking about setting up an API Gateway on that private endpoint.

For the production environment, I want to enable access (Web and Web APIs) to the CRM EC2 only through VPN; that is, I want my business units to connect with their VPN clients to a production VPN gateway and work against the CRM on top of that connection. I don't want to expose my CRM to the world.

For the production environment, I want to enable everyone on the internet to access the service portal (actually, not everyone: I want to geo-block access to the service portal, so I believe I need Amazon CDN services for that). Still, I want an API Gateway in front of the Web APIs exposed by this service portal EC2.

I've been reading about Amazon API Gateway (and API Gateway Cache) and its resource policies, VPC endpoints with their own security groups, and the Amazon Route 53 Resolver for the VPN connections. I've also been reading a lot about the Amazon virtual private gateway and private and public endpoints, but I still can't figure out which element comes into play where, and how the interactions between those elements should be designed.

I believe I also need AWS KMS for the keys, certificates and passwords, but I'm still trying to figure out the right approach for the above, so I'm leaving the KMS part for the end. Of course, security is at the top of my concerns, so I do believe all connections between the elements should be hardened; is using only ACLs the right way to go?

I would really appreciate the help.
1 answer · 0 votes · 35 views · asked 11 days ago

Passing a custom message from a WebSocket API Lambda authorizer to the client

My WebSocket API has a custom authorizer, and I want to transmit a message to the WS client (i.e. the browser) when validation fails. For example, if the IP address is invalid, I want to send an object to the client's "onerror" or "onclose" event handlers.

I have enabled route responses for my `$connect` route. The `$connect` route is integrated with a Lambda. I managed to transmit the error string from the authorizer to the `$connect` handler using the `context` field in the returned deny-policy. I then tried returning an object like the following from the handler, hoping it would reach the client's event handlers:

```javascript
export const handler = async function (event, context) {
  try {
    if (event.requestContext.authorizer.rejectCode === 'INVALID_IP') {
      return {
        statusCode: 403,
        body: JSON.stringify({ message: "Connection accept failed. Invalid IP" })
      };
    }
    return { statusCode: 200, body: JSON.stringify({ message: "success" }) };
  } catch (error) {
    return { statusCode: 500, body: JSON.stringify({ message: "Connection failed " }) };
  }
}
```

But none of these returned values reach the client. I get an event object similar to the following in the client's "onerror" and "onclose" events:

```javascript
{
  isTrusted: true
  bubbles: false
  cancelBubble: false
  cancelable: false
  code: 1006
  composed: false
  currentTarget: WebSocket
  defaultPrevented: false
  eventPhase: 0
  path: []
  reason: ""
  returnValue: true
  srcElement: WebSocket
  target: WebSocket
  timeStamp: 40664.800000000745
  type: "close" // or "error"
  wasClean: false
}
```

I feel like I'm missing something basic. Any help is appreciated.
0 answers · 0 votes · 13 views · asked 12 days ago

How to define API Gateway to Eventbridge integration?

I am building an API Gateway (v1) integration such that POSTs to a given endpoint route the body of the POST to my custom event bus in EventBridge. I have created the gateway endpoints, event bus, etc., but I am struggling with the definition of the integration.

I am doing this in Terraform, which basically wraps the AWS API for PutIntegration, and I cannot seem to figure out the correct format of the request-parameters map required by AWS. Since event bus payloads have a specific structure, I assume I need to build the map to construct that payload. I also saw a post about needing a custom X-Amz-Target header as well.

Are there any AWS-documented examples of how to build this integration and mapping? My attempts invariably lead to an error response along the lines of:

*Error updating API Gateway Integration: BadRequestException: Invalid mapping expression specified: Validation Result: warnings : [], errors : [Invalid mapping expression specified: tagmodernization-dev-us-west-2-eventbus-information-reporting, Invalid mapping expression specified: integration.request.body.Entries[0].EventBusName]*

Mapping variations I have tried include:

- integration.request.body.EventBusName
- integration.request.body.Entries[0].EventBusName
- EventBusName

I realize I can achieve a similar goal using VTL with the request templates capability, but I am still unclear on the output format of the mapping anyway.
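For reference, the PutEvents request that the integration ultimately has to produce wraps each event in an `Entries` array. A minimal sketch of that shape (the bus name and event fields below are placeholders, not values from the question):

```javascript
// Sketch of the request body shape expected by EventBridge PutEvents.
// All values are placeholders; Detail must be a JSON *string*, not an object.
const putEventsBody = {
  Entries: [
    {
      EventBusName: "my-custom-bus",
      Source: "my.api",
      DetailType: "OrderReceived",
      Detail: JSON.stringify({ orderId: "1234" }),
    },
  ],
};

console.log(Object.keys(putEventsBody.Entries[0]));
```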
1 answer · 0 votes · 34 views · asked 13 days ago

API Gateway as Reverse HTTP Proxy to SQS

I am trying to use AWS API Gateway as a reverse (forwarding) proxy to AWS SQS. I essentially want to send a REST request to the API Gateway, which then gets forwarded directly to the SQS REST API and returns the response. When I send a request to the gateway, I immediately get back:

```xml
<?xml version="1.0"?>
<ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
    <Error>
        <Type>Sender</Type>
        <Code>AccessDenied</Code>
        <Message>Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.</Message>
        <Detail/>
    </Error>
    <RequestId>51c903b2-4da3-5d5e-a3b8-589ee72167de</RequestId>
</ErrorResponse>
```

However, when I switch the request URL to SQS directly (`https://sqs.us-east-1.amazonaws.com`), the request succeeds. What am I missing?

```shell
curl --request POST 'https://my-api-gateway.com/sqs' \
  --header 'X-Amz-Date: <date>' \
  --header 'X-Amz-Security-Token: <token>' \
  --header 'Authorization: <auth>' \
  --header 'Amz-Sdk-Invocation-Id: <invocation>' \
  --header 'Amz-Sdk-Request: attempt=1; max=10' \
  --header 'User-Agent: aws-sdk-go-v2/1.16.5 os/macos lang/go/1.18.3 md/GOOS/darwin md/GOARCH/arm64 api/sqs/1.18.6' \
  --header 'Content-Length: 206' \
  --data-urlencode 'Action=ReceiveMessage' \
  --data-urlencode 'MaxNumberOfMessages=10' \
  --data-urlencode 'QueueUrl=<my-queue-url>' \
  --data-urlencode 'Version=2012-11-05' \
  --data-urlencode 'WaitTimeSeconds=20'
```

Configuration:

1. [Integrations](https://i.stack.imgur.com/geLqx.png)
2. [Routes](https://i.stack.imgur.com/Lk3QQ.png)
3. [Parameter Mappings](https://i.stack.imgur.com/LtCO4.png)
2 answers · 0 votes · 45 views · asked 13 days ago

How do I resolve the "Internal Failure for IAM authorizer" error when using the AWS IAM authorizer on GovCloud?

I have an app which uses a role with this policy to invoke an API Gateway:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "mobileanalytics:PutEvents",
                "cognito-sync:*",
                "cognito-identity:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "execute-api:Invoke"
            ],
            "Resource": [
                "arn:aws:execute-api:us-east-1:XXXXXXXXXX:aaaaaaaaaa/$default/POST/routename/${aws:PrincipalTag/username}"
            ]
        }
    ]
}
```

(In GovCloud, us-east-1 is changed to us-gov-west-1.)

This works fine in commercial. However, I get 500 internal server errors on GovCloud. After customizing and inspecting the logs, I found that it's an authorizer error with the message "internal failure for IAM authorizer". Searching this error on Google yielded 0 results... Now I'm scared.

In a panic, I tried opening up all permissions more broadly:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "mobileanalytics:PutEvents",
                "cognito-sync:*",
                "cognito-identity:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "execute-api:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
```

But this yielded the same results. However, when I tried hitting the same endpoint using complete admin permissions, my requests went through just fine.

What can I do to stop this behavior? Are IAM authorizers even supported on GovCloud? Do I need to add more permissions?
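Purely as an observation on ARN shape (not a confirmed fix for this error): GovCloud ARNs live in the `aws-us-gov` partition rather than `aws`, so a policy copied from a commercial account needs more than the region swapped. A small sketch (API id, account and route are placeholders):

```javascript
// Sketch: GovCloud ARNs use the aws-us-gov partition, so a policy copied from
// a commercial account needs the partition changed as well as the region.
// All identifiers below are placeholders.
function invokeArn(partition, region, accountId, apiId, route) {
  return `arn:${partition}:execute-api:${region}:${accountId}:${apiId}/$default/POST/${route}`;
}

const commercial = invokeArn("aws", "us-east-1", "111122223333", "a1b2c3d4e5", "routename");
const govcloud = invokeArn("aws-us-gov", "us-gov-west-1", "111122223333", "a1b2c3d4e5", "routename");

console.log(govcloud); // arn:aws-us-gov:execute-api:us-gov-west-1:111122223333:a1b2c3d4e5/$default/POST/routename
```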
0 answers · 0 votes · 18 views · asked 14 days ago

How do I change the expiration time of credential information retrieved from the Cognito ID Pool?

We are using aws-sdk to get temporary credential information from the Cognito identity pool, in order to send requests from our front-end web application to an API Gateway configured for authorization by the IAM authorizer. The credential expiration time is 1 hour by default; is there any way to change it?

```
const client = new CognitoIdentityClient({ region: process.env.VUE_APP_AWS_REGION });

const getIdCommandInput = {
  AccountId: process.env.VUE_APP_AWS_ACCOUNT_ID,
  IdentityPoolId: process.env.VUE_APP_COGNITO_AUTH_IDENTITY_POOL_ID,
  Logins: {}
};
const userPool = `cognito-idp.${process.env.VUE_APP_AWS_REGION}.amazonaws.com/${process.env.VUE_APP_COGNITO_AUTH_USER_POOL_ID}`;
getIdCommandInput.Logins[userPool] = store.state.authenticateResult.idToken;

const getIdCommand = new GetIdCommand(getIdCommandInput);
const identityIdResponse = await client.send(getIdCommand);

const getCredentialsForIdentityCommandInput = {
  IdentityId: identityIdResponse.IdentityId,
  Logins: {}
};
getCredentialsForIdentityCommandInput.Logins[userPool] = store.state.authenticateResult.idToken;

const getCredentialsForIdentityCommand = new GetCredentialsForIdentityCommand(getCredentialsForIdentityCommandInput);
const credentialsResponse = await client.send(getCredentialsForIdentityCommand);
```

When the credential information is retrieved with the above code, the Expiration property contains the date and time one hour later. I tried the following, but the 1-hour expiration did not change:

1. Changed the "maximum session time" of the IAM role set as the "authenticated role" in the Cognito identity pool to 2 hours.
2. Changed the "maximum session time" of the IAM roles set on groups in the Cognito user pool to 2 hours.
1 answer · 0 votes · 42 views · asked 15 days ago

IAM Policy - AWS Transfer Family

Hello,

This question may seem a bit long-winded, since I will be describing the relevant background information to hopefully avoid back and forth and ultimately arrive at a resolution. I appreciate your patience.

I have a Lambda function that authenticates users via Okta for SFTP file transfers, and the Lambda function is called through an API Gateway. My company has many different clients, so we chose this route for authentication rather than creating user accounts for them in AWS. Everything has been working fine during my testing process except for one key piece of functionality.

Since we have many customers, we don't want them to be able to interact with, or even see, another customer's folder within the dedicated S3 bucket. The directory structure has the main S3 bucket at the top level, and within that bucket resides each customer's folder. From there, they can create subfolders, upload files, etc.

I have created the IAM policy - an inline policy that is part of an assumed role - as described in this document: https://docs.aws.amazon.com/transfer/latest/userguide/users-policies.html. My IAM policy looks exactly like the one shown in the "Creating a session policy for an Amazon S3 bucket" section of the documentation. The "transfer" variables are defined in the Lambda function. Unfortunately, those "transfer" variables do not seem to be getting passed to the IAM policy.

When I look at the Transfer Family endpoint log, it shows access denied after successfully connecting (confidential information is redacted):

```
<user>.39e979320fffb078 CONNECTED SourceIP=<source_ip> User=<user> HomeDir=/<s3_bucket>/<customer_folder>/ Client="SSH-2.0-Cyberduck/8.3.3.37544 (Mac OS X/12.4) (x86_64)" Role=arn:aws:iam::<account_id>:role/TransferS3AccessRole Kex=diffie-hellman-group-exchange-sha256 Ciphers=aes128-ctr,aes128-ctr
<user>.39e979320fffb078 ERROR Message="Access denied"
```

However, if I change the "transfer" variables in the Lambda function to include the actual bucket name and update the IAM policy accordingly, everything works as expected; well, almost everything. With this change, I am not able to restrict access, and thus any customer could interact with any other customer's folders and files. Having the ability to restrict access by using the "transfer" variables is an integral piece of functionality.

I've searched around the internet - including this forum - and cannot seem to find the answer to this problem. Likely I have overlooked something, and hopefully it is an easy fix. Looking forward to getting this resolved. Thank you very much in advance!
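For context, a custom identity provider Lambda for Transfer Family hands back the role, an optional session policy, and the home directory, and the `${transfer:...}` variables are only resolved when they appear in that session policy (the `Policy` field, passed as a JSON string), not in a managed policy attached to the role. A sketch of the response shape, with placeholder values (a cut-down version of the session policy from the AWS docs):

```javascript
// Sketch: response shape from a custom identity provider Lambda for
// AWS Transfer Family. All concrete values are placeholders. The session
// policy must be serialized as a JSON *string* in the Policy field; that is
// where ${transfer:...} variables get substituted per session.
const sessionPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "AllowListingOfUserFolder",
      Effect: "Allow",
      Action: ["s3:ListBucket"],
      Resource: ["arn:aws:s3:::${transfer:HomeBucket}"],
      Condition: {
        StringLike: {
          "s3:prefix": ["${transfer:HomeFolder}/*", "${transfer:HomeFolder}"],
        },
      },
    },
  ],
};

const lambdaResponse = {
  Role: "arn:aws:iam::111122223333:role/TransferS3AccessRole", // placeholder
  Policy: JSON.stringify(sessionPolicy), // note: a string, not an object
  HomeDirectory: "/my-bucket/customer-folder", // placeholder
};

console.log(typeof lambdaResponse.Policy); // string
```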
5 answers · 0 votes · 55 views · asked 18 days ago

Using $defs in API Gateway Models

I am working on an API Gateway API using the Serverless Framework. The project contains a JSON schema which is used to create a model in API Gateway. Recently, I started to use the `$defs` element in the schema (https://json-schema.org/understanding-json-schema/structuring.html#defs), which is a way to re-use definitions within the same schema (pasting my schema below). However, now my deployments are failing:

> Error:
> CREATE_FAILED: ApiGatewayMethodV1PreviewsPostApplicationJsonModel (AWS::ApiGateway::Model)
> Resource handler returned message: "Invalid model specified: Validation Result: warnings : [], errors : [Invalid model schema specified. Unsupported keyword(s): ["$defs"], Model reference must be in canonical form, Model reference must be in canonical form] (Service: ApiGateway, Status Code: 400, Request ID: 7048dc90-7bb4-4259-bed8-50d7a93963d9, Extended Request ID: null)"

This probably means that `$defs` is not supported in JSON Schema draft 4? Is there any other way to avoid duplication in the schema file? Here is my schema (TypeScript, but you get the idea):

```
export const inputSchema = {
  type: 'object',
  properties: {
    body: {
      type: 'object',
      oneOf: [
        {
          properties: {
            input: { type: 'string' },
            options: { "$ref": "#/$defs/options" },
          },
          required: ['input'],
        },
        {
          properties: {
            data: { type: 'string' },
            options: { "$ref": "#/$defs/options" },
          },
          required: ['data'],
        },
      ],
    },
  },
  $defs: {
    options: {
      type: 'object',
      properties: {
        camera: { type: 'string' },
        auto_center: { type: 'boolean' },
        view_all: { type: 'boolean' },
      },
    },
  },
};
```
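For what it's worth, `$defs` was introduced in JSON Schema 2019-09; draft 4 (which API Gateway models are validated against) spells the same idea `definitions`, referenced as `#/definitions/...`. A sketch of mechanically rewriting the refs, run on a cut-down version of the schema above (whether API Gateway then accepts internal `#/definitions/...` refs for your model is worth verifying against a real deployment):

```javascript
// Sketch: rewrite a schema from the $defs convention (2019-09) to the
// draft-4 "definitions" convention that API Gateway models expect.
function defsToDefinitions(node) {
  if (Array.isArray(node)) return node.map(defsToDefinitions);
  if (node === null || typeof node !== "object") return node;
  const out = {};
  for (const [key, value] of Object.entries(node)) {
    if (key === "$defs") {
      out.definitions = defsToDefinitions(value); // rename the container
    } else if (key === "$ref" && typeof value === "string") {
      out.$ref = value.replace("#/$defs/", "#/definitions/"); // repoint refs
    } else {
      out[key] = defsToDefinitions(value);
    }
  }
  return out;
}

// Cut-down version of the schema in the question.
const rewritten = defsToDefinitions({
  type: "object",
  properties: { options: { $ref: "#/$defs/options" } },
  $defs: { options: { type: "object" } },
});

console.log(JSON.stringify(rewritten));
```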
0 answers · 0 votes · 15 views · asked 21 days ago

API Gateway WSS Endpoint not found

I've created a WSS chat app using the sample that comes with the AWS dotnet Lambda templates. My web front end can connect OK and it creates a record in DynamoDB, but when I try to broadcast a message to all connections I get the following error:

`Name or service not known (execute-api.ap-southeast-2.amazonaws.com:443)`

I'm using the following code to set the endpoint:

```
var protocol = "https";
//var protocol = "wss";
var domainName = request.RequestContext.DomainName;
//var domainName = "ID HERE.execute-api.ap-southeast-2.amazonaws.com";
var stage = request.RequestContext.Stage;
// var stage = "";
//var stage = "test";
//var stage = "test/@connections";
var endpoint = $"{protocol}://{domainName}/{stage}";
```

and it logs the following:

```
API Gateway management endpoint: https://ID HERE.execute-api.ap-southeast-2.amazonaws.com/test
```

I've tried all the combinations and a custom domain. I'm thinking that ap-southeast-2 does not support wss? Or...?? I've been stuck on this for a while now and am about ready to give up. Anyone got any ideas?

Update: here's the code for sending the message - it's just an updated version of the sample. From the startup:

```
public Functions()
{
    DDBClient = new AmazonDynamoDBClient();

    // Grab the name of the DynamoDB table from the environment variable setup in the
    // CloudFormation template serverless.template
    if (Environment.GetEnvironmentVariable(TABLE_NAME_ENV) == null)
    {
        throw new ArgumentException($"Missing required environment variable {TABLE_NAME_ENV}");
    }
    ConnectionMappingTable = Environment.GetEnvironmentVariable(TABLE_NAME_ENV) ?? "";

    this.ApiGatewayManagementApiClientFactory = (Func<string, AmazonApiGatewayManagementApiClient>)((endpoint) =>
    {
        return new AmazonApiGatewayManagementApiClient(new AmazonApiGatewayManagementApiConfig
        {
            ServiceURL = endpoint,
            RegionEndpoint = RegionEndpoint.APSoutheast2, // without this I get Credential errors
            LogResponse = true,  // dont see anything extra with these
            LogMetrics = true,
            DisableLogging = false
        });
    });
}
```

And the SendMessageFunction:

```
try
{
    // Construct the API Gateway endpoint that incoming message will be broadcasted to.
    var protocol = "https";
    //var protocol = "wss";
    var domainName = request.RequestContext.DomainName;
    //var domainName = "?????.execute-api.ap-southeast-2.amazonaws.com";
    var stage = request.RequestContext.Stage;
    // var stage = "";
    //var stage = "test";
    //var stage = "test/@connections";
    var endpoint = $"{protocol}://{domainName}/{stage}";
    context.Logger.LogInformation($"API Gateway management endpoint: {endpoint}");

    JObject message = JObject.Parse(request.Body);
    context.Logger.LogInformation(request.Body);

    if (!GetRecipient(message, context, out WSMessageRecipient? recipient))
    {
        context.Logger.LogError($"Invalid or empty WSMessageRecipient");
        return new APIGatewayProxyResponse
        {
            StatusCode = (int)HttpStatusCode.BadRequest,
            Body = "Nothing to do or invalid request"
        };
    }

    if (!GetData(message, context, out string? data))
    {
        context.Logger.LogError($"Invalid or empty WSSendMessage");
        return new APIGatewayProxyResponse
        {
            StatusCode = (int)HttpStatusCode.BadRequest,
            Body = "Nothing to do or invalid request"
        };
    }

    var stream = new MemoryStream(UTF8Encoding.UTF8.GetBytes(data!));
    if (stream.Length == 0)
    {
        context.Logger.LogError($"Empty Stream");
        return new APIGatewayProxyResponse
        {
            StatusCode = (int)HttpStatusCode.BadRequest,
            Body = "Empty data stream"
        };
    }

    // List all of the current connections. In a more advanced use case the table could
    // be used to grab a group of connection ids for a chat group.
    ScanResponse scanResponse = await GetConnectionItems(recipient);

    // Construct the IAmazonApiGatewayManagementApi which will be used to send the message to.
    var apiClient = ApiGatewayManagementApiClientFactory(endpoint);

    context.Logger.LogInformation($"Table scan of {ConnectionMappingTable} got {scanResponse.Items.Count} records.");

    // Loop through all of the connections and broadcast the message out to the connections.
    var count = 0;
    foreach (var item in scanResponse.Items)
    {
        var connectionId = item[ConnectionIdField].S;
        context.Logger.LogInformation($"Posting to connection {count}: {connectionId}");

        var postConnectionRequest = new PostToConnectionRequest
        {
            ConnectionId = connectionId,
            Data = stream
        };

        try
        {
            stream.Position = 0;
            await apiClient.PostToConnectionAsync(postConnectionRequest);
            context.Logger.LogInformation($"Posted to connection {count}: {connectionId}");
            count++;
        }
        catch (AmazonServiceException e)
        {
            // API Gateway returns a status of 410 GONE when the connection is no
            // longer available. If this happens, delete the identifier
            // from our DynamoDB table.
            if (e.StatusCode == HttpStatusCode.Gone)
            {
                context.Logger.LogInformation($"Deleting gone connection: {connectionId}");
                var ddbDeleteRequest = new DeleteItemRequest
                {
                    TableName = ConnectionMappingTable,
                    Key = new Dictionary<string, AttributeValue>
                    {
                        {ConnectionIdField, new AttributeValue {S = connectionId}}
                    }
                };
                await DDBClient.DeleteItemAsync(ddbDeleteRequest);
            }
            else
            {
                context.Logger.LogError($"Error posting message to {connectionId}: {e.Message}");
                context.Logger.LogInformation(e.StackTrace);
            }
        }
        catch (Exception ex)
        {
            context.Logger.LogError($"Bugger, something fecked up: {ex.Message}");
            context.Logger.LogInformation(ex.StackTrace);
        }
    }

    return new APIGatewayProxyResponse
    {
        StatusCode = (int)HttpStatusCode.OK,
        Body = "Data sent to " + count + " connection" + (count == 1 ? "" : "s")
    };
}
catch (Exception e)
{
    context.Logger.LogInformation("Error Sending Message: " + e.Message);
    context.Logger.LogInformation(e.StackTrace);
    return new APIGatewayProxyResponse
    {
        StatusCode = (int)HttpStatusCode.InternalServerError,
        Body = $"Failed to send message: {e.Message}"
    };
}
```
2 answers · 0 votes · 37 views · asked 22 days ago

Unable to connect to an AWS service (API Gateway) from an IoT Core device (inside a Docker container)

I have created a component using Greengrass V2 and am running multiple containers inside this component. Now, my requirement is to call API Gateway and fetch some data into one of the Docker containers running on the local device inside the component. I am using `com.amazonaws.auth.DefaultAWSCredentialsProviderChain` for fetching the credentials, but I'm getting:

```
05:45:06.476 INFO - Calling API Gateway with request params com.amazon.spiderIoT.stowWorkcell.entities.ApiGatewayRequest@e958e637
Exception in thread "main" com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [
    EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)),
    SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey),
    WebIdentityTokenCredentialsProvider: To use assume role profiles the aws-java-sdk-sts module must be on the class path.,
    com.amazonaws.auth.profile.ProfileCredentialsProvider@7c36db44: profile file cannot be null,
    com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@6c008c24: Failed to connect to service endpoint: ]
    at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:136)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1269)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:845)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:794)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:781)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:755)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:715)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:697)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:561)
    at com.amazon.spiderIoT.stowWorkcell.client.ApiGatewayClient.execute(ApiGatewayClient.java:134)
    at com.amazon.spiderIoT.stowWorkcell.client.ApiGatewayClient.execute(ApiGatewayClient.java:79)
    at com.amazon.spiderIoT.stowWorkcell.StowWorkcellService.startMQTTSubscriberForSortLocationEvents(StowWorkcellService.java:120)
    at com.amazon.spiderIoT.stowWorkcell.StowWorkcellService.onInitialize(StowWorkcellService.java:65)
    at com.amazon.spideriot.sdk.service.Service.run(Service.java:79)
    at com.amazon.spiderIoT.stowWorkcell.StowWorkcellServiceDriver.main(StowWorkcellServiceDriver.java:26)
```

My dependencyConfiguration looks like:

```
{
    "aws.greengrass.TokenExchangeService": {
        "componentVersion": "2.0.3",
        "DependencyType": "HARD"
    },
    "aws.greengrass.DockerApplicationManager": {
        "componentVersion": "2.0.4"
    },
    "aws.greengrass.Cloudwatch": {
        "componentVersion": "3.0.0"
    },
    "aws.greengrass.Nucleus": {
        "componentVersion": "2.4.0",
        "configurationUpdate": {
            "merge": "{\"logging\":{\"level\":\"INFO\"}, \"iotRoleAlias\": \"GreengrassV2TestCoreTokenExchangeRoleAlias\"}"
        }
    },
    "aws.greengrass.LogManager": {
        "componentVersion": "2.2.3",
        "configurationUpdate": {
            "merge": "{\"logsUploaderConfiguration\":{\"systemLogsConfiguration\": {\"uploadToCloudWatch\": \"true\",\"minimumLogLevel\": \"INFO\",\"diskSpaceLimit\": \"10\",\"diskSpaceLimitUnit\": \"MB\",\"deleteLogFileAfterCloudUpload\": \"false\"},\"componentLogsConfigurationMap\": {\"LedService\": {\"minimumLogLevel\": \"INFO\",\"diskSpaceLimit\": \"20\",\"diskSpaceLimitUnit\": \"MB\",\"deleteLogFileAfterCloudUpload\": \"false\"}}},\"periodicUploadIntervalSec\": \"5\"}"
        }
    }
}
```

Java code for using AWS credentials:

```
public class APIGatewayModule extends AbstractModule {

    @Provides
    @Singleton
    public AWSCredentialsProvider getAWSCredentialProvider() {
        return new DefaultAWSCredentialsProviderChain();
    }

    @Provides
    @Singleton
    public ApiGatewayClient getApiGatewayClient(final AWSCredentialsProvider awsCredentialsProvider) {
        System.out.println("Getting client configurations");
        final com.amazonaws.ClientConfiguration clientConfiguration = new com.amazonaws.ClientConfiguration();
        System.out.println("Got client configurations" + clientConfiguration);
        return new ApiGatewayClient(
            clientConfiguration,
            Region.getRegion(Regions.fromName("us-east-1")),
            awsCredentialsProvider,
            AmazonHttpClient.builder().clientConfiguration(clientConfiguration).build());
    }
}
```

I have been following this doc: https://docs.aws.amazon.com/greengrass/v2/developerguide/device-service-role.html

My question: everywhere in this document it mentions the "AWS IoT Core credentials provider" - which credentials provider should we use? Also, as mentioned in the doc, we should use `--provision true` "when you run the AWS IoT Greengrass Core software, you can choose to provision the AWS resources that the core device requires". But we started without this flag; how can this be tackled, and is there any other document that provides a reference for using the credentials provider and calling API Gateway from the AWS SDK for Java?

On SSHing into the Docker container, I could see that this variable is set:

```
AWS_CONTAINER_CREDENTIALS_FULL_URI=http://localhost:38135/2016-11-01/credentialprovider/
```

But I am unable to curl this URL from inside Docker; is this how it is supposed to work?
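On that last point: `AWS_CONTAINER_CREDENTIALS_FULL_URI` is the endpoint the default credential chains HTTP-GET to fetch container credentials, and as set here it points at `localhost` on the host. From inside a container with its own network namespace, `localhost` is the container's loopback, not the host's, so the chain can't reach it unless the container shares the host's network (e.g. Docker's `--network host`). A small sketch of inspecting the variable (value taken from the question; the interpretation is an assumption to verify):

```javascript
// Sketch: the default AWS credential chains poll this URI for container
// credentials. The value from the question resolves to localhost, which a
// container with its own network namespace cannot reach on the host.
const uri = new URL("http://localhost:38135/2016-11-01/credentialprovider/");

console.log(uri.hostname); // localhost
console.log(uri.port); // 38135
```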
3
answers
0
votes
47
views
asked a month ago

AWS-SDK-Javascript v3 API call to DynamoDB is returning undefined and skipping execution of a console.log command

The goal of this code snippet is retrieving all connection ids of a chat room, to reply to a chat sendmessage command in the API Gateway WebSocket. I have used PutCommand and GetCommand a lot, but today I'm using the QueryCommand for the first time.

Part 1, the DynamoDB call:
```
export async function ddbGetAllRoomConnections(room) {
  const params = {
    "TableName": "MessageTable",
    "KeyConditionExpression": "#DDB_room = :pkey",
    "ExpressionAttributeValues": { ":pkey": "" },
    "ExpressionAttributeNames": { "#DDB_room": "room" },
    "ScanIndexForward": true,
    "Limit": 100
  };
  console.log("ddbGetAllRoomConnections-1:", params);
  const data = await ddbClient.send(new QueryCommand(params));
  console.log("ddbGetAllRoomConnections-2:", data);
  return data;
}
```
The calling part:
```
const normalConnections = ddbGetAllRoomConnections(connData.lastroom);
if (typeof normalConnections.Items === 'undefined' || normalConnections.Items.length <= 0) {
  throw new Error("Other Connections not found");
}
```
The following logfile entries occur in sequence:
```
logfile puzlle message1:
ddbGetAllRoomConnections-1: { TableName: 'MessageTable', KeyConditionExpression: '#DDB_room = :pkey', ExpressionAttributeValues: { ':pkey': '' }, ExpressionAttributeNames: { '#DDB_room': 'room' }, ScanIndexForward: true, Limit: 100 }

logfile puzlle message2:
ERROR Error: Other Connections not found at Runtime.handler (file:///var/task/chat-messages.js:49:21) at processTicksAndRejections (node:internal/process/task_queues:96:5) { statusCode: 500 }

logfile puzlle message3:
END RequestId:
```
What irritates me is the following sequence of occurrences in the logfile:
1. ddbGetAllRoomConnections-1: correctly appears before the ddbClient.send command
2. After the ddbClient.send command there is no ddbGetAllRoomConnections-2 log entry
3. The next log entry, after the call of ddbGetAllRoomConnections, shows the value undefined.

I also tried PartiQL via ExecuteCommand; then, debugging with Dynobase, I retrieved the code for the params section in the current setting.
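A likely cause, sketched outside the post: `ddbGetAllRoomConnections` is `async`, so calling it without `await` yields a pending Promise, and `.Items` on a Promise is always `undefined`; the handler then throws before the query ever resolves, which is why the second log line never appears. A minimal reproduction with a stand-in for the DynamoDB call:

```javascript
// Minimal reproduction of the symptom, with no AWS calls: queryRoom stands in
// for the async ddbGetAllRoomConnections.
async function queryRoom(room) {
  return { Items: [{ connectionId: "abc123" }] };
}

async function main() {
  const missing = queryRoom("lobby");      // no await: `missing` is a Promise
  console.log(missing.Items);              // undefined, the same symptom as the post
  const data = await queryRoom("lobby");   // awaited: the resolved result
  console.log(data.Items.length);          // 1
}

main();
```

Adding `await` in front of the `ddbGetAllRoomConnections(connData.lastroom)` call site would make the `.Items` check see the resolved data.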
1 answer · 0 votes · 25 views · asked a month ago

Not able to get the complete response of a web page when using API Gateway

Hi All, I have created an API Gateway with mTLS enabled and with integration type as VPC Link to an NLB. The resources and methods are as below:

```
/
  ANY
  GET
  OPTIONS
/{proxy+}
  ANY
  OPTIONS
```

URL Path Parameters, URL Query String Parameters and HTTP Headers are empty. In Method Execution, Authorization and Request Validator are set to None, and API Key Required is false. In Method Response, HTTP Status is Proxy, HTTP Status is 200, the Response Header for 200 is Access-Control-Allow-Origin, and the Response Body for 200 is No models. A wildcard custom domain name is created and the cert is imported. The NLB is listening on TCP port 443 and forwarding traffic to EC2 on a particular port, where a reverse proxy is running and forwarding traffic to backend servers based on host headers.

The flow is: after entering the website URL in the browser (https://xxx.xx.abc.io), it goes to API Gateway (as a CNAME record is created with API Gateway's domain name). In API Gateway, I mentioned the same website URL (https://xxx.xx.abc.io) in Endpoint URL (to later match the host header in the reverse proxy so that it forwards the traffic to the server where the application is running), as traffic goes to the NLB anyway (irrespective of what we mention in Endpoint URL), which should forward traffic to the reverse proxy.

Below are the API Gateway logs:

```
(3cad-6a6c-2ff1-4dda-12345) Starting execution for request: 3ba352b-....
(3cad-6a6c-2ff1-4dda-12345) HTTP Method: GET, Resource Path: /
(3cad-6a6c-2ff1-4dda-12345) Method request path: {}
(3cad-6a6c-2ff1-4dda-12345) Method request query string: {}
(3cad-6a6c-2ff1-4dda-12345) Method request headers: {sec-fetch-mode=navigate, sec-fetch-site=none, accept-language=en-GB,en;q=0.9, User-Agent=Mozilla/5.0 (X 10_15_7) WebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 S/537.36, Host=https://xxx.xx.abc.io, sec-fetch-user=?1, accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9, sec-ch-ua=" Not A;Brand";v="99", "Chromium";v="101", "Google Chrome";v="101", sec-ch-ua-mobile=?0, sec-ch-ua-platform="S", upgrade-insecure-requests=1, X-Forwarded-For=<my_ip>, accept-encoding=gzip, deflate, br, sec-fetch-dest=document}
(3cad-6a6c-2ff1-4dda-12345) Method request body before transformations:
(3cad-6a6c-2ff1-4dda-12345) Endpoint request URI: https://xxx.xx.abc.io
(3cad-6a6c-2ff1-4dda-12345) Endpoint request headers: {sec-fetch-mode=navigate, sec-fetch-site=none, x-amzn-apigateway-api-id=2c2zsc3, accept-language=en-GB,en;q=0.9, User-Agent=Mozilla/5.0 (X 10_15_7) WebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 S/537.36, Host=https://xxx.xx.abc.io, sec-fetch-user=?1, accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9, sec-ch-ua=" Not A;Brand";v="99", "Chromium";v="101", "Google Chrome";v="101", sec-ch-ua-mobile=?0, sec-ch-ua-platform="macOS", upgrade-insecure-requests=1, X-Forwarded-For=<my_ip>, accept-encoding=gzip, deflate, br, sec-fetch-dest=document}
(3cad-6a6c-2ff1-4dda-12345) Endpoint request body after transformations:
(3cad-6a6c-2ff1-4dda-12345) Sending request to https://xxx.xx.abc.io
(3cad-6a6c-2ff1-4dda-12345) Received response. Status: 200, Integration latency: 46 ms
(3cad-6a6c-2ff1-4dda-12345) Endpoint response headers: {access-control-allow-methods=GET,PUT,POST,DELETE, access-control-allow-headers=x-http-method-override,x-requested-with,content-type,accept, Content-Type=text/html; charset=utf-8, Content-Length=303389, ETag=W/"41d-TlJhUL/tuywaSgaxKgTtn8", Date=Wed, 01 Dec 2021 13:15:15 GMT, X-Content-Type-Options=nosniff, Strict-Transport-Security=max-age=300;includeSubDomains;preload;always;, X-Frame-Options=deny}
(3cad-6a6c-2ff1-4dda-12345) Endpoint response body before transformations: <!DOCTYPE html><html><head><meta charSet="utf-8"/><meta http-equiv="x-ua-compatible" content="ie=edge"/><meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"/><meta http-equiv="Content-Security-Policy" content="unsafe-inline"/><style> @keyframes spin{ 0%{transform:rotate(0)} 100%{transform:rotate(360deg)} } #___gatsby>div:empty{ position:fixed;
(3cad-6a6c-2ff1-4dda-12345) Method response body after transformations: <!DOCTYPE html><html><head><meta charSet="utf-8"/><meta http-equiv="x-ua-compatible" content="ie=edge"/><meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"/><meta http-equiv="Content-Security-Policy" content="unsafe-inline"/><style> @keyframes spin{ 0%{transform:rotate(0)} 100%{transform:rotate(360deg)} } #___gatsby>div:empty{
(3cad-6a6c-2ff1-4dda-12345) Method response headers: {access-control-allow-methods=GET,PUT,POST,DELETE, access-control-allow-headers=x-http-method-override,x-requested-with,content-type,accept, Content-Type=text/html; charset=utf-8, Content-Length=303389, ETag=W/"41d-TlJhUL/tuywaSgaxKgTtn8", Date=Wed, 01 Dec 2021 13:15:15 GMT, X-Content-Type-Options=nosniff, Strict-Transport-Security=max-age=300;includeSubDomains;preload;always;, X-Frame-Options=deny}
(3cad-6a6c-2ff1-4dda-12345) Successfully completed execution
(3cad-6a6c-2ff1-4dda-12345) Method completed with status: 200
```

The issue is the website just tries to load but is totally blank (no images, nothing on the page). On the `dev-tools` console tab, it shows the below:

```
Refused to execute script from '<URL>' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled. https://xxx.xx.abc.io/:1
Refused to execute script from 'https://xxx.xx.abc.io/component---src-pages-index-js-c14b2e4d69274.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
Refused to execute script from 'https://xxx.xx.abc.io/3-c31d8ae5a9706.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
```

If I directly hit NLB DNS:app_port, the page loads properly. Can anyone suggest where the problem is? Thanks,
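One possible cause, offered as a hypothesis rather than a diagnosis from the logs: with a greedy `/{proxy+}` resource and an HTTP proxy integration, the Endpoint URL normally needs the `{proxy}` placeholder (e.g. `https://xxx.xx.abc.io/{proxy}`). Without it, every request, including the requests for the `.js` assets, is forwarded to the root path, and the backend answers each of them with the HTML index page, which would match the MIME-type errors in the console. The substitution can be sketched as:

```javascript
// Sketch of how API Gateway substitutes the greedy path variable into the
// integration Endpoint URL (names are illustrative, not the actual internals).
function buildEndpointUrl(endpointTemplate, requestPath) {
  // requestPath is what /{proxy+} matched, e.g. "app.js"
  return endpointTemplate.replace("{proxy}", requestPath);
}

// Without the placeholder, every path collapses to the root document:
console.log(buildEndpointUrl("https://xxx.xx.abc.io", "app.js"));
// With it, the original request path is preserved:
console.log(buildEndpointUrl("https://xxx.xx.abc.io/{proxy}", "app.js"));
```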
2 answers · 0 votes · 84 views · asked a month ago

How to shorten the API response time in an API GW + Lambda solution

We are building a REST API using API GW + Lambda with NodeJS. In a nutshell, the API extracts a big payload (up to 1MB) from the request, stores it into S3 and then returns a response. We'd like to shorten the API response time, but find this quite difficult using Lambda. What we have thought of:

1. Make storing the payload into S3 async via SNS/SQS. However, the payload is too big to put into an SNS/SQS message directly, so we still have to put it somewhere first (e.g. S3) and include a reference in the SNS/SQS message. Therefore making it async does not seem helpful here.
2. We also tried to return the response to API GW before storing the object into S3 completes, and hoped that the Lambda would continue running until the S3 call completes. However, the Lambda stops execution immediately after the return. Any pending operations get "frozen" and only continue when the Lambda is triggered by a new incoming request. This is what the AWS documentation describes. Changing the Lambda context `callbackWaitsForEmptyEventLoop` to `true` or `false` does not help either.

Any ideas are appreciated.

Update on 1 Jun 2022: Thanks all for your answers. I've uploaded two X-Ray traces for the first invocation and second invocation on the Lambda with provisioned concurrency. I tried to put the same payload (1.3MB) into ElastiCache, S3 and DynamoDB (after slicing) at the same time.

- https://ibb.co/2jZnhpv
- https://ibb.co/NnZN1dW

As you can see, putting the payload into S3 isn't that slow. I haven't tried the other approaches like writing into Firehose yet, but I doubt it will be significantly faster. I reckon the Lambda extension approach is more reasonable if it enables the Lambda to continue running after returning the response to API GW.
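The frozen-after-return behavior described in point 2 can be illustrated with a stand-in for the S3 call (the names below are placeholders, not the poster's code): any work started but not awaited before the handler returns is suspended by the runtime, so the upload has to be awaited inside the handler.

```javascript
// Sketch: `s3Put` stands in for S3Client.send(new PutObjectCommand(...)).
// Work started but not awaited before `return` is frozen by Lambda.
async function handler(event, s3Put) {
  const pending = s3Put(event.body); // start the upload
  await pending;                     // without this, the put may never finish
  return { statusCode: 200 };
}

// Usage with a stand-in uploader:
let stored = null;
handler({ body: "payload" }, async (b) => { stored = b; })
  .then((res) => console.log(res.statusCode, stored));
```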
3 answers · 0 votes · 57 views · asked a month ago

Unknown reason for API Gateway WebSocket LimitExceededException

We have several API Gateway WebSocket APIs, all regional. As their usage has gone up, the most used one has started getting LimitExceededException when we send data from Lambda, through the socket, to the connected browsers. We are using the JavaScript SDK's [postToConnection](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ApiGatewayManagementApi.html#postToConnection-property) function. The usual behavior is that we will not get this error at all, then we will get several hundred spread out over 2-4 minutes. The only documentation we've been able to find that may be related to this limit is the [account-level quota](https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-account-level-limits-table) of 10,000 per second (and we're not sure if that's the actual limit we should be looking at). If that is the limit, the problem is that we are nowhere near it. For a single deployed API we're hitting a maximum of 3000 messages sent through the socket **per minute**, with an overall account total of about 5000 per minute. So nowhere near the 10,000 per second. The only thing we think may be causing it is that we have a "large" number of messages going through the socket relative to the number of connected clients. For the API that's maxing at about 3000 messages per minute, we usually have 2-8 connected clients. Our only guess is that there may be a lower limit on the number of messages per second we can send to a specific socket connection; however, we cannot find any docs on this. Thanks for any help anyone can provide
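If the errors are indeed throttling at some per-connection or per-account rate, a common mitigation (an assumption on my part, not something the post confirms) is to retry the send with exponential backoff. Here `send` stands in for the `postToConnection(...).promise()` call:

```javascript
// Generic retry-with-backoff sketch; `send` stands in for
// apiGw.postToConnection({ ConnectionId, Data }).promise().
async function withBackoff(send, attempts = 4, baseMs = 100) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await send();
    } catch (err) {
      // Re-throw anything that isn't throttling, or the final failure.
      if (err.name !== "LimitExceededException" || i === attempts - 1) throw err;
      // Wait baseMs, 2*baseMs, 4*baseMs, ... before the next attempt.
      await new Promise((r) => setTimeout(r, baseMs * 2 ** i));
    }
  }
}
```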
1 answer · 0 votes · 38 views · asked 2 months ago

AWS SAM "No response from invoke container for" wrong function name

I've debugged my application and identified a problem. I have 2 REST API Gateways, and it seems that since they both bind on the same endpoint, the first one receives the call that the second one should handle. Here's my template.yaml:

```yaml
Resources:
  mysampleapi1:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: packages/mysampleapi1/dist/index.handler
      Runtime: nodejs14.x
      CodeUri: .
      Description: ''
      MemorySize: 1024
      Timeout: 30
      Role: >-
        arn:aws:iam:: [PRIVATE]
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /users
            Method: ANY
      Environment:
        Variables:
          NODE_ENV: local
      Tags:
        STAGE: local
  mysampleapi2:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: packages/mysampleapi2/dist/index.handler
      Runtime: nodejs14.x
      CodeUri: .
      Description: ''
      MemorySize: 1024
      Timeout: 30
      Role: >-
        arn:aws:iam:: [PRIVATE]
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /wallet
            Method: ANY
      Environment:
        Variables:
          NODE_ENV: local
      Tags:
        STAGE: local
```

When I send an HTTP request for `mysampleapi2`, here's what happens in the logs, using the startup command `sam local start-api --port 3001 --log-file /tmp/server-output.log --profile personal --debug`:

```log
2022-04-27 18:2:34,953 | Mounting /home/mathieu_auclair/Documents/Project/repositories/server as /var/task:ro,delegated inside runtime container
2022-04-27 18:20:35,481 | Starting a timer for 30 seconds for function 'mysampleapi1'
2022-04-27 18:21:05,484 | Function 'mysampleapi1' timed out after 30 seconds
2022-04-27 18:21:46,732 | Container was not created. Skipping deletion
2022-04-27 18:21:46,732 | Cleaning all decompressed code dirs
2022-04-27 18:21:46,733 | No response from invoke container for mysampleapi1
2022-04-27 18:21:46,733 | Invalid lambda response received: Lambda response must be valid json
```

Why is my `mysampleapi2` not picking up the HTTP call? If I run them in separate template files using different ports, then it works... why is that?
Re-post from my question on StackOverflow: https://stackoverflow.com/questions/72036152/aws-sam-no-response-from-invoke-container-for-wrong-function-name
1 answer · 1 vote · 42 views · asked 2 months ago

Should I use Cognito Identity Pool OIDC JWT Connect Tokens in the AWS API Gateway?

I noticed this question from 4 years ago: https://repost.aws/questions/QUjjIB-M4VT4WfOnqwik0l0w/verify-open-id-connect-token-generated-by-cognito-identity-pool

So I was curious, and I looked at the JWT token being returned from the Cognito Identity Pool. Its `aud` field was my identity pool id and its `iss` field was "https://cognito-identity.amazonaws.com", and it turns out that you can see the OIDC config at "https://cognito-identity.amazonaws.com/.well-known/openid-configuration" and grab the public keys at "https://cognito-identity.amazonaws.com/.well-known/jwks_uri".

Since I have access to the keys, that means I can freely validate OIDC tokens produced by the Cognito Identity Pool. Moreover, I should also be able to pass them into an API Gateway with a JWT authorizer. This would allow me to effectively gate my API Gateway behind a Cognito Identity Pool without any extra Lambda authorizers or needing IAM authentication.

Use case: I want to create a serverless Lambda app that's blocked behind some SAML authentication using Okta. Okta does not allow you to use their JWT authorizer without purchasing extra add-ons for some reason. I could use IAM authentication on the gateway instead, but I'm afraid of losing information such as the user's id, group, name, email, etc. Using the JWT directly preserves this information and passes it to the Lambda.

Is this a valid approach? Is there something I'm missing? Or is there a better way? Does the IAM method preserve user attributes...?
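The claim checks described above can be sketched as follows. This only inspects the `iss` and `aud` claims; signature verification against the published JWKS is deliberately omitted (a real authorizer must do it), and `identityPoolId` is a placeholder:

```javascript
// Sketch: checking the iss/aud claims of a Cognito Identity Pool token.
// Signature verification against the JWKS is intentionally left out here.
function checkClaims(jwt, identityPoolId) {
  // The payload is the middle base64url segment of the token.
  const payload = JSON.parse(
    Buffer.from(jwt.split(".")[1], "base64url").toString("utf8")
  );
  return (
    payload.iss === "https://cognito-identity.amazonaws.com" &&
    payload.aud === identityPoolId
  );
}
```

A JWT authorizer on the gateway performs essentially these checks (plus signature and expiry validation) against the configured issuer and audience.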
0 answers · 0 votes · 12 views · asked 2 months ago