Questions tagged with Amazon API Gateway
Hi,
I want to call an HTTP endpoint from my AWS API Gateway, and that endpoint is secured with OAuth 2.0. Is there any way to implement OAuth 2.0 using the HTTP integration type in AWS API Gateway (API creation wizard)?
Also, if Lambda is the only option, any examples would be appreciated.
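For what it's worth, API Gateway's plain HTTP integration has no built-in OAuth 2.0 client support, so a common pattern is a Lambda proxy that first obtains a token and then calls the protected endpoint. Below is a minimal sketch; the token URL, client credentials, and API URL are placeholders, and it assumes the standard client-credentials grant:

```python
import json
import urllib.parse
import urllib.request

TOKEN_URL = "https://auth.example.com/oauth/token"   # placeholder
API_URL = "https://api.example.com/resource"         # placeholder

def build_token_request(token_url, client_id, client_secret):
    """Build a standard OAuth 2.0 client-credentials token request."""
    data = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode("utf-8")
    return urllib.request.Request(
        token_url, data=data,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

def handler(event, context):
    # 1. Get an access token from the authorization server.
    with urllib.request.urlopen(build_token_request(TOKEN_URL, "my-id", "my-secret")) as resp:
        token = json.loads(resp.read())["access_token"]
    # 2. Call the protected endpoint with the bearer token.
    req = urllib.request.Request(API_URL, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": 200, "body": resp.read().decode("utf-8")}
```

In a real setup the client secret would come from Secrets Manager or an encrypted environment variable rather than being hard-coded.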
Hi All,
We are using AWS ECS Fargate behind an ALB and API Gateway to serve our API. Mostly it is healthy, but at times it returns status code 0 or 503; the error messages that accompany these statuses are shared below. We have 1 task always active and trigger another at 80% CPU load, yet we always see 2 tasks active even though the service barely uses 0.25 vCPU and 512 MB of memory. We are not sure what the issue is or why we keep getting these errors, or whether it has anything to do with the size of the payload received. The timeout is set to 15 seconds at the API Gateway level. Not sure where we are going wrong; any help here is much appreciated.
Error Status & Message
~~~
Status 0:
"responseBody":".execute-api.ap-south-1.amazonaws.com: Temporary failure in name resolution"

Status 503:
"responseBody":"<html> <head><title>503 Service Temporarily Unavailable</title></head> <body> <center><h1>503 Service Temporarily Unavailable</h1></center> </body> </html> "
~~~
Hello all! I am investigating an issue with recent API Gateway deployments that produce warnings in the Jenkins console output resembling the following:
```
"warnings": [
"More than one server provided. Ignoring all but the first for defining endpoint configuration",
"More than one server provided. Ignoring all but the first for defining endpoint configuration",
"Ignoring response model for 200 response on method 'GET /providers/{id}/identity/children' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring request model for 'PUT /providers/{id}/admin_settings' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /providers/{id}/profile/addresses/{address_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /providers/{id}/profile/anecdotes/{anecdote_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring request model for 'POST /providers/{id}/routes' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /providers/{id}/routes/{route_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /service_type_groups/{id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /service_types/{id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method."
]
```
Here is an example of the 200 response for an affected method in the OAS doc:
```
responses:
'200':
description: Array of Provider Identities that are children of this Provider
content:
'application/json':
schema:
description: Array of children provider identities
type: array
items:
$ref: '#/components/schemas/providerIdentityExpansion'
'404':
$ref: '#/components/responses/not_found'
'500':
$ref: '#/components/responses/server_error'
```
Based on the language in the warnings, my understanding is that some kind of default request/200 response model is defined and is somehow being overwritten in the API methods themselves. But when I compare some other (seemingly) warning-free methods, they look identical in how they are implemented. I have tried a few potential fixes, removing and adding attributes, but none have worked so far.
Would anyone be able to help me in finding what exactly is going wrong here in the OAS doc?
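If it helps, one pattern worth trying, sketched below with a hypothetical `providerIdentityChildren` name, is to hoist the inline array schema into `components/schemas` and reference it from the method, so the method no longer redefines a model of the same name:

```yaml
components:
  schemas:
    providerIdentityChildren:
      description: Array of children provider identities
      type: array
      items:
        $ref: '#/components/schemas/providerIdentityExpansion'

paths:
  /providers/{id}/identity/children:
    get:
      responses:
        '200':
          description: Array of Provider Identities that are children of this Provider
          content:
            'application/json':
              schema:
                $ref: '#/components/schemas/providerIdentityChildren'
```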
Hi AWS,
Is this workflow architecture possible:
```
RDS (PostgreSQL) --> Amazon MQ Broker --> Lambda Function --> S3 Bucket
(Data is stored for customers)
```
The database could be DynamoDB as well. Amazon MQ is used as an event source for the Lambda function; the Lambda sends a request to API Gateway, gets the JSON response, and forwards it to S3 to be stored as output.
Please suggest.
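Amazon MQ is a supported Lambda event source, so a handler along these lines seems plausible. A minimal sketch follows; the bucket name and API URL are placeholders, and it assumes the ActiveMQ event shape, where message bodies arrive base64-encoded under `messages`:

```python
import base64
import json
import urllib.request

API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/process"  # placeholder
BUCKET = "my-output-bucket"                                                   # placeholder

def decode_mq_messages(event):
    """Decode base64 message bodies from an Amazon MQ (ActiveMQ) Lambda event."""
    return [base64.b64decode(m["data"]).decode("utf-8") for m in event.get("messages", [])]

def handler(event, context):
    import boto3  # available in the Lambda runtime
    s3 = boto3.client("s3")
    for i, body in enumerate(decode_mq_messages(event)):
        # Forward the message to the API and capture its JSON response.
        req = urllib.request.Request(
            API_URL, data=body.encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            result = json.loads(resp.read())
        # Persist the response in S3 as the output object.
        s3.put_object(
            Bucket=BUCKET,
            Key=f"responses/{context.aws_request_id}-{i}.json",
            Body=json.dumps(result).encode("utf-8"),
        )
```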
Hello,
I am trying to use API Gateway with a Lambda function, but with my own domain (which is on Route 53). This is my current config:
In API Gateway I created a resource with a GET method and published it to a stage I called v1. I get an endpoint like
```
https://11111111.execute-api.us-east-1.amazonaws.com/v1
```
If I call this endpoint I can see the reply from my Lambda function; so far so good.
Then, in API Gateway again, I created a custom domain name for api.mydomain.com, and I get something like
```
22222222.execute-api.us-east-1.amazonaws.com
```
Finally, in Route 53 I created an A record (api.mydomain.com), marked as an alias, with the value
```
22222222.execute-api.us-east-1.amazonaws.com
```
If I try to call https://api.mydomain.com/v1 I get a 403 error.
Am I missing something?
Also, do I need to enable CORS to allow any browser to call this endpoint?
Hello,
Has anyone been able to figure out gzip compression with API Gateway and S3 integration?
If I have an uncompressed binary file in S3 and enable compression in API Gateway, then requesting with "accept-encoding: gzip" works and returns proper gzip data (as long as the payload is under 10 MB).
What doesn't work is when the file in S3 is already gzipped and has content-encoding metadata defined: API Gateway re-compresses the data, so I receive twice-gzipped data, which then breaks on all HTTP clients.
The documentation says that API Gateway should not re-compress the data when the integration returns a Content-Encoding header, and S3 does return that header when the file's content-encoding metadata is defined. So is this a bug in API Gateway? It seems to ignore the header and re-compresses already compressed data.
Has anyone figured out how to get this to work?
Hello
I'm trying to run a SAM instance locally containing an API Gateway and a Lambda written in Python. My goal is to POST images to my API so that I can upload them to an S3 bucket where they can be publicly served.
If I do HTTP operations using JSON everything works fine, but when I try to POST a binary type like 'image/jpeg' I get the following error:
```
2023-03-18 09:06:27 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
UnicodeDecodeError while processing HTTP request: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
2023-03-18 09:06:34 127.0.0.1 - - [18/Mar/2023 09:06:34] "POST /media HTTP/1.1" 502 -
```
I've tried adding BinaryMediaTypes to my template.yaml and creating a fully defined CloudFormation template, but I still get this error.
Here's my code:
https://github.com/caelumvox/blog-api
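For reference, the BinaryMediaTypes setting in a SAM template looks roughly like the sketch below (the resource name is hypothetical; note the `~1` escape that SAM requires in place of `/`). Also worth noting: `sam local` support for binary media types has historically lagged behind deployed APIs, which may be a factor here:

```yaml
Resources:
  BlogApi:                       # hypothetical resource name
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      BinaryMediaTypes:
        - image~1jpeg            # '~1' stands for '/' in SAM
        - image~1png
```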
Would anyone know how to get this working locally? Thank you
I'm trying to do something I thought was easy, but my google fu is failing me.
I have been provided an API Gateway endpoint that I must call with a GET to download a file onto my EC2 instance.
The request has to be signed.
I see all kinds of SDKs and examples, but nothing for the CLI.
I don't see an AWS CLI command that will let me call a gateway endpoint (test-invoke-method doesn't seem right). Is there one?
If not, is there a simple way to use the AWS CLI to create a signed request that I can send with Invoke-WebRequest (PowerShell) to download the file?
The IAM permissions are in place and the EC2 instance profile does have the invoke permission for the API.
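One workaround, since the AWS CLI has no general-purpose invoke for API Gateway endpoints, is a short Python script using botocore's SigV4 signer (botocore ships with boto3 and is typically present wherever the CLI is installed, and it will pick up the instance profile credentials). A sketch, with the URL as a placeholder:

```python
import urllib.request
from urllib.parse import urlparse

def region_from_execute_api_url(url):
    """Extract the region from an execute-api hostname like
    abc123.execute-api.us-east-1.amazonaws.com."""
    host = urlparse(url).hostname
    parts = host.split(".")
    # <api-id>.execute-api.<region>.amazonaws.com
    return parts[parts.index("execute-api") + 1]

def download_signed(url, out_path):
    # Imported lazily so the pure helper above has no SDK dependency.
    import boto3
    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest

    creds = boto3.Session().get_credentials()
    request = AWSRequest(method="GET", url=url)
    # Sign for the execute-api service, then replay with urllib.
    SigV4Auth(creds, "execute-api", region_from_execute_api_url(url)).add_auth(request)
    req = urllib.request.Request(url, headers=dict(request.headers))
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())

# Example (placeholder URL):
# download_signed("https://abc123.execute-api.us-east-1.amazonaws.com/prod/file", "out.bin")
```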
I have a CloudFront distribution with WAF to protect an HTTP API Gateway. The CloudFront distribution has an alternate domain name, api.mysite.dev, which we manage with Cloudflare (the CNAME record points to https://{distro}.cloudfront.net). The distribution's origin is the HTTP API's default endpoint. We use a built-in Auth0 authorizer on the API, so we cannot use a [custom lambda authorizer](https://wellarchitectedlabs.com/security/300_labs/300_multilayered_api_security_with_cognito_and_waf/3_prevent_requests_from_accessing_api_directly/).
Now I want to increase security and disable the default API endpoint. I created a custom domain name for the API with an ACM certificate in the same region and disabled the default endpoint. Instead of the default endpoint, I specified the API's custom domain name as the origin for the CloudFront distribution (apigw.mysite.dev, which points to the API Gateway domain name d-123abc123.execute-api.{my-region}.amazonaws.com).
But CloudFront responds with a **404 Not Found** error when calling api.mysite.dev, as if CF couldn't reach the origin custom domain name. The CloudFront logs don't contain any valuable info.
I've reviewed the [documentation](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-custom-domain-names.html) and followed carefully the steps in [knowledge center](https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-domain-cloudfront/).
Can anyone provide any tips on how to fix the issue? Can I use an HTTP API with a custom domain managed externally (with an ACM certificate) as an origin for CloudFront?
I am working on an Airbnb-like project. There are public RESTful APIs that need to be secured with API Gateway and OAuth 2.0. I want a solution to secure the public RESTful APIs with OAuth 2.0. Thanks
My app has its back end on API Gateway and its front end in an S3 bucket. That means they have different URLs, and the cookie ends up being SameSite=None. Because of that, the Safari browser doesn't store the login cookie I send from the back end, even with Secure set.
My question is: is it possible to maintain this architecture and still send a cookie that Safari can store? If not, what would the architecture look like to be able to send first-party (SameSite=Lax/Strict) cookies? If you can point me in the right direction, I'd appreciate it.
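For context, a cross-site cookie set from a Lambda proxy response typically needs the attributes shown in this sketch (names and values are illustrative); whether Safari actually stores it is still subject to its third-party cookie policy:

```python
import json

def login_handler(event, context):
    """Hypothetical login handler returning a session cookie in a
    Lambda proxy integration response."""
    cookie = (
        "session=abc123; "   # illustrative session token
        "Secure; "           # mandatory when SameSite=None
        "HttpOnly; "         # keep the cookie out of reach of scripts
        "SameSite=None"      # required for cross-site requests
    )
    return {
        "statusCode": 200,
        "headers": {"Set-Cookie": cookie},
        "body": json.dumps({"ok": True}),
    }
```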
I needed to set up cross-account access to AppSync, from account A to account B. I'm using CDK for infra. Since AppSync doesn't support resource-based policies, I created an instance of API Gateway in account B and set up an [AWS service integration](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_apigateway.AwsIntegration.html) (AwsIntegration) from the API Gateway to AppSync in that account. Then I set up a resource-based policy on the API Gateway in account B that allows requests from services in account A, which then get proxied to AppSync in account B. I got the approach from [here](https://stackoverflow.com/questions/65698880/appsync-to-appsync-integration-http-datasource-aws-iam).
Instead of using an AWS service integration, I'd like to use the [HttpIntegration](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_apigateway.HttpIntegration.html#initializer). The HttpIntegration, however, doesn't seem to create the Authorization header needed to access AppSync; I keep getting a 401 error when I try to test. Is the [credentialsRole](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_apigateway.IntegrationOptions.html#credentialsrole) on the construct just being ignored? Or am I missing something?
Thanks