Questions tagged with Serverless
I want to write an AWS Lambda function that can respond to various event types: API Gateway, Kinesis, S3, etc.
The API/SDK I need to work with is Java.
I'd like to create a general-purpose handler, but it appears that each service has its own event type and does not derive from some common parent event type.
Is there a pattern or a best practice for creating an AWS Lambda function that can be used with a variety of event types? Or do I need to create a custom handler for each event type?
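For illustration, this is the kind of general-purpose handler I have in mind: a single entry point built on `RequestStreamHandler` that reads the raw JSON payload and dispatches on its shape. A minimal sketch, assuming `aws-lambda-java-core` plus Jackson; the field checks and stubs are illustrative:
```
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Single entry point that reads the raw event and routes on its shape,
// instead of binding to one service-specific event class.
public class GenericHandler implements RequestStreamHandler {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        JsonNode event = MAPPER.readTree(input);

        if (event.path("Records").path(0).has("s3")) {
            handleS3(event);                     // S3 notification
        } else if (event.path("Records").path(0).has("kinesis")) {
            handleKinesis(event);                // Kinesis record batch
        } else if (event.has("httpMethod") || event.has("requestContext")) {
            handleApiGateway(event, output);     // API Gateway proxy request
        } else {
            context.getLogger().log("Unrecognized event shape: " + event);
        }
    }

    // Illustrative stubs; real implementations would parse the relevant fields.
    private void handleS3(JsonNode event) { }
    private void handleKinesis(JsonNode event) { }
    private void handleApiGateway(JsonNode event, OutputStream output) { }
}
```
The typed classes in `aws-lambda-java-events` (S3Event, KinesisEvent, APIGatewayProxyRequestEvent, and so on) don't share a useful common parent, which is why I'm considering the raw-stream route.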
Hello all! I am investigating an issue with recent API Gateway deployments that produce warnings in the Jenkins console output resembling the following:
```
"warnings": [
"More than one server provided. Ignoring all but the first for defining endpoint configuration",
"More than one server provided. Ignoring all but the first for defining endpoint configuration",
"Ignoring response model for 200 response on method 'GET /providers/{id}/identity/children' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring request model for 'PUT /providers/{id}/admin_settings' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /providers/{id}/profile/addresses/{address_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /providers/{id}/profile/anecdotes/{anecdote_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring request model for 'POST /providers/{id}/routes' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /providers/{id}/routes/{route_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /service_type_groups/{id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /service_types/{id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method."
]
```
Here is an example of the 200 response for an affected method in the OAS doc:
```
responses:
  '200':
    description: Array of Provider Identities that are children of this Provider
    content:
      'application/json':
        schema:
          description: Array of children provider identities
          type: array
          items:
            $ref: '#/components/schemas/providerIdentityExpansion'
  '404':
    $ref: '#/components/responses/not_found'
  '500':
    $ref: '#/components/responses/server_error'
```
Based on the wording of the warnings, my understanding is that some kind of default request/200 response model is defined and is somehow being overwritten in the API methods themselves. But when I compare some other methods that (seemingly) produce no warnings, they look identical in how they are implemented. I have tried a few potential fixes, removing and adding attributes, but none have worked so far.
Would anyone be able to help me find what exactly is going wrong in the OAS doc?
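In case it clarifies what I'm attempting: my reading of the warning's suggestion is that the inline schema should be defined once under components/schemas and referenced from each method, something like the sketch below (the providerIdentityChildren name is made up):
```
responses:
  '200':
    description: Array of Provider Identities that are children of this Provider
    content:
      'application/json':
        schema:
          $ref: '#/components/schemas/providerIdentityChildren'

components:
  schemas:
    providerIdentityChildren:
      description: Array of children provider identities
      type: array
      items:
        $ref: '#/components/schemas/providerIdentityExpansion'
```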
Hi
Can anyone share the code flow for a Chime SDK serverless meeting with a Cognito user pool?
A step-by-step process would be a great help; we tried, but got lost.
Thanks
Siddharth
Hi AWS,
Is this workflow architecture possible:
RDS (PostgreSQL) -> Amazon MQ Broker -> Lambda Function -> S3 Bucket
(Data is stored for customers)
The database could be DynamoDB as well. Amazon MQ is used as an event source for the Lambda function; the Lambda sends a request to API Gateway, gets the JSON response, and then sends it to S3 to be stored as output (sketched below).
Please suggest.
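To make the intent concrete, the Lambda piece would look roughly like the sketch below (Java, AWS SDK v2). The bucket name, API URL, and the way the MQ messages are handled are placeholders, not a working integration:
```
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.UUID;

// Sketch only: triggered by Amazon MQ, calls an API Gateway endpoint,
// and stores the JSON response in S3. All names/URLs are placeholders.
public class MqToS3Handler implements RequestHandler<Map<String, Object>, String> {
    private static final S3Client S3 = S3Client.create();
    private static final HttpClient HTTP = HttpClient.newHttpClient();
    private static final String API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/data"; // placeholder
    private static final String BUCKET = "my-output-bucket"; // placeholder

    @Override
    public String handleRequest(Map<String, Object> mqEvent, Context context) {
        try {
            // The MQ event carries the broker messages (base64-encoded); here we
            // only log the top-level keys before calling the API.
            context.getLogger().log("Received MQ event keys: " + mqEvent.keySet());

            HttpRequest request = HttpRequest.newBuilder(URI.create(API_URL)).GET().build();
            HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());

            // Store the JSON response as an object in the output bucket.
            String key = "responses/" + UUID.randomUUID() + ".json";
            S3.putObject(
                    PutObjectRequest.builder().bucket(BUCKET).key(key).contentType("application/json").build(),
                    RequestBody.fromString(response.body()));
            return key;
        } catch (Exception e) {
            throw new RuntimeException("Failed to process MQ event", e);
        }
    }
}
```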
Hello,
I am trying to use API Gateway with a Lambda function, but with my own domain (which is on Route 53). This is my current config:
In API Gateway I created a resource with a GET method and published it to a stage I called v1. I get an endpoint like
```
https://11111111.execute-api.us-east-1.amazonaws.com/v1
```
If I call this endpoint I can see the reply from my Lambda function. So far so good.
Then, in API Gateway again, I created a custom domain name for api.mydomain.com, and I got something like
```
22222222.execute-api.us-east-1.amazonaws.com
```
Finally, in Route 53 I created an A record for api.mydomain.com, marked as an alias, with the value
```
22222222.execute-api.us-east-1.amazonaws.com
```
If I try to call https://api.mydomain.com/v1 I get a 403 error.
Am I missing something?
Also, do I need to enable CORS to allow any browser to call this endpoint?
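For context, as far as I understand the custom domain also needs an API mapping that ties it to the API and stage; on the CLI that would be something like the following (same API ID and stage as above), though I'm not sure whether my console setup already created the equivalent:
```
aws apigateway create-base-path-mapping \
    --domain-name api.mydomain.com \
    --rest-api-id 11111111 \
    --stage v1
```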
I am trying to invoke a Lambda to store data in a DynamoDB table. In my own AWS account it works, but not in the company AWS account I'm working in. CloudWatch does not show any errors. The timeout occurs at `await dynamodb.describeTable(describeParams).promise();`.

My code is as follows:
```
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();
const dynamodb = new AWS.DynamoDB();

exports.handler = async (event) => {
    const valueTostore = event.body || 'default_value';
    const params = {
        TableName: 'my-values',
        Item: {
            id: new Date().toISOString(),
            SessionConfig: valueTostore
        }
    };

    // Sanity check that the table is reachable; this is the call that times out.
    try {
        const describeParams = { TableName: 'my-values' };
        await dynamodb.describeTable(describeParams).promise();
    } catch (error) {
        const response = {
            statusCode: 500,
            body: JSON.stringify({ message: 'Error while accessing table' })
        };
        return response;
    }

    // Write the item.
    try {
        await docClient.put(params).promise();
    } catch (error) {
        const response = {
            statusCode: 500,
            body: JSON.stringify({ message: 'Error while storing value' })
        };
        return response;
    }

    const response = {
        statusCode: 200,
        body: JSON.stringify({ message: 'Value stored successfully' })
    };
    return response;
};
```
Hello
I'm trying to run a SAM application locally containing an API Gateway and a Lambda written in Python. My goal is to POST images to my API so that I can upload them to an S3 bucket, where they can be publicly served.
If I do HTTP operations using JSON everything works fine, but when I try to POST a binary type like 'image/jpeg' I get the following error:
```
2023-03-18 09:06:27 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
UnicodeDecodeError while processing HTTP request: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
2023-03-18 09:06:34 127.0.0.1 - - [18/Mar/2023 09:06:34] "POST /media HTTP/1.1" 502 -
```
I've tried adding BinaryMediaTypes to my template.yaml and creating a fully defined CloudFormation template, but I still get this error.
Here's my code:
https://github.com/caelumvox/blog-api
Would anyone know how to get this working locally? Thank you
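For reference, this is roughly the BinaryMediaTypes setting I added to template.yaml (the resource name is illustrative; SAM expects ~1 in place of / in the media type):
```
Resources:
  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      BinaryMediaTypes:
        - image~1jpeg
        - '*~1*'   # or simply allow all binary types
```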
We use loads of Lambda, EventBridge, all that good stuff. My devs were favouring a local environment, but this is clearly not possible. How do we write code and release fast, with a serverless architecture, without having to deploy every tiny change back up to AWS?
I'm trying to deploy my FastAPI app on AWS Lightsail, which runs Ubuntu. I created a directory in /home/ubuntu named, say, myapp (so /home/ubuntu/myapp). I then create a virtual environment, install the required libraries in it, and start my Gunicorn server on localhost, listening on port 8000.
But whenever I access serverip:8000 it gives the error [Error: connect ECONNREFUSED "server ip address":8000].
I didn't create any ".service" files because I wanted to make sure it runs locally first, but it isn't running, even though the same code runs in VS Code and the same API deployed on Heroku works as well.
I even tried creating a sample file from a tutorial and followed the exact steps given in the tutorial, but I got the same error.
Port 8000 is free; I made sure of that. The server returns calls for other APIs, and I even switched some of those APIs to port 8000, where they worked perfectly.
The firewall is disabled, so that is not the problem either.
It just isn't working with Gunicorn.
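For reference, the Gunicorn command I'm running looks roughly like the following (main:app stands in for my actual module and app object); my understanding is that it has to bind to 0.0.0.0 rather than 127.0.0.1 to be reachable at serverip:8000:
```
gunicorn main:app \
    --workers 2 \
    --worker-class uvicorn.workers.UvicornWorker \
    --bind 0.0.0.0:8000
```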
I am working on an Airbnb-like project. There are public RESTful APIs that need to be secured with API Gateway and OAuth 2.0. I want a solution to secure these public RESTful APIs with OAuth 2.0. Thanks
Hello
I ran the following example from the documentation:
[AWS CLI apigateway put-integration](https://docs.aws.amazon.com/cli/latest/reference/apigateway/put-integration.html)
```
aws apigateway put-integration --rest-api-id 1234123412 --resource-id a1b2c3 --http-method GET --type AWS --integration-http-method POST --uri 'arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:123412341234:function:function_name/invocations'
```
But I got the following error:
```
An error occurred (NotFoundException) when calling the PutIntegration operation: Invalid Method identifier specified
```
Of course, I used correct `--rest-api-id` and `--resource-id` values.
Could the issue come from the URI?
Please advise.
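For completeness, this is how I'm double-checking the identifiers (as far as I understand, the GET method has to exist on the resource before put-integration can attach an integration to it; same placeholder IDs as above):
```
# Confirm the resource id for the path in question
aws apigateway get-resources --rest-api-id 1234123412

# The GET method must exist on the resource before the integration is attached
aws apigateway put-method --rest-api-id 1234123412 --resource-id a1b2c3 \
    --http-method GET --authorization-type NONE
```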
Hello,
I've developed an app for Slack but I've been having trouble starting it for a while now.
For those who don't know, the Slack API requires a response within 3 seconds for user-interaction requests; if that time passes, it generates an error and the application does not work as it should.
I originally worked around this by creating a Lambda that runs every 3 minutes and invokes the main Lambda, but I don't think that's the best way to do it.
So I decided to configure the Lambda with provisioned concurrency, keeping 3 instances started. I think that was the better choice.
It turns out it's not working as it should: when I make the call, API Gateway is pointing to $LATEST, causing it to start a new Lambda instance, and I get timeout errors.
The configuration was done correctly, as you can see in the attached screenshots.
Now, when I make the calls, it creates a new container and points to the $LATEST version instead of the already-started Lambdas. The version 42 Lambdas receive one request or another at random, as you can see in the logs. I really don't understand what's going on.
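For reference, my understanding of the intended setup is roughly the following (function and alias names are placeholders): provisioned concurrency is attached to a specific qualifier, and the API Gateway integration has to reference that same alias rather than the unqualified function:
```
# Alias pointing at the published version that has provisioned concurrency
aws lambda create-alias --function-name my-slack-handler --name live --function-version 42

# Provisioned concurrency attached to that alias
aws lambda put-provisioned-concurrency-config --function-name my-slack-handler \
    --qualifier live --provisioned-concurrent-executions 3

# The API Gateway integration URI would then reference the alias, not $LATEST:
# arn:aws:apigateway:<region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<region>:<account>:function:my-slack-handler:live/invocations
```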
