Questions tagged with Serverless
Hey all! Hope you are doing well. I have been trying to write a query service for some internal databases in my VPC. My current setup is API Gateway with a Lambda that queries the database, which works fine, but unfortunately I ran into two issues:
- The API Gateway default timeout is 30 seconds, which is not very long for queries.
- The Lambda response size limit is 6 MB, which is fine in general but not suitable for the biggest queries.
Are there any serverless services I can use to solve this problem? I do require a custom domain and authentication. Some solutions I thought of were:
- Chunking requests (a rough sketch of what I mean is below this list), which should work fine, but I think 30 seconds is still not very long. It is a temporary solution for now.
- Using an ALB as an "API" to trigger Lambdas, which would fix the timeout, but the response size limit is still 6 MB.
- Hosting my own API on an EC2 instance/container, which I can do, but I prefer serverless solutions.
- Using WebSockets, but it seems harder to attach existing apps to a WebSocket API compared to a REST API.
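To clarify what I mean by chunking: the client would request one page at a time and follow a continuation offset, something like the minimal sketch below (the handler, page size, and `run_query` helper are illustrative, not my real code):

```python
import json

PAGE_SIZE = 500  # rows per chunk, chosen so each response stays well under the 6 MB limit


def lambda_handler(event, context):
    """Return one chunk of query results per invocation.

    The caller passes an offset; we return the rows for that offset plus the
    offset to request next (or None when there is nothing left).
    """
    params = event.get("queryStringParameters") or {}
    offset = int(params.get("offset", 0))
    sql = params.get("sql", "SELECT * FROM example_table")  # illustrative default

    rows = run_query(sql, limit=PAGE_SIZE, offset=offset)
    next_offset = offset + PAGE_SIZE if len(rows) == PAGE_SIZE else None

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"rows": rows, "next_offset": next_offset}),
    }


def run_query(sql, limit, offset):
    # Placeholder for the real query against the VPC database (e.g. via pymysql/psycopg2).
    return []
```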
If somebody has some input, I would really appreciate it! Thanks in advance.
Hello All!
So I'm on my first foray into dockerized applications and would like to deploy a cron task on Fargate to run the Docker image.
**Some background:**
The Docker image is published to ECR via a CI/CD pipeline from my GitLab account. The image is based on slim-buster Python, and I use Docker-in-Docker to build it. The push to ECR is successful.
The basics of the image are:
1. Create a Python environment with the libraries specified in requirements.txt.
2. Import libraries into `__main__.py`, along with all the user-defined functions from a `func.py` file.

`__main__.py` runs a series of DDL and DML SQL statements against our Snowflake via the Snowflake Python SDK, which stages data. Then the staged data is looped through in batches, with each batch sent to an external API (Smarty Streets) and the GET results sent back to Snowflake (basically, standardizing free-text address fields is the point of the application).
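Heavily simplified, the shape of `__main__.py` is something like this (the table names, endpoint, and credential handling are placeholders, not the real code):

```python
import os

import requests
import snowflake.connector

BATCH_SIZE = 100  # addresses sent to the external API per loop; illustrative value


def main():
    # Connect to Snowflake with credentials injected into the container environment.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
    )
    cur = conn.cursor()

    # DDL/DML that stages the raw addresses (placeholder SQL).
    cur.execute("CREATE TABLE IF NOT EXISTS staged_addresses (raw_address STRING)")
    cur.execute("SELECT raw_address FROM staged_addresses")
    rows = [r[0] for r in cur.fetchall()]

    # Loop through the staged rows in batches, send each address to the external API,
    # and write the standardized result back to Snowflake.
    for start in range(0, len(rows), BATCH_SIZE):
        for address in rows[start:start + BATCH_SIZE]:
            resp = requests.get(
                "https://us-street.api.smartystreets.com/street-address",  # illustrative endpoint
                params={"street": address, "auth-id": os.environ["SMARTY_AUTH_ID"]},
            )
            cur.execute(
                "INSERT INTO standardized_addresses VALUES (%s, %s)",
                (address, resp.text),
            )

    conn.close()


if __name__ == "__main__":
    main()
```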
**My Goal**
Have this run on a cron schedule via a scheduled Fargate task, once per day.
**My Problem**
I have a Fargate cluster defined, a task definition, and a schedule rule (EventBridge) created, but nothing happens. When I try to manually run the task definition, nothing happens: the task shows a pending status and then disappears, and there are no logs either (which tells me something is failing). Nothing happens at the scheduled time from the EventBridge rule I created, either. In the SQL DML I log a start/finish time for this process to a Snowflake table, and it shows that nothing is being sent to Snowflake. I'm sure there is something very stupid that I'm missing to get a task to run the Docker container, but I'm not finding it. I've looked through a lot of tutorials on YouTube, but they all seem to be web-app driven, for a Fargate *service* (typically Flask), and not a simple burst ETL job on a schedule like I'm trying to do.
Any examples on how to tie this all together???
**Added Notes**
I'm pretty sure the subnets I chose have their IPs whitelisted on our Snowflake, but I am expecting the API call to fail as it's external. At this point, though, I just want to trigger the Docker container.
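For reference, I believe the manual run I'm attempting from the console is equivalent to roughly this (the cluster, task definition, subnet, and security group IDs are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Manually run the scheduled task once, the same way the EventBridge rule would.
response = ecs.run_task(
    cluster="my-etl-cluster",                    # placeholder cluster name
    taskDefinition="address-standardizer:1",     # placeholder task definition (family:revision)
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder subnet ID
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder security group ID
            # ENABLED is needed if the subnet has no NAT route, otherwise the ECR image pull fails.
            "assignPublicIp": "ENABLED",
        }
    },
)

# "failures" explains why a task could not be placed at all; a stopped task's
# stoppedReason (via describe_tasks) explains why it exited after starting.
print(response.get("failures"))
for task in response.get("tasks", []):
    details = ecs.describe_tasks(cluster="my-etl-cluster", tasks=[task["taskArn"]])
    print(details["tasks"][0].get("stoppedReason"))
```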
Hello,
I am practicing using the SAM CLI to build and deploy Lambda functions as APIs. I am running into issues enabling CORS on the API Gateway API associated with my Lambda function. I have tried both configuring CORS in my template.yaml file and going into the API Gateway console and enabling CORS manually.
The Lambda function is a simple hello world function that takes in one parameter, a name, and returns "{name} says hello world!". I have tested the API locally using a React app to invoke the API call, and everything works fine. That is not the case when it's deployed to AWS.
Here is my template.yaml file:
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  apple-app Sample SAM Template for apple-app

# More info about Globals:
Globals:
  Function:
    Timeout: 3
    MemorySize: 128

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.9
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
            RestApiId: !Ref AWS::ApiGateway::RestApi
            Cors:
              AllowMethods: "'GET, POST'"
              AllowHeaders: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
              AllowOrigin: "'*'"

Outputs:
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn
```
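For reference, `app.lambda_handler` is roughly the following (a trimmed sketch rather than the exact file; the CORS header in the response body is something I added while testing, since with the implicit proxy integration the function itself sets the response headers):

```python
import json


def lambda_handler(event, context):
    # Read the "name" query string parameter, falling back to a default.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "someone")

    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            # Returned by the function because the GET method uses the proxy integration.
            "Access-Control-Allow-Origin": "*",
        },
        "body": json.dumps(f"{name} says hello world!"),
    }
```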
Am I configuring the implicit HelloWorld API correctly to enable CORS?
Since my configuration in the YAML file didn't work correctly, I tried manually enabling CORS by going into the API Gateway console and clicking the button "Enable CORS and replace existing CORS headers". This is the response I get, with an error:
✔ Add Access-Control-Allow-Headers, Access-Control-Allow-Methods, Access-Control-Allow-Origin Method Response Headers to OPTIONS method
✔ Add Access-Control-Allow-Headers, Access-Control-Allow-Methods, Access-Control-Allow-Origin Integration Response Header Mappings to OPTIONS method
✖ Add Access-Control-Allow-Origin Method Response Header to GET method
✖ Add Access-Control-Allow-Origin Integration Response Header Mapping to GET method
Your resource has been configured for CORS. If you see any errors in the resulting output above please check the error message and if necessary attempt to execute the failed step manually via the Method Editor.
! The Empty Model does not exist, and retry resource creation without it.
I am not sure what I'm doing wrong, so any help would be greatly appreciated.
Thank you.
Our team has created a Lambda Function URL to manage background tasks in a queuing fashion. The Lambda essentially schedules the kick-off of these tasks by making a RESTful API request to an external endpoint that is still hosted on AWS. However, we are getting outbound connection timeouts to the external endpoint (and basically anywhere else) from time to time, and are then unable to send outbound requests for a few hours; e.g., after sending 35 requests in 2 hours we are unable to connect for about 3 hours. I suspect this is caused by some sort of limit or quota being reached within the VPC configuration, but I cannot make any sense of it as we don't get any meaningful warning or error. Removing the VPC does solve the timeout problem.
A bit more detail about the setup: the Lambda has an RDS instance attached, where it stores the state and updates of the last runs, and the Lambda is configured with a VPC to get access to the RDS instance, with outbound rules that allow all protocols on all ports. The outbound API calls are made every 5 minutes with a very light payload and normally get a response within 50 ms, until the timeouts start happening.
What are we doing wrong with the VPC config, and shouldn't we get some warning/email when quotas/limits are reached?
Thanks
I created a project using the Serverless Image Handler blueprint; the deployment region is set to 'us-east-1', and the region for the CloudFormation template bucket is 'us-east-1'. On running 'build-and-deployOSS', I am getting the error 'The action configuration is not valid'.

These are the configuration steps defined in that workflow:
```
Configuration:
  Steps:
    - Run: curl -O https://bootstrap.pypa.io/get-pip.py
    - Run: python3 get-pip.py
    - Run: yum install zip rsync -y
    - Run: pip install --upgrade pip
    - Run: pip install --upgrade setuptools
    - Run: pip install --upgrade virtualenv
    - Run: export OPS_CO_PATH=`pwd`
    - Run: echo $OPS_CO_PATH
    - Run: cd $OPS_CO_PATH/deployment
    - Run: echo $OPS_CO_PATH/deployment
    - Run: if ! aws s3api head-bucket --bucket $IMAGEBUCKET 2>/dev/null; then aws s3 mb s3://$IMAGEBUCKET --region $REGION; fi
    - Run: if ! aws s3api head-bucket --bucket $TEMPLATE_OUTPUT_BUCKET 2>/dev/null; then aws s3 mb s3://$TEMPLATE_OUTPUT_BUCKET --region $TEMPLATE_REGION; fi
    - Run: if ! aws s3api head-bucket --bucket $BUCKET_PREFIX-$REGION 2>/dev/null; then aws s3 mb s3://$BUCKET_PREFIX-$REGION --region $REGION; fi
    - Run: ./build-s3-dist.sh $BUCKET_PREFIX $SOLUTION_NAME $VERSION >buildresults.txt
    - Run: aws s3 cp $OPS_CO_PATH/deployment/global-s3-assets s3://$TEMPLATE_OUTPUT_BUCKET/$SOLUTION_NAME/$VERSION --recursive --acl bucket-owner-full-control
    - Run: aws s3 cp $OPS_CO_PATH/deployment/regional-s3-assets s3://$BUCKET_PREFIX-$REGION/$SOLUTION_NAME/$VERSION --recursive --acl bucket-owner-full-control
```
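For what it's worth, my understanding is that the three `head-bucket`/`mb` steps just create each bucket if it doesn't already exist; in boto3 terms, roughly (bucket name is a placeholder):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def ensure_bucket(name, region):
    """Create the bucket only if it does not exist yet (illustrative equivalent of the
    `if ! aws s3api head-bucket ...; then aws s3 mb ...; fi` steps in the workflow)."""
    try:
        s3.head_bucket(Bucket=name)  # succeeds if the bucket exists and is accessible
    except ClientError:
        if region == "us-east-1":
            s3.create_bucket(Bucket=name)  # us-east-1 must not pass a LocationConstraint
        else:
            s3.create_bucket(
                Bucket=name,
                CreateBucketConfiguration={"LocationConstraint": region},
            )


ensure_bucket("my-image-bucket", "us-east-1")  # placeholder bucket name
```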

I noticed that in the build-s3-dist.sh file there is a variable DIST_OUTPUT_BUCKET which is not in templateParameters.json and also not listed in the variables for that build. I tried to update the workflow by defining this variable and setting its value to the same as 'TEMPLATE_OUTPUT_BUCKET' for the action 'build-and-deployOSS' (not sure whether what I did was right or not), and ran the workflow again, but I'm getting the error on the same step.

Any idea how this error could be fixed?
Thanks,
#### What I'm trying to do:
I'm trying to create an API using API Gateway that invokes a Lambda function. I am creating these objects in AWS using SAM templates and deploying them with CloudFormation. I am also trying to make sure that the generated documentation (Swagger/OpenAPI) appears correctly (specifically the response status code, body, and description). Additionally, my goal is to keep the project simple for myself and my fellow team members who do not work with this technology regularly.
#### How I've done it so far:
I've used `AWS::Serverless::Api` and `AWS::Serverless::Function` within my templates, since they provide more features without having to define individual methods within API Gateway. With these tools, I've been able to build the functionality I'm trying to achieve, along with some (but not all) of the documentation I would like to see.
#### The issue:
Within the documentation of both `AWS::Serverless::Api` and `AWS::Serverless::Function`, I see no reference to Method Responses. As a result, the generated documentation does not contain the response(s). That's not entirely surprising, because I'm not defining them in the template, but I have been unsuccessful in finding where they could be defined.
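For context, by "Method Responses" I mean the responses you would otherwise declare directly on the REST API, e.g. via boto3 like this (the API/resource IDs are placeholders, and "Empty" is just the built-in model):

```python
import boto3

apigw = boto3.client("apigateway")

# Declare a 200 method response for GET /hello so that it shows up in the
# exported Swagger/OpenAPI document.
apigw.put_method_response(
    restApiId="a1b2c3d4e5",   # placeholder REST API ID
    resourceId="abc123",      # placeholder resource ID for /hello
    httpMethod="GET",
    statusCode="200",
    responseModels={"application/json": "Empty"},
)
```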
#### What I've tried so far:
I've tried using Documentation Parts, defining the method explicitly, and briefly attempted to use a separate YAML file for the definition body (which I don't want to do). None of these have fixed the issue.
#### Question:
Is there a way to define a response/method response while using only `AWS::Serverless::Api` and `AWS::Serverless::Function` within the template, without the use of DefinitionBody or a separate YAML file?
#### Thank you!
Thank you for reading this! I am open to trying new solutions and I'll be happy to share the testing template I've been working with if that helps answer the question.
Would it be possible to set up an Aurora Serverless v2 cluster in us-east-1 and have a read replica in another region (us-west-2)? How could we do this via the console?
Hello there,
I'm trying to drop a database on Aurora, but the request just hangs. I've tried several times, and the last attempt has been running for 600 seconds.
It's a tiny DB of 20 MB gzipped.
* Running `show databases;` returns broken_db in the list.
* Running `use broken_db;` now hangs too.
* Running `show processlist;` returns the following:
| Id | User | Host | db | Command | Time | State | Info |
|----|-----------------|-----------------|-------------|---------|------|----------------------------------|-----------------------------|
| 5 | event_scheduler | localhost | | Daemon | 8310 | Waiting on empty queue | |
| 19 | rdsadmin | localhost | | Sleep | 0 | | |
| 21 | rdsadmin | localhost | | Sleep | 1 | | |
| 22 | rdsadmin | localhost | | Sleep | 1 | | |
| 25 | rdsadmin | localhost | | Sleep | 252 | | |
| 36 | root_user | 10.0.0.48:36768 | broken_db | Sleep | 2404 | | |
| 38 | root_user | 10.0.0.48:36788 | mysql | Query | 2736 | Waiting for schema metadata lock | DROP DATABASE `broken_db` |
| 47 | root_user | 10.0.0.48:36826 | mysql | Query | 2346 | Waiting for schema metadata lock | drop DATABASE broken_db |
| 50 | root_user | 10.0.0.48:36854 | | Query | 1990 | Waiting for schema metadata lock | USE `broken_db` |
| 51 | root_user | 10.0.0.48:36874 | mysql | Query | 0 | init | show processlist |
| 52 | root_user | 10.0.0.48:36894 | mysql | Query | 1042 | Waiting for schema metadata lock | use broken_db |
| 58 | root_user | 10.0.0.48:36922 | | Query | 178 | Waiting for schema metadata lock | use broken_db |
| 59 | rdsadmin | localhost | | Sleep | 7 | | |
Where do I go from here?
Hi all. I have a Lambda question. I am trying to use the SerpAPI for some scholar data with a Lambda function. My function mostly works for the first pull, but I am getting a connection refused error for the follow-up loop in my test. Should I split the function up into multiple functions (or put an API Gateway in front)?
It's the commented-out line with `const additionalCitations` and `getCitationsFromPaginationUrls` that is getting the connection refused. Why is it refusing the connection on the second attempt? (I can manually go there in my browser, and there are no service blocks either.)
(Note: I ultimately want to add additionalCitations to citations and return that combined list. What you see here doesn't take that into account yet, as I am trying to debug.)
```
import * as https from 'https'; // Importing the 'https' package

export const handler = async (event) => {
  const serpapiKey = process.env.serpapikey; // Get the SerpAPI key from environment variables
  const url = event.article_cited_by_serapi_link + '&api_key=' + serpapiKey; // Build the SerpAPI request URL from the article_cited_by_serapi_link and API key
  const response = await getPage(url); // Call the getPage function with the URL and wait for a response
  const { paginationUrls, citations } = await getPaginationUrls(response, url); // Get the pagination URLs and citations from the first page
  //const additionalCitations = await getCitationsFromPaginationUrls(paginationUrls);
  //return paginationUrls; // Return the pagination URLs array
  return { citations, paginationUrls };
};

const getPage = async (url) => {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => { // Make an HTTPS GET request to the specified URL using the 'https' package
      let data = '';
      res.on('data', (chunk) => {
        data += chunk; // As data comes in, append it to the data variable
      });
      res.on('end', () => {
        resolve(JSON.parse(data)); // When the response ends, parse the data and resolve the promise with the result
      });
    }).on('error', (err) => {
      reject(err); // If there is an error, reject the promise with the error
    });
  });
};

// Get the list of pagination URLs to later loop through
const getPaginationUrls = async (response, url) => {
  let paginationUrls = []; // Create an empty array to hold the pagination URLs
  let citations = [];
  if (response.serpapi_pagination) { // If the response includes pagination information
    const otherPages = response.serpapi_pagination.other_pages; // Get the 'other_pages' object from the pagination information
    const numPages = Math.ceil(response.search_information.total_results / 10); // Calculate the number of pages, given 10 results per page
    for (let i = 1; i <= numPages; i++) {
      paginationUrls.push(`${url}&start=${10 * (i - 1)}`); // Add a URL for each page with the appropriate start parameter
    }
  }
  // Extract citations from organic results
  if (response.organic_results) {
    citations = extractCitations(response);
  }
  return { paginationUrls, citations }; // Return both the paginationUrls and citations arrays
};
```
I am getting an error when running an AWS Glue job with a data quality check:
ModuleNotFoundError: No module named 'awsgluedq'
Can anyone help?
Thanks,
I have a question regarding automating the process of creating a dataset, revision, and jobs in AWS Data Exchange (via the API) using Lambda functions and a SAM template. I am trying to achieve this using Boto3, but I am facing an issue during the job creation step where I am unable to locate OpenAPI. Can someone guide me on how to resolve this issue?
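For context, the Boto3 calls I'm scripting look roughly like this (the dataset name, bucket, and key are placeholders, and the error handling is trimmed):

```python
import boto3

dx = boto3.client("dataexchange")

# 1. Create the dataset (name/description are placeholders).
data_set = dx.create_data_set(
    AssetType="S3_SNAPSHOT",
    Name="my-dataset",
    Description="Example dataset",
)

# 2. Create a revision under that dataset.
revision = dx.create_revision(DataSetId=data_set["Id"])

# 3. Create and start an import job that pulls an asset from S3 into the revision.
job = dx.create_job(
    Type="IMPORT_ASSETS_FROM_S3",
    Details={
        "ImportAssetsFromS3": {
            "DataSetId": data_set["Id"],
            "RevisionId": revision["Id"],
            "AssetSources": [{"Bucket": "my-bucket", "Key": "data/export.csv"}],
        }
    },
)
dx.start_job(JobId=job["Id"])
```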
I would appreciate any help or suggestions on how to automate this process successfully.
Thank you.
We would like to apply our model validation to multiple content types in a proxy integration with Lambda functions, without specifying a mapping template.
We think it is achievable either by accepting multiple content types in the request body in "Method Request", or by rejecting any content type that does not match the specified content types in "Method Request".
We have also tried setting the "passthrough behaviour", but as the integration is a Lambda proxy integration, it does not validate the request body.
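To illustrate the first option (accepting multiple content types on the "Method Request"), this is roughly the configuration we are aiming for, expressed as a boto3 sketch (the API/resource IDs and the model name are placeholders):

```python
import boto3

apigw = boto3.client("apigateway")

REST_API_ID = "a1b2c3d4e5"   # placeholder REST API ID
RESOURCE_ID = "abc123"       # placeholder resource ID

# A validator that checks the request body against the models attached to the method.
validator = apigw.create_request_validator(
    restApiId=REST_API_ID,
    name="body-validator",
    validateRequestBody=True,
    validateRequestParameters=False,
)

# Attach the same model to both content types on the Method Request, so the body is
# validated whichever of the two Content-Type headers the client sends.
# (put_method creates the method; for an existing method the same fields can be set
# with update_method instead.)
apigw.put_method(
    restApiId=REST_API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="POST",
    authorizationType="NONE",
    requestValidatorId=validator["id"],
    requestModels={
        "application/json": "MyModel",
        "application/vnd.api+json": "MyModel",
    },
)
```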