Questions tagged with Serverless
Hello, I'm making a simple API request to my Lambda endpoint from another server hosted on railway.app, and every time I make a request to my Lambda I get the following error.
```
<H1>403 ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
Bad request.
We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
<BR clear="all">
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
<BR clear="all">
<HR noshade size="1px">
<PRE>
Generated by cloudfront (CloudFront)
Request ID: _-bskwhg7aCgBL3YAH7_MazAyGiMiE1dfA5i7xa1wg_uRvNzMFVTiQ==
</PRE>
```
This is just a snippet of the error returned by an axios request from my server to my Lambda.
I haven't used CloudFront before, nor have I configured anything to use CloudFront. I used the Serverless package for TypeScript to create an S3 bucket for my API, and I don't know how to resolve this issue, because the same GET request that fails from my Railway server completes fine in Postman.
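Since the request works from Postman but not from the server, comparing the exact URL and headers each client sends is the quickest way to isolate the difference. A minimal sketch (Python here for brevity; the URL is a placeholder, and the missing-stage-path guess is an assumption on my part, not a confirmed cause):
```python
import requests

# Hypothetical URL -- replace with the real invoke URL, *including* the stage
# path (e.g. /prod/...). Hitting a CloudFront/API Gateway domain without the
# stage path, or with the wrong Host header, is a common source of this 403.
API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/resource"

resp = requests.get(API_URL, timeout=10)
print(resp.status_code)
print(resp.headers.get("x-amzn-RequestId"))  # set when API Gateway was actually reached
print(resp.text[:500])
```
If the raw URL, method, and headers printed on both sides are byte-for-byte identical, the difference is usually environment-level (region, DNS, or a proxy in front of the server).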
Any help would be greatly appreciated
Hi, I am working on a requirement where I have to restrict incoming requests to a Lambda function behind AWS API Gateway to less than 800 KB. This needs to be implemented preferably at the Gateway level, and it needs to be implemented in Terraform as Infrastructure-as-Code. I am thinking that AWS [WAF SizeConstraint](https://docs.aws.amazon.com/waf/latest/APIReference/API_SizeConstraintStatement.html) might be the answer, but it looks like it will only inspect up to 4096 bytes, and I am also not sure how to implement a filter in Terraform that rejects incoming requests with a body larger than 800 KB.
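Not a Gateway-level answer, but as a stopgap (purely a sketch under my own assumptions, not a WAF feature), the limit can be enforced inside the function itself, since the proxy integration delivers the full body there:
```python
import base64
import json

MAX_BODY_BYTES = 800 * 1024  # the 800 KB limit from the requirement

def handler(event, context):
    # API Gateway proxy integration delivers the raw body as a string
    # (base64-encoded for binary payloads), so measure the decoded size
    # instead of trusting the Content-Length header.
    body = event.get("body") or ""
    if event.get("isBase64Encoded"):
        size = len(base64.b64decode(body))
    else:
        size = len(body.encode("utf-8"))

    if size > MAX_BODY_BYTES:
        return {"statusCode": 413,
                "body": json.dumps({"message": "Payload too large"})}

    # ... normal processing ...
    return {"statusCode": 200,
            "body": json.dumps({"received_bytes": size})}
```
This still pays for the invocation, which is exactly why a Gateway- or WAF-level rejection would be preferable if one exists.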
Thanks in Advance
Hello,
I receive the following error in Lambda: **Unable to import module 'functions': No module named 'functions' Traceback**
I have researched the issue, and from what I understand it's a problem with bundling some of the Python dependencies and libraries. The issue I'm having is that I'm trying to find a fix via CDK. We deploy our resources via CDK, and I would like to add the fix to the CDK stack.
How do I implement a deployment package with my Lambda in CDK? Are there resources I can find for these steps?
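One common pattern (a sketch in Python CDK; the directory name "lambda_src" and the stack class are placeholders) is to let `Code.from_asset` bundle the dependencies inside a Lambda-compatible Docker image:
```python
from aws_cdk import BundlingOptions, Stack
from aws_cdk import aws_lambda as lambda_
from constructs import Construct

class MyStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # "lambda_src" is assumed to contain functions.py and requirements.txt.
        # pip runs inside a Lambda-compatible image so compiled packages match
        # the target runtime, and everything is copied to the root of the
        # deployment package -- which is what makes "import functions" resolve.
        lambda_.Function(
            self, "MyFunction",
            runtime=lambda_.Runtime.PYTHON_3_9,
            handler="functions.handler",
            code=lambda_.Code.from_asset(
                "lambda_src",
                bundling=BundlingOptions(
                    image=lambda_.Runtime.PYTHON_3_9.bundling_image,
                    command=[
                        "bash", "-c",
                        "pip install -r requirements.txt -t /asset-output && cp -au . /asset-output",
                    ],
                ),
            ),
        )
```
The experimental `aws-lambda-python-alpha` module's `PythonFunction` construct automates the same bundling if you'd rather not write the pip command yourself.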
Thanks
Hi,
I want to understand the best way to query RDS using a Lambda function to fetch data and write it to an S3 bucket in another AWS account.
Do we need to create a view?
How long can the Lambda run?
What should the batching strategy be?
Can it write in Parquet format? (See the sketch below.)
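A minimal sketch of the Parquet part (assuming the AWS SDK for pandas, `awswrangler`, is packaged as a layer, a Postgres-compatible RDS instance, and a cross-account bucket policy already in place; all names below are placeholders):
```python
import awswrangler as wr

def handler(event, context):
    # Credentials pulled from Secrets Manager by awswrangler.
    con = wr.postgresql.connect(secret_id="my-rds-secret")
    try:
        # Batch by key range so each invocation finishes well within
        # Lambda's 15-minute maximum execution time.
        df = wr.postgresql.read_sql_query(
            "SELECT * FROM orders WHERE id BETWEEN %s AND %s",
            con=con,
            params=[event["start_id"], event["end_id"]],
        )
    finally:
        con.close()

    # Writes Parquet directly to S3; the execution role needs s3:PutObject
    # on the destination bucket, and the other account's bucket policy must
    # grant access to this account.
    wr.s3.to_parquet(df=df, path="s3://other-account-bucket/exports/", dataset=True)
```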
Thanks in advance.
I have two databases that I am using: DynamoDB and Timestream. I am trying to query both databases through an AppSync GraphQL API.
For that, I am adding multiple DynamoDB tables as separate data sources, and for Timestream I am creating a VPC endpoint and adding an HTTP data source for it.
Now the question: I can create the schema, queries, and resolvers for the DynamoDB tables, but the AWS AppSync documentation says that, for now, only public endpoints work with AppSync. Ref: https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-http-resolvers.html
So is there any other way I can satisfy my requirement of connecting the Timestream HTTP endpoint with AppSync?
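One workaround worth considering (a suggestion, not something the docs above prescribe): put a Lambda function in front of Timestream and attach it to AppSync as a Lambda data source, since Lambda data sources don't need a public HTTP endpoint. A minimal resolver sketch (database/table names are placeholders):
```python
import boto3

# Timestream's query endpoint is reached through the AWS SDK, so the
# function only needs IAM permissions such as timestream:Select --
# no public HTTP endpoint is involved.
query_client = boto3.client("timestream-query")

def handler(event, context):
    # AppSync passes resolver arguments in event["arguments"].
    device_id = event["arguments"]["deviceId"]
    # Validate/escape inputs in real code rather than interpolating directly.
    result = query_client.query(
        QueryString=(
            'SELECT * FROM "mydb"."metrics" '
            f"WHERE device_id = '{device_id}' ORDER BY time DESC LIMIT 100"
        )
    )
    return result["Rows"]
```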
Hello Community,
As per the subject, I am getting this error from time to time (it is not reproducible) from an ECS Fargate task, and the container doesn't start.
These containers are started programmatically with the following attribute:
```
...
ecsTaskConfig.overrides.ephemeralStorage = {
  sizeInGiB: 21
};
...
```
I tried to find a solution, but so far no luck. I thought about implementing a background job to check whether the initiated task actually started (see the sketch below), but I'm looking for a better solution.
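For reference, a sketch of that background check (cluster and task identifiers are placeholders); at minimum, `stoppedReason` usually explains these intermittent startup failures:
```python
import boto3

ecs = boto3.client("ecs")

def check_task_started(cluster: str, task_arn: str) -> bool:
    """Poll a programmatically launched task until it is RUNNING or STOPPED."""
    resp = ecs.describe_tasks(cluster=cluster, tasks=[task_arn])
    for task in resp["tasks"]:
        if task["lastStatus"] == "RUNNING":
            return True
        if task["lastStatus"] == "STOPPED":
            # stoppedReason / stopCode typically name the actual failure
            # (capacity, image pull, ENI attachment, etc.).
            print(task.get("stoppedReason"), task.get("stopCode"))
    return False
```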
Any tip/guidance would be helpful.
Thanks,
Faiz
Say I have a Lambda handler that is able to process both SQS queue invocations and lambda-to-lambda invocations.
The Lambda has a max concurrency limit of 10.
Let's say there is a period of time when the Lambda's concurrency is maxed out due to a high volume of SQS queue messages being processed.
What happens when a lambda-to-lambda invocation arrives in the middle of the SQS messages being processed, while the concurrency limit is maxed out? Is the direct invocation (e.g. via the AWS CLI) handled after all the messages in the queue are processed, or does the Lambda try to process it at the next available instance?
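As I understand Lambda's throttling behavior (worth verifying against the docs): a synchronous (`RequestResponse`) invoke that arrives while all 10 executions are busy is not queued behind the SQS pollers; it is rejected with a throttle (HTTP 429), and retrying is the caller's job, whereas an async (`Event`) invoke is queued and retried by Lambda internally. A sketch of caller-side handling for the synchronous case (function name is a placeholder):
```python
import time
import boto3

lambda_client = boto3.client("lambda")

def invoke_with_retry(function_name: str, payload: bytes, attempts: int = 5):
    # Retry with exponential backoff when the concurrency limit throttles us.
    for attempt in range(attempts):
        try:
            return lambda_client.invoke(
                FunctionName=function_name,
                InvocationType="RequestResponse",
                Payload=payload,
            )
        except lambda_client.exceptions.TooManyRequestsException:
            time.sleep(2 ** attempt)
    raise RuntimeError("still throttled after retries")
```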
I am getting a "Missing Auth Token" error when attempting to POST to my inference model in AWS from the Postman desktop app. The model works fine from the internal-to-AWS test pages in SageMaker Studio. The model is deployed on a serverless endpoint.
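This error usually points at an unsigned request: calls to a SageMaker endpoint must be SigV4-signed (in Postman that means configuring AWS Signature auth). One way to sanity-check the endpoint outside Studio is through the SDK, which signs automatically (endpoint name, region, and payload below are placeholders):
```python
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

resp = runtime.invoke_endpoint(
    EndpointName="my-serverless-endpoint",
    ContentType="application/json",
    Body=b'{"inputs": "example payload"}',
)
print(resp["Body"].read())
```
If this works but Postman still fails, the problem is the request signing, not the serverless endpoint itself.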
A few months ago, I set up a Lambda function that is invoked through an API Gateway integration. A few days ago I noticed that the endpoint was returning HTTP 500 (Internal Server Error). When I went to check the Lambda function, the main Lambda console showed no functions at all, with a red error box containing no text, and next to the refresh button the message "Last Fetched 53 years ago". Through API Gateway I was able to access the Lambda function, but the code window is missing (not empty, missing), and when I go to the "Versions" tab, I once again get the red error box with no text. The same thing happens on the "Aliases" tab, and when I try to view the function URL. Oddly, when I ran the built-in test, it said it passed, but when I checked the browser devtools, it showed an HTTP 403 (Forbidden) response. When testing the function URL in Postman, I get the same HTTP 403 error.
When I tried to create a new function, I went through the wizard without issue, but pressing "Create Function" at the end gives me a spinner that is still spinning after 30 minutes.
Honestly, I haven't a clue how to even approach this issue, or even what the issue is. Any guidance or assistance would be very much appreciated.
I am trying to export a snapshot to an S3 bucket from our Aurora PostgreSQL Serverless v2 cluster. I keep getting the following error message: "The specified db snapshot engine mode isn't supported and can't be exported."
AFAICT from the user guide, Serverless v2 does support exporting snapshots to S3: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-export-snapshot.html
Our engine is Aurora PostgreSQL, version 10.21; the region is eu-west-1, and the cluster is encrypted.
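For reference, this is the call in question (a sketch of the standard RDS export API with placeholder ARNs, not a fix for the engine-mode error):
```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# All identifiers and ARNs below are placeholders.
rds.start_export_task(
    ExportTaskIdentifier="my-snapshot-export",
    SourceArn="arn:aws:rds:eu-west-1:123456789012:cluster-snapshot:my-snapshot",
    S3BucketName="my-export-bucket",
    IamRoleArn="arn:aws:iam::123456789012:role/rds-s3-export-role",
    KmsKeyId="arn:aws:kms:eu-west-1:123456789012:key/placeholder",
)
```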
I have a simple Lambda to which I would like to assign an HTTPS endpoint by *enabling a function URL*; however, it is a container-based Lambda, and I don't see *Enable function URL* as an option in the *Advanced settings*.
Do I have to use API Gateway to assign an endpoint to container-based Lambdas, or is there some other way to make it accessible?
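It may be worth trying the SDK or CLI path before falling back to API Gateway: function URL configs are created per function via the API, and whether the console simply hides the option for container images is an assumption on my part. A sketch (function name is a placeholder):
```python
import boto3

lambda_client = boto3.client("lambda")

# AWS_IAM is the safer default; AuthType="NONE" makes the URL public and
# additionally requires a resource-based permission with FunctionUrlAuthType.
resp = lambda_client.create_function_url_config(
    FunctionName="my-container-lambda",
    AuthType="AWS_IAM",
)
print(resp["FunctionUrl"])
```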
I'm looking for the correct API reference to **restore a provisioned cluster snapshot to a serverless namespace**.
I tested the following already with no success:
1) redshift.restore_from_cluster_snapshot() - only works for restoring snapshots from cluster to cluster mode
2) redshift-serverless.restore_from_snapshot() - only works for restoring snapshots from namespace to namespace mode
The only way I was able to do this was via the console.
Please advise.
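One thing that may be worth testing (an assumption based on the shape of the API, not something confirmed here): `restore_from_snapshot` in redshift-serverless also accepts a `snapshotArn`, which can be pointed at a provisioned cluster snapshot instead of a serverless one. A sketch with placeholder names:
```python
import boto3

serverless = boto3.client("redshift-serverless")

# Namespace, workgroup, and snapshot ARN below are placeholders; the ARN
# here deliberately references a *provisioned cluster* snapshot.
serverless.restore_from_snapshot(
    namespaceName="my-namespace",
    workgroupName="my-workgroup",
    snapshotArn="arn:aws:redshift:us-east-1:123456789012:snapshot:my-cluster/my-provisioned-snapshot",
)
```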
Regards