Questions tagged with Serverless
I have a CloudFormation template, shown below. I am creating a serverless Lambda function and trying to pass a Lambda layer ARN as a parameter, but when this template is deployed I get an error that it is not a valid ARN. I've deployed other CloudFormation templates passing similar ARNs, which work, but I'm not sure why this Lambda ARN is different or special?
```
AWSTemplateFormatVersion:
.....
Parameters:
  lambdaArn:
    Type: String
    Default: arn:aws:lambda:us-east-1:6666666666666:layer:myLayer:1
Resources:
  mylambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: testlayer
      ....
      Layers:
        - !Ref lambdaArn
```
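For reference, a Lambda layer ARN has the shape `arn:aws:lambda:<region>:<account-id>:layer:<name>:<version>`, where the account ID is always exactly 12 digits. A minimal local sanity check (the regex and helper name are illustrative, not from any AWS SDK):

```python
import re

# Layer ARN shape: arn:aws:lambda:<region>:<12-digit account>:layer:<name>:<version>
LAYER_ARN_RE = re.compile(
    r"^arn:aws:lambda:[a-z0-9-]+:\d{12}:layer:[A-Za-z0-9_-]+:\d+$"
)

def looks_like_layer_arn(arn: str) -> bool:
    """Cheap shape check to run before deploying the template."""
    return bool(LAYER_ARN_RE.fullmatch(arn))
```

As it happens, the sample ARN in the template above contains a 13-digit account number; that may just be redaction in the question, but as written it would fail this shape check.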
Does anyone have any ideas on how the errors below might be happening? It's a relatively new free-tier account with nothing on it so far; these are the first deployments I've tried on this account.
**Browser 403 errors:**

**Serverless deployment 403 errors:**
```
~/dev/my-serverless-backend-test ❯ sls deploy
Running "serverless" from node_modules
(node:1519) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023.
Please migrate your code to use AWS SDK for JavaScript (v3).
For more information, check the migration guide at https://a.co/7PzMCcy
(Use `node --trace-warnings ...` to show where the warning was created)
DOTENV: Could not find .env file.
Deploying my-serverless-backend-test to stage dev (eu-west-2)
Warning: Not authorized to perform: lambda:GetFunction for at least one of the lambda functions. Deployment will not be skipped even if service files did not change.
✖ Stack my-serverless-backend-test failed to deploy (71s)
Environment: darwin, node 18.13.0, framework 3.27.0 (local) 3.27.0v (global), plugin 6.2.3, SDK 4.3.2
Credentials: Local, "serverless-admin-test" profile
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
CREATE_FAILED: GetCurrentUserLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "Service returned error code AccessDeniedException (Service: Lambda, Status Code: 403, Request ID: 83a8a6d3-306d-43d2-9c68-d83195b25cc3)" (RequestToken: 6246c74b-5221-6aad-a27c-d28fa2e47540, HandlerErrorCode: GeneralServiceException)
```
Lambda ARM 1 GB: 0.0000133 USD/sec
Lambda x86 1 GB: 0.0000167 USD/sec
SageMaker Serverless Inference 1 GB: 0.0000200 USD/sec
I have no idea why SageMaker Serverless Inference is more expensive than Lambda; it doesn't support GPUs, the same as Lambda.
Does anybody know the reason?
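To put the per-second figures in perspective, simple arithmetic on the x86 and SageMaker rates quoted above gives the cost of one hour of continuous 1 GB compute:

```python
# USD per GB-second, taken from the rates quoted above
LAMBDA_X86 = 0.0000167
SAGEMAKER_SERVERLESS = 0.0000200

# Cost of one hour (3600 s) of continuous 1 GB compute
lambda_hour = LAMBDA_X86 * 3600            # ~0.060 USD
sagemaker_hour = SAGEMAKER_SERVERLESS * 3600  # 0.072 USD

# SageMaker Serverless comes out roughly 20% more expensive per GB-second
premium = sagemaker_hour / lambda_hour - 1
```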
Are there any constraints regarding the size of private subnets for Aurora Serverless v2 nodes? I've tested a scenario with /28 subnets, which seems to work without any problems, but I'd like to make sure I'm not missing something important.
[From the documentation:](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html)
> The CIDR blocks in each of your subnets must be large enough to accommodate spare IP addresses for Amazon Aurora to use during maintenance activities, including failover and compute scaling. For example, a range such as 10.0.0.0/24 and 10.0.1.0/24 is typically large enough.
A /24 for a single node per AZ seems rather wasteful, IMHO.
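For a rough feel of the numbers: AWS reserves the first four and the last IP address in every subnet, so the usable address count for a given CIDR can be sketched as:

```python
import ipaddress

def usable_aws_ips(cidr: str) -> int:
    # AWS reserves 5 addresses per subnet: network address, VPC router,
    # DNS, one reserved for future use, and the broadcast address.
    return ipaddress.ip_network(cidr).num_addresses - 5

usable_aws_ips("10.0.0.0/28")  # 11 usable addresses
usable_aws_ips("10.0.0.0/24")  # 251 usable addresses
```

So a /28 leaves 11 addresses for the node plus Aurora's spare capacity during failover and scaling, versus 251 in the /24 the documentation suggests.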
I need to configure Lambda function log subscriptions across multiple accounts. These Lambda functions are used for log ingestion into a third-party application. I need a solution that works across different accounts. Right now these Lambda function subscriptions are per account, and therefore not a centralized solution.
Is there a way to view the errors from ingestion or searching? I checked CloudWatch but did not see any new log groups. I would like to understand some of the error messages I see, primarily when ingesting data into indexes.
Hello,
I configured API Gateway and a Lambda function to update one of my DynamoDB tables.
Testing from the API Gateway console works, but when I try the same call with curl, it fails.
Checking the CloudWatch logs, I can only see the path parameter; the body is not passed correctly.
How can I fix this? As far as I know, a PUT request can carry a body to update a table attribute value, but that's not happening in my case.
I have also enabled the 'Use Lambda proxy integration' option in 'Integration Request'.
For better understanding, I've added my configuration below.
**Resource**
/card/{card_no}
GET
DELETE
PUT <-- this is the problem
**tested by API gateway test client**
INIT_START Runtime Version: python:3.9.v16 Runtime Version ARN: xxxx
START RequestId: xxxx Version: $LATEST
Event:
{
    "resource": "/card/{card_no}",
    "path": "/card/1",
    "httpMethod": "PUT",
    "headers": null,
    "multiValueHeaders": null,
    "queryStringParameters": null,
    "multiValueQueryStringParameters": null,
    "pathParameters": {
        "card_no": "1"
    },
    ...
    "body": "{\n \"card_no\": 1,\n \"nickname\": \"name\",\n \"overall_type\": \"type\"\n}",
    "isBase64Encoded": false
}
END RequestId: xxxx
REPORT RequestId: xxxx Duration: 1322.79 ms Billed Duration: 1323 ms Memory Size: 128 MB Max Memory Used: 66 MB Init Duration: 236.32 ms
**tested by curl**
curl -v -X PUT \
'https://xxxx.amazonaws.com/dev/card/1' \
-H 'content-type: application/json' \
-d '{"card_no": 1,"nickname": "nickname","overall_type": "type"}'
Trying xxx..
Connected to xxxx (xxxx) port 443 (#0)
ALPN: offers h2
ALPN: offers http/1.1
....
Using HTTP2, server supports multiplexing
Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
h2h3 [:method: PUT]
h2h3 [:path: /dev/card/1]
h2h3 [:scheme: https]
h2h3 [:authority: xxxx.amazonaws.com]
h2h3 [user-agent: curl/7.86.0]
h2h3 [accept: */*]
h2h3 [content-type: application/json]
h2h3 [content-length: 60]
Using Stream ID: 1 (easy handle 0x14180c600)
PUT /dev/card/1 HTTP/2
Host: xxxx.amazonaws.com
user-agent: curl/7.86.0
accept: */*
content-type: application/json
content-length: 60
Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
We are completely uploaded and fine
HTTP/2 200
date: xxx
content-type: application/json
content-length: 220
x-amzn-requestid: xxxx
x-amz-apigw-id: xxxx
x-amzn-trace-id: Root=xxxx
Connection #0 to host 3pjqiu4m22.execute-api.ap-northeast-2.amazonaws.com left intact
{"errorMessage": "'body'", "errorType": "KeyError", "requestId": "xxxx", "stackTrace": [" File \"/var/task/index.py\", line 12, in handler\n body_input = json.loads(event['body'])\n"]}%
**cloud watch log when I send curl**
INIT_START Runtime Version: python:3.9.v16 Runtime Version ARN: xxxx
START RequestId: xxxx Version: $LATEST
Event:
{
    "card_no": 1
} ==> strange: I added a print in my Python code to see the whole request, but only the path parameter is passed; I can't see the body...
[ERROR] KeyError: 'body'
Traceback (most recent call last):
File "/var/task/index.py", line 12, in handler
body_input = json.loads(event['body'])
END RequestId: xxxx
REPORT RequestId: xxxx Duration: 1024.82 ms Billed Duration: 1025 ms Memory Size: 128 MB Max Memory Used: 64 MB Init Duration: 226.62 ms
**lambda code**
```python
import json
import boto3
def handler(event, context):
    print("Event: %s" % json.dumps(event))
    client = boto3.resource('dynamodb')
    table = client.Table('CardInfo')
    body_input = json.loads(event['body'])
    response = table.update_item(
        xxx...xxx
        },
        ReturnValues="UPDATED_NEW"
    )
    return {
        'statusCode': response['ResponseMetadata']['HTTPStatusCode'],
        'body': json.dumps(response['Attributes'], default=str)
    }
```
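The curl event above looks like the output of a mapping template (non-proxy integration), while the console test shows the proxy event shape; one common cause of this mismatch is that the 'Use Lambda proxy integration' change on the PUT method was never redeployed to the stage. Independently of fixing the gateway side, a small helper (a sketch, not part of the original code) can make the handler tolerate both shapes:

```python
import json

def extract_body(event):
    # Proxy integration: the JSON payload arrives as a string under 'body'.
    # Non-proxy integration with a mapping template: the mapped fields can
    # appear directly at the top level of the event instead.
    body = event.get('body')
    if isinstance(body, str):
        return json.loads(body)
    if isinstance(body, dict):
        return body
    return event  # assume a mapping template already flattened the payload
```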
Hi there,
I'm trying to move my private static website to be hosted on an S3 bucket. I have a solution that fully works without using the S3 website hosting feature: an ALB forwards the traffic to my S3 bucket via private endpoints. All fine up to here.
The next challenge is integrating CSRF tokens into the workflow. Basically, I would like to generate a CSRF token when the first call comes in and then validate it on every subsequent request. Since a CSRF token is managed on the server side, in the current scenario with the S3 bucket I have no server that can take care of that.
So the idea would be to use a Lambda function that intercepts the initial call (on a specific path, for example), generates the CSRF token, and passes it back in the HTTP response.
Any idea how I could implement the Lambda function for such a scenario?
Thank you
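Since there is no server-side session store in this setup, a stateless signed-token approach may fit: the Lambda signs a random nonce with a secret, hands the pair back as the token, and any later request can be validated without storage. A minimal sketch (the secret and function names are placeholders, not from the original setup):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"replace-me"  # placeholder; in practice load from Secrets Manager or SSM

def issue_csrf_token() -> str:
    """Generate a random nonce and sign it; the pair is the CSRF token."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(SECRET_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return f"{nonce}.{sig}"

def validate_csrf_token(token: str) -> bool:
    """Re-compute the signature and compare in constant time."""
    try:
        nonce, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

The issuing Lambda could return the token in a `Set-Cookie` header on the initial path; validation could then run in a second Lambda target on the ALB for state-changing requests.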
Hello All,
I have created my Lambda function with the Serverless Framework and want to give it a static IP for another service to use.
I have configured everything according to the AWS blog post on giving Lambda a static IP.
But when I attach that VPC to my Lambda, the function stops behaving normally: it times out and responds with a 502 status.
When I detach the VPC, it works fine.
I have a problem with my socket server built with the WebSocket API. Everything works fine unless a device loses connectivity, the connection closes, and someone tries to send a message to that device.
The WebSocket API does not throw any error, and if I query the connection via the @connections API, it still reports the device as connected until the idle timeout kicks in. So there is no way on my side to detect this case and enqueue the message to be sent later.
I'm thinking about using the client's keep-alive ping to track the last ping and close the connection manually if the last ping is older than 30 seconds or so.
Does anyone have a solution for this case, or is this just the normal way it works?
Thank you.
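The keep-alive idea described above can be sketched as plain bookkeeping (the connection IDs and in-memory dict are illustrative; in practice the last-ping table would typically live in DynamoDB, and closing a stale connection would go through the @connections management API):

```python
import time

IDLE_LIMIT_SECONDS = 30  # the threshold mentioned above

def record_ping(last_ping, connection_id, now=None):
    """Update the last-seen timestamp when a client ping arrives."""
    last_ping[connection_id] = time.time() if now is None else now

def stale_connections(last_ping, now=None):
    """Return connections whose last ping is older than the idle limit."""
    now = time.time() if now is None else now
    return [cid for cid, ts in last_ping.items() if now - ts > IDLE_LIMIT_SECONDS]
```

A scheduled Lambda could run `stale_connections` periodically and delete each stale entry's connection, rather than waiting for the API's own idle timeout.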
I confirmed that ordering is guaranteed for messages with the same group ID in a FIFO queue. When messages with different group IDs come in, I wonder whether ordering is also guaranteed across those different group IDs.
**example**
send message order
1. MessageBody: 1, group id:1
2. MessageBody: 2, group id:2
3. MessageBody: 3, group id:3
4. MessageBody: 4, group id:4
Sequence when calling the receive message API
1. MessageBody: 1, group id:1
2. MessageBody: 2, group id:2
3. MessageBody: 3, group id:3
4. MessageBody: 4, group id:4
I keep getting the following error
`InvalidLambdaResponseException: An error occurred (InvalidLambdaResponseException) when calling the SignUp operation: Unrecognizable lambda output`
whether I use boto3's `sign_up` or the JS `signUp` function. My trigger Lambda is implemented in Python, and I auto-confirm and auto-verify the user with
```
event['response']['autoConfirmUser'] = True
event['response']['autoVerifyEmail'] = True
```
and return the event with `return event`.
The Lambda runs without any errors, but I cannot figure out why I keep getting this error.
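For comparison, a complete pre sign-up trigger in the shape Cognito expects looks roughly like this (a minimal sketch; Cognito requires the entire event object back with the mutated `response` key still attached):

```python
def lambda_handler(event, context):
    # Cognito passes the full event with 'request' and 'response' keys;
    # the trigger must mutate 'response' and return the *whole* event.
    # Returning only the response portion, or a custom dict, produces
    # an "Unrecognizable lambda output" error.
    event['response']['autoConfirmUser'] = True
    event['response']['autoVerifyEmail'] = True
    return event
```

If the code already looks like this, it may be worth checking that the sign-up request actually includes an email attribute: setting `autoVerifyEmail` for a user with no email can also make the response invalid.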