Serverless

Serverless describes the services, practices, and strategies that enable you to build more agile applications so you can innovate and respond to change faster. With serverless computing, infrastructure management tasks like capacity provisioning and patching are handled by AWS, so you can focus only on writing code that serves your customers.

Recent questions


boto3 sqs receive_message MaxNumberOfMessages

Hi, we have a standard SQS queue (not FIFO) and are pulling messages from it with the Python boto3 `receive_message` call documented here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html#SQS.Client.receive_message

```
message = sqs.receive_message(
    QueueUrl=queue_url,
    AttributeNames=['SentTimestamp'],
    MessageAttributeNames=['All'],
    VisibilityTimeout=120,
    MaxNumberOfMessages=MAX_RECEIVE_MESSAGES,
    WaitTimeSeconds=MAX_WAIT_TIME_SECS
)
```

We have a few questions:

* We understand that WaitTimeSeconds > 0 enables long polling. Even with around 10 messages in the queue, WaitTimeSeconds=2 and MaxNumberOfMessages=10, we still get only 1 message per `receive_message` call. Is that expected behavior?
* Based on the developer documentation, MaxNumberOfMessages does not guarantee receiving that many messages; it only gives the API a chance to scan the servers that hold messages. Are we understanding that correctly?
* We wrote a loop that calls the above API 10 times and do get 1 message each time, 10 messages in total. Would this programming style hit some sort of API limit (only a certain number of `receive_message` calls allowed in a period)? Does such a limit exist?
* Are we charged for each `receive_message` call? How much does each call cost?
* Is it true that long polling only returns more than 1 message when the queue holds more than 5,000 messages?
* The API lets us specify VisibilityTimeout, but the queue also has a visibility timeout specified at creation. Which one takes effect, the value in the API call or the one set at queue creation?
* In real life we will have a million messages in the queue at the very beginning. If we set WaitTimeSeconds=2, will the call always take 2 seconds to return, or possibly less, in case a single long poll returns 10 messages within 1 second?
* Our Lambda is woken up by a CloudWatch Events rule every minute (the shortest interval a rule allows). Once awake, it runs a loop that keeps calling `receive_message` with MaxNumberOfMessages=10 until either the loop has run N times or M messages have been received. Any issue with this design?

Sorry for so many questions. Thank you!
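For readers following along, the loop-until-N-or-M polling pattern described in this question can be sketched roughly as below. The client is passed in (a real one would be `boto3.client("sqs")`), and the queue URL and limits are placeholders, not values from the original post:

```python
def drain_queue(sqs, queue_url, max_loops=10, max_messages=100):
    """Call receive_message repeatedly until max_loops iterations have run
    or max_messages messages have been collected, whichever comes first.
    `sqs` is any object with a boto3-style receive_message method."""
    collected = []
    for _ in range(max_loops):
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,   # an upper bound, not a guarantee
            WaitTimeSeconds=2,        # long polling; may return sooner
            VisibilityTimeout=120,
        )
        # "Messages" is absent when the poll returns nothing
        collected.extend(resp.get("Messages", []))
        if len(collected) >= max_messages:
            break
    return collected

# Usage (hypothetical queue URL):
# import boto3
# sqs = boto3.client("sqs")
# msgs = drain_queue(sqs, "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue")
```

Each `receive_message` call here counts as a billable SQS request, and messages still need an explicit `delete_message` after successful processing.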
0 answers · 0 votes · 4 views · asked 38 minutes ago

Lambda component with IPC permissions in Greengrass V2

We have migrated a Lambda from AWS Greengrass V1 to AWS Greengrass V2. This Lambda needs to retrieve and decrypt a secret from Greengrass Core. How can we grant the component the IPC permissions the Lambda needs for that? Regular component recipes have the option `ComponentConfiguration/DefaultConfiguration/accessControl`. However, when we build the component out of a Lambda using the AWS CLI [create-component-version](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/greengrassv2/create-component-version.html) with the `--lambda-function` option, there is no way to assign authorization policies. One way we tried to make it work is with a *merge update* in our deployment (as documented [here](https://docs.aws.amazon.com/greengrass/v2/developerguide/ipc-secret-manager.html)):

```
"accessControl": {
  "aws.greengrass.SecretManager": {
    "<my-component>:secrets:1": {
      "policyDescription": "Credentials for server running on edge.",
      "operations": [
        "aws.greengrass#GetSecretValue"
      ],
      "resources": [
        "arn:aws:secretsmanager:us-east-1:<account-id>:secret:xxxxxxxxxx"
      ]
    }
  }
}
```

However, the resulting recipe of the component in the deployment (AWS Greengrass console) does not show `accessControl`, so we assume it has not been *merge updated*:

```
...
"ComponentConfiguration": {
  "DefaultConfiguration": {
    "lambdaExecutionParameters": {
      "EnvironmentVariables": {
        "LOG_LEVEL": "DEBUG"
      }
    },
    "containerParams": {
      "memorySize": 16384,
      "mountROSysfs": false,
      "volumes": {},
      "devices": {}
    },
    "containerMode": "NoContainer",
    "timeoutInSeconds": 30,
    "maxInstancesCount": 10,
    "inputPayloadEncodingType": "json",
    "maxQueueSize": 200,
    "pinned": false,
    "maxIdleTimeInSeconds": 30,
    "statusTimeoutInSeconds": 30,
    "pubsubTopics": {
      "0": {
        "topic": "dt/app/+/status/update",
        "type": "PUB_SUB"
      }
    }
  }
},
```

Any guidance here would be greatly appreciated! Thanks
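For context, the *merge update* the asker refers to is a JSON document attached to the component inside a deployment. A minimal boto3 sketch of building that document is below; the component name, secret ARN, and target ARN are placeholders, and whether a merge update can attach `accessControl` to a Lambda-backed component is exactly what the question is asking:

```python
import json

def build_merge_update(component_name, secret_arn):
    """Build the configurationUpdate 'merge' document granting the
    component permission to call aws.greengrass#GetSecretValue."""
    access_control = {
        "accessControl": {
            "aws.greengrass.SecretManager": {
                f"{component_name}:secrets:1": {
                    "policyDescription": "Credentials for server running on edge.",
                    "operations": ["aws.greengrass#GetSecretValue"],
                    "resources": [secret_arn],
                }
            }
        }
    }
    # create_deployment expects the merge document as a JSON string
    return {"merge": json.dumps(access_control)}

# The document would then be attached to the component in a deployment:
# import boto3
# gg = boto3.client("greengrassv2")
# gg.create_deployment(
#     targetArn="arn:aws:iot:<region>:<account-id>:thinggroup/<group>",  # placeholder
#     components={
#         "my.component": {   # placeholder component name
#             "componentVersion": "1.0.0",
#             "configurationUpdate": build_merge_update(
#                 "my.component",
#                 "arn:aws:secretsmanager:us-east-1:<account-id>:secret:xxxxxxxxxx"),
#         }
#     },
# )
```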
1 answer · 0 votes · 4 views · rodmaz · asked an hour ago

Updating an ECS service automatically using the CLI via Lambda

I have a multi-container application that runs as a service on ECS. The images are hosted on ECR; configuration files are pulled from an S3 bucket during container startup via a script. The application sits behind a Network Load Balancer with an EIP. The load balancer is in a public subnet and reachable; the app itself is in a private subnet. My ultimate goal is to automatically update the service when either a) a new image is checked in or b) a new configuration file is uploaded. I figured the best way to do this behind a Network Load Balancer (which supports rolling updates) is to use the AWS ECS CLI inside a Lambda function that triggers on the update. If I did not misread the docs, the CLI should trigger a rolling update. To test the CLI, I tried:

`aws ecs update-service --cluster mycluster --service myservice --force-new-deployment`

However, this was not successful. A new task was created, but it was stopped before the deployment finished, with the log message:

> Essential container in task exited

The service's deployment parameters are min. 100% and max. 200%. I also tried setting the lower bound of running tasks to 0%. That resulted in a successful exit of the old task, but the new tasks failed to deploy with the same error, which makes me think I have probably configured something incorrectly. Questions:

1. Is using a Lambda function a smart choice here, or is there a better way?
2. How can I troubleshoot the failing rolling update?

I appreciate any help! If you need more information, please let me know. Best regards, Sebastian
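The CLI command in the question maps directly onto the `UpdateService` API, so a Lambda doing the same thing is small. A minimal sketch, assuming the cluster and service names arrive via environment variables and the ECS client is passed in so the helper can be exercised without AWS access:

```python
import os

def force_new_deployment(ecs, cluster, service):
    """Trigger a rolling redeploy of an ECS service, equivalent to
    `aws ecs update-service --force-new-deployment`.
    `ecs` is a boto3-style ECS client."""
    resp = ecs.update_service(
        cluster=cluster,
        service=service,
        forceNewDeployment=True,
    )
    return resp["service"]["serviceName"]

# Hypothetical Lambda handler, wired to an EventBridge rule that fires on
# ECR image pushes or S3 uploads (CLUSTER/SERVICE are assumed env vars):
def handler(event, context):
    import boto3  # imported here so the helper stays testable without boto3
    ecs = boto3.client("ecs")
    return force_new_deployment(ecs, os.environ["CLUSTER"], os.environ["SERVICE"])
```

For the "Essential container in task exited" error itself, the stopped task's `stoppedReason` (via `describe_tasks`) and the container logs are usually the first places to look; the redeploy mechanism above does not change how tasks fail.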
1 answer · 0 votes · 10 views · asked a day ago

Concurrent executions of Lambda functions

I would like to understand how many calls of a Lambda function, through the AWS SDK, I can perform simultaneously. I am on the free tier right now, and it seems that I can't perform more than 10 concurrent executions. In my project, each client runs a function, so the number of concurrent executions my AWS account can perform would equal the number of clients my web app can serve simultaneously. I would be fine with 1,000 at the beginning. Is that possible? How many concurrent executions could my AWS account (once no longer on the free tier) manage by default, for a single function and overall across all functions? I will use the Europe (Milan) Region. With my account, right now, I can perform 10 concurrent executions of my function. I attached my concurrency data in the image: ![Enter image description here](/media/postImages/original/IMS9LLlqoWQcqO9cDBFWBcOw)

My code:

```
import AWS from 'aws-sdk';

AWS.config.update({
  accessKeyId: 'idKey',
  secretAccessKey: 'SecretKey',
  region: 'eu-west-3',
});

const lambda = new AWS.Lambda();

var Utente = { feature: "feature" };
const params = {
  FunctionName: 'Lambda',
  Payload: JSON.stringify(Utente)
};

lambda.updateFunctionConfiguration({
  FunctionName: 'Lambda',
  Environment: { Variables: {} }
}).promise();

function Invokation(params) {
  lambda.invoke(params, (error, data) => {
    if (error) {
      console.log(error);
    } else {
      console.log("OK");
    }
  });
}

for (let index = 0; index < 20; index++) {
  console.log(index);
  Invokation(params);
}
```
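One way to see where a concurrency ceiling comes from is to query the account-level limit and any per-function reserved concurrency. A minimal boto3 sketch (the Lambda client is passed in, and the function name is a placeholder):

```python
def concurrency_overview(lam, function_name):
    """Summarize account-level and per-function concurrency settings.
    `lam` is a boto3-style Lambda client."""
    acct = lam.get_account_settings()["AccountLimit"]
    info = {
        "account_concurrent_executions": acct["ConcurrentExecutions"],
        "unreserved": acct["UnreservedConcurrentExecutions"],
    }
    try:
        fn = lam.get_function_concurrency(FunctionName=function_name)
        # Key is absent when no reserved concurrency is configured
        info["reserved_for_function"] = fn.get("ReservedConcurrentExecutions")
    except Exception:
        info["reserved_for_function"] = None
    return info

# Usage (hypothetical function name):
# import boto3
# print(concurrency_overview(boto3.client("lambda"), "Lambda"))
```

If `account_concurrent_executions` itself is the bottleneck, it is an adjustable quota that can be raised through a Service Quotas request rather than anything in the function code.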
1 answer · 0 votes · 17 views · asked a day ago

WebSocket API Lambda and connect ECONNREFUSED

Hello, I've been setting up a WebSocket API with Lambda and I received the following error:

```
<message-id> INFO Error: connect ECONNREFUSED <ip>
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1300:16) {
  errno: -111,
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: <ip>,
  port: 80,
  '$metadata': { attempts: 1, totalRetryDelay: 0 }
}
```

The environment is Node 18, with Postman (ws) as the client. The Lambda's code:

```
import {
  ApiGatewayManagementApiClient,
  DeleteConnectionCommand,
  PostToConnectionCommand
} from "@aws-sdk/client-apigatewaymanagementapi";

const api = new ApiGatewayManagementApiClient({
  endpoint: 'wss://<id>.execute-api.<region>.amazonaws.com/production',
  region: 'eu-central-1'
});

export const handler = async (event) => {
  console.log(event);
  const { routeKey, connectionId } = event?.requestContext;
  let msg;
  console.log(`Request key is ${routeKey} and connectionID is ${connectionId}`);
  switch (routeKey) {
    case '$connect':
      msg = 'connected My friend';
      break;
    case '$disconnect':
      msg = 'disconnected my friend';
      break;
    case 'message':
      try {
        await replyToMessage(connectionId, 'RAMP PAM PAM');
      } catch (e) {
        console.log('EXCEPTION ALARM');
        console.log(e);
      }
      break;
    default:
      console.log('something bad happened', routeKey);
      break;
  }
  // TODO implement
  const response = { statusCode: 200 };
  return response;
};

const replyToMessage = (ConnectionId, message) => {
  const data = { message };
  const cmd = new PostToConnectionCommand({
    ConnectionId,
    Data: Buffer.from(JSON.stringify(data))
  });
  const result = api.send(cmd);
  console.log(result);
  return result;
};
```

The Lambda's policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "execute-api:*",
      "Resource": "arn:aws:execute-api:<region><id>:<api-id>/production/*"
    }
  ]
}
```

Thank you in advance, Arczik!
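One detail worth noting when reading this question: the `@connections` management API (post-to-connection) is called over plain HTTPS, not over the `wss://` scheme used by clients, and a non-`https://` endpoint can leave the SDK attempting an unencrypted connection on port 80, matching the error above. A minimal sketch of deriving the callback endpoint from the event's standard WebSocket `requestContext` fields (the handler wiring is hypothetical):

```python
def management_endpoint(request_context):
    """Build the HTTPS callback endpoint for the ApiGatewayManagementApi
    client from a WebSocket event's requestContext. Note the scheme is
    https://, not wss://."""
    domain = request_context["domainName"]
    stage = request_context["stage"]
    return f"https://{domain}/{stage}"

# Usage inside a handler (hypothetical wiring):
# import boto3
# api = boto3.client("apigatewaymanagementapi",
#                    endpoint_url=management_endpoint(event["requestContext"]))
# api.post_to_connection(ConnectionId=connection_id, Data=b'{"message": "hi"}')
```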
1 answer · 0 votes · 16 views · Arczik · asked 2 days ago

