Questions tagged with AWS Lambda

PostgreSQL Connection to RDS from external server - Connection errors but works from other sources

I have a Lambda Python function connecting via psycopg2 to a PostgreSQL DB instance running on RDS. The Lambda connects absolutely fine (Lambda and RDS are both in the eu-west-2 region). I can also connect to the PostgreSQL instance via pgAdmin 4 from a local development system, and other developers can access it from other locations/IPs via pgAdmin with no problem. I can also run a simple psycopg2 connect-and-query script from my local desktop here. Therefore I know RDS is accepting and responding to externally-sourced psycopg2 connections and queries.

HOWEVER, when I upload the same simple connect script to my web server (OVH, based in France, if that is of any relevance), running equivalent Python, psycopg2, etc., the connection fails with the standard psycopg2 error:

`Error raised: connection to server at "xxxxxxxx.yyyyyyyyyy.eu-west-2.rds.amazonaws.com" (ppp.qqq.rrr.ssss), port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections?`

I've tweaked the security group settings to permit anything from anywhere, etc., and still no joy. PostgreSQL on the RDS instance appears to be listening on *, which seems necessary to permit connections under certain circumstances. What is the subtlety in the differing sources that means such a connection from the OVH web server won't work? I can't find anything in the docs that seems to relate to this issue, and there's nothing obviously misconfigured on the server side. Any responses gratefully received.
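For reference, a minimal sketch of the kind of connect-and-query test described above; the endpoint is the placeholder from the error message, and the database name and credentials are invented:

```
# Minimal psycopg2 connectivity test; host, database, and credentials are placeholders.
import psycopg2

try:
    conn = psycopg2.connect(
        host="xxxxxxxx.yyyyyyyyyy.eu-west-2.rds.amazonaws.com",  # RDS endpoint (placeholder)
        port=5432,
        dbname="postgres",      # placeholder database name
        user="dbuser",          # placeholder user
        password="dbpassword",  # placeholder password
        connect_timeout=5,      # short timeout helps distinguish "refused" from a silent drop
    )
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone())
    conn.close()
except psycopg2.OperationalError as exc:
    # "Connection refused" means something actively rejected the TCP connection;
    # a timeout instead would suggest the packets are being dropped along the way.
    print(f"Error raised: {exc}")
```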
0 answers · 0 votes · 11 views · asked a day ago

How to know if my Lambda Authorizer for API Gateway is caching results?

I have a Lambda Authorizer for an API Gateway API with one resource and three methods: PUT, GET, and DELETE. Each method uses the same Lambda Authorizer, the TOKEN kind, to verify a JWT from Cognito. The Lambda returns an IAM policy that allows the PUT / GET / DELETE actions on the resource. The authorization and policy work fine -- I just don't know whether the result is being cached by API Gateway.

When I look at the API Gateway execution logs, every request seems to be calling the Lambda Authorizer. Every API Gateway execution log has a line like this:

```
Sending request to https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:MY_LAMBDA_FUNCTION:prod/invocations
```

**Does this invocation of the Lambda function mean that the Lambda Authorizer is not caching properly?**

After the "Sending request" log line, there's a line like "Authorizer result body before parsing" and then this line:

```
Using valid authorizer policy for principal: *****user
```

**Does this statement indicate that the Lambda Authorizer is using a cached policy?**

The strange thing is that when I check the Lambda logs, the execution times vary wildly, almost as if the Lambda itself is caching the result -- but I think the caching happens on the API Gateway side? What's going on here?

Sample of Lambda duration times: 529 ms, 10 ms, 217 ms, 213 ms, 8 ms, 2 ms
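For context, a minimal sketch of a TOKEN authorizer response in Python (the question does not state the runtime; the principal and resource ARN below are placeholders, not values from the question). With caching enabled, API Gateway caches the returned policy keyed by the incoming token for the configured TTL:

```
# Minimal TOKEN authorizer sketch; JWT verification is omitted and all identifiers
# are placeholders.
def handler(event, context):
    token = event.get("authorizationToken", "")
    # Real code would verify the Cognito JWT here (signature, expiry, audience).
    principal_id = "example-user"  # placeholder principal

    # The policy is cached per token (when caching is on), so it should cover every
    # method the caller may hit while the cache entry is still live.
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": "arn:aws:execute-api:us-east-1:123456789012:abcdef1234/prod/*/myresource",  # placeholder
            }],
        },
    }
```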
0 answers · 0 votes · 20 views · asked a day ago

AccessDeniedException when retrieving AWS Parameters from Lambda

I am attempting to access Systems Manager parameters from a Lambda developed using C#. I have added the required Lambda layer as per https://docs.aws.amazon.com/systems-manager/latest/userguide/ps-integration-lambda-extensions.html#ps-integration-lambda-extensions-sample-commands

The Lambda execution role has the following in the IAM definition (???????? replacing the actual account ID):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ssm:*"
            ],
            "Resource": "arn:aws:ssm:*:???????????:parameter/*"
        }
    ]
}
```

As per the AWS page referenced above, I made an HTTP GET request to http://localhost:2773/systemsmanager/parameters/get/?name=/ClinMod/SyncfusionKey&version=1

This is failing with the following response:

```
{
  "Version": "1.1",
  "Content": {
    "Headers": [
      { "Key": "Content-Type", "Value": [ "text/plain" ] },
      { "Key": "Content-Length", "Value": [ "31" ] }
    ]
  },
  "StatusCode": 401,
  "ReasonPhrase": "Unauthorized",
  "Headers": [
    { "Key": "X-Amzn-Errortype", "Value": [ "AccessDeniedException" ] },
    { "Key": "Date", "Value": [ "Thu, 01 Dec 2022 12:16:59 GMT" ] }
  ],
  "TrailingHeaders": [],
  "RequestMessage": {
    "Version": "1.1",
    "VersionPolicy": 0,
    "Content": null,
    "Method": { "Method": "GET" },
    "RequestUri": "http://localhost:2773/systemsmanager/parameters/get/?name=/ClinMod/SyncfusionKey&version=1",
    "Headers": [],
    "Properties": {},
    "Options": {}
  },
  "IsSuccessStatusCode": false
}
```

Any clues where I am going wrong?
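For comparison, a Python sketch of the same extension call (the question uses C#; the parameter name comes from the question, everything else is generic). The extension expects an `X-Aws-Parameters-Secrets-Token` header carrying the value of `AWS_SESSION_TOKEN`, and a missing header is one common cause of a 401:

```
# Call the Parameters and Secrets Lambda Extension on its local endpoint.
import json
import os
import urllib.parse
import urllib.request

def get_parameter(name: str, version: int = 1) -> dict:
    url = (
        "http://localhost:2773/systemsmanager/parameters/get/?"
        + urllib.parse.urlencode({"name": name, "version": version})
    )
    req = urllib.request.Request(url)
    # The extension authenticates requests via this header.
    req.add_header("X-Aws-Parameters-Secrets-Token", os.environ["AWS_SESSION_TOKEN"])
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example: get_parameter("/ClinMod/SyncfusionKey", 1)
```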
2 answers · 0 votes · 27 views · asked 2 days ago

Concurrent executions from a FIFO queue

I have a FIFO queue connected to a Lambda function. When a message is sent to the queue, the function takes it and processes it. My problem occurs when more than one message is sent to the queue. My understanding is that with a FIFO queue the messages are processed one by one if they share the same MessageGroupId. That is not what happens for me: when I send more than one message (5, for example) to the queue within a few seconds, the first message goes in flight while the others wait, but once the first message has been processed, all the other 4 messages go in flight together! How can this happen if they share the same MessageGroupId and they are in a FIFO queue? I expect that when the first completes, the second goes in flight while the other 3 wait, and so on.

I don't think it depends on the queue settings, because I have changed all the parameters many times (visibility timeout, content-based deduplication, and so on). In any case I have attached, below, screenshots of the parameter settings I have now. My account has a maximum of 10 concurrent executions, and that is exactly the maximum number of messages that are in flight together (when I send many messages, the first is in flight alone and then the rest go in flight ten at a time, which is very strange to me).

I would like only one execution at a time per group; the others must wait for the completion of the one that is processing. I want to manage concurrency by the different MessageGroupId I give to each message in the queue. I'm sending messages to the queue through the AWS SDK in Node.js. Can someone help me please?

![Enter image description here](/media/postImages/original/IMq9LZSI5NTICMDyBK0cDvFw)

![Enter image description here](/media/postImages/original/IMZB4MNTTqSjSCcze-hT402g)

![Enter image description here](/media/postImages/original/IMPBK_zjJaTsucQIS75zAFcA)
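For reference, a minimal boto3 sketch of the kind of send described above (the question uses the Node.js SDK; the queue URL, region, and group ID below are placeholders):

```
# Send several messages to a FIFO queue using a single message group.
import json
import uuid

import boto3

sqs = boto3.client("sqs", region_name="eu-west-2")  # region is a placeholder
QUEUE_URL = "https://sqs.eu-west-2.amazonaws.com/123456789012/my-queue.fifo"  # placeholder

for i in range(5):
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"index": i}),
        MessageGroupId="group-1",                  # same group ID => strict ordering within the group
        MessageDeduplicationId=str(uuid.uuid4()),  # required unless content-based deduplication is enabled
    )
```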
1 answer · 0 votes · 14 views · asked 2 days ago

Lambda deploy from Eclipse not working: AWS ResourceConflictException

Hello,

Please see below the steps that produce an AWS error in my local Eclipse:

- I installed the AWS Toolkit for Eclipse, on Eclipse JEE 2018-12.
- When I installed it 2 years ago, it worked fine and the Lambda project was deployed to the Lambda service directly from Eclipse without any configuration on my side; congratulations for this clean deploy setup!
- A year ago this deploy started failing to complete: it works only up to the transfer of the Eclipse archive to S3, without installation in the Lambda service. I have to install the S3 archive manually from the Management Console -> Lambda console.
- The error cause: **An update is in progress for resource**: arn:aws:lambda:us-east-2:<removed>:function:RekonAddUser (Service: AWSLambda; Status Code: 409; Error Code: **ResourceConflictException**; Request ID: 408040a5-f1fe-4ba0-b251-df0c2fc8fe9c). This was thrown while I had no other interaction with that resource!
- The message was: Failed to upload project to Lambda
- The error was:

```
com.amazonaws.eclipse.core.exceptions.AwsActionException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-2:<removed>:function:RekonAddUser (Service: AWSLambda; Status Code: 409; Error Code: ResourceConflictException; Request ID: 408040a5-f1fe-4ba0-b251-df0c2fc8fe9c)
    at com.amazonaws.eclipse.lambda.upload.wizard.UploadFunctionWizard.doFinish(UploadFunctionWizard.java:115)
    at com.amazonaws.eclipse.core.plugin.AbstractAwsJobWizard$1.run(AbstractAwsJobWizard.java:35)
    at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: com.amazonaws.services.lambda.model.ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-2:<removed>:function:RekonAddUser (Service: AWSLambda; Status Code: 409; Error Code: ResourceConflictException; Request ID: 408040a5-f1fe-4ba0-b251-df0c2fc8fe9c)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
    at com.amazonaws.services.lambda.AWSLambdaClient.doInvoke(AWSLambdaClient.java:2654)
    at com.amazonaws.services.lambda.AWSLambdaClient.invoke(AWSLambdaClient.java:2630)
    at com.amazonaws.services.lambda.AWSLambdaClient.executeUpdateFunctionCode(AWSLambdaClient.java:2514)
    at com.amazonaws.services.lambda.AWSLambdaClient.updateFunctionCode(AWSLambdaClient.java:2490)
    at com.amazonaws.eclipse.lambda.upload.wizard.util.UploadFunctionUtil.performFunctionUpload(UploadFunctionUtil.java:134)
    at com.amazonaws.eclipse.lambda.upload.wizard.UploadFunctionWizard.doFinish(UploadFunctionWizard.java:111)
    ... 2 more
```

- The session data:

```
eclipse.buildId=4.10.0.I20181206-0815
java.version=1.8.0_60
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=en_GB
Framework arguments: -product org.eclipse.epp.package.jee.product
Command-line arguments: -os win32 -ws win32 -arch x86_64 -product org.eclipse.epp.package.jee.product
```

Thank you,
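For anyone hitting the same 409, a small boto3 sketch (run outside Eclipse) that checks whether the function is still mid-update before retrying the upload; the function name and region are taken from the error text above, and this is only a workaround sketch, not the toolkit's own deploy mechanism:

```
# Check the Lambda function's update state and wait until any in-progress update settles.
import boto3

client = boto3.client("lambda", region_name="us-east-2")

cfg = client.get_function_configuration(FunctionName="RekonAddUser")
print(cfg.get("State"), cfg.get("LastUpdateStatus"))  # e.g. "Active" / "InProgress"

# Block until the function is no longer being updated, then retry the deploy.
client.get_waiter("function_updated").wait(FunctionName="RekonAddUser")
```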
0 answers · 0 votes · 16 views · asked 2 days ago

Lambda component with IPC permissions in Greengrass V2

We have migrated a Lambda from AWS Greengrass v1 to AWS Greengrass v2. This Lambda needs to retrieve and decrypt a secret from Greengrass core. How can we grant the Lambda component the IPC permissions it needs for that?

Regular component recipes have the option `ComponentConfiguration/DefaultConfiguration/accessControl`. However, when we build the component out of a Lambda using the AWS CLI [create-component-version](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/greengrassv2/create-component-version.html) with the `--lambda-function` option, there is no way to assign authorization policies.

One way we tried to make it work is by using a *merge update* in our deployment (as documented [here](https://docs.aws.amazon.com/greengrass/v2/developerguide/ipc-secret-manager.html)):

```
"accessControl": {
  "aws.greengrass.SecretManager": {
    "<my-component>:secrets:1": {
      "policyDescription": "Credentials for server running on edge.",
      "operations": [
        "aws.greengrass#GetSecretValue"
      ],
      "resources": [
        "arn:aws:secretsmanager:us-east-1:<account-id>:secret:xxxxxxxxxx"
      ]
    }
  }
}
```

However, the resulting recipe of the component in the deployment (as shown in the AWS Greengrass console) does not display the `accessControl` block, so we assume it has not been merge-updated:

```
...
"ComponentConfiguration": {
  "DefaultConfiguration": {
    "lambdaExecutionParameters": {
      "EnvironmentVariables": {
        "LOG_LEVEL": "DEBUG"
      }
    },
    "containerParams": {
      "memorySize": 16384,
      "mountROSysfs": false,
      "volumes": {},
      "devices": {}
    },
    "containerMode": "NoContainer",
    "timeoutInSeconds": 30,
    "maxInstancesCount": 10,
    "inputPayloadEncodingType": "json",
    "maxQueueSize": 200,
    "pinned": false,
    "maxIdleTimeInSeconds": 30,
    "statusTimeoutInSeconds": 30,
    "pubsubTopics": {
      "0": {
        "topic": "dt/app/+/status/update",
        "type": "PUB_SUB"
      }
    }
  }
},
```

Any guidance here would be greatly appreciated! Thanks
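For reference, a boto3 sketch of submitting the accessControl block above as a configuration merge update in a deployment; the component name, component version, target ARN, and secret ARN are placeholders carried over from the question:

```
# Apply an accessControl merge update to a Lambda-based component via a Greengrass v2 deployment.
import json

import boto3

gg = boto3.client("greengrassv2", region_name="us-east-1")  # region is a placeholder

access_control = {
    "accessControl": {
        "aws.greengrass.SecretManager": {
            "<my-component>:secrets:1": {
                "policyDescription": "Credentials for server running on edge.",
                "operations": ["aws.greengrass#GetSecretValue"],
                "resources": ["arn:aws:secretsmanager:us-east-1:<account-id>:secret:xxxxxxxxxx"],
            }
        }
    }
}

gg.create_deployment(
    targetArn="arn:aws:iot:us-east-1:<account-id>:thinggroup/<my-group>",  # placeholder target
    deploymentName="merge-access-control",
    components={
        "<my-component>": {
            "componentVersion": "1.0.0",  # placeholder version
            "configurationUpdate": {"merge": json.dumps(access_control)},
        }
    },
)
```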
1 answer · 0 votes · 9 views · asked 4 days ago by rodmaz