Questions tagged with AWS Lambda

Silent failure in CloudFormation Lambda VpcConfig

I'm trying to add a VPC to a Lambda function via CloudFormation. We're using SAM, so it's an `AWS::Serverless::Function`. I added the `VpcConfig` section of the CF template as per the docs, but the VPC is never attached to the Lambda: no error, a successful deploy, but no VPC. I can then add the VPC (and later EFS) config via the console. Drift detection shows no discrepancy between actual and expected, either before or after I manually add the VPC. Deploying again later with `sam deploy` silently removes the VPC config. Below is a minimal CloudFormation template displaying the behavior. I've tried everything I can think of, including a `DependsOn` clause referencing the VPC and subnets. What am I missing?

```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Test template for VPC/Lambda config
Resources:
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: "10.0.0.0/24"
      EnableDnsHostnames: true
      EnableDnsSupport: true
  MyVPCSubnetMaster:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      AvailabilityZone: !Select [0, !GetAZs ""]
      CidrBlock: "10.0.0.0/28"
      MapPublicIpOnLaunch: true
  MyVPCSubnetBackup:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      AvailabilityZone: !Select [1, !GetAZs ""]
      CidrBlock: "10.0.0.16/28"
      MapPublicIpOnLaunch: true
  MyLambda:
    Type: AWS::Serverless::Function
    VpcConfig:
      SecurityGroupIds:
        - !GetAtt MyVPC.DefaultSecurityGroup
      SubnetIds:
        - !GetAtt MyVPCSubnetMaster.SubnetId
        - !GetAtt MyVPCSubnetBackup.SubnetId
    Properties:
      FunctionName: "MyLambda"
      Runtime: "python3.8"
      Handler: "index.handler"
      CodeUri: test/MyLambda
```
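One thing worth checking, hedged: in the template above, `VpcConfig` sits at the resource level, as a sibling of `Properties`. The SAM spec defines `VpcConfig` as a property of `AWS::Serverless::Function`, i.e. nested under `Properties`, and a resource-level key that the transform doesn't recognize can be dropped without an error, which would match the silent-ignore behavior described. A minimal sketch of the documented placement, reusing the resource names above (`!Ref` on a subnet returns its ID):

```
  MyLambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: "MyLambda"
      Runtime: "python3.8"
      Handler: "index.handler"
      CodeUri: test/MyLambda
      VpcConfig:
        SecurityGroupIds:
          - !GetAtt MyVPC.DefaultSecurityGroup
        SubnetIds:
          - !Ref MyVPCSubnetMaster
          - !Ref MyVPCSubnetBackup
```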
2 answers · 0 votes · 26 views
Eric · asked 18 days ago

Lambda (Node.js) can't connect to DocumentDB

Good morning all. I'm trying to connect from Node.js to my DocumentDB cluster with Mongoose. Without SSL I get `{"message":"Internal server error"}`; with SSL I get a PEM-file-not-found error: `{"message":"ENOENT: no such file or directory, open '/var/task/rds-combined-ca-bundle.pem'"}`. Here is my code with SSL:

```
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda'
import mongoose = require('mongoose')
import fs = require("fs")
import path = require("path")

export const lambdaHandler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
    let response: APIGatewayProxyResult;
    try {
        const filePath = path.join(__dirname, 'rds-combined-ca-bundle.pem')
        const databaseUri = 'mongodb://myuser:mypassword@mycluster.docdb.amazonaws.com:27017/?ssl=true&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false'
        const client = await mongoose.connect(databaseUri, {
            ssl: true,
            sslValidate: false,
            sslCA: filePath,
            useNewUrlParser: true,
            useUnifiedTopology: true
        })
        // Return result
        response = {
            statusCode: 200,
            body: JSON.stringify({ test: 'test mongoose', client: client })
        }
    } catch (err: unknown) {
        console.log('4', err)
        response = {
            statusCode: 500,
            body: JSON.stringify({ message: err.message })
        }
    }
    return response;
}
```

Here is my code without SSL:

```
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda'
import mongoose = require('mongoose')

export const lambdaHandler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
    let response: APIGatewayProxyResult;
    try {
        const client = await mongoose.connect(
            'mongodb://myuser:mypassword@mycluster.docdb.amazonaws.com:27017/sample-database?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false',
            { useNewUrlParser: true })
        // Return result
        response = {
            statusCode: 200,
            body: JSON.stringify({ test: 'test mongoose', client: client })
        }
    } catch (err: unknown) {
        console.log('4', err)
        response = {
            statusCode: 500,
            body: JSON.stringify({ message: err.message })
        }
    }
    return response;
}
```

Could you help me? Thank you sincerely.
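On the SSL variant, the ENOENT shows the `.pem` simply is not in the deployment package at `/var/task`, so this is a packaging question rather than a DocumentDB one. A hedged sketch (the helper name is invented) that resolves the bundle relative to a base directory and fails fast with a packaging hint instead of a bare ENOENT from the TLS layer:

```typescript
import * as fs from 'fs'
import * as path from 'path'

// Hypothetical helper: resolve the CA bundle relative to a base directory
// (in the Lambda this would be __dirname) and fail fast with a clear
// packaging hint if the file was not included in the deployment artifact.
function resolveCaBundle(baseDir: string, fileName = 'rds-combined-ca-bundle.pem'): string {
    const filePath = path.join(baseDir, fileName)
    if (!fs.existsSync(filePath)) {
        throw new Error(
            `${filePath} not found: include the .pem in the deployment package ` +
            `(e.g. copy it next to the compiled handler before zipping)`)
    }
    return filePath
}
```

If the build step compiles TypeScript to a `dist/` folder, the bundle has to be copied into that folder as well, since `__dirname` points at the compiled output inside the zip.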
1 answer · 0 votes · 40 views
asked 18 days ago

Lambda Node.js function Can't Access Aurora MySQL

I have a Node.js (v16) app in a Lambda function that runs fine on my local machine; however, when I run the function on AWS I get the following error:

```
{
    "errorType": "Error",
    "errorMessage": "ER_ACCESS_DENIED_ERROR: Access denied for user 'mailQueue'@'172.31.40.76' (using password: YES)",
    "trace": [
        "Error: ER_ACCESS_DENIED_ERROR: Access denied for user 'mailQueue'@'172.31.40.76' (using password: YES)",
        "    at Handshake.Sequence._packetToError (/var/task/node_modules/mysql/lib/protocol/sequences/Sequence.js:47:14)",
        "    at Handshake.ErrorPacket (/var/task/node_modules/mysql/lib/protocol/sequences/Handshake.js:123:18)",
        "    at Protocol._parsePacket (/var/task/node_modules/mysql/lib/protocol/Protocol.js:291:23)",
        "    at Parser._parsePacket (/var/task/node_modules/mysql/lib/protocol/Parser.js:433:10)",
        "    at Parser.write (/var/task/node_modules/mysql/lib/protocol/Parser.js:43:10)",
        "    at Protocol.write (/var/task/node_modules/mysql/lib/protocol/Protocol.js:38:16)",
        "    at Socket.<anonymous> (/var/task/node_modules/mysql/lib/Connection.js:88:28)",
        "    at Socket.<anonymous> (/var/task/node_modules/mysql/lib/Connection.js:526:10)",
        "    at Socket.emit (node:events:527:28)",
        "    at Socket.emit (node:domain:475:12)",
        "    --------------------",
        "    at Protocol._enqueue (/var/task/node_modules/mysql/lib/protocol/Protocol.js:144:48)",
        "    at Protocol.handshake (/var/task/node_modules/mysql/lib/protocol/Protocol.js:51:23)",
        "    at PoolConnection.connect (/var/task/node_modules/mysql/lib/Connection.js:116:18)",
        "    at Pool.getConnection (/var/task/node_modules/mysql/lib/Pool.js:48:16)",
        "    at Runtime.exports.handler (/var/task/index.js:15:10)",
        "    at Runtime.handleOnceNonStreaming (file:///var/runtime/index.mjs:1028:29)"
    ]
}
```

The Aurora security group allows connections from 172.31.0.0/16 and the Reachability Analyzer gives it the OK, so it appears to be MySQL having issues. The user appears to have the correct permissions from any host:

```
SHOW GRANTS FOR 'mailQueue'
> GRANT USAGE ON *.* TO `mailQueue`@`%`
> GRANT ALL PRIVILEGES ON `emailTransactions`.* TO `mailQueue`@`%` WITH GRANT OPTION
```

Any suggestions? I've confirmed all settings against the manuals and searched the net; I'm stumped.
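One observation, hedged: "(using password: YES)" means the connection reached Aurora and a password was sent but rejected, so networking and the grants above are less suspect than the exact credentials the function sends (env-var quoting and stray whitespace in Lambda configuration are common culprits). A sketch (the helper is hypothetical) that compares a non-secret fingerprint of the credentials used in Lambda against the ones that work locally:

```typescript
import * as crypto from 'crypto'

// Hypothetical helper: log a short, non-reversible fingerprint of the
// credentials so the value the Lambda actually sends can be compared
// with the value that works locally, without leaking the secret.
function credentialFingerprint(user: string, password: string): string {
    // Trailing whitespace or quote characters picked up from environment
    // variables are a common source of mismatch; surface them explicitly.
    if (password !== password.trim()) {
        console.warn('password has leading/trailing whitespace')
    }
    const digest = crypto.createHash('sha256').update(password).digest('hex')
    return `${user}:${digest.slice(0, 8)}`
}
```

Run it in both environments; if the fingerprints differ, the problem is the configured password, not MySQL.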
2 answers · 0 votes · 24 views
asked 18 days ago

Lambda - Redshift - status call returns "Query does not exist"

The same code works in one account but not another. In a Step Function, a Lambda calls a Redshift query and returns; the next step calls a get-status Lambda. The query Lambda returns, but the status call errors with the message "Query does not exist". The query in the previous step has successfully run and a statement ID was returned. It's only the step with the status call that fails. Other Lambdas in the same Step Function using the same pattern work correctly, and those other steps use the same get-status Lambda as the steps that fail. The common item is the call to the Redshift query in the previous step: whenever that is used, the status call fails. However, the query executes correctly, and the same Lambda works in the same Step Function in another account. Other points: the Redshift clusters were built with a CloudFormation template and should be identical; the entire Step Function (unchanged other than the account number) works correctly in another account; and the Lambdas have been manually copied and pasted from the account that works to ensure they are identical. Does anyone have any suggestions? All the obvious checks have been done. For completeness, the code for the Redshift query call is below; as stated above, it works in another account.
```
import json
import boto3
import base64
import urllib.parse
import botocore.session as bc
from botocore.exceptions import ClientError

ssm_client = boto3.client('ssm')

def lambda_handler(event, context):
    environment = event['environment']
    source_bucket = event['source_bucket']
    processed_bucket = event['processed_bucket']
    role = event['role']
    region = event['region']
    database_name = event['database_name']
    secret_name = event['secret_name']
    secret_arn = event['secret_arn']
    cluster_id = event['cluster_id']
    proc_name = event['proc_name']

    ssm_redshift_proc_name = ssm_client.get_parameter(Name=proc_name, WithDecryption=True)
    redshift_proc_name = ssm_redshift_proc_name['Parameter']['Value']
    query_str = "call " + redshift_proc_name + "();"

    bc_session = bc.get_session()
    session = boto3.Session(
        botocore_session=bc_session,
        region_name=region,
    )
    client_redshift = session.client("redshift-data")

    res = client_redshift.execute_statement(
        Database=database_name,
        SecretArn=secret_arn,
        Sql=query_str,
        ClusterIdentifier=cluster_id
    )

    return {
        'environment': environment,
        'source_bucket': source_bucket,
        'processed_bucket': processed_bucket,
        'role': role,
        'region': region,
        'database_name': database_name,
        'secret_name': secret_name,
        'secret_arn': secret_arn,
        'cluster_id': cluster_id,
        'statementid': res['Id']
    }
```
1 answer · 0 votes · 35 views
asked 19 days ago

Unable to use AWS Parameters and Secrets Lambda Extension

Hello, I tried all the steps required to use the AWS Parameters and Secrets Lambda Extension, such as adding the layer and sending the X-Aws-Parameters-Secrets-Token header, but when I make the request to fetch the secret through the extension I get:

```
feign.RetryableException: Connection refused (Connection refused) executing GET http://localhost:2773/secretsmanager/get?secretId=test
```

I really do not understand the problem; the token seems fine as well. I used a Feign client to make the GET request. Could you please check the implementation and let me know the problem?

```
// SecretsAndParametersExtensionAPI (API interface for the Feign client)
@Headers({"X-Aws-Parameters-Secrets-Token: {token}"})
public interface SecretsAndParametersExtensionAPI { // TODO move me
    @RequestLine("GET /secretsmanager/get")
    @Headers("X-Aws-Parameters-Secrets-Token: {token}")
    String getSecret(@Param("token") String token, @QueryMap Map<String, Object> queryMap);
}

// Test class to fetch secrets via the AWS Parameters and Secrets Lambda Extension
@Test
public void testSecretsExtension() {
    String sessionToken = EnvVarCommon.SESSION_TOKEN.get();
    System.out.println(sessionToken);
    try {
        SecretsAndParametersExtensionAPI secretsAndParametersExtensionAPI =
                Feign.builder().target(SecretsAndParametersExtensionAPI.class, "http://localhost:2773/");
        Map<String, Object> queryMap = new HashMap<>();
        queryMap.put("secretId", "test");
        String resultFromSecretExtension = secretsAndParametersExtensionAPI.getSecret(sessionToken, queryMap);
        System.out.println("Result From Secret Extension " + resultFromSecretExtension);
        log.debug("Request sent to ULH and ULH send request to LAVIN to download profile picture");
    } catch (IllegalStateException | JsonSyntaxException exception) {
        log.error("Failed to get response from ULH for downloading profile picture for the UserID '{}'", exception);
    }
}
```

```
# template.yml (CloudFormation snippet for adding the layer)
Mappings:
  RegionToLayerArnMap:
    us-east-1:
      "LayerArn": "arn:aws:lambda:us-east-1:177933569100:layer:AWS-Parameters-and-Secrets-Lambda-Extension:2"
    us-east-2:
      "LayerArn": "arn:aws:lambda:us-east-2:590474943231:layer:AWS-Parameters-and-Secrets-Lambda-Extension:2"
    eu-west-1:
      "LayerArn": "arn:aws:lambda:eu-west-1:015030872274:layer:AWS-Parameters-and-Secrets-Lambda-Extension:2"
    eu-west-2:
      "LayerArn": "arn:aws:lambda:eu-west-2:133256977650:layer:AWS-Parameters-and-Secrets-Lambda-Extension:2"
    eu-west-3:
      "LayerArn": "arn:aws:lambda:eu-west-3:780235371811:layer:AWS-Parameters-and-Secrets-Lambda-Extension:2"

AlperTestBotLambda:
  Type: AWS::Serverless::Function
  Condition: EnableAlperTestbot
  Properties:
    Tracing: Active
    Runtime: java11
    Environment:
      Variables:
        component: !Ref Component
        componentShortName: !Ref ComponentShortName
        version: !Ref Version
        zone: !Ref Zone
        tenant: !Ref Tenant
        testTenant: "test"
        alperTestQueueName: !Ref AlperTestQueueName
        aws.sessionToken: !Ref SessionToken
    Policies:
      - !Ref SecureParameterAccess
      - !Ref PurgeSqsPolicyTestQueues
    EventInvokeConfig:
      MaximumRetryAttempts: 0
    Layers:
      - !FindInMap [RegionToLayerArnMap, !Ref "AWS::Region", LayerArn]
```
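Two hedged observations: "Connection refused" on port 2773 means nothing was listening inside the sandbox, i.e. the layer was not attached to the function that actually ran (or the extension was configured to listen on a different port via `PARAMETERS_SECRETS_EXTENSION_HTTP_PORT`). Separately, the extension expects the `X-Aws-Parameters-Secrets-Token` header to carry the `AWS_SESSION_TOKEN` value that the Lambda runtime injects into the environment, not a token passed in from outside (as the `SessionToken` template parameter above appears to do). A sketch (the class name is invented) of building the request with a token supplied by the caller, which in a real Lambda would come from `System.getenv("AWS_SESSION_TOKEN")`:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class SecretsExtensionClient {
    // Build a GET request against the extension's local endpoint.
    // Inside a Lambda, pass System.getenv("AWS_SESSION_TOKEN") as the token:
    // the extension validates the header against the runtime-injected value.
    static HttpRequest buildRequest(String secretId, String sessionToken) {
        String url = "http://localhost:2773/secretsmanager/get?secretId="
                + URLEncoder.encode(secretId, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder(URI.create(url))
                .header("X-Aws-Parameters-Secrets-Token", sessionToken)
                .GET()
                .build();
    }
}
```

Verifying in the console that the deployed function really lists the layer (the `Condition` on the resource could silently skip it) would distinguish a deployment issue from a request issue.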
1 answer · 0 votes · 57 views
asked 19 days ago