
Questions tagged with AWS Lambda


Annoying HLS Playback Problem On Windows But Not iOS

Hello All, I am getting up to speed with CloudFront and S3 for VOD. I have used the CloudFormation template, uploaded an MP4, and obtained the key for the m3u8 file. I create a distribution in CloudFront and embed it in my webpage. For the most part, it works great, but there is a significantly long buffering event during the first few seconds. This problem does not exist when I play the video on my iOS device. And strangely, it does not happen when I play it in Akamai's HLS tester on my Windows 11 PC using Chrome. The problem seems to occur only when I play it from my website, using any browser, on my Windows 11 PC.

Steps I take to provoke the issue: I open an Incognito tab in Chrome and navigate to my website; my player is set to autoplay, so it autoplays; the video starts out a bit fuzzy, then stops for a second, restarts with great resolution, and stays that way until the end of the video. If I play it again, there are no problems at all, but that is to be expected; I assume there is a local cache.

Steps I have tried to fix it / clues: I have tried different segment lengths by modifying the Lambda function created when the stack was formed by the template. The default was 5. At that setting, the fuzzy aspect lasted the longest but the buffering event seemed slightly shorter. At 1 and 2, the fuzziness is far shorter but the buffering event is notably longer.

One thought: could this be related to the video player I am using? I wanted to use AWS IVS but could not get it working the first go around, so I tried amazon-ivs-videojs. That worked immediately, except for the buffering issue. And the buffering issue seems to go away when I test the distribution via the Akamai HLS tester. As always, much appreciation for reading this question and any time spent pondering it.
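For context on the player question, a typical video.js embed for an HLS source looks roughly like the sketch below (the element ID and distribution URL are hypothetical placeholders). One relevant platform difference: iOS Safari plays HLS natively, while desktop Chrome plays it through the player's JavaScript engine, which usually starts on a low rendition and ramps up, so a fuzzy start on Windows but not iOS is consistent with ordinary adaptive-bitrate behavior rather than a CloudFront problem.

```html
<video id="player" class="video-js" controls autoplay muted playsinline></video>
<script>
  // Minimal video.js setup; video.js 7+ bundles the VHS engine that
  // handles .m3u8 playback in browsers without native HLS support.
  var player = videojs('player');
  player.src({
    src: 'https://dxxxxxxxxxxxx.cloudfront.net/hls/video/index.m3u8', // hypothetical URL
    type: 'application/x-mpegURL'
  });
</script>
```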
0
answers
0
votes
4
views
Redbone
asked 2 days ago

Lambda Execution Function Issue For RDS Reboot

Greetings, I created a simple function using the basic Python Lambda to start/stop RDS from here as a reference: [https://aws.amazon.com/es/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-lambda/]() But I changed it for reboot purposes, so my Python code is the following:

```
# Lambda for RDS reboot given a REGION, KEY and VALUE
import boto3
import os
import sys
import time
from datetime import datetime, timezone
from time import gmtime, strftime

# REGION: the rds region
# KEY - VALUE: the KEY and VALUE from RDS tag
def reboot_rds():
    region = os.environ["REGION"]
    key = os.environ["KEY"]
    value = os.environ["VALUE"]
    client = boto3.client("rds", region_name=region)
    response = client.describe_db_instances()
    v_readReplica = []
    for i in response["DBInstances"]:
        readReplica = i["ReadReplicaDBInstanceIdentifiers"]
        v_readReplica.extend(readReplica)

    for i in response["DBInstances"]:
        # Check if the RDS is Aurora
        if i["Engine"] not in ["aurora-mysql", "aurora-postgresql"]:
            # Check if RDS is a replica instance
            if (
                i["DBInstanceIdentifier"] not in v_readReplica
                and len(i["ReadReplicaDBInstanceIdentifiers"]) == 0
            ):
                arn = i["DBInstanceArn"]
                resp2 = client.list_tags_for_resource(ResourceName=arn)
                # Check tag
                if 0 == len(resp2["TagList"]):
                    print("Instance {0} tag value is not correct".format(i["DBInstanceIdentifier"]))
                else:
                    for tag in resp2["TagList"]:
                        # if tag values match
                        if tag["Key"] == key and tag["Value"] == value:
                            if i["DBInstanceStatus"] == "available":
                                client.reboot_db_instance(
                                    DBInstanceIdentifier=i["DBInstanceIdentifier"],
                                    ForceFailover=False,
                                )
                                print("Rebooting RDS {0}".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "rebooting":
                                print("Instance RDS {0} is already rebooting".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "creating":
                                print("Instance RDS {0} is on creation, try later".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "modifying":
                                print("Instance RDS {0} {0} is modifying, try later".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "stopped":
                                print("Cannot reboot RDS {0} it is already stopped".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "starting":
                                print("Instance RDS {0} is starting, try later".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "stopping":
                                print("Instance RDS {0} is stopping, try later.".format(i["DBInstanceIdentifier"]))
                        elif tag["Key"] != key and tag["Value"] != value:
                            print("Tag values {0} doesn't match".format(i["DBInstanceIdentifier"]))
                        elif len(tag["Key"]) == 0 or len(tag["Value"]) == 0:
                            print("Error {0}".format(i["DBInstanceIdentifier"]))
                        else:
                            print("Instance RDS {0} is on a different state, check the RDS monitor for more info".format(i["DBInstanceIdentifier"]))


def lambda_handler(event, context):
    reboot_rds()
```

My environment variables:

| Key | Value |
| --- | --- |
| KEY | tmptest |
| REGION | us-east-1e |
| VALUE | reboot |

And finally my event named 'Test':

`{ "key1": "tmptest", "key2": "us-east-1e", "key3": "reboot" }`

I checked the indentation of my code before executing it and it's fine, but when I run my test event I get the following output:

`{ "errorMessage": "2022-01-14T14:50:22.245Z b8d0dc59-714d-4543-8651-b5a2532dfe8e Task timed out after 1.00 seconds" }`

```
START RequestId: b8d0dc59-714d-4543-8651-b5a2532dfe8e Version: $LATEST
END RequestId: b8d0dc59-714d-4543-8651-b5a2532dfe8e
REPORT RequestId: b8d0dc59-714d-4543-8651-b5a2532dfe8e Duration: 1000.76 ms Billed Duration: 1000 ms Memory Size: 128 MB Max Memory Used: 65 MB Init Duration: 243.69 ms
2022-01-14T14:50:22.245Z b8d0dc59-714d-4543-8651-b5a2532dfe8e Task timed out after 1.00 seconds
```

Also, my test RDS has the correct tag values for the reboot action, but nothing happens; so far I cannot reboot my instance with my Lambda function. Any clue what's wrong with my code? Maybe there is some additional configuration issue, or something in my code is not correct. I'd appreciate it if someone could give me a hand with this.

**UPDATE 2022/01/15**

At the suggestion of **Brettski@AWS** I increased the timeout from 1 second to 10; then I got the following error message:

```
{
  "errorMessage": "Could not connect to the endpoint URL: \"https://rds.us-east-1e.amazonaws.com/\"",
  "errorType": "EndpointConnectionError",
  "requestId": "b2bb3840-42a2-4220-84b4-642d17d7a9e6",
  "stackTrace": [
    " File \"/var/task/lambda_function.py\", line 103, in lambda_handler\n reiniciar_rds()\n",
    " File \"/var/task/lambda_function.py\", line 16, in reiniciar_rds\n response = client.describe_db_instances()\n",
    " File \"/var/runtime/botocore/client.py\", line 386, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
    " File \"/var/runtime/botocore/client.py\", line 691, in _make_api_call\n http, parsed_response = self._make_request(\n",
    " File \"/var/runtime/botocore/client.py\", line 711, in _make_request\n return self._endpoint.make_request(operation_model, request_dict)\n",
    " File \"/var/runtime/botocore/endpoint.py\", line 102, in make_request\n return self._send_request(request_dict, operation_model)\n",
    " File \"/var/runtime/botocore/endpoint.py\", line 136, in _send_request\n while self._needs_retry(attempts, operation_model, request_dict,\n",
    " File \"/var/runtime/botocore/endpoint.py\", line 253, in _needs_retry\n responses = self._event_emitter.emit(\n",
    " File \"/var/runtime/botocore/hooks.py\", line 357, in emit\n return self._emitter.emit(aliased_event_name, **kwargs)\n",
    " File \"/var/runtime/botocore/hooks.py\", line 228, in emit\n return self._emit(event_name, kwargs)\n",
    " File \"/var/runtime/botocore/hooks.py\", line 211, in _emit\n response = handler(**kwargs)\n",
    " File \"/var/runtime/botocore/retryhandler.py\", line 183, in __call__\n if self._checker(attempts, response, caught_exception):\n",
    " File \"/var/runtime/botocore/retryhandler.py\", line 250, in __call__\n should_retry = self._should_retry(attempt_number, response,\n",
    " File \"/var/runtime/botocore/retryhandler.py\", line 277, in _should_retry\n return self._checker(attempt_number, response, caught_exception)\n",
    " File \"/var/runtime/botocore/retryhandler.py\", line 316, in __call__\n checker_response = checker(attempt_number, response,\n",
    " File \"/var/runtime/botocore/retryhandler.py\", line 222, in __call__\n return self._check_caught_exception(\n",
    " File \"/var/runtime/botocore/retryhandler.py\", line 359, in _check_caught_exception\n raise caught_exception\n",
    " File \"/var/runtime/botocore/endpoint.py\", line 200, in _do_get_response\n http_response = self._send(request)\n",
    " File \"/var/runtime/botocore/endpoint.py\", line 269, in _send\n return self.http_session.send(request)\n",
    " File \"/var/runtime/botocore/httpsession.py\", line 373, in send\n raise EndpointConnectionError(endpoint_url=request.url, error=e)\n"
  ]
}
```

It's strange because my VPC configuration is fine: it's the same VPC as my RDS, the same zone, and the same security group. What else do I have to consider in order to make my code work properly?
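A note on the update's error, offered as an observation rather than a confirmed fix: `us-east-1e` is an Availability Zone name, not a region, so boto3 builds the non-existent endpoint `rds.us-east-1e.amazonaws.com`. The `REGION` environment variable should hold a region code:

```
import boto3

# Regions look like "us-east-1"; Availability Zones add a letter suffix ("us-east-1e").
# boto3 derives the service endpoint from the region, e.g. rds.us-east-1.amazonaws.com.
client = boto3.client("rds", region_name="us-east-1")
```

Separately, a Lambda attached to a VPC has no internet access by default, so reaching the public RDS API endpoint also requires a NAT gateway or an interface VPC endpoint, which would explain the original timeout.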
2
answers
0
votes
5
views
TEENEESE
asked 2 days ago

aws lambda - ES6 module error: module is not defined in ES module scope

Based on these resources: https://aws.amazon.com/about-aws/whats-new/2022/01/aws-lambda-es-modules-top-level-await-node-js-14/ https://aws.amazon.com/blogs/compute/using-node-js-es-modules-and-top-level-await-in-aws-lambda/ it is clear that the AWS Node.js 14.x runtime now supports ES6 modules. However, when I try to run a Node.js app with an ES6 module, I get this error:

```
undefined ERROR Uncaught Exception
{
    "errorType": "ReferenceError",
    "errorMessage": "module is not defined in ES module scope\nThis file is being treated as an ES module because it has a '.js' file extension and '/var/task/package.json' contains \"type\": \"module\". To treat it as a CommonJS script, rename it to use the '.cjs' file extension.",
    "stack": [
        "ReferenceError: module is not defined in ES module scope",
        "This file is being treated as an ES module because it has a '.js' file extension and '/var/task/package.json' contains \"type\": \"module\". To treat it as a CommonJS script, rename it to use the '.cjs' file extension.",
        "    at file:///var/task/index.js:20:1",
        "    at ModuleJob.run (internal/modules/esm/module_job.js:183:25)",
        "    at process.runNextTicks [as _tickCallback] (internal/process/task_queues.js:60:5)",
        "    at /var/runtime/deasync.js:23:15",
        "    at _tryAwaitImport (/var/runtime/UserFunction.js:74:12)",
        "    at _tryRequire (/var/runtime/UserFunction.js:162:21)",
        "    at _loadUserApp (/var/runtime/UserFunction.js:197:12)",
        "    at Object.module.exports.load (/var/runtime/UserFunction.js:242:17)",
        "    at Object.<anonymous> (/var/runtime/index.js:43:30)",
        "    at Module._compile (internal/modules/cjs/loader.js:1085:14)"
    ]
}
```

I have already added `"type": "module"` in package.json.

package.json:

```
{
  "name": "autoprocess",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "type": "module",
  "scripts": {
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@aws-sdk/client-sqs": "^3.41.0",
    "aws-sdk": "^2.1030.0",
    "check-if-word": "^1.2.1",
    "express": "^4.17.1",
    "franc": "^6.0.0",
    "is-html": "^3.0.0",
    "nodemon": "^2.0.15"
  }
}
```

index.js:

```
'use strict';
import StringMessage from './StringMessage.js';

module.exports.handler = async (event) => {
    var data = JSON.parse(event.body);
    //other code goes here

    let response = {
        statusCode: 200,
        headers: {},
        body: ""
    };
    console.log("response: " + JSON.stringify(response))
    return response;
};
```

I have also tried replacing "module.exports.handler" with "exports.handler". That does not work either; the error message shows "exports is not defined in ES module scope". What am I doing wrong? Additional info: I am uploading the function code via a zip file.
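For reference, once `"type": "module"` is set, the handler has to be exported with ES-module syntax; `module.exports` and `exports` only exist in CommonJS scope. A minimal sketch of the same handler:

```
'use strict';
import StringMessage from './StringMessage.js';

// ES-module equivalent of module.exports.handler:
export const handler = async (event) => {
  const data = JSON.parse(event.body);
  // other code goes here
  return { statusCode: 200, headers: {}, body: '' };
};
```

The alternative, as the error message itself suggests, is renaming the file to `index.cjs` (and pointing the handler config at it) to keep CommonJS semantics.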
1
answers
0
votes
4
views
az-gi
asked 3 days ago

aws-sdk V3 timeout in lambda

Hello, I'm using a Node.js 14.x Lambda to control an ECS service. As I do not need the ECS task to run permanently, I created a service inside the cluster so I can play with the desired count to start or stop it at will. I also created two Lambdas: one for querying the current desired count and the current public IP, and another for updating said desired count (to 0 or 1, depending on whether I want to stop or start it). I have packed aws-sdk v3 in a Lambda layer so I don't have to package it with each Lambda. That seems to work fine, as I was previously getting the runtime error

> "Runtime.ImportModuleError: Error: Cannot find module '@aws-sdk/client-ecs'"

but I do not anymore. The code is also working fine from my workstation, as I'm able to execute it locally and I get the desired result (the query to the ECS API works fine). But all I get when testing from the Lambdas are timeouts... It usually executes in less than 3 seconds on my local workstation, but even with the Lambda timeout set to 3 minutes, this is what I get:

```
START RequestId: XXXX-XX-XXXX Version: $LATEST
2022-01-11T23:57:59.528Z XXXX-XX-XXXX INFO before ecs client send
END RequestId: XXXX-XX-XXXX
REPORT RequestId: XXXX-XX-XXXX Duration: 195100.70 ms Billed Duration: 195000 ms Memory Size: 128 MB Max Memory Used: 126 MB Init Duration: 1051.68 ms
2022-01-12T00:01:14.533Z XXXX-XX-XXXX Task timed out after 195.10 seconds
```

The message `before ecs client send` is a console.log I placed just before the ecs.send request for debug purposes. I think I've set up the policy correctly, as well as the Lambda VPC with the default outbound rule allowing all protocols on all ports to 0.0.0.0/0, so I have no idea where to look now. I have not found any way to debug aws-sdk v3 calls like you would on v2 by adding a logger to the config; maybe that could help in understanding the issue.
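Two notes here, offered as a sketch rather than a confirmed diagnosis. First, v3 clients do accept a `logger` in the constructor config, and the request handler's timeouts can be tightened so a dead network path fails in seconds instead of hanging until the Lambda timeout:

```
const { ECSClient } = require('@aws-sdk/client-ecs');
const { NodeHttpHandler } = require('@aws-sdk/node-http-handler');

const ecs = new ECSClient({
  region: process.env.AWS_REGION,
  logger: console, // v3 logs request/response metadata through any console-like object
  // Fail fast if the ECS endpoint is unreachable from inside the VPC:
  requestHandler: new NodeHttpHandler({ connectionTimeout: 3000, socketTimeout: 5000 }),
});
```

Second, an outbound security-group rule is not enough on its own: a Lambda attached to a VPC reaches public AWS endpoints only through a NAT gateway in a private subnet (or an interface VPC endpoint for ECS), and a silent hang exactly like this one is the usual symptom when that route is missing.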
1
answers
0
votes
5
views
Tomazed
asked 5 days ago

SAM deploy does not deploy Layer dependencies to S3

In my SAM template I've got 2 Lambda functions that share dependencies via a Layer. Here's my directory structure. As you can see, individual functions have no ``requirements.txt`` file, but it's shared within the ``deps/`` directory:

```
├── deps
│   └── requirements.txt
├── src
│   ├── function1
│   │   └── getArticlesById.py
│   └── function2
│       └── getArticlesById.py
└── template.yaml
```

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Sample SAM Template for testing API Gateway, Lambda, DynamoDB integration

Globals:
  Api:
    OpenApiVersion: 3.0.1
  Function:
    Timeout: 5

Parameters:
  Environment:
    Type: String
    Default: dev

Resources:
  DepsLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      Description: !Sub Dependencies for ${AWS::StackId}-${Environment}
      ContentUri: deps/
      CompatibleRuntimes:
        - python3.9
      RetentionPolicy: Retain
    Metadata:
      BuildMethod: python3.9

  GetRecommendationsByIdFunctionDynamo:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/function1
      Handler: getArticlesById.lambda_handler
      Runtime: python3.9
      MemorySize: 3008
      Tracing: Active
      Policies:
        - AWSLambdaVPCAccessExecutionRole
        - DynamoDBReadPolicy:
            TableName: !Ref MyDatabase
      Layers:
        - !Ref DepsLayer
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /getArticlesByIdDynamo
            Method: get
            RestApiId: !Ref API
      Environment:
        Variables:
          STAGE: !Sub ${Environment}

  GetRecommendationsByIdFunctionS3:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/function2
      Handler: getArticlesById.lambda_handler
      Runtime: python3.9
      MemorySize: 3008
      Tracing: Active
      Policies:
        - AWSLambdaVPCAccessExecutionRole
        - S3ReadPolicy:
            BucketName: !Ref MyBucket
      Layers:
        - !Ref DepsLayer
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /getArticlesByIdS3
            Method: get
            RestApiId: !Ref API
      Environment:
        Variables:
          STAGE: !Sub ${Environment}
```

``sam build`` fetches all dependencies and puts them into ``.aws-sam/build/DepsLayer/python``:

```
.aws-sam
├── build
│   ├── DepsLayer
│   │   └── python
│   ├── GetRecommendationsByIdFunctionDynamo
│   │   └── getArticlesById.py
│   ├── GetRecommendationsByIdFunctionS3
│   │   └── getArticlesById.py
│   └── template.yaml
└── build.toml
```

However, when I run `sam deploy`, the `DepsLayer` dependencies are not copied over to S3, and the Lambda functions fail at runtime since they can't find these dependencies.

```
$ aws --version
aws-cli/2.3.2 Python/3.9.7 Darwin/20.6.0 source/x86_64 prompt/off
$ sam --version
SAM CLI, version 1.36.0
```
0
answers
0
votes
2
views
maslick
asked 5 days ago

AWS Lambda Layers with Python Package

Hi, I have a Python script which runs on my local machine, and I want to move it to AWS Lambda for periodic execution. I have 3 import statements in the script for which I am adding layers, but I am facing some issues:

```
from googleapiclient.discovery import build
import pandas as pd
from datetime import date
```

For googleapiclient, I downloaded it into a folder and uploaded it to a layer, and Lambda is able to find this module. I wanted to use this along with the AWS Data Wrangler package but ran into layer size restriction issues. So I downloaded pandas into the same folder as the Google API and then uploaded the zip file to the layer. But now I am getting a numpy dependency error, even though numpy was downloaded as part of the pandas install. Two folders in my libraries folder are numpy and numpy-1.22.0.dist-info, which is the correct version per the error message below. I also tried downloading numpy separately into the same package, but that is also not working. Please let me know if I am missing something and whether this is the correct approach for installing Python packages for AWS Lambda.

Response:

```
{
  "errorMessage": "Unable to import module 'lambda_function': Unable to import required dependencies:\nnumpy: \n\nIMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!\n\nImporting the numpy C-extensions failed. This error can happen for\nmany reasons, often due to issues with your setup or how NumPy was\ninstalled.\n\nWe have compiled some common reasons and troubleshooting tips at:\n\n https://numpy.org/devdocs/user/troubleshooting-importerror.html\n\nPlease note and check the following:\n\n * The Python version is: Python3.7 from \"/var/lang/bin/python3.7\"\n * The NumPy version is: \"1.22.0\"\n\nand make sure that they are the versions you expect.\nPlease carefully study the documentation linked above for further help.\n\nOriginal error was: No module named 'numpy.core._multiarray_umath'\n",
  "errorType": "Runtime.ImportModuleError",
  "stackTrace": []
}
```

Function logs:

```
START RequestId: de7259a6-cd11-4ecc-9f93-1a624f1c0c6d Version: $LATEST
[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': Unable to import required dependencies:
numpy:

IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.

We have compiled some common reasons and troubleshooting tips at:

    https://numpy.org/devdocs/user/troubleshooting-importerror.html

Please note and check the following:

  * The Python version is: Python3.7 from "/var/lang/bin/python3.7"
  * The NumPy version is: "1.22.0"

and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.

Original error was: No module named 'numpy.core._multiarray_umath'
Traceback (most recent call last):
END RequestId: de7259a6-cd11-4ecc-9f93-1a624f1c0c6d
REPORT RequestId: de7259a6-cd11-4ecc-9f93-1a624f1c0c6d Duration: 1.68 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 55 MB Init Duration: 570.09 ms
```

Request ID: de7259a6-cd11-4ecc-9f93-1a624f1c0c6d

Regards, Dbeings
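The usual cause of this numpy C-extension failure is that the layer was built on a non-Linux machine (or for a different Python version), so the compiled wheels don't match the Lambda runtime. A sketch of building the layer contents from prebuilt Linux wheels for Python 3.7 (directory and file names are illustrative; a layer expects packages under `python/`):

```bash
# Force manylinux binary wheels instead of whatever the local OS would install.
pip install \
  --target python/ \
  --platform manylinux2014_x86_64 \
  --implementation cp \
  --python-version 3.7 \
  --only-binary=:all: \
  pandas google-api-python-client
zip -r layer.zip python/
```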
1
answers
0
votes
10
views
dbeing
asked 7 days ago

Long response time for cloudfront misses

Need some help debugging a long response time I'm seeing from my CloudFront CDN for images that have not been cached. The outline of our setup is that we have a CloudFront CDN that responds with cached images when available. If no cached image is available, a lambda pulls the requested image from S3, resizes it using sharp.js, then sends the resized image as the response to the request. CloudFront caches this image and then uses it for subsequent requests for the same image. The problem is that this handling usually takes 2-3s, as you can see in [this](https://i.stack.imgur.com/uCt4W.png) screenshot. I'm only partially aware of the breakdown of those 2-3s. That screenshot is of logs from CloudFront, so the problem must lie somewhere within our CloudFront setup. The lambda itself takes 800-1300ms from start to finish, and that includes the time it takes to pull the image from S3, resize it, convert it to a buffer, and respond to the request. We already use the [HTTP keepAlive](https://aws.amazon.com/blogs/networking-and-content-delivery/leveraging-external-data-in-lambdaedge/) optimization to reduce the latency of pulling the image from S3. However, the lambda's running time is often 50% or less of the total response time, so there must be another significant bottleneck elsewhere that I haven't discovered, and I'm not sure how to go about finding it. I've tried enabling AWS X-Ray to get more insight into the problem, but our lambda is on Lambda@Edge, which doesn't support X-Ray. What else can I investigate, and where else could I look?
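For readers following along, the keep-alive optimization referenced above amounts to reusing one HTTPS agent across warm invocations; a minimal sketch with the v2 SDK (bucket and key are placeholders, not from the original setup):

```
const https = require('https');
const AWS = require('aws-sdk');

// Created outside the handler so warm invocations reuse TCP/TLS connections to S3.
const agent = new https.Agent({ keepAlive: true });
const s3 = new AWS.S3({ httpOptions: { agent } });

exports.handler = async () => {
  const obj = await s3.getObject({ Bucket: 'my-image-bucket', Key: 'photo.jpg' }).promise();
  return obj.Body.length;
};
```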
2
answers
0
votes
7
views
AWS-User-8778696
asked 8 days ago

Using AWS Lambda with RDS Proxy: Socket Ended Error

**Some background:** I have a Node 12 Lambda running an Apollo GraphQL server. It connects to a MySQL RDS database via RDS Proxy using IAM authentication. **Problem:** Since switching from a direct DB connection to RDS Proxy via IAM auth, we get intermittent socket ended errors, seen below:

```
Error: This socket has been ended by the other party
    at TLSSocket.writeAfterFIN [as write] (net.js:456:14)
    at PoolConnection.write (/var/task/node_modules/mysql2/lib/connection.js:363:20)
    at PoolConnection.writePacket (/var/task/node_modules/mysql2/lib/connection.js:294:12)
    at Query.start (/var/task/node_modules/mysql2/lib/commands/query.js:60:16)
    at Query.execute (/var/task/node_modules/mysql2/lib/commands/command.js:45:22)
    at PoolConnection.handlePacket (/var/task/node_modules/mysql2/lib/connection.js:456:32)
    at PoolConnection.addCommand (/var/task/node_modules/mysql2/lib/connection.js:478:12)
    at PoolConnection.query (/var/task/node_modules/mysql2/lib/connection.js:546:17)
    at MysqlQueryRunner.<anonymous> (/var/task/node_modules/typeorm/driver/mysql/MysqlQueryRunner.js:184:56)
    at step (/var/task/node_modules/typeorm/node_modules/tslib/tslib.js:143:27) {
  code: 'EPIPE',
  fatal: true
}
```

I know that this is far from enough info to debug, but in lieu of throwing up all possible config surrounding these services, I'll wait for more pointed questions. Happy to post settings, connection info, and any other config as needed to debug. Thanks in advance for any help!
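In lieu of the full config, one pattern that often mitigates this class of error: create a single small pool outside the handler and retry once when a connection turns out to have been ended on the proxy side (mysql2 drops fatally-errored connections from the pool, so the retry gets a fresh one). A rough sketch, with connection details as placeholder environment variables:

```
const mysql = require('mysql2/promise');

// One pool per container; RDS Proxy does its own multiplexing behind this.
const pool = mysql.createPool({
  host: process.env.PROXY_ENDPOINT,
  user: process.env.DB_USER,
  database: process.env.DB_NAME,
  ssl: 'Amazon RDS', // IAM authentication requires TLS
  connectionLimit: 2,
});

async function query(sql, params) {
  try {
    return await pool.query(sql, params);
  } catch (err) {
    if (err.code === 'EPIPE' || err.code === 'ECONNRESET') {
      // The proxy closed an idle socket; retry once on a fresh pooled connection.
      return await pool.query(sql, params);
    }
    throw err;
  }
}
```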
2
answers
0
votes
6
views
fdizdar
asked 9 days ago

AWS Lambda email function does not always work

I have set up a custom flow with Cognito to send MFA codes via email using Lambda triggers. I have just noticed, though, that the function does not appear to always work, and the emails are not always sent when requesting to log in. My account is still in sandbox mode since I want to stay in the free tier, but I haven't gone over my daily limit, so I should still be able to send emails. I have set up the Lambda function with a promise, but this hasn't fixed the issue. I checked the Lambda and CloudWatch SES logs for the trigger and there aren't any failures according to them, so I am quite confused. Any ideas what is happening? Here is my Lambda trigger for sending emails:

```
const crypto = require("crypto");
var aws = require("aws-sdk");
var ses = new aws.SES({ region: "eu-west-2" });

exports.handler = async(event, context, callback) => {
    var verificationCode = 0;

    //Only called after SRP_A and PASSWORD_VERIFIER challenges.
    if (event.request.session.length == 2) {
        const n = crypto.randomInt(0, 100000);
        verificationCode = n.toString().padStart(6, "0");
        const minimumNumber = 0;
        const maximumNumber = 100000;
        verificationCode = Math.floor(Math.random() * maximumNumber) + minimumNumber;
        await sendMail(event.request.userAttributes.email, verificationCode);
    } else {
        //if the user makes a mistake, we pick code from the previous session instead of sending new code
        const previousChallenge = event.request.session.slice(-1)[0];
        verificationCode = previousChallenge.challengeMetadata;
    }

    //add to privateChallengeParameters, so verify auth lambda can read this. this is not sent to client.
    event.response.privateChallengeParameters = { "verificationCode": verificationCode };

    //add it to session, so its available during the next invocation.
    event.response.challengeMetadata = verificationCode;

    return event;
};

async function sendMail(emailAddress, secretLoginCode) {
    const params = {
        Destination: { ToAddresses: [emailAddress] },
        Message: {
            Body: {
                Html: {
                    Charset: 'UTF-8',
                    Data: `<html><body><p>This is your secret login code:</p> <h3>${secretLoginCode}</h3></body></html>`
                },
                Text: {
                    Charset: 'UTF-8',
                    Data: `Your secret login code: ${secretLoginCode}`
                }
            },
            Subject: {
                Charset: 'UTF-8',
                Data: 'Your secret login code'
            }
        },
        Source: 'my verified email'
    };

    await ses.sendEmail(params).promise();
}
```
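One note on the sandbox, since it may explain silent failures: the SES sandbox restricts recipients as well as volume, so sends to unverified addresses are rejected regardless of the daily quota. Logging the accepted message ID (or the rejection) makes this visible; a small sketch of the send wrapped that way:

```
try {
  const result = await ses.sendEmail(params).promise();
  console.log('SES accepted message:', result.MessageId);
} catch (err) {
  // In the SES sandbox, sends to unverified recipient addresses fail here.
  console.error('SES send failed:', err.code, err.message);
  throw err;
}
```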
2
answers
0
votes
10
views
SAGE
asked 9 days ago

SAM invoke won't take local env vars

I have a sample SAM application with basic endpoints. I just want to run it locally with:

`sam local invoke -e events/event-post-item.json putItemFunction --profile myprofile -n local.json`

`local.json` is as follows:

```
{
  "Parameters": {
    "TABLE_CUSTOMERS": "MyDynamoDBTable" // MyDynamoDBTable is the DynamoDB resource name from `template.yml`
  }
}
```

And the following is the code for `putItemFunction`:

```
const dynamodb = require('aws-sdk/clients/dynamodb');
const docClient = new dynamodb.DocumentClient();
const tableName = process.env.TABLE_CUSTOMERS;

exports.putItemHandler = async (event) => {
    const { body, httpMethod, path } = event;
    if (httpMethod !== 'POST') {
        throw new Error(`postMethod only accepts POST method, you tried: ${httpMethod} method.`);
    }
    console.log('received:', JSON.stringify(event));

    const { id, name } = JSON.parse(body);
    const params = {
        TableName: tableName,
        Item: { id, name },
    };
    await docClient.put(params).promise();

    const response = {
        statusCode: 200,
        body,
    };
    console.log(`response from: ${path} statusCode: ${response.statusCode} body: ${response.body}`);
    return response;
};
```

I run this and I get a "resource not found" error. I have made sure that the profile details are correct. I understand that I am invoking it locally, and my local machine only has the runtime Docker container, not the DynamoDB table, because that is created in the cloud. But shouldn't SAM figure out the generated table name as defined in the stack's template.yml file? When I hardcode the generated table name in the env file, it works, but that's not a pragmatic approach. I want SAM to just use the table name that was generated when I deployed the stack.
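For reference, `sam local invoke -n` does not resolve `!Ref`s to the physical names of deployed resources; the env file supplies literal values, either globally under `"Parameters"` or per function keyed by logical ID, and each variable must already be declared under the function's `Environment` in `template.yml`. A sketch of the per-function form (the table name shown is a hypothetical generated name; the real one comes from the deployed stack):

```
{
  "putItemFunction": {
    "TABLE_CUSTOMERS": "my-stack-MyDynamoDBTable-1A2B3C4D5E6F"
  }
}
```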
2
answers
0
votes
4
views
AWS-User-0643573
asked 9 days ago

AWS Lambda Applications and NodeJS

I noticed that NodeJS is the only runtime option when creating an application. https://us-east-2.console.aws.amazon.com/lambda/home?region=us-east-2#/create/application/configure Is there a reason that NodeJS is the only option? I've heard that NodeJS is able to cold start faster than Java for lambdas. I also noticed the example Java lambda project defaults to 512MB MemorySize and NodeJS defaults to 128MB. Is Amazon trying to push us to NodeJS when building lambda applications because it's a better language for the environment? Is it possible to create a Java lambda resource within the template.yml of an application? Do I need to build the classfiles and upload them manually? The `java-test` folder in my project has this structure ``` java-test/src/main/java/example/Handler.java java-test/src/main/resources java-test/build.gradle ``` I've tried the following Resource configuration, but the example.Handler class cannot be found. ``` javaTest: Type: AWS::Serverless::Function Properties: CodeUri: java-test/ Handler: example.Handler Runtime: java11 Description: Java function MemorySize: 512 Timeout: 10 # Function's execution role Policies: - AWSLambdaBasicExecutionRole - AWSLambda_ReadOnlyAccess - AWSXrayWriteOnlyAccess - AWSLambdaVPCAccessExecutionRole Tracing: Active ``` I copied parts of the blank-java lambda project below. https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/blank-java Here's the full build output ``` docker ps "C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd" build javaTest --template C:\Users\bensi\IdeaProjects\team-up\template.yml --build-dir C:\Users\bensi\IdeaProjects\team-up\.aws-sam\build --use-container Starting Build inside a container Building codeuri: C:\Users\bensi\IdeaProjects\team-up\java-test runtime: java11 metadata: {} architecture: x86_64 functions: ['javaTest'] Fetching public.ecr.aws/sam/build-java11:latest-x86_64 Docker container image...... Mounting C:\Users\bensi\IdeaProjects\team-up\java-test as /tmp/samcli/source:ro,delegated inside runtime container Build Succeeded Built Artifacts : .aws-sam\build Built Template : .aws-sam\build\template.yaml Commands you can use next ========================= [*] Invoke Function: sam local invoke [*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch [*] Deploy: sam deploy --guided Running JavaGradleWorkflow:GradleBuild Running JavaGradleWorkflow:CopyArtifacts "C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd" local invoke javaTest --template C:\Users\bensi\IdeaProjects\team-up\.aws-sam\build\template.yaml --event "C:\Users\bensi\AppData\Local\Temp\[Local] javaTest-event5.json" Invoking example.Handler (java11) Skip pulling image and use local one: public.ecr.aws/sam/emulation-java11:rapid-1.36.0-x86_64. Mounting C:\Users\bensi\IdeaProjects\team-up\.aws-sam\build\javaTest as /var/task:ro,delegated inside runtime container START RequestId: 3e9debb6-a640-4ba2-bd6e-5f2d818d303e Version: $LATEST {"errorMessage":"Class not found: example.Handler","errorType":"java.lang.ClassNotFoundException"}Class not found: example.Handler: java.lang.ClassNotFoundException java.lang.ClassNotFoundException: example.Handler. Current classpath: file:/var/task/:file:/var/task/lib/aws-lambda-java-core-1.2.1.jar:file:/var/task/lib/gson-2.8.6.jar END RequestId: 3e9debb6-a640-4ba2-bd6e-5f2d818d303e REPORT RequestId: 3e9debb6-a640-4ba2-bd6e-5f2d818d303e Init Duration: 0.07 ms Duration: 271.19 ms Billed Duration: 272 ms Memory Size: 512 MB Max Memory Used: 512 MB ```
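On the `ClassNotFoundException`: the log shows the runtime's classpath (`/var/task/` plus the jars in `lib/`), so `example/Handler.class` has to end up in the built artifact, which in turn means the source file must declare `package example;` and implement the runtime's handler interface. A minimal class matching `Handler: example.Handler` (assuming the aws-lambda-java-core dependency from the blank-java sample):

```java
package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Compiles to example/Handler.class, matching "Handler: example.Handler" in template.yml.
public class Handler implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object event, Context context) {
        return "ok";
    }
}
```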
2
answers
0
votes
8
views
AWS-User-1
asked 11 days ago

Javascript IoT client SDK AttachPolicy call inside AWS lambda blocks and causes the lambda to timeout without any error/exception/log whatsoever

Hello, **Issue in short: ** When I make [AttachPolicy](https://docs.aws.amazon.com/iot/latest/apireference/API_AttachPolicy.html) call using the [Javascript v3 AWS SDK IoT Client](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-iot/index.html) inside AWS lambda, the call blocks and causes the lambda to timeout without any error/exception/log whatsoever. I am having trouble identifying why the call does not succeed and blocks, hence my question here. --- **Details:** I have a lambda function from which I want to use the [Javascript v3 AWS SDK IoT Client](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-iot/index.html). The request that I want to make to IoT core is [AttachPolicy](https://docs.aws.amazon.com/iot/latest/apireference/API_AttachPolicy.html), in order to attach a policy to a Thing's certificate (certificate is identified by ARN). **Basically what I have in my code (excerpts) is this:** **1) Creation of an IoT client: ** ``` new IoTClient({ region: process.env.REGION }); ``` I have not provided any credentials, because I read that they are retrieved from the lambda's execution role: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-node-credentials-lambda.html ---- **2) Creation of AttachPolicyCommand** - https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-iot/classes/attachpolicycommand.html: ``` const attachPolicyCommand = new AttachPolicyCommand({ policyName: <NAME_OF_POLICY>, target: <CERTIFICATE_ARN> }); console.log(`Sending attach policy request to attach '${policyName}' to '${targetArn}'.`) ``` From the log above I double check and make absolutely sure that the policy name and the certificate ARN are the correct ones. **3) AttachPolicy request - this is where the problem arises** ``` const attachPolicyResponse = await this.iotClient.send(attachPolicyCommand); ... // unreachable code after the call above ... // after the configured lambda's timeout, the function execution ceases without executing any other line of code ``` I have also tried to wrap the above call inside try/catch/finally block, but it didn't make any difference. --- One thing that I noticed and thought was causing the issue was that the lambda's execution role didn't have the permission to access the [AttachPolicy](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions) action. But once I fixed that and added the needed policy, the issue persists. This is the json of the policy which I added to the lambda execution role in order to allow AttachPolicy: ``` { "Version": "2012-10-17", "Statement": [ { "Action": "iot:AttachPolicy", "Resource": "*", "Effect": "Allow" } ] } ``` Another thing is that the lambda's timeout configuration seems to make no difference - I tried making it both 5 seconds and 2 minutes - and the blocking is, respectively, for 5 seconds and 2 minutes. I'll be grateful if you provide some insights on what the problem might be :)).
1
answers
1
votes
5
views
Stiliyan Goranov
asked 11 days ago

Unable to retrieve stored AWS Secrets Manager API keys and parameters

Hi Everyone, I am new to Lambda functions. I have stored my API key and other parameters for a REST endpoint as key-value pairs in a secret in AWS Secrets Manager. When I need to retrieve the key and the other parameters to construct the endpoint, I am unable to even print them. I have added my code, written in Python, below. The response is coming back null, with no error and no information in the logs.

```
import boto3
import base64
from botocore.exceptions import ClientError
import json


def get_secret():
    secret_name = "aXXXXXXXXXXXXXXXXXXXXXXX2cFVdm"
    region_name = "apXXXXXXX1"

    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )

    # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
    # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
    # We rethrow the exception by default.
    get_secret_value_response = ""
    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
    except ClientError as e:
        if e.response['Error']['Code'] == 'DecryptionFailureException':
            # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
            # Deal with the exception here, and/or rethrow at your discretion.
            raise e
        elif e.response['Error']['Code'] == 'InternalServiceErrorException':
            # An error occurred on the server side.
            # Deal with the exception here, and/or rethrow at your discretion.
            raise e
        elif e.response['Error']['Code'] == 'InvalidParameterException':
            # You provided an invalid value for a parameter.
            # Deal with the exception here, and/or rethrow at your discretion.
            raise e
        elif e.response['Error']['Code'] == 'InvalidRequestException':
            # You provided a parameter value that is not valid for the current state of the resource.
            # Deal with the exception here, and/or rethrow at your discretion.
            raise e
        elif e.response['Error']['Code'] == 'ResourceNotFoundException':
            # We can't find the resource that you asked for.
            # Deal with the exception here, and/or rethrow at your discretion.
            raise e
    else:
        # Decrypts secret using the associated KMS key.
        # Depending on whether the secret is a string or binary, one of these fields will be populated.
        if 'SecretString' in get_secret_value_response:
            secret = get_secret_value_response['SecretString']
        else:
            decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
    # Your code goes here.
    return get_secret_value_response


def lambda_handler(event, context):
    ms = get_secret()
    print(ms)
```
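Once the call succeeds, the key-value pairs live in the `SecretString` field as a JSON document, so constructing the endpoint means parsing that string; a short sketch (the key names here are illustrative, not from the original secret):

```
secret = json.loads(get_secret_value_response['SecretString'])
api_key = secret['api_key']      # illustrative key name
base_url = secret['base_url']    # illustrative key name
endpoint = f"{base_url}?apiKey={api_key}"
```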
3
answers
0
votes
10
views
Arun Kumar
asked 12 days ago

Amplify build error - Cannot find module '/codebuild/output/....

Hi all. My Vue app runs fine locally and builds fine locally; however, I'm trying to build my app on Amplify using a link to my GitHub repo. The link and the clone work fine, but I'm getting an error during the build. Amplify push also works fine without problems. I've only ever used npm for all modules, along with the vue-cli and Amplify CLI. I have no idea where to start with this. The main error seems to be:

`Cannot find module '/codebuild/output/src323788196/src/.yarn/releases/yarn-1.23.0-20210726.1745.cjs'`

I've tried `yarn install` but that does not help. I'm not sure what to do next, because I've never used yarn at all in this project. My build config is standard:

```
version: 1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - npm install
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: dist
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```

The error I'm getting is:

```
[WARNING]: ✖ An error occurred when pushing the resources to the cloud
2022-01-04T06:47:49.986Z [WARNING]: ✖ There was an error initializing your environment.
2022-01-04T06:47:49.993Z [INFO]: Error: Packaging lambda function failed with the error
Command failed with exit code 1: yarn --production
internal/modules/cjs/loader.js:818
    throw err;
    ^
Error: Cannot find module '/codebuild/output/src323788196/src/.yarn/releases/yarn-1.23.0-20210726.1745.cjs'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:815:15)
    at Function.Module._load (internal/modules/cjs/loader.js:667:27)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)
    at internal/main/run_main_module.js:17:47 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}
    at runPackageManager (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-nodejs-function-runtime-provider/src/utils/legacyBuild.ts:66:13)
    at installDependencies (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-nodejs-function-runtime-provider/src/utils/legacyBuild.ts:40:3)
    at Object.buildResource [as build] (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-nodejs-function-runtime-provider/src/utils/legacyBuild.ts:13:5)
    at buildFunction (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-category-function/src/provider-utils/awscloudformation/utils/buildFunction.ts:41:36)
    at prepareResource (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:605:33)
    at async Promise.all (index 1)
    at prepareBuildableResources (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:601:10)
    at Object.run (/root/.nvm/versions/node/v12.21.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:173:5)
2022-01-04T06:47:50.024Z [ERROR]: !!! Build failed
2022-01-04T06:47:50.024Z [ERROR]: !!! Non-Zero Exit Code detected
```
0
answers
0
votes
6
views
DareDevil
asked 12 days ago

jsii.errors.JSIIError: Cannot read properties of undefined (reading 'bindToGraph')

Hi All, this is my first implementation of StateMachineFragment. Goal: attempting to create a class for a re-usable Lambda state. This class can take a parameter and pass it as the payload to the Lambda, and the Lambda will execute the right query based on the payload. Below is my POC code to 'class-ify' the Lambda and the call to the state machine.

```
from aws_cdk import (
    Duration,
    Stack,
    # aws_sqs as sqs,
    aws_stepfunctions as _stepfunctions,
    aws_stepfunctions as sfn,
    aws_stepfunctions_tasks as _stepfunctions_tasks,
    aws_lambda as _lambda,
)
from constructs import Construct


class SubMachine(_stepfunctions.StateMachineFragment):
    def __init__(self, parent, id, *, jobTypeParam):
        super().__init__(parent, id)

        existingFunc = _lambda.Function.from_function_arn(self, "ExistingLambdaFunc",
            function_arn="arn:aws:lambda:us-east-1:958$#$#$#$:function:dummyFunction")
        lambda_invoked = _stepfunctions_tasks.LambdaInvoke(self, "someID", lambda_function=existingFunc)
        wait_10_seconds = _stepfunctions.Wait(self, "Wait for 10 seconds",
            time=_stepfunctions.WaitTime.duration(Duration.seconds(10))
        )

        self._start_state = wait_10_seconds
        self._end_states = [lambda_invoked.end_states]

    def start_state(self):
        return self._start_state

    def end_states(self):
        return self._end_states


class StepfunctionsClasStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        test_lambda_1 = SubMachine(self, "SubMachine1", jobTypeParam="one")

        state_machine = _stepfunctions.StateMachine(self, "TestStateMachine",
            definition=test_lambda_1,
            # role=marketo_role
        )
```

When I try to deploy this code, I get the following error:

```
jsii.errors.JSIIError: Cannot read properties of undefined (reading 'bindToGraph')
```

I am not sure where I am going wrong. Thoughts? Thanks
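One observation, offered as a sketch rather than a confirmed fix: in the Python CDK, `start_state` and `end_states` are abstract *properties*, and `end_states` is expected to be a flat list of chainable states, so defining them as plain methods (and nesting `lambda_invoked.end_states` inside another list) can leave the fragment looking unimplemented to jsii:

```
    @property
    def start_state(self):
        return self._start_state

    @property
    def end_states(self):
        # A flat list of terminal states, set in __init__ as e.g.
        # self._end_states = [lambda_invoked]
        return self._end_states
```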
1
answers
0
votes
7
views
tkansara
asked 15 days ago

Start_state and End_states for step functions

Hi All, all I am trying to do is create a reusable Lambda component where I can pass parameters to the class, so that the Lambda can do different things based on the input param. I am using the CDK in Python to deploy the stack. I would like to create a parallel step function where I can pass the same Lambda different params/payloads so they can be branches. I am running the following code:

```
from aws_cdk import (
    # Duration,
    Stack,
    # aws_sqs as sqs,
    aws_stepfunctions as _stepfunctions,
    aws_stepfunctions_tasks as _stepfunctions_tasks,
    aws_lambda as _lambda,
)
from constructs import Construct


class LambdaJob(_stepfunctions.StateMachineFragment):
    def __init__(self, parent, id, *, jobTypeParam):
        super().__init__(parent, id)

        existingFunc = _lambda.Function.from_function_arn(self, "ExistingLambdaFunc",
            function_arn="arn:aws:lambda:us-east-1:95842$$$$$:function:dummyFunction")
        lambda_invoked = _stepfunctions_tasks.LambdaInvoke(self, "someID", lambda_function=existingFunc)
        wait_10_seconds = _stepfunctions.Wait(self, "Wait for 10 seconds",
            time=_stepfunctions.WaitTime.duration(Duration.seconds(10))
        )

        self.start_state = wait_10_seconds


class StepfunctionsClasStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        test_lambda_1 = LambdaJob(self, "Quick")

        #state_machine = _stepfunctions.StateMachine(self, "TestStateMachine",
        #    definition=LambdaJob(self, "Quick"),
        #    # role=marketo_role
        #    )
```

However, I keep getting the following error:

```
TypeError: Can't instantiate abstract class LambdaJob with abstract methods end_states, start_state
```

Any thoughts on what I am doing wrong? Thanks
1
answers
0
votes
7
views
tkansara
asked 15 days ago

Design suggestions

Hello All, I am expanding the scope of the project and wanted some suggestions/comments on whether the right tech stack is being used. (Job: pulling Leads and Activities from Marketo. Each job has a different query.)

**Scope**: We need 2 jobs to be run daily; however, here is the catch. Each job should be created and queued first. Once that is done, we can poll to see if the job on the Marketo side is completed and the file can be downloaded. The file download can take more than 15 mins. The file can be downloaded using the job ID, using the 2 jobs that were created earlier.

**Current/Implemented Solution**: I started by solving for Leads only, and here is the stack that was worked out. The event is triggered daily using EventBridge. The task that is triggered is a step function. The Sfn first calls a Lambda to create the job, waits for 1 min, calls another Lambda to queue the job, then waits for 10 mins and calls a 3rd Lambda to check the status of the file. If the file is not ready, it waits for 10 more and polls again for file status (this is a loop with a choice to keep checking file status). Once the file is ready, it calls a container (Fargate/ECS) and passes the job ID as a containerOverride to the container. The job runs on the container to download the file and upload it to S3.

**Incorporate pulling Activities into the above flow**: Since the queuing and polling (for file status) Lambdas are similar, and the first Lambda (creating the job) is different, I thought of creating a parallel task where each branch does create, queue, poll, and download the file (using the implemented solution, so 2 Lambdas for job creation, reusing the 2nd and 3rd Lambdas). Once the complete chain (one for Leads and one for Activities) is done, there is a consolidation stage where the output from each container is collected and an SNS message of job completion is sent.

I am looking forward to your suggestions on whether the above workflow is how it should be done, or whether there is any other technology that I should use.

**Design Constraints**: I need to create and queue all the jobs before starting the downloads, since Marketo has a limit of 500 MB per file download. Hence the need to create and queue all the jobs first, and only then start the job to download the files.

Thanks for your suggestions. Regards, Tanmay
0
answers
0
votes
29
views
tkansara
asked 15 days ago

Best way for a Wireless IoT Lambda decoder to determine decoder type?

Hi, I'm trying to figure out the best design pattern for having a Lambda function that handles decoding multiple binary payload types determine which decoder to use on an incoming LoRa payload. The vanilla input event looks like this:

```
{
    "WirelessDeviceId": "<<REDACTED>>",
    "PayloadData": "BA==",
    "WirelessMetadata": {
        "LoRaWAN": {
            "ADR": true,
            "Bandwidth": 125,
            "ClassB": false,
            "CodeRate": "4/5",
            "DataRate": "3",
            "DevAddr": "01b9c3bb",
            "DevEui": "<<REDACTED>>",
            "FCnt": 312,
            "FOptLen": 0,
            "FPort": 4,
            "Frequency": "904500000",
            "Gateways": [
                {
                    "GatewayEui": "<<REDACTED>>",
                    "Rssi": -96,
                    "Snr": 2.5
                }
            ],
            "MIC": "0110ac72",
            "MType": "UnconfirmedDataUp",
            "Major": "LoRaWANR1",
            "Modulation": "LORA",
            "PolarizationInversion": false,
            "SpreadingFactor": 7,
            "Timestamp": "2021-12-23T19:58:35Z"
        }
    }
}
```

which on its own doesn't contain any useful information for my Lambda decoder to determine which decoder to use. Inside Lambda, I have a Node.js script with a bunch of custom decoders as individual modules, each built for a particular device type (categorized by manufacturer & model). The handler imports the appropriate module using a simple if statement and runs the payload through that decoder. Something like this:

```
const decodera = require('./lib/decodera');
const decoderb = require('./lib/decoderb');

exports.handler = async (event) => {
    // Call the correct transformer function
    if (event.DecoderType === 'a') {
        return decodera(event);
    } else if (event.DecoderType === 'b') {
        return decoderb(event);
    } else {
        console.log(`A transformer for ${event.DecoderType} was not found.`);
    }
};
```

Originally I had a separate Lambda for each decoder, but that gets complex, since you need to manage a whole bunch of functions, rules, and topics for each new device type. So I'd prefer to have only one and let the function determine the correct decoder once the event arrives. That's the idea, at least. However, this method seems to require an enriched version of the example event above, with information about which decoder to use. So I'm trying to figure out how best to handle that; see the sketch after this list.

1) Do I store that info in Dynamo and use a separate rule with `get_dynamodb()` first (with something like "decoder_type"), add it to the event, and then republish it, where the `aws_lambda()` rule would then send it to the decoder function, and I would republish a second time to the final topic?

2) Can I at least combine both the `get_dynamodb()` and `aws_lambda()` functions into one rule? The point here is that it seems I still need Dynamo to store some kind of key to tell Lambda which decoder to use.

3) Is there a simpler way?
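On question 2, a sketch of how a single rule could do both: the enrichment can happen in the rule's SQL with `get_dynamodb()`, and the rule's Lambda action then delivers the already-enriched event, so no intermediate republish is needed. The table name, attribute, role ARN, and topic below are all placeholders, and the table's partition key is assumed to be the wireless device ID:

```sql
SELECT *,
       get_dynamodb("DecoderRegistry", "WirelessDeviceId", WirelessDeviceId,
                    "arn:aws:iam::111122223333:role/iot-dynamodb-read").decoderType AS DecoderType
FROM 'lorawan/uplink'
```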
1
answers
1
votes
1
views
AWS-User-9404308
asked 15 days ago

Write containerOverrides from Lambda to Container

Hello All, I have a step function that calls a Lambda, which does some processing and gets the ID of the job that needs to be processed by the ECS RunTask. What I am trying to do is pass the job ID as a containerOverride so that on each run a different ID can be passed to the ECS RunTask. Here is the dummy Lambda output:

```
Test Event Name
dummyTest

Response
{
  "containerOverrides": " [ { \"name\": \"getFileTask\", \"environment\": [ { \"name\": \"name1\", \"value\": \"123\" }, { \"name\": \"DATE\", \"value\": \"1234-12-12\" }, { \"name\": \"SCRIPT\", \"value\": \"123456\" } ] } ] "
}
```

Dummy Lambda code:

```
def lambda_handler(event, context):
    # TODO implement
    print("Success")
    overridden_Text = ' [ { "name": "getFileTask", "environment": [ { "name": "name1", "value": "123" }, { "name": "DATE", "value": "1234-12-12" }, { "name": "SCRIPT", "value": "123456" } ] } ] '
    return{
        'containerOverrides': overridden_Text
    }
```

Here is the record when the ECS RunTask is triggered (TaskStateEntered) from the step function:

```
{
  "name": "ECS RunTask",
  "input": {
    "containerOverrides": " [ { \"name\": \"getFileTask\", \"environment\": [ { \"name\": \"name1\", \"value\": \"123\" }, { \"name\": \"DATE\", \"value\": \"1234-12-12\" }, { \"name\": \"SCRIPT\", \"value\": \"123456\" } ] } ] "
  },
  "inputDetails": {
    "truncated": false
  }
}
```

The issue is that when the run task enters the TaskSubmitted stage:

```
"LastStatus": "PROVISIONING",
"LaunchType": "FARGATE",
"Memory": "4096",
"Overrides": {
  "ContainerOverrides": [
    {
      "Command": [],
      "Environment": [],
      "EnvironmentFiles": [],
      "Name": "getFileTask",
      "ResourceRequirements": []
    }
  ],
  "InferenceAcceleratorOverrides": []
},
"PlatformFamily": "Linux",
"PlatformVersion": "1.4.0",
"StartedBy": "AWS Step Functions",
```

For whatever reason, the environment variables are not being pushed from the Lambda output to the container as a launch override. Is there something that I am doing incorrectly? Thanks
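One thing that stands out, offered as a sketch rather than a confirmed fix: the Lambda returns `containerOverrides` as a JSON-encoded *string*, while the RunTask state needs a structure it can map into `Overrides.ContainerOverrides`; a string value would leave `Environment` empty, exactly as in the TaskSubmitted record. Returning a plain Python structure instead:

```
def lambda_handler(event, context):
    # Return a structure, not a JSON string, so the Step Functions ECS RunTask
    # state can map it into ContainerOverrides.
    return {
        "containerOverrides": [
            {
                "name": "getFileTask",
                "environment": [
                    {"name": "name1", "value": "123"},
                    {"name": "DATE", "value": "1234-12-12"},
                    {"name": "SCRIPT", "value": "123456"},
                ],
            }
        ]
    }
```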
1
answers
0
votes
7
views
tkansara
asked 16 days ago

S3 Event Bridge events have null values for VersionId. Is this a bug?

When working with Lambda functions that handle EventBridge events from an S3 bucket with versioning enabled, I find that the VersionId field of the AWS event object always shows a null value instead of the true value. For example, here is the JSON AWSEvent that uses the aws.s3@ObjectDeleted schema. This JSON was the event payload that went to my Lambda function when I deleted an object from a bucket that had versioning enabled.

Note that $.object.versionId is null, but when I look in the bucket, I see unique version ID values for both the original cat pic "BeardCat.jpg" and its delete marker. Also, I found the same problem in the AWSEvent JSON for an aws.s3@ObjectCreated event. There should have been a non-null VersionId in both the ObjectCreated event and the ObjectDeleted event. Have I found a bug?

Note: Where you see 'xxxx' or 'XXXXXXXXX', I was simply redacting AWS account numbers and S3 bucket names for privacy reasons.

```
{
  detail: class ObjectDeleted {
    bucket: class Bucket {
      name: tails-dev-images-xxxx
    }
    object: class Object {
      etag: d41d8cd98f00b204e9800998ecf8427e
      key: BeardCat.jpg
      sequencer: 0061CDD784B140A4CB
      versionId: null
    }
    deletionType: null
    reason: DeleteObject
    requestId: null
    requester: XXXXXXXXX
    sourceIpAddress: null
    version: 0
  }
  detailType: null
  resources: [arn:aws:s3:::tails-dev-images-xxxx]
  id: 82b7602e-a2fe-cffb-67c8-73b4c8753f5f
  source: aws.s3
  time: Thu Dec 30 16:00:04 UTC 2021
  region: us-east-2
  version: 0
  account: XXXXXXXXXX
}
```
2
answers
0
votes
7
views
TheSpunicorn
asked 17 days ago

Multi Region strategy for API Gateway

If disaster recovery is not a requirement, what would be the best strategy for setting up API Gateway to serve global customers? Here are three options that I can think of; I am not able to land on one.

**Option 1**: Single edge-optimized API Gateway serving all traffic

* Pros: saves cost and avoids the complexity of data replication (the backend is OpenSearch).
* Cons: latency? Not sure how much an edge-optimized API will help with latency, as customers will be hitting the API at the nearest edge (SSL handshake, etc.) with traffic flowing over the backbone network. (Question 1)

**Option 2**: Multiple regional API Gateways with Route 53 latency-based routing

* Pros: customers hit the closest API.
* Cons: data replication and cost. Also, since there is no CloudFront here, traffic flows over the internet to the closest region's API. Say we have the API deployed in two regions, US and Singapore: would users in Europe see latency worse than Option 1, where requests go to the nearest edge location and reach the API via the backbone?

**Option 3**: Multiple edge-optimized API Gateways with Route 53 latency-based routing

* Pros: customers hit the closest API. Not sure how latency-based routing works on an edge-optimized endpoint; would it even help, since both endpoints are edge-optimized? Not sure how smart Route 53 is. (Question 2)
* Cons: data replication, cost, and the uncertainty of latency-based routing.

And finally, one that I think could work, but for which I haven't found many implementations:

**Option 4**: Multiple regional API Gateways with a single custom CloudFront distribution on top, using CloudFront Functions to do the routing

* Pros: customers hit the closest edge location and are routed to the nearest API; this routing would be based on the country-of-origin header from CloudFront.
* Cons: the same data replication and cost, plus routing based on a predefined list of countries.

I need to spend time and run tests with multiple solutions, but I wanted to seek the community's advice first. To summarize: if cost, complexity, and disaster recovery are kept out of the discussion, what would be the best architecture for API Gateway to avoid latency issues?
2
answers
0
votes
18
views
Balu
asked 17 days ago

AWS CDK 2: Package subpath './aws-cloudfront/lib/experimental' is not defined by "exports" in xxx/node_modules/aws-cdk-lib/package.json

I tried creating a demo for VueJS SSR using Lambda@Edge and using AWS CDK v2. The code is below ``` import { CfnOutput, Duration, RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib'; import { Construct } from 'constructs'; import { Bucket } from 'aws-cdk-lib/aws-s3'; import { BucketDeployment, Source } from 'aws-cdk-lib/aws-s3-deployment' import { CloudFrontWebDistribution, LambdaEdgeEventType, OriginAccessIdentity } from 'aws-cdk-lib/aws-cloudfront'; import { Code, Function, Runtime } from 'aws-cdk-lib/aws-lambda'; import { EdgeFunction } from 'aws-cdk-lib/aws-cloudfront/lib/experimental'; export class SsrStack extends Stack { constructor(scope: Construct, id: string, props?: StackProps) { super(scope, id, props); const bucket = new Bucket(this, 'DeploymentsBucket', { websiteIndexDocument: "index.html", websiteErrorDocument: "index.html", publicReadAccess: false, //only for demo not to use in production removalPolicy: RemovalPolicy.DESTROY, }); // new BucketDeployment(this, "App", { sources: [Source.asset("../../web/dist/")], destinationBucket: bucket }); // const originAccessIdentity = new OriginAccessIdentity( this, 'DeploymentsOriginAccessIdentity', ); bucket.grantRead(originAccessIdentity); const ssrEdgeFunction = new EdgeFunction(this, "ssrHandler", { runtime: Runtime.NODEJS_14_X, code: Code.fromAsset("../../lambda/ssr-at-edge/"), memorySize: 128, timeout: Duration.seconds(5), handler: "index.handler" }); const distribution = new CloudFrontWebDistribution( this, 'DeploymentsDistribution', { originConfigs: [ { s3OriginSource: { s3BucketSource: bucket, originAccessIdentity: originAccessIdentity }, behaviors: [ { isDefaultBehavior: true, lambdaFunctionAssociations: [ { eventType: LambdaEdgeEventType.ORIGIN_REQUEST, lambdaFunction: ssrEdgeFunction.currentVersion, } ] } ] } ], errorConfigurations: [ { errorCode: 403, responseCode: 200, responsePagePath: '/index.html', errorCachingMinTtl: 0, }, { errorCode: 404, responseCode: 200, responsePagePath: '/index.html', errorCachingMinTtl: 0, } ] } ); new CfnOutput(this, 'CloudFrontURL', { value: distribution.distributionDomainName }); } } ``` However when I tried deploying it shows something like this ``` Package subpath './aws-cloudfront/lib/experimental' is not defined by "exports" in /Users/petrabarus/Projects/kodingbarengpetra/vue-lambda-ssr/deployments/cdk/node_modules/aws-cdk-lib/package.json ``` Here's the content of the `package.json` ``` { "name": "ssr-at-edge", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "jest --verbose", "build": "tsc", "watch": "tsc -w", "start": "npm run build -- -w" }, "author": "", "license": "ISC", "devDependencies": { "@types/aws-lambda": "^8.10.89", "@types/node": "^17.0.5", "ts-node": "^10.4.0", "typescript": "^4.5.4" }, "dependencies": { "vue": "^2.6.14", "vue-server-renderer": "^2.6.14" } } ``` Is there anything I miss?
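For reference, in aws-cdk-lib v2 the experimental CloudFront constructs are reached through the module's exported `experimental` namespace rather than the internal `lib/` path, which the package's `exports` map blocks; a sketch of the import, with the rest of the stack unchanged:

```typescript
import { experimental } from 'aws-cdk-lib/aws-cloudfront';

// EdgeFunction lives under the exported experimental namespace in CDK v2.
const ssrEdgeFunction = new experimental.EdgeFunction(this, 'ssrHandler', {
  runtime: Runtime.NODEJS_14_X,
  code: Code.fromAsset('../../lambda/ssr-at-edge/'),
  memorySize: 128,
  timeout: Duration.seconds(5),
  handler: 'index.handler',
});
```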
1
answers
0
votes
5
views
petrabarus
asked 18 days ago

API Gateway lacks permissions to trigger Lambda when made by Terraform

My environment includes an API Gateway with two methods: POST and OPTIONS. The POST one requires an API key and the OPTIONS one does not. Each triggers a different Lambda. I am using Terraform to build the environments.

When I build an environment, the AWS console shows that everything is as it should be: the gateway, the methods, and the Lambda all show that the gateway has permissions. Calling the POST method works well. Calling the OPTIONS method fails, and CloudWatch shows an empty error:

> Lambda invocation failed with status: 403. Lambda request id: 43284993-c96d-4416-8db4-5c05b18eb2e1
>
> Execution failed due to configuration error:
>
> Method completed with status: 500

If I manually remove the OPTIONS method, re-create it manually, and deploy the API, it works. A difference I noticed after the manual change is that under the Lambda "Triggers" view the trigger is a bit different. The triggers made by Terraform have these fields:

> API key: <value of the the api key>
> API type: REST
> Authorization: None
> Method: OPTIONS
> Resource Path: /my_url_path
> Stage: <name of the API stage>

The one made manually lacks the "API key" field, although my Terraform code does not specify an API key in the Lambda permission configuration. I am not sure whether it is a Terraform misconfiguration or a mis-feature in the AWS API. I would love any suggestion for debugging it, or a solution. Thank you!
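One way to debug this is to compare the function's resource policy (`aws lambda get-policy`) between the Terraform-made and hand-made setups: a 403 at invocation time usually means the OPTIONS method's source ARN is not covered by the policy. A hedged sketch of granting the missing permission with the AWS SDK for JavaScript v3 (the Terraform equivalent is an `aws_lambda_permission` resource); every name and ID below is a placeholder:

```
import { LambdaClient, AddPermissionCommand } from '@aws-sdk/client-lambda';

const client = new LambdaClient({ region: 'us-east-1' });

async function allowOptionsInvoke(): Promise<void> {
  // Hypothetical function name, account id, API id and resource path.
  await client.send(new AddPermissionCommand({
    FunctionName: 'options-handler',
    StatementId: 'AllowAPIGatewayInvokeOptions',
    Action: 'lambda:InvokeFunction',
    Principal: 'apigateway.amazonaws.com',
    // Wildcard stage; the method and resource path must match the OPTIONS integration.
    SourceArn: 'arn:aws:execute-api:us-east-1:123456789012:abcde12345/*/OPTIONS/my_url_path',
  }));
}

allowOptionsInvoke().catch(console.error);
```

If the Terraform-created statement's `SourceArn` names a different method, stage, or path than the deployed OPTIONS integration, the invocation fails exactly like this even though the console view looks correct.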
1
answers
2
votes
12
views
Assaf Shechter
asked 18 days ago

DB Log Processing through Kinesis Data Streams and Time Series DB

Hi Team, I have an architecture question: how can PostgreSQL DB log processing be captured through AWS Lambda and Kinesis Data Streams, with the data finally loading into the Timestream database?

High-level scenario, draft data flow:

**Aurora PostgreSQL** ---- DB log processing ----> **Lambda** ---- ingestion ----> **Kinesis Data Streams** ---- process and join context data ----> **Timestream database**

I believe we can process and load AWS IoT data (sensor and device data) into Timestream through Lambda, Kinesis Data Streams, and Kinesis Data Analytics, and then do analytics on the time-series data. But I am not sure how PostgreSQL DB logs (write-ahead logs) can be processed through Lambda, ingested through Kinesis Data Streams, and finally loaded into Timestream.

The flow above also requires joining some tables, for example event-driven tables with the associated Account and Customer tables, before loading into the time-series database.

I would like to know whether the above flow is sound, since we are not processing any sensor/device data (where sensor data captures all the measures and dimensions from the device and loads into Timestream), so the time-series database would always be the primary database.

If anyone can throw some light on how PostgreSQL DB logs can be integrated with Timestream through Kinesis Data Streams and Lambda, I would appreciate the help. Thanks
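For the final Kinesis Data Streams to Timestream leg, a Lambda consumer can decode each stream record and write it with the Timestream WriteRecords API. A minimal sketch, assuming each stream record carries one JSON-encoded log entry; the database, table, dimension, and field names are all hypothetical placeholders:

```
import { KinesisStreamHandler } from 'aws-lambda';
import { TimestreamWriteClient, WriteRecordsCommand } from '@aws-sdk/client-timestream-write';

const client = new TimestreamWriteClient({ region: 'us-east-1' });

export const handler: KinesisStreamHandler = async (event) => {
  const records = event.Records.map((r) => {
    // Each Kinesis record carries one decoded WAL/log entry as JSON (assumed shape).
    const entry = JSON.parse(Buffer.from(r.kinesis.data, 'base64').toString('utf8'));
    return {
      // Context joined upstream (account, customer, table) becomes dimensions.
      Dimensions: [
        { Name: 'account_id', Value: String(entry.accountId) },
        { Name: 'table', Value: String(entry.table) },
      ],
      MeasureName: 'row_change',
      MeasureValue: '1',
      MeasureValueType: 'BIGINT',
      Time: String(entry.timestampMs), // milliseconds since epoch (default TimeUnit)
    };
  });

  // Hypothetical Timestream database/table names.
  await client.send(new WriteRecordsCommand({
    DatabaseName: 'db_activity',
    TableName: 'wal_events',
    Records: records,
  }));
};
```

For getting WAL changes into the stream in the first place, one common approach is change data capture (for example AWS DMS with a Kinesis target against Aurora PostgreSQL logical replication) rather than parsing log files in Lambda; that part is outside this sketch.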
1
answers
0
votes
5
views
AWS-User-8897895
asked 21 days ago

AWS Lex Lambda currentIntent['slots'] value for slot name changes for some utterances

I am very new to Lex. I have a questionnaire workflow set up with Lex and Lambda. I have a slot named `answers`, which I set to an empty array `'[]'` the first time the Lambda is fired, to record the values (my question state lives in it). But I see something weird happening sometimes with a few words: for example, I have 4 words (like a multiple choice), and some words are "recognized" fine, but for others they change/override my `slots["answers"]` value.

`{'messageVersion': '1.0', 'invocationSource': 'DialogCodeHook', 'userId': 'XXXXX', 'sessionAttributes': {'assessmentState': '1', 'sessionId': 'XXXXX'}, 'requestAttributes': None, 'bot': {'name': 'XXXXX', 'alias': 'XXXXX', 'version': 'XXXXX'}, 'outputDialogMode': 'Text', 'currentIntent': {'name': 'XXXXX', 'slots': {'answers': '[]'}, 'slotDetails': {'answers': {'resolutions': [{'value': '[]'}], 'originalValue': '[]'}}, 'confirmationStatus': 'None', 'nluIntentConfidenceScore': 0.95}, 'alternativeIntents': [{'name': 'XXXXX1', 'slots': {}, 'slotDetails': {}, 'confirmationStatus': 'None', 'nluIntentConfidenceScore': None}], 'inputTranscript': 'Not at all', 'recentIntentSummaryView': [{'intentName': 'XXXXX', 'checkpointLabel': None, 'slots': {'answers': '[]'}, 'confirmationStatus': 'None', 'dialogActionType': 'ElicitSlot', 'fulfillmentState': None, 'slotToElicit': 'answers'}], 'sentimentResponse': None, 'kendraResponse': None, 'activeContexts': []}`

This is my first request (the answer to my first question). As you can see, `currentIntent['slots']['answers']` is `'[]'`, which I set when the Lambda was invoked the first time, even before this, to render the first question to the user; for this question I answered 'Not at all', as you can see in the `inputTranscript`. That all looks fine, but for the next question, when I tried to answer with another utterance, the event changed a lot.

`{'messageVersion': '1.0', 'invocationSource': 'DialogCodeHook', 'userId': 'XXXXX', 'sessionAttributes': {'assessmentState': '2', 'sessionId': 'XXXXX'}, 'requestAttributes': None, 'bot': {'name': 'XXXXX', 'alias': 'XXXXX', 'version': '23'}, 'outputDialogMode': 'Text', 'currentIntent': {'name': 'XXXXX', 'slots': {'answers': 'Several days'}, 'slotDetails': {'answers': {'resolutions': [], 'originalValue': 'Several days'}}, 'confirmationStatus': 'None', 'nluIntentConfidenceScore': 1.0}, 'alternativeIntents': [{'name': 'XXXXX1', 'slots': {}, 'slotDetails': {}, 'confirmationStatus': 'None', 'nluIntentConfidenceScore': None}, {'name': 'XXXXX2', 'slots': {'foodName': None, 'delayWhen': None, 'confirmation': None, 'barcode': None, 'food': None}, 'slotDetails': {'foodName': None, 'delayWhen': None, 'confirmation': None, 'barcode': None, 'food': None}, 'confirmationStatus': 'None', 'nluIntentConfidenceScore': 0.62}, {'name': 'XXXXX3', 'slots': {'answers': 'Several days'}, 'slotDetails': {'answers': {'resolutions': [], 'originalValue': 'Several days'}}, 'confirmationStatus': 'None', 'nluIntentConfidenceScore': 0.51}, {'name': 'XXXXX5', 'slots': {}, 'slotDetails': {}, 'confirmationStatus': 'None', 'nluIntentConfidenceScore': 0.48}], 'inputTranscript': 'Several days', 'recentIntentSummaryView': [{'intentName': 'XXXXX', 'checkpointLabel': None, 'slots': {'answers': '[{"id": 1, "value": 0}]'}, 'confirmationStatus': 'None', 'dialogActionType': 'ElicitSlot', 'fulfillmentState': None, 'slotToElicit': 'answers'}], 'sentimentResponse': None, 'kendraResponse': None, 'activeContexts': []}`

Here, `currentIntent['slots']['answers']` is supposed to contain `'[{"id": 1, "value": 0}]'`, which I set after my first question was answered, but instead it is 'Several days'. And if you look at the `recentIntentSummaryView` array's element `{'intentName': 'XXXXX', 'checkpointLabel': None, 'slots': {'answers': '[{"id": 1, "value": 0}]'},....`, it has the value I set for my previous question.

As far as my observation goes, it fails for some words, and after a build and publish it starts to work, which I don't understand; sometimes it looks like magic and sometimes it leaves me confused about what I missed. The slot configuration I used is: name: `answers`, type: AMAZON.streetAddress, prompt: "what assessment?". I used AMAZON.streetAddress because I had the same issue on another intent, and I found somewhere that AMAZON.alphaNumeric has some issue, so I changed to AMAZON.streetAddress and that intent works fine. This intent has a similar configuration and Lambda code to the other one, but I am lost as to what to do or what is happening. Please help me out. Thanks in advance.
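For what it's worth, one common workaround for this overwrite behavior is to stop storing accumulated state in the slot at all and keep it in `sessionAttributes`, which Lex passes back untouched on every turn, while scoring the raw `inputTranscript` yourself. A minimal sketch of such a DialogCodeHook handler, with a hypothetical answer scale and question counter:

```
import { LexHandler } from 'aws-lambda';

// Hypothetical answer scale -- the index doubles as the recorded score.
const CHOICES = ['Not at all', 'Several days', 'More than half the days', 'Nearly every day'];

export const handler: LexHandler = async (event) => {
  const session = event.sessionAttributes ?? {};
  const answers: Array<{ id: number; value: number }> = JSON.parse(session.answers ?? '[]');
  const state = Number(session.assessmentState ?? '1');

  // Score the raw utterance instead of trusting the resolved slot value.
  const score = CHOICES.indexOf(event.inputTranscript);
  answers.push({ id: state, value: score });

  return {
    // Accumulated answers ride in sessionAttributes, where Lex will not rewrite them.
    sessionAttributes: {
      ...session,
      answers: JSON.stringify(answers),
      assessmentState: String(state + 1),
    },
    dialogAction: {
      type: 'ElicitSlot',
      intentName: event.currentIntent.name,
      slots: event.currentIntent.slots,
      slotToElicit: 'answers',
      message: { contentType: 'PlainText', content: `Question ${state + 1} ...` },
    },
  };
};
```

With this layout, the `answers` slot only exists as the thing being elicited; whatever value Lex resolves for it each turn is simply ignored.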
0
answers
0
votes
9
views
AWS-User-1009878
asked 24 days ago