Questions in Serverless


Unable to perform OpenSearch text queries from Gremlin using AWS Lambda written in Javascript

I am syncing my AWS Neptune nodes into an AWS OpenSearch cluster as per the documentation: https://docs.aws.amazon.com/neptune/latest/userguide/full-text-search.html. The name of the OpenSearch index is `amazon_neptune` and the index type is `_doc`. Following is the index configuration:

```
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "analysis": {
      "normalizer": {
        "useLowercase": { "type": "custom", "filter": "lowercase" }
      }
    }
  },
  "mappings": {
    "properties": {
      "document_type": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "entity_id": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "entity_type": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "predicates": {
        "properties": {
          "content": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 1000, "normalizer": "useLowercase" } } },
          "visibilityType": { "type": "keyword" },
          "status": { "type": "keyword" },
          "type": { "type": "keyword" },
          "firstName": { "type": "text", "fields": { "keyword": { "type": "keyword", "normalizer": "useLowercase" } } },
          "lastName": { "type": "text", "fields": { "keyword": { "type": "keyword", "normalizer": "useLowercase", "ignore_above": 1000 } } }
        }
      }
    }
  }
}
```

Using the npm `gremlin` package, I'm trying to query my documents. Following is the code:

```
'use strict';
const gremlin = require('gremlin');

exports.handler = async (event, context) => {
  try {
    const DriverRemoteConnection = gremlin.driver.DriverRemoteConnection;
    const Graph = gremlin.structure.Graph;
    const dc = new DriverRemoteConnection(<neptune_endpoint>, {});
    const graph = new Graph();
    const dbClient = graph.traversal().withRemote(dc);
    const res = await dbClient
      .withSideEffect("Neptune#fts.endpoint", <https_opensearch_endpoint>)
      .withSideEffect('Neptune#fts.queryType', 'term')
      .V().has("visibilityType", "Neptune#fts PUBLIC")
      .toList();
    console.log('res:', res);
  } catch (err) {
    console.error('Failed to query', err);
  }
}
```

But I'm getting the following error:

```
Failed to query ResponseError: Server error: {"detailedMessage":"method [POST], host [<https_opensearch_endpoint>], URI [/amazon_neptune/_search?typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true], status line [HTTP/1.1 403 Forbidden]\n{\"Message\":\"User: anonymous is not authorized to perform: es:ESHttpPost\"}","requestId":"23a9e7d7-7dde-465b-bf29-9c59cff12e86","code":"BadRequestException"} (500)
```

I have given the following permissions to my Lambda:

```
Type: AWS::IAM::Policy
Properties:
  PolicyName: <Policy_Name>
  Roles:
    - 'Ref': <lambda_role>
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
      - Effect: Allow
        Action:
          - es:ESHttpGet
          - es:ESHttpPost
          - es:ESHttpPut
          - es:ESHttpDelete
        Resource: <opensearch_cluster_arn>
```

My OpenSearch cluster and my Neptune cluster are located inside the same VPC, and my Lambda is hosted in the same VPC as well. Please help me understand why I'm getting a 403 error when I've given my Lambda the proper read permissions. Any help would be highly appreciated.
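The `User: anonymous` in the 403 suggests the search request reached the OpenSearch domain unsigned, in which case IAM permissions attached to the Lambda role are never evaluated. One way to narrow this down is to check whether the domain accepts a request that *is* signed with that role's credentials. A minimal SigV4 probe sketch in Python (hypothetical endpoint; assumes the `requests` package is bundled, since it is not part of the Lambda runtime):

```
import boto3
import requests  # assumed to be packaged with the function
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Hypothetical OpenSearch endpoint; the index name comes from the question.
host = "https://my-domain.us-east-1.es.amazonaws.com"
url = f"{host}/amazon_neptune/_search"
payload = '{"query": {"match_all": {}}}'

session = boto3.Session()
request = AWSRequest(method="POST", url=url, data=payload,
                     headers={"Content-Type": "application/json"})
# Sign with the Lambda execution role's credentials for the "es" service.
SigV4Auth(session.get_credentials(), "es", session.region_name).add_auth(request)

response = requests.post(url, data=payload, headers=dict(request.headers))
print(response.status_code, response.text)
```

If this signed probe succeeds where the Gremlin call returns 403, the domain itself accepts the role, and the remaining question is how the full-text-search request is authenticated on its way from Neptune to OpenSearch.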
1 answer · 0 votes · 31 views · asked a day ago

RejectedRecordsException when writing to Timestream from Lambda

I am receiving the following error when trying to test a Lambda function that writes to Timestream:

```
RejectedRecords: An error occurred (RejectedRecordsException) when calling the WriteRecords operation: One or more records have been rejected. See RejectedRecords for details. Rejected Index 0: Multi measure name already has an assigned measure value type. Each multi measure name can have only one measure value type and cannot be changed. Other records were written successfully.
```

This is the code for my `records` array:

```
records = [{
    'Dimensions': dimensions,
    'Time': CURRENT_TIME,
    'MeasureName': 'measurementvalues',
    'MeasureValueType': 'MULTI',
    'MeasureValues': [
        {'Name': 'rainfall_mm', 'Value': str(event["rain_mm"]), 'Type': 'DOUBLE'},
        {'Name': 'Temperature', 'Value': str(event["temperature_C"]), 'Type': 'DOUBLE'},
        {'Name': 'humidity', 'Value': str(event["humidity"]), 'Type': 'BIGINT'},
        {'Name': 'wind_max_m_s', 'Value': str(event["wind_max_m_s"]), 'Type': 'DOUBLE'},
        {'Name': 'wind_avg_m_s', 'Value': str(event["wind_avg_m_s"]), 'Type': 'DOUBLE'},
        {'Name': 'wind_dir_deg', 'Value': str(event["wind_dir_deg"]), 'Type': 'DOUBLE'},
        {'Name': 'Battery_check', 'Value': str(event["battery_ok"]), 'Type': 'BIGINT'},
        {'Name': 'wind_max_mph', 'Value': str(event["wind_max_m_s"]*2.23694), 'Type': 'DOUBLE'},
        {'Name': 'wind_avg_mph', 'Value': str(event["wind_avg_m_s"]*2.23694), 'Type': 'DOUBLE'},
        {'Name': 'wind_dir_deg_corr', 'Value': str(event["wind_dir_deg"]+0), 'Type': 'DOUBLE'},
    ]
}]
```

I would be very grateful if anyone could shed light on this error :)
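As the message states, each multi-measure name can hold only one value type, so this rejection generally means the measure name or one of its attributes has already been stored in the target table with a different type (for example, a measure written earlier as `DOUBLE` and now sent as `BIGINT`). A small sketch, assuming hypothetical database/table names, that surfaces the per-record rejection reasons boto3 returns:

```
import boto3

client = boto3.client("timestream-write")

try:
    client.write_records(
        DatabaseName="weatherDB",   # hypothetical
        TableName="observations",   # hypothetical
        Records=records,            # the array from the question
    )
except client.exceptions.RejectedRecordsException as err:
    # Timestream reports the index and reason for every rejected record,
    # which points at the exact measure whose type conflicts with what the
    # table already stores for that multi-measure name.
    for rejected in err.response["RejectedRecords"]:
        print(rejected["RecordIndex"], rejected["Reason"])
    raise
```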
0 answers · 0 votes · 13 views · asked 3 days ago

How to ensure using the latest lambda layer version when deploying with CloudFormation and SAM?

Hi, we use CloudFormation and SAM to deploy our Lambda (Node.js) functions. All our Lambda functions have a layer set through `Globals`. When we make breaking changes in the layer code we get errors during deployment, because the new Lambda function code is rolled out to production with the old layer, and only after a few seconds *(~40 seconds in our case)* does it start using the new layer. For example, if we add a new class to the layer and import it in the function code, we get a `NewClass is not found` error for a few seconds during deployment *(this happens because the new function code still uses the old layer, which doesn't have `NewClass`)*.

Is it possible to ensure a new Lambda function is always rolled out with the latest layer version?

Example CloudFormation template.yaml:

```
Globals:
  Function:
    Runtime: nodejs14.x
    Layers:
      - !Ref CoreLayer

Resources:
  CoreLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: core-layer
      ContentUri: packages/coreLayer/dist
      CompatibleRuntimes:
        - nodejs14.x
    Metadata:
      BuildMethod: nodejs14.x

  ExampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: example-function
      CodeUri: packages/exampleFunction/dist
```

SAM build: `sam build --base-dir . --template ./template.yaml`

SAM package: `sam package --s3-bucket example-lambda --output-template-file ./cf.yaml`

Example CloudFormation deployment events. As you can see, the new layer (`CoreLayer123abc456`) is created before the Lambda function is updated, so it should be available to the new function code, but for a few seconds the updated function still runs with the old layer version:

| Timestamp | Logical ID | Status | Status reason |
| --- | --- | --- | --- |
| 2022-05-23 16:26:54 | stack-name | UPDATE_COMPLETE | - |
| 2022-05-23 16:26:54 | CoreLayer789def456 | DELETE_SKIPPED | - |
| 2022-05-23 16:26:53 | v3uat-farthing | UPDATE_COMPLETE_CLEANUP_IN_PROGRESS | - |
| 2022-05-23 16:26:44 | ExampleFunction | UPDATE_COMPLETE | - |
| 2022-05-23 16:25:58 | ExampleFunction | UPDATE_IN_PROGRESS | - |
| 2022-05-23 16:25:53 | CoreLayer123abc456 | CREATE_COMPLETE | - |
| 2022-05-23 16:25:53 | CoreLayer123abc456 | CREATE_IN_PROGRESS | Resource creation Initiated |
| 2022-05-23 16:25:50 | CoreLayer123abc456 | CREATE_IN_PROGRESS | - |
| 2022-05-23 16:25:41 | stack-name | UPDATE_IN_PROGRESS | User Initiated |
2 answers · 0 votes · 43 views · asked 4 days ago

Amazon Linux 2 on Beanstalk isn't installing SQSD and prevents cron.yml from working

We're on solution stack "64bit Amazon Linux 2 v3.3.13 running PHP 7.4". The worker server spins up and unpacks "platform-engine.zip", but when it comes to setting up SQSD it fails:

```
May 23 12:45:01 ip-172-31-12-195 su: (to sqsd) root on none
May 23 12:45:10 ip-172-31-12-195 aws-sqsd-monitor: restarting aws-sqsd...
May 23 12:45:10 ip-172-31-12-195 systemd: Starting (null)...
May 23 12:45:10 ip-172-31-12-195 su: (to sqsd) root on none
May 23 12:45:10 ip-172-31-12-195 systemd: Created slice User Slice of sqsd.
May 23 12:45:10 ip-172-31-12-195 systemd: Started Session c2 of user sqsd.
May 23 12:45:10 ip-172-31-12-195 aws-sqsd: Version 2 of the Ruby SDK will enter maintenance mode as of November 20, 2020. To continue receiving service updates and new features, please upgrade to Version 3. More information can be found here: https://aws.amazon.com/blogs/developer/deprecation-schedule-for-aws-sdk-for-ruby-v2/
May 23 12:45:13 ip-172-31-12-195 aws-sqsd: Cannot load config file. No such file or directory: "/etc/aws-sqsd.d/default.yaml" - (AWS::EB::SQSD::FatalError)
May 23 12:45:13 ip-172-31-12-195 systemd: aws-sqsd.service: control process exited, code=exited status=1
May 23 12:45:13 ip-172-31-12-195 systemd: Failed to start (null).
May 23 12:45:13 ip-172-31-12-195 systemd: Unit aws-sqsd.service entered failed state.
May 23 12:45:13 ip-172-31-12-195 systemd: aws-sqsd.service failed.
May 23 12:45:13 ip-172-31-12-195 systemd: Removed slice User Slice of sqsd.
```

I can't find anything online about this, so some help would be greatly appreciated.
1 answer · 0 votes · 27 views · asked 5 days ago

Containers/services unable to communicate with each other

I have created an ECS cluster that runs one service on Fargate with one task definition. The task definition runs two containers that are supposed to communicate with each other:

- nginx (using `fastcgi_pass <hostname>:9000;`)
- php-fpm

I have tried running them in one task definition or in separate services (with Service Discovery set up with either A records or SRV records - I have tried all the options).

Other info:

- Public VPC with two public subnets
- Security group that allows access from itself to port 9000 (the php-fpm port)
- Load balancer connected to the nginx container on port 80

Here is one of the task definitions that I tried, in this case running the containers in the same task definition (nginx has `fastcgi_pass localhost:9000;`). I hope somebody can help me... It can't be this hard to do something so simple. Nothing seems to work.

```
{
  "ipcMode": null,
  "executionRoleArn": "arn:aws:iam::359816492978:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    { "dnsSearchDomains": null, "environmentFiles": null, "logConfiguration": { "logDriver": "awslogs", "secretOptions": null, "options": { "awslogs-group": "/ecs/v1-stage", "awslogs-region": "us-east-1", "awslogs-stream-prefix": "ecs" } }, "entryPoint": null, "portMappings": [ { "hostPort": 80, "protocol": "tcp", "containerPort": 80 } ], "command": null, "linuxParameters": null, "cpu": 0, "environment": [], "resourceRequirements": null, "ulimits": null, "dnsServers": null, "mountPoints": [], "workingDirectory": null, "secrets": null, "dockerSecurityOptions": null, "memory": null, "memoryReservation": null, "volumesFrom": [], "stopTimeout": null, "image": "359816492978.dkr.ecr.us-east-1.amazonaws.com/nginx", "startTimeout": null, "firelensConfiguration": null, "dependsOn": null, "disableNetworking": null, "interactive": null, "healthCheck": null, "essential": true, "links": [], "hostname": null, "extraHosts": null, "pseudoTerminal": null, "user": null, "readonlyRootFilesystem": null, "dockerLabels": null, "systemControls": null, "privileged": null, "name": "nginx" },
    { "dnsSearchDomains": null, "environmentFiles": null, "logConfiguration": { "logDriver": "awslogs", "secretOptions": null, "options": { "awslogs-group": "/ecs/v1-stage", "awslogs-region": "us-east-1", "awslogs-stream-prefix": "ecs" } }, "entryPoint": null, "portMappings": [ { "hostPort": 9000, "protocol": "tcp", "containerPort": 9000 } ], "command": null, "linuxParameters": null, "cpu": 0, "environment": [], "resourceRequirements": null, "ulimits": null, "dnsServers": null, "mountPoints": [], "workingDirectory": null, "secrets": null, "dockerSecurityOptions": null, "memory": null, "memoryReservation": null, "volumesFrom": [], "stopTimeout": null, "image": "359816492978.dkr.ecr.us-east-1.amazonaws.com/php", "startTimeout": null, "firelensConfiguration": null, "dependsOn": null, "disableNetworking": null, "interactive": null, "healthCheck": null, "essential": true, "links": [], "hostname": null, "extraHosts": null, "pseudoTerminal": null, "user": null, "readonlyRootFilesystem": null, "dockerLabels": null, "systemControls": null, "privileged": null, "name": "php" }
  ],
  "placementConstraints": [],
  "memory": "1024",
  "taskRoleArn": "arn:aws:iam::359816492978:role/ecsTaskExecutionRole",
  "compatibilities": [ "EC2", "FARGATE" ],
  "taskDefinitionArn": "arn:aws:ecs:us-east-1:359816492978:task-definition/v1-stage:5",
  "family": "v1-stage",
  "requiresAttributes": [
    { "targetId": null, "targetType": null, "value": null, "name": "com.amazonaws.ecs.capability.logging-driver.awslogs" },
    { "targetId": null, "targetType": null, "value": null, "name": "ecs.capability.execution-role-awslogs" },
    { "targetId": null, "targetType": null, "value": null, "name": "com.amazonaws.ecs.capability.ecr-auth" },
    { "targetId": null, "targetType": null, "value": null, "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19" },
    { "targetId": null, "targetType": null, "value": null, "name": "com.amazonaws.ecs.capability.task-iam-role" },
    { "targetId": null, "targetType": null, "value": null, "name": "ecs.capability.execution-role-ecr-pull" },
    { "targetId": null, "targetType": null, "value": null, "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18" },
    { "targetId": null, "targetType": null, "value": null, "name": "ecs.capability.task-eni" }
  ],
  "pidMode": null,
  "requiresCompatibilities": [ "FARGATE" ],
  "networkMode": "awsvpc",
  "runtimePlatform": { "operatingSystemFamily": "LINUX", "cpuArchitecture": null },
  "cpu": "512",
  "revision": 5,
  "status": "ACTIVE",
  "inferenceAccelerators": null,
  "proxyConfiguration": null,
  "volumes": []
}
```
1 answer · 0 votes · 11 views · asked 6 days ago

Occasionally getting "MongoServerSelectionError: Server selection timed out..." errors

Hi,

We have a Lambda application that uses DocumentDB as the database layer. The Lambdas are in the same VPC as the DocumentDB cluster, and we're able to connect and do all query (CRUD) operations as normal. The cluster is a simple cluster with one db.t4g.medium instance.

One of the Lambdas is triggered by an SNS queue and gets executed ~1M times over a 24h period. There is a database query involved in each one, and the vast majority of these executions go fine. The MongoClient is created outside of the handler in a separate file as detailed here: https://www.mongodb.com/docs/atlas/manage-connections-aws-lambda/ so that "warm" Lambda executions re-use the same connection. Our Lambdas are executed as async handlers, not using a callback.

The MongoClient itself is created in its own file like so:

```
const uri = `mongodb://${process.env.DB_USER}:${process.env.DB_PASSWORD}@${process.env.DB_ENDPOINT}:${process.env.DB_PORT}/?tls=true&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false`;

const client = new MongoClient(uri, {
  tlsCAFile: 'certs/rds-combined-ca-bundle.pem'
});

export const mongoClient = client.connect()
```

A sample handler would be something like this (TypeScript):

```
import { mongoClient } from "./mongo.client";

const DB_NAME = 'MyDB';

export const snsHandler = async (event: SNSEvent): Promise<void> => {
  const notif = JSON.parse(event.Records[0].Sns.Message);
  const item = await mongoClient
    .then(client => client.db(DB_NAME).collection(notif.collection).findOne({ _id: notif.id }))
    .catch(err => {
      console.error(`Couldn't find item with id ${notif.id} from collection ${notif.collection}`, err)
      return null;
    })

  // do something with item
}
```

Every so often (~100 times a day), we get specific errors along the lines of:

```
MongoServerSelectionError: Server selection timed out after 30000 ms
    at Timeout._onTimeout (/var/task/src/settlement.js:5:157446)
    at listOnTimeout (internal/timers.js:557:17)
    at processTimers (internal/timers.js:500:7)
```

or

```
[MongoClient] Error when connecting to mongo xg [MongoServerSelectionError]: Server selection timed out after 30000 ms
    at Timeout._onTimeout (/var/task/src/settlement.js:5:157446)
    at listOnTimeout (internal/timers.js:557:17)
    at processTimers (internal/timers.js:500:7) {
  reason: To {
    type: 'ReplicaSetNoPrimary',
    servers: Map(1) { '[REDACTED].docdb.amazonaws.com:[REDACTED]' => [ry] },
    stale: false,
    compatible: true,
    heartbeatFrequencyMS: 10000,
    localThresholdMS: 15,
    setName: 'rs0',
    logicalSessionTimeoutMinutes: undefined
  },
  code: undefined,
  [Symbol(errorLabels)]: Set(0) {}
}
```

or

```
Lambda exited with error: exit status 128 Runtime.ExitError
```

In the Monitoring tab of the DocumentDB instance, the CPU doesn't go higher than 10%, and database connections peak at ~170 (the connection limit on the t4g.medium is 500, unless I'm mistaken), with an average of around 30-40. For the Lambda itself, the max concurrent executions peak at ~100. The errors aren't correlated with the peaks - they can happen at any time of the day.

Can anyone provide any insight as to why the connection might be timing out from time to time, please? The default parameters of the MongoClient should keep the connection alive as long as the Lambda is still active, and we don't seem to be close enough to the max connection limit. I'm assuming the way we have it set up is wrong, but I'm not sure how to go about fixing it.

Thanks
1 answer · 0 votes · 21 views · asked 6 days ago

Unknown reason for API Gateway WebSocket LimitExceededException

We have several API Gateway WebSocket APIs, all regional. As their usage has gone up, the most-used one has started getting LimitExceededException when we send data from Lambda, through the socket, to the connected browsers. We are using the JavaScript SDK's [postToConnection](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ApiGatewayManagementApi.html#postToConnection-property) function. The usual behavior is that we don't get this error at all; then we will get several hundred spread out over 2-4 minutes.

The only documentation we've been able to find that may be related to this limit is the [account-level quota](https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-account-level-limits-table) of 10,000 requests per second (and we're not sure if that's the actual limit we should be looking at). If that is the limit, the problem is that we are nowhere near it. For a single deployed API we're hitting a maximum of 3,000 messages sent through the socket **per minute**, with an overall account total of about 5,000 per minute - nowhere near 10,000 per second.

The only thing we think may be causing it is that we have a "large" number of messages going through the socket relative to the number of connected clients. For the API that's maxing out at about 3,000 messages per minute, we usually have 2-8 connected clients. Our only guess is that there may be a lower limit on the number of messages per second we can send to a specific socket connection; however, we cannot find any docs on this.

Thanks for any help anyone can provide.
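For reference, one common way to cope with bursts like this is to treat the throttling error from `postToConnection` as retryable and back off per connection. A rough sketch in Python with boto3 (the question uses the JavaScript SDK, which exposes the same `PostToConnection` operation); the callback URL is a placeholder:

```
import time
import boto3
from botocore.exceptions import ClientError

# Placeholder callback URL of the WebSocket stage,
# e.g. https://abc123.execute-api.us-east-1.amazonaws.com/production
client = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.us-east-1.amazonaws.com/production",
)

def send_with_backoff(connection_id, data, attempts=5):
    """Post to one connection, backing off when the API throttles us."""
    for attempt in range(attempts):
        try:
            client.post_to_connection(ConnectionId=connection_id, Data=data)
            return
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code in ("LimitExceededException", "ThrottlingException"):
                time.sleep((2 ** attempt) * 0.1)  # 0.1s, 0.2s, 0.4s, ...
                continue
            raise  # e.g. GoneException for a stale connection is not retried here
    raise RuntimeError(f"still throttled after {attempts} attempts")
```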
1 answer · 0 votes · 35 views · asked 10 days ago

Using MSK as a trigger for a Lambda with SASL/SCRAM Authentication

Hi,

I have set up an MSK cluster with SASL/SCRAM authentication and stored the username and password in a secret using AWS Secrets Manager. Now I am trying to set a topic in the MSK cluster as an event source for a Lambda function. I am following this documentation: https://aws.amazon.com/blogs/compute/using-amazon-msk-as-an-event-source-for-aws-lambda/

However, the above documentation is for the unauthenticated protocol, so I tried to add the authentication and the secret myself. I also added a policy to the execution role of the Lambda function that lets it read the secret value:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:*"
      ],
      "Resource": [
        "arn:aws:secretsmanager:****:*******:secret:*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "secretsmanager:ListSecrets",
      "Resource": "*"
    }
  ]
}
```

When I try to add the trigger, I get the error:

```
An error occurred when creating the trigger: Cannot access secret manager value arn:aws:secretsmanager:*****:*****:secret:*******. Please ensure the role can perform the 'secretsmanager:GetSecretValue' action on your broker in IAM. (Service: AWSLambda; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: ****; Proxy: null)
```

I am not able to understand this error, since I have included in the policy all the actions from "secretsmanager" on all the resources in my account. Can someone help?
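For comparison, the same trigger can be created directly against the Lambda API, where the SASL/SCRAM secret is passed as a source access configuration. A sketch with boto3 (all ARNs, names, and the topic are placeholders); note that if the secret is encrypted with a customer-managed KMS key, the execution role may also need `kms:Decrypt` on that key:

```
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kafka:region:111122223333:cluster/my-cluster/abc-123",  # placeholder
    FunctionName="my-msk-consumer",                                                 # placeholder
    Topics=["my-topic"],                                                            # placeholder
    StartingPosition="LATEST",
    SourceAccessConfigurations=[
        {
            # SASL/SCRAM credentials stored in Secrets Manager.
            "Type": "SASL_SCRAM_512_AUTH",
            "URI": "arn:aws:secretsmanager:region:111122223333:secret:my-msk-secret",  # placeholder
        }
    ],
)
print(response["UUID"], response["State"])
```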
2 answers · 0 votes · 22 views · asked 10 days ago

Lambda function as image, how to find your handler URI

Hello,

I have followed all of the tutorials on how to build an AWS Lambda function as a container image, and I am also using the AWS SAM SDK. What I don't understand is how to figure out the endpoint URL mapping from within my image to the Lambda function.

For example, in my Docker image I am using the AWS Python 3.9 base image, where I install some other packages and my Python requirements, and my handler is defined as:

`summarizer_function_lambda.postHandler`

The Python file being copied into the image has the same name as above, but without the `.postHandler`.

My AWS SAM template has:

```
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: AWS Lambda dist-bart-summarizer function

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  DistBartSum:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      FunctionName: DistBartSum
      ImageUri: <my-image-url>
      PackageType: Image
      Events:
        SummarizerFunction:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /postHandler
            Method: POST
```

So what is my actual URI path for my POST call, either locally or once deployed on Lambda? When I try a curl command I get `{"message": "Internal server error"}`:

```
curl -XPOST "https://<my-aws-uri>/Prod/postHandler/" -d '{"content": "Test data.\r\n"}'
```

So I guess my question is: how do you "map" your handler definitions from within a container all the way to the endpoint URI?
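Two separate mappings are involved here: the `Path`/`Method` on the `Api` event defines the URL (for the deployed default stage that is `https://<api-id>.execute-api.<region>.amazonaws.com/Prod/postHandler`), while the handler string only tells the Lambda runtime which function inside the image to run; API Gateway never sees the handler name. A generic "Internal server error" from the proxy integration usually means the function raised or returned something other than the expected proxy response shape. A hypothetical sketch of what `postHandler` in `summarizer_function_lambda.py` would need to return:

```
# summarizer_function_lambda.py -- hypothetical handler body
import json


def postHandler(event, context):
    # With the Api (proxy) event source, the request body arrives as a string.
    body = json.loads(event.get("body") or "{}")
    content = body.get("content", "")

    # ... run the actual summarization here (omitted) ...

    # The proxy integration requires this shape; returning anything else
    # surfaces as {"message": "Internal server error"} at the endpoint.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"summary": content[:100]}),
    }
```

Locally, `sam local start-api` should serve the same route at `http://127.0.0.1:3000/postHandler` (no `/Prod` stage prefix).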
2 answers · 0 votes · 37 views · asked 11 days ago

Athena Error: Permission Denied on S3 Path.

I am trying to execute Athena queries from a Lambda function, but I am getting this error:

`Athena Query Failed to run with Error Message: Permission denied on S3 path: s3://bkt_logs/apis/2020/12/16/14`

The bucket `bkt_logs` is the bucket used by AWS Glue crawlers to crawl through all the sub-folders and populate the Athena table I am querying. `bkt_logs` is also an encrypted bucket.

These are the policies that I have assigned to the Lambda:

```
[
  {
    "Action": [
      "s3:Get*",
      "s3:List*",
      "s3:PutObject",
      "s3:DeleteObject"
    ],
    "Resource": "arn:aws:s3:::athena-query-results/*",
    "Effect": "Allow",
    "Sid": "AllowS3AccessToSaveAndReadQueryResults"
  },
  {
    "Action": [
      "s3:*"
    ],
    "Resource": "arn:aws:s3:::bkt_logs/*",
    "Effect": "Allow",
    "Sid": "AllowS3AccessForGlueToReadLogs"
  },
  {
    "Action": [
      "athena:GetQueryExecution",
      "athena:StartQueryExecution",
      "athena:StopQueryExecution",
      "athena:GetWorkGroup",
      "athena:GetDatabase",
      "athena:BatchGetQueryExecution",
      "athena:GetQueryResults",
      "athena:GetQueryResultsStream",
      "athena:GetTableMetadata"
    ],
    "Resource": [
      "*"
    ],
    "Effect": "Allow",
    "Sid": "AllowAthenaAccess"
  },
  {
    "Action": [
      "glue:GetTable",
      "glue:GetDatabase",
      "glue:GetPartitions"
    ],
    "Resource": [
      "*"
    ],
    "Effect": "Allow",
    "Sid": "AllowGlueAccess"
  },
  {
    "Action": [
      "kms:CreateGrant",
      "kms:DescribeKey",
      "kms:Decrypt"
    ],
    "Resource": [
      "*"
    ],
    "Effect": "Allow",
    "Sid": "AllowKMSAccess"
  }
]
```

What seems to be wrong here? What should I do to resolve this issue?
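Because Athena reads the underlying S3 data with the permissions of the principal that runs the query, it can help to rule S3 and KMS in or out by reading one of the failing objects directly with the same Lambda role. A small diagnostic sketch (bucket and prefix taken from the error message); one thing worth checking is that `s3:ListBucket` is evaluated against the bucket ARN itself (`arn:aws:s3:::bkt_logs`), which the policy shown only covers for `bkt_logs/*` objects:

```
import boto3

s3 = boto3.client("s3")

bucket = "bkt_logs"
prefix = "apis/2020/12/16/14/"   # path from the error message

# ListBucket is authorized against the bucket ARN, not the object ARNs.
listing = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=5)
keys = [obj["Key"] for obj in listing.get("Contents", [])]
print("visible keys:", keys)

# GetObject exercises both the object permission and kms:Decrypt on the
# specific key the bucket is encrypted with.
if keys:
    obj = s3.get_object(Bucket=bucket, Key=keys[0])
    print("read", len(obj["Body"].read()), "bytes")
```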
1 answer · 0 votes · 49 views · asked 11 days ago

Problem uploading media to AWS S3 with Django Storages / Boto3 (from a website on Lambda)

Hi all! I have a Django website which is deployed on AWS Lambda. All the static/media files are stored in an S3 bucket. I managed to serve static files from S3 and it works fine. However, when trying to upload media through the admin (I was trying to add an article with a picture attached to it), I get the message "Endpoint request timed out".

Here is my AWS and storage configuration:

**ukraine101.aws.utils.py**

```
from storages.backends.s3boto3 import S3Boto3Storage

StaticRootS3BotoStorage = lambda: S3Boto3Storage(location='static')
MediaRootS3BotoStorage = lambda: S3Boto3Storage(location='media')
```

**settings.py**

```
STATICFILES_DIRS = [BASE_DIR / "static"]
STATIC_URL = 'https://<my-bucket-name>.s3.amazonaws.com/'
MEDIA_URL = 'https://<my-bucket-name>.s3.amazonaws.com/media/'
MEDIA_ROOT = MEDIA_URL
DEFAULT_FILE_STORAGE = 'ukraine101.aws.utils.MediaRootS3BotoStorage'
STATICFILES_STORAGE = 'ukraine101.aws.utils.StaticRootS3BotoStorage'
AWS_STORAGE_BUCKET_NAME = '<my-bucket-name>'
AWS_S3_REGION_NAME = 'us-east-1'
AWS_ACCESS_KEY_ID = '<my-key-i-dont-show>'
AWS_SECRET_ACCESS_KEY = '<my-secret-key-i-dont-show>'
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
AWS_S3_VERIFY = True
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATICFILES_LOCATION = 'static'
```

**My Article model:**

```
class Article(models.Model):
    title = models.CharField(max_length=250, )
    summary = models.TextField(blank=False, null=False, )
    image = models.ImageField(blank=False, null=False, upload_to='articles/', )
    text = RichTextField(blank=False, null=False, )
    category = models.ForeignKey(Category, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
    featured = models.BooleanField(default=False)
    date_created = models.DateField(auto_now_add=True)
    slug = AutoSlugField(populate_from='title')
    related_book = models.ForeignKey(Book, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)

    def get_absolute_url(self):
        return reverse("articles:article-detail", kwargs={"slug": self.slug})

    def get_comments(self):
        return Comment.objects.filter(article=self.id)

    author = models.ForeignKey(User, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
```

**AWS bucket policy:**

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetObjectAcl"
      ],
      "Resource": "arn:aws:s3:::<my-bucket-name>/*"
    }
  ]
}
```

**CORS:**

```
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "POST", "PUT", "HEAD"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
```

**User permissions policies (there are two attached):**

Policy 1:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions"
      ],
      "Resource": "arn:aws:s3:::<my-bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*Object*",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::<my-bucket-name>/*"
    }
  ]
}
```

Policy 2:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "s3-object-lambda:*"
      ],
      "Resource": [
        "arn:aws:s3:::<my-bucket-name>",
        "arn:aws:s3:::<my-bucket-name>/*"
      ]
    }
  ]
}
```

Please, if someone knows what can be wrong and why this timeout is happening, help me.
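"Endpoint request timed out" typically comes from API Gateway giving up (its integration timeout is roughly 29 seconds), so something in the request, quite possibly the upload to S3, is exceeding that limit rather than being rejected outright. One way to isolate it from Django/django-storages is to time a plain boto3 upload from inside the same Lambda. A sketch, assuming a placeholder bucket name:

```
import io
import time
import boto3
from botocore.config import Config

# Short timeouts so a blocked network path fails fast instead of hanging
# until API Gateway's limit is reached. Bucket name is a placeholder.
s3 = boto3.client("s3", config=Config(connect_timeout=5, read_timeout=30,
                                      retries={"max_attempts": 2}))

start = time.time()
s3.upload_fileobj(io.BytesIO(b"x" * 512 * 1024), "<my-bucket-name>",
                  "media/upload-probe.bin")
print(f"uploaded 512 KiB in {time.time() - start:.1f}s")
```

If this probe also stalls, the problem is more likely the Lambda's network path to S3 than the storage configuration shown above.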
1 answer · 0 votes · 12 views · asked 11 days ago

Invalid security token error when executing nested step function on Step Functions Local

Are nested step functions supported on AWS Step Functions Local?

I am trying to create 2 step functions, where the outer one executes the inner one. However, when trying to execute the outer step function, I am getting an error: "The security token included in the request is invalid".

To reproduce, use the latest `amazon/aws-stepfunctions-local:1.10.1` Docker image and launch the container with the following command:

```sh
docker run -p 8083:8083 -e AWS_DEFAULT_REGION=us-east-1 -e AWS_ACCESS_KEY_ID=TESTID -e AWS_SECRET_ACCESS_KEY=TESTKEY amazon/aws-stepfunctions-local
```

Then create a simple HelloWorld _inner_ step function in the Step Functions Local container:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 create-state-machine --definition "{\
  \"Comment\": \"A Hello World example of the Amazon States Language using a Pass state\",\
  \"StartAt\": \"HelloWorld\",\
  \"States\": {\
    \"HelloWorld\": {\
      \"Type\": \"Pass\",\
      \"End\": true\
    }\
  }}" --name "HelloWorld" --role-arn "arn:aws:iam::012345678901:role/DummyRole"
```

Then add a simple _outer_ step function that executes the HelloWorld one:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 create-state-machine --definition "{\
  \"Comment\": \"OuterTestComment\",\
  \"StartAt\": \"InnerInvoke\",\
  \"States\": {\
    \"InnerInvoke\": {\
      \"Type\": \"Task\",\
      \"Resource\": \"arn:aws:states:::states:startExecution\",\
      \"Parameters\": {\
        \"StateMachineArn\": \"arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld\"\
      },\
      \"End\": true\
    }\
  }}" --name "HelloWorldOuter" --role-arn "arn:aws:iam::012345678901:role/DummyRole"
```

Finally, start execution of the outer step function:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 start-execution --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorldOuter
```

The execution fails with the _The security token included in the request is invalid_ error in the logs:

```
arn:aws:states:us-east-1:123456789012:execution:HelloWorldOuter:b9627a1f-55ed-41a6-9702-43ffe1cacc2c : {"Type":"TaskSubmitFailed","PreviousEventId":4,"TaskSubmitFailedEventDetails":{"ResourceType":"states","Resource":"startExecution","Error":"StepFunctions.AWSStepFunctionsException","Cause":"The security token included in the request is invalid. (Service: AWSStepFunctions; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: ad8a51c0-b8bf-42a0-a78d-a24fea0b7823; Proxy: null)"}}
```

Am I doing something wrong? Is any additional configuration necessary?
0 answers · 0 votes · 18 views · asked 12 days ago

Lambda function throwing TooManyRequestsException: Rate exceeded

When the Lambda function is invoked, occasionally I see the following error, even though there is no load running and not many Lambda functions are executing. The throttling and quota settings are at their defaults in the Mumbai region, and this error is observed even when no load is running. How do I determine which configuration needs to be increased to address this problem?

```
2022-05-17T10:01:13.555Z 84379818-c8b8-44a3-b353-2c9f7f8f5e48 ERROR Invoke Error
{
    "errorType": "TooManyRequestsException",
    "errorMessage": "Rate exceeded",
    "code": "TooManyRequestsException",
    "message": "Rate exceeded",
    "time": "2022-05-17T10:01:13.553Z",
    "requestId": "c3dc9f1b-d7c3-40d5-bec7-78e19dc2e033",
    "statusCode": 400,
    "retryable": true,
    "stack": [
        "TooManyRequestsException: Rate exceeded",
        " at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:52:27)",
        " at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)",
        " at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)",
        " at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:686:14)",
        " at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
        " at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
        " at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
        " at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
        " at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:688:12)",
        " at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:116:18)"
    ]
}
```
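The stack trace shows the error being raised by an AWS SDK call made from inside the handler (`/var/runtime/node_modules/aws-sdk/...`), so some API the function calls is returning the throttle, rather than Lambda refusing the invocation; the relevant knob is usually the retry/backoff configuration of that SDK client or the quota of the service being called. As a general illustration of the pattern only, here is a Python/boto3 sketch (the function in the question uses the Node.js SDK, which has equivalent retry settings, and the actual service being called is not shown):

```
import boto3
from botocore.config import Config

# Adaptive retry mode backs off automatically when the downstream service
# answers with throttling errors such as "Rate exceeded".
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

# Placeholder client: whichever service the handler actually calls
# (DynamoDB, CloudWatch, SSM, ...) would be constructed with this config.
client = boto3.client("dynamodb", config=retry_config)
```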
1 answer · 0 votes · 21 views · asked 12 days ago

firebase-admin experiencing Runtime.ImportModuleError on Lambda (Node v14)

Receiving this error when attempting to use the `firebase-admin` SDK on Node v14 on Lambda:

```
2022-05-13T21:41:25.923Z undefined ERROR Uncaught Exception
{
    "errorType": "Runtime.ImportModuleError",
    "errorMessage": "Error: Cannot find module 'app'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
    "stack": [
        "Runtime.ImportModuleError: Error: Cannot find module 'app'",
        "Require stack:",
        "- /var/runtime/UserFunction.js",
        "- /var/runtime/index.js",
        " at _loadUserApp (/var/runtime/UserFunction.js:202:13)",
        " at Object.module.exports.load (/var/runtime/UserFunction.js:242:17)",
        " at Object.<anonymous> (/var/runtime/index.js:43:30)",
        " at Module._compile (internal/modules/cjs/loader.js:1085:14)",
        " at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)",
        " at Module.load (internal/modules/cjs/loader.js:950:32)",
        " at Function.Module._load (internal/modules/cjs/loader.js:790:12)",
        " at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:75:12)",
        " at internal/main/run_main_module.js:17:47"
    ]
}
```

Here's the code attempting to import and use `firebase-admin`:

```
import firebase from 'firebase-admin';

export const useFirebaseOnServer = () => {
  if (firebase.apps.length === 0) {
    return firebase.initializeApp({
      credential: firebase.credential.cert(require('./serviceAccount.json')),
      databaseURL: 'https://maskotter-f06cb.firebaseio.com',
    });
  }
  return firebase.app();
};
```

And finally my `template.yaml` file:

```
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: >
  maskotter-email-forwarder

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 6
    Tracing: Active

Resources:
  ForwarderFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: forwarder/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      Architectures:
        - x86_64
    Metadata: # Manage esbuild properties
      BuildMethod: esbuild
      BuildProperties:
        Minify: true
        Target: "es2020"
        Sourcemap: true
        EntryPoints:
          - app.ts
```

Been endlessly searching for an answer. Any help would be appreciated!
0 answers · 0 votes · 4 views · asked 16 days ago

`RequestTimeout`s for S3 put requests from a Lambda in a VPC for larger payloads

# Update

I added a VPC gateway endpoint for S3 in the same region (US East 1) and selected the route table that the Lambda uses. But still, the bug persists. Below I've included details regarding my network configuration. The Lambda is located in the "api" subnet.

## Network Configuration

1 VPC

4 subnets:

* public
  * IPv4 CIDR: 10.0.0.0/24
  * route table: public
  * Network ACL: public
* private
  * IPv4 CIDR: 10.0.1.0/24
  * route table: private
  * Network ACL: private
* api
  * IPv4 CIDR: 10.0.4.0/24
  * route table: api
  * Network ACL: api
* private2-required
  * IPv4 CIDR: 10.0.2.0/24
  * route table: public
  * Network ACL: -

3 route tables:

* public
  * Destination: 10.0.0.0/16, Target: local
  * Destination: 0.0.0.0/0, Target: igw-xxxxxxx
  * Destination: ::/0, Target: igw-xxxxxxxx
* private
  * Destination: 10.0.0.0/16, Target: local
* api
  * Destination: 10.0.0.0/16, Target: local
  * Destination: 0.0.0.0/0, Target: nat-xxxxxxxx
  * Destination: pl-xxxxxxxx, Target: vpce-xxxxxxxx (VPC S3 endpoint)

4 network ACLs:

* public
  * inbound rules: All traffic (allow)
  * outbound rules: All traffic (allow)
* private
  * inbound rules:
    * 100: PostgreSQL TCP 5432 10.0.0.48/32 (allow)
    * 101: PostgreSQL TCP 5432 10.0.4.0/24 (allow)
  * outbound rules:
    * 100: Custom TCP TCP 32768-65535 10.0.0.48/32 (allow)
    * 101: Custom TCP TCP 1024-65535 10.0.4.0/24 (allow)
* api
  * inbound rules: All traffic (allow)
  * outbound rules: All traffic (allow)
* \-
  * inbound rules: All traffic (allow)
  * outbound rules: All traffic (allow)

# Update

I increased the timeout of the Lambda to 5 minutes, and the timeout of the PUT request to the S3 bucket to 5 minutes as well. Before this the request itself would time out, but now I'm actually getting a response back from S3: a 400 Bad Request with error code `RequestTimeout` and the message "Your socket connection to the server was not read from or written to within the timeout period."

This exact same code works 100% of the time for a small payload (on the order of 1 KB), but for payloads on the order of 1 MB it starts breaking. There is no logic in _my code_ that does anything differently based on the size of the payload.

I've read similar issues that suggest the problem is the wrong number of bytes being provided in the "Content-Length" header, but I've never provided a value for that header. Furthermore, the Lambda works flawlessly when executed in my local environment. The problem definitely appears to be a networking one. At first glance it might seem like this is just an issue with the Lambda being able to interact with services outside of the VPC, but that's not the case, because the Lambda _does_ work exactly as expected for smaller file sizes (<1 KB). So it's not that it flat out can't communicate with S3. Scratching my head here...

# Original

I use S3 to host images for an application. In my local testing environment the images upload at an acceptable speed. However, when I run the same exact code from an AWS Lambda (in my VPC), the speeds are untenably slow. I've concluded this because I've tested with smaller images (<1 KB) and they work 100% of the time without making any changes to the code. Then I use 1 MB payloads and they fail 98% of the time. I know the request to S3 is the issue because of logs made from within the Lambda that indicate the execution reaches the upload request but almost never successfully passes it (it times out).
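That "socket connection ... not read from or written to within the timeout period" response is S3's way of saying the upload stalled mid-body, which fits a payload-size-dependent network problem. A minimal probe, assuming Python and placeholder bucket/key names (the question doesn't show its upload code or language), that times progressively larger `PutObject` calls from inside the same Lambda:

```
import time
import boto3
from botocore.config import Config

# Fail fast instead of waiting out long hangs; bucket/key are placeholders.
s3 = boto3.client("s3", config=Config(connect_timeout=5, read_timeout=60,
                                      retries={"max_attempts": 1}))

for size in (1_000, 100_000, 1_000_000):
    body = b"x" * size
    start = time.time()
    s3.put_object(Bucket="my-test-bucket", Key=f"probe/{size}.bin", Body=body)
    print(f"{size:>9} bytes uploaded in {time.time() - start:.2f}s")
```

If the 1 MB case stalls here as well, that points at the network path from the "api" subnet to S3 (routing, the gateway endpoint's policy, or MTU) rather than at the application code.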
1 answer · 0 votes · 32 views · asked 17 days ago

Unable to override taskRoleArn when running ECS task from Lambda

I have a Lambda function that is supposed to pass its own permissions to the code running in an ECS task. It looks like this:

```
ecs_parameters = {
    "cluster": ...,
    "launchType": "FARGATE",
    "networkConfiguration": ...,
    "overrides": {
        "taskRoleArn": boto3.client("sts").get_caller_identity().get("Arn"),
        ...
    },
    "platformVersion": "LATEST",
    "taskDefinition": f"my-task-definition-{STAGE}",
}

response = ecs.run_task(**ecs_parameters)
```

When I run this in Lambda, I get this error:

```
"errorMessage": "An error occurred (ClientException) when calling the RunTask operation: ECS was unable to assume the role 'arn:aws:sts::787364832896:assumed-role/my-lambda-role...' that was provided for this task. Please verify that the role being passed has the proper trust relationship and permissions and that your IAM user has permissions to pass this role."
```

If I change the task definition in ECS to use `my-lambda-role` as the task role, it works. It's specifically when I try to override the task role from Lambda that it breaks.

The Lambda role has the `AWSLambdaBasicExecutionRole` policy and also an inline policy that grants it `ecs:runTask` and `iam:PassRole`. It has a trust relationship that looks like:

```
"Effect": "Allow",
"Principal": {
    "Service": [
        "ecs.amazonaws.com",
        "lambda.amazonaws.com",
        "ecs-tasks.amazonaws.com"
    ]
},
"Action": "sts:AssumeRole"
```

The task definition has a policy that grants it `sts:AssumeRole` and `iam:PassRole`, and a trust relationship that looks like:

```
"Effect": "Allow",
"Principal": {
    "Service": "ecs-tasks.amazonaws.com",
    "AWS": "arn:aws:iam::account-ID:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS"
},
"Action": "sts:AssumeRole"
```

How do I allow the Lambda function to pass the role to ECS, and ECS to assume the role it's been given?

P.S. - I know a lot of these permissions are overkill, so let me know if there are any I can get rid of :) Thanks!
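A detail that may matter here: `get_caller_identity()` returns the STS assumed-role ARN (`arn:aws:sts::...:assumed-role/my-lambda-role/...`), which is exactly what the error message quotes, whereas the `taskRoleArn` override expects a plain IAM role ARN (`arn:aws:iam::...:role/my-lambda-role`) that `ecs-tasks.amazonaws.com` can assume and that the caller has `iam:PassRole` on. A sketch of deriving that role ARN from the caller identity (assumes the role has no IAM path):

```
import boto3

sts = boto3.client("sts")
identity = sts.get_caller_identity()

# identity["Arn"] looks like:
#   arn:aws:sts::123456789012:assumed-role/my-lambda-role/<session-name>
# Rebuild the IAM role ARN that the RunTask taskRoleArn override expects:
#   arn:aws:iam::123456789012:role/my-lambda-role
role_name = identity["Arn"].split("/")[1]
task_role_arn = f"arn:aws:iam::{identity['Account']}:role/{role_name}"

# ... then pass task_role_arn in the run_task call from the question, e.g.
# "overrides": {"taskRoleArn": task_role_arn, ...}
```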
2 answers · 1 vote · 21 views · asked 18 days ago