
Questions tagged with AWS Lambda


Unknown reason for API Gateway WebSocket LimitExceededException

We have several API Gateway WebSocket APIs, all regional. As their usage has gone up, the most used one has started getting LimitExceededException when we send data from Lambda, through the socket, to the connected browsers. We are using the JavaScript SDK's [postToConnection](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ApiGatewayManagementApi.html#postToConnection-property) function. The usual pattern is that we will not get this error at all, and then we will get several hundred spread out over 2-4 minutes. The only documentation we've been able to find that may be related to this limit is the [account-level quota](https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-account-level-limits-table) of 10,000 per second (and we're not sure if that's the actual limit we should be looking at). If that is the limit, the problem is that we are nowhere near it. For a single deployed API we're hitting a maximum of 3,000 messages sent through the socket **per minute**, with an overall account total of about 5,000 per minute, so nowhere near the 10,000 per second. The only thing we think may be causing it is that we have a "large" number of messages going through the socket relative to the number of connected clients. For the API that's maxing out at about 3,000 messages per minute, we usually have 2-8 connected clients. Our only guess is that there may be a lower limit on the number of messages per second we can send to a specific socket connection, but we cannot find any docs on this. Thanks for any help anyone can provide.
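
A minimal boto3 sketch of the same send-with-backoff idea (the JavaScript SDK call above has a direct boto3 counterpart); the endpoint URL, connection ID handling, and the set of retryable error codes here are assumptions for illustration, not taken from the affected API:

```python
import time
import boto3
from botocore.exceptions import ClientError

# Hypothetical endpoint: https://{api-id}.execute-api.{region}.amazonaws.com/{stage}
client = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://example-api-id.execute-api.us-east-1.amazonaws.com/prod",
)

def post_with_backoff(connection_id: str, data: bytes, attempts: int = 5) -> None:
    """Send one message to a WebSocket connection, backing off when throttled."""
    delay = 0.1
    for attempt in range(attempts):
        try:
            client.post_to_connection(ConnectionId=connection_id, Data=data)
            return
        except ClientError as err:
            code = err.response["Error"]["Code"]
            retryable = code in ("LimitExceededException", "ThrottlingException")
            if retryable and attempt < attempts - 1:
                time.sleep(delay)
                delay *= 2  # exponential backoff before the next attempt
            else:
                raise
```
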
1 answer · 0 votes · 29 views · asked 2 days ago

Using MSK as trigger to a Lambda with SASL/SCRAM Authentication

Hi, I have set up a MSK cluster with SASL/SCRAM authentication. I have stored the username and password in a secret using AWS Secrets Manager. Now I am trying to set the topic in the MSK cluster as an event source to a Lambda function. In order to do so, I am following this documentation: https://aws.amazon.com/blogs/compute/using-amazon-msk-as-an-event-source-for-aws-lambda/ However the above documentation is for unauthenticated protocol. So I tried to add the authentication and the secret. I also added a policy in the execution role of the Lambda function that lets it read the secret value: "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:*" ], "Resource": [ "arn:aws:secretsmanager:****:*******:secret:*" ] }, { "Effect": "Allow", "Action": "secretsmanager:ListSecrets", "Resource": "*" } ]} When I am trying to add the trigger, I am getting the error: An error occurred when creating the trigger: Cannot access secret manager value arn:aws:secretsmanager:*****:*****:secret:*******. Please ensure the role can perform the 'secretsmanager:GetSecretValue' action on your broker in IAM. (Service: AWSLambda; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: ****; Proxy: null) I am not able to understand this error since I have included in the policy all the Actions from "secretsmanager" on all the Resources in my account. Can someone help?
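
For reference, a sketch of wiring the MSK topic to the function with SASL/SCRAM from code; every ARN, topic, and function name below is a placeholder. Note that if the secret is encrypted with a customer-managed KMS key, the execution role also needs kms:Decrypt on that key, which can produce this error even when the Secrets Manager policy itself is broad:

```python
import boto3

lambda_client = boto3.client("lambda")

# All ARNs and names below are placeholders for illustration only.
response = lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kafka:us-east-1:111122223333:cluster/my-msk/abc-123",
    FunctionName="my-consumer-function",
    Topics=["my-topic"],
    StartingPosition="LATEST",
    SourceAccessConfigurations=[
        {
            "Type": "SASL_SCRAM_512_AUTH",  # or SASL_SCRAM_256_AUTH, matching the cluster setting
            "URI": "arn:aws:secretsmanager:us-east-1:111122223333:secret:AmazonMSK_my-secret",
        }
    ],
)
print(response["UUID"], response["State"])
```
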
2 answers · 0 votes · 13 views · asked 2 days ago

Lambda function as image, how to find your handler URI

Hello, I have followed all of the tutorials on how to build an AWS Lambda function as a container image. I am also using the AWS SAM SDK as well. What I don't understand is how do I figure out my end-point URL mapping from within my image to the Lambda function? For example in my docker image that I am using the AWS Python 3.9 image where I install some other packages and my python requirements and my handler is defined as: summarizer_function_lambda.postHandler My python file being copied into the image is the same name as above but without the .postHandler My AWS SAM Template has: AWSTemplateFormatVersion: "2010-09-09" Transform: AWS::Serverless-2016-10-31 Description: AWS Lambda dist-bart-summarizer function # More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst Globals: Function: Timeout: 3 Resources: DistBartSum: Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction Properties: FunctionName: DistBartSum ImageUri: <my-image-url> PackageType: Image Events: SummarizerFunction: Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api Properties: Path: /postHandler Method: POST So what is my actual URI path to do my POST call either locally or once deployed on Lambda?? When I try and do a CURL command I get an "{"message": "Internal server error"}% " curl -XPOST "https://<my-aws-uri>/Prod/postHandler/" -d '{"content": "Test data.\r\n"}' So I guess my question is how do you "map" your handler definitions from within a container all the way to the end point URI?
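
As a sketch of how the pieces line up: the handler string names `<module>.<function>` inside the image, while the URL path comes only from the SAM `Api` event, so here the deployed endpoint would be `https://<api-id>.execute-api.<region>.amazonaws.com/Prod/postHandler`. A hypothetical `summarizer_function_lambda.py` that returns a proxy-style response (an API-triggered function that returns anything else typically surfaces as `{"message": "Internal server error"}`) might look like this:

```python
# summarizer_function_lambda.py
# The handler string "summarizer_function_lambda.postHandler" means:
# module "summarizer_function_lambda", function "postHandler".
import json

def postHandler(event, context):
    # For an Api event with Path /postHandler, API Gateway delivers the request here;
    # the URL path never references the handler name itself.
    body = json.loads(event.get("body") or "{}")
    summary = body.get("content", "")[:100]  # placeholder for the real summarization

    # A proxy-style response is what API Gateway expects back from the function.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"summary": summary}),
    }
```
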
2 answers · 0 votes · 28 views · asked 2 days ago

Athena Error: Permission Denied on S3 Path.

I am trying to execute athena queries from a lambda function but I am getting this error: `Athena Query Failed to run with Error Message: Permission denied on S3 path: s3://bkt_logs/apis/2020/12/16/14` The bucket `bkt_logs` is the bucket which is used by AWS Glue Crawlers to crawl through all the sub-folders and populate Athena table on which I am querying on. Also, `bkt_logs` is an encrypted bucket. These are the policies that I have assigned to the Lambda. ``` [ { "Action": [ "s3:Get*", "s3:List*", "s3:PutObject", "s3:DeleteObject" ], "Resource": "arn:aws:s3:::athena-query-results/*", "Effect": "Allow", "Sid": "AllowS3AccessToSaveAndReadQueryResults" }, { "Action": [ "s3:*" ], "Resource": "arn:aws:s3:::bkt_logs/*", "Effect": "Allow", "Sid": "AllowS3AccessForGlueToReadLogs" }, { "Action": [ "athena:GetQueryExecution", "athena:StartQueryExecution", "athena:StopQueryExecution", "athena:GetWorkGroup", "athena:GetDatabase", "athena:BatchGetQueryExecution", "athena:GetQueryResults", "athena:GetQueryResultsStream", "athena:GetTableMetadata" ], "Resource": [ "*" ], "Effect": "Allow", "Sid": "AllowAthenaAccess" }, { "Action": [ "glue:GetTable", "glue:GetDatabase", "glue:GetPartitions" ], "Resource": [ "*" ], "Effect": "Allow", "Sid": "AllowGlueAccess" }, { "Action": [ "kms:CreateGrant", "kms:DescribeKey", "kms:Decrypt" ], "Resource": [ "*" ], "Effect": "Allow", "Sid": "AllowKMSAccess" } ] ``` What seems to be wrong here? What should I do to resolve this issue?
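
One way to narrow this down from inside the Lambda is to run the query with an explicit results location and print Athena's own failure reason. A minimal boto3 sketch, with the database, query, and result prefix as placeholders (the result bucket mirrors the `athena-query-results` bucket from the policy above):

```python
import time
import boto3

athena = boto3.client("athena")

# Database, query, and locations below are placeholders for illustration.
start = athena.start_query_execution(
    QueryString="SELECT * FROM api_logs LIMIT 10",
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://athena-query-results/lambda/"},
)
query_id = start["QueryExecutionId"]

# Poll until the query finishes, then surface Athena's own failure reason,
# which usually names the exact S3 path it could not read or write.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]
    if status["State"] in ("SUCCEEDED", "FAILED", "CANCELLED"):
        print(status["State"], status.get("StateChangeReason", ""))
        break
    time.sleep(1)
```
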
1 answer · 0 votes · 34 views · asked 3 days ago

Problem uploading media to AWS S3 with Django Storages / Boto3 (from a website on Lambda)

Hi all! I have a Django website which is deployed on AWS Lambda. All the static/media is stored in the S3 bucket. I managed to serve static from S3 and it works fine, however, when trying to upload media through admin (I was trying to add an article with a pic attached to it), I get a message "Endpoint request timed out". Here is my AWS and storage configuration: **ukraine101.aws.utils.py** ``` from storages.backends.s3boto3 import S3Boto3Storage StaticRootS3BotoStorage = lambda: S3Boto3Storage(location='static') MediaRootS3BotoStorage = lambda: S3Boto3Storage(location='media') ``` **settings.py** ``` STATICFILES_DIRS = [BASE_DIR / "static"] STATIC_URL = 'https://<my-bucket-name>.s3.amazonaws.com/' MEDIA_URL = 'https://<my-bucket-name>.s3.amazonaws.com/media/' MEDIA_ROOT = MEDIA_URL DEFAULT_FILE_STORAGE = 'ukraine101.aws.utils.MediaRootS3BotoStorage' STATICFILES_STORAGE = 'ukraine101.aws.utils.StaticRootS3BotoStorage' AWS_STORAGE_BUCKET_NAME = '<my-bucket-name>' AWS_S3_REGION_NAME = 'us-east-1' AWS_ACCESS_KEY_ID = '<my-key-i-dont-show>' AWS_SECRET_ACCESS_KEY = '<my-secret-key-i-dont-show>' AWS_S3_SIGNATURE_VERSION = 's3v4' AWS_S3_FILE_OVERWRITE = False AWS_DEFAULT_ACL = None AWS_S3_VERIFY = True AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME STATICFILES_LOCATION = 'static' ``` **My Article model:** ``` class Article(models.Model): title = models.CharField(max_length=250, ) summary = models.TextField(blank=False, null=False, ) image = models.ImageField(blank=False, null=False, upload_to='articles/', ) text = RichTextField(blank=False, null=False, ) category = models.ForeignKey(Category, null=True, blank=True, default='', on_delete=models.SET_DEFAULT) featured = models.BooleanField(default=False) date_created = models.DateField(auto_now_add=True) slug = AutoSlugField(populate_from='title') related_book = models.ForeignKey(Book, null=True, blank=True, default='', on_delete=models.SET_DEFAULT) def get_absolute_url(self): return reverse("articles:article-detail", kwargs={"slug": self.slug}) def get_comments(self): return Comment.objects.filter(article=self.id) author = models.ForeignKey(User, null=True, blank=True, default='', on_delete=models.SET_DEFAULT) ``` **AWS bucket policy:** ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "PublicRead", "Effect": "Allow", "Principal": "*", "Action": [ "s3:PutObject", "s3:PutObjectAcl", "s3:GetObject", "s3:GetObjectVersion", "s3:GetObjectAcl" ], "Resource": "arn:aws:s3:::<my-bucket-name>/*" } ] } ``` **CORS:** ``` [ { "AllowedHeaders": [ "*" ], "AllowedMethods": [ "GET", "POST", "PUT", "HEAD" ], "AllowedOrigins": [ "*" ], "ExposeHeaders": [], "MaxAgeSeconds": 3000 } ] ``` **User permissions policies (there are two attached): ** Policy 1: ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets" ], "Resource": "arn:aws:s3:::*" }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads", "s3:ListBucketVersions" ], "Resource": "arn:aws:s3:::<my-bucket-name>" }, { "Effect": "Allow", "Action": [ "s3:*Object*", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload" ], "Resource": "arn:aws:s3:::<my-bucket-name>/*" } ] } ``` Policy 2: ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:*", "s3-object-lambda:*" ], "Resource": [ "arn:aws:s3:::<my-bucket-name>", "arn:aws:s3:::<my-bucket-name>/*" ] } ] } ``` Please, if someone knows what can be wrong and why this timeout is happening, help me.
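
One way to separate the django-storages configuration from the Lambda's network path to S3 is to upload a tiny object directly with boto3 from inside the function. If this minimal sketch also hits the timeout, the problem is the function's connectivity or timeout settings rather than the storage backend; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Hypothetical bucket name; replace with the real media bucket.
    s3.put_object(
        Bucket="my-bucket-name",
        Key="media/upload-test.txt",
        Body=b"connectivity test",
    )
    return {"statusCode": 200, "body": "uploaded"}
```
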
1 answer · 0 votes · 9 views · asked 3 days ago

Lambda function throwing TooManyRequestsException: Rate exceeded

When the Lambda function is invoked, I occasionally see the following error, even though there is no load running and not many Lambda functions are executing. The throttling and quota settings are at their defaults (running in the Mumbai region), yet the error is observed even when no load is running. How do I determine which configuration needs to be increased to address this problem?

2022-05-17T10:01:13.555Z 84379818-c8b8-44a3-b353-2c9f7f8f5e48 ERROR Invoke Error
{
    "errorType": "TooManyRequestsException",
    "errorMessage": "Rate exceeded",
    "code": "TooManyRequestsException",
    "message": "Rate exceeded",
    "time": "2022-05-17T10:01:13.553Z",
    "requestId": "c3dc9f1b-d7c3-40d5-bec7-78e19dc2e033",
    "statusCode": 400,
    "retryable": true,
    "stack": [
        "TooManyRequestsException: Rate exceeded",
        "    at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:52:27)",
        "    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)",
        "    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)",
        "    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:686:14)",
        "    at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
        "    at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
        "    at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
        "    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
        "    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:688:12)",
        "    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:116:18)"
    ]
}
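
The stack trace points at an AWS SDK call made from inside the function (the Node `aws-sdk` JSON protocol layer) rather than at Lambda invocation throttling, so one mitigation is client-side retry with backoff on that call. A boto3 sketch of the same idea, offered only as an illustration since the function above is Node; the example service client is a placeholder:

```python
import boto3
from botocore.config import Config

# Retry throttled API calls with client-side backoff instead of failing immediately.
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

# Hypothetical example client; use whichever service the function actually calls.
cloudwatch = boto3.client("cloudwatch", config=retry_config)

def lambda_handler(event, context):
    # Throttled calls ("Rate exceeded") are now retried with increasing delays.
    return cloudwatch.list_metrics(Namespace="AWS/Lambda")
```
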
1 answer · 0 votes · 16 views · asked 4 days ago

firebase-admin experiencing Runtime.ImportModuleError on Lambda (Node v14)

Receiving this error when attempting to use the `firebase-admin` SDK on Node v14 on Lambda. ``` 2022-05-13T21:41:25.923Z undefined ERROR Uncaught Exception { "errorType": "Runtime.ImportModuleError", "errorMessage": "Error: Cannot find module 'app'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js", "stack": [ "Runtime.ImportModuleError: Error: Cannot find module 'app'", "Require stack:", "- /var/runtime/UserFunction.js", "- /var/runtime/index.js", " at _loadUserApp (/var/runtime/UserFunction.js:202:13)", " at Object.module.exports.load (/var/runtime/UserFunction.js:242:17)", " at Object.<anonymous> (/var/runtime/index.js:43:30)", " at Module._compile (internal/modules/cjs/loader.js:1085:14)", " at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)", " at Module.load (internal/modules/cjs/loader.js:950:32)", " at Function.Module._load (internal/modules/cjs/loader.js:790:12)", " at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:75:12)", " at internal/main/run_main_module.js:17:47" ] } ``` Here's the code attempting to import and use lambda: ``` import firebase from 'firebase-admin'; export const useFirebaseOnServer = () => { if (firebase.apps.length === 0) { return firebase.initializeApp({ credential: firebase.credential.cert(require('./serviceAccount.json')), databaseURL: 'https://maskotter-f06cb.firebaseio.com', }); } return firebase.app(); }; ``` And finally my `template.yaml` file: ``` AWSTemplateFormatVersion: "2010-09-09" Transform: AWS::Serverless-2016-10-31 Description: > maskotter-email-forwarder # More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst Globals: Function: Timeout: 6 Tracing: Active Resources: ForwarderFunction: Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction Properties: CodeUri: forwarder/ Handler: app.lambdaHandler Runtime: nodejs14.x Architectures: - x86_64 Metadata: # Manage esbuild properties BuildMethod: esbuild BuildProperties: Minify: true Target: "es2020" Sourcemap: true EntryPoints: - app.ts ``` Been endlessly searching for an answer. Any help would be appreciated!
0 answers · 0 votes · 4 views · asked 7 days ago

`RequestTimeout`s for S3 put requests from a Lambda in a VPC for larger payloads

# Update I added a VPC gateway endpoint for S3 in the same region (US East 1). I selected the route table for it that the lambda uses. But still, the bug persists. Below I've included details regarding my network configuration. The lambda is located in the "api" subnet. ## Network Configuration 1 VPC 4 subnets: * public &nbsp;&nbsp;&nbsp;&nbsp;IPv4 CIDR: 10.0.0.0/24 &nbsp;&nbsp;&nbsp;&nbsp;route table: public &nbsp;&nbsp;&nbsp;&nbsp;Network ACL: public * private &nbsp;&nbsp;&nbsp;&nbsp;IPv4 CIDR: 10.0.1.0/24 &nbsp;&nbsp;&nbsp;&nbsp;route table: private &nbsp;&nbsp;&nbsp;&nbsp;Network ACL: private * api &nbsp;&nbsp;&nbsp;&nbsp;IPv4 CIDR: 10.0.4.0/24 &nbsp;&nbsp;&nbsp;&nbsp;route table: api &nbsp;&nbsp;&nbsp;&nbsp;Network ACL: api * private2-required &nbsp;&nbsp;&nbsp;&nbsp;IPv4 CIDR: 10.0.2.0/24 &nbsp;&nbsp;&nbsp;&nbsp;route table: public &nbsp;&nbsp;&nbsp;&nbsp;Network ACL: - 3 route tables: * public &nbsp;&nbsp;&nbsp;&nbsp;Destination: 10.0.0.0/16 &nbsp;&nbsp;&nbsp;&nbsp;Target: local &nbsp;&nbsp;&nbsp;&nbsp;Destination: 0.0.0.0/0 &nbsp;&nbsp;&nbsp;&nbsp;Target: igw-xxxxxxx &nbsp;&nbsp;&nbsp;&nbsp;Destination: ::/0 &nbsp;&nbsp;&nbsp;&nbsp;Target: igw-xxxxxxxx * private &nbsp;&nbsp;&nbsp;&nbsp;Destination: 10.0.0.0/16 &nbsp;&nbsp;&nbsp;&nbsp;Target: local * api &nbsp;&nbsp;&nbsp;&nbsp;Destination: 10.0.0.0/16 &nbsp;&nbsp;&nbsp;&nbsp;Target: local &nbsp;&nbsp;&nbsp;&nbsp;Destination: 0.0.0.0/0 &nbsp;&nbsp;&nbsp;&nbsp;Target: nat-xxxxxxxx &nbsp;&nbsp;&nbsp;&nbsp;Destination: pl-xxxxxxxx &nbsp;&nbsp;&nbsp;&nbsp;Target: vpce-xxxxxxxx &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(VPC S3 endpoint) 4 network ACLs * public &nbsp;&nbsp;&nbsp;&nbsp;inbound rules: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;All traffic (allow) &nbsp;&nbsp;&nbsp;&nbsp;outbound rules: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;All traffic (allow) * private &nbsp;&nbsp;&nbsp;&nbsp;inbound rules: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;100: PostgreSQL TCP 5432 10.0.0.48/32 (allow) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;101: PostgreSQL TCP 5432 10.0.4.0/24 (allow) &nbsp;&nbsp;&nbsp;&nbsp;outbound rules: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;100: Custom TCP TCP 32768-65535 10.0.0.48/32 (allow) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;101: Custom TCP TCP 1024-65535 10.0.4.0/24 (allow) * api &nbsp;&nbsp;&nbsp;&nbsp;inbound rules: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;All traffic (allow) &nbsp;&nbsp;&nbsp;&nbsp;outbound rules: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;All traffic (allow) * \- &nbsp;&nbsp;&nbsp;&nbsp;inbound rules: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;All traffic (allow) &nbsp;&nbsp;&nbsp;&nbsp;outbound rules: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;All traffic (allow) # Update # I increased the timeout of the lambda to 5 minutes, and the timeout of the PUT request to the S3 bucket to 5 minutes as well. Before this the request itself would timeout, but now I'm actually getting a response back from S3. It is a 400 Bad Request response. The error code is `RequestTimeout`. And the message in the payload of the response is "Your socket connection to the server was not read from or written to within the timeout period." This exact same code works 100% of the time for a small payload (on the order of 1KB), but apparently for payloads on the order of 1MB it starts breaking. There is no logic in _my code_ that does anything differently based on the size of the payload. 
I've read similar issues that suggest the issue is with the wrong number of bytes being provided in the "content-length" header, but I've never provided a value for that header. Furthermore, the lambda works flawlessly when executed in my local environment. The problem definitely appears to be a networking one. At first glance it might seem like this is just an issue with the lambda being able to interact with services outside of the VPC, but that's not the case because the lambda _does_ work exactly as expected for smaller file sizes (<1KB). So it's not that it flat out can't communicate with S3. Scratching my head here... # Original # I use S3 to host images for an application. In my local testing environment the images upload at an acceptable speed. However, when I run the same exact code from an AWS Lambda (in my VPC), the speeds are untenably slow. I've concluded this because I've tested with smaller images (< 1KB) and they work 100% of the time without making any changes to the code. Then I use 1MB sized payloads and they fail 98% percent of the time. I know the request to S3 is the issue because of logs made from within the Lambda that indicate the execution reaches the upload request, but — almost — never successfully passes it (times out).
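
A minimal boto3 sketch that makes the client-side limits explicit (socket timeouts, retries, and the multipart threshold) can help confirm whether the ~1 MB body is being streamed within the socket timeout from inside the VPC; the bucket and key are placeholders:

```python
import io
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Generous socket timeouts plus standard retries, to rule out client-side limits.
s3 = boto3.client(
    "s3",
    config=Config(connect_timeout=10, read_timeout=120,
                  retries={"max_attempts": 5, "mode": "standard"}),
)

# Force multipart only above 5 MB; a 1 MB payload still goes as a single PUT.
transfer = TransferConfig(multipart_threshold=5 * 1024 * 1024)

def lambda_handler(event, context):
    payload = io.BytesIO(b"x" * (1024 * 1024))  # ~1 MB test body
    # Hypothetical bucket/key, for illustration only.
    s3.upload_fileobj(payload, "my-image-bucket", "uploads/test.bin", Config=transfer)
    return {"statusCode": 200}
```
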
1 answer · 0 votes · 32 views · asked 9 days ago

Unable to override taskRoleArn when running ECS task from Lambda

I have a Lambda function that is supposed to pass its own permissions to the code running in an ECS task. It looks like this: ``` ecs_parameters = { "cluster": ..., "launchType": "FARGATE", "networkConfiguration": ..., "overrides": { "taskRoleArn": boto3.client("sts").get_caller_identity().get("Arn"), ... }, "platformVersion": "LATEST", "taskDefinition": f"my-task-definition-{STAGE}", } response = ecs.run_task(**ecs_parameters) ``` When I run this in Lambda, i get this error: ``` "errorMessage": "An error occurred (ClientException) when calling the RunTask operation: ECS was unable to assume the role 'arn:aws:sts::787364832896:assumed-role/my-lambda-role...' that was provided for this task. Please verify that the role being passed has the proper trust relationship and permissions and that your IAM user has permissions to pass this role." ``` If I change the task definition in ECS to use `my-lambda-role` as the task role, it works. It's specifically when I try to override the task role from Lambda that it breaks. The Lambda role has the `AWSLambdaBasicExecutionRole` policy and also an inline policy that grants it `ecs:runTask` and `iam:PassRole`. It has a trust relationship that looks like: ``` "Effect": "Allow", "Principal": { "Service": [ "ecs.amazonaws.com", "lambda.amazonaws.com", "ecs-tasks.amazonaws.com" ] }, "Action": "sts:AssumeRole" ``` The task definition has a policy that grants it `sts:AssumeRole` and `iam:PassRole`, and a trust relationship that looks like: ``` "Effect": "Allow", "Principal": { "Service": "ecs-tasks.amazonaws.com", "AWS": "arn:aws:iam::account-ID:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS" }, "Action": "sts:AssumeRole" ``` How do I allow the Lambda function to pass the role to ECS, and ECS to assume the role it's been given? P.S. - I know a lot of these permissions are overkill, so let me know if there are any I can get rid of :) Thanks!
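
The ARN in the error is an STS assumed-role ARN (`arn:aws:sts::...:assumed-role/...`), which is what `get_caller_identity` returns inside Lambda, whereas `taskRoleArn` expects an IAM role ARN that `ecs-tasks.amazonaws.com` can assume. A sketch of the override using a dedicated, pre-created task role; all names and ARNs below are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical role created for the task itself (trust policy allows ecs-tasks.amazonaws.com);
# this is an IAM role ARN, not the STS assumed-role ARN the Lambda runs under.
TASK_ROLE_ARN = "arn:aws:iam::111122223333:role/my-ecs-task-role"

def lambda_handler(event, context):
    response = ecs.run_task(
        cluster="my-cluster",
        launchType="FARGATE",
        taskDefinition="my-task-definition-dev",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
        overrides={"taskRoleArn": TASK_ROLE_ARN},
    )
    return response["tasks"][0]["taskArn"]
```
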
2 answers · 1 vote · 16 views · asked 9 days ago

FTP Transfer Family, FTPS, TLS resume failed

We have:

- an AWS Transfer Family server with the FTPS protocol
- a custom hostname and a valid ACM certificate which is attached to the FTP server
- a Lambda for the identity provider

The client is using:

- EXPLICIT AUTH TLS
- our custom hostname
- port 21

The problem is: the client can connect and the authentication succeeds (see below for the auth test result), but during communication with the FTP server a TLS_RESUME_FAILURE occurs. The error in the customer's client is "522 Data connection must use cached TLS session", and the error in the CloudWatch log group of the Transfer server is just "TLS_RESUME_FAILURE". I have no clue why this happens. Any ideas?

Here is the auth test result:

```
{
    "Response": "{\"HomeDirectoryDetails\":\"[{\\\"Entry\\\":\\\"/\\\",\\\"Target\\\":\\\"/xxx/new\\\"}]\",\"HomeDirectoryType\":\"LOGICAL\",\"Role\":\"arn:aws:iam::123456789:role/ftp-s3-access-role\",\"Policy\":\"{\"Version\": \"2012-10-17\", \"Statement\": [{\"Sid\": \"AllowListAccessToBucket\", \"Action\": [\"s3:ListBucket\"], \"Effect\": \"Allow\", \"Resource\": [\"arn:aws:s3:::xxx-prod\"]}, {\"Sid\": \"TransferDataBucketAccess\", \"Effect\": \"Allow\", \"Action\": [\"s3:PutObject\", \"s3:GetObject\", \"s3:GetObjectVersion\", \"s3:GetObjectACL\", \"s3:PutObjectACL\"], \"Resource\": [\"arn:aws:s3:::xxx-prod/xxx/new\", \"arn:aws:s3:::xxx-prod/xxx/new/*\"]}]}\",\"UserName\":\"test\",\"IdentityProviderType\":\"AWS_LAMBDA\"}",
    "StatusCode": 200,
    "Message": ""
}
```
1 answer · 0 votes · 9 views · asked 10 days ago

Non-guessable CloudFront URL

I'm wondering if there's a way to make the S3 path unguessable. Let's suppose I have an S3 path like this: https://s3-bucket.com/{singer_id}/album/song/song.mp3. This file will be served through CloudFront, so the path will be https://cloundfront-dist-id.com/{singer_id}/album/song/song.mp3?signature=... (I'm using signed URLs). My question is: is it possible to make /{singer_id}/album/song/song.mp3 unguessable by hashing it, for example with a Lambda or Lambda@Edge function, so the client sees a URL like https://cloundfront-dist-id.com/some_hash?signature=... ? Thanks in advance. https://stackoverflow.com/questions/70885356/non-guessable-cloudfront-url

I am also facing this issue. The question may arise why a hash is needed at all, since signed URLs are secure; in my case, I need the S3 path hidden. I am using the same AWS bucket both for retrieving images for internal use without signed URLs and for sharing files with others using signed URLs. Internal-use CDN without a signed URL, behind a CNAME: https://data.example.com/{singer_id}/album/song/song.mp3. Signed URL: https://secured-data.example.com/{singer_id}/album/song/song.mp3?signature=...&Expires=... Since both use the same AWS bucket, if someone guesses the path from the signed URL they can access the content at https://data.example.com/{singer_id}/album/song/song.mp3 and the file opens. In this scenario, I want to hide {singer_id}/album/song/song.mp3 behind some new value so the file is served under a new name.
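
One sketch of the "hash the path" idea: derive an opaque object key with an HMAC over the real path, store (or look up) the object under that key, and sign the opaque CloudFront URL as usual. The secret, key layout, and function name are all assumptions for illustration:

```python
import hashlib
import hmac

# Secret kept server-side (e.g., in SSM or Secrets Manager); placeholder value here.
SECRET = b"replace-with-a-long-random-secret"

def opaque_key(real_path: str) -> str:
    """Derive a stable, non-guessable key for a real S3 path."""
    digest = hmac.new(SECRET, real_path.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"obj/{digest}.mp3"

# Example: store the object under the opaque key (or keep a lookup table),
# then sign https://cloudfront-dist-id.example.com/obj/<digest>.mp3 as usual.
print(opaque_key("1234/album/song/song.mp3"))
```
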
1 answer · 0 votes · 7 views · asked 10 days ago

Data Quality Framework in AWS

I am trying to implement a data quality framework for an application which ingests data from various systems(batch, near real time, real time). Few items that I want to highlight here are: * The data pipelines widely vary and ingest very high volumes of data. They are developed using spark,python,emr clusters, kafka, Kinesis stream * Any new system that we onboard in the framework, it should be easily able to include the data quality checks with minimal coding. so some sort of metadata framework might help for ex: storing the business rules in dynamodb which can automatically run check different feeders/new data pipeline created. * Our tech stack includes AWS,Python,Spark, Java, so kindly advise related services(AWS Databrew, PyDeequ, Greatexpectations libraries, various lambda event driven services are some I want to focus) * I am also looking for some sort of audit, balance and control mechanism. Auditing the source data, balancing # of records between 2 points and have some automated mechanism to remediate(control) them. * I am looking for testing frameworks for the different data pipelines. Also for data profiling, kindly advise tools/libraries, Aws data brew, Pandas are some I am exploring. I know there wont be one specific solution, and hence appreciate all and any different ideas. A flow diagram with Audit, balance and control with automated data validation and testing mechanism for data pipelines can be very helpful.
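
As one concrete reading of the "business rules in DynamoDB" idea, a small rule runner that any new feeder could call with its DataFrame; the table name, rule schema, and rule types below are assumptions for illustration, not a recommendation of a specific library:

```python
import boto3
import pandas as pd

dynamodb = boto3.resource("dynamodb")
rules_table = dynamodb.Table("dq_rules")  # hypothetical table of per-feed rules

def run_checks(feed_name: str, df: pd.DataFrame) -> list:
    """Apply metadata-driven checks (row count, null ratio) to one feed."""
    rules = rules_table.scan()["Items"]
    failures = []
    for rule in rules:
        if rule["feed"] != feed_name:
            continue
        if rule["type"] == "min_rows" and len(df) < int(rule["threshold"]):
            failures.append(f"{feed_name}: expected >= {rule['threshold']} rows, got {len(df)}")
        elif rule["type"] == "max_null_ratio":
            ratio = df[rule["column"]].isna().mean()
            if ratio > float(rule["threshold"]):
                failures.append(f"{feed_name}.{rule['column']}: null ratio {ratio:.2%}")
    return failures
```
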
1 answer · 0 votes · 11 views · asked 11 days ago

Python Bandit: equivalent of fromstring in defusedxml.ElementTree?

I am trying to read an XML file from an S3 location using the defusedxml library. The code executes fine, but Bandit reports medium-severity findings in code analysis with the following messages:

blacklist: Using lxml.etree.parse to parse untrusted XML data is known to be vulnerable to XML attacks. Replace lxml.etree.parse with its defusedxml equivalent function.
Test ID: B320
Severity: MEDIUM
Confidence: HIGH
CWE: CWE-20
File: ./lambda_code/ccda_step2_validation/defusedxml/defusedxml/lxml.py
Line number: 135
More info: https://bandit.readthedocs.io/en/1.7.4/blacklists/blacklist_calls.html#b313-b320-xml-bad-etree
134 parser = getDefaultParser()
135 elementtree = _etree.parse(source, parser, base_url=base_url)
136 check_docinfo(elementtree, forbid_dtd, forbid_entities)

blacklist: Using lxml.etree.fromstring to parse untrusted XML data is known to be vulnerable to XML attacks. Replace lxml.etree.fromstring with its defusedxml equivalent function.
Test ID: B320
Severity: MEDIUM
Confidence: HIGH
CWE: CWE-20
File: ./lambda_code/ccda_step2_validation/defusedxml/defusedxml/lxml.py
Line number: 143
More info: https://bandit.readthedocs.io/en/1.7.4/blacklists/blacklist_calls.html#b313-b320-xml-bad-etree
142 parser = getDefaultParser()
143 rootelement = _etree.fromstring(text, parser, base_url=base_url)
144 elementtree = rootelement.getroottree()

Here is my pseudo code:

```
from defusedxml.defusedxml.ElementTree import fromstring

is_valid_file = False
S3_CLIENT = boto3.client("s3")
s3_file = S3_CLIENT.get_object(Bucket=bucketname, Key=filename_with_key)
# Read the entire content of the text file
s3_filedata = s3_file["Body"].read()
try:
    #tree = ET.ElementTree(ET.fromstring(s3_filedata))
    tree = fromstring(s3_filedata)
    #search_document_header = tree.getroot()
    search_document_header = tree.findall(".")
    search_patient_section = tree.findall(".//{urn:hl7-org:v3}patientRole")
    if str(search_document_header).find("ClinicalDocument") != -1 and str(search_patient_section).find("patientRole") != -1:
        is_valid_file = True
except Exception as e:
    is_valid_file = False
    LOGGER.error("in parse error")
```
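
The two findings point at defusedxml's own `lxml.py` shim (vendored under `./lambda_code/.../defusedxml/defusedxml/lxml.py`), not at the calling code. A sketch of the same validation using only the pure `defusedxml.ElementTree` entry point installed from PyPI, so the lxml shim is never packaged or scanned; the bucket, key, and logging details are assumptions:

```python
import boto3
from defusedxml.ElementTree import fromstring  # pip-installed defusedxml, no vendored copy

S3_CLIENT = boto3.client("s3")

def is_valid_ccda(bucket: str, key: str) -> bool:
    body = S3_CLIENT.get_object(Bucket=bucket, Key=key)["Body"].read()
    try:
        root = fromstring(body)  # hardened parser, safe against entity-expansion attacks
        has_header = root.tag.endswith("ClinicalDocument")
        has_patient = bool(root.findall(".//{urn:hl7-org:v3}patientRole"))
        return has_header and has_patient
    except Exception:
        return False
```
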
0 answers · 0 votes · 1 view · asked 18 days ago

Lambda Events not triggering EventBridge destination

I am using the Amazon Selling Partner API (SP-API) and am trying to set up a Pub/Sub-like system for receiving customer orders etc. The Notifications API in SP-API sends notifications of different types in two different ways depending on what event you are using: some are sent directly to EventBridge and others are sent to SQS. https://developer-docs.amazon.com/sp-api/docs/notifications-api-v1-use-case-guide#section-notification-workflows

I have correctly set up the notifications that are sent directly to EventBridge, but am struggling to get the SQS notifications working. I want all notifications to be sent to my own endpoint. For the SQS model, I am receiving notifications in SQS, which is set as a trigger for a Lambda function (this part works). The destination for this function is set as another EventBridge (this is the part that doesn't work). This gives the architecture: `SQS => Lambda => eventBridge => my endpoint`

Why is Lambda not triggering my EventBridge destination in order to send the notifications?

**Execution Role Policies:**

* Lambda
  1. AWSLambdaBasicExecutionRole
  2. AmazonSQSFullAccess
  3. AmazonEventBridgeFullAccess
  4. AWSLambda_FullAccess
* EventBridge
  1. Amazon_EventBridge_Invoke_Api_Destination
  2. AmazonEventBridgeFullAccess
  3. AWSLambda_FullAccess

**EventBridge Event Pattern:** `{"source": ["aws.lambda"]}`

**Execution Role Trusted Entities:**

* EventBridge Role: `"Service": [ "events.amazonaws.com", "lambda.amazonaws.com", "sqs.amazonaws.com" ]`
* Lambda Role: `"Service": [ "lambda.amazonaws.com", "events.amazonaws.com", "sqs.amazonaws.com" ]`

**Lambda Code:**

```
exports.handler = function(event, context, callback) {
    console.log("Received event: ", event);
    context.callbackWaitForEmptyEventLoop = false
    callback(null, event);
    return {
        statusCode: 200,
    }
}
```
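
One thing worth checking: Lambda destinations fire for asynchronous invocations, and an SQS event source mapping invokes the function synchronously, so an on-success destination will generally not fire for this trigger (and the rule's `source` pattern would need to match whatever source the events actually carry). A sketch, in Python for brevity, of publishing to EventBridge directly from the handler instead; the source, detail type, and bus name are placeholders:

```python
import json
import boto3

events = boto3.client("events")

def lambda_handler(event, context):
    # Forward each SQS record to EventBridge explicitly
    # (put_events accepts batches of up to 10 entries per call).
    entries = [
        {
            "Source": "sp-api.notifications",        # custom source; the rule must match this
            "DetailType": "SellingPartnerNotification",
            "Detail": record["body"],                # SQS message body is already JSON text
            "EventBusName": "default",               # or a custom bus
        }
        for record in event.get("Records", [])
    ]
    if entries:
        response = events.put_events(Entries=entries)
        print(json.dumps(response, default=str))
    return {"statusCode": 200}
```
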
1 answer · 0 votes · 7 views · asked 18 days ago

AWS SAM "No response from invoke container for" wrong function name

I've debugged my application, and identified a problem. I have 2 REST API Gateway, and it seems like since they both bind on the same endpoint, the first one will recieve the call that the second one should handle. Here's my template.yaml ```yaml Resources: mysampleapi1: Type: 'AWS::Serverless::Function' Properties: Handler: packages/mysampleapi1/dist/index.handler Runtime: nodejs14.x CodeUri: . Description: '' MemorySize: 1024 Timeout: 30 Role: >- arn:aws:iam:: [PRIVATE] Events: Api1: Type: Api Properties: Path: /users Method: ANY Environment: Variables: NODE_ENV: local Tags: STAGE: local mysampleapi2: Type: 'AWS::Serverless::Function' Properties: Handler: packages/mysampleapi2/dist/index.handler Runtime: nodejs14.x CodeUri: . Description: '' MemorySize: 1024 Timeout: 30 Role: >- arn:aws:iam:: [PRIVATE] Events: Api1: Type: Api Properties: Path: /wallet Method: ANY Environment: Variables: NODE_ENV: local Tags: STAGE: local ``` When I send a HTTP request for ```mysampleapi2``` Here's what's happening in the logs using the startup command sam local start-api --port 3001 --log-file /tmp/server-output.log --profile personal --debug ```log 2022-04-27 18:2:34,953 | Mounting /home/mathieu_auclair/Documents/Project/repositories/server as /var/task:ro,delegated inside runtime container 2022-04-27 18:20:35,481 | Starting a timer for 30 seconds for function 'mysampleapi1' 2022-04-27 18:21:05,484 | Function 'mysampleapi1' timed out after 30 seconds 2022-04-27 18:21:46,732 | Container was not created. Skipping deletion 2022-04-27 18:21:46,732 | Cleaning all decompressed code dirs 2022-04-27 18:21:46,733 | No response from invoke container for mysampleapi1 2022-04-27 18:21:46,733 | Invalid lambda response received: Lambda response must be valid json ``` Why is my ```mysampleapi2``` not picking the HTTP call? If I run them in separate template files using different ports, then it works... why is that? Re-post from my question on StackOverflow: https://stackoverflow.com/questions/72036152/aws-sam-no-response-from-invoke-container-for-wrong-function-name
1 answer · 1 vote · 3 views · asked 23 days ago

How to pass the Amplify app ID to a function? How to do app introspection from backend functions?

## Background Amplify apps are easily extensible with Lambda functions, using `amplify add function`. Great! ## Problem How can I access the Amplify app ID from the Lambda function code? There are a lot of scenarios where I need that string in order to locate resources or access secrets in SSM. ## More generally How can my function do introspection on the app? How can I get the app ID from the Lambda function? Is there a service? Am I supposed to pass the information (somehow) through the CloudFormation template for the function? ## Due diligence I've spent days trying to figure this out, and I have at least learned the secret, undocumented way to get anything in a nested CloudFormation stack's outputs into the parameters for my CloudFormation stack, so that I can create environment variables that my Lambda function can see. That does not solve my original problem of finding the top-level app ID. Or any information about the top-level app. If I could find the stack name for the top-level CloudFormation for the stack then I could learn a lot of things. I can't. #### How to pass stack outputs from app resources into function stack parameters I've spent days trying to figure this out, and I have at least learned the secret, undocumented way to use `dependsOn` in the backend-config.json to get the outputs from the CloudFormation stacks for other resources in the Amplify app and feed those into the parameters for my stack for my function: ``` "function": { "MyFunctionName": { "build": true, "providerPlugin": "awscloudformation", "service": "Lambda", "dependsOn": [ { "category": "api", "resourceName": "Data", "attributes": [ "GraphQLAPIIdOutput" ] } ], } } } ``` That creates a new parameter for your function that's named using a pattern that's not documented anywhere, from what I can tell: `[category][resource name][CloudFormation stack output name]`. You can reference that in your CloudFormation stack for your function to create an environment variable that your function code can access: ``` { "AWSTemplateFormatVersion": "2010-09-09", "Parameters": { ... "secretsPathAmplifyAppId": { "Type": "String" } ... "Resources": { ... "Environment": { "Variables": { "AMPLIFY_APP_ID": { "Ref": "secretsPathAmplifyAppId" }, ``` #### Using the `AmplifyAppId` in `amplify-meta.json` doesn't work If I could access the `provider` / `cloudformation` data from a `dependsOn` then I could get the app ID into my function's stack. But that doesn't work. I spent some time eliminating that possibility. #### Using `secretsPathAmplifyAppId` There is a side effect of using amplify update function to add secrets. If you add any secret to the function then you will get a new parameter as an input to your function's CloudFormation stack: `secretsPathAmplifyAppId` I did that and added a dummy secret that I don't really need, in order to get that CloudFormation stack parameter containing the Amplify App ID that I do need. And then I referenced that in my CloudFormation template for my function: ``` { "AWSTemplateFormatVersion": "2010-09-09", "Parameters": { ... "env": { "Type": "String" }, "s3Key": { "Type": "String" }, ... "secretsPathAmplifyAppId": { "Type": "String" } ``` That works, right? **No!** If I create a new app in Amplify, perhaps deploying it to a staging or production account for the first time, then I'll get the error `Parameters: [secretsPathAmplifyAppId] must have values` from the initial build when I press "Save and Deploy" on the "Host your web app" form. 
This is because using `secretsPathAmplifyAppId` relies on the Amplify CLI adding the value to the `team-provider-info.json` file. For a new app's first deployment, "the `team-provider-info.json` file is not available in the Admin UI deployment job", as described in https://github.com/aws-amplify/amplify-cli/issues/8513 . And there is apparently no solution. ## WHY IS THIS SO HARD?!? The Amplify documentation implies that it's not difficult to add a Lambda function and do whatever. I'm a Lambda pro and a code pro, and I can do whatever. But only if I can pass context information to my code. How can an Amplify app's Lambda functions do introspection on the app?
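
For completeness, the last link in the chain described above is just an environment-variable read at runtime; a minimal sketch assuming the `AMPLIFY_APP_ID` variable from the question's CloudFormation snippet is actually populated:

```python
import os

def handler(event, context):
    # AMPLIFY_APP_ID is the environment variable defined in the function's
    # CloudFormation template above; it is only as reliable as that wiring.
    app_id = os.environ.get("AMPLIFY_APP_ID")
    if not app_id:
        raise RuntimeError("AMPLIFY_APP_ID was not provided to this function")
    return {"appId": app_id}
```
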
1 answer · 0 votes · 4 views · asked 23 days ago

Lambda function URL not respecting CORS settings

I am experimenting with using Lambda function URLs instead of API gateway. They seem to be working fine for the most part, except my browser keeps complaining about CORS. I did some testing. With CORS enabled on API gateway, I get this (expected) result. ``` $ curl -v https://lo1mb5fn4f.execute-api.ap-southeast-2.amazonaws.com/prod/default -X OPTIONS * Trying 52.62.9.26:443... * Connected to lo1mb5fn4f.execute-api.ap-southeast-2.amazonaws.com (52.62.9.26) port 443 (#0) * schannel: disabled automatic use of client certificate * schannel: ALPN, offering http/1.1 * schannel: ALPN, server accepted to use http/1.1 > OPTIONS /prod/default HTTP/1.1 > Host: lo1mb5fn4f.execute-api.ap-southeast-2.amazonaws.com > User-Agent: curl/7.79.1 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Wed, 27 Apr 2022 09:18:52 GMT < Content-Type: application/json < Content-Length: 0 < Connection: keep-alive < x-amzn-RequestId: 2b81917b-42e6-47ac-88dd-4211fb0b93ad < Access-Control-Allow-Origin: * < Access-Control-Allow-Headers: Content-Type,Authorization,X-Amz-Date,X-Api-Key,X-Amz-Security-Token < x-amz-apigw-id: RO6ThFTYSwMFdqg= < Access-Control-Allow-Methods: DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT < * Connection #0 to host lo1mb5fn4f.execute-api.ap-southeast-2.amazonaws.com left intact ``` I then do the same query against the Lambda function URL also with CORS enabled in the console, however, I do not get the CORS headers returned. ``` $ curl -v https://b3cdmthu62o6bqzcrb7efnw7be0ktquf.lambda-url.ap-southeast-2.on.aws/ -X OPTIONS * Trying 54.66.8.158:443... * Connected to b3cdmthu62o6bqzcrb7efnw7be0ktquf.lambda-url.ap-southeast-2.on.aws (54.66.8.158) port 443 (#0) * schannel: disabled automatic use of client certificate * schannel: ALPN, offering http/1.1 * schannel: ALPN, server accepted to use http/1.1 > OPTIONS / HTTP/1.1 > Host: b3cdmthu62o6bqzcrb7efnw7be0ktquf.lambda-url.ap-southeast-2.on.aws > User-Agent: curl/7.79.1 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Wed, 27 Apr 2022 09:20:18 GMT < Content-Type: application/json < Content-Length: 20 < Connection: keep-alive < x-amzn-RequestId: 13433ad6-7504-4054-abf5-ca53f9b39b3f < X-Amzn-Trace-Id: root=1-62690ad2-0796999b639d9a5507c42dfb;sampled=0 < "Hello from Lambda!"* Connection #0 to host b3cdmthu62o6bqzcrb7efnw7be0ktquf.lambda-url.ap-southeast-2.on.aws left intact ``` This does appear to be a bug in the way how Lambda reads the CORS data. I would appreciate any tips on what I might be doing wrong. If it is not me, is there a way to escalate this to AWS and report this as a bug?
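
As a diagnostic while the function URL's own CORS handling is in question, the headers can also be returned from the handler itself (function URLs use the HTTP API v2 event shape, so the method sits under `requestContext.http`). Note that if the URL-level CORS config does start working, duplicated headers from both layers can confuse browsers, so this is a sketch rather than a recommendation:

```python
import json

CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type,Authorization",
}

def lambda_handler(event, context):
    # Answer preflight requests explicitly, then attach the headers to normal responses.
    method = event.get("requestContext", {}).get("http", {}).get("method", "GET")
    if method == "OPTIONS":
        return {"statusCode": 204, "headers": CORS_HEADERS, "body": ""}
    return {
        "statusCode": 200,
        "headers": {**CORS_HEADERS, "Content-Type": "application/json"},
        "body": json.dumps("Hello from Lambda!"),
    }
```
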
2 answers · 1 vote · 14 views · asked 24 days ago

IoT TwinMaker and Grafana integration where component has multiple list properties

Hello! I'm currently trying to create a dashboard where one component of an entity has multiple property definitions of the list type. My property definitions and lambda function seem to be operating as intended. Using the "Test" functionality in the AWS IoT TwinMaker console shows me the following: ``` { "assetId": { "propertyReference": { "componentName": "master_task_list", "entityId": "AllTasks", "propertyName": "assetId" }, "propertyValue": { "stringValue": "master_task_list" } }, "assetType": { "propertyReference": { "componentName": "master_task_list", "entityId": "AllTasks", "propertyName": "assetType" }, "propertyValue": { "stringValue": "task_list" } }, "robotIdList": { "propertyReference": { "componentName": "master_task_list", "externalIdProperty": {}, "entityId": "AllTasks", "propertyName": "robotIdList" }, "propertyValue": { "listValue": [ { "stringValue": "3e2b4d1d-a86f-4f7b-a436-46e7653f7fef" } ] } }, "siteArn": { "propertyReference": { "componentName": "master_task_list", "entityId": "AllTasks", "propertyName": "siteArn" }, "propertyValue": { "stringValue": "arn:aws:iotroborunner:us-east-1:<accountId>:site/<siteId>" } }, "stateList": { "propertyReference": { "componentName": "master_task_list", "externalIdProperty": {}, "entityId": "AllTasks", "propertyName": "stateList" }, "propertyValue": { "listValue": [ { "stringValue": "ACTIVE" } ] } }, "taskIdList": { "propertyReference": { "componentName": "master_task_list", "externalIdProperty": {}, "entityId": "AllTasks", "propertyName": "taskIdList" }, "propertyValue": { "listValue": [ { "stringValue": "0ed00afe-c55f-4311-9468-4dca01b1625d" } ] } }, "waypointsList": { "propertyReference": { "componentName": "master_task_list", "externalIdProperty": {}, "entityId": "AllTasks", "propertyName": "waypointsList" }, "propertyValue": { "listValue": [ { "stringValue": "[(0, 0, 5), (40.529312, -74.626496, 5), (40.52904, -74.6267648, 5), (40.529152, -74.6268927, 5), (40.5291568, -74.6268862, 0)]" } ] } } } ``` Which tells me all the values are being reported correctly as their respective types. However, when issuing a request from Grafana using the IoT TwinMaker data source, the query inspector asks for multiple properties, but only one comes back The query: ``` { "queries": [ { "componentName": "master_task_list", "entityId": "AllTasks", "properties": [ "waypointsList", "taskIdList", "robotIdList", "stateList" ], "queryType": "GetPropertyValue", "refId": "A", "componentTypeId": "com.defuzzy.task_list", "datasource": "AWS IoT TwinMaker", "datasourceId": 3, "intervalMs": 10000, "maxDataPoints": 2738 } ], "range": { "from": "2022-04-26T22:54:24.962Z", "to": "2022-04-27T04:54:24.962Z", "raw": { "from": "now-6h", "to": "now" } }, "from": "1651013664962", "to": "1651035264962" } ``` The response ``` { "results": { "A": { "frames": [ { "schema": { "name": "waypointsList", "refId": "A", "meta": { "custom": {} }, "fields": [ { "name": "Value", "type": "string", "typeInfo": { "frame": "string", "nullable": true } } ] }, "data": { "values": [ [ "[(0, 0, 5), (40.529312, -74.626496, 5), (40.52904, -74.6267648, 5), (40.529152, -74.6268927, 5), (40.5291568, -74.6268862, 0)]" ] ] } } ] } } } ``` Even if I only ask for one (different) property, I still only get one property back (`waypointsList` in this case). Am I doing something wrong? This feels a little broken. Any help would be appreciated!
1 answer · 0 votes · 8 views · asked 24 days ago

Should I use Cognito Identity Pool OIDC JWT Connect Tokens in the AWS API Gateway?

I noticed this question from 4 years ago: https://repost.aws/questions/QUjjIB-M4VT4WfOnqwik0l0w/verify-open-id-connect-token-generated-by-cognito-identity-pool

So I was curious and I looked at the JWT being returned from the Cognito Identity Pool. Its `aud` field was my identity pool ID and its `iss` field was "https://cognito-identity.amazonaws.com", and it turns out that you can see the OIDC config at "https://cognito-identity.amazonaws.com/.well-known/openid-configuration" and grab the public keys at "https://cognito-identity.amazonaws.com/.well-known/jwks_uri". Since I have access to the keys, that means I can freely validate OIDC tokens produced by the Cognito Identity Pool. Moreover, I should also be able to pass them into an API Gateway with a JWT authorizer. This would allow me to effectively gate my API Gateway behind a Cognito Identity Pool without any extra Lambda authorizers or needing IAM authentication.

Use case: I want to create a serverless Lambda app that's blocked behind SAML authentication using Okta. Okta does not allow you to use their JWT authorizer without purchasing extra add-ons for some reason. I could use IAM authentication on the gateway instead, but I'm afraid of losing information such as the user's ID, group, name, email, etc. Using the JWT directly preserves this information and passes it to the Lambda.

Is this a valid approach? Is there something I'm missing? Or is there a better way? Does the IAM method preserve user attributes...?
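
A sketch of validating such a token outside API Gateway, using PyJWT's JWK client against the `jwks_uri` quoted above; the identity pool ID is a placeholder, and the accepted signing algorithms should be matched to whatever the token header actually says:

```python
import jwt  # PyJWT
from jwt import PyJWKClient

JWKS_URL = "https://cognito-identity.amazonaws.com/.well-known/jwks_uri"
IDENTITY_POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"  # placeholder

def verify_identity_pool_token(token: str) -> dict:
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256", "RS512"],  # check the token's "alg" header and narrow this
        audience=IDENTITY_POOL_ID,
        issuer="https://cognito-identity.amazonaws.com",
    )
```
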
0 answers · 0 votes · 2 views · asked 24 days ago

How can we create a Lambda which uses a Braket D-Wave device?

We are trying to deploy a Lambda with some code which works in a Notebook. The code is rather simple and uses D-Wave — DW_2000Q_6. The problem is that when we execute the lambda (container lambda due to size problems), it give us the following error: ```json { "errorMessage": "[Errno 30] Read-only file system: '/home/sbx_user1051'", "errorType": "OSError", "stackTrace": [ " File \"/var/lang/lib/python3.8/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n", " File \"/var/lang/lib/python3.8/imp.py\", line 171, in load_source\n module = _load(spec)\n", " File \"<frozen importlib._bootstrap>\", line 702, in _load\n", " File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\n", " File \"<frozen importlib._bootstrap_external>\", line 843, in exec_module\n", " File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n", " File \"/var/task/lambda_function.py\", line 6, in <module>\n from dwave.system.composites import EmbeddingComposite\n", " File \"/var/task/dwave/system/__init__.py\", line 15, in <module>\n import dwave.system.flux_bias_offsets\n", " File \"/var/task/dwave/system/flux_bias_offsets.py\", line 22, in <module>\n from dwave.system.samplers.dwave_sampler import DWaveSampler\n", " File \"/var/task/dwave/system/samplers/__init__.py\", line 15, in <module>\n from dwave.system.samplers.clique import *\n", " File \"/var/task/dwave/system/samplers/clique.py\", line 32, in <module>\n from dwave.system.samplers.dwave_sampler import DWaveSampler, _failover\n", " File \"/var/task/dwave/system/samplers/dwave_sampler.py\", line 31, in <module>\n from dwave.cloud import Client\n", " File \"/var/task/dwave/cloud/__init__.py\", line 21, in <module>\n from dwave.cloud.client import Client\n", " File \"/var/task/dwave/cloud/client/__init__.py\", line 17, in <module>\n from dwave.cloud.client.base import Client\n", " File \"/var/task/dwave/cloud/client/base.py\", line 89, in <module>\n class Client(object):\n", " File \"/var/task/dwave/cloud/client/base.py\", line 736, in Client\n @cached.ondisk(maxage=_REGIONS_CACHE_MAXAGE)\n", " File \"/var/task/dwave/cloud/utils.py\", line 477, in ondisk\n directory = kwargs.pop('directory', get_cache_dir())\n", " File \"/var/task/dwave/cloud/config.py\", line 455, in get_cache_dir\n return homebase.user_cache_dir(\n", " File \"/var/task/homebase/homebase.py\", line 150, in user_cache_dir\n return _get_folder(True, _FolderTypes.cache, app_name, app_author, version, False, use_virtualenv, create)[0]\n", " File \"/var/task/homebase/homebase.py\", line 430, in _get_folder\n os.makedirs(final_path)\n", " File \"/var/lang/lib/python3.8/os.py\", line 213, in makedirs\n makedirs(head, exist_ok=exist_ok)\n", " File \"/var/lang/lib/python3.8/os.py\", line 213, in makedirs\n makedirs(head, exist_ok=exist_ok)\n", " File \"/var/lang/lib/python3.8/os.py\", line 223, in makedirs\n mkdir(name, mode)\n" ] } ``` It seems that the library tries to write to some files which are not in /tmp folder. I'm wondering if is possible to do this, and if not, what are the alternatives. imports used: ```python import boto3 from braket.ocean_plugin import BraketDWaveSampler from dwave.system.composites import EmbeddingComposite from neal import SimulatedAnnealingSampler ```
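
The traceback shows the D-Wave cloud client trying to create a cache directory under the home directory (via `homebase.user_cache_dir`), and in Lambda only `/tmp` is writable. A sketch that points the usual home/cache environment variables at `/tmp` before the D-Wave imports; exactly which variables the library honors is an assumption, so this is a starting point rather than a confirmed fix:

```python
import os

# Lambda only allows writes under /tmp; point home/cache lookups there *before*
# importing the D-Wave stack. HOME and XDG_CACHE_HOME are the variables homebase
# normally consults on Linux (an assumption for this environment).
os.environ.setdefault("HOME", "/tmp")
os.environ.setdefault("XDG_CACHE_HOME", "/tmp")

import boto3
from braket.ocean_plugin import BraketDWaveSampler
from dwave.system.composites import EmbeddingComposite
from neal import SimulatedAnnealingSampler

def lambda_handler(event, context):
    # Build the sampler and submit the problem exactly as in the working notebook code.
    return {"statusCode": 200}
```
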
1 answer · 0 votes · 6 views · asked 25 days ago

Adding a custom CIDR ingress rule to a security group using Lambda without a default VPC

Hello all! I have been searching the internet for this but I didn't exactly find a solution. Basically, I am trying to add custom CIDR IPs to a security group via a Lambda function. I have given all the appropriate permissions (as far as I can tell). I even tried attaching the VPC (which is non-default) to the Lambda function to access the security group, but the error was the same, so I removed it from the Lambda function. I am getting "An error occurred (VPCIdNotSpecified) when calling the AuthorizeSecurityGroupIngress operation: No default VPC for this user".

**Below is the policy:**

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:RevokeSecurityGroupIngress",
                "ec2:CreateNetworkInterface",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeVpcs",
                "ec2:DeleteNetworkInterface",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:CreateLogGroup"
            ],
            "Resource": "arn:aws:logs:us-west-2:xxxx:log-group:xxx:log-stream:*"
        }
    ]
}
```

**Lambda function:**

```
#!/usr/bin/python3.9
import boto3

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    response = ec2.authorize_security_group_ingress(
        GroupId='sg-xxxxxxx',
        IpPermissions=[
            {
                'FromPort': 443,
                'IpProtocol': 'tcp',
                'IpRanges': [
                    {
                        'CidrIp': '1x.1x.x.1x/32',
                        'Description': 'adding test cidr using lambda'
                    },
                ],
                'ToPort': 443
            }
        ],
        DryRun=True
    )
    return response
```

Could someone point me in the right direction? The VPC is non-default. All I need is to add an ingress rule to an existing security group within a non-default VPC.

**The error log:**

```
Test Event Name
snstest

Response
{
  "errorMessage": "An error occurred (VPCIdNotSpecified) when calling the AuthorizeSecurityGroupIngress operation: No default VPC for this user",
  "errorType": "ClientError",
  "requestId": "7de9dce1-f2f9-4609-897e-b75ef751544e",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 21, in lambda_handler\n    response = ec2.authorize_security_group_ingress(\n",
    "  File \"/var/runtime/botocore/client.py\", line 391, in _api_call\n    return self._make_api_call(operation_name, kwargs)\n",
    "  File \"/var/runtime/botocore/client.py\", line 719, in _make_api_call\n    raise error_class(parsed_response, operation_name)\n"
  ]
}

Function Logs
START RequestId: 7de9dce1-f2f9-4609-897e-b75ef751544e Version: $LATEST
[ERROR] ClientError: An error occurred (VPCIdNotSpecified) when calling the AuthorizeSecurityGroupIngress operation: No default VPC for this user
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 21, in lambda_handler
    response = ec2.authorize_security_group_ingress(
  File "/var/runtime/botocore/client.py", line 391, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 719, in _make_api_call
    raise error_class(parsed_response, operation_name)
END RequestId: 7de9dce1-f2f9-4609-897e-b75ef751544e
REPORT RequestId: 7de9dce1-f2f9-4609-897e-b75ef751544e Duration: 213.81 ms Billed Duration: 214 ms Memory Size: 128 MB Max Memory Used: 77 MB

Request ID
7de9dce1-f2f9-4609-897e-b75ef751544e
```
3 answers · 0 votes · 8 views · asked a month ago

AWS Lambda@Edge created using AWS CDK doesn't write logs to CloudWatch

I created a simple Lambda@Edge function like below. ``` 'use strict'; exports.handler = async function(event, context, callback) { const cf = event.Records[0].cf; console.log('Record: ', JSON.stringify(cf, null, 2)); console.log('Context: ', JSON.stringify(context, null, 2)); console.log('Request: ', JSON.stringify(cf.request, null, 2)); callback(null, cf.request); } ``` And I deployed it using AWS CDKv2 `experimental EdgeFunction like below ``` const edgeFunction = new cloudfront.experimental.EdgeFunction(this, 'EdgeFunction', { runtime: Runtime.NODEJS_14_X, handler: 'index.handler', code: Code.fromAsset(path.join(__dirname, '../../../../lambda/ssr2')), }); ``` and also I set it up as edge function for a Distribution ``` const distribution = new Distribution(this, 'Distribution', { defaultBehavior: { origin, cachePolicy: CachePolicy.CACHING_DISABLED, viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS, edgeLambdas: [ { functionVersion: edgeFunction.currentVersion, eventType: LambdaEdgeEventType.VIEWER_REQUEST, } ] }, ``` But when I tried sending the request to the Distribution, the log didn't show up anything. I checked the permission, the role already has permission ``` Allow: logs:CreateLogGroup Allow: logs:CreateLogStream Allow: logs:PutLogEvents ``` I expect the function write logs to the CloudWatch. What did I miss? **UPDATE 1** Below is the role document, ``` { "sdkResponseMetadata": null, "sdkHttpMetadata": null, "partial": false, "permissionsBoundary": null, "policies": [ { "arn": "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole", "document": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "*" } ] }, "id": "ANPAJNCQGXC425412345", "name": "AWSLambdaBasicExecutionRole", "type": "managed" } ], "resources": { "logs": { "service": { "icon": 
"data:image/svg+xml;base64,PHN2ZyB2aWV3Qm94PSIwIDAgNjQgNjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CiAgPGcgdHJhbnNmb3JtPSJzY2FsZSguOCkiPgogICAgPGRlZnM+CiAgICAgIDxsaW5lYXJHcmFkaWVudCB4MT0iMCUiIHkxPSIxMDAlIiB4Mj0iMTAwJSIgeTI9IjAlIiBpZD0iYSI+CiAgICAgICAgPHN0b3Agc3RvcC1jb2xvcj0iI0IwMDg0RCIgb2Zmc2V0PSIwJSIvPgogICAgICAgIDxzdG9wIHN0b3AtY29sb3I9IiNGRjRGOEIiIG9mZnNldD0iMTAwJSIvPgogICAgICA8L2xpbmVhckdyYWRpZW50PgogICAgPC9kZWZzPgogICAgPGcgZmlsbD0ibm9uZSIgZmlsbC1ydWxlPSJldmVub2RkIj4KICAgICAgPHBhdGggZD0iTTAgMGg4MHY4MEgweiIgZmlsbD0idXJsKCNhKSIvPgogICAgICA8cGF0aCBkPSJNNTUuMDYgNDYuNzc3YzAtMy45MDktMy4yMDItNy4wOS03LjEzOC03LjA5LTMuOTM1IDAtNy4xMzYgMy4xODEtNy4xMzYgNy4wOSAwIDMuOTEgMy4yIDcuMDkgNy4xMzYgNy4wOXM3LjEzNy0zLjE4IDcuMTM3LTcuMDltMi4wMSAwYzAgNS4wMTEtNC4xMDMgOS4wODctOS4xNDcgOS4wODctNS4wNDMgMC05LjE0Ny00LjA3Ni05LjE0Ny05LjA4NyAwLTUuMDEgNC4xMDQtOS4wODYgOS4xNDctOS4wODYgNS4wNDQgMCA5LjE0OCA0LjA3NiA5LjE0OCA5LjA4Nm04LjQ0IDEzLjY5N0w1OC41IDU0LjIwM2ExMy4wMzMgMTMuMDMzIDAgMDEtMS45NDcgMi4xNmw2Ljk5OCA2LjI3YTEuNDc0IDEuNDc0IDAgMDAyLjA2Ni0uMTA3IDEuNDUzIDEuNDUzIDAgMDAtLjEwOC0yLjA1Mm0tMTcuNTg4LTIuODEyYzYuMDQzIDAgMTAuOTU4LTQuODgzIDEwLjk1OC0xMC44ODVzLTQuOTE1LTEwLjg4NC0xMC45NTgtMTAuODg0Yy02LjA0MSAwLTEwLjk1NyA0Ljg4Mi0xMC45NTcgMTAuODg0IDAgNi4wMDIgNC45MTYgMTAuODg1IDEwLjk1NyAxMC44ODVtMTkuMTkgNi4yQTMuNDgzIDMuNDgzIDAgMDE2NC41MjkgNjVhMy40NzUgMy40NzUgMCAwMS0yLjMyMi0uODgzTDU0LjkzMSA1Ny42YTEyLjkzNSAxMi45MzUgMCAwMS03LjAwOSAyLjA2Yy03LjE1IDAtMTIuOTY3LTUuNzc5LTEyLjk2Ny0xMi44ODIgMC03LjEwMiA1LjgxNy0xMi44ODEgMTIuOTY3LTEyLjg4MSA3LjE1MSAwIDEyLjk2OSA1Ljc3OSAxMi45NjkgMTIuODgxIDAgMi4wMzgtLjQ5MiAzLjk2LTEuMzQ0IDUuNjc0bDcuMzA5IDYuNTRhMy40NDQgMy40NDQgMCAwMS4yNTYgNC44NzJNMjEuMjggMjkuMzkzYzAgLjUxOS4wMzIgMS4wMzYuMDk0IDEuNTM2YS45OTQuOTk0IDAgMDEtLjgyMyAxLjEwNmMtMi40NzIuNjM0LTYuNTQgMi41NTMtNi41NCA4LjMxIDAgNC4zNDggMi40MTMgNi43NDggNC40MzkgNy45OTYuNjkxLjQzMyAxLjUxLjY2NCAyLjM3My42NzNsMTIuMTIyLjAxMS0uMDAyIDEuOTk3LTEyLjEzMS0uMDFjLTEuMjQ2LS4wMTQtMi40MjgtLjM1MS0zLjQyOC0uOTc3QzE1LjM3NyA0OC43OTcgMTIgNDUuODkgMTIgNDAuMzQ1YzAtNi42ODMgNC42LTkuMTUzIDcuMy0xMC4wMjYtLjAyLS4zMDctLjAzLS42MTctLjAzLS45MjYgMC01LjQ2IDMuNzI4LTExLjEyMyA4LjY3Mi0xMy4xNzEgNS43ODItMi40MDcgMTEuOTA4LTEuMjE0IDE2LjM4NCAzLjE4OSAxLjM4OCAxLjM2NCAyLjUyOSAzLjAyIDMuNDA0IDQuOTM3YTYuNTA5IDYuNTA5IDAgMDE0LjE1NC0xLjUwMmMzLjAwMiAwIDYuMzgyIDIuMjY0IDYuOTg0IDcuMjE1IDIuODEyLjY0NCA4Ljc1MyAyLjg5NCA4Ljc1MyAxMC4zNjIgMCAyLjk4MS0uOTQxIDUuNDQ0LTIuNzk4IDcuMzE5bC0xLjQzMy0xLjQwMWMxLjQ3My0xLjQ4OCAyLjIyLTMuNDc5IDIuMjItNS45MTggMC02LjUzMi01LjUwNC04LjE1Ny03Ljg3My04LjU1MWExLjAwMiAxLjAwMiAwIDAxLS44MjMtMS4xNTdjLS4zMjktNC4wNTUtMi43NTMtNS44NzItNS4wMy01Ljg3Mi0xLjQzNyAwLTIuNzg0LjY5NS0zLjY5NyAxLjkwN2ExLjAwNiAxLjAwNiAwIDAxLTEuNzUtLjI1OGMtLjgyMy0yLjI2Ni0yLjAxLTQuMTcxLTMuNTI1LTUuNjYxLTMuODgtMy44MTYtOS4xODQtNC44NS0xNC4xOTUtMi43NjYtNC4xNyAxLjcyNy03LjQzNyA2LjcwMi03LjQzNyAxMS4zMjgiIGZpbGw9IiNGRkYiLz4KICAgIDwvZz4KICA8L2c+Cjwvc3ZnPgo=", "name": "Amazon CloudWatch Logs" }, "statements": [ { "action": "logs:CreateLogGroup", "effect": "Allow", "resource": "*", "service": "logs", "source": { "index": "0", "policyName": "AWSLambdaBasicExecutionRole", "policyType": "managed" } }, { "action": "logs:CreateLogStream", "effect": "Allow", "resource": "*", "service": "logs", "source": { "index": "0", "policyName": "AWSLambdaBasicExecutionRole", "policyType": "managed" } }, { "action": "logs:PutLogEvents", "effect": "Allow", "resource": "*", "service": "logs", "source": { "index": "0", "policyName": "AWSLambdaBasicExecutionRole", "policyType": "managed" } } ] } }, "roleName": "MyProject-EdgeFunctionFnServiceRoleC7B72E4-1DV3AZXP558ZS", "trustedEntities": [ 
"lambda.amazonaws.com", "edgelambda.amazonaws.com" ] } ``` I just tried using the Test in the Lambda Panel. All the tests send logs to the CloudWatch. However when I send request to the CloudFront, it didn't send anything. **UPDATE 2** I just found out from StackOverflows that the log is being stored not centrally but distributed to regions. Something like below ``` /aws/lambda/us-east-1.MyProject-EdgeFunctionFn44308ADF-loJeFwXXzTOm ``` So instead of opening it from Lambda panel, I need to open it in the CloudFront panel. Somewhat I couldn't find it in any AWS documentations. **References** https://aws.amazon.com/id/blogs/networking-and-content-delivery/aggregating-lambdaedge-logs/ https://stackoverflow.com/questions/66949758/serverless-aws-lambdaedge-how-to-debug#:~:text=Go%20to%20CloudWatch%20and%20search,%2D%3E%20Lambda%40Edge%20Errors%20.
2 answers · 0 votes · 8 views · asked a month ago

AWS Lambda function times out when invoked from AWS Java SDK v2

I have created a AWS lambda function (written in python) that reads a tar.gz file from one S3 bucket, unzips and untars it and writes the extracted files to another S3 bucket. Tar inside the GZ is of >1GB size so lambda takes more time to complete the task. I invoke this lambda function from a java client. I am using AWS SDK V2 for Java (software.amazon.awssdk.*), and using Lambda sync client software.amazon.awssdk.services.lambda.LambdaClient. Though the lambda invocation works (lambdaClient.invoke(invokeRequest)), but it fails with "Read timed out" error. In the background (in AWS) the lambda completes its execution after sometime. Following is the lambda client bean creation code. ``` LambdaClient lambdaClient = LambdaClient.builder() .credentialsProvider(awsCredentialsProvider) .region(Region.US_EAST_1) .overrideConfiguration(ClientOverrideConfiguration.builder() .apiCallTimeout(Duration.ofMinutes(30)) .apiCallAttemptTimeout(Duration.ofMinutes(30)) .build() ) .build(); ``` Following is the lambda invocation code. ``` //This is a user defined pojo object that maps to lambda input json payload UntarLambdaPayload untarLambdaPayload = UntarLambdaPayload.builder() .sourceBucket(lambdaProps.getSourceBucket()) .destinationBucket(lambdaProps.getDestinationBucket()) .sourceKey("myTarFile.tar.gz") .build(); ObjectMapper mapper = new ObjectMapper(); String jsonRequest = mapper.writeValueAsString(untarLambdaPayload); SdkBytes payload = SdkBytes.fromUtf8String(jsonRequest); InvokeRequest invokeRequest = InvokeRequest.builder() .functionName(lambdaProps.getFunctionName()) .overrideConfiguration(AwsRequestOverrideConfiguration.builder() .apiCallTimeout(Duration.ofMinutes(30)) .apiCallAttemptTimeout(Duration.ofMinutes(30)).build()) .payload(payload) .build(); InvokeResponse res = lambdaClient.invoke(invokeRequest); ``` And I am getting the below exception. 
``` software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Read timed out at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:102) at software.amazon.awssdk.core.exception.SdkClientException.create(SdkClientException.java:47) at software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:204) at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:83) at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:36) at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56) at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36) at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80) at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60) at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42) at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:48) at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:31) at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37) at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26) at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193) at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103) at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:167) at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:82) at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:175) at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:76) at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45) at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:56) at software.amazon.awssdk.services.lambda.DefaultLambdaClient.invoke(DefaultLambdaClient.java:2355) ``` My code's further logic depends on the successful completion of the lambda function. If lambda is timed out then code cannot proceed with processing the untarred files in S3 bucket#2. 
I tried `overrideConfiguration` with apiCallTimeout and apiCallAttemptTimeout in InvokeRequest (as well as in Lambda Client), but it did not work. I am going to do research on LambdaClient waiter functionality for which I haven't got any help so far on how to use it with Lambda. **How can I make the `lambdaClient.invoke(invokeRequest)` wait until the lambda running in AWS completes its execution?**
2 answers · 0 votes · 16 views · asked a month ago