
Questions tagged with Serverless



Sync DynamoDB to S3

What is the best way to sync my DynamoDB tables to S3 so that I can perform serverless 'big data' queries using Athena? The data must be kept in sync without any intervention. The sync frequency would depend on the cost: ideally daily, but perhaps weekly. I have had this question for a long time, so I will cover what I have considered and why I don't like the options.

1) AWS Glue Elastic Views. This sounds like it would do the job with no code, but it was announced 18 months ago and there have been no updates since. It's not generally available, and there is no information on when it might be.

2) Use DynamoDB native export, following this blog: https://aws.amazon.com/blogs/aws/new-export-amazon-dynamodb-table-data-to-data-lake-amazon-s3/. I actually already use this method for one-off data transfers that I kick off manually and then configure in Athena. I have two issues with this option. The first is that, to my knowledge, the export cannot be scheduled natively. The blog suggests using the CLI to kick off exports, and I assume the writer intends for the CLI to be scheduled on a cron job somewhere; I don't run any servers for this, though I imagine I could do it via a scheduled Lambda with an SDK. The second issue is that the export path in S3 always includes a unique export ID, which means I can't configure the Athena table to point to a static location and simply switch over to the new data after a scheduled export. Perhaps I could write another Lambda to move the data to a static location after the export has finished, but it seems a shame to have to do so much work, and I've not seen that covered anywhere before.

3) I can use Data Pipeline as described in https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html, but that post is more about backing data up than making it accessible to Athena.

This use case must be so common, and yet none of the ideas I've seen online are really complete.
I was wondering if anyone had any ideas or experiences that would be useful here?
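A scheduled export along the lines of option 2 could be driven by an EventBridge schedule invoking a small Lambda. A minimal sketch with boto3, where the table ARN, bucket, and prefix are placeholders:

```python
def export_request(table_arn: str, bucket: str, prefix: str) -> dict:
    """Build the arguments for DynamoDB's ExportTableToPointInTime API."""
    return {
        "TableArn": table_arn,
        "S3Bucket": bucket,
        "S3Prefix": prefix,                 # the export lands under this prefix
        "ExportFormat": "DYNAMODB_JSON",
    }

def handler(event, context):
    # Triggered by an EventBridge schedule, e.g. cron(0 3 * * ? *).
    import boto3  # imported lazily so the helper above has no dependencies
    dynamodb = boto3.client("dynamodb")
    response = dynamodb.export_table_to_point_in_time(
        **export_request(
            "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",  # placeholder
            "my-athena-data-bucket",                                  # placeholder
            "exports/MyTable",
        )
    )
    return response["ExportDescription"]["ExportArn"]
```

A second Lambda reacting to the export's completion (for example via an S3 notification on the manifest file) could then copy the data to the static prefix Athena points at, which addresses the unique-export-ID issue at the cost of the extra plumbing the question laments.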
2
answers
0
votes
9
views
asked 16 days ago

Should I use Cognito Identity Pool OpenID Connect (OIDC) JWT tokens in AWS API Gateway?

I noticed this question from 4 years ago: https://repost.aws/questions/QUjjIB-M4VT4WfOnqwik0l0w/verify-open-id-connect-token-generated-by-cognito-identity-pool

So I was curious and looked at the JWT token being returned from the Cognito Identity Pool. Its `aud` field was my identity pool id and its `iss` field was "https://cognito-identity.amazonaws.com", and it turns out that you can see the OIDC config at "https://cognito-identity.amazonaws.com/.well-known/openid-configuration" and grab the public keys at "https://cognito-identity.amazonaws.com/.well-known/jwks_uri". Since I have access to the keys, that means I can freely validate OIDC tokens produced by the Cognito Identity Pool. Moreover, I should also be able to pass them to an API Gateway with a JWT authorizer. This would allow me to effectively gate my API Gateway behind a Cognito Identity Pool without any extra Lambda authorizers or IAM authentication.

Use case: I want to create a serverless Lambda app that's gated behind SAML authentication using Okta. Okta does not allow you to use their JWT authorizer without purchasing extra add-ons, for some reason. I could use IAM authentication on the gateway instead, but I'm afraid of losing information such as the user's id, group, name, email, etc. Using the JWT directly preserves this information and passes it to the Lambda.

Is this a valid approach? Is there something I'm missing? Or is there a better way? Does the IAM method preserve user attributes?
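Validating one of these tokens locally might look like the sketch below, assuming the PyJWT library (`pip install "pyjwt[crypto]"`) and its `PyJWKClient` helper; whether API Gateway's JWT authorizer accepts this issuer is exactly the open question here:

```python
ISSUER = "https://cognito-identity.amazonaws.com"
JWKS_URL = ISSUER + "/.well-known/jwks_uri"

def validate_identity_pool_token(token: str, identity_pool_id: str) -> dict:
    """Verify an identity-pool OIDC token's signature and claims, return the claims."""
    import jwt  # PyJWT; imported lazily so the constants above work offline
    # Fetch the signing key whose "kid" matches the token's header.
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=identity_pool_id,  # per the question, aud is the identity pool id
        issuer=ISSUER,
    )
```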
0
answers
0
votes
2
views
asked 25 days ago

How can we create a Lambda which uses a Braket D-Wave device?

We are trying to deploy a Lambda with some code that works in a notebook. The code is rather simple and uses the D-Wave DW_2000Q_6 device. The problem is that when we execute the Lambda (a container Lambda, due to size problems), it gives us the following error:

```json
{
  "errorMessage": "[Errno 30] Read-only file system: '/home/sbx_user1051'",
  "errorType": "OSError",
  "stackTrace": [
    " File \"/var/lang/lib/python3.8/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n",
    " File \"/var/lang/lib/python3.8/imp.py\", line 171, in load_source\n module = _load(spec)\n",
    " File \"<frozen importlib._bootstrap>\", line 702, in _load\n",
    " File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\n",
    " File \"<frozen importlib._bootstrap_external>\", line 843, in exec_module\n",
    " File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n",
    " File \"/var/task/lambda_function.py\", line 6, in <module>\n from dwave.system.composites import EmbeddingComposite\n",
    " File \"/var/task/dwave/system/__init__.py\", line 15, in <module>\n import dwave.system.flux_bias_offsets\n",
    " File \"/var/task/dwave/system/flux_bias_offsets.py\", line 22, in <module>\n from dwave.system.samplers.dwave_sampler import DWaveSampler\n",
    " File \"/var/task/dwave/system/samplers/__init__.py\", line 15, in <module>\n from dwave.system.samplers.clique import *\n",
    " File \"/var/task/dwave/system/samplers/clique.py\", line 32, in <module>\n from dwave.system.samplers.dwave_sampler import DWaveSampler, _failover\n",
    " File \"/var/task/dwave/system/samplers/dwave_sampler.py\", line 31, in <module>\n from dwave.cloud import Client\n",
    " File \"/var/task/dwave/cloud/__init__.py\", line 21, in <module>\n from dwave.cloud.client import Client\n",
    " File \"/var/task/dwave/cloud/client/__init__.py\", line 17, in <module>\n from dwave.cloud.client.base import Client\n",
    " File \"/var/task/dwave/cloud/client/base.py\", line 89, in <module>\n class Client(object):\n",
    " File \"/var/task/dwave/cloud/client/base.py\", line 736, in Client\n @cached.ondisk(maxage=_REGIONS_CACHE_MAXAGE)\n",
    " File \"/var/task/dwave/cloud/utils.py\", line 477, in ondisk\n directory = kwargs.pop('directory', get_cache_dir())\n",
    " File \"/var/task/dwave/cloud/config.py\", line 455, in get_cache_dir\n return homebase.user_cache_dir(\n",
    " File \"/var/task/homebase/homebase.py\", line 150, in user_cache_dir\n return _get_folder(True, _FolderTypes.cache, app_name, app_author, version, False, use_virtualenv, create)[0]\n",
    " File \"/var/task/homebase/homebase.py\", line 430, in _get_folder\n os.makedirs(final_path)\n",
    " File \"/var/lang/lib/python3.8/os.py\", line 213, in makedirs\n makedirs(head, exist_ok=exist_ok)\n",
    " File \"/var/lang/lib/python3.8/os.py\", line 213, in makedirs\n makedirs(head, exist_ok=exist_ok)\n",
    " File \"/var/lang/lib/python3.8/os.py\", line 223, in makedirs\n mkdir(name, mode)\n"
  ]
}
```

It seems that the library tries to write some files outside the /tmp folder. I'm wondering if it is possible to do this, and if not, what the alternatives are. Imports used:

```python
import boto3
from braket.ocean_plugin import BraketDWaveSampler
from dwave.system.composites import EmbeddingComposite
from neal import SimulatedAnnealingSampler
```
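One possible workaround (an assumption, not verified against the homebase library): since /tmp is the only writable path in a Lambda container, redirect the home and cache directory lookups there before any dwave import runs. The trace suggests homebase resolves the cache directory from the user's home/XDG variables on Linux:

```python
import os

# Lambda's filesystem is read-only except for /tmp, so point every
# home/cache lookup there *before* the D-Wave stack is imported.
# Assumption: homebase honours HOME / XDG_CACHE_HOME on Linux.
os.environ["HOME"] = "/tmp"
os.environ["XDG_CACHE_HOME"] = "/tmp/.cache"
os.environ["XDG_CONFIG_HOME"] = "/tmp/.config"

# The imports that triggered the read-only error must come after the
# overrides (left commented out here, as they need the Braket/D-Wave SDKs):
# from braket.ocean_plugin import BraketDWaveSampler
# from dwave.system.composites import EmbeddingComposite
```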
1
answers
0
votes
6
views
asked a month ago

How to create a (Serverless) SageMaker endpoint using an existing TensorFlow .pb (frozen model) file?

Note: I am a senior developer, but very new to the topic of machine learning.

I have two frozen TensorFlow model weight files: `weights_face_v1.0.0.pb` and `weights_plate_v1.0.0.pb`. I also have some Python code using TensorFlow 2 that loads the models and handles basic inference. The models detect faces and license plates respectively, and the surrounding code converts an input image to a numpy array and applies blurring to the areas of the image that had detections.

I want a SageMaker endpoint so that I can run inference on the models. I initially tried a regular (container-based) Lambda function, but that is too slow for our use case. A SageMaker endpoint should give us GPU inference, which should be much faster. I am struggling to find out how to do this. From what I can tell from the documentation and some YouTube videos, I need to create my own Docker container; as a starting point I can use, for example, `763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference:2.8.0-gpu-py39-cu112-ubuntu20.04-sagemaker`. However, I can't find any solid documentation on how to integrate my other code. How do I send an image to SageMaker? What converts the image to a numpy array? How does it know the tensor names? How do I install additional requirements? How can I use the detections to apply blurring on the image, and how can I return the result image?

Can someone here please point me in the right direction? I searched a lot but can't find any example code or blogs that explain this process. Thank you in advance! Your help is much appreciated.
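For the SageMaker TensorFlow Serving containers, my understanding is that pre- and post-processing goes in an `inference.py` exposing `input_handler`/`output_handler` hooks, with extra packages listed in a `requirements.txt` alongside it. A rough sketch, where the request contract (JSON with a "pixels" key) is an assumption for illustration:

```python
import json

def build_tfs_request(pixels) -> str:
    """Wrap one image (a nested list, H x W x C) in TF-Serving's REST payload shape."""
    return json.dumps({"instances": [pixels]})

def input_handler(data, context):
    # "data" is the raw request stream handed over by the serving container.
    # Hypothetical contract: the client sends JSON with the image already
    # converted to a nested list under "pixels"; image decoding to a numpy
    # array could equally happen here.
    body = json.loads(data.read())
    return build_tfs_request(body["pixels"])

def output_handler(response, context):
    # Pass the model's detections through unchanged; blurring could be
    # applied here from the returned boxes, or left to the caller.
    return response.content, context.accept_header
```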
1
answers
0
votes
2
views
asked a month ago

JumpCloud Serverless Lambda function timeout error

Hello folks, hope all is doing well. I'm facing an issue while running a serverless application's Lambda function. I use this application to download files from a remote node and store them in an S3 bucket. It was working fine previously, but all of a sudden it stopped fetching files from the source location. While debugging the issue I observed that it takes a long time to complete a test event. In the CloudWatch logs I'm getting the error below:

```
START RequestId: 052226a9-5344-45f1-88bf-5c00242baee0 Version: $LATEST
END RequestId: 052226a9-5344-45f1-88bf-5c00242baee0
REPORT RequestId: 052226a9-5344-45f1-88bf-5c00242baee0 Duration: 180625.18 ms Billed Duration: 180000 ms Memory Size: 192 MB Max Memory Used: 193 MB Init Duration: 587.37 ms
XRAY TraceId: 1-626104fb-16ae94a33273f6404d180e41 SegmentId: 1010be2c3227cfab Sampled: true
2022-04-21T07:20:17.417Z 052226a9-5344-45f1-88bf-5c00242baee0 Task timed out after 180.63 seconds
```

I have tried increasing the memory and timeout parameters but still get the same error. In the X-Ray trace I get the following response:

```
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY AWS::Lambda
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY OK 202 17ms
Dwell Time OK - 47ms
Attempt #1 Error (4xx) 200 3.03min
Attempt #2 Error (4xx) 200 3.00min
Attempt #3 Error (4xx) 200 3.00min
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY AWS::Lambda::Function
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY Error (4xx) - 3.01min
Initialization OK - 587ms
Invocation Error (4xx) - 3.01min
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY Error (4xx) - 3.00min
Initialization OK - 611ms
Invocation Error (4xx) - 3.00min
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY Error (4xx) - 3.00min
Initialization OK - 549ms
Invocation Error (4xx) - 3.00min
```

Can anyone advise if there is anything I missed while debugging? Thanks, Aman
2
answers
0
votes
8
views
asked a month ago

App Runner actions work very slow (2-10 minutes) and deployer provides incorrect error message

App Runner actions are very slow for me: create/pause/resume may take 2-5 minutes for the simple demo image (`public.ecr.aws/aws-containers/hello-app-runner:latest`), and create-service when the image is not found takes ~10 minutes.

Example #1: 5 minutes to deploy the hello-app image

```
04-17-2022 05:59:55 PM [AppRunner] Service status is set to RUNNING.
04-17-2022 05:59:55 PM [AppRunner] Deployment completed successfully.
04-17-2022 05:59:44 PM [AppRunner] Successfully routed incoming traffic to application.
04-17-2022 05:58:33 PM [AppRunner] Health check is successful. Routing traffic to application.
04-17-2022 05:57:01 PM [AppRunner] Performing health check on port '8000'.
04-17-2022 05:56:51 PM [AppRunner] Provisioning instances and deploying image.
04-17-2022 05:56:42 PM [AppRunner] Successfully pulled image from ECR.
04-17-2022 05:54:56 PM [AppRunner] Service status is set to OPERATION_IN_PROGRESS.
04-17-2022 05:54:55 PM [AppRunner] Deployment started.
```

Example #2: 10 minutes when the image is not found

```
04-17-2022 05:35:41 PM [AppRunner] Failed to pull your application image. Be sure you configure your service with a valid access role to your ECR repository.
04-17-2022 05:25:47 PM [AppRunner] Starting to pull your application image.
```

Example #3: 10 minutes when the image is not found

```
04-17-2022 06:46:24 PM [AppRunner] Failed to pull your application image. Be sure you configure your service with a valid access role to your ECR repository.
04-17-2022 06:36:31 PM [AppRunner] Starting to pull your application image.
```

A 404 should be detected immediately and fail much faster, because there is no need to retry a 404 many times for 10 minutes, right? Additionally, the error message `Failed to pull your application image. Be sure you configure your service with a valid access role to your ECR repository` is very confusing: it doesn't show the image name and doesn't give the actual cause. A 404 is not related to access errors like 401 or 403, correct?

Can App Runner action performance and error messages be improved?
0
answers
0
votes
4
views
asked a month ago

CodeBuild failing with invalidParameterError on build with a valid parameter given

I'm trying to create a Lambda layer in Serverless Framework and have it deploy to AWS, creating the layer for use in other deployments. However, I'm running into an issue where the `Lambda:PublishLayerVersion` call is failing because of `CompatibleArchitectures`. I'm wondering whether there is a mistake I'm missing, or whether the Serverless plugin has an issue, because the logged Action uses a lowercase 'p' ("Lambda:publishLayerVersion") while the docs here: https://docs.aws.amazon.com/lambda/latest/dg/API_PublishLayerVersion.html state it is "PublishLayerVersion". It is also possible that the SDK error is legitimate and the `CompatibleArchitectures` parameter isn't supported in us-west-1, but I have a hard time finding docs that tell me what is supported in different regions.

serverless.yml spec:

```
provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  region: us-west-1
  stage: ${opt:stage, 'stage'}
  deploymentBucket:
    name: name.serverless.${self:provider.region}.deploys
  deploymentPrefix: serverless
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:GetObject
      Resource: "arn:aws:s3:::name.serverless.${self:provider.region}/*"
    - Effect: Allow
      Action:
        - cloudformation:DescribeStacks
      Resource: "*"
    - Effect: Allow
      Action:
        - lambda:PublishLayerVersion
      Resource: "*"

layers:
  aws-abstraction-services-layer:
    # name: aws-abstraction-services-layer
    path: aws-abstraction-layer
    description: "This is the goal of uploading our abstractions to a layer to upload and use to save storage in deployment packages"
    compatibleRuntimes:
      - python3.8
    allowedAccounts:
      - '*'

plugins:
  - serverless-layers
  - serverless-python-requirements
```

Output of the build log:

```
[Container] 2022/04/12 17:14:41 Running command serverless deploy
Running "serverless" from node_modules
Deploying aws-services-layer to stage stage (us-west-1)
[ LayersPlugin ]: => default
... ○ Downloading requirements.txt from bucket...
... ○ requirements.txt The specified key does not exist..
... ○ Changes identified ! Re-installing...
... ○ pip install -r requirements.txt -t .
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
aws-sam-cli 1.40.1 requires requests==2.25.1, but you have requests 2.27.1 which is incompatible.
WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 21.1.2; however, version 22.0.4 is available.
You should consider upgrading via the '/root/.pyenv/versions/3.8.10/bin/python3.8 -m pip install --upgrade pip' command.
Collecting requests
  Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB)
Collecting charset-normalizer~=2.0.0
  Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB)
Collecting certifi>=2017.4.17
  Downloading certifi-2021.10.8-py2.py3-none-any.whl (149 kB)
Collecting idna<4,>=2.5
  Downloading idna-3.3-py3-none-any.whl (61 kB)
Collecting urllib3<1.27,>=1.21.1
  Downloading urllib3-1.26.9-py2.py3-none-any.whl (138 kB)
Installing collected packages: urllib3, idna, charset-normalizer, certifi, requests
Successfully installed certifi-2021.10.8 charset-normalizer-2.0.12 idna-3.3 requests-2.27.1 urllib3-1.26.9
... ○ Created layer package /codebuild/output/src847310000/src/.serverless/aws-services-layer-stage-python-default.zip (0.8 MB)
... ○ Uploading layer package...
... ○ OK...
ServerlessLayers error:
Action: Lambda:publishLayerVersion
Params: {"Content":{"S3Bucket":"name.serverless.us-west-1.deploys","S3Key":"serverless/aws-services-layer/stage/layers/aws-services-layer-stage-python-default.zip"},"LayerName":"aws-services-layer-stage-python-default","Description":"created by serverless-layers plugin","CompatibleRuntimes":["python3.8"],"CompatibleArchitectures":["x86_64","arm64"]}
AWS SDK error: CompatibleArchitectures are not supported in us-west-1. Please remove the CompatibleArchitectures value from your request and try again
[Container] 2022/04/12 17:14:47 Command did not exit successfully serverless deploy exit status 1
[Container] 2022/04/12 17:14:47 Phase complete: BUILD State: FAILED
[Container] 2022/04/12 17:14:47 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: serverless deploy. Reason: exit status 1
[Container] 2022/04/12 17:14:47 Entering phase POST_BUILD
[Container] 2022/04/12 17:14:47 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2022/04/12 17:14:47 Phase context status code: Message:
```
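Taking the SDK error at face value, the failing call can be reproduced or worked around directly with boto3 by simply omitting the parameter where it is rejected. A sketch of building the request conditionally (the region check is an assumption drawn from the error above, not from any region-support documentation):

```python
def publish_layer_params(name: str, bucket: str, key: str,
                         include_architectures: bool) -> dict:
    """Build kwargs for Lambda's PublishLayerVersion, optionally dropping
    the CompatibleArchitectures field that us-west-1 rejected above."""
    params = {
        "LayerName": name,
        "Content": {"S3Bucket": bucket, "S3Key": key},
        "CompatibleRuntimes": ["python3.8"],
    }
    if include_architectures:
        params["CompatibleArchitectures"] = ["x86_64", "arm64"]
    return params

# Publishing would then be (sketch; requires boto3 and real bucket/key names):
# lambda_client = boto3.client("lambda", region_name="us-west-1")
# lambda_client.publish_layer_version(
#     **publish_layer_params("my-layer", "my-bucket", "layers/my-layer.zip", False))
```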
1
answers
1
votes
8
views
asked a month ago

AWS SAM: set the authorization cache TTL in the resource template (AWS::Serverless::Api)

Hi all, I am using SAM to deploy my serverless application, which consists of a REST API and a Lambda authorizer. The REST API does not trigger a Lambda; it integrates other public services. When declaring the [AWS::Serverless::Api](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-resource-api.html) and its [auth](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-property-api-apiauth.html) attribute, I cannot find a way to configure the authorization cache's TTL as in the [AWS::ApiGateway::Authorizer](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authorizerresultttlinseconds) resource. Am I missing something? If not, is there any reason the authorization cache's TTL configuration is not made available in the [AWS::Serverless::Api](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-resource-api.html) element?

This potentially missing feature is minor for us and does not block our project. It is more of a nice-to-have, as I would prefer not to copy/paste the whole OpenAPI specification directly into the template file, but rather specify the API via SAM's [DefinitionUri](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-resource-api.html#sam-api-definitionuri) attribute. This keeps the API definition out of the template and embeds it in a local file which is automatically uploaded to S3 during the SAM deploy step. Thanks
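If memory serves, SAM does expose this knob through the authorizer's `Identity` block: a `ReauthorizeEvery` property that should map to `authorizerResultTtlInSeconds` on the generated `AWS::ApiGateway::Authorizer`. A hedged sketch (the authorizer name, header, and function reference are placeholders, and this should be checked against the SAM property reference):

```yaml
MyApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: prod
    Auth:
      DefaultAuthorizer: MyLambdaAuthorizer        # placeholder name
      Authorizers:
        MyLambdaAuthorizer:
          FunctionArn: !GetAtt AuthorizerFunction.Arn
          Identity:
            Header: Authorization
            ReauthorizeEvery: 300                  # cache the auth result for 300 s
```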
1
answers
0
votes
11
views
asked a month ago

Slow lambda responses when bigger load

Hi, I'm currently doing load testing using Gatling and I have one issue with my Lambdas. I have two Lambdas: one written in Java 8 and one written in Python. My test does one request with 120 concurrent users, then ramps from 120 to 400 users over 1 minute, and then Gatling sends requests at a constant 400 users per second for 2 minutes. There is weird behaviour in these Lambdas, because the response times are very high even though there is no logic in them; they just return a String. Here are some screenshots of the Gatling reports: [Java Report][1] [Python Report][2]

I can add that I ran some tests with the Lambdas warmed up and saw the same behaviour. I'm using API Gateway to invoke the Lambdas. Do you have any idea why the response times are so high? Sometimes I receive an HTTP error that says: `i.n.h.s.SslHandshakeTimeoutException: handshake timed out after 10000ms`. Here is my Gatling simulation code:

```java
public class OneEndpointSimulation extends Simulation {
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("url") // Here is the root for all relative URLs
        .acceptHeader("text/html,application/xhtml+xml,application/json,application/xml;q=0.9,*/*;q=0.8") // Here are the common headers
        .acceptEncodingHeader("gzip, deflate")
        .acceptLanguageHeader("en-US,en;q=0.5")
        .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0");

    ScenarioBuilder scn = scenario("Scenario 1 Workload 2")
        .exec(http("Get all activities").get("/dev")).pause(1);

    {
        setUp(scn.injectOpen(
            atOnceUsers(120),
            rampUsersPerSec(120).to(400).during(60),
            constantUsersPerSec(400).during(Duration.ofMinutes(1))
        ).protocols(httpProtocol));
    }
}
```

I also checked the logs and turned on X-Ray for API Gateway, but there was nothing there; the average latency for these services was 14 ms. What could be the reason for the slow Lambda responses?

[1]: https://i.stack.imgur.com/sCx9M.png
[2]: https://i.stack.imgur.com/SuHU0.png
0
answers
0
votes
6
views
asked 2 months ago

Load testing serverless stack using Gatling

Hi, I'm doing some load testing on my serverless app and I see that it is unable to handle some higher loads. I'm using API Gateway, Lambda (Java 8) and DynamoDB. The code I'm using is the same as in this [repository](https://github.com/Aleksandr-Filichkin/aws-lambda-runtimes-performance/tree/main/java-graalvm-lambda/src/lambda-java). For my load testing I'm using Gatling. The load I configured is a request with 120 users, then ramping users from 120 to 400 over one minute, and then requests at a constant 400 users per second for 2 minutes. The problem is that my stack is unable to handle 400 users per second. Is that normal? I thought serverless would scale nicely and work like a charm. Here is my Gatling simulation code:

```java
public class OneEndpointSimulation extends Simulation {
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("url") // Here is the root for all relative URLs
        .acceptHeader("text/html,application/xhtml+xml,application/json,application/xml;q=0.9,*/*;q=0.8") // Here are the common headers
        .acceptEncodingHeader("gzip, deflate")
        .acceptLanguageHeader("en-US,en;q=0.5")
        .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0");

    ScenarioBuilder scn = scenario("Scenario 1 Workload 2")
        .exec(http("Get all activities").get("/activitiesv2")).pause(1);

    {
        setUp(scn.injectOpen(
            atOnceUsers(120),
            rampUsersPerSec(120).to(400).during(60),
            constantUsersPerSec(400).during(Duration.ofMinutes(2))
        ).protocols(httpProtocol));
    }
}
```

Here are the Gatling report results: [Image link](https://ibb.co/68SYDsb)

I'm also receiving an error, usually for approximately 50 requests: **i.n.h.s.SslHandshakeTimeoutException: handshake timed out after 10000ms**. It happens when Gatling starts to inject 400 constant users per second. I'm wondering what could be wrong. Is it too much for API Gateway, Lambda and DynamoDB?
2
answers
0
votes
5
views
asked 2 months ago

synchronous queue implementation on AWS

I have a queue to which producers add data and from which consumers read and process it. In the diagram below, producers add data to the queue as (Px, Tx, X), for example (P3, T3, 10): here P3 is the producer ID, T3 means three packets are required to process, and 10 is the data. For (P3, T3, 10) the consumer needs to read 3 packets from the P3 producer, so in the image below one of the consumers needs to pick (P3, T3, 10), (P3, T3, 15) and (P3, T3, 5) and perform a function on the data that just adds all the numbers, 10+15+5 = 30, and saves 30 to the DB. Similarly, for the P1 producer, (P1, T2, 1) and (P1, T2, 10) give sum = 10+1 = 11 to the DB.

I have read about AWS Kinesis, but it has an issue: all consumers read the same data, which doesn't fit my case. The major issue is how we can constrain consumers so that:

1. The data queue is read synchronously.
2. If one of the consumers has read (P1, T2, 1), then only that consumer can read the next packet from the P1 producer (this point is the major issue for me, as the consumer needs to add those two numbers).
3. This can also cause deadlock, as some of the consumers will be forced to read data from one particular producer only, because they have already read one packet from that producer and now have to wait for the next packet to perform the addition.

I have also read about SQS and MQ, but the above challenges exist for them too.

![Image](https://i.stack.imgur.com/7b3Mm.png) [https://i.stack.imgur.com/7b3Mm.png](https://i.stack.imgur.com/7b3Mm.png)

My current approach: for N producers I start N EC2 instances; producers send data to EC2 through WebSocket (WebSocket is not a requirement) and I can process it there easily. As you can see, having N EC2 instances to process N producers causes budget issues. How can I improve on this solution?
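One pattern worth checking against requirement 2 (a sketch, not a verified fit): an SQS FIFO queue delivers messages sharing a `MessageGroupId` in order, and while messages from a group are in flight they are not handed to a second consumer. Using the producer ID as the group ID would then keep each producer's packets together:

```python
import json

def fifo_message(producer_id: str, packets_required: int, value: int) -> dict:
    """Build send_message kwargs so that each producer maps to one message group."""
    return {
        "MessageBody": json.dumps({"packets": packets_required, "data": value}),
        "MessageGroupId": producer_id,  # ordering + one in-flight consumer per group
        # Hypothetical dedup key for illustration; use a genuinely unique ID
        # (or content-based deduplication) in practice.
        "MessageDeduplicationId": f"{producer_id}-{packets_required}-{value}",
    }

# Sending would then be (sketch; requires boto3 and a real FIFO queue URL):
# sqs = boto3.client("sqs")
# sqs.send_message(QueueUrl="https://sqs.../my-queue.fifo", **fifo_message("P3", 3, 10))
```

The consumer would accumulate values per group until it has read the required number of packets, write the sum to the DB, and only then delete the batch; whether the visibility-timeout semantics are strict enough for the deadlock concern in point 3 would need testing.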
1
answers
0
votes
12
views
asked 2 months ago

AWS StepFunctions - SageMaker's InvokeEndpoint block throws "validation error" when fetching parameters for itself inside iterator of Map block

I have a state-machine workflow with 3 following states: [screenshot-of-my-workflow](https://i.stack.imgur.com/4xJTE.png) 1. A 'Pass' block that adds a list of strings(SageMaker endpoint names) to the original input. (*this 'Pass' will be replaced by a call to DynamoDB to fetch list in future.*) 2. Use map to call SageMaker endpoints dictated by the array(or list) from above result. 3. Send the result of above 'Map' to a Lambda function and exit the workflow. Here's the entire workflow in .asl.json, inspired from [this aws blog](https://docs.aws.amazon.com/step-functions/latest/dg/sample-map-state.html). ``` { "Comment": "A description of my state machine", "StartAt": "Pass", "States": { "Pass": { "Type": "Pass", "Next": "InvokeEndpoints", "Result": { "Endpoints": [ "sagemaker-endpoint-1", "sagemaker-endpoint-2", "sagemaker-endpoint-3" ] }, "ResultPath": "$.EndpointList" }, "InvokeEndpoints": { "Type": "Map", "Next": "Post-Processor Lambda", "Iterator": { "StartAt": "InvokeEndpoint", "States": { "InvokeEndpoint": { "Type": "Task", "End": true, "Parameters": { "Body": "$.InvocationBody", "EndpointName": "$.EndpointName" }, "Resource": "arn:aws:states:::aws-sdk:sagemakerruntime:invokeEndpoint", "ResultPath": "$.InvocationResult" } } }, "ItemsPath": "$.EndpointList.Endpoints", "MaxConcurrency": 300, "Parameters": { "InvocationBody.$": "$.body.InputData", "EndpointName.$": "$$.Map.Item.Value" }, "ResultPath": "$.InvocationResults" }, "Post-Processor Lambda": { "Type": "Task", "Resource": "arn:aws:states:::lambda:invoke", "Parameters": { "Payload.$": "$", "FunctionName": "arn:aws:lambda:<my-region>:<my-account-id>:function:<my-lambda-function-name>:$LATEST" }, "Retry": [ { "ErrorEquals": [ "Lambda.ServiceException", "Lambda.AWSLambdaException", "Lambda.SdkClientException" ], "IntervalSeconds": 2, "MaxAttempts": 6, "BackoffRate": 2 } ], "End": true } } } ``` As can be seen in the workflow, I am iterating over the list from the previous 'Pass' block and mapping those to 
iterate inside the 'Map' block, trying to access the Map block's Parameters inside each iteration. The iteration itself works fine for any number of items, but I can't access the Parameters inside the iteration. I get this error:

```json
{
  "resourceType": "aws-sdk:sagemakerruntime",
  "resource": "invokeEndpoint",
  "error": "SageMakerRuntime.ValidationErrorException",
  "cause": "1 validation error detected: Value '$.EndpointName' at 'endpointName' failed to satisfy constraint: Member must satisfy regular expression pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9])* (Service: SageMakerRuntime, Status Code: 400, Request ID: ed5cad0c-28d9-4913-853b-e5f9ac924444)"
}
```

So I presume the error occurs because "$.EndpointName" is not being filled with the relevant value. How do I avoid this? When I open the failed execution and check the InvokeEndpoint block in the graph inspector, the input to it is what I expected, and the above JSON paths to fetch the parameters should work, but they don't. [screenshot-of-graph-inspector](https://i.stack.imgur.com/3gXsM.jpg) What's causing the error, and how do I fix this?
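For reference, the Amazon States Language only evaluates a parameter value as a JSONPath when its key ends in `.$`; with a plain key the literal string `$.EndpointName` is sent to SageMaker, which would explain the regex validation failure above. The iterator's parameters would then read:

```json
"Parameters": {
  "Body.$": "$.InvocationBody",
  "EndpointName.$": "$.EndpointName"
}
```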
1
answers
0
votes
7
views
asked 2 months ago

MSK Custom Configuration using Cloudformation

Hi AWS Users, I am trying to spin up an MSK cluster with a custom MSK configuration using my serverless app. I wrote the CloudFormation template for the MSK cluster and was able to bring it up successfully. I recently saw that AWS added a CloudFormation resource for `AWS::MSK::Configuration` [1], and I was trying it out to create a custom configuration. The configuration requires a `ServerProperties` key that is usually plain text in the AWS console. An example of server properties:

```
auto.create.topics.enable=true
default.replication.factor=2
min.insync.replicas=2
num.io.threads=8
num.network.threads=5
num.partitions=10
num.replica.fetchers=2
replica.lag.time.max.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
unclean.leader.election.enable=true
zookeeper.session.timeout.ms=18000
```

`AWS::MSK::Configuration` accepts base64 (API functionality), and I have been trying to implement this using CloudFormation's `Fn::Base64` function, e.g.:

```
Resources:
  ServerlessMSKConfiguration:
    Type: AWS::MSK::Configuration
    Properties:
      ServerProperties:
        Fn::Base64: auto.create.topics.enable=true
```

This gives me back a 400 error during deploy:

```
Resource handler returned message: "[ClientRequestToken: xxxxx] Invalid request body (Service: Kafka, Status Code: 400, Request ID: 1139d840-c02d-4fdb-b68c-cee93673d89d, Extended Request ID: null)" (RequestToken: xxxx HandlerErrorCode: InvalidRequest)
```

Can someone please help me format this ServerProperties properly? I'm not sure how to give the proper base64 string in the template. Any help is much appreciated.

[1] [MSK::Configuration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-msk-configuration.html)
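Two guesses worth trying, both assumptions from my reading of the CloudFormation docs rather than verified fixes: the resource also requires a `Name` property (its absence alone could produce an invalid request body), and the CloudFormation handler may expect the plain-text properties in a YAML literal block, performing any base64 encoding itself:

```yaml
Resources:
  ServerlessMSKConfiguration:
    Type: AWS::MSK::Configuration
    Properties:
      Name: my-custom-msk-config   # hypothetical name; Name appears to be required
      ServerProperties: |
        auto.create.topics.enable=true
        default.replication.factor=2
        min.insync.replicas=2
```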
0
answers
0
votes
5
views
asked 2 months ago

AWS SAM Layer makes too many versions

I have noticed that SAM is producing a new layer each time we deploy, even though the contents of the layer should be bit-for-bit identical. Creating a new layer and deleting the old one takes time, so this slows down our deployments. Consider this `Makefile`: ``` build-KnexNodeModulesLayer: mkdir -p "$(ARTIFACTS_DIR)/nodejs/" npm install --prefix "$(ARTIFACTS_DIR)/nodejs/" pg@8.7.3 knex@1.0.1 # Flatten timestamps to avoid false change detection find "$(ARTIFACTS_DIR)" | xargs touch -am -h -d 2019-02-11T05:09:12 ``` If we then `sam build`, and run `find .aws-sam/build/KnexNodeModulesLayer/ -newermt "2019-02-12"`, we can see that all files, directories, and symlinks have the expected timestamp. However, running `sam deploy` twice creates two different versions of this layer. I downloaded these layers, and found that all files and symlinks have the expected timestamp from our `touch` command, but directories have the current timestamp: ``` ❯ stat -c "%w %y %F %n" {a,b}/nodejs/node_modules/pg{,/LICENSE} 2022-03-15 12:28:10.681528449 -0400 2022-03-15 12:28:10.682330405 -0400 directory a/nodejs/node_modules/pg 2019-02-11 05:09:12.000000000 -0500 2019-02-11 05:09:12.000000000 -0500 regular file a/nodejs/node_modules/pg/LICENSE 2022-03-15 12:56:14.010637090 -0400 2022-03-15 12:56:14.011286674 -0400 directory b/nodejs/node_modules/pg 2019-02-11 05:09:12.000000000 -0500 2019-02-11 05:09:12.000000000 -0500 regular file b/nodejs/node_modules/pg/LICENSE ``` It seems like the process of creating the zip file is introducing this difference, and should be considered a bug.
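A possible workaround, assuming post-processing the built archive is acceptable in the pipeline: rewrite every zip entry, directories included, with one fixed timestamp so the package bytes are reproducible regardless of when the directories were created:

```python
import zipfile

FIXED_STAMP = (2019, 2, 11, 5, 9, 12)  # matches the `touch` date in the Makefile above

def flatten_zip_timestamps(src: str, dst: str) -> None:
    """Copy a zip, forcing every entry (files and directories) to FIXED_STAMP."""
    with zipfile.ZipFile(src) as zin, \
         zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zout:
        for info in zin.infolist():
            data = zin.read(info)        # directory entries read back as b""
            info.date_time = FIXED_STAMP
            zout.writestr(info, data)
```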
1
answers
1
votes
8
views
asked 2 months ago

How to access the API Parameters of a node and add them to its own output JSON in AWS Step Functions?

Here's part of my Step Function: https://i.stack.imgur.com/4Jxd9.png

Here's the workflow for the "Parallel" node:

```
{
  "Type": "Parallel",
  "Branches": [
    {
      "StartAt": "InvokeEndpoint01",
      "States": {
        "InvokeEndpoint01": {
          "Type": "Task",
          "End": true,
          "Parameters": {
            "Body": "$.Input",
            "EndpointName": "dummy-endpoint-name1"
          },
          "Resource": "arn:aws:states:::aws-sdk:sagemakerruntime:invokeEndpoint"
        }
      }
    },
    {
      "StartAt": "InvokeEndpoint02",
      "States": {
        "InvokeEndpoint02": {
          "Type": "Task",
          "End": true,
          "Parameters": {
            "Body": "$.Input",
            "EndpointName": "dummy-endpoint-name2"
          },
          "Resource": "arn:aws:states:::aws-sdk:sagemakerruntime:invokeEndpoint"
        }
      }
    }
  ],
  "Next": "Lambda Invoke"
},
```

I would like to access the `EndpointName` of each node inside this Parallel block and add it as one of the keys of that node's output, without modifying the existing output's body and other headers. (In the JSON above, the `EndpointName` for the first node inside the Parallel block is at `$.Branches[0].States.InvokeEndpoint01.Parameters.EndpointName`.)

Here's the output of one of the nodes inside the Parallel block:

```
{
  "Body": "{xxxx}",
  "ContentType": "application/json",
  "InvokedProductionVariant": "xxxx"
}
```

and I would like to access the API parameter and make it something like this:

```
{
  "Body": "{xxxx}",
  "ContentType": "application/json",
  "InvokedProductionVariant": "xxxx",
  "EndpointName": "dummy-endpoint-name1"
}
```

How do I do this?
2
answers
1
votes
5
views
asked 2 months ago

Unable to load shared library 'ldap.so.2' or one of its dependencies

I've created a .NET 6 container-image C# Lambda function using Visual Studio. My function will ultimately use https://github.com/flamencist/ldap4net to query Active Directory. This package requires that OpenLDAP be installed, and it is natively installed on the Amazon Linux 2 Docker container image I'm using for this Lambda function. I am running into the error below upon invocation of the published Lambda:

```
{
  "errorType": "DllNotFoundException",
  "errorMessage": "Unable to load shared library 'ldap.so.2' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libldap.so.2: cannot open shared object file: No such file or directory",
  "stackTrace": [
    "at LdapForNet.Native.NativeMethodsLinux.ldap_initialize(IntPtr& ld, String uri)",
    "at LdapForNet.Native.LdapNativeLinux.Init(IntPtr& ld, String url)",
    "at LdapForNet.LdapConnection.Connect(String url, LdapVersion version)",
    "at LdapForNet.LdapConnectExtensions.Connect(ILdapConnection connection, Uri uri, LdapVersion version)",
    "at LdapForNet.LdapConnectExtensions.Connect(ILdapConnection connection, String hostname, Int32 port, LdapSchema ldapSchema, LdapVersion version)",
    "at hqpoc_lam_net6_docker.Function.ValidateUser(String domainName, String username, String password) in C:\\git\\hqpoc-lam-net6-docker\\hqpoc-lam-net6-docker\\src\\hqpoc-lam-net6-docker\\Function.cs:line 35",
    "at hqpoc_lam_net6_docker.Function.FunctionHandler(String input, ILambdaContext context) in C:\\git\\hqpoc-lam-net6-docker\\hqpoc-lam-net6-docker\\src\\hqpoc-lam-net6-docker\\Function.cs:line 24",
    "at lambda_method1(Closure , Stream , ILambdaContext , Stream )",
    "at Amazon.Lambda.RuntimeSupport.Bootstrap.UserCodeLoader.Invoke(Stream lambdaData, ILambdaContext lambdaContext, Stream outStream) in /src/Repo/Libraries/src/Amazon.Lambda.RuntimeSupport/Bootstrap/UserCodeLoader.cs:line 145",
    "at Amazon.Lambda.RuntimeSupport.HandlerWrapper.<>c__DisplayClass8_0.<GetHandlerWrapper>b__0(InvocationRequest invocation) in /src/Repo/Libraries/src/Amazon.Lambda.RuntimeSupport/Bootstrap/HandlerWrapper.cs:line 56",
    "at Amazon.Lambda.RuntimeSupport.LambdaBootstrap.InvokeOnceAsync(CancellationToken cancellationToken) in /src/Repo/Libraries/src/Amazon.Lambda.RuntimeSupport/Bootstrap/LambdaBootstrap.cs:line 176"
  ]
}
```

I have zipped the relevant binaries into the Lambda function directory, as described here: https://aws.amazon.com/premiumsupport/knowledge-center/lambda-linux-binary-package/

How can I utilize the native OpenLDAP binary in this Docker container with my C# Lambda function? Thanks!
1
answers
0
votes
27
views
asked 3 months ago

Python lambda failing to initialize RSA public key occasionally

I'm trying to create a custom request authorizer, working with several user pools, in Python. To validate tokens I first tried pyjwt/cryptography:

```
claims = jwt.decode(token, options={"verify_signature": False, "require": ["iss"]})
issuer = claims['iss']
jwks_client = PyJWKClient(issuer + "/.well-known/jwks.json", False)
signing_key = jwks_client.get_signing_key_from_jwt(token)
```

Occasionally, about 5% of the time, a Lambda instance will just time out on that last line, even with a 30-second timeout. I thought maybe it was the network, so I rewrote it to fetch the JWK through `requests` and initialize the key with `RSAAlgorithm.from_jwk`. Nope: the JWK is retrieved, but it's initializing the key that fails. I then called `RSAAlgorithm.from_jwk` outside the handler method with a dummy hardcoded JWK, to move the initialization of cryptography to the init stage; the handler method works more smoothly now, instead of being slow on the first invocation, but the random failure still happens. I thought maybe it was cryptography or pyjwt, so I switched to python-jose and its different backends. Nope: it still fails while loading the key, now written as `jwk.construct()`.

What is causing this strange and random behavior? An instance that fails once stays permanently broken and doesn't recover on the next request. There's nothing in the logs, although such broken instances show lower memory usage. Here are the first two requests from broken and working instances running the same image, at the same time, for the same user pool key.

Broken:

```
2022-02-14T17:38:17.185+02:00 START RequestId: d9d92287-ccae-4aa4-8f94-6e8ed8c276a4 Version: $LATEST
2022-02-14T17:38:17.205+02:00 [DEBUG] 2022-02-14T15:38:17.205Z d9d92287-ccae-4aa4-8f94-6e8ed8c276a4 Starting new HTTPS connection (1): cognito-idp.us-east-1.amazonaws.com:443
2022-02-14T17:38:20.190+02:00 END RequestId: d9d92287-ccae-4aa4-8f94-6e8ed8c276a4
2022-02-14T17:38:20.190+02:00 REPORT RequestId: d9d92287-ccae-4aa4-8f94-6e8ed8c276a4 Duration: 3003.51 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 53 MB Init Duration: 467.93 ms
2022-02-14T17:38:20.190+02:00 2022-02-14T15:38:20.189Z d9d92287-ccae-4aa4-8f94-6e8ed8c276a4 Task timed out after 3.00 seconds
2022-02-14T17:38:20.706+02:00 START RequestId: a5242265-c13d-4015-9b7d-2699f0b26efe Version: $LATEST
2022-02-14T17:38:20.709+02:00 [DEBUG] 2022-02-14T15:38:20.709Z a5242265-c13d-4015-9b7d-2699f0b26efe Starting new HTTPS connection (1): cognito-idp.us-east-1.amazonaws.com:443
2022-02-14T17:38:23.712+02:00 END RequestId: a5242265-c13d-4015-9b7d-2699f0b26efe
2022-02-14T17:38:23.712+02:00 REPORT RequestId: a5242265-c13d-4015-9b7d-2699f0b26efe Duration: 3004.51 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 23 MB
2022-02-14T17:38:23.712+02:00 2022-02-14T15:38:23.711Z a5242265-c13d-4015-9b7d-2699f0b26efe Task timed out after 3.00 seconds
```

Working:

```
2022-02-14T17:38:23.733+02:00 START RequestId: 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 Version: $LATEST
2022-02-14T17:38:23.740+02:00 [DEBUG] 2022-02-14T15:38:23.739Z 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 Starting new HTTPS connection (1): cognito-idp.us-east-1.amazonaws.com:443
2022-02-14T17:38:23.926+02:00 [DEBUG] 2022-02-14T15:38:23.926Z 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 https://cognito-idp.us-east-1.amazonaws.com:443 "GET /us-east-1_.../.well-known/jwks.json HTTP/1.1" 200 916
2022-02-14T17:38:23.942+02:00 [DEBUG] 2022-02-14T15:38:23.941Z 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 Got the key a2PUhJTqMTiNysvmY+RfUPARHESV35jOMXWXJ4mAa/A= in 0.20495343208312988 seconds
2022-02-14T17:38:23.960+02:00 [INFO] 2022-02-14T15:38:23.960Z 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 response {'principalId': '...', 'policyDocument': {...}}
2022-02-14T17:38:23.980+02:00 END RequestId: 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00
2022-02-14T17:38:23.980+02:00 REPORT RequestId: 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 Duration: 244.45 ms Billed Duration: 245 ms Memory Size: 128 MB Max Memory Used: 55 MB Init Duration: 447.66 ms
2022-02-14T17:38:24.149+02:00 START RequestId: 1cca0b7a-0fa4-477d-9ddd-95d97db113b2 Version: $LATEST
2022-02-14T17:38:24.154+02:00 [DEBUG] 2022-02-14T15:38:24.154Z 1cca0b7a-0fa4-477d-9ddd-95d97db113b2 Got the cached key a2PUhJTqMTiNysvmY+RfUPARHESV35jOMXWXJ4mAa/A=
2022-02-14T17:38:24.155+02:00 [INFO] 2022-02-14T15:38:24.155Z 1cca0b7a-0fa4-477d-9ddd-95d97db113b2 response {'principalId': '...', 'policyDocument': {...}}
2022-02-14T17:38:24.156+02:00 END RequestId: 1cca0b7a-0fa4-477d-9ddd-95d97db113b2
2022-02-14T17:38:24.156+02:00 REPORT RequestId: 1cca0b7a-0fa4-477d-9ddd-95d97db113b2 Duration: 2.64 ms Billed Duration: 3 ms Memory Size: 128 MB Max Memory Used: 55 MB
```
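Note the broken instance's last log line is the "Starting new HTTPS connection" message, which is consistent with a blocking socket that never completes and is then killed by the Lambda timeout. One defensive sketch is to give the JWKS fetch an explicit socket timeout so it fails fast and can be retried; `fetch_jwks` and `b64url_to_int` are hypothetical helper names, not part of any of the libraries mentioned above:

```python
import base64
import json
import urllib.request

def b64url_to_int(value):
    """Decode an unpadded base64url JWK field (e.g. "n" or "e") to an integer."""
    padded = value + "=" * (-len(value) % 4)
    return int.from_bytes(base64.urlsafe_b64decode(padded), "big")

def fetch_jwks(issuer, timeout=3.0):
    """Fetch the JWKS document with an explicit timeout, so a hung
    connection raises quickly instead of consuming the whole Lambda budget."""
    url = issuer + "/.well-known/jwks.json"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read())

# Sketch of use: pick the key by kid, then feed n/e into, e.g.,
# cryptography's RSAPublicNumbers:
#   key = next(k for k in fetch_jwks(issuer)["keys"] if k["kid"] == kid)
#   n, e = b64url_to_int(key["n"]), b64url_to_int(key["e"])
```

This does not explain why an instance stays permanently broken, but it at least converts a silent 30-second hang into an immediate, loggable exception.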
1
answers
0
votes
8
views
asked 3 months ago

Overriding Hostname on ECS Fargate

Hello,

I am setting up a Yellowfin [deployment](https://wiki.yellowfinbi.com/display/yfcurrent/Install+in+a+Container) using their stock app-only [image](https://hub.docker.com/r/yellowfinbi/yellowfin-app-only) on ECS Fargate. I was able to set up a test cluster for my team to experiment with.

Yellowfin requires a license to use their software. To issue a license, Yellowfin needs to know the hostname of the underlying platform it runs on. Yellowfin can provide wildcard licenses that match on a standard prefix or suffix. Currently, we are using a development license that matches on the default hostname that our test environment's Fargate task is assigned. The default hostname seems to be of the form <32-character alphanumeric string>-<10-digit number>, where the former is the running task's ID and the latter is the ID of some other related AWS resource (the task definition? the cluster?) that I could not identify. Although this 10-digit number stays constant when new tasks are run, it does not seem like a good strategy to base the real Yellowfin license on it.

I would like to override the hostname of a Fargate task when launching the container to include a common prefix (e.g., "myorg-yfbi-") to make it simple to request a wildcard license for actual use. If possible, I would like to avoid building my own image or migrating to another AWS service.

Is there a standard way to override the hostname for a Fargate service by solely updating the task definition? Would overriding the entrypoint be a viable option? Is there another way to set the hostname in Fargate that I am not aware of?

Thank you for any guidance you can provide. Happy to provide more information if it helps.
1
answers
0
votes
22
views
asked 3 months ago

Host a fine-tuned BERT Multilingual model on SageMaker with Serverless inference

Hi All, Good day!!

A key point to note here: we have a pre-processing script for the text document (deserialization, which is required for prediction), and a post-processing script for generating NER entities. I went through the SageMaker material and decided to try the following options.

1. Option 1: Bring our own model, write an inference script, and deploy it on a SageMaker real-time endpoint using the PyTorch container. I went through Suman's video (https://www.youtube.com/watch?v=D9Qo5OpG4p8), which is really good; I need to try it with our pre-processing and post-processing scripts and see whether it works.
2. Option 2: Bring our own model, write an inference script, and deploy it on a SageMaker real-time endpoint using the Hugging Face container. I went through the Hugging Face docs (https://huggingface.co/docs/sagemaker/inference#deploy-a-%F0%9F%A4%97-transformers-model-trained-in-sagemaker), but there is no reference for how to use our own pre- and post-processing scripts to set up an inference pipeline. If you know any examples of using custom pre- and post-processing scripts with the Hugging Face container, please share them.
3. Option 3: Bring our own model, write an inference script, and deploy it on SageMaker Serverless Inference using the Hugging Face container. I went through Julien's video (https://www.youtube.com/watch?v=cUhDLoBH80o&list=PLJgojBtbsuc0E1JcQheqgHUUThahGXLJT&index=35), which is excellent, but he has not shown how to use custom pre- and post-processing scripts with the Hugging Face container. Please share if you know any examples.

Could you please help?

Thanks,
Vinayak
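For the container options above, custom pre- and post-processing typically lives in the entry-point script via the serving toolkit's hook functions. A minimal sketch of those four hooks, assuming a JSON request body with a `text` field; the model loading and prediction bodies are placeholders to be filled in with the fine-tuned BERT model:

```python
# inference.py -- hook functions the SageMaker PyTorch/Hugging Face
# serving containers look for in a custom entry-point script.
import json

def model_fn(model_dir):
    """Load the fine-tuned model from model_dir (placeholder)."""
    # e.g. AutoModelForTokenClassification.from_pretrained(model_dir)
    return None

def input_fn(request_body, content_type="application/json"):
    """Pre-processing: deserialize the incoming text document."""
    if content_type != "application/json":
        raise ValueError(f"Unsupported content type: {content_type}")
    return json.loads(request_body)["text"]

def predict_fn(text, model):
    """Run NER on the text (placeholder: returns no entities)."""
    return []

def output_fn(prediction, accept="application/json"):
    """Post-processing: serialize the entities for the response."""
    return json.dumps({"entities": prediction})
```

The same script shape applies whether the endpoint is real-time or serverless; only the endpoint configuration differs.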
1
answers
0
votes
12
views
asked 3 months ago

SAM deployment of lambda (EventSource MQTT) fails with invalid parameter VIRTUAL_HOST

I am struggling with an issue that appeared all of a sudden between two deployments of our application. What we are doing: there is a Lambda function which has an EventSource configured. In this case it is a MessageQueue (MQ) event, listening to an AmazonMQ RabbitMQ broker. It worked fine for many months, but with today's deployment it failed.

Last working deployment: 2022-02-02 11:06:16 UTC+0100

Error:

```
Resource handler returned message: "Invalid request provided: Invalid parameters: VIRTUAL_HOST (Service: Lambda, Status Code: 400, Request ID:, Extended Request ID: null)" (RequestToken: , HandlerErrorCode: InvalidRequest)
```

Template excerpt:

```
ConsumerFunction:
  Type: 'AWS::Serverless::Function'
  Properties:
    CodeUri: .
    Events:
      MQEvent:
        Type: MQ
        Properties:
          BatchSize: 120
          Enabled: true
          Broker: 'arn:aws:mq:us-east-1:11111:broker:cwv-broker:11111'
          Queues:
            - 'consumer-queue-name'
          SourceAccessConfigurations:
            - Type: BASIC_AUTH
              URI: 'arn:aws:secretsmanager:us-east-1:1111:secret:global-secrets/rabbitmq-broker-credentials'
            - Type: VIRTUAL_HOST
              URI: '/consumervhost'
    FunctionName: 'consumer-v1-prod'
    Handler: handler/consumer.php
    Layers:
      - !Sub 'arn:aws:lambda:${AWS::Region}:209497400698:layer:php-80:16'
      - !Sub 'arn:aws:lambda:${AWS::Region}:403367587399:layer:redis-php-80:11'
    MemorySize: 250
    Policies:
      - AWSSecretsManagerGetSecretValuePolicy:
          SecretArn: 'arn:aws:secretsmanager:us-east-1:11111:secret:global-secrets/rabbitmq-broker-credentials'
      - VPCAccessPolicy: {}
      - !Ref CwvMqAccessPolicy
    ReservedConcurrentExecutions: 5
    Runtime: provided.al2
    Timeout: 900
    VpcConfig:
      SecurityGroupIds:
        - !ImportValue MainVPC-DefaultSecurityGroup
      SubnetIds:
        - !ImportValue MainVPC-SubnetPrivateA
        - !ImportValue MainVPC-SubnetPrivateB
Parameters:
  RetentionDays: 1
```

SAM version: SAM CLI, version 1.37.0

Deployment script:

```
sam package \
  --output-template-file /tmp/deploy-stack.yaml \
  --s3-bucket "deployment-resources" \
  --profile "$AWS_PROFILE"

sam deploy \
  --template-file /tmp/deploy-stack.yaml \
  --s3-bucket "deployment-resources" \
  --capabilities CAPABILITY_IAM \
  --stack-name "consumer-prod-v1" \
  --profile "$AWS_PROFILE"
```

Help is much appreciated.
0
answers
0
votes
2
views
asked 3 months ago