Questions tagged with Serverless


Getting error "Request is missing Authentication Token" on serverless deploy for af-south-1 region

We are using a buildspec and the Serverless Framework to deploy our Lambdas to AWS. It was working fine until early morning India time, but in the post_build phase we started seeing the error "Request is missing Authentication Token".

```yaml
version: 0.2
phases:
  install:
    commands:
      - npm install -g serverless@3.13.0
  build:
    commands:
      - env="dev" && region="af-south-1"
      - echo $CODEBUILD_WEBHOOK_HEAD_REF
      - if [ "$CODEBUILD_WEBHOOK_HEAD_REF" = "refs/heads/main" ]; then env="prod"; elif [ "$CODEBUILD_WEBHOOK_HEAD_REF" = "refs/heads/uat" ]; then env="uat"; elif [ "$CODEBUILD_WEBHOOK_HEAD_REF" = "refs/heads/dev" ]; then env="dev"; fi
      - npm install
      - npm run build
      - rm -rf node_modules
      - npm install --production
  post_build:
    commands:
      - env="dev" && region="af-south-1"
      - echo $CODEBUILD_WEBHOOK_HEAD_REF
      - if [ "$CODEBUILD_WEBHOOK_HEAD_REF" = "refs/heads/main" ]; then env="prod"; elif [ "$CODEBUILD_WEBHOOK_HEAD_REF" = "refs/heads/uat" ]; then env="uat"; elif [ "$CODEBUILD_WEBHOOK_HEAD_REF" = "refs/heads/dev" ]; then env="dev"; region="af-south-1"; elif [ "$CODEBUILD_WEBHOOK_HEAD_REF" = "refs/heads/stage" ]; then env="stage"; fi
      - echo "deploying for env=$env region=$region"
      - serverless deploy --stage $env --region $region --verbose
```

Here's the error we are getting:

```
[Container] 2022/11/28 10:36:58 Running command serverless deploy --stage $env --region $region --verbose
To ensure safe major version upgrades ensure "frameworkVersion" setting in service configuration (recommended setup: "frameworkVersion: ^3.13.0")
Warning: Invalid configuration encountered at 'provider': unrecognized property 'package'
Learn more about configuration validation here: http://slss.io/configuration-validation
Deploying test-authorizer to stage dev (af-south-1)
Excluding development dependencies for service package
× Stack test-authorizer-dev failed to deploy (0s)
```

We tried to deploy to us-east-1 and it worked; it only stopped working for af-south-1. We have already checked the following:

1. The deploy role has the right permissions.
2. The STS token is active.
3. We also tried changing the deployment environment on CodeBuild. It didn't work.
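For reference, the branch-to-stage mapping that the buildspec repeats in both the build and post_build phases can be sketched in Python (illustrative only; `resolve_env` is a hypothetical helper, not part of the buildspec):

```python
# Hypothetical helper mirroring the buildspec's if/elif chain that maps
# a CodeBuild webhook head ref to a deployment stage.
def resolve_env(head_ref):
    mapping = {
        "refs/heads/main": "prod",
        "refs/heads/uat": "uat",
        "refs/heads/dev": "dev",
        "refs/heads/stage": "stage",
    }
    # The buildspec initializes env="dev", so unknown refs fall back to dev.
    return mapping.get(head_ref, "dev")

print(resolve_env("refs/heads/main"))  # prod
```

Since both phases compute the same value, moving this mapping into a single script would remove the duplication (and the risk of the two copies drifting apart, as they already have for the `stage` branch).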
1 answer · 2 votes · 46 views
HBL9 · asked a day ago

SageMaker endpoint failing with "An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (413) from primary and could not load the entire response body"

Hello, I have created a SageMaker endpoint by following https://github.com/huggingface/notebooks/blob/main/sagemaker/20_automatic_speech_recognition_inference/sagemaker-notebook.ipynb and it is failing with the error "An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (413) from primary and could not load the entire response body". The predict function returns the following traceback, but the CloudWatch log for the endpoint does not have any error details.

```
ModelError                                Traceback (most recent call last)
/tmp/ipykernel_16248/2846183179.py in <module>
      2 # audio_path = "s3://ml-backend-sales-call-audio/sales-call-audio/1279881599154831602.playback.mp3"
      3 audio_path = "/home/ec2-user/SageMaker/finetune-deploy-bert-with-amazon-sagemaker-for-hugging-face/1279881599154831602.playback.mp3"  ## AS OF NOW have stored locally in notebook instance
----> 4 res = predictor.predict(data=audio_path)
      5 print(res)

~/anaconda3/envs/amazonei_pytorch_latest_p37/lib/python3.7/site-packages/sagemaker/predictor.py in predict(self, data, initial_args, target_model, target_variant, inference_id)
    159             data, initial_args, target_model, target_variant, inference_id
    160         )
--> 161         response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args)
    162         return self._handle_response(response)
    163

~/anaconda3/envs/amazonei_pytorch_latest_p37/lib/python3.7/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
    493         )
    494         # The "self" in this scope is referring to the BaseClient.
--> 495         return self._make_api_call(operation_name, kwargs)
    496
    497     _api_call.__name__ = str(py_operation_name)

~/anaconda3/envs/amazonei_pytorch_latest_p37/lib/python3.7/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
    912             error_code = parsed_response.get("Error", {}).get("Code")
    913             error_class = self.exceptions.from_code(error_code)
--> 914             raise error_class(parsed_response, operation_name)
    915         else:
    916             return parsed_response

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (413) from primary and could not load the entire response body. See https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#logEventViewer:group=/aws/sagemaker/Endpoints/asr-facebook-wav2vec2-base-960h-2022-11-25-19-27-19 in account xxxx for more information.
```
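An HTTP 413 from the primary container generally means the request payload exceeded the size limit for real-time SageMaker endpoints (around 6 MB per invocation), which is easy to hit with audio files. A minimal pre-flight check, assuming a local file path (the constant and helper name here are illustrative, not part of the SageMaker SDK):

```python
import os

# Real-time SageMaker endpoints cap the request payload at roughly 6 MB;
# larger inputs typically need async inference or an S3-based input flow.
MAX_PAYLOAD_BYTES = 6 * 1024 * 1024

def fits_realtime_limit(path):
    """Return True if the file at `path` is small enough for one InvokeEndpoint call."""
    return os.path.getsize(path) <= MAX_PAYLOAD_BYTES
```

Checking the size of `1279881599154831602.playback.mp3` against this limit before calling `predictor.predict` would confirm whether the 413 is simply a payload-size issue.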
1 answer · 0 votes · 63 views
asked 4 days ago

C++ Lambda - segmentation fault

Hi, I am trying to create a Lambda function with access to an S3 bucket that allows the usual operations on the bucket, such as creating and deleting files. On my Windows PC I installed a WSL instance of Ubuntu 20.04 and installed the AWS CLI according to https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.htm and then followed the guide to configure it and set access keys: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html. Next, I followed the guide on creating a hello-world Lambda function in C++: https://aws.amazon.com/blogs/compute/introducing-the-c-lambda-runtime/. For the first example everything is fine, and the Lambda function runs successfully from the AWS console without any errors or warnings. But when I continue the same guide into the "beyond hello" section, things start going wrong. I set up everything that was needed and installed the SDKs as required. Now, when I try to make the project, the compiler stops with this message:

```
cpp-encoder-example/main.cpp:78:56: error: no matching function for call to ‘Aws::S3::S3Client::S3Client(std::shared_ptr<Aws::Auth::EnvironmentAWSCredentialsProvider>&, Aws::Client::ClientConfiguration&)’
   78 |         S3::S3Client client(credentialsProvider, config);
```

followed by a few lines with this note, where n = 5, 4 and 1 (the line number is that of the first warning thrown):

```
include/aws/s3/S3Client.h:96:9: note: candidate expects n arguments, 2 provided
```

When I remove `credentialsProvider` from `S3::S3Client client(credentialsProvider, config);` in main.cpp, everything compiles. (Should that work?)

However, when I then create the Lambda function and press Test in the AWS console, it stops with:

```
s2n_init() failed: 402653268 (Failed to load or unload an openssl provider)
Fatal error condition occurred in /home/username/aws-sdk-cpp/crt/aws-crt-cpp/crt/aws-c-io/source/s2n/s2n_tls_channel_handler.c:197: 0 && "s2n_init() failed"
Exiting Application
No call stack information available
START RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Version: $LATEST
2022-11-21T09:02:07.642Z xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Task timed out after 1.02 seconds
END RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
REPORT RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Duration: 1015.50 ms Billed Duration: 1000 ms Memory Size: 128 MB Max Memory Used: 16 MB
```

For some reason I think something is failing at compile time with the certificates. What certificates may I not have set correctly, and what installation step might I have missed? Or have I failed at something else? Can someone give me a pointer (pun not intended) to what to do or try? P.S. I'm not sure what tags to add, since SDK and C++ are not among them.
1 answer · 0 votes · 20 views
asked 5 days ago