Questions tagged with Serverless


Hi, I am deploying a Lambda function that uses the NLTK packages for preprocessing text. For the application to work I need to download the stopwords, punkt and wordnet data. I deployed using a Docker image and the SAM CLI. When the function runs on AWS, I get a series of errors when trying to access the NLTK data. The first error I got was that '/home/sbx_user1051/' cannot be edited. After reading solutions on Stack Overflow, I was pointed in the direction of storing the NLTK data in the /tmp/ directory, because that is the only directory that can be modified. Now, after redeploying the image with the code changes, the files are stored in /tmp, but the Lambda function does not look there when trying to access the stop words. It still searches for the file in these directories:

- '/home/sbx_user1051/nltk_data'
- '/var/lang/nltk_data'
- '/var/lang/share/nltk_data'
- '/var/lang/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'

What should I do to make the NLTK data available when running this function on AWS Lambda?
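One pattern worth trying (a sketch, not taken from the question): NLTK consults the `NLTK_DATA` environment variable when it builds its search path, so setting it before `import nltk` runs should make `/tmp` one of the searched directories. The directory name `/tmp/nltk_data` is an assumption here.

```python
import os

# Assumption: NLTK reads the NLTK_DATA environment variable at import time,
# so this must run before "import nltk" anywhere in the handler module.
NLTK_DATA_DIR = "/tmp/nltk_data"
os.environ["NLTK_DATA"] = NLTK_DATA_DIR
os.makedirs(NLTK_DATA_DIR, exist_ok=True)

# The handler module would then continue with (sketch, not executed here):
#   import nltk
#   nltk.data.path.append(NLTK_DATA_DIR)  # make the search path explicit too
#   nltk.download("stopwords", download_dir=NLTK_DATA_DIR)
#   nltk.download("punkt", download_dir=NLTK_DATA_DIR)
#   nltk.download("wordnet", download_dir=NLTK_DATA_DIR)
```

Downloading at cold start keeps the image immutable while the writable data lands in the only writable path on Lambda.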
0
answers
0
votes
11
views
Tyler
asked a day ago
I am currently working on a Lambda function in which I have to send a message to an SQS queue. The function sits inside a VPC to allow connection with a peered network that it makes requests to. Whenever I try to send the message to SQS, however, code execution seems to time out consistently. I had the same issue when I was trying to send commands to DynamoDB.

```
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqsClient = new SQSClient({ region: "us-east-1" });

export const handler = async (event, context, callback) => {
  const response = await sqsClient.send(new SendMessageCommand(messageParams));
  console.log(response); // <----- Doesn't reach here
  return callback(null, "OK");
};
```

IAM permissions are all correct, and the security group allows all traffic (when the function is attached to a VPC). So far, to specifically target the timeout problem, I've tried putting the function in a private subnet, in a public subnet, and in no VPC at all, and replacing SDK v3 with aws-sdk v2 via a layer. None of these seem to have any impact on the issue. I haven't used VPC endpoints yet, but I assume they shouldn't be necessary when the function is not connected to a VPC or is in a public subnet?
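For context: a Lambda function attached to a VPC has no route to public AWS endpoints unless its subnet routes through a NAT gateway, so calls to SQS or DynamoDB hang until the function times out. An SQS interface endpoint is the usual fix. A minimal CloudFormation sketch, where `MyVpc`, `PrivateSubnetA` and `EndpointSecurityGroup` are hypothetical resources standing in for the real ones:

```
SqsVpcEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub 'com.amazonaws.${AWS::Region}.sqs'
    VpcId: !Ref MyVpc
    VpcEndpointType: Interface
    PrivateDnsEnabled: true
    SubnetIds:
      - !Ref PrivateSubnetA
    SecurityGroupIds:
      - !Ref EndpointSecurityGroup
```

With `PrivateDnsEnabled: true`, the SDK's default SQS hostname resolves to the endpoint's private IPs, so no code change should be needed.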
0
answers
0
votes
12
views
asked 2 days ago
* The CloudFormation stack below is failing with this error: "Resource handler returned message: Error occurred during operation 'CreateApplication'." (RequestToken: <some-token-id>, HandlerErrorCode: GeneralServiceException)
* Region: eu-west-1
* Does anyone know what the possible reasons for this error could be?

```
AWSTemplateFormatVersion: 2010-09-09
Description: EMR serverless cluster
Resources:
  EmrSparkApp:
    Type: AWS::EMRServerless::Application
    Properties:
      Type: Spark
      ReleaseLabel: emr-6.9.0
Outputs:
  EmrSparkAppId:
    Description: Application ID of the EMR Serverless Spark App
    Value: !Ref EmrSparkApp
```
1
answers
0
votes
17
views
asked 3 days ago
Hello everyone, I am facing an odd situation here. For the past few days, some of my events have been fired twice on the same (default) bus. They are exactly the same: content and id. As a result, they triggered some Lambdas twice, messing with our event processing. I thought that was supposed to be impossible. I assume that if there are two logs in events/debug, two events were fired. Look at the photo: you can see the same id in the JSON at the same hour. ![Duplicate log](/media/postImages/original/IMZY4XnsBAQoSpbfzey5DcZw) If you have any idea what could cause this, thanks for your help. EDIT 1: The events are generated by a Lambda using the AWS SDK for Node.js and the putEvents method.
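Until the root cause is found, one mitigation sketch (my own, not from the question) is to make the consuming Lambda idempotent by remembering event ids it has already processed. This in-memory version only survives within a warm container; a real deployment would use a persistent store such as a DynamoDB conditional put keyed on the event id.

```python
# In-memory de-duplication keyed on the EventBridge event id.
# Assumption: duplicate deliveries reuse the same "id" field, as observed
# in the question's debug logs.
_seen_ids = set()

def handle_once(event, process):
    """Run process(event) only the first time this event id is seen."""
    event_id = event["id"]
    if event_id in _seen_ids:
        return False  # duplicate delivery, skipped
    _seen_ids.add(event_id)
    process(event)
    return True
```

Calling `handle_once` twice with the same id processes the event once and reports the second delivery as a duplicate.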
2
answers
0
votes
23
views
newza
asked 3 days ago
I'm trying to create a nested stack in a CloudFormation template. I have declared in the parent template a reference to the HTTP API we are using, and I use this API in the child template. When I try to build with SAM, it throws this error:

**"E0001 Error transforming template: ApiId must be a valid reference to an 'AWS::Serverless::HttpApi' resource in same template."**

**Parent template declaration:**

```
childStack:
  Type: "AWS::Serverless::Application"
  Properties:
    Location: ./child.yaml
    Parameters:
      ApiId: !Ref ApiReference
```

**Child template declaration:**

```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  ApiId:
    Type: string
Globals:
  Function:
    Runtime: !Ref "AWS::NoValue"
    Handler: !Ref "AWS::NoValue"
    Layers: !Ref "AWS::NoValue"
Resources:
  lambdaFunctionLogGroup:
    Type: 'AWS::Logs::LogGroup'
    Properties:
      Location: ./parent.yaml
      LogGroupName: !Join
        - '/'
        - - '/aws/lambda'
          - !Ref 'TournamentSubscriptionFunction'
      RetentionInDays: !FindInMap [Service, !Ref EnvironmentName, LogRetentionInDays]
  lambdaFunction:
    Location: ./parent.yaml
    Type: AWS::Serverless::Function
    Properties:
      Description: Image validation for identity verification
      FunctionName: !Sub '${EnvironmentName}-lambda'
      PackageType: Image
      Architectures: ['arm64']
      Environment:
        Variables:
          ExampleVariable
      Policies:
        - CloudWatchLambdaInsightsExecutionRolePolicy
      Events:
        Name:
          Type: HttpApi
          Properties:
            Path: /event-client/api/lambda
            Method: POST
            ApiId: !Ref ApiReference
            Auth:
              Authorizer: OAuth2Authorizer
      VpcConfig:
        SubnetIds: !Split
          - ','
          - Fn::ImportValue: !Sub '${EnvironmentName}-PrivateSubnets'
      Tags:
        Environment: !Sub '${EnvironmentName}'
    Metadata:
      DockerTag: nodejs16.x-v1
      DockerContext: ../dist/src/client/lambda-route
      Dockerfile: Dockerfile
```
1
answers
0
votes
21
views
asked 3 days ago
I have a Lambda that processes messages from SQS. The input queue has a redrive policy that moves messages to a DLQ if the Lambda fails to process them after repeated attempts. This arrangement works and, if there are messages in the DLQ, I can send them back to the source queue using the AWS console "Start DLQ redrive" button, along with the "Redrive to source queue(s)" option. For some messages, however, the Lambda function decides to push them directly to the DLQ. For those messages, when I try a DLQ redrive using the "Redrive to source queue(s)" option, it fails with "Failed: CouldNotDetermineMessageSource". Is there any way I can avoid this error, or does the "Redrive to source queue(s)" option only work for messages put in the DLQ by the AWS runtime?
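If the problem is that directly-sent messages carry no record of their source queue (unlike messages moved by the redrive policy), a workaround sketch (my own; `build_dlq_entry` is a hypothetical helper, not from the question) is to attach the source queue as a message attribute when the function pushes a message to the DLQ itself, so a custom redrive script can route it back:

```python
def build_dlq_entry(body, source_queue_url):
    """Build kwargs for sqs_client.send_message(QueueUrl=dlq_url, **entry).

    The sourceQueueUrl attribute is a custom convention a redrive script
    could read back to decide where to return the message.
    """
    return {
        "MessageBody": body,
        "MessageAttributes": {
            "sourceQueueUrl": {
                "DataType": "String",
                "StringValue": source_queue_url,
            }
        },
    }
```

A redrive script would then read `sourceQueueUrl` from each received DLQ message and re-send the body to that queue, instead of relying on the console option.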
1
answers
0
votes
18
views
asked 4 days ago
Hello all! I am trying to use the `allOf` keyword to inherit the attributes of one schema model in another, as such:

```
providerIdentityExpansion:
  description: ID of the Provider Identity with fields possibly expanded.
  allOf:
    - $ref: '#/components/schemas/providerIdentityNoExpansion'
  properties:
    oft_confused_with:
      oneOf:
        - $ref: '#/components/schemas/oft_confused_with'
        - $ref: '#/components/schemas/oftConfusedWithExpansionArray'
  title: ProviderIdentityExpansion
  type: object
```

I previously just duplicated the model but changed the names (one with expansion, one without). But using allOf, when I use swagger-cli to bundle all the OpenAPI 3.0 spec docs into one file and deploy on API Gateway, it responds with the following warnings:

```
"Unsupported model type 'ComposedSchema' in 200 response to method 'GET /providers/{id}/identity'. Ignoring.",
"Unsupported model type 'ComposedSchema' in 200 response to method 'GET /providers/{id}/identity/children'. Ignoring."
```

Is anyone familiar with the model type ComposedSchema, and with how to use the `allOf` keyword without triggering this error in AWS?
0
answers
0
votes
13
views
asked 4 days ago
I want to write an AWS Lambda function that is able to respond to various types of event types: API Gateway, Kinesis, S3, etc. The API/SDK I need to work with is Java. I'd like to create a general-purpose handler, but it appears that each service has its own event type and does not derive from some common parent event type. Is there a pattern or a best practice for creating an AWS Lambda function that can be used with a variety of event types? Or do I need to create a custom handler for each event type?
1
answers
0
votes
17
views
Brian
asked 5 days ago
Hello all! I am investigating an issue with recent API Gateway deployments that produce warnings in the Jenkins console output resembling the following:

```
"warnings": [
  "More than one server provided. Ignoring all but the first for defining endpoint configuration",
  "More than one server provided. Ignoring all but the first for defining endpoint configuration",
  "Ignoring response model for 200 response on method 'GET /providers/{id}/identity/children' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
  "Ignoring request model for 'PUT /providers/{id}/admin_settings' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
  "Ignoring response model for 200 response on method 'GET /providers/{id}/profile/addresses/{address_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
  "Ignoring response model for 200 response on method 'GET /providers/{id}/profile/anecdotes/{anecdote_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
  "Ignoring request model for 'POST /providers/{id}/routes' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
  "Ignoring response model for 200 response on method 'GET /providers/{id}/routes/{route_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
  "Ignoring response model for 200 response on method 'GET /service_type_groups/{id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
  "Ignoring response model for 200 response on method 'GET /service_types/{id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method."
]
```

Here is an example of the 200 response for an affected method in the OAS doc:

```
responses:
  '200':
    description: Array of Provider Identities that are children of this Provider
    content:
      'application/json':
        schema:
          description: Array of children provider identities
          type: array
          items:
            $ref: '#/components/schemas/providerIdentityExpansion'
  '404':
    $ref: '#/components/responses/not_found'
  '500':
    $ref: '#/components/responses/server_error'
```

Based on the language of the warnings, my understanding is that there is some kind of default request/200 response model defined, and it is somehow being overwritten in the API methods themselves. But some other (seemingly) warning-free methods look identical in how they are implemented. I have tried a few potential fixes, removing and adding attributes, but none have worked so far. Could anyone help me find what exactly is going wrong in the OAS doc?
0
answers
0
votes
21
views
asked 6 days ago
Hi, can anyone share the code flow for a Chime SDK serverless meeting with a Cognito user pool? A step-by-step process would be a great help; we tried, but we got lost. Thanks, Siddharth
1
answers
0
votes
24
views
asked 6 days ago
Hi AWS, is this workflow architecture possible?

RDS (PostgreSQL) -> Amazon MQ broker -> Lambda function -> S3 bucket (data is stored for customers)

The database could be DynamoDB as well. Amazon MQ is used as an event source for the Lambda function, and the Lambda sends a request to API Gateway, gets the JSON response, and then sends it to S3 to be stored as output. Please advise.
2
answers
0
votes
38
views
asked 6 days ago
Hello, I am trying to use API Gateway with a Lambda function, but with my own domain (which is on Route 53). This is my current config: in API Gateway I created a resource with a GET method and published it to a stage I called v1. I get an endpoint like

```
https://11111111.execute-api.us-east-1.amazonaws.com/v1
```

If I call this endpoint, I can see the reply from my Lambda function. So far so good. Then, in API Gateway again, I created a custom domain name for api.mydomain.com, and I get something like

```
22222222.execute-api.us-east-1.amazonaws.com
```

Finally, in Route 53 I created an A record (api.mydomain.com), marked as ALIAS, with the value

```
22222222.execute-api.us-east-1.amazonaws.com
```

If I try to call https://api.mydomain.com/v1 I get a 403 error. Am I missing something? Also, do I need to enable CORS to allow any browser to call this endpoint?
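One step that is easy to miss with custom domains: the domain also needs an API mapping tying it to a specific API and stage, otherwise requests through the domain return 403. A CloudFormation sketch, where `MyRestApi` is a hypothetical stand-in for the actual REST API resource:

```
ApiV1Mapping:
  Type: AWS::ApiGateway::BasePathMapping
  Properties:
    DomainName: api.mydomain.com
    RestApiId: !Ref MyRestApi
    Stage: v1
    BasePath: v1
```

Note that the base path replaces the stage segment in the URL: with `BasePath: v1` the call stays `https://api.mydomain.com/v1`, whereas omitting `BasePath` would expose the stage at `https://api.mydomain.com/` directly.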
2
answers
0
votes
34
views
asked 6 days ago