Questions tagged with Serverless
Aurora MySQL Serverless v1 is no longer supported for new clusters, yet much of the documentation still points to Serverless v1. I am using Aurora MySQL Serverless v2 with Secrets Manager, and I have a Python module that can connect to the writer endpoint just fine without an RDS Proxy. Following the Lambda function examples, I created an RDS Proxy, but I am having a hard time finding reliable Lambda code examples, especially in JavaScript, that can successfully connect to my Aurora Serverless v2 cluster. The goal is to have this connection triggered by Cognito events.
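For reference, this is roughly the pattern I am after, sketched in Python since that is what already works for me against the writer endpoint (the proxy endpoint, secret name, and database name below are placeholders, and the commented handler portion assumes boto3 and pymysql are available in the Lambda runtime):

```python
# Hypothetical sketch: turn a Secrets Manager secret into connection
# kwargs for connecting to Aurora Serverless v2 through an RDS Proxy.
# All endpoint/secret/database names are placeholders.
import json

def connection_params(secret_json: str, proxy_endpoint: str, database: str) -> dict:
    """Build pymysql-style connection kwargs from a SecretString payload."""
    secret = json.loads(secret_json)
    return {
        "host": proxy_endpoint,   # connect to the proxy, not the cluster endpoint
        "user": secret["username"],
        "password": secret["password"],
        "database": database,
        "connect_timeout": 5,
    }

# Inside a Lambda handler this would be used roughly as:
#   import boto3, pymysql
#   raw = boto3.client("secretsmanager").get_secret_value(SecretId="my-secret")
#   conn = pymysql.connect(**connection_params(
#       raw["SecretString"],
#       "my-proxy.proxy-xyz.us-east-1.rds.amazonaws.com",
#       "mydb"))
```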
I have an architecture where two entities post data to an API Gateway. Each follows its own (different) JSON schema, and API Gateway then pushes the data into a Kinesis Data Stream/Firehose. Should I create a separate stream + Firehose for each schema? I understand that I could stream both into the same Kinesis Data Stream/Firehose and use a Lambda to parse each data point and decide how to write it to S3, but I am worried about Lambda concurrency issues should the data velocity spike. What is the best practice in this context?
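To make the single-stream option concrete, here is a minimal sketch of the per-record routing the Lambda would do (the "source" field and prefix names are hypothetical):

```python
# Hypothetical routing sketch for the single-stream option: assume each
# producer tags its payload with a "source" field, and the Lambda maps
# that to an S3 prefix before writing. Field and prefix names are placeholders.
import json

PREFIXES = {
    "entity_a": "raw/entity_a/",
    "entity_b": "raw/entity_b/",
}

def s3_prefix(record_data: bytes) -> str:
    """Pick an S3 prefix based on the record's 'source' field."""
    payload = json.loads(record_data)
    return PREFIXES.get(payload.get("source"), "raw/unknown/")
```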
When querying from Redshift to serverless Aurora in a cross-account setup, I am facing a "timeout expired" error. Debug driver logs:
```
com.amazon.redshift.core.v3.QueryExecutorImpl.receiveErrorResponse: <=BE ErrorMessage(ERROR: timeout expired
Location: File: ../src/external_catalog/external_catalog_api.cpp, Routine: localize_external_table, Line: 1267
Server SQLState: D;600)
```
The cross-account setup is well verified (anonymized details below):
```
nc -vz auroradnsname.rds.amazonaws.com 5432
Connection to auroradnsname.rds.amazonaws.com 5432 port [tcp/postgres] succeeded!
```
P.S.: We are using AWS Transit Gateway instead of VPC peering.
On updating the mssql package from version 6.2.0 to version 9.1.0 in package.json, we are facing the error below:
```
Failed to connect to Server1_name - Hostname/IP does not match certificate's altnames: Host: Server1_name is not in the cert's altnames: DNS:Server2_name
```
The Lambda is written in Node.js. It connects to the MSSQL server with npm mssql version 6.2.0, but throws the above error with npm mssql version 9.1.1. We upgraded the version to resolve a security vulnerability reported for a dependency package (xmldom). Please advise on the root cause and possible ways to resolve the error.
Hello,
I desperately need help connecting to an Amazon Redshift server using an ODBC driver. I have followed the "Configuring an ODBC connection" guide here: https://docs.aws.amazon.com/redshift/latest/mgmt/configure-odbc-connection.html#obtain-odbc-url, but I am unable to figure out what's wrong with my setup.
I have tried connecting from R like this:
```
con <- DBI::dbConnect(odbc::odbc(),
                      Driver = "/opt/amazon/redshift/lib/amazonredshiftodbc.dylib",
                      Host = "rhealth-prod-4.cldcoxyrkflo.us-east-1.redshift.amazonaws.com",
                      Schema = "dev",
                      Port = 5439)
```
I get the following error:
```
Error: nanodbc/nanodbc.cpp:1118: 00000: [Amazon][ODBC] (11560) Unable to locate SQLGetPrivateProfileString function: [Amazon][DSI] An error occurred while attempting to retrieve the error message for key 'LibsLoadErr' with message parameters ['""'] and component ID 3: Message not found in file "/opt/amazon/redshift/ErrorMessages/en-US/ODBCMessages.xml"
```
The odbc.ini and odbcinst.ini files are in my /User/ location, so I shouldn't need to set environment variables unless I am missing something. Here are my configuration files:
odbc.ini:
```
[ODBC Data Sources]
Amazon_Redshift_dylib=Amazon Redshift DSN for macOS X

[Amazon Redshift DSN for macOS X]
Driver=/opt/amazon/redshift/lib/amazonredshiftodbc.dylib
Host=rhealth-prod-4.cldcoxyrkflo.us-east-1.redshift.amazonaws.com
Port=5439
Database=saf
locale=en-US
```
odbcinst.ini:
```
[ODBC Drivers]
Amazon_Redshift_dylib=Installed

[Amazon_Redshift_dylib]
Description=Amazon Redshift DSN for macOS X
Driver=/opt/amazon/redshift/lib/amazonredshiftodbc.dylib
```
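For completeness, I could also exercise the same DSN programmatically outside of R. The helper below just builds the connection string; actually connecting would need pyodbc and a working driver manager, and the DSN name and credentials are placeholders:

```python
# Hypothetical helper: build the connection string pyodbc.connect() would
# take for the DSN defined in odbc.ini above. DSN name and credentials
# are placeholders.
def dsn_connection_string(dsn: str, user: str, password: str) -> str:
    """Assemble a DSN-based ODBC connection string."""
    return "DSN=%s;UID=%s;PWD=%s" % (dsn, user, password)

# import pyodbc
# conn = pyodbc.connect(dsn_connection_string("Amazon_Redshift_dylib",
#                                             "myuser", "mypassword"))
```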
Any insight would be greatly appreciated.
Hi there,
I am attempting to use the extension in the title, following the guide mentioned [here](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets_lambda.html), but I am unfortunately hitting an issue: the extension starts and awaits requests (I can see this in the logs after setting the debug flag), yet when I send the request, it times out. I have set the Lambda's timeout to the maximum value, with the same effect.

I have set my Lambda's execution role in the following manner:

My function runs on arm64 and is written in TypeScript. My code to request my secret is as follows:

I have been trying different things, all to no avail; the application still times out. Any help on what is going on here would be greatly appreciated.
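For context, my call to the extension is equivalent to this Python sketch (the secret name is a placeholder; the localhost port and token header follow the guide's defaults):

```python
# Hypothetical sketch of the extension call: the Secrets Manager Lambda
# extension listens on localhost:2773 and expects the session token in
# the X-Aws-Parameters-Secrets-Token header. The secret name is a placeholder.
import os
import urllib.parse
import urllib.request

def build_request(secret_id: str, port: int = 2773) -> urllib.request.Request:
    """Build the HTTP request the extension expects for a secret lookup."""
    url = "http://localhost:%d/secretsmanager/get?secretId=%s" % (
        port, urllib.parse.quote(secret_id, safe=""))
    req = urllib.request.Request(url)
    req.add_header("X-Aws-Parameters-Secrets-Token",
                   os.environ.get("AWS_SESSION_TOKEN", ""))
    return req

# In the handler, the lookup would then be:
#   import json
#   secret = json.loads(urllib.request.urlopen(build_request("my-secret")).read())
```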
We have physical gateways deployed all around the country; they've been working for months, and out of the blue they've stopped posting data.
**Our architecture:**
1. nRF thingy91 as the gateway
2. Serverless with API gateway and Lambdas
3. HTTPS POST request from the gateway to the endpoint
4. MongoDB as the database
**Things we've discovered:**
1. Gateways stopped sending data a week ago
2. Testing a gateway, we found that it does post, but receives an error from the backend
3. No changes were made to the backend and there were no redeployments
4. All gateways have enough data on their SIMs
5. Serverless offline works
6. The Lambdas aren't being invoked, but API Gateway is getting some traffic, though I'm having trouble deciphering what and how exactly
7. CloudWatch shows no new logs
8. Postman doesn't work either
9. We don't see any billing issues
10. Redeploying doesn't resolve the issue
11. Receiving and handling requests works just fine and dandy from our mobile and web apps, which communicate with different endpoints but are under the same stage.
I'm receiving this error in Lambda:
```
Unable to import module 'functions': No module named 'functions' Traceback
```
My CDK code for the Lambda includes:
```
code: Code.fromAsset(path.resolve(__dirname, BundleLocation + name)),
```
I'm deploying this via the CDK. I confirmed I have this in my CDK code for my Lambda; however, I am still receiving the error. When I go into CloudWatch to troubleshoot why my app is not working, the logs show the error above.
I also want to note that my CDK stack is in TypeScript and my Lambda code is in Python; I'm not sure if this makes a difference. Does anyone know what could cause this error, given that I have the correct syntax? Also, are there any workarounds for this?
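As I understand it, this error usually means the bundled asset does not contain a `functions.py` at its root matching the handler string. A small sketch of that check (the handler string and bundle layout are my assumptions):

```python
# Hypothetical sanity check: a Lambda handler string like
# "functions.lambda_handler" requires a functions.py at the root of the
# deployed asset. These helpers verify that locally before deploying.
import os

def handler_module_file(handler: str) -> str:
    """Map a handler string to the module file it implies."""
    return handler.split(".")[0] + ".py"

def handler_file_present(bundle_dir: str, handler: str) -> bool:
    """Check the asset directory contains the module named by the handler."""
    return os.path.isfile(os.path.join(bundle_dir, handler_module_file(handler)))
```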
How do I attach an authorizer to an API Gateway v2 route in AWS CloudFormation?
I am using API Gateway v2 and CloudFormation.
I am using the stages "prod" and "stg", and I would like each stage to use its own Lambda.
In the AWS console it is just one click of the "Attach Authorization" button in the "Routes" section.
I am using a simple authorizer.
My CloudFormation looks like this:
```
Authorizer:
  Type: 'AWS::ApiGatewayV2::Authorizer'
  Properties:
    ApiId: !Ref ApiGateway
    AuthorizerPayloadFormatVersion: 2.0
    AuthorizerResultTtlInSeconds: 5
    AuthorizerType: REQUEST
    AuthorizerUri: !Join
      - ''
      - - 'arn:'
        - !Ref 'AWS::Partition'
        - ':apigateway:'
        - !Ref 'AWS::Region'
        - ':lambda:path/2015-03-31/functions/'
        - 'arn:aws:lambda:'
        - !Ref 'AWS::Region'
        - ':'
        - !Ref 'AWS::AccountId'
        - ':function:${stageVariables.AuthorizerFunctionName}'
        - '/invocations'
    EnableSimpleResponses: true
    IdentitySource:
      - '$request.header.Authorization'
    Name: !Sub ${ProjectName}-gateway-authorizer

MyRoute:
  Type: AWS::ApiGatewayV2::Route
  Properties:
    ApiId: !Ref ApiGateway
    AuthorizationType: CUSTOM
    AuthorizerId: !Ref Authorizer
    RouteKey: 'POST /posts/all'
    Target: !Join
      - /
      - - integrations
        - !Ref PostsLambdaIntegrationGet
```
Authorizer lambda body:
```
import json
# import jwt

def lambda_handler(event, context):
    print('*********** The event is: ***************')
    print(event)
    print('headers is:')
    print(event['headers'])
    print('headers Authorization is:')
    # NOTE: header names are downcased by Postman or the API ("A" -> "a")
    print(event['headers']['authorization'])

    if event['headers']['authorization'] == 'abc123':
        response = {
            "isAuthorized": True,
            "context": {
                "anyotherparam": "values"
            }
        }
    else:
        response = {
            "isAuthorized": False,
            "context": {
                "anyotherparam": "values"
            }
        }

    print('response is:')
    print(response)
    return response
```
By the way, I do not see this option in the [apigatewayv2 CLI documentation](https://docs.aws.amazon.com/cli/latest/reference/apigatewayv2/index.html) either.
I have also asked this question on Stack Overflow: [attach authorizer to api gateway V2 route in aws cloudformation](https://stackoverflow.com/questions/75225545/attach-authorizer-to-api-gateway-v2-route-in-aws-cloudformation).
1) I attached the authorizer.
2) I deployed the API.
3) I checked the authorizer with a hardcoded Lambda name (it works), which verifies that my Lambda and permissions are correct.
Is a Postgres Serverless v2 instance good for a production transactional system?
Hi,
I have created an ASP.NET Core Web API project using the option below from Visual Studio. (I needed to install the AWS Toolkit for Visual Studio to use this template.)

I have deployed this ASP.NET Core Web API to AWS Lambda by right-clicking the project and using the option

When I try to deploy, the popup below appears, but it doesn't give options to deploy to multiple environments.

Now I want to deploy the same Web API to a different environment, such as a test environment. How can I do that?
[Jeremy Daly talks about reusing RDS connections in AWS Lambda](https://www.jeremydaly.com/reuse-database-connections-aws-lambda/). I assume this principle of establishing the connection outside of the Lambda handler's context would work for Snowflake connections too?
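A minimal sketch of the pattern, assuming it carries over to Snowflake (the snowflake.connector call is commented out since it needs the snowflake-connector-python package and real credentials):

```python
# Hypothetical sketch of the connection-reuse pattern: anything created at
# module scope survives across warm invocations of the same Lambda
# execution environment, so the connection is only created on cold start.

_conn = None  # module scope: reused while the execution environment is warm

def get_connection(factory):
    """Return the cached connection, creating it on the first (cold) call."""
    global _conn
    if _conn is None:
        _conn = factory()
    return _conn

# def handler(event, context):
#     import snowflake.connector
#     conn = get_connection(lambda: snowflake.connector.connect(
#         user="...", password="...", account="..."))
#     ...
```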