
Questions tagged with Serverless


Browse through the questions and answers listed below.

Cognito - CustomSMSSender InvalidCiphertextException: null on Code Decrypt (Golang)

Hi, I followed this document to customize the Cognito SMS delivery flow: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-custom-sms-sender.html

I'm not working in a JavaScript environment, so I wrote this Go snippet:

```go
package main

import (
	"context"
	golog "log"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
)

// Using these types because aws-sdk-go does not provide them.

// CognitoEventUserPoolsCustomSmsSender is sent by AWS Cognito User Pools before each SMS to send.
type CognitoEventUserPoolsCustomSmsSender struct {
	events.CognitoEventUserPoolsHeader
	Request CognitoEventUserPoolsCustomSmsSenderRequest `json:"request"`
}

// CognitoEventUserPoolsCustomSmsSenderRequest contains the request portion of a CustomSmsSender event.
type CognitoEventUserPoolsCustomSmsSenderRequest struct {
	UserAttributes map[string]interface{} `json:"userAttributes"`
	Code           string                 `json:"code"`
	ClientMetadata map[string]string      `json:"clientMetadata"`
	Type           string                 `json:"type"`
}

func main() {
	lambda.Start(sendCustomSms)
}

func sendCustomSms(ctx context.Context, event *CognitoEventUserPoolsCustomSmsSender) error {
	golog.Printf("received event=%+v", event)
	golog.Printf("received ctx=%+v", ctx)

	config := aws.NewConfig().WithRegion(os.Getenv("AWS_REGION"))
	session, err := session.NewSession(config)
	if err != nil {
		return err
	}

	kmsProvider := kms.New(session)
	smsCode, err := kmsProvider.Decrypt(&kms.DecryptInput{
		KeyId:          aws.String("a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"),
		CiphertextBlob: []byte(event.Request.Code),
	})
	if err != nil {
		return err
	}

	golog.Printf("decrypted code %v", smsCode.Plaintext)
	return nil
}
```

I'm always getting `InvalidCiphertextException: : InvalidCiphertextException null`. Can someone help? This is how the Lambda config looks on my user pool:

```
"LambdaConfig": {
    "CustomSMSSender": {
        "LambdaVersion": "V1_0",
        "LambdaArn": "arn:aws:lambda:eu-west-1:...:function:cognito-custom-auth-sms-sender-dev"
    },
    "KMSKeyID": "arn:aws:kms:eu-west-1:...:key/a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"
},
```
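One thing worth checking (an assumption, not something confirmed in this thread): the `code` field delivered in the custom-sender event is base64-encoded text, so it has to be decoded to raw bytes before being handed to KMS; passing the string bytes straight through is a common way to get `InvalidCiphertextException`. A minimal sketch of the decode-then-decrypt pattern in Python with boto3, with the key ID copied from the question and the region assumed to match the user pool:

```python
import base64

import boto3

# Sketch only: decode the base64 payload before passing it to KMS Decrypt.
# Key ID copied from the question; region assumed from the pool's ARNs.
kms = boto3.client("kms", region_name="eu-west-1")

def decrypt_sms_code(encoded_code: str) -> bytes:
    ciphertext = base64.b64decode(encoded_code)  # the event carries base64 text, not raw ciphertext bytes
    response = kms.decrypt(
        KeyId="a8a566c5-796a-4ba1-8715-c9c17c6f0cb5",
        CiphertextBlob=ciphertext,
    )
    return response["Plaintext"]
```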
0 answers | 0 votes | 0 views · AWS-User-1153293, asked a day ago

SAM deploy does not deploy Layer dependencies to S3

In my SAM template I've got 2 Lambda functions that share dependencies via a Layer. Here's my directory structure. As you can see, the individual functions have no `requirements.txt` file; it's shared within the `deps/` directory:

```
├── deps
│   └── requirements.txt
├── src
│   ├── function1
│   │   └── getArticlesById.py
│   └── function2
│       └── getArticlesById.py
└── template.yaml
```

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Sample SAM Template for testing API Gateway, Lambda, DynamoDB integration

Globals:
  Api:
    OpenApiVersion: 3.0.1
  Function:
    Timeout: 5

Parameters:
  Environment:
    Type: String
    Default: dev

Resources:
  DepsLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      Description: !Sub Dependencies for ${AWS::StackId}-${Environment}
      ContentUri: deps/
      CompatibleRuntimes:
        - python3.9
      RetentionPolicy: Retain
    Metadata:
      BuildMethod: python3.9

  GetRecommendationsByIdFunctionDynamo:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/function1
      Handler: getArticlesById.lambda_handler
      Runtime: python3.9
      MemorySize: 3008
      Tracing: Active
      Policies:
        - AWSLambdaVPCAccessExecutionRole
        - DynamoDBReadPolicy:
            TableName: !Ref MyDatabase
      Layers:
        - !Ref DepsLayer
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /getArticlesByIdDynamo
            Method: get
            RestApiId: !Ref API
      Environment:
        Variables:
          STAGE: !Sub ${Environment}

  GetRecommendationsByIdFunctionS3:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/function2
      Handler: getArticlesById.lambda_handler
      Runtime: python3.9
      MemorySize: 3008
      Tracing: Active
      Policies:
        - AWSLambdaVPCAccessExecutionRole
        - S3ReadPolicy:
            BucketName: !Ref MyBucket
      Layers:
        - !Ref DepsLayer
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /getArticlesByIdS3
            Method: get
            RestApiId: !Ref API
      Environment:
        Variables:
          STAGE: !Sub ${Environment}
```

`sam build` fetches all dependencies and puts them into `.aws-sam/build/DepsLayer/python`:

```
.aws-sam
├── build
│   ├── DepsLayer
│   │   └── python
│   ├── GetRecommendationsByIdFunctionDynamo
│   │   └── getArticlesById.py
│   ├── GetRecommendationsByIdFunctionS3
│   │   └── getArticlesById.py
│   └── template.yaml
└── build.toml
```

However, when I run `sam deploy`, the `DepsLayer` dependencies are not copied over to S3, and the Lambda functions fail at runtime since they can't find these dependencies.

```
$ aws --version
aws-cli/2.3.2 Python/3.9.7 Darwin/20.6.0 source/x86_64 prompt/off
$ sam --version
SAM CLI, version 1.36.0
```
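As a quick runtime check (a sketch of my own, not part of the question): Lambda extracts layer contents under `/opt`, and a Python layer built by SAM should expose its packages in `/opt/python`. A throwaway handler like the one below can confirm whether the layer's dependencies actually made it into the deployed function's environment:

```python
import os

def lambda_handler(event, context):
    # Layers are mounted under /opt at runtime; a Python layer built by SAM
    # should expose its packages in /opt/python.
    opt = os.listdir("/opt") if os.path.isdir("/opt") else []
    opt_python = os.listdir("/opt/python") if os.path.isdir("/opt/python") else []
    return {"opt": opt, "opt/python": opt_python}
```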
0 answers | 0 votes | 2 views · maslick, asked 5 days ago

CDK with TypeScript - error on Cloud9

Hello Everyone,

I tried https://github.com/fortejas/example-serverless-python-api in a Cloud9 environment, but I got the following error. Commands that I used to set up:

```
mkdir sample-api
cd sample-api/
cdk init app --language typescript .
cd ~
git clone https://github.com/kasukur/example-serverless-python-api.git
ls -lrt example-serverless-python-api/
cp -rf example-serverless-python-api/lambda-api/ ~/environment/sample-api/.
cd ~/environment/sample-api/
Delete node_modules folder
Delete package-lock.json
npm i @aws-cdk/aws-lambda-python-alpha --force -g
ec2-user:~/environment/sample-api $ cdk deploy
```

The error is:

```
ec2-user:~/environment/sample-api $ cdk synth
npm WARN exec The following package was not found and will be installed: ts-node
/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:750
    return new TSError(diagnosticText, diagnosticCodes);
           ^
TSError: ⨯ Unable to compile TypeScript:
bin/sample-api.ts:4:10 - error TS2305: Module '"../lib/sample-api-stack"' has no exported member 'SampleApiStack'.

4 import { SampleApiStack } from '../lib/sample-api-stack';
           ~~~~~~~~~~~~~~

    at createTSError (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:750:12)
    at reportTSError (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:754:19)
    at getOutput (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:941:36)
    at Object.compile (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:1243:30)
    at Module.m._compile (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:1370:30)
    at Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
    at Object.require.extensions.<computed> [as .ts] (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:1374:12)
    at Module.load (node:internal/modules/cjs/loader:981:32)
    at Function.Module._load (node:internal/modules/cjs/loader:822:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12) {
  diagnosticText: 'bin/sample-api.ts:4:10 - error TS2305: Module \'"../lib/sample-api-stack"\' has no exported member \'SampleApiStack\'.\n' +
    '\n' +
    "4 import { SampleApiStack } from '../lib/sample-api-stack';\n" +
    '             ~~~~~~~~~~~~~~\n',
  diagnosticCodes: [ 2305 ]
}
Subprocess exited with error 1
```

Could someone please help with this? Thank you.
1 answer | 0 votes | 2 views · Sri, asked 9 days ago

Multi-region strategy for API Gateway

If disaster recovery is not a requirement, what would be the best strategy for setting up API Gateway to serve global customers? Here are the options I can think of; I'm not able to land on one.

**Option 1**: Single edge-optimized API Gateway serving all traffic
* Pros: saves cost and avoids the complexity of data replication (the backend is OpenSearch).
* Cons: Latency? I'm not sure how much an edge-optimized API helps with latency, since customers hit the API at the nearest edge location (SSL handshake, etc.) and traffic then flows over the AWS backbone network. (Question 1)

**Option 2**: Multiple regional API Gateways with Route 53 latency-based routing (see the sketch after this question)
* Pros: customers hit the closest API.
* Cons: Data replication, cost. Also, since there is no CloudFront here, traffic flows over the internet to the closest region's API. Say we have the API deployed in two regions, US and Singapore: would users in Europe see latency worse than in Option 1, where requests go to the nearest edge location and reach the API via the backbone?

**Option 3**: Multiple edge-optimized API Gateways with Route 53 latency-based routing
* Pros: customers hit the closest API. I'm not sure how latency-based routing works with edge-optimized endpoints, or whether it would even help, since both endpoints are edge-optimized. Not sure how smart Route 53 is here. (Question 2)
* Cons: Data replication, cost, and the uncertainty of latency-based routing.

And finally, one that I think could work but haven't found many examples of people implementing:

**Option 4**: Multiple regional API Gateways with a single custom CloudFront distribution on top, using CloudFront Functions to do the routing
* Pros: customers hit the closest edge location and are routed to the nearest API; the routing would be based on the country-of-origin header from CloudFront.
* Cons: The same data replication and cost issues, plus routing based on a predefined list of countries.

I need to spend time and run tests with multiple solutions, but I wanted to seek the community's advice first. To summarize: if cost, complexity, and disaster recovery are kept out of the discussion, what would be the best architecture for API Gateway to avoid latency issues?
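For a concrete picture of what Option 2's routing layer involves, latency-based routing is just a set of records that share one name but each carry a `Region` and `SetIdentifier`. The sketch below uses boto3 with placeholder values (hosted zone ID, record name, and the regional target's DNS name and alias zone ID all come from each region's API Gateway custom domain configuration, not from this thread):

```python
import boto3

route53 = boto3.client("route53")

# Placeholders throughout: hosted zone ID, record name, and the regional
# API Gateway custom-domain alias target.
route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",
    ChangeBatch={
        "Comment": "Latency-based record for the us-east-1 regional endpoint",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "A",
                "SetIdentifier": "us-east-1",   # one record per region, all sharing the same name
                "Region": "us-east-1",          # Route 53 answers with the lowest-latency region
                "AliasTarget": {
                    "HostedZoneId": "ZAPIGWREGIONAL",  # placeholder: the regional domain's zone ID
                    "DNSName": "d-example.execute-api.us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        }],
    },
)
```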
2 answers | 0 votes | 18 views · Balu, asked 17 days ago

Glue job hudi-init-load-job with script HudiInitLoadNYTaxiData.py fails

Hello. We have a POC underway and are currently evaluating the capabilities of Glue. As part of that evaluation I recently activated the latest version of the ["AWS Glue Connector for Apache Hudi", which is 0.9.0](https://aws.amazon.com/marketplace/pp/prodview-zv3vmwbkuat2e?ref_=beagle&applicationId=GlueStudio). To be precise, I'm speaking about the results of implementing the steps from [this article](https://aws.amazon.com/blogs/big-data/writing-to-apache-hudi-tables-using-aws-glue-connector/). We don't currently use AWS Lake Formation, so I successfully implemented every step except the part related to Lake Formation. Once the CloudFormation stack completed successfully, I kicked off the hudi-init-load-job job, but the result was somewhat frustrating. The job failed with the following error:

```
2021-12-24 08:50:56,249 ERROR [main] glue.ProcessLauncher (Logging.scala:logError(70)): Error from Python:Traceback (most recent call last):
  File "/tmp/HudiInitLoadNYTaxiData.py", line 27, in <module>
    glueContext.write_dynamic_frame.from_options(frame = DynamicFrame.fromDF(inputDf, glueContext, "inputDf"), connection_type = "marketplace.spark", connection_options = combinedConf)
  File "/opt/amazon/lib/python3.6/site-packages/awsglue/dynamicframe.py", line 653, in from_options
    format_options, transformation_ctx)
  File "/opt/amazon/lib/python3.6/site-packages/awsglue/context.py", line 279, in write_dynamic_frame_from_options
    format, format_options, transformation_ctx)
  File "/opt/amazon/lib/python3.6/site-packages/awsglue/context.py", line 302, in write_from_options
    return sink.write(frame_or_dfc)
  File "/opt/amazon/lib/python3.6/site-packages/awsglue/data_sink.py", line 35, in write
    return self.writeFrame(dynamic_frame_or_dfc, info)
  File "/opt/amazon/lib/python3.6/site-packages/awsglue/data_sink.py", line 31, in writeFrame
    return DynamicFrame(self._jsink.pyWriteDynamicFrame(dynamic_frame._jdf, callsite(), info), dynamic_frame.glue_ctx, dynamic_frame.name + "_errors")
  File "/opt/amazon/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/opt/amazon/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o115.pyWriteDynamicFrame.
: java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)[Ljava/lang/Object;
	at org.apache.hudi.DataSourceOptionsHelper$.$anonfun$allAlternatives$1(DataSourceOptions.scala:749)
	at org.apache.hudi.DataSourceOptionsHelper$.$anonfun$allAlternatives$1$adapted(DataSourceOptions.scala:749)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.hudi.DataSourceOptionsHelper$.<init>(DataSourceOptions.scala:749)
	at org.apache.hudi.DataSourceOptionsHelper$.<clinit>(DataSourceOptions.scala)
	at org.apache.hudi.HoodieWriterUtils$.parametersWithWriteDefaults(HoodieWriterUtils.scala:80)
	at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:157)
	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
	at com.amazonaws.services.glue.marketplace.connector.SparkCustomDataSink.writeDynamicFrame(CustomDataSink.scala:43)
	at com.amazonaws.services.glue.DataSink.pyWriteDynamicFrame(DataSink.scala:65)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
```

I'm quite new to the AWS stack, so could someone please give me a hand to fix this?
6 answers | 0 votes | 4 views · AWS-User-0316168, asked 23 days ago

How to connect to a private RDS from localhost

I have a private VPC with private subnets, a private jumpbox in one private subnet, and my private RDS Aurora MySQL Serverless instance in another private subnet. I ran these commands on my local laptop to try to connect to RDS via port forwarding:

```
aws ssm start-session --target i-0d5470040e7541ab9 --document-name AWS-StartPortForwardingSession --parameters "portNumber"=["5901"],"localPortNumber"=["9000"] --profile myProfile
aws ssm start-session --target i-0d5470040e7541ab9 --document-name AWS-StartPortForwardingSession --parameters "portNumber"=["22"],"localPortNumber"=["9999"] --profile myProfile
aws ssm start-session --target i-0d5470040e7541ab9 --document-name AWS-StartPortForwardingSession --parameters "portNumber"=["3306"],"localPortNumber"=["3306"] --profile myProfile
```

The connection to the server hangs. I got this error on my local laptop:

```
Starting session with SessionId: myuser-09e5cd0206cc89542
Port 3306 opened for sessionId myuser-09e5cd0206cc89542.
Waiting for connections...
Connection accepted for session [myuser-09e5cd0206cc89542]
Connection to destination port failed, check SSM Agent logs.
```

and these errors in `/var/log/amazon/ssm/errors.log`:

```
2021-11-29 00:50:35 ERROR [handleServerConnections @ port_mux.go.278] [ssm-session-worker] [myuser-017cfa9edxxxx] [DataBackend] [pluginName=Port] Unable to dial connection to server: dial tcp :3306: connect: connection refused
2021-11-29 14:13:07 ERROR [transferDataToMgs @ port_mux.go.230] [ssm-session-worker] [myuser-09e5cdxxxxxx] [DataBackend] [pluginName=Port] Unable to read from connection: read unix @->/var/lib/amazon/ssm/session/3366606757_mux.sock: use of closed network connection
```

This is how I try to connect to RDS:

[![enter image description here][1]][1]

I even tried to put in the RDS endpoint using an SSH tunnel, but it doesn't work:

[![enter image description here][2]][2]

Are there any additional steps to do on the remote EC2 instance? It seems the connection is accepted, but the connection to the destination port doesn't work. Or is there any better way to connect to a private RDS in a private VPC when we don't have a site-to-site VPN or Direct Connect?

  [1]: https://i.stack.imgur.com/RwiZ8.png
  [2]: https://i.stack.imgur.com/53GIh.png
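One point worth noting (my reading of the error, not confirmed in the thread): `AWS-StartPortForwardingSession` with `portNumber=3306` forwards to port 3306 on the jumpbox itself, where nothing is listening, which matches the `dial tcp :3306: connect: connection refused` entry in the agent log; the tunnel has to terminate at the RDS endpoint instead. Below is a rough Python sketch that chains an SSH tunnel through the already-forwarded SSH port (9999 from the second command) to the cluster endpoint. It assumes the `sshtunnel` and `pymysql` packages, and the endpoint, key path, and credentials are placeholders:

```python
from sshtunnel import SSHTunnelForwarder  # pip install sshtunnel pymysql
import pymysql

RDS_ENDPOINT = "my-cluster.cluster-xxxxxxxxxxxx.eu-west-1.rds.amazonaws.com"  # placeholder

# SSH into the jumpbox through the SSM port-forwarding session already open on
# localhost:9999, then forward a local port to the RDS endpoint itself.
with SSHTunnelForwarder(
    ("127.0.0.1", 9999),
    ssh_username="ec2-user",
    ssh_pkey="~/.ssh/jumpbox.pem",             # placeholder key path
    remote_bind_address=(RDS_ENDPOINT, 3306),  # tunnel ends at RDS, not at the jumpbox
    local_bind_address=("127.0.0.1", 3306),
):
    conn = pymysql.connect(host="127.0.0.1", port=3306, user="admin", password="***")
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
```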
6 answers | 0 votes | 30 views · AWS-User-1737129, asked a month ago

AWS Lambda “Cannot load native module 'Cryptodome.Hash._MD5'”

I recently added some dependencies to my serverless project and ran into the following error when invoking my newly deployed Lambda. I don't encounter this issue on my local dev instance running macOS 10.13.6 and Python 3.6.0.

```
module initialization error: Cannot load native module 'Cryptodome.Hash._MD5': Trying '_MD5.cpython-36m-x86_64-linux-gnu.so': /var/task/vendored/Cryptodome/Util/../Hash/_MD5.cpython-36m-x86_64-linux-gnu.so: cannot open shared object file: No such file or directory, Trying '_MD5.abi3.so': /var/task/vendored/Cryptodome/Util/../Hash/_MD5.abi3.so: cannot open shared object file: No such file or directory, Trying '_MD5.so': /var/task/vendored/Cryptodome/Util/../Hash/_MD5.so: cannot open shared object file: No such file or directory
```

I did some research on this problem, and here's what I've gathered:

- Lambda runs on Linux, and the package above may need to be built in a Linux environment to resolve correctly.
- pycryptodome may be a drop-in replacement for pycrypto and may be causing some conflicts with Lambda's environment.

This dependency stems from one of my other dependencies, and I don't want to manually modify those dependencies to use a different package. I would also prefer not to set up a virtual Linux environment to package this project. What can I do to better investigate this issue, and ideally, resolve it?

Edited by: JGMeyer on Aug 8, 2019 7:17 PM
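A small pre-deploy check may help narrow this down (my own sketch, assuming the vendored package directory is `vendored/` as in the error paths): compiled extensions bundled for Lambda must be present and must be Linux ELF shared objects, whereas a pip install run on macOS produces Mach-O files, or never produces the Linux-specific filenames the loader is trying at all.

```python
import pathlib

ELF_MAGIC = b"\x7fELF"

def check_native_extensions(root: str = "vendored") -> None:
    """Report shared objects under the deployment package that are missing or not Linux ELF."""
    sos = list(pathlib.Path(root).rglob("*.so"))
    if not sos:
        print(f"No *.so files found under {root!r}; native deps were likely never built for Linux.")
        return
    for so in sos:
        with open(so, "rb") as f:
            kind = "ELF (ok for Lambda)" if f.read(4) == ELF_MAGIC else "not ELF (built on macOS?)"
        print(f"{so}: {kind}")

check_native_extensions()
```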
8 answers | 0 votes | 0 views · JGMeyer, asked 2 years ago