
Well-Architected Framework

AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures, and implement designs that can scale over time.

Recent questions


Cognito - CustomSMSSender InvalidCiphertextException: null on Code Decrypt (Golang)

Hi, I followed this document to customize the Cognito SMS delivery flow: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-custom-sms-sender.html. I'm not working in a JavaScript environment, so I wrote this Go snippet:

```
package main

import (
	"context"
	golog "log"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
)

// Using these types because aws-sdk-go does not support this event yet.

// CognitoEventUserPoolsCustomSmsSender is sent by AWS Cognito User Pools before each SMS is sent.
type CognitoEventUserPoolsCustomSmsSender struct {
	events.CognitoEventUserPoolsHeader
	Request CognitoEventUserPoolsCustomSmsSenderRequest `json:"request"`
}

// CognitoEventUserPoolsCustomSmsSenderRequest contains the request portion of a CustomSmsSender event.
type CognitoEventUserPoolsCustomSmsSenderRequest struct {
	UserAttributes map[string]interface{} `json:"userAttributes"`
	Code           string                 `json:"code"`
	ClientMetadata map[string]string      `json:"clientMetadata"`
	Type           string                 `json:"type"`
}

func main() {
	lambda.Start(sendCustomSms)
}

func sendCustomSms(ctx context.Context, event *CognitoEventUserPoolsCustomSmsSender) error {
	golog.Printf("received event=%+v", event)
	golog.Printf("received ctx=%+v", ctx)

	config := aws.NewConfig().WithRegion(os.Getenv("AWS_REGION"))
	session, err := session.NewSession(config)
	if err != nil {
		return err
	}

	kmsProvider := kms.New(session)
	smsCode, err := kmsProvider.Decrypt(&kms.DecryptInput{
		KeyId:          aws.String("a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"),
		CiphertextBlob: []byte(event.Request.Code),
	})
	if err != nil {
		return err
	}

	golog.Printf("decrypted code %v", smsCode.Plaintext)
	return nil
}
```

I'm always getting `InvalidCiphertextException: : InvalidCiphertextException null`. Can someone help? This is how the Lambda config looks on my user pool:

```
"LambdaConfig": {
    "CustomSMSSender": {
        "LambdaVersion": "V1_0",
        "LambdaArn": "arn:aws:lambda:eu-west-1:...:function:cognito-custom-auth-sms-sender-dev"
    },
    "KMSKeyID": "arn:aws:kms:eu-west-1:...:key/a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"
},
```
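One thing stands out against the linked docs: the event's `code` field arrives as a base64-encoded string (the JavaScript example decodes it with `b64.decode` before decrypting), while `CiphertextBlob` expects raw ciphertext bytes, so `[]byte(event.Request.Code)` hands base64 text to KMS. Below is a minimal sketch of the decoding step, using a hypothetical `decryptCode` helper; note that the docs' example also decrypts with the AWS Encryption SDK (`@aws-crypto/client-node`) rather than a plain KMS `Decrypt` call, so the envelope format may matter beyond the encoding fix shown here:

```
package smssender

import (
	"encoding/base64"

	"github.com/aws/aws-sdk-go/service/kms"
	"github.com/aws/aws-sdk-go/service/kms/kmsiface"
)

// decryptCode is a hypothetical helper: KMS expects the raw ciphertext
// bytes, while the event's "code" field is a base64-encoded string, so
// it is decoded before the Decrypt call. For symmetric KMS keys, KeyId
// can be omitted; KMS reads it from the ciphertext metadata.
func decryptCode(kmsProvider kmsiface.KMSAPI, encodedCode string) (string, error) {
	ciphertext, err := base64.StdEncoding.DecodeString(encodedCode)
	if err != nil {
		return "", err
	}
	out, err := kmsProvider.Decrypt(&kms.DecryptInput{
		CiphertextBlob: ciphertext, // raw bytes, not the base64 string
	})
	if err != nil {
		return "", err
	}
	return string(out.Plaintext), nil
}
```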
0 answers · 0 votes · 0 views
AWS-User-1153293 · asked a day ago

Two identically configured Elastic Beanstalk environments: log streaming works in one but not the other

This is a copy of a question I asked earlier on Stack Overflow. Hoping maybe I can get some useful responses here. Edits: formatting.

I have a node.js application running in Docker, deployed to an Elastic Beanstalk cluster via ECS. This application has two environments, call them "stage" and "prod". Both environments are configured to stream (non-custom) instance logs to CloudWatch with identical security policies in place. Log streaming works correctly in one environment ("stage") while the other ("prod") does not stream to CloudWatch (groups and streams are created but no events are ever written), and logs instead get written to disk on each EC2 instance.

I have verified the following are true for both environments:

1. Both environments are in the same region (`us-east-1`).
2. Identical platform and version (Docker on Amazon Linux 2/3.0.0).
3. The `Instance log streaming to CloudWatch Logs` option is enabled in the `Software` section of the configuration tab on the EB web console.
4. Identical settings for `Retention` (3 days) and `Lifecycle` (Delete logs upon termination).
5. Code deployed (a public-facing GraphQL API, if that matters) which writes a lot of logging output to the console via `console.debug`, `console.info` and friends.
6. Custom `Service Role` set in the `Security` section of the EB console's configuration tab. Both service roles resolve to the IAM role set as the instance profile.
7. Custom `IAM Instance Profile` IAM roles with identical permissions, trust relationships, and permission policies, as below:

```
Trusted entities
The identity provider(s) ec2.amazonaws.com
The identity provider(s) elasticbeanstalk.amazonaws.com

Condition       Key                 Value
StringEquals    sts:ExternalId      elasticbeanstalk

Permissions policies
AmazonEC2ContainerRegistryReadOnly
AWSElasticBeanstalkEnhancedHealth
AWSElasticBeanstalkWebTier
AWSElasticBeanstalkMulticontainerDocker
AmazonEC2ContainerRegistryPowerUser
AWSElasticBeanstalkWorkerTier
sns-topic-publish-allow-policy
cloudwatch-allow-policy
AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy
```

`cloudwatch-allow-policy` policy document:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:DescribeLogStreams",
                "logs:DescribeLogGroups",
                "logs:CreateLogStream"
            ],
            "Resource": "*"
        }
    ]
}
```

Both environments otherwise run correctly, sit at Green/OK health status, and report no permission problems. The differences are that "stage" is not load balanced or scaled and runs on a smaller instance size; "prod" has load balancing and scaling (which I'm assuming is irrelevant, but I can share details on that if it isn't).

# Expected behavior - stage

When the application deployed to the `stage` environment writes something to the console, it appears as an event in a CloudWatch stream named `/aws/elasticbeanstalk/stage/var/log/eb-docker/containers/eb-current-app/stdouterr.log > %EC2-INSTANCE-ID%`, as I expect it to. If I SSH into the instance that wrote to the log, there is nothing written on disk under `/var/log/eb-docker/containers/eb-current-app`, which is also expected.

# Observed behavior - prod

When the application deployed to the `prod` environment writes something to the console, on the other hand, nothing is written to CloudWatch. CloudWatch log groups appear named `/aws/elasticbeanstalk/prod/var/log/eb-docker/containers/eb-current-app/stdouterr.log > %EC2-INSTANCE-ID%`, but __no events are ever logged__. If I SSH into the instance that wrote to the log, the logged text appears on disk under `/var/log/eb-docker/containers/eb-current-app/eb-%SOME_HASH%-stdouterr.log`, and if `Instance log streaming to CloudWatch Logs` is left enabled, all the instances eventually fill up their available disk space with log contents and crash.

This condition has survived multiple instance restarts, waits of multiple hours with the streaming option enabled, the termination and rebuild of every instance in the environment, and deployment of new application versions from ECS. If I clone `stage` to a new environment, log streaming works as expected. If I clone `prod` to a new environment, log streaming fails in exactly the same manner as the original environment.

Something is clearly misconfigured for `prod`, but I don't have a clue what it is. What am I missing?
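For anyone comparing the two environments the same way, here is a minimal diagnostic sketch, assuming the log group name from the question and region `us-east-1`, and omitting `DescribeLogStreams` pagination: it lists each stream's last event timestamp, where a `nil` timestamp means the stream exists but has never received an event, matching the "streams are created but no events are written" symptom.

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	cwl := cloudwatchlogs.New(sess)

	// Log group name assumed from the question; adjust per environment.
	group := "/aws/elasticbeanstalk/prod/var/log/eb-docker/containers/eb-current-app/stdouterr.log"

	out, err := cwl.DescribeLogStreams(&cloudwatchlogs.DescribeLogStreamsInput{
		LogGroupName: aws.String(group),
		OrderBy:      aws.String("LastEventTime"),
		Descending:   aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range out.LogStreams {
		// A nil LastEventTimestamp means the stream has never
		// received a single event.
		fmt.Printf("stream=%s lastEvent=%v\n",
			aws.StringValue(s.LogStreamName), s.LastEventTimestamp)
	}
}
```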
0 answers · 0 votes · 6 views
AWS-User-9087486 · asked 11 days ago

Client requests to a Network Load Balancer time out

I connected two AWS accounts with a peering connection. All subnets on each side are allowed to talk to each other. If I try to communicate between the two sides with the IPs of the instances, it works fine.

I added an NLB on one side to avoid IPs and use a DNS name as a host. The ECS service registers the IP automatically with the NLB target group to achieve this goal. The client on one side tries to make a request through the NLB to the same target as before. The NLB is configured as internal and assigned to 3 AZs, and the target group contains the IP of the target I want to reach. Each AZ contains a subnet with its own small range of IPs (1.0.x.0/20), but all the CIDRs used for the rules use the broader IP range (1.0.0.0/16) to cover them all. There are no overlaps between any IP ranges in the two accounts.

The NLB has 3 private IPs (one for each AZ) registered on its DNS entry. I can make the request to the IP behind the NLB successfully, and also to the NLB IP that is associated with the AZ in which the target IP is located. Requests to the two other IPs of the NLB result in a timeout.

There's one ACL for the whole account which allows all traffic, the default security group allows the traffic of the CIDRs of both accounts, and the routing tables contain an entry routing traffic for the other side's CIDR to the peering connection plus one route for the local CIDR to "local".

I also tried the Reachability Analyzer with the peering connection as sender and the NLB as receiver, specifying the IP of the target in the target group. This test succeeds because it uses the one NIC which is in the same AZ as the target. I then tried the peering connection as sender and each of the other two NICs of the NLB, with the IP of the target set, which fails with NO_PATH.

To me, it looks like the NLB doesn't route the request to the other NIC. But I couldn't find any limitations on this kind of setup in the documentation.
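The symptom (only the NLB IP in the target's AZ responds) matches what happens when cross-zone load balancing is disabled, which is the default for NLBs: each load balancer node only forwards to targets in its own AZ, so a node in an AZ with no registered target has nowhere to send traffic. Not confirmed as the answer here, but a minimal sketch of enabling the attribute with aws-sdk-go, using a placeholder load balancer ARN and region:

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

func main() {
	// Region and ARN are placeholders; substitute the real values.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("eu-west-1")))
	svc := elbv2.New(sess)

	_, err := svc.ModifyLoadBalancerAttributes(&elbv2.ModifyLoadBalancerAttributesInput{
		LoadBalancerArn: aws.String("arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/my-nlb/0123456789abcdef"),
		Attributes: []*elbv2.LoadBalancerAttribute{
			{
				// NLB attribute that lets a node forward across AZ
				// boundaries; it defaults to "false" on NLBs.
				Key:   aws.String("load_balancing.cross_zone.enabled"),
				Value: aws.String("true"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("cross-zone load balancing enabled")
}
```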
1 answer · 0 votes · 5 views
AWS-User-5257795 · asked 11 days ago
