Haven't tried this myself, but you may be able to achieve this using API Gateway and Step Functions, based on this capability announcement - https://aws.amazon.com/about-aws/whats-new/2021/05/amazon-api-gateway-rest-apis-integrates-with-step-funtions-synchronous-express-workflows/
From the Step Function, you can branch out to the different Lambdas depending on the client ID that you capture in API Gateway and pass on to the Step Function. Worth doing a POC.
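The branching described above would live in the state machine definition as a Choice state. A minimal sketch of its shape, built as a Python dict purely for illustration (the state names and the `$.clientId` input path are assumptions, not from the original post):

```python
import json

# Hypothetical Choice state routing on a clientId field passed in
# from API Gateway; unknown clients fall through to a default state.
choice_state = {
    "RouteByClient": {
        "Type": "Choice",
        "Choices": [
            {"Variable": "$.clientId", "StringEquals": "client-a", "Next": "InvokeClientALambda"},
            {"Variable": "$.clientId", "StringEquals": "client-b", "Next": "InvokeClientBLambda"},
        ],
        "Default": "RejectUnknownClient",
    }
}

print(json.dumps(choice_state, indent=2))
```

Adding a client then means adding one more entry to `Choices` and deploying an updated state machine definition, rather than touching API Gateway.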
If your processing can be asynchronous, you can have one primary Lambda that receives the request from API Gateway, parses the client ID, and based on that drops the message into a different SQS queue for each client. You can then have secondary Lambdas for each client that are triggered by their respective SQS queue messages and do the processing. If you don't want to hard-code the SQS queues in your primary Lambda, you can create a look-up table in DynamoDB. That way new clients can be added without having to modify and redeploy the primary Lambda every time.
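A minimal sketch of that primary Lambda's routing logic. The table contents, header name, and queue URLs are all hypothetical; the DynamoDB lookup is stubbed with an in-memory dict so the routing decision itself is testable without AWS (in production `resolve_queue_url` would call `dynamodb.get_item` and the handler would call `sqs.send_message`):

```python
# Stand-in for the DynamoDB look-up table mapping client ID -> queue URL.
ROUTING_TABLE = {
    "client-a": "https://sqs.us-east-1.amazonaws.com/123456789012/client-a-queue",
    "client-b": "https://sqs.us-east-1.amazonaws.com/123456789012/client-b-queue",
}

def resolve_queue_url(client_id):
    """Return the per-client queue URL, or raise for unknown clients."""
    try:
        return ROUTING_TABLE[client_id]
    except KeyError:
        raise ValueError(f"unknown client: {client_id}")

def handler(event, context=None):
    """Primary Lambda: pull the client ID from the API Gateway event
    (an assumed 'x-client-id' header) and resolve its destination queue.
    The real handler would then sqs.send_message(QueueUrl=..., ...)."""
    client_id = event["headers"]["x-client-id"]
    return resolve_queue_url(client_id)
```

With the real DynamoDB table in place, onboarding a client is a `put_item` call plus a new queue and consumer Lambda; the primary Lambda's code never changes.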
With an Application Load Balancer, you can make use of host-based routing (https://aws.amazon.com/premiumsupport/knowledge-center/elb-configure-host-based-routing-alb/ ).
That way:
```
client1.domain -> ALB -----(Rule-1)------> Target group-1 that hosts client1.domain
client2.domain -> ALB -----(Rule-2)------> Target group-2 that hosts client2.domain
```
You can have a Lambda function as a target, instead of re-writing everything on EC2 again.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html
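When a Lambda is registered as an ALB target, the ALB invokes it with the HTTP request as the event and expects a specific response shape back (`statusCode`, `statusDescription`, `headers`, `body`, `isBase64Encoded`). A minimal handler sketch; the response body here is just an illustration:

```python
def handler(event, context=None):
    """Lambda behind an ALB target group: the host header the ALB
    routed on arrives in event['headers']['host']."""
    host = event.get("headers", {}).get("host", "")
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "headers": {"Content-Type": "text/plain"},
        "body": f"handled request for {host}",
        "isBase64Encoded": False,
    }
```

One such function can sit behind each host-based rule's target group, so `client1.domain` and `client2.domain` can map to separate Lambdas without any EC2 instances.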
That could be an option, I'll try it, but I'm afraid we will reach some Target Group limits soon.
I would need to work with HTTP requests, so I think this approach will add latency to the requests.