Questions tagged with Microservices
I have a simple Lambda that I would like to *enable function URL* for, to assign it an HTTPS endpoint; however, it is a container-based Lambda, and I don't see *enable function URL* as an option in the *Advanced settings*.
Do I have to use API Gateway to assign an endpoint to container-based Lambdas, or is there some other way to make it accessible?
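One workaround I'm considering is creating the function URL outside the console, for example with the AWS SDK; a minimal sketch, where the function name and region are placeholders:
```
import {
  LambdaClient,
  CreateFunctionUrlConfigCommand,
  AddPermissionCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({ region: "us-east-1" }); // placeholder region

async function createUrl() {
  // Create the HTTPS endpoint for the (container-based) function.
  const url = await lambda.send(
    new CreateFunctionUrlConfigCommand({
      FunctionName: "my-container-lambda", // placeholder name
      AuthType: "AWS_IAM",                 // or "NONE" for a public endpoint
    })
  );

  // For a public ("NONE") endpoint, a resource-based policy is also required, e.g.:
  // await lambda.send(new AddPermissionCommand({
  //   FunctionName: "my-container-lambda",
  //   StatementId: "AllowPublicFunctionUrl",
  //   Action: "lambda:InvokeFunctionUrl",
  //   Principal: "*",
  //   FunctionUrlAuthType: "NONE",
  // }));

  console.log("Function URL:", url.FunctionUrl);
}

createUrl().catch(console.error);
```
If that works, it would at least confirm that a function URL can be attached to the image-based function even though the console doesn't show the option.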
What are the options available for handling messages greater than 256 KB on SQS using Node.js?
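The main approach I've come across so far is the claim-check pattern (what the Amazon SQS Extended Client Library does for Java): put the large payload in S3 and send only a small pointer on the queue, then have the consumer fetch the object. A rough Node.js sketch, where the bucket and queue URL are placeholders:
```
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
import { randomUUID } from "crypto";

const s3 = new S3Client({});
const sqs = new SQSClient({});

const BUCKET = "my-large-payload-bucket";                                       // placeholder
const QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue";  // placeholder

// Offload payloads larger than ~256 KB to S3 and send only a pointer on SQS.
export async function sendLargeMessage(payload: string): Promise<void> {
  if (Buffer.byteLength(payload, "utf8") < 256 * 1024) {
    await sqs.send(new SendMessageCommand({ QueueUrl: QUEUE_URL, MessageBody: payload }));
    return;
  }

  const key = `payloads/${randomUUID()}.json`;
  await s3.send(new PutObjectCommand({ Bucket: BUCKET, Key: key, Body: payload }));

  // The consumer reads this pointer and does a GetObject on the bucket/key.
  const pointer = JSON.stringify({ s3Bucket: BUCKET, s3Key: key });
  await sqs.send(new SendMessageCommand({ QueueUrl: QUEUE_URL, MessageBody: pointer }));
}
```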
I wanted to check here to see if anyone had any better solution ideas for a problem we are attempting to solve for a client.
The client has on-premise services running which generate messages. The end goal is for the messages to come from their on-premise services, have a "current state" copy cached somewhere in AWS, and then get pushed out to clients through an AWS WebSocket.
The client has provided 3 options for getting the message to AWS:
1. The messages sit in their on-premise Kafka and an AWS Service consumes messages from it
2. The messages sit in their on-premise RabbitMQ and an AWS Service consumes messages from it
3. A websocket is exposed on their on-premise service and an AWS Service subscribes to that websocket
We've done some research into several options for this.
1. For the Kafka option, we are thinking of setting up Lambda as a consumer of their on-premise Kafka (that is well documented). Lambda would just receive the message, save the "current state" in DynamoDB or S3, and then publish through API Gateway for the websocket piece (a rough sketch of such a handler is at the end of this post). The downside to this approach is that the messages are intended to retain their order, and using Lambda as a Kafka consumer doesn't exactly guarantee order. We can't tell if it's possible to have Lambda restrict itself to a single consumer, which would technically retain order, but this really feels like a situation where we aren't using the best tools for the job.
2. We have not been able to hunt down many examples of consuming on-premise RabbitMQ from AWS services. Our current suspicion is that if we could find examples, the approach would have downsides similar to the Kafka one.
3. The websocket approach had us researching a handful of different things. We haven't really been able to find any AWS service that can subscribe to that websocket and trigger an event when new messages come in. The best idea we could come up with here was spinning up a micro EC2 instance that subscribes to that websocket, receives messages, stores the "current state" in DynamoDB or S3, and then publishes through API Gateway for the websocket. In an ideal world, we would be accomplishing this in a way that better utilizes AWS services and doesn't depend on a constantly running EC2 instance. The websocket approach solves the message ordering issue, because we won't have to worry about old messages due to network connectivity issues or anything like that. On-premise has its own "current state", so if network connectivity issues happen, AWS would just end up getting the "current state" when it is able to reconnect, and all the messages in between would correctly never reach AWS.
We tried looking into IoT and EventBus because our research took us down those paths, but nothing there seems like it can really pull messages from the on-premise side the way we need it to.
We currently feel like options 1 and 3 are our best ideas right now. We wanted to quickly check with the community here in case someone was aware of anything that we hadn't considered yet.
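For reference, the rough shape of the Lambda handler we have in mind for option 1 is below; the table names and the WebSocket stage endpoint are placeholders, and this is only a sketch of the idea, not something we have running:
```
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand, ScanCommand } from "@aws-sdk/lib-dynamodb";
import {
  ApiGatewayManagementApiClient,
  PostToConnectionCommand,
} from "@aws-sdk/client-apigatewaymanagementapi";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const mgmt = new ApiGatewayManagementApiClient({
  endpoint: "https://abc123.execute-api.us-east-1.amazonaws.com/prod", // placeholder WebSocket stage endpoint
});

const STATE_TABLE = "current-state";      // placeholder
const CONNECTIONS_TABLE = "connections";  // placeholder

// Self-managed Kafka event source: records arrive grouped by "topic-partition",
// and each record value is base64 encoded.
export async function handler(event: { records: Record<string, { value: string }[]> }) {
  for (const batch of Object.values(event.records)) {
    for (const record of batch) {
      const message = JSON.parse(Buffer.from(record.value, "base64").toString("utf8"));

      // 1. Cache the "current state" copy.
      await ddb.send(new PutCommand({
        TableName: STATE_TABLE,
        Item: { id: message.id, state: message },
      }));

      // 2. Push the message out to every connected WebSocket client.
      const connections = await ddb.send(new ScanCommand({ TableName: CONNECTIONS_TABLE }));
      for (const conn of connections.Items ?? []) {
        await mgmt.send(new PostToConnectionCommand({
          ConnectionId: conn.connectionId,
          Data: Buffer.from(JSON.stringify(message)),
        }));
      }
    }
  }
}
```
This doesn't resolve the ordering concern by itself; that still depends on how the event source mapping distributes partitions across concurrent invocations, which is exactly the part we're unsure about.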
I've been struggling with this topic for ages now and haven't found a decent answer, which I'm sure means I'm likely missing something. But the question is, what?
I want to build an application that is entirely AWS Lambda based. Actually building it is easy enough, but it's the automated testing side I'm stuck on.
I can obviously write local unit tests for actual business functionality. That's easy. But it misses two major things:
First, testing the way the lambda interacts with the rest of the ecosystem. I can easily have a mock DynamoDB, a mock Cognito, or whatever, but that only tests that the lambda is calling the mock as expected. If the mock doesn't match how the service really acts then it's a huge false positive.
More important, though, is how the entire system acts. For example, that lambdas are actually able to call each other, or that the authorizer is doing the right thing for what the next lambda expects, and so on.
In the monolith scenario, I'd spin up a Docker container for each piece of infrastructure (there are likely not many of them), run the actual monolith service in-process, and then interact with it. And that works really well, is really fast and reliable, and gives fantastic confidence.
The best I can come up with for testing this kind of thing is:
* Deploy the entire application to an ephemeral environment.
* Run a set of E2E tests against this environment.
* Tear down the environment.
* Succeed/Fail based on the tests.
Using an ephemeral environment means that a failed test run doesn't bleed into the next one. But it does mean that starting the test run is slower.
But deploying an entire environment is costly - definitely in time, and possibly in money - and there's no good way to reset the environment between tests, which means there's no good way to ensure that each test is properly isolated. There's no easy way, as far as I can tell, to just set a DynamoDB table or a User Pool back to an empty state at the start of each test (the closest I've got is the scan-and-delete helper sketched below). And even if there were, a serverless application is likely to have a lot more pieces of infrastructure to reset back to a blank state, though that could be controlled by knowing which pieces need resetting for each test.
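For what it's worth, the closest I've got to resetting DynamoDB state is a brute-force helper along these lines (a sketch; the key names are passed in because they differ per table, and this obviously doesn't help with things like User Pools):
```
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, ScanCommand, BatchWriteCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Delete every item from a table before a test runs.
// Fine for small test tables; far too slow for anything big.
// (Key names that are DynamoDB reserved words would need ExpressionAttributeNames.)
export async function resetTable(tableName: string, keyNames: string[]): Promise<void> {
  let startKey: Record<string, unknown> | undefined;
  do {
    const page = await ddb.send(new ScanCommand({
      TableName: tableName,
      ProjectionExpression: keyNames.join(", "),
      ExclusiveStartKey: startKey,
    }));

    const items = page.Items ?? [];
    // BatchWrite accepts at most 25 requests per call.
    for (let i = 0; i < items.length; i += 25) {
      await ddb.send(new BatchWriteCommand({
        RequestItems: {
          [tableName]: items.slice(i, i + 25).map((item) => ({
            DeleteRequest: { Key: Object.fromEntries(keyNames.map((k) => [k, item[k]])) },
          })),
        },
      }));
    }
    startKey = page.LastEvaluatedKey;
  } while (startKey);
}
```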
The next level would be to deploy/tear down the environment *per test* but that's getting a bit ridiculous at that point.
Surely though this must be a solved problem? What's the best practice way of testing that an entire serverless application is working correctly?
Cheers
I made a lambda function to connect to a CodeCommit repository, select a branch and get a specific file. It works as expected.
Then I moved this function into a subnet (I tried both a public and a private one) and it is no longer able to connect to CodeCommit. Am I missing something?
I need this Lambda to be in a subnet to connect to a DB; it has no need for internet access, as it is triggered from S3 (the call that now fails is sketched below).
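For reference, the call that stopped working once the function was moved into the subnet is roughly this (repository, branch, and file path are placeholders):
```
import { CodeCommitClient, GetFileCommand } from "@aws-sdk/client-codecommit";

const codecommit = new CodeCommitClient({});

// Fetch one file from a branch; this times out once the Lambda sits in a
// subnet with no route to the CodeCommit endpoint.
export async function getConfigFile(): Promise<string> {
  const result = await codecommit.send(new GetFileCommand({
    repositoryName: "my-repo",        // placeholder
    commitSpecifier: "main",          // placeholder branch
    filePath: "config/settings.json", // placeholder path
  }));
  return Buffer.from(result.fileContent!).toString("utf8");
}
```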
Thanks
M
Hi Team,
In my FIFO SQS queue, messages are available, but I am not able to see any received messages after polling from the console.
Note: sometimes I get messages and sometimes not; the issue is intermittent.
Can somebody help me with this? How should I debug it, or is there anything I have missed?
Polling settings:
* duration - 20 sec
* count - 5
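If it helps to reproduce, this is the equivalent poll from code with the same settings (the queue URL is a placeholder):
```
import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});
const QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo"; // placeholder

// Long-poll the FIFO queue with the same settings used in the console.
export async function pollOnce(): Promise<void> {
  const result = await sqs.send(new ReceiveMessageCommand({
    QueueUrl: QUEUE_URL,
    MaxNumberOfMessages: 5,
    WaitTimeSeconds: 20,
  }));

  for (const message of result.Messages ?? []) {
    console.log("Received:", message.Body);
    // For FIFO queues, further messages from the same message group stay hidden
    // while an earlier one is in flight (e.g. still held by the console poller).
    await sqs.send(new DeleteMessageCommand({
      QueueUrl: QUEUE_URL,
      ReceiptHandle: message.ReceiptHandle!,
    }));
  }
}
```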
I have a problem with my socket server built with the WebSocket API. Everything works fine unless the device loses its connectivity, closes the connection, and someone then tries to send a message to that device.
The WebSocket API does not throw any error, and if I try to get the connection via the @connections API, it still reports the device as connected until the idle timeout kicks in. So on my side it's impossible to detect this case and enqueue the message to be sent later.
I'm thinking about using the keep-alive client ping to track the last ping and close the connection manually if the last ping is older than 30 seconds or so.
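Roughly what I have in mind is storing a lastPing timestamp per connection and having a scheduled function close anything stale; a sketch, with the table name and stage endpoint as placeholders:
```
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, ScanCommand, DeleteCommand } from "@aws-sdk/lib-dynamodb";
import {
  ApiGatewayManagementApiClient,
  DeleteConnectionCommand,
} from "@aws-sdk/client-apigatewaymanagementapi";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const mgmt = new ApiGatewayManagementApiClient({
  endpoint: "https://abc123.execute-api.us-east-1.amazonaws.com/prod", // placeholder stage endpoint
});

const CONNECTIONS_TABLE = "connections"; // placeholder, stores { connectionId, lastPing }
const STALE_AFTER_MS = 30_000;

// Runs on a schedule: close every connection whose last ping is too old,
// so the rest of the app can treat it as disconnected and queue messages.
export async function cleanupStaleConnections(): Promise<void> {
  const now = Date.now();
  const page = await ddb.send(new ScanCommand({ TableName: CONNECTIONS_TABLE }));

  for (const conn of page.Items ?? []) {
    if (now - (conn.lastPing ?? 0) > STALE_AFTER_MS) {
      // DeleteConnection forces API Gateway to drop the half-dead connection.
      await mgmt.send(new DeleteConnectionCommand({ ConnectionId: conn.connectionId }));
      await ddb.send(new DeleteCommand({
        TableName: CONNECTIONS_TABLE,
        Key: { connectionId: conn.connectionId },
      }));
    }
  }
}
```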
Does anyone have a solution for this case, or is this the normal way it works?
Thank you.
I'm not sure if something changed on AWS, but I didn't change anything on my side in the Lambda function or the DB/tables in AWS Timestream.
I'm testing my lambda and it works, but it doesn't write to AWS Timestream. I'm very confused.
I can see in the logs that everything goes through and I'm not seeing any errors... and when I query my Timestream DB it doesn't return anything, unless I go way back to when it was working properly.
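For reference, the write path is roughly the following (simplified, with placeholder database/table names); I'm planning to add explicit handling for RejectedRecordsException in case records are being rejected without showing up in my logs, for example because their timestamps fall outside the memory store retention window:
```
import { TimestreamWriteClient, WriteRecordsCommand } from "@aws-sdk/client-timestream-write";

const timestream = new TimestreamWriteClient({});

export async function writeReading(deviceId: string, value: number): Promise<void> {
  try {
    await timestream.send(new WriteRecordsCommand({
      DatabaseName: "my-database", // placeholder
      TableName: "my-table",       // placeholder
      Records: [{
        Dimensions: [{ Name: "deviceId", Value: deviceId }],
        MeasureName: "reading",
        MeasureValue: value.toString(),
        MeasureValueType: "DOUBLE",
        Time: Date.now().toString(),
        TimeUnit: "MILLISECONDS",
      }],
    }));
  } catch (err: any) {
    // Log the full rejection details so rejected records don't go unnoticed.
    if (err.name === "RejectedRecordsException") {
      console.error("Rejected records:", JSON.stringify(err));
    }
    throw err;
  }
}
```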
Hi Team,
I am trying to use Debezium and MSK for CDC.
I have a MySQL Aurora Serverless database.
I'm trying to connect all of them (the MySQL Aurora Serverless database, Debezium, and my MSK cluster) for change data capture.
I'm trying to follow this blog:
[Introducing Amazon MSK Connect](https://aws.amazon.com/blogs/aws/introducing-amazon-msk-connect-stream-data-to-and-from-your-apache-kafka-clusters-using-managed-connectors/)
In this blog, the connector capacity type used is autoscaled,
but the Amazon Managed Streaming for Apache Kafka Developer Guide says:
> Important
> The Debezium MySQL connector plugin supports only one task and does not work with autoscaled capacity mode for Amazon MSK Connect. You should instead use provisioned capacity mode and set worker count equal to one in your connector configuration. To learn more about the capacity modes for MSK Connect, see Connector capacity.
[Developer Guide](https://docs.aws.amazon.com/msk/latest/developerguide/mkc-debeziumsource-connector-example.html)
At this point, I am unable to successfully create an MSK connector (MSK Connect).
I'm not sure which approach is the right one:
1 - Are MSK and MSK Connect connectors compatible with MySQL Aurora **Serverless**?
2 - Is there any other article that includes all the steps/constraints on how to create an MSK cluster and MSK connector with Aurora and Debezium?
3 - Does the MSK connector need internet access from within the VPC to be successfully created?
Right now I get this error when trying to create the connector:
```
Code: InvalidInput.InvalidConnectorConfiguration
Message: The connector configuration is invalid. Message: Failed to find any class that implements Connector and which name matches io.debezium.connector.mysql.MySqlConnector, available connectors are:.........
```
```
[Worker-07edsd2458791sds1545] org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.debezium.connector.mysql.MySqlConnector, available connectors are: PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSinkConnector, name='org.apache.kafka.connect.file.FileStreamSinkConnector', version='2.7.1', encodedVersion=2.7.1, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSourceConnector, name='org.apache.kafka.connect.file.FileStreamSourceConnector', version='2.7.1', encodedVersion=2.7.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorCheckpointConnector, name='org.apache.kafka.connect.mirror.MirrorCheckpointConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorHeartbeatConnector, name='org.apache.kafka.connect.mirror.MirrorHeartbeatConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorSourceConnector, name='org.apache.kafka.connect.mirror.MirrorSourceConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockConnector, name='org.apache.kafka.connect.tools.MockConnector', version='2.7.1', encodedVersion=2.7.1, type=connector, typeName='connector', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSinkConnector, name='org.apache.kafka.connect.tools.MockSinkConnector', version='2.7.1', encodedVersion=2.7.1, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSourceConnector, name='org.apache.kafka.connect.tools.MockSourceConnector', version='2.7.1', encodedVersion=2.7.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.SchemaSourceConnector, name='org.apache.kafka.connect.tools.SchemaSourceConnector', version='2.7.1', encodedVersion=2.7.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSinkConnector, name='org.apache.kafka.connect.tools.VerifiableSinkConnector', version='2.7.1', encodedVersion=2.7.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSourceConnector, name='org.apache.kafka.connect.tools.VerifiableSourceConnector', version='2.7.1', encodedVersion=2.7.1, type=source, typeName='source', location='classpath'}
```
Note: my VPC is private only and doesn't have access to the internet.
Thank you for your help!
Hi, I am developing a game and am curious to know how I can restrict the CRUD API requests to originate from my GameLift servers only. In other words, how can I restrict the IP addresses that can call the API endpoints that trigger Lambda to a particular IP range that AWS owns? Any idea would be highly appreciated.
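One direction I'm exploring is checking the caller's source IP inside the Lambda (or in a Lambda authorizer) against an allow-list of CIDR ranges; a rough sketch, where the ranges below are placeholders for whatever ranges my GameLift fleet actually uses:
```
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Placeholder CIDR ranges; these would be the ranges the GameLift fleet uses.
const ALLOWED_CIDRS = ["203.0.113.0/24", "198.51.100.0/24"];

// Minimal IPv4 CIDR containment check.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + parseInt(octet, 10), 0) >>> 0;
}

function inCidr(ip: string, cidr: string): boolean {
  const [range, bits] = cidr.split("/");
  const mask = bits === "0" ? 0 : (~0 << (32 - parseInt(bits, 10))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(range) & mask);
}

export async function handler(event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> {
  const sourceIp = event.requestContext.identity.sourceIp;

  if (!ALLOWED_CIDRS.some((cidr) => inCidr(sourceIp, cidr))) {
    return { statusCode: 403, body: "Forbidden" };
  }

  // ...normal CRUD handling goes here...
  return { statusCode: 200, body: "OK" };
}
```
An API Gateway resource policy with an aws:SourceIp condition looks like the other obvious place to enforce this, but I haven't tried that yet.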
I have created a Terraform project to build EKS with Karpenter, but when I try to deploy certain projects I get the problem shown below. Does anyone know how to fix it, or what Terraform configuration I need to apply to do so?
```
Warning FailedMount 25m kubelet MountVolume.SetUp failed for volume "kube-api-access-xxxxx" : write /var/lib/kubelet/pods/xxxxxx-xxxxx-xxxxxx/volumes/kubernetes.io~projected/kube-api-access-xxxxx/..2023_02_15_09_10_29.2455859137/token: no space left on device
Warning FailedMount 5m57s (x8 over 24m) kubelet Unable to attach or mount volumes: unmounted volumes=[kube-api-access-xxxx], unattached volumes=[kube-api-access-xxxx]: timed out waiting for the condition
Warning FailedMount 3m39s (x13 over 24m) kubelet (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[kube-api-access-xxxxx], unattached volumes=[kube-api-access-xxxxx]: timed out waiting for the condition
```
I've created a new project from the HelloWorld template. I use Rider and macOS. When I build the image with the local configuration, the image is created successfully.

When I run the remote configuration I receive Runtime.InvalidEntrypoint

This is what my default template.yaml file looks like.
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  Sample SAM Template for HelloWorld
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 10
    MemorySize: 128
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      PackageType: Image
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
    Metadata:
      DockerTag: dotnet6-v1
      DockerContext: ./src/HelloWorld
      Dockerfile: Dockerfile
      DockerBuildArgs:
        SAM_BUILD_MODE: run
Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn
```
This is what my remote configuration looks like.

I use macOS, and I read that Runtime.InvalidEntrypoint is thrown when the image is built with a different architecture than the one expected, which makes sense. The macOS default configuration is arm64, and I guess Lambda is expecting x86_64.
If that's the case, how do I set up the remote configuration so that the output is an x86_64 image?
