Handling Java RMI in AWS ASG


We have two services: a frontend API service and a backend service. To get high TPS we are using async calls.

  1. An end-user HTTP call lands on one of the Tomcat servers in the frontend.
  2. The frontend calls the backend asynchronously, passing its server IP in the context, and puts the request thread to sleep.
  3. Once the backend finishes the job, it makes a callback to the frontend over RMI using the server IP it received in the context.
  4. The callback wakes the original HTTP request thread.
  5. The woken request thread consumes the prepared data from the cache and completes the response.

This was fine while we were in a physical DC, since we did not scale in or out. With an AWS ASG, the server IP might no longer exist by the time the backend service tries to make the callback, so the request at the user end has to be retried.

We want to move away from RMI here while remaining async. We would appreciate any suggested solutions.

asked 3 years ago · 144 views
1 Answer

To handle the problem of backend callbacks in an auto-scaling environment such as an AWS ASG, and to replace Java RMI while maintaining asynchronous behavior, consider adopting a message-queue-based architecture. This approach decouples your services and ensures reliable message delivery even as instances scale in and out.

Solution Outline

  1. Message Queue for Asynchronous Communication: Use a message queue service like Amazon SQS (Simple Queue Service) or Apache Kafka to handle the asynchronous communication between the frontend and backend services.

  2. Frontend-Backend Communication:

    • When the frontend API service receives a user request, it sends a message to the message queue with all necessary context.
    • The frontend service can store the request state in a distributed cache like Amazon ElastiCache (Redis or Memcached); a minimal Redis-based sketch of this cache follows this outline.
  3. Backend Processing:

    • The backend service consumes messages from the queue, processes the request, and then sends a response message to another queue dedicated to responses.
  4. Frontend Callback Handling:

    • A separate component or service in the frontend continuously polls or listens to the response queue.
    • When a response is received, the component retrieves the corresponding request state from the cache, assembles the response, and completes the HTTP request.
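
As a concrete illustration of the request-state cache in step 2, here is a minimal sketch assuming ElastiCache for Redis accessed through the Jedis client. The class name, endpoint, key prefix, and TTL are placeholder assumptions, not part of the original design.

// Hypothetical sketch (not from the pseudocode below): the request-state cache
// backed by ElastiCache for Redis via the Jedis client
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class RequestStateCache {

    // Replace with your ElastiCache endpoint
    private final JedisPool pool = new JedisPool("my-redis.xxxxxx.cache.amazonaws.com", 6379);

    public void put(String requestId, String requestState) {
        try (Jedis jedis = pool.getResource()) {
            // Expire entries so abandoned requests do not accumulate
            jedis.setex("request:" + requestId, 300, requestState);
        }
    }

    public String get(String requestId) {
        try (Jedis jedis = pool.getResource()) {
            return jedis.get("request:" + requestId);
        }
    }
}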

Detailed Implementation

1. Using Amazon SQS

Frontend Service:

  • Upon receiving an HTTP request:
    • Generate a unique request ID.
    • Store the request state in a distributed cache with the request ID as the key.
    • Send a message to SQS containing the request details and the request ID.
// Pseudocode for sending a message to SQS (AWS SDK for Java v1)
String requestId = generateUniqueId();
Cache.put(requestId, requestState);

// SQS message attributes must be MessageAttributeValue objects, not plain strings
Map<String, MessageAttributeValue> messageAttributes = new HashMap<>();
messageAttributes.put("RequestId", new MessageAttributeValue()
        .withDataType("String")
        .withStringValue(requestId));

SendMessageRequest sendMsgRequest = new SendMessageRequest()
        .withQueueUrl(queueUrl)
        .withMessageBody(requestData)
        .withMessageAttributes(messageAttributes);
sqsClient.sendMessage(sendMsgRequest);

Backend Service:

  • Poll SQS for new messages.
  • Process each message and send the result to a response SQS queue.
// Pseudocode for receiving a message from SQS, processing it, and sending a response
ReceiveMessageRequest receiveRequest = new ReceiveMessageRequest(queueUrl)
        .withMaxNumberOfMessages(1)
        .withMessageAttributeNames("RequestId"); // attributes are only returned when requested
List<Message> messages = sqsClient.receiveMessage(receiveRequest).getMessages();

for (Message message : messages) {
    String requestData = message.getBody();
    String requestId = message.getMessageAttributes().get("RequestId").getStringValue();

    // Process the request
    String responseData = processRequest(requestData);

    // Send response to the response queue
    Map<String, MessageAttributeValue> responseAttributes = new HashMap<>();
    responseAttributes.put("RequestId", new MessageAttributeValue()
            .withDataType("String")
            .withStringValue(requestId));

    SendMessageRequest sendResponseRequest = new SendMessageRequest()
            .withQueueUrl(responseQueueUrl)
            .withMessageBody(responseData)
            .withMessageAttributes(responseAttributes);
    sqsClient.sendMessage(sendResponseRequest);

    // Delete the processed message
    sqsClient.deleteMessage(new DeleteMessageRequest(queueUrl, message.getReceiptHandle()));
}
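
In practice the backend would run this receive-process-delete cycle continuously. A minimal sketch of such a loop, assuming the same queue URL and client as above and using SQS long polling to avoid busy-waiting on empty receives:

// Pseudocode for a continuous backend worker loop with long polling
while (true) {
    ReceiveMessageRequest receiveRequest = new ReceiveMessageRequest(queueUrl)
            .withMaxNumberOfMessages(10)        // process up to 10 messages per receive
            .withWaitTimeSeconds(20)            // long polling: wait up to 20s for messages
            .withMessageAttributeNames("RequestId");
    List<Message> messages = sqsClient.receiveMessage(receiveRequest).getMessages();

    for (Message message : messages) {
        // Process, send the response, and delete the message as shown above
    }
}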

Frontend Response Handler:

  • Poll the response SQS queue.
  • Retrieve the request state from the cache using the request ID and complete the HTTP response.
// Pseudocode for receiving a response from SQS and completing the HTTP request
ReceiveMessageRequest responseReceiveRequest = new ReceiveMessageRequest(responseQueueUrl)
        .withMaxNumberOfMessages(1)
        .withMessageAttributeNames("RequestId"); // attributes are only returned when requested
List<Message> responseMessages = sqsClient.receiveMessage(responseReceiveRequest).getMessages();

for (Message message : responseMessages) {
    String responseData = message.getBody();
    String requestId = message.getMessageAttributes().get("RequestId").getStringValue();

    // Retrieve request state from cache
    RequestState requestState = Cache.get(requestId);

    // Complete the HTTP request using the cached state and response data
    completeHttpRequest(requestState, responseData);

    // Delete the processed message
    sqsClient.deleteMessage(new DeleteMessageRequest(responseQueueUrl, message.getReceiptHandle()));
}
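
The completeHttpRequest call above assumes the frontend has kept the original HTTP request open without blocking a servlet thread. One way to do that is with Spring MVC's DeferredResult, keyed by request ID. The sketch below is illustrative only: it assumes Spring MVC, and it assumes the instance polling the response queue is the same one holding the open connection (for example, because each frontend instance uses its own response queue). The controller, endpoint, map, and helper names are not from the original answer; note the signature differs slightly from the pseudocode above in that it takes the request ID directly.

// Hypothetical sketch: holding the HTTP request open with Spring MVC's DeferredResult
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class FrontendController {

    // Requests still waiting for a backend response on this instance, keyed by request ID
    private final Map<String, DeferredResult<String>> pending = new ConcurrentHashMap<>();

    @PostMapping("/process")
    public DeferredResult<String> handle(@RequestBody String requestData) {
        String requestId = UUID.randomUUID().toString();
        DeferredResult<String> result = new DeferredResult<>(30_000L); // 30-second timeout
        pending.put(requestId, result);
        result.onTimeout(() -> pending.remove(requestId));

        // Send the work item to the request queue (see the SQS send pseudocode above)
        sendToRequestQueue(requestId, requestData);

        // Returning the DeferredResult releases the servlet thread; the response is written later
        return result;
    }

    // Called by the response-queue poller when the backend's result arrives
    public void completeHttpRequest(String requestId, String responseData) {
        DeferredResult<String> result = pending.remove(requestId);
        if (result != null) {
            result.setResult(responseData);
        }
    }

    private void sendToRequestQueue(String requestId, String requestData) {
        // SQS send logic as shown earlier
    }
}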

Benefits of Using a Message Queue

  1. Decoupling: Frontend and backend services are decoupled, making the system more resilient to changes in the number of instances.
  2. Scalability: The architecture supports scaling up and down without losing messages or context, as messages are stored in the queue until processed.
  3. Reliability: Message queues provide reliable message delivery, ensuring that messages are not lost even if instances go down.
  4. Asynchronous Processing: The system remains asynchronous; the frontend can keep accepting requests without blocking threads while the backend completes its processing.

By adopting a message-queue-based architecture, you can replace Java RMI in a dynamic auto-scaling environment while keeping the frontend-backend communication asynchronous. The queues decouple the two services, improve reliability, and accommodate instances being added and removed by the ASG.

Let me know if this solves your issue.

answered 8 months ago
