In Neptune-to-Neptune replication, what is the purpose or flow from Blue Neptune cluster, DynamoDB, and Lambda function to the Green cluster?


In Neptune-to-Neptune replication, my understanding is that the stream endpoint allows the stream-poller Lambda function to fetch change records from the Blue cluster, build the corresponding query, and execute it against the Green cluster.
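For context, that fetch step can be illustrated roughly as follows. This is only a sketch assuming the documented Neptune Streams REST API (a GET against .../propertygraph/stream, or .../gremlin/stream on older engine versions) with IAM auth disabled; the endpoint and starting position are placeholders, and the real poller in the blue/green stack is more involved.

```python
import requests

# Placeholder: the Blue cluster's endpoint with streams enabled.
BLUE_STREAM_URL = ("https://blue-cluster.cluster-xxxxxxxx.us-east-1"
                   ".neptune.amazonaws.com:8182/propertygraph/stream")

def fetch_changes(commit_num, op_num, limit=100):
    """Fetch a batch of change records starting after the last processed position."""
    params = {
        "commitNum": commit_num,
        "opNum": op_num,
        "iteratorType": "AFTER_SEQUENCE_NUMBER",
        "limit": limit,
    }
    resp = requests.get(BLUE_STREAM_URL, params=params, timeout=30)
    resp.raise_for_status()
    # Each record describes an ADD/REMOVE of a vertex, edge, or property,
    # which the poller translates into a write against the Green cluster.
    return resp.json()["records"]
```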

However,

  1. I am unsure how the lease DynamoDB table sources data from the Blue cluster. Do we really need a DynamoDB table?
  2. Once the entire database is replicated, will there be any notification in the logs confirming 100% replication?

If there is any README available detailing the overall architecture, it would be greatly appreciated.

Dhinesh
Asked 2 months ago · 634 views
2 Answers
Accepted Answer
  1. I am unsure how the lease DynamoDB table sources data from the Blue cluster. Do we really need a DynamoDB table?

The DynamoDB table used in the architecture maintains the checkpoint (the commitNum and opNum that were last processed by the Lambda function). It is just a single-row table that stores the checkpoint across Lambda invocations.
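As a rough illustration only (not the stack's actual code), reading and writing that checkpoint could look like the sketch below; the table name, key, and attribute names are assumptions, since the deployed stack defines its own:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical table name -- the CloudFormation stack creates its own lease table.
lease_table = dynamodb.Table("neptune-stream-lease")

def load_checkpoint():
    """Read the single checkpoint row; fall back to the start of the stream if absent."""
    item = lease_table.get_item(Key={"leaseKey": "checkpoint"}).get("Item")
    if item is None:
        return 1, 1  # commitNum, opNum
    return int(item["commitNum"]), int(item["opNum"])

def save_checkpoint(commit_num, op_num):
    """Persist the last processed position so the next Lambda invocation resumes from it."""
    lease_table.put_item(Item={
        "leaseKey": "checkpoint",
        "commitNum": commit_num,
        "opNum": op_num,
    })
```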

  2. Once the entire database is replicated, will there be any notification in the logs confirming 100% replication?

The intent of Neptune Streams is to perform ongoing replication as changes are made on the source. There is a CloudWatch dashboard that gets deployed with the Streams stack that provides a lag metric showing how far behind the target is from the source. That can be used to tell when the target has all of the data from the source (a lag of 0).
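If you would rather check the lag programmatically than watch the dashboard, something along these lines works; note that the namespace and metric name below are placeholders, so take the real ones from the widgets in the deployed dashboard:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

def current_replication_lag():
    """Return the most recent lag datapoint, or None if nothing was published recently."""
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="NeptuneStreamPoller",   # assumption -- copy from the deployed dashboard
        MetricName="StreamLagSeconds",     # assumption -- copy from the deployed dashboard
        StartTime=now - timedelta(minutes=15),
        EndTime=now,
        Period=60,
        Statistics=["Maximum"],
    )
    points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
    return points[-1]["Maximum"] if points else None

# A value that stays at 0 means the Green cluster has caught up with the Blue cluster.
```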

This blog post explains the architecture in detail: https://aws.amazon.com/blogs/database/capture-graph-changes-using-neptune-streams/

AWS
Answered 2 months ago
Expert-reviewed 2 months ago

Thanks @Taylor-AWS.

2 - During an upgrade from version 1.2 to 1.3, data might be ingested into the 1.3 cluster after the upgrade. If I find that my application is not compatible with version 1.3 and wish to roll back, taking a snapshot of the 1.3 cluster and restoring it to version 1.2 is not possible. Does AWS recommend any best practices for handling this situation, where the data written to the 1.3 cluster needs to be moved back to a 1.2 cluster?

Dhinesh
Answered 2 months ago
  • This is exactly the situation the blue/green solution is designed to avoid. With the blue/green approach, the new cluster is created and upgraded to the latest version while the old cluster is left in place. This allows you to test your application against the new cluster before moving your production application over to it.

    There are really only two options if you need to downgrade:

    1. Do a full export from the 1.3 cluster and bulk load into a new cluster that you've provisioned at 1.2 (see the sketch after this list).
    2. Open a support case and the Neptune engineering team can downgrade the cluster manually from their end.
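To make option 1 above a little more concrete, here is a sketch of the bulk-load call against the 1.2 target once the export from the 1.3 cluster has been written to S3 (for example with the neptune-export tool). The endpoint, bucket, format, and role ARN are placeholders:

```python
import requests

# Placeholder: the loader endpoint of the freshly provisioned 1.2 cluster.
LOADER_URL = ("https://target-12-cluster.cluster-xxxxxxxx.us-east-1"
              ".neptune.amazonaws.com:8182/loader")

payload = {
    "source": "s3://my-export-bucket/neptune-export/",   # where the 1.3 export landed
    "format": "csv",                                      # must match the export format
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "FALSE",
    "queueRequest": "TRUE",
}

resp = requests.post(LOADER_URL, json=payload, timeout=30)
resp.raise_for_status()
# The response contains a loadId that can be polled at GET /loader/{loadId} for status.
print(resp.json())
```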
