Design pattern for daily upload of source data (GraphQL) into DynamoDB


Hello, I am looking for best practices for designing a simple solution to retrieve a large data set (100K orders) daily into a DynamoDB table.

At the moment, the source data (orders) is accessible via a GraphQL API, but the solution needs to cover other methods as well, such as a REST API. The rough shape I have in mind is sketched below.
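For context, here is a minimal sketch of what I am picturing: a daily scheduled (e.g. EventBridge rule) Lambda that pages through the GraphQL API and batch-writes the orders into DynamoDB. The endpoint URL, table name, query shape, and field names below are placeholders, not our real schema:

```python
import os
from decimal import Decimal

import boto3
import requests

# Placeholder endpoint, table name, and query shape -- replace with your own.
GRAPHQL_URL = os.environ.get("GRAPHQL_URL", "https://example.com/graphql")
TABLE_NAME = os.environ.get("TABLE_NAME", "Orders")

# Assumed cursor-based pagination; the real schema and field names will differ.
ORDERS_QUERY = """
query Orders($cursor: String) {
  orders(first: 500, after: $cursor) {
    pageInfo { hasNextPage endCursor }
    nodes { orderId customerId total updatedAt }
  }
}
"""

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def fetch_pages():
    """Yield pages of order records from the GraphQL endpoint until exhausted."""
    cursor = None
    while True:
        resp = requests.post(
            GRAPHQL_URL,
            json={"query": ORDERS_QUERY, "variables": {"cursor": cursor}},
            timeout=30,
        )
        resp.raise_for_status()
        # parse_float=Decimal because DynamoDB rejects Python floats.
        data = resp.json(parse_float=Decimal)["data"]["orders"]
        yield data["nodes"]
        if not data["pageInfo"]["hasNextPage"]:
            break
        cursor = data["pageInfo"]["endCursor"]


def handler(event, context):
    """Lambda entry point, triggered once a day by an EventBridge schedule."""
    written = 0
    # batch_writer buffers items into 25-item BatchWriteItem calls and
    # retries unprocessed items automatically.
    with table.batch_writer(overwrite_by_pkeys=["orderId"]) as batch:
        for page in fetch_pages():
            for order in page:
                batch.put_item(Item=order)
                written += 1
    return {"written": written}
```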

Also, I need something configurable, since the data source's object model evolves on a regular basis.

In the old days of EAI, we would simply use an out-of-the-box configurable adapter (usually with object mapping wizards) to map the source object model (at the data field level) to the target object model; a rough sketch of what I mean follows. Just wondering what the AWS best practices are for this scenario? Thanks a lot :)
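To illustrate the kind of configurability I mean, here is a sketch of a config-driven field mapping. The mapping itself would live outside the code (e.g. in S3, SSM Parameter Store, or AppConfig) so it can change as the source model evolves; the field names and paths below are made-up examples:

```python
from decimal import Decimal

# Hypothetical mapping config, stored outside the code so field mappings can
# change without a redeploy. Keys are target DynamoDB attribute names; values
# are dotted paths into the source record from the GraphQL (or REST) API.
FIELD_MAPPING = {
    "orderId": "id",
    "customerId": "customer.id",
    "total": "pricing.grandTotal",
    "updatedAt": "audit.lastModified",
}


def get_path(record, dotted_path):
    """Walk a dotted path such as 'customer.id' through nested dicts."""
    node = record
    for key in dotted_path.split("."):
        if not isinstance(node, dict):
            return None
        node = node.get(key)
    return node


def map_record(source):
    """Project a source record onto the target item using the mapping config."""
    item = {target: get_path(source, path) for target, path in FIELD_MAPPING.items()}
    # Drop attributes missing from the source instead of writing nulls.
    return {k: v for k, v in item.items() if v is not None}


# Example source record and its mapped DynamoDB item:
source = {
    "id": "o-123",
    "customer": {"id": "c-9"},
    "pricing": {"grandTotal": Decimal("42.50")},
    "audit": {"lastModified": "2024-05-01T00:00:00Z"},
}
print(map_record(source))
# {'orderId': 'o-123', 'customerId': 'c-9', 'total': Decimal('42.50'),
#  'updatedAt': '2024-05-01T00:00:00Z'}
```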

asked a year ago · 197 views
1 Answer

Hello, based on your query, I believe you could try leveraging the AWS Architecture Center, which offers reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, and more.

Nevertheless, if you are seeking guidance specific to your use case, please feel free to get in touch with an AWS Solutions Architect, who has the right expertise across multiple AWS services and can assist you. You can fill out this contact form (Nature of Support: Sales Support) and an AWS SA from the team will get back to you.

AWS SUPPORT ENGINEER · answered a year ago
