Design pattern for daily upload of source data (in GraphQL) into DynamoDB


Hello, I am looking for best practices in designing a simple solution to retrieve a large data set (100K orders) daily into an Amazon DynamoDB table.

At the moment, the source data (orders) is accessible via a GraphQL API, but the solution needs to cover other access methods as well, such as a REST API.
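One common way to keep the ingestion logic independent of the access method is to hide each source behind a shared interface. Below is a minimal sketch of that idea; the endpoint URLs, the GraphQL query shape (`orders` with cursor pagination), and the field names are hypothetical, not taken from any real API.

```python
from abc import ABC, abstractmethod
from typing import Iterator


class OrderSource(ABC):
    """Common interface so GraphQL, REST, or future sources are interchangeable."""

    @abstractmethod
    def fetch_orders(self) -> Iterator[dict]:
        """Yield raw order records, one dict per order."""


class GraphQLOrderSource(OrderSource):
    """Pulls orders from a (hypothetical) cursor-paginated GraphQL endpoint."""

    def __init__(self, endpoint: str, page_size: int = 500):
        self.endpoint = endpoint
        self.page_size = page_size

    def fetch_orders(self) -> Iterator[dict]:
        import requests  # third-party HTTP client; imported lazily

        # Assumed query shape -- adjust to the real schema.
        query = """
        query Orders($first: Int!, $after: String) {
          orders(first: $first, after: $after) {
            pageInfo { hasNextPage endCursor }
            nodes { id customer { id } totalAmount createdAt }
          }
        }"""
        cursor = None
        while True:
            resp = requests.post(self.endpoint, json={
                "query": query,
                "variables": {"first": self.page_size, "after": cursor},
            })
            resp.raise_for_status()
            page = resp.json()["data"]["orders"]
            yield from page["nodes"]
            if not page["pageInfo"]["hasNextPage"]:
                break
            cursor = page["pageInfo"]["endCursor"]


class RestOrderSource(OrderSource):
    """Pulls orders from a (hypothetical) page-numbered REST endpoint."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def fetch_orders(self) -> Iterator[dict]:
        import requests

        page = 1
        while True:
            resp = requests.get(f"{self.base_url}/orders", params={"page": page})
            resp.raise_for_status()
            batch = resp.json()
            if not batch:  # empty page signals the end
                break
            yield from batch
            page += 1
```

The downstream loader only depends on `OrderSource`, so switching from GraphQL to REST (or adding an SFTP drop, say) means adding one class, not touching the pipeline.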

Also, I need something configurable, because the data source's object model evolves on a regular basis.

In the old days of EAI, we would simply use an out-of-the-box configurable adapter (usually with object-mapping wizards) for mapping the source object model (at the data-field level) to the target object data model. I am just wondering what the AWS best practices are for this scenario. Thanks a lot :)
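The field-level mapping idea from those EAI adapters can be approximated with a declarative mapping held outside the code (for example in S3, a DynamoDB config table, or AWS AppConfig), so schema evolution only means editing the mapping. A minimal sketch, with a hypothetical mapping and field names; `batch_writer` is the real boto3 helper that chunks writes into DynamoDB's 25-item `BatchWriteItem` limit:

```python
from decimal import Decimal

# Field-level mapping, editable without code changes (could be loaded from
# S3, DynamoDB, or AWS AppConfig). Keys are source field paths (dot notation
# for nested GraphQL objects); values are target DynamoDB attribute names.
# These names are illustrative only.
ORDER_MAPPING = {
    "id": "order_id",
    "customer.id": "customer_id",
    "totalAmount": "total",
    "createdAt": "created_at",
}


def get_path(obj, path):
    """Resolve a dotted path like 'customer.id' against a nested dict."""
    for key in path.split("."):
        if obj is None:
            return None
        obj = obj.get(key)
    return obj


def map_order(source: dict, mapping: dict) -> dict:
    """Transform one source record into a DynamoDB item via the mapping."""
    item = {}
    for src_path, target_attr in mapping.items():
        value = get_path(source, src_path)
        if isinstance(value, float):
            value = Decimal(str(value))  # DynamoDB rejects Python floats
        if value is not None:
            item[target_attr] = value
    return item


def load_orders(orders, table_name="orders"):
    """Batch-write mapped items; batch_writer handles chunking and retries."""
    import boto3

    table = boto3.resource("dynamodb").Table(table_name)
    with table.batch_writer() as batch:
        for order in orders:
            batch.put_item(Item=map_order(order, ORDER_MAPPING))
```

For a daily 100K-record run, this transform-and-load step is a natural fit for a scheduled job (e.g. an EventBridge rule triggering a Lambda function or Fargate task), with the mapping fetched at startup.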

Asked 1 year ago · Viewed 218 times
1 Answer

Hello, based on your query, I believe you could try leveraging the AWS Architecture Center, which offers reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, and more.

Nevertheless, if you are seeking guidance specific to your use case, please feel free to get in touch with an AWS Solutions Architect, who will have the right expertise across multiple AWS services and can assist you. You can fill out the contact form (Nature of Support: Sales Support) and an AWS SA from the team will get back to you.

AWS
Support Engineer
Answered 1 year ago
