Design pattern for daily upload of source data (in GraphQL) into DynamoDB


Hello, I am looking for best practices in designing a simple solution to retrieve a large data set (100K orders) daily into DynamoDB.

At the moment, the source data (orders) is accessible via a GraphQL API, but the solution needs to cover other methods, such as a REST API.

Also, I need something configurable, since the data source's object model evolves on a regular basis.

In the old days of EAI, we would simply use an out-of-the-box configurable adapter (usually with object-mapping wizards) to map the source object model (at the data-field level) to the target object data model. I'm just wondering what the AWS best practices are for this scenario? Thanks a lot :)
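To make the configurable-mapping idea concrete, here is a minimal sketch of the kind of daily job I have in mind. The GraphQL endpoint, query shape, field map, and table name are all placeholders, not a real schema:

```python
"""Minimal sketch: pull orders from a GraphQL API and write them to
DynamoDB using a configurable field mapping. Endpoint, query, field
map, and table name below are placeholders for illustration only."""
from decimal import Decimal

import boto3
import requests

GRAPHQL_ENDPOINT = "https://example.com/graphql"  # placeholder endpoint
ORDERS_QUERY = """
query Orders($cursor: String) {
  orders(first: 500, after: $cursor) {
    pageInfo { hasNextPage endCursor }
    nodes { id customerId totalAmount createdAt }
  }
}
"""

# Configurable mapping: source field -> DynamoDB attribute.
# Kept as data (it could live in S3 or Parameter Store) so it can be
# changed without code changes when the source object model evolves.
FIELD_MAP = {
    "id": "orderId",
    "customerId": "customerId",
    "totalAmount": "total",
    "createdAt": "createdAt",
}


def fetch_orders():
    """Page through the GraphQL API and yield raw order records."""
    cursor = None
    while True:
        resp = requests.post(
            GRAPHQL_ENDPOINT,
            json={"query": ORDERS_QUERY, "variables": {"cursor": cursor}},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()["data"]["orders"]
        yield from page["nodes"]
        if not page["pageInfo"]["hasNextPage"]:
            break
        cursor = page["pageInfo"]["endCursor"]


def to_dynamo(value):
    """DynamoDB (via the boto3 resource layer) rejects floats; use Decimal."""
    return Decimal(str(value)) if isinstance(value, float) else value


def main():
    table = boto3.resource("dynamodb").Table("orders")  # placeholder table
    # batch_writer handles the 25-item batching and retries of unprocessed items.
    with table.batch_writer() as batch:
        for order in fetch_orders():
            item = {dst: to_dynamo(order[src]) for src, dst in FIELD_MAP.items() if src in order}
            batch.put_item(Item=item)


if __name__ == "__main__":
    main()
```

The idea is that the job itself could run on a daily schedule (e.g. a scheduled Lambda), and swapping the GraphQL source for a REST one would only mean replacing fetch_orders while the mapping stays configuration-driven.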

Asked a year ago · 218 views
1 Answer

Hello, based on your query, I believe you could try leveraging the AWS Architecture Center, which offers reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, etc.

Nevertheless, if you are seeking guidance specific to your use case, please feel free to get in touch with an AWS Solutions Architect, who has the right expertise across multiple AWS services and can assist you. You can fill out the contact form (Nature of Support: Sales Support) and an AWS SA from the team will get back to you.

AWS
SUPPORT ENGINEER
answered a year ago
