
How to sync big SAP datasources using AppFlow


Hello everyone,

I'm currently working on loading heavy SAP datasources, such as 0FI_ACDOCA_10, into S3 using AppFlow.

Initially, I set up a flow run in incremental loading mode without any filters. However, this approach frequently failed due to network connection issues, timeouts, and similar errors.

So I decided to split the load into multiple parts, such as one load per week. After that, I set up a schedule in incremental loading mode to run daily and capture new data changes. However, this schedule requires an initial load, which means I need to set a filter on a date field to retrieve only data from the current month onward. Unfortunately, that filter excludes any data changes from previous months, even when they are captured in delta mode.
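For reference, the weekly split described above can be driven by a small helper that generates the date windows used as filters for each backfill run. This is an illustrative sketch (the helper name `weekly_windows` is my own, not part of AppFlow):

```python
from datetime import date, timedelta

def weekly_windows(start: date, end: date):
    """Yield (window_start, window_end) pairs covering [start, end) in 7-day steps.

    Each pair can be turned into a date-range filter for one backfill flow run.
    The last window is clipped so it never extends past `end`.
    """
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=7), end)
        yield cur, nxt
        cur = nxt

# Example: backfill windows for January 2024
for lo, hi in weekly_windows(date(2024, 1, 1), date(2024, 2, 1)):
    print(lo, hi)
```

Each `(lo, hi)` pair maps to one "load per week" run, with the flow's filter set to `lo <= date_field < hi`.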

Ideally, a flow running in incremental loading mode would be able to capture data changes without requiring an initial load. Is that possible?

If you have any solutions or suggestions for this situation, I would greatly appreciate it.

Thank you.

1 Answer

Due to the characteristics of the SAP datasource, the AppFlow integration can be complex when handling large data loads. Consider optimizing the SAP extractor, or using a different extractor if possible; some extractors allow more granular filtering, which can improve the efficiency of data extraction and reduce the size of each pull. Alternatively, you can use AWS Glue to extract the data. I suggest the following articles:

https://aws.amazon.com/blogs/awsforsap/architecture-options-for-extracting-sap-data-with-aws-services/
https://aws.amazon.com/blogs/awsforsap/extracting-data-from-sap-hana-using-aws-glue-and-jdbc/
https://aws.amazon.com/blogs/awsforsap/run-federated-queries-to-an-aws-data-lake-with-sap-hana/

Expert
Answered 3 years ago


