
How to sync big SAP datasources using AppFlow


Hello everyone,

I'm currently working on loading large SAP datasources, such as 0FI_ACDOCA_10, into S3 using AppFlow.

Initially, I set up a flow in incremental loading mode without any filters. However, this approach frequently failed due to network connection issues, timeouts, and similar errors.

So, I decided to split the load into multiple parts, such as one load per week. After that, I set up a daily schedule in incremental loading mode to capture new data changes. However, this schedule requires an initial load, which means I need to set a filter on a date field to retrieve only data from the current month onward. Unfortunately, this filter also drops any changes to data from previous months, even when those changes are captured in delta mode.
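To make the weekly back-fill repeatable, the date windows can be generated programmatically and each pair used as the bounds of an AppFlow filter on the chosen date field. This is only a sketch of the windowing logic; wiring each window into a flow definition (e.g. via the AppFlow API) is left out:

```python
from datetime import date, timedelta

def weekly_windows(start, end):
    """Split the inclusive range [start, end] into consecutive 7-day windows.

    Each (window_start, window_end) pair can serve as the lower and upper
    bound of an AppFlow date filter, so every back-fill run stays small
    enough to avoid connection timeouts.
    """
    windows = []
    cur = start
    while cur <= end:
        stop = min(cur + timedelta(days=6), end)
        windows.append((cur, stop))
        cur = stop + timedelta(days=1)
    return windows

# Example: back-fill January 2023 in five small loads.
chunks = weekly_windows(date(2023, 1, 1), date(2023, 1, 31))
```

Running one filtered full load per window, then switching the flow to incremental mode, keeps each pull small without relying on a single giant initial load.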

Ideally, a flow running in incremental loading mode would be able to capture data changes without requiring an initial load.

If you have any solutions or suggestions for this situation, I would greatly appreciate it.

Thank you.

Asked 3 years ago · Viewed 687 times
1 Answer

Due to the characteristics of SAP datasources, AppFlow integration can be complex when handling large data loads. If possible, optimize the SAP extractor or use a different one: some extractors allow more granular filtering, which improves extraction efficiency and reduces the size of each data pull. Alternatively, you can use AWS Glue to extract the data. I suggest the following articles:

https://aws.amazon.com/blogs/awsforsap/architecture-options-for-extracting-sap-data-with-aws-services/
https://aws.amazon.com/blogs/awsforsap/extracting-data-from-sap-hana-using-aws-glue-and-jdbc/
https://aws.amazon.com/blogs/awsforsap/run-federated-queries-to-an-aws-data-lake-with-sap-hana/
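If AppFlow keeps timing out, the Glue-over-JDBC approach from the second article lets you push the date predicate down to SAP HANA yourself. The host, port, schema, credentials, and date range below are all placeholders, not real connection details; this is a sketch of the JDBC options a Glue/Spark reader would take:

```python
# Sketch of JDBC options for reading a filtered slice of an SAP HANA
# table from an AWS Glue job. All connection details are placeholders.
jdbc_options = {
    "url": "jdbc:sap://hana-host.example.com:30015/",  # placeholder host/port
    "driver": "com.sap.db.jdbc.Driver",                # SAP HANA JDBC driver class
    "user": "GLUE_USER",                               # placeholder credentials
    "password": "********",
    # Pushing the date predicate into the subquery keeps each pull small,
    # mirroring the weekly-window idea from the question.
    "dbtable": "(SELECT * FROM SAPABAP1.ACDOCA "
               "WHERE BUDAT BETWEEN '20230101' AND '20230107') AS w",
}

# Inside a Glue job this dict would be handed to the Spark JDBC reader, e.g.:
# df = spark.read.format("jdbc").options(**jdbc_options).load()
```

Looping such a query over successive date windows gives the same chunked back-fill as the weekly AppFlow loads, but with full control over retries.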

Expert
Answered 3 years ago

