When AppFlow writes Parquet to S3, can it maintain the source data types or not?


In the AppFlow UI, the option shown underneath the Parquet format choice ("Preserve source data types in Parquet format") appears to indicate that maintaining the source data types is possible:

[Screenshot: Amazon S3 destination file format settings in the AppFlow console]

However, the documentation (https://docs.aws.amazon.com/appflow/latest/userguide/s3.html) states:

If you choose Parquet as the format for your destination file in Amazon S3, the option to aggregate all records into one file per flow run will not be available. When choosing Parquet, Amazon AppFlow will write the output as string, and not declare the data types as defined by the source.

These two sources conflict with each other. The behavior I am seeing matches the documentation: all data is being written as string. I am trying to determine whether this is intended or a bug. If the latter, I can open a support ticket.
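For reference, one way to confirm what actually lands in S3 is to read the Parquet file's schema directly. A minimal sketch, assuming pyarrow and s3fs are installed; the bucket and key below are hypothetical placeholders:

```python
import pyarrow.parquet as pq
import s3fs

# Uses the default AWS credential chain
fs = s3fs.S3FileSystem()

# Hypothetical path to a file produced by the flow run
path = "my-appflow-bucket/appflow-output/part-00000.parquet"

# Read only the Parquet footer/schema, not the full data
with fs.open(path, "rb") as f:
    schema = pq.read_schema(f)

# In the behavior described above, every field prints as `string`,
# even for columns that are numeric or date types in the source.
print(schema)
```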

Asked 2 years ago · 849 views
1 Answer

Hi there. As the documentation mentions, when choosing Parquet, Amazon AppFlow writes the output as string and does not declare the data types as defined by the source. In other words, every column is written as a string, and no other data types are declared, regardless of the types in the source. I hope that clarifies things. If you still have questions, please feel free to reach us by opening a support case. Thank you!
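If you need typed columns downstream, one common approach is to cast the string columns back after reading. A minimal pandas sketch, assuming hypothetical path, column names, and target types:

```python
import pandas as pd

# Reading directly from S3 requires s3fs; the path below is a placeholder.
df = pd.read_parquet("s3://my-appflow-bucket/appflow-output/part-00000.parquet")

# Cast the string columns back to the types the source system used
# (column names and types here are illustrative only).
df = df.astype({"order_id": "int64", "amount": "float64"})
df["created_at"] = pd.to_datetime(df["created_at"])

print(df.dtypes)
```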

AWS
Answered 2 years ago
  • What, then, is the "Preserve source data types in Parquet format" option for? I am trying to understand whether I can keep the source data types somehow.
