Redshift Unload to S3 - How to limit file count?


Hi Team,

I am unloading data from Redshift to S3 using the UNLOAD command, but it is splitting the output into 1,000+ files. I cannot turn PARALLEL off, because the query then takes about 3x as long to run and my organization has a policy of terminating any query that runs for more than 45 minutes. I tried the MAXFILESIZE option, but the output is still split into many files, some of which contain actual data while others contain only headers; I'm not sure what is causing these header-only files.
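For reference, here is roughly the command I'm running (the table, bucket, and role names are placeholders, not my real ones):

```sql
UNLOAD ('SELECT * FROM sales_history')
TO 's3://my-bucket/unload/sales_'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-unload-role'
FORMAT AS CSV
HEADER
MAXFILESIZE 100 MB;
```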

I need to keep the file count under 1,000, as this is a limit enforced by QuickSight.

TIA, Tanish

Asked 6 months ago · 607 views
1 Answer

Hi Tanish,

The MAXFILESIZE and PARALLEL OFF parameters are the two options you have for limiting the number of files. As you mentioned, PARALLEL OFF will increase the runtime of the unload, so it is likely not a desirable solution.

I would suggest tuning the MAXFILESIZE parameter to get the best output. Keep in mind that each slice in your cluster writes its own files, so you will get at least one file per slice. That is likely where your header-only files come from: a slice that holds no rows for the query still writes a file, and with the HEADER option that file contains just the header row. Are the empty files causing issues on the QuickSight side?
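As a rough worked example (the numbers and object names here are illustrative, not taken from your cluster): the file count comes out to roughly max(number of slices, total unloaded size / MAXFILESIZE). If the unload is around 200 GB, staying under 1,000 files means MAXFILESIZE must be at least 200 GB / 1,000 = 200 MB:

```sql
UNLOAD ('SELECT * FROM sales_history')
TO 's3://my-bucket/unload/sales_'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-unload-role'
FORMAT AS CSV
HEADER
PARALLEL ON            -- keep the fast parallel unload
MAXFILESIZE 256 MB;    -- ~200 GB / 256 MB ≈ 800 files, under the 1,000 limit
```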

Note that you can also compress the output files (for example with GZIP or ZSTD); smaller files mean fewer files are needed under a given MAXFILESIZE, further reducing the file count.
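For instance (reusing the placeholder names above), adding GZIP compresses each output file as it is written to S3 (the files get a .gz suffix), so each file holds more rows before reaching the size cap:

```sql
UNLOAD ('SELECT * FROM sales_history')
TO 's3://my-bucket/unload/sales_'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-unload-role'
FORMAT AS CSV
HEADER
GZIP                   -- gzip-compress each output file
MAXFILESIZE 256 MB;    -- the size cap now covers far more rows per file
```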

A final point: QuickSight can connect directly to Redshift as a data source, so if that is possible in your organisation, it may give better performance and remove the file-count problem entirely.

AWS
EXPERT
Answered 6 months ago
