Redshift - During COPY, space usage reached 99%


I would like to UNLOAD 250 million records from one Redshift table (100 GB) and COPY that table to a different account. UNLOAD created 350 GB of CSV files. While the COPY command was running on the destination cluster, it was about to use 100% of the cluster's space, so I had to terminate the COPY at 99% space usage. I have 130 GB of free space in the destination cluster. Any suggestions for this, or any other alternative for a single table?

Asked 2 years ago · Viewed 202 times

2 Answers

You may want to consider unloading the data in a different format, such as Parquet, which takes significantly less space on S3: https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html If the source table takes only 100 GB, try creating the destination table first using the same DDL as the source table to take advantage of its compression, or make sure the COMPUPDATE [ON] option is set when you copy the data. With this option on, the best column compression is determined and set by applying different compression encodings to a sample set of column data. COPY performance will also be a lot better when you load from multiple files (based on the number of slices).
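A minimal sketch of that suggestion; the bucket, IAM role ARNs, and table names below are placeholders, not from the original question:

```sql
-- Unload as Parquet instead of CSV: columnar and compressed,
-- so the S3 footprint is typically far smaller than 350 GB of CSV.
UNLOAD ('SELECT * FROM my_schema.my_table')
TO 's3://my-bucket/my_table/part_'
IAM_ROLE 'arn:aws:iam::111111111111:role/MyRedshiftRole'
FORMAT AS PARQUET;

-- On the destination cluster, after creating the table with the
-- same DDL as the source (so column encodings carry over):
COPY my_schema.my_table
FROM 's3://my-bucket/my_table/'
IAM_ROLE 'arn:aws:iam::222222222222:role/MyRedshiftRole'
FORMAT AS PARQUET
COMPUPDATE ON;
```

Note that COMPUPDATE only analyzes and sets encodings when loading into an empty table whose columns have no explicit encodings; if the destination DDL already specifies encodings, those are used as-is.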

AWS
EXPERT
Nita_S
Answered 2 years ago

There are a few factors that can reduce the segments (blocks) a cluster uses. First, sort keys create additional temporary segments, so test with a table that has no sort key. Also check the encoding (compression) of the target table's columns. Finally, load from split, compressed files (such as .gz) rather than a single file.
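A sketch of that approach, assuming placeholder bucket, role, table, and column names:

```sql
-- UNLOAD writes multiple files per slice by default (PARALLEL ON);
-- GZIP compresses each part, and MAXFILESIZE caps the part size
-- so the destination cluster can load many small files in parallel.
UNLOAD ('SELECT * FROM my_schema.my_table')
TO 's3://my-bucket/my_table/part_'
IAM_ROLE 'arn:aws:iam::111111111111:role/MyRedshiftRole'
GZIP
MAXFILESIZE 256 MB;

-- Destination table with no sort key and explicit column encodings,
-- so the load avoids extra temporary sort segments and COPY does not
-- need to sample data to pick encodings:
CREATE TABLE my_schema.my_table (
    id      BIGINT       ENCODE az64,
    payload VARCHAR(256) ENCODE lz4
);

COPY my_schema.my_table
FROM 's3://my-bucket/my_table/part_'
IAM_ROLE 'arn:aws:iam::222222222222:role/MyRedshiftRole'
GZIP;
```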

Answered 2 years ago
