You may want to consider unloading the data in a different format such as Parquet, which takes significantly less space on S3: https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html If the source table takes only 100 GB, try creating the destination table first using the same DDL as the source table so it inherits the same compression, or make sure the COPY option COMPUPDATE is ON when you load the data. With that option on, the best column compression is determined and applied by testing different compression encodings against a sample of each column's data. COPY performance will also be much better when you load from multiple files (ideally a multiple of the number of slices in the cluster). A sketch of the commands is below.
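For illustration, here is roughly what that could look like. This is a minimal sketch, not a command from the original question: the table names, S3 bucket, and IAM role ARN are placeholders you would replace with your own.

```sql
-- Unload the source table to S3 as Parquet (columnar and compressed,
-- typically much smaller than delimited text). PARALLEL is ON by default,
-- so UNLOAD writes multiple files across the cluster's slices.
UNLOAD ('SELECT * FROM source_table')
TO 's3://my-bucket/exports/source_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS PARQUET;

-- Create the destination table with the same DDL as the source so it
-- inherits the column encodings, distribution style, and sort key.
CREATE TABLE destination_table (LIKE source_table);

-- Load the Parquet files back in; COPY reads the files in parallel.
COPY destination_table
FROM 's3://my-bucket/exports/source_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS PARQUET;
```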
There are a few factors that reduce the number of segments (blocks) a table uses on the cluster. First, sort keys create additional temporary segments, so test with a table that has no sort key. Also check the encoding (compression) of the target table's columns. Finally, load from multiple split files in a compressed format (such as .gz) rather than a single file; see the sketch below.
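A minimal sketch of such a load, assuming the data was split into gzip-compressed, pipe-delimited files sharing a common key prefix (the table name, bucket, prefix, and role ARN here are placeholders):

```sql
-- Load from many gzip-compressed files that share the prefix
-- 'exports/part_'. Redshift distributes the files across slices and
-- loads them in parallel, so many smaller files beat one large file.
COPY target_table
FROM 's3://my-bucket/exports/part_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
GZIP
DELIMITER '|'
COMPUPDATE ON;  -- sample the data and apply the best column encodings
                -- (automatic compression runs only on an empty table)

-- Verify which encodings the target table's columns ended up with.
SELECT "column", type, encoding
FROM pg_table_def
WHERE tablename = 'target_table';
```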
I will check and apply this - thanks.