While proper splitting of files is very important and highly recommended, it shouldn't cause a CPU spike across the cluster.
The usual cause of a CPU spike like the one you're describing is loading into a table without any compression encodings defined. The default setting for COPY is COMPUPDATE ON, which means Redshift takes the incoming rows, runs them through every compression encoding it supports, and returns the appropriate (smallest) one for each column.
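For illustration, a load like the sketch below would trigger that automatic compression analysis. The table name, S3 path, and IAM role are hypothetical; the point is that no column encodings are defined and COMPUPDATE is left at its default:

```sql
-- Hypothetical staging table created with no column encodings
CREATE TABLE staging_events (
    event_id   BIGINT,
    event_time TIMESTAMP,
    payload    VARCHAR(1024)
);

-- With no encodings on the target and COMPUPDATE at its default (ON),
-- this COPY samples the incoming rows and tests every encoding,
-- which is the CPU-heavy step described above.
COPY staging_events
FROM 's3://my-bucket/events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV;
```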
To fix the issue, make sure compression is already applied to the target table of the COPY statement. Run the ANALYZE COMPRESSION command if necessary to figure out which encodings to use and apply them manually in the DDL. For temporary tables, LZO can be an excellent choice because it's faster to encode on these transient tables than, say, ZSTD. To be safe, also set COMPUPDATE OFF in the COPY statement (see the sketch below).
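A minimal sketch of the fix, assuming the same hypothetical table and S3 path as above: get encoding recommendations, bake the encodings into the DDL, and disable COMPUPDATE on the COPY.

```sql
-- Ask Redshift to recommend encodings for an already-populated table
ANALYZE COMPRESSION staging_events;

-- Recreate the target with explicit encodings (LZO shown here for a
-- transient/staging column; for long-lived tables, use the
-- ANALYZE COMPRESSION recommendations instead)
CREATE TABLE staging_events_encoded (
    event_id    BIGINT        ENCODE AZ64,
    event_time  TIMESTAMP     ENCODE AZ64,
    payload     VARCHAR(1024) ENCODE LZO
);

-- COMPUPDATE OFF keeps COPY from re-running the per-encoding analysis
COPY staging_events_encoded
FROM 's3://my-bucket/events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV
COMPUPDATE OFF;
```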