
Unload command added more decimal places to float4 field


I have a table with a float4 field, and one of the values is 120.12. When I unload the data to a file in S3 and look at the file in a text editor, the related field now has the value 120.120003. That is, the value has become more precise by gaining four more decimal places, with the last decimal place (seemingly at random) becoming non-zero. This is happening for many of the database records that I am unloading/exporting.

Why is this happening, and what can I do to prevent it? Is it because I'm using float4, which has "6 significant digits of precision"? Is Redshift just adding extra precision because I used float4? If so, should I be using DECIMAL(8,2) instead? I have used float4 previously without seeing this issue when exporting/unloading.
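The behavior can be reproduced outside Redshift: 120.12 has no exact representation in IEEE-754 single precision (the format behind float4), so the nearest representable value is stored instead. A minimal Python sketch (this assumes only standard IEEE-754 single precision, not anything Redshift-specific):

```python
import struct

def as_float32(x: float) -> float:
    # Round-trip a Python float (double precision) through a 32-bit float,
    # which is what storing it in a float4 column effectively does.
    return struct.unpack('f', struct.pack('f', x))[0]

stored = as_float32(120.12)
print(stored)           # 120.12000274658203 — the nearest float32 to 120.12
print(f"{stored:.6f}")  # 120.120003 — matches the unloaded file
```

The unloaded text file simply shows that nearest-float32 value printed to more digits than the original literal had.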

An example of the create table query:

CREATE TABLE exampleTableName (
  id       int identity(1,1),
  accId    varchar(16) encode zstd,
  amount   float4,

  primary key(id)
)
interleaved sortkey(accId);

The unload command:

TO 's3://bucket/prefixFile_'
asked 3 years ago · 60 views
1 Answer
Accepted Answer

Yes, this happens when you use FLOAT for decimal data: FLOAT is a binary floating-point type and cannot store decimal values exactly, so each value is stored as the nearest representable float.

I would generally always recommend using the DECIMAL datatype unless you have an existing application that has an unchangeable requirement for FLOAT, e.g., calculation expects FLOAT and output cannot change.

Additionally, by using DECIMAL you can take advantage of the new AZ64 compression encoding, which reduces the amount of storage needed and improves query performance.
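As a sketch, the table from the question could be redefined with DECIMAL and AZ64 (the precision/scale of `decimal(8,2)` is an assumption based on the example value 120.12):

```sql
CREATE TABLE exampleTableName (
  id       int identity(1,1),
  accId    varchar(16) encode zstd,
  amount   decimal(8,2) encode az64,  -- exact: 120.12 unloads as 120.12

  primary key(id)
)
interleaved sortkey(accId);
```

With DECIMAL, the value is stored and unloaded exactly as written, so no extra digits appear in the S3 file.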

answered 3 years ago
