2 Answers
Hi,
In Redshift, JSON data can be stored using the SUPER data type. However, SUPER supports up to 1 MB of data for an individual SUPER field or object. For more information, see Ingesting and querying semistructured data in Amazon Redshift.
Hope this helps,
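Since SUPER enforces that 1 MB per-value limit, a pre-flight size check before ingestion can save a failed load. The sketch below is illustrative only; the helper name `fits_in_super` is our own and is not part of any AWS SDK.

```python
import json

# SUPER's documented per-value limit (1 MB), per the answer above.
SUPER_MAX_BYTES = 1 * 1024 * 1024

def fits_in_super(obj) -> bool:
    """Return True if the serialized JSON stays within the 1 MB SUPER limit.

    Hypothetical pre-flight check run client-side before loading a row;
    compact separators match a typical minified JSON payload.
    """
    size = len(json.dumps(obj, separators=(",", ":")).encode("utf-8"))
    return size <= SUPER_MAX_BYTES

small_doc = {"id": 1, "payload": "x" * 100}
print(fits_in_super(small_doc))  # a small document fits
```

Documents that fail this check need to be split or stored differently before they can be loaded into a SUPER column.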
We are encountering a similar issue. We're using the SUPER data type, which has a maximum length of 65K, but the column in the Parquet file we receive has a maximum length of 192K. How should we handle this data? Are there alternative data types that can accommodate such large values?
How big is your JSON data?
Try using the TEXT data type, which is supported in both PostgreSQL and Redshift.