Question on storage space.

Hello, we are using a db.r7g.8xlarge Aurora PostgreSQL instance (PostgreSQL version 15.4) with storage_type "aurora-iopt1". In one of our use cases, we have a table with ~3 billion rows and ~115 columns, occupying ~1 TB of storage. We expect it to grow to ~90 billion rows in the future, which would occupy 30 TB+. We are planning to partition the table, but even so, at this volume of data we could easily approach the maximum storage limit of ~128 TB per database. So we wanted to check: are there any compression techniques available in Aurora PostgreSQL that can be used to keep the storage footprint to a minimum?

Additionally, what total number of partitions per table gives optimal performance? Or, put differently, what is the optimal size of a single partition that would keep querying and table maintenance easy and trouble-free?

1 Answer

Consider table partitioning to split the large table into multiple smaller child tables based on a column value such as a date. This keeps both queries and maintenance manageable.
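
For illustration, a minimal sketch of declarative range partitioning; the table and column names (events, event_date) are placeholders, not from your schema:

    -- Parent table, range-partitioned by date
    CREATE TABLE events (
        event_id   bigint NOT NULL,
        event_date date   NOT NULL,
        payload    text
    ) PARTITION BY RANGE (event_date);

    -- One partition per month; old months can later be detached and archived
    CREATE TABLE events_2024_01 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
    CREATE TABLE events_2024_02 PARTITION OF events
        FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');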

For optimal performance, aim for partitions in the 100s-of-GB range. Too many small partitions can hurt planning and execution performance, while too few very large partitions make maintenance difficult.

Monitor disk usage by temporary files, and consider increasing maintenance_work_mem (for index builds and VACUUM) and work_mem (for queries that produce large temporary results). Allocating more memory reduces spills to disk.
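
As a sketch, both can be raised per session; the values below are illustrative, not recommendations. Cluster-wide changes in Aurora go through the DB cluster parameter group rather than ALTER SYSTEM:

    -- Per-session overrides (illustrative values only)
    SET work_mem = '256MB';            -- memory per sort/hash operation
    SET maintenance_work_mem = '2GB';  -- index builds, VACUUM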

Aurora PostgreSQL does not support block-level table compression directly, but you can explore columnar storage formats such as Parquet for analytics workloads that require compression.
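
As one option for offloading cold partitions, the aws_s3 extension can export query results to S3 (in text/CSV form; conversion to Parquet would then happen downstream, for example with AWS Glue). This is only a sketch: the bucket, path, and table names below are placeholders:

    -- Requires the aws_s3 extension and an IAM role that allows s3:PutObject
    CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;

    SELECT * FROM aws_s3.query_export_to_s3(
        'SELECT * FROM events_2024_01',                -- a cold partition
        aws_commons.create_s3_uri('my-archive-bucket', -- placeholder bucket
                                  'events/2024-01.csv',
                                  'us-east-1'),
        options := 'format csv'
    );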

Refer to the AWS documentation for the latest storage limits of Aurora. Note that the limit applies per cluster and is independent of the instance class, so moving to a larger instance type adds compute and memory but does not raise the storage ceiling.

Monitor disk usage trends over time to understand growth and proactively plan for capacity needs. Aurora storage scales automatically as data grows, so the main planning concern is staying under the per-cluster maximum documented by AWS.
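
From inside the database, per-partition sizes can be checked with a query like this (the table name is a placeholder):

    -- Total on-disk size of every leaf partition of a partitioned table
    SELECT relid::regclass AS partition,
           pg_size_pretty(pg_total_relation_size(relid)) AS size
    FROM   pg_partition_tree('events')
    WHERE  isleaf
    ORDER  BY pg_total_relation_size(relid) DESC;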

EXPERT
answered 3 months ago
  • Thank you. So does that mean we can't compress historical partitions in Aurora PostgreSQL the way we have the flexibility to do in other databases?

    I saw the documentation, and the storage limit for Aurora PostgreSQL appears to be ~128 TB; we have individual tables that will grow to ~25-30 TB, so we may easily hit that limit.

    I came across a few blogs explaining the TOAST compression technique, which compresses varchar and text columns using pglz or lz4; of the two, lz4 appears to be more effective. That means we should be able to compress all the varchar columns with the lz4 algorithm, which would apply across all partitions (we haven't tried whether it can be limited to a specific historical partition). Can you please advise whether this is an advisable method?
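
    For reference, this is the kind of change we are considering (a minimal sketch; the table and column names are placeholders). Since each partition is its own table, the ALTER can apparently target a single historical partition, and it only affects newly written values that are large enough to be TOASTed:

        -- Session/cluster default for newly compressed values
        SET default_toast_compression = 'lz4';

        -- Or per column, on one partition only (placeholder names);
        -- existing rows keep their old compression (pglz) until the
        -- data itself is rewritten, e.g. copied into a new table
        ALTER TABLE events_2023_01
            ALTER COLUMN payload SET COMPRESSION lz4;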
