I was reading the documentation on EBS volume constraints, and
I ended on this section: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_constraints.html#block_size
The paragraph itself is reasonable: it mentions that the "industry default block size" is 4 KiB and that some workloads may benefit from a smaller or larger block size. But then it shows a table, introduced as:
The following table shows storage capacity as a function of block size:
| Block size | Max volume size |
|---|---|
| 4 KiB (default) | 16 TiB |
| 8 KiB | 32 TiB |
| 16 KiB | 64 TiB |
| 32 KiB | 128 TiB |
| 64 KiB (maximum) | 256 TiB |
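One possible reading of the table (my assumption, not something the page states explicitly) is that each row is simply block size × 2^32, i.e. the capacity addressable with a 32-bit block number. A quick arithmetic check reproduces every row:

```python
# Hypothesis (my assumption): each "max volume size" in the table is
# block_size * 2**32, the capacity reachable with 32-bit block numbers.
KIB = 2**10
TIB = 2**40

for kib in (4, 8, 16, 32, 64):
    max_bytes = kib * KIB * 2**32
    print(f"{kib:>2} KiB blocks -> {max_bytes // TIB} TiB")
# prints 16, 32, 64, 128 and 256 TiB -- matching the table exactly
```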
This makes it sound as if I must use at least an 8 KiB block size if I want a filesystem larger than 16 TiB, which I don't think is the case for, say, Linux ext4/xfs - I tried with a couple of IO2 disks and everything went fine.
I wonder what this table is actually an indication of. Am I missing something?
I see your point.
But I provisioned a single IO2 EBS volume of 20 TiB, and a single volume can go as high as 64 TiB. I still cannot see the correlation between the block size on the left and the max volume size on the right.
There is the "EBS block size", which is how the volume presents itself to the OS (described as "512-byte sectors" on the same page). And there is the "industry default for logical data blocks", which I assume refers to filesystem block sizes, which as far as I know have to go hand in hand with the Linux kernel page size, for instance.
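For what it's worth, the filesystem block size is easy to inspect from userspace, and it is independent of the 512-byte sectors the volume advertises. A minimal sketch (the mount point `/` is just an example; on a typical ext4/xfs root the value is 4096):

```python
import os

# Report the filesystem block size for a given mount point.
# This is the filesystem-level block size, not the 512-byte
# sector size the block device presents to the OS.
st = os.statvfs("/")
print("filesystem block size:", st.f_bsize, "bytes")
```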
So what do they mean when they write that "The EBS-imposed limit on volume size (64 TiB) is currently equal to the maximum size enabled by 16-KiB data blocks"?