
There are 3 drops in FreeableMemory, corresponding to the 3 query plan screenshots below.
- 18.3G at 11:47 to 658M at 11:51
- 18.3G at 11:55 to 641M at 12:00
- 18.1G at 12:08 to 6.23G at 12:10
temp_bytes spikes to 172MB at 12:14, which is when FreeableMemory recovers to 18.1G after the 3rd drop. It also coincides with FreeLocalStorage climbing back to 133.687 billion bytes at 12:14 after its drop.
There is only 1 peak of temp_bytes, corresponding to the 3rd drop in FreeableMemory, because the earlier drops did not cause the query to spill to disk.
I observe that disk spills cause FreeLocalStorage to drop and temp_bytes to spike after query completion. More visuals on this in a related question at https://repost.aws/questions/QULdz8JxjXQGubb3SItYOyNw/in-rds-how-are-temp-bytes-and-freelocalstorage-related
FreeLocalStorage started dropping at 12:10 and reached 123.346 billion bytes at 12:13. Note that this is 2 minutes after the 3rd query started at 12:08, which suggests the query tried using memory first before going to disk.
12:10 is also the deepest trough of FreeableMemory, so it looks like once the query spills to disk, memory starts to be released and FreeableMemory climbs again.
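As a sanity check on those readings, the FreeLocalStorage drop works out to roughly 10GB, far larger than the temp_bytes peak. A quick arithmetic sketch (metric values read off the graphs and rounded):

```python
# FreeLocalStorage readings around the 3rd query, in bytes (read off the graphs)
before_spill = 133.687e9  # ~133.7 GB at 12:08, before the spill
trough = 123.346e9        # ~123.3 GB at 12:13, deepest point

drop = before_spill - trough
print(f"FreeLocalStorage drop: {drop / 1e9:.2f} GB")  # 10.34 GB

# temp_bytes peak reported at 12:14, for comparison
temp_bytes_peak = 172e6  # 172 MB
print(f"temp_bytes peak:       {temp_bytes_peak / 1e9:.2f} GB")  # 0.17 GB
```

Why these two differ by almost two orders of magnitude is part of the questions below.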
Below are the 3 query plans with their settings, one for each drop in FreeableMemory.
1. work_mem 4GB max_parallel_workers_per_gather 8

Question
- How are 8 workers quicksorting ~1.76 million kB each (on average) related to the 1st drop above, where FreeableMemory fell by 17.7G? The numbers don't match: 8 * 1.76 = 14.08G, and even counting the leader as a 9th participant (assuming it does work too) does not reach 17.7G.
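Spelling out the arithmetic behind this question (per-worker sort memory read from the plan, treating 1.76 million kB as ~1.76GB; counting the leader as a participant is my assumption):

```python
workers = 8
per_worker_gb = 1.76      # ~1.76 million kB quicksorted per worker, from the plan
observed_drop_gb = 17.7   # 1st FreeableMemory drop

print(f"{workers * per_worker_gb:.2f}")        # 14.08 -> short of 17.7
print(f"{(workers + 1) * per_worker_gb:.2f}")  # 15.84 -> leader included, still short
```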
2. work_mem 4GB max_parallel_workers_per_gather 4
In an attempt to use half the FreeableMemory, and to test whether surviving the 17.7G drop without a crash was a coincidence, I reduced workers from 8 to 4, expecting FreeableMemory usage to drop to half that of the previous query.
Question
- Why did the workers now each take 3.2 million kB instead of staying at 1.76 million kB as in the 1st query? It's as if some mechanism distributes FreeableMemory evenly across the workers.
- How is 4 * 3.2 = 12.8G related to the 17.7G drop (2nd drop above)?
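Laying the two runs side by side shows something worth noting: if the leader is counted as an extra participant, both runs land near 16GB of total sort memory, which would be consistent with a fixed budget being divided evenly among participants (this is my speculation from the numbers, not documented behaviour):

```python
# (workers, per-worker sort memory in GB from the plan) for queries 1 and 2
runs = [(8, 1.76), (4, 3.20)]

for workers, per_worker in runs:
    workers_only = workers * per_worker
    with_leader = (workers + 1) * per_worker  # assuming the leader also sorts
    print(f"{workers} workers: {workers_only:.2f} GB, with leader: {with_leader:.2f} GB")
# 8 workers: 14.08 GB, with leader: 15.84 GB
# 4 workers: 12.80 GB, with leader: 16.00 GB
```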
3. work_mem 2GB max_parallel_workers_per_gather 4
To force a disk spill so that something would show on FreeLocalStorage, I decreased work_mem from 4GB to 2GB.
Question
- Is it expected behaviour that FreeableMemory always drops first, until memory runs out, after which FreeLocalStorage is used and FreeableMemory goes back up?
- Why doesn't the query fail immediately with ERROR: out of memory, DETAIL: Failed on request of size 8388608, the way it does when I use a higher work_mem such as 8GB with 8 workers? Is the query allocating more memory incrementally as it runs, so it only discovers mid-query that it must use disk?
- How is the 1944928kB in the plan related to the 10GB drop in FreeLocalStorage, or to the 172MB of temp_bytes?
- Could I have predicted the 12.1G FreeableMemory drop before the disk spill, from the query plan or from the previous experiments?
- If I want to avoid disk spills, how can I predict the FreeableMemory required based on the query plan? (Presumably work_mem and max_parallel_workers_per_gather also influence this.)
- How is FreeableMemory related to the memory size listed under the instance type specs at https://aws.amazon.com/ec2/instance-types/? If I can answer the previous question, the next step is to scale the instance up to handle the query.
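One rough framing for the last two questions, as a sketch. The upper-bound formula below is my assumption (each participating process, workers plus leader, may use up to work_mem per sort/hash node in the plan), not documented PostgreSQL behaviour, and query 3 shows the observed drop can exceed it:

```python
def estimate_peak_memory_gb(work_mem_gb, workers, sort_nodes=1, leader_participates=True):
    """Upper-bound sketch: every participating process may allocate up to
    work_mem for each sort/hash node in the plan (assumption, not a guarantee)."""
    processes = workers + (1 if leader_participates else 0)
    return processes * work_mem_gb * sort_nodes

# Query 1: work_mem 4GB, 8 workers -> bound 36 GB; observed drop was 17.7 GB
print(estimate_peak_memory_gb(4, 8))  # 36
# Query 3: work_mem 2GB, 4 workers -> bound 10 GB; observed drop was ~12.1 GB,
# so allocations outside work_mem must also count toward FreeableMemory
print(estimate_peak_memory_gb(2, 4))  # 10
```

On the instance-sizing question: FreeableMemory is not the full instance memory but what remains after shared_buffers and engine/OS overhead, so any estimate like the above would need that headroom added on top when picking an instance type.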