Compare commits

...

1 Commit

Auke Kok
6b682b6651 Swap meta allocators with much more reserve.
We increase the buffer from 2x to 3x the minimum number of reserved
blocks, and change the algorithm that determines when the meta
allocators are swapped.

The old algorithm swaps the allocators as soon as _freed is even
slightly larger than _avail. While this is the simplest approach, in
practice it means that once we hit ENOSPC conditions and reach our low
water mark, we swap on almost every iteration.

In our testing, we regularly see failures because we effectively
starve both allocators, slowly draining them block by block while
trying to do work. That work, of course, requires draining yet more
blocks to commit changes. The cycle doesn't end until both allocators
are almost completely drained and too few blocks remain to do any real
work.

The new algorithm will not swap allocators unless _freed is 50%
larger than _avail.

The outcome is that, during meta space pressure, _avail is still
allowed to slowly drain down to roughly the same levels as before.
However, we then swap to _freed, which is now 1.5x larger.

This lets us do a whole chunk of work without needing to swap. By
draining _avail for longer, we allow work to commit and recycle blocks
back into _freed much more effectively.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-02-11 17:04:28 -05:00


@@ -668,14 +668,16 @@ static void scoutfs_server_commit_func(struct work_struct *work)
 	 * the reserved blocks after having filled the log trees's avail
 	 * allocator during its transaction. To avoid prematurely
 	 * setting the low flag and causing enospc we make sure that the
-	 * next transaction's meta_avail has 2x the reserved blocks so
+	 * next transaction's meta_avail has 3x the reserved blocks so
 	 * that it can consume a full reserved amount and still have
 	 * enough to avoid enospc. We swap to freed if avail is under
-	 * the buffer and freed is larger.
+	 * the buffer and freed is larger by 50%. This results in much less
+	 * swapping overall and allows the pools to refill naturally.
 	 */
 	if ((le64_to_cpu(server->meta_avail->total_len) <
-	     (scoutfs_server_reserved_meta_blocks(sb) * 2)) &&
-	    (le64_to_cpu(server->meta_freed->total_len) >
+	     (scoutfs_server_reserved_meta_blocks(sb) * 3)) &&
+	    ((le64_to_cpu(server->meta_freed->total_len) +
+	      (le64_to_cpu(server->meta_freed->total_len) >> 1)) >
 	     le64_to_cpu(server->meta_avail->total_len)))
 		swap(server->meta_avail, server->meta_freed);
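
For readers skimming the hunk, a minimal userspace sketch of the old
and new swap conditions as plain predicates may help. This is not code
from the scoutfs tree: should_swap_old, should_swap_new, and the bare
u64 block counts are illustrative stand-ins for the server's allocator
structures and their le64 fields.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

/* Old check: swap once avail is under 2x the reserve and freed is larger. */
static bool should_swap_old(u64 avail, u64 freed, u64 reserved)
{
	return avail < reserved * 2 && freed > avail;
}

/*
 * New check: avail must be under 3x the reserve, and freed scaled by
 * 1.5x (freed + freed/2) must exceed avail, mirroring the shifted
 * comparison in the hunk above.
 */
static bool should_swap_new(u64 avail, u64 freed, u64 reserved)
{
	return avail < reserved * 3 && (freed + (freed >> 1)) > avail;
}

int main(void)
{
	/* Purely illustrative block counts. */
	u64 reserved = 1000, avail = 2500, freed = 1800;

	printf("old swap: %d, new swap: %d\n",
	       should_swap_old(avail, freed, reserved),
	       should_swap_new(avail, freed, reserved));
	return 0;
}

Both predicates mirror the if-conditions in the diff: the old one swaps
as soon as freed exceeds avail while avail is under the 2x buffer,
while the new one uses a 3x buffer and compares avail against freed
scaled by 1.5x.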