Mirror of https://github.com/versity/scoutfs.git (synced 2026-01-05 11:45:09 +00:00)
f57c07381a1ccd9b9c598935212402d4f5aff6f3
We used to have 16k blocks in our own radix_tree cache. When we introduced the simple file block mapping code it preferred to have block size == page size. That let us remove a bunch of code and reuse all the kernel's buffer head code.

But it turns out that buffer heads are just a bit too inflexible. We'd like to have blocks larger than page size, obviously, but there are also real functional differences. Resolving the problem of unlocked readers and allocating writers working with the same blkno is the clearest example. It's trivial to fix by always inserting newly allocated blocks into the cache. Solving it with buffer heads, though, requires expensive and risky locking around the buffer head cache, which can only support a single physical instance of a given blkno because there can be multiple blocks per page.

So this restores the simple block cache that was removed back in commit 'c8e76e2 scoutfs: use buffer heads'. There's still work to do to get this fully functional, but it's worth it.

Signed-off-by: Zach Brown <zab@versity.com>
Reviewed-by: Mark Fasheh <mfasheh@versity.com>
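The fix the message describes is easier to see as code. Below is a minimal sketch, assuming hypothetical names (blk_cache, cached_block, blk_cache_get) and only the stock kernel radix tree, spinlock, and slab APIs; it is not the actual scoutfs implementation. The point is that the only path that creates a block also inserts it into the cache, so a lookup and an allocating insert racing on the same blkno converge on one physical block.

#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/radix-tree.h>
#include <linux/slab.h>
#include <linux/atomic.h>

/* hypothetical cache of blocks indexed by blkno */
struct blk_cache {
	spinlock_t lock;
	struct radix_tree_root blocks;
};

/* hypothetical cached block; data allocation and I/O are elided */
struct cached_block {
	u64 blkno;
	atomic_t refcount;
	void *data;
};

static void blk_cache_init(struct blk_cache *cache)
{
	spin_lock_init(&cache->lock);
	INIT_RADIX_TREE(&cache->blocks, GFP_ATOMIC);
}

/*
 * Return a referenced block for blkno, inserting a newly allocated
 * block if none is cached.  Every allocation goes through the cache,
 * so an unlocked reader and an allocating writer racing on the same
 * blkno always end up with the same physical block.
 */
static struct cached_block *blk_cache_get(struct blk_cache *cache,
					  u64 blkno, gfp_t gfp)
{
	struct cached_block *bl;
	struct cached_block *new = NULL;
	int ret;

	for (;;) {
		spin_lock(&cache->lock);
		bl = radix_tree_lookup(&cache->blocks, blkno);
		if (bl)
			atomic_inc(&bl->refcount);
		spin_unlock(&cache->lock);
		if (bl) {
			kfree(new);
			return bl;
		}

		if (!new) {
			new = kzalloc(sizeof(*new), gfp);
			if (!new)
				return NULL;
			new->blkno = blkno;
			atomic_set(&new->refcount, 1);
		}

		/* preload tree nodes so insertion under the lock can't fail */
		if (radix_tree_preload(gfp)) {
			kfree(new);
			return NULL;
		}

		spin_lock(&cache->lock);
		ret = radix_tree_insert(&cache->blocks, blkno, new);
		spin_unlock(&cache->lock);
		radix_tree_preload_end();

		if (ret == 0)
			return new;
		/* -EEXIST: another task inserted first, retry the lookup */
	}
}

Reference dropping and eviction are left out, and the sketch assumes a 64-bit kernel where a u64 blkno fits the radix tree's unsigned long index. A loser of the insert race sees -EEXIST, frees its block, and retries the lookup, picking up the winner's block.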
Languages
  C         87.2%
  Shell      9.1%
  Roff       2.5%
  TeX        0.9%
  Makefile   0.3%