Zach Brown f57c07381a Go back to having our own scoutfs_block cache
We used to have 16k blocks in our own radix_tree cache.  When we
introduced the simple file block mapping code, it preferred block
size == page size.  That let us remove a bunch of code and reuse all
the kernel's buffer head code.

But it turns out that the buffer heads are just a bit too inflexible.

We'd like to have blocks larger than page size, obviously, but it
turns out there are real functional differences as well.

Resolving the problem of unlocked readers and allocating writers
working with the same blkno is the most powerful example of this.
With our own cache it's trivial to fix: always insert newly allocated
blocks into the cache, so a racing reader sees either the old
instance or the new one.  But solving it with buffer heads requires
expensive and risky locking around the buffer head cache, which can
only support a single physical instance of a given blkno because
there can be multiple blocks per page.
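
To illustrate the insert-always fix, here's a minimal sketch against
the kernel's radix tree.  The names (block_cache, cached_block,
cache_insert_new) are hypothetical and much simpler than the real
scoutfs structures:

  #include <linux/radix-tree.h>
  #include <linux/spinlock.h>
  #include <linux/slab.h>
  #include <linux/atomic.h>
  #include <linux/types.h>

  struct cached_block {
          u64 blkno;
          atomic_t refcount;      /* readers hold refs across use */
          void *data;             /* block contents */
  };

  struct block_cache {
          spinlock_t lock;
          struct radix_tree_root blocks;  /* indexed by blkno */
  };

  /*
   * A writer that has just allocated blkno unconditionally inserts
   * its new block, displacing any stale cached instance.  Unlocked
   * readers that raced with the allocation keep using the old block
   * through their own reference; it's freed on the final put.
   */
  static int cache_insert_new(struct block_cache *cache,
                              struct cached_block *new)
  {
          struct cached_block *old;
          int ret;

          ret = radix_tree_preload(GFP_NOFS);
          if (ret)
                  return ret;

          spin_lock(&cache->lock);
          old = radix_tree_delete(&cache->blocks, new->blkno);
          /* can't fail after a successful preload */
          radix_tree_insert(&cache->blocks, new->blkno, new);
          spin_unlock(&cache->lock);
          radix_tree_preload_end();

          if (old && atomic_dec_and_test(&old->refcount))
                  kfree(old);
          return 0;
  }

Displacing the stale entry is one short locked tree operation and
readers never take the lock; buffer heads offer no equivalent because
the page cache owns the single instance of each blkno.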

So this restores the simple block cache that was removed back in commit
'c8e76e2 scoutfs: use buffer heads'.  There's still work to do to get
this fully functional, but it's worth it.

Signed-off-by: Zach Brown <zab@versity.com>
Reviewed-by: Mark Fasheh <mfasheh@versity.com>
2016-11-16 14:45:07 -08:00