mirror of
https://github.com/versity/scoutfs.git
synced 2026-02-07 19:20:44 +00:00
6afeb978028e1b6efcac93d6af945ee201ba04e7
Our first attempt at storing file data put it in items. That was easy to implement but won't be acceptable in the long term: the price of the power of LSM indexing is compaction overhead, which is acceptable for fine-grained metadata but totally unacceptable for bulk file data.

This switches to storing file data in separate block allocations which are referenced by extent items. The bulk of the change is the mechanics of working with extents: high level callers add or remove logical extents, and underlying mechanisms insert, merge, or split the items that the extents are stored in.

We have three types of extent items. The primary type maps logical file regions to physical block extents. The other two store free extents per-node so that clients don't create lock and LSM contention as they try to allocate extents. To fill those per-node free extents we add messages that communicate free extents, in the form of lists of segment allocations, from the server.

We don't do any fancy multi-block allocation yet. We only allocate blocks in get_blocks as writes find unmapped blocks. We do use per-task cursors to cache block allocation positions so that these single-block allocations are very likely to merge into larger extents as tasks stream writes.

This is just the first chunk of the extent work that's coming. A later patch adds offline flags and fixes up the change nonsense that seemed like a good idea here.

The final moving part is that we initiate writeback on all newly allocated extents before we commit the metadata that references the new blocks. We do this with our own dirty inode tracking because the high level vfs methods are unusably slow in some upstream kernels (they walk all inodes, not just dirty inodes).

Signed-off-by: Zach Brown <zab@versity.com>
Languages
C 87% · Shell 9.3% · Roff 2.5% · TeX 0.8% · Makefile 0.4%