mirror of
https://github.com/versity/scoutfs.git
synced 2026-02-08 19:50:08 +00:00
3818f727767fbd15d542eb6c7b4b99f255bea5c0
Iterating over items backwards would result in a lot of extra work. When an item isn't present in the cache we go and search the segments for the item. Once we find the item in its stack of segments we also read in and cache all the items from the missing item to the end of all the segments. This reduced complexity a bit but had very bad worst case performance.

If you read items backwards you constantly get cache misses, each of which searches the segments for the item and then tries to cache everything to the end of the segment. You're essentially working uncached and doing quite a lot of work to get that single missed item cached each time.

This adds the complexity to cache all the items in the segment stack around the missed item, not just after the missed item. Now reverse iteration hits cached items for everything in the segment after the initial miss.

To make this work we have to pass the full lock coverage range to the item reading path. Then we search the manifest for segments that contain the missing key and use those segments' ranges to determine the full range of items that we'll cache. Then we again search the manifest for all the level 0 segments that intersect that range.

That range extension is only for cached reads; it doesn't apply to the 'next' call, which ignores caching. That operation is getting different enough that we pull it out into its own function.

Signed-off-by: Zach Brown <zab@versity.com>
Languages: C 87%, Shell 9.3%, Roff 2.5%, TeX 0.8%, Makefile 0.4%