From d42a3115c961468a793b72ea2a48ec4ca2e3203d Mon Sep 17 00:00:00 2001
From: Zach Brown
Date: Thu, 8 Feb 2018 13:45:07 -0800
Subject: [PATCH] scoutfs: fix livelock in item_set_batch

scoutfs_item_set_batch() first tries to populate the item cache with
the range of keys it's going to be modifying.  It does this by walking
the input key range and trying to read any missing regions.

It made a bad assumption that reading from the final present key of a
cached range would read more items into the cache.  That was often the
case when the last present key landed in a segment that contained more
keys.  But if the last present key was at the end of a segment the read
wouldn't make any difference.  It'd keep trying to read that final
present key indefinitely.

The fix is to try to populate the item cache starting with the first
key that's missing from the cache by incrementing the last key that we
found in the cache.

This stopped scoutfs/507 from reliably getting stuck trying to modify
an xattr whose single item happened to land at the end of a segment.

Signed-off-by: Zach Brown
---
 kmod/src/item.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kmod/src/item.c b/kmod/src/item.c
index b5afda23..7216709e 100644
--- a/kmod/src/item.c
+++ b/kmod/src/item.c
@@ -1310,8 +1310,10 @@ int scoutfs_item_set_batch(struct super_block *sb, struct list_head *list,
 		if (check_range(sb, &cac->ranges, range_end, range_end)) {
 			if (scoutfs_key_compare(range_end, last) >= 0)
 				break;
-			/* start reading from hole starting at range_end */
+			/* start reading after the last key we have cached */
+			scoutfs_key_inc(range_end);
 		} else {
+			/* start reading from the missing first */
 			scoutfs_key_copy(range_end, first);
 		}