From 2d6d113e03820eca8bbed4f3c44955919dee9768 Mon Sep 17 00:00:00 2001
From: Zach Brown
Date: Wed, 6 Sep 2017 14:34:04 -0700
Subject: [PATCH] scoutfs: continue index walk after lock

We saw inode index queries spinning.  They were finding no cached
entries in their locked region, but the next key in the segments was in
the region.

This can happen if an item has been deleted in the current transaction.
The query won't walk up into the new dirty seq, but it will try to walk
the old seq.  The item will still be in the segments but won't be
visible to item_next because it's marked deleted.  The query will spin,
finding the next stale key to read from and then finding it missing in
the cache.

This is fixed by taking the current coherent cache at its word.  When it
tells us there are no entries, we advance the key that we check the
manifest for to past the locked region.  In this case it'll skip past
the cached deletion item and look for the next key in the segments.

Signed-off-by: Zach Brown
---
 kmod/src/ioctl.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kmod/src/ioctl.c b/kmod/src/ioctl.c
index 80abaf4a..31c07ef9 100644
--- a/kmod/src/ioctl.c
+++ b/kmod/src/ioctl.c
@@ -131,7 +131,18 @@ static long scoutfs_ioc_walk_inodes(struct file *file, unsigned long arg)
 		if (ret == -ENOENT) {
+			/* done if lock covers last iteration key */
+			if (scoutfs_key_compare(&last_key, lock->end) <= 0) {
+				ret = 0;
+				break;
+			}
+
+			/* continue iterating after locked empty region */
+			scoutfs_key_copy(&key, lock->end);
+			scoutfs_key_inc_cur_len(&key);
+			scoutfs_unlock(sb, lock, DLM_LOCK_PR);
+
 			/*
 			 * XXX This will miss dirty items.  We'd need to
 			 * force writeouts of dirty items in our