scoutfs: item cache reads shouldn't clobber dirty

We read into the item cache when we hit a region that isn't cached.
Inode index items are created without cached range coverage.  It's easy
to trigger read attempts that overlap with dirty inode index items.

insert_item() had a bug where its notion of overwriting only applied to
logical presence.  It always let an insertion overwrite an existing item
if that item was a deletion.

But that only makes sense for new item creation.  Item cache population
can't do this.  In the inode index case a read can replace a correct
dirty deletion item with the stale pre-deletion item it just read.  This
clobbers the deletions and leaks the old inode index item versions.
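
To make the clobber concrete, here is a minimal user-space sketch of the
old rule; it is not scoutfs code, and toy_item and toy_insert are
invented names.  A cache-populating read finds a dirty deletion item for
its key and, because deletion items were always overwritable, replaces
it with the stale on-disk version:

/* toy model of the old insert rule; not scoutfs code */
#include <stdbool.h>
#include <stdio.h>

struct toy_item {
	char val[16];
	bool deletion;	/* a dirty deletion waiting for writeback */
	bool dirty;
};

/* old rule: an existing deletion item may always be replaced */
static int toy_insert(struct toy_item *existing, struct toy_item *ins,
		      bool overwrite)
{
	if (!existing->deletion && !overwrite)
		return -1;		/* -EEXIST */
	*existing = *ins;		/* clobbers the dirty deletion */
	return 0;
}

int main(void)
{
	struct toy_item cached = { .deletion = true, .dirty = true };
	struct toy_item from_disk = { .val = "old version" };

	/* a read populating the cache finds the key and "succeeds" */
	if (!toy_insert(&cached, &from_disk, false))
		printf("deletion lost; cache holds \"%s\"\n", cached.val);
	return 0;
}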

So teach the item insertion for caching to never, ever, replace an
existing item.  This fixes assertion failures from trying to
immediately walk meta seq items after creating a few thousand dirty
entries.

Signed-off-by: Zach Brown <zab@versity.com>
Author: Zach Brown
Date:   2017-06-17 09:51:51 -07:00
Parent: bf7b3ac506
Commit: ef551776ae


@@ -394,14 +394,24 @@ static void become_deletion_item(struct super_block *sb,
 }
 
 /*
- * Try to insert the given item. If there's already a non-deletion item
- * with the insertion key then return -EEXIST. An existing deletion
- * item is replaced and freed.
+ * Try to add an item to the cache. The caller is responsible for
+ * marking the newly inserted item dirty.
  *
- * The caller is responsible for marking the newly inserted item dirty.
+ * We distinguish between callers trying to insert a new logical item
+ * and others trying to populate the cache.
+ *
+ * New logical item creators have made sure the items are participating
+ * in consistent locking.  It's safe for them to clobber dirty deletion
+ * items with a new version of the item.
+ *
+ * Cache readers can only populate items that weren't present already.
+ * In particular, they absolutely cannot replace dirty old inode index
+ * items with the old version that was just deleted (outside of range
+ * caching and locking consistency).
  */
 static int insert_item(struct super_block *sb, struct item_cache *cac,
-		       struct cached_item *ins, bool overwrite)
+		       struct cached_item *ins, bool logical_overwrite,
+		       bool cache_populate)
 {
 	struct rb_root *root = &cac->items;
 	struct cached_item *item;
@@ -426,7 +436,8 @@ restart:
 			item->dirty |= RIGHT_DIRTY;
 			node = &(*node)->rb_right;
 		} else {
-			if (!item->deletion && !overwrite)
+			if (cache_populate ||
+			    (!item->deletion && !logical_overwrite))
 				return -EEXIST;
 
 			/* sadly there's no augmented replace */
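
Pulled out of the rb-tree descent, the new conflict test reads as a pure
predicate.  A sketch for illustration; insert_conflicts is a made-up
name that isn't in the patch:

static bool insert_conflicts(bool existing_is_deletion,
			     bool logical_overwrite, bool cache_populate)
{
	/*
	 * cache_populate  existing item  result
	 * --------------  -------------  -------------------------------
	 * true            anything       -EEXIST, never replace
	 * false           live item      -EEXIST unless logical_overwrite
	 * false           deletion item  replace it
	 */
	return cache_populate ||
	       (!existing_is_deletion && !logical_overwrite);
}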
@@ -986,7 +997,7 @@ int scoutfs_item_create(struct super_block *sb, struct scoutfs_key_buf *key,
 		return -ENOMEM;
 
 	spin_lock_irqsave(&cac->lock, flags);
-	ret = insert_item(sb, cac, item, false);
+	ret = insert_item(sb, cac, item, false, false);
 	if (!ret) {
 		scoutfs_inc_counter(sb, item_create);
 		mark_item_dirty(sb, cac, item);
@@ -1071,7 +1082,7 @@ int scoutfs_item_insert_batch(struct super_block *sb, struct list_head *list,
 	list_for_each_entry_safe(item, tmp, list, entry) {
 		list_del_init(&item->entry);
-		if (insert_item(sb, cac, item, false))
+		if (insert_item(sb, cac, item, false, true))
 			list_add(&item->entry, list);
 	}
@@ -1188,7 +1199,7 @@ int scoutfs_item_set_batch(struct super_block *sb, struct list_head *list,
 	/* insert the caller's items, overwriting any existing */
 	list_for_each_entry_safe(item, tmp, list, entry) {
 		list_del_init(&item->entry);
-		insert_item(sb, cac, item, true);
+		insert_item(sb, cac, item, true, false);
 		mark_item_dirty(sb, cac, item);
 	}
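
For reference, the three call sites map onto the two flags like so,
straight from the hunks above:

caller                      logical_overwrite  cache_populate  may replace
scoutfs_item_create         false              false           deletion items only
scoutfs_item_insert_batch   false              true            nothing; -EEXIST items go back on the caller's list
scoutfs_item_set_batch      true               false           any existing item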