Fix deadlock between lock invalidate and evict

We've had a long-standing deadlock between lock invalidation and
eviction.  Invalidating a lock wants to look up inodes and drop their
resources while blocking locks.  Eviction wants to get a lock to perform
final deletion while the inode has I_FREEING set, which blocks lookups.

We only saw this deadlock a handful of times in all of the time we've
run the code, but it's much more common now that we're acquiring locks
in iput to test whether nlink is zero instead of only when nlink is
zero.  I see unmount hang regularly when testing final inode deletion.

This adds a lookup variant for invalidation which refuses to return
freeing inodes so they won't be waited on.  Once they're freeing they
can't be seen by future lock users, so they don't need to be
invalidated.  This keeps the lock invalidation promise and avoids
sleeping on freeing inodes, which created the deadlock.

Signed-off-by: Zach Brown <zab@versity.com>
Author: Zach Brown <zab@versity.com>
Date:   2021-03-23 10:20:34 -07:00
parent dba88705f7
commit 4389c73c14
3 changed files with 28 additions and 3 deletions

@@ -672,6 +672,28 @@ struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino)
 	return ilookup5(sb, ino, scoutfs_iget_test, &ino);
 }
 
+static int iget_test_nofreeing(struct inode *inode, void *arg)
+{
+	return !(inode->i_state & I_FREEING) && scoutfs_iget_test(inode, arg);
+}
+
+/*
+ * There's a natural risk of a deadlock between lock invalidation and
+ * eviction.  Invalidation blocks locks while looking up inodes and
+ * invalidating local caches.  Inode eviction gets a lock to check final
+ * inode deletion while the inode is marked FREEING which blocks
+ * lookups.
+ *
+ * We have a lookup variant which doesn't return I_FREEING inodes
+ * instead of waiting on them.  If an inode has made it to I_FREEING
+ * then it doesn't have any local caches that are reachable and the lock
+ * invalidation promise is kept.
+ */
+struct inode *scoutfs_ilookup_nofreeing(struct super_block *sb, u64 ino)
+{
+	return ilookup5(sb, ino, iget_test_nofreeing, &ino);
+}
+
 struct inode *scoutfs_iget(struct super_block *sb, u64 ino)
 {
 	struct scoutfs_lock *lock = NULL;

@@ -79,6 +79,7 @@ int scoutfs_orphan_inode(struct inode *inode);
 struct inode *scoutfs_iget(struct super_block *sb, u64 ino);
 struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino);
+struct inode *scoutfs_ilookup_nofreeing(struct super_block *sb, u64 ino);
 void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
 				  u32 minor, u64 ino);

@@ -155,8 +155,10 @@ static void lock_inv_iput_worker(struct work_struct *work)
 }
 
 /*
- * invalidate cached data associated with an inode whose lock is going
- * away.
+ * Invalidate cached data associated with an inode whose lock is going
+ * away. We ignore inodes with I_FREEING instead of waiting on them to
+ * avoid a deadlock; if they're freeing then they won't be visible to
+ * future lock users and we don't need to invalidate them.
  *
  * We try to drop cached dentries and inodes covered by the lock if they
  * aren't referenced. This removes them from the mount's open map and
@@ -176,7 +178,7 @@ static void invalidate_inode(struct super_block *sb, u64 ino)
 	struct scoutfs_inode_info *si;
 	struct inode *inode;
 
-	inode = scoutfs_ilookup(sb, ino);
+	inode = scoutfs_ilookup_nofreeing(sb, ino);
 	if (inode) {
 		si = SCOUTFS_I(inode);