Currently the first inode number that can be allocated directly follows
the root inode. This means the first batch of allocated inodes ends up
in the same lock group as the root inode.
The root inode is a bit special. It is always hot as absolute path
lookups and inode-to-path resolution always read directory entries from
the root.
Let's try aligning the first free inode number to the next inode lock
group boundary. This will stop work in those inodes from necessarily
conflicting with work in the root inode.
Signed-off-by: Zach Brown <zab@versity.com>
The number of inodes that each inode group lock covers is a balance
between minimizing lock overhead and contention. With 128 inodes per
lock we're still doing quite a bit of bulk creates, finds, etc. per
lock, but we could reduce the risk of contention a bit. Let's see how it
works out.
Signed-off-by: Zach Brown <zab@versity.com>
We had some logic to try and delay lock invalidation while the lock was
still actively in use. This was trying to reduce the cost of
pathological lock conflict cases but it had some severe fairness
problems.
It was first introduced to deal with bad patterns in userspace that no
longer exist and it was built on top of the LSM transaction machinery
that also no longer exists. It hasn't aged well.
Instead of introducing invalidation latency in the hope that it leads
to more batched work, which it can't always guarantee, let's aim more
towards reducing latency in all parts of the write-invalidate-read path
and towards reducing contention in the first place.
Signed-off-by: Zach Brown <zab@versity.com>
A lock that is undergoing invalidation is put on a list of locks in the
super block: incoming invalidation requests add locks to the list, and
while locks are being invalidated they're temporarily moved to a private
list.
To support a request arriving while the lock is being processed we
carefully manage the invalidation fields in the lock between the
invalidation worker and the incoming request. The worker correctly
noticed that a new invalidation request had arrived but it left the lock
on its private list instead of putting it back on the invalidation list
for further processing. The lock was unreachable, wouldn't get
invalidated, and caused everyone trying to use the lock to block
indefinitely.
When the worker sees another request arrive for an invalidating lock it
needs to move the lock from the private list back to the invalidation
list.
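For illustration, the shape of the worker's decision might look roughly
like the following; the struct, list, and flag names here are
hypothetical stand-ins, not the real scoutfs fields:

    spin_lock(&linfo->inv_lock);
    if (lck->inv_requested) {
        /*
         * A new invalidation request arrived while we were working.
         * The bug was leaving the lock stranded on the private list
         * here; move it back to the shared invalidation list so the
         * worker sees it again.
         */
        lck->inv_requested = false;
        list_move_tail(&lck->inv_entry, &linfo->inv_list);
    } else {
        list_del_init(&lck->inv_entry);
        lck->inv_pending = false;
    }
    spin_unlock(&linfo->inv_lock);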
Signed-off-by: Zach Brown <zab@versity.com>
Previously we added an ilookup variant that ignored I_FREEING inodes
to avoid a deadlock between lock invalidation (lock->I_FREEING) and
eviction (I_FREEING->lock).
Now we're seeing similar deadlocks between eviction (I_FREEING->lock)
and fh_to_dentry's iget (lock->I_FREEING).
I think it's reasonable to ignore all inodes with I_FREEING set when
we're using our _test callback in ilookup or iget. We can remove the
_nofreeing ilookup variant and move its I_FREEING test into the
iget_test callback provided to both ilookup and iget.
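A sketch of what the shared test callback could look like after the
change; the function and helper names are illustrative rather than the
exact ones in the tree:

    static int scoutfs_iget_test(struct inode *inode, void *arg)
    {
        const u64 *ino = arg;

        /*
         * Treat inodes that are being torn down as if they weren't in
         * the cache at all so neither ilookup5() nor iget5_locked()
         * waits on an I_FREEING inode.
         */
        if (inode->i_state & I_FREEING)
            return 0;

        return scoutfs_ino(inode) == *ino;
    }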
Callers will get the same result, it will just happen without waiting
for a previously I_FREEING inode to leave the cache. ilookup will return
NULL instead of waiting, and iget will allocate and start initializing a
new instance of the inode and insert it alongside the previous instance.
We don't have inode number re-use so we don't have the problem where a
newly allocated inode number is relying on inode cache serialization to
not find a previously allocated inode that is being evicted.
This change does allow for concurrent iget of an inode number that is
being deleted on a local node. This could happen in fh_to_dentry with a
raw inode number. But this was already a problem between mounts because
they don't have a shared inode cache to serialize them. Once we fix
that between nodes, we fix it on a single node as well.
Signed-off-by: Zach Brown <zab@versity.com>
The vfs often calls filesystem methods with i_mutex held. This creates
a natural ordering of i_mutex outside of cluster locks. The file
aio_read method acquired i_mutex after its cluster lock, creating a
deadlock with other vfs methods like setattr.
The acquisition of i_mutex after the cluster lock came from the pattern
where we use the per-task lock to discover if we're the first
user of the lock in a call chain. Readpage has to do this, but file
aio_read doesn't. It should never be called recursively. So we can
acquire the i_mutex outside of the cluster lock and warn if we ever are
called recursively.
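Roughly, the reordering looks like this; the cluster lock calls and the
recursion check are hypothetical stand-ins for the real scoutfs
helpers:

    static ssize_t scoutfs_file_aio_read(struct kiocb *iocb,
                                         const struct iovec *iov,
                                         unsigned long nr_segs, loff_t pos)
    {
        struct inode *inode = file_inode(iocb->ki_filp);
        struct scoutfs_lock *lock = NULL;
        ssize_t ret;

        /* aio_read is never called recursively, so complain if it is */
        WARN_ON_ONCE(task_holds_cluster_lock(inode));

        /* i_mutex first, matching the vfs ordering, then the cluster lock */
        mutex_lock(&inode->i_mutex);
        ret = scoutfs_lock_inode(inode, SCOUTFS_LOCK_READ, &lock);
        if (ret == 0) {
            ret = generic_file_aio_read(iocb, iov, nr_segs, pos);
            scoutfs_unlock_inode(inode, lock);
        }
        mutex_unlock(&inode->i_mutex);

        return ret;
    }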
Signed-off-by: Zach Brown <zab@versity.com>
When move blocks is staging it requires an overlapping offline extent to
cover the entire region to move.
It performs the stage one extent at a time. If the source extents are
fragmented it will modify each of them in turn across the region.
When looking for the extent to match the source extent it searched from
the iblock at the start of the whole operation, not the start of the
source extent it's matching. This meant that it would find the first
extent it had just modified, which would no longer be offline, and would
return -EINVAL.
The fix is to have it search from the logical start of the extent it's
trying to match, not the start of the region.
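A condensed sketch of the staging loop and the one-line nature of the
fix; the extent types, flags, and helpers here are invented for
illustration, not the real move_blocks code:

    /* for each fragmented source extent overlapping the region */
    while (next_source_extent(src_inode, pos, &src) == 0) {
        /*
         * The bug: searching from the start of the whole region
         * (args->iblock) finds the destination extent we already
         * staged.  Search from the logical start of the source
         * extent we're matching instead.
         */
        ret = find_overlapping_extent(dst_inode, src.start, &dst);
        if (ret || !(dst.flags & EXTENT_FL_OFFLINE))
            return -EINVAL;

        ret = stage_extent(&src, &dst);
        if (ret)
            return ret;

        pos = src.start + src.len;
    }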
Signed-off-by: Zach Brown <zab@versity.com>
The client's incoming lock invalidation request handler triggers a
BUG_ON if it gets a request for a lock that is already processing a
previous invalidation request. The server is supposed to only send
one request at a time.
The problem is that the batched invalidation request handling will send
responses outside of spinlock coverage before reacquiring the lock and
finishing processing once the response send has been successful.
This gives a window for another invalidation request to arrive after the
response was sent but before the invalidation finished processing. This
triggers the bug.
The fix is to mark the lock such that we can recognize a valid second
request arriving after we send the response but before we finish
processing. If it arrives we'll continue invalidation processing with
the arguments from the new request.
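The handler change might be shaped roughly like this; the flag and
argument names are made up for the sketch:

    spin_lock(&lck->spinlock);
    if (lck->inv_in_progress) {
        /*
         * The previous request's response has been sent but its
         * processing hasn't finished.  This is the valid window the
         * BUG_ON() used to trip on: record the new arguments and let
         * the in-flight processing continue with them.
         */
        lck->inv_net_args = *args;
        lck->inv_rearmed = true;
        spin_unlock(&lck->spinlock);
        return 0;
    }
    lck->inv_in_progress = true;
    lck->inv_net_args = *args;
    spin_unlock(&lck->spinlock);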
Signed-off-by: Zach Brown <zab@versity.com>
Lock teardown during unmount involves first calling shutdown and then
destroy. The shutdown call is meant to ensure that it's safe to tear
down the client network connections. Once shutdown returns locking is
promising that it won't call into the client to send new lock requests.
The current shutdown implementation is very heavy-handed and shuts down
everything. This creates a deadlock. After calling lock shutdown, the
client will send its farewell and wait for a response. The server might
not send the farewell response until other mounts have unmounted if our
client is in the server's mount. In this case we still have to be
processing lock invalidation requests to allow other unmounting clients
to make forward progress.
This is reasonably easy and safe to do. We only use the shutdown flag
to stop lock calls that would change lock state and send requests. We
don't have it stop incoming request processing in the work queueing
functions. It's safe to keep processing incoming requests between
_shutdown and _destroy because the requests already come in through the
client. As the client shuts down it will stop calling us.
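In outline, the narrower shutdown might look like the following, with
illustrative names; only outgoing state changes check the flag while
incoming request work keeps being queued until _destroy:

    /* calls that change lock state and send requests stop at shutdown */
    static int lock_and_request(struct lock_info *linfo,
                                struct scoutfs_lock *lck)
    {
        if (linfo->shutdown)
            return -ESHUTDOWN;

        return send_lock_request(linfo, lck);  /* goes through the client */
    }

    /* incoming invalidation requests are still processed between
     * _shutdown and _destroy; the client stops calling us as it exits */
    static void queue_invalidate_request(struct lock_info *linfo,
                                         struct scoutfs_lock *lck)
    {
        queue_work(linfo->workq, &lck->inv_work);
    }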
Signed-off-by: Zach Brown <zab@versity.com>
Even though we can pass in gfp flags to vmalloc it eventually calls pte
alloc functions which ignore the caller's flags and use user gfp flags.
This risks reclaim re-entering fs paths during allocations in the block
cache. These allocations that allowed reclaim deep in the fs were
causing lockdep to add RECLAIM dependencies between locks and holler
about deadlocks.
We apply the same pattern that xfs does for disabling reclaim while
allocating vmalloced block payloads. Setting PF_MEMALLOC_NOIO causes
reclaim in that task to clear __GFP_IO and __GFP_FS, regardless of the
individual allocation flags in the task, preventing recursion.
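The pattern described above, sketched with the kernel's memalloc_noio
helpers around a vmalloc'd payload; the wrapper function itself is
hypothetical:

    #include <linux/sched.h>
    #include <linux/vmalloc.h>

    static void *block_payload_alloc(size_t size)
    {
        unsigned int noio_flags;
        void *data;

        /*
         * The pte allocations under __vmalloc() ignore our gfp flags,
         * so mark the whole task NOIO: any reclaim it enters will have
         * __GFP_IO and __GFP_FS cleared and can't recurse into the fs.
         */
        noio_flags = memalloc_noio_save();
        data = __vmalloc(size, GFP_NOFS, PAGE_KERNEL);
        memalloc_noio_restore(noio_flags);

        return data;
    }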
Signed-off-by: Zach Brown <zab@versity.com>
The shared recovery layer outputs different messages than when it ran
only for lock_recovery in the lock server.
Signed-off-by: Zach Brown <zab@versity.com>
Locks have a bunch of state that reflects concurrent processing.
Testing that state determines when it's safe to free a lock because
nothing is going on.
During unmount we abruptly stop processing locks. Unmount will send a
farewell to the server which will remove all the state associated with
the client that's unmounting for all its locks, regardless of the state
the locks were in.
The client unmount path has to clean up the interrupted lock state and
free it, carefully avoiding assertions that would otherwise indicate
that we're freeing used locks. The move to async lock invalidation
forgot to clean up the invalidation state. Previously a synchronous
work function would set and clear invalidate_pending while it was
running. Once we finished waiting for it invalidate_pending would be
clear. The move to async invalidation work meant that we can still have
invalidate_pending with no work executing. Lock destruction removed
locks from the invalidation list but forgot to clear the
invalidate_pending flag.
This triggered assertions during unmount that were otherwise harmless.
There was no other use of the lock, we just forgot to clean up the lock
state.
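The missing cleanup is small; in sketch form, with illustrative list and
struct names around the invalidate_pending flag the commit describes:

    spin_lock(&linfo->inv_lock);
    list_for_each_entry_safe(lck, tmp, &linfo->inv_list, inv_entry) {
        list_del_init(&lck->inv_entry);
        lck->invalidate_pending = false;  /* the forgotten step */
    }
    spin_unlock(&linfo->inv_lock);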
Signed-off-by: Zach Brown <zab@versity.com>
The data_info struct holds the data allocator that is filled by
transactions as they commit. We have to free it after we've shut down
transactions. It's more like the forest in this regard so we move its
destruction down by the forest to group similar behaviour.
Signed-off-by: Zach Brown <zab@versity.com>
Shutting down the lock client waits for invalidation work and prevents
future work from being queued. We're currently shutting down the
subsystems that lock calls before lock itself, leading to crashes if we
happen to have invalidations executing as we unmount.
Shutting down locking before its dependencies fixes this. This was hit
in testing of the inode deletion fixes because they created the perfect
race: acquiring locks during unmount made it likely that the server
would send invalidations to one mount on behalf of another as they both
unmounted.
Signed-off-by: Zach Brown <zab@versity.com>
Shutting down the transaction during unmount relied on the vfs unmount
path to perform a sync of any remaining dirty transaction. There are
ways that we can dirty a transaction during unmount after it calls
the fs sync, so we try to write any remaining dirty transaction before
shutting down.
Signed-off-by: Zach Brown <zab@versity.com>
We've had a long-standing deadlock between lock invalidation and
eviction. Invalidating a lock wants to lookup inodes and drop their
resources while blocking locks. Eviction wants to get a lock to perform
final deletion while the inode has I_FREEING set, which blocks lookups.
We only saw this deadlock a handful of times in all of the time we've
run the code, but it's much more common now that we're acquiring locks
in iput to test that nlink is zero instead of only when nlink is zero.
I see unmount hang regularly when testing final inode deletion.
This adds a lookup variant for invalidation which will refuse to
return freeing inodes so they won't be waited on. Once they're freeing
they can't be seen by future lock users so they don't need to be
invalidated. This keeps the lock invalidation promise and avoids
sleeping on freeing inodes which creates the deadlock.
Signed-off-by: Zach Brown <zab@versity.com>
t_umount had a typo that had it try to unmount a mount based on a
caller's variable, which accidentally happened to work for its only
caller. Future callers would not have been so lucky.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we wouldn't try and remove cached dentries and inodes as
lock revocation removed cluster lock coverage. The next time
we tried to use the cached dentries or inodes we'd acquire
a lock and refresh them.
But now cached inodes prevent final inode deletion. If they linger
outside cluster locking then any final deletion will need to be deferred
until all of the inode's cached copies are naturally dropped at some
point in the future across the cluster. It might take a dentry refresh
or memory pressure to push out the old cached inodes.
This tries to proactively drop cached dentries and inodes as we lose
cluster lock coverage if they're not actively referenced. We need to be
careful not to perform final inode deletion during lock invalidation
because it will deadlock, so we defer an iput which could delete during
evict out to async work.
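The deferred iput could be as simple as handing the inode to a small
work item, sketched here with hypothetical wrapper names:

    #include <linux/fs.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct deferred_iput {
        struct work_struct work;
        struct inode *inode;
    };

    static void deferred_iput_worker(struct work_struct *work)
    {
        struct deferred_iput *di = container_of(work, struct deferred_iput,
                                                work);

        /* the final iput, and possibly deletion in evict, runs here in
         * work context instead of under lock invalidation */
        iput(di->inode);
        kfree(di);
    }

    static void defer_iput(struct inode *inode)
    {
        struct deferred_iput *di;

        di = kmalloc(sizeof(*di), GFP_NOFS | __GFP_NOFAIL);
        di->inode = inode;
        INIT_WORK(&di->work, deferred_iput_worker);
        schedule_work(&di->work);
    }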
Now deletion can be done synchronously in the task that is performing
the unlink because previous use of the inode on remote mounts hasn't
left unused cached inodes sitting around.
Signed-off-by: Zach Brown <zab@versity.com>
Today an inode's items are deleted once its nlink reaches zero and the
final iput is called in a local mount. This can delete inodes from
under other mounts which have opened the inode before it was unlinked on
another mount.
We fix this by adding cached inode tracking. Each mount maintains
groups of cached inode bitmaps at the same granularity as inode locking.
As a mount performs its final iput it gets a bitmap from the server
which indicates if any other mount has inodes in the group open.
This makes the two fast paths, opening and closing linked files and
deleting a file that was unlinked locally, only pay a moderate cost:
either maintaining the bitmap locally or getting the open map from the
server once per lock group. Removing many files in a group will only
lock and get the open map once per group.
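Schematically, the final-iput check might look like this; the group
size, bitmap helper, and server call are all stand-ins for the real
protocol:

    #define INODES_PER_GROUP 128  /* matches the inode lock group size */

    static bool no_other_mount_caches(struct super_block *sb, u64 ino)
    {
        unsigned long open_map[BITS_TO_LONGS(INODES_PER_GROUP)];
        u64 group = ino / INODES_PER_GROUP;
        int bit = ino % INODES_PER_GROUP;

        /* one message per lock group: ask the server which inodes in
         * this group other mounts still have cached */
        if (get_open_map_from_server(sb, group, open_map) != 0)
            return false;

        /* only delete the items now if no other mount has it cached */
        return !test_bit(bit, open_map);
    }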
Signed-off-by: Zach Brown <zab@versity.com>
Now that we have the recov layer we can have the lock server use it to
track lock recovery. The lock server no longer needs its own recovery
tracking structures and can instead call recov. We add a call that the
server uses to kick lock processing once lock recovery finishes. We
can get rid of the persistent lock_client items now that the server is
driving recovery from the mounted_client items.
Signed-off-by: Zach Brown <zab@versity.com>
The server starts recovery when it finds mounted client items as it
starts up. The clients are done recovering once they send their
greeting. If they don't recover in time then they'll be fenced.
Signed-off-by: Zach Brown <zab@versity.com>
Add a little set of functions to help the server track which clients are
waiting to recover which state. The open map messages need to wait for
recovery so we're moving recovery out of being only in the lock server.
Signed-off-by: Zach Brown <zab@versity.com>
Add lock coverage which tracks if the inode has been refreshed and is
covered by the inode group cluster lock. This will be used by
drop_inode and evict_inode to discover that the inode is current and
doesn't need to be refreshed.
Signed-off-by: Zach Brown <zab@versity.com>
The sneaky rhashtable_insert_fast() can't return -EEXIST despite the
last line of the function *REALLY* making it look like it can. It just
inserts new objects at the head of the bucket lists without comparing
the insertion with existing objects.
The block cache was relying on insertion to resolve duplicate racing
allocated blocks. Because it couldn't return -EEXIST we could get
duplicate cached blocks present in the hash table.
rhashtable_lookup_insert_fast() fixes this by actually comparing the
inserted object's key with the objects found in the insertion bucket. A
racing allocator trying to insert a duplicate cached block will get an
error, drop its allocated block, and retry its lookup.
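A fragment showing the retry that the comparing insert enables; the
cache struct, params, and block helpers are simplified stand-ins, while
the rhashtable calls are the kernel API named above:

    retry:
        blk = rhashtable_lookup_fast(&cache->ht, &blkno, block_ht_params);
        if (blk)
            return blk;

        new = alloc_block(blkno);
        if (!new)
            return ERR_PTR(-ENOMEM);

        ret = rhashtable_lookup_insert_fast(&cache->ht, &new->ht_head,
                                            block_ht_params);
        if (ret) {
            /* -EEXIST means a racing allocator inserted the same
             * blkno first; drop ours and look up the winner */
            free_block(new);
            if (ret == -EEXIST)
                goto retry;
            return ERR_PTR(ret);
        }

        return new;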
Signed-off-by: Zach Brown <zab@versity.com>
The rhashtable can return EBUSY if you insert fast enough to trigger an
expansion while the next table size is still waiting to be rehashed in
an rcu callback. If we get EBUSY from the rhashtable insert we call
synchronize_rcu to wait for the rehash to complete before trying again.
This was hit in testing restores of a very large namespace and took a
few hours to hit.
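A sketch of the retry on EBUSY, with the same caveats as the earlier
fragment about the surrounding names:

    do {
        ret = rhashtable_lookup_insert_fast(&cache->ht, &new->ht_head,
                                            block_ht_params);
        if (ret == -EBUSY) {
            /* an expansion is waiting on an rcu callback to rehash;
             * wait out the grace period and try the insert again */
            synchronize_rcu();
        }
    } while (ret == -EBUSY);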
Signed-off-by: Zach Brown <zab@versity.com>
We're seeing cpu livelocks in block shrinking where counters show that a
single block cache shrink call is only getting EAGAIN from repeated
rhashtable walk attempts. It occurred to me that the running task might
be preventing an RCU grace period from ending by never blocking.
The hope of this commit is that by waiting for rcu callbacks to run
we'll ensure that any pending rebalance callback runs before we retry
the rhashtable walk again. I haven't been able to reproduce this easily
so this is a stab in the dark.
Signed-off-by: Zach Brown <zab@versity.com>
Kinda weird to goto back to the out label and then fall out the bottom.
Just return -EIO, like forest_next_hint() does.
Don't call client_get_roots() right before retry, since it's the first
thing retry does.
Signed-off-by: Andy Grover <agrover@versity.com>
Support O_TMPFILE: Create an unlinked file and put it on the orphan list.
If it ever gains a link, take it off the orphan list.
Change MOVE_BLOCKS ioctl to allow moving blocks into offline extent ranges.
Ioctl callers must set a new flag to enable this operation mode.
RH-compat: tmpfile support is actually backported by RH into the 3.10 kernel.
We need to use some of their kabi-maintaining wrappers to use it:
use a struct inode_operations_wrapper instead of base struct
inode_operations, set S_IOPS_WRAPPER flag in i_flags. This lets
RH's modified vfs_tmpfile() find our tmpfile fn pointer.
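The wiring described above, sketched in abbreviated form; the scoutfs
op names are placeholders and the wrapper layout is shown only as far
as the commit describes it (embedded base ops plus a tmpfile pointer
and the S_IOPS_WRAPPER flag):

    static const struct inode_operations_wrapper scoutfs_dir_iops_wrapper = {
        .ops = {
            .lookup = scoutfs_lookup,
            .create = scoutfs_create,
            .unlink = scoutfs_unlink,
            /* ... the rest of the usual directory inode_operations ... */
        },
        .tmpfile = scoutfs_tmpfile,
    };

    static void scoutfs_init_dir_inode_ops(struct inode *inode)
    {
        /* point i_op at the embedded base ops and flag the inode so
         * RH's vfs_tmpfile() knows the wrapper (and ->tmpfile) exists */
        inode->i_op = &scoutfs_dir_iops_wrapper.ops;
        inode->i_flags |= S_IOPS_WRAPPER;
    }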
Add a test that covers both creating tmpfiles and moving their
contents into a destination file via MOVE_BLOCKS.
xfstests common/004 now runs because tmpfile is supported.
Signed-off-by: Andy Grover <agrover@versity.com>
The srch client compaction work initializes allocators, dirties blocks,
and writes them out as its transaction. It forgot to call the
pre-commit allocator prepare function.
The prepare function drops block references used by the meta allocator
during the transaction. This leaked block references which kept blocks
from being freed by the shrinker under memory pressure. Eventually
memory was full of leaked blocks and the shrinker walked all of them
looking for blocks to free, resulting in an effective livelock that ground
the system to a crawl.
Signed-off-by: Zach Brown <zab@versity.com>
By the time we get to destroying the block cache we should have put all
our block references. Warn as we tear down the blocks if we see any
blocks that still have references, implying a ref leak. This caught a
leak caused by srch compaction forgetting to put allocator list block
refs.
Signed-off-by: Zach Brown <zab@versity.com>