When move blocks is staging, it requires an overlapping offline extent
to cover the entire region being moved.
It performs the stage by modifying one extent at a time. If the source
extents in the region are fragmented it will modify each of them in
turn.
When looking for the offline extent to match the source extent it
searched from the iblock at the start of the whole operation, not from
the start of the source extent it's matching. This meant that it would
find the first extent it had just modified, which would no longer be
offline, and would return -EINVAL.
The fix is to have it search from the logical start of the extent it's
trying to match, not the start of the region.
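A rough sketch of the change, with hypothetical names for the extent
lookup and its arguments, just to show where the search now starts:

    /* before: searched from the start of the whole move operation,
     * which could land on an extent we had already staged */
    ext = find_overlapping_offline_extent(inode, op_start_iblock);

    /* after: search from the logical start of the source extent
     * that is currently being matched */
    ext = find_overlapping_offline_extent(inode, src_ext->start_iblock);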
Signed-off-by: Zach Brown <zab@versity.com>
The client's incoming lock invalidation request handler triggers a
BUG_ON if it gets a request for a lock that is already processing a
previous invalidation request. The server is supposed to only send
one request at a time.
The problem is that the batched invalidation request handling will send
responses outside of spinlock coverage before reacquiring the lock and
finishing processing once the response send has been successful.
This gives a window for another invalidation request to arrive after the
response was sent but before the invalidation finished processing. This
triggers the bug.
The fix is to mark the lock such that we can recognize a valid second
request arriving after we send the response but before we finish
processing. If it arrives we'll continue invalidation processing with
the arguments from the new request.
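Something along these lines, where the field and helper names are made
up for illustration rather than taken from the real lock structures:

    bool again = false;
    int ret;

    do {
            /* send the response outside of spinlock coverage */
            ret = send_invalidation_response(linfo, &args);
            if (ret)
                    break;

            spin_lock(&linfo->spinlock);
            finish_invalidation(lck, &args);

            again = lck->invalidate_rearmed;
            if (again) {
                    /* a valid second request arrived after the
                     * response was sent; continue with its args */
                    args = lck->rearmed_args;
                    lck->invalidate_rearmed = false;
            }
            spin_unlock(&linfo->spinlock);
    } while (again);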
Signed-off-by: Zach Brown <zab@versity.com>
Lock teardown during unmount involves first calling shutdown and then
destroy. The shutdown call is meant to ensure that it's safe to tear
down the client network connections. Once shutdown returns, locking
promises that it won't call into the client to send new lock requests.
The current shutdown implementation is very heavy-handed and shuts down
everything. This creates a deadlock. After calling lock shutdown, the
client will send its farewell and wait for a response. The server might
not send the farewell response until other mounts have unmounted if our
client is in the server's mount. In this case we still have to be
processing lock invalidation requests to allow other unmounting clients
to make forward progress.
This is reasonably easy and safe to do. We only use the shutdown flag
to stop lock calls that would change lock state and send requests. We
don't have it stop incoming request processing in the work queueing
functions. It's safe to keep processing incoming requests between
_shutdown and _destroy because the requests already come in through the
client. As the client shuts down it will stop calling us.
Signed-off-by: Zach Brown <zab@versity.com>
Even though we can pass in gfp flags to vmalloc it eventually calls pte
alloc functions which ignore the caller's flags and use user gfp flags.
This risks reclaim re-entering fs paths during allocations in the block
cache. These allocations that allowed reclaim deep in the fs were
causing lockdep to add RECLAIM dependencies between locks and holler
about deadlocks.
We apply the same pattern that xfs does for disabling reclaim while
allocating vmalloced block payloads. Setting PF_MEMALLOC_NOIO causes
reclaim in that task to clear __GFP_IO and __GFP_FS, regardless of the
flags passed to individual allocations, preventing recursion.
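The shape of the pattern, assuming the usual memalloc_noio_save() and
memalloc_noio_restore() helpers rather than whatever wrapper the block
cache actually uses:

    #include <linux/sched.h>
    #include <linux/vmalloc.h>

    static void *alloc_block_payload(size_t size)
    {
            unsigned int noio_flags;
            void *p;

            /* sets PF_MEMALLOC_NOIO so reclaim entered from this
             * task clears __GFP_IO and __GFP_FS, even in vmalloc's
             * internal pte allocations that ignore our gfp flags */
            noio_flags = memalloc_noio_save();
            p = vmalloc(size);
            memalloc_noio_restore(noio_flags);

            return p;
    }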
Signed-off-by: Zach Brown <zab@versity.com>
The shared recovery layer outputs different messages than it did when
it ran only for lock_recovery in the lock server.
Signed-off-by: Zach Brown <zab@versity.com>
Locks have a bunch of state that reflects concurrent processing.
Testing that state determines when it's safe to free a lock because
nothing is going on.
During unmount we abruptly stop processing locks. Unmount will send a
farewell to the server which will remove all the state associated with
the client that's unmounting for all its locks, regardless of the state
the locks were in.
The client unmount path has to clean up the interrupted lock state and
free it, carefully avoiding assertions that would otherwise indicate
that we're freeing used locks. The move to async lock invalidation
forgot to clean up the invalidation state. Previously a synchronous
work function would set and clear invalidate_pending while it was
running. Once we finished waiting for it invalidate_pending would be
clear. The move to async invalidation work meant that we can still have
invalidate_pending with no work executing. Lock destruction removed
locks from the invalidation list but forgot to clear the
invalidate_pending flag.
This triggered assertions during unmount that were otherwise harmless.
There was no other use of the lock, we just forgot to clean up the lock
state.
Signed-off-by: Zach Brown <zab@versity.com>
The data_info struct holds the data allocator that is filled by
transactions as they commit. We have to free it after we've shut down
transactions. It's more like the forest in this regard so we move its
destruction down by the forest to group similar behaviour.
Signed-off-by: Zach Brown <zab@versity.com>
Shutting down the lock client waits for invalidation work and prevents
future work from being queued. We're currently shutting down the
subsystems that lock calls before lock itself, leading to crashes if we
happen to have invalidations executing as we unmount.
Shutting down locking before its dependencies fixes this. This was hit
in testing during the inode deletion fixes because they created the
perfect race: acquiring locks during unmount made the server very
likely to send invalidations to one mount on behalf of another as they
both unmounted.
Signed-off-by: Zach Brown <zab@versity.com>
Shutting down the transaction during unmount relied on the vfs unmount
path to perform a sync of any remaining dirty transaction. There are
ways that we can dirty a transaction during unmount after it calls
the fs sync, so we try to write any remaining dirty transaction before
shutting down.
Signed-off-by: Zach Brown <zab@versity.com>
We've had a long-standing deadlock between lock invalidation and
eviction. Invalidating a lock wants to lookup inodes and drop their
resources while blocking locks. Eviction wants to get a lock to perform
final deletion while the inode has I_FREEING set, which blocks lookups.
We only saw this deadlock a handful of times in all the time we've run
the code, but it's much more common now that we're acquiring
locks in iput to test that nlink is zero instead of only when nlink is
zero. I see unmount hang regularly when testing final inode deletion.
This adds a lookup variant for invalidation which will refuse to
return freeing inodes so they won't be waited on. Once they're freeing
they can't be seen by future lock users so they don't need to be
invalidated. This keeps the lock invalidation promise and avoids
sleeping on freeing inodes, which is what creates the deadlock.
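The behaviour is essentially that of igrab(), which returns NULL for
inodes marked I_FREEING or I_WILL_FREE instead of waiting on them.  A
rough sketch of the invalidation side, with a hypothetical helper for
dropping the inode's lock coverage:

    #include <linux/fs.h>

    static void invalidate_cached_inode(struct inode *inode)
    {
            /* refuse freeing inodes rather than wait on them */
            inode = igrab(inode);
            if (!inode)
                    return;

            drop_lock_coverage(inode);      /* hypothetical */
            iput(inode);
    }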
Signed-off-by: Zach Brown <zab@versity.com>
t_umount had a typo that had it try to unmount a mount based on a
caller's variable, which accidentally happened to work for its only
caller. Future callers would not have been so lucky.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we wouldn't try and remove cached dentries and inodes as
lock revocation removed cluster lock coverage. The next time
we tried to use the cached dentries or inodes we'd acquire
a lock and refresh them.
But now cached inodes prevent final inode deletion. If they linger
outside cluster locking then any final deletion will need to be deferred
until all its cached inodes are naturally dropped at some point in the
future across the cluster. It might take refreshing the dentries or for
memory pressure to push out the old cached inodes.
This tries to proactively drop cached dentries and inodes as we lose
cluster lock coverage if they're not actively referenced. We need to be
careful not to perform final inode deletion during lock invalidation
because it will deadlock, so we defer an iput which could delete during
evict out to async work.
Now deletion can be done synchronously in the task that is performing
the unlink because previous use of the inode on remote mounts hasn't
left unused cached inodes sitting around.
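A minimal sketch of the deferred iput, assuming a per-inode info
struct with an embedded work item; the names are illustrative:

    #include <linux/fs.h>
    #include <linux/workqueue.h>

    struct example_inode_info {
            struct inode vfs_inode;
            struct work_struct iput_work;
    };

    static void deferred_iput_worker(struct work_struct *work)
    {
            struct example_inode_info *ei =
                    container_of(work, struct example_inode_info,
                                 iput_work);

            /* the final iput can evict and delete; by now we're out
             * of the invalidation path so taking locks is safe */
            iput(&ei->vfs_inode);
    }

    static void defer_iput(struct inode *inode)
    {
            struct example_inode_info *ei =
                    container_of(inode, struct example_inode_info,
                                 vfs_inode);

            INIT_WORK(&ei->iput_work, deferred_iput_worker);
            schedule_work(&ei->iput_work);
    }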
Signed-off-by: Zach Brown <zab@versity.com>
Today an inode's items are deleted once its nlink reaches zero and the
final iput is called in a local mount. This can delete inodes from
under other mounts which have opened the inode before it was unlinked on
another mount.
We fix this by adding cached inode tracking. Each mount maintains
groups of cached inode bitmaps at the same granularity as inode locking.
As a mount performs its final iput it gets a bitmap from the server
which indicates if any other mount has inodes in the group open.
This keeps the cost moderate for the two fast paths: opening and
closing linked files, and deleting a file that was unlinked locally.
The first only maintains the bitmap locally, and the second only gets
the open map once per lock group. Removing many files in a group will
only lock and get the open map once per group.
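A sketch of the bookkeeping, with a made-up group size and layout to
show the bitmap granularity matching inode locking:

    #include <linux/bitmap.h>
    #include <linux/types.h>

    /* hypothetical: bitmap groups cover the same inode ranges as
     * the inode cluster locks */
    #define INO_GROUP_SHIFT  10
    #define INOS_PER_GROUP   (1ULL << INO_GROUP_SHIFT)

    struct cached_ino_group {
            u64 group_nr;
            DECLARE_BITMAP(cached, INOS_PER_GROUP);
    };

    /* maintained locally as inodes enter and leave the cache */
    static void mark_ino_cached(struct cached_ino_group *grp, u64 ino)
    {
            set_bit(ino & (INOS_PER_GROUP - 1), grp->cached);
    }

    /* the open map fetched from the server at final iput tells us
     * whether another mount still has inodes in the group cached */
    static bool others_have_cached(const unsigned long *open_map, u64 ino)
    {
            return test_bit(ino & (INOS_PER_GROUP - 1), open_map);
    }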
Signed-off-by: Zach Brown <zab@versity.com>
Now that we have the recov layer we can have the lock server use it to
track lock recovery. The lock server no longer needs its own recovery
tracking structures and can instead call recov. We add a call for the
server to kick lock processing once lock recovery finishes. We
can get rid of the persistent lock_client items now that the server is
driving recovery from the mounted_client items.
Signed-off-by: Zach Brown <zab@versity.com>
The server starts recovery when it finds mounted client items as it
starts up. The clients are done recovering once they send their
greeting. If they don't recover in time then they'll be fenced.
Signed-off-by: Zach Brown <zab@versity.com>
Add a little set of functions to help the server track which clients are
waiting to recover which state. The open map messages need to wait for
recovery so we're moving recovery out of being only in the lock server.
Signed-off-by: Zach Brown <zab@versity.com>
Add lock coverage which tracks if the inode has been refreshed and is
covered by the inode group cluster lock. This will be used by
drop_inode and evict_inode to discover that the inode is current and
doesn't need to be refreshed.
Signed-off-by: Zach Brown <zab@versity.com>
The sneaky rhashtable_insert_fast() can't return -EEXIST despite the
last line of the function *REALLY* making it look like it can. It just
inserts new objects at the head of the bucket lists without comparing
the insertion with existing objects.
The block cache was relying on insertion to resolve duplicate racing
allocated blocks. Because it couldn't return -EEXIST we could get
duplicate cached blocks present in the hash table.
rhashtable_lookup_insert_fast() fixes this by actually comparing the
inserted object's key with the objects found in the insertion bucket. A
racing allocator trying to insert a duplicate cached block will get an
error, drop their allocated block, and retry their lookup.
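The retry pattern ends up looking roughly like this; the block struct
and hash params here are illustrative, not the block cache's real
definitions:

    #include <linux/rhashtable.h>
    #include <linux/types.h>

    struct cached_block {
            struct rhash_head hash;
            u64 blkno;
            /* ... */
    };

    static const struct rhashtable_params block_ht_params = {
            .head_offset = offsetof(struct cached_block, hash),
            .key_offset  = offsetof(struct cached_block, blkno),
            .key_len     = sizeof(u64),
    };

    /* 0 if our block went in, -EEXIST if a racing allocator won and
     * the caller should drop its block and retry the lookup */
    static int insert_cached_block(struct rhashtable *ht,
                                   struct cached_block *bl)
    {
            return rhashtable_lookup_insert_fast(ht, &bl->hash,
                                                 block_ht_params);
    }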
Signed-off-by: Zach Brown <zab@versity.com>
The rhashtable can return -EBUSY if you insert fast enough to trigger
an expansion to the next table size that is waiting to be rehashed in
an rcu callback. If we get -EBUSY from rhashtable_insert we call
synchronize_rcu to wait for the rehash to complete before trying again.
This was hit in testing restores of a very large namespace and took a
few hours to hit.
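The retry is small. With hypothetical names for the table, block, and
params, it looks roughly like:

    int ret;

    do {
            ret = rhashtable_insert_fast(ht, &bl->hash, block_ht_params);
            if (ret == -EBUSY) {
                    /* a pending rehash hasn't finished; wait for
                     * its rcu callback before trying again */
                    synchronize_rcu();
            }
    } while (ret == -EBUSY);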
Signed-off-by: Zach Brown <zab@versity.com>
We're seeing cpu livelocks in block shrinking where counters show that a
single block cache shrink call is only getting EAGAIN from repeated
rhashtable walk attempts. It occurred to me that the running task might
be preventing an RCU grace period from ending by never blocking.
The hope of this commit is that by waiting for rcu callbacks to run
we'll ensure that any pending rebalance callback runs before we retry
the rhashtable walk. I haven't been able to reproduce this easily so
this is a stab in the dark.
Signed-off-by: Zach Brown <zab@versity.com>
Kinda weird to goto back to the out label and then fall out the bottom.
Just return -EIO, like forest_next_hint() does.
Don't call client_get_roots() right before retry, since it is the first
thing retry does.
Signed-off-by: Andy Grover <agrover@versity.com>
Support O_TMPFILE: Create an unlinked file and put it on the orphan list.
If it ever gains a link, take it off the orphan list.
Change MOVE_BLOCKS ioctl to allow moving blocks into offline extent ranges.
Ioctl callers must set a new flag to enable this operation mode.
RH-compat: tmpfile support is actually backported by RH into the 3.10
kernel.
We need to use some of their kabi-maintaining wrappers to use it:
use a struct inode_operations_wrapper instead of base struct
inode_operations, set S_IOPS_WRAPPER flag in i_flags. This lets
RH's modified vfs_tmpfile() find our tmpfile fn pointer.
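Roughly what the wiring looks like under that scheme; the operation
names are placeholders and the wrapper layout is RH's, so it may vary
between their kernel releases:

    /* RH's extended struct embeds the normal inode_operations */
    static const struct inode_operations_wrapper example_dir_iops = {
            .ops = {
                    .lookup = example_lookup,
                    .create = example_create,
                    .unlink = example_unlink,
            },
            .tmpfile = example_tmpfile,
    };

    static void example_init_dir_inode(struct inode *inode)
    {
            inode->i_op = &example_dir_iops.ops;
            /* lets RH's vfs_tmpfile() reach past the base struct to
             * find the .tmpfile pointer */
            inode->i_flags |= S_IOPS_WRAPPER;
    }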
Add a test that tests both creating tmpfiles as well as moving their
contents into a destination file via MOVE_BLOCKS.
xfstests common/004 now runs because tmpfile is supported.
Signed-off-by: Andy Grover <agrover@versity.com>
The srch client compaction work initializes allocators, dirties blocks,
and writes them out as its transaction. It forgot to call the
pre-commit allocator prepare function.
The prepare function drops block references used by the meta allocator
during the transaction. This leaked block references which kept blocks
from being freed by the shrinker under memory pressure. Eventually
memory was full of leaked blocks and the shrinker walked all of them
looking for blocks to free, resulting in an effective livelock that
ground the system to a crawl.
Signed-off-by: Zach Brown <zab@versity.com>
By the time we get to destroying the block cache we should have put all
our block references. Warn as we tear down the blocks if we see any
blocks that still have references, implying a ref leak. This caught a
leak caused by srch compaction forgetting to put allocator list block
refs.
Signed-off-by: Zach Brown <zab@versity.com>
That comment looked very weird indeed until I recognized that I must
have forgotten to delete the first two attempts at starting the
sentence.
Signed-off-by: Zach Brown <zab@versity.com>
The very first greeting a client sends is unique because it doesn't yet
have a server_term field set and tells the server to create items to
track the client.
A server processing this request can create the items and then shut down
before the client is able to receive the reply. They'll resend the
greeting without server_term but then the next server will get -EEXIST
errors as it tries to create items for the client. This causes the
connection to break, which the client tries to reestablish, and the
pattern repeats indefinitely.
The fix is to simply recognize that -EEXIST is acceptable during item
creation. Server message handlers always have to address the case where
a resent message was already processed by a previous server but its
response didn't make it to the client.
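In practice that just means treating -EEXIST as success when creating
the client's tracking items; the item creation call here is a
hypothetical stand-in:

    ret = create_client_tracking_items(server, rid);
    if (ret == -EEXIST) {
            /* a previous server processed this greeting and created
             * the items, but its reply never reached the client */
            ret = 0;
    }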
Signed-off-by: Zach Brown <zab@versity.com>
Remove an old client info field from the unmount barrier mechanism which
was removed a while ago. It used to be compared to a super field to
decide to finish unmount without reconnecting but now we check for our
mounted_client item in the server's btree.
Signed-off-by: Zach Brown <zab@versity.com>
Define a family field, and add a union for IPv4 and v6 variants, although
v6 is not supported yet.
The family field is now used to determine the presence of an address in
a quorum slot, instead of checking if the addr is zero.
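The layout described ends up looking something like this; the field
names and sizes here are illustrative:

    #include <linux/types.h>

    struct quorum_slot_addr {
            __u8 family;            /* 0 when the slot has no address */
            __u8 __pad[3];
            union {
                    struct {
                            __be32 addr;
                            __be16 port;
                    } v4;
                    struct {
                            __u8 addr[16];
                            __be16 port;
                    } v6;           /* defined but not yet supported */
            };
    };

    static bool quorum_slot_present(const struct quorum_slot_addr *qa)
    {
            /* presence is keyed off the family, not a zero address */
            return qa->family != 0;
    }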
Signed-off-by: Andy Grover <agrover@versity.com>
The block-stale-reads test was built from the ashes of a test that
used counters and triggers to work with the btree when it was
only used on the server.
The initial quick translation to try and trigger block cache retries
while the forest called the btree got so much wrong. It was still
trying to use some 'cl' variable that no longer referred to the client,
the trigger helpers now call statfs to find paths and can end up
triggering themselves, and many more stale counter reads can happen
throughout the system while we're working -- not just the one from our
trigger.
This fixes it up to consistently use fs numbers instead of
the silly stale cl variable and be less sensitive to triggers firing and
counter differences.
Signed-off-by: Zach Brown <zab@versity.com>
t_trigger_arm always output the value of the trigger after arming on
the premise that tests required seeing that the trigger was armed. In
the process of showing the trigger it calls a bunch of t_ helpers that
build the path to the trigger file, using statfs_more to get the rids
of mounts.
If the trigger being armed is in the server's mount and the specific
trigger is fired by the server's statfs_more request processing then
the trigger can be fired before we read its value. Tests can
inconsistently fail as the golden output shows the trigger being armed
or not, depending on whether it was in the server's mount.
t_trigger_arm_silent doesn't output the value of the armed trigger. It
can be used for low level triggers that don't rely on reading the
trigger's value to discover that their effect has happened.
Signed-off-by: Zach Brown <zab@versity.com>