Killing a task can end up in evict and bail out of acquiring the locks
needed to perform final inode deletion. This isn't necessarily fatal.
The orphan task will come around and delete the inode once it is truly
no longer referenced.
So let's silence the error and keep track of how many times it happens.
Signed-off-by: Zach Brown <zab@versity.com>
Orphaned items haven't been deleted for quite a while -- the call to the
orphan inode scanner has been commented out for ages. The deletion of
the orphan item didn't take rid zone locking into account as we moved
deletion from being strictly local to being performed by whoever last
used the inode.
This reworks orphan item management and brings back orphan inode
scanning to correctly delete orphaned inodes.
We get rid of the rid zone that was always _WRITE locked by each mount.
That made it impossible for other mounts to get a _WRITE lock to delete
orphan items. Instead we rename it to the orphan zone and have orphan
item callers get _WRITE_ONLY locks inside their inode locks. Now all
nodes can create and delete orphan items as they have _WRITE locks on
the associated inodes.
Then we refresh the orphan inode scanning function. It now runs
regularly in the background of all mounts. It avoids creating cluster
lock contention by finding candidates with unlocked forest hint reads
and by testing inode caches locally and via the open map before properly
locking and trying to delete the inode's items.
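Roughly, the scan looks like the following sketch; the helper names are
made up for illustration and aren't the actual scoutfs calls:

    /* illustrative sketch of the background orphan scan */
    static void orphan_scan_sketch(struct super_block *sb)
    {
            u64 ino = 0;

            /* cheap unlocked forest hint reads find candidate inos */
            while (next_orphan_hint_unlocked(sb, &ino)) {
                    /* skip inodes still cached locally */
                    if (inode_cached_locally(sb, ino))
                            continue;
                    /* skip inodes still open on other mounts (open map) */
                    if (inode_open_on_other_mounts(sb, ino))
                            continue;
                    /* only now take the orphan and inode locks */
                    if (lock_orphan_and_inode(sb, ino) == 0) {
                            try_delete_inode_items(sb, ino);
                            unlock_orphan_and_inode(sb, ino);
                    }
            }
    }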
Signed-off-by: Zach Brown <zab@versity.com>
The log merging work deletes log_trees items once their item roots are
merged back into the fs root. Those deleted items could still have
populated srch files that would be lost. We force rotation of the srch
files in the items as they're reclaimed to turn them into rotated srch
files that can be compacted.
Signed-off-by: Zach Brown <zab@versity.com>
Refilling a btree block by moving items from its siblings as it falls
under the join threshold had some pretty serious mistakes. It used the
target block's total item count instead of the sibling's when deciding
how many items to move. It didn't take item moving overruns into
account when deciding to compact, so it could run out of contiguous
free space as it moved the last item. And once it compacted it returned
without moving because the return was meant to be in the error case.
This is all fixed by correctly examining the sibling block to determine
whether we should join a block up to 75% full or move a big chunk over,
compacting if the free space doesn't have room for an excessive
worst-case overrun, and fixing the compaction error checking return typo.
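The corrected decision looks roughly like this sketch; the names are
invented and only illustrate the logic described above:

    /* sketch: examine the sibling, not the target, when refilling */
    if (sib_total_bytes <= three_quarters_full) {
            to_move = sib_nr_items;             /* absorb the whole sibling */
    } else {
            to_move = big_chunk(sib_nr_items);  /* move a big chunk over */
    }
    bytes = bytes_of_items(sib, to_move);

    /* compact if a worst case per-item overrun couldn't fit contiguously */
    if (contig_free_bytes(dst) < bytes + worst_case_overrun) {
            ret = compact_block(dst);
            if (ret < 0)                        /* only return in the error case */
                    return ret;
    }

    move_items(dst, sib, to_move);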
Signed-off-by: Zach Brown <zab@versity.com>
The alloc iterator needs to find and include the totals of the avail and
freed allocator list heads in the log merge items.
Signed-off-by: Zach Brown <zab@versity.com>
Some item_val_len() callers were applying alignment twice, which isn't
needed.
And additions to erased_bytes as value lengths change didn't take
alignment into account. They could end up double counting if val_len
changes within the alignment padding are accounted for again when the
full item and its alignment are later deleted. Additions to
erased_bytes based on val_len should always take alignment into account.
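Roughly, both accounting paths should use the aligned value length.
This is only a sketch and assumes the usual power-of-two value
alignment; the macro name is invented:

    /* sketch: always account for the aligned value length */
    #define VAL_BYTES(len)  ALIGN(len, ITEM_VAL_ALIGN)

    /* when a value shrinks */
    erased_bytes += VAL_BYTES(old_len) - VAL_BYTES(new_len);

    /* when the whole item is deleted the same aligned length is counted */
    erased_bytes += sizeof(struct item_hdr) + VAL_BYTES(val_len);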
Signed-off-by: Zach Brown <zab@versity.com>
The item cache allocates a page and a little tracking struct for each
cached page. If the page allocation fails it might try to free a null
page pointer, which isn't allowed.
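The fix is just to guard the unwind path, something like this sketch:

    /* sketch of the error path: only free the page if we allocated one */
    if (pg->page)
            __free_page(pg->page);
    kfree(pg);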
Signed-off-by: Zach Brown <zab@versity.com>
Item creation, which fills out a new item at the end of the array of
item structs at the start of the block, didn't explicitly zero the item
struct padding. The padding would only have been zero if the memory was
already zero, which is likely for new blocks but isn't necessarily true
if the memory had previously been used by deleted values.
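Something along these lines closes the gap; the struct and field names
are only illustrative:

    /* sketch: don't rely on the block memory already being zero */
    item = &bt->items[nr];
    memset(item, 0, sizeof(*item));
    item->key = *key;
    item->val_off = cpu_to_le16(val_off);
    item->val_len = cpu_to_le16(val_len);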
Signed-off-by: Zach Brown <zab@versity.com>
The change to aligning values didn't update the btree block verifier's
total length calculation, and while we're in there we can also check
that values are correctly aligned.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we had an unused function that could be flipped on to verify
btree blocks during traversal. This refactors the block verifier a bit
to be called by a verifying walker. This will let callers walk paths to
leaves to verify the tree around operations, rather than verification
being performed during the next walk.
Signed-off-by: Zach Brown <zab@versity.com>
Take the condition used to decide if a btree block needs to be joined
and put it in total_above_join_low_water() so that btree_merging will be
able to call it to see if the leaf block it's merging into needs to be
joined.
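The helper is just the existing test given a name, something like the
following sketch; the field and constant names are approximate:

    /* sketch: true if the block holds enough bytes to not need joining */
    static bool total_above_join_low_water(struct scoutfs_btree_block *bt)
    {
            return le16_to_cpu(bt->total_item_bytes) > JOIN_LOW_WATER;
    }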
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree function for freeing all the blocks in a btree without
having to cow the blocks to track which refs have been freed. We use a
key from the caller to track which portions of the tree have been freed.
Signed-off-by: Zach Brown <zab@versity.com>
Add the client work which is regularly scheduled to ask the server for
log merging work to do. The relatively simple client work gets a
request from the server, finds the log roots to merge given the request
seq, performs the merge with a btree call and callbacks, and commits the
result to the server.
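The shape of the work function, as a sketch with invented helper names:

    /* sketch of the client's periodic log merge work */
    static void log_merge_worker_sketch(struct work_struct *work)
    {
            struct client_info *client = container_of(work,
                            struct client_info, merge_dwork.work);
            struct merge_request req;
            struct merge_completion comp;
            int ret;

            /* ask the server for a range of the fs tree to merge into */
            ret = client_get_log_merge(client, &req);
            if (ret == 0) {
                    /* merge finalized log roots with seqs covered by req */
                    ret = merge_finalized_log_roots(client, &req, &comp);
                    /* report success or failure back to the server */
                    client_commit_log_merge(client, &comp, ret);
            }

            queue_delayed_work(client->wq, &client->merge_dwork,
                               LOG_MERGE_DELAY);
    }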
Signed-off-by: Zach Brown <zab@versity.com>
This adds the server processing side of the btree merge functionality.
The client isn't yet sending the log_merge messages so no merging will
be performed.
The bulk of the work happens as the server processes a get_log_merge
message to build a merge request for the client. It starts a log merge
if one isn't in flight. If one is in flight it checks to see if it
should be spliced and maybe finished. In the common case it finds the
next range to be merged and sends the request to the client to process.
The commit_log_merge handler is the completion side of that request. If
the request failed then we unwind its resources based on the stored
request item. If it succeeds we record it in an item for get_log_merge
processing to splice eventually.
Then we modify two existing server code paths.
First, get_log_trees doesn't just create or use a single existing log
btree for a client mount. If the existing log btree is large enough it
sets its finalized flag and advances the nr to use a new log btree.
That makes the old finalized log btree available for merging.
Then we need to be a bit more careful when reclaiming the open log btree
for a client. We can't use next to find the only open log btree;
instead we use prev to find the last one and make sure that it isn't
already finalized.
Signed-off-by: Zach Brown <zab@versity.com>
Add the format specification for the upcoming btree merging. Log btrees
gain a finalized field, we add the super btree root and all the items
that the server will use to coordinate merging amongst clients, and we
add the two client net messages which the server will implement.
Signed-off-by: Zach Brown <zab@versity.com>
Extract part of the get_last_seq handler into a call that finds the last
stable client transaction seq. Log merging needs this to determine a
cutoff for stable items in log btrees.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree call to just dirty a leaf block, joining and splitting
along the way so that the blocks in the path satisfy the balance
constraints.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree function for merging the items in a range from a number of
read-only input btrees into a destination btree.
Signed-off-by: Zach Brown <zab@versity.com>
Add a BTW_SUBTREE flag to btree_walk() to restrict splitting or joining
of the root block. When clients are merging into the root built from a
reference to the last parent in the fs tree we want to be careful that
we maintain a single root block that can be spliced back into the fs
tree. We specifically check that the root block remains within the
split/join thresholds. If it falls out of compliance we return an error
so that it can be spliced back into the fs tree and then split/joined
with its siblings.
Signed-off-by: Zach Brown <zab@versity.com>
Add calls for working with subtrees built around references to blocks in
the last level of parents. This will let the server farm out btree
merging work where concurrency is built around safely working with all
the items and leaves that fall under a given parent block.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree helper for finding the range of keys which are found in
leaves referenced by the last parent block when searching for a given
key.
Signed-off-by: Zach Brown <zab@versity.com>
Rename the item version to seq and set it to the max of the transaction
seq and the lock's write_seq. This lets btree item merging choose a seq
at which all dirty items written in future commits must have greater
seqs. It can drop the seqs from items written to the fs tree during
btree merging knowing that there aren't any older items out in
transactions that could be mistaken for newer items.
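Concretely, the assignment becomes something like this sketch; the
names are approximate:

    /* sketch: dirty items carry the greater of the two seqs */
    item_seq = max(current_trans_seq, lock->write_seq);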
Signed-off-by: Zach Brown <zab@versity.com>
Rename the write_version lock field to write_seq and get it from the
core seq in the super block.
We're doing this to create a relationship between a client transaction's
seq and a lock's write_seq. New transactions will have a greater seq
than all previously granted write locks and new write locks will have a
greater seq than all open transactions. This will be used to resolve
ambiguities in item merging as transaction seqs are written out of order
and write locks span transactions.
Signed-off-by: Zach Brown <zab@versity.com>
Get the next seq for a client transaction from the core seq in the super
block. Remove its specific next_trans_seq field.
While making this change we switch to only using le64 in the network
message payloads; the rest of the processing now uses natural u64s.
Signed-off-by: Zach Brown <zab@versity.com>
Add a new seq field to the super block which will be the source of all
incremented seqs throughout the system. We give out incremented seqs to
callers with an atomic64_t in memory which is synced back to the super
block as we commit transactions in the server.
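As a sketch, with invented names, the in-memory counter and the
commit-time sync look like:

    /* sketch: seqs come from memory, the super is updated at commit */
    u64 next_seq_sketch(struct server_info *server)
    {
            return atomic64_inc_return(&server->core_seq);
    }

    static void sync_seq_to_super_sketch(struct server_info *server,
                                         struct scoutfs_super_block *super)
    {
            super->seq = cpu_to_le64(atomic64_read(&server->core_seq));
    }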
Signed-off-by: Zach Brown <zab@versity.com>
When we moved to the current allocator we fixed up the server commit
path to initialize the pair of allocators as a commit is finished rather
than before it starts. This removed all the error cases from
hold_commit. Remove the error handling from hold_commit calls to make
the system just a bit simpler.
Signed-off-by: Zach Brown <zab@versity.com>
The core quorum work loop assumes that it has exclusive access to its
slot's quorum block. It uniquely marks blocks it writes and verifies
the marks on read to discover if another mount has written to its slot
under the assumption that this must be a configuration error that put
two mounts in the same slot.
But the design of the leader bit in the block violates the invariant
that a slot's block is only written by the mount in that slot. As the
server comes up and fences previous leaders it writes to their blocks
to clear their leader bits.
The final hole in the design is that because we're fencing mounts, not
slots, each slot can have two mounts in play. An active mount can be
using the slot and there can still be a persistent record of a previous
mount in the slot that crashed that needs to be fenced.
All this comes together to have the server fence an old mount in a slot
while a new mount is coming up. The new mount sees the mark change and
freaks out and stops participating in quorum.
The fix is to rework the quorum blocks so that each slot only writes to
its own block. Instead of the server writing to each fenced mount's
slot, it writes a fence event to its block once all previous mounts have
been fenced. We add a bit of bookkeeping so that the server can
discover when all block leader fence operations have completed. Each
event gets its own term so we can compare events to discover live
servers.
We get rid of the write marks and instead have an event that is written
as a quorum agent starts up and is then checked on every read to make
sure it still matches.
Signed-off-by: Zach Brown <zab@versity.com>
If the server shuts down it calls into quorum to tell it that the
server has exited. This stops quorum from sending heartbeats that
suppress other leader elections.
The function that did this got the logic wrong. It was setting the bit
instead of clearing it, having been initially written to set a bit when
the server exited.
Signed-off-by: Zach Brown <zab@versity.com>
Add the peername of the client's connected socket to its mounted_client
item as it mounts. If the client doesn't recover then fencing can use
the IP to find the host to fence.
Signed-off-by: Zach Brown <zab@versity.com>
The error messages from reading quorum blocks were confusing. The mark
was being checked when the block had already seen an error, and we got
multiple messages for some errors.
This cleans it up a bit so we only get one error message for each error
source and each message contains relevant context.
Signed-off-by: Zach Brown <zab@versity.com>
Currently the server's recovery timeout work synchronously reclaims
resources for each client whose recovery timed out.
scoutfs_recov_next_pending() can always return the head of the pending
list because its caller will always remove it from the list as it
iterates.
As we move to real fencing the server will be creating fence requests
for all the timed out clients concurrently. It will need to iterate
over all the rids for clients in recovery.
So we sort recovery's pending list by rid and change _recov_next_pending
to return the next pending rid after a rid argument. This lets the
server iterate over all the pending rids at once.
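The server's iteration then looks roughly like the following sketch;
the exact arguments of the real call differ:

    /* sketch: walk all pending rids in sorted order, starting after 0 */
    u64 rid = 0;

    while ((rid = scoutfs_recov_next_pending(sb, rid)) != 0)
            start_fence_request(sb, rid);   /* invented helper */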
Signed-off-by: Zach Brown <zab@versity.com>
Client recovery in the server doesn't add the omap rid for all the
clients that it's waiting for. It only adds the rid as they connect. A
client whose recovery timeout expires and is evicted will try to have
its omap rid removed without being added.
Today this triggers a warning and returns an error from a time when the
omap rid lifecycle was more rigid. Now that it's being called by the
server's reclaim_rid, along with a bunch of other functions that succeed
if called for non-existent clients, let's have the omap remove_rid do
the same.
Signed-off-by: Zach Brown <zab@versity.com>
I saw a confusing hang that looked like a lack of ordering between
a waker setting shutting_down and a wait event testing it after
being woken up. Let's see if more barriers help.
Signed-off-by: Zach Brown <zab@versity.com>
Our connection state spans sockets that can disconnect and reconnect.
While sockets are connected we store the socket's remote address in the
connection's peername and we clear it as sockets disconnect.
Fencing wants to know the last connected address of the mount. It's a
bit of metadata we know about the mount that can be used to find it and
fence it. As we store the peer address we also stash it away as the
last known peer address for the socket. Fencing can then use that
instead of the current socket peer address which is guaranteed to be
uninitialized because there's no socket connected.
Signed-off-by: Zach Brown <zab@versity.com>
The client currently always queues immediate connect work when its
notify_down is called. It was assuming that notify_down is only called
from a healthy established connection. But it's also called for
unsuccessful connect attempts that might not have timed out, say when
the host is up but the port isn't listening.
This results in spamming connection attempts at the address in an old
stale leader block until a new server is elected, fences the previous
leader, and updates their quorum block.
The fix is to explicitly manage the connection work queueing delay. We
only set it to immediately queue on mount and when we see a greeting
reply from the server. We always set it to a longer timeout as we start
a connection attempt. This means we'll always have a long reconnect
delay unless we really connected to a server.
Signed-off-by: Zach Brown <zab@versity.com>
The server is responsible for calling the fencing subsystem. It is the
source of fencing requests as it decides that previous mounts are
unresponsive. It is responsible for reclaiming resources for fenced
mounts and freeing their associated fence request.
Signed-off-by: Zach Brown <zab@versity.com>
Add sysfs attribute creation that can provide the parent dir kobject
instead of always creating the sysfs object dir off of the main
per-mount dir.
Signed-off-by: Zach Brown <zab@versity.com>
Add super_ops->umount_begin so that we can implement a forced unmount
which tries to avoid issuing any more network or storage ops. It can
return errors and lose unsynchronized data.
Signed-off-by: Zach Brown <zab@versity.com>
Add the data_alloc_zone_blocks volume option. This changes the
behaviour of the server to try and give mounts free data extents which
fall in exclusive fixed-size zones.
We add the field to the scoutfs_volume_options struct and add it to the
set_volopt server handler which enforces constraints on the size of the
zones.
We then add fields to the log_trees struct which records the size of the
zones and sets bits for the zones that contain free extents in the
data_avail allocator root. The get_log_trees handler is changed to read
all the zone bitmaps from all the items, pass those bitmaps in to
_alloc_move to direct data allocations, and finally update the bitmaps
in the log_trees items to cover the newly allocated extents. The
log_trees data_alloc_zone fields are cleared as the mount's logs are
reclaimed to indicate that the mount is no longer writing to the zone.
The policy mechanism of finding free extents based on the bitmaps is
implemented down in _data_alloc_move().
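The zone bookkeeping is conceptually a bitmap indexed by
blkno / zone_blocks, roughly as in this sketch:

    /* sketch: set the bits for the zones that a free extent covers */
    static void set_extent_zone_bits(unsigned long *zones, u64 zone_blocks,
                                     u64 start, u64 len)
    {
            u64 zone = div64_u64(start, zone_blocks);
            u64 last = div64_u64(start + len - 1, zone_blocks);

            while (zone <= last)
                    set_bit(zone++, zones);
    }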
Signed-off-by: Zach Brown <zab@versity.com>
Add parameters so that scoutfs_alloc_move() can first search for source
extents in specified zones. It uses relatively cheap searches through
the order items to find extents that intersect with the regions
described by the zone bitmaps.
Signed-off-by: Zach Brown <zab@versity.com>
Allocators store free extents in two items, one sorted by their blkno
position and the other by their precise length.
The length index makes it easy to search for precise extent lengths, but
it makes it hard to search for a large extent within a given blkno
region. Skipping in the blkno dimension has to be done for every
precise length value.
We don't need that level of precision. If we index the extents by a
coarser order of the length then we have a fixed number of orders in
which we have to skip in the blkno dimension when searching within a
specific region.
This changes the length item to be stored at the log(8) order of the
length of the extents. This groups extents into orders that are close
to the human-friendly base 10 orders of magnitude.
With this change the order field in the key no longer stores the precise
extent length. To preserve the length of the extent we need to use
another field. The only 64bit field remaining is the first, which has a
higher comparison priority than the type. So we use the highest
comparison priority zone field to differentiate the position and order
indexes and can now use all three 64bit fields in the key.
Finally, we have to be careful when constructing a key to use _next when
searching for a large extent. Previously keys were relying on the magic
property that building a key from an extent length of 0 ended up at the
key value -0 = 0. That only worked because we never stored zero length
extents. We now store zero length orders so we can't use the negative
trick anymore. We explicitly treat 0 length extents carefully when
building keys and we subtract the order from U64_MAX to store the orders
from largest to smallest.
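One way to picture the order calculation and key packing follows; this
is only a sketch, and the real field names and exact zero-length
handling differ:

    /* sketch: coarse base-8 order of an extent length */
    static u64 len_to_order(u64 len)
    {
            u64 order = 0;

            while (len >= 8) {              /* floor(log8(len)) */
                    len >>= 3;
                    order++;
            }
            return order;
    }

    /* sketch of an order index key: orders sort from largest to smallest */
    key->zone   = ORDER_ZONE;               /* vs the blkno position zone */
    key->first  = cpu_to_le64(U64_MAX - len_to_order(len));
    key->second = cpu_to_le64(blkno);
    key->third  = cpu_to_le64(len);         /* precise length preserved */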
Signed-off-by: Zach Brown <zab@versity.com>
Introduce global volume options. They're stored in the superblock and
can be seen in sysfs files that use network commands to get and
set the options on the server.
Signed-off-by: Zach Brown <zab@versity.com>
A lock that is undergoing invalidation is put on a list of locks in the
super block. Invalidation requests put locks on the list. While locks
are invalidated they're temporarily put on a private list.
To support a request arriving while the lock is being processed we
carefully manage the invalidation fields in the lock between the
invalidation worker and the incoming request. The worker correctly
noticed that a new invalidation request had arrived but it left the lock
on its private list instead of putting it back on the invalidation list
for further processing. The lock was unreachable, wouldn't get
invalidated, and caused everyone trying to use the lock to block
indefinitely.
When the worker sees another request arrive for an invalidating lock it
needs to move the lock from the private list back to the invalidation
list.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we added an ilookup variant that ignored I_FREEING inodes
to avoid a deadlock between lock invalidation (lock->I_FREEING) and
eviction (I_FREEING->lock).
Now we're seeing similar deadlocks between eviction (I_FREEING->lock)
and fh_to_dentry's iget (lock->I_FREEING).
I think it's reasonable to ignore all inodes with I_FREEING set when
we're using our _test callback in ilookup or iget. We can remove the
_nofreeing ilookup variant and move its I_FREEING test into the
iget_test callback provided to both ilookup and iget.
Callers will get the same result; it will just happen without waiting
for a previously I_FREEING inode to leave. From ilookup they'll get
NULL instead of waiting. From iget they'll allocate and start to
initialize a newer instance of the inode and insert it alongside the
previous instance.
We don't have inode number re-use so we don't have the problem where a
newly allocated inode number is relying on inode cache serialization to
not find a previously allocated inode that is being evicted.
This change does allow for concurrent iget of an inode number that is
being deleted on a local node. This could happen in fh_to_dentry with a
raw inode number. But this was already a problem between mounts because
they don't have a shared inode cache to serialize them. Once we fix
that between nodes, we fix it on a single node as well.
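The shared test callback ends up looking something like this sketch;
the names are approximate:

    /* sketch of the iget/ilookup test callback shared by both paths */
    static int iget_test_sketch(struct inode *inode, void *arg)
    {
            u64 *ino = arg;

            /* ignore inodes being torn down instead of waiting on them */
            if (inode->i_state & I_FREEING)
                    return 0;

            return scoutfs_ino(inode) == *ino;
    }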
Signed-off-by: Zach Brown <zab@versity.com>
The vfs often calls filesystem methods with i_mutex held. This creates
a natural ordering of i_mutex outside of cluster locks. The file
aio_read method acquired i_mutex after its cluster lock, creating a
deadlock with other vfs methods like setattr.
The acquisition of i_mutex after the cluster lock was due to using the
pattern where we use the per-task lock to discover if we're the first
user of the lock in a call chain. Readpage has to do this, but file
aio_read doesn't. It should never be called recursively. So we can
acquire the i_mutex outside of the cluster lock and warn if we ever are
called recursively.
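The resulting ordering in aio_read is roughly this; the cluster lock
calls here are rough approximations, not the real scoutfs API:

    /* sketch: i_mutex is taken outside (before) the cluster lock */
    mutex_lock(&inode->i_mutex);
    WARN_ON_ONCE(called_recursively());     /* invented check */
    ret = cluster_lock_inode_sketch(sb, READ_MODE, inode, &lock);
    if (ret == 0) {
            ret = generic_file_aio_read(iocb, iov, nr_segs, pos);
            cluster_unlock_sketch(sb, lock);
    }
    mutex_unlock(&inode->i_mutex);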
Signed-off-by: Zach Brown <zab@versity.com>