- This makes ScoutFS packages more directly tied to a given kernel while
still allowing for weak modules usage when possible.
- For EL9, this still prevents the installation of kmod packages across
minor releases, which no longer have strict kABI guarantees.
The server's commit_log_trees has an error message that includes the
source of the error, but it's not used for all errors. The WARN_ON is
redundant with the message and is removed because it isn't filtered out
when we see errors from forced unmount.
Signed-off-by: Zach Brown <zab@versity.com>
The userspace fencing process wasn't careful about handling underlying
directories that disappear while it was working.
On the server/fenced side, fencing requests can linger after they've
been resolved by writing 1 to fenced or error. The script could come
back around to see the directory before the server finally removes it,
causing all later uses of the request dir to fail. We saw this in the
logs as a bunch of cat errors for the various request files.
On the local fence script side, all the mounts can be in the process of
being unmounted, so both the /sys/fs dirs and the mount itself can be
removed while we're working.
For both, when we're working with the /sys/fs files we read them without
logging errors and then test that the dir still exists before using what
we read. When fencing a mount, we stop if findmnt doesn't find the
mount and then raise a umount error if the /sys/fs dir exists after
umount fails.
And while we're at it, we have each script's logging append instead of
truncating (if, say, it's a log file instead of an interactive tty).
Signed-off-by: Zach Brown <zab@versity.com>
We're getting test failures from messages that our guests can be
unresponsive. They sure can be. We don't need to fail for this one
specific case.
Signed-off-by: Zach Brown <zab@versity.com>
Silence another error warning and assertion that assume that the
result of the errors is going to be persistent. When we're forcing an
unmount we've severed storage and networking.
Signed-off-by: Zach Brown <zab@versity.com>
mmap_stress gets completely stalled in lock messaging, starving
most of the mmap_stress threads, which causes it to delay and even
time out in CI.
Instead of spawning threads over all 5 test nodes, we reduce it
to spawning over only 2 artificially. This still does a good number
of operations on those nodes, and now the work is spread across the
two nodes evenly.
Additionally, I've added a minuscule (10ms) delay in between operations
that should hopefully be sufficient for other locking attempts to
settle and allow the threads to better spread the work.
This now shows that all the threads exit within < 0.25s on my test
machine, which is a lot better than the 40s variation that I was seeing
locally. Hopefully this fares better in CI.
Signed-off-by: Auke Kok <auke.kok@versity.com>
There's a scenario where mmap_stress gets enough resources that
two of the threads will starve the others, which then all take
a very long time catching up committing changes.
Because this test program didn't finish until all the threads had
completed a fixed amount of work, essentially these threads all
ended up tripping over each other. In CI this would exceed 6 hours,
while originally I intended this to run in about 100s or so.
Instead, cap the run time to ~30s by default. If threads exceed
this time, they will immediately exit, which causes any clog in
contention between the threads to drain relatively quickly.
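For illustration, a minimal sketch of the capped worker loop; OPS_PER_THREAD and do_one_mmap_op() are hypothetical names, not the test's actual symbols:
```
#include <time.h>

#define RUNTIME_CAP_SECS 30	/* the default cap described above */

static void *stress_worker(void *arg)
{
	time_t start = time(NULL);
	long i;

	for (i = 0; i < OPS_PER_THREAD; i++) {	/* fixed work count */
		do_one_mmap_op(arg);		/* hypothetical per-op helper */
		if (time(NULL) - start >= RUNTIME_CAP_SECS)
			break;	/* exit immediately so contention drains */
	}
	return NULL;
}
```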
Signed-off-by: Auke Kok <auke.kok@versity.com>
Assembling a srch compaction operation creates an item and populates it
with allocator state. If filling the allocation fails, it doesn't
cleanly unwind the allocation and undo the compaction item change;
instead it issues a warning.
This warning isn't needed if the error shows that we're in forced
unmount. The inconsistent state won't be applied, it will be dropped on
the floor as the mount is torn down.
Signed-off-by: Zach Brown <zab@versity.com>
The log merging process is meant to provide parallelism across workers
in mounts. The idea is that the server hands out a bunch of concurrent
non-intersecting work that's based on the structure of the stable input
fs_root btree.
The nature of the parallel work (cow of the blocks that intersect a key
range) means that the ranges of concurrently issued work can't overlap
or the work will all cow the same input blocks, freeing that input
stable block multiple times. We're seeing this in testing.
Correctness was intended by having an advancing key that sweeps sorted
ranges. Duplicate ranges would never be hit as the key advanced past
each one it visited. This was broken by the mapping of the fs item keys to
log merge tree keys by clobbering the sk_zone key value. It effectively
interleaves the ranges of each zone in the fs root (meta indexes,
orphans, fs items). With just the right log merge conditions that
involve logged items in the right places and partial completed work to
insert remaining ranges behind the key, ranges can be stored at mapped
keys that end up with ranges out of order. The server iterates over
these and ends up issuing overlapping work, which results in duplicated
frees of the input blocks.
The fix, without changing the format of the stored log tree items, is to
perform a full sweep of all the range items and determine the next item
by looking at the full precision stored keys. This ensures that the
processed ranges always advance and never overlap.
Signed-off-by: Zach Brown <zab@versity.com>
The xfstests test's golden output includes the full set of tests we expect to
run when no args are specified. If we specify args then the set of
tests can change and the test will always fail when they do.
This fixes that by having the test check the set of tests itself, rather
than relying on golden output. If args are specified then our xfstest
only fails if any of the executed xfstest tests failed. Without args,
we perform the same scraping of the check output and compare it against
the expected results ourselves.
It would have been a bit much to put that large file inline in the test
file, so we add a dir of per-test files in revision control. We can
also put the list of exclusions there.
We can also clean up the output redirection helper functions to make
them more clear. After xfstests has executed we want to redirect output
back to the compared output so that we can catch any unexpected output.
Signed-off-by: Zach Brown <zab@versity.com>
Add a little background function that runs during the test which
triggers a crash if it finds catastrophic failure conditions.
This is the second bg task we want to kill and we can only have one
function run on the EXIT trap, so we create a generic process killing
trap function.
We feed it the fenced pid as well. run-tests didn't log much of value
into the fenced log, and we're not logging the kills into it anymore, so we
just remove run-tests fenced logging.
Signed-off-by: Zach Brown <zab@versity.com>
Add an option to run-tests to have it loop over each test that will be
run a number of times. Looping stops if the test doesn't pass.
Most of the change in the per-test execution is indenting as we add the
for loop block. The stats and kmsg output are lifted up before the
loop.
Signed-off-by: Zach Brown <zab@versity.com>
The data_wait_err ioctl currently requires the correct data_version
for the inode to be passed in, or else the ioctl returns -ESTALE. But
the ioctl itself is just a passthrough mechanism for notifying data
waiters, which doesn't involve the data_version at all.
Instead, we can just drop checking the value. The field remains in the
headers, but we've marked it as being ignored from now on. The reason
for the change is documented in the header file as well.
This is all a lot simpler than having to modify/rev the data_waiters
interface to support passing back the data_version, because there isn't
any space left to easily do this, and then userspace would just pass it
back to the data_wait_err ioctl.
Signed-off-by: Auke Kok <auke.kok@versity.com>
scoutfs_alloc_prepare_commit() is badly named. All it really does is
put the references to the two dirty alloc list blocks in the allocator.
It must always be called if allocation was attempted, but it's easier
to require that it always be paired with _alloc_init().
If the srch compaction worker in the client sees an error it will send
the error back to the server without writing its dirty blocks. In
avoiding the write it also avoided putting the two block references,
leading to leaked blocks. We've been seeing rare messages with leaked
blocks in tests.
Signed-off-by: Zach Brown <zab@versity.com>
The .get_acl() method now gets passed a mnt_idmap arg, and we can now
choose to implement either .get_acl() or .get_inode_acl(). Technically
.get_acl() is a new implementation, and .get_inode_acl() is the old.
That second method now also gets an rcu flag passed, but we should be
fine either way.
Deeper under the covers however we do need to hook up the .set_acl()
method for inodes, otherwise setfacl will just fail with -ENOTSUPP. To
make this not super messy (it already is) we tack on the get_acl()
changes here.
This is all roughly ca. v6.1-rc1-4-g7420332a6ff4.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Similar to before when namespaces were added, they are now translated to
a mnt_idmap, since v6.2-rc1-2-gabf08576afe3.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The typical pattern of spinning while isolating a list_lru results in a
livelock if there are blocks with leaked refcounts. We're rarely seeing
this in testing.
We can have a modest array in each block that records the stack of the
caller that initially allocated the block and dump that stack for any
blocks that we're unable to shrink/isolate. Instead of spinning
shrinking, we can give it a good try and then print the blocks that
remain and carry on with unmount, leaking a few blocks. (Past events
have had 2 blocks.)
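A sketch of the record-at-alloc idea, assuming the modern stack_trace_save()/stack_trace_print() kernel API; sizes and field names are illustrative:
```
#include <linux/stacktrace.h>

#define BLOCK_STACK_DEPTH 16	/* a "modest array" per block */

struct block_private {
	/* ... refcount, lru entry, etc ... */
	unsigned long alloc_stack[BLOCK_STACK_DEPTH];
	unsigned int alloc_stack_len;
};

static void block_record_alloc_stack(struct block_private *bp)
{
	/* record the caller that initially allocated the block */
	bp->alloc_stack_len = stack_trace_save(bp->alloc_stack,
					       BLOCK_STACK_DEPTH, 1);
}

/* at unmount, dump the stack for any block we couldn't isolate */
static void block_dump_leak(struct block_private *bp)
{
	stack_trace_print(bp->alloc_stack, bp->alloc_stack_len, 0);
}
```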
Signed-off-by: Zach Brown <zab@versity.com>
The tests were using high ephemeral port numbers for the mount server's
listening port. This caused occasional failure if the client's
ephemeral ports happened to collide with the ports used by the tests.
This gathers all the port number configuration in one place and has a
quick check to make sure it doesn't wander into the current ephemeral
range. Then it updates all the tests to use the chosen ports.
Signed-off-by: Zach Brown <zab@versity.com>
The server's srch commit error warnings were a bit severe. The
compaction operations are a function of persistent state. If they fail
then the inputs still exist and the next attempt will retry whatever
failed. Not all errors are a problem, only those that result in partial
commits that leave inconsistent state.
In particular, we have to support the case where a client retransmits a
compaction request to a new server after a first server performed the
commit but couldn't respond. Throwing warnings when the new server gets
ENOENT looking for the busy compaction item isn't helpful. This came up
in tests as background compaction was in flight while tests unmounted and
mounted servers repeatedly to test lock recovery.
Signed-off-by: Zach Brown <zab@versity.com>
The block cache had a bizarre cache eviction policy that was trying to
avoid precise LRU updates at each block. It had pretty bad behaviour,
including only allowing reclaim of maybe 20% of the blocks that were
visited by the shrinker.
We can use the existing list_lru facility in the kernel to do a better
job. Blocks only exhibit contention as they're allocated and added to
per-node lists. From then on we only set accessed bits and the private
list walkers move blocks around on the list as we see the accessed bits.
(It looks more like a fifo with lazy promotion than a "LRU" that is
actively moving list items around as they're accessed.)
Using the facility means changing how we remove blocks from the cache
and hide them from lookup. We clean up the refcount inserted flag a bit
to be expressed more as a base refcount that can be acquired by
whoever's removing from the cache. It seems a lot clearer.
Signed-off-by: Zach Brown <zab@versity.com>
Add kernelcompat helpers for initial use of list_lru for shrinking. The
most complicated part is the walk callback type changing.
Signed-off-by: Zach Brown <zab@versity.com>
Readers can read a set of items that is stale with respect to items that
were dirtied and written under a local cluster lock after the read
started.
The active reader mechanism addressed this by refusing to shrink pages
that could contain items that were dirtied while any readers were in
flight. Under the right circumstances this can result in refusing to
shrink quite a lot of pages indeed.
This changes the mechanism to allow pages to be reclaimed, and instead
forces stale readers to retry. The gamble is that reads are much faster
than writes. A small fraction should have to retry, and when they do
they can be satisfied by the block cache.
Signed-off-by: Zach Brown <zab@versity.com>
The default TCP keepalive value is currently 10s, resulting in clients
being disconnected after 10 seconds of not replying to a TCP keepalive
packet. These keepalive values are reasonable most of the time, but
we've seen client disconnects where this timeout has been exceeded,
resulting in fencing. The cause is unknown at this time, but brief
network interruptions are suspected.
This change adds a configurable value for this specific client socket
timeout. It enforces that its value is above UNRESPONSIVE_PROBES, whose
value remains unchanged.
The default is raised from 10000ms (10s) to 60s. We assume this value
is much better suited for customers; it has been briefly trialed,
suggesting that it may help ride out network level interruptions
better.
Signed-off-by: Auke Kok <auke.kok@versity.com>
It's possible that scoutfs_net_alloc_conn() fails due to -ENOMEM, which
is legitimately a failure, thus the code here releases the sock again.
But the code block here sets `ret = -ENOMEM` and then restarts the loop,
which immediately sets `ret = kernel_accept()`, thus overwriting the
-ENOMEM error value.
We can argue that an ENOMEM error situation here is not catastrophic.
If this is the first time we hit an ENOMEM situation here while trying
to accept a new client, we can just release the socket and wait for the
client to try again. If the kernel at that point is still out of memory
to handle the new incoming connection, that will then cascade down and
clean up the whole listener at that point.
The alternative is to let this error path unwind out and break down the
listener immediately, something the code today doesn't do, so we keep
the behavior the same.
I've opted therefore to replace the `ret = -ENOMEM` assignment with a
comment explaining why we're ignoring the error situation here.
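For reference, a rough sketch of the resulting flow; declarations and the scoutfs_net_alloc_conn() signature are simplified here:
```
/* simplified accept loop; real signatures and setup differ */
struct socket *new_sock;
struct scoutfs_net_connection *conn;
int ret;

for (;;) {
	ret = kernel_accept(listen_sock, &new_sock, 0);
	if (ret < 0)
		break;		/* real socket errors stop the listener */

	conn = scoutfs_net_alloc_conn(sb, new_sock);
	if (!conn) {
		/*
		 * ENOMEM isn't fatal here: drop this socket and let the
		 * client retry.  If memory is still exhausted on the
		 * retry, the failure cascades and tears down the whole
		 * listener anyway.
		 */
		sock_release(new_sock);
		continue;
	}
	/* hand conn off to its recv/send workers */
}
```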
Signed-off-by: Auke Kok <auke.kok@versity.com>
If scoutfs_send_omap_response fails for any reason, req is NULL and we
would hit a hard NULL deref during unwinding.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This function returns a stack pointer to a struct scoutfs_extent, after
setting start, len to an extent found in the proper zone, but it leaves
map and flags members unset.
Initializing the struct to {0,} avoids passing uninitialized values up
the call stack.
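A minimal sketch of the idea, with illustrative helper and struct names:
```
static void find_zone_extent(struct zone *zone, struct scoutfs_extent *ret)
{
	struct scoutfs_extent ext = {0,};	/* map and flags start at 0 */

	ext.start = zone_first_free(zone);	/* illustrative helpers */
	ext.len = zone_free_len(zone);
	*ret = ext;				/* every member initialized */
}
```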
Signed-off-by: Auke Kok <auke.kok@versity.com>
Several of the inconsistency error paths already correctly `goto out`
but this one has a `break`. This would result in doing a whole lot of
work on corrupted data.
Make this error path go to `out` instead as the others do.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In these two error conditions we explicitly set `ret = -EIO` but then
`break`, which immediately sets `ret = 0` again, masking away a critical
error code that should be returned.
Instead, `goto out` retains the EIO error value for the caller.
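Schematically (the surrounding code here is illustrative):
```
	list_for_each_entry(item, &items, entry) {
		if (item_is_corrupt(item)) {	/* illustrative check */
			ret = -EIO;
			goto out;	/* was: break, then ret = 0 below */
		}
		/* ... normal processing ... */
	}
	ret = 0;
out:
	return ret;
```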
Signed-off-by: Auke Kok <auke.kok@versity.com>
The value of `ret` is not initialized. If the writeback list is empty,
or, if igrab() fails on the only inode on the list, the value
of `ret` is returned without being initialized. This would cause the
caller to needlessly retry, and possibly make things worse.
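A sketch of the fixed shape, with hypothetical list and helper names:
```
static int write_inodes(struct list_head *wb_list)
{
	struct wb_entry *ent;
	int ret = 0;	/* the fix: empty list or failed igrab() returns 0 */

	list_for_each_entry(ent, wb_list, head) {
		struct inode *inode = igrab(ent->inode);

		if (!inode)
			continue;	/* inode is going away, skip it */

		ret = write_one_inode(inode);	/* illustrative helper */
		iput(inode);
		if (ret)
			break;
	}
	return ret;
}
```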
Signed-off-by: Auke Kok <auke.kok@versity.com>
We shouldn't copy the entire _dirent struct and then copy in the name
again right after; just stop at offsetof(struct, name).
Now that we're no longer copying the uninitialized name[3] from ent,
there is no longer a possible 1-byte leak here either.
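Roughly (struct name illustrative):
```
/* copy only the fixed fields, up to the flexible name array */
memcpy(dent, ent, offsetof(struct scoutfs_dirent, name));
/* then the real name bytes; ent's own name[] was never initialized */
memcpy(dent->name, name, name_len);
```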
Signed-off-by: Auke Kok <auke.kok@versity.com>
Ensure that we reschedule even if this happens. Maybe it'll recover. If
not, we'll have other issues elsewhere first.
Signed-off-by: Auke Kok <auke.kok@versity.com>
ARRAY_SIZE(...) will return `3` for this array with members from 0 to 2,
therefore arr[3] is out of bounds. The array length test is off by one
and needs fixing.
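That is:
```
/* valid indexes run 0 .. ARRAY_SIZE(arr) - 1 */
if (index >= ARRAY_SIZE(arr))	/* was: index > ARRAY_SIZE(arr) */
	return -EINVAL;
```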
Signed-off-by: Auke Kok <auke.kok@versity.com>
This removes the KC_MSGHDR_STRUCT_IOV_ITER kernel compat.
kernel_{send,recv}msg() initializes either msg_iov or msg_iter.
This isn't a clean revert of "69068ae2 Initialize msg.msg_iter from
iovec." because previous patches fixed the order of arguments, and the
net send caller was removed.
Signed-off-by: Zach Brown <zab@versity.com>
Previous work had the receiver try to receive multiple messages in bulk.
This does the same for the sender.
We walk the send queue and initialize a vector that we then send with
one call. This is intentionally similar to the single message sending
pattern to avoid unintended changes.
Along with the changes to receive in bulk this ended up increasing the
message processing rate by about 6x when both send and receive were
going full throttle.
Signed-off-by: Zach Brown <zab@versity.com>
When the msg_iter compat was added the iter was initialized with nr_segs
and count swapped. I'm not convinced this had any effect because the
kernel_{send,recv}msg() call would initialize msg_iter again with the
correct arguments.
Signed-off-by: Zach Brown <zab@versity.com>
Our messaging layer is used for small control messages, not large data
payloads. By calling recvmsg twice for every incoming message we're
hitting the socket lock reasonably hard. With senders doing the same,
and a lot of messages flowing in each direction, the contention is
non-trivial.
This changes the receiver to copy as much of the incoming stream into a
page that is then framed and copied again into individual allocated
messages that can be processed concurrently. We're avoiding contention
with the sender on the socket at the cost of additional copies of our
small messages.
Signed-off-by: Zach Brown <zab@versity.com>
The lock client has a requirement that it can't handle some messages
being processed out of order. Previously it had detected message
ordering itself, but had missed some cases. Receive processing was then
changed to always call lock message processing from the recv work to
globally order all lock messages.
This inline processing was contributing to excessive latencies in making
our way through the incoming receive queue, delaying work that would
otherwise be parallel once we got it off the recv queue.
This was seen in practice as a giant flood of lock shrink messages
arrived at the client. It processed each in turn, starving a statfs
response long enough to trigger the hung task warning.
This fix does two things.
First, it moves ordered recv processing out of the recv work. It lets
the recv work drain the socket quickly and turn it into a list that the
ordered work is consuming. Other messages will have a chance to be
received and queued to their processing work without having to wait for
the ordered work to be processed.
Secondly, it adds parallelism to the ordered processing. The incoming
lock messages don't need global ordering, they need ordering within each
lock. We add an arbitrary but reasonable number of ordered workers and
hash lock messages to each worker based on the lock's key.
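A minimal sketch of the hashing, assuming jhash() over the key bytes; worker count and names are illustrative:
```
#include <linux/jhash.h>

#define NR_ORDERED_WORKERS	8	/* arbitrary but reasonable */

/*
 * Messages for the same lock always hash to the same worker, so
 * per-lock ordering holds while different locks proceed in parallel.
 */
static u32 ordered_worker_nr(const void *key, u32 key_len)
{
	return jhash(key, key_len, 0) % NR_ORDERED_WORKERS;
}
```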
Signed-off-by: Zach Brown <zab@versity.com>
Make sure to log an error if the SCOUTFS_QUORUM_EVENT_END
update_quorum_block() call fails in scoutfs_quorum_worker().
Correctly print if the reader or writer failed when logging errors
in update_quorum_block().
Signed-off-by: Chris Kirby <ckirby@versity.com>
During log compaction, the SRCH_COMPACT_LOGS_PAD_SAFE trigger was
generating inode numbers that were not in sorted order. This resulted
in later failures during srch-basic-functionality, because we were
winding up with out of order first/last pairs and merging incorrectly.
Instead, reuse the single entry in the block repeatedly, generating
zero-padded pairs of this entry that are interpreted as create/delete
and vanish during searching and merging. These aren't encoded in the
normal way, but the extra zeroes are ignored during the decoding phase.
Signed-off-by: Chris Kirby <ckirby@versity.com>
Make sure that the orphan scanners can see deletions after forced unmounts
by waiting for reclaim_open_log_tree() to run on each mount, and waiting for
finalize_and_start_log_merge() to run and not find any finalized trees.
Do this by adding two new counters: reclaimed_open_logs and
log_merge_no_finalized and fixing the orphan-inodes test to check those
before waiting for the orphan scanners to complete.
Signed-off-by: Chris Kirby <ckirby@versity.com>
Tests such as quorum-heartbeat-timeout were failing with EIO messages in
dmesg output due to expected errors during forced unmount. Use ENOLINK
instead, and filter all errors from dmesg with this errno (67).
Signed-off-by: Chris Kirby <ckirby@versity.com>
This test compiles an earlier commit from the tree that is starting to
fail due to various changes on the OS level, most recently due to sparse
issues with newer kernel headers. This problem will likely increase
in the future as we add more supported releases.
We opt to run this test only on el7 for now. While we could have
made it skip the sparse checks that fail on el8, it will suffice at
this point if this just works on one of the supported OS versions
during testing.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The iput worker can accumulate quite a bit of pending work to do. We've
seen hung task warnings while it's doing its work (admittedly in debug
kernels). There's no harm in throwing in a cond_resched so other tasks
get a chance to do work.
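Schematically (pop_next_iput() is a hypothetical helper):
```
/* drain the pending iput list, yielding between entries */
while ((inode = pop_next_iput(sbi)) != NULL) {
	iput(inode);
	cond_resched();		/* give other tasks a chance to run */
}
```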
Signed-off-by: Zach Brown <zab@versity.com>
It's possible for the quorum worker to be preempted for a long period,
especially on debug kernels. Since we only check for how much time
has passed, it's possible for a clean receive to inadvertently
trigger an election. This can cause the quorum-heartbeat-timeout
test to fail due to observed delays outside of the expected bounds.
Instead, make sure we had a receive failure before comparing timestamps.
Signed-off-by: Chris Kirby <ckirby@versity.com>
In finalize_and_start_log_merge(), we overwrite the server
mount's log tree with its finalized form and then later write out
its next open log tree. This leaves a window where the mount's
srch_file is nulled out, causing us to lose any search items in
that log tree.
This shows up as intermittent failures in the srch-basic-functionality
test.
Eliminate this timing window by doing what unmount/reclaim does when
it finalizes, by moving the resources from the item that we finalize
into server trees/items as it finalizes. Then there is no window
where those resources exist only in memory until we create another
transaction.
Signed-off-by: Chris Kirby <ckirby@versity.com>
It's entirely likely that the trigger here is consumed by a read of a
dirty block from any unrelated or background reader. Avoid that by putting
the trigger at the end of the condition list.
Now that the order is swapped, we have to avoid a null deref in
block_is_dirty(bp) here, as well.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The issue with the previous attempt to fix the orphan-inodes test was
that we would regularly exceed the 120s timeout value put in there.
Instead, in this commit, we change the code to add a new counter to
indicate orphan deletion progress. When orphan inodes are deleted, the
increment of this counter indicates progress happened. Conversely,
every time the counter doesn't increment, and the orphan scan attempts
counter increments, we know that there was no more work to be done.
For safety, we wait until 2 consecutive scan attempts were made without
forward progress in the test case.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This reverts commit 138c7c6b49.
The timeout value here is still exceeded by CI test jobs, thus
causing the test to fail.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Adjusting hung_task_timeout_secs is still needed for this test to pass
with a debug kernel. But the logic belongs on the platform side.
Signed-off-by: Chris Kirby <ckirby@versity.com>
The try_drain_data_freed() path was generating errors about overrunning
its commit budget:
scoutfs f.2b8928.r.02689f error: 1 holders exceeded alloc budget av: bef 8185 now 8036, fr: bef 8185 now 7602
The budget overrun check was using the current number of commit holders
(in this case one) instead of the maximum number of concurrent holders
(in this case two). So even well behaved paths like try_drain_data_freed()
can appear to exceed their commit budget if other holders dirty some blocks
and apply their commits before the try_drain_data_freed() thread does its
final budget reconciliation.
Signed-off-by: Chris Kirby <ckirby@versity.com>
Free extents are stored in two btrees: one sorted by block number, one
by size. So if you insert a new extent between two existing extents, you can
be modifying two items in the by-block-number tree. And depending on the size
of those items, that can result in three items over in the by-size tree.
So that's a 5x multiplier per level.
If we're shrinking the tree and adding more freed blocks, we're conceptually
dirtying two blocks at each level to merge. (Currently *2 in the code.)
But if they fall under the low water mark then one of them is freed, so we
can have *3 per level in this case.
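Summarizing the worst-case dirty-item arithmetic above as a sketch (names illustrative, not the code's actual constants):
```
/* worst case dirty items per level when inserting one free extent */
#define BYBLKNO_ITEMS	2	/* neighbors touched in the by-blkno tree */
#define BYSIZE_ITEMS	3	/* their size changes hit the by-size tree */
#define INSERT_PER_LEVEL (BYBLKNO_ITEMS + BYSIZE_ITEMS)	/* 5x */

/* shrinking: two blocks merge, and a third may be freed below low water */
#define SHRINK_PER_LEVEL 3
```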
Signed-off-by: Chris Kirby <ckirby@versity.com>
On el8, sparse is at 0.6.4 in epel-release, but it fails with:
```
[SP src/util.c]
src/util.c: note: in included file (through /usr/include/sys/stat.h):
/usr/include/bits/statx.h:30:6: error: not a function <noident>
/usr/include/bits/statx.h:30:6: error: bad constant expression type
```
This is due to us needing O_DIRECT from <fcntl.h>, so we set _GNU_SOURCE
before including it, but this causes (through _USE_GNU in sys/stat.h)
statx.h to be included, and that has __has_include, and sparse is too
dumb to understand it.
Just shut it up.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This fixes a potential fence post failure like the following:
error: 1 holders exceeded alloc budget av: bef 7407 now 7392, fr: bef 8185 now 7672
The code is only accounting for the freed btree blocks, not the dirtying of
other items. So it's possible to be at exactly (COMMIT_HOLD_ALLOC_BUDGET / 2),
dirty some log btree blocks, loop again, then consume another
(COMMIT_HOLD_ALLOC_BUDGET / 2) and blow past the total budget.
In this example, we went over by 13 blocks.
By only consuming up to 1/8 of the budget on each loop, and committing when we
have consumed 3/4 of the budget, we can avoid the fence post condition.
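A sketch of the budgeting, with illustrative helper names:
```
#define LOOP_BUDGET	(COMMIT_HOLD_ALLOC_BUDGET / 8)		/* per pass */
#define COMMIT_MARK	(COMMIT_HOLD_ALLOC_BUDGET * 3 / 4)	/* commit point */

while (more_to_free) {
	/* each pass can consume at most 1/8 of the budget */
	consumed += free_some_extents(LOOP_BUDGET);

	/* committing at 3/4 leaves headroom, avoiding the fence post */
	if (consumed >= COMMIT_MARK) {
		commit_and_reopen_transaction();
		consumed = 0;
	}
}
```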
Signed-off-by: Chris Kirby <ckirby@versity.com>
The `-R` option will shuffle the order in which tests are executed.
The testing order shouldn't affect the outcome of any of the tests, but
in practice many of these tests will execute code slightly differently
based on the history of the filesystem, resources allocated, memory
usage etc. of tests that were executed before. Shuffling the order of
tests therefore introduces small semi-random variations in the
environment.
The xfstests test is the only one that can't yet be shuffled into the
mix, so it is kept at the end. This is because it leaves the filesystems
unmounted. At a later point we may want to address this.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Fail the build if we don't check with sparse in both the kernel and
userspace utils. Add a filtering wrapper to the kernel build so that we
have a place to filter out uninteresting errors from kernel sources that
we're building against.
Signed-off-by: Zach Brown <zab@versity.com>
This is another example of refactoring a loop to avoid sparse warnings
from doing something in the else branch of a failed trylock if. We want to
drop and reacquire the lock if the trylock fails so we do it every loop
iteration. This shouldn't be experiencing much contention because most
of the cov users are usually done under locks and invalidation has
excluded lock holders. So the additional lock and unlock noise should
be local.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_item_write_done() acquires the cinf dirty_lock and pg rwlock out
of order. It uses a trylock to detect failure and back off of both
before retrying.
sparse seems to have some peculiar sensitivity to following the else
branch from a failed trylock while already in a context. Doing that
consistently triggered the spurious mismatched context warning.
This refactors the loop to always drop and reacquire the dirty_lock
after attempting the trylock. It's not great, but this shouldn't be very
contended because the transaction write has serialized the write lock
holders that would be trying to dirty items. The silly lock noise will
be mostly cached.
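A rough sketch of the refactored loop, assuming the spinlock/rwlock pairing described above:
```
retry:
	spin_lock(&cinf->dirty_lock);
	if (!write_trylock(&pg->rwlock)) {
		/* always back off the same way; no work in an else branch */
		spin_unlock(&cinf->dirty_lock);
		cpu_relax();
		goto retry;
	}

	/* ... move the page's items to clean ... */

	write_unlock(&pg->rwlock);
	spin_unlock(&cinf->dirty_lock);
```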
Signed-off-by: Zach Brown <zab@versity.com>
Looks like the compiler isn't smart enough to understand the
pass-by-pointer value, and we can initialize it here easily.
make[1]: Entering directory '/usr/src/kernels/5.14.0-503.26.1.el9_5.x86_64'
CC [M] /home/auke/scoutfs/kmod/src/server.o
/home/auke/scoutfs/kmod/src/server.c: In function ‘fence_pending_recov_worker’:
/home/auke/scoutfs/kmod/src/server.c:4170:23: error: ‘addr.v4.addr’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
4170 | ret = scoutfs_fence_start(sb, rid, le32_to_be32(addr.v4.addr),
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4171 | SCOUTFS_FENCE_CLIENT_RECOVERY);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
There's still the obvious issue here that we'd intended to support
ipv6, but we just disregard that here.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Occasionally, we have some tests fail because these kills produce:
tests/lock-recover-invalidate.sh: line 42: 9928 Terminated
Even though we expected them to be silent. In these particular cases we
already don't care about this output.
We borrow the silent_kill() function from orphan-inodes and promote it
to t_silent_kill() in funcs/exec.sh, and then use it everywhere where
appropriate.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The current test sequence performs the unlink and immediately tests
whether enough resources are available to create new files again, and
this consistently fails.
One of my crummy VMs takes a good 12 seconds before the `touch` actually
succeeds. We care about the filesystem eventually returning from ENOSPC,
and certainly we don't want it to take forever, but there is a period
after our first ENOSPC error and cleanup during which we expect ENOSPC
to persist for a bit longer.
Make the timeout 120s. As soon as the `touch` completes, exit the wait
loop.
Signed-off-by: Auke Kok <auke.kok@versity.com>
If run without `-m` (explicit mkfs) in subsequent testing, old test
data files may break several tests. Most failures are -EEXIST, but
there are some more subtle ones.
This change erases any existing test dir as needed just before we
run the tests, and avoids the issue entirely.
I considered doing a `mv dir dir.$$ && rm -rf dir.$$ &` alternative
solution but that likely will interfere disproportionally with
tests that do disconnects and other things that can be impacted by an
unlink storm.
This has an obvious performance aspect - tests will be a little
slower to start on subsequent runs. In CI, this will effectively be
a no-op though.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This test regularly fails in CI when the 15 seconds elapses and the
system still hasn't concluded the mount log merges and orphan inode
scans needed to unlink the test files.
Instead of just extending the timeout value, we test-and-retry for 120s.
This hopefully is faster in most cases. My smallest VM needs about 6s-8s
on average.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The client transaction commit worker has a series of functions that it
calls to commit the current transaction and open the next one. If any
of them fail, it retries all of them from the beginning each time until
they all succeed.
This pattern behaves badly since we added the strict get_trans_seq and
commit_trans_seq latching in the log_trees. The server will only commit
the items for a get or commit request once, and will fail a commit
request if it isn't given the seq that matches the current item.
If the server gets an error it can have persisted items while sending an
error to the client. If this error was for a get request, then the
client will retry all of its transaction write functions. This includes
the commit request which is now using a stale seq and will fail
indefinitely. This is visible in the server log as:
error -5 committing client logs for rid e57e37132c919c4f: invalid log trees item get_trans_seq
The solution is to retry the commit and get phases independently. This
way a failed get will be retried on its own without running through the
commit phase that had succeeded. The client will eventually get the
next seq that it can then safely commit.
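Schematically, with illustrative names and simplified retry conditions:
```
/* illustrative shape; the real retry conditions and backoff are richer */
static int commit_and_reopen(struct client_info *client)
{
	int ret;

	do {	/* the commit uses the seq we already hold */
		ret = client_commit_log_trees(client);
	} while (ret < 0 && !shutting_down(client));
	if (ret < 0)
		return ret;

	do {	/* only then fetch the next seq; a failed get retries alone */
		ret = client_get_log_trees(client);
	} while (ret < 0 && !shutting_down(client));

	return ret;
}
```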
Signed-off-by: Zach Brown <zab@versity.com>
At the end of get_log_trees we can try and drain the data_freed extent
tree, which can take multiple commits. If a commit fails then the
blocks are still dirty in memory. We can't send references to those
blocks to the client. We have to return an error and not send the
log_trees, like the main get_log_trees does. The client will retry and
eventually get a log_trees that references blocks that were successfully
committed.
Signed-off-by: Zach Brown <zab@versity.com>
Stored as `results/scoutfs.tap`, this file contains TAP format 14
generated test results.
Embedded in the output are some metadata so that these files can be
aggregated and stored in an unique and deduplicating way, but using a
generated UUID at the start of testing. The file itself also catches git
ID, date, and kernel version, as well as the (possibly altered) test
sequence used.
Any test that has diff or dmesg output will be considered failed, and a
copy of the relevant data is included as comments.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This happens only with the basic-truncate test. It's the only user
of the `yes` program.
The `yes` command normally fails gracefully under the usual runs that
are attached to some terminal. But when the test script runs entirely
under something else, it will throw a needless error message that
pollutes the test output:
`yes: standard output: Broken pipe`
Adjust the redirect to omit all stderr for `yes` in this case.
Signed-off-by: Auke Kok <auke.kok@versity.com>
scoutfs cli commands were using a helper that tried to perform word
expansion on the path argument. This was done with the intent of
providing the convenience of shell expansion (env vars, ~) within the
cli command argument.
But it breaks paths that accidentally have their file names match the
syntax that wordexp supports. "[ ]" tripped up files in the wild.
We don't need to provide shell expansion functionality in our argument
parsing. The shell can do that. The cli must pass the arguments
straight through, no parsing at all.
Signed-off-by: Zach Brown <zab@versity.com>
Very old copy/paste bug here: we want to update new_inode's ctime
instead. old_inode is already updated.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We need to ensure we're emitting dents with the proper position
and we already have them as part of our dent. The only caveat is
to increment ctx->pos once beyond the list to make sure the caller
doesn't call us once more.
Signed-off-by: Auke Kok <auke.kok@versity.com>
While debugging a double unlock error we hit this condition and
debugging would have been a lot easier had we enforced this simple
constraint that we can't decrement the lock users count if it's
already 0.
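Roughly (name illustrative):
```
static void lock_users_dec(struct scoutfs_lock *lck)
{
	BUG_ON(lck->users == 0);	/* a double unlock would trip this */
	lck->users--;
}
```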
Signed-off-by: Auke Kok <auke.kok@versity.com>
Similar to fiemap, readdir and walk_inodes, this method could
put_user during a page fault, potentially causing a deadlock.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Similar to readdir and fiemap vfs methods, we can't copy to user while
holding cluster locks. The previous comment about it being safe no
longer applies, and this could deadlock.
Rewrite the loop to iterate and store entries in a page, then flush
the page contents while not holding a cluster lock.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Now that we support mmap writes, at any point in time we could
pagefault and lock for writes. That means that, just like readdir,
we can no longer lock and copy_to_user, since it also may page fault
and thus deadlock.
We statically allocate 32 extent entries on the stack and use
these to shuffle out fiemap entries a batch at a time, locking and
unlocking around collecting and fiemap_fill_next_extent.
Signed-off-by: Auke Kok <auke.kok@versity.com>
dir_emit() will copy_to_user, which can pagefault. If this happens while
cluster locked, we could deadlock.
We use a single page to stage dir_emit data, and iterate between
fetching dirents while locked, and emitting them while not locked.
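A sketch of the two-phase loop, with hypothetical staging helpers:
```
	for (;;) {
		lock_cluster_read(inode);		/* assumed helpers */
		nr = fill_dirents(inode, ctx->pos, page);
		unlock_cluster_read(inode);

		if (nr <= 0)
			break;

		for (i = 0; i < nr; i++) {
			struct staged_dent *d = nth_dent(page, i);

			/* dir_emit() can copy_to_user and fault safely now */
			if (!dir_emit(ctx, d->name, d->name_len,
				      d->ino, d->type))
				return 0;
			ctx->pos = d->pos + 1;
		}
	}
```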
Signed-off-by: Auke Kok <auke.kok@versity.com>
These 2 sections of compat for readdir are wholly obsolete and can be
hard dropped, which restores the method to look like current upstream
code.
This was added in ddd1a4e.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We merely trace exit values and position, and ignore length.
Because vm_fault_t is __bitwise, sparse will loudly complain about
a plain cast to u32, so we must __force (on el8). ret will be 512 in
normal cases.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Now that all of these should be passing, we enable all mmap() tests in
xfstests, and update the golden output with the new tests.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Two test programs are added. The run time is about 1min on my el7
instance.
The test script finishes up with a read/write mmap test on offline
extents to verify the data wait paths in those functions.
One program will perform vfs read/write and mmap read/write calls on
the same file from across 5 threads (mounts) repeatedly. The goal
is to assure there are no locking issues between read/write paths.
The second test program performs consistency checking on a file that is
repeatedly written/read using memory maps and normal reads and writes,
and the content is verified after every operation.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Add support for writable MAP_SHARED mmap()ings. Avoid issues with late
writepage()s building transactions by doing the block_write_begin() work in
scoutfs_data_page_mkwrite(). Ensure the page is marked dirty and prepared
for write, then let the VM complete the write when the page is flushed or
invalidated.
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Auke Kok <auke.kok@versity.com>
The list alloc blocks have an array of blknos that are offset by a start
field in the block header. The print code wasn't using that and was
always referencing the beginning of the array, which could miss blocks.
Signed-off-by: Zach Brown <zab@versity.com>
Since kABI migrations across minor versions are a thing of the past going
forward, we now:
- Detect if we're on EL9
- If so, add a requirement on the various flavors of release package to
that specific major.minor version
This appropriately does not allow upgrades across minor versions.
Additional blkdev/bdev changes now cause this call to be removed as
well, resulting in us having to use yet another API to do the same for
el9_5.
The changes are a little more subtle as now the bdev_mount() call passes
a custom bd_holder_ops that we must match or else throw a WARN_ON, so we
switch to using sbi as our holder arg instead.
Make sure to bdev_fput and not fput, since we don't want to have our
private data cleanup deferred, failing xfstests generic/604.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The assignments to it are no longer needed at all. All references can be
dropped since v6.4-rc4-163-g0d625446d0a4.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Since v6.5-rc1-7-g9b6304c1d537, current_time() is no longer
extern, so we need to update this grep regex to continue to match.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Add a quick test that races readers and shrinking to stress lock object
refcount racing between concurrent lock request handling threads in the
lock server.
Signed-off-by: Zach Brown <zab@versity.com>
Right now a client requesting a null mode for a lock will cause
invalidations of all existing granted modes of the lock across the
cluster.
This is unnecessarily broad. The absolute requirement is that a null
request invalidates other existing granted modes on the client. That's
how the client safely resolves shrinking's desire to free locks while
the locks are in use. It relies on turning it into the race between use
and remote invalidation.
But that only requires invalidating existing grants from the requesting
client, not all clients. It is always safe for null grants to coexist
with all grants on other clients. Consider the existing mechanics
involving null modes. First, null locks are instantiated on the client
before sending any requests at all. At any given time newly allocated
null locks are coexisting with all existing locks across the cluster.
Second, the server frees the client entry tracking struct the moment it
sends a null grant to the client. From that point on the client's null
lock can not have any impact on the rest of the lock holders because the
server has forgotten about it.
So we add this case to the server's test that two client lock modes are
compatible. We take the opportunity to comment the heck out of this
function instead of making it a dense boolean composition. The only
functional change is the addition of this case, the existing cases are
refactored but unchanged.
Signed-off-by: Zach Brown <zab@versity.com>
When freeing acked responses in the net layer we sweep the send and
resend queues looking for queued responses up to the sequence number
we've had acked. The code that did this used a weird pattern of
returning ints and adding them, which gave me pause. Clean it up to use
bools and `|` (not the short-circuiting `||`) to more obviously communicate
what's going on.
Signed-off-by: Zach Brown <zab@versity.com>
Over time some fields have been added to the lock struct which haven't
been added to the lock tracing output. Add some of the more relevant
lock fields to tracing.
Signed-off-by: Zach Brown <zab@versity.com>
Lock object lifetimes in the lock server are protected by reference
counts. References are acquired while holding a lock on an rbtree.
Unfortunately, the decision to free lock objects wasn't tested while
also holding that lock on the rbtree. A caller putting their object
would test the refcount, then wait to get the rbtree lock to remove it
from the tree.
There's a possible race where the decision is made to remove the object
but another reference is added before the object is removed. This was
seen in testing and manifested as an incoming request handling path adding
a request message to the object before it is freed, losing the message.
Clients would then hang on a lock that never saw a response because
their request was freed with the lock object.
The fix is to hold the rbtree lock when testing the refcount and
deciding to free. It adds a bit more contention but not significantly
so, given the wild existing contention on a per-fs spinlocked rbtree.
Signed-off-by: Zach Brown <zab@versity.com>
Previously, any t_skip would cause the final test result to be a failure
because up until now no test should have been skipped.
However, with format-version-forward-back not being compatible with el9,
we are going to rely on el7/8 testing for that test solely, and
therefore we have to allow skipping of this test on el9 and newer OS
versions.
We add `t_skip_permitted` to signal this from the test case to the
run-tests.sh script. A new exit code is passed, and all accounting is
updated to reflect that a test was skipped, but this was permitted. We
modify format-version-forward-back to use this new exit path.
Signed-off-by: Auke Kok <auke.kok@versity.com>
I'm seeing more and more of these as audit is enabled in el8 and el9
images I am using for testing, and during ENOSPC tests this has a chance
of triggering process accounting suspension, and subsequent resume.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In v1.18-10-g5507ee5, we changed the test code away from loopback
to device-mapper, which simplified our DUT setup code.
However, this results in the occasional `device changed size` messages
now being emitted by the `dm` driver instead of the `loop` kernel
module. We have to additionally ignore these kernel messages from now
on as well.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In v5.17-rc4-53-g3a3bae50af5d, we can no longer leave this
method unhooked, as the mm caller now calls it blindly. In-kernel
filesystems all were fixed in this change.
aops->invalidatepage was the old aops method that would free pages
with private attached data. This method is replaced with the
new invalidate_folio method. If this method is NULL, the memory
will become orphaned. (v5.17-rc4-29-gf50015a596fa)
Signed-off-by: Auke Kok <auke.kok@versity.com>
I've pushed a tag/release to scoutfs-xfstests-dev instead of a full
blown branch. This seems simpler and cleaner than using branches,
because we're going to end up rebasing these things a lot. However, we
can't --track tags, so, if the branch name passed to -x is actually a
tag instead of a branch, we have to omit the --track option here.
Signed-off-by: Auke Kok <auke.kok@versity.com>
CI testing needs to know which xfstests branch to use on all OSs.
We can't just use the el9 xfstests branch on el9 only, because we
need to run the same el9 xfstests on el8 and el7 as well, otherwise
testing will just fail.
So, we put a marker file in our git repo that tells us that we're
not going to use the default `scoutfs` branch from scoutfs-xfstests-dev
but our own special tag or branch. The CI job then should pass the
proper -x {branch} flag to the run-tests.sh script.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The new version of xfstests adds a _lot_ more tests to our mix. Many
of the new ones will auto enable or auto skip as needed.
There are tests we can't or won't support that will be in future
xfstests. Disable them now so we can avoid dealing with them later.
Quite a few fall into "we don't support these types of mounting yet",
mostly bind-mount or dm-mapper things. We disable all the swapfile
tests flat out.
A few tests fail on el7 but not el8/9 but we don't have a way to run
them without failing yet, so disable them as well.
Update golden with the proper new array of tests. This all requires
the `auke/scoutfs-el9` branch in `versity/scoutfs-xfstests-dev`.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Using t_skip, we just skip this test on el9.
If we ever want to add a formatversion 2->3 test, perhaps we should
just add a separate test script, instead of going over a static array.
But let's not worry about this too much right now.
Signed-off-by: Auke Kok <auke.kok@versity.com>
It turns out that on el9, `bash -c` prints out `bash: line 1: cd..`
instead of `line 0:` on el7 or el8. So discard all the stderr from
these `cd` lines entirely and just rely on the expected echo
output to stdout.
Signed-off-by: Auke Kok <auke.kok@versity.com>
There's filefrag already, and that works, but its output is very
inconsistent between various OS release versions, and it has already
meant that we needed to adjust tests to account for little but
insignificant changes. That's a lot more work than it's worth, and the
output changes even more in el9.
This adds `scoutfs get-fiemap FILE` and prints out block extent
info with flags that we care about as an abbreviated letter: U for
Unwritten, L for Last, and O for Unknown (as in, "offline").
The -P/--physical and -L/--logical options turn off logical or physical
offset display, in case you only want to see the offsets in either
units. You can pass -b/--byte to display offsets and lengths in
byte values. The block size will then be obtained from fstat() of
the queried file (4096 for scoutfs).
I've removed all uses of filefrag from our scoutfs tests. Xfstests
still calls it but their internal diff takes care of that issue.
Where needed and appropriate, the tests are adjusted so that the output
of `scoutfs get-fiemap` is as close as it can be to what it used to be,
so that reading the test results allows the quick view of what might
have been going wrong.
There are some output strings I have not bothered to update because
there's no real value to updating every output string to match,
and we just adjust the golden file accordingly.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This isn't a simple case where we can use u64_region_wraps because
length is s32.
Let's actually test an overflow case instead of a case that doesn't
overflow, though. We should still add a proper overflow test here as
well.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We use check_add_overflow(a, b, d) here to validate that (off, len)
pairs do not exceed the type's max value. The kernel conveniently has
several macros to sort out the problems with signed or unsigned types.
However, we're not interested in purely seeing whether (a + b)
overflows, because we're using this for (off, len) overflow checks,
where the bytes we read are from 0 to len - 1. We must therefore call
this check with (b) being "len - 1".
I've made sure that we don't accidentally fail when (len == 0) by
making sure we've already checked that condition earlier, and by moving
code around as needed to ensure that (len > 0) in all cases where we
check.
The macro check_add_overflow requires a (d) argument in which
temporarily the result of the addition is stored and then checked to see
if an overflow occurred. We put a `tmp` variable on the stack of the
correct type as needed to make the checks function.
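The resulting check looks roughly like:
```
	u64 tmp;	/* destination required by check_add_overflow() */

	/* len > 0 has already been ensured, so len - 1 can't wrap */
	if (check_add_overflow(off, len - 1, &tmp))
		return -EINVAL;	/* last byte (off + len - 1) overflows */
```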
simple-release-extents test mistakenly relied on this buggy wrap code,
so it needs fixing. The move-blocks test also got it wrong.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We consistently enter scoutfs_data_wait_check when len == 0 from
scoutfs_aio_write() which directly passes the i_size_read() value,
and for cases where we `echo >> $FILE` this is always reached.
This can cause the wrapping check to fail since `0 + (0 - 1) < 0`, which
triggers the WARN_ON_ONCE wrap check that needs updating to allow
certain operations on huge files.
More importantly, we can just omit all these checks if `len == 0` anyway,
since they should always succeed and should never require taking all the
locks.
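A minimal sketch of the early return; the function shape is illustrative:
```
static int data_wait_check(struct inode *inode, u64 off, u64 len)
{
	u64 last;

	if (len == 0)
		return 0;	/* e.g. aio_write passing i_size on append */

	if (WARN_ON_ONCE(check_add_overflow(off, len - 1, &last)))
		return -EINVAL;

	/* ... take locks and check for waiting extents ... */
	return 0;
}
```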
Signed-off-by: Auke Kok <auke.kok@versity.com>
getline() allocates the space for the return value even if there is an
error, so when it returns an error, we still have to free() it.
In el9, when reading stdin we will get errno=0 returned (no error) when
we hit the end of stdin. This behavior is different from el7/8. We don't
want to throw an error here to avoid failing the test, since it doesn't.
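A minimal userspace sketch of the pattern:
```
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int read_one_line(FILE *in)
{
	char *line = NULL;
	size_t cap = 0;

	errno = 0;
	if (getline(&line, &cap, in) < 0) {
		free(line);	/* may have been allocated despite the error */
		return errno;	/* 0 on plain EOF (as seen on el9) */
	}
	fputs(line, stdout);
	free(line);
	return 0;
}
```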
Signed-off-by: Auke Kok <auke.kok@versity.com>
The warnings thrown by el9's version of xargs are unexpected output and
cause this test to fail. When using the -I option (replace) the -n 1
arguments are always assumed. In el7/8 no warnings were printed.
We can just remove `-n 1` since the argument is never needed.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In el9 releases, our includes declare offsetof() before our header
chain includes stddef.h, which doesn't properly check if offsetof
is already defined, leading to a redefinition. Just include stddef
at all times here.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Passing a holder ptr to these functions now replaces the FMODE_EXCL
flag. _put no longer needs flags for this reason, but takes the holder
instead.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v6.4-rc2-198-g05bdb9965305 adds a new type for passing flags instead
of abusing fmode_t flags. They are essentially the same flags just
in a new type.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v6.0-rc6-9-g863f144f12ad changes the VFS method to pass in a struct
file and not a dentry in preparation for tmpfile support in fuse.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The current spec template can't handle future major el releases
gracefully and fails to build entirely. We isolate all changes
so that they are either "el7 specific" or generic. This rids us
entirely of el8 specific conditionals.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Folios are the new data type used for passing pages. For now,
folios only appear to have a single page. Future kernels will
change that.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v5.19-rc4-52-ge33c267ab70d adds shrinker names to the registration
call to aid with shrinker debugging, which is highly opaque.
To enable this you'll have to recompile the kernel with
CONFIG_SHRINKER_DEBUG=y though, since it's disabled by default in
OSV kernels.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The iter based read/write calls can support splice in el9 if we
hook up these calls; otherwise splice will stop working.
->write() similar to: v3.15-rc4-330-g8d0207652cbe. ->read() to
generic implementation.
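A sketch of the hookup on el9-era kernels; the scoutfs_file_*_iter names are illustrative:
```
static const struct file_operations scoutfs_file_fops = {
	.read_iter	= scoutfs_file_read_iter,
	.write_iter	= scoutfs_file_write_iter,
	.splice_read	= generic_file_splice_read,	/* generic read side */
	.splice_write	= iter_file_splice_write,	/* iter-based write */
};
```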
Signed-off-by: Auke Kok <auke.kok@versity.com>
We instead opt to use sock_setsockopt(), which is generally exactly the
same and can easily be mapped back to kernel_setsockopt() without
impacting the code significantly.
There are 3 options we set with usec timevals, and those are now
different enough to require a bit more compat code, so we split them out
into separate compat functions to handle them.
Some of the TCP sock functions also have a slightly different signature
(struct socket vs. struct sock), so we split those out as well. Some
further no longer return success, either.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We switch to using 64bit usec structs and recommended replacement
functions from Documentation/core-api/timekeeping.rst.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In v5.11-rc4-8-ge65ce2a50cf6 the *set handler is passed a
user_namespace struct pointing to the map from the mount.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Greg KH tells us to do just this in v5.4-rc5-31-g9927c6fa3e1d:
No one checks the return value of debugfs_create_atomic_t(),
as it's not needed, so make the return value void, so that no
one tries to do so in the future.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v5.12-rc6-9-g4f0f586bf0c8
All list_sort functions use the list_cmp_func_t type, which compares
list_head member types. These are now required to be `const` as the
compiler will now check them. This propagates into our callers.
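For example, a comparator now looks like this (the entry struct is illustrative):
```
/* list_cmp_func_t arguments must now be const-qualified */
static int cmp_entries(void *priv, const struct list_head *a,
		       const struct list_head *b)
{
	const struct entry *ea = list_entry(a, struct entry, head);
	const struct entry *eb = list_entry(b, struct entry, head);

	return ea->key < eb->key ? -1 : ea->key > eb->key;
}
```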
Signed-off-by: Auke Kok <auke.kok@versity.com>
v5.7-rc2-1174-gfd4f12bc38c3 significantly rewrites the bpf iterator
which hits this _next() function. It also adds a check that verifies
that the *pos is incremented after every call, even if it goes beyond
the last member (in which case it's not used).
Signed-off-by: Auke Kok <auke.kok@versity.com>
v5.11-rc4-7-g2f221d6f7b88 changes setattr_prepare from an extern
to plain int. There's no further impact to the compat to keep it
working, except for the detection regex.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We could use sizeof_field as a direct replacement (which is the same)
except that this entire thing can directly use offsetofend().
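offsetofend() is defined in the kernel as follows; the example use is illustrative:
```
/* offset of the end of MEMBER within TYPE */
#define offsetofend(TYPE, MEMBER) \
	(offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

/* e.g. the number of bytes up to and including sk_zone */
len = offsetofend(struct scoutfs_key, sk_zone);
```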
Signed-off-by: Auke Kok <auke.kok@versity.com>
The wrapper in setattr_more that translates the operations to attr_x
needs to decide whether to ask attr_x to perform a change to any of
the fields passed to it or not. For the date and size fields this
is implicit - we always tell attr_x to change them. For any of the
other fields, it should be explicit.
The only field in the struct that this applies to is
data_version. Because the data version field is zero by default,
we use that as condition to decide whether to pass the data_version
down to attr_x.
Previously, the code would always pass a data_version=0 down to attr_x,
triggering one of the validity checks, making it return -EINVAL. We
add a simple test case to test for this issue.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We should rely on sparse from epel to do automated sparse checking and
not a git tag. But the 0.6.4 build currently fails on sparse/gcc
redefines.
This magic Awk script from Zach processes sparse and gcc internal
defines and leaves intact the ones that sparse doesn't have.
Signed-off-by: Auke Kok <auke.kok@versity.com>
These new shrinkers were recently added. Because there are very few
ways to debug them, or even see them function properly, we should at
least add counters for them.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This is done by xfstests, and it's so much easier to follow what is
going on from logs or e.g. a serial console that I thought I should do
this for scoutfs tests as well. It makes it much easier to discern
which test may have caused issues when running a bunch of tests and
looking back at the logs later.
Signed-off-by: Auke Kok <auke.kok@versity.com>
These are extremely limited and very quick basic ACL tests we can
trivially run in under a second - purely basic functionality tests only.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In 29160b0b I mistakenly disabled all caching of ACLs for el8
instead of only disabling cache lookups. The correct change
should have been to disable cache lookups only and leave setting the
acl cache after storing or fetching, as the kernel needs this data
to resolve ACLs when doing permission checks.
Restore the acl cache insertions.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The test harness provides a TMP directory for tests to use. It's badly
named. It's meant to be more of a scratch directory that is not on the
FS being tested.
Tests use it both for small log files that give insight into the
platform and for large generated files that are not worth saving. We
want to save the directory after test runs to get at the log files, but
we don't want to burn a ton of space also saving large generated files.
This updates the handful of tests to remove the few files that are
large enough to be a problem. With these out of the way we can save
the tmp/ directory without its space consumption getting out of hand.
Signed-off-by: Zach Brown <zab@versity.com>
The script really wants to print rid instead of pid. But in case
of failure, we can just dump the arrays as well.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We can rely on `bc` and `date` to record, manipulate and compare
time data with nanosecond precision. This fixes timing issues on
faster systems where this test completes a single pass of createmany in
under 1.0 second, causing the math to always fail.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This extra check ensures the passed meta device and data device
are indeed what they should be, and protects against unwanted
swapping or repeated duplicate device arguments.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Add a run-tests -V option that passes through the -V option to mkfs so
that runs can specify the format version that the primary volume will
have. This doesn't affect the scratch file system versions.
Signed-off-by: Zach Brown <zab@versity.com>
Add support for the indx xattr tag which lets xattrs determine their
sort order by inode number in a global index.
Signed-off-by: Zach Brown <zab@versity.com>
Add a test binary that uses o_tmpfile and linkat to create a file in a
given dir. We have something similar, but it's weirdly specific to a
given test. This is a simpler building block that could be used by more
tests.
Signed-off-by: Zach Brown <zab@versity.com>
Add support for project IDs. They're managed through the _attr_x
interfaces and are inherited from the parent directory during creation.
Signed-off-by: Zach Brown <zab@versity.com>
Now that the _READ_XATTR_TOTALS ioctl uses the weak item cache we have
to drop caches before each attempt to read the xattrs that we just wrote
and synced.
Signed-off-by: Zach Brown <zab@versity.com>
Change the read_xattr_totals ioctl to use the weak item cache instead of
manually reading and merging the fs items for the xattr totals on every
call.
Signed-off-by: Zach Brown <zab@versity.com>
The _READ_XATTR_TOTALS ioctl had manual code for merging the .totl.
total and value while reading fs items. We're going to want to do this
in another reader so let's put these in their own functions that clearly
isolate the logic of merging the fs items into a coherent result.
We can get rid of some of the totl_read_ counters that tracked which
items we were merging. They weren't adding much value and conflated the
reading ioctl interface with the merging logic.
Signed-off-by: Zach Brown <zab@versity.com>
Add a forest item reading interface that lets the caller specify the net
roots instead of always getting them from a network request.
Signed-off-by: Zach Brown <zab@versity.com>
Add the weak item cache that is used for reads that can tolerate
results being a little behind. This gives us a lot more freedom in
implementing a cache that favors concurrent reads.
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
[zab@versity.com: refactored for retention, added test cases]
Signed-off-by: Zach Brown <zab@versity.com>
Add a bit to the private scoutfs inode flags which indicates that the
inode is in retention mode. The bit is visible through the _attr_x
interface. It can only be set on regular files and when set it prevents
modification to all but non-user xattrs. It can be cleared by root.
Signed-off-by: Zach Brown <zab@versity.com>
We have some fs functions which return info based on the test mount nr
as the test has set up. This refactors those a bit to also provide
some of the info when the caller has a path in a given mount. This will
let tests work with scratch mounts a little more easily.
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
Now that we have the attr_x calls we can implement stat_more with
get_attr_x and setattr_more with set_attr_x.
The conversion of stat_more fixes a surprising consistency bug.
stat_more wasn't acquiring a cluster lock for the inode nor refreshing
it, so it could have returned stale data if modifications were made in
another mount.
Signed-off-by: Zach Brown <zab@versity.com>
The existing stat_more and setattr_more interfaces aren't extensible.
This solves that problem by adding attribute interfaces which specify
the specific fields to work with.
We're about to add a few more inode fields and it makes sense to add
them to this extensible structure rather than adding more ioctls or
relatively clumsy xattrs. This is modeled loosely on the upstream
kernel's statx support.
The ioctl entry points call core functions so that we can also implement
the existing stat_more and setattr_more interfaces in terms of these new
attr_x functions.
Signed-off-by: Zach Brown <zab@versity.com>
Initially setattr_more followed the general pattern where extent
manipulation might require multiple transactions if there are lots of
extent items to work with. The scoutfs_data_init_offline_extent()
function that creates an offline extent handled transactions itself.
But in this case the call only supports adding a single offline extent.
It will always use a small fixed amount of metadata and could be
combined with other metadata changes in one atomic transaction.
This changes scoutfs_data_init_offline_extent() to have the caller
handle transactions, inode updates, etc. This lets the caller perform
all the restore changes in one transaction. This interface change will
then be used as we add another caller that adds a single offline extent
in the same way.
Signed-off-by: Zach Brown <zab@versity.com>
Add a little inline helper to test whether the mounted format version
supports a feature or not, returning an errno that callers can use when
they can return a shared expected error.
Signed-off-by: Zach Brown <zab@versity.com>
We're about to add new format structures so increment the max version to
2. Future commits will add the features before we release version 2 in
the wild.
Signed-off-by: Zach Brown <zab@zabbo.net>
We're about to increase the inode size and increment the format version.
Inode reading and writing has to handle different valid inode sizes as
allowed by the format version. This is the initial skeletal work that
later patches which really increase the inode size will further refine
to add the specific known sizes and format versions.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
[zab@versity.com: reworded description, reworked to use _within]
Signed-off-by: Zach Brown <zab@versity.com>
Add a lookup variant that returns an error if the item value is larger
than the caller's value buffer size and which zeros the rest of the
caller's buffer if the returned value is smaller.
Signed-off-by: Zach Brown <zab@versity.com>
We were using a seqcount to protect high frequency reads and writes to
some of our private inode fields. The writers were serialized by the
caller, but that's a bit too easy to get wrong. We're already doing the
write seqcount update stores, so the additional internal spinlock stores
in seqlocks aren't a significant additional overhead. The seqlocks also
handle preemption for us.
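A rough sketch of the resulting pattern (names are illustrative):

    seqlock_t seqlock;      /* replaces the bare seqcount_t */
    u64 field;

    /* writer: the internal spinlock serializes and handles preemption */
    write_seqlock(&seqlock);
    field = new_value;
    write_sequnlock(&seqlock);

    /* reader: retry if a writer raced with us */
    do {
            seq = read_seqbegin(&seqlock);
            val = field;
    } while (read_seqretry(&seqlock, seq));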
Signed-off-by: Zach Brown <zab@versity.com>
Don't let change-format-version decrease the format version. It doesn't
have the machinery to go back and migrate newer structures to older
structures that would be compatible with code expecting the older
version.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
[zab@versity.com: split from initial patch with other changes]
Signed-off-by: Zach Brown <zab@versity.com>
Definitions in forest.h use lock pointers. Pre-declare the struct so it
doesn't break inclusion without lock.h, following current practice in
the header.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_file_write_iter tried to track written bytes and return those
unless there was an error. But written was uninitialized if we got
errors in any of the calls leading up to performing the write. The
bytes written were also not being passed to the generic_write_sync
helper. This fixes up all those inconsistencies and makes it look like
the write_iter path in other filesystems.
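A simplified sketch of that common shape (not the exact scoutfs code,
which also handles its own locking and offline checks):

    static ssize_t scoutfs_file_write_iter(struct kiocb *iocb,
                                           struct iov_iter *from)
    {
            struct inode *inode = file_inode(iocb->ki_filp);
            ssize_t written = 0;    /* initialized before any early error */

            inode_lock(inode);
            written = generic_write_checks(iocb, from);
            if (written > 0)
                    written = __generic_file_write_iter(iocb, from);
            inode_unlock(inode);

            /* hand the byte count, not 0, to generic_write_sync() */
            if (written > 0)
                    written = generic_write_sync(iocb, written);

            return written;
    }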
Signed-off-by: Zach Brown <zab@versity.com>
When we write to file contents we change the data_version. To stage old
contents into an offline region the data_version of the file must match
the archived copy. When writing we have to make sure that there is no
offline data so that we don't increase the data_version, which would
prevent staging of any other file regions because the data_versions
would no longer match.
scoutfs_file_write_iter was only checking for offline data in its write
region, not the entire file. Fix it to match the _aio_write method and
check the whole file.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_data_wait_check_iter() was checking the contiguous region of the
file starting at its pos and extending for iter_iov_count() bytes. The
caller can do that with the previous _data_wait_check() method by
providing the same count that _check_iter() was using.
Signed-off-by: Zach Brown <zab@versity.com>
The item cache has a set of safety checks that make sure that an
operation is performed while holding a lock that covers the item. It
dumped a stack trace via WARN when that wasn't true, but it didn't
include any details about the keys or lock modes involved.
This adds a message that's printed once which includes the keys and
modes when an operation is attempted that isn't protected.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_item_create() was checking that its lock had a read mode, when
it should have been checking for a write mode. This worked out because
callers with write mode locks are also protecting reads.
Signed-off-by: Zach Brown <zab@versity.com>
Unlink looks up the entry items for the name it is removing because we
no longer store the extra key material in dentries. If this lookup
fails it will use an error path which releases a transaction which wasn't
held. Thankfully this error path is unlikely (corruption or systemic
errors like eio or enomem) so we haven't hit this in practice.
Signed-off-by: Zach Brown <zab@versity.com>
When we added the crtime creation timestamp to the inode we forgot to
update mkfs to set the crtime of the root inode.
Signed-off-by: Zach Brown <zab@versity.com>
Block reads can return ESTALE naturally as mounts read through old
cached blocks. We won't always log it as an error but we should add a
tracepoint that can be inspected.
Signed-off-by: Zach Brown <zab@versity.com>
This addresses some minor issues with how we drive the weak-modules
infrastructure for running on kernels the module wasn't explicitly
built for.
For one, we now drive weak-modules at install-time more explicitly (it
was adding symlinks for all modules into the right place for the running
kernel, whereas now it only handles that for scoutfs against all
installed kernels).
Also we no longer leave stale modules on the filesystem after an
uninstall/upgrade, similar to what's done for vsm's kmods right now.
RPM's pre/postinstall scriptlets are used to drive weak-modules to clean
things up.
Note that this (intentionally) does not (re)generate initrds of any
kind.
Finally, this was tested on both the native kernel version and on
updates that would need the migrated modules. As a result, installs are
a little quicker, the module still gets migrated successfully, and
uninstalls correctly remove (only) the packaged module.
server_log_merge_free_work() is responsible for freeing all the input
log trees for a log merge operation that has finished. It looks for the
next item to free, frees the log btree it references, and then deletes
the item. It was doing this with a full server commit for each item
which can take an agonizingly long time.
This changes it to perform multiple deletions in a commit as long as
there's plenty of alloc space. The moment the commit gets low it
applies the commit and opens a new one. This sped up the deletion of a
few hundred thousand log tree items from taking hours to seconds.
Signed-off-by: Zach Brown <zab@versity.com>
The btree_merge code was pinning leaf blocks for all input btrees as it
iterated over them. This doesn't work when there are a very large
number of input btrees. It can run out of memory trying to hold a
reference to a 64KiB leaf block for each input root.
This reworks the btree merging code. It reads a window of blocks from
all input trees to get a set of merged items. It can take multiple
passes to complete the merge but by setting the merge window large
enough this overhead is reduced. Merging now consumes a fixed amount of
memory rather than using memory proportional to the number of input
btrees.
Signed-off-by: Zach Brown <zab@versity.com>
Add a mount option for the amount of time that log merge creation can
wait before giving up. We add some counters so we can see how often
the timeout is being hit and what the average successful wait time is.
Signed-off-by: Zach Brown <zab@versity.com>
The server sends sync requests to clients when it sees that they have
open log trees that need to be committed for log merging to proceed.
These are currently sent in the context of each client's get_log_trees
request, resulting in sync requests queued for one client from all
clients. Depending on message delivery and commit latencies, this can
create a sync storm.
The server's sends are reliable and the open commits are marked with the
seq at which they opened. It's easy for us to record having sent syncs to
all open commits so that future attempts can be avoided. Later open
commits will have higher seqs and will get a new round of syncs sent.
Signed-off-by: Zach Brown <zab@versity.com>
The server was checking all client log_trees items to search for the
lowest commit seq that was still open. This can be expensive when there
are a lot of finalized log_trees items that won't have open seqs. Only
the last log_trees item for each client rid can be open, and the items
are sorted by rid and nr, so we can easily only check the last item for
each client rid.
Signed-off-by: Zach Brown <zab@versity.com>
During get_log_trees the server checks log_trees items to see if it
should start a log merge operation. It did this by iterating over all
log_trees items and there can be quite a lot of them.
It doesn't need to see all of the items. It only needs to see the most
recent log_trees item for each mount. That's enough to make the
decisions that start the log merging process.
Signed-off-by: Zach Brown <zab@versity.com>
KASAN could raise a spurious warning if the unwinder started in code
without ORC metadata and tried to access the KASAN stack frame
redzones. This was fixed upstream but we can rarely see it in older
kernels. We can ignore these messages.
Signed-off-by: Zach Brown <zab@versity.com>
This test is trying to make sure that concurrent work isn't much, much,
slower than individual work. It does this by timing creating a bunch of
files in a dir on a mount and then timing doing the same in two mounts
concurrently. But it messed up the concurrency pretty badly.
It had the concurrent createmany tasks creating files with a full path.
That means that every create is trying to read all the parent
directories. The way inode number allocation works means that one of
the mounts is likely to be getting a write lock that includes a shared
parent. This created a ton of cluster lock contention between the two
tasks.
Then it didn't sync the creates between phases. It could be
accidentally recording the time it took to write out the dirty single
creates as time taken during the parallel creates.
By syncing between phases and having the createmany tasks create files
relative to their per-mount directories we actually perform concurrent
work and test that we're not creating contention outside of the task
load.
This became a problem as we switched from loopback devices to device
mapper devices. The loopback writers were using buffered writes so we
were masking the io cost of constantly invalidating and refilling the
item cache by turning the reads into memory copies out of the page
cache.
While we're in here we actually clean up the created files and then use
t_fail to fail the test while the files still exist so they can be
examined.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we're not setting up per-mount loopback devices we may not
have the loop module loaded until tests are running.
Signed-off-by: Zach Brown <zab@versity.com>
We don't directly mount the underlying devices for each mount because
the kernel notices multiple mounts and doesn't setup a new super block
for each.
Previously the script used loopback devices to create the local shared
block construct because it was easy. This introduced corruption of
blocks that saw concurrent read and write IOs. The buffered kernel file
IO paths that loopback eventually degrades into by default (via splice)
could have buffered readers copying out of pages without the page lock
while writers modified the page. This manifested as occasional crc
failure of blocks that we knowingly issue concurrent reads and writes to
from multiple mounts (the quorum and super blocks).
This changes the script to use device-mapper linear passthrough devices.
Their IOs don't hit a caching layer and don't provide an opportunity to
corrupt blocks.
Signed-off-by: Zach Brown <zab@versity.com>
Our large fragmented free test creates pathologically fragmented file
extents which are as expensive as possible to free. We know that
debugging kernels can take a long time to do this so we extend the hung
task timeout.
Signed-off-by: Zach Brown <zab@versity.com>
One of the phases of this test wanted to delete files but got the glob
quoting wrong. This didn't matter for the original test but when we
changed the test to use its own xattr name then those existing undeleted
files got confused with other files in later phases of the test.
This changes the test to delete the files with a more reliable find
pattern instead of using shell glob expansion.
Signed-off-by: Zach Brown <zab@versity.com>
Previously the bulk_create_paths test tool used the same xattr name for
each category of xattrs it was creating.
This created a problem where two tests got their xattrs confused with
each other. The first test created a bunch of srch xattrs, failed, and
didn't clean up after itself. The second test saw these search xattrs
as its own and got very confused when there were far more srch xattrs
than it thought it had created.
This lets each test specify the srch xattr names that are created by
bulk_create_paths so that tests can work with their xattrs
independently of each other.
Signed-off-by: Zach Brown <zab@versity.com>
We just added a test to try and get srch compaction stuck by having an
input file continue at a specific offset. To exercise the bug the test
needs to perform 6 compactions. It needs to merge 4 sets of logs into 4
sorted files, it needs to make partial progress merging those 4 sorted
files into another file, and then finally attempt to continue compacting
from the partial progress offset.
The first version of the test didn't necessarily ensure that these
compactions happened. It created far too many log files then just
waited for time to pass. If the host was slow then the mounts may not
make it through the initial logs to try and compact the sorted files.
The triggers wouldn't fire and the test would fail.
These changes much more carefully orchestrate and watch the various
steps of compaction to make sure that we trigger the bug.
Signed-off-by: Zach Brown <zab@versity.com>
Add a sysfs file for getting and setting the delay between srch
compaction requests from the client. We'll use this in testing to
ensure compaction runs promptly.
Signed-off-by: Zach Brown <zab@versity.com>
Compacting sorted srch files can take multiple transactions because they
can be very large. Each transaction resumes at a byte offset in a block
where the previous transaction stopped.
The resuming code tests that the byte offsets are sane but had a mistake
in testing the offset to skip to. It returned an error if the
compaction resumed from the last possible safe offset for decoding
entries.
If a system is unlucky enough to have a compaction transaction stop at
just this offset then compaction stops making forward progress as each
attempt to resume returns an error.
The fix allows continuation from this last safe offset while returning
errors for attempts to continue *past* that offset. This matches all
the encoding code which allows encoding the last entry in the block at
this offset.
Signed-off-by: Zach Brown <zab@versity.com>
Add a test for srch compaction getting stuck hitting errors continuing a
partial operation. It ensures that a block has an encoded entry at
the _SAFE_BYTES offset, that an operation stops precisely at that
offset, and then watches for errors.
Signed-off-by: Zach Brown <zab@versity.com>
The srch compaction request building function and the srch compaction
worker both have logic to recognize a valid response with no input files
indicating that there's no work to do. The server unfortunately
translated nr == 0 into ENOENT and sent that error response to the
client. This caused the client to increment error counters in the
common case when there's no compaction work to perform. We'd like the
error counter to reflect actual errors (we're about to check it in a
test), so let's fix this up so that the server sends a successful
response with nr == 0 to indicate that there's no work to do.
Signed-off-by: Zach Brown <zab@versity.com>
Without `iflag=fullblock` we encounter sporadic cases where the
input file to the truncate test isn't fully written to 8K and ends
up being only 4K. The subsequent truncate tests then fail.
We add a check of the input test file size just to be sure in the
future.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The server had a few lower level seqcounts that it used to protect
state. One user got it wrong by forgetting to disable preemption
around writers. Debug kernels warned as write_seqcount_begin() was
called without preemption disabled.
We fix that user and make it easier to get right in the future by having
one higher level seqlock and using that consistently for seq read
begin/retry and write lock/unlock patterns.
Signed-off-by: Zach Brown <zab@versity.com>
On el9 distros systemd-journald will log rotation events into kmsg.
Since the default logs on VM images are transient only, they are
rotated several times during a single test cycle, causing test failures.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The t_quiet test command execution helper was constantly truncating the
quiet.log with the output of each command. It was meant to show each
command and its output as they're run, appending rather than truncating.
Signed-off-by: Zach Brown <zab@versity.com>
The rpmbuild support files no longer define the previously used kernel
module macros. This carves out the differences between el7 and el8 with
conditionals based on the distro we are building for.
Signed-off-by: Ben McClelland <ben.mcclelland@versity.com>
In rhel7 this is a nested struct with ktime_t. However, in rhel8
ktime_t is a simple s64, and not a union, and thus we can't do
this as easily. Just memset it.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In newer kernels, we always get -ESTALE because the inode has been
marked immediately as deleting. Since this is expected behavior we
should not fail the test here on this error value.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In RHEL7, this was skipped automatically. In RHEL8, we don't support
passing the actual user namespace through into our ACL set/get
handlers. Once we get to around v5.11 or so, the handlers are
automatically passed the namespace. Until then, skip this test.
Signed-off-by: Auke Kok <auke.kok@versity.com>
New kernels expect to do a partial match when a .prefix is used here,
and provide a .name member in case matching should look at the whole
string. This is what we want.
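Roughly (our handler and function names are illustrative):

    static const struct xattr_handler scoutfs_xattr_handler = {
            .prefix = "scoutfs.",   /* matched as a prefix on new kernels */
            .get    = scoutfs_xattr_get,
            .set    = scoutfs_xattr_set,
    };

    /* a .name handler would instead match the whole string exactly:
     *      .name = "system.posix_acl_access",
     */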
Signed-off-by: Auke Kok <auke.kok@versity.com>
The caller takes care of caching for us. Doing our own caching
messes with the memory management of cached ACLs and breaks things.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Instead of messing with quotes and using grep for the correct
xattr name, directly query the value of only the xattr being tested
and compare that to the input.
A side effect is that this is significantly simpler and faster.
Signed-off-by: Auke Kok <auke.kok@versity.com>
`stat` internally switched to using the new `statx` syscall, and this
affects the output of perror() subsequently. This is the same error
as before (and expected).
Signed-off-by: Auke Kok <auke.kok@versity.com>
The filefrag program in e2fsprogs-v1.42.10-10-g29758d2f now includes
an extra flag, and changes how the `unknown` flag is output.
We essentially adjust for this "new" golden value on the fly if we
encounter it. We don't expect future changes to the output.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In older versions of coreutils, quoted strings are occasionally
output using utf-8 open/close single quotes.
New versions of coreutils will exclusively use the ASCII single quote
character "'" when the output is not a TTY - as is the case with
all test scripts.
We can avoid most of these problems by always setting LC_ALL=C in
testing, however.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The aio_read and aio_write callbacks are no longer used by newer
kernels, which now use iter-based readers and writers.
We can avoid implementing plain .read and .write as an iter will
be generated for us automatically when needed.
We add a new data_wait_check_iter() function accordingly.
With these methods removed from the kernel, the el8 kernel no
longer uses the extended ops wrapper struct and is much closer now
to upstream. As a result, a lot of methods move around from
inode_dir_operations to and from inode_file_operations etc, and
perhaps things will look a bit more structured.
We also need a slightly different data_wait_check() that
accounts for the iter and offset properly.
Signed-off-by: Auke Kok <auke.kok@versity.com>
.readpages is obsolete in el8 kernels. We implement the .readahead
method instead which is passed a struct readahead_control. We use
the readahead_page(rac) accessor to retrieve page by page from the
struct.
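A sketch of the shape (the per-page read call is illustrative):

    static void scoutfs_readahead(struct readahead_control *rac)
    {
            struct page *page;

            /* each page arrives locked with a reference held */
            while ((page = readahead_page(rac))) {
                    scoutfs_readpage(NULL, page);   /* unlocks on completion */
                    put_page(page);
            }
    }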
Signed-off-by: Auke Kok <auke.kok@versity.com>
v4.9-12228-g530e9b76ae8f Drops all (un)register_(hot)cpu_notifier()
API functions. From here on we need to use the new cpuhp_* API.
We avoid this entirely for now, at the cost of leaking pages until
the filesystem is unmounted.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Convert the timeout struct into a u64 nsecs value before passing it to
the trace point event, so as not to overflow the 64bit limitation on args.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v4.16-rc1-1-g9b2c45d479d0
This interface now returns (sizeof (addr)) on success, instead of 0.
Therefore, we have to change the error condition detection.
The compat for older kernels handles the addrlen check internally.
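The call site check changes roughly like so (sin is an illustrative
sockaddr):

    ret = kernel_getsockname(sock, (struct sockaddr *)&sin);
    if (ret < 0)    /* was: if (ret), when 0 meant success */
            goto out;
    ret = 0;        /* a positive return is the addr length, not an error */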
Signed-off-by: Auke Kok <auke.kok@versity.com>
MS_* flags from <linux/mount.h> should not be used in the kernel
anymore from 4.x onwards. Instead, we need to use the SB_* versions.
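For example:

    sb->s_flags |= SB_NOSEC;        /* was MS_NOSEC */

    if (sb->s_flags & SB_RDONLY)    /* was MS_RDONLY */
            return -EROFS;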
Signed-off-by: Auke Kok <auke.kok@versity.com>
Move to the more recent interfaces for counting and scanning cached
objects to shrink.
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
Move towards modern bio interfaces, while unfortunately carrying along a
bunch of compat functions that let us still work with the old
incompatible interfaces.
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
memalloc_nofs_save() was introduced as preferable to using GFP
flags to indicate that a task should not recurse during reclaim. We use
it instead of the _noio_ variant we were using before.
Signed-off-by: Zach Brown <zab@versity.com>
__percpu_counter_add_batch was renamed to make it clear that the __
doesn't mean it's less safe, as it means in other calls in the API, but
just that it takes an additional parameter.
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
There are new interfaces available but the old ones have been retained
for us to use. On older kernels, we will need to fall back to the
previous names of these functions.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Provide fallback in degraded mode for kernels pre-v4.15-rc3 by directly
manipulating the member as needed.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Since v4.6-rc3-27-g9902af79c01a, inode->i_mutex has been replaced
with ->i_rwsem. However, inode_lock() and related functions have
long worked as intended and provide fully exclusive locking of the
inode.
To avoid a name clash on pre-rhel8 kernels, we have to rename a
stack variable in `src/file.c`.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Since v4.15-rc3-4-gae5e165d855d, <linux/iversion.h> contains a new
inode->i_version API and it is not included by default.
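Users of i_version now go through the helpers from the new header,
roughly:

    #include <linux/iversion.h>

    inode_set_iversion(inode, 1);   /* was: inode->i_version = 1 */
    inode_inc_iversion(inode);      /* was: inode->i_version++ */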
Signed-off-by: Auke Kok <auke.kok@versity.com>
The new variant of the code that recomputes the augmented value
is designed to handle non-scalar types, and to facilitate that it
has new semantics for the _compute callback. It is now passed a
boolean flag `exit` indicating that, if the value hasn't changed,
it should halt propagation.
The callback now returns whether that propagation should stop,
rather than the computed new value, and it updates the computed
value in the node directly.
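A sketch of the new callback shape (the struct and field names are
illustrative):

    static bool compute_subtree_max(struct our_node *node, bool exit)
    {
            u64 new_max = node->value;
            struct our_node *child;

            if (node->rb.rb_left) {
                    child = rb_entry(node->rb.rb_left, struct our_node, rb);
                    new_max = max(new_max, child->subtree_max);
            }
            if (node->rb.rb_right) {
                    child = rb_entry(node->rb.rb_right, struct our_node, rb);
                    new_max = max(new_max, child->subtree_max);
            }
            if (exit && node->subtree_max == new_max)
                    return true;    /* unchanged: halt propagation */
            node->subtree_max = new_max;    /* updated in place */
            return false;
    }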
Signed-off-by: Auke Kok <auke.kok@versity.com>
Fixes: Error: implicit declaration of function ‘blkdev_put’
Previously this was an `extern` in <linux/fs.h> and included implicitly,
hence the need to explicitly include it now.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v4.1-rc4-22-g92cf211874e9 merges this into preempt.h, and on
rhel7 kernels we don't need this include anymore either.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v3.15-rc1-6-g1a56f2aa4752 removes flush_work_sync entirely, but
ever since v3.6-rc1-25-g606a5020b9bd which made all workqueues
non-reentrant, it has been equivalent to flush_work.
This is safe because in all cases only one server->work can be
in flight at a time.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v3.18-rc3-2-g230fa253df63 forces us to replace ACCESS_ONCE() with
READ_ONCE(), but the latter is probably the better interface anyway and
works with non-scalar types.
Signed-off-by: Auke Kok <auke.kok@versity.com>
PAGE_CACHE_SIZE was previously defined to be equivalent to PAGE_SIZE.
This symbol was removed in v4.6-rc1-32-g1fa64f198b9f.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Because we `-include src/kernelcompat.h` from the command line,
this header gets included before any of the kernel includes in
most .c and .h files. We should at least make sure we pull in
<linux/fs.h> and <linux/kernel.h> since they're required.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The fence-and-reclaim test has a little function that runs after fencing
and recovery to make sure that all the mounts are operational again.
The main thing it does is re-use the same locks across a lot of files to
ensure that lock recovery didn't lose any locks that stop forward
progress.
But I also threw in a test of the committed_seq machinery, as a bit of
belt and suspenders. The problem is that the test is racy. It samples
the seq after the write, so the greatest seq it remembers can be after
the write and will not be committed by the other nodes' reads. It being
less than the committed_seq is a totally reasonable race.
Which explains why this test has occasionally been failing since it was
written. There's no particular reason to test the committed_seq
machinery here, so we can just remove that racy test.
Signed-off-by: Zach Brown <zab@versity.com>
Server code that wants to dirty blocks by holding a commit won't be
allowed to until the current allocators for the server transaction have
enough space for the holder. As an active holder applies the commit the
allocators are refilled and the waiting holders will proceed.
But the current allocators can have no resources as the server starts
up. There will never be active holders to apply the commit and refill
the allocators. In this case all the holders will block indefinitely.
The fix is to trigger a server commit when a holder doesn't have room.
It used to be that commits were only triggered when apply callers were
waiting. We transfer some of that logic into a new 'committing' field
so that we can have commits in flight without apply callers waiting. We
add it to the server commit tracing.
While we're at it we clean up the logic that tests if a hold can
proceed. It used to be confusingly split across two functions that both
could sample the current allocator space remaining. This could lead to
weird cases where the first holder could use the second alloc remaining
call, not the one whose values were tested to see if the holder could
fit. Now each hold check only samples the allocators once.
And finally we fix a subtle case where the budget exceeded message can
spuriously trigger in the case where dirtying the freed list created a
new empty block after the holder recorded the amount of space in the
freed block.
Signed-off-by: Zach Brown <zab@versity.com>
Data preallocation attempts to allocate large aligned regions of
extents. It tried to fill the hole around a write offset that
didn't contain an extent. It missed the case where there can be
multiple extents between the start of the region and the hole.
It could try to overwrite these additional existing extents and writes
could return EINVAL.
We fix this by trimming the preallocation to start at the write offset
if there are any extents in the region before the write offset. The
data preallocation test output has to be updated now that allocation
extents won't grow towards the start of the region when there are
existing extents.
Signed-off-by: Zach Brown <zab@versity.com>
Log merge completions were spliced in one server commit. It's possible
to get enough completion work pending that it all can't be completed in
one server commit. Operations fail with ENOSPC and because these
changes can't be unwound cleanly the server asserts.
This allows the completion splicing to break the work up into multiple
commits.
Processing completions in multiple commits means that request creation
can observe the merge status in states that weren't possible before.
Splicing is careful to maintain an elevated nr_complete count while the
client can't get requests because the tree is rebalancing.
Signed-off-by: Zach Brown <zab@versity.com>
The move_blocks ioctl finds extents to move in the source file by
searching from the starting block offset of the region to move.
Logically, this is fine. After each extent item is deleted the next
search will find the next extent.
The problem is that deleted items still exist in the item cache. The
next iteration has to skip over all the deleted extents from the start
of the region. This is fine with large extents, but with heavily
fragmented extents this creates a huge amplification of the number of
items to traverse when moving the fragmented extents in a large file.
(It's not quite O(n^2)/2 for the total extents, since deleted items are
purged as we write out the dirty items in each transaction... but it's
still immense.)
The fix is to simply start searching for the next extent after the one
we just moved.
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which exercises filling holes in prealloc regions when the
_contig_only prealloc option is not set.
Signed-off-by: Zach Brown <zab@versity.com>
If the _contig_only option isn't set then we try to preallocate aligned
regions of files. The initial implementation naively only allowed one
preallocation attempt in each aligned region. If it got a small
allocation that didn't fill the region then every future allocation
in the region would be a single block.
This changes every preallocation in the region to attempt to fill the
hole in the region that iblock fell in. It uses an extra extent search
(item cache search) to try and avoid thousands of single block
allocations.
Signed-off-by: Zach Brown <zab@versity.com>
The RCU hash table uses deferred work to resize the hash table. There's
a time during resize when hash table iteration will return EAGAIN until
resize makes more progress. During this time resize can perform
GFP_KERNEL allocations.
Our shrinker tries to iterate over its RCU hash table to find blocks to
reclaim. It tries to restart iteration if it gets EAGAIN on the
assumption that it will be usable again soon.
Combine the two and our shrinker can get stuck retrying iteration
indefinitely because it's shrinking on behalf of the hash table resizing
that is trying to allocate the next table before making iteration work
again. We have to stop shrinking in this case so that the resizing
caller can proceed.
Signed-off-by: Zach Brown <zab@versity.com>
Add an ioctl that gives the callers all entries that refer to an inode.
It's like a backwards readdir. It's a light bit of translation between
the internal _add_next_linkrefs() list of entries and the ioctl
interface of a buffer of entry structs.
Signed-off-by: Zach Brown <zab@versity.com>
Extend scoutfs_dir_add_next_linkref() to be able to return multiple
backrefs under the lock for each call and have it take an argument to
limit the number of backrefs that can be added and returned.
Its return code changes a bit in that it returns 1 on success instead of
0 so we have to be a little careful with callers who were expecting 0.
It still returns -ENOENT when no entries are found.
We break up its tracepoint into one that records each entry added and
one that records the result of each call.
This will be used by an ioctl to give callers just the entries that
point to an inode instead of assembling full paths from the root.
Signed-off-by: Zach Brown <zab@versity.com>
Update the quorum_heartbeat_timeout_ms test to also test the mount
option, not just updating the timeout via sysfs. This takes some
reworking as we have to avoid the active leader/server when setting the
timeout via the mount option. We also allow for a bit more slack around
comparing kernel sleeps and userspace wall clocks.
Signed-off-by: Zach Brown <zab@versity.com>
Mount option parsing runs early enough that the rest of the option
read/write serialization infrastructure isn't set up yet. The
quorum_heartbeat_timeout_ms mount option tried to use a helper that
updated the stored option but it wasn't initialized yet so it crashed.
The helper was really only to have the option validity test in one
place. It's reworked to only verify the option and the actual setting
is left to the callers.
Signed-off-by: Zach Brown <zab@versity.com>
If setting a sysfs option fails the bash write error is output. It
contains the script line number, which can change over time, leading to
mismatched golden output failures if we use the output as an expected
indication of failure. Callers should test its rc and output
accordingly if they want the failure logged and compared.
Signed-off-by: Zach Brown <zab@versity.com>
Forced unmount is supposed to isolate the mount from the world. The
net.c TCP messaging returns errors when sending during forced unmount.
The quorum code has its own UDP messaging and wasn't taking forced
unmount into account.
This led to quorum still being able to send resignation messages to
other quorum peers during forced unmount, making it hard to test
heartbeat timeouts with forced unmount.
The quorum messaging is already unreliable so we can easily make it drop
messages during forced unmount. Now forced unmount more fully isolates
the quorum code and it becomes easier to test.
Signed-off-by: Zach Brown <zab@versity.com>
Add tracking and reporting of delays in sending or receiving quorum
heartbeat messages. We measure the time between back to back sends or
receives of heartbeat messages. We record these delays truncated down
to second granularity in the quorum sysfs status file. We log messages
to the console for each longest measured delay up to the maximum
configurable heartbeat timeout.
Signed-off-by: Zach Brown <zab@versity.com>
Add mount and sysfs options for changing the quorum heartbeat timeout.
This allows setting a longer delay in taking over for failed hosts that
has a greater chance of surviving temporary non-fatal delays.
We also double the existing default timeout to 10s which is still
reasonably responsive.
Signed-off-by: Zach Brown <zab@versity.com>
The quorum udp socket allocation still allowed starting io which can
trigger longer latencies trying to free memory. We change the flags to
prefer dipping into emergency pools and then failing rather than
blocking trying to satisfy an allocation. We'd much rather have a given
heartbeat attempt fail and have the opportunity to succeed at the next
interval rather than running the risk of blocking across multiple
intervals.
Signed-off-by: Zach Brown <zab@versity.com>
The quorum work was using the system workq. While that's mostly fine,
we can create a dedicated workqueue with the specific flags that we
need. The quorum work needs to run promptly to avoid fencing so we set
it to high priority.
Signed-off-by: Zach Brown <zab@versity.com>
In the quorum work loop some message receive actions extend the timeout
after the timeout expiration is checked. This is usually fine when the
work runs soon after the messages are received and before the timeout
expires. But under load the work might not schedule until long after
both the message has been received and the timeout has expired.
If the message was a heartbeat message then the wakeup delay would be
mistaken for lack of activity on the server and it would try to take
over for an otherwise active server.
This moves the extension of the heartbeat on message receive to before
the timeout is checked. In our case of a delayed heartbeat message it
would still find it in the recv queue and extend the timeout, avoiding
fencing an active server.
Signed-off-by: Zach Brown <zab@versity.com>
Add a command for writing a super block to a new data device after
reading the metadata device to ensure that there's no existing
data on the old data device.
Signed-off-by: Zach Brown <zab@versity.com>
Some tests had grown a bad pattern of making a mount point for the
scratch mount in the root /mnt directory. Change them to use a mount
point in their test's temp directory outside the testing fs.
Signed-off-by: Zach Brown <zab@versity.com>
Split the existing device_size() into get_device_size() and
limit_device_size(). An upcoming command wants to get the device size
without applying limiting policy.
Signed-off-by: Zach Brown <zab@versity.com>
We missed initializing sb->s_time_gran which controls how some parts of
the kernel truncate the granularity of nsec in timespec. Some paths
don't use it at all so time would be maintained at full precision. But
other paths, particularly setattr_copy() from userspace and
notify_change() from the kernel use it to truncate as times are set.
Setting s_time_gran to 1 maintains full nsec precision.
Signed-off-by: Zach Brown <zab@versity.com>
The VFS performs a lot of checks on renames before calling the fs
method. We acquire locks and refresh inodes in the rename method so we
have to duplicate a lot of the vfs checks.
One of the checks involves loops with ancestors and subdirectories. We
missed the case where the root directory is the destination and doesn't
have any parent directories. The backref walker it calls returns
-ENOENT instead of 0 with an empty set of parents and that error bubbled
up to rename.
The fix is to notice when we're asking for ancestors of the one
directory that can't have ancestors and short circuit the test.
Signed-off-by: Zach Brown <zab@versity.com>
When a client no longer needs to append to a srch file, for whatever
reason, we move the reference from the log_trees item into a specific
srch file btree item in the server's srch file tracking btree.
Zeroing the log_trees item and inserting the server's btree item are
done in a server commit and should be written atomically.
But commit_log_trees had an error handling case that could leave the
newly inserted item dirty in memory without zeroing the srch file
reference in the existing log_trees item. Future attempts to rotate the
file reference, perhaps by retrying the commit or by reclaiming the
client's rid, would get EEXIST and fail.
This fixes the error handling path to ensure that we'll keep the dirty
srch file btree and log_trees item in sync. The desynced items can
still exist in the world so we'll tolerate getting EEXIST on insertion.
After enough time has passed, or if repair zeroed the duplicate
reference, we could remove this special case from insertion.
Signed-off-by: Zach Brown <zab@versity.com>
The move_blocks ioctl intends to only move extents whose bytes fall
inside i_size. This is easy except for a final extent that straddles an
i_size that isn't aligned to 4K data blocks.
The code that either checked for an extent being entirely past i_size or
for limiting the number of blocks to move by i_size clumsily compared
i_size offsets in bytes with extent counts in 4KB blocks. In just the
right circumstances, probably with the help of a byte length to move
that is much larger than i_size, the length calculation could result in
trying to move 0 blocks. Once this hit, the loop would keep finding that
extent, calculating 0 blocks to move, and would be stuck.
We fix this by clamping the count of blocks in extents to move in terms
of byte offsets at the start of the loop. This gets rid of the extra
size checks and byte offset use in the loop. We also add a sanity check
to make sure that we can't get stuck if, say, corruption resulted in an
otherwise impossible zero length extent.
Signed-off-by: Zach Brown <zab@versity.com>
There were kernels that didn't apply the current umask to inode modes
created with O_TMPFILE without acls. Let's have a test running to make
sure that we're not surprised if we come across one.
Signed-off-by: Zach Brown <zab@versity.com>
We had a one-off test that was overly specific to staging from tmpfile.
This renames it to a more generic test where we can add more tests of
o_tmpfile in general.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we've removed its users we can remove the global saved copy of
the super block from scoutfs_sb_info.
Signed-off-by: Zach Brown <zab@versity.com>
As the server does its work its transactions modify a dirty super block
in memory. This used the global super block in scoutfs_sb_info which
was visible to everything, including the client. Move the dirty super
block over to the private server info so that only the server can see
it.
This is mostly boring storage motion but we do change that the quorum
code hands the server a static copy of the quorum config to use as it
starts up before it reads the most recent super block.
Signed-off-by: Zach Brown <zab@versity.com>
Refilling a client's data_avail is the only alloc_move call that doesn't
try and limit the number of blocks that it dirties. If it doesn't find
sufficiently large extents it can exhaust the server's alloc budget
without hitting the target. It'll try to dirty blocks and return a hard
error.
This changes that behaviour to allow returning 0 if it moved any
extents. Other callers can deal with partial progress as they already
limit the blocks they dirty. This will also return ENOSPC if it hadn't
moved anything just as the current code would.
The result is that data fill might not hit the target. It
might take multiple commits to fill the data_avail btree.
Signed-off-by: Zach Brown <zab@versity.com>
The server's statfs request handler was intending to lock dirty
structures as they were walked to get sums used for statfs fields.
Other callers walk stable structures, though, so the summation calls had
grown iteration over other structures that the server didn't know it had
to lock.
This meant that the server was walking unlocked dirty structures as they
were being modified. The races are very tight, but it can result in
request handling errors that shut down connections and IO errors from
trying to read inconsistent refs as they were modified by the locked
writer.
We've built up infrastructure so the server can now walk stable
structures just like the other callers. It will no longer wander into
dirty blocks so it doesn't need to lock them and it will retry if its
walk of stale data crosses a broken reference.
Signed-off-by: Zach Brown <zab@versity.com>
Transition from manual checking for persistent ESTALE to the shared
helper that we just added. This should not change behavior.
Signed-off-by: Zach Brown <zab@versity.com>
Many readers had little implementations of the logic to decide to retry
stale reads with different refs or decide that they're persistent and
return hard errors. Let's move that into a small helper.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_forest_inode_count() assumed it was called with stable refs and
would always translate ESTALE to EIO. Change it so that it passes
ESTALE to the caller who is responsible for handling it.
The server will use this to retry reading from stable supers that it's
storing in memory.
Signed-off-by: Zach Brown <zab@versity.com>
The server has a mechanism for tracking the last stable roots used by
network rpcs. We expand it a bit to include the entire super so
that we can add users in the server which want the last full stable
super. We can still use the stable super to give out the stable
roots.
Signed-off-by: Zach Brown <zab@versity.com>
The quorum code was using the copy of the super block in the sb info for
its config. With that going away we make different users more carefully
reference the config. The quorum agent has a copy that it reads on
setup, the client rarely reads a copy when trying to connect, and the
server uses its super.
This is about data access isolation and should have no functional effect
other than to cause more super reads.
Signed-off-by: Zach Brown <zab@versity.com>
A few paths throughout the code get the fsid for the current mount by
using the copy of the super block that we store in the scoutfs_sb_info
for the mount. We'd like to remove the super block from the sbi and
it's cleaner to have a specific constant field for the fsid of the mount
which will not change.
Signed-off-by: Zach Brown <zab@versity.com>
When we truncate away from a partial block we need to zero its tail that
was past i_size and dirty it so that it's written.
We missed the typical vfs boilerplate of calling block_truncate_page
from setattr->set_size that does this. We need to be a little careful
to pass our file lock down to get_block and then queue the inode for
writeback so it's written out with the transaction. This follows the
pattern in .write_end.
Signed-off-by: Zach Brown <zab@versity.com>
The d_prune_aliases in lock invalidation was thought to be safe because
the caller had an inode reference; surely it can't get into iput_final.
I missed the fundamental dcache pattern that dput can ascend through
parents and end up in inode eviction for entirely unrelated inodes.
It's very easy for this to deadlock, imagine if nothing else that the
inode invalidation is blocked on in dput->iput->evict->delete->lock is
itself in the list of locks to invalidate in the caller.
We fix this by always kicking off d_prune and dput into async work.
This increases the chance that inodes will still be referenced after
invalidation, preventing inline deletion. More deletions can be
deferred until the orphan scanner finds them. It should be rare,
though. We're still likely to put and drop invalidated inodes before a
writer gets around to removing the final unlink and asking us for the
omap that describes our cached inodes.
To perform the d_prune in work we make it a behavioural flag and make
our queued iputs a little more robust. We use much safer and more
understandable locking to cover the count and the new flags, and we put
the iputs in re-entrant work items in their own workqueue instead of one
work instance in the system_wq.
Signed-off-by: Zach Brown <zab@versity.com>
Add a quick test of the index items to make sure that rapid inode
updates don't create duplicate meta_seq items.
Signed-off-by: Zach Brown <zab@versity.com>
FS items are deleted by logging a deletion item that has a greater item
version than the item to delete. The versions are usually maintained by
the write_seq of the exclusive write lock that protects the item. Any
newer write hold will have a greater version than all previous write
holds so any items created under the lock will have a greater vers than
all previous items under the lock. All deletion items will be merged
with the older item and both will be dropped.
This doesn't work for concurrent write-only locks. The write-only locks
match with each other so their write_seqs are assigned in the order
that they are granted. That grant order can be mismatched with item
creation order. We can get deletion items with lesser versions than the
item to delete because of when each creation's write-only lock was
granted.
Write only locks are used to maintain consistency between concurrent
writers and readers, not between writers. Consistency between writers
is done with another primary write lock. For example, if you're writing
seq items to a write-only region you need to have the write lock on the
inode for the specific seq item you're writing.
The fix, then, is to pass these primary write locks down to the item
cache so that it can choose an item version that is the greatest amongst
the transaction, the write-only lock, and the primary lock. This now
ensures that the primary lock's increasing write_seq makes it down to
the item, bringing item version ordering in line with exclusive holds of
the primary lock.
All of this to fix concurrent inode updates sometimes leaving behind
duplicate meta_seq items because old seq item deletions ended up with
older versions than the seq item they tried to delete, nullifying the
deletion.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we've removed the hash and pos from the dentry_info struct we
can do without it. We can store the refresh gen in the d_fsdata pointer
(sorry, 64bit only for now... we could allocate if we needed to). This
gets rid of the lock coverage spinlocks and puts a bit more pressure on
lock lookup, which we already know we have to make more efficient. We
can get rid of all the dentry info allocation calls.
Now that we're not setting d_op as we allocate d_fsdata we put the ops
on the super block so that we get d_revalidate called on all our
dentries.
We are also a bit more precise about the errors we can return from
verification. If the target of a dentry link changes then we return
-ESTALE rather than silently performing the caller's operation on
another inode.
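Roughly, the d_fsdata encoding looks like this on 64-bit; the helper
names here are hypothetical, not the actual scoutfs functions:

    #include <linux/dcache.h>
    #include <linux/types.h>

    /* store the u64 refresh_gen in the pointer-sized d_fsdata slot
     * itself rather than allocating a dentry_info struct to hold it */
    static inline void dentry_set_refresh_gen(struct dentry *dentry, u64 gen)
    {
            WRITE_ONCE(dentry->d_fsdata, (void *)(unsigned long)gen);
    }

    static inline u64 dentry_refresh_gen(struct dentry *dentry)
    {
            return (unsigned long)READ_ONCE(dentry->d_fsdata);
    }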
Signed-off-by: Zach Brown <zab@versity.com>
Add a lock call to get the current refresh_gen of a held lock. If the
lock doesn't exist or isn't readable then we return 0. This can be used
to track lock coverage of structures without the overhead and lifetime
binding of the lock coverage struct.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_sysfs_exit() is called during error handling in module init.
When scoutfs is built-in (so, never.) the __exit section won't be
loaded. Remove the __exit annotation so it's always available to be
called.
Signed-off-by: Zach Brown <zab@versity.com>
The dentry cache life cycles are far too crazy to rely on d_fsdata being
kept in sync with the rest of the dentry fields. Callers can do all
sorts of surprising things with dentries. Only unlink and rename need these
fields and those operations are already so expensive that item lookups
to get the current actual hash and pos are lost in the noise.
Signed-off-by: Zach Brown <zab@versity.com>
The test shell helpers for saving and restoring mount options were
trying to put each mount's option value in an array. It meant to build
the array key by concatenating the option name and the mount number.
But it didn't isolate the option "name" variable when evaluating it,
instead always evaluating "name_" to nothing and building keys for all
options that only contained the mount index. This then broke when tests
attempted to save and restore multiple options.
Signed-off-by: Zach Brown <zab@versity.com>
Add mount options for the size of preallocation and whether or not it
should be restricted to extending writes. Disabling the default
restriction to streaming writes lets it preallocate in aligned regions
of the preallocation size when they contain no extents.
Signed-off-by: Zach Brown <zab@versity.com>
The orphan_scan_delay_ms option setting code mistakenly set the default
before testing the option for -1 (not the default) to discover if
multiple options had been set. This made any attempt to set the option fail.
Initialize the option to -1 so the first set succeeds and apply the
default if we don't set the value.
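The ordering of the fix amounts to something like this (the names are
hypothetical):

    /* start at -1 so the "did the caller set this option?" test works,
     * and only apply the default after parsing has finished */
    opts->orphan_scan_delay_ms = -1;

    ret = parse_options(data, opts);        /* hypothetical parser */

    if (opts->orphan_scan_delay_ms == -1)
            opts->orphan_scan_delay_ms = ORPHAN_SCAN_DELAY_MS_DEFAULT;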
Signed-off-by: Zach Brown <zab@versity.com>
The simple-xattr-unit test had a helper that failed by exiting with
non-zero instead of emitting a message. Let's make it a bit easier to
see what's going on.
Signed-off-by: Zach Brown <zab@versity.com>
Add support for the POSIX ACLs as described in acl(5). Support is
enabled by default and can be explicitly enabled or disabled with the
acl or noacl mount options, respectively.
Signed-off-by: Zach Brown <zab@versity.com>
The upcoming acl support wants to be able to get and set xattrs from
callers who already have cluster locks and transactions. We refactor
the existing xattr get and set calls into locked and unlocked variants.
It's mostly boring code motion with the unfortunate situation that the
caller needs to acquire the totl cluster lock, then hold a
transaction, before calling into the xattr code. We push the parsing of
the tags to the caller of the locked get and set so that they can know
to acquire the right lock. (The acl callers will never be setting
scoutfs. prefixed xattrs so they will never have tags.)
Signed-off-by: Zach Brown <zab@versity.com>
Move to the use of the array of xattr_handler structs on the super to
dispatch set and get from generic_ based on the xattr prefix. This
will make it easier to add handling of the pseudo system. ACL xattrs.
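The wiring looks roughly like the following; handler callback
signatures vary across kernel versions, and the scoutfs_ names here
are hypothetical:

    static const struct xattr_handler scoutfs_user_xattr_handler = {
            .prefix = "user.",
            .get    = scoutfs_xattr_handler_get,    /* hypothetical */
            .set    = scoutfs_xattr_handler_set,    /* hypothetical */
    };

    static const struct xattr_handler *scoutfs_xattr_handlers[] = {
            &scoutfs_user_xattr_handler,
            /* the acl handlers for the "system." prefix slot in here */
            NULL,
    };

    /* at fill_super time; the generic_ xattr entry points then
     * dispatch on prefix through this array */
    sb->s_xattr = scoutfs_xattr_handlers;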
Signed-off-by: Zach Brown <zab@versity.com>
try_delete_inode_items() is responsible for making sure that it's safe
to delete an inode's persistent items. One of the things it has to
check is that there isn't another deletion attempt on the inode in this
mount. It sets a bit in lock data while it's working and backs off if
the bit is already set.
Unfortunately it was always clearing this bit as it exited, regardless
of whether it set it or not. This would let the next attempt perform
the deletion again before the working task had finished. This was often
not a problem because background orphan scanning is the only source of
regular concurrent deletion attempts.
But it's a big problem if a deletion attempt takes a very long time. It
gives enough time for an orphan scan attempt to clear the bit then try
again and clobber whoever is performing the very slow deletion.
I hit this in a test that built files with an absurd number of
fragmented extents. The second concurrent orphan attempt was able to
proceed with deletion and performed a bunch of duplicate data extent
frees and caused corruption.
The fix is to only clear the bit if we set it. Now all concurrent
attempts will back off until the first task is done.
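The set/clear pairing is essentially this pattern (the bit and struct
names are hypothetical):

    /* only the task that set the bit clears it; everyone else backs
     * off until the first deletion attempt is completely done */
    if (test_and_set_bit(LDATA_DELETING, &ldata->flags))
            return 0;                       /* another attempt is working */

    ret = delete_items(sb, ino);            /* hypothetical worker */

    clear_bit(LDATA_DELETING, &ldata->flags);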
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which gives the server a transaction with a free list block
that contains blknos that each dirty an individual btree block in the
global data free extent btree.
Signed-off-by: Zach Brown <zab@versity.com>
Recently scoutfs_alloc_move() was changed to try and limit the number of
metadata blocks it could allocate or free. The intent was to stop
concurrent holders of a transaction from fully consuming the available
allocator for the transaction.
The limiting logic was a bit off. It stopped when the allocator had the
caller's limit remaining, not when it had consumed the caller's limit.
This is overly permissive and could still allow concurrent callers to
consume the allocator. It was also triggering warning messages when a
call consumed more than its allowed budget while holding a transaction.
Unfortunately, we don't have per-caller tracking of allocator resource
consumption. The best we can do is sample the allocators as we start
and return if they drop by the caller's limit. This is overly
conservative in that it accounts any consumption by concurrent
callers to all callers.
This isn't perfect but it makes the failure case less likely and the
impact shouldn't be significant. We don't often have a lot of
concurrency and the limits are larger than callers will typically
consume.
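A sketch of the conservative sampling, with hypothetical helper names:

    /* sample the allocator counters at entry and stop once they've
     * dropped by the caller's budget; consumption by concurrent
     * callers is charged to everyone, which errs on the safe side */
    u64 avail_start = meta_avail_blocks(alloc);
    u64 freed_start = meta_freed_room(alloc);

    while (have_more_work(args)) {
            if (meta_used_since(alloc, avail_start, freed_start, budget))
                    break;
            ret = move_one(args);
            if (ret < 0)
                    break;
    }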
Signed-off-by: Zach Brown <zab@versity.com>
Add scoutfs_alloc_meta_low_since() to test if the metadata avail or
freed resources have been used by a given amount since a previous
snapshot.
Signed-off-by: Zach Brown <zab@versity.com>
As _get_log_trees() in the server prepares the log_trees item for the
client's commit, it moves all the freed data extents from the log_trees
item into core data extent allocator btree items. If the freed blocks
are very fragmented then it can exceed a commit's metadata allocation
budget trying to dirty blocks in the free data extent btree.
The fix is to move the freed data extents in multiple commits. First we
move a limited number in the main commit that does all the rest of the
work preparing the commit. Then we try to move the remaining freed
extents in multiple additional commits.
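In rough outline the move becomes a loop over commits (the function
names are hypothetical):

    /* move a bounded batch of freed data extents per commit so that a
     * badly fragmented freed list can't blow the metadata budget */
    do {
            ret = move_freed_data_extents(lt, budget, &remaining);
            if (ret == 0 && remaining)
                    ret = apply_server_commit(server);
    } while (ret == 0 && remaining);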
Signed-off-by: Zach Brown <zab@versity.com>
Callers who send to specific client connections can get -ENOTCONN if
their client has gone away. We forgot to free the send tracking struct
in that case.
Signed-off-by: Zach Brown <zab@versity.com>
The omap code keeps track of rids that are connected to the server. It
only freed the tracked rids as the server told it that rids were being
removed. But that removal only happened as clients were evicted. If
the server shut down it'd leave the old rid entries around. They'd be
leaked as the mount was unmounted and could linger and create duplicate
entries if the server started back up and the same clients reconnected.
The fix is to free the tracking rids as the server shuts down. They'll
be rebuilt as clients reconnect if the server restarts.
Signed-off-by: Zach Brown <zab@versity.com>
If we return an error from .fill_super without having set sb->s_root
then the vfs won't call our put_super. Our fill_super is careful to
call put_super so that it can tear down partial state, but we weren't
doing this with a few very early errors in fill_super. This tripped
leak detection when we weren't freeing the sbi when returning errors
from bad option parsing.
Signed-off-by: Zach Brown <zab@versity.com>
Clients don't use the net conn info and specified that it has 0 size.
The net layer would try and allocate a zero size region which returns
the magic ZERO_SIZE_PTR, which it would then later try and free. While
that works, it's a little goofy. We can avoid the allocation when the
size is 0. The pointer will remain NULL, which kfree also accepts.
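The avoidance is as simple as it sounds; a sketch:

    void *info = NULL;

    /* skip the allocation entirely when there's no conn info; the
     * pointer stays NULL and kfree(NULL) is a harmless no-op */
    if (info_size) {
            info = kzalloc(info_size, GFP_NOFS);
            if (!info)
                    return -ENOMEM;
    }

    /* later, on teardown: */
    kfree(info);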
Signed-off-by: Zach Brown <zab@versity.com>
Add an option to skip printing structures that are likely to be so huge
that the print output becomes completely unwieldy on large systems.
Signed-off-by: Zach Brown <zab@versity.com>
Like a lot of places in the server, get_log_trees() doesn't have the
tools it needs to safely unwind partial changes in the face of an error.
In the worst case, it can have moved extents from the mount's log_trees
item into the server's main data allocator. The dirty data allocator
reference is in the super block so it can be written later. The dirty
log_trees reference is on stack, though, so it will be thrown away on
error. This ends up duplicating extents in the persistent structures
because they're written in the new dirty allocator but still remain in
the unwritten source log_trees allocator.
This change makes it harder for that to happen. It dirties the
log_trees item and always tries to update it so that the dirty blocks are
consistent if they're later written out. If we do get an error updating
the item we throw an assertion. It's not great, but it matches other
similar circumstances in other parts of the server.
Signed-off-by: Zach Brown <zab@versity.com>
We were setting sk_allocation on the quorum UDP sockets to prevent
entering reclaim while using sockets but we missed setting it on the
regular messaging TCP sockets. This could create deadlocks where the
sending socket could enter scoutfs reclaim and wait for server messages
while holding the socket lock, preventing the receive thread from
receiving messages while it blocked on the socket lock.
The fix is to prevent entering the FS to reclaim during socket
allocations.
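The fix itself is one assignment per socket, applied as the socket is
created; roughly:

    /* keep socket allocations out of fs reclaim, where they could
     * block on server messages while holding the socket lock */
    sock->sk->sk_allocation = GFP_NOFS;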
Signed-off-by: Zach Brown <zab@versity.com>
Client log_trees allocator btrees can build up quite a number of
extents. In the right circumstances moving fragmented extents can
require dirtying a large number of paths to leaf blocks in the core allocator
btrees. It might not be possible to dirty all the blocks necessary to
move all the extents in one commit.
This reworks the extent motion so that it can be performed in multiple
commits if the meta allocator for the commit runs out while it is moving
extents. It's a minimal fix with as little disruption to the ordering
of commits and locking as possible. It simply bubbles up an error when
the allocators run out and retries functions that can already be retried
in other circumstances.
Signed-off-by: Zach Brown <zab@versity.com>
We're seeing allocator motion during get_log_trees dirty quite a lot of
blocks, which makes sense. Let's continue to up the budget. If we
still need significantly larger budgets we'll want to look into capping
the dirty block use of the allocator extent movers which will mean
changing callers to support partial progress.
Signed-off-by: Zach Brown <zab@versity.com>
When a new server starts up it rebuilds its view of all the granted
locks with lock recovery messages. Clients give the server their
granted lock modes which the server then uses to process all the resent
lock requests from clients.
The lock invalidation work in the client is responsible for
transitioning an old granted mode to a new invalidated mode from an
unsolicited message from the server. It has to process any client state
that'd be incompatible with the new mode (write dirty data, drop
caches). While it is doing this work, as an implementation shortcut,
it sets the granted lock mode to the new mode so that users that are
compatible with the new invalidated mode can use the lock while it's
being invalidated. Picture readers reading data while a write lock is
invalidating and writing dirty data.
A problem arises when a lock recover request is processed during lock
invalidation. The client lock recover request handler sends a response
with the current granted mode. The server takes this to mean that the
invalidation is done but the client invalidation worker might still be
writing data, dropping caches, etc. The server will allow the state
machine to advance which can send grants to pending client requests
which believed that the invalidation was done.
All of this can lead to a grant response handler in the client tripping
the assertion that there can not be cached items that were incompatible
with the old mode in a grant from the server. Invalidation might still
be invalidating caches. Hitting this bug is very rare and requires a
new server starting up while a client has both a request outstanding and
an invalidation being processed when the lock recover request arrives.
The fix is to record the old mode during invalidation and send that in
lock recover responses. This can lead the lock server to resend
invalidation requests to the client. The client already safely handles
duplicate invalidation requests from other failover cases.
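Conceptually the recovery response picks the mode like this (the flag
and field names are hypothetical):

    /* report the pre-invalidation mode during lock recovery so the
     * new server re-sends invalidation instead of assuming it's done */
    if (test_bit(LOCK_INVALIDATING, &lck->flags))
            resp.mode = lck->old_mode;
    else
            resp.mode = lck->mode;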
Signed-off-by: Zach Brown <zab@versity.com>
The change to only allocate a buffer for the first xattr item with
kmalloc instead of the entire logical xattr payload with vmalloc
included a regression for getting large xattrs.
getxattr used to copy the entire payload into the large vmalloc so it
could unlock just after get_next_xattr. The change to only getting the
first item buffer added a call to copy from the rest of the items but
those copies weren't covered by the locks. This would often work
because the lock pointer still pointed to a valid lock. But if the lock
was invalidated then the mode would no longer be compatible and
_item_lookup would return EINVAL.
The fix is to extend xattr_rwsem and cluster lock coverage to the rest
of the function body, which includes the value item copies. This also
makes getxattr's lock coverage consistent with setxattr and listxattr
which might reduce the risk of similar mistakes in the future.
Signed-off-by: Zach Brown <zab@versity.com>
After we've merged a log btree back into the main fs tree we kick off
work to free all its blocks. This would fully fill the transaction's
free blocks list before stopping to apply the commit.
Consuming the entire free list makes it hard to have concurrent holders
of a commit who also want to free things. This changes the log btree
block freeing to limit itself to a fraction of the budget that each
holder gets. That coarse limit avoids us having to precisely account
for the allocations and frees while modifying the freeing item while
still freeing many blocks per commit.
Signed-off-by: Zach Brown <zab@versity.com>
Server commits use an allocator that has a limited number of available
metadata blocks and entries in a list for freed blocks. The allocator
is refilled between commits. Holders can't fully consume the allocator
during the commit and that tended to work out because server commit
holders commit before sending responses. We'd tend to commit frequently
enough that we'd get a chance to refill the allocators before they were
consumed.
But there was no mechanism to ensure that this would be the case.
Enough concurrent server holders were able to fully consume the
allocators before committing. This causes scoutfs_meta_alloc and _free
to return errors, leading the server to fail in the worst cases.
This changes the server commit tracking to use more robust structures
which limit the number of concurrent holders so that the allocators
aren't exhausted. The commit_users struct stops holders from making
progress once the allocators don't have room for more holders. It also
lets us stop future holders from making progress once the commit work
has been queued. The previous cute use of a rwsem didn't allow for
either of these protections.
We don't have precise tracking of each holder's allocation consumption
so we don't try and reserve blocks for each holder. Instead we have a
maximum consumption per holder and make sure that all the holders can't
consume the allocators if they all use their full limit.
All of this requires the holding code paths to be well behaved and not
use more than the per-hold limit. We add some debugging code to print
the stacks of holders that were active when the total holder limit was
exceeded. This is the motivation for having state in the holders. We
can record some data at the time their hold started that'll make it a
little easier to track down which of the holders exceeded their limit.
Signed-off-by: Zach Brown <zab@versity.com>
Add a helper function to give the caller the number of blocks remaining in
the first list block that's used for meta allocation and freeing.
Signed-off-by: Zach Brown <zab@versity.com>
There was a brief time where we exported the ability to hold and apply
commits outside of the main server code. That wasn't a great idea, and
the few users have since been reworked to not require directly
manipulating server transactions, so we can reduce risk and make these
functions private again.
Signed-off-by: Zach Brown <zab@versity.com>
Quorum members will try to elect a new leader when they don't receive
heartbeats from the currently elected leader. This timeout is short to
encourage restoring service promptly.
Heartbeats are sent from the quorum worker thread and are delayed while
it synchronously starts up the server, which includes fencing previous
servers. If fence requests take too long then heartbeats will be
delayed long enough for remaining quorum members to elect a new leader
while the recently elected server is still busy fencing.
To fix this we decouple server startup from the quorum main thread.
Server starting and stopping becomes asynchronous so the quorum thread
is able to send heartbeats while the server work is off starting up and
fencing.
The server used to call into quorum to clear a flag as it exited. We
remove that mechanism and have the server maintain a running status that
quorum can query.
We add some state to the quorum work to track the asynchronous state of
the server. This lets the quorum protocol change roles immediately as
needed while remembering that there is a server running that needs to be
acted on.
The server used to also call into quorum to update quorum blocks. This
is a read-modify-write operation that has to be serialized. Now that we
have both the server starting up and the quorum work running they both
can't perform these read-modify-write cycles. Instead we have the
quorum work own all the block updates and it queries the server status
to determine when it should update the quorum block to indicate that the
server has fenced or shut down.
Signed-off-by: Zach Brown <zab@versity.com>
The fence script we use for our single node multi-mount tests only knows
how to fence by using forced unmount to destroy a mount. As of now, the
tests only generate failing nodes that need to be fenced by using forced
unmount as well. This results in the awkward situation where the
testing fence script doesn't have anything to do because the mount is
already gone.
When the test fence script has nothing to do we might not notice if it
isn't run. This adds explicit verification to the fencing tests that
the script was really run. It adds per-invocation logging to the fence
script and has the test make sure that it ran.
While we're at it, we take the opportunity to tidy up some of the
scripting around this. We use a sysfs file with the data device
major:minor numbers so that the fencing script can find and unmount
mounts without having to ask them for their rid. They may not be
operational.
Signed-off-by: Zach Brown <zab@versity.com>
Extended attribute values can be larger than a reasonable maximum size
for our btree items so we store xattrs in many items. The first pass at
this code used vmalloc to make it relatively easy to work with a
contiguous buffer that was cut up into multiple items.
The problem, of course, is that vmalloc() is expensive. Well, the
problem is that I always forget just how expensive it can be and use it
when I shouldn't. We had loads on high cpu count machines that were
catastrophically cpu bound on all the contentious work that vmalloc does
to maintain a coherent global address space.
This removes the use of vmalloc and only allocates a small buffer for
the first compound item. The later items directly reference regions of
the value buffer rather than copying it to and from the large intermediate
vmalloced buffer.
Signed-off-by: Zach Brown <zab@versity.com>
The t_server_nr and t_first_client_nr helpers iterated over all the fs
numbers examining their quorum/is_leader files, but clients don't have a
quorum/ directory. This was causing spurious outputs in tests that were
looking for a server but didn't find it in the first quorum fs numbers
and made it down into the clients.
Give them a helper that returns 0 for being a leader if the quorum/ dir
doesn't exist.
Signed-off-by: Zach Brown <zab@versity.com>
We were seeing rare test failures where it looked like is_leader wasn't
set for any of the mounts. The test that couldn't find a set is_leader
file had just performed some mounts so we know that a server was up and
processing requests.
The quorum task wasn't updating the status that's shown in sysfs and
debugfs until after the server started up. This opened the race where
the server was able to serve mount requests and have the test run to
find no is_leader file set before the quorum task was able to update the
status and make its election visible.
This updates the quorum task to make its status visible more often,
typically before it does something that will take a while. The
is_leader file will now be visible before the server is started so the test
will always see the file after the server starts up and lets mounts finish.
Signed-off-by: Zach Brown <zab@versity.com>
The final iput of an inode can delete items in cluster locked
transactions. It was never safe to call iput within locked
transactions but we never saw the problem. Recent work on inode
deletion raised the issue again.
This makes sure that we always perform iput outside of locked
transactions. The only interesting change is making scoutfs_new_inode()
return the allocated inode on error so that the caller can put the inode
after releasing the transaction.
Signed-off-by: Zach Brown <zab@versity.com>
During forced unmount commits abort due to errors and the open
transaction is left in a dirty state that is cleaned up by
scoutfs_shutdown_trans(). It cleans all the dirty blocks in the commit
write context with scoutfs_block_writer_forget_all(), but it forgot to
call scoutfs_alloc_prepare_commit() to put the block references held by
the allocator.
This was generating leaked block warnings during testing that used
forced unmount. It wouldn't affect regular operations.
Signed-off-by: Zach Brown <zab@versity.com>
We were seeing a number of problems coming from races that allowed tasks
in a mount to try and concurrently delete an inode's items. We could
see error messages indicating that deletion failed with -ENOENT, we
could see users of inodes behave erratically as inodes were deleted from
under them, and we could see eventual server errors trying to merge
overlapping data extents which were "freed" (added to transaction lists)
multiple times.
This commit addresses the problems in one relatively large patch. While
we could mechanically split up the fixes, they're all interdependent and
splitting them up (bisecting through them) could cause failures that
would be devilishly hard to diagnose.
First we stop allowing multiple cached vfs inodes. This was initially
done to avoid deadlocks between lock invalidation and final inode
deletion. We add a specific lookup that's used by invalidation which
ignores any inodes which are in I_NEW or I_FREEING. Now that iget can
wait on inode flags we call iget5_locked before acquiring the cluster
lock. This ensures that we can only have one cached vfs inode for a
given inode number in evict_inode trying to delete.
Now that we can only have one cached inode, we can rework the omap
tracking to use _set and _clear instead of _inc and _put. This isn't
strictly necessary but is a simplification and lets us issue warnings if
we see that we ever try to set an inode number's bit on behalf of
multiple cached inodes. We also add a _test helper.
Orphan scanning would try to perform deletion by instantiating a cached
inode and then putting it, triggering eviction and final deletion. This
was an attempt to simplify concurrency but ended up causing more
problems. It no longer tries to interact with inode cache at all and
attempts to safely delete inode items directly. It uses the omap test
to determine that it should skip an already cached inode.
We had attempted to forbid opening inodes by handle if they had an nlink
of 0. Since we allowed multiple cached inodes for an inode number this
was to prevent adding cached inodes that were being deleted. It was
only performing the check on newly allocated inodes, though, so it could
get a reference to the cached inode that the scanner had inserted for
deleting. We're choosing to keep restricting opening by handle to only
linked inodes so we also check existing inodes after they're refreshed.
We're left with a task evicting an inode and the orphan scanner racing
to delete an inode's items. We move the work of determining if it's safe
to delete out of scoutfs_omap_should_delete() and into
try_delete_inode_items() which is called directly from eviction and
scanning. This is mostly code motion but we do make three critical
changes. We get rid of the goofy concurrent deletion detection in
delete_inode_items() and instead use a bit in the lock data to serialize
multiple attempts to delete an inode's items. We no longer assume that
the inode must still be around because we were called from evict and
specifically check that the inode item is still present for deleting.
Finally, we use the omap test to discover that we shouldn't delete an
inode that is locally cached (and would not be included in the omap
response). We do all this under the inode write lock to serialize
between mounts.
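The ordering that makes this safe, sketched with the vfs call and
hypothetical scoutfs helpers:

    /* iget5_locked() can wait on I_NEW/I_FREEING, so call it before
     * taking the cluster lock; only one cached vfs inode can then
     * exist per inode number while evict_inode deletes */
    inode = iget5_locked(sb, ino, scoutfs_iget_test, scoutfs_iget_set, &ino);
    if (!inode)
            return ERR_PTR(-ENOMEM);

    lock = acquire_inode_cluster_lock(sb, ino);     /* hypothetical */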
Signed-off-by: Zach Brown <zab@versity.com>
We're seeing some trouble with very specific race conditions. This
updates the orphan-inodes test to try and force final inode deletion
during eviction, the orphan scan worker, and opening inodes by handle to
all race and hit an inode number at the same time.
Signed-off-by: Zach Brown <zab@versity.com>
The orphan inode test often uses a trick where it runs sleep in the
background with a file as stdin as a means of holding files open. This
can very rarely fail if the background sleep happens to be first
scheduled after the unlink of the file it's reading as stdin. A small
delay gives it a chance to run and open the file before it's unlinked.
It's still possible to lose the race, of course, but so far this has
been good enough.
Signed-off-by: Zach Brown <zab@versity.com>
Add a mount option to set the delay between scans of the orphan list.
The sysfs file for the option is writable so this option can be set at
run time.
Signed-off-by: Zach Brown <zab@versity.com>
The mount options code is some of the oldest in the tree and is weirdly
split between options.c and super.c. This cleans up the options code,
moves it all to options.c, and reworks it to be more in line with the
modern subsystem convention of storing state in an allocated info
struct.
Rather than putting the parsed options in the super for everyone to
directly reference we put them in the private options info struct and
add a locked read function. This will let us add sysfs files to change
mount options while safely serializing with readers.
All the users of mount options that used to directly reference the
parsed struct now call the read function to get a copy. They're all
small local changes except for quorum which saves a static copy of the
quorum slot number because it references it in so many places and relies
on it not changing.
Finally, we remove the empty debugfs "options" directory.
Signed-off-by: Zach Brown <zab@versity.com>
The inode caller of omap was manually calculating the group and bits,
which isn't fantastic. Export the little helper to calculate it so
the inode caller doesn't have to.
Signed-off-by: Zach Brown <zab@versity.com>
You can almost feel the editing mistake that brought the delay
calculation into the conditional and forgot to remove the initial
calculation at declaration.
Signed-off-by: Zach Brown <zab@versity.com>
We were seeing ABBA deadlocks on the dio_count wait and extent_sem
between fallocate and reads. It turns out that fallocate got lock
ordering wrong.
This brings fallocate in line with the rest of the adherents to the lock
hierarchy. Most importantly, the extent_sem is used after the
dio_count. While we're at it we bring the i_mutex down to just before
the cluster lock for consistency.
Signed-off-by: Zach Brown <zab@versity.com>
The man pages and inline help blurbs for the recently added format
version and quorum config commands incorrectly described the device
arguments which are needed.
Signed-off-by: Zach Brown <zab@versity.com>
The server's log merge complete request handler was considering the
absence of the client's original request as a failure. Unfortunately,
this case is possible if a previous server successfully completed the
client's request but the response was lost because it stopped for
whatever reason.
The failure was being logged as a hard error to the console which was
causing tests to occasionally fail during server failover that hit just
as the log merge completion was being processed.
The error was being sent to the client as a response, we just need to
silence the message for these expected but rare errors.
We also fix the related case where the server printed the even more
harsh WARN_ON if there was a next original request but it wasn't the one
we expected to find from our requesting client.
Signed-off-by: Zach Brown <zab@versity.com>
The net _cancel_request call hasn't been used or tested in approximately
a bazillion years. Best to get rid of it, and add and test it anew
if we think we need it again.
Signed-off-by: Zach Brown <zab@versity.com>
Our open by handle functions didn't care that the inode wasn't
referenced and let tasks open unlinked inodes by number. This
interacted badly with the inode deletion mechanisms which required that
inodes couldn't be cached on other nodes after the transaction which
removed their final reference.
If a task did accidentally open a file by inode while it was being
deleted it could see the inode items in an inconsistent state and return
very confusing errors that look like corruption.
The fix is to give the handle iget callers a flag to tell iget to only
get the inode if it has a positive nlink. If iget sees that the inode
has been unlinked it returns -ENOENT.
Signed-off-by: Zach Brown <zab@versity.com>
The orphan inodes test needs to test if inode items exist as it
manipulates inodes. It used to open the inode by a handle but we're
fixing that to not allow opening unlinked files. The
get-allocated-inos ioctl tests for the presence of items owned by the
inode regardless of any other vfs state so we can use it to verify what
scoutfs is doing as we work with the vfs inodes.
Signed-off-by: Zach Brown <zab@versity.com>
Add the get-allocated-inos scoutfs command which wraps the
GET_ALLOCATED_INOS ioctl. It'll be used by tests to find items
associated with an inode instead of trying to open the inode by a
constructed handle after it was unlinked.
Signed-off-by: Zach Brown <zab@versity.com>
Add an ioctl that can give some indication of inodes that have inode
items. We're exposing this for tests that verify the handling of open
unlinked inodes.
Signed-off-by: Zach Brown <zab@versity.com>
We're adding an ioctl that wants to build inode item keys so let's
export the private inode key initializer.
Signed-off-by: Zach Brown <zab@versity.com>
This reverts commit 61ad844891.
This fix was trying to ensure that lock recovery response handling
can't run after farewell calls reclaim_rid() by jumping through a bunch
of hoops to tear down locking state as the first farewell request
arrived.
It introduced a very slippery use-after-free during shutdown. It appears
that it was from drain_workqueue() previously being able to stop
chaining work. That's no longer possible when you're trying to drain
two workqueues that can queue work in each other.
We found a much clearer way to solve the problem so we can toss this.
Signed-off-by: Zach Brown <zab@versity.com>
We recently found that the server can send a farewell response and try
to tear down a client's lock state while it was still in lock recovery
with the client. The lock recovery response could add a lock
for the client after farewell's reclaim_rid() had thought the client was
gone forever and tore down its locks.
This left a lock in the lock server that wasn't associated with any
clients and so could never be invalidated. Attempts to acquire
conflicting locks with it would hang forever, which we saw as hangs in
testing with lots of unmounting.
We tried to fix it by serializing incoming request handling and
forcefully clobbering the client's lock state as we first got
the farewell request. That went very badly.
This takes another approach of trying to explicitly wait for lock
recovery to finish before sending farewell responses. It's more in
line with the overall pattern of having the client be up and functional
until farewell tears it down.
With this in place we can revert the other attempted fix that was
causing so many problems.
Signed-off-by: Zach Brown <zab@versity.com>
The local-force-unmount fenced fencing script only works when all the
mounts are on the local host and it uses force unmount. It is only
used in our specific local testing scripts. Packaging it as an example
led people to believe that it could be used to cobble together a
multi-host testing network, however temporary.
Move it from being in utils and packaged to being private to our tests so
that it doesn't present an attractive nuisance.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_recov_shutdown() tried to move the recovery tracking structs off
the shared list and into a private list so they could be freed. But
then it went and walked the now empty shared list to free entries. It
should walk the private list.
This would leak a small amount of memory in the rare cases where the
server was shut down while recovery was still pending.
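The intended pattern, with hypothetical names:

    LIST_HEAD(list);

    /* move the pending entries to a private list under the lock, then
     * free from the private list, not the now-empty shared list */
    spin_lock(&rinf->lock);
    list_splice_init(&rinf->pending, &list);
    spin_unlock(&rinf->lock);

    list_for_each_entry_safe(pend, tmp, &list, entry)
            kfree(pend);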
Signed-off-by: Zach Brown <zab@versity.com>
Back when we added the get/commit transaction sequence numbers to the
log_trees we forgot to add them to the scoutfs print output.
Signed-off-by: Zach Brown <zab@versity.com>
The server's little set_shutting_down() helper accidentally used a read
barrier instead of a write barrier.
Signed-off-by: Zach Brown <zab@versity.com>
Tear down client lock server state and set a boolean so that
there is no race between client/server processing lock recovery
at the same time as farewell.
Currently there is a bug where if server and clients are unmounted
then work from the client is processed out of order, which leaves
behind a server_lock for a RID that no longer exists.
In order to fix this we need to serialize SCOUTFS_NET_CMD_FAREWELL
in recv_worker.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
This unit test reproduces the race we have between the
client and server doing lock recovery while farewell
is processed.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
The max_seq and active reader mechanisms in the item cache stop readers
from reading old items and inserting them in the cache after newer items
have been reclaimed by memory pressure. The max_seq field in the pages
must reflect the greatest seq of the items in the page so that reclaim
knows that the page contains items newer than old readers and must not
be removed.
We update the page max_seq as items are inserted or as they're dirtied
in the page. There's an additional subtle effect that the max_seq can
also protect items which have been erased. Deletion items are erased
from the pages as a commit completes. The max_seq in that page will
still protect it from being reclaimed even though no items have that seq
value themselves.
That protection fails if the range of keys containing the erased item is
moved to another page with a lower max_seq. The item mover only
updated the destination page's max_seq for each item that was moved. It
missed that the empty space between the items might have a larger
max_seq from an erased item. We don't know where the erased item is so
we have to assume that a larger max_seq in the source page must be set
on the destination page.
This could explain very rare item cache corruption where nodes were
seeing deleted directory entry items reappearing. It would take a
specific sequence of events involving large directories with an isolated
removal, a delayed item cache reader, a commit, and then enough
insertions to split the page all happening in precisely the wrong
sequence.
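The mover's fix boils down to a carry-over of the source page's
max_seq (the field names here are hypothetical):

    /* an erased deletion item can leave the source page's max_seq
     * greater than any remaining item's seq, so the destination page
     * must inherit it or reclaim could tear out protected key space */
    if (src->max_seq > dst->max_seq)
            dst->max_seq = src->max_seq;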
Signed-off-by: Zach Brown <zab@versity.com>
Add a command to change the quorum config which starts by only supporting
updates to the super block while the file system is offline.
Signed-off-by: Zach Brown <zab@versity.com>
We're adding a command to change the quorum config which updates its
version number. Let's make the version a little more visible and start
it at the more humane 1.
Signed-off-by: Zach Brown <zab@versity.com>
Move the code that checks that the super is in use from
change-format-version into its own function in util.c. We'll use it in
an upcoming command to change the quorum config.
Signed-off-by: Zach Brown <zab@versity.com>
Move functions for printing and validating the quorum config from mkfs.c
to quorum.c so that they can be used in an upcoming command to change
the quorum config.
Signed-off-by: Zach Brown <zab@versity.com>
The change from --quorum-count to --quorum-slot forgot to update a
mention of the option in an error message in mkfs when it wasn't
provided.
Signed-off-by: Zach Brown <zab@versity.com>
We want to enable the test case for:
generic/023 - tests that renameat2 syscall exists
generic/024 - renameat2 with NOREPLACE flag
Move both generic/025 and 078 to the no-run list so that
we can test the [not run] output when unsupported flags
are passed.
Example output:
generic/025 [not run] fs doesn't support RENAME_EXCHANGE
generic/078 [not run] fs doesn't support RENAME_WHITEOUT
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
The goal of the test case is to have two mount points
make two async calls to renameat2. This allows the two
calls to race with RENAME_NOREPLACE, and when this happens
you expect one of them to fail with -EEXIST. This validates
that the new flag works. Essentially one of the two calls
to renameat2 should hit the new RENAME_NOREPLACE code and
exit early.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
Support the generic renameat2 syscall, then add support for the
RENAME_NOREPLACE flag. To support the flag we need to check
the existence of both entries and return -EEXIST.
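The flag handling amounts to this, sketched with a hypothetical
entry-existence check:

    #include <linux/fs.h>   /* RENAME_NOREPLACE */

    /* refuse flags we don't support (EXCHANGE, WHITEOUT) and, with
     * NOREPLACE, fail with -EEXIST before anything is modified */
    if (flags & ~RENAME_NOREPLACE)
            return -EINVAL;
    if ((flags & RENAME_NOREPLACE) && target_entry_exists(new_dir, new_dentry))
            return -EEXIST;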
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
The current test case attempts to create a state to read
by calling setattr and getattr in an attempt to force block
cache reads. It so happens that this does not always force
cache block reads, which in rare cases causes this test case
to fail.
The new test case removes all the extra bouncing around of mount
points and we just directly call scoutfs df which will walk
everyone's allocators to summarize the block counts, which is
guaranteed to exist. Therefore, we do not have to create any sort
of state prior to trying to force a read.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
Let's try maintaining release notes in a file in the repo. There are
lots of schemes for associating commits and release notes and this seems
like the simplest place to start.
Signed-off-by: Zach Brown <zab@versity.com>
[85164.299902] scoutfs f.8c19e1.r.facf2e error: server error writing btree blocks: -5
[144308.589596] scoutfs f.c9397a.r.8ae97f error: server error -5 freeing merged btree blocks: looping commit del/upd freeing item
[174646.005596] scoutfs f.15f0b3.r.1862df error: server error -5 freeing merged btree blocks: final commit del/upd freeing item
[146653.893676] scoutfs f.c7f188.r.34e23c error: server error writing super block: -5
[273218.436675] scoutfs f.dd4157.r.f0da7e error: server failed to bind to 127.0.0.1:42002, err -98
[376832.542823] scoutfs f.049985.r.1a8987 error: error -5 reading quorum block 19 to update event 1 term 3
The above is an example output that will be filtered out
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
We do not want to short-circuit btree_walk early; it is
better to handle the forced unmount on the caller side.
Therefore, remove this from btree_walk.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
If there is a forced unmount we call _net_shutdown from
umount_begin in order to tell the server and clients to
break out of pending network replies. We then add the call
to abort within the shutdown_worker since most of the mucking
with send and resend queues is done there.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
Only BUG_ON for inconsistency, not for commit errors
or failure to delete the original request.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
In scoutfs_server_worker we do not properly handle the cleanup
of _block_writer_init and alloc_init. On error paths, if either
of those is initialized, we can call alloc_prepare_commit or
writer_forget_all to ensure we drop the block references and
clear the dirty status of all the blocks in the writer.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
Remove a bunch of old language from the README. We're no longer in the
early days of the open release so we can remove all the alpha quality
language. And the system has grown sufficiently that the repo README
isn't a great place for a small getting started doc. There just isn't
room to do the subject justice. If we need such a thing for the
project we'll put it as a first order doc in the repo that'd be
distributed along with everything else.
Signed-off-by: Zach Brown <zab@versity.com>
In order to safely free blocks we need to first dirty
the item that tracks the freeing work. This allows the work
to resume later without a double free.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
As we update xattrs we need to update any existing old items with the
contents of the new xattr that uses those items. The loop that updated
existing items only took the old xattr size into account and assumed
that the new xattr would use those items. If the new xattr size used
fewer parts then the attempt to update all the old parts that weren't
covered by the new size would go very wrong. The length of the region
in the new xattr would be negative so it'd try to use the max part
length. Worse, it'd copy these max part length regions outside the
input new xattr buffer. Typically this would land in addressable memory
and copy garbage into the unused old items before they were later
deleted.
However, it could access so far outside the input buffer that it could
cross a page boundary into inaccessible memory and fault. We saw this in
the field while trying to repeatedly incrementally shrink a large xattr.
This fixes the loop that updates overlapping items between the new and
old xattr to start with the smaller of their two item counts. Now it
will only update items that are actually used by both xattrs and will
only safely access the new xattr input buffer.
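The loop bound fix is conceptually a min() of the two item counts:

    /* only items used by both the old and new xattr are updated in
     * place; a shrinking xattr must never index past the new buffer */
    nr_update = min(old_nr_items, new_nr_items);

    for (i = 0; i < nr_update; i++)
            update_item_from_value(i, new_value);   /* hypothetical */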
Signed-off-by: Zach Brown <zab@versity.com>
From now on if we make incompatible changes to structures or messages
then we update the format version and ensure that the code can deal with
all the versions in its supported range.
Signed-off-by: Zach Brown <zab@versity.com>
We had arbitrarily chosen an ioctl code 's' to match scoutfs, but of
course that conflicts. This chooses an arbitrary hole in the upstream
reservations from ioctl-number.rst.
Then we make sure to have our _IO[WR] usage reflect the direction of the
final type parameter. For most of our ioctls userspace is writing an
argument parameter to perform an operation (that often has side
effects). Most of our ioctls should be _IOW because userspace is
writing the parameter, not _IOR (though the operation tends to read
state). A few ioctls copy output back to userspace in the parameter so
they're _IOWR.
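For example (the magic and numbers here are made up, not the real
scoutfs values):

    #include <linux/ioctl.h>

    struct example_args {
            __u64 id;
    };

    /* userspace writes the argument struct to the kernel, so these are
     * _IOW even though the operations mostly read state; the ioctl
     * that copies a result back through the struct is _IOWR */
    #define EXAMPLE_IOC_MAGIC       0xab
    #define EXAMPLE_IOC_OP          _IOW(EXAMPLE_IOC_MAGIC, 1, struct example_args)
    #define EXAMPLE_IOC_OP_RESP     _IOWR(EXAMPLE_IOC_MAGIC, 2, struct example_args)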
Signed-off-by: Zach Brown <zab@versity.com>
The idea here was that we'd expand the size of the struct and
valid_bytes would tell the kernel which fields were present in
userspace's struct. That doesn't combine well with the ioctl convention
of having the size of the type baked into the ioctl number. We'll
remove this to make the world less surprising. If we expand the
interface we'd add additional ioctls and types.
Signed-off-by: Zach Brown <zab@versity.com>
While checking in on some other code I noticed that we have lingering
allocator and writer contexts over in the lock server. The lock server
used to manage its own client state and recovery. We've since moved
that into shared recov functionality in the server. The lock server no
longer manipulates its own btrees and doesn't need these unused
references to the server's contexts.
Signed-off-by: Zach Brown <zab@versity.com>
Introduce some space between the current key zone and type values so
that we have room to insert new keys amongst the current keys if we need
to. A spacing of 4 is arbitrarily chosen as small enough to still give
us intuitively small numbers while leaving enough room to grow, given
how long it's taken to come to the current number of keys.
Signed-off-by: Zach Brown <zab@versity.com>
The code that updates inode index items on behalf of indexed fields uses
an array to track changes in the fields. Those array indexes were the
raw key type values.
We're about to introduce some sparse space between all the key values so
that we have some room to add keys in the future at arbitrary sort
positions amongst the previous keys.
We don't want the inode index item updating code to keep using raw types
as array indices when the type values are no longer small dense values.
We introduce indirection from type values to array indices to keep the
tracking array in the in-memory inode struct small.
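The indirection is a small dense mapping, along these lines (the type
names are hypothetical):

    /* map sparse on-disk key type values to dense array slots so the
     * tracking array in the in-memory inode stays small */
    static int index_type_slot(u8 type)
    {
            switch (type) {
            case INODE_INDEX_META_SEQ_TYPE: return 0;
            case INODE_INDEX_DATA_SEQ_TYPE: return 1;
            default:                        return -1;
            }
    }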
Signed-off-by: Zach Brown <zab@versity.com>
As we freeze the format let's remove this old experiment to try and make
it easier to line up traces from different mounts. It never worked
particularly well and I think it could be argued that trying to merge
trace logs on different machines isn't a particularly meaningful thing
to do. You care about how they interact, not what they were doing at
the same time with their independent resources.
Signed-off-by: Zach Brown <zab@versity.com>
There are a few bad corner cases in the state machine that governs how
client transactions are opened, modified, and committed.
The worst problem is on the server side. All server request handlers
need to cope with resent requests without causing bad side effects.
Both get_log_trees and commit_log_trees would try to fully process
resent requests. _get_log_trees() looks safe because it works with the
log_trees that was stored previously. _commit_log_trees() is not safe
because it can rotate out the srch log file referenced by the sent
log_trees every time it's processed. This could create extra srch
entries which would delete the first instance of entries. Worse still,
by injecting the same block structure into the system multiple times it
ends up causing multiple frees of the blocks that make up the srch file.
The client side problems are slightly different, but related. There
aren't strong constraints which guarantee that we'll only send a commit
request after a get request succeeds. In crazy circumstances the
commit request in the write worker could come before the first get in
mount succeeds. Far worse is that we can send multiple commit requests
for one transaction if it changes as we get errors during multiple
queued write attempts, particularly if we get errors from get_log_trees
after having successfully committed.
This hardens all these paths to ensure a strict sequence of
get_log_trees, transaction modification, and commit_log_trees.
On the server we add *_trans_seq fields to the log_trees struct so that
both get_ and commit_ can see that they've already prepared a commit to
send or have already committed the incoming commit, respectively. We
can use the get_trans_seq field as the trans_seq of the open transaction
and get rid of the entire separate mechanism we used to have for
tracking open trans seqs in the clients. We can get the same info by
walking the log_trees and looking at their *_trans_seq fields.
In the client we have the write worker immediately return success if
mount hasn't opened the first transaction. Then we don't have the
worker return to allow further modification until it has gotten success
from get_log_trees.
Signed-off-by: Zach Brown <zab@versity.com>
The transaction code was built a million years ago and put all of its
data in our core super block info. This finally moves the rest of the
private transaction fields out of the core super block and into the
transaction info. This makes it clear that it's private to trans.c and
brings it in line with the rest of the subsystems in the tree.
Signed-off-by: Zach Brown <zab@versity.com>
Add tracking in the alloc functions that the server uses to move extents
between allocator structures on behalf of client mounts.
Signed-off-by: Zach Brown <zab@versity.com>
The srch compaction worker will wait a bit before attempting another
compaction as it finishes a compaction that failed.
Unfortunately, it clobbered the errors it got during compaction with the
result of sending the commit to the server with the error flag. If the
commit is successful then it thinks there were no errors and immediately
re-queues itself to try the next compaction.
If the error is persistent, as it was with a bug in how we merged log
files with a single page's worth of entries, then we can spin
indefinitely getting an error, clobbering the error with the commit
result, and immediately queueing our work to do it all over again.
This fix preserves existing errors when getting the result of the commit
and will correctly back off. If we get persistent merge errors at least
they won't consume significant resources. We add a counter for the
commit errors so we can get some visibility if this happens.
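The preservation is the usual pattern of only adopting a second result
when there's no earlier error:

    /* don't clobber a compaction error with the commit result */
    commit_ret = send_compact_commit(sb);   /* hypothetical */
    if (ret == 0)
            ret = commit_ret;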
Signed-off-by: Zach Brown <zab@versity.com>
The k-way merge function at the core of the srch file entry merging had
some bookkeeping math (calculating number of parents) that couldn't
handle merging a single incoming entry stream, so it threw a warning and
returned an error. When refusing to handle that case, it was assuming
that caller was trying to merge down a single log file which doesn't
make any sense.
But in the case of multiple small unsorted logs we can absolutely end up
with their entries stored in one sorted page. We have one sorted input
page that's merging multiple log files. The merge function is also the
path that writes to the output file so we absolutely need to handle this
case.
We more carefully calculate the number of parents, clamping it to one
parent when we'd otherwise get "(roundup(1) -> 1) - 1 == 0" when
calculating the number of parents from the number of inputs. We can
relax the warning and error to refuse to merge nothing.
The test triggers this case by putting single search entries in the log
files for mounts and unmounting them to force rotation of the mount log
files into mergeable rotated log files.
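The clamp itself is tiny; a sketch of the shape of the fix (the real
parent bookkeeping is more involved):

    /* a single sorted input still needs one parent level to drive the
     * merge that writes the output file; only merging nothing is an
     * error now */
    if (WARN_ON_ONCE(nr_inputs == 0))
            return -EINVAL;
    if (nr_parents == 0)
            nr_parents = 1;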
Signed-off-by: Zach Brown <zab@versity.com>
Our statfs implementation had clients reading the super block and using
the next free inode number to guess how many inodes there might be. We
are very aggressive with giving directories private pools of inode
numbers to allocate from. They're often not used at all, creating huge
gaps in allocated inode numbers. The ratio of the average number of
allocations per directory to the batch size given to each directory is
the factor that the used inode count can be off by.
Now that we have a precise count of active inodes we can use that to
return accurate counts of inodes in the files fields in the statfs
struct. We still don't have static inode allocation so the fields don't
make a ton of sense. We fake the total and free count to give a
reasonable estimate of the total files that doesn't change while the
free count is calculated from the correct count of used inodes.
While we're at it we add a request to get the summed fields that the
server can cheaply discover in cache rather than having the client
always perform read IOs.
Signed-off-by: Zach Brown <zab@versity.com>
Add an alloc_foreach variant which uses the caller's super to walk the
allocators rather than always reading it off the device.
Signed-off-by: Zach Brown <zab@versity.com>
Add a count of used inodes to the super block and a change in the inode
count to the log_trees struct. Client transactions track the change in
inode count as they create and delete inodes. The log_trees delta is
added to the count in the super as finalized log_trees are deleted.
Signed-off-by: Zach Brown <zab@versity.com>
We had previously started on a relatively simple notion of an
interoperability version which wasn't quite right. This fleshes out
support for a more functional format version. The super blocks have a
single version that defines behaviour of the running system. The code
supports a range of versions and we add some initial interfaces for
updating the version while the system is offline. All of this together
should let us safely change the underlying format over time.
Signed-off-by: Zach Brown <zab@versity.com>
Add a write_nr field to the quorum block header which is incremented
with every write. Each event also gets a write_nr field that is set to
the incremented value from the header. This gives us a history of the
order of event updates that isn't sensitive to misconfigured time.
Signed-off-by: Zach Brown <zab@versity.com>
We're adding another command that does block IO so move some block
reading and writing functions out of mkfs. We also grow a few function
variants and call the write_sync variant from mkfs instead of having it
manually sync.
Signed-off-by: Zach Brown <zab@versity.com>
The code that shows the note sections as files uses the section size to
define the size of the notes payload. We don't need to null terminate
the strings to define their lengths. Doing so puts a null in the notes
file which isn't appreciated by many readers.
Signed-off-by: Zach Brown <zab@versity.com>
The test harness might as well use all cpus when building. It's
reasonably safe to assume both that the test systems are otherwise idle
and that the build is likely to succeed.
Signed-off-by: Zach Brown <zab@versity.com>
TCP keepalive probes only work when the connection is idle. They're not
sent when there's unacked send data being retransmitted. If the server
fails while we're retransmitting we don't break the connection and try
to elect and connect to a new server until the very long default
connection timeouts or the server comes back and the stale connection is
aborted.
We can set TCP_USER_TIMEOUT to break an unresponsive connection when
there's written data. It changes the behavior of the keepalive probes
so we rework them a bit to clearly apply our timeout consistently
between the two mechanisms.
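For the kernels this targets, setting the option looks roughly like
this (kernel_setsockopt was removed in 5.8 and newer kernels use
different helpers; the timeout name is hypothetical):

    #include <net/sock.h>
    #include <linux/tcp.h>

    /* break the connection if written data goes unacked this long;
     * keepalive probes only cover the idle case */
    unsigned int tmo_ms = CONN_TIMEOUT_MS;          /* hypothetical */
    int ret = kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
                                (char *)&tmo_ms, sizeof(tmo_ms));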
Signed-off-by: Zach Brown <zab@versity.com>
As the server comes up it needs to fence any previous servers before it
assumes exclusive access to the device. If fencing fails it can leave
fence requests behind. The error path for these very early failures
didn't shut down fencing so we'd have lingering fence requests span the
life cycle of server startup and shutdown. The next time the server
starts up in this mount it can try to create the fence request again,
get an error because a lingering one already exists, and immediately
shut down.
The result is that fencing errors that hit that initial attempt during
server startup can become persistent fencing errors for the lifetime of
that mount, preventing it from ever successfully starting the server.
Moving the fence stop call so that it hits all exit error paths consistently
cleans up fence requests and avoids this problem. The next server
instance will get a chance to process the fence request again. It might
well hit the same error, but at least it gets a chance.
Signed-off-by: Zach Brown <zab@versity.com>
The current script gets stuck in an infinite loop when the test
suite is started with 1 mount point. The problem is in the part of
the script that advances the ops for each mount.
The while loop detects that an op_mnt has wrapped by checking if it
equals 0. But we also set each of the op_mnts to 0 during the
advancement, so with a single mount the wrapped value is
indistinguishable from the reset value and the loop never exits. The
fix is to check at the end of the loop whether the last op's mount
number wrapped and, if so, break out.
Signed-off-by: Bryant Duffy-Ly <bduffyly@versity.com>
In some of the allocation paths there are goto statements
that end up calling kfree(). That is fine, but in cases
where the pointer is not initially set to NULL we
might have undefined behavior. kfree() on a NULL pointer
does nothing, so essentially these changes should not
change behavior, but they clarify the code path.
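For illustration, the pattern being cleaned up looks roughly like this
(names hypothetical):

    struct foo *a = NULL; /* initialized so the error path is safe */
    struct foo *b = NULL;
    int ret = 0;

    a = kmalloc(sizeof(*a), GFP_KERNEL);
    if (!a) {
            ret = -ENOMEM;
            goto out;
    }

    b = kmalloc(sizeof(*b), GFP_KERNEL);
    if (!b) {
            ret = -ENOMEM;
            goto out;
    }
out:
    if (ret) {
            /* kfree(NULL) is a no-op, so freeing b here is fine
             * even when only a was allocated */
            kfree(b);
            kfree(a);
    }
    return ret;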
Signed-off-by: Bryant Duffy-Ly <bduffyly@versity.com>
Unfortunately, we're back in kernels that don't yet have d_op->d_init.
We allocate our dentry info manually as we're given dentries. The
recent verification work forgot to consistently make sure the info was
allocated before using it. Fix that up, and while we're at it be a bit
more robust in how we check to see that it's been initialized without
grabbing the d_lock.
Signed-off-by: Zach Brown <zab@versity.com>
This adds i_version to our inode and maintains it as we allocate, load,
modify, and store inodes. We set the flag in the superblock so
in-kernel users can use i_version to see changes in our inodes.
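Roughly, the moving parts look like this (the superblock flag name
depends on the kernel version; locking details elided):

    /* at mount: advertise that we maintain i_version */
    sb->s_flags |= MS_I_VERSION; /* SB_I_VERSION on newer kernels */

    /* wherever we dirty an inode's items */
    inode->i_version++;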
Signed-off-by: Zach Brown <zab@versity.com>
More recent gcc notices that ret in delete_files can be used
uninitialized if nr is 0, while missing that we won't call delete_files
in that case. Seems worth fixing, regardless.
Signed-off-by: Zach Brown <zab@versity.com>
Add a quick test to make sure that create is validating stale dentries
before deciding if it should create or return -EEXIST.
Signed-off-by: Zach Brown <zab@versity.com>
Add the .totl. xattr tag. When the tag is set the end of the name
specifies a total name with 3 encoded u64s separated by dots. The value
of the xattr is a u64 that is added to the named total. An ioctl is
added to read the totals.
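A sketch of pulling the three dot-separated u64s out of the tail of
the name (the helper name and the decimal encoding are assumptions):

    /* parse "a.b.c" following the .totl. tag into three u64s */
    static int parse_totl_name(const char *tail, u64 *tot)
    {
            char buf[64];
            char *str = buf;
            char *tok;
            int i;

            if (strlcpy(buf, tail, sizeof(buf)) >= sizeof(buf))
                    return -EINVAL;

            for (i = 0; i < 3; i++) {
                    tok = strsep(&str, ".");
                    if (!tok || kstrtou64(tok, 10, &tot[i]))
                            return -EINVAL;
            }

            /* str is NULL once exactly three tokens were consumed */
            return str ? -EINVAL : 0;
    }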
Signed-off-by: Zach Brown <zab@versity.com>
The fs log btrees have values that start with a header that stores the
item's seq and flags. There's a lot of sketchy code that manipulates
the value header as items are passed around.
This adds the seq and flags as core item fields in the btree. They're
only set by the interfaces that are used to store fs items: _insert_list
and _merge. The rest of the btree items that use the main interface
don't work with the fields.
This was done to help delta items discover when logged items have been
merged before the finalized log btrees are deleted and the code ends up
being quite a bit cleaner.
Signed-off-by: Zach Brown <zab@versity.com>
Add an inode creation time field. It's created for all new inodes.
It's visible to stat_more. setattr_more can set it during
restore.
Signed-off-by: Zach Brown <zab@versity.com>
Our dir methods were trusting dentry args. The vfs code paths use
i_mutex to protect dentries across revalidate or lookup and method
calls. But that doesn't protect methods running in other mounts.
Multiple nodes can interleave the initial lookup or revalidate then
actual method call.
Rename got this right. It is very paranoid about verifying inputs after
acquiring all the locks it needs.
We extend this pattern to the rest of the methods that need to use the
mapping of name to inode (and our hash and pos) in dentries. Once we
acquire the parent dir lock we verify that the dentry is still current,
returning -EEXIST or -ENOENT as appropriate.
Along these lines, we tighten up dentry info correctness a bit by
updating our dentry info (recording lock coverage and hash/pos) for
negative dentries produced by lookup or as the result of unlink.
Signed-off-by: Zach Brown <zab@versity.com>
Client lock invalidation handling was very strict about not receiving
duplicate invalidation requests from the server because it could only
track one pending request. But the promise to only send one invalidate
at a time is made by one server; it can't be enforced across server
failover, particularly because invalidation processing can have to do
quite a lot of work with the server as it tears down state associated
with the lock.
We fix this by recording and processing each individual incoming
invalidation request on the lock.
The code that handled reordering of incoming grant responses and
invalidation requests waited for the lock's mode to match the old mode
in the invalidation request before proceeding. That would have
prevented duplicate invalidation requests from making forward progress.
To fix this we make lock client receive processing synchronous instead
of going through async work which can reorder. Now grant responses are
processed as they're received and will always be resolved before all the
invalidation requests are queued and processed in order.
Signed-off-by: Zach Brown <zab@versity.com>
The forest reader reads items from the fs_root and all log btrees and
gives them to the caller who tracks them to resolve version differences.
The reads can run into stale blocks which have been overwritten. The
forest reader was implementing the retry under the item state in the
caller. This can corrupt items that are only seen first in an old fs
root before a merge and then only seen in the fs_root after a merge. In
this case the item won't have any versioning and the existing version
from the old fs_root is preferred. This is particularly bad when the
new version was deleted -- in that case we have no metadata which would
tell us to drop the old item that was read from the old fs_root.
This is fixed by pushing the retry up to callers who wipe the item state
before each retry. Now each set of items is related to a single
snapshot of the fs_root and logs at one point in time.
I haven't seen definitive evidence of this happening in practice. I
found this problem after putting on my craziest thinking toque and
auditing the code for places where we could lose item updates.
Signed-off-by: Zach Brown <zab@versity.com>
Btree merging attempted to build an rbtree of the input roots with only
one version of an item present in the rbtree at a time. It really
messed this up by completely dropping an input root when a root with a
newer version of its item tried to take its place in the rbtree. What
it should have done is advance to the next item in the older root, which
itself could have required advancing some other older root. Dropping
the root entirely is catastrophically wrong because it hides the rest of
the items in the root from merging. This has been manifesting as
occasional mysterious item loss during tests where memory pressure, item
update patterns, and merging all lined up just so.
This fixes the problem by more clearly keeping the next item in each
root in the rbtree. We sort by newest to oldest version so that once
we merge the most recent version of an item it's easy to skip all the
older versions of the item in the next rbtree entries for the
rest of the input roots.
While we're at it we work with references to the static cached input
btree blocks. The old code was a first pass that used an expensive
btree walk per item and copied the value payload.
Signed-off-by: Zach Brown <zab@versity.com>
When the xattr inode searches fail the test will eventually fail when
the output differs, but that could take a while. Have it fail much
sooner so that we can have tighter debugging iterations and trace ring
buffer contents that are likely to be a lot closer to the first failure.
Signed-off-by: Zach Brown <zab@versity.com>
The current orphan scan uses the forest_next_hint to look for candidate
orphan items to delete. It doesn't skip deleted items and checks the
forest of log btrees so it'd return hints for every single item that
existed in all the log btrees across the system. And we call the hint
lookup per item.
When the system is deleting a lot of files we end up generating a huge
load where all mounts are constantly getting the btree roots from the
server, reading all the newest log btree blocks, finding deleted orphan
items for inodes that have already been deleted, and moving on to the
next deleted orphan item.
The fix is to use a read-only traversal of only one version of the fs
root for all the items in one scan. This avoids all the deleted orphan
items that exist in the log btrees which will disappear when they're
merged. It lets the item iteration happen in a single read-only cached
btree instead of constantly reading in the most recently written root
block of every log btree.
The result is an enormous speedup of large deletions. I don't want to
describe exactly how enormous.
Signed-off-by: Zach Brown <zab@versity.com>
We can be performing final deletion as inodes are evicted during
unmount. We have to keep full locking, transactions, and networking up
and running for the evict_inodes() call in generic_shutdown_super().
Unfortunately, this means that workers can be using inode references
during evict_inodes() which prevents them from being evicted. Those
workers can then remain running as we tear down the system, causing
crashes and deadlocks as the final iputs try to use resources that have
been destroyed.
The fix is to first properly stop orphan scanning, which can instantiate
new cached inodes, before the call to kill_block_super ends up trying
to evict all inodes. Then we just need to wait for any pending iput and
invalidate work to finish and perform the final iput, which will always
evict because generic_shutdown_super has cleared MS_ACTIVE.
Signed-off-by: Zach Brown <zab@versity.com>
Add some simple tracking of message counts for each lock in the lock
server so that we can start to see where conflicts may be happening in a
running system.
Signed-off-by: Zach Brown <zab@versity.com>
Add a quick helper that can be used to avoid doing work if we know that
we're already shutting down. This can be a single coarser indicator
than adding functions to each subsystem to track that we're shutting
down.
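Something as small as this is all that's needed (names hypothetical):

    /* one coarse indicator instead of per-subsystem flags */
    static inline bool scoutfs_shutting_down(struct super_block *sb)
    {
            return test_bit(SCOUTFS_FLAG_SHUTDOWN, &SCOUTFS_SB(sb)->flags);
    }

Callers can then bail out early instead of starting new work that
teardown would have to wait for.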
Signed-off-by: Zach Brown <zab@versity.com>
Currently the first inode number that can be allocated directly follows
the root inode. This means the first batch of allocated inodes are in
the same lock group as the root inode.
The root inode is a bit special. It is always hot as absolute path
lookups and inode-to-path resolution always read directory entries from
the root.
Let's try aligning the first free inode number to the next inode lock
group boundary. This will stop work in those inodes from necessarily
conflicting with work in the root inode.
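Assuming a power-of-two lock group size, the alignment is one line
(names hypothetical):

    /* skip the remainder of the root inode's lock group */
    first_free_ino = ALIGN(SCOUTFS_ROOT_INO + 1,
                           SCOUTFS_LOCK_INODE_GROUP_NR);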
Signed-off-by: Zach Brown <zab@versity.com>
We had some logic to try and delay lock invalidation while the lock was
still actively in use. This was trying to reduce the cost of
pathological lock conflict cases but it had some severe fairness
problems.
It was first introduced to deal with bad patterns in userspace that no
longer exist and it was built on top of the LSM transaction machinery
that also no longer exists. It hasn't aged well.
Instead of introducing invalidation latency in the hopes that it leads
to more batched work, which it can't always, let's aim more towards
reducing latency in all parts of the write-invalidate-read path and
also aim towards reducing contention in the first place.
Signed-off-by: Zach Brown <zab@versity.com>
We have a problem where items can appear to go backwards in time because
of the way we chose which log btrees to finalize and merge.
Because we don't have versions in items in the fs_root, and even might
not have items at all if they were deleted, we always assume items in
log btrees are newer than items in the fs root.
This creates the requirement that we can't merge a log btree if it has
items that are also present in older versions in other log btrees which
are not being merged. The unmerged old item in the log btree would take
precedence over the newer merged item in the fs root.
We weren't enforcing this requirement at all. We used the max_item_seq
to ensure that all items were older than the current stable seq but that
says nothing about the relationship between older items in the finalized
and active log btrees. Nothing at all stops an active btree from having
an old version of a newer item that is present in another mount's
finalized log btree.
To reliably fix this we create a strict item seq discontinuity between
all the finalized merge inputs and all the active log btrees. Once any
log btree is naturally finalized the server forces all the clients to
group up and finalize all their open log btrees. A merge operation can
then safely operate on all the finalized trees before any new trees are
given to clients who would start using increasing item seqs.
Signed-off-by: Zach Brown <zab@versity.com>
Add a command for the server to request that clients commit their open
transaction. This will be used to create groups of finalized log
btrees for consistent merging.
Signed-off-by: Zach Brown <zab@versity.com>
We were checking that quorum_slot_nr was within the range of possible
slots allowed by the format as it was parsed. We weren't checking that
it referenced a configured slot. Make sure, and give a nice error
message that shows the configured slots.
Signed-off-by: Zach Brown <zab@versity.com>
During rough forced unmount testing we saw a seemingly mysterious
concurrent election. It could be explained if mounts coming up don't
start with the same term. Let's try having mounts initialize their term
to the greatest of all the terms they can see in the quorum blocks.
This will prevent the situation where some new quorum actors with
greater terms start out ignoring all the messages from others.
Signed-off-by: Zach Brown <zab@versity.com>
Nothing interesting here, just a minor convenience to use test and set
instead of testing and then setting.
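That is, the open-coded pair collapses into the atomic helper:

    /* before: a racy window between the test and the set */
    if (!test_bit(FLAG_NR, &flags)) {
            set_bit(FLAG_NR, &flags);
            do_first_time_work();
    }

    /* after: one atomic op that also tells us if we were first */
    if (!test_and_set_bit(FLAG_NR, &flags))
            do_first_time_work();

(The flag and work names here are placeholders.)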
Signed-off-by: Zach Brown <zab@versity.com>
The server doesn't give us much to go on when it gets an error handling
requests to work with log trees from the client. This adds a lot of
specific error messages so we can get a better understanding of
failures.
Signed-off-by: Zach Brown <zab@versity.com>
We were trusting the rid in the log trees struct that the client sent.
Compare it to our recorded rid on the connection and fail if the client
sent the wrong rid.
Signed-off-by: Zach Brown <zab@versity.com>
The locking protocol only allows one outstanding invalidation request
for a lock at a time. The client invalidation state is a bit hairy and
involves removing the lock from the invalidation list while it is being
processed which includes sending the response. This means that another
request can arrive while the lock is not on the invalidation list. We
have fields in the lock to record another incoming request which puts
the lock back on the list.
But the invalidation work wasn't always queued again in this case. It
*looks* like the incoming request path would queue the work, but by
definition the lock isn't on the invalidation list during this race. If
it's the only lock in play then the invalidation list will be empty and
the work won't be queued. The lock can get stuck with a pending
invalidation if nothing else kicks the invalidation worker. We saw this
in testing when the root inode lock group missed the wakeup.
The fix is to have the work requeue itself after putting the lock back
on the invalidation list when it notices that another request came in.
Signed-off-by: Zach Brown <zab@versity.com>
When a client socket disconnects we save the connection state to re-use
later if the client reconnects. A newly accepted connection finds the
old connection associated with the reconnecting client and migrates
state from the old idle connection to the newly accepted connection.
While moving messages between the old and new send and resend queues the
code had an aggressive BUG_ON that was asserting that the newly accepted
connection couldn't have any messages in its resend queue.
This BUG can be tripped due to the ordering of greeting processing and
connection state migration. The server greeting processing path sends
the greeting response to the client before it calls the net code to
migrate connection state. When it "sends" the greeting response it puts
the message on the send queue and kicks the send work. It's possible
for the send work to execute and move the greeting response to the
resend queue and trip the BUG_ON.
This is harmless. The sent greeting response is going to end up on the
resend queue either way, there's no reason for the reconnection
migration to assert that it can't have happened yet. It is going to be
dropped the moment we get a message from the client with a recv_seq that
is necessarily past the greeting response which always gets a seq of 1
from the newly accepted connection.
We remove the BUG_ON and try to splice the old resend queue after the
possible response at the head of the resend_queue so that it is the
first to be dropped.
Signed-off-by: Zach Brown <zab@versity.com>
The last thing server commits do is move extents from the freed list
into freed extents. It moves as many as it can until it runs out of
avail meta blocks and space for freed meta blocks in the current
allocator's lists.
The calculation for whether the lists had resources to move an extent
was quite off. It missed that the first move might have to dirty the
current allocator or the list block, that the btree could join/split
blocks at each level down the paths, and boy does it look like the
height component of the calculation was just bonkers.
With the wrong calculation the server could overflow the freed list
while moving extents and trigger a BUG_ON. We rarely saw this in
testing.
Signed-off-by: Zach Brown <zab@versity.com>
server_get_log_trees() sets the low flag in a mount's meta_avail
allocator, triggering enospc for any space consuming allocations in the
mount, if the server's global meta_avail pool falls below the reserved
block count. Before each server transaction opens we swap the global
meta_avail and meta_freed allocators to ensure that the transaction has
at least the reserved count of blocks available.
This creates a risk of premature enospc as the global meta_avail pool
drains and swaps to the larger meta_freed. The pool can be close to the
reserved count, perhaps at it exactly. _get_log_trees can fill the
client's mount, even a little, and drop the global meta_avail total
under the reserved count, triggering enospc, even though meta_freed
could have had quite a lot of blocks.
The fix is to ensure that the global meta_avail has 2x the reserved
count, swapping if it falls under that. This ensures that a server
transaction can consume an entire reserved count and still have enough
to avoid triggering enospc.
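The condition amounts to something like this sketch (helper and field
names hypothetical):

    /* swap before the pool can be drained below the reserve */
    if (meta_avail_total(server) < 2 * reserved_blocks)
            swap_meta_avail_and_freed(server);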
This fixes a scattering of rare premature enospc returns that were
hitting during tests. It was rare for meta_avail to fall just at the
reserved count and for get_log_trees to have to refill the client
allocator, but it happened.
Signed-off-by: Zach Brown <zab@versity.com>
Add a scoutfs command that uses an ioctl to send a request to the server
to safely use a device that has grown.
Signed-off-by: Zach Brown <zab@versity.com>
mkfs was incorrectly initializing total_data_blocks. The field is meant
to record the number of blocks from the start of the device that the
filesystem could access. mkfs was subtracting the initial reserved area
of the device, recording instead only the number of blocks that the
filesystem might access.
This could allow accesses past the end of the device if mount checks the
device size against the smaller total_data_blocks.
And we're about to use total_data_blocks as the start of a new extent to
add when growing the volume. It needs to be fixed so that this new
grown free extent doesn't overlap with the end of the existing free
extents.
Signed-off-by: Zach Brown <zab@versity.com>
There are fields in the super block that specify the range of blocks
that would be used for metadata or data. They are from the time when a
single block device was carved up into regions for metadata and data.
They don't make sense now that we have separate metadata and data block
devices. The starting blkno is static and we go to the end of the
device.
This removes the fields now that they serve no purpose. Their only
use, checking that freed extents fell within the correct bounds, can
still be performed by using the static starting number or roughly using
the size of the devices. It's not perfect, but this is already only
a check to see that the blknos aren't utter nonsense.
We're removing the fields now to avoid having to update them while
worrying about users when resizing devices.
Signed-off-by: Zach Brown <zab@versity.com>
As subsystems were built I tended to use interruptible waits in the hope
that we'd let users break out of most waits.
The reality is that we have significant code paths that have trouble
unwinding. Final inode deletion during iput->evict in a task is a good
example. It's madness to have a pending signal turn an inode deletion
from an efficient inline operation to a deferred background orphan inode
scan deletion.
It also happens that golang built pre-emptive thread scheduling around
signals. Under load we see a surprising amount of signal spam and it
has created surprising error cases which would have otherwise been fine.
This changes waits to expect that IOs (including network commands) will
complete reasonably promptly. We remove all interruptible waits with
the notable exception of breaking out of a pending mount. That requires
shuffling setup around a little bit so that the first network message we
wait for is the lock for getting the root inode.
Signed-off-by: Zach Brown <zab@versity.com>
If async network request submission fails then the response handler will
never be called. The sync request wrapper made the mistake of trying to
wait for completion when initial submission failed. This never happened
in normal operation but we're able to trigger it with some regularity
with forced unmount during tests. Unmount would hang waiting for work
to shut down which was waiting for request responses that would never
happen.
Signed-off-by: Zach Brown <zab@versity.com>
Changing the file size can change the file contents -- reads will
change when they stop returning data. fallocate can change the file
size and if it does it should increment the data_version, just like
setattr does.
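A sketch of the check in the fallocate path (helper name
hypothetical):

    /* extending i_size changes what reads return, so bump the
     * data_version just like setattr truncation does */
    if (new_size > i_size_read(inode))
            scoutfs_inode_inc_data_version(inode);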
Signed-off-by: Zach Brown <zab@versity.com>
The stage_tmpfile test util was written when fallocate didn't update
data_version for size extensions. It is more correct to get the
data_version after fallocate has changed it for however many
transactions, extent allocations, and i_size extensions it took to
allocate space.
Signed-off-by: Zach Brown <zab@versity.com>
Some kernels have blkdev_reread_part acquire the bd_mutex and then call
into drop_partitions which calls fsync_bdev which acquires s_umount.
This inverts the usual pattern of deactivate_super getting s_umount and
then using blkdev_put in kill_sb->put_super to drop a second device.
The inversion has been fixed upstream by years of rewrites. We can't go
back in time to fix the kernels that we're testing against,
unfortunately, so we disable lockdep around our valid leg of the
inversion that lockdep is noticing in our testing.
Signed-off-by: Zach Brown <zab@versity.com>
iput() can only be used in contexts that could perform final inode
deletion which requires cluster locks and transactions. This is
absolutely true for the transaction committing worker. We can't have
deletion during transaction commit trying to get locks and dirty *more*
items in the transaction.
Now that we're properly getting locks in final inode deletion and
O_TMPFILE support has put pressure on deletion, we're seeing deadlocks
between inode eviction during transaction commit getting an index lock
and index lock invalidation trying to commit.
We use the newly offered queued iput to defer the iput from walking our
dirty inodes. The transaction commit will be able to proceed while
the iput worker is off waiting for a lock.
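A minimal sketch of the queued iput mechanism, with hypothetical
names and types:

    void scoutfs_queue_iput(struct inode *inode)
    {
            struct scoutfs_inode_info *si = SCOUTFS_I(inode);
            struct sbi_info *sbi = SBI(inode->i_sb);

            llist_add(&si->iput_llnode, &sbi->iput_llist);
            queue_work(sbi->wq, &sbi->iput_work);
    }

    static void iput_worker(struct work_struct *work)
    {
            struct sbi_info *sbi = container_of(work, struct sbi_info,
                                                iput_work);
            struct llist_node *node = llist_del_all(&sbi->iput_llist);
            struct scoutfs_inode_info *si, *tmp;

            /* safe to block on cluster locks in work context */
            llist_for_each_entry_safe(si, tmp, node, iput_llnode)
                    iput(&si->vfs_inode);
    }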
Signed-off-by: Zach Brown <zab@versity.com>
Lock invalidation had the ability to kick iput off to work context. We
need to use it for inode writeback as well so we move the mechanism over
to inode.c and give it a proper call.
Signed-off-by: Zach Brown <zab@versity.com>
We're seeing errors during truncate that are surprising. Let's try and
recover from them and provide more info when they happen so that we can
dig deeper.
Signed-off-by: Zach Brown <zab@versity.com>
We recently fixed problems sending omap responses to originating clients
which can race with the clients disconnecting. We need to handle the
requests sent to clients on behalf of an originating request in exactly
the same way. The send can race with the client being evicted. After
the race is safely ignored, the request will be cleaned up by the
client's rid being removed from the server's request tracking.
Signed-off-by: Zach Brown <zab@versity.com>
The times in the quorum status file are in absolute monotonic kernel
time since bootup. That's not particularly helpful, especially when
comparing across hosts with different boot times.
This shows relative times in timespec64 seconds until or since the times
in question. While we're at it we also collect the send and receive
timestamps closer to each send or receive call.
Signed-off-by: Zach Brown <zab@versity.com>
Generally, forced unmount works by returning errors for all IO. Quorum
is pretty resilient in that it can have the IO errors eaten by server
startup and does its own messaging that won't return errors. Trying to
force unmount can have the quorum service continually participate in
electing a server that immediately fails and shuts down.
This specifically shuts down the internal quorum service when it sees
that unmount is being forced. This is easier and cleaner than having
the network IO return errors and then having that trigger shutdown.
Signed-off-by: Zach Brown <zab@versity.com>
The quorum service shuts down if it sees errors that mean that it can't
do its job.
This is mostly fatal errors gathering resources at startup or runtime IO
errors but it was also shutting down if server startup fails. That's
not quite right. This should be treated like the server shutting down
on errors. Quorum needs to stay around to participate in electing the
next server.
Fence timeouts could trigger this. A quorum mount could crash, the
next server without a fence script could have a fence request time out
and shut down, and now the third remaining server is left to indefinitely
send vote requests into the void.
With this fixed, continuing that example, the quorum service in the
second mount remains to elect the third server with a working fence
script after the second server shuts down after its fence request times
out.
Signed-off-by: Zach Brown <zab@versity.com>
This should be good enough to get single node mounts up and running with
fenced with minimal effort. The example config will need to be copied
to /etc/scoutfs/scoutfs-fenced.conf for it to be functional, so this
still requires specific opt-in and won't accidentally run for multi-node
systems.
Signed-off-by: Ben McClelland <ben.mcclelland@versity.com>
The omap message lifecycle is a little different than the server's usual
handling that sends a response from the request handler. The response
is sent long after the initial receive handler has stopped pinning the connection
to the client. It's fine for the response to be dropped.
The main server request handler handled this case but other response
senders didn't. Put this error handling in the server response sender
itself so that all callers are covered.
Signed-off-by: Zach Brown <zab@versity.com>
We hide I_FREEING inodes from inode lookup to avoid inversions with
cluster locking. This can result in duplicate inode structs for a
given inode number. They can both race to try and delete the same items
for their shared inode number. This leads to error messages from
evict_inode and could lead to corruption if they, for example, both try
and free the same data extents.
This adds very basic serialization so only one instance can try to
delete items at a time.
Signed-off-by: Zach Brown <zab@versity.com>
The item cache has to be careful not to insert stale read items when
previously dirty items have been written and invalidated while a read
was in flight.
This was previously done by recording the possible range of items that a
reader could see based on the key range of its lock. This is
disastrous when a workload operates entirely within one lock. I ran
into this when testing a small number of files with massive amounts of
xattrs. While any reader is in flight all pages can't be invalidated
because they all intersect with the one lock that covers all the items
in use.
The fix is to more naturally reflect the problem by tracking the
greatest item seq in pages and the earliest seq that any readers
can't see. This lets invalidate only skip pages with items
that weren't visible to the earliest reader.
This more naturally reflects that the problem is due to the age of the
items, not their position in the key space. Now only a few of the most
recently modified pages could be skipped and they'll be at the end
of the LRU and won't typically be visited. As an added benefit it's
now much cheaper to add, delete, and test the active readers.
This fix took rm -rf of a full system's worth of xattrs from minutes of
constantly spinning and skipping all pages in the LRU down to seconds of
doing real removal work.
Signed-off-by: Zach Brown <zab@versity.com>
Normally mkfs would fail if we specify meta or data devices that are too
small. We'd like to use small devices for test scenarios, though, so
add an option to allow specifying sizes smaller than the minimum
required sizes.
Signed-off-by: Zach Brown <zab@versity.com>
These forward declarations were for interfaces that have since been
removed or changed and are no longer needed.
Signed-off-by: Zach Brown <zab@versity.com>
Returning ENOSPC is challenging because we have clients working on
allocators which are a fraction of the whole and we use COW transactions
so we need to be able to allocate to free. This adds support for
returning ENOSPC to client posix allocators as free space gets low.
For metadata, we reserve a number of free blocks for making progress
with client and server transactions which can free space. The server
sets the low flag in a client's allocator if we start to dip into
reserved blocks. In the client we add an argument to entering a
transaction which indicates if we're allocating new space (as opposed to
just modifying existing data or freeing). When an allocating
transaction runs low and the server low flag is set then we return
ENOSPC.
Adding an argument to transaction holders and having it return ENOSPC
gave us the opportunity to clean it up and make it a little clearer.
More work is done outside the wait_event function and it now
specifically waits for a transaction to cycle when it forces a commit
rather than spinning until the transaction worker acquires the lock and
stops it.
For data the same pattern applies except there are no reserved blocks
and we don't COW data so it's a simple case of returning the hard ENOSPC
when the data allocator flag is set.
The server needs to consider the reserved count when refilling the
client's meta_avail allocator and when swapping between the two
meta_avail and meta_freed allocators.
We add the reserved metadata block count to statfs_more so that df can
subtract it from the free meta blocks and make it clear when enospc is
going to be returned for metadata allocations.
We increase the minimum device size in mkfs so that small testing
devices provide sufficient reserved blocks.
And finally we add a little test that makes sure we can fill both
metadata and data to ENOSPC and then recover by deleting what we filled.
Signed-off-by: Zach Brown <zab@versity.com>
The forest log merge work calls into the client to send commit requests
to the server. The forest is usually destroyed relatively late in the
sequence and can still be running after the client is destroyed.
Adding a _forest_stop call lets us stop the log merging work
before the client is destroyed.
Signed-off-by: Zach Brown <zab@versity.com>
Killing a task can end up in evict and break out of acquiring the locks
to perform final inode deletion. This isn't necessarily fatal. The
orphan task will come around and will delete the inode when it is truly
no longer referenced.
So let's silence the error and keep track of how many times it happens.
Signed-off-by: Zach Brown <zab@versity.com>
Orphaned items haven't been deleted for quite a while -- the call to the
orphan inode scanner has been commented out for ages. The deletion of
the orphan item didn't take rid zone locking into account as we moved
deletion from being strictly local to being performed by whoever last
used the inode.
This reworks orphan item management and brings back orphan inode
scanning to correctly delete orphaned inodes.
We get rid of the rid zone that was always _WRITE locked by each mount.
That made it impossible for other mounts to get a _WRITE lock to delete
orphan items. Instead we rename it to the orphan zone and have orphan
item callers get _WRITE_ONLY locks inside their inode locks. Now all
nodes can create and delete orphan items as they have _WRITE locks on
the associated inodes.
Then we refresh the orphan inode scanning function. It now runs
regularly in the background of all mounts. It avoids creating cluster
lock contention by finding candidates with unlocked forest hint reads
and by testing inode caches locally and via the open map before properly
locking and trying to delete the inode's items.
Signed-off-by: Zach Brown <zab@versity.com>
The log merging work deletes log trees items once their item roots are
merged back into the fs root. Those deleted items could still have
populated srch files that would be lost. We force rotation of the srch
files in the items as they're reclaimed to turn them into rotated srch
files that can be compacted.
Signed-off-by: Zach Brown <zab@versity.com>
Refilling a btree block by moving items from its siblings as it falls
under the join threshold had some pretty serious mistakes. It used the
target block's total item count instead of the sibling's when deciding
how many items to move. It didn't take item moving overruns into
account when deciding to compact so it could run out of contiguous free
space as it moved the last item. And once it compacted it returned
without moving because the return was meant to be in the error case.
This is all fixed by correctly examining the sibling block to determine
if we should join a block up to 75% full or move a big chunk over,
compacting if the free space doesn't have room for an excessive worst
case overrun, and fixing the compaction error checking return typo.
Signed-off-by: Zach Brown <zab@versity.com>
The alloc iterator needs to find and include the totals of the avail and
freed allocator list heads in the log merge items.
Signed-off-by: Zach Brown <zab@versity.com>
mkfs was miscalculating the offset of the start of the free region in
the center of blocks as it populated blocks with items. It was using
the length of the free region as its offset in the block. To find
the offset of the end of the free region in the block it has to be
taken relative to the end of the item array.
Signed-off-by: Zach Brown <zab@versity.com>
Some item_val_len() callers were applying alignment twice, which isn't
needed.
And additions to erased_bytes as value lengths change didn't take
alignment into account. They could end up double counting if val_len
changes within the alignment are then accounted for again as the full
item and alignment is later deleted. Additions to erased_bytes based on
val_len should always take alignment into account.
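In other words, deltas should be computed in aligned units (constant
name hypothetical):

    /* account in aligned units so a partial value update and the
     * later deletion of the whole item can't double count */
    erased_bytes += ALIGN(old_val_len, SCOUTFS_VAL_ALIGN) -
                    ALIGN(new_val_len, SCOUTFS_VAL_ALIGN);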
Signed-off-by: Zach Brown <zab@versity.com>
The item cache allocates a page and a little tracking struct for each
cached page. If the page allocation fails it might try to free a null
page pointer, which isn't allowed.
Signed-off-by: Zach Brown <zab@versity.com>
Item creation, which fills out a new item at the end of the array of
item structs at the start of the block, didn't explicitly zero the item
struct padding. It would only have been zero if the memory was
already zero, which is likely for new blocks, but isn't necessarily true
if the memory had previously been used by deleted values.
Signed-off-by: Zach Brown <zab@versity.com>
The change to aligning values didn't update the btree block verifier's
total length calculation, and while we're in there we can also check
that values are correctly aligned.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we had an unused function that could be flipped on to verify
btree blocks during traversal. This refactors the block verifier a bit
to be called by a verifying walker. This will let callers walk paths to
leaves to verify the tree around operations, rather than verification
being performed during the next walk.
Signed-off-by: Zach Brown <zab@versity.com>
Take the condition used to decide if a btree block needs to be joined
and put it in total_above_join_low_water() so that btree_merging will be
able to call it to see if the leaf block it's merging into needs to be
joined.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree function for freeing all the blocks in a btree without
having to cow the blocks to track which refs have been freed. We use a
key from the caller to track which portions of the tree have been freed.
Signed-off-by: Zach Brown <zab@versity.com>
Over time the printing of the btree roots embedded in the super block
has gotten a little out of hand. Add a helper macro for the printf
format and args and re-order them to match their order in the
superblock.
Signed-off-by: Zach Brown <zab@versity.com>
We now have a core seq number in the super that is advanced for multiple
users. The client transaction seq comes from the core seq so we
remove the trans_seq from the super. The item version is also converted
to use a seq that's derived from the core seq.
Signed-off-by: Zach Brown <zab@versity.com>
Add the client work which is regularly scheduled to ask the server for
log merging work to do. The relatively simple client work gets a
request from the server, finds the log roots to merge given the request
seq, performs the merge with a btree call and callbacks, and commits the
result to the server.
Signed-off-by: Zach Brown <zab@versity.com>
This adds the server processing side of the btree merge functionality.
The client isn't yet sending the log_merge messages so no merging will
be performed.
The bulk of the work happens as the server processes a get_log_merge
message to build a merge request for the client. It starts a log merge
if one isn't in flight. If one is in flight it checks to see if it
should be spliced and maybe finished. In the common case it finds the
next range to be merged and sends the request to the client to process.
The commit_log_merge handler is the completion side of that request. If
the request failed then we unwind its resources based on the stored
request item. If it succeeds we record it in an item for get_log_merge
processing to splice eventually.
Then we modify two existing server code paths.
First, get_log_tree doesn't just create or use a single existing log
btree for a client mount. If the existing log btree is large enough it
sets its finalized flag and advances the nr to use a new log btree.
That makes the old finalized log btree available for merging.
Then we need to be a bit more careful when reclaiming the open log btree
for a client. We can't use next to find the only open log btree; we use
prev to find the last and make sure that it isn't already finalized.
Signed-off-by: Zach Brown <zab@versity.com>
Add the format specification for the upcoming btree merging. Log btrees
gain a finalized field, we add the super btree root and all the items
that the server will use to coordinate merging amongst clients, and we
add the two client net messages which the server will implement.
Signed-off-by: Zach Brown <zab@versity.com>
Extract part of the get_last_seq handler into a call that finds the last
stable client transaction seq. Log merging needs this to determine a
cutoff for stable items in log btrees.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree call to just dirty a leaf block, joining and splitting
along the way so that the blocks in the path satisfy the balance
constraints.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree function for merging the items in a range from a number of
read-only input btrees into a destination btree.
Signed-off-by: Zach Brown <zab@versity.com>
Add a BTW_SUBTREE flag to btree_walk() to restrict splitting or joining
of the root block. When clients are merging into the root built from a
reference to the last parent in the fs tree we want to be careful that
we maintain a single root block that can be spliced back into the fs
tree. We specifically check that the root block remains within the
split/join thresholds. If it falls out of compliance we return an error
so that it can be spliced back into the fs tree and then split/joined
with its siblings.
Signed-off-by: Zach Brown <zab@versity.com>
Add calls for working with subtrees built around references to blocks in
the last level of parents. This will let the server farm out btree
merging work where concurrency is built around safely working with all
the items and leaves that fall under a given parent block.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree helper for finding the range of keys which are found in
leaves referenced by the last parent block when searching for a given
key.
Signed-off-by: Zach Brown <zab@versity.com>
Rename the item version to seq and set it to the max of the transaction
seq and the lock's write_seq. This lets btree item merging choose a seq
at which all dirty items written in future commits must have greater
seqs. It can drop the seqs from items written to the fs tree during
btree merging knowing that there aren't any older items out in
transactions that could be mistaken for newer items.
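The assignment itself is tiny; the subtlety is in what the max
guarantees (a sketch, not the literal code):

    /* dirty items can never appear older than their lock's coverage */
    item->seq = cpu_to_le64(max(trans_seq, lock->write_seq));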
Signed-off-by: Zach Brown <zab@versity.com>
Rename the write_version lock field to write_seq and get it from the
core seq in the super block.
We're doing this to create a relationship between a client transaction's
seq and a lock's write_seq. New transactions will have a greater seq
than all previously granted write locks and new write locks will have a
greater seq than all open transactions. This will be used to resolve
ambiguities in item merging as transaction seqs are written out of order
and write locks span transactions.
Signed-off-by: Zach Brown <zab@versity.com>
Get the next seq for a client transaction from the core seq in the super
block. Remove its specific next_trans_seq field.
While making this change we switch to only using le64 in the network
message payloads, the rest of the processing now uses natural u64s.
Signed-off-by: Zach Brown <zab@versity.com>
Add a new seq field to the super block which will be the source of all
incremented seqs throughout the system. We give out incremented seqs to
callers with an atomic64_t in memory which is synced back to the super
block as we commit transactions in the server.
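A sketch of the shape of this, with hypothetical names:

    /* hand out increasing seqs cheaply from memory */
    u64 scoutfs_next_seq(struct server_info *server)
    {
            return atomic64_inc_return(&server->core_seq);
    }

    /* as the server commits, sync the counter back to the super */
    super->seq = cpu_to_le64(atomic64_read(&server->core_seq));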
Signed-off-by: Zach Brown <zab@versity.com>
When we moved to the current allocator we fixed up the server commit
path to initialize the pair of allocators as a commit is finished rather
than before it starts. This removed all the error cases from
hold_commit. Remove the error handling from hold_commit calls to make
the system just a bit simpler.
Signed-off-by: Zach Brown <zab@versity.com>
The core quorum work loop assumes that it has exclusive access to its
slot's quorum block. It uniquely marks blocks it writes and verifies
the marks on read to discover if another mount has written to its slot
under the assumption that this must be a configuration error that put
two mounts in the same slot.
But the design of the leader bit in the block violates the invariant
that only a slot's own mount will write to its block. As the server comes up and
fences previous leaders it writes to their block to clear their leader
bit.
The final hole in the design is that because we're fencing mounts, not
slots, each slot can have two mounts in play. An active mount can be
using the slot and there can still be a persistent record of a previous
mount in the slot that crashed that needs to be fenced.
All this comes together to have the server fence an old mount in a slot
while a new mount is coming up. The new mount sees the mark change and
freaks out and stops participating in quorum.
The fix is to rework the quorum blocks so that each slot only writes to
its own block. Instead of the server writing to each fenced mount's
slot, it writes a fence event to its block once all previous mounts have
been fenced. We add a bit of bookkeeping so that the server can
discover when all block leader fence operations have completed. Each
event gets its own term so we can compare events to discover live
servers.
We get rid of the write marks and instead have an event that is written
as a quorum agent starts up and is then checked on every read to make
sure it still matches.
Signed-off-by: Zach Brown <zab@versity.com>
If the server shuts down it calls into quorum to tell it that the
server has exited. This stops quorum from sending heartbeats that
suppress other leader elections.
The function that did this got the logic wrong. It was setting the bit
instead of clearing it, having been initially written to set a bit when
the server exited.
Signed-off-by: Zach Brown <zab@versity.com>
Add the peername of the client's connected socket to its mounted_client
item as it mounts. If the client doesn't recover then fencing can use
the IP to find the host to fence.
Signed-off-by: Zach Brown <zab@versity.com>
The error messages from reading quorum blocks were confusing. The mark
was being checked when the block had already seen an error, and we got
multiple messages for some errors.
This cleans it up a bit so we only get one error message for each error
source and each message contains relevant context.
Signed-off-by: Zach Brown <zab@versity.com>
Currently the server's recovery timeout work synchronously reclaims
resources for each client whose recovery timed out.
scoutfs_recov_next_pending() can always return the head of the pending
list because its caller will always remove it from the list as it
iterates.
As we move to real fencing the server will be creating fence requests
for all the timed out clients concurrently. It will need to iterate
over all the rids for clients in recovery.
So we sort recovery's pending list by rid and change _recov_next_pending
to return the next pending rid after a rid argument. This lets the
server iterate over all the pending rids at once.
Signed-off-by: Zach Brown <zab@versity.com>
Client recovery in the server doesn't add the omap rid for all the
clients that it's waiting for. It only adds the rid as they connect. A
client whose recovery timeout expires and is evicted will try to have
its omap rid removed without being added.
Today this triggers a warning and returns an error from a time when the
omap rid lifecycle was more rigid. Now that it's being called by the
server's reclaim_rid, along with a bunch of other functions that succeed
if called for non-existent clients, let's have the omap remove_rid do
the same.
Signed-off-by: Zach Brown <zab@versity.com>
I saw a confusing hang that looked like a lack of ordering between
a waker setting shutting_down and a wait event testing it after
being woken up. Let's see if more barriers help.
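The pattern in question, roughly (field names hypothetical):

    /* waker: make the store visible before waking any sleepers */
    WRITE_ONCE(sbi->shutting_down, true);
    smp_mb();
    wake_up(&sbi->waitq);

    /* waiter: recheck the condition after every wakeup */
    wait_event(sbi->waitq, READ_ONCE(sbi->shutting_down));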
Signed-off-by: Zach Brown <zab@versity.com>
Our connection state spans sockets that can disconnect and reconnect.
While sockets are connected we store the socket's remote address in the
connection's peername and we clear it as sockets disconnect.
Fencing wants to know the last connected address of the mount. It's a
bit of metadata we know about the mount that can be used to find it and
fence it. As we store the peer address we also stash it away as the
last known peer address for the socket. Fencing can then use that
instead of the current socket peer address which is guaranteed to be
uninitialized because there's no socket connected.
Signed-off-by: Zach Brown <zab@versity.com>
The client currently always queues immediate connect work when its
notify_down is called. It was assuming that notify_down is only called
from a healthy established connection. But it's also called for
unsuccessful connect attempts that might not have timed out. Say the
host is up but the port isn't listening.
This results in spamming connection attempts at the address in an old
stale leader block until a new server is elected, fences the previous
leader, and updates their quorum block.
The fix is to explicitly manage the connection work queueing delay. We
only set it to immediately queue on mount and when we see a greeting
reply from the server. We always set it to a longer timeout as we start
a connection attempt. This means we'll always have a long reconnect
delay unless we really connected to a server.
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which exercises the various reasons for fencing mounts and
checks that we reclaim the resources that they had.
Signed-off-by: Zach Brown <zab@versity.com>
The server is responsible for calling the fencing subsystem. It is the
source of fencing requests as it decides that previous mounts are
unresponsive. It is responsible for reclaiming resources for fenced
mounts and freeing their associated fence request.
Signed-off-by: Zach Brown <zab@versity.com>
Add sysfs attribute creation that can provide the parent dir kobject
instead of always creating the sysfs object dir off of the main
per-mount dir.
Signed-off-by: Zach Brown <zab@versity.com>
Add super_ops->umount_begin so that we can implement a forced unmount
which tries to avoid issuing any more network or storage ops. It can
return errors and lose unsynchronized data.
Signed-off-by: Zach Brown <zab@versity.com>
Add the data_alloc_zone_blocks volume option. This changes the
behaviour of the server to try and give mounts free data extents which
fall in exclusive fixed-size zones.
We add the field to the scoutfs_volume_options struct and add it to the
set_volopt server handler which enforces constraints on the size of the
zones.
We then add fields to the log_trees struct which records the size of the
zones and sets bits for the zones that contain free extents in the
data_avail allocator root. The get_log_trees handler is changed to read
all the zone bitmaps from all the items, pass those bitmaps in to
_alloc_move to direct data allocations, and finally update the bitmaps
in the log_trees items to cover the newly allocated extents. The
log_trees data_alloc_zone fields are cleared as the mount's logs are
reclaimed to indicate that the mount is no longer writing to the zone.
The policy mechanism of finding free extents based on the bitmaps is
implemented down in _data_alloc_move().
Signed-off-by: Zach Brown <zab@versity.com>
Add parameters so that scoutfs_alloc_move() can first search for source
extents in specified zones. It uses relatively cheap searches through
the order items to find extents that intersect with the regions
described by the zone bitmaps.
Signed-off-by: Zach Brown <zab@versity.com>
Allocators store free extents in two items, one sorted by their blkno
position and the other by their precise length.
The length index makes it easy to search for precise extent lengths, but
it makes it hard to search for a large extent within a given blkno
region. Skipping in the blkno dimension has to be done for every
precise length value.
We don't need that level of precision. If we index the extents by a
coarser order of the length then we have a fixed number of orders in
which we have to skip in the blkno dimension when searching within a
specific region.
This changes the length item to be stored at the log(8) order of the
length of the extents. This groups extents into orders that are close
to the human-friendly base 10 orders of magnitude.
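One way to compute such an order, assuming this is close to what the
code does:

    /* coarse base-8 order of an extent length: floor(log8(len)) */
    static u8 extent_len_order(u64 len)
    {
            return len ? (fls64(len) - 1) / 3 : 0;
    }

A 64bit length then falls into one of only 22 orders, so a search for
a large extent within a blkno region skips in at most a fixed number
of orders.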
With this change the order field in the key no longer stores the precise
extent length. To preserve the length of the extent we need to use
another field. The only 64bit field remaining is the first, which has
a higher comparison priority than the type. So we use the highest
comparison priority zone field to differentiate the position and order
indexes and can now use all three 64bit fields in the key.
Finally, we have to be careful when constructing a key to use _next when
searching for a large extent. Previously keys were relying on the magic
property that building a key from an extent length of 0 ended up at the
key value -0 = 0. That only worked because we never stored zero length
extents. We now store zero length orders so we can't use the negative
trick anymore. We explicitly treat 0 length extents carefully when
building keys and we subtract the order from U64_MAX to store the orders
from largest to smallest.
Signed-off-by: Zach Brown <zab@versity.com>
Introduce global volume options. They're stored in the superblock and
can be seen in sysfs files that use network commands to get and
set the options on the server.
Signed-off-by: Zach Brown <zab@versity.com>
A lock that is undergoing invalidation is put on a list of locks in the
super block. Invalidation requests put locks on the list. While locks
are invalidated they're temporarily put on a private list.
To support a request arriving while the lock is being processed we
carefully manage the invalidation fields in the lock between the
invalidation worker and the incoming request. The worker correctly
noticed that a new invalidation request had arrived but it left the lock
on its private list instead of putting it back on the invalidation list
for further processing. The lock was unreachable, wouldn't get
invalidated, and caused everyone trying to use the lock to block
indefinitely.
When the worker sees another request arrive for an invalidating lock it
needs to move the lock from the private list back to the invalidation
list.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we added an ilookup variant that ignored I_FREEING inodes
to avoid a deadlock between lock invalidation (lock->I_FREEING) and
eviction (I_FREEING->lock).
Now we're seeing similar deadlocks between eviction (I_FREEING->lock)
and fh_to_dentry's iget (lock->I_FREEING).
I think it's reasonable to ignore all inodes with I_FREEING set when
we're using our _test callback in ilookup or iget. We can remove the
_nofreeing ilookup variant and move its I_FREEING test into the
iget_test callback provided to both ilookup and iget.
Callers will get the same result, it will just happen without waiting
for a previously I_FREEING inode to leave. They'll get NULL instead of
waiting from ilookup. They'll allocate and start to initialize a newer
instance of the inode and insert it alongside the previous instance.
We don't have inode number re-use so we don't have the problem where a
newly allocated inode number is relying on inode cache serialization to
not find a previously allocated inode that is being evicted.
This change does allow for concurrent iget of an inode number that is
being deleted on a local node. This could happen in fh_to_dentry with a
raw inode number. But this was already a problem between mounts because
they don't have a shared inode cache to serialize them. Once we fix
that between nodes, we fix it on a single node as well.
Signed-off-by: Zach Brown <zab@versity.com>
The vfs often calls filesystem methods with i_mutex held. This creates
a natural ordering of i_mutex outside of cluster locks. The file
aio_read method acquired i_mutex after its cluster lock, creating a
deadlock with other vfs methods like setattr.
The acquisition of i_mutex after the cluster lock was due to using the
pattern where we use the per-task lock to discover if we're the first
user of the lock in a call chain. Readpage has to do this, but file
aio_read doesn't. It should never be called recursively. So we can
acquire the i_mutex outside of the cluster lock and warn if we ever are
called recursively.
Signed-off-by: Zach Brown <zab@versity.com>
When move blocks is staging it requires an overlapping offline extent to
cover the entire region to move.
It performs the stage by modifying one extent at a time. If there are
fragmented source extents it will modify each of them in turn within
the region.
When looking for the extent to match the source extent it looked from
the iblock of the start of the whole operation, not the start of the
source extent it's matching. This meant that it would find the first
extent it had just modified, which would now be online rather than
offline, and would return -EINVAL.
The fix is to have it search from the logical start of the extent it's
trying to match, not the start of the region.
Signed-off-by: Zach Brown <zab@versity.com>
The client's incoming lock invalidation request handler triggers a
BUG_ON if it gets a request for a lock that is already processing a
previous invalidation request. The server is supposed to only send
one request at a time.
The problem is that the batched invalidation request handling will send
responses outside of spinlock coverage before reacquiring the lock and
finishing processing once the response send has been successful.
This gives a window for another invalidation request to arrive after the
response was sent but before the invalidation finished processing. This
triggers the bug.
The fix is to mark the lock such that we can recognize a valid second
request arriving after we send the response but before we finish
processing. If it arrives we'll continue invalidation processing with
the arguments from the new request.
Signed-off-by: Zach Brown <zab@versity.com>
Lock teardown during unmount involves first calling shutdown and then
destroy. The shutdown call is meant to ensure that it's safe to tear
down the client network connections. Once shutdown returns locking is
promising that it won't call into the client to send new lock requests.
The current shutdown implementation is very heavy handed and shuts down
everything. This creates a deadlock. After calling lock shutdown, the
client will send its farewell and wait for a response. The server might
not send the farewell response until other mounts have unmounted, say if
our client is the mount running the server. In this case we still have
to be processing lock invalidation requests to allow other unmounting
clients to make forward progress.
This is reasonably easy and safe to do. We only use the shutdown flag
to stop lock calls that would change lock state and send requests. We
don't have it stop incoming request processing in the work queueing
functions. It's safe to keep processing incoming requests between
_shutdown and _destroy because the requests already come in through the
client. As the client shuts down it will stop calling us.
Signed-off-by: Zach Brown <zab@versity.com>
Even though we can pass gfp flags in to vmalloc, it eventually calls pte
alloc functions which ignore the caller's flags and use user gfp flags.
This risks reclaim re-entering fs paths during allocations in the block
cache. These allocations that allowed reclaim deep in the fs were
causing lockdep to add RECLAIM dependencies between locks and holler
about deadlocks.
We apply the same pattern that xfs does for disabling reclaim while
allocating vmalloced block payloads. Setting PF_MEMALLOC_NOIO causes
reclaim in that task to clear __GFP_IO and __GFP_FS, regardless of the
individual allocation flags in the task, preventing recursion.
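A minimal sketch of the pattern, assuming the payload is allocated with
vmalloc somewhere in the block cache (the size name is made up):

    unsigned int noio_flags;

    /* reclaim from any allocation in this task can't enter IO or FS paths */
    noio_flags = memalloc_noio_save();
    data = vmalloc(SCOUTFS_BLOCK_SIZE);         /* illustrative size */
    memalloc_noio_restore(noio_flags);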
Signed-off-by: Zach Brown <zab@versity.com>
The shared recovery layer outputs different messages than when it ran
only for lock_recovery in the lock server.
Signed-off-by: Zach Brown <zab@versity.com>
Locks have a bunch of state that reflects concurrent processing.
Testing that state determines when it's safe to free a lock because
nothing is going on.
During unmount we abruptly stop processing locks. Unmount will send a
farewell to the server which will remove all the state associated with
the client that's unmounting for all its locks, regardless of the state
the locks were in.
The client unmount path has to clean up the interrupted lock state and
free it, carefully avoiding assertions that would otherwise indicate
that we're freeing used locks. The move to async lock invalidation
forgot to clean up the invalidation state. Previously a synchronous
work function would set and clear invalidate_pending while it was
running. Once we finished waiting for it invalidate_pending would be
clear. The move to async invalidation work meant that we can still have
invalidate_pending with no work executing. Lock destruction removed
locks from the invalidation list but forgot to clear the
invalidate_pending flag.
This triggered assertions during unmount that were otherwise harmless.
There was no other use of the lock, we just forgot to clean up the lock
state.
Signed-off-by: Zach Brown <zab@versity.com>
The data_info struct holds the data allocator that is filled by
transactions as they commit. We have to free it after we've shutdown
transactions. It's more like the forest in this regard so we move its
destruction down by the forest to group similar behaviour.
Signed-off-by: Zach Brown <zab@versity.com>
Shutting down the lock client waits for invalidation work and prevents
future work from being queued. We're currently shutting down the
subsystems that lock calls before lock itself, leading to crashes if we
happen to have invalidations executing as we unmount.
Shutting down locking before its dependencies fixes this. This was hit
in testing during the inode deletion fixes because they created the
perfect race by acquiring locks during unmount, making it very likely
that the server would send invalidations to one mount on behalf of
another as they both unmounted.
Signed-off-by: Zach Brown <zab@versity.com>
Shutting down the transaction during unmount relied on the vfs unmount
path to perform a sync of any remaining dirty transaction. There are
ways that we can dirty a transaction during unmount after it calls
the fs sync, so we try to write any remaining dirty transaction before
shutting down.
Signed-off-by: Zach Brown <zab@versity.com>
We've had a long-standing deadlock between lock invalidation and
eviction. Invalidating a lock wants to lookup inodes and drop their
resources while blocking locks. Eviction wants to get a lock to perform
final deletion while the inodes has I_FREEING set which blocks lookups.
We only saw this deadlock a handful of times in all of the time we've
run the code, but it's much more common now that we're acquiring locks
in iput to test that nlink is zero instead of only when nlink is zero.
I see unmount hang regularly when testing final inode deletion.
This adds a lookup variant for invalidation which will refuse to
return freeing inodes so they won't be waited on. Once they're freeing
they can't be seen by future lock users so they don't need to be
invalidated. This keeps the lock invalidation promise and avoids
sleeping on freeing inodes, which creates the deadlock.
Signed-off-by: Zach Brown <zab@versity.com>
t_umount had a typo that had it try to unmount a mount based on a
caller's variable, which accidentally happened to work for its only
caller. Future callers would not have been so lucky.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we wouldn't try and remove cached dentries and inodes as
lock revocation removed cluster lock coverage. The next time
we tried to use the cached dentries or inodes we'd acquire
a lock and refresh them.
But now cached inodes prevent final inode deletion. If they linger
outside cluster locking then any final deletion will need to be deferred
until all its cached inodes are naturally dropped at some point in the
future across the cluster. It might take refreshing the dentries or for
memory pressure to push out the old cached inodes.
This tries to proactively drop cached dentries and inodes as we lose
cluster lock coverage if they're not actively referenced. We need to be
careful not to perform final inode deletion during lock invalidation
because it will deadlock, so we defer an iput which could delete during
evict out to async work.
Now deletion can be done synchronously in the task that is performing
the unlink because previous use of the inode on remote mounts hasn't
left unused cached inodes sitting around.
Signed-off-by: Zach Brown <zab@versity.com>
Today an inode's items are deleted once its nlink reaches zero and the
final iput is called in a local mount. This can delete inodes from
under other mounts which have opened the inode before it was unlinked on
another mount.
We fix this by adding cached inode tracking. Each mount maintains
groups of cached inode bitmaps at the same granularity as inode locking.
As a mount performs its final iput it gets a bitmap from the server
which indicates if any other mount has inodes in the group open.
This keeps the cost of the two fast paths -- opening and closing linked
files, and deleting a file that was unlinked locally -- moderate: we
maintain the bitmap locally and only get the open map once per lock
group. Removing many files in a group will only lock and get the open
map once per group.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we have the recov layer we can have the lock server use it to
track lock recovery. The lock server no longer needs its own recovery
tracking structures and can instead call recov. We add a call for the
server to call to kick lock processing once lock recovery finishes. We
can get rid of the persistent lock_client items now that the server is
driving recovery from the mounted_client items.
Signed-off-by: Zach Brown <zab@versity.com>
The server starts recovery when it finds mounted client items as it
starts up. The clients are done recovering once they send their
greeting. If they don't recover in time then they'll be fenced.
Signed-off-by: Zach Brown <zab@versity.com>
Add a little set of functions to help the server track which clients are
waiting to recover which state. The open map messages need to wait for
recovery so we're moving recovery out of being only in the lock server.
Signed-off-by: Zach Brown <zab@versity.com>
Add lock coverage which tracks if the inode has been refreshed and is
covered by the inode group cluster lock. This will be used by
drop_inode and evict_inode to discover that the inode is current and
doesn't need to be refreshed.
Signed-off-by: Zach Brown <zab@versity.com>
The sneaky rhashtable_insert_fast() can't return -EEXIST despite the
last line of the function *REALLY* making it look like it can. It just
inserts new objects at the head of the bucket lists without comparing
the insertion with existing objects.
The block cache was relying on insertion to resolve duplicate racing
allocated blocks. Because it couldn't return -EEXIST we could get
duplicate cached blocks present in the hash table.
rhashtable_lookup_insert_fast() fixes this by actually comparing the
inserted object's key with the objects found in the insertion bucket. A
racing allocator trying to insert a duplicate cached block will get an
error, drop their allocated block, and retry their lookup.
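A sketch of the retry loop; binf, block_ht_params, and the block_*
helpers are illustrative assumptions, not the tree's exact names:

    retry:
            rcu_read_lock();
            bl = rhashtable_lookup(&binf->ht, &blkno, block_ht_params);
            if (bl && !block_tryget(bl))
                    bl = NULL;
            rcu_read_unlock();
            if (!bl) {
                    bl = block_alloc(binf, blkno);
                    ret = rhashtable_lookup_insert_fast(&binf->ht, &bl->ht_node,
                                                        block_ht_params);
                    if (ret == -EEXIST) {
                            /* a racing allocator won; drop ours, use theirs */
                            block_put(bl);
                            goto retry;
                    }
            }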
Signed-off-by: Zach Brown <zab@versity.com>
The rhashtable can return -EBUSY if you insert fast enough to trigger an
expansion to the next table size while the old table is waiting to be
rehashed in an rcu callback. If we get -EBUSY from rhashtable insertion
we call synchronize_rcu to wait for the rehash to complete before trying
again.
This was hit in testing restores of a very large namespace and took a
few hours to hit.
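The retry might look something like this (again with illustrative
names):

    do {
            ret = rhashtable_lookup_insert_fast(&binf->ht, &bl->ht_node,
                                                block_ht_params);
            if (ret == -EBUSY)
                    synchronize_rcu();      /* let the pending rehash run */
    } while (ret == -EBUSY);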
Signed-off-by: Zach Brown <zab@versity.com>
We're seeing cpu livelocks in block shrinking where counters show that a
single block cache shrink call is only getting EAGAIN from repeated
rhashtable walk attempts. It occurred to me that the running task might
be preventing an RCU grace period from ending by never blocking.
The hope of this commit is that by waiting for rcu callbacks to run
we'll ensure that any pending rebalance callback runs before we retry
the rhashtable walk again. I haven't been able to reproduce this easily
so this is a stab in the dark.
Signed-off-by: Zach Brown <zab@versity.com>
Kinda weird to goto back to the out label and then out the bottom. Just
return -EIO, like forest_next_hint() does.
Don't call client_get_roots() right before retry, since is the first thing
retry does.
Signed-off-by: Andy Grover <agrover@versity.com>
Support O_TMPFILE: Create an unlinked file and put it on the orphan list.
If it ever gains a link, take it off the orphan list.
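From userspace this is the standard O_TMPFILE flow; the sketch below
uses the linkat-through-proc idiom from the open(2) man page (the paths
and helper name are made up, error handling and close() elided):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* create an anonymous file in dir, then link it into place */
    static int create_via_tmpfile(const char *dir, const char *dest)
    {
            char proc[64];
            int fd;

            fd = open(dir, O_TMPFILE | O_WRONLY, 0644);
            if (fd < 0)
                    return -1;
            /* ... write contents; the inode sits on the orphan list ... */
            snprintf(proc, sizeof(proc), "/proc/self/fd/%d", fd);
            /* gaining a link takes the inode off the orphan list */
            return linkat(AT_FDCWD, proc, AT_FDCWD, dest, AT_SYMLINK_FOLLOW);
    }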
Change MOVE_BLOCKS ioctl to allow moving blocks into offline extent ranges.
Ioctl callers must set a new flag to enable this operation mode.
RH-compat: tmpfile support is actually backported by RH into the 3.10 kernel.
We need to use some of their kabi-maintaining wrappers to use it:
use a struct inode_operations_wrapper instead of base struct
inode_operations, set S_IOPS_WRAPPER flag in i_flags. This lets
RH's modified vfs_tmpfile() find our tmpfile fn pointer.
Add a test that tests both creating tmpfiles as well as moving their
contents into a destination file via MOVE_BLOCKS.
xfstests common/004 now runs because tmpfile is supported.
Signed-off-by: Andy Grover <agrover@versity.com>
The srch client compaction work initializes allocators, dirties blocks,
and writes them out as its transaction. It forgot to call the
pre-commit allocator prepare function.
The prepare function drops block references used by the meta allocator
during the transaction. This leaked block references which kept blocks
from being freed by the shrinker under memory pressure. Eventually
memory was full of leaked blocks and the shrinker walked all of them
looking for blocks to free, resulting in an effective livelock that ground
the system to a crawl.
Signed-off-by: Zach Brown <zab@versity.com>
By the time we get to destroying the block cache we should have put all
our block references. Warn as we tear down the blocks if we see any
blocks that still have references, implying a ref leak. This caught a
leak caused by srch compaction forgetting to put allocator list block
refs.
Signed-off-by: Zach Brown <zab@versity.com>
That comment looked very weird indeed until I recognized that I must
have forgotten to delete the first two attempts at starting the
sentence.
Signed-off-by: Zach Brown <zab@versity.com>
The very first greeting a client sends is unique because it doesn't yet
have a server_term field set and tells the server to create items to
track the client.
A server processing this request can create the items and then shut down
before the client is able to receive the reply. They'll resend the
greeting without server_term but then the next server will get -EEXIST
errors as it tries to create items for the client. This causes the
connection to break, which the client tries to reestablish, and the
pattern repeats indefinitely.
The fix is to simply recognize that -EEXIST is acceptable during item
creation. Server message handlers always have to address the case where
a resent message was already processed by a previous server but its
response didn't make it to the client.
Signed-off-by: Zach Brown <zab@versity.com>
Remove an old client info field from the unmount barrier mechanism which
was removed a while ago. It used to be compared to a super field to
decide to finish unmount without reconnecting but now we check for our
mounted_client item in the server's btree.
Signed-off-by: Zach Brown <zab@versity.com>
Define a family field, and add a union for IPv4 and v6 variants, although
v6 is not supported yet.
Family field is now used to determine presence of address in a quorum slot,
instead of checking if addr is zero.
Signed-off-by: Andy Grover <agrover@versity.com>
The block-stale-reads test was built from the ashes of a test that
used counters and triggers to work with the btree when it was
only used on the server.
The initial quick translation to try and trigger block cache retries
while the forest called the btree got a lot wrong. It was still trying
to use a 'cl' variable that no longer referred to the client, the
trigger helpers now call statfs to find paths and can end up firing the
trigger themselves, and many more stale reads can happen throughout the
system while we're working -- not just the one from our trigger -- so
counters can differ.
This fixes it up to consistently use fs numbers instead of
the silly stale cl variable and be less sensitive to triggers firing and
counter differences.
Signed-off-by: Zach Brown <zab@versity.com>
t_trigger_arm always output the value of the trigger after arming on the
premise that tests required the trigger being armed. In the process of
showing the trigger it calls a bunch of t_ helpers that build the path
to the trigger file using statfs_more to get the rid of mounts.
If the trigger being armed is in the server's mount, and the specific
trigger under test is fired by the server's statfs_more request
processing, then the trigger can be fired before we read its value.
Tests can inconsistently fail as the golden output shows the trigger
being armed or not, depending on whether it was in the server's mount.
t_trigger_arm_silent doesn't output the value of the armed trigger. It
can be used for low level triggers that don't rely on reading the
trigger's value to discover that their effect has happened.
Signed-off-by: Zach Brown <zab@versity.com>
Tests can use t_counter_diff to put a message in their golden output
when a specific change in counters is expected. This adds
t_counter_diff_changed to output a message that indicates change or not,
for tests that want to see counters change but the amount of change
doesn't need to be precisely known.
Signed-off-by: Zach Brown <zab@versity.com>
Each transaction maintains a global list of inodes to sync. It checks
the inode and adds it in each write_end call per OS page. Locking and
unlocking the global spinlock was showing up in profiles. At the very
least, we can only get the lock once per large file that's written
during a transaction. This will reduce spinlock traffic on the lock by
the number of pages written per file. We'll want a better solution in
the long run, but this helps for now.
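A sketch of the once-per-file check, assuming a per-inode list entry
(the field names are illustrative):

    /* write_end: only take the global lock on the first dirtying per trans */
    if (list_empty(&si->trans_entry)) {
            spin_lock(&tri->lock);
            if (list_empty(&si->trans_entry))
                    list_add_tail(&si->trans_entry, &tri->sync_list);
            spin_unlock(&tri->lock);
    }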
Signed-off-by: Zach Brown <zab@versity.com>
Each transaction hold makes multiple calls to _alloc_meta_low to see if
the transaction should be committed to refill allocators before the
caller's hold is acquired and they can dirty blocks in the transaction.
_alloc_meta_low was using a spinlock to sample the allocator list_head
blocks to determine if there was space available. The lock and unlock
stores were creating significant cacheline contention.
The _alloc_meta_low calls are higher frequency than allocations. We can
use a seqlock to have exclusive writers and allow concurrent
_alloc_meta_low readers who retry if a writer intervenes.
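The read side might look like this sketch, with assumed field and
threshold names; writers would wrap their list_head block updates in
write_seqlock()/write_sequnlock():

    static bool meta_alloc_low(struct alloc_info *alloc)
    {
            unsigned int seq;
            bool low;

            do {
                    seq = read_seqbegin(&alloc->seqlock);
                    low = alloc->avail_blocks < META_ALLOC_LOW_THRESH;
            } while (read_seqretry(&alloc->seqlock, seq));

            return low;
    }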
Signed-off-by: Zach Brown <zab@versity.com>
We saw the transaction info lock showing up in profiles. We were doing
quite a lot of work with that lock held. We can remove it entirely and
use an atomic.
Instead of a locked holders count and writer boolean we can use an
atomic holders and have a high bit indicate that the write_func is
pending. This turns the lock/unlock pairs in hold and release into
atomic inc/cmpxchg/dec operations.
Then we were checking allocators under the trans lock. Now that we have
an atomic holders count we can increment it to prevent the writer from
committing and release it after the checks if we need another commit
before the hold.
And finally, we were freeing our allocated reservation struct under the
lock. We weren't actually doing anything with the reservation struct so
we can use journal_info as the nested hold counter instead of having it
point to an allocated and freed struct.
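A sketch of the hold fast path; the bit and helper name are made up:

    #define TRANS_WRITER_PENDING    (1 << 30)

    static bool trans_try_hold(atomic_t *holders)
    {
            int old, prev;

            prev = atomic_read(holders);
            do {
                    old = prev;
                    if (old & TRANS_WRITER_PENDING)
                            return false;   /* back off, commit is pending */
                    prev = atomic_cmpxchg(holders, old, old + 1);
            } while (prev != old);

            return true;    /* got a hold, released with atomic_dec */
    }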
Signed-off-by: Zach Brown <zab@versity.com>
As the implementation shifted away from the ring of btree blocks and LSM
segments we lost callers to all these triggers. They're unused and can
be removed.
Signed-off-by: Zach Brown <zab@versity.com>
The previous test that triggered re-reading blocks, as though they were
stale, was written in the era where it only hit btree blocks and
everything else was stored in LSM segments.
This reworks the test to make it clear that it affects all our block
readers today. The test only exercises the core read retry path, but it
could be expanded to test callers retrying with newer references after
they get -ESTALE errors.
Signed-off-by: Zach Brown <zab@versity.com>
Our block cache consistency mechanism allows readers to try and read
stale block references. They check block headers of the block they read
to discover if it has been modified and they should retry the read with
newer block references.
For this to be correct the block contents can't change under the
readers. That's obviously true in the simple imagined case of one node
writing and another node reading. But we also have the case where the
stale reader and dirtying writer can be concurrent tasks in the same
mount which share a block cache.
There were two failure cases that derive from the order of readers and
writers working with blocks.
If the reader goes first, the writer could find the existing block in
the cache and modify it while the reader assumes that it is read only.
The fix is to have the writer always remove any existing cached block
and insert a newly allocated block into the cache with the header fields
already changed. Any existing readers will still have their cached
block references and any new readers will see the modified headers and
return -ESTALE.
The next failure comes from readers trying to invalidate dirty blocks
when they see modified headers. They assumed that the existing cached
block was old and could be dropped so that a new current version could
be read. But in this case a local writer has clobbered the reader's
stale block and the reader should immediately return -ESTALE.
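In both cases the reader's check reduces to comparing its ref against
the header it sees, something like this sketch (assumed field names):

    /* a block whose header doesn't match our ref was rewritten under us */
    if (bp->hdr.blkno != ref->blkno || bp->hdr.seq != ref->seq)
            return -ESTALE;         /* caller retries with newer refs */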
Signed-off-by: Zach Brown <zab@versity.com>
To create dirty blocks in memory each block type caller currently gets a
reference on a created block and then dirties it. The reference it gets
could be an existing cached block that stale readers are currently
using. This creates a problem with our block consistency protocol where
writers can dirty and modify cached blocks that readers are currently
reading in memory, leading to read corruption.
This commit is the first step in addressing that problem. We add a
scoutfs_block_dirty_ref() call which returns a reference to a dirtied
block from the block core in one call. We're only changing the callers
in this patch but we'll be reworking the dirtying mechanism in an
upcoming patch to avoid corrupting readers.
Signed-off-by: Zach Brown <zab@versity.com>
Update scoutfs print to use the new block_ref struct instead of the
handful of per-block type ref structs that we had accumulated.
Signed-off-by: Zach Brown <zab@versity.com>
Each of the different block types had a reading function that read a
block and then checked their reference struct for their block type.
This gets rid of each block reference type and has a single block_ref
type which is then checked by a single ref reading function in the block
core. By putting ref checking in the core we no longer have to export
checking the block header crc, verifying headers, invalidating blocks,
or even reading raw blocks themselves. Everyone reads refs and leaves
the checking up to the core.
The changes don't have a significant functional effect. This is mostly
just changing types and moving code around. (There are some changes to
visible counters.)
This shares code, which is nice, but this is putting the block reference
checking in one place in the block core so that in a few patches we can
fix problems with writers dirtying blocks that are being read.
Signed-off-by: Zach Brown <zab@versity.com>
The block cache wasn't safely racing readers walking the rcu radix_tree
and the shrinker walking the LRU list. A reader could get a reference
to a block that had been removed from the radix and was queued for
freeing. It'd clobber the free's llist_head union member by putting the
block back on the lru and both the read and free would crash as they
each corrupted each other's memory. We rarely saw this in heavy load
testing.
The fix is to clean up the use of rcu, refcounting, and freeing.
First, we get rid of the LRU list. Now we don't have to worry about
resolving racing accesses of blocks between two independent structures.
Instead of the shrinker walking the LRU list, we can mark blocks on access
such that shrinking can walk all blocks randomly and expect to quickly
find candidates to shrink.
To make it easier to concurrently walk all the blocks we switch to the
rhashtable instead of the radix tree. It also has nice per-bucket
locking so we can get rid of the global lock that protected the LRU list
and radix insertion. (And it isn't limited to 'long' keys so we can get
rid of the check for max meta blknos that couldn't be cached.)
Now we need to tighten up when read can get a reference and when shrink
can remove blocks. We have presence in the hash table hold a refcount
but we make it a magic high bit in the refcount so that it can be
differentiated from other references. Now lookup can atomically get a
reference to blocks that are in the hash table, and shrinking can
atomically remove blocks when it is the only other reference.
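A sketch of how the magic hash table reference might behave in the
shrinker; the bit value and names are made up:

    #define BLOCK_HASH_REF  (1 << 30)

    /* shrink: remove only if the hash table holds the sole reference */
    if (atomic_cmpxchg(&bl->refcount, BLOCK_HASH_REF, 0) == BLOCK_HASH_REF) {
            rhashtable_remove_fast(&binf->ht, &bl->ht_node, block_ht_params);
            list_add(&bl->free_entry, &free_list); /* freed after rcu grace */
    }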
We also clean up freeing a bit. It has to wait for the rcu grace period
to ensure that no other rcu readers can reference the blocks it's
freeing. It has to iterate over the list with _safe because it's
freeing as it goes.
Interestingly, when reworking the shrinker I noticed that we weren't
scaling the nr_to_scan from the pages we returned in previous shrink
calls back to blocks. We now divide the input from pages back into
blocks.
Signed-off-by: Zach Brown <zab@versity.com>
We had a mutex protecting the list of farewell requests. The critical
sections are all very short so we can use a spinlock and be a bit
clearer and more efficient. While we're at it, refactor freeing to free
outside of the critical section.
Signed-off-by: Zach Brown <zab@versity.com>
The server has to be careful to only send farewell responses to quorum
clients once it knows that it won't need their vote to elect a leader to
serve remaining clients.
The logic for doing this forgot to take non-quorum clients into account.
It would send farewell responses to the final majority of quorum
members once they all tried to unmount. This could leave non-quorum
clients hung in unmount trying to send their farewell requests.
The fix is to count mounted_clients items for non-quorum clients and
hold off on sending farewell responses to the final majority until those
non-quorum clients have unmounted.
Signed-off-by: Zach Brown <zab@versity.com>
The recent quorum and unmount fixes should have addressed the failures
we were seeing in the mount-unmount-race test.
Signed-off-by: Zach Brown <zab@versity.com>
Update the man pages with descriptions of the new mkfs -Q quorum slot
configuration and quorum_slot_nr mount option.
Signed-off-by: Zach Brown <zab@versity.com>
We mask device numbers in command output to 0:0 so that we can have
consistent golden test output. The device number matching regex
responsible for this missed a few digits.
It didn't show up until we both tested enough mounts to get larger
device minor numbers and fixed multi-mount consistency so that the
affected tests didn't fail for other reasons.
Signed-off-by: Zach Brown <zab@versity.com>
Our test unmount function unmounted the device instead of the mount
point. It was written this way back in an old version of the harness
which didn't track mount points.
Now that we have mount points, we can just unmount that. This stops the
umount command from having to search through all the current mounts
looking for the mountpoint for the device it was asked to unmount.
Signed-off-by: Zach Brown <zab@versity.com>
I got a test failure where waiting returned an error, but it wasn't
clear what the error was or where it might have come from. Add more
logging so that we learn more about what might have gone wrong.
Signed-off-by: Zach Brown <zab@versity.com>
Update the example configuration in the README to specify the quorum
slots in mkfs arguments and mount options.
Signed-off-by: Zach Brown <zab@versity.com>
The mounted_clients btree stores items to track mounted clients. It's
modified by multiple greeting workers and the farewell work.
The greeting work was serialized by the farewell_mutex, but the
modifications in the farewell thread weren't protected. This could
result in modifications between the threads being lost if the dirty
block reference updates raced in just the right way. I saw this in
testing with deletions in farewell being lost and then that lingering
item preventing unmount because the server thought it had to wait for a
remaining quorum member to unmount.
We fix this by adding a mutex specifically to protect the
mounted_clients btree in the server.
Signed-off-by: Zach Brown <zab@versity.com>
As clients unmount they send a farewell request that cleans up
persistent state associated with the mount. The client needs to be sure
that it gets processed, and we must maintain a majority of quorum
members mounted to be able to elect a server to process farewell
requests.
We had a mechanism using the unmount_barrier fields in the greeting and
super_block to let the final unmounting quorum majority know that their
farewells have been processed and that they didn't need to keep trying
to reconnect.
But we missed that we also need this out of band farewell handling
signal for non-quorum member clients. The server can send farewell
responses to a non-member client as well as the final majority and then
tear down all the connections before the non-quorum client can see its
farewell response. It also needs to be able to know that its farewell
has been processed before the server lets the final majority unmount.
We can remove the custom unmount_barrier method and instead have all
unmounting clients check for their mounted_client item in the server's
btree. This item is removed as the last step of farewell processing so
if the client sees that it has been removed it knows that it doesn't
need to resend the farewell and can finish unmounting.
This fixes a bug where a non-quorum unmount could hang if it raced with
the final majority unmounting. I was able to trigger this hang in our
tests with 5 mounts and 3 quorum members.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs mkfs had two block writing functions: write_block to fill out
some block header fields including crc calculation, and then
write_block_raw to pwrite the raw buffer to the bytes in the device.
These were used inconsistently as blocks came and went over time. Most
callers filled out all the header fields themselves and called the raw
writer. write_block was only used for super writing, which made sense
because it clobbered the block's header with the super header so the
caller's set header magic and seq fields would be lost.
This cleans up the mess. We only have one block writer and the caller
provides all the hdr fields. Everything uses it instead of filling out
the fields themselves and calling the raw writer.
Signed-off-by: Zach Brown <zab@versity.com>
Add macros for stringifying either the name of a macro or its value. In
keeping with making our utils/ sort of look like kernel code, we use the
kernel stringify names.
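The kernel's stringify.h pattern uses two levels of expansion so that
macro arguments are expanded before being stringified:

    #define __stringify_1(x...)     #x
    #define __stringify(x...)       __stringify_1(x)

    /* __stringify(FOO) yields "FOO"; if FOO is #defined to 42, "42" */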
Signed-off-by: Zach Brown <zab@versity.com>
Previously quorum configuration specified the number of votes needed to
elect the leader. This was an excessive amount of freedom in the
configuration of the cluster which created all sorts of problems which
had to be designed around.
Most acutely, though, it required a probabilistic mechanism for mounts
to persistently record that they're starting a server so that future
servers could find and possibly fence them. They would write to a lot
of quorum blocks and trust that it was unlikely that future servers
would overwrite all of their written blocks. Overwriting was always
possible, which would be bad enough, but it also required so much IO
that we had to use long election timeouts to avoid spurious fencing.
These longer timeouts had already gone wrong on some storage
configurations, leading to hung mounts.
To fix this and other problems we see coming, like live membership
changes, we now specifically configure the number and identity of mounts
which will be participating in quorum voting. With specific identities,
mounts now have a corresponding specific block they can write to and
which future servers can read from to see if they're still running.
We change the quorum config in the super block from a single
quorum_count to an array of quorum slots which specify the address of
the mount that is assigned to that slot. The mount argument to specify
a quorum voter changes from "server_addr=$addr" to "quorum_slot_nr=$nr"
which specifies the mount's slot. The slot's address is used for udp
election messages and tcp server connections.
Now that we specifically have configured unique IP addresses for all the
quorum members, we can use UDP messages to send and receive the vote
messages in the raft protocol to elect a leader. The quorum code doesn't
have to read and write disk block votes and has a more reasonable core
loop that either waits for received network messages or timeouts to
advance the raft election state machine.
The quorum blocks are now used for slots to store their persistent raft
term and to set their leader state. We have event fields in the block
to record the timestamp of the most recent interesting events that
happened to the slot.
Now that raft doesn't use IO, we can leave the quorum election work
running in the background. The raft work in the quorum members is
always running so we can use a much more typical raft implementation
with heartbeats. Critically, this decouples the client and election
life cycles. Quorum is always running and is responsible for starting
and stopping the server. The client repeatedly tries to connect to a
server, it has nothing to do with deciding to participate in quorum.
Finally, we add a quorum/status sysfs file which shows the state of the
quorum raft protocol in a member mount and has the last messages that
were sent to or received from the other members.
Signed-off-by: Zach Brown <zab@versity.com>
As a client unmounts it sends a farewell request to the server. We have
to carefully manage unmounting the final quorum members so that there is
always a remaining quorum to elect a leader to start a server to process
all their farewell requests.
The mechanism for doing this described these clients as "voters".
That's not really right, in our terminology voters and candidates are
temporary roles taken on by members during a specific election term in
the raft protocol. It's more accurate to describe the final set of
clients as quorum members. They can be voters or candidates depending
on how the raft protocol timeouts work out in any given election.
So we rename the greeting flag, mounted client flag, and the code and
comments on either side of the client and server to be a little clearer.
This only changes symbols and comments, there should be no functional
change.
Signed-off-by: Zach Brown <zab@versity.com>
As we read the super we check the first and last meta and data blkno
fields. The tests weren't updated as we moved from one device to two
metadata and data devices.
Add a helper that tests the range for the device and test both meta and
data ranges fully, instead of only testing the endpoints of each and
assuming they're related because they're living on one device.
Signed-off-by: Zach Brown <zab@versity.com>
The mount-unmount-race test is occasionally hanging, disable it while we
debug it and have test coverage for unrelated work.
Signed-off-by: Zach Brown <zab@versity.com>
This is checked for by the kernel ioctl code, so giving unaligned values
will return an error, instead of aborting with an assert.
Signed-off-by: Andy Grover <agrover@versity.com>
As a core principle, all server message processing needs to be safe to
replay as servers shut down and requests are resent to new servers.
The advance_seq handler got this wrong. It would only try to remove a
trans_seq item for the seq sent by the client before inserting a new
item for the next seq. This change could be committed before the reply
was lost as the server shuts down. The next server would process the
resent request but wouldn't find the old item for the seq that the
client sent, and would ignore the new item that the previous server
inserted. It would then insert another greater seq for the same client.
This would leave behind a stale old trans_seq that would be returned as
the last_seq which would forever limit the results that could be
returned from the seq index walks.
This fix is to always remove all previous seq items for the client
before inserting a new one. This creates O(clients) server work, but
it's minimal.
This manifested as occasional simple-inode-index test failures (say 1 in
5?) which would trigger if the unmounts during previous tests happened
to have advance_seq resent across server shutdowns. With this change
the test now reliably passes.
Signed-off-by: Zach Brown <zab@versity.com>
We've grown some test names that are prefixes of others
(createmany-parallel, createmany-parallel-mounts). When we're searching
for lines with the test name we have to search for the exact test name,
by terminating the name with a space, instead of searching for a line
that starts with the test name.
This fixes strange output and saved passed stats for the names that
share a prefix.
Signed-off-by: Zach Brown <zab@versity.com>
The message indicating that xfstests output was now being shown was
mashed up against the previous passed stats and it was gross and I hated
it.
Signed-off-by: Zach Brown <zab@versity.com>
When running in debug kernels in guests we can really bog down things
enough to trigger hrtimer warnings. I don't think there's much we can
reasonably do about that.
Signed-off-by: Zach Brown <zab@versity.com>
Farewell work is queued by farewell message processing. Server shutdown
didn't properly wait for pending farewell work to finish before tearing
down. As the server work destroyed the server's connection the farewell
work could still be running and try to send responses down the socket.
We make the server more carefully avoid queueing farewell work if it's
in the process of shutting down and wait for farewell work to finish
before destroying the server's resources.
This fixed all manner of crashes that were seen in testing when a bunch
of nodes unmounted, creating farewell work on the server as it itself
unmounted and destroyed the server.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_srch_get_compact() is building up a compaction request which has
a list of srch files to read and sort and write into a new srch file.
It finds input files by searching for a sufficient number of similar
files: first any unsorted log files and then sorted log files that are
around the same size.
It finds the files by using btree next on the srch zone which has types
for unsorted srch log files, sorted srch files, but also pending and
busy compaction items.
It was being far too cute about iterating over different key types. It
was trying to adapt to finding the next key and was making assumptions
about the order of key types. It didn't notice that the pending and
busy key types followed log and sorted and would generate EIO when it
ran into them and found their value length didn't match what it was
expecting.
Rework the next item ref parsing so that it returns -ENOENT if it gets
an unexpected key type, then look for the next key type when checking
for -ENOENT.
Signed-off-by: Zach Brown <zab@versity.com>
Add a function that tests can use to skip when the metadata device isn't
large enough. I thought we needed to avoid enospc in a particular test,
but it turns out the test's failure was unrelated. So this isn't used
for now but it seems nice to keep around.
Signed-off-by: Zach Brown <zab@versity.com>
The grace period is intended to let lock holders squeeze in more bulk
work before another node pulls the lock out from under them. The length
of the delay is a balance between getting more work done per lock hold
and adding latency to ping-ponging workloads.
The current grace period was too short. To do work in the conflicting
case you often have to read the result that the other mount wrote as you
invalidated their lock. The test was written in the LSM world where
we'd effectively read a single level 0 1MB segment. In the btree world
we're checking bloom blocks and reading the other mount's btree. It has
more dependent read latency.
So we turn up the grace period to let conflicting readers squeeze in
more work before pulling the lock out from under them. This value was
chosen to make lock-conflicting-batch-commit pass in guests sharing nvme
metadata devices in debugging kernels.
Signed-off-by: Zach Brown <zab@versity.com>
The test had a silly typo in the label it put on the time it took mounts
to perform conflicting metadata changes.
Signed-off-by: Zach Brown <zab@versity.com>
When we're splicing in dentries in lookup we can be splicing the result
of changes on other nodes into a stale dcache. The stale dcache might
contain dir entries and the dcache does not allow aliased directories.
Use d_materialise_unique() to splice in dir inodes so that we remove all
aliased dentries which must be stale.
We can still use d_splice_alias() for all other inode types. Any
existing stale dentries will fail revalidation before they're used.
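The splice in lookup then becomes a simple branch on the inode type (a
sketch of the shape, per the vfs calls named above):

    /* directories can't be aliased; drop any stale dentries for them */
    if (inode && S_ISDIR(inode->i_mode))
            return d_materialise_unique(dentry, inode);

    return d_splice_alias(inode, dentry);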
Signed-off-by: Zach Brown <zab@versity.com>
We can lose interesting state if the mounts are unmounted as tests
fail, so only unmount if all the tests pass.
Signed-off-by: Zach Brown <zab@versity.com>
Weirdly, run-tests was treating trace_printk not as an option to enable
trace_printk() traces but as an option to print trace events to the
console with printk? That's not a thing.
Make -P really enable trace_printk tracing and collect it as it does
enabled trace events. It needs to be treated separately from the -t
options that enable trace events.
While we're at it treat the -P trace dumping option as a stand-alone
option that works without -t arguments.
Signed-off-by: Zach Brown <zab@versity.com>
run-tests.sh has a -t argument which takes a whitespace separated string
of globs of events to enable. This was hard to use and made it very
easy to accidentally expand the globs at the wrong place in the script.
This makes each -t argument specify a single word glob which is stored
in an array so the glob isn't expanded until it's applied to the trace
event path. We also add an error for -t globs that didn't match any
events and add a message with the count of -t arguments and enabled
events.
Signed-off-by: Zach Brown <zab@versity.com>
The lock invalidation work function needs to be careful not to requeue
itself while we're shutting down or we can be left with invalidation
functions racing with shutdown. Invalidation calls igrab so we can end
up with the unmount warning that there are still inodes in use.
Signed-off-by: Zach Brown <zab@versity.com>
Add a new distinguishable return value (ENOBUFS) from allocator for if
the transaction cannot alloc space. This doesn't mean the filesystem is
full -- opening a new transaction may result in forward progress.
Alter fallocate and get_blocks code to check for this err val and retry
with a new transaction. Handling actual ENOSPC can still happen, of
course.
Add counter called "alloc_trans_retry" and increment it from both spots.
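A sketch of the retry in a caller like get_blocks; alloc_data_block and
restart_transaction are hypothetical helper names, the counter is the
one named above:

    retry:
            ret = alloc_data_block(inode, iblock, &blkno);
            if (ret == -ENOBUFS) {
                    scoutfs_inc_counter(sb, alloc_trans_retry);
                    /* the trans is out of space, not the fs: commit, retry */
                    ret = restart_transaction(sb);
                    if (ret == 0)
                            goto retry;
            }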
Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: fixed up write_begin error paths]
The item cache page life cycle is tricky. There are no proper page
reference counts, everything is done by nesting the page rwlock inside
item_cache_info rwlock. The intent is that you can only reference pages
while you hold the rwlocks appropriately. The per-cpu page references
are outside that locking regime so they add a reference count. Now
there are reference counts for the main cache index reference and for
each per-cpu reference.
The end result of all this is that you can only reference pages outside
of locks if you're protected by references.
Lock invalidation messed this up by trying to add its right split page
to the lru after it was unlocked. Its page reference wasn't protected
at this point. Shrinking could be freeing that page, and so it could be
putting a freed page's memory back on the lru.
Shrinking had a little bug where it was using list_move to move an
initialized lru_head list_head. It turns out to be harmless (list_del
will just follow pointers to itself and set itself as next and prev all
over again), but boy does it catch one's eye. Let's remove all
confusion and drop the reference while holding the cinf->rwlock instead
of trying to optimize freeing outside locks.
Finally, the big one: inserting a read item after compacting the page to
make room was inserting through stale parent pointers into the old
pre-compacted page, rather than the new page that was swapped in by
compaction. This left references to a freed page in the page rbtree and
hilarity ensued.
Signed-off-by: Zach Brown <zab@versity.com>
Instead of hashing headers, define an interop version. Do not mount
superblocks that have a different version, either higher or lower.
Since this is pretty much the same as the format hash except it's a
constant, minimal code changes are needed.
Initial dev version is 0, with the intent that version will be bumped to
1 immediately prior to tagging initial release version.
Update README. Fix comments.
Add interop version to notes and modinfo.
Signed-off-by: Andy Grover <agrover@versity.com>
Add a relatively constrained ioctl that moves extents between regular
files. This is intended to be used by tasks which combine many existing
files into a much larger file without reading and writing all the file
contents.
Signed-off-by: Zach Brown <zab@versity.com>
By convention we have the _IO* ioctl definition after the argument
structs and ALLOC_DETAIL got it a bit wrong so move it down.
Signed-off-by: Zach Brown <zab@versity.com>
We were checking for the wrong magic value.
We now need to use -f when running mkfs in run-tests for things to work.
Signed-off-by: Andy Grover <agrover@versity.com>
This more closely matches stage ioctl and other conventions.
Also change release code to use offset/length nomenclature for consistency.
Signed-off-by: Andy Grover <agrover@versity.com>
Update for cli args and options changes. Reorder subcommands to match
scoutfs built-in help.
Consistent ScoutFS capitalization.
Tighten up some descriptions and verbiage for consistency and omit
descriptions of internals in a few spots.
Add SEE ALSO for blockdev(8) and wipefs(8).
Signed-off-by: Andy Grover <agrover@versity.com>
Make it static and then use it both for argp_parse as well as
cmd_register_argp.
Split commands into five groups, to help understanding of their
usefulness.
Mention that each command has its own help text, and that we are being
fancy to keep the user from having to give the fs path.
Signed-off-by: Andy Grover <agrover@versity.com>
This has some fancy parsing going on, and I decided to just leave it
in the main function instead of going to the effort to move it all
to the parsing function.
Signed-off-by: Andy Grover <agrover@versity.com>
Support max-meta-size and max-data-size using KMGTP units with rounding.
Detect other fs signatures using blkid library.
Detect ScoutFS super using magic value.
Move read_block() from print.c into util.c since blkid also needs it.
Signed-off-by: Andy Grover <agrover@versity.com>
Print a warning if printing a data dev; you probably wanted the meta dev.
Change read_block to return an err value. Otherwise there are confusing
ENOMEM messages when pread() fails, e.g. trying to print /dev/null.
Signed-off-by: Andy Grover <agrover@versity.com>
Make offset and length optional. Allow size units (KMGTP) to be used
for offset/length.
release: Since off/len are no longer given in 4k blocks, round offset
and length to 4KiB, down and up respectively. Emit a message if rounding
occurs.
Make version a required option.
stage: change ordering to src (the archive file) then the dest (the
staged file).
Signed-off-by: Andy Grover <agrover@versity.com>
With many concurrent writers we were seeing excessive commits forced
because it thought the data allocator was running low. The transaction
was checking the raw total_len value in the data_avail alloc_root for
the number of free data blocks. But this read wasn't locked, and
allocators could completely remove a large free extent and then
re-insert a slightly smaller free extent as they perform their
allocation. The transaction could see a temporarily very small total_len
and trigger a commit.
Data allocations are serialized by a heavy mutex so we don't want to
have the reader try and use that to see a consistent total_len. Instead
we create a data allocator run-time struct that has a consistent
total_len that is updated after all the extent items are manipulated.
This also gives us a place to put the caller's cached extent so that it
can be included in the total_len, previously it wasn't included in the
free total that the transaction saw.
The file data allocator can then initialize and use this struct instead
of its raw use of the root and cached extent. Then the transaction can
sample its consistent total_len that reflects the root and cached
extent.
A subtle detail is that fallocate can't use _free_data to return an
allocated extent on error to the avail pool. It instead frees into the
data_free pool like normal frees. It doesn't really matter that this
could prematurely drain the avail pool because it's in an error path.
Signed-off-by: Zach Brown <zab@versity.com>
Implement a fallback mechanism for opening paths to a filesystem. If
explicitly given, use that. If env var is set, use that. Otherwise, use
current working directory.
Use wordexp to expand ~, $HOME, etc.
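A sketch of the expansion step using wordexp(3); the helper name and
flag choice are assumptions, not necessarily what the utility does:

    #include <stdlib.h>
    #include <string.h>
    #include <wordexp.h>

    /* expand ~ and $VARS in a user-supplied path argument */
    static char *expand_path_arg(const char *arg)
    {
            wordexp_t we;
            char *path = NULL;

            if (wordexp(arg, &we, WRDE_NOCMD) != 0)
                    return NULL;
            if (we.we_wordc == 1)
                    path = strdup(we.we_wordv[0]);
            wordfree(&we);
            return path;
    }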
Signed-off-by: Andy Grover <agrover@versity.com>
Finally get rid of the last silly vestige of the ancient 'ci' name and
update the scoutfs_inode_info pointers to si. This is just a global
search and replace, nothing functional changes.
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which stages a file in multiple parts while a long-lived
process is blocking on offline extents trying to compare the file to the
known contents.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we have full precision extents a writer with i_mutex and a page
lock can be modifying large extent items which cover much of the
surrounding pages in the file. Readers can be in a different page with
only the page lock and try to work with extent items as the writer is
deleting and creating them.
We add a per-inode rwsem which just protects file extent item
manipulation. We try to acquire it as close to the item use as possible
in data.c which is the only place we work with file extent items.
This stops rare read corruption we were seeing where get_block in a
reader was racing with extent item deletion in a stager at a further
offset in the file.
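The locking shape is simple (a sketch; the field and helper names are
illustrative):

    /* get_block and friends read extent items under the read half */
    down_read(&si->extent_sem);
    ret = lookup_file_extent(inode, iblock, &ext);
    up_read(&si->extent_sem);

    /* staging and truncation delete and create items under the write half */
    down_write(&si->extent_sem);
    ret = replace_file_extents(inode, start, count);
    up_write(&si->extent_sem);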
Signed-off-by: Zach Brown <zab@versity.com>
# This exit code is *reserved* for tests that are up-front never going to work
# in certain cases. This should be expressly documented per-case and made
# abundantly clear before merging. The test itself should document its case.
#
t_skip_permitted()
{
        t_status_msg "$@"
        exit $T_SKIP_PERMITTED_STATUS
}

t_fail()
{
        t_status_msg "$@"
        # ...
}

t_quiet()
{
        echo "# $*" >> "$T_TMPDIR/quiet.log"
        "$@" >> "$T_TMPDIR/quiet.log" 2>&1 || \
                t_fail "quiet command failed"
}

#
# Quietly run a command during a test. The output is logged but only
# the return code is printed, presumably because the output contains
# a lot of invocation specific text that is difficult to filter.
#
t_rc()
{
        echo "# $*" >> "$T_TMP.rc.log"
        "$@" >> "$T_TMP.rc.log" 2>&1
        echo "rc: $?"
}

#
# As run, stdout/err are redirected to a file that will be compared with
# the stored expected golden output of the test. This redirects
# stdout/err in the script to stdout of the invoking run-test. It's
# intended to give visible output of tests without being included in the
# golden output.
#
# (see the goofy "exec" fd manipulation in the main run-tests as it runs
# each test)
#
t_stdout_invoked()
{
        exec >&6 2>&1
}

#
# This undoes t_stdout_invoked, returning the test's stdout/err to the
# output file as it was when it was launched.
#
t_stdout_compare()
{
        exec >&7 2>&1
}

#
# usually bash prints an annoying output message when jobs
# are killed. We can avoid that by redirecting stderr for
# the bash process when it reaps the jobs that are killed.
#
t_silent_kill()
{
        exec {ERR}>&2 2>/dev/null
        kill "$@"
        wait "$@"
        exec 2>&$ERR {ERR}>&-
}