Based on my reading of the gfs2 driver, copy_splice_read appears to be
the safer approach than filemap_splice_read, which may potentially lead
to cluster deadlocks.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Due to folios, the kernel will call scoutfs_writepages() and this
becomes unused. It could be ported, but the helper function it would
need to call is no longer exported.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This caller of scoutfs_get_block is now actively used in el10 and
the WARN_ON_ONCE(!lock) in data.c:567 triggers.
XXX FIXME XXX
However, in some of our testing this will hit the
`BUG_ON(!list_empty(pages));` that's a few lines further down, so it's
still not right.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This is somewhat cumbersome: we want to see the error message, but the
format changes enough to make this messy. We opt to change the golden
output to the new format, which shows only one of the arguments in its
error output: the thing that cannot be overwritten. We then add a filter
that uses sed patterns to rewrite the old output format to match the new
one exactly, so this works everywhere again without changing or adding
filters that obscure error messages.
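The rewrite step can be sketched roughly as below. The message wording and sed pattern are hypothetical stand-ins, since the real formats live in the golden files:

```shell
# Hypothetical sketch: collapse the old two-argument error format into
# the new single-argument one, so old and new kernels produce identical
# golden output. The exact message wording here is made up.
normalize_overwrite_error() {
	sed -E "s/cannot overwrite (non-directory )?'([^']+)' with .*/cannot overwrite '\2'/"
}

# old format collapses; lines already in the new format pass through
echo "t_cmd: cannot overwrite 'dst' with 'src'" | normalize_overwrite_error
```

Rewriting toward the new format means the filter is a no-op on kernels that already emit it.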
Signed-off-by: Auke Kok <auke.kok@versity.com>
Since v5.1-rc3-29-gaa30f47cf666, and in el9, there are changes to reduce
the amount of boilerplate code needed to hook up lots of attribute files
using a .default_groups member. In el10 this becomes the required
method, as the .default_attrs member has been removed. This touches
every sysfs part that we have.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In v6.9-rc4-8-gead083aeeed9, this now takes a struct file argument,
adding to the ifdef salad we've got going on here.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Since ~v6.5-rc1-95-g0d72b92883c6, generic_fillattr() asks us to pass
through the request_mask from the caller. This allows callers to
request only a subset of attributes.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Yet another major shrinker API evolution in v6.6-rc4-53-gc42d50aefd17.
The struct shrinker now has to be dynamically allocated. This is
purposely a backwards-incompatible break. We add another KC_ wrapper
around the new shrinker_alloc() and move some initialization around to
keep the impact as low as possible while staying compatible with the
old APIs through substitution.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The return type has always been int, so we just need to add return
value checking and do something with it. We could return -ENOMEM here
as well; either way it'll fall all the way through no matter what.
This is since v6.4-rc2-100-g83f2caaaf9cb.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In v6.8-9146-gc759e609030c, the second argument of __assign_str() was
removed; it is now derived from the __string() definition and no longer
needed. We have to do a little digging in headers here to find the
definition.
Note the missing `;` in a few places; it has to be added now.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v6.9-rc4-29-g203c1ce0bb06 removes bd_inode. The canonical replacement
is bd_mapping->host, where applicable. We also have one use where we
need the mapping directly instead of the inode.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Instead of defining a struct that ends with a flexible array member
declared as `val[0]`, the compiler now balks at this since,
technically, the spec considers it unsanitary. As a result, however, we
can't memcpy to `struct->val`, since that's a pointer and we'd be
writing something of a different length (u8's in our case) into
something of pointer size. So there we have to do the opposite, and
memcpy to `&struct->val[0]`.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In v6.12-rc1-3-g5f60d5f6bbc1, asm/unaligned.h only included
asm-generic/unaligned.h, which was cleaned of architecture-specific
things. Everyone should now include linux/unaligned.h, and the former
include was removed.
A quick peek at server.c shows that while the header is included, no
function from it is used at all, so it can just be dropped.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The new format in el10 has non-hex output, separated by a comma. Add the
additional filter string so this works as expected.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In v6.6-rc5-1-g077c212f0344, one can no longer directly access the
inode mtime and atime etc. We have to go through these static inline
functions to get to them. The compat is matched closely to mimic the
new functions.
Further back, ctime accessors were added in v6.5-rc1-7-g9b6304c1d537,
and need to be applied as well.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In v6.1-rc5-2-ge9a688bcb193, get_random_u32_below() becomes available
and can start replacing prandom_u32_max(). Switch to it where we can.
get_random_bytes() has been available since el7, so also replace
prandom_bytes() where we're using it.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In el9.6, the kernel VFS no longer goes through xattr handlers to
retrieve ACLs, but instead calls the FS driver's .get_{inode_}acl
method. In the initial compat version we hooked up to .get_acl, given
the identical name that was used in the past.
However, this results in caching issues, as was encountered by customers
and exposed in the added test case `basic-acl-consistency`. The result
is that some group ACL entries may appear randomly missing. Dropping
caches may temporarily fix the issue.
The root cause of the issue is that the VFS now has two separate paths
to retrieve ACLs from the FS driver, and they have conflicting
implications for caching. `.get_acl` is purely meant for filesystems
like overlay/ecryptfs, where no caching should ever go on as they are
fully passthrough only. Filesystems with dentries (i.e. all normal
filesystems) should not expose this interface, and should instead
expose the .get_inode_acl method. And indeed, in introducing the new
interface, the upstream kernel converts all but a few fs's to use
.get_inode_acl().
The functional change in the driver is to detach KC_GET_ACL_DENTRY and
introduce KC_GET_INODE_ACL to handle the new (and required) interface.
KC_SET_ACL_DENTRY is detached as well, since it was a different
changeset in the kernel and we should separate these for good measure.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This test case is used to detect and reproduce a customer issue we're
seeing where the new .get_acl() method API and underlying changes in
el9_6+ are causing ACL cache fetching to return inconsistent results,
which shows as missing ACLs on directories.
This particular sequence is consistent enough that it warrants making
it into a specific test.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Add basic mkfs/mount/umount helpers that handle making, mounting, and
unmounting scratch devices. The mount/unmount helpers create "$T_MSCR",
which lives in "$T_TMPDIR".
Signed-off-by: Auke Kok <auke.kok@versity.com>
There have been a few failures where output is generated just before we
crash but it didn't have a chance to be written. Add a best-effort
background sync before crashing. There's a good chance it'll hang if
the system is stuck, so we don't wait for it directly; we just let some
time pass.
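The pattern is roughly the following; the helper name is ours, not the harness's:

```shell
# Best-effort sketch: kick off sync in the background and give it a
# bounded window instead of waiting on it, since sync itself can hang
# when the system is wedged. The helper name is hypothetical.
best_effort_sync() {
	sync &
	sleep "${1:-2}"   # deliberately not `wait`-ing on the sync pid
}

best_effort_sync 1
```

Not calling `wait` is the point: a wedged sync must not be able to block the crash path.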
Signed-off-by: Zach Brown <zab@versity.com>
Errors from lock server calls typically shut the server down.
During normal unmount a client's locks are reclaimed before the
connection is disconnected. The lock server won't try to send to
unmounting clients.
Clients whose connections time out can cause ENOTCONN errors. Their
connection is freed before they're fenced and their locks are reclaimed.
The server can try to send to the client for a lock that's disconnected
and get a send error.
These errors shouldn't shut down the server. The client is either going
to be fenced and have the locks reclaimed, ensuring forward progress, or
the server is going to shut down if it can't fence.
This was seen in testing as multiple clients were timed out.
Signed-off-by: Zach Brown <zab@versity.com>
The fence-and-reclaim test runs a bunch of scenarios and makes sure that
the fence agent was run on the appropriate mount's rids.
Unfortunately the checks were racy. The check itself only looked at
the log once to see if the rid had been fenced. Each check had steps
before it that would wait until the rid should have been fenced and
could be checked.
Those steps were racy. They'd do things like make sure a fence request
wasn't pending, but never waited for it to be created in the first
place. They'd falsely indicate that the log should be checked, and when
the rid wasn't found in the log the test would fail. In logs of
failures we'd see that the rids were fenced after this test had failed
and moved on to the next.
This simplifies the checks. It gets rid of all the intermediate steps
and just waits around for the rid to be fenced, with a timeout. This
avoids the flaky tests.
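The simplified check reduces to a single poll-with-timeout. A sketch, with a hypothetical log path and "fenced rid N" message format standing in for the real log line:

```shell
# Wait until the fence log mentions the rid, or give up after a timeout.
# The "fenced rid N" text is a stand-in for the real log message.
wait_for_fenced() {
	local rid="$1" log="$2" timeout_s="${3:-30}" i=0
	while ! grep -q "fenced rid $rid" "$log"; do
		i=$((i + 1))
		[ "$i" -ge $((timeout_s * 10)) ] && return 1
		sleep 0.1
	done
}
```

Waiting on the final condition directly sidesteps every intermediate race.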
Signed-off-by: Zach Brown <zab@versity.com>
Lock invalidation has assertions for critical errors, but it doesn't
allow the synthetic errors that come from forced unmount severing the
client's connection to the world.
Signed-off-by: Zach Brown <zab@versity.com>
The net layer was initially built around send queue lists with the
presumption that there wouldn't be many messages in flight and that
responses would be sent roughly in order.
In the modern era, we can have 10s of thousands of lock request messages
in flight. This led to O(n^2) processing in quite a few places as recv
processing searched for either requests to complete or responses to
free.
This adds messages to two rbtrees, indexing either requests by their id
or responses by their send sequence. Recv processing can find messages
in O(log n). This patch intends to be minimally disruptive. It's only
replacing the search of the send and resend queues in the recv path with
rbtrees. Other uses of the two queue lists are untouched.
On a single node, with ~40k lock shrink attempts in flight, we go from
processing ~800 total request/grant request/response pairs per second to
~60,000 per second.
Signed-off-by: Zach Brown <zab@versity.com>
The use of the VM shrinker was a bad fit for locks. Shrinking a lock
requires a round trip with the server to request a null mode. The VM
treats the locks like a cache, as expected, which leads to huge numbers
of locks accumulating and then being shrunk in bulk. This creates a
huge backlog of locks making their way through the network conversation
with the server that implements invalidating to a null mode and freeing.
It starves other network and lock processing, possibly for minutes.
This removes the VM shrinker and instead introduces an option that sets
a limit on the number of idle locks. As the number of locks exceeds the
count, we try to free only the oldest lock at each lock call. This
results in a lock freeing pace that is proportional to the allocation of
new locks by callers and so is throttled by the work done while callers
hold locks. It avoids the bulk shrinking of 10s of thousands of locks
that we see in the field.
Signed-off-by: Zach Brown <zab@versity.com>
We observe that unmount in this test can consume up to 10sec of time
before proceeding to record heartbeat timeout elections by followers.
When this happens, elections and new leaders happen before unmount even
completes. This indicates that heartbeat packets from the unmounting
node cease immediately, but the unmount takes longer doing other
things. The timeouts then trigger, possibly during the unmount.
The result is that with timeouts of 3 seconds, we're not actually
waiting for an election at all. It already happened 7 seconds ago. The
code here just "sees" that it happens a few hundred ms after it started
looking for it.
There are a few ways to go about this fix. We could record the actual timestamp
of the election, and compare it with the actual timestamp of the last
heartbeat packet. This would be conclusive, and could disregard any
complication from umount taking too long. But it also means adding
timestamping in various places, or having to rely on tcpdump with packet
processing.
We can't just record $start before unmount. We would still violate the
part of the test that checks that elections didn't happen too late,
especially in the 3sec test case if unmount takes 10sec.
The simplest solution is to unmount in a background task, and circle
around later to `wait` for it so we're assured we can re-mount without
ill effect.
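The shape of that fix, sketched with a stand-in command in place of the real umount:

```shell
# Start the (possibly slow) unmount in the background so it doesn't
# skew the election timing, then `wait` for it before re-mounting.
bg_unmount() {
	"$@" &                 # e.g. bg_unmount umount "$mnt"
	BG_UMOUNT_PID=$!
}

finish_unmount() {
	wait "$BG_UMOUNT_PID"  # propagates the unmount's exit status
}

bg_unmount sleep 0.1       # stand-in for a slow umount
# ... watch for the heartbeat-timeout election here ...
finish_unmount
```

`wait` returning the unmount's exit status lets the test still fail loudly if the unmount itself broke.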
Signed-off-by: Auke Kok <auke.kok@versity.com>
Due to an iput race, the "unlink wait for open on other mount"
subtest can fail. If the unlink happens inline, then the test
passes. But if the orphan scanner has to complete the unlink
work, it's possible that there won't be enough log merge work
for the scanner to do the cleanup before we look at the seq index.
Add SCOUTFS_TRIGGER_LOG_MERGE_FORCE_FINALIZE_OURS, to allow
forcing a log merge. Add new counters, log_merges_start and
log_merge_complete, so that tests can see that a merge has happened.
Then we have to wait for the orphan scanner to do its work.
Add a new counter, orphan_scan_empty, that increments each time
the scanner walks the entire inode space without finding any
orphans. Once the test sees that counter increment, it should be
safe to check the seq index and see that the unlinked inode is gone.
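The wait itself can be sketched as a generic counter poll; in the real test the counter file path comes from the harness:

```shell
# Snapshot a counter, then poll until it grows past the snapshot or a
# timeout expires. Counters only ever increase, so any growth means a
# full pass completed after we snapshotted.
wait_for_counter_increment() {
	local file="$1" timeout_s="${2:-60}" before now i=0
	before=$(cat "$file")
	while :; do
		now=$(cat "$file" 2>/dev/null)
		[ "${now:-0}" -gt "$before" ] && return 0
		i=$((i + 1))
		[ "$i" -ge $((timeout_s * 10)) ] && return 1
		sleep 0.1
	done
}
```

Seeing orphan_scan_empty grow past its snapshot proves a complete scan finished after the point of interest.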
Signed-off-by: Chris Kirby <ckirby@versity.com>
The /sys/kernel/debug/tracing/buffer_size_kb file always reads as
"7 (expanded: 1408)". So the -T option to run-test.sh won't work,
because it tries to multiply that string by the given factor.
It always defaults to 1408 on every platform we currently support.
Just use that value so we can specify -T in CI runs.
Signed-off-by: Chris Kirby <ckirby@versity.com>
A few callers of alloc_move_empty in the server were providing a budget
that was too small. Recent changes to extent_mod_blocks increased the
max budget that is necessary to move extents between btrees. The
existing WAG of 100 was too small for trees of height 2 and 3. This
caused looping in production.
We can increase the move budget to half the overall commit budget, which
leaves room for a height of around 7 each. This is much greater than we
see in practice because the size of the per-mount btrees is effectively
limited by both watermarks and thresholds to commit and drain.
Signed-off-by: Zach Brown <zab@versity.com>
It's possible to trigger the block device autoloading mechanism with a
mknod()/stat() pair. This mechanism has long been declared obsolete
and, since el9_7, triggers a dmesg warning, which then fails the test.
You may need to `rmmod loop` to reproduce.
Avoid this by not triggering a loop autoload at all - we just make a
different blockdev. Choosing major `42` here should avoid any autoload
mechanism, as this number is explicitly reserved for demo drivers and
should never trigger an autoload.
We also just ignore the warning line in dmesg. Other tests, as well as
background noise running during the test, might still trigger this.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Use of T_M0 and variants should be reserved for e.g. scoutfs
<subcommand> -p <mountpoint> type usage. Tests should create
individual content files in the assigned subdirectory.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The tap output file was not yet complete, as it failed to include
the contents of `status.msg`. In a few cases, that meant it
lacked important context.
Signed-off-by: Auke Kok <auke.kok@versity.com>