Compare commits

..

214 Commits

Author SHA1 Message Date
Auke Kok
5ae79a385f Add IPv6 support to the kernel module.
This adds IPv6 support to the kernel module side.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-12-03 19:27:10 -05:00
Auke Kok
62965d828e Enable ipv6 in testing.
Instead of using 127.0.0.1, we initialize the quorum slots to ::1,
enabling all ipv6 support.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-12-03 19:27:10 -05:00
Auke Kok
3ca0bef6ad Add ipv6 support to scoutfs userspace utility.
This change adds ipv6 support to various scoutfs sub-commands, allowing
users to mkfs, print and change-quorum-config using ipv6 addresses, and
modifies the outputs.

Any ipv6 address/port is displayed as [::1]:5000 to comply with the
related RFCs. Input strings remain unchanged, since the quorum config
input value is already comma-separated and this poses no issues.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-12-03 19:27:10 -05:00
Auke Kok
79adec53ea Don't stack alloc struct scoutfs_quorum_block_event old
The size of this thing is well over 1kb, and on several supported
distributions the compiler errors because this particular function
exceeds a 2k stack frame size, which is excessive even for a
function that isn't called regularly.

We can still allocate the thing in one go if we allocate it as an
array of (an array of structs), which allows us to index it as a 2d
array as before, taking away some of the additional complexity.
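
A rough sketch of the allocation pattern (names and sizes here are
illustrative, not the actual scoutfs definitions):

```c
/* one heap allocation, still indexable as old[i][j] like the stack array */
struct scoutfs_quorum_block_event (*old)[NR_SLOTS];	/* NR_SLOTS illustrative */

old = kcalloc(NR_EVENTS, sizeof(old[0]), GFP_NOFS);	/* NR_EVENTS illustrative */
if (!old)
	return -ENOMEM;

/* ... use old[i][j] exactly as before ... */

kfree(old);
```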

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-12-03 19:27:10 -05:00
Zach Brown
e194714004 Merge pull request #264 from versity/auke/findmnt_retval
Findmnt returns 1 when no matching entries found
2025-12-03 14:29:31 -08:00
Auke Kok
8bb2f83cf9 Findmnt returns 1 when no matching entries found
Our local fence script attempts to interpret errors executing `findmnt`
as critical errors, but the program explicitly exits with EXIT_FAILURE
when the total number of matching mount entries is zero.

This can happen if the mount disappeared while we're attempting to
fence it, but the scoutfs sysfs files are still in place as we read
them. It's a small window, but it's a fork/exec plus a full parse of
/etc/fstab, and a lot can happen in the 0.015s findmnt takes on my
system.

findmnt has no exit codes other than 0 and 1. At that point, we can
only assume that if the stdout is empty, the mount isn't there anymore.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-12-02 12:55:11 -08:00
Zach Brown
6a9a6789d5 Merge pull request #267 from versity/clk/merge_enoent
Handle ENOENT when getting log merge status item
2025-12-02 09:34:28 -08:00
Chris Kirby
ee630b164f Handle ENOENT when getting log merge status item
Tests that cause client retries can fail with this error
from server_commit_log_merge():

error -2 committing log merge: getting merge status item

This can happen if the server has already committed and resolved
the log merge that is being retried. We can safely ignore ENOENT here
just like we do a few lines later.
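
The shape of the fix, roughly (get_merge_status_item() is an
illustrative name, not the actual scoutfs call):

```c
ret = get_merge_status_item(sb, &stat);
if (ret == -ENOENT) {
	/*
	 * Another server already committed and resolved this log
	 * merge; the retried request has nothing left to do.
	 */
	ret = 0;
	goto out;
}
```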

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-12-01 08:58:24 -06:00
Zach Brown
1c7678b6f5 Merge pull request #263 from versity/zab/v1.26
v1.26 Release
2025-11-18 09:39:27 -08:00
Zach Brown
22b5e79bbd v1.26 Release
Finish the release notes for the 1.26 release.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-17 14:42:14 -08:00
Zach Brown
259e639271 Merge pull request #262 from versity/zab/ino_alloc_per_lock
Add ino_alloc_per_lock option
2025-11-14 13:57:49 -08:00
Zach Brown
4d66c38c71 Remove redundant WARN in commit_log_trees
The server's commit_log_trees has an error message that includes the
source of the error, but it's not used for all errors.  The WARN_ON is
redundant with the message and is removed because it isn't filtered out
when we see errors from forced unmount.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-14 10:04:30 -08:00
Zach Brown
7ef62894bd Add ino_alloc_per_lock option
Add an option that can limit the number of inode numbers that are
allocated per lock group.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-13 17:19:04 -08:00
Zach Brown
1f363a1ead Merge pull request #261 from versity/zab/log_merge_double_free
Zab/log merge double free
2025-11-13 17:18:30 -08:00
Zach Brown
8ddf9b8c8c Handle disappearing fencing requests and targets
The userspace fencing process wasn't careful about handling underlying
directories that disappear while it was working.

On the server/fenced side, fencing requests can linger after they've
been resolved by writing 1 to fenced or error.  The script could come
back around to see the directory before the server finally removes it,
causing all later uses of the request dir to fail.  We saw this in the
logs as a bunch of cat errors for the various request files.

On the local fence script side, all the mounts can be in the process of
being unmounted, so both the /sys/fs dirs and the mount itself can be
removed while we're working.

For both, when we're working with the /sys/fs files we read them without
logging errors and then test that the dir still exists before using what
we read.  When fencing a mount, we stop if findmnt doesn't find the
mount and then raise a umount error if the /sys/fs dir exists after
umount fails.

And while we're at it, we have each script's logging append instead of
truncating (if, say, it's a log file instead of an interactive tty).

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-13 12:43:31 -08:00
Zach Brown
fd80c17ab6 Filter out kernel message when guests are slow
Ignore more kernel messages when debug guests are being slow.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-13 12:43:31 -08:00
Zach Brown
991e2cbdf8 Ignore slow quorum hb transfers in tests
We're getting test failures from messages that our guests can be
unresponsive.  They sure can be.  We don't need to fail for this one
specific case.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-13 12:43:31 -08:00
Zach Brown
92ac132873 Silence merge splice error when forcing
Silence another error warning and assertion that assume that the
result of the errors is going to be persistent.  When we're forcing an
unmount we've severed storage and networking.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-13 12:43:31 -08:00
Auke Kok
ad078cd93c Avoid lock stalling mmap_stress
mmap_stress gets completely stalled in lock messaging, starving
most of the mmap_stress threads, which causes it to be delayed and
even time out in CI.

Instead of spawning threads over all 5 test nodes, we artificially
reduce it to spawning over only 2. This still does a good number
of operations on those nodes, and now the work is spread across the
two nodes evenly.

Additionally, I've added a minuscule (10ms) delay in between operations
that should hopefully be sufficient for other locking attempts to
settle and allow the threads to better spread the work.

This now shows that all the threads exit within < 0.25s on my test
machine, which is a lot better than the 40s variation that I was seeing
locally. Hopefully this fares better in CI.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-11-13 12:43:31 -08:00
Auke Kok
90cb458cd5 Make mmap_stress not exceed a fixed amount of time.
There's a scenario where mmap_stress gets enough resources that
two of the threads will starve the others, which then all take
a very long time catching up committing changes.

Because this test program didn't finish until all the threads had
completed a fixed amount of work, these threads essentially all
ended up tripping over each other. In CI this would exceed 6 hours,
while originally I intended this to run in about 100s or so.

Instead, cap the run time to ~30s by default. If threads exceed
this time, they will immediately exit, which causes any clog in
contention between the threads to drain relatively quickly.
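
Roughly the kind of cap each worker thread checks now (a sketch, not
the test's literal code; max_runtime_secs defaults to ~30):

```c
#include <time.h>

static double elapsed_secs(const struct timespec *start)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start->tv_sec) +
	       (now.tv_nsec - start->tv_nsec) / 1e9;
}

/* in each worker's operation loop */
if (elapsed_secs(&start) > max_runtime_secs)
	break;	/* exit immediately so contended threads drain quickly */
```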

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-11-13 12:43:31 -08:00
Zach Brown
1ab798e7eb Silence inconsistent srch on forced unmount
Assembling a srch compaction operation creates an item and populates it
with allocator state.  If filling the allocator state fails, it doesn't
cleanly unwind the allocation and undo the compaction item change, and
instead issues a warning.

This warning isn't needed if the error shows that we're in forced
unmount.  The inconsistent state won't be applied; it will be dropped on
the floor as the mount is torn down.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-13 12:43:31 -08:00
Zach Brown
e182914e51 Fix double free of metadata blocks in log merging
The log merging process is meant to provide parallelism across workers
in mounts.  The idea is that the server hands out a bunch of concurrent
non-intersecting work that's based on the structure of the stable input
fs_root btree.

The nature of the parallel work (cow of the blocks that intersect a key
range) means that the ranges of concurrently issued work can't overlap
or the work will all cow the same input blocks, freeing that input
stable block multiple times.  We're seeing this in testing.

Correctness was intended by having an advancing key that sweeps sorted
ranges.  Duplicate ranges would never be hit as the key advanced past
each one it visited.  This was broken by the mapping of the fs item keys
to log merge tree keys, which clobbers the sk_zone key value.  It
effectively interleaves the ranges of each zone in the fs root (meta
indexes, orphans, fs items).  With just the right log merge conditions,
involving logged items in the right places and partially completed work
that inserts remaining ranges behind the key, ranges can be stored at
mapped keys that end up out of order.  The server iterates over these
and ends up issuing overlapping work, which results in duplicated frees
of the input blocks.

The fix, without changing the format of the stored log tree items, is to
perform a full sweep of all the range items and determine the next item
by looking at the full precision stored keys.  This ensures that the
processed ranges always advance and never overlap.
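
Conceptually, the server's selection becomes a sweep like this (all
names here are hypothetical; the real code walks log merge btree
items):

```c
/*
 * Pick the next range by comparing full precision fs keys instead of
 * trusting the order of the mapped log merge tree keys.
 */
next = NULL;
list_for_each_entry(rng, &ranges, head) {
	if (key_compare(&rng->start, &cursor) < 0)
		continue;
	if (!next || key_compare(&rng->start, &next->start) < 0)
		next = rng;
}
/* issue work for next, then advance cursor past it so ranges never overlap */
```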

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-13 12:43:31 -08:00
Zach Brown
8484a58dd6 Have xfstest pass when using args
The xfstests test's golden output includes the full set of tests we
expect to run when no args are specified.  If we specify args then the
set of tests can change and the test will always fail when they do.

This fixes that by having the test check the set of tests itself, rather
than relying on golden output.  If args are specified then our xfstest
only fails if any of the executed xfstest tests failed.  Without args,
we perform the same scraping of the check output and compare it against
the expected results ourselves.

It would have been a bit much to put that large file inline in the test
file, so we add a dir of per-test files in revision control.  We can
also put the list of exclusions there.

We can also clean up the output redirection helper functions to make
them more clear.  After xfstests has executed we want to redirect output
back to the compared output so that we can catch any unexpected output.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-13 12:43:31 -08:00
Zach Brown
a077104531 Add crash monitor to run-tests
Add a little background function that runs during the test which
triggers a crash if it finds catastrophic failure conditions.

This is the second bg task we want to kill and we can only have one
function run on the EXIT trap, so we create a generic process killing
trap function.

We feed it the fenced pid as well.  run-tests didn't log much of value
into the fenced log, and we're not logging the kills into it anymore, so
we just remove run-tests' fenced logging.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-13 12:43:31 -08:00
Zach Brown
23aaa994df Add -l to run-tests for looping over tests
Add an option to run-tests to have it loop over each test that will be
run a number of times.  Looping stops if the test doesn't pass.

Most of the change in the per-test execution is indenting as we add the
for loop block.  The stats and kmsg output are lifted up to before the
loop.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-06 12:07:42 -08:00
Zach Brown
7d14b57b2d Export PATH once in run-tests
Might as well just export the PATH once as we change it, no need to
export it in every test iteration.

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-06 11:02:38 -08:00
Zach Brown
3f252be4be Merge pull request #241 from versity/auke/waiter_err_data_version_obsolete
Ignore data_version in scoutfs_ioc_data_wait_err.
2025-11-04 10:09:57 -08:00
Auke Kok
a4d25d9b55 Ignore data_version in scoutfs_ioc_data_wait_err.
The data_wait_err ioctl currently requires the correct data_version
for the inode to be passed in, or else the ioctl returns -ESTALE. But
the ioctl itself is just a passthrough mechanism for notifying data
waiters, which doesn't involve the data_version at all.

Instead, we can just drop checking the value. The field remains in the
headers, but we've marked it as being ignored from now on. The reason
for the change is documented in the header file as well.

This is all a lot simpler than having to modify/rev the data_waiters
interface to support passing back the data_version, because there isn't
any space left to easily do this, and userspace would then just pass it
straight back to the data_wait_err ioctl anyway.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-31 12:24:03 -04:00
Zach Brown
79cd25f693 Merge pull request #255 from versity/zab/compact_error_block_leak
Don't leak alloc blocks on srch compact error
2025-10-31 09:08:17 -07:00
Zach Brown
f2646130ae Don't leak alloc blocks on srch compact error
scoutfs_alloc_prepare_commit() is badly named.  All it really does is
put the references to the two dirty alloc list blocks in the allocator.
It must always be called if allocation was attempted, but it's easier
to require that it always be paired with _alloc_init().

If the srch compaction worker in the client sees an error it will send
the error back to the server without writing its dirty blocks.  In
avoiding the write it also avoided putting the two block references,
leading to leaked blocks.  We've been seeing rare messages with leaked
blocks in tests.
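
The error path now pairs the two calls, roughly like this (the
scoutfs_alloc_prepare_commit() arguments are assumed here; only the
pairing is the point):

```c
ret = do_compaction_work(sb, &alloc, &wri);	/* illustrative name */

/*
 * Always put the dirty alloc list block references, success or error,
 * so every _alloc_init() is paired with _alloc_prepare_commit().
 */
scoutfs_alloc_prepare_commit(sb, &alloc, &wri);	/* args assumed */

if (ret < 0)
	goto send_err;
```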

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-30 14:47:18 -07:00
Zach Brown
1c66f9a9a5 Merge pull request #227 from versity/auke/el96
RHEL 9.6 support.
2025-10-30 13:39:05 -07:00
Auke Kok
afb6ba00ad POSIX ACL changes.
The .get_acl() method now gets passed a mnt_idmap arg, and we can now
choose to implement either .get_acl() or .get_inode_acl(). Technically
.get_acl() is a new implementation, and .get_inode_acl() is the old.
That second method now also gets an rcu flag passed, but we should be
fine either way.

Deeper under the covers however we do need to hook up the .set_acl()
method for inodes, otherwise setfacl will just fail with -ENOTSUPP. To
make this not super messy (it already is) we tack on the get_acl()
changes here.

This is all roughly ca. v6.1-rc1-4-g7420332a6ff4.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-30 13:59:44 -04:00
Auke Kok
29e486e411 All vfs methods now take a mnt_idmap instead of user_namespace arg.
Similar to before when namespaces were added, they are now translated to
a mnt_idmap, since v6.2-rc1-2-gabf08576afe3.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-30 13:58:34 -04:00
Zach Brown
8f3177fe33 Merge pull request #254 from versity/zab/shrink_cleanup
Zab/shrink cleanup
2025-10-30 08:56:33 -07:00
Zach Brown
419079e606 Merge pull request #239 from versity/auke/keepalive
Add tcp_keepalive_timeout_ms option, change default to 60s
2025-10-29 17:15:17 -07:00
Zach Brown
6a70ee03b5 Dump block alloc stacks for leaked blocks
The typical pattern of spinning while isolating a list_lru results in a
livelock if there are blocks with leaked refcounts.  We're rarely seeing
this in testing.

We can have a modest array in each block that records the stack of the
caller that initially allocated the block and dump that stack for any
blocks that we're unable to shrink/isolate.  Instead of spinning on
shrinking, we can give it a good try and then print the blocks that
remain and carry on with unmount, leaking a few blocks.  (Past events
have had 2 blocks.)
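
On kernels with the current stacktrace API the recording side is
roughly this (the struct layout and depth are illustrative; older
kernels go through the stacktrace kernelcompat):

```c
#include <linux/stacktrace.h>

#define BLOCK_ALLOC_STACK_DEPTH 16	/* "modest array", size illustrative */

/* illustrative fields added to each cached block */
struct block_private {
	/* ... existing fields ... */
	unsigned long alloc_stack[BLOCK_ALLOC_STACK_DEPTH];
	unsigned int alloc_stack_len;
};

/* when the block is first allocated */
bp->alloc_stack_len = stack_trace_save(bp->alloc_stack,
				       ARRAY_SIZE(bp->alloc_stack), 1);

/* for any block we couldn't shrink/isolate at unmount */
stack_trace_print(bp->alloc_stack, bp->alloc_stack_len, 0);
```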

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-29 16:16:58 -07:00
Zach Brown
38a2ffe0c7 Add stacktrace kernelcompat
Signed-off-by: Zach Brown <zab@versity.com>
2025-10-29 10:12:52 -07:00
Zach Brown
4b41cf9789 Centralize port numbers and avoid ephemeral
The tests were using high ephemeral port numbers for the mount server's
listening port.  This caused occasional failure if the client's
ephemeral ports happened to collide with the ports used by the tests.

This puts all the port number configuration in one place and has a
quick check to make sure it doesn't wander into the current ephemeral
range.  Then it updates all the tests to use the chosen ports.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-29 10:12:52 -07:00
Zach Brown
102899290e Allow harmless srch compact commit errors
The server's srch commit error warnings were a bit severe.  The
compaction operations are a function of persistent state.  If they fail
then the inputs still exist and the next attempt will retry whatever
failed.  Not all errors are a problem, only those that result in partial
commits that leave inconsistent state.

In particular, we have to support the case where a client retransmits a
compaction request to a new server after a first server performed the
commit but couldn't respond.  Throwing warnings when the new server gets
ENOENT looking for the busy compaction item isn't helpful.  This came up
in tests as background compaction was in flight while tests unmounted and
mounted servers repeatedly to test lock recovery.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-29 10:12:52 -07:00
Zach Brown
89387fb192 Use list_lru for block cache shrinking
The block cache had a bizarre cache eviction policy that was trying to
avoid precise LRU updates at each block.  It had pretty bad behaviour,
including only allowing reclaim of maybe 20% of the blocks that were
visited by the shrinker.

We can use the existing list_lru facility in the kernel to do a better
job.  Blocks only exhibit contention as they're allocated and added to
per-node lists.  From then on we only set accessed bits and the private
list walkers move blocks around on the list as we see the accessed bits.
(It looks more like a fifo with lazy promotion than a "LRU" that is
actively moving list items around as they're accessed.)

Using the facility means changing how we remove blocks from the cache
and hide them from lookup.  We clean up the refcount inserted flag a bit
to be expressed more as a base refcount that can be acquired by
whoever's removing from the cache.  It seems a lot clearer.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-29 10:12:52 -07:00
Zach Brown
8b6418fb79 Add kernelcompat for list_lru
Add kernelcompat helpers for initial use of list_lru for shrinking.  The
most complicated part is the walk callback type changing.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-29 10:12:52 -07:00
Zach Brown
206c24c41f Retry stale item reads instead of stopping reclaim
Readers can read a set of items that is stale with respect to items that
were dirtied and written under a local cluster lock after the read
started.

The active reader mechanism addressed this by refusing to shrink pages
that could contain items that were dirtied while any readers were in
flight.  Under the right circumstances this can result in refusing to
shrink quite a lot of pages indeed.

This changes the mechanism to allow pages to be reclaimed, and instead
forces stale readers to retry.  The gamble is that reads are much faster
than writes.  A small fraction should have to retry, and when they do
they can be satisfied by the block cache.
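
The reader side turns into a simple retry loop, something like this
(names are illustrative, and the stale indication is shown as -ESTALE
only for the sake of the sketch):

```c
do {
	ret = read_items_under_lock(sb, key, last, &batch);
	/*
	 * If the pages we read from were reclaimed and repopulated by a
	 * writer after we started, the read is flagged stale and we
	 * simply try again; the retry is usually served from the block
	 * cache, so it's cheap.
	 */
} while (ret == -ESTALE);
```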

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-29 10:12:52 -07:00
Auke Kok
f67462750b Add tcp_keepalive_timeout_ms option, change default to 60s
The default TCP keepalive value is currently 10s, resulting in clients
being disconnected after 10 seconds of not replying to a TCP keepalive
packet. These keepalive values are reasonable most of the time, but
we've seen client disconnects where this timeout has been exceeded,
resulting in fencing. The cause for this is unknown at this time, but it
is suspected that brief network interruptions are happening.

This change adds a configurable value for this specific client socket
timeout. It enforces that its value is above UNRESPONSIVE_PROBES, whose
value remains unchanged.

The default value of 10000ms (10s) is changed to 60s. We're assuming
this value is much better suited for customers; it has been briefly
trialed and appears to help ride out network level interruptions
better.
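
For reference, on recent kernels the per-socket knobs look roughly like
this; how scoutfs actually splits the millisecond option across idle
time, probe interval and probe count isn't shown here, this is only a
sketch:

```c
#include <net/sock.h>
#include <linux/tcp.h>

static void set_keepalive(struct socket *sock, unsigned int timeout_ms)
{
	int secs = timeout_ms / 1000 ?: 1;	/* socket knobs are in seconds */

	sock_set_keepalive(sock->sk);		/* SO_KEEPALIVE */
	tcp_sock_set_keepidle(sock->sk, secs);	/* idle time before probing */
	tcp_sock_set_keepintvl(sock->sk, secs);	/* interval between probes */
}
```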

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-28 18:45:43 -04:00
Zach Brown
fd8aaa0810 Merge pull request #205 from versity/auke/scar
Changes from static analysis.
2025-10-27 16:06:32 -07:00
Auke Kok
a5dbe7f286 Don't set ret = -ENOMEM and immediately overwrite.
It's possible that scoutfs_net_alloc_conn() fails due to -ENOMEM, which
is legitimately a failure, thus the code here releases the sock again.

But the code block here sets `ret = -ENOMEM` and then restarts the loop,
which immediately sets `ret = kernel_accept()`, thus overwriting the
-ENOMEM error value.

We can argue that an ENOMEM error situation here is not catastrophic.
If this is the first time we're ever hitting an ENOMEM situation here
while trying to accept a new client, we can just release the socket and
wait for the client to try again. If the kernel at that point is still
out of memory to handle the new incoming connection, that will then
cascade down and clean up the whole listener at that point.

The alternative is to let this error path unwind out and break down the
listener immediately, something the code today doesn't do. We're
therefore keeping the behavior the same.

I've opted therefore to replace the `ret = -ENOMEM` assignment with a
comment explaining why we're ignoring the error situation here.
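
The accept loop in question, schematically (scoutfs_net_alloc_conn()'s
arguments are assumed; the point is where the error is swallowed):

```c
while (!kernel_accept(listen_sock, &new_sock, 0)) {
	conn = scoutfs_net_alloc_conn(sb, notify_down);	/* args assumed */
	if (!conn) {
		/*
		 * Allocation failure isn't treated as fatal for the
		 * listener: drop this socket and let the client retry.
		 * Assigning ret = -ENOMEM here would only be overwritten
		 * by the next kernel_accept() anyway.
		 */
		sock_release(new_sock);
		continue;
	}
	/* ... hand new_sock to conn and keep accepting ... */
}
```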

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:26:44 -04:00
Auke Kok
c1e89d597d Fix NULL dereference on error branch in handle_request.
If scoutfs_send_omap_response fails for any reason, req is NULL and we
would hit a hard NULL deref during unwinding.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:24:00 -04:00
Auke Kok
2c4316b096 Avoid uninitialized map, flags in ext.
This function returns a stack pointer to a struct scoutfs_extent, after
setting start, len to an extent found in the proper zone, but it leaves
map and flags members unset.

Initializing the struct to {0,} avoids passing uninitialized values up
the callstack.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:24:00 -04:00
Auke Kok
e704cd7074 Fix masking of EIO in compact_logs.
Several of the inconsistency error paths already correctly `goto out`
but this one has a `break`. This would result in doing a whole lot of
work on corrupted data.

Make this error path go to `out` instead as the others do.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:23:59 -04:00
Auke Kok
8c5b09aee8 Prevent masking away inconsistent state in search_sorted_file.
In these two error conditions we explicitly set `ret = -EIO` but then
`break`, which sets `ret = 0` again immediately, masking away a critical
error code that should be returned.

Instead, `goto out` retains the EIO error value for the caller.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:23:28 -04:00
Auke Kok
d2cd610c53 Fix return of uninit value.
The value of `ret` is not initialized. If the writeback list is empty,
or if igrab() fails on the only inode on the list, the value
of `ret` is returned without being initialized. This would cause the
caller to needlessly retry, and possibly make things worse.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:21:06 -04:00
Auke Kok
52563d3f73 Address double copy_to_user, possible 1-byte leak.
We shouldn't copy the entire _dirent struct and then copy in the name
again right after; just stop at offsetof(struct, name).

Now that we're no longer copying the uninitialized name[3] from ent,
there is no longer a possible 1-byte leak here, either.
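
The copy then boils down to this shape (struct and buffer names are
illustrative):

```c
/* copy only the fixed fields, stopping before the flexible name[] */
if (copy_to_user(ubuf, &ent, offsetof(struct dirent_entry, name)) ||
    copy_to_user(ubuf + offsetof(struct dirent_entry, name),
		 name, name_len))
	return -EFAULT;
```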

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:21:06 -04:00
Auke Kok
4358d57f55 Avoid possible NULL deref on ENOMEM.
Assure that we reschedule even if this happens. Maybe it'll recover. If
not, we'll have other issues elsewhere first.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:21:06 -04:00
Auke Kok
021830ab04 If kzalloc fails, avoid NULL deref.
We still assign NULL to sbi->s_fs_info to aid checks in cleanup paths.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:21:06 -04:00
Auke Kok
3e63739711 plug df ioctl leaks.
The `type` member and padding are not initialized before being
copied to userspace.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:21:06 -04:00
Auke Kok
b25d8e8741 Plug super leak.
We accidentally could leak super here, so make sure to free it.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:21:06 -04:00
Auke Kok
4a9760afe0 Incorrect array_size test.
ARRAY_SIZE(...) will return `3` for this array with members from 0 to 2,
therefore arr[3] is out of bounds. The array length test is off by one
and needs fixing.
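
In general terms, the corrected bounds check:

```c
/* valid indexes are 0 .. ARRAY_SIZE(arr) - 1 */
if (idx >= ARRAY_SIZE(arr))
	return -EINVAL;

/* the off-by-one form, idx > ARRAY_SIZE(arr), lets arr[ARRAY_SIZE(arr)] through */
```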

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-24 14:21:06 -04:00
Zach Brown
33f6e9d0cd Merge pull request #248 from versity/auke/shuffle-tests
Add option to shuffle test order.
2025-10-23 09:33:34 -07:00
Zach Brown
f9780fc391 Merge pull request #253 from versity/zab/msg_rate_shrink
Zab/msg rate shrink
2025-10-23 09:32:25 -07:00
Zach Brown
aa8517d29b Remove msghdr iov_iter kernelcompat
This removes the KC_MSGHDR_STRUCT_IOV_ITER kernel compat.
kernel_{send,recv}msg() initializes either msg_iov or msg_iter.

This isn't a clean revert of "69068ae2 Initialize msg.msg_iter from
iovec." because previous patches fixed the order of arguments, and the
net send caller was removed.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-22 11:18:30 -07:00
Zach Brown
feae5757c4 Send messages in batches
Previous work had the receiver try to receive multiple messages in bulk.
This does the same for the sender.

We walk the send queue and initialize a vector that we then send with
one call.  This is intentionally similar to the single message sending
pattern to avoid unintended changes.

Along with the changes to receive in bulk this ended up increasing the
message processing rate by about 6x when both send and receive were
going full throttle.
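
In outline, the sender now fills a kvec array from the send queue and
pushes it with a single call (queue structures and limits here are
illustrative):

```c
struct kvec vec[16];	/* batch size illustrative */
struct msghdr msg = { .msg_flags = MSG_NOSIGNAL };
size_t total = 0;
int nr = 0;

list_for_each_entry(qmsg, &send_queue, head) {
	if (nr == ARRAY_SIZE(vec))
		break;
	vec[nr].iov_base = qmsg->data;
	vec[nr].iov_len = qmsg->len;
	total += qmsg->len;
	nr++;
}

ret = kernel_sendmsg(sock, &msg, vec, nr, total);
```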

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-22 11:18:30 -07:00
Zach Brown
e79086f381 Fix swapped sendmsg nr_segs/count
When the msg_iter compat was added the iter was initialized with nr_segs
and count swapped.  I'm not convinced this had any effect because the
kernel_{send,recv}msg() call would initialize msg_iter again with the
correct arguments.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-22 11:18:30 -07:00
Zach Brown
45e815bf76 Receive incoming messages in bulk
Our messaging layer is used for small control messages, not large data
payloads.  By calling recvmsg twice for every incoming message we're
hitting the socket lock reasonably hard.  With senders doing the same,
and a lot of messages flowing in each direction, the contention is
non-trivial.

This changes the receiver to copy as much of the incoming stream into a
page that is then framed and copied again into individual allocated
messages that can be processed concurrently.  We're avoiding contention
with the sender on the socket at the cost of additional copies of our
small messages.
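
The receive side, in outline (buffer handling is simplified and the
names are illustrative):

```c
struct kvec vec = { .iov_base = page_address(page), .iov_len = PAGE_SIZE };
struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
int bytes;

/* pull as much of the stream as is available into one page... */
bytes = kernel_recvmsg(sock, &msg, &vec, 1, vec.iov_len, msg.msg_flags);

/*
 * ...then frame the page into individual allocated messages that are
 * queued for concurrent processing, so the socket lock is only taken
 * once per batch instead of twice per message.
 */
```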

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-22 11:18:30 -07:00
Zach Brown
c313b71b2e Process client lock messages in ordered work
The lock client has a requirement that it can't handle some messages
being processed out of order.  Previously it had detected message
ordering itself, but had missed some cases.  Receive processing was then
changed to always call lock message processing from the recv work to
globally order all lock messages.

This inline processing was contributing to excessive latencies in making
our way through the incoming receive queue, delaying work that would
otherwise be parallel once we got it off the recv queue.

This was seen in practice as a giant flood of lock shrink messages
arrived at the client.  It processed each in turn, starving a statfs
response long enough to trigger the hung task warning.

This fix does two things.

First, it moves ordered recv processing out of the recv work.  It lets
the recv work drain the socket quickly and turn it into a list that the
ordered work is consuming.  Other messages will have a chance to be
received and queued to their processing work without having to wait for
the ordered work to be processed.

Secondly, it adds parallelism to the ordered processing.  The incoming
lock messages don't need global ordering, they need ordering within each
lock.  We add an arbitrary but reasonable number of ordered workers and
hash lock messages to each worker based on the lock's key.
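
The per-lock ordering can be kept with a simple hash of the lock's key
onto a fixed set of ordered workers, along these lines (names and the
worker count are illustrative):

```c
#include <linux/hash.h>

#define NR_ORDERED_WORKERS 8	/* "arbitrary but reasonable" */

/* messages for the same lock always land on the same ordered worker */
i = hash_64(lock_key_hash, ilog2(NR_ORDERED_WORKERS));
list_add_tail(&msg->entry, &ordered[i].list);
queue_work(ordered_wq, &ordered[i].work);
```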

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-22 11:18:30 -07:00
Zach Brown
0ecaceba14 Merge pull request #236 from versity/team/ci_green
Team/ci green
2025-10-22 11:05:08 -07:00
Chris Kirby
b4d8323750 Quorum message cleanup
Make sure to log an error if the SCOUTFS_QUORUM_EVENT_END
update_quorum_block() call fails in scoutfs_quorum_worker().

Correctly print if the reader or writer failed when logging errors
in update_quorum_block().

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-22 10:59:03 -07:00
Chris Kirby
aa48a8ccfc Generate sorted srch-safe entry pairs
During log compaction, the SRCH_COMPACT_LOGS_PAD_SAFE trigger was
generating inode numbers that were not in sorted order. This resulted
in later failures during srch-basic-functionality, because we were
winding up with out of order first/last pairs and merging incorrectly.

Instead, reuse the single entry in the block repeatedly, generating
zero-padded pairs of this entry that are interpreted as create/delete
and vanish during searching and merging. These aren't encoded in the
normal way, but the extra zeroes are ignored during the decoding phase.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-22 10:59:03 -07:00
Chris Kirby
d277d7e955 Fix race condition in orphan-inodes test
Make sure that the orphan scanners can see deletions after forced unmounts
by waiting for reclaim_open_log_tree() to run on each mount; and waiting for
finalize_and_start_log_merge() to run and not find any finalized trees.

Do this by adding two new counters: reclaimed_open_logs and
log_merge_no_finalized and fixing the orphan-inodes test to check those
before waiting for the orphan scanners to complete.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-22 10:59:03 -07:00
Chris Kirby
c72bf915ae Use ENOLINK as a special error code during forced unmount
Tests such as quorum-heartbeat-timeout were failing with EIO messages in
dmesg output due to expected errors during forced unmount. Use ENOLINK
instead, and filter all errors from dmesg with this errno (67).

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-22 10:58:44 -07:00
Auke Kok
c3e6f3cd54 Don't run format-version-forward-back on el8, either
This test compiles an earlier commit from the tree that is starting to
fail due to various changes on the OS level, most recently due to sparse
issues with newer kernel headers. This problem will likely increase
in the future as we add more supported releases.

We opt to only run this test on el7 for now. While we could have
made this skip sparse checks that fail it on el8, it will suffice at
this point if this just works on one of the supported OS versions
during testing.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-15 17:35:17 -05:00
Zach Brown
c19280c83c Add cond_resched to iput worker
The iput worker can accumulate quite a bit of pending work to do.  We've
seen hung task warnings while it's doing its work (admittedly in debug
kernels).  There's no harm in throwing in a cond_resched so other tasks
get a chance to do work.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-15 17:35:17 -05:00
Chris Kirby
01847d9fb6 Add tracing for get_file_block() and scoutfs_ioc_search_xattrs().
Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-15 17:35:17 -05:00
Chris Kirby
84a48ed8e2 Fix several cases in srch.c where the return value of EIO should have been -EIO.
Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-15 17:35:17 -05:00
Chris Kirby
d38e41cb57 Add the inode number to scoutfs_xattr_set traces.
Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-15 17:35:17 -05:00
Chris Kirby
a896984f59 Only start new quorum election after a receive failure
It's possible for the quorum worker to be preempted for a long period,
especially on debug kernels. Since we only check for how much time
has passed, it's possible for a clean receive to inadvertently
trigger an election. This can cause the quorum-heartbeat-timeout
test to fail due to observed delays outside of the expected bounds.

Instead, make sure we had a receive failure before comparing timestamps.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-15 17:35:17 -05:00
Chris Kirby
35bcad91a6 Close window where we can lose search items
In finalize_and_start_log_merge(), we overwrite the server
mount's log tree with its finalized form and then later write out
its next open log tree. This leaves a window where the mount's
srch_file is nulled out, causing us to lose any search items in
that log tree.

This shows up as intermittent failures in the srch-basic-functionality
test.

Eliminate this timing window by doing what unmount/reclaim does when
it finalizes, by moving the resources from the item that we finalize
into server trees/items as it finalizes. Then there is no window
where those resources exist only in memory until we create another
transaction.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Auke Kok
0b7b9d4a5e Avoid trigger munching of block_remove_stale trigger.
It's entirely likely that the trigger here is munched by a read on a
dirty block from any unrelated or background read. Avoid that by putting
the trigger at the end of the condition list.

Now that the order is swapped, we have to avoid a null deref in
block_is_dirty(bp) here, as well.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-06 12:27:25 -05:00
Auke Kok
f86a7b4d3c Fully wait for orphan inode scan to complete.
The issue with the previous attempt to fix the orphan-inodes test was
that we would regularly exceed the 120s timeout value put in there.

Instead, in this commit, we change the code to add a new counter to
indicate orphan deletion progress. When orphan inodes are deleted, the
increment of this counter indicates progress happened. Inversely,
every time the counter doesn't increment, and the orphan scan attempts
counter increments, we know that there was no more work to be done.

For safety, we wait until 2 consecutive scan attempts were made without
forward progress in the test case.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-06 12:27:25 -05:00
Auke Kok
96eb9662a1 Revert "Extend orphan-inodes timeout."
This reverts commit 138c7c6b49.

The timeout value here is still exceeded by CI test jobs, and thus
causing the test to fail.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
47af90d078 Fix race in offline-extent-waiting test
Before comparing file contents, wait for the background dd to complete.
Also fix a typo.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
669e37c636 Remove hung task workaround from large-fragmented-free test
Adjusting hung_task_timeout_secs is still needed for this test to pass
with a debug kernel. But the logic belongs on the platform side.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
bb3e1f3665 Fix commit budget calculation with multiple holders
The try_drain_data_freed() path was generating errors about overrunning
its commit budget:

scoutfs f.2b8928.r.02689f error: 1 holders exceeded alloc budget av: bef 8185 now 8036, fr: bef 8185 now 7602

The budget overrun check was using the current number of commit holders
(in this case one) instead of the maximum number of concurrent holders
(in this case two). So even well behaved paths like try_drain_data_freed()
can appear to exceed their commit budget if other holders dirty some blocks
and apply their commits before the try_drain_data_freed() thread does its
final budget reconciliation.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
0d262de4ac Fix dirtied block calculation in extent_mod_blocks()
Free extents are stored in two btrees: one sorted by block number, one
by size. So if you insert a new extent between two existing extents, you can
be modifying two items in the by-block-number tree. And depending on the size
of those items, that can result in three items over in the by-size tree.
So that's a 5x multiplier per level.

If we're shrinking the tree and adding more freed blocks, we're conceptually
dirtying two blocks at each level to merge. (current *2 in the code).
But if they fall under the low water mark then one of them is freed, so we
can have *3 per level in this case.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Auke Kok
70bd936213 Ignore sparse error about stat.h on el8.
On el8, sparse is at 0.6.4 in epel-release, but it fails with:
```
[SP src/util.c]
src/util.c: note: in included file (through /usr/include/sys/stat.h):
/usr/include/bits/statx.h:30:6: error: not a function <noident>
/usr/include/bits/statx.h:30:6: error: bad constant expression type
```

This is due to us needing O_DIRECT from <fcntl.h>, so we set _GNU_SOURCE
before including it, but this causes (through _USE_GNU in sys/stat.h)
statx.h to be included, and that has __has_include, and sparse is too
dumb to understand it.

Just shut it up.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
3f786596e0 Don't overrun the block budget in server_log_merge_free_work().
This fixes a potential fence post failure like the following:

error: 1 holders exceeded alloc budget av: bef 7407 now 7392, fr: bef 8185 now 7672

The code is only accounting for the freed btree blocks, not the dirtying of
other items. So it's possible to be at exactly (COMMIT_HOLD_ALLOC_BUDGET / 2),
dirty some log btree blocks, loop again, then consume another
(COMMIT_HOLD_ALLOC_BUDGET / 2) and blow past the total budget.

In this example, we went over by 13 blocks.

By only consuming up to 1/8 of the budget on each loop, and committing when we
have consumed 3/4 of the budget, we can avoid the fence post condition.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Zach Brown
cad47ed1ed Merge pull request #247 from versity/zab/sparse_error
Zab/sparse error
2025-10-06 10:09:07 -07:00
Auke Kok
bf87ea0a1c Add option to shuffle test order.
The `-R` option will shuffle the order in which tests are executed.

The testing order shouldn't affect the outcome of any of the tests, but
in practice many of these tests will execute code slightly different
based on the history of the filesystem, resources allocated, memory
usage etc. of tests that were executed before. Shuffling the order of
tests therefore introduces small semi-random variations in the
enviroment.

The xfstests test is the only one that can't be shuffled yet into the
mix, so it is kept at the end. This is because it leaves the filesystems
unmounted. At a later point we may want to address this.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-03 14:55:32 -07:00
Zach Brown
e088424d70 Add initial filters for warnings in distro source
Add a chunk of filters for sparse warnings that trigger on distro kernel
source.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-03 09:35:36 -07:00
Zach Brown
d0cf026298 Require sparse, and filter kernel sparse output
Fail the build if we don't check with sparse in both the kernel and
userspace utils.  Add a filtering wrapper to the kernel build so that we
have a place to filter out uninteresting errors from kernel sources that
we're building against.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-03 09:35:36 -07:00
Zach Brown
03fa1ce7c5 Avoid bad sparse warning in lock_invalidate()
This is another example of refactoring a loop to avoid sparse warnings
from doing something in the else of a failed trylock if.  We want to
drop and reacquire the lock if the trylock fails so we do it every loop
iteration.  This shouldn't be experiencing much contention because most
of the cov users are usually done under locks and invalidation has
excluded lock holders.  So the additional lock and unlock noise should
be local.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-03 09:35:36 -07:00
Zach Brown
3d9f10de93 Work around sparse warning in _item_write_done
scoutfs_item_write_done() acquires the cinf dirty_lock and pg rwlock out
of order.  It uses a trylock to detect failure and back off of both
before retrying.

sparse seems to have some peculiar sensitivity to following the else
branch from a failed trylock while already in a context.  Doing that
consistently triggered the spurious mismatched context warning.

This refactors the loop to always drop and reacquire the dirty_lock
after attempting the trylock.  It's not great, but this shouldn't be very
contended because the transaction write has serialized write lock
holders that would be trying to dirty items.  The silly lock noise will
be mostly cached.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-03 09:34:23 -07:00
Zach Brown
9741d40e10 Merge pull request #229 from versity/zab/v1.25
v1.25 Release
2025-06-04 11:21:25 -07:00
Zach Brown
48ac7bdf7c v1.25 Release
Finish the release notes for the 1.25 release.

Signed-off-by: Zach Brown <zab@versity.com>
2025-06-03 13:35:42 -07:00
Zach Brown
7865ee9f54 Merge pull request #223 from versity/auke/el9_5_wmaybe-uninit
Fix -Wmaybe-uninitialized since rhel9.5
2025-05-12 12:21:02 -07:00
Zach Brown
624eb128c6 Merge pull request #221 from versity/auke/enospc-test
Give enospc test more time to commit unlink.
2025-05-09 11:27:04 -07:00
Zach Brown
091eb3b683 Merge pull request #219 from versity/auke/fix-tests-failing-dirty-test-dirs
Fix test cases that don't run cleanly in a semi-dirty env.
2025-05-09 11:17:24 -07:00
Zach Brown
04e8cc6295 Merge pull request #220 from versity/auke/orphan-inodes
Extend orphan-inodes timeout.
2025-05-09 11:15:13 -07:00
Zach Brown
0f6fdb3eb5 Merge pull request #222 from versity/auke/t_kill_silent
Properly silently kill background tasks.
2025-05-09 11:11:24 -07:00
Auke Kok
2f48a606e8 Fix -Wmaybe-uninitialized since rhel9.5
Looks like the compiler isn't smart enough to understand the pass by
pointer value, and we can initialize it here easily.

make[1]: Entering directory '/usr/src/kernels/5.14.0-503.26.1.el9_5.x86_64'
  CC [M]  /home/auke/scoutfs/kmod/src/server.o
/home/auke/scoutfs/kmod/src/server.c: In function ‘fence_pending_recov_worker’:
/home/auke/scoutfs/kmod/src/server.c:4170:23: error: ‘addr.v4.addr’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
 4170 |                 ret = scoutfs_fence_start(sb, rid, le32_to_be32(addr.v4.addr),
      |                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 4171 |                                           SCOUTFS_FENCE_CLIENT_RECOVERY);
      |                                           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors

There's still the obvious issue here that we'd intended to support ipv6
but just disregard that here.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-08 15:20:50 -07:00
Auke Kok
377e49caf1 Properly silently kill background tasks.
Occasionally, we have some tests fail because these kills produce:

tests/lock-recover-invalidate.sh: line 42:  9928 Terminated

Even though we expected them to be silent. In these particular cases we
already don't care about this output.

We borrow the silent_kill() function from orphan-inodes and promote it
to t_silent_kill() in funcs/exec.sh, and then use it wherever
appropriate.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-08 12:03:04 -07:00
Auke Kok
d08eb66adc Give enospc test more time to commit unlink.
The current test sequence performs the unlink and immediately tests
whether enough resources are available to create new files again, and
this consistently fails.

One of my crummy VMs takes a good 12 seconds before the `touch` actually
succeeds. We care about the filesystem eventually returning from ENOSPC,
and certainly we don't want it to take forever, but there is a period
after our first ENOSPC error and cleanup that we expect ENOSPC to fail
for a bit longer.

Make the timeout 120s. As soon as the `touch` completes, exit the wait
loop.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-08 11:40:13 -07:00
Zach Brown
6f19d0bd36 Merge pull request #216 from versity/zab/stop_ending_dirty_data_freed
Zab/stop ending dirty data freed
2025-05-08 11:18:23 -07:00
Auke Kok
1d0cde7cc3 Clean up old test data as needed.
If run without `-m` (explicit mkfs) in subsequent testing, old test
data files may break several tests. Most failures are -EEXIST, but
there are some more subtle ones.

This change erases any existing test dir as needed just before we
run the tests, and avoids the issue entirely.

I considered doing a `mv dir dir.$$ && rm -rf dir.$$ &` alternative
solution, but that will likely interfere disproportionately with
tests that do disconnects and other things that can be impacted by an
unlink storm.

This has an obvious performance aspect - tests will be a little
slower to start on subsequent runs. In CI, this will effectively be
a no-op though.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-08 10:10:01 -07:00
Auke Kok
138c7c6b49 Extend orphan-inodes timeout.
This test regularly fails in CI when the 15 seconds elapses and the
system still hasn't concluded the mount log merges and orphan inode
scans needed to unlink the test files.

Instead of just extending the timeout value, we test-and-retry for 120s.
This hopefully is faster in most cases. My smallest VM needs about 6s-8s
on average.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-08 09:56:45 -07:00
Zach Brown
8aa1a98901 Merge pull request #210 from versity/auke/perf-irq-took-too-long
Filter out perf `interrupt took too long` dmesg.
2025-04-30 10:04:00 -07:00
Zach Brown
888b1394a6 Retry client commit and get log trees separately
The client transaction commit worker has a series of functions that it
calls to commit the current transaction and open the next one.  If any
of them fail, it retries all of them from the beginning each time until
they all succeed.

This pattern behaves badly since we added the strict get_trans_seq and
commit_trans_seq latching in the log_trees.  The server will only commit
the items for a get or commit request once, and will fail a commit
request if it isn't given the seq that matches the current item.

If the server gets an error it can have persisted items while sending an
error to the client.  If this error was for a get request, then the
client will retry all of its transaction write functions.  This includes
the commit request which is now using a stale seq and will fail
indefinitely.  This is visible in the server log as:

  error -5 committing client logs for rid e57e37132c919c4f: invalid log trees item get_trans_seq

The solution is to retry the commit and get phases independently.  This
way a failed get will be retried on its own without running through the
commit phase that had succeeded.  The client will eventually get the
next seq that it can then safely commit.
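
Schematically, the commit worker goes from one combined retry loop to
two independent ones (heavily simplified, names illustrative):

```c
/* before: any failure restarted everything, replaying a stale commit seq */
do {
	ret = commit_current_trans(sb) ?:
	      get_next_log_trees(sb);
} while (ret);

/* after: a failed get no longer re-runs the commit that already succeeded */
do {
	ret = commit_current_trans(sb);
} while (ret);
do {
	ret = get_next_log_trees(sb);
} while (ret);
```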

Signed-off-by: Zach Brown <zab@versity.com>
2025-04-29 11:46:38 -07:00
Zach Brown
e457694f19 Don't send dirty data_freed blocks to client
At the end of get_log_trees we can try and drain the data_freed extent
tree, which can take multiple commits.  If a commit fails then the
blocks are still dirty in memory.  We can't send references to those
blocks to the client.  We have to return an error and not send the
log_trees, like the main get_log_trees does.  The client will retry and
eventually get a log_trees that references blocks that were successfully
committed.

Signed-off-by: Zach Brown <zab@versity.com>
2025-04-29 11:46:38 -07:00
Zach Brown
459de5b478 Merge pull request #211 from versity/auke/tapf-output
TAP formatted output.
2025-04-15 14:25:06 -07:00
Auke Kok
24031cde1d TAP formatted output.
Stored as `results/scoutfs.tap`, this file contains TAP format 14
generated test results.

Embedded in the output is some metadata so that these files can be
aggregated and stored in a unique and deduplicating way, by using a
UUID generated at the start of testing. The file itself also captures
the git ID, date, and kernel version, as well as the (possibly altered)
test sequence used.

Any test that has diff or dmesg output will be considered failed, and a
copy of the relevant data is included as comments.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-04-15 12:02:41 -07:00
Zach Brown
04cc41719c Merge pull request #209 from versity/auke/basic-truncate-yes-pipefail
Ignore pipefail alternative error when not a tty.
2025-04-14 13:15:03 -07:00
Auke Kok
1b47e9429e Filter out perf interrupt took too long dmesg.
Example:

```
[ 2469.638414] perf: interrupt took too long (2507 > 2500), lowering kernel.perf_event_max_sample_rate to 79000
```

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-04-14 12:06:58 -07:00
Auke Kok
7ea084082d Ignore pipefail alternative error when not a tty.
This happens only with the basic-truncate test. It's the only user
of the `yes` program.

The `yes` command normally fails gracefully under the usual runs that
are attached to some terminal. But when the test script runs entirely
under something else, it will throw a needless error message that
pollutes the test output:

  `yes: standard output: Broken pipe`

Adjust the redirect to omit all stderr for `yes` in this case.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-04-14 11:13:39 -07:00
Zach Brown
f565451f76 Merge pull request #208 from versity/zab/v1.24
v1.24 Release
2025-03-17 11:18:42 -07:00
Zach Brown
05f14640fb v1.24 Release
Finish the release notes for the 1.24 release.

Signed-off-by: Zach Brown <zab@versity.com>
2025-03-14 12:19:30 -07:00
Zach Brown
609fc56cd6 Merge pull request #203 from versity/auke/new_inode_ctime
Fix new_inode ctime assignment.
2025-02-25 15:23:16 -08:00
Zach Brown
a4b5a256eb Merge pull request #175 from versity/auke/mmap
Support for mmap() writable mappings.
2025-02-20 14:03:01 -08:00
Zach Brown
f701ce104c Merge pull request #204 from versity/zab/remove_wordexp
Remove wordexp expansion of utils path argument
2025-02-19 09:27:15 -08:00
Zach Brown
c6dab3c306 Remove wordexp expansion of utils path argument
scoutfs cli commands were using a helper that tried to perform word
expansion on the path argument.  This was done with the intent of
providing the convenience of shell expansion (env vars, ~) within the
cli command argument.

But it breaks paths that accidentally have their file names match the
syntax that wordexp supports.   "[ ]" tripped up files in the wild.

We don't need to provide shell expansion functionality in our argument
parsing.  The shell can do that.  The cli must pass the arguments
straight through, no parsing at all.

Signed-off-by: Zach Brown <zab@versity.com>
2025-02-18 11:55:37 -08:00
Auke Kok
e3e2cfceec Fix new_inode ctime assignment.
Very old copy/paste bug here: we want to update new_inode's ctime
instead. old_inode is already updated.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-02-18 13:15:49 -05:00
Zach Brown
5a10c79409 Merge pull request #201 from versity/auke/fixes_pre_parallel_restore
Misc. fixes and changes to support parallel_restore and check.
2025-02-02 06:53:25 -08:00
Auke Kok
e9d147260c Fix ctx->pos updating to properly handle dent gaps
We need to ensure we're emitting dents with the proper position,
and we already have them as part of our dent. The only caveat is
to increment ctx->pos once beyond the list to make sure the caller
doesn't call us once more.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
6c85879489 Assert unlock doesn't underflow lock user count.
While debugging a double unlock error we hit this condition and
debugging would have been a lot easier had we enforced this simple
constraint that we can't decrement the lock users count if it's
already 0.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
8b76a53cf3 Avoid cluster locking while put_user() in _allocated_inos.
Similar to fiemap, readdir and walk_inodes, this method could page
fault during put_user() while cluster locked, potentially causing a
deadlock.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
e76a171c40 Avoid faulting while cluster locked in _walk_inodes.
Similar to readdir and fiemap vfs methods, we can't copy to user while
holding cluster locks. The previous comment about it being safe no
longer applies, and this could deadlock.

Rewrite the loop to iterate and store entries in a page, then flush
the page contents while not holding a cluster lock.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
8cb08507d6 Do not copy to user while holding locks in scoutfs_data_fiemap()
Now that we support mmap writes, at any point in time we could
pagefault and lock for writes. That means - just like readdir -
we can no longer lock and copy_to_user, since it also may page fault
and thus deadlock.

We statically allocate 32 extent entries on the stack and use
these to shuffle out fiemap entries at a time, locking and
unlocking around collecting and fiemap_fill_next_extent().
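
The resulting loop is roughly the following (the batch of 32 is from
this commit; everything else is illustrative):

```c
struct staged_extent stage[32];	/* small on-stack batch */
int ret = 0, nr, i;

do {
	/* gather up to 32 extents while holding the cluster lock */
	nr = collect_extents_locked(inode, &pos, stage, ARRAY_SIZE(stage));

	/* drop the lock, then copy to userspace, which may page fault */
	for (i = 0; i < nr; i++) {
		ret = fiemap_fill_next_extent(fieinfo, stage[i].logical,
					      stage[i].phys, stage[i].len,
					      stage[i].flags);
		if (ret)
			break;	/* 1 means the caller's buffer is full */
	}
} while (nr > 0 && ret == 0);
```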

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
cad12d5ce8 Avoid deadlock in _readdir() due to copy_to_user().
dir_emit() will copy_to_user, which can pagefault. If this happens while
cluster locked, we could deadlock.

We use a single page to stage dir_emit data, and iterate between
fetching dirents while locked, and emitting them while not locked.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
e59a5f8ebd Readdir w/offset validation.
Verify using xfs_io that readdir offsets match expected output.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
1bcd1d4d00 Drop readdir pre-.iterate() compat (el7.5ish).
These 2 sections of compat for readdir are wholly obsolete and can be
hard dropped, which restores the method to look like current upstream
code.

This was added in ddd1a4e.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-23 14:28:40 -05:00
Auke Kok
b944f609aa remap_pages ops becomes obsolete.
2025-01-23 14:28:40 -05:00
Auke Kok
519b47a53c mmap() trace events.
We merely trace exit values and position, and ignore length.

Because vm_fault_t is __bitwise, sparse will loudly complain about
a plain cast to u32, so we must __force (on el8). ret will be 512 in
normal cases.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-23 14:28:40 -05:00
Auke Kok
92f704d35a Enable all xfstests mmap() tests.
Now that all of these should be passing, we enable all mmap() tests in
xfstests, and update the golden output with the new tests.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-23 14:28:40 -05:00
Auke Kok
311bf75902 Add mmap tests.
Two test programs are added. The run time is about 1min on my el7
instance.

The test script finishes up with a read/write mmap test on offline
extents to verify the data wait paths in those functions.

One program will perform vfs read/write and mmap read/write calls on
the same file from across 5 threads (mounts) repeatedly.  The goal
is to assure there are no locking issues between read/write paths.

The second test program performs consistency checking on a file that is
repeatedly written/read using memory maps and normal reads and writes,
and the content is verified after every operation.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-23 14:28:40 -05:00
Benjamin LaHaise
3788d67101 Add support for writable shared mmap()ings
Add support for writable MAP_SHARED mmap()ings.  Avoid issues with late
writepage()s building transactions by doing the block_write_begin() work in
scoutfs_data_page_mkwrite().  Ensure the page is marked dirty and prepared
for write, then let the VM complete the write when the page is flushed or
invalidated.
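
The general shape of such a handler on newer kernels looks roughly like
this hedged sketch (not the scoutfs implementation; the block
allocation/transaction work sits where the comment is):

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static vm_fault_t example_page_mkwrite(struct vm_fault *vmf)
{
	struct page *page = vmf->page;
	struct inode *inode = file_inode(vmf->vma->vm_file);
	vm_fault_t ret = VM_FAULT_LOCKED;

	sb_start_pagefault(inode->i_sb);
	lock_page(page);

	if (page->mapping != inode->i_mapping ||
	    page_offset(page) >= i_size_read(inode)) {
		unlock_page(page);
		ret = VM_FAULT_NOPAGE;	/* raced with truncate/invalidate */
		goto out;
	}

	/* allocate blocks / attach buffers here, like block_write_begin() would */

	set_page_dirty(page);
	wait_for_stable_page(page);	/* let pending writeback finish before redirtying */
out:
	sb_end_pagefault(inode->i_sb);
	return ret;
}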

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-23 14:28:40 -05:00
Benjamin LaHaise
b7a3d03711 Add support for read only mmap()
Adds the required memory mapped ops struct and page fault handler
for reads.

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-23 14:28:40 -05:00
Zach Brown
295f751aed Add test_bit to utils bitmap
Add test_bit() to the trivial utils bitmap.c implementation.

Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:58:58 -08:00
Zach Brown
7f6032d9b4 Add lk rbtree wrapper
Import the kernel's rbtree implementation with a wrapper so we can use
it from userspace.

Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:58:49 -08:00
Zach Brown
7e3a6537ec Add userspace version of our dirent name hash
Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:58:41 -08:00
Zach Brown
49b7b70438 Add userspace version of our mode to type
Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:58:31 -08:00
Zach Brown
de0fdd1f9f Promote userspace btree block initialization
Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:58:23 -08:00
Zach Brown
a6d7de3c00 Add fls64() alias for userspace flsll()
Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:58:16 -08:00
Zach Brown
2c2c127c5e Add put_unaligned_leXX() for userspace
Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:58:10 -08:00
Zach Brown
9491c784e7 Add srch_encode_entry() for userspace utils
Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:57:56 -08:00
Zach Brown
c3b30930fa Add bloom filter index calc for userspace utils
Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:57:46 -08:00
Zach Brown
e7e46a80e6 Add userspace NSEC_PER_SEC
Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:57:39 -08:00
Zach Brown
1ddf752f42 Import a few more functions to our list.h
Import a few more functions from the kernel's list.h into our imported
copy.

Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:57:29 -08:00
Zach Brown
14b65c6360 Fix printing alloc list block extents
The list alloc blocks have an array of blknos that are offset by a start
field in the block header.  The print code wasn't using that and was
always referencing the beginning of the array, which could miss blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2025-01-22 09:57:21 -08:00
Zach Brown
934f6c7648 Merge pull request #199 from versity/zab/v1.23
v1.23 Release
2024-12-11 17:02:52 -08:00
Zach Brown
a88972b50e v1.23 Release
Finish the release notes for the 1.23 release.

Signed-off-by: Zach Brown <zab@versity.com>
2024-12-11 13:07:44 -08:00
Zach Brown
3e71f49260 Merge pull request #195 from versity/auke/el9_5
RHEL9.5 kernel support
2024-12-03 14:27:57 -08:00
Zach Brown
8a082e3f99 Merge pull request #197 from versity/greg/block-el9-minor-upgrades
Block EL9 minor version upgrades
2024-12-03 14:09:17 -08:00
Greg Cymbalski
110d5ea0d5 Block EL9 minor version upgrades
Since kABI migrations across minor versions are a thing of the past going
forward, we now:
- Detect if we're on EL9
- If so, add a requirement on the various flavors of release package to
  that specific major.minor version

This appropriately does not allow upgrades across minor versions.
2024-12-02 16:04:24 -08:00
Auke Kok
669de459a7 bdev_open_by_path is now removed as well.
Additional blkdev/bdev changes now cause this call to be removed as
well resulting in us having to use yet another API to do the same for
el9_5.

The changes are a little more subtle, as the bdev_mount() call now passes
a custom bd_holder_ops that we must match or else trigger a WARN_ON, so we
switch to using sbi as our holder arg instead.

Make sure to call bdev_fput() and not fput(), since we don't want our
private data cleanup deferred, which fails xfstests generic/604.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-11-27 18:52:39 -08:00
Auke Kok
621271f8cf backing_dev_info is entirely removed.
The assignments to it are no longer needed at all. All references can be
dropped since v6.4-rc4-163-g0d625446d0a4.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-11-08 13:32:21 -05:00
Auke Kok
d1092cdbe9 current_time() is no longer extern.
Since v6.5-rc1-7-g9b6304c1d537, current_time() is no longer
extern, so we need to update this grep regex to continue to match.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-11-08 13:21:03 -05:00
Zach Brown
aed7169fac Merge pull request #194 from versity/zab/v1.22
v1.22 Release
2024-11-03 15:06:40 -08:00
Zach Brown
7f313f2818 v1.22 Release
Finish the release notes for the 1.22 release.

Signed-off-by: Zach Brown <zab@versity.com>
2024-11-01 13:05:52 -07:00
Zach Brown
6b4e666952 Merge pull request #193 from versity/zab/hung_lock_fixes
Zab/hung lock fixes
2024-10-31 16:56:51 -07:00
Zach Brown
4a26059d00 Add lock-shrink-read-race test
Add a quick test that races readers and shrinking to stress lock object
refcount racing between concurrent lock request handling threads in the
lock server.

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-31 15:35:11 -07:00
Zach Brown
19e78c32fc Allow null lock compatibility between nodes
Right now a client requesting a null mode for a lock will cause
invalidations of all existing granted modes of the lock across the
cluster.

This is unnecessarily broad.  The absolute requirement is that a null
request invalidates other existing granted modes on the client.  That's
how the client safely resolves shrinking's desire to free locks while
the locks are in use.  It relies on turning it into the race between use
and remote invalidation.

But that only requires invalidating existing grants from the requesting
client, not all clients.  It is always safe for null grants to coexist
with all grants on other clients.  Consider the existing mechanics
involving null modes.  First, null locks are instantiated on the client
before sending any requests at all.  At any given time newly allocated
null locks are coexisting with all existing locks across the cluster.
Second, the server frees the client entry tracking struct the moment it
sends a null grant to the client.  From that point on the client's null
lock can not have any impact on the rest of the lock holders because the
server has forgotten about it.

So we add this case to the server's test that two client lock modes are
compatible.  We take the opportunity to comment the heck out of this
function instead of making it a dense boolean composition.  The only
functional change is the addition of this case, the existing cases are
refactored but unchanged.

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-31 15:34:59 -07:00
Zach Brown
8c1a45c9f5 Use bools instead of weird addition as or in net
When freeing acked responses in the net layer we sweep the send and
resend queues looking for queued responses up to the sequence number
we've had acked.  The code that did this used a weird pattern of
returning ints and adding them which gave me pause.  Clean it up to use
bools and or (not short-circuiting ||) to more obviously communicate
what's going on.

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-30 13:38:12 -07:00
Zach Brown
5a6eb569f3 Add some lock debugging trace fields
Over time some fields have been added to the lock struct which haven't
been added to the lock tracing output.  Add some of the more relevant
lock fields to tracing.

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-30 13:16:04 -07:00
Zach Brown
69d9040e68 Close lock server use-after-free race
Lock object lifetimes in the lock server are protected by reference
counts.  References are acquired while holding a lock on an rbtree.

Unfortunately, the decision to free lock objects wasn't tested while
also holding that lock on the rbtree.  A caller putting their object
would test the refcount, then wait to get the rbtree lock to remove it
from the tree.

There's a possible race where the decision is made to remove the object
but another reference is added before the object is removed.  This was
seen in testing and manifested as an incoming request handling path adding
a request message to the object before it is freed, losing the message.
Clients would then hang on a lock that never saw a response because
their request was freed with the lock object.

The fix is to hold the rbtree lock when testing the refcount and
deciding to free.  It adds a bit more contention but not significantly
so, given the wild existing contention on a per-fs spinlocked rbtree.
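
Expressed as a hedged sketch with hypothetical lock_tree/lock_obj types
(the server's actual structures and naming differ), the put path then
looks like:

#include <linux/atomic.h>
#include <linux/rbtree.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct lock_tree {
	spinlock_t lock;
	struct rb_root root;
};

struct lock_obj {
	struct rb_node node;
	atomic_t refcount;
	/* request queues, granted modes, ... */
};

static void lock_obj_put(struct lock_tree *tree, struct lock_obj *lobj)
{
	/*
	 * The count only reaches zero while the tree lock is held, so a
	 * concurrent lookup can't take a new reference after we decide
	 * to erase and free the object.
	 */
	if (atomic_dec_and_lock(&lobj->refcount, &tree->lock)) {
		rb_erase(&lobj->node, &tree->root);
		spin_unlock(&tree->lock);
		kfree(lobj);
	}
}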

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-30 13:04:13 -07:00
Zach Brown
d94ec29ffa Merge pull request #192 from versity/greg/with-debug-kmod
Generate debug packages
2024-10-24 15:35:03 -07:00
Greg Cymbalski
70c36ae394 Generate debug packages
We had previously explicitly disabled this; let's start generating them.
2024-10-24 14:56:09 -07:00
Zach Brown
1d08a58add Merge pull request #151 from versity/auke/el9
EL9 support.
2024-10-04 11:46:47 -07:00
Auke Kok
fc7876e844 Allow certain tests to skip, but not fail exit condition.
Previously, any t_skip would cause the final test result to be a failure
because up until now no test should have been skipped.

However, with format-version-forward-back not being compatible with el9,
we are going to rely on el7/8 testing for that test soleley, and
therefore we have to allow skipping of this test on el9 and newer OS
versions.

We add `t_skip_permitted` to signal this from the test case to the
run-tests.sh script. A new exit code is passed, and all accounting is
updated to reflect that a test was skipped, but this was permitted. We
modify format-version-forward-back to use this new exit path.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
5337b9e221 Ignore Process accounting resumed dmesg.
I'm seeing more and more of these as audit is enabled in el8 and el9
images I am using for testing, and during ENOSPC tests this has a chance
of triggering process accounting suspension, and subsequent resume.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
8a22bdd366 Ignore device mapper size change dmesg output.
In v1.18-10-g5507ee5, we changed the test code away from loopback
to device-mapper, which simplified our DUT setup code.

However, this results in the occasional `device changed size` messages
now being emitted by the `dm` driver instead of the `loop` kernel
module. We have to additionally ignore these kernel messages from now as
well.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
235ab133a7 We must provide a_ops->dirty_folio and invalidate_folio.
In v5.17-rc4-53-g3a3bae50af5d, we can no longer leave this method
unhooked, as the mm caller now calls it unconditionally. All in-kernel
filesystems were fixed in that change.

aops->invalidatepage was the old aops method that would free pages
with private attached data. This method is replaced with the
new invalidate_folio method. If this method is NULL, the memory
will become orphaned. (v5.17-rc4-29-gf50015a596fa)

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
9335d2eb86 Don't --track when checking out a tag.
I've pushed a tag/release to scoutfs-xfstests-dev instead of a full
blown branch. This seems simpler and cleaner than using branches,
because we're going to end up rebasing these things a lot. However, we
can't --track tags, so, if the branch name passed to -x is actually a
tag instead of a branch, we have to omit the --track option here.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
97b081de3f Switch xfstests tag over in CI jobs using this marker file.
CI testing needs to know which xfstests branch to use on all OSs.

We can't just use the el9 xfstests branch on el9 only, because we
need to run the same el9 xfstests on el8 and el7 as well, otherwise
testing will just fail.

So, we put a marker file in our git repo that tells us that we're
not going to use the default `scoutfs` branch from scoutfs-xfstests-dev
but our own special tag or branch. The CI job then should pass the
proper -x {branch} flag to the run-tests.sh script.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
21b5032365 Add new xfstests that we won't support or don't pass
The new version of xfstests adds a _lot_ more tests to our mix. Many
of the new ones will auto enable or auto skip as needed.

There are tests we can't or won't support that will be in future
xfstests. Disable them now so we can avoid dealing with them later.

Quite a few fall into "we don't support these types of mounting yet",
mostly bind-mount or device-mapper things. We disable all the swapfile
tests outright.

A few tests fail on el7 but not el8/9 but we don't have a way to run
them without failing yet, so disable them as well.

Update golden with the proper new array of tests. This all requires
the `auke/scoutfs-el9` branch in `versity/scoutfs-xfstests-dev`.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
4723f4f9ab Disable format-version-forward-back test on el9+.
Using t_skip, we just skip this test on el9.

If we ever want to add a formatversion 2->3 test, perhaps we should
just add a separate test script, instead of going over a static array.

But let's not worry about this too much right now.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
0a8b3f4e94 Fix basic-posix-acl test output on el9
It turns out that on el9, `bash -c` prints out `bash: line 1: cd..`
instead of `line 0:` on el7 or el8. So discard all the stderr from
these `cd` lines entirely and just rely on the expected echo
output to stdout.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
8a4b0967cb Add fiemap output through scoutfs util.
There's filefrag already, and that works, but its output is very
inconsistent between OS release versions, which has already meant
we've had to adjust tests to account for small but insignificant
changes. That's more work than it's worth, and the output changes
even more in el9.

This adds `scoutfs get-fiemap FILE` and prints out block extent
info with flags that we care about as an abbreviated letter: U for
Unwritten, L for Last, and O for Unknown (as in, "offline").

The -P/--physical and -L/--logical options turn off logical or physical
offset display, in case you only want to see the offsets in either
units. You can pass -b/--byte to display offsets and lengths in
byte values. The block size will then be obtained from fstat() of
the queried file (4096 for scoutfs).

I've removed all uses of filefrag from our scoutfs tests. Xfstests
still calls it but their internal diff takes care of that issue.

Where needed and appropriate, the tests are adjusted so that the output
of `scoutfs get-fiemap` is as close as it can be to what it used to be,
so that reading the test results gives a quick view of what might
have gone wrong.

There are some output strings I have not bothered to update because
there's no real value to updating every output string to match,
and we just adjust the golden file accordingly.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
606c519e96 Simple-staging doesn't actually test overflow.
This isn't a simple case where we can use u64_region_wraps because
length is s32.

Let's actually test an overflow case instead of a case that doesn't
overflow, though. We still should properly add an overflow test here as
well.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
7d0e7e29f8 Avoid integer wrapping pitfalls for (off, len) pairs.
We use check_add_overflow(a, b, d) here to validate that (off, len)
pairs do not exceed the max value type. The kernel conveniently has
several macros to sort out the problems with signed or unsigned types.

However, we're not interested in purely seeing whether (a + b)
overflows, because we're using this for (off, len) overflow checks,
where the bytes we read are from 0 to len - 1. We must therefore call
this check with (b) being "len - 1".

I've made sure that we don't accidentally fail when (len == 0)
in all cases by making sure we've already checked this condition
before, and moving code around as needed to ensure that (len > 0)
in all cases where we check.

The macro check_add_overflow requires a (d) argument in which the
result of the addition is temporarily stored and then checked to see
if an overflow occurred. We put a `tmp` variable of the correct type
on the stack as needed to make the checks work.
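
A minimal sketch of the shape of that check (hypothetical helper; the
real call sites inline it with the appropriate types):

#include <linux/overflow.h>
#include <linux/types.h>

/* true if touching bytes off..off+len-1 would wrap; callers ensure len > 0 */
static bool off_len_wraps(u64 off, u64 len)
{
	u64 tmp;

	return check_add_overflow(off, len - 1, &tmp);
}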

simple-release-extents test mistakenly relied on this buggy wrap code,
so it needs fixing. The move-blocks test also got it wrong.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
69de6d7a74 Check for zero len in scoutfs_data_wait_check
We consistently enter scoutfs_data_wait_check when len == 0 from
scoutfs_aio_write() which directly passes the i_size_read() value,
and for cases where we `echo >> $FILE` this is always reached.

This can cause the wrapping check to fail since `0 + (0 - 1) < 0` which
triggers the WARN_ON_ONCE wrap check that needs updating to allow
certain operations on huge files.

More importantly we can just omit all these checks if `len == 0` anyway,
since they should always succeed and never should require taking all the
locks.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
ac00f5cedb Free after getline(), even if fail, and catch eof() on el9
getline() allocates the space for the return value even if there is an
error, so when it returns an error, we still have to free() it.

In el9, when reading stdin we will get errno=0 returned (no error) when
we hit the end of stdin. This behavior is different from el7/8. We don't
want to throw an error here to avoid failing the test, since it doesn't.
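
A small userspace sketch of that pattern (not the test program's actual
code):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

static int read_one_line(void)
{
	char *line = NULL;
	size_t cap = 0;
	ssize_t n;

	errno = 0;
	n = getline(&line, &cap, stdin);
	if (n < 0) {
		/* el9 returns -1 with errno == 0 at plain EOF; only real errors fail */
		int err = errno ? -1 : 0;

		free(line);	/* getline() may have allocated even on failure */
		return err;
	}

	printf("read: %s", line);
	free(line);
	return 0;
}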

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
6d42d260cf xargs option conflict now a warning in el9
The warnings thrown by el9's version of xargs are unexpected output and
cause this test to fail. When using the -I option (replace) the -n 1
argument is always assumed. In el7/8 no warnings were printed.

We can just remove `-n 1` since the argument is never needed.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
00ebe92186 Add stddef.h to util.h to avoid duplicate offsetof() def.
In el9 releases, our includes declare offsetof() before our header
chain includes stddef.h, which doesn't properly check if offsetof
is already defined, leading to a redefinition. Just include stddef
at all times here.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
570c05898c Correct endian conversion length (blkno is le64)
Trivial correction of wrong bitlength conversion.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
c298360a49 blkdev api changes - pass holder to replace FMODE_EXCL
Passing a holder ptr to these functions now replaces the FMODE_EXCL
flag. _put no longer needs flags for this reason, but the holder
instead.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
95f4e56546 Introduce blk_mode_t instead of abuse of fmode_t
v6.4-rc2-198-g05bdb9965305 adds a new type for passing flags instead
of abusing fmode_t flags. They are essentially the same flags just
in a new type.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
d5c2768f04 .tmpfile method now passed a struct file, which must be opened.
v6.0-rc6-9-g863f144f12ad changes the VFS method to pass in a struct
file and not a dentry in preparation for tmpfile support in fuse.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
676d429264 Assume el9 is the same as el8 for rpmbuild purposes.
The current spec template can't handle future major el releases
gracefully and fails to build entirely. We isolate all changes
so that they are either "el7 specific" or generic. This rids us
entirely of el8 specific conditionals.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
5b260e6b54 block_write_begin() is no longer being passed aop_flags.
The flag is now obsolete, we don't need to set flags here or
pass them.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
e2b06f2c92 mpage_readpage() is now replaced with mpage_read_folio.
Folios are the new data types used for passing pages. For now,
folios only appear to have a single page. Future kernels will
change that.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
546b437df7 Shrinkers are now registered with a name.
v5.19-rc4-52-ge33c267ab70d adds shrinker names to the registration
call to aid with shrinker debugging, which is otherwise highly opaque.

To enable it, you'll have to recompile the kernel with
CONFIG_SHRINKER_DEBUG=y though, since it's disabled by default in
OSV kernels.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
381f4543b7 Use iter based read/write to support splice and thus sendfile().
The iter based read/write calls can support splice in el9 if we
hook up these calls, otherwise splice will stop working.

->write() similar to: v3.15-rc4-330-g8d0207652cbe. ->read() to
generic implementation.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
418a441604 kernel_setsockopt no longer available.
We instead opt to use sock_setsockopt(), which is generally exactly the
same and maps easily onto the old kernel_setsockopt() calls without
impacting the code significantly.

There are 3 calls we make with usec timevals, and those are different
enough now to require a bit more compat code, so we split them out into
separate compat functions to handle them.

Some of the TCP sock functions also have a slightly different signature
(struct socket vs. struct sock), so we split those out as well. Some of
them no longer return a result, either.
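
As a hedged sketch, one of those compat helpers (hypothetical name,
keyed off the same KC_SOCK_SET_SNDTIMEO detection used in the
kernelcompat Makefile; the scoutfs helpers differ) might look like:

#include <linux/net.h>
#include <linux/time.h>
#include <net/sock.h>

/* hypothetical wrapper: set a socket's send timeout in whole seconds */
static void kc_sock_set_sndtimeo(struct socket *sock, long secs)
{
#ifdef KC_SOCK_SET_SNDTIMEO
	sock_set_sndtimeo(sock->sk, secs);
#else
	/* older (el7/el8) kernels where kernel_setsockopt() still exists */
	struct timeval tv = { .tv_sec = secs, .tv_usec = 0 };

	kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
			  (char *)&tv, sizeof(tv));
#endif
}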

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
f3abf9710b generic_perform_write signature changed
It now only needs the iocb and no longer the filp.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
8a45c2baff Deprecate struct timeval.
We switch to using 64bit usec structs and recommended replacement
functions from Documentation/core-api/timekeeping.rst.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
345ebd0876 fiemap_prep replaces fiemap_check_flags.
v5.7-rc4-53-gcddf8a2c4a82

The prep helper replaces the sanity checks.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
b718cf09de Handle idmapped mounts in xattr_handler
In v5.11-rc4-8-ge65ce2a50cf6 the *set handler is passed a
user_namespace struct pointing to the map from the mount.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
e4721366ff Added user_ns argument to posix_acl_update_mode, set_posix_acl
v5.11-rc4-8-ge65ce2a50cf6 adds idmap support to these calls.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
4ef64c6fcf Vfs methods become user namespace mount aware.
v5.11-rc4-24-g549c7297717c

All of these VFS methods are now passed a user_namespace.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
2d58ee2a37 Account for new bio_alloc() args.
Block device and opf are now passed through and set. We mimic compat
code to do the same.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
1f0dd7f025 __vmalloc defaults to PAGE_KERNEL everywhere, so the arg was removed.
v5.7-523-g88dca4ca5a93

__vmalloc no longer has the 3rd argument.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
077468ac1e debugfs_create_atomic_t now returns void, don't check result
Greg KH tells us to do just this in v5.4-rc5-31-g9927c6fa3e1d:

	No one checks the return value of debugfs_create_atomic_t(),
	as it's not needed, so make the return value void, so that no
	one tries to do so in the future.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
c951713ab2 list_cmp_func_t introduced, using const.
v5.12-rc6-9-g4f0f586bf0c8

All list_sort functions use the list_cmp_func_t type, which compares
list_head member types. These are now required to be `const` as the
compiler will now check them. This propagates into our callers.
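
For illustration, a comparison callback with the const-qualified
signature (hypothetical item type, shown only to illustrate the
signature change):

#include <linux/list.h>
#include <linux/list_sort.h>
#include <linux/types.h>

struct item {
	struct list_head head;
	u64 blkno;
};

static int cmp_items(void *priv, const struct list_head *a,
		     const struct list_head *b)
{
	const struct item *ia = list_entry(a, struct item, head);
	const struct item *ib = list_entry(b, struct item, head);

	if (ia->blkno < ib->blkno)
		return -1;
	if (ia->blkno > ib->blkno)
		return 1;
	return 0;
}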

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
ad82a5e52a Squelch warning from bpf_iter.c.
v5.7-rc2-1174-gfd4f12bc38c3 significantly rewrites the bpf iterator
which hits this _next() function. It also adds a check that verifies
that the *pos is incremented after every call, even if it goes beyond
the last member (in which case it's not used).

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
d3c5328909 setattr_prepare no longer extern in fs.h
v5.11-rc4-7-g2f221d6f7b88 changes setattr_prepare from an extern
declaration to a plain int. There's no further impact on the compat
that keeps it working, except for the detection regex.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
c30172210f Use blk_opf_t to pass bio op flags
Compat is back to unsigned int.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
19af6e28fb "unaligned/access_ok.h" is not needed, and removed.
Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
8885486bc8 Add several low level includes.
Newer kernels include fewer header dependencies by default, so we have
to add these.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
0204e092e4 FIELD_SIZEOF was deprecated.
We could use sizeof_field as a direct replacement (which is the same)
except that this entire thing can directly use offsetofend().
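
As a quick illustration with a hypothetical header struct (not the
scoutfs one):

#include <linux/stddef.h>	/* offsetofend() */
#include <linux/types.h>

struct hdr {
	__le32 crc;
	__le64 seq;
};

/* first byte after the crc field, i.e. where checksumming starts */
static inline int crc_data_offset(void)
{
	/* == offsetof(struct hdr, crc) + sizeof_field(struct hdr, crc) */
	return offsetofend(struct hdr, crc);
}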

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Zach Brown
3b816cfd01 Merge pull request #183 from versity/auke/setattr_more_data_version_zero
Don't pass data version to attr_x unless the ioctl means to set it.
2024-10-03 12:25:46 -07:00
Auke Kok
b45fbe0bbb Don't pass data version to attr_x unless the ioctl means to set it.
The wrapper in setattr_more that translates the operations to attr_x
needs to decide whether to ask attr_x to perform a change to any of
the fields passed to it or not. For the date and size fields this
is implicit - we always tell attr_x to change them. For any of the
other fields, it should be explicit.

The only field that is in the struct that this applies to is
data_version. Because the data version field by default is zero,
we use that as condition to decide whether to pass the data_version
down to attr_x.

Previously, the code would always pass a data_version=0 down to attr_x,
triggering one of the validity checks, making it return -EINVAL. We
add a simple test case to test for this issue.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-09-27 19:31:22 -04:00
Zach Brown
c5d9b93b96 Merge pull request #187 from versity/auke/sparsefilt
Sparse fix for epel 0.6.4 sparse - redefines
2024-09-27 14:13:28 -07:00
Zach Brown
2984f4d3a8 Merge pull request #189 from versity/greg/el9-spec-fix
Use path-inspecific weak-modules (EL9 fix)
2024-09-27 14:05:40 -07:00
Auke Kok
3b8d2eab8e Sparse fix for epel 0.6.4 sparse - redefines
We should rely on sparse from epel to do automated sparse checking and
not a git tag. But the 0.6.4 build currently fails on sparse/gcc
redefines.

This magic awk script from Zach processes the sparse and gcc internal
defines and leaves intact the one that sparse doesn't have.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-09-27 15:37:47 -04:00
Greg Cymbalski
4dde57dc27 Rely on $PATH for weak-modules
This avoids having to deal with EL-specific path differences for
the weak-modules script.
2024-09-27 12:30:25 -07:00
Zach Brown
a4be74f4b1 Merge pull request #182 from versity/auke/write_test_name_to_kmsg
Write to kmsg which test we're executing.
2024-09-27 09:09:36 -07:00
Auke Kok
9d8ac2c7d7 Write to kmsg which test we're executing.
xfstests does this, and it makes it so much easier to follow what is
going on from logs or e.g. a serial console that I thought I should do
the same for scoutfs tests. When running a bunch of tests and looking
back at the logs later, it's much easier to discern which test may have
caused an issue.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-08-28 14:36:55 -07:00
123 changed files with 7228 additions and 2137 deletions

View File

@@ -1,6 +1,100 @@
Versity ScoutFS Release Notes
=============================
---
v1.26
\
*Nov 17, 2025*
Add the ino\_alloc\_per\_lock mount option. This changes the number of
inode numbers allocated under each cluster lock and can alleviate lock
contention for some patterns of larger file creation.
Add the tcp\_keepalive\_timeout\_ms mount option. This can enable the
system to survive longer periods of networking outages.
Fix a rare double free of internal btree metadata blocks when merging
log trees. The duplicated freed metadata block numbers would cause
persistent errors in the server, preventing the server from starting and
hanging the system.
Fix the data\_wait interface to not require the correct data\_version of
the inode when raising an error. This lets callers raise errors when
they're unable to recall the details of the inode to discover its
data\_version.
Change scoutfs to more aggressively reclaim cached memory when under
memory pressure. This makes scoutfs behave more like other kernel
components and it integrates better with the reclaim policy heuristics
in the VM core of the kernel.
Change scoutfs to more efficiently transmit and receive socket messages.
Under heavy load this can process messages quickly enough to
avoid hung task messages for tasks that were waiting for cluster lock
messages to be processed.
Fix faulty server block commit budget calculations that were generating
spurious "holders exceeded alloc budget" console messages.
---
v1.25
\
*Jun 3, 2025*
Fix a bug that could cause indefinite retries of failed client commits.
Under specific error conditions the client and server's understanding of
the current client commit could get out of sync. The client would retry
commits indefinitely that could never succeed. This manifested as
infinite "critical transaction commit failure" messages in the kernel
log on the client and matching "error <nr> committing client logs" on
the server.
Fix a bug in a specific case of server error handling that could result
in sending references to unwritten blocks to the client. The client
would try to read blocks that hadn't been written and return spurious
errors. This was seen under low free space conditions on the server and
resulted in error messages with error code 116 (the errno value for
ESTALE, the client's indication that it couldn't read the blocks that it
expected).
---
v1.24
\
*Mar 14, 2025*
Add support for coherent read and write mmap() mappings of regular file
data between mounts.
Fix a bug that was causing scoutfs utilities to parse and change some
file names before passing them on to the kernel for processing. This
fixes spurious scoutfs command errors for files with the offending
patterns in their names.
Fix a bug where rename wasn't updating the ctime of the inode at the
destination name if it existed.
---
v1.23
\
*Dec 11, 2024*
Add support for kernels in the RHEL 9.5 minor release.
---
v1.22
\
*Nov 1, 2024*
Add support for building against the RHEL9 family of kernels.
Fix failure of the setattr\_more ioctl() to set the attributes of a
zero-length file when restoring.
Fix support for POSIX ACLs in the RHEL8 and later family of kernels.
Fix a race condition in the lock server that could drop lock requests
under heavy load and cause cluster lock attempts to hang.
---
v1.21
\

View File

@@ -5,13 +5,6 @@ ifeq ($(SK_KSRC),)
SK_KSRC := $(shell echo /lib/modules/`uname -r`/build)
endif
# fail if sparse fails if we find it
ifeq ($(shell sparse && echo found),found)
SP =
else
SP = @:
endif
SCOUTFS_GIT_DESCRIBE ?= \
$(shell git describe --all --abbrev=6 --long 2>/dev/null || \
echo no-git)
@@ -36,9 +29,7 @@ TARFILE = scoutfs-kmod-$(RPM_VERSION).tar
all: module
module:
$(MAKE) $(SCOUTFS_ARGS)
$(SP) $(MAKE) C=2 CF="-D__CHECK_ENDIAN__" $(SCOUTFS_ARGS)
$(MAKE) CHECK=$(CURDIR)/src/sparse-filtered.sh C=1 CF="-D__CHECK_ENDIAN__" $(SCOUTFS_ARGS)
modules_install:
$(MAKE) $(SCOUTFS_ARGS) modules_install

View File

@@ -4,18 +4,13 @@
%define kmod_git_describe @@GITDESCRIBE@@
%define pkg_date %(date +%%Y%%m%%d)
# Disable the building of the debug package(s).
%define debug_package %{nil}
# take kernel version or default to uname -r
%{!?kversion: %global kversion %(uname -r)}
%global kernel_version %{kversion}
%global sanitized_kversion %(echo %{kernel_version} | sed -e 's/-/_/g' -e 's/\.el.*//')
%if 0%{?el7}
%global kernel_source() /usr/src/kernels/%{kernel_version}.$(arch)
%endif
%if 0%{?el8}
%else
%global kernel_source() /usr/src/kernels/%{kernel_version}
%endif
@@ -23,21 +18,19 @@
%if 0%{?el7}
Name: %{kmod_name}
%endif
%if 0%{?el8}
%else
Name: kmod-%{kmod_name}
%endif
Summary: %{kmod_name} kernel module
Version: %{kmod_version}
Release: %{_release}+%{sanitized_kversion}%{?dist}
Release: %{_release}%{?dist}
License: GPLv2
Group: System/Kernel
URL: http://scoutfs.org/
%if 0%{?el7}
BuildRequires: %{kernel_module_package_buildreqs}
%endif
%if 0%{?el8}
%else
BuildRequires: elfutils-libelf-devel
%endif
BuildRequires: kernel-devel-uname-r = %{kernel_version}
@@ -55,10 +48,23 @@ Source: %{kmod_name}-kmod-%{kmod_version}.tar
%endif
%global install_mod_dir extra/%{kmod_name}
%if 0%{?el8}
%if ! 0%{?el7}
%global flavors_to_build x86_64
%endif
# el9 sanity: make sure we lock to the minor release we built for and block upgrades
%{lua:
if string.match(rpm.expand("%{dist}"), "%.el9") then
rpm.define("el9 1")
end
}
%if 0%{?el9}
%define release_major_minor 9.%{lua: print(rpm.expand("%{dist}"):match("%.el9_(%d)"))}
Requires: system-release = %{release_major_minor}
%endif
%description
%{kmod_name} - kernel module
@@ -94,7 +100,7 @@ done
# mark modules executable so that strip-to-file can strip them
find %{buildroot} -type f -name \*.ko -exec %{__chmod} u+x \{\} \;
%if 0%{?el8}
%if ! 0%{?el7}
%files
/lib/modules
@@ -112,8 +118,5 @@ SCOUTFS_RPM_NAME=$(rpm -q %{name} | grep "%{version}-%{release}")
rpm -ql $SCOUTFS_RPM_NAME | grep '\.ko$' > /var/run/%{name}-modules-%{version}-%{release} || true
%postun
if [ -x /sbin/weak-modules ]; then
cat /var/run/%{name}-modules-%{version}-%{release} | /sbin/weak-modules --remove-modules --no-initramfs
fi
cat /var/run/%{name}-modules-%{version}-%{release} | weak-modules --remove-modules --no-initramfs
rm /var/run/%{name}-modules-%{version}-%{release} || true

View File

@@ -6,26 +6,6 @@
ccflags-y += -include $(src)/kernelcompat.h
#
# v3.10-rc6-21-gbb6f619b3a49
#
# _readdir changes from fop->readdir() to fop->iterate() and from
# filldir(dirent) to dir_emit(ctx).
#
ifneq (,$(shell grep 'iterate.*dir_context' include/linux/fs.h))
ccflags-y += -DKC_ITERATE_DIR_CONTEXT
endif
#
# v3.10-rc6-23-g5f99f4e79abc
#
# Helpers including dir_emit_dots() are added in the process of
# switching dcache_readdir() from fop->readdir() to fop->iterate()
#
ifneq (,$(shell grep 'dir_emit_dots' include/linux/fs.h))
ccflags-y += -DKC_DIR_EMIT_DOTS
endif
#
# v3.18-rc2-19-gb5ae6b15bd73
#
@@ -78,8 +58,9 @@ endif
# v4.8-rc1-29-g31051c85b5e2
#
# inode_change_ok() removed - replace with setattr_prepare()
# v5.11-rc4-7-g2f221d6f7b88 removes extern attribute
#
ifneq (,$(shell grep 'extern int setattr_prepare' include/linux/fs.h))
ifneq (,$(shell grep 'int setattr_prepare' include/linux/fs.h))
ccflags-y += -DKC_SETATTR_PREPARE
endif
@@ -177,21 +158,12 @@ ifneq (,$(shell grep 'sock_create_kern.*struct net' include/linux/net.h))
ccflags-y += -DKC_SOCK_CREATE_KERN_NET=1
endif
#
# v3.18-rc6-1619-gc0371da6047a
#
# iov_iter is now part of struct msghdr
#
ifneq (,$(shell grep 'struct iov_iter.*msg_iter' include/linux/socket.h))
ccflags-y += -DKC_MSGHDR_STRUCT_IOV_ITER=1
endif
#
# v4.17-rc6-7-g95582b008388
#
# Kernel has current_time(inode) to uniformly retrieve timespec in the right unit
#
ifneq (,$(shell grep 'extern struct timespec64 current_time' include/linux/fs.h))
ifneq (,$(shell grep 'struct timespec64 current_time' include/linux/fs.h))
ccflags-y += -DKC_CURRENT_TIME_INODE=1
endif
@@ -258,3 +230,259 @@ endif
ifneq (,$(shell grep 'static inline const char .xattr_prefix' include/linux/xattr.h))
ccflags-y += -DKC_XATTR_HANDLER_NAME=1
endif
#
# v5.19-rc4-96-g342a72a33407
#
# Adds `typedef __u32 __bitwise blk_opf_t` to aid flag checking
ifneq (,$(shell grep 'typedef __u32 __bitwise blk_opf_t' include/linux/blk_types.h))
ccflags-y += -DKC_HAVE_BLK_OPF_T=1
endif
#
# v5.12-rc6-9-g4f0f586bf0c8
#
# list_sort cmp function takes const list_head args
ifneq (,$(shell grep 'const struct list_head ., const struct list_head .' include/linux/list_sort.h))
ccflags-y += -DKC_LIST_CMP_CONST_ARG_LIST_HEAD
endif
# v5.7-523-g88dca4ca5a93
#
# The pgprot argument to vmalloc is always PAGE_KERNEL, so it is removed.
ifneq (,$(shell grep 'extern void .__vmalloc.unsigned long size, gfp_t gfp_mask, pgprot_t prot' include/linux/vmalloc.h))
ccflags-y += -DKC_VMALLOC_PGPROT_T
endif
# v6.2-rc1-18-g01beba7957a2
#
# fs: port inode_owner_or_capable() to mnt_idmap
ifneq (,$(shell grep 'bool inode_owner_or_capable.struct user_namespace .mnt_userns' include/linux/fs.h))
ccflags-y += -DKC_INODE_OWNER_OR_CAPABLE_USERNS
endif
#
# v5.11-rc4-5-g47291baa8ddf
#
# namei: make permission helpers idmapped mount aware
ifneq (,$(shell grep 'int inode_permission.struct user_namespace' include/linux/fs.h))
ccflags-y += -DKC_INODE_PERMISSION_USERNS
endif
#
# v5.11-rc4-24-g549c7297717c
#
# fs: make helpers idmap mount aware
# Enlarges the VFS API methods to include user namespace argument.
ifneq (,$(shell grep 'int ..mknod. .struct user_namespace' include/linux/fs.h))
ccflags-y += -DKC_VFS_METHOD_USER_NAMESPACE_ARG
endif
#
# v6.2-rc1-2-gabf08576afe3
#
# fs: vfs methods use struct mnt_idmap instead of struct user_namespace
ifneq (,$(shell grep 'int vfs_mknod.struct mnt_idmap' include/linux/fs.h))
ccflags-y += -DKC_VFS_METHOD_MNT_IDMAP_ARG
endif
#
# v5.17-rc2-21-g07888c665b40
#
# Detect new style bio_alloc - pass bdev and opf.
ifneq (,$(shell grep 'struct bio .bio_alloc.struct block_device .bdev' include/linux/bio.h))
ccflags-y += -DKC_BIO_ALLOC_DEV_OPF_ARGS
endif
#
# v5.7-rc4-53-gcddf8a2c4a82
#
# fiemap_prep() replaces fiemap_check_flags()
ifneq (,$(shell grep -s 'int fiemap_prep.struct inode' include/linux/fiemap.h))
ccflags-y += -DKC_FIEMAP_PREP
endif
#
# v5.17-13043-g800ba29547e1
#
# generic_perform_write args use kiocb for passing filp and pos
ifneq (,$(shell grep 'ssize_t generic_perform_write.struct kiocb ., struct iov_iter' include/linux/fs.h))
ccflags-y += -DKC_GENERIC_PERFORM_WRITE_KIOCB_IOV_ITER
endif
#
# v5.7-rc6-2496-g76ee0785f42a
#
# net: add sock_set_sndtimeo
ifneq (,$(shell grep 'void sock_set_sndtimeo.struct sock' include/net/sock.h))
ccflags-y += -DKC_SOCK_SET_SNDTIMEO
endif
#
# v5.8-rc4-1931-gba423fdaa589
#
# setsockopt functions are now passed a sockptr_t value instead of char*
ifneq (,$(shell grep -s 'include .linux/sockptr.h.' include/linux/net.h))
ccflags-y += -DKC_SETSOCKOPT_SOCKPTR_T
endif
#
# v5.7-rc6-2507-g71c48eb81c9e
#
# Adds a bunch of low level TCP sock parameter functions that we want to use.
ifneq (,$(shell grep 'int tcp_sock_set_keepintvl' include/linux/tcp.h))
ccflags-y += -DKC_HAVE_TCP_SET_SOCKFN
endif
#
# v4.16-rc3-13-ga84d1169164b
#
# Fixes y2038 issues with struct timeval.
ifneq (,$(shell grep -s '^struct __kernel_old_timeval .' include/uapi/linux/time_types.h))
ccflags-y += -DKC_KERNEL_OLD_TIMEVAL_STRUCT
endif
#
# v5.19-rc4-52-ge33c267ab70d
#
# register_shrinker now requires a name, used for debug stats etc.
ifneq (,$(shell grep 'int __printf.*register_shrinker.struct shrinker .shrinker,' include/linux/shrinker.h))
ccflags-y += -DKC_SHRINKER_NAME
endif
#
# v5.18-rc5-246-gf132ab7d3ab0
#
# mpage_readpage() is now replaced with mpage_read_folio.
ifneq (,$(shell grep 'int mpage_read_folio.struct folio .folio' include/linux/mpage.h))
ccflags-y += -DKC_MPAGE_READ_FOLIO
endif
#
# v5.18-rc5-219-gb3992d1e2ebc
#
# block_write_begin() no longer is being passed aop_flags
ifneq (,$(shell grep -C1 'int block_write_begin' include/linux/buffer_head.h | tail -n 2 | grep 'unsigned flags'))
ccflags-y += -DKC_BLOCK_WRITE_BEGIN_AOP_FLAGS
endif
#
# v6.0-rc6-9-g863f144f12ad
#
# the .tmpfile() vfs method calling convention changed and now a struct
# file* is passed to this method instead of a dentry. The function also
# should open the created file and call finish_open_simple() before returning.
ifneq (,$(shell grep 'extern void d_tmpfile.struct dentry' include/linux/dcache.h))
ccflags-y += -DKC_D_TMPFILE_DENTRY
endif
#
# v6.4-rc2-201-g0733ad800291
#
# New blk_mode_t replaces abuse of fmode_t
ifneq (,$(shell grep 'typedef unsigned int __bitwise blk_mode_t' include/linux/blkdev.h))
ccflags-y += -DKC_HAVE_BLK_MODE_T
endif
#
# v6.4-rc2-186-g2736e8eeb0cc
#
# Reworks FMODE_EXCL kludge and instead modifies the blkdev_put() call to pass in
# the (exclusive) holder to implement FMODE_EXCL handling.
ifneq (,$(shell grep 'blkdev_put.struct block_device .bdev, void .holder' include/linux/blkdev.h))
ccflags-y += -DKC_BLKDEV_PUT_HOLDER_ARG
endif
#
# v6.4-rc4-163-g0d625446d0a4
#
# Entirely removes current->backing_dev_info to ultimately remove buffer_head
# completely at some point.
ifneq (,$(shell grep 'struct backing_dev_info.*backing_dev_info;' include/linux/sched.h))
ccflags-y += -DKC_CURRENT_BACKING_DEV_INFO
endif
#
# v6.8-rc1-4-gf3a608827d1f
#
# adds bdev_file_open_by_path() and later in v6.8-rc1-30-ge97d06a46526 removes bdev_open_by_path()
# which requires us to use the file method from now on.
ifneq (,$(shell grep 'struct file.*bdev_file_open_by_path.const char.*path' include/linux/blkdev.h))
ccflags-y += -DKC_BDEV_FILE_OPEN_BY_PATH
endif
# v4.0-rc7-1796-gfe0f07d08ee3
#
# direct-io changes modify inode_dio_done to now be called inode_dio_end
ifneq (,$(shell grep 'void inode_dio_end.struct inode' include/linux/fs.h))
ccflags-y += -DKC_INODE_DIO_END
endif
#
# v5.0-6476-g3d3539018d2c
#
# page fault handlers return a bitmask vm_fault_t instead
# Note: el8's header has a slightly modified prefix here
ifneq (,$(shell grep 'typedef.*__bitwise unsigned.*int vm_fault_t' include/linux/mm_types.h))
ccflags-y += -DKC_MM_VM_FAULT_T
endif
# v3.19-499-gd83a08db5ba6
#
# .remap pages becomes obsolete
ifneq (,$(shell grep 'int ..remap_pages..struct vm_area_struct' include/linux/mm.h))
ccflags-y += -DKC_MM_REMAP_PAGES
endif
#
# v3.19-4742-g503c358cf192
#
# list_lru_shrink_count() and list_lru_shrink_walk() introduced
#
ifneq (,$(shell grep 'list_lru_shrink_count.*struct list_lru' include/linux/list_lru.h))
ccflags-y += -DKC_LIST_LRU_SHRINK_COUNT_WALK
endif
#
# v3.19-4757-g3f97b163207c
#
# lru_list_walk_cb lru arg added
#
ifneq (,$(shell grep 'struct list_head \*item, spinlock_t \*lock, void \*cb_arg' include/linux/list_lru.h))
ccflags-y += -DKC_LIST_LRU_WALK_CB_ITEM_LOCK
endif
#
# v6.7-rc4-153-g0a97c01cd20b
#
# list_lru_{add,del} -> list_lru_{add,del}_obj
#
ifneq (,$(shell grep '^bool list_lru_add_obj' include/linux/list_lru.h))
ccflags-y += -DKC_LIST_LRU_ADD_OBJ
endif
#
# v6.12-rc6-227-gda0c02516c50
#
# lru_list_walk_cb lock arg removed
#
ifneq (,$(shell grep 'struct list_lru_one \*list, spinlock_t \*lock, void \*cb_arg' include/linux/list_lru.h))
ccflags-y += -DKC_LIST_LRU_WALK_CB_LIST_LOCK
endif
#
# v5.1-rc4-273-ge9b98e162aa5
#
# introduce stack trace helpers
#
ifneq (,$(shell grep '^unsigned int stack_trace_save' include/linux/stacktrace.h))
ccflags-y += -DKC_STACK_TRACE_SAVE
endif
# v6.1-rc1-4-g7420332a6ff4
#
# .get_acl() method now has dentry arg (and mnt_idmap). The old get_acl has been renamed
# to get_inode_acl() and is still available as well, but has an extra rcu param.
ifneq (,$(shell grep 'struct posix_acl ...get_acl..struct mnt_idmap ., struct dentry' include/linux/fs.h))
ccflags-y += -DKC_GET_ACL_DENTRY
endif

View File

@@ -107,8 +107,15 @@ struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct s
return acl;
}
#ifdef KC_GET_ACL_DENTRY
struct posix_acl *scoutfs_get_acl(KC_VFS_NS_DEF
struct dentry *dentry, int type)
{
struct inode *inode = dentry->d_inode;
#else
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type)
{
#endif
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
struct posix_acl *acl;
@@ -153,7 +160,8 @@ int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
switch (type) {
case ACL_TYPE_ACCESS:
if (acl) {
ret = posix_acl_update_mode(inode, &new_mode, &acl);
ret = posix_acl_update_mode(KC_VFS_INIT_NS
inode, &new_mode, &acl);
if (ret < 0)
goto out;
set_mode = true;
@@ -200,8 +208,15 @@ out:
return ret;
}
#ifdef KC_GET_ACL_DENTRY
int scoutfs_set_acl(KC_VFS_NS_DEF
struct dentry *dentry, struct posix_acl *acl, int type)
{
struct inode *inode = dentry->d_inode;
#else
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
{
#endif
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
LIST_HEAD(ind_locks);
@@ -239,7 +254,12 @@ int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value,
if (!IS_POSIXACL(dentry->d_inode))
return -EOPNOTSUPP;
#ifdef KC_GET_ACL_DENTRY
acl = scoutfs_get_acl(KC_VFS_INIT_NS
dentry, type);
#else
acl = scoutfs_get_acl(dentry->d_inode, type);
#endif
if (IS_ERR(acl))
return PTR_ERR(acl);
if (acl == NULL)
@@ -252,7 +272,9 @@ int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value,
}
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_set_xattr(const struct xattr_handler *handler, struct dentry *dentry,
int scoutfs_acl_set_xattr(const struct xattr_handler *handler,
KC_VFS_NS_DEF
struct dentry *dentry,
struct inode *inode, const char *name, const void *value,
size_t size, int flags)
{
@@ -265,7 +287,7 @@ int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *v
struct posix_acl *acl = NULL;
int ret;
if (!inode_owner_or_capable(dentry->d_inode))
if (!inode_owner_or_capable(KC_VFS_INIT_NS dentry->d_inode))
return -EPERM;
if (!IS_POSIXACL(dentry->d_inode))
@@ -283,7 +305,11 @@ int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *v
}
}
#ifdef KC_GET_ACL_DENTRY
ret = scoutfs_set_acl(KC_VFS_INIT_NS dentry, acl, type);
#else
ret = scoutfs_set_acl(dentry->d_inode, acl, type);
#endif
out:
posix_acl_release(acl);

View File

@@ -1,16 +1,23 @@
#ifndef _SCOUTFS_ACL_H_
#define _SCOUTFS_ACL_H_
#ifdef KC_GET_ACL_DENTRY
struct posix_acl *scoutfs_get_acl(KC_VFS_NS_DEF struct dentry *dentry, int type);
int scoutfs_set_acl(KC_VFS_NS_DEF struct dentry *dentry, struct posix_acl *acl, int type);
#else
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type);
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock);
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type);
#endif
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock);
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
struct scoutfs_lock *lock, struct list_head *ind_locks);
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_get_xattr(const struct xattr_handler *, struct dentry *dentry,
struct inode *inode, const char *name, void *value,
size_t size);
int scoutfs_acl_set_xattr(const struct xattr_handler *, struct dentry *dentry,
int scoutfs_acl_set_xattr(const struct xattr_handler *,
KC_VFS_NS_DEF
struct dentry *dentry,
struct inode *inode, const char *name, const void *value,
size_t size, int flags);
#else

View File

@@ -14,6 +14,7 @@
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/blkdev.h>
#include <linux/sort.h>
#include <linux/random.h>
@@ -85,18 +86,47 @@ static u64 smallest_order_length(u64 len)
}
/*
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
* Moving an extent between trees can dirty blocks in several ways. This
* function calculates the worst case number of blocks across these scenarios.
* We treat the alloc and free counts independently, so the values below are
* max(allocated, freed), not the sum.
*
* We track extents with two separate btree items: by block number and by size.
*
* If we're removing an extent from the btree (allocating), we can dirty
* two blocks if the keys are in different leaves. If we wind up merging
* leaves because we fall below the low water mark, we can wind up freeing
* three leaves.
*
* That sequence is as follows, assuming the original keys are removed from
* blocks A and B:
*
* Allocate new dirty A' and B'
* Free old stable A and B
* B' has fallen below the low water mark, so copy B' into A'
* Free B'
*
* An extent insertion (freeing an extent) can dirty up to five distinct items
* in the btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent:
*
* In the by-blkno portion of the btree, we can dirty (allocate for COW) up
* to two blocks - either by merging adjacent extents, which can cause us to
* join leaf blocks; or by an insertion that causes a split.
*
* In the by-size portion, we never merge extents, so normally we just dirty
* a single item with a size insertion. But if we merged adjacent extents in
* the by-blkno portion of the tree, we might be working with three by-size
* items: removing the two old ones that were combined in the merge; and
* adding the new one for the larger, merged size.
*
* Finally, dirtying the paths to these leaves can grow the tree and grow/shrink
* neighbours at each level, so we multiply by the height of the tree after
* accounting for a possible new level.
*/
static u32 extent_mod_blocks(u32 height)
{
return ((1 + height) * 2) * 3;
return ((1 + height) * 3) * 5;
}
/*
@@ -827,7 +857,7 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
.zone = SCOUTFS_FREE_EXTENT_ORDER_ZONE,
};
struct scoutfs_extent found;
struct scoutfs_extent ext;
struct scoutfs_extent ext = {0,};
u64 start;
u64 len;
int nr;

View File

@@ -22,6 +22,8 @@
#include <linux/rhashtable.h>
#include <linux/random.h>
#include <linux/sched/mm.h>
#include <linux/list_lru.h>
#include <linux/stacktrace.h>
#include "format.h"
#include "super.h"
@@ -38,26 +40,12 @@
* than the page size. Callers can have their own contexts for tracking
* dirty blocks that are written together. We pin dirty blocks in
* memory and only checksum them all as they're all written.
*
* Memory reclaim is driven by maintaining two very coarse groups of
* blocks. As we access blocks we mark them with an increasing counter
* to discourage them from being reclaimed. We then define a threshold
* at the current counter minus half the population. Recent blocks have
* a counter greater than the threshold, and all other blocks with
* counters less than it are considered older and are candidates for
* reclaim. This results in access updates rarely modifying an atomic
* counter as blocks need to be moved into the recent group, and shrink
* can randomly scan blocks looking for the half of the population that
* will be in the old group. It's reasonably effective, but is
* particularly efficient and avoids contention between concurrent
* accesses and shrinking.
*/
struct block_info {
struct super_block *sb;
atomic_t total_inserted;
atomic64_t access_counter;
struct rhashtable ht;
struct list_lru lru;
wait_queue_head_t waitq;
KC_DEFINE_SHRINKER(shrinker);
struct work_struct free_work;
@@ -76,28 +64,15 @@ enum block_status_bits {
BLOCK_BIT_PAGE_ALLOC, /* page (possibly high order) allocation */
BLOCK_BIT_VIRT, /* mapped virt allocation */
BLOCK_BIT_CRC_VALID, /* crc has been verified */
BLOCK_BIT_ACCESSED, /* seen by lookup since last lru add/walk */
};
/*
* We want to tie atomic changes in refcounts to whether or not the
* block is still visible in the hash table, so we store the hash
* table's reference up at a known high bit. We could naturally set the
* inserted bit through excessive refcount increments. We don't do
* anything about that but at least warn if we get close.
*
* We're avoiding the high byte for no real good reason, just out of a
* historical fear of implementations that don't provide the full
* precision.
*/
#define BLOCK_REF_INSERTED (1U << 23)
#define BLOCK_REF_FULL (BLOCK_REF_INSERTED >> 1)
struct block_private {
struct scoutfs_block bl;
struct super_block *sb;
atomic_t refcount;
u64 accessed;
struct rhash_head ht_head;
struct list_head lru_head;
struct list_head dirty_entry;
struct llist_node free_node;
unsigned long bits;
@@ -106,13 +81,15 @@ struct block_private {
struct page *page;
void *virt;
};
unsigned int stack_len;
unsigned long stack[10];
};
#define TRACE_BLOCK(which, bp) \
do { \
__typeof__(bp) _bp = (bp); \
trace_scoutfs_block_##which(_bp->sb, _bp, _bp->bl.blkno, atomic_read(&_bp->refcount), \
atomic_read(&_bp->io_count), _bp->bits, _bp->accessed); \
atomic_read(&_bp->io_count), _bp->bits); \
} while (0)
#define BLOCK_PRIVATE(_bl) \
@@ -120,14 +97,23 @@ do { \
static __le32 block_calc_crc(struct scoutfs_block_header *hdr, u32 size)
{
int off = offsetof(struct scoutfs_block_header, crc) +
FIELD_SIZEOF(struct scoutfs_block_header, crc);
int off = offsetofend(struct scoutfs_block_header, crc);
u32 calc = crc32c(~0, (char *)hdr + off, size - off);
return cpu_to_le32(calc);
}
static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
static noinline void save_block_stack(struct block_private *bp)
{
bp->stack_len = stack_trace_save(bp->stack, ARRAY_SIZE(bp->stack), 2);
}
static void print_block_stack(struct block_private *bp)
{
stack_trace_print(bp->stack, bp->stack_len, 1);
}
static noinline struct block_private *block_alloc(struct super_block *sb, u64 blkno)
{
struct block_private *bp;
unsigned int nofs_flags;
@@ -159,7 +145,7 @@ static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
*/
lockdep_off();
nofs_flags = memalloc_nofs_save();
bp->virt = __vmalloc(SCOUTFS_BLOCK_LG_SIZE, GFP_NOFS | __GFP_HIGHMEM, PAGE_KERNEL);
bp->virt = kc__vmalloc(SCOUTFS_BLOCK_LG_SIZE, GFP_NOFS | __GFP_HIGHMEM);
memalloc_nofs_restore(nofs_flags);
lockdep_on();
@@ -177,11 +163,13 @@ static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
bp->bl.blkno = blkno;
bp->sb = sb;
atomic_set(&bp->refcount, 1);
INIT_LIST_HEAD(&bp->lru_head);
INIT_LIST_HEAD(&bp->dirty_entry);
set_bit(BLOCK_BIT_NEW, &bp->bits);
atomic_set(&bp->io_count, 0);
TRACE_BLOCK(allocate, bp);
save_block_stack(bp);
out:
if (!bp)
@@ -234,32 +222,85 @@ static void block_free_work(struct work_struct *work)
}
/*
* Get a reference to a block while holding an existing reference.
* Users of blocks hold a refcount. If putting a refcount drops to zero
* then the block is freed.
*
* Acquiring new references and claiming the exclusive right to tear
* down a block is built around this LIVE_REFCOUNT_BASE refcount value.
* As blocks are initially cached they have the live base added to their
* refcount. Lookups will only increment the refcount and return blocks
* for reference holders while the refcount is >= than the base.
*
* To remove a block from the cache and eventually free it, either by
* the lru walk in the shrinker, or by reference holders, the live base
* is removed and turned into a normal refcount increment that will be
* put by the caller. This can only be done once for a block, and once
* its done lookup will not return any more references.
*/
#define LIVE_REFCOUNT_BASE (INT_MAX ^ (INT_MAX >> 1))
/*
* Inc the refcount while holding an incremented refcount. We can't
* have so many individual reference holders that they pass the live
* base.
*/
static void block_get(struct block_private *bp)
{
WARN_ON_ONCE((atomic_read(&bp->refcount) & ~BLOCK_REF_INSERTED) <= 0);
int now = atomic_inc_return(&bp->refcount);
atomic_inc(&bp->refcount);
BUG_ON(now <= 1);
BUG_ON(now == LIVE_REFCOUNT_BASE);
}
/*
* Get a reference to a block as long as it's been inserted in the hash
* table and hasn't been removed.
*/
static struct block_private *block_get_if_inserted(struct block_private *bp)
/*
* Add @a to the atomic value @v unless it is currently less than @u:
*
* if (*v >= u) {
* *v += a;
* return true;
* }
*/
static bool atomic_add_unless_less(atomic_t *v, int a, int u)
{
int cnt;
int c;
do {
cnt = atomic_read(&bp->refcount);
WARN_ON_ONCE(cnt & BLOCK_REF_FULL);
if (!(cnt & BLOCK_REF_INSERTED))
return NULL;
c = atomic_read(v);
if (c < u)
return false;
} while (atomic_cmpxchg(v, c, c + a) != c);
} while (atomic_cmpxchg(&bp->refcount, cnt, cnt + 1) != cnt);
return true;
}
return bp;
static bool block_get_if_live(struct block_private *bp)
{
return atomic_add_unless_less(&bp->refcount, 1, LIVE_REFCOUNT_BASE);
}
/*
* If the refcount still has the live base, subtract it and increment
* the callers refcount that they'll put.
*/
static bool block_get_remove_live(struct block_private *bp)
{
return atomic_add_unless_less(&bp->refcount, (1 - LIVE_REFCOUNT_BASE), LIVE_REFCOUNT_BASE);
}
/*
* Only get the live base refcount if it is the only refcount remaining.
* This means that there are no active refcount holders and the block
* can't be dirty or under IO, which both hold references.
*/
static bool block_get_remove_live_only(struct block_private *bp)
{
int c;
do {
c = atomic_read(&bp->refcount);
if (c != LIVE_REFCOUNT_BASE)
return false;
} while (atomic_cmpxchg(&bp->refcount, c, c - LIVE_REFCOUNT_BASE + 1) != c);
return true;
}
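All three helpers above are conditional refcount transitions built on the same compare-and-swap retry loop. A userspace sketch of that pattern with C11 atomics, where atomic_compare_exchange_weak() plays the role of the kernel's atomic_cmpxchg() (names are illustrative):
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* add a to v only while the current value is >= u */
static bool add_unless_less(atomic_int *v, int a, int u)
{
	int c = atomic_load(v);

	do {
		if (c < u)
			return false;
		/* on failure, c is reloaded with the current value */
	} while (!atomic_compare_exchange_weak(v, &c, c + a));

	return true;
}

int main(void)
{
	atomic_int ref = 5;

	printf("%d %d\n", add_unless_less(&ref, 1, 3), atomic_load(&ref));	/* 1 6 */
	printf("%d %d\n", add_unless_less(&ref, 1, 100), atomic_load(&ref));	/* 0 6 */
	return 0;
}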
/*
@@ -291,104 +332,73 @@ static const struct rhashtable_params block_ht_params = {
};
/*
* Insert a new block into the hash table. Once it is inserted in the
* hash table readers can start getting references. The caller may have
* multiple refs but the block can't already be inserted.
* Insert the block into the cache so that it's visible for lookups.
* The caller can hold references (including for a dirty block).
*
* We make sure the base is added and the block is in the lru once it's
* in the hash. If hash table insertion fails it'll be briefly visible
* in the lru, but won't be isolated/evicted because we hold an
* incremented refcount in addition to the live base.
*/
static int block_insert(struct super_block *sb, struct block_private *bp)
{
DECLARE_BLOCK_INFO(sb, binf);
int ret;
WARN_ON_ONCE(atomic_read(&bp->refcount) & BLOCK_REF_INSERTED);
BUG_ON(atomic_read(&bp->refcount) >= LIVE_REFCOUNT_BASE);
atomic_add(LIVE_REFCOUNT_BASE, &bp->refcount);
smp_mb__after_atomic(); /* make sure live base is visible to list_lru walk */
list_lru_add_obj(&binf->lru, &bp->lru_head);
retry:
atomic_add(BLOCK_REF_INSERTED, &bp->refcount);
ret = rhashtable_lookup_insert_fast(&binf->ht, &bp->ht_head, block_ht_params);
if (ret < 0) {
atomic_sub(BLOCK_REF_INSERTED, &bp->refcount);
if (ret == -EBUSY) {
/* wait for pending rebalance to finish */
synchronize_rcu();
goto retry;
} else {
atomic_sub(LIVE_REFCOUNT_BASE, &bp->refcount);
BUG_ON(atomic_read(&bp->refcount) >= LIVE_REFCOUNT_BASE);
list_lru_del_obj(&binf->lru, &bp->lru_head);
}
} else {
atomic_inc(&binf->total_inserted);
TRACE_BLOCK(insert, bp);
}
return ret;
}
static u64 accessed_recently(struct block_info *binf)
{
return atomic64_read(&binf->access_counter) - (atomic_read(&binf->total_inserted) >> 1);
}
/*
* Make sure that a block that is being accessed is less likely to be
* reclaimed if it is seen by the shrinker. If the block hasn't been
* accessed recently we update its accessed value.
* Indicate to the lru walker that this block has been accessed since it
* was added or last walked.
*/
static void block_accessed(struct super_block *sb, struct block_private *bp)
{
DECLARE_BLOCK_INFO(sb, binf);
if (bp->accessed == 0 || bp->accessed < accessed_recently(binf)) {
if (!test_and_set_bit(BLOCK_BIT_ACCESSED, &bp->bits))
scoutfs_inc_counter(sb, block_cache_access_update);
bp->accessed = atomic64_inc_return(&binf->access_counter);
}
}
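This is the familiar second-chance idea: accessors only set a flag, and the lru walk pays the cost of rotating recently used blocks back to the tail. A tiny userspace model of picking a victim under that scheme (illustrative, not the kernel list_lru API):
#include <stdio.h>
#include <stdbool.h>

struct entry {
	int id;
	bool accessed;	/* set on access, cleared by the walker */
};

/* return the index of the entry to evict, second-chance style */
static int pick_victim(struct entry *e, int n)
{
	for (int i = 0; ; i = (i + 1) % n) {
		if (e[i].accessed)
			e[i].accessed = false;	/* rotate: give it another pass */
		else
			return i;		/* cold: evict this one */
	}
}

int main(void)
{
	struct entry e[3] = { {0, true}, {1, false}, {2, true} };

	printf("evict entry %d\n", e[pick_victim(e, 3)].id);	/* entry 1 */
	return 0;
}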
/*
* The caller wants to remove the block from the hash table and has an
* idea what the refcount should be. If the refcount does still
* indicate that the block is hashed, and we're able to clear that bit,
* then we can remove it from the hash table.
* Remove the block from the cache. When this returns the block won't
* be visible for additional references from lookup.
*
* The caller makes sure that it's safe to be referencing this block,
* either with their own held reference (most callers) or by being inside
* an RCU read-side critical section (shrink).
*/
static bool block_remove_cnt(struct super_block *sb, struct block_private *bp, int cnt)
{
DECLARE_BLOCK_INFO(sb, binf);
int ret;
if ((cnt & BLOCK_REF_INSERTED) &&
(atomic_cmpxchg(&bp->refcount, cnt, cnt & ~BLOCK_REF_INSERTED) == cnt)) {
TRACE_BLOCK(remove, bp);
ret = rhashtable_remove_fast(&binf->ht, &bp->ht_head, block_ht_params);
WARN_ON_ONCE(ret); /* must have been inserted */
atomic_dec(&binf->total_inserted);
return true;
}
return false;
}
/*
* Try to remove the block from the hash table as long as the refcount
* indicates that it is still in the hash table. This can be racing
* with normal refcount changes so it might have to retry.
* We always try to remove the block from the hash table. It's safe to
* remove a block that isn't hashed; the removal just returns -ENOENT.
*
* This is racing with the lru walk in the shrinker also trying to
* remove idle blocks from the cache. They both try to remove the live
* refcount base and perform their removal and put if they get it.
*/
static void block_remove(struct super_block *sb, struct block_private *bp)
{
int cnt;
DECLARE_BLOCK_INFO(sb, binf);
do {
cnt = atomic_read(&bp->refcount);
} while ((cnt & BLOCK_REF_INSERTED) && !block_remove_cnt(sb, bp, cnt));
}
rhashtable_remove_fast(&binf->ht, &bp->ht_head, block_ht_params);
/*
* Take one shot at removing the block from the hash table if it's still
* in the hash table and the caller has the only other reference.
*/
static bool block_remove_solo(struct super_block *sb, struct block_private *bp)
{
return block_remove_cnt(sb, bp, BLOCK_REF_INSERTED | 1);
if (block_get_remove_live(bp)) {
list_lru_del_obj(&binf->lru, &bp->lru_head);
block_put(sb, bp);
}
}
static bool io_busy(struct block_private *bp)
@@ -397,37 +407,6 @@ static bool io_busy(struct block_private *bp)
return test_bit(BLOCK_BIT_IO_BUSY, &bp->bits);
}
/*
* Called during shutdown with no other users.
*/
static void block_remove_all(struct super_block *sb)
{
DECLARE_BLOCK_INFO(sb, binf);
struct rhashtable_iter iter;
struct block_private *bp;
rhashtable_walk_enter(&binf->ht, &iter);
rhashtable_walk_start(&iter);
for (;;) {
bp = rhashtable_walk_next(&iter);
if (bp == NULL)
break;
if (bp == ERR_PTR(-EAGAIN))
continue;
if (block_get_if_inserted(bp)) {
block_remove(sb, bp);
WARN_ON_ONCE(atomic_read(&bp->refcount) != 1);
block_put(sb, bp);
}
}
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
WARN_ON_ONCE(atomic_read(&binf->total_inserted) != 0);
}
/*
* XXX The io_count and sb fields in the block_private are only used
@@ -438,7 +417,7 @@ static void block_remove_all(struct super_block *sb)
* possible. Final freeing, verifying checksums, and unlinking errored
* blocks are all done by future users of the blocks.
*/
static void block_end_io(struct super_block *sb, unsigned int opf,
static void block_end_io(struct super_block *sb, blk_opf_t opf,
struct block_private *bp, int err)
{
DECLARE_BLOCK_INFO(sb, binf);
@@ -478,7 +457,7 @@ static void KC_DECLARE_BIO_END_IO(block_bio_end_io, struct bio *bio)
* Kick off IO for a single block.
*/
static int block_submit_bio(struct super_block *sb, struct block_private *bp,
unsigned int opf)
blk_opf_t opf)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct bio *bio = NULL;
@@ -489,7 +468,7 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
int ret = 0;
if (scoutfs_forcing_unmount(sb))
return -EIO;
return -ENOLINK;
sector = bp->bl.blkno << (SCOUTFS_BLOCK_LG_SHIFT - 9);
@@ -505,15 +484,13 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
for (off = 0; off < SCOUTFS_BLOCK_LG_SIZE; off += PAGE_SIZE) {
if (!bio) {
bio = bio_alloc(GFP_NOFS, SCOUTFS_BLOCK_LG_PAGES_PER);
bio = kc_bio_alloc(sbi->meta_bdev, SCOUTFS_BLOCK_LG_PAGES_PER, opf, GFP_NOFS);
if (!bio) {
ret = -ENOMEM;
break;
}
kc_bio_set_opf(bio, opf);
kc_bio_set_sector(bio, sector + (off >> 9));
bio_set_dev(bio, sbi->meta_bdev);
bio->bi_end_io = block_bio_end_io;
bio->bi_private = bp;
@@ -546,6 +523,10 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
return ret;
}
/*
* Return a block with an elevated refcount if it was present in the
* hash table and its refcount didn't indicate that it was being freed.
*/
static struct block_private *block_lookup(struct super_block *sb, u64 blkno)
{
DECLARE_BLOCK_INFO(sb, binf);
@@ -553,8 +534,8 @@ static struct block_private *block_lookup(struct super_block *sb, u64 blkno)
rcu_read_lock();
bp = rhashtable_lookup(&binf->ht, &blkno, block_ht_params);
if (bp)
bp = block_get_if_inserted(bp);
if (bp && !block_get_if_live(bp))
bp = NULL;
rcu_read_unlock();
return bp;
@@ -715,8 +696,8 @@ retry:
ret = 0;
out:
if ((ret == -ESTALE || scoutfs_trigger(sb, BLOCK_REMOVE_STALE)) &&
!retried && !block_is_dirty(bp)) {
if (!retried && !IS_ERR_OR_NULL(bp) && !block_is_dirty(bp) &&
(ret == -ESTALE || scoutfs_trigger(sb, BLOCK_REMOVE_STALE))) {
retried = true;
scoutfs_inc_counter(sb, block_cache_remove_stale);
block_remove(sb, bp);
@@ -1081,100 +1062,106 @@ static unsigned long block_count_objects(struct shrinker *shrink, struct shrink_
struct super_block *sb = binf->sb;
scoutfs_inc_counter(sb, block_cache_count_objects);
return shrinker_min_long(atomic_read(&binf->total_inserted));
return list_lru_shrink_count(&binf->lru, sc);
}
struct isolate_args {
struct super_block *sb;
struct list_head dispose;
};
#define DECLARE_ISOLATE_ARGS(sb_, name_) \
struct isolate_args name_ = { \
.sb = sb_, \
.dispose = LIST_HEAD_INIT(name_.dispose), \
}
static enum lru_status isolate_lru_block(struct list_head *item, struct list_lru_one *list,
void *cb_arg)
{
struct block_private *bp = container_of(item, struct block_private, lru_head);
struct isolate_args *ia = cb_arg;
TRACE_BLOCK(isolate, bp);
/* rotate accessed blocks to the tail of the list (lazy promotion) */
if (test_and_clear_bit(BLOCK_BIT_ACCESSED, &bp->bits)) {
scoutfs_inc_counter(ia->sb, block_cache_isolate_rotate);
return LRU_ROTATE;
}
/* any refs, including dirty/io, stop us from acquiring lru refcount */
if (!block_get_remove_live_only(bp)) {
scoutfs_inc_counter(ia->sb, block_cache_isolate_skip);
return LRU_SKIP;
}
scoutfs_inc_counter(ia->sb, block_cache_isolate_removed);
list_lru_isolate_move(list, &bp->lru_head, &ia->dispose);
return LRU_REMOVED;
}
static void shrink_dispose_blocks(struct super_block *sb, struct list_head *dispose)
{
struct block_private *bp;
struct block_private *bp__;
list_for_each_entry_safe(bp, bp__, dispose, lru_head) {
list_del_init(&bp->lru_head);
block_remove(sb, bp);
block_put(sb, bp);
}
}
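Isolation moves victims onto a private dispose list while the lru's internal lock is held, and the final remove and put happen afterwards outside that lock. A minimal userspace sketch of that two-phase pattern over a plain singly linked list (illustrative; the kernel code uses list_lru_isolate_move() and the dispose list in struct isolate_args):
#include <stdio.h>
#include <stdbool.h>

struct block {
	int refs;
	bool busy;
	struct block *next;	/* lru / dispose list linkage */
};

/* phase 1: under the "lru lock", move idle blocks onto a private list */
static struct block *isolate(struct block **lru)
{
	struct block *dispose = NULL, *bp, **p = lru;

	while ((bp = *p)) {
		if (bp->busy) {			/* skip blocks with other references */
			p = &bp->next;
			continue;
		}
		*p = bp->next;			/* unlink from the lru */
		bp->next = dispose;		/* collect for later teardown */
		dispose = bp;
	}
	return dispose;
}

/* phase 2: outside the lock, tear down and drop the last reference */
static void dispose_all(struct block *dispose)
{
	while (dispose) {
		struct block *bp = dispose;

		dispose = bp->next;
		printf("freeing block with %d ref(s)\n", bp->refs);
	}
}

int main(void)
{
	struct block b2 = { 1, true, NULL };
	struct block b1 = { 1, false, &b2 };
	struct block *lru = &b1;

	dispose_all(isolate(&lru));
	return 0;
}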
/*
* Remove a number of cached blocks that haven't been used recently.
*
* We don't maintain a strictly ordered LRU to avoid the contention of
* accesses always moving blocks around in some precise global
* structure.
*
* Instead we use counters to divide the blocks into two roughly equal
* groups by how recently they were accessed. We randomly walk all
* inserted blocks looking for any blocks in the older half to remove
* and free. The random walk and there being two groups means that we
* typically only walk a small multiple of the number we're looking for
* before we find them all.
*
* Our rcu walk of blocks can see blocks in all stages of their life
* cycle, from dirty blocks to those with 0 references that are queued
* for freeing. We only want to free idle inserted blocks so we
* atomically remove blocks when the only references are ours and the
* hash table.
*/
static unsigned long block_scan_objects(struct shrinker *shrink, struct shrink_control *sc)
{
struct block_info *binf = KC_SHRINKER_CONTAINER_OF(shrink, struct block_info);
struct super_block *sb = binf->sb;
struct rhashtable_iter iter;
struct block_private *bp;
bool stop = false;
unsigned long freed = 0;
unsigned long nr = sc->nr_to_scan;
u64 recently;
DECLARE_ISOLATE_ARGS(sb, ia);
unsigned long freed;
scoutfs_inc_counter(sb, block_cache_scan_objects);
recently = accessed_recently(binf);
rhashtable_walk_enter(&binf->ht, &iter);
rhashtable_walk_start(&iter);
freed = kc_list_lru_shrink_walk(&binf->lru, sc, isolate_lru_block, &ia);
shrink_dispose_blocks(sb, &ia.dispose);
return freed;
}
/*
* This isn't great but I don't see a better way. We want to
* walk the hash from a random point so that we're not
* constantly walking over the same region that we've already
* freed old blocks within. The interface doesn't let us do
* this explicitly, but this seems to work? The difference this
* makes is enormous, around a few orders of magnitude fewer
* _nexts per shrink.
*/
if (iter.walker.tbl)
iter.slot = prandom_u32_max(iter.walker.tbl->size);
static enum lru_status dump_lru_block(struct list_head *item, struct list_lru_one *list,
void *cb_arg)
{
struct block_private *bp = container_of(item, struct block_private, lru_head);
while (nr > 0) {
bp = rhashtable_walk_next(&iter);
if (bp == NULL)
break;
if (bp == ERR_PTR(-EAGAIN)) {
/*
* We can be called from reclaim in the allocation
* to resize the hash table itself. We have to
* return so that the caller can proceed and
* enable hash table iteration again.
*/
scoutfs_inc_counter(sb, block_cache_shrink_stop);
stop = true;
break;
}
printk("blkno %llu refcount 0x%x io_count %d bits 0x%lx\n",
bp->bl.blkno, atomic_read(&bp->refcount), atomic_read(&bp->io_count),
bp->bits);
print_block_stack(bp);
scoutfs_inc_counter(sb, block_cache_shrink_next);
return LRU_SKIP;
}
if (bp->accessed >= recently) {
scoutfs_inc_counter(sb, block_cache_shrink_recent);
continue;
}
/*
* Called during shutdown with no other users. The isolating walk must
* find blocks on the lru that only have references for presence on the
* lru and in the hash table.
*/
static void block_shrink_all(struct super_block *sb)
{
DECLARE_BLOCK_INFO(sb, binf);
DECLARE_ISOLATE_ARGS(sb, ia);
long count;
if (block_get_if_inserted(bp)) {
if (block_remove_solo(sb, bp)) {
scoutfs_inc_counter(sb, block_cache_shrink_remove);
TRACE_BLOCK(shrink, bp);
freed++;
nr--;
}
block_put(sb, bp);
}
count = DIV_ROUND_UP(list_lru_count(&binf->lru), 128) * 2;
do {
kc_list_lru_walk(&binf->lru, isolate_lru_block, &ia, 128);
shrink_dispose_blocks(sb, &ia.dispose);
} while (list_lru_count(&binf->lru) > 0 && --count > 0);
count = list_lru_count(&binf->lru);
if (count > 0) {
scoutfs_err(sb, "failed to isolate/dispose %ld blocks", count);
kc_list_lru_walk(&binf->lru, dump_lru_block, sb, count);
}
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
if (stop)
return SHRINK_STOP;
else
return freed;
}
struct sm_block_completion {
@@ -1201,7 +1188,7 @@ static void KC_DECLARE_BIO_END_IO(sm_block_bio_end_io, struct bio *bio)
* only layer that sees the full block buffer so we pass the calculated
* crc to the caller for them to check in their context.
*/
static int sm_block_io(struct super_block *sb, struct block_device *bdev, unsigned int opf,
static int sm_block_io(struct super_block *sb, struct block_device *bdev, blk_opf_t opf,
u64 blkno, struct scoutfs_block_header *hdr, size_t len, __le32 *blk_crc)
{
struct scoutfs_block_header *pg_hdr;
@@ -1213,7 +1200,7 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, unsign
BUILD_BUG_ON(PAGE_SIZE < SCOUTFS_BLOCK_SM_SIZE);
if (scoutfs_forcing_unmount(sb))
return -EIO;
return -ENOLINK;
if (WARN_ON_ONCE(len > SCOUTFS_BLOCK_SM_SIZE) ||
WARN_ON_ONCE(!op_is_write(opf) && !blk_crc))
@@ -1233,15 +1220,13 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, unsign
pg_hdr->crc = block_calc_crc(pg_hdr, SCOUTFS_BLOCK_SM_SIZE);
}
bio = bio_alloc(GFP_NOFS, 1);
bio = kc_bio_alloc(bdev, 1, opf, GFP_NOFS);
if (!bio) {
ret = -ENOMEM;
goto out;
}
kc_bio_set_opf(bio, opf | REQ_SYNC);
kc_bio_set_sector(bio, blkno << (SCOUTFS_BLOCK_SM_SHIFT - 9));
bio_set_dev(bio, bdev);
bio->bi_end_io = sm_block_bio_end_io;
bio->bi_private = &sbc;
bio_add_page(bio, page, SCOUTFS_BLOCK_SM_SIZE, 0);
@@ -1281,7 +1266,7 @@ int scoutfs_block_write_sm(struct super_block *sb,
int scoutfs_block_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct block_info *binf;
struct block_info *binf = NULL;
int ret;
binf = kzalloc(sizeof(struct block_info), GFP_KERNEL);
@@ -1290,19 +1275,19 @@ int scoutfs_block_setup(struct super_block *sb)
goto out;
}
ret = rhashtable_init(&binf->ht, &block_ht_params);
if (ret < 0) {
kfree(binf);
ret = list_lru_init(&binf->lru);
if (ret < 0)
goto out;
ret = rhashtable_init(&binf->ht, &block_ht_params);
if (ret < 0)
goto out;
}
binf->sb = sb;
atomic_set(&binf->total_inserted, 0);
atomic64_set(&binf->access_counter, 0);
init_waitqueue_head(&binf->waitq);
KC_INIT_SHRINKER_FUNCS(&binf->shrinker, block_count_objects,
block_scan_objects);
KC_REGISTER_SHRINKER(&binf->shrinker);
KC_REGISTER_SHRINKER(&binf->shrinker, "scoutfs-block:" SCSBF, SCSB_ARGS(sb));
INIT_WORK(&binf->free_work, block_free_work);
init_llist_head(&binf->free_llist);
@@ -1310,8 +1295,10 @@ int scoutfs_block_setup(struct super_block *sb)
ret = 0;
out:
if (ret)
scoutfs_block_destroy(sb);
if (ret < 0 && binf) {
list_lru_destroy(&binf->lru);
kfree(binf);
}
return ret;
}
@@ -1323,9 +1310,10 @@ void scoutfs_block_destroy(struct super_block *sb)
if (binf) {
KC_UNREGISTER_SHRINKER(&binf->shrinker);
block_remove_all(sb);
block_shrink_all(sb);
flush_work(&binf->free_work);
rhashtable_destroy(&binf->ht);
list_lru_destroy(&binf->lru);
kfree(binf);
sbi->block_info = NULL;

View File

@@ -20,6 +20,7 @@
#include <net/sock.h>
#include <net/tcp.h>
#include <asm/barrier.h>
#include <linux/overflow.h>
#include "format.h"
#include "counters.h"
@@ -68,6 +69,7 @@ int scoutfs_client_alloc_inodes(struct super_block *sb, u64 count,
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_net_inode_alloc ial;
__le64 lecount = cpu_to_le64(count);
u64 tmp;
int ret;
ret = scoutfs_net_sync_request(sb, client->conn,
@@ -80,7 +82,7 @@ int scoutfs_client_alloc_inodes(struct super_block *sb, u64 count,
if (*nr == 0)
ret = -ENOSPC;
else if (*ino + *nr < *ino)
else if (check_add_overflow(*ino, *nr - 1, &tmp))
ret = -EINVAL;
}
@@ -433,8 +435,8 @@ static int lookup_mounted_client_item(struct super_block *sb, u64 rid)
if (ret == -ENOENT)
ret = 0;
kfree(super);
out:
kfree(super);
return ret;
}
@@ -477,7 +479,7 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_mount_options opts;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
struct sockaddr_storage sin;
bool am_quorum;
int ret;

View File

@@ -26,17 +26,15 @@
EXPAND_COUNTER(block_cache_alloc_page_order) \
EXPAND_COUNTER(block_cache_alloc_virt) \
EXPAND_COUNTER(block_cache_end_io_error) \
EXPAND_COUNTER(block_cache_isolate_removed) \
EXPAND_COUNTER(block_cache_isolate_rotate) \
EXPAND_COUNTER(block_cache_isolate_skip) \
EXPAND_COUNTER(block_cache_forget) \
EXPAND_COUNTER(block_cache_free) \
EXPAND_COUNTER(block_cache_free_work) \
EXPAND_COUNTER(block_cache_remove_stale) \
EXPAND_COUNTER(block_cache_count_objects) \
EXPAND_COUNTER(block_cache_scan_objects) \
EXPAND_COUNTER(block_cache_shrink) \
EXPAND_COUNTER(block_cache_shrink_next) \
EXPAND_COUNTER(block_cache_shrink_recent) \
EXPAND_COUNTER(block_cache_shrink_remove) \
EXPAND_COUNTER(block_cache_shrink_stop) \
EXPAND_COUNTER(btree_compact_values) \
EXPAND_COUNTER(btree_compact_values_enomem) \
EXPAND_COUNTER(btree_delete) \
@@ -90,6 +88,7 @@
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
EXPAND_COUNTER(inode_deleted) \
EXPAND_COUNTER(item_cache_count_objects) \
EXPAND_COUNTER(item_cache_scan_objects) \
EXPAND_COUNTER(item_clear_dirty) \
@@ -117,10 +116,11 @@
EXPAND_COUNTER(item_pcpu_page_hit) \
EXPAND_COUNTER(item_pcpu_page_miss) \
EXPAND_COUNTER(item_pcpu_page_miss_keys) \
EXPAND_COUNTER(item_read_pages_barrier) \
EXPAND_COUNTER(item_read_pages_retry) \
EXPAND_COUNTER(item_read_pages_split) \
EXPAND_COUNTER(item_shrink_page) \
EXPAND_COUNTER(item_shrink_page_dirty) \
EXPAND_COUNTER(item_shrink_page_reader) \
EXPAND_COUNTER(item_shrink_page_trylock) \
EXPAND_COUNTER(item_update) \
EXPAND_COUNTER(item_write_dirty) \
@@ -145,6 +145,7 @@
EXPAND_COUNTER(lock_shrink_work) \
EXPAND_COUNTER(lock_unlock) \
EXPAND_COUNTER(lock_wait) \
EXPAND_COUNTER(log_merge_no_finalized) \
EXPAND_COUNTER(log_merge_wait_timeout) \
EXPAND_COUNTER(net_dropped_response) \
EXPAND_COUNTER(net_send_bytes) \
@@ -181,6 +182,7 @@
EXPAND_COUNTER(quorum_send_vote) \
EXPAND_COUNTER(quorum_server_shutdown) \
EXPAND_COUNTER(quorum_term_follower) \
EXPAND_COUNTER(reclaimed_open_logs) \
EXPAND_COUNTER(server_commit_hold) \
EXPAND_COUNTER(server_commit_queue) \
EXPAND_COUNTER(server_commit_worker) \

View File

@@ -20,7 +20,9 @@
#include <linux/hash.h>
#include <linux/log2.h>
#include <linux/falloc.h>
#include <linux/fiemap.h>
#include <linux/writeback.h>
#include <linux/overflow.h>
#include "format.h"
#include "super.h"
@@ -558,7 +560,7 @@ static int scoutfs_get_block(struct inode *inode, sector_t iblock,
u64 offset;
int ret;
WARN_ON_ONCE(create && !inode_is_locked(inode));
WARN_ON_ONCE(create && !rwsem_is_locked(&si->extent_sem));
/* make sure caller holds a cluster lock */
lock = scoutfs_per_task_get(&si->pt_data_lock);
@@ -679,8 +681,14 @@ int scoutfs_get_block_write(struct inode *inode, sector_t iblock, struct buffer_
* We can return errors from locking and checking offline extents. The
* page is unlocked if we return an error.
*/
#ifdef KC_MPAGE_READ_FOLIO
static int scoutfs_read_folio(struct file *file, struct folio *folio)
{
struct page *page = &folio->page;
#else
static int scoutfs_readpage(struct file *file, struct page *page)
{
#endif
struct inode *inode = file->f_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
@@ -727,7 +735,11 @@ static int scoutfs_readpage(struct file *file, struct page *page)
return ret;
}
#ifdef KC_MPAGE_READ_FOLIO
ret = mpage_read_folio(folio, scoutfs_get_block_read);
#else
ret = mpage_readpage(page, scoutfs_get_block_read);
#endif
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
@@ -825,7 +837,10 @@ struct write_begin_data {
static int scoutfs_write_begin(struct file *file,
struct address_space *mapping, loff_t pos,
unsigned len, unsigned flags,
unsigned len,
#ifdef KC_BLOCK_WRITE_BEGIN_AOP_FLAGS
unsigned flags,
#endif
struct page **pagep, void **fsdata)
{
struct inode *inode = mapping->host;
@@ -860,13 +875,18 @@ retry:
if (ret < 0)
goto out;
#ifdef KC_BLOCK_WRITE_BEGIN_AOP_FLAGS
/* can't re-enter fs, have trans */
flags |= AOP_FLAG_NOFS;
#endif
/* generic write_end updates i_size and calls dirty_inode */
ret = scoutfs_dirty_inode_item(inode, wbd->lock) ?:
block_write_begin(mapping, pos, len, flags, pagep,
scoutfs_get_block_write);
block_write_begin(mapping, pos, len,
#ifdef KC_BLOCK_WRITE_BEGIN_AOP_FLAGS
flags,
#endif
pagep, scoutfs_get_block_write);
if (ret < 0) {
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &wbd->ind_locks);
@@ -1068,6 +1088,7 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
loff_t end;
u64 iblock;
u64 last;
loff_t tmp;
s64 ret;
/* XXX support more flags */
@@ -1076,14 +1097,14 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
goto out;
}
/* catch wrapping */
if (offset + len < offset) {
ret = -EINVAL;
if (len == 0) {
ret = 0;
goto out;
}
if (len == 0) {
ret = 0;
/* catch wrapping */
if (check_add_overflow(offset, len - 1, &tmp)) {
ret = -EINVAL;
goto out;
}
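check_add_overflow() wraps the compiler's __builtin_add_overflow(), and summing offset with len - 1 rather than len means a range that ends exactly at the top of the offset type is still accepted. A quick userspace check of both forms (the values are illustrative):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	long long offset = 4096, len, tmp;

	len = INT64_MAX - offset + 1;	/* range ends exactly at INT64_MAX */

	/* offset + len wraps, but the last byte offset does not */
	printf("offset + len overflows:     %d\n",
	       __builtin_add_overflow(offset, len, &tmp));
	printf("offset + len - 1 overflows: %d\n",
	       __builtin_add_overflow(offset, len - 1, &tmp));
	return 0;
}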
@@ -1304,8 +1325,8 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
goto out;
}
ret = inode_permission(from, MAY_WRITE) ?:
inode_permission(to, MAY_WRITE);
ret = inode_permission(KC_VFS_INIT_NS from, MAY_WRITE) ?:
inode_permission(KC_VFS_INIT_NS to, MAY_WRITE);
if (ret < 0)
goto out;
@@ -1530,33 +1551,32 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_lock *lock = NULL;
struct scoutfs_extent *info = NULL;
struct page *page = NULL;
struct scoutfs_extent ext;
struct scoutfs_extent cur;
struct data_ext_args args;
u32 last_flags;
u64 iblock;
u64 last;
int entries = 0;
int ret;
int complete = 0;
if (len == 0) {
ret = 0;
goto out;
}
ret = fiemap_check_flags(fieinfo, FIEMAP_FLAG_SYNC);
ret = fiemap_prep(inode, fieinfo, start, &len, FIEMAP_FLAG_SYNC);
if (ret)
goto out;
inode_lock(inode);
down_read(&si->extent_sem);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
if (ret)
goto unlock;
args.ino = ino;
args.inode = inode;
args.lock = lock;
page = alloc_page(GFP_KERNEL);
if (!page) {
ret = -ENOMEM;
goto out;
}
/* use a dummy extent to track */
memset(&cur, 0, sizeof(cur));
@@ -1565,48 +1585,93 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
iblock = start >> SCOUTFS_BLOCK_SM_SHIFT;
last = (start + len - 1) >> SCOUTFS_BLOCK_SM_SHIFT;
args.ino = ino;
args.inode = inode;
/* outer loop */
while (iblock <= last) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
iblock, 1, &ext);
if (ret < 0) {
if (ret == -ENOENT)
/* lock */
inode_lock(inode);
down_read(&si->extent_sem);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
if (ret) {
up_read(&si->extent_sem);
inode_unlock(inode);
break;
}
args.lock = lock;
/* collect entries */
info = page_address(page);
memset(info, 0, PAGE_SIZE);
while (entries < (PAGE_SIZE / sizeof(struct fiemap_extent)) - 1) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
iblock, 1, &ext);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
complete = 1;
last_flags = FIEMAP_EXTENT_LAST;
break;
}
trace_scoutfs_data_fiemap_extent(sb, ino, &ext);
if (ext.start > last) {
/* not setting _LAST, it's for end of file */
ret = 0;
last_flags = FIEMAP_EXTENT_LAST;
break;
complete = 1;
break;
}
if (scoutfs_ext_can_merge(&cur, &ext)) {
/* merged extents could be greater than input len */
cur.len += ext.len;
} else {
/* fill it */
memcpy(info, &cur, sizeof(cur));
entries++;
info++;
cur = ext;
}
iblock = ext.start + ext.len;
}
trace_scoutfs_data_fiemap_extent(sb, ino, &ext);
/* unlock */
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
up_read(&si->extent_sem);
inode_unlock(inode);
if (ext.start > last) {
/* not setting _LAST, it's for end of file */
ret = 0;
if (ret)
break;
}
if (scoutfs_ext_can_merge(&cur, &ext)) {
/* merged extents could be greater than input len */
cur.len += ext.len;
} else {
ret = fill_extent(fieinfo, &cur, 0);
/* emit entries */
info = page_address(page);
for (; entries > 0; entries--) {
ret = fill_extent(fieinfo, info, 0);
if (ret != 0)
goto unlock;
cur = ext;
goto out;
info++;
}
iblock = ext.start + ext.len;
if (complete)
break;
}
/* still one left, it's in cur */
if (cur.len)
ret = fill_extent(fieinfo, &cur, last_flags);
unlock:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
up_read(&si->extent_sem);
inode_unlock(inode);
out:
if (ret == 1)
ret = 0;
if (page)
__free_page(page);
trace_scoutfs_data_fiemap(sb, start, len, ret);
return ret;
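The collection loop above coalesces adjacent extents with scoutfs_ext_can_merge() so userspace sees one fiemap entry per contiguous run rather than one per stored item. A small userspace sketch of that merge step over (start, len) pairs; only adjacency is checked here, and the data is illustrative:
#include <stdio.h>

struct ext {
	unsigned long long start;
	unsigned long long len;
};

/* extend cur while extents stay contiguous, emit it when they don't */
int main(void)
{
	struct ext in[] = { {0, 8}, {8, 4}, {20, 2} };
	struct ext cur = { 0, 0 };

	for (int i = 0; i < 3; i++) {
		if (cur.len && cur.start + cur.len == in[i].start) {
			cur.len += in[i].len;		/* contiguous: merge */
		} else {
			if (cur.len)
				printf("extent %llu+%llu\n", cur.start, cur.len);
			cur = in[i];
		}
	}
	if (cur.len)
		printf("extent %llu+%llu\n", cur.start, cur.len);	/* the last one */
	return 0;
}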
@@ -1709,12 +1774,16 @@ int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
u64 last_block;
u64 on;
u64 off;
loff_t tmp;
int ret = 0;
if (len == 0)
goto out;
if (WARN_ON_ONCE(sef & SEF_UNKNOWN) ||
WARN_ON_ONCE(op & SCOUTFS_IOC_DWO_UNKNOWN) ||
WARN_ON_ONCE(dw && !RB_EMPTY_NODE(&dw->node)) ||
WARN_ON_ONCE(pos + len < pos)) {
WARN_ON_ONCE(check_add_overflow(pos, len - 1, &tmp))) {
ret = -EINVAL;
goto out;
}
@@ -1889,8 +1958,244 @@ int scoutfs_data_waiting(struct super_block *sb, u64 ino, u64 iblock,
return ret;
}
#ifdef KC_MM_VM_FAULT_T
static vm_fault_t scoutfs_data_page_mkwrite(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
#else
static int scoutfs_data_page_mkwrite(struct vm_area_struct *vma,
struct vm_fault *vmf)
{
#endif
struct page *page = vmf->page;
struct file *file = vma->vm_file;
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
struct write_begin_data wbd;
u64 ind_seq;
loff_t pos;
loff_t size;
unsigned int len = PAGE_SIZE;
vm_fault_t ret = VM_FAULT_SIGBUS;
int err;
pos = vmf->pgoff << PAGE_SHIFT;
sb_start_pagefault(sb);
err = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (err) {
ret = vmf_error(err);
goto out;
}
size = i_size_read(inode);
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, lock)) {
/* data_version is per inode, whole file must be online */
err = scoutfs_data_wait_check(inode, 0, size,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_WRITE,
&dw, lock);
if (err != 0) {
if (err < 0)
ret = vmf_error(err);
goto out_unlock;
}
}
/* scoutfs_write_begin */
memset(&wbd, 0, sizeof(wbd));
INIT_LIST_HEAD(&wbd.ind_locks);
wbd.lock = lock;
/*
* Start the transaction before taking page locks - we don't want to lock
* a page and then wait for a transaction, because writeback could race
* with us and cause a lock inversion hang, as demonstrated by both the
* holetest and fsstress tests in xfstests.
*/
do {
err = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &wbd.ind_locks, inode,
true) ?:
scoutfs_inode_index_try_lock_hold(sb, &wbd.ind_locks,
ind_seq, false);
} while (err > 0);
if (err < 0) {
ret = vmf_error(err);
goto out_trans;
}
down_write(&si->extent_sem);
if (!trylock_page(page)) {
ret = VM_FAULT_NOPAGE;
goto out_sem;
}
ret = VM_FAULT_LOCKED;
if ((page->mapping != inode->i_mapping) ||
(!PageUptodate(page)) ||
(page_offset(page) > size)) {
unlock_page(page);
ret = VM_FAULT_NOPAGE;
goto out_sem;
}
if (page->index == (size - 1) >> PAGE_SHIFT)
len = ((size - 1) & ~PAGE_MASK) + 1;
err = __block_write_begin(page, pos, PAGE_SIZE, scoutfs_get_block);
if (err) {
ret = vmf_error(err);
unlock_page(page);
goto out_sem;
}
/* end scoutfs_write_begin */
/*
* We mark the page dirty already here so that when freeze is in
* progress, we are guaranteed that writeback during freezing will
* see the dirty page and writeprotect it again.
*/
set_page_dirty(page);
wait_for_stable_page(page);
/* scoutfs_write_end */
scoutfs_inode_set_data_seq(inode);
scoutfs_inode_inc_data_version(inode);
file_update_time(vma->vm_file);
scoutfs_update_inode_item(inode, wbd.lock, &wbd.ind_locks);
scoutfs_inode_queue_writeback(inode);
out_sem:
up_write(&si->extent_sem);
out_trans:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &wbd.ind_locks);
/* end scoutfs_write_end */
out_unlock:
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
out:
sb_end_pagefault(sb);
if (scoutfs_data_wait_found(&dw)) {
/*
* It'd be really nice to not hold the mmap_sem lock here
* before waiting for data, and then return VM_FAULT_RETRY
*/
err = scoutfs_data_wait(inode, &dw);
if (err == 0)
ret = VM_FAULT_NOPAGE;
else
ret = vmf_error(err);
}
trace_scoutfs_data_page_mkwrite(sb, scoutfs_ino(inode), pos, (__force u32)ret);
return ret;
}
#ifdef KC_MM_VM_FAULT_T
static vm_fault_t scoutfs_data_filemap_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
#else
static int scoutfs_data_filemap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
#endif
struct file *file = vma->vm_file;
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
loff_t pos;
int err;
vm_fault_t ret = VM_FAULT_SIGBUS;
pos = vmf->pgoff;
pos <<= PAGE_SHIFT;
retry:
err = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
if (err < 0)
return vmf_error(err);
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
/* protect checked extents from stage/release */
atomic_inc(&inode->i_dio_count);
err = scoutfs_data_wait_check(inode, pos, PAGE_SIZE,
SEF_OFFLINE, SCOUTFS_IOC_DWO_READ,
&dw, inode_lock);
if (err != 0) {
if (err < 0)
ret = vmf_error(err);
goto out;
}
}
#ifdef KC_MM_VM_FAULT_T
ret = filemap_fault(vmf);
#else
ret = filemap_fault(vma, vmf);
#endif
out:
if (scoutfs_per_task_del(&si->pt_data_lock, &pt_ent))
kc_inode_dio_end(inode);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
if (scoutfs_data_wait_found(&dw)) {
err = scoutfs_data_wait(inode, &dw);
if (err == 0)
goto retry;
ret = VM_FAULT_RETRY;
}
trace_scoutfs_data_filemap_fault(sb, scoutfs_ino(inode), pos, (__force u32)ret);
return ret;
}
static const struct vm_operations_struct scoutfs_data_file_vm_ops = {
.fault = scoutfs_data_filemap_fault,
.page_mkwrite = scoutfs_data_page_mkwrite,
#ifdef KC_MM_REMAP_PAGES
.remap_pages = generic_file_remap_pages,
#endif
};
static int scoutfs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
file_accessed(file);
vma->vm_ops = &scoutfs_data_file_vm_ops;
return 0;
}
const struct address_space_operations scoutfs_file_aops = {
#ifdef KC_MPAGE_READ_FOLIO
.dirty_folio = block_dirty_folio,
.invalidate_folio = block_invalidate_folio,
.read_folio = scoutfs_read_folio,
#else
.readpage = scoutfs_readpage,
#endif
#ifndef KC_FILE_AOPS_READAHEAD
.readpages = scoutfs_readpages,
#else
@@ -1911,7 +2216,10 @@ const struct file_operations scoutfs_file_fops = {
#else
.read_iter = scoutfs_file_read_iter,
.write_iter = scoutfs_file_write_iter,
.splice_read = generic_file_splice_read,
.splice_write = iter_file_splice_write,
#endif
.mmap = scoutfs_file_mmap,
.unlocked_ioctl = scoutfs_ioctl,
.fsync = scoutfs_file_fsync,
.llseek = scoutfs_file_llseek,

View File

@@ -11,11 +11,13 @@
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/stddef.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/uio.h>
#include <linux/xattr.h>
#include <linux/namei.h>
#include <linux/mm.h>
#include "format.h"
#include "file.h"
@@ -434,6 +436,15 @@ out:
return d_splice_alias(inode, dentry);
}
/*
* Helper to advance to the next properly aligned dirent while iterating
*/
static inline struct scoutfs_dirent *next_aligned_dirent(struct scoutfs_dirent *dent, u8 len)
{
return (void *)dent +
ALIGN(offsetof(struct scoutfs_dirent, name[len]), __alignof__(struct scoutfs_dirent));
}
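ALIGN() rounds the variable-length entry (header plus name) up to the dirent's natural alignment so the next entry stored in the page starts on a safe boundary. A userspace sketch of the same arithmetic with a hypothetical dirent layout, not the real struct scoutfs_dirent:
#include <stdio.h>
#include <stddef.h>

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((size_t)(a) - 1))

struct dent {
	unsigned long long ino;
	unsigned char type;
	unsigned char name_len;
	char name[];			/* name_len bytes follow */
};

/* offsetof(struct dent, name[len]) in the helper above is name + len here */
static struct dent *next_aligned(struct dent *d, unsigned char len)
{
	return (void *)((char *)d +
		ALIGN(offsetof(struct dent, name) + len, __alignof__(struct dent)));
}

int main(void)
{
	char page[64] = { 0 };
	struct dent *d = (void *)page;

	d->name_len = 3;		/* e.g. "foo" */
	printf("next entry at offset %ld\n",
	       (long)((char *)next_aligned(d, d->name_len) - page));
	return 0;
}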
/*
* readdir simply iterates over the dirent items for the dir inode and
* uses their offset as the readdir position.
@@ -441,76 +452,112 @@ out:
* It will need to be careful not to read past the region of the dirent
* hash offset keys that it has access to.
*/
static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
void *dirent, kc_readdir_ctx_t ctx)
static int scoutfs_readdir(struct file *file, struct dir_context *ctx)
{
struct inode *inode = file_inode(file);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent *dent = NULL;
/* we'll store name_len in dent->__pad[0] */
#define hacky_name_len __pad[0]
struct scoutfs_key last_key;
struct scoutfs_key key;
struct page *page = NULL;
int name_len;
u64 pos;
int entries = 0;
int ret;
int complete = 0;
struct scoutfs_dirent *end;
if (!kc_dir_emit_dots(file, dirent, ctx))
if (!dir_emit_dots(file, ctx))
return 0;
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
page = alloc_page(GFP_KERNEL);
if (!page)
return -ENOMEM;
}
end = page_address(page) + PAGE_SIZE;
init_dirent_key(&last_key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
SCOUTFS_DIRENT_LAST_POS, 0);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &dir_lock);
if (ret)
goto out;
/*
* Lock and fetch dirent items until the page can no longer fit a
* max-size dirent (288b), then unlock and dir_emit the ones we stored
* in the page.
*/
for (;;) {
init_dirent_key(&key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
kc_readdir_pos(file, ctx), 0);
/* lock */
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &dir_lock);
if (ret)
break;
ret = scoutfs_item_next(sb, &key, &last_key, dent,
dirent_bytes(SCOUTFS_NAME_LEN),
dir_lock);
if (ret < 0) {
if (ret == -ENOENT)
dent = page_address(page);
pos = ctx->pos;
while (next_aligned_dirent(dent, SCOUTFS_NAME_LEN) < end) {
init_dirent_key(&key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
pos, 0);
ret = scoutfs_item_next(sb, &key, &last_key, dent,
dirent_bytes(SCOUTFS_NAME_LEN),
dir_lock);
if (ret < 0) {
if (ret == -ENOENT) {
ret = 0;
complete = 1;
}
break;
}
name_len = ret - sizeof(struct scoutfs_dirent);
dent->hacky_name_len = name_len;
if (name_len < 1 || name_len > SCOUTFS_NAME_LEN) {
scoutfs_corruption(sb, SC_DIRENT_READDIR_NAME_LEN,
corrupt_dirent_readdir_name_len,
"dir_ino %llu pos %llu key "SK_FMT" len %d",
scoutfs_ino(inode),
pos,
SK_ARG(&key), name_len);
ret = -EIO;
break;
}
pos = le64_to_cpu(dent->pos) + 1;
dent = next_aligned_dirent(dent, name_len);
entries++;
}
/* unlock */
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_READ);
if (ret < 0)
break;
dent = page_address(page);
for (; entries > 0; entries--) {
ctx->pos = le64_to_cpu(dent->pos);
if (!dir_emit(ctx, dent->name, dent->hacky_name_len,
le64_to_cpu(dent->ino),
dentry_type(dent->type))) {
ret = 0;
goto out;
}
dent = next_aligned_dirent(dent, dent->hacky_name_len);
/* always advance ctx->pos past */
ctx->pos++;
}
if (complete)
break;
}
name_len = ret - sizeof(struct scoutfs_dirent);
if (name_len < 1 || name_len > SCOUTFS_NAME_LEN) {
scoutfs_corruption(sb, SC_DIRENT_READDIR_NAME_LEN,
corrupt_dirent_readdir_name_len,
"dir_ino %llu pos %llu key "SK_FMT" len %d",
scoutfs_ino(inode),
kc_readdir_pos(file, ctx),
SK_ARG(&key), name_len);
ret = -EIO;
goto out;
}
pos = le64_to_cpu(key.skd_major);
kc_readdir_pos(file, ctx) = pos;
if (!kc_dir_emit(ctx, dirent, dent->name, name_len, pos,
le64_to_cpu(dent->ino),
dentry_type(dent->type))) {
ret = 0;
break;
}
kc_readdir_pos(file, ctx) = pos + 1;
}
out:
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_READ);
kfree(dent);
if (page)
__free_page(page);
return ret;
}
@@ -703,8 +750,9 @@ out_unlock:
return inode;
}
static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
dev_t rdev)
static int scoutfs_mknod(KC_VFS_NS_DEF
struct inode *dir,
struct dentry *dentry, umode_t mode, dev_t rdev)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
@@ -773,15 +821,20 @@ out:
}
/* XXX hmm, do something with excl? */
static int scoutfs_create(struct inode *dir, struct dentry *dentry,
umode_t mode, bool excl)
static int scoutfs_create(KC_VFS_NS_DEF
struct inode *dir,
struct dentry *dentry, umode_t mode, bool excl)
{
return scoutfs_mknod(dir, dentry, mode | S_IFREG, 0);
return scoutfs_mknod(KC_VFS_NS
dir, dentry, mode | S_IFREG, 0);
}
static int scoutfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
static int scoutfs_mkdir(KC_VFS_NS_DEF
struct inode *dir,
struct dentry *dentry, umode_t mode)
{
return scoutfs_mknod(dir, dentry, mode | S_IFDIR, 0);
return scoutfs_mknod(KC_VFS_NS
dir, dentry, mode | S_IFDIR, 0);
}
static int scoutfs_link(struct dentry *old_dentry,
@@ -1176,7 +1229,8 @@ static const char *scoutfs_get_link(struct dentry *dentry, struct inode *inode,
* Symlink target paths can be annoyingly large. We store relatively
* rare large paths in multiple items.
*/
static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
static int scoutfs_symlink(KC_VFS_NS_DEF
struct inode *dir, struct dentry *dentry,
const char *symname)
{
struct super_block *sb = dir->i_sb;
@@ -1563,7 +1617,8 @@ static int verify_ancestors(struct super_block *sb, u64 p1, u64 p2,
* from using parent/child locking orders as two groups can have both
* parent and child relationships to each other.
*/
static int scoutfs_rename_common(struct inode *old_dir,
static int scoutfs_rename_common(KC_VFS_NS_DEF
struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
{
@@ -1757,7 +1812,7 @@ retry:
}
old_inode->i_ctime = now;
if (new_inode)
old_inode->i_ctime = now;
new_inode->i_ctime = now;
inode_inc_iversion(old_dir);
inode_inc_iversion(old_inode);
@@ -1840,18 +1895,21 @@ static int scoutfs_rename(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry)
{
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, 0);
return scoutfs_rename_common(KC_VFS_INIT_NS
old_dir, old_dentry, new_dir, new_dentry, 0);
}
#endif
static int scoutfs_rename2(struct inode *old_dir,
static int scoutfs_rename2(KC_VFS_NS_DEF
struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
{
if (flags & ~RENAME_NOREPLACE)
return -EINVAL;
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, flags);
return scoutfs_rename_common(KC_VFS_NS
old_dir, old_dentry, new_dir, new_dentry, flags);
}
#ifdef KC_FMODE_KABI_ITERATE
@@ -1863,8 +1921,18 @@ static int scoutfs_dir_open(struct inode *inode, struct file *file)
}
#endif
static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
static int scoutfs_tmpfile(KC_VFS_NS_DEF
struct inode *dir,
#ifdef KC_D_TMPFILE_DENTRY
struct dentry *dentry,
#else
struct file *file,
#endif
umode_t mode)
{
#ifndef KC_D_TMPFILE_DENTRY
struct dentry *dentry = file->f_path.dentry;
#endif
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
@@ -1891,7 +1959,11 @@ static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mod
si->crtime = inode->i_mtime;
insert_inode_hash(inode);
ihold(inode); /* need to update inode modifications in d_tmpfile */
#ifdef KC_D_TMPFILE_DENTRY
d_tmpfile(dentry, inode);
#else
d_tmpfile(file, inode);
#endif
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
@@ -1899,6 +1971,10 @@ static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mod
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
scoutfs_inode_index_unlock(sb, &ind_locks);
#ifndef KC_D_TMPFILE_DENTRY
ret = finish_open_simple(file, 0);
#endif
out:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
@@ -1944,7 +2020,7 @@ const struct inode_operations scoutfs_symlink_iops = {
};
const struct file_operations scoutfs_dir_fops = {
.KC_FOP_READDIR = scoutfs_readdir,
.iterate = scoutfs_readdir,
#ifdef KC_FMODE_KABI_ITERATE
.open = scoutfs_dir_open,
#endif
@@ -1977,6 +2053,9 @@ const struct inode_operations scoutfs_dir_iops = {
#endif
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
#ifdef KC_GET_ACL_DENTRY
.set_acl = scoutfs_set_acl,
#endif
.symlink = scoutfs_symlink,
.permission = scoutfs_permission,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER

View File

@@ -25,6 +25,7 @@
#include "sysfs.h"
#include "server.h"
#include "fence.h"
#include "net.h"
/*
* Fencing ensures that a given mount can no longer write to the
@@ -79,7 +80,7 @@ struct pending_fence {
struct timer_list timer;
ktime_t start_kt;
__be32 ipv4_addr;
union scoutfs_inet_addr addr;
bool fenced;
bool error;
int reason;
@@ -105,12 +106,12 @@ static ssize_t elapsed_secs_show(struct kobject *kobj,
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
ktime_t now = ktime_get();
struct timeval tv = { 0, };
ktime_t t = ns_to_ktime(0);
if (ktime_after(now, fence->start_kt))
tv = ktime_to_timeval(ktime_sub(now, fence->start_kt));
t = ktime_sub(now, fence->start_kt);
return snprintf(buf, PAGE_SIZE, "%llu", (long long)tv.tv_sec);
return snprintf(buf, PAGE_SIZE, "%llu", (long long)ktime_divns(t, NSEC_PER_SEC));
}
SCOUTFS_ATTR_RO(elapsed_secs);
@@ -171,14 +172,19 @@ static ssize_t error_store(struct kobject *kobj, struct kobj_attribute *attr, co
}
SCOUTFS_ATTR_RW(error);
static ssize_t ipv4_addr_show(struct kobject *kobj,
static ssize_t inet_addr_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
struct sockaddr_storage sin;
return snprintf(buf, PAGE_SIZE, "%pI4", &fence->ipv4_addr);
memset(&sin, 0, sizeof(struct sockaddr_storage));
scoutfs_addr_to_sin(&sin, &fence->addr);
return snprintf(buf, PAGE_SIZE, "%pISc", SIN_ARG(&sin));
}
SCOUTFS_ATTR_RO(ipv4_addr);
SCOUTFS_ATTR_RO(inet_addr);
static ssize_t reason_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
@@ -212,7 +218,7 @@ static struct attribute *fence_attrs[] = {
SCOUTFS_ATTR_PTR(elapsed_secs),
SCOUTFS_ATTR_PTR(fenced),
SCOUTFS_ATTR_PTR(error),
SCOUTFS_ATTR_PTR(ipv4_addr),
SCOUTFS_ATTR_PTR(inet_addr),
SCOUTFS_ATTR_PTR(reason),
SCOUTFS_ATTR_PTR(rid),
NULL,
@@ -232,7 +238,7 @@ static void fence_timeout(struct timer_list *timer)
wake_up(&fi->waitq);
}
int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int reason)
int scoutfs_fence_start(struct super_block *sb, u64 rid, union scoutfs_inet_addr *addr, int reason)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
@@ -248,7 +254,7 @@ int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int r
scoutfs_sysfs_init_attrs(sb, &fence->ssa);
fence->start_kt = ktime_get();
fence->ipv4_addr = ipv4_addr;
memcpy(&fence->addr, addr, sizeof(union scoutfs_inet_addr));
fence->fenced = false;
fence->error = false;
fence->reason = reason;

View File

@@ -7,7 +7,7 @@ enum {
SCOUTFS_FENCE_QUORUM_BLOCK_LEADER,
};
int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int reason);
int scoutfs_fence_start(struct super_block *sb, u64 rid, union scoutfs_inet_addr *addr, int reason);
int scoutfs_fence_next(struct super_block *sb, u64 *rid, int *reason, bool *error);
int scoutfs_fence_reason_pending(struct super_block *sb, int reason);
int scoutfs_fence_free(struct super_block *sb, u64 rid);

View File

@@ -267,7 +267,8 @@ out:
}
#endif
int scoutfs_permission(struct inode *inode, int mask)
int scoutfs_permission(KC_VFS_NS_DEF
struct inode *inode, int mask)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *inode_lock = NULL;
@@ -281,7 +282,8 @@ int scoutfs_permission(struct inode *inode, int mask)
if (ret)
return ret;
ret = generic_permission(inode, mask);
ret = generic_permission(KC_VFS_INIT_NS
inode, mask);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);

View File

@@ -10,7 +10,8 @@ ssize_t scoutfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
ssize_t scoutfs_file_read_iter(struct kiocb *, struct iov_iter *);
ssize_t scoutfs_file_write_iter(struct kiocb *, struct iov_iter *);
#endif
int scoutfs_permission(struct inode *inode, int mask);
int scoutfs_permission(KC_VFS_NS_DEF
struct inode *inode, int mask);
loff_t scoutfs_file_llseek(struct file *file, loff_t offset, int whence);
#endif /* _SCOUTFS_FILE_H_ */

View File

@@ -470,7 +470,7 @@ struct scoutfs_srch_compact {
* @get_trans_seq, @commit_trans_seq: These pair of sequence numbers
* determine if a transaction is currently open for the mount that owns
* the log_trees struct. get_trans_seq is advanced by the server as the
* transaction is opened. The server sets comimt_trans_seq equal to
* transaction is opened. The server sets commit_trans_seq equal to
* get_ as the transaction is committed.
*/
struct scoutfs_log_trees {
@@ -1091,7 +1091,8 @@ enum scoutfs_net_cmd {
EXPAND_NET_ERRNO(ENOMEM) \
EXPAND_NET_ERRNO(EIO) \
EXPAND_NET_ERRNO(ENOSPC) \
EXPAND_NET_ERRNO(EINVAL)
EXPAND_NET_ERRNO(EINVAL) \
EXPAND_NET_ERRNO(ENOLINK)
#undef EXPAND_NET_ERRNO
#define EXPAND_NET_ERRNO(which) SCOUTFS_NET_ERR_##which,

View File

@@ -150,6 +150,9 @@ static const struct inode_operations scoutfs_file_iops = {
#endif
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
#ifdef KC_GET_ACL_DENTRY
.set_acl = scoutfs_set_acl,
#endif
.fiemap = scoutfs_data_fiemap,
};
@@ -163,6 +166,9 @@ static const struct inode_operations scoutfs_special_iops = {
#endif
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
#ifdef KC_GET_ACL_DENTRY
.set_acl = scoutfs_set_acl,
#endif
};
/*
@@ -373,7 +379,8 @@ int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
{
struct inode *inode = dentry->d_inode;
#else
int scoutfs_getattr(const struct path *path, struct kstat *stat,
int scoutfs_getattr(KC_VFS_NS_DEF
const struct path *path, struct kstat *stat,
u32 request_mask, unsigned int query_flags)
{
struct inode *inode = d_inode(path->dentry);
@@ -385,7 +392,8 @@ int scoutfs_getattr(const struct path *path, struct kstat *stat,
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (ret == 0) {
generic_fillattr(inode, stat);
generic_fillattr(KC_VFS_INIT_NS
inode, stat);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
}
return ret;
@@ -483,7 +491,8 @@ int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock)
* re-acquire it. Ideally we'd fix this so that we can acquire the lock
* instead of the caller.
*/
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr)
int scoutfs_setattr(KC_VFS_NS_DEF
struct dentry *dentry, struct iattr *attr)
{
struct inode *inode = dentry->d_inode;
struct super_block *sb = inode->i_sb;
@@ -501,7 +510,8 @@ retry:
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (ret)
return ret;
ret = setattr_prepare(dentry, attr);
ret = setattr_prepare(KC_VFS_INIT_NS
dentry, attr);
if (ret)
goto out;
@@ -565,7 +575,8 @@ retry:
if (ret < 0)
goto release;
setattr_copy(inode, attr);
setattr_copy(KC_VFS_INIT_NS
inode, attr);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
@@ -979,10 +990,10 @@ static bool inode_has_index(umode_t mode, u8 type)
}
}
static int cmp_index_lock(void *priv, struct list_head *A, struct list_head *B)
static int cmp_index_lock(void *priv, KC_LIST_CMP_CONST struct list_head *A, KC_LIST_CMP_CONST struct list_head *B)
{
struct index_lock *a = list_entry(A, struct index_lock, head);
struct index_lock *b = list_entry(B, struct index_lock, head);
KC_LIST_CMP_CONST struct index_lock *a = list_entry(A, KC_LIST_CMP_CONST struct index_lock, head);
KC_LIST_CMP_CONST struct index_lock *b = list_entry(B, KC_LIST_CMP_CONST struct index_lock, head);
return ((int)a->type - (int)b->type) ?:
scoutfs_cmp_u64s(a->major, b->major) ?:
@@ -1471,12 +1482,6 @@ static int remove_index_items(struct super_block *sb, u64 ino,
* Return an allocated and unused inode number. Returns -ENOSPC if
* we're out of inode.
*
* Each parent directory has its own pool of free inode numbers. Items
* are sorted by their inode numbers as they're stored in segments.
* This will tend to group together files that are created in a
* directory at the same time in segments. Concurrent creation across
* different directories will be stored in their own regions.
*
* Inode numbers are never reclaimed. If the inode is evicted or we're
* unmounted the pending inode numbers will be lost. Asking for a
* relatively small number from the server each time will tend to
@@ -1486,12 +1491,18 @@ static int remove_index_items(struct super_block *sb, u64 ino,
int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret)
{
DECLARE_INODE_SB_INFO(sb, inf);
struct scoutfs_mount_options opts;
struct inode_allocator *ia;
u64 ino;
u64 nr;
int ret;
ia = is_dir ? &inf->dir_ino_alloc : &inf->ino_alloc;
scoutfs_options_read(sb, &opts);
if (is_dir && opts.ino_alloc_per_lock == SCOUTFS_LOCK_INODE_GROUP_NR)
ia = &inf->dir_ino_alloc;
else
ia = &inf->ino_alloc;
spin_lock(&ia->lock);
@@ -1512,6 +1523,17 @@ int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret)
*ino_ret = ia->ino++;
ia->nr--;
if (opts.ino_alloc_per_lock != SCOUTFS_LOCK_INODE_GROUP_NR) {
nr = ia->ino & SCOUTFS_LOCK_INODE_GROUP_MASK;
if (nr >= opts.ino_alloc_per_lock) {
nr = SCOUTFS_LOCK_INODE_GROUP_NR - nr;
if (nr > ia->nr)
nr = ia->nr;
ia->ino += nr;
ia->nr -= nr;
}
}
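When the option is smaller than the lock group size, each group only hands out ino_alloc_per_lock numbers before the allocator jumps to the next group boundary, spreading new inodes over more inode locks. A worked userspace sketch of that arithmetic; the group size of 1024 and the per-lock limit of 64 are illustrative stand-ins, not scoutfs's actual constants:
#include <stdio.h>

#define GROUP_NR	1024ULL		/* illustrative lock group size */
#define GROUP_MASK	(GROUP_NR - 1)
#define PER_LOCK	64ULL		/* illustrative ino_alloc_per_lock */

static unsigned long long alloc_ino(unsigned long long *ino, unsigned long long *nr)
{
	unsigned long long ret = (*ino)++;
	unsigned long long used;

	(*nr)--;
	used = *ino & GROUP_MASK;
	if (used >= PER_LOCK) {			/* used this group's share */
		unsigned long long skip = GROUP_NR - used;

		if (skip > *nr)
			skip = *nr;
		*ino += skip;			/* jump to the next group */
		*nr -= skip;
	}
	return ret;
}

int main(void)
{
	unsigned long long ino = 5 * GROUP_NR, nr = 10000;

	/* prints 5182, 5183, then the jump to 6144, 6145 */
	for (int i = 0; i < 66; i++) {
		unsigned long long got = alloc_ino(&ino, &nr);

		if (i >= 62)
			printf("alloc %d -> ino %llu\n", i, got);
	}
	return 0;
}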
spin_unlock(&ia->lock);
ret = 0;
out:
@@ -1562,7 +1584,8 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
scoutfs_inode_set_data_seq(inode);
inode->i_ino = ino; /* XXX overflow */
inode_init_owner(inode, dir, mode);
inode_init_owner(KC_VFS_INIT_NS
inode, dir, mode);
inode_set_bytes(inode, 0);
inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
inode->i_rdev = rdev;
@@ -1848,6 +1871,9 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
goto out;
ret = delete_inode_items(sb, ino, &sinode, lock, orph_lock);
if (ret == 0)
scoutfs_inc_counter(sb, inode_deleted);
out:
if (clear_trying)
clear_bit(bit_nr, ldata->trying);
@@ -1956,6 +1982,8 @@ static void iput_worker(struct work_struct *work)
while (count-- > 0)
iput(inode);
cond_resched();
/* can't touch inode after final iput */
spin_lock(&inf->iput_lock);
@@ -2177,7 +2205,7 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
struct scoutfs_inode_info *si;
struct scoutfs_inode_info *tmp;
struct inode *inode;
int ret;
int ret = 0;
spin_lock(&inf->writeback_lock);

View File

@@ -135,10 +135,12 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock);
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat);
#else
int scoutfs_getattr(const struct path *path, struct kstat *stat,
int scoutfs_getattr(KC_VFS_NS_DEF
const struct path *path, struct kstat *stat,
u32 request_mask, unsigned int query_flags);
#endif
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
int scoutfs_setattr(KC_VFS_NS_DEF
struct dentry *dentry, struct iattr *attr);
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
struct scoutfs_lock *primary);

View File

@@ -23,6 +23,7 @@
#include <linux/aio.h>
#include <linux/list_sort.h>
#include <linux/backing-dev.h>
#include <linux/overflow.h>
#include "format.h"
#include "key.h"
@@ -47,6 +48,7 @@
#include "wkic.h"
#include "quota.h"
#include "scoutfs_trace.h"
#include "util.h"
/*
* We make inode index items coherent by locking fixed size regions of
@@ -56,25 +58,23 @@
* key space after we find no items in a given lock region. This is
* relatively cheap because reading is going to check the segments
* anyway.
*
* This is copying to userspace while holding a read lock. This is safe
* because faulting can send a request for a write lock while the read
* lock is being used. The cluster locks don't block tasks in a node,
* they match and the tasks fall back to local locking. In this case
* the spin locks around the item cache.
*/
static long scoutfs_ioc_walk_inodes(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_walk_inodes __user *uwalk = (void __user *)arg;
struct scoutfs_ioctl_walk_inodes walk;
struct scoutfs_ioctl_walk_inodes_entry ent;
struct scoutfs_ioctl_walk_inodes_entry *ent = NULL;
struct scoutfs_ioctl_walk_inodes_entry *end;
struct scoutfs_key next_key;
struct scoutfs_key last_key;
struct scoutfs_key key;
struct scoutfs_lock *lock;
struct page *page = NULL;
u64 last_seq;
u64 entries = 0;
int ret = 0;
int complete = 0;
u32 nr = 0;
u8 type;
@@ -105,6 +105,10 @@ static long scoutfs_ioc_walk_inodes(struct file *file, unsigned long arg)
}
}
page = alloc_page(GFP_KERNEL);
if (!page)
return -ENOMEM;
scoutfs_inode_init_index_key(&key, type, walk.first.major,
walk.first.minor, walk.first.ino);
scoutfs_inode_init_index_key(&last_key, type, walk.last.major,
@@ -113,77 +117,107 @@ static long scoutfs_ioc_walk_inodes(struct file *file, unsigned long arg)
/* cap nr to the max the ioctl can return to a compat task */
walk.nr_entries = min_t(u64, walk.nr_entries, INT_MAX);
ret = scoutfs_lock_inode_index(sb, SCOUTFS_LOCK_READ, type,
walk.first.major, walk.first.ino,
&lock);
if (ret < 0)
goto out;
end = page_address(page) + PAGE_SIZE;
for (nr = 0; nr < walk.nr_entries; ) {
/* outer loop */
for (nr = 0;;) {
ent = page_address(page);
/* make sure _pad and minor are zeroed */
memset(ent, 0, PAGE_SIZE);
ret = scoutfs_item_next(sb, &key, &last_key, NULL, 0, lock);
if (ret < 0 && ret != -ENOENT)
ret = scoutfs_lock_inode_index(sb, SCOUTFS_LOCK_READ, type,
le64_to_cpu(key.skii_major),
le64_to_cpu(key.skii_ino),
&lock);
if (ret)
break;
if (ret == -ENOENT) {
/* done if lock covers last iteration key */
if (scoutfs_key_compare(&last_key, &lock->end) <= 0) {
ret = 0;
/* inner loop 1 */
while (ent + 1 < end) {
ret = scoutfs_item_next(sb, &key, &last_key, NULL, 0, lock);
if (ret < 0 && ret != -ENOENT)
break;
if (ret == -ENOENT) {
/* done if lock covers last iteration key */
if (scoutfs_key_compare(&last_key, &lock->end) <= 0) {
ret = 0;
complete = 1;
break;
}
/* continue iterating after locked empty region */
key = lock->end;
scoutfs_key_inc(&key);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
/* avoid double-unlocking here after break */
lock = NULL;
ret = scoutfs_forest_next_hint(sb, &key, &next_key);
if (ret < 0 && ret != -ENOENT)
break;
if (ret == -ENOENT ||
scoutfs_key_compare(&next_key, &last_key) > 0) {
ret = 0;
complete = 1;
break;
}
key = next_key;
ret = scoutfs_lock_inode_index(sb, SCOUTFS_LOCK_READ,
type,
le64_to_cpu(key.skii_major),
le64_to_cpu(key.skii_ino),
&lock);
if (ret)
break;
continue;
}
/* continue iterating after locked empty region */
key = lock->end;
ent->major = le64_to_cpu(key.skii_major);
ent->ino = le64_to_cpu(key.skii_ino);
scoutfs_key_inc(&key);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
ent++;
entries++;
ret = scoutfs_forest_next_hint(sb, &key, &next_key);
if (ret < 0 && ret != -ENOENT)
goto out;
if (nr + entries >= walk.nr_entries) {
complete = 1;
break;
}
}
if (ret == -ENOENT ||
scoutfs_key_compare(&next_key, &last_key) > 0) {
ret = 0;
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
if (ret < 0)
break;
/* inner loop 2 */
ent = page_address(page);
for (; entries > 0; entries--) {
if (copy_to_user((void __user *)walk.entries_ptr, ent,
sizeof(struct scoutfs_ioctl_walk_inodes_entry))) {
ret = -EFAULT;
goto out;
}
key = next_key;
ret = scoutfs_lock_inode_index(sb, SCOUTFS_LOCK_READ,
key.sk_type,
le64_to_cpu(key.skii_major),
le64_to_cpu(key.skii_ino),
&lock);
if (ret < 0)
goto out;
continue;
walk.entries_ptr += sizeof(struct scoutfs_ioctl_walk_inodes_entry);
ent++;
nr++;
}
ent.major = le64_to_cpu(key.skii_major);
ent.minor = 0;
ent.ino = le64_to_cpu(key.skii_ino);
if (copy_to_user((void __user *)walk.entries_ptr, &ent,
sizeof(ent))) {
ret = -EFAULT;
if (complete)
break;
}
nr++;
walk.entries_ptr += sizeof(ent);
scoutfs_key_inc(&key);
}
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
out:
if (page)
__free_page(page);
if (nr > 0)
ret = nr;
return ret;
}
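The reworked loop above (and the similar change to get_allocated_inos() further down) replaces one copy_to_user() per entry with a single bulk copy per page of staged entries. A stripped-down sketch of that staging pattern, with a hypothetical next_entry() item source standing in for scoutfs_item_next(), might look like this:

static long walk_batched(void __user *uptr, u64 nr_wanted)
{
	struct entry { u64 major, ino; } *first, *ent, *end;
	struct page *page;
	u64 nr = 0;
	int ret = 0;

	page = alloc_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;
	first = page_address(page);
	end = page_address(page) + PAGE_SIZE;

	while (nr < nr_wanted && ret == 0) {
		u64 batch = 0;

		memset(first, 0, PAGE_SIZE);	/* zero any padding fields */

		/* stage entries into the page, typically under a lock */
		for (ent = first; ent + 1 <= end && nr + batch < nr_wanted; ent++) {
			ret = next_entry(ent);	/* hypothetical: 0 ok, -ENOENT done */
			if (ret)
				break;
			batch++;
		}
		if (ret == -ENOENT)
			ret = 0;
		if (ret < 0)
			break;

		/* one bulk copy per page instead of one per entry */
		if (batch && copy_to_user(uptr, first, batch * sizeof(*ent))) {
			ret = -EFAULT;
			break;
		}
		uptr += batch * sizeof(*ent);
		nr += batch;
		if (batch == 0)
			break;
	}

	__free_page(page);
	return nr ? nr : ret;
}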
@@ -288,6 +322,7 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
u64 online;
u64 offline;
u64 isize;
__u64 tmp;
int ret;
if (copy_from_user(&args, (void __user *)arg, sizeof(args)))
@@ -297,12 +332,11 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
if (args.length == 0)
return 0;
if (((args.offset + args.length) < args.offset) ||
if ((check_add_overflow(args.offset, args.length - 1, &tmp)) ||
(args.offset & SCOUTFS_BLOCK_SM_MASK) ||
(args.length & SCOUTFS_BLOCK_SM_MASK))
return -EINVAL;
ret = mnt_want_write_file(file);
if (ret)
return ret;
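The replaced ((offset + length) < offset) test rejects a range whose last byte is the maximum offset, because offset + length wraps to exactly zero even though the range itself is valid. Checking whether offset + length - 1 is representable, as the new check_add_overflow() form does here (and in the matching scoutfs_ioc_move_blocks change further down), accepts that case while still rejecting genuinely wrapping ranges. A small userspace equivalent using the same compiler builtin that <linux/overflow.h> wraps:

#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the new range check: reject a range only when its last
 * byte, offset + length - 1, can't be represented.  A range ending
 * exactly at UINT64_MAX is accepted, which the old
 * (offset + length) < offset test would have rejected.
 */
static bool range_wraps(uint64_t offset, uint64_t length)
{
	uint64_t last;

	if (length == 0)
		return false;	/* the ioctl returns early for empty ranges */

	return __builtin_add_overflow(offset, length - 1, &last);
}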
@@ -407,8 +441,6 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
if (!S_ISREG(inode->i_mode)) {
ret = -EINVAL;
} else if (scoutfs_inode_data_version(inode) != args.data_version) {
ret = -ESTALE;
} else {
ret = scoutfs_data_wait_err(inode, sblock, eblock, args.op,
args.err);
@@ -522,7 +554,9 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
}
si->staging = true;
#ifdef KC_CURRENT_BACKING_DEV_INFO
current->backing_dev_info = inode_to_bdi(inode);
#endif
pos = args.offset;
written = 0;
@@ -535,7 +569,9 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
} while (ret > 0 && written < args.length);
si->staging = false;
#ifdef KC_CURRENT_BACKING_DEV_INFO
current->backing_dev_info = NULL;
#endif
out:
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
@@ -674,8 +710,8 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
goto out;
}
iax->x_mask = SCOUTFS_IOC_IAX_DATA_VERSION | SCOUTFS_IOC_IAX_CTIME |
SCOUTFS_IOC_IAX_CRTIME | SCOUTFS_IOC_IAX_SIZE;
iax->x_mask = SCOUTFS_IOC_IAX_CTIME | SCOUTFS_IOC_IAX_CRTIME |
SCOUTFS_IOC_IAX_SIZE;
iax->data_version = sm.data_version;
iax->ctime_sec = sm.ctime_sec;
iax->ctime_nsec = sm.ctime_nsec;
@@ -686,6 +722,9 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
if (sm.flags & SCOUTFS_IOC_SETATTR_MORE_OFFLINE)
iax->x_flags |= SCOUTFS_IOC_IAX_F_SIZE_OFFLINE;
if (sm.data_version != 0)
iax->x_mask |= SCOUTFS_IOC_IAX_DATA_VERSION;
ret = mnt_want_write_file(file);
if (ret < 0)
goto out;
@@ -713,7 +752,8 @@ static long scoutfs_ioc_listxattr_hidden(struct file *file, unsigned long arg)
int total = 0;
int ret;
ret = inode_permission(inode, MAY_READ);
ret = inode_permission(KC_VFS_INIT_NS
inode, MAY_READ);
if (ret < 0)
goto out;
@@ -912,6 +952,9 @@ static int copy_alloc_detail_to_user(struct super_block *sb, void *arg,
if (args->copied == args->nr)
return -EOVERFLOW;
/* .type and .pad need clearing */
memset(&ade, 0, sizeof(struct scoutfs_ioctl_alloc_detail_entry));
ade.blocks = blocks;
ade.id = id;
ade.meta = !!meta;
@@ -951,6 +994,7 @@ static long scoutfs_ioc_move_blocks(struct file *file, unsigned long arg)
struct scoutfs_ioctl_move_blocks mb;
struct file *from_file;
struct inode *from;
u64 tmp;
int ret;
if (copy_from_user(&mb, umb, sizeof(mb)))
@@ -959,8 +1003,8 @@ static long scoutfs_ioc_move_blocks(struct file *file, unsigned long arg)
if (mb.len == 0)
return 0;
if (mb.from_off + mb.len < mb.from_off ||
mb.to_off + mb.len < mb.to_off)
if ((check_add_overflow(mb.from_off, mb.len - 1, &tmp)) ||
(check_add_overflow(mb.to_off, mb.len - 1, &tmp)))
return -EOVERFLOW;
from_file = fget(mb.from_fd);
@@ -1152,11 +1196,15 @@ static long scoutfs_ioc_get_allocated_inos(struct file *file, unsigned long arg)
struct scoutfs_lock *lock = NULL;
struct scoutfs_key key;
struct scoutfs_key end;
struct page *page = NULL;
u64 __user *uinos;
u64 bytes;
u64 ino;
u64 *ino;
u64 *ino_end;
int entries = 0;
int nr;
int ret;
int complete = 0;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
@@ -1178,47 +1226,83 @@ static long scoutfs_ioc_get_allocated_inos(struct file *file, unsigned long arg)
goto out;
}
page = alloc_page(GFP_KERNEL);
if (!page) {
ret = -ENOMEM;
goto out;
}
ino_end = page_address(page) + PAGE_SIZE;
scoutfs_inode_init_key(&key, gai.start_ino);
scoutfs_inode_init_key(&end, gai.start_ino | SCOUTFS_LOCK_INODE_GROUP_MASK);
uinos = (void __user *)gai.inos_ptr;
bytes = gai.inos_bytes;
nr = 0;
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, gai.start_ino, &lock);
if (ret < 0)
goto out;
for (;;) {
while (bytes >= sizeof(*uinos)) {
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, gai.start_ino, &lock);
if (ret < 0)
goto out;
ret = scoutfs_item_next(sb, &key, &end, NULL, 0, lock);
if (ret < 0) {
if (ret == -ENOENT)
ino = page_address(page);
while (ino < ino_end) {
ret = scoutfs_item_next(sb, &key, &end, NULL, 0, lock);
if (ret < 0) {
if (ret == -ENOENT) {
ret = 0;
complete = 1;
}
break;
}
if (key.sk_zone != SCOUTFS_FS_ZONE) {
ret = 0;
break;
complete = 1;
break;
}
/* all fs items are owned by allocated inodes, and _first is always ino */
*ino = le64_to_cpu(key._sk_first);
scoutfs_inode_init_key(&key, *ino + 1);
ino++;
entries++;
nr++;
bytes -= sizeof(*uinos);
if (bytes < sizeof(*uinos)) {
complete = 1;
break;
}
if (nr == INT_MAX) {
complete = 1;
break;
}
}
if (key.sk_zone != SCOUTFS_FS_ZONE) {
ret = 0;
break;
}
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
/* all fs items are owned by allocated inodes, and _first is always ino */
ino = le64_to_cpu(key._sk_first);
if (put_user(ino, uinos)) {
if (ret < 0)
break;
ino = page_address(page);
if (copy_to_user(uinos, ino, entries * sizeof(*uinos))) {
ret = -EFAULT;
break;
goto out;
}
uinos++;
bytes -= sizeof(*uinos);
if (++nr == INT_MAX)
uinos += entries;
entries = 0;
if (complete)
break;
scoutfs_inode_init_key(&key, ino + 1);
}
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
out:
if (page)
__free_page(page);
return ret ?: nr;
}
@@ -1286,7 +1370,7 @@ static long scoutfs_ioc_get_referring_entries(struct file *file, unsigned long a
ent.d_type = bref->d_type;
ent.name_len = name_len;
if (copy_to_user(uent, &ent, sizeof(struct scoutfs_ioctl_dirent)) ||
if (copy_to_user(uent, &ent, offsetof(struct scoutfs_ioctl_dirent, name[0])) ||
copy_to_user(&uent->name[0], bref->dent.name, name_len) ||
put_user('\0', &uent->name[name_len])) {
ret = -EFAULT;

View File

@@ -366,10 +366,15 @@ struct scoutfs_ioctl_statfs_more {
*
* Find current waiters that match the inode, op, and block range to wake
* up and return an error.
*
* (*) ca. v1.25 and earlier required that the data_version passed in
* match that of the waiter, but this check has been removed. It was
* never needed because no data is modified during this ioctl. The
* data_version value is now ignored.
*/
struct scoutfs_ioctl_data_wait_err {
__u64 ino;
__u64 data_version;
__u64 data_version; /* Ignored, see above (*) */
__u64 offset;
__u64 count;
__u64 op;

View File

@@ -86,6 +86,8 @@ struct item_cache_info {
/* often walked, but per-cpu refs are fast path */
rwlock_t rwlock;
struct rb_root pg_root;
/* stop readers from caching stale items in place of written, cleaned, and reclaimed items */
u64 read_dirty_barrier;
/* page-granular modification by writers, then exclusive to commit */
spinlock_t dirty_lock;
@@ -96,10 +98,6 @@ struct item_cache_info {
spinlock_t lru_lock;
struct list_head lru_list;
unsigned long lru_pages;
/* written by page readers, read by shrink */
spinlock_t active_lock;
struct list_head active_list;
};
#define DECLARE_ITEM_CACHE_INFO(sb, name) \
@@ -1285,78 +1283,6 @@ static int cache_empty_page(struct super_block *sb,
return 0;
}
/*
* Readers operate independently from dirty items and transactions.
* They read a set of persistent items and insert them into the cache
* when there aren't already pages whose key range contains the items.
* This naturally prefers cached dirty items over stale read items.
*
* We have to deal with the case where dirty items are written and
* invalidated while a read is in flight. The reader won't have seen
* the items that were dirty in their persistent roots as they started
* reading. By the time they insert their read pages the previously
* dirty items have been reclaimed and are not in the cache. The old
* stale items will be inserted in their place, effectively corrupting
* by having the dirty items disappear.
*
* We fix this by tracking the max seq of items in pages. As readers
* start they record the current transaction seq. Invalidation skips
* pages with a max seq greater than the first reader seq because the
* items in the page have to stick around to prevent the readers stale
* items from being inserted.
*
* This naturally only affects a small set of pages with items that were
* written relatively recently. If we're under memory pressure then we
* probably have a lot of pages and they'll naturally have items that
* were visible to any readers. We don't bother with the complicated and
* expensive further refinement of tracking the ranges that are being
* read and comparing those with pages to invalidate.
*/
struct active_reader {
struct list_head head;
u64 seq;
};
#define INIT_ACTIVE_READER(rdr) \
struct active_reader rdr = { .head = LIST_HEAD_INIT(rdr.head) }
static void add_active_reader(struct super_block *sb, struct active_reader *active)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
BUG_ON(!list_empty(&active->head));
active->seq = scoutfs_trans_sample_seq(sb);
spin_lock(&cinf->active_lock);
list_add_tail(&active->head, &cinf->active_list);
spin_unlock(&cinf->active_lock);
}
static u64 first_active_reader_seq(struct item_cache_info *cinf)
{
struct active_reader *active;
u64 first;
/* only the calling task adds or deletes this active */
spin_lock(&cinf->active_lock);
active = list_first_entry_or_null(&cinf->active_list, struct active_reader, head);
first = active ? active->seq : U64_MAX;
spin_unlock(&cinf->active_lock);
return first;
}
static void del_active_reader(struct item_cache_info *cinf, struct active_reader *active)
{
/* only the calling task adds or deletes this active */
if (!list_empty(&active->head)) {
spin_lock(&cinf->active_lock);
list_del_init(&active->head);
spin_unlock(&cinf->active_lock);
}
}
/*
* Add a newly read item to the pages that we're assembling for
* insertion into the cache. These pages are private, they only exist
@@ -1450,24 +1376,34 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
* and duplicates, we insert any resulting pages which don't overlap
* with existing cached pages.
*
* We only insert uncached regions because this is called with cluster
* locks held, but without locking the cache. The regions we read can
* be stale with respect to the current cache, which can be read and
* dirtied by other cluster lock holders on our node, but the cluster
* locks protect the stable items we read. Invalidation is careful not
* to drop pages that have items that we couldn't see because they were
* dirty when we started reading.
*
* The forest item reader is reading stable trees that could be
* overwritten. It can return -ESTALE which we return to the caller who
* will retry the operation and work with a new set of more recent
* btrees.
*
* We only insert uncached regions because this is called with cluster
* locks held, but without locking the cache. The regions we read can
* be stale with respect to the current cache, which can be read and
* dirtied by other cluster lock holders on our node, but the cluster
* locks protect the stable items we read.
*
* Using the presence of locally written dirty pages to override stale
* read pages only works if, well, the more recent locally written pages
* are still present. Readers are totally decoupled from writers and
* can have a set of items that is very old indeed. In the meantime
* more recent items would have been dirtied locally, committed,
* cleaned, and reclaimed. We have a coarse barrier which ensures that
* readers can't insert items read from old roots from before local data
* was written. If a write completes while a read is in progress the
* read will have to retry. The retried read can use cached blocks so
* we're relying on reads being much faster than writes to reduce the
* overhead to mostly cpu work of recollecting the items from cached
* blocks via a more recent root from the server.
*/
static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
struct scoutfs_key *key, struct scoutfs_lock *lock)
{
struct rb_root root = RB_ROOT;
INIT_ACTIVE_READER(active);
struct cached_page *right = NULL;
struct cached_page *pg;
struct cached_page *rd;
@@ -1480,6 +1416,7 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
struct rb_node *par;
struct rb_node *pg_tmp;
struct rb_node *item_tmp;
u64 rdbar;
int pgi;
int ret;
@@ -1493,8 +1430,9 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
pg->end = lock->end;
rbtree_insert(&pg->node, NULL, &root.rb_node, &root);
/* set active reader seq before reading persistent roots */
add_active_reader(sb, &active);
read_lock(&cinf->rwlock);
rdbar = cinf->read_dirty_barrier;
read_unlock(&cinf->rwlock);
start = lock->start;
end = lock->end;
@@ -1533,6 +1471,13 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
retry:
write_lock(&cinf->rwlock);
/* can't insert if write has cleaned since we read */
if (cinf->read_dirty_barrier != rdbar) {
scoutfs_inc_counter(sb, item_read_pages_barrier);
ret = -ESTALE;
goto unlock;
}
while ((rd = first_page(&root))) {
pg = page_rbtree_walk(sb, &cinf->pg_root, &rd->start, &rd->end,
@@ -1570,12 +1515,12 @@ retry:
}
}
ret = 0;
unlock:
write_unlock(&cinf->rwlock);
ret = 0;
out:
del_active_reader(cinf, &active);
/* free any pages we left dangling on error */
for_each_page_safe(&root, rd, pg_tmp) {
rbtree_erase(&rd->node, &root);
@@ -1635,6 +1580,7 @@ retry:
ret = read_pages(sb, cinf, key, lock);
if (ret < 0 && ret != -ESTALE)
goto out;
scoutfs_inc_counter(sb, item_read_pages_retry);
goto retry;
}
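A condensed sketch, with invented struct and function names, of the barrier handshake the hunks above implement: readers snapshot the counter before walking persistent roots, the writer bumps it after dirty items have been written and cleaned, and a reader whose snapshot went stale gives up with -ESTALE so the caller retries from a newer root.

struct cache {
	rwlock_t rwlock;
	u64 read_dirty_barrier;
};

static int reader_insert(struct cache *c)
{
	u64 rdbar;

	read_lock(&c->rwlock);
	rdbar = c->read_dirty_barrier;	/* snapshot before reading roots */
	read_unlock(&c->rwlock);

	/* ... long unlocked read of persistent items happens here ... */

	write_lock(&c->rwlock);
	if (c->read_dirty_barrier != rdbar) {
		write_unlock(&c->rwlock);
		return -ESTALE;		/* a write completed; caller retries */
	}
	/* ... safe to insert the pages that were read ... */
	write_unlock(&c->rwlock);
	return 0;
}

static void writer_write_done(struct cache *c)
{
	/* don't let in-flight readers insert items from before this write */
	write_lock(&c->rwlock);
	c->read_dirty_barrier++;
	write_unlock(&c->rwlock);
}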
@@ -2241,18 +2187,18 @@ u64 scoutfs_item_dirty_pages(struct super_block *sb)
return (u64)atomic_read(&cinf->dirty_pages);
}
static int cmp_pg_start(void *priv, struct list_head *A, struct list_head *B)
static int cmp_pg_start(void *priv, KC_LIST_CMP_CONST struct list_head *A, KC_LIST_CMP_CONST struct list_head *B)
{
struct cached_page *a = list_entry(A, struct cached_page, dirty_head);
struct cached_page *b = list_entry(B, struct cached_page, dirty_head);
KC_LIST_CMP_CONST struct cached_page *a = list_entry(A, KC_LIST_CMP_CONST struct cached_page, dirty_head);
KC_LIST_CMP_CONST struct cached_page *b = list_entry(B, KC_LIST_CMP_CONST struct cached_page, dirty_head);
return scoutfs_key_compare(&a->start, &b->start);
}
static int cmp_item_key(void *priv, struct list_head *A, struct list_head *B)
static int cmp_item_key(void *priv, KC_LIST_CMP_CONST struct list_head *A, KC_LIST_CMP_CONST struct list_head *B)
{
struct cached_item *a = list_entry(A, struct cached_item, dirty_head);
struct cached_item *b = list_entry(B, struct cached_item, dirty_head);
KC_LIST_CMP_CONST struct cached_item *a = list_entry(A, KC_LIST_CMP_CONST struct cached_item, dirty_head);
KC_LIST_CMP_CONST struct cached_item *b = list_entry(B, KC_LIST_CMP_CONST struct cached_item, dirty_head);
return scoutfs_key_compare(&a->key, &b->key);
}
@@ -2401,6 +2347,12 @@ out:
* The caller has successfully committed all the dirty btree blocks that
* contained the currently dirty items. Clear all the dirty items and
* pages.
*
* This strange lock/trylock loop comes from sparse issuing spurious
* mismatched context warnings if we do anything (like unlock and relax)
* in the else branch of the failed trylock. We're jumping through
* hoops to not use the else but still drop and reacquire the dirty_lock
* if the trylock fails.
*/
int scoutfs_item_write_done(struct super_block *sb)
{
@@ -2409,40 +2361,35 @@ int scoutfs_item_write_done(struct super_block *sb)
struct cached_item *tmp;
struct cached_page *pg;
retry:
/* don't let read_pages miss written+cleaned items */
write_lock(&cinf->rwlock);
cinf->read_dirty_barrier++;
write_unlock(&cinf->rwlock);
spin_lock(&cinf->dirty_lock);
while ((pg = list_first_entry_or_null(&cinf->dirty_list,
struct cached_page,
dirty_head))) {
if (!write_trylock(&pg->rwlock)) {
while ((pg = list_first_entry_or_null(&cinf->dirty_list, struct cached_page, dirty_head))) {
if (write_trylock(&pg->rwlock)) {
spin_unlock(&cinf->dirty_lock);
cpu_relax();
goto retry;
}
list_for_each_entry_safe(item, tmp, &pg->dirty_list,
dirty_head) {
clear_item_dirty(sb, cinf, pg, item);
if (item->delta)
scoutfs_inc_counter(sb, item_delta_written);
/* free deletion items */
if (item->deletion || item->delta)
erase_item(pg, item);
else
item->persistent = 1;
}
write_unlock(&pg->rwlock);
spin_lock(&cinf->dirty_lock);
}
spin_unlock(&cinf->dirty_lock);
list_for_each_entry_safe(item, tmp, &pg->dirty_list,
dirty_head) {
clear_item_dirty(sb, cinf, pg, item);
if (item->delta)
scoutfs_inc_counter(sb, item_delta_written);
/* free deletion items */
if (item->deletion || item->delta)
erase_item(pg, item);
else
item->persistent = 1;
}
write_unlock(&pg->rwlock);
spin_lock(&cinf->dirty_lock);
}
} while (pg);
spin_unlock(&cinf->dirty_lock);
return 0;
@@ -2597,24 +2544,15 @@ static unsigned long item_cache_scan_objects(struct shrinker *shrink,
struct cached_page *tmp;
struct cached_page *pg;
unsigned long freed = 0;
u64 first_reader_seq;
int nr = sc->nr_to_scan;
scoutfs_inc_counter(sb, item_cache_scan_objects);
/* can't invalidate pages with items that weren't visible to first reader */
first_reader_seq = first_active_reader_seq(cinf);
write_lock(&cinf->rwlock);
spin_lock(&cinf->lru_lock);
list_for_each_entry_safe(pg, tmp, &cinf->lru_list, lru_head) {
if (first_reader_seq <= pg->max_seq) {
scoutfs_inc_counter(sb, item_shrink_page_reader);
continue;
}
if (!write_trylock(&pg->rwlock)) {
scoutfs_inc_counter(sb, item_shrink_page_trylock);
continue;
@@ -2681,8 +2619,6 @@ int scoutfs_item_setup(struct super_block *sb)
atomic_set(&cinf->dirty_pages, 0);
spin_lock_init(&cinf->lru_lock);
INIT_LIST_HEAD(&cinf->lru_list);
spin_lock_init(&cinf->active_lock);
INIT_LIST_HEAD(&cinf->active_list);
cinf->pcpu_pages = alloc_percpu(struct item_percpu_pages);
if (!cinf->pcpu_pages)
@@ -2693,7 +2629,7 @@ int scoutfs_item_setup(struct super_block *sb)
KC_INIT_SHRINKER_FUNCS(&cinf->shrinker, item_cache_count_objects,
item_cache_scan_objects);
KC_REGISTER_SHRINKER(&cinf->shrinker);
KC_REGISTER_SHRINKER(&cinf->shrinker, "scoutfs-item:" SCSBF, SCSB_ARGS(sb));
#ifdef KC_CPU_NOTIFIER
cinf->notifier.notifier_call = item_cpu_callback;
register_hotcpu_notifier(&cinf->notifier);
@@ -2715,8 +2651,6 @@ void scoutfs_item_destroy(struct super_block *sb)
int cpu;
if (cinf) {
BUG_ON(!list_empty(&cinf->active_list));
#ifdef KC_CPU_NOTIFIER
unregister_hotcpu_notifier(&cinf->notifier);
#endif

View File

@@ -67,12 +67,11 @@ kc_generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos, loff_t *ppos,
size_t count, ssize_t written)
{
struct file *file = iocb->ki_filp;
ssize_t status;
struct iov_iter i;
iov_iter_init(&i, WRITE, iov, nr_segs, count);
status = generic_perform_write(file, &i, pos);
status = kc_generic_perform_write(iocb, &i, pos);
if (likely(status >= 0)) {
written += status;
@@ -82,3 +81,69 @@ kc_generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
return written ? written : status;
}
#endif
#include <linux/list_lru.h>
#ifdef KC_LIST_LRU_WALK_CB_ITEM_LOCK
static enum lru_status kc_isolate(struct list_head *item, spinlock_t *lock, void *cb_arg)
{
struct kc_isolate_args *args = cb_arg;
/* isolate doesn't use list, nr_items updated in caller */
return args->isolate(item, NULL, args->cb_arg);
}
unsigned long kc_list_lru_walk(struct list_lru *lru, kc_list_lru_walk_cb_t isolate, void *cb_arg,
unsigned long nr_to_walk)
{
struct kc_isolate_args args = {
.isolate = isolate,
.cb_arg = cb_arg,
};
return list_lru_walk(lru, kc_isolate, &args, nr_to_walk);
}
unsigned long kc_list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
kc_list_lru_walk_cb_t isolate, void *cb_arg)
{
struct kc_isolate_args args = {
.isolate = isolate,
.cb_arg = cb_arg,
};
return list_lru_shrink_walk(lru, sc, kc_isolate, &args);
}
#endif
#ifdef KC_LIST_LRU_WALK_CB_LIST_LOCK
static enum lru_status kc_isolate(struct list_head *item, struct list_lru_one *list,
spinlock_t *lock, void *cb_arg)
{
struct kc_isolate_args *args = cb_arg;
return args->isolate(item, list, args->cb_arg);
}
unsigned long kc_list_lru_walk(struct list_lru *lru, kc_list_lru_walk_cb_t isolate, void *cb_arg,
unsigned long nr_to_walk)
{
struct kc_isolate_args args = {
.isolate = isolate,
.cb_arg = cb_arg,
};
return list_lru_walk(lru, kc_isolate, &args, nr_to_walk);
}
unsigned long kc_list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
kc_list_lru_walk_cb_t isolate, void *cb_arg)
{
struct kc_isolate_args args = {
.isolate = isolate,
.cb_arg = cb_arg,
};
return list_lru_shrink_walk(lru, sc, kc_isolate, &args);
}
#endif

View File

@@ -29,50 +29,6 @@ do { \
})
#endif
#ifndef KC_ITERATE_DIR_CONTEXT
typedef filldir_t kc_readdir_ctx_t;
#define KC_DECLARE_READDIR(name, file, dirent, ctx) name(file, dirent, ctx)
#define KC_FOP_READDIR readdir
#define kc_readdir_pos(filp, ctx) (filp)->f_pos
#define kc_dir_emit_dots(file, dirent, ctx) dir_emit_dots(file, dirent, ctx)
#define kc_dir_emit(ctx, dirent, name, name_len, pos, ino, dt) \
(ctx(dirent, name, name_len, pos, ino, dt) == 0)
#else
typedef struct dir_context * kc_readdir_ctx_t;
#define KC_DECLARE_READDIR(name, file, dirent, ctx) name(file, ctx)
#define KC_FOP_READDIR iterate
#define kc_readdir_pos(filp, ctx) (ctx)->pos
#define kc_dir_emit_dots(file, dirent, ctx) dir_emit_dots(file, ctx)
#define kc_dir_emit(ctx, dirent, name, name_len, pos, ino, dt) \
dir_emit(ctx, name, name_len, ino, dt)
#endif
#ifndef KC_DIR_EMIT_DOTS
/*
* Kernels from before ->iterate don't have dir_emit_dots, so we give them
* one that works with the ->readdir() filldir() method.
*/
static inline int dir_emit_dots(struct file *file, void *dirent,
filldir_t filldir)
{
if (file->f_pos == 0) {
if (filldir(dirent, ".", 1, 1,
file->f_path.dentry->d_inode->i_ino, DT_DIR))
return 0;
file->f_pos = 1;
}
if (file->f_pos == 1) {
if (filldir(dirent, "..", 2, 1,
parent_ino(file->f_path.dentry), DT_DIR))
return 0;
file->f_pos = 2;
}
return 1;
}
#endif
#ifdef KC_POSIX_ACL_VALID_USER_NS
#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(user_ns, acl)
#else
@@ -197,7 +153,11 @@ struct timespec64 kc_current_time(struct inode *inode);
} while (0)
#define KC_SHRINKER_CONTAINER_OF(ptr, type) container_of(ptr, type, shrinker)
#define KC_REGISTER_SHRINKER(ptr) (register_shrinker(ptr))
#ifdef KC_SHRINKER_NAME
#define KC_REGISTER_SHRINKER register_shrinker
#else
#define KC_REGISTER_SHRINKER(ptr, fmt, ...) (register_shrinker(ptr))
#endif /* KC_SHRINKER_NAME */
#define KC_UNREGISTER_SHRINKER(ptr) (unregister_shrinker(ptr))
#define KC_SHRINKER_FN(ptr) (ptr)
#else
@@ -224,7 +184,7 @@ struct kc_shrinker_wrapper {
_wrap->shrink.seeks = DEFAULT_SEEKS; \
} while (0)
#define KC_SHRINKER_CONTAINER_OF(ptr, type) container_of(container_of(ptr, struct kc_shrinker_wrapper, shrink), type, shrinker)
#define KC_REGISTER_SHRINKER(ptr) (register_shrinker(ptr.shrink))
#define KC_REGISTER_SHRINKER(ptr, fmt, ...) (register_shrinker(ptr.shrink))
#define KC_UNREGISTER_SHRINKER(ptr) (unregister_shrinker(ptr.shrink))
#define KC_SHRINKER_FN(ptr) (ptr.shrink)
@@ -271,6 +231,262 @@ ssize_t kc_generic_file_buffered_write(struct kiocb *iocb, const struct iovec *i
unsigned long nr_segs, loff_t pos, loff_t *ppos,
size_t count, ssize_t written);
#define generic_file_buffered_write kc_generic_file_buffered_write
#ifdef KC_GENERIC_PERFORM_WRITE_KIOCB_IOV_ITER
static inline int kc_generic_perform_write(struct kiocb *iocb, struct iov_iter *iter, loff_t pos)
{
iocb->ki_pos = pos;
return generic_perform_write(iocb, iter);
}
#else
static inline int kc_generic_perform_write(struct kiocb *iocb, struct iov_iter *iter, loff_t pos)
{
struct file *file = iocb->ki_filp;
return generic_perform_write(file, iter, pos);
}
#endif
#endif // KC_GENERIC_FILE_BUFFERED_WRITE
#ifndef KC_HAVE_BLK_OPF_T
/* typedef __u32 __bitwise blk_opf_t; */
typedef unsigned int blk_opf_t;
#endif
#ifdef KC_LIST_CMP_CONST_ARG_LIST_HEAD
#define KC_LIST_CMP_CONST const
#else
#define KC_LIST_CMP_CONST
#endif
#ifdef KC_VMALLOC_PGPROT_T
#define kc__vmalloc(size, gfp_mask) __vmalloc(size, gfp_mask, PAGE_KERNEL)
#else
#define kc__vmalloc __vmalloc
#endif
#ifdef KC_VFS_METHOD_MNT_IDMAP_ARG
#define KC_VFS_NS_DEF struct mnt_idmap *mnt_idmap,
#define KC_VFS_NS mnt_idmap,
#define KC_VFS_INIT_NS &nop_mnt_idmap,
#else
#ifdef KC_VFS_METHOD_USER_NAMESPACE_ARG
#define KC_VFS_NS_DEF struct user_namespace *mnt_user_ns,
#define KC_VFS_NS mnt_user_ns,
#define KC_VFS_INIT_NS &init_user_ns,
#else
#define KC_VFS_NS_DEF
#define KC_VFS_NS
#define KC_VFS_INIT_NS
#endif
#endif /* KC_VFS_METHOD_MNT_IDMAP_ARG */
#ifdef KC_BIO_ALLOC_DEV_OPF_ARGS
#define kc_bio_alloc bio_alloc
#else
#include <linux/bio.h>
static inline struct bio *kc_bio_alloc(struct block_device *bdev, unsigned short nr_vecs,
blk_opf_t opf, gfp_t gfp_mask)
{
struct bio *b = bio_alloc(gfp_mask, nr_vecs);
if (b) {
kc_bio_set_opf(b, opf);
bio_set_dev(b, bdev);
}
return b;
}
#endif
#ifndef KC_FIEMAP_PREP
#define fiemap_prep(inode, fieinfo, start, len, flags) fiemap_check_flags(fieinfo, flags)
#endif
#ifndef KC_KERNEL_OLD_TIMEVAL_STRUCT
#define __kernel_old_timeval timeval
#define ns_to_kernel_old_timeval(ktime) ns_to_timeval(ktime.tv64)
#endif
#ifdef KC_SOCK_SET_SNDTIMEO
#include <net/sock.h>
static inline int kc_sock_set_sndtimeo(struct socket *sock, s64 secs)
{
sock_set_sndtimeo(sock->sk, secs);
return 0;
}
static inline int kc_tcp_sock_set_rcvtimeo(struct socket *sock, ktime_t to)
{
struct __kernel_old_timeval tv;
sockptr_t kopt;
tv = ns_to_kernel_old_timeval(to);
kopt = KERNEL_SOCKPTR(&tv);
return sock_setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO_NEW,
kopt, sizeof(tv));
}
#else
#include <net/sock.h>
static inline int kc_sock_set_sndtimeo(struct socket *sock, s64 secs)
{
struct timeval tv = { .tv_sec = secs, .tv_usec = 0 };
return kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
(char *)&tv, sizeof(tv));
}
static inline int kc_tcp_sock_set_rcvtimeo(struct socket *sock, ktime_t to)
{
struct __kernel_old_timeval tv;
tv = ns_to_kernel_old_timeval(to);
return kernel_setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
(char *)&tv, sizeof(tv));
}
#endif
#ifdef KC_SETSOCKOPT_SOCKPTR_T
static inline int kc_sock_setsockopt(struct socket *sock, int level, int op, int *optval, unsigned int optlen)
{
sockptr_t kopt = KERNEL_SOCKPTR(optval);
return sock_setsockopt(sock, level, op, kopt, optlen);
}
#else
static inline int kc_sock_setsockopt(struct socket *sock, int level, int op, int *optval, unsigned int optlen)
{
return kernel_setsockopt(sock, level, op, (char *)optval, optlen);
}
#endif
#ifdef KC_HAVE_TCP_SET_SOCKFN
#include <linux/net.h>
#include <net/tcp.h>
static inline int kc_tcp_sock_set_keepintvl(struct socket *sock, int val)
{
return tcp_sock_set_keepintvl(sock->sk, val);
}
static inline int kc_tcp_sock_set_keepidle(struct socket *sock, int val)
{
return tcp_sock_set_keepidle(sock->sk, val);
}
static inline int kc_tcp_sock_set_user_timeout(struct socket *sock, int val)
{
tcp_sock_set_user_timeout(sock->sk, val);
return 0;
}
static inline int kc_tcp_sock_set_nodelay(struct socket *sock)
{
tcp_sock_set_nodelay(sock->sk);
return 0;
}
#else
#include <linux/net.h>
#include <net/tcp.h>
static inline int kc_tcp_sock_set_keepintvl(struct socket *sock, int val)
{
int optval = val;
return kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL, (char *)&optval, sizeof(optval));
}
static inline int kc_tcp_sock_set_keepidle(struct socket *sock, int val)
{
int optval = val;
return kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE, (char *)&optval, sizeof(optval));
}
static inline int kc_tcp_sock_set_user_timeout(struct socket *sock, int val)
{
int optval = val;
return kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT, (char *)&optval, sizeof(optval));
}
static inline int kc_tcp_sock_set_nodelay(struct socket *sock)
{
int optval = 1;
return kernel_setsockopt(sock, SOL_TCP, TCP_NODELAY, (char *)&optval, sizeof(optval));
}
#endif
#ifdef KC_INODE_DIO_END
#define kc_inode_dio_end inode_dio_end
#else
#define kc_inode_dio_end inode_dio_done
#endif
#ifndef KC_MM_VM_FAULT_T
typedef unsigned int vm_fault_t;
static inline vm_fault_t vmf_error(int err)
{
if (err == -ENOMEM)
return VM_FAULT_OOM;
return VM_FAULT_SIGBUS;
}
#endif
#include <linux/list_lru.h>
#ifndef KC_LIST_LRU_SHRINK_COUNT_WALK
/* we don't bother with sc->{nid,memcg} (which don't exist in the oldest kernels) */
static inline unsigned long list_lru_shrink_count(struct list_lru *lru,
struct shrink_control *sc)
{
return list_lru_count(lru);
}
static inline unsigned long
list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
list_lru_walk_cb isolate, void *cb_arg)
{
return list_lru_walk(lru, isolate, cb_arg, sc->nr_to_scan);
}
#endif
#ifndef KC_LIST_LRU_ADD_OBJ
#define list_lru_add_obj list_lru_add
#define list_lru_del_obj list_lru_del
#endif
#if defined(KC_LIST_LRU_WALK_CB_LIST_LOCK) || defined(KC_LIST_LRU_WALK_CB_ITEM_LOCK)
struct list_lru_one;
typedef enum lru_status (*kc_list_lru_walk_cb_t)(struct list_head *item, struct list_lru_one *list,
void *cb_arg);
struct kc_isolate_args {
kc_list_lru_walk_cb_t isolate;
void *cb_arg;
};
unsigned long kc_list_lru_walk(struct list_lru *lru, kc_list_lru_walk_cb_t isolate, void *cb_arg,
unsigned long nr_to_walk);
unsigned long kc_list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
kc_list_lru_walk_cb_t isolate, void *cb_arg);
#else
#define kc_list_lru_shrink_walk list_lru_shrink_walk
#endif
#if defined(KC_LIST_LRU_WALK_CB_ITEM_LOCK)
/* isolate moved by hand, nr_items updated in walk as _REMOVE returned */
static inline void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
struct list_head *head)
{
list_move(item, head);
}
#endif
#ifndef KC_STACK_TRACE_SAVE
#include <linux/stacktrace.h>
static inline unsigned int stack_trace_save(unsigned long *store, unsigned int size,
unsigned int skipnr)
{
struct stack_trace trace = {
.entries = store,
.max_entries = size,
.skip = skipnr,
};
save_stack_trace(&trace);
return trace.nr_entries;
}
static inline void stack_trace_print(unsigned long *entries, unsigned int nr_entries, int spaces)
{
struct stack_trace trace = {
.entries = entries,
.nr_entries = nr_entries,
};
print_stack_trace(&trace, spaces);
}
#endif
#endif

View File

@@ -125,8 +125,8 @@ static inline bool scoutfs_key_is_ones(struct scoutfs_key *key)
* other alternatives across keys that first differ in any of the
* values. Say maybe 20% faster than memcmp.
*/
static inline int scoutfs_key_compare(struct scoutfs_key *a,
struct scoutfs_key *b)
static inline int scoutfs_key_compare(const struct scoutfs_key *a,
const struct scoutfs_key *b)
{
return scoutfs_cmp(a->sk_zone, b->sk_zone) ?:
scoutfs_cmp(le64_to_cpu(a->_sk_first), le64_to_cpu(b->_sk_first)) ?:
@@ -142,10 +142,10 @@ static inline int scoutfs_key_compare(struct scoutfs_key *a,
* 1: a_start > b_end
* else 0: ranges overlap
*/
static inline int scoutfs_key_compare_ranges(struct scoutfs_key *a_start,
struct scoutfs_key *a_end,
struct scoutfs_key *b_start,
struct scoutfs_key *b_end)
static inline int scoutfs_key_compare_ranges(const struct scoutfs_key *a_start,
const struct scoutfs_key *a_end,
const struct scoutfs_key *b_start,
const struct scoutfs_key *b_end)
{
return scoutfs_key_compare(a_end, b_start) < 0 ? -1 :
scoutfs_key_compare(a_start, b_end) > 0 ? 1 :

View File

@@ -168,7 +168,6 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
enum scoutfs_lock_mode prev, enum scoutfs_lock_mode mode)
{
struct scoutfs_lock_coverage *cov;
struct scoutfs_lock_coverage *tmp;
u64 ino, last;
int ret = 0;
@@ -192,19 +191,22 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
/* have to invalidate if we're not in the only usable case */
if (!(prev == SCOUTFS_LOCK_WRITE && mode == SCOUTFS_LOCK_READ)) {
retry:
/* remove cov items to tell users that their cache is stale */
/*
* Remove cov items to tell users that their cache is
* stale. The unlock pattern comes from avoiding spurious
* sparse warnings when taking the else branch of a failed trylock.
*/
spin_lock(&lock->cov_list_lock);
list_for_each_entry_safe(cov, tmp, &lock->cov_list, head) {
if (!spin_trylock(&cov->cov_lock)) {
spin_unlock(&lock->cov_list_lock);
cpu_relax();
goto retry;
while ((cov = list_first_entry_or_null(&lock->cov_list,
struct scoutfs_lock_coverage, head))) {
if (spin_trylock(&cov->cov_lock)) {
list_del_init(&cov->head);
cov->lock = NULL;
spin_unlock(&cov->cov_lock);
scoutfs_inc_counter(sb, lock_invalidate_coverage);
}
list_del_init(&cov->head);
cov->lock = NULL;
spin_unlock(&cov->cov_lock);
scoutfs_inc_counter(sb, lock_invalidate_coverage);
spin_unlock(&lock->cov_list_lock);
spin_lock(&lock->cov_list_lock);
}
spin_unlock(&lock->cov_list_lock);
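The reworked loop above and the similar one in scoutfs_item_write_done() share a shape; a minimal standalone version with hypothetical types makes it easier to see: the trylock has no else branch, which keeps sparse from emitting spurious context-imbalance warnings, and the outer lock is cycled on every pass so a contended holder of the inner lock can make progress.

struct covered {
	struct list_head head;
	spinlock_t lock;
};

static void drain_covered(spinlock_t *outer, struct list_head *list)
{
	struct covered *c;

	spin_lock(outer);
	while ((c = list_first_entry_or_null(list, struct covered, head))) {
		if (spin_trylock(&c->lock)) {
			/* success: detach while holding both locks */
			list_del_init(&c->head);
			spin_unlock(&c->lock);
		}
		/* no else branch; always cycle the outer lock */
		spin_unlock(outer);
		spin_lock(outer);
	}
	spin_unlock(outer);
}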
@@ -302,6 +304,7 @@ static void lock_inc_count(unsigned int *counts, enum scoutfs_lock_mode mode)
static void lock_dec_count(unsigned int *counts, enum scoutfs_lock_mode mode)
{
BUG_ON(mode < 0 || mode >= SCOUTFS_LOCK_NR_MODES);
BUG_ON(counts[mode] == 0);
counts[mode]--;
}
@@ -1732,7 +1735,7 @@ int scoutfs_lock_setup(struct super_block *sb)
linfo->lock_range_tree = RB_ROOT;
KC_INIT_SHRINKER_FUNCS(&linfo->shrinker, lock_count_objects,
lock_scan_objects);
KC_REGISTER_SHRINKER(&linfo->shrinker);
KC_REGISTER_SHRINKER(&linfo->shrinker, "scoutfs-lock:" SCSBF, SCSB_ARGS(sb));
INIT_LIST_HEAD(&linfo->lru_list);
INIT_WORK(&linfo->inv_work, lock_invalidate_worker);
INIT_LIST_HEAD(&linfo->inv_list);

View File

@@ -202,21 +202,48 @@ static u8 invalidation_mode(u8 granted, u8 requested)
/*
* Return true if the client lock instances described by the entries can
* be granted at the same time. Typically this only means they're both
* modes that are compatible between nodes. In addition there's the
* special case where a read lock on a client is compatible with a write
* lock on the same client because the client's cache covered by the
* read lock is still valid if they get a write lock.
* be granted at the same time. There are only three cases where this is
* true.
*
* First, the two locks are both of the same mode that allows full
* sharing -- the read and write-only modes. The only point of these modes is
* that everyone can share them.
*
* Second, a write lock gives the client permission to read as well.
* This means that a client can upgrade its read lock to a write lock
* without having to invalidate the existing read and drop caches.
*
* Third, null locks are always compatible between clients. It's as
* though the client with the null lock has no lock at all. But it's
* never compatible with all locks on the client requesting null.
* Sending invalidations for existing locks on a client when we get a
* null request is how we resolve races in shrinking locks -- we turn it
* into the unsolicited remote invalidation case.
*
* All other mode and client combinations cannot be shared, most
* typically a write lock invalidating all other non-write holders to
* drop caches and force a read after the write has completed.
*/
static bool client_entries_compatible(struct client_lock_entry *granted,
struct client_lock_entry *requested)
{
return (granted->mode == requested->mode &&
(granted->mode == SCOUTFS_LOCK_READ ||
granted->mode == SCOUTFS_LOCK_WRITE_ONLY)) ||
(granted->rid == requested->rid &&
granted->mode == SCOUTFS_LOCK_READ &&
requested->mode == SCOUTFS_LOCK_WRITE);
/* only read and write_only can be fully shared */
if ((granted->mode == requested->mode) &&
(granted->mode == SCOUTFS_LOCK_READ || granted->mode == SCOUTFS_LOCK_WRITE_ONLY))
return true;
/* _write includes reading, so a client can upgrade its read to write */
if (granted->rid == requested->rid &&
granted->mode == SCOUTFS_LOCK_READ &&
requested->mode == SCOUTFS_LOCK_WRITE)
return true;
/* null is always compatible across clients, never within a client */
if ((granted->rid != requested->rid) &&
(granted->mode == SCOUTFS_LOCK_NULL || requested->mode == SCOUTFS_LOCK_NULL))
return true;
return false;
}
/*
@@ -317,16 +344,18 @@ static void put_server_lock(struct lock_server_info *inf,
BUG_ON(!mutex_is_locked(&snode->mutex));
spin_lock(&inf->lock);
if (atomic_dec_and_test(&snode->refcount) &&
list_empty(&snode->granted) &&
list_empty(&snode->requested) &&
list_empty(&snode->invalidated)) {
spin_lock(&inf->lock);
rb_erase(&snode->node, &inf->locks_root);
spin_unlock(&inf->lock);
should_free = true;
}
spin_unlock(&inf->lock);
mutex_unlock(&snode->mutex);
if (should_free) {

View File

@@ -35,6 +35,12 @@ do { \
} \
} while (0) \
#define scoutfs_bug_on_err(sb, err, fmt, args...) \
do { \
__typeof__(err) _err = (err); \
scoutfs_bug_on(sb, _err < 0 && _err != -ENOLINK, fmt, ##args); \
} while (0)
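A hypothetical call site, for illustration only: submit_send() now returns -ENOLINK on a forced unmount (see the net change below), so a caller that treats any other failure as a logic error could write something like:

	ret = send_response(sb, conn, id, &resp);	/* hypothetical helper that ends in submit_send() */
	scoutfs_bug_on_err(sb, ret, "sending response for id %llu", id);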
/*
* Each message is only generated once per volume. Remounting resets
* the messages.

View File

@@ -20,6 +20,7 @@
#include <net/sock.h>
#include <net/tcp.h>
#include <linux/log2.h>
#include <linux/jhash.h>
#include "format.h"
#include "counters.h"
@@ -31,6 +32,7 @@
#include "endian_swap.h"
#include "tseq.h"
#include "fence.h"
#include "options.h"
/*
* scoutfs networking delivers requests and responses between nodes.
@@ -134,6 +136,7 @@ struct message_send {
struct message_recv {
struct scoutfs_tseq_entry tseq_entry;
struct work_struct proc_work;
struct list_head ordered_head;
struct scoutfs_net_connection *conn;
struct scoutfs_net_header nh;
};
@@ -332,7 +335,7 @@ static int submit_send(struct super_block *sb,
return -EINVAL;
if (scoutfs_forcing_unmount(sb))
return -EIO;
return -ENOLINK;
msend = kmalloc(offsetof(struct message_send,
nh.data[data_len]), GFP_NOFS);
@@ -498,16 +501,61 @@ static void scoutfs_net_proc_worker(struct work_struct *work)
trace_scoutfs_net_proc_work_exit(sb, 0, ret);
}
static void scoutfs_net_ordered_proc_worker(struct work_struct *work)
{
struct scoutfs_work_list *wlist = container_of(work, struct scoutfs_work_list, work);
struct message_recv *mrecv;
struct message_recv *mrecv__;
LIST_HEAD(list);
spin_lock(&wlist->lock);
list_splice_init(&wlist->list, &list);
spin_unlock(&wlist->lock);
list_for_each_entry_safe(mrecv, mrecv__, &list, ordered_head) {
list_del_init(&mrecv->ordered_head);
scoutfs_net_proc_worker(&mrecv->proc_work);
}
}
/*
* Some messages require in-order processing. But the scope of the
* ordering isn't global. In the case of lock messages, it's per lock.
* So for these messages we hash them to a number of ordered workers who
* walk a list and call the usual work function in order. This replaces
* two earlier approaches: first having the proc work detect out-of-order
* messages and re-order them, and then calling proc only from the single
* recv work context.
*/
static void queue_ordered_proc(struct scoutfs_net_connection *conn, struct message_recv *mrecv)
{
struct scoutfs_work_list *wlist;
struct scoutfs_net_lock *nl;
u32 h;
if (WARN_ON_ONCE(mrecv->nh.cmd != SCOUTFS_NET_CMD_LOCK ||
le16_to_cpu(mrecv->nh.data_len) != sizeof(struct scoutfs_net_lock)))
return scoutfs_net_proc_worker(&mrecv->proc_work);
nl = (void *)mrecv->nh.data;
h = jhash(&nl->key, sizeof(struct scoutfs_key), 0x6fdd3cd5);
wlist = &conn->ordered_proc_wlists[h % conn->ordered_proc_nr];
spin_lock(&wlist->lock);
list_add_tail(&mrecv->ordered_head, &wlist->list);
spin_unlock(&wlist->lock);
queue_work(conn->workq, &wlist->work);
}
/*
* Free live responses up to and including the seq by marking them dead
* and moving them to the send queue to be freed.
*/
static int move_acked_responses(struct scoutfs_net_connection *conn,
struct list_head *list, u64 seq)
static bool move_acked_responses(struct scoutfs_net_connection *conn,
struct list_head *list, u64 seq)
{
struct message_send *msend;
struct message_send *tmp;
int ret = 0;
bool moved = false;
assert_spin_locked(&conn->lock);
@@ -519,20 +567,20 @@ static int move_acked_responses(struct scoutfs_net_connection *conn,
msend->dead = 1;
list_move(&msend->head, &conn->send_queue);
ret = 1;
moved = true;
}
return ret;
return moved;
}
/* acks are processed inline in the recv worker */
static void free_acked_responses(struct scoutfs_net_connection *conn, u64 seq)
{
int moved;
bool moved;
spin_lock(&conn->lock);
moved = move_acked_responses(conn, &conn->send_queue, seq) +
moved = move_acked_responses(conn, &conn->send_queue, seq) |
move_acked_responses(conn, &conn->resend_queue, seq);
spin_unlock(&conn->lock);
@@ -541,33 +589,17 @@ static void free_acked_responses(struct scoutfs_net_connection *conn, u64 seq)
queue_work(conn->workq, &conn->send_work);
}
static int recvmsg_full(struct socket *sock, void *buf, unsigned len)
static int k_recvmsg(struct socket *sock, void *buf, unsigned len)
{
struct msghdr msg;
struct kvec kv;
int ret;
struct kvec kv = {
.iov_base = buf,
.iov_len = len,
};
struct msghdr msg = {
.msg_flags = MSG_NOSIGNAL,
};
while (len) {
memset(&msg, 0, sizeof(msg));
msg.msg_flags = MSG_NOSIGNAL;
kv.iov_base = buf;
kv.iov_len = len;
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
#else
iov_iter_init(&msg.msg_iter, READ, (struct iovec *)&kv, len, 1);
#endif
ret = kernel_recvmsg(sock, &msg, &kv, 1, len, msg.msg_flags);
if (ret <= 0)
return -ECONNABORTED;
len -= ret;
buf += ret;
}
return 0;
return kernel_recvmsg(sock, &msg, &kv, 1, len, msg.msg_flags);
}
static bool invalid_message(struct scoutfs_net_connection *conn,
@@ -604,6 +636,72 @@ static bool invalid_message(struct scoutfs_net_connection *conn,
return false;
}
static int recv_one_message(struct super_block *sb, struct net_info *ninf,
struct scoutfs_net_connection *conn, struct scoutfs_net_header *nh,
unsigned int data_len)
{
struct message_recv *mrecv;
int ret;
scoutfs_inc_counter(sb, net_recv_messages);
scoutfs_add_counter(sb, net_recv_bytes, nh_bytes(data_len));
trace_scoutfs_net_recv_message(sb, &conn->sockname, &conn->peername, nh);
/* the caller's invalid_message() check already validated data_len */
mrecv = kmalloc(offsetof(struct message_recv, nh.data[data_len]), GFP_NOFS);
if (!mrecv) {
ret = -ENOMEM;
goto out;
}
mrecv->conn = conn;
INIT_WORK(&mrecv->proc_work, scoutfs_net_proc_worker);
INIT_LIST_HEAD(&mrecv->ordered_head);
mrecv->nh = *nh;
if (data_len)
memcpy(mrecv->nh.data, (nh + 1), data_len);
if (nh->cmd == SCOUTFS_NET_CMD_GREETING) {
/* greetings are out of band, no seq mechanics */
set_conn_fl(conn, saw_greeting);
} else if (le64_to_cpu(nh->seq) <=
atomic64_read(&conn->recv_seq)) {
/* drop any resent duplicated messages */
scoutfs_inc_counter(sb, net_recv_dropped_duplicate);
kfree(mrecv);
ret = 0;
goto out;
} else {
/* record that we've received sender's seq */
atomic64_set(&conn->recv_seq, le64_to_cpu(nh->seq));
/* and free our responses that sender has received */
free_acked_responses(conn, le64_to_cpu(nh->recv_seq));
}
scoutfs_tseq_add(&ninf->msg_tseq_tree, &mrecv->tseq_entry);
/*
* Initial received greetings are processed inline
* before any other incoming messages.
*
* Incoming requests or responses to the lock client
* can't handle re-ordering, so they're queued to
* ordered receive processing work.
*/
if (nh->cmd == SCOUTFS_NET_CMD_GREETING)
scoutfs_net_proc_worker(&mrecv->proc_work);
else if (nh->cmd == SCOUTFS_NET_CMD_LOCK && !conn->listening_conn)
queue_ordered_proc(conn, mrecv);
else
queue_work(conn->workq, &mrecv->proc_work);
ret = 0;
out:
return ret;
}
/*
* Always block receiving from the socket. Errors trigger shutting down
* the connection.
@@ -614,86 +712,72 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
struct super_block *sb = conn->sb;
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
struct socket *sock = conn->sock;
struct scoutfs_net_header nh;
struct message_recv *mrecv;
struct scoutfs_net_header *nh;
struct page *page = NULL;
unsigned int data_len;
int hdr_off;
int rx_off;
int size;
int ret;
trace_scoutfs_net_recv_work_enter(sb, 0, 0);
page = alloc_page(GFP_NOFS);
if (!page) {
ret = -ENOMEM;
goto out;
}
hdr_off = 0;
rx_off = 0;
for (;;) {
/* receive the header */
ret = recvmsg_full(sock, &nh, sizeof(nh));
if (ret)
break;
/* receiving an invalid message breaks the connection */
if (invalid_message(conn, &nh)) {
scoutfs_inc_counter(sb, net_recv_invalid_message);
ret = -EBADMSG;
break;
ret = k_recvmsg(sock, page_address(page) + rx_off, PAGE_SIZE - rx_off);
if (ret <= 0) {
ret = -ECONNABORTED;
goto out;
}
data_len = le16_to_cpu(nh.data_len);
rx_off += ret;
scoutfs_inc_counter(sb, net_recv_messages);
scoutfs_add_counter(sb, net_recv_bytes, nh_bytes(data_len));
trace_scoutfs_net_recv_message(sb, &conn->sockname,
&conn->peername, &nh);
for (;;) {
size = rx_off - hdr_off;
if (size < sizeof(struct scoutfs_net_header))
break;
/* invalid message checked data len */
mrecv = kmalloc(offsetof(struct message_recv,
nh.data[data_len]), GFP_NOFS);
if (!mrecv) {
ret = -ENOMEM;
break;
nh = page_address(page) + hdr_off;
/* receiving an invalid message breaks the connection */
if (invalid_message(conn, nh)) {
scoutfs_inc_counter(sb, net_recv_invalid_message);
ret = -EBADMSG;
break;
}
data_len = le16_to_cpu(nh->data_len);
if (sizeof(struct scoutfs_net_header) + data_len > size)
break;
ret = recv_one_message(sb, ninf, conn, nh, data_len);
if (ret < 0)
goto out;
hdr_off += sizeof(struct scoutfs_net_header) + data_len;
}
mrecv->conn = conn;
INIT_WORK(&mrecv->proc_work, scoutfs_net_proc_worker);
mrecv->nh = nh;
/* receive the data payload */
ret = recvmsg_full(sock, mrecv->nh.data, data_len);
if (ret) {
kfree(mrecv);
break;
if ((PAGE_SIZE - rx_off) <
(sizeof(struct scoutfs_net_header) + SCOUTFS_NET_MAX_DATA_LEN)) {
if (size)
memmove(page_address(page), page_address(page) + hdr_off, size);
hdr_off = 0;
rx_off = size;
}
if (nh.cmd == SCOUTFS_NET_CMD_GREETING) {
/* greetings are out of band, no seq mechanics */
set_conn_fl(conn, saw_greeting);
} else if (le64_to_cpu(nh.seq) <=
atomic64_read(&conn->recv_seq)) {
/* drop any resent duplicated messages */
scoutfs_inc_counter(sb, net_recv_dropped_duplicate);
kfree(mrecv);
continue;
} else {
/* record that we've received sender's seq */
atomic64_set(&conn->recv_seq, le64_to_cpu(nh.seq));
/* and free our responses that sender has received */
free_acked_responses(conn, le64_to_cpu(nh.recv_seq));
}
scoutfs_tseq_add(&ninf->msg_tseq_tree, &mrecv->tseq_entry);
/*
* Initial received greetings are processed
* synchronously before any other incoming messages.
*
* Incoming requests or responses to the lock client are
* called synchronously to avoid reordering.
*/
if (nh.cmd == SCOUTFS_NET_CMD_GREETING ||
(nh.cmd == SCOUTFS_NET_CMD_LOCK && !conn->listening_conn))
scoutfs_net_proc_worker(&mrecv->proc_work);
else
queue_work(conn->workq, &mrecv->proc_work);
}
out:
__free_page(page);
if (ret)
scoutfs_inc_counter(sb, net_recv_error);
@@ -703,33 +787,41 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
trace_scoutfs_net_recv_work_exit(sb, 0, ret);
}
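A simplified userspace analog of the loop above, with an assumed header layout and a hypothetical process_message() handler: read into a fixed buffer, consume every complete header-plus-payload message, then slide any partial tail to the front so the next read appends to it. The kernel version only compacts when a maximum-sized message might no longer fit; this sketch compacts every pass and assumes any one message fits in the buffer.

#include <stdint.h>
#include <string.h>
#include <unistd.h>

struct hdr {
	uint16_t data_len;	/* assumed stand-in for scoutfs_net_header */
	uint8_t cmd;
} __attribute__((packed));

static void process_message(struct hdr *h)
{
	(void)h;		/* hand one complete message off for processing */
}

static int recv_loop(int fd, char *buf, size_t buf_size)
{
	size_t hdr_off = 0, rx_off = 0;
	ssize_t ret;

	for (;;) {
		ret = read(fd, buf + rx_off, buf_size - rx_off);
		if (ret <= 0)
			return -1;	/* peer closed or error */
		rx_off += ret;

		for (;;) {
			size_t size = rx_off - hdr_off;
			struct hdr *h = (struct hdr *)(buf + hdr_off);

			if (size < sizeof(*h))
				break;	/* partial header */
			if (sizeof(*h) + h->data_len > size)
				break;	/* partial payload */

			process_message(h);
			hdr_off += sizeof(*h) + h->data_len;
		}

		/* compact the partial tail so the next read can append */
		if (hdr_off) {
			memmove(buf, buf + hdr_off, rx_off - hdr_off);
			rx_off -= hdr_off;
			hdr_off = 0;
		}
	}
}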
static int sendmsg_full(struct socket *sock, void *buf, unsigned len)
/*
* This consumes the kvec array: entries are advanced past data as it is sent.
*/
static int k_sendmsg_full(struct socket *sock, struct kvec *kv, unsigned long nr_segs, size_t count)
{
struct msghdr msg;
struct kvec kv;
int ret;
int ret = 0;
while (len) {
memset(&msg, 0, sizeof(msg));
msg.msg_flags = MSG_NOSIGNAL;
kv.iov_base = buf;
kv.iov_len = len;
while (count > 0) {
struct msghdr msg = {
.msg_flags = MSG_NOSIGNAL,
};
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
#else
iov_iter_init(&msg.msg_iter, WRITE, (struct iovec *)&kv, len, 1);
#endif
ret = kernel_sendmsg(sock, &msg, &kv, 1, len);
if (ret <= 0)
return -ECONNABORTED;
ret = kernel_sendmsg(sock, &msg, kv, nr_segs, count);
if (ret <= 0) {
ret = -ECONNABORTED;
break;
}
len -= ret;
buf += ret;
count -= ret;
if (count) {
while (nr_segs > 0 && ret >= kv->iov_len) {
ret -= kv->iov_len;
kv++;
nr_segs--;
}
if (nr_segs > 0 && ret > 0) {
kv->iov_base += ret;
kv->iov_len -= ret;
}
BUG_ON(nr_segs == 0);
}
ret = 0;
}
return 0;
return ret;
}
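The kvec-advancing logic above has a direct userspace analog with writev(): after a short write, skip the segments that were sent completely, trim the first unsent one, and retry with the remainder. A sketch:

#include <errno.h>
#include <sys/uio.h>

static int send_all(int fd, struct iovec *iov, int nr_segs, size_t count)
{
	ssize_t ret;

	while (count > 0) {
		ret = writev(fd, iov, nr_segs);
		if (ret <= 0)
			return -ECONNABORTED;

		count -= ret;

		/* skip segments that were sent completely */
		while (nr_segs > 0 && (size_t)ret >= iov->iov_len) {
			ret -= iov->iov_len;
			iov++;
			nr_segs--;
		}
		/* trim the partially sent first segment */
		if (nr_segs > 0 && ret > 0) {
			iov->iov_base = (char *)iov->iov_base + ret;
			iov->iov_len -= ret;
		}
	}
	return 0;
}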
static void free_msend(struct net_info *ninf, struct message_send *msend)
@@ -760,54 +852,73 @@ static void scoutfs_net_send_worker(struct work_struct *work)
struct super_block *sb = conn->sb;
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
struct message_send *msend;
int ret = 0;
struct message_send *_msend_;
struct kvec kv[16];
unsigned long nr_segs;
size_t count;
int len;
int ret;
trace_scoutfs_net_send_work_enter(sb, 0, 0);
spin_lock(&conn->lock);
while ((msend = list_first_entry_or_null(&conn->send_queue,
struct message_send, head))) {
if (msend->dead) {
free_msend(ninf, msend);
continue;
}
if ((msend->nh.cmd == SCOUTFS_NET_CMD_FAREWELL) &&
nh_is_response(&msend->nh)) {
set_conn_fl(conn, saw_farewell);
}
msend->nh.recv_seq =
cpu_to_le64(atomic64_read(&conn->recv_seq));
spin_unlock(&conn->lock);
len = nh_bytes(le16_to_cpu(msend->nh.data_len));
scoutfs_inc_counter(sb, net_send_messages);
scoutfs_add_counter(sb, net_send_bytes, len);
trace_scoutfs_net_send_message(sb, &conn->sockname,
&conn->peername, &msend->nh);
ret = sendmsg_full(conn->sock, &msend->nh, len);
for (;;) {
nr_segs = 0;
count = 0;
spin_lock(&conn->lock);
list_for_each_entry_safe(msend, _msend_, &conn->send_queue, head) {
if (msend->dead) {
free_msend(ninf, msend);
continue;
}
msend->nh.recv_seq = 0;
len = nh_bytes(le16_to_cpu(msend->nh.data_len));
if (ret)
break;
if ((msend->nh.cmd == SCOUTFS_NET_CMD_FAREWELL) &&
nh_is_response(&msend->nh)) {
set_conn_fl(conn, saw_farewell);
}
/* resend if it wasn't freed while we sent */
if (!msend->dead)
list_move_tail(&msend->head, &conn->resend_queue);
msend->nh.recv_seq = cpu_to_le64(atomic64_read(&conn->recv_seq));
scoutfs_inc_counter(sb, net_send_messages);
scoutfs_add_counter(sb, net_send_bytes, len);
trace_scoutfs_net_send_message(sb, &conn->sockname,
&conn->peername, &msend->nh);
count += len;
kv[nr_segs].iov_base = &msend->nh;
kv[nr_segs].iov_len = len;
if (++nr_segs == ARRAY_SIZE(kv))
break;
}
spin_unlock(&conn->lock);
if (nr_segs == 0) {
ret = 0;
goto out;
}
ret = k_sendmsg_full(conn->sock, kv, nr_segs, count);
if (ret < 0)
goto out;
spin_lock(&conn->lock);
list_for_each_entry_safe(msend, _msend_, &conn->send_queue, head) {
msend->nh.recv_seq = 0;
/* resend if it wasn't freed while we sent */
if (!msend->dead)
list_move_tail(&msend->head, &conn->resend_queue);
if (--nr_segs == 0)
break;
}
spin_unlock(&conn->lock);
}
spin_unlock(&conn->lock);
out:
if (ret) {
scoutfs_inc_counter(sb, net_send_error);
shutdown_conn(conn);
@@ -862,6 +973,7 @@ static void scoutfs_net_destroy_worker(struct work_struct *work)
destroy_workqueue(conn->workq);
scoutfs_tseq_del(&ninf->conn_tseq_tree, &conn->tseq_entry);
kfree(conn->info);
kfree(conn->ordered_proc_wlists);
trace_scoutfs_conn_destroy_free(conn);
kfree(conn);
@@ -887,7 +999,7 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
* The TCP_KEEP* and TCP_USER_TIMEOUT option interaction is subtle.
* TCP_USER_TIMEOUT only applies if there is unacked written data in the
* send queue. It doesn't work if the connection is idle. Adding
* keepalice probes with user_timeout set changes how the keepalive
* keepalive probes with user_timeout set changes how the keepalive
* timeout is calculated. CNT no longer matters. Each time
* additional probes (not the first) are sent the user timeout is
* checked against the last time data was received. If none of the
@@ -899,58 +1011,50 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
* elapses during the probe timer processing after the unsuccessful
* probes.
*/
#define UNRESPONSIVE_TIMEOUT_SECS 10
#define UNRESPONSIVE_PROBES 3
static int sock_opts_and_names(struct scoutfs_net_connection *conn,
static int sock_opts_and_names(struct super_block *sb,
struct scoutfs_net_connection *conn,
struct socket *sock)
{
struct timeval tv;
struct scoutfs_mount_options opts;
int optval;
int ret;
scoutfs_options_read(sb, &opts);
/* we use a keepalive timeout instead of send timeout */
tv.tv_sec = 0;
tv.tv_usec = 0;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
(char *)&tv, sizeof(tv));
ret = kc_sock_set_sndtimeo(sock, 0);
if (ret)
goto out;
/* not checked when user_timeout != 0, but for clarity */
optval = UNRESPONSIVE_PROBES;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
(char *)&optval, sizeof(optval));
ret = kc_sock_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
&optval, sizeof(optval));
if (ret)
goto out;
BUILD_BUG_ON(UNRESPONSIVE_PROBES >= UNRESPONSIVE_TIMEOUT_SECS);
optval = UNRESPONSIVE_TIMEOUT_SECS - (UNRESPONSIVE_PROBES);
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE,
(char *)&optval, sizeof(optval));
optval = (opts.tcp_keepalive_timeout_ms / MSEC_PER_SEC) - UNRESPONSIVE_PROBES;
ret = kc_tcp_sock_set_keepidle(sock, optval);
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL,
(char *)&optval, sizeof(optval));
ret = kc_tcp_sock_set_keepintvl(sock, optval);
if (ret)
goto out;
optval = UNRESPONSIVE_TIMEOUT_SECS * MSEC_PER_SEC;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
(char *)&optval, sizeof(optval));
optval = opts.tcp_keepalive_timeout_ms;
ret = kc_tcp_sock_set_user_timeout(sock, optval);
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
(char *)&optval, sizeof(optval));
ret = kc_sock_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
&optval, sizeof(optval));
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_NODELAY,
(char *)&optval, sizeof(optval));
ret = kc_tcp_sock_set_nodelay(sock);
if (ret)
goto out;
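As a concrete reading of these options: with tcp_keepalive_timeout_ms at, say, 10000 and UNRESPONSIVE_PROBES at 3, the socket idles for 7 seconds before the first keepalive probe, further probes follow at 1 second intervals, and the 10 second user timeout is checked against the last received data as those later probes are sent, so an unresponsive peer is dropped shortly after the configured timeout rather than after a full CNT * INTVL cycle.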
@@ -1001,13 +1105,19 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
conn->notify_down,
conn->info_size,
conn->req_funcs, "accepted");
/*
* scoutfs_net_alloc_conn() can only fail due to ENOMEM. There's no
* harm in looping back to see if kernel_accept() can get enough memory
* to accept a new connection; if that then fails with ENOMEM, it'll
* shut down the conn anyway. So just retry here.
*/
if (!acc_conn) {
sock_release(acc_sock);
ret = -ENOMEM;
continue;
}
ret = sock_opts_and_names(acc_conn, acc_sock);
ret = sock_opts_and_names(sb, acc_conn, acc_sock);
if (ret) {
sock_release(acc_sock);
destroy_conn(acc_conn);
@@ -1049,22 +1159,19 @@ static void scoutfs_net_connect_worker(struct work_struct *work)
DEFINE_CONN_FROM_WORK(conn, work, connect_work);
struct super_block *sb = conn->sb;
struct socket *sock;
struct timeval tv;
int ret;
trace_scoutfs_net_connect_work_enter(sb, 0, 0);
ret = kc_sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
ret = kc_sock_create_kern(conn->connect_sin.ss_family,
SOCK_STREAM, IPPROTO_TCP, &sock);
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
/* caller specified connect timeout */
tv.tv_sec = conn->connect_timeout_ms / MSEC_PER_SEC;
tv.tv_usec = (conn->connect_timeout_ms % MSEC_PER_SEC) * USEC_PER_MSEC;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
(char *)&tv, sizeof(tv));
/* caller specified connect timeout, defaults to 1 sec */
ret = kc_sock_set_sndtimeo(sock, conn->connect_timeout_ms / MSEC_PER_SEC);
if (ret) {
sock_release(sock);
goto out;
@@ -1078,11 +1185,13 @@ static void scoutfs_net_connect_worker(struct work_struct *work)
trace_scoutfs_conn_connect_start(conn);
ret = kernel_connect(sock, (struct sockaddr *)&conn->connect_sin,
sizeof(struct sockaddr_in), 0);
conn->connect_sin.ss_family == AF_INET ?
sizeof(struct sockaddr_in) : sizeof(struct sockaddr_in6),
0);
if (ret)
goto out;
ret = sock_opts_and_names(conn, sock);
ret = sock_opts_and_names(sb, conn, sock);
if (ret)
goto out;
@@ -1120,6 +1229,13 @@ static bool empty_accepted_list(struct scoutfs_net_connection *conn)
return empty;
}
/*
 * sockaddr_storage wraps both sockaddr_in and sockaddr_in6, which
 * store their __be16 port at the same offset, and we only need to
 * test whether that port is zero.
 */
#define sockaddr_port_is_nonzero(sin) ((sin).__data[0] || (sin).__data[1])
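The macro leans on both address families laying out their port identically; a tiny userspace check of that assumption (illustrative only; the kernel macro reads those same two bytes through sockaddr_storage's __data member, whose exact name depends on the kernel headers in use):

#include <assert.h>
#include <netinet/in.h>
#include <stddef.h>

int main(void)
{
	/* both families keep their __be16 port right after the 16-bit family field */
	static_assert(offsetof(struct sockaddr_in, sin_port) ==
		      offsetof(struct sockaddr_in6, sin6_port),
		      "port offsets differ");
	return 0;
}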
/*
* Safely shut down an active connection. This can be triggered by
* errors in workers or by an external call to free the connection. The
@@ -1143,7 +1259,7 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
trace_scoutfs_conn_shutdown_start(conn);
/* connected and accepted conns print a message */
if (conn->peername.sin_port != 0)
if (sockaddr_port_is_nonzero(conn->peername))
scoutfs_info(sb, "%s "SIN_FMT" -> "SIN_FMT,
conn->listening_conn ? "server closing" :
"client disconnected",
@@ -1273,6 +1389,7 @@ static void scoutfs_net_reconn_free_worker(struct work_struct *work)
DEFINE_CONN_FROM_WORK(conn, work, reconn_free_dwork.work);
struct super_block *sb = conn->sb;
struct scoutfs_net_connection *acc;
union scoutfs_inet_addr addr;
unsigned long now = jiffies;
unsigned long deadline = 0;
bool requeue = false;
@@ -1293,8 +1410,9 @@ restart:
if (!test_conn_fl(conn, shutting_down)) {
scoutfs_info(sb, "client "SIN_FMT" reconnect timed out, fencing",
SIN_ARG(&acc->last_peername));
scoutfs_sin_to_addr(&addr, &acc->last_peername);
ret = scoutfs_fence_start(sb, acc->rid,
acc->last_peername.sin_addr.s_addr,
&addr,
SCOUTFS_FENCE_CLIENT_RECONNECT);
if (ret) {
scoutfs_err(sb, "client fence returned err %d, shutting down server",
@@ -1343,25 +1461,30 @@ scoutfs_net_alloc_conn(struct super_block *sb,
{
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
struct scoutfs_net_connection *conn;
unsigned int nr;
unsigned int i;
nr = min_t(unsigned int, num_possible_cpus(),
PAGE_SIZE / sizeof(struct scoutfs_work_list));
conn = kzalloc(sizeof(struct scoutfs_net_connection), GFP_NOFS);
if (!conn)
return NULL;
if (info_size) {
conn->info = kzalloc(info_size, GFP_NOFS);
if (!conn->info) {
kfree(conn);
return NULL;
}
if (conn) {
if (info_size)
conn->info = kzalloc(info_size, GFP_NOFS);
conn->ordered_proc_wlists = kmalloc_array(nr, sizeof(struct scoutfs_work_list),
GFP_NOFS);
conn->workq = alloc_workqueue("scoutfs_net_%s",
WQ_UNBOUND | WQ_NON_REENTRANT, 0,
name_suffix);
}
conn->workq = alloc_workqueue("scoutfs_net_%s",
WQ_UNBOUND | WQ_NON_REENTRANT, 0,
name_suffix);
if (!conn->workq) {
kfree(conn->info);
kfree(conn);
if (!conn || (info_size && !conn->info) || !conn->workq || !conn->ordered_proc_wlists) {
if (conn) {
kfree(conn->info);
kfree(conn->ordered_proc_wlists);
if (conn->workq)
destroy_workqueue(conn->workq);
kfree(conn);
}
return NULL;
}
@@ -1372,9 +1495,9 @@ scoutfs_net_alloc_conn(struct super_block *sb,
conn->req_funcs = req_funcs;
spin_lock_init(&conn->lock);
init_waitqueue_head(&conn->waitq);
conn->sockname.sin_family = AF_INET;
conn->peername.sin_family = AF_INET;
conn->last_peername.sin_family = AF_INET;
conn->sockname.ss_family = AF_UNSPEC;
conn->peername.ss_family = AF_UNSPEC;
conn->last_peername.ss_family = AF_UNSPEC;
INIT_LIST_HEAD(&conn->accepted_head);
INIT_LIST_HEAD(&conn->accepted_list);
conn->next_send_seq = 1;
@@ -1391,6 +1514,13 @@ scoutfs_net_alloc_conn(struct super_block *sb,
INIT_DELAYED_WORK(&conn->reconn_free_dwork,
scoutfs_net_reconn_free_worker);
conn->ordered_proc_nr = nr;
for (i = 0; i < nr; i++) {
INIT_WORK(&conn->ordered_proc_wlists[i].work, scoutfs_net_ordered_proc_worker);
spin_lock_init(&conn->ordered_proc_wlists[i].lock);
INIT_LIST_HEAD(&conn->ordered_proc_wlists[i].list);
}
scoutfs_tseq_add(&ninf->conn_tseq_tree, &conn->tseq_entry);
trace_scoutfs_conn_alloc(conn);
@@ -1444,7 +1574,7 @@ void scoutfs_net_free_conn(struct super_block *sb,
*/
int scoutfs_net_bind(struct super_block *sb,
struct scoutfs_net_connection *conn,
struct sockaddr_in *sin)
struct sockaddr_storage *sin)
{
struct socket *sock = NULL;
int addrlen;
@@ -1455,19 +1585,19 @@ int scoutfs_net_bind(struct super_block *sb,
if (WARN_ON_ONCE(conn->sock))
return -EINVAL;
ret = kc_sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
ret = kc_sock_create_kern(sin->ss_family, SOCK_STREAM, IPPROTO_TCP, &sock);
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
(char *)&optval, sizeof(optval));
ret = kc_sock_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
&optval, sizeof(optval));
if (ret)
goto out;
addrlen = sizeof(struct sockaddr_in);
addrlen = sin->ss_family == AF_INET ? sizeof(struct sockaddr_in) : sizeof(struct sockaddr_in6);
ret = kernel_bind(sock, (struct sockaddr *)sin, addrlen);
if (ret)
goto out;
@@ -1483,7 +1613,7 @@ int scoutfs_net_bind(struct super_block *sb,
ret = 0;
conn->sock = sock;
*sin = conn->sockname;
sin = (struct sockaddr_storage *)&conn->sockname;
out:
if (ret < 0 && sock)
@@ -1518,7 +1648,7 @@ static bool connect_result(struct scoutfs_net_connection *conn, int *error)
done = true;
*error = 0;
} else if (test_conn_fl(conn, shutting_down) ||
conn->connect_sin.sin_family == 0) {
conn->connect_sin.ss_family == AF_UNSPEC) {
done = true;
*error = -ESHUTDOWN;
}
@@ -1539,7 +1669,7 @@ static bool connect_result(struct scoutfs_net_connection *conn, int *error)
*/
int scoutfs_net_connect(struct super_block *sb,
struct scoutfs_net_connection *conn,
struct sockaddr_in *sin, unsigned long timeout_ms)
struct sockaddr_storage *sin, unsigned long timeout_ms)
{
int ret = 0;

View File

@@ -1,10 +1,18 @@
#ifndef _SCOUTFS_NET_H_
#define _SCOUTFS_NET_H_
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/in.h>
#include "endian_swap.h"
#include "tseq.h"
struct scoutfs_work_list {
struct work_struct work;
spinlock_t lock;
struct list_head list;
};
struct scoutfs_net_connection;
/* These are called in their own blocking context */
@@ -41,15 +49,15 @@ struct scoutfs_net_connection {
unsigned long flags; /* CONN_FL_* bitmask */
unsigned long reconn_deadline;
struct sockaddr_in connect_sin;
struct sockaddr_storage connect_sin;
unsigned long connect_timeout_ms;
struct socket *sock;
u64 rid;
u64 greeting_id;
struct sockaddr_in sockname;
struct sockaddr_in peername;
struct sockaddr_in last_peername;
struct sockaddr_storage sockname;
struct sockaddr_storage peername;
struct sockaddr_storage last_peername;
struct list_head accepted_head;
struct scoutfs_net_connection *listening_conn;
@@ -61,6 +69,8 @@ struct scoutfs_net_connection {
struct list_head resend_queue;
atomic64_t recv_seq;
unsigned int ordered_proc_nr;
struct scoutfs_work_list *ordered_proc_wlists;
struct workqueue_struct *workq;
struct work_struct listen_work;
@@ -87,27 +97,44 @@ enum conn_flags {
CONN_FL_reconn_freeing = (1UL << 6), /* waiting done, setter frees */
};
#define SIN_FMT "%pIS:%u"
#define SIN_ARG(sin) sin, be16_to_cpu((sin)->sin_port)
#define SIN_FMT "%pISpc"
#define SIN_ARG(sin) sin
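%pIS prints a struct sockaddr according to its family, and the added 'p' and 'c' flags append the port and compress IPv6, so a single format string now renders output like 127.0.0.1:5000 for IPv4 and [::1]:5000 for IPv6 without SIN_ARG having to unpack the port by hand.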
static inline void scoutfs_addr_to_sin(struct sockaddr_in *sin,
static inline void scoutfs_addr_to_sin(struct sockaddr_storage *sin,
union scoutfs_inet_addr *addr)
{
BUG_ON(addr->v4.family != cpu_to_le16(SCOUTFS_AF_IPV4));
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->v4.addr));
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->v4.port));
if (addr->v4.family == cpu_to_le16(SCOUTFS_AF_IPV4)) {
struct sockaddr_in *sin4 = (struct sockaddr_in *)sin;
memset(sin, 0, sizeof(struct sockaddr_storage));
sin4->sin_family = AF_INET;
sin4->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->v4.addr));
sin4->sin_port = cpu_to_be16(le16_to_cpu(addr->v4.port));
} else if (addr->v6.family == cpu_to_le16(SCOUTFS_AF_IPV6)) {
struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)sin;
memset(sin, 0, sizeof(struct sockaddr_storage));
sin6->sin6_family = AF_INET6;
memcpy(&sin6->sin6_addr.in6_u.u6_addr8, &addr->v6.addr, 16);
sin6->sin6_port = cpu_to_be16(le16_to_cpu(addr->v6.port));
} else
BUG();
}
static inline void scoutfs_sin_to_addr(union scoutfs_inet_addr *addr, struct sockaddr_in *sin)
static inline void scoutfs_sin_to_addr(union scoutfs_inet_addr *addr, struct sockaddr_storage *sin)
{
BUG_ON(sin->sin_family != AF_INET);
memset(addr, 0, sizeof(union scoutfs_inet_addr));
addr->v4.family = cpu_to_le16(SCOUTFS_AF_IPV4);
addr->v4.addr = be32_to_le32(sin->sin_addr.s_addr);
addr->v4.port = be16_to_le16(sin->sin_port);
if (sin->ss_family == AF_INET) {
struct sockaddr_in *sin4 = (struct sockaddr_in *)sin;
memset(addr, 0, sizeof(union scoutfs_inet_addr));
addr->v4.family = cpu_to_le16(SCOUTFS_AF_IPV4);
addr->v4.addr = be32_to_le32(sin4->sin_addr.s_addr);
addr->v4.port = be16_to_le16(sin4->sin_port);
} else if (sin->ss_family == AF_INET6) {
struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)sin;
memset(addr, 0, sizeof(union scoutfs_inet_addr));
addr->v6.family = cpu_to_le16(SCOUTFS_AF_IPV6);
memcpy(&addr->v6.addr, &sin6->sin6_addr.in6_u.u6_addr8, 16);
addr->v6.port = be16_to_le16(sin6->sin6_port);
} else
BUG();
}
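A hypothetical round trip through the two helpers above, only to show the intended pairing (example_round_trip(), the loopback address, and port 5000 are all arbitrary; the scoutfs headers defining the union and helpers are assumed to be included):

#include <linux/in6.h>
#include <linux/socket.h>

static void example_round_trip(void)
{
	struct sockaddr_storage ss = {};
	struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)&ss;
	union scoutfs_inet_addr addr;

	sin6->sin6_family = AF_INET6;
	sin6->sin6_addr = in6addr_loopback;	/* ::1 */
	sin6->sin6_port = cpu_to_be16(5000);

	scoutfs_sin_to_addr(&addr, &ss);	/* sockaddr -> little-endian scoutfs form */
	scoutfs_addr_to_sin(&ss, &addr);	/* and back to a sockaddr_storage */
}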
struct scoutfs_net_connection *
@@ -118,10 +145,10 @@ scoutfs_net_alloc_conn(struct super_block *sb,
u64 scoutfs_net_client_rid(struct scoutfs_net_connection *conn);
int scoutfs_net_connect(struct super_block *sb,
struct scoutfs_net_connection *conn,
struct sockaddr_in *sin, unsigned long timeout_ms);
struct sockaddr_storage *sin, unsigned long timeout_ms);
int scoutfs_net_bind(struct super_block *sb,
struct scoutfs_net_connection *conn,
struct sockaddr_in *sin);
struct sockaddr_storage *sin);
void scoutfs_net_listen(struct super_block *sb,
struct scoutfs_net_connection *conn);
int scoutfs_net_submit_request(struct super_block *sb,

View File

@@ -592,7 +592,7 @@ static int handle_request(struct super_block *sb, struct omap_request *req)
ret = 0;
out:
free_rids(&priv_rids);
if (ret < 0) {
if ((ret < 0) && (req != NULL)) {
ret = scoutfs_server_send_omap_response(sb, req->client_rid, req->client_id,
NULL, ret);
free_req(req);

View File

@@ -33,12 +33,14 @@ enum {
Opt_acl,
Opt_data_prealloc_blocks,
Opt_data_prealloc_contig_only,
Opt_ino_alloc_per_lock,
Opt_log_merge_wait_timeout_ms,
Opt_metadev_path,
Opt_noacl,
Opt_orphan_scan_delay_ms,
Opt_quorum_heartbeat_timeout_ms,
Opt_quorum_slot_nr,
Opt_tcp_keepalive_timeout_ms,
Opt_err,
};
@@ -46,12 +48,14 @@ static const match_table_t tokens = {
{Opt_acl, "acl"},
{Opt_data_prealloc_blocks, "data_prealloc_blocks=%s"},
{Opt_data_prealloc_contig_only, "data_prealloc_contig_only=%s"},
{Opt_ino_alloc_per_lock, "ino_alloc_per_lock=%s"},
{Opt_log_merge_wait_timeout_ms, "log_merge_wait_timeout_ms=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_noacl, "noacl"},
{Opt_orphan_scan_delay_ms, "orphan_scan_delay_ms=%s"},
{Opt_quorum_heartbeat_timeout_ms, "quorum_heartbeat_timeout_ms=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_tcp_keepalive_timeout_ms, "tcp_keepalive_timeout_ms=%s"},
{Opt_err, NULL}
};
@@ -126,16 +130,20 @@ static void free_options(struct scoutfs_mount_options *opts)
#define MIN_DATA_PREALLOC_BLOCKS 1ULL
#define MAX_DATA_PREALLOC_BLOCKS ((unsigned long long)SCOUTFS_BLOCK_SM_MAX)
#define DEFAULT_TCP_KEEPALIVE_TIMEOUT_MS (60 * MSEC_PER_SEC)
static void init_default_options(struct scoutfs_mount_options *opts)
{
memset(opts, 0, sizeof(*opts));
opts->data_prealloc_blocks = SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS;
opts->data_prealloc_contig_only = 1;
opts->ino_alloc_per_lock = SCOUTFS_LOCK_INODE_GROUP_NR;
opts->log_merge_wait_timeout_ms = DEFAULT_LOG_MERGE_WAIT_TIMEOUT_MS;
opts->orphan_scan_delay_ms = -1;
opts->quorum_heartbeat_timeout_ms = SCOUTFS_QUORUM_DEF_HB_TIMEO_MS;
opts->quorum_slot_nr = -1;
opts->tcp_keepalive_timeout_ms = DEFAULT_TCP_KEEPALIVE_TIMEOUT_MS;
}
static int verify_log_merge_wait_timeout_ms(struct super_block *sb, int ret, int val)
@@ -168,6 +176,21 @@ static int verify_quorum_heartbeat_timeout_ms(struct super_block *sb, int ret, u
return 0;
}
static int verify_tcp_keepalive_timeout_ms(struct super_block *sb, int ret, int val)
{
if (ret < 0) {
scoutfs_err(sb, "failed to parse tcp_keepalive_timeout_ms value");
return -EINVAL;
}
if (val <= (UNRESPONSIVE_PROBES * MSEC_PER_SEC)) {
scoutfs_err(sb, "invalid tcp_keepalive_timeout_ms value %d, must be larger than %lu",
val, (UNRESPONSIVE_PROBES * MSEC_PER_SEC));
return -EINVAL;
}
return 0;
}
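With UNRESPONSIVE_PROBES at its value of 3, anything at or below 3000 ms is rejected here, so 3001 ms is the smallest accepted setting. The 60000 ms default works out to 60 - 3 = 57 seconds of keepidle followed by three probes at one-second intervals before TCP_USER_TIMEOUT gives up on the connection.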
/*
* Parse the option string into our options struct. This can allocate
* memory in the struct. The caller is responsible for always calling
@@ -218,6 +241,26 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
opts->data_prealloc_contig_only = nr;
break;
case Opt_ino_alloc_per_lock:
ret = match_int(args, &nr);
if (ret < 0 || nr < 1 || nr > SCOUTFS_LOCK_INODE_GROUP_NR) {
scoutfs_err(sb, "invalid ino_alloc_per_lock option, must be between 1 and %u",
SCOUTFS_LOCK_INODE_GROUP_NR);
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->ino_alloc_per_lock = nr;
break;
case Opt_tcp_keepalive_timeout_ms:
ret = match_int(args, &nr);
ret = verify_tcp_keepalive_timeout_ms(sb, ret, nr);
if (ret < 0)
return ret;
opts->tcp_keepalive_timeout_ms = nr;
break;
case Opt_log_merge_wait_timeout_ms:
ret = match_int(args, &nr);
ret = verify_log_merge_wait_timeout_ms(sb, ret, nr);
@@ -365,12 +408,14 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
seq_puts(seq, ",acl");
seq_printf(seq, ",data_prealloc_blocks=%llu", opts.data_prealloc_blocks);
seq_printf(seq, ",data_prealloc_contig_only=%u", opts.data_prealloc_contig_only);
seq_printf(seq, ",ino_alloc_per_lock=%u", opts.ino_alloc_per_lock);
seq_printf(seq, ",metadev_path=%s", opts.metadev_path);
if (!is_acl)
seq_puts(seq, ",noacl");
seq_printf(seq, ",orphan_scan_delay_ms=%u", opts.orphan_scan_delay_ms);
if (opts.quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts.quorum_slot_nr);
seq_printf(seq, ",tcp_keepalive_timeout_ms=%d", opts.tcp_keepalive_timeout_ms);
return 0;
}
@@ -452,6 +497,45 @@ static ssize_t data_prealloc_contig_only_store(struct kobject *kobj, struct kobj
}
SCOUTFS_ATTR_RW(data_prealloc_contig_only);
static ssize_t ino_alloc_per_lock_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%u", opts.ino_alloc_per_lock);
}
static ssize_t ino_alloc_per_lock_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[20]; /* more than enough for octal -U32_MAX */
long val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtol(nullterm, 0, &val);
if (ret < 0 || val < 1 || val > SCOUTFS_LOCK_INODE_GROUP_NR) {
scoutfs_err(sb, "invalid ino_alloc_per_lock option, must be between 1 and %u",
SCOUTFS_LOCK_INODE_GROUP_NR);
return -EINVAL;
}
write_seqlock(&optinf->seqlock);
optinf->opts.ino_alloc_per_lock = val;
write_sequnlock(&optinf->seqlock);
return count;
}
SCOUTFS_ATTR_RW(ino_alloc_per_lock);
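The sysfs store applies the same bounds as the mount option: values outside 1..SCOUTFS_LOCK_INODE_GROUP_NR get EINVAL, and an accepted value takes effect immediately under the options seqlock with no remount needed.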
static ssize_t log_merge_wait_timeout_ms_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
@@ -592,6 +676,7 @@ SCOUTFS_ATTR_RO(quorum_slot_nr);
static struct attribute *options_attrs[] = {
SCOUTFS_ATTR_PTR(data_prealloc_blocks),
SCOUTFS_ATTR_PTR(data_prealloc_contig_only),
SCOUTFS_ATTR_PTR(ino_alloc_per_lock),
SCOUTFS_ATTR_PTR(log_merge_wait_timeout_ms),
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),

View File

@@ -8,13 +8,17 @@
struct scoutfs_mount_options {
u64 data_prealloc_blocks;
bool data_prealloc_contig_only;
unsigned int ino_alloc_per_lock;
unsigned int log_merge_wait_timeout_ms;
char *metadev_path;
unsigned int orphan_scan_delay_ms;
int quorum_slot_nr;
u64 quorum_heartbeat_timeout_ms;
int tcp_keepalive_timeout_ms;
};
#define UNRESPONSIVE_PROBES 3
void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts);
int scoutfs_options_show(struct seq_file *seq, struct dentry *root);

View File

@@ -145,14 +145,26 @@ struct quorum_info {
#define DECLARE_QUORUM_INFO_KOBJ(kobj, name) \
DECLARE_QUORUM_INFO(SCOUTFS_SYSFS_ATTRS_SB(kobj), name)
static bool quorum_slot_present(struct scoutfs_quorum_config *qconf, int i)
static bool quorum_slot_ipv4(struct scoutfs_quorum_config *qconf, int i)
{
BUG_ON(i < 0 || i > SCOUTFS_QUORUM_MAX_SLOTS);
return qconf->slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
}
static void quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_in *sin)
static bool quorum_slot_ipv6(struct scoutfs_quorum_config *qconf, int i)
{
BUG_ON(i < 0 || i > SCOUTFS_QUORUM_MAX_SLOTS);
return qconf->slots[i].addr.v6.family == cpu_to_le16(SCOUTFS_AF_IPV6);
}
static bool quorum_slot_present(struct scoutfs_quorum_config *qconf, int i)
{
return quorum_slot_ipv4(qconf, i) || quorum_slot_ipv6(qconf, i);
}
static void quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_storage *sin)
{
BUG_ON(i < 0 || i >= SCOUTFS_QUORUM_MAX_SLOTS);
@@ -179,11 +191,18 @@ static int create_socket(struct super_block *sb)
{
DECLARE_QUORUM_INFO(sb, qinf);
struct socket *sock = NULL;
struct sockaddr_in sin;
struct sockaddr_storage sin;
struct scoutfs_quorum_slot slot = qinf->qconf.slots[qinf->our_quorum_slot_nr];
int addrlen;
int ret;
ret = kc_sock_create_kern(PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
if (le16_to_cpu(slot.addr.v4.family) == SCOUTFS_AF_IPV4)
ret = kc_sock_create_kern(PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
else if (le16_to_cpu(slot.addr.v6.family) == SCOUTFS_AF_IPV6)
ret = kc_sock_create_kern(PF_INET6, SOCK_DGRAM, IPPROTO_UDP, &sock);
else
BUG();
if (ret) {
scoutfs_err(sb, "quorum couldn't create udp socket: %d", ret);
goto out;
@@ -192,9 +211,9 @@ static int create_socket(struct super_block *sb)
/* rather fail and retry than block waiting for free */
sock->sk->sk_allocation = GFP_ATOMIC;
addrlen = (le16_to_cpu(slot.addr.v4.family) == SCOUTFS_AF_IPV4) ?
sizeof(struct sockaddr_in) : sizeof(struct sockaddr_in6);
quorum_slot_sin(&qinf->qconf, qinf->our_quorum_slot_nr, &sin);
addrlen = sizeof(sin);
ret = kernel_bind(sock, (struct sockaddr *)&sin, addrlen);
if (ret) {
scoutfs_err(sb, "quorum failed to bind udp socket to "SIN_FMT": %d",
@@ -241,12 +260,8 @@ static int send_msg_members(struct super_block *sb, int type, u64 term, int only
.iov_base = &qmes,
.iov_len = sizeof(qmes),
};
struct sockaddr_in sin;
struct sockaddr_storage sin;
struct msghdr mh = {
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
.msg_iov = (struct iovec *)&kv,
.msg_iovlen = 1,
#endif
.msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL,
.msg_name = &sin,
.msg_namelen = sizeof(sin),
@@ -268,9 +283,7 @@ static int send_msg_members(struct super_block *sb, int type, u64 term, int only
scoutfs_quorum_slot_sin(&qinf->qconf, i, &sin);
now = ktime_get();
#ifdef KC_MSGHDR_STRUCT_IOV_ITER
iov_iter_init(&mh.msg_iter, WRITE, (struct iovec *)&kv, sizeof(qmes), 1);
#endif
ret = kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
if (ret != kv.iov_len)
failed++;
@@ -303,7 +316,6 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_quorum_message qmes;
struct timeval tv;
ktime_t rel_to;
ktime_t now;
int ret;
@@ -313,10 +325,6 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
.iov_len = sizeof(struct scoutfs_quorum_message),
};
struct msghdr mh = {
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
.msg_iov = (struct iovec *)&kv,
.msg_iovlen = 1,
#endif
.msg_flags = MSG_NOSIGNAL,
};
@@ -328,19 +336,12 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
else
rel_to = ns_to_ktime(0);
tv = ktime_to_timeval(rel_to);
if (tv.tv_sec == 0 && tv.tv_usec == 0) {
if (ktime_compare(rel_to, ns_to_ktime(NSEC_PER_USEC)) <= 0) {
mh.msg_flags |= MSG_DONTWAIT;
} else {
ret = kernel_setsockopt(qinf->sock, SOL_SOCKET, SO_RCVTIMEO,
(char *)&tv, sizeof(tv));
if (ret < 0)
return ret;
ret = kc_tcp_sock_set_rcvtimeo(qinf->sock, rel_to);
}
#ifdef KC_MSGHDR_STRUCT_IOV_ITER
iov_iter_init(&mh.msg_iter, READ, (struct iovec *)&kv, sizeof(struct scoutfs_quorum_message), 1);
#endif
ret = kernel_recvmsg(qinf->sock, &mh, &kv, 1, kv.iov_len, mh.msg_flags);
if (ret < 0)
return ret;
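The one-microsecond cutoff stands in for the old zero-timeval test: a remaining timeout that small would have truncated to a zero struct timeval, and a zero SO_RCVTIMEO means block indefinitely, so tiny or expired timeouts fall back to a non-blocking receive instead.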
@@ -486,7 +487,7 @@ static void set_quorum_block_event(struct super_block *sb, struct scoutfs_quorum
if (WARN_ON_ONCE(event < 0 || event >= SCOUTFS_QUORUM_EVENT_NR))
return;
getnstimeofday64(&ts);
ktime_get_ts64(&ts);
le64_add_cpu(&blk->write_nr, 1);
ev = &blk->events[event];
@@ -525,10 +526,10 @@ static int update_quorum_block(struct super_block *sb, int event, u64 term, bool
set_quorum_block_event(sb, &blk, event, term);
ret = write_quorum_block(sb, blkno, &blk);
if (ret < 0)
scoutfs_err(sb, "error %d reading quorum block %llu to update event %d term %llu",
scoutfs_err(sb, "error %d writing quorum block %llu after updating event %d term %llu",
ret, blkno, event, term);
} else {
scoutfs_err(sb, "error %d writing quorum block %llu after updating event %d term %llu",
scoutfs_err(sb, "error %d reading quorum block %llu to update event %d term %llu",
ret, blkno, event, term);
}
@@ -560,10 +561,11 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_c
u64 term)
{
#define NR_OLD 2
struct scoutfs_quorum_block_event old[SCOUTFS_QUORUM_MAX_SLOTS][NR_OLD] = {{{0,}}};
struct scoutfs_quorum_block_event (*old)[NR_OLD];
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_quorum_block blk;
struct sockaddr_in sin;
struct sockaddr_storage sin;
union scoutfs_inet_addr addr;
const __le64 lefsid = cpu_to_le64(sbi->fsid);
const u64 rid = sbi->rid;
bool fence_started = false;
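The (*old)[NR_OLD] declaration above is a pointer to an array of NR_OLD events, which lets the later single heap allocation be indexed as a two-dimensional array exactly like the on-stack array it replaces. A standalone sketch of the pattern (the element type and slot count are made up for illustration):

#include <stdlib.h>
#include <string.h>

#define NR_OLD 2
#define MAX_SLOTS 15	/* stand-in for SCOUTFS_QUORUM_MAX_SLOTS */

struct event { unsigned long long term; };

int main(void)
{
	struct event (*old)[NR_OLD];	/* pointer to an array of NR_OLD events */

	/* sizeof(*old) is NR_OLD * sizeof(struct event), so one call covers the 2d array */
	old = malloc(MAX_SLOTS * sizeof(*old));
	if (!old)
		return 1;
	memset(old, 0, MAX_SLOTS * sizeof(*old));

	old[3][1].term = 42;		/* indexes like struct event old[MAX_SLOTS][NR_OLD] */

	free(old);
	return 0;
}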
@@ -576,13 +578,20 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_c
BUILD_BUG_ON(SCOUTFS_QUORUM_BLOCKS < SCOUTFS_QUORUM_MAX_SLOTS);
old = kmalloc(NR_OLD * SCOUTFS_QUORUM_MAX_SLOTS * sizeof(struct scoutfs_quorum_block_event), GFP_KERNEL);
if (!old) {
ret = -ENOMEM;
goto out;
}
memset(old, 0, NR_OLD * SCOUTFS_QUORUM_MAX_SLOTS * sizeof(struct scoutfs_quorum_block_event));
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(qconf, i))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
if (ret < 0)
goto out;
goto out_free;
/* elected leader still running */
if (le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_ELECT].term) >
@@ -616,14 +625,17 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_c
scoutfs_info(sb, "fencing previous leader "SCSBF" at term %llu in slot %u with address "SIN_FMT,
SCSB_LEFR_ARGS(lefsid, fence_rid),
le64_to_cpu(old[i][j].term), i, SIN_ARG(&sin));
ret = scoutfs_fence_start(sb, le64_to_cpu(fence_rid), sin.sin_addr.s_addr,
scoutfs_sin_to_addr(&addr, &sin);
ret = scoutfs_fence_start(sb, le64_to_cpu(fence_rid), &addr,
SCOUTFS_FENCE_QUORUM_BLOCK_LEADER);
if (ret < 0)
goto out;
goto out_free;
fence_started = true;
}
}
out_free:
kfree(old);
out:
err = scoutfs_fence_wait_fenced(sb, msecs_to_jiffies(SCOUTFS_QUORUM_FENCE_TO_MS));
if (ret == 0)
@@ -726,7 +738,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
struct quorum_info *qinf = container_of(work, struct quorum_info, work);
struct scoutfs_mount_options opts;
struct super_block *sb = qinf->sb;
struct sockaddr_in unused;
struct sockaddr_storage unused;
struct quorum_host_msg msg;
struct quorum_status qst = {0,};
struct hb_recording hbr;
@@ -827,6 +839,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* followers and candidates start new election on timeout */
if (qst.role != LEADER &&
msg.type == SCOUTFS_QUORUM_MSG_INVALID &&
ktime_after(ktime_get(), qst.timeout)) {
/* .. but only if their server has stopped */
if (!scoutfs_server_is_down(sb)) {
@@ -987,7 +1000,10 @@ static void scoutfs_quorum_worker(struct work_struct *work)
}
/* record that this slot no longer has an active quorum */
update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_END, qst.term, true);
err = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_END, qst.term, true);
if (err < 0 && ret == 0)
ret = err;
out:
if (ret < 0) {
scoutfs_err(sb, "quorum service saw error %d, shutting down. This mount is no longer participating in quorum. It should be remounted to restore service.",
@@ -1004,7 +1020,7 @@ out:
* leader with the greatest elected term. If we get it wrong the
* connection will timeout and the client will try again.
*/
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_storage *sin)
{
struct scoutfs_super_block *super = NULL;
struct scoutfs_quorum_block blk;
@@ -1063,7 +1079,7 @@ u8 scoutfs_quorum_votes_needed(struct super_block *sb)
return qinf->votes_needed;
}
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_in *sin)
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_storage *sin)
{
return quorum_slot_sin(qconf, i, sin);
}
@@ -1076,7 +1092,7 @@ static char *role_str(int role)
[LEADER] = "leader",
};
if (role < 0 || role > ARRAY_SIZE(roles) || !roles[role])
if (role < 0 || role >= ARRAY_SIZE(roles) || !roles[role])
return "invalid";
return roles[role];
@@ -1222,8 +1238,12 @@ static int verify_quorum_slots(struct super_block *sb, struct quorum_info *qinf,
struct scoutfs_quorum_config *qconf)
{
char slots[(SCOUTFS_QUORUM_MAX_SLOTS * 3) + 1];
struct sockaddr_in other;
struct sockaddr_in sin;
struct sockaddr_storage other;
struct sockaddr_storage sin;
struct sockaddr_in *sin4;
struct sockaddr_in *other4;
struct sockaddr_in6 *sin6;
struct sockaddr_in6 *other6;
int found = 0;
int ret;
int i;
@@ -1234,35 +1254,78 @@ static int verify_quorum_slots(struct super_block *sb, struct quorum_info *qinf,
if (!quorum_slot_present(qconf, i))
continue;
scoutfs_quorum_slot_sin(qconf, i, &sin);
if (quorum_slot_ipv4(qconf, i)) {
scoutfs_quorum_slot_sin(qconf, i, &sin);
sin4 = (struct sockaddr_in *)&sin;
if (!valid_ipv4_unicast(sin.sin_addr.s_addr)) {
scoutfs_err(sb, "quorum slot #%d has invalid ipv4 unicast address: "SIN_FMT,
i, SIN_ARG(&sin));
return -EINVAL;
}
if (!valid_ipv4_port(sin.sin_port)) {
scoutfs_err(sb, "quorum slot #%d has invalid ipv4 port number:"SIN_FMT,
i, SIN_ARG(&sin));
return -EINVAL;
}
for (j = i + 1; j < SCOUTFS_QUORUM_MAX_SLOTS; j++) {
if (!quorum_slot_present(qconf, j))
continue;
scoutfs_quorum_slot_sin(qconf, j, &other);
if (sin.sin_addr.s_addr == other.sin_addr.s_addr &&
sin.sin_port == other.sin_port) {
scoutfs_err(sb, "quorum slots #%u and #%u have the same address: "SIN_FMT,
i, j, SIN_ARG(&sin));
if (!valid_ipv4_unicast(sin4->sin_addr.s_addr)) {
scoutfs_err(sb, "quorum slot #%d has invalid ipv4 unicast address: "SIN_FMT,
i, SIN_ARG(&sin));
return -EINVAL;
}
}
found++;
if (!valid_ipv4_port(sin4->sin_port)) {
scoutfs_err(sb, "quorum slot #%d has invalid ipv4 port number:"SIN_FMT,
i, SIN_ARG(&sin));
return -EINVAL;
}
for (j = i + 1; j < SCOUTFS_QUORUM_MAX_SLOTS; j++) {
if (!quorum_slot_ipv4(qconf, j))
continue;
scoutfs_quorum_slot_sin(qconf, j, &other);
other4 = (struct sockaddr_in *)&other;
if (sin4->sin_addr.s_addr == other4->sin_addr.s_addr &&
sin4->sin_port == other4->sin_port) {
scoutfs_err(sb, "quorum slots #%u and #%u have the same address: "SIN_FMT,
i, j, SIN_ARG(&sin));
return -EINVAL;
}
}
found++;
} else if (quorum_slot_ipv6(qconf, i)) {
quorum_slot_sin(qconf, i, &sin);
sin6 = (struct sockaddr_in6 *)&sin;
if ((sin6->sin6_addr.in6_u.u6_addr32[0] == 0) && (sin6->sin6_addr.in6_u.u6_addr32[1] == 0) &&
(sin6->sin6_addr.in6_u.u6_addr32[2] == 0) && (sin6->sin6_addr.in6_u.u6_addr32[3] == 0)) {
scoutfs_err(sb, "quorum slot #%d has unspecified ipv6 address:"SIN_FMT,
i, SIN_ARG(&sin));
return -EINVAL;
}
if (sin6->sin6_addr.in6_u.u6_addr8[0] == 0xff) {
scoutfs_err(sb, "quorum slot #%d has multicast ipv6 address:"SIN_FMT,
i, SIN_ARG(&sin));
return -EINVAL;
}
if (!valid_ipv4_port(sin6->sin6_port)) {
scoutfs_err(sb, "quorum slot #%d has invalid ipv6 port number:"SIN_FMT,
i, SIN_ARG(&sin));
return -EINVAL;
}
for (j = i + 1; j < SCOUTFS_QUORUM_MAX_SLOTS; j++) {
if (!quorum_slot_ipv6(qconf, j))
continue;
quorum_slot_sin(qconf, j, &other);
other6 = (struct sockaddr_in6 *)&other;
if ((ipv6_addr_equal(&sin6->sin6_addr, &other6->sin6_addr)) &&
(sin6->sin6_port == other6->sin6_port)) {
scoutfs_err(sb, "quorum slots #%u and #%u have the same address: "SIN_FMT,
i, j, SIN_ARG(&sin));
return -EINVAL;
}
}
found++;
}
}
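For example, a slot of [::1]:5000 passes these address checks, while [::]:5000 (unspecified) and [ff02::1]:5000 (multicast) are rejected, as are any two slots that share both address and port.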
if (found == 0) {
@@ -1325,8 +1388,8 @@ int scoutfs_quorum_setup(struct super_block *sb)
qinf = kzalloc(sizeof(struct quorum_info), GFP_KERNEL);
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_KERNEL);
if (qinf)
qinf->hb_delay = __vmalloc(HB_DELAY_NR * sizeof(struct count_recent),
GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL);
qinf->hb_delay = kc__vmalloc(HB_DELAY_NR * sizeof(struct count_recent),
GFP_KERNEL | __GFP_ZERO);
if (!qinf || !super || !qinf->hb_delay) {
if (qinf)
vfree(qinf->hb_delay);

View File

@@ -1,11 +1,11 @@
#ifndef _SCOUTFS_QUORUM_H_
#define _SCOUTFS_QUORUM_H_
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_storage *sin);
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i,
struct sockaddr_in *sin);
struct sockaddr_storage *sin);
int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_config *qconf,
u64 term);

View File

@@ -1226,7 +1226,7 @@ int scoutfs_quota_setup(struct super_block *sb)
init_waitqueue_head(&qtinf->waitq);
KC_INIT_SHRINKER_FUNCS(&qtinf->shrinker, count_cached_checks, scan_cached_checks);
KC_REGISTER_SHRINKER(&qtinf->shrinker);
KC_REGISTER_SHRINKER(&qtinf->shrinker, "scoutfs-quota:" SCSBF, SCSB_ARGS(sb));
sbi->squota_info = qtinf;

View File

@@ -76,10 +76,10 @@ static struct recov_pending *lookup_pending(struct recov_info *recinf, u64 rid,
* We keep the pending list sorted by rid so that we can iterate over
* them. The list should be small and shouldn't be used often.
*/
static int cmp_pending_rid(void *priv, struct list_head *A, struct list_head *B)
static int cmp_pending_rid(void *priv, KC_LIST_CMP_CONST struct list_head *A, KC_LIST_CMP_CONST struct list_head *B)
{
struct recov_pending *a = list_entry(A, struct recov_pending, head);
struct recov_pending *b = list_entry(B, struct recov_pending, head);
KC_LIST_CMP_CONST struct recov_pending *a = list_entry(A, KC_LIST_CMP_CONST struct recov_pending, head);
KC_LIST_CMP_CONST struct recov_pending *b = list_entry(B, KC_LIST_CMP_CONST struct recov_pending, head);
return scoutfs_cmp_u64s(a->rid, b->rid);
}

View File

@@ -24,7 +24,6 @@
#include <linux/tracepoint.h>
#include <linux/in.h>
#include <linux/unaligned/access_ok.h>
#include "key.h"
#include "format.h"
@@ -287,6 +286,52 @@ TRACE_EVENT(scoutfs_data_alloc_block_enter,
STE_ENTRY_ARGS(ext))
);
TRACE_EVENT(scoutfs_data_page_mkwrite,
TP_PROTO(struct super_block *sb, __u64 ino, __u64 pos, __u32 ret),
TP_ARGS(sb, ino, pos, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(__u64, pos)
__field(__u32, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->pos = pos;
__entry->ret = ret;
),
TP_printk(SCSBF" ino %llu pos %llu ret %u ",
SCSB_TRACE_ARGS, __entry->ino, __entry->pos, __entry->ret)
);
TRACE_EVENT(scoutfs_data_filemap_fault,
TP_PROTO(struct super_block *sb, __u64 ino, __u64 pos, __u32 ret),
TP_ARGS(sb, ino, pos, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(__u64, pos)
__field(__u32, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->pos = pos;
__entry->ret = ret;
),
TP_printk(SCSBF" ino %llu pos %llu ret %u ",
SCSB_TRACE_ARGS, __entry->ino, __entry->pos, __entry->ret)
);
DECLARE_EVENT_CLASS(scoutfs_data_file_extent_class,
TP_PROTO(struct super_block *sb, __u64 ino, struct scoutfs_extent *ext),
@@ -778,13 +823,14 @@ DEFINE_EVENT(scoutfs_lock_info_class, scoutfs_lock_destroy,
);
TRACE_EVENT(scoutfs_xattr_set,
TP_PROTO(struct super_block *sb, size_t name_len, const void *value,
size_t size, int flags),
TP_PROTO(struct super_block *sb, __u64 ino, size_t name_len,
const void *value, size_t size, int flags),
TP_ARGS(sb, name_len, value, size, flags),
TP_ARGS(sb, ino, name_len, value, size, flags),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(size_t, name_len)
__field(const void *, value)
__field(size_t, size)
@@ -793,15 +839,16 @@ TRACE_EVENT(scoutfs_xattr_set,
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->name_len = name_len;
__entry->value = value;
__entry->size = size;
__entry->flags = flags;
),
TP_printk(SCSBF" name_len %zu value %p size %zu flags 0x%x",
SCSB_TRACE_ARGS, __entry->name_len, __entry->value,
__entry->size, __entry->flags)
TP_printk(SCSBF" ino %llu name_len %zu value %p size %zu flags 0x%x",
SCSB_TRACE_ARGS, __entry->ino, __entry->name_len,
__entry->value, __entry->size, __entry->flags)
);
TRACE_EVENT(scoutfs_advance_dirty_super,
@@ -1047,9 +1094,12 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
sk_trace_define(start)
sk_trace_define(end)
__field(u64, refresh_gen)
__field(u64, write_seq)
__field(u64, dirty_trans_seq)
__field(unsigned char, request_pending)
__field(unsigned char, invalidate_pending)
__field(int, mode)
__field(int, invalidating_mode)
__field(unsigned int, waiters_cw)
__field(unsigned int, waiters_pr)
__field(unsigned int, waiters_ex)
@@ -1062,9 +1112,12 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
sk_trace_assign(start, &lck->start);
sk_trace_assign(end, &lck->end);
__entry->refresh_gen = lck->refresh_gen;
__entry->write_seq = lck->write_seq;
__entry->dirty_trans_seq = lck->dirty_trans_seq;
__entry->request_pending = lck->request_pending;
__entry->invalidate_pending = lck->invalidate_pending;
__entry->mode = lck->mode;
__entry->invalidating_mode = lck->invalidating_mode;
__entry->waiters_pr = lck->waiters[SCOUTFS_LOCK_READ];
__entry->waiters_ex = lck->waiters[SCOUTFS_LOCK_WRITE];
__entry->waiters_cw = lck->waiters[SCOUTFS_LOCK_WRITE_ONLY];
@@ -1072,10 +1125,11 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
__entry->users_ex = lck->users[SCOUTFS_LOCK_WRITE];
__entry->users_cw = lck->users[SCOUTFS_LOCK_WRITE_ONLY];
),
TP_printk(SCSBF" start "SK_FMT" end "SK_FMT" mode %u reqpnd %u invpnd %u rfrgen %llu waiters: pr %u ex %u cw %u users: pr %u ex %u cw %u",
TP_printk(SCSBF" start "SK_FMT" end "SK_FMT" mode %u invmd %u reqp %u invp %u refg %llu wris %llu dts %llu waiters: pr %u ex %u cw %u users: pr %u ex %u cw %u",
SCSB_TRACE_ARGS, sk_trace_args(start), sk_trace_args(end),
__entry->mode, __entry->request_pending,
__entry->invalidate_pending, __entry->refresh_gen,
__entry->mode, __entry->invalidating_mode, __entry->request_pending,
__entry->invalidate_pending, __entry->refresh_gen, __entry->write_seq,
__entry->dirty_trans_seq,
__entry->waiters_pr, __entry->waiters_ex, __entry->waiters_cw,
__entry->users_pr, __entry->users_ex, __entry->users_cw)
);
@@ -1125,35 +1179,37 @@ DEFINE_EVENT(scoutfs_lock_class, scoutfs_lock_shrink,
);
DECLARE_EVENT_CLASS(scoutfs_net_class,
TP_PROTO(struct super_block *sb, struct sockaddr_in *name,
struct sockaddr_in *peer, struct scoutfs_net_header *nh),
TP_PROTO(struct super_block *sb, struct sockaddr_storage *name,
struct sockaddr_storage *peer, struct scoutfs_net_header *nh),
TP_ARGS(sb, name, peer, nh),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
si4_trace_define(name)
si4_trace_define(peer)
__field_struct(struct sockaddr_storage, name)
__field_struct(struct sockaddr_storage, peer)
snh_trace_define(nh)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
si4_trace_assign(name, name);
si4_trace_assign(peer, peer);
memcpy(&__entry->name, name, sizeof(struct sockaddr_storage));
memcpy(&__entry->peer, peer, sizeof(struct sockaddr_storage));
snh_trace_assign(nh, nh);
),
TP_printk(SCSBF" name "SI4_FMT" peer "SI4_FMT" nh "SNH_FMT,
SCSB_TRACE_ARGS, si4_trace_args(name), si4_trace_args(peer),
TP_printk(SCSBF" name "SIN_FMT" peer "SIN_FMT" nh "SNH_FMT,
SCSB_TRACE_ARGS,
&__entry->name,
&__entry->peer,
snh_trace_args(nh))
);
DEFINE_EVENT(scoutfs_net_class, scoutfs_net_send_message,
TP_PROTO(struct super_block *sb, struct sockaddr_in *name,
struct sockaddr_in *peer, struct scoutfs_net_header *nh),
TP_PROTO(struct super_block *sb, struct sockaddr_storage *name,
struct sockaddr_storage *peer, struct scoutfs_net_header *nh),
TP_ARGS(sb, name, peer, nh)
);
DEFINE_EVENT(scoutfs_net_class, scoutfs_net_recv_message,
TP_PROTO(struct super_block *sb, struct sockaddr_in *name,
struct sockaddr_in *peer, struct scoutfs_net_header *nh),
TP_PROTO(struct super_block *sb, struct sockaddr_storage *name,
struct sockaddr_storage *peer, struct scoutfs_net_header *nh),
TP_ARGS(sb, name, peer, nh)
);
@@ -1186,8 +1242,8 @@ DECLARE_EVENT_CLASS(scoutfs_net_conn_class,
__field(void *, sock)
__field(__u64, c_rid)
__field(__u64, greeting_id)
si4_trace_define(sockname)
si4_trace_define(peername)
__field_struct(struct sockaddr_storage, sockname)
__field_struct(struct sockaddr_storage, peername)
__field(unsigned char, e_accepted_head)
__field(void *, listening_conn)
__field(unsigned char, e_accepted_list)
@@ -1205,8 +1261,8 @@ DECLARE_EVENT_CLASS(scoutfs_net_conn_class,
__entry->sock = conn->sock;
__entry->c_rid = conn->rid;
__entry->greeting_id = conn->greeting_id;
si4_trace_assign(sockname, &conn->sockname);
si4_trace_assign(peername, &conn->peername);
memcpy(&__entry->sockname, &conn->sockname, sizeof(struct sockaddr_storage));
memcpy(&__entry->peername, &conn->peername, sizeof(struct sockaddr_storage));
__entry->e_accepted_head = !!list_empty(&conn->accepted_head);
__entry->listening_conn = conn->listening_conn;
__entry->e_accepted_list = !!list_empty(&conn->accepted_list);
@@ -1216,7 +1272,7 @@ DECLARE_EVENT_CLASS(scoutfs_net_conn_class,
__entry->e_resend_queue = !!list_empty(&conn->resend_queue);
__entry->recv_seq = atomic64_read(&conn->recv_seq);
),
TP_printk(SCSBF" flags %s rc_dl %lu cto %lu sk %p rid %llu grid %llu sn "SI4_FMT" pn "SI4_FMT" eah %u lc %p eal %u nss %llu nsi %llu esq %u erq %u rs %llu",
TP_printk(SCSBF" flags %s rc_dl %lu cto %lu sk %p rid %llu grid %llu sn "SIN_FMT" pn "SIN_FMT" eah %u lc %p eal %u nss %llu nsi %llu esq %u erq %u rs %llu",
SCSB_TRACE_ARGS,
print_conn_flags(__entry->flags),
__entry->reconn_deadline,
@@ -1224,8 +1280,8 @@ DECLARE_EVENT_CLASS(scoutfs_net_conn_class,
__entry->sock,
__entry->c_rid,
__entry->greeting_id,
si4_trace_args(sockname),
si4_trace_args(peername),
&__entry->sockname,
&__entry->peername,
__entry->e_accepted_head,
__entry->listening_conn,
__entry->e_accepted_list,
@@ -1914,15 +1970,17 @@ DEFINE_EVENT(scoutfs_server_client_count_class, scoutfs_server_client_down,
);
DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing,
exceeded),
TP_PROTO(struct super_block *sb, int holding, int applying,
int nr_holders, u32 budget,
u32 avail_before, u32 freed_before,
int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, budget, avail_before, freed_before, committing, exceeded),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(int, holding)
__field(int, applying)
__field(int, nr_holders)
__field(u32, budget)
__field(__u32, avail_before)
__field(__u32, freed_before)
__field(int, committing)
@@ -1933,35 +1991,45 @@ DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
__entry->holding = !!holding;
__entry->applying = !!applying;
__entry->nr_holders = nr_holders;
__entry->budget = budget;
__entry->avail_before = avail_before;
__entry->freed_before = freed_before;
__entry->committing = !!committing;
__entry->exceeded = !!exceeded;
),
TP_printk(SCSBF" holding %u applying %u nr %u avail_before %u freed_before %u committing %u exceeded %u",
SCSB_TRACE_ARGS, __entry->holding, __entry->applying, __entry->nr_holders,
__entry->avail_before, __entry->freed_before, __entry->committing,
__entry->exceeded)
TP_printk(SCSBF" holding %u applying %u nr %u budget %u avail_before %u freed_before %u committing %u exceeded %u",
SCSB_TRACE_ARGS, __entry->holding, __entry->applying,
__entry->nr_holders, __entry->budget,
__entry->avail_before, __entry->freed_before,
__entry->committing, __entry->exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_hold,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
TP_PROTO(struct super_block *sb, int holding, int applying,
int nr_holders, u32 budget,
u32 avail_before, u32 freed_before,
int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, budget, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_apply,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
TP_PROTO(struct super_block *sb, int holding, int applying,
int nr_holders, u32 budget,
u32 avail_before, u32 freed_before,
int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, budget, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_start,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
TP_PROTO(struct super_block *sb, int holding, int applying,
int nr_holders, u32 budget,
u32 avail_before, u32 freed_before,
int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, budget, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_end,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
TP_PROTO(struct super_block *sb, int holding, int applying,
int nr_holders, u32 budget,
u32 avail_before, u32 freed_before,
int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, budget, avail_before, freed_before, committing, exceeded)
);
#define slt_symbolic(mode) \
@@ -2399,6 +2467,27 @@ TRACE_EVENT(scoutfs_block_dirty_ref,
__entry->block_blkno, __entry->block_seq)
);
TRACE_EVENT(scoutfs_get_file_block,
TP_PROTO(struct super_block *sb, u64 blkno, int flags),
TP_ARGS(sb, blkno, flags),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, blkno)
__field(int, flags)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->blkno = blkno;
__entry->flags = flags;
),
TP_printk(SCSBF" blkno %llu flags 0x%x",
SCSB_TRACE_ARGS, __entry->blkno, __entry->flags)
);
TRACE_EVENT(scoutfs_block_stale,
TP_PROTO(struct super_block *sb, struct scoutfs_block_ref *ref,
struct scoutfs_block_header *hdr, u32 magic, u32 crc),
@@ -2439,8 +2528,8 @@ TRACE_EVENT(scoutfs_block_stale,
DECLARE_EVENT_CLASS(scoutfs_block_class,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno, int refcount, int io_count,
unsigned long bits, __u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed),
unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, bp)
@@ -2448,7 +2537,6 @@ DECLARE_EVENT_CLASS(scoutfs_block_class,
__field(int, refcount)
__field(int, io_count)
__field(long, bits)
__field(__u64, accessed)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
@@ -2457,71 +2545,65 @@ DECLARE_EVENT_CLASS(scoutfs_block_class,
__entry->refcount = refcount;
__entry->io_count = io_count;
__entry->bits = bits;
__entry->accessed = accessed;
),
TP_printk(SCSBF" bp %p blkno %llu refcount %d io_count %d bits 0x%lx accessed %llu",
TP_printk(SCSBF" bp %p blkno %llu refcount %x io_count %d bits 0x%lx",
SCSB_TRACE_ARGS, __entry->bp, __entry->blkno, __entry->refcount,
__entry->io_count, __entry->bits, __entry->accessed)
__entry->io_count, __entry->bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_allocate,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_free,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_insert,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_remove,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_end_io,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_submit,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_invalidate,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_mark_dirty,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_forget,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_shrink,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_isolate,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DECLARE_EVENT_CLASS(scoutfs_ext_next_class,
@@ -2996,6 +3078,27 @@ DEFINE_EVENT(scoutfs_srch_compact_class, scoutfs_srch_compact_client_recv,
TP_ARGS(sb, sc)
);
TRACE_EVENT(scoutfs_ioc_search_xattrs,
TP_PROTO(struct super_block *sb, u64 ino, u64 last_ino),
TP_ARGS(sb, ino, last_ino),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(u64, ino)
__field(u64, last_ino)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->last_ino = last_ino;
),
TP_printk(SCSBF" ino %llu last_ino %llu", SCSB_TRACE_ARGS,
__entry->ino, __entry->last_ino)
);
#endif /* _TRACE_SCOUTFS_H */
/* This part must be outside protection */

View File

@@ -65,6 +65,7 @@ struct commit_users {
struct list_head holding;
struct list_head applying;
unsigned int nr_holders;
u32 budget;
u32 avail_before;
u32 freed_before;
bool committing;
@@ -84,8 +85,9 @@ static void init_commit_users(struct commit_users *cusers)
do { \
__typeof__(cusers) _cusers = (cusers); \
trace_scoutfs_server_commit_##which(sb, !list_empty(&_cusers->holding), \
!list_empty(&_cusers->applying), _cusers->nr_holders, _cusers->avail_before, \
_cusers->freed_before, _cusers->committing, _cusers->exceeded); \
!list_empty(&_cusers->applying), _cusers->nr_holders, _cusers->budget, \
_cusers->avail_before, _cusers->freed_before, _cusers->committing, \
_cusers->exceeded); \
} while (0)
struct server_info {
@@ -298,12 +300,11 @@ static void check_holder_budget(struct super_block *sb, struct server_info *serv
{
static bool exceeded_once = false;
struct commit_hold *hold;
struct timespec ts;
struct timespec64 ts;
u32 avail_used;
u32 freed_used;
u32 avail_now;
u32 freed_now;
u32 budget;
assert_spin_locked(&cusers->lock);
@@ -318,19 +319,18 @@ static void check_holder_budget(struct super_block *sb, struct server_info *serv
else
freed_used = SCOUTFS_ALLOC_LIST_MAX_BLOCKS - freed_now;
budget = cusers->nr_holders * COMMIT_HOLD_ALLOC_BUDGET;
if (avail_used <= budget && freed_used <= budget)
if (avail_used <= cusers->budget && freed_used <= cusers->budget)
return;
exceeded_once = true;
cusers->exceeded = cusers->nr_holders;
scoutfs_err(sb, "%u holders exceeded alloc budget av: bef %u now %u, fr: bef %u now %u",
cusers->nr_holders, cusers->avail_before, avail_now,
scoutfs_err(sb, "holders exceeded alloc budget %u av: bef %u now %u, fr: bef %u now %u",
cusers->budget, cusers->avail_before, avail_now,
cusers->freed_before, freed_now);
list_for_each_entry(hold, &cusers->holding, entry) {
ts = ktime_to_timespec(hold->start);
ts = ktime_to_timespec64(hold->start);
scoutfs_err(sb, "exceeding hold start %llu.%09llu av %u fr %u",
(u64)ts.tv_sec, (u64)ts.tv_nsec, hold->avail, hold->freed);
hold->exceeded = true;
@@ -349,7 +349,7 @@ static bool hold_commit(struct super_block *sb, struct server_info *server,
{
bool has_room;
bool held;
u32 budget;
u32 new_budget;
u32 av;
u32 fr;
@@ -367,8 +367,8 @@ static bool hold_commit(struct super_block *sb, struct server_info *server,
}
/* +2 for our additional hold and then for the final commit work the server does */
budget = (cusers->nr_holders + 2) * COMMIT_HOLD_ALLOC_BUDGET;
has_room = av >= budget && fr >= budget;
new_budget = max(cusers->budget, (cusers->nr_holders + 2) * COMMIT_HOLD_ALLOC_BUDGET);
has_room = av >= new_budget && fr >= new_budget;
/* checking applying so holders drain once an apply caller starts waiting */
held = !cusers->committing && has_room && list_empty(&cusers->applying);
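Worked through: the first holder arrives with nr_holders at 0 and needs room for (0 + 2) budgets, a second raises the requirement to 3 budgets, and so on. The recorded budget only grows until commit_end() resets it, so check_holder_budget() compares usage against the commit's high-water budget rather than a figure that shrinks as holders apply; that appears to be the point of carrying it in cusers.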
@@ -388,6 +388,7 @@ static bool hold_commit(struct super_block *sb, struct server_info *server,
list_add_tail(&hold->entry, &cusers->holding);
cusers->nr_holders++;
cusers->budget = new_budget;
} else if (!has_room && cusers->nr_holders == 0 && !cusers->committing) {
cusers->committing = true;
@@ -445,7 +446,7 @@ static int server_apply_commit(struct super_block *sb, struct commit_hold *hold,
{
DECLARE_SERVER_INFO(sb, server);
struct commit_users *cusers = &server->cusers;
struct timespec ts;
struct timespec64 ts;
spin_lock(&cusers->lock);
@@ -454,7 +455,7 @@ static int server_apply_commit(struct super_block *sb, struct commit_hold *hold,
check_holder_budget(sb, server, cusers);
if (hold->exceeded) {
ts = ktime_to_timespec(hold->start);
ts = ktime_to_timespec64(hold->start);
scoutfs_err(sb, "exceeding hold start %llu.%09llu stack:",
(u64)ts.tv_sec, (u64)ts.tv_nsec);
dump_stack();
@@ -516,6 +517,7 @@ static void commit_end(struct super_block *sb, struct commit_users *cusers, int
list_for_each_entry_safe(hold, tmp, &cusers->applying, entry)
list_del_init(&hold->entry);
cusers->committing = false;
cusers->budget = 0;
spin_unlock(&cusers->lock);
wake_up(&cusers->waitq);
@@ -608,7 +610,7 @@ static void scoutfs_server_commit_func(struct work_struct *work)
goto out;
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
ret = -ENOLINK;
goto out;
}
@@ -992,10 +994,11 @@ static int for_each_rid_last_lt(struct super_block *sb, struct scoutfs_btree_roo
}
/*
* Log merge range items are stored at the starting fs key of the range.
* The only fs key field that doesn't hold information is the zone, so
* we use the zone to differentiate all types that we store in the log
* merge tree.
* Log merge range items are stored at the starting fs key of the range
* with the zone overwritten to indicate the log merge item type. This
* day0 mistake loses sorting information for items in the different
* zones in the fs root, so the range items aren't strictly sorted by
* the starting key of their range.
*/
static void init_log_merge_key(struct scoutfs_key *key, u8 zone, u64 first,
u64 second)
@@ -1027,6 +1030,51 @@ static int next_log_merge_item_key(struct super_block *sb, struct scoutfs_btree_
return ret;
}
/*
 * The range items aren't sorted by their range.start because
 * _RANGE_ZONE clobbers the range's zone.  We sweep all the items and
 * return the range with the least starting key that's greater than or
 * equal to the caller's starting key.  We have to iterate by the
 * log_merge tree keys, not by the ranges themselves, because the
 * ranges can overlap once they've been mapped to log_merge keys by
 * clobbering their zone.
 */
static int next_log_merge_range(struct super_block *sb, struct scoutfs_btree_root *root,
struct scoutfs_key *start, struct scoutfs_log_merge_range *rng)
{
struct scoutfs_log_merge_range *next;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
key = *start;
key.sk_zone = SCOUTFS_LOG_MERGE_RANGE_ZONE;
scoutfs_key_set_ones(&rng->start);
do {
ret = scoutfs_btree_next(sb, root, &key, &iref);
if (ret == 0) {
if (iref.key->sk_zone != SCOUTFS_LOG_MERGE_RANGE_ZONE) {
ret = -ENOENT;
} else if (iref.val_len != sizeof(struct scoutfs_log_merge_range)) {
ret = -EIO;
} else {
next = iref.val;
if (scoutfs_key_compare(&next->start, &rng->start) < 0 &&
scoutfs_key_compare(&next->start, start) >= 0)
*rng = *next;
key = *iref.key;
scoutfs_key_inc(&key);
}
scoutfs_btree_put_iref(&iref);
}
} while (ret == 0);
if (ret == -ENOENT && !scoutfs_key_is_ones(&rng->start))
ret = 0;
return ret;
}
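For example, if ranges start at keys A < B < C but their items sort differently because the zone is clobbered, a caller passing start at B sweeps every range item and gets the range starting at B; advancing start past B returns the range starting at C, and a start beyond every range yields -ENOENT.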
static int next_log_merge_item(struct super_block *sb,
struct scoutfs_btree_root *root,
u8 zone, u64 first, u64 second,
@@ -1038,6 +1086,101 @@ static int next_log_merge_item(struct super_block *sb,
return next_log_merge_item_key(sb, root, zone, &key, val, val_len);
}
static int do_finalize_ours(struct super_block *sb,
struct scoutfs_log_trees *lt,
struct commit_hold *hold)
{
struct server_info *server = SCOUTFS_SB(sb)->server_info;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_key key;
char *err_str = NULL;
u64 rid = le64_to_cpu(lt->rid);
bool more;
int ret;
int err;
mutex_lock(&server->srch_mutex);
ret = scoutfs_srch_rotate_log(sb, &server->alloc, &server->wri,
&super->srch_root, &lt->srch_file, true);
mutex_unlock(&server->srch_mutex);
if (ret < 0) {
scoutfs_err(sb, "error rotating srch log for rid %016llx: %d",
rid, ret);
return ret;
}
do {
more = false;
/*
* All of these can return errors, perhaps indicating successful
* partial progress, after having modified the allocator trees.
* We always have to update the roots in the log item.
*/
mutex_lock(&server->alloc_mutex);
ret = (err_str = "splice meta_freed to other_freed",
scoutfs_alloc_splice_list(sb, &server->alloc,
&server->wri, server->other_freed,
&lt->meta_freed)) ?:
(err_str = "splice meta_avail",
scoutfs_alloc_splice_list(sb, &server->alloc,
&server->wri, server->other_freed,
&lt->meta_avail)) ?:
(err_str = "empty data_avail",
alloc_move_empty(sb, &super->data_alloc,
&lt->data_avail,
COMMIT_HOLD_ALLOC_BUDGET / 2)) ?:
(err_str = "empty data_freed",
alloc_move_empty(sb, &super->data_alloc,
&lt->data_freed,
COMMIT_HOLD_ALLOC_BUDGET / 2));
mutex_unlock(&server->alloc_mutex);
/*
* only finalize, allowing merging, once the allocators are
* fully freed
*/
if (ret == 0) {
/* the transaction is no longer open */
le64_add_cpu(&lt->flags, SCOUTFS_LOG_TREES_FINALIZED);
lt->finalize_seq = cpu_to_le64(scoutfs_server_next_seq(sb));
}
scoutfs_key_init_log_trees(&key, rid, le64_to_cpu(lt->nr));
err = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, lt,
sizeof(*lt));
BUG_ON(err != 0); /* alloc, log, srch items out of sync */
if (ret == -EINPROGRESS) {
more = true;
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, hold, 0);
WARN_ON_ONCE(ret < 0);
server_hold_commit(sb, hold);
mutex_lock(&server->logs_mutex);
} else if (ret == 0) {
memset(&lt->item_root, 0, sizeof(lt->item_root));
memset(&lt->bloom_ref, 0, sizeof(lt->bloom_ref));
lt->inode_count_delta = 0;
lt->max_item_seq = 0;
lt->finalize_seq = 0;
le64_add_cpu(&lt->nr, 1);
lt->flags = 0;
}
} while (more);
if (ret < 0) {
scoutfs_err(sb,
"error %d finalizing log trees for rid %016llx: %s",
ret, rid, err_str);
}
return ret;
}
/*
* Finalizing the log btrees for merging needs to be done carefully so
* that items don't appear to go backwards in time.
@@ -1089,7 +1232,6 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
struct scoutfs_log_merge_range rng;
struct scoutfs_mount_options opts;
struct scoutfs_log_trees each_lt;
struct scoutfs_log_trees fin;
unsigned int delay_ms;
unsigned long timeo;
bool saw_finalized;
@@ -1160,6 +1302,7 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
/* done if we're not finalizing and there's no finalized */
if (!finalize_ours && !saw_finalized) {
ret = 0;
scoutfs_inc_counter(sb, log_merge_no_finalized);
break;
}
@@ -1194,32 +1337,11 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
/* Finalize ours if it's visible to others */
if (ours_visible) {
fin = *lt;
memset(&fin.meta_avail, 0, sizeof(fin.meta_avail));
memset(&fin.meta_freed, 0, sizeof(fin.meta_freed));
memset(&fin.data_avail, 0, sizeof(fin.data_avail));
memset(&fin.data_freed, 0, sizeof(fin.data_freed));
memset(&fin.srch_file, 0, sizeof(fin.srch_file));
le64_add_cpu(&fin.flags, SCOUTFS_LOG_TREES_FINALIZED);
fin.finalize_seq = cpu_to_le64(scoutfs_server_next_seq(sb));
scoutfs_key_init_log_trees(&key, le64_to_cpu(fin.rid),
le64_to_cpu(fin.nr));
ret = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &fin,
sizeof(fin));
ret = do_finalize_ours(sb, lt, hold);
if (ret < 0) {
err_str = "updating finalized log_trees";
err_str = "finalizing ours";
break;
}
memset(&lt->item_root, 0, sizeof(lt->item_root));
memset(&lt->bloom_ref, 0, sizeof(lt->bloom_ref));
lt->inode_count_delta = 0;
lt->max_item_seq = 0;
lt->finalize_seq = 0;
le64_add_cpu(&lt->nr, 1);
lt->flags = 0;
}
/* wait a bit for mounts to arrive */
@@ -1299,12 +1421,10 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
* is nested inside holding commits so we recheck the persistent item
* each time we commit to make sure it's still what we think. The
* caller is still going to send the item to the client so we update the
* caller's each time we make progress. This is a best-effort attempt
* to clean up and it's valid to leave extents in data_freed we don't
* return errors to the caller. The client will continue the work later
* in get_log_trees or as the rid is reclaimed.
* caller's each time we make progress. If we hit an error applying the
* changes we've made, we can't send the log_trees to the client.
*/
static void try_drain_data_freed(struct super_block *sb, struct scoutfs_log_trees *lt)
static int try_drain_data_freed(struct super_block *sb, struct scoutfs_log_trees *lt)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
@@ -1313,6 +1433,7 @@ static void try_drain_data_freed(struct super_block *sb, struct scoutfs_log_tree
struct scoutfs_log_trees drain;
struct scoutfs_key key;
COMMIT_HOLD(hold);
bool apply = false;
int ret = 0;
int err;
@@ -1321,22 +1442,27 @@ static void try_drain_data_freed(struct super_block *sb, struct scoutfs_log_tree
while (lt->data_freed.total_len != 0) {
server_hold_commit(sb, &hold);
mutex_lock(&server->logs_mutex);
apply = true;
ret = find_log_trees_item(sb, &super->logs_root, false, rid, U64_MAX, &drain);
if (ret < 0)
if (ret < 0) {
ret = 0;
break;
}
/* careful to only keep draining the caller's specific open trans */
if (drain.nr != lt->nr || drain.get_trans_seq != lt->get_trans_seq ||
drain.commit_trans_seq != lt->commit_trans_seq || drain.flags != lt->flags) {
ret = -ENOENT;
ret = 0;
break;
}
ret = scoutfs_btree_dirty(sb, &server->alloc, &server->wri,
&super->logs_root, &key);
if (ret < 0)
if (ret < 0) {
ret = 0;
break;
}
/* moving can modify and return errors, always update caller and item */
mutex_lock(&server->alloc_mutex);
@@ -1352,19 +1478,19 @@ static void try_drain_data_freed(struct super_block *sb, struct scoutfs_log_tree
BUG_ON(err < 0); /* dirtying must guarantee success */
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, ret);
if (ret < 0) {
ret = 0; /* don't try to abort, ignoring ret */
apply = false;
if (ret < 0)
break;
}
}
/* try to cleanly abort and write any partial dirty btree blocks, but ignore result */
if (ret < 0) {
if (apply) {
mutex_unlock(&server->logs_mutex);
server_apply_commit(sb, &hold, 0);
server_apply_commit(sb, &hold, ret);
}
return ret;
}
/*
@@ -1572,9 +1698,9 @@ out:
scoutfs_err(sb, "error %d getting log trees for rid %016llx: %s",
ret, rid, err_str);
/* try to drain excessive data_freed with additional commits, if needed, ignoring err */
/* try to drain excessive data_freed with additional commits, if needed */
if (ret == 0)
try_drain_data_freed(sb, &lt);
ret = try_drain_data_freed(sb, &lt);
return scoutfs_net_response(sb, conn, cmd, id, ret, &lt, sizeof(lt));
}
@@ -1602,6 +1728,7 @@ static int server_commit_log_trees(struct super_block *sb,
int ret;
if (arg_len != sizeof(struct scoutfs_log_trees)) {
err_str = "invalid message log_trees size";
ret = -EINVAL;
goto out;
}
@@ -1665,7 +1792,7 @@ static int server_commit_log_trees(struct super_block *sb,
ret = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &lt, sizeof(lt));
BUG_ON(ret < 0); /* dirtying should have guaranteed success */
BUG_ON(ret < 0); /* dirtying should have guaranteed success, srch item inconsistent */
if (ret < 0)
err_str = "updating log trees item";
@@ -1673,11 +1800,10 @@ unlock:
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, ret);
if (ret < 0)
scoutfs_err(sb, "server error %d committing client logs for rid %016llx: %s",
ret, rid, err_str);
out:
WARN_ON_ONCE(ret < 0);
if (ret < 0)
scoutfs_err(sb, "server error %d committing client logs for rid %016llx, nr %llu: %s",
ret, rid, le64_to_cpu(lt.nr), err_str);
return scoutfs_net_response(sb, conn, cmd, id, ret, NULL, 0);
}
@@ -1810,6 +1936,9 @@ static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
out:
mutex_unlock(&server->logs_mutex);
if (ret == 0)
scoutfs_inc_counter(sb, reclaimed_open_logs);
if (ret < 0 && ret != -EINPROGRESS)
scoutfs_err(sb, "server error %d reclaiming log trees for rid %016llx: %s",
ret, rid, err_str);
@@ -2011,7 +2140,7 @@ static int server_srch_get_compact(struct super_block *sb,
apply:
ret = server_apply_commit(sb, &hold, ret);
WARN_ON_ONCE(ret < 0 && ret != -ENOENT); /* XXX leaked busy item */
WARN_ON_ONCE(ret < 0 && ret != -ENOENT && ret != -ENOLINK); /* XXX leaked busy item */
out:
ret = scoutfs_net_response(sb, conn, cmd, id, ret,
sc, sizeof(struct scoutfs_srch_compact));
@@ -2051,7 +2180,7 @@ static int server_srch_commit_compact(struct super_block *sb,
&super->srch_root, rid, sc,
&av, &fr);
mutex_unlock(&server->srch_mutex);
if (ret < 0) /* XXX very bad, leaks allocators */
if (ret < 0)
goto apply;
/* reclaim allocators if they were set by _srch_commit_ */
@@ -2061,10 +2190,10 @@ static int server_srch_commit_compact(struct super_block *sb,
scoutfs_alloc_splice_list(sb, &server->alloc, &server->wri,
server->other_freed, &fr);
mutex_unlock(&server->alloc_mutex);
WARN_ON(ret < 0); /* XXX leaks allocators */
apply:
ret = server_apply_commit(sb, &hold, ret);
out:
WARN_ON(ret < 0); /* XXX leaks allocators */
return scoutfs_net_response(sb, conn, cmd, id, ret, NULL, 0);
}
@@ -2389,10 +2518,9 @@ out:
}
}
if (ret < 0)
scoutfs_err(sb, "server error %d splicing log merge completion: %s", ret, err_str);
BUG_ON(ret); /* inconsistent */
/* inconsistent */
scoutfs_bug_on_err(sb, ret,
"server error %d splicing log merge completion: %s", ret, err_str);
return ret ?: einprogress;
}
@@ -2527,7 +2655,7 @@ static void server_log_merge_free_work(struct work_struct *work)
ret = scoutfs_btree_free_blocks(sb, &server->alloc,
&server->wri, &fr.key,
&fr.root, COMMIT_HOLD_ALLOC_BUDGET / 2);
&fr.root, COMMIT_HOLD_ALLOC_BUDGET / 8);
if (ret < 0) {
err_str = "freeing log btree";
break;
@@ -2546,7 +2674,7 @@ static void server_log_merge_free_work(struct work_struct *work)
/* freed blocks are in allocator, we *have* to update fr */
BUG_ON(ret < 0);
if (server_hold_alloc_used_since(sb, &hold) >= COMMIT_HOLD_ALLOC_BUDGET / 2) {
if (server_hold_alloc_used_since(sb, &hold) >= (COMMIT_HOLD_ALLOC_BUDGET * 3) / 4) {
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, ret);
commit = false;
@@ -2637,10 +2765,7 @@ restart:
/* find the next range, always checking for splicing */
for (;;) {
key = stat.next_range_key;
key.sk_zone = SCOUTFS_LOG_MERGE_RANGE_ZONE;
ret = next_log_merge_item_key(sb, &super->log_merge, SCOUTFS_LOG_MERGE_RANGE_ZONE,
&key, &rng, sizeof(rng));
ret = next_log_merge_range(sb, &super->log_merge, &stat.next_range_key, &rng);
if (ret < 0 && ret != -ENOENT) {
err_str = "finding merge range item";
goto out;
@@ -2911,7 +3036,13 @@ static int server_commit_log_merge(struct super_block *sb,
SCOUTFS_LOG_MERGE_STATUS_ZONE, 0, 0,
&stat, sizeof(stat));
if (ret < 0) {
err_str = "getting merge status item";
/*
* During a retransmission, it's possible that the server
* already committed and resolved this log merge. ENOENT
* is expected in that case.
*/
if (ret != -ENOENT)
err_str = "getting merge status item";
goto out;
}
@@ -3410,7 +3541,7 @@ static bool invalid_mounted_client_item(struct scoutfs_btree_item_ref *iref)
* it's acceptable to see -EEXIST.
*/
static int insert_mounted_client(struct super_block *sb, u64 rid, u64 gr_flags,
struct sockaddr_in *sin)
struct sockaddr_storage *sin)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
@@ -4149,7 +4280,7 @@ static void fence_pending_recov_worker(struct work_struct *work)
struct server_info *server = container_of(work, struct server_info,
fence_pending_recov_work);
struct super_block *sb = server->sb;
union scoutfs_inet_addr addr;
union scoutfs_inet_addr addr = {{0,}};
u64 rid = 0;
int ret = 0;
@@ -4163,7 +4294,7 @@ static void fence_pending_recov_worker(struct work_struct *work)
break;
}
ret = scoutfs_fence_start(sb, rid, le32_to_be32(addr.v4.addr),
ret = scoutfs_fence_start(sb, rid, &addr,
SCOUTFS_FENCE_CLIENT_RECOVERY);
if (ret < 0) {
scoutfs_err(sb, "fence returned err %d, shutting down server", ret);
@@ -4314,7 +4445,7 @@ static void scoutfs_server_worker(struct work_struct *work)
struct scoutfs_net_connection *conn = NULL;
struct scoutfs_mount_options opts;
DECLARE_WAIT_QUEUE_HEAD(waitq);
struct sockaddr_in sin;
struct sockaddr_storage sin;
bool alloc_init = false;
u64 max_seq;
int ret;
@@ -4323,7 +4454,7 @@ static void scoutfs_server_worker(struct work_struct *work)
scoutfs_options_read(sb, &opts);
scoutfs_quorum_slot_sin(&server->qconf, opts.quorum_slot_nr, &sin);
scoutfs_info(sb, "server starting at "SIN_FMT, SIN_ARG(&sin));
scoutfs_info(sb, "server starting at "SIN_FMT, &sin);
scoutfs_block_writer_init(sb, &server->wri);
server->finalize_sent_seq = 0;

View File

@@ -1,27 +1,6 @@
#ifndef _SCOUTFS_SERVER_H_
#define _SCOUTFS_SERVER_H_
#define SI4_FMT "%u.%u.%u.%u:%u"
#define si4_trace_define(name) \
__field(__u32, name##_addr) \
__field(__u16, name##_port)
#define si4_trace_assign(name, sin) \
do { \
__typeof__(sin) _sin = (sin); \
\
__entry->name##_addr = be32_to_cpu(_sin->sin_addr.s_addr); \
__entry->name##_port = be16_to_cpu(_sin->sin_port); \
} while(0)
#define si4_trace_args(name) \
(__entry->name##_addr >> 24), \
(__entry->name##_addr >> 16) & 255, \
(__entry->name##_addr >> 8) & 255, \
__entry->name##_addr & 255, \
__entry->name##_port
#define SNH_FMT \
"seq %llu recv_seq %llu id %llu data_len %u cmd %u flags 0x%x error %u"
#define SNH_ARG(nh) \

45
kmod/src/sparse-filtered.sh Executable file
View File

@@ -0,0 +1,45 @@
#!/bin/bash
#
# Unfortunately, kernels can ship which contain sparse errors that are
# unrelated to us.
#
# The exit status of this filtering wrapper will indicate an error if
# sparse wasn't found or if there were any unfiltered output lines. It
# can hide error exit status from sparse or grep if they don't produce
# output that makes it past the filters.
#
# must have sparse. Fail with error message, mask success path.
which sparse > /dev/null || exit 1
# initial unmatchable, additional added as RE+="|..."
RE="$^"
#
# Darn. sparse has multi-line error messages, and I'd rather not bother
# with multi-line filters. So we'll just drop this context.
#
# command-line: note: in included file (through include/linux/netlink.h, include/linux/ethtool.h, include/linux/netdevice.h, include/net/sock.h, /root/scoutfs/kmod/src/kernelcompat.h, builtin):
# fprintf(stderr, "%s: note: in included file%s:\n",
#
RE+="|: note: in included file"
# 3.10.0-1160.119.1.el7.x86_64.debug
# include/linux/posix_acl.h:138:9: warning: incorrect type in assignment (different address spaces)
# include/linux/posix_acl.h:138:9: expected struct posix_acl *<noident>
# include/linux/posix_acl.h:138:9: got struct posix_acl [noderef] <asn:4>*<noident>
RE+="|include/linux/posix_acl.h:"
# 3.10.0-1160.119.1.el7.x86_64.debug
#include/uapi/linux/perf_event.h:146:56: warning: cast truncates bits from constant value (8000000000000000 becomes 0)
RE+="|include/uapi/linux/perf_event.h:"
# 4.18.0-513.24.1.el8_9.x86_64+debug'
#./include/linux/skbuff.h:824:1: warning: directive in macro's argument list
RE+="|include/linux/skbuff.h:"
sparse "$@" |& \
grep -E -v "($RE)" |& \
awk '{ print $0 } END { exit NR > 0 }'
exit $?

View File

@@ -18,6 +18,7 @@
#include <linux/pagemap.h>
#include <linux/vmalloc.h>
#include <linux/sort.h>
#include <asm/unaligned.h>
#include "super.h"
#include "format.h"
@@ -61,7 +62,7 @@
* re-allocated and re-written. Search can restart by checking the
* btree for the current set of files. Compaction reads log files which
* are protected from other compactions by the persistent busy items
* created by the server. Compaction won't see it's blocks reused out
* created by the server. Compaction won't see its blocks reused out
* from under it, but it can encounter stale cached blocks that need to
* be invalidated.
*/
@@ -441,6 +442,10 @@ out:
if (ret == 0 && (flags & GFB_INSERT) && blk >= le64_to_cpu(sfl->blocks))
sfl->blocks = cpu_to_le64(blk + 1);
if (bl) {
trace_scoutfs_get_file_block(sb, bl->blkno, flags);
}
*bl_ret = bl;
return ret;
}
@@ -532,23 +537,35 @@ out:
* the pairs cancel each other out by all readers (the second encoding
* looks like deletion) so they aren't visible to the first/last bounds of
* the block or file.
*
* We use the same entry repeatedly, so the diff between them will be empty.
* This lets us just emit the two-byte count word, leaving the other bytes
* as zero.
*
* Split the desired total len into two pieces, adding any remainder to the
* first four-bit value.
*/
static int append_padded_entry(struct scoutfs_srch_file *sfl, u64 blk,
struct scoutfs_srch_block *srb, struct scoutfs_srch_entry *sre)
static void append_padded_entry(struct scoutfs_srch_file *sfl,
struct scoutfs_srch_block *srb,
int len)
{
int ret;
int each;
int rem;
u16 lengths = 0;
u8 *buf = srb->entries + le32_to_cpu(srb->entry_bytes);
ret = encode_entry(srb->entries + le32_to_cpu(srb->entry_bytes),
sre, &srb->tail);
if (ret > 0) {
srb->tail = *sre;
le32_add_cpu(&srb->entry_nr, 1);
le32_add_cpu(&srb->entry_bytes, ret);
le64_add_cpu(&sfl->entries, 1);
ret = 0;
}
each = (len - 2) >> 1;
rem = (len - 2) & 1;
return ret;
lengths |= each + rem;
lengths |= each << 4;
memset(buf, 0, len);
put_unaligned_le16(lengths, buf);
le32_add_cpu(&srb->entry_nr, 1);
le32_add_cpu(&srb->entry_bytes, len);
le64_add_cpu(&sfl->entries, 1);
}
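
A hedged userspace sketch of the count-word math above may help; it only mirrors the each/rem split and the nibble packing, showing the resulting count words for total lengths of 10 and 9 bytes. It is an illustration, not scoutfs code.

#include <stdint.h>
#include <stdio.h>

static uint16_t pad_lengths(int len)
{
        /* split the total length, minus the two count-word bytes, into
         * two nibble-sized field lengths; remainder goes to the first */
        int each = (len - 2) >> 1;
        int rem = (len - 2) & 1;

        return (uint16_t)((each + rem) | (each << 4));
}

int main(void)
{
        printf("len 10 -> lengths 0x%02x\n", (unsigned)pad_lengths(10)); /* 0x44 */
        printf("len  9 -> lengths 0x%02x\n", (unsigned)pad_lengths(9));  /* 0x34 */
        return 0;
}
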
/*
@@ -559,61 +576,41 @@ static int append_padded_entry(struct scoutfs_srch_file *sfl, u64 blk,
* This is called when there is a single existing entry in the block.
* We have the entire block to work with. We encode pairs of matching
* entries. This hides them from readers (both searches and merging) as
* they're interpreted as creation and deletion and are deleted. We use
* the existing hash value of the first entry in the block but then set
* the inode to an impossibly large number so it doesn't interfere with
* anything.
* they're interpreted as creation and deletion and are deleted.
*
* To hit the specific offset we very carefully manage the amount of
* bytes of change between fields in the entry. We know that if we
* change all the byte of the ino and id we end up with a 20 byte
* (2+8+8,2) encoding of the pair of entries. To have the last entry
* start at the _SAFE_POS offset we know that the final 20 byte pair
* encoding needs to end at 2 bytes (second entry encoding) after the
* _SAFE_POS offset.
* For simplicity and to maintain sort ordering within the block, we reuse
* the existing entry. This lets us skip the encoding step, because we know
* the diff will be zero. We can zero-pad the resulting entries to hit the
* target offset exactly.
*
* So as we encode pairs we watch the delta of our current offset from
* that desired final offset of 2 past _SAFE_POS. If we're a multiple
* of 20 away then we encode the full 20 byte pairs. If we're not, then
* we drop a byte to encode 19 bytes. That'll slowly change the offset
* to be a multiple of 20 again while encoding large entries.
* Because we can't predict the exact number of entry_bytes when we start,
* we adjust the byte count of subsequent entries until we wind up at a
* multiple of 20 bytes away from our goal and then use that length for
* the remaining entries.
*
* We could just use a single pair of unnaturally large entries to consume
* the needed space, adjusting for an odd number of entry_bytes if necessary.
* The use of 19 or 20 bytes for the entry pair matches what we would see with
* real (non-zero) entries that vary from the existing entry.
*/
static void pad_entries_at_safe(struct scoutfs_srch_file *sfl, u64 blk,
static void pad_entries_at_safe(struct scoutfs_srch_file *sfl,
struct scoutfs_srch_block *srb)
{
struct scoutfs_srch_entry sre;
u32 target;
s32 diff;
u64 hash;
u64 ino;
u64 id;
int ret;
hash = le64_to_cpu(srb->tail.hash);
ino = le64_to_cpu(srb->tail.ino) | (1ULL << 62);
id = le64_to_cpu(srb->tail.id);
target = SCOUTFS_SRCH_BLOCK_SAFE_BYTES + 2;
while ((diff = target - le32_to_cpu(srb->entry_bytes)) > 0) {
ino ^= 1ULL << (7 * 8);
append_padded_entry(sfl, srb, 10);
if (diff % 20 == 0) {
id ^= 1ULL << (7 * 8);
append_padded_entry(sfl, srb, 10);
} else {
id ^= 1ULL << (6 * 8);
append_padded_entry(sfl, srb, 9);
}
sre.hash = cpu_to_le64(hash);
sre.ino = cpu_to_le64(ino);
sre.id = cpu_to_le64(id);
ret = append_padded_entry(sfl, blk, srb, &sre);
if (ret == 0)
ret = append_padded_entry(sfl, blk, srb, &sre);
BUG_ON(ret != 0);
diff = target - le32_to_cpu(srb->entry_bytes);
}
WARN_ON_ONCE(diff != 0);
}
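
As a rough illustration of the loop's byte accounting, here is a hedged userspace sketch; the starting distance of 39 bytes is invented. Each pass consumes a 19-byte pair until the remaining distance is a multiple of 20, after which 20-byte pairs land exactly on the target.

#include <stdio.h>

int main(void)
{
        int diff = 39;  /* hypothetical distance from entry_bytes to target */

        while (diff > 0) {
                if (diff % 20 == 0)
                        diff -= 10 + 10;        /* aligned: full 20 byte pair */
                else
                        diff -= 10 + 9;         /* realign with a 19 byte pair */
                printf("remaining %d\n", diff); /* prints 20, then 0 */
        }
        return diff == 0 ? 0 : 1;
}
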
/*
@@ -748,14 +745,14 @@ static int search_log_file(struct super_block *sb,
for (i = 0; i < le32_to_cpu(srb->entry_nr); i++) {
if (pos > SCOUTFS_SRCH_BLOCK_SAFE_BYTES) {
/* can only be inconsistency :/ */
ret = EIO;
ret = -EIO;
break;
}
ret = decode_entry(srb->entries + pos, &sre, &prev);
if (ret <= 0) {
/* can only be inconsistency :/ */
ret = EIO;
ret = -EIO;
break;
}
pos += ret;
@@ -858,15 +855,15 @@ static int search_sorted_file(struct super_block *sb,
if (pos > SCOUTFS_SRCH_BLOCK_SAFE_BYTES) {
/* can only be inconsistency :/ */
ret = EIO;
break;
ret = -EIO;
goto out;
}
ret = decode_entry(srb->entries + pos, &sre, &prev);
if (ret <= 0) {
/* can only be inconsistency :/ */
ret = EIO;
break;
ret = -EIO;
goto out;
}
pos += ret;
prev = sre;
@@ -971,6 +968,8 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
scoutfs_inc_counter(sb, srch_search_xattrs);
trace_scoutfs_ioc_search_xattrs(sb, ino, last_ino);
*done = false;
srch_init_rb_root(sroot);
@@ -1407,7 +1406,7 @@ int scoutfs_srch_commit_compact(struct super_block *sb,
ret = -EIO;
scoutfs_btree_put_iref(&iref);
}
if (ret < 0) /* XXX leaks allocators */
if (ret < 0)
goto out;
/* restore busy to pending if the operation failed */
@@ -1427,10 +1426,8 @@ int scoutfs_srch_commit_compact(struct super_block *sb,
/* update file references if we finished compaction (!deleting) */
if (!(res->flags & SCOUTFS_SRCH_COMPACT_FLAG_DELETE)) {
ret = commit_files(sb, alloc, wri, root, res);
if (ret < 0) {
/* XXX we can't commit, shutdown? */
if (ret < 0)
goto out;
}
/* transition flags for deleting input files */
for (i = 0; i < res->nr; i++) {
@@ -1457,7 +1454,7 @@ update:
le64_to_cpu(pending->id), 0);
ret = scoutfs_btree_insert(sb, alloc, wri, root, &key,
pending, sizeof(*pending));
if (ret < 0)
if (WARN_ON_ONCE(ret < 0)) /* XXX inconsistency */
goto out;
}
@@ -1470,7 +1467,6 @@ update:
BUG_ON(err); /* both busy and pending present */
}
out:
WARN_ON_ONCE(ret < 0); /* XXX inconsistency */
kfree(busy);
return ret;
}
@@ -1588,8 +1584,7 @@ static int kway_merge(struct super_block *sb,
nr_parents = max_t(unsigned long, 1, roundup_pow_of_two(nr) - 1);
/* root at [1] for easy sib/parent index calc, final pad for odd sib */
nr_nodes = 1 + nr_parents + nr + 1;
tnodes = __vmalloc(nr_nodes * sizeof(struct tourn_node),
GFP_NOFS, PAGE_KERNEL);
tnodes = kc__vmalloc(nr_nodes * sizeof(struct tourn_node), GFP_NOFS);
if (!tnodes)
return -ENOMEM;
@@ -1669,7 +1664,7 @@ static int kway_merge(struct super_block *sb,
/* end sorted block on _SAFE offset for testing */
if (bl && le32_to_cpu(srb->entry_nr) == 1 && logs_input &&
scoutfs_trigger(sb, SRCH_COMPACT_LOGS_PAD_SAFE)) {
pad_entries_at_safe(sfl, blk, srb);
pad_entries_at_safe(sfl, srb);
scoutfs_block_put(sb, bl);
bl = NULL;
blk++;
@@ -1802,7 +1797,7 @@ static void swap_page_sre(void *A, void *B, int size)
* typically, ~10x worst case).
*
* Because we read and sort all the input files we must perform the full
* compaction in one operation. The server must have given us a
* compaction in one operation. The server must have given us
* sufficiently large avail/freed lists, otherwise we'll return ENOSPC.
*/
static int compact_logs(struct super_block *sb,
@@ -1866,14 +1861,14 @@ static int compact_logs(struct super_block *sb,
if (pos > SCOUTFS_SRCH_BLOCK_SAFE_BYTES) {
/* can only be inconsistency :/ */
ret = EIO;
break;
ret = -EIO;
goto out;
}
ret = decode_entry(srb->entries + pos, sre, &prev);
if (ret <= 0) {
/* can only be inconsistency :/ */
ret = EIO;
ret = -EIO;
goto out;
}
prev = *sre;
@@ -2281,12 +2276,11 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
} else {
ret = -EINVAL;
}
if (ret < 0)
goto commit;
ret = scoutfs_alloc_prepare_commit(sb, &alloc, &wri) ?:
scoutfs_alloc_prepare_commit(sb, &alloc, &wri);
if (ret == 0)
scoutfs_block_writer_write(sb, &wri);
commit:
/* the server won't use our partial compact if _ERROR is set */
sc->meta_avail = alloc.avail;
sc->meta_freed = alloc.freed;
@@ -2303,7 +2297,7 @@ out:
scoutfs_inc_counter(sb, srch_compact_error);
scoutfs_block_writer_forget_all(sb, &wri);
queue_compact_work(srinf, sc->nr > 0 && ret == 0);
queue_compact_work(srinf, sc != NULL && sc->nr > 0 && ret == 0);
kfree(sc);
}

View File

@@ -160,7 +160,17 @@ static void scoutfs_metadev_close(struct super_block *sb)
* from kill_sb->put_super.
*/
lockdep_off();
#ifdef KC_BDEV_FILE_OPEN_BY_PATH
bdev_fput(sbi->meta_bdev_file);
#else
#ifdef KC_BLKDEV_PUT_HOLDER_ARG
blkdev_put(sbi->meta_bdev, sb);
#else
blkdev_put(sbi->meta_bdev, SCOUTFS_META_BDEV_MODE);
#endif
#endif
lockdep_on();
sbi->meta_bdev = NULL;
}
@@ -477,7 +487,11 @@ out:
static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
{
struct scoutfs_mount_options opts;
#ifdef KC_BDEV_FILE_OPEN_BY_PATH
struct file *meta_bdev_file;
#else
struct block_device *meta_bdev;
#endif
struct scoutfs_sb_info *sbi;
struct inode *inode;
int ret;
@@ -498,9 +512,9 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sbi = kzalloc(sizeof(struct scoutfs_sb_info), GFP_KERNEL);
sb->s_fs_info = sbi;
sbi->sb = sb;
if (!sbi)
return -ENOMEM;
sbi->sb = sb;
ret = assign_random_id(sbi);
if (ret < 0)
@@ -523,7 +537,27 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
goto out;
}
#ifdef KC_BDEV_FILE_OPEN_BY_PATH
/*
* pass sbi as holder, since dev_mount already passes sb, which triggers a
* WARN_ON because dev_mount also passes non-NULL hops. By passing sbi
* here we just get a simple error in our test cases.
*/
meta_bdev_file = bdev_file_open_by_path(opts.metadev_path, SCOUTFS_META_BDEV_MODE, sbi, NULL);
if (IS_ERR(meta_bdev_file)) {
scoutfs_err(sb, "could not open metadev: error %ld",
PTR_ERR(meta_bdev_file));
ret = PTR_ERR(meta_bdev_file);
goto out;
}
sbi->meta_bdev_file = meta_bdev_file;
sbi->meta_bdev = file_bdev(meta_bdev_file);
#else
#ifdef KC_BLKDEV_PUT_HOLDER_ARG
meta_bdev = blkdev_get_by_path(opts.metadev_path, SCOUTFS_META_BDEV_MODE, sb, NULL);
#else
meta_bdev = blkdev_get_by_path(opts.metadev_path, SCOUTFS_META_BDEV_MODE, sb);
#endif
if (IS_ERR(meta_bdev)) {
scoutfs_err(sb, "could not open metadev: error %ld",
PTR_ERR(meta_bdev));
@@ -531,6 +565,8 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
goto out;
}
sbi->meta_bdev = meta_bdev;
#endif
ret = set_blocksize(sbi->meta_bdev, SCOUTFS_BLOCK_SM_SIZE);
if (ret != 0) {
scoutfs_err(sb, "failed to set metadev blocksize, returned %d",

View File

@@ -42,6 +42,9 @@ struct scoutfs_sb_info {
u64 fmt_vers;
struct block_device *meta_bdev;
#ifdef KC_BDEV_FILE_OPEN_BY_PATH
struct file *meta_bdev_file;
#endif
spinlock_t next_ino_lock;
@@ -101,7 +104,11 @@ static inline bool SCOUTFS_IS_META_BDEV(struct scoutfs_super_block *super_block)
return !!(le64_to_cpu(super_block->flags) & SCOUTFS_FLAG_IS_META_BDEV);
}
#ifdef KC_HAVE_BLK_MODE_T
#define SCOUTFS_META_BDEV_MODE (BLK_OPEN_READ | BLK_OPEN_WRITE | BLK_OPEN_EXCL)
#else
#define SCOUTFS_META_BDEV_MODE (FMODE_READ | FMODE_WRITE | FMODE_EXCL)
#endif
static inline bool scoutfs_forcing_unmount(struct super_block *sb)
{

View File

@@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include "super.h"
#include "sysfs.h"

View File

@@ -159,6 +159,58 @@ static bool drained_holders(struct trans_info *tri)
return holders == 0;
}
static int commit_current_log_trees(struct super_block *sb, char **str)
{
DECLARE_TRANS_INFO(sb, tri);
return (*str = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(*str = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(*str = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(*str = "alloc prepare", scoutfs_alloc_prepare_commit(sb, &tri->alloc, &tri->wri)) ?:
(*str = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(*str = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(*str = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb);
}
static int get_next_log_trees(struct super_block *sb, char **str)
{
return (*str = "get log trees", scoutfs_trans_get_log_trees(sb));
}
static int retry_forever(struct super_block *sb, int (*func)(struct super_block *sb, char **str))
{
bool retrying = false;
char *str;
int ret;
do {
str = NULL;
ret = func(sb, &str);
if (ret < 0) {
if (!retrying) {
scoutfs_warn(sb, "critical transaction commit failure: %s = %d, retrying",
str, ret);
retrying = true;
}
if (scoutfs_forcing_unmount(sb)) {
ret = -ENOLINK;
break;
}
msleep(2 * MSEC_PER_SEC);
} else if (retrying) {
scoutfs_info(sb, "retried transaction commit succeeded");
}
} while (ret < 0);
return ret;
}
/*
* This work func is responsible for writing out all the dirty blocks
* that make up the current dirty transaction. It prevents writers from
@@ -184,8 +236,6 @@ void scoutfs_trans_write_func(struct work_struct *work)
struct trans_info *tri = container_of(work, struct trans_info, write_work.work);
struct super_block *sb = tri->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
bool retrying = false;
char *s = NULL;
int ret = 0;
tri->task = current;
@@ -202,7 +252,7 @@ void scoutfs_trans_write_func(struct work_struct *work)
}
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
ret = -ENOLINK;
goto out;
}
@@ -214,37 +264,9 @@ void scoutfs_trans_write_func(struct work_struct *work)
scoutfs_inc_counter(sb, trans_commit_written);
do {
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb, &tri->alloc,
&tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
if (ret < 0) {
if (!retrying) {
scoutfs_warn(sb, "critical transaction commit failure: %s = %d, retrying",
s, ret);
retrying = true;
}
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
break;
}
msleep(2 * MSEC_PER_SEC);
} else if (retrying) {
scoutfs_info(sb, "retried transaction commit succeeded");
}
} while (ret < 0);
/* retry {commit,get}_log_trees until they succeed, can only fail when forcing unmount */
ret = retry_forever(sb, commit_current_log_trees) ?:
retry_forever(sb, get_next_log_trees);
out:
spin_lock(&tri->write_lock);
tri->write_count++;

View File

@@ -93,13 +93,9 @@ int scoutfs_setup_triggers(struct super_block *sb)
goto out;
}
for (i = 0; i < ARRAY_SIZE(triggers->atomics); i++) {
if (!debugfs_create_atomic_t(names[i], 0644, triggers->dir,
&triggers->atomics[i])) {
ret = -ENOMEM;
goto out;
}
}
for (i = 0; i < ARRAY_SIZE(triggers->atomics); i++)
debugfs_create_atomic_t(names[i], 0644, triggers->dir,
&triggers->atomics[i]);
ret = 0;
out:

View File

@@ -183,6 +183,13 @@ static void *scoutfs_tseq_seq_next(struct seq_file *m, void *v, loff_t *pos)
ent = tseq_rb_next(ent);
if (ent)
*pos = ent->pos;
else
/*
* once we hit the end, *pos is never used, but it has to
* be updated to avoid an error in bpf_seq_read()
*/
(*pos)++;
return ent;
}

View File

@@ -1113,7 +1113,7 @@ int scoutfs_wkic_setup(struct super_block *sb)
winf->sb = sb;
KC_INIT_SHRINKER_FUNCS(&winf->shrinker, wkic_shrink_count, wkic_shrink_scan);
KC_REGISTER_SHRINKER(&winf->shrinker);
KC_REGISTER_SHRINKER(&winf->shrinker, "scoutfs-weak_item:" SCSBF, SCSB_ARGS(sb));
sbi->wkic_info = winf;
return 0;

View File

@@ -742,7 +742,7 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
int ret;
int err;
trace_scoutfs_xattr_set(sb, name_len, value, size, flags);
trace_scoutfs_xattr_set(sb, ino, name_len, value, size, flags);
if (WARN_ON_ONCE(tgs->totl && tgs->indx) ||
WARN_ON_ONCE((tgs->totl | tgs->indx) && !tag_lock))
@@ -1026,7 +1026,9 @@ static int scoutfs_xattr_get_handler
static int scoutfs_xattr_set_handler
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
(const struct xattr_handler *handler, struct dentry *dentry,
(const struct xattr_handler *handler,
KC_VFS_NS_DEF
struct dentry *dentry,
struct inode *inode, const char *name, const void *value,
size_t size, int flags)
{

2
tests/.gitignore vendored
View File

@@ -10,3 +10,5 @@ src/stage_tmpfile
src/create_xattr_loop
src/o_tmpfile_umask
src/o_tmpfile_linkat
src/mmap_stress
src/mmap_validate

1
tests/.xfstests-branch Normal file
View File

@@ -0,0 +1 @@
v2022.05.01-2-g787cd20

View File

@@ -13,7 +13,9 @@ BIN := src/createmany \
src/create_xattr_loop \
src/fragmented_data_extents \
src/o_tmpfile_umask \
src/o_tmpfile_linkat
src/o_tmpfile_linkat \
src/mmap_stress \
src/mmap_validate
DEPS := $(wildcard src/*.d)
@@ -23,8 +25,10 @@ ifneq ($(DEPS),)
-include $(DEPS)
endif
src/mmap_stress: LIBS+=-lpthread
$(BIN): %: %.c Makefile
gcc $(CFLAGS) -MD -MP -MF $*.d $< -o $@
gcc $(CFLAGS) -MD -MP -MF $*.d $< -o $@ $(LIBS)
.PHONY: clean
clean:

View File

@@ -117,6 +117,7 @@ used during the test.
| T\_NR\_MOUNTS | number of mounts | -n | 3 |
| T\_O[0-9] | mount options | created per run | -o server\_addr= |
| T\_QUORUM | quorum count | -q | 2 |
| T\_EXTRA | per-test file dir | revision controlled | tests/extra/t |
| T\_TMP | per-test tmp prefix | made for test | results/tmp/t/tmp |
| T\_TMPDIR | per-test tmp dir dir | made for test | results/tmp/t |

View File

@@ -0,0 +1,882 @@
Ran:
generic/001
generic/002
generic/004
generic/005
generic/006
generic/007
generic/008
generic/009
generic/011
generic/012
generic/013
generic/014
generic/015
generic/016
generic/018
generic/020
generic/021
generic/022
generic/023
generic/024
generic/025
generic/026
generic/028
generic/029
generic/030
generic/031
generic/032
generic/033
generic/034
generic/035
generic/037
generic/039
generic/040
generic/041
generic/050
generic/052
generic/053
generic/056
generic/057
generic/058
generic/059
generic/060
generic/061
generic/062
generic/063
generic/064
generic/065
generic/066
generic/067
generic/069
generic/070
generic/071
generic/073
generic/076
generic/078
generic/079
generic/080
generic/081
generic/082
generic/084
generic/086
generic/087
generic/088
generic/090
generic/091
generic/092
generic/094
generic/096
generic/097
generic/098
generic/099
generic/101
generic/104
generic/105
generic/106
generic/107
generic/110
generic/111
generic/113
generic/114
generic/115
generic/116
generic/117
generic/118
generic/119
generic/120
generic/121
generic/122
generic/123
generic/124
generic/126
generic/128
generic/129
generic/130
generic/131
generic/134
generic/135
generic/136
generic/138
generic/139
generic/140
generic/141
generic/142
generic/143
generic/144
generic/145
generic/146
generic/147
generic/148
generic/149
generic/150
generic/151
generic/152
generic/153
generic/154
generic/155
generic/156
generic/157
generic/158
generic/159
generic/160
generic/161
generic/162
generic/163
generic/169
generic/171
generic/172
generic/173
generic/174
generic/177
generic/178
generic/179
generic/180
generic/181
generic/182
generic/183
generic/184
generic/185
generic/188
generic/189
generic/190
generic/191
generic/193
generic/194
generic/195
generic/196
generic/197
generic/198
generic/199
generic/200
generic/201
generic/202
generic/203
generic/205
generic/206
generic/207
generic/210
generic/211
generic/212
generic/214
generic/215
generic/216
generic/217
generic/218
generic/219
generic/220
generic/221
generic/222
generic/223
generic/225
generic/227
generic/228
generic/229
generic/230
generic/235
generic/236
generic/237
generic/238
generic/240
generic/244
generic/245
generic/246
generic/247
generic/248
generic/249
generic/250
generic/252
generic/253
generic/254
generic/255
generic/256
generic/257
generic/258
generic/259
generic/260
generic/261
generic/262
generic/263
generic/264
generic/265
generic/266
generic/267
generic/268
generic/271
generic/272
generic/276
generic/277
generic/278
generic/279
generic/281
generic/282
generic/283
generic/284
generic/286
generic/287
generic/288
generic/289
generic/290
generic/291
generic/292
generic/293
generic/294
generic/295
generic/296
generic/301
generic/302
generic/303
generic/304
generic/305
generic/306
generic/307
generic/308
generic/309
generic/312
generic/313
generic/314
generic/315
generic/316
generic/317
generic/319
generic/322
generic/324
generic/325
generic/326
generic/327
generic/328
generic/329
generic/330
generic/331
generic/332
generic/335
generic/336
generic/337
generic/341
generic/342
generic/343
generic/346
generic/348
generic/353
generic/355
generic/358
generic/359
generic/360
generic/361
generic/362
generic/363
generic/364
generic/365
generic/366
generic/367
generic/368
generic/369
generic/370
generic/371
generic/372
generic/373
generic/374
generic/375
generic/376
generic/377
generic/378
generic/379
generic/380
generic/381
generic/382
generic/383
generic/384
generic/385
generic/386
generic/389
generic/391
generic/392
generic/393
generic/394
generic/395
generic/396
generic/397
generic/398
generic/400
generic/401
generic/402
generic/403
generic/404
generic/406
generic/407
generic/408
generic/412
generic/413
generic/414
generic/417
generic/419
generic/420
generic/421
generic/422
generic/424
generic/425
generic/426
generic/427
generic/428
generic/436
generic/437
generic/439
generic/440
generic/443
generic/445
generic/446
generic/448
generic/449
generic/450
generic/451
generic/452
generic/453
generic/454
generic/456
generic/458
generic/460
generic/462
generic/463
generic/465
generic/466
generic/468
generic/469
generic/470
generic/471
generic/474
generic/477
generic/478
generic/479
generic/480
generic/481
generic/483
generic/485
generic/486
generic/487
generic/488
generic/489
generic/490
generic/491
generic/492
generic/498
generic/499
generic/501
generic/502
generic/503
generic/504
generic/505
generic/506
generic/507
generic/508
generic/509
generic/510
generic/511
generic/512
generic/513
generic/514
generic/515
generic/516
generic/517
generic/518
generic/519
generic/520
generic/523
generic/524
generic/525
generic/526
generic/527
generic/528
generic/529
generic/530
generic/531
generic/533
generic/534
generic/535
generic/536
generic/537
generic/538
generic/539
generic/540
generic/541
generic/542
generic/543
generic/544
generic/545
generic/546
generic/547
generic/548
generic/549
generic/550
generic/552
generic/553
generic/555
generic/556
generic/557
generic/566
generic/567
generic/571
generic/572
generic/573
generic/574
generic/575
generic/576
generic/577
generic/578
generic/580
generic/581
generic/582
generic/583
generic/584
generic/586
generic/587
generic/588
generic/591
generic/592
generic/593
generic/594
generic/595
generic/596
generic/597
generic/598
generic/599
generic/600
generic/601
generic/602
generic/603
generic/604
generic/605
generic/606
generic/607
generic/608
generic/609
generic/610
generic/611
generic/612
generic/613
generic/614
generic/618
generic/621
generic/623
generic/624
generic/625
generic/626
generic/628
generic/629
generic/630
generic/632
generic/634
generic/635
generic/637
generic/638
generic/639
generic/640
generic/644
generic/645
generic/646
generic/647
generic/651
generic/652
generic/653
generic/654
generic/655
generic/657
generic/658
generic/659
generic/660
generic/661
generic/662
generic/663
generic/664
generic/665
generic/666
generic/667
generic/668
generic/669
generic/673
generic/674
generic/675
generic/676
generic/677
generic/678
generic/679
generic/680
generic/681
generic/682
generic/683
generic/684
generic/685
generic/686
generic/687
generic/688
generic/689
shared/002
shared/032
Not run:
generic/008
generic/009
generic/012
generic/015
generic/016
generic/018
generic/021
generic/022
generic/025
generic/026
generic/031
generic/033
generic/050
generic/052
generic/058
generic/059
generic/060
generic/061
generic/063
generic/064
generic/078
generic/079
generic/081
generic/082
generic/091
generic/094
generic/096
generic/110
generic/111
generic/113
generic/114
generic/115
generic/116
generic/118
generic/119
generic/121
generic/122
generic/123
generic/128
generic/130
generic/134
generic/135
generic/136
generic/138
generic/139
generic/140
generic/142
generic/143
generic/144
generic/145
generic/146
generic/147
generic/148
generic/149
generic/150
generic/151
generic/152
generic/153
generic/154
generic/155
generic/156
generic/157
generic/158
generic/159
generic/160
generic/161
generic/162
generic/163
generic/171
generic/172
generic/173
generic/174
generic/177
generic/178
generic/179
generic/180
generic/181
generic/182
generic/183
generic/185
generic/188
generic/189
generic/190
generic/191
generic/193
generic/194
generic/195
generic/196
generic/197
generic/198
generic/199
generic/200
generic/201
generic/202
generic/203
generic/205
generic/206
generic/207
generic/210
generic/211
generic/212
generic/214
generic/216
generic/217
generic/218
generic/219
generic/220
generic/222
generic/223
generic/225
generic/227
generic/229
generic/230
generic/235
generic/238
generic/240
generic/244
generic/250
generic/252
generic/253
generic/254
generic/255
generic/256
generic/259
generic/260
generic/261
generic/262
generic/263
generic/264
generic/265
generic/266
generic/267
generic/268
generic/271
generic/272
generic/276
generic/277
generic/278
generic/279
generic/281
generic/282
generic/283
generic/284
generic/287
generic/288
generic/289
generic/290
generic/291
generic/292
generic/293
generic/295
generic/296
generic/301
generic/302
generic/303
generic/304
generic/305
generic/312
generic/314
generic/316
generic/317
generic/324
generic/326
generic/327
generic/328
generic/329
generic/330
generic/331
generic/332
generic/353
generic/355
generic/358
generic/359
generic/361
generic/362
generic/363
generic/364
generic/365
generic/366
generic/367
generic/368
generic/369
generic/370
generic/371
generic/372
generic/373
generic/374
generic/378
generic/379
generic/380
generic/381
generic/382
generic/383
generic/384
generic/385
generic/386
generic/391
generic/392
generic/395
generic/396
generic/397
generic/398
generic/400
generic/402
generic/404
generic/406
generic/407
generic/408
generic/412
generic/413
generic/414
generic/417
generic/419
generic/420
generic/421
generic/422
generic/424
generic/425
generic/427
generic/439
generic/440
generic/446
generic/449
generic/450
generic/451
generic/453
generic/454
generic/456
generic/458
generic/462
generic/463
generic/465
generic/466
generic/468
generic/469
generic/470
generic/471
generic/474
generic/485
generic/487
generic/488
generic/491
generic/492
generic/499
generic/501
generic/503
generic/505
generic/506
generic/507
generic/508
generic/511
generic/513
generic/514
generic/515
generic/516
generic/517
generic/518
generic/519
generic/520
generic/528
generic/530
generic/536
generic/537
generic/538
generic/539
generic/540
generic/541
generic/542
generic/543
generic/544
generic/545
generic/546
generic/548
generic/549
generic/550
generic/552
generic/553
generic/555
generic/556
generic/566
generic/567
generic/572
generic/573
generic/574
generic/575
generic/576
generic/577
generic/578
generic/580
generic/581
generic/582
generic/583
generic/584
generic/586
generic/587
generic/588
generic/591
generic/592
generic/593
generic/594
generic/595
generic/596
generic/597
generic/598
generic/599
generic/600
generic/601
generic/602
generic/603
generic/605
generic/606
generic/607
generic/608
generic/609
generic/610
generic/612
generic/613
generic/621
generic/623
generic/624
generic/625
generic/626
generic/628
generic/629
generic/630
generic/635
generic/644
generic/645
generic/646
generic/647
generic/651
generic/652
generic/653
generic/654
generic/655
generic/657
generic/658
generic/659
generic/660
generic/661
generic/662
generic/663
generic/664
generic/665
generic/666
generic/667
generic/668
generic/669
generic/673
generic/674
generic/675
generic/677
generic/678
generic/679
generic/680
generic/681
generic/682
generic/683
generic/684
generic/685
generic/686
generic/687
generic/688
generic/689
shared/002
shared/032
Passed all 512 tests

View File

@@ -0,0 +1,44 @@
generic/003 # missing atime update in buffered read
generic/075 # file content mismatch failures (fds, etc)
generic/103 # enospc causes trans commit failures
generic/108 # mount fails on failing device?
generic/112 # file content mismatch failures (fds, etc)
generic/213 # enospc causes trans commit failures
generic/318 # can't support user namespaces until v5.11
generic/321 # requires selinux enabled for '+' in ls?
generic/338 # BUG_ON update inode error handling
generic/347 # _dmthin_mount doesn't work?
generic/356 # swap
generic/357 # swap
generic/409 # bind mounts not scripted yet
generic/410 # bind mounts not scripted yet
generic/411 # bind mounts not scripted yet
generic/423 # symlink inode size is strlen() + 1 on scoutfs
generic/430 # xfs_io copy_range missing in el7
generic/431 # xfs_io copy_range missing in el7
generic/432 # xfs_io copy_range missing in el7
generic/433 # xfs_io copy_range missing in el7
generic/434 # xfs_io copy_range missing in el7
generic/441 # dm-mapper
generic/444 # el9's posix_acl_update_mode is buggy ?
generic/467 # open_by_handle ESTALE
generic/472 # swap
generic/484 # dm-mapper
generic/493 # swap
generic/494 # swap
generic/495 # swap
generic/496 # swap
generic/497 # swap
generic/532 # xfs_io statx attrib_mask missing in el7
generic/554 # swap
generic/563 # cgroup+loopdev
generic/564 # xfs_io copy_range missing in el7
generic/565 # xfs_io copy_range missing in el7
generic/568 # falloc not resulting in block count increase
generic/569 # swap
generic/570 # swap
generic/620 # dm-hugedisk
generic/633 # id-mapped mounts missing in el7
generic/636 # swap
generic/641 # swap
generic/643 # swap

View File

@@ -8,36 +8,33 @@
echo "$0 running rid '$SCOUTFS_FENCED_REQ_RID' ip '$SCOUTFS_FENCED_REQ_IP' args '$@'"
log() {
echo "$@" > /dev/stderr
echo_fail() {
echo "$@" >> /dev/stderr
exit 1
}
echo_fail() {
echo "$@" > /dev/stderr
exit 1
# silence error messages
quiet_cat()
{
cat "$@" 2>/dev/null
}
rid="$SCOUTFS_FENCED_REQ_RID"
shopt -s nullglob
for fs in /sys/fs/scoutfs/*; do
[ ! -d "$fs" ] && continue
fs_rid="$(quiet_cat $fs/rid)"
nr="$(quiet_cat $fs/data_device_maj_min)"
[ ! -d "$fs" -o "$fs_rid" != "$rid" ] && continue
fs_rid="$(cat $fs/rid)" || \
echo_fail "failed to get rid in $fs"
if [ "$fs_rid" != "$rid" ]; then
continue
mnt=$(findmnt -l -n -t scoutfs -o TARGET -S $nr)
[ -z "$mnt" ] && continue
if ! umount -qf "$mnt"; then
if [ -d "$fs" ]; then
echo_fail "umount -qf $mnt failed"
fi
fi
nr="$(cat $fs/data_device_maj_min)" || \
echo_fail "failed to get data device major:minor in $fs"
mnts=$(findmnt -l -n -t scoutfs -o TARGET -S $nr) || \
echo_fail "findmnt -t scoutfs -S $nr failed"
for mnt in $mnts; do
umount -f "$mnt" || \
echo_fail "umout -f $mnt failed"
done
done
exit 0

View File

@@ -7,8 +7,9 @@ t_status_msg()
export T_PASS_STATUS=100
export T_SKIP_STATUS=101
export T_FAIL_STATUS=102
export T_SKIP_PERMITTED_STATUS=103
export T_FIRST_STATUS="$T_PASS_STATUS"
export T_LAST_STATUS="$T_FAIL_STATUS"
export T_LAST_STATUS="$T_SKIP_PERMITTED_STATUS"
t_pass()
{
@@ -21,6 +22,17 @@ t_skip()
exit $T_SKIP_STATUS
}
#
# This exit code is *reserved* for tests that are known up front to never work
# in certain cases. Each such case should be expressly documented in the test
# itself and made abundantly clear before merging.
#
t_skip_permitted()
{
t_status_msg "$@"
exit $T_SKIP_PERMITTED_STATUS
}
t_fail()
{
t_status_msg "$@"
@@ -52,19 +64,37 @@ t_rc()
}
#
# redirect test output back to the output of the invoking script intead
# of the compared output.
# As run, stdout/err are redirected to a file that will be compared with
# the stored expected golden output of the test. This redirects
# stdout/err in the script to stdout of the invoking run-test. It's
# intended to give visible output of tests without being included in the
# golden output.
#
t_restore_output()
# (see the goofy "exec" fd manipulation in the main run-tests as it runs
# each test)
#
t_stdout_invoked()
{
exec >&6 2>&1
}
#
# redirect a command's output back to the compared output after the
# test has restored its output
# This undoes t_stdout_invoked, returning the test's stdout/err to the
# output file as it was when it was launched.
#
t_compare_output()
t_stdout_compare()
{
"$@" >&7 2>&1
exec >&7 2>&1
}
#
# usually bash prints an annoying output message when jobs
# are killed. We can avoid that by redirecting stderr for
# the bash process when it reaps the jobs that are killed.
#
t_silent_kill() {
exec {ERR}>&2 2>/dev/null
kill "$@"
wait "$@"
exec 2>&$ERR {ERR}>&-
}

View File

@@ -121,6 +121,7 @@ t_filter_dmesg()
# in debugging kernels we can slow things down a bit
re="$re|hrtimer: interrupt took .*"
re="$re|clocksource: Long readout interval"
# fencing tests force unmounts and trigger timeouts
re="$re|scoutfs .* forcing unmount"
@@ -140,17 +141,35 @@ t_filter_dmesg()
re="$re|scoutfs .* error.*server failed to bind to.*"
re="$re|scoutfs .* critical transaction commit failure.*"
# ENOLINK (-67) indicates an expected forced unmount error
re="$re|scoutfs .* error -67 .*"
# change-devices causes loop device resizing
re="$re|loop: module loaded"
re="$re|loop[0-9].* detected capacity change from.*"
re="$re|dm-[0-9].* detected capacity change from.*"
# ignore systemd-journal rotating
re="$re|systemd-journald.*"
# process accounting can be noisy
re="$re|Process accounting resumed.*"
# format vers back/compat tries bad mounts
re="$re|scoutfs .* error.*outside of supported version.*"
re="$re|scoutfs .* error.*could not get .*super.*"
# ignore "unsafe core pattern" when xfstests tries to disable cores"
re="$re|Unsafe core_pattern used with fs.suid_dumpable=2.*"
re="$re|Pipe handler or fully qualified core dump path required.*"
re="$re|Set kernel.core_pattern before fs.suid_dumpable.*"
# perf warning that it adjusted sample rate
re="$re|perf: interrupt took too long.*lowering kernel.perf_event_max_sample_rate.*"
# some ci test guests are unresponsive
re="$re|longest quorum heartbeat .* delay"
egrep -v "($re)" | \
ignore_harmless_unwind_kasan_stack_oob
}

88
tests/funcs/tap.sh Normal file
View File

@@ -0,0 +1,88 @@
#
# Generate TAP format test results
#
t_tap_header()
{
local runid=$1
local sequence=( $(echo $tests) )
local count=${#sequence[@]}
# avoid recreating the same TAP result over again - harness sets this
[[ -z "$runid" ]] && runid="*test*"
cat > $T_RESULTS/scoutfs.tap <<TAPEOF
TAP version 14
1..${count}
#
# TAP results for run ${runid}
#
# host/run info:
#
# hostname: ${HOSTNAME}
# test start time: $(date --utc)
# uname -r: $(uname -r)
# scoutfs commit id: $(git describe --tags)
#
# sequence for this run:
#
TAPEOF
# Sequence
for t in ${tests}; do
echo ${t/.sh/}
done | cat -n | expand | column -c 120 | expand | sed 's/^ /#/' >> $T_RESULTS/scoutfs.tap
echo "#" >> $T_RESULTS/scoutfs.tap
}
t_tap_progress()
{
(
local i=$(( testcount + 1 ))
local testname=$1
local result=$2
local diff=""
local dmsg=""
if [[ -s "$T_RESULTS/tmp/${testname}/dmesg.new" ]]; then
dmsg="1"
fi
if ! cmp -s golden/${testname} $T_RESULTS/output/${testname}; then
diff="1"
fi
if [[ "${result}" == "100" ]] && [[ -z "${dmsg}" ]] && [[ -z "${diff}" ]]; then
echo "ok ${i} - ${testname}"
elif [[ "${result}" == "103" ]]; then
echo "ok ${i} - ${testname}"
echo "# ${testname} ** skipped - permitted **"
else
echo "not ok ${i} - ${testname}"
case ${result} in
101)
echo "# ${testname} ** skipped **"
;;
102)
echo "# ${testname} ** failed **"
;;
esac
if [[ -n "${diff}" ]]; then
echo "#"
echo "# diff:"
echo "#"
diff -u golden/${testname} $T_RESULTS/output/${testname} | expand | sed 's/^/# /'
fi
if [[ -n "${dmsg}" ]]; then
echo "#"
echo "# dmesg:"
echo "#"
cat "$T_RESULTS/tmp/${testname}/dmesg.new" | sed 's/^/# /'
fi
fi
) >> $T_RESULTS/scoutfs.tap
}

View File

@@ -41,9 +41,7 @@ group::r-x
mask::rwx
other::r-x
bash: line 0: cd: dir-root: Permission denied
Failed
bash: line 0: cd: symlinkdir-root: Permission denied
Failed
# file: dir-root
# owner: root

View File

@@ -1,29 +1,29 @@
== initial writes smaller than prealloc grow to prealloc size
/mnt/test/test/data-prealloc/file-1: 7 extents found
/mnt/test/test/data-prealloc/file-2: 7 extents found
/mnt/test/test/data-prealloc/file-1: extents: 7
/mnt/test/test/data-prealloc/file-2: extents: 7
== larger files get full prealloc extents
/mnt/test/test/data-prealloc/file-1: 9 extents found
/mnt/test/test/data-prealloc/file-2: 9 extents found
/mnt/test/test/data-prealloc/file-1: extents: 9
/mnt/test/test/data-prealloc/file-2: extents: 9
== non-streaming writes with contig have per-block extents
/mnt/test/test/data-prealloc/file-1: 32 extents found
/mnt/test/test/data-prealloc/file-2: 32 extents found
/mnt/test/test/data-prealloc/file-1: extents: 32
/mnt/test/test/data-prealloc/file-2: extents: 32
== any writes to region prealloc get full extents
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: extents: 4
/mnt/test/test/data-prealloc/file-2: extents: 4
/mnt/test/test/data-prealloc/file-1: extents: 4
/mnt/test/test/data-prealloc/file-2: extents: 4
== streaming offline writes get full extents either way
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: extents: 4
/mnt/test/test/data-prealloc/file-2: extents: 4
/mnt/test/test/data-prealloc/file-1: extents: 4
/mnt/test/test/data-prealloc/file-2: extents: 4
== goofy preallocation amounts work
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 3 extents found
/mnt/test/test/data-prealloc/file-2: 3 extents found
/mnt/test/test/data-prealloc/file-1: extents: 6
/mnt/test/test/data-prealloc/file-2: extents: 6
/mnt/test/test/data-prealloc/file-1: extents: 6
/mnt/test/test/data-prealloc/file-2: extents: 6
/mnt/test/test/data-prealloc/file-1: extents: 3
/mnt/test/test/data-prealloc/file-2: extents: 3
== block writes into region allocs hole
wrote blk 24
wrote blk 32

View File

@@ -1,4 +1,3 @@
== setting longer hung task timeout
== creating fragmented extents
== unlink file with moved extents to free extents per block
== cleanup

View File

@@ -0,0 +1,2 @@
=== setup
=== spin reading and shrinking

27
tests/golden/mmap Normal file
View File

@@ -0,0 +1,27 @@
== mmap_stress
thread 0 complete
thread 1 complete
thread 2 complete
thread 3 complete
thread 4 complete
== basic mmap/read/write consistency checks
== mmap read from offline extent
0: offset: 0 length: 2 flags: O.L
extents: 1
1
00000200: ea ea ea ea ea ea ea ea ea ea ea ea ea ea ea ea ................
0
0: offset: 0 length: 2 flags: ..L
extents: 1
== mmap write to an offline extent
0: offset: 0 length: 2 flags: O.L
extents: 1
1
0
0: offset: 0 length: 2 flags: ..L
extents: 1
00000000 ea ea ea ea ea ea ea ea ea ea ea ea ea ea ea ea |................|
00000010 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 |................|
00000020 ea ea ea ea ea ea ea ea ea ea ea ea ea ea ea ea |................|
00000030
== done

View File

@@ -49,7 +49,7 @@ offline wating should be empty:
0
== truncating does wait
truncate should be waiting for first block:
trunate should no longer be waiting:
truncate should no longer be waiting:
0
== writing waits
should be waiting for write

View File

@@ -22,10 +22,8 @@ scoutfs: setattr failed: Invalid argument (22)
== large ctime is set
1972-02-19 00:06:25.999999999 +0000
== large offline extents are created
Filesystem type is: 554f4353
File size of /mnt/test/test/setattr_more/file is 40988672 (10007 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 10006: 0.. 10006: 10007: unknown,eof
/mnt/test/test/setattr_more/file: 1 extent found
0: offset: 0 0 length: 10007 flags: O.L
extents: 1
== correct offline extent length
976563
== omitting data_version should not fail

View File

@@ -0,0 +1,97 @@
== create content
== readdir all
00000000: d_off: 0x00000001 d_reclen: 0x18 d_type: DT_DIR d_name: .
00000001: d_off: 0x00000002 d_reclen: 0x18 d_type: DT_DIR d_name: ..
00000002: d_off: 0x00000003 d_reclen: 0x18 d_type: DT_REG d_name: a
00000003: d_off: 0x00000004 d_reclen: 0x20 d_type: DT_REG d_name: aaaaaaaa
00000004: d_off: 0x00000005 d_reclen: 0x28 d_type: DT_REG d_name: aaaaaaaaaaaaaaa
00000005: d_off: 0x00000006 d_reclen: 0x30 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaa
00000006: d_off: 0x00000007 d_reclen: 0x38 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000007: d_off: 0x00000008 d_reclen: 0x38 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000008: d_off: 0x00000009 d_reclen: 0x40 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000009: d_off: 0x0000000a d_reclen: 0x48 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000000a: d_off: 0x0000000b d_reclen: 0x50 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000000b: d_off: 0x0000000c d_reclen: 0x58 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000000c: d_off: 0x0000000d d_reclen: 0x60 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000000d: d_off: 0x0000000e d_reclen: 0x68 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000000e: d_off: 0x0000000f d_reclen: 0x70 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000000f: d_off: 0x00000010 d_reclen: 0x70 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000010: d_off: 0x00000011 d_reclen: 0x78 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000011: d_off: 0x00000012 d_reclen: 0x80 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000012: d_off: 0x00000013 d_reclen: 0x88 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000013: d_off: 0x00000014 d_reclen: 0x90 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000014: d_off: 0x00000015 d_reclen: 0x98 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000015: d_off: 0x00000016 d_reclen: 0xa0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000016: d_off: 0x00000017 d_reclen: 0xa8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000017: d_off: 0x00000018 d_reclen: 0xa8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000018: d_off: 0x00000019 d_reclen: 0xb0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000019: d_off: 0x0000001a d_reclen: 0xb8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001a: d_off: 0x0000001b d_reclen: 0xc0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001b: d_off: 0x0000001c d_reclen: 0xc8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001c: d_off: 0x0000001d d_reclen: 0xd0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001d: d_off: 0x0000001e d_reclen: 0xd8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001e: d_off: 0x0000001f d_reclen: 0xe0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001f: d_off: 0x00000020 d_reclen: 0xe0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000020: d_off: 0x00000021 d_reclen: 0xe8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000021: d_off: 0x00000022 d_reclen: 0xf0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000022: d_off: 0x00000023 d_reclen: 0xf8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000023: d_off: 0x00000024 d_reclen: 0x100 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000024: d_off: 0x00000025 d_reclen: 0x108 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000025: d_off: 0x00000026 d_reclen: 0x110 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
== readdir offset
00000014: d_off: 0x00000015 d_reclen: 0x98 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000015: d_off: 0x00000016 d_reclen: 0xa0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000016: d_off: 0x00000017 d_reclen: 0xa8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000017: d_off: 0x00000018 d_reclen: 0xa8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000018: d_off: 0x00000019 d_reclen: 0xb0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000019: d_off: 0x0000001a d_reclen: 0xb8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001a: d_off: 0x0000001b d_reclen: 0xc0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001b: d_off: 0x0000001c d_reclen: 0xc8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001c: d_off: 0x0000001d d_reclen: 0xd0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001d: d_off: 0x0000001e d_reclen: 0xd8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001e: d_off: 0x0000001f d_reclen: 0xe0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001f: d_off: 0x00000020 d_reclen: 0xe0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000020: d_off: 0x00000021 d_reclen: 0xe8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000021: d_off: 0x00000022 d_reclen: 0xf0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000022: d_off: 0x00000023 d_reclen: 0xf8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000023: d_off: 0x00000024 d_reclen: 0x100 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000024: d_off: 0x00000025 d_reclen: 0x108 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000025: d_off: 0x00000026 d_reclen: 0x110 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
== readdir len (bytes)
00000000: d_off: 0x00000001 d_reclen: 0x18 d_type: DT_DIR d_name: .
00000001: d_off: 0x00000002 d_reclen: 0x18 d_type: DT_DIR d_name: ..
00000002: d_off: 0x00000003 d_reclen: 0x18 d_type: DT_REG d_name: a
00000003: d_off: 0x00000004 d_reclen: 0x20 d_type: DT_REG d_name: aaaaaaaa
00000004: d_off: 0x00000005 d_reclen: 0x28 d_type: DT_REG d_name: aaaaaaaaaaaaaaa
00000005: d_off: 0x00000006 d_reclen: 0x30 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaa
00000006: d_off: 0x00000007 d_reclen: 0x38 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaa
== introduce gap
00000000: d_off: 0x00000001 d_reclen: 0x18 d_type: DT_DIR d_name: .
00000001: d_off: 0x00000002 d_reclen: 0x18 d_type: DT_DIR d_name: ..
00000002: d_off: 0x00000003 d_reclen: 0x18 d_type: DT_REG d_name: a
00000003: d_off: 0x00000004 d_reclen: 0x20 d_type: DT_REG d_name: aaaaaaaa
00000004: d_off: 0x00000005 d_reclen: 0x28 d_type: DT_REG d_name: aaaaaaaaaaaaaaa
00000005: d_off: 0x00000006 d_reclen: 0x30 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaa
00000006: d_off: 0x00000007 d_reclen: 0x38 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000007: d_off: 0x00000008 d_reclen: 0x38 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000008: d_off: 0x00000009 d_reclen: 0x40 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000009: d_off: 0x00000014 d_reclen: 0x48 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000014: d_off: 0x00000015 d_reclen: 0x98 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000015: d_off: 0x00000016 d_reclen: 0xa0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000016: d_off: 0x00000017 d_reclen: 0xa8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000017: d_off: 0x00000018 d_reclen: 0xa8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000018: d_off: 0x00000019 d_reclen: 0xb0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000019: d_off: 0x0000001a d_reclen: 0xb8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001a: d_off: 0x0000001b d_reclen: 0xc0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001b: d_off: 0x0000001c d_reclen: 0xc8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001c: d_off: 0x0000001d d_reclen: 0xd0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001d: d_off: 0x0000001e d_reclen: 0xd8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001e: d_off: 0x0000001f d_reclen: 0xe0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0000001f: d_off: 0x00000020 d_reclen: 0xe0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000020: d_off: 0x00000021 d_reclen: 0xe8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000021: d_off: 0x00000022 d_reclen: 0xf0 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000022: d_off: 0x00000023 d_reclen: 0xf8 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000023: d_off: 0x00000024 d_reclen: 0x100 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000024: d_off: 0x00000025 d_reclen: 0x108 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
00000025: d_off: 0x00000026 d_reclen: 0x110 d_type: DT_REG d_name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
== cleanup

View File

@@ -1,5 +1,9 @@
== create/release/stage single block file
0: offset: 0 0 length: 1 flags: O.L
extents: 1
== create/release/stage larger file
0: offset: 0 0 length: 4096 flags: O.L
extents: 1
== multiple release,drop_cache,stage cycles
== release+stage shouldn't change stat, data seq or vers
== stage does change meta_seq
@@ -7,16 +11,22 @@
stage: must provide file version with --data-version
Try `stage --help' or `stage --usage' for more information.
== wrapped region fails
stage returned -1, not 4096: error Invalid argument (22)
stage returned -1, not 8192: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
== non-block aligned offset fails
stage returned -1, not 4095: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
0: offset: 0 0 length: 1 flags: O.L
extents: 1
== non-block aligned len within block fails
stage returned -1, not 1024: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
0: offset: 0 0 length: 1 flags: O.L
extents: 1
== partial final block that writes to i_size does work
== zero length stage doesn't bring blocks online
0: offset: 0 0 length: 100 flags: O.L
extents: 1
== stage of non-regular file fails
ioctl failed: Inappropriate ioctl for device (25)
stage: must provide file version with --data-version

View File

@@ -1,288 +0,0 @@
Ran:
generic/001
generic/002
generic/004
generic/005
generic/006
generic/007
generic/011
generic/013
generic/014
generic/020
generic/023
generic/024
generic/028
generic/032
generic/034
generic/035
generic/037
generic/039
generic/040
generic/041
generic/053
generic/056
generic/057
generic/062
generic/065
generic/066
generic/067
generic/069
generic/070
generic/071
generic/073
generic/076
generic/084
generic/086
generic/087
generic/088
generic/090
generic/092
generic/098
generic/101
generic/104
generic/105
generic/106
generic/107
generic/117
generic/124
generic/129
generic/131
generic/169
generic/184
generic/221
generic/228
generic/236
generic/237
generic/245
generic/249
generic/257
generic/258
generic/286
generic/294
generic/306
generic/307
generic/308
generic/309
generic/313
generic/315
generic/319
generic/322
generic/335
generic/336
generic/337
generic/341
generic/342
generic/343
generic/348
generic/360
generic/375
generic/376
generic/377
Not
run:
generic/008
generic/009
generic/012
generic/015
generic/016
generic/018
generic/021
generic/022
generic/025
generic/026
generic/031
generic/033
generic/050
generic/052
generic/058
generic/059
generic/060
generic/061
generic/063
generic/064
generic/078
generic/079
generic/081
generic/082
generic/091
generic/094
generic/096
generic/110
generic/111
generic/113
generic/114
generic/115
generic/116
generic/118
generic/119
generic/121
generic/122
generic/123
generic/128
generic/130
generic/134
generic/135
generic/136
generic/138
generic/139
generic/140
generic/142
generic/143
generic/144
generic/145
generic/146
generic/147
generic/148
generic/149
generic/150
generic/151
generic/152
generic/153
generic/154
generic/155
generic/156
generic/157
generic/158
generic/159
generic/160
generic/161
generic/162
generic/163
generic/171
generic/172
generic/173
generic/174
generic/177
generic/178
generic/179
generic/180
generic/181
generic/182
generic/183
generic/185
generic/188
generic/189
generic/190
generic/191
generic/193
generic/194
generic/195
generic/196
generic/197
generic/198
generic/199
generic/200
generic/201
generic/202
generic/203
generic/205
generic/206
generic/207
generic/210
generic/211
generic/212
generic/214
generic/216
generic/217
generic/218
generic/219
generic/220
generic/222
generic/223
generic/225
generic/227
generic/229
generic/230
generic/235
generic/238
generic/240
generic/244
generic/250
generic/252
generic/253
generic/254
generic/255
generic/256
generic/259
generic/260
generic/261
generic/262
generic/263
generic/264
generic/265
generic/266
generic/267
generic/268
generic/271
generic/272
generic/276
generic/277
generic/278
generic/279
generic/281
generic/282
generic/283
generic/284
generic/287
generic/288
generic/289
generic/290
generic/291
generic/292
generic/293
generic/295
generic/296
generic/301
generic/302
generic/303
generic/304
generic/305
generic/312
generic/314
generic/316
generic/317
generic/324
generic/326
generic/327
generic/328
generic/329
generic/330
generic/331
generic/332
generic/353
generic/355
generic/356
generic/357
generic/358
generic/359
generic/361
generic/362
generic/363
generic/364
generic/365
generic/366
generic/367
generic/368
generic/369
generic/370
generic/371
generic/372
generic/373
generic/374
generic/378
generic/379
generic/380
generic/381
generic/382
generic/383
generic/384
generic/385
generic/386
shared/001
shared/002
shared/003
shared/004
shared/032
shared/051
shared/289
Passed all 79 tests

View File

@@ -56,6 +56,7 @@ $(basename $0) options:
| only tests matching will be run. Can be provided multiple
| times
-i | Force removing and inserting the built scoutfs.ko module.
-l <nr> | Loop each test <nr> times while passing, last run counts.
-M <file> | Specify the filesystem's meta data device path that contains
| the file system to be tested. Will be clobbered by -m mkfs.
-m | Run mkfs on the device before mounting and running
@@ -69,6 +70,7 @@ $(basename $0) options:
-r <dir> | Specify the directory in which to store results of
| test runs. The directory will be created if it doesn't
| exist. Previous results will be deleted as each test runs.
-R | Shuffle the test order randomly using shuf.
-s | Skip git repo checkouts.
-t | Enabled trace events that match the given glob argument.
| Multiple options enable multiple globbed events.
@@ -89,6 +91,8 @@ done
# set some T_ defaults
T_TRACE_DUMP="0"
T_TRACE_PRINTK="0"
T_PORT_START="19700"
T_LOOP_ITER="1"
# array declarations to be able to use array ops
declare -a T_TRACE_GLOB
@@ -129,6 +133,12 @@ while true; do
-i)
T_INSMOD="1"
;;
-l)
test -n "$2" || die "-l must have a nr iterations argument"
test "$2" -eq "$2" 2>/dev/null || die "-l <nr> argument must be an integer"
T_LOOP_ITER="$2"
shift
;;
-M)
test -n "$2" || die "-z must have meta device file argument"
T_META_DEVICE="$2"
@@ -164,6 +174,9 @@ while true; do
T_RESULTS="$2"
shift
;;
-R)
T_SHUF="1"
;;
-s)
T_SKIP_CHECKOUT="1"
;;
@@ -261,13 +274,37 @@ for e in T_META_DEVICE T_DATA_DEVICE T_EX_META_DEV T_EX_DATA_DEV T_KMOD T_RESULT
eval $e=\"$(readlink -f "${!e}")\"
done
# try to check the ports, but it's not strictly necessary
T_TEST_PORT="$T_PORT_START"
T_SCRATCH_PORT="$((T_PORT_START + 100))"
T_DEV_PORT="$((T_PORT_START + 200))"
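# the test, scratch, and dev (xfstests) filesystems each get their own port base, 100 apart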
read local_start local_end < /proc/sys/net/ipv4/ip_local_port_range
if [ -n "$local_start" -a -n "$local_end" -a "$local_start" -lt "$local_end" ]; then
if [ ! "$T_DEV_PORT" -lt "$local_start" -a ! "$T_TEST_PORT" -gt "$local_end" ]; then
die "listening port range $T_TEST_PORT - $T_DEV_PORT is within local dynamic port range $local_start - $local_end in /proc/sys/net/ipv4/ip_local_port_range"
fi
fi
# permute sequence?
T_SEQUENCE=sequence
if [ -n "$T_SHUF" ]; then
msg "shuffling test order"
shuf sequence -o sequence.shuf
# keep xfstests at the end
if grep -q 'xfstests.sh' sequence.shuf ; then
sed -i '/xfstests.sh/d' sequence.shuf
echo "xfstests.sh" >> sequence.shuf
fi
T_SEQUENCE=sequence.shuf
fi
# include everything by default
test -z "$T_INCLUDE" && T_INCLUDE="-e '.*'"
# (quickly) exclude nothing by default
test -z "$T_EXCLUDE" && T_EXCLUDE="-e '\Zx'"
# eval to strip re ticks but not expand
tests=$(grep -v "^#" sequence |
tests=$(grep -v "^#" $T_SEQUENCE |
eval grep "$T_INCLUDE" | eval grep -v "$T_EXCLUDE")
test -z "$tests" && \
die "no tests found by including $T_INCLUDE and excluding $T_EXCLUDE"
@@ -346,7 +383,7 @@ fi
quo=""
if [ -n "$T_MKFS" ]; then
for i in $(seq -0 $((T_QUORUM - 1))); do
quo="$quo -Q $i,127.0.0.1,$((42000 + i))"
quo="$quo -Q $i,::1,$((T_TEST_PORT + i))"
done
msg "making new filesystem with $T_QUORUM quorum members"
@@ -401,6 +438,30 @@ cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/sys/kernel/debug/tracing/buffer_size_kb \
/proc/sys/kernel/ftrace_dump_on_oops
# we can record pids to kill as we exit, we kill in reverse added order
atexit_kill_pids=""
add_atexit_kill_pid()
{
atexit_kill_pids="$1 $atexit_kill_pids"
}
atexit_kill()
{
local pid
# suppress bg function exited messages
exec {ERR}>&2 2>/dev/null
for pid in $atexit_kill_pids; do
if test -e "/proc/$pid/status" ; then
kill "$pid"
wait "$pid"
fi
done
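# restore stderr and close the saved descriptor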
exec 2>&$ERR {ERR}>&-
}
trap atexit_kill EXIT
#
# Build a fenced config that runs scripts out of the repository rather
# than the default system directory
@@ -414,26 +475,43 @@ EOF
export SCOUTFS_FENCED_CONFIG_FILE="$conf"
T_FENCED_LOG="$T_RESULTS/fenced.log"
#
# Run the agent in the background, log its output, and kill it if we
# exit
#
fenced_log()
{
echo "[$(timestamp)] $*" >> "$T_FENCED_LOG"
}
fenced_pid=""
kill_fenced()
{
if test -n "$fenced_pid" -a -d "/proc/$fenced_pid" ; then
fenced_log "killing fenced pid $fenced_pid"
kill "$fenced_pid"
fi
}
trap kill_fenced EXIT
$T_UTILS/fenced/scoutfs-fenced > "$T_FENCED_LOG" 2>&1 &
fenced_pid=$!
fenced_log "started fenced pid $fenced_pid in the background"
add_atexit_kill_pid $fenced_pid
#
# some critical failures will cause fs operations to hang. We can watch
# for evidence of them and cause the system to crash, at least.
#
crash_monitor()
{
local bad=0
while sleep 1; do
if dmesg | grep -q "inserting extent.*overlaps existing"; then
echo "run-tests monitor saw overlapping extent message"
bad=1
fi
if dmesg | grep -q "error indicated by fence action" ; then
echo "run-tests monitor saw fence agent error message"
bad=1
fi
if [ ! -e "/proc/${fenced_pid}/status" ]; then
echo "run-tests monitor didn't see fenced pid $fenced_pid /proc dir"
bad=1
fi
if [ "$bad" != 0 ]; then
echo "run-tests monitor triggering crash"
echo c > /proc/sysrq-trigger
exit 1
fi
done
}
crash_monitor &
add_atexit_kill_pid $!
# setup dm tables
echo "0 $(blockdev --getsz $T_META_DEVICE) linear $T_META_DEVICE 0" > \
@@ -506,109 +584,130 @@ fi
. funcs/filter.sh
# give tests access to built binaries in src/, prefer over installed
PATH="$PWD/src:$PATH"
export PATH="$PWD/src:$PATH"
msg "running tests"
> "$T_RESULTS/skip.log"
> "$T_RESULTS/fail.log"
# generate a test ID to make sure we can de-duplicate TAP results in aggregation
. funcs/tap.sh
t_tap_header $(uuidgen)
testcount=0
passed=0
skipped=0
failed=0
skipped_permitted=0
for t in $tests; do
# tests has basenames from sequence, get path and name
t="tests/$t"
test_name=$(basename "$t" | sed -e 's/.sh$//')
# create a temporary dir and file path for the test
T_TMPDIR="$T_RESULTS/tmp/$test_name"
T_TMP="$T_TMPDIR/tmp"
cmd rm -rf "$T_TMPDIR"
cmd mkdir -p "$T_TMPDIR"
# create a test name dir in the fs
T_DS=""
for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
dir="${T_M[$i]}/test/$test_name"
test $i == 0 && cmd mkdir -p "$dir"
eval T_D$i=$dir
T_D[$i]=$dir
T_DS+="$dir "
done
# export all our T_ variables
for v in ${!T_*}; do
eval export $v
done
export PATH # give test access to scoutfs binary
# prepare to compare output to golden output
test -e "$T_RESULTS/output" || cmd mkdir -p "$T_RESULTS/output"
out="$T_RESULTS/output/$test_name"
> "$T_TMPDIR/status.msg"
golden="golden/$test_name"
# get stats from previous pass
last="$T_RESULTS/last-passed-test-stats"
stats=$(grep -s "^$test_name " "$last" | cut -d " " -f 2-)
test -n "$stats" && stats="last: $stats"
printf " %-30s $stats" "$test_name"
# record dmesg before
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.before"
# mark in dmesg as to what test we are running
echo "run scoutfs test $test_name" > /dev/kmsg
# give tests stdout and compared output on specific fds
exec 6>&1
exec 7>$out
# let the test get at its extra files
T_EXTRA="$T_TESTS/extra/$test_name"
# run the test with access to our functions
start_secs=$SECONDS
bash -c "for f in funcs/*.sh; do . \$f; done; . $t" >&7 2>&1
sts="$?"
log "test $t exited with status $sts"
stats="$((SECONDS - start_secs))s"
for iter in $(seq 1 $T_LOOP_ITER); do
# close our weird descriptors
exec 6>&-
exec 7>&-
# create a temporary dir and file path for the test
T_TMPDIR="$T_RESULTS/tmp/$test_name"
T_TMP="$T_TMPDIR/tmp"
cmd rm -rf "$T_TMPDIR"
cmd mkdir -p "$T_TMPDIR"
# compare output if the test returned passed status
if [ "$sts" == "$T_PASS_STATUS" ]; then
if [ ! -e "$golden" ]; then
message="no golden output"
sts=$T_FAIL_STATUS
elif ! cmp -s "$golden" "$out"; then
message="output differs"
sts=$T_FAIL_STATUS
diff -u "$golden" "$out" >> "$T_RESULTS/fail.log"
# create a test name dir in the fs, clean up old data as needed
T_DS=""
for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
dir="${T_M[$i]}/test/$test_name"
test $i == 0 && (
test -d "$dir" && cmd rm -rf "$dir"
cmd mkdir -p "$dir"
)
eval T_D$i=$dir
T_D[$i]=$dir
T_DS+="$dir "
done
# export all our T_ variables
for v in ${!T_*}; do
eval export $v
done
# prepare to compare output to golden output
test -e "$T_RESULTS/output" || cmd mkdir -p "$T_RESULTS/output"
out="$T_RESULTS/output/$test_name"
> "$T_TMPDIR/status.msg"
golden="golden/$test_name"
# record dmesg before
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.before"
# give tests stdout and compared output on specific fds
exec 6>&1
exec 7>$out
# run the test with access to our functions
start_secs=$SECONDS
bash -c "for f in funcs/*.sh; do . \$f; done; . $t" >&7 2>&1
sts="$?"
log "test $t exited with status $sts"
stats="$((SECONDS - start_secs))s"
# close our weird descriptors
exec 6>&-
exec 7>&-
# compare output if the test returned passed status
if [ "$sts" == "$T_PASS_STATUS" ]; then
if [ ! -e "$golden" ]; then
message="no golden output"
sts=$T_FAIL_STATUS
elif ! cmp -s "$golden" "$out"; then
message="output differs"
sts=$T_FAIL_STATUS
diff -u "$golden" "$out" >> "$T_RESULTS/fail.log"
fi
else
# get message from t_*() functions
message=$(cat "$T_TMPDIR/status.msg")
fi
else
# get message from t_*() functions
message=$(cat "$T_TMPDIR/status.msg")
fi
# see if anything unexpected was added to dmesg
if [ "$sts" == "$T_PASS_STATUS" ]; then
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.after"
diff --old-line-format="" --unchanged-line-format="" \
"$T_TMPDIR/dmesg.before" "$T_TMPDIR/dmesg.after" > \
"$T_TMPDIR/dmesg.new"
# see if anything unexpected was added to dmesg
if [ "$sts" == "$T_PASS_STATUS" ]; then
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.after"
diff --old-line-format="" --unchanged-line-format="" \
"$T_TMPDIR/dmesg.before" "$T_TMPDIR/dmesg.after" > \
"$T_TMPDIR/dmesg.new"
if [ -s "$T_TMPDIR/dmesg.new" ]; then
message="unexpected messages in dmesg"
sts=$T_FAIL_STATUS
cat "$T_TMPDIR/dmesg.new" >> "$T_RESULTS/fail.log"
if [ -s "$T_TMPDIR/dmesg.new" ]; then
message="unexpected messages in dmesg"
sts=$T_FAIL_STATUS
cat "$T_TMPDIR/dmesg.new" >> "$T_RESULTS/fail.log"
fi
fi
fi
# record unknown exit status
if [ "$sts" -lt "$T_FIRST_STATUS" -o "$sts" -gt "$T_LAST_STATUS" ]; then
message="unknown status: $sts"
sts=$T_FAIL_STATUS
fi
# record unknown exit status
if [ "$sts" -lt "$T_FIRST_STATUS" -o "$sts" -gt "$T_LAST_STATUS" ]; then
message="unknown status: $sts"
sts=$T_FAIL_STATUS
fi
# stop looping if we didn't pass
if [ "$sts" != "$T_PASS_STATUS" ]; then
break;
fi
done
# show and record the result of the test
if [ "$sts" == "$T_PASS_STATUS" ]; then
@@ -618,6 +717,10 @@ for t in $tests; do
grep -s -v "^$test_name " "$last" > "$last.tmp"
echo "$test_name $stats" >> "$last.tmp"
mv -f "$last.tmp" "$last"
elif [ "$sts" == "$T_SKIP_PERMITTED_STATUS" ]; then
echo " [ skipped (permitted): $message ]"
echo "$test_name skipped (permitted) $message " >> "$T_RESULTS/skip.log"
((skipped_permitted++))
elif [ "$sts" == "$T_SKIP_STATUS" ]; then
echo " [ skipped: $message ]"
echo "$test_name $message" >> "$T_RESULTS/skip.log"
@@ -629,9 +732,14 @@ for t in $tests; do
test -n "$T_ABORT" && die "aborting after first failure"
fi
# record results for TAP format output
t_tap_progress $test_name $sts
((testcount++))
done
msg "all tests run: $passed passed, $skipped skipped, $failed failed"
msg "all tests run: $passed passed, $skipped skipped, $skipped_permitted skipped (permitted), $failed failed"
if [ -n "$T_TRACE_GLOB" -o -n "$T_TRACE_PRINTK" ]; then

View File

@@ -6,6 +6,7 @@ inode-items-updated.sh
simple-inode-index.sh
simple-staging.sh
simple-release-extents.sh
simple-readdir.sh
get-referring-entries.sh
fallocate.sh
basic-truncate.sh
@@ -17,6 +18,7 @@ projects.sh
large-fragmented-free.sh
format-version-forward-back.sh
enospc.sh
mmap.sh
srch-safe-merge-pos.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
@@ -25,6 +27,7 @@ totl-xattr-tag.sh
quota.sh
lock-refleak.sh
lock-shrink-consistency.sh
lock-shrink-read-race.sh
lock-pr-cw-conflict.sh
lock-revoke-getcwd.sh
lock-recover-invalidate.sh

192
tests/src/mmap_stress.c Normal file
View File

@@ -0,0 +1,192 @@
#define _GNU_SOURCE
/*
* mmap() stress test for scoutfs
*
* This test exercises the scoutfs kernel module's locking by
* repeatedly reading/writing using mmap and pread/write calls
* across 5 clients (mounts).
*
* Each thread operates on a single client (mount), and performs
* operations on the file in a random order.
*
* The goal is to ensure that locking between the _page_mkwrite vfs
* calls and the normal read/write paths does not cause deadlocks.
*
* No content validation is performed; the test only checks that
* the program continues to run without errors.
*/
#include <sys/types.h>
#include <stdio.h>
#include <sys/stat.h>
#include <inttypes.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include <sys/mman.h>
#include <pthread.h>
#include <errno.h>
static int size = 0;
static int duration = 0;
struct thread_info {
int nr;
int fd;
};
static void *run_test_func(void *ptr)
{
void *buf = NULL;
char *addr = NULL;
struct thread_info *tinfo = ptr;
uint64_t seconds = 0;
struct timespec ts;
int c = 0;
int fd;
ssize_t read, written, ret;
int preads = 0, pwrites = 0, mreads = 0, mwrites = 0;
fd = tinfo->fd;
if (posix_memalign(&buf, 4096, size) != 0) {
perror("calloc");
exit(-1);
}
addr = mmap(NULL, size, PROT_WRITE | PROT_READ, MAP_SHARED, fd, 0);
if (addr == MAP_FAILED) {
perror("mmap");
exit(-1);
}
usleep(100000); /* 0.1sec to allow all threads to start roughly at the same time */
clock_gettime(CLOCK_REALTIME, &ts); /* record start time */
seconds = ts.tv_sec + duration;
for (;;) {
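/* only check the elapsed time every 16 iterations */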
if (++c % 16 == 0) {
clock_gettime(CLOCK_REALTIME, &ts);
if (ts.tv_sec >= seconds)
break;
}
switch (rand() % 4) {
case 0: /* pread */
preads++;
for (read = 0; read < size;) {
ret = pread(fd, buf, size - read, read);
if (ret < 0) {
perror("pwrite");
exit(-1);
}
read += ret;
}
break;
case 1: /* pwrite */
pwrites++;
memset(buf, (char)(c & 0xff), size);
for (written = 0; written < size;) {
ret = pwrite(fd, buf, size - written, written);
if (ret < 0) {
perror("pwrite");
exit(-1);
}
written += ret;
}
break;
case 2: /* mmap read */
mreads++;
memcpy(buf, addr, size); /* noerr */
break;
case 3: /* mmap write */
mwrites++;
memset(buf, (char)(c & 0xff), size);
memcpy(addr, buf, size); /* noerr */
break;
}
usleep(10000);
}
munmap(addr, size);
free(buf);
printf("thread %u complete: preads %u pwrites %u mreads %u mwrites %u\n", tinfo->nr,
preads, pwrites, mreads, mwrites);
return NULL;
}
int main(int argc, char **argv)
{
pthread_t thread[5];
struct thread_info tinfo[5];
int fd[5];
int ret;
int i;
if (argc != 8) {
fprintf(stderr, "%s requires 7 arguments - size duration file1 file2 file3 file4 file5\n", argv[0]);
exit(-1);
}
size = atoi(argv[1]);
if (size <= 0) {
fprintf(stderr, "invalid size, must be greater than 0\n");
exit(-1);
}
duration = atoi(argv[2]);
if (duration < 0) {
fprintf(stderr, "invalid duration, must be greater than or equal to 0\n");
exit(-1);
}
/* create and truncate one fd */
fd[0] = open(argv[3], O_RDWR | O_CREAT | O_TRUNC, 00644);
if (fd[0] < 0) {
perror("open");
exit(-1);
}
/* make it the test size */
if (posix_fallocate(fd[0], 0, size) != 0) {
perror("fallocate");
exit(-1);
}
/* now open the rest of the fds */
for (i = 1; i < 5; i++) {
fd[i] = open(argv[3+i], O_RDWR);
if (fd[i] < 0) {
perror("open");
exit(-1);
}
}
/* start threads */
for (i = 0; i < 5; i++) {
tinfo[i].fd = fd[i];
tinfo[i].nr = i;
ret = pthread_create(&thread[i], NULL, run_test_func, (void*)&tinfo[i]);
if (ret) {
perror("pthread_create");
exit(-1);
}
}
/* wait for complete */
for (i = 0; i < 5; i++)
pthread_join(thread[i], NULL);
for (i = 0; i < 5; i++)
close(fd[i]);
exit(0);
}

159
tests/src/mmap_validate.c Normal file
View File

@@ -0,0 +1,159 @@
#define _GNU_SOURCE
/*
* mmap() content consistency checking for scoutfs
*
* This test program validates that content written through memory
* mappings is consistent across clients, whether written/read with
* mmap or with normal writes/reads.
*
* One side (read or write) will always be memory mapped. Both
* sides may be memory mapped (33% of the time).
*/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <errno.h>
static int count = 0;
static int size = 0;
static void run_test_func(int fd1, int fd2)
{
void *buf1 = NULL;
void *buf2 = NULL;
char *addr1 = NULL;
char *addr2 = NULL;
int c = 0;
ssize_t read, written, ret;
/* buffers for both sides to compare */
if (posix_memalign(&buf1, 4096, size) != 0) {
perror("calloc1");
exit(-1);
}
if (posix_memalign(&buf2, 4096, size) != 0) {
perror("calloc1");
exit(-1);
}
/* memory maps for both sides */
addr1 = mmap(NULL, size, PROT_WRITE | PROT_READ, MAP_SHARED, fd1, 0);
if (addr1 == MAP_FAILED) {
perror("mmap1");
exit(-1);
}
addr2 = mmap(NULL, size, PROT_WRITE | PROT_READ, MAP_SHARED, fd2, 0);
if (addr2 == MAP_FAILED) {
perror("mmap2");
exit(-1);
}
for (;;) {
if (++c > count) /* run for the requested number of iterations */
break;
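/* c % 3 picks the pairing: 0 = pwrite + mmap read, 1 = mmap write + mmap read, 2 = mmap write + pread */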
/* put a pattern in buf1 */
memset(buf1, c & 0xff, size);
/* pwrite or mmap write from buf1 */
switch (c % 3) {
case 0: /* pwrite */
for (written = 0; written < size;) {
ret = pwrite(fd1, buf1, size - written, written);
if (ret < 0) {
perror("pwrite");
exit(-1);
}
written += ret;
}
break;
default: /* mmap write */
memcpy(addr1, buf1, size);
break;
}
/* pread or mmap read to buf2 */
switch (c % 3) {
case 2: /* pread */
for (read = 0; read < size;) {
ret = pread(fd2, buf2, size - read, read);
if (ret < 0) {
perror("pwrite");
exit(-1);
}
read += ret;
}
break;
default: /* mmap read */
memcpy(buf2, addr2, size);
break;
}
/* compare bufs */
if (memcmp(buf1, buf2, size) != 0) {
fprintf(stderr, "memcmp() failed\n");
exit(-1);
}
}
munmap(addr1, size);
munmap(addr2, size);
free(buf1);
free(buf2);
}
int main(int argc, char **argv)
{
int fd[2];
if (argc != 5) {
fprintf(stderr, "%s requires 4 arguments - size count file1 file2\n", argv[0]);
exit(-1);
}
size = atoi(argv[1]);
if (size <= 0) {
fprintf(stderr, "invalid size, must be greater than 0\n");
exit(-1);
}
count = atoi(argv[2]);
if (count < 3) {
fprintf(stderr, "invalid count, must be greater than 3\n");
exit(-1);
}
/* create and truncate one fd */
fd[0] = open(argv[3], O_RDWR | O_CREAT | O_TRUNC, 00644);
if (fd[0] < 0) {
perror("open");
exit(-1);
}
fd[1] = open(argv[4], O_RDWR);
if (fd[1] < 0) {
perror("open");
exit(-1);
}
/* make it the test size */
if (posix_fallocate(fd[0], 0, size) != 0) {
perror("fallocate");
exit(-1);
}
/* run the test function */
run_test_func(fd[0], fd[1]);
close(fd[0]);
close(fd[1]);
exit(0);
}

View File

@@ -15,7 +15,7 @@ echo "== prepare devices, mount point, and logs"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"
> $T_TMP.mount.out
scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 \
scoutfs mkfs -f -Q 0,127.0.0.1,$T_SCRATCH_PORT "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 \
|| t_fail "mkfs failed"
echo "== bad devices, bad options"

View File

@@ -55,16 +55,16 @@ L dir-root/file-group-write
L symlinkdir-root/file-group-write
echo "== directory exec"
setpriv $SET_UID bash -c "cd dir-root && echo Success"
setpriv $SET_UID bash -c "cd symlinkdir-root && echo Success"
setpriv $SET_UID bash -c "cd dir-root 2>&- && echo Success"
setpriv $SET_UID bash -c "cd symlinkdir-root 2>&- && echo Success"
setfacl -m u:22222:rw dir-root
getfacl dir-root
setpriv $SET_UID bash -c "cd dir-root || echo Failed"
setpriv $SET_UID bash -c "cd symlinkdir-root || echo Failed"
setpriv $SET_UID bash -c "cd dir-root 2>&- || echo Failed"
setpriv $SET_UID bash -c "cd symlinkdir-root 2>&- || echo Failed"
setfacl -m g:44444:rwx dir-root
getfacl dir-root
setpriv $SET_GID bash -c "cd dir-root && echo Success"
setpriv $SET_GID bash -c "cd symlinkdir-root && echo Success"
setpriv $SET_GID bash -c "cd dir-root 2>&- && echo Success"
setpriv $SET_GID bash -c "cd symlinkdir-root 2>&- && echo Success"
echo "== get/set attr"
rm -rf file-root

View File

@@ -3,13 +3,13 @@
# operations in one mount and verify the results in another.
#
t_require_commands getfattr setfattr dd filefrag diff touch stat scoutfs
t_require_commands getfattr setfattr dd diff touch stat scoutfs
t_require_mounts 2
GETFATTR="getfattr --absolute-names"
SETFATTR="setfattr"
DD="dd status=none"
FILEFRAG="filefrag -v -b4096"
FIEMAP="scoutfs get-fiemap"
echo "== root inode updates flow back and forth"
sleep 1
@@ -55,8 +55,8 @@ for i in $(seq 1 10); do
conv=notrunc oflag=append &
wait
done
$FILEFRAG "$T_D0/file" | t_filter_fs > "$T_TMP.0"
$FILEFRAG "$T_D1/file" | t_filter_fs > "$T_TMP.1"
$FIEMAP "$T_D0/file" > "$T_TMP.0"
$FIEMAP "$T_D1/file" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== unlinked file isn't found"

View File

@@ -11,7 +11,7 @@ FILE="$T_D0/file"
# final block as we truncated past it.
#
echo "== truncate writes zeroed partial end of file block"
yes | dd of="$FILE" bs=8K count=1 status=none iflag=fullblock
yes 2>/dev/null | dd of="$FILE" bs=8K count=1 status=none iflag=fullblock
sync
# not passing iflag=fullblock causes the file occasionally to just be

View File

@@ -11,7 +11,7 @@ truncate -s $sz "$T_TMP.equal"
truncate -s $large_sz "$T_TMP.large"
echo "== make scratch fs"
t_quiet scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV"
t_quiet scoutfs mkfs -f -Q 0,127.0.0.1,$T_SCRATCH_PORT "$T_EX_META_DEV" "$T_EX_DATA_DEV"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"

View File

@@ -4,7 +4,16 @@
# merge adjacent consecutive allocations. (we don't have multiple
# allocation cursors)
#
t_require_commands scoutfs stat filefrag dd touch truncate
t_require_commands scoutfs stat dd touch truncate
get_fiemap()
{
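# print the logical start, length, and unwritten/eof flags for each extent reported by scoutfs get-fiemap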
scoutfs get-fiemap "$1" | awk '($1 != "extents:") {
unwritten = (substr($8, 2, 1) == "U") ? "unwritten" : "";
eof = (substr($8, 3, 1) == "L") ? "eof" : "";
print $3 ".. " $6 ": " unwritten eof;
};'
}
write_block()
{
@@ -76,26 +85,9 @@ print_extents_found()
{
local prefix="$1"
filefrag "$prefix"* 2>&1 | grep "extent.*found" | t_filter_fs
}
#
# print the logical start, len, and flags if they're there.
#
print_logical_extents()
{
local file="$1"
filefrag -v -b4096 "$file" 2>&1 | t_filter_fs | awk '
($1 ~ /[0-9]+:/) {
if ($NF !~ /[0-9]+:/) {
flags=$NF
} else {
flags=""
}
print $2, $6, flags
}
' | sed 's/last,eof/eof/'
for f in "$prefix"-*; do
echo "$f: $(scoutfs get-fiemap "$f" | tail -n 1)" | t_filter_fs
done
}
t_save_all_sysfs_mount_options data_prealloc_blocks
@@ -197,7 +189,7 @@ for sides in 0 1 2 3; do
done
echo before:
print_logical_extents "$prefix"
get_fiemap "$prefix"
# now write into the first, middle, and last empty block of each
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
@@ -223,7 +215,7 @@ for sides in 0 1 2 3; do
# mid (both has 6 blocks internally)
2) write_block $prefix $((left + 3)) ;;
esac
print_logical_extents "$prefix"
get_fiemap "$prefix"
((base+=8))
done
done

View File

@@ -57,7 +57,7 @@ test "$before" == "$after" || \
# XXX this is all pretty manual, would be nice to have helpers
echo "== make small meta fs"
# meta device just big enough for reserves and the metadata we'll fill
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m 10G "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
scoutfs mkfs -A -f -Q 0,127.0.0.1,$T_SCRATCH_PORT -m 10G "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
t_fail "mkfs failed"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"
@@ -88,6 +88,11 @@ rm -rf "$SCR/xattrs"
echo "== make sure we can create again"
file="$SCR/file-after"
C=120
while (( C-- )); do
touch $file 2> /dev/null && break
sleep 1
done
touch $file
setfattr -n user.scoutfs-enospc -v 1 "$file"
sync

View File

@@ -11,6 +11,11 @@
# format version.
#
# not supported on el8 or higher
if [ $(source /etc/os-release ; echo ${VERSION_ID:0:1}) -gt 7 ]; then
t_skip_permitted "Unsupported OS version"
fi
mount_has_format_version()
{
local mnt="$1"
@@ -84,7 +89,7 @@ for vers in $(seq $MIN $((MAX - 1))); do
old_module="$builds/$vers/scoutfs.ko"
echo "mkfs $vers" >> "$T_TMP.log"
t_quiet $old_scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV" \
t_quiet $old_scoutfs mkfs -f -Q 0,127.0.0.1,$T_SCRATCH_PORT "$T_EX_META_DEV" "$T_EX_DATA_DEV" \
|| t_fail "mkfs $vers failed"
echo "mount $vers with $vers" >> "$T_TMP.log"

View File

@@ -10,30 +10,6 @@ EXTENTS_PER_BTREE_BLOCK=600
EXTENTS_PER_LIST_BLOCK=8192
FREED_EXTENTS=$((EXTENTS_PER_BTREE_BLOCK * EXTENTS_PER_LIST_BLOCK))
#
# This test specifically creates a pathologically sparse file that will
# be as expensive as possible to free. This is usually fine on
# dedicated or reasonable hardware, but trying to run this in
# virtualized debug kernels can take a very long time. This test is
# about making sure that the server doesn't fail, not that the platform
# can handle the scale of work that our btree formats happen to require
# while execution is bogged down with use-after-free memory reference
# tracking. So we give the test a lot more breathing room before
# deciding that its hung.
#
echo "== setting longer hung task timeout"
if [ -w /proc/sys/kernel/hung_task_timeout_secs ]; then
secs=$(cat /proc/sys/kernel/hung_task_timeout_secs)
test "$secs" -gt 0 || \
t_fail "confusing value '$secs' from /proc/sys/kernel/hung_task_timeout_secs"
restore_hung_task_timeout()
{
echo "$secs" > /proc/sys/kernel/hung_task_timeout_secs
}
trap restore_hung_task_timeout EXIT
echo "$((secs * 5))" > /proc/sys/kernel/hung_task_timeout_secs
fi
echo "== creating fragmented extents"
fragmented_data_extents $FREED_EXTENTS $EXTENTS_PER_BTREE_BLOCK "$T_D0/alloc" "$T_D0/move"

View File

@@ -38,6 +38,6 @@ while [ "$SECONDS" -lt "$END" ]; do
done
echo "== stopping background load"
kill $load_pids
t_silent_kill $load_pids
t_pass

View File

@@ -0,0 +1,40 @@
#
# We had a lock server refcounting bug that could let one thread get a
# reference on a lock struct that was being freed by another thread. We
# were able to reproduce this by having all clients try to produce a
# lot of read and null requests.
#
# This will manifest as a hung lock and timed-out test runs, probably
# with hung task messages on the console. Depending on how the race
# turns out, it can trigger KASAN warnings in
# process_waiting_requests().
#
READERS_PER=3
SECS=30
echo "=== setup"
touch "$T_D0/file"
echo "=== spin reading and shrinking"
END=$((SECONDS + SECS))
for m in $(t_fs_nrs); do
eval file="\$T_D${m}/file"
# lots of tasks reading as fast as they can
for t in $(seq 1 $READERS_PER); do
(while [ $SECONDS -lt $END ]; do
stat $file > /dev/null
done) &
done
# one task shrinking (triggering null requests) and reading
(while [ $SECONDS -lt $END ]; do
stat $file > /dev/null
t_trigger_arm_silent statfs_lock_purge $m
stat -f "$file" > /dev/null
done) &
done
wait
t_pass

54
tests/tests/mmap.sh Normal file
View File

@@ -0,0 +1,54 @@
#
# test mmap() and normal read/write consistency between different nodes
#
t_require_commands mmap_stress mmap_validate scoutfs xfs_io
echo "== mmap_stress"
mmap_stress 8192 30 "$T_D0/mmap_stress" "$T_D0/mmap_stress" "$T_D0/mmap_stress" "$T_D3/mmap_stress" "$T_D3/mmap_stress" | sed 's/:.*//g' | sort
echo "== basic mmap/read/write consistency checks"
mmap_validate 256 1000 "$T_D0/mmap_val1" "$T_D1/mmap_val1"
mmap_validate 8192 1000 "$T_D0/mmap_val2" "$T_D1/mmap_val2"
mmap_validate 88400 1000 "$T_D0/mmap_val3" "$T_D1/mmap_val3"
echo "== mmap read from offline extent"
F="$T_D0/mmap-offline"
touch "$F"
xfs_io -c "pwrite -S 0xEA 0 8192" "$F" > /dev/null
cp "$F" "${F}-stage"
vers=$(scoutfs stat -s data_version "$F")
scoutfs release "$F" -V "$vers" -o 0 -l 8192
scoutfs get-fiemap -L "$F"
xfs_io -c "mmap -rwx 0 8192" \
-c "mread -v 512 16" "$F" &
sleep 1
# should be 1 - data waiting
jobs | wc -l
scoutfs stage "${F}-stage" "$F" -V "$vers" -o 0 -l 8192
# xfs_io thread <here> will output 16 bytes of read data
sleep 1
# should be 0 - no more waiting jobs, xfs_io should have exited
jobs | wc -l
scoutfs get-fiemap -L "$F"
echo "== mmap write to an offline extent"
# reuse the same file
scoutfs release "$F" -V "$vers" -o 0 -l 8192
scoutfs get-fiemap -L "$F"
xfs_io -c "mmap -rwx 0 8192" \
-c "mwrite -S 0x11 528 16" "$F" &
sleep 1
# should be 1 job waiting
jobs | wc -l
scoutfs stage "${F}-stage" "$F" -V "$vers" -o 0 -l 8192
# no output here from write
sleep 1
# should be 0 - no more waiting jobs, xfs_io should have exited
jobs | wc -l
scoutfs get-fiemap -L "$F"
# read back contents to assure write changed the file
dd status=none if="$F" bs=1 count=48 skip=512 | hexdump -C
echo "== done"
t_pass

View File

@@ -83,9 +83,9 @@ touch "$OTHER"
ln "$FROM" "$HARD"
echo "== wrapped offsets should fail"
HUGE=0x8000000000000000
scoutfs move-blocks "$FROM" -f "$HUGE" -l "$HUGE" "$TO" -t 0 2>&1 | t_filter_fs
scoutfs move-blocks "$FROM" -f 0 -l "$HUGE" "$TO" -t "$HUGE" 2>&1 | t_filter_fs
HUGE=0xfffffffffffff000
scoutfs move-blocks "$FROM" -f "$HUGE" -l "8192" "$TO" -t 0 2>&1 | t_filter_fs
scoutfs move-blocks "$FROM" -f 0 -l "$HUGE" "$TO" -t "8192" 2>&1 | t_filter_fs
echo "== specifying same file fails"
scoutfs move-blocks "$FROM" -f 0 -l "$BS" "$HARD" -t 0 2>&1 | t_filter_fs

View File

@@ -157,7 +157,7 @@ echo "truncate should be waiting for first block:"
expect_wait "$DIR/file" "change_size" $ino 0
scoutfs stage "$DIR/golden" "$DIR/file" -V "$vers" -o 0 -l $BYTES
sleep .1
echo "trunate should no longer be waiting:"
echo "truncate should no longer be waiting:"
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
cat "$DIR/golden" > "$DIR/file"
vers=$(scoutfs stat -s data_version "$DIR/file")
@@ -168,10 +168,13 @@ scoutfs release "$DIR/file" -V "$vers" -o 0 -l $BYTES
# overwrite, not truncate+write
dd if="$DIR/other" of="$DIR/file" \
bs=$BS count=$BLOCKS conv=notrunc status=none &
pid="$!"
sleep .1
echo "should be waiting for write"
expect_wait "$DIR/file" "write" $ino 0
scoutfs stage "$DIR/golden" "$DIR/file" -V "$vers" -o 0 -l $BYTES
# wait for the background dd to complete
wait "$pid" 2> /dev/null
cmp "$DIR/file" "$DIR/other"
echo "== cleanup"

View File

@@ -5,18 +5,6 @@
t_require_commands sleep touch sync stat handle_cat kill rm
t_require_mounts 2
#
# usually bash prints an annoying output message when jobs
# are killed. We can avoid that by redirecting stderr for
# the bash process when it reaps the jobs that are killed.
#
silent_kill() {
exec {ERR}>&2 2>/dev/null
kill "$@"
wait "$@"
exec 2>&$ERR {ERR}>&-
}
#
# We don't have a great way to test that inode items still exist. We
# don't prevent opening handles with nlink 0 today, so we'll use that.
@@ -52,7 +40,7 @@ inode_exists $ino || echo "$ino didn't exist"
echo "== orphan from failed evict deletion is picked up"
# pending kill signal stops evict from getting locks and deleting
silent_kill $pid
t_silent_kill $pid
t_set_sysfs_mount_option 0 orphan_scan_delay_ms 1000
sleep 5
inode_exists $ino && echo "$ino still exists"
@@ -70,7 +58,7 @@ for nr in $(t_fs_nrs); do
rm -f "$path"
done
sync
silent_kill $pids
t_silent_kill $pids
for nr in $(t_fs_nrs); do
t_force_umount $nr
done
@@ -79,10 +67,49 @@ t_mount_all
while test -d $(echo /sys/fs/scoutfs/*/fence/* | cut -d " " -f 1); do
sleep .5
done
sv=$(t_server_nr)
# wait for reclaim_open_log_tree() to complete for each mount
while [ $(t_counter reclaimed_open_logs $sv) -lt $T_NR_MOUNTS ]; do
sleep 1
done
# wait for finalize_and_start_log_merge() to find no active merges in flight
# and not find any finalized trees
while [ $(t_counter log_merge_no_finalized $sv) -lt 1 ]; do
sleep 1
done
# wait for orphan scans to run
t_set_all_sysfs_mount_options orphan_scan_delay_ms 1000
# also have to wait for delayed log merge work from mount
sleep 15
# wait until we see two consecutive orphan scan attempts without
# any inode deletion forward progress in each mount
for nr in $(t_fs_nrs); do
C=0
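# remember the last seen orphan_scan_attempts and inode_deleted counter values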
LOSA=$(t_counter orphan_scan_attempts $nr)
LDOP=$(t_counter inode_deleted $nr)
while [ $C -lt 2 ]; do
sleep 1
OSA=$(t_counter orphan_scan_attempts $nr)
DOP=$(t_counter inode_deleted $nr)
if [ $OSA != $LOSA ]; then
if [ $DOP == $LDOP ]; then
(( C++ ))
else
C=0
fi
fi
LOSA=$OSA
LDOP=$DOP
done
done
for ino in $inos; do
inode_exists $ino && echo "$ino still exists"
done
@@ -131,7 +158,7 @@ while [ $SECONDS -lt $END ]; do
done
# trigger eviction deletion of each file in each mount
silent_kill $pids
t_silent_kill $pids
wait || t_fail "handle_fsetxattr failed"

View File

@@ -22,7 +22,7 @@ reset_all()
getfattr --absolute-names -d -m - "$T_D0" | \
grep "^scoutfs.totl." | \
cut -d '=' -f 1 | \
xargs -n 1 -I'{}' setfattr -x '{}' "$T_D0"
xargs -I'{}' setfattr -x '{}' "$T_D0"
}
echo "== prepare dir with write perm for test ids"

View File

@@ -72,7 +72,7 @@ quarter_data=$(echo "$size_data / 4" | bc)
# XXX this is all pretty manual, would be nice to have helpers
echo "== make initial small fs"
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m $quarter_meta -d $quarter_data \
scoutfs mkfs -A -f -Q 0,127.0.0.1,$T_SCRATCH_PORT -m $quarter_meta -d $quarter_data \
"$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
t_fail "mkfs failed"
SCR="$T_TMPDIR/mnt.scratch"

View File

@@ -2,7 +2,7 @@
# Test correctness of the setattr_more ioctl.
#
t_require_commands filefrag scoutfs touch mkdir rm stat mknod
t_require_commands scoutfs touch mkdir rm stat mknod
FILE="$T_D0/file"
@@ -55,17 +55,10 @@ scoutfs setattr -t 67305985.999999999 -V 1 -s 1 "$FILE" 2>&1 | t_filter_fs
TZ=GMT stat -c "%z" "$FILE"
rm "$FILE"
#
# With e2fsprogs-v1.42.10-10-g29758d2f, the output of filefrag 'flags' changes
# significantly. First, the _LAST flag is now output. Second, the 'unknown'
# flag is now printed out as 'unknown_loc'. To compensate for this, we check
# and replace the "correct" output for new versions here with the expected
# value.
#
echo "== large offline extents are created"
touch "$FILE"
scoutfs setattr -V 1 -o -s $((10007 * 4096)) "$FILE" 2>&1 | t_filter_fs
filefrag -v -b4096 "$FILE" 2>&1 | sed 's/last,unknown_loc,eof$/unknown,eof/' | t_filter_fs
scoutfs get-fiemap "$FILE"
rm "$FILE"
# had a bug where we were creating extents that were too long
@@ -75,4 +68,11 @@ scoutfs setattr -V 1 -o -s 4000000000 "$FILE" 2>&1 | t_filter_fs
scoutfs stat -s offline_blocks "$FILE"
rm "$FILE"
# Do not fail if data_version is unset - the unset `0` value should not
# be passed down to attr_x handling code which will -EINVAL on that.
echo "== omitting data_version should not fail"
touch "$FILE"
scoutfs setattr -s 0 -t 1725670311.0 -r 1725670311.0 "$FILE"
rm "$FILE"
t_pass

View File

@@ -0,0 +1,37 @@
#
# verify d_off output of xfs_io is consistent.
#
t_require_commands xfs_io
filt()
{
grep d_off | cut -d ' ' -f 1,4-
}
echo "== create content"
for s in $(seq 1 7 250); do
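# build a file name made of $s 'a' characters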
f=$(printf '%*s' $s | tr ' ' 'a')
touch ${T_D0}/$f
done
echo "== readdir all"
xfs_io -c "readdir -v" $T_D0 | filt
echo "== readdir offset"
xfs_io -c "readdir -v -o 20" $T_D0 | filt
echo "== readdir len (bytes)"
xfs_io -c "readdir -v -l 193" $T_D0 | filt
echo "== introduce gap"
for s in $(seq 57 7 120); do
f=$(printf '%*s' $s | tr ' ' 'a')
rm -f ${T_D0}/$f
done
xfs_io -c "readdir -v" $T_D0 | filt
echo "== cleanup"
rm -rf $T_D0
t_pass

View File

@@ -7,6 +7,7 @@ t_require_commands xfs_io filefrag scoutfs mknod
# this test wants to ignore unwritten extents
fiemap_file() {
filefrag -v -b4096 "$1" | grep -v "unwritten"
scoutfs get-fiemap "$1" | grep -v 'flags:.*U'
}
create_file() {
@@ -61,7 +62,10 @@ echo "== release past i_size is fine"
release_vers "$FILE" stat 400K 4K
echo "== wrapped blocks fails"
release_vers "$FILE" stat $vers 0x8000000000000000 0x8000000000000000
# just under!
release_vers "$FILE" stat $vers 0xfffffffffffff000 4096
# this goes over
release_vers "$FILE" stat $vers 0xfffffffffffff000 8192
echo "== releasing non-file fails"
mknod "$CHAR" c 1 3
@@ -105,25 +109,20 @@ for c in $(seq 0 4); do
fi
done
start=$(fiemap_file "$FILE" | \
awk '($1 == "0:"){print substr($4, 0, length($4)- 2)}')
release_vers "$FILE" stat $(($a * 4))K 4K
release_vers "$FILE" stat $(($b * 4))K 4K
release_vers "$FILE" stat $(($c * 4))K 4K
echo -n "$a $b $c:"
fiemap_file "$FILE" | \
awk 'BEGIN{ORS=""}($1 == (NR - 4)":") {
off=substr($2, 0, length($2)- 2);
phys=substr($4, 0, length($4)- 2);
if (phys > 100) {
phys = phys - phys + 100 + off;
}
len=substr($6, 0, length($6)- 1);
print " (" off, phys, len ")";
}'
scoutfs get-fiemap "$FILE" | \
awk 'BEGIN{ORS=""}($1 != "extents:") {
off=$3;
len=$6;
phys=substr($8, 0, 1);
phys = (phys == ".") ? off + 100 : 0;
print " (" off, phys, len ")"
};'
echo
rm "$FILE"

View File

@@ -2,11 +2,7 @@
# Test correctness of the staging operation
#
t_require_commands filefrag dd scoutfs cp cmp rm
fiemap_file() {
filefrag -v -b4096 "$1"
}
t_require_commands dd scoutfs cp cmp rm
create_file() {
local file="$1"
@@ -62,7 +58,7 @@ create_file "$FILE" 4096
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 4K
# make sure there are only offline extents
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
scoutfs get-fiemap "$FILE"
stage_vers "$FILE" stat 0 4096 "$T_TMP"
cmp "$FILE" "$T_TMP"
rm -f "$FILE"
@@ -72,7 +68,7 @@ create_file "$FILE" $((4096 * 4096))
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 16M
# make sure there are only offline extents
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
scoutfs get-fiemap "$FILE"
stage_vers "$FILE" stat 0 $((4096 * 4096)) "$T_TMP"
cmp "$FILE" "$T_TMP"
rm -f "$FILE"
@@ -143,8 +139,8 @@ hexdump -C "$FILE"
rm -f "$FILE"
echo "== wrapped region fails"
create_file "$FILE" 4096
stage_vers "$FILE" stat 0xFFFFFFFFFFFFF000 4096 /dev/zero
create_file "$FILE" 8192
stage_vers "$FILE" stat 0xFFFFFFFFFFFFF000 8192 /dev/zero
rm -f "$FILE"
echo "== non-block aligned offset fails"
@@ -152,7 +148,7 @@ create_file "$FILE" 4096
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 4K
stage_vers "$FILE" stat 1 4095 "$T_TMP"
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
scoutfs get-fiemap "$FILE"
rm -f "$FILE"
echo "== non-block aligned len within block fails"
@@ -160,7 +156,7 @@ create_file "$FILE" 4096
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 4K
stage_vers "$FILE" stat 0 1024 "$T_TMP"
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
scoutfs get-fiemap "$FILE"
rm -f "$FILE"
echo "== partial final block that writes to i_size does work"
@@ -175,7 +171,7 @@ echo "== zero length stage doesn't bring blocks online"
create_file "$FILE" $((4096 * 100))
release_vers "$FILE" stat 0 400K
stage_vers "$FILE" stat 4096 0 /dev/zero
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
scoutfs get-fiemap "$FILE"
rm -f "$FILE"
# XXX yup, needs to be updated for demand staging

View File

@@ -30,8 +30,13 @@ t_quiet mkdir -p "$T_TMPDIR/mnt.scratch"
t_quiet cd "$T_XFSTESTS_REPO"
if [ -z "$T_SKIP_CHECKOUT" ]; then
t_quiet git fetch
# if we're passed a tag instead of a branch, we can't --track
TRACK="--track"
if git tag -l | grep -q "$T_XFSTESTS_BRANCH" ; then
TRACK=""
fi
# this remote use is bad, do better
t_quiet git checkout -B "$T_XFSTESTS_BRANCH" --track "origin/$T_XFSTESTS_BRANCH"
t_quiet git checkout -B "$T_XFSTESTS_BRANCH" ${TRACK} "origin/$T_XFSTESTS_BRANCH"
fi
t_quiet make
t_quiet sync
@@ -45,9 +50,9 @@ t_quiet sync
cat << EOF > local.config
export FSTYP=scoutfs
export MKFS_OPTIONS="-f"
export MKFS_TEST_OPTIONS="-Q 0,127.0.0.1,42000"
export MKFS_SCRATCH_OPTIONS="-Q 0,127.0.0.1,43000"
export MKFS_DEV_OPTIONS="-Q 0,127.0.0.1,44000"
export MKFS_TEST_OPTIONS="-Q 0,127.0.0.1,$T_TEST_PORT"
export MKFS_SCRATCH_OPTIONS="-Q 0,127.0.0.1,$T_SCRATCH_PORT"
export MKFS_DEV_OPTIONS="-Q 0,127.0.0.1,$T_DEV_PORT"
export TEST_DEV=$T_DB0
export TEST_DIR=$T_M0
export SCRATCH_META_DEV=$T_EX_META_DEV
@@ -58,51 +63,47 @@ export MOUNT_OPTIONS="-o quorum_slot_nr=0,metadev_path=$T_MB0"
export TEST_FS_MOUNT_OPTS="-o quorum_slot_nr=0,metadev_path=$T_MB0"
EOF
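The MKFS_*_OPTIONS lines above now take their quorum ports from harness variables instead of the previously hardcoded values. Assuming those variables are plain environment variables exported by the test harness (an assumption; only the names appear in this diff), a local run could pin them back to the old defaults:

    # hypothetical: restore the previously hardcoded quorum ports
    export T_TEST_PORT=42000
    export T_SCRATCH_PORT=43000
    export T_DEV_PORT=44000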
cat << EOF > local.exclude
generic/003 # missing atime update in buffered read
generic/029 # mmap missing
generic/030 # mmap missing
generic/075 # file content mismatch failures (fds, etc)
generic/080 # mmap missing
generic/103 # enospc causes trans commit failures
generic/108 # mount fails on failing device?
generic/112 # file content mismatch failures (fds, etc)
generic/120 # (can't exec 'cause no mmap)
generic/126 # (can't exec 'cause no mmap)
generic/141 # mmap missing
generic/213 # enospc causes trans commit failures
generic/215 # mmap missing
generic/246 # mmap missing
generic/247 # mmap missing
generic/248 # mmap missing
generic/318 # can't support user namespaces until v5.11
generic/321 # requires selinux enabled for '+' in ls?
generic/325 # mmap missing
generic/338 # BUG_ON update inode error handling
generic/346 # mmap missing
generic/347 # _dmthin_mount doesn't work?
EOF
cp "$T_EXTRA/local.exclude" local.exclude
t_restore_output
t_stdout_invoked
echo " (showing output of xfstests)"
args="-E local.exclude ${T_XFSTESTS_ARGS:--g quick}"
./check $args
# the fs is unmounted when check finishes
t_stdout_compare
#
# ./check writes the results of the run to check.log. It lists
# the tests it ran, skipped, or failed. Then it writes a line saying
# everything passed or some failed. We scrape the most recent run and
# use it as the output to compare to make sure that we run the right
# tests and get the right results.
# ./check writes the results of the run to check.log. It lists the
# tests it ran, skipped, or failed. Then it writes a line saying
# everything passed or some failed.
#
#
# If XFSTESTS_ARGS were specified then we just pass/fail to match the
# check run.
#
if [ -n "$T_XFSTESTS_ARGS" ]; then
if tail -1 results/check.log | grep -q "Failed"; then
t_fail
else
t_pass
fi
fi
#
# Otherwise, when no args were given, we scrape the most recent run and
# use its summary as the output to compare, making sure that we ran the
# right tests and got the right results (sample check.log summary lines
# are shown after this script).
#
awk '
/^(Ran|Not run|Failures):.*/ {
if (pf) {
res=""
pf=""
}
res = res "\n" $0
}
/^(Passed|Failed).*tests$/ {
pf=$0
@@ -112,10 +113,14 @@ awk '
}' < results/check.log > "$T_TMPDIR/results"
# put a test per line so diff shows tests that differ
egrep "^(Ran|Not run|Failures):" "$T_TMPDIR/results" | \
fmt -w 1 > "$T_TMPDIR/results.fmt"
egrep "^(Passed|Failed).*tests$" "$T_TMPDIR/results" >> "$T_TMPDIR/results.fmt"
grep -E "^(Ran|Not run|Failures):" "$T_TMPDIR/results" | fmt -w 1 > "$T_TMPDIR/results.fmt"
grep -E "^(Passed|Failed).*tests$" "$T_TMPDIR/results" >> "$T_TMPDIR/results.fmt"
t_compare_output cat "$T_TMPDIR/results.fmt"
diff -u "$T_EXTRA/expected-results" "$T_TMPDIR/results.fmt" > "$T_TMPDIR/results.diff"
if [ -s "$T_TMPDIR/results.diff" ]; then
echo "tests that were skipped/run differed from expected:"
cat "$T_TMPDIR/results.diff"
t_fail
fi
t_pass
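For reference, the summary block that the awk script scrapes from results/check.log looks roughly like the following; the test names are hypothetical, only the line prefixes matched above come from the script:

    Ran: generic/001 generic/002 generic/005
    Not run: generic/003
    Failures: generic/002
    Failed 1 of 3 tests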

View File

@@ -7,7 +7,7 @@ message_output()
error_message()
{
message_output "$@" >&2
message_output "$@" >> /dev/stderr
}
error_exit()
@@ -62,31 +62,27 @@ test -x "$SCOUTFS_FENCED_RUN" || \
# files disappear.
#
# generate failure messages to stderr while still echoing 0 for the caller
careful_cat()
# silence error messages
quiet_cat()
{
local path="$@"
cat "$@" || echo 0
cat "$@" 2>/dev/null
}
while sleep $SCOUTFS_FENCED_DELAY; do
shopt -s nullglob
for fence in /sys/fs/scoutfs/*/fence/*; do
# catches unmatched regex when no dirs
if [ ! -d "$fence" ]; then
continue
fi
# skip requests that have been handled
if [ "$(careful_cat $fence/fenced)" == 1 -o \
"$(careful_cat $fence/error)" == 1 ]; then
continue
fi
srv=$(basename $(dirname $(dirname $fence)))
rid="$(cat $fence/rid)"
ip="$(cat $fence/ipv4_addr)"
reason="$(cat $fence/reason)"
fenced="$(quiet_cat $fence/fenced)"
error="$(quiet_cat $fence/error)"
rid="$(quiet_cat $fence/rid)"
ip="$(quiet_cat $fence/ipv4_addr)"
reason="$(quiet_cat $fence/reason)"
# request dirs can linger then disappear after fenced/error is set
if [ ! -d "$fence" -o "$fenced" == "1" -o "$error" == "1" ]; then
continue
fi
log_message "server $srv fencing rid $rid at IP $ip for $reason"

View File

@@ -55,6 +55,14 @@ with initial sparse regions (perhaps by multiple threads writing to
different regions) and wasted space isn't an issue (perhaps because the
file population contains few small files).
.TP
.B ino_alloc_per_lock=<number>
This option determines how many inode numbers are allocated in the same
cluster lock. The default, and maximum, is 1024. The minimum is 1.
Allocating fewer inodes per lock can allow more parallelism between
mounts because there are more locks that cover the same number of
created files. This can be helpful when working with smaller numbers of
large files.
.TP
.B log_merge_wait_timeout_ms=<number>
This option sets the amount of time, in milliseconds, that log merge
creation can wait before timing out. This setting is per-mount, only
@@ -130,6 +138,23 @@ the server for the filesystem if it is elected leader.
The assigned number must match one of the slots defined with \-Q options
when the filesystem was created with mkfs. If the number assigned
doesn't match a number created during mkfs then the mount will fail.
.TP
.B tcp_keepalive_timeout_ms=<number>
This option sets the amount of time, in milliseconds, that a client
connection will wait for active TCP packets before deciding that the
connection is dead. This setting is per-mount and only changes the
behavior of that mount.
.sp
The default value is 60000 ms (60 seconds). Precision beyond a whole
second is unlikely to matter given how TCP keepalive works in the
Linux kernel. Any value greater than 3000 (3 seconds) is valid.
.sp
The TCP keepalive mechanism is complex, and detecting a lost
connection quickly is important for cluster stability. If the local
network suffers from intermittent outages, this option may give
enough leeway to ride out those outages without the cluster becoming
desynchronized.
.SH VOLUME OPTIONS
Volume options are persistent options which are stored in the super
block in the metadata device and which apply to all mounts of the volume.
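For illustration, the two new mount options above can be combined with the existing per-mount options on the mount command line. A hypothetical invocation (device paths and option values are placeholders, not taken from this change):

    mount -t scoutfs \
        -o quorum_slot_nr=0,metadev_path=/dev/mapper/scoutfs-meta,ino_alloc_per_lock=64,tcp_keepalive_timeout_ms=10000 \
        /dev/mapper/scoutfs-data /mnt/scoutfs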

View File

@@ -1,7 +1,7 @@
#!/bin/bash
# can we find sparse? If not, we're done.
which sparse > /dev/null 2>&1 || exit 0
# sparse is required: fail with which's error message instead of exiting successfully.
which sparse > /dev/null || exit 1
#
# one of the problems with using sparse in userspace is that it picks up
@@ -22,6 +22,11 @@ RE="$RE|warning: memset with byte count of 4194304"
# some sparse versions don't know about some builtins
RE="$RE|error: undefined identifier '__builtin_fpclassify'"
# on el8, sparse can't handle __has_include for some reason when _GNU_SOURCE
# is defined, and we need that for O_DIRECT.
RE="$RE|note: in included file .through /usr/include/sys/stat.h.:"
RE="$RE|/usr/include/bits/statx.h:30:6: error: "
#
# don't filter out 'too many errors' here, it can signify that
# sparse doesn't understand something and is throwing a *ton*
@@ -37,10 +42,23 @@ search=$(gcc -print-search-dirs | awk '($1 == "install:"){print "-I" $2}')
#
# We're trying to use sparse against glibc headers which go wild trying to
# use internal compiler macros to test features. We copy gcc's and give
# them to sparse. But not __SIZE_TYPE__ 'cause sparse defines that one.
# them to sparse, but not the ones that sparse already has.
#
defines=".sparse.gcc-defines.h"
gcc -dM -E -x c - < /dev/null | grep -v __SIZE_TYPE__ > $defines
defines=".sparse.gcc-defines.$$.h"
awk '
# save defines from gcc
( FNR == NR ) { lines[$2]=$0 }
# delete defines that sparse also has
( FNR < NR ) { delete lines[$2] }
# dump remaining lines unique to gcc
END {
for (a in lines) {
print lines[a]
}
}
' <(gcc -dM -E -x c - < /dev/null) <(sparse -dM -E -x c - < /dev/null) > $defines
include="-include $defines"
#
@@ -54,6 +72,9 @@ else
fi
sparse $m64 $include $search/include "$@" 2>&1 | egrep -v "($RE)" | tee .sparse.output
rm -f $defines
if [ -s .sparse.output ]; then
exit 1
else

View File

@@ -10,6 +10,11 @@
* Just a quick simple native bitmap.
*/
int test_bit(unsigned long *bits, u64 nr)
{
return !!(bits[nr / BITS_PER_LONG] & (1UL << (nr & (BITS_PER_LONG - 1))));
}
void set_bit(unsigned long *bits, u64 nr)
{
bits[nr / BITS_PER_LONG] |= 1UL << (nr & (BITS_PER_LONG - 1));

Some files were not shown because too many files have changed in this diff.