Compare commits


310 Commits
v1.10 ... v1.23

Author SHA1 Message Date
Zach Brown
a88972b50e v1.23 Release
Finish the release notes for the 1.23 release.

Signed-off-by: Zach Brown <zab@versity.com>
2024-12-11 13:07:44 -08:00
Zach Brown
3e71f49260 Merge pull request #195 from versity/auke/el9_5
RHEL9.5 kernel support
2024-12-03 14:27:57 -08:00
Zach Brown
8a082e3f99 Merge pull request #197 from versity/greg/block-el9-minor-upgrades
Block EL9 minor version upgrades
2024-12-03 14:09:17 -08:00
Greg Cymbalski
110d5ea0d5 Block EL9 minor version upgrades
Since kABI migrations across minor versions are a thing of the past going
forward, we now:
- Detect if we're on EL9
- If so, add a requirement on the various flavors of release package to
  that specific major.minor version

This appropriately does not allow upgrades across minor versions.
2024-12-02 16:04:24 -08:00
Auke Kok
669de459a7 bdev_open_by_path is now removed as well.
Additional blkdev/bdev changes have now removed this call as well,
so we have to use yet another API to do the same for el9_5.

The changes are a little more subtle, as the bdev_mount() call now passes
a custom bd_holder_ops that we must match or else trigger a WARN_ON, so we
switch to using sbi as our holder arg instead.

Make sure to call bdev_fput() and not fput(), since we don't want our
private data cleanup deferred, which fails xfstests generic/604.
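
As a rough sketch of the open/teardown pairing this implies, assuming the
el9_5 kernel carries the file-based block device API (bdev_file_open_by_path,
file_bdev, bdev_fput); dev_path, sbi and the holder ops name are only
illustrative scoutfs-side placeholders:

    struct file *bdev_file;
    struct block_device *bdev;

    /* open with sbi as the holder so later holder-ops checks match */
    bdev_file = bdev_file_open_by_path(dev_path,
                                       BLK_OPEN_READ | BLK_OPEN_WRITE,
                                       sbi, &scoutfs_holder_ops);
    if (IS_ERR(bdev_file))
            return PTR_ERR(bdev_file);
    bdev = file_bdev(bdev_file);

    /* on teardown: bdev_fput(), not fput(), so cleanup isn't deferred */
    bdev_fput(bdev_file);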

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-11-27 18:52:39 -08:00
Auke Kok
621271f8cf backing_dev_info is entirely removed.
The assignments to it are no longer needed at all. All references can be
dropped since v6.4-rc4-163-g0d625446d0a4.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-11-08 13:32:21 -05:00
Auke Kok
d1092cdbe9 current_time() is no longer extern.
Since v6.5-rc1-7-g9b6304c1d537, current_time() is no longer
extern, so we need to update this grep regex to continue to match.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-11-08 13:21:03 -05:00
Zach Brown
aed7169fac Merge pull request #194 from versity/zab/v1.22
v1.22 Release
2024-11-03 15:06:40 -08:00
Zach Brown
7f313f2818 v1.22 Release
Finish the release notes for the 1.22 release.

Signed-off-by: Zach Brown <zab@versity.com>
2024-11-01 13:05:52 -07:00
Zach Brown
6b4e666952 Merge pull request #193 from versity/zab/hung_lock_fixes
Zab/hung lock fixes
2024-10-31 16:56:51 -07:00
Zach Brown
4a26059d00 Add lock-shrink-read-race test
Add a quick test that races readers and shrinking to stress lock object
refcount racing between concurrent lock request handling threads in the
lock server.

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-31 15:35:11 -07:00
Zach Brown
19e78c32fc Allow null lock compatibility between nodes
Right now a client requesting a null mode for a lock will cause
invalidations of all existing granted modes of the lock across the
cluster.

This is unnecessarily broad.  The absolute requirement is that a null
request invalidates other existing granted modes on the client.  That's
how the client safely resolves shrinking's desire to free locks while
the locks are in use.  It relies on turning it into the race between use
and remote invalidation.

But that only requires invalidating existing grants from the requesting
client, not all clients.  It is always safe for null grants to coexist
with all grants on other clients.  Consider the existing mechanics
involving null modes.  First, null locks are instantiated on the client
before sending any requests at all.  At any given time newly allocated
null locks are coexisting with all existing locks across the cluster.
Second, the server frees the client entry tracking struct the moment it
sends a null grant to the client.  From that point on the client's null
lock can not have any impact on the rest of the lock holders because the
server has forgotten about it.

So we add this case to the server's test that two client lock modes are
compatible.  We take the opportunity to comment the heck out of this
function instead of making it a dense boolean composition.  The only
functional change is the addition of this case, the existing cases are
refactored but unchanged.

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-31 15:34:59 -07:00
Zach Brown
8c1a45c9f5 Use bools instead of weird addition as or in net
When freeing acked responses in the net layer we sweep the send and
resend queues looking for queued responses up to the sequence number
we've had acked.  The code that did this used a weird pattern of
returning ints and adding them which gave me pause.  Clean it up to use
bools and bitwise or (not short-circuiting ||) to more obviously communicate
what's going on.
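
A minimal sketch of the resulting pattern, with invented function and field
names, since the point is just that |= always evaluates both sweeps:

    bool freed = false;

    freed |= free_acked_responses(&conn->send_queue, acked_seq);
    freed |= free_acked_responses(&conn->resend_queue, acked_seq);
    if (freed)
            wake_up(&conn->waitq);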

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-30 13:38:12 -07:00
Zach Brown
5a6eb569f3 Add some lock debugging trace fields
Over time some fields have been added to the lock struct which haven't
been added to the lock tracing output.  Add some of the more relevant
lock fields to tracing.

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-30 13:16:04 -07:00
Zach Brown
69d9040e68 Close lock server use-after-free race
Lock object lifetimes in the lock server are protected by reference
counts.  References are acquired while holding a lock on an rbtree.

Unfortunately, the decision to free lock objects wasn't tested while
also holding that lock on the rbtree.  A caller putting their object
would test the refcount, then wait to get the rbtree lock to remove it
from the tree.

There's a possible race where the decision is made to remove the object
but another reference is added before the object is removed.  This was
seen in testing and manifested as an incoming request handling path adding
a request message to the object before it was freed, losing the message.
Clients would then hang on a lock that never saw a response because
their request was freed with the lock object.

The fix is to hold the rbtree lock when testing the refcount and
deciding to free.  It adds a bit more contention but not significantly
so, given the wild existing contention on a per-fs spinlocked rbtree.

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-30 13:04:13 -07:00
Zach Brown
d94ec29ffa Merge pull request #192 from versity/greg/with-debug-kmod
Generate debug packages
2024-10-24 15:35:03 -07:00
Greg Cymbalski
70c36ae394 Generate debug packages
We had previously explicitly disabled this; let's start generating them.
2024-10-24 14:56:09 -07:00
Zach Brown
1d08a58add Merge pull request #151 from versity/auke/el9
EL9 support.
2024-10-04 11:46:47 -07:00
Auke Kok
fc7876e844 Allow certain tests to skip, but not fail exit condition.
Previously, any t_skip would cause the final test result to be a failure
because up until now no test should have been skipped.

However, with format-version-forward-back not being compatible with el9,
we are going to rely solely on el7/8 testing for that test, and
therefore we have to allow skipping of this test on el9 and newer OS
versions.

We add `t_skip_permitted` to signal this from the test case to the
run-tests.sh script. A new exit code is passed, and all accounting is
updated to reflect that a test was skipped, but this was permitted. We
modify format-version-forward-back to use this new exit path.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
5337b9e221 Ignore 'Process accounting resumed' dmesg.
I'm seeing more and more of these as audit is enabled in the el8 and el9
images I am using for testing; during ENOSPC tests this has a chance
of triggering process accounting suspension and a subsequent resume.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
8a22bdd366 Ignore device mapper size change dmesg output.
In v1.18-10-g5507ee5, we changed the test code away from loopback
to device-mapper, which simplified our DUT setup code.

However, this results in the occasional `device changed size` message
now being emitted by the `dm` driver instead of the `loop` kernel
module. We have to additionally ignore these kernel messages from now
on as well.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
235ab133a7 We must provide a_ops->dirty_folio and invalidate_folio.
In v5.17-rc4-53-g3a3bae50af5d, we can no longer leave this method
unhooked as the mm caller blindly calls it now. In-kernel filesystems
were all fixed in this change.

aops->invalidatepage was the old aops method that would free pages
with private attached data. This method is replaced with the
new invalidate_folio method. If this method is NULL, the memory
will become orphaned. (v5.17-rc4-29-gf50015a596fa)
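
A minimal sketch of the wiring this implies, assuming the generic buffer-head
folio helpers are suitable here; the ops struct name is illustrative:

    static const struct address_space_operations scoutfs_file_aops = {
            .dirty_folio      = block_dirty_folio,
            .invalidate_folio = block_invalidate_folio,
            /* ... existing scoutfs methods ... */
    };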

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
9335d2eb86 Don't --track when checking out a tag.
I've pushed a tag/release to scoutfs-xfstests-dev instead of a full
blown branch. This seems simpler and cleaner than using branches,
because we're going to end up rebasing these things a lot. However, we
can't --track tags, so, if the branch name passed to -x is actually a
tag instead of a branch, we have to omit the --track option here.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
97b081de3f Switch xfstests tag over in CI jobs using this marker file.
CI testing needs to know which xfstests branch to use on all OSs.

We can't just use the el9 xfstests branch on el9 only, because we
need to run the same el9 xfstests on el8 and el7 as well, otherwise
testing will just fail.

So, we put a marker file in our git repo that tells us that we're
not going to use the default `scoutfs` branch from scoutfs-xfstests-dev
but our own special tag or branch. The CI job then should pass the
proper -x {branch} flag to the run-tests.sh script.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
21b5032365 Add new xfstests that we won't support or don't pass
The new version of xfstests adds a _lot_ more tests to our mix. Many
of the new ones will auto enable or auto skip as needed.

There are tests we can't or won't support that will be in future
xfstests. Disable them now so we can avoid dealing with them later.

Quite a few fall into "we don't support these types of mounting yet",
mostly bind-mount or dm-mapper things. We disable all the swapfile
tests flat out.

A few tests fail on el7 but not el8/9; we don't have a way to run
them without failing yet, so disable them as well.

Update golden with the proper new array of tests. This all requires
the `auke/scoutfs-el9` branch in `versity/scoutfs-xfstests-dev`.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
4723f4f9ab Disable format-version-forward-back test on el9+.
Using t_skip, we just skip this test on el9.

If we ever want to add a formatversion 2->3 test, perhaps we should
just add a separate test script, instead of going over a static array.

But let's not worry about this too much right now.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
0a8b3f4e94 Fix basic-posix-acl test output on el9
It turns out that on el9, `bash -c` prints out `bash: line 1: cd..`
instead of `line 0:` on el7 or el8. So discard all the stderr from
these `cd` lines entirely and just rely on the expected echo
output to stdout.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
8a4b0967cb Add fiemap output through scoutfs util.
There's filefrag already, and it works, but its output is very
inconsistent between various OS release versions, and it has already
meant that we needed to adjust tests to account for these little
but insignificant changes. That's a lot more work than it is useful,
and the output changes even more in el9.

This adds `scoutfs get-fiemap FILE` and prints out block extent
info with flags that we care about as an abbreviated letter: U for
Unwritten, L for Last, and O for Unknown (as in, "offline").

The -P/--physical and -L/--logical options turn off logical or physical
offset display, in case you only want to see the offsets in either
units. You can pass -b/--byte to display offsets and lengths in
byte values. The block size will then be obtained from fstat() of
the queried file (4096 for scoutfs).

I've removed all uses of filefrag from our scoutfs tests. Xfstests
still calls it but their internal diff takes care of that issue.

Where needed and appropriate, the tests are adjusted so that the output
of `scoutfs get-fiemap` is as close as it can be to what it used to be,
so that reading the test results allows a quick view of what might
have been going wrong.

There are some output strings I have not bothered to update because
there's no real value in updating every output string to match;
we just adjust the golden file accordingly.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 15:38:34 -07:00
Auke Kok
606c519e96 Simple-staging doesn't actually test overflow.
This isn't a simple case where we can use u64_region_wraps because
length is s32.

Let's actually test a case that overflows instead of one that doesn't,
though. We should still properly add a dedicated overflow test here as
well.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
7d0e7e29f8 Avoid integer wrapping pitfalls for (off, len) pairs.
We use check_add_overflow(a, b, d) here to validate that (off, len)
pairs do not exceed the type's maximum value. The kernel conveniently has
several macros to sort out the problems with signed or unsigned types.

However, we're not interested in purely seeing whether (a + b)
overflows, because we're using this for (off, len) overflow checks,
where the bytes we read are from 0 to len - 1. We must therefore call
this check with (b) being "len - 1".

I've made sure that we don't accidentally fail when (len == 0)
in all cases by making sure we've already checked this condition
before, and moving code around as needed to ensure that (len > 0)
in all cases where we check.

The check_add_overflow macro requires a (d) argument in which the
result of the addition is temporarily stored and then checked to see
if an overflow occurred. We put a `tmp` variable of the correct type
on the stack as needed to make the checks function.
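
A small sketch of the check being described, assuming the caller has already
guaranteed len > 0; the helper name is illustrative:

    #include <linux/overflow.h>

    /* true if the (off, len) byte range wraps past the end of a u64 */
    static bool off_len_wraps(u64 off, u64 len)
    {
            u64 tmp;

            /* the last byte touched is off + (len - 1) */
            return check_add_overflow(off, len - 1, &tmp);
    }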

The simple-release-extents test mistakenly relied on this buggy wrap
code, so it needs fixing. The move-blocks test also got it wrong.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
69de6d7a74 Check for zero len in scoutfs_data_wait_check
We consistently enter scoutfs_data_wait_check with len == 0 from
scoutfs_aio_write(), which directly passes the i_size_read() value,
and for cases where we `echo >> $FILE` this is always reached.

This can cause the wrapping check to fail, since `0 + (0 - 1) < 0`
triggers the WARN_ON_ONCE wrap check, which needs updating to allow
certain operations on huge files.

More importantly, we can just omit all these checks when `len == 0`,
since they should always succeed and should never require taking all the
locks.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
ac00f5cedb Free after getline(), even on failure, and catch EOF on el9
getline() allocates the space for the return value even if there is an
error, so when it returns an error, we still have to free() it.

In el9, when reading stdin we will get errno=0 returned (no error) when
we hit the end of stdin. This behavior is different from el7/8. We don't
want to throw an error here and fail the test, since this isn't an error.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
6d42d260cf xargs option conflict now a warning in el9
The warnings thrown by el9's version of xargs are unexpected output and
cause this test to fail. When using the -I (replace) option, the -n 1
argument is always implied. In el7/8 no warnings were printed.

We can just remove `-n 1` since the argument is never needed.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
00ebe92186 Add stddef.h to util.h to avoid duplicate offsetof() def.
In el9 releases, our includes declare offsetof() before our header
chain includes stddef.h, which doesn't properly check if offsetof
is already defined, leading to a redefinition. Just include stddef.h
at all times here.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
570c05898c Correct endian conversion length (blkno is le64)
Trivial correction of wrong bitlength conversion.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
c298360a49 blkdev api changes - pass holder to replace FMODE_EXCL
Passing a holder ptr to these functions now replaces the FMODE_EXCL
flag. _put no longer needs flags for this reason; it takes the holder
instead.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
95f4e56546 Introduce blk_mode_t instead of abuse of fmode_t
v6.4-rc2-198-g05bdb9965305 adds a new type for passing flags instead
of abusing fmode_t flags. They are essentially the same flags just
in a new type.
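
A sketch of the compat direction this suggests for older kernels, guarded by a
hypothetical detection symbol:

    #ifndef SCOUTFS_HAVE_BLK_MODE_T        /* hypothetical config symbol */
    typedef fmode_t blk_mode_t;
    #define BLK_OPEN_READ   FMODE_READ
    #define BLK_OPEN_WRITE  FMODE_WRITE
    #define BLK_OPEN_EXCL   FMODE_EXCL
    #endif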

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
d5c2768f04 .tmpfile method now passed a struct file, which must be opened.
v6.0-rc6-9-g863f144f12ad changes the VFS method to pass in a struct
file and not a dentry in preparation for tmpfile support in fuse.
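
A rough shape of the method under the pre-idmap signature referenced above;
the scoutfs function name and the inode-creation step are placeholders:

    static int scoutfs_tmpfile(struct user_namespace *mnt_userns,
                               struct inode *dir, struct file *file,
                               umode_t mode)
    {
            int ret;

            /* create the unlinked inode on file->f_path.dentry (omitted) */
            ret = 0;

            /* the method is now responsible for opening the file itself */
            return finish_open_simple(file, ret);
    }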

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
676d429264 Assume el9 is the same as el8 for rpmbuild purposes.
The current spec template can't handle future major el releases
gracefully and fails to build entirely. We isolate all changes
so that they are either "el7 specific" or generic. This rids us
entirely of el8 specific conditionals.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
5b260e6b54 block_write_begin() is no longer passed aop_flags.
The flag is now obsolete; we don't need to set flags here or
pass them.
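
Roughly, the call now looks like this (scoutfs_get_block stands in for
whatever get_block callback is used):

    /* was: block_write_begin(mapping, pos, len, flags, &page, scoutfs_get_block); */
    ret = block_write_begin(mapping, pos, len, &page, scoutfs_get_block);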

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
e2b06f2c92 mpage_readpage() is now replaced with mpage_read_folio.
Folios are the new data type used for passing pages. For now,
folios only ever contain a single page; future kernels will
change that.
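
A minimal sketch of the folio-based hookup, assuming an mpage-style read path
is kept; the scoutfs names are illustrative:

    static int scoutfs_read_folio(struct file *file, struct folio *folio)
    {
            return mpage_read_folio(folio, scoutfs_get_block);
    }

    /* in the address_space_operations: .read_folio = scoutfs_read_folio, */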

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
546b437df7 Shrinkers are now registered with a name.
v5.19-rc4-52-ge33c267ab70d adds shrinker names to the registration
call to aid with shrinker debugging, which is otherwise highly opaque.

To enable it you'll have to recompile the kernel with
CONFIG_SHRINKER_DEBUG=y though, since it's disabled by default in
OSV kernels.
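
The registration call gains a printf-style name argument; a sketch with an
illustrative shrinker and name:

    /* was: err = register_shrinker(&sbi->item_shrinker); */
    err = register_shrinker(&sbi->item_shrinker, "scoutfs-item");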

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
381f4543b7 Use iter based read/write to support splice and thus sendfile().
The iter based read/write calls can support splice in el9 if we
hook them up; otherwise splice will stop working.

->write() is similar to v3.15-rc4-330-g8d0207652cbe; ->read() maps to
the generic implementation.
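
A sketch of the el9 file_operations hookup this implies; the scoutfs iter
methods are named here only for illustration:

    static const struct file_operations scoutfs_file_fops = {
            .read_iter    = scoutfs_file_read_iter,
            .write_iter   = scoutfs_file_write_iter,
            .splice_read  = generic_file_splice_read,
            .splice_write = iter_file_splice_write,
            /* ... */
    };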

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
418a441604 kernel_setsockopt no longer available.
We instead opt to use sock_setsockopt, which is generally exactly the
same and maps easily onto the old kernel_setsockopt calls without
impacting the code significantly.

There are 3 options we're setting with usec timevals, and those are
different enough now that they require a bit more compat code, so we
split them out into separate compat functions to handle them.

Some of the TCP sock functions also have a slightly different signature
(struct socket vs. struct sock), so we split those out as well. Some
further no longer return a result, either.
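
A sketch of the direction, assuming the el9 helpers: the usec timeout goes
through sock_setsockopt() with a kernel sockptr, while the TCP options use the
per-option sock helpers (values here are just examples):

    struct __kernel_sock_timeval tv = { .tv_sec = 0, .tv_usec = 500000 };
    int ret;

    ret = sock_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO_NEW,
                          KERNEL_SOCKPTR(&tv), sizeof(tv));
    tcp_sock_set_nodelay(sock->sk);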

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
f3abf9710b generic_perform_write signature changed
It now only needs the iocb and no longer the filp.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
8a45c2baff Deprecate struct timeval.
We switch to using 64-bit time structs and the recommended replacement
functions from Documentation/core-api/timekeeping.rst.
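
For example, the replacement pattern from timekeeping.rst is roughly:

    struct timespec64 ts;

    ktime_get_real_ts64(&ts);
    /* ts.tv_sec is 64-bit; ts.tv_nsec / NSEC_PER_USEC gives the usec part */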

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
345ebd0876 fiemap_prep replaces fiemap_check_flags.
v5.7-rc4-53-gcddf8a2c4a82

The prep helper replaces the sanity checks.
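
Roughly, the fiemap handler now starts with something like this (the
supported-flags value is only an example):

    ret = fiemap_prep(inode, fieinfo, start, &len, FIEMAP_FLAG_SYNC);
    if (ret)
            return ret;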

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
b718cf09de Handle idmapped mounts in xattr_handler
In v5.11-rc4-8-ge65ce2a50cf6 the *set handler is passed a
user_namespace struct pointing to the map from the mount.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
e4721366ff Added user_ns argument to posix_acl_update_mode, set_posix_acl
v5.11-rc4-8-ge65ce2a50cf6 adds idmap support to these calls.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
4ef64c6fcf Vfs methods become user namespace mount aware.
v5.11-rc4-24-g549c7297717c

All of these VFS methods are now passed a user_namespace.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
2d58ee2a37 Account for new bio_alloc() args.
Block device and opf are now passed through and set. We mimic compat
code to do the same.
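
Sketch of the new allocation call, with the op flags chosen purely for
illustration:

    /* was: bio = bio_alloc(GFP_NOFS, nr_vecs); then set bdev and op flags */
    bio = bio_alloc(bdev, nr_vecs, REQ_OP_WRITE | REQ_SYNC, GFP_NOFS);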

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
1f0dd7f025 __vmalloc defaults to PAGE_KERNEL everywhere, so the arg was removed.
v5.7-523-g88dca4ca5a93

__vmalloc no longer has the 3rd argument.
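
So the call simply drops its page protection argument (GFP_NOFS here is just
an example):

    /* was: buf = __vmalloc(size, GFP_NOFS, PAGE_KERNEL); */
    buf = __vmalloc(size, GFP_NOFS);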

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
077468ac1e debugfs_create_atomic_t now returns void, don't check result
Greg KH tells us to do just this in v5.4-rc5-31-g9927c6fa3e1d:

	No one checks the return value of debugfs_create_atomic_t(),
	as it's not needed, so make the return value void, so that no
	one tries to do so in the future.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
c951713ab2 list_cmp_func_t introduced, using const.
v5.12-rc6-9-g4f0f586bf0c8

All list_sort functions use the list_cmp_func_t type, which compares
list_head members. These are now required to be `const` as the
compiler will now check them. This propagates into our callers.
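
A minimal sketch of a comparator under the new typedef; the element struct is
invented for illustration:

    struct ent {
            struct list_head head;
            u64 seq;
    };

    static int ent_cmp(void *priv, const struct list_head *a,
                       const struct list_head *b)
    {
            const struct ent *ea = list_entry(a, struct ent, head);
            const struct ent *eb = list_entry(b, struct ent, head);

            return ea->seq < eb->seq ? -1 : ea->seq > eb->seq;
    }

    /* used as: list_sort(NULL, &ent_list, ent_cmp); */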

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
ad82a5e52a Squelch warning from bpf_iter.c.
v5.7-rc2-1174-gfd4f12bc38c3 significantly rewrites the bpf iterator
which hits this _next() function. It also adds a check that verifies
that the *pos is incremented after every call, even if it goes beyond
the last member (in which case it's not used).

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
d3c5328909 setattr_prepare no longer extern in fs.h
v5.11-rc4-7-g2f221d6f7b88 changes the setattr_prepare declaration from
extern to plain int. There's no further impact on the compat code to
keep it working, except for the detection regex.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
c30172210f Use blk_opf_t to pass bio op flags
The compat fallback is back to unsigned int.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
19af6e28fb "unaligned/access_ok.h" is not needed, and removed.
Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
8885486bc8 Add several low level includes.
Newer kernels include fewer header dependencies by default, so we have
to add these.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
0204e092e4 FIELD_SIZEOF was deprecated.
We could use sizeof_field as a direct replacement (which is the same)
except that this entire thing can directly use offsetofend().
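
For instance, where the old code added FIELD_SIZEOF() to an offsetof(), the
whole expression collapses (struct and field names invented for illustration):

    /* was: offsetof(struct foo, last) + FIELD_SIZEOF(struct foo, last) */
    size = offsetofend(struct foo, last);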

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Zach Brown
3b816cfd01 Merge pull request #183 from versity/auke/setattr_more_data_version_zero
Don't pass data version to attr_x unless the ioctl means to set it.
2024-10-03 12:25:46 -07:00
Auke Kok
b45fbe0bbb Don't pass data version to attr_x unless the ioctl means to set it.
The wrapper in setattr_more that translates the operations to attr_x
needs to decide whether to ask attr_x to perform a change to any of
the fields passed to it or not. For the date and size fields this
is implicit - we always tell attr_x to change them. For any of the
other fields, it should be explicit.

The only field that is in the struct that this applies to is
data_version. Because the data version field by default is zero,
we use that as the condition to decide whether to pass the data_version
down to attr_x.

Previously, the code would always pass a data_version=0 down to attr_x,
triggering one of the validity checks, making it return -EINVAL. We
add a simple test case to test for this issue.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-09-27 19:31:22 -04:00
Zach Brown
c5d9b93b96 Merge pull request #187 from versity/auke/sparsefilt
Sparse fix for epel 0.6.4 sparse - redefines
2024-09-27 14:13:28 -07:00
Zach Brown
2984f4d3a8 Merge pull request #189 from versity/greg/el9-spec-fix
Use path-inspecific weak-modules (EL9 fix)
2024-09-27 14:05:40 -07:00
Auke Kok
3b8d2eab8e Sparse fix for epel 0.6.4 sparse - redefines
We should rely on sparse from epel to do automated sparse checking and
not a git tag. But the 0.6.4 build currently fails on sparse/gcc
redefines.

This magic awk script from Zach processes sparse and gcc internal defines
and leaves intact the one that sparse doesn't have.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-09-27 15:37:47 -04:00
Greg Cymbalski
4dde57dc27 Rely on $PATH for weak-modules
This avoids having to deal with EL-specific path differences for
the weak-modules script.
2024-09-27 12:30:25 -07:00
Zach Brown
a4be74f4b1 Merge pull request #182 from versity/auke/write_test_name_to_kmsg
Write to kmsg which test we're executing.
2024-09-27 09:09:36 -07:00
Zach Brown
b66e52f3f8 Merge pull request #186 from versity/auke/add_quota_wkic_shrinker_counters
Add shrinker counters for wkic and quota_info.
2024-09-19 09:59:44 -07:00
Auke Kok
fb93d82b1e Add shrinker counters for wkic and quota_info.
These new shrinkers were recently added. Because there are very few
ways to debug them, or even see them function properly, we should at
least add counters for them.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-09-18 13:40:54 -04:00
Auke Kok
9d8ac2c7d7 Write to kmsg which test we're executing.
This is done by xfstests, and it's so much easier to follow what is going
on from logs or e.g. a serial console that I thought I should do this for
scoutfs tests as well. It makes it so much easier to discern which test
may have been the cause of issues when running a bunch of tests and you're
looking back at logs later.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-08-28 14:36:55 -07:00
Zach Brown
49acbb4415 Merge pull request #181 from versity/auke/basic-posix-acl
POSIX ACL fixes for el8, plus tests
2024-08-26 14:19:45 -07:00
Auke Kok
7b039a1d18 Add basic POSIX ACL tests.
These are extremely limited and very quick basic ACL tests we can
trivially do in under a second - purely basic functionality tests only.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-08-12 15:07:43 -04:00
Auke Kok
ccd65b9a61 Fix POSIX ACL use in el8+.
In 29160b0b I mistakenly disabled all caching of ACLs for el8
instead of only disabling cache lookups. The correct change
should have been to disable cache lookups only, and leave setting the
acl cache after storing or fetching, as the kernel needs this data
to resolve acls when doing permission checks.

Restore the acl cache insertions.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-08-09 17:57:23 -04:00
Zach Brown
aeb1dbc5f5 Merge pull request #180 from versity/zab/cleanup_large_test_tmp_output
Clean up large test files
2024-07-23 10:44:07 -07:00
Zach Brown
e20d3ae1e8 Clean up large test files
The test harness provides a TMP directory for tests to use.  It's badly
named.  It's meant to be more of a scratch directory that is not on the
FS being tested.

Tests use it both for small log files that give insight into the
platform and for large generated files that are not worth saving.  We
want to save the directory after test runs to get at the log files, but
we don't want to burn a ton of space also saving large generated files.

This updates the handful of tests to remove the few files that
are large enough to be a problem.  With these out of the way we can save
the tmp/ directory without its space consumption getting out of hand.

Signed-off-by: Zach Brown <zab@versity.com>
2024-07-22 14:08:32 -07:00
Zach Brown
3228749957 Merge pull request #137 from versity/auke/client-unmount-test-debug-data
Fix the debug output of client-unmount-recovery
2024-07-22 14:03:15 -07:00
Auke Kok
db445ce517 Fix the debug output of client-unmount-recovery
The script really wants to print rid instead of pid. But in case
of failure, we can just dump the arrays as well.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-07-22 14:02:53 -04:00
Zach Brown
bb5d98730b Merge pull request #167 from versity/auke/test-zero-time-createmany-parallel-mounts
Avoid math issues on fast test machines.
2024-07-22 10:12:09 -07:00
Zach Brown
cb0838a0ef Merge pull request #179 from versity/auke/extra_version_device_checks
Extra device checks when changing formatversion.
2024-07-12 13:24:42 -07:00
Auke Kok
7eaed848ed Increase time measurement accuracy beyond whole seconds.
We can rely on `bc` and `date` to record, manipulate and compare
time data with nanosecond precision. This fixes timing issues on
faster systems where this test completes a single pass of createmany in
under 1.0 second, causing the math to always fail.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-07-12 15:28:28 -04:00
Auke Kok
267c1cc2d5 Check meta flags bit set/unset for devices.
This extra check ensures the passed meta device and data device
are indeed what they should be, and protects against unwanted
swapping or repeated duplicate device arguments.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-07-12 15:22:45 -04:00
Zach Brown
c6b92329b3 Merge pull request #178 from versity/zab/v1.21
v1.21 Release
2024-07-01 14:31:27 -07:00
Zach Brown
91e7f051cf v1.21 Release
Finish the release notes for the 1.21 release.

Signed-off-by: Zach Brown <zab@versity.com>
2024-07-01 13:49:35 -07:00
Zach Brown
7645f04363 Merge pull request #177 from versity/zab/retention_quota_project_indx
Zab/retention quota project indx
2024-06-28 17:21:15 -07:00
Zach Brown
8c06302984 Let run-tests specify mkfs format version
Add a run-tests -V option that passes through the -V option to mkfs so
that runs can specify the format version that the primary volume will
have.  This doesn't affect the scratch file system versions.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
1bc83e9e2d Add indx xattr tag support to utils
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
38c6d66ffc Add indx xattr tag support
Add support for the indx xattr tag which lets xattrs determine the sort
order by their inode number in a global index.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
6a17dc335f Add quota tests
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
e0bb6ca481 Add quota support to utils
Add scoutfs cli commands for managing quotas and add its persistent
structures to the print command.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
38e6f11ee4 Add quota support
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
442980f1c9 Add project ID tests
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
82c2d0b1d0 Add o_tmpfile_linkat test binary
Add a test binary that uses o_tmpfile and linkat to create a file in a
given dir.  We have something similar, but it's weirdly specific to a
given test.  This is a simpler building block that could be used by more
tests.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
4a8240748e Add project ID support
Add support for project IDs.  They're managed through the _attr_x
interfaces and are inherited from the parent directory during creation.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
60ca950f42 Drop caches in totl test
Now that the _READ_XATTR_TOTALS ioctl uses the weak item cache we have
to drop caches before each attempt to read the xattrs that we just wrote
and synced.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
9c45e8b7ef read_xattr_totls ioctl uses weak item cache
Change the read_xattr_totls ioctl to use the weak item cache instead of
manually reading and merging the fs items for the xattr totals on every
call.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
ee9e8c3e1a Extract .totl. item merging into own functions
The _READ_XATTR_TOTALS ioctl had manual code for merging the .totl.
total and value while reading fs items.  We're going to want to do this
in another reader so let's put these in their own functions that clearly
isolate the logic of merging the fs items into a coherent result.

We can get rid of some of the totl_read_ counters that tracked which
items we were merging.  They weren't adding much value and conflated the
reading ioctl interface with the merging logic.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
5f156b7a36 Add scoutfs_forest_read_items_roots
Add a forest item reading interface that lets the caller specify the net
roots instead of always getting them from a network request.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
3a51ca369b Add the weak item cache
Add the weak item cache that is used for reads that can handle results
being a little behind.  This gives us a lot more freedom to implement
a cache that biases towards concurrent reads.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Bryant G. Duffy-Ly
460f3ce503 Add unit tests for retention
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
[zab@versity.com: refactored for retention, added test cases]
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
fb5331a1d9 Add inode retention bit
Add a bit to the private scoutfs inode flags which indicates that the
inode is in retention mode.  The bit is visible through the _attr_x
interface.  It can only be set on regular files and when set it prevents
modification to all but non-user xattrs.  It can be cleared by root.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
5a53e7144d Add format-version back/forward compat test
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
a23877b150 Add fs test functions for mounted paths
We have some fs functions which return info based on the test mount nr
as the test has setup. This refactors those a bit to also provide
some of the info when the caller has a path in a given mount. This will
let tests work with scratch mounts a little more easily.

Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
5ccdf3c9f0 Add T_MODULE for tests
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
270726a6ea Implement stat_more and setattr_more with attr_x
Now that we have the attr_x calls we can implement stat_more with
get_attr_x and setattr_more with set_attr_x.

The conversion of stat_more fixes a surprising consistency bug.
stat_more wasn't acquiring a cluster lock for the inode nor refreshing
it so it could have returned stale data if modifications were made in
another mount.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
de304628ea Add attr_x commands and documentation to utils
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
6a99ca9ede Add attr_x core and ioctls
The existing stat_more and setattr_more interfaces aren't extensible.
This solves that problem by adding attribute interfaces which specify
the specific fields to work with.

We're about to add a few more inode fields and it makes sense to add
them to this extensible structure rather than adding more ioctls or
relatively clumsy xattrs.  This is modeled loosely on the upstream
kernel's statx support.

The ioctl entry points call core functions so that we can also implement
the existing stat_more and setattr_more interfaces in terms of these new
attr_x functions.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
0521bd0e6b Make offline extent creation use one transaction
Initially setattr_more followed the general pattern where extent
manipulation might require multiple transactions if there are lots of
extent items to work with.   The scoutfs_data_init_offline_extent()
function that creates an offline extent handled transactions itself.

But in this case the call only supports adding a single offline extent.
It will always use a small fixed amount of metadata and could be
combined with other metadata changes in one atomic transaction.

This changes scoutfs_data_init_offline_extent() to have the caller
handle transactions, inode updates, etc.  This lets the caller perform
all the restore changes in one transaction.  This interface change will
then be used as we add another caller that adds a single offline extent
in the same way.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
361491846d Add scoutfs_fmt_vers_unsupported()
Add a little inline helper to test whether the mounted format version
supports a feature or not, returning an errno that callers can use when
they can return a shared expected error.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Bryant G. Duffy-Ly
9ba4271c26 Add new max format version of 2
We're about to add new format structures so increment the max version to
2.  Future commits will add the features before we release version 2 in
the wild.

Signed-off-by: Zach Brown <zab@zabbo.net>
2024-06-28 14:53:49 -07:00
Bryant G. Duffy-Ly
90cfaf17d1 Initial support for different inode sizes
We're about to increase the inode size and increment the format version.
Inode reading and writing have to handle different valid inode sizes as
allowed by the format version.  This is the initial skeletal work; the
later patches that really increase the inode size will further refine it
to add the specific known sizes and format versions.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
[zab@versity.com: reworded description, reworked to use _within]
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
6931cb7b0e Add scoutfs_inode_[gs]et_flags
Add functions for getting and setting our private inode flags.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
7d4db05445 Add scoutfs_item_lookup_smaller_zero
Add a lookup variant that returns an error if the item value is larger
than the caller's value buffer size and which zeros the rest of the
caller's buffer if the returned value is smaller.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
7b71250072 Merge pull request #176 from versity/zab/accumulated_fixes
Zab/accumulated fixes
2024-06-26 13:21:50 -07:00
Zach Brown
8e37be279c Use seqlock to protect inode fields
We were using a seqcount to protect high frequency reads and writes to
some of our private inode fields.  The writers were serialized by the
caller but that's a bit too easy to get wrong.  We're already storing
the write seqcount update so the additional internal spinlock stores in
seqlocks aren't a significant additional overhead.  The seqlocks also
handle preemption for us.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
d6642da44d Prevent downgrade of format version
Don't let change-format-version decrease the format version.  It doesn't
have the machinery to go back and migrate newer structures to older
structures that would be compatible with code expecting the older
version.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
[zab@versity.com: split from initial patch with other changes]
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
4b87045447 Pre-declare scoutfs_lock in forest.h
Definitions in forest.h use lock pointers.  Pre-declare the struct so it
doesn't break inclusion without lock.h, following current practice in
the header.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
3f773a8594 Fix uninit written in scoutfs_file_write_iter
scoutfs_file_write_iter tried to track written bytes and return those
unless there was an error.  But written was uninitialized if we got
errors in any of the calls leading up to performing the write.  The
bytes written were also not being passed to the generic_write_sync
helper.  This fixes up all those inconsistencies and makes it look like
the write_iter path in other filesystems.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
c385eea9a1 Check for all offline in scoutfs_file_write_iter
When we write to file contents we change the data_version.  To stage old
contents into an offline region the data_version of the file must match
the archived copy.  When writing we have to make sure that there is no
offline data so that we don't increase the data_version which will
prevent staging of any other file regions because the data_versions no
longer match.

scoutfs_file_write_iter was only checking for offline data in its write
region, not the entire file.  Fix it to match the _aio_write method and
check the whole file.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
c296bc1959 Remove scoutfs_data_wait_check_iter
scoutfs_data_wait_check_iter() was checking the contiguous region of the
file starting at its pos and extending for iter_iov_count() bytes.  The
caller can do that with the previous _data_wait_check() method by
providing the same count that _check_iter() was using.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
3052feac29 Have item cache show unprotected lock
The item cache has a set of safety checks that make sure that an
operation is performed while holding a lock that covers the item.  It
dumped a stack trace via WARN when that wasn't true, but it didn't
include any details about the keys or lock modes involved.

This adds a message that's printed once which includes the keys and
modes when an operation is attempted that isn't protected.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
1fa0d7727c scoutfs_item_create checks wrong lock mode
scoutfs_item_create() was checking that its lock had a read mode, when
it should have been checking for a write mode.  This worked out because
callers with write mode locks are also protecting reads.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
2af6f47c8b Fix bad error exit path in unlink
Unlink looks up the entry items for the name it is removing because we
no longer store the extra key material in dentries.  If this lookup
fails it will use an error path which release a transaction which wasn't
held.  Thankfully this error path is unlikely (corruption or systemic
errors like eio or enomem) so we haven't hit this in practice.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
6db69b7a4f Set root inode crtime in mkfs
When we added the crtime creation timestamp to the inode we forgot to
update mkfs to set the crtime of the root inode.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:18 -07:00
Zach Brown
8ca1f1994d Merge pull request #174 from versity/zab/trace_block_estale
Add tracepoint as block read returns ESTALE
2024-06-11 09:42:45 -07:00
Zach Brown
48716461e4 Add tracepoint as block read returns ESTALE
Block reads can return ESTALE naturally as mounts read through old
cached blocks.  We won't always log it as an error but we should add a
tracepoint that can be inspected.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-10 11:03:38 -07:00
Zach Brown
965b692bdc Merge pull request #171 from versity/zab/v1.20
v1.20 Release
2024-04-22 14:51:39 -07:00
Zach Brown
c3c4b08038 v1.20 Release
Finish the release notes for the 1.20 release.

Signed-off-by: Zach Brown <zab@versity.com>
2024-04-22 13:20:42 -07:00
Zach Brown
0519830229 Merge pull request #165 from versity/greg/kmod-uninstall-cleanup
More cleanly drive weak-modules on install/uninstall
2024-04-11 14:32:06 -07:00
Greg Cymbalski
4d6e1a14ae More safely install/uninstall with weak-modules
This addresses some minor issues with how we handle driving the
weak-modules infrastructure for handling running on kernels not
explicitly built for.

For one, we now drive weak-modules at install-time more explicitly (it
was adding symlinks for all modules into the right place for the running
kernel, whereas now it only handles that for scoutfs against all
installed kernels).

Also we no longer leave stale modules on the filesystem after an
uninstall/upgrade, similar to what's done for vsm's kmods right now.
RPM's pre/postinstall scriptlets are used to drive weak-modules to clean
things up.

Note that this (intentionally) does not (re)generate initrds of any
kind.

Finally, this was tested on both the native kernel version and on
updates that would need the migrated modules. As a result, installs are
a little quicker, the module still gets migrated successfully, and
uninstalls correctly remove (only) the packaged module.
2024-04-11 13:20:50 -07:00
Greg Cymbalski
fc3e061ea8 Merge pull request #164 from versity/greg/preserve-git-describe
Encode git info into spec to keep git info in final kmod
2024-03-29 13:48:33 -07:00
Greg Cymbalski
a4bc3fb27d Capture git info at spec creation time, pass into make
2024-02-05 15:44:10 -08:00
Zach Brown
67990a7007 Merge pull request #162 from versity/zab/v1.19
v1.19 Release
2024-01-30 15:46:49 -08:00
Zach Brown
ba819be8f9 v1.19 Release
Finish the release notes for the 1.19 release.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-30 12:11:23 -08:00
Zach Brown
1b103184ca Merge pull request #161 from versity/zab/merge_timeout_option_fix
Correctly set the log_merge_wait_timeout_ms option
2024-01-30 12:07:10 -08:00
Zach Brown
c3890abd7b Correctly set the log_merge_wait_timeout_ms option
The initial code for setting the timeout used the wrong parsed variable.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-30 12:01:35 -08:00
Zach Brown
5ab38bfa48 Merge pull request #160 from versity/zab/log_merging_speedups
Zab/log merging speedups
2024-01-29 12:26:55 -08:00
Zach Brown
e9ad61b444 Delete multiple log trees items per server commit
server_log_merge_free_work() is responsible for freeing all the input
log trees for a log merge operation that has finished.  It looks for the
next item to free, frees the log btree it references, and then deletes
the item.  It was doing this with a full server commit for each item
which can take an agonizingly long time.

This changes it to perform multiple deletions in a commit as long as
there's plenty of alloc space.  The moment the commit gets low it
applies the commit and opens a new one.  This sped up the deletion of a
few hundred thousand log tree items from taking hours to seconds.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-25 11:30:17 -08:00
Zach Brown
91bbf90f71 Don't pin input btrees when merging
The btree_merge code was pinning leaf blocks for all input btrees as it
iterated over them.  This doesn't work when there are a very large
number of input btrees.  It can run out of memory trying to hold a
reference to a 64KiB leaf block for each input root.

This reworks the btree merging code.  It reads a window of blocks from
all input trees to get a set of merged items.  It can take multiple
passes to complete the merge but by setting the merge window large
enough this overhead is reduced.  Merging now consumes a fixed amount of
memory rather than using memory proportional to the number of input
btrees.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-25 11:30:17 -08:00
Zach Brown
b5630f540d Add tracing of the log merge finalizing decision
Signed-off-by: Zach Brown <zab@versity.com>
2024-01-25 11:30:17 -08:00
Zach Brown
90a4c82363 Make log merge wait timeout tunable
Add a mount option for the amount of time that log merge creation can
wait before giving up.  We add some counters so we can see how often
the timeout is being hit and what the average successful wait time is.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-25 11:25:56 -08:00
Zach Brown
f654fa0fda Send syncs once when starting to merge
The server sends sync requests to clients when it sees that they have
open log trees that need to be committed for log merging to proceed.

These are currently sent in the context of each client's get_log_trees
request, resulting in sync requests queued for one client from all
clients.  Depending on message delivery and commit latencies, this can
create a sync storm.

The server's sends are reliable and the open commits are marked with the
seq when they opened.  It's easy for us to record having sent syncs to
all open commits so that future attempts can be avoided.  Later open
commits will have higher seqs and will get a new round of syncs sent.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-25 11:25:20 -08:00
Zach Brown
50168a2d2a Check each client's last log item for stable seq
The server was checking all client log_trees items to search for the
lowest commit seq that was still open.  This can be expensive when there
are a lot of finalized log_trees items that won't have open seqs.  Only
the last log_trees item for each client rid can be open, and the items
are sorted by rid and nr, so we can easily only check the last item for
each client rid.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-25 11:24:50 -08:00
Zach Brown
3c0616524a Only search last log_trees per rid for finalizing
During get_log_trees the server checks log_trees items to see if it
should start a log merge operation.  It did this by iterating over all
log_trees items and there can be quite a lot of them.

It doesn't need to see all of the items.  It only needs to see the most
recent log_trees item for each mount.  That's enough to make the
decisions that start the log merging process.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-25 11:23:59 -08:00
Zach Brown
8d3e6883c6 Merge pull request #159 from versity/auke/trans_hold
Fix ret output for scoutfs_trans_hold trace pt.
2024-01-09 09:23:32 -08:00
Auke Kok
8747dae61c Fix ret output for scoutfs_trans_hold trace pt.
Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-01-08 16:27:41 -08:00
Zach Brown
fffcf4a9bb Merge pull request #158 from versity/zab/kasan_stack_oob_get_reg
Ignore spurious KASAN unwind warning
2023-11-22 10:04:18 -08:00
Zach Brown
b552406427 Ignore spurious KASAN unwind warning
KASAN could raise a spurious warning if the unwinder started in code
without ORC metadata and tried to access in the KASAN stack frame
redzones.  This was fixed upstream but we can rarely see it in older
kernels.  We can ignore these messages.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-21 12:25:16 -08:00
Zach Brown
d812599e6b Merge pull request #157 from versity/zab/dmsetup_test_devices
Zab/dmsetup test devices
2023-11-21 10:13:02 -08:00
Zach Brown
03ab5cedb6 clean up createmany-parallel-mounts test
This test is trying to make sure that concurrent work isn't much, much,
slower than individual work.  It does this by timing creating a bunch of
files in a dir on a mount and then timing doing the same in two mounts
concurrently.  But it messed up the concurrency pretty badly.

It had the concurrent createmany tasks creating files with a full path.
That means that every create is trying to read all the parent
directories.  The way inode number allocation works means that one of
the mounts is likely to be getting a write lock that includes a shared
parent.  This created a ton of cluster lock contention between the two
tasks.

Then it didn't sync the creates between phases.  It could be
accidentally recording the time it took to write out the dirty single
creates as time taken during the parallel creates.

By syncing between phases and having the createmany tasks create files
relative to their per-mount directories we actually perform concurrent
work and test that we're not creating contention outside of the task
load.

This became a problem as we switched from loopback devices to device
mapper devices.  The loopback writers were using buffered writes so we
were masking the io cost of constantly invalidating and refilling the
item cache by turning the reads into memory copies out of the page
cache.

While we're in here we actually clean up the created files and then use
t_fail to fail the test while the files still exist so they can be
examined.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-15 15:12:57 -08:00
Zach Brown
2b94cd6468 Add loop module kernel message filter
Now that we're not setting up per-mount loopback devices we can not have
the loop module loaded until tests are running.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-15 13:39:38 -08:00
Zach Brown
5507ee5351 Use device-mapper for per-mount test devices
We don't directly mount the underlying devices for each mount because
the kernel notices multiple mounts and doesn't setup a new super block
for each.

Previously the script used loopback devices to create the local shared
block construct 'cause it was easy.  This introduced corruption of
blocks that saw concurrent read and write IOs.  The buffered kernel file
IO paths that loopback eventually degrades into by default (via splice)
could have buffered readers copying out of pages without the page lock
while writers modified the page.  This manifest as occasional crc
failure of blocks that we knowingly issue concurrent reads and writes to
from multiple mounts (the quorum and super blocks).

This changes the script to use device-mapper linear passthrough devices.
Their IOs don't hit a caching layer and don't provide an opportunity to
corrupt blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-15 13:39:38 -08:00
Zach Brown
1600a121d9 Merge pull request #156 from versity/zab/large_fragmented_free_hung_task
Extend hung task timeout for large-fragmented-free
2023-11-15 09:49:13 -08:00
Zach Brown
6daf24ff37 Extend hung task timeout for large-fragmented-free
Our large fragmented free test creates pathologically fragmented file extents which
are as expensive as possible to free.  We know that debugging kernels
can take a long time to do this so we can extend the hung task timeout.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-14 15:01:37 -08:00
Zach Brown
cd5d9ff3e0 Merge pull request #154 from versity/zab/srch_test_fixes
Zab/srch test fixes
2023-11-13 09:47:46 -08:00
Zach Brown
d94e49eb63 Fix quoted glob in srch-basic-functionality
One of the phases of this test wanted to delete files but got the glob
quoting wrong.  This didn't matter for the original test but when we
changed the test to use its own xattr name then those existing undeleted
files got confused with other files in later phases of the test.

This changes the test to delete the files with a more reliable find
pattern instead of using shell glob expansion.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-09 14:16:36 -08:00
Zach Brown
1dbe408539 Add tracing of srch compact struct communication
Signed-off-by: Zach Brown <zab@versity.com>
2023-11-09 14:16:33 -08:00
Zach Brown
bf21699ad7 bulk_create_paths test tool takes xattr name
Previously the bulk_create_paths test tool used the same xattr name for
each category of xattrs it was creating.

This created a problem where two tests got their xattrs confused with
each other.  The first test created a bunch of srch xattrs, failed, and
didn't clean up after itself.  The second test saw these search xattrs
as its own and got very confused when there were far more srch xattrs
than it thought it had created.

This lets each test specify the srch xattr names that are created by
bulk_create_paths so that tests can work with their xattrs independent
of each other.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-09 14:15:44 -08:00
Zach Brown
c7c67a173d Specifically wait for compaction in srch test
We just added a test to try to get srch compaction stuck by having an
input file continue at a specific offset.  To exercise the bug the test
needs to perform 6 compactions.  It needs to merge 4 sets of logs into 4
sorted files, make partial progress merging those 4 sorted files into
another file, and then finally attempt to continue compacting from the
partial progress offset.

The first version of the test didn't necessarily ensure that these
compactions happened.  It created far too many log files and then just
waited for time to pass.  If the host was slow then the mounts might not
make it through the initial logs to try to compact the sorted files.
The triggers wouldn't fire and the test would fail.

These changes much more carefully orchestrate and watch the various
steps of compaction to make sure that we trigger the bug.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-09 14:13:13 -08:00
Zach Brown
0d10189f58 Make srch compact request delay tunable
Add a sysfs file for getting and setting the delay between srch
compaction requests from the client.  We'll use this in testing to
ensure compaction runs promptly.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-09 14:13:07 -08:00
Zach Brown
6b88f3268e Merge pull request #153 from versity/zab/v1.18
v1.18 Release
2023-11-08 10:57:56 -08:00
Zach Brown
4b2afa61b8 v1.18 Release
Finish the release notes for the 1.18 release.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-07 16:01:59 -08:00
Zach Brown
222ba2cede Merge pull request #152 from versity/zab/stuck_srch_compact
Zab/stuck srch compact
2023-11-07 15:56:39 -08:00
Zach Brown
c7e97eeb1f Allow srch compaction from _SAFE_BYTES
Compacting sorted srch files can take multiple transactions because they
can be very large.  Each transaction resumes at a byte offset in a block
where the previous transaction stopped.

The resuming code tests that the byte offsets are sane but had a mistake
in testing the offset to skip to.  It returned an error if the
compaction resumed from the last possible safe offset for decoding
entries.

If a system is unlucky enough to have a compaction transaction stop at
just this offset then compaction stops making forward progress as each
attempt to resume returns an error.

The fix allows continuation from this last safe offset while returning
errors for attempts to continue *past* that offset.  This matches all
the encoding code which allows encoding the last entry in the block at
this offset.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-07 12:34:00 -08:00
Zach Brown
21c070b42d Add test for srch continuation safe pos errors
Add a test for srch compaction getting stuck hitting errors continuing a
partial operation.  It ensures that a block has an encoded entry at
the _SAFE_BYTES offset, that an operation stops precisely at that
offset, and then watches for errors.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-07 12:34:00 -08:00
Zach Brown
77fbf92968 Add t_trigger_set helper
Add a helper to arm or disarm a trigger with a value argument.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-07 12:12:10 -08:00
Zach Brown
d5c699c3b4 Don't respond with ENOENT for no srch compaction
The srch compaction request building function and the srch compaction
worker both have logic to recognize a valid response with no input files
indicating that there's no work to do.  The server unfortunately
translated nr == 0 into ENOENT and send that error response to the
client.  This caused the client to increment error counters in the
common case when there's no compaction work to perform.  We'd like the
error counter to reflect actual errors, we're about to check it in a
test, so let's fix this up to the server sends a sucessful response with
nr == 0 to indicate that there's no work to do.

Signed-off-by: Zach Brown <zab@versity.com>
2023-11-07 10:30:38 -08:00
Zach Brown
b56b8e502c Merge pull request #145 from versity/zab/server_seqlock
Use seqlock instead of seqcount in server
2023-10-24 14:36:56 -07:00
Zach Brown
5ff372561d Merge pull request #146 from versity/auke/truncatedd
Ensure dd creates the full 8K input test file.
2023-10-24 10:10:11 -07:00
Zach Brown
bdecee5e5d Merge pull request #147 from versity/zab/v1.17
v1.17 Release
2023-10-24 09:52:36 -07:00
Zach Brown
a9281b75fa v1.17 Release
Finish the release notes for the 1.17 release.

Signed-off-by: Zach Brown <zab@versity.com>
2023-10-23 14:20:13 -07:00
Auke Kok
707e1b2d59 Ensure dd creates the full 8K input test file.
Without `iflag=fullblock` we encounter sporadic cases where the
input file to the truncate test isn't fully written to 8K and ends
up being only 4K.  The subsequent truncate tests then fail.

We add a check to the input test file size just to be sure in the
future.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-23 17:04:19 -04:00
Zach Brown
006f429f72 Use seqlock instead of seqcount in server
The server had a few lower level seqcounts that it used to protect
state.  One user got it wrong by forgetting to disable preemption
around writers.  Debug kernels warned as write_seqcount_begin() was
called without preemption disabled.

We fix that user and make it easier to get right in the future by having
one higher level seqlock and using that consistently for seq read
begin/retry and write lock/unlock patterns.
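
Roughly, this is the classic seqlock read/write pattern; a minimal
sketch with made-up names rather than the actual server state:

    /* sketch only: illustrative state, not the scoutfs server structs */
    static DEFINE_SEQLOCK(example_seqlock);
    static u64 example_value;

    static void example_write(u64 val)
    {
        /* the seqlock's internal spinlock covers the raw seqcount
         * preemption rules that were previously easy to get wrong */
        write_seqlock(&example_seqlock);
        example_value = val;
        write_sequnlock(&example_seqlock);
    }

    static u64 example_read(void)
    {
        unsigned int seq;
        u64 val;

        do {
            seq = read_seqbegin(&example_seqlock);
            val = example_value;
        } while (read_seqretry(&example_seqlock, seq));

        return val;
    }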

Signed-off-by: Zach Brown <zab@versity.com>
2023-10-19 15:43:15 -07:00
Zach Brown
d71583bcf5 Merge pull request #134 from versity/auke/tests-add-bc
Add `bc` to test requirement.
2023-10-16 15:12:22 -07:00
Zach Brown
bb835b948d Merge pull request #138 from versity/auke/ignore-journald-rotate
Filter out journald rotate messages.
2023-10-16 14:54:56 -07:00
Zach Brown
bcdc4f5423 Merge pull request #143 from versity/zab/t_quiet_appends
t_quiet appends command output
2023-10-12 11:58:50 -07:00
Auke Kok
7ceb215c91 Filter out journald rotate messages.
On el9 distros systemd-journald will log rotation events into kmesg.
Since the default logs on VM images are transient only, they are
rotated several times during a single test cycle, causing test failures.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-12 12:27:41 -04:00
Auke Kok
d4d2b0850b Add bc to test requirement.
Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-12 12:21:29 -04:00
Zach Brown
cf05aefe50 t_quiet appends command output
The t_quiet test command execution helper was truncating quiet.log
with the output of each command.  It was meant to accumulate each
command and its output as they're run, so append instead.

Signed-off-by: Zach Brown <zab@versity.com>
2023-10-11 14:50:04 -07:00
Zach Brown
9f06065ce7 Merge pull request #123 from versity/auke/el8
el8 support
2023-10-10 10:37:43 -07:00
Ben McClelland
d2c2fece2a Add rpm spec file support for el8 builds
The rpmbuild support files no longer define the previously used kernel
module macros. This carves out the differences between el7 and el8 with
conditionals based on the distro we are building for.

Signed-off-by: Ben McClelland <ben.mcclelland@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
0e1e55d25b Ignore last flag output by filefrag.
New versions of filefrag will output the presence of the `last`
flag as well, but we don't care.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
293cee9554 Don't use static struct initializer.
In rhel7 this is a nested struct with ktime_t. However, in rhel8
ktime_t is a simple s64, and not a union, and thus we can't do
this as easily. Just memset it.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
a7704e0b56 Allow the kernel to return -ESTALE from orphan-inode test
In newer kernels, we always get -ESTALE because the inode has been
marked immediately as deleting. Since this is expected behavior we
should not fail the test here on this error value.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
819df4be60 Skip userns based testing for RHEL8.
In RHEL7, this was skipped automatically. In RHEL8, we don't support
the needed passing through of the actual user namespace into our
ACL set/get handlers. Once we get to around v5.11 or so, the handlers
are automatically passed the namespace. Until then, skip this test.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
592e3d471f Use .prefix for POSIX acl instead of .name.
New kernels expect to do a partial match when a .prefix is used here,
and provide a .name member in case matching should look at the whole
string. This is what we want.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
29160b0bc6 Don't cache ACLs in newer kernels.
The caller takes care of caching for us.  Caching them ourselves
messes with the memory management of cached ACLs and breaks.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
11c041d2ea New versions of getfattr will quote empty attr values.
Instead of messing with quotes and using grep for the correct
xattr name, directly query the value of the xattr being tested
only, and compare that to the input.

A side effect is that this is significantly simpler and faster.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
46e8dfe884 Account for coreutils using statx() call instead of stat()
`stat` internally switched to using the new `statx` syscall, which
changes the subsequent perror() output.  The underlying error is the
same as before (and expected).

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
a9beeaf5da Account for e2fsprogs output format changes.
The filefrag program in e2fsprogs-v1.42.10-10-g29758d2f now includes
an extra flag, and changes how the `unknown` flag is output.

We essentially adjust for this "new" golden value on the fly if we
encounter it. We don't expect future changes to the output.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
205d8ebd4a Account for quoting style changes in coreutils.
In older versions of coreutils, quoted strings are occasionally
output using utf-8 open/close single quotes.

New versions of coreutils will exclusively use the ASCII single quote
character "'" when the output is not a TTY - as is the case with
all test scripts.

We can avoid most of these problems by always setting LC_ALL=C in
testing, however.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
e580f33f82 Ignore loop device resizing messages.
These occasionally trigger during tests.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
d480243c11 Support .read/write_iter callbacks in lieu of .aio_read/write
The aio_read and aio_write callbacks are no longer used by newer
kernels, which now use iter based readers and writers.

We can avoid implementing plain .read and .write as an iter will
be generated when needed for us automatically.

We add a new data_wait_check_iter() function accordingly.

With these methods removed from the kernel, the el8 kernel no
longer uses the extended ops wrapper struct and is much closer now
to upstream.  As a result, a lot of methods move around between
inode_dir_operations and inode_file_operations etc, and perhaps
things will look a bit more structured.

We also need a slightly different data_wait_check() that accounts
for the iter and offset properly.
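
As a rough sketch of the new callback shapes (the handler names here
are made up; only the file_operations members and the generic helpers
are the kernel's):

    static ssize_t example_read_iter(struct kiocb *iocb, struct iov_iter *to)
    {
        /* a real handler would check waiting data first */
        return generic_file_read_iter(iocb, to);
    }

    static ssize_t example_write_iter(struct kiocb *iocb, struct iov_iter *from)
    {
        return generic_file_write_iter(iocb, from);
    }

    static const struct file_operations example_file_fops = {
        .read_iter  = example_read_iter,
        .write_iter = example_write_iter,
        /* no .read/.write: the vfs generates an iter when needed */
    };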

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
bafecbc604 Implement .readahead for address_space_operations (aops).
.readpages is obsolete in el8 kernels. We implement the .readahead
method instead which is passed a struct readahead_control. We use
the readahead_page(rac) accessor to retrieve page by page from the
struct.
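
A minimal sketch of the shape (read submission elided; only
readahead_control and readahead_page() are kernel API, the rest is
illustrative):

    static void example_readahead(struct readahead_control *rac)
    {
        struct page *page;

        while ((page = readahead_page(rac))) {
            /* pages arrive locked with a reference held; a real
             * implementation submits reads and unlocks on completion,
             * this sketch just releases them */
            unlock_page(page);
            put_page(page);
        }
    }

    static const struct address_space_operations example_aops = {
        .readahead = example_readahead,
    };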

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
65be4682e3 implement generic_file_buffered_write()
This function is removed in el8, so we need to implement it ourselves
now.  Copy it.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
e88845d185 (un)register_hotcpu_notifier is obsolete
v4.9-12228-g530e9b76ae8f drops all (un)register_(hot)cpu_notifier()
API functions.  From here on we need to use the new cpuhp_* API.

We avoid this entirely for now, at the cost of leaking pages until
the filesystem is unmounted.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
ec50e66fff Timespec64 changes for yr2038.
Provide a fallback `current_time(inode)` implementation for older
kernels.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
0e91f9a277 Adjust scoutfs_quorum_loop trace point.
Convert the timeout struct into a u64 nsecs value before passing it to
the trace point event, so as not to overflow the 64bit limitation on args.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
69068ae2c0 Initialize msg.msg_iter from iovec.
Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
016dac39bf Handle net arg being added to sock_create_kern()
Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
e69cf3dec8 kernel_getsockname and kernel_getpeername dropped addrlen arg.
v4.16-rc1-1-g9b2c45d479d0

This interface now returns the address length (sizeof(addr)) on
success instead of 0.  Therefore, we have to change the error condition
detection.

The compat for older kernels handles the addrlen check internally.
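
The error check change looks roughly like this (a sketch, not the
scoutfs net code):

    static int example_local_port(struct socket *sock, u16 *port)
    {
        struct sockaddr_in sin;
        int ret;

        /* now returns the address length on success, < 0 on error */
        ret = kernel_getsockname(sock, (struct sockaddr *)&sin);
        if (ret < 0)
            return ret;

        *port = ntohs(sin.sin_port);
        return 0;
    }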

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
d6c143a639 xattr functions are now passed flags through struct xattr_handler
Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
09ae100254 Remove the use of the backing_dev_info pointer from address_space.
Instead, use the new inline inode_to_bdi() from <backing-dev.h> to fill
in the task's backing_dev_info.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
50f5077863 Do not use MS_* flags anymore in kernel space.
MS_* flags from <linux/mount.h> should not be used in the kernel
anymore from 4.x onwards.  Instead, we need to use the SB_* versions.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Zach Brown
cca4fcb788 Use count/scan objects shrinking interface
Move to the more recent interfaces for counting and scanning cached
objects to shrink.
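
The newer interface splits the old single shrink callback in two; a
sketch with hypothetical cache helpers:

    static unsigned long example_count_objects(struct shrinker *shrink,
                                               struct shrink_control *sc)
    {
        return example_cache_count();               /* hypothetical */
    }

    static unsigned long example_scan_objects(struct shrinker *shrink,
                                              struct shrink_control *sc)
    {
        /* free up to sc->nr_to_scan objects, return how many went */
        return example_cache_free(sc->nr_to_scan);  /* hypothetical */
    }

    static struct shrinker example_shrinker = {
        .count_objects = example_count_objects,
        .scan_objects  = example_scan_objects,
        .seeks         = DEFAULT_SEEKS,
    };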

Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
1d150da3f0 Use page->lru instead of page->list
With v3.14-rc1-10-g34bf6ef94a83, page->list is removed.  Instead,
use the union member ->lru.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Zach Brown
28f03d3558 Use more modern bio interfaces
Move towards modern bio interfaces, while unfortunately carrying along
a bunch of compat functions that let us still work with the old
incompatible interfaces.

Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Zach Brown
4275f6e6e5 Use memalloc_nofs_save
memalloc_nofs_save() was introduced as preferable to trying to use GFP
flags to indicate that a task should not recurse during reclaim.  We use
it instead of the _noio_ variant we were using before.
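
Usage is a scoped save/restore around work that must not recurse into
the fs; a sketch with a hypothetical worker:

    static void example_work_nofs(void)
    {
        unsigned int nofs_flags;

        nofs_flags = memalloc_nofs_save();
        /* allocations in here implicitly behave like GFP_NOFS */
        example_allocating_work();          /* hypothetical */
        memalloc_nofs_restore(nofs_flags);
    }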

Signed-off-by: Zach Brown <zab@versity.com>
2023-10-09 15:35:40 -04:00
Zach Brown
70a5b6ffe2 Use percpu_counter_add_batch
__percpu_counter_add was renamed to percpu_counter_add_batch to make it
clear that the __ didn't mean it was less safe, as it does in other
calls in the API, but just that it takes an additional batch parameter.
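
The call sites only change the name; a sketch:

    static void example_count(struct percpu_counter *pcc, s64 delta)
    {
        /* was: __percpu_counter_add(pcc, delta, 32); */
        percpu_counter_add_batch(pcc, delta, 32);
    }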

Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
b89ecd47b4 Use __posix_acl_create/_chmod and add backwards compatibility
There are new interfaces available but the old one has been retained
for us to use.  On older kernels, we fall back to the previous names
of these functions.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
4293816764 Fix argument test for __posix_acl_valid.
The argument is fixed to be user_namespace, instead of user_ns.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
f0de59a9a3 Use setattr_prepare() as inode_change_ok() was removed in v4.8-rc1
Instead, we can call setattr_prepare() directly. We provide a fallback
for older kernels.
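
The fallback can be a thin mapping in the compat header, along these
lines (a sketch; the config symbol is made up):

    #ifndef EXAMPLE_HAVE_SETATTR_PREPARE    /* hypothetical symbol */
    static inline int setattr_prepare(struct dentry *dentry, struct iattr *attr)
    {
        return inode_change_ok(d_inode(dentry), attr);
    }
    #endif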

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
1f0a08eacb Use the new inode->i_version manipulation methods.
Provide fallback in degraded mode for kernels pre-v4.15-rc3 by directly
manipulating the member as needed.
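
A sketch of the split, behind a made-up compat symbol and wrapper name:

    static inline void example_inc_iversion(struct inode *inode)
    {
    #ifdef EXAMPLE_HAVE_LINUX_IVERSION_H    /* hypothetical symbol */
        inode_inc_iversion(inode);          /* <linux/iversion.h> helper */
    #else
        inode->i_version++;                 /* older kernels: raw member */
    #endif
    }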

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
dac3f056a5 inode->i_mutex has been replaced with inode->i_rwsem.
Since v4.6-rc3-27-g9902af79c01a, inode->i_mutex has been replaced
with ->i_rwsem.  However, inode_lock() and related functions have long
worked as intended and provide fully exclusive locking of the inode.

To avoid a name clash on pre-rhel8 kernels, we have to rename a
stack variable in `src/file.c`.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
af868aad9b New inode->i_version API requires <iversion.h>
Since v4.15-rc3-4-gae5e165d855d, <linux/iversion.h> contains a new
inode->i_version API and it is not included by default.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
cf4df0ef9f use $(MAKE) to allow passing jobserver flags.
With this, we can `make -jX` to speed up compiles a bit from
the kmod folder.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
81aa58253e module_init/_exit should have a semicolon at eol.
In the past this was not needed, but from el7 onwards these macros
require the semicolon.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
c683ded0e6 Adjust for new augmented rbtree compute callback function signature
The new variant of the code that recomputes the augmented value is
designed to handle non-scalar types.  To facilitate that, it has new
semantics for the _compute callback: it is now passed a boolean flag
`exit` which indicates that, if the value isn't changed, it should
halt propagation.

The callback now returns whether propagation should stop, rather than
the computed new value.  The callback updates the computed value in
the node directly.
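
The callback shape is roughly this (a sketch with a hypothetical node
type and subtree helper):

    static bool example_compute(struct example_node *en, bool exit)
    {
        u64 max = example_subtree_max(en);  /* hypothetical */

        if (exit && en->augmented_max == max)
            return true;                    /* unchanged: stop propagating */

        en->augmented_max = max;
        return false;                       /* keep propagating upward */
    }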

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
f27431b3ae Add include <blkdev.h>.
Fixes: Error: implicit declaration of function ‘blkdev_put’

Previously this was an `extern` in <fs.h> and included implicitly,
hence the need to include it explicitly now.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
28c3cee995 preempt_mask.h is removed entirely.
v4.1-rc4-22-g92cf211874e9 merges this into preempt.h, and on
rhel7 kernels we don't need this include anymore either.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
430960ef3c page_cache_release() is removed. put_page() instead.
Even in 3.x, this already was equivalent.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
7006a84d96 flush_work_sync is equivalent to flush_work.
v3.15-rc1-6-g1a56f2aa4752 removes flush_work_sync entirely, but
ever since v3.6-rc1-25-g606a5020b9bd which made all workqueues
non-reentrant, it has been equivalent to flush_work.

This is safe because in all cases only one server->work can be
in flight at a time.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
eafb8621da d_materialise_unique replaced with d_splice_alias.
Note argument order reversal.
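
In a lookup path the swap looks like this (a sketch; the iget helper
is hypothetical):

    static struct dentry *example_lookup(struct inode *dir, struct dentry *dentry,
                                         unsigned int flags)
    {
        struct inode *inode = example_iget(dir, dentry);   /* hypothetical */

        /* old: return d_materialise_unique(dentry, inode); */
        return d_splice_alias(inode, dentry);
    }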

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
006555d42a READ_ONCE() replaces ACCESS_ONCE()
v3.18-rc3-2-g230fa253df63 forces us to replace ACCESS_ONCE() with
READ_ONCE(), but it is probably the better interface anyway and works
with non-scalar types.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
8e458f9230 PAGE_CACHE_SIZE was removed, replace with PAGE_SIZE.
PAGE_CACHE_SIZE was previously defined to be equivalent to PAGE_SIZE.

This symbol was removed in v4.6-rc1-32-g1fa64f198b9f.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Auke Kok
32c0dbce09 Include kernel.h and fs.h at the top of kernelcompat.h
Because we `-include src/kernelcompat.h` from the command line,
this header gets included before any of the kernel includes in
most .c and .h files.  We should at least make sure we pull in
<fs.h> and <kernel.h> since they're required.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2023-10-09 15:35:40 -04:00
Zach Brown
9c9ba651bd Merge pull request #141 from versity/zab/fence-reclaim-racey-seq-test
Remove seq test from fence-and-reclaim
2023-10-09 12:21:48 -07:00
Zach Brown
14eddb6420 Remove seq test from fence-and-reclaim
The fence-and-reclaim test has a little function that runs after fencing
and recovery to make sure that all the mounts are operational again.
The main thing it does is re-use the same locks across a lot of files to
ensure that lock recovery didn't lose any locks that stop forward
progress.

But I also threw in a test of the committed_seq machinery, as a bit of
belt and suspenders.  The problem is that the test is racy.  It samples
the seq after the write so the greatest seq it remembers can be after
the write and will not be committed by the other nodes' reads.  It
being less than the committed_seq is a totally reasonable race.

This explains why the test has been failing rarely ever since it was
written.  There's no particular reason to test the committed_seq
machinery here, so we can just remove that racy test.

Signed-off-by: Zach Brown <zab@versity.com>
2023-10-09 10:56:15 -07:00
Zach Brown
597208324d Merge pull request #140 from versity/zab/v1.16
v1.16 Release
2023-10-04 11:51:45 -07:00
Zach Brown
8596c9ad45 v1.16 Release
Finish the release notes for the 1.16 release.

Signed-off-by: Zach Brown <zab@versity.com>
2023-10-04 10:32:55 -07:00
Zach Brown
8a705ea380 Merge pull request #139 from versity/zab/hold_commit_stuck
Start server commits when holds wait for alloc
2023-10-04 10:27:12 -07:00
Zach Brown
4784ccdfd5 Start server commits when holds wait for alloc
Server code that wants to dirty blocks by holding a commit won't be
allowed to until the current allocators for the server transaction have
enough space for the holder.  As an active holder applies the commit the
allocators are refilled and the waiting holders will proceed.

But the current allocators can have no resources as the server starts
up.  There will never be active holders to apply the commit and refill
the allocators.  In this case all the holders will block indefinitely.

The fix is to trigger a server commit when a holder doesn't have room.
It used to be that commits were only triggered when apply callers were
waiting.  We transfer some of that logic into a new 'committing' field
so that we can have commits in flight without apply callers waiting.  We
add it to the server commit tracing.

While we're at it we clean up the logic that tests if a hold can
proceed.  It used to be confusingly split across two functions that both
could sample the current allocator space remaining.  This could lead to
weird cases where the first holder could use the second alloc remaining
call, not the one whose values were tested to see if the holder could
fit.  Now each hold check only samples the allocators once.

And finally we fix a subtle case where the budget exceeded message can
spuriously trigger in the case where dirtying the freed list created a
new empty block after the holder recorded the amount of space in the
freed block.

Signed-off-by: Zach Brown <zab@versity.com>
2023-10-03 13:32:09 -07:00
Zach Brown
778c2769df Merge pull request #132 from versity/zab/v1.15
v1.15 Release
2023-07-17 13:02:10 -07:00
Zach Brown
9e3529060e v1.15 Release
Finish the release notes for the 1.15 release.

Signed-off-by: Zach Brown <zab@versity.com>
2023-07-17 12:07:13 -07:00
Zach Brown
1672b3ecec Merge pull request #130 from versity/zab/noncontig_alloc_einval
Fix partial preallocation when _contig_only = 0
2023-07-17 10:21:18 -07:00
Zach Brown
55f9435fad Fix partial preallocation when _contig_only = 0
Data preallocation attempts to allocate large aligned regions of
extents.  It tried to fill the hole around a write offset that
didn't contain an extent.  It missed the case where there can be
multiple extents between the start of the region and the hole.
It could try to overwrite these additional existing extents and writes
could return EINVAL.

We fix this by trimming the preallocation to start at the write offset
if there are any extents in the region before the write offset.  The
data preallocation test output has to be updated now that allocation
extents won't grow towards the start of the region when there are
existing extents.

Signed-off-by: Zach Brown <zab@versity.com>
2023-07-17 09:36:09 -07:00
Zach Brown
072f6868d3 Merge pull request #131 from versity/zab/server_merge_splice_failure
Process log merge splicing in many commits
2023-07-15 21:03:32 -07:00
Zach Brown
8a64b46a2f Process log merge splicing in many commits
Log merge completions were spliced in one server commit.  It's possible
to get enough completion work pending that it all can't be completed in
one server commit.  Operations fail with ENOSPC and because these
changes can't be unwound cleanly the server asserts.

This allows the completion splicing to break the work up into multiple
commits.

Processing completions in multiple commits means that request creation
can observe the merge status in states that weren't possible before.
Splicing is careful to maintain an elevated nr_complete count while the
client can't get requests because the tree is rebalancing.

Signed-off-by: Zach Brown <zab@versity.com>
2023-07-14 13:28:29 -07:00
Zach Brown
14901c39aa Merge pull request #129 from versity/zab/v1.14
v1.14 Release
2023-06-29 11:30:01 -07:00
Zach Brown
e095127ae9 v1.14 Release
Finish the release notes for the 1.14 release.

Signed-off-by: Zach Brown <zab@versity.com>
2023-06-29 10:03:53 -07:00
Zach Brown
a9da27444f Merge pull request #128 from versity/zab/prealloc_fragmentation
Zab/prealloc fragmentation
2023-06-29 09:57:32 -07:00
Zach Brown
49fe89741d Merge pull request #125 from versity/zab/get_referring_entries
Zab/get referring entries
2023-06-29 09:57:06 -07:00
Zach Brown
847916860d Advance move_blocks extent search offset
The move_blocks ioctl finds extents to move in the source file by
searching from the starting block offset of the region to move.
Logically, this is fine.  After each extent item is deleted the next
search will find the next extent.

The problem is that deleted items still exist in the item cache.  The
next iteration has to skip over all the deleted extents from the start
of the region.  This is fine with large extents, but with heavily
fragmented extents this creates a huge amplification of the number of
items to traverse when moving the fragmented extents in a large file.
(It's not quite O(n^2)/2 for the total extents, since deleted items are
purged as we write out the dirty items in each transaction, but it's
still immense.)

The fix is to simply start searching for the next extent after the one
we just moved.

Signed-off-by: Zach Brown <zab@versity.com>
2023-06-28 16:54:28 -07:00
Zach Brown
564b942ead Write test for hole filling noncontig prealloc
Add a test which exercises filling holes in prealloc regions when the
_contig_only prealloc option is not set.

Signed-off-by: Zach Brown <zab@versity.com>
2023-06-28 16:16:04 -07:00
Zach Brown
3d99fda0f6 Preallocate data around iblock when noncontig
If the _contig_only option isn't set then we try to preallocate aligned
regions of files.  The initial implementation naively only allowed one
preallocation attempt in each aligned region.  If it got a small
allocation that didn't fill the region then every future allocation
in the region would be a single block.

This changes every preallocation in the region to attempt to fill the
hole in the region that iblock fell in.  It uses an extra extent search
(item cache search) to try and avoid thousands of single block
allocations.

Signed-off-by: Zach Brown <zab@versity.com>
2023-06-28 12:21:25 -07:00
Zach Brown
6c0ab75477 Merge pull request #126 from versity/zab/rht_block_shrink_deadlock
Avoid deadlock from block reclaim in rht resize
2023-06-16 10:30:16 -07:00
Zach Brown
89b238a5c4 Add more acceptable quorum delay during testing
Loaded VMs can see a few more seconds delay.

Signed-off-by: Zach Brown <zab@versity.com>
2023-06-16 09:38:58 -07:00
Zach Brown
05371b83f0 Update expected console messages during testing
Signed-off-by: Zach Brown <zab@versity.com>
2023-06-16 09:37:37 -07:00
Zach Brown
acafb869e7 Avoid deadlock from block reclaim in rht resize
The RCU hash table uses deferred work to resize the hash table.  There's
a time during resize when hash table iteration will return EAGAIN until
resize makes more progress.  During this time resize can perform
GFP_KERNEL allocations.

Our shrinker tries to iterate over its RCU hash table to find blocks to
reclaim.  It tries to restart iteration if it gets EAGAIN on the
assumption that it will be usable again soon.

Combine the two and our shrinker can get stuck retrying iteration
indefinitely because it's shrinking on behalf of the hash table resizing
that is trying to allocate the next table before making iteration work
again.  We have to stop shrinking in this case so that the resizing
caller can proceed.

Signed-off-by: Zach Brown <zab@versity.com>
2023-06-15 14:45:26 -07:00
Zach Brown
74c5fe1115 Add get-referring-entries test
Signed-off-by: Zach Brown <zab@versity.com>
2023-06-14 14:12:10 -07:00
Zach Brown
2279e9657f Add get_referring_entries scoutfs command
Add a cli command for the get_referring_entries ioctl.

Signed-off-by: Zach Brown <zab@versity.com>
2023-06-14 14:12:10 -07:00
Zach Brown
707752a7bf Add get_referring_entries ioctl
Add an ioctl that gives the callers all entries that refer to an inode.
It's like a backwards readdir.  It's a light bit of translation between
the internal _add_next_linkrefs() list of entries and the ioctl
interface of a buffer of entry structs.

Signed-off-by: Zach Brown <zab@versity.com>
2023-06-14 14:12:10 -07:00
Zach Brown
0316c22026 Extend scoutfs_dir_add_next_linkrefs
Extend scoutfs_dir_add_next_linkref() to be able to return multiple
backrefs under the lock for each call and have it take an argument to
limit the number of backrefs that can be added and returned.

Its return code changes a bit in that it returns 1 on success instead of
0 so we have to be a little careful with callers who were expecting 0.
It still returns -ENOENT when no entries are found.

We break up its tracepoint into one that records each entry added and
one that records the result of each call.

This will be used by an ioctl to give callers just the entries that
point to an inode instead of assembling full paths from the root.

Signed-off-by: Zach Brown <zab@versity.com>
2023-06-14 14:12:10 -07:00
Zach Brown
5a1e5639c2 Merge pull request #124 from versity/zab/fix_quo_hb_mount_option
Zab/fix quo hb mount option
2023-06-07 10:50:32 -07:00
Zach Brown
950963375b Update quorum heartbeat test for mount option
Update the quorum_heartbeat_timeout_ms test to also test the mount
option, not just updating the timeout via sysfs.  This takes some
reworking as we have to avoid the active leader/server when setting the
timeout via the mount option.  We also allow for a bit more slack around
comparing kernel sleeps and userspace wall clocks.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-23 09:57:13 -07:00
Zach Brown
e52435b993 Add t_mount_opt
Add a test helper that mounts with a mount option.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-22 16:30:01 -07:00
Zach Brown
2b72c57cb0 Fix crash in quorum_heartbeat_timeout_ms parsing
Mount option parsing runs early enough that the rest of the option
read/write serialization infrastructure isn't set up yet.  The
quorum_heartbeat_timeout_ms mount option tried to use a helper that
updated the stored option but it wasn't initialized yet so it crashed.

The helper was really only to have the option validity test in one
place.  It's reworked to only verify the option and the actual setting
is left to the callers.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-22 16:29:56 -07:00
Zach Brown
9c67b2a42d Merge pull request #122 from versity/zab/v1.13
v1.13 Release
2023-05-19 11:38:48 -07:00
Zach Brown
0b38aeb5a4 v1.13 Release
Finish the release notes for the 1.13 release.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-19 10:38:40 -07:00
Zach Brown
2daf873983 Merge pull request #121 from versity/zab/heartbeat_fencing_tweaks
Zab/heartbeat fencing tweaks
2023-05-18 17:10:40 -07:00
Zach Brown
904c5dce90 Filter forced unmount transaction commit error
Add a transaction commit error message to the set of errors we ignore
when triggering forced unmount.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-18 15:50:34 -07:00
Zach Brown
57c6d78df8 Add test of quorum heartbeat timeout setting
Signed-off-by: Zach Brown <zab@versity.com>
2023-05-18 15:50:33 -07:00
Zach Brown
74e9d0f764 Silence test sysfs option failure
If setting a sysfs option fails the bash write error is output.  It
contains the script line number, which can change over time, leading to
mismatched golden output failures if we used the output as an expected
indication of failure.  Callers should test its rc and output
accordingly if they want the failure logged and compared.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-18 11:15:28 -07:00
Zach Brown
98eb0eb649 Add t_quorum_nrs test helper
Add a quick function that outputs the fs numbers of the quorum mounts.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-18 11:15:28 -07:00
Zach Brown
15de0c21c1 Have quorum drop messages on force unmount
Forced unmount is supposed to isolate the mount from the world.  The
net.c TCP messaging returns errors when sending during forced unmount.
The quorum code has its own UDP messaging and wasn't taking forced
unmount into account.

This led to quorum still being able to send resignation messages to
other quorum peers during forced unmount, making it hard to test
heartbeat timeouts with forced unmount.

The quorum messaging is already unreliable so we can easily make it drop
messages during forced unmount.  Now forced unmount more fully isolates
the quorum code and it becomes easier to test.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-18 10:01:19 -07:00
Zach Brown
7b65767803 Track and log quorum heartbeat delays
Add tracking and reporting of delays in sending or receiving quorum
heartbeat messages.  We measure the time between back to back sends or
receives of heartbeat messages.  We record these delays truncated down
to second granularity in the quorum sysfs status file.  We log messages
to the console for each longest measured delay up to the maximum
configurable heartbeat timeout.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-17 14:44:27 -07:00
Zach Brown
46640e4ff9 Add counter for quorum heartbeat send failures
Add a counter which tracks the number of heartbeat message send attempts
which fail.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-17 14:44:27 -07:00
Zach Brown
912906f050 Make quorum heartbeat timeout tunable
Add mount and sysfs options for changing the quorum heartbeat timeout.
This allows setting a longer delay in taking over for failed hosts that
has a greater chance of surviving temporary non-fatal delays.

We also double the existing default timeout to 10s which is still
reasonably responsive.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-17 14:44:27 -07:00
Zach Brown
ec02cf442b Use lower latency allocation in quorum socket
The quorum udp socket allocation still allowed starting io which can
trigger longer latencies trying to free memory.  We change the flags to
prefer dipping into emergency pools and then failing rather than
blocking trying to satisfy an allocation.  We'd much rather have a given
heartbeat attempt fail and have the opportunity to succeed at the next
interval rather than running the risk of blocking across multiple
intervals.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-17 14:44:27 -07:00
Zach Brown
0e9cd1eea5 Use specific work queue for quorum work
The quorum work was using the system workq.  While that's mostly fine,
we can create a dedicated workqueue with the specific flags that we
need.  The quorum work needs to run promptly to avoid fencing so we set
it to high priority.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-17 14:44:27 -07:00
Zach Brown
e18ea24561 Move quorum recv that sets timeout before check
In the quorum work loop some message receive actions extend the timeout
after the timeout expiration is checked.  This is usually fine when the
work runs soon after the messages are received and before the timeout
expires.  But under load the work might not schedule until long after
both the message has been received and the timeout has expired.

If the message was a heartbeat message then the wakeup delay would be
mistaken for lack of activity on the server and it would try to take
over for an otherwise active server.

This moves the extension of the heartbeat on message receive to before
the timeout is checked.  In our case of a delayed heartbeat message it
would still find it in the recv queue and extend the timeout, avoiding
fencing an active server.

Signed-off-by: Zach Brown <zab@versity.com>
2023-05-17 09:56:53 -07:00
Zach Brown
723309ff75 Merge pull request #120 from versity/zab/v1.12
v1.12 Release
2023-04-17 15:33:36 -07:00
Zach Brown
9bfad7d324 v1.12 Release
Finish the release notes for the 1.12 release.

Signed-off-by: Zach Brown <zab@versity.com>
2023-04-17 14:30:08 -07:00
Zach Brown
448e0abacb Merge pull request #118 from versity/zab/prepare_empty_data_dev
Zab/prepare empty data dev
2023-04-17 14:20:29 -07:00
Zach Brown
2a6d827e7a Add test for changing devices
Add a relatively small initial test for swapping devices around.

Signed-off-by: Zach Brown <zab@versity.com>
2023-04-17 12:47:50 -07:00
Zach Brown
e7bd1b45dc Add prepare-empty-data-device scoutfs command
Add a command for writing a super block to a new data device after
reading the metadata device to ensure that there's no existing
data on the old data device.

Signed-off-by: Zach Brown <zab@versity.com>
2023-04-17 12:47:50 -07:00
Zach Brown
6ded240089 Add t_rc test execution helper function
Add a quick wrapper to run commands whose output is saved while only
echoing their return code.

Signed-off-by: Zach Brown <zab@versity.com>
2023-04-17 12:47:50 -07:00
Zach Brown
99a20bc383 Put scratch mount point in test tmp dirs
Some tests had grown a bad pattern of making a mount point for the
scratch mount in the root /mnt directory.  Change them to use a mount
point in their test's temp directory outside the testing fs.

Signed-off-by: Zach Brown <zab@versity.com>
2023-04-17 12:47:50 -07:00
Zach Brown
18903ce500 Alphabetize command listing in scoutfs man page
List the scoutfs utility commands in the man page in alphabetical order.

Signed-off-by: Zach Brown <zab@versity.com>
2023-04-17 12:47:50 -07:00
Zach Brown
b76e22ffcf Refactor user util functions for device size
Split the existing device_size() into get_device_size() and
limit_device_size().  An upcoming command wants to get the device size
without applying limiting policy.

Signed-off-by: Zach Brown <zab@versity.com>
2023-04-17 12:47:50 -07:00
Zach Brown
d6863d6832 Merge pull request #119 from versity/zab/inode_nsec
Set sb->s_time_gran to support nsecs
2023-04-17 12:39:13 -07:00
Zach Brown
bb01a3990f Set sb->s_time_gran to support nsecs
We missed initializing sb->s_time_gran which controls how some parts of
the kernel truncate the granularity of nsec in timespec.  Some paths
don't use it at all so time would be maintained at full precision.  But
other paths, particularly setattr_copy() from userspace and
notify_change() from the kernel use it to truncate as times are set.

Setting s_time_gran to 1 maintains full nsec precision.
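
The fix amounts to one assignment while filling the super block (a
sketch of the one-liner, not the full fill_super path):

    /* keep full nanosecond granularity for inode timestamps */
    sb->s_time_gran = 1;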

Signed-off-by: Zach Brown <zab@versity.com>
2023-03-24 10:50:34 -07:00
Zach Brown
409631ceb1 Merge pull request #117 from versity/zab/rename_into_root
Zab/rename into root
2023-03-13 09:28:57 -07:00
Zach Brown
f1264c7e47 Add test to rename into root directory
The ancestor tests in rename were preventing renaming into the root
directory.

Signed-off-by: Zach Brown <zab@versity.com>
2023-03-08 11:00:59 -08:00
Zach Brown
a61b8d9961 Fix renaming into root directory
The VFS performs a lot of checks on renames before calling the fs
method.  We acquire locks and refresh inodes in the rename method so we
have to duplicate a lot of the vfs checks.

One of the checks involves loops with ancestors and subdirectories.  We
missed the case where the root directory is the destination and doesn't
have any parent directories.  The backref walker it calls returns
-ENOENT instead of 0 with an empty set of parents and that error bubbled
up to rename.

The fix is to notice when we're asking for ancestors of the one
directory that can't have ancestors and short circuit the test.

Signed-off-by: Zach Brown <zab@versity.com>
2023-03-08 11:00:59 -08:00
Zach Brown
eac57a1f7a Merge pull request #116 from versity/zab/v1.11
v1.11 Release
2023-02-02 12:02:45 -08:00
Zach Brown
5512d5c03e v1.11 Release
Finish the release notes for the 1.11 release.

Signed-off-by: Zach Brown <zab@versity.com>
2023-02-02 11:00:38 -08:00
Zach Brown
8cf7be4651 Merge pull request #115 from versity/zab/utils_flush
Zab/utils flush
2023-02-02 10:25:12 -08:00
Zach Brown
3363b4fb79 Flush device caches in buffered util cmds
Add calls to our new device cache flushing helper in commands that use
buffered reads.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-18 10:52:02 -08:00
Zach Brown
ddb5cce2a5 Add quick utils flush_device helper
Add a quick helper that just calls cache flushing ioctls on different
kinds of files.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-18 10:27:47 -08:00
Zach Brown
1b0e9c45f4 Merge pull request #114 from versity/zab/commit_lt_dirty
Allow replaying srch file rotation
2023-01-17 16:07:13 -08:00
Zach Brown
2e2ccb6f61 Allow replaying srch file rotation
When a client no longer needs to append to a srch file, for whatever
reason, we move the reference from the log_trees item into a specific
srch file btree item in the server's srch file tracking btree.

Zeroing the log_trees item and inserting the server's btree item are
done in a server commit and should be written atomically.

But commit_log_trees had an error handling case that could leave the
newly inserted item dirty in memory without zeroing the srch file
reference in the existing log_trees item.  Future attempts to rotate the
file reference, perhaps by retrying the commit or by reclaiming the
client's rid, would get EEXIST and fail.

This fixes the error handling path to ensure that we'll keep the dirty
srch file btree and log_trees item in sync.  The desynced items can
still exist in the world so we'll tolerate getting EEXIST on insertion.
After enough time has passed, or if repair zeroed the duplicate
reference, we could remove this special case from insertion.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-17 14:33:27 -08:00
Zach Brown
01c8bba56d Merge pull request #109 from versity/zab/server_statfs_stable_blocks
Zab/server statfs stable blocks
2023-01-12 09:58:48 -08:00
Zach Brown
17cb1fe84b Merge pull request #110 from versity/zab/partial_alloc_move
Allow partial extent motion
2023-01-12 09:58:12 -08:00
Zach Brown
78ae87031b Merge pull request #112 from versity/zab/tmpfile_umask
Zab/tmpfile umask
2023-01-12 09:57:56 -08:00
Zach Brown
bf93ea73c4 Merge pull request #113 from versity/zab/move_blocks_loop_fixes
Fix move_blocks loop exit conditions
2023-01-12 09:56:25 -08:00
Zach Brown
a23e7478a0 Fix move_blocks loop exit conditions
The move_blocks ioctl intends to only move extents whose bytes fall
inside i_size.  This is easy except for a final extent that straddles an
i_size that isn't aligned to 4K data blocks.

The code that either checked for an extent being entirely past i_size or
for limiting the number of blocks to move by i_size clumsily compared
i_size offsets in bytes with extent counts in 4KB blocks.  In just the
right circumstances, probably with the help of a byte length to move
that is much larger than i_size, the length calculation could result in
trying to move 0 blocks.  Once this hit the loop would keep finding that
extent and calculating 0 blocks to move and would be stuck.

We fix this by clamping the count of blocks in extents to move in terms
of byte offsets at the start of the loop.  This gets rid of the extra
size checks and byte offset use in the loop.  We also add a sanity check
to make sure that we can't get stuck if, say, corruption resulted in an
otherwise impossible zero length extent.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-10 09:34:52 -08:00
Zach Brown
9ba2ee5c88 Add testing of O_TMPFILE umask
There were kernels that didn't apply the current umask to inode modes
created with O_TMPFILE without acls.  Let's have a test running to make
sure that we're not surprised if we come across one.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-09 14:49:23 -08:00
Zach Brown
fe33a492c2 Make o_tmpfile test more generic
The o_tmpfile test only did one thing, clean it up a bit so we can add
more tests to the file.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-09 10:14:40 -08:00
Zach Brown
77c0ff89fb Rename stage-tmpfile to o_tmpfile
We had a one-off test that was overly specific to staging from tmpfile.
This renames it to a more generic test where we can add more tests of
o_tmpfile in general.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-09 10:07:15 -08:00
Zach Brown
7c2d83e2f8 Remove saved super block in scoutfs_sb_info
Now that we've removed its users we can remove the global saved copy of
the super block from scoutfs_sb_info.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-06 11:15:45 -08:00
Zach Brown
40aa47c888 Have the server keep a private dirty super block
As the server does its work its transactions modify a dirty super block
in memory.  This used the global super block in scoutfs_sb_info which
was visible to everything, including the client.  Move the dirty super
block over to the private server info so that only the server can see
it.

This is mostly boring storage motion but we do change that the quorum
code hands the server a static copy of the quorum config to use as it
starts up before it reads the most recent super block.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-06 11:15:45 -08:00
Zach Brown
c1bd7bcce5 Allow partial extent motion
Refilling a client's data_avail is the only alloc_move call that doesn't
try to limit the number of blocks that it dirties.  If it doesn't find
sufficiently large extents it can exhaust the server's alloc budget
without hitting the target.  It'll try to dirty blocks and return a hard
error.

This changes that behaviour to allow returning 0 if it moved any
extents.  Other callers can deal with partial progress as they already
limit the blocks they dirty.  This will also return ENOSPC if it hadn't
moved anything just as the current code would.

The result is that data fill might not hit the target.  It
might take multiple commits to fill the data_avail btree.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-15 20:47:41 -08:00
Zach Brown
7720222588 Have statfs use unlocked stable roots
The server's statfs request handler was intending to lock dirty
structures as they were walked to get sums used for statfs fields.
Other callers walk stable structures, though, so the summation calls had
grown iteration over other structures that the server didn't know it had
to lock.

This meant that the server was walking unlocked dirty structures as they
were being modified.  The races are very tight, but it can result in
request handling errors that shut down connections and IO errors from
trying to read inconsistent refs as they were modified by the locked
writer.

We've built up infrastructure so the server can now walk stable
structures just like the other callers.  It will no longer wander into
dirty blocks so it doesn't need to lock them and it will retry if its
walk of stale data crosses a broken reference.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
fff07ce19c Use stale block read retrying helper
Transition from manual checking for persistent ESTALE to the shared
helper that we just added.  This should not change behavior.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
464de56d28 Add stale block read retrying helper
Many readers had little implementations of the logic to decide to retry
stale reads with different refs or decide that they're persistent and
return hard errors.  Let's move that into a small helper.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
342c206550 Have scoutfs_forest_inode_count return stale reads
scoutfs_forest_inode_count() assumed it was called with stable refs and
would always translate ESTALE to EIO.  Change it so that it passes
ESTALE to the caller who is responsible for handling it.

The server will use this to retry reading from stable supers that it's
storing in memory.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
fe4734d019 Save a full stable super in the server
The server has a mechanism for tracking the last stable roots used by
network rpcs.  We expand it a bit to include the entire super so
that we can add users in the server which want the last full stable
super.  We can still use the stable super to give out the stable
roots.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
b1a43bb312 Make quorum config use more precise
The quorum code was using the copy of the super block in the sb info for
its config.  With that going away we make different users more carefully
reference the config.  The quorum agent has a copy that it reads on
setup, the client rarely reads a copy when trying to connect, and the
server uses its super.

This is about data access isolation and should have no functional effect
other than to cause more super reads.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
929703213f Add fsid sbi field
A few paths throughout the code get the fsid for the current mount by
using the copy of the super block that we store in the scoutfs_sb_info
for the mount.  We'd like to remove the super block from the sbi and
it's cleaner to have a specific constant field for the fsid of the mount
which will not change.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
78279ffb4a Merge pull request #108 from versity/zab/v1.10
v1.10 Release
2022-12-07 13:33:45 -08:00
149 changed files with 13505 additions and 2077 deletions

View File

@@ -1,6 +1,184 @@
Versity ScoutFS Release Notes
=============================
---
v1.23
\
*Dec 11, 2024*
Add support for kernels in the RHEL 9.5 minor release.
---
v1.22
\
*Nov 1, 2024*
Add support for building against the RHEL9 family of kernels.
Fix failure of the setattr\_more ioctl() to set the attributes of a
zero-length file when restoring.
Fix support for POSIX ACLs in the RHEL8 and later family of kernels.
Fix a race condition in the lock server that could drop lock requests
under heavy load and cause cluster lock attempts to hang.
---
v1.21
\
*Jul 1, 2024*
This release adds features that rely on incompatible changes to the
file system structure. The process of advancing the format version
to enable these features is described in scoutfs(5).
Added the ".indx." extended attribute tag which can be used to determine
the sorting of files in a global index.
Added ScoutFS quotas which let rules define file size and count limits
in terms of ".totl." extended attribute totals.
Added the project ID file attribute which is inherited from parent
directories on creation. ScoutFS quota rules can reference project IDs.
Add a retention attribute for files which prevents modification once
enabled.
---
v1.20
\
*Apr 22, 2024*
Minor changes to packaging to better support "weak" module linking of
the kernel module, and to include git hashes in the built package. No
changes in runtime behaviour.
---
v1.19
\
*Jan 30, 2024*
Added the log\_merge\_wait\_timeout\_ms mount option to set the timeout
for creating log merge operations. The previous timeout, now the
default, was too short for some systems and was resulting in consistent
timeouts which created an excessive number of log trees waiting to be
merged.
Improved performance of many in-mount server operations when there are a
large number of log trees waiting to be merged.
---
v1.18
\
*Nov 7, 2023*
Fixed a bug where background srch file compaction could stop making
forward progress if a partial compaction operation was committed at a
specific byte offset in a block. This would cause srch file searches to
be progressively more expensive over time. Once this fix is running
background compaction will resume, bringing the cost of searches back
down.
---
v1.17
\
*Oct 23, 2023*
Add support for EL8 generation kernels.
---
v1.16
\
*Oct 4, 2023*
Fix an issue where the server could hang on startup if its persistent
allocator structures were left in a specific degraded state by the
previously active server.
---
v1.15
\
*Jul 17, 2023*
Process log btree merge splicing in multiple commits. This prevents a
rare case where pending log merge completions contain more work than can
be done in a single server commit, causing the server to trigger an
assert shortly after starting.
Fix spurious EINVAL from data writes when data\_prealloc\_contig\_only was
set to 0.
---
v1.14
\
*Jun 29, 2023*
Add get\_referring\_entries ioctl for getting directory entries that
refer to an inode.
Fix excessive CPU use in the move\_blocks interface when moving a large
number of extents.
Reduce fragmented data allocation when contig\_only prealloc is not in
use by more consistently allocating multi-block extents within each
aligned prealloc region.
Avoid rare deadlock in metadata block cache reclaim under both heavy
load and memory pressure.
Fix crash when using quorum\_heartbeat\_timeout\_ms mount option.
---
v1.13
\
*May 19, 2023*
Add the quorum\_heartbeat\_timeout\_ms mount option to set the quorum
heartbeat timeout.
Change some task prioritization and allocation behavior of the quorum
agent to help reduce delays in sending and receiving heartbeat messages.
---
v1.12
\
*Apr 17, 2023*
Add the prepare-empty-data-device scoutfs command. A data device can be
unused when no files have data blocks, perhaps because they're archived
and offline. In this case the data device can be swapped out for
another device without changes to the metadata device.
Fix an oversight which limited inode timestamps to second granularity
for some operations. All operations now record timestamps with full
nanosecond precision.
Fix spurious ENOENT failures when renaming from other directories into
the root directory.
---
v1.11
\
*Feb 2, 2023*
Fixed a free extent processing error that could prevent mount from
proceeding when free data extents were sufficiently fragmented. It now
properly handles very fragmented free extent maps.
Fixed a statfs server processing race that could return spurious errors
and shut down the server. With the race closed statfs processing is
reliable.
Fixed a rare livelock in the move\_blocks ioctl. With the right
relationship between ioctl arguments and eventual file extent items the
core loop in the move\_blocks ioctl could get stuck looping on an extent
item and never return. The loop exit conditions were fixed and the loop
will always advance through all extents.
Changed the 'print' scoutfs commands to flush the block cache for the
devices. It was inconvenient to expect cache flushing to be a separate
step to ensure consistency with remote node writes.
---
v1.10
\

View File

@@ -12,17 +12,22 @@ else
SP = @:
endif
SCOUTFS_GIT_DESCRIBE := \
SCOUTFS_GIT_DESCRIBE ?= \
$(shell git describe --all --abbrev=6 --long 2>/dev/null || \
echo no-git)
ESCAPED_GIT_DESCRIBE := \
$(shell echo $(SCOUTFS_GIT_DESCRIBE) |sed -e 's/\//\\\//g')
RPM_GITHASH ?= $(shell git rev-parse --short HEAD)
SCOUTFS_ARGS := SCOUTFS_GIT_DESCRIBE=$(SCOUTFS_GIT_DESCRIBE) \
RPM_GITHASH=$(RPM_GITHASH) \
CONFIG_SCOUTFS_FS=m -C $(SK_KSRC) M=$(CURDIR)/src \
EXTRA_CFLAGS="-Werror"
# - We use the git describe from tags to set up the RPM versioning
RPM_VERSION := $(shell git describe --long --tags | awk -F '-' '{gsub(/^v/,""); print $$1}')
RPM_GITHASH := $(shell git rev-parse --short HEAD)
TARFILE = scoutfs-kmod-$(RPM_VERSION).tar
@@ -31,17 +36,18 @@ TARFILE = scoutfs-kmod-$(RPM_VERSION).tar
all: module
module:
make $(SCOUTFS_ARGS)
$(SP) make C=2 CF="-D__CHECK_ENDIAN__" $(SCOUTFS_ARGS)
$(MAKE) $(SCOUTFS_ARGS)
$(SP) $(MAKE) C=2 CF="-D__CHECK_ENDIAN__" $(SCOUTFS_ARGS)
modules_install:
make $(SCOUTFS_ARGS) modules_install
$(MAKE) $(SCOUTFS_ARGS) modules_install
%.spec: %.spec.in .FORCE
sed -e 's/@@VERSION@@/$(RPM_VERSION)/g' \
-e 's/@@GITHASH@@/$(RPM_GITHASH)/g' < $< > $@+
-e 's/@@GITHASH@@/$(RPM_GITHASH)/g' \
-e 's/@@GITDESCRIBE@@/$(ESCAPED_GIT_DESCRIBE)/g' < $< > $@+
mv $@+ $@
@@ -50,4 +56,4 @@ dist: scoutfs-kmod.spec
@ tar rf $(TARFILE) --transform="s@\(.*\)@scoutfs-kmod-$(RPM_VERSION)/\1@" scoutfs-kmod.spec
clean:
make $(SCOUTFS_ARGS) clean
$(MAKE) $(SCOUTFS_ARGS) clean

View File

@@ -1,18 +1,26 @@
%define kmod_name scoutfs
%define kmod_version @@VERSION@@
%define kmod_git_hash @@GITHASH@@
%define kmod_git_describe @@GITDESCRIBE@@
%define pkg_date %(date +%%Y%%m%%d)
# take kernel version or default to uname -r
%{!?kversion: %global kversion %(uname -r)}
%global kernel_version %{kversion}
%if 0%{?el7}
%global kernel_source() /usr/src/kernels/%{kernel_version}.$(arch)
%global kernel_release() %{kversion}
%else
%global kernel_source() /usr/src/kernels/%{kernel_version}
%endif
%{!?_release: %global _release 0.%{pkg_date}git%{kmod_git_hash}}
%if 0%{?el7}
Name: %{kmod_name}
%else
Name: kmod-%{kmod_name}
%endif
Summary: %{kmod_name} kernel module
Version: %{kmod_version}
Release: %{_release}%{?dist}
@@ -20,24 +28,42 @@ License: GPLv2
Group: System/Kernel
URL: http://scoutfs.org/
%if 0%{?el7}
BuildRequires: %{kernel_module_package_buildreqs}
BuildRequires: git
%else
BuildRequires: elfutils-libelf-devel
%endif
BuildRequires: kernel-devel-uname-r = %{kernel_version}
BuildRequires: git
BuildRequires: module-init-tools
ExclusiveArch: x86_64
Source: %{kmod_name}-kmod-%{kmod_version}.tar
%if 0%{?el7}
# Build only for standard kernel variant(s); for debug packages, append "debug"
# after "default" (separated by space)
%kernel_module_package default
%endif
# Disable the building of the debug package(s).
%define debug_package %{nil}
%global install_mod_dir extra/%{kmod_name}
%global install_mod_dir extra/%{name}
%if ! 0%{?el7}
%global flavors_to_build x86_64
%endif
# el9 sanity: make sure we lock to the minor release we built for and block upgrades
%{lua:
if string.match(rpm.expand("%{dist}"), "%.el9") then
rpm.define("el9 1")
end
}
%if 0%{?el9}
%define release_major_minor 9.%{lua: print(rpm.expand("%{dist}"):match("%.el9_(%d)"))}
Requires: system-release = %{release_major_minor}
%endif
%description
%{kmod_name} - kernel module
@@ -57,7 +83,7 @@ echo "Building for kernel: %{kernel_version} flavors: '%{flavors_to_build}'"
for flavor in %flavors_to_build; do
rm -rf obj/$flavor
cp -r source obj/$flavor
make SK_KSRC=%{kernel_source $flavor} -C obj/$flavor module
make RPM_GITHASH=%{kmod_git_hash} SCOUTFS_GIT_DESCRIBE=%{kmod_git_describe} SK_KSRC=%{kernel_source $flavor} -C obj/$flavor module
done
%install
@@ -66,7 +92,7 @@ export INSTALL_MOD_DIR=%{install_mod_dir}
mkdir -p %{install_mod_dir}
for flavor in %{flavors_to_build}; do
export KSRC=%{kernel_source $flavor}
export KVERSION=%{kernel_release $KSRC}
export KVERSION=%{kversion}
install -d $INSTALL_MOD_PATH/lib/modules/$KVERSION/%{install_mod_dir}
cp $PWD/obj/$flavor/src/scoutfs.ko $INSTALL_MOD_PATH/lib/modules/$KVERSION/%{install_mod_dir}/
done
@@ -74,7 +100,23 @@ done
# mark modules executable so that strip-to-file can strip them
find %{buildroot} -type f -name \*.ko -exec %{__chmod} u+x \{\} \;
%if ! 0%{?el7}
%files
/lib/modules
%post
echo /lib/modules/%{kversion}/%{install_mod_dir}/scoutfs.ko | weak-modules --add-modules --no-initramfs
depmod -a
%endif
%clean
rm -rf %{buildroot}
%preun
# stash our modules for postun cleanup
SCOUTFS_RPM_NAME=$(rpm -q %{name} | grep "%{version}-%{release}")
rpm -ql $SCOUTFS_RPM_NAME | grep '\.ko$' > /var/run/%{name}-modules-%{version}-%{release} || true
%postun
cat /var/run/%{name}-modules-%{version}-%{release} | weak-modules --remove-modules --no-initramfs
rm /var/run/%{name}-modules-%{version}-%{release} || true

View File

@@ -9,6 +9,7 @@ CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
scoutfs-y += \
acl.o \
attr_x.o \
avl.o \
alloc.o \
block.o \
@@ -25,6 +26,7 @@ scoutfs-y += \
inode.o \
ioctl.o \
item.o \
kernelcompat.o \
lock.o \
lock_server.o \
msg.o \
@@ -33,6 +35,7 @@ scoutfs-y += \
options.o \
per_task.o \
quorum.o \
quota.o \
recov.o \
scoutfs_trace.o \
server.o \
@@ -41,10 +44,12 @@ scoutfs-y += \
srch.o \
super.o \
sysfs.o \
totl.o \
trans.o \
triggers.o \
tseq.o \
volopt.o \
wkic.o \
xattr.o
#

View File

@@ -26,6 +26,16 @@ ifneq (,$(shell grep 'dir_emit_dots' include/linux/fs.h))
ccflags-y += -DKC_DIR_EMIT_DOTS
endif
#
# v3.18-rc2-19-gb5ae6b15bd73
#
# Folds d_materialise_unique into d_splice_alias. Note reversal
# of arguments (Also note Documentation/filesystems/porting.rst)
#
ifneq (,$(shell grep 'd_materialise_unique' include/linux/dcache.h))
ccflags-y += -DKC_D_MATERIALISE_UNIQUE=1
endif
#
# RHEL extended the fop struct so to use it we have to set
# a flag to indicate that the struct is large enough and
@@ -40,6 +50,384 @@ endif
#
# Added user_ns argument to posix_acl_valid
#
ifneq (,$(shell grep 'posix_acl_valid.*user_ns,' include/linux/posix_acl.h))
ifneq (,$(shell grep 'posix_acl_valid.*user_namespace' include/linux/posix_acl.h))
ccflags-y += -DKC_POSIX_ACL_VALID_USER_NS
endif
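These grep checks only define KC_* compile flags; the module source then branches on them. For illustration, a minimal C sketch of how a flag like KC_POSIX_ACL_VALID_USER_NS would typically be consumed; the wrapper name kc_posix_acl_valid() is an assumption, not code from this tree.
/*
 * Illustrative sketch only: a compat wrapper keyed off the flag set by
 * the Makefile check above.  The kc_ name is hypothetical.
 */
#include <linux/posix_acl.h>
#include <linux/user_namespace.h>

static inline int kc_posix_acl_valid(struct user_namespace *user_ns,
				     const struct posix_acl *acl)
{
#ifdef KC_POSIX_ACL_VALID_USER_NS
	/* newer kernels take the user namespace argument */
	return posix_acl_valid(user_ns, acl);
#else
	/* older kernels only take the acl */
	return posix_acl_valid(acl);
#endif
}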
#
# v5.3-12296-g6d2052d188d9
#
# The RBCOMPUTE function is now passed an extra flag, and should return a bool
# to indicate whether the propagated callback should stop or not.
#
ifneq (,$(shell grep 'static inline bool RBNAME.*_compute_max' include/linux/rbtree_augmented.h))
ccflags-y += -DKC_RB_TREE_AUGMENTED_COMPUTE_MAX
endif
#
# v3.13-25-g37bc15392a23
#
# Renames posix_acl_create to __posix_acl_create and provides some
# new interfaces for creating ACLs
#
ifneq (,$(shell grep '__posix_acl_create' include/linux/posix_acl.h))
ccflags-y += -DKC___POSIX_ACL_CREATE
endif
#
# v4.8-rc1-29-g31051c85b5e2
#
# inode_change_ok() removed - replace with setattr_prepare()
# v5.11-rc4-7-g2f221d6f7b88 removes extern attribute
#
ifneq (,$(shell grep 'int setattr_prepare' include/linux/fs.h))
ccflags-y += -DKC_SETATTR_PREPARE
endif
#
# v4.15-rc3-4-gae5e165d855d
#
# linux/iversion.h needs to manually be included for code that
# manipulates this field.
#
ifneq (,$(shell grep -s 'define _LINUX_IVERSION_H' include/linux/iversion.h))
ccflags-y += -DKC_NEED_LINUX_IVERSION_H=1
endif
# v4.11-12447-g104b4e5139fe
#
# Renamed __percpu_counter_add to percpu_counter_add_batch to clarify
# that the __ wasn't less safe, just took an extra parameter.
#
ifneq (,$(shell grep 'percpu_counter_add_batch' include/linux/percpu_counter.h))
ccflags-y += -DKC_PERCPU_COUNTER_ADD_BATCH
endif
#
# v4.11-4550-g7dea19f9ee63
#
# Introduced memalloc_nofs_{save,restore} preferred instead of _noio_.
#
ifneq (,$(shell grep 'memalloc_nofs_save' include/linux/sched/mm.h))
ccflags-y += -DKC_MEMALLOC_NOFS_SAVE
endif
#
# v4.7-12414-g1eff9d322a44
#
# Renamed bi_rw to bi_opf to force old code to catch up. We use it as a
# single switch between old and new bio structures.
#
ifneq (,$(shell grep 'bi_opf' include/linux/blk_types.h))
ccflags-y += -DKC_BIO_BI_OPF
endif
#
# v4.12-rc2-201-g4e4cbee93d56
#
# Moves to bi_status BLK_STS_ API instead of having a mix of error
# end_io args or bi_error.
#
ifneq (,$(shell grep 'bi_status' include/linux/blk_types.h))
ccflags-y += -DKC_BIO_BI_STATUS
endif
#
# v3.11-8765-ga0b02131c5fc
#
# Remove the old ->shrink() API, ->{scan,count}_objects is preferred.
#
ifneq (,$(shell grep '(*shrink)' include/linux/shrinker.h))
ccflags-y += -DKC_SHRINKER_SHRINK
endif
#
# v3.19-4777-g6bec00352861
#
# backing_dev_info is removed from address_space. Instead we need to use
# inode_to_bdi() inline from <backing-dev.h>.
#
ifneq (,$(shell grep 'struct backing_dev_info.*backing_dev_info' include/linux/fs.h))
ccflags-y += -DKC_LINUX_BACKING_DEV_INFO=1
endif
#
# v4.3-9290-ge409de992e3e
#
# xattr handlers are now passed a struct that contains `flags`
#
ifneq (,$(shell grep 'int...get..const struct xattr_handler.*struct dentry.*dentry,' include/linux/xattr.h))
ccflags-y += -DKC_XATTR_STRUCT_XATTR_HANDLER=1
endif
#
# v4.16-rc1-1-g9b2c45d479d0
#
# kernel_getsockname() and kernel_getpeername dropped addrlen arg
#
ifneq (,$(shell grep 'kernel_getsockname.*,$$' include/linux/net.h))
ccflags-y += -DKC_KERNEL_GETSOCKNAME_ADDRLEN=1
endif
#
# v4.1-rc1-410-geeb1bd5c40ed
#
# Adds a struct net parameter to sock_create_kern
#
ifneq (,$(shell grep 'sock_create_kern.*struct net' include/linux/net.h))
ccflags-y += -DKC_SOCK_CREATE_KERN_NET=1
endif
#
# v3.18-rc6-1619-gc0371da6047a
#
# iov_iter is now part of struct msghdr
#
ifneq (,$(shell grep 'struct iov_iter.*msg_iter' include/linux/socket.h))
ccflags-y += -DKC_MSGHDR_STRUCT_IOV_ITER=1
endif
#
# v4.17-rc6-7-g95582b008388
#
# Kernel has current_time(inode) to uniformly retrieve timespec in the right unit
#
ifneq (,$(shell grep 'struct timespec64 current_time' include/linux/fs.h))
ccflags-y += -DKC_CURRENT_TIME_INODE=1
endif
#
# v4.9-12228-g530e9b76ae8f
#
# register_cpu_notifier and family were all removed, to be
# replaced with cpuhp_* API calls.
#
ifneq (,$(shell grep 'define register_hotcpu_notifier' include/linux/cpu.h))
ccflags-y += -DKC_CPU_NOTIFIER
endif
#
# v3.14-rc8-130-gccad2365668f
#
# generic_file_buffered_write is removed, backport it
#
ifneq (,$(shell grep 'extern ssize_t generic_file_buffered_write' include/linux/fs.h))
ccflags-y += -DKC_GENERIC_FILE_BUFFERED_WRITE=1
endif
#
# v5.7-438-g8151b4c8bee4
#
# struct address_space_operations switches away from .readpages to .readahead
#
# RHEL has backported this feature all the way to RHEL8, as part of RHEL_KABI,
# which means we need to detect this very precisely
#
ifneq (,$(shell grep 'readahead.*struct readahead_control' include/linux/fs.h))
ccflags-y += -DKC_FILE_AOPS_READAHEAD
endif
#
# v4.0-rc7-1743-g8436318205b9
#
# .aio_read and .aio_write no longer exist. All reads and writes now use the
# .read_iter and .write_iter methods, or must implement .read and .write (which
# we don't).
#
ifneq (,$(shell grep 'ssize_t.*aio_read' include/linux/fs.h))
ccflags-y += -DKC_LINUX_HAVE_FOP_AIO_READ=1
endif
#
# rhel7 has a custom inode_operations_wrapper struct that is discarded
# entirely in favor of upstream structure since rhel8.
#
ifneq (,$(shell grep 'void.*follow_link.*struct dentry' include/linux/fs.h))
ccflags-y += -DKC_LINUX_HAVE_RHEL_IOPS_WRAPPER=1
endif
ifneq (,$(shell grep 'size_t.*ki_left;' include/linux/aio.h))
ccflags-y += -DKC_LINUX_AIO_KI_LEFT=1
endif
#
# v4.4-rc4-4-g98e9cb5711c6
#
# Introduces a new xattr_handler .name member that can be used to match the
# entire field, instead of just a prefix. For these kernels, we must use
# the new .name field instead.
ifneq (,$(shell grep 'static inline const char .xattr_prefix' include/linux/xattr.h))
ccflags-y += -DKC_XATTR_HANDLER_NAME=1
endif
#
# v5.19-rc4-96-g342a72a33407
#
# Adds `typedef __u32 __bitwise blk_opf_t` to aid flag checking
ifneq (,$(shell grep 'typedef __u32 __bitwise blk_opf_t' include/linux/blk_types.h))
ccflags-y += -DKC_HAVE_BLK_OPF_T=1
endif
#
# v5.12-rc6-9-g4f0f586bf0c8
#
# list_sort cmp function takes const list_head args
ifneq (,$(shell grep 'const struct list_head ., const struct list_head .' include/linux/list_sort.h))
ccflags-y += -DKC_LIST_CMP_CONST_ARG_LIST_HEAD
endif
# v5.7-523-g88dca4ca5a93
#
# The pgprot argument to vmalloc is always PAGE_KERNEL, so it is removed.
ifneq (,$(shell grep 'extern void .__vmalloc.unsigned long size, gfp_t gfp_mask, pgprot_t prot' include/linux/vmalloc.h))
ccflags-y += -DKC_VMALLOC_PGPROT_T
endif
# v6.2-rc1-18-g01beba7957a2
#
# fs: port inode_owner_or_capable() to mnt_idmap
ifneq (,$(shell grep 'bool inode_owner_or_capable.struct user_namespace .mnt_userns' include/linux/fs.h))
ccflags-y += -DKC_INODE_OWNER_OR_CAPABLE_USERNS
endif
#
# v5.11-rc4-5-g47291baa8ddf
#
# namei: make permission helpers idmapped mount aware
ifneq (,$(shell grep 'int inode_permission.struct user_namespace' include/linux/fs.h))
ccflags-y += -DKC_INODE_PERMISSION_USERNS
endif
#
# v5.11-rc4-24-g549c7297717c
#
# fs: make helpers idmap mount aware
# Enlarges the VFS API methods to include user namespace argument.
ifneq (,$(shell grep 'int ..mknod. .struct user_namespace' include/linux/fs.h))
ccflags-y += -DKC_VFS_METHOD_USER_NAMESPACE_ARG
endif
#
# v5.17-rc2-21-g07888c665b40
#
# Detect new style bio_alloc - pass bdev and opf.
ifneq (,$(shell grep 'struct bio .bio_alloc.struct block_device .bdev' include/linux/bio.h))
ccflags-y += -DKC_BIO_ALLOC_DEV_OPF_ARGS
endif
#
# v5.7-rc4-53-gcddf8a2c4a82
#
# fiemap_prep() replaces fiemap_check_flags()
ifneq (,$(shell grep -s 'int fiemap_prep.struct inode' include/linux/fiemap.h))
ccflags-y += -DKC_FIEMAP_PREP
endif
#
# v5.17-13043-g800ba29547e1
#
# generic_perform_write args use kiocb for passing filp and pos
ifneq (,$(shell grep 'ssize_t generic_perform_write.struct kiocb ., struct iov_iter' include/linux/fs.h))
ccflags-y += -DKC_GENERIC_PERFORM_WRITE_KIOCB_IOV_ITER
endif
#
# v5.7-rc6-2496-g76ee0785f42a
#
# net: add sock_set_sndtimeo
ifneq (,$(shell grep 'void sock_set_sndtimeo.struct sock' include/net/sock.h))
ccflags-y += -DKC_SOCK_SET_SNDTIMEO
endif
#
# v5.8-rc4-1931-gba423fdaa589
#
# setsockopt functions are now passed a sockptr_t value instead of char*
ifneq (,$(shell grep -s 'include .linux/sockptr.h.' include/linux/net.h))
ccflags-y += -DKC_SETSOCKOPT_SOCKPTR_T
endif
#
# v5.7-rc6-2507-g71c48eb81c9e
#
# Adds a bunch of low level TCP sock parameter functions that we want to use.
ifneq (,$(shell grep 'int tcp_sock_set_keepintvl' include/linux/tcp.h))
ccflags-y += -DKC_HAVE_TCP_SET_SOCKFN
endif
#
# v4.16-rc3-13-ga84d1169164b
#
# Fixes y2038 issues with struct timeval.
ifneq (,$(shell grep -s '^struct __kernel_old_timeval .' include/uapi/linux/time_types.h))
ccflags-y += -DKC_KERNEL_OLD_TIMEVAL_STRUCT
endif
#
# v5.19-rc4-52-ge33c267ab70d
#
# register_shrinker now requires a name, used for debug stats etc.
ifneq (,$(shell grep 'int __printf.*register_shrinker.struct shrinker .shrinker,' include/linux/shrinker.h))
ccflags-y += -DKC_SHRINKER_NAME
endif
#
# v5.18-rc5-246-gf132ab7d3ab0
#
# mpage_readpage() is now replaced with mpage_read_folio.
ifneq (,$(shell grep 'int mpage_read_folio.struct folio .folio' include/linux/mpage.h))
ccflags-y += -DKC_MPAGE_READ_FOLIO
endif
#
# v5.18-rc5-219-gb3992d1e2ebc
#
# block_write_begin() no longer is being passed aop_flags
ifneq (,$(shell grep -C1 'int block_write_begin' include/linux/buffer_head.h | tail -n 2 | grep 'unsigned flags'))
ccflags-y += -DKC_BLOCK_WRITE_BEGIN_AOP_FLAGS
endif
#
# v6.0-rc6-9-g863f144f12ad
#
# the .tmpfile() vfs method calling convention changed and now a struct
# file* is passed to this method instead of a dentry. The function should
# also open the created file and call finish_open_simple() before returning.
ifneq (,$(shell grep 'extern void d_tmpfile.struct dentry' include/linux/dcache.h))
ccflags-y += -DKC_D_TMPFILE_DENTRY
endif
#
# v6.4-rc2-201-g0733ad800291
#
# New blk_mode_t replaces abuse of fmode_t
ifneq (,$(shell grep 'typedef unsigned int __bitwise blk_mode_t' include/linux/blkdev.h))
ccflags-y += -DKC_HAVE_BLK_MODE_T
endif
#
# v6.4-rc2-186-g2736e8eeb0cc
#
# Reworks FMODE_EXCL kludge and instead modifies the blkdev_put() call to pass in
# the (exclusive) holder to implement FMODE_EXCL handling.
ifneq (,$(shell grep 'blkdev_put.struct block_device .bdev, void .holder' include/linux/blkdev.h))
ccflags-y += -DKC_BLKDEV_PUT_HOLDER_ARG
endif
#
# v6.4-rc4-163-g0d625446d0a4
#
# Entirely removes current->backing_dev_info to ultimately remove buffer_head
# completely at some point.
ifneq (,$(shell grep 'struct backing_dev_info.*backing_dev_info;' include/linux/sched.h))
ccflags-y += -DKC_CURRENT_BACKING_DEV_INFO
endif
#
# v6.8-rc1-4-gf3a608827d1f
#
# adds bdev_file_open_by_path() and later in v6.8-rc1-30-ge97d06a46526 removes bdev_open_by_path()
# which requires us to use the file method from now on.
ifneq (,$(shell grep 'struct file.*bdev_file_open_by_path.const char.*path' include/linux/blkdev.h))
ccflags-y += -DKC_BDEV_FILE_OPEN_BY_PATH
endif
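For illustration, a minimal sketch of how the KC_BDEV_FILE_OPEN_BY_PATH flag detected above could gate a compat wrapper. The function name, the holder argument, and kc_holder_ops are assumptions, and the pre-6.8 branch is left out for brevity; this is not the tree's actual kernelcompat code.
/*
 * Illustrative sketch only, assuming KC_BDEV_FILE_OPEN_BY_PATH from the
 * check above.  Names other than bdev_file_open_by_path()/file_bdev()
 * are hypothetical stand-ins.
 */
#include <linux/blkdev.h>
#include <linux/err.h>

static const struct blk_holder_ops kc_holder_ops = { };

static int kc_open_meta_bdev(const char *path, void *holder,
			     struct file **file_ret,
			     struct block_device **bdev_ret)
{
#ifdef KC_BDEV_FILE_OPEN_BY_PATH
	struct file *f;

	/* v6.8+: the opened block device is tracked through a struct file */
	f = bdev_file_open_by_path(path, BLK_OPEN_READ | BLK_OPEN_WRITE,
				   holder, &kc_holder_ops);
	if (IS_ERR(f))
		return PTR_ERR(f);

	*file_ret = f;
	*bdev_ret = file_bdev(f);
	return 0;
#else
	/* older kernels would use one of the earlier bdev open APIs here */
	return -EOPNOTSUPP;
#endif
}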

View File

@@ -69,12 +69,14 @@ struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct s
char *name;
int ret;
#ifndef KC___POSIX_ACL_CREATE
if (!IS_POSIXACL(inode))
return NULL;
acl = get_cached_acl(inode, type);
if (acl != ACL_NOT_CACHED)
return acl;
#endif
ret = acl_xattr_name_len(type, &name, NULL);
if (ret < 0)
@@ -112,8 +114,10 @@ struct posix_acl *scoutfs_get_acl(struct inode *inode, int type)
struct posix_acl *acl;
int ret;
#ifndef KC___POSIX_ACL_CREATE
if (!IS_POSIXACL(inode))
return NULL;
#endif
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
if (ret < 0) {
@@ -149,7 +153,8 @@ int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
switch (type) {
case ACL_TYPE_ACCESS:
if (acl) {
ret = posix_acl_update_mode(inode, &new_mode, &acl);
ret = posix_acl_update_mode(KC_VFS_INIT_NS
inode, &new_mode, &acl);
if (ret < 0)
goto out;
set_mode = true;
@@ -183,7 +188,7 @@ int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
if (!value) {
/* can be setting an acl that only affects mode, didn't need xattr */
inode_inc_iversion(inode);
inode->i_ctime = CURRENT_TIME;
inode->i_ctime = current_time(inode);
}
}
@@ -218,10 +223,17 @@ int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
return ret;
}
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_get_xattr(const struct xattr_handler *handler, struct dentry *dentry,
struct inode *inode, const char *name, void *value,
size_t size)
{
int type = handler->flags;
#else
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
int type)
{
#endif
struct posix_acl *acl;
int ret = 0;
@@ -240,13 +252,23 @@ int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value,
return ret;
}
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_set_xattr(const struct xattr_handler *handler,
KC_VFS_NS_DEF
struct dentry *dentry,
struct inode *inode, const char *name, const void *value,
size_t size, int flags)
{
int type = handler->flags;
#else
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
int flags, int type)
{
#endif
struct posix_acl *acl = NULL;
int ret;
if (!inode_owner_or_capable(dentry->d_inode))
if (!inode_owner_or_capable(KC_VFS_INIT_NS dentry->d_inode))
return -EPERM;
if (!IS_POSIXACL(dentry->d_inode))
@@ -301,7 +323,7 @@ int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
if (ret)
goto out;
}
ret = posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
ret = __posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
if (ret < 0)
return ret;
if (ret > 0)
@@ -345,7 +367,7 @@ int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
if (IS_ERR_OR_NULL(acl))
return PTR_ERR(acl);
ret = posix_acl_chmod(&acl, GFP_KERNEL, attr->ia_mode);
ret = __posix_acl_chmod(&acl, GFP_KERNEL, attr->ia_mode);
if (ret)
return ret;

View File

@@ -6,10 +6,21 @@ struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct s
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type);
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
struct scoutfs_lock *lock, struct list_head *ind_locks);
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_get_xattr(const struct xattr_handler *, struct dentry *dentry,
struct inode *inode, const char *name, void *value,
size_t size);
int scoutfs_acl_set_xattr(const struct xattr_handler *,
KC_VFS_NS_DEF
struct dentry *dentry,
struct inode *inode, const char *name, const void *value,
size_t size, int flags);
#else
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
int type);
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
int flags, int type);
#endif
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
struct scoutfs_lock *lock, struct list_head *ind_locks);
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,

View File

@@ -14,6 +14,7 @@
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/blkdev.h>
#include <linux/sort.h>
#include <linux/random.h>
@@ -976,6 +977,16 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
break;
}
/* return partial if the server alloc can't dirty any more */
if (scoutfs_alloc_meta_low(sb, alloc, 50 + extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
if (WARN_ON_ONCE(!moved))
ret = -ENOSPC;
else
ret = 0;
break;
}
/* searching set start/len, finish initializing alloced extent */
ext.map = found.map ? ext.start - found.start + found.map : 0;
ext.flags = found.flags;
@@ -1572,12 +1583,10 @@ out:
* call the caller's callback. This assumes that the super it's reading
* could be stale and will retry if it encounters stale blocks.
*/
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg)
int scoutfs_alloc_foreach(struct super_block *sb, scoutfs_alloc_foreach_cb_t cb, void *arg)
{
struct scoutfs_super_block *super = NULL;
struct scoutfs_block_ref stale_refs[2] = {{0,}};
struct scoutfs_block_ref refs[2] = {{0,}};
DECLARE_SAVED_REFS(saved);
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
@@ -1586,26 +1595,18 @@ int scoutfs_alloc_foreach(struct super_block *sb,
goto out;
}
retry:
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
do {
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
refs[0] = super->logs_root.ref;
refs[1] = super->srch_root.ref;
ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);
ret = scoutfs_block_check_stale(sb, ret, &saved, &super->logs_root.ref,
&super->srch_root.ref);
} while (ret == -ESTALE);
ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);
out:
if (ret == -ESTALE) {
if (memcmp(&stale_refs, &refs, sizeof(refs)) == 0) {
ret = -EIO;
} else {
BUILD_BUG_ON(sizeof(stale_refs) != sizeof(refs));
memcpy(stale_refs, refs, sizeof(stale_refs));
goto retry;
}
}
kfree(super);
return ret;
}

252
kmod/src/attr_x.c Normal file
View File

@@ -0,0 +1,252 @@
/*
* Copyright (C) 2024 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include "format.h"
#include "super.h"
#include "inode.h"
#include "ioctl.h"
#include "lock.h"
#include "trans.h"
#include "attr_x.h"
static int validate_attr_x_input(struct super_block *sb, struct scoutfs_ioctl_inode_attr_x *iax)
{
int ret;
if ((iax->x_mask & SCOUTFS_IOC_IAX__UNKNOWN) ||
(iax->x_flags & SCOUTFS_IOC_IAX_F__UNKNOWN))
return -EINVAL;
if ((iax->x_mask & SCOUTFS_IOC_IAX_RETENTION) &&
(ret = scoutfs_fmt_vers_unsupported(sb, SCOUTFS_FORMAT_VERSION_FEAT_RETENTION)))
return ret;
if ((iax->x_mask & SCOUTFS_IOC_IAX_PROJECT_ID) &&
(ret = scoutfs_fmt_vers_unsupported(sb, SCOUTFS_FORMAT_VERSION_FEAT_PROJECT_ID)))
return ret;
return 0;
}
/*
* If the mask indicates interest in the given attr then set the field
* to the caller's value and return the new size if it didn't already
* include the attr field.
*/
#define fill_attr(size, iax, bit, field, val) \
({ \
__typeof__(iax) _iax = (iax); \
__typeof__(size) _size = (size); \
\
if (_iax->x_mask & (bit)) { \
_iax->field = (val); \
_size = max(_size, offsetof(struct scoutfs_ioctl_inode_attr_x, field) + \
sizeof_field(struct scoutfs_ioctl_inode_attr_x, field)); \
} \
\
_size; \
})
/*
* Returns -errno on error, or the number of bytes (>= 0) filled by the
* response. 0 can be returned if no attributes are requested in the
* input x_mask.
*/
int scoutfs_get_attr_x(struct inode *inode, struct scoutfs_ioctl_inode_attr_x *iax)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct scoutfs_lock *lock = NULL;
size_t size = 0;
u64 offline;
u64 online;
u64 bits;
int ret;
if (iax->x_mask == 0) {
ret = 0;
goto out;
}
ret = validate_attr_x_input(sb, iax);
if (ret < 0)
goto out;
inode_lock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (ret)
goto unlock;
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_META_SEQ,
meta_seq, scoutfs_inode_meta_seq(inode));
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_DATA_SEQ,
data_seq, scoutfs_inode_data_seq(inode));
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_DATA_VERSION,
data_version, scoutfs_inode_data_version(inode));
if (iax->x_mask & (SCOUTFS_IOC_IAX_ONLINE_BLOCKS | SCOUTFS_IOC_IAX_OFFLINE_BLOCKS)) {
scoutfs_inode_get_onoff(inode, &online, &offline);
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_ONLINE_BLOCKS,
online_blocks, online);
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_OFFLINE_BLOCKS,
offline_blocks, offline);
}
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_CTIME, ctime_sec, inode->i_ctime.tv_sec);
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_CTIME, ctime_nsec, inode->i_ctime.tv_nsec);
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_CRTIME, crtime_sec, si->crtime.tv_sec);
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_CRTIME, crtime_nsec, si->crtime.tv_nsec);
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_SIZE, size, i_size_read(inode));
if (iax->x_mask & SCOUTFS_IOC_IAX__BITS) {
bits = 0;
if ((iax->x_mask & SCOUTFS_IOC_IAX_RETENTION) &&
(scoutfs_inode_get_flags(inode) & SCOUTFS_INO_FLAG_RETENTION))
bits |= SCOUTFS_IOC_IAX_B_RETENTION;
size = fill_attr(size, iax, SCOUTFS_IOC_IAX__BITS, bits, bits);
}
size = fill_attr(size, iax, SCOUTFS_IOC_IAX_PROJECT_ID,
project_id, scoutfs_inode_get_proj(inode));
ret = size;
unlock:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
inode_unlock(inode);
out:
return ret;
}
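For illustration, a minimal userspace sketch of how the byte count returned by the get path could be interpreted: a field is only valid if the returned size covers it. The ioctl request name SCOUTFS_IOC_GET_ATTR_X and the header path are assumptions; only the struct and the SCOUTFS_IOC_IAX_* mask bits come from the code above.
/*
 * Illustrative userspace sketch only.  SCOUTFS_IOC_GET_ATTR_X and the
 * header name are assumed; whether the ioctl return value is the filled
 * size is inferred from scoutfs_get_attr_x() above.
 */
#include <stddef.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"	/* assumed location of the uapi header */

static void print_data_version(int fd)
{
	struct scoutfs_ioctl_inode_attr_x iax = {
		.x_mask = SCOUTFS_IOC_IAX_DATA_VERSION,
	};
	long filled = ioctl(fd, SCOUTFS_IOC_GET_ATTR_X, &iax);

	if (filled < 0) {
		perror("get_attr_x");
		return;
	}

	/* only trust fields that the filled size actually covers */
	if ((size_t)filled >= offsetof(struct scoutfs_ioctl_inode_attr_x, data_version) +
			      sizeof(iax.data_version))
		printf("data_version: %llu\n",
		       (unsigned long long)iax.data_version);
}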
static bool valid_attr_changes(struct inode *inode, struct scoutfs_ioctl_inode_attr_x *iax)
{
/* provided data_version must be non-zero */
if ((iax->x_mask & SCOUTFS_IOC_IAX_DATA_VERSION) && (iax->data_version == 0))
return false;
/* can only set size or data version in new regular files */
if (((iax->x_mask & SCOUTFS_IOC_IAX_SIZE) ||
(iax->x_mask & SCOUTFS_IOC_IAX_DATA_VERSION)) &&
(!S_ISREG(inode->i_mode) || scoutfs_inode_data_version(inode) != 0))
return false;
/* must provide non-zero data_version with non-zero size */
if (((iax->x_mask & SCOUTFS_IOC_IAX_SIZE) && (iax->size > 0)) &&
(!(iax->x_mask & SCOUTFS_IOC_IAX_DATA_VERSION) || (iax->data_version == 0)))
return false;
/* must provide non-zero size when setting offline extents to that size */
if ((iax->x_flags & SCOUTFS_IOC_IAX_F_SIZE_OFFLINE) &&
(!(iax->x_mask & SCOUTFS_IOC_IAX_SIZE) || (iax->size == 0)))
return false;
/* the retention bit only applies to regular files */
if ((iax->x_mask & SCOUTFS_IOC_IAX_RETENTION) && !S_ISREG(inode->i_mode))
return false;
return true;
}
int scoutfs_set_attr_x(struct inode *inode, struct scoutfs_ioctl_inode_attr_x *iax)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct scoutfs_lock *lock = NULL;
LIST_HEAD(ind_locks);
bool set_data_seq;
int ret;
/* initially all setting is root only, could loosen with finer grained checks */
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (iax->x_mask == 0) {
ret = 0;
goto out;
}
ret = validate_attr_x_input(sb, iax);
if (ret < 0)
goto out;
inode_lock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE, SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (ret)
goto unlock;
/* check for errors before making any changes */
if (!valid_attr_changes(inode, iax)) {
ret = -EINVAL;
goto unlock;
}
/* retention prevents modification unless also clearing retention */
ret = scoutfs_inode_check_retention(inode);
if (ret < 0 && !((iax->x_mask & SCOUTFS_IOC_IAX_RETENTION) &&
!(iax->bits & SCOUTFS_IOC_IAX_B_RETENTION)))
goto unlock;
/* setting only so we don't see 0 data seq with nonzero data_version */
if ((iax->x_mask & SCOUTFS_IOC_IAX_DATA_VERSION) && (iax->data_version > 0))
set_data_seq = true;
else
set_data_seq = false;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq, true);
if (ret)
goto unlock;
ret = scoutfs_dirty_inode_item(inode, lock);
if (ret < 0)
goto release;
/* creating offline extent first, it might fail */
if (iax->x_flags & SCOUTFS_IOC_IAX_F_SIZE_OFFLINE) {
ret = scoutfs_data_init_offline_extent(inode, iax->size, lock);
if (ret)
goto release;
}
/* make all changes once they're all checked and will succeed */
if (iax->x_mask & SCOUTFS_IOC_IAX_DATA_VERSION)
scoutfs_inode_set_data_version(inode, iax->data_version);
if (iax->x_mask & SCOUTFS_IOC_IAX_SIZE)
i_size_write(inode, iax->size);
if (iax->x_mask & SCOUTFS_IOC_IAX_CTIME) {
inode->i_ctime.tv_sec = iax->ctime_sec;
inode->i_ctime.tv_nsec = iax->ctime_nsec;
}
if (iax->x_mask & SCOUTFS_IOC_IAX_CRTIME) {
si->crtime.tv_sec = iax->crtime_sec;
si->crtime.tv_nsec = iax->crtime_nsec;
}
if (iax->x_mask & SCOUTFS_IOC_IAX_RETENTION) {
scoutfs_inode_set_flags(inode, ~SCOUTFS_INO_FLAG_RETENTION,
(iax->bits & SCOUTFS_IOC_IAX_B_RETENTION) ?
SCOUTFS_INO_FLAG_RETENTION : 0);
}
if (iax->x_mask & SCOUTFS_IOC_IAX_PROJECT_ID)
scoutfs_inode_set_proj(inode, iax->project_id);
scoutfs_update_inode_item(inode, lock, &ind_locks);
ret = 0;
release:
scoutfs_release_trans(sb);
unlock:
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);
out:
return ret;
}

11
kmod/src/attr_x.h Normal file
View File

@@ -0,0 +1,11 @@
#ifndef _SCOUTFS_ATTR_X_H_
#define _SCOUTFS_ATTR_X_H_
#include <linux/kernel.h>
#include <linux/fs.h>
#include "ioctl.h"
int scoutfs_get_attr_x(struct inode *inode, struct scoutfs_ioctl_inode_attr_x *iax);
int scoutfs_set_attr_x(struct inode *inode, struct scoutfs_ioctl_inode_attr_x *iax);
#endif

View File

@@ -21,6 +21,7 @@
#include <linux/blkdev.h>
#include <linux/rhashtable.h>
#include <linux/random.h>
#include <linux/sched/mm.h>
#include "format.h"
#include "super.h"
@@ -30,6 +31,7 @@
#include "scoutfs_trace.h"
#include "alloc.h"
#include "triggers.h"
#include "util.h"
/*
* The scoutfs block cache manages metadata blocks that can be larger
@@ -57,7 +59,7 @@ struct block_info {
atomic64_t access_counter;
struct rhashtable ht;
wait_queue_head_t waitq;
struct shrinker shrinker;
KC_DEFINE_SHRINKER(shrinker);
struct work_struct free_work;
struct llist_head free_llist;
};
@@ -118,8 +120,7 @@ do { \
static __le32 block_calc_crc(struct scoutfs_block_header *hdr, u32 size)
{
int off = offsetof(struct scoutfs_block_header, crc) +
FIELD_SIZEOF(struct scoutfs_block_header, crc);
int off = offsetofend(struct scoutfs_block_header, crc);
u32 calc = crc32c(~0, (char *)hdr + off, size - off);
return cpu_to_le32(calc);
@@ -128,7 +129,7 @@ static __le32 block_calc_crc(struct scoutfs_block_header *hdr, u32 size)
static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
{
struct block_private *bp;
unsigned int noio_flags;
unsigned int nofs_flags;
/*
* If we had multiple blocks per page we'd need to be a little
@@ -156,9 +157,9 @@ static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
* spurious reclaim-on dependencies and warnings.
*/
lockdep_off();
noio_flags = memalloc_noio_save();
bp->virt = __vmalloc(SCOUTFS_BLOCK_LG_SIZE, GFP_NOFS | __GFP_HIGHMEM, PAGE_KERNEL);
memalloc_noio_restore(noio_flags);
nofs_flags = memalloc_nofs_save();
bp->virt = kc__vmalloc(SCOUTFS_BLOCK_LG_SIZE, GFP_NOFS | __GFP_HIGHMEM);
memalloc_nofs_restore(nofs_flags);
lockdep_on();
if (!bp->virt) {
@@ -436,11 +437,10 @@ static void block_remove_all(struct super_block *sb)
* possible. Final freeing, verifying checksums, and unlinking errored
* blocks are all done by future users of the blocks.
*/
static void block_end_io(struct super_block *sb, int rw,
static void block_end_io(struct super_block *sb, blk_opf_t opf,
struct block_private *bp, int err)
{
DECLARE_BLOCK_INFO(sb, binf);
bool is_read = !(rw & WRITE);
if (err) {
scoutfs_inc_counter(sb, block_cache_end_io_error);
@@ -450,7 +450,7 @@ static void block_end_io(struct super_block *sb, int rw,
if (!atomic_dec_and_test(&bp->io_count))
return;
if (is_read && !test_bit(BLOCK_BIT_ERROR, &bp->bits))
if (!op_is_write(opf) && !test_bit(BLOCK_BIT_ERROR, &bp->bits))
set_bit(BLOCK_BIT_UPTODATE, &bp->bits);
clear_bit(BLOCK_BIT_IO_BUSY, &bp->bits);
@@ -463,13 +463,13 @@ static void block_end_io(struct super_block *sb, int rw,
wake_up(&binf->waitq);
}
static void block_bio_end_io(struct bio *bio, int err)
static void KC_DECLARE_BIO_END_IO(block_bio_end_io, struct bio *bio)
{
struct block_private *bp = bio->bi_private;
struct super_block *sb = bp->sb;
TRACE_BLOCK(end_io, bp);
block_end_io(sb, bio->bi_rw, bp, err);
block_end_io(sb, kc_bio_get_opf(bio), bp, kc_bio_get_errno(bio));
bio_put(bio);
}
@@ -477,7 +477,7 @@ static void block_bio_end_io(struct bio *bio, int err)
* Kick off IO for a single block.
*/
static int block_submit_bio(struct super_block *sb, struct block_private *bp,
int rw)
blk_opf_t opf)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct bio *bio = NULL;
@@ -504,14 +504,13 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
for (off = 0; off < SCOUTFS_BLOCK_LG_SIZE; off += PAGE_SIZE) {
if (!bio) {
bio = bio_alloc(GFP_NOFS, SCOUTFS_BLOCK_LG_PAGES_PER);
bio = kc_bio_alloc(sbi->meta_bdev, SCOUTFS_BLOCK_LG_PAGES_PER, opf, GFP_NOFS);
if (!bio) {
ret = -ENOMEM;
break;
}
bio->bi_sector = sector + (off >> 9);
bio->bi_bdev = sbi->meta_bdev;
kc_bio_set_sector(bio, sector + (off >> 9));
bio->bi_end_io = block_bio_end_io;
bio->bi_private = bp;
@@ -528,18 +527,18 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
BUG();
if (!bio_add_page(bio, page, PAGE_SIZE, 0)) {
submit_bio(rw, bio);
kc_submit_bio(bio);
bio = NULL;
}
}
if (bio)
submit_bio(rw, bio);
kc_submit_bio(bio);
blk_finish_plug(&plug);
/* let racing end_io know we're done */
block_end_io(sb, rw, bp, ret);
block_end_io(sb, opf, bp, ret);
return ret;
}
@@ -640,7 +639,7 @@ static struct block_private *block_read(struct super_block *sb, u64 blkno)
if (!test_bit(BLOCK_BIT_UPTODATE, &bp->bits) &&
test_and_clear_bit(BLOCK_BIT_NEW, &bp->bits)) {
ret = block_submit_bio(sb, bp, READ);
ret = block_submit_bio(sb, bp, REQ_OP_READ);
if (ret < 0)
goto out;
}
@@ -677,10 +676,11 @@ out:
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_block_header *hdr;
struct block_private *bp = NULL;
bool retried = false;
__le32 crc = 0;
int ret;
retry:
@@ -693,7 +693,9 @@ retry:
/* corrupted writes might be a sign of a stale reference */
if (!test_bit(BLOCK_BIT_CRC_VALID, &bp->bits)) {
if (hdr->crc != block_calc_crc(hdr, SCOUTFS_BLOCK_LG_SIZE)) {
crc = block_calc_crc(hdr, SCOUTFS_BLOCK_LG_SIZE);
if (hdr->crc != crc) {
trace_scoutfs_block_stale(sb, ref, hdr, magic, le32_to_cpu(crc));
ret = -ESTALE;
goto out;
}
@@ -701,8 +703,9 @@ retry:
set_bit(BLOCK_BIT_CRC_VALID, &bp->bits);
}
if (hdr->magic != cpu_to_le32(magic) || hdr->fsid != super->hdr.fsid ||
if (hdr->magic != cpu_to_le32(magic) || hdr->fsid != cpu_to_le64(sbi->fsid) ||
hdr->seq != ref->seq || hdr->blkno != ref->blkno) {
trace_scoutfs_block_stale(sb, ref, hdr, magic, 0);
ret = -ESTALE;
goto out;
}
@@ -728,6 +731,36 @@ out:
return ret;
}
static bool stale_refs_match(struct scoutfs_block_ref *caller, struct scoutfs_block_ref *saved)
{
return !caller || (caller->blkno == saved->blkno && caller->seq == saved->seq);
}
/*
* Check if a read of a reference that gave ESTALE should be retried or
* should generate a hard error. If this is the second time we got
* ESTALE from the same refs then we return EIO and the caller should
* stop. As long as we keep seeing different refs we'll return ESTALE
* and the caller can keep trying.
*/
int scoutfs_block_check_stale(struct super_block *sb, int ret,
struct scoutfs_block_saved_refs *saved,
struct scoutfs_block_ref *a, struct scoutfs_block_ref *b)
{
if (ret == -ESTALE) {
if (stale_refs_match(a, &saved->refs[0]) && stale_refs_match(b, &saved->refs[1])){
ret = -EIO;
} else {
if (a)
saved->refs[0] = *a;
if (b)
saved->refs[1] = *b;
}
}
return ret;
}
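A minimal sketch of the retry loop this helper is meant to drive, mirroring the alloc.c caller converted earlier in this diff; read_roots() and use_roots() are hypothetical stand-ins for a caller's real work.
/*
 * Illustrative sketch only: retry while the stale refs keep changing,
 * harden to -EIO once scoutfs_block_check_stale() sees them repeat.
 */
static int example_read_until_consistent(struct super_block *sb)
{
	DECLARE_SAVED_REFS(saved);
	struct scoutfs_block_ref ref_a;
	struct scoutfs_block_ref ref_b;
	int ret;

	do {
		ret = read_roots(sb, &ref_a, &ref_b);	/* refresh the refs */
		if (ret == 0)
			ret = use_roots(sb);		/* may return -ESTALE */

		ret = scoutfs_block_check_stale(sb, ret, &saved, &ref_a, &ref_b);
	} while (ret == -ESTALE);

	return ret;
}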
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl)
{
if (!IS_ERR_OR_NULL(bl))
@@ -797,7 +830,7 @@ int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
u32 magic, struct scoutfs_block **bl_ret,
u64 dirty_blkno, u64 *ref_blkno)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_block *cow_bl = NULL;
struct scoutfs_block *bl = NULL;
struct block_private *exist_bp = NULL;
@@ -865,7 +898,7 @@ int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
hdr = bl->data;
hdr->magic = cpu_to_le32(magic);
hdr->fsid = super->hdr.fsid;
hdr->fsid = cpu_to_le64(sbi->fsid);
hdr->blkno = cpu_to_le64(bl->blkno);
prandom_bytes(&hdr->seq, sizeof(hdr->seq));
@@ -939,7 +972,7 @@ int scoutfs_block_writer_write(struct super_block *sb,
/* retry previous write errors */
clear_bit(BLOCK_BIT_ERROR, &bp->bits);
ret = block_submit_bio(sb, bp, WRITE);
ret = block_submit_bio(sb, bp, REQ_OP_WRITE);
if (ret < 0)
break;
}
@@ -1039,6 +1072,16 @@ u64 scoutfs_block_writer_dirty_bytes(struct super_block *sb,
return wri->nr_dirty_blocks * SCOUTFS_BLOCK_LG_SIZE;
}
static unsigned long block_count_objects(struct shrinker *shrink, struct shrink_control *sc)
{
struct block_info *binf = KC_SHRINKER_CONTAINER_OF(shrink, struct block_info);
struct super_block *sb = binf->sb;
scoutfs_inc_counter(sb, block_cache_count_objects);
return shrinker_min_long(atomic_read(&binf->total_inserted));
}
/*
* Remove a number of cached blocks that haven't been used recently.
*
@@ -1059,25 +1102,19 @@ u64 scoutfs_block_writer_dirty_bytes(struct super_block *sb,
* atomically remove blocks when the only references are ours and the
* hash table.
*/
static int block_shrink(struct shrinker *shrink, struct shrink_control *sc)
static unsigned long block_scan_objects(struct shrinker *shrink, struct shrink_control *sc)
{
struct block_info *binf = container_of(shrink, struct block_info,
shrinker);
struct block_info *binf = KC_SHRINKER_CONTAINER_OF(shrink, struct block_info);
struct super_block *sb = binf->sb;
struct rhashtable_iter iter;
struct block_private *bp;
unsigned long nr;
bool stop = false;
unsigned long freed = 0;
unsigned long nr = sc->nr_to_scan;
u64 recently;
nr = sc->nr_to_scan;
if (nr == 0)
goto out;
scoutfs_inc_counter(sb, block_cache_scan_objects);
scoutfs_inc_counter(sb, block_cache_shrink);
nr = DIV_ROUND_UP(nr, SCOUTFS_BLOCK_LG_PAGES_PER);
restart:
recently = accessed_recently(binf);
rhashtable_walk_enter(&binf->ht, &iter);
rhashtable_walk_start(&iter);
@@ -1099,12 +1136,15 @@ restart:
if (bp == NULL)
break;
if (bp == ERR_PTR(-EAGAIN)) {
/* hard exit to wait for rcu rebalance to finish */
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
scoutfs_inc_counter(sb, block_cache_shrink_restart);
synchronize_rcu();
goto restart;
/*
* We can be called from reclaim in the allocation
* to resize the hash table itself. We have to
* return so that the caller can proceed and
* enable hash table iteration again.
*/
scoutfs_inc_counter(sb, block_cache_shrink_stop);
stop = true;
break;
}
scoutfs_inc_counter(sb, block_cache_shrink_next);
@@ -1118,6 +1158,7 @@ restart:
if (block_remove_solo(sb, bp)) {
scoutfs_inc_counter(sb, block_cache_shrink_remove);
TRACE_BLOCK(shrink, bp);
freed++;
nr--;
}
block_put(sb, bp);
@@ -1126,9 +1167,11 @@ restart:
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
out:
return min_t(u64, (u64)atomic_read(&binf->total_inserted) * SCOUTFS_BLOCK_LG_PAGES_PER,
INT_MAX);
if (stop)
return SHRINK_STOP;
else
return freed;
}
struct sm_block_completion {
@@ -1136,11 +1179,11 @@ struct sm_block_completion {
int err;
};
static void sm_block_bio_end_io(struct bio *bio, int err)
static void KC_DECLARE_BIO_END_IO(sm_block_bio_end_io, struct bio *bio)
{
struct sm_block_completion *sbc = bio->bi_private;
sbc->err = err;
sbc->err = kc_bio_get_errno(bio);
complete(&sbc->comp);
bio_put(bio);
}
@@ -1155,9 +1198,8 @@ static void sm_block_bio_end_io(struct bio *bio, int err)
* only layer that sees the full block buffer so we pass the calculated
* crc to the caller for them to check in their context.
*/
static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw, u64 blkno,
struct scoutfs_block_header *hdr, size_t len,
__le32 *blk_crc)
static int sm_block_io(struct super_block *sb, struct block_device *bdev, blk_opf_t opf,
u64 blkno, struct scoutfs_block_header *hdr, size_t len, __le32 *blk_crc)
{
struct scoutfs_block_header *pg_hdr;
struct sm_block_completion sbc;
@@ -1171,7 +1213,7 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw
return -EIO;
if (WARN_ON_ONCE(len > SCOUTFS_BLOCK_SM_SIZE) ||
WARN_ON_ONCE(!(rw & WRITE) && !blk_crc))
WARN_ON_ONCE(!op_is_write(opf) && !blk_crc))
return -EINVAL;
page = alloc_page(GFP_NOFS);
@@ -1180,7 +1222,7 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw
pg_hdr = page_address(page);
if (rw & WRITE) {
if (op_is_write(opf)) {
memcpy(pg_hdr, hdr, len);
if (len < SCOUTFS_BLOCK_SM_SIZE)
memset((char *)pg_hdr + len, 0,
@@ -1188,14 +1230,13 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw
pg_hdr->crc = block_calc_crc(pg_hdr, SCOUTFS_BLOCK_SM_SIZE);
}
bio = bio_alloc(GFP_NOFS, 1);
bio = kc_bio_alloc(bdev, 1, opf, GFP_NOFS);
if (!bio) {
ret = -ENOMEM;
goto out;
}
bio->bi_sector = blkno << (SCOUTFS_BLOCK_SM_SHIFT - 9);
bio->bi_bdev = bdev;
kc_bio_set_sector(bio, blkno << (SCOUTFS_BLOCK_SM_SHIFT - 9));
bio->bi_end_io = sm_block_bio_end_io;
bio->bi_private = &sbc;
bio_add_page(bio, page, SCOUTFS_BLOCK_SM_SIZE, 0);
@@ -1203,12 +1244,12 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw
init_completion(&sbc.comp);
sbc.err = 0;
submit_bio((rw & WRITE) ? WRITE_SYNC : READ_SYNC, bio);
kc_submit_bio(bio);
wait_for_completion(&sbc.comp);
ret = sbc.err;
if (ret == 0 && !(rw & WRITE)) {
if (ret == 0 && !op_is_write(opf)) {
memcpy(hdr, pg_hdr, len);
*blk_crc = block_calc_crc(pg_hdr, SCOUTFS_BLOCK_SM_SIZE);
}
@@ -1222,14 +1263,14 @@ int scoutfs_block_read_sm(struct super_block *sb,
struct scoutfs_block_header *hdr, size_t len,
__le32 *blk_crc)
{
return sm_block_io(sb, bdev, READ, blkno, hdr, len, blk_crc);
return sm_block_io(sb, bdev, REQ_OP_READ, blkno, hdr, len, blk_crc);
}
int scoutfs_block_write_sm(struct super_block *sb,
struct block_device *bdev, u64 blkno,
struct scoutfs_block_header *hdr, size_t len)
{
return sm_block_io(sb, bdev, WRITE, blkno, hdr, len, NULL);
return sm_block_io(sb, bdev, REQ_OP_WRITE, blkno, hdr, len, NULL);
}
int scoutfs_block_setup(struct super_block *sb)
@@ -1254,9 +1295,9 @@ int scoutfs_block_setup(struct super_block *sb)
atomic_set(&binf->total_inserted, 0);
atomic64_set(&binf->access_counter, 0);
init_waitqueue_head(&binf->waitq);
binf->shrinker.shrink = block_shrink;
binf->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&binf->shrinker);
KC_INIT_SHRINKER_FUNCS(&binf->shrinker, block_count_objects,
block_scan_objects);
KC_REGISTER_SHRINKER(&binf->shrinker, "scoutfs-block:" SCSBF, SCSB_ARGS(sb));
INIT_WORK(&binf->free_work, block_free_work);
init_llist_head(&binf->free_llist);
@@ -1276,7 +1317,7 @@ void scoutfs_block_destroy(struct super_block *sb)
struct block_info *binf = SCOUTFS_SB(sb)->block_info;
if (binf) {
unregister_shrinker(&binf->shrinker);
KC_UNREGISTER_SHRINKER(&binf->shrinker);
block_remove_all(sb);
flush_work(&binf->free_work);
rhashtable_destroy(&binf->ht);

View File

@@ -13,6 +13,17 @@ struct scoutfs_block {
void *priv;
};
struct scoutfs_block_saved_refs {
struct scoutfs_block_ref refs[2];
};
#define DECLARE_SAVED_REFS(name) \
struct scoutfs_block_saved_refs name = {{{0,}}}
int scoutfs_block_check_stale(struct super_block *sb, int ret,
struct scoutfs_block_saved_refs *saved,
struct scoutfs_block_ref *a, struct scoutfs_block_ref *b);
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret);
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl);

View File

@@ -2029,187 +2029,253 @@ int scoutfs_btree_rebalance(struct super_block *sb,
key, SCOUTFS_BTREE_MAX_VAL_LEN, NULL, NULL, NULL);
}
struct merge_pos {
struct merged_range {
struct scoutfs_key start;
struct scoutfs_key end;
struct rb_root root;
int size;
};
struct merged_item {
struct rb_node node;
struct scoutfs_btree_root *root;
struct scoutfs_block *bl;
struct scoutfs_btree_block *bt;
struct scoutfs_avl_node *avl;
struct scoutfs_key *key;
struct scoutfs_key key;
u64 seq;
u8 flags;
unsigned int val_len;
u8 *val;
u8 val[0];
};
static struct merge_pos *first_mpos(struct rb_root *root)
static inline struct merged_item *mitem_container(struct rb_node *node)
{
struct rb_node *node = rb_first(root);
if (node)
return container_of(node, struct merge_pos, node);
return node ? container_of(node, struct merged_item, node) : NULL;
}
static inline struct merged_item *first_mitem(struct rb_root *root)
{
return mitem_container(rb_first(root));
}
static inline struct merged_item *last_mitem(struct rb_root *root)
{
return mitem_container(rb_last(root));
}
static inline struct merged_item *next_mitem(struct merged_item *mitem)
{
return mitem_container(mitem ? rb_next(&mitem->node) : NULL);
}
static inline struct merged_item *prev_mitem(struct merged_item *mitem)
{
return mitem_container(mitem ? rb_prev(&mitem->node) : NULL);
}
static struct merged_item *find_mitem(struct rb_root *root, struct scoutfs_key *key,
struct rb_node **parent_ret, struct rb_node ***link_ret)
{
struct rb_node **node = &root->rb_node;
struct rb_node *parent = NULL;
struct merged_item *mitem;
int cmp;
while (*node) {
parent = *node;
mitem = container_of(*node, struct merged_item, node);
cmp = scoutfs_key_compare(key, &mitem->key);
if (cmp < 0) {
node = &(*node)->rb_left;
} else if (cmp > 0) {
node = &(*node)->rb_right;
} else {
*parent_ret = NULL;
*link_ret = NULL;
return mitem;
}
}
*parent_ret = parent;
*link_ret = node;
return NULL;
}
static struct merge_pos *next_mpos(struct merge_pos *mpos)
static void insert_mitem(struct merged_range *rng, struct merged_item *mitem,
struct rb_node *parent, struct rb_node **link)
{
struct rb_node *node;
if (mpos && (node = rb_next(&mpos->node)))
return container_of(node, struct merge_pos, node);
else
return NULL;
rb_link_node(&mitem->node, parent, link);
rb_insert_color(&mitem->node, &rng->root);
rng->size += item_len_bytes(mitem->val_len);
}
static void free_mpos(struct super_block *sb, struct merge_pos *mpos)
static void replace_mitem(struct merged_range *rng, struct merged_item *victim,
struct merged_item *new)
{
scoutfs_block_put(sb, mpos->bl);
kfree(mpos);
rb_replace_node(&victim->node, &new->node, &rng->root);
RB_CLEAR_NODE(&victim->node);
rng->size -= item_len_bytes(victim->val_len);
rng->size += item_len_bytes(new->val_len);
}
static void insert_mpos(struct rb_root *pos_root, struct merge_pos *ins)
static void free_mitem(struct merged_range *rng, struct merged_item *mitem)
{
struct rb_node **node = &pos_root->rb_node;
struct rb_node *parent = NULL;
struct merge_pos *mpos;
int cmp;
if (IS_ERR_OR_NULL(mitem))
return;
parent = NULL;
while (*node) {
parent = *node;
mpos = container_of(*node, struct merge_pos, node);
/* sort merge items by key then newest to oldest */
cmp = scoutfs_key_compare(ins->key, mpos->key) ?:
-scoutfs_cmp(ins->seq, mpos->seq);
if (cmp < 0)
node = &(*node)->rb_left;
else
node = &(*node)->rb_right;
if (!RB_EMPTY_NODE(&mitem->node)) {
rng->size -= item_len_bytes(mitem->val_len);
rb_erase(&mitem->node, &rng->root);
}
rb_link_node(&ins->node, parent, node);
rb_insert_color(&ins->node, pos_root);
kfree(mitem);
}
static void trim_range_size(struct merged_range *rng, int merge_window)
{
struct merged_item *mitem;
struct merged_item *tmp;
mitem = last_mitem(&rng->root);
while (mitem && rng->size > merge_window) {
rng->end = mitem->key;
scoutfs_key_dec(&rng->end);
tmp = mitem;
mitem = prev_mitem(mitem);
free_mitem(rng, tmp);
}
}
static void trim_range_end(struct merged_range *rng)
{
struct merged_item *mitem;
struct merged_item *tmp;
mitem = last_mitem(&rng->root);
while (mitem && scoutfs_key_compare(&mitem->key, &rng->end) > 0) {
tmp = mitem;
mitem = prev_mitem(mitem);
free_mitem(rng, tmp);
}
}
/*
* Find the next item in the merge_pos root in the caller's range and
* insert it into the rbtree sorted by key and version so that merging
* can find the next newest item at the front of the rbtree. We free
* the mpos on error or if there are no more items in the range.
* Record and combine logged items from log roots for merging with the
* writable destination root. The caller is responsible for trimming
* the range if it gets too large or if the key range shrinks.
*/
static int reset_mpos(struct super_block *sb, struct rb_root *pos_root, struct merge_pos *mpos,
struct scoutfs_key *start, struct scoutfs_key *end)
static int merge_read_item(struct super_block *sb, struct scoutfs_key *key, u64 seq, u8 flags,
void *val, int val_len, void *arg)
{
struct scoutfs_btree_item *item;
struct scoutfs_avl_node *next;
struct btree_walk_key_range kr;
struct scoutfs_key walk_key;
int ret = 0;
struct merged_range *rng = arg;
struct merged_item *mitem;
struct merged_item *found;
struct rb_node *parent;
struct rb_node **link;
int ret;
/* always erase before freeing or inserting */
if (!RB_EMPTY_NODE(&mpos->node)) {
rb_erase(&mpos->node, pos_root);
RB_CLEAR_NODE(&mpos->node);
}
/*
* advance to next item via the avl tree. The caller's pos is
* only ever incremented past the last key so we can use next to
* iterate rather than using search to skip past multiple items.
*/
if (mpos->avl)
mpos->avl = scoutfs_avl_next(&mpos->bt->item_root, mpos->avl);
/* find the next leaf with the key if we run out of items */
walk_key = *start;
while (!mpos->avl && !scoutfs_key_is_zeros(&walk_key)) {
scoutfs_block_put(sb, mpos->bl);
mpos->bl = NULL;
ret = btree_walk(sb, NULL, NULL, mpos->root, BTW_NEXT, &walk_key,
0, &mpos->bl, &kr, NULL);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
free_mpos(sb, mpos);
found = find_mitem(&rng->root, key, &parent, &link);
if (found) {
ret = scoutfs_forest_combine_deltas(key, found->val, found->val_len, val, val_len);
if (ret < 0)
goto out;
if (ret > 0) {
if (ret == SCOUTFS_DELTA_COMBINED) {
scoutfs_inc_counter(sb, btree_merge_delta_combined);
} else if (ret == SCOUTFS_DELTA_COMBINED_NULL) {
scoutfs_inc_counter(sb, btree_merge_delta_null);
free_mitem(rng, found);
}
ret = 0;
goto out;
}
mpos->bt = mpos->bl->data;
mpos->avl = scoutfs_avl_search(&mpos->bt->item_root, cmp_key_item,
start, NULL, NULL, &next, NULL) ?: next;
if (mpos->avl == NULL)
walk_key = kr.iter_next;
if (found->seq >= seq) {
ret = 0;
goto out;
}
}
/* see if we're out of items within the range */
item = node_item(mpos->avl);
if (!item || scoutfs_key_compare(item_key(item), end) > 0) {
free_mpos(sb, mpos);
ret = 0;
mitem = kmalloc(offsetof(struct merged_item, val[val_len]), GFP_NOFS);
if (!mitem) {
ret = -ENOMEM;
goto out;
}
/* insert the next item within range at its version */
mpos->key = item_key(item);
mpos->seq = le64_to_cpu(item->seq);
mpos->flags = item->flags;
mpos->val_len = item_val_len(item);
mpos->val = item_val(mpos->bt, item);
mitem->key = *key;
mitem->seq = seq;
mitem->flags = flags;
mitem->val_len = val_len;
if (val_len)
memcpy(mitem->val, val, val_len);
if (found) {
replace_mitem(rng, found, mitem);
free_mitem(rng, found);
} else {
insert_mitem(rng, mitem, parent, link);
}
insert_mpos(pos_root, mpos);
ret = 0;
out:
return ret;
}
/*
* The caller has reset all the merge positions for all the input log
* btree roots and wants the next logged item it should try and merge
* with the items in the fs_root.
* Read a range of merged items. The caller has set the key bounds of
* the range. We read a merge window's worth of items from blocks in
* each input btree.
*
* We look ahead in the logged item stream to see if we should merge any
* older logged delta items into one result for the caller. We also
* take this opportunity to skip and reset the mpos for any older
* versions of the first item.
* The caller can only use the smallest range that overlaps with all the
* blocks that we read. We start reading from the range's start key so
* it will always be present and we don't need to adjust it. The final
* block we read from each input might not cover the range's end so it
* needs to be adjusted.
*
* The end range can also shrink if we have to drop items because the
* items exceeded the merge window size.
*/
static int next_resolved_mpos(struct super_block *sb, struct rb_root *pos_root,
struct scoutfs_key *end, struct merge_pos **mpos_ret)
static int read_merged_range(struct super_block *sb, struct merged_range *rng,
struct list_head *inputs, int merge_window)
{
struct merge_pos *mpos;
struct merge_pos *next;
struct scoutfs_btree_root_head *rhead;
struct scoutfs_key start;
struct scoutfs_key end;
struct scoutfs_key key;
int ret = 0;
int i;
while ((mpos = first_mpos(pos_root)) && (next = next_mpos(mpos)) &&
!scoutfs_key_compare(mpos->key, next->key)) {
list_for_each_entry(rhead, inputs, head) {
key = rng->start;
ret = scoutfs_forest_combine_deltas(mpos->key, mpos->val, mpos->val_len,
next->val, next->val_len);
if (ret < 0)
break;
/* reset advances to the next item */
key = *mpos->key;
scoutfs_key_inc(&key);
/* always skip next combined or older version */
ret = reset_mpos(sb, pos_root, next, &key, end);
if (ret < 0)
break;
if (ret == SCOUTFS_DELTA_COMBINED) {
scoutfs_inc_counter(sb, btree_merge_delta_combined);
} else if (ret == SCOUTFS_DELTA_COMBINED_NULL) {
scoutfs_inc_counter(sb, btree_merge_delta_null);
/* if merging resulted in no info, skip current */
ret = reset_mpos(sb, pos_root, mpos, &key, end);
for (i = 0; i < merge_window; i += SCOUTFS_BLOCK_LG_SIZE) {
start = key;
end = rng->end;
ret = scoutfs_btree_read_items(sb, &rhead->root, &key, &start, &end,
merge_read_item, rng);
if (ret < 0)
goto out;
if (scoutfs_key_compare(&end, &rng->end) >= 0)
break;
key = end;
scoutfs_key_inc(&key);
}
if (scoutfs_key_compare(&end, &rng->end) < 0) {
rng->end = end;
trim_range_end(rng);
}
if (rng->size > merge_window)
trim_range_size(rng, merge_window);
}
*mpos_ret = mpos;
trace_scoutfs_btree_merge_read_range(sb, &rng->start, &rng->end, rng->size);
ret = 0;
out:
return ret;
}
@@ -2226,6 +2292,13 @@ static int next_resolved_mpos(struct super_block *sb, struct rb_root *pos_root,
* to allocators running low or needing to join/split the parent.
* *next_ret is set to the next key which hasn't been merged so that the
* caller can retry with a new allocator and subtree.
*
* The number of input roots can be immense. The merge_window specifies
* the size of the set of merged items that we'll maintain as we iterate
* over all the input roots. Once we've merged items into the window
* from all the input roots the merged input items are then merged to
* the writable destination root. It may take multiple passes of
* windows of merged items to cover the input key range.
*/
int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_alloc *alloc,
@@ -2235,18 +2308,16 @@ int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_key *next_ret,
struct scoutfs_btree_root *root,
struct list_head *inputs,
bool subtree, int dirty_limit, int alloc_low)
bool subtree, int dirty_limit, int alloc_low, int merge_window)
{
struct scoutfs_btree_root_head *rhead;
struct rb_root pos_root = RB_ROOT;
struct scoutfs_btree_item *item;
struct scoutfs_btree_block *bt;
struct scoutfs_block *bl = NULL;
struct btree_walk_key_range kr;
struct scoutfs_avl_node *par;
struct scoutfs_key next;
struct merge_pos *mpos;
struct merge_pos *tmp;
struct merged_item *mitem;
struct merged_item *tmp;
struct merged_range rng;
int walk_val_len;
int walk_flags;
bool is_del;
@@ -2257,49 +2328,59 @@ int scoutfs_btree_merge(struct super_block *sb,
trace_scoutfs_btree_merge(sb, root, start, end);
scoutfs_inc_counter(sb, btree_merge);
list_for_each_entry(rhead, inputs, head) {
mpos = kzalloc(sizeof(*mpos), GFP_NOFS);
if (!mpos) {
ret = -ENOMEM;
goto out;
}
RB_CLEAR_NODE(&mpos->node);
mpos->root = &rhead->root;
ret = reset_mpos(sb, &pos_root, mpos, start, end);
if (ret < 0)
goto out;
}
walk_flags = BTW_DIRTY;
if (subtree)
walk_flags |= BTW_SUBTREE;
walk_val_len = 0;
while ((ret = next_resolved_mpos(sb, &pos_root, end, &mpos)) == 0 && mpos) {
rng.start = *start;
rng.end = *end;
rng.root = RB_ROOT;
rng.size = 0;
ret = read_merged_range(sb, &rng, inputs, merge_window);
if (ret < 0)
goto out;
for (;;) {
/* read next window as it empties (and it is possible to read an empty range) */
mitem = first_mitem(&rng.root);
if (!mitem) {
/* done if the read range hit the end */
if (scoutfs_key_compare(&rng.end, end) >= 0)
break;
/* read next batch of merged items */
rng.start = rng.end;
scoutfs_key_inc(&rng.start);
rng.end = *end;
ret = read_merged_range(sb, &rng, inputs, merge_window);
if (ret < 0)
break;
continue;
}
if (scoutfs_block_writer_dirty_bytes(sb, wri) >= dirty_limit) {
scoutfs_inc_counter(sb, btree_merge_dirty_limit);
ret = -ERANGE;
*next_ret = *mpos->key;
*next_ret = mitem->key;
goto out;
}
if (scoutfs_alloc_meta_low(sb, alloc, alloc_low)) {
scoutfs_inc_counter(sb, btree_merge_alloc_low);
ret = -ERANGE;
*next_ret = *mpos->key;
*next_ret = mitem->key;
goto out;
}
scoutfs_block_put(sb, bl);
bl = NULL;
ret = btree_walk(sb, alloc, wri, root, walk_flags,
mpos->key, walk_val_len, &bl, &kr, NULL);
&mitem->key, walk_val_len, &bl, &kr, NULL);
if (ret < 0) {
if (ret == -ERANGE)
*next_ret = *mpos->key;
*next_ret = mitem->key;
goto out;
}
bt = bl->data;
@@ -2311,22 +2392,21 @@ int scoutfs_btree_merge(struct super_block *sb,
continue;
}
while ((ret = next_resolved_mpos(sb, &pos_root, end, &mpos)) == 0 && mpos) {
while (mitem) {
/* walk to new leaf if we exceed parent ref key */
if (scoutfs_key_compare(mpos->key, &kr.end) > 0)
if (scoutfs_key_compare(&mitem->key, &kr.end) > 0)
break;
/* see if there's an existing item */
item = leaf_item_hash_search(sb, bt, mpos->key);
is_del = !!(mpos->flags & SCOUTFS_ITEM_FLAG_DELETION);
item = leaf_item_hash_search(sb, bt, &mitem->key);
is_del = !!(mitem->flags & SCOUTFS_ITEM_FLAG_DELETION);
/* see if we're merging delta items */
if (item && !is_del)
delta = scoutfs_forest_combine_deltas(mpos->key,
delta = scoutfs_forest_combine_deltas(&mitem->key,
item_val(bt, item),
item_val_len(item),
mpos->val, mpos->val_len);
mitem->val, mitem->val_len);
else
delta = 0;
if (delta < 0) {
@@ -2338,40 +2418,38 @@ int scoutfs_btree_merge(struct super_block *sb,
scoutfs_inc_counter(sb, btree_merge_delta_null);
}
trace_scoutfs_btree_merge_items(sb, mpos->root,
mpos->key, mpos->val_len,
trace_scoutfs_btree_merge_items(sb, &mitem->key, mitem->val_len,
item ? root : NULL,
item ? item_key(item) : NULL,
item ? item_val_len(item) : 0, is_del);
/* rewalk and split if ins/update needs room */
if (!is_del && !delta && !mid_free_item_room(bt, mpos->val_len)) {
if (!is_del && !delta && !mid_free_item_room(bt, mitem->val_len)) {
walk_flags |= BTW_INSERT;
walk_val_len = mpos->val_len;
walk_val_len = mitem->val_len;
break;
}
/* insert missing non-deletion merge items */
if (!item && !is_del) {
scoutfs_avl_search(&bt->item_root,
cmp_key_item, mpos->key,
scoutfs_avl_search(&bt->item_root, cmp_key_item, &mitem->key,
&cmp, &par, NULL, NULL);
create_item(bt, mpos->key, mpos->seq, mpos->flags,
mpos->val, mpos->val_len, par, cmp);
create_item(bt, &mitem->key, mitem->seq, mitem->flags,
mitem->val, mitem->val_len, par, cmp);
scoutfs_inc_counter(sb, btree_merge_insert);
}
/* update existing items */
if (item && !is_del && !delta) {
item->seq = cpu_to_le64(mpos->seq);
item->flags = mpos->flags;
update_item_value(bt, item, mpos->val, mpos->val_len);
item->seq = cpu_to_le64(mitem->seq);
item->flags = mitem->flags;
update_item_value(bt, item, mitem->val, mitem->val_len);
scoutfs_inc_counter(sb, btree_merge_update);
}
/* update combined delta item seq */
if (delta == SCOUTFS_DELTA_COMBINED) {
item->seq = cpu_to_le64(mpos->seq);
item->seq = cpu_to_le64(mitem->seq);
}
/*
@@ -2403,21 +2481,18 @@ int scoutfs_btree_merge(struct super_block *sb,
walk_flags &= ~(BTW_INSERT | BTW_DELETE);
walk_val_len = 0;
/* finished with this key, skip any older items */
next = *mpos->key;
scoutfs_key_inc(&next);
ret = reset_mpos(sb, &pos_root, mpos, &next, end);
if (ret < 0)
goto out;
/* finished with this merged item */
tmp = mitem;
mitem = next_mitem(mitem);
free_mitem(&rng, tmp);
}
}
ret = 0;
out:
scoutfs_block_put(sb, bl);
rbtree_postorder_for_each_entry_safe(mpos, tmp, &pos_root, node) {
free_mpos(sb, mpos);
}
rbtree_postorder_for_each_entry_safe(mitem, tmp, &rng.root, node)
free_mitem(&rng, mitem);
return ret;
}
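
As a minimal userspace sketch of the windowed merge described above (not scoutfs code; the input arrays, names, and window size are made up for illustration): the smallest remaining key is pulled from the sorted inputs into a bounded window, duplicates of the same key in other inputs are consumed, the window is "applied" to the destination, and the passes repeat until the inputs are exhausted.

#include <stdio.h>
#include <limits.h>

#define NR_INPUTS 3
#define WINDOW    4	/* max merged items applied per pass */
#define NR_ITEMS  5

int main(void)
{
	int inputs[NR_INPUTS][NR_ITEMS] = {
		{ 1, 4, 9, 12, 20 },
		{ 2, 4, 7, 15, 21 },
		{ 3, 5, 9, 18, 22 },
	};
	int pos[NR_INPUTS] = { 0, };
	int pass = 0;
	int i;

	for (;;) {
		int window[WINDOW];
		int nr = 0;

		/* fill one window with the smallest remaining keys */
		while (nr < WINDOW) {
			int min = INT_MAX;
			int found = 0;

			for (i = 0; i < NR_INPUTS; i++) {
				if (pos[i] < NR_ITEMS && inputs[i][pos[i]] < min) {
					min = inputs[i][pos[i]];
					found = 1;
				}
			}
			if (!found)
				break;

			/* consume this key from every input that has it */
			for (i = 0; i < NR_INPUTS; i++) {
				while (pos[i] < NR_ITEMS && inputs[i][pos[i]] == min)
					pos[i]++;
			}
			window[nr++] = min;
		}
		if (nr == 0)
			break;

		/* "apply" the window to the destination */
		printf("pass %d:", pass++);
		for (i = 0; i < nr; i++)
			printf(" %d", window[i]);
		printf("\n");
	}
	return 0;
}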

View File

@@ -119,7 +119,7 @@ int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_key *next_ret,
struct scoutfs_btree_root *root,
struct list_head *input_list,
bool subtree, int dirty_limit, int alloc_low);
bool subtree, int dirty_limit, int alloc_low, int merge_window);
int scoutfs_btree_free_blocks(struct super_block *sb,
struct scoutfs_alloc *alloc,

View File

@@ -20,6 +20,7 @@
#include <net/sock.h>
#include <net/tcp.h>
#include <asm/barrier.h>
#include <linux/overflow.h>
#include "format.h"
#include "counters.h"
@@ -68,6 +69,7 @@ int scoutfs_client_alloc_inodes(struct super_block *sb, u64 count,
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_net_inode_alloc ial;
__le64 lecount = cpu_to_le64(count);
u64 tmp;
int ret;
ret = scoutfs_net_sync_request(sb, client->conn,
@@ -80,7 +82,7 @@ int scoutfs_client_alloc_inodes(struct super_block *sb, u64 count,
if (*nr == 0)
ret = -ENOSPC;
else if (*ino + *nr < *ino)
else if (check_add_overflow(*ino, *nr - 1, &tmp))
ret = -EINVAL;
}
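
check_add_overflow() from <linux/overflow.h> (included above) reports whether an addition wrapped; testing *ino + (*nr - 1) validates the last inode number in the allocated range rather than the one-past-the-end sum, so a range that ends exactly at the largest representable value is no longer rejected. A userspace sketch of the same pattern, using the compiler builtin that the kernel helper relies on with modern compilers (illustration only, not code from this change):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* true if the inode range [ino, ino + nr - 1] wraps past the largest u64 */
static bool inode_range_wraps(uint64_t ino, uint64_t nr)
{
	uint64_t last;

	return nr == 0 || __builtin_add_overflow(ino, nr - 1, &last);
}

int main(void)
{
	printf("%d\n", inode_range_wraps(UINT64_MAX - 1, 2));	/* 0: last inode fits */
	printf("%d\n", inode_range_wraps(UINT64_MAX, 2));	/* 1: range wraps */
	return 0;
}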
@@ -356,7 +358,6 @@ static int client_greeting(struct super_block *sb,
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct client_info *client = sbi->client_info;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_greeting *gr = resp;
bool new_server;
int ret;
@@ -371,9 +372,9 @@ static int client_greeting(struct super_block *sb,
goto out;
}
if (gr->fsid != super->hdr.fsid) {
if (gr->fsid != cpu_to_le64(sbi->fsid)) {
scoutfs_warn(sb, "server greeting response fsid 0x%llx did not match client fsid 0x%llx",
le64_to_cpu(gr->fsid), le64_to_cpu(super->hdr.fsid));
le64_to_cpu(gr->fsid), sbi->fsid);
ret = -EINVAL;
goto out;
}
@@ -476,7 +477,6 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_mount_options opts;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
@@ -508,7 +508,7 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
goto out;
/* send a greeting to verify endpoints of each connection */
greet.fsid = super->hdr.fsid;
greet.fsid = cpu_to_le64(sbi->fsid);
greet.fmt_vers = cpu_to_le64(sbi->fmt_vers);
greet.server_term = cpu_to_le64(client->server_term);
greet.rid = cpu_to_le64(sbi->rid);

View File

@@ -30,11 +30,13 @@
EXPAND_COUNTER(block_cache_free) \
EXPAND_COUNTER(block_cache_free_work) \
EXPAND_COUNTER(block_cache_remove_stale) \
EXPAND_COUNTER(block_cache_count_objects) \
EXPAND_COUNTER(block_cache_scan_objects) \
EXPAND_COUNTER(block_cache_shrink) \
EXPAND_COUNTER(block_cache_shrink_next) \
EXPAND_COUNTER(block_cache_shrink_recent) \
EXPAND_COUNTER(block_cache_shrink_remove) \
EXPAND_COUNTER(block_cache_shrink_restart) \
EXPAND_COUNTER(block_cache_shrink_stop) \
EXPAND_COUNTER(btree_compact_values) \
EXPAND_COUNTER(btree_compact_values_enomem) \
EXPAND_COUNTER(btree_delete) \
@@ -88,6 +90,8 @@
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
EXPAND_COUNTER(item_cache_count_objects) \
EXPAND_COUNTER(item_cache_scan_objects) \
EXPAND_COUNTER(item_clear_dirty) \
EXPAND_COUNTER(item_create) \
EXPAND_COUNTER(item_delete) \
@@ -121,6 +125,7 @@
EXPAND_COUNTER(item_update) \
EXPAND_COUNTER(item_write_dirty) \
EXPAND_COUNTER(lock_alloc) \
EXPAND_COUNTER(lock_count_objects) \
EXPAND_COUNTER(lock_free) \
EXPAND_COUNTER(lock_grant_request) \
EXPAND_COUNTER(lock_grant_response) \
@@ -134,11 +139,13 @@
EXPAND_COUNTER(lock_lock_error) \
EXPAND_COUNTER(lock_nonblock_eagain) \
EXPAND_COUNTER(lock_recover_request) \
EXPAND_COUNTER(lock_scan_objects) \
EXPAND_COUNTER(lock_shrink_attempted) \
EXPAND_COUNTER(lock_shrink_aborted) \
EXPAND_COUNTER(lock_shrink_work) \
EXPAND_COUNTER(lock_unlock) \
EXPAND_COUNTER(lock_wait) \
EXPAND_COUNTER(log_merge_wait_timeout) \
EXPAND_COUNTER(net_dropped_response) \
EXPAND_COUNTER(net_send_bytes) \
EXPAND_COUNTER(net_send_error) \
@@ -155,6 +162,8 @@
EXPAND_COUNTER(orphan_scan_error) \
EXPAND_COUNTER(orphan_scan_item) \
EXPAND_COUNTER(orphan_scan_omap_set) \
EXPAND_COUNTER(quota_info_count_objects) \
EXPAND_COUNTER(quota_info_scan_objects) \
EXPAND_COUNTER(quorum_candidate_server_stopping) \
EXPAND_COUNTER(quorum_elected) \
EXPAND_COUNTER(quorum_fence_error) \
@@ -166,6 +175,7 @@
EXPAND_COUNTER(quorum_recv_resignation) \
EXPAND_COUNTER(quorum_recv_vote) \
EXPAND_COUNTER(quorum_send_heartbeat) \
EXPAND_COUNTER(quorum_send_heartbeat_dropped) \
EXPAND_COUNTER(quorum_send_resignation) \
EXPAND_COUNTER(quorum_send_request) \
EXPAND_COUNTER(quorum_send_vote) \
@@ -187,26 +197,23 @@
EXPAND_COUNTER(srch_search_retry_empty) \
EXPAND_COUNTER(srch_search_sorted) \
EXPAND_COUNTER(srch_search_sorted_block) \
EXPAND_COUNTER(srch_search_stale_eio) \
EXPAND_COUNTER(srch_search_stale_retry) \
EXPAND_COUNTER(srch_search_xattrs) \
EXPAND_COUNTER(srch_read_stale) \
EXPAND_COUNTER(statfs) \
EXPAND_COUNTER(totl_read_copied) \
EXPAND_COUNTER(totl_read_finalized) \
EXPAND_COUNTER(totl_read_fs) \
EXPAND_COUNTER(totl_read_item) \
EXPAND_COUNTER(totl_read_logged) \
EXPAND_COUNTER(trans_commit_data_alloc_low) \
EXPAND_COUNTER(trans_commit_dirty_meta_full) \
EXPAND_COUNTER(trans_commit_fsync) \
EXPAND_COUNTER(trans_commit_meta_alloc_low) \
EXPAND_COUNTER(trans_commit_sync_fs) \
EXPAND_COUNTER(trans_commit_timer) \
EXPAND_COUNTER(trans_commit_written)
EXPAND_COUNTER(trans_commit_written) \
EXPAND_COUNTER(wkic_count_objects) \
EXPAND_COUNTER(wkic_scan_objects)
#define FIRST_COUNTER alloc_alloc_data
#define LAST_COUNTER trans_commit_written
#define LAST_COUNTER wkic_scan_objects
#undef EXPAND_COUNTER
#define EXPAND_COUNTER(which) struct percpu_counter which;
@@ -233,12 +240,12 @@ struct scoutfs_counters {
#define SCOUTFS_PCPU_COUNTER_BATCH (1 << 30)
#define scoutfs_inc_counter(sb, which) \
__percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, 1, \
SCOUTFS_PCPU_COUNTER_BATCH)
percpu_counter_add_batch(&SCOUTFS_SB(sb)->counters->which, 1, \
SCOUTFS_PCPU_COUNTER_BATCH)
#define scoutfs_add_counter(sb, which, cnt) \
__percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, cnt, \
SCOUTFS_PCPU_COUNTER_BATCH)
percpu_counter_add_batch(&SCOUTFS_SB(sb)->counters->which, cnt, \
SCOUTFS_PCPU_COUNTER_BATCH)
void __init scoutfs_init_counters(void);
int scoutfs_setup_counters(struct super_block *sb);

View File

@@ -20,7 +20,9 @@
#include <linux/hash.h>
#include <linux/log2.h>
#include <linux/falloc.h>
#include <linux/fiemap.h>
#include <linux/writeback.h>
#include <linux/overflow.h>
#include "format.h"
#include "super.h"
@@ -307,7 +309,7 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
LIST_HEAD(ind_locks);
s64 ret = 0;
WARN_ON_ONCE(inode && !mutex_is_locked(&inode->i_mutex));
WARN_ON_ONCE(inode && !inode_is_locked(inode));
/* clamp last to the last possible block? */
if (last > SCOUTFS_BLOCK_SM_MAX)
@@ -456,11 +458,11 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
} else {
/*
* Preallocation of aligned regions only preallocates if
* the aligned region contains no extents at all. This
* could be fooled by offline sparse extents but we
* don't want to iterate over all offline extents in the
* aligned region.
* Preallocation within aligned regions tries to
* allocate an extent to fill the hole in the region
* that contains iblock. We'd have to add a bit of plumbing
* to find previous extents so we only search for a next
* extent from the front of the region and from iblock.
*/
div64_u64_rem(iblock, opts.data_prealloc_blocks, &rem);
start = iblock - rem;
@@ -468,8 +470,20 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
ret = scoutfs_ext_next(sb, &data_ext_ops, &args, start, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (found.len && found.start < start + count)
count = 1;
/* trim count if there's an extent in the region before iblock */
if (found.len && found.start < iblock) {
count -= iblock - start;
start = iblock;
/* see if there's also an extent after iblock */
ret = scoutfs_ext_next(sb, &data_ext_ops, &args, iblock, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
}
/* trim count by next extent after iblock */
if (found.len && found.start > start && found.start < start + count)
count = (found.start - start);
}
/* overall prealloc limit */
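
A worked example of the trimming above, as a userspace sketch (the eight-block region and the initial count equal to the region size are assumptions for illustration): iblock 21 falls in the aligned region [16, 24); an extent found before iblock shrinks the candidate range to start at iblock, and a following extent caps its length.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t region = 8;				/* data_prealloc_blocks (assumed) */
	uint64_t iblock = 21;
	uint64_t start = iblock - (iblock % region);	/* 16 */
	uint64_t count = region;			/* 8, assumed initial value */
	uint64_t prev_extent_start = 17;		/* extent found before iblock */
	uint64_t next_extent_start = 23;		/* extent found after iblock */

	/* an extent before iblock: only preallocate from iblock onward */
	if (prev_extent_start < iblock) {
		count -= iblock - start;		/* 8 - 5 = 3 */
		start = iblock;				/* 21 */
	}

	/* a following extent caps the preallocation */
	if (next_extent_start > start && next_extent_start < start + count)
		count = next_extent_start - start;	/* 23 - 21 = 2 */

	printf("prealloc blocks [%llu, %llu)\n",
	       (unsigned long long)start,
	       (unsigned long long)(start + count));	/* [21, 23) */
	return 0;
}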
@@ -546,7 +560,7 @@ static int scoutfs_get_block(struct inode *inode, sector_t iblock,
u64 offset;
int ret;
WARN_ON_ONCE(create && !mutex_is_locked(&inode->i_mutex));
WARN_ON_ONCE(create && !inode_is_locked(inode));
/* make sure caller holds a cluster lock */
lock = scoutfs_per_task_get(&si->pt_data_lock);
@@ -574,6 +588,12 @@ static int scoutfs_get_block(struct inode *inode, sector_t iblock,
goto out;
}
if (create && !si->staging) {
ret = scoutfs_inode_check_retention(inode);
if (ret < 0)
goto out;
}
/* convert unwritten to written, could be staging */
if (create && ext.map && (ext.flags & SEF_UNWRITTEN)) {
un.start = iblock;
@@ -661,8 +681,14 @@ int scoutfs_get_block_write(struct inode *inode, sector_t iblock, struct buffer_
* We can return errors from locking and checking offline extents. The
* page is unlocked if we return an error.
*/
#ifdef KC_MPAGE_READ_FOLIO
static int scoutfs_read_folio(struct file *file, struct folio *folio)
{
struct page *page = &folio->page;
#else
static int scoutfs_readpage(struct file *file, struct page *page)
{
#endif
struct inode *inode = file->f_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
@@ -692,7 +718,7 @@ static int scoutfs_readpage(struct file *file, struct page *page)
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
ret = scoutfs_data_wait_check(inode, page_offset(page),
PAGE_CACHE_SIZE, SEF_OFFLINE,
PAGE_SIZE, SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ, &dw,
inode_lock);
if (ret != 0) {
@@ -709,7 +735,11 @@ static int scoutfs_readpage(struct file *file, struct page *page)
return ret;
}
#ifdef KC_MPAGE_READ_FOLIO
ret = mpage_read_folio(folio, scoutfs_get_block_read);
#else
ret = mpage_readpage(page, scoutfs_get_block_read);
#endif
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
@@ -717,6 +747,7 @@ static int scoutfs_readpage(struct file *file, struct page *page)
return ret;
}
#ifndef KC_FILE_AOPS_READAHEAD
/*
* This is used for opportunistic read-ahead which can throw the pages
* away if it needs to. If the caller didn't deal with offline extents
@@ -742,14 +773,14 @@ static int scoutfs_readpages(struct file *file, struct address_space *mapping,
list_for_each_entry_safe(page, tmp, pages, lru) {
ret = scoutfs_data_wait_check(inode, page_offset(page),
PAGE_CACHE_SIZE, SEF_OFFLINE,
PAGE_SIZE, SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ, NULL,
inode_lock);
if (ret < 0)
goto out;
if (ret > 0) {
list_del(&page->lru);
page_cache_release(page);
put_page(page);
if (--nr_pages == 0) {
ret = 0;
goto out;
@@ -763,6 +794,29 @@ out:
BUG_ON(!list_empty(pages));
return ret;
}
#else
static void scoutfs_readahead(struct readahead_control *rac)
{
struct inode *inode = rac->file->f_inode;
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *inode_lock = NULL;
int ret;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
if (ret)
return;
ret = scoutfs_data_wait_check(inode, readahead_pos(rac),
readahead_length(rac), SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ, NULL,
inode_lock);
if (ret == 0)
mpage_readahead(rac, scoutfs_get_block_read);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
}
#endif
static int scoutfs_writepage(struct page *page, struct writeback_control *wbc)
{
@@ -783,7 +837,10 @@ struct write_begin_data {
static int scoutfs_write_begin(struct file *file,
struct address_space *mapping, loff_t pos,
unsigned len, unsigned flags,
unsigned len,
#ifdef KC_BLOCK_WRITE_BEGIN_AOP_FLAGS
unsigned flags,
#endif
struct page **pagep, void **fsdata)
{
struct inode *inode = mapping->host;
@@ -818,13 +875,18 @@ retry:
if (ret < 0)
goto out;
#ifdef KC_BLOCK_WRITE_BEGIN_AOP_FLAGS
/* can't re-enter fs, have trans */
flags |= AOP_FLAG_NOFS;
#endif
/* generic write_end updates i_size and calls dirty_inode */
ret = scoutfs_dirty_inode_item(inode, wbd->lock) ?:
block_write_begin(mapping, pos, len, flags, pagep,
scoutfs_get_block_write);
block_write_begin(mapping, pos, len,
#ifdef KC_BLOCK_WRITE_BEGIN_AOP_FLAGS
flags,
#endif
pagep, scoutfs_get_block_write);
if (ret < 0) {
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &wbd->ind_locks);
@@ -1026,6 +1088,7 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
loff_t end;
u64 iblock;
u64 last;
loff_t tmp;
s64 ret;
/* XXX support more flags */
@@ -1034,18 +1097,18 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
goto out;
}
/* catch wrapping */
if (offset + len < offset) {
ret = -EINVAL;
goto out;
}
if (len == 0) {
ret = 0;
goto out;
}
mutex_lock(&inode->i_mutex);
/* catch wrapping */
if (check_add_overflow(offset, len - 1, &tmp)) {
ret = -EINVAL;
goto out;
}
inode_lock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -1068,6 +1131,10 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
while(iblock <= last) {
ret = scoutfs_quota_check_data(sb, inode);
if (ret)
goto out_extent;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
if (ret)
goto out_extent;
@@ -1106,7 +1173,7 @@ out_extent:
up_write(&si->extent_sem);
out_mutex:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
mutex_unlock(&inode->i_mutex);
inode_unlock(inode);
out:
trace_scoutfs_data_fallocate(sb, ino, mode, offset, len, ret);
@@ -1119,9 +1186,9 @@ out:
* on regular files with no data extents. It's used to restore a file
* with an offline extent which can then trigger staging.
*
* The caller has taken care of locking the inode. We're updating the
* inode offline count as we create the offline extent so we take care
* of the index locking, updating, and transaction.
* The caller must take care of cluster locking, transactions, inode
* updates, and index updates (so that they can atomically make this
* change along with other metadata changes).
*/
int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
struct scoutfs_lock *lock)
@@ -1135,7 +1202,6 @@ int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
.lock = lock,
};
const u64 count = DIV_ROUND_UP(size, SCOUTFS_BLOCK_SM_SIZE);
LIST_HEAD(ind_locks);
u64 on;
u64 off;
int ret;
@@ -1148,28 +1214,10 @@ int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
goto out;
}
/* we're updating meta_seq with offline block count */
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
if (ret < 0)
goto out;
ret = scoutfs_dirty_inode_item(inode, lock);
if (ret < 0)
goto unlock;
down_write(&si->extent_sem);
ret = scoutfs_ext_insert(sb, &data_ext_ops, &args,
0, count, 0, SEF_OFFLINE);
up_write(&si->extent_sem);
if (ret < 0)
goto unlock;
scoutfs_update_inode_item(inode, lock, &ind_locks);
unlock:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
ret = 0;
out:
return ret;
}
@@ -1192,9 +1240,9 @@ static void truncate_inode_pages_extent(struct inode *inode, u64 start, u64 len)
* explained above the move_blocks ioctl argument structure definition.
*
* The caller has processed the ioctl args and performed the most basic
* inode checks, but we perform more detailed inode checks once we have
* the inode lock and refreshed inodes. Our job is to safely lock the
* two files and move the extents.
* argument sanity and inode checks, but we perform more detailed inode
* checks once we have the inode lock and refreshed inodes. Our job is
* to safely lock the two files and move the extents.
*/
#define MOVE_DATA_EXTENTS_PER_HOLD 16
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
@@ -1209,7 +1257,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
struct data_ext_args from_args;
struct data_ext_args to_args;
struct scoutfs_extent ext;
struct timespec cur_time;
struct kc_timespec cur_time;
LIST_HEAD(locks);
bool done = false;
loff_t from_size;
@@ -1237,6 +1285,9 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
if (ret)
goto out;
if (!is_stage && (ret = scoutfs_inode_check_retention(to)))
goto out;
if ((from_off & SCOUTFS_BLOCK_SM_MASK) ||
(to_off & SCOUTFS_BLOCK_SM_MASK) ||
((byte_len & SCOUTFS_BLOCK_SM_MASK) &&
@@ -1253,6 +1304,16 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
from_iblock = from_off >> SCOUTFS_BLOCK_SM_SHIFT;
count = (byte_len + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
to_iblock = to_off >> SCOUTFS_BLOCK_SM_SHIFT;
from_start = from_iblock;
/* only move extent blocks inside i_size, careful not to wrap */
from_size = i_size_read(from);
if (from_off >= from_size) {
ret = 0;
goto out;
}
if (from_off + byte_len > from_size)
count = ((from_size - from_off) + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
if (S_ISDIR(from->i_mode) || S_ISDIR(to->i_mode)) {
ret = -EISDIR;
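
The clamp a few lines above only moves whole small blocks that lie inside the source file's i_size. Worked numerically as a userspace sketch (the 4KB small-block size is an assumption here):

#include <stdio.h>
#include <stdint.h>

/* SHIFT/MASK mirror SCOUTFS_BLOCK_SM_*, assuming 4KB small blocks */
#define BLOCK_SHIFT	12
#define BLOCK_SIZE	(1ULL << BLOCK_SHIFT)
#define BLOCK_MASK	(BLOCK_SIZE - 1)

int main(void)
{
	uint64_t from_off = 8192, byte_len = 65536, from_size = 10000;
	uint64_t count = (byte_len + BLOCK_MASK) >> BLOCK_SHIFT;	/* 16 blocks requested */

	/* clamp the number of blocks moved to what lies inside i_size */
	if (from_off + byte_len > from_size)
		count = ((from_size - from_off) + BLOCK_MASK) >> BLOCK_SHIFT;

	printf("%llu\n", (unsigned long long)count);	/* 1: only one block is inside i_size */
	return 0;
}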
@@ -1264,8 +1325,8 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
goto out;
}
ret = inode_permission(from, MAY_WRITE) ?:
inode_permission(to, MAY_WRITE);
ret = inode_permission(KC_VFS_INIT_NS from, MAY_WRITE) ?:
inode_permission(KC_VFS_INIT_NS to, MAY_WRITE);
if (ret < 0)
goto out;
@@ -1320,7 +1381,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
/* find the next extent to move */
ret = scoutfs_ext_next(sb, &data_ext_ops, &from_args,
from_iblock, 1, &ext);
from_start, 1, &ext);
if (ret < 0) {
if (ret == -ENOENT) {
done = true;
@@ -1329,9 +1390,8 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
break;
}
/* only move extents within count and i_size */
if (ext.start >= from_iblock + count ||
ext.start >= i_size_read(from)) {
/* done if next extent starts after moving region */
if (ext.start >= from_iblock + count) {
done = true;
ret = 0;
break;
@@ -1339,13 +1399,15 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
from_start = max(ext.start, from_iblock);
map = ext.map + (from_start - ext.start);
len = min3(from_iblock + count,
round_up((u64)i_size_read(from),
SCOUTFS_BLOCK_SM_SIZE),
ext.start + ext.len) - from_start;
len = min(from_iblock + count, ext.start + ext.len) - from_start;
to_start = to_iblock + (from_start - from_iblock);
/* we'd get stuck, shouldn't happen */
if (WARN_ON_ONCE(len == 0)) {
ret = -EIO;
goto out;
}
if (is_stage) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &to_args,
to_start, 1, &off_ext);
@@ -1407,13 +1469,19 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
i_size_read(from);
i_size_write(to, to_size);
}
/* find next after moved extent, avoiding wrapping */
if (from_start + len < from_start)
from_start = from_iblock + count + 1;
else
from_start += len;
}
up_write(&from_si->extent_sem);
up_write(&to_si->extent_sem);
cur_time = CURRENT_TIME;
cur_time = current_time(from);
if (!is_stage) {
to->i_ctime = to->i_mtime = cur_time;
inode_inc_iversion(to);
@@ -1496,11 +1564,11 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
goto out;
}
ret = fiemap_check_flags(fieinfo, FIEMAP_FLAG_SYNC);
ret = fiemap_prep(inode, fieinfo, start, &len, FIEMAP_FLAG_SYNC);
if (ret)
goto out;
mutex_lock(&inode->i_mutex);
inode_lock(inode);
down_read(&si->extent_sem);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
@@ -1554,7 +1622,7 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
unlock:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
up_read(&si->extent_sem);
mutex_unlock(&inode->i_mutex);
inode_unlock(inode);
out:
if (ret == 1)
@@ -1662,12 +1730,16 @@ int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
u64 last_block;
u64 on;
u64 off;
loff_t tmp;
int ret = 0;
if (len == 0)
goto out;
if (WARN_ON_ONCE(sef & SEF_UNKNOWN) ||
WARN_ON_ONCE(op & SCOUTFS_IOC_DWO_UNKNOWN) ||
WARN_ON_ONCE(dw && !RB_EMPTY_NODE(&dw->node)) ||
WARN_ON_ONCE(pos + len < pos)) {
WARN_ON_ONCE(check_add_overflow(pos, len - 1, &tmp))) {
ret = -EINVAL;
goto out;
}
@@ -1843,8 +1915,18 @@ int scoutfs_data_waiting(struct super_block *sb, u64 ino, u64 iblock,
}
const struct address_space_operations scoutfs_file_aops = {
#ifdef KC_MPAGE_READ_FOLIO
.dirty_folio = block_dirty_folio,
.invalidate_folio = block_invalidate_folio,
.read_folio = scoutfs_read_folio,
#else
.readpage = scoutfs_readpage,
#endif
#ifndef KC_FILE_AOPS_READAHEAD
.readpages = scoutfs_readpages,
#else
.readahead = scoutfs_readahead,
#endif
.writepage = scoutfs_writepage,
.writepages = scoutfs_writepages,
.write_begin = scoutfs_write_begin,
@@ -1852,10 +1934,17 @@ const struct address_space_operations scoutfs_file_aops = {
};
const struct file_operations scoutfs_file_fops = {
#ifdef KC_LINUX_HAVE_FOP_AIO_READ
.read = do_sync_read,
.write = do_sync_write,
.aio_read = scoutfs_file_aio_read,
.aio_write = scoutfs_file_aio_write,
#else
.read_iter = scoutfs_file_read_iter,
.write_iter = scoutfs_file_write_iter,
.splice_read = generic_file_splice_read,
.splice_write = iter_file_splice_write,
#endif
.unlocked_ioctl = scoutfs_ioctl,
.fsync = scoutfs_file_fsync,
.llseek = scoutfs_file_llseek,

View File

@@ -34,6 +34,7 @@
#include "forest.h"
#include "acl.h"
#include "counters.h"
#include "quota.h"
#include "scoutfs_trace.h"
/*
@@ -272,7 +273,7 @@ static void set_dentry_fsdata(struct dentry *dentry, struct scoutfs_lock *lock)
static bool test_dentry_fsdata(struct dentry *dentry, u64 refresh)
{
u64 fsd = (unsigned long)ACCESS_ONCE(dentry->d_fsdata);
u64 fsd = (unsigned long)READ_ONCE(dentry->d_fsdata);
return fsd == refresh;
}
@@ -651,6 +652,10 @@ static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
if (ret)
goto out_unlock;
ret = scoutfs_quota_check_inode(sb, dir);
if (ret)
goto out_unlock;
if (orph_lock) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, ino, orph_lock);
if (ret < 0)
@@ -672,6 +677,8 @@ retry:
if (ret < 0)
goto out;
scoutfs_inode_set_proj(inode, scoutfs_inode_get_proj(dir));
ret = scoutfs_dirty_inode_item(dir, *dir_lock);
out:
if (ret)
@@ -696,8 +703,9 @@ out_unlock:
return inode;
}
static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
dev_t rdev)
static int scoutfs_mknod(KC_VFS_NS_DEF
struct inode *dir,
struct dentry *dentry, umode_t mode, dev_t rdev)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
@@ -735,7 +743,7 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
set_dentry_fsdata(dentry, dir_lock);
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
dir->i_mtime = dir->i_ctime = current_time(inode);
inode->i_mtime = inode->i_atime = inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_mtime;
inode_inc_iversion(dir);
@@ -766,15 +774,20 @@ out:
}
/* XXX hmm, do something with excl? */
static int scoutfs_create(struct inode *dir, struct dentry *dentry,
umode_t mode, bool excl)
static int scoutfs_create(KC_VFS_NS_DEF
struct inode *dir,
struct dentry *dentry, umode_t mode, bool excl)
{
return scoutfs_mknod(dir, dentry, mode | S_IFREG, 0);
return scoutfs_mknod(KC_VFS_NS
dir, dentry, mode | S_IFREG, 0);
}
static int scoutfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
static int scoutfs_mkdir(KC_VFS_NS_DEF
struct inode *dir,
struct dentry *dentry, umode_t mode)
{
return scoutfs_mknod(dir, dentry, mode | S_IFDIR, 0);
return scoutfs_mknod(KC_VFS_NS
dir, dentry, mode | S_IFDIR, 0);
}
static int scoutfs_link(struct dentry *old_dentry,
@@ -859,7 +872,7 @@ retry:
set_dentry_fsdata(dentry, dir_lock);
i_size_write(dir, dir_size);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
dir->i_mtime = dir->i_ctime = current_time(inode);
inode->i_ctime = dir->i_mtime;
inc_nlink(inode);
inode_inc_iversion(dir);
@@ -900,7 +913,7 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = dentry->d_inode;
struct timespec ts = current_kernel_time();
struct kc_timespec ts = current_time(inode);
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_lock *dir_lock = NULL;
@@ -926,12 +939,16 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
goto unlock;
}
ret = scoutfs_inode_check_retention(inode);
if (ret < 0)
goto unlock;
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
ret = lookup_dirent(sb, scoutfs_ino(dir), dentry->d_name.name, dentry->d_name.len, hash,
&dent, dir_lock);
if (ret < 0)
goto out;
goto unlock;
if (should_orphan(inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(inode),
@@ -1059,14 +1076,14 @@ static int symlink_item_ops(struct super_block *sb, enum symlink_ops op, u64 ino
}
/*
* Full a buffer with the null terminated symlink, point nd at it, and
* return it so put_link can free it once the vfs is done.
* Fill a buffer with the null terminated symlink, and return it
* so callers can free it once the vfs is done.
*
* We chose to pay the runtime cost of per-call allocation and copy
* overhead instead of wiring up symlinks to the page cache, storing
* each small link in a full page, and later having to reclaim them.
*/
static void *scoutfs_follow_link(struct dentry *dentry, struct nameidata *nd)
static void *scoutfs_get_link_target(struct dentry *dentry)
{
struct inode *inode = dentry->d_inode;
struct super_block *sb = inode->i_sb;
@@ -1125,38 +1142,48 @@ out:
if (ret < 0) {
kfree(path);
path = ERR_PTR(ret);
} else {
nd_set_link(nd, path);
}
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
return path;
}
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
static void *scoutfs_follow_link(struct dentry *dentry, struct nameidata *nd)
{
char *path;
path = scoutfs_get_link_target(dentry);
if (!IS_ERR_OR_NULL(path))
nd_set_link(nd, path);
return path;
}
static void scoutfs_put_link(struct dentry *dentry, struct nameidata *nd,
void *cookie)
{
if (!IS_ERR_OR_NULL(cookie))
kfree(cookie);
}
#else
static const char *scoutfs_get_link(struct dentry *dentry, struct inode *inode, struct delayed_call *done)
{
char *path;
const struct inode_operations scoutfs_symlink_iops = {
.readlink = generic_readlink,
.follow_link = scoutfs_follow_link,
.put_link = scoutfs_put_link,
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = generic_removexattr,
.get_acl = scoutfs_get_acl,
};
path = scoutfs_get_link_target(dentry);
if (!IS_ERR_OR_NULL(path))
set_delayed_call(done, kfree_link, path);
return path;
}
#endif
/*
* Symlink target paths can be annoyingly large. We store relatively
* rare large paths in multiple items.
*/
static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
static int scoutfs_symlink(KC_VFS_NS_DEF
struct inode *dir, struct dentry *dentry,
const char *symname)
{
struct super_block *sb = dir->i_sb;
@@ -1204,7 +1231,7 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
set_dentry_fsdata(dentry, dir_lock);
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
dir->i_mtime = dir->i_ctime = current_time(inode);
inode_inc_iversion(dir);
inode->i_ctime = dir->i_mtime;
@@ -1253,75 +1280,93 @@ int scoutfs_symlink_drop(struct super_block *sb, u64 ino,
}
/*
* Find the next link backref key for the given ino starting from the
* given dir inode and final entry position. If we find a backref item
* we add an allocated copy of it to the head of the caller's list.
* Find the next link backref items for the given ino starting from the
* given dir inode and final entry position. For each backref item we
* add an allocated copy of it to the head of the caller's list.
*
* Returns 0 if we added an entry, -ENOENT if we didn't, and -errno for
* search errors.
* Callers who are building a path can add one entry for each parent.
* They're left with a list of entries from the root down in list order.
*
* Callers who are gathering multiple entries for one inode get the
* entries in the opposite order that their items are found.
*
* Returns +ve for number of entries added, -ENOENT if no entries were
* found, or -errno on error. It weirdly won't return 0, but early
* callers preferred -ENOENT so we use that for the case of no entries.
*
* Callers are comfortable with the race inherent to incrementally
* building up a path with individual locked backref item lookups.
* gathering backrefs across multiple lock acquisitions.
*/
int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
u64 dir_ino, u64 dir_pos,
struct list_head *list)
int scoutfs_dir_add_next_linkrefs(struct super_block *sb, u64 ino, u64 dir_ino, u64 dir_pos,
int count, struct list_head *list)
{
struct scoutfs_link_backref_entry *prev_ent = NULL;
struct scoutfs_link_backref_entry *ent = NULL;
struct scoutfs_lock *lock = NULL;
struct scoutfs_key last_key;
struct scoutfs_key key;
int nr = 0;
int len;
int ret;
ent = kmalloc(offsetof(struct scoutfs_link_backref_entry,
dent.name[SCOUTFS_NAME_LEN]), GFP_KERNEL);
if (!ent) {
ret = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&ent->head);
init_dirent_key(&key, SCOUTFS_LINK_BACKREF_TYPE, ino, dir_ino, dir_pos);
init_dirent_key(&last_key, SCOUTFS_LINK_BACKREF_TYPE, ino, U64_MAX,
U64_MAX);
init_dirent_key(&last_key, SCOUTFS_LINK_BACKREF_TYPE, ino, U64_MAX, U64_MAX);
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, ino, &lock);
if (ret)
goto out;
ret = scoutfs_item_next(sb, &key, &last_key, &ent->dent,
dirent_bytes(SCOUTFS_NAME_LEN), lock);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
lock = NULL;
if (ret < 0)
goto out;
while (nr < count) {
ent = kmalloc(offsetof(struct scoutfs_link_backref_entry,
dent.name[SCOUTFS_NAME_LEN]), GFP_NOFS);
if (!ent) {
ret = -ENOMEM;
goto out;
}
len = ret - sizeof(struct scoutfs_dirent);
if (len < 1 || len > SCOUTFS_NAME_LEN) {
scoutfs_corruption(sb, SC_DIRENT_BACKREF_NAME_LEN,
corrupt_dirent_backref_name_len,
"ino %llu dir_ino %llu pos %llu key "SK_FMT" len %d",
ino, dir_ino, dir_pos, SK_ARG(&key), len);
ret = -EIO;
goto out;
INIT_LIST_HEAD(&ent->head);
ret = scoutfs_item_next(sb, &key, &last_key, &ent->dent,
dirent_bytes(SCOUTFS_NAME_LEN), lock);
if (ret < 0) {
if (ret == -ENOENT && prev_ent)
prev_ent->last = true;
goto out;
}
len = ret - sizeof(struct scoutfs_dirent);
if (len < 1 || len > SCOUTFS_NAME_LEN) {
scoutfs_corruption(sb, SC_DIRENT_BACKREF_NAME_LEN,
corrupt_dirent_backref_name_len,
"ino %llu dir_ino %llu pos %llu key "SK_FMT" len %d",
ino, dir_ino, dir_pos, SK_ARG(&key), len);
ret = -EIO;
goto out;
}
ent->dir_ino = le64_to_cpu(key.skd_major);
ent->dir_pos = le64_to_cpu(key.skd_minor);
ent->name_len = len;
ent->d_type = dentry_type(ent->dent.type);
ent->last = false;
trace_scoutfs_dir_add_next_linkref_found(sb, ino, ent->dir_ino, ent->dir_pos,
ent->name_len);
list_add(&ent->head, list);
prev_ent = ent;
ent = NULL;
nr++;
scoutfs_key_inc(&key);
}
list_add(&ent->head, list);
ent->dir_ino = le64_to_cpu(key.skd_major);
ent->dir_pos = le64_to_cpu(key.skd_minor);
ent->name_len = len;
ret = 0;
out:
trace_scoutfs_dir_add_next_linkref(sb, ino, dir_ino, dir_pos, ret,
ent ? ent->dir_ino : 0,
ent ? ent->dir_pos : 0,
ent ? ent->name_len : 0);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
trace_scoutfs_dir_add_next_linkrefs(sb, ino, dir_ino, dir_pos, count, nr, ret);
if (ent && list_empty(&ent->head))
kfree(ent);
return ret;
kfree(ent);
return nr ?: ret;
}
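
Given the return convention documented above (a positive count of entries added, -ENOENT when none were found, -errno on error), a caller gathering every backref for an inode might loop roughly as follows. This is a hedged sketch, not code from this change; the batch size of 16 and the continuation from the last entry's position (key wrap ignored) are assumptions:

	struct scoutfs_link_backref_entry *ent;
	LIST_HEAD(list);
	u64 dir_ino = 0;
	u64 dir_pos = 0;
	int ret;

	for (;;) {
		ret = scoutfs_dir_add_next_linkrefs(sb, ino, dir_ino, dir_pos, 16, &list);
		if (ret == -ENOENT) {
			ret = 0;	/* no entries at all */
			break;
		}
		if (ret < 0)
			break;		/* hard error */

		/* entries are added at the head, so the first is the last one found */
		ent = list_first_entry(&list, struct scoutfs_link_backref_entry, head);
		if (ent->last)
			break;		/* the item search ran out of backrefs */

		/* continue the next batch after the last entry we saw */
		dir_ino = ent->dir_ino;
		dir_pos = ent->dir_pos + 1;
	}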
static u64 first_backref_dir_ino(struct list_head *list)
@@ -1396,7 +1441,7 @@ retry:
}
/* get the next link name to the given inode */
ret = scoutfs_dir_add_next_linkref(sb, ino, dir_ino, dir_pos, list);
ret = scoutfs_dir_add_next_linkrefs(sb, ino, dir_ino, dir_pos, 1, list);
if (ret < 0)
goto out;
@@ -1404,7 +1449,7 @@ retry:
par_ino = first_backref_dir_ino(list);
while (par_ino != SCOUTFS_ROOT_INO) {
ret = scoutfs_dir_add_next_linkref(sb, par_ino, 0, 0, list);
ret = scoutfs_dir_add_next_linkrefs(sb, par_ino, 0, 0, 1, list);
if (ret < 0) {
if (ret == -ENOENT) {
/* restart if there was no parent component */
@@ -1416,6 +1461,8 @@ retry:
par_ino = first_backref_dir_ino(list);
}
ret = 0;
out:
if (ret < 0)
scoutfs_dir_free_backref_path(sb, list);
@@ -1443,6 +1490,11 @@ static int item_d_ancestor(struct super_block *sb, u64 p1, u64 p2, u64 *p_ret)
*p_ret = 0;
if (p2 == SCOUTFS_ROOT_INO) {
ret = 0;
goto out;
}
ret = scoutfs_dir_get_backref_path(sb, p2, 0, 0, &list);
if (ret)
goto out;
@@ -1518,7 +1570,8 @@ static int verify_ancestors(struct super_block *sb, u64 p1, u64 p2,
* from using parent/child locking orders as two groups can have both
* parent and child relationships to each other.
*/
static int scoutfs_rename_common(struct inode *old_dir,
static int scoutfs_rename_common(KC_VFS_NS_DEF
struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
{
@@ -1533,7 +1586,7 @@ static int scoutfs_rename_common(struct inode *old_dir,
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_dirent new_dent;
struct scoutfs_dirent old_dent;
struct timespec now;
struct kc_timespec now;
bool ins_new = false;
bool del_new = false;
bool ins_old = false;
@@ -1598,6 +1651,10 @@ static int scoutfs_rename_common(struct inode *old_dir,
goto out_unlock;
}
if ((old_inode && (ret = scoutfs_inode_check_retention(old_inode))) ||
(new_inode && (ret = scoutfs_inode_check_retention(new_inode))))
goto out_unlock;
if (should_orphan(new_inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(new_inode),
&orph_lock);
@@ -1699,7 +1756,7 @@ retry:
inc_nlink(new_dir);
}
now = CURRENT_TIME;
now = current_time(old_inode);
old_dir->i_ctime = now;
old_dir->i_mtime = now;
if (new_dir != old_dir) {
@@ -1786,21 +1843,26 @@ out_unlock:
return ret;
}
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
static int scoutfs_rename(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry)
{
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, 0);
return scoutfs_rename_common(KC_VFS_INIT_NS
old_dir, old_dentry, new_dir, new_dentry, 0);
}
#endif
static int scoutfs_rename2(struct inode *old_dir,
static int scoutfs_rename2(KC_VFS_NS_DEF
struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
{
if (flags & ~RENAME_NOREPLACE)
return -EINVAL;
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, flags);
return scoutfs_rename_common(KC_VFS_NS
old_dir, old_dentry, new_dir, new_dentry, flags);
}
#ifdef KC_FMODE_KABI_ITERATE
@@ -1812,8 +1874,18 @@ static int scoutfs_dir_open(struct inode *inode, struct file *file)
}
#endif
static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
static int scoutfs_tmpfile(KC_VFS_NS_DEF
struct inode *dir,
#ifdef KC_D_TMPFILE_DENTRY
struct dentry *dentry,
#else
struct file *file,
#endif
umode_t mode)
{
#ifndef KC_D_TMPFILE_DENTRY
struct dentry *dentry = file->f_path.dentry;
#endif
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
@@ -1836,11 +1908,15 @@ static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mod
if (ret < 0)
goto out; /* XXX returning error but items created */
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
si->crtime = inode->i_mtime;
insert_inode_hash(inode);
ihold(inode); /* need to update inode modifications in d_tmpfile */
#ifdef KC_D_TMPFILE_DENTRY
d_tmpfile(dentry, inode);
#else
d_tmpfile(file, inode);
#endif
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
@@ -1848,6 +1924,10 @@ static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mod
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
scoutfs_inode_index_unlock(sb, &ind_locks);
#ifndef KC_D_TMPFILE_DENTRY
ret = finish_open_simple(file, 0);
#endif
out:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
@@ -1861,6 +1941,37 @@ out:
return ret;
}
const struct inode_operations scoutfs_symlink_iops = {
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.readlink = generic_readlink,
.follow_link = scoutfs_follow_link,
.put_link = scoutfs_put_link,
#else
.get_link = scoutfs_get_link,
#endif
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
#endif
.listxattr = scoutfs_listxattr,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.removexattr = generic_removexattr,
#endif
.get_acl = scoutfs_get_acl,
#ifndef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.tmpfile = scoutfs_tmpfile,
.rename = scoutfs_rename_common,
.symlink = scoutfs_symlink,
.unlink = scoutfs_unlink,
.link = scoutfs_link,
.mkdir = scoutfs_mkdir,
.create = scoutfs_create,
.lookup = scoutfs_lookup,
#endif
};
const struct file_operations scoutfs_dir_fops = {
.KC_FOP_READDIR = scoutfs_readdir,
#ifdef KC_FMODE_KABI_ITERATE
@@ -1872,9 +1983,12 @@ const struct file_operations scoutfs_dir_fops = {
};
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
const struct inode_operations_wrapper scoutfs_dir_iops = {
.ops = {
#else
const struct inode_operations scoutfs_dir_iops = {
#endif
.lookup = scoutfs_lookup,
.mknod = scoutfs_mknod,
.create = scoutfs_create,
@@ -1882,17 +1996,25 @@ const struct inode_operations_wrapper scoutfs_dir_iops = {
.link = scoutfs_link,
.unlink = scoutfs_unlink,
.rmdir = scoutfs_unlink,
.rename = scoutfs_rename,
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.rename = scoutfs_rename,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = generic_removexattr,
#endif
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
.symlink = scoutfs_symlink,
.permission = scoutfs_permission,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
},
#endif
.tmpfile = scoutfs_tmpfile,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.rename2 = scoutfs_rename2,
#else
.rename = scoutfs_rename2,
#endif
};

View File

@@ -5,7 +5,11 @@
#include "lock.h"
extern const struct file_operations scoutfs_dir_fops;
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
extern const struct inode_operations_wrapper scoutfs_dir_iops;
#else
extern const struct inode_operations scoutfs_dir_iops;
#endif
extern const struct inode_operations scoutfs_symlink_iops;
extern const struct dentry_operations scoutfs_dentry_ops;
@@ -15,6 +19,8 @@ struct scoutfs_link_backref_entry {
u64 dir_ino;
u64 dir_pos;
u16 name_len;
u8 d_type;
bool last;
struct scoutfs_dirent dent;
/* the full name is allocated and stored in dent.name[] */
};
@@ -24,9 +30,8 @@ int scoutfs_dir_get_backref_path(struct super_block *sb, u64 ino, u64 dir_ino,
void scoutfs_dir_free_backref_path(struct super_block *sb,
struct list_head *list);
int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
u64 dir_ino, u64 dir_pos,
struct list_head *list);
int scoutfs_dir_add_next_linkrefs(struct super_block *sb, u64 ino, u64 dir_ino, u64 dir_pos,
int count, struct list_head *list);
int scoutfs_symlink_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock, u64 i_size);

View File

@@ -114,8 +114,8 @@ static struct dentry *scoutfs_get_parent(struct dentry *child)
int ret;
u64 ino;
ret = scoutfs_dir_add_next_linkref(sb, scoutfs_ino(inode), 0, 0, &list);
if (ret)
ret = scoutfs_dir_add_next_linkrefs(sb, scoutfs_ino(inode), 0, 0, 1, &list);
if (ret < 0)
return ERR_PTR(ret);
ent = list_first_entry(&list, struct scoutfs_link_backref_entry, head);
@@ -138,9 +138,9 @@ static int scoutfs_get_name(struct dentry *parent, char *name,
LIST_HEAD(list);
int ret;
ret = scoutfs_dir_add_next_linkref(sb, scoutfs_ino(inode), dir_ino,
0, &list);
if (ret)
ret = scoutfs_dir_add_next_linkrefs(sb, scoutfs_ino(inode), dir_ino,
0, 1, &list);
if (ret < 0)
return ret;
ret = -ENOENT;

View File

@@ -105,12 +105,12 @@ static ssize_t elapsed_secs_show(struct kobject *kobj,
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
ktime_t now = ktime_get();
struct timeval tv = { 0, };
ktime_t t = ns_to_ktime(0);
if (ktime_after(now, fence->start_kt))
tv = ktime_to_timeval(ktime_sub(now, fence->start_kt));
t = ktime_sub(now, fence->start_kt);
return snprintf(buf, PAGE_SIZE, "%llu", (long long)tv.tv_sec);
return snprintf(buf, PAGE_SIZE, "%llu", (long long)ktime_divns(t, NSEC_PER_SEC));
}
SCOUTFS_ATTR_RO(elapsed_secs);

View File

@@ -28,7 +28,9 @@
#include "inode.h"
#include "per_task.h"
#include "omap.h"
#include "quota.h"
#ifdef KC_LINUX_HAVE_FOP_AIO_READ
/*
* Start a high level file read. We check for offline extents in the
* read region here so that we only check the extents once. We use the
@@ -42,27 +44,27 @@ ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;
retry:
/* protect checked extents from release */
mutex_lock(&inode->i_mutex);
inode_lock(inode);
atomic_inc(&inode->i_dio_count);
mutex_unlock(&inode->i_mutex);
inode_unlock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
ret = scoutfs_data_wait_check_iov(inode, iov, nr_segs, pos,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ,
&dw, inode_lock);
&dw, scoutfs_inode_lock);
if (ret != 0)
goto out;
} else {
@@ -74,7 +76,7 @@ retry:
out:
inode_dio_done(inode);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_READ);
if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
@@ -92,7 +94,7 @@ ssize_t scoutfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;
@@ -101,34 +103,42 @@ ssize_t scoutfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
return 0;
retry:
mutex_lock(&inode->i_mutex);
inode_lock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
if (ret)
goto out;
ret = scoutfs_complete_truncate(inode, inode_lock);
ret = scoutfs_inode_check_retention(inode);
if (ret < 0)
goto out;
ret = scoutfs_complete_truncate(inode, scoutfs_inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
/* data_version is per inode, whole file must be online */
ret = scoutfs_data_wait_check(inode, 0, i_size_read(inode),
SEF_OFFLINE,
SCOUTFS_IOC_DWO_WRITE,
&dw, inode_lock);
&dw, scoutfs_inode_lock);
if (ret != 0)
goto out;
}
ret = scoutfs_quota_check_data(sb, inode);
if (ret)
goto out;
/* XXX: remove SUID bit */
ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);
out:
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
mutex_unlock(&inode->i_mutex);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);
if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
@@ -146,8 +156,119 @@ out:
return ret;
}
#else
ssize_t scoutfs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;
retry:
/* protect checked extents from release */
inode_lock(inode);
atomic_inc(&inode->i_dio_count);
inode_unlock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
ret = scoutfs_data_wait_check(inode, iocb->ki_pos, iov_iter_count(to), SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ, &dw, scoutfs_inode_lock);
if (ret != 0)
goto out;
} else {
WARN_ON_ONCE(true);
}
ret = generic_file_read_iter(iocb, to);
out:
inode_dio_end(inode);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_READ);
if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
if (ret == 0)
goto retry;
}
return ret;
}
ssize_t scoutfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
ssize_t ret;
retry:
inode_lock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
if (ret)
goto out;
ret = generic_write_checks(iocb, from);
if (ret <= 0)
goto out;
ret = scoutfs_inode_check_retention(inode);
if (ret < 0)
goto out;
ret = scoutfs_complete_truncate(inode, scoutfs_inode_lock);
if (ret)
goto out;
ret = scoutfs_quota_check_data(sb, inode);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
/* data_version is per inode, whole file must be online */
ret = scoutfs_data_wait_check(inode, 0, i_size_read(inode), SEF_OFFLINE,
SCOUTFS_IOC_DWO_WRITE, &dw, scoutfs_inode_lock);
if (ret != 0)
goto out;
}
/* XXX: remove SUID bit */
ret = __generic_file_write_iter(iocb, from);
out:
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);
if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
if (ret == 0)
goto retry;
}
if (ret > 0)
ret = generic_write_sync(iocb, ret);
return ret;
}
#endif
int scoutfs_permission(KC_VFS_NS_DEF
struct inode *inode, int mask)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *inode_lock = NULL;
@@ -161,7 +282,8 @@ int scoutfs_permission(struct inode *inode, int mask)
if (ret)
return ret;
ret = generic_permission(inode, mask);
ret = generic_permission(KC_VFS_INIT_NS
inode, mask);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);

View File

@@ -1,11 +1,17 @@
#ifndef _SCOUTFS_FILE_H_
#define _SCOUTFS_FILE_H_
#ifdef KC_LINUX_HAVE_FOP_AIO_READ
ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos);
ssize_t scoutfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos);
int scoutfs_permission(struct inode *inode, int mask);
#else
ssize_t scoutfs_file_read_iter(struct kiocb *, struct iov_iter *);
ssize_t scoutfs_file_write_iter(struct kiocb *, struct iov_iter *);
#endif
int scoutfs_permission(KC_VFS_NS_DEF
struct inode *inode, int mask);
loff_t scoutfs_file_llseek(struct file *file, loff_t offset, int whence);
#endif /* _SCOUTFS_FILE_H_ */

View File

@@ -78,11 +78,6 @@ struct forest_refs {
struct scoutfs_block_ref logs_ref;
};
/* initialize some refs that initially aren't equal */
#define DECLARE_STALE_TRACKING_SUPER_REFS(a, b) \
struct forest_refs a = {{cpu_to_le64(0),}}; \
struct forest_refs b = {{cpu_to_le64(1),}}
struct forest_bloom_nrs {
unsigned int nrs[SCOUTFS_FOREST_BLOOM_NRS];
};
@@ -136,11 +131,11 @@ static struct scoutfs_block *read_bloom_ref(struct super_block *sb, struct scout
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *next)
{
DECLARE_STALE_TRACKING_SUPER_REFS(prev_refs, refs);
struct scoutfs_net_roots roots;
struct scoutfs_btree_root item_root;
struct scoutfs_log_trees *lt;
SCOUTFS_BTREE_ITEM_REF(iref);
DECLARE_SAVED_REFS(saved);
struct scoutfs_key found;
struct scoutfs_key ltk;
bool checked_fs;
@@ -155,8 +150,6 @@ retry:
goto out;
trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
refs.fs_ref = roots.fs_root.ref;
refs.logs_ref = roots.logs_root.ref;
scoutfs_key_init_log_trees(&ltk, 0, 0);
checked_fs = false;
@@ -212,14 +205,10 @@ retry:
}
}
if (ret == -ESTALE) {
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0)
return -EIO;
prev_refs = refs;
ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.fs_root.ref, &roots.logs_root.ref);
if (ret == -ESTALE)
goto retry;
}
out:
return ret;
}
@@ -249,19 +238,16 @@ static int forest_read_items(struct super_block *sb, struct scoutfs_key *key, u6
* We return -ESTALE if we hit stale blocks to give the caller a chance
* to reset their state and retry with a newer version of the btrees.
*/
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg)
int scoutfs_forest_read_items_roots(struct super_block *sb, struct scoutfs_net_roots *roots,
struct scoutfs_key *key, struct scoutfs_key *bloom_key,
struct scoutfs_key *start, struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg)
{
struct forest_read_items_data rid = {
.cb = cb,
.cb_arg = arg,
};
struct scoutfs_log_trees lt;
struct scoutfs_net_roots roots;
struct scoutfs_bloom_block *bb;
struct forest_bloom_nrs bloom;
SCOUTFS_BTREE_ITEM_REF(iref);
@@ -275,18 +261,14 @@ int scoutfs_forest_read_items(struct super_block *sb,
scoutfs_inc_counter(sb, forest_read_items);
calc_bloom_nrs(&bloom, bloom_key);
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
trace_scoutfs_forest_using_roots(sb, &roots->fs_root, &roots->logs_root);
*start = orig_start;
*end = orig_end;
/* start with fs root items */
rid.fic |= FIC_FS_ROOT;
ret = scoutfs_btree_read_items(sb, &roots.fs_root, key, start, end,
ret = scoutfs_btree_read_items(sb, &roots->fs_root, key, start, end,
forest_read_items, &rid);
if (ret < 0)
goto out;
@@ -294,7 +276,7 @@ int scoutfs_forest_read_items(struct super_block *sb,
scoutfs_key_init_log_trees(&ltk, 0, 0);
for (;; scoutfs_key_inc(&ltk)) {
ret = scoutfs_btree_next(sb, &roots.logs_root, &ltk, &iref);
ret = scoutfs_btree_next(sb, &roots->logs_root, &ltk, &iref);
if (ret == 0) {
if (iref.val_len == sizeof(lt)) {
ltk = *iref.key;
@@ -351,6 +333,23 @@ out:
return ret;
}
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg)
{
struct scoutfs_net_roots roots;
int ret;
ret = scoutfs_client_get_roots(sb, &roots);
if (ret == 0)
ret = scoutfs_forest_read_items_roots(sb, &roots, key, bloom_key, start, end,
cb, arg);
return ret;
}
/*
* If the items are deltas then combine the src with the destination
* value and store the result in the destination.
@@ -541,9 +540,8 @@ void scoutfs_forest_dec_inode_count(struct super_block *sb)
/*
* Return the total inode count from the super block and all the
* log_btrees it references. This assumes it's working with a block
* reference hierarchy that should be fully consistent. If we see
* ESTALE we've hit persistent corruption.
* log_btrees it references. ESTALE from read blocks is returned to the
* caller who is expected to retry or return hard errors.
*/
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count)
@@ -572,8 +570,6 @@ int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_bloc
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
else if (ret == -ESTALE)
ret = -EIO;
break;
}
}
@@ -735,7 +731,8 @@ static void scoutfs_forest_log_merge_worker(struct work_struct *work)
ret = scoutfs_btree_merge(sb, &alloc, &wri, &req.start, &req.end,
&next, &comp.root, &inputs,
!!(req.flags & cpu_to_le64(SCOUTFS_LOG_MERGE_REQUEST_SUBTREE)),
SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT, 10);
SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT, 10,
(2 * 1024 * 1024));
if (ret == -ERANGE) {
comp.remain = next;
le64_add_cpu(&comp.flags, SCOUTFS_LOG_MERGE_COMP_REMAIN);


@@ -4,6 +4,7 @@
struct scoutfs_alloc;
struct scoutfs_block_writer;
struct scoutfs_block;
struct scoutfs_lock;
#include "btree.h"
@@ -23,6 +24,10 @@ int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg);
int scoutfs_forest_read_items_roots(struct super_block *sb, struct scoutfs_net_roots *roots,
struct scoutfs_key *key, struct scoutfs_key *bloom_key,
struct scoutfs_key *start, struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg);
int scoutfs_forest_set_bloom_bits(struct super_block *sb,
struct scoutfs_lock *lock);
void scoutfs_forest_set_max_seq(struct super_block *sb, u64 max_seq);


@@ -8,9 +8,14 @@
*/
#define SCOUTFS_FORMAT_VERSION_MIN 1
#define SCOUTFS_FORMAT_VERSION_MIN_STR __stringify(SCOUTFS_FORMAT_VERSION_MIN)
#define SCOUTFS_FORMAT_VERSION_MAX 1
#define SCOUTFS_FORMAT_VERSION_MAX 2
#define SCOUTFS_FORMAT_VERSION_MAX_STR __stringify(SCOUTFS_FORMAT_VERSION_MAX)
#define SCOUTFS_FORMAT_VERSION_FEAT_RETENTION 2
#define SCOUTFS_FORMAT_VERSION_FEAT_PROJECT_ID 2
#define SCOUTFS_FORMAT_VERSION_FEAT_QUOTA 2
#define SCOUTFS_FORMAT_VERSION_FEAT_INDX_TAG 2
/* statfs(2) f_type */
#define SCOUTFS_SUPER_MAGIC 0x554f4353 /* "SCOU" */
@@ -175,6 +180,10 @@ struct scoutfs_key {
#define sko_rid _sk_first
#define sko_ino _sk_second
/* quota rules */
#define skqr_hash _sk_second
#define skqr_coll_nr _sk_third
/* xattr totl */
#define skxt_a _sk_first
#define skxt_b _sk_second
@@ -585,7 +594,9 @@ struct scoutfs_log_merge_freeing {
*/
#define SCOUTFS_INODE_INDEX_ZONE 4
#define SCOUTFS_ORPHAN_ZONE 8
#define SCOUTFS_QUOTA_ZONE 10
#define SCOUTFS_XATTR_TOTL_ZONE 12
#define SCOUTFS_XATTR_INDX_ZONE 14
#define SCOUTFS_FS_ZONE 16
#define SCOUTFS_LOCK_ZONE 20
/* Items only stored in server btrees */
@@ -608,6 +619,9 @@ struct scoutfs_log_merge_freeing {
/* orphan zone, redundant type used for clarity */
#define SCOUTFS_ORPHAN_TYPE 4
/* quota zone */
#define SCOUTFS_QUOTA_RULE_TYPE 4
/* fs zone */
#define SCOUTFS_INODE_TYPE 4
#define SCOUTFS_XATTR_TYPE 8
@@ -661,6 +675,34 @@ struct scoutfs_xattr_totl_val {
__le64 count;
};
#define SQ_RF_TOTL_COUNT (1 << 0)
#define SQ_RF__UNKNOWN (~((1 << 1) - 1))
#define SQ_NS_LITERAL 0
#define SQ_NS_PROJ 1
#define SQ_NS_UID 2
#define SQ_NS_GID 3
#define SQ_NS__NR 4
#define SQ_NS__NR_SELECT (SQ_NS__NR - 1) /* !literal */
#define SQ_NF_SELECT (1 << 0)
#define SQ_NF__UNKNOWN (~((1 << 1) - 1))
#define SQ_OP_INODE 0
#define SQ_OP_DATA 1
#define SQ_OP__NR 2
struct scoutfs_quota_rule_val {
__le64 name_val[3];
__le64 limit;
__u8 prio;
__u8 op;
__u8 rule_flags;
__u8 name_source[3];
__u8 name_flags[3];
__u8 _pad[7];
};
/* XXX does this exist upstream somewhere? */
#define member_sizeof(TYPE, MEMBER) (sizeof(((TYPE *)0)->MEMBER))
@@ -683,16 +725,19 @@ struct scoutfs_xattr_totl_val {
#define SCOUTFS_QUORUM_ELECT_VAR_MS 100
/*
* Once a leader is elected they send out heartbeats at regular
* intervals to force members to wait the much longer heartbeat timeout.
* Once heartbeat timeout expires without receiving a heartbeat they'll
* switch over the performing elections.
* Once a leader is elected they send heartbeat messages to all quorum
* members at regular intervals to force members to wait the much longer
* heartbeat timeout. Once the heartbeat timeout expires without
* receiving a heartbeat message a member will start an election.
*
* These determine how long it could take members to notice that a
* leader has gone silent and start to elect a new leader.
* leader has gone silent and start to elect a new leader. The
* heartbeat timeout can be changed at run time by options.
*/
#define SCOUTFS_QUORUM_HB_IVAL_MS 100
#define SCOUTFS_QUORUM_HB_TIMEO_MS (5 * MSEC_PER_SEC)
#define SCOUTFS_QUORUM_MIN_HB_TIMEO_MS (2 * MSEC_PER_SEC)
#define SCOUTFS_QUORUM_DEF_HB_TIMEO_MS (10 * MSEC_PER_SEC)
#define SCOUTFS_QUORUM_MAX_HB_TIMEO_MS (60 * MSEC_PER_SEC)
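As an illustrative sketch only (the timeout bookkeeping lives elsewhere in the quorum code), a member's expiry check built on these constants might look like:
/* Illustrative sketch: has the configured heartbeat timeout expired? */
static bool hb_timeout_expired(unsigned long last_hb_jiffies, unsigned int hb_timeo_ms)
{
	/* hb_timeo_ms is the runtime option, clamped between the MIN and MAX above */
	return time_after(jiffies, last_hb_jiffies + msecs_to_jiffies(hb_timeo_ms));
}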
/*
* A newly elected leader will give fencing some time before giving up and
@@ -856,9 +901,38 @@ struct scoutfs_inode {
struct scoutfs_timespec ctime;
struct scoutfs_timespec mtime;
struct scoutfs_timespec crtime;
__le64 proj;
};
#define SCOUTFS_INO_FLAG_TRUNCATE 0x1
#define SCOUTFS_INODE_FMT_V1_BYTES offsetof(struct scoutfs_inode, proj)
/*
* There are so few versions that we don't mind doing this work inline
* so that both utils and kernel can share these. Mounting has already
* checked that the format version is within the supported min and max,
* so these functions only deal with size variance within that band.
*/
/* Returns the native written inode size for the given format version, 0 for bad version */
static inline int scoutfs_inode_vers_bytes(__u64 fmt_vers)
{
if (fmt_vers == 1)
return SCOUTFS_INODE_FMT_V1_BYTES;
else
return sizeof(struct scoutfs_inode);
}
/*
* Returns true if bytes is a valid inode size to read for the given
* version. The given version must be at or above the version that
* introduced the size.
*/
static inline int scoutfs_inode_valid_vers_bytes(__u64 fmt_vers, int bytes)
{
return (bytes == sizeof(struct scoutfs_inode) && fmt_vers == SCOUTFS_FORMAT_VERSION_MAX) ||
(bytes == SCOUTFS_INODE_FMT_V1_BYTES);
}
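As a quick sanity sketch (not in the tree), the two helpers are expected to agree for every supported format version:
/* Sketch: the size we write for a version must be readable for that version. */
static inline void inode_vers_bytes_sanity(void)
{
	__u64 v;

	for (v = SCOUTFS_FORMAT_VERSION_MIN; v <= SCOUTFS_FORMAT_VERSION_MAX; v++)
		WARN_ON_ONCE(!scoutfs_inode_valid_vers_bytes(v, scoutfs_inode_vers_bytes(v)));
}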
#define SCOUTFS_INO_FLAG_TRUNCATE 0x1
#define SCOUTFS_INO_FLAG_RETENTION 0x2
#define SCOUTFS_ROOT_INO 1


@@ -91,7 +91,7 @@ static void scoutfs_inode_ctor(void *obj)
init_rwsem(&si->extent_sem);
mutex_init(&si->item_mutex);
seqcount_init(&si->seqcount);
seqlock_init(&si->seqlock);
si->staging = false;
scoutfs_per_task_init(&si->pt_data_lock);
atomic64_set(&si->data_waitq.changed, 0);
@@ -143,10 +143,12 @@ void scoutfs_destroy_inode(struct inode *inode)
static const struct inode_operations scoutfs_file_iops = {
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = generic_removexattr,
#endif
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
.fiemap = scoutfs_data_fiemap,
};
@@ -154,10 +156,12 @@ static const struct inode_operations scoutfs_file_iops = {
static const struct inode_operations scoutfs_special_iops = {
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = generic_removexattr,
#endif
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
};
@@ -174,8 +178,12 @@ static void set_inode_ops(struct inode *inode)
inode->i_fop = &scoutfs_file_fops;
break;
case S_IFDIR:
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
inode->i_op = &scoutfs_dir_iops.ops;
inode->i_flags |= S_IOPS_WRAPPER;
#else
inode->i_op = &scoutfs_dir_iops;
#endif
inode->i_fop = &scoutfs_dir_fops;
break;
case S_IFLNK:
@@ -242,12 +250,12 @@ static void set_item_info(struct scoutfs_inode_info *si,
set_item_major(si, SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE, sinode->data_seq);
}
static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
static void load_inode(struct inode *inode, struct scoutfs_inode *cinode, int inode_bytes)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
i_size_write(inode, le64_to_cpu(cinode->size));
inode->i_version = le64_to_cpu(cinode->version);
inode_set_iversion_queried(inode, le64_to_cpu(cinode->version));
set_nlink(inode, le32_to_cpu(cinode->nlink));
i_uid_write(inode, le32_to_cpu(cinode->uid));
i_gid_write(inode, le32_to_cpu(cinode->gid));
@@ -270,6 +278,7 @@ static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
si->flags = le32_to_cpu(cinode->flags);
si->crtime.tv_sec = le64_to_cpu(cinode->crtime.sec);
si->crtime.tv_nsec = le32_to_cpu(cinode->crtime.nsec);
si->proj = le64_to_cpu(cinode->proj);
/*
* i_blocks is initialized from online and offline and is then
@@ -290,6 +299,24 @@ void scoutfs_inode_init_key(struct scoutfs_key *key, u64 ino)
};
}
/*
* Read an inode item into the caller's buffer and return the size that
* we read. Returns errors if the inode size is unsupported or doesn't
* make sense for the format version.
*/
static int lookup_inode_item(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_inode *sinode, struct scoutfs_lock *lock)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
int ret;
ret = scoutfs_item_lookup_smaller_zero(sb, key, sinode, sizeof(struct scoutfs_inode), lock);
if (ret >= 0 && !scoutfs_inode_valid_vers_bytes(sbi->fmt_vers, ret))
return -EIO;
return ret;
}
/*
* Refresh the vfs inode fields if the lock indicates that the current
* contents could be stale.
@@ -325,12 +352,12 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock)
mutex_lock(&si->item_mutex);
if (atomic64_read(&si->last_refreshed) < refresh_gen) {
ret = scoutfs_item_lookup_exact(sb, &key, &sinode,
sizeof(sinode), lock);
if (ret == 0) {
load_inode(inode, &sinode);
ret = lookup_inode_item(sb, &key, &sinode, lock);
if (ret > 0) {
load_inode(inode, &sinode, ret);
atomic64_set(&si->last_refreshed, refresh_gen);
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
ret = 0;
}
} else {
ret = 0;
@@ -340,10 +367,18 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock)
return ret;
}
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat)
{
struct inode *inode = dentry->d_inode;
#else
int scoutfs_getattr(KC_VFS_NS_DEF
const struct path *path, struct kstat *stat,
u32 request_mask, unsigned int query_flags)
{
struct inode *inode = d_inode(path->dentry);
#endif
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
int ret;
@@ -351,7 +386,8 @@ int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (ret == 0) {
generic_fillattr(inode, stat);
generic_fillattr(KC_VFS_INIT_NS
inode, stat);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
}
return ret;
@@ -384,7 +420,7 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
scoutfs_inode_inc_data_version(inode);
truncate_setsize(inode, new_size);
inode->i_ctime = inode->i_mtime = CURRENT_TIME;
inode->i_ctime = inode->i_mtime = current_time(inode);
if (truncate)
si->flags |= SCOUTFS_INO_FLAG_TRUNCATE;
scoutfs_inode_set_data_seq(inode);
@@ -449,7 +485,8 @@ int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock)
* re-acquire it. Ideally we'd fix this so that we can acquire the lock
* instead of the caller.
*/
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr)
int scoutfs_setattr(KC_VFS_NS_DEF
struct dentry *dentry, struct iattr *attr)
{
struct inode *inode = dentry->d_inode;
struct super_block *sb = inode->i_sb;
@@ -467,11 +504,15 @@ retry:
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (ret)
return ret;
ret = inode_change_ok(inode, attr);
ret = setattr_prepare(KC_VFS_INIT_NS
dentry, attr);
if (ret)
goto out;
ret = scoutfs_inode_check_retention(inode);
if (ret < 0)
goto out;
attr_size = (attr->ia_valid & ATTR_SIZE) ? attr->ia_size :
i_size_read(inode);
@@ -496,9 +537,9 @@ retry:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
/* XXX callee locks instead? */
mutex_unlock(&inode->i_mutex);
inode_unlock(inode);
ret = scoutfs_data_wait(inode, &dw);
mutex_lock(&inode->i_mutex);
inode_lock(inode);
if (ret == 0)
goto retry;
@@ -528,7 +569,8 @@ retry:
if (ret < 0)
goto release;
setattr_copy(inode, attr);
setattr_copy(KC_VFS_INIT_NS
inode, attr);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
@@ -552,11 +594,9 @@ static void set_trans_seq(struct inode *inode, u64 *seq)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (*seq != sbi->trans_seq) {
preempt_disable();
write_seqcount_begin(&si->seqcount);
write_seqlock(&si->seqlock);
*seq = sbi->trans_seq;
write_seqcount_end(&si->seqcount);
preempt_enable();
write_sequnlock(&si->seqlock);
}
}
@@ -578,22 +618,18 @@ void scoutfs_inode_inc_data_version(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
preempt_disable();
write_seqcount_begin(&si->seqcount);
write_seqlock(&si->seqlock);
si->data_version++;
write_seqcount_end(&si->seqcount);
preempt_enable();
write_sequnlock(&si->seqlock);
}
void scoutfs_inode_set_data_version(struct inode *inode, u64 data_version)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
preempt_disable();
write_seqcount_begin(&si->seqcount);
write_seqlock(&si->seqlock);
si->data_version = data_version;
write_seqcount_end(&si->seqcount);
preempt_enable();
write_sequnlock(&si->seqlock);
}
void scoutfs_inode_add_onoff(struct inode *inode, s64 on, s64 off)
@@ -602,8 +638,7 @@ void scoutfs_inode_add_onoff(struct inode *inode, s64 on, s64 off)
if (inode && (on || off)) {
si = SCOUTFS_I(inode);
preempt_disable();
write_seqcount_begin(&si->seqcount);
write_seqlock(&si->seqlock);
/* inode and extents out of sync, bad callers */
if (((s64)si->online_blocks + on < 0) ||
@@ -624,8 +659,7 @@ void scoutfs_inode_add_onoff(struct inode *inode, s64 on, s64 off)
si->online_blocks,
si->offline_blocks);
write_seqcount_end(&si->seqcount);
preempt_enable();
write_sequnlock(&si->seqlock);
}
/* any time offline extents decreased we try and wake waiters */
@@ -633,16 +667,16 @@ void scoutfs_inode_add_onoff(struct inode *inode, s64 on, s64 off)
scoutfs_data_wait_changed(inode);
}
static u64 read_seqcount_u64(struct inode *inode, u64 *val)
static u64 read_seqlock_u64(struct inode *inode, u64 *val)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
unsigned int seq;
unsigned seq;
u64 v;
do {
seq = read_seqcount_begin(&si->seqcount);
seq = read_seqbegin(&si->seqlock);
v = *val;
} while (read_seqcount_retry(&si->seqcount, seq));
} while (read_seqretry(&si->seqlock, seq));
return v;
}
@@ -651,33 +685,82 @@ u64 scoutfs_inode_meta_seq(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
return read_seqcount_u64(inode, &si->meta_seq);
return read_seqlock_u64(inode, &si->meta_seq);
}
u64 scoutfs_inode_data_seq(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
return read_seqcount_u64(inode, &si->data_seq);
return read_seqlock_u64(inode, &si->data_seq);
}
u64 scoutfs_inode_data_version(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
return read_seqcount_u64(inode, &si->data_version);
return read_seqlock_u64(inode, &si->data_version);
}
void scoutfs_inode_get_onoff(struct inode *inode, s64 *on, s64 *off)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
unsigned int seq;
unsigned seq;
do {
seq = read_seqcount_begin(&si->seqcount);
seq = read_seqbegin(&si->seqlock);
*on = SCOUTFS_I(inode)->online_blocks;
*off = SCOUTFS_I(inode)->offline_blocks;
} while (read_seqcount_retry(&si->seqcount, seq));
} while (read_seqretry(&si->seqlock, seq));
}
/*
* Get our private scoutfs inode flags, not the vfs i_flags.
*/
u32 scoutfs_inode_get_flags(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
unsigned seq;
u32 flags;
do {
seq = read_seqbegin(&si->seqlock);
flags = si->flags;
} while (read_seqretry(&si->seqlock, seq));
return flags;
}
void scoutfs_inode_set_flags(struct inode *inode, u32 and, u32 or)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
write_seqlock(&si->seqlock);
si->flags = (si->flags & and) | or;
write_sequnlock(&si->seqlock);
}
u64 scoutfs_inode_get_proj(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
unsigned seq;
u64 proj;
do {
seq = read_seqbegin(&si->seqlock);
proj = si->proj;
} while (read_seqretry(&si->seqlock, seq));
return proj;
}
void scoutfs_inode_set_proj(struct inode *inode, u64 proj)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
write_seqlock(&si->seqlock);
si->proj = proj;
write_sequnlock(&si->seqlock);
}
static int scoutfs_iget_test(struct inode *inode, void *arg)
@@ -750,7 +833,7 @@ struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf, int igf)
/* XXX ensure refresh, instead clear in drop_inode? */
si = SCOUTFS_I(inode);
atomic64_set(&si->last_refreshed, 0);
inode->i_version = 0;
inode_set_iversion_queried(inode, 0);
}
ret = scoutfs_inode_refresh(inode, lock);
@@ -789,7 +872,7 @@ out:
return inode;
}
static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
static void store_inode(struct scoutfs_inode *cinode, struct inode *inode, int inode_bytes)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
u64 online_blocks;
@@ -798,7 +881,7 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
scoutfs_inode_get_onoff(inode, &online_blocks, &offline_blocks);
cinode->size = cpu_to_le64(i_size_read(inode));
cinode->version = cpu_to_le64(inode->i_version);
cinode->version = cpu_to_le64(inode_peek_iversion(inode));
cinode->nlink = cpu_to_le32(inode->i_nlink);
cinode->uid = cpu_to_le32(i_uid_read(inode));
cinode->gid = cpu_to_le32(i_gid_read(inode));
@@ -825,6 +908,7 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
cinode->crtime.sec = cpu_to_le64(si->crtime.tv_sec);
cinode->crtime.nsec = cpu_to_le32(si->crtime.tv_nsec);
memset(cinode->crtime.__pad, 0, sizeof(cinode->crtime.__pad));
cinode->proj = cpu_to_le64(si->proj);
}
/*
@@ -848,15 +932,18 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
int scoutfs_dirty_inode_item(struct inode *inode, struct scoutfs_lock *lock)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_inode sinode;
struct scoutfs_key key;
int inode_bytes;
int ret;
store_inode(&sinode, inode);
inode_bytes = scoutfs_inode_vers_bytes(sbi->fmt_vers);
store_inode(&sinode, inode, inode_bytes);
scoutfs_inode_init_key(&key, scoutfs_ino(inode));
ret = scoutfs_item_update(sb, &key, &sinode, sizeof(sinode), lock);
ret = scoutfs_item_update(sb, &key, &sinode, inode_bytes, lock);
if (!ret)
trace_scoutfs_dirty_inode(inode);
return ret;
@@ -897,10 +984,10 @@ static bool inode_has_index(umode_t mode, u8 type)
}
}
static int cmp_index_lock(void *priv, struct list_head *A, struct list_head *B)
static int cmp_index_lock(void *priv, KC_LIST_CMP_CONST struct list_head *A, KC_LIST_CMP_CONST struct list_head *B)
{
struct index_lock *a = list_entry(A, struct index_lock, head);
struct index_lock *b = list_entry(B, struct index_lock, head);
KC_LIST_CMP_CONST struct index_lock *a = list_entry(A, KC_LIST_CMP_CONST struct index_lock, head);
KC_LIST_CMP_CONST struct index_lock *b = list_entry(B, KC_LIST_CMP_CONST struct index_lock, head);
return ((int)a->type - (int)b->type) ?:
scoutfs_cmp_u64s(a->major, b->major) ?:
@@ -1058,9 +1145,11 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
const u64 ino = scoutfs_ino(inode);
struct scoutfs_key key;
struct scoutfs_inode sinode;
struct scoutfs_key key;
int inode_bytes;
int ret;
int err;
@@ -1069,15 +1158,17 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
/* set the meta version once per trans for any inode updates */
scoutfs_inode_set_meta_seq(inode);
inode_bytes = scoutfs_inode_vers_bytes(sbi->fmt_vers);
/* only race with other inode field stores once */
store_inode(&sinode, inode);
store_inode(&sinode, inode, inode_bytes);
ret = update_indices(sb, si, ino, inode->i_mode, &sinode, lock_list, lock);
BUG_ON(ret);
scoutfs_inode_init_key(&key, ino);
err = scoutfs_item_update(sb, &key, &sinode, sizeof(sinode), lock);
err = scoutfs_item_update(sb, &key, &sinode, inode_bytes, lock);
if (err) {
scoutfs_err(sb, "inode %llu update err %d", ino, err);
BUG_ON(err);
@@ -1445,10 +1536,12 @@ out:
int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, dev_t rdev,
u64 ino, struct scoutfs_lock *lock, struct inode **inode_ret)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_inode_info *si;
struct scoutfs_key key;
struct scoutfs_inode sinode;
struct scoutfs_key key;
struct inode *inode;
int inode_bytes;
int ret;
inode = new_inode(sb);
@@ -1464,6 +1557,7 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
si->offline_blocks = 0;
si->next_readdir_pos = SCOUTFS_DIRENT_FIRST_POS;
si->next_xattr_id = 0;
si->proj = 0;
si->have_item = false;
atomic64_set(&si->last_refreshed, lock->refresh_gen);
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
@@ -1473,20 +1567,23 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
scoutfs_inode_set_data_seq(inode);
inode->i_ino = ino; /* XXX overflow */
inode_init_owner(inode, dir, mode);
inode_init_owner(KC_VFS_INIT_NS
inode, dir, mode);
inode_set_bytes(inode, 0);
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
inode->i_rdev = rdev;
set_inode_ops(inode);
store_inode(&sinode, inode);
inode_bytes = scoutfs_inode_vers_bytes(sbi->fmt_vers);
store_inode(&sinode, inode, inode_bytes);
scoutfs_inode_init_key(&key, scoutfs_ino(inode));
ret = scoutfs_omap_set(sb, ino);
if (ret < 0)
goto out;
ret = scoutfs_item_create(sb, &key, &sinode, sizeof(sinode), lock);
ret = scoutfs_item_create(sb, &key, &sinode, inode_bytes, lock);
if (ret < 0)
scoutfs_omap_clear(sb, ino);
out:
@@ -1740,7 +1837,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
}
scoutfs_inode_init_key(&key, ino);
ret = scoutfs_item_lookup_exact(sb, &key, &sinode, sizeof(sinode), lock);
ret = lookup_inode_item(sb, &key, &sinode, lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
@@ -2129,6 +2226,17 @@ out:
return ret;
}
/*
* Return an error if the inode has the retention flag set and cannot
* be modified. This mimics the errno returned by the vfs when an
* inode's immutable flag is set. The flag won't be set on older format
* versions so we don't check the mounted format version here.
*/
int scoutfs_inode_check_retention(struct inode *inode)
{
return (scoutfs_inode_get_flags(inode) & SCOUTFS_INO_FLAG_RETENTION) ? -EPERM : 0;
}
int scoutfs_inode_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);


@@ -21,8 +21,9 @@ struct scoutfs_inode_info {
u64 data_version;
u64 online_blocks;
u64 offline_blocks;
u64 proj;
u32 flags;
struct timespec crtime;
struct kc_timespec crtime;
/*
* Protects per-inode extent items, most particularly readers
@@ -47,7 +48,7 @@ struct scoutfs_inode_info {
atomic64_t last_refreshed;
/* initialized once for slab object */
seqcount_t seqcount;
seqlock_t seqlock;
bool staging; /* holder of i_mutex is staging */
struct scoutfs_per_task pt_data_lock;
struct scoutfs_data_waitq data_waitq;
@@ -120,12 +121,26 @@ u64 scoutfs_inode_meta_seq(struct inode *inode);
u64 scoutfs_inode_data_seq(struct inode *inode);
u64 scoutfs_inode_data_version(struct inode *inode);
void scoutfs_inode_get_onoff(struct inode *inode, s64 *on, s64 *off);
u32 scoutfs_inode_get_flags(struct inode *inode);
void scoutfs_inode_set_flags(struct inode *inode, u32 and, u32 or);
u64 scoutfs_inode_get_proj(struct inode *inode);
void scoutfs_inode_set_proj(struct inode *inode, u64 proj);
int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock);
int scoutfs_inode_check_retention(struct inode *inode);
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock);
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat);
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
#else
int scoutfs_getattr(KC_VFS_NS_DEF
const struct path *path, struct kstat *stat,
u32 request_mask, unsigned int query_flags);
#endif
int scoutfs_setattr(KC_VFS_NS_DEF
struct dentry *dentry, struct iattr *attr);
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
struct scoutfs_lock *primary);

File diff suppressed because it is too large.


@@ -559,4 +559,288 @@ struct scoutfs_ioctl_get_allocated_inos {
#define SCOUTFS_IOC_GET_ALLOCATED_INOS \
_IOW(SCOUTFS_IOCTL_MAGIC, 16, struct scoutfs_ioctl_get_allocated_inos)
/*
* Get directory entries that refer to a specific inode.
*
* @ino: The target ino that we're finding referring entries to.
* Constant across all the calls that make up an iteration over all the
* inode's entries.
*
* @dir_ino: The inode number of a directory containing the entry to our
* inode to search from. If this parent directory contains no more
* entries to our inode then we'll search through other parent directory
* inodes in inode order.
*
* @dir_pos: The position in the dir_ino parent directory of the entry
* to our inode to search from. If there is no entry at this position
* then we'll search through other entry positions in increasing order.
* If we exhaust the parent directory then we'll search through
* additional parent directories in inode order.
*
* @entries_ptr: A pointer to the buffer where found entries will be
* stored. The pointer must be aligned to 16 bytes.
*
* @entries_bytes: The size of the buffer that will contain entries.
*
* To start iterating set the desired target ino, dir_ino to 0, dir_pos
* to 0, and set entries_ptr and entries_bytes to a sufficiently large buffer.
* Each entry struct that's stored in the buffer adds some overhead so a
* large multiple of the largest possible name is a reasonable choice.
* (A few multiples of PATH_MAX perhaps.)
*
* Each call returns the total number of entries that were stored in the
* entries buffer. Zero is returned when the search was successful and
* no referring entries were found. The entries can be iterated over by
* advancing each starting struct offset by the total number of bytes in
* each entry. If the _LAST flag is set on an entry then there were no
* more entries referring to the inode at the time of the call and
* iteration can be stopped.
*
* To resume iteration set the next call's starting dir_ino and dir_pos
* to one past the last entry seen. Increment the last entry's dir_pos,
* and if it wrapped to 0, increment its dir_ino.
*
* This does not check that the caller has permission to read the
* entries found in each containing directory. It requires
* CAP_DAC_READ_SEARCH which bypasses path traversal permissions
* checking.
*
* Entries returned by a single call can reflect any combination of
* racing creation and removal of entries. Each entry existed at the
* time it was read though it may have changed in the time it took to
* return from the call. The set of entries returned may no longer
* reflect the current set of entries and may not have existed at the
* same time.
*
* This has no knowledge of the life cycle of the inode. It can return
* 0 when there are no referring entries because either the target inode
* doesn't exist, it is in the process of being deleted, or because it
* is still open while being unlinked.
*
* On success this returns the number of entries filled in the buffer.
* A return of 0 indicates that no entries referred to the inode.
*
* EINVAL is returned when there is a problem with the buffer. Either
* it was not aligned or it was not large enough for the first entry.
*
* Many other errnos indicate hard failure to find the next entry.
*/
struct scoutfs_ioctl_get_referring_entries {
__u64 ino;
__u64 dir_ino;
__u64 dir_pos;
__u64 entries_ptr;
__u64 entries_bytes;
};
/*
* @dir_ino: The inode of the directory containing the entry.
*
* @dir_pos: The readdir f_pos position of the entry within the
* directory.
*
* @ino: The inode number of the target of the entry.
*
* @flags: Flags associated with this entry.
*
* @d_type: Inode type as specified with DT_ enum values in readdir(3).
*
* @entry_bytes: The total bytes taken by the entry in memory, including
* the name and any alignment padding. The start of a following entry
* will be found after this number of bytes.
*
* @name_len: The number of bytes in the name not including the trailing
* null, as with strlen(3).
*
* @name: The null terminated name of the referring entry. In the
* struct definition this array is sized to naturally align the struct.
* That many padding bytes are not necessarily present in the buffer
* returned by _get_referring_entries.
*/
struct scoutfs_ioctl_dirent {
__u64 dir_ino;
__u64 dir_pos;
__u64 ino;
__u16 entry_bytes;
__u8 flags;
__u8 d_type;
__u8 name_len;
__u8 name[3];
};
#define SCOUTFS_IOCTL_DIRENT_FLAG_LAST (1 << 0)
#define SCOUTFS_IOC_GET_REFERRING_ENTRIES \
_IOW(SCOUTFS_IOCTL_MAGIC, 17, struct scoutfs_ioctl_get_referring_entries)
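A hedged user-space sketch of the iteration described above (error handling is trimmed; the headers, buffer size, and descriptor are assumptions, with fd taken to be any open file on the mount):
/* Sketch: print every entry that refers to ino, resuming past the last entry seen. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/types.h>

static int list_referring_entries(int fd, __u64 ino)
{
	struct scoutfs_ioctl_get_referring_entries gre = { .ino = ino };
	size_t buf_bytes = 4 * PATH_MAX;	/* "a few multiples of PATH_MAX" */
	struct scoutfs_ioctl_dirent *dent;
	void *buf = NULL;
	long nr;
	long i;

	if (posix_memalign(&buf, 16, buf_bytes))	/* entries_ptr must be 16-byte aligned */
		return -1;

	gre.entries_ptr = (unsigned long)buf;
	gre.entries_bytes = buf_bytes;

	for (;;) {
		nr = ioctl(fd, SCOUTFS_IOC_GET_REFERRING_ENTRIES, &gre);
		if (nr <= 0)
			break;			/* 0: no referring entries, < 0: error */

		dent = buf;
		for (i = 0; i < nr; i++) {
			printf("dir %llu pos %llu name %s\n",
			       (unsigned long long)dent->dir_ino,
			       (unsigned long long)dent->dir_pos, dent->name);
			if (dent->flags & SCOUTFS_IOCTL_DIRENT_FLAG_LAST)
				goto done;
			/* resume one past this entry: bump dir_pos, carry into dir_ino on wrap */
			gre.dir_ino = dent->dir_ino;
			gre.dir_pos = dent->dir_pos + 1;
			if (gre.dir_pos == 0)
				gre.dir_ino++;
			dent = (struct scoutfs_ioctl_dirent *)((char *)dent + dent->entry_bytes);
		}
	}
done:
	free(buf);
	return nr < 0 ? -1 : 0;
}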
struct scoutfs_ioctl_inode_attr_x {
__u64 x_mask;
__u64 x_flags;
__u64 meta_seq;
__u64 data_seq;
__u64 data_version;
__u64 online_blocks;
__u64 offline_blocks;
__u64 ctime_sec;
__u32 ctime_nsec;
__u32 crtime_nsec;
__u64 crtime_sec;
__u64 size;
__u64 bits;
__u64 project_id;
};
/*
* Behavioral flags set in the x_flags field. These flags don't
* necessarily correspond to specific attributes, but instead change the
* behaviour of a _get_ or _set_ operation.
*
* @SCOUTFS_IOC_IAX_F_SIZE_OFFLINE: When setting i_size, also create
* extents which are marked offline for the region of the file from
* offset 0 to the new set size. This can only be set when setting the
* size and has no effect if setting the size fails.
*/
#define SCOUTFS_IOC_IAX_F_SIZE_OFFLINE (1ULL << 0)
#define SCOUTFS_IOC_IAX_F__UNKNOWN (U64_MAX << 1)
/*
* Single-bit values stored in the @bits field. These indicate whether
* the bit is set, or not. The main _IAX_ bits set in the mask indicate
* whether this value bit is populated by _get or stored by _set.
*/
#define SCOUTFS_IOC_IAX_B_RETENTION (1ULL << 0)
/*
* x_mask bits which indicate which attributes of the inode to populate
* on return for _get or to set on the inode for _set. Each mask bit
* corresponds to the matching named field in the attr_x struct passed
* to the _get_ and _set_ calls.
*
* Each field can have different permissions or other attribute
* requirements which can cause calls to fail. If _set_ fails then no
* other attribute changes will have been made by the same call.
*
* @SCOUTFS_IOC_IAX_RETENTION: Mark a file for retention. When marked,
* no modification can be made to the file other than changing extended
* attributes outside the "user." prefix and clearing the retention
* mark. This can only be set on regular files and requires root (the
* CAP_SYS_ADMIN capability). Other attributes can be set with a
* set_attr_x call on a retention inode as long as that call also
* successfully clears the retention mark.
*/
#define SCOUTFS_IOC_IAX_META_SEQ (1ULL << 0)
#define SCOUTFS_IOC_IAX_DATA_SEQ (1ULL << 1)
#define SCOUTFS_IOC_IAX_DATA_VERSION (1ULL << 2)
#define SCOUTFS_IOC_IAX_ONLINE_BLOCKS (1ULL << 3)
#define SCOUTFS_IOC_IAX_OFFLINE_BLOCKS (1ULL << 4)
#define SCOUTFS_IOC_IAX_CTIME (1ULL << 5)
#define SCOUTFS_IOC_IAX_CRTIME (1ULL << 6)
#define SCOUTFS_IOC_IAX_SIZE (1ULL << 7)
#define SCOUTFS_IOC_IAX_RETENTION (1ULL << 8)
#define SCOUTFS_IOC_IAX_PROJECT_ID (1ULL << 9)
/* single bit attributes that are packed in the bits field as _B_ */
#define SCOUTFS_IOC_IAX__BITS (SCOUTFS_IOC_IAX_RETENTION)
/* inverse of all the bits we understand */
#define SCOUTFS_IOC_IAX__UNKNOWN (U64_MAX << 10)
#define SCOUTFS_IOC_GET_ATTR_X \
_IOW(SCOUTFS_IOCTL_MAGIC, 18, struct scoutfs_ioctl_inode_attr_x)
#define SCOUTFS_IOC_SET_ATTR_X \
_IOW(SCOUTFS_IOCTL_MAGIC, 19, struct scoutfs_ioctl_inode_attr_x)
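A minimal sketch of flipping the retention bit with the _SET_ call, assuming fd refers to the regular file in question and the caller has CAP_SYS_ADMIN:
/* Sketch: mark or clear retention on a regular file. */
static int set_retention(int fd, int on)
{
	struct scoutfs_ioctl_inode_attr_x iax = {
		.x_mask = SCOUTFS_IOC_IAX_RETENTION,
		.bits = on ? SCOUTFS_IOC_IAX_B_RETENTION : 0,
	};

	return ioctl(fd, SCOUTFS_IOC_SET_ATTR_X, &iax);
}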
/*
* (These fields are documented in the order that they're displayed by
* the scoutfs cli utility which matches the sort order of the rules.)
*
* @prio: The priority of the rule. Rules are sorted by their fields
* with prio as the most significant field. When multiple rules match,
* the rule with the highest sort order is enforced. The priority field
* lets rules override the default field sort order.
*
* @name_val[3]: The three 64bit values that make up the name of the
* totl xattr whose total will be checked against the rule's limit to
* see if the quota rule has been exceeded. The behavior of the values
* can be changed by their corresponding name_source and name_flags.
*
* @name_source[3]: The SQ_NS_ enums that control where the value comes
* from. _LITERAL uses the value from name_val. Inode attribute
* sources (_PROJ, _UID, _GID) are taken from the inode of the operation
* that is being checked against the rule.
*
* @name_flags[3]: The SQ_NF_ enums that alter the name values. _SELECT
* makes the rule only match if the inode attribute of the operation
* matches the attribute value stored in name_val. This lets rules
* match a specific value of an attribute rather than mapping all
* attribute values of to totl names.
*
* @op: The SQ_OP_ enums which specify the operation that can't exceed
* the rule's limit. _INODE checks inode creation and the inode
* attributes are taken from the inode that would be created. _DATA
* checks file data block allocation and the inode fields come from the
* inode that is allocating the blocks.
*
* @limit: The 64bit value that is checked against the totl value
* described by the rule. If the totl value is greater than or equal to
* this value of the matching rule then the operation will return
* -EDQUOT.
*
* @rule_flags: SQ_RF_TOTL_COUNT indicates that the rule's limit should
* be checked against the number of xattrs contributing to a totl value
* instead of the sum of the xattrs.
*/
struct scoutfs_ioctl_quota_rule {
__u64 name_val[3];
__u64 limit;
__u8 prio;
__u8 op;
__u8 rule_flags;
__u8 name_source[3];
__u8 name_flags[3];
__u8 _pad[7];
};
struct scoutfs_ioctl_get_quota_rules {
__u64 iterator[2];
__u64 rules_ptr;
__u64 rules_nr;
};
/*
* Rules are uniquely identified by their non-padded fields. Addition will fail
* with -EEXIST if the specified rule already exists, and deletion must find a rule
* with all matching fields to delete.
*/
#define SCOUTFS_IOC_GET_QUOTA_RULES \
_IOR(SCOUTFS_IOCTL_MAGIC, 20, struct scoutfs_ioctl_get_quota_rules)
#define SCOUTFS_IOC_ADD_QUOTA_RULE \
_IOW(SCOUTFS_IOCTL_MAGIC, 21, struct scoutfs_ioctl_quota_rule)
#define SCOUTFS_IOC_DEL_QUOTA_RULE \
_IOW(SCOUTFS_IOCTL_MAGIC, 22, struct scoutfs_ioctl_quota_rule)
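As a sketch of the rule encoding only (the literal name values, limit, and slot layout here are illustrative assumptions, not necessarily how the scoutfs utilities build rules), a rule that checks inode creation by uid 1000 against a totl limit might look like:
/* Sketch: add a quota rule that matches inode creation for uid 1000. */
static int add_uid_inode_rule(int fd)
{
	struct scoutfs_ioctl_quota_rule rule = {
		.name_val = { 1, 0, 1000 },		/* made-up literals; 1000 selects the uid */
		.limit = 1000000,			/* illustrative limit */
		.prio = 0,
		.op = SQ_OP_INODE,
		.name_source = { SQ_NS_LITERAL, SQ_NS_LITERAL, SQ_NS_UID },
		.name_flags = { 0, 0, SQ_NF_SELECT },
	};

	return ioctl(fd, SCOUTFS_IOC_ADD_QUOTA_RULE, &rule);
}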
/*
* Inodes can be indexed in a global key space at a position determined
* by a .indx. tagged xattr. The xattr name specifies the two index
* position values, with major having the more significant comparison
* order.
*/
struct scoutfs_ioctl_xattr_index_entry {
__u64 minor;
__u64 ino;
__u8 major;
__u8 _pad[7];
};
struct scoutfs_ioctl_read_xattr_index {
__u64 flags;
struct scoutfs_ioctl_xattr_index_entry first;
struct scoutfs_ioctl_xattr_index_entry last;
__u64 entries_ptr;
__u64 entries_nr;
};
#define SCOUTFS_IOC_READ_XATTR_INDEX \
_IOR(SCOUTFS_IOCTL_MAGIC, 23, struct scoutfs_ioctl_read_xattr_index)
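A hedged sketch of reading a slice of the index (the range endpoints are arbitrary and the return convention is assumed to be the number of entries filled):
/* Sketch: read up to 128 index entries with major 1, across all minors and inodes. */
static long read_indx_entries(int fd, struct scoutfs_ioctl_xattr_index_entry *ents)
{
	struct scoutfs_ioctl_read_xattr_index rxi = {
		.first = { .major = 1, .minor = 0, .ino = 0 },
		.last = { .major = 1, .minor = ~0ULL, .ino = ~0ULL },
		.entries_ptr = (unsigned long)ents,
		.entries_nr = 128,
	};

	return ioctl(fd, SCOUTFS_IOC_READ_XATTR_INDEX, &rxi);
}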
#endif


@@ -24,9 +24,11 @@
#include "item.h"
#include "forest.h"
#include "block.h"
#include "msg.h"
#include "trans.h"
#include "counters.h"
#include "scoutfs_trace.h"
#include "util.h"
/*
* The item cache maintains a consistent view of items that are read
@@ -76,8 +78,10 @@ struct item_cache_info {
/* almost always read, barely written */
struct super_block *sb;
struct item_percpu_pages __percpu *pcpu_pages;
struct shrinker shrinker;
KC_DEFINE_SHRINKER(shrinker);
#ifdef KC_CPU_NOTIFIER
struct notifier_block notifier;
#endif
/* often walked, but per-cpu refs are fast path */
rwlock_t rwlock;
@@ -1667,13 +1671,24 @@ out:
return ret;
}
static int lock_safe(struct scoutfs_lock *lock, struct scoutfs_key *key,
static int lock_safe(struct super_block *sb, struct scoutfs_lock *lock, struct scoutfs_key *key,
int mode)
{
if (WARN_ON_ONCE(!scoutfs_lock_protected(lock, key, mode)))
bool prot = scoutfs_lock_protected(lock, key, mode);
if (!prot) {
static bool once = false;
if (!once) {
scoutfs_err(sb, "lock (start "SK_FMT" end "SK_FMT" mode 0x%x) does not protect operation (key "SK_FMT" mode 0x%x)",
SK_ARG(&lock->start), SK_ARG(&lock->end), lock->mode,
SK_ARG(key), mode);
dump_stack();
once = true;
}
return -EINVAL;
else
return 0;
}
return 0;
}
static int optional_lock_mode_match(struct scoutfs_lock *lock, int mode)
@@ -1705,8 +1720,8 @@ static int copy_val(void *dst, int dst_len, void *src, int src_len)
* The amount of bytes copied is returned which can be 0 or truncated if
* the caller's buffer isn't big enough.
*/
int scoutfs_item_lookup(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
static int item_lookup(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, int len_limit, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
struct cached_item *item;
@@ -1715,7 +1730,7 @@ int scoutfs_item_lookup(struct super_block *sb, struct scoutfs_key *key,
scoutfs_inc_counter(sb, item_lookup);
if ((ret = lock_safe(lock, key, SCOUTFS_LOCK_READ)))
if ((ret = lock_safe(sb, lock, key, SCOUTFS_LOCK_READ)))
goto out;
ret = get_cached_page(sb, cinf, lock, key, false, false, 0, &pg);
@@ -1726,6 +1741,8 @@ int scoutfs_item_lookup(struct super_block *sb, struct scoutfs_key *key,
item = item_rbtree_walk(&pg->item_root, key, NULL, NULL, NULL);
if (!item || item->deletion)
ret = -ENOENT;
else if (len_limit > 0 && item->val_len > len_limit)
ret = -EIO;
else
ret = copy_val(val, val_len, item->val, item->val_len);
@@ -1734,13 +1751,38 @@ out:
return ret;
}
int scoutfs_item_lookup(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
return item_lookup(sb, key, val, val_len, 0, lock);
}
/*
* Copy an item's value into the caller's buffer. If the item's value
* is larger than the caller's buffer then -EIO is returned. If the
* item is smaller then the bytes from the end of the copied value to
* the end of the buffer are zeroed. The number of value bytes copied
* is returned, and 0 can be returned for an item with no value.
*/
int scoutfs_item_lookup_smaller_zero(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
int ret;
ret = item_lookup(sb, key, val, val_len, val_len, lock);
if (ret >= 0 && ret < val_len)
memset(val + ret, 0, val_len - ret);
return ret;
}
int scoutfs_item_lookup_exact(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock)
{
int ret;
ret = scoutfs_item_lookup(sb, key, val, val_len, lock);
ret = item_lookup(sb, key, val, val_len, 0, lock);
if (ret == val_len)
ret = 0;
else if (ret >= 0)
@@ -1790,7 +1832,7 @@ int scoutfs_item_next(struct super_block *sb, struct scoutfs_key *key,
goto out;
}
if ((ret = lock_safe(lock, key, SCOUTFS_LOCK_READ)))
if ((ret = lock_safe(sb, lock, key, SCOUTFS_LOCK_READ)))
goto out;
pos = *key;
@@ -1871,7 +1913,7 @@ int scoutfs_item_dirty(struct super_block *sb, struct scoutfs_key *key,
scoutfs_inc_counter(sb, item_dirty);
if ((ret = lock_safe(lock, key, SCOUTFS_LOCK_WRITE)))
if ((ret = lock_safe(sb, lock, key, SCOUTFS_LOCK_WRITE)))
goto out;
ret = scoutfs_forest_set_bloom_bits(sb, lock);
@@ -1917,7 +1959,7 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
scoutfs_inc_counter(sb, item_create);
if ((ret = lock_safe(lock, key, mode)) ||
if ((ret = lock_safe(sb, lock, key, mode)) ||
(ret = optional_lock_mode_match(primary, SCOUTFS_LOCK_WRITE)))
goto out;
@@ -1960,7 +2002,7 @@ int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
return item_create(sb, key, val, val_len, lock, NULL,
SCOUTFS_LOCK_READ, false);
SCOUTFS_LOCK_WRITE, false);
}
int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
@@ -1991,7 +2033,7 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
scoutfs_inc_counter(sb, item_update);
if ((ret = lock_safe(lock, key, SCOUTFS_LOCK_WRITE)))
if ((ret = lock_safe(sb, lock, key, SCOUTFS_LOCK_WRITE)))
goto out;
ret = scoutfs_forest_set_bloom_bits(sb, lock);
@@ -2059,7 +2101,7 @@ int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
scoutfs_inc_counter(sb, item_delta);
if ((ret = lock_safe(lock, key, SCOUTFS_LOCK_WRITE_ONLY)))
if ((ret = lock_safe(sb, lock, key, SCOUTFS_LOCK_WRITE_ONLY)))
goto out;
ret = scoutfs_forest_set_bloom_bits(sb, lock);
@@ -2132,7 +2174,7 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
scoutfs_inc_counter(sb, item_delete);
if ((ret = lock_safe(lock, key, mode)) ||
if ((ret = lock_safe(sb, lock, key, mode)) ||
(ret = optional_lock_mode_match(primary, SCOUTFS_LOCK_WRITE)))
goto out;
@@ -2199,18 +2241,18 @@ u64 scoutfs_item_dirty_pages(struct super_block *sb)
return (u64)atomic_read(&cinf->dirty_pages);
}
static int cmp_pg_start(void *priv, struct list_head *A, struct list_head *B)
static int cmp_pg_start(void *priv, KC_LIST_CMP_CONST struct list_head *A, KC_LIST_CMP_CONST struct list_head *B)
{
struct cached_page *a = list_entry(A, struct cached_page, dirty_head);
struct cached_page *b = list_entry(B, struct cached_page, dirty_head);
KC_LIST_CMP_CONST struct cached_page *a = list_entry(A, KC_LIST_CMP_CONST struct cached_page, dirty_head);
KC_LIST_CMP_CONST struct cached_page *b = list_entry(B, KC_LIST_CMP_CONST struct cached_page, dirty_head);
return scoutfs_key_compare(&a->start, &b->start);
}
static int cmp_item_key(void *priv, struct list_head *A, struct list_head *B)
static int cmp_item_key(void *priv, KC_LIST_CMP_CONST struct list_head *A, KC_LIST_CMP_CONST struct list_head *B)
{
struct cached_item *a = list_entry(A, struct cached_item, dirty_head);
struct cached_item *b = list_entry(B, struct cached_item, dirty_head);
KC_LIST_CMP_CONST struct cached_item *a = list_entry(A, KC_LIST_CMP_CONST struct cached_item, dirty_head);
KC_LIST_CMP_CONST struct cached_item *b = list_entry(B, KC_LIST_CMP_CONST struct cached_item, dirty_head);
return scoutfs_key_compare(&a->key, &b->key);
}
@@ -2277,7 +2319,7 @@ int scoutfs_item_write_dirty(struct super_block *sb)
ret = -ENOMEM;
goto out;
}
list_add(&page->list, &pages);
list_add(&page->lru, &pages);
first = NULL;
prev = &first;
@@ -2290,7 +2332,7 @@ int scoutfs_item_write_dirty(struct super_block *sb)
ret = -ENOMEM;
goto out;
}
list_add(&second->list, &pages);
list_add(&second->lru, &pages);
}
/* read lock next sorted page, we're only dirty_list user */
@@ -2347,8 +2389,8 @@ int scoutfs_item_write_dirty(struct super_block *sb)
/* write all the dirty items into log btree blocks */
ret = scoutfs_forest_insert_list(sb, first);
out:
list_for_each_entry_safe(page, second, &pages, list) {
list_del_init(&page->list);
list_for_each_entry_safe(page, second, &pages, lru) {
list_del_init(&page->lru);
__free_page(page);
}
@@ -2530,27 +2572,35 @@ retry:
put_pg(sb, right);
}
static unsigned long item_cache_count_objects(struct shrinker *shrink,
struct shrink_control *sc)
{
struct item_cache_info *cinf = KC_SHRINKER_CONTAINER_OF(shrink, struct item_cache_info);
struct super_block *sb = cinf->sb;
scoutfs_inc_counter(sb, item_cache_count_objects);
return shrinker_min_long(cinf->lru_pages);
}
/*
* Shrink the size the item cache. We're operating against the fast
* path lock ordering and we skip pages if we can't acquire locks. We
* can run into dirty pages or pages with items that weren't visible to
* the earliest active reader which must be skipped.
*/
static int item_lru_shrink(struct shrinker *shrink,
struct shrink_control *sc)
static unsigned long item_cache_scan_objects(struct shrinker *shrink,
struct shrink_control *sc)
{
struct item_cache_info *cinf = container_of(shrink,
struct item_cache_info,
shrinker);
struct item_cache_info *cinf = KC_SHRINKER_CONTAINER_OF(shrink, struct item_cache_info);
struct super_block *sb = cinf->sb;
struct cached_page *tmp;
struct cached_page *pg;
unsigned long freed = 0;
u64 first_reader_seq;
int nr;
int nr = sc->nr_to_scan;
if (sc->nr_to_scan == 0)
goto out;
nr = sc->nr_to_scan;
scoutfs_inc_counter(sb, item_cache_scan_objects);
/* can't invalidate pages with items that weren't visible to first reader */
first_reader_seq = first_active_reader_seq(cinf);
@@ -2582,6 +2632,7 @@ static int item_lru_shrink(struct shrinker *shrink,
rbtree_erase(&pg->node, &cinf->pg_root);
invalidate_pcpu_page(pg);
write_unlock(&pg->rwlock);
freed++;
put_pg(sb, pg);
@@ -2591,10 +2642,11 @@ static int item_lru_shrink(struct shrinker *shrink,
write_unlock(&cinf->rwlock);
spin_unlock(&cinf->lru_lock);
out:
return min_t(unsigned long, cinf->lru_pages, INT_MAX);
return freed;
}
#ifdef KC_CPU_NOTIFIER
static int item_cpu_callback(struct notifier_block *nfb,
unsigned long action, void *hcpu)
{
@@ -2609,6 +2661,7 @@ static int item_cpu_callback(struct notifier_block *nfb,
return NOTIFY_OK;
}
#endif
int scoutfs_item_setup(struct super_block *sb)
{
@@ -2638,11 +2691,13 @@ int scoutfs_item_setup(struct super_block *sb)
for_each_possible_cpu(cpu)
init_pcpu_pages(cinf, cpu);
cinf->shrinker.shrink = item_lru_shrink;
cinf->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&cinf->shrinker);
KC_INIT_SHRINKER_FUNCS(&cinf->shrinker, item_cache_count_objects,
item_cache_scan_objects);
KC_REGISTER_SHRINKER(&cinf->shrinker, "scoutfs-item:" SCSBF, SCSB_ARGS(sb));
#ifdef KC_CPU_NOTIFIER
cinf->notifier.notifier_call = item_cpu_callback;
register_hotcpu_notifier(&cinf->notifier);
#endif
sbi->item_cache_info = cinf;
return 0;
@@ -2662,8 +2717,10 @@ void scoutfs_item_destroy(struct super_block *sb)
if (cinf) {
BUG_ON(!list_empty(&cinf->active_list));
#ifdef KC_CPU_NOTIFIER
unregister_hotcpu_notifier(&cinf->notifier);
unregister_shrinker(&cinf->shrinker);
#endif
KC_UNREGISTER_SHRINKER(&cinf->shrinker);
for_each_possible_cpu(cpu)
drop_pcpu_pages(sb, cinf, cpu);


@@ -3,6 +3,8 @@
int scoutfs_item_lookup(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_lookup_smaller_zero(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_lookup_exact(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock);

kmod/src/kernelcompat.c (new file, 83 lines)

@@ -0,0 +1,83 @@
#include <linux/uio.h>
#include "kernelcompat.h"
#ifdef KC_SHRINKER_SHRINK
#include <linux/shrinker.h>
/*
* If a target kernel doesn't have the .{count,scan}_objects() shrinker
* interface then we provide a .shrink() helper that performs the shrink
* work in terms of count/scan.
*/
int kc_shrink_wrapper_fn(struct shrinker *shrink, struct shrink_control *sc)
{
struct kc_shrinker_wrapper *wrapper = container_of(shrink, struct kc_shrinker_wrapper, shrink);
unsigned long nr;
unsigned long rc;
if (sc->nr_to_scan != 0) {
rc = wrapper->scan_objects(shrink, sc);
/* translate magic values to the equivalent for older kernels */
if (rc == SHRINK_STOP)
return -1;
else if (rc == SHRINK_EMPTY)
return 0;
}
nr = wrapper->count_objects(shrink, sc);
return min_t(unsigned long, nr, INT_MAX);
}
#endif
#ifndef KC_CURRENT_TIME_INODE
struct timespec64 kc_current_time(struct inode *inode)
{
struct timespec64 now;
unsigned gran;
getnstimeofday64(&now);
if (unlikely(!inode->i_sb)) {
WARN(1, "current_time() called with uninitialized super_block in the inode");
return now;
}
gran = inode->i_sb->s_time_gran;
/* Avoid division in the common cases 1 ns and 1 s. */
if (gran == 1) {
/* nothing */
} else if (gran == NSEC_PER_SEC) {
now.tv_nsec = 0;
} else if (gran > 1 && gran < NSEC_PER_SEC) {
now.tv_nsec -= now.tv_nsec % gran;
} else {
WARN(1, "illegal file time granularity: %u", gran);
}
return now;
}
#endif
#ifndef KC_GENERIC_FILE_BUFFERED_WRITE
ssize_t
kc_generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos, loff_t *ppos,
size_t count, ssize_t written)
{
ssize_t status;
struct iov_iter i;
iov_iter_init(&i, WRITE, iov, nr_segs, count);
status = kc_generic_perform_write(iocb, &i, pos);
if (likely(status >= 0)) {
written += status;
*ppos = pos + status;
}
return written ? written : status;
}
#endif


@@ -1,8 +1,35 @@
#ifndef _SCOUTFS_KERNELCOMPAT_H_
#define _SCOUTFS_KERNELCOMPAT_H_
#ifndef KC_ITERATE_DIR_CONTEXT
#include <linux/kernel.h>
#include <linux/fs.h>
/*
* v4.15-rc3-4-gae5e165d855d
*
* new API for handling inode->i_version. The header has to be
* included wherever the API is used; we include it here once for
* convenience instead of at each use site.
*/
#ifdef KC_NEED_LINUX_IVERSION_H
#include <linux/iversion.h>
#else
/*
* Kernels before the above version fall back to manipulating
* inode->i_version directly with degraded methods.
*/
#define inode_set_iversion_queried(inode, val) \
do { \
(inode)->i_version = val; \
} while (0)
#define inode_peek_iversion(inode) \
({ \
(inode)->i_version; \
})
#endif
#ifndef KC_ITERATE_DIR_CONTEXT
typedef filldir_t kc_readdir_ctx_t;
#define KC_DECLARE_READDIR(name, file, dirent, ctx) name(file, dirent, ctx)
#define KC_FOP_READDIR readdir
@@ -52,4 +79,363 @@ static inline int dir_emit_dots(struct file *file, void *dirent,
#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(acl)
#endif
/*
* v3.6-rc1-24-gdbf2576e37da
*
* All workqueues are now non-reentrant, and the bit flag is removed
* shortly after its uses were removed.
*/
#ifndef WQ_NON_REENTRANT
#define WQ_NON_REENTRANT 0
#endif
/*
* v3.18-rc2-19-gb5ae6b15bd73
*
* Folds d_materialise_unique into d_splice_alias. Note reversal
* of arguments (Also note Documentation/filesystems/porting.rst)
*/
#ifndef KC_D_MATERIALISE_UNIQUE
#define d_materialise_unique(dentry, inode) d_splice_alias(inode, dentry)
#endif
/*
* v4.8-rc1-29-g31051c85b5e2
*
* fall back to inode_change_ok() if setattr_prepare() isn't available
*/
#ifndef KC_SETATTR_PREPARE
#define setattr_prepare(dentry, attr) inode_change_ok(d_inode(dentry), attr)
#endif
#ifndef KC___POSIX_ACL_CREATE
#define __posix_acl_create posix_acl_create
#define __posix_acl_chmod posix_acl_chmod
#endif
#ifndef KC_PERCPU_COUNTER_ADD_BATCH
#define percpu_counter_add_batch __percpu_counter_add
#endif
#ifndef KC_MEMALLOC_NOFS_SAVE
#define memalloc_nofs_save memalloc_noio_save
#define memalloc_nofs_restore memalloc_noio_restore
#endif
#ifdef KC_BIO_BI_OPF
#define kc_bio_get_opf(bio) \
({ \
(bio)->bi_opf; \
})
#define kc_bio_set_opf(bio, opf) \
do { \
(bio)->bi_opf = opf; \
} while (0)
#define kc_bio_set_sector(bio, sect) \
do { \
(bio)->bi_iter.bi_sector = sect;\
} while (0)
#define kc_submit_bio(bio) submit_bio(bio)
#else
#define kc_bio_get_opf(bio) \
({ \
(bio)->bi_rw; \
})
#define kc_bio_set_opf(bio, opf) \
do { \
(bio)->bi_rw = opf; \
} while (0)
#define kc_bio_set_sector(bio, sect) \
do { \
(bio)->bi_sector = sect; \
} while (0)
#define kc_submit_bio(bio) \
do { \
submit_bio((bio)->bi_rw, bio); \
} while (0)
#define bio_set_dev(bio, bdev) \
do { \
(bio)->bi_bdev = (bdev); \
} while (0)
#endif
#ifdef KC_BIO_BI_STATUS
#define KC_DECLARE_BIO_END_IO(name, bio) name(bio)
#define kc_bio_get_errno(bio) ({ blk_status_to_errno((bio)->bi_status); })
#else
#define KC_DECLARE_BIO_END_IO(name, bio) name(bio, int _error_arg)
#define kc_bio_get_errno(bio) ({ (int)((void)(bio), _error_arg); })
#endif
/*
* v4.13-rc1-6-ge462ec50cb5f
*
* MS_* (mount) flags from <linux/mount.h> should not be used in the kernel
* anymore from 4.x onwards. Instead, we need to use the SB_* (superblock) flags
*/
#ifndef SB_POSIXACL
#define SB_POSIXACL MS_POSIXACL
#define SB_I_VERSION MS_I_VERSION
#endif
#ifndef KC_CURRENT_TIME_INODE
struct timespec64 kc_current_time(struct inode *inode);
#define current_time kc_current_time
#define kc_timespec timespec
#else
#define kc_timespec timespec64
#endif
#ifndef KC_SHRINKER_SHRINK
#define KC_DEFINE_SHRINKER(name) struct shrinker name
#define KC_INIT_SHRINKER_FUNCS(name, countfn, scanfn) do { \
__typeof__(name) _shrink = (name); \
_shrink->count_objects = (countfn); \
_shrink->scan_objects = (scanfn); \
_shrink->seeks = DEFAULT_SEEKS; \
} while (0)
#define KC_SHRINKER_CONTAINER_OF(ptr, type) container_of(ptr, type, shrinker)
#ifdef KC_SHRINKER_NAME
#define KC_REGISTER_SHRINKER register_shrinker
#else
#define KC_REGISTER_SHRINKER(ptr, fmt, ...) (register_shrinker(ptr))
#endif /* KC_SHRINKER_NAME */
#define KC_UNREGISTER_SHRINKER(ptr) (unregister_shrinker(ptr))
#define KC_SHRINKER_FN(ptr) (ptr)
#else
#include <linux/shrinker.h>
#ifndef SHRINK_STOP
#define SHRINK_STOP (~0UL)
#define SHRINK_EMPTY (~0UL - 1)
#endif
int kc_shrink_wrapper_fn(struct shrinker *shrink, struct shrink_control *sc);
struct kc_shrinker_wrapper {
unsigned long (*count_objects)(struct shrinker *, struct shrink_control *sc);
unsigned long (*scan_objects)(struct shrinker *, struct shrink_control *sc);
struct shrinker shrink;
};
#define KC_DEFINE_SHRINKER(name) struct kc_shrinker_wrapper name;
#define KC_INIT_SHRINKER_FUNCS(name, countfn, scanfn) do { \
struct kc_shrinker_wrapper *_wrap = (name); \
_wrap->count_objects = (countfn); \
_wrap->scan_objects = (scanfn); \
_wrap->shrink.shrink = kc_shrink_wrapper_fn; \
_wrap->shrink.seeks = DEFAULT_SEEKS; \
} while (0)
#define KC_SHRINKER_CONTAINER_OF(ptr, type) container_of(container_of(ptr, struct kc_shrinker_wrapper, shrink), type, shrinker)
#define KC_REGISTER_SHRINKER(ptr, fmt, ...) (register_shrinker(ptr.shrink))
#define KC_UNREGISTER_SHRINKER(ptr) (unregister_shrinker(ptr.shrink))
#define KC_SHRINKER_FN(ptr) (ptr.shrink)
#endif /* KC_SHRINKER_SHRINK */
#ifdef KC_KERNEL_GETSOCKNAME_ADDRLEN
#include <linux/net.h>
#include <linux/inet.h>
static inline int kc_kernel_getsockname(struct socket *sock, struct sockaddr *addr)
{
int addrlen = sizeof(struct sockaddr_in);
int ret = kernel_getsockname(sock, addr, &addrlen);
if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
return -EAFNOSUPPORT;
else if (ret < 0)
return ret;
return sizeof(struct sockaddr_in);
}
static inline int kc_kernel_getpeername(struct socket *sock, struct sockaddr *addr)
{
int addrlen = sizeof(struct sockaddr_in);
int ret = kernel_getpeername(sock, addr, &addrlen);
if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
return -EAFNOSUPPORT;
else if (ret < 0)
return ret;
return sizeof(struct sockaddr_in);
}
#else
#define kc_kernel_getsockname(sock, addr) kernel_getsockname(sock, addr)
#define kc_kernel_getpeername(sock, addr) kernel_getpeername(sock, addr)
#endif
#ifdef KC_SOCK_CREATE_KERN_NET
#define kc_sock_create_kern(family, type, proto, res) sock_create_kern(&init_net, family, type, proto, res)
#else
#define kc_sock_create_kern sock_create_kern
#endif
#ifndef KC_GENERIC_FILE_BUFFERED_WRITE
ssize_t kc_generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos, loff_t *ppos,
size_t count, ssize_t written);
#define generic_file_buffered_write kc_generic_file_buffered_write
#ifdef KC_GENERIC_PERFORM_WRITE_KIOCB_IOV_ITER
static inline int kc_generic_perform_write(struct kiocb *iocb, struct iov_iter *iter, loff_t pos)
{
iocb->ki_pos = pos;
return generic_perform_write(iocb, iter);
}
#else
static inline int kc_generic_perform_write(struct kiocb *iocb, struct iov_iter *iter, loff_t pos)
{
struct file *file = iocb->ki_filp;
return generic_perform_write(file, iter, pos);
}
#endif
#endif // KC_GENERIC_FILE_BUFFERED_WRITE
#ifndef KC_HAVE_BLK_OPF_T
/* typedef __u32 __bitwise blk_opf_t; */
typedef unsigned int blk_opf_t;
#endif
#ifdef KC_LIST_CMP_CONST_ARG_LIST_HEAD
#define KC_LIST_CMP_CONST const
#else
#define KC_LIST_CMP_CONST
#endif
#ifdef KC_VMALLOC_PGPROT_T
#define kc__vmalloc(size, gfp_mask) __vmalloc(size, gfp_mask, PAGE_KERNEL)
#else
#define kc__vmalloc __vmalloc
#endif
#ifdef KC_VFS_METHOD_USER_NAMESPACE_ARG
#define KC_VFS_NS_DEF struct user_namespace *mnt_user_ns,
#define KC_VFS_NS mnt_user_ns,
#define KC_VFS_INIT_NS &init_user_ns,
#else
#define KC_VFS_NS_DEF
#define KC_VFS_NS
#define KC_VFS_INIT_NS
#endif
#ifdef KC_BIO_ALLOC_DEV_OPF_ARGS
#define kc_bio_alloc bio_alloc
#else
#include <linux/bio.h>
static inline struct bio *kc_bio_alloc(struct block_device *bdev, unsigned short nr_vecs,
blk_opf_t opf, gfp_t gfp_mask)
{
struct bio *b = bio_alloc(gfp_mask, nr_vecs);
if (b) {
kc_bio_set_opf(b, opf);
bio_set_dev(b, bdev);
}
return b;
}
#endif
#ifndef KC_FIEMAP_PREP
#define fiemap_prep(inode, fieinfo, start, len, flags) fiemap_check_flags(fieinfo, flags)
#endif
#ifndef KC_KERNEL_OLD_TIMEVAL_STRUCT
#define __kernel_old_timeval timeval
#define ns_to_kernel_old_timeval(ktime) ns_to_timeval(ktime.tv64)
#endif
#ifdef KC_SOCK_SET_SNDTIMEO
#include <net/sock.h>
static inline int kc_sock_set_sndtimeo(struct socket *sock, s64 secs)
{
sock_set_sndtimeo(sock->sk, secs);
return 0;
}
static inline int kc_tcp_sock_set_rcvtimeo(struct socket *sock, ktime_t to)
{
struct __kernel_old_timeval tv;
sockptr_t kopt;
tv = ns_to_kernel_old_timeval(to);
kopt = KERNEL_SOCKPTR(&tv);
return sock_setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO_NEW,
kopt, sizeof(tv));
}
#else
#include <net/sock.h>
static inline int kc_sock_set_sndtimeo(struct socket *sock, s64 secs)
{
struct timeval tv = { .tv_sec = secs, .tv_usec = 0 };
return kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
(char *)&tv, sizeof(tv));
}
static inline int kc_tcp_sock_set_rcvtimeo(struct socket *sock, ktime_t to)
{
struct __kernel_old_timeval tv;
tv = ns_to_kernel_old_timeval(to);
return kernel_setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
(char *)&tv, sizeof(tv));
}
#endif
#ifdef KC_SETSOCKOPT_SOCKPTR_T
static inline int kc_sock_setsockopt(struct socket *sock, int level, int op, int *optval, unsigned int optlen)
{
sockptr_t kopt = KERNEL_SOCKPTR(optval);
return sock_setsockopt(sock, level, op, kopt, optlen);
}
#else
static inline int kc_sock_setsockopt(struct socket *sock, int level, int op, int *optval, unsigned int optlen)
{
return kernel_setsockopt(sock, level, op, (char *)optval, optlen);
}
#endif
#ifdef KC_HAVE_TCP_SET_SOCKFN
#include <linux/net.h>
#include <net/tcp.h>
static inline int kc_tcp_sock_set_keepintvl(struct socket *sock, int val)
{
return tcp_sock_set_keepintvl(sock->sk, val);
}
static inline int kc_tcp_sock_set_keepidle(struct socket *sock, int val)
{
return tcp_sock_set_keepidle(sock->sk, val);
}
static inline int kc_tcp_sock_set_user_timeout(struct socket *sock, int val)
{
tcp_sock_set_user_timeout(sock->sk, val);
return 0;
}
static inline int kc_tcp_sock_set_nodelay(struct socket *sock)
{
tcp_sock_set_nodelay(sock->sk);
return 0;
}
#else
#include <linux/net.h>
#include <net/tcp.h>
static inline int kc_tcp_sock_set_keepintvl(struct socket *sock, int val)
{
int optval = val;
return kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL, (char *)&optval, sizeof(optval));
}
static inline int kc_tcp_sock_set_keepidle(struct socket *sock, int val)
{
int optval = val;
return kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE, (char *)&optval, sizeof(optval));
}
static inline int kc_tcp_sock_set_user_timeout(struct socket *sock, int val)
{
int optval = val;
return kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT, (char *)&optval, sizeof(optval));
}
static inline int kc_tcp_sock_set_nodelay(struct socket *sock)
{
int optval = 1;
return kernel_setsockopt(sock, SOL_TCP, TCP_NODELAY, (char *)&optval, sizeof(optval));
}
#endif
#endif


@@ -125,8 +125,8 @@ static inline bool scoutfs_key_is_ones(struct scoutfs_key *key)
* other alternatives across keys that first differ in any of the
* values. Say maybe 20% faster than memcmp.
*/
static inline int scoutfs_key_compare(struct scoutfs_key *a,
struct scoutfs_key *b)
static inline int scoutfs_key_compare(const struct scoutfs_key *a,
const struct scoutfs_key *b)
{
return scoutfs_cmp(a->sk_zone, b->sk_zone) ?:
scoutfs_cmp(le64_to_cpu(a->_sk_first), le64_to_cpu(b->_sk_first)) ?:
@@ -142,10 +142,10 @@ static inline int scoutfs_key_compare(struct scoutfs_key *a,
* 1: a_start > b_end
* else 0: ranges overlap
*/
static inline int scoutfs_key_compare_ranges(struct scoutfs_key *a_start,
struct scoutfs_key *a_end,
struct scoutfs_key *b_start,
struct scoutfs_key *b_end)
static inline int scoutfs_key_compare_ranges(const struct scoutfs_key *a_start,
const struct scoutfs_key *a_end,
const struct scoutfs_key *b_start,
const struct scoutfs_key *b_end)
{
return scoutfs_key_compare(a_end, b_start) < 0 ? -1 :
scoutfs_key_compare(a_start, b_end) > 0 ? 1 :

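A single key can be tested against a range by passing it as both endpoints of a degenerate range, which is how scoutfs_lock_protected() uses scoutfs_key_compare_ranges() later in this change. A minimal sketch, with an illustrative helper name:

static inline bool key_in_range(const struct scoutfs_key *key,
const struct scoutfs_key *start,
const struct scoutfs_key *end)
{
/* 0 means the one-key range [key, key] overlaps [start, end] */
return scoutfs_key_compare_ranges(key, key, start, end) == 0;
}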

@@ -12,7 +12,6 @@
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/preempt_mask.h> /* a rhel shed.h needed preempt_offset? */
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/mm.h>
@@ -36,6 +35,9 @@
#include "xattr.h"
#include "item.h"
#include "omap.h"
#include "util.h"
#include "totl.h"
#include "quota.h"
/*
* scoutfs uses a lock service to manage item cache consistency between
@@ -77,7 +79,7 @@ struct lock_info {
bool unmounting;
struct rb_root lock_tree;
struct rb_root lock_range_tree;
struct shrinker shrinker;
KC_DEFINE_SHRINKER(shrinker);
struct list_head lru_list;
unsigned long long lru_nr;
struct workqueue_struct *workq;
@@ -185,6 +187,9 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
return ret;
}
if (lock->start.sk_zone == SCOUTFS_QUOTA_ZONE && !lock_mode_can_read(mode))
scoutfs_quota_invalidate(sb);
/* have to invalidate if we're not in the only usable case */
if (!(prev == SCOUTFS_LOCK_WRITE && mode == SCOUTFS_LOCK_READ)) {
retry:
@@ -1244,10 +1249,29 @@ int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode,
struct scoutfs_key start;
struct scoutfs_key end;
scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_totl_set_range(&start, &end);
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
int scoutfs_lock_xattr_indx(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock)
{
struct scoutfs_key start;
struct scoutfs_key end;
scoutfs_xattr_indx_get_range(&start, &end);
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
int scoutfs_lock_quota(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock)
{
struct scoutfs_key start;
struct scoutfs_key end;
scoutfs_quota_get_lock_range(&start, &end);
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
@@ -1346,7 +1370,7 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
enum scoutfs_lock_mode mode)
{
signed char lock_mode = ACCESS_ONCE(lock->mode);
signed char lock_mode = READ_ONCE(lock->mode);
return lock_modes_match(lock_mode, mode) &&
scoutfs_key_compare_ranges(key, key,
@@ -1401,6 +1425,17 @@ static void lock_shrink_worker(struct work_struct *work)
}
}
static unsigned long lock_count_objects(struct shrinker *shrink,
struct shrink_control *sc)
{
struct lock_info *linfo = KC_SHRINKER_CONTAINER_OF(shrink, struct lock_info);
struct super_block *sb = linfo->sb;
scoutfs_inc_counter(sb, lock_count_objects);
return shrinker_min_long(linfo->lru_nr);
}
/*
* Start the shrinking process for locks on the lru. If a lock is on
* the lru then it can't have any active users. We don't want to block
@@ -1413,21 +1448,18 @@ static void lock_shrink_worker(struct work_struct *work)
* mode which will prevent the lock from being freed when the null
* response arrives.
*/
static int scoutfs_lock_shrink(struct shrinker *shrink,
struct shrink_control *sc)
static unsigned long lock_scan_objects(struct shrinker *shrink,
struct shrink_control *sc)
{
struct lock_info *linfo = container_of(shrink, struct lock_info,
shrinker);
struct lock_info *linfo = KC_SHRINKER_CONTAINER_OF(shrink, struct lock_info);
struct super_block *sb = linfo->sb;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
unsigned long nr;
unsigned long freed = 0;
unsigned long nr = sc->nr_to_scan;
bool added = false;
int ret;
nr = sc->nr_to_scan;
if (nr == 0)
goto out;
scoutfs_inc_counter(sb, lock_scan_objects);
spin_lock(&linfo->lock);
@@ -1445,6 +1477,7 @@ restart:
lock->request_pending = 1;
list_add_tail(&lock->shrink_head, &linfo->shrink_list);
added = true;
freed++;
scoutfs_inc_counter(sb, lock_shrink_attempted);
trace_scoutfs_lock_shrink(sb, lock);
@@ -1459,10 +1492,8 @@ restart:
if (added)
queue_work(linfo->workq, &linfo->shrink_work);
out:
ret = min_t(unsigned long, linfo->lru_nr, INT_MAX);
trace_scoutfs_lock_shrink_exit(sb, sc->nr_to_scan, ret);
return ret;
trace_scoutfs_lock_shrink_exit(sb, sc->nr_to_scan, freed);
return freed;
}
void scoutfs_free_unused_locks(struct super_block *sb)
@@ -1473,7 +1504,7 @@ void scoutfs_free_unused_locks(struct super_block *sb)
.nr_to_scan = INT_MAX,
};
linfo->shrinker.shrink(&linfo->shrinker, &sc);
lock_scan_objects(KC_SHRINKER_FN(&linfo->shrinker), &sc);
}
static void lock_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
@@ -1580,7 +1611,7 @@ void scoutfs_lock_shutdown(struct super_block *sb)
trace_scoutfs_lock_shutdown(sb, linfo);
/* stop the shrinker from queueing work */
unregister_shrinker(&linfo->shrinker);
KC_UNREGISTER_SHRINKER(&linfo->shrinker);
flush_work(&linfo->shrink_work);
/* cause current and future lock calls to return errors */
@@ -1699,9 +1730,9 @@ int scoutfs_lock_setup(struct super_block *sb)
spin_lock_init(&linfo->lock);
linfo->lock_tree = RB_ROOT;
linfo->lock_range_tree = RB_ROOT;
linfo->shrinker.shrink = scoutfs_lock_shrink;
linfo->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&linfo->shrinker);
KC_INIT_SHRINKER_FUNCS(&linfo->shrinker, lock_count_objects,
lock_scan_objects);
KC_REGISTER_SHRINKER(&linfo->shrinker, "scoutfs-lock:" SCSBF, SCSB_ARGS(sb));
INIT_LIST_HEAD(&linfo->lru_list);
INIT_WORK(&linfo->inv_work, lock_invalidate_worker);
INIT_LIST_HEAD(&linfo->inv_list);


@@ -86,6 +86,10 @@ int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int
u64 ino, struct scoutfs_lock **lock);
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
int scoutfs_lock_xattr_indx(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
int scoutfs_lock_quota(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock,
enum scoutfs_lock_mode mode);


@@ -202,21 +202,48 @@ static u8 invalidation_mode(u8 granted, u8 requested)
/*
* Return true if the client lock instances described by the entries can
* be granted at the same time. Typically this only means they're both
* modes that are compatible between nodes. In addition there's the
* special case where a read lock on a client is compatible with a write
* lock on the same client because the client's cache covered by the
* read lock is still valid if they get a write lock.
* be granted at the same time.  There are only three cases where this is
* true.
*
* First, the two locks are both of the same mode that allows full
* sharing -- read and write only. The only point of these modes is
* that everyone can share them.
*
* Second, a write lock gives the client permission to read as well.
* This means that a client can upgrade its read lock to a write lock
* without having to invalidate the existing read and drop caches.
*
* Third, null locks are always compatible between clients. It's as
* though the client with the null lock has no lock at all. But it's
* never compatible with all locks on the client requesting null.
* Sending invalidations for existing locks on a client when we get a
* null request is how we resolve races in shrinking locks -- we turn it
* into the unsolicited remote invalidation case.
*
* All other mode and client combinations cannot be shared, most
* typically a write lock invalidating all other non-write holders to
* drop caches and force a read after the write has completed.
*/
static bool client_entries_compatible(struct client_lock_entry *granted,
struct client_lock_entry *requested)
{
return (granted->mode == requested->mode &&
(granted->mode == SCOUTFS_LOCK_READ ||
granted->mode == SCOUTFS_LOCK_WRITE_ONLY)) ||
(granted->rid == requested->rid &&
granted->mode == SCOUTFS_LOCK_READ &&
requested->mode == SCOUTFS_LOCK_WRITE);
/* only read and write_only can be fully shared */
if ((granted->mode == requested->mode) &&
(granted->mode == SCOUTFS_LOCK_READ || granted->mode == SCOUTFS_LOCK_WRITE_ONLY))
return true;
/* _write includes reading, so a client can upgrade its read to write */
if (granted->rid == requested->rid &&
granted->mode == SCOUTFS_LOCK_READ &&
requested->mode == SCOUTFS_LOCK_WRITE)
return true;
/* null is always compatible across clients, never within a client */
if ((granted->rid != requested->rid) &&
(granted->mode == SCOUTFS_LOCK_NULL || requested->mode == SCOUTFS_LOCK_NULL))
return true;
return false;
}
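/*
* Illustration only (not part of the change), with hypothetical client
* rids A and B:
*   granted { A, WRITE } vs requested { B, NULL  } -> compatible   (third case)
*   granted { A, WRITE } vs requested { A, NULL  } -> incompatible (null within one client)
*   granted { A, READ  } vs requested { A, WRITE } -> compatible   (second case, upgrade)
*   granted { A, WRITE } vs requested { A, WRITE } -> incompatible (write is never fully shared)
*/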
/*
@@ -317,16 +344,18 @@ static void put_server_lock(struct lock_server_info *inf,
BUG_ON(!mutex_is_locked(&snode->mutex));
spin_lock(&inf->lock);
if (atomic_dec_and_test(&snode->refcount) &&
list_empty(&snode->granted) &&
list_empty(&snode->requested) &&
list_empty(&snode->invalidated)) {
spin_lock(&inf->lock);
rb_erase(&snode->node, &inf->locks_root);
spin_unlock(&inf->lock);
should_free = true;
}
spin_unlock(&inf->lock);
mutex_unlock(&snode->mutex);
if (should_free) {


@@ -502,12 +502,12 @@ static void scoutfs_net_proc_worker(struct work_struct *work)
* Free live responses up to and including the seq by marking them dead
* and moving them to the send queue to be freed.
*/
static int move_acked_responses(struct scoutfs_net_connection *conn,
struct list_head *list, u64 seq)
static bool move_acked_responses(struct scoutfs_net_connection *conn,
struct list_head *list, u64 seq)
{
struct message_send *msend;
struct message_send *tmp;
int ret = 0;
bool moved = false;
assert_spin_locked(&conn->lock);
@@ -519,20 +519,20 @@ static int move_acked_responses(struct scoutfs_net_connection *conn,
msend->dead = 1;
list_move(&msend->head, &conn->send_queue);
ret = 1;
moved = true;
}
return ret;
return moved;
}
/* acks are processed inline in the recv worker */
static void free_acked_responses(struct scoutfs_net_connection *conn, u64 seq)
{
int moved;
bool moved;
spin_lock(&conn->lock);
moved = move_acked_responses(conn, &conn->send_queue, seq) +
moved = move_acked_responses(conn, &conn->send_queue, seq) |
move_acked_responses(conn, &conn->resend_queue, seq);
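/* bitwise | keeps both calls unconditional; || would skip the resend queue whenever the send queue already moved something */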
spin_unlock(&conn->lock);
@@ -549,12 +549,16 @@ static int recvmsg_full(struct socket *sock, void *buf, unsigned len)
while (len) {
memset(&msg, 0, sizeof(msg));
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
msg.msg_flags = MSG_NOSIGNAL;
kv.iov_base = buf;
kv.iov_len = len;
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
#else
iov_iter_init(&msg.msg_iter, READ, (struct iovec *)&kv, len, 1);
#endif
ret = kernel_recvmsg(sock, &msg, &kv, 1, len, msg.msg_flags);
if (ret <= 0)
return -ECONNABORTED;
@@ -707,12 +711,16 @@ static int sendmsg_full(struct socket *sock, void *buf, unsigned len)
while (len) {
memset(&msg, 0, sizeof(msg));
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
msg.msg_flags = MSG_NOSIGNAL;
kv.iov_base = buf;
kv.iov_len = len;
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
#else
iov_iter_init(&msg.msg_iter, WRITE, (struct iovec *)&kv, len, 1);
#endif
ret = kernel_sendmsg(sock, &msg, &kv, 1, len);
if (ret <= 0)
return -ECONNABORTED;
@@ -896,74 +904,59 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
static int sock_opts_and_names(struct scoutfs_net_connection *conn,
struct socket *sock)
{
struct timeval tv;
int addrlen;
int optval;
int ret;
/* we use a keepalive timeout instead of send timeout */
tv.tv_sec = 0;
tv.tv_usec = 0;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
(char *)&tv, sizeof(tv));
ret = kc_sock_set_sndtimeo(sock, 0);
if (ret)
goto out;
/* not checked when user_timeout != 0, but for clarity */
optval = UNRESPONSIVE_PROBES;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
(char *)&optval, sizeof(optval));
ret = kc_sock_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
&optval, sizeof(optval));
if (ret)
goto out;
BUILD_BUG_ON(UNRESPONSIVE_PROBES >= UNRESPONSIVE_TIMEOUT_SECS);
optval = UNRESPONSIVE_TIMEOUT_SECS - (UNRESPONSIVE_PROBES);
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE,
(char *)&optval, sizeof(optval));
ret = kc_tcp_sock_set_keepidle(sock, optval);
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL,
(char *)&optval, sizeof(optval));
ret = kc_tcp_sock_set_keepintvl(sock, optval);
if (ret)
goto out;
optval = UNRESPONSIVE_TIMEOUT_SECS * MSEC_PER_SEC;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
(char *)&optval, sizeof(optval));
ret = kc_tcp_sock_set_user_timeout(sock, optval);
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
(char *)&optval, sizeof(optval));
ret = kc_sock_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
&optval, sizeof(optval));
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_NODELAY,
(char *)&optval, sizeof(optval));
ret = kc_tcp_sock_set_nodelay(sock);
if (ret)
goto out;
addrlen = sizeof(struct sockaddr_in);
ret = kernel_getsockname(sock, (struct sockaddr *)&conn->sockname,
&addrlen);
if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
ret = -EAFNOSUPPORT;
if (ret)
ret = kc_kernel_getsockname(sock, (struct sockaddr *)&conn->sockname);
if (ret < 0)
goto out;
addrlen = sizeof(struct sockaddr_in);
ret = kernel_getpeername(sock, (struct sockaddr *)&conn->peername,
&addrlen);
if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
ret = -EAFNOSUPPORT;
if (ret)
ret = kc_kernel_getpeername(sock, (struct sockaddr *)&conn->peername);
if (ret < 0)
goto out;
ret = 0;
conn->last_peername = conn->peername;
out:
return ret;
}
@@ -1047,22 +1040,18 @@ static void scoutfs_net_connect_worker(struct work_struct *work)
DEFINE_CONN_FROM_WORK(conn, work, connect_work);
struct super_block *sb = conn->sb;
struct socket *sock;
struct timeval tv;
int ret;
trace_scoutfs_net_connect_work_enter(sb, 0, 0);
ret = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
ret = kc_sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
/* caller specified connect timeout */
tv.tv_sec = conn->connect_timeout_ms / MSEC_PER_SEC;
tv.tv_usec = (conn->connect_timeout_ms % MSEC_PER_SEC) * USEC_PER_MSEC;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
(char *)&tv, sizeof(tv));
/* caller specified connect timeout, defaults to 1 sec */
ret = kc_sock_set_sndtimeo(sock, conn->connect_timeout_ms / MSEC_PER_SEC);
if (ret) {
sock_release(sock);
goto out;
@@ -1453,15 +1442,15 @@ int scoutfs_net_bind(struct super_block *sb,
if (WARN_ON_ONCE(conn->sock))
return -EINVAL;
ret = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
ret = kc_sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
(char *)&optval, sizeof(optval));
ret = kc_sock_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
&optval, sizeof(optval));
if (ret)
goto out;
@@ -1471,20 +1460,18 @@ int scoutfs_net_bind(struct super_block *sb,
goto out;
ret = kernel_listen(sock, 255);
if (ret)
if (ret < 0)
goto out;
addrlen = sizeof(struct sockaddr_in);
ret = kernel_getsockname(sock, (struct sockaddr *)&conn->sockname,
&addrlen);
if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
ret = -EAFNOSUPPORT;
if (ret)
ret = kc_kernel_getsockname(sock, (struct sockaddr *)&conn->sockname);
if (ret < 0)
goto out;
ret = 0;
conn->sock = sock;
*sin = conn->sockname;
ret = 0;
out:
if (ret < 0 && sock)
sock_release(sock);


@@ -33,9 +33,11 @@ enum {
Opt_acl,
Opt_data_prealloc_blocks,
Opt_data_prealloc_contig_only,
Opt_log_merge_wait_timeout_ms,
Opt_metadev_path,
Opt_noacl,
Opt_orphan_scan_delay_ms,
Opt_quorum_heartbeat_timeout_ms,
Opt_quorum_slot_nr,
Opt_err,
};
@@ -44,9 +46,11 @@ static const match_table_t tokens = {
{Opt_acl, "acl"},
{Opt_data_prealloc_blocks, "data_prealloc_blocks=%s"},
{Opt_data_prealloc_contig_only, "data_prealloc_contig_only=%s"},
{Opt_log_merge_wait_timeout_ms, "log_merge_wait_timeout_ms=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_noacl, "noacl"},
{Opt_orphan_scan_delay_ms, "orphan_scan_delay_ms=%s"},
{Opt_quorum_heartbeat_timeout_ms, "quorum_heartbeat_timeout_ms=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_err, NULL}
};
@@ -111,6 +115,10 @@ static void free_options(struct scoutfs_mount_options *opts)
kfree(opts->metadev_path);
}
#define MIN_LOG_MERGE_WAIT_TIMEOUT_MS 100UL
#define DEFAULT_LOG_MERGE_WAIT_TIMEOUT_MS 500
#define MAX_LOG_MERGE_WAIT_TIMEOUT_MS (60 * MSEC_PER_SEC)
#define MIN_ORPHAN_SCAN_DELAY_MS 100UL
#define DEFAULT_ORPHAN_SCAN_DELAY_MS (10 * MSEC_PER_SEC)
#define MAX_ORPHAN_SCAN_DELAY_MS (60 * MSEC_PER_SEC)
@@ -124,8 +132,40 @@ static void init_default_options(struct scoutfs_mount_options *opts)
opts->data_prealloc_blocks = SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS;
opts->data_prealloc_contig_only = 1;
opts->quorum_slot_nr = -1;
opts->log_merge_wait_timeout_ms = DEFAULT_LOG_MERGE_WAIT_TIMEOUT_MS;
opts->orphan_scan_delay_ms = -1;
opts->quorum_heartbeat_timeout_ms = SCOUTFS_QUORUM_DEF_HB_TIMEO_MS;
opts->quorum_slot_nr = -1;
}
static int verify_log_merge_wait_timeout_ms(struct super_block *sb, int ret, int val)
{
if (ret < 0) {
scoutfs_err(sb, "failed to parse log_merge_wait_timeout_ms value");
return -EINVAL;
}
if (val < MIN_LOG_MERGE_WAIT_TIMEOUT_MS || val > MAX_LOG_MERGE_WAIT_TIMEOUT_MS) {
scoutfs_err(sb, "invalid log_merge_wait_timeout_ms value %d, must be between %lu and %lu",
val, MIN_LOG_MERGE_WAIT_TIMEOUT_MS, MAX_LOG_MERGE_WAIT_TIMEOUT_MS);
return -EINVAL;
}
return 0;
}
static int verify_quorum_heartbeat_timeout_ms(struct super_block *sb, int ret, u64 val)
{
if (ret < 0) {
scoutfs_err(sb, "failed to parse quorum_heartbeat_timeout_ms value");
return -EINVAL;
}
if (val < SCOUTFS_QUORUM_MIN_HB_TIMEO_MS || val > SCOUTFS_QUORUM_MAX_HB_TIMEO_MS) {
scoutfs_err(sb, "invalid quorum_heartbeat_timeout_ms value %llu, must be between %lu and %lu",
val, SCOUTFS_QUORUM_MIN_HB_TIMEO_MS, SCOUTFS_QUORUM_MAX_HB_TIMEO_MS);
return -EINVAL;
}
return 0;
}
/*
@@ -151,7 +191,7 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
switch (token) {
case Opt_acl:
sb->s_flags |= MS_POSIXACL;
sb->s_flags |= SB_POSIXACL;
break;
case Opt_data_prealloc_blocks:
@@ -178,6 +218,14 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
opts->data_prealloc_contig_only = nr;
break;
case Opt_log_merge_wait_timeout_ms:
ret = match_int(args, &nr);
ret = verify_log_merge_wait_timeout_ms(sb, ret, nr);
if (ret < 0)
return ret;
opts->log_merge_wait_timeout_ms = nr;
break;
case Opt_metadev_path:
ret = parse_bdev_path(sb, &args[0], &opts->metadev_path);
if (ret < 0)
@@ -185,7 +233,7 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
break;
case Opt_noacl:
sb->s_flags &= ~MS_POSIXACL;
sb->s_flags &= ~SB_POSIXACL;
break;
case Opt_orphan_scan_delay_ms:
@@ -206,6 +254,14 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
opts->orphan_scan_delay_ms = nr;
break;
case Opt_quorum_heartbeat_timeout_ms:
ret = match_u64(args, &nr64);
ret = verify_quorum_heartbeat_timeout_ms(sb, ret, nr64);
if (ret < 0)
return ret;
opts->quorum_heartbeat_timeout_ms = nr64;
break;
case Opt_quorum_slot_nr:
if (opts->quorum_slot_nr != -1) {
scoutfs_err(sb, "multiple quorum_slot_nr options provided, only provide one.");
@@ -301,7 +357,7 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
{
struct super_block *sb = root->d_sb;
struct scoutfs_mount_options opts;
const bool is_acl = !!(sb->s_flags & MS_POSIXACL);
const bool is_acl = !!(sb->s_flags & SB_POSIXACL);
scoutfs_options_read(sb, &opts);
@@ -396,6 +452,43 @@ static ssize_t data_prealloc_contig_only_store(struct kobject *kobj, struct kobj
}
SCOUTFS_ATTR_RW(data_prealloc_contig_only);
static ssize_t log_merge_wait_timeout_ms_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%u", opts.log_merge_wait_timeout_ms);
}
static ssize_t log_merge_wait_timeout_ms_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[30]; /* more than enough for octal -U64_MAX */
int val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtoint(nullterm, 0, &val);
ret = verify_log_merge_wait_timeout_ms(sb, ret, val);
if (ret == 0) {
write_seqlock(&optinf->seqlock);
optinf->opts.log_merge_wait_timeout_ms = val;
write_sequnlock(&optinf->seqlock);
ret = count;
}
return ret;
}
SCOUTFS_ATTR_RW(log_merge_wait_timeout_ms);
static ssize_t metadev_path_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
@@ -448,6 +541,43 @@ static ssize_t orphan_scan_delay_ms_store(struct kobject *kobj, struct kobj_attr
}
SCOUTFS_ATTR_RW(orphan_scan_delay_ms);
static ssize_t quorum_heartbeat_timeout_ms_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%llu", opts.quorum_heartbeat_timeout_ms);
}
static ssize_t quorum_heartbeat_timeout_ms_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[30]; /* more than enough for octal -U64_MAX */
u64 val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtoull(nullterm, 0, &val);
ret = verify_quorum_heartbeat_timeout_ms(sb, ret, val);
if (ret == 0) {
write_seqlock(&optinf->seqlock);
optinf->opts.quorum_heartbeat_timeout_ms = val;
write_sequnlock(&optinf->seqlock);
ret = count;
}
return ret;
}
SCOUTFS_ATTR_RW(quorum_heartbeat_timeout_ms);
static ssize_t quorum_slot_nr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
@@ -462,8 +592,10 @@ SCOUTFS_ATTR_RO(quorum_slot_nr);
static struct attribute *options_attrs[] = {
SCOUTFS_ATTR_PTR(data_prealloc_blocks),
SCOUTFS_ATTR_PTR(data_prealloc_contig_only),
SCOUTFS_ATTR_PTR(log_merge_wait_timeout_ms),
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),
SCOUTFS_ATTR_PTR(quorum_heartbeat_timeout_ms),
SCOUTFS_ATTR_PTR(quorum_slot_nr),
NULL,
};


@@ -8,10 +8,11 @@
struct scoutfs_mount_options {
u64 data_prealloc_blocks;
bool data_prealloc_contig_only;
unsigned int log_merge_wait_timeout_ms;
char *metadev_path;
unsigned int orphan_scan_delay_ms;
int quorum_slot_nr;
u64 quorum_heartbeat_timeout_ms;
};
void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts);


@@ -100,6 +100,11 @@ struct last_msg {
ktime_t ts;
};
struct count_recent {
u64 count;
ktime_t recent;
};
enum quorum_role { FOLLOWER, CANDIDATE, LEADER };
struct quorum_status {
@@ -112,8 +117,12 @@ struct quorum_status {
ktime_t timeout;
};
#define HB_DELAY_NR (SCOUTFS_QUORUM_MAX_HB_TIMEO_MS / MSEC_PER_SEC)
struct quorum_info {
struct super_block *sb;
struct scoutfs_quorum_config qconf;
struct workqueue_struct *workq;
struct work_struct work;
struct socket *sock;
bool shutdown;
@@ -125,6 +134,8 @@ struct quorum_info {
struct quorum_status show_status;
struct last_msg last_send[SCOUTFS_QUORUM_MAX_SLOTS];
struct last_msg last_recv[SCOUTFS_QUORUM_MAX_SLOTS];
struct count_recent *hb_delay;
unsigned long max_hb_delay;
struct scoutfs_sysfs_attrs ssa;
};
@@ -134,11 +145,18 @@ struct quorum_info {
#define DECLARE_QUORUM_INFO_KOBJ(kobj, name) \
DECLARE_QUORUM_INFO(SCOUTFS_SYSFS_ATTRS_SB(kobj), name)
static bool quorum_slot_present(struct scoutfs_super_block *super, int i)
static bool quorum_slot_present(struct scoutfs_quorum_config *qconf, int i)
{
BUG_ON(i < 0 || i > SCOUTFS_QUORUM_MAX_SLOTS);
return super->qconf.slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
return qconf->slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
}
static void quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_in *sin)
{
BUG_ON(i < 0 || i >= SCOUTFS_QUORUM_MAX_SLOTS);
scoutfs_addr_to_sin(sin, &qconf->slots[i].addr);
}
static ktime_t election_timeout(void)
@@ -152,29 +170,29 @@ static ktime_t heartbeat_interval(void)
return ktime_add_ms(ktime_get(), SCOUTFS_QUORUM_HB_IVAL_MS);
}
static ktime_t heartbeat_timeout(void)
static ktime_t heartbeat_timeout(struct scoutfs_mount_options *opts)
{
return ktime_add_ms(ktime_get(), SCOUTFS_QUORUM_HB_TIMEO_MS);
return ktime_add_ms(ktime_get(), opts->quorum_heartbeat_timeout_ms);
}
static int create_socket(struct super_block *sb)
{
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct socket *sock = NULL;
struct sockaddr_in sin;
int addrlen;
int ret;
ret = sock_create_kern(PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
ret = kc_sock_create_kern(PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
if (ret) {
scoutfs_err(sb, "quorum couldn't create udp socket: %d", ret);
goto out;
}
sock->sk->sk_allocation = GFP_NOFS;
/* rather fail and retry than block waiting for free */
sock->sk->sk_allocation = GFP_ATOMIC;
scoutfs_quorum_slot_sin(super, qinf->our_quorum_slot_nr, &sin);
quorum_slot_sin(&qinf->qconf, qinf->our_quorum_slot_nr, &sin);
addrlen = sizeof(sin);
ret = kernel_bind(sock, (struct sockaddr *)&sin, addrlen);
@@ -201,16 +219,20 @@ static __le32 quorum_message_crc(struct scoutfs_quorum_message *qmes)
return cpu_to_le32(crc32c(~0, qmes, len));
}
static void send_msg_members(struct super_block *sb, int type, u64 term,
int only)
/*
* Returns the number of failures from sendmsg.
*/
static int send_msg_members(struct super_block *sb, int type, u64 term, int only)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
int failed = 0;
ktime_t now;
int ret;
int i;
struct scoutfs_quorum_message qmes = {
.fsid = super->hdr.fsid,
.fsid = cpu_to_le64(sbi->fsid),
.term = cpu_to_le64(term),
.type = type,
.from = qinf->our_quorum_slot_nr,
@@ -221,8 +243,10 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
};
struct sockaddr_in sin;
struct msghdr mh = {
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
.msg_iov = (struct iovec *)&kv,
.msg_iovlen = 1,
#endif
.msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL,
.msg_name = &sin,
.msg_namelen = sizeof(sin),
@@ -232,15 +256,24 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
qmes.crc = quorum_message_crc(&qmes);
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i) ||
if (!quorum_slot_present(&qinf->qconf, i) ||
(only >= 0 && i != only) || i == qinf->our_quorum_slot_nr)
continue;
scoutfs_quorum_slot_sin(super, i, &sin);
if (scoutfs_forcing_unmount(sb)) {
failed = 0;
break;
}
scoutfs_quorum_slot_sin(&qinf->qconf, i, &sin);
now = ktime_get();
kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
#ifdef KC_MSGHDR_STRUCT_IOV_ITER
iov_iter_init(&mh.msg_iter, WRITE, (struct iovec *)&kv, sizeof(qmes), 1);
#endif
ret = kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
if (ret != kv.iov_len)
failed++;
spin_lock(&qinf->show_lock);
qinf->last_send[i].msg.term = term;
@@ -251,6 +284,8 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
if (i == only)
break;
}
return failed;
}
#define send_msg_to(sb, type, term, nr) send_msg_members(sb, type, term, nr)
@@ -266,9 +301,8 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
ktime_t abs_to)
{
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_quorum_message qmes;
struct timeval tv;
ktime_t rel_to;
ktime_t now;
int ret;
@@ -278,8 +312,10 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
.iov_len = sizeof(struct scoutfs_quorum_message),
};
struct msghdr mh = {
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
.msg_iov = (struct iovec *)&kv,
.msg_iovlen = 1,
#endif
.msg_flags = MSG_NOSIGNAL,
};
@@ -291,28 +327,30 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
else
rel_to = ns_to_ktime(0);
tv = ktime_to_timeval(rel_to);
if (tv.tv_sec == 0 && tv.tv_usec == 0) {
if (ktime_compare(rel_to, ns_to_ktime(NSEC_PER_USEC)) <= 0) {
mh.msg_flags |= MSG_DONTWAIT;
} else {
ret = kernel_setsockopt(qinf->sock, SOL_SOCKET, SO_RCVTIMEO,
(char *)&tv, sizeof(tv));
if (ret < 0)
return ret;
ret = kc_tcp_sock_set_rcvtimeo(qinf->sock, rel_to);
}
#ifdef KC_MSGHDR_STRUCT_IOV_ITER
iov_iter_init(&mh.msg_iter, READ, (struct iovec *)&kv, sizeof(struct scoutfs_quorum_message), 1);
#endif
ret = kernel_recvmsg(qinf->sock, &mh, &kv, 1, kv.iov_len, mh.msg_flags);
if (ret < 0)
return ret;
if (scoutfs_forcing_unmount(sb))
return 0;
now = ktime_get();
if (ret != sizeof(qmes) ||
qmes.crc != quorum_message_crc(&qmes) ||
qmes.fsid != super->hdr.fsid ||
qmes.fsid != cpu_to_le64(sbi->fsid) ||
qmes.type >= SCOUTFS_QUORUM_MSG_INVALID ||
qmes.from >= SCOUTFS_QUORUM_MAX_SLOTS ||
!quorum_slot_present(super, qmes.from)) {
!quorum_slot_present(&qinf->qconf, qmes.from)) {
/* should we be trying to open a new socket? */
scoutfs_inc_counter(sb, quorum_recv_invalid);
return -EAGAIN;
@@ -342,7 +380,7 @@ static int read_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_q
bool check_rid)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
const u64 fsid = sbi->fsid;
const u64 rid = sbi->rid;
char msg[150];
__le32 crc;
@@ -367,9 +405,9 @@ static int read_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_q
else if (le32_to_cpu(blk->hdr.magic) != SCOUTFS_BLOCK_MAGIC_QUORUM)
snprintf(msg, sizeof(msg), "blk magic %08x != %08x",
le32_to_cpu(blk->hdr.magic), SCOUTFS_BLOCK_MAGIC_QUORUM);
else if (blk->hdr.fsid != super->hdr.fsid)
else if (blk->hdr.fsid != cpu_to_le64(fsid))
snprintf(msg, sizeof(msg), "blk fsid %016llx != %016llx",
le64_to_cpu(blk->hdr.fsid), le64_to_cpu(super->hdr.fsid));
le64_to_cpu(blk->hdr.fsid), fsid);
else if (le64_to_cpu(blk->hdr.blkno) != blkno)
snprintf(msg, sizeof(msg), "blk blkno %llu != %llu",
le64_to_cpu(blk->hdr.blkno), blkno);
@@ -410,8 +448,7 @@ out:
*/
static void read_greatest_term(struct super_block *sb, u64 *term)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_quorum_block blk;
int ret;
int e;
@@ -420,7 +457,7 @@ static void read_greatest_term(struct super_block *sb, u64 *term)
*term = 0;
for (s = 0; s < SCOUTFS_QUORUM_MAX_SLOTS; s++) {
if (!quorum_slot_present(super, s))
if (!quorum_slot_present(&qinf->qconf, s))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + s, &blk, false);
@@ -444,7 +481,7 @@ static void set_quorum_block_event(struct super_block *sb, struct scoutfs_quorum
if (WARN_ON_ONCE(event < 0 || event >= SCOUTFS_QUORUM_EVENT_NR))
return;
getnstimeofday64(&ts);
ktime_get_ts64(&ts);
le64_add_cpu(&blk->write_nr, 1);
ev = &blk->events[event];
@@ -514,14 +551,15 @@ static int update_quorum_block(struct super_block *sb, int event, u64 term, bool
* keeps us from being fenced while we allow userspace fencing to take a
* reasonably long time. We still want to timeout eventually.
*/
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_config *qconf,
u64 term)
{
#define NR_OLD 2
struct scoutfs_quorum_block_event old[SCOUTFS_QUORUM_MAX_SLOTS][NR_OLD] = {{{0,}}};
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_quorum_block blk;
struct sockaddr_in sin;
const __le64 lefsid = cpu_to_le64(sbi->fsid);
const u64 rid = sbi->rid;
bool fence_started = false;
u64 fenced = 0;
@@ -534,7 +572,7 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
BUILD_BUG_ON(SCOUTFS_QUORUM_BLOCKS < SCOUTFS_QUORUM_MAX_SLOTS);
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
if (!quorum_slot_present(qconf, i))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
@@ -567,11 +605,11 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
continue;
scoutfs_inc_counter(sb, quorum_fence_leader);
scoutfs_quorum_slot_sin(super, i, &sin);
quorum_slot_sin(qconf, i, &sin);
fence_rid = old[i][j].rid;
scoutfs_info(sb, "fencing previous leader "SCSBF" at term %llu in slot %u with address "SIN_FMT,
SCSB_LEFR_ARGS(super->hdr.fsid, fence_rid),
SCSB_LEFR_ARGS(lefsid, fence_rid),
le64_to_cpu(old[i][j].term), i, SIN_ARG(&sin));
ret = scoutfs_fence_start(sb, le64_to_cpu(fence_rid), sin.sin_addr.s_addr,
SCOUTFS_FENCE_QUORUM_BLOCK_LEADER);
@@ -592,6 +630,71 @@ out:
return ret;
}
static void clear_hb_delay(struct quorum_info *qinf)
{
int i;
spin_lock(&qinf->show_lock);
qinf->max_hb_delay = 0;
for (i = 0; i < HB_DELAY_NR; i++) {
qinf->hb_delay[i].recent = ns_to_ktime(0);
qinf->hb_delay[i].count = 0;
}
spin_unlock(&qinf->show_lock);
}
struct hb_recording {
ktime_t prev;
int count;
};
/*
* Record long heartbeat delays.  We only record the delay between
* back-to-back send attempts in the leader or back-to-back recv messages
* in the followers.  The worker caller sets record_hb when its iteration
* sent or received a heartbeat. An iteration that does anything else
* resets the tracking.
*/
static void record_hb_delay(struct super_block *sb, struct quorum_info *qinf,
struct hb_recording *hbr, bool record_hb, int role)
{
bool log = false;
ktime_t now;
s64 s;
if (!record_hb) {
hbr->count = 0;
return;
}
now = ktime_get();
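/* the first heartbeat iteration after a reset only primes prev; deltas are recorded from the second iteration on */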
if (hbr->count < 2 && ++hbr->count < 2) {
hbr->prev = now;
return;
}
s = ktime_ms_delta(now, hbr->prev) / MSEC_PER_SEC;
hbr->prev = now;
if (s <= 0 || s >= HB_DELAY_NR)
return;
spin_lock(&qinf->show_lock);
if (qinf->max_hb_delay < s) {
qinf->max_hb_delay = s;
if (s >= 3)
log = true;
}
qinf->hb_delay[s].recent = now;
qinf->hb_delay[s].count++;
spin_unlock(&qinf->show_lock);
if (log)
scoutfs_info(sb, "longest quorum heartbeat %s delay of %lld sec",
role == LEADER ? "send" : "recv", s);
}
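/*
* Worked example (hypothetical timings): three back-to-back heartbeat
* iterations about 1s, 1s and 4s apart.  The first primes prev, the
* second records a 1 second delta in hb_delay[1], and the third records
* a 4 second delta in hb_delay[4], raises max_hb_delay to 4 and, being
* at least 3 seconds, logs the new longest delay.  Deltas of 0 or of
* HB_DELAY_NR seconds or more are ignored.
*/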
/*
* The main quorum task maintains its private status. It seemed cleaner
* to occasionally copy the status for showing in sysfs/debugfs files
@@ -616,16 +719,23 @@ static void update_show_status(struct quorum_info *qinf, struct quorum_status *q
static void scoutfs_quorum_worker(struct work_struct *work)
{
struct quorum_info *qinf = container_of(work, struct quorum_info, work);
struct scoutfs_mount_options opts;
struct super_block *sb = qinf->sb;
struct sockaddr_in unused;
struct quorum_host_msg msg;
struct quorum_status qst = {0,};
struct hb_recording hbr;
bool record_hb;
int ret;
int err;
memset(&hbr, 0, sizeof(struct hb_recording));
/* recording votes from slots as native single word bitmap */
BUILD_BUG_ON(SCOUTFS_QUORUM_MAX_SLOTS > BITS_PER_LONG);
scoutfs_options_read(sb, &opts);
/* start out as a follower */
qst.role = FOLLOWER;
qst.vote_for = -1;
@@ -635,7 +745,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* see if there's a server to chose heartbeat or election timeout */
if (scoutfs_quorum_server_sin(sb, &unused) == 0)
qst.timeout = heartbeat_timeout();
qst.timeout = heartbeat_timeout(&opts);
else
qst.timeout = election_timeout();
@@ -659,14 +769,16 @@ static void scoutfs_quorum_worker(struct work_struct *work)
ret = 0;
}
scoutfs_options_read(sb, &opts);
record_hb = false;
/* ignore messages from older terms */
if (msg.type != SCOUTFS_QUORUM_MSG_INVALID &&
msg.term < qst.term)
msg.type = SCOUTFS_QUORUM_MSG_INVALID;
trace_scoutfs_quorum_loop(sb, qst.role, qst.term, qst.vote_for,
qst.vote_bits,
ktime_to_timespec64(qst.timeout));
qst.vote_bits, ktime_to_ns(qst.timeout));
/* receiving greater terms resets term, becomes follower */
if (msg.type != SCOUTFS_QUORUM_MSG_INVALID &&
@@ -674,6 +786,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
if (qst.role == LEADER) {
scoutfs_warn(sb, "saw msg type %u from %u for term %llu while leader in term %llu, shutting down server.",
msg.type, msg.from, msg.term, qst.term);
clear_hb_delay(qinf);
}
qst.role = FOLLOWER;
qst.term = msg.term;
@@ -682,7 +795,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
scoutfs_inc_counter(sb, quorum_term_follower);
if (msg.type == SCOUTFS_QUORUM_MSG_HEARTBEAT)
qst.timeout = heartbeat_timeout();
qst.timeout = heartbeat_timeout(&opts);
else
qst.timeout = election_timeout();
@@ -692,6 +805,21 @@ static void scoutfs_quorum_worker(struct work_struct *work)
goto out;
}
/* receiving heartbeats extends timeout, delaying elections */
if (msg.type == SCOUTFS_QUORUM_MSG_HEARTBEAT) {
qst.timeout = heartbeat_timeout(&opts);
scoutfs_inc_counter(sb, quorum_recv_heartbeat);
record_hb = true;
}
/* receiving a resignation from server starts election */
if (msg.type == SCOUTFS_QUORUM_MSG_RESIGNATION &&
qst.role == FOLLOWER &&
msg.term == qst.term) {
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_recv_resignation);
}
/* followers and candidates start new election on timeout */
if (qst.role != LEADER &&
ktime_after(ktime_get(), qst.timeout)) {
@@ -744,6 +872,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
qst.timeout = heartbeat_interval();
update_show_status(qinf, &qst);
clear_hb_delay(qinf);
/* record that we've been elected before starting up server */
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_ELECT, qst.term, true);
@@ -752,7 +881,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
qst.server_start_term = qst.term;
qst.server_event = SCOUTFS_QUORUM_EVENT_ELECT;
scoutfs_server_start(sb, qst.term);
scoutfs_server_start(sb, &qinf->qconf, qst.term);
}
/*
@@ -798,6 +927,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION,
qst.server_start_term);
scoutfs_inc_counter(sb, quorum_send_resignation);
clear_hb_delay(qinf);
}
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP,
@@ -811,24 +941,16 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* leaders regularly send heartbeats to delay elections */
if (qst.role == LEADER &&
ktime_after(ktime_get(), qst.timeout)) {
send_msg_others(sb, SCOUTFS_QUORUM_MSG_HEARTBEAT,
qst.term);
ret = send_msg_others(sb, SCOUTFS_QUORUM_MSG_HEARTBEAT, qst.term);
if (ret > 0) {
scoutfs_add_counter(sb, quorum_send_heartbeat_dropped, ret);
ret = 0;
}
qst.timeout = heartbeat_interval();
scoutfs_inc_counter(sb, quorum_send_heartbeat);
}
record_hb = true;
/* receiving heartbeats extends timeout, delaying elections */
if (msg.type == SCOUTFS_QUORUM_MSG_HEARTBEAT) {
qst.timeout = heartbeat_timeout();
scoutfs_inc_counter(sb, quorum_recv_heartbeat);
}
/* receiving a resignation from server starts election */
if (msg.type == SCOUTFS_QUORUM_MSG_RESIGNATION &&
qst.role == FOLLOWER &&
msg.term == qst.term) {
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_recv_resignation);
}
/* followers vote once per term */
@@ -840,6 +962,8 @@ static void scoutfs_quorum_worker(struct work_struct *work)
msg.from);
scoutfs_inc_counter(sb, quorum_send_vote);
}
record_hb_delay(sb, qinf, &hbr, record_hb, qst.role);
}
update_show_status(qinf, &qst);
@@ -877,16 +1001,25 @@ out:
*/
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super = NULL;
struct scoutfs_quorum_block blk;
u64 elect_term;
u64 term = 0;
int ret = 0;
int i;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
if (!quorum_slot_present(&super->qconf, i))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
@@ -900,7 +1033,7 @@ int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
if (elect_term > term &&
elect_term > le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_STOP].term)) {
term = elect_term;
scoutfs_quorum_slot_sin(super, i, sin);
scoutfs_quorum_slot_sin(&super->qconf, i, sin);
continue;
}
}
@@ -909,6 +1042,7 @@ int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
ret = -ENOENT;
out:
kfree(super);
return ret;
}
@@ -924,12 +1058,9 @@ u8 scoutfs_quorum_votes_needed(struct super_block *sb)
return qinf->votes_needed;
}
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
struct sockaddr_in *sin)
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_in *sin)
{
BUG_ON(i < 0 || i >= SCOUTFS_QUORUM_MAX_SLOTS);
scoutfs_addr_to_sin(sin, &super->qconf.slots[i].addr);
return quorum_slot_sin(qconf, i, sin);
}
static char *role_str(int role)
@@ -969,9 +1100,11 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
{
DECLARE_QUORUM_INFO_KOBJ(kobj, qinf);
struct quorum_status qst;
struct count_recent cr;
struct last_msg last;
struct timespec64 ts;
const ktime_t now = ktime_get();
unsigned long ul;
size_t size;
int ret;
int i;
@@ -1029,6 +1162,26 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
(s64)ts.tv_sec, (int)ts.tv_nsec);
}
spin_lock(&qinf->show_lock);
ul = qinf->max_hb_delay;
spin_unlock(&qinf->show_lock);
if (ul)
snprintf_ret(buf, size, &ret, "HB Delay(s) Count Secs Since\n");
for (i = 1; i <= ul && i < HB_DELAY_NR; i++) {
spin_lock(&qinf->show_lock);
cr = qinf->hb_delay[i];
spin_unlock(&qinf->show_lock);
if (cr.count == 0)
continue;
ts = ktime_to_timespec64(ktime_sub(now, cr.recent));
snprintf_ret(buf, size, &ret,
"%11u %9llu %lld.%09u\n",
i, cr.count, (s64)ts.tv_sec, (int)ts.tv_nsec);
}
return ret;
}
SCOUTFS_ATTR_RO(status);
@@ -1060,11 +1213,10 @@ static inline bool valid_ipv4_port(__be16 port)
return port != 0 && be16_to_cpu(port) != U16_MAX;
}
static int verify_quorum_slots(struct super_block *sb)
static int verify_quorum_slots(struct super_block *sb, struct quorum_info *qinf,
struct scoutfs_quorum_config *qconf)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
char slots[(SCOUTFS_QUORUM_MAX_SLOTS * 3) + 1];
DECLARE_QUORUM_INFO(sb, qinf);
struct sockaddr_in other;
struct sockaddr_in sin;
int found = 0;
@@ -1074,10 +1226,10 @@ static int verify_quorum_slots(struct super_block *sb)
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
if (!quorum_slot_present(qconf, i))
continue;
scoutfs_quorum_slot_sin(super, i, &sin);
scoutfs_quorum_slot_sin(qconf, i, &sin);
if (!valid_ipv4_unicast(sin.sin_addr.s_addr)) {
scoutfs_err(sb, "quorum slot #%d has invalid ipv4 unicast address: "SIN_FMT,
@@ -1092,10 +1244,10 @@ static int verify_quorum_slots(struct super_block *sb)
}
for (j = i + 1; j < SCOUTFS_QUORUM_MAX_SLOTS; j++) {
if (!quorum_slot_present(super, j))
if (!quorum_slot_present(qconf, j))
continue;
scoutfs_quorum_slot_sin(super, j, &other);
scoutfs_quorum_slot_sin(qconf, j, &other);
if (sin.sin_addr.s_addr == other.sin_addr.s_addr &&
sin.sin_port == other.sin_port) {
@@ -1113,11 +1265,11 @@ static int verify_quorum_slots(struct super_block *sb)
return -EINVAL;
}
if (!quorum_slot_present(super, qinf->our_quorum_slot_nr)) {
if (!quorum_slot_present(qconf, qinf->our_quorum_slot_nr)) {
char *str = slots;
*str = '\0';
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (quorum_slot_present(super, i)) {
if (quorum_slot_present(qconf, i)) {
ret = snprintf(str, &slots[ARRAY_SIZE(slots)] - str, "%c%u",
str == slots ? ' ' : ',', i);
if (ret < 2 || ret > 3) {
@@ -1141,16 +1293,22 @@ static int verify_quorum_slots(struct super_block *sb)
else
qinf->votes_needed = (found / 2) + 1;
qinf->qconf = *qconf;
return 0;
}
/*
* Once this schedules the quorum worker it can be elected leader and
* start the server, possibly before this returns.
* start the server, possibly before this returns. The quorum agent
* would be responsible for tracking the quorum config in the super
* block if it changes.  Until then it uses a static config that it reads
* during setup.
*/
int scoutfs_quorum_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = NULL;
struct scoutfs_mount_options opts;
struct quorum_info *qinf;
int ret;
@@ -1160,7 +1318,14 @@ int scoutfs_quorum_setup(struct super_block *sb)
return 0;
qinf = kzalloc(sizeof(struct quorum_info), GFP_KERNEL);
if (!qinf) {
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_KERNEL);
if (qinf)
qinf->hb_delay = kc__vmalloc(HB_DELAY_NR * sizeof(struct count_recent),
GFP_KERNEL | __GFP_ZERO);
if (!qinf || !super || !qinf->hb_delay) {
if (qinf)
vfree(qinf->hb_delay);
kfree(qinf);
ret = -ENOMEM;
goto out;
}
@@ -1174,7 +1339,20 @@ int scoutfs_quorum_setup(struct super_block *sb)
sbi->quorum_info = qinf;
qinf->sb = sb;
ret = verify_quorum_slots(sb);
/* a high priority single threaded context without mem reclaim */
qinf->workq = alloc_workqueue("scoutfs_quorum_work",
WQ_NON_REENTRANT | WQ_UNBOUND |
WQ_HIGHPRI, 1);
if (!qinf->workq) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
ret = verify_quorum_slots(sb, qinf, &super->qconf);
if (ret < 0)
goto out;
@@ -1188,12 +1366,13 @@ int scoutfs_quorum_setup(struct super_block *sb)
if (ret < 0)
goto out;
schedule_work(&qinf->work);
queue_work(qinf->workq, &qinf->work);
out:
if (ret)
scoutfs_quorum_destroy(sb);
kfree(super);
return ret;
}
@@ -1217,10 +1396,14 @@ void scoutfs_quorum_destroy(struct super_block *sb)
qinf->shutdown = true;
flush_work(&qinf->work);
if (qinf->workq)
destroy_workqueue(qinf->workq);
scoutfs_sysfs_destroy_attrs(sb, &qinf->ssa);
if (qinf->sock)
sock_release(qinf->sock);
vfree(qinf->hb_delay);
kfree(qinf);
sbi->quorum_info = NULL;
}


@@ -4,10 +4,11 @@
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i,
struct sockaddr_in *sin);
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term);
int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_config *qconf,
u64 term);
int scoutfs_quorum_setup(struct super_block *sb);
void scoutfs_quorum_shutdown(struct super_block *sb);

kmod/src/quota.c (new file, 1266 lines)

File diff suppressed because it is too large.

kmod/src/quota.h (new file, 48 lines)

@@ -0,0 +1,48 @@
#ifndef _SCOUTFS_QUOTA_H_
#define _SCOUTFS_QUOTA_H_
#include "ioctl.h"
/*
* Each rule's name can be in the ruleset's rbtree associated with the
* source attr that it selects. This lets checks only test rules that
* the inputs could match. The 'i' field indicates which name is in the
* tree so we can find the containing rule.
*
* This is mostly private to quota.c but we expose it for tracing.
*/
struct squota_rule {
u64 limit;
u8 prio;
u8 op;
u8 rule_flags;
struct squota_rule_name {
struct rb_node node;
u64 val;
u8 source;
u8 flags;
u8 i;
} names[3];
};
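/*
* Illustration only (not part of the header): the 'i' index lets a tree
* walker that finds a name node step back to names[0] and then to the
* containing rule.  The helper name is hypothetical.
*/
static inline struct squota_rule *squota_rule_from_name(struct squota_rule_name *name)
{
return container_of(name - name->i, struct squota_rule, names[0]);
}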
/* private to quota.c, only here for tracing */
struct squota_input {
u64 attrs[SQ_NS__NR_SELECT];
u8 op;
};
int scoutfs_quota_check_inode(struct super_block *sb, struct inode *dir);
int scoutfs_quota_check_data(struct super_block *sb, struct inode *inode);
int scoutfs_quota_get_rules(struct super_block *sb, u64 *iterator,
struct scoutfs_ioctl_quota_rule *irules, int nr);
int scoutfs_quota_mod_rule(struct super_block *sb, bool is_add,
struct scoutfs_ioctl_quota_rule *irule);
void scoutfs_quota_get_lock_range(struct scoutfs_key *start, struct scoutfs_key *end);
void scoutfs_quota_invalidate(struct super_block *sb);
int scoutfs_quota_setup(struct super_block *sb);
void scoutfs_quota_destroy(struct super_block *sb);
#endif


@@ -76,10 +76,10 @@ static struct recov_pending *lookup_pending(struct recov_info *recinf, u64 rid,
* We keep the pending list sorted by rid so that we can iterate over
* them. The list should be small and shouldn't be used often.
*/
static int cmp_pending_rid(void *priv, struct list_head *A, struct list_head *B)
static int cmp_pending_rid(void *priv, KC_LIST_CMP_CONST struct list_head *A, KC_LIST_CMP_CONST struct list_head *B)
{
struct recov_pending *a = list_entry(A, struct recov_pending, head);
struct recov_pending *b = list_entry(B, struct recov_pending, head);
KC_LIST_CMP_CONST struct recov_pending *a = list_entry(A, KC_LIST_CMP_CONST struct recov_pending, head);
KC_LIST_CMP_CONST struct recov_pending *b = list_entry(B, KC_LIST_CMP_CONST struct recov_pending, head);
return scoutfs_cmp_u64s(a->rid, b->rid);
}
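/*
* Sketch (not part of the change): a comparator with this signature is
* what list_sort() expects, presumably called as
*   list_sort(NULL, &recinf->pending_list, cmp_pending_rid);
* where the list head field name is hypothetical.
*/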


@@ -24,7 +24,6 @@
#include <linux/tracepoint.h>
#include <linux/in.h>
#include <linux/unaligned/access_ok.h>
#include "key.h"
#include "format.h"
@@ -37,6 +36,10 @@
#include "net.h"
#include "data.h"
#include "ext.h"
#include "quota.h"
#include "trace/quota.h"
#include "trace/wkic.h"
struct lock_info;
@@ -439,6 +442,7 @@ DECLARE_EVENT_CLASS(scoutfs_trans_hold_release_class,
SCSB_TRACE_ASSIGN(sb);
__entry->journal_info = (unsigned long)journal_info;
__entry->holders = holders;
__entry->ret = ret;
),
TP_printk(SCSBF" journal_info 0x%0lx holders %d ret %d",
@@ -817,22 +821,17 @@ TRACE_EVENT(scoutfs_advance_dirty_super,
TP_printk(SCSBF" super seq now %llu", SCSB_TRACE_ARGS, __entry->seq)
);
TRACE_EVENT(scoutfs_dir_add_next_linkref,
TRACE_EVENT(scoutfs_dir_add_next_linkref_found,
TP_PROTO(struct super_block *sb, __u64 ino, __u64 dir_ino,
__u64 dir_pos, int ret, __u64 found_dir_ino,
__u64 found_dir_pos, unsigned int name_len),
__u64 dir_pos, unsigned int name_len),
TP_ARGS(sb, ino, dir_ino, dir_pos, ret, found_dir_pos, found_dir_ino,
name_len),
TP_ARGS(sb, ino, dir_ino, dir_pos, name_len),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(__u64, dir_ino)
__field(__u64, dir_pos)
__field(int, ret)
__field(__u64, found_dir_ino)
__field(__u64, found_dir_pos)
__field(unsigned int, name_len)
),
@@ -841,16 +840,43 @@ TRACE_EVENT(scoutfs_dir_add_next_linkref,
__entry->ino = ino;
__entry->dir_ino = dir_ino;
__entry->dir_pos = dir_pos;
__entry->ret = ret;
__entry->found_dir_ino = dir_ino;
__entry->found_dir_pos = dir_pos;
__entry->name_len = name_len;
),
TP_printk(SCSBF" ino %llu dir_ino %llu dir_pos %llu ret %d found_dir_ino %llu found_dir_pos %llu name_len %u",
SCSB_TRACE_ARGS, __entry->ino, __entry->dir_pos,
__entry->dir_ino, __entry->ret, __entry->found_dir_pos,
__entry->found_dir_ino, __entry->name_len)
TP_printk(SCSBF" ino %llu dir_ino %llu dir_pos %llu name_len %u",
SCSB_TRACE_ARGS, __entry->ino, __entry->dir_ino,
__entry->dir_pos, __entry->name_len)
);
TRACE_EVENT(scoutfs_dir_add_next_linkrefs,
TP_PROTO(struct super_block *sb, __u64 ino, __u64 dir_ino,
__u64 dir_pos, int count, int nr, int ret),
TP_ARGS(sb, ino, dir_ino, dir_pos, count, nr, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(__u64, dir_ino)
__field(__u64, dir_pos)
__field(int, count)
__field(int, nr)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->dir_ino = dir_ino;
__entry->dir_pos = dir_pos;
__entry->count = count;
__entry->nr = nr;
__entry->ret = ret;
),
TP_printk(SCSBF" ino %llu dir_ino %llu dir_pos %llu count %d nr %d ret %d",
SCSB_TRACE_ARGS, __entry->ino, __entry->dir_ino,
__entry->dir_pos, __entry->count, __entry->nr, __entry->ret)
);
TRACE_EVENT(scoutfs_write_begin,
@@ -1020,9 +1046,12 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
sk_trace_define(start)
sk_trace_define(end)
__field(u64, refresh_gen)
__field(u64, write_seq)
__field(u64, dirty_trans_seq)
__field(unsigned char, request_pending)
__field(unsigned char, invalidate_pending)
__field(int, mode)
__field(int, invalidating_mode)
__field(unsigned int, waiters_cw)
__field(unsigned int, waiters_pr)
__field(unsigned int, waiters_ex)
@@ -1035,9 +1064,12 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
sk_trace_assign(start, &lck->start);
sk_trace_assign(end, &lck->end);
__entry->refresh_gen = lck->refresh_gen;
__entry->write_seq = lck->write_seq;
__entry->dirty_trans_seq = lck->dirty_trans_seq;
__entry->request_pending = lck->request_pending;
__entry->invalidate_pending = lck->invalidate_pending;
__entry->mode = lck->mode;
__entry->invalidating_mode = lck->invalidating_mode;
__entry->waiters_pr = lck->waiters[SCOUTFS_LOCK_READ];
__entry->waiters_ex = lck->waiters[SCOUTFS_LOCK_WRITE];
__entry->waiters_cw = lck->waiters[SCOUTFS_LOCK_WRITE_ONLY];
@@ -1045,10 +1077,11 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
__entry->users_ex = lck->users[SCOUTFS_LOCK_WRITE];
__entry->users_cw = lck->users[SCOUTFS_LOCK_WRITE_ONLY];
),
TP_printk(SCSBF" start "SK_FMT" end "SK_FMT" mode %u reqpnd %u invpnd %u rfrgen %llu waiters: pr %u ex %u cw %u users: pr %u ex %u cw %u",
TP_printk(SCSBF" start "SK_FMT" end "SK_FMT" mode %u invmd %u reqp %u invp %u refg %llu wris %llu dts %llu waiters: pr %u ex %u cw %u users: pr %u ex %u cw %u",
SCSB_TRACE_ARGS, sk_trace_args(start), sk_trace_args(end),
__entry->mode, __entry->request_pending,
__entry->invalidate_pending, __entry->refresh_gen,
__entry->mode, __entry->invalidating_mode, __entry->request_pending,
__entry->invalidate_pending, __entry->refresh_gen, __entry->write_seq,
__entry->dirty_trans_seq,
__entry->waiters_pr, __entry->waiters_ex, __entry->waiters_cw,
__entry->users_pr, __entry->users_ex, __entry->users_cw)
);
@@ -1724,21 +1757,41 @@ TRACE_EVENT(scoutfs_btree_merge,
sk_trace_args(end))
);
TRACE_EVENT(scoutfs_btree_merge_read_range,
TP_PROTO(struct super_block *sb, struct scoutfs_key *start, struct scoutfs_key *end,
int size),
TP_ARGS(sb, start, end, size),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
sk_trace_define(start)
sk_trace_define(end)
__field(int, size)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
sk_trace_assign(start, start);
sk_trace_assign(end, end);
__entry->size = size;
),
TP_printk(SCSBF" start "SK_FMT" end "SK_FMT" size %d",
SCSB_TRACE_ARGS, sk_trace_args(start), sk_trace_args(end), __entry->size)
);
TRACE_EVENT(scoutfs_btree_merge_items,
TP_PROTO(struct super_block *sb,
struct scoutfs_btree_root *m_root,
struct scoutfs_key *m_key, int m_val_len,
struct scoutfs_btree_root *f_root,
struct scoutfs_key *f_key, int f_val_len,
int is_del),
TP_ARGS(sb, m_root, m_key, m_val_len, f_root, f_key, f_val_len, is_del),
TP_ARGS(sb, m_key, m_val_len, f_root, f_key, f_val_len, is_del),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, m_root_blkno)
__field(__u64, m_root_seq)
__field(__u8, m_root_height)
sk_trace_define(m_key)
__field(int, m_val_len)
__field(__u64, f_root_blkno)
@@ -1751,10 +1804,6 @@ TRACE_EVENT(scoutfs_btree_merge_items,
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->m_root_blkno = m_root ?
le64_to_cpu(m_root->ref.blkno) : 0;
__entry->m_root_seq = m_root ? le64_to_cpu(m_root->ref.seq) : 0;
__entry->m_root_height = m_root ? m_root->height : 0;
sk_trace_assign(m_key, m_key);
__entry->m_val_len = m_val_len;
__entry->f_root_blkno = f_root ?
@@ -1766,11 +1815,9 @@ TRACE_EVENT(scoutfs_btree_merge_items,
__entry->is_del = !!is_del;
),
TP_printk(SCSBF" merge item root blkno %llu seq %llu height %u key "SK_FMT" val_len %d, fs item root blkno %llu seq %llu height %u key "SK_FMT" val_len %d, is_del %d",
SCSB_TRACE_ARGS, __entry->m_root_blkno, __entry->m_root_seq,
__entry->m_root_height, sk_trace_args(m_key),
__entry->m_val_len, __entry->f_root_blkno,
__entry->f_root_seq, __entry->f_root_height,
TP_printk(SCSBF" merge item key "SK_FMT" val_len %d, fs item root blkno %llu seq %llu height %u key "SK_FMT" val_len %d, is_del %d",
SCSB_TRACE_ARGS, sk_trace_args(m_key), __entry->m_val_len,
__entry->f_root_blkno, __entry->f_root_seq, __entry->f_root_height,
sk_trace_args(f_key), __entry->f_val_len, __entry->is_del)
);
@@ -1874,8 +1921,9 @@ DEFINE_EVENT(scoutfs_server_client_count_class, scoutfs_server_client_down,
DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded),
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing,
exceeded),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(int, holding)
@@ -1883,6 +1931,7 @@ DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
__field(int, nr_holders)
__field(__u32, avail_before)
__field(__u32, freed_before)
__field(int, committing)
__field(int, exceeded)
),
TP_fast_assign(
@@ -1892,31 +1941,33 @@ DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
__entry->nr_holders = nr_holders;
__entry->avail_before = avail_before;
__entry->freed_before = freed_before;
__entry->committing = !!committing;
__entry->exceeded = !!exceeded;
),
TP_printk(SCSBF" holding %u applying %u nr %u avail_before %u freed_before %u exceeded %u",
TP_printk(SCSBF" holding %u applying %u nr %u avail_before %u freed_before %u committing %u exceeded %u",
SCSB_TRACE_ARGS, __entry->holding, __entry->applying, __entry->nr_holders,
__entry->avail_before, __entry->freed_before, __entry->exceeded)
__entry->avail_before, __entry->freed_before, __entry->committing,
__entry->exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_hold,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_apply,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_start,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_end,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
#define slt_symbolic(mode) \
@@ -1998,9 +2049,9 @@ DEFINE_EVENT(scoutfs_quorum_message_class, scoutfs_quorum_recv_message,
TRACE_EVENT(scoutfs_quorum_loop,
TP_PROTO(struct super_block *sb, int role, u64 term, int vote_for,
unsigned long vote_bits, struct timespec64 timeout),
unsigned long vote_bits, unsigned long long nsecs),
TP_ARGS(sb, role, term, vote_for, vote_bits, timeout),
TP_ARGS(sb, role, term, vote_for, vote_bits, nsecs),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
@@ -2009,8 +2060,7 @@ TRACE_EVENT(scoutfs_quorum_loop,
__field(int, vote_for)
__field(unsigned long, vote_bits)
__field(unsigned long, vote_count)
__field(unsigned long long, timeout_sec)
__field(int, timeout_nsec)
__field(unsigned long long, nsecs)
),
TP_fast_assign(
@@ -2020,14 +2070,13 @@ TRACE_EVENT(scoutfs_quorum_loop,
__entry->vote_for = vote_for;
__entry->vote_bits = vote_bits;
__entry->vote_count = hweight_long(vote_bits);
__entry->timeout_sec = timeout.tv_sec;
__entry->timeout_nsec = timeout.tv_nsec;
__entry->nsecs = nsecs;
),
TP_printk(SCSBF" term %llu role %d vote_for %d vote_bits 0x%lx vote_count %lu timeout %llu.%u",
TP_printk(SCSBF" term %llu role %d vote_for %d vote_bits 0x%lx vote_count %lu timeout %llu",
SCSB_TRACE_ARGS, __entry->term, __entry->role,
__entry->vote_for, __entry->vote_bits, __entry->vote_count,
__entry->timeout_sec, __entry->timeout_nsec)
__entry->nsecs)
);
TRACE_EVENT(scoutfs_trans_seq_last,
@@ -2051,6 +2100,71 @@ TRACE_EVENT(scoutfs_trans_seq_last,
SCSB_TRACE_ARGS, __entry->s_rid, __entry->trans_seq)
);
TRACE_EVENT(scoutfs_server_finalize_items,
TP_PROTO(struct super_block *sb, u64 rid, u64 item_rid, u64 item_nr, u64 item_flags,
u64 item_get_trans_seq),
TP_ARGS(sb, rid, item_rid, item_nr, item_flags, item_get_trans_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, c_rid)
__field(__u64, item_rid)
__field(__u64, item_nr)
__field(__u64, item_flags)
__field(__u64, item_get_trans_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->c_rid = rid;
__entry->item_rid = item_rid;
__entry->item_nr = item_nr;
__entry->item_flags = item_flags;
__entry->item_get_trans_seq = item_get_trans_seq;
),
TP_printk(SCSBF" rid %016llx item_rid %016llx item_nr %llu item_flags 0x%llx item_get_trans_seq %llu",
SCSB_TRACE_ARGS, __entry->c_rid, __entry->item_rid, __entry->item_nr,
__entry->item_flags, __entry->item_get_trans_seq)
);
TRACE_EVENT(scoutfs_server_finalize_decision,
TP_PROTO(struct super_block *sb, u64 rid, bool saw_finalized, bool others_active,
bool ours_visible, bool finalize_ours, unsigned int delay_ms,
u64 finalize_sent_seq),
TP_ARGS(sb, rid, saw_finalized, others_active, ours_visible, finalize_ours, delay_ms,
finalize_sent_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, c_rid)
__field(bool, saw_finalized)
__field(bool, others_active)
__field(bool, ours_visible)
__field(bool, finalize_ours)
__field(unsigned int, delay_ms)
__field(__u64, finalize_sent_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->c_rid = rid;
__entry->saw_finalized = saw_finalized;
__entry->others_active = others_active;
__entry->ours_visible = ours_visible;
__entry->finalize_ours = finalize_ours;
__entry->delay_ms = delay_ms;
__entry->finalize_sent_seq = finalize_sent_seq;
),
TP_printk(SCSBF" rid %016llx saw_finalized %u others_active %u ours_visible %u finalize_ours %u delay_ms %u finalize_sent_seq %llu",
SCSB_TRACE_ARGS, __entry->c_rid, __entry->saw_finalized, __entry->others_active,
__entry->ours_visible, __entry->finalize_ours, __entry->delay_ms,
__entry->finalize_sent_seq)
);
TRACE_EVENT(scoutfs_get_log_merge_status,
TP_PROTO(struct super_block *sb, u64 rid, struct scoutfs_key *next_range_key,
u64 nr_requests, u64 nr_complete, u64 seq),
@@ -2291,6 +2405,44 @@ TRACE_EVENT(scoutfs_block_dirty_ref,
__entry->block_blkno, __entry->block_seq)
);
TRACE_EVENT(scoutfs_block_stale,
TP_PROTO(struct super_block *sb, struct scoutfs_block_ref *ref,
struct scoutfs_block_header *hdr, u32 magic, u32 crc),
TP_ARGS(sb, ref, hdr, magic, crc),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ref_blkno)
__field(__u64, ref_seq)
__field(__u32, hdr_crc)
__field(__u32, hdr_magic)
__field(__u64, hdr_fsid)
__field(__u64, hdr_seq)
__field(__u64, hdr_blkno)
__field(__u32, magic)
__field(__u32, crc)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ref_blkno = le64_to_cpu(ref->blkno);
__entry->ref_seq = le64_to_cpu(ref->seq);
__entry->hdr_crc = le32_to_cpu(hdr->crc);
__entry->hdr_magic = le32_to_cpu(hdr->magic);
__entry->hdr_fsid = le64_to_cpu(hdr->fsid);
__entry->hdr_seq = le64_to_cpu(hdr->seq);
__entry->hdr_blkno = le64_to_cpu(hdr->blkno);
__entry->magic = magic;
__entry->crc = crc;
),
TP_printk(SCSBF" ref_blkno %llu ref_seq %016llx hdr_crc %08x hdr_magic %08x hdr_fsid %016llx hdr_seq %016llx hdr_blkno %llu magic %08x crc %08x",
SCSB_TRACE_ARGS, __entry->ref_blkno, __entry->ref_seq, __entry->hdr_crc,
__entry->hdr_magic, __entry->hdr_fsid, __entry->hdr_seq, __entry->hdr_blkno,
__entry->magic, __entry->crc)
);
DECLARE_EVENT_CLASS(scoutfs_block_class,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno, int refcount, int io_count,
unsigned long bits, __u64 accessed),
@@ -2775,6 +2927,81 @@ TRACE_EVENT(scoutfs_omap_should_delete,
SCSB_TRACE_ARGS, __entry->ino, __entry->nlink, __entry->ret)
);
#define SSCF_FMT "[bo %llu bs %llu es %llu]"
#define SSCF_FIELDS(pref) \
__field(__u64, pref##_blkno) \
__field(__u64, pref##_blocks) \
__field(__u64, pref##_entries)
#define SSCF_ASSIGN(pref, sfl) \
__entry->pref##_blkno = le64_to_cpu((sfl)->ref.blkno); \
__entry->pref##_blocks = le64_to_cpu((sfl)->blocks); \
__entry->pref##_entries = le64_to_cpu((sfl)->entries);
#define SSCF_ENTRY_ARGS(pref) \
__entry->pref##_blkno, \
__entry->pref##_blocks, \
__entry->pref##_entries
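The SSCF_* helpers above keep the per-file fields consistent across the output file and the four inputs in the event class below. Roughly, the pref argument is pasted onto each field name, so SSCF_FIELDS(out) declares the equivalent of (illustrative expansion, not a literal preprocessor dump):
	__field(__u64, out_blkno)
	__field(__u64, out_blocks)
	__field(__u64, out_entries)
and SSCF_ENTRY_ARGS(out) supplies the matching __entry->out_* values in TP_printk order.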
DECLARE_EVENT_CLASS(scoutfs_srch_compact_class,
TP_PROTO(struct super_block *sb, struct scoutfs_srch_compact *sc),
TP_ARGS(sb, sc),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, id)
__field(__u8, nr)
__field(__u8, flags)
SSCF_FIELDS(out)
__field(__u64, in0_blk)
__field(__u64, in0_pos)
SSCF_FIELDS(in0)
__field(__u64, in1_blk)
__field(__u64, in1_pos)
SSCF_FIELDS(in1)
__field(__u64, in2_blk)
__field(__u64, in2_pos)
SSCF_FIELDS(in2)
__field(__u64, in3_blk)
__field(__u64, in3_pos)
SSCF_FIELDS(in3)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->id = le64_to_cpu(sc->id);
__entry->nr = sc->nr;
__entry->flags = sc->flags;
SSCF_ASSIGN(out, &sc->out)
__entry->in0_blk = le64_to_cpu(sc->in[0].blk);
__entry->in0_pos = le64_to_cpu(sc->in[0].pos);
SSCF_ASSIGN(in0, &sc->in[0].sfl)
__entry->in1_blk = le64_to_cpu(sc->in[1].blk);
__entry->in1_pos = le64_to_cpu(sc->in[1].pos);
SSCF_ASSIGN(in1, &sc->in[1].sfl)
__entry->in2_blk = le64_to_cpu(sc->in[2].blk);
__entry->in2_pos = le64_to_cpu(sc->in[2].pos);
SSCF_ASSIGN(in2, &sc->in[2].sfl)
__entry->in3_blk = le64_to_cpu(sc->in[3].blk);
__entry->in3_pos = le64_to_cpu(sc->in[3].pos);
SSCF_ASSIGN(in3, &sc->in[3].sfl)
),
TP_printk(SCSBF" id %llu nr %u flags 0x%x out "SSCF_FMT" in0 b %llu p %llu "SSCF_FMT" in1 b %llu p %llu "SSCF_FMT" in2 b %llu p %llu "SSCF_FMT" in3 b %llu p %llu "SSCF_FMT,
SCSB_TRACE_ARGS, __entry->id, __entry->nr, __entry->flags, SSCF_ENTRY_ARGS(out),
__entry->in0_blk, __entry->in0_pos, SSCF_ENTRY_ARGS(in0),
__entry->in1_blk, __entry->in1_pos, SSCF_ENTRY_ARGS(in1),
__entry->in2_blk, __entry->in2_pos, SSCF_ENTRY_ARGS(in2),
__entry->in3_blk, __entry->in3_pos, SSCF_ENTRY_ARGS(in3))
);
DEFINE_EVENT(scoutfs_srch_compact_class, scoutfs_srch_compact_client_send,
TP_PROTO(struct super_block *sb, struct scoutfs_srch_compact *sc),
TP_ARGS(sb, sc)
);
DEFINE_EVENT(scoutfs_srch_compact_class, scoutfs_srch_compact_client_recv,
TP_PROTO(struct super_block *sb, struct scoutfs_srch_compact *sc),
TP_ARGS(sb, sc)
);
#endif /* _TRACE_SCOUTFS_H */
/* This part must be outside protection */

File diff suppressed because it is too large


@@ -75,7 +75,7 @@ u64 scoutfs_server_seq(struct super_block *sb);
u64 scoutfs_server_next_seq(struct super_block *sb);
void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq);
void scoutfs_server_start(struct super_block *sb, u64 term);
void scoutfs_server_start(struct super_block *sb, struct scoutfs_quorum_config *qconf, u64 term);
void scoutfs_server_stop(struct super_block *sb);
void scoutfs_server_stop_wait(struct super_block *sb);
bool scoutfs_server_is_running(struct super_block *sb);


@@ -18,6 +18,7 @@
#include <linux/pagemap.h>
#include <linux/vmalloc.h>
#include <linux/sort.h>
#include <asm/unaligned.h>
#include "super.h"
#include "format.h"
@@ -30,6 +31,9 @@
#include "client.h"
#include "counters.h"
#include "scoutfs_trace.h"
#include "triggers.h"
#include "sysfs.h"
#include "msg.h"
/*
* This srch subsystem gives us a way to find inodes that have a given
@@ -68,10 +72,14 @@ struct srch_info {
atomic_t shutdown;
struct workqueue_struct *workq;
struct delayed_work compact_dwork;
struct scoutfs_sysfs_attrs ssa;
atomic_t compact_delay_ms;
};
#define DECLARE_SRCH_INFO(sb, name) \
struct srch_info *name = SCOUTFS_SB(sb)->srch_info
#define DECLARE_SRCH_INFO_KOBJ(kobj, name) \
DECLARE_SRCH_INFO(SCOUTFS_SYSFS_ATTRS_SB(kobj), name)
#define SRE_FMT "%016llx.%llu.%llu"
#define SRE_ARG(sre) \
@@ -520,6 +528,95 @@ out:
return ret;
}
/*
* Padded entries are encoded in pairs after an existing entry. All of
* the pairs cancel each other out for every reader (the second encoding
* looks like a deletion), so they aren't visible in the first/last bounds of
* the block or file.
*/
static int append_padded_entry(struct scoutfs_srch_file *sfl, u64 blk,
struct scoutfs_srch_block *srb, struct scoutfs_srch_entry *sre)
{
int ret;
ret = encode_entry(srb->entries + le32_to_cpu(srb->entry_bytes),
sre, &srb->tail);
if (ret > 0) {
srb->tail = *sre;
le32_add_cpu(&srb->entry_nr, 1);
le32_add_cpu(&srb->entry_bytes, ret);
le64_add_cpu(&sfl->entries, 1);
ret = 0;
}
return ret;
}
/*
* This is called by a testing trigger to create a very specific case of
* encoded entry offsets. We want the last entry in the block to start
* precisely at the _SAFE_BYTES offset.
*
* This is called when there is a single existing entry in the block.
* We have the entire block to work with. We encode pairs of matching
* entries. This hides them from readers (both searches and merging) as
* they're interpreted as creation and deletion and are deleted. We use
* the existing hash value of the first entry in the block but then set
* the inode to an impossibly large number so it doesn't interfere with
* anything.
*
* To hit the specific offset we very carefully manage the number of
* bytes that change between fields in the entry. We know that if we
* change all the bytes of the ino and id we end up with a 20 byte
* (2+8+8,2) encoding of the pair of entries. To have the last entry
* start at the _SAFE_POS offset we know that the final 20 byte pair
* encoding needs to end at 2 bytes (second entry encoding) after the
* _SAFE_POS offset.
*
* So as we encode pairs we watch the delta of our current offset from
* that desired final offset of 2 past _SAFE_POS. If we're a multiple
* of 20 away then we encode the full 20 byte pairs. If we're not, then
* we drop a byte to encode 19 bytes. That'll slowly change the offset
* to be a multiple of 20 again while encoding large entries.
*/
static void pad_entries_at_safe(struct scoutfs_srch_file *sfl, u64 blk,
struct scoutfs_srch_block *srb)
{
struct scoutfs_srch_entry sre;
u32 target;
s32 diff;
u64 hash;
u64 ino;
u64 id;
int ret;
hash = le64_to_cpu(srb->tail.hash);
ino = le64_to_cpu(srb->tail.ino) | (1ULL << 62);
id = le64_to_cpu(srb->tail.id);
target = SCOUTFS_SRCH_BLOCK_SAFE_BYTES + 2;
while ((diff = target - le32_to_cpu(srb->entry_bytes)) > 0) {
ino ^= 1ULL << (7 * 8);
if (diff % 20 == 0) {
id ^= 1ULL << (7 * 8);
} else {
id ^= 1ULL << (6 * 8);
}
sre.hash = cpu_to_le64(hash);
sre.ino = cpu_to_le64(ino);
sre.id = cpu_to_le64(id);
ret = append_padded_entry(sfl, blk, srb, &sre);
if (ret == 0)
ret = append_padded_entry(sfl, blk, srb, &sre);
BUG_ON(ret != 0);
diff = target - le32_to_cpu(srb->entry_bytes);
}
}
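A worked example of the loop above, using the pair sizes described in the comment: if entry_bytes starts 59 bytes short of target, the first iteration sees a diff that isn't a multiple of 20, flips a lower byte of id, and appends a 19 byte pair, leaving diff at 40; the next two iterations are multiples of 20 away and append full 20 byte pairs, so entry_bytes lands exactly on target and the final 2 byte encoding ends there, which puts the start of the last entry precisely at the _SAFE_BYTES offset.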
/*
* The caller is dropping an ino/id because the tracking rbtree is full.
* This loses information so we can't return any entries at or after the
@@ -861,7 +958,6 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
struct scoutfs_srch_rb_root *sroot,
u64 hash, u64 ino, u64 last_ino, bool *done)
{
struct scoutfs_net_roots prev_roots;
struct scoutfs_net_roots roots;
struct scoutfs_srch_entry start;
struct scoutfs_srch_entry end;
@@ -869,6 +965,7 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
struct scoutfs_log_trees lt;
struct scoutfs_srch_file sfl;
SCOUTFS_BTREE_ITEM_REF(iref);
DECLARE_SAVED_REFS(saved);
struct scoutfs_key key;
unsigned long limit = SRCH_LIMIT;
int ret;
@@ -877,7 +974,6 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
*done = false;
srch_init_rb_root(sroot);
memset(&prev_roots, 0, sizeof(prev_roots));
start.hash = cpu_to_le64(hash);
start.ino = cpu_to_le64(ino);
@@ -892,7 +988,6 @@ retry:
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
memset(&roots.fs_root, 0, sizeof(roots.fs_root));
end = final;
@@ -968,16 +1063,10 @@ retry:
*done = sre_cmp(&end, &final) == 0;
ret = 0;
out:
if (ret == -ESTALE) {
if (memcmp(&prev_roots, &roots, sizeof(roots)) == 0) {
scoutfs_inc_counter(sb, srch_search_stale_eio);
ret = -EIO;
} else {
scoutfs_inc_counter(sb, srch_search_stale_retry);
prev_roots = roots;
goto retry;
}
}
ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.srch_root.ref,
&roots.logs_root.ref);
if (ret == -ESTALE)
goto retry;
return ret;
}
@@ -995,6 +1084,9 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
struct scoutfs_key key;
int ret;
if (sfl->ref.blkno && !force && scoutfs_trigger(sb, SRCH_FORCE_LOG_ROTATE))
force = true;
if (sfl->ref.blkno == 0 ||
(!force && le64_to_cpu(sfl->blocks) < SCOUTFS_SRCH_LOG_BLOCK_LIMIT))
return 0;
@@ -1003,6 +1095,14 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
le64_to_cpu(sfl->ref.blkno), 0);
ret = scoutfs_btree_insert(sb, alloc, wri, root, &key,
sfl, sizeof(*sfl));
/*
* While it's fine to replay moving the client's logging srch
* file to the core btree item, server commits should keep it
* from happening. So we'll warn if we see it happen. This can
* be removed eventually.
*/
if (WARN_ON_ONCE(ret == -EEXIST))
ret = 0;
if (ret == 0) {
memset(sfl, 0, sizeof(*sfl));
scoutfs_inc_counter(sb, srch_rotate_log);
@@ -1462,7 +1562,7 @@ static int kway_merge(struct super_block *sb,
struct scoutfs_block_writer *wri,
struct scoutfs_srch_file *sfl,
kway_get_t kway_get, kway_advance_t kway_adv,
void **args, int nr)
void **args, int nr, bool logs_input)
{
DECLARE_SRCH_INFO(sb, srinf);
struct scoutfs_srch_block *srb = NULL;
@@ -1489,8 +1589,7 @@ static int kway_merge(struct super_block *sb,
nr_parents = max_t(unsigned long, 1, roundup_pow_of_two(nr) - 1);
/* root at [1] for easy sib/parent index calc, final pad for odd sib */
nr_nodes = 1 + nr_parents + nr + 1;
tnodes = __vmalloc(nr_nodes * sizeof(struct tourn_node),
GFP_NOFS, PAGE_KERNEL);
tnodes = kc__vmalloc(nr_nodes * sizeof(struct tourn_node), GFP_NOFS);
if (!tnodes)
return -ENOMEM;
@@ -1567,6 +1666,15 @@ static int kway_merge(struct super_block *sb,
blk++;
}
/* end sorted block on _SAFE offset for testing */
if (bl && le32_to_cpu(srb->entry_nr) == 1 && logs_input &&
scoutfs_trigger(sb, SRCH_COMPACT_LOGS_PAD_SAFE)) {
pad_entries_at_safe(sfl, blk, srb);
scoutfs_block_put(sb, bl);
bl = NULL;
blk++;
}
scoutfs_inc_counter(sb, srch_compact_entry);
} else {
@@ -1609,6 +1717,8 @@ static int kway_merge(struct super_block *sb,
empty++;
ret = 0;
} else if (ret < 0) {
if (ret == -ENOANO) /* just testing trigger */
ret = 0;
goto out;
}
@@ -1747,7 +1857,7 @@ static int compact_logs(struct super_block *sb,
goto out;
}
page->private = 0;
list_add_tail(&page->list, &pages);
list_add_tail(&page->lru, &pages);
nr_pages++;
scoutfs_inc_counter(sb, srch_compact_log_page);
}
@@ -1800,7 +1910,7 @@ static int compact_logs(struct super_block *sb,
/* sort page entries and reset private for _next */
i = 0;
list_for_each_entry(page, &pages, list) {
list_for_each_entry(page, &pages, lru) {
args[i++] = page;
if (atomic_read(&srinf->shutdown)) {
@@ -1816,12 +1926,12 @@ static int compact_logs(struct super_block *sb,
}
ret = kway_merge(sb, alloc, wri, &sc->out, kway_get_page, kway_adv_page,
args, nr_pages);
args, nr_pages, true);
if (ret < 0)
goto out;
/* make sure we finished all the pages */
list_for_each_entry(page, &pages, list) {
list_for_each_entry(page, &pages, lru) {
sre = page_priv_sre(page);
if (page->private < SRES_PER_PAGE && sre->ino != 0) {
ret = -ENOSPC;
@@ -1834,8 +1944,8 @@ static int compact_logs(struct super_block *sb,
out:
scoutfs_block_put(sb, bl);
vfree(args);
list_for_each_entry_safe(page, tmp, &pages, list) {
list_del(&page->list);
list_for_each_entry_safe(page, tmp, &pages, lru) {
list_del(&page->lru);
__free_page(page);
}
@@ -1874,12 +1984,18 @@ static int kway_get_reader(struct super_block *sb,
srb = rdr->bl->data;
if (rdr->pos > SCOUTFS_SRCH_BLOCK_SAFE_BYTES ||
rdr->skip >= SCOUTFS_SRCH_BLOCK_SAFE_BYTES ||
rdr->skip > SCOUTFS_SRCH_BLOCK_SAFE_BYTES ||
rdr->skip >= le32_to_cpu(srb->entry_bytes)) {
/* XXX inconsistency */
return -EIO;
}
if (rdr->decoded_bytes == 0 && rdr->pos == SCOUTFS_SRCH_BLOCK_SAFE_BYTES &&
scoutfs_trigger(sb, SRCH_MERGE_STOP_SAFE)) {
/* only used in testing */
return -ENOANO;
}
/* decode entry, possibly skipping start of the block */
while (rdr->decoded_bytes == 0 || rdr->pos < rdr->skip) {
ret = decode_entry(srb->entries + rdr->pos,
@@ -1969,7 +2085,7 @@ static int compact_sorted(struct super_block *sb,
}
ret = kway_merge(sb, alloc, wri, &sc->out, kway_get_reader,
kway_adv_reader, args, nr);
kway_adv_reader, args, nr, false);
sc->flags |= SCOUTFS_SRCH_COMPACT_FLAG_DONE;
for (i = 0; i < nr; i++) {
@@ -2098,8 +2214,15 @@ static int delete_files(struct super_block *sb, struct scoutfs_alloc *alloc,
return ret;
}
/* wait 10s between compact attempts on error, immediate after success */
#define SRCH_COMPACT_DELAY_MS (10 * MSEC_PER_SEC)
static void queue_compact_work(struct srch_info *srinf, bool immediate)
{
unsigned long delay;
if (!atomic_read(&srinf->shutdown)) {
delay = immediate ? 0 : msecs_to_jiffies(atomic_read(&srinf->compact_delay_ms));
queue_delayed_work(srinf->workq, &srinf->compact_dwork, delay);
}
}
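The immediate argument only matters on the success path: the compaction worker below requeues with queue_compact_work(srinf, sc->nr > 0 && ret == 0), so the work is rescheduled without delay only when the server actually handed back work and it completed cleanly; every other case waits for the configurable compact_delay_ms (10s by default).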
/*
* Get a compaction operation from the server, sort the entries from the
@@ -2127,7 +2250,6 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
struct super_block *sb = srinf->sb;
struct scoutfs_block_writer wri;
struct scoutfs_alloc alloc;
unsigned long delay;
int ret;
int err;
@@ -2140,6 +2262,8 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
scoutfs_block_writer_init(sb, &wri);
ret = scoutfs_client_srch_get_compact(sb, sc);
if (ret >= 0)
trace_scoutfs_srch_compact_client_recv(sb, sc);
if (ret < 0 || sc->nr == 0)
goto out;
@@ -2168,6 +2292,7 @@ commit:
sc->meta_freed = alloc.freed;
sc->flags |= ret < 0 ? SCOUTFS_SRCH_COMPACT_FLAG_ERROR : 0;
trace_scoutfs_srch_compact_client_send(sb, sc);
err = scoutfs_client_srch_commit_compact(sb, sc);
if (err < 0 && ret == 0)
ret = err;
@@ -2178,14 +2303,56 @@ out:
scoutfs_inc_counter(sb, srch_compact_error);
scoutfs_block_writer_forget_all(sb, &wri);
if (!atomic_read(&srinf->shutdown)) {
delay = ret == 0 ? 0 : msecs_to_jiffies(SRCH_COMPACT_DELAY_MS);
queue_delayed_work(srinf->workq, &srinf->compact_dwork, delay);
}
queue_compact_work(srinf, sc->nr > 0 && ret == 0);
kfree(sc);
}
static ssize_t compact_delay_ms_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
DECLARE_SRCH_INFO_KOBJ(kobj, srinf);
return snprintf(buf, PAGE_SIZE, "%u", atomic_read(&srinf->compact_delay_ms));
}
#define MIN_COMPACT_DELAY_MS MSEC_PER_SEC
#define DEF_COMPACT_DELAY_MS (10 * MSEC_PER_SEC)
#define MAX_COMPACT_DELAY_MS (60 * MSEC_PER_SEC)
static ssize_t compact_delay_ms_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_SRCH_INFO(sb, srinf);
char nullterm[30]; /* more than enough for octal -U64_MAX */
u64 val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtou64(nullterm, 0, &val);
if (ret < 0 || val < MIN_COMPACT_DELAY_MS || val > MAX_COMPACT_DELAY_MS) {
scoutfs_err(sb, "invalid compact_delay_ms value, must be between %lu and %lu",
MIN_COMPACT_DELAY_MS, MAX_COMPACT_DELAY_MS);
return -EINVAL;
}
atomic_set(&srinf->compact_delay_ms, val);
cancel_delayed_work(&srinf->compact_dwork);
queue_compact_work(srinf, false);
return count;
}
SCOUTFS_ATTR_RW(compact_delay_ms);
static struct attribute *srch_attrs[] = {
SCOUTFS_ATTR_PTR(compact_delay_ms),
NULL,
};
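To make the bounds concrete: with the minimum, default, and maximum at 1s, 10s, and 60s, writing 30000 to the srch compact_delay_ms attribute sets a 30 second delay and requeues the compaction work, while writing 500 or 120000 is rejected with -EINVAL.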
void scoutfs_srch_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
@@ -2202,6 +2369,8 @@ void scoutfs_srch_destroy(struct super_block *sb)
destroy_workqueue(srinf->workq);
}
scoutfs_sysfs_destroy_attrs(sb, &srinf->ssa);
kfree(srinf);
sbi->srch_info = NULL;
}
@@ -2219,8 +2388,15 @@ int scoutfs_srch_setup(struct super_block *sb)
srinf->sb = sb;
atomic_set(&srinf->shutdown, 0);
INIT_DELAYED_WORK(&srinf->compact_dwork, scoutfs_srch_compact_worker);
scoutfs_sysfs_init_attrs(sb, &srinf->ssa);
atomic_set(&srinf->compact_delay_ms, DEF_COMPACT_DELAY_MS);
sbi->srch_info = srinf;
ret = scoutfs_sysfs_create_attrs(sb, &srinf->ssa, srch_attrs, "srch");
if (ret < 0)
goto out;
srinf->workq = alloc_workqueue("scoutfs_srch_compact",
WQ_NON_REENTRANT | WQ_UNBOUND |
WQ_HIGHPRI, 0);
@@ -2229,8 +2405,7 @@ int scoutfs_srch_setup(struct super_block *sb)
goto out;
}
queue_delayed_work(srinf->workq, &srinf->compact_dwork,
msecs_to_jiffies(SRCH_COMPACT_DELAY_MS));
queue_compact_work(srinf, false);
ret = 0;
out:


@@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/slab.h>
#include <linux/pagemap.h>
#include <linux/magic.h>
@@ -48,6 +49,8 @@
#include "volopt.h"
#include "fence.h"
#include "xattr.h"
#include "wkic.h"
#include "quota.h"
#include "scoutfs_trace.h"
static struct dentry *scoutfs_debugfs_root;
@@ -157,7 +160,17 @@ static void scoutfs_metadev_close(struct super_block *sb)
* from kill_sb->put_super.
*/
lockdep_off();
#ifdef KC_BDEV_FILE_OPEN_BY_PATH
bdev_fput(sbi->meta_bdev_file);
#else
#ifdef KC_BLKDEV_PUT_HOLDER_ARG
blkdev_put(sbi->meta_bdev, sb);
#else
blkdev_put(sbi->meta_bdev, SCOUTFS_META_BDEV_MODE);
#endif
#endif
lockdep_on();
sbi->meta_bdev = NULL;
}
@@ -178,7 +191,7 @@ static void scoutfs_put_super(struct super_block *sb)
/*
* Wait for invalidation and iput to finish with any lingering
* inode references that escaped the evict_inodes in
* generic_shutdown_super. MS_ACTIVE is clear so final iput
* generic_shutdown_super. SB_ACTIVE is clear so final iput
* will always evict.
*/
scoutfs_lock_flush_invalidate(sb);
@@ -193,7 +206,9 @@ static void scoutfs_put_super(struct super_block *sb)
scoutfs_shutdown_trans(sb);
scoutfs_volopt_destroy(sb);
scoutfs_client_destroy(sb);
scoutfs_quota_destroy(sb);
scoutfs_inode_destroy(sb);
scoutfs_wkic_destroy(sb);
scoutfs_item_destroy(sb);
scoutfs_forest_destroy(sb);
scoutfs_data_destroy(sb);
@@ -461,9 +476,8 @@ static int scoutfs_read_supers(struct super_block *sb)
goto out;
}
sbi->fsid = le64_to_cpu(meta_super->hdr.fsid);
sbi->fmt_vers = le64_to_cpu(meta_super->fmt_vers);
sbi->super = *meta_super;
out:
kfree(meta_super);
kfree(data_super);
@@ -473,7 +487,11 @@ out:
static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
{
struct scoutfs_mount_options opts;
#ifdef KC_BDEV_FILE_OPEN_BY_PATH
struct file *meta_bdev_file;
#else
struct block_device *meta_bdev;
#endif
struct scoutfs_sb_info *sbi;
struct inode *inode;
int ret;
@@ -486,7 +504,8 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sb->s_d_op = &scoutfs_dentry_ops;
sb->s_export_op = &scoutfs_export_ops;
sb->s_xattr = scoutfs_xattr_handlers;
sb->s_flags |= MS_I_VERSION | MS_POSIXACL;
sb->s_flags |= SB_I_VERSION | SB_POSIXACL;
sb->s_time_gran = 1;
/* btree blocks use long lived bh->b_data refs */
mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
@@ -518,7 +537,27 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
goto out;
}
#ifdef KC_BDEV_FILE_OPEN_BY_PATH
/*
* pass sbi as holder, since dev_mount already passes sb, which triggers a
* WARN_ON because dev_mount also passes non-NULL hops. By passing sbi
* here we just get a simple error in our test cases.
*/
meta_bdev_file = bdev_file_open_by_path(opts.metadev_path, SCOUTFS_META_BDEV_MODE, sbi, NULL);
if (IS_ERR(meta_bdev_file)) {
scoutfs_err(sb, "could not open metadev: error %ld",
PTR_ERR(meta_bdev_file));
ret = PTR_ERR(meta_bdev_file);
goto out;
}
sbi->meta_bdev_file = meta_bdev_file;
sbi->meta_bdev = file_bdev(meta_bdev_file);
#else
#ifdef KC_BLKDEV_PUT_HOLDER_ARG
meta_bdev = blkdev_get_by_path(opts.metadev_path, SCOUTFS_META_BDEV_MODE, sb, NULL);
#else
meta_bdev = blkdev_get_by_path(opts.metadev_path, SCOUTFS_META_BDEV_MODE, sb);
#endif
if (IS_ERR(meta_bdev)) {
scoutfs_err(sb, "could not open metadev: error %ld",
PTR_ERR(meta_bdev));
@@ -526,6 +565,8 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
goto out;
}
sbi->meta_bdev = meta_bdev;
#endif
ret = set_blocksize(sbi->meta_bdev, SCOUTFS_BLOCK_SM_SIZE);
if (ret != 0) {
scoutfs_err(sb, "failed to set metadev blocksize, returned %d",
@@ -543,7 +584,9 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
scoutfs_block_setup(sb) ?:
scoutfs_forest_setup(sb) ?:
scoutfs_item_setup(sb) ?:
scoutfs_wkic_setup(sb) ?:
scoutfs_inode_setup(sb) ?:
scoutfs_quota_setup(sb) ?:
scoutfs_data_setup(sb) ?:
scoutfs_setup_trans(sb) ?:
scoutfs_omap_setup(sb) ?:
@@ -674,14 +717,14 @@ out:
teardown_module();
return ret;
}
module_init(scoutfs_module_init)
module_init(scoutfs_module_init);
static void __exit scoutfs_module_exit(void)
{
unregister_filesystem(&scoutfs_fs_type);
teardown_module();
}
module_exit(scoutfs_module_exit)
module_exit(scoutfs_module_exit);
MODULE_AUTHOR("Zach Brown <zab@versity.com>");
MODULE_LICENSE("GPL");


@@ -30,17 +30,21 @@ struct recov_info;
struct omap_info;
struct volopt_info;
struct fence_info;
struct wkic_info;
struct squota_info;
struct scoutfs_sb_info {
struct super_block *sb;
/* assigned once at the start of each mount, read-only */
u64 fsid;
u64 rid;
u64 fmt_vers;
struct scoutfs_super_block super;
struct block_device *meta_bdev;
#ifdef KC_BDEV_FILE_OPEN_BY_PATH
struct file *meta_bdev_file;
#endif
spinlock_t next_ino_lock;
@@ -56,6 +60,8 @@ struct scoutfs_sb_info {
struct omap_info *omap_info;
struct volopt_info *volopt_info;
struct item_cache_info *item_cache_info;
struct wkic_info *wkic_info;
struct squota_info *squota_info;
struct fence_info *fence_info;
/* tracks tasks waiting for data extents */
@@ -98,7 +104,11 @@ static inline bool SCOUTFS_IS_META_BDEV(struct scoutfs_super_block *super_block)
return !!(le64_to_cpu(super_block->flags) & SCOUTFS_FLAG_IS_META_BDEV);
}
#ifdef KC_HAVE_BLK_MODE_T
#define SCOUTFS_META_BDEV_MODE (BLK_OPEN_READ | BLK_OPEN_WRITE | BLK_OPEN_EXCL)
#else
#define SCOUTFS_META_BDEV_MODE (FMODE_READ | FMODE_WRITE | FMODE_EXCL)
#endif
static inline bool scoutfs_forcing_unmount(struct super_block *sb)
{
@@ -135,14 +145,14 @@ static inline bool scoutfs_unmounting(struct super_block *sb)
(int)(le64_to_cpu(fsid) >> SCSB_SHIFT), \
(int)(le64_to_cpu(rid) >> SCSB_SHIFT)
#define SCSB_ARGS(sb) \
(int)(le64_to_cpu(SCOUTFS_SB(sb)->super.hdr.fsid) >> SCSB_SHIFT), \
(int)(SCOUTFS_SB(sb)->fsid >> SCSB_SHIFT), \
(int)(SCOUTFS_SB(sb)->rid >> SCSB_SHIFT)
#define SCSB_TRACE_FIELDS \
__field(__u64, fsid) \
__field(__u64, rid)
#define SCSB_TRACE_ASSIGN(sb) \
__entry->fsid = SCOUTFS_HAS_SBI(sb) ? \
le64_to_cpu(SCOUTFS_SB(sb)->super.hdr.fsid) : 0;\
SCOUTFS_SB(sb)->fsid : 0; \
__entry->rid = SCOUTFS_HAS_SBI(sb) ? \
SCOUTFS_SB(sb)->rid : 0;
#define SCSB_TRACE_ARGS \
@@ -157,4 +167,17 @@ int scoutfs_write_super(struct super_block *sb,
/* to keep this out of the ioctl.h public interface definition */
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
/*
* Returns 0 when supported, non-zero -errno when unsupported.
*/
static inline int scoutfs_fmt_vers_unsupported(struct super_block *sb, u64 vers)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (sbi && (sbi->fmt_vers < vers))
return -EOPNOTSUPP;
else
return 0;
}
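Callers use this as a per-feature gate against the mount's format version. For example, the xattr .indx. handling later in this change bails out of the set path on older format versions with:
	if (tgs->indx &&
	    (ret = scoutfs_fmt_vers_unsupported(sb, SCOUTFS_FORMAT_VERSION_FEAT_INDX_TAG)))
		return ret;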
#endif


@@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include "super.h"
#include "sysfs.h"
@@ -60,10 +61,9 @@ static ssize_t fsid_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return snprintf(buf, PAGE_SIZE, "%016llx\n",
le64_to_cpu(super->hdr.fsid));
return snprintf(buf, PAGE_SIZE, "%016llx\n", sbi->fsid);
}
ATTR_FUNCS_RO(fsid);

kmod/src/totl.c (new file, 90 lines)

@@ -0,0 +1,90 @@
/*
* Copyright (C) 2023 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/string.h>
#include "format.h"
#include "forest.h"
#include "totl.h"
void scoutfs_totl_set_range(struct scoutfs_key *start, struct scoutfs_key *end)
{
scoutfs_key_set_zeros(start);
start->sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_key_set_ones(end);
end->sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
}
void scoutfs_totl_merge_init(struct scoutfs_totl_merging *merg)
{
memset(merg, 0, sizeof(struct scoutfs_totl_merging));
}
void scoutfs_totl_merge_contribute(struct scoutfs_totl_merging *merg,
u64 seq, u8 flags, void *val, int val_len, int fic)
{
struct scoutfs_xattr_totl_val *tval = val;
if (fic & FIC_FS_ROOT) {
merg->fs_seq = seq;
merg->fs_total = le64_to_cpu(tval->total);
merg->fs_count = le64_to_cpu(tval->count);
} else if (fic & FIC_FINALIZED) {
merg->fin_seq = seq;
merg->fin_total += le64_to_cpu(tval->total);
merg->fin_count += le64_to_cpu(tval->count);
} else {
merg->log_seq = seq;
merg->log_total += le64_to_cpu(tval->total);
merg->log_count += le64_to_cpu(tval->count);
}
}
/*
* .totl. item merging has to be careful because the log btree merging
* code can write partial results to the fs_root. This means that a
* reader can see both cases where new finalized logs should be applied
* to the old fs items and where old finalized logs have already been
* applied to the partially merged fs items. Currently active logged
* items are always applied on top of all cases.
*
* These cases are differentiated with a combination of sequence numbers
* in items, the count of contributing xattrs, and a flag
* differentiating finalized and active logged items. This lets us
* recognize all cases, including when finalized logs were merged and
* deleted the fs item.
*/
void scoutfs_totl_merge_resolve(struct scoutfs_totl_merging *merg, __u64 *total, __u64 *count)
{
*total = 0;
*count = 0;
/* start with the fs item if we have it */
if (merg->fs_seq != 0) {
*total = merg->fs_total;
*count = merg->fs_count;
}
/* apply finalized logs if they're newer or creating */
if (((merg->fs_seq != 0) && (merg->fin_seq > merg->fs_seq)) ||
((merg->fs_seq == 0) && (merg->fin_count > 0))) {
*total += merg->fin_total;
*count += merg->fin_count;
}
/* always apply active logs which must be newer than fs and finalized */
if (merg->log_seq > 0) {
*total += merg->log_total;
*count += merg->log_count;
}
}
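A worked example of the resolve rules, with illustrative values: say the fs item was last merged at seq 10 with total 100 and count 4, finalized logs contribute seq 12 with total 20 and count 1, and active logs contribute seq 15 with total 5 and count 1. The fs item seeds 100/4, the finalized contribution is applied because fin_seq 12 is newer than fs_seq 10 (giving 120/5), and the active logs are always applied on top, so the caller sees total 125 and count 6. Had the finalized logs already been merged into the fs item (fin_seq <= fs_seq), they would be skipped and only the active logs would be added.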

kmod/src/totl.h (new file, 24 lines)

@@ -0,0 +1,24 @@
#ifndef _SCOUTFS_TOTL_H_
#define _SCOUTFS_TOTL_H_
#include "key.h"
struct scoutfs_totl_merging {
u64 fs_seq;
u64 fs_total;
u64 fs_count;
u64 fin_seq;
u64 fin_total;
s64 fin_count;
u64 log_seq;
u64 log_total;
s64 log_count;
};
void scoutfs_totl_set_range(struct scoutfs_key *start, struct scoutfs_key *end);
void scoutfs_totl_merge_init(struct scoutfs_totl_merging *merg);
void scoutfs_totl_merge_contribute(struct scoutfs_totl_merging *merg,
u64 seq, u8 flags, void *val, int val_len, int fic);
void scoutfs_totl_merge_resolve(struct scoutfs_totl_merging *merg, __u64 *total, __u64 *count);
#endif

kmod/src/trace/quota.h (new file, 143 lines)

@@ -0,0 +1,143 @@
/*
* Tracing squota_input
*/
#define SQI_FMT "[%u %llu %llu %llu]"
#define SQI_ARGS(i) \
(i)->op, (i)->attrs[0], (i)->attrs[1], (i)->attrs[2]
#define SQI_FIELDS(pref) \
__array(__u64, pref##_attrs, SQ_NS__NR_SELECT) \
__field(__u8, pref##_op)
#define SQI_ASSIGN(pref, i) \
__entry->pref##_attrs[0] = (i)->attrs[0]; \
__entry->pref##_attrs[1] = (i)->attrs[1]; \
__entry->pref##_attrs[2] = (i)->attrs[2]; \
__entry->pref##_op = (i)->op;
#define SQI_ENTRY_ARGS(pref) \
__entry->pref##_op, __entry->pref##_attrs[0], \
__entry->pref##_attrs[1], __entry->pref##_attrs[2]
/*
* Tracing squota_rule
*/
#define SQR_FMT "[%u %llu,%u,%x %llu,%u,%x %llu,%u,%x %u %llu]"
#define SQR_ARGS(r) \
(r)->prio, \
(r)->name_val[0], (r)->name_source[0], (r)->name_flags[0], \
(r)->name_val[1], (r)->name_source[1], (r)->name_flags[1], \
(r)->name_val[2], (r)->name_source[2], (r)->name_flags[2], \
(r)->op, (r)->limit
#define SQR_FIELDS(pref) \
__array(__u64, pref##_name_val, 3) \
__field(__u64, pref##_limit) \
__array(__u8, pref##_name_source, 3) \
__array(__u8, pref##_name_flags, 3) \
__field(__u8, pref##_prio) \
__field(__u8, pref##_op)
#define SQR_ASSIGN(pref, r) \
__entry->pref##_name_val[0] = (r)->names[0].val; \
__entry->pref##_name_val[1] = (r)->names[1].val; \
__entry->pref##_name_val[2] = (r)->names[2].val; \
__entry->pref##_limit = (r)->limit; \
__entry->pref##_name_source[0] = (r)->names[0].source; \
__entry->pref##_name_source[1] = (r)->names[1].source; \
__entry->pref##_name_source[2] = (r)->names[2].source; \
__entry->pref##_name_flags[0] = (r)->names[0].flags; \
__entry->pref##_name_flags[1] = (r)->names[1].flags; \
__entry->pref##_name_flags[2] = (r)->names[2].flags; \
__entry->pref##_prio = (r)->prio; \
__entry->pref##_op = (r)->op;
#define SQR_ENTRY_ARGS(pref) \
__entry->pref##_prio, __entry->pref##_name_val[0], \
__entry->pref##_name_source[0], __entry->pref##_name_flags[0], \
__entry->pref##_name_val[1], __entry->pref##_name_source[1], \
__entry->pref##_name_flags[1], __entry->pref##_name_val[2], \
__entry->pref##_name_source[2], __entry->pref##_name_flags[2], \
__entry->pref##_op, __entry->pref##_limit
TRACE_EVENT(scoutfs_quota_check,
TP_PROTO(struct super_block *sb, long rs_ptr, struct squota_input *inp, int ret),
TP_ARGS(sb, rs_ptr, inp, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(long, rs_ptr)
SQI_FIELDS(i)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->rs_ptr = rs_ptr;
SQI_ASSIGN(i, inp);
__entry->ret = ret;
),
TP_printk(SCSBF" rs_ptr %ld ret %d inp "SQI_FMT,
SCSB_TRACE_ARGS, __entry->rs_ptr, __entry->ret, SQI_ENTRY_ARGS(i))
);
DECLARE_EVENT_CLASS(scoutfs_quota_rule_op_class,
TP_PROTO(struct super_block *sb, struct squota_rule *rule, int ret),
TP_ARGS(sb, rule, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
SQR_FIELDS(r)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
SQR_ASSIGN(r, rule);
__entry->ret = ret;
),
TP_printk(SCSBF" "SQR_FMT" ret %d",
SCSB_TRACE_ARGS, SQR_ENTRY_ARGS(r), __entry->ret)
);
DEFINE_EVENT(scoutfs_quota_rule_op_class, scoutfs_quota_add_rule,
TP_PROTO(struct super_block *sb, struct squota_rule *rule, int ret),
TP_ARGS(sb, rule, ret)
);
DEFINE_EVENT(scoutfs_quota_rule_op_class, scoutfs_quota_del_rule,
TP_PROTO(struct super_block *sb, struct squota_rule *rule, int ret),
TP_ARGS(sb, rule, ret)
);
TRACE_EVENT(scoutfs_quota_totl_check,
TP_PROTO(struct super_block *sb, struct squota_input *inp, struct scoutfs_key *key,
u64 limit, int ret),
TP_ARGS(sb, inp, key, limit, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
SQI_FIELDS(i)
sk_trace_define(k)
__field(__u64, limit)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
SQI_ASSIGN(i, inp);
sk_trace_assign(k, key);
__entry->limit = limit;
__entry->ret = ret;
),
TP_printk(SCSBF" inp "SQI_FMT" key "SK_FMT" limit %llu ret %d",
SCSB_TRACE_ARGS, SQI_ENTRY_ARGS(i), sk_trace_args(k), __entry->limit,
__entry->ret)
);

kmod/src/trace/wkic.h (new file, 112 lines)

@@ -0,0 +1,112 @@
DECLARE_EVENT_CLASS(scoutfs_wkic_wpage_class,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, ptr)
__field(int, which)
__field(bool, n0l)
__field(bool, n1l)
sk_trace_define(start)
sk_trace_define(end)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ptr = ptr;
__entry->which = which;
__entry->n0l = n0l;
__entry->n1l = n1l;
sk_trace_assign(start, start);
sk_trace_assign(end, end);
),
TP_printk(SCSBF" ptr %p wh %d nl %u,%u start "SK_FMT " end "SK_FMT, SCSB_TRACE_ARGS,
__entry->ptr, __entry->which, __entry->n0l, __entry->n1l,
sk_trace_args(start), sk_trace_args(end))
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_alloced,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_freeing,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_found,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_trimmed,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_erased,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_inserting,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_inserted,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_shrinking,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_dropping,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_replaying,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
DEFINE_EVENT(scoutfs_wkic_wpage_class, scoutfs_wkic_wpage_filled,
TP_PROTO(struct super_block *sb, void *ptr, int which, bool n0l, bool n1l,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, ptr, which, n0l, n1l, start, end)
);
TRACE_EVENT(scoutfs_wkic_read_items,
TP_PROTO(struct super_block *sb, struct scoutfs_key *key, struct scoutfs_key *start,
struct scoutfs_key *end),
TP_ARGS(sb, key, start, end),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
sk_trace_define(key)
sk_trace_define(start)
sk_trace_define(end)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
sk_trace_assign(key, key);
sk_trace_assign(start, start);
sk_trace_assign(end, end);
),
TP_printk(SCSBF" key "SK_FMT" start "SK_FMT " end "SK_FMT, SCSB_TRACE_ARGS,
sk_trace_args(key), sk_trace_args(start), sk_trace_args(end))
);


@@ -39,6 +39,9 @@ struct scoutfs_triggers {
static char *names[] = {
[SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE] = "block_remove_stale",
[SCOUTFS_TRIGGER_SRCH_COMPACT_LOGS_PAD_SAFE] = "srch_compact_logs_pad_safe",
[SCOUTFS_TRIGGER_SRCH_FORCE_LOG_ROTATE] = "srch_force_log_rotate",
[SCOUTFS_TRIGGER_SRCH_MERGE_STOP_SAFE] = "srch_merge_stop_safe",
[SCOUTFS_TRIGGER_STATFS_LOCK_PURGE] = "statfs_lock_purge",
};
@@ -90,13 +93,9 @@ int scoutfs_setup_triggers(struct super_block *sb)
goto out;
}
for (i = 0; i < ARRAY_SIZE(triggers->atomics); i++) {
if (!debugfs_create_atomic_t(names[i], 0644, triggers->dir,
&triggers->atomics[i])) {
ret = -ENOMEM;
goto out;
}
}
for (i = 0; i < ARRAY_SIZE(triggers->atomics); i++)
debugfs_create_atomic_t(names[i], 0644, triggers->dir,
&triggers->atomics[i]);
ret = 0;
out:


@@ -3,6 +3,9 @@
enum scoutfs_trigger {
SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE,
SCOUTFS_TRIGGER_SRCH_COMPACT_LOGS_PAD_SAFE,
SCOUTFS_TRIGGER_SRCH_FORCE_LOG_ROTATE,
SCOUTFS_TRIGGER_SRCH_MERGE_STOP_SAFE,
SCOUTFS_TRIGGER_STATFS_LOCK_PURGE,
SCOUTFS_TRIGGER_NR,
};


@@ -46,6 +46,23 @@ static struct scoutfs_tseq_entry *tseq_rb_next(struct scoutfs_tseq_entry *ent)
return rb_entry(node, struct scoutfs_tseq_entry, node);
}
#ifdef KC_RB_TREE_AUGMENTED_COMPUTE_MAX
static bool tseq_compute_total(struct scoutfs_tseq_entry *ent, bool exit)
{
loff_t total = 1 + tseq_node_total(ent->node.rb_left) +
tseq_node_total(ent->node.rb_right);
if (exit && ent->total == total)
return true;
ent->total = total;
return false;
}
RB_DECLARE_CALLBACKS(static, tseq_rb_callbacks, struct scoutfs_tseq_entry,
node, total, tseq_compute_total);
#else
static loff_t tseq_compute_total(struct scoutfs_tseq_entry *ent)
{
return 1 + tseq_node_total(ent->node.rb_left) +
@@ -53,7 +70,8 @@ static loff_t tseq_compute_total(struct scoutfs_tseq_entry *ent)
}
RB_DECLARE_CALLBACKS(static, tseq_rb_callbacks, struct scoutfs_tseq_entry,
node, loff_t, total, tseq_compute_total)
node, loff_t, total, tseq_compute_total);
#endif
void scoutfs_tseq_tree_init(struct scoutfs_tseq_tree *tree,
scoutfs_tseq_show_t show)
@@ -165,6 +183,13 @@ static void *scoutfs_tseq_seq_next(struct seq_file *m, void *v, loff_t *pos)
ent = tseq_rb_next(ent);
if (ent)
*pos = ent->pos;
else
/*
* once we hit the end, *pos is never used, but it has to
* be updated to avoid an error in bpf_seq_read()
*/
(*pos)++;
return ent;
}


@@ -17,4 +17,15 @@ static inline void down_write_two(struct rw_semaphore *a,
down_write_nested(b, SINGLE_DEPTH_NESTING);
}
/*
* When returning shrinker counts from scan_objects, we should steer
* clear of the magic SHRINK_STOP and SHRINK_EMPTY values, which are near
* ~0UL values. Hence, we cap count to LONG_MAX, which is arbitrarily
* high enough to avoid them.
*/
static inline long shrinker_min_long(long count)
{
return min(count, LONG_MAX);
}
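The intended use (an assumption about call sites, which aren't part of this diff) is that a shrinker's count/scan return path wraps its tally in shrinker_min_long() so that an enormous object count can never collide with the SHRINK_STOP or SHRINK_EMPTY sentinel values.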
#endif

kmod/src/wkic.c (new file, 1160 lines; diff suppressed because it is too large)

kmod/src/wkic.h (new file, 19 lines)

@@ -0,0 +1,19 @@
#ifndef _SCOUTFS_WKIC_H_
#define _SCOUTFS_WKIC_H_
#include "format.h"
typedef int (*wkic_iter_cb_t)(struct scoutfs_key *key, void *val, unsigned int val_len,
void *cb_arg);
int scoutfs_wkic_iterate(struct super_block *sb, struct scoutfs_key *key, struct scoutfs_key *last,
struct scoutfs_key *range_start, struct scoutfs_key *range_end,
wkic_iter_cb_t cb, void *cb_arg);
int scoutfs_wkic_iterate_stable(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *last, struct scoutfs_key *range_start,
struct scoutfs_key *range_end, wkic_iter_cb_t cb, void *cb_arg);
int scoutfs_wkic_setup(struct super_block *sb);
void scoutfs_wkic_destroy(struct super_block *sb);
#endif


@@ -81,7 +81,20 @@ static void init_xattr_key(struct scoutfs_key *key, u64 ino, u32 name_hash,
#define SCOUTFS_XATTR_PREFIX "scoutfs."
#define SCOUTFS_XATTR_PREFIX_LEN (sizeof(SCOUTFS_XATTR_PREFIX) - 1)
/*
* We could have hidden the logic that needs this in a user-prefix
* specific .set handler, but I wanted to make sure that we always
* applied that logic from any call chains to _xattr_set. The
* additional strcmp isn't so expensive given all the rest of the work
* we're doing in here.
*/
static inline bool is_user(const char *name)
{
return !strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN);
}
#define HIDE_TAG "hide."
#define INDX_TAG "indx."
#define SRCH_TAG "srch."
#define TOTL_TAG "totl."
#define TAG_LEN (sizeof(HIDE_TAG) - 1)
@@ -103,6 +116,9 @@ int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
if (!strncmp(name, HIDE_TAG, TAG_LEN)) {
if (++tgs->hide == 0)
return -EINVAL;
} else if (!strncmp(name, INDX_TAG, TAG_LEN)) {
if (++tgs->indx == 0)
return -EINVAL;
} else if (!strncmp(name, SRCH_TAG, TAG_LEN)) {
if (++tgs->srch == 0)
return -EINVAL;
@@ -540,47 +556,57 @@ static int parse_totl_u64(const char *s, int len, u64 *res)
}
/*
* non-destructive relatively quick parse of the last 3 dotted u64s that
* make up the name of the xattr total. -EINVAL is returned if there
* are anything but 3 valid u64 encodings between single dots at the end
* of the name.
* non-destructive relatively quick parse of final dotted u64s in an
* xattr name. If the required number of values are found then we
* return the number of bytes in the name that are not the final dotted
* u64s with their dots. -EINVAL is returned if we didn't find the
* required number of values.
*/
static int parse_totl_key(struct scoutfs_key *key, const char *name, int name_len)
static int parse_dotted_u64s(u64 *u64s, int nr, const char *name, int name_len)
{
u64 tot_name[3];
int end = name_len;
int nr = 0;
int len;
int ret;
int i;
int u;
/* parse name elements in reverse order from end of xattr name string */
for (i = name_len - 1; i >= 0 && nr < ARRAY_SIZE(tot_name); i--) {
for (u = nr - 1, i = name_len - 1; u >= 0 && i >= 0; i--) {
if (name[i] != '.')
continue;
len = end - (i + 1);
ret = parse_totl_u64(&name[i + 1], len, &tot_name[nr]);
ret = parse_totl_u64(&name[i + 1], len, &u64s[u]);
if (ret < 0)
goto out;
end = i;
nr++;
u--;
}
if (nr == ARRAY_SIZE(tot_name)) {
/* swap to account for parsing in reverse */
swap(tot_name[0], tot_name[2]);
scoutfs_xattr_init_totl_key(key, tot_name);
ret = 0;
} else {
if (u == -1)
ret = end;
else
ret = -EINVAL;
}
out:
return ret;
}
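As a worked example with an illustrative name: calling parse_dotted_u64s(u64s, 3, "scoutfs.totl.1.2.3", 18) parses 3, then 2, then 1 from the end, fills u64s[] with {1, 2, 3}, and returns 12, the length of the leading "scoutfs.totl" portion that isn't part of the trailing dotted u64s; if fewer than the requested number of values are found it returns -EINVAL.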
static int parse_totl_key(struct scoutfs_key *key, const char *name, int name_len)
{
u64 u64s[3];
int ret;
ret = parse_dotted_u64s(u64s, ARRAY_SIZE(u64s), name, name_len);
if (ret >= 0) {
scoutfs_xattr_init_totl_key(key, u64s);
ret = 0;
}
return ret;
}
static int apply_totl_delta(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_xattr_totl_val *tval, struct scoutfs_lock *lock)
{
@@ -607,6 +633,72 @@ int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
return SCOUTFS_DELTA_COMBINED;
}
void scoutfs_xattr_indx_get_range(struct scoutfs_key *start, struct scoutfs_key *end)
{
scoutfs_key_set_zeros(start);
start->sk_zone = SCOUTFS_XATTR_INDX_ZONE;
scoutfs_key_set_ones(end);
end->sk_zone = SCOUTFS_XATTR_INDX_ZONE;
}
/*
* .indx. keys are a bit funny because we're iterating over index keys
* by major:minor:inode:xattr_id. That doesn't map nicely to the
* comparison precedence of the key fields. We have to mess around a
* little bit to get the major into the most significant key bits and
* the low bits of xattr id into the least significant key bits.
*/
void scoutfs_xattr_init_indx_key(struct scoutfs_key *key, u8 major, u64 minor, u64 ino, u64 xid)
{
scoutfs_key_set_zeros(key);
key->sk_zone = SCOUTFS_XATTR_INDX_ZONE;
key->_sk_first = cpu_to_le64(((u64)major << 56) | (minor >> 8));
key->_sk_second = cpu_to_le64((minor << 56) | (ino >> 8));
key->_sk_third = cpu_to_le64((ino << 56) | (xid >> 8));
key->_sk_fourth = xid & 0xff;
}
void scoutfs_xattr_get_indx_key(struct scoutfs_key *key, u8 *major, u64 *minor, u64 *ino, u64 *xid)
{
*major = le64_to_cpu(key->_sk_first) >> 56;
*minor = (le64_to_cpu(key->_sk_first) << 8) | (le64_to_cpu(key->_sk_second) >> 56);
*ino = (le64_to_cpu(key->_sk_second) << 8) | (le64_to_cpu(key->_sk_third) >> 56);
*xid = (le64_to_cpu(key->_sk_third) << 8) | key->_sk_fourth;
}
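A worked example of the packing, with illustrative hex values: for major 0x01, minor 0x0203, ino 0x0405, and xid 0x0607, init produces _sk_first 0x0100000000000002, _sk_second 0x0300000000000004, _sk_third 0x0500000000000006, and _sk_fourth 0x07; get reverses the shifts and recovers exactly those four values, so keys sort by major, then minor, then inode, then xattr id as the comment above intends.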
void scoutfs_xattr_set_indx_key_xid(struct scoutfs_key *key, u64 xid)
{
u8 major;
u64 minor;
u64 ino;
u64 dummy;
scoutfs_xattr_get_indx_key(key, &major, &minor, &ino, &dummy);
scoutfs_xattr_init_indx_key(key, major, minor, ino, xid);
}
/*
* This initial parsing of the name doesn't yet have access to an xattr
* id to put in the key. That's added later as the existing xattr is
* found or a new xattr's id is allocated.
*/
static int parse_indx_key(struct scoutfs_key *key, const char *name, int name_len, u64 ino)
{
u64 u64s[2];
int ret;
ret = parse_dotted_u64s(u64s, ARRAY_SIZE(u64s), name, name_len);
if (ret < 0)
return ret;
if (u64s[0] > U8_MAX)
return -EINVAL;
scoutfs_xattr_init_indx_key(key, u64s[0], u64s[1], ino, 0);
return 0;
}
/*
* The confusing swiss army knife of creating, modifying, and deleting
* xattrs.
@@ -627,7 +719,7 @@ int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_len,
const void *value, size_t size, int flags,
const struct scoutfs_xattr_prefix_tags *tgs,
struct scoutfs_lock *lck, struct scoutfs_lock *totl_lock,
struct scoutfs_lock *lck, struct scoutfs_lock *tag_lock,
struct list_head *ind_locks)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
@@ -635,10 +727,11 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
const u64 ino = scoutfs_ino(inode);
struct scoutfs_xattr_totl_val tval = {0,};
struct scoutfs_xattr *xat = NULL;
struct scoutfs_key totl_key;
struct scoutfs_key tag_key;
struct scoutfs_key key;
bool undo_srch = false;
bool undo_totl = false;
bool undo_indx = false;
u8 found_parts;
unsigned int xat_bytes_totl;
unsigned int xat_bytes;
@@ -651,7 +744,8 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
trace_scoutfs_xattr_set(sb, name_len, value, size, flags);
if (WARN_ON_ONCE(tgs->totl && !totl_lock))
if (WARN_ON_ONCE(tgs->totl && tgs->indx) ||
WARN_ON_ONCE((tgs->totl | tgs->indx) && !tag_lock))
return -EINVAL;
/* mirror the syscall's errors for large names and values */
@@ -664,10 +758,22 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
(flags & ~(XATTR_CREATE | XATTR_REPLACE)))
return -EINVAL;
if ((tgs->hide | tgs->srch | tgs->totl) && !capable(CAP_SYS_ADMIN))
if ((tgs->hide | tgs->indx | tgs->srch | tgs->totl) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (tgs->totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
if (tgs->totl && ((ret = parse_totl_key(&tag_key, name, name_len)) != 0))
return ret;
if (tgs->indx &&
(ret = scoutfs_fmt_vers_unsupported(sb, SCOUTFS_FORMAT_VERSION_FEAT_INDX_TAG)))
return ret;
if (tgs->indx && ((ret = parse_indx_key(&tag_key, name, name_len, ino)) != 0))
return ret;
/* retention blocks user.-prefixed xattr modification, all else is allowed */
ret = scoutfs_inode_check_retention(inode);
if (ret < 0 && is_user(name))
return ret;
/* allocate enough to always read an existing xattr's totl */
@@ -708,6 +814,12 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
/* found fields in key will also be used */
found_parts = ret >= 0 ? xattr_nr_parts(xat) : 0;
/* use existing xattr's id or allocate new when creating */
if (found_parts)
id = le64_to_cpu(key.skx_id);
else if (value)
id = si->next_xattr_id++;
if (found_parts && tgs->totl) {
/* parse old totl value before we clobber xat buf */
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
@@ -718,12 +830,25 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
le64_add_cpu(&tval.total, -total);
}
/*
* indx xattrs don't have a value. After returning an error for a
* non-zero value length, or short-circuiting a modification that would
* store the same zero length, all we're left with is creating or
* deleting the xattr.
*/
if (tgs->indx) {
if (size != 0) {
ret = -EINVAL;
goto out;
}
if (found_parts && value) {
ret = 0;
goto out;
}
}
/* prepare the xattr header, name, and start of value in first item */
if (value) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
else
id = si->next_xattr_id++;
xat->name_len = name_len;
xat->val_len = cpu_to_le16(size);
memset(xat->__pad, 0, sizeof(xat->__pad));
@@ -741,9 +866,18 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
le64_add_cpu(&tval.total, total);
}
if (tgs->indx) {
scoutfs_xattr_set_indx_key_xid(&tag_key, id);
if (value)
ret = scoutfs_item_create_force(sb, &tag_key, NULL, 0, tag_lock, NULL);
else
ret = scoutfs_item_delete_force(sb, &tag_key, tag_lock, NULL);
if (ret < 0)
goto out;
undo_indx = true;
}
if (tgs->srch && !(found_parts && value)) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
hash = scoutfs_hash64(name, name_len);
ret = scoutfs_forest_srch_add(sb, hash, ino, id);
if (ret < 0)
@@ -752,7 +886,7 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
}
if (tgs->totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
ret = apply_totl_delta(sb, &tag_key, &tval, tag_lock);
if (ret < 0)
goto out;
undo_totl = true;
@@ -773,10 +907,17 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
/* XXX do these want i_mutex or anything? */
inode_inc_iversion(inode);
inode->i_ctime = CURRENT_TIME;
inode->i_ctime = current_time(inode);
ret = 0;
out:
if (ret < 0 && undo_indx) {
if (value)
err = scoutfs_item_delete_force(sb, &tag_key, tag_lock, NULL);
else
err = scoutfs_item_create_force(sb, &tag_key, NULL, 0, tag_lock, NULL);
BUG_ON(err); /* inconsistent */
}
if (ret < 0 && undo_srch) {
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
@@ -785,7 +926,7 @@ out:
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
err = apply_totl_delta(sb, &tag_key, &tval, tag_lock);
BUG_ON(err);
}
@@ -801,7 +942,7 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name, const void
struct inode *inode = dentry->d_inode;
struct super_block *sb = inode->i_sb;
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_lock *totl_lock = NULL;
struct scoutfs_lock *tag_lock = NULL;
struct scoutfs_lock *lck = NULL;
size_t name_len = strlen(name);
LIST_HEAD(ind_locks);
@@ -816,8 +957,11 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name, const void
if (ret)
goto unlock;
if (tgs.totl) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (tgs.totl || tgs.indx) {
if (tgs.totl)
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &tag_lock);
else
ret = scoutfs_lock_xattr_indx(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &tag_lock);
if (ret)
goto unlock;
}
@@ -836,7 +980,7 @@ retry:
goto release;
ret = scoutfs_xattr_set_locked(dentry->d_inode, name, name_len, value, size, flags, &tgs,
lck, totl_lock, &ind_locks);
lck, tag_lock, &ind_locks);
if (ret == 0)
scoutfs_update_inode_item(inode, lck, &ind_locks);
@@ -845,11 +989,12 @@ release:
scoutfs_inode_index_unlock(sb, &ind_locks);
unlock:
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
scoutfs_unlock(sb, tag_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
#ifndef KC_XATTR_STRUCT_XATTR_HANDLER
/*
* Future kernels have this amazing hack to rewind the name to get the
* skipped prefix. We're back in the stone ages without the handler
@@ -857,22 +1002,43 @@ unlock:
* compat hook to either call the kernel's xattr_full_name(handler), or
* our hack to use the flags as the prefix length.
*/
static const char *full_name_hack(void *handler, const char *name, int len)
static const char *full_name_hack(const char *name, int len)
{
return name - len;
}
#endif
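To illustrate the compat path, here is a minimal userspace sketch (not scoutfs or kernel code; the names are hypothetical) of how rewinding the suffix pointer by the prefix length recovers the full xattr name, assuming the suffix pointer still points inside the original prefixed string:

#include <stdio.h>
#include <string.h>

/* rewind a suffix pointer by the known prefix length */
static const char *rewind_full_name(const char *suffix, int prefix_len)
{
        return suffix - prefix_len;
}

int main(void)
{
        static const char prefixed[] = "user.example";
        int prefix_len = strlen("user.");
        const char *suffix = prefixed + prefix_len;

        /* prints "user.example" */
        printf("%s\n", rewind_full_name(suffix, prefix_len));
        return 0;
}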
static int scoutfs_xattr_get_handler(struct dentry *dentry, const char *name,
void *value, size_t size, int handler_flags)
static int scoutfs_xattr_get_handler
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
(const struct xattr_handler *handler, struct dentry *dentry,
struct inode *inode, const char *name, void *value,
size_t size)
{
name = full_name_hack(NULL, name, handler_flags);
name = xattr_full_name(handler, name);
#else
(struct dentry *dentry, const char *name,
void *value, size_t size, int handler_flags)
{
name = full_name_hack(name, handler_flags);
#endif
return scoutfs_xattr_get(dentry, name, value, size);
}
static int scoutfs_xattr_set_handler(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags, int handler_flags)
static int scoutfs_xattr_set_handler
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
(const struct xattr_handler *handler,
KC_VFS_NS_DEF
struct dentry *dentry,
struct inode *inode, const char *name, const void *value,
size_t size, int flags)
{
name = full_name_hack(NULL, name, handler_flags);
name = xattr_full_name(handler, name);
#else
(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags, int handler_flags)
{
name = full_name_hack(name, handler_flags);
#endif
return scoutfs_xattr_set(dentry, name, value, size, flags);
}
@@ -905,14 +1071,22 @@ static const struct xattr_handler scoutfs_xattr_security_handler = {
};
static const struct xattr_handler scoutfs_xattr_acl_access_handler = {
#ifdef KC_XATTR_HANDLER_NAME
.name = XATTR_NAME_POSIX_ACL_ACCESS,
#else
.prefix = XATTR_NAME_POSIX_ACL_ACCESS,
#endif
.flags = ACL_TYPE_ACCESS,
.get = scoutfs_acl_get_xattr,
.set = scoutfs_acl_set_xattr,
};
static const struct xattr_handler scoutfs_xattr_acl_default_handler = {
#ifdef KC_XATTR_HANDLER_NAME
.name = XATTR_NAME_POSIX_ACL_DEFAULT,
#else
.prefix = XATTR_NAME_POSIX_ACL_DEFAULT,
#endif
.flags = ACL_TYPE_DEFAULT,
.get = scoutfs_acl_get_xattr,
.set = scoutfs_acl_set_xattr,
@@ -1027,14 +1201,15 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
{
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *totl_lock = NULL;
struct scoutfs_lock *tag_lock = NULL;
struct scoutfs_xattr_totl_val tval;
struct scoutfs_key totl_key;
struct scoutfs_key tag_key;
struct scoutfs_key last;
struct scoutfs_key key;
bool release = false;
unsigned int bytes;
unsigned int val_len;
u8 locked_zone = 0;
void *value;
u64 total;
u64 hash;
@@ -1080,16 +1255,32 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
goto out;
}
ret = parse_totl_key(&totl_key, xat->name, xat->name_len) ?:
ret = parse_totl_key(&tag_key, xat->name, xat->name_len) ?:
parse_totl_u64(value, val_len, &total);
if (ret < 0)
break;
}
if (tgs.totl && totl_lock == NULL) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (tgs.indx) {
ret = parse_indx_key(&tag_key, xat->name, xat->name_len, ino);
if (ret < 0)
goto out;
}
if ((tgs.totl || tgs.indx) && locked_zone != tag_key.sk_zone) {
if (tag_lock) {
scoutfs_unlock(sb, tag_lock, SCOUTFS_LOCK_WRITE_ONLY);
tag_lock = NULL;
}
if (tgs.totl)
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0,
&tag_lock);
else
ret = scoutfs_lock_xattr_indx(sb, SCOUTFS_LOCK_WRITE_ONLY, 0,
&tag_lock);
if (ret < 0)
break;
locked_zone = tag_key.sk_zone;
}
ret = scoutfs_hold_trans(sb, false);
@@ -1112,7 +1303,13 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
if (tgs.totl) {
tval.total = cpu_to_le64(-total);
tval.count = cpu_to_le64(-1LL);
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
ret = apply_totl_delta(sb, &tag_key, &tval, tag_lock);
if (ret < 0)
break;
}
if (tgs.indx) {
ret = scoutfs_item_delete_force(sb, &tag_key, tag_lock, NULL);
if (ret < 0)
break;
}
@@ -1125,7 +1322,7 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
if (release)
scoutfs_release_trans(sb);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
scoutfs_unlock(sb, tag_lock, SCOUTFS_LOCK_WRITE_ONLY);
kfree(xat);
out:
return ret;

View File

@@ -3,6 +3,7 @@
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
indx:1,
srch:1,
totl:1;
};
@@ -30,4 +31,9 @@ int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name);
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len);
void scoutfs_xattr_indx_get_range(struct scoutfs_key *start, struct scoutfs_key *end);
void scoutfs_xattr_init_indx_key(struct scoutfs_key *key, u8 major, u64 minor, u64 ino, u64 xid);
void scoutfs_xattr_get_indx_key(struct scoutfs_key *key, u8 *major, u64 *minor, u64 *ino, u64 *xid);
void scoutfs_xattr_set_indx_key_xid(struct scoutfs_key *key, u64 xid);
#endif

2
tests/.gitignore vendored
View File

@@ -8,3 +8,5 @@ src/bulk_create_paths
src/find_xattrs
src/stage_tmpfile
src/create_xattr_loop
src/o_tmpfile_umask
src/o_tmpfile_linkat

1
tests/.xfstests-branch Normal file
View File

@@ -0,0 +1 @@
v2022.05.01-2-g787cd20

View File

@@ -11,7 +11,9 @@ BIN := src/createmany \
src/stage_tmpfile \
src/find_xattrs \
src/create_xattr_loop \
src/fragmented_data_extents
src/fragmented_data_extents \
src/o_tmpfile_umask \
src/o_tmpfile_linkat
DEPS := $(wildcard src/*.d)

View File

@@ -25,8 +25,9 @@ All options can be seen by running with -h.
This script is built to test multi-node systems on one host by using
different mounts of the same devices. The script creates a fake block
device in front of each fs block device for each mount that will be
tested. Currently it will create free loop devices and will mount on
/mnt/test.[0-9].
tested. It will create predictable device mapper devices and mount
them on /mnt/test.N. These static device names and mount paths limit
the script to a single execution per host.
All tests will be run by default. Particular tests can be included or
excluded by providing test name regular expressions with the -I and -E
@@ -104,14 +105,15 @@ used during the test.
| Variable | Description | Origin | Example |
| ---------------- | ------------------- | --------------- | ----------------- |
| T\_MB[0-9] | per-mount meta bdev | created per run | /dev/loop0 |
| T\_DB[0-9] | per-mount data bdev | created per run | /dev/loop1 |
| T\_MB[0-9] | per-mount meta bdev | created per run | /dev/mapper/\_scoutfs\_test\_meta\_[0-9] |
| T\_DB[0-9] | per-mount data bdev | created per run | /dev/mapper/\_scoutfs\_test\_data\_[0-9] |
| T\_D[0-9] | per-mount test dir | made for test | /mnt/test.[0-9]/t |
| T\_META\_DEVICE | main FS meta bdev | -M | /dev/vda |
| T\_DATA\_DEVICE | main FS data bdev | -D | /dev/vdb |
| T\_EX\_META\_DEV | scratch meta bdev | -f | /dev/vdd |
| T\_EX\_DATA\_DEV | scratch data bdev | -e | /dev/vdc |
| T\_M[0-9] | mount paths | mounted per run | /mnt/test.[0-9]/ |
| T\_MODULE | built kernel module | created per run | ../kmod/src/..ko |
| T\_NR\_MOUNTS | number of mounts | -n | 3 |
| T\_O[0-9] | mount options | created per run | -o server\_addr= |
| T\_QUORUM | quorum count | -q | 2 |

View File

@@ -7,8 +7,9 @@ t_status_msg()
export T_PASS_STATUS=100
export T_SKIP_STATUS=101
export T_FAIL_STATUS=102
export T_SKIP_PERMITTED_STATUS=103
export T_FIRST_STATUS="$T_PASS_STATUS"
export T_LAST_STATUS="$T_FAIL_STATUS"
export T_LAST_STATUS="$T_SKIP_PERMITTED_STATUS"
t_pass()
{
@@ -21,6 +22,17 @@ t_skip()
exit $T_SKIP_STATUS
}
#
# This exit code is *reserved* for tests that are known up front to never work
# in certain cases. Each such case should be expressly documented and made
# abundantly clear before merging. The test itself should document its case.
#
t_skip_permitted()
{
t_status_msg "$@"
exit $T_SKIP_PERMITTED_STATUS
}
t_fail()
{
t_status_msg "$@"
@@ -35,10 +47,22 @@ t_fail()
t_quiet()
{
echo "# $*" >> "$T_TMPDIR/quiet.log"
"$@" > "$T_TMPDIR/quiet.log" 2>&1 || \
"$@" >> "$T_TMPDIR/quiet.log" 2>&1 || \
t_fail "quiet command failed"
}
#
# Quietly run a command during a test. The output is logged but only
# the return code is printed, presumably because the output contains
# a lot of invocation specific text that is difficult to filter.
#
t_rc()
{
echo "# $*" >> "$T_TMP.rc.log"
"$@" >> "$T_TMP.rc.log" 2>&1
echo "rc: $?"
}
#
# redirect test output back to the output of the invoking script instead
# of the compared output.

View File

@@ -6,6 +6,61 @@ t_filter_fs()
-e 's@Device: [a-fA-F0-9]*h/[0-9]*d@Device: 0h/0d@g'
}
#
# We can hit a spurious kasan warning that was fixed upstream:
#
# e504e74cc3a2 x86/unwind/orc: Disable KASAN checking in the ORC unwinder, part 2
#
# KASAN can get mad when the unwinder doesn't find ORC metadata and
# wanders up without using frames and hits the KASAN stack red zones.
# We can ignore these messages.
#
# They're bracketed by:
# [ 2687.690127] ==================================================================
# [ 2687.691366] BUG: KASAN: stack-out-of-bounds in get_reg+0x1bc/0x230
# ...
# [ 2687.706220] ==================================================================
# [ 2687.707284] Disabling lock debugging due to kernel taint
#
# That final lock debugging message may not be included.
#
ignore_harmless_unwind_kasan_stack_oob()
{
awk '
BEGIN {
in_soob = 0
soob_nr = 0
}
( !in_soob && $0 ~ /==================================================================/ ) {
in_soob = 1
soob_nr = NR
saved = $0
}
( in_soob == 1 && NR == (soob_nr + 1) ) {
if (match($0, /KASAN: stack-out-of-bounds in get_reg/) != 0) {
in_soob = 2
} else {
in_soob = 0
print saved
}
saved=""
}
( in_soob == 2 && $0 ~ /==================================================================/ ) {
in_soob = 3
soob_nr = NR
}
( in_soob == 3 && NR > soob_nr && $0 !~ /Disabling lock debugging/ ) {
in_soob = 0
}
( !in_soob ) { print $0 }
END {
if (saved) {
print saved
}
}
'
}
#
# Filter out expected messages. Putting messages here implies that
# tests aren't relying on messages to discover failures.. they're
@@ -18,6 +73,7 @@ t_filter_dmesg()
# the kernel can just be noisy
re=" used greatest stack depth: "
re="$re|sched: RT throttling activated"
# mkfs/mount checks partition tables
re="$re|unknown partition table"
@@ -61,6 +117,7 @@ t_filter_dmesg()
re="$re|scoutfs .* error: meta_super META flag not set"
re="$re|scoutfs .* error: could not open metadev:.*"
re="$re|scoutfs .* error: Unknown or malformed option,.*"
re="$re|scoutfs .* error: invalid quorum_heartbeat_timeout_ms value"
# in debugging kernels we can slow things down a bit
re="$re|hrtimer: interrupt took .*"
@@ -81,6 +138,28 @@ t_filter_dmesg()
re="$re|scoutfs .* error .* freeing merged btree blocks.*.final commit del.upd freeing item"
re="$re|scoutfs .* error .*reading quorum block.*to update event.*"
re="$re|scoutfs .* error.*server failed to bind to.*"
re="$re|scoutfs .* critical transaction commit failure.*"
egrep -v "($re)"
# change-devices causes loop device resizing
re="$re|loop: module loaded"
re="$re|loop[0-9].* detected capacity change from.*"
re="$re|dm-[0-9].* detected capacity change from.*"
# ignore systemd-journal rotating
re="$re|systemd-journald.*"
# process accounting can be noisy
re="$re|Process accounting resumed.*"
# format vers back/compat tries bad mounts
re="$re|scoutfs .* error.*outside of supported version.*"
re="$re|scoutfs .* error.*could not get .*super.*"
# ignore "unsafe core pattern" when xfstests tries to disable cores"
re="$re|Unsafe core_pattern used with fs.suid_dumpable=2.*"
re="$re|Pipe handler or fully qualified core dump path required.*"
re="$re|Set kernel.core_pattern before fs.suid_dumpable.*"
egrep -v "($re)" | \
ignore_harmless_unwind_kasan_stack_oob
}

View File

@@ -29,13 +29,12 @@ t_mount_rid()
}
#
# Output the "f.$fsid.r.$rid" identifier string for the given mount
# number, 0 is used by default if none is specified.
# Output the "f.$fsid.r.$rid" identifier string for the given path
# in a mounted scoutfs volume.
#
t_ident()
t_ident_from_mnt()
{
local nr="${1:-0}"
local mnt="$(eval echo \$T_M$nr)"
local mnt="$1"
local fsid
local rid
@@ -45,6 +44,38 @@ t_ident()
echo "f.${fsid:0:6}.r.${rid:0:6}"
}
#
# Output the "f.$fsid.r.$rid" identifier string for the given mount
# number, 0 is used by default if none is specified.
#
t_ident()
{
local nr="${1:-0}"
local mnt="$(eval echo \$T_M$nr)"
t_ident_from_mnt "$mnt"
}
#
# Output the sysfs path for the given "f.$fsid.r.$rid" identifier.
#
t_sysfs_path_from_ident()
{
local ident="$1"
echo "/sys/fs/scoutfs/$ident"
}
#
# Output the sysfs path for a path in a mounted fs.
#
t_sysfs_path_from_mnt()
{
local mnt="$1"
t_sysfs_path_from_ident $(t_ident_from_mnt $mnt)
}
#
# Output the mount's sysfs path, defaulting to mount 0 if none is
# specified.
@@ -53,7 +84,7 @@ t_sysfs_path()
{
local nr="$1"
echo "/sys/fs/scoutfs/$(t_ident $nr)"
t_sysfs_path_from_ident $(t_ident $nr)
}
#
@@ -75,6 +106,15 @@ t_fs_nrs()
seq 0 $((T_NR_MOUNTS - 1))
}
#
# output the fs nrs of quorum nodes; we "know" that
# the quorum nrs are the first consecutive nrs
#
t_quorum_nrs()
{
seq 0 $((T_QUORUM - 1))
}
#
# outputs "1" if the fs number has "1" in its quorum/is_leader file.
# All other cases output 0, including the fs nr being a client which
@@ -144,7 +184,27 @@ t_mount()
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet mount -t scoutfs \$T_O$nr \$T_DB$nr \$T_M$nr
eval t_quiet mount -t scoutfs \$T_O$nr\$opt \$T_DB$nr \$T_M$nr
}
#
# Mount with an optional mount option string. If the string is empty
# then the saved mount options are used. If the string has contents
# then it is appended to the end of the saved options with a separating
# comma.
#
# Unlike t_mount, this won't inherently fail in t_quiet; errors are
# returned so bad options can be tested.
#
t_mount_opt()
{
local nr="$1"
local opt="${2:+,$2}"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval mount -t scoutfs \$T_O$nr\$opt \$T_DB$nr \$T_M$nr
}
t_umount()
@@ -236,6 +296,15 @@ t_trigger_get() {
cat "$(t_trigger_path "$nr")/$which"
}
t_trigger_set() {
local which="$1"
local nr="$2"
local val="$3"
local path=$(t_trigger_path "$nr")
echo "$val" > "$path/$which"
}
t_trigger_show() {
local which="$1"
local string="$2"
@@ -247,9 +316,8 @@ t_trigger_show() {
t_trigger_arm_silent() {
local which="$1"
local nr="$2"
local path=$(t_trigger_path "$nr")
echo 1 > "$path/$which"
t_trigger_set "$which" "$nr" 1
}
t_trigger_arm() {
@@ -391,7 +459,7 @@ t_set_sysfs_mount_option() {
local val="$3"
local opt="$(t_sysfs_path $nr)/mount_options/$name"
echo "$val" > "$opt"
echo "$val" > "$opt" 2>/dev/null
}
t_set_all_sysfs_mount_options() {

View File

@@ -0,0 +1,155 @@
== setup test directory
== getfacl
directory drwxr-xr-x 0 0 0 '.'
# file: .
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
== basic non-acl access through permissions
directory drwxr-xr-x 0 44444 0 'dir-testuid'
touch: cannot touch 'dir-testuid/file-group-write': Permission denied
touch: cannot touch 'symlinkdir-testuid/symlink-file-group-write': Permission denied
regular empty file -rw-r--r-- 22222 44444 0 'dir-testuid/file-group-write'
regular empty file -rw-r--r-- 22222 44444 0 'symlinkdir-testuid/symlink-file-group-write'
== basic acl access
directory drwxr-xr-x 0 0 0 'dir-root'
touch: cannot touch 'dir-root/file-group-write': Permission denied
touch: cannot touch 'symlinkdir-root/file-group-write': Permission denied
# file: dir-root
# owner: root
# group: root
user::rwx
user:22222:rwx
group::r-x
mask::rwx
other::r-x
regular empty file -rw-r--r-- 22222 0 0 'dir-root/file-group-write'
regular empty file -rw-r--r-- 22222 0 0 'symlinkdir-root/file-group-write'
== directory exec
Success
Success
# file: dir-root
# owner: root
# group: root
user::rwx
user:22222:rw-
group::r-x
mask::rwx
other::r-x
Failed
Failed
# file: dir-root
# owner: root
# group: root
user::rwx
user:22222:rw-
group::r-x
group:44444:rwx
mask::rwx
other::r-x
Success
Success
== get/set attr
regular empty file -rw-r--r-- 0 0 0 'file-root'
setfattr: file-root: Permission denied
# file: file-root
# owner: root
# group: root
user::rw-
user:22222:rw-
group::r--
mask::rw-
other::r--
# file: file-root
user.test2="Success"
# file: file-root
# owner: root
# group: root
user::rw-
group::r--
mask::r--
other::r--
setfattr: file-root: Permission denied
# file: file-root
user.test2="Success"
# file: file-root
# owner: root
# group: root
user::rw-
group::r--
group:44444:rw-
mask::rw-
other::r--
# file: file-root
user.test2="Success"
user.test4="Success"
== inheritance / default acl
directory drwxr-xr-x 0 0 0 'dir-root2'
mkdir: cannot create directory 'dir-root2/dir': Permission denied
touch: cannot touch 'dir-root2/dir/file': No such file or directory
# file: dir-root2
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:user:22222:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
mkdir: cannot create directory 'dir-root2/dir': Permission denied
touch: cannot touch 'dir-root2/dir/file': No such file or directory
# file: dir-root2
# owner: root
# group: root
user::rwx
user:22222:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:22222:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
directory drwxrwxr-x 22222 0 4 'dir-root2/dir'
# file: dir-root2/dir
# owner: 22222
# group: root
user::rwx
user:22222:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:22222:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
regular empty file -rw-rw-r-- 22222 0 0 'dir-root2/dir/file'
# file: dir-root2/dir/file
# owner: 22222
# group: root
user::rw-
user:22222:rwx #effective:rw-
group::r-x #effective:r--
mask::rw-
other::r--
== cleanup

View File

@@ -47,11 +47,13 @@ four
--- dir within dir
--- overwrite file
--- can't overwrite non-empty dir
mv: cannot move /mnt/test/test/basic-posix-consistency/dir/c/clobber to /mnt/test/test/basic-posix-consistency/dir/a/dir: Directory not empty
mv: cannot move '/mnt/test/test/basic-posix-consistency/dir/c/clobber' to '/mnt/test/test/basic-posix-consistency/dir/a/dir': Directory not empty
--- can overwrite empty dir
--- can rename into root
== path resoluion
== inode indexes match after syncing existing
== inode indexes match after copying and syncing
== inode indexes match after removing and syncing
== concurrent creates make one file
one-file
== cleanup

View File

@@ -0,0 +1,28 @@
== make tmp sparse data dev files
== make scratch fs
== small new data device fails
rc: 1
== check sees data device errors
rc: 1
rc: 0
== preparing while mounted fails
rc: 1
== preparing without recovery fails
rc: 1
== check sees metadata errors
rc: 1
rc: 1
== preparing with file data fails
rc: 1
== preparing after emptied
rc: 0
== checks pass
rc: 0
rc: 0
== using prepared
== preparing larger and resizing
rc: 0
equal_prepared
large_prepared
resized larger test rc: 0
== cleanup

View File

@@ -1,3 +1,4 @@
== measure initial createmany
== measure initial createmany
== measure two concurrent createmany runs
== cleanup

View File

@@ -1,26 +1,330 @@
== initial writes smaller than prealloc grow to prealloc size
/mnt/test/test/data-prealloc/file-1: 7 extents found
/mnt/test/test/data-prealloc/file-2: 7 extents found
/mnt/test/test/data-prealloc/file-1: extents: 7
/mnt/test/test/data-prealloc/file-2: extents: 7
== larger files get full prealloc extents
/mnt/test/test/data-prealloc/file-1: 9 extents found
/mnt/test/test/data-prealloc/file-2: 9 extents found
/mnt/test/test/data-prealloc/file-1: extents: 9
/mnt/test/test/data-prealloc/file-2: extents: 9
== non-streaming writes with contig have per-block extents
/mnt/test/test/data-prealloc/file-1: 32 extents found
/mnt/test/test/data-prealloc/file-2: 32 extents found
/mnt/test/test/data-prealloc/file-1: extents: 32
/mnt/test/test/data-prealloc/file-2: extents: 32
== any writes to region prealloc get full extents
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: extents: 4
/mnt/test/test/data-prealloc/file-2: extents: 4
/mnt/test/test/data-prealloc/file-1: extents: 4
/mnt/test/test/data-prealloc/file-2: extents: 4
== streaming offline writes get full extents either way
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: extents: 4
/mnt/test/test/data-prealloc/file-2: extents: 4
/mnt/test/test/data-prealloc/file-1: extents: 4
/mnt/test/test/data-prealloc/file-2: extents: 4
== goofy preallocation amounts work
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 3 extents found
/mnt/test/test/data-prealloc/file-2: 3 extents found
/mnt/test/test/data-prealloc/file-1: extents: 6
/mnt/test/test/data-prealloc/file-2: extents: 6
/mnt/test/test/data-prealloc/file-1: extents: 6
/mnt/test/test/data-prealloc/file-2: extents: 6
/mnt/test/test/data-prealloc/file-1: extents: 3
/mnt/test/test/data-prealloc/file-2: extents: 3
== block writes into region allocs hole
wrote blk 24
wrote blk 32
wrote blk 40
wrote blk 55
wrote blk 63
wrote blk 71
wrote blk 72
wrote blk 79
wrote blk 80
wrote blk 87
wrote blk 88
wrote blk 95
before:
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 0 at pos 0
wrote blk 0
0.. 1:
1.. 7: unwritten
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 0 at pos 1
wrote blk 15
0.. 1:
1.. 14: unwritten
15.. 1:
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 0 at pos 2
wrote blk 19
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 1 at pos 0
wrote blk 25
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 1 at pos 1
wrote blk 39
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 1 at pos 2
wrote blk 44
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 2 at pos 0
wrote blk 48
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 2 at pos 1
wrote blk 62
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 2 at pos 2
wrote blk 67
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 3 at pos 0
wrote blk 73
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
73.. 1:
74.. 5: unwritten
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 3 at pos 1
wrote blk 86
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
73.. 1:
74.. 5: unwritten
79.. 2:
86.. 1:
87.. 2:
95.. 1: eof
writing into existing 3 at pos 2
wrote blk 92
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
73.. 1:
74.. 5: unwritten
79.. 2:
86.. 1:
87.. 2:
92.. 1:
93.. 2: unwritten
95.. 1: eof

View File

@@ -0,0 +1,4 @@
== ensuring utils and module for old versions
== unmounting test fs and removing test module
== testing combinations of old and new format versions
== restoring test module and mount

View File

@@ -0,0 +1,18 @@
== root inode returns nothing
== crazy large unused inode does nothing
== basic entry
file
== rename
renamed
== hard link
file
link
== removal
== different dirs
== file types
type b name block
type c name char
type d name dir
type f name file
type l name symlink
== all name lengths work

View File

@@ -17,7 +17,7 @@ ino not found in dseq index
mount 0 contents after mount 1 rm: contents
ino found in dseq index
ino found in dseq index
stat: cannot stat /mnt/test/test/inode-deletion/file: No such file or directory
stat: cannot stat '/mnt/test/test/inode-deletion/file': No such file or directory
ino not found in dseq index
ino not found in dseq index
== lots of deletions use one open map

View File

@@ -1,3 +1,4 @@
== setting longer hung task timeout
== creating fragmented extents
== unlink file with moved extents to free extents per block
== cleanup

View File

@@ -0,0 +1,2 @@
=== setup
=== spin reading and shrinking

View File

@@ -1,3 +1,11 @@
== non-acl O_TMPFILE creation honors umask
umask 022
fstat after open(0777): 0100755
stat after linkat: 0100755
umask 077
fstat after open(0777): 0100700
stat after linkat: 0100700
== stage from tmpfile
total file size 33669120
00000000 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 |AAAAAAAAAAAAAAAA|
*

View File

@@ -20,10 +20,10 @@ offline waiting should now have two known entries:
data_wait_err found 2 waiters.
offline waiting should now have 0 known entries:
0
dd: error reading /mnt/test/test/offline-extent-waiting/dir/file: Input/output error
dd: error reading '/mnt/test/test/offline-extent-waiting/dir/file': Input/output error
0+0 records in
0+0 records out
dd: error reading /mnt/test/test/offline-extent-waiting/dir/file: Input/output error
dd: error reading '/mnt/test/test/offline-extent-waiting/dir/file': Input/output error
0+0 records in
0+0 records out
offline waiting should be empty again:

24
tests/golden/projects Normal file
View File

@@ -0,0 +1,24 @@
== default new files don't have project
0
== set new project on files and dirs
8675309
8675309
== non-root can see id
8675309
== can use IDs around long width limits
2147483647
2147483648
4294967295
9223372036854775807
9223372036854775808
18446744073709551615
== created files and dirs inherit project id
8675309
8675309
== inheritance continues
8675309
== clearing project id stops inheritance
0
0
== o_tmpfile creations inherit dir
8675309

View File

@@ -0,0 +1,5 @@
== bad timeout values fail
== bad mount option fails
== mount option
== sysfs
== reset all options

41
tests/golden/quota Normal file
View File

@@ -0,0 +1,41 @@
== prepare dir with write perm for test ids
== test assumes starting with no rules, empty list
== add rule
7 13,L,- 15,L,- 17,L,- I 33 -
== list is empty again after delete
== can change limits without deleting
1 1,L,- 1,L,- 1,L,- I 100 -
1 1,L,- 1,L,- 1,L,- I 101 -
1 1,L,- 1,L,- 1,L,- I 99 -
== wipe and restore rules in bulk
7 15,L,- 0,L,- 0,L,- I 33 -
7 14,L,- 0,L,- 0,L,- I 33 -
7 13,L,- 0,L,- 0,L,- I 33 -
7 12,L,- 0,L,- 0,L,- I 33 -
7 11,L,- 0,L,- 0,L,- I 33 -
7 10,L,- 0,L,- 0,L,- I 33 -
7 15,L,- 0,L,- 0,L,- I 33 -
7 14,L,- 0,L,- 0,L,- I 33 -
7 13,L,- 0,L,- 0,L,- I 33 -
7 12,L,- 0,L,- 0,L,- I 33 -
7 11,L,- 0,L,- 0,L,- I 33 -
7 10,L,- 0,L,- 0,L,- I 33 -
== default rule prevents file creation
touch: cannot touch '/mnt/test/test/quota/dir/file': Disk quota exceeded
== decreasing totl allows file creation again
== attr selecting rules prevent creation
touch: cannot touch '/mnt/test/test/quota/dir/file': Disk quota exceeded
touch: cannot touch '/mnt/test/test/quota/dir/file': Disk quota exceeded
== multi attr selecting doesn't prevent partial
touch: cannot touch '/mnt/test/test/quota/dir/file': Disk quota exceeded
== op differentiates
== higher priority rule applies
touch: cannot touch '/mnt/test/test/quota/dir/file': Disk quota exceeded
== data rules with total and count prevent write and fallocate
dd: error writing '/mnt/test/test/quota/dir/file': Disk quota exceeded
fallocate: fallocate failed: Disk quota exceeded
dd: error writing '/mnt/test/test/quota/dir/file': Disk quota exceeded
fallocate: fallocate failed: Disk quota exceeded
== added rules work after bulk restore
touch: cannot touch '/mnt/test/test/quota/dir/file': Disk quota exceeded
== cleanup

View File

@@ -0,0 +1,28 @@
== setting retention on dir fails
attr_x ioctl failed on '/mnt/test/test/retention-basic': Invalid argument (22)
scoutfs: set-attr-x failed: Invalid argument (22)
== set retention
== get-attr-x shows retention
1
== unpriv can't clear retention
attr_x ioctl failed on '/mnt/test/test/retention-basic/file-1': Operation not permitted (1)
scoutfs: set-attr-x failed: Operation not permitted (1)
== can set hidden scoutfs xattr in retention
== setting user. xattr fails in retention
setfattr: /mnt/test/test/retention-basic/file-1: Operation not permitted
== file deletion fails in retention
rm: cannot remove '/mnt/test/test/retention-basic/file-1': Operation not permitted
== file rename fails in retention
mv: cannot move '/mnt/test/test/retention-basic/file-1' to '/mnt/test/test/retention-basic/file-2': Operation not permitted
== file write fails in retention
date: write error: Operation not permitted
== file truncate fails in retention
truncate: failed to truncate '/mnt/test/test/retention-basic/file-1' at 0 bytes: Operation not permitted
== setattr fails in retention
touch: setting times of '/mnt/test/test/retention-basic/file-1': Operation not permitted
== clear retention
== file write
== file rename
== setattr
== xattr deletion
== cleanup

View File

@@ -22,10 +22,8 @@ scoutfs: setattr failed: Invalid argument (22)
== large ctime is set
1972-02-19 00:06:25.999999999 +0000
== large offline extents are created
Filesystem type is: 554f4353
File size of /mnt/test/test/setattr_more/file is 40988672 (10007 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 10006: 0.. 10006: 10007: unknown,eof
/mnt/test/test/setattr_more/file: 1 extent found
0: offset: 0 0 length: 10007 flags: O.L
extents: 1
== correct offline extent length
976563
== omitting data_version should not fail

View File

@@ -1,5 +1,9 @@
== create/release/stage single block file
0: offset: 0 0 length: 1 flags: O.L
extents: 1
== create/release/stage larger file
0: offset: 0 0 length: 4096 flags: O.L
extents: 1
== multiple release,drop_cache,stage cycles
== release+stage shouldn't change stat, data seq or vers
== stage does change meta_seq
@@ -7,16 +11,22 @@
stage: must provide file version with --data-version
Try `stage --help' or `stage --usage' for more information.
== wrapped region fails
stage returned -1, not 4096: error Invalid argument (22)
stage returned -1, not 8192: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
== non-block aligned offset fails
stage returned -1, not 4095: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
0: offset: 0 0 length: 1 flags: O.L
extents: 1
== non-block aligned len within block fails
stage returned -1, not 1024: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
0: offset: 0 0 length: 1 flags: O.L
extents: 1
== partial final block that writes to i_size does work
== zero length stage doesn't bring blocks online
0: offset: 0 0 length: 100 flags: O.L
extents: 1
== stage of non-regular file fails
ioctl failed: Inappropriate ioctl for device (25)
stage: must provide file version with --data-version

View File

@@ -0,0 +1,37 @@
== initialize per-mount values
== arm compaction triggers
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_merge_stop_safe armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_merge_stop_safe armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_merge_stop_safe armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_merge_stop_safe armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_merge_stop_safe armed: 1
== compact more often
== create padded sorted inputs by forcing log rotation
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_compact_logs_pad_safe armed: 1
== compaction of padded should stop at safe
== verify no compaction errors
== cleanup

View File

@@ -1,2 +1,3 @@
== create initial files
== race stage and release
== cleanup

View File

@@ -5,24 +5,44 @@ generic/004
generic/005
generic/006
generic/007
generic/008
generic/009
generic/011
generic/012
generic/013
generic/014
generic/015
generic/016
generic/018
generic/020
generic/021
generic/022
generic/023
generic/024
generic/025
generic/026
generic/028
generic/031
generic/032
generic/033
generic/034
generic/035
generic/037
generic/039
generic/040
generic/041
generic/050
generic/052
generic/053
generic/056
generic/057
generic/058
generic/059
generic/060
generic/061
generic/062
generic/063
generic/064
generic/065
generic/066
generic/067
@@ -31,42 +51,193 @@ generic/070
generic/071
generic/073
generic/076
generic/078
generic/079
generic/081
generic/082
generic/084
generic/086
generic/087
generic/088
generic/090
generic/091
generic/092
generic/094
generic/096
generic/097
generic/098
generic/099
generic/101
generic/104
generic/105
generic/106
generic/107
generic/110
generic/111
generic/113
generic/114
generic/115
generic/116
generic/117
generic/118
generic/119
generic/121
generic/122
generic/123
generic/124
generic/128
generic/129
generic/130
generic/131
generic/134
generic/135
generic/136
generic/138
generic/139
generic/140
generic/142
generic/143
generic/144
generic/145
generic/146
generic/147
generic/148
generic/149
generic/150
generic/151
generic/152
generic/153
generic/154
generic/155
generic/156
generic/157
generic/158
generic/159
generic/160
generic/161
generic/162
generic/163
generic/169
generic/171
generic/172
generic/173
generic/174
generic/177
generic/178
generic/179
generic/180
generic/181
generic/182
generic/183
generic/184
generic/185
generic/188
generic/189
generic/190
generic/191
generic/193
generic/194
generic/195
generic/196
generic/197
generic/198
generic/199
generic/200
generic/201
generic/202
generic/203
generic/205
generic/206
generic/207
generic/210
generic/211
generic/212
generic/214
generic/216
generic/217
generic/218
generic/219
generic/220
generic/221
generic/222
generic/223
generic/225
generic/227
generic/228
generic/229
generic/230
generic/235
generic/236
generic/237
generic/238
generic/240
generic/244
generic/245
generic/249
generic/250
generic/252
generic/253
generic/254
generic/255
generic/256
generic/257
generic/258
generic/259
generic/260
generic/261
generic/262
generic/263
generic/264
generic/265
generic/266
generic/267
generic/268
generic/271
generic/272
generic/276
generic/277
generic/278
generic/279
generic/281
generic/282
generic/283
generic/284
generic/286
generic/287
generic/288
generic/289
generic/290
generic/291
generic/292
generic/293
generic/294
generic/295
generic/296
generic/301
generic/302
generic/303
generic/304
generic/305
generic/306
generic/307
generic/308
generic/309
generic/312
generic/313
generic/314
generic/315
generic/316
generic/317
generic/319
generic/322
generic/324
generic/326
generic/327
generic/328
generic/329
generic/330
generic/331
generic/332
generic/335
generic/336
generic/337
@@ -74,10 +245,255 @@ generic/341
generic/342
generic/343
generic/348
generic/353
generic/355
generic/358
generic/359
generic/360
generic/361
generic/362
generic/363
generic/364
generic/365
generic/366
generic/367
generic/368
generic/369
generic/370
generic/371
generic/372
generic/373
generic/374
generic/375
generic/376
generic/377
generic/378
generic/379
generic/380
generic/381
generic/382
generic/383
generic/384
generic/385
generic/386
generic/389
generic/391
generic/392
generic/393
generic/394
generic/395
generic/396
generic/397
generic/398
generic/400
generic/401
generic/402
generic/403
generic/404
generic/406
generic/407
generic/408
generic/412
generic/413
generic/414
generic/417
generic/419
generic/420
generic/421
generic/422
generic/424
generic/425
generic/426
generic/427
generic/436
generic/439
generic/440
generic/443
generic/445
generic/446
generic/448
generic/449
generic/450
generic/451
generic/453
generic/454
generic/456
generic/458
generic/460
generic/462
generic/463
generic/465
generic/466
generic/468
generic/469
generic/470
generic/471
generic/474
generic/477
generic/478
generic/479
generic/480
generic/481
generic/483
generic/485
generic/486
generic/487
generic/488
generic/489
generic/490
generic/491
generic/492
generic/498
generic/499
generic/501
generic/502
generic/503
generic/504
generic/505
generic/506
generic/507
generic/508
generic/509
generic/510
generic/511
generic/512
generic/513
generic/514
generic/515
generic/516
generic/517
generic/518
generic/519
generic/520
generic/523
generic/524
generic/525
generic/526
generic/527
generic/528
generic/529
generic/530
generic/531
generic/533
generic/534
generic/535
generic/536
generic/537
generic/538
generic/539
generic/540
generic/541
generic/542
generic/543
generic/544
generic/545
generic/546
generic/547
generic/548
generic/549
generic/550
generic/552
generic/553
generic/555
generic/556
generic/557
generic/566
generic/567
generic/571
generic/572
generic/573
generic/574
generic/575
generic/576
generic/577
generic/578
generic/580
generic/581
generic/582
generic/583
generic/584
generic/586
generic/587
generic/588
generic/591
generic/592
generic/593
generic/594
generic/595
generic/596
generic/597
generic/598
generic/599
generic/600
generic/601
generic/602
generic/603
generic/604
generic/605
generic/606
generic/607
generic/608
generic/609
generic/610
generic/611
generic/612
generic/613
generic/618
generic/621
generic/623
generic/624
generic/625
generic/626
generic/628
generic/629
generic/630
generic/632
generic/634
generic/635
generic/637
generic/639
generic/640
generic/644
generic/645
generic/646
generic/647
generic/651
generic/652
generic/653
generic/654
generic/655
generic/657
generic/658
generic/659
generic/660
generic/661
generic/662
generic/663
generic/664
generic/665
generic/666
generic/667
generic/668
generic/669
generic/673
generic/674
generic/675
generic/676
generic/677
generic/678
generic/679
generic/680
generic/681
generic/682
generic/683
generic/684
generic/685
generic/686
generic/687
generic/688
generic/689
shared/002
shared/032
Not run:
generic/008
@@ -241,7 +657,6 @@ generic/312
generic/314
generic/316
generic/317
generic/318
generic/324
generic/326
generic/327
@@ -252,8 +667,6 @@ generic/331
generic/332
generic/353
generic/355
generic/356
generic/357
generic/358
generic/359
generic/361
@@ -279,11 +692,174 @@ generic/383
generic/384
generic/385
generic/386
shared/001
generic/391
generic/392
generic/395
generic/396
generic/397
generic/398
generic/400
generic/402
generic/404
generic/406
generic/407
generic/408
generic/412
generic/413
generic/414
generic/417
generic/419
generic/420
generic/421
generic/422
generic/424
generic/425
generic/427
generic/439
generic/440
generic/446
generic/449
generic/450
generic/451
generic/453
generic/454
generic/456
generic/458
generic/462
generic/463
generic/465
generic/466
generic/468
generic/469
generic/470
generic/471
generic/474
generic/485
generic/487
generic/488
generic/491
generic/492
generic/499
generic/501
generic/503
generic/505
generic/506
generic/507
generic/508
generic/511
generic/513
generic/514
generic/515
generic/516
generic/517
generic/518
generic/519
generic/520
generic/528
generic/530
generic/536
generic/537
generic/538
generic/539
generic/540
generic/541
generic/542
generic/543
generic/544
generic/545
generic/546
generic/548
generic/549
generic/550
generic/552
generic/553
generic/555
generic/556
generic/566
generic/567
generic/572
generic/573
generic/574
generic/575
generic/576
generic/577
generic/578
generic/580
generic/581
generic/582
generic/583
generic/584
generic/586
generic/587
generic/588
generic/591
generic/592
generic/593
generic/594
generic/595
generic/596
generic/597
generic/598
generic/599
generic/600
generic/601
generic/602
generic/603
generic/605
generic/606
generic/607
generic/608
generic/609
generic/610
generic/612
generic/613
generic/621
generic/623
generic/624
generic/625
generic/626
generic/628
generic/629
generic/630
generic/635
generic/644
generic/645
generic/646
generic/647
generic/651
generic/652
generic/653
generic/654
generic/655
generic/657
generic/658
generic/659
generic/660
generic/661
generic/662
generic/663
generic/664
generic/665
generic/666
generic/667
generic/668
generic/669
generic/673
generic/674
generic/675
generic/677
generic/678
generic/679
generic/680
generic/681
generic/682
generic/683
generic/684
generic/685
generic/686
generic/687
generic/688
generic/689
shared/002
shared/003
shared/004
shared/032
shared/051
shared/289
Passed all 79 tests
Passed all 495 tests

View File

@@ -1,5 +1,8 @@
#!/usr/bin/bash
# Force system tools to use ASCII quotes
export LC_ALL=C
#
# XXX
# - could have helper functions for waiting for pids
@@ -70,6 +73,7 @@ $(basename $0) options:
-t | Enabled trace events that match the given glob argument.
| Multiple options enable multiple globbed events.
-T <nr> | Multiply the original trace buffer size by nr during the run.
-V <nr> | Set mkfs device format version.
-X | xfstests git repo. Used by tests/xfstests.sh.
-x | xfstests git branch to checkout and track.
-y | xfstests ./check additional args
@@ -173,6 +177,11 @@ while true; do
T_TRACE_MULT="$2"
shift
;;
-V)
test -n "$2" || die "-V must have a format version argument"
T_MKFS_FORMAT_VERSION="-V $2"
shift
;;
-X)
test -n "$2" || die "-X requires xfstests git repo dir argument"
T_XFSTESTS_REPO="$2"
@@ -323,16 +332,10 @@ unmount_all() {
cmd wait $p
done
# delete all temp meta devices
for dev in $(losetup --associated "$T_META_DEVICE" | cut -d : -f 1); do
if [ -e "$dev" ]; then
cmd losetup -d "$dev"
fi
done
# delete all temp data devices
for dev in $(losetup --associated "$T_DATA_DEVICE" | cut -d : -f 1); do
if [ -e "$dev" ]; then
cmd losetup -d "$dev"
# delete all temp devices
for dev in /dev/mapper/_scoutfs_test_*; do
if [ -b "$dev" ]; then
cmd dmsetup remove $dev
fi
done
}
@@ -347,7 +350,7 @@ if [ -n "$T_MKFS" ]; then
done
msg "making new filesystem with $T_QUORUM quorum members"
cmd scoutfs mkfs -f $quo $T_DATA_ALLOC_ZONE_BLOCKS \
cmd scoutfs mkfs -f $quo $T_DATA_ALLOC_ZONE_BLOCKS $T_MKFS_FORMAT_VERSION \
"$T_META_DEVICE" "$T_DATA_DEVICE"
fi
@@ -355,7 +358,8 @@ if [ -n "$T_INSMOD" ]; then
msg "removing and reinserting scoutfs module"
test -e /sys/module/scoutfs && cmd rmmod scoutfs
cmd modprobe libcrc32c
cmd insmod "$T_KMOD/src/scoutfs.ko"
T_MODULE="$T_KMOD/src/scoutfs.ko"
cmd insmod "$T_MODULE"
fi
if [ -n "$T_TRACE_MULT" ]; then
@@ -431,6 +435,12 @@ $T_UTILS/fenced/scoutfs-fenced > "$T_FENCED_LOG" 2>&1 &
fenced_pid=$!
fenced_log "started fenced pid $fenced_pid in the background"
# setup dm tables
echo "0 $(blockdev --getsz $T_META_DEVICE) linear $T_META_DEVICE 0" > \
$T_RESULTS/dmtable.meta
echo "0 $(blockdev --getsz $T_DATA_DEVICE) linear $T_DATA_DEVICE 0" > \
$T_RESULTS/dmtable.data
#
# mount concurrently so that a quorum is present to elect the leader and
# start a server.
@@ -439,10 +449,13 @@ msg "mounting $T_NR_MOUNTS mounts on meta $T_META_DEVICE data $T_DATA_DEVICE"
pids=""
for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
meta_dev=$(losetup --find --show $T_META_DEVICE)
test -b "$meta_dev" || die "failed to create temp device $meta_dev"
data_dev=$(losetup --find --show $T_DATA_DEVICE)
test -b "$data_dev" || die "failed to create temp device $data_dev"
name="_scoutfs_test_meta_$i"
cmd dmsetup create "$name" --table "$(cat $T_RESULTS/dmtable.meta)"
meta_dev="/dev/mapper/$name"
name="_scoutfs_test_data_$i"
cmd dmsetup create "$name" --table "$(cat $T_RESULTS/dmtable.data)"
data_dev="/dev/mapper/$name"
dir="/mnt/test.$i"
test -d "$dir" || cmd mkdir -p "$dir"
@@ -502,6 +515,7 @@ msg "running tests"
passed=0
skipped=0
failed=0
skipped_permitted=0
for t in $tests; do
# tests has basenames from sequence, get path and name
t="tests/$t"
@@ -544,6 +558,9 @@ for t in $tests; do
printf " %-30s $stats" "$test_name"
# mark in dmesg which test we are running
echo "run scoutfs test $test_name" > /dev/kmsg
# record dmesg before
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.before"
@@ -605,6 +622,10 @@ for t in $tests; do
grep -s -v "^$test_name " "$last" > "$last.tmp"
echo "$test_name $stats" >> "$last.tmp"
mv -f "$last.tmp" "$last"
elif [ "$sts" == "$T_SKIP_PERMITTED_STATUS" ]; then
echo " [ skipped (permitted): $message ]"
echo "$test_name skipped (permitted) $message " >> "$T_RESULTS/skip.log"
((skipped_permitted++))
elif [ "$sts" == "$T_SKIP_STATUS" ]; then
echo " [ skipped: $message ]"
echo "$test_name $message" >> "$T_RESULTS/skip.log"
@@ -618,7 +639,7 @@ for t in $tests; do
fi
done
msg "all tests run: $passed passed, $skipped skipped, $failed failed"
msg "all tests run: $passed passed, $skipped skipped, $skipped_permitted skipped (permitted), $failed failed"
if [ -n "$T_TRACE_GLOB" -o -n "$T_TRACE_PRINTK" ]; then

View File

@@ -1,23 +1,31 @@
export-get-name-parent.sh
basic-block-counts.sh
basic-bad-mounts.sh
basic-posix-acl.sh
inode-items-updated.sh
simple-inode-index.sh
simple-staging.sh
simple-release-extents.sh
get-referring-entries.sh
fallocate.sh
basic-truncate.sh
data-prealloc.sh
setattr_more.sh
offline-extent-waiting.sh
move-blocks.sh
projects.sh
large-fragmented-free.sh
format-version-forward-back.sh
enospc.sh
srch-safe-merge-pos.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
retention-basic.sh
totl-xattr-tag.sh
quota.sh
lock-refleak.sh
lock-shrink-consistency.sh
lock-shrink-read-race.sh
lock-pr-cw-conflict.sh
lock-revoke-getcwd.sh
lock-recover-invalidate.sh
@@ -27,7 +35,7 @@ createmany-large-names.sh
createmany-rename-large-dir.sh
stage-release-race-alloc.sh
stage-multi-part.sh
stage-tmpfile.sh
o_tmpfile.sh
basic-posix-consistency.sh
dirent-consistency.sh
mkdir-rename-rmdir.sh
@@ -36,7 +44,9 @@ cross-mount-data-free.sh
persistent-item-vers.sh
setup-error-teardown.sh
resize-devices.sh
change-devices.sh
fence-and-reclaim.sh
quorum-heartbeat-timeout.sh
orphan-inodes.sh
mount-unmount-race.sh
client-unmount-recovery.sh

View File

@@ -1,6 +1,7 @@
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>
@@ -35,10 +36,10 @@ struct opts {
unsigned int dry_run:1,
ls_output:1,
quiet:1,
user_xattr:1,
same_srch_xattr:1,
group_srch_xattr:1,
unique_srch_xattr:1;
xattr_set:1,
xattr_file:1,
xattr_group:1;
char *xattr_name;
};
struct stats {
@@ -149,12 +150,31 @@ static void free_dir(struct dir *dir)
free(dir);
}
static size_t snprintf_off(void *buf, size_t sz, size_t off, char *fmt, ...)
{
va_list ap;
int ret;
if (off >= sz)
return sz;
va_start(ap, fmt);
ret = vsnprintf(buf + off, sz - off, fmt, ap);
va_end(ap);
if (ret <= 0)
return sz;
return off + ret;
}
static void create_dir(struct dir *dir, struct opts *opts,
struct stats *stats)
{
struct str_list *s;
char name[100];
char name[256]; /* max len and null term */
char val = 'v';
size_t off;
int rc;
int i;
@@ -175,29 +195,21 @@ static void create_dir(struct dir *dir, struct opts *opts,
rc = mknod(s->str, S_IFREG | 0644, 0);
error_exit(rc, "mknod %s failed"ERRF, s->str, ERRA);
rc = 0;
if (rc == 0 && opts->user_xattr) {
strcpy(name, "user.scoutfs_bcp");
rc = setxattr(s->str, name, &val, 1, 0);
}
if (rc == 0 && opts->same_srch_xattr) {
strcpy(name, "scoutfs.srch.scoutfs_bcp");
rc = setxattr(s->str, name, &val, 1, 0);
}
if (rc == 0 && opts->group_srch_xattr) {
snprintf(name, sizeof(name),
"scoutfs.srch.scoutfs_bcp.group.%lu",
stats->files / 10000);
rc = setxattr(s->str, name, &val, 1, 0);
}
if (rc == 0 && opts->unique_srch_xattr) {
snprintf(name, sizeof(name),
"scoutfs.srch.scoutfs_bcp.unique.%lu",
stats->files);
if (opts->xattr_set) {
off = snprintf_off(name, sizeof(name), 0, "%s", opts->xattr_name);
if (opts->xattr_file)
off = snprintf_off(name, sizeof(name), off,
"-f-%lu", stats->files);
if (opts->xattr_group)
off = snprintf_off(name, sizeof(name), off,
"-g-%lu", stats->files / 10000);
error_exit(off >= sizeof(name), "xattr name longer than 255 bytes");
rc = setxattr(s->str, name, &val, 1, 0);
error_exit(rc, "setxattr %s %s failed"ERRF, s->str, name, ERRA);
}
error_exit(rc, "setxattr %s %s failed"ERRF, s->str, name, ERRA);
stats->files++;
rate_banner(opts, stats);
@@ -365,11 +377,10 @@ static void usage(void)
" -d DIR | create all files in DIR top level directory\n"
" -n | dry run, only parse, don't create any files\n"
" -q | quiet, don't regularly print rates\n"
" -F | append \"-f-NR\" file nr to xattr name, requires -X\n"
" -G | append \"-g-NR\" file nr/10000 to xattr name, requires -X\n"
" -L | parse ls output; only reg, skip meta, paths at ./\n"
" -X | set the same user. xattr name in all files\n"
" -S | set the same .srch. xattr name in all files\n"
" -G | set a .srch. xattr name shared by groups of files\n"
" -U | set a unique .srch. xattr name in all files\n");
" -X NAM | set named xattr in all files\n");
}
int main(int argc, char **argv)
@@ -386,7 +397,7 @@ int main(int argc, char **argv)
memset(&opts, 0, sizeof(opts));
while ((c = getopt(argc, argv, "d:nqLXSGU")) != -1) {
while ((c = getopt(argc, argv, "d:nqFGLX:")) != -1) {
switch(c) {
case 'd':
top_dir = strdup(optarg);
@@ -397,20 +408,19 @@ int main(int argc, char **argv)
case 'q':
opts.quiet = 1;
break;
case 'F':
opts.xattr_file = 1;
break;
case 'G':
opts.xattr_group = 1;
break;
case 'L':
opts.ls_output = 1;
break;
case 'X':
opts.user_xattr = 1;
break;
case 'S':
opts.same_srch_xattr = 1;
break;
case 'G':
opts.group_srch_xattr = 1;
break;
case 'U':
opts.unique_srch_xattr = 1;
opts.xattr_set = 1;
opts.xattr_name = strdup(optarg);
error_exit(!opts.xattr_name, "error allocating xattr name");
break;
case '?':
printf("Unknown option '%c'\n", optopt);
@@ -419,6 +429,11 @@ int main(int argc, char **argv)
}
}
error_exit(opts.xattr_file && !opts.xattr_set,
"must specify xattr -X when appending file nr with -F");
error_exit(opts.xattr_group && !opts.xattr_set,
"must specify xattr -X when appending file nr with -G");
if (!opts.dry_run) {
error_exit(!top_dir,
"must specify top level directory with -d");

View File

@@ -48,7 +48,7 @@ struct our_handle {
static void exit_usage(void)
{
printf(" -h/-? output this usage message and exit\n"
" -e keep trying on enoent, consider success an error\n"
" -e keep trying on enoent and estale, consider success an error\n"
" -i <num> 64bit inode number for handle open, can be multiple\n"
" -m <string> scoutfs mount path string for ioctl fd\n"
" -n <string> optional xattr name string, defaults to \""DEFAULT_NAME"\"\n"
@@ -149,7 +149,7 @@ int main(int argc, char **argv)
fd = open_by_handle_at(mntfd, &handle.handle, O_RDWR);
if (fd == -1) {
if (!enoent_success_err || errno != ENOENT) {
if (!enoent_success_err || ( errno != ENOENT && errno != ESTALE )) {
perror("open_by_handle_at");
return 1;
}
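
With -e the tool keeps retrying while the handle cannot be resolved, and the change above treats ESTALE like ENOENT, since ESTALE is what open_by_handle_at() typically returns once the inode behind the handle has gone away. A rough sketch of the retry semantics described by the usage text; the loop structure is an assumption, only the ENOENT/ESTALE check comes from the diff:

for (;;) {
        fd = open_by_handle_at(mntfd, &handle.handle, O_RDWR);
        if (fd >= 0) {
                /* with -e an eventual successful open is a test failure */
                fprintf(stderr, "unexpected open success\n");
                return 1;
        }
        if (errno != ENOENT && errno != ESTALE) {
                perror("open_by_handle_at");
                return 1;
        }
        /* handle still missing or stale: try again */
}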

View File

@@ -0,0 +1,71 @@
/*
* Copyright (C) 2023 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/stat.h>
#include <assert.h>
#include <limits.h>
static void linkat_tmpfile(char *dir, char *lpath)
{
char proc_self[PATH_MAX];
int ret;
int fd;
fd = open(dir, O_RDWR | O_TMPFILE, 0777);
if (fd < 0) {
perror("open(O_TMPFILE)");
exit(1);
}
snprintf(proc_self, sizeof(proc_self), "/proc/self/fd/%d", fd);
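/*
 * /proc/self/fd/N is a magic symlink to the unnamed tmpfile; linkat()
 * follows it (AT_SYMLINK_FOLLOW) to give the O_TMPFILE inode a visible
 * name, as described in open(2).
 */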
ret = linkat(AT_FDCWD, proc_self, AT_FDCWD, lpath, AT_SYMLINK_FOLLOW);
if (ret < 0) {
perror("linkat");
exit(1);
}
close(fd);
}
/*
* Use O_TMPFILE and linkat to create a new visible file, used to test
* the O_TMPFILE creation path by inspecting the created file.
*/
int main(int argc, char **argv)
{
char *lpath;
char *dir;
if (argc < 3) {
printf("%s <open_dir> <linkat_path>\n", argv[0]);
return 1;
}
dir = argv[1];
lpath = argv[2];
linkat_tmpfile(dir, lpath);
return 0;
}

View File

@@ -0,0 +1,97 @@
/*
* Show the modes of files as we create them with O_TMPFILE and link
* them into the namespace.
*
* Copyright (C) 2022 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/stat.h>
#include <assert.h>
#include <limits.h>
static void linkat_tmpfile_modes(char *dir, char *lpath, mode_t mode)
{
char proc_self[PATH_MAX];
struct stat st;
int ret;
int fd;
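/*
 * Apply the given creation mask, then observe the resulting mode both on
 * the unnamed O_TMPFILE inode and after it is linked into the namespace.
 */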
umask(mode);
printf("umask 0%o\n", mode);
fd = open(dir, O_RDWR | O_TMPFILE, 0777);
if (fd < 0) {
perror("open(O_TMPFILE)");
exit(1);
}
ret = fstat(fd, &st);
if (ret < 0) {
perror("fstat");
exit(1);
}
printf("fstat after open(0777): 0%o\n", st.st_mode);
snprintf(proc_self, sizeof(proc_self), "/proc/self/fd/%d", fd);
ret = linkat(AT_FDCWD, proc_self, AT_FDCWD, lpath, AT_SYMLINK_FOLLOW);
if (ret < 0) {
perror("linkat");
exit(1);
}
close(fd);
ret = stat(lpath, &st);
if (ret < 0) {
perror("fstat");
exit(1);
}
printf("stat after linkat: 0%o\n", st.st_mode);
ret = unlink(lpath);
if (ret < 0) {
perror("unlink");
exit(1);
}
}
int main(int argc, char **argv)
{
char *lpath;
char *dir;
if (argc < 3) {
printf("%s <open_dir> <linkat_path>\n", argv[0]);
return 1;
}
dir = argv[1];
lpath = argv[2];
linkat_tmpfile_modes(dir, lpath, 022);
linkat_tmpfile_modes(dir, lpath, 077);
return 0;
}

View File

@@ -12,7 +12,7 @@ mount_fail()
}
echo "== prepare devices, mount point, and logs"
SCR="/mnt/scoutfs.extra"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"
> $T_TMP.mount.out
scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 \

Some files were not shown because too many files have changed in this diff.