We merely trace exit values and position, and ignore length.
Because vm_fault_t is __bitwise, sparse will loudly complain about
a plain cast to u32, so we must cast with __force (on el8). ret will
be 512 (VM_FAULT_LOCKED) in normal cases.
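As a sketch, the cast in the tracepoint assignment looks roughly like
this (the field name is illustrative):

    /* __force silences sparse's __bitwise warning for vm_fault_t */
    __entry->ret = (__force u32)ret;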
Signed-off-by: Auke Kok <auke.kok@versity.com>
Add support for writable MAP_SHARED mmap()ings. Avoid issues with late
writepage()s building transactions by doing the block_write_begin() work in
scoutfs_data_page_mkwrite(). Ensure the page is marked dirty and prepared
for write, then let the VM complete the write when the page is flushed or
invalidated.
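A minimal sketch of the shape of such a handler, with illustrative
names rather than the actual scoutfs code:

    static vm_fault_t example_page_mkwrite(struct vm_fault *vmf)
    {
            struct page *page = vmf->page;
            struct inode *inode = file_inode(vmf->vma->vm_file);

            lock_page(page);
            if (page->mapping != inode->i_mapping) {
                    unlock_page(page);
                    return VM_FAULT_NOPAGE;
            }

            /* block_write_begin()-style preparation happens here so a
             * later writepage() never has to build a transaction */

            set_page_dirty(page);
            wait_for_stable_page(page);
            return VM_FAULT_LOCKED;
    }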
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Auke Kok <auke.kok@versity.com>
Since kABI migration across minor versions is a thing of the past going
forward, we now:
- Detect if we're on EL9
- If so, add a requirement on the various flavors of release package
  for that specific major.minor version
This correctly prevents upgrades across minor versions.
Additional blkdev/bdev changes now cause this call to be removed as
well, leaving us to use yet another API to do the same for el9_5.
The changes are a little more subtle this time: the bdev_mount() call
now passes a custom bd_holder_ops that we must match or else trigger a
WARN_ON, so we switch to using sbi as our holder arg instead.
Make sure to use bdev_fput() and not fput(), since we don't want our
private data cleanup deferred, which fails xfstests generic/604.
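A sketch of the el9_5-era pattern, assuming the file-based bdev API
(scoutfs_holder_ops here is hypothetical):

    /* open with sbi as holder and matching holder ops */
    bdev_file = bdev_file_open_by_path(path,
                                       BLK_OPEN_READ | BLK_OPEN_WRITE,
                                       sbi, &scoutfs_holder_ops);
    if (IS_ERR(bdev_file))
            return PTR_ERR(bdev_file);

    /* ... use the device ... */

    /* bdev_fput(), not fput(): cleanup must not be deferred */
    bdev_fput(bdev_file);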
Signed-off-by: Auke Kok <auke.kok@versity.com>
The assignments to it are no longer needed at all. All references can be
dropped since v6.4-rc4-163-g0d625446d0a4.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Since v6.5-rc1-7-g9b6304c1d537, current_time() is no longer
extern, so we need to update this grep regex to continue to match.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Right now a client requesting a null mode for a lock will cause
invalidations of all existing granted modes of the lock across the
cluster.
This is unnecessarily broad. The absolute requirement is that a null
request invalidates other existing granted modes on the client. That's
how the client safely resolves shrinking's desire to free locks while
the locks are in use. It relies on turning the free into a race between
use and remote invalidation.
But that only requires invalidating existing grants from the requesting
client, not all clients. It is always safe for null grants to coexist
with all grants on other clients. Consider the existing mechanics
involving null modes. First, null locks are instantiated on the client
before sending any requests at all. At any given time newly allocated
null locks are coexisting with all existing locks across the cluster.
Second, the server frees the client entry tracking struct the moment it
sends a null grant to the client. From that point on the client's null
lock can not have any impact on the rest of the lock holders because the
server has forgotten about it.
So we add this case to the server's test that two client lock modes are
compatible. We take the opportunity to comment the heck out of this
function instead of making it a dense boolean composition. The only
functional change is the addition of this case; the existing cases are
refactored but unchanged.
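A minimal sketch of the new case, with illustrative names rather than
the actual server function:

    enum { MODE_NULL, MODE_READ, MODE_WRITE };

    static bool modes_compatible(int granted, int requested,
                                 bool same_client)
    {
            /* a null request must invalidate grants on the requesting
             * client, but always coexists with other clients' grants */
            if (requested == MODE_NULL)
                    return !same_client;

            /* shared reads remain compatible with each other */
            if (granted == MODE_READ && requested == MODE_READ)
                    return true;

            return false;
    }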
Signed-off-by: Zach Brown <zab@versity.com>
When freeing acked responses in the net layer we sweep the send and
resend queues looking for queued responses up to the sequence number
we've had acked. The code that did this used a weird pattern of
returning ints and adding them, which gave me pause. Clean it up to use
bools and bitwise or (|, not the short-circuiting ||) to more obviously
communicate what's going on.
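The shape of the cleanup, with illustrative names:

    static bool free_acked(struct netinfo *nf, u64 acked_seq)
    {
            bool freed = false;

            /* |= rather than || so both sweeps always run */
            freed |= sweep_queue(&nf->send_queue, acked_seq);
            freed |= sweep_queue(&nf->resend_queue, acked_seq);

            return freed;
    }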
Signed-off-by: Zach Brown <zab@versity.com>
Over time some fields have been added to the lock struct which haven't
been added to the lock tracing output. Add some of the more relevant
lock fields to tracing.
Signed-off-by: Zach Brown <zab@versity.com>
Lock object lifetimes in the lock server are protected by reference
counts. References are acquired while holding a lock on an rbtree.
Unfortunately, the decision to free lock objects wasn't made while
also holding that lock on the rbtree. A caller putting their object
would test the refcount, then wait to get the rbtree lock to remove it
from the tree.
There's a possible race where the decision is made to remove the object
but another reference is added before the object is removed. This was
seen in testing and manifested as an incoming request handling path
adding a request message to the object before it was freed, losing the
message.
Clients would then hang on a lock that never saw a response because
their request was freed with the lock object.
The fix is to hold the rbtree lock when testing the refcount and
deciding to free. It adds a bit more contention but not significantly
so, given the wild existing contention on a per-fs spinlocked rbtree.
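A minimal sketch of the fixed put path, with illustrative names:

    static void lock_obj_put(struct lock_server *srv, struct lock_obj *obj)
    {
            bool free = false;

            /* test the final ref and erase under the same lock that
             * lookups hold to take new refs, closing the race */
            spin_lock(&srv->lock);
            if (refcount_dec_and_test(&obj->refcount)) {
                    rb_erase(&obj->node, &srv->root);
                    free = true;
            }
            spin_unlock(&srv->lock);

            if (free)
                    kfree(obj);
    }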
Signed-off-by: Zach Brown <zab@versity.com>
In v5.17-rc4-53-g3a3bae50af5d, we can no longer leave this method
unhooked, as the mm caller now calls it unconditionally. All in-kernel
filesystems were fixed in that change.
aops->invalidatepage was the old aops method that would free a page's
attached private data. It is replaced with the new invalidate_folio
method. If this method is NULL, the memory will become orphaned.
(v5.17-rc4-29-gf50015a596fa)
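A sketch of hooking the new method (illustrative, not the scoutfs
implementation):

    static void example_invalidate_folio(struct folio *folio,
                                         size_t offset, size_t len)
    {
            /* free any private data attached to the folio here */
    }

    static const struct address_space_operations example_aops = {
            .invalidate_folio = example_invalidate_folio,
    };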
Signed-off-by: Auke Kok <auke.kok@versity.com>
We use check_add_overflow(a, b, d) here to validate that (off, len)
pairs do not exceed the type's maximum value. The kernel conveniently
has several macros to sort out the problems with signed or unsigned
types.
However, we're not interested in purely seeing whether (a + b)
overflows: we're using this for (off, len) overflow checks, where the
bytes we read run from off to off + len - 1. We must therefore call
this check with (b) being "len - 1".
I've made sure that we don't accidentally fail when (len == 0) by
checking that condition beforehand, moving code around as needed to
ensure that (len > 0) holds in all cases where we check.
The macro check_add_overflow requires a (d) argument in which the
result of the addition is temporarily stored and then checked for
overflow. We put a `tmp` variable of the correct type on the stack as
needed to make the checks function.
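A sketch of the resulting pattern (illustrative helper, with callers
having already ensured len > 0):

    static bool range_wraps(u64 off, u64 len)
    {
            u64 tmp;

            /* true if the last byte, off + len - 1, overflows */
            return check_add_overflow(off, len - 1, &tmp);
    }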
The simple-release-extents test mistakenly relied on this buggy wrap code,
so it needs fixing. The move-blocks test also got it wrong.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We consistently enter scoutfs_data_wait_check() with len == 0 from
scoutfs_aio_write(), which directly passes the i_size_read() value;
the `echo >> $FILE` case always reaches this.
This can cause the wrapping check to fail, since `0 + (0 - 1)` wraps,
triggering the WARN_ON_ONCE wrap check that needs updating to allow
certain operations on huge files.
More importantly, we can simply skip all these checks when `len == 0`,
since they should always succeed and should never require taking all
the locks.
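The early-out is then simply (sketch):

    /* zero-length waits always succeed; skip locks and wrap checks */
    if (len == 0)
            return 0;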
Signed-off-by: Auke Kok <auke.kok@versity.com>
Passing a holder ptr to these functions now replaces the FMODE_EXCL
flag. For the same reason, _put no longer takes flags but the holder
instead.
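A sketch assuming the v6.5-era signatures (a non-NULL holder implies
an exclusive open):

    bdev = blkdev_get_by_path(path, FMODE_READ | FMODE_WRITE,
                              sbi, NULL);
    /* ... */
    blkdev_put(bdev, sbi);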
Signed-off-by: Auke Kok <auke.kok@versity.com>
v6.4-rc2-198-g05bdb9965305 adds a new type for passing flags instead
of abusing fmode_t flags. They are essentially the same flags, just
in a new type.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v6.0-rc6-9-g863f144f12ad changes the VFS method to pass in a struct
file and not a dentry in preparation for tmpfile support in fuse.
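A sketch of the new shape, with example_create_inode() standing in for
the filesystem's actual inode creation:

    static int example_tmpfile(struct user_namespace *mnt_userns,
                               struct inode *dir, struct file *file,
                               umode_t mode)
    {
            int err;

            err = example_create_inode(dir, file->f_path.dentry, mode);
            return finish_open_simple(file, err);
    }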
Signed-off-by: Auke Kok <auke.kok@versity.com>
The current spec template can't handle future major el releases
gracefully and fails to build entirely. We isolate all changes
so that they are either "el7 specific" or generic. This rids us
entirely of el8-specific conditionals.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Folios are the new data type used for passing pages. For now, a folio
only ever contains a single page; future kernels will change that.
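For now the page can be recovered directly (sketch):

    /* with single-page folios, page 0 is the whole folio */
    struct page *page = folio_page(folio, 0);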
Signed-off-by: Auke Kok <auke.kok@versity.com>
v5.19-rc4-52-ge33c267ab70d adds shrinker names to the registration
call to aid with shrinker debugging, which is otherwise highly opaque.
To enable it you'll have to recompile the kernel with
CONFIG_SHRINKER_DEBUG=y, since it's disabled by default in OSV
kernels.
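The registration now takes a printf-style name; as a sketch (the fsid
argument is illustrative):

    err = register_shrinker(&sbi->shrinker, "scoutfs-%llu", fsid);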
Signed-off-by: Auke Kok <auke.kok@versity.com>
The iter-based read/write calls can support splice on el9 if we hook
up these calls; otherwise splice will stop working.
->splice_write() is hooked up via ->write_iter(), similar to
v3.15-rc4-330-g8d0207652cbe, and ->splice_read() to the generic
implementation.
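A sketch of the resulting hookup (the example_*_iter methods stand in
for our existing iter implementations):

    static const struct file_operations example_fops = {
            .read_iter    = example_read_iter,
            .write_iter   = example_write_iter,
            .splice_read  = generic_file_splice_read,
            .splice_write = iter_file_splice_write,
    };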
Signed-off-by: Auke Kok <auke.kok@versity.com>
We instead opt to use sock_setsockopt(), which is essentially the
same and can be easily converted to map to kernel_setsockopt() without
impacting the code significantly.
There are 3 options we're setting with usec timevals, and those are
handled differently enough now, requiring a bit more compat code, that
we split them out into separate compat functions.
Some of the TCP sock functions also have a slightly different signature
(struct socket vs. struct sock), so we split them out as well. Some of
them no longer return a result, either.
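For example, as a sketch of the signature differences:

    /* newer helpers take struct sock directly and return void */
    static void compat_set_nodelay(struct socket *sock)
    {
            tcp_sock_set_nodelay(sock->sk);
    }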
Signed-off-by: Auke Kok <auke.kok@versity.com>
We switch to using 64bit usec structs and the recommended replacement
functions from Documentation/core-api/timekeeping.rst.
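For example (sketch):

    struct timespec64 ts;

    /* 64-bit safe replacement recommended by timekeeping.rst */
    ktime_get_real_ts64(&ts);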
Signed-off-by: Auke Kok <auke.kok@versity.com>
In v5.11-rc4-8-ge65ce2a50cf6 the *set handler is passed a
user_namespace struct that points to the map from the mount.
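As a sketch, the v5.12-era set handler signature looks like:

    static int example_xattr_set(const struct xattr_handler *handler,
                                 struct user_namespace *mnt_userns,
                                 struct dentry *dentry,
                                 struct inode *inode, const char *name,
                                 const void *value, size_t size,
                                 int flags);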
Signed-off-by: Auke Kok <auke.kok@versity.com>
Greg KH tells us to do just this in v5.4-rc5-31-g9927c6fa3e1d:

    No one checks the return value of debugfs_create_atomic_t(),
    as it's not needed, so make the return value void, so that no
    one tries to do so in the future.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v5.12-rc6-9-g4f0f586bf0c8:
All list_sort functions use the list_cmp_func_t type, which compares
list_head members. These are now required to be `const`, which the
compiler will now check. This propagates into our callers.
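A sketch of a conforming comparison callback:

    struct item {
            u64 key;
            struct list_head head;
    };

    static int example_cmp(void *priv, const struct list_head *a,
                           const struct list_head *b)
    {
            const struct item *ia = list_entry(a, struct item, head);
            const struct item *ib = list_entry(b, struct item, head);

            if (ia->key != ib->key)
                    return ia->key < ib->key ? -1 : 1;
            return 0;
    }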
Signed-off-by: Auke Kok <auke.kok@versity.com>
v5.7-rc2-1174-gfd4f12bc38c3 significantly rewrites the bpf iterator,
which hits this _next() function. It also adds a check verifying that
*pos is incremented after every call, even if it goes beyond the last
member (in which case it isn't used).
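A sketch of a _next() that satisfies the new check (helpers are
illustrative):

    static void *example_seq_next(struct seq_file *m, void *v,
                                  loff_t *pos)
    {
            /* *pos must advance on every call, even past the end */
            (*pos)++;
            if (*pos >= example_nr_items(m))
                    return NULL;
            return example_item(m, *pos);
    }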
Signed-off-by: Auke Kok <auke.kok@versity.com>
v5.11-rc4-7-g2f221d6f7b88 changes the setattr_prepare declaration
from `extern int` to plain `int`. There's no further impact on the
compat code keeping it working, except for the detection regex.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We could use sizeof_field as a direct replacement (which is the same)
except that this entire thing can directly use offsetofend().
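For example (sketch):

    struct example {
            __le64 first;
            __le32 last;
    };

    /* offsetofend() is offsetof() + sizeof_field() in one step */
    size_t len = offsetofend(struct example, last);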
Signed-off-by: Auke Kok <auke.kok@versity.com>
The wrapper in setattr_more that translates the operations to attr_x
needs to decide whether or not to ask attr_x to change each of the
fields passed to it. For the date and size fields this is implicit: we
always tell attr_x to change them. For any of the other fields, it
should be explicit.
The only field in the struct that this applies to is data_version.
Because the data_version field defaults to zero, we use a nonzero
value as the condition for passing data_version down to attr_x.
Previously, the code would always pass a data_version=0 down to
attr_x, triggering one of the validity checks and making it return
-EINVAL. We add a simple test case for this issue.
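The shape of the fix, with illustrative names:

    /* only ask attr_x to change data_version when one was given */
    if (more->data_version != 0) {
            x.data_version = more->data_version;
            x.flags |= X_FLAG_DATA_VERSION;
    }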
Signed-off-by: Auke Kok <auke.kok@versity.com>
These new shrinkers were recently added. Because there are very few
ways to debug them, or even see them function properly, we should at
least add counters for them.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In 29160b0b I mistakenly disabled all caching of ACLs for el8
instead of only disabling cache lookups. The correct change would have
been to disable cache lookups only, and leave setting the acl cache
after storing or fetching, as the kernel needs this data to resolve
acls when doing permission checks.
Restoring the acl cache insertions fixes this.
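A sketch of the restored pattern (the fetch helper is illustrative):

    /* keep inserting into the cache after fetching; only the
     * cache lookups are disabled on el8 */
    acl = example_read_acl_xattr(inode, type);
    if (!IS_ERR_OR_NULL(acl))
            set_cached_acl(inode, type, acl);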
Signed-off-by: Auke Kok <auke.kok@versity.com>
Add support for the indx xattr tag, which lets xattrs determine the
sort order of their inode numbers in a global index.
Signed-off-by: Zach Brown <zab@versity.com>
Add support for project IDs. They're managed through the _attr_x
interfaces and are inherited from the parent directory during creation.
Signed-off-by: Zach Brown <zab@versity.com>
Change the read_xattr_totals ioctl to use the weak item cache instead of
manually reading and merging the fs items for the xattr totals on every
call.
Signed-off-by: Zach Brown <zab@versity.com>