Commit Graph

1368 Commits

Author SHA1 Message Date
Auke Kok
e4721366ff Added user_ns argument to posix_acl_update_mode, set_posix_acl
v5.11-rc4-8-ge65ce2a50cf6 adds idmap support to these calls.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
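
A minimal sketch of the compat shim this implies, assuming a hypothetical KC_POSIX_ACL_UPDATE_MODE_USERNS detection symbol rather than scoutfs's actual compat machinery:

	#include <linux/posix_acl.h>
	#include <linux/user_namespace.h>

	/* hypothetical symbol, set when the kernel's calls take a user_namespace */
	#ifdef KC_POSIX_ACL_UPDATE_MODE_USERNS
	#define kc_posix_acl_update_mode(userns, inode, mode, acl) \
		posix_acl_update_mode(userns, inode, mode, acl)
	#else
	#define kc_posix_acl_update_mode(userns, inode, mode, acl) \
		posix_acl_update_mode(inode, mode, acl)
	#endif

Callers always pass a namespace argument and the macro drops it on kernels that predate the change.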
Auke Kok
4ef64c6fcf VFS methods become user namespace mount aware.
v5.11-rc4-24-g549c7297717c

All of these VFS methods are now passed a user_namespace.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
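
A hedged sketch of one method picking up the new first argument; KC_IOP_USERNS and example_mkdir are illustrative names, not the module's real ones:

	#include <linux/fs.h>

	#ifdef KC_IOP_USERNS	/* hypothetical detection symbol */
	static int example_mkdir(struct user_namespace *mnt_userns,
				 struct inode *dir, struct dentry *dentry,
				 umode_t mode)
	#else
	static int example_mkdir(struct inode *dir, struct dentry *dentry,
				 umode_t mode)
	#endif
	{
		/* stub body for the sketch; older kernels have no mnt_userns */
		return -ENOSYS;
	}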
Auke Kok
2d58ee2a37 Account for new bio_alloc() args.
The block device and opf are now passed through and set. We mimic
this in the compat code.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
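
A hedged sketch of such a compat helper, assuming a hypothetical KC_BIO_ALLOC_BDEV_OPF detection symbol; on older kernels the device and op flags are set after allocation to mimic the new bio_alloc():

	#include <linux/bio.h>

	static inline struct bio *kc_bio_alloc(struct block_device *bdev,
					       unsigned short nr_vecs,
					       unsigned int opf, gfp_t gfp)
	{
	#ifdef KC_BIO_ALLOC_BDEV_OPF
		return bio_alloc(bdev, nr_vecs, opf, gfp);
	#else
		struct bio *bio = bio_alloc(gfp, nr_vecs);

		/* mimic the new calling convention on older kernels */
		if (bio) {
			bio_set_dev(bio, bdev);
			bio->bi_opf = opf;
		}
		return bio;
	#endif
	}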
Auke Kok
1f0dd7f025 __vmalloc defaults to PAGE_KERNEL everywhere, so the arg was removed.
v5.7-523-g88dca4ca5a93

__vmalloc no longer has the 3rd argument.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
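
A minimal sketch of a compat macro for the removed third argument; KC___VMALLOC_PGPROT is a hypothetical detection symbol:

	#include <linux/vmalloc.h>

	#ifdef KC___VMALLOC_PGPROT	/* older kernels still take a pgprot */
	#define kc___vmalloc(size, gfp) __vmalloc(size, gfp, PAGE_KERNEL)
	#else
	#define kc___vmalloc(size, gfp) __vmalloc(size, gfp)
	#endif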
Auke Kok
077468ac1e debugfs_create_atomic_t now returns void, don't check result
Greg KH tells us to do just this in v5.4-rc5-31-g9927c6fa3e1d:

	No one checks the return value of debugfs_create_atomic_t(),
	as it's not needed, so make the return value void, so that no
	one tries to do so in the future.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
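
A small illustrative example of the resulting caller shape; the counter and directory names are made up:

	#include <linux/atomic.h>
	#include <linux/debugfs.h>

	static atomic_t example_counter = ATOMIC_INIT(0);

	static void example_create_debugfs(struct dentry *dir)
	{
		/* returns void, so there is no result to check */
		debugfs_create_atomic_t("example_counter", 0444, dir,
					&example_counter);
	}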
Auke Kok
c951713ab2 list_cmp_func_t introduced, using const.
v5.12-rc6-9-g4f0f586bf0c8

All list_sort functions use the list_cmp_func_t type, which compares
list_head member types. These are now required to be `const` as the
compiler will now check them. This propagates into our callers.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
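
A hedged sketch of a comparison callback under the new type; the node struct is illustrative:

	#include <linux/list.h>
	#include <linux/list_sort.h>
	#include <linux/types.h>

	struct example_node {
		struct list_head head;
		u64 key;
	};

	/* matches list_cmp_func_t: const struct list_head arguments */
	static int example_cmp(void *priv, const struct list_head *a,
			       const struct list_head *b)
	{
		struct example_node *na = list_entry(a, struct example_node, head);
		struct example_node *nb = list_entry(b, struct example_node, head);

		if (na->key < nb->key)
			return -1;
		return na->key > nb->key;
	}

list_sort(NULL, &list, example_cmp) would then sort the list by key.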
Auke Kok
ad82a5e52a Squelch warning from bpf_iter.c.
v5.7-rc2-1174-gfd4f12bc38c3 significantly rewrites the bpf iterator
which hits this _next() function. It also adds a check that verifies
that *pos is incremented after every call, even if it goes beyond
the last member (in which case it's not used).

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
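
A hedged sketch of the _next() convention that the new check enforces; example_lookup_nr is a hypothetical lookup:

	#include <linux/seq_file.h>

	/* hypothetical; returns NULL once nr walks past the last member */
	static void *example_lookup_nr(loff_t nr);

	static void *example_seq_next(struct seq_file *seq, void *v, loff_t *pos)
	{
		/* always advance *pos, even on the call that walks off the end */
		(*pos)++;

		return example_lookup_nr(*pos);
	}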
Auke Kok
d3c5328909 setattr_prepare no longer extern in fs.h
v5.11-rc4-7-g2f221d6f7b88 changes setattr_prepare from an extern
declaration to a plain int. There's no further impact on the compat
code that keeps it working, except for the detection regex.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
c30172210f Use blk_opf_t to pass bio op flags
Compat falls back to unsigned int.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
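
A minimal sketch of the fallback, assuming a hypothetical KC_HAVE_BLK_OPF_T detection symbol:

	#include <linux/bio.h>

	#ifndef KC_HAVE_BLK_OPF_T	/* hypothetical detection symbol */
	typedef unsigned int blk_opf_t;
	#endif

	static void example_submit(struct bio *bio, blk_opf_t opf)
	{
		bio->bi_opf = opf;
		submit_bio(bio);
	}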
Auke Kok
19af6e28fb "unaligned/access_ok.h" is not needed, and removed.
Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
Auke Kok
8885486bc8 Add several low level includes.
Newer kernels include fewer header dependencies by default, so we have
to add these.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
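
Illustrative examples of the kind of direct includes this means; the exact list here is an assumption, not scoutfs's actual set:

	#include <linux/slab.h>		/* kmalloc() and friends */
	#include <linux/uio.h>		/* struct iov_iter */
	#include <linux/blkdev.h>	/* block device interfaces */
	#include <linux/sched.h>	/* current, struct task_struct */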
Auke Kok
0204e092e4 FIELD_SIZEOF was deprecated.
We could use sizeof_field as a direct replacement (which is the same)
except that this entire thing can directly use offsetofend().

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-10-03 12:41:05 -07:00
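
A small sketch of the replacement on an illustrative struct:

	#include <linux/stddef.h>
	#include <linux/types.h>

	struct example {
		u32 a;
		u32 b;
	};

	/* old: offsetof(struct example, b) + FIELD_SIZEOF(struct example, b) */
	#define EXAMPLE_END_OF_B offsetofend(struct example, b)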
Auke Kok
b45fbe0bbb Don't pass data version to attr_x unless the ioctl means to set it.
The wrapper in setattr_more that translates the operations to attr_x
needs to decide whether to ask attr_x to perform a change to any of
the fields passed to it or not. For the date and size fields this
is implicit - we always tell attr_x to change them. For any of the
other fields, it should be explicit.

The only field in the struct that this applies to is data_version.
Because the data version field defaults to zero, we use that as the
condition to decide whether to pass the data_version down to attr_x.

Previously, the code would always pass a data_version=0 down to attr_x,
triggering one of the validity checks, making it return -EINVAL. We
add a simple test case to test for this issue.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-09-27 19:31:22 -04:00
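
A hedged sketch of the decision described above; the mask bits and struct are illustrative stand-ins for the real attr_x interface:

	#include <linux/types.h>

	#define EXAMPLE_X_SIZE		(1 << 0)
	#define EXAMPLE_X_CTIME		(1 << 1)
	#define EXAMPLE_X_DATA_VERSION	(1 << 2)

	struct example_attr_x {
		__u64 x_mask;
		__u64 data_version;
	};

	static void example_fill_attr_x(struct example_attr_x *ax, __u64 data_version)
	{
		/* time and size changes are implicit, always requested */
		ax->x_mask = EXAMPLE_X_SIZE | EXAMPLE_X_CTIME;

		/* zero means the ioctl didn't mean to set it, so don't pass it */
		if (data_version != 0) {
			ax->data_version = data_version;
			ax->x_mask |= EXAMPLE_X_DATA_VERSION;
		}
	}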
Greg Cymbalski
4dde57dc27 Rely on $PATH for weak-modules
This avoids having to deal with EL-specific path differences for
the weak-modules script.
2024-09-27 12:30:25 -07:00
Auke Kok
fb93d82b1e Add shrinker counters for wkic and quota_info.
These new shrinkers were recently added. Because there are very few
ways to debug them, or even see them function properly, we should at
least add counters for them.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-09-18 13:40:54 -04:00
Auke Kok
ccd65b9a61 Fix POSIX ACL use in el8+.
In 29160b0b I mistakenly disabled all caching of ACLs for el8
instead of only disabling cache lookups. The correct change would
have been to disable cache lookups only, and keep setting the acl
cache after storing or fetching, as the kernel needs this data
to resolve acls when doing permission checks.

Restore the acl cache insertions.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-08-09 17:57:23 -04:00
Zach Brown
38c6d66ffc Add indx xattr tag support
Add support for the indx xattr tag, which lets xattrs determine
their sort order by inode number in a global index.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
38e6f11ee4 Add quota support
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
4a8240748e Add project ID support
Add support for project IDs.  They're managed through the _attr_x
interfaces and are inherited from the parent directory during creation.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
9c45e8b7ef read_xattr_totals ioctl uses weak item cache
Change the read_xattr_totals ioctl to use the weak item cache instead of
manually reading and merging the fs items for the xattr totals on every
call.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
ee9e8c3e1a Extract .totl. item merging into own functions
The _READ_XATTR_TOTALS ioctl had manual code for merging the .totl.
total and value while reading fs items.  We're going to want to do this
in another reader so let's put these in their own functions that clearly
isolate the logic of merging the fs items into a coherent result.

We can get rid of some of the totl_read_ counters that tracked which
items we were merging.  They weren't adding much value and conflated the
reading ioctl interface with the merging logic.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
5f156b7a36 Add scoutfs_forest_read_items_roots
Add a forest item reading interface that lets the caller specify the net
roots instead of always getting them from a network request.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
3a51ca369b Add the weak item cache
Add the weak item cache that is used for reads that can handle results
being a little behind.  This gives us a lot more freedom to implement
a cache that biases towards concurrent reads.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
Zach Brown
fb5331a1d9 Add inode retention bit
Add a bit to the private scoutfs inode flags which indicates that the
inode is in retention mode.  The bit is visible through the _attr_x
interface.  It can only be set on regular files and when set it prevents
modification to all but non-user xattrs.  It can be cleared by root.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 15:09:05 -07:00
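
A hedged sketch of the rule this commit describes; the flag bit and helper are illustrative, and capable(CAP_SYS_ADMIN) stands in for "root":

	#include <linux/capability.h>
	#include <linux/errno.h>
	#include <linux/fs.h>

	#define EXAMPLE_INO_FLAG_RETENTION (1 << 0)

	static int example_check_retention(struct inode *inode,
					   u32 old_flags, u32 new_flags)
	{
		bool was = old_flags & EXAMPLE_INO_FLAG_RETENTION;
		bool now = new_flags & EXAMPLE_INO_FLAG_RETENTION;

		/* the bit can only be set on regular files */
		if (!was && now && !S_ISREG(inode->i_mode))
			return -EINVAL;

		/* and can only be cleared by root */
		if (was && !now && !capable(CAP_SYS_ADMIN))
			return -EPERM;

		return 0;
	}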
Zach Brown
270726a6ea Implement stat_more and setattr_more with attr_x
Now that we have the attr_x calls we can implement stat_more with
get_attr_x and setattr_more with set_attr_x.

The conversion of stat_more fixes a surprising consistency bug.
stat_more wasn't acquiring a cluster lock for the inode nor refreshing
it, so it could have returned stale data if modifications were made in
another mount.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
6a99ca9ede Add attr_x core and ioctls
The existing stat_more and setattr_more interfaces aren't extensible.
This solves that problem by adding attribute interfaces that specify
the exact fields to work with.

We're about to add a few more inode fields and it makes sense to add
them to this extensible structure rather than adding more ioctls or
relatively clumsy xattrs.  This is modeled loosely on the upstream
kernel's statx support.

The ioctl entry points call core functions so that we can also implement
the existing stat_more and setattr_more interfaces in terms of these new
attr_x functions.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
0521bd0e6b Make offline extent creation use one transaction
Initially setattr_more followed the general pattern where extent
manipulation might require multiple transactions if there are lots of
extent items to work with.   The scoutfs_data_init_offline_extent()
function that creates an offline extent handled transactions itself.

But in this case the call only supports adding a single offline extent.
It will always use a small fixed amount of metadata and could be
combined with other metadata changes in one atomic transaction.

This changes scoutfs_data_init_offline_extent() to have the caller
handle transactions, inode updates, etc.  This lets the caller perform
all the restore changes in one transaction.  This interface change will
then be used as we add another caller that adds a single offline extent
in the same way.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
361491846d Add scoutfs_fmt_vers_unsupported()
Add a little inline helper to test whether the mounted format version
supports a feature, returning an errno that callers can use when they
can return a shared expected error.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
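
A minimal sketch of such a helper's shape; the errno chosen here is an assumption:

	#include <linux/errno.h>
	#include <linux/types.h>

	static inline int example_fmt_vers_unsupported(u64 mounted_vers,
						       u64 needed_vers)
	{
		/* shared expected error for "format doesn't support this yet" */
		return mounted_vers < needed_vers ? -EOPNOTSUPP : 0;
	}

Callers can then simply return the result when it's nonzero.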
Bryant G. Duffy-Ly
9ba4271c26 Add new max format version of 2
We're about to add new format structures so increment the max version to
2.  Future commits will add the features before we release version 2 in
the wild.

Signed-off-by: Zach Brown <zab@zabbo.net>
2024-06-28 14:53:49 -07:00
Bryant G. Duffy-Ly
90cfaf17d1 Initial support for different inode sizes
We're about to increase the inode size and increment the format version.
Inode reading and writing have to handle different valid inode sizes as
allowed by the format version.  This is the initial skeletal work that
later patches, which actually increase the inode size, will further
refine to add the specific known sizes and format versions.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
[zab@versity.com: reworded description, reworked to use _within]
Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
6931cb7b0e Add scoutfs_inode_[gs]et_flags
Add functions for getting and setting our private inode flags.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
Zach Brown
7d4db05445 Add scoutfs_item_lookup_smaller_zero
Add a lookup variant that returns an error if the item value is larger
than the caller's value buffer size and which zeros the rest of the
caller's buffer if the returned value is smaller.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-28 14:53:49 -07:00
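
A hedged sketch of the variant's semantics; example_item_lookup is a hypothetical stand-in for the underlying lookup:

	#include <linux/errno.h>
	#include <linux/string.h>

	/* hypothetical raw lookup: returns the item's value length or -errno */
	static int example_item_lookup(void *key, void *val, int val_len);

	static int example_lookup_smaller_zero(void *key, void *val, int val_len)
	{
		int ret = example_item_lookup(key, val, val_len);

		if (ret < 0)
			return ret;
		/* error if the item value is larger than the caller's buffer */
		if (ret > val_len)
			return -EIO;
		/* zero the rest of the caller's buffer if the value is smaller */
		if (ret < val_len)
			memset(val + ret, 0, val_len - ret);
		return 0;
	}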
Zach Brown
7b71250072 Merge pull request #176 from versity/zab/accumulated_fixes
Zab/accumulated fixes
2024-06-26 13:21:50 -07:00
Zach Brown
8e37be279c Use seqlock to protect inode fields
We were using a seqcount to protect high frequency reads and writes to
some of our private inode fields.  The writers were serialized by the
caller but that's a bit too easy to get wrong.  We're already storing
the write seqcount update, so the additional internal spinlock stores
in seqlocks aren't a significant additional overhead.  The seqlocks also
handle preemption for us.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
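
A hedged sketch of the seqlock pattern this moves to; the info struct and field are illustrative, and seqlock_init() at inode setup is elided:

	#include <linux/seqlock.h>
	#include <linux/types.h>

	struct example_inode_info {
		seqlock_t seqlock;
		u64 data_version;
	};

	static u64 example_read_data_version(struct example_inode_info *ei)
	{
		unsigned int seq;
		u64 vers;

		do {
			seq = read_seqbegin(&ei->seqlock);
			vers = ei->data_version;
		} while (read_seqretry(&ei->seqlock, seq));

		return vers;
	}

	static void example_set_data_version(struct example_inode_info *ei, u64 vers)
	{
		/* the seqlock's internal spinlock serializes writers for us */
		write_seqlock(&ei->seqlock);
		ei->data_version = vers;
		write_sequnlock(&ei->seqlock);
	}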
Zach Brown
4b87045447 Pre-declare scoutfs_lock in forest.h
Definitions in forest.h use lock pointers.  Pre-declare the struct so it
doesn't break inclusion without lock.h, following current practice in
the header.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
3f773a8594 Fix uninit written in scoutfs_file_write_iter
scoutfs_file_write_iter tried to track written bytes and return those
unless there was an error.  But written was uninitialized if we got
errors in any of the calls leading up to performing the write.  The
bytes written were also not being passed to the generic_write_sync
helper.  This fixes up all those inconsistencies and makes it look like
the write_iter path in other filesystems.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
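
A hedged sketch of the fixed shape, loosely following the write_iter path in other filesystems rather than scoutfs's actual code:

	#include <linux/fs.h>
	#include <linux/uio.h>

	static ssize_t example_file_write_iter(struct kiocb *iocb,
					       struct iov_iter *from)
	{
		struct inode *inode = file_inode(iocb->ki_filp);
		ssize_t written = 0;	/* initialized before any error exits */
		ssize_t ret;

		inode_lock(inode);
		ret = generic_write_checks(iocb, from);
		if (ret > 0)
			written = __generic_file_write_iter(iocb, from);
		inode_unlock(inode);

		/* pass the bytes written to the sync helper */
		if (written > 0)
			written = generic_write_sync(iocb, written);

		return written ? written : ret;
	}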
Zach Brown
c385eea9a1 Check for all offline in scoutfs_file_write_iter
When we write to file contents we change the data_version.  To stage old
contents into an offline region the data_version of the file must match
the archived copy.  When writing we have to make sure that there is no
offline data so that we don't increase the data_version which will
prevent staging of any other file regions because the data_versions no
longer match.

scoutfs_file_write_iter was only checking for offline data in its write
region, not the entire file.  Fix it to match the _aio_write method and
check the whole file.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
c296bc1959 Remove scoutfs_data_wait_check_iter
scoutfs_data_wait_check_iter() was checking the contiguous region of the
file starting at its pos and extending for iter_iov_count() bytes.  The
caller can do that with the previous _data_wait_check() method by
providing the same count that _check_iter() was using.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
3052feac29 Have item cache show unprotected lock
The item cache has some safety checks that make sure that an
operation is performed while holding a lock that covers the item.  It
dumped a stack trace via WARN when that wasn't true, but it didn't
include any details about the keys or lock modes involved.

This adds a message that's printed once which includes the keys and
modes when an operation is attempted that isn't protected.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
1fa0d7727c scoutfs_item_create checks wrong lock mode
scoutfs_item_create() was checking that its lock had a read mode, when
it should have been checking for a write mode.  This worked out because
callers with write mode locks are also protecting reads.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
2af6f47c8b Fix bad error exit path in unlink
Unlink looks up the entry items for the name it is removing because we
no longer store the extra key material in dentries.  If this lookup
fails it will use an error path which releases a transaction that wasn't
held.  Thankfully this error path is unlikely (corruption or systemic
errors like eio or enomem) so we haven't hit this in practice.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-25 15:11:20 -07:00
Zach Brown
48716461e4 Add tracepoint as block read returns ESTALE
Block reads can return ESTALE naturally as mounts read through old
cached blocks.  We won't always log it as an error but we should add a
tracepoint that can be inspected.

Signed-off-by: Zach Brown <zab@versity.com>
2024-06-10 11:03:38 -07:00
Zach Brown
0519830229 Merge pull request #165 from versity/greg/kmod-uninstall-cleanup
More cleanly drive weak-modules on install/uninstall
2024-04-11 14:32:06 -07:00
Greg Cymbalski
4d6e1a14ae More safely install/uninstall with weak-modules
This addresses some minor issues with how we drive the weak-modules
infrastructure to handle running on kernels the module wasn't
explicitly built for.

For one, we now drive weak-modules at install-time more explicitly (it
was adding symlinks for all modules into the right place for the running
kernel, whereas now it only handles that for scoutfs against all
installed kernels).

Also we no longer leave stale modules on the filesystem after an
uninstall/upgrade, similar to what's done for vsm's kmods right now.
RPM's pre/postinstall scriptlets are used to drive weak-modules to clean
things up.

Note that this (intentionally) does not (re)generate initrds of any
kind.

Finally, this was tested on both the native kernel version and on
updates that would need the migrated modules. As a result, installs are
a little quicker, the module still gets migrated successfully, and
uninstalls correctly remove (only) the packaged module.
2024-04-11 13:20:50 -07:00
Greg Cymbalski
a4bc3fb27d Capture git info at spec creation time, pass into make 2024-02-05 15:44:10 -08:00
Zach Brown
c3890abd7b Correctly set the log_merge_wait_timeout_ms option
The initial code for setting the timeout used the wrong parsed variable.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-30 12:01:35 -08:00
Zach Brown
5ab38bfa48 Merge pull request #160 from versity/zab/log_merging_speedups
Zab/log merging speedups
2024-01-29 12:26:55 -08:00
Zach Brown
e9ad61b444 Delete multiple log trees items per server commit
server_log_merge_free_work() is responsible for freeing all the input
log trees for a log merge operation that has finished.  It looks for the
next item to free, frees the log btree it references, and then deletes
the item.  It was doing this with a full server commit for each item
which can take an agonizingly long time.

This changes it to perform multiple deletions in a commit as long as
there's plenty of alloc space.  The moment the commit's free space gets
low it
applies the commit and opens a new one.  This sped up the deletion of a
few hundred thousand log tree items from taking hours to seconds.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-25 11:30:17 -08:00
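
A hedged pseudo-C sketch of the loop described above; every name is a hypothetical stand-in for the server's commit machinery:

	#include <linux/types.h>

	struct example_server;
	struct example_item;

	static int example_commit_hold(struct example_server *srv);
	static int example_commit_apply(struct example_server *srv);
	static bool example_commit_space_low(struct example_server *srv);
	static struct example_item *example_next_item(struct example_server *srv);
	static int example_free_btree_and_delete(struct example_server *srv,
						 struct example_item *item);

	static int example_free_merged_inputs(struct example_server *srv)
	{
		struct example_item *item;
		int ret;

		ret = example_commit_hold(srv);
		while (ret == 0 && (item = example_next_item(srv))) {
			/* free the referenced log btree, then delete the item */
			ret = example_free_btree_and_delete(srv, item);
			if (ret < 0)
				break;

			/* apply and reopen the moment alloc space gets low */
			if (example_commit_space_low(srv)) {
				ret = example_commit_apply(srv);
				if (ret == 0)
					ret = example_commit_hold(srv);
			}
		}
		if (ret == 0)
			ret = example_commit_apply(srv);

		return ret;
	}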
Zach Brown
91bbf90f71 Don't pin input btrees when merging
The btree_merge code was pinning leaf blocks for all input btrees as it
iterated over them.  This doesn't work when there are a very large
number of input btrees.  It can run out of memory trying to hold a
reference to a 64KiB leaf block for each input root.

This reworks the btree merging code.  It reads a window of blocks from
all input trees to get a set of merged items.  It can take multiple
passes to complete the merge but by setting the merge window large
enough this overhead is reduced.  Merging now consumes a fixed amount of
memory rather than using memory proportional to the number of input
btrees.

Signed-off-by: Zach Brown <zab@versity.com>
2024-01-25 11:30:17 -08:00
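
A hedged pseudo-C sketch of the windowed merge; all names are hypothetical:

	#include <linux/types.h>

	struct example_merge;
	struct example_key;

	static int example_read_window(struct example_merge *mrg,
				       struct example_key *pos);
	static int example_write_merged_window(struct example_merge *mrg);
	static bool example_window_advance(struct example_merge *mrg,
					   struct example_key *pos);

	static int example_merge_passes(struct example_merge *mrg,
					struct example_key *pos)
	{
		int ret;

		/*
		 * Memory use is fixed by the window size, not by the number
		 * of input btrees, at the cost of possibly multiple passes.
		 */
		for (;;) {
			ret = example_read_window(mrg, pos);
			if (ret < 0)
				break;

			ret = example_write_merged_window(mrg);
			if (ret < 0)
				break;

			if (!example_window_advance(mrg, pos))
				break;	/* nothing left past this window */
		}

		return ret;
	}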
Zach Brown
b5630f540d Add tracing of the log merge finalizing decision
Signed-off-by: Zach Brown <zab@versity.com>
2024-01-25 11:30:17 -08:00