Commit Graph

1388 Commits

Auke Kok
732637d372 merge conflict from zab/shrink cleanup 2025-10-07 12:22:53 -07:00
Auke Kok
963591cc9a Fix a sparse warning in net.c 2025-10-07 12:22:40 -07:00
Auke Kok
ad79ee94f9 Add tcp_keepalive_timeout_ms option.
The default TCP keepalive value is currently 10s, resulting in clients
being disconnected after 10 seconds of not replying to a TCP keepalive
packet. These keepalive values are reasonable most of the time, but
we've seen client disconnects where this timeout has been exceeded,
resulting in fencing. The cause for this is unknown at this time, but it
is suspected that brief network interruptions are happening.

This change adds a configurable value for this specific client socket
timeout. It enforces that its value is above UNRESPONSIVE_PROBES, whose
value remains unchanged.

The default value of 10000ms (10s) remains the trusted value. It is
entirely unclear and untested what values are reasonable and which
ones are not.  Since the value of this setting can and will interact
with other timeout values, care must be taken not to exceed certain
other timeout values.  I've tested this only briefly with values of
5000 and 25000. Values outside that range are likely problematic.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-07 12:16:23 -07:00
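
A minimal sketch of how a millisecond keepalive budget might be validated and applied to a client TCP socket. The helper name, the UNRESPONSIVE_PROBES value, and the split of the budget into idle/interval are assumptions for illustration; only the upstream sock_set_keepalive()/tcp_sock_set_*() helpers (v5.8+) are real kernel APIs, and older kernels would need kernel_setsockopt() instead.

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/net.h>
#include <linux/tcp.h>
#include <net/sock.h>

#define UNRESPONSIVE_PROBES 3   /* assumed value, for illustration only */

static int apply_keepalive_timeout_ms(struct socket *sock, unsigned int ms)
{
        unsigned int per_probe_secs;
        int ret;

        /* the option is rejected unless it exceeds UNRESPONSIVE_PROBES */
        if (ms <= UNRESPONSIVE_PROBES)
                return -EINVAL;

        /* spread the total budget across the probes, at least 1s each */
        per_probe_secs = max_t(unsigned int, ms / UNRESPONSIVE_PROBES / 1000, 1);

        sock_set_keepalive(sock->sk);
        ret = tcp_sock_set_keepidle(sock->sk, per_probe_secs);
        if (ret == 0)
                ret = tcp_sock_set_keepintvl(sock->sk, per_probe_secs);
        if (ret == 0)
                ret = tcp_sock_set_keepcnt(sock->sk, UNRESPONSIVE_PROBES);
        return ret;
}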
Zach Brown
65ea250de9 Remove msghdr iov_iter kernelcompat
This removes the KC_MSGHDR_STRUCT_IOV_ITER kernel compat.
kernel_{send,recv}msg() initializes either msg_iov or msg_iter.

This isn't a clean revert of "69068ae2 Initialize msg.msg_iter from
iovec." because previous patches fixed the order of arguments, and the
net send caller was removed.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-07 12:15:59 -07:00
Zach Brown
86ca09ed7d Send messages in batches
Previous work had the receiver try to receive multiple messages in bulk.
This does the same for the sender.

We walk the send queue and initialize a vector that we then send with
one call.  This is intentionally similar to the single message sending
pattern to avoid unintended changes.

Along with the changes to receive in bulk, this ended up increasing the
message processing rate by about 6x when both send and receive were
going full throttle.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-07 12:15:51 -07:00
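
A rough sketch of the batched-send idea: walk the queue, build one vector, send it with a single call. The queued message struct, the batch size, and the helper name are assumptions; kernel_sendmsg() and struct kvec are the real APIs.

#include <linux/list.h>
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/uio.h>

#define SEND_BATCH 16                   /* assumed batch size */

struct queued_msg {                     /* hypothetical queued message */
        struct list_head entry;
        void *data;
        size_t len;
};

static int send_queued_batch(struct socket *sock, struct list_head *send_queue)
{
        struct msghdr msg = { .msg_flags = MSG_NOSIGNAL };
        struct kvec vec[SEND_BATCH];
        struct queued_msg *qm;
        size_t total = 0;
        size_t nr = 0;

        /* build one vector from as many queued messages as fit */
        list_for_each_entry(qm, send_queue, entry) {
                if (nr == SEND_BATCH)
                        break;
                vec[nr].iov_base = qm->data;
                vec[nr].iov_len = qm->len;
                total += qm->len;
                nr++;
        }

        if (nr == 0)
                return 0;

        /* one sendmsg call covers the whole batch */
        return kernel_sendmsg(sock, &msg, vec, nr, total);
}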
Zach Brown
5681920bfe Fix swapped sendmsg nr_segs/count
When the msg_iter compat was added the iter was initialized with nr_segs
and count swapped.  I'm not convinced this had any effect because the
kernel_{send,recv}msg() call would initialize msg_iter again with the
correct arguments.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-07 12:15:43 -07:00
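
For reference, a small sketch of the argument order in question; the wrapper name is invented, and the direction constant varies by kernel version (WRITE here, spelled ITER_SOURCE on newer kernels).

#include <linux/kernel.h>
#include <linux/socket.h>
#include <linux/uio.h>

static void init_send_iter(struct msghdr *msg, struct kvec *vec,
                           unsigned long nr_segs, size_t bytes)
{
        /* correct order: ..., vec, nr_segs, then the byte count */
        iov_iter_kvec(&msg->msg_iter, WRITE, vec, nr_segs, bytes);
}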
Zach Brown
6c2ccf75ea Receive incoming messages in bulk
Our messaging layer is used for small control messages, not large data
payloads.  By calling recvmsg twice for every incoming message we're
hitting the socket lock reasonably hard.  With senders doing the same,
and a lot of messages flowing in each direction, the contention is
non-trivial.

This changes the receiver to copy as much of the incoming stream into a
page that is then framed and copied again into individual allocated
messages that can be processed concurrently.  We're avoiding contention
with the sender on the socket at the cost of additional copies of our
small messages.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-07 12:15:34 -07:00
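
A simplified sketch of the bulk receive idea: pull as much of the stream as possible into one page, then frame and copy individual messages out of it for concurrent processing. The wire header struct, its data_len field, and the queue_msg callback are hypothetical; kernel_recvmsg(), kmemdup(), and the page allocation are real. A real implementation would also carry a partial trailing message over to the next pass.

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/net.h>
#include <linux/slab.h>
#include <linux/socket.h>
#include <linux/types.h>
#include <linux/uio.h>

struct msg_header {                     /* hypothetical wire header */
        __le16 data_len;
};

static int recv_bulk(struct socket *sock, void (*queue_msg)(void *msg, size_t len))
{
        struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
        struct msg_header *hdr;
        struct kvec vec;
        void *page_buf;
        void *copy;
        size_t len;
        int bytes;
        int off = 0;

        page_buf = (void *)__get_free_page(GFP_NOFS);
        if (!page_buf)
                return -ENOMEM;

        /* one recvmsg call pulls in as many messages as have arrived */
        vec.iov_base = page_buf;
        vec.iov_len = PAGE_SIZE;
        bytes = kernel_recvmsg(sock, &msg, &vec, 1, PAGE_SIZE, msg.msg_flags);

        /* frame complete messages and copy each into its own allocation */
        while (bytes > 0 && off + sizeof(struct msg_header) <= bytes) {
                hdr = page_buf + off;
                len = sizeof(struct msg_header) + le16_to_cpu(hdr->data_len);
                if (bytes - off < len)
                        break;          /* partial message; not handled in this sketch */

                copy = kmemdup(hdr, len, GFP_NOFS);
                if (!copy)
                        break;
                queue_msg(copy, len);   /* processed concurrently elsewhere */
                off += len;
        }

        free_page((unsigned long)page_buf);
        return bytes < 0 ? bytes : off;
}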
Zach Brown
a818b9e461 Process client lock messages in ordered work
The lock client can't handle some messages being processed out of
order.  Previously it had detected message ordering itself, but had
missed some cases.  Receive processing was then changed to always call
lock message processing from the recv work to globally order all lock
messages.

This inline processing was contributing to excessive latencies in making
our way through the incoming receive queue, delaying work that would
otherwise be parallel once we got it off the recv queue.

This was seen in practice as a giant flood of lock shrink messages
arrived at the client.  It processed each in turn, starving a statfs
response long enough to trigger the hung task warning.

This fix does two things.

First, it moves ordered recv processing out of the recv work.  It lets
the recv work drain the socket quickly and turn it into a list that the
ordered work is consuming.  Other messages will have a chance to be
received and queued to their processing work without having to wait for
the ordered work to be processed.

Secondly, it adds parallelism to the ordered processing.  The incoming
lock messages don't need global ordering; they need ordering within each
lock.  We add an arbitrary but reasonable number of ordered workers and
hash lock messages to each worker based on the lock's key.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-07 12:15:20 -07:00
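
A sketch of the hashing step: every message for a given lock key lands on the same ordered worker, so per-lock ordering is preserved while different locks proceed in parallel. The worker count and the per-worker struct are assumptions; hash_64() is the real kernel helper.

#include <linux/hash.h>
#include <linux/list.h>
#include <linux/log2.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>

#define NR_ORDERED_WORKERS 8            /* assumed "arbitrary but reasonable" count */

struct ordered_worker {                 /* hypothetical per-worker context */
        struct work_struct work;
        struct list_head msg_list;
        spinlock_t lock;
};

static struct ordered_worker ordered_workers[NR_ORDERED_WORKERS];

/* all messages for a given lock key always hash to the same worker */
static struct ordered_worker *worker_for_lock_key(u64 key)
{
        return &ordered_workers[hash_64(key, ilog2(NR_ORDERED_WORKERS))];
}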
Zach Brown
b9f8eee59e Use list_lru for block cache shrinking
The block cache had a bizarre cache eviction policy that was trying to
avoid precise LRU updates at each block.  It had pretty bad behaviour,
including only allowing reclaim of maybe 20% of the blocks that were
visited by the shrinker.

We can use the existing list_lru facility in the kernel to do a better
job.  Blocks only exhibit contention as they're allocated and added to
per-node lists.  From then on we only set accessed bits and the private
list walkers move blocks around on the list as we see the accessed bits.
(It looks more like a fifo with lazy promotion than a "LRU" that is
actively moving list items around as they're accessed.)

Using the facility means changing how we remove blocks from the cache
and hide them from lookup.  We clean up the refcount inserted flag a bit
to be expressed more as a base refcount that can be acquired by
whoever's removing from the cache.  It seems a lot clearer.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-07 12:14:25 -07:00
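
A condensed sketch of the list_lru pattern described above, with a hypothetical cached block struct. The list_lru calls, the isolate helper, and the LRU_* walk results are real, but their exact signatures have shifted across kernel versions (which is what the kernelcompat commit below addresses); the two-argument list_lru_add() form shown here is the older one.

#include <linux/kernel.h>
#include <linux/list_lru.h>

struct cached_block {                   /* hypothetical */
        struct list_head lru_entry;
        unsigned long accessed:1;
};

/* insertion is the only point of list contention */
static void block_inserted(struct list_lru *lru, struct cached_block *bp)
{
        list_lru_add(lru, &bp->lru_entry);
}

/* lookups just set the accessed bit; no list manipulation */
static void block_accessed(struct cached_block *bp)
{
        bp->accessed = 1;
}

/* the walk lazily rotates accessed blocks instead of freeing them */
static enum lru_status block_lru_isolate(struct list_head *item,
                                         struct list_lru_one *lru,
                                         spinlock_t *lock, void *arg)
{
        struct cached_block *bp = container_of(item, struct cached_block, lru_entry);

        if (bp->accessed) {
                bp->accessed = 0;
                return LRU_ROTATE;      /* fifo-like lazy promotion */
        }

        list_lru_isolate(lru, item);    /* the caller frees isolated blocks */
        return LRU_REMOVED;
}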
Zach Brown
d8fcbb9564 Add kernelcompat for list_lru
Add kernelcompat helpers for initial use of list_lru for shrinking.  The
most complicated part is the walk callback type changing.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-07 12:14:15 -07:00
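
Purely illustrative, the two callback shapes such a compat layer has to bridge; the KC_LIST_LRU_WALK_CB_LRU_ONE guard and the typedef name are invented for this sketch.

#include <linux/list_lru.h>

#ifdef KC_LIST_LRU_WALK_CB_LRU_ONE
/* newer kernels also hand the per-node list to the walk callback */
typedef enum lru_status (*kc_lru_walk_cb)(struct list_head *item,
                                          struct list_lru_one *lru,
                                          spinlock_t *lock, void *cb_arg);
#else
/* older kernels only pass the item, its lock, and the caller's arg */
typedef enum lru_status (*kc_lru_walk_cb)(struct list_head *item,
                                          spinlock_t *lock, void *cb_arg);
#endif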
Zach Brown
4d58252e1a Retry stale item reads instead of stopping reclaim
Readers can read a set of items that is stale with respect to items that
were dirtied and written under a local cluster lock after the read
started.

The active reader mechanism addressed this by refusing to shrink pages
that could contain items that were dirtied while any readers were in
flight.  Under the right circumstances this can result in refusing to
shrink quite a lot of pages.

This changes the mechanism to allow pages to be reclaimed, and instead
forces stale readers to retry.  The gamble is that reads are much faster
than writes.  A small fraction should have to retry, and when they do
they can be satisfied by the block cache.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-07 12:12:29 -07:00
Chris Kirby
293df47589 Fix race condition in orphan-inodes test
Make sure that the orphan scanners can see deletions after forced unmounts
by waiting for reclaim_open_log_tree() to run on each mount, and waiting for
finalize_and_start_log_merge() to run and not find any finalized trees.

Do this by adding two new counters, reclaimed_open_logs and
log_merge_no_finalized, and fixing the orphan-inodes test to check those
before waiting for the orphan scanners to complete.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 16:55:47 -05:00
Chris Kirby
2a58e4c147 Use ENOLINK as a special error code during forced unmount
Tests such as quorum-heartbeat-timeout were failing with EIO messages in dmesg output due to expected errors during forced unmount. Use ENOLINK instead, and filter all errors from dmesg with this errno (67).

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 15:57:42 -05:00
Zach Brown
4f9c3503c8 Add cond_resched to iput worker
The iput worker can accumulate quite a bit of pending work to do.  We've
seen hung task warnings while it's doing its work (admittedly in debug
kernels).  There's no harm in throwing in a cond_resched so other tasks
get a chance to do work.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-06 12:27:25 -05:00
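
The shape of the change, roughly; the deferred-iput struct and the worker function name are invented for the sketch, while iput() and cond_resched() are the real calls.

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/slab.h>

struct deferred_iput {                  /* hypothetical queued iput entry */
        struct list_head entry;
        struct inode *inode;
};

static void iput_worker_fn(struct list_head *pending)
{
        struct deferred_iput *di, *tmp;

        list_for_each_entry_safe(di, tmp, pending, entry) {
                list_del(&di->entry);
                iput(di->inode);
                kfree(di);
                cond_resched();         /* let other tasks run across a big backlog */
        }
}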
Chris Kirby
541cb47af0 Add tracing for get_file_block() and scoutfs_ioc_search_xattrs().
Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
d537365d0a Fix several cases in srch.c where the return value of EIO should have been -EIO.
Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
7375627861 Add the inode number to scoutfs_xattr_set traces.
Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
48d849e2f4 Only start new quorum election after a receive failure
It's possible for the quorum worker to be preempted for a long period,
especially on debug kernels. Since we only check how much time
has passed, it's possible for a clean receive to inadvertently
trigger an election. This can cause the quorum-heartbeat-timeout
test to fail due to observed delays outside of the expected bounds.

Instead, make sure we had a receive failure before comparing timestamps.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
35bcad91a6 Close window where we can lose search items
In finalize_and_start_log_merge(), we overwrite the server
mount's log tree with its finalized form and then later write out
its next open log tree. This leaves a window where the mount's
srch_file is nulled out, causing us to lose any search items in
that log tree.

This shows up as intermittent failures in the srch-basic-functionality
test.

Eliminate this timing window by doing what unmount/reclaim does when it
finalizes: move the resources from the item being finalized into server
trees/items as it is finalized. Then there is no window where those
resources exist only in memory until we create another transaction.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Auke Kok
0b7b9d4a5e Avoid trigger munching of block_remove_stale trigger.
It's entirely likely that the trigger here is munched by a read of a
dirty block from any unrelated or background reader. Avoid that by putting
the trigger at the end of the condition list.

Now that the order is swapped, we have to avoid a null deref in
block_is_dirty(bp) here, as well.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-06 12:27:25 -05:00
Auke Kok
f86a7b4d3c Fully wait for orphan inode scan to complete.
The issue with the previous attempt to fix the orphan-inodes test was
that we would regularly exceed the 120s timeout value put in there.

Instead, in this commit, we change the code to add a new counter to
indicate orphan deletion progress. When orphan inodes are deleted, the
increment of this counter indicates progress happened. Conversely,
every time the counter doesn't increment, and the orphan scan attempts
counter increments, we know that there was no more work to be done.

For safety, the test case waits until 2 consecutive scan attempts have
been made without forward progress.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
bb3e1f3665 Fix commit budget calculation with multiple holders
The try_drain_data_freed() path was generating errors about overrunning
its commit budget:

scoutfs f.2b8928.r.02689f error: 1 holders exceeded alloc budget av: bef 8185 now 8036, fr: bef 8185 now 7602

The budget overrun check was using the current number of commit holders
(in this case one) instead of the the maximum number of concurrent holders
(in this case two). So even well behaved paths like try_drain_data_freed()
can appear to exceed their commit budget if other holders dirty some blocks
and apply their commits before the try_drain_data_freed() thread does its
final budget reconciliation.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
Chris Kirby
0d262de4ac Fix dirtied block calculation in extent_mod_blocks()
Free extents are stored in two btrees: one sorted by block number, one
by size. So if you insert a new extent between two existing extents, you can
be modifying two items in the by-block-number tree. And depending on the size
of those items, that can result in three items over in the by-size tree.
So that's a 5x multiplier per level.

If we're shrinking the tree and adding more freed blocks, we're conceptually
dirtying two blocks at each level to merge (currently *2 in the code).
But if they fall under the low water mark then one of them is freed, so we
can have *3 per level in this case.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
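
A toy sketch that only encodes the multipliers from the message above; the function name, parameters, and the way the two cases are selected are assumptions, and the real extent_mod_blocks() certainly differs.

#include <linux/types.h>

static u64 extent_mod_dirty_blocks(u64 tree_height, bool shrinking,
                                   bool below_low_water)
{
        u64 per_level;

        if (!shrinking) {
                /* insert between two extents: 2 by-blkno + 3 by-size items */
                per_level = 5;
        } else {
                /* merging dirties 2 per level, 3 if a block falls below low water */
                per_level = below_low_water ? 3 : 2;
        }

        return per_level * tree_height;
}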
Chris Kirby
3f786596e0 Don't overrun the block budget in server_log_merge_free_work().
This fixes a potential fence post failure like the following:

error: 1 holders exceeded alloc budget av: bef 7407 now 7392, fr: bef 8185 now 7672

The code is only accounting for the freed btree blocks, not the dirtying of
other items. So it's possible to be at exactly (COMMIT_HOLD_ALLOC_BUDGET / 2),
dirty some log btree blocks, loop again, then consume another
(COMMIT_HOLD_ALLOC_BUDGET / 2) and blow past the total budget.

In this example, we went over by 13 blocks.

By only consuming up to 1/8 of the budget on each loop, and committing when we
have consumed 3/4 of the budget, we can avoid the fence post condition.

Signed-off-by: Chris Kirby <ckirby@versity.com>
2025-10-06 12:27:25 -05:00
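
A sketch of the pacing described above. COMMIT_HOLD_ALLOC_BUDGET is a real identifier from the message, but its value here, the consumed counter, and the helper names are assumed for illustration: never consume more than 1/8 of the budget per pass, and commit once 3/4 of it has been consumed.

#include <linux/types.h>

#define COMMIT_HOLD_ALLOC_BUDGET 8192   /* assumed value for illustration */

/* cap how much a single loop iteration is allowed to consume */
static u64 merge_free_pass_limit(void)
{
        return COMMIT_HOLD_ALLOC_BUDGET / 8;
}

/* committing at 3/4 leaves headroom for one more 1/8 pass */
static bool merge_free_should_commit(u64 *consumed, u64 freed_this_pass)
{
        *consumed += freed_this_pass;
        return *consumed >= (COMMIT_HOLD_ALLOC_BUDGET * 3) / 4;
}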
Zach Brown
e088424d70 Add initial filters for warnings in distro source
Add a chunk of filters for sparse warnings that trigger on distro kernel
source.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-03 09:35:36 -07:00
Zach Brown
d0cf026298 Require sparse, and filter kernel sparse output
Fail the build if we don't check with sparse in both the kernel and
userspace utils.  Add a filtering wrapper to the kernel build so that we
have a place to filter out uninteresting errors from kernel sources that
we're building against.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-03 09:35:36 -07:00
Zach Brown
03fa1ce7c5 Avoid bad sparse warning in lock_invalidate()
This is another example of refactoring a loop to avoid sparse warnings
from doing something in the else branch of a failed trylock.  We want to
drop and reacquire the lock if the trylock fails, so we do it every loop
iteration.  This shouldn't be experiencing much contention because most
of the cov users are usually done under locks and invalidation has
excluded lock holders.  So the additional lock and unlock noise should
be local.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-03 09:35:36 -07:00
Zach Brown
3d9f10de93 Work around sparse warning in _item_write_done
scoutfs_item_write_done() acquires the cinf dirty_lock and pg rwlock out
of order.  It uses a trylock to detect failure and back off of both
before retrying.

sparse seems to have some peculiar sensitivity to following the else
branch from a failed trylock while already in a context.  Doing that
consistently triggered the spurious mismatched context warning.

This refactors the loop to always drop and reacquire the dirty_lock
after attempting the trylock.  It's not great, but this shouldn't be very
contended because the transaction write has serialized the write lock
holders that would be trying to dirty items.  The silly lock noise will
be mostly cached.

Signed-off-by: Zach Brown <zab@versity.com>
2025-10-03 09:34:23 -07:00
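
The general shape of the refactor, with invented lock names and types: instead of only backing off in the else branch of a failed trylock, which sparse's context tracking dislikes, the loop drops and reacquires the outer lock on every iteration.

#include <linux/spinlock.h>
#include <linux/types.h>

static void mark_dirty_locked(spinlock_t *dirty_lock, rwlock_t *pg_rwlock)
{
        bool done = false;

        while (!done) {
                spin_lock(dirty_lock);
                if (write_trylock(pg_rwlock)) {
                        /* ... do the work that needs both locks ... */
                        write_unlock(pg_rwlock);
                        done = true;
                }
                /* unconditionally dropped, whether the trylock won or not */
                spin_unlock(dirty_lock);
        }
}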
Auke Kok
2f48a606e8 Fix -Wmaybe-uninitialized since rhel9.5
Looks like the compiler isn't smart enough to understand the
pass-by-pointer value, and we can initialize it here easily.

make[1]: Entering directory '/usr/src/kernels/5.14.0-503.26.1.el9_5.x86_64'
  CC [M]  /home/auke/scoutfs/kmod/src/server.o
/home/auke/scoutfs/kmod/src/server.c: In function ‘fence_pending_recov_worker’:
/home/auke/scoutfs/kmod/src/server.c:4170:23: error: ‘addr.v4.addr’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
 4170 |                 ret = scoutfs_fence_start(sb, rid, le32_to_be32(addr.v4.addr),
      |                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 4171 |                                           SCOUTFS_FENCE_CLIENT_RECOVERY);
      |                                           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors

There's still the obvious issue that we'd intended to support IPv6,
but disregard that here.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-08 15:20:50 -07:00
Zach Brown
888b1394a6 Retry client commit and get log trees separately
The client transaction commit worker has a series of functions that it
calls to commit the current transaction and open the next one.  If any
of them fail, it retries all of them from the beginning each time until
they all succeed.

This pattern behaves badly since we added the strict get_trans_seq and
commit_trans_seq latching in the log_trees.  The server will only commit
the items for a get or commit request once, and will fail a commit
request if it isn't given the seq that matches the current item.

If the server gets an error it can have persisted items while sending an
error to the client.  If this error was for a get request, then the
client will retry all of its transaction write functions.  This includes
the commit request which is now using a stale seq and will fail
indefinitely.  This is visible in the server log as:

  error -5 committing client logs for rid e57e37132c919c4f: invalid log trees item get_trans_seq

The solution is to retry the commit and get phases independently.  This
way a failed get will be retried on its own without running through the
commit phase that had succeeded.  The client will eventually get the
next seq that it can then safely commit.

Signed-off-by: Zach Brown <zab@versity.com>
2025-04-29 11:46:38 -07:00
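
A rough sketch of the control flow change, with hypothetical helper names and a simplified retry-forever loop: each phase retries on its own, so a failed get doesn't rerun a commit that already succeeded and whose seq would now be stale.

#include <linux/delay.h>
#include <linux/fs.h>

static int client_commit_log_trees(struct super_block *sb);    /* hypothetical */
static int client_get_log_trees(struct super_block *sb);       /* hypothetical */

static void commit_then_get_log_trees(struct super_block *sb)
{
        int ret;

        /* retry the commit phase alone until it succeeds */
        do {
                ret = client_commit_log_trees(sb);
                if (ret < 0)
                        msleep(100);    /* arbitrary backoff for the sketch */
        } while (ret < 0);

        /* then retry the get phase alone; the committed seq is never reused */
        do {
                ret = client_get_log_trees(sb);
                if (ret < 0)
                        msleep(100);
        } while (ret < 0);
}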
Zach Brown
e457694f19 Don't send dirty data_freed blocks to client
At the end of get_log_trees we can try to drain the data_freed extent
tree, which can take multiple commits.  If a commit fails then the
blocks are still dirty in memory.  We can't send references to those
blocks to the client.  We have to return an error and not send the
log_trees, like the main get_log_trees does.  The client will retry and
eventually get a log_trees that references blocks that were successfully
committed.

Signed-off-by: Zach Brown <zab@versity.com>
2025-04-29 11:46:38 -07:00
Zach Brown
609fc56cd6 Merge pull request #203 from versity/auke/new_inode_ctime
Fix new_inode ctime assignment.
2025-02-25 15:23:16 -08:00
Auke Kok
e3e2cfceec Fix new_inode ctime assignment.
Very old copy/paste bug here: we want to update new_inode's ctime
instead. old_inode is already updated.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-02-18 13:15:49 -05:00
Auke Kok
e9d147260c Fix ctx->pos updating to properly handle dent gaps
We need to ensure we're emitting dents with the proper position,
which we already have as part of our dent. The only caveat is
to increment ctx->pos once beyond the list to make sure the caller
doesn't call us once more.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
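
The gist of the fix under an assumed stored-dent layout: emit each entry at the position carried in the dent itself (positions can have gaps), then park ctx->pos one past the last entry so the VFS doesn't call back in for more. dir_emit() is real; everything else is illustrative.

#include <linux/fs.h>
#include <linux/types.h>

struct stored_dent {                    /* hypothetical */
        u64 pos;
        u64 ino;
        unsigned int type;
        unsigned int name_len;
        char name[];
};

static void emit_dents(struct dir_context *ctx, struct stored_dent **dents,
                       unsigned int nr)
{
        unsigned int i;

        for (i = 0; i < nr; i++) {
                struct stored_dent *d = dents[i];

                /* use the dent's own position, not a running counter */
                ctx->pos = d->pos;
                if (!dir_emit(ctx, d->name, d->name_len, d->ino, d->type))
                        return;
        }

        /* one past the final entry ends iteration cleanly */
        if (nr)
                ctx->pos = dents[nr - 1]->pos + 1;
}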
Auke Kok
6c85879489 Assert unlock doesn't underflow lock user count.
While debugging a double unlock error we hit this condition, and
debugging would have been a lot easier had we enforced the simple
constraint that we can't decrement the lock users count if it's
already 0.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
8b76a53cf3 Avoid cluster locking while put_user() in _allocated_inos.
Similar to fiemap, readdir and walk_inodes, this method could page
fault during put_user(), potentially causing a deadlock.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
e76a171c40 Avoid faulting while cluster locked in _walk_inodes.
Similar to the readdir and fiemap vfs methods, we can't copy to user
while holding cluster locks. The previous comment about it being safe no
longer applies, and this could deadlock.

Rewrite the loop to iterate and store entries in a page, then flush
the page contents while not holding a cluster lock.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
8cb08507d6 Do not copy to user while holding locks in scoutfs_data_fiemap()
Now that we support mmap writes, at any point in time we could
page fault and lock for writes. That means, just like readdir,
we can no longer lock and copy_to_user, since it also may page fault
and thus deadlock.

We statically allocate 32 extent entries on the stack and use
these to shuffle out a batch of fiemap entries at a time, locking and
unlocking around collecting and fiemap_fill_next_extent.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
Auke Kok
cad12d5ce8 Avoid deadlock in _readdir() due to copy_to_user().
dir_emit() will copy_to_user, which can page fault. If this happens while
cluster locked, we could deadlock.

We use a single page to stage dir_emit data, and iterate between
fetching dirents while locked, and emitting them while not locked.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-27 14:49:04 -05:00
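
The two-phase shape of these deadlock fixes, heavily simplified: gather entries into a private page while the cluster lock is held, then drop the lock before dir_emit() and its copy_to_user page faults touch user memory. The lock/fill/emit helpers below are hypothetical placeholders for the scoutfs internals.

#include <linux/fs.h>
#include <linux/gfp.h>

static void lock_cluster_read(struct file *file);              /* hypothetical */
static void unlock_cluster_read(struct file *file);            /* hypothetical */
static unsigned int fill_page_with_dirents(struct file *file, loff_t pos,
                                           void *buf);         /* hypothetical */
static bool emit_dirents_from_page(struct dir_context *ctx, void *buf,
                                   unsigned int nr);           /* hypothetical, advances ctx->pos */

static int staged_readdir(struct file *file, struct dir_context *ctx)
{
        void *page_buf = (void *)__get_free_page(GFP_KERNEL);
        unsigned int nr;

        if (!page_buf)
                return -ENOMEM;

        for (;;) {
                /* gather a page worth of dirents under the cluster lock */
                lock_cluster_read(file);
                nr = fill_page_with_dirents(file, ctx->pos, page_buf);
                unlock_cluster_read(file);

                if (nr == 0)
                        break;

                /* dir_emit()'s copy_to_user may fault; no cluster locks held here */
                if (!emit_dirents_from_page(ctx, page_buf, nr))
                        break;
        }

        free_page((unsigned long)page_buf);
        return 0;
}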
Auke Kok
1bcd1d4d00 Drop readdir pre-.iterate() compat (el7.5ish).
These 2 sections of compat for readdir are wholly obsolete and can be
hard dropped, which restores the method to look like current upstream
code.

This was added in ddd1a4e.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-23 14:28:40 -05:00
Auke Kok
b944f609aa remap_pages op becomes obsolete. 2025-01-23 14:28:40 -05:00
Auke Kok
519b47a53c mmap() trace events.
We merely trace exit values and position, and ignore length.

Because vm_fault_t is __bitwise, sparse will loudly complain about
a plain cast to u32, so we must use __force (on el8). ret will be 512 in
normal cases.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-23 14:28:40 -05:00
Benjamin LaHaise
3788d67101 Add support for writable shared mmap()ings
Add support for writable MAP_SHARED mmap()ings.  Avoid issues with late
writepage()s building transactions by doing the block_write_begin() work in
scoutfs_data_page_mkwrite().  Ensure the page is marked dirty and prepared
for write, then let the VM complete the write when the page is flushed or
invalidated.

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-23 14:28:40 -05:00
Benjamin LaHaise
b7a3d03711 Add support for read only mmap()
Adds the required memory mapped ops struct and page fault handler
for reads.

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-01-23 14:28:40 -05:00
Zach Brown
3e71f49260 Merge pull request #195 from versity/auke/el9_5
RHEL9.5 kernel support
2024-12-03 14:27:57 -08:00
Greg Cymbalski
110d5ea0d5 Block EL9 minor version upgrades
Since kABI migrations across minor versions are a thing of the past going
forward, we now:
- Detect if we're on EL9
- If so, add a requirement on the various flavors of release package to
  that specific major.minor version

This appropriately does not allow upgrades across minor versions.
2024-12-02 16:04:24 -08:00
Auke Kok
669de459a7 bdev_open_by_path is now removed as well.
Additional blkdev/bdev changes now cause this call to be removed as
well, resulting in us having to use yet another API to do the same for
el9_5.

The changes are a little more subtle, as the bdev_mount() call now passes
a custom bd_holder_ops that we must match or else trigger a WARN_ON, so we
switch to using sbi as our holder arg instead.

Make sure to bdev_fput and not fput, since we don't want to have our
private data cleanup deferred, failing xfstests generic/604.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-11-27 18:52:39 -08:00
Auke Kok
621271f8cf backing_dev_info is entirely removed.
The assignments to it are no longer needed at all. All references can be
dropped since v6.4-rc4-163-g0d625446d0a4.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-11-08 13:32:21 -05:00
Auke Kok
d1092cdbe9 current_time() is no longer extern.
Since v6.5-rc1-7-g9b6304c1d537, current_time() is no longer
extern, so we need to update this grep regex to continue to match.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2024-11-08 13:21:03 -05:00
Zach Brown
19e78c32fc Allow null lock compatibility between nodes
Right now a client requesting a null mode for a lock will cause
invalidations of all existing granted modes of the lock across the
cluster.

This is unnecessarily broad.  The absolute requirement is that a null
request invalidates other existing granted modes on the client.  That's
how the client safely resolves shrinking's desire to free locks while
the locks are in use.  It relies on turning it into the race between use
and remote invalidation.

But that only requires invalidating existing grants from the requesting
client, not all clients.  It is always safe for null grants to coexist
with all grants on other clients.  Consider the existing mechanics
involving null modes.  First, null locks are instantiated on the client
before sending any requests at all.  At any given time newly allocated
null locks are coexisting with all existing locks across the cluster.
Second, the server frees the client entry tracking struct the moment it
sends a null grant to the client.  From that point on the client's null
lock can not have any impact on the rest of the lock holders because the
server has forgotten about it.

So we add this case to the server's test that two client lock modes are
compatible.  We take the opportunity to comment the heck out of this
function instead of making it a dense boolean composition.  The only
functional change is the addition of this case, the existing cases are
refactored but unchanged.

Signed-off-by: Zach Brown <zab@versity.com>
2024-10-31 15:34:59 -07:00
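
A heavily commented sketch of the compatibility test the commit describes. The mode names, the grant struct, and the final read/write case are placeholders; only the null-mode reasoning follows the message above.

#include <linux/types.h>

enum lock_mode { LM_NULL, LM_READ, LM_WRITE };  /* placeholder modes */

struct client_grant {                           /* placeholder grant record */
        u64 rid;
        enum lock_mode mode;
};

static bool modes_compatible(struct client_grant *existing, u64 req_rid,
                             enum lock_mode req_mode)
{
        /*
         * A null request must still invalidate grants held by the
         * requesting client itself; that's how shrinking safely turns
         * freeing in-use locks into a race between use and invalidation.
         */
        if (req_mode == LM_NULL && existing->rid == req_rid)
                return false;

        /*
         * But null grants can always coexist with grants on other
         * clients: null locks exist on the client before any request is
         * sent, and the server forgets its tracking struct the moment a
         * null grant goes out.
         */
        if (req_mode == LM_NULL || existing->mode == LM_NULL)
                return true;

        /* placeholder for the pre-existing, unchanged cases */
        return req_mode == LM_READ && existing->mode == LM_READ;
}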