Add an archive layer ioctl for converting offline extents into sparse
extents without relying on or modifying data_version. This is helpful
when working with files that have very large sparse regions.
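For illustration only, calling an ioctl like this from userspace looks
roughly as follows; the request name and argument struct below are
hypothetical stand-ins, not the actual definitions from the scoutfs
ioctl header:

    #include <sys/ioctl.h>
    #include <linux/types.h>

    /* hypothetical argument layout, for this sketch only */
    struct convert_offline_args {
            __u64 offset;   /* start of the offline byte range */
            __u64 length;   /* number of bytes to convert to sparse */
    };

    /* SCOUTFS_IOC_OFFLINE_TO_SPARSE is a made-up name for illustration */
    static int offline_to_sparse(int fd, __u64 offset, __u64 length)
    {
            struct convert_offline_args args = {
                    .offset = offset,
                    .length = length,
            };

            return ioctl(fd, SCOUTFS_IOC_OFFLINE_TO_SPARSE, &args);
    }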
Signed-off-by: Zach Brown <zab@versity.com>
It looks like the compiler isn't smart enough to see that the value is
initialized through the pointer it is passed by, and we can easily
initialize it here instead.
make[1]: Entering directory '/usr/src/kernels/5.14.0-503.26.1.el9_5.x86_64'
CC [M] /home/auke/scoutfs/kmod/src/server.o
/home/auke/scoutfs/kmod/src/server.c: In function ‘fence_pending_recov_worker’:
/home/auke/scoutfs/kmod/src/server.c:4170:23: error: ‘addr.v4.addr’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
4170 | ret = scoutfs_fence_start(sb, rid, le32_to_be32(addr.v4.addr),
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4171 | SCOUTFS_FENCE_CLIENT_RECOVERY);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
There's still the obvious issue that we'd intended to support IPv6, but
disregard that here.
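A minimal sketch of the shape of the fix; the address type and lookup
helper names here are made up for illustration:

    /* zero the whole address up front so every field is defined even
     * if the lookup below bails out before filling it in */
    struct mount_client_addr addr = { 0 };      /* hypothetical type */
    int ret;

    ret = lookup_client_addr(sb, rid, &addr);   /* fills addr via pointer */
    if (ret == 0)
            ret = scoutfs_fence_start(sb, rid, le32_to_be32(addr.v4.addr),
                                      SCOUTFS_FENCE_CLIENT_RECOVERY);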
Signed-off-by: Auke Kok <auke.kok@versity.com>
Occasionally we have some tests fail because these kills produce:
tests/lock-recover-invalidate.sh: line 42: 9928 Terminated
even though we expected them to be silent. In these particular cases we
already don't care about this output.
We borrow the silent_kill() function from orphan-inodes, promote it to
t_silent_kill() in funcs/exec.sh, and then use it wherever appropriate.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The current test sequence performs the unlink and immediately tests
whether enough resources are available to create new files again, and
this consistently fails.
One of my crummy VMs takes a good 12 seconds before the `touch` actually
succeeds. We care about the filesystem eventually recovering from
ENOSPC, and certainly we don't want that to take forever, but there is a
period after our first ENOSPC error and cleanup during which we expect
creation to keep failing with ENOSPC for a bit longer.
Make the timeout 120s. As soon as the `touch` completes, exit the wait
loop.
Signed-off-by: Auke Kok <auke.kok@versity.com>
If subsequent test runs are done without `-m` (explicit mkfs), old test
data files may break several tests. Most failures are -EEXIST, but
there are some more subtle ones.
This change erases any existing test dir as needed just before we
run the tests, and avoids the issue entirely.
I considered a `mv dir dir.$$ && rm -rf dir.$$ &` alternative solution,
but that will likely interfere disproportionately with tests that do
disconnects and other things that can be impacted by an unlink storm.
This has an obvious performance aspect: tests will be a little slower
to start on subsequent runs. In CI, this will effectively be a no-op
though.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This test regularly fails in CI when the 15 second timeout elapses and
the system still hasn't finished the mount log merges and orphan inode
scans needed to unlink the test files.
Instead of just extending the timeout value, we test-and-retry for 120s.
This is hopefully faster in most cases. My smallest VM needs about 6s-8s
on average.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The client transaction commit worker has a series of functions that it
calls to commit the current transaction and open the next one. If any
of them fail, it retries all of them from the beginning each time until
they all succeed.
This pattern behaves badly since we added the strict get_trans_seq and
commit_trans_seq latching in the log_trees. The server will only commit
the items for a get or commit request once, and will fail a commit
request if it isn't given the seq that matches the current item.
If the server gets an error it can have persisted items while sending an
error to the client. If this error was for a get request, then the
client will retry all of its transaction write functions. This includes
the commit request which is now using a stale seq and will fail
indefinitely. This is visible in the server log as:
error -5 committing client logs for rid e57e37132c919c4f: invalid log trees item get_trans_seq
The solution is to retry the commit and get phases independently. This
way a failed get will be retried on its own without running through the
commit phase that had succeeded. The client will eventually get the
next seq that it can then safely commit.
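Roughly the reworked structure, with hypothetical helper names standing
in for the client's transaction write functions:

    /* retry the commit of the previous seq on its own; the server
     * latches each seq and will only accept the matching commit */
    do {
            ret = commit_current_trans(sb, seq);        /* hypothetical */
    } while (ret < 0 && keep_retrying(sb));

    /* then retry the get independently, so a failed get never re-runs
     * a commit that already succeeded with a now-stale seq */
    do {
            ret = get_next_trans(sb, &seq);             /* hypothetical */
    } while (ret < 0 && keep_retrying(sb));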
Signed-off-by: Zach Brown <zab@versity.com>
At the end of get_log_trees we can try to drain the data_freed extent
tree, which can take multiple commits. If a commit fails then the
blocks are still dirty in memory. We can't send references to those
blocks to the client. We have to return an error and not send the
log_trees, like the main get_log_trees does. The client will retry and
eventually get a log_trees that references blocks that were successfully
committed.
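A sketch of the intent with hypothetical names; the point is only that
the error is returned instead of a reply that references dirty blocks:

    /* draining data_freed can span several commits; if one fails the
     * blocks are only dirty in memory, so don't hand out refs to them */
    ret = drain_data_freed(sb, lt);             /* hypothetical */
    if (ret < 0)
            goto out;   /* client retries and gets committed refs later */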
Signed-off-by: Zach Brown <zab@versity.com>
Stored as `results/scoutfs.tap`, this file contains generated test
results in TAP (version 14) format.
Embedded in the output is some metadata so that these files can be
aggregated and stored in a unique, deduplicating way, keyed by a UUID
generated at the start of testing. The file also captures the git ID,
date, and kernel version, as well as the (possibly altered) test
sequence used.
Any test that has diff or dmesg output will be considered failed, and a
copy of the relevant data is included as comments.
Signed-off-by: Auke Kok <auke.kok@versity.com>
This only happens with the basic-truncate test; it's the only user
of the `yes` program.
The `yes` command normally fails gracefully in the usual runs that
are attached to some terminal, but when the test script runs entirely
under something else it throws a needless error message that
pollutes the test output:
`yes: standard output: Broken pipe`
Adjust the redirect to discard all stderr from `yes` in this case.
Signed-off-by: Auke Kok <auke.kok@versity.com>
scoutfs cli commands were using a helper that tried to perform word
expansion on the path argument. This was done with the intent of
providing the convenience of shell expansion (env vars, ~) within the
cli command argument.
But it breaks paths that accidentally have their file names match the
syntax that wordexp supports. "[ ]" tripped up files in the wild.
We don't need to provide shell expansion functionality in our argument
parsing. The shell can do that. The cli must pass the arguments
straight through, no parsing at all.
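For illustration only (not the cli's code), the kind of surprise that
wordexp(3) springs on literal paths: a name containing "[ ]" is treated
as a glob pattern instead of being passed through verbatim:

    #include <stdio.h>
    #include <wordexp.h>

    int main(void)
    {
            wordexp_t we;
            size_t i;

            /* "[ ]" is a bracket expression matching a space, so if a
             * file named "archive 1" exists this expands to it rather
             * than the literal "archive[ ]1" the user asked for */
            if (wordexp("archive[ ]1", &we, WRDE_NOCMD) == 0) {
                    for (i = 0; i < we.we_wordc; i++)
                            printf("%s\n", we.we_wordv[i]);
                    wordfree(&we);
            }
            return 0;
    }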
Signed-off-by: Zach Brown <zab@versity.com>
Very old copy/paste bug here: we want to update new_inode's ctime
instead. old_inode is already updated.
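Roughly the shape of the fix, as a sketch (the surrounding code isn't
shown and the time variable is illustrative):

    old_inode->i_ctime = now;       /* was already being set */
    new_inode->i_ctime = now;       /* previously set old_inode again */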
Signed-off-by: Auke Kok <auke.kok@versity.com>
We need to ensure we're emitting dents at the proper position, and we
already have the position stored as part of each dent. The only caveat
is to increment ctx->pos once beyond the list to make sure the caller
doesn't call us once more.
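As a small sketch (dent field names hypothetical), the loop emits each
entry at its stored position and then steps past the end:

    list_for_each_entry(dent, &dent_list, head) {
            ctx->pos = dent->pos;   /* emit at the dent's own position */
            if (!dir_emit(ctx, dent->name, dent->name_len,
                          dent->ino, dent->type))
                    return 0;
            last_pos = dent->pos;
    }
    /* step one past the final entry so the vfs doesn't call us again */
    ctx->pos = last_pos + 1;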
Signed-off-by: Auke Kok <auke.kok@versity.com>
While debugging a double unlock error we hit this condition, and
debugging would have been a lot easier had we enforced the simple
constraint that the lock users count can't be decremented when it's
already 0.
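The constraint is cheap to enforce; roughly, with a hypothetical field
name:

    /* a double unlock would drive the count negative; catch it loudly */
    if (WARN_ON_ONCE(lock->users == 0))
            return;
    lock->users--;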
Signed-off-by: Auke Kok <auke.kok@versity.com>
Similar to fiemap, readdir, and walk_inodes, this method could
put_user() while cluster locked; the resulting page fault could
potentially deadlock.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Similar to the readdir and fiemap vfs methods, we can't copy to
userspace while holding cluster locks. The previous comment about it
being safe no longer applies, and this could deadlock.
Rewrite the loop to iterate and store entries in a page, then flush
the page contents while not holding a cluster lock.
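The general pattern, sketched with hypothetical locking and fill
helpers:

    char *kpage = (char *)__get_free_page(GFP_KERNEL);
    ssize_t bytes, total = 0;

    if (!kpage)
            return -ENOMEM;

    for (;;) {
            /* gather entries into our private page under the lock */
            lock_cluster(inode);                        /* hypothetical */
            bytes = fill_entries(inode, &pos, kpage, PAGE_SIZE);
            unlock_cluster(inode);                      /* hypothetical */
            if (bytes <= 0)
                    break;  /* done, or error */

            /* copy_to_user can page fault; no cluster lock held here */
            if (copy_to_user(ubuf + total, kpage, bytes)) {
                    bytes = -EFAULT;
                    break;
            }
            total += bytes;
    }

    free_page((unsigned long)kpage);
    return bytes < 0 ? bytes : total;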
Signed-off-by: Auke Kok <auke.kok@versity.com>
Now that we support mmap writes, at any point in time we could page
fault and lock for writes. That means, just like readdir, we can no
longer hold a lock across copy_to_user, since it too may page fault
and thus deadlock.
We statically allocate 32 extent entries on the stack and use these
to shuffle out fiemap entries a batch at a time, locking and unlocking
around collecting entries and calling fiemap_fill_next_extent().
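Sketched roughly below; the record struct and collection helper are
hypothetical, while fiemap_fill_next_extent() is the regular vfs
helper (it returns 1 once the user's buffer is full):

    struct ext_rec {                            /* hypothetical */
            u64 logical, phys, len;
            u32 flags;
    } batch[32];
    int nr, i, ret = 0;

    do {
            /* gather up to 32 extents while holding the cluster lock */
            lock_cluster(inode);                /* hypothetical */
            nr = collect_extents(inode, &pos, batch, 32);
            unlock_cluster(inode);              /* hypothetical */
            if (nr < 0)
                    return nr;

            /* filling the user's buffer can page fault; do it unlocked */
            for (i = 0; i < nr && ret == 0; i++)
                    ret = fiemap_fill_next_extent(fieinfo,
                                                  batch[i].logical,
                                                  batch[i].phys,
                                                  batch[i].len,
                                                  batch[i].flags);
    } while (nr == 32 && ret == 0);

    return ret < 0 ? ret : 0;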
Signed-off-by: Auke Kok <auke.kok@versity.com>
dir_emit() will copy_to_user, which can page fault. If this happens
while cluster locked, we could deadlock.
We use a single page to stage dir_emit data, and iterate between
fetching dirents while locked and emitting them while not locked.
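Roughly the shape of the alternation, with the page fill helper and
staged dirent layout as hypothetical names; dir_emit() is the regular
vfs helper:

    for (;;) {
            /* fetch a page worth of dirents under the cluster lock */
            lock_cluster(inode);                /* hypothetical */
            nr = fill_dirent_page(inode, ctx->pos, page, &dents);
            unlock_cluster(inode);              /* hypothetical */
            if (nr <= 0)
                    break;

            /* dir_emit() copies to userspace and can page fault; we
             * hold no cluster lock while emitting */
            for (i = 0; i < nr; i++) {
                    ctx->pos = dents[i].pos;
                    if (!dir_emit(ctx, dents[i].name, dents[i].name_len,
                                  dents[i].ino, dents[i].type))
                            return 0;
            }
            ctx->pos = dents[nr - 1].pos + 1;
    }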
Signed-off-by: Auke Kok <auke.kok@versity.com>
These two compat sections for readdir are wholly obsolete and can be
dropped outright, which restores the method to look like current
upstream code.
This compat was added in ddd1a4e.
Signed-off-by: Auke Kok <auke.kok@versity.com>
We merely trace exit values and position, and ignore length.
Because vm_fault_t is __bitwise, sparse will loudly complain about a
plain cast to u32, so we must use __force (on el8). ret will be 512
(VM_FAULT_LOCKED) in normal cases.
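Roughly what the call ends up looking like (the tracepoint name here is
a hypothetical stand-in):

    /* vm_fault_t is __bitwise, so sparse wants __force on a plain cast */
    trace_scoutfs_page_fault(inode, pos, (__force u32)ret);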
Signed-off-by: Auke Kok <auke.kok@versity.com>
Now that all of these should be passing, we enable all mmap() tests in
xfstests, and update the golden output with the new tests.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Two test programs are added. The run time is about 1min on my el7
instance.
The test script finishes up with a read/write mmap test on offline
extents to verify the data wait paths in those functions.
One program will perform vfs read/write and mmap read/write calls on
the same file across 5 threads (mounts) repeatedly. The goal is to
ensure there are no locking issues between the read/write paths.
The second test program performs consistency checking on a file that is
repeatedly written/read using memory maps and normal reads and writes,
and the content is verified after every operation.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Add support for writable MAP_SHARED mmap()ings. Avoid issues with late
writepage()s building transactions by doing the block_write_begin() work in
scoutfs_data_page_mkwrite(). Ensure the page is marked dirty and prepared
for write, then let the VM complete the write when the page is flushed or
invalidated.
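A rough sketch of the shape of the mkwrite path under those
constraints; the transaction and block preparation helpers are
hypothetical stand-ins, and error handling is abbreviated:

    static vm_fault_t scoutfs_data_page_mkwrite(struct vm_fault *vmf)
    {
            struct inode *inode = file_inode(vmf->vma->vm_file);
            struct page *page = vmf->page;
            int ret;

            /* take cluster locks and open a transaction now, doing the
             * block_write_begin()-style allocation up front, so a later
             * writepage() never has to build a transaction itself */
            ret = enter_write_trans(inode);             /* hypothetical */
            if (ret)
                    return vmf_error(ret);

            lock_page(page);
            if (page->mapping != inode->i_mapping) {
                    /* truncated or invalidated while we waited */
                    unlock_page(page);
                    exit_write_trans(inode);            /* hypothetical */
                    return VM_FAULT_NOPAGE;
            }

            ret = prepare_page_blocks(inode, page);     /* hypothetical */
            exit_write_trans(inode);                    /* hypothetical */
            if (ret) {
                    unlock_page(page);
                    return vmf_error(ret);
            }

            set_page_dirty(page);
            /* return with the page locked and dirty; the VM writes it
             * back later when the page is flushed or invalidated */
            return VM_FAULT_LOCKED;
    }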
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Auke Kok <auke.kok@versity.com>