As I was committing the initial check command I had only partially
completed a rename of the function that checks the metadata allocators.
Signed-off-by: Zach Brown <zab@versity.com>
The list alloc blocks have an array of blknos that are offset by a start
field in the block header. The print code wasn't using that and was
always referencing the beginning of the array, which could miss blocks.
Signed-off-by: Zach Brown <zab@versity.com>
server_log_merge_free_work() is responsible for freeing all the input
log trees for a log merge operation that has finished. It looks for the
next item to free, frees the log btree it references, and then deletes
the item. It was doing this with a full server commit for each item
which can take an agonizingly long time.
This changes it to perform multiple deletions in a commit as long as
there's plenty of alloc space. The moment the commit's allocator gets low it
applies the commit and opens a new one. This sped up the deletion of a
few hundred thousand log tree items from taking hours to seconds.
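Roughly, the freeing loop now follows this batching pattern (a sketch
with made-up helper names like hold_commit() and alloc_space_low(), not
the actual server functions):

  /* hypothetical stand-ins for the real server commit and item calls */
  struct server;
  int hold_commit(struct server *server);
  int apply_commit(struct server *server);
  int alloc_space_low(struct server *server);
  int delete_next_item(struct server *server); /* 1 deleted, 0 done, -errno */

  static int free_merge_input_trees(struct server *server)
  {
          int ret;

          ret = hold_commit(server);
          if (ret < 0)
                  return ret;

          while ((ret = delete_next_item(server)) > 0) {
                  if (!alloc_space_low(server))
                          continue;

                  /* apply what we have so far and open a new commit */
                  ret = apply_commit(server);
                  if (ret == 0)
                          ret = hold_commit(server);
                  if (ret < 0)
                          return ret;
          }

          if (ret == 0)
                  ret = apply_commit(server);
          return ret;
  }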
Signed-off-by: Zach Brown <zab@versity.com>
The btree_merge code was pinning leaf blocks for all input btrees as it
iterated over them. This doesn't work when there are a very large
number of input btrees. It can run out of memory trying to hold a
reference to a 64KiB leaf block for each input root.
This reworks the btree merging code. It reads a window of blocks from
all input trees to get a set of merged items. It can take multiple
passes to complete the merge but by setting the merge window large
enough this overhead is reduced. Merging now consumes a fixed amount of
memory rather than using memory proportional to the number of input
btrees.
Signed-off-by: Zach Brown <zab@versity.com>
Add a mount option for the amount of time that log merge creation can
wait before giving up. We add some counters so we can see how often
the timeout is being hit and what the average successful wait time is.
Signed-off-by: Zach Brown <zab@versity.com>
The server sends sync requests to clients when it sees that they have
open log trees that need to be committed for log merging to proceed.
These are currently sent in the context of each client's get_log_trees
request, resulting in sync requests queued for one client from all
clients. Depending on message delivery and commit latencies, this can
create a sync storm.
The server's sends are reliable and the open commits are marked with the
seq when they opened. It's easy for us to record having sent syncs to
all open commits so that future attempts can be avoided. Later open
commits will have higher seqs and will get a new round of syncs sent.
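The bookkeeping is small, something like this sketch (synced_seq and
the helper name are made up for illustration):

  /* only queue a round of syncs for open commits we haven't covered yet */
  if (greatest_open_seq > server->synced_seq) {
          send_sync_requests(server, greatest_open_seq);
          server->synced_seq = greatest_open_seq;
  }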
Signed-off-by: Zach Brown <zab@versity.com>
The server was checking all client log_trees items to search for the
lowest commit seq that was still open. This can be expensive when there
are a lot of finalized log_trees items that won't have open seqs. Only
the last log_trees item for each client rid can be open, and the items
are sorted by rid and nr, so we can easily only check the last item for
each client rid.
Signed-off-by: Zach Brown <zab@versity.com>
During get_log_trees the server checks log_trees items to see if it
should start a log merge operation. It did this by iterating over all
log_trees items and there can be quite a lot of them.
It doesn't need to see all of the items. It only needs to see the most
recent log_trees item for each mount. That's enough to make the
decisions that start the log merging process.
Signed-off-by: Zach Brown <zab@versity.com>
KASAN could raise a spurious warning if the unwinder started in code
without ORC metadata and tried to access the KASAN stack frame
redzones. This was fixed upstream but we can still rarely see it in
older kernels. We can ignore these messages.
Signed-off-by: Zach Brown <zab@versity.com>
This test is trying to make sure that concurrent work isn't much, much
slower than individual work. It does this by timing creating a bunch of
files in a dir on a mount and then timing doing the same in two mounts
concurrently. But it messed up the concurrency pretty badly.
It had the concurrent createmany tasks creating files with a full path.
That means that every create is trying to read all the parent
directories. The way inode number allocation works means that one of
the mounts is likely to be getting a write lock that includes a shared
parent. This created a ton of cluster lock contention between the two
tasks.
Then it didn't sync the creates between phases. It could be
accidentally recording the time it took to write out the dirty
single-mount creates as time taken during the parallel creates.
By syncing between phases and having the createmany tasks create files
relative to their per-mount directories we actually perform concurrent
work and test that we're not creating contention outside of the task
load.
This became a problem as we switched from loopback devices to device
mapper devices. The loopback writers were using buffered writes so we
were masking the io cost of constantly invalidating and refilling the
item cache by turning the reads into memory copies out of the page
cache.
While we're in here we actually clean up the created files and then use
t_fail to fail the test while the files still exist so they can be
examined.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we're not setting up per-mount loopback devices the loop
module might not be loaded by the time tests are running.
Signed-off-by: Zach Brown <zab@versity.com>
We don't directly mount the underlying devices for each mount because
the kernel notices multiple mounts and doesn't set up a new super block
for each.
Previously the script used loopback devices to create the local shared
block construct 'cause it was easy. This introduced corruption of
blocks that saw concurrent read and write IOs. The buffered kernel file
IO paths that loopback eventually degrades into by default (via splice)
could have buffered readers copying out of pages without the page lock
while writers modified the page. This manifested as occasional crc
failure of blocks that we knowingly issue concurrent reads and writes to
from multiple mounts (the quorum and super blocks).
This changes the script to use device-mapper linear passthrough devices.
Their IOs don't hit a caching layer and don't provide an opportunity to
corrupt blocks.
Signed-off-by: Zach Brown <zab@versity.com>
Our large fragmented free test creates pathologically fragmented file extents which
are as expensive as possible to free. We know that debugging kernels
can take a long time to do this so we can extend the hung task timeout.
Signed-off-by: Zach Brown <zab@versity.com>
One of the phases of this test wanted to delete files but got the glob
quoting wrong. This didn't matter for the original test but when we
changed the test to use its own xattr name then those existing undeleted
files got confused with other files in later phases of the test.
This changes the test to delete the files with a more reliable find
pattern instead of using shell glob expansion.
Signed-off-by: Zach Brown <zab@versity.com>
Previously the bulk_create_paths test tool used the same xattr name for
each category of xattrs it was creating.
This created a problem where two tests got their xattrs confused with
each other. The first test created a bunch of srch xattrs, failed, and
didn't clean up after itself. The second test saw these srch xattrs
as its own and got very confused when there were far more srch xattrs
than it thought it had created.
This lets each test specify the srch xattr names that are created by
bulk_create_paths so that tests can work with their xattrs independent
of each other.
Signed-off-by: Zach Brown <zab@versity.com>
We just added a test to try and get srch compaction stuck by having an
input file continue at a specific offset. To exercise the bug the test
needs to perform 6 compactions. It needs to merge 4 sets of logs into 4
sorted files, it needs to make partial progress merging those 4 sorted
files into another file, and then finally attempt to continue compacting
from the partial progress offset.
The first version of the test didn't necessarily ensure that these
compactions happened. It created far too many log files then just
waited for time to pass. If the host was slow then the mounts may not
make it through the initial logs to try and compact the sorted files.
The triggers wouldn't fire and the test would fail.
These changes much more carefully orchestrate and watch the various
steps of compaction to make sure that we trigger the bug.
Signed-off-by: Zach Brown <zab@versity.com>
Add a sysfs file for getting and setting the delay between srch
compaction requests from the client. We'll use this in testing to
ensure compaction runs promptly.
Signed-off-by: Zach Brown <zab@versity.com>
Compacting sorted srch files can take multiple transactions because they
can be very large. Each transaction resumes at a byte offset in a block
where the previous transaction stopped.
The resuming code tests that the byte offsets are sane but had a mistake
in testing the offset to skip to. It returned an error if the
compaction resumed from the last possible safe offset for decoding
entries.
If a system is unlucky enough to have a compaction transaction stop at
just this offset then compaction stops making forward progress as each
attempt to resume returns an error.
The fix allows continuation from this last safe offset while returning
errors for attempts to continue *past* that offset. This matches all
the encoding code which allows encoding the last entry in the block at
this offset.
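The change amounts to the comparison in the resume check, along these
lines (the _SAFE_BYTES constant name and error value are stand-ins):

  /* before: resuming from the last safe decode offset was rejected */
  if (pos >= SCOUTFS_SRCH_BLOCK_SAFE_BYTES)
          return -EIO;

  /* after: only reject resuming past the last safe decode offset */
  if (pos > SCOUTFS_SRCH_BLOCK_SAFE_BYTES)
          return -EIO;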
Signed-off-by: Zach Brown <zab@versity.com>
Add a test for srch compaction getting stuck hitting errors continuing a
partial operation. It ensures that a block has an encoded entry at
the _SAFE_BYTES offset, that an operation stops precisely at that
offset, and then watches for errors.
Signed-off-by: Zach Brown <zab@versity.com>
The srch compaction request building function and the srch compaction
worker both have logic to recognize a valid response with no input files
indicating that there's no work to do. The server unfortunately
translated nr == 0 into ENOENT and sent that error response to the
client. This caused the client to increment error counters in the
common case when there's no compaction work to perform. We'd like the
error counter to reflect actual errors, and we're about to check it in
a test, so let's fix this up so the server sends a successful response
with nr == 0 to indicate that there's no work to do.
Signed-off-by: Zach Brown <zab@versity.com>
Without `iflag=fullblock` we encounter sporadic cases where the
input file to the truncate test isn't fully written to 8K and ends
up being only 4K. The subsequent truncate tests then fail.
We add a check of the input test file size just to be sure in the
future.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The server had a few lower level seqcounts that it used to protect
state. One user got it wrong by forgetting to disable pre-emption
around writers. Debug kernels warned as write_seqcount_begin() was
called without preemption disabled.
We fix that user and make it easier to get right in the future by having
one higher level seqlock and using that consistently for seq read
begin/retry and write lock/unlock patterns.
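For reference, the pattern we standardize on looks like this generic
sketch, with demo_value standing in for whichever server fields the
seqlock protects:

  #include <linux/seqlock.h>
  #include <linux/types.h>

  static DEFINE_SEQLOCK(demo_seqlock);
  static u64 demo_value;

  /* writer: write_seqlock() takes an internal spinlock, so preemption
   * is handled for us, unlike a bare write_seqcount_begin() */
  static void demo_update(u64 val)
  {
          write_seqlock(&demo_seqlock);
          demo_value = val;
          write_sequnlock(&demo_seqlock);
  }

  /* reader: retry the copy if a writer raced with us */
  static u64 demo_read(void)
  {
          unsigned int seq;
          u64 val;

          do {
                  seq = read_seqbegin(&demo_seqlock);
                  val = demo_value;
          } while (read_seqretry(&demo_seqlock, seq));

          return val;
  }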
Signed-off-by: Zach Brown <zab@versity.com>
On el9 distros systemd-journald will log rotation events into kmsg.
Since the default logs on VM images are transient only, they are
rotated several times during a single test cycle, causing test failures.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The t_quiet test command execution helper was truncating quiet.log
with the output of each command instead of appending to it. It was
meant to show each command and its output as they're run.
Signed-off-by: Zach Brown <zab@versity.com>
The rpmbuild support files no longer define the previously used kernel
module macros. This carves out the differences between el7 and el8 with
conditionals based on the distro we are building for.
Signed-off-by: Ben McClelland <ben.mcclelland@versity.com>
In rhel7 this is a nested struct with ktime_t. However, in rhel8
ktime_t is a simple s64, and not a union, and thus we can't do
this as easily. Just memset it.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In newer kernels, we always get -ESTALE because the inode has been
marked immediately as deleting. Since this is expected behavior we
should not fail the test here on this error value.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In RHEL7, this was skipped automatically. In RHEL8, we don't support
the needed passing through of the actual user namespace into our
ACL set/get handlers. Once we get around v5.11 or so, the handlers
are automatically passed the namespace. Until then, skip this test.
Signed-off-by: Auke Kok <auke.kok@versity.com>
New kernels expect to do a partial match when a .prefix is used here,
and provide a .name member in case matching should look at the whole
string. This is what we want.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The caller takes care of caching for us. Doing our own caching
messes with the memory management of cached ACLs and breaks things.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Instead of messing with quotes and using grep for the correct
xattr name, directly query the value of the xattr being tested
only, and compare that to the input.
Side effect is that this is significantly simpler and faster.
Signed-off-by: Auke Kok <auke.kok@versity.com>
`stat` internally switched to using the new `statx` syscall, and this
affects the output of perror() subsequently. This is the same error
as before (and expected).
Signed-off-by: Auke Kok <auke.kok@versity.com>
The filefrag program in e2fsprogs-v1.42.10-10-g29758d2f now includes
an extra flag, and changes how the `unknown` flag is output.
We essentially adjust for this "new" golden value on the fly if we
encounter it. We don't expect future changes to the output.
Signed-off-by: Auke Kok <auke.kok@versity.com>
In older versions of coreutils, quoted strings are occasionally
output using utf-8 open/close single quotes.
New versions of coreutils will exclusively use the ASCII single quote
character "'" when the output is not a TTY - as is the case with
all test scripts.
We can avoid most of these problems by always setting LC_ALL=C in
testing, however.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The aio_read and aio_write callbacks are no longer used by newer
kernels, which now use iter based readers and writers.
We can avoid implementing plain .read and .write as an iter will
be generated when needed for us automatically.
We add a new data_wait_check_iter() function accordingly.
With these methods removed from the kernel, the el8 kernel no
longer uses the extended ops wrapper struct and is much closer now
to upstream. As a result, a lot of methods move between
inode_dir_operations and inode_file_operations etc, and perhaps things
will look a bit more structured.
We also need a slightly different data_wait_check() that accounts for
the iter and offset properly.
Signed-off-by: Auke Kok <auke.kok@versity.com>
.readpages is obsolete in el8 kernels. We implement the .readahead
method instead which is passed a struct readahead_control. We use
the readahead_page(rac) accessor to retrieve page by page from the
struct.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v4.9-12228-g530e9b76ae8f drops all (un)register_(hot)cpu_notifier()
API functions. From here on we need to use the new cpuhp_* API.
We avoid this entirely for now, at the cost of leaking pages until
the filesystem is unmounted.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Convert the timeout struct into a u64 nsecs value before passing it to
the trace point event, so as not to overflow the 64bit limitation on args.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v4.16-rc1-1-g9b2c45d479d0
This interface now returns (sizeof (addr)) on success, instead of 0.
Therefore, we have to change the error condition detection.
The compat for older kernels handles the addrlen check internally.
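The call site change is just the error test, roughly like this
(assuming the call in question is kernel_getsockname(); the same
pattern applies to the other getname style helpers changed by that
commit):

  ret = kernel_getsockname(sock, (struct sockaddr *)&sin);
  if (ret < 0)            /* success now returns the address length */
          goto out;
  ret = 0;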
Signed-off-by: Auke Kok <auke.kok@versity.com>
MS_* flags from <linux/mount.h> should not be used in the kernel
anymore from 4.x onwards. Instead, we need to use the SB_* versions
Signed-off-by: Auke Kok <auke.kok@versity.com>
Move to the more recent interfaces for counting and scanning cached
objects to shrink.
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
Move towards modern bio interfaces, while unfortunately carrying along a
bunch of compat functions that let us still work with the old
incompatible interfaces.
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
memalloc_nofs_save() was introduced as preferable to trying to use GFP
flags to indicate that a task should not recurse during reclaim. We use
it instead of the _noio_ we were using before.
Signed-off-by: Zach Brown <zab@versity.com>
__percpu_counter_add_batch was renamed to make it clear that the __
doesn't mean it's less safe, as it means in other calls in the API, but
just that it takes an additional parameter.
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
There are new interfaces available but the old one has been retained
for us to use. For older kernels, we need to fall back to the
previous names of these functions.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Provide fallback in degraded mode for kernels pre-v4.15-rc3 by directly
manipulating the member as needed.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Since v4.6-rc3-27-g9902af79c01a, inode->i_mutex has been replaced
with ->i_rwsem. However, inode_lock() and related functions have long
worked as intended and provided fully exclusive locking of the inode.
To avoid a name clash on pre-rhel8 kernels, we have to rename a
stack variable in `src/file.c`.
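On kernels that only have i_mutex the compat shim is small, something
like this sketch (the KC_HAVE_INODE_LOCK guard is hypothetical,
standing in for whatever kernelcompat test we use):

  #include <linux/fs.h>
  #include <linux/mutex.h>

  #ifndef KC_HAVE_INODE_LOCK      /* hypothetical compat guard */
  static inline void inode_lock(struct inode *inode)
  {
          mutex_lock(&inode->i_mutex);
  }

  static inline void inode_unlock(struct inode *inode)
  {
          mutex_unlock(&inode->i_mutex);
  }
  #endif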
Signed-off-by: Auke Kok <auke.kok@versity.com>
Since v4.15-rc3-4-gae5e165d855d, <linux/iversion.h> contains a new
inode->i_version API and it is not included by default.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The new variant of the code that recomputes the augmented value
is designed to handle non-scalar types and to facilitate that, it
has new semantics for the _compute callback. It is now passed a
boolean flag `exit` that indicates that, if the value isn't changed,
it should exit and halt propagation.
The callback now returns whether that propagation should stop, rather
than the computed new value, and it can directly update the new
computed value in the node.
Signed-off-by: Auke Kok <auke.kok@versity.com>
Fixes: Error: implicit declaration of function ‘blkdev_put’
Previously this was an `extern` in <fs.h> and included implicitly,
hence the need to hard include it now.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v4.1-rc4-22-g92cf211874e9 merges this into preempt.h, and on
rhel7 kernels we don't need this include anymore either.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v3.15-rc1-6-g1a56f2aa4752 removes flush_work_sync entirely, but
ever since v3.6-rc1-25-g606a5020b9bd which made all workqueues
non-reentrant, it has been equivalent to flush_work.
This is safe because in all cases only one server->work can be
in flight at a time.
Signed-off-by: Auke Kok <auke.kok@versity.com>
v3.18-rc3-2-g230fa253df63 forces us to replace ACCESS_ONCE() with
READ_ONCE(), which is probably the better interface anyway and works
with non-scalar types.
Signed-off-by: Auke Kok <auke.kok@versity.com>
PAGE_CACHE_SIZE was previously defined to be equivalent to PAGE_SIZE.
This symbol was removed in v4.6-rc1-32-g1fa64f198b9f.
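The compat fallback is a one-line define, since the two were always
equal (a sketch of the kernelcompat.h style fallback):

  #ifndef PAGE_CACHE_SIZE
  #define PAGE_CACHE_SIZE PAGE_SIZE
  #endif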
Signed-off-by: Auke Kok <auke.kok@versity.com>
Because we `-include src/kernelcompat.h` from the command line,
this header gets included before any of the kernel includes in
most .c and .h files. We should at least make sure we pull in
<fs> and <kernel> since they're required.
Signed-off-by: Auke Kok <auke.kok@versity.com>
The fence-and-reclaim test has a little function that runs after fencing
and recovery to make sure that all the mounts are operational again.
The main thing it does is re-use the same locks across a lot of files to
ensure that lock recovery didn't lose any locks that stop forward
progress.
But I also threw in a test of the committed_seq machinery, as a bit of
belt and suspenders. The problem is the test is racy. It samples the
seq after the write so the greatest seq it remembers can be after the
write and will not be committed by the other nodes' reads. It being less
than the committed_seq is a totally reasonable race.
Which explains why this test has been rarely failing since it was
written. There's no particular reason to test the committed_seq
machinery here, so we can just remove that racy test.
Signed-off-by: Zach Brown <zab@versity.com>
Server code that wants to dirty blocks by holding a commit won't be
allowed to until the current allocators for the server transaction have
enough space for the holder. As an active holder applies the commit the
allocators are refilled and the waiting holders will proceed.
But the current allocators can have no resources as the server starts
up. There will never be active holders to apply the commit and refill
the allocators. In this case all the holders will block indefinitely.
The fix is to trigger a server commit when a holder doesn't have room.
It used to be that commits were only triggered when apply callers were
waiting. We transfer some of that logic into a new 'committing' field
so that we can have commits in flight without apply callers waiting. We
add it to the server commit tracing.
While we're at it we clean up the logic that tests if a hold can
proceed. It used to be confusingly split across two functions that both
could sample the current allocator space remaining. This could lead to
weird cases where the first holder could use the second alloc remaining
call, not the one whose values were tested to see if the holder could
fit. Now each hold check only samples the allocators once.
And finally we fix a subtle case where the budget exceeded message can
spuriously trigger in the case where dirtying the freed list created a
new empty block after the holder recorded the amount of space in the
freed block.
Signed-off-by: Zach Brown <zab@versity.com>
Data preallocation attempts to allocate large aligned regions of
extents. It tried to fill the hole around a write offset that
didn't contain an extent. It missed the case where there can be
multiple extents between the start of the region and the hole.
It could try to overwrite these additional existing extents and writes
could return EINVAL.
We fix this by trimming the preallocation to start at the write offset
if there are any extents in the region before the write offset. The
data preallocation test output has to be updated now that allocation
extents won't grow towards the start of the region when there are
existing extents.
Signed-off-by: Zach Brown <zab@versity.com>
Log merge completions were spliced in one server commit. It's possible
to get enough completion work pending that it all can't be completed in
one server commit. Operations fail with ENOSPC and because these
changes can't be unwound cleanly the server asserts.
This allows the completion splicing to break the work up into multiple
commits.
Processing completions in multiple commits means that request creation
can observe the merge status in states that weren't possible before.
Splicing is careful to maintain an elevated nr_complete count while the
client can't get requests because the tree is rebalancing.
Signed-off-by: Zach Brown <zab@versity.com>
The move_blocks ioctl finds extents to move in the source file by
searching from the starting block offset of the region to move.
Logically, this is fine. After each extent item is deleted the next
search will find the next extent.
The problem is that deleted items still exist in the item cache. The
next iteration has to skip over all the deleted extents from the start
of the region. This is fine with large extents, but with heavily
fragmented extents this creates a huge amplification of the number of
items to traverse when moving the fragmented extents in a large file.
(It's not quite O(n^2)/2 for the total extents, since deleted items are
purged as we write out the dirty items in each transaction... but it's
still immense.)
The fix is to simply start searching for the next extent after the one
we just moved.
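In sketch form, the loop goes from rescanning the region to resuming
after the last moved extent (the helper and field names here are made
up for illustration):

  u64 pos = start;
  int ret = 0;

  while (pos < end) {
          ret = find_next_extent(inode, pos, end, &ext);
          if (ret < 0)            /* -ENOENT when no extents remain */
                  break;

          ret = move_extent(from, to, &ext);
          if (ret < 0)
                  break;

          /* resume after what we just moved, not back at start */
          pos = ext.start + ext.count;
  }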
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which exercises filling holes in prealloc regions when the
_contig_only prealloc option is not set.
Signed-off-by: Zach Brown <zab@versity.com>
If the _contig_only option isn't set then we try to preallocate aligned
regions of files. The initial implementation naively only allowed one
preallocation attempt in each aligned region. If it got a small
allocation that didn't fill the region then every future allocation
in the region would be a single block.
This changes every preallocation in the region to attempt to fill the
hole in the region that iblock fell in. It uses an extra extent search
(item cache search) to try and avoid thousands of single block
allocations.
Signed-off-by: Zach Brown <zab@versity.com>
The RCU hash table uses deferred work to resize the hash table. There's
a time during resize when hash table iteration will return EAGAIN until
resize makes more progress. During this time resize can perform
GFP_KERNEL allocations.
Our shrinker tries to iterate over its RCU hash table to find blocks to
reclaim. It tries to restart iteration if it gets EAGAIN on the
assumption that it will be usable again soon.
Combine the two and our shrinker can get stuck retrying iteration
indefinitely because it's shrinking on behalf of the hash table resizing
that is trying to allocate the next table before making iteration work
again. We have to stop shrinking in this case so that the resizing
caller can proceed.
Signed-off-by: Zach Brown <zab@versity.com>
Add an ioctl that gives the callers all entries that refer to an inode.
It's like a backwards readdir. It's a light bit of translation between
the internal _add_next_linkrefs() list of entries and the ioctl
interface of a buffer of entry structs.
Signed-off-by: Zach Brown <zab@versity.com>
Extend scoutfs_dir_add_next_linkref() to be able to return multiple
backrefs under the lock for each call and have it take an argument to
limit the number of backrefs that can be added and returned.
Its return code changes a bit in that it returns 1 on success instead of
0 so we have to be a little careful with callers who were expecting 0.
It still returns -ENOENT when no entries are found.
We break up its tracepoint into one that records each entry added and
one that records the result of each call.
This will be used by an ioctl to give callers just the entries that
point to an inode instead of assembling full paths from the root.
Signed-off-by: Zach Brown <zab@versity.com>
Update the quorum_heartbeat_timeout_ms test to also test the mount
option, not just updating the timeout via sysfs. This takes some
reworking as we have to avoid the active leader/server when setting the
timeout via the mount option. We also allow for a bit more slack around
comparing kernel sleeps and userspace wall clocks.
Signed-off-by: Zach Brown <zab@versity.com>
Mount option parsing runs early enough that the rest of the option
read/write serialization infrastructure isn't set up yet. The
quorum_heartbeat_timeout_ms mount option tried to use a helper that
updated the stored option but it wasn't initialized yet so it crashed.
The helper was really only to have the option validity test in one
place. It's reworked to only verify the option and the actual setting
is left to the callers.
Signed-off-by: Zach Brown <zab@versity.com>
If setting a sysfs option fails the bash write error is output. It
contains the script line number which can change over time, leading to
mismatched golden output failures if we use the output as an expected
indication of failure. Callers should test its rc and output
accordingly if they want the failure logged and compared.
Signed-off-by: Zach Brown <zab@versity.com>
Forced unmount is supposed to isolate the mount from the world. The
net.c TCP messaging returns errors when sending during forced unmount.
The quorum code has its own UDP messaging and wasn't taking forced
unmount into account.
This led to quorum still being able to send resignation messages to
other quorum peers during forced unmount, making it hard to test
heartbeat timeouts with forced unmount.
The quorum messaging is already unreliable so we can easily make it drop
messages during forced unmount. Now forced unmount more fully isolates
the quorum code and it becomes easier to test.
Signed-off-by: Zach Brown <zab@versity.com>
Add tracking and reporting of delays in sending or receiving quorum
heartbeat messages. We measure the time between back to back sends or
receives of heartbeat messages. We record these delays truncated down
to second granularity in the quorum sysfs status file. We log messages
to the console for each longest measured delay up to the maximum
configurable heartbeat timeout.
Signed-off-by: Zach Brown <zab@versity.com>
Add mount and sysfs options for changing the quorum heartbeat timeout.
This allows setting a longer delay in taking over for failed hosts that
has a greater chance of surviving temporary non-fatal delays.
We also double the existing default timeout to 10s which is still
reasonably responsive.
Signed-off-by: Zach Brown <zab@versity.com>
The quorum UDP socket allocation still allowed starting IO which can
trigger longer latencies trying to free memory. We change the flags to
prefer dipping into emergency pools and then failing rather than
blocking trying to satisfy an allocation. We'd much rather have a given
heartbeat attempt fail and have the opportunity to succeed at the next
interval rather than running the risk of blocking across multiple
intervals.
Signed-off-by: Zach Brown <zab@versity.com>
The quorum work was using the system workq. While that's mostly fine,
we can create a dedicated workqueue with the specific flags that we
need. The quorum work needs to run promptly to avoid fencing so we set
it to high priority.
Signed-off-by: Zach Brown <zab@versity.com>
In the quorum work loop some message receive actions extend the timeout
after the timeout expiration is checked. This is usually fine when the
work runs soon after the messages are received and before the timeout
expires. But under load the work might not schedule until long after
both the message has been received and the timeout has expired.
If the message was a heartbeat message then the wakeup delay would be
mistaken for lack of activity on the server and it would try to take
over for an otherwise active server.
This moves the extension of the heartbeat on message receive to before
the timeout is checked. In our case of a delayed heartbeat message it
would still find it in the recv queue and extend the timeout, avoiding
fencing an active server.
Signed-off-by: Zach Brown <zab@versity.com>
Add a command for writing a super block to a new data device after
reading the metadata device to ensure that there's no existing
data on the old data device.
Signed-off-by: Zach Brown <zab@versity.com>
Some tests had grown a bad pattern of making a mount point for the
scratch mount in the root /mnt directory. Change them to use a mount
point in their test's temp directory outside the testing fs.
Signed-off-by: Zach Brown <zab@versity.com>
Split the existing device_size() into get_device_size() and
limit_device_size(). An upcoming command wants to get the device size
without applying limiting policy.
Signed-off-by: Zach Brown <zab@versity.com>
We missed initializing sb->s_time_gran which controls how some parts of
the kernel truncate the granularity of nsec in timespec. Some paths
don't use it at all so time would be maintained at full precision. But
other paths, particularly setattr_copy() from userspace and
notify_change() from the kernel use it to truncate as times are set.
Setting s_time_gran to 1 maintains full nsec precision.
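The fix itself boils down to one assignment where we set up the super
block:

  /* keep full nanosecond precision when the vfs truncates timestamps */
  sb->s_time_gran = 1;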
Signed-off-by: Zach Brown <zab@versity.com>
The VFS performs a lot of checks on renames before calling the fs
method. We acquire locks and refresh inodes in the rename method so we
have to duplicate a lot of the vfs checks.
One of the checks involves loops with ancestors and subdirectories. We
missed the case where the root directory is the destination and doesn't
have any parent directories. The backref walker it calls returns
-ENOENT instead of 0 with an empty set of parents and that error bubbled
up to rename.
The fix is to notice when we're asking for ancestors of the one
directory that can't have ancestors and short circuit the test.
Signed-off-by: Zach Brown <zab@versity.com>
When a client no longer needs to append to a srch file, for whatever
reason, we move the reference from the log_trees item into a specific
srch file btree item in the server's srch file tracking btree.
Zeroing the log_trees item and inserting the server's btree item are
done in a server commit and should be written atomically.
But commit_log_trees had an error handling case that could leave the
newly inserted item dirty in memory without zeroing the srch file
reference in the existing log_trees item. Future attempts to rotate the
file reference, perhaps by retrying the commit or by reclaiming the
client's rid, would get EEXIST and fail.
This fixes the error handling path to ensure that we'll keep the dirty
srch file btree and log_trees item in sync. The desynced items can
still exist in the world so we'll tolerate getting EEXIST on insertion.
After enough time has passed, or if repair zeroed the duplicate
reference, we could remove this special case from insertion.
Signed-off-by: Zach Brown <zab@versity.com>
The move_blocks ioctl intends to only move extents whose bytes fall
inside i_size. This is easy except for a final extent that straddles an
i_size that isn't aligned to 4K data blocks.
The code that either checked for an extent being entirely past i_size or
for limiting the number of blocks to move by i_size clumsily compared
i_size offsets in bytes with extent counts in 4KB blocks. In just the
right circumstances, probably with the help of a byte length to move
that is much larger than i_size, the length calculation could result in
trying to move 0 blocks. Once this hit, the loop would keep finding that
extent and calculating 0 blocks to move and would be stuck.
We fix this by clamping the count of blocks in extents to move in terms
of byte offsets at the start of the loop. This gets rid of the extra
size checks and byte offset use in the loop. We also add a sanity check
to make sure that we can't get stuck if, say, corruption resulted in an
otherwise impossible zero length extent.
Signed-off-by: Zach Brown <zab@versity.com>
There were kernels that didn't apply the current umask to inode modes
created with O_TMPFILE without acls. Let's have a test running to make
sure that we're not surprised if we come across one.
Signed-off-by: Zach Brown <zab@versity.com>
We had a one-off test that was overly specific to staging from tmpfile.
This renames it to a more generic test where we can add more tests of
o_tmpfile in general.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we've removed its users we can remove the global saved copy of
the super block from scoutfs_sb_info.
Signed-off-by: Zach Brown <zab@versity.com>
As the server does its work its transactions modify a dirty super block
in memory. This used the global super block in scoutfs_sb_info which
was visible to everything, including the client. Move the dirty super
block over to the private server info so that only the server can see
it.
This is mostly boring storage motion but we do change that the quorum
code hands the server a static copy of the quorum config to use as it
starts up before it reads the most recent super block.
Signed-off-by: Zach Brown <zab@versity.com>
Refilling a client's data_avail is the only alloc_move call that doesn't
try and limit the number of blocks that it dirties. If it doesn't find
sufficiently large extents it can exhaust the server's alloc budget
without hitting the target. It'll try to dirty blocks and return a hard
error.
This changes that behaviour to allow returning 0 if it moved any
extents. Other callers can deal with partial progress as they already
limit the blocks they dirty. This will also return ENOSPC if it hadn't
moved anything just as the current code would.
The result is that a data fill might not hit the target. It
might take multiple commits to fill the data_avail btree.
Signed-off-by: Zach Brown <zab@versity.com>
The server's statfs request handler was intending to lock dirty
structures as they were walked to get sums used for statfs fields.
Other callers walk stable structures, though, so the summation calls had
grown iteration over other structures that the server didn't know it had
to lock.
This meant that the server was walking unlocked dirty structures as they
were being modified. The races are very tight, but it can result in
request handling errors that shut down connections and IO errors from
trying to read inconsistent refs as they were modified by the locked
writer.
We've built up infrastructure so the server can now walk stable
structures just like the other callers. It will no longer wander into
dirty blocks so it doesn't need to lock them and it will retry if its
walk of stale data crosses a broken reference.
Signed-off-by: Zach Brown <zab@versity.com>
Transition from manual checking for persistent ESTALE to the shared
helper that we just added. This should not change behavior.
Signed-off-by: Zach Brown <zab@versity.com>
Many readers had little implementations of the logic to decide to retry
stale reads with different refs or decide that they're persistent and
return hard errors. Let's move that into a small helper.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_forest_inode_count() assumed it was called with stable refs and
would always translate ESTALE to EIO. Change it so that it passes
ESTALE to the caller who is responsible for handling it.
The server will use this to retry reading from stable supers that it's
storing in memory.
Signed-off-by: Zach Brown <zab@versity.com>
The server has a mechanism for tracking the last stable roots used by
network rpcs. We expand it a bit to include the entire super so
that we can add users in the server which want the last full stable
super. We can still use the stable super to give out the stable
roots.
Signed-off-by: Zach Brown <zab@versity.com>
The quorum code was using the copy of the super block in the sb info for
its config. With that going away we make different users more carefully
reference the config. The quorum agent has a copy that it reads on
setup, the client rarely reads a copy when trying to connect, and the
server uses its super.
This is about data access isolation and should have no functional effect
other than to cause more super reads.
Signed-off-by: Zach Brown <zab@versity.com>
A few paths throughout the code get the fsid for the current mount by
using the copy of the super block that we store in the scoutfs_sb_info
for the mount. We'd like to remove the super block from the sbi and
it's cleaner to have a specific constant field for the fsid of the mount
which will not change.
Signed-off-by: Zach Brown <zab@versity.com>
When we truncate away from a partial block we need to zero its tail that
was past i_size and dirty it so that it's written.
We missed the typical vfs boilerplate of calling block_truncate_page
from setattr->set_size that does this. We need to be a little careful
to pass our file lock down to get_block and then queue the inode for
writeback so its written out with the transaction. This follows the
pattern in .write_end.
Signed-off-by: Zach Brown <zab@versity.com>
The d_prune_aliases in lock invalidation was thought to be safe because
the caller had an inode reference, surely it can't get into iput_final.
I missed the fundamental dcache pattern that dput can ascend through
parents and end up in inode eviction for entirely unrelated inodes.
It's very easy for this to deadlock, imagine if nothing else that the
inode invalidation is blocked on in dput->iput->evict->delete->lock is
itself in the list of locks to invalidate in the caller.
We fix this by always kicking off d_prune and dput into async work.
This increases the chance that inodes will still be referenced after
invalidation, preventing inline deletion. More deletions can be
deferred until the orphan scanner finds them. It should be rare,
though. We're still likely to put and drop invalidated inodes before a
writer gets around to removing the final unlink and asking us for the
omap that describes our cached inodes.
To perform the d_prune in work we make it a behavioural flag and make
our queued iputs a little more robust. We use much safer and more
understandable locking to cover the count and the new flags, and we put
the iputs in re-entrant work items on their own workqueue instead of one
work instance in the system_wq.
Signed-off-by: Zach Brown <zab@versity.com>
Add a quick test of the index items to make sure that rapid inode
updates don't create duplicate meta_seq items.
Signed-off-by: Zach Brown <zab@versity.com>
FS items are deleted by logging a deletion item that has a greater item
version than the item to delete. The versions are usually maintained by
the write_seq of the exclusive write lock that protects the item. Any
newer write hold will have a greater version than all previous write
holds so any items created under the lock will have a greater version than
all previous items under the lock. All deletion items will be merged
with the older item and both will be dropped.
This doesn't work for concurrent write-only locks. The write-only locks
match with each other so their write_seqs are assigned in the order
that they are granted. That grant order can be mismatched with item
creation order. We can get deletion items with lesser versions than the
item to delete because of when each creation's write-only lock was
granted.
Write only locks are used to maintain consistency between concurrent
writers and readers, not between writers. Consistency between writers
is done with another primary write lock. For example, if you're writing
seq items to a write-only region you need to have the write lock on the
inode for the specific seq item you're writing.
The fix, then, is to pass these primary write locks down to the item
cache so that it can choose an item version that is the greatest amongst
the transaction, the write-only lock, and the primary lock. This now
ensures that the primary lock's increasing write_seq makes it down to
the item, bringing item version ordering in line with exclusive holds of
the primary lock.
All of this to fix concurrent inode updates sometimes leaving behind
duplicate meta_seq items because old seq item deletions ended up with
older versions than the seq item they tried to delete, nullifying the
deletion.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we've removed the hash and pos from the dentry_info struct we
can do without it. We can store the refresh gen in the d_fsdata pointer
(sorry, 64bit only for now... could allocate if we needed to.) This gets
rid of the lock coverage spinlocks and puts a bit more pressure on lock
lookup, which we already know we have to make more efficient. We can
get rid of all the dentry info allocation calls.
Now that we're not setting d_op as we allocate d_fsdata we put the ops
on the super block so that we get d_revalidate called on all our
dentries.
We also are a bit more precise about the errors we can return from
verification. If the target of a dentry link changes then we return
-ESTALE rather than silently performing the caller's operation on
another inode.
Signed-off-by: Zach Brown <zab@versity.com>
Add a lock call to get the current refresh_gen of a held lock. If the
lock doesn't exist or isn't readable then we return 0. This can be used
to track lock coverage of structures without the overhead and lifetime
binding of the lock coverage struct.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_sysfs_exit() is called during error handling in module init.
When scoutfs is built-in (so, never.) the __exit section won't be
loaded. Remove the __exit annotation so it's always available to be
called.
Signed-off-by: Zach Brown <zab@versity.com>
The dentry cache life cycles are far too crazy to rely on d_fsdata being
kept in sync with the rest of the dentry fields. Callers can do all
sorts of crazy things with dentries. Only unlink and rename need these
fields and those operations are already so expensive that item lookups
to get the current actual hash and pos are lost in the noise.
Signed-off-by: Zach Brown <zab@versity.com>
The test shell helpers for saving and restoring mount options were
trying to put each mount's option value in an array. It meant to build
the array key by concatenating the option name and the mount number.
But it didn't isolate the option "name" variable when evaluating it,
instead always evaluating "name_" to nothing and building keys for all
options that only contained the mount index. This then broke when tests
attempted to save and restore multiple options.
Signed-off-by: Zach Brown <zab@versity.com>
Make mount options for the size of preallocation and whether or not it
should be restricted to extending writes. Disabling the default
restriction to streaming writes lets it preallocate in aligned regions
of the preallocation size when they contain no extents.
Signed-off-by: Zach Brown <zab@versity.com>
The orphan_scan_delay_ms option setting code mistakenly set the default
before testing the option for -1 (not yet set) to discover if
multiple options had been set. This made any attempt to set it fail.
Initialize the option to -1 so the first set succeeds and apply the
default if we don't set the value.
Signed-off-by: Zach Brown <zab@versity.com>
The simple-xattr-unit test had a helper that failed by exiting with
non-zero instead of emitting a message. Let's make it a bit easier to
see what's going on.
Signed-off-by: Zach Brown <zab@versity.com>
Add support for the POSIX ACLs as described in acl(5). Support is
enabled by default and can be explicitly enabled or disabled with the
acl or noacl mount options, respectively.
Signed-off-by: Zach Brown <zab@versity.com>
The upcoming acl support wants to be able to get and set xattrs from
callers who already have cluster locks and transactions. We refactor
the existing xattr get and set calls into locked and unlocked variants.
It's mostly boring code motion with the unfortunate situation that the
caller needs to acquire the totl cluster lock before holding a
transaction before calling into the xattr code. We push the parsing of
the tags to the caller of the locked get and set so that they can know
to acquire the right lock. (The acl callers will never be setting
scoutfs. prefixed xattrs so they will never have tags.)
Signed-off-by: Zach Brown <zab@versity.com>
Move to the use of the array of xattr_handler structs on the super to
dispatch set and get from generic_ based on the xattr prefix. This
will make it easier to add handling of the pseudo "system." ACL xattrs.
Signed-off-by: Zach Brown <zab@versity.com>
try_delete_inode_items() is responsible for making sure that it's safe
to delete an inode's persistent items. One of the things it has to
check is that there isn't another deletion attempt on the inode in this
mount. It sets a bit in lock data while it's working and backs off if
the bit is already set.
Unfortunately it was always clearing this bit as it exited, regardless
of whether it set it or not. This would let the next attempt perform
the deletion again before the working task had finished. This was often
not a problem because background orphan scanning is the only source of
regular concurrent deletion attempts.
But it's a big problem if a deletion attempt takes a very long time. It
gives enough time for an orphan scan attempt to clear the bit then try
again and clobber whoever is performing the very slow deletion.
I hit this in a test that built files with an absurd number of
fragmented extents. The second concurrent orphan attempt was able to
proceed with deletion and performed a bunch of duplicate data extent
frees and caused corruption.
The fix is to only clear the bit if we set it. Now all concurrent
attempts will back off until the first task is done.
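The back-off pattern looks like this sketch, with hypothetical names
for the lock data struct, the bit, and the deletion helper:

  #include <linux/bitops.h>
  #include <linux/fs.h>

  struct lock_data {                      /* hypothetical lock data */
          unsigned long flags;
  };
  #define LDATA_DELETING 0

  int delete_inode_items(struct inode *inode);    /* hypothetical helper */

  static int try_delete(struct lock_data *ldata, struct inode *inode)
  {
          int ret;

          /* another attempt is already in flight, back off */
          if (test_and_set_bit(LDATA_DELETING, &ldata->flags))
                  return 0;

          ret = delete_inode_items(inode);

          /* only the task that set the bit gets to clear it */
          clear_bit(LDATA_DELETING, &ldata->flags);
          return ret;
  }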
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which gives the server a transaction with a free list block
that contains blknos that each dirty an individual btree block in the
global data free extent btree.
Signed-off-by: Zach Brown <zab@versity.com>
Recently scoutfs_alloc_move() was changed to try and limit the amount of
metadata blocks it could allocate or free. The intent was to stop
concurrent holders of a transaction from fully consuming the available
allocator for the transaction.
The limiting logic was a bit off. It stopped when the allocator had the
caller's limit remaining, not when it had consumed the caller's limit.
This is overly permissive and could still allow concurrent callers to
consume the allocator. It was also triggering warning messages when a
call consumed more than its allowed budget while holding a transaction.
Unfortunately, we don't have per-caller tracking of allocator resource
consumption. The best we can do is sample the allocators as we start
and return if they drop by the caller's limit. This is overly
conservative in that it accounts any consumption during concurrent
callers to all callers.
This isn't perfect but it makes the failure case less likely and the
impact shouldn't be significant. We don't often have a lot of
concurrency and the limits are larger than callers will typically
consume.
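The sampling looks roughly like this sketch (helper names made up, not
the real allocator API):

  /* sample the allocators up front and stop once they've dropped by
   * the caller's budget, rather than when that much is still left */
  u64 start = meta_alloc_remaining(alloc);
  int ret = 0;

  while (have_more_to_move(args)) {
          if (start - meta_alloc_remaining(alloc) >= budget)
                  break;          /* we've consumed our share */

          ret = move_one_extent(args);
          if (ret < 0)
                  break;
  }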
Signed-off-by: Zach Brown <zab@versity.com>
Add scoutfs_alloc_meta_low_since() to test if the metadata avail or
freed resources have been used by a given amount since a previous
snapshot.
Signed-off-by: Zach Brown <zab@versity.com>
As _get_log_trees() in the server prepares the log_trees item for the
client's commit, it moves all the freed data extents from the log_trees
item into core data extent allocator btree items. If the freed blocks
are very fragmented then it can exceed a commit's metadata allocation
budget trying to dirty blocks in the free data extent btree.
The fix is to move the freed data extents in multiple commits. First we
move a limited number in the main commit that does all the rest of the
work preparing the commit. Then we try to move the remaining freed
extents in multiple additional commits.
Signed-off-by: Zach Brown <zab@versity.com>
Callers who send to specific client connections can get -ENOTCONN if
their client has gone away. We forgot to free the send tracking struct
in that case.
Signed-off-by: Zach Brown <zab@versity.com>
The omap code keeps track of rids that are connected to the server. It
only freed the tracked rids as the server told it that rids were being
removed. But that removal only happened as clients were evicted. If
the server shut down it'd leave the old rid entries around. They'd be
leaked as the mount was unmounted and could linger and create duplicate
entries if the server started back up and the same clients reconnected.
The fix is to free the tracking rids as the server shuts down. They'll
be rebuilt as clients reconnect if the server restarts.
Signed-off-by: Zach Brown <zab@versity.com>
If we return an error from .fill_super without having set sb->s_root
then the vfs won't call our put_super. Our fill_super is careful to
call put_super so that it can tear down partial state, but we weren't
doing this with a few very early errors in fill_super. This tripped
leak detection when we weren't freeing the sbi when returning errors
from bad option parsing.
Signed-off-by: Zach Brown <zab@versity.com>
Clients don't use the net conn info and specified that it has 0 size.
The net layer would try and allocate a zero size region which returns
the magic ZERO_SIZE_PTR, which it would then later try and free. While
that works, it's a little goofy. We can avoid the allocation when the
size is 0. The pointer will remain null which kfree also accepts.
Signed-off-by: Zach Brown <zab@versity.com>
Add an option to skip printing structures that are likely to be so huge
that the print output becomes completely unwieldy on large systems.
Signed-off-by: Zach Brown <zab@versity.com>
Like a lot of places in the server, get_log_trees() doesn't have the
tools it needs to safely unwind partial changes in the face of an error.
In the worst case, it can have moved extents from the mount's log_trees
item into the server's main data allocator. The dirty data allocator
reference is in the super block so it can be written later. The dirty
log_trees reference is on stack, though, so it will be thrown away on
error. This ends up duplicating extents in the persistent structures
because they're written in the new dirty allocator but still remain in
the unwritten source log_trees allocator.
This change makes it harder for that to happen. It dirties the
log_trees item and always tries to update it so that the dirty blocks are
consistent if they're later written out. If we do get an error updating
the item we throw an assertion. It's not great, but it matches other
similar circumstances in other parts of the server.
Signed-off-by: Zach Brown <zab@versity.com>
We were setting sk_allocation on the quorum UDP sockets to prevent
entering reclaim while using sockets but we missed setting it on the
regular messaging TCP sockets. This could create deadlocks where the
sending socket could enter scoutfs reclaim and wait for server messages
while holding the socket lock, preventing the receive thread from
receiving messages while it blocked on the socket lock.
The fix is to prevent entering the FS to reclaim during socket
allocations.
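The fix is a one line socket setting when the TCP sockets are created,
along these lines (GFP_NOFS shown as the likely flag):

  /* keep network stack allocations from recursing into fs reclaim */
  sock->sk->sk_allocation = GFP_NOFS;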
Signed-off-by: Zach Brown <zab@versity.com>
Client log_trees allocator btrees can build up quite a number of
extents. In the right circumstances fragmented extents can have to
dirty a large number of paths to leaf blocks in the core allocator
btrees. It might not be possible to dirty all the blocks necessary to
move all the extents in one commit.
This reworks the extent motion so that it can be performed in multiple
commits if the meta allocator for the commit runs out while it is moving
extents. It's a minimal fix with as little disruption to the ordering
of commits and locking as possible. It simply bubbles up an error when
the allocators run out and retries functions that can already be retried
in other circumstances.
Signed-off-by: Zach Brown <zab@versity.com>
We're seeing allocator motion during get_log_trees dirty quite a lot of
blocks, which makes sense. Let's continue to up the budget. If we
still need significantly larger budgets we'll want to look into capping
the dirty block use of the allocator extent movers which will mean
changing callers to support partial progress.
Signed-off-by: Zach Brown <zab@versity.com>
When a new server starts up it rebuilds its view of all the granted
locks with lock recovery messages. Clients give the server their
granted lock modes which the server then uses to process all the resent
lock requests from clients.
The lock invalidation work in the client is responsible for
transitioning an old granted mode to a new invalidated mode from an
unsolicited message from the server. It has to process any client state
that'd be incompatible with the new mode (write dirty data, drop
caches). While it is doing this work, as an implementation short cut,
it sets the granted lock mode to the new mode so that users that are
compatible with the new invalidated mode can use the lock while it's
being invalidated. Picture readers reading data while a write lock is
invalidating and writing dirty data.
A problem arises when a lock recover request is processed during lock
invalidation. The client lock recover request handler sends a response
with the current granted mode. The server takes this to mean that the
invalidation is done but the client invalidation worker might still be
writing data, dropping caches, etc. The server will allow the state
machine to advance which can send grants to pending client requests
which believed that the invalidation was done.
All of this can lead to a grant response handler in the client tripping
the assertion that there can not be cached items that were incompatible
with the old mode in a grant from the server. Invalidation might still
be invalidating caches. Hitting this bug is very rare and requires a
new server starting up while a client has both a request outstanding and
an invalidation being processed when the lock recover request arrives.
The fix is to record the old mode during invalidation and send that in
lock recover responses. This can lead the lock server to resend
invalidation requests to the client. The client already safely handles
duplicate invalidation requests from other failover cases.
Signed-off-by: Zach Brown <zab@versity.com>
The change to only allocate a buffer for the first xattr item with
kmalloc instead of the entire logical xattr payload with vmalloc
included a regression for getting large xattrs.
getxattr used to copy the entire payload into the large vmalloc so it
could unlock just after get_next_xattr. The change to only getting the
first item buffer added a call to copy from the rest of the items but
those copies weren't covered by the locks. This would often work
because the lock pointer still pointed to a valid lock. But if the lock
was invalidated then the mode would no longer be compatible and
_item_lookup would return EINVAL.
The fix is to extend xattr_rwsem and cluster lock coverage to the rest
of the function body, which includes the value item copies. This also
makes getxattr's lock coverage consistent with setxattr and listxattr
which might reduce the risk of similar mistakes in the future.
Signed-off-by: Zach Brown <zab@versity.com>
After we've merged a log btree back into the main fs tree we kick off
work to free all its blocks. This would fully fill the transaction's
free blocks list before stopping to apply the commit.
Consuming the entire free list makes it hard to have concurrent holders
of a commit who also want to free things. This changes the log btree
block freeing to limit itself to a fraction of the budget that each
holder gets. That coarse limit avoids us having to precisely account
for the allocations and frees while modifying the freeing item while
still freeing many blocks per commit.
Signed-off-by: Zach Brown <zab@versity.com>
Server commits use an allocator that has a limited number of available
metadata blocks and entries in a list for freed blocks. The allocator
is refilled between commits. Holders can't fully consume the allocator
during the commit and that tended to work out because server commit
holders commit before sending responses. We'd tend to commit frequently
enough that we'd get a chance to refill the allocators before they were
consumed.
But there was no mechanism to ensure that this would be the case.
Enough concurrent server holders were able to fully consume the
allocators before committing. This causes scoutfs_meta_alloc and _free
to return errors, leading the server to fail in the worst cases.
This changes the server commit tracking to use more robust structures
which limit the number of concurrent holders so that the allocators
aren't exhausted. The commit_users struct stops holders from making
progress once the allocators don't have room for more holders. It also
lets us stop future holders from making progress once the commit work
has been queued. The previous cute use of a rwsem didn't allow for
either of these protections.
We don't have precise tracking of each holder's allocation consumption
so we don't try and reserve blocks for each holder. Instead we have a
maximum consumption per holder and make sure that all the holders can't
consume the allocators if they all use their full limit.
All of this requires the holding code paths to be well behaved and not
use more than the per-hold limit. We add some debugging code to print
the stacks of holders that were active when the total holder limit was
exceeded. This is the motivation for having state in the holders. We
can record some data at the time their hold started that'll make it a
little easier to track down which of the holders exceeded their limit.
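A minimal sketch of the idea, with hypothetical names and a made-up
per-holder budget rather than the real commit_users implementation:
holders wait until admitting them can't exhaust the allocator, and no
new holders are admitted once the commit has been queued:

    #include <linux/spinlock.h>
    #include <linux/wait.h>

    #define SKETCH_HOLDER_BUDGET 64 /* assumed per-holder dirty block limit */

    struct sketch_commit_users {
            spinlock_t lock;
            wait_queue_head_t waitq;
            unsigned int nr_holders;
            unsigned int avail_blocks;      /* refilled between commits */
            bool commit_queued;             /* stop new holders once queued */
    };

    static bool sketch_can_hold(struct sketch_commit_users *cu)
    {
            bool ok;

            spin_lock(&cu->lock);
            ok = !cu->commit_queued &&
                 (cu->nr_holders + 1) * SKETCH_HOLDER_BUDGET <= cu->avail_blocks;
            if (ok)
                    cu->nr_holders++;
            spin_unlock(&cu->lock);
            return ok;
    }

    /* block until this holder can be admitted without risking exhaustion */
    static void sketch_hold_commit(struct sketch_commit_users *cu)
    {
            wait_event(cu->waitq, sketch_can_hold(cu));
    }

    static void sketch_release_commit(struct sketch_commit_users *cu)
    {
            spin_lock(&cu->lock);
            cu->nr_holders--;
            spin_unlock(&cu->lock);
            wake_up(&cu->waitq);
    }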
Signed-off-by: Zach Brown <zab@versity.com>
Add helper function to give the caller the number of blocks remaining in
the first list block that's used for meta allocation and freeing.
Signed-off-by: Zach Brown <zab@versity.com>
There was a brief time where we exported the ability to hold and apply
commits outside of the main server code. That wasn't a great idea, and
the few users have since been reworked to not require directly
manipulating server transactions, so we can reduce risk and make these
functions private again.
Signed-off-by: Zach Brown <zab@versity.com>
Quorum members will try to elect a new leader when they don't receive
heartbeats from the currently elected leader. This timeout is short to
encourage restoring service promptly.
Heartbeats are sent from the quorum worker thread and are delayed while
it synchronously starts up the server, which includes fencing previous
servers. If fence requests take too long then heartbeats will be
delayed long enough for remaining quorum members to elect a new leader
while the recently elected server is still busy fencing.
To fix this we decouple server startup from the quorum main thread.
Server starting and stopping becomes asynchronous so the quorum thread
is able to send heartbeats while the server work is off starting up and
fencing.
The server used to call into quorum to clear a flag as it exited. We
remove that mechanism and have the server maintain a running status that
quorum can query.
We add some state to the quorum work to track the asynchronous state of
the server. This lets the quorum protocol change roles immediately as
needed while remembering that there is a server running that needs to be
acted on.
The server used to also call into quorum to update quorum blocks. This
is a read-modify-write operation that has to be serialized. Now that we
have both the server starting up and the quorum work running, they can't
both perform these read-modify-write cycles. Instead we have the
quorum work own all the block updates and it queries the server status
to determine when it should update the quorum block to indicate that the
server has fenced or shut down.
Signed-off-by: Zach Brown <zab@versity.com>
The fence script we use for our single node multi-mount tests only knows
how to fence by using forced unmount to destroy a mount. As of now, the
tests only generate failing nodes that need to be fenced by using forced
unmount as well. This results in the awkward situation where the
testing fence script doesn't have anything to do because the mount is
already gone.
When the test fence script has nothing to do we might not notice if it
isn't run. This adds explicit verification to the fencing tests that
the script was really run. It adds per-invocation logging to the fence
script and the test makes sure that it was run.
While we're at it, we take the opportunity to tidy up some of the
scripting around this. We use a sysfs file with the data device
major:minor numbers so that the fencing script can find and unmount
mounts without having to ask them for their rid. They may not be
operational.
Signed-off-by: Zach Brown <zab@versity.com>
Extended attribute values can be larger than a reasonable maximum size
for our btree items so we store xattrs in many items. The first pass at
this code used vmalloc to make it relatively easy to work with a
contiguous buffer that was cut up into multiple items.
The problem, of course, is that vmalloc() is expensive. Well, the
problem is that I always forget just how expensive it can be and use it
when I shouldn't. We had loads on high cpu count machines that were
catastrophically cpu bound on all the contentious work that vmalloc does
to maintain a coherent global address space.
This removes the use of vmalloc and only allocates a small buffer for
the first compound item. The later items directly reference regions of
the value buffer rather than copying it to and from the large intermediate
vmalloced buffer.
Signed-off-by: Zach Brown <zab@versity.com>
The t_server_nr and t_first_client_nr helpers iterated over all the fs
numbers examining their quorum/is_leader files, but clients don't have a
quorum/ directory. This was causing spurious output in tests that were
looking for a server, didn't find one in the first quorum fs numbers, and
made it down into the clients.
Give them a helper that returns 0 for being a leader if the quorum/ dir
doesn't exist.
Signed-off-by: Zach Brown <zab@versity.com>
We were seeing rare test failures where it looked like is_leader wasn't
set for any of the mounts. The test that couldn't find a set is_leader
file had just performed some mounts so we know that a server was up and
processing requests.
The quorum task wasn't updating the status that's shown in sysfs and
debugfs until after the server started up. This opened the race where
the server was able to serve mount requests and have the test run to
find no is_leader file set before the quorum task was able to update the
stats and make its election visible.
This updates the quorum task to make its status visible more often,
typically before it does something that will take a while. The
is_leader will now be visible before the server is started so the test
will always see the file after server starts up and lets mounts finish.
Signed-off-by: Zach Brown <zab@versity.com>
The final iput of an inode can delete items in cluster locked
transactions. It was never safe to call iput within locked
transactions but we never saw the problem. Recent work on inode
deletion raised the issue again.
This makes sure that we always perform iput outside of locked
transactions. The only interesting change is making scoutfs_new_inode()
return the allocated inode on error so that the caller can put the inode
after releasing the transaction.
Signed-off-by: Zach Brown <zab@versity.com>
During forced unmount commits abort due to errors and the open
transaction is left in a dirty state that is cleaned up by
scoutfs_shutdown_trans(). It cleans all the dirty blocks in the commit
write context with scoutfs_block_writer_forget_all(), but it forgot to
call scoutfs_alloc_prepare_commit() to put the block references held by
the allocator.
This was generating leaked block warnings during testing that used
forced unmount. It wouldn't affect regular operations.
Signed-off-by: Zach Brown <zab@versity.com>
We were seeing a number of problems coming from races that allowed tasks
in a mount to try and concurrently delete an inode's items. We could
see error messages indicating that deletion failed with -ENOENT, we
could see users of inodes behave erratically as inodes were deleted from
under them, and we could see eventual server errors trying to merge
overlapping data extents which were "freed" (add to transaction lists)
multiple times.
This commit addresses the problems in one relatively large patch. While
we could mechanically split up the fixes, they're all interdependent and
splitting them up (bisecting through them) could cause failures that
would be devilishly hard to diagnose.
First we stop allowing multiple cached vfs inodes. This was initially
done to avoid deadlocks between lock invalidation and final inode
deletion. We add a specific lookup that's used by invalidation which
ignores any inodes which are in I_NEW or I_FREEING. Now that iget can
wait on inode flags we call iget5_locked before acquiring the cluster
lock. This ensures that we can only have one cached vfs inode for a
given inode number in evict_inode trying to delete.
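As a rough illustration of the ordering (hypothetical helper names, not
the scoutfs functions), the lookup resolves the single cached vfs inode
first and only then takes the cluster lock to fill a new inode:

    #include <linux/fs.h>

    /* hypothetical: reads the inode item under the cluster lock */
    static int sketch_read_inode_locked(struct inode *inode);

    static int sketch_iget_test(struct inode *inode, void *arg)
    {
            return inode->i_ino == *(u64 *)arg;
    }

    static int sketch_iget_set(struct inode *inode, void *arg)
    {
            inode->i_ino = *(u64 *)arg;
            return 0;
    }

    static struct inode *sketch_iget(struct super_block *sb, u64 ino)
    {
            struct inode *inode;
            int ret;

            /* waits on I_NEW/I_FREEING so only one cached inode can exist */
            inode = iget5_locked(sb, ino, sketch_iget_test, sketch_iget_set,
                                 &ino);
            if (!inode)
                    return ERR_PTR(-ENOMEM);
            if (!(inode->i_state & I_NEW))
                    return inode;

            ret = sketch_read_inode_locked(inode);
            if (ret) {
                    iget_failed(inode);
                    return ERR_PTR(ret);
            }
            unlock_new_inode(inode);
            return inode;
    }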
Now that we can only have one cached inode, we can rework the omap
tracking to use _set and _clear instead of _inc and _put. This isn't
strictly necessary but is a simplification and lets us issue warnings if
we see that we ever try to set an inode number's bit on behalf of
multiple cached inodes. We also add a _test helper.
Orphan scanning would try to perform deletion by instantiating a cached
inode and then putting it, triggering eviction and final deletion. This
was an attempt to simplify concurrency but ended up causing more
problems. It no longer tries to interact with inode cache at all and
attempts to safely delete inode items directly. It uses the omap test
to determine that it should skip an already cached inode.
We had attempted to forbid opening inodes by handle if they had an nlink
of 0. Since we allowed multiple cached inodes for an inode number this
was to prevent adding cached inodes that were being deleted. It was
only performing the check on newly allocated inodes, though, so it could
get a reference to the cached inode that the scanner had inserted for
deleting. We're choosing to keep restricting opening by handle to only
linked inodes so we also check existing inodes after they're refreshed.
We're left with a task evicting an inode and the orphan scanner racing
to delete an inode's items. We move the work of determining if it's safe
to delete out of scoutfs_omap_should_delete() and into
try_delete_inode_items() which is called directly from eviction and
scanning. This is mostly code motion but we do make three critical
changes. We get rid of the goofy concurrent deletion detection in
delete_inode_items() and instead use a bit in the lock data to serialize
multiple attempts to delete an inode's items. We no longer assume that
the inode must still be around because we were called from evict and
specifically check that the inode item is still present for deleting.
Finally, we use the omap test to discover that we shouldn't delete an
inode that is locally cached (and would not be included in the omap
response). We do all this under the inode write lock to serialize
between mounts.
Signed-off-by: Zach Brown <zab@versity.com>
We're seeing some trouble with very specific race conditions. This
updates the orphan-inodes test to try and force final inode deletion
during eviction, the orphan scan worker, and opening inodes by handle to
all race and hit an inode number at the same time.
Signed-off-by: Zach Brown <zab@versity.com>
The orphan inode test often uses a trick where it runs sleep in the
background with a file as stdin as a means of holding files open. This
can very rarely fail if the background sleep happens to be first
scheduled after the unlink of the file it's reading as stdin. A small
delay gives it a chance to run and open the file before it's unlinked.
It's still possible to lose the race, of course, but so far this has
been good enough.
Signed-off-by: Zach Brown <zab@versity.com>
Add a mount option to set the delay between scans of the orphan list.
The sysfs file for the option is writable so this option can be set at
run time.
Signed-off-by: Zach Brown <zab@versity.com>
The mount options code is some of the oldest in the tree and is weirdly
split between options.c and super.c. This cleans up the options code,
moves it all to options.c, and reworks it to be more in line with the
modern subsystem convention of storing state in an allocated info
struct.
Rather than putting the parsed options in the super for everyone to
directly reference we put them in the private options info struct and
add a locked read function. This will let us add sysfs files to change
mount options while safely serializing with readers.
All the users of mount options that used to directly reference the
parsed struct now call the read function to get a copy. They're all
small local changes except for quorum which saves a static copy of the
quorum slot number because it references it in so many places and relies
on it not changing.
Finally, we remove the empty debugfs "options" directory.
Signed-off-by: Zach Brown <zab@versity.com>
The inode caller of omap was manually calculating the group and bits,
which isn't fantastic. Export the little helper to calculate it so
the inode caller doesn't have to.
Signed-off-by: Zach Brown <zab@versity.com>
You can almost feel the editing mistake that brought the delay
calculation into the conditional and forgot to remove the initial
calculation at declaration.
Signed-off-by: Zach Brown <zab@versity.com>
We were seeing ABBA deadlocks on the dio_count wait and extent_sem
between fallocate and reads. It turns out that fallocate got lock
ordering wrong.
This brings fallocate in line with the rest of the adherents to the lock
hierarchy. Most importantly, the extent_sem is used after the
dio_count. While we're at it we bring the i_mutex down to just before
the cluster lock for consistency.
Signed-off-by: Zach Brown <zab@versity.com>
The man pages and inline help blurbs for the recently added format
version and quorum config commands incorrectly described the device
arguments which are needed.
Signed-off-by: Zach Brown <zab@versity.com>
The server's log merge complete request handler was considering the
absence of the client's original request as a failure. Unfortunately,
this case is possible if a previous server successfully completed the
client's request but the response was lost because it stopped for
whatever reason.
The failure was being logged as a hard error to the console which was
causing tests to occasionally fail during server failover that hit just
as the log merge completion was being processed.
The error was being sent to the client as a response, we just need to
silence the message for these expected but rare errors.
We also fix the related case where the server printed the even more
harsh WARN_ON if there was a next original request but it wasn't the one
we expected to find from our requesting client.
Signed-off-by: Zach Brown <zab@versity.com>
The net _cancel_request call hasn't been used or tested in approximately
a bazillion years. Best to get rid of it and have to add and test it
if we think we need it again.
Signed-off-by: Zach Brown <zab@versity.com>
Our open by handle functions didn't care that the inode wasn't
referenced and let tasks open unlinked inodes by number. This
interacted badly with the inode deletion mechanisms which required that
inodes couldn't be cached on other nodes after the transaction which
removed their final reference.
If a task did accidentally open a file by inode while it was being
deleted it could see the inode items in an inconsistent state and return
very confusing errors that look like corruption.
The fix is to give the handle iget callers a flag to tell iget to only
get the inode if it has a positive nlink. If iget sees that the inode
has been unlinked it returns -ENOENT.
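A hedged sketch of the check, with a hypothetical flag name and helper;
the real iget plumbing in scoutfs is more involved:

    #include <linux/fs.h>

    #define SKETCH_IGET_LINKED (1 << 0) /* hypothetical flag */

    /* hypothetical plain iget for the sketch */
    static struct inode *sketch_iget(struct super_block *sb, u64 ino);

    static struct inode *sketch_handle_iget(struct super_block *sb, u64 ino,
                                            int flags)
    {
            struct inode *inode = sketch_iget(sb, ino);

            if (!IS_ERR(inode) && (flags & SKETCH_IGET_LINKED) &&
                inode->i_nlink == 0) {
                    iput(inode);
                    inode = ERR_PTR(-ENOENT);
            }
            return inode;
    }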
Signed-off-by: Zach Brown <zab@versity.com>
The orphan inodes test needs to test if inode items exist as it
manipulates inodes. It used to open the inode by a handle but we're
fixing that to not allow opening unlinked files. The
get-allocated-inos ioctl tests for the presence of items owned by the
inode regardless of any other vfs state so we can use it to verify what
scoutfs is doing as we work with the vfs inodes.
Signed-off-by: Zach Brown <zab@versity.com>
Add the get-allocated-inos scoutfs command which wraps the
GET_ALLOCATED_INOS ioctl. It'll be used by tests to find items
associated with an inode instead of trying to open the inode by a
constructed handle after it was unlinked.
Signed-off-by: Zach Brown <zab@versity.com>
Add an ioctl that can give some indication of inodes that have inode
items. We're exposing this for tests that verify the handling of open
unlinked inodes.
Signed-off-by: Zach Brown <zab@versity.com>
We're adding an ioctl that wants to build inode item keys so let's
export the private inode key initializer.
Signed-off-by: Zach Brown <zab@versity.com>
This reverts commit 61ad844891.
This fix was trying to ensure that lock recovery response handling
can't run after farewell calls reclaim_rid() by jumping through a bunch
of hoops to tear down locking state as the first farewell request
arrived.
It introduced a very slippery use after free during shutdown. It appears
that it was from drain_workqueue() previously being able to stop
chaining work. That's no longer possible when you're trying to drain
two workqueues that can queue work in each other.
We found a much clearer way to solve the problem so we can toss this.
Signed-off-by: Zach Brown <zab@versity.com>
We recently found that the server can send a farewell response and try
to tear down a client's lock state while it was still in lock recovery
with the client. The lock recovery response could add a lock
for the client after farewell's reclaim_rid() had thought the client was
gone forever and tore down its locks.
This left a lock in the lock server that wasn't associated with any
clients and so could never be invalidated. Attempts to acquire
conflicting locks with it would hang forever, which we saw as hangs in
testing with lots of unmounting.
We tried to fix it by serializing incoming request handling and
forcefully clobbering the client's lock state as we first got
the farewell request. That went very badly.
This takes another approach of trying to explicitly wait for lock
recovery to finish before sending farewell responses. It's more in
line with the overall pattern of having the client be up and functional
until farewell tears it down.
With this in place we can revert the other attempted fix that was
causing so many problems.
Signed-off-by: Zach Brown <zab@versity.com>
The local-force-unmount fenced fencing script only works when all the
mounts are on the local host and it uses force unmount. It is only
used in our specific local testing scripts. Packaging it as an example
led people to believe that it could be used to cobble together a
multi-host testing network, however temporary.
Move it from being in utils and packaged to being private to our tests so
that it doesn't present an attractive nuisance.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_recov_shutdown() tried to move the recovery tracking structs off
the shared list and into a private list so they could be freed. But
then it went and walked the now empty shared list to free entries. It
should walk the private list.
This would leak a small amount of memory in the rare cases where the
server was shutdown while recovery was still pending.
Signed-off-by: Zach Brown <zab@versity.com>
Back when we added the get/commit transaction sequence numbers to the
log_trees we forgot to add them to the scoutfs print output.
Signed-off-by: Zach Brown <zab@versity.com>
The server's little set_shutting_down() helper accidentally used a read
barrier instead of a write barrier.
Signed-off-by: Zach Brown <zab@versity.com>
Tear down client lock server state and set a boolean so that
there is no race between client/server processing lock recovery
at the same time as farewell.
Currently there is a bug where if server and clients are unmounted
then work from the client is processed out of order, which leaves
behind a server_lock for a RID that no longer exists.
In order to fix this we need to serialize SCOUTFS_NET_CMD_FAREWELL
in recv_worker.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
This unit test reproduces the race we have between
client and server doing lock recovery while farewell
is processed.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
The max_seq and active reader mechanisms in the item cache stop readers
from reading old items and inserting them in the cache after newer items
have been reclaimed by memory pressure. The max_seq field in the pages
must reflect the greatest seq of the items in the page so that reclaim
knows that the page contains items newer than old readers and must not
be removed.
We update the page max_seq as items are inserted or as they're dirtied
in the page. There's an additional subtle effect that the max_seq can
also protect items which have been erased. Deletion items are erased
from the pages as a commit completes. The max_seq in that page will
still protect it from being reclaimed even though no items have that seq
value themselves.
That protection fails if the range of keys containing the erased item is
moved to another page with a lower max_seq. The item mover only
updated the destination page's max_seq for each item that was moved. It
missed that the empty space between the items might have a larger
max_seq from an erased item. We don't know where the erased item is so
we have to assume that a larger max_seq in the source page must be set
on the destination page.
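The fix itself is small; a sketch with illustrative structure and helper
names:

    #include <linux/types.h>

    struct sketch_page {
            u64 max_seq;
            /* ... keys, items, rbtree links ... */
    };

    /* hypothetical: moves a key range's items from src into dst */
    static void sketch_move_key_range(struct sketch_page *dst,
                                      struct sketch_page *src);

    static void sketch_move_items(struct sketch_page *dst,
                                  struct sketch_page *src)
    {
            sketch_move_key_range(dst, src);

            /* erased items may have raised src's max_seq without leaving
             * an item behind, so carry the larger max_seq forward */
            if (src->max_seq > dst->max_seq)
                    dst->max_seq = src->max_seq;
    }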
This could explain very rare item cache corruption where nodes were
seeing deleted directory entry items reappearing. It would take a
specific sequence of events involving large directories with an isolated
removal, a delayed item cache reader, a commit, and then enough
insertions to split the page all happening in precisely the wrong
sequence.
Signed-off-by: Zach Brown <zab@versity.com>
Add a command to change the quorum config which starts by only supporting
updates to the super block while the file system is offline.
Signed-off-by: Zach Brown <zab@versity.com>
We're adding a command to change the quorum config which updates its
version number. Let's make the version a little more visible and start
it at the more humane 1.
Signed-off-by: Zach Brown <zab@versity.com>
Move the code that checks that the super is in use from
change-format-version into its own function in util.c. We'll use it in
an upcoming command to change the quorum config.
Signed-off-by: Zach Brown <zab@versity.com>
Move functions for printing and validating the quorum config from mkfs.c
to quorum.c so that they can be used in an upcoming command to change
the quorum config.
Signed-off-by: Zach Brown <zab@versity.com>
The change from --quorum-count to --quorum-slot forgot to update a
mention of the option in an error message in mkfs when it wasn't
provided.
Signed-off-by: Zach Brown <zab@versity.com>
We want to enable the test case for:
generic/023 - tests that renameat2 syscall exists
generic/024 - renameat2 with NOREPLACE flag
Move both generic/025 and 078 to the no run list so that
we can test the [not run] output when flags that we don't
support are passed.
Example output:
generic/025 [not run] fs doesn't support RENAME_EXCHANGE
generic/078 [not run] fs doesn't support RENAME_WHITEOUT
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
The goal of the test case is to have two mount points
with two async calls made to renameat2. This allows
two calls to race calling renameat2 with RENAME_NOREPLACE.
When this happens you expect one of them to fail with
-EEXIST, which validates that the new flag works.
Essentially one of the two calls to renameat2 should hit the
new RENAME_NOREPLACE code and exit early.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
Support the generic renameat2 syscall then add support for the
RENAME_NOREPLACE flag. To support the flag we need to check
the existence of both entries and return -EEXIST.
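A minimal sketch of the flag handling (the locking and dirent item
checks that scoutfs actually performs are omitted, and the helper is
hypothetical):

    #include <linux/fs.h>   /* RENAME_NOREPLACE */

    /* hypothetical: the existing rename implementation */
    static int sketch_do_rename(struct inode *old_dir, struct dentry *old_dentry,
                                struct inode *new_dir, struct dentry *new_dentry);

    static int sketch_rename2(struct inode *old_dir, struct dentry *old_dentry,
                              struct inode *new_dir, struct dentry *new_dentry,
                              unsigned int flags)
    {
            /* refuse flags we don't implement */
            if (flags & ~RENAME_NOREPLACE)
                    return -EINVAL;

            /* the real code verifies both entries under its locks; the
             * dentry check here is only illustrative */
            if ((flags & RENAME_NOREPLACE) && new_dentry->d_inode)
                    return -EEXIST;

            return sketch_do_rename(old_dir, old_dentry, new_dir, new_dentry);
    }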
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
The current test case attempts to create state to read
by calling setattr and getattr in an attempt to force block
cache reads. It so happens that this does not always force
cache block reads, which in rare cases causes this test case
to fail.
The new test case removes all the extra bouncing around of mount
points and we just directly call scoutfs df which will walk
everyone's allocators to summarize the block counts, which is
guaranteed to exist. Therefore, we do not have to create any sort
of state prior to trying to force a read.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
Let's try maintaining release notes in a file in the repo. There are
lots of schemes for associating commits and release notes and this seems
like the simplest place to start.
Signed-off-by: Zach Brown <zab@versity.com>
[85164.299902] scoutfs f.8c19e1.r.facf2e error: server error writing btree blocks: -5
[144308.589596] scoutfs f.c9397a.r.8ae97f error: server error -5 freeing merged btree blocks: looping commit del/upd freeing item
[174646.005596] scoutfs f.15f0b3.r.1862df error: server error -5 freeing merged btree blocks: final commit del/upd freeing item
[146653.893676] scoutfs f.c7f188.r.34e23c error: server error writing super block: -5
[273218.436675] scoutfs f.dd4157.r.f0da7e error: server failed to bind to 127.0.0.1:42002, err -98
[376832.542823] scoutfs f.049985.r.1a8987 error: error -5 reading quorum block 19 to update event 1 term 3
The above is an example output that will be filtered out
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
We do not want to short-circuit btree_walk early, it is
better to handle the force unmount on the caller side.
Therefore, remove this from btree_walk.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
If there is a forced unmount we call _net_shutdown from
umount_begin in order to tell the server and clients to
break out of pending network replies. We then add the call
to abort within the shutdown_worker since most of the mucking
with send and resend queues is done there.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
Only BUG_ON for inconsistency and don't do it for commit errors
or failure to delete the original request.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
In scoutfs_server_worker we do not properly handle the cleanup
of _block_writer_init and alloc_init. On error paths, if either
of those contexts is initialized, we can call alloc_prepare_commit
or writer_forget_all to ensure we drop the block references and
clear the dirty status of all the blocks in the writer.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
Remove a bunch of old language from the README. We're no longer in the
early days of the open release so we can remove all the alpha quality
language. And the system has grown sufficiently that the repo README
isn't a great place for a small getting started doc. There just isn't
room to do the subject justice. If we need such a thing for the
project we'll put it as a first order doc in the repo that'd be
distributed along with everything else.
Signed-off-by: Zach Brown <zab@versity.com>
In order to safely free blocks we need to first dirty
the work. This allows for resume later on without a double
free.
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
As we update xattrs we need to update any existing old items with the
contents of the new xattr that uses those items. The loop that updated
existing items only took the old xattr size into account and assumed
that the new xattr would use those items. If the new xattr size used
fewer parts then the attempt to update all the old parts that weren't
covered by the new size would go very wrong. The length of the region
in the new xattr would be negative so it'd try to use the max part
length. Worse, it'd copy these max part length regions outside the
input new xattr buffer. Typically this would land in addressable memory
and copy garbage into the unused old items before they were later
deleted.
However, it could access so far outside the input buffer that it could
cross a page boundary into inaccessible memory and fault. We saw this in
the field while trying to repeatedly incrementally shrink a large xattr.
This fixes the loop that updates overlapping items between the new and
old xattr to start with the smaller of their two item counts. Now it
will only update items that are actually used by both xattrs and will
only safely access the new xattr input buffer.
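In rough terms the corrected loop bound looks like this (structure and
helper names are illustrative, not the scoutfs code):

    #include <linux/kernel.h>

    struct sketch_xattr {
            unsigned int nr_items;  /* items used by this xattr's value */
            /* ... name, value buffer ... */
    };

    /* hypothetical: rewrites item i from the new xattr's buffer */
    static int sketch_update_item(struct sketch_xattr *new, unsigned int i);

    static int sketch_update_shared_items(struct sketch_xattr *old,
                                          struct sketch_xattr *new)
    {
            /* only items used by both xattrs are updated in place; old
             * items past the new count are deleted separately */
            unsigned int nr = min(old->nr_items, new->nr_items);
            unsigned int i;
            int ret;

            for (i = 0; i < nr; i++) {
                    ret = sketch_update_item(new, i);
                    if (ret)
                            return ret;
            }
            return 0;
    }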
Signed-off-by: Zach Brown <zab@versity.com>
From now on if we make incompatible changes to structures or messages
then we update the format version and ensure that the code can deal with
all the versions in its supported range.
Signed-off-by: Zach Brown <zab@versity.com>
We had arbitrarily chosen an ioctl code 's' to match scoutfs, but of
course that conflicts. This chooses an arbitrary hole in the upstream
reservations from ioctl-number.rst.
Then we make sure to have our _IO[WR] usage reflect the direction of the
final type parameter. For most of our ioctls userspace is writing an
argument parameter to perform an operation (that often has side
effects). Most of our ioctls should be _IOW because userspace is
writing the parameter, not _IOR (though the operation tends to read
state). A few ioctls copy output back to userspace in the parameter so
they're _IOWR.
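To illustrate the convention with made-up numbers and types (the real
scoutfs definitions and magic value aren't shown here):

    #include <linux/ioctl.h>
    #include <linux/types.h>

    #define SKETCH_IOCTL_MAGIC 0xbf /* stand-in for the chosen free code */

    struct sketch_ioctl_args {
            __u64 ino;
            __u64 flags;
    };

    /* userspace writes the argument struct, so _IOW, even though the
     * operation mostly reads fs state */
    #define SKETCH_IOC_QUERY _IOW(SKETCH_IOCTL_MAGIC, 1, struct sketch_ioctl_args)

    /* results are also copied back through the struct, so _IOWR */
    #define SKETCH_IOC_QUERY_RESULT _IOWR(SKETCH_IOCTL_MAGIC, 2, struct sketch_ioctl_args)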
Signed-off-by: Zach Brown <zab@versity.com>
The idea here was that we'd expand the size of the struct and
valid_bytes would tell the kernel which fields were present in
userspace's struct. That doesn't combine well with the ioctl convention
of having the size of the type baked into the ioctl number. We'll
remove this to make the world less surprising. If we expand the
interface we'd add additional ioctls and types.
Signed-off-by: Zach Brown <zab@versity.com>
While checking in on some other code I noticed that we have lingering
allocator and writer contexts over in the lock server. The lock server
used to manage its own client state and recovery. We've since moved
that into shared recov functionality in the server. The lock server no
longer manipulates its own btrees and doesn't need these unused
references to the server's contexts.
Signed-off-by: Zach Brown <zab@versity.com>
Introduce some space between the current key zone and type values so
that we have room to insert new keys amongst the current keys if we need
to. A spacing of 4 is arbitrarily chosen as small enough to still give
us intuitively small numbers while leaving enough room to grow, given
how long it's taken to come to the current number of keys.
Signed-off-by: Zach Brown <zab@versity.com>
The code that updates inode index items on behalf of indexed fields uses
an array to track changes in the fields. Those array indexes were the
raw key type values.
We're about to introduce some sparse space between all the key values so
that we have some room to add keys in the future at arbitrary sort
positions amongst the previous keys.
We don't want the inode index item updating code to keep using raw types
as array indices when the type values are no longer small dense values.
We introduce indirection from type values to array indices to keep the
tracking array in the in-memory inode struct small.
Signed-off-by: Zach Brown <zab@versity.com>
As we freeze the format let's remove this old experiment to try and make
it easier to line up traces from different mounts. It never worked
particularly well and I think it could be argued that trying to merge
trace logs on different machines isn't a particularly meaningful thing
to do. You care about how they interact not what they were doing at
the same time with their independent resources.
Signed-off-by: Zach Brown <zab@versity.com>
There are a few bad corner cases in the state machine that governs how
client transactions are opened, modified, and committed.
The worst problem is on the server side. All server request handlers
need to cope with resent requests without causing bad side effects.
Both get_log_trees and commit_log_trees would try to fully process
resent requests. _get_log_trees() looks safe because it works with the
log_trees that was stored previously. _commit_log_trees() is not safe
because it can rotate out the srch log file referenced by the sent
log_trees every time it's processed. This could create extra srch
entries which would delete the first instance of entries. Worse still,
by injecting the same block structure into the system multiple times it
ends up causing multiple frees of the blocks that make up the srch file.
The client side problems are slightly different, but related. There
aren't strong constraints which guarantee that we'll only send a commit
request after a get request succeeds. In crazy circumstances the
commit request in the write worker could come before the first get in
mount succeeds. Far worse is that we can send multiple commit requests
for one transaction if it changes as we get errors during multiple
queued write attempts, particularly if we get errors from get_log_trees
after having successfully committed.
This hardens all these paths to ensure a strict sequence of
get_log_trees, transaction modification, and commit_log_trees.
On the server we add *_trans_seq fields to the log_trees struct so that
both get_ and commit_ can see that they've already prepared a commit to
send or have already committed the incoming commit, respectively. We
can use the get_trans_seq field as the trans_seq of the open transaction
and get rid of the entire separate mechanism we used to have for
tracking open trans seqs in the clients. We can get the same info by
walking the log_trees and looking at their *_trans_seq fields.
In the client we have the write worker immediately return success if
mount hasn't opened the first transaction. Then we don't have the
worker return to allow further modification until it has gotten success
from get_log_trees.
Signed-off-by: Zach Brown <zab@versity.com>
The transaction code was built a million years ago and put all of its
data in our core super block info. This finally moves the rest of the
private transaction fields out of the core super block and into the
transaction info. This makes it clear that it's private to trans.c and
brings it in line with the rest of the subsystems in the tree.
Signed-off-by: Zach Brown <zab@versity.com>
Add tracking in the alloc functions that the server uses to move extents
between allocator structures on behalf of client mounts.
Signed-off-by: Zach Brown <zab@versity.com>
The srch compaction worker will wait a bit before attempting another
compaction as it finishes a compaction that failed.
Unfortunately, it clobbered the errors it got during compaction with the
result of sending the commit to the server with the error flag. If the
commit is successful then it thinks there were no errors and immediately
re-queues itself to try the next compaction.
If the error is persistent, as it was with a bug in how we merged log
files with a single page's worth of entries, then we can spin
indefinitely getting an error, clobbering the error with the commit
result, and immediately queueing our work to do it all over again.
This fix preserves existing errors when getting the result of the commit
and will correctly back off. If we get persistent merge errors at least
they won't consume significant resources. We add a counter for the
commit errors so we can get some visibility if this happens.
Signed-off-by: Zach Brown <zab@versity.com>
The k-way merge function at the core of the srch file entry merging had
some bookkeeping math (calculating number of parents) that couldn't
handle merging a single incoming entry stream, so it threw a warning and
returned an error. When refusing to handle that case, it was assuming
that the caller was trying to merge down a single log file, which doesn't
make any sense.
But in the case of multiple small unsorted logs we can absolutely end up
with their entries stored in one sorted page. We have one sorted input
page that's merging multiple log files. The merge function is also the
path that writes to the output file so we absolutely need to handle this
case.
We more carefully calculate the number of parents, clamping it to one
parent when we'd otherwise get "(roundup(1) -> 1) - 1 == 0" when
calculating the number of parents from the number of inputs. We can
relax the warning and error so they only refuse to merge zero inputs.
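One plausible shape of the corrected bookkeeping, assuming a
tournament-style merge where the parent count comes from rounding the
input count up to a power of two (the real formula may differ):

    #include <linux/log2.h>
    #include <linux/kernel.h>

    static unsigned long sketch_nr_parents(unsigned long nr_inputs)
    {
            /* one input would give (roundup(1) -> 1) - 1 == 0 parents,
             * so clamp to a single parent */
            unsigned long nr = roundup_pow_of_two(nr_inputs) - 1;

            return max(nr, 1UL);
    }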
The test triggers this case by putting single search entries in the log
files for mounts and unmounting them to force rotation of the mount log
files into mergeable rotated log files.
Signed-off-by: Zach Brown <zab@versity.com>
Our statfs implementation had clients reading the super block and using
the next free inode number to guess how many inodes there might be. We
are very aggressive with giving directories private pools of inode
numbers to allocate from. They're often not used at all, creating huge
gaps in allocated inode numbers. The ratio of the average number of
allocations per directory to the batch size given to each directory is
the factor that the used inode count can be off by.
Now that we have a precise count of active inodes we can use that to
return accurate counts of inodes in the files fields in the statfs
struct. We still don't have static inode allocation so the fields don't
make a ton of sense. We fake the total and free count to give a
reasonable estimate of the total files that doesn't change while the
free count is calculated from the correct count of used inodes.
While we're at it we add a request to get the summed fields that the
server can cheaply discover in cache rather than having the client
always perform read IOs.
Signed-off-by: Zach Brown <zab@versity.com>
Add an alloc_foreach variant which uses the caller's super to walk the
allocators rather than always reading it off the device.
Signed-off-by: Zach Brown <zab@versity.com>
Add a count of used inodes to the super block and a change in the inode
count to the log_trees struct. Client transactions track the change in
inode count as they create and delete inodes. The log_trees delta is
added to the count in the super as finalized log_trees are deleted.
Signed-off-by: Zach Brown <zab@versity.com>
We had previously started on a relatively simple notion of an
interoperability version which wasn't quite right. This fleshes out
support for a more functional format version. The super blocks have a
single version that defines behaviour of the running system. The code
supports a range of versions and we add some initial interfaces for
updating the version while the system is offline. All of this together
should let us safely change the underlying format over time.
Signed-off-by: Zach Brown <zab@versity.com>
Add a write_nr field to the quorum block header which is incremented
with every write. Each event also gets a write_nr field that is set to
the incremented value from the header. This gives us a history of the
order of event updates that isn't sensitive to misconfigured time.
Signed-off-by: Zach Brown <zab@versity.com>
We're adding another command that does block IO so move some block
reading and writing functions out of mkfs. We also grow a few function
variants and call the write_sync variant from mkfs instead of having it
manually sync.
Signed-off-by: Zach Brown <zab@versity.com>
The code that shows the note sections as files uses the section size to
define the size of the notes payload. We don't need to null terminate
the strings to define their lengths. Doing so puts a null in the notes
file which isn't appreciated by many readers.
Signed-off-by: Zach Brown <zab@versity.com>
The test harness might as well use all cpus when building. It's
reasonably safe to assume both that the test systems are otherwise idle
and that the build is likely to succeed.
Signed-off-by: Zach Brown <zab@versity.com>
TCP keepalive probes only work when the connection is idle. They're not
sent when there's unacked send data being retransmitted. If the server
fails while we're retransmitting we don't break the connection and try
to elect and connect to a new server until the very long default
connection timeouts or the server comes back and the stale connection is
aborted.
We can set TCP_USER_TIMEOUT to break an unresponsive connection when
there's written data. It changes the behavior of the keepalive probes
so we rework them a bit to clearly apply our timeout consistently
between the two mechanisms.
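A hedged sketch of setting the option from kernel code, assuming a
kernel old enough to still have kernel_setsockopt() (newer kernels have
dedicated helpers instead):

    #include <linux/net.h>
    #include <net/tcp.h>

    /* bound how long unacked transmitted data can sit on the connection
     * before it is torn down; the timeout is in milliseconds */
    static int sketch_set_user_timeout(struct socket *sock, unsigned int ms)
    {
            int optval = ms;

            return kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
                                     (char *)&optval, sizeof(optval));
    }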
Signed-off-by: Zach Brown <zab@versity.com>
As the server comes up it needs to fence any previous servers before it
assumes exclusive access to the device. If fencing fails it can leave
fence requests behind. The error path for these very early failures
didn't shut down fencing so we'd have lingering fence requests span the
life cycle of server startup and shutdown. The next time the server
starts up in this mount it can try to create the fence request again,
get an error because a lingering one already exists, and immediately
shut down.
The result is that fencing errors that hit that initial attempt during
server startup can become persistent fencing errors for the lifetime of
that mount, preventing it from ever successfully starting the server.
Moving the fence stop call so it hits all exiting error paths
consistently cleans up fence requests and avoids this problem. The next server
instance will get a chance to process the fence request again. It might
well hit the same error, but at least it gets a chance.
Signed-off-by: Zach Brown <zab@versity.com>
The current script gets stuck in an infinite loop when the test
suite is started with 1 mount point. This is due to the advancement
part of the script in which it advances the ops for each mount.
The current while loop checks for when the op_mnt wraps by checking if
it equals 0. But the problem is we set each of the op_mnts to 0 during
the advancement, so when it wraps it still equates to 0, so it is an
infinite loop. Therefore, the fix is to check at the end of the loop
if the last op's mount number wrapped. If so, just break out.
Signed-off-by: Bryant Duffy-Ly <bduffyly@versity.com>
In some of the allocation paths there are goto statements
that end up calling kfree(). That is fine, but in cases
where the pointer is not initially set to NULL we
might have undefined behavior. kfree() on a NULL pointer
does nothing, so essentially these changes should not
change behavior, but they clarify the code path.
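A small illustration of the pattern (not the actual functions touched by
this change):

    #include <linux/slab.h>
    #include <linux/errno.h>

    static int sketch_alloc_pair(char **a_ret, char **b_ret)
    {
            char *a = NULL; /* NULL so the error path can always kfree() */
            char *b = NULL;
            int ret = -ENOMEM;

            a = kmalloc(64, GFP_KERNEL);
            if (!a)
                    goto out;
            b = kmalloc(64, GFP_KERNEL);
            if (!b)
                    goto out;

            *a_ret = a;
            *b_ret = b;
            return 0;
    out:
            kfree(a); /* kfree(NULL) is a no-op */
            kfree(b);
            return ret;
    }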
Signed-off-by: Bryant Duffy-Ly <bduffyly@versity.com>
Unfortunately, we're back in kernels that don't yet have d_op->d_init.
We allocate our dentry info manually as we're given dentries. The
recent verification work forgot to consistently make sure the info was
allocated before using it. Fix that up, and while we're at it be a bit
more robust in how we check to see that it's been initialized without
grabbing the d_lock.
Signed-off-by: Zach Brown <zab@versity.com>
This adds i_version to our inode and maintains it as we allocate, load,
modify, and store inodes. We set the flag in the superblock so
in-kernel users can use i_version to see changes in our inodes.
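In outline, and hedged because the exact helpers vary across the kernel
versions we build against, the change amounts to advertising the feature
on the super block and bumping the counter when an inode is updated:

    #include <linux/fs.h>
    #include <linux/iversion.h> /* on newer kernels */

    static void sketch_advertise_iversion(struct super_block *sb)
    {
            sb->s_flags |= SB_I_VERSION; /* MS_I_VERSION on older kernels */
    }

    static void sketch_inode_updated(struct inode *inode)
    {
            /* older kernels bump inode->i_version directly under i_lock */
            inode_inc_iversion(inode);
    }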
Signed-off-by: Zach Brown <zab@versity.com>
More recent gcc notices that ret in delete_files can be undefined if nr
is 0 while missing that we won't call delete_files in that case. Seems
worth fixing, regardless.
Signed-off-by: Zach Brown <zab@versity.com>
Add a quick test to make sure that create is validating stale dentries
before deciding if it should create or return -EEXIST.
Signed-off-by: Zach Brown <zab@versity.com>
Add the .totl. xattr tag. When the tag is set the end of the name
specifies a total name with 3 encoded u64s separated by dots. The value
of the xattr is a u64 that is added to the named total. An ioctl is
added to read the totals.
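A hedged userspace illustration of the intended use; the exact xattr
name prefix, the dotted-u64 naming, and the value encoding shown here
are assumptions about the interface rather than its definitive form:

    #include <stdio.h>
    #include <sys/xattr.h>

    /* add 'amount' to the total identified by three dotted u64s in the
     * tag; the name and textual value encoding are assumptions */
    static int sketch_add_to_total(const char *path, unsigned long long amount)
    {
            char value[32];
            int len;

            len = snprintf(value, sizeof(value), "%llu", amount);
            return setxattr(path, "scoutfs.totl.1.2.3", value, len, 0);
    }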
Signed-off-by: Zach Brown <zab@versity.com>
The fs log btrees have values that start with a header that stores the
item's seq and flags. There's a lot of sketchy code that manipulates
the value header as items are passed around.
This adds the seq and flags as core item fields in the btree. They're
only set by the interfaces that are used to store fs items: _insert_list
and _merge. The rest of the btree items that use the main interface
don't work with the fields.
This was done to help delta items discover when logged items have been
merged before the finalized log btrees are deleted and the code ends up
being quite a bit cleaner.
Signed-off-by: Zach Brown <zab@versity.com>
Add an inode creation time field. It's created for all new inodes.
It's visible to stat_more. setattr_more can set it during
restore.
Signed-off-by: Zach Brown <zab@versity.com>
Our dir methods were trusting dentry args. The vfs code paths use
i_mutex to protect dentries across revalidate or lookup and method
calls. But that doesn't protect methods running in other mounts.
Multiple nodes can interleave the initial lookup or revalidate then
actual method call.
Rename got this right. It is very paranoid about verifying inputs after
acquiring all the locks it needs.
We extend this pattern to the rest of the methods that need to use the
mapping of name to inode (and our hash and pos) in dentries. Once we
acquire the parent dir lock we verify that the dentry is still current,
returning -EEXIST or -ENOENT as appropriate.
Along these lines, we tighten up dentry info correctness a bit by
updating our dentry info (recording lock coverage and hash/pos) for
negative dentries produced by lookup or as the result of unlink.
Signed-off-by: Zach Brown <zab@versity.com>
Client lock invalidation handling was very strict about not receiving
duplicate invalidation requests from the server because it could only
track one pending request. The promise to only send one invalidate at a
time is made by one server, it can't be enforced across server failover.
Particularly because invalidation processing can have to do quite a lot
of work with the server as it tears down state associated with the lock.
We fix this by recording and processing each individual incoming
invalidation request on the lock.
The code that handled reordering of incoming grant responses and
invalidation requests waited for the lock's mode to match the old mode
in the invalidation request before proceeding. That would have
prevented duplicate invalidation requests from making forward progress.
To fix this we make lock client receive processing synchronous instead
of going through async work which can reorder. Now grant responses are
processed as they're received and will always be resolved before all the
invalidation requests are queued and processed in order.
Signed-off-by: Zach Brown <zab@versity.com>
The forest reader reads items from the fs_root and all log btrees and
gives them to the caller who tracks them to resolve version differences.
The reads can run into stale blocks which have been overwritten. The
forest reader was implementing the retry under the item state in the
caller. This can corrupt items that are only seen first in an old fs
root before a merge and then only seen in the fs_root after a merge. In
this case the item won't have any versioning and the existing version
from the old fs_root is preferred. This is particularly bad when the
new version was deleted -- in that case we have no metadata which would
tell us to drop the old item that was read from the old fs_root.
This is fixed by pushing the retry up to callers who wipe the item state
before each retry. Now each set of items is related to a single
snapshot of the fs_root and logs at one point in time.
I haven't seen definitive evidence of this happening in practice. I
found this problem after putting on my craziest thinking toque and
auditing the code for places where we could lose item updates.
Signed-off-by: Zach Brown <zab@versity.com>
Btree merging attempted to build an rbtree of the input roots with only
one version of an item present in the rbtree at a time. It really
messed this up by completely dropping an input root when a root with a
newer version of its item tried to take its place in the rbtree. What
it should have done is advance to the next item in the older root, which
itself could have required advancing some other older root. Dropping
the root entirely is catastrophically wrong because it hides the rest of
the items in the root from merging. This has been manifesting as
occasional mysterious item loss during tests where memory pressure, item
update patterns, and merging all lined up just so.
This fixes the problem by more clearly keeping the next item in each
root in the rbtree. We sort by newest to oldest version so that once
we merge the most recent version of an item it's easy to skip all the
older versions of the items in the next rbtree entries for the
rest of the input roots.
While we're at it we work with references to the static cached input
btree blocks. The old code was a first pass that used an expensive
btree walk per item and copied the value payload.
Signed-off-by: Zach Brown <zab@versity.com>
When the xattr inode searches fail the test will eventually fail when the
output differs, but that could take a while. Have it fail much sooner
so that we can have tighter debugging iterations and trace ring buffer
contents that are likely to be a lot closer to the first failure.
Signed-off-by: Zach Brown <zab@versity.com>
The current orphan scan uses the forest_next_hint to look for candidate
orphan items to delete. It doesn't skip deleted items and checks the
forest of log btrees so it'd return hints for every single item that
existed in all the log btrees across the system. And we call the hint
per-item.
When the system is deleting a lot of files we end up generating a huge
load where all mounts are constantly getting the btree roots from the
server, reading all the newest log btree blocks, finding deleted orphan
items for inodes that have already been deleted, and moving on to the
next deleted orphan item.
The fix is to use a read-only traversal of only one version of the fs
root for all the items in one scan. This avoids all the deleted orphan
items that exist in the log btrees which will disappear when they're
merged. It lets the item iteration happen in a single read-only cached
btree instead of constantly reading in the most recently written root
block of every log btree.
The result is an enormous speedup of large deletions. I don't want to
describe exactly how enormous.
Signed-off-by: Zach Brown <zab@versity.com>
We can be performing final deletion as inodes are evicted during
unmount. We have to keep full locking, transactions, and networking up
and running for the evict_inodes() call in generic_shutdown_super().
Unfortunately, this means that workers can be using inode references
during evict_inodes() which prevents them from being evicted. Those
workers can then remain running as we tear down the system, causing
crashes and deadlocks as the final iputs try to use resources that have
been destroyed.
The fix is to first properly stop orphan scanning, which can instantiate
new cached inodes, before the call to kill_block_super ends up trying
to evict all inodes. Then we just need to wait for any pending iput and
invalidate work to finish and perform the final iput, which will always
evict because generic_shutdown_super has cleared MS_ACTIVE.
Signed-off-by: Zach Brown <zab@versity.com>
Add some simple tracking of message counts for each lock in the lock
server so that we can start to see where conflicts may be happening in a
running system.
Signed-off-by: Zach Brown <zab@versity.com>
Add a quick helper that can be used to avoid doing work if we know that
we're already shutting down. This can be a single coarser indicator
than adding functions to each subsystem to track that we're shutting
down.
Signed-off-by: Zach Brown <zab@versity.com>
Currently the first inode number that can be allocated directly follows
the root inode. This means the first batch of allocated inodes are in
the same lock group as the root inode.
The root inode is a bit special. It is always hot as absolute path
lookups and inode-to-path resolution always read directory entries from
the root.
Let's try aligning the first free inode number to the next inode lock
group boundary. This will stop work in those inodes from necessarily
conflicting with work in the root inode.
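The adjustment is tiny; a sketch with an assumed lock group size:

    #include <linux/kernel.h>
    #include <linux/types.h>

    #define SKETCH_INO_LOCK_GROUP_NR 1024ULL /* assumed inodes per lock group */

    /* start allocation at the next lock group boundary past the root inode */
    static u64 sketch_first_free_ino(u64 root_ino)
    {
            return round_up(root_ino + 1, SKETCH_INO_LOCK_GROUP_NR);
    }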
Signed-off-by: Zach Brown <zab@versity.com>
We had some logic to try and delay lock invalidation while the lock was
still actively in use. This was trying to reduce the cost of
pathological lock conflict cases but it had some severe fairness
problems.
It was first introduced to deal with bad patterns in userspace that no
longer exist and it was built on top of the LSM transaction machinery
that also no longer exists. It hasn't aged well.
Instead of introducing invalidation latency in the hopes that it leads
to more batched work, which it can't always, let's aim more towards
reducing latency in all parts of the write-invalidate-read path and
also aim towards reducing contention in the first place.
Signed-off-by: Zach Brown <zab@versity.com>
We have a problem where items can appear to go backwards in time because
of the way we chose which log btrees to finalize and merge.
Because we don't have versions in items in the fs_root, and even might
not have items at all if they were deleted, we always assume items in
log btrees are newer than items in the fs root.
This creates the requirement that we can't merge a log btree if it has
items that are also present in older versions in other log btrees which
are not being merged. The unmerged old item in the log btree would take
precedence over the newer merged item in the fs root.
We weren't enforcing this requirement at all. We used the max_item_seq
to ensure that all items were older than the current stable seq but that
says nothing about the relationship between older items in the finalized
and active log btrees. Nothing at all stops an active btree from having
an old version of a newer item that is present in another mount's
finalized log btree.
To reliably fix this we create a strict item seq discontinuity between
all the finalized merge inputs and all the active log btrees. Once any
log btree is naturally finalized the server forces all the clients to
group up and finalize all their open log btrees. A merge operation can
then safely operate on all the finalized trees before any new trees are
given to clients who would start using increasing item seqs.
Signed-off-by: Zach Brown <zab@versity.com>
Add a command for the server to request that clients commit their open
transaction. This will be used to create groups of finalized log
btrees for consistent merging.
Signed-off-by: Zach Brown <zab@versity.com>
We were checking that quorum_slot_nr was within the range of possible
slots allowed by the format as it was parsed. We weren't checking that
it referenced a configured slot. Make sure, and give a nice error
message that shows the configured slots.
Signed-off-by: Zach Brown <zab@versity.com>
During rough forced unmount testing we saw a seemingly mysterious
concurrent election. It could be explained if mounts coming up don't
start with the same term. Let's try having mounts initialize their term
to the greatest of all the terms they can see in the quorum blocks.
This will prevent the situation where some new quorum actors with
greater terms start out ignoring all the messages from others.
Signed-off-by: Zach Brown <zab@versity.com>
Nothing interesting here, just a minor convenience to use test and set
instead of testing and then setting.
Signed-off-by: Zach Brown <zab@versity.com>
The server doesn't give us much to go on when it gets an error handling
requests to work with log trees from the client. This adds a lot of
specific error messages so we can get a better understanding of
failures.
Signed-off-by: Zach Brown <zab@versity.com>
We were trusting the rid in the log trees struct that the client sent.
Compare it to our recorded rid on the connection and fail if the client
sent the wrong rid.
Signed-off-by: Zach Brown <zab@versity.com>
The locking protocol only allows one outstanding invalidation request
for a lock at a time. The client invalidation state is a bit hairy and
involves removing the lock from the invalidation list while it is being
processed which includes sending the response. This means that another
request can arrive while the lock is not on the invalidation list. We
have fields in the lock to record another incoming request which puts
the lock back on the list.
But the invalidation work wasn't always queued again in this case. It
*looks* like the incoming request path would queue the work, but by
definition the lock isn't on the invalidation list during this race. If
it's the only lock in play then the invalidation list will be empty and
the work won't be queued. The lock can get stuck with a pending
invalidation if nothing else kicks the invalidation worker. We saw this
in testing when the root inode lock group missed the wakeup.
The fix is to have the work requeue itself after putting the lock back
on the invalidation list when it notices that another request came in.
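The end of the worker's per-lock processing now looks roughly like this
sketch, with the struct and field names invented for illustration:
    /* called at the end of invalidation processing, names assumed */
    static void finish_lock_invalidation(struct lock_info *linfo,
                                         struct client_lock *lck)
    {
            spin_lock(&linfo->lock);
            if (lck->request_pending) {
                    /* another request arrived while we were processing */
                    lck->request_pending = false;
                    list_add_tail(&lck->inv_entry, &linfo->inv_list);
                    /* the list may have been empty, so requeue ourselves */
                    queue_work(linfo->workq, &linfo->inv_work);
            }
            spin_unlock(&linfo->lock);
    }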
Signed-off-by: Zach Brown <zab@versity.com>
When a client socket disconnects we save the connection state to re-use
later if the client reconnects. A newly accepted connection finds the
old connection associated with the reconnecting client and migrates
state from the old idle connection to the newly accepted connection.
While moving messages between the old and new send and resend queues the
code had an aggressive BUG_ON that was asserting that the newly accepted
connection couldn't have any messages in its resend queue.
This BUG can be tripped due to the ordering of greeting processing and
connection state migration. The server greeting processing path sends
the greeting response to the client before it calls the net code to
migrate connection state. When it "sends" the greeting response it puts
the message on the send queue and kicks the send work. It's possible
for the send work to execute and move the greeting response to the
resend queue and trip the BUG_ON.
This is harmless. The sent greeting response is going to end up on the
resend queue either way, there's no reason for the reconnection
migration to assert that it can't have happened yet. It is going to be
dropped the moment we get a message from the client with a recv_seq that
is necessarily past the greeting response which always gets a seq of 1
from the newly accepted connection.
We remove the BUG_ON and try to splice the old resend queue after the
possible response at the head of the resend_queue so that it is the
first to be dropped.
Signed-off-by: Zach Brown <zab@versity.com>
The last thing server commits do is move extents from the freed list
into freed extents. It moves as many as it can until it runs out of
avail meta blocks and space for freed meta blocks in the current
allocator's lists.
The calculation for whether the lists had resources to move an extent
was quite off. It missed that the first move might have to dirty the
current allocator or the list block, that the btree could join/split
blocks at each level down the paths, and boy does it look like the
height component of the calculation was just bonkers.
With the wrong calculation the server could overflow the freed list
while moving extents and trigger a BUG_ON. We rarely saw this in
testing.
Signed-off-by: Zach Brown <zab@versity.com>
server_get_log_trees() sets the low flag in a mount's meta_avail
allocator, triggering enospc for any space consuming allocations in the
mount, if the server's global meta_avail pool falls below the reserved
block count. Before each server transaction opens we swap the global
meta_avail and meta_freed allocators to ensure that the transaction has
at least the reserved count of blocks available.
This creates a risk of premature enospc as the global meta_avail pool
drains and swaps to the larger meta_freed. The pool can be close to the
reserved count, perhaps at it exactly. _get_log_trees can fill the
client's allocator, even just a little, and drop the global meta_avail total
under the reserved count, triggering enospc, even though meta_freed
could have had quite a lot of blocks.
The fix is to ensure that the global meta_avail has 2x the reserved
count, and to swap if it falls under that. This ensures that a server
transaction can consume an entire reserved count and still have enough
to avoid triggering enospc.
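The swap condition itself is tiny; a sketch with assumed names:
    /* swap in meta_freed once meta_avail can't cover two reserved counts */
    static bool should_swap_meta_avail(u64 avail_total, u64 reserved_blocks)
    {
            return avail_total < 2 * reserved_blocks;
    }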
This fixes a scattering of rare premature enospc returns that were
hitting during tests. It was rare for meta_avail to fall just at the
reserved count and for get_log_trees to have to refill the client
allocator, but it happened.
Signed-off-by: Zach Brown <zab@versity.com>
Add a scoutfs command that uses an ioctl to send a request to the server
to safely use a device that has grown.
Signed-off-by: Zach Brown <zab@versity.com>
mkfs was incorrectly initializing total_data_blocks. The field is meant
to record the number of blocks, from the start of the device, that the
filesystem could access. mkfs was subtracting the initial reserved area
of the device, recording instead only the number of blocks past the
reserved area that the filesystem might access.
This could allow accesses past the end of the device if mount checks the
device size against the smaller total_data_blocks.
And we're about to use total_data_blocks as the start of a new extent to
add when growing the volume. It needs to be fixed so that this new
grown free extent doesn't overlap with the end of the existing free
extents.
Signed-off-by: Zach Brown <zab@versity.com>
There are fields in the super block that specify the range of blocks
that would be used for metadata or data. They are from the time when a
single block device was carved up into regions for metadata and data.
They don't make sense now that we have separate metadata and data block
devices. The starting blkno is static and we go to the end of the
device.
This removes the fields now that they serve no purpose. Their only
use, checking that freed extents fell within the correct bounds, can
still be performed by using the static starting number or roughly using
the size of the devices. It's not perfect, but this is already only
a check to see that the blknos aren't utter nonsense.
We're removing the fields now to avoid having to update them while
worrying about users when resizing devices.
Signed-off-by: Zach Brown <zab@versity.com>
As subsystems were built I tended to use interruptible waits in the hope
that we'd let users break out of most waits.
The reality is that we have significant code paths that have trouble
unwinding. Final inode deletion during iput->evict in a task is a good
example. It's madness to have a pending signal turn an inode deletion
from an efficient inline operation to a deferred background orphan inode
scan deletion.
It also happens that golang built pre-emptive thread scheduling around
signals. Under load we see a surprising amount of signal spam and it
has created surprising error cases which would have otherwise been fine.
This changes waits to expect that IOs (including network commands) will
complete reasonably promptly. We remove all interruptible waits with
the notable exception of breaking out of a pending mount. That requires
shuffling setup around a little bit so that the first network message we
wait for is the lock for getting the root inode.
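The shape of the change at a typical wait site, purely illustrative:
    /* before: could return -ERESTARTSYS under golang's signal spam */
    ret = wait_event_interruptible(waitq, atomic_read(&done));

    /* after: IOs and network commands are expected to complete */
    wait_event(waitq, atomic_read(&done));
    ret = 0;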
Signed-off-by: Zach Brown <zab@versity.com>
If async network request submission fails then the response handler will
never be called. The sync request wrapper made the mistake of trying to
wait for completion when initial submission failed. This never happened
in normal operation but we're able to trigger it with some regularity
with forced unmount during tests. Unmount would hang waiting for work
to shut down which was waiting for request responses that would never
happen.
Signed-off-by: Zach Brown <zab@versity.com>
Changing the file size can change the file contents -- reads will
change when they stop returning data. fallocate can change the file
size and if it does it should increment the data_version, just like
setattr does.
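A sketch of the check in the fallocate path, assuming a hypothetical
helper for bumping the version:
    /* end = offset + len of the fallocate call */
    if (!(mode & FALLOC_FL_KEEP_SIZE) && end > i_size_read(inode)) {
            scoutfs_inode_inc_data_version(inode);  /* assumed helper */
            i_size_write(inode, end);
    }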
Signed-off-by: Zach Brown <zab@versity.com>
The stage_tmpfile test util was written when fallocate didn't update
data_version for size extensions. It is more correct to get the
data_version after fallocate has changed it for however many
transactions, extent allocations, and i_size extensions it took to
allocate the space.
Signed-off-by: Zach Brown <zab@versity.com>
Some kernels have blkdev_reread_part acquire the bd_mutex and then call
into drop_partitions which calls fsync_bdev which acquires s_umount.
This inverts the usual pattern of deactivate_super getting s_umount and
then using blkdev_put in kill_sb->put_super to drop a second device.
The inversion has been fixed upstream by years of rewrites. We can't go
back in time to fix the kernels that we're testing against,
unfortunately, so we disable lockdep around our valid leg of the
inversion that lockdep is noticing in our testing.
Signed-off-by: Zach Brown <zab@versity.com>
iput() can only be used in contexts that can safely perform final inode
deletion, which requires cluster locks and transactions. The
transaction committing worker is not such a context. We can't have
deletion during transaction commit trying to get locks and dirty *more*
items in the transaction.
Now that we're properly getting locks in final inode deletion and
O_TMPFILE support has put pressure on deletion, we're seeing deadlocks
between inode eviction during transaction commit getting an index lock
and index lock invalidation trying to commit.
We use the newly offered queued iput to defer the iput from walking our
dirty inodes. The transaction commit will be able to proceed while
the iput worker is off waiting for a lock.
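A rough sketch of the deferred iput mechanism, with the type and helper
names made up for illustration:
    struct deferred_iput {
            struct work_struct work;
            struct inode *inode;
    };

    static void iput_worker(struct work_struct *work)
    {
            struct deferred_iput *di = container_of(work, struct deferred_iput,
                                                    work);

            /* safe to block on cluster locks and enter transactions here */
            iput(di->inode);
            kfree(di);
    }

    /* the commit worker calls this instead of iput() on its dirty inodes */
    static void queue_iput(struct inode *inode)
    {
            struct deferred_iput *di;

            di = kmalloc(sizeof(*di), GFP_NOFS);
            if (!di) {
                    iput(inode);    /* sketch only; real code avoids this */
                    return;
            }

            di->inode = inode;
            INIT_WORK(&di->work, iput_worker);
            schedule_work(&di->work);
    }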
Signed-off-by: Zach Brown <zab@versity.com>
Lock invalidation had the ability to kick iput off to work context. We
need to use it for inode writeback as well so we move the mechanism over
to inode.c and give it a proper call.
Signed-off-by: Zach Brown <zab@versity.com>
We're seeing errors during truncate that are surprising. Let's try and
recover from them and provide more info when they happen so that we can
dig deeper.
Signed-off-by: Zach Brown <zab@versity.com>
We recently fixed problems sending omap responses to originating clients
which can race with the clients disconnecting. We need to handle the
requests sent to clients on behalf of an origination request in exactly
the same way. The send can race with the client being evicted. The
race is safely ignored and cleaned up as the client's rid is removed
from the server's request tracking.
Signed-off-by: Zach Brown <zab@versity.com>
The times in the quorum status file are in absolute monotonic kernel
time since bootup. That's not particularly helpful especially when
comparing across hosts with different boot times.
This shows relative times in timespec64 seconds until or since the times
in question. While we're at it we also collect the send and receive
timestamps closer to each send or receive call.
Signed-off-by: Zach Brown <zab@versity.com>
Generally, forced unmount works by returning errors for all IO. Quorum
is pretty resilient in that it can have the IO errors eaten by server
startup and does its own messaging that won't return errors. Trying to
force unmount can have the quorum service continually participate in
electing a server that immediately fails and shuts down.
This specifically shuts down the internal quorum service when it sees
that unmount is being forced. This is easier and cleaner than having
the network IO return errors and then having that trigger shutdown.
Signed-off-by: Zach Brown <zab@versity.com>
The quorum service shuts down if it sees errors that mean that it can't
do its job.
This is mostly fatal errors gathering resources at startup or runtime IO
errors but it was also shutting down if server startup fails. That's
not quite right. This should be treated like the server shutting down
on errors. Quorum needs to stay around to participate in electing the
next server.
Fence timeouts could trigger this. A quorum mount could crash, the
next server without a fence script could have a fence request time out
and shut down, and now the third remaining server is left to indefinitely
send vote requests into the void.
With this fixed, continuing that example, the quorum service in the
second mount remains to elect the third server with a working fence
script after the second server shuts down after its fence request times
out.
Signed-off-by: Zach Brown <zab@versity.com>
This should be good enough to get single node mounts up and running with
fenced with minimal effort. The example config will need to be copied
to /etc/scoutfs/scoutfs-fenced.conf for it to be functional, so this
still requires specific opt-in and won't accidentally run for multi-node
systems.
Signed-off-by: Ben McClelland <ben.mcclelland@versity.com>
The omap message lifecycle is a little different than the server's usual
handling that sends a response from the request handler. The response
is sent long after the initial receive handler, which is what pins the
connection to the client, has returned. It's fine for the response to
be dropped.
The main server request handler handled this case but other response
senders didn't. Put this error handling in the server response sender
itself so that all callers are covered.
Signed-off-by: Zach Brown <zab@versity.com>
We hide I_FREEING inodes from inode lookup to avoid inversions with
cluster locking. This can result in duplicate inode structs for a
given inode number. They can both race to try and delete the same items
for their shared inode number. This leads to error messages from
evict_inode and could lead to corruption if they, for example, both try
and free the same data extents.
This adds very basic serialization so only one instance can try to
delete items at a time.
Signed-off-by: Zach Brown <zab@versity.com>
The item cache has to be careful not to insert stale read items when
previously dirty items have been written and invalidated while a read
was in flight.
This was previously done by recording the possible range of items that a
reader could see based on the key range of its lock. This is
disastrous when a workload operates entirely within one lock. I ran
into this when testing a small number of files with massive amounts of
xattrs. While any reader is in flight all pages can't be invalidated
because they all intersect with the one lock that covers all the items
in use.
The fix is to more naturally reflect the problem by tracking the
greatest item seq in pages and the earliest seq that any readers
can't see. This lets invalidation skip only the pages with items
that weren't visible to the earliest reader.
This more naturally reflects that the problem is due to the age of the
items, not their position in the key space. Now only a few of the most
recently modified pages could be skipped and they'll be at the end
of the LRU and won't typically be visited. As an added benefit it's
now much cheaper to add, delete, and test the active readers.
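In effect the skip test becomes a seq comparison instead of a key range
intersection; a sketch with assumed names:
    /*
     * Skip (keep) a page only if it holds items newer than what the
     * earliest in-flight reader could have seen; everything older is
     * safe to invalidate.
     */
    static bool reader_must_skip_page(struct item_page *pg,
                                      u64 earliest_unseen_seq)
    {
            return pg->max_item_seq >= earliest_unseen_seq;
    }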
This fix stopped rm -rf of a full system's worth of xattrs from taking
minutes of constantly spinning and skipping all pages in the LRU to seconds of
doing real removal work.
Signed-off-by: Zach Brown <zab@versity.com>
Normally mkfs would fail if we specify meta or data devices that are too
small. We'd like to use small devices for test scenarios, though, so
add an option to allow specifying sizes smaller than the minimum
required sizes.
Signed-off-by: Zach Brown <zab@versity.com>
These forward declarations were for interfaces that have since been
removed or changed and are no longer needed.
Signed-off-by: Zach Brown <zab@versity.com>
Returning ENOSPC is challenging because we have clients working on
allocators which are a fraction of the whole and we use COW transactions
so we need to be able to allocate to free. This adds support for
returning ENOSPC to client posix allocators as free space gets low.
For metadata, we reserve a number of free blocks for making progress
with client and server transactions which can free space. The server
sets the low flag in a client's allocator if we start to dip into
reserved blocks. In the client we add an argument to entering a
transaction which indicates if we're allocating new space (as opposed to
just modifying existing data or freeing). When an allocating
transaction runs low and the server low flag is set then we return
ENOSPC.
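A rough sketch of the client-side check, with the struct and field names
invented for illustration:
    /* an allocating transaction holder hits ENOSPC once we're truly low */
    static int check_alloc_enospc(struct trans_info *tri, bool allocating)
    {
            if (allocating && tri->server_set_low_flag &&
                tri->trans_free_blocks < tri->low_threshold)
                    return -ENOSPC;

            return 0;
    }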
Adding an argument to transaction holders and having it return ENOSPC
gave us the opportunity to clean it up and make it a little clearer.
More work is done outside the wait_event function and it now
specifically waits for a transaction to cycle when it forces a commit
rather than spinning until the transaction worker acquires the lock and
stops it.
For data the same pattern applies except there are no reserved blocks
and we don't COW data so it's a simple case of returning the hard ENOSPC
when the data allocator flag is set.
The server needs to consider the reserved count when refilling the
client's meta_avail allocator and when swapping between the
meta_avail and meta_freed allocators.
We add the reserved metadata block count to statfs_more so that df can
subtract it from the free meta blocks and make it clear when enospc is
going to be returned for metadata allocations.
We increase the minimum device size in mkfs so that small testing
devices provide sufficient reserved blocks.
And finally we add a little test that makes sure we can fill both
metadata and data to ENOSPC and then recover by deleting what we filled.
Signed-off-by: Zach Brown <zab@versity.com>
The forest log merge work calls into the client to send commit requests
to the server. The forest is usually destroyed relatively late in the
sequence and can still be running after the client is destroyed.
Adding a _forest_stop call lets us stop the log merging work
before the client is destroyed.
Signed-off-by: Zach Brown <zab@versity.com>
Killing a task can end up in evict and break out of acquiring the locks
to perform final inode deletion. This isn't necessarily fatal. The
orphan task will come around and will delete the inode when it is truly
no longer referenced.
So let's silence the error and keep track of how many times it happens.
Signed-off-by: Zach Brown <zab@versity.com>
Orphaned items haven't been deleted for quite a while -- the call to the
orphan inode scanner has been commented out for ages. The deletion of
the orphan item didn't take rid zone locking into account as we moved
deletion from being strictly local to being performed by whoever last
used the inode.
This reworks orphan item management and brings back orphan inode
scanning to correctly delete orphaned inodes.
We get rid of the rid zone that was always _WRITE locked by each mount.
That made it impossible for other mounts to get a _WRITE lock to delete
orphan items. Instead we rename it to the orphan zone and have orphan
item callers get _WRITE_ONLY locks inside their inode locks. Now all
nodes can create and delete orphan items as they have _WRITE locks on
the associated inodes.
Then we refresh the orphan inode scanning function. It now runs
regularly in the background of all mounts. It avoids creating cluster
lock contention by finding candidates with unlocked forest hint reads
and by testing inode caches locally and via the open map before properly
locking and trying to delete the inode's items.
Signed-off-by: Zach Brown <zab@versity.com>
The log merging work deletes log trees items once their item roots are
merged back into the fs root. Those deleted items could still have
populated srch files that would be lost. We force rotation of the srch
files in the items as they're reclaimed to turn them into rotated srch
files that can be compacted.
Signed-off-by: Zach Brown <zab@versity.com>
Refilling a btree block by moving items from its siblings as it falls
under the join threshold had some pretty serious mistakes. It used the
target block's total item count instead of the siblings when deciding
how many items to move. It didn't take item moving overruns into
account when deciding to compact so it could run out of contiguous free
space as it moved the last item. And once it compacted it returned
without moving because the return was meant to be in the error case.
This is all fixed by correctly examining the sibling block to determine
if we should join a block up to 75% full or move a big chunk over,
compacting if the free space doesn't have room for an excessive worst
case overrun, and fixing the compaction error checking return typo.
Signed-off-by: Zach Brown <zab@versity.com>
The alloc iterator needs to find and include the totals of the avail and
freed allocator list heads in the log merge items.
Signed-off-by: Zach Brown <zab@versity.com>
mkfs was miscalculating the offset of the start of the free region in
the center of blocks as it populated blocks with items. It was using
the length of the free region as its offset in the block. To find
the offset of the end of the free region in the block it has to be
taken relative to the end of the item array.
Signed-off-by: Zach Brown <zab@versity.com>
Some item_val_len() callers were applying alignment twice, which isn't
needed.
And additions to erased_bytes as value lengths change didn't take
alignment into account. They could end up double counting if val_len
changes within the alignment are then accounted for again as the full
item and alignment is later deleted. Additions to erased_bytes based on
val_len should always take alignment into account.
Signed-off-by: Zach Brown <zab@versity.com>
The item cache allocates a page and a little tracking struct for each
cached page. If the page allocation fails it might try to free a null
page pointer, which isn't allowed.
Signed-off-by: Zach Brown <zab@versity.com>
Item creation, which fills out a new item at the end of the array of
item structs at the start of the block, didn't explicitly zero the item
struct padding. It would only have been zero if the memory was
already zero, which is likely for new blocks, but isn't necessarily true
if the memory had previously been used by deleted values.
Signed-off-by: Zach Brown <zab@versity.com>
The change to aligning values didn't update the btree block verifier's
total length calculation, and while we're in there we can also check
that values are correctly aligned.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we had an unused function that could be flipped on to verify
btree blocks during traversal. This refactors the block verifier a bit
to be called by a verifying walker. This will let callers walk paths to
leaves to verify the tree around operations, rather than verification
being performed during the next walk.
Signed-off-by: Zach Brown <zab@versity.com>
Take the condition used to decide if a btree block needs to be joined
and put it in total_above_join_low_water() so that btree_merging will be
able to call it to see if the leaf block it's merging into needs to be
joined.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree function for freeing all the blocks in a btree without
having to cow the blocks to track which refs have been freed. We use a
key from the caller to track which portions of the tree have been freed.
Signed-off-by: Zach Brown <zab@versity.com>
Over time the printing of the btree roots embedded in the super block
has gotten a little out of hand. Add a helper macro for the printf
format and args and re-order them to match their order in the
superblock.
Signed-off-by: Zach Brown <zab@versity.com>
We now have a core seq number in the super that is advanced for multiple
users. The client transaction seq comes from the core seq so we
remove the trans_seq from the super. The item version is also converted
to use a seq that's derived from the core seq.
Signed-off-by: Zach Brown <zab@versity.com>
Add the client work which is regularly scheduled to ask the server for
log merging work to do. The relatively simple client work gets a
request from the server, finds the log roots to merge given the request
seq, performs the merge with a btree call and callbacks, and commits the
result to the server.
Signed-off-by: Zach Brown <zab@versity.com>
This adds the server processing side of the btree merge functionality.
The client isn't yet sending the log_merge messages so no merging will
be performed.
The bulk of the work happens as the server processes a get_log_merge
message to build a merge request for the client. It starts a log merge
if one isn't in flight. If one is in flight it checks to see if it
should be spliced and maybe finished. In the common case it finds the
next range to be merged and sends the request to the client to process.
The commit_log_merge handler is the completion side of that request. If
the request failed then we unwind its resources based on the stored
request item. If it succeeds we record it in an item for get_log_merge
processing to splice eventually.
Then we modify two existing server code paths.
First, get_log_trees doesn't just create or use a single existing log
btree for a client mount. If the existing log btree is large enough it
sets its finalized flag and advances the nr to use a new log btree.
That makes the old finalized log btree available for merging.
Then we need to be a bit more careful when reclaiming the open log btree
for a client. We can't use next to find the only open log btree, we use
prev to find the last and make sure that it isn't already finalized.
Signed-off-by: Zach Brown <zab@versity.com>
Add the format specification for the upcoming btree merging. Log btrees
gain a finalized field, we add the super btree root and all the items
that the server will use to coordinate merging amongst clients, and we
add the two client net messages which the server will implement.
Signed-off-by: Zach Brown <zab@versity.com>
Extract part of the get_last_seq handler into a call that finds the last
stable client transaction seq. Log merging needs this to determine a
cutoff for stable items in log btrees.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree call to just dirty a leaf block, joining and splitting
along the way so that the blocks in the path satisfy the balance
constraints.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree function for merging the items in a range from a number of
read-only input btrees into a destination btree.
Signed-off-by: Zach Brown <zab@versity.com>
Add a BTW_SUBTREE flag to btree_walk() to restrict splitting or joining
of the root block. When clients are merging into the root built from a
reference to the last parent in the fs tree we want to be careful that
we maintain a single root block that can be spliced back into the fs
tree. We specifically check that the root block remains within the
split/join thresholds. If it falls out of compliance we return an error
so that it can be spliced back into the fs tree and then split/joined
with its siblings.
Signed-off-by: Zach Brown <zab@versity.com>
Add calls for working with subtrees built around references to blocks in
the last level of parents. This will let the server farm out btree
merging work where concurrency is built around safely working with all
the items and leaves that fall under a given parent block.
Signed-off-by: Zach Brown <zab@versity.com>
Add a btree helper for finding the range of keys which are found in
leaves referenced by the last parent block when searching for a given
key.
Signed-off-by: Zach Brown <zab@versity.com>
Rename the item version to seq and set it to the max of the transaction
seq and the lock's write_seq. This lets btree item merging choose a seq
at which all dirty items written in future commits must have greater
seqs. It can drop the seqs from items written to the fs tree during
btree merging knowing that there aren't any older items out in
transactions that could be mistaken for newer items.
Signed-off-by: Zach Brown <zab@versity.com>
Rename the write_version lock field to write_seq and get it from the
core seq in the super block.
We're doing this to create a relationship between a client transaction's
seq and a lock's write_seq. New transactions will have a greater seq
than all previously granted write locks and new write locks will have a
greater seq than all open transactions. This will be used to resolve
ambiguities in item merging as transaction seqs are written out of order
and write locks span transactions.
Signed-off-by: Zach Brown <zab@versity.com>
Get the next seq for a client transaction from the core seq in the super
block. Remove its specific next_trans_seq field.
While making this change we switch to only using le64 in the network
message payloads, the rest of the processing now uses natural u64s.
Signed-off-by: Zach Brown <zab@versity.com>
Add a new seq field to the super block which will be the source of all
incremented seqs throughout the system. We give out incremented seqs to
callers with an atomic64_t in memory which is synced back to the super
block as we commit transactions in the server.
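Conceptually it's an atomic counter that's folded back into the super at
commit time; a sketch with assumed struct and field names:
    /* hand out the next seq to transactions, locks, item versions, etc. */
    static u64 next_core_seq(struct server_info *server)
    {
            return atomic64_inc_return(&server->core_seq);
    }

    /* as the server commits, persist the current value in the super */
    static void sync_core_seq(struct server_info *server,
                              struct scoutfs_super_block *super)
    {
            super->seq = cpu_to_le64(atomic64_read(&server->core_seq));
    }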
Signed-off-by: Zach Brown <zab@versity.com>
When we moved to the current allocator we fixed up the server commit
path to initialize the pair of allocators as a commit is finished rather
than before it starts. This removed all the error cases from
hold_commit. Remove the error handling from hold_commit calls to make
the system just a bit simpler.
Signed-off-by: Zach Brown <zab@versity.com>
The core quorum work loop assumes that it has exclusive access to its
slot's quorum block. It uniquely marks blocks it writes and verifies
the marks on read to discover if another mount has written to its slot
under the assumption that this must be a configuration error that put
two mounts in the same slot.
But the design of the leader bit in the block violates the invariant
that a slot's block is only written by that slot. As the server comes up and
fences previous leaders it writes to their block to clear their leader
bit.
The final hole in the design is that because we're fencing mounts, not
slots, each slot can have two mounts in play. An active mount can be
using the slot and there can still be a persistent record of a previous
mount in the slot that crashed that needs to be fenced.
All this comes together to have the server fence an old mount in a slot
while a new mount is coming up. The new mount sees the mark change and
freaks out and stops participating in quorum.
The fix is to rework the quorum blocks so that each slot only writes to
its own block. Instead of the server writing to each fenced mount's
slot, it writes a fence event to its block once all previous mounts have
been fenced. We add a bit of bookkeeping so that the server can
discover when all block leader fence operations have completed. Each
event gets its own term so we can compare events to discover live
servers.
We get rid of the write marks and instead have an event that is written
as a quorum agent starts up and is then checked on every read to make
sure it still matches.
Signed-off-by: Zach Brown <zab@versity.com>
If the server shuts down it calls into quorum to tell it that the
server has exited. This stops quorum from sending heartbeats that
suppress other leader elections.
The function that did this got the logic wrong. It was setting the bit
instead of clearing it, having been initially written to set a bit when
the server exited.
Signed-off-by: Zach Brown <zab@versity.com>
Add the peername of the client's connected socket to its mounted_client
item as it mounts. If the client doesn't recover then fencing can use
the IP to find the host to fence.
Signed-off-by: Zach Brown <zab@versity.com>
The error messages from reading quorum blocks were confusing. The mark
was being checked when the block had already seen an error, and we got
multiple messages for some errors.
This cleans it up a bit so we only get one error message for each error
source and each message contains relevant context.
Signed-off-by: Zach Brown <zab@versity.com>
Currently the server's recovery timeout work synchronously reclaims
resources for each client whose recovery timed out.
scoutfs_recov_next_pending() can always return the head of the pending
list because its caller will always remove it from the list as it
iterates.
As we move to real fencing the server will be creating fence requests
for all the timed out clients concurrently. It will need to iterate
over all the rids for clients in recovery.
So we sort recovery's pending list by rid and change _recov_next_pending
to return the next pending rid after a rid argument. This lets the
server iterate over all the pending rids at once.
Signed-off-by: Zach Brown <zab@versity.com>
Client recovery in the server doesn't add the omap rid for all the
clients that it's waiting for. It only adds the rid as they connect. A
client whose recovery timeout expires and is evicted will try to have
its omap rid removed without being added.
Today this triggers a warning and returns an error from a time when the
omap rid lifecycle was more rigid. Now that it's being called by the
server's reclaim_rid, along with a bunch of other functions that succeed
if called for non-existent clients, let's have the omap remove_rid do
the same.
Signed-off-by: Zach Brown <zab@versity.com>
I saw a confusing hang that looked like a lack of ordering between
a waker setting shutting_down and a wait event testing it after
being woken up. Let's see if more barriers help.
Signed-off-by: Zach Brown <zab@versity.com>
Our connection state spans sockets that can disconnect and reconnect.
While sockets are connected we store the socket's remote address in the
connection's peername and we clear it as sockets disconnect.
Fencing wants to know the last connected address of the mount. It's a
bit of metadata we know about the mount that can be used to find it and
fence it. As we store the peer address we also stash it away as the
last known peer address for the socket. Fencing can then use that
instead of the current socket peer address which is guaranteed to be
uninitialized because there's no socket connected.
Signed-off-by: Zach Brown <zab@versity.com>
The client currently always queues immediate connect work if its
notify_down is called. It was assuming that notify_down is only called
from a healthy established connection. But it's also called for
unsuccessful connect attempts that might not have timed out. Say the
host is up but the port isn't listening.
This results in spamming connection attempts at an old stale leader
block until a new server is elected, fences the previous leader, and
updates their quorum block.
The fix is to explicitly manage the connection work queueing delay. We
only set it to immediately queue on mount and when we see a greeting
reply from the server. We always set it to a longer timeout as we start
a connection attempt. This means we'll always have a long reconnect
delay unless we really connected to a server.
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which exercises the various reasons for fencing mounts and
checks that we reclaim the resources that they had.
Signed-off-by: Zach Brown <zab@versity.com>
The server is responsible for calling the fencing subsystem. It is the
source of fencing requests as it decides that previous mounts are
unresponsive. It is responsible for reclaiming resources for fenced
mounts and freeing their associated fence request.
Signed-off-by: Zach Brown <zab@versity.com>
Add sysfs attribute creation that can provide the parent dir kobject
instead of always creating the sysfs object dir off of the main
per-mount dir.
Signed-off-by: Zach Brown <zab@versity.com>
Add super_ops->umount_begin so that we can implement a forced unmount
which tries to avoid issuing any more network or storage ops. It can
return errors and lose unsynchronized data.
Signed-off-by: Zach Brown <zab@versity.com>
Add the data_alloc_zone_blocks volume option. This changes the
behaviour of the server to try and give mounts free data extents which
fall in exclusive fixed-size zones.
We add the field to the scoutfs_volume_options struct and add it to the
set_volopt server handler which enforces constraints on the size of the
zones.
We then add fields to the log_trees struct which records the size of the
zones and sets bits for the zones that contain free extents in the
data_avail allocator root. The get_log_trees handler is changed to read
all the zone bitmaps from all the items, pass those bitmaps in to
_alloc_move to direct data allocations, and finally update the bitmaps
in the log_trees items to cover the newly allocated extents. The
log_trees data_alloc_zone fields are cleared as the mount's logs are
reclaimed to indicate that the mount is no longer writing to the zone.
The policy mechanism of finding free extents based on the bitmaps is
implemented down in _data_alloc_move().
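A sketch of the zone bitmap bookkeeping, with assumed names:
    /* set a bit for every zone a mount's free data extent intersects */
    static void mark_extent_zones(unsigned long *zone_bits, u64 start,
                                  u64 len, u64 zone_blocks)
    {
            u64 zone;

            if (len == 0)
                    return;

            for (zone = start / zone_blocks;
                 zone <= (start + len - 1) / zone_blocks; zone++)
                    set_bit(zone, zone_bits);
    }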
Signed-off-by: Zach Brown <zab@versity.com>
Add parameters so that scoutfs_alloc_move() can first search for source
extents in specified zones. It uses relatively cheap searches through
the order items to find extents that intersect with the regions
described by the zone bitmaps.
Signed-off-by: Zach Brown <zab@versity.com>
Allocators store free extents in two items, one sorted by their blkno
position and the other by their precise length.
The length index makes it easy to search for precise extent lengths, but
it makes it hard to search for a large extent within a given blkno
region. Skipping in the blkno dimension has to be done for every
precise length value.
We don't need that level of precision. If we index the extents by a
coarser order of the length then we have a fixed number of orders in
which we have to skip in the blkno dimension when searching within a
specific region.
This changes the length item to be stored at the log(8) order of the
length of the extents. This groups extents into orders that are close
to the human-friendly base 10 orders of magnitude.
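The order calculation is tiny, assuming it's simply floor(log8(len)):
    /* group a free extent's length into its base-8 order of magnitude */
    static u8 extent_len_order(u64 len)
    {
            /* callers handle len == 0 specially when building keys */
            return len ? (fls64(len) - 1) / 3 : 0;
    }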
With this change the order field in the key no longer stores the precise
extent length. To preserve the length of the extent we need to use
another field. The only 64bit field remaining is the first, which has a
higher comparison priority than the type. So we use the highest
comparison priority zone field to differentiate the position and order
indexes and can now use all three 64bit fields in the key.
Finally, we have to be careful when constructing a key to use _next when
searching for a large extent. Previously keys were relying on the magic
property that building a key from an extent length of 0 ended up at the
key value -0 = 0. That only worked because we never stored zero length
extents. We now store zero length orders so we can't use the negative
trick anymore. We explicitly treat 0 length extents carefully when
building keys and we subtract the order from U64_MAX to store the orders
from largest to smallest.
Signed-off-by: Zach Brown <zab@versity.com>
Introduce global volume options. They're stored in the superblock and
can be seen in sysfs files that use network commands to get and
set the options on the server.
Signed-off-by: Zach Brown <zab@versity.com>
A lock that is undergoing invalidation is put on a list of locks in the
super block. Invalidation requests put locks on the list. While locks
are invalidated they're temporarily put on a private list.
To support a request arriving while the lock is being processed we
carefully manage the invalidation fields in the lock between the
invalidation worker and the incoming request. The worker correctly
noticed that a new invalidation request had arrived but it left the lock
on its private list instead of putting it back on the invalidation list
for further processing. The lock was unreachable, wouldn't get
invalidated, and caused everyone trying to use the lock to block
indefinitely.
When the worker sees another request arrive for an invalidating lock it
needs to move the lock from the private list back to the invalidation
list.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we added a ilookup variant that ignored I_FREEING inodes
to avoid a deadlock between lock invalidation (lock->I_FREEING) and
eviction (I_FREEING->lock).
Now we're seeing similar deadlocks between eviction (I_FREEING->lock)
and fh_to_dentry's iget (lock->I_FREEING).
I think it's reasonable to ignore all inodes with I_FREEING set when
we're using our _test callback in ilookup or iget. We can remove the
_nofreeing ilookup variant and move its I_FREEING test into the
iget_test callback provided to both ilookup and iget.
Callers will get the same result, it will just happen without waiting
for a previously I_FREEING inode to leave. They'll get NULL from
ilookup instead of waiting. They'll allocate and start to initialize a newer
instance of the inode and insert it along side the previous instance.
We don't have inode number re-use so we don't have the problem where a
newly allocated inode number is relying on inode cache serialization to
not find a previously allocated inode that is being evicted.
This change does allow for concurrent iget of an inode number that is
being deleted on a local node. This could happen in fh_to_dentry with a
raw inode number. But this was already a problem between mounts because
they don't have a shared inode cache to serialize them. Once we fix
that between nodes, we fix it on a single node as well.
Signed-off-by: Zach Brown <zab@versity.com>
The vfs often calls filesystem methods with i_mutex held. This creates
a natural ordering of i_mutex outside of cluster locks. The file
aio_read method acquired i_mutex after its cluster lock, creating a
deadlock with other vfs methods like setattr.
The acquisition of i_mutex after the cluster lock was due to using the
pattern where we use the per-task lock to discover if we're the first
user of the lock in a call chain. Readpage has to do this, but file
aio_read doesn't. It should never be called recursively. So we can
acquire the i_mutex outside of the cluster lock and warn if we ever are
called recursively.
Signed-off-by: Zach Brown <zab@versity.com>
When move blocks is staging it requires an overlapping offline extent to
cover the entire region to move.
It performs the stage by modifying one extent at a time. If there are
fragmented source extents it will modify each of them in turn across
the region.
When looking for the extent to match the source extent it looked from
the iblock of the start of the whole operation, not the start of the
source extent it's matching. This meant that it would find the first
extent it had just modified, which would no longer be offline, and
would return -EINVAL.
The fix is to have it search from the logical start of the extent it's
trying to match, not the start of the region.
Signed-off-by: Zach Brown <zab@versity.com>
The client's incoming lock invalidation request handler triggers a
BUG_ON if it gets a request for a lock that is already processing a
previous invalidation request. The server is supposed to only send
one request at a time.
The problem is that the batched invalidation request handling will send
responses outside of spinlock coverage before reacquiring the lock and
finishing processing once the response send has been successful.
This gives a window for another invalidation request to arrive after the
response was sent but before the invalidation finished processing. This
triggers the bug.
The fix is to mark the lock such that we can recognize a valid second
request arriving after we send the response but before we finish
processing. If it arrives we'll continue invalidation processing with
the arguments from the new request.
Signed-off-by: Zach Brown <zab@versity.com>
Lock teardown during unmount involves first calling shutdown and then
destroy. The shutdown call is meant to ensure that it's safe to tear
down the client network connections. Once shutdown returns locking is
promising that it won't call into the client to send new lock requests.
The current shutdown implementation is very heavy handed and shuts down
everything. This creates a deadlock. After calling lock shutdown, the
client will send its farewell and wait for a response. The server might
not send the farewell response until other mounts have unmounted if our
client is the mount running the server. In this case we still have to be
processing lock invalidation requests to allow other unmounting clients
to make forward progress.
This is reasonably easy and safe to do. We only use the shutdown flag
to stop lock calls that would change lock state and send requests. We
don't have it stop incoming request processing in the work queueing
functions. It's safe to keep processing incoming requests between
_shutdown and _destroy because the requests already come in through the
client. As the client shuts down it will stop calling us.
Signed-off-by: Zach Brown <zab@versity.com>
Even though we can pass in gfp flags to vmalloc it eventually calls pte
alloc functions which ignore the caller's flags and use user gfp flags.
This risks reclaim re-entering fs paths during allocations in the block
cache. These allocations that allowed reclaim deep in the fs were
causing lockdep to add RECLAIM dependencies between locks and holler
about deadlocks.
We apply the same pattern that xfs does for disabling reclaim while
allocating vmalloced block payloads. Setting PF_MEMALLOC_NOIO causes
reclaim in that task to clear __GFP_IO and __GFP_FS, regardless of the
individual allocation flags in the task, preventing recursion.
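One way the pattern can be applied around the vmalloc call, assuming the
older three-argument __vmalloc of the kernels we test against:
    static void *alloc_block_payload(size_t size)
    {
            unsigned int noio_flags;
            void *ptr;

            /* sets PF_MEMALLOC_NOIO so nested pte allocs can't recurse */
            noio_flags = memalloc_noio_save();
            ptr = __vmalloc(size, GFP_NOFS, PAGE_KERNEL);
            memalloc_noio_restore(noio_flags);

            return ptr;
    }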
Signed-off-by: Zach Brown <zab@versity.com>
The shared recovery layer outputs different messages than when it ran
only for lock_recovery in the lock server.
Signed-off-by: Zach Brown <zab@versity.com>
Locks have a bunch of state that reflects concurrent processing.
Testing that state determines when it's safe to free a lock because
nothing is going on.
During unmount we abruptly stop processing locks. Unmount will send a
farewell to the server which will remove all the state associated with
the client that's unmounting for all its locks, regardless of the state
the locks were in.
The client unmount path has to clean up the interrupted lock state and
free it, carefully avoiding assertions that would otherwise indicate
that we're freeing used locks. The move to async lock invalidation
forgot to clean up the invalidation state. Previously a synchronous
work function would set and clear invalidate_pending while it was
running. Once we finished waiting for it invalidate_pending would be
clear. The move to async invalidation work meant that we can still have
invalidate_pending with no work executing. Lock destruction removed
locks from the invalidation list but forgot to clear the
invalidate_pending flag.
This triggered assertions during unmount that were otherwise harmless.
There was no other use of the lock, we just forgot to clean up the lock
state.
Signed-off-by: Zach Brown <zab@versity.com>
The data_info struct holds the data allocator that is filled by
transactions as they commit. We have to free it after we've shutdown
transactions. It's more like the forest in this regard so we move its
destruction down by the forest to group similar behaviour.
Signed-off-by: Zach Brown <zab@versity.com>
Shutting down the lock client waits for invalidation work and prevents
future work from being queued. We're currently shutting down the
subsystems that lock calls before lock itself, leading to crashes if we
happen to have invalidations executing as we unmount.
Shutting down locking before its dependencies fixes this. This was hit
in testing during the inode deletion fixes because they created the
perfect race by acquiring locks during unmount, so the server could end
up sending invalidations to one mount on behalf of another as they both
unmounted.
Signed-off-by: Zach Brown <zab@versity.com>
Shutting down the transaction during unmount relied on the vfs unmount
path to perform a sync of any remaining dirty transaction. There are
ways that we can dirty a transaction during unmount after it calls
the fs sync, so we try to write any remaining dirty transaction before
shutting down.
Signed-off-by: Zach Brown <zab@versity.com>
We've had a long-standing deadlock between lock invalidation and
eviction. Invalidating a lock wants to lookup inodes and drop their
resources while blocking locks. Eviction wants to get a lock to perform
final deletion while the inode has I_FREEING set which blocks lookups.
We only saw this deadlock a handful of times in all of the time we've
run the code, but it's much more common now that we're acquiring
locks in iput to test that nlink is zero instead of only when nlink is
zero. I see unmount hang regularly when testing final inode deletion.
This adds a lookup variant for invalidation which will refuse to
return freeing inodes so they won't be waited on. Once they're freeing
they can't be seen by future lock users so they don't need to be
invalidated. This keeps the lock invalidation promise and avoids
sleeping on freeing inodes which creates the deadlock.
Signed-off-by: Zach Brown <zab@versity.com>
t_umount had a typo that had it try to unmount a mount based on a
caller's variable, which accidentally happened to work for its only
caller. Future callers would not have been so lucky.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we wouldn't try and remove cached dentries and inodes as
lock revocation removed cluster lock coverage. The next time
we tried to use the cached dentries or inodes we'd acquire
a lock and refresh them.
But now cached inodes prevent final inode deletion. If they linger
outside cluster locking then any final deletion will need to be deferred
until all its cached inodes are naturally dropped at some point in the
future across the cluster. It might take refreshing the dentries or for
memory pressure to push out the old cached inodes.
This tries to proactively drop cached dentries and inodes as we lose
cluster lock coverage if they're not actively referenced. We need to be
careful not to perform final inode deletion during lock invalidation
because it will deadlock, so we defer an iput which could delete during
evict out to async work.
Now deletion can be done synchronously in the task that is performing
the unlink because previous use of the inode on remote mounts hasn't
left unused cached inodes sitting around.
Signed-off-by: Zach Brown <zab@versity.com>
Today an inode's items are deleted once its nlink reaches zero and the
final iput is called in a local mount. This can delete inodes from
under other mounts which have opened the inode before it was unlinked on
another mount.
We fix this by adding cached inode tracking. Each mount maintains
groups of cached inode bitmaps at the same granularity as inode locking.
As a mount performs its final iput it gets a bitmap from the server
which indicates if any other mount has inodes in the group open.
This makes the two fast paths of opening and closing linked files and
of deleting a file that was unlinked locally only pay a moderate cost:
either maintaining the bitmap locally or getting the open map once per
lock group. Removing many files in a group will only lock and get
the open map once per group.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we have the recov layer we can have the lock server use it to
track lock recovery. The lock server no longer needs its own recovery
tracking structures and can instead call recov. We add a call for the
server to call to kick lock processing once lock recovery finishes. We
can get rid of the persistent lock_client items now that the server is
driving recovery from the mounted_client items.
Signed-off-by: Zach Brown <zab@versity.com>
The server starts recovery when it finds mounted client items as it
starts up. The clients are done recovering once they send their
greeting. If they don't recover in time then they'll be fenced.
Signed-off-by: Zach Brown <zab@versity.com>
Add a little set of functions to help the server track which clients are
waiting to recover which state. The open map messages need to wait for
recovery so we're moving recovery out of being only in the lock server.
Signed-off-by: Zach Brown <zab@versity.com>
Add lock coverage which tracks if the inode has been refreshed and is
covered by the inode group cluster lock. This will be used by
drop_inode and evict_inode to discover that the inode is current and
doesn't need to be refreshed.
Signed-off-by: Zach Brown <zab@versity.com>
The sneaky rhashtable_insert_fast() can't return -EEXIST despite the
last line of the function *REALLY* making it look like it can. It just
inserts new objects at the head of the bucket lists without comparing
the insertion with existing objects.
The block cache was relying on insertion to resolve duplicate racing
allocated blocks. Because it couldn't return -EEXIST we could get
duplicate cached blocks present in the hash table.
rhashtable_lookup_insert_fast() fixes this by actually comparing the
inserted object's key with the objects found in the insertion bucket. A
racing allocator trying to insert a duplicate cached block will get an
error, drop their allocated block, and retry their lookup.
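The insertion path becomes roughly the following, with the cache and
block names assumed:
    /* assumed: cache_ht_params keys blocks by their blkno */
    static struct cached_block *insert_or_find(struct block_cache *cache,
                                               struct cached_block *new_bl)
    {
            u64 blkno = new_bl->blkno;
            int ret;

            ret = rhashtable_lookup_insert_fast(&cache->ht, &new_bl->ht_head,
                                                cache_ht_params);
            if (ret == 0)
                    return new_bl;

            if (ret == -EEXIST) {
                    /* a racing allocator won; drop ours and use theirs */
                    free_cached_block(new_bl);
                    return rhashtable_lookup_fast(&cache->ht, &blkno,
                                                  cache_ht_params);
            }

            return ERR_PTR(ret);    /* other errors are handled by callers */
    }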
Signed-off-by: Zach Brown <zab@versity.com>
The rhashtable can return EBUSY if you insert fast enough to trigger an
expansion of the next table size that is waiting to be rehashed in an
rcu callback. If we get EBUSY from rhashtable_insert we call
synchronize_rcu to wait for the rehash to complete before trying again.
This was hit in testing restores of a very large namespace and took a
few hours to hit.
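The retry around the insert is small; same assumed names as the block
cache sketch above:
    static int insert_cached_block(struct block_cache *cache,
                                   struct cached_block *bl)
    {
            int ret;

            do {
                    ret = rhashtable_lookup_insert_fast(&cache->ht,
                                                        &bl->ht_head,
                                                        cache_ht_params);
                    /* wait out the pending rehash's rcu work, then retry */
                    if (ret == -EBUSY)
                            synchronize_rcu();
            } while (ret == -EBUSY);

            return ret;
    }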
Signed-off-by: Zach Brown <zab@versity.com>
We're seeing cpu livelocks in block shrinking where counters show that a
single block cache shrink call is only getting EAGAIN from repeated
rhashtable walk attempts. It occurred to me that the running task might
be preventing an RCU grace period from ending by never blocking.
The hope of this commit is that by waiting for rcu callbacks to run
we'll ensure that any pending rebalance callback runs before we retry
the rhashtable walk again. I haven't been able to reproduce this easily
so this is a stab in the dark.
Signed-off-by: Zach Brown <zab@versity.com>
Kinda weird to goto back to the out label and then out the bottom. Just
return -EIO, like forest_next_hint() does.
Don't call client_get_roots() right before retry, since it is the first
thing retry does.
Signed-off-by: Andy Grover <agrover@versity.com>
Support O_TMPFILE: Create an unlinked file and put it on the orphan list.
If it ever gains a link, take it off the orphan list.
Change MOVE_BLOCKS ioctl to allow moving blocks into offline extent ranges.
Ioctl callers must set a new flag to enable this operation mode.
RH-compat: tmpfile support is actually backported by RH into the 3.10 kernel.
We need to use some of their kabi-maintaining wrappers to use it:
use a struct inode_operations_wrapper instead of base struct
inode_operations, set S_IOPS_WRAPPER flag in i_flags. This lets
RH's modified vfs_tmpfile() find our tmpfile fn pointer.
Add a test that tests both creating tmpfiles as well as moving their
contents into a destination file via MOVE_BLOCKS.
xfstests common/004 now runs because tmpfile is supported.
Signed-off-by: Andy Grover <agrover@versity.com>
The srch client compaction work initializes allocators, dirties blocks,
and writes them out as its transaction. It forgot to call the
pre-commit allocator prepare function.
The prepare function drops block references used by the meta allocator
during the transaction. This leaked block references which kept blocks
from being freed by the shrinker under memory pressure. Eventually
memory was full of leaked blocks and the shrinker walked all of them
looking for blocks to free, resulting in an effective livelock that ground
the system to a crawl.
Signed-off-by: Zach Brown <zab@versity.com>
By the time we get to destroying the block cache we should have put all
our block references. Warn as we tear down the blocks if we see any
blocks that still have references, implying a ref leak. This caught a
leak caused by srch compaction forgetting to put allocator list block
refs.
Signed-off-by: Zach Brown <zab@versity.com>
That comment looked very weird indeed until I recognized that I must
have forgotten to delete the first two attempts at starting the
sentence.
Signed-off-by: Zach Brown <zab@versity.com>
The very first greeting a client sends is unique because it doesn't yet
have a server_term field set and tells the server to create items to
track the client.
A server processing this request can create the items and then shut down
before the client is able to receive the reply. They'll resend the
greeting without server_term but then the next server will get -EEXIST
errors as it tries to create items for the client. This causes the
connection to break, which the client tries to reestablish, and the
pattern repeats indefinitely.
The fix is to simply recognize that -EEXIST is acceptable during item
creation. Server message handlers always have to address the case where
a resent message was already processed by a previous server but its
response didn't make it to the client.
Signed-off-by: Zach Brown <zab@versity.com>
Remove an old client info field from the unmount barrier mechanism which
was removed a while ago. It used to be compared to a super field to
decide to finish unmount without reconnecting but now we check for our
mounted_client item in the server's btree.
Signed-off-by: Zach Brown <zab@versity.com>
Define a family field, and add a union for IPv4 and v6 variants, although
v6 is not supported yet.
The family field is now used to determine the presence of an address in
a quorum slot, instead of checking if the addr is zero.
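A sketch of what such a layout can look like; the struct, field names,
and widths here are illustrative assumptions, not the actual on-disk
definition:

#include <linux/types.h>

/* illustrative only -- not the real scoutfs address structs */
enum {
	HYP_AF_NONE = 0,	/* empty quorum slot */
	HYP_AF_IPV4 = 1,
	HYP_AF_IPV6 = 2,	/* defined but not yet supported */
};

struct hyp_quorum_addr {
	__u8 family;
	__u8 __pad;
	__be16 port;
	union {
		__be32 v4_addr;
		__u8 v6_addr[16];
	};
};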
Signed-off-by: Andy Grover <agrover@versity.com>
The block-stale-reads test was built from the ashes of a test that
used counters and triggers to work with the btree when it was
only used on the server.
The initial quick translation to try to trigger block cache retries
while the forest called the btree got a lot wrong. It was still trying
to use a 'cl' variable that no longer referred to the client, the
trigger helpers now call statfs to find paths and can end up triggering
themselves, and stale reads can now happen on many more counters
throughout the system while we're working -- not just the one from our
trigger.
This fixes it up to consistently use fs numbers instead of
the silly stale cl variable and be less sensitive to triggers firing and
counter differences.
Signed-off-by: Zach Brown <zab@versity.com>
t_trigger_arm always output the value of the trigger after arming on the
premise that tests required the trigger being armed. In the process of
showing the trigger it calls a bunch of t_ helpers that build the path
to the trigger file using statfs_more to get the rid of mounts.
If the trigger being armed is in the server's mount and that specific
trigger is fired by the server's statfs_more request processing then the
trigger can fire before we read its value. Tests can inconsistently
fail as the golden output shows the trigger being armed or not,
depending on whether it was in the server's mount.
t_trigger_arm_silent doesn't output the value of the armed trigger. It
can be used for low level triggers that don't rely on reading the
trigger's value to discover that their effect has happened.
Signed-off-by: Zach Brown <zab@versity.com>
Tests can use t_counter_diff to put a message in their golden output
when a specific change in counters is expected. This adds
t_counter_diff_changed to output a message that indicates change or not,
for tests that want to see counters change but the amount of change
doesn't need to be precisely known.
Signed-off-by: Zach Brown <zab@versity.com>
Each transaction maintains a global list of inodes to sync. It checks
the inode and adds it in each write_end call per OS page. Locking and
unlocking the global spinlock was showing up in profiles. At the very
least, we can only get the lock once per large file that's written
during a transaction. This will reduce spinlock traffic on the lock by
the number of pages written per file. We'll want a better solution in
the long run, but this helps for now.
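A sketch of the idea with hypothetical struct and field names: test the
inode's list membership outside the global lock and recheck it under the
lock, so write_end only pays for the spinlock once per file per
transaction.

#include <linux/list.h>
#include <linux/spinlock.h>

/* hypothetical stand-ins for the transaction and inode info structs */
struct hyp_trans_info {
	spinlock_t lock;
	struct list_head dirty_inodes;
};

struct hyp_inode_info {
	struct list_head trans_entry;
};

static void hyp_track_dirty_inode(struct hyp_trans_info *tri,
				  struct hyp_inode_info *si)
{
	/* cheap unlocked check; write_end calls this once per page */
	if (!list_empty(&si->trans_entry))
		return;

	spin_lock(&tri->lock);
	if (list_empty(&si->trans_entry))
		list_add_tail(&si->trans_entry, &tri->dirty_inodes);
	spin_unlock(&tri->lock);
}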
Signed-off-by: Zach Brown <zab@versity.com>
Each transaction hold makes multiple calls to _alloc_meta_low to see if
the transaction should be committed to refill allocators before the
caller's hold is acquired and they can dirty blocks in the transaction.
_alloc_meta_low was using a spinlock to sample the allocator list_head
blocks to determine if there was space available. The lock and unlock
stores were creating significant cacheline contention.
The _alloc_meta_low calls are higher frequency than allocations. We can
use a seqlock to have exclusive writers and allow concurrent
_alloc_meta_low readers who retry if a writer intervenes.
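A minimal sketch of the seqlock read side with assumed names; the
allocator's update path would wrap its list_head block updates in
write_seqlock()/write_sequnlock():

#include <linux/seqlock.h>

/* hypothetical allocator state with a seqlock protecting the totals */
struct hyp_meta_alloc {
	seqlock_t seqlock;
	u64 avail_blocks;
	u64 freed_entries;
};

static bool hyp_alloc_meta_low(struct hyp_meta_alloc *ma, u64 needed)
{
	unsigned int seq;
	u64 avail, freed;

	do {
		seq = read_seqbegin(&ma->seqlock);
		avail = ma->avail_blocks;
		freed = ma->freed_entries;
	} while (read_seqretry(&ma->seqlock, seq));

	/* concurrent readers retry if a writer intervened; no shared stores */
	return avail < needed || freed < needed;
}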
Signed-off-by: Zach Brown <zab@versity.com>
We saw the transaction info lock showing up in profiles. We were doing
quite a lot of work with that lock held. We can remove it entirely and
use an atomic.
Instead of a locked holders count and writer boolean we can use an
atomic holders and have a high bit indicate that the write_func is
pending. This turns the lock/unlock pairs in hold and release into
atomic inc/cmpxchg/dec operations.
Then we were checking allocators under the trans lock. Now that we have
an atomic holders count we can increment it to prevent the writer from
committing and release it after the checks if we need another commit
before the hold.
And finally, we were freeing our allocated reservation struct under the
lock. We weren't actually doing anything with the reservation struct so
we can use journal_info as the nested hold counter instead of having it
point to an allocated and freed struct.
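A sketch of the hold fast path under those assumptions; the
writer-pending flag is shown as a reserved high bit in the holders count
and all names are hypothetical:

#include <linux/atomic.h>
#include <linux/types.h>

#define HYP_TRANS_WRITER_PENDING	(1 << 30)	/* reserved high bit */

/* try to take a hold unless the commit writer is pending */
static bool hyp_trans_try_hold(atomic_t *holders)
{
	int cur = atomic_read(holders);
	int old;

	for (;;) {
		if (cur & HYP_TRANS_WRITER_PENDING)
			return false;	/* wait for the commit to finish */
		old = atomic_cmpxchg(holders, cur, cur + 1);
		if (old == cur)
			return true;
		cur = old;
	}
}

static void hyp_trans_release(atomic_t *holders)
{
	/* the pending writer proceeds once the holder count drains */
	atomic_dec(holders);
}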
Signed-off-by: Zach Brown <zab@versity.com>
As the implementation shifted away from the ring of btree blocks and LSM
segments we lost callers to all these triggers. They're unused and can
be removed.
Signed-off-by: Zach Brown <zab@versity.com>
The previous test that triggered re-reading blocks, as though they were
stale, was written in the era where it only hit btree blocks and
everything else was stored in LSM segments.
This reworks the test to make it clear that it affects all our block
readers today. The test only exercises the core read retry path, but it
could be expanded to test callers retrying with newer references after
they get -ESTALE errors.
Signed-off-by: Zach Brown <zab@versity.com>
Our block cache consistency mechanism allows readers to try and read
stale block references. They check block headers of the block they read
to discover if it has been modified and they should retry the read with
newer block references.
For this to be correct the block contents can't change under the
readers. That's obviously true in the simple imagined case of one node
writing and another node reading. But we also have the case where the
stale reader and dirtying writer can be concurrent tasks in the same
mount which share a block cache.
There were two failure cases that derive from the order of readers and
writers working with blocks.
If the reader goes first, the writer could find the existing block in
the cache and modify it while the reader assumes that it is read only.
The fix is to have the writer always remove any existing cached block
and insert a newly allocated block into the cache with the header fields
already changed. Any existing readers will still have their cached
block references and any new readers will see the modified headers and
return -ESTALE.
The next failure comes from readers trying to invalidate dirty blocks
when they see modified headers. They assumed that the existing cached
block was old and could be dropped so that a new current version could
be read. But in this case a local writer has clobbered the reader's
stale block and the reader should immediately return -ESTALE.
Signed-off-by: Zach Brown <zab@versity.com>
To create dirty blocks in memory each block type caller currently gets a
reference on a created block and then dirties it. The reference it gets
could be an existing cached block that stale readers are currently
using. This creates a problem with our block consistency protocol where
writers can dirty and modify cached blocks that readers are currently
reading in memory, leading to read corruption.
This commit is the first step in addressing that problem. We add a
scoutfs_block_dirty_ref() call which returns a reference to a dirtied
block from the block core in one call. We're only changing the callers
in this patch but we'll be reworking the dirtying mechanism in an
upcoming patch to avoid corrupting readers.
Signed-off-by: Zach Brown <zab@versity.com>
Update scoutfs print to use the new block_ref struct instead of the
handful of per-block type ref structs that we had accumulated.
Signed-off-by: Zach Brown <zab@versity.com>
Each of the different block types had a reading function that read a
block and then checked their reference struct for their block type.
This gets rid of each block reference type and has a single block_ref
type which is then checked by a single ref reading function in the block
core. By putting ref checking in the core we no longer have to export
checking the block header crc, verifying headers, invalidating blocks,
or even reading raw blocks themselves. Everyone reads refs and leaves
the checking up to the core.
The changes don't have a significant functional effect. This is mostly
just changing types and moving code around. (There are some changes to
visible counters.)
This shares code, which is nice, but this is putting the block reference
checking in one place in the block core so that in a few patches we can
fix problems with writers dirtying blocks that are being read.
Signed-off-by: Zach Brown <zab@versity.com>
The block cache wasn't safely racing readers walking the rcu radix_tree
and the shrinker walking the LRU list. A reader could get a reference
to a block that had been removed from the radix and was queued for
freeing. It'd clobber the free's llist_head union member by putting the
block back on the lru and both the read and free would crash as they
each corrupted each other's memory. We rarely saw this in heavy load
testing.
The fix is to clean up the use of rcu, refcounting, and freeing.
First, we get rid of the LRU list. Now we don't have to worry about
resolving racing accesses of blocks between two independent structures.
Instead of the shrinker walking the LRU list, we can mark blocks on
access so that the shrinker can walk all blocks randomly and expect to
quickly find candidates to shrink.
To make it easier to concurrently walk all the blocks we switch to the
rhashtable instead of the radix tree. It also has nice per-bucket
locking so we can get rid of the global lock that protected the LRU list
and radix insertion. (And it isn't limited to 'long' keys so we can get
rid of the check for max meta blknos that couldn't be cached.)
Now we need to tighten up when read can get a reference and when shrink
can remove blocks. We have presence in the hash table hold a refcount
but we make it a magic high bit in the refcount so that it can be
differentiated from other references. Now lookup can atomically get a
reference to blocks that are in the hash table, and shrinking can
atomically remove blocks when it is the only other reference.
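A sketch of that refcount rule with hypothetical names: presence in the
hash table holds a distinguished high bit, lookup only pins blocks that
still have it, and the shrinker removes a block only when that bit is
the sole remaining reference.

#include <linux/atomic.h>

#define HYP_BLOCK_HASHED	(1 << 30)	/* presence in the rhashtable */

/* rcu lookup: only take a reference on blocks still in the hash table */
static bool hyp_block_get_if_hashed(atomic_t *refcount)
{
	int refs = atomic_read(refcount);
	int old;

	while (refs & HYP_BLOCK_HASHED) {
		old = atomic_cmpxchg(refcount, refs, refs + 1);
		if (old == refs)
			return true;
		refs = old;
	}
	return false;
}

/* shrinker: drop the hashed bit only if no one else holds a reference */
static bool hyp_block_remove_if_idle(atomic_t *refcount)
{
	return atomic_cmpxchg(refcount, HYP_BLOCK_HASHED, 0) ==
	       HYP_BLOCK_HASHED;
}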
We also clean up freeing a bit. It has to wait for the rcu grace period
to ensure that no other rcu readers can reference the blocks it's
freeing. It has to iterate over the list with _safe because it's
freeing as it goes.
Interestingly, when reworking the shrinker I noticed that we weren't
scaling the nr_to_scan from the pages we returned in previous shrink
calls back to blocks. We now divide the input from pages back into
blocks.
Signed-off-by: Zach Brown <zab@versity.com>
We had a mutex protecting the list of farewell requests. The critical
sections are all very short so we can use a spinlock and be a bit
clearer and more efficient. While we're at it, refactor freeing so it
happens outside of the critical section.
Signed-off-by: Zach Brown <zab@versity.com>
The server has to be careful to only send farewell responses to quorum
clients once it knows that it won't need their vote to elect a leader to
serve remaining clients.
The logic for doing this forgot to take non-quorum clients into account.
It would send farewell responses to all of the final majority of quorum
members once they all tried to unmount. This could leave non-quorum
clients hung in unmount trying to send their farewell requests.
The fix is to count mounted_clients items for non-quorum clients and hold
off on sending farewell requests to the final majority until those
non-quorum clients have unmounted.
Signed-off-by: Zach Brown <zab@versity.com>
The recent quorum and unmount fixes should have addressed the failures
we were seeing in the mount-unmount-race test.
Signed-off-by: Zach Brown <zab@versity.com>
Update the man pages with descriptions of the new mkfs -Q quorum slot
configuration and quorum_slot_nr mount option.
Signed-off-by: Zach Brown <zab@versity.com>
We mask device numbers in command output to 0:0 so that we can have
consistent golden test output. The device number matching regex
responsible for this missed a few digits.
It didn't show up until we both tested enough mounts to get larger
device minor numbers and fixed multi-mount consistency so that the
affected tests didn't fail for other reasons.
Signed-off-by: Zach Brown <zab@versity.com>
Our test unmount function unmounted the device instead of the mount
point. It was written this way back in an old version of the harness
which didn't track mount points.
Now that we have mount points, we can just unmount that. This stops the
umount command from having to search through all the current mounts
looking for the mountpoint for the device it was asked to unmount.
Signed-off-by: Zach Brown <zab@versity.com>
I got a test failure where waiting returned an error, but it wasn't
clear what the error was or where it might have come from. Add more
logging so that we learn more about what might have gone wrong.
Signed-off-by: Zach Brown <zab@versity.com>
Update the example configuration in the README to specify the quorum
slots in mkfs arguments and mount options.
Signed-off-by: Zach Brown <zab@versity.com>
The mounted_clients btree stores items to track mounted clients. It's
modified by multiple greeting workers and the farewell work.
The greeting work was serialized by the farewell_mutex, but the
modifications in the farewell thread weren't protected. This could
result in modifications between the threads being lost if the dirty
block reference updates raced in just the right way. I saw this in
testing with deletions in farewell being lost and then that lingering
item preventing unmount because the server thought it had to wait for a
remaining quorum member to unmount.
We fix this by adding a mutex specifically to protect the
mounted_clients btree in the server.
Signed-off-by: Zach Brown <zab@versity.com>
As clients unmount they send a farewell request that cleans up
persistent state associated with the mount. The client needs to be sure
that it gets processed, and we must maintain a majority of quorum
members mounted to be able to elect a server to process farewell
requests.
We had a mechanism using the unmount_barrier fields in the greeting and
super_block to let the final unmounting quorum majority know that their
farewells have been processed and that they didn't need to keep trying
to reconnect.
But we missed that we also need this out-of-band farewell handling
signal for non-quorum member clients. The server can send farewells to
a non-member client as well as to the final majority and then tear down
all the connections before the non-quorum client can see its farewell
response. The non-quorum client also needs to be able to know that its
farewell has been processed before the server lets the final majority
unmount.
We can remove the custom unmount_barrier method and instead have all
unmounting clients check for their mounted_client item in the server's
btree. This item is removed as the last step of farewell processing so
if the client sees that it has been removed it knows that it doesn't
need to resend the farewell and can finish unmounting.
This fixes a bug where a non-quorum unmount could hang if it raced with
the final majority unmounting. I was able to trigger this hang in our
tests with 5 mounts and 3 quorum members.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs mkfs had two block writing functions: write_block to fill out
some block header fields including crc calculation, and then
write_block_raw to pwrite the raw buffer to the bytes in the device.
These were used inconsistently as blocks came and went over time. Most
callers filled out all the header fields themselves and called the raw
writer. write_block was only used for super writing, which made sense
because it clobbered the block's header with the super header so the
caller's set header magic and seq fields would be lost.
This cleans up the mess. We only have one block writer and the caller
provides all the hdr fields. Everything uses it instead of filling out
the fields themselves and calling the raw writer.
Signed-off-by: Zach Brown <zab@versity.com>
Add macros for stringifying either the name of a macro or its value. In
keeping with making our utils/ sort of look like kernel code, we use the
kernel stringify names.
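The kernel-style pair looks like this; the example macro in the comment
is hypothetical:

/* expand a macro argument and then stringify the result */
#define __stringify_1(x...)	#x
#define __stringify(x...)	__stringify_1(x)

/*
 * With a hypothetical '#define HYP_BLOCK_SHIFT 16',
 * __stringify(HYP_BLOCK_SHIFT) becomes "16" while
 * __stringify_1(HYP_BLOCK_SHIFT) would stay "HYP_BLOCK_SHIFT".
 */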
Signed-off-by: Zach Brown <zab@versity.com>
Previously quorum configuration specified the number of votes needed to
elect the leader. This was an excessive amount of freedom in the
configuration of the cluster which created all sorts of problems which
had to be designed around.
Most acutely, though, it required a probabilistic mechanism for mounts
to persistently record that they're starting a server so that future
servers could find and possibly fence them. They would write to a lot
of quorum blocks and trust that it was unlikely that future servers
would overwrite all of their written blocks. Overwriting was always
possible, which would be bad enough, but it also required so much IO
that we had to use long election timeouts to avoid spurious fencing.
These longer timeouts had already gone wrong on some storage
configurations, leading to hung mounts.
To fix this and other problems we see coming, like live membership
changes, we now specifically configure the number and identity of mounts
which will be participating in quorum voting. With specific identities,
mounts now have a corresponding specific block they can write to and
which future servers can read from to see if they're still running.
We change the quorum config in the super block from a single
quorum_count to an array of quorum slots which specify the address of
the mount that is assigned to that slot. The mount argument to specify
a quorum voter changes from "server_addr=$addr" to "quorum_slot_nr=$nr"
which specifies the mount's slot. The slot's address is used for udp
election messages and tcp server connections.
Now that we specifically have configured unique IP addresses for all the
quorum members, we can use UDP messages to send and receive the vote
messages in the raft protocol to elect a leader. The quorum code doesn't
have to read and write disk block votes and is a more reasonable core
loop that either waits for received network messages or timeouts to
advance the raft election state machine.
The quorum blocks are now used for slots to store their persistent raft
term and to set their leader state. We have event fields in the block
to record the timestamp of the most recent interesting events that
happened to the slot.
Now that raft doesn't use IO, we can leave the quorum election work
running in the background. The raft work in the quorum members is
always running so we can use a much more typical raft implementation
with heartbeats. Critically, this decouples the client and election
life cycles. Quorum is always running and is responsible for starting
and stopping the server. The client repeatedly tries to connect to a
server, it has nothing to do with deciding to participate in quorum.
Finally, we add a quorum/status sysfs file which shows the state of the
quorum raft protocol in a member mount and has the last messages that
were sent to or received from the other members.
Signed-off-by: Zach Brown <zab@versity.com>
As a client unmounts it sends a farewell request to the server. We have
to carefully manage unmounting the final quorum members so that there is
always a remaining quorum to elect a leader to start a server to process
all their farewell requests.
The mechanism for doing this described these clients as "voters".
That's not really right, in our terminology voters and candidates are
temporary roles taken on by members during a specific election term in
the raft protocol. It's more accurate to describe the final set of
clients as quorum members. They can be voters or candidates depending
on how the raft protocol timeouts work out in any given election.
So we rename the greeting flag, mounted client flag, and the code and
comments on either side of the client and server to be a little clearer.
This only changes symbols and comments, there should be no functional
change.
Signed-off-by: Zach Brown <zab@versity.com>
As we read the super we check the first and last meta and data blkno
fields. The tests weren't updated as we moved from one device to two
metadata and data devices.
Add a helper that tests the range for the device and test both meta and
data ranges fully, instead of only testing the endpoints of each and
assuming they're related because they're living on one device.
Signed-off-by: Zach Brown <zab@versity.com>
The mount-unmount-race test is occasionally hanging, disable it while we
debug it and have test coverage for unrelated work.
Signed-off-by: Zach Brown <zab@versity.com>
This is checked for by the kernel ioctl code, so giving unaligned values
will return an error, instead of aborting with an assert.
Signed-off-by: Andy Grover <agrover@versity.com>
As a core principle, all server message processing needs to be safe to
replay as servers shut down and requests are resent to new servers.
The advance_seq handler got this wrong. It would only try to remove a
trans_seq item for the seq sent by the client before inserting a new
item for the next seq. This change could be committed, and then the
reply lost, as the server shut down. The next server would process the
resent request but wouldn't find the old item for the seq that the
client sent, and would ignore the new item that the previous server
inserted. It would then insert another greater seq for the same client.
This would leave behind a stale old trans_seq that would be returned as
the last_seq which would forever limit the results that could be
returned from the seq index walks.
This fix is to always remove all previous seq items for the client
before inserting a new one. This creates O(clients) server work, but
it's minimal.
This manifested as occasional simple-inode-index test failures (say 1 in
5?) which would trigger if the unmounts during previous tests happened
to have advance_seq resent across server shutdowns. With this
change the test now reliably passes.
Signed-off-by: Zach Brown <zab@versity.com>
We've grown some test names that are prefixes of others
(createmany-parallel, createmany-parallel-mounts). When we're searching
for lines with the test name we have to search for the exact test name,
by terminating the name with a space, instead of searching for a line
that starts with the test name.
This fixes strange output and saved passed stats for the names that
share a prefix.
Signed-off-by: Zach Brown <zab@versity.com>
The message indicating that xfstests output was now being shown was
mashed up against the previous passed stats and it was gross and I hated
it.
Signed-off-by: Zach Brown <zab@versity.com>
When running in debug kernels in guests we can really bog down things
enough to trigger hrtimer warnings. I don't think there's much we can
reasonably do about that.
Signed-off-by: Zach Brown <zab@versity.com>
Farewell work is queued by farewell message processing. Server shutdown
didn't properly wait for pending farewell work to finish before tearing
down. As the server work destroyed the server's connection the farewell
work could still be running and try to send responses down the socket.
We make the server more carefully avoid queueing farewell work if it's
in the process of shutting down and wait for farewell work to finish
before destroying the server's resources.
This fixed all manner of crashes that were seen in testing when a bunch
of nodes unmounted, creating farewell work on the server as it itself
unmounted and destroyed the server.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_srch_get_compact() is building up a compaction request which has
a list of srch files to read and sort and write into a new srch file.
It finds input files by searching for a sufficient number of similar
files: first any unsorted log files and then sorted log files that are
around the same size.
It finds the files by using btree next on the srch zone which has types
for unsorted srch log files, sorted srch files, but also pending and
busy compaction items.
It was being far too cute about iterating over different key types. It
was trying to adapt to finding the next key and was making assumptions
about the order of key types. It didn't notice that the pending and
busy key types followed log and sorted and would generate EIO when it
ran into them and found their value length didn't match what it was
expecting.
Rework the next item ref parsing so that it returns -ENOENT if it gets
an unexpected key type, then look for the next key type when checking
for -ENOENT.
Signed-off-by: Zach Brown <zab@versity.com>
Add a function that tests can use to skip when the metadata device isn't
large enough. I thought we needed to avoid enospc in a particular test,
but it turns out the test's failure was unrelated. So this isn't used
for now but it seems nice to keep around.
Signed-off-by: Zach Brown <zab@versity.com>
The grace period is intended to let lock holders squeeze in more bulk
work before another node pulls the lock out from under them. The length
of the delay is a balance between getting more work done per lock hold
and adding latency to ping-ponging workloads.
The current grace period was too short. To do work in the conflicting
case you often have to read the result that the other mount wrote as you
invalidated their lock. The test was written in the LSM world where
we'd effectively read a single level 0 1MB segment. In the btree world
we're checking bloom blocks and reading the other mount's btree. It has
more dependent read latency.
So we turn up the grace period to let conflicting readers squeeze in
more work before pulling the lock out from under them. This value was
chosen to make lock-conflicting-batch-commit pass in guests sharing nvme
metadata devices in debugging kernels.
Signed-off-by: Zach Brown <zab@versity.com>
The test had a silly typo in the label it put on the time it took mounts
to perform conflicting metadata changes.
Signed-off-by: Zach Brown <zab@versity.com>
When we're splicing in dentries in lookup we can be splicing the result
of changes on other nodes into a stale dcache. The stale dcache might
contain dir entries and the dcache does not allow aliased directories.
Use d_materialise_unique() to splice in dir inodes so that we remove all
aliased dentries which must be stale.
We can still use d_splice_alias() for all other inode types. Any
existing stale dentries will fail revalidation before they're used.
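A sketch of the resulting splice decision at the tail of lookup; the
wrapper name is hypothetical and this assumes a kernel where
d_materialise_unique() is still a separate call from d_splice_alias():

#include <linux/dcache.h>
#include <linux/fs.h>

static struct dentry *hyp_splice_lookup_result(struct inode *inode,
					       struct dentry *dentry)
{
	/*
	 * Directories can't be aliased, so drop any stale aliased
	 * dentries left over from changes made on other nodes.
	 */
	if (inode && S_ISDIR(inode->i_mode))
		return d_materialise_unique(dentry, inode);

	/* other inode types: stale dentries will fail revalidation */
	return d_splice_alias(inode, dentry);
}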
Signed-off-by: Zach Brown <zab@versity.com>
We can lose interesting state if the mounts are unmounted as tests fail,
only unmount if all the tests pass.
Signed-off-by: Zach Brown <zab@versity.com>
Weirdly, run-tests was treating trace_printk not as an option to enable
trace_printk() traces but as an option to print trace events to the
console with printk? That's not a thing.
Make -P really enable trace_printk tracing and collect it as it would
for enabled trace events. It needs to be treated separately from the -t
options that enable trace events.
While we're at it treat the -P trace dumping option as a stand-alone
option that works without -t arguments.
Signed-off-by: Zach Brown <zab@versity.com>
run-tests.sh has a -t argument which takes a whitespace separated string
of globs of events to enable. This was hard to use and made it very
easy to accidentally expand the globs at the wrong place in the script.
This makes each -t argument specify a single word glob which is stored
in an array so the glob isn't expanded until it's applied to the trace
event path. We also add an error for -t globs that didn't match any
events and add a message with the count of -t arguments and enabled
events.
Signed-off-by: Zach Brown <zab@versity.com>
The lock invalidation work function needs to be careful not to requeue
itself while we're shutting down or we can be left with invalidation
functions racing with shutdown. Invalidation calls igrab so we can end
up with an unmount warning that there are still inodes in use.
Signed-off-by: Zach Brown <zab@versity.com>
Add a new distinguishable return value (ENOBUFS) from the allocator for
when the transaction cannot alloc space. This doesn't mean the filesystem is
full -- opening a new transaction may result in forward progress.
Alter fallocate and get_blocks code to check for this err val and retry
with a new transaction. Handling actual ENOSPC can still happen, of
course.
Add counter called "alloc_trans_retry" and increment it from both spots.
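A sketch of the retry shape in those write paths; the transaction,
allocation, and counter helpers named here are stand-ins rather than the
real functions:

#include <linux/fs.h>

/* hypothetical stand-ins, not the real scoutfs functions */
int hyp_hold_trans(struct super_block *sb);
void hyp_release_trans(struct super_block *sb);
void hyp_force_commit(struct super_block *sb);
void hyp_count_alloc_trans_retry(struct super_block *sb);
int hyp_alloc_data_block(struct inode *inode, u64 iblock);

static int hyp_alloc_block_with_retry(struct super_block *sb,
				      struct inode *inode, u64 iblock)
{
	int ret;

	for (;;) {
		ret = hyp_hold_trans(sb);
		if (ret)
			return ret;

		ret = hyp_alloc_data_block(inode, iblock);
		hyp_release_trans(sb);

		if (ret != -ENOBUFS)
			return ret;	/* success, or a real error like -ENOSPC */

		/* out of room in this transaction, not in the filesystem */
		hyp_count_alloc_trans_retry(sb);
		hyp_force_commit(sb);	/* a new transaction refills allocators */
	}
}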
Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: fixed up write_begin error paths]
The item cache page life cycle is tricky. There are no proper page
reference counts, everything is done by nesting the page rwlock inside
item_cache_info rwlock. The intent is that you can only reference pages
while you hold the rwlocks appropriately. The per-cpu page references
are outside that locking regime so they add a reference count. Now
there are reference counts for the main cache index reference and for
each per-cpu reference.
The end result of all this is that you can only reference pages outside
of locks if you're protected by references.
Lock invalidation messed this up by trying to add its right split page
to the lru after it was unlocked. Its page reference wasn't protected
at this point. Shrinking could be freeing that page, and so it could be
putting a freed page's memory back on the lru.
Shrinking had a little bug that it was using list_move to move an
initialized lru_head list_head. It turns out to be harmless (list_del
will just follow pointers to itself and set itself as next and prev all
over again), but boy does it catch one's eye. Let's remove all
confusion and drop the reference while holding the cinf->rwlock instead
of trying to optimize freeing outside locks.
Finally, the big one: inserting a read item after compacting the page to
make room was inserting through stale parent pointers into the old
pre-compacted page, rather than the new page that was swapped in by
compaction. This left references to a freed page in the page rbtree and
hilarity ensued.
Signed-off-by: Zach Brown <zab@versity.com>
Instead of hashing headers, define an interop version. Do not mount
superblocks that have a different version, either higher or lower.
Since this is pretty much the same as the format hash except it's a
constant, minimal code changes are needed.
Initial dev version is 0, with the intent that version will be bumped to
1 immediately prior to tagging initial release version.
Update README. Fix comments.
Add interop version to notes and modinfo.
Signed-off-by: Andy Grover <agrover@versity.com>
Add a relatively constrained ioctl that moves extents between regular
files. This is intended to be used by tasks which combine many existing
files into a much larger file without reading and writing all the file
contents.
Signed-off-by: Zach Brown <zab@versity.com>
By convention we have the _IO* ioctl definition after the argument
structs and ALLOC_DETAIL got it a bit wrong so move it down.
Signed-off-by: Zach Brown <zab@versity.com>
We were checking for the wrong magic value.
We now need to use -f when running mkfs in run-tests for things to work.
Signed-off-by: Andy Grover <agrover@versity.com>
This more closely matches stage ioctl and other conventions.
Also change release code to use offset/length nomenclature for consistency.
Signed-off-by: Andy Grover <agrover@versity.com>
Update for cli args and options changes. Reorder subcommands to match
scoutfs built-in help.
Consistent ScoutFS capitalization.
Tighten up some descriptions and verbiage for consistency and omit
descriptions of internals in a few spots.
Add SEE ALSO for blockdev(8) and wipefs(8).
Signed-off-by: Andy Grover <agrover@versity.com>
Make it static and then use it both for argp_parse as well as
cmd_register_argp.
Split commands into five groups, to help understanding of their
usefulness.
Mention that each command has its own help text, and that we are being
fancy to keep the user from having to give the fs path.
Signed-off-by: Andy Grover <agrover@versity.com>
This has some fancy parsing going on, and I decided to just leave it
in the main function instead of going to the effort to move it all
to the parsing function.
Signed-off-by: Andy Grover <agrover@versity.com>
Support max-meta-size and max-data-size using KMGTP units with rounding.
Detect other fs signatures using blkid library.
Detect ScoutFS super using magic value.
Move read_block() from print.c into util.c since blkid also needs it.
Signed-off-by: Andy Grover <agrover@versity.com>
Print a warning if printing a data dev; you probably wanted the meta dev.
Change read_block to return err value. Otherwise there are confusing
ENOMEM messages when pread() fails. e.g. try to print /dev/null.
Signed-off-by: Andy Grover <agrover@versity.com>
Make offset and length optional. Allow size units (KMGTP) to be used
for offset/length.
release: Since off/len are no longer given in 4k blocks, round offset and
length to 4KiB, down and up respectively. Emit a message if rounding
occurs.
Make version a required option.
stage: change ordering to src (the archive file) then the dest (the
staged file).
Signed-off-by: Andy Grover <agrover@versity.com>
With many concurrent writers we were seeing excessive commits forced
because it thought the data allocator was running low. The transaction
was checking the raw total_len value in the data_avail alloc_root for
the number of free data blocks. But this read wasn't locked, and
allocators could completely remove a large free extent and then
re-insert a slightly smaller free extent as they perform their
allocation. The transaction could see a temporarily very small total_len
and trigger a commit.
Data allocations are serialized by a heavy mutex so we don't want to
have the reader try and use that to see a consistent total_len. Instead
we create a data allocator run-time struct that has a consistent
total_len that is updated after all the extent items are manipulated.
This also gives us a place to put the caller's cached extent so that it
can be included in the total_len; previously it wasn't included in the
free total that the transaction saw.
The file data allocator can then initialize and use this struct instead
of its raw use of the root and cached extent. Then the transaction can
sample its consistent total_len that reflects the root and cached
extent.
A subtle detail is that fallocate can't use _free_data to return an
allocated extent on error to the avail pool. It instead frees into the
data_free pool like normal frees. It doesn't really matter that this
could prematurely drain the avail pool because it's in an error path.
Signed-off-by: Zach Brown <zab@versity.com>
Implement a fallback mechanism for opening paths to a filesystem. If
explicitly given, use that. If env var is set, use that. Otherwise, use
current working directory.
Use wordexp to expand ~, $HOME, etc.
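A sketch of such a fallback in the utils, with a made-up environment
variable name:

#include <stdlib.h>
#include <string.h>
#include <wordexp.h>

/* resolve the fs path: explicit argument, then env var, then the cwd */
static char *hyp_resolve_fs_path(const char *arg)
{
	const char *raw = arg;
	wordexp_t we;
	char *path = NULL;

	if (!raw)
		raw = getenv("SCOUTFS_PATH");	/* hypothetical variable name */
	if (!raw)
		raw = ".";

	/* expand ~, $HOME, etc. the way a shell would */
	if (wordexp(raw, &we, WRDE_NOCMD) == 0) {
		if (we.we_wordc == 1)
			path = strdup(we.we_wordv[0]);
		wordfree(&we);
	}

	return path;
}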
Signed-off-by: Andy Grover <agrover@versity.com>
Finally get rid of the last silly vestige of the ancient 'ci' name and
update the scoutfs_inode_info pointers to si. This is just a global
search and replace, nothing functional changes.
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which stages a file in multiple parts while a long-lived
process is blocking on offline extents trying to compare the file to the
known contents.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we have full precision extents a writer with i_mutex and a page
lock can be modifying large extent items which cover much of the
surrounding pages in the file. Readers can be in a different page with
only the page lock and try to work with extent items as the writer is
deleting and creating them.
We add a per-inode rwsem which just protects file extent item
manipulation. We try to acquire it as close to the item use as possible
in data.c which is the only place we work with file extent items.
This stops rare read corruption we were seeing where get_block in a
reader was racing with extent item deletion in a stager at a further
offset in the file.
Signed-off-by: Zach Brown <zab@versity.com>
Move the main scoutfs README.md from the old kmod/ location into the top
of the new single repository. We update the language and instructions
just a bit to reflect that we can checkout and build the module and
utilities from the single repo.
Signed-off-by: Zach Brown <zab@versity.com>
The README in tests/ had gone a bit stale. While it was originally
written to be a README.md displayed in the github repo, we can
still use it in place as a quick introduction to the tests.
Signed-off-by: Zach Brown <zab@versity.com>
When we had three repos the run-tests harness helped by checking
branches in kmod and utils repos to build and test. Now that we have
one repo we can just use the sibling kmod/ and utils/ dirs in the repo.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we're in one repo utils can get its format and ioctl headers
from the authoritative kmod files. When we're building a dist tarball
we copy the files over so that the build from the dist tarball can use
them.
Signed-off-by: Zach Brown <zab@versity.com>
For some reason, the make dist rule in kmod/ put the spec file in a
scoutfs-$ver/ directory, instead of scoutfs-kmod-$ver/ like the rest of
the files and instead of scoutfs-utils-$ver/ that the spec file for
utils is put in the utils dist tarball.
This adds -kmod to the path for the spec file so that it matches the
rest of the kmod dist tarball.
Signed-off-by: Zach Brown <zab@versity.com>
Add a trivial top-level Makefile that just runs Make in all the subdirs.
This will probably expand over time.
Signed-off-by: Zach Brown <zab@versity.com>
Add a utility that mimics our search_xattrs ioctl with directory entry
walking and fgetxattr as efficiently as it can so we can use it to test
large file populations.
Signed-off-by: Zach Brown <zab@versity.com>
The search_xattrs ioctl is only going to find entries for xattrs with
the .srch. tag which create srch entries as they're created and
destroyed. Export the xattr tag parsing so that the ioctl can return
-EINVAL for xattrs which don't have the scoutfs prefix and the .srch.
tag.
Signed-off-by: Zach Brown <zab@versity.com>
Hash collisions can lead to multiple xattr ids in an inode being found
for a given name hash value. If this happens we only want to return the
inode number once.
Signed-off-by: Zach Brown <zab@versity.com>
Compacting very large srch files can use all of a given operation's
metadata allocator. When this happens we record the compaction's
position in the srch files in the pending item.
We could lose entries when this happens because the kway_next callback
would advance the srch file position as it read entries and put them in
the tournament tree leaves, not as it put them in the output file. We'd
continue from the entries that were next to go in the tournament leaves,
not from what was in the leaves.
This refactors the kway merge callbacks to differentiate between getting
entries at the position and advancing the positions. We initialize the
tournament leaves by getting entries at the positions and only advance
the position as entries leave the tournament tree and are either stored
in the output srch files or are dropped.
Signed-off-by: Zach Brown <zab@versity.com>
In the rare case that searching for xattrs only finds deletions within
its window it retries the search past the window. The end entry is
inclusive and is the last entry that can be returned. When retrying the
search we need to start from the entry after that to ensure forward
progress.
Signed-off-by: Zach Brown <zab@versity.com>
We have to limit the number of srch entries that we'll track while
performing a search for all the inodes that contain xattrs that match
the search hash value.
As we hit the limit on the number of entries to track we have to drop
entries. As we drop entries we can't return any inodes for entries
past the dropped entries. We were updating the end point of the search
as we dropped entries past the tracked set, but we weren't updating the
search end point if we dropped the last currently tracked entry.
And we were setting the end point to the dropped entry, not to the entry
before it. This could lead us to spuriously returning deleted entries
if we drop the creation entry and then allow tracking its deletion
later.
This fixes both those problems. We now properly set the end point to
just before the dropped entry for all entries that we drop.
Signed-off-by: Zach Brown <zab@versity.com>
The k-way merge used by srch file compaction only dropped the second
entry in a pair of duplicate entries. Duplicate entries are both
supposed to be removed so that entries for removed xattrs don't take up
space in the files.
This both drops the second entry and removes the first encoded entry.
As we encode entries we remember their starting offset and the previous
entry that they were encoded from. When we hit a duplicate entry
we undo the encoding of the previous entry.
This only works within srch file blocks. We can still have duplicate
entries that span blocks but that's unlikely and relatively harmless.
Signed-off-by: Zach Brown <zab@versity.com>
The search_xattrs ioctl looks for srch entries in srch files that map
the caller's hashed xattr name to inodes. As it searches it maintains a
range of entries that it is looking for. When it searches sorted srch
files for entries it first performs a binary search for the start of the
range and then iterates over the blocks until it reaches the end of its
range.
The binary search for the start of the range was a bit wrong. If the
start of the range was less than all the blocks then the binary search
could wrap the left index, try to get a file block at a negative index,
and return an error for the search.
This is relatively hard to hit in practice. You have to search for the
xattr name with the smallest hashed value and have a sorted srch file
that's just the right size so that blk offset 0 is the last block
compared in the binary search, which sets the right index to -1. With
lots of xattrs, or sorted files of other lengths, the search works fine
and the bug is never hit.
This fixes the binary search so that it specifically records the first
block offset that intersects with the range and tests that the left and
right offsets haven't been inverted. Now that we're not breaking out of
the binary search loop we can more obviously put each block reference
that we get.
Signed-off-by: Zach Brown <zab@versity.com>
The srch code was putting btree item refs even outside of the success
path. This is fine, but they only need to be put when btree ops return
success and
have set the reference.
Signed-off-by: Zach Brown <zab@versity.com>
Dirty items in a client transaction are stored in OS pages. When the
transaction is committed each item is stored in its position in a dirty
btree block in the client's existing log btree. Allocators are refilled
between transaction commits so a given commit must have sufficient meta
allocator space (avail blocks and unused freed entries) for all the
btree blocks that are dirtied.
The number of btree blocks that are written, thus the number of cow
allocations and frees, depends on the number of blocks in the log btree
and the distribution of dirty items amongst those blocks. In a typical
load items will be near each other and many dirty items in smaller
kernel pages will be stored in fewer larger btree blocks.
But with the right circumstances, the ratio of dirty pages to dirty
blocks can be much smaller. With a very large directory and random
entry renames you can easily have 1 btree block dirtied for every page
of dirty items.
Our existing meta allocator fill targets and the number of dirty item
cache pages we allowed did not properly take this into account. It was
possible (and, it turned out, relatively easy to test for with a huge
directory and random renames) to run out of meta avail
blocks while storing dirty items in dirtied btree blocks.
This rebalances our targets and thresholds to make it more likely that
we'll have enough allocator resources to commit dirty items. Instead of
having an arbitrary limit on the number of dirty item cache pages, we
require that a given number of dirty item cache pages have a given
number of allocator blocks available.
We require a decent number of available blocks for each dirty page, so
we increase the server's target number of blocks to give the client so
that it can still build large transactions.
This code is conservative and should not be a problem in practice, but
it's theoretically possible to build a log btree and set of dirty items
that would dirty more blocks than this code assumes. We will probably
revisit this as we add proper support for ENOSPC.
Signed-off-by: Zach Brown <zab@versity.com>
The srch system checks that it has allocator space while deleting srch
files and while merging them and dirtying output blocks. Update the
callers to check for the correct number of avail or freed blocks that it
needs between each check.
Signed-off-by: Zach Brown <zab@versity.com>
Previously, scoutfs_alloc_meta_lo_thresh() returned true when a small
static number of metadata blocks were either available to allocate or
had space for freeing. This didn't make a lot of sense as the correct
number depends on how many allocations each caller will make during
their atomic transaction.
Rework the call to take an argument for the number of avail or freed
blocks available to test. This first pass just uses the existing
number, we'll get to the callers.
Signed-off-by: Zach Brown <zab@versity.com>
Add a test that randomly renames entries in a single large directory.
This has caught bugs in the reservation of allocator resources for
client transactions.
Signed-off-by: Zach Brown <zab@versity.com>
Prefer named to anonymous enums. This helps readability a little.
Use enum as param type if possible (a couple spots).
Remove unused enum in lock_server.c.
Define enum spbm_flags using shift notation for consistency.
Rename get_file_block()'s "gfb" parameter to "flags" for consistency.
Signed-off-by: Andy Grover <agrover@versity.com>
Not initializing wid[] can cause incorrect output.
Also, we only need 6 columns if we reference the array from 0.
Signed-off-by: Andy Grover <agrover@versity.com>
The xfstests generic/067 test is a bit of a stinker in that it's trying
to make sure a mount fails when the device is invalid. It does this
with raw mount calls without any filesystem-specific conventions. Our
mount fails, so the test passes, but not for the reason the test
assumes. It's not a great test. But we expect it to not be great and
produce this message.
Signed-off-by: Zach Brown <zab@versity.com>
Add another expected message that comes from attempting to mount an ext4
filesystem from a device that returns read errors.
Signed-off-by: Zach Brown <zab@versity.com>
The tests were checking that the literal string was zero, which it never
was. Once we check the value of the variable then we notice that the
sense of some tests went from -n || to -n &&, so switch those to -z.
Signed-off-by: Zach Brown <zab@versity.com>
For xfstests, we need to be able to specify both devices for the scratch
device as well.
Using -e and -f for now, but we should really be switching to long options.
Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: minor arg message fixes]
Add -z option to run-tests.sh to specify metadata device.
Do a bunch of things twice.
Fix up setup-error-teardown test.
Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: minor arg message fixes, golden output]
mkfs: Take two block devices as arguments. Write everything to metadata
dev, and the superblock to the data dev. UUIDs match. Differentiate by
checking a bit in a new "flags" field in the superblock.
Refactor device_size() a little. Convert spaces to tabs.
Move code to pretty-print sizes to dev.c so we can use it in error
messages there, as well as in mkfs.c.
print: Include flags in output.
Add -D and -M options for setting max dev sizes
Allow sizes to be specified using units like "K", "G" etc.
Note: -D option replaces -S option, and uses above units rather than
the number of 4k data blocks.
Update man pages for cmdline changes.
Signed-off-by: Andy Grover <agrover@versity.com>
Update the README.md introduction to scoutfs to mention the need for and
use of metadata and data block devices.
Signed-off-by: Zach Brown <zab@versity.com>
Require a second path to the metadata bdev be given via a mount option.
Verify the meta superblock matches the superblock also written to the
data device. Change code as needed in super.c to allow both to be read.
Remove the check for overlapping meta and data blknos, since they are
now on entirely separate bdevs.
Use meta_bdev for superblock, quorum, and block.c reads and writes.
Signed-off-by: Andy Grover <agrover@versity.com>
It was too tricky to pick out the difference between metadata and data
usage in the previous format. This makes it much more clear which
values are for either metadata or data.
Signed-off-by: Zach Brown <zab@versity.com>
Write locks are given an increasing version number as they're granted
which makes its way into items in the log btrees and is used to find the
most recent version of an item.
The initialization of the lock server's next write_version for granted
locks dates back to the initial prototype of the forest of log btrees.
It is only initialized to zero as the module is loaded. This means that
reloading the module, perhaps by rebooting, resets all the item versions
to 0 and can lead to newly written items being ignored in favour of
older existing items with greater versions from a previous mount.
To fix this we initialize the lock server's write_version to the
greatest of all the versions in items in log btrees. We add a field to
the log_trees struct which records the greatest version which is
maintained as we write out items in transactions. These are read by the
server as it starts.
Then lock recovery needs to include the write_version so that the
lock_server can be sure to set the next write_version past the greatest
version in the currently granted locks.
Signed-off-by: Zach Brown <zab@versity.com>
The log_trees structs store the data that is used by client commits.
The primary struct is communicated over the wire so it includes the rid
and nr that identify the log. The _val struct was stored in btree item
values and was missing the rid and nr because those were stored in the
item's key.
It's madness to duplicate the entire struct just to shave off those two
fields. We can remove the _val struct and store the main struct in item
values, including the rid and nr.
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which makes sure that we don't initialize the lock server's
write version to a version less than existing log tree items.
Signed-off-by: Zach Brown <zab@versity.com>
Audit code for structs allocated on stack without initialization, or
using kmalloc() instead of kzalloc().
- avl.c: zero padding in avl_node on insert.
- btree.c: Verify item padding is zero, or WARN_ONCE.
- inode.c: scoutfs_inode contains scoutfs_timespecs, which have padding.
- net.c: zero pad in net header.
- net.h: scoutfs_net_addr has padding, zero it in scoutfs_addr_from_sin().
- xattr.c: scoutfs_xattr has padding, zero it.
- forest.c: item_root in forest_next_hint() appears to either be
assigned-to or unused, so no need to zero it.
- key.h: Ensure padding is zeroed in scoutfs_key_set_{zeros,ones}
Signed-off-by: Andy Grover <agrover@versity.com>
Instead, explicitly add a padding field, and adjust member ordering to
eliminate compiler-added padding between members, and at the end of the
struct (if possible: some structs end in a u8[0] array.)
This should prevent unaligned accesses. Not a big deal on x86_64, but
other archs like aarch64 really want this.
Signed-off-by: Andy Grover <agrover@versity.com>
This will ensure structs, which are internally 8 byte aligned, will remain
so when in the item cache.
16 byte alignment doesn't seem like it's needed so just do 8.
Signed-off-by: Andy Grover <agrover@versity.com>
We were using a trailing owner offset to iterate over btree item values
from the back of the block towards the front. We did this to reclaim
fragmented free space in a block to satisfy an allocation instead of
having to split the block, which is expensive mostly because it has to
allocate and free metadata blocks.
In the before times, we used to compact items by sorting items by their
offset, moving them, and then sorting them by their keys again. The
sorting by keys was expensive so we added these owner offsets to be able
to compact without sorting.
But the complexity of maintaining the owner metadata is not worth it.
We can avoid the expensive sorting by keys by allocating a temporary
array of item offsets and sorting only it by the value offset. That's
nice and quick, it was the key comparisons that were expensive. Then we
can remove the owner offset entirely, as well as the block header final
free region that compaction needed.
And we also don't compact as often in the modern era because we do the
bulk of our work in the item cache instead of in the btree, and we've
changed the split/merge/compaction heuristics to avoid constantly
splitting/merging/compacting when an item population happens to hover
right around a shared threshold.
Signed-off-by: Zach Brown <zab@versity.com>
dev.c includes linux/fs.h which includes linux/types.h, which defines
these types, __be16 etc. These are also defined in sparse.h, but I don't
think these are needed.
The definitions in linux/types.h include stuff to set attr(bitwise) if
__CHECKER__ is defined, so we can remove __sp_biwise.
Signed-off-by: Andy Grover <agrover@versity.com>
The check for a small device didn't return an error code because it was
copied from error tests of ret for an error code. It has to generate
one, do so.
Signed-off-by: Zach Brown <zab@versity.com>
Add the df command which uses the new alloc_detail ioctl to show df for
the metadata and data devices separately.
Signed-off-by: Zach Brown <zab@versity.com>
Use little helpers to insert items into new single block btrees for
mkfs. We're about to insert a whole bunch more items.
Signed-off-by: Zach Brown <zab@versity.com>
Remove the old superblock fields which were used to track free blocks
found in the radix allocators. We now walk all the allocators when we
need to know the free totals, rather than trying to keep fields in sync.
Signed-off-by: Zach Brown <zab@versity.com>
Before the introduction of the AVL tree to sort btree items, the items
were sorted by sorting a small packed array of offsets. The final
offset in that array pointed to the item in the block with the greatest
key.
With the move to sorting items in an AVL tree by nodes embedded in item
structs, we now don't have the array of offsets and instead have a dense
array of items. Creation and deletion of items always works with the
final item in the array.
last_item() used to return the item with the greatest key by returning
the item pointed to by the final entry in the sorted offset array.
After the change it returned the final entry in the item array, which
is what creation and deletion want, but that is no longer the item with
the greatest key.
But splitting and joining still used last_item() to find the item in the
block with the greatest key for updating references to blocks in
parents. Since the introduction of the AVL tree, splitting and joining
have been corrupting the tree by setting parent block reference keys to
whatever item happened to be at the end of the array, not the item with
the greatest key.
The extent code recently pushed hard enough to hit this by working with
relatively random extent items in the core allocation btrees.
Eventually the parent block reference keys got out of sync and we'd fail
to find items by descending into the wrong children when looking for
them. Extent deletion hit this during allocation, returned -ENOENT, and
the allocator turned that into -ENOSPC.
With this fixed we can repeatedly create and delete millions of files with
heavily fragmented extents in a tiny metadata device. Eventually it
actually runs out of space instead of spuriously returning ENOSPC in a
matter of minutes.
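The underlying fix is conceptually simple: the item with the greatest
key is the rightmost node of the AVL tree, not the last slot in the item
array. A hypothetical sketch:

  /* hypothetical in-memory types, not the real scoutfs structs */
  struct avl_node {
          struct avl_node *left;
          struct avl_node *right;
  };

  struct example_item {
          struct avl_node avl;
          /* key, value offset, ... */
  };

  static struct example_item *greatest_key_item(struct avl_node *node)
  {
          if (!node)
                  return NULL;
          while (node->right)
                  node = node->right;
          return container_of(node, struct example_item, avl);
  }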
Signed-off-by: Zach Brown <zab@versity.com>
With the introduction of incremental srch file compaction we added some
fields to the srch_compact struct to record the position of compaction
in each file. This increased the size of the struct past the limit the
btree places on the size of item values.
We decrease the number of files per compaction from 8 to 4 to cut the
size of the srch_compact struct in half. This compacts twice as often,
but still relatively infrequently, and it uses half the space for srch
files waiting to hit the compaction threshold.
Signed-off-by: Zach Brown <zab@versity.com>
Previously the srch compaction work would output the entire compacted
file and delete the input files in one atomic commit. The server would
send the input files and an allocator to the client, and the client
would send back an output file and an allocator that included the
deletion of the input files. The server would merge in the allocator
and replace the input file items with the output file item.
Doing it this way required giving an enormous allocation pool to the
client in a radix, which would deal with recursive operations
(allocating from and freeing to the radix that is being modified). We
no longer have the radix allocator, and we use single block avail/free
lists instead of recursively modifying the btrees with free extent
items. The compaction RPC needs to work with a finite amount of
allocator resources that can be stored in an alloc list block.
The compaction work now does a fixed amount of work and a compaction
operation spans multiple work iterations.
A single compaction struct is now sent between the client and server in
the get_compact and commit_compact messages. The client records any
partial progress in the struct. The server writes that position into
PENDING items. It first searches for pending items to give to clients
before searching for files to start a new compaction operation.
The compact struct has flags to indicate whether the output file is
being written or the input files are being deleted. The server manages
the flags and sets the input file deletion flag only once the result of
the compaction has been reflected in the btree items which record srch
files.
We added the progress fields to the compaction struct, making it even
bigger than it already was, so we take the time to allocate them rather
than declaring them on the stack.
It's worth mentioning that each operation now taking a reasonably
bounded amount of time will make it feasible to decide that it has
failed and needs to be fenced.
Signed-off-by: Zach Brown <zab@versity.com>
The total_{meta,data}_blocks scoutfs_super_block fields initialized by
mkfs aren't visible to userspace anywhere. Add them to statfs_more so
that tools can get the totals (and use them for df, in this particular
case).
Signed-off-by: Zach Brown <zab@versity.com>
Remove the statfs RPC from the client and server now that we're using
allocator iteration to calculate free blocks.
Signed-off-by: Zach Brown <zab@versity.com>
Use alloc_foreach to count the free blocks in all the allocators instead
of sending an RPC to the server. We cache the results so that constant
df calls don't generate a constant stream of IO.
Signed-off-by: Zach Brown <zab@versity.com>
Add an ioctl which copies details of each persistent allocator to
userspace. This will be used by a scoutfs command to give information
about the allocators in the system.
Signed-off-by: Zach Brown <zab@versity.com>
Add an alloc call which reads all the persistent allocators and calls a
callback for each. This is going to be used to calculate free blocks
in clients for df, and in an ioctl to give a more detailed view of
allocators.
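Something like the following callback shape is what's intended; the
exact names and arguments here are illustrative, not the final
interface:

  /* called once per persistent allocator structure */
  typedef int (*alloc_foreach_cb_t)(void *arg, u64 id, bool meta,
                                    bool avail, u64 free_blocks);

  int alloc_foreach(struct super_block *sb, alloc_foreach_cb_t cb, void *arg);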
Signed-off-by: Zach Brown <zab@versity.com>
The algorithm for choosing the split key assumed that there were
multiple items in the page. That wasn't always true and it could result
in choosing the first item as the split key, which could end up
decrementing the left page's end key before its start key.
We've since added compaction to the paths that split pages so we now
guarantee that we have at least two items in the page being split. With
that we can be sure to use the second item's key and ensure that we're
never creating invalid keys for the pages created by the split.
Signed-off-by: Zach Brown <zab@versity.com>
The tests for the various page range intersections were out of order.
The edge overlap case could trigger before the bisection case and we'd
fail to remove the initial items in the page. That would leave items
before the start key which would later be used as a midpoint for a
split, causing all kinds of chaos.
Rework the cases so that the overlap cases are last. The unique bisect
case will be caught before we can mistake it for an edge overlap case.
And minimize the number of comparisons we calculate by storing the
handful that all the cases need.
Signed-off-by: Zach Brown <zab@versity.com>
The first pass of the item cache didn't try to reclaim freed space at
all. It would leave behind very sparse pages, the oldest of which
would be reclaimed by memory pressure.
While this worked, it created much more stress on the system than is
necessary. Splitting a page with one key also makes it hard to
calculate the boundaries of the split pages, given that the start and
end keys could be the single item.
This adds a header field which tracks the free space in item cache
pages. Free space is created before the alloc offset by removing items
from the rbtree, but also from shrinking item values when updating or
deleting items.
If we try to split a page with sufficient free space to insert the
largest possible item then we compact the page instead of splitting it.
We copy the items into the front of an unused page and swap the pages.
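The split-or-compact decision reduces to something like this sketch
(field and constant names are hypothetical):

  struct item_page {
          unsigned int free_bytes;   /* tracked by the new header field */
          /* ... */
  };

  /* compact in place if reclaiming tracked free space would make room
   * for the largest possible item, otherwise split the page */
  static bool should_compact_instead_of_split(struct item_page *pg)
  {
          return pg->free_bytes >= LARGEST_ITEM_BYTES;
  }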
Signed-off-by: Zach Brown <zab@versity.com>
Add a quick function that walks the rbtree and makes sure it doesn't see
any obvious key errors. This is far too expensive to use regularly but
it's handy to have around and add calls to when debugging.
Signed-off-by: Zach Brown <zab@versity.com>
The xattr item stream is constructed from a large contiguous region
that contains the struct header, the key, and the value. The value
can be larger than a page so kmalloc is likely to fail as the system
gets fragmented.
Our recent move to the item cache added a significant source of page
allocation churn which moved the system towards fragmentation much more
quickly and was causing high-order allocation failures in testing.
Signed-off-by: Zach Brown <zab@versity.com>
Previously we'd avoided full extents in file data mapping items because
we were deleting items from forest btrees directly. That created
deletion items for every version of file extents as they were modified.
Now we have the item cache which can remove deleted items from memory
when deletion items aren't necessary.
By layering file data extents on an extent layer, we can also transition
allocators to use extents and fix a lot of problems in the radix block
allocator.
Most of this change is churn from changing allocator function and struct
names.
File data extents no longer have to manage loading and storing from and
to packed extent items at a fixed granularity. All those loops are torn
out and data operations now call the extent layer with their callbacks
instead of calling its packed item extent functions. This now means
that fallocate and especially restoring offline extents can use larger
extents. Small file block allocation now comes from a cached extent
which reduces item calls for small file data streaming writes.
The big change in the server is to use more root structures to manage
recursive modification instead of relying on the allocator to notice and
do the right thing. The radix allocator tried to notice when it was
actively operating on a root that it was also using to allocate and free
metadata blocks. This resulted in a lot of bugs. Instead we now double
buffer the server's avail and freed roots so that the server fills and
drains the stable roots from the previous transaction. We also double
buffer the core fs metadata avail root so that we can increase the time
to reuse freed metadata blocks.
The server now only moves free extents into client allocators when they
fall below a low threshold. This reduces the shared modification of the
client's allocator roots which requires cold block reads on both the
client and server.
Signed-off-by: Zach Brown <zab@versity.com>
Add an allocator which uses btree items to store extents. Both the
client and server will use this for btree blocks, the client will use it
for srch blocks and data extents, and the server will move extents
between the core fs allocator btree roots and the clients' roots.
Signed-off-by: Zach Brown <zab@versity.com>
Add infrastructure for working with extents. Callers provide callbacks
which operate on their extent storage while this code performs the
fiddly splitting and merging of extents. This layer doesn't have any
persistent structures itself; it only operates on native structs in
memory.
Signed-off-by: Zach Brown <zab@versity.com>
It can be handy to skip checking out specific branches from the
required repos, so the -s option skips doing so for kmod/utils/xfstests.
Also fix utils die messages to reference -U/u instead of -K/k.
Signed-off-by: Andy Grover <agrover@versity.com>
bulk_create_paths was inspired by createmany when it was outputting
status lines every 10000 files. That's far too often if we're creating
files very quickly. And it only tried to output a line after entire
directories, so output could stall for very large directories.
Behave more in line with vmstat, iostat, etc., and output a line at a
regular time interval.
Signed-off-by: Zach Brown <zab@versity.com>
Add options to bulk_create_paths for creating xattrs as we create files.
We can create normal xattrs, or .srch. tagged xattrs where all, some, or
none of the files share the same xattr name.
Signed-off-by: Zach Brown <zab@versity.com>
The test that exercises re-reading stale cached blocks was still
trying to use both tiny btree blocks and segments, both of which have
been removed.
Signed-off-by: Zach Brown <zab@versity.com>
The calculation of the last valid data blkno was off by one. It was
calculating the total number of small blocks that fit in the device
size.
Signed-off-by: Zach Brown <zab@versity.com>
The recent cleanup of the radix allocator included removing tracking of
the first set bits or references in blocks.
Signed-off-by: Zach Brown <zab@versity.com>
Track the kernel changes to use the scoutfs_key struct as the btree key
instead of a big-endian binary blob.
Signed-off-by: Zach Brown <zab@versity.com>
The kernel has long since moved away from symbolic printing of key
zones and types, and it just removed the MAX values from the format
header. Let's follow suit and get rid of the zone and type strings.
Signed-off-by: Zach Brown <zab@versity.com>
We had manually implemented a few of the functions to add values to
specific endian types. Make a macro to generate the function and
generate them for all the endian types we use.
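The generating macro looks roughly like this (the real helper names in
the tree may differ):

  #define DEFINE_LE_ADD(bits)                                            \
  static inline void le##bits##_add(__le##bits *var, u##bits add)        \
  {                                                                      \
          *var = cpu_to_le##bits(le##bits##_to_cpu(*var) + add);         \
  }

  DEFINE_LE_ADD(16)
  DEFINE_LE_ADD(32)
  DEFINE_LE_ADD(64)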
Signed-off-by: Zach Brown <zab@versity.com>
The percpu_counter library merges the per-cpu counters with a shared
count when the per-cpu counter gets larger than a certain value. The
default is very small, so we often end up taking a shared lock to update
the count. Use a larger batch so that we take the lock less often.
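A sketch of the batched update, assuming percpu_counter_add_batch() is
available on the kernels we build against:

  #include <linux/percpu_counter.h>

  #define COUNTER_BATCH 1024      /* fold into the shared count less often */

  static inline void counter_add(struct percpu_counter *pcpu, s64 amt)
  {
          percpu_counter_add_batch(pcpu, amt, COUNTER_BATCH);
  }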
Signed-off-by: Zach Brown <zab@versity.com>
Now that the item cache is bearing the load of high frequency item
calls, we can remove all the item granular work that the forest was
trying to do. The item cache amortizes the cost of the forest so its
remaining methods can go straight to the btrees and don't need
complicated state to reduce the overhead of item calls.
Signed-off-by: Zach Brown <zab@versity.com>
Use the new item cache for all the item work in the fs instead of
calling into the forest of btrees. Most of this is mechanical
conversion from the _forest calls to the _item calls. The item cache
no longer supports the kvec argument for describing values so all the
callers pass in the value pointer and length directly.
The item cache doesn't support saving items as they're deleted and later
restoring them from an error unwinding path. There were only two users
of this. Directory entries can easily guarantee that deletion won't
fail by dirtying the items first in the item cache. Xattr updates were
a little trickier. They can combine dirtying, creating, updating, and
deleting to atomically switch between items that describe different
versions of a multi-item value. This also fixed a bug in the srch
xattrs where replacing an xattr would create a new id for the xattr and
leave existing srch items referencing a now deleted id. Replacing now
reuses the old id.
And finally we add back in the locking and transaction item cache
integration.
Signed-off-by: Zach Brown <zab@versity.com>
Add an item cache between fs callers and the forest of btrees. Calling
out to the btrees for every item operation was far too expensive. This
gives us a flexible in-memory structure for working with items that
isn't bound by the constraints of persistent block IO. We only rarely
stream large groups of items to and from the btrees, and then use
efficient kernel memory structures for more frequent item operations.
This adds the infrastructure, nothing is calling it yet.
Signed-off-by: Zach Brown <zab@versity.com>
Add forest calls that the item cache will use. It needs to read all the
items in the leaf blocks of the forest btrees which could contain the key,
write dirty items to the log btree, and dirty bits in the bloom block as
items are dirtied.
Signed-off-by: Zach Brown <zab@versity.com>
Add btree calls to call a callback for all items in a leaf, and to
insert a list of items into their leaf blocks. These will be used by
the item cache to populate the cache and to write dirty items into dirty
btree blocks.
Signed-off-by: Zach Brown <zab@versity.com>
The current btree walk recorded the start and end of child subtrees as
it walked, and it could give the caller the next key to iterate towards
after the block it returned. Future methods want to get at the key
bounds of child subtrees, so we add a key range struct that all walk
callers provide and fill it with all the interesting keys calculated
during the walk.
Signed-off-by: Zach Brown <zab@versity.com>
Btree traversal doesn't split a block if it has room for the caller's
item. Extract this test into a function so that an upcoming btree call
can test that each of multiple insertions into a leaf will fit.
Signed-off-by: Zach Brown <zab@versity.com>
Remove the last remnants of the indexed xattrs which used fs items.
This makes the significant change of renumbering the key zones so I
wanted it in its own commit.
Signed-off-by: Zach Brown <zab@versity.com>
In a merge where the input and source trees are the same, the input
block can be an initial pre-cow version of the dirty source block.
Dirtying blocks in the change will clear allocations in the dirty source
block but they will remain in the pre-cow input block. The merge can
then set these blocks in the dst, even though they were also used by
allocation, because they're still set in the pre-cow input block.
This fix is clumsy, but minimal and specific to this problem. A more
thorough fix is being worked on which introduces more staging allocator
trees and should stop calls from modifying the currently active avail or
free trees.
Signed-off-by: Zach Brown <zab@versity.com>
Lock invalidation has to make sure that changes are visible to future
readers. It was syncing if the current transaction is dirty. This was
never optimal, but it wasn't catastrophic when concurrent invalidation
work could all block on one sync in progress.
With the move to a single invalidation worker serially invalidating
locks it became unacceptable. Invalidation happening in the presence of
writers would constantly sync the current transaction while very old
unused write locks were invalidated. Their changes had long since been
committed in previous transactions.
We add a lock field to remember the transaction sequence which could
have been dirtied under the lock. If that transaction has already been
committed by the time we invalidate the lock, it doesn't have to sync.
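The invalidation-time test then boils down to a seq comparison, roughly
(names hypothetical):

  /* only sync if the transaction that could have dirtied items under
   * this lock hasn't already been committed */
  static bool lock_inval_needs_sync(u64 lock_dirty_seq, u64 committed_seq)
  {
          return lock_dirty_seq > committed_seq;
  }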
Signed-off-by: Zach Brown <zab@versity.com>
The client lock network message processing callbacks were built to
simply perform the processing work for the message in the networking
work context that it was called in. This particularly makes sense for
invalidation because it has to interact with other components that
require blocking contexts (syncing commits, invalidating inodes,
truncating pages, etc).
The problem is that these messages are per-lock. With the right
workloads we can use all the capacity for executing work just in lock
invalidation work. There is no more work execution available for other
network processing. Critically, the blocked invalidation work is
waiting for the commit thread to get its network responses before
invalidation can make forward progress. I was easily reproducing
deadlocks by leaving behind a lot of locks and then triggering a flood
of invalidation requests on behalf of shrinking due to memory pressure.
The fix is to put locks on lists and have a small fixed number of work
contexts process all the locks pending for each message type. The
network callbacks don't block, they just put the lock on the list and
queue the work that will walk the lists. Invalidation now blocks one
work context, not the number of incoming requests.
There were some wait conditions in work that used to use the lock workq.
Other paths that change those conditions now have to know to queue the
work specifically, not just wake tasks which included blocked work
executors.
The other subtle impact of the change is that we can no longer rely on
networking to shut down message processing work that was happening in its
callbacks. We have to specifically stop our work queues in _shutdown.
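A rough sketch of the non-blocking callback side of the change (struct
and field names are hypothetical):

  static void queue_lock_invalidation(struct lock_info *linfo,
                                      struct held_lock *lck)
  {
          spin_lock(&linfo->lock);
          if (list_empty(&lck->inval_entry))
                  list_add_tail(&lck->inval_entry, &linfo->inval_list);
          spin_unlock(&linfo->lock);

          /* a single work context walks the list and does the blocking work */
          queue_work(linfo->workq, &linfo->inval_work);
  }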
Signed-off-by: Zach Brown <zab@versity.com>
While checking for lost server commit holds, I noticed that the
advance_seq request path had obviously incorrect unwinding after getting
an error. Fix it up so that it always unlocks and applies its commit.
Signed-off-by: Zach Brown <zab@versity.com>
Add the committed_seq to statfs_more which gives the greatest seq which
has been committed. This lets callers discover that a seq for a change
they made has been committed.
Signed-off-by: Zach Brown <zab@versity.com>
We had a debugging WARN_ON that warns when a client has an error
committing their transaction. Let's add a bit more detail and promote it
to a proper error. These should not happen.
Signed-off-by: Zach Brown <zab@versity.com>
The forest code is responsible for constructing a consistent fs image
out of the items spread across all the btrees written by mounts in the
system.
Usually readers walk a btree looking for log trees that they should
read. As a mount modifies items in its dirty log tree, readers need to
be sure to check that in-memory dirty log tree even though it isn't
present in the btree that records persistent log trees.
The code did this by setting a flag to indicate that readers using a
lock should check the dirty log tree. But the flag usage wasn't
properly locked, so a reader and a writer could race, leaving future
readers not knowing that they should check the dirty log
tree. When we rarely hit that race we'd see item errors that made no
sense, like not being able to find an inode item to update after having
just created it in the current transaction.
To fix this, we clean up the tree tracking in the forest code.
We get rid of the static forest_root structs in the lock_private that
were used to track the two special-case roots that aren't found in log
tree items: the in-memory dirty log root and the final fs root. All
roots are now dynamically allocated. We use a flag in the root to
identify it as the dirty log root, and identify the fs root by its
rid/nr. This results in a bunch of caller churn as we remove lpriv from
root identifying functions.
We get rid of the idea of the writer adding a static root to the list as
well as marking the log as needing to read the root. Instead we make
all root management happen as we refresh the list. The forest maintains
a commit sequence and writers set state in the lock to indicate that the
lock has dirty items in the log during this transaction. Iteration then
compares the state set by the commit, writer, and the last refresh to
determine if a new refresh needs to happen.
Properly tracking the presence of dirty items lets us recognize when the
lock no longer has dirty items in the log and we can stop locking and
reading the dirty log and fall back to reading the committed stable
version. The previous code didn't do that, it would lock and read the
dirty root forever.
While we're in here, we fix the locking around setting bloom bits and
have it track the version of the log tree that was set so that we don't
have to clear set bits as the log version is rotated out by the server.
There was also a subtle bug where we could hit two stale errors for the
same root and return -EIO because the read that triggered the refresh
itself returned stale. We rework the retrying logic to use a separate
error code to force refreshing so that we can't accidentally trigger
-EIO by conflating reading stale blocks with forcing refreshing.
And finally, we no longer record that we need the dirty log tree in a
root if we have a lock that could never read. It's a minor optimization
that doesn't change functional behaviour.
Signed-off-by: Zach Brown <zab@versity.com>
Using strictly coherent btree items to map the hash of xattr names to
inode numbers proved the value of the functionality, but it was too
expensive. We now have the more efficient srch infrastructure to use.
We change from the .indx. to the .srch. tag, and change the ioctl from
find_xattr to search_xattrs. The idea is to communicate that these are
accelerated searches, not precise index lookups and are relatively
expensive.
Rather than maintaining btree items, xattr setting and deleting emits
srch entries which either track the xattr or combine with the previous
tracker and remove the entry. These are done under the lock that
protects the main xattr item, so we can remove the separate locking of the
previous index items.
The semantics of the search ioctl need to change a bit. Because
searches are so expensive we now return a flag to indicate that the
search completed. While we're there, we also allow a last_ino parameter
so that searches can be divided up and run in parallel.
Signed-off-by: Zach Brown <zab@versity.com>
This introduces the srch mechanism that we'll use to accelerate finding
files based on the presence of a given named xattr. This is an
optimized version of the initial prototype that was using locked btree
items for .indx. xattrs.
This is built around specific compressed data structures, having the
operation cost match the reality of orders of magnitude more writers
than readers, and adopting a relaxed locking model. Combine all of this
and maintaining the xattrs no longer tanks creation rates while
maintaining excellent search latencies, given that searches are defined
as rare and relatively expensive.
The core data type is the srch entry which maps a hashed name to an
inode number. Mounts can append entries to the end of unsorted log
files during their transaction. The server tracks these files and
rotates them into a list of files as they get large enough. Mounts have
compaction work that regularly asks the server for a set of files to
read and combine into a single sorted output file. The server only
initiates compactions when it sees a number of files of roughly the same
size. Searches then walk all the committed srch files, both log files
and sorted compacted files, looking for entries that associate an xattr
name with an inode number.
Signed-off-by: Zach Brown <zab@versity.com>
The get_fs_roots rpc and server interfaces were built around individual
roots. Rebuild them around passing a struct so that we can add
roots without impacting all the current users.
Signed-off-by: Zach Brown <zab@versity.com>
Now that we have larger blocks we can have a larger max item. This was
increased to make room for the srch compaction items which store a good
number of srch files in their value.
Signed-off-by: Zach Brown <zab@versity.com>
The conversion of the super block metadata block counters to units of
large metadata blocks forgot to scale back to the small block size when
filling out the block count fields in the statfs rpc. This resulted in
the free and total metadata use being off by the factor of large to
small block size (default of ~16x at the moment).
Signed-off-by: Zach Brown <zab@versity.com>
We had a few uses of crc for hashing. That was fine enough for initial
testing but the huge number of xattrs that srch is recording was
seeing very bad collisions from the clumsy combination of crc32c into
a 64bit hash. Replace it with FNV for now.
This also takes the opportunity to use 3 hash functions in the forest
bloom filter so that we can extract them from the 64bit hash of the key
rather than iterating and recalculating hashes for each function.
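For reference, a sketch of the 64-bit FNV-1a variant with its standard
offset basis and prime (the tree may use plain FNV-1, which just swaps
the xor and multiply):

  static u64 fnv1a_64(const void *data, unsigned int len)
  {
          const u8 *bytes = data;
          u64 hash = 0xcbf29ce484222325ULL;       /* offset basis */

          while (len--) {
                  hash ^= *bytes++;
                  hash *= 0x100000001b3ULL;       /* 64-bit FNV prime */
          }

          return hash;
  }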
Signed-off-by: Zach Brown <zab@versity.com>
We first attempt to allocate our large logically contiguous cached
blocks with physically contiguous pages to minimize the impact on the
tlb. When that fails we fall back to vmalloc()ed blocks. Sadly,
high-order page allocation failure is expected and we forgot to provide
the flag that suppresses the page allocation failure message.
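The allocation fallback amounts to the following sketch (the real code
may pass different gfp flags or use __vmalloc for the fallback):

  static void *alloc_block_mem(size_t size)
  {
          /* high-order failures are expected, so don't warn about them */
          void *p = kmalloc(size, GFP_NOFS | __GFP_NOWARN);

          if (!p)
                  p = vmalloc(size);

          return p;
  }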
Signed-off-by: Zach Brown <zab@versity.com>
We had a bug where mkfs would set a free data blkno allocator bit past
the end of the device. (Just at it, in fact. Those fenceposts.) Add
some checks at mount to make sure that the allocator blkno ranges in the
super don't have obvious mistakes.
Signed-off-by: Zach Brown <zab@versity.com>
Entries in a directory are indexed by the hash of their name. This
introduces a perfectly random access pattern. And this results in a cow
storm as directories get large enough such that the leaf blocks that
store their entries are larger than our commits. Each commit ends up
being full of cowed leaf blocks that contain a single new entry.
The dirent name fingerprints change the dirent key to first start with a
fingerprint of the name. This reduces the scope of hash randomization
from the entire directory to entries with the same fingerprint.
On real customer dir sizes and file names we saw roughly 3x create rate
improvements from being able to create more entries in leaf blocks
within a commit.
Signed-off-by: Zach Brown <zab@versity.com>
The radix allocator no longer uses the block visited bit because it
maintains its own much richer private per-block data stored off the priv
pointer.
Signed-off-by: Zach Brown <zab@versity.com>
This reverts commit 294b6d1f79e6d00ba60e26960c764d10c7f4b8a5.
We had previously seen lock contention between mounts that were either
resolving paths by looking up entries in directories or writing xattrs
in file inodes as they did archiving work.
The previous attempt to avoid this contention was to give each directory
its own inode number allocator which ensured that inodes created for
entries in the directory wouldn't share lock groups with inodes in other
directories.
But this creates the problem of operating on few files per lock for
reasonably small directories. It also creates more server commits as
each new directory gets its inode allocation reservation.
The fix is to have mount-wide separate allocators for directories and
for everything else. This puts directories and files in separate groups
and locks, regardless of directory population.
Signed-off-by: Zach Brown <zab@versity.com>
We had switched away from the radix_tree because we were adding a
_block_move call which couldn't fail. We no longer need that call, so
we can go back to storing cached blocks in the radix tree which can use
RCU lookups.
This revert has some conflict resolution around recent commits to add
the IO_BUSY block flag and the switch to _LG_ blocks.
This reverts commit 10205a5670dd96af350cf481a3336817871a9a5b.
Signed-off-by: Zach Brown <zab@versity.com>
The radix allocator has to be careful to not get lost in recursion
trying to allocate metadata blocks for its dirty radix blocks while
allocating metadata blocks for others.
The first pass had used path data structures to record the references to
all the blocks we'd need to modify to reflect the frees and allocations
performed while dirtying radix blocks. Once it had all the path blocks
it moved the old clean blocks into new dirty locations so that the
dirtying couldn't fail.
This had two very bad performance implications. First, it meant that
trying to read clean versions of dirtied trees would always read the old
blocks again because their clean version had been moved to the dirty
version. Typically this wouldn't happen but the server does exactly
this every time it tries to merge freed blocks back into its avail
allocator. This created a significant IO load on the server. Secondly,
that block cache move not being allowed to fail motivated us to move to
a locked rbtree for the block cache instead of the lockless rcu
radix_tree.
This changes the recursion avoidance to use per-block private metadata
to track every block that we allocate and cow rather than move. Each
dirty block knows its parent ref and the blknos it would clear and set.
If dirtying fails we can walk back through all the blocks we dirty and
restore their original references before dropping all the dirty blocks
and returning an error. This lets us get rid of the path structure
entirely and results in a much cleaner system.
This change meant tracking free blocks without clearing them as they're
used to satisfy dirty block allocations. The change now has a cursor
that walks the avail metadata tree without modifying it. While building
this it became clear that tracking the first set bits of refs doesn't
provide any value if we're always searching from a cursor. The cursor
ends up providing the same value of avoiding constantly searching empty
initial bits and refs. Maintaining the first metadata was just
overhead.
Signed-off-by: Zach Brown <zab@versity.com>
The forest code has a hint call that gives iterators a place to start
reading from before they acquire locks. It was checking all the log
trees but it wasn't checking the main fs tree. This happened to be OK
today because we're not yet merging items from the log trees into the
main fs tree, but we don't want to miss them once we do start merging
the trees.
Signed-off-by: Zach Brown <zab@versity.com>
The forest item operations were reading the super block to find the
roots that they should read items from.
This was easiest to implement to start, but it is too expensive. We
have to find the roots for every newly acquired lock and every call to
walk the inode seq indexes.
To avoid all these reads we first send the current stable versions of
the fs and logs btree roots along with lock grants. Then we add a net
command to get the current stable roots from the server. This is used
to refresh the roots if stale blocks are encountered and on the seq
index queries.
Signed-off-by: Zach Brown <zab@versity.com>
The server fills radix allocators for the client to consume while
allocating during a transaction. The radix merge function used to move
an entire radix block at a time. With larger blocks this becomes much
too coarse and can move way too much in one call.
This moves allocator bits a word at a time and more precisely moves the
amount that the caller asked for.
Signed-off-by: Zach Brown <zab@versity.com>
Introduce different constants for small and large metadata block
sizes.
The small 4KB size is used for the super block, quorum blocks, and as
the granularity of file data block allocation. The larger 64KB size is
used for the radix, btree, and forest bloom metadata block structures.
The bulk of this is obvious transitions from the old single constant to
the appropriate new constant. But there are a few more involved
changes, though just barely.
The block crc calculation now needs the caller to pass in the size of
the block. The radix function to return free bytes instead returns free
blocks and the caller is responsible for knowing how big its managed
blocks are.
Signed-off-by: Zach Brown <zab@versity.com>
It used to take significant effort to create very tall btrees because
they only stored small references to large LSM segments. Now they store
all file system metadata and we can easily create sufficiently large
btrees for testing. We don't need the tiny btree option.
Signed-off-by: Zach Brown <zab@versity.com>
There are no users of these variants of _prev and _next so they can be
removed. Support for them was also dropped in the previous reworking of
the internal structure of the btree blocks.
Signed-off-by: Zach Brown <zab@versity.com>
This btree implementation was first built for the relatively light duty
of indexing segments in the LSM item implementation. We're now using it
as the core metadata index. It's already using a lot of cpu to do its
job with small blocks and it only gets more expensive as the block size
increases. These changes reduce the CPU use of working with the btree
block structures.
We use a balanced binary tree to index items by key in the block. This
gives us rare tree balancing cost on insertion and deletion instead of
the memmove overhead of maintaining a dense array of item offsets sorted
by key. The keys are stored in the item struct which are stored in an
array at the front of the block so searching for an item uses contiguous
cachelines.
We add a trailing owner offset to values so that we can iterate through
them. This is used to track space freed up by values instead of paying
the memmove cost of keeping all the values at the end of the block. We
occasionally reclaim the fragmented value free space instead of
splitting the block.
Direct item lookups use a small hash table at the end of the block
which maps offsets to items. It uses linear probing and is guaranteed
to have a light load factor so lookups are very likely to only need
a single cache lookup.
We adjust the watermark for triggering a join from half of a block down
to a quarter. This results in lower block utilization on average. But it
creates distance between the join and split thresholds so we get less
cpu use from constantly joining and splitting if item populations happen
to hover around the previously shared threshold.
While shifting the implementation we chose not to add support for some
features that no longer make sense. There are no longer callers of
_before and _after, and having synthetic tests that use small btree blocks
no longer makes sense when we can easily create very tall trees. Both
those btree interfaces and the tiny btree block support will be removed.
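As a sketch of the linear-probing table described above (offsets and
sizing are illustrative), insertion drops the item's offset into the
first empty slot, and the guaranteed light load factor keeps probe
sequences short:

  /* 0 marks an empty slot; the table is sized so it is never close to full */
  static void hash_table_insert(__le16 *table, unsigned int nr_slots,
                                u32 key_hash, u16 item_off)
  {
          unsigned int i = key_hash % nr_slots;

          while (table[i] != 0)
                  i = (i + 1) % nr_slots;

          table[i] = cpu_to_le16(item_off);
  }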
Signed-off-by: Zach Brown <zab@versity.com>
The btree currently uses variable length big-endian buffers that are
compared with memcmp() as keys. This is a historical relic of the time
when keys could be very large. We had dirent keys that included the
name and manifest entries that included those fs keys.
But now all the btree callers are jumping through hoops to translate
their fs keys into big-endian btree keys. And the memcmp() of the
keys is showing up in profiles.
This makes the btree take native scoutfs_key structs as its key. The
forest callers which are working with fs keys can just pass their keys
straight through. The server btree callers with their private btrees
get key fields defined for their use instead of having individual
big-endian key structs.
A nice side-effect of this is that splitting parents doesn't have to
assume that a maximal key will be inserted by a child split. We can
have more keys in parents and wider trees.
Signed-off-by: Zach Brown <zab@versity.com>
These were used for constructing arrays of string mappings of key
fields. We don't print keys with symbolic strings anymore, so we don't
need to maintain these values.
Signed-off-by: Zach Brown <zab@versity.com>
Add support for reporting errors to data waiters via a new
SCOUTFS_IOC_DATA_WAIT_ERR ioctl. This allows waiters to return an error
to readers when staging fails.
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
[zab: renamed to data_wait_err, took ino arg]
Signed-off-by: Zach Brown <zab@versity.com>
We had a bug where we were creating extent lengths that were rounded up
to the size of the packed extent items instead of being limited by
i_size. As it happens the last setattr_more test would have found it if
I'd actually done the math to check that the extent length was correct.
We add an explicit offline blocks count test because that's what led us
to notice that the offline extent length was wrong.
Signed-off-by: Zach Brown <zab@versity.com>
We had a bug where offline extent creation during setattr_more just
wasn't making it all the way to persistent items. This adds basic
sanity tests of the setattr_more interface.
Signed-off-by: Zach Brown <zab@versity.com>
Remove the definitions and descriptions of sources of corruption that
are no longer identified by the kernel module.
Signed-off-by: Zach Brown <zab@versity.com>
It's handy to use ilog2 in the format header for defining shifts based
on values. Add a userspace helper that uses glibc's log2 functions.
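A minimal sketch of such a helper, assuming glibc's log2(); the real
helper may differ (or use builtins to avoid floating point precision
concerns for very large values):

  #include <math.h>

  static inline int ilog2(unsigned long long val)
  {
          return (int)floor(log2((double)val));
  }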
Signed-off-by: Zach Brown <zab@versity.com>
The simple-release-extents test wanted to create a file with a single
large extent, but it did it with a streaming write. While we'd like
our data allocator to create a large extent from initial writes, it
certainly doesn't guarantee it. Fallocate is much more likely to
create a large extent.
Signed-off-by: Zach Brown <zab@versity.com>
The dmesg check was creating false positives when unexpected messages
from before the test run were forced out of the ring. The evicted
messages were showing up as removals in the diff.
We only want to see new messages that were created during the test run.
So we format the diff to only output added lines.
Signed-off-by: Zach Brown <zab@versity.com>
We add directories of our built binaries for tests to find. Let's
prepend them to PATH so that we find them before any installed
binaries in the system.
Signed-off-by: Zach Brown <zab@versity.com>
Add support for initializing radix allocator blocks that describe free
space in mkfs and support for printing them out.
Signed-off-by: Zach Brown <zab@versity.com>
Add a -y argument so we can specify additional args to ./xfstests, and
clean up our xfstest a bit while we're in there.
Signed-off-by: Zach Brown <zab@versity.com>
Add a message describing when mount-unmount-race has to be skipped
because it doesn't have enough mounts to unmount while maintaining
quorum.
Signed-off-by: Zach Brown <zab@versity.com>
The segment-cache-fwd-back-iter test only applied to populating the item
cache from segments, and we don't do that anymore. The test can
be removed.
Signed-off-by: Zach Brown <zab@versity.com>
When running a test we only create the test dir through one mount, but
we were off-by-one when deciding that we were iterating through the
first mount.
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which makes sure that errors during setup can be properly
torn down. This found an assertion that was being triggered during lock
shutdown.
Signed-off-by: Zach Brown <zab@versity.com>
We can't use cmd() to create the results dir because it tries to
redirect output to the results dir, which fails, so mkdir isn't run and
we don't create the results dir.
Signed-off-by: Zach Brown <zab@versity.com>
We check out the specified git branch with "origin/" prepended, but we
weren't verifying that same full branch, so the verification failed
because it couldn't differentiate amongst possible named branches.
Signed-off-by: Zach Brown <zab@versity.com>
Add a scoutfs command wrapper around the statfs_more ioctl. It's like
the stat_more command but has different fields and calls a different
ioctl.
Signed-off-by: Zach Brown <zab@versity.com>
Now that networking is identifying clients by their rid some persistent
structures are using that to store records of clients.
Signed-off-by: Zach Brown <zab@versity.com>
The format no longer has statically configured named slots. The only
persistent config is the number of mounts that must be voting to reach
quorum. The quorum blocks now have a log of successful elections.
Signed-off-by: Zach Brown <zab@versity.com>
The first commit of the scoutfs-tests suite which uses multiple mounts
on one host to test multi-node scoutfs.
Signed-off-by: Zach Brown <zab@versity.com>
Add support for the xattr tags which can hide or index xattrs by their
name. We get an item that indexes inodes by the presence of an xattr, a
listxattr_raw ioctl which can show hidden xattrs, and an ioctl that
finds inodes which have an xattr.
Signed-off-by: Zach Brown <zab@versity.com>
The error return conventions were confused, resulting in main exiting
with success when command execution failed.
Signed-off-by: Zach Brown <zab@versity.com>
Remove the ctrstat command. It was built back when we had a handful of
counters. Its output format doesn't make much sense now that we have
an absolute ton of counters. If we want fancy counter output in the
future we'd add it to the counters command.
Signed-off-by: Zach Brown <zab@versity.com>
Add a command to output the sysfs counters for a volume, with the option
of generating a table that fits the terminal.
Signed-off-by: Zach Brown <zab@versity.com>
Move the magic value that identifies the super block into the block
header and use it for btree blocks as well.
Signed-off-by: Zach Brown <zab@versity.com>
Update the format header to reflect that the kernel now uses a locking
service instead of using an fs/dlm lockspace. Nothing in userspace uses
locking.
Signed-off-by: Zach Brown <zab@versity.com>
The server no longer stores the address to connect to in the super
block. It's now stored in the quorum config and voting blocks.
Signed-off-by: Zach Brown <zab@versity.com>
The stage command was trivially implemented by allocating, reading, and
staging the entire region in a buffer. This is unreasonable for large
file regions. Implement the stage command by having it read each
portion of the region into a smaller buffer, starting with a meg.
Signed-off-by: Zach Brown <zab@versity.com>
mkfs needs to make sure that a device is large enough for a file system.
We had a tiny limit that almost certainly wouldn't have worked.
Increase the limit to a still absurdly small but arguably possible 16
segments.
Signed-off-by: Zach Brown <zab@versity.com>
Update the format header and add a man page which describes the
corruption messages that the kernel module can spit out.
Signed-off-by: Zach Brown <zab@versity.com>
Make the changes to support the new small key struct. mkfs and print
work with simpler keys, segment items, and manifest entries. The item
cache keys ioctl now just needs to work with arrays of keys.
Signed-off-by: Zach Brown <zab@versity.com>
Lots of tests run scoutfs stat and parse a single value. Give them an
option to have the only output be that value so they don't have to pull
it out of the output.
Signed-off-by: Zach Brown <zab@versity.com>
The previous formatting was modeled after the free form 'stat' output
and it's a real mess. Just make it a simple "name value" table.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs now directly uses the kernel dlm subsystem and offers a debugfs
file with the current lock state. We don't need userspace to read and
format the contents of a debugging file.
Signed-off-by: Zach Brown <zab@versity.com>
With the removal of the size index items we no longer have to print them
or be able to walk the index. mkfs only needs to create a meta seq
index item for the root inode.
Signed-off-by: Zach Brown <zab@versity.com>
This command takes a device and dumps all dlmglue locks and their state to
the console. It also computes some average lock wait times. We provide a
couple of options:
--lvbs=[yes|no] turns on or off printing of lvb data (default is off)
--oneline provides a more concise per-lock printout.
Signed-off-by: Mark Fasheh <mfasheh@versity.com>
We were chopping off the command string when passing the argument array into
registered commands. getopt expects a program name as the first argument, so
change cmd_execute() to only chop off the scoutfs program name. Now we
can parse command arguments in an easy and standard manner.
This necessitates a small update to each command's usage of argv/argc.
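In sketch form (the function pointer stands in for whichever registered
command is being run):

  /* cmd_func is one registered command's entry point */
  static int cmd_execute(int argc, char **argv,
                         int (*cmd_func)(int argc, char **argv))
  {
          /* chop off only "scoutfs"; the command's own name stays in
           * argv[0], which is the program name getopt() expects */
          return cmd_func(argc - 1, argv + 1);
  }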
Signed-off-by: Mark Fasheh <mfasheh@versity.com>
Update the calculation of the largest number of btree blocks based on
the format.h update that provides the min free space in parent blocks
instead of the free limit for the entire block.
Signed-off-by: Zach Brown <zab@versity.com>
The kernel format.h has built up some changes that the userspace utils
don't use. We're about to start enforcing exact matching of the source
files at run time so let's bring these back in sync.
Signed-off-by: Zach Brown <zab@versity.com>
The kernel key printing code was refactored to more carefully print
keys. Import this updated code by adding supporting functions around it
so that we don't have to make edits to it and can easily update the
import in the future.
Signed-off-by: Zach Brown <zab@versity.com>
format.h and ioctl.h are copied from the kernel module. It had a habit
of accidentally using types that aren't exported to userspace. It's
since added build checks that enforce exported types. This copies the
fixed use of exported types over for hopefully the last time.
Signed-off-by: Zach Brown <zab@versity.com>
Our item cache protocol is tied to holding DLM locks which cover a
region of the item namespace. We want locks to cover all the data
associated with an inode and other locks to cover the indexes. So we
resort the items first by major (index, fs) then by inode type (inode,
dirent, etc).
Signed-off-by: Zach Brown <zab@versity.com>
Manifest entries and segment allocation bitmap regions are now stored in
btree items instead of the ring log. This lets us work with them
incrementally and share them between nodes.
Signed-off-by: Zach Brown <zab@versity.com>
Just lift the key printer from the kernel and use it to print
item keys in segments and in manifest entries.
Signed-off-by: Zach Brown <zab@versity.com>
Add support for the inode index items which are replacing the seq walks
from the old btree structures. We create the index items for the root
inode, can print out the items, and add a command to walk the indices.
Signed-off-by: Zach Brown <zab@versity.com>
Recent kernel headers have leaked __bitwise into userspace. Rename our
use of __bitwise in userspace sparse builds to avoid the collision.
Signed-off-by: Zach Brown <zab@versity.com>
It's a bit confusing to always see both the old and current super block.
Let's only print the first one. We could add an argument to print all
of them.
Signed-off-by: Zach Brown <zab@versity.com>
Add mkfs and print support for the simpler rings that the segment bitmap
allocator and manifest are now using. Some other recent format header
updates come along for the ride.
Signed-off-by: Zach Brown <zab@versity.com>
The segment item struct used to have fiddly packed offsets and lengths.
Now it's just normal fields so we can work with them directly and get
rid of the native item indirection.
Signed-off-by: Zach Brown <zab@versity.com>
We were using a bitmap to record segments during manifest printing and
then walking that bitmap to print segments. It's a little silly to have
a second data structure record the referenced segments when we could
just walk the manifest again to print the segments.
So refactor node printing into a treap walker that calls a function for
each node. Then we can have functions that print the node data
structures for each treap and then one that prints the segments that are
referenced by manifest nodes.
Signed-off-by: Zach Brown <zab@versity.com>
We had changed the manifest keys to fully cover the space around the
segments in the hopes that it'd let item reading easily find negative
cached regions around items.
But that makes compaction think that segments intersect with items when
they really don't. We'd much rather avoid unnecessary compaction by
having the manifest entries precisely reflect the keys in the segment.
Item reading can do more work at run time to find the bounds of the key
space that are around the edges of the segments it works with.
Signed-off-by: Zach Brown <zab@versity.com>
Make sure that the manifest entries for a given level fully
cover the possible key space. This helps item reading describe
cached key ranges that extend around items.
Signed-off-by: Zach Brown <zab@versity.com>
Update mkfs and print to describe the ring blocks with a starting index
and number of blocks instead of a head and tail index.
Signed-off-by: Zach Brown <zab@versity.com>
Make a new file system by writing a root inode in a segment and storing
a manifest entry in the ring that references the segment.
Signed-off-by: Zach Brown <zab@versity.com>
We updated the code to use the new iteration of the data_version ioctl
but we forgot to update the ioctl definition so it didn't actually work.
Signed-off-by: Zach Brown <zab@versity.com>
mkfs was setting free blk bits starting from 0 instead of from
the blkno offset of the first free block. This resulted in
the highest order above a used blkno being marked free. Freeing
that blkno would set its lowest order bit. Now that blkno can be
allocated from two orders. That, eventually, can lead to blocks
being doubly allocated and users trampling on each other.
While auditing the code to chase this bug down I also noticed that
write_buddy_blocks() was using a min() that makes no sense at all. Here
'blk' is inclusive, the modulo math works on its own.
Signed-off-by: Zach Brown <zab@versity.com>
The btree block now has a le16 nr_items field to make room for the
number of items that larger blocks can hold.
Signed-off-by: Zach Brown <zab@versity.com>
Update mkfs and print for the full radix buddy allocators. mkfs has to
calculate the number of blocks and the height of the tree and has to
initialize the paths down the left and right side of the tree.
Print needs to dump the new radix blocks and super block fields.
Signed-off-by: Zach Brown <zab@versity.com>
The pseudo random byte wrapper function used the intel instructions
so that it could deal with high call rates, like initializing random
node priorities for a large treap.
But this is obviously not remotely portable and has the annoying habit
of tripping up versions of valgrind that haven't yet learned about these
instructions.
We don't actually have high bandwidth callers so let's back off and just
let openssl take care of this for us.
Signed-off-by: Zach Brown <zab@versity.com>
Initialize the free_order field in all the slots of the buddy index
block so that the kernel will try to allocate from them and will
initialize and populate the first block.
Signed-off-by: Zach Brown <zab@versity.com>
Add commands that use the find-xattr ioctls to show the inode numbers of
inodes which probably contain xattrs matching the specified name or
value.
Signed-off-by: Zach Brown <zab@versity.com>
Add the inode-paths command which uses the ioctl to display all the
paths that lead to the given inode. We add support for printing
the new link backref items and inode and dirent fields.
Signed-off-by: Zach Brown <zab@versity.com>
We had the start of functions that operated on little endian bitmaps.
This adds more operations and uses __packed to support unaligned bitmaps
on platforms where unaligned accesses are a problem.
Signed-off-by: Zach Brown <zab@versity.com>
Happily, it turns out that there are crash extensions for extracting
trace messages from crash dumps. That's good enough for us.
Signed-off-by: Zach Brown <zab@versity.com>
The kernel now has an ioctl to give us inode numbers with their sequence
number for every inode that's been modified since a given tree update
sequence number.
Update mkfs and print to the on-disk format changes and add a trivial
inodes-since command which calls the ioctl and prints the results.
Signed-off-by: Zach Brown <zab@versity.com>
Add a 'trace' command which uses the debugfs file created by the scoutfs
kernel module to read and print trace messages.
Signed-off-by: Zach Brown <zab@versity.com>
The slightly tweaked format that uses linear probing to mitigate dirent
name hash collisions doesn't need a record of the greatest number of
collisions in the dir inode.
Signed-off-by: Zach Brown <zab@versity.com>
Initialize the block count fields in the super block on mkfs and print
out the buddy allocator fields and blocks.
Signed-off-by: Zach Brown <zab@versity.com>
When printing try to read both super blocks and use the most recent one
instead of just using the first one.
Signed-off-by: Zach Brown <zab@versity.com>
Add support for printing dirent items to scoutfs print. We're careful
to change non-printable characters to ".".
Signed-off-by: Zach Brown <zab@versity.com>
Update print to show the inode fields in the newer dirent hashing
scheme. mkfs doesn't create directory entries.
Signed-off-by: Zach Brown <zab@versity.com>
The bloom filter had two bad bugs.
First the calculation was adding the bit width of newly hashed data to
the hash value instead of the record of the hashed bits available.
And the block offset calculation for each bit wasn't truncated to the
number of bloom blocks. While fixing this we can clean up the code and
make it faster by recording the bits in terms of their block and bit
offset instead of their large bit value.
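The per-bit bookkeeping ends up as something like this sketch (names
are hypothetical):

  struct bloom_bit {
          u32 block;      /* which bloom block */
          u32 bit;        /* bit offset within that block */
  };

  static void calc_bloom_bit(struct bloom_bit *bb, u64 hash,
                             u32 nr_bloom_blocks, u32 bits_per_block)
  {
          u64 nr = hash % ((u64)nr_bloom_blocks * bits_per_block);

          bb->block = (u32)(nr / bits_per_block);
          bb->bit = (u32)(nr % bits_per_block);
  }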
Signed-off-by: Zach Brown <zab@versity.com>
The swizzle value was defined in terms of longs but the code used u64s.
And the bare shifted value was an int so it'd get truncated. Switch it
all to using longs.
The ratio of bugs to lines of code in that first attempt was through the
roof!
Signed-off-by: Zach Brown <zab@versity.com>
Update to the format rev which has large log segments that start with
bloom filter blocks, have items linked in a skip list, and item values
stored at offsets in the block.
Signed-off-by: Zach Brown <zab@versity.com>
pseudo_random_bytes() was accidentally copying the last partial long to
the beginning of the buffer instead of the end. The final partial long
bytes weren't being filled.
Signed-off-by: Zach Brown <zab@versity.com>
mkfs just needs to initialize bloom filter blocks with the bits for the
single root inode key. We can get away with these skeletal functions
for now.
Signed-off-by: Zach Brown <zab@versity.com>
We're going to need to start setting bloom filters bits in mkfs so we'll
add this trivial inline. It might grow later.
Signed-off-by: Zach Brown <zab@versity.com>
The initial bitmap entry written in the ring by mkfs was off by one.
Three chunks were marked but the 0th chunk, which holds the supers, isn't free either.
It has to mark the first four chunks as allocated.
Signed-off-by: Zach Brown <zab@versity.com>
In the first pass we'd only printed the first map and ring blocks.
This reads the number of used map blocks into an allocation large enough
for the maximum number of map blocks.
Then we use the block numbers from the map blocks to print the active
ring blocks which are described by the super.
Signed-off-by: Zach Brown <zab@versity.com>
The use of 'log' for all the large sizes was pretty confusing. Let's
use 'chunk' to describe the large alloc size. Other things live in them
as well as logs. Then use 'log segment' to describe the larger log
structure stored in a chunk that's made up of all the little blocks.
Get rid of the explicit distinction between brick and block numbers.
The format is now defined in terms of fixed 4k blocks. Logs become a
logical structure that's made up of a fixed number of blocks. The
allocator still manages large log sized regions.