The message indicating that xfstests output was now being shown was
mashed up against the previous passed stats and it was gross and I hated
it.
Signed-off-by: Zach Brown <zab@versity.com>
When running debug kernels in guests we can bog things down enough to
trigger hrtimer warnings. I don't think there's much we can reasonably
do about that.
Signed-off-by: Zach Brown <zab@versity.com>
Farewell work is queued by farewell message processing. Server shutdown
didn't properly wait for pending farewell work to finish before tearing
down. As the server work destroyed the server's connection, the
farewell work could still be running and try to send responses down the
socket.
We make the server more carefully avoid queueing farewell work if it's
in the process of shutting down and wait for farewell work to finish
before destroying the server's resources.
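The ordering can be sketched as a minimal userspace simulation; the
names here are illustrative, not the real scoutfs identifiers:

```c
#include <assert.h>
#include <stdbool.h>

/* A minimal single-threaded sketch of the shutdown ordering: refuse to
 * queue new farewell work once shutdown starts, then drain what's
 * already queued before destroying the connection. */
static bool shutting_down;
static int pending_farewell;
static bool conn_destroyed;

/* farewell message processing tries to queue work */
static bool queue_farewell_work(void)
{
	if (shutting_down)
		return false;	/* refuse once shutdown has started */
	pending_farewell++;
	return true;
}

static void run_pending_farewell_work(void)
{
	/* work must never run against a destroyed connection */
	assert(!conn_destroyed);
	pending_farewell--;
}

static void server_shutdown(void)
{
	shutting_down = true;		/* no new farewell work */
	while (pending_farewell > 0)	/* wait for what's queued */
		run_pending_farewell_work();
	conn_destroyed = true;		/* now safe to tear down */
}
```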
This fixes all manner of crashes seen in testing when a bunch of nodes
unmounted at once, generating farewell work on the server just as the
server's own unmount tore it down.
Signed-off-by: Zach Brown <zab@versity.com>
scoutfs_srch_get_compact() is building up a compaction request which has
a list of srch files to read and sort and write into a new srch file.
It finds input files by searching for a sufficient number of similar
files: first any unsorted log files and then sorted log files that are
around the same size.
It finds the files by using btree next on the srch zone which has types
for unsorted srch log files, sorted srch files, but also pending and
busy compaction items.
It was being far too cute about iterating over different key types. It
was trying to adapt to finding the next key and was making assumptions
about the order of key types. It didn't notice that the pending and
busy key types followed log and sorted and would generate EIO when it
ran into them and found their value length didn't match what it was
expecting.
Rework the next item ref parsing so that it returns -ENOENT if it gets
an unexpected key type, then have the caller move on to the next key
type when it sees -ENOENT.
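The reworked iteration can be sketched like this; the key types and
helper names are made up for illustration:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Illustrative key types in the srch zone; values are invented. */
enum { TYPE_LOG = 1, TYPE_SORTED = 2, TYPE_PENDING = 3, TYPE_BUSY = 4 };

struct key { int type; int id; };

/* Sketch of the reworked parsing: return -ENOENT when btree-next lands
 * on a key of a different type, instead of trying to interpret it. */
static int next_item_of_type(const struct key *keys, size_t nr,
			     size_t *pos, int type)
{
	if (*pos >= nr || keys[*pos].type != type)
		return -ENOENT;
	(*pos)++;
	return 0;
}

/* The caller treats -ENOENT as "done with this type, try the next",
 * so pending/busy items no longer generate spurious EIO. */
static int count_input_files(const struct key *keys, size_t nr)
{
	static const int types[] = { TYPE_LOG, TYPE_SORTED };
	size_t pos = 0;
	int found = 0;

	for (size_t t = 0; t < 2; t++) {
		while (next_item_of_type(keys, nr, &pos, types[t]) == 0)
			found++;
	}
	return found;
}
```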
Signed-off-by: Zach Brown <zab@versity.com>
Add a function that tests can use to skip when the metadata device isn't
large enough. I thought we needed to avoid enospc in a particular test,
but it turns out the test's failure was unrelated. So this isn't used
for now but it seems nice to keep around.
Signed-off-by: Zach Brown <zab@versity.com>
The grace period is intended to let lock holders squeeze in more bulk
work before another node pulls the lock out from under them. The length
of the delay is a balance between getting more work done per lock hold
and adding latency to ping-ponging workloads.
The current grace period was too short. To do work in the conflicting
case you often have to read the result that the other mount wrote as you
invalidated their lock. The test was written in the LSM world where
we'd effectively read a single level 0 1MB segment. In the btree world
we're checking bloom blocks and reading the other mount's btree. It has
more dependent read latency.
So we turn up the grace period to let conflicting readers squeeze in
more work before pulling the lock out from under them. This value was
chosen to make lock-conflicting-batch-commit pass in guests sharing nvme
metadata devices in debugging kernels.
Signed-off-by: Zach Brown <zab@versity.com>
The test had a silly typo in the label it put on the time it took mounts
to perform conflicting metadata changes.
Signed-off-by: Zach Brown <zab@versity.com>
When we're splicing in dentries in lookup we can be splicing the result
of changes on other nodes into a stale dcache. The stale dcache might
contain dir entries and the dcache does not allow aliased directories.
Use d_materialise_unique() to splice in dir inodes so that we remove all
aliased dentries which must be stale.
We can still use d_splice_alias() for all other inode types. Any
existing stale dentries will fail revalidation before they're used.
Signed-off-by: Zach Brown <zab@versity.com>
We can lose interesting state if the mounts are unmounted as tests
fail, so only unmount if all the tests pass.
Signed-off-by: Zach Brown <zab@versity.com>
Weirdly, run-tests was treating trace_printk not as an option to enable
trace_printk() traces but as an option to print trace events to the
console with printk? That's not a thing.
Make -P really enable trace_printk tracing and collect it as it would
for enabled trace events. It needs to be treated separately from the
-t options that enable trace events.
While we're at it treat the -P trace dumping option as a stand-alone
option that works without -t arguments.
Signed-off-by: Zach Brown <zab@versity.com>
run-tests.sh has a -t argument which takes a whitespace separated string
of globs of events to enable. This was hard to use and made it very
easy to accidentally expand the globs at the wrong place in the script.
This makes each -t argument specify a single word glob which is stored
in an array so the glob isn't expanded until it's applied to the trace
event path. We also add an error for -t globs that didn't match any
events and add a message with the count of -t arguments and enabled
events.
Signed-off-by: Zach Brown <zab@versity.com>
The lock invalidation work function needs to be careful not to requeue
itself while we're shutting down or we can be left with invalidation
functions racing with shutdown. Invalidation calls igrab, so we can
end up with unmount warnings that there are still inodes in use.
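The requeue guard amounts to rechecking the shutdown flag both when
queueing and inside the work function; a minimal sketch with invented
names:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the guard: the work function only requeues itself while
 * the lock subsystem is still running. */
static bool shutting_down;
static int queued;

static void queue_invalidation_work(void)
{
	if (!shutting_down)
		queued++;
}

static void invalidation_work_fn(bool more_to_invalidate)
{
	queued--;
	/* recheck the flag shutdown sets so we never requeue after
	 * shutdown has begun */
	if (more_to_invalidate && !shutting_down)
		queue_invalidation_work();
}
```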
Signed-off-by: Zach Brown <zab@versity.com>
Add a new distinguishable return value (ENOBUFS) from the allocator for
when the transaction cannot allocate space. This doesn't mean the
filesystem is
full -- opening a new transaction may result in forward progress.
Alter fallocate and get_blocks code to check for this err val and retry
with a new transaction. Handling actual ENOSPC can still happen, of
course.
Add counter called "alloc_trans_retry" and increment it from both spots.
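The retry loop can be sketched in userspace; the helper names and the
single-block transaction budget are made up for illustration:

```c
#include <assert.h>
#include <errno.h>

/* -ENOBUFS from the allocator means "this transaction is out of
 * room", not that the filesystem is full. */
static int alloc_trans_retry;	/* the new counter */
static int trans_free_blocks = 1;

static int alloc_block(void)
{
	if (trans_free_blocks == 0)
		return -ENOBUFS;	/* transaction full */
	trans_free_blocks--;
	return 0;
}

static void commit_and_reopen_trans(void)
{
	trans_free_blocks = 1;	/* a fresh transaction has room again */
}

/* e.g. fallocate or get_blocks retrying with a new transaction */
static int alloc_with_retry(void)
{
	int ret;

	do {
		ret = alloc_block();
		if (ret == -ENOBUFS) {
			alloc_trans_retry++;
			commit_and_reopen_trans();
		}
	} while (ret == -ENOBUFS);
	return ret;	/* a real -ENOSPC would still be returned */
}
```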
Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: fixed up write_begin error paths]
The item cache page life cycle is tricky. There are no proper page
reference counts; everything is done by nesting the page rwlock inside
item_cache_info rwlock. The intent is that you can only reference pages
while you hold the rwlocks appropriately. The per-cpu page references
are outside that locking regime so they add a reference count. Now
there are reference counts for the main cache index reference and for
each per-cpu reference.
The end result of all this is that you can only reference pages outside
of locks if you're protected by references.
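The counting rule can be sketched as plain refcounting, one count for
the cache index and one per per-cpu reference; this is an illustration,
not the real structures:

```c
#include <assert.h>

/* Sketch of the reference rules above: the page is only freed once the
 * index reference and every per-cpu reference have been dropped. */
struct cache_page {
	int refcount;
	int freed;
};

static void page_get(struct cache_page *pg)
{
	pg->refcount++;
}

static void page_put(struct cache_page *pg)
{
	if (--pg->refcount == 0)
		pg->freed = 1;
}
```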
Lock invalidation messed this up by trying to add its right split page
to the lru after it was unlocked. Its page reference wasn't protected
at this point. Shrinking could be freeing that page, and so it could be
putting a freed page's memory back on the lru.
Shrinking had a little bug that it was using list_move to move an
initialized lru_head list_head. It turns out to be harmless (list_del
will just follow pointers to itself and set itself as next and prev all
over again), but boy does it catch one's eye. Let's remove all
confusion and drop the reference while holding the cinf->rwlock instead
of trying to optimize freeing outside locks.
Finally, the big one: inserting a read item after compacting the page
to make room was inserting through stale parent pointers into the old
pre-compacted page, rather than the new page that was swapped in by
compaction. This left references to a freed page in the page rbtree and
hilarity ensued.
Signed-off-by: Zach Brown <zab@versity.com>
Instead of hashing headers, define an interop version. Do not mount
superblocks that have a different version, either higher or lower.
Since this is pretty much the same as the format hash except it's a
constant, minimal code changes are needed.
Initial dev version is 0, with the intent that version will be bumped to
1 immediately prior to tagging initial release version.
Update README. Fix comments.
Add interop version to notes and modinfo.
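The check itself is trivial; a sketch with an invented helper name, and
the dev version constant from above:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* A constant replaces the format hash; mount refuses any mismatch,
 * higher or lower.  Bumped to 1 immediately before initial release. */
#define INTEROP_VERSION 0ULL

static int check_interop_version(uint64_t super_version)
{
	if (super_version != INTEROP_VERSION)
		return -EINVAL;	/* refuse to mount in either direction */
	return 0;
}
```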
Signed-off-by: Andy Grover <agrover@versity.com>
Add a relatively constrained ioctl that moves extents between regular
files. This is intended to be used by tasks which combine many existing
files into a much larger file without reading and writing all the file
contents.
Signed-off-by: Zach Brown <zab@versity.com>
By convention we have the _IO* ioctl definition after the argument
structs and ALLOC_DETAIL got it a bit wrong so move it down.
Signed-off-by: Zach Brown <zab@versity.com>
We were checking for the wrong magic value.
We now need to use -f when running mkfs in run-tests for things to work.
Signed-off-by: Andy Grover <agrover@versity.com>
This more closely matches stage ioctl and other conventions.
Also change release code to use offset/length nomenclature for consistency.
Signed-off-by: Andy Grover <agrover@versity.com>
Update for cli args and options changes. Reorder subcommands to match
scoutfs built-in help.
Consistent ScoutFS capitalization.
Tighten up some descriptions and verbiage for consistency and omit
descriptions of internals in a few spots.
Add SEE ALSO for blockdev(8) and wipefs(8).
Signed-off-by: Andy Grover <agrover@versity.com>
Make it static and then use it for both argp_parse and
cmd_register_argp.
Split commands into five groups to make their purpose easier to
understand.
Mention that each command has its own help text, and that we are being
fancy to keep the user from having to give the fs path.
Signed-off-by: Andy Grover <agrover@versity.com>
This has some fancy parsing going on, and I decided to just leave it
in the main function instead of going to the effort to move it all
to the parsing function.
Signed-off-by: Andy Grover <agrover@versity.com>
Support max-meta-size and max-data-size using KMGTP units with rounding.
Detect other fs signatures using blkid library.
Detect ScoutFS super using magic value.
Move read_block() from print.c into util.c since blkid also needs it.
Signed-off-by: Andy Grover <agrover@versity.com>
Print a warning if printing a data dev; you probably wanted the meta dev.
Change read_block() to return an error value. Otherwise there are
confusing ENOMEM messages when pread() fails, e.g. when trying to print
/dev/null.
Signed-off-by: Andy Grover <agrover@versity.com>
Make offset and length optional. Allow size units (KMGTP) to be used
for offset/length.
release: Since off/len are no longer given in 4k blocks, round offset
and length to 4KiB, down and up respectively. Emit a message if
rounding occurs.
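The rounding works out to masking the start down and the end up to the
block size; a small sketch with illustrative helper names:

```c
#include <assert.h>
#include <stdint.h>

#define BLOCK_SIZE 4096ULL

/* round the start of the region down to a 4KiB boundary */
static uint64_t round_offset_down(uint64_t offset)
{
	return offset & ~(BLOCK_SIZE - 1);
}

/* round the end up, so the region still covers the requested bytes */
static uint64_t round_length_up(uint64_t offset, uint64_t length)
{
	uint64_t end = (offset + length + BLOCK_SIZE - 1) &
		       ~(BLOCK_SIZE - 1);

	return end - round_offset_down(offset);
}
```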
Make version a required option.
stage: change ordering to src (the archive file) then the dest (the
staged file).
Signed-off-by: Andy Grover <agrover@versity.com>
With many concurrent writers we were seeing excessive commits forced
because it thought the data allocator was running low. The transaction
was checking the raw total_len value in the data_avail alloc_root for
the number of free data blocks. But this read wasn't locked, and
allocators could completely remove a large free extent and then
re-insert a slightly smaller free extent as they perform their
allocation. The transaction could see a temporarily very small total_len
and trigger a commit.
Data allocations are serialized by a heavy mutex, so we don't want the
reader to try to use that to see a consistent total_len. Instead
we create a data allocator run-time struct that has a consistent
total_len that is updated after all the extent items are manipulated.
This also gives us a place to put the caller's cached extent so that it
can be included in the total_len; previously it wasn't included in the
free total that the transaction saw.
The file data allocator can then initialize and use this struct instead
of its raw use of the root and cached extent. Then the transaction can
sample its consistent total_len that reflects the root and cached
extent.
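The shape of the run-time struct can be sketched like this; the field
and function names are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* total_len is published once, after all the extent items have been
 * manipulated, and it includes the cached extent, so an unlocked
 * reader always samples a consistent total. */
struct data_alloc {
	uint64_t root_total;	/* free extent items in the root */
	uint64_t cached_len;	/* the caller's cached extent */
	uint64_t total_len;	/* consistent sum readers sample */
};

static void data_alloc_update(struct data_alloc *da)
{
	/* updated only after the remove/re-insert dance is done */
	da->total_len = da->root_total + da->cached_len;
}

static void data_alloc_extent(struct data_alloc *da, uint64_t len)
{
	/* take the allocation from the cached extent for simplicity */
	da->cached_len -= len;
	data_alloc_update(da);
}
```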
A subtle detail is that fallocate can't use _free_data to return an
allocated extent on error to the avail pool. It instead frees into the
data_free pool like normal frees. It doesn't really matter that this
could prematurely drain the avail pool because it's in an error path.
Signed-off-by: Zach Brown <zab@versity.com>