Now that we're in one repo, utils can get its format and ioctl headers
from the authoritative kmod files. When we're building a dist tarball
we copy the files over so that the build from the dist tarball can use
them.
Signed-off-by: Zach Brown <zab@versity.com>
For some reason, the make dist rule in kmod/ put the spec file in a
scoutfs-$ver/ directory, instead of scoutfs-kmod-$ver/ like the rest of
the files, and unlike the scoutfs-utils-$ver/ directory that holds the
spec file in the utils dist tarball.
This adds -kmod to the path for the spec file so that it matches the
rest of the kmod dist tarball.
Signed-off-by: Zach Brown <zab@versity.com>
Add a trivial top-level Makefile that just runs make in all the subdirs.
This will probably expand over time.
Signed-off-by: Zach Brown <zab@versity.com>
Add a utility that mimics our search_xattrs ioctl with directory entry
walking and fgetxattr, as efficiently as it can, so we can use it to
test large file populations.
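A rough userspace sketch of the approach (the xattr name, output, and
error handling here are made up, not the real utility):

    #define _XOPEN_SOURCE 700
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <ftw.h>
    #include <sys/stat.h>
    #include <sys/xattr.h>

    /* hypothetical tagged name; the real tests use their own xattrs */
    static const char *name = "scoutfs.hide.srch.test";

    static int visit(const char *path, const struct stat *st, int type,
                     struct FTW *ftw)
    {
            int fd;

            if (type != FTW_F)
                    return 0;

            fd = open(path, O_RDONLY);
            if (fd < 0)
                    return 0;

            /* a zero-length fgetxattr is enough to test presence */
            if (fgetxattr(fd, name, NULL, 0) >= 0)
                    printf("%llu\n", (unsigned long long)st->st_ino);

            close(fd);
            return 0;
    }

    int main(int argc, char **argv)
    {
            return argc == 2 ? nftw(argv[1], visit, 128, FTW_PHYS) : 1;
    }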
Signed-off-by: Zach Brown <zab@versity.com>
The search_xattrs ioctl is only going to find entries for xattrs with
the .srch. tag, which create srch entries as they're created and
destroyed. Export the xattr tag parsing so that the ioctl can return
-EINVAL for xattrs which don't have the scoutfs prefix and the .srch.
tag.
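A self-contained sketch of the check (the exact tag syntax here is
assumed, and this isn't the kernel's parser):

    #include <string.h>
    #include <errno.h>

    /* names without the prefix and .srch. tag never generated srch
     * entries, so searching for them can fail fast with -EINVAL */
    static int check_search_name(const char *name)
    {
            if (strncmp(name, "scoutfs.", strlen("scoutfs.")) ||
                !strstr(name, ".srch."))
                    return -EINVAL;
            return 0;
    }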
Signed-off-by: Zach Brown <zab@versity.com>
Hash collisions can lead to multiple xattr ids in an inode being found
for a given name hash value. If this happens we only want to return the
inode number once.
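A self-contained sketch of the dedup, assuming matches arrive sorted by
inode:

    #include <stdint.h>
    #include <stddef.h>

    /* copy each matching inode to the output once, even when hash
     * collisions produce several matching xattr ids in one inode */
    static size_t unique_inos(const uint64_t *matches, size_t nr,
                              uint64_t *out)
    {
            size_t i, n = 0;

            for (i = 0; i < nr; i++) {
                    if (n == 0 || out[n - 1] != matches[i])
                            out[n++] = matches[i];
            }
            return n;
    }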
Signed-off-by: Zach Brown <zab@versity.com>
Compacting very large srch files can use all of a given operation's
metadata allocator. When this happens we record the compaction's
position in the srch files in the pending item.
We could lose entries when this happens because the kway_next callback
would advance the srch file position as it read entries and put them in
the tournament tree leaves, not as it put them in the output file. We'd
continue from the entries after those loaded into the tournament
leaves, not from the entries that were still in the leaves.
This refactors the kway merge callbacks to differentiate between getting
entries at the position and advancing the positions. We initialize the
tournament leaves by getting entries at the positions and only advance
the position as entries leave the tournament tree and are either stored
in the output srch files or are dropped.
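The shape of the split might look something like this (hypothetical
names, not the real callback struct):

    /* reading at a position no longer moves it; the position only
     * advances once an entry has left the tournament tree and been
     * written to the output file or dropped */
    struct kway_ops {
            /* return the entry at pos without changing pos */
            void *(*peek)(void *pos);
            /* step pos past the entry that was just consumed */
            void (*advance)(void *pos);
    };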
Signed-off-by: Zach Brown <zab@versity.com>
In the rare case that searching for xattrs only finds deletions within
its window, it retries the search past the window. The end entry is
inclusive and is the last entry that can be returned. When retrying the
search we need to start from the entry after that to ensure forward
progress.
Signed-off-by: Zach Brown <zab@versity.com>
We have to limit the number of srch entries that we'll track while
performing a search for all the inodes that contain xattrs that match
the search hash value.
As we hit the limit on the number of entries to track we have to drop
entries. As we drop entries we can't return any inodes for entries
past the dropped entries. We were updating the end point of the search
as we dropped entries past the tracked set, but we weren't updating the
search end point if we dropped the last currently tracked entry.
And we were setting the end point to the dropped entry, not to the
entry before it. This could lead us to spuriously return deleted
entries if we drop the creation entry and then allow tracking its
deletion later.
This fixes both those problems. We now properly set the end point to
just before the dropped entry for all entries that we drop.
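A self-contained sketch of the rule (made-up types; entry positions
stand in for full srch entries):

    #include <stdint.h>

    struct tracked_search {
            uint64_t end;   /* inclusive end of trustworthy results */
    };

    /* nothing at or past a dropped entry can be returned, so move the
     * end point to just before it -- including when the dropped entry
     * is the last currently tracked one */
    static void note_dropped(struct tracked_search *ts, uint64_t pos)
    {
            if (pos <= ts->end)
                    ts->end = pos ? pos - 1 : 0;
    }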
Signed-off-by: Zach Brown <zab@versity.com>
The k-way merge used by srch file compaction only dropped the second
entry in a pair of duplicate entries. Duplicate entries are both
supposed to be removed so that entries for removed xattrs don't take up
space in the files.
This both drops the second entry and removes the first encoded entry.
As we encode entries we remember their starting offset and the previous
entry that they were encoded from. When we hit a duplicate entry we
undo the encoding of the previous entry.
This only works within srch file blocks. We can still have duplicate
entries that span blocks, but that's unlikely and relatively harmless.
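A self-contained sketch of the mechanism (invented state struct, not
the real encoder):

    #include <stdint.h>
    #include <stddef.h>

    struct encode_state {
            int has_prev;
            size_t prev_off;        /* start of the last encoded entry */
            size_t end_off;         /* current end of encoded bytes */
            uint64_t prev_id;       /* identity of the last entry */
    };

    /* hitting a duplicate both skips the new entry and truncates away
     * the copy already encoded; returns 1 if the entry was kept */
    static int encode_or_cancel(struct encode_state *st, uint64_t id,
                                size_t encoded_len)
    {
            if (st->has_prev && id == st->prev_id) {
                    st->end_off = st->prev_off;
                    st->has_prev = 0;
                    return 0;
            }
            st->prev_off = st->end_off;
            st->prev_id = id;
            st->has_prev = 1;
            st->end_off += encoded_len;     /* entry bytes land here */
            return 1;
    }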
Signed-off-by: Zach Brown <zab@versity.com>
The search_xattrs ioctl looks for srch entries in srch files that map
the caller's hashed xattr name to inodes. As it searches it maintains a
range of entries that it is looking for. When it searches sorted srch
files for entries it first performs a binary search for the start of the
range and then iterates over the blocks until it reaches the end of its
range.
The binary search for the start of the range was a bit wrong. If the
start of the range was less than all the blocks then the binary search
could wrap the left index, try to get a file block at a negative index,
and return an error for the search.
This is relatively hard to hit in practice. You have to search for the
xattr name with the smallest hashed value and have a sorted srch file
that's just the right size so that blk offset 0 is the last block
compared in the binary search, which sets the right index to -1. If
there are lots of xattrs, or sorted files of the wrong length, it'll
work.
This fixes the binary search so that it specifically records the first
block offset that intersects with the range and tests that the left and
right offsets haven't been inverted. Now that we're not breaking out of
the binary search loop we can more obviously put each block reference
that we get.
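A self-contained sketch of the fixed search over a sorted array of
per-block first entries (not the kernel code, which works on block
refs):

    #include <stdint.h>

    /* return the first block that can intersect a range starting at
     * start: the last block whose first entry is <= start, or block 0.
     * recording the candidate and looping on left <= right avoids the
     * old wrap to a negative right index */
    static int64_t first_intersecting(const uint64_t *blk_first,
                                      int64_t nr, uint64_t start)
    {
            int64_t left = 0, right = nr - 1, found = 0;

            while (left <= right) {
                    int64_t mid = left + (right - left) / 2;

                    if (blk_first[mid] <= start) {
                            found = mid;
                            left = mid + 1;
                    } else {
                            right = mid - 1;
                    }
            }
            return found;
    }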
Signed-off-by: Zach Brown <zab@versity.com>
The srch code was putting btree item refs even outside of success
paths. This is fine, but refs only need to be put when btree ops
return success and have set the reference.
Signed-off-by: Zach Brown <zab@versity.com>
Dirty items in a client transaction are stored in OS pages. When the
transaction is committed each item is stored in its position in a dirty
btree block in the client's existing log btree. Allocators are refilled
between transaction commits so a given commit must have sufficient meta
allocator space (avail blocks and unused freed entries) for all the
btree blocks that are dirtied.
The number of btree blocks that are written, thus the number of cow
allocations and frees, depends on the number of blocks in the log btree
and the distribution of dirty items amongst those blocks. In a typical
load items will be near each other and many dirty items in smaller
kernel pages will be stored in fewer larger btree blocks.
But with the right circumstances, the ratio of dirty pages to dirty
blocks can be much smaller. With a very large directory and random
entry renames you can easily have 1 btree block dirtied for every page
of dirty items.
Our existing meta allocator fill targets and the number of dirty item
cache pages we allowed did not properly take this into account. It was
possible (and, it turned out, relatively easy to test for with a huge
directory and random renames) to run out of meta avail blocks while
storing dirty items in dirtied btree blocks.
This rebalances our targets and thresholds to make it more likely that
we'll have enough allocator resources to commit dirty items. Instead of
having an arbitrary limit on the number of dirty item cache pages, we
require that a given number of dirty item cache pages have a given
number of allocator blocks available.
We require a decent number of available blocks for each dirty page, so
we increase the server's target number of blocks to give the client so
that it can still build large transactions.
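The relationship can be sketched like this (made-up names, and a
worst-case assumption of a fixed number of dirtied blocks per page):

    #include <stdbool.h>

    /* instead of a fixed cap on dirty pages, only accept another dirty
     * item cache page while the meta allocator could cover the btree
     * blocks that many pages might dirty in the worst case */
    static bool can_dirty_another_page(unsigned long dirty_pages,
                                       unsigned long avail_blocks,
                                       unsigned long blocks_per_page)
    {
            return (dirty_pages + 1) * blocks_per_page <= avail_blocks;
    }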
This code is conservative and should not be a problem in practice, but
it's theoretically possible to build a log btree and set of dirty items
that would dirty more blocks than this code assumes. We will probably
revisit this as we add proper support for ENOSPC.
Signed-off-by: Zach Brown <zab@versity.com>
The srch system checks that it has allocator space while deleting srch
files and while merging them and dirtying output blocks. Update the
callers to check for the correct number of avail or freed blocks needed
between each check.
Signed-off-by: Zach Brown <zab@versity.com>
Previously, scoutfs_alloc_meta_lo_thresh() returned true when a small
static number of metadata blocks were either available to allocate or
had space for freeing. This didn't make a lot of sense as the correct
number depends on how many allocations each caller will make during
their atomic transaction.
Rework the call to take an argument for the number of avail or freed
blocks to test for. This first pass just uses the existing number;
we'll get to the callers.
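The reworked helper presumably ends up with a shape like this (assumed
fields, not the real scoutfs allocator):

    #include <stdbool.h>
    #include <stdint.h>

    struct meta_alloc {
            uint64_t avail;         /* blocks available to allocate */
            uint64_t freed_room;    /* room left to record frees */
    };

    /* true when the allocator is too low for a caller that needs nr
     * avail blocks or nr freed entries in its transaction */
    static bool meta_lo_thresh(const struct meta_alloc *ma, uint64_t nr)
    {
            return ma->avail < nr || ma->freed_room < nr;
    }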
Signed-off-by: Zach Brown <zab@versity.com>
Add a test that randomly renames entries in a single large directory.
This has caught bugs in the reservation of allocator resources for
client transactions.
Signed-off-by: Zach Brown <zab@versity.com>
Prefer named to anonymous enums. This helps readability a little.
Use enum as param type if possible (a couple spots).
Remove unused enum in lock_server.c.
Define enum spbm_flags using shift notation for consistency.
Rename get_file_block()'s "gfb" parameter to "flags" for consistency.
Signed-off-by: Andy Grover <agrover@versity.com>
Not initializing wid[] can cause incorrect output.
Also, we only need 6 columns if we reference the array from 0.
Signed-off-by: Andy Grover <agrover@versity.com>
The xfstests generic/067 test is a bit of a stinker in that it's trying
to make sure a mount fails when the device is invalid. It does this
with raw mount calls without any filesystem-specific conventions. Our
mount fails, so the test passes, but not for the reason the test
assumes. It's not a great test. But we expect it to not be great and
produce this message.
Signed-off-by: Zach Brown <zab@versity.com>
Add another expected message that comes from attempting to mount an ext4
filesystem from a device that returns read errors.
Signed-off-by: Zach Brown <zab@versity.com>
The tests were checking that the literal string was zero, which it never
was. Once we check the value of the variable we notice that the sense
sense of some tests went from -n || to -n &&, so switch those to -z.
Signed-off-by: Zach Brown <zab@versity.com>
For xfstests, we need to be able to specify both devices for the
scratch filesystem as well.
Using -e and -f for now, but we should really switch to long options.
Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: minor arg message fixes]
Add -z option to run-tests.sh to specify metadata device.
Do a bunch of things twice, once for each device.
Fix up setup-error-teardown test.
Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: minor arg message fixes, golden output]
mkfs: Take two block devices as arguments. Write everything to metadata
dev, and the superblock to the data dev. UUIDs match. Differentiate by
checking a bit in a new "flags" field in the superblock.
Refactor device_size() a little. Convert spaces to tabs.
Move code to pretty-print sizes to dev.c so we can use it in error
messages there, as well as in mkfs.c.
print: Include flags in output.
Add -D and -M options for setting max dev sizes.
Allow sizes to be specified using units like "K", "G", etc.
Note: -D option replaces -S option, and uses above units rather than
the number of 4k data blocks.
Update man pages for cmdline changes.
Signed-off-by: Andy Grover <agrover@versity.com>
Update the README.md introduction to scoutfs to mention the need for and
use of metadata and data block devices.
Signed-off-by: Zach Brown <zab@versity.com>
Require that a second path to the metadata bdev be given via a mount
option. Verify the meta sb matches the sb also written to the data
dev. Change code as needed in super.c to allow both to be read.
Remove the check for overlapping
meta and data blknos, since they are now on entirely separate bdevs.
Use meta_bdev for superblock, quorum, and block.c reads and writes.
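A sketch of the verification (field and flag names assumed; the real
check lives in super.c):

    #include <string.h>
    #include <stdint.h>

    #define SB_FLAG_META_BDEV (1ULL << 0)   /* assumed flags bit */

    struct sketch_super {
            uint8_t uuid[16];
            uint64_t flags;
    };

    /* both devices carry a superblock with matching UUIDs, and a flags
     * bit written by mkfs marks the metadata device */
    static int supers_match(const struct sketch_super *meta,
                            const struct sketch_super *data)
    {
            if (memcmp(meta->uuid, data->uuid, sizeof(meta->uuid)) ||
                !(meta->flags & SB_FLAG_META_BDEV))
                    return -1;
            return 0;
    }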
Signed-off-by: Andy Grover <agrover@versity.com>
It was too tricky to pick out the difference between metadata and data
usage in the previous format. This makes it much more clear which
values are for either metadata or data.
Signed-off-by: Zach Brown <zab@versity.com>
Write locks are given an increasing version number as they're granted,
which makes its way into items in the log btrees and is used to find the
most recent version of an item.
The initialization of the lock server's next write_version for granted
locks dates back to the initial prototype of the forest of log btrees.
It is only initialized to zero as the module is loaded. This means that
reloading the module, perhaps by rebooting, resets all the item versions
to 0 and can lead to newly written items being ignored in favour of
older existing items with greater versions from a previous mount.
To fix this we initialize the lock server's write_version to the
greatest of all the versions in items in log btrees. We add a field to
the log_trees struct that records the greatest version and is
maintained as we write out items in transactions. These fields are
read by the server as it starts.
Then lock recovery needs to include the write_version so that the
lock_server can be sure to set the next write_version past the greatest
version in the currently granted locks.
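The server-side initialization amounts to something like this (sketch
with assumed per-log max versions):

    #include <stdint.h>

    /* start the next write_version past the greatest item version
     * recorded in any client's log_trees struct */
    static uint64_t init_write_version(const uint64_t *log_max_vers,
                                       int nr_logs)
    {
            uint64_t vers = 0;
            int i;

            for (i = 0; i < nr_logs; i++) {
                    if (log_max_vers[i] > vers)
                            vers = log_max_vers[i];
            }
            return vers + 1;
    }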
Signed-off-by: Zach Brown <zab@versity.com>
The log_trees structs store the data that is used by client commits.
The primary struct is communicated over the wire so it includes the rid
and nr that identify the log. The _val struct was stored in btree item
values and was missing the rid and nr because those were stored in the
item's key.
It's madness to duplicate the entire struct just to shave off those two
fields. We can remove the _val struct and store the main struct in item
values, including the rid and nr.
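Schematically (invented fields, just to show the shape of the change):

    #include <stdint.h>

    /* the one wire struct is now also the item value; rid and nr are
     * stored redundantly in both the value and the item's key */
    struct log_trees_sketch {
            uint64_t rid;           /* duplicated in the item key */
            uint64_t nr;            /* duplicated in the item key */
            /* ... the rest of the per-client commit state ... */
    };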
Signed-off-by: Zach Brown <zab@versity.com>
Add a test which makes sure that we don't initialize the lock server's
write version to a version less than existing log tree items.
Signed-off-by: Zach Brown <zab@versity.com>
Audit code for structs allocated on stack without initialization, or
using kmalloc() instead of kzalloc().
- avl.c: zero padding in avl_node on insert.
- btree.c: Verify item padding is zero, or WARN_ONCE.
- inode.c: scoutfs_inode contains scoutfs_timespecs, which have padding.
- net.c: zero pad in net header.
- net.h: scoutfs_net_addr has padding, zero it in scoutfs_addr_from_sin().
- xattr.c: scoutfs_xattr has padding, zero it.
- forest.c: item_root in forest_next_hint() appears to either be
assigned-to or unused, so no need to zero it.
- key.h: Ensure padding is zeroed in scoutfs_key_set_{zeros,ones}.
Signed-off-by: Andy Grover <agrover@versity.com>
Instead, explicitly add padding fields, and adjust member ordering to
eliminate compiler-added padding between members and at the end of the
struct (if possible: some structs end in a u8[0] array).
This should prevent unaligned accesses. Not a big deal on x86_64, but
other archs like aarch64 really want this.
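For example (illustrative only, not a real scoutfs struct):

    #include <stdint.h>

    /* members ordered largest to smallest with an explicit tail pad,
     * so the compiler inserts no hidden padding and every member sits
     * on its natural alignment */
    struct padded_example {
            uint64_t big;           /* offset 0 */
            uint32_t medium;        /* offset 8 */
            uint8_t small;          /* offset 12 */
            uint8_t __pad[3];       /* explicit pad out to 16 bytes */
    };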
Signed-off-by: Andy Grover <agrover@versity.com>