Add mount and sysfs options for changing the quorum heartbeat timeout.
This allows setting a longer delay before taking over for failed hosts,
which has a greater chance of surviving temporary non-fatal delays.
We also double the existing default timeout to 10s which is still
reasonably responsive.
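As a rough illustration, the timeout could be adjusted at run time through
the sysfs option; the directory layout and attribute name below are
assumptions for the example, not the actual scoutfs interface.

/*
 * Hypothetical sketch: bump the quorum heartbeat timeout at run time by
 * writing a sysfs attribute.  The path and attribute name are assumptions
 * for illustration only.
 */
#include <stdio.h>

static int set_heartbeat_timeout_ms(const char *sysfs_dir, unsigned long ms)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/quorum/heartbeat_timeout_ms",
		 sysfs_dir);
	f = fopen(path, "w");
	if (!f)
		return -1;

	fprintf(f, "%lu\n", ms);
	return fclose(f);
}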
Signed-off-by: Zach Brown <zab@versity.com>
Add a command for writing a super block to a new data device after
reading the metadata device to ensure that there's no existing
data on the old data device.
Signed-off-by: Zach Brown <zab@versity.com>
Split the existing device_size() into get_device_size() and
limit_device_size(). An upcoming command wants to get the device size
without applying limiting policy.
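A minimal sketch of what such a split might look like in the utils,
assuming a BLKGETSIZE64-style query; the function bodies and the limiting
policy argument are illustrative, only the split itself comes from this
change.

/* get the raw size without policy, let callers apply limits separately */
#include <stdint.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>

static int get_device_size(int fd, uint64_t *size)
{
	struct stat st;

	if (fstat(fd, &st) < 0)
		return -errno;

	/* regular files are handy for testing with image files */
	if (S_ISREG(st.st_mode)) {
		*size = st.st_size;
		return 0;
	}

	if (ioctl(fd, BLKGETSIZE64, size) < 0)
		return -errno;

	return 0;
}

/* apply a limiting policy on top of whatever size was found */
static uint64_t limit_device_size(uint64_t size, uint64_t limit)
{
	return size < limit ? size : limit;
}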
Signed-off-by: Zach Brown <zab@versity.com>
Add mount options for the size of preallocation and for whether or not it
should be restricted to extending writes. Disabling the default
restriction to streaming writes lets it preallocate in aligned regions
of the preallocation size when they contain no extents.
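A small sketch of the aligned-region idea, assuming preallocation is sized
in blocks; only the alignment arithmetic is shown, not the real extent
checks.

/*
 * With the streaming-write restriction disabled, a write into an empty
 * aligned region can preallocate the whole region.  Names are
 * illustrative.
 */
#include <stdint.h>

static uint64_t prealloc_region_start(uint64_t blk, uint64_t prealloc_blocks)
{
	return blk - (blk % prealloc_blocks);
}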
Signed-off-by: Zach Brown <zab@versity.com>
Add support for POSIX ACLs as described in acl(5). Support is
enabled by default and can be explicitly enabled or disabled with the
acl or noacl mount options, respectively.
Signed-off-by: Zach Brown <zab@versity.com>
Add an option to skip printing structures that are likely to be so huge
that the print output becomes completely unwieldy on large systems.
Signed-off-by: Zach Brown <zab@versity.com>
The fence script we use for our single node multi-mount tests only knows
how to fence by using forced unmount to destroy a mount. As of now, the
tests only generate failing nodes that need to be fenced by using forced
unmount as well. This results in the awkward situation where the
testing fence script doesn't have anything to do because the mount is
already gone.
When the test fence script has nothing to do we might not notice if it
isn't run. This adds explicit verification to the fencing tests that
the script was really run. It adds per-invocation logging to the fence
script and the test makes sure that it was run.
While we're at it, we take the opportunity to tidy up some of the
scripting around this. We use a sysfs file with the data device
major:minor numbers so that the fencing script can find and unmount
mounts without having to ask them for their rid. They may not be
operational.
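The script's job can be pictured with a small C sketch: scan mountinfo for
the data device's major:minor and force unmount the matching mount points.
This is an illustration of the approach, not the script itself, and the
error handling is minimal for brevity.

/*
 * Illustrative C version of what the test fence script does: find mounts
 * whose backing device matches the major:minor read from sysfs and force
 * unmount them.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mount.h>

static int force_unmount_by_majmin(const char *majmin)
{
	char line[1024], mm[64], mnt[512];
	FILE *f = fopen("/proc/self/mountinfo", "r");
	int ret = 0;

	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		/* fields: id, parent, major:minor, root, mount point, ... */
		if (sscanf(line, "%*d %*d %63s %*s %511s", mm, mnt) != 2)
			continue;
		if (!strcmp(mm, majmin))
			ret |= umount2(mnt, MNT_FORCE);
	}

	fclose(f);
	return ret;
}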
Signed-off-by: Zach Brown <zab@versity.com>
Add a mount option to set the delay between scans of the orphan list.
The sysfs file for the option is writable so this option can be set at
run time.
Signed-off-by: Zach Brown <zab@versity.com>
The man pages and inline help blurbs for the recently added format
version and quorum config commands incorrectly described the device
arguments which are needed.
Signed-off-by: Zach Brown <zab@versity.com>
Add the get-allocated-inos scoutfs command which wraps the
GET_ALLOCATED_INOS ioctl. It'll be used by tests to find items
associated with an inode instead of trying to open the inode by a
constructed handle after it was unlinked.
Signed-off-by: Zach Brown <zab@versity.com>
The local-force-unmount fenced fencing script only works when all the
mounts are on the local host and it uses force unmount. It is only
used in our specific local testing scripts. Packaging it as an example
led people to believe that it could be used to cobble together a
multi-host testing network, however temporary.
Move it from being in utils and packaged to being private to our tests so
that it doesn't present an attractive nuisance.
Signed-off-by: Zach Brown <zab@versity.com>
Back when we added the get/commit transaction sequence numbers to the
log_trees we forgot to add them to the scoutfs print output.
Signed-off-by: Zach Brown <zab@versity.com>
Add a command to change the quorum config, which starts by only
supporting updates to the super block while the file system is offline.
Signed-off-by: Zach Brown <zab@versity.com>
We're adding a command to change the quorum config which updates its
version number. Let's make the version a little more visible and start
it at the more humane 1.
Signed-off-by: Zach Brown <zab@versity.com>
Move the code that checks that the super is in use from
change-format-version into its own function in util.c. We'll use it in
an upcoming command to change the quorum config.
Signed-off-by: Zach Brown <zab@versity.com>
Move functions for printing and validating the quorum config from mkfs.c
to quorum.c so that they can be used in an upcoming command to change
the quorum config.
Signed-off-by: Zach Brown <zab@versity.com>
The change from --quorum-count to --quorum-slot forgot to update a
mention of the option in an error message in mkfs when it wasn't
provided.
Signed-off-by: Zach Brown <zab@versity.com>
The idea here was that we'd expand the size of the struct and
valid_bytes would tell the kernel which fields were present in
userspace's struct. That doesn't combine well with the ioctl convention
of having the size of the type baked into the ioctl number. We'll
remove this to make the world less surprising. If we expand the
interface we'd add additional ioctls and types.
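The convention in question bakes the size of the argument struct into the
ioctl number, which is why growing a struct behind valid_bytes doesn't
compose well; a generic example of the effect, with made-up names:

/*
 * _IOR() and friends encode sizeof(arg type) in the ioctl number, so a
 * larger struct produces a different number rather than a bigger version
 * of the same one.  The type letter, number, and structs are made up.
 */
#include <stdint.h>
#include <linux/ioctl.h>

struct example_args_v1 {
	uint64_t field;
};

struct example_args_v2 {
	uint64_t field;
	uint64_t added_later;
};

/* same 'nr', but these expand to two different ioctl numbers */
#define EXAMPLE_IOC_GET_V1 _IOR('e', 1, struct example_args_v1)
#define EXAMPLE_IOC_GET_V2 _IOR('e', 1, struct example_args_v2)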
Signed-off-by: Zach Brown <zab@versity.com>
There are a few bad corner cases in the state machine that governs how
client transactions are opened, modified, and committed.
The worst problem is on the server side. All server request handlers
need to cope with resent requests without causing bad side effects.
Both get_log_trees and commit_log_trees would try to fully process
resent requests. _get_log_trees() looks safe because it works with the
log_trees that was stored previously. _commit_log_trees() is not safe
because it can rotate out the srch log file referenced by the sent
log_trees every time it's processed. This could create extra srch
entries which would delete the first instance of entries. Worse still,
by injecting the same block structure into the system multiple times it
ends up causing multiple frees of the blocks that make up the srch file.
The client side problems are slightly different, but related. There
aren't strong constraints which guarantee that we'll only send a commit
request after a get request succeeds. In crazy circumstances the
commit request in the write worker could come before the first get in
mount succeeds. Far worse is that we can send multiple commit requests
for one transaction if it changes as we get errors during multiple
queued write attempts, particularly if we get errors from get_log_trees
after having successfully committed.
This hardens all these paths to ensure a strict sequence of
get_log_trees, transaction modification, and commit_log_trees.
On the server we add *_trans_seq fields to the log_trees struct so that
both get_ and commit_ can see that they've already prepared a commit to
send or have already committed the incoming commit, respectively. We
can use the get_trans_seq field as the trans_seq of the open transaction
and get rid of the entire separate mechanism we used to have for
tracking open trans seqs in the clients. We can get the same info by
walking the log_trees and looking at their *_trans_seq fields.
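A rough sketch of the resend detection the new fields enable; the field
names follow the description above but the struct layout and helpers are
illustrative.

/*
 * get_log_trees records the seq it handed out and commit_log_trees
 * records the seq it has applied, so a repeated request for an
 * already-handled seq can be acked without side effects.
 */
#include <stdint.h>
#include <stdbool.h>

struct log_trees_seqs {
	uint64_t get_trans_seq;		/* seq given to the open transaction */
	uint64_t commit_trans_seq;	/* last seq committed for this client */
};

static bool get_is_resend(struct log_trees_seqs *lt, uint64_t seq)
{
	return lt->get_trans_seq == seq;
}

static bool commit_is_resend(struct log_trees_seqs *lt, uint64_t seq)
{
	return lt->commit_trans_seq >= seq;
}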
In the client we have the write worker immediately return success if
mount hasn't opened the first transaction. Then we don't have the
worker return to allow further modification until it has gotten success
from get_log_trees.
Signed-off-by: Zach Brown <zab@versity.com>
Add a count of used inodes to the super block and a change in the inode
count to the log_trees struct. Client transactions track the change in
inode count as they create and delete inodes. The log_trees delta is
added to the count in the super as finalized log_trees are deleted.
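A tiny sketch of the delta accounting, assuming a signed per-transaction
counter that is folded into the super's total; types and names are
illustrative.

/*
 * The transaction tracks creates minus deletes as a signed delta; when a
 * finalized log_trees is deleted the delta is folded into the super's
 * total.
 */
#include <stdint.h>

static void apply_inode_count_delta(uint64_t *total_inodes, int64_t delta)
{
	/* unsigned wrap-around handles negative deltas correctly */
	*total_inodes += (uint64_t)delta;
}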
Signed-off-by: Zach Brown <zab@versity.com>
We had previously started on a relatively simple notion of an
interoperability version which wasn't quite right. This fleshes out
support for a more functional format version. The super blocks have a
single version that defines behaviour of the running system. The code
supports a range of versions and we add some initial interfaces for
updating the version while the system is offline. All of this together
should let us safely change the underlying format over time.
Signed-off-by: Zach Brown <zab@versity.com>
Add a write_nr field to the quorum block header which is incremented
with every write. Each event also gets a write_nr field that is set to
the incremented value from the header. This gives us a history of the
order of event updates that isn't sensitive to misconfigured time.
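A loose sketch of the relationship between the header and event counters;
the struct layouts are placeholders, only the increment-and-copy behavior
comes from this change.

/*
 * Every block write increments the header's write_nr and the event being
 * updated records that value, giving a time-independent ordering of event
 * updates.  These structs are not the on-disk format.
 */
#include <stdint.h>

struct quorum_event_sketch {
	uint64_t term;
	uint64_t write_nr;
};

struct quorum_block_sketch {
	uint64_t write_nr;
	struct quorum_event_sketch event;
};

static void update_event(struct quorum_block_sketch *blk, uint64_t term)
{
	blk->write_nr++;
	blk->event.term = term;
	blk->event.write_nr = blk->write_nr;
}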
Signed-off-by: Zach Brown <zab@versity.com>
We're adding another command that does block IO so move some block
reading and writing functions out of mkfs. We also grow a few function
variants and call the write_sync variant from mkfs instead of having it
manually sync.
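A minimal sketch of what a _sync write variant typically looks like in
userspace utils; the names and the blkno-based offset calculation are
illustrative.

/*
 * Shared block write helper and its _sync variant so that callers like
 * mkfs no longer sync manually.
 */
#include <stdint.h>
#include <stddef.h>
#include <unistd.h>
#include <errno.h>

static int write_block(int fd, uint64_t blkno, const void *buf, size_t size)
{
	ssize_t ret = pwrite(fd, buf, size, (off_t)(blkno * size));

	if (ret < 0)
		return -errno;
	return ret == (ssize_t)size ? 0 : -EIO;
}

static int write_block_sync(int fd, uint64_t blkno, const void *buf,
			    size_t size)
{
	int ret = write_block(fd, blkno, buf, size);

	if (ret == 0 && fsync(fd) < 0)
		ret = -errno;
	return ret;
}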
Signed-off-by: Zach Brown <zab@versity.com>
This adds i_version to our inode and maintains it as we allocate, load,
modify, and store inodes. We set the flag in the superblock so
in-kernel users can use i_version to see changes in our inodes.
Signed-off-by: Zach Brown <zab@versity.com>
Add the .totl. xattr tag. When the tag is set the end of the name
specifies a total name with 3 encoded u64s separated by dots. The value
of the xattr is a u64 that is added to the named total. An ioctl is
added to read the totals.
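A hedged example of creating such an xattr from userspace; the
"scoutfs.totl." name prefix and the decimal string value encoding are
assumptions for illustration, the description above only fixes the three
dotted u64s in the name and a u64 value.

/*
 * Illustrative helper for adding to a named total via an xattr.
 */
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

static int add_to_total(const char *path, unsigned long long a,
			unsigned long long b, unsigned long long c,
			unsigned long long value)
{
	char name[128];
	char val[32];

	snprintf(name, sizeof(name), "scoutfs.totl.%llu.%llu.%llu", a, b, c);
	snprintf(val, sizeof(val), "%llu", value);

	return setxattr(path, name, val, strlen(val), 0);
}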
Signed-off-by: Zach Brown <zab@versity.com>
The fs log btrees have values that start with a header that stores the
item's seq and flags. There's a lot of sketchy code that manipulates
the value header as items are passed around.
This adds the seq and flags as core item fields in the btree. They're
only set by the interfaces that are used to store fs items: _insert_list
and _merge. The rest of the btree items that use the main interface
don't work with the fields.
This was done to help delta items discover when logged items have been
merged before the finalized log btrees are deleted and the code ends up
being quite a bit cleaner.
Signed-off-by: Zach Brown <zab@versity.com>
Add an inode creation time field. It's created for all new inodes.
It's visible to stat_more. setattr_more can set it during
restore.
Signed-off-by: Zach Brown <zab@versity.com>
Currently the first inode number that can be allocated directly follows
the root inode. This means the first batch of allocated inodes are in
the same lock group as the root inode.
The root inode is a bit special. It is always hot as absolute path
lookups and inode-to-path resolution always read directory entries from
the root.
Let's try aligning the first free inode number to the next inode lock
group boundary. This will stop work in those inodes from necessarily
conflicting with work in the root inode.
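The alignment itself is simple arithmetic; a sketch, with the lock group
size as a stand-in constant.

/*
 * Round the first allocatable inode number up past the root inode's lock
 * group.  The group size here is a stand-in; the real value comes from
 * the inode lock group definition.
 */
#include <stdint.h>

#define INO_LOCK_GROUP_NRS	1024ULL	/* illustrative group size */

static uint64_t first_free_ino(uint64_t root_ino)
{
	/* start in the group after the one that contains the root inode */
	return ((root_ino / INO_LOCK_GROUP_NRS) + 1) * INO_LOCK_GROUP_NRS;
}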
Signed-off-by: Zach Brown <zab@versity.com>
We have a problem where items can appear to go backwards in time because
of the way we choose which log btrees to finalize and merge.
Because we don't have versions in items in the fs_root, and might not
even have items at all if they were deleted, we always assume items in
log btrees are newer than items in the fs root.
This creates the requirement that we can't merge a log btree if it has
items that are also present in older versions in other log btrees which
are not being merged. The unmerged old item in the log btree would take
precedence over the newer merged item in the fs root.
We weren't enforcing this requirement at all. We used the max_item_seq
to ensure that all items were older than the current stable seq but that
says nothing about the relationship between older items in the finalized
and active log btrees. Nothing at all stops an active btree from having
an old version of a newer item that is present in another mount's
finalized log btree.
To reliably fix this we create a strict item seq discontinuity between
all the finalized merge inputs and all the active log btrees. Once any
log btree is naturally finalized the server forces all the clients to
group up and finalize all their open log btrees. A merge operation can
then safely operate on all the finalized trees before any new trees are
given to clients who would start using increasing item seqs.
Signed-off-by: Zach Brown <zab@versity.com>
Add a scoutfs command that uses an ioctl to send a request to the server
to safely use a device that has grown.
Signed-off-by: Zach Brown <zab@versity.com>
mkfs was incorrectly initializing total_data_blocks. The field is meant
to record the number of blocks from the start of the device that the
filesystem could access. mkfs was subtracting the initial reserved area
of the device, so the field instead recorded only the blocks after that
reserved region.
This could allow accesses past the end of the device if mount checks the
device size against the smaller total_data_blocks.
And we're about to use total_data_blocks as the start of a new extent to
add when growing the volume. It needs to be fixed so that this new
grown free extent doesn't overlap with the end of the existing free
extents.
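In effect the fix changes the calculation from a post-reserved count to
the full device extent; a sketch with illustrative names.

/*
 * total_data_blocks describes the whole device from block zero, so the
 * reserved area is not subtracted.
 */
#include <stdint.h>

static uint64_t total_data_blocks(uint64_t device_bytes, uint64_t block_size)
{
	/* was: (device_bytes - reserved_bytes) / block_size */
	return device_bytes / block_size;
}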
Signed-off-by: Zach Brown <zab@versity.com>
There are fields in the super block that specify the range of blocks
that would be used for metadata or data. They are from the time when a
single block device was carved up into regions for metadata and data.
They don't make sense now that we have separate metadata and data block
devices. The starting blkno is static and we go to the end of the
device.
This removes the fields now that they serve no purpose. Their only use,
checking that freed extents fell within the correct bounds, can still be
performed by using the static starting number or roughly using
the size of the devices. It's not perfect, but this is already only
a check to see that the blknos aren't utter nonsense.
We're removing the fields now to avoid having to update them while
worrying about users when resizing devices.
Signed-off-by: Zach Brown <zab@versity.com>
This should be good enough to get single node mounts up and running with
fenced with minimal effort. The example config will need to be copied
to /etc/scoutfs/scoutfs-fenced.conf for it to be functional, so this
still requires specific opt-in and won't accidentally run for multi-node
systems.
Signed-off-by: Ben McClelland <ben.mcclelland@versity.com>
Normally mkfs would fail if we specify meta or data devices that are too
small. We'd like to use small devices for test scenarios, though, so
add an option to allow specifying sizes smaller than the minimum
required sizes.
Signed-off-by: Zach Brown <zab@versity.com>
Returning ENOSPC is challenging because we have clients working on
allocators which are a fraction of the whole and we use COW transactions
so we need to be able to allocate to free. This adds support for
returning ENOSPC to client posix allocators as free space gets low.
For metadata, we reserve a number of free blocks for making progress
with client and server transactions which can free space. The server
sets the low flag in a client's allocator if we start to dip into
reserved blocks. In the client we add an argument to entering a
transaction which indicates if we're allocating new space (as opposed to
just modifying existing data or freeing). When an allocating
transaction runs low and the server low flag is set then we return
ENOSPC.
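A condensed sketch of the metadata decision described above; the flag and
counter names are illustrative, only the shape of the check comes from
the description.

/*
 * An allocating transaction fails with ENOSPC once the server has flagged
 * the client's allocator as low and the remaining blocks are down to the
 * reserve.
 */
#include <stdint.h>
#include <stdbool.h>
#include <errno.h>

struct meta_alloc_sketch {
	uint64_t avail_blocks;
	uint64_t reserved_blocks;
	bool low_flag;		/* set by the server during refill */
};

static int enter_alloc_trans(struct meta_alloc_sketch *al, bool allocating)
{
	if (allocating && al->low_flag &&
	    al->avail_blocks <= al->reserved_blocks)
		return -ENOSPC;
	return 0;
}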
Adding an argument to transaction holders and having it return ENOSPC
gave us the opportunity to clean it up and make it a little clearer.
More work is done outside the wait_event function and it now
specifically waits for a transaction to cycle when it forces a commit
rather than spinning until the transaction worker acquires the lock and
stops it.
For data the same pattern applies except there are no reserved blocks
and we don't COW data so it's a simple case of returning the hard ENOSPC
when the data allocator flag is set.
The server needs to consider the reserved count when refilling the
client's meta_avail allocator and when swapping between the two
meta_avail and meta_free allocators.
We add the reserved metadata block count to statfs_more so that df can
subtract it from the free meta blocks and make it clear when enospc is
going to be returned for metadata allocations.
We increase the minimum device size in mkfs so that small testing
devices provide sufficient reserved blocks.
And finally we add a little test that makes sure we can fill both
metadata and data to ENOSPC and then recover by deleting what we filled.
Signed-off-by: Zach Brown <zab@versity.com>
Orphaned items haven't been deleted for quite a while -- the call to the
orphan inode scanner has been commented out for ages. The deletion of
the orphan item didn't take rid zone locking into account as we moved
deletion from being strictly local to being performed by whoever last
used the inode.
This reworks orphan item management and brings back orphan inode
scanning to correctly delete orphaned inodes.
We get rid of the rid zone that was always _WRITE locked by each mount.
That made it impossible for other mounts to get a _WRITE lock to delete
orphan items. Instead we rename it to the orphan zone and have orphan
item callers get _WRITE_ONLY locks inside their inode locks. Now all
nodes can create and delete orphan items as they have _WRITE locks on
the associated inodes.
Then we refresh the orphan inode scanning function. It now runs
regularly in the background of all mounts. It avoids creating cluster
lock contention by finding candidates with unlocked forest hint reads
and by testing inode caches locally and via the open map before properly
locking and trying to delete the inode's items.
Signed-off-by: Zach Brown <zab@versity.com>
mkfs was miscalculating the offset of the start of the free region in
the center of blocks as it populated blocks with items. It was using
the length of the free region as its offset in the block. To find
the offset of the end of the free region in the block it has to be
taken relative to the end of the item array.
Signed-off-by: Zach Brown <zab@versity.com>
Over time the printing of the btree roots embedded in the super block
has gotten a little out of hand. Add a helper macro for the printf
format and args and re-order them to match their order in the
superblock.
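The helper follows the usual paired format/args macro pattern; the struct
and field names below are illustrative.

/*
 * One macro for the format string and one for the args so every root
 * prints consistently.
 */
#include <stdio.h>
#include <stdint.h>

struct btree_root_sketch {
	uint8_t height;
	uint64_t blkno;
	uint64_t seq;
};

#define ROOT_FMT "height %u blkno %llu seq %llu"
#define ROOT_ARGS(r) (unsigned int)(r)->height,			\
		     (unsigned long long)(r)->blkno,		\
		     (unsigned long long)(r)->seq

static void print_root(const char *name, struct btree_root_sketch *root)
{
	printf("  %s: " ROOT_FMT "\n", name, ROOT_ARGS(root));
}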
Signed-off-by: Zach Brown <zab@versity.com>
We now have a core seq number in the super that is advanced for multiple
users. The client transaction seq comes from the core seq so we
remove the trans_seq from the super. The item version is also converted
to use a seq that's derived from the core seq.
Signed-off-by: Zach Brown <zab@versity.com>
The core quorum work loop assumes that it has exclusive access to its
slot's quorum block. It uniquely marks blocks it writes and verifies
the marks on read to discover if another mount has written to its slot
under the assumption that this must be a configuration error that put
two mounts in the same slot.
But the design of the leader bit in the block violates the invariant
that a slot's block is only written by that slot. As the server comes up and
fences previous leaders it writes to their block to clear their leader
bit.
The final hole in the design is that because we're fencing mounts, not
slots, each slot can have two mounts in play. An active mount can be
using the slot and there can still be a persistent record of a previous
mount in the slot that crashed that needs to be fenced.
All this comes together to have the server fence an old mount in a slot
while a new mount is coming up. The new mount sees the mark change and
freaks out and stops participating in quorum.
The fix is to rework the quorum blocks so that each slot only writes to
its own block. Instead of the server writing to each fenced mount's
slot, it writes a fence event to its block once all previous mounts have
been fenced. We add a bit of bookkeeping so that the server can
discover when all block leader fence operations have completed. Each
event gets its own term so we can compare events to discover live
servers.
We get rid of the write marks and instead have an event that is written
as a quorum agent starts up and is then checked on every read to make
sure it still matches.
Signed-off-by: Zach Brown <zab@versity.com>