Compare commits


75 Commits
v1.2 ... v1.9

Author SHA1 Message Date
Zach Brown
fb7cb057c4 v1.9 Release
Finish the release notes for the 1.9 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-29 16:41:58 -07:00
Zach Brown
1b924c501e Merge pull request #103 from versity/zab/verify_dentry_errors
Zab/verify dentry errors
2022-10-27 16:15:53 -07:00
Zach Brown
aed4313995 Simplify dentry verification
Now that we've removed the hash and pos from the dentry_info struct we
can do without it.  We can store the refresh gen in the d_fsdata pointer
(sorry, 64bit only for now.. could allocate if we needed to.)  This gets
rid of the lock coverage spinlocks and puts a bit more pressure on lock
lookup, which we already know we have to make more efficient.  We can
get rid of all the dentry info allocation calls.

Now that we're not setting d_op as we allocate d_fsdata we put the ops
on the super block so that we get d_revalidate called on all our
dentries.

We also are a bit more precise about the errors we can return from
verification.  If the target of a dentry link changes then we return
-ESTALE rather than silently performing the caller's operation on
another inode.
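
A minimal sketch of the approach, not the actual scoutfs code; the helper names are illustrative, and the signature of scoutfs_lock_ino_refresh_gen() (added in the next commit) is assumed:

    #include <linux/dcache.h>
    #include <linux/namei.h>

    /* Sketch only: stash the lock refresh_gen directly in d_fsdata (64-bit
     * pointers assumed) instead of allocating a dentry_info struct. */
    static void sketch_store_refresh_gen(struct dentry *dentry, u64 gen)
    {
            WRITE_ONCE(dentry->d_fsdata, (void *)(uintptr_t)gen);
    }

    static int sketch_d_revalidate(struct dentry *dentry, unsigned int flags)
    {
            u64 stored;
            u64 gen;

            if (flags & LOOKUP_RCU)
                    return -ECHILD;
            if (!dentry->d_inode)
                    return 0;               /* sketch: punt on negative dentries */

            stored = (u64)(uintptr_t)READ_ONCE(dentry->d_fsdata);
            /* assumed signature; returns 0 when no readable lock is held */
            gen = scoutfs_lock_ino_refresh_gen(dentry->d_sb,
                                               scoutfs_ino(dentry->d_inode));

            /* returning 0 drops the cached dentry and forces a fresh lookup */
            return (gen != 0 && stored == gen) ? 1 : 0;
    }

    static const struct dentry_operations sketch_dentry_ops = {
            .d_revalidate = sketch_d_revalidate,
    };

    /* in fill_super: sb->s_d_op = &sketch_dentry_ops, so every dentry gets
     * ->d_revalidate without setting d_op as each dentry is allocated */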

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-27 14:32:06 -07:00
Zach Brown
61d86f7718 Add scoutfs_lock_ino_refresh_gen
Add a lock call to get the current refresh_gen of a held lock.   If the
lock doesn't exist or isn't readable then we return 0.  This can be used
to track lock coverage of structures without the overhead and lifetime
binding of the lock coverage struct.
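
A hedged sketch of how such a call can track lock coverage of a cached structure; the structure and the exact scoutfs_lock_ino_refresh_gen() signature are assumptions:

    /* hypothetical cached structure whose validity is tied to lock coverage */
    struct covered_cache {
            u64 refresh_gen;        /* gen sampled when the cache was built */
            /* ... cached fields ... */
    };

    static bool cache_still_covered(struct super_block *sb, u64 ino,
                                    struct covered_cache *cc)
    {
            /* assumed signature; 0 means no readable lock, so no coverage */
            u64 gen = scoutfs_lock_ino_refresh_gen(sb, ino);

            return gen != 0 && gen == cc->refresh_gen;
    }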

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-27 14:16:07 -07:00
Zach Brown
717b56698a Remove __exit from scoutfs_sysfs_exit()
scoutfs_sysfs_exit() is called during error handling in module init.
When scoutfs is built-in (so, never.) the __exit section won't be
loaded.  Remove the __exit annotation so it's always available to be
called.
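
The shape of the change, shown with an assumed prototype rather than the real declaration:

    /* before: lives in .exit.text, which built-in code discards, so calling
     * it from module init error unwinding could jump into discarded text */
    void __exit scoutfs_sysfs_exit(void);

    /* after: always present, safe to call from init error paths */
    void scoutfs_sysfs_exit(void);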

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-26 16:42:27 -07:00
Zach Brown
c92a7ff705 Don't use dentry private hash/pos for deletion
The dentry cache life cycles are far too crazy to rely on d_fsdata being
kept in sync with the rest of the dentry fields.  Callers can do all
sorts of crazy things with dentries.  Only unlink and rename need these
fields and those operations are already so expensive that item lookups
to get the current actual hash and pos are lost in the noise.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-26 16:42:26 -07:00
Zach Brown
d05489c670 Merge pull request #102 from versity/zab/v1.8
v1.8 Release
2022-10-18 11:21:48 -07:00
Zach Brown
4806e8a7b3 v1.8 Release
Finish the release notes for the 1.8 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-18 09:48:41 -07:00
Zach Brown
b74f3f577d Merge pull request #101 from versity/zab/data_prealloc_options
Zab/data prealloc options
2022-10-17 12:18:51 -07:00
Zach Brown
d5ddf1ecac Fix option save/restore test helpers
The test shell helpers for saving and restoring mount options were
trying to put each mount's option value in an array.  It meant to build
the array key by concatenating the option name and the mount number.
But it didn't isolate the option "name" variable when evaluating it,
instead always evaluating "name_" to nothing and building keys for all
options that only contained the mount index.  This then broke when tests
attempted to save and restore multiple options.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-17 09:12:21 -07:00
Zach Brown
e27ea22fe4 Add run-tests -T option to increase trace size
Add an option to increase the trace buffer size during the run.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-14 14:03:36 -07:00
Zach Brown
51fe5a4ceb Add -o mount option argument to run-tests
Add a run-tests option that lets us append an option string to all
mounts performed during the tests.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-14 14:03:36 -07:00
Zach Brown
3847c4fe63 Add data-prealloc test
Signed-off-by: Zach Brown <zab@versity.com>
2022-10-14 14:03:35 -07:00
Zach Brown
ef2daf8857 Make data preallocation tunable
Make mount options for the size of preallocation and whether or not it
should be restricted to extending writes.  Disabling the default
restriction to streaming writes lets it preallocate in aligned regions
of the preallocation size when they contain no extents.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-14 14:03:35 -07:00
Zach Brown
064409eb62 Merge pull request #100 from versity/zab/acl
Zab/acl
2022-09-29 09:51:10 -07:00
Zach Brown
ddc5d9f04d Allow setting orphan_scan_delay_ms option
The orphan_scan_delay_ms option setting code mistakenly set the default
before testing the option for -1 (not the default) to discover if
multiple options had been set.  This made any attempt to set fail.

Initialize the option to -1 so the first set succeeds and apply the
default if we don't set the value.
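
An illustrative, userspace-compilable sketch of the pattern the fix restores; the names and default value are hypothetical, not the scoutfs parser:

    #include <errno.h>

    #define DEFAULT_SCAN_DELAY_MS 10000     /* hypothetical default */

    static int parse_scan_delay(int nr, const long *vals, long *delay_ms)
    {
            long val = -1;                  /* -1 == "not set yet" */
            int i;

            for (i = 0; i < nr; i++) {
                    if (val != -1)          /* already set once */
                            return -EINVAL; /* reject duplicate options */
                    val = vals[i];
            }

            /* apply the default only after parsing; setting it beforehand
             * made every explicit set look like a duplicate and fail */
            *delay_ms = (val == -1) ? DEFAULT_SCAN_DELAY_MS : val;
            return 0;
    }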

Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:36:10 -07:00
Zach Brown
433a80c6fc Add compat for changing posix_acl_valid arguments
Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:36:10 -07:00
Zach Brown
78405bb5fd Remove ACL tests from xfstests expunge list
Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:36:10 -07:00
Zach Brown
98e514e5f4 Add failure message to xattr length test
The simple-xattr-unit test had a helper that failed by exiting with
non-zero instead of emitting a message.  Let's make it a bit easier to
see what's going on.

Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:36:10 -07:00
Zach Brown
29538a9f45 Add POSIX ACL support
Add support for the POSIX ACLs as described in acl(5).  Support is
enabled by default and can be explicitly enabled or disabled with the
acl or noacl mount options, respectively.

Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:36:10 -07:00
Zach Brown
1826048ca3 Add _locked xattr get and set calls
The upcoming acl support wants to be able to get and set xattrs from
callers who already have cluster locks and transactions.   We refactor
the existing xattr get and set calls into locked and unlocked variants.

It's mostly boring code motion with the unfortunate situation that the
caller needs to acquire the totl cluster lock before holding a
transaction before calling into the xattr code.   We push the parsing of
the tags to the caller of the locked get and set so that they can know
to acquire the right lock.  (The acl callers will never be setting
scoutfs. prefixed xattrs so they will never have tags.)
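
A sketch of the refactoring shape; scoutfs_xattr_get_locked(), scoutfs_lock_inode(), and scoutfs_unlock() appear in this series, but the unlocked wrapper's exact name and signature are assumed here:

    static int sketch_xattr_get(struct inode *inode, const char *name,
                                void *value, size_t size)
    {
            struct super_block *sb = inode->i_sb;
            struct scoutfs_lock *lock = NULL;
            int ret;

            ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
            if (ret < 0)
                    return ret;

            /* the _locked variant is what acl code calls under its own locks */
            ret = scoutfs_xattr_get_locked(inode, name, value, size, lock);

            scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
            return ret;
    }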

Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:11:24 -07:00
Zach Brown
798fbb793e Move to xattr_handler xattr prefix dispatch
Move to the use of the array of xattr_handler structs on the super to
dispatch set and get from generic_ based on the xattr prefix.   This
will make it easier to add handling of the pseudo system. ACL xattrs.
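
A hedged sketch of the dispatch table; the xattr_handler callback fields vary across kernel versions, and the ACL handler shown reuses the scoutfs_acl_*_xattr functions added elsewhere in this series, so treat the exact field choices as assumptions:

    #include <linux/xattr.h>
    #include <linux/posix_acl.h>

    /* 3.x-era dentry-based callbacks assumed; .flags carries the ACL type */
    static const struct xattr_handler sketch_acl_access_handler = {
            .prefix = XATTR_NAME_POSIX_ACL_ACCESS,
            .flags  = ACL_TYPE_ACCESS,
            .get    = scoutfs_acl_get_xattr,
            .set    = scoutfs_acl_set_xattr,
    };

    static const struct xattr_handler *sketch_xattr_handlers[] = {
            &sketch_acl_access_handler,
            /* ... handlers for "user.", "trusted.", "scoutfs." prefixes ... */
            NULL,
    };

    /* in fill_super: sb->s_xattr = sketch_xattr_handlers, letting
     * generic_getxattr()/generic_setxattr() dispatch by prefix */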

Signed-off-by: Zach Brown <zab@versity.com>
2022-09-21 14:24:52 -07:00
Zach Brown
d7b16419ef Merge pull request #99 from versity/zab/v1.7
v1.7 Release
2022-08-26 13:20:56 -07:00
Zach Brown
f13aba78b1 v1.7 Release
Finish the release notes for the 1.7 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-08-26 11:38:23 -07:00
Zach Brown
3220c2055c Merge pull request #98 from versity/zab/move_freed_many_commits
Zab/move freed many commits
2022-08-01 09:09:28 -07:00
Zach Brown
1cbc927ccb Only clear trying inode deletion bit when set
try_delete_inode_items() is responsible for making sure that it's safe
to delete an inode's persistent items.  One of the things it has to
check is that there isn't another deletion attempt on the inode in this
mount.  It sets a bit in lock data while it's working and backs off if
the bit is already set.

Unfortunately it was always clearing this bit as it exited, regardless
of whether it set it or not.  This would let the next attempt perform
the deletion again before the working task had finished.  This was often
not a problem because background orphan scanning is the only source of
regular concurrent deletion attempts.

But it's a big problem if a deletion attempt takes a very long time.  It
gives enough time for an orphan scan attempt to clear the bit then try
again and clobber whoever is performing the very slow deletion.

I hit this in a test that built files with an absurd number of
fragmented extents.  The second concurrent orphan attempt was able to
proceed with deletion and performed a bunch of duplicate data extent
frees and caused corruption.

The fix is to only clear the bit if we set it.  Now all concurrent
attempts will back off until the first task is done.
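
A minimal sketch of the fix with hypothetical names; the real code works on per-lock data rather than this standalone bitmap:

    #include <linux/bitops.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    #define TRYING_DELETION_BIT 0                   /* hypothetical bit number */

    int delete_persistent_items(void);              /* hypothetical, can be slow */

    static int sketch_try_delete(unsigned long *bits)
    {
            bool set_here = false;
            int ret;

            if (test_and_set_bit(TRYING_DELETION_BIT, bits)) {
                    /* another attempt is in flight on this mount, back off */
                    ret = -EAGAIN;
                    goto out;
            }
            set_here = true;

            ret = delete_persistent_items();
    out:
            /* the fix: only clear the bit if we were the ones who set it */
            if (set_here)
                    clear_bit(TRYING_DELETION_BIT, bits);
            return ret;
    }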

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-29 11:25:01 -07:00
Zach Brown
acb94dd9b7 Add test of large fragmented free lists
Add a test which gives the server a transaction with a free list block
that contains blknos that each dirty an individual btree block in the
global data free extent btree.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-29 11:25:01 -07:00
Zach Brown
233fbb39f3 Limit alloc_move per-call allocator consumption
Recently scoutfs_alloc_move() was changed to try and limit the amount of
metadata blocks it could allocate or free.  The intent was to stop
concurrent holders of a transaction from fully consuming the available
allocator for the transaction.

The limiting logic was a bit off.  It stopped when the allocator had the
caller's limit remaining, not when it had consumed the caller's limit.
This is overly permissive and could still allow concurrent callers to
consume the allocator.  It was also triggering warning messages when a
call consumed more than its allowed budget while holding a transaction.

Unfortunately, we don't have per-caller tracking of allocator resource
consumption.  The best we can do is sample the allocators as we start
and return if they drop by the caller's limit.  This is overly
conservative in that it accounts any consumption during concurrent
callers to all callers.

This isn't perfect but it makes the failure case less likely and the
impact shouldn't be significant.  We don't often have a lot of
concurrency and the limits are larger than callers will typically
consume.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-29 11:25:01 -07:00
Zach Brown
198d3cda32 Add scoutfs_alloc_meta_low_since()
Add scoutfs_alloc_meta_low_since() to test if the metadata avail or
freed resources have been used by a given amount since a previous
snapshot.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-29 11:24:10 -07:00
Zach Brown
e8c64b4217 Move freed data extents in multiple server commits
As _get_log_trees() in the server prepares the log_trees item for the
client's commit, it moves all the freed data extents from the log_trees
item into core data extent allocator btree items.  If the freed blocks
are very fragmented then it can exceed a commit's metadata allocation
budget trying to dirty blocks in the free data extent btree.

The fix is to move the freed data extents in multiple commits.  First we
move a limited number in the main commit that does all the rest of the
work preparing the commit.  Then we try to move the remaining freed
extents in multiple additional commits.
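
A hedged sketch of the multi-commit loop; scoutfs_alloc_move() and its -EINPROGRESS meta_budget behavior come from this series, while the commit helper and the choice of source and destination roots are assumptions:

    int apply_server_commit(struct super_block *sb);        /* hypothetical */

    static int move_freed_in_commits(struct super_block *sb,
                                     struct scoutfs_alloc *alloc,
                                     struct scoutfs_block_writer *wri,
                                     struct scoutfs_alloc_root *dst,
                                     struct scoutfs_alloc_root *src,
                                     u64 total, u64 budget)
    {
            int ret;
            int err;

            do {
                    /* moves until done or the commit's meta budget is consumed */
                    ret = scoutfs_alloc_move(sb, alloc, wri, dst, src, total,
                                             NULL, NULL, 0, budget);
                    if (ret == 0 || ret == -ENOENT || ret == -EINPROGRESS) {
                            err = apply_server_commit(sb);
                            if (err < 0)
                                    return err;
                    }
            } while (ret == -EINPROGRESS);

            return ret == -ENOENT ? 0 : ret;        /* -ENOENT: source emptied */
    }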

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-28 11:42:33 -07:00
Zach Brown
89b64ae1f7 Merge pull request #97 from versity/zab/v1_6_release
v1.6 Release
2022-07-07 14:54:26 -07:00
Zach Brown
fc8a5a1b5c v1.6 Release
Finish the release notes for the 1.6 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-07 13:07:55 -07:00
Zach Brown
d4c793e010 Merge pull request #94 from versity/zab/mem_free_fixes
Zab/mem free fixes
2022-07-07 13:07:04 -07:00
Zach Brown
8a3058818c Merge pull request #95 from versity/zab/skip_likely_huge
Add skip-likely-huge print option
2022-07-07 10:27:50 -07:00
Zach Brown
ba9a106f72 Free send attempts to disconnected clients
Callers who send to specific client connections can get -ENOTCONN if
their client has gone away.   We forgot to free the send tracking struct
in that case.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-06 15:16:20 -07:00
Zach Brown
310725eb72 Free omap rid list as server exits
The omap code keeps track of rids that are connected to the server.  It
only freed the tracked rids as the server told it that rids were being
removed.   But that removal only happened as clients were evicted.  If
the server shutdown it'd leave the old rid entries around.   They'd be
leaked as the mount was unmounted and could linger and create duplicate
entries if the server started back up and the same clients reconnected.

The fix is to free the tracking rids as the server shuts down.   They'll
be rebuilt as clients reconnect if the server restarts.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-06 15:16:19 -07:00
Zach Brown
51a8236316 Fix missed partial fill_super teardown
If we return an error from .fill_super without having set sb->s_root
then the vfs won't call our put_super.  Our fill_super is careful to
call put_super so that it can tear down partial state, but we weren't
doing this with a few very early errors in fill_super.  This tripped
leak detection when we weren't freeing the sbi when returning errors
from bad option parsing.
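
A sketch of the rule the fix enforces, with illustrative names; once fill_super fails before sb->s_root is set the VFS won't call put_super, so the early error paths have to unwind by hand:

    int parse_options(struct super_block *sb, void *data);  /* hypothetical */

    struct sketch_sb_info {
            int options;                    /* stand-in for parsed state */
    };

    static int sketch_fill_super(struct super_block *sb, void *data, int silent)
    {
            struct sketch_sb_info *sbi;
            int ret;

            sbi = kzalloc(sizeof(*sbi), GFP_KERNEL);
            if (!sbi)
                    return -ENOMEM;
            sb->s_fs_info = sbi;

            ret = parse_options(sb, data);
            if (ret < 0)
                    goto out_free;          /* this early unwind was missing */

            /* ... rest of setup; sets sb->s_root on success ... */
            return 0;

    out_free:
            sb->s_fs_info = NULL;
            kfree(sbi);
            return ret;
    }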

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-06 15:16:19 -07:00
Zach Brown
f3dd00895b Don't allocate zero size net info
Clients don't use the net conn info and specified that it has 0 size.
The net layer would try and allocate a zero size region which returns
the magic ZERO_SIZE_PTR, which it would then later try and free.  While
that works, it's a little goofy.   We can avoid the allocation when the
size is 0.  The pointer will remain null which kfree also accepts.
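
The shape of the fix, shown with a hypothetical holder struct:

    struct sketch_conn {
            void *info;
    };

    static int sketch_alloc_conn_info(struct sketch_conn *conn, size_t info_size)
    {
            if (info_size == 0) {
                    conn->info = NULL;      /* kfree(NULL) is a no-op later */
                    return 0;               /* skip the ZERO_SIZE_PTR dance */
            }

            conn->info = kmalloc(info_size, GFP_NOFS);
            return conn->info ? 0 : -ENOMEM;
    }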

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-06 15:16:19 -07:00
Zach Brown
49df98f5a8 Add skip-likely-huge print option
Add an option to skip printing structures that are likely to be so huge
that the print output becomes completely unwieldy on large systems.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-06 15:07:57 -07:00
Zach Brown
15cf3c4134 Merge pull request #93 from versity/zab/v1_5_release
v1.5 Release
2022-06-21 11:22:02 -07:00
Zach Brown
1abe97351d v1.5 Release
Finish the release notes for the 1.5 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-21 09:46:16 -07:00
Zach Brown
f757e29915 Merge pull request #92 from versity/zab/server_error_assertions
Protect get_log_trees corruption with assertion
2022-06-17 15:29:58 -07:00
Zach Brown
31e474c5fa Protect get_log_trees corruption with assertion
Like a lot of places in the server, get_log_trees() doesn't have the
tools it needs to safely unwind partial changes in the face of an error.

In the worst case, it can have moved extents from the mount's log_trees
item into the server's main data allocator.  The dirty data allocator
reference is in the super block so it can be written later.   The dirty
log_trees reference is on stack, though, so it will be thrown away on
error.  This ends up duplicating extents in the persistent structures
because they're written in the new dirty allocator but still remain in
the unwritten source log_trees allocator.

This change makes it harder for that to happen.   It dirties the
log_trees item and always tries to update it so that the dirty blocks are
consistent if they're later written out.  If we do get an error updating
the item we throw an assertion.   It's not great, but it matches other
similar circumstances in other parts of the server.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-17 14:22:59 -07:00
Zach Brown
dcf8202d7c Merge pull request #91 from versity/zab/tcp_sk_alloc_nofs
Set sk_allocation on TCP sockets
2022-06-15 09:16:59 -07:00
Zach Brown
ae55fa3153 Set sk_allocation on TCP sockets
We were setting sk_allocation on the quorum UDP sockets to prevent
entering reclaim while using sockets but we missed setting it on the
regular messaging TCP sockets.   This could create deadlocks where the
sending socket could enter scoutfs reclaim and wait for server messages
while holding the socket lock, preventing the receive thread from
receiving messages while it blocked on the socket lock.

The fix is to prevent entering the FS to reclaim during socket
allocations.
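
A minimal sketch of the fix; sock_create_kern()'s argument list differs across kernel versions (newer ones take a struct net * first), so the older form is assumed here:

    #include <linux/net.h>
    #include <linux/in.h>
    #include <net/sock.h>

    static int sketch_create_tcp_sock(struct socket **res)
    {
            int ret;

            ret = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, res);
            if (ret == 0) {
                    /* keep socket allocations out of filesystem reclaim */
                    (*res)->sk->sk_allocation = GFP_NOFS;
            }
            return ret;
    }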

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-14 08:21:19 -07:00
Zach Brown
7f9f21317c Merge pull request #90 from versity/zab/multiple_alloc_move_commits
Reclaim log_trees alloc roots in multiple commits
2022-06-08 13:23:01 -07:00
Zach Brown
0d4bf83da3 Reclaim log_trees alloc roots in multiple commits
Client log_trees allocator btrees can build up quite a number of
extents.  In the right circumstances moving fragmented extents can require
dirtying a large number of paths to leaf blocks in the core allocator
btrees.  It might not be possible to dirty all the blocks necessary to
move all the extents in one commit.

This reworks the extent motion so that it can be performed in multiple
commits if the meta allocator for the commit runs out while it is moving
extents.  It's a minimal fix with as little disruption to the ordering
of commits and locking as possible.  It simply bubbles up an error when
the allocators run out and retries functions that can already be retried
in other circumstances.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 11:53:53 -07:00
Zach Brown
0a6b1fb304 Merge pull request #88 from versity/zab/v1_4_release
v1.4 Release
2022-05-06 11:23:45 -07:00
Zach Brown
fb7e43dd23 v1.4 Release
Finish the release notes for the 1.4 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-05-06 09:57:27 -07:00
Zach Brown
45d90a5ae4 Merge pull request #86 from versity/zab/increase_server_commit_block_budget
Increase server commit dirty block budget
2022-05-06 09:47:47 -07:00
Zach Brown
48f1305a8a Increase server commit dirty block budget
We're seeing allocator motion during get_log_trees dirty quite a lot of
blocks, which makes sense.  Let's continue to up the budget.   If we
still need significantly larger budgets we'll want to look into capping
the dirty block use of the allocator extent movers which will mean
changing callers to support partial progress.

Signed-off-by: Zach Brown <zab@versity.com>
2022-05-05 12:11:14 -07:00
Zach Brown
cd4d6502b8 Merge pull request #87 from versity/zab/lock_invalidation_recovery
Zab/lock invalidation recovery
2022-04-28 09:01:16 -07:00
Zach Brown
dff366e1a4 Add lock invalidation and recovery test
Add a test which tries to have lock recovery processed during lock
invalidation on clients.

Signed-off-by: Zach Brown <zab@versity.com>
2022-04-27 12:22:18 -07:00
Zach Brown
ca526e2bc0 Lock recovery uses old mode while invalidating
When a new server starts up it rebuilds its view of all the granted
locks with lock recovery messages.  Clients give the server their
granted lock modes which the server then uses to process all the resent
lock requests from clients.

The lock invalidation work in the client is responsible for
transitioning an old granted mode to a new invalidated mode from an
unsolicited message from the server.  It has to process any client state
that'd be incompatible with the new mode (write dirty data, drop
caches).  While it is doing this work, as an implementation short cut,
it sets the granted lock mode to the new mode so that users that are
compatible with the new invalidated mode can use the lock while it's
being invalidated.  Picture readers reading data while a write lock is
invalidating and writing dirty data.

A problem arises when a lock recover request is processed during lock
invalidation.  The client lock recover request handler sends a response
with the current granted mode.  The server takes this to mean that the
invalidation is done but the client invalidation worker might still be
writing data, dropping caches, etc.  The server will allow the state
machine to advance which can send grants to pending client requests
which believed that the invalidation was done.

All of this can lead to a grant response handler in the client tripping
the assertion that there can not be cached items that were incompatible
with the old mode in a grant from the server.  Invalidation might still
be invalidating caches.  Hitting this bug is very rare and requires a
new server starting up while a client has both a request outstanding and
an invalidation being processed when the lock recover request arrives.

The fix is to record the old mode during invalidation and send that in
lock recover responses.  This can lead the lock server to resend
invalidation requests to the client.  The client already safely handles
duplicate invalidation requests from other failover cases.
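
A sketch of the state change with an assumed, simplified lock struct; the real scoutfs lock carries more state and different field names:

    struct sketch_lock {
            spinlock_t sl;
            bool invalidating;
            int mode;                       /* mode users currently match against */
            int old_mode;                   /* granted mode before invalidation */
    };

    static void sketch_begin_invalidation(struct sketch_lock *lck, int new_mode)
    {
            spin_lock(&lck->sl);
            lck->old_mode = lck->mode;      /* what recovery must report */
            lck->mode = new_mode;           /* compatible users keep working */
            lck->invalidating = true;
            spin_unlock(&lck->sl);
    }

    static int sketch_recover_response_mode(struct sketch_lock *lck)
    {
            /* the fix: don't claim the new mode until invalidation finishes */
            return lck->invalidating ? lck->old_mode : lck->mode;
    }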

Signed-off-by: Zach Brown <zab@versity.com>
2022-04-27 12:20:56 -07:00
Zach Brown
e423d42106 Merge pull request #85 from versity/zab/v1_3_release
v1.3 Release
2022-04-07 12:21:42 -07:00
Zach Brown
82d2be2e4a v1.3 Release
Finish the release notes for the 1.3 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-04-07 10:42:14 -07:00
Zach Brown
4102b760d0 Merge pull request #84 from versity/zab/getxattr_under_lock
Fix getxattr with large values giving EINVAL
2022-04-04 13:58:40 -07:00
Zach Brown
65654ee7c0 Fix getxattr with large values giving EINVAL
The change to only allocate a buffer for the first xattr item with
kmalloc instead of the entire logical xattr payload with vmalloc
included a regression for getting large xattrs.

getxattr used to copy the entire payload into the large vmalloc so it
could unlock just after get_next_xattr.   The change to only getting the
first item buffer added a call to copy from the rest of the items but
those copies weren't covered by the locks.  This would often work
because the lock pointer still pointed to a valid lock.  But if the lock
was invalidated then the mode would no longer be compatible and
_item_lookup would return EINVAL.

The fix is to extend xattr_rwsem and cluster lock coverage to the rest
of the function body, which includes the value item copies.  This also
makes getxattr's lock coverage consistent with setxattr and listxattr
which might reduce the risk of similar mistakes in the future.

Signed-off-by: Zach Brown <zab@versity.com>
2022-04-04 12:49:50 -07:00
Zach Brown
b2d6ceeb9c Merge pull request #82 from versity/zab/server_alloc_reservation
Zab/server alloc reservation
2022-04-01 17:36:22 -07:00
Zach Brown
d8231016f8 Free fewer log btree blocks per server commit
After we've merged a log btree back into the main fs tree we kick off
work to free all its blocks.  This would fully fill the transaction's
free blocks list before stopping to apply the commit.

Consuming the entire free list makes it hard to have concurrent holders
of a commit who also want to free things.  This changes the log btree
block freeing to limit itself to a fraction of the budget that each
holder gets.  That coarse limit avoids us having to precisely account
for the allocations and frees made while modifying the freeing item, while
still freeing many blocks per commit.

Signed-off-by: Zach Brown <zab@versity.com>
2022-04-01 15:28:20 -07:00
Zach Brown
3c2b329675 Limit alloc consumption in server commits
Server commits use an allocator that has a limited number of available
metadata blocks and entries in a list for freed blocks.  The allocator
is refilled between commits.  Holders can't fully consume the allocator
during the commit and that tended to work out because server commit
holders commit before sending responses.  We'd tend to commit frequently
enough that we'd get a chance to refill the allocators before they were
consumed.

But there was no mechanism to ensure that this would be the case.
Enough concurrent server holders were able to fully consume the
allocators before committing.   This causes scoutfs_meta_alloc and _free
to return errors, leading the server to fail in the worst cases.

This changes the server commit tracking to use more robust structures
which limit the number of concurrent holders so that the allocators
aren't exhausted.  The commit_users struct stops holders from making
progress once the allocators don't have room for more holders.  It also
lets us stop future holders from making progress once the commit work
has been queued.  The previous cute use of a rwsem didn't allow for
either of these protections.

We don't have precise tracking of each holder's allocation consumption
so we don't try and reserve blocks for each holder.   Instead we have a
maximum consumption per holder and make sure that all the holders can't
consume the allocators if they all use their full limit.

All of this requires the holding code paths to be well behaved and not
use more than the per-hold limit.   We add some debugging code to print
the stacks of holders that were active when the total holder limit was
exceeded.  This is the motivation for having state in the holders.  We
can record some data at the time their hold started that'll make it a
little easier to track down which of the holders exceeded their limit.

Signed-off-by: Zach Brown <zab@versity.com>
2022-04-01 15:28:17 -07:00
Zach Brown
96ad8dd510 Add scoutfs_alloc_meta_remaining
Add helper function to give the caller the number of blocks remaining in
the first list block that's used for meta allocation and freeing.

Signed-off-by: Zach Brown <zab@versity.com>
2022-04-01 15:21:44 -07:00
Zach Brown
44f38a31ec Make server commit access private again
There was a brief time where we exported the ability to hold and apply
commits outside of the main server code.  That wasn't a great idea, and
the few users have since been reworked to not require directly
manipulating server transactions, so we can reduce risk and make these
functions private again.

Signed-off-by: Zach Brown <zab@versity.com>
2022-04-01 15:21:43 -07:00
Zach Brown
fb2ff753ad Merge pull request #83 from versity/zab/heartbeat_during_fencing
Send quorum heartbeats while fencing
2022-04-01 09:12:41 -07:00
Zach Brown
bb3db7e272 Send quorum heartbeats while fencing
Quorum members will try to elect a new leader when they don't receive
heartbeats from the currently elected leader.   This timeout is short to
encourage restoring service promptly.

Heartbeats are sent from the quorum worker thread and are delayed while
it synchronously starts up the server, which includes fencing previous
servers.  If fence requests take too long then heartbeats will be
delayed long enough for remaining quorum members to elect a new leader
while the recently elected server is still busy fencing.

To fix this we decouple server startup from the quorum main thread.
Server starting and stopping becomes asynchronous so the quorum thread
is able to send heartbeats while the server work is off starting up and
fencing.

The server used to call into quorum to clear a flag as it exited.   We
remove that mechanism and have the server maintain a running status that
quorum can query.

We add some state to the quorum work to track the asynchronous state of
the server.   This lets the quorum protocol change roles immediately as
needed while remembering that there is a server running that needs to be
acted on.

The server used to also call into quorum to update quorum blocks.   This
is a read-modify-write operation that has to be serialized.  Now that we
have both the server starting up and the quorum work running they both
can't perform these read-modify-write cycles.  Instead we have the
quorum work own all the block updates and it queries the server status
to determine when it should update the quorum block to indicate that the
server has fenced or shut down.

Signed-off-by: Zach Brown <zab@versity.com>
2022-03-31 10:29:43 -07:00
Zach Brown
c94b072925 Merge pull request #81 from versity/zab/fenced_test
Zab/fenced test
2022-03-29 09:05:09 -07:00
Zach Brown
26ae9c6e04 Verify local unmount testing fence script
The fence script we use for our single node multi-mount tests only knows
how to fence by using forced unmount to destroy a mount.  As of now, the
tests only generate failing nodes that need to be fenced by using forced
unmount as well.  This results in the awkward situation where the
testing fence script doesn't have anything to do because the mount is
already gone.

When the test fence script has nothing to do we might not notice if it
isn't run.  This adds explicit verification to the fencing tests that
the script was really run.  It adds per-invocation logging to the fence
script and the test makes sure that it was run.

While we're at it, we take the opportunity to tidy up some of the
scripting around this.  We use a sysfs file with the data device
major:minor numbers so that the fencing script can find and unmount
mounts without having to ask them for their rid.  They may not be
operational.

Signed-off-by: Zach Brown <zab@versity.com>
2022-03-28 14:52:08 -07:00
Zach Brown
c8d7221ec5 Show data device numbers in sysfs file
It can be handy to associate mounts with their sysfs directory by their
data device number.

Signed-off-by: Zach Brown <zab@versity.com>
2022-03-25 14:43:25 -07:00
Zach Brown
7fd03dc311 Merge pull request #80 from versity/zab/avoid_xattr_vmalloc
Don't use vmalloc in get/set xattr
2022-03-22 12:00:51 -07:00
Zach Brown
4e8a088cc5 Don't use vmalloc in get/set xattr
Extended attribute values can be larger than a reasonable maximum size
for our btree items so we store xattrs in many items.  The first pass at
this code used vmalloc to make it relatively easy to work with a
contiguous buffer that was cut up into multiple items.

The problem, of course, is that vmalloc() is expensive.  Well, the
problem is that I always forget just how expensive it can be and use it
when I shouldn't.  We had loads on high cpu count machines that were
catastrophically cpu bound on all the contentious work that vmalloc does
to maintain a coherent global address space.

This removes the use of vmalloc and only allocates a small buffer for
the first compound item.  The later items directly reference regions of
value buffer rather than copying it to and from the large intermediate
vmalloced buffer.

Signed-off-by: Zach Brown <zab@versity.com>
2022-03-21 21:44:11 -07:00
Zach Brown
9c751c1197 Merge pull request #78 from versity/zab/quorum_leader_visibility
Zab/quorum leader visibility
2022-03-16 09:16:57 -07:00
Zach Brown
875583b7ef Add t_fs_is_leader test helper
The t_server_nr and t_first_client_nr helpers iterated over all the fs
numbers examining their quorum/is_leader files, but clients don't have a
quorum/ directory.  This was causing spurious outputs in tests that were
looking for servers but didn't find it in the first quorum fs number and
made it down into the clients.

Give them a helper that returns 0 for being a leader if the quorum/ dir
doesn't exist.

Signed-off-by: Zach Brown <zab@versity.com>
2022-03-15 16:09:55 -07:00
Zach Brown
38e5aa77c4 Update quorum status files more frequently
We were seeing rare test failures where it looked like is_leader wasn't
set for any of the mounts.   The test that couldn't find a set is_leader
file had just performed some mounts so we know that a server was up and
processing requests.

The quorum task wasn't updating the status that's shown in sysfs and
debugfs until after the server started up.  This opened the race where
the server was able to serve mount requests and have the test run to
find no is_leader file set before the quorum task was able to update the
stats and make its election visible.

This updates the quorum task to make its status visible more often,
typically before it does something that will take a while.  The
is_leader will now be visible before the server is started so the test
will always see the file after the server starts up and lets mounts finish.

Signed-off-by: Zach Brown <zab@versity.com>
2022-03-15 15:07:57 -07:00
Zach Brown
57a1d75e52 Merge pull request #77 from versity/zab/v1_2_release
Zab/v1 2 release
2022-03-14 18:10:16 -07:00
Zach Brown
51d19d797f Start v1.3-rc release notes
Create the 1.3 section in the release notes for commits to fill.

Signed-off-by: Zach Brown <zab@versity.com>
2022-03-14 17:15:24 -07:00
53 changed files with 2780 additions and 970 deletions

View File

@@ -1,6 +1,134 @@
Versity ScoutFS Release Notes
=============================
---
v1.9
\
*Oct 29, 2022*
Fix VFS cached directory entry consistency verification that could cause
spurious "no such file or directory" (ENOENT) errors from rename over
NFS under certain conditions. The problem was only ever with the
consistency of in-memory cached dentry objects; persistent data was
correct, and eventual eviction of the bad cached objects would stop
generating the errors.
---
v1.8
\
*Oct 18, 2022*
Add support for Linux POSIX Access Control Lists, as described in
acl(5). Mount options are added to enable ("acl") and disable ("noacl")
support. The default is to support ACLs. ACLs are stored in the
existing extended attribute scheme so adding support does not require
a format change.
Add options to control data extent preallocation. The default behavior
does not change. The options can relax the limits on preallocation,
which will then trigger under more write patterns and increase the risk
of preallocated space that is never used. The options are described in
scoutfs(5).
---
v1.7
\
*Aug 26, 2022*
* **Fixed possible persistent errors moving freed data extents**
\
Fixed a case where the server could hit persistent errors trying to
move a client's freed extents in one commit. The client had to free
a large number of extents that occupied distant positions in the
global free extent btree. Very large fragmented files could cause
this. The server now moves the freed extents in multiple commits and
can always ensure forward progress.
* **Fixed possible persistent errors from freed duplicate extents**
\
Background orphan deletion wasn't properly synchronizing with
foreground tasks deleting very large files. If a deletion took long
enough then background deletion could also attempt to delete inode items
while the deletion was making progress. This could create duplicate
deletions of data extent items which causes the server to abort when
it later discovers the duplicate extents as it merges free lists.
---
v1.6
\
*Jul 7, 2022*
* **Fix memory leaks in rare corner cases**
\
Analysis tools found a few corner cases that leaked small structures,
generally around error handling or startup and shutdown.
* **Add --skip-likely-huge scoutfs print command option**
\
Add an option to scoutfs print to reduce the size of the output
so that it can be used to see system-wide metadata without being
overwhelmed by file-level details.
---
v1.5
\
*Jun 21, 2022*
* **Fix persistent error during server startup**
\
Fixed a case where the server would always hit a consistent error on
startup, preventing the system from mounting. This required a rare
but valid state across the clients.
* **Fix a client hang that would lead to fencing**
\
The client module's use of in-kernel networking was missing an
annotation, which could lead to communication hanging. The server would fence the
client when it stopped communicating. This could be identified by the
server fencing a client after it disconnected with no attempt by the
client to reconnect.
---
v1.4
\
*May 6, 2022*
* **Fix possible client crash during server failover**
\
Fixed a narrow window during server failover and lock recovery that
could cause a client mount to believe that it had an inconsistent item
cache and panic. This required very specific lock state and messaging
patterns between multiple mounts and multiple servers which made it
unlikely to occur in the field.
---
v1.3
\
*Apr 7, 2022*
* **Fix rare server instability under heavy load**
\
Fixed a case of server instability under heavy load due to concurrent
work fully exhausting metadata block allocation pools reserved for a
single server transaction. This would cause brief interruption as the
server shut down and the next server started up and made progress as
pending work was retried.
* **Fix slow fencing preventing server startup**
\
If a server had to process many fence requests with a slow fencing
mechanism it could be interrupted before it finished. The server
now makes sure heartbeat messages are sent while it is making progress
on fencing requests so that other quorum members don't interrupt the
process.
* **Performance improvement in getxattr and setxattr**
\
Kernel allocation patterns in the getxattr and setxattr
implementations were causing significant contention between CPUs. Their
allocation strategy was changed so that concurrent tasks can call these
xattr methods without degrading performance.
---
v1.2
\

View File

@@ -8,6 +8,7 @@ CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
-include $(src)/Makefile.kernelcompat
scoutfs-y += \
acl.o \
avl.o \
alloc.o \
block.o \

View File

@@ -34,3 +34,12 @@ endif
ifneq (,$(shell grep 'FMODE_KABI_ITERATE' include/linux/fs.h))
ccflags-y += -DKC_FMODE_KABI_ITERATE
endif
#
# v4.7-rc2-23-g0d4d717f2583
#
# Added user_ns argument to posix_acl_valid
#
ifneq (,$(shell grep 'posix_acl_valid.*user_ns,' include/linux/posix_acl.h))
ccflags-y += -DKC_POSIX_ACL_VALID_USER_NS
endif

355
kmod/src/acl.c Normal file
View File

@@ -0,0 +1,355 @@
/*
* Copyright (C) 2022 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/xattr.h>
#include <linux/posix_acl.h>
#include <linux/posix_acl_xattr.h>
#include "format.h"
#include "super.h"
#include "scoutfs_trace.h"
#include "xattr.h"
#include "acl.h"
#include "inode.h"
#include "trans.h"
/*
* POSIX draft ACLs are stored as full xattr items with the entries
* encoded as the kernel's posix_acl_xattr_{header,entry} value structs.
*
* They're accessed and modified via user facing synthetic xattrs, iops
* calls from the kernel, during inode mode changes, and during inode
* creation.
*
* ACL access devolves into xattr access which is relatively expensive
* so we maintain the cached native form in the vfs inode. We drop the
* cache in lock invalidation which means that cached acl access must
* always be performed under cluster locking.
*/
static int acl_xattr_name_len(int type, char **name, size_t *name_len)
{
int ret = 0;
switch (type) {
case ACL_TYPE_ACCESS:
*name = XATTR_NAME_POSIX_ACL_ACCESS;
if (name_len)
*name_len = sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1;
break;
case ACL_TYPE_DEFAULT:
*name = XATTR_NAME_POSIX_ACL_DEFAULT;
if (name_len)
*name_len = sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1;
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock)
{
struct posix_acl *acl;
char *value = NULL;
char *name;
int ret;
if (!IS_POSIXACL(inode))
return NULL;
acl = get_cached_acl(inode, type);
if (acl != ACL_NOT_CACHED)
return acl;
ret = acl_xattr_name_len(type, &name, NULL);
if (ret < 0)
return ERR_PTR(ret);
ret = scoutfs_xattr_get_locked(inode, name, NULL, 0, lock);
if (ret > 0) {
value = kzalloc(ret, GFP_NOFS);
if (!value)
ret = -ENOMEM;
else
ret = scoutfs_xattr_get_locked(inode, name, value, ret, lock);
}
if (ret > 0) {
acl = posix_acl_from_xattr(&init_user_ns, value, ret);
} else if (ret == -ENODATA || ret == 0) {
acl = NULL;
} else {
acl = ERR_PTR(ret);
}
/* can set null negative cache */
if (!IS_ERR(acl))
set_cached_acl(inode, type, acl);
kfree(value);
return acl;
}
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
struct posix_acl *acl;
int ret;
if (!IS_POSIXACL(inode))
return NULL;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
if (ret < 0) {
acl = ERR_PTR(ret);
} else {
acl = scoutfs_get_acl_locked(inode, type, lock);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
}
return acl;
}
/*
* The caller has acquired the locks and dirtied the inode, they'll
* update the inode item if we return 0.
*/
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
struct scoutfs_lock *lock, struct list_head *ind_locks)
{
static const struct scoutfs_xattr_prefix_tags tgs = {0,}; /* never scoutfs. prefix */
bool set_mode = false;
char *value = NULL;
umode_t new_mode;
size_t name_len;
char *name;
int size = 0;
int ret;
ret = acl_xattr_name_len(type, &name, &name_len);
if (ret < 0)
return ret;
switch (type) {
case ACL_TYPE_ACCESS:
if (acl) {
ret = posix_acl_update_mode(inode, &new_mode, &acl);
if (ret < 0)
goto out;
set_mode = true;
}
break;
case ACL_TYPE_DEFAULT:
if (!S_ISDIR(inode->i_mode)) {
ret = acl ? -EINVAL : 0;
goto out;
}
break;
}
if (acl) {
size = posix_acl_xattr_size(acl->a_count);
value = kmalloc(size, GFP_NOFS);
if (!value) {
ret = -ENOMEM;
goto out;
}
ret = posix_acl_to_xattr(&init_user_ns, acl, value, size);
if (ret < 0)
goto out;
}
ret = scoutfs_xattr_set_locked(inode, name, name_len, value, size, 0, &tgs,
lock, NULL, ind_locks);
if (ret == 0 && set_mode) {
inode->i_mode = new_mode;
if (!value) {
/* can be setting an acl that only affects mode, didn't need xattr */
inode_inc_iversion(inode);
inode->i_ctime = CURRENT_TIME;
}
}
out:
if (!ret)
set_cached_acl(inode, type, acl);
kfree(value);
return ret;
}
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
LIST_HEAD(ind_locks);
int ret;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE, SCOUTFS_LKF_REFRESH_INODE, inode, &lock) ?:
scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
if (ret == 0) {
ret = scoutfs_dirty_inode_item(inode, lock) ?:
scoutfs_set_acl_locked(inode, acl, type, lock, &ind_locks);
if (ret == 0)
scoutfs_update_inode_item(inode, lock, &ind_locks);
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
}
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
return ret;
}
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
int type)
{
struct posix_acl *acl;
int ret = 0;
if (!IS_POSIXACL(dentry->d_inode))
return -EOPNOTSUPP;
acl = scoutfs_get_acl(dentry->d_inode, type);
if (IS_ERR(acl))
return PTR_ERR(acl);
if (acl == NULL)
return -ENODATA;
ret = posix_acl_to_xattr(&init_user_ns, acl, value, size);
posix_acl_release(acl);
return ret;
}
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
int flags, int type)
{
struct posix_acl *acl = NULL;
int ret;
if (!inode_owner_or_capable(dentry->d_inode))
return -EPERM;
if (!IS_POSIXACL(dentry->d_inode))
return -EOPNOTSUPP;
if (value) {
acl = posix_acl_from_xattr(&init_user_ns, value, size);
if (IS_ERR(acl))
return PTR_ERR(acl);
if (acl) {
ret = kc_posix_acl_valid(&init_user_ns, acl);
if (ret)
goto out;
}
}
ret = scoutfs_set_acl(dentry->d_inode, acl, type);
out:
posix_acl_release(acl);
return ret;
}
/*
* Apply the parent's default acl to the new inode's access acl and inherit
* it as the default for new directories. The caller holds locks and a
* transaction.
*/
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
struct scoutfs_lock *lock, struct scoutfs_lock *dir_lock,
struct list_head *ind_locks)
{
struct posix_acl *acl = NULL;
int ret = 0;
if (!S_ISLNK(inode->i_mode)) {
if (IS_POSIXACL(dir)) {
acl = scoutfs_get_acl_locked(dir, ACL_TYPE_DEFAULT, dir_lock);
if (IS_ERR(acl))
return PTR_ERR(acl);
}
if (!acl)
inode->i_mode &= ~current_umask();
}
if (IS_POSIXACL(dir) && acl) {
if (S_ISDIR(inode->i_mode)) {
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_DEFAULT,
lock, ind_locks);
if (ret)
goto out;
}
ret = posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
if (ret < 0)
return ret;
if (ret > 0)
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_ACCESS,
lock, ind_locks);
} else {
cache_no_acl(inode);
}
out:
posix_acl_release(acl);
return ret;
}
/*
* Update the access ACL based on a newly set mode. If we return an
* error then the xattr wasn't changed.
*
* Annoyingly, setattr_copy has logic that transforms the final set mode
* that we want to use to update the acl. But we don't want to modify
* the other inode fields while discovering the resulting mode. We're
* relying on acl_chmod not caring about the transformation (currently
* just clears sgid). It would be better if we could get the resulting
* mode to give to acl_chmod without modifying the other inode fields.
*
* The caller has the inode mutex, a cluster lock, transaction, and will
* update the inode item if we return success.
*/
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
struct scoutfs_lock *lock, struct list_head *ind_locks)
{
struct posix_acl *acl;
int ret = 0;
if (!IS_POSIXACL(inode) || !(attr->ia_valid & ATTR_MODE))
return 0;
if (S_ISLNK(inode->i_mode))
return -EOPNOTSUPP;
acl = scoutfs_get_acl_locked(inode, ACL_TYPE_ACCESS, lock);
if (IS_ERR_OR_NULL(acl))
return PTR_ERR(acl);
ret = posix_acl_chmod(&acl, GFP_KERNEL, attr->ia_mode);
if (ret)
return ret;
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_ACCESS, lock, ind_locks);
posix_acl_release(acl);
return ret;
}

18
kmod/src/acl.h Normal file
View File

@@ -0,0 +1,18 @@
#ifndef _SCOUTFS_ACL_H_
#define _SCOUTFS_ACL_H_
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type);
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock);
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type);
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
struct scoutfs_lock *lock, struct list_head *ind_locks);
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
int type);
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
int flags, int type);
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
struct scoutfs_lock *lock, struct list_head *ind_locks);
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
struct scoutfs_lock *lock, struct scoutfs_lock *dir_lock,
struct list_head *ind_locks);
#endif

View File

@@ -84,6 +84,21 @@ static u64 smallest_order_length(u64 len)
return 1ULL << (free_extent_order(len) * 3);
}
/*
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
*/
static u32 extent_mod_blocks(u32 height)
{
return ((1 + height) * 2) * 3;
}
/*
* Free extents don't have flags and are stored in two indexes sorted by
* block location and by length order, largest first. The location key
@@ -877,6 +892,13 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
* -ENOENT is returned if we run out of extents in the source tree
* before moving the total.
*
* If meta_budget is non-zero then -EINPROGRESS can be returned if the
* caller's budget is consumed in the allocator during this call
* (though not necessarily by us, we don't have per-thread tracking of
* allocator consumption :/). The call can still have made progress and
* the caller is expected to commit the dirty trees and examine the
* resulting modified trees to see if they need to continue moving extents.
*
* The caller can specify that extents in the source tree should first
* be found based on their zone bitmaps. We'll first try to find
* extents in the exclusive zones, then vacant zones, and then we'll
@@ -891,7 +913,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks)
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget)
{
struct alloc_ext_args args = {
.alloc = alloc,
@@ -899,6 +921,8 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
};
struct scoutfs_extent found;
struct scoutfs_extent ext;
u32 avail_start = 0;
u32 freed_start = 0;
u64 moved = 0;
u64 count;
int ret = 0;
@@ -909,6 +933,9 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
vacant = NULL;
}
if (meta_budget != 0)
scoutfs_alloc_meta_remaining(alloc, &avail_start, &freed_start);
while (moved < total) {
count = total - moved;
@@ -941,6 +968,14 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
if (ret < 0)
break;
if (meta_budget != 0 &&
scoutfs_alloc_meta_low_since(alloc, avail_start, freed_start, meta_budget,
extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
ret = -EINPROGRESS;
break;
}
/* searching set start/len, finish initializing alloced extent */
ext.map = found.map ? ext.start - found.start + found.map : 0;
ext.flags = found.flags;
@@ -1065,15 +1100,6 @@ out:
* than completely exhausting the avail list or overflowing the freed
* list.
*
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
*
* The caller tells us how many extents they're about to modify and how
* many other additional blocks they may cow manually. And finally, the
* caller could be the first to dirty the avail and freed blocks in the
@@ -1082,7 +1108,7 @@ out:
static bool list_has_blocks(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *root, u32 extents, u32 addl_blocks)
{
u32 tree_blocks = (((1 + root->root.height) * 2) * 3) * extents;
u32 tree_blocks = extent_mod_blocks(root->root.height) * extents;
u32 most = 1 + tree_blocks + addl_blocks;
if (le32_to_cpu(alloc->avail.first_nr) < most) {
@@ -1318,6 +1344,38 @@ bool scoutfs_alloc_meta_low(struct super_block *sb,
return lo;
}
void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total, u32 *freed_space)
{
unsigned int seq;
do {
seq = read_seqbegin(&alloc->seqlock);
*avail_total = le32_to_cpu(alloc->avail.first_nr);
*freed_space = list_block_space(alloc->freed.first_nr);
} while (read_seqretry(&alloc->seqlock, seq));
}
/*
* Returns true if the caller's consumption of nr from either avail or
* freed would end up exceeding their budget relative to the starting
* remaining snapshot they took.
*/
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
u32 budget, u32 nr)
{
u32 avail_use;
u32 freed_use;
u32 avail;
u32 freed;
scoutfs_alloc_meta_remaining(alloc, &avail, &freed);
avail_use = avail_start - avail;
freed_use = freed_start - freed;
return ((avail_use + nr) > budget) || ((freed_use + nr) > budget);
}
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag)
{

View File

@@ -19,14 +19,11 @@
(128ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* The largest aligned region that we'll try to allocate at the end of
* the file as it's extended. This is also limited to the current file
* size so we can only waste at most twice the total file size when
* files are less than this. We try to keep this around the point of
* diminishing returns in streaming performance of common data devices
* to limit waste.
* The default size that we'll try to preallocate. This is trying to
* hit the limit of large efficient device writes while minimizing
* wasted preallocation that is never used.
*/
#define SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT \
#define SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS \
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
@@ -131,7 +128,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks);
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
@@ -158,6 +155,9 @@ int scoutfs_alloc_splice_list(struct super_block *sb,
bool scoutfs_alloc_meta_low(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 nr);
void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total, u32 *freed_space);
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
u32 budget, u32 nr);
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag);

View File

@@ -2449,7 +2449,7 @@ int scoutfs_btree_free_blocks(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_key *key,
struct scoutfs_btree_root *root, int alloc_low)
struct scoutfs_btree_root *root, int free_budget)
{
u64 blknos[SCOUTFS_BTREE_MAX_HEIGHT];
struct scoutfs_block *bl = NULL;
@@ -2459,11 +2459,15 @@ int scoutfs_btree_free_blocks(struct super_block *sb,
struct scoutfs_avl_node *node;
struct scoutfs_avl_node *next;
struct scoutfs_key par_next;
int nr_freed = 0;
int nr_par;
int level;
int ret;
int i;
if (WARN_ON_ONCE(free_budget <= 0))
return -EINVAL;
if (WARN_ON_ONCE(root->height > ARRAY_SIZE(blknos)))
return -EIO; /* XXX corruption */
@@ -2538,8 +2542,7 @@ int scoutfs_btree_free_blocks(struct super_block *sb,
while (node) {
/* make sure we can always free parents after leaves */
if (scoutfs_alloc_meta_low(sb, alloc,
alloc_low + nr_par + 1)) {
if ((nr_freed + 1 + nr_par) > free_budget) {
ret = 0;
goto out;
}
@@ -2553,6 +2556,7 @@ int scoutfs_btree_free_blocks(struct super_block *sb,
le64_to_cpu(ref.blkno));
if (ret < 0)
goto out;
nr_freed++;
node = scoutfs_avl_next(&bt->item_root, node);
if (node) {
@@ -2568,6 +2572,7 @@ int scoutfs_btree_free_blocks(struct super_block *sb,
blknos[i]);
ret = scoutfs_free_meta(sb, alloc, wri, blknos[i]);
BUG_ON(ret); /* checked meta low, freed should fit */
nr_freed++;
}
/* restart walk past the subtree we just freed */

View File

@@ -125,7 +125,7 @@ int scoutfs_btree_free_blocks(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_key *key,
struct scoutfs_btree_root *root, int alloc_low);
struct scoutfs_btree_root *root, int free_budget);
void scoutfs_btree_put_iref(struct scoutfs_btree_item_ref *iref);

View File

@@ -75,8 +75,6 @@
EXPAND_COUNTER(data_write_begin_enobufs_retry) \
EXPAND_COUNTER(dentry_revalidate_error) \
EXPAND_COUNTER(dentry_revalidate_invalid) \
EXPAND_COUNTER(dentry_revalidate_locked) \
EXPAND_COUNTER(dentry_revalidate_orphan) \
EXPAND_COUNTER(dentry_revalidate_rcu) \
EXPAND_COUNTER(dentry_revalidate_root) \
EXPAND_COUNTER(dentry_revalidate_valid) \
@@ -157,6 +155,7 @@
EXPAND_COUNTER(orphan_scan_error) \
EXPAND_COUNTER(orphan_scan_item) \
EXPAND_COUNTER(orphan_scan_omap_set) \
EXPAND_COUNTER(quorum_candidate_server_stopping) \
EXPAND_COUNTER(quorum_elected) \
EXPAND_COUNTER(quorum_fence_error) \
EXPAND_COUNTER(quorum_fence_leader) \

View File

@@ -366,27 +366,27 @@ static inline u64 ext_last(struct scoutfs_extent *ext)
/*
* The caller is writing to a logical iblock that doesn't have an
* allocated extent.
* allocated extent. The caller has searched for an extent containing
* iblock. If it already existed then it must be unallocated and
* offline.
*
* We always allocate an extent starting at the logical iblock. The
* caller has searched for an extent containing iblock. If it already
* existed then it must be unallocated and offline.
* We implement two preallocation strategies. Typically we only
* preallocate for simple streaming writes and limit preallocation while
* the file is small. The largest efficient allocation size is
* typically large enough that it would be unreasonable to allocate that
* much for all small files.
*
* Preallocation is used if we're strictly contiguously extending
* writes. That is, if the logical block offset equals the number of
* online blocks. We try to preallocate the number of blocks existing
* so that small files don't waste inordinate amounts of space and large
* files will eventually see large extents. This only works for
* contiguous single stream writes or stages of files from the first
* block. It doesn't work for concurrent stages, releasing behind
* staging, sparse files, multi-node writes, etc. fallocate() is always
* a better tool to use.
* Optionally, we can simply preallocate large empty aligned regions.
* This can waste a lot of space for small or sparse files but is
* reasonable when a file population is known to be large and dense but
* known to be written with non-streaming write patterns.
*/
static int alloc_block(struct super_block *sb, struct inode *inode,
struct scoutfs_extent *ext, u64 iblock,
struct scoutfs_lock *lock)
{
DECLARE_DATA_INFO(sb, datinf);
struct scoutfs_mount_options opts;
const u64 ino = scoutfs_ino(inode);
struct data_ext_args args = {
.ino = ino,
@@ -394,17 +394,22 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
.lock = lock,
};
struct scoutfs_extent found;
struct scoutfs_extent pre;
struct scoutfs_extent pre = {0,};
bool undo_pre = false;
u64 blkno = 0;
u64 online;
u64 offline;
u8 flags;
u64 start;
u64 count;
u64 rem;
int ret;
int err;
trace_scoutfs_data_alloc_block_enter(sb, ino, iblock, ext);
scoutfs_options_read(sb, &opts);
/* can only allocate over existing unallocated offline extent */
if (WARN_ON_ONCE(ext->len &&
!(iblock >= ext->start && iblock <= ext_last(ext) &&
@@ -413,66 +418,106 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
mutex_lock(&datinf->mutex);
scoutfs_inode_get_onoff(inode, &online, &offline);
/* default to single allocation at the written block */
start = iblock;
count = 1;
/* copy existing flags for preallocated regions */
flags = ext->len ? ext->flags : 0;
if (ext->len) {
/* limit preallocation to remaining existing (offline) extent */
/*
* Assume that offline writers are going to be writing
* all the offline extents and try to preallocate the
* rest of the unwritten extent.
*/
count = ext->len - (iblock - ext->start);
flags = ext->flags;
} else if (opts.data_prealloc_contig_only) {
/*
* Only preallocate when a quick test of the online
* block counts looks like we're a simple streaming
* write. Try to write until the next extent but limit
* the preallocation size to the number of online
* blocks.
*/
scoutfs_inode_get_onoff(inode, &online, &offline);
if (iblock > 1 && iblock == online) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
iblock, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (found.len && found.start > iblock)
count = found.start - iblock;
else
count = opts.data_prealloc_blocks;
count = min(iblock, count);
}
} else {
/* otherwise alloc to next extent */
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
iblock, 1, &found);
/*
* Preallocation of aligned regions only preallocates if
* the aligned region contains no extents at all. This
* could be fooled by offline sparse extents but we
* don't want to iterate over all offline extents in the
* aligned region.
*/
div64_u64_rem(iblock, opts.data_prealloc_blocks, &rem);
start = iblock - rem;
count = opts.data_prealloc_blocks;
ret = scoutfs_ext_next(sb, &data_ext_ops, &args, start, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (found.len && found.start > iblock)
count = found.start - iblock;
else
count = SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT;
flags = 0;
if (found.len && found.start < start + count)
count = 1;
}
/* overall prealloc limit */
count = min_t(u64, count, SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT);
/* only strictly contiguous extending writes will try to preallocate */
if (iblock > 1 && iblock == online)
count = min(iblock, count);
else
count = 1;
count = min_t(u64, count, opts.data_prealloc_blocks);
ret = scoutfs_alloc_data(sb, datinf->alloc, datinf->wri,
&datinf->dalloc, count, &blkno, &count);
if (ret < 0)
goto out;
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock, 1, blkno, 0);
if (ret < 0)
goto out;
/*
* An aligned prealloc attempt that gets a smaller extent can
* fail to cover iblock, make sure that it does. This is a
* pathological case so we don't try to move the window past
* iblock. Just enough to cover it, which we know is safe.
*/
if (start + count <= iblock)
start += (iblock - (start + count) + 1);
if (count > 1) {
pre.start = iblock + 1;
pre.len = count - 1;
pre.map = blkno + 1;
pre.start = start;
pre.len = count;
pre.map = blkno;
pre.flags = flags | SEF_UNWRITTEN;
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, pre.start,
pre.len, pre.map, pre.flags);
if (ret < 0) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock,
1, 0, flags);
BUG_ON(err); /* couldn't restore original */
if (ret < 0)
goto out;
}
undo_pre = true;
}
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock, 1, blkno + (iblock - start), 0);
if (ret < 0)
goto out;
/* tell the caller we have a single block, could check next? */
ext->start = iblock;
ext->len = 1;
ext->map = blkno;
ext->map = blkno + (iblock - start);
ext->flags = 0;
ret = 0;
out:
if (ret < 0 && blkno > 0) {
if (undo_pre) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args,
pre.start, pre.len, 0, flags);
BUG_ON(err); /* leaked preallocated extent */
}
err = scoutfs_free_data(sb, datinf->alloc, datinf->wri,
&datinf->data_freed, blkno, count);
BUG_ON(err); /* leaked free blocks */
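
A small sketch of the aligned-region arithmetic used above when data_prealloc_contig_only is off: the window start is iblock rounded down to the preallocation size (the kernel uses div64_u64_rem for this), and a short allocation is slid forward just far enough to still cover iblock. The helper name and the sample numbers are illustrative only:

#include <stdio.h>
#include <stdint.h>

/* compute the aligned preallocation window covering iblock */
static void aligned_window(uint64_t iblock, uint64_t prealloc_blocks,
			   uint64_t granted, uint64_t *start, uint64_t *count)
{
	uint64_t rem = iblock % prealloc_blocks;

	*start = iblock - rem;		/* align the window start */
	*count = granted;		/* the allocator may return fewer blocks */

	/* a short allocation can leave iblock uncovered; slide just enough */
	if (*start + *count <= iblock)
		*start += iblock - (*start + *count) + 1;
}

int main(void)
{
	uint64_t start, count;

	/* window [64,80) would miss block 100, so it slides to [85,101) */
	aligned_window(100, 64, 16, &start, &count);
	printf("start %llu count %llu\n",
	       (unsigned long long)start, (unsigned long long)count);
	return 0;
}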


@@ -32,6 +32,7 @@
#include "hash.h"
#include "omap.h"
#include "forest.h"
#include "acl.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -59,8 +60,6 @@
* All the entries have a dirent struct with the full name in their
* value. The dirent struct contains the name hash and readdir position
* so that any item use can reference all the items for a given entry.
* This is important for deleting all the items given a dentry that was
* populated by lookup.
*/
static unsigned int mode_to_type(umode_t mode)
@@ -99,100 +98,12 @@ static unsigned int dentry_type(enum scoutfs_dentry_type type)
return DT_UNKNOWN;
}
/*
* @lock_cov: tells revalidation that the dentry is still locked and valid.
*
* @pos, @hash: lets us remove items on final unlink without having to
* look them up.
*/
struct dentry_info {
struct scoutfs_lock_coverage lock_cov;
u64 hash;
u64 pos;
};
static struct kmem_cache *dentry_info_cache;
static void scoutfs_d_release(struct dentry *dentry)
{
struct super_block *sb = dentry->d_sb;
struct dentry_info *di = dentry->d_fsdata;
if (di) {
scoutfs_lock_del_coverage(sb, &di->lock_cov);
kmem_cache_free(dentry_info_cache, di);
dentry->d_fsdata = NULL;
}
}
static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags);
static const struct dentry_operations scoutfs_dentry_ops = {
.d_release = scoutfs_d_release,
const struct dentry_operations scoutfs_dentry_ops = {
.d_revalidate = scoutfs_d_revalidate,
};
static int alloc_dentry_info(struct dentry *dentry)
{
struct dentry_info *di;
smp_rmb();
if (dentry->d_op == &scoutfs_dentry_ops)
return 0;
di = kmem_cache_zalloc(dentry_info_cache, GFP_NOFS);
if (!di)
return -ENOMEM;
scoutfs_lock_init_coverage(&di->lock_cov);
spin_lock(&dentry->d_lock);
if (!dentry->d_fsdata) {
dentry->d_fsdata = di;
smp_wmb();
d_set_d_op(dentry, &scoutfs_dentry_ops);
}
spin_unlock(&dentry->d_lock);
if (di != dentry->d_fsdata)
kmem_cache_free(dentry_info_cache, di);
return 0;
}
static void update_dentry_info(struct super_block *sb, struct dentry *dentry,
u64 hash, u64 pos, struct scoutfs_lock *lock)
{
struct dentry_info *di = dentry->d_fsdata;
if (WARN_ON_ONCE(di == NULL))
return;
scoutfs_lock_add_coverage(sb, lock, &di->lock_cov);
di->hash = hash;
di->pos = pos;
}
static u64 dentry_info_hash(struct dentry *dentry)
{
struct dentry_info *di = dentry->d_fsdata;
if (WARN_ON_ONCE(di == NULL))
return 0;
return di->hash;
}
static u64 dentry_info_pos(struct dentry *dentry)
{
struct dentry_info *di = dentry->d_fsdata;
if (WARN_ON_ONCE(di == NULL))
return 0;
return di->pos;
}
static void init_dirent_key(struct scoutfs_key *key, u8 type, u64 ino,
u64 major, u64 minor)
{
@@ -317,62 +228,105 @@ out:
return ret;
}
/*
* Verify that the caller's dentry still precisely matches our dirent
* items.
*
* The caller has a dentry that the vfs revalidated before they acquired
* their locks. If the dentry is still covered by a lock we immediately
* return 0. If not, we check items and return -ENOENT if a positive
* dentry no longer matches the items or -EEXIST if a negative entry's
* name now has an item.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_lock *lock)
static int lookup_dentry_dirent(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_dirent *dent_ret,
struct scoutfs_lock *lock)
{
struct dentry_info *di = dentry->d_fsdata;
return lookup_dirent(sb, dir_ino, dentry->d_name.name, dentry->d_name.len,
dirent_name_hash(dentry->d_name.name, dentry->d_name.len),
dent_ret, lock);
}
static u64 dentry_parent_ino(struct dentry *dentry)
{
struct dentry *parent = NULL;
struct inode *dir;
u64 dir_ino = 0;
if ((parent = dget_parent(dentry)) && (dir = parent->d_inode))
dir_ino = scoutfs_ino(dir);
dput(parent);
return dir_ino;
}
/* negative dentries return 0, our root ino is non-zero (1) */
static u64 dentry_ino(struct dentry *dentry)
{
return dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
}
static void set_dentry_fsdata(struct dentry *dentry, struct scoutfs_lock *lock)
{
void *now = (void *)(unsigned long)lock->refresh_gen;
void *was;
/* didn't want to alloc :/ */
BUILD_BUG_ON(sizeof(dentry->d_fsdata) != sizeof(u64));
BUILD_BUG_ON(sizeof(dentry->d_fsdata) != sizeof(long));
do {
was = dentry->d_fsdata;
} while (cmpxchg(&dentry->d_fsdata, was, now) != was);
}
static bool test_dentry_fsdata(struct dentry *dentry, u64 refresh)
{
u64 fsd = (unsigned long)ACCESS_ONCE(dentry->d_fsdata);
return fsd == refresh;
}
/*
* Validate an operation caller's input dentry argument. If the fsdata
* is valid then the underlying dirent items couldn't have changed and
* we return 0. If fsdata is no longer protected by a lock or its
* fields don't match then we check the dirent item. If the dirent item
* doesn't match what the caller expected given their dentry fields then
* we return an error.
*/
static int validate_dentry(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_lock *lock)
{
u64 ino = dentry_ino(dentry);
struct scoutfs_dirent dent = {0,};
const char *name;
u64 dentry_ino;
int name_len;
u64 hash;
int ret;
if (scoutfs_lock_is_covered(sb, &di->lock_cov))
return 0;
if (test_dentry_fsdata(dentry, lock->refresh_gen)) {
ret = 0;
goto out;
}
dentry_ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
name = dentry->d_name.name;
name_len = dentry->d_name.len;
hash = dirent_name_hash(name, name_len);
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
ret = lookup_dentry_dirent(sb, dir_ino, dentry, &dent, lock);
if (ret < 0 && ret != -ENOENT)
return ret;
goto out;
if (dentry_ino != le64_to_cpu(dent.ino) || di->hash != le64_to_cpu(dent.hash) ||
di->pos != le64_to_cpu(dent.pos)) {
if (dentry_ino)
ret = -ENOENT;
else
ret = -EEXIST;
/* use negative zeroed dent when lookup gave -ENOENT */
if (!ino && dent.ino) {
/* caller expected negative but there was a dirent */
ret = -EEXIST;
} else if (ino && !dent.ino) {
/* caller expected positive but there was no dirent */
ret = -ENOENT;
} else if (ino != le64_to_cpu(dent.ino)) {
/* name linked to different inode than caller's */
ret = -ESTALE;
} else {
/* dirent ino matches dentry ino */
ret = 0;
}
out:
trace_scoutfs_validate_dentry(sb, dentry, dir_ino, ino, le64_to_cpu(dent.ino),
lock->refresh_gen, ret);
return ret;
}
static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
{
struct super_block *sb = dentry->d_sb;
struct dentry_info *di = dentry->d_fsdata;
struct dentry *parent = dget_parent(dentry);
struct scoutfs_lock *lock = NULL;
struct scoutfs_dirent dent;
bool is_covered = false;
struct inode *dir;
u64 dentry_ino;
u64 dir_ino = dentry_parent_ino(dentry);
int ret;
/* don't think this happens but we can find out */
@@ -394,47 +348,7 @@ static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
goto out;
}
if (WARN_ON_ONCE(di == NULL)) {
ret = 0;
goto out;
}
is_covered = scoutfs_lock_is_covered(sb, &di->lock_cov);
if (is_covered) {
scoutfs_inc_counter(sb, dentry_revalidate_locked);
ret = 1;
goto out;
}
if (!parent || !parent->d_inode) {
scoutfs_inc_counter(sb, dentry_revalidate_orphan);
ret = 0;
goto out;
}
dir = parent->d_inode;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, dir, &lock);
if (ret)
goto out;
ret = lookup_dirent(sb, scoutfs_ino(dir),
dentry->d_name.name, dentry->d_name.len,
dirent_name_hash(dentry->d_name.name,
dentry->d_name.len),
&dent, lock);
if (ret == -ENOENT) {
dent.ino = 0;
dent.hash = 0;
dent.pos = 0;
} else if (ret < 0) {
goto out;
}
dentry_ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
if ((dentry_ino == le64_to_cpu(dent.ino))) {
update_dentry_info(sb, dentry, le64_to_cpu(dent.hash),
le64_to_cpu(dent.pos), lock);
if (test_dentry_fsdata(dentry, scoutfs_lock_ino_refresh_gen(sb, dir_ino))) {
scoutfs_inc_counter(sb, dentry_revalidate_valid);
ret = 1;
} else {
@@ -443,10 +357,7 @@ static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
}
out:
trace_scoutfs_d_revalidate(sb, dentry, flags, parent, is_covered, ret);
dput(parent);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
trace_scoutfs_d_revalidate(sb, dentry, flags, dir_ino, ret);
if (ret < 0 && ret != -ECHILD)
scoutfs_inc_counter(sb, dentry_revalidate_error);
@@ -483,10 +394,6 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
goto out;
}
ret = alloc_dentry_info(dentry);
if (ret)
goto out;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, dir, &dir_lock);
if (ret)
goto out;
@@ -500,8 +407,7 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
ino = le64_to_cpu(dent.ino);
}
if (ret == 0)
update_dentry_info(sb, dentry, le64_to_cpu(dent.hash),
le64_to_cpu(dent.pos), dir_lock);
set_dentry_fsdata(dentry, dir_lock);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_READ);
@@ -725,10 +631,6 @@ static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
int ret = 0;
u64 ino;
ret = alloc_dentry_info(dentry);
if (ret)
return ERR_PTR(ret);
ret = scoutfs_alloc_ino(sb, S_ISDIR(mode), &ino);
if (ret)
return ERR_PTR(ret);
@@ -765,7 +667,8 @@ retry:
if (ret)
goto out_unlock;
ret = scoutfs_new_inode(sb, dir, mode, rdev, ino, *inode_lock, &inode);
ret = scoutfs_new_inode(sb, dir, mode, rdev, ino, *inode_lock, &inode) ?:
scoutfs_init_acl_locked(inode, dir, *inode_lock, *dir_lock, ind_locks);
if (ret < 0)
goto out;
@@ -816,7 +719,7 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
ret = validate_dentry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
@@ -829,7 +732,7 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
if (ret)
goto out;
update_dentry_info(sb, dentry, hash, pos, dir_lock);
set_dentry_fsdata(dentry, dir_lock);
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
@@ -903,19 +806,15 @@ static int scoutfs_link(struct dentry *old_dentry,
if (ret)
return ret;
ret = validate_dentry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out_unlock;
if (inode->i_nlink >= SCOUTFS_LINK_MAX) {
ret = -EMLINK;
goto out_unlock;
}
ret = alloc_dentry_info(dentry);
if (ret)
goto out_unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out_unlock;
dir_size = i_size_read(dir) + dentry->d_name.len;
if (inode->i_nlink == 0) {
@@ -957,7 +856,7 @@ retry:
WARN_ON_ONCE(err); /* no orphan, might not scan and delete after crash */
goto out;
}
update_dentry_info(sb, dentry, hash, pos, dir_lock);
set_dentry_fsdata(dentry, dir_lock);
i_size_write(dir, dir_size);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
@@ -1005,9 +904,11 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent dent;
LIST_HEAD(ind_locks);
u64 ind_seq;
int ret = 0;
u64 hash;
int ret;
ret = scoutfs_lock_inodes(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE,
@@ -1016,11 +917,7 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
if (ret)
return ret;
ret = alloc_dentry_info(dentry);
if (ret)
goto unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
ret = validate_dentry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto unlock;
@@ -1029,6 +926,13 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
goto unlock;
}
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
ret = lookup_dirent(sb, scoutfs_ino(dir), dentry->d_name.name, dentry->d_name.len, hash,
&dent, dir_lock);
if (ret < 0)
goto out;
if (should_orphan(inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(inode),
&orph_lock);
@@ -1052,16 +956,15 @@ retry:
goto out;
}
ret = del_entry_items(sb, scoutfs_ino(dir), dentry_info_hash(dentry),
dentry_info_pos(dentry), scoutfs_ino(inode),
dir_lock, inode_lock);
ret = del_entry_items(sb, scoutfs_ino(dir), le64_to_cpu(dent.hash), le64_to_cpu(dent.pos),
scoutfs_ino(inode), dir_lock, inode_lock);
if (ret) {
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock);
WARN_ON_ONCE(ret); /* should have been dirty */
goto out;
}
update_dentry_info(sb, dentry, 0, 0, dir_lock);
set_dentry_fsdata(dentry, dir_lock);
dir->i_ctime = ts;
dir->i_mtime = ts;
@@ -1242,10 +1145,11 @@ const struct inode_operations scoutfs_symlink_iops = {
.put_link = scoutfs_put_link,
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = scoutfs_removexattr,
.removexattr = generic_removexattr,
.get_acl = scoutfs_get_acl,
};
/*
@@ -1273,17 +1177,13 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
name_len > PATH_MAX || name_len > SCOUTFS_SYMLINK_MAX_SIZE)
return -ENAMETOOLONG;
ret = alloc_dentry_info(dentry);
if (ret)
return ret;
inode = lock_hold_create(dir, dentry, S_IFLNK|S_IRWXUGO, 0,
&dir_lock, &inode_lock, NULL, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
ret = validate_dentry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
@@ -1301,7 +1201,7 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
if (ret)
goto out;
update_dentry_info(sb, dentry, hash, pos, dir_lock);
set_dentry_fsdata(dentry, dir_lock);
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
@@ -1631,6 +1531,8 @@ static int scoutfs_rename_common(struct inode *old_dir,
struct scoutfs_lock *old_inode_lock = NULL;
struct scoutfs_lock *new_inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_dirent new_dent;
struct scoutfs_dirent old_dent;
struct timespec now;
bool ins_new = false;
bool del_new = false;
@@ -1678,19 +1580,18 @@ static int scoutfs_rename_common(struct inode *old_dir,
if (ret)
goto out_unlock;
/* make sure that the entries assumed by the argument still exist */
ret = validate_dentry(sb, scoutfs_ino(old_dir), old_dentry, old_dir_lock) ?:
validate_dentry(sb, scoutfs_ino(new_dir), new_dentry, new_dir_lock);
if (ret)
goto out_unlock;
/* test dir i_size now that it's refreshed */
if (new_inode && S_ISDIR(new_inode->i_mode) && i_size_read(new_inode)) {
ret = -ENOTEMPTY;
goto out_unlock;
}
/* make sure that the entries assumed by the argument still exist */
ret = alloc_dentry_info(old_dentry) ?:
alloc_dentry_info(new_dentry) ?:
verify_entry(sb, scoutfs_ino(old_dir), old_dentry, old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry, new_dir_lock);
if (ret)
goto out_unlock;
if ((flags & RENAME_NOREPLACE) && (new_inode != NULL)) {
ret = -EEXIST;
@@ -1733,10 +1634,12 @@ retry:
/* remove the new entry if it exists */
if (new_inode) {
ret = del_entry_items(sb, scoutfs_ino(new_dir),
dentry_info_hash(new_dentry),
dentry_info_pos(new_dentry),
scoutfs_ino(new_inode),
ret = lookup_dirent(sb, scoutfs_ino(new_dir), new_dentry->d_name.name,
new_dentry->d_name.len, new_hash, &new_dent, new_dir_lock);
if (ret < 0)
goto out;
ret = del_entry_items(sb, scoutfs_ino(new_dir), le64_to_cpu(new_dent.hash),
le64_to_cpu(new_dent.pos), scoutfs_ino(new_inode),
new_dir_lock, new_inode_lock);
if (ret)
goto out;
@@ -1752,11 +1655,14 @@ retry:
goto out;
del_new = true;
ret = lookup_dirent(sb, scoutfs_ino(old_dir), old_dentry->d_name.name,
old_dentry->d_name.len, old_hash, &old_dent, old_dir_lock);
if (ret < 0)
goto out;
/* remove the old entry */
ret = del_entry_items(sb, scoutfs_ino(old_dir),
dentry_info_hash(old_dentry),
dentry_info_pos(old_dentry),
scoutfs_ino(old_inode),
ret = del_entry_items(sb, scoutfs_ino(old_dir), le64_to_cpu(old_dent.hash),
le64_to_cpu(old_dent.pos), scoutfs_ino(old_inode),
old_dir_lock, old_inode_lock);
if (ret)
goto out;
@@ -1771,7 +1677,7 @@ retry:
/* won't fail from here on out, update all the vfs structs */
/* the caller will use d_move to move the old_dentry into place */
update_dentry_info(sb, old_dentry, new_hash, new_pos, new_dir_lock);
set_dentry_fsdata(old_dentry, new_dir_lock);
i_size_write(old_dir, i_size_read(old_dir) - old_dentry->d_name.len);
if (!new_inode)
@@ -1836,8 +1742,8 @@ out:
err = 0;
if (ins_old)
err = add_entry_items(sb, scoutfs_ino(old_dir),
dentry_info_hash(old_dentry),
dentry_info_pos(old_dentry),
le64_to_cpu(old_dent.hash),
le64_to_cpu(old_dent.pos),
old_dentry->d_name.name,
old_dentry->d_name.len,
scoutfs_ino(old_inode),
@@ -1853,8 +1759,8 @@ out:
if (ins_new && err == 0)
err = add_entry_items(sb, scoutfs_ino(new_dir),
dentry_info_hash(new_dentry),
dentry_info_pos(new_dentry),
le64_to_cpu(new_dent.hash),
le64_to_cpu(new_dent.pos),
new_dentry->d_name.name,
new_dentry->d_name.len,
scoutfs_ino(new_inode),
@@ -1978,32 +1884,14 @@ const struct inode_operations_wrapper scoutfs_dir_iops = {
.rename = scoutfs_rename,
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = scoutfs_removexattr,
.removexattr = generic_removexattr,
.get_acl = scoutfs_get_acl,
.symlink = scoutfs_symlink,
.permission = scoutfs_permission,
},
.tmpfile = scoutfs_tmpfile,
.rename2 = scoutfs_rename2,
};
void scoutfs_dir_exit(void)
{
if (dentry_info_cache) {
kmem_cache_destroy(dentry_info_cache);
dentry_info_cache = NULL;
}
}
int scoutfs_dir_init(void)
{
dentry_info_cache = kmem_cache_create("scoutfs_dentry_info",
sizeof(struct dentry_info), 0,
SLAB_RECLAIM_ACCOUNT, NULL);
if (!dentry_info_cache)
return -ENOMEM;
return 0;
}
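
Once the d_fsdata refresh_gen fast path misses, validate_dentry() above boils down to comparing the inode number implied by the caller's dentry with the inode number held by the current dirent item. A compact userspace sketch of just that comparison and its error codes, assuming the dirent lookup has already been done (dent_ino of 0 means no dirent exists):

#include <errno.h>
#include <stdio.h>
#include <stdint.h>

static int validate(uint64_t dentry_ino, uint64_t dent_ino)
{
	if (dentry_ino == 0 && dent_ino != 0)
		return -EEXIST;	/* caller expected negative, a dirent now exists */
	if (dentry_ino != 0 && dent_ino == 0)
		return -ENOENT;	/* caller expected positive, the dirent is gone */
	if (dentry_ino != dent_ino)
		return -ESTALE;	/* the name now links to a different inode */
	return 0;		/* dirent still matches the dentry */
}

int main(void)
{
	printf("%d %d %d %d\n",
	       validate(0, 0), validate(0, 42), validate(42, 0), validate(42, 7));
	return 0;
}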


@@ -8,6 +8,8 @@ extern const struct file_operations scoutfs_dir_fops;
extern const struct inode_operations_wrapper scoutfs_dir_iops;
extern const struct inode_operations scoutfs_symlink_iops;
extern const struct dentry_operations scoutfs_dentry_ops;
struct scoutfs_link_backref_entry {
struct list_head head;
u64 dir_ino;
@@ -29,7 +31,4 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
int scoutfs_symlink_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock, u64 i_size);
int scoutfs_dir_init(void);
void scoutfs_dir_exit(void);
#endif


@@ -36,6 +36,7 @@
#include "omap.h"
#include "forest.h"
#include "btree.h"
#include "acl.h"
/*
* XXX
@@ -136,20 +137,22 @@ void scoutfs_destroy_inode(struct inode *inode)
static const struct inode_operations scoutfs_file_iops = {
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = scoutfs_removexattr,
.removexattr = generic_removexattr,
.get_acl = scoutfs_get_acl,
.fiemap = scoutfs_data_fiemap,
};
static const struct inode_operations scoutfs_special_iops = {
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = scoutfs_removexattr,
.removexattr = generic_removexattr,
.get_acl = scoutfs_get_acl,
};
/*
@@ -507,10 +510,15 @@ retry:
if (ret)
goto out;
ret = scoutfs_acl_chmod_locked(inode, attr, lock, &ind_locks);
if (ret < 0)
goto release;
setattr_copy(inode, attr);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
release:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
out:
@@ -1685,6 +1693,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
struct scoutfs_lock *lock = NULL;
struct scoutfs_inode sinode;
struct scoutfs_key key;
bool clear_trying = false;
u64 group_nr;
int bit_nr;
int ret;
@@ -1704,6 +1713,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
ret = 0;
goto out;
}
clear_trying = true;
/* can't delete if it's cached in local or remote mounts */
if (scoutfs_omap_test(sb, ino) || test_bit_le(bit_nr, ldata->map.bits)) {
@@ -1730,7 +1740,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
ret = delete_inode_items(sb, ino, &sinode, lock, orph_lock);
out:
if (ldata)
if (clear_trying)
clear_bit(bit_nr, ldata->trying);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);


@@ -46,4 +46,10 @@ static inline int dir_emit_dots(struct file *file, void *dirent,
}
#endif
#ifdef KC_POSIX_ACL_VALID_USER_NS
#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(user_ns, acl)
#else
#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(acl)
#endif
#endif


@@ -18,6 +18,7 @@
#include <linux/mm.h>
#include <linux/sort.h>
#include <linux/ctype.h>
#include <linux/posix_acl.h>
#include "super.h"
#include "lock.h"
@@ -156,6 +157,8 @@ static void invalidate_inode(struct super_block *sb, u64 ino)
if (!linfo->unmounting)
d_prune_aliases(inode);
forget_all_cached_acls(inode);
si->drop_invalidated = true;
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink > 0) {
iput(inode);
@@ -289,6 +292,7 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
lock->sb = sb;
init_waitqueue_head(&lock->waitq);
lock->mode = SCOUTFS_LOCK_NULL;
lock->invalidating_mode = SCOUTFS_LOCK_NULL;
atomic64_set(&lock->forest_bloom_nr, 0);
@@ -666,7 +670,9 @@ struct inv_req {
*
* Before we start invalidating the lock we set the lock to the new
* mode, preventing further incompatible users of the old mode from
* using the lock while we're invalidating.
* using the lock while we're invalidating. We record the previously
* granted mode so that we can send lock recover responses with the old
* granted mode during invalidation.
*/
static void lock_invalidate_worker(struct work_struct *work)
{
@@ -691,7 +697,8 @@ static void lock_invalidate_worker(struct work_struct *work)
if (!lock_counts_match(nl->new_mode, lock->users))
continue;
/* set the new mode, no incompatible users during inval */
/* set the new mode, no incompatible users during inval, recov needs old */
lock->invalidating_mode = lock->mode;
lock->mode = nl->new_mode;
/* move everyone that's ready to our private list */
@@ -734,6 +741,8 @@ static void lock_invalidate_worker(struct work_struct *work)
list_del(&ireq->head);
kfree(ireq);
lock->invalidating_mode = SCOUTFS_LOCK_NULL;
if (list_empty(&lock->inv_list)) {
/* finish if another request didn't arrive */
list_del_init(&lock->inv_head);
@@ -824,6 +833,7 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_net_lock_recover *nlr;
enum scoutfs_lock_mode mode;
struct scoutfs_lock *lock;
struct scoutfs_lock *next;
struct rb_node *node;
@@ -844,10 +854,15 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
for (i = 0; lock && i < SCOUTFS_NET_LOCK_MAX_RECOVER_NR; i++) {
if (lock->invalidating_mode != SCOUTFS_LOCK_NULL)
mode = lock->invalidating_mode;
else
mode = lock->mode;
nlr->locks[i].key = lock->start;
nlr->locks[i].write_seq = cpu_to_le64(lock->write_seq);
nlr->locks[i].old_mode = lock->mode;
nlr->locks[i].new_mode = lock->mode;
nlr->locks[i].old_mode = mode;
nlr->locks[i].new_mode = mode;
node = rb_next(&lock->node);
if (node)
@@ -1513,6 +1528,38 @@ void scoutfs_lock_flush_invalidate(struct super_block *sb)
flush_work(&linfo->inv_work);
}
static u64 get_held_lock_refresh_gen(struct super_block *sb, struct scoutfs_key *start)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
u64 refresh_gen = 0;
/* this can be called from all manner of places */
if (!linfo)
return 0;
spin_lock(&linfo->lock);
lock = lock_lookup(sb, start, NULL);
if (lock) {
if (lock_mode_can_read(lock->mode))
refresh_gen = lock->refresh_gen;
}
spin_unlock(&linfo->lock);
return refresh_gen;
}
u64 scoutfs_lock_ino_refresh_gen(struct super_block *sb, u64 ino)
{
struct scoutfs_key start;
scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_FS_ZONE;
start.ski_ino = cpu_to_le64(ino & ~(u64)SCOUTFS_LOCK_INODE_GROUP_MASK);
return get_held_lock_refresh_gen(sb, &start);
}
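
scoutfs_lock_ino_refresh_gen() above masks the inode number down to its lock group's start key and only reports a refresh_gen when a readable lock is actually held, returning 0 otherwise. A toy userspace sketch of that lookup; the group mask value and the lock records are invented for illustration:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

#define LOCK_INODE_GROUP_MASK 0xffULL	/* hypothetical group size */

struct lock_rec {
	uint64_t start_ino;	/* group start key */
	bool readable;		/* held mode grants read access */
	uint64_t refresh_gen;
};

/* return the refresh_gen of a held, readable lock covering ino, else 0 */
static uint64_t ino_refresh_gen(const struct lock_rec *held, size_t nr, uint64_t ino)
{
	uint64_t start = ino & ~LOCK_INODE_GROUP_MASK;
	size_t i;

	for (i = 0; i < nr; i++) {
		if (held[i].start_ino == start)
			return held[i].readable ? held[i].refresh_gen : 0;
	}
	return 0;
}

int main(void)
{
	struct lock_rec held[] = { { 768, true, 42 } };

	/* ino 1000 masks down to group start 768, so its gen is 42 */
	printf("gen %llu\n", (unsigned long long)ino_refresh_gen(held, 1, 1000));
	return 0;
}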
/*
* The caller is going to be shutting down transactions and the client.
* We need to make sure that locking won't call either after we return.


@@ -39,6 +39,7 @@ struct scoutfs_lock {
struct list_head cov_list;
enum scoutfs_lock_mode mode;
enum scoutfs_lock_mode invalidating_mode;
unsigned int waiters[SCOUTFS_LOCK_NR_MODES];
unsigned int users[SCOUTFS_LOCK_NR_MODES];
@@ -99,6 +100,8 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
enum scoutfs_lock_mode mode);
u64 scoutfs_lock_ino_refresh_gen(struct super_block *sb, u64 ino);
void scoutfs_free_unused_locks(struct super_block *sb);
int scoutfs_lock_setup(struct super_block *sb);


@@ -749,7 +749,7 @@ out:
if (ret < 0) {
scoutfs_err(sb, "lock server err %d during client rid %016llx farewell, shutting down",
ret, rid);
scoutfs_server_abort(sb);
scoutfs_server_stop(sb);
}
return ret;


@@ -355,6 +355,7 @@ static int submit_send(struct super_block *sb,
}
if (rid != 0) {
spin_unlock(&conn->lock);
kfree(msend);
return -ENOTCONN;
}
}
@@ -991,6 +992,8 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
if (ret < 0)
break;
acc_sock->sk->sk_allocation = GFP_NOFS;
/* inherit accepted request funcs from listening conn */
acc_conn = scoutfs_net_alloc_conn(sb, conn->notify_up,
conn->notify_down,
@@ -1053,6 +1056,8 @@ static void scoutfs_net_connect_worker(struct work_struct *work)
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
/* caller specified connect timeout */
tv.tv_sec = conn->connect_timeout_ms / MSEC_PER_SEC;
tv.tv_usec = (conn->connect_timeout_ms % MSEC_PER_SEC) * USEC_PER_MSEC;
@@ -1292,7 +1297,7 @@ restart:
if (ret) {
scoutfs_err(sb, "client fence returned err %d, shutting down server",
ret);
scoutfs_server_abort(sb);
scoutfs_server_stop(sb);
}
}
destroy_conn(acc);
@@ -1341,10 +1346,12 @@ scoutfs_net_alloc_conn(struct super_block *sb,
if (!conn)
return NULL;
conn->info = kzalloc(info_size, GFP_NOFS);
if (!conn->info) {
kfree(conn);
return NULL;
if (info_size) {
conn->info = kzalloc(info_size, GFP_NOFS);
if (!conn->info) {
kfree(conn);
return NULL;
}
}
conn->workq = alloc_workqueue("scoutfs_net_%s",
@@ -1450,6 +1457,8 @@ int scoutfs_net_bind(struct super_block *sb,
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
(char *)&optval, sizeof(optval));


@@ -157,6 +157,15 @@ static int free_rid(struct omap_rid_list *list, struct omap_rid_entry *entry)
return nr;
}
static void free_rid_list(struct omap_rid_list *list)
{
struct omap_rid_entry *entry;
struct omap_rid_entry *tmp;
list_for_each_entry_safe(entry, tmp, &list->head, head)
free_rid(list, entry);
}
static int copy_rids(struct omap_rid_list *to, struct omap_rid_list *from, spinlock_t *from_lock)
{
struct omap_rid_entry *entry;
@@ -804,6 +813,10 @@ void scoutfs_omap_server_shutdown(struct super_block *sb)
llist_for_each_entry_safe(req, tmp, requests, llnode)
kfree(req);
spin_lock(&ominf->lock);
free_rid_list(&ominf->rids);
spin_unlock(&ominf->lock);
synchronize_rcu();
}
@@ -864,6 +877,10 @@ void scoutfs_omap_destroy(struct super_block *sb)
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
spin_lock(&ominf->lock);
free_rid_list(&ominf->rids);
spin_unlock(&ominf->lock);
rhashtable_destroy(&ominf->group_ht);
rhashtable_destroy(&ominf->req_ht);
kfree(ominf);


@@ -27,16 +27,25 @@
#include "options.h"
#include "super.h"
#include "inode.h"
#include "alloc.h"
enum {
Opt_acl,
Opt_data_prealloc_blocks,
Opt_data_prealloc_contig_only,
Opt_metadev_path,
Opt_noacl,
Opt_orphan_scan_delay_ms,
Opt_quorum_slot_nr,
Opt_err,
};
static const match_table_t tokens = {
{Opt_acl, "acl"},
{Opt_data_prealloc_blocks, "data_prealloc_blocks=%s"},
{Opt_data_prealloc_contig_only, "data_prealloc_contig_only=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_noacl, "noacl"},
{Opt_orphan_scan_delay_ms, "orphan_scan_delay_ms=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_err, NULL}
@@ -106,11 +115,17 @@ static void free_options(struct scoutfs_mount_options *opts)
#define DEFAULT_ORPHAN_SCAN_DELAY_MS (10 * MSEC_PER_SEC)
#define MAX_ORPHAN_SCAN_DELAY_MS (60 * MSEC_PER_SEC)
#define MIN_DATA_PREALLOC_BLOCKS 1ULL
#define MAX_DATA_PREALLOC_BLOCKS ((unsigned long long)SCOUTFS_BLOCK_SM_MAX)
static void init_default_options(struct scoutfs_mount_options *opts)
{
memset(opts, 0, sizeof(*opts));
opts->data_prealloc_blocks = SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS;
opts->data_prealloc_contig_only = 1;
opts->quorum_slot_nr = -1;
opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
opts->orphan_scan_delay_ms = -1;
}
/*
@@ -122,6 +137,7 @@ static void init_default_options(struct scoutfs_mount_options *opts)
static int parse_options(struct super_block *sb, char *options, struct scoutfs_mount_options *opts)
{
substring_t args[MAX_OPT_ARGS];
u64 nr64;
int nr;
int token;
char *p;
@@ -134,12 +150,44 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
token = match_token(p, tokens, args);
switch (token) {
case Opt_acl:
sb->s_flags |= MS_POSIXACL;
break;
case Opt_data_prealloc_blocks:
ret = match_u64(args, &nr64);
if (ret < 0 ||
nr64 < MIN_DATA_PREALLOC_BLOCKS || nr64 > MAX_DATA_PREALLOC_BLOCKS) {
scoutfs_err(sb, "invalid data_prealloc_blocks option, must be between %llu and %llu",
MIN_DATA_PREALLOC_BLOCKS, MAX_DATA_PREALLOC_BLOCKS);
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->data_prealloc_blocks = nr64;
break;
case Opt_data_prealloc_contig_only:
ret = match_int(args, &nr);
if (ret < 0 || nr < 0 || nr > 1) {
scoutfs_err(sb, "invalid data_prealloc_contig_only option, bool must only be 0 or 1");
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->data_prealloc_contig_only = nr;
break;
case Opt_metadev_path:
ret = parse_bdev_path(sb, &args[0], &opts->metadev_path);
if (ret < 0)
return ret;
break;
case Opt_noacl:
sb->s_flags &= ~MS_POSIXACL;
break;
case Opt_orphan_scan_delay_ms:
if (opts->orphan_scan_delay_ms != -1) {
scoutfs_err(sb, "multiple orphan_scan_delay_ms options provided, only provide one.");
@@ -181,6 +229,9 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
}
}
if (opts->orphan_scan_delay_ms == -1)
opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
if (!opts->metadev_path) {
scoutfs_err(sb, "Required mount option \"metadev_path\" not found");
return -EINVAL;
@@ -250,10 +301,17 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
{
struct super_block *sb = root->d_sb;
struct scoutfs_mount_options opts;
const bool is_acl = !!(sb->s_flags & MS_POSIXACL);
scoutfs_options_read(sb, &opts);
if (is_acl)
seq_puts(seq, ",acl");
seq_printf(seq, ",data_prealloc_blocks=%llu", opts.data_prealloc_blocks);
seq_printf(seq, ",data_prealloc_contig_only=%u", opts.data_prealloc_contig_only);
seq_printf(seq, ",metadev_path=%s", opts.metadev_path);
if (!is_acl)
seq_puts(seq, ",noacl");
seq_printf(seq, ",orphan_scan_delay_ms=%u", opts.orphan_scan_delay_ms);
if (opts.quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts.quorum_slot_nr);
@@ -261,6 +319,83 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
return 0;
}
static ssize_t data_prealloc_blocks_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%llu", opts.data_prealloc_blocks);
}
static ssize_t data_prealloc_blocks_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[30]; /* more than enough for octal -U64_MAX */
u64 val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtoll(nullterm, 0, &val);
if (ret < 0 || val < MIN_DATA_PREALLOC_BLOCKS || val > MAX_DATA_PREALLOC_BLOCKS) {
scoutfs_err(sb, "invalid data_prealloc_blocks option, must be between %llu and %llu",
MIN_DATA_PREALLOC_BLOCKS, MAX_DATA_PREALLOC_BLOCKS);
return -EINVAL;
}
write_seqlock(&optinf->seqlock);
optinf->opts.data_prealloc_blocks = val;
write_sequnlock(&optinf->seqlock);
return count;
}
SCOUTFS_ATTR_RW(data_prealloc_blocks);
static ssize_t data_prealloc_contig_only_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%u", opts.data_prealloc_contig_only);
}
static ssize_t data_prealloc_contig_only_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[20]; /* more than enough for octal -U32_MAX */
long val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtol(nullterm, 0, &val);
if (ret < 0 || val < 0 || val > 1) {
scoutfs_err(sb, "invalid data_prealloc_contig_only option, bool must be 0 or 1");
return -EINVAL;
}
write_seqlock(&optinf->seqlock);
optinf->opts.data_prealloc_contig_only = val;
write_sequnlock(&optinf->seqlock);
return count;
}
SCOUTFS_ATTR_RW(data_prealloc_contig_only);
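
The store handlers above copy at most a small, bounded number of bytes into a NUL-terminated buffer before parsing and range-checking the value. A userspace approximation of that pattern, using strtoull in place of the kernel's kstrtoll and with hypothetical bounds supplied by the caller:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* copy at most sizeof(buf)-1 bytes, terminate, parse, and range-check */
static int parse_prealloc_blocks(const char *input, size_t count,
				 unsigned long long min, unsigned long long max,
				 unsigned long long *val)
{
	char nullterm[30];
	size_t len = count < sizeof(nullterm) - 1 ? count : sizeof(nullterm) - 1;
	char *end;

	memcpy(nullterm, input, len);
	nullterm[len] = '\0';

	errno = 0;
	*val = strtoull(nullterm, &end, 0);
	if (errno || end == nullterm || *val < min || *val > max)
		return -EINVAL;
	return 0;
}

int main(void)
{
	unsigned long long v;

	if (parse_prealloc_blocks("4096\n", 5, 1, 1ULL << 30, &v) == 0)
		printf("data_prealloc_blocks = %llu\n", v);
	return 0;
}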
static ssize_t metadev_path_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
@@ -325,6 +460,8 @@ static ssize_t quorum_slot_nr_show(struct kobject *kobj, struct kobj_attribute *
SCOUTFS_ATTR_RO(quorum_slot_nr);
static struct attribute *options_attrs[] = {
SCOUTFS_ATTR_PTR(data_prealloc_blocks),
SCOUTFS_ATTR_PTR(data_prealloc_contig_only),
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),
SCOUTFS_ATTR_PTR(quorum_slot_nr),


@@ -6,6 +6,8 @@
#include "format.h"
struct scoutfs_mount_options {
u64 data_prealloc_blocks;
bool data_prealloc_contig_only;
char *metadev_path;
unsigned int orphan_scan_delay_ms;
int quorum_slot_nr;


@@ -105,6 +105,8 @@ enum quorum_role { FOLLOWER, CANDIDATE, LEADER };
struct quorum_status {
enum quorum_role role;
u64 term;
u64 server_start_term;
int server_event;
int vote_for;
unsigned long vote_bits;
ktime_t timeout;
@@ -117,7 +119,6 @@ struct quorum_info {
bool shutdown;
int our_quorum_slot_nr;
unsigned long flags;
int votes_needed;
spinlock_t show_lock;
@@ -128,8 +129,6 @@ struct quorum_info {
struct scoutfs_sysfs_attrs ssa;
};
#define QINF_FLAG_SERVER 0
#define DECLARE_QUORUM_INFO(sb, name) \
struct quorum_info *name = SCOUTFS_SB(sb)->quorum_info
#define DECLARE_QUORUM_INFO_KOBJ(kobj, name) \
@@ -494,16 +493,6 @@ static int update_quorum_block(struct super_block *sb, int event, u64 term, bool
return ret;
}
/*
* The calling server has fenced previous leaders and reclaimed their
* resources. We can now update our fence event with a greater term to
* stop future leaders from doing the same.
*/
int scoutfs_quorum_fence_complete(struct super_block *sb, u64 term)
{
return update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_FENCE, term, true);
}
/*
* The calling server has been elected and has started running but can't
* yet assume that it has exclusive access to the metadata device. We
@@ -593,15 +582,9 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
}
out:
if (fence_started) {
err = scoutfs_fence_wait_fenced(sb, msecs_to_jiffies(SCOUTFS_QUORUM_FENCE_TO_MS));
if (ret == 0)
ret = err;
} else {
err = scoutfs_quorum_fence_complete(sb, term);
if (ret == 0)
ret = err;
}
err = scoutfs_fence_wait_fenced(sb, msecs_to_jiffies(SCOUTFS_QUORUM_FENCE_TO_MS));
if (ret == 0)
ret = err;
if (ret < 0)
scoutfs_inc_counter(sb, quorum_fence_error);
@@ -609,12 +592,26 @@ out:
return ret;
}
/*
* The main quorum task maintains its private status. It seemed cleaner
* to occasionally copy the status for showing in sysfs/debugfs files
* than to have the two lock access to shared status. The show copy is
* updated after being modified before the quorum task sleeps for a
* significant amount of time, either waiting on timeouts or interacting
* with the server.
*/
static void update_show_status(struct quorum_info *qinf, struct quorum_status *qst)
{
spin_lock(&qinf->show_lock);
qinf->show_status = *qst;
spin_unlock(&qinf->show_lock);
}
/*
* The quorum work always runs in the background of quorum member
* mounts. It's responsible for starting and stopping the server if
* it's elected leader, and the server can call back into it to let it
* know that it has shut itself down (perhaps due to error) so that the
* work should stop sending heartbeats.
* it's elected leader. While it's leader it sends heartbeats to
* suppress other quorum work from standing for election.
*/
static void scoutfs_quorum_worker(struct work_struct *work)
{
@@ -622,7 +619,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
struct super_block *sb = qinf->sb;
struct sockaddr_in unused;
struct quorum_host_msg msg;
struct quorum_status qst;
struct quorum_status qst = {0,};
int ret;
int err;
@@ -631,9 +628,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* start out as a follower */
qst.role = FOLLOWER;
qst.term = 0;
qst.vote_for = -1;
qst.vote_bits = 0;
/* read our starting term from greatest in all events in all slots */
read_greatest_term(sb, &qst.term);
@@ -651,6 +646,8 @@ static void scoutfs_quorum_worker(struct work_struct *work)
while (!(qinf->shutdown || scoutfs_forcing_unmount(sb))) {
update_show_status(qinf, &qst);
ret = recv_msg(sb, &msg, qst.timeout);
if (ret < 0) {
if (ret != -ETIMEDOUT && ret != -EAGAIN) {
@@ -667,24 +664,6 @@ static void scoutfs_quorum_worker(struct work_struct *work)
msg.term < qst.term)
msg.type = SCOUTFS_QUORUM_MSG_INVALID;
/* if the server has shutdown we become follower */
if (!test_bit(QINF_FLAG_SERVER, &qinf->flags) &&
qst.role == LEADER) {
qst.role = FOLLOWER;
qst.vote_for = -1;
qst.vote_bits = 0;
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_server_shutdown);
send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION,
qst.term);
scoutfs_inc_counter(sb, quorum_send_resignation);
}
spin_lock(&qinf->show_lock);
qinf->show_status = qst;
spin_unlock(&qinf->show_lock);
trace_scoutfs_quorum_loop(sb, qst.role, qst.term, qst.vote_for,
qst.vote_bits,
ktime_to_timespec64(qst.timeout));
@@ -695,7 +674,6 @@ static void scoutfs_quorum_worker(struct work_struct *work)
if (qst.role == LEADER) {
scoutfs_warn(sb, "saw msg type %u from %u for term %llu while leader in term %llu, shutting down server.",
msg.type, msg.from, msg.term, qst.term);
scoutfs_server_stop(sb);
}
qst.role = FOLLOWER;
qst.term = msg.term;
@@ -717,6 +695,13 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* followers and candidates start new election on timeout */
if (qst.role != LEADER &&
ktime_after(ktime_get(), qst.timeout)) {
/* .. but only if their server has stopped */
if (!scoutfs_server_is_down(sb)) {
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_candidate_server_stopping);
continue;
}
qst.role = CANDIDATE;
qst.term++;
qst.vote_for = -1;
@@ -758,29 +743,69 @@ static void scoutfs_quorum_worker(struct work_struct *work)
qst.term);
qst.timeout = heartbeat_interval();
update_show_status(qinf, &qst);
/* record that we've been elected before starting up server */
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_ELECT, qst.term, true);
if (ret < 0)
goto out;
/* make very sure server is fully shut down */
scoutfs_server_stop(sb);
/* set server bit before server shutdown could clear */
set_bit(QINF_FLAG_SERVER, &qinf->flags);
qst.server_start_term = qst.term;
qst.server_event = SCOUTFS_QUORUM_EVENT_ELECT;
scoutfs_server_start(sb, qst.term);
}
ret = scoutfs_server_start(sb, qst.term);
if (ret < 0) {
clear_bit(QINF_FLAG_SERVER, &qinf->flags);
/* store our increased term */
err = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP, qst.term,
true);
if (err < 0) {
ret = err;
goto out;
}
ret = 0;
continue;
/*
* This leader's server is up, having finished fencing
* previous leaders. We update the fence event with the
* current term to let future leaders know that previous
* servers have been fenced.
*/
if (qst.role == LEADER && qst.server_event != SCOUTFS_QUORUM_EVENT_FENCE &&
scoutfs_server_is_up(sb)) {
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_FENCE, qst.term, true);
if (ret < 0)
goto out;
qst.server_event = SCOUTFS_QUORUM_EVENT_FENCE;
}
/*
* Stop a running server if we're no longer leader in
* its term.
*/
if (!(qst.role == LEADER && qst.term == qst.server_start_term) &&
scoutfs_server_is_running(sb)) {
scoutfs_server_stop(sb);
}
/*
* A previously running server has stopped. The quorum
* protocol might have shut it down by changing roles or
* it might have stopped on its own, perhaps on errors.
* If we're still a leader then we become a follower and
* send resignations to encourage the next election.
* Always update the _STOP event to stop connections and
* fencing.
*/
if (qst.server_start_term > 0 && scoutfs_server_is_down(sb)) {
if (qst.role == LEADER) {
qst.role = FOLLOWER;
qst.vote_for = -1;
qst.vote_bits = 0;
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_server_shutdown);
send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION,
qst.server_start_term);
scoutfs_inc_counter(sb, quorum_send_resignation);
}
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP,
qst.server_start_term, true);
if (ret < 0)
goto out;
qst.server_start_term = 0;
}
/* leaders regularly send heartbeats to delay elections */
@@ -817,12 +842,19 @@ static void scoutfs_quorum_worker(struct work_struct *work)
}
}
update_show_status(qinf, &qst);
/* always try to stop a running server as we stop */
if (test_bit(QINF_FLAG_SERVER, &qinf->flags)) {
scoutfs_server_stop(sb);
scoutfs_fence_stop(sb);
send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION,
qst.term);
if (scoutfs_server_is_running(sb)) {
scoutfs_server_stop_wait(sb);
send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION, qst.term);
if (qst.server_start_term > 0) {
err = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP,
qst.server_start_term, true);
if (err < 0 && ret == 0)
ret = err;
}
}
/* record that this slot no longer has an active quorum */
@@ -834,21 +866,6 @@ out:
}
}
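
The reworked worker above drives the server from quorum state instead of callbacks: it records an ELECT event before starting the server, a FENCE event once the server is up, and a STOP event (stepping down to follower) when the server goes away. A stripped-down sketch of that event ordering, tracking the events in a struct rather than writing quorum blocks:

#include <stdio.h>

enum role { FOLLOWER, CANDIDATE, LEADER };
enum event { EV_NONE, EV_ELECT, EV_FENCE, EV_STOP };

struct qst {
	enum role role;
	unsigned long long term;
	unsigned long long server_start_term;
	enum event server_event;
};

int main(void)
{
	struct qst q = { FOLLOWER, 5, 0, EV_NONE };

	/* won an election: record ELECT before starting the server */
	q.role = LEADER;
	q.term++;
	q.server_event = EV_ELECT;
	q.server_start_term = q.term;

	/* server came up after fencing previous leaders: record FENCE once */
	if (q.role == LEADER && q.server_event != EV_FENCE)
		q.server_event = EV_FENCE;

	/* server stopped: step down and record STOP for its start term */
	if (q.server_start_term > 0) {
		if (q.role == LEADER)
			q.role = FOLLOWER;
		q.server_event = EV_STOP;
		q.server_start_term = 0;
	}

	printf("role %d event %d\n", q.role, q.server_event);
	return 0;
}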
/*
* The calling server has shutdown and is no longer using shared
* resources. Clear the bit so that we stop sending heartbeats and
* allow the next server to be elected. Update the stop event so that
* it won't be considered available by clients or fenced by the next
* leader.
*/
void scoutfs_quorum_server_shutdown(struct super_block *sb, u64 term)
{
DECLARE_QUORUM_INFO(sb, qinf);
clear_bit(QINF_FLAG_SERVER, &qinf->flags);
update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP, term, true);
}
/*
* Clients read quorum blocks looking for the leader with a server whose
* address it can try and connect to.
@@ -970,6 +987,8 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
qinf->our_quorum_slot_nr);
snprintf_ret(buf, size, &ret, "term %llu\n",
qst.term);
snprintf_ret(buf, size, &ret, "server_start_term %llu\n", qst.server_start_term);
snprintf_ret(buf, size, &ret, "server_event %d\n", qst.server_event);
snprintf_ret(buf, size, &ret, "role %d (%s)\n",
qst.role, role_str(qst.role));
snprintf_ret(buf, size, &ret, "vote_for %d\n",


@@ -2,14 +2,12 @@
#define _SCOUTFS_QUORUM_H_
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
void scoutfs_quorum_server_shutdown(struct super_block *sb, u64 term);
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
struct sockaddr_in *sin);
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term);
int scoutfs_quorum_fence_complete(struct super_block *sb, u64 term);
int scoutfs_quorum_setup(struct super_block *sb);
void scoutfs_quorum_shutdown(struct super_block *sb);


@@ -1417,42 +1417,71 @@ TRACE_EVENT(scoutfs_rename,
);
TRACE_EVENT(scoutfs_d_revalidate,
TP_PROTO(struct super_block *sb,
struct dentry *dentry, int flags, struct dentry *parent,
bool is_covered, int ret),
TP_PROTO(struct super_block *sb, struct dentry *dentry, int flags, u64 dir_ino, int ret),
TP_ARGS(sb, dentry, flags, parent, is_covered, ret),
TP_ARGS(sb, dentry, flags, dir_ino, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, dentry)
__string(name, dentry->d_name.name)
__field(__u64, ino)
__field(__u64, parent_ino)
__field(__u64, dir_ino)
__field(int, flags)
__field(int, is_root)
__field(int, is_covered)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->dentry = dentry;
__assign_str(name, dentry->d_name.name)
__entry->ino = dentry->d_inode ?
scoutfs_ino(dentry->d_inode) : 0;
__entry->parent_ino = parent->d_inode ?
scoutfs_ino(parent->d_inode) : 0;
__entry->ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
__entry->dir_ino = dir_ino;
__entry->flags = flags;
__entry->is_root = IS_ROOT(dentry);
__entry->is_covered = is_covered;
__entry->ret = ret;
),
TP_printk(SCSBF" name %s ino %llu parent_ino %llu flags 0x%x s_root %u is_covered %u ret %d",
SCSB_TRACE_ARGS, __get_str(name), __entry->ino,
__entry->parent_ino, __entry->flags,
__entry->is_root,
__entry->is_covered,
__entry->ret)
TP_printk(SCSBF" dentry %p name %s ino %llu dir_ino %llu flags 0x%x s_root %u ret %d",
SCSB_TRACE_ARGS, __entry->dentry, __get_str(name), __entry->ino, __entry->dir_ino,
__entry->flags, __entry->is_root, __entry->ret)
);
TRACE_EVENT(scoutfs_validate_dentry,
TP_PROTO(struct super_block *sb, struct dentry *dentry, u64 dir_ino, u64 dentry_ino,
u64 dent_ino, u64 refresh_gen, int ret),
TP_ARGS(sb, dentry, dir_ino, dentry_ino, dent_ino, refresh_gen, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, dentry)
__field(__u64, dir_ino)
__string(name, dentry->d_name.name)
__field(__u64, dentry_ino)
__field(__u64, dent_ino)
__field(__u64, fsdata_gen)
__field(__u64, refresh_gen)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->dentry = dentry;
__entry->dir_ino = dir_ino;
__assign_str(name, dentry->d_name.name)
__entry->dentry_ino = dentry_ino;
__entry->dent_ino = dent_ino;
__entry->fsdata_gen = (unsigned long long)dentry->d_fsdata;
__entry->refresh_gen = refresh_gen;
__entry->ret = ret;
),
TP_printk(SCSBF" dentry %p dir %llu name %s dentry_ino %llu dent_ino %llu fsdata_gen %llu refresh_gen %llu ret %d",
SCSB_TRACE_ARGS, __entry->dentry, __entry->dir_ino, __get_str(name),
__entry->dentry_ino, __entry->dent_ino, __entry->fsdata_gen,
__entry->refresh_gen, __entry->ret)
);
DECLARE_EVENT_CLASS(scoutfs_super_lifecycle_class,
@@ -1843,6 +1872,53 @@ DEFINE_EVENT(scoutfs_server_client_count_class, scoutfs_server_client_down,
TP_ARGS(sb, rid, nr_clients)
);
DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(int, holding)
__field(int, applying)
__field(int, nr_holders)
__field(__u32, avail_before)
__field(__u32, freed_before)
__field(int, exceeded)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->holding = !!holding;
__entry->applying = !!applying;
__entry->nr_holders = nr_holders;
__entry->avail_before = avail_before;
__entry->freed_before = freed_before;
__entry->exceeded = !!exceeded;
),
TP_printk(SCSBF" holding %u applying %u nr %u avail_before %u freed_before %u exceeded %u",
SCSB_TRACE_ARGS, __entry->holding, __entry->applying, __entry->nr_holders,
__entry->avail_before, __entry->freed_before, __entry->exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_hold,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_apply,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_start,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_end,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
#define slt_symbolic(mode) \
__print_symbolic(mode, \
{ SLT_CLIENT, "client" }, \

File diff suppressed because it is too large.


@@ -64,8 +64,6 @@ int scoutfs_server_lock_response(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_net_lock *nl);
int scoutfs_server_lock_recover_request(struct super_block *sb, u64 rid,
struct scoutfs_key *key);
void scoutfs_server_hold_commit(struct super_block *sb);
int scoutfs_server_apply_commit(struct super_block *sb, int err);
void scoutfs_server_recov_finish(struct super_block *sb, u64 rid, int which);
int scoutfs_server_send_omap_request(struct super_block *sb, u64 rid,
@@ -77,9 +75,12 @@ u64 scoutfs_server_seq(struct super_block *sb);
u64 scoutfs_server_next_seq(struct super_block *sb);
void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq);
int scoutfs_server_start(struct super_block *sb, u64 term);
void scoutfs_server_abort(struct super_block *sb);
void scoutfs_server_start(struct super_block *sb, u64 term);
void scoutfs_server_stop(struct super_block *sb);
void scoutfs_server_stop_wait(struct super_block *sb);
bool scoutfs_server_is_running(struct super_block *sb);
bool scoutfs_server_is_up(struct super_block *sb);
bool scoutfs_server_is_down(struct super_block *sb);
int scoutfs_server_setup(struct super_block *sb);
void scoutfs_server_destroy(struct super_block *sb);


@@ -47,6 +47,7 @@
#include "omap.h"
#include "volopt.h"
#include "fence.h"
#include "xattr.h"
#include "scoutfs_trace.h"
static struct dentry *scoutfs_debugfs_root;
@@ -482,8 +483,10 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sb->s_magic = SCOUTFS_SUPER_MAGIC;
sb->s_maxbytes = MAX_LFS_FILESIZE;
sb->s_op = &scoutfs_super_ops;
sb->s_d_op = &scoutfs_dentry_ops;
sb->s_export_op = &scoutfs_export_ops;
sb->s_flags |= MS_I_VERSION;
sb->s_xattr = scoutfs_xattr_handlers;
sb->s_flags |= MS_I_VERSION | MS_POSIXACL;
/* btree blocks use long lived bh->b_data refs */
mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
@@ -496,7 +499,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
ret = assign_random_id(sbi);
if (ret < 0)
return ret;
goto out;
spin_lock_init(&sbi->next_ino_lock);
spin_lock_init(&sbi->data_wait_root.lock);
@@ -505,7 +508,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
/* parse options early for use during setup */
ret = scoutfs_options_early_setup(sb, data);
if (ret < 0)
return ret;
goto out;
scoutfs_options_read(sb, &opts);
ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SM_SIZE);
@@ -628,7 +631,6 @@ MODULE_ALIAS_FS("scoutfs");
static void teardown_module(void)
{
debugfs_remove(scoutfs_debugfs_root);
scoutfs_dir_exit();
scoutfs_inode_exit();
scoutfs_sysfs_exit();
}
@@ -666,7 +668,6 @@ static int __init scoutfs_module_init(void)
goto out;
}
ret = scoutfs_inode_init() ?:
scoutfs_dir_init() ?:
register_filesystem(&scoutfs_fs_type);
out:
if (ret)


@@ -37,6 +37,15 @@ struct attr_funcs {
#define ATTR_FUNCS_RO(_name) \
static struct attr_funcs _name##_attr_funcs = __ATTR_RO(_name)
static ssize_t data_device_maj_min_show(struct kobject *kobj, struct attribute *attr, char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
return snprintf(buf, PAGE_SIZE, "%u:%u\n",
MAJOR(sb->s_bdev->bd_dev), MINOR(sb->s_bdev->bd_dev));
}
ATTR_FUNCS_RO(data_device_maj_min);
static ssize_t format_version_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
@@ -101,6 +110,7 @@ static ssize_t attr_funcs_show(struct kobject *kobj, struct attribute *attr,
static struct attribute *sb_id_attrs[] = {
&data_device_maj_min_attr_funcs.attr,
&format_version_attr_funcs.attr,
&fsid_attr_funcs.attr,
&rid_attr_funcs.attr,
@@ -258,7 +268,7 @@ int __init scoutfs_sysfs_init(void)
return 0;
}
void __exit scoutfs_sysfs_exit(void)
void scoutfs_sysfs_exit(void)
{
if (scoutfs_kset)
kset_unregister(scoutfs_kset);


@@ -53,6 +53,6 @@ int scoutfs_setup_sysfs(struct super_block *sb);
void scoutfs_destroy_sysfs(struct super_block *sb);
int __init scoutfs_sysfs_init(void);
void __exit scoutfs_sysfs_exit(void);
void scoutfs_sysfs_exit(void);
#endif


@@ -15,6 +15,7 @@
#include <linux/dcache.h>
#include <linux/xattr.h>
#include <linux/crc32c.h>
#include <linux/posix_acl.h>
#include "format.h"
#include "inode.h"
@@ -26,6 +27,7 @@
#include "xattr.h"
#include "lock.h"
#include "hash.h"
#include "acl.h"
#include "scoutfs_trace.h"
/*
@@ -57,12 +59,6 @@ static u32 xattr_names_equal(const char *a_name, unsigned int a_len,
return a_len == b_len && memcmp(a_name, b_name, a_len) == 0;
}
static unsigned int xattr_full_bytes(struct scoutfs_xattr *xat)
{
return offsetof(struct scoutfs_xattr,
name[xat->name_len + le16_to_cpu(xat->val_len)]);
}
static unsigned int xattr_nr_parts(struct scoutfs_xattr *xat)
{
return SCOUTFS_XATTR_NR_PARTS(xat->name_len,
@@ -85,16 +81,6 @@ static void init_xattr_key(struct scoutfs_key *key, u64 ino, u32 name_hash,
#define SCOUTFS_XATTR_PREFIX "scoutfs."
#define SCOUTFS_XATTR_PREFIX_LEN (sizeof(SCOUTFS_XATTR_PREFIX) - 1)
static int unknown_prefix(const char *name)
{
return strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) &&
strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) &&
strncmp(name, XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN)&&
strncmp(name, SCOUTFS_XATTR_PREFIX, SCOUTFS_XATTR_PREFIX_LEN);
}
#define HIDE_TAG "hide."
#define SRCH_TAG "srch."
#define TOTL_TAG "totl."
@@ -137,12 +123,29 @@ int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
}
/*
* Find the next xattr and copy the key, xattr header, and as much of
* the name and value into the callers buffer as we can. Returns the
* number of bytes copied which include the header, name, and value and
* can be limited by the xattr length or the callers buffer. The caller
* is responsible for comparing their lengths, the header, and the
* returned length before safely using the xattr.
* xattrs are stored in multiple items. The first item is a
* concatenation of an initial header, the name, and then as much of the
* value as fits in the remainder of the first item. This returns the
* size of the first item that'd store an xattr with the given name
* length and value payload size.
*/
static int first_item_bytes(int name_len, size_t size)
{
if (WARN_ON_ONCE(name_len <= 0) ||
WARN_ON_ONCE(name_len > SCOUTFS_XATTR_MAX_NAME_LEN))
return 0;
return min_t(int, sizeof(struct scoutfs_xattr) + name_len + size,
SCOUTFS_XATTR_MAX_PART_SIZE);
}
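/*
* Illustrative example: with a header of h bytes, a name of n bytes,
* and a value of v bytes, the first item holds min(h + n + v,
* SCOUTFS_XATTR_MAX_PART_SIZE) bytes; value bytes that don't fit in
* the first item spill into later parts keyed by an incrementing
* skx_part.
*/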
/*
* Find the next xattr, set the caller's key, and copy as much of the
* first item into the caller's buffer as we can. Returns the number of
* bytes copied which can include the header, name, and start of the
* value from the first item. The caller is responsible for comparing
* their lengths, the header, and the returned length before safely
* using the buffer.
*
* If a name is provided then we'll iterate over items with a matching
* name_hash until we find a matching name. If we don't find a matching
@@ -154,20 +157,17 @@ int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
* Returns -ENOENT if it didn't find a next item.
*/
static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
struct scoutfs_xattr *xat, unsigned int bytes,
struct scoutfs_xattr *xat, unsigned int xat_bytes,
const char *name, unsigned int name_len,
u64 name_hash, u64 id, struct scoutfs_lock *lock)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_key last;
u8 last_part;
int total;
u8 part;
int ret;
/* need to be able to see the name we're looking for */
if (WARN_ON_ONCE(name_len > 0 && bytes < offsetof(struct scoutfs_xattr,
name[name_len])))
if (WARN_ON_ONCE(name_len > 0 &&
xat_bytes < offsetof(struct scoutfs_xattr, name[name_len])))
return -EINVAL;
if (name_len)
@@ -176,26 +176,15 @@ static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
init_xattr_key(key, scoutfs_ino(inode), name_hash, id);
init_xattr_key(&last, scoutfs_ino(inode), U32_MAX, U64_MAX);
last_part = 0;
part = 0;
total = 0;
for (;;) {
key->skx_part = part;
ret = scoutfs_item_next(sb, key, &last,
(void *)xat + total, bytes - total,
lock);
if (ret < 0) {
/* XXX corruption, ran out of parts */
if (ret == -ENOENT && part > 0)
ret = -EIO;
ret = scoutfs_item_next(sb, key, &last, xat, xat_bytes, lock);
if (ret < 0)
break;
}
trace_scoutfs_xattr_get_next_key(sb, key);
/* XXX corruption */
if (key->skx_part != part) {
if (key->skx_part != 0) {
ret = -EIO;
break;
}
@@ -205,8 +194,7 @@ static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
* the first part and if the next xattr name fits in our
* buffer then the item must have included it.
*/
if (part == 0 &&
(ret < sizeof(struct scoutfs_xattr) ||
if ((ret < sizeof(struct scoutfs_xattr) ||
(xat->name_len <= name_len &&
ret < offsetof(struct scoutfs_xattr,
name[xat->name_len])) ||
@@ -216,7 +204,7 @@ static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
break;
}
if (part == 0 && name_len) {
if (name_len > 0) {
/* ran out of names that could match */
if (le64_to_cpu(key->skx_name_hash) != name_hash) {
ret = -ENOENT;
@@ -224,64 +212,126 @@ static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
}
/* keep looking for our name */
if (!xattr_names_equal(name, name_len,
xat->name, xat->name_len)) {
part = 0;
if (!xattr_names_equal(name, name_len, xat->name, xat->name_len)) {
le64_add_cpu(&key->skx_id, 1);
continue;
}
/* use the matching name we found */
last_part = xattr_nr_parts(xat) - 1;
}
total += ret;
if (total == bytes || part == last_part) {
/* copied as much as we could */
ret = total;
break;
}
part++;
/* found next name */
break;
}
return ret;
}
/*
* The caller has already read and verified the xattr's first item.
* Copy the value from the tail of the first item and from any future
* items into the destination buffer.
*/
static int copy_xattr_value(struct super_block *sb, struct scoutfs_key *xat_key,
struct scoutfs_xattr *xat, int xat_bytes,
char *buffer, size_t size,
struct scoutfs_lock *lock)
{
struct scoutfs_key key;
size_t copied = 0;
int val_tail;
int bytes;
int ret;
int i;
/* must have first item up to value */
if (WARN_ON_ONCE(xat_bytes < sizeof(struct scoutfs_xattr)) ||
WARN_ON_ONCE(xat_bytes < offsetof(struct scoutfs_xattr, name[xat->name_len])))
return -EINVAL;
/* only ever copy up to the full value */
size = min_t(size_t, size, le16_to_cpu(xat->val_len));
/* must have full first item if caller needs value from second item */
val_tail = SCOUTFS_XATTR_MAX_PART_SIZE -
offsetof(struct scoutfs_xattr, name[xat->name_len]);
if (WARN_ON_ONCE(size > val_tail && xat_bytes != SCOUTFS_XATTR_MAX_PART_SIZE))
return -EINVAL;
/* copy from tail of first item */
bytes = min_t(unsigned int, size, val_tail);
if (bytes > 0) {
memcpy(buffer, &xat->name[xat->name_len], bytes);
copied += bytes;
}
key = *xat_key;
for (i = 1; copied < size; i++) {
key.skx_part = i;
bytes = min_t(unsigned int, size - copied, SCOUTFS_XATTR_MAX_PART_SIZE);
ret = scoutfs_item_lookup(sb, &key, buffer + copied, bytes, lock);
if (ret >= 0 && ret != bytes)
ret = -EIO;
if (ret < 0)
return ret;
copied += ret;
}
return copied;
}
/*
* The caller builds items that either come from the allocated first
* compound item or, for later parts, from offsets into the value
* buffer. Give them a pointer to the start of the requested part's
* item and its length.
*/
static void xattr_item_part_buffer(void **buf, int *len, int part,
struct scoutfs_xattr *xat, unsigned int xat_bytes,
const char *value, size_t size)
{
int off;
if (part == 0) {
*buf = xat;
*len = xat_bytes;
} else {
off = (part * SCOUTFS_XATTR_MAX_PART_SIZE) -
offsetof(struct scoutfs_xattr, name[xat->name_len]);
BUG_ON(off >= size); /* calls limited by number of parts */
*buf = (void *)value + off;
*len = min_t(size_t, size - off, SCOUTFS_XATTR_MAX_PART_SIZE);
}
}
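/*
* Illustrative example: for part 2 the offset works out to
* (2 * SCOUTFS_XATTR_MAX_PART_SIZE) - (header + name bytes), so each
* later part is a SCOUTFS_XATTR_MAX_PART_SIZE sized window into the
* caller's value buffer, shifted back by the room the header and name
* used in part 0.
*/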
/*
* Create all the items associated with the given xattr. If this
* returns an error it will have already cleaned up any items it created
* before seeing the error.
*/
static int create_xattr_items(struct inode *inode, u64 id,
struct scoutfs_xattr *xat, unsigned int bytes,
static int create_xattr_items(struct inode *inode, u64 id, struct scoutfs_xattr *xat,
int xat_bytes, const char *value, size_t size, u8 new_parts,
struct scoutfs_lock *lock)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_key key;
unsigned int part_bytes;
unsigned int total;
int ret;
int ret = 0;
void *buf;
int len;
int i;
init_xattr_key(&key, scoutfs_ino(inode),
xattr_name_hash(xat->name, xat->name_len), id);
total = 0;
ret = 0;
while (total < bytes) {
part_bytes = min_t(unsigned int, bytes - total,
SCOUTFS_XATTR_MAX_PART_SIZE);
for (i = 0; i < new_parts; i++) {
key.skx_part = i;
xattr_item_part_buffer(&buf, &len, i, xat, xat_bytes, value, size);
ret = scoutfs_item_create(sb, &key,
(void *)xat + total, part_bytes,
lock);
if (ret) {
ret = scoutfs_item_create(sb, &key, buf, len, lock);
if (ret < 0) {
while (key.skx_part-- > 0)
scoutfs_item_delete(sb, &key, lock);
break;
}
total += part_bytes;
key.skx_part++;
}
return ret;
@@ -329,20 +379,20 @@ out:
* deleted items.
*/
static int change_xattr_items(struct inode *inode, u64 id,
struct scoutfs_xattr *new_xat,
unsigned int new_bytes, u8 new_parts,
u8 old_parts, struct scoutfs_lock *lock)
struct scoutfs_xattr *xat, int xat_bytes,
const char *value, size_t size,
u8 new_parts, u8 old_parts, struct scoutfs_lock *lock)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_key key;
int last_created = -1;
int bytes;
int off;
void *buf;
int len;
int i;
int ret;
init_xattr_key(&key, scoutfs_ino(inode),
xattr_name_hash(new_xat->name, new_xat->name_len), id);
xattr_name_hash(xat->name, xat->name_len), id);
/* dirty existing old items */
for (i = 0; i < old_parts; i++) {
@@ -354,13 +404,10 @@ static int change_xattr_items(struct inode *inode, u64 id,
/* create any new items past the old */
for (i = old_parts; i < new_parts; i++) {
off = i * SCOUTFS_XATTR_MAX_PART_SIZE;
bytes = min_t(unsigned int, new_bytes - off,
SCOUTFS_XATTR_MAX_PART_SIZE);
key.skx_part = i;
ret = scoutfs_item_create(sb, &key, (void *)new_xat + off,
bytes, lock);
xattr_item_part_buffer(&buf, &len, i, xat, xat_bytes, value, size);
ret = scoutfs_item_create(sb, &key, buf, len, lock);
if (ret)
goto out;
@@ -369,13 +416,10 @@ static int change_xattr_items(struct inode *inode, u64 id,
/* update dirtied overlapping existing items, last partial first */
for (i = min(old_parts, new_parts) - 1; i >= 0; i--) {
off = i * SCOUTFS_XATTR_MAX_PART_SIZE;
bytes = min_t(unsigned int, new_bytes - off,
SCOUTFS_XATTR_MAX_PART_SIZE);
key.skx_part = i;
ret = scoutfs_item_update(sb, &key, (void *)new_xat + off,
bytes, lock);
xattr_item_part_buffer(&buf, &len, i, xat, xat_bytes, value, size);
ret = scoutfs_item_update(sb, &key, buf, len, lock);
/* only last partial can fail, then we unwind created */
if (ret < 0)
goto out;
@@ -403,72 +447,69 @@ out:
* Copy the value for the given xattr name into the caller's buffer, if it
* fits. Return the bytes copied or -ERANGE if it doesn't fit.
*/
ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
size_t size)
int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer, size_t size,
struct scoutfs_lock *lck)
{
struct inode *inode = dentry->d_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_key key;
unsigned int bytes;
unsigned int xat_bytes;
size_t name_len;
int ret;
if (unknown_prefix(name))
return -EOPNOTSUPP;
name_len = strlen(name);
if (name_len > SCOUTFS_XATTR_MAX_NAME_LEN)
return -ENODATA;
/* only need enough for caller's name and value sizes */
bytes = sizeof(struct scoutfs_xattr) + name_len + size;
xat = __vmalloc(bytes, GFP_NOFS, PAGE_KERNEL);
xat_bytes = first_item_bytes(name_len, size);
xat = kmalloc(xat_bytes, GFP_NOFS);
if (!xat)
return -ENOMEM;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lck);
if (ret)
goto out;
down_read(&si->xattr_rwsem);
ret = get_next_xattr(inode, &key, xat, bytes,
name, name_len, 0, 0, lck);
up_read(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_READ);
ret = get_next_xattr(inode, &key, xat, xat_bytes, name, name_len, 0, 0, lck);
if (ret < 0) {
if (ret == -ENOENT)
ret = -ENODATA;
goto out;
goto unlock;
}
/* the caller just wants to know the size */
if (size == 0) {
ret = le16_to_cpu(xat->val_len);
goto out;
goto unlock;
}
/* the caller's buffer wasn't big enough */
if (size < le16_to_cpu(xat->val_len)) {
ret = -ERANGE;
goto out;
goto unlock;
}
/* XXX corruption, the items didn't match the header */
if (ret < xattr_full_bytes(xat)) {
ret = -EIO;
goto out;
ret = copy_xattr_value(sb, &key, xat, xat_bytes, buffer, size, lck);
unlock:
up_read(&si->xattr_rwsem);
kfree(xat);
return ret;
}
static int scoutfs_xattr_get(struct dentry *dentry, const char *name, void *buffer, size_t size)
{
struct inode *inode = dentry->d_inode;
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
int ret;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
if (ret == 0) {
ret = scoutfs_xattr_get_locked(inode, name, buffer, size, lock);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
}
ret = le16_to_cpu(xat->val_len);
memcpy(buffer, &xat->name[xat->name_len], ret);
out:
vfree(xat);
return ret;
}
@@ -576,29 +617,32 @@ int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
* cause creation to fail if the xattr already exists (_CREATE) or
* doesn't already exist (_REPLACE). xattrs can have a zero length
* value.
*
* The caller has acquired cluster locks, holds a transaction, and has
* dirtied the inode item so that they can update it after we modify it.
* The caller has to know the tags in order to acquire cluster locks
* before holding the transaction, so they pass in the parsed tags, or
* all zeros for non-scoutfs. prefixes.
*/
static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags)
int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_len,
const void *value, size_t size, int flags,
const struct scoutfs_xattr_prefix_tags *tgs,
struct scoutfs_lock *lck, struct scoutfs_lock *totl_lock,
struct list_head *ind_locks)
{
struct inode *inode = dentry->d_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_xattr_totl_val tval = {0,};
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_lock *totl_lock = NULL;
size_t name_len = strlen(name);
struct scoutfs_key totl_key;
struct scoutfs_key key;
bool undo_srch = false;
bool undo_totl = false;
LIST_HEAD(ind_locks);
u8 found_parts;
unsigned int bytes;
unsigned int xat_bytes_totl;
unsigned int xat_bytes;
unsigned int val_len;
u64 ind_seq;
u64 total;
u64 hash = 0;
u64 id = 0;
@@ -607,6 +651,9 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
trace_scoutfs_xattr_set(sb, name_len, value, size, flags);
if (WARN_ON_ONCE(tgs->totl && !totl_lock))
return -EINVAL;
/* mirror the syscall's errors for large names and values */
if (name_len > SCOUTFS_XATTR_MAX_NAME_LEN)
return -ERANGE;
@@ -617,73 +664,61 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
(flags & ~(XATTR_CREATE | XATTR_REPLACE)))
return -EINVAL;
if (unknown_prefix(name))
return -EOPNOTSUPP;
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
if ((tgs.hide | tgs.srch | tgs.totl) && !capable(CAP_SYS_ADMIN))
if ((tgs->hide | tgs->srch | tgs->totl) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (tgs.totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
if (tgs->totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
return ret;
bytes = sizeof(struct scoutfs_xattr) + name_len + size;
/* alloc enough to read old totl value */
xat = __vmalloc(bytes + SCOUTFS_XATTR_MAX_TOTL_U64, GFP_NOFS, PAGE_KERNEL);
if (!xat) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lck);
if (ret)
goto out;
/* allocate enough to always read an existing xattr's totl */
xat_bytes_totl = first_item_bytes(name_len,
max_t(size_t, size, SCOUTFS_XATTR_MAX_TOTL_U64));
/* but store partial first item that only includes the new xattr's value */
xat_bytes = first_item_bytes(name_len, size);
xat = kmalloc(xat_bytes_totl, GFP_NOFS);
if (!xat)
return -ENOMEM;
down_write(&si->xattr_rwsem);
/* find an existing xattr to delete, including possible totl value */
ret = get_next_xattr(inode, &key, xat,
sizeof(struct scoutfs_xattr) + name_len + SCOUTFS_XATTR_MAX_TOTL_U64,
name, name_len, 0, 0, lck);
ret = get_next_xattr(inode, &key, xat, xat_bytes_totl, name, name_len, 0, 0, lck);
if (ret < 0 && ret != -ENOENT)
goto unlock;
goto out;
/* check existence constraint flags */
if (ret == -ENOENT && (flags & XATTR_REPLACE)) {
ret = -ENODATA;
goto unlock;
goto out;
} else if (ret >= 0 && (flags & XATTR_CREATE)) {
ret = -EEXIST;
goto unlock;
goto out;
}
/* not an error to delete something that doesn't exist */
if (ret == -ENOENT && !value) {
ret = 0;
goto unlock;
goto out;
}
/* s64 count delta if we create or delete */
if (tgs.totl)
if (tgs->totl)
tval.count = cpu_to_le64((u64)!!(value) - (u64)!!(ret != -ENOENT));
/* found fields in key will also be used */
found_parts = ret >= 0 ? xattr_nr_parts(xat) : 0;
if (found_parts && tgs.totl) {
if (found_parts && tgs->totl) {
/* parse old totl value before we clobber xat buf */
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
ret = parse_totl_u64(&xat->name[xat->name_len], val_len, &total);
if (ret < 0)
goto unlock;
goto out;
le64_add_cpu(&tval.total, -total);
}
/* prepare our xattr */
/* prepare the xattr header, name, and start of value in first item */
if (value) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
@@ -693,17 +728,94 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
xat->val_len = cpu_to_le16(size);
memset(xat->__pad, 0, sizeof(xat->__pad));
memcpy(xat->name, name, name_len);
memcpy(&xat->name[xat->name_len], value, size);
memcpy(&xat->name[name_len], value,
min(size, SCOUTFS_XATTR_MAX_PART_SIZE -
offsetof(struct scoutfs_xattr, name[name_len])));
if (tgs.totl) {
if (tgs->totl) {
ret = parse_totl_u64(value, size, &total);
if (ret < 0)
goto unlock;
goto out;
}
le64_add_cpu(&tval.total, total);
}
if (tgs->srch && !(found_parts && value)) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
hash = scoutfs_hash64(name, name_len);
ret = scoutfs_forest_srch_add(sb, hash, ino, id);
if (ret < 0)
goto out;
undo_srch = true;
}
if (tgs->totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
goto out;
undo_totl = true;
}
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), found_parts, lck);
else if (found_parts)
ret = delete_xattr_items(inode, le64_to_cpu(key.skx_name_hash),
le64_to_cpu(key.skx_id), found_parts,
lck);
else
ret = create_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), lck);
if (ret < 0)
goto out;
/* XXX do these want i_mutex or anything? */
inode_inc_iversion(inode);
inode->i_ctime = CURRENT_TIME;
ret = 0;
out:
if (ret < 0 && undo_srch) {
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
if (ret < 0 && undo_totl) {
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
BUG_ON(err);
}
up_write(&si->xattr_rwsem);
kfree(xat);
return ret;
}
static int scoutfs_xattr_set(struct dentry *dentry, const char *name, const void *value,
size_t size, int flags)
{
struct inode *inode = dentry->d_inode;
struct super_block *sb = inode->i_sb;
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_lock *totl_lock = NULL;
struct scoutfs_lock *lck = NULL;
size_t name_len = strlen(name);
LIST_HEAD(ind_locks);
u64 ind_seq;
int ret;
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lck);
if (ret)
goto unlock;
if (tgs.totl) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret)
@@ -723,79 +835,98 @@ retry:
if (ret < 0)
goto release;
if (tgs.srch && !(found_parts && value)) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
hash = scoutfs_hash64(name, name_len);
ret = scoutfs_forest_srch_add(sb, hash, ino, id);
if (ret < 0)
goto release;
undo_srch = true;
}
if (tgs.totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
goto release;
undo_totl = true;
}
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, bytes,
xattr_nr_parts(xat), found_parts, lck);
else if (found_parts)
ret = delete_xattr_items(inode, le64_to_cpu(key.skx_name_hash),
le64_to_cpu(key.skx_id), found_parts,
lck);
else
ret = create_xattr_items(inode, id, xat, bytes, lck);
if (ret < 0)
goto release;
/* XXX do these want i_mutex or anything? */
inode_inc_iversion(inode);
inode->i_ctime = CURRENT_TIME;
scoutfs_update_inode_item(inode, lck, &ind_locks);
ret = 0;
ret = scoutfs_xattr_set_locked(dentry->d_inode, name, name_len, value, size, flags, &tgs,
lck, totl_lock, &ind_locks);
if (ret == 0)
scoutfs_update_inode_item(inode, lck, &ind_locks);
release:
if (ret < 0 && undo_srch) {
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
if (ret < 0 && undo_totl) {
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
BUG_ON(err);
}
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
unlock:
up_write(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
out:
vfree(xat);
return ret;
}
int scoutfs_setxattr(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags)
/*
* Future kernels have this amazing hack to rewind the name to get the
* skipped prefix. We're back in the stone ages without the handler
* arg, so we Just Know that this is possible. This will become a
* compat hook to either call the kernel's xattr_full_name(handler), or
* our hack to use the flags as the prefix length.
*/
static const char *full_name_hack(void *handler, const char *name, int len)
{
if (size == 0)
value = ""; /* set empty value */
return name - len;
}
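/*
* Illustrative example: the "user." handler below sets .flags to
* XATTR_USER_PREFIX_LEN, so when we're handed the suffix "foo" with
* handler_flags == 5 we rewind the pointer to recover the full
* "user.foo" name before calling the common get/set paths.
*/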
static int scoutfs_xattr_get_handler(struct dentry *dentry, const char *name,
void *value, size_t size, int handler_flags)
{
name = full_name_hack(NULL, name, handler_flags);
return scoutfs_xattr_get(dentry, name, value, size);
}
static int scoutfs_xattr_set_handler(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags, int handler_flags)
{
name = full_name_hack(NULL, name, handler_flags);
return scoutfs_xattr_set(dentry, name, value, size, flags);
}
int scoutfs_removexattr(struct dentry *dentry, const char *name)
{
return scoutfs_xattr_set(dentry, name, NULL, 0, XATTR_REPLACE);
}
static const struct xattr_handler scoutfs_xattr_user_handler = {
.prefix = XATTR_USER_PREFIX,
.flags = XATTR_USER_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_scoutfs_handler = {
.prefix = SCOUTFS_XATTR_PREFIX,
.flags = SCOUTFS_XATTR_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_trusted_handler = {
.prefix = XATTR_TRUSTED_PREFIX,
.flags = XATTR_TRUSTED_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_security_handler = {
.prefix = XATTR_SECURITY_PREFIX,
.flags = XATTR_SECURITY_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_acl_access_handler = {
.prefix = XATTR_NAME_POSIX_ACL_ACCESS,
.flags = ACL_TYPE_ACCESS,
.get = scoutfs_acl_get_xattr,
.set = scoutfs_acl_set_xattr,
};
static const struct xattr_handler scoutfs_xattr_acl_default_handler = {
.prefix = XATTR_NAME_POSIX_ACL_DEFAULT,
.flags = ACL_TYPE_DEFAULT,
.get = scoutfs_acl_get_xattr,
.set = scoutfs_acl_set_xattr,
};
const struct xattr_handler *scoutfs_xattr_handlers[] = {
&scoutfs_xattr_user_handler,
&scoutfs_xattr_scoutfs_handler,
&scoutfs_xattr_trusted_handler,
&scoutfs_xattr_security_handler,
&scoutfs_xattr_acl_access_handler,
&scoutfs_xattr_acl_default_handler,
NULL
};
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
size_t size, __u32 *hash_pos, __u64 *id_pos,
@@ -807,7 +938,7 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_key key;
unsigned int bytes;
unsigned int xat_bytes;
ssize_t total = 0;
u32 name_hash = 0;
bool is_hidden;
@@ -820,8 +951,8 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
id = *id_pos;
/* need a buffer large enough for all possible names */
bytes = sizeof(struct scoutfs_xattr) + SCOUTFS_XATTR_MAX_NAME_LEN;
xat = kmalloc(bytes, GFP_NOFS);
xat_bytes = first_item_bytes(SCOUTFS_XATTR_MAX_NAME_LEN, 0);
xat = kmalloc(xat_bytes, GFP_NOFS);
if (!xat) {
ret = -ENOMEM;
goto out;
@@ -834,8 +965,7 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
down_read(&si->xattr_rwsem);
for (;;) {
ret = get_next_xattr(inode, &key, xat, bytes,
NULL, 0, name_hash, id, lck);
ret = get_next_xattr(inode, &key, xat, xat_bytes, NULL, 0, name_hash, id, lck);
if (ret < 0) {
if (ret == -ENOENT)
ret = total;


@@ -1,25 +1,29 @@
#ifndef _SCOUTFS_XATTR_H_
#define _SCOUTFS_XATTR_H_
ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
size_t size);
int scoutfs_setxattr(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags);
int scoutfs_removexattr(struct dentry *dentry, const char *name);
ssize_t scoutfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
size_t size, __u32 *hash_pos, __u64 *id_pos,
bool e_range, bool show_hidden);
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock);
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
srch:1,
totl:1;
};
extern const struct xattr_handler *scoutfs_xattr_handlers[];
int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer, size_t size,
struct scoutfs_lock *lck);
int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_len,
const void *value, size_t size, int flags,
const struct scoutfs_xattr_prefix_tags *tgs,
struct scoutfs_lock *lck, struct scoutfs_lock *totl_lock,
struct list_head *ind_locks);
ssize_t scoutfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
size_t size, __u32 *hash_pos, __u64 *id_pos,
bool e_range, bool show_hidden);
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock);
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs);


@@ -10,7 +10,8 @@ BIN := src/createmany \
src/bulk_create_paths \
src/stage_tmpfile \
src/find_xattrs \
src/create_xattr_loop
src/create_xattr_loop \
src/fragmented_data_extents
DEPS := $(wildcard src/*.d)


@@ -1,5 +1,18 @@
#!/usr/bin/bash
#
# This fencing script is used for testing clusters of multiple mounts on
# a single host. It finds mounts to fence by looking for their rids and
# only knows how to "fence" by using forced unmount.
#
echo "$0 running rid '$SCOUTFS_FENCED_REQ_RID' ip '$SCOUTFS_FENCED_REQ_IP' args '$@'"
log() {
echo "$@" > /dev/stderr
}
echo_fail() {
echo "$@" > /dev/stderr
exit 1
@@ -7,29 +20,24 @@ echo_fail() {
rid="$SCOUTFS_FENCED_REQ_RID"
#
# Look for a local mount with the rid to fence. Typically we'll at
# least find the mount with the server that requested the fence that
# we're processing. But it's possible that mounts are unmounted
# before, or while, we're running.
#
mnts=$(findmnt -l -n -t scoutfs -o TARGET) || \
echo_fail "findmnt -t scoutfs failed" > /dev/stderr
for fs in /sys/fs/scoutfs/*; do
[ ! -d "$fs" ] && continue
for mnt in $mnts; do
mnt_rid=$(scoutfs statfs -p "$mnt" -s rid) || \
echo_fail "scoutfs statfs $mnt failed"
if [ "$mnt_rid" == "$rid" ]; then
umount -f "$mnt" || \
echo_fail "umout -f $mnt"
exit 0
fs_rid="$(cat $fs/rid)" || \
echo_fail "failed to get rid in $fs"
if [ "$fs_rid" != "$rid" ]; then
continue
fi
nr="$(cat $fs/data_device_maj_min)" || \
echo_fail "failed to get data device major:minor in $fs"
mnts=$(findmnt -l -n -t scoutfs -o TARGET -S $nr) || \
echo_fail "findmnt -t scoutfs -S $nr failed"
for mnt in $mnts; do
umount -f "$mnt" || \
echo_fail "umout -f $mnt failed"
done
done
#
# If the mount doesn't exist on this host then it can't access the
# devices by definition and can be considered fenced.
#
exit 0


@@ -75,6 +75,20 @@ t_fs_nrs()
seq 0 $((T_NR_MOUNTS - 1))
}
#
# outputs "1" if the fs number has "1" in its quorum/is_leader file.
# All other cases output 0, including the fs nr being a client which
# won't have a quorum/ dir.
#
t_fs_is_leader()
{
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader 2>/dev/null)" == "1" ]; then
echo "1"
else
echo "0"
fi
}
#
# Output the mount nr of the current server. This takes no steps to
# ensure that the server doesn't shut down and have some other mount
@@ -83,7 +97,7 @@ t_fs_nrs()
t_server_nr()
{
for i in $(t_fs_nrs); do
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader)" == "1" ]; then
if [ "$(t_fs_is_leader $i)" == "1" ]; then
echo $i
return
fi
@@ -101,7 +115,7 @@ t_server_nr()
t_first_client_nr()
{
for i in $(t_fs_nrs); do
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader)" == "0" ]; then
if [ "$(t_fs_is_leader $i)" == "0" ]; then
echo $i
return
fi
@@ -391,7 +405,7 @@ t_save_all_sysfs_mount_options() {
for i in $(t_fs_nrs); do
opt="$(t_sysfs_path $i)/mount_options/$name"
ind="$name_$i"
ind="${name}_${i}"
_saved_opts[$ind]="$(cat $opt)"
done
@@ -403,7 +417,7 @@ t_restore_all_sysfs_mount_options() {
local i
for i in $(t_fs_nrs); do
ind="$name_$i"
ind="${name}_${i}"
t_set_sysfs_mount_option $i $name "${_saved_opts[$ind]}"
done


@@ -0,0 +1,26 @@
== initial writes smaller than prealloc grow to prealloc size
/mnt/test/test/data-prealloc/file-1: 7 extents found
/mnt/test/test/data-prealloc/file-2: 7 extents found
== larger files get full prealloc extents
/mnt/test/test/data-prealloc/file-1: 9 extents found
/mnt/test/test/data-prealloc/file-2: 9 extents found
== non-streaming writes with contig have per-block extents
/mnt/test/test/data-prealloc/file-1: 32 extents found
/mnt/test/test/data-prealloc/file-2: 32 extents found
== any writes to region prealloc get full extents
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
== streaming offline writes get full extents either way
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
== goofy preallocation amounts work
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 3 extents found
/mnt/test/test/data-prealloc/file-2: 3 extents found


@@ -0,0 +1,3 @@
== creating fragmented extents
== unlink file with moved extents to free extents per block
== cleanup


@@ -0,0 +1,3 @@
== starting background invalidating read/write load
== 60s of lock recovery during invalidating load
== stopping background load



@@ -40,6 +40,7 @@ generic/092
generic/098
generic/101
generic/104
generic/105
generic/106
generic/107
generic/117
@@ -51,6 +52,7 @@ generic/184
generic/221
generic/228
generic/236
generic/237
generic/245
generic/249
generic/257
@@ -63,6 +65,7 @@ generic/308
generic/309
generic/313
generic/315
generic/319
generic/322
generic/335
generic/336
@@ -72,6 +75,7 @@ generic/342
generic/343
generic/348
generic/360
generic/375
generic/376
generic/377
Not
@@ -282,4 +286,4 @@ shared/004
shared/032
shared/051
shared/289
Passed all 75 tests
Passed all 79 tests


@@ -58,6 +58,7 @@ $(basename $0) options:
-m | Run mkfs on the device before mounting and running
| tests. Implies unmounting existing mounts first.
-n <nr> | The number of devices and mounts to test.
-o <opts> | Add option string to all mounts during all tests.
-P | Enable trace_printk.
-p | Exit script after preparing mounts only, don't run tests.
-q <nr> | The first <nr> mounts will be quorum members. Must be
@@ -68,6 +69,7 @@ $(basename $0) options:
-s | Skip git repo checkouts.
-t | Enable trace events that match the given glob argument.
| Multiple options enable multiple globbed events.
-T <nr> | Multiply the original trace buffer size by nr during the run.
-X | xfstests git repo. Used by tests/xfstests.sh.
-x | xfstests git branch to checkout and track.
-y | xfstests ./check additional args
@@ -136,6 +138,12 @@ while true; do
T_NR_MOUNTS="$2"
shift
;;
-o)
test -n "$2" || die "-o must have option string argument"
# always appending to existing options
T_MNT_OPTIONS+=",$2"
shift
;;
-P)
T_TRACE_PRINTK="1"
;;
@@ -160,6 +168,11 @@ while true; do
T_TRACE_GLOB+=("$2")
shift
;;
-T)
test -n "$2" || die "-T must have trace buffer size multiplier argument"
T_TRACE_MULT="$2"
shift
;;
-X)
test -n "$2" || die "-X requires xfstests git repo dir argument"
T_XFSTESTS_REPO="$2"
@@ -345,6 +358,13 @@ if [ -n "$T_INSMOD" ]; then
cmd insmod "$T_KMOD/src/scoutfs.ko"
fi
if [ -n "$T_TRACE_MULT" ]; then
orig_trace_size=$(cat /sys/kernel/debug/tracing/buffer_size_kb)
mult_trace_size=$((orig_trace_size * T_TRACE_MULT))
msg "increasing trace buffer size from $orig_trace_size KiB to $mult_trace_size KiB"
echo $mult_trace_size > /sys/kernel/debug/tracing/buffer_size_kb
fi
nr_globs=${#T_TRACE_GLOB[@]}
if [ $nr_globs -gt 0 ]; then
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
@@ -374,19 +394,21 @@ fi
# always describe tracing in the logs
cmd cat /sys/kernel/debug/tracing/set_event
cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/sys/kernel/debug/tracing/buffer_size_kb \
/proc/sys/kernel/ftrace_dump_on_oops
#
# Build a fenced config that runs scripts out of the repository rather
# than the default system directory
#
conf="$T_RESULTS/scoutfs-fencd.conf"
conf="$T_RESULTS/scoutfs-fenced.conf"
cat > $conf << EOF
SCOUTFS_FENCED_DELAY=1
SCOUTFS_FENCED_RUN=$T_TESTS/fenced-local-force-unmount.sh
SCOUTFS_FENCED_RUN_ARGS=""
SCOUTFS_FENCED_RUN_ARGS="ignored run args"
EOF
export SCOUTFS_FENCED_CONFIG_FILE="$conf"
T_FENCED_LOG="$T_RESULTS/fenced.log"
#
# Run the agent in the background, log its output, and kill it if we
@@ -394,7 +416,7 @@ export SCOUTFS_FENCED_CONFIG_FILE="$conf"
#
fenced_log()
{
echo "[$(timestamp)] $*" >> "$T_RESULTS/fenced.stdout.log"
echo "[$(timestamp)] $*" >> "$T_FENCED_LOG"
}
fenced_pid=""
kill_fenced()
@@ -405,7 +427,7 @@ kill_fenced()
fi
}
trap kill_fenced EXIT
$T_UTILS/fenced/scoutfs-fenced > "$T_RESULTS/fenced.stdout.log" 2> "$T_RESULTS/fenced.stderr.log" &
$T_UTILS/fenced/scoutfs-fenced > "$T_FENCED_LOG" 2>&1 &
fenced_pid=$!
fenced_log "started fenced pid $fenced_pid in the background"
@@ -429,6 +451,7 @@ for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
if [ "$i" -lt "$T_QUORUM" ]; then
opts="$opts,quorum_slot_nr=$i"
fi
opts="${opts}${T_MNT_OPTIONS}"
msg "mounting $meta_dev|$data_dev on $dir"
cmd mount -t scoutfs $opts "$data_dev" "$dir" &
@@ -603,6 +626,9 @@ if [ -n "$T_TRACE_GLOB" -o -n "$T_TRACE_PRINTK" ]; then
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
echo 0 > /sys/kernel/debug/tracing/options/trace_printk
cat /sys/kernel/debug/tracing/trace > "$T_RESULTS/traces"
if [ -n "$orig_trace_size" ]; then
echo $orig_trace_size > /sys/kernel/debug/tracing/buffer_size_kb
fi
fi
if [ "$skipped" == 0 -a "$failed" == 0 ]; then


@@ -6,9 +6,11 @@ simple-inode-index.sh
simple-staging.sh
simple-release-extents.sh
fallocate.sh
data-prealloc.sh
setattr_more.sh
offline-extent-waiting.sh
move-blocks.sh
large-fragmented-free.sh
enospc.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
@@ -17,6 +19,7 @@ lock-refleak.sh
lock-shrink-consistency.sh
lock-pr-cw-conflict.sh
lock-revoke-getcwd.sh
lock-recover-invalidate.sh
export-lookup-evict-race.sh
createmany-parallel.sh
createmany-large-names.sh


@@ -0,0 +1,113 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
/*
* This creates fragmented data extents.
*
* A file is created that has alternating free and allocated extents.
* This also results in the global allocator having the matching
* fragmented free extent pattern. While that file is being created,
* occasionally an allocated extent is moved to another file. This
* results in a file that has fragmented extents at a given stride that
* can be deleted to create free data extents with a given stride.
*
* We don't have hole punching so to do this quickly we use a goofy
* combination of fallocate, truncate, and our move_blocks ioctl.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <linux/types.h>
#include <assert.h>
#include "ioctl.h"
#define BLOCK_SIZE 4096
int main(int argc, char **argv)
{
struct scoutfs_ioctl_move_blocks mb = {0,};
unsigned long long freed_extents;
unsigned long long move_stride;
unsigned long long i;
int alloc_fd;
int trunc_fd;
off_t off;
int ret;
if (argc != 5) {
printf("%s <freed_extents> <move_stride> <alloc_file> <trunc_file>\n", argv[0]);
return 1;
}
freed_extents = strtoull(argv[1], NULL, 0);
move_stride = strtoull(argv[2], NULL, 0);
alloc_fd = open(argv[3], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
if (alloc_fd == -1) {
fprintf(stderr, "error opening %s: %d (%s)\n", argv[3], errno, strerror(errno));
exit(1);
}
trunc_fd = open(argv[4], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
if (trunc_fd == -1) {
fprintf(stderr, "error opening %s: %d (%s)\n", argv[4], errno, strerror(errno));
exit(1);
}
for (i = 0, off = 0; i < freed_extents; i++, off += BLOCK_SIZE * 2) {
ret = fallocate(alloc_fd, 0, off, BLOCK_SIZE * 2);
if (ret < 0) {
fprintf(stderr, "fallocate at off %llu error: %d (%s)\n",
(unsigned long long)off, errno, strerror(errno));
exit(1);
}
ret = ftruncate(alloc_fd, off + BLOCK_SIZE);
if (ret < 0) {
fprintf(stderr, "truncate to off %llu error: %d (%s)\n",
(unsigned long long)off + BLOCK_SIZE, errno, strerror(errno));
exit(1);
}
if ((i % move_stride) == 0) {
mb.from_fd = alloc_fd;
mb.from_off = off;
mb.len = BLOCK_SIZE;
mb.to_off = i * BLOCK_SIZE;
ret = ioctl(trunc_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
if (ret < 0) {
fprintf(stderr, "move from off %llu error: %d (%s)\n",
(unsigned long long)off,
errno, strerror(errno));
}
}
}
if (alloc_fd > -1)
close(alloc_fd);
if (trunc_fd > -1)
close(trunc_fd);
return 0;
}


@@ -0,0 +1,136 @@
#
# test that the data prealloc options behave as expected. We write to
# two files a block at a time so that a single file doesn't naturally
# merge adjacent consecutive allocations. (we don't have multiple
# allocation cursors)
#
t_require_commands scoutfs stat filefrag dd touch truncate
write_forwards()
{
local prefix="$1"
local nr="$2"
local blk
touch "$prefix"-{1,2}
truncate -s 0 "$prefix"-{1,2}
for blk in $(seq 0 1 $((nr - 1))); do
dd if=/dev/zero of="$prefix"-1 bs=4096 seek=$blk count=1 conv=notrunc status=none
dd if=/dev/zero of="$prefix"-2 bs=4096 seek=$blk count=1 conv=notrunc status=none
done
}
write_backwards()
{
local prefix="$1"
local nr="$2"
local blk
touch "$prefix"-{1,2}
truncate -s 0 "$prefix"-{1,2}
for blk in $(seq $((nr - 1)) -1 0); do
dd if=/dev/zero of="$prefix"-1 bs=4096 seek=$blk count=1 conv=notrunc status=none
dd if=/dev/zero of="$prefix"-2 bs=4096 seek=$blk count=1 conv=notrunc status=none
done
}
release_files() {
local prefix="$1"
local size
local vers
local f
for f in "$prefix"*; do
size=$(stat -c "%s" "$f")
vers=$(scoutfs stat -s data_version "$f")
scoutfs release "$f" -V "$vers" -o 0 -l $size
done
}
stage_files() {
local prefix="$1"
local nr="$2"
local vers
local f
for blk in $(seq 0 1 $((nr - 1))); do
for f in "$prefix"*; do
vers=$(scoutfs stat -s data_version "$f")
scoutfs stage /dev/zero "$f" -V "$vers" -o $((blk * 4096)) -l 4096
done
done
}
print_extents_found()
{
local prefix="$1"
filefrag "$prefix"* 2>&1 | grep "extent.*found" | t_filter_fs
}
t_save_all_sysfs_mount_options data_prealloc_blocks
t_save_all_sysfs_mount_options data_prealloc_contig_only
restore_options()
{
t_restore_all_sysfs_mount_options data_prealloc_blocks
t_restore_all_sysfs_mount_options data_prealloc_contig_only
}
trap restore_options EXIT
prefix="$T_D0/file"
echo "== initial writes smaller than prealloc grow to prealloc size"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 64
print_extents_found $prefix
echo "== larger files get full prealloc extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 128
print_extents_found $prefix
echo "== non-streaming writes with contig have per-block extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_backwards $prefix 32
print_extents_found $prefix
echo "== any writes to region prealloc get full extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 16
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 64
print_extents_found $prefix
write_backwards $prefix 64
print_extents_found $prefix
echo "== streaming offline writes get full extents either way"
t_set_sysfs_mount_option 0 data_prealloc_blocks 16
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 64
release_files $prefix 64
stage_files $prefix 64
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
release_files $prefix 64
stage_files $prefix 64
print_extents_found $prefix
echo "== goofy preallocation amounts work"
t_set_sysfs_mount_option 0 data_prealloc_blocks 7
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 14
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_blocks 13
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 53
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_blocks 1
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 3
print_extents_found $prefix
t_pass


@@ -45,6 +45,18 @@ check_read_write()
fi
}
# verify that fenced ran our testing fence script
verify_fenced_run()
{
local rids="$@"
local rid
for rid in $rids; do
grep -q ".* running rid '$rid'.* args 'ignored run args'" "$T_FENCED_LOG" || \
t_fail "fenced didn't execute RUN script for rid $rid"
done
}
echo "== make sure all mounts can see each other"
check_read_write
@@ -62,12 +74,14 @@ done
while t_rid_is_fencing $rid; do
sleep .5
done
verify_fenced_run $rid
t_mount $cl
check_read_write
echo "== force unmount all non-server, connection timeout, fence nop, mount"
sv=$(t_server_nr)
pattern="nonsense"
rids=""
sync
for cl in $(t_fs_nrs); do
if [ $cl == $sv ]; then
@@ -75,6 +89,7 @@ for cl in $(t_fs_nrs); do
fi
rid=$(t_mount_rid $cl)
rids="$rids $rid"
pattern="$pattern|$rid"
echo "cl $cl sv $sv rid $rid" >> "$T_TMP.log"
@@ -89,6 +104,7 @@ done
while test -d $(echo /sys/fs/scoutfs/*/fence/* | cut -d " " -f 1); do
sleep .5
done
verify_fenced_run $rids
# remount all the clients
for cl in $(t_fs_nrs); do
if [ $cl == $sv ]; then
@@ -109,11 +125,17 @@ t_wait_for_leader
while t_rid_is_fencing $rid; do
sleep .5
done
verify_fenced_run $rid
t_mount $sv
check_read_write
echo "== force unmount everything, new server fences all previous"
sync
rids=""
# get rids before forced unmount breaks scoutfs statfs
for nr in $(t_fs_nrs); do
rids="$rids $(t_mount_rid $nr)"
done
for nr in $(t_fs_nrs); do
t_force_umount $nr
done
@@ -122,6 +144,7 @@ t_mount_all
while test -d $(echo /sys/fs/scoutfs/*/fence/* | cut -d " " -f 1); do
sleep .5
done
verify_fenced_run $rids
check_read_write
t_pass


@@ -0,0 +1,22 @@
#
# Make sure the server can handle a transaction with a data_freed whose
# blocks all hit different btree blocks in the main free list. It
# probably has to be merged in multiple commits.
#
t_require_commands fragmented_data_extents
EXTENTS_PER_BTREE_BLOCK=600
EXTENTS_PER_LIST_BLOCK=8192
FREED_EXTENTS=$((EXTENTS_PER_BTREE_BLOCK * EXTENTS_PER_LIST_BLOCK))
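# 600 extents per btree block * 8192 extents per list block =
# 4,915,200 freed extents, enough that freeing them spans many btree
# blocks (and likely multiple server commits).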
echo "== creating fragmented extents"
fragmented_data_extents $FREED_EXTENTS $EXTENTS_PER_BTREE_BLOCK "$T_D0/alloc" "$T_D0/move"
echo "== unlink file with moved extents to free extents per block"
rm -f "$T_D0/move"
echo "== cleanup"
rm -f "$T_D0/alloc"
t_pass


@@ -0,0 +1,43 @@
#
# trigger server failover and lock recovery during heavy invalidating
# load on multiple mounts
#
majority_nr=$(t_majority_count)
quorum_nr=$T_QUORUM
test "$quorum_nr" == "$majority_nr" && \
t_skip "need remaining majority when leader unmounted"
test "$T_NR_MOUNTS" -lt "$((quorum_nr + 2))" && \
t_skip "need at least 2 non-quorum load mounts"
echo "== starting background invalidating read/write load"
touch "$T_D0/file"
load_pids=""
for i in $(t_fs_nrs); do
if [ "$i" -ge "$quorum_nr" ]; then
eval path="\$T_D${i}/file"
(while true; do touch $path > /dev/null 2>&1; done) &
load_pids="$load_pids $!"
(while true; do stat $path > /dev/null 2>&1; done) &
load_pids="$load_pids $!"
fi
done
# had it reproduce in ~40s on wimpy debug kernel guests
LENGTH=60
echo "== ${LENGTH}s of lock recovery during invalidating load"
END=$((SECONDS + LENGTH))
while [ "$SECONDS" -lt "$END" ]; do
sv=$(t_server_nr)
t_umount $sv
t_mount $sv
# new server had to process greeting for mount to finish
done
echo "== stopping background load"
kill $load_pids
t_pass


@@ -36,7 +36,8 @@ test_xattr_lengths() {
else
echo "$name=\"$val\"" > "$T_TMP.good"
fi
cmp "$T_TMP.good" "$T_TMP.got" || exit 1
cmp "$T_TMP.good" "$T_TMP.got" || \
t_fail "cmp failed name len $name_len val len $val_len"
setfattr -x $name "$FILE"
}


@@ -65,7 +65,6 @@ generic/030 # mmap missing
generic/075 # file content mismatch failures (fds, etc)
generic/080 # mmap missing
generic/103 # enospc causes trans commit failures
generic/105 # needs trigage: something about acls
generic/108 # mount fails on failing device?
generic/112 # file content mismatch failures (fds, etc)
generic/120 # (can't exec 'cause no mmap)
@@ -73,17 +72,14 @@ generic/126 # (can't exec 'cause no mmap)
generic/141 # mmap missing
generic/213 # enospc causes trans commit failures
generic/215 # mmap missing
generic/237 # wrong error return from failing setfacl?
generic/246 # mmap missing
generic/247 # mmap missing
generic/248 # mmap missing
generic/319 # utils output change? update branch?
generic/321 # requires selinux enabled for '+' in ls?
generic/325 # mmap missing
generic/338 # BUG_ON update inode error handling
generic/346 # mmap missing
generic/347 # _dmthin_mount doesn't work?
generic/375 # utils output change? update branch?
EOF
t_restore_output


@@ -55,9 +55,21 @@ test -x "$SCOUTFS_FENCED_RUN" || \
error_exit "SCOUTFS_FENCED_RUN '$SCOUTFS_FENCED_RUN' isn't executable"
#
# main loop watching for fence request across all filesystems
# Main loop watching for fence requests across all filesystems. The
# server can shut down without waiting for pending fence requests to
# finish. All of the interaction with the fence directory and files can
# fail at any moment. We will generate log messages when the dir or
# files disappear.
#
# generate failure messages to stderr while still echoing 0 for the caller
careful_cat()
{
local path="$@"
cat "$@" || echo 0
}
while sleep $SCOUTFS_FENCED_DELAY; do
for fence in /sys/fs/scoutfs/*/fence/*; do
# catches unmatched regex when no dirs
@@ -66,7 +78,8 @@ while sleep $SCOUTFS_FENCED_DELAY; do
fi
# skip requests that have been handled
if [ $(cat "$fence/fenced") == 1 -o $(cat "$fence/error") == 1 ]; then
if [ "$(careful_cat $fence/fenced)" == 1 -o \
"$(careful_cat $fence/error)" == 1 ]; then
continue
fi
@@ -81,10 +94,10 @@ while sleep $SCOUTFS_FENCED_DELAY; do
export SCOUTFS_FENCED_REQ_RID="$rid"
export SCOUTFS_FENCED_REQ_IP="$ip"
$run $SCOUTFS_FENCED_RUN_ARGS
$SCOUTFS_FENCED_RUN $SCOUTFS_FENCED_RUN_ARGS
rc=$?
if [ "$rc" != 0 ]; then
log_message "server $srv fencing rid $rid saw error status $rc from $run"
log_message "server $srv fencing rid $rid saw error status $rc"
echo 1 > "$fence/error"
continue
fi


@@ -15,12 +15,61 @@ general mount options described in the
.BR mount (8)
manual page.
.TP
.B acl
The acl mount option enables support for POSIX Access Control Lists
as detailed in
.BR acl (5) .
Support for POSIX ACLs is the default.
.TP
.B data_prealloc_blocks=<blocks>
Set the size of preallocation regions of data files, in 4KiB blocks.
Writes to these regions that contain no extents will attempt to
preallocate the size of the full region. This can waste a lot of space
with small files, files with sparse regions, and files whose final
length isn't a multiple of the preallocation size. The following
data_prealloc_contig_only option, which is the default, restricts this
behaviour to waste less space.
.sp
All the preallocation options can be changed in an active mount by
writing to their respective files in the options directory in the
mount's sysfs directory.
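.sp
For example, the preallocation size can be changed on a running mount by
writing to its sysfs option file (the per-mount directory name shown
here is illustrative):
.sp
.nf
echo 32 > /sys/fs/scoutfs/<fsid>/mount_options/data_prealloc_blocks
.fi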
.sp
It is worth noting that it is always more efficient in every way to use
.BR fallocate (2)
to precisely allocate large extents for the resulting size of the file.
Always attempt to enable it in software that supports it.
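.sp
For example, a writer that knows the final size of a file can
preallocate it up front with the equivalent of (paths are
illustrative):
.sp
.nf
fallocate -l 1G /mnt/scoutfs/large-file
.fi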
.TP
.B data_prealloc_contig_only=<0|1>
This option, currently the default, limits file data preallocation in
two ways. First, it will only preallocate when extending a fully
allocated file. Second, it will limit the size of preallocation to the
existing length of the file. These limits reduce the amount of
preallocation wasted per file at the cost of multiple initial extents in
all files. It only supports simple streaming writes; any other write
pattern will not be recognized and could result in many fragmented
extent allocations.
.sp
This option can be disabled to encourage large allocated extents
regardless of write patterns. This can be helpful if files are written
with initial sparse regions (perhaps by multiple threads writing to
different regions) and wasted space isn't an issue (perhaps because the
file population contains few small files).
.TP
.B metadev_path=<device>
The metadev_path option specifies the path to the block device that
contains the filesystem's metadata.
.sp
This option is required.
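.sp
For example (device and mount point paths are illustrative):
.sp
.nf
mount -t scoutfs -o metadev_path=/dev/vg/scoutfs-meta /dev/vg/scoutfs-data /mnt/scoutfs
.fi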
.TP
.B noacl
The noacl mount option disables the default support for POSIX Access
Control Lists. Any existing system.posix_acl_default and
system.posix_acl_access extended attributes remain in inodes. They
will appear in listings from
.BR listxattr (2)
but specific retrieval or removal operations will fail. They will be
used for enforcement again if ACL support is later enabled.
.TP
.B orphan_scan_delay_ms=<number>
This option sets the average expected delay, in milliseconds, between
each mount's scan of the global orphaned inode list. Jitter is added to


@@ -597,7 +597,7 @@ format.
.PD
.TP
.BI "print META-DEVICE"
.BI "print {-S|--skip-likely-huge} META-DEVICE"
.sp
Prints out all of the metadata in the file system. This makes no effort
to ensure that the structures are consistent as they're traversed and
@@ -607,6 +607,20 @@ output.
.PD 0
.TP
.sp
.B "-S, --skip-likely-huge"
Skip printing structures that are likely to be very large. The
structures that are skipped tend to be global, with sizes related to
the size of the volume. Examples of skipped structures include
the global fs items, srch files, and metadata and data
allocators. Similar structures that are not skipped are related to the
number of mounts and are maintained at a relatively reasonable size.
These include per-mount log trees, srch files, allocators, and the
metadata allocators used by server commits.
.sp
Skipping the larger structures limits the print output to a relatively
constant size rather than being a large multiple of the used metadata
space of the volume, making the output much more useful for inspection.
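.sp
For example (the metadata device path is illustrative):
.sp
.nf
scoutfs print -S /dev/vg/scoutfs-meta
.fi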
.TP
.B "META-DEVICE"
The path to the metadata device for the filesystem whose metadata will be
printed. Since this command reads via the host's buffer cache, it may not


@@ -8,6 +8,7 @@
#include <errno.h>
#include <string.h>
#include <stdarg.h>
#include <stdbool.h>
#include <ctype.h>
#include <uuid/uuid.h>
#include <sys/socket.h>
@@ -989,9 +990,10 @@ static void print_super_block(struct scoutfs_super_block *super, u64 blkno)
struct print_args {
char *meta_device;
bool skip_likely_huge;
};
static int print_volume(int fd)
static int print_volume(int fd, struct print_args *args)
{
struct scoutfs_super_block *super = NULL;
struct print_recursion_args pa;
@@ -1041,23 +1043,26 @@ static int print_volume(int fd)
ret = err;
}
for (i = 0; i < array_size(super->meta_alloc); i++) {
snprintf(str, sizeof(str), "meta_alloc[%u]", i);
err = print_btree(fd, super, str, &super->meta_alloc[i].root,
if (!args->skip_likely_huge) {
for (i = 0; i < array_size(super->meta_alloc); i++) {
snprintf(str, sizeof(str), "meta_alloc[%u]", i);
err = print_btree(fd, super, str, &super->meta_alloc[i].root,
print_alloc_item, NULL);
if (err && !ret)
ret = err;
}
err = print_btree(fd, super, "data_alloc", &super->data_alloc.root,
print_alloc_item, NULL);
if (err && !ret)
ret = err;
}
err = print_btree(fd, super, "data_alloc", &super->data_alloc.root,
print_alloc_item, NULL);
if (err && !ret)
ret = err;
err = print_btree(fd, super, "srch_root", &super->srch_root,
print_srch_root_item, NULL);
if (err && !ret)
ret = err;
err = print_btree(fd, super, "logs_root", &super->logs_root,
print_log_trees_item, NULL);
if (err && !ret)
@@ -1065,19 +1070,23 @@ static int print_volume(int fd)
pa.super = super;
pa.fd = fd;
err = print_btree_leaf_items(fd, super, &super->srch_root.ref,
print_srch_root_files, &pa);
if (err && !ret)
ret = err;
if (!args->skip_likely_huge) {
err = print_btree_leaf_items(fd, super, &super->srch_root.ref,
print_srch_root_files, &pa);
if (err && !ret)
ret = err;
}
err = print_btree_leaf_items(fd, super, &super->logs_root.ref,
print_log_trees_roots, &pa);
if (err && !ret)
ret = err;
err = print_btree(fd, super, "fs_root", &super->fs_root,
print_fs_item, NULL);
if (err && !ret)
ret = err;
if (!args->skip_likely_huge) {
err = print_btree(fd, super, "fs_root", &super->fs_root,
print_fs_item, NULL);
if (err && !ret)
ret = err;
}
out:
free(super);
@@ -1098,7 +1107,7 @@ static int do_print(struct print_args *args)
return ret;
}
ret = print_volume(fd);
ret = print_volume(fd, args);
close(fd);
return ret;
};
@@ -1108,6 +1117,9 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
struct print_args *args = state->input;
switch (key) {
case 'S':
args->skip_likely_huge = true;
break;
case ARGP_KEY_ARG:
if (!args->meta_device)
args->meta_device = strdup_or_error(state, arg);
@@ -1125,8 +1137,13 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
return 0;
}
static struct argp_option options[] = {
{ "skip-likely-huge", 'S', NULL, 0, "Skip large structures to minimize output size"},
{ NULL }
};
static struct argp argp = {
NULL,
options,
parse_opt,
"META-DEV",
"Print metadata structures"