Compare commits


75 Commits

Zach Brown
5512d5c03e v1.11 Release
Finish the release notes for the 1.11 release.

Signed-off-by: Zach Brown <zab@versity.com>
2023-02-02 11:00:38 -08:00
Zach Brown
8cf7be4651 Merge pull request #115 from versity/zab/utils_flush
Zab/utils flush
2023-02-02 10:25:12 -08:00
Zach Brown
3363b4fb79 Flush device caches in buffered util cmds
Add calls to our new device cache flushing helper in commands that use
buffered reads.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-18 10:52:02 -08:00
Zach Brown
ddb5cce2a5 Add quick utils flush_device helper
Add a quick helper that just calls cache flushing ioctls on different
kinds of files.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-18 10:27:47 -08:00
Zach Brown
1b0e9c45f4 Merge pull request #114 from versity/zab/commit_lt_dirty
Allow replaying srch file rotation
2023-01-17 16:07:13 -08:00
Zach Brown
2e2ccb6f61 Allow replaying srch file rotation
When a client no longer needs to append to a srch file, for whatever
reason, we move the reference from the log_trees item into a specific
srch file btree item in the server's srch file tracking btree.

Zeroing the log_trees item and inserting the server's btree item are
done in a server commit and should be written atomically.

But commit_log_trees had an error handling case that could leave the
newly inserted item dirty in memory without zeroing the srch file
reference in the existing log_trees item.  Future attempts to rotate the
file reference, perhaps by retrying the commit or by reclaiming the
client's rid, would get EEXIST and fail.

This fixes the error handling path to ensure that we'll keep the dirty
srch file btree and log_trees item in sync.  The desynced items can
still exist in the world so we'll tolerate getting EEXIST on insertion.
After enough time has passed, or if repair zeroed the duplicate
reference, we could remove this special case from insertion.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-17 14:33:27 -08:00
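
A minimal sketch of the tolerated insertion described in the commit above, with a hypothetical helper name standing in for the real srch item plumbing:

    /* items desynced by the old error path may already exist */
    ret = insert_srch_file_item(sb, &key, &val);    /* hypothetical helper */
    if (ret == -EEXIST)
            ret = 0;        /* duplicate left by the old bug; already rotated */
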
Zach Brown
01c8bba56d Merge pull request #109 from versity/zab/server_statfs_stable_blocks
Zab/server statfs stable blocks
2023-01-12 09:58:48 -08:00
Zach Brown
17cb1fe84b Merge pull request #110 from versity/zab/partial_alloc_move
Allow partial extent motion
2023-01-12 09:58:12 -08:00
Zach Brown
78ae87031b Merge pull request #112 from versity/zab/tmpfile_umask
Zab/tmpfile umask
2023-01-12 09:57:56 -08:00
Zach Brown
bf93ea73c4 Merge pull request #113 from versity/zab/move_blocks_loop_fixes
Fix move_blocks loop exit conditions
2023-01-12 09:56:25 -08:00
Zach Brown
a23e7478a0 Fix move_blocks loop exit conditions
The move_blocks ioctl intends to only move extents whose bytes fall
inside i_size.  This is easy except for a final extent that straddles an
i_size that isn't aligned to 4K data blocks.

The code that either checked for an extent being entirely past i_size or
for limiting the number of blocks to move by i_size clumsily compared
i_size offsets in bytes with extent counts in 4KB blocks.  In just the
right circumstances, probably with the help of a byte length to move
that is much larger than i_size, the length calculation could result in
trying to move 0 blocks.  Once this hit the loop would keep finding that
extent and calculating 0 blocks to move and would be stuck.

We fix this by clamping the count of blocks in extents to move in terms
of byte offsets at the start of the loop.  This gets rid of the extra
size checks and byte offset use in the loop.  We also add a sanity check
to make sure that we can't get stuck if, say, corruption resulted in an
otherwise impossible zero length extent.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-10 09:34:52 -08:00
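
A minimal sketch of the fixed loop shape, assuming hypothetical iterator and mover names and 4KB data blocks; the count is clamped against i_size in block units up front, and a zero-length extent is treated as corruption rather than spun on:

    u64 size_blocks = DIV_ROUND_UP(i_size, 4096ULL);    /* bytes -> 4KB blocks */
    u64 count;

    while (get_next_extent(&ext) == 0) {                /* hypothetical iterator */
            if (ext.start >= size_blocks)
                    break;                              /* entirely past i_size */
            if (WARN_ON_ONCE(ext.len == 0))
                    return -EIO;                        /* corrupt zero-length extent */
            count = min(ext.len, size_blocks - ext.start);
            move_extent_blocks(&ext, count);            /* count > 0, always advances */
    }
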
Zach Brown
9ba2ee5c88 Add testing of O_TMPFILE umask
There were kernels that didn't apply the current umask to inode modes
created with O_TMPFILE without acls.  Let's have a test running to make
sure that we're not surprised if we come across one.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-09 14:49:23 -08:00
Zach Brown
fe33a492c2 Make o_tmpfile test more generic
The o_tmpfile test only did one thing; clean it up a bit so we can add
more tests to the file.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-09 10:14:40 -08:00
Zach Brown
77c0ff89fb Rename stage-tmpfile to o_tmpfile
We had a one-off test that was overly specific to staging from tmpfile.
This renames it to a more generic test where we can add more tests of
o_tmpfile in general.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-09 10:07:15 -08:00
Zach Brown
7c2d83e2f8 Remove saved super block in scoutfs_sb_info
Now that we've removed its users we can remove the global saved copy of
the super block from scoutfs_sb_info.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-06 11:15:45 -08:00
Zach Brown
40aa47c888 Have the server keep a private dirty super block
As the server does its work its transactions modify a dirty super block
in memory.  This used the global super block in scoutfs_sb_info which
was visible to everything, including the client.  Move the dirty super
block over to the private server info so that only the server can see
it.

This is mostly boring storage motion, but we do change one thing: the
quorum code now hands the server a static copy of the quorum config to
use as it starts up, before it reads the most recent super block.

Signed-off-by: Zach Brown <zab@versity.com>
2023-01-06 11:15:45 -08:00
Zach Brown
c1bd7bcce5 Allow partial extent motion
Refilling a client's data_avail is the only alloc_move call that doesn't
try and limit the number of blocks that it dirties.  If it doesn't find
sufficiently large extents it can exhaust the server's alloc budget
without hitting the target.  It'll try to dirty blocks and return a hard
error.

This changes that behaviour to allow returning 0 if it moved any
extents.  Other callers can deal with partial progress as they already
limit the blocks they dirty.  It will also return ENOSPC if it hasn't
moved anything, just as the current code does.

The result is that data fill might not hit the target in one pass.  It
might take multiple commits to fill the data_avail btree.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-15 20:47:41 -08:00
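
The shape of the relaxed return, mirroring the alloc.c hunk later in this diff; "needed" here is a hypothetical stand-in for the dirty-block estimate:

    /* stop when the server alloc can't safely dirty more blocks */
    if (scoutfs_alloc_meta_low(sb, alloc, needed)) {
            if (WARN_ON_ONCE(!moved))
                    ret = -ENOSPC;  /* made no progress at all */
            else
                    ret = 0;        /* partial progress; a later commit continues */
            break;
    }
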
Zach Brown
7720222588 Have statfs use unlocked stable roots
The server's statfs request handler was intending to lock dirty
structures as they were walked to get sums used for statfs fields.
Other callers walk stable structures, though, so the summation calls had
grown to iterate over other structures that the server didn't know it had
to lock.

This meant that the server was walking unlocked dirty structures as they
were being modified.  The races are very tight, but it can result in
request handling errors that shut down connections and IO errors from
trying to read inconsistent refs as they were modified by the locked
writer.

We've built up infrastructure so the server can now walk stable
structures just like the other callers.  It will no longer wander into
dirty blocks so it doesn't need to lock them and it will retry if its
walk of stale data crosses a broken reference.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
fff07ce19c Use stale block read retrying helper
Transition from manual checking for persistent ESTALE to the shared
helper that we just added.  This should not change behavior.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
464de56d28 Add stale block read retrying helper
Many readers had little implementations of the logic to decide to retry
stale reads with different refs or decide that they're persistent and
return hard errors.  Let's move that into a small helper.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
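
The helper pairs with a saved-refs struct on the caller's stack; a sketch of the intended retry loop, with a hypothetical read function standing in for the caller's real walk:

    DECLARE_SAVED_REFS(saved);
    int ret;

    do {
            ret = read_from_refs(sb, &ref_a, &ref_b);   /* hypothetical reader */
            ret = scoutfs_block_check_stale(sb, ret, &saved, &ref_a, &ref_b);
    } while (ret == -ESTALE);   /* becomes -EIO once the same stale refs repeat */
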
Zach Brown
342c206550 Have scoutfs_forest_inode_count return stale reads
scoutfs_forest_inode_count() assumed it was called with stable refs and
would always translate ESTALE to EIO.  Change it so that it passes
ESTALE to the caller who is responsible for handling it.

The server will use this to retry reading from stable supers that it's
storing in memory.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
fe4734d019 Save a full stable super in the server
The server has a mechanism for tracking the last stable roots used by
network rpcs.  We expand it a bit to include the entire super so
that we can add users in the server which want the last full stable
super.  We can still use the stable super to give out the stable
roots.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
b1a43bb312 Make quorum config use more precise
The quorum code was using the copy of the super block in the sb info for
its config.  With that going away we make different users more carefully
reference the config.  The quorum agent has a copy that it reads on
setup, the client rarely reads a copy when trying to connect, and the
server uses its super.

This is about data access isolation and should have no functional effect
other than to cause more super reads.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
929703213f Add fsid sbi field
A few paths throughout the code get the fsid for the current mount by
using the copy of the super block that we store in the scoutfs_sb_info
for the mount.  We'd like to remove the super block from the sbi and
it's cleaner to have a specific constant field for the fsid of the mount
which will not change.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-12 14:59:22 -08:00
Zach Brown
78279ffb4a Merge pull request #108 from versity/zab/v1.10
v1.10 Release
2022-12-07 13:33:45 -08:00
Zach Brown
0b919e2ba7 v1.10 Release
Finish the release notes for the 1.10 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-07 12:30:17 -08:00
Zach Brown
bb5267f0c9 Merge pull request #107 from versity/zab/write_truncated_zero_tail
Zab/write truncated zero tail
2022-12-06 11:31:52 -08:00
Zach Brown
6d4916954b Add basic-truncate test
Signed-off-by: Zach Brown <zab@versity.com>
2022-12-06 10:31:31 -08:00
Zach Brown
8e067b3d3f Truncate dirties zero tail extension
When we truncate away from a partial block we need to zero its tail that
was past i_size and dirty it so that it's written.

We missed the typical vfs boilerplate of calling block_truncate_page
from setattr->set_size that does this.  We need to be a little careful
to pass our file lock down to get_block and then queue the inode for
writeback so its written out with the transaction.  This follows the
pattern in .write_end.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-06 10:31:31 -08:00
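
A minimal sketch of the boilerplate being added, assuming a hypothetical get_block name; block_truncate_page() zeroes and dirties the partial block at the new size:

    /* in setattr, after locking and before updating i_size */
    if (attr->ia_valid & ATTR_SIZE) {
            ret = block_truncate_page(inode->i_mapping, attr->ia_size,
                                      scoutfs_get_block);   /* hypothetical name */
            if (ret < 0)
                    goto out;
    }
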
Zach Brown
87500e8bb5 Merge pull request #106 from versity/zab/invalidation_dprune_iput
Zab/invalidation dprune iput
2022-12-02 13:23:56 -08:00
Zach Brown
41174867ed Add t_get_sysfs_mount_option test func
Add a quick little function to get the value of a mount option.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-02 12:28:13 -08:00
Zach Brown
276fbebdac Avoid dput in lock invalidation
The d_prune_aliases in lock invalidation was thought to be safe because
the caller had an inode reference; surely it couldn't get into iput_final.

I missed the fundamental dcache pattern that dput can ascend through
parents and end up in inode eviction for entirely unrelated inodes.
It's very easy for this to deadlock: imagine, if nothing else, that the
lock that inode eviction blocks on in dput->iput->evict->delete->lock is
itself in the list of locks to invalidate in the caller.

We fix this by always kicking off d_prune and dput into async work.
This increases the chance that inodes will still be referenced after
invalidation, preventing inline deletion, so more deletions can be
deferred until the orphan scanner finds them.  It should be rare,
though.  We're still likely to put and drop invalidated inodes before a
writer gets around to removing the final unlink and asking us for the
omap that describes our cached inodes.

To perform the d_prune in work we make it a behavioural flag and make
our queued iputs a little more robust.  We use safer and more
understandable locking to cover the count and the new flags, and we
queue re-entrant work items on their own workqueue instead of using one
work instance in the system_wq.

Signed-off-by: Zach Brown <zab@versity.com>
2022-12-02 12:28:13 -08:00
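
A minimal sketch of the deferred pruning, with hypothetical structures; the point is that d_prune_aliases() and the final iput() only ever run from work, where descending into eviction can block safely:

    struct deferred_iput {                  /* hypothetical tracking struct */
            struct work_struct work;
            struct inode *inode;
            bool prune;
    };

    static void deferred_iput_worker(struct work_struct *work)
    {
            struct deferred_iput *di = container_of(work, struct deferred_iput, work);

            if (di->prune)
                    d_prune_aliases(di->inode);
            iput(di->inode);        /* may recurse into eviction of unrelated inodes */
            kfree(di);
    }
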
Zach Brown
03df993e14 Merge pull request #105 from versity/zab/cw_item_vers
Zab/cw item vers
2022-11-30 11:10:18 -08:00
Zach Brown
701f1a9538 Add test that checks duplicate meta_seq entries
Add a quick test of the index items to make sure that rapid inode
updates don't create duplicate meta_seq items.

Signed-off-by: Zach Brown <zab@versity.com>
2022-11-15 13:26:32 -08:00
Zach Brown
71ed4512dc Include primary lock write_seq for write_only vers
FS items are deleted by logging a deletion item that has a greater item
version than the item to delete.  The versions are usually maintained by
the write_seq of the exclusive write lock that protects the item.  Any
newer write hold will have a greater version than all previous write
holds so any items created under the lock will have a greater vers than
all previous items under the lock.  All deletion items will be merged
with the older item and both will be dropped.

This doesn't work for concurrent write-only locks.  The write-only locks
match with each other so their write_seqs are assigned in the order
that they are granted.  That grant order can be mismatched with item
creation order.  We can get deletion items with lesser versions than the
item to delete because of when each creation's write-only lock was
granted.

Write only locks are used to maintain consistency between concurrent
writers and readers, not between writers.  Consistency between writers
is done with another primary write lock.  For example, if you're writing
seq items to a write-only region you need to have the write lock on the
inode for the specific seq item you're writing.

The fix, then, is to pass these primary write locks down to the item
cache so that it can choose an item version that is the greatest amongst
the transaction, the write-only lock, and the primary lock.  This now
ensures that the primary lock's increasing write_seq makes it down to
the item, bringing item version ordering in line with exclusive holds of
the primary lock.

All of this to fix concurrent inode updates sometimes leaving behind
duplicate meta_seq items because old seq item deletions ended up with
older versions than the seq item they tried to delete, nullifying the
deletion.

Signed-off-by: Zach Brown <zab@versity.com>
2022-11-15 13:26:32 -08:00
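
A minimal sketch of the version choice, with hypothetical field names; the point is that a deletion item's version now always outranks items created under earlier exclusive holds of the primary lock:

    u64 vers = max3(trans_seq,                          /* transaction */
                    wo_lock ? wo_lock->write_seq : 0,   /* write-only lock */
                    primary ? primary->write_seq : 0);  /* primary write lock */

    item->vers = cpu_to_le64(vers);
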
Zach Brown
57dff347a6 Merge pull request #104 from versity/zab/v1.9
v1.9 Release
2022-10-29 17:41:51 -07:00
Zach Brown
fb7cb057c4 v1.9 Release
Finish the release notes for the 1.9 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-29 16:41:58 -07:00
Zach Brown
1b924c501e Merge pull request #103 from versity/zab/verify_dentry_errors
Zab/verify dentry errors
2022-10-27 16:15:53 -07:00
Zach Brown
aed4313995 Simplify dentry verification
Now that we've removed the hash and pos from the dentry_info struct we
can do without it.  We can store the refresh gen in the d_fsdata pointer
(sorry, 64bit only for now.. could allocate if we needed to.)  This gets
rid of the lock coverage spinlocks and puts a bit more pressure on lock
lookup, which we already know we have to make more efficient.  We can
get rid of all the dentry info allocation calls.

Now that we're not setting d_op as we allocate d_fsdata we put the ops
on the super block so that we get d_revalidate called on all our
dentries.

We also are a bit more precise about the errors we can return from
verification.  If the target of a dentry link changes then we return
-ESTALE rather than silently performing the caller's operation on
another inode.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-27 14:32:06 -07:00
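
A minimal sketch of stashing the refresh gen directly in d_fsdata, with hypothetical accessor names; this is the reason for the "64bit only for now" caveat above:

    static void dentry_set_refresh_gen(struct dentry *dentry, u64 gen)
    {
            BUILD_BUG_ON(sizeof(dentry->d_fsdata) < sizeof(u64));
            dentry->d_fsdata = (void *)(unsigned long)gen;
    }

    static u64 dentry_refresh_gen(struct dentry *dentry)
    {
            return (u64)(unsigned long)dentry->d_fsdata;
    }
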
Zach Brown
61d86f7718 Add scoutfs_lock_ino_refresh_gen
Add a lock call to get the current refresh_gen of a held lock.   If the
lock doesn't exist or isn't readable then we return 0.  This can be used
to track lock coverage of structures without the overhead and lifetime
binding of the lock coverage struct.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-27 14:16:07 -07:00
Zach Brown
717b56698a Remove __exit from scoutfs_sysfs_exit()
scoutfs_sysfs_exit() is called during error handling in module init.
When scoutfs is built-in (so, never.) the __exit section won't be
loaded.  Remove the __exit annotation so it's always available to be
called.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-26 16:42:27 -07:00
Zach Brown
c92a7ff705 Don't use dentry private hash/pos for deletion
The dentry cache life cycles are far too crazy to rely on d_fsdata being
kept in sync with the rest of the dentry fields.  Callers can do all
sorts of crazy things with dentries.  Only unlink and rename need these
fields and those operations are already so expensive that item lookups
to get the current actual hash and pos are lost in the noise.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-26 16:42:26 -07:00
Zach Brown
d05489c670 Merge pull request #102 from versity/zab/v1.8
v1.8 Release
2022-10-18 11:21:48 -07:00
Zach Brown
4806e8a7b3 v1.8 Release
Finish the release notes for the 1.8 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-18 09:48:41 -07:00
Zach Brown
b74f3f577d Merge pull request #101 from versity/zab/data_prealloc_options
Zab/data prealloc options
2022-10-17 12:18:51 -07:00
Zach Brown
d5ddf1ecac Fix option save/restore test helpers
The test shell helpers for saving and restoring mount options were
trying to put each mount's option value in an array.  It meant to build
the array key by concatenating the option name and the mount number.
But it didn't isolate the option "name" variable when evaluating it,
expanding the undefined "$name_" to nothing instead of "${name}_", and
so built keys for all options that only contained the mount index.  This
then broke when tests attempted to save and restore multiple options.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-17 09:12:21 -07:00
Zach Brown
e27ea22fe4 Add run-tests -T option to increase trace size
Add an option to increase the trace buffer size during the run.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-14 14:03:36 -07:00
Zach Brown
51fe5a4ceb Add -o mount option argument to run-tests
Add a run-tests option that lets us append an option string to all
mounts performed during the tests.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-14 14:03:36 -07:00
Zach Brown
3847c4fe63 Add data-prealloc test
Signed-off-by: Zach Brown <zab@versity.com>
2022-10-14 14:03:35 -07:00
Zach Brown
ef2daf8857 Make data preallocation tunable
Add mount options for the size of preallocation and whether or not it
should be restricted to extending writes.  Disabling the default
restriction to streaming writes lets it preallocate in aligned regions
of the preallocation size when they contain no extents.

Signed-off-by: Zach Brown <zab@versity.com>
2022-10-14 14:03:35 -07:00
Zach Brown
064409eb62 Merge pull request #100 from versity/zab/acl
Zab/acl
2022-09-29 09:51:10 -07:00
Zach Brown
ddc5d9f04d Allow setting orphan_scan_delay_ms option
The orphan_scan_delay_ms option setting code mistakenly set the default
before testing the option for -1 (meaning not yet set) to discover if
multiple options had been set.  This made any attempt to set it fail.

Initialize the option to -1 so the first set succeeds and apply the
default if we don't set the value.

Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:36:10 -07:00
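
A minimal sketch of the ordering the fix restores, with hypothetical parsing names; -1 means "not yet set", and the default is only applied after parsing finishes:

    opts->orphan_scan_delay_ms = -1;                /* sentinel: not set yet */

    while ((token = next_opt(&args)) != Opt_done) { /* hypothetical parser loop */
            if (token == Opt_orphan_scan_delay_ms) {
                    if (opts->orphan_scan_delay_ms != -1)
                            return -EINVAL;         /* option set more than once */
                    opts->orphan_scan_delay_ms = parsed;
            }
    }

    if (opts->orphan_scan_delay_ms == -1)
            opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
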
Zach Brown
433a80c6fc Add compat for changing posix_acl_valid arguments
Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:36:10 -07:00
Zach Brown
78405bb5fd Remove ACL tests from xfstests expunge list
Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:36:10 -07:00
Zach Brown
98e514e5f4 Add failure message to xattr length test
The simple-xattr-unit test had a helper that failed by exiting with
non-zero instead of emitting a message.  Let's make it a bit easier to
see what's going on.

Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:36:10 -07:00
Zach Brown
29538a9f45 Add POSIX ACL support
Add support for the POSIX ACLs as described in acl(5).  Support is
enabled by default and can be explicitly enabled or disabled with the
acl or noacl mount options, respectively.

Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:36:10 -07:00
Zach Brown
1826048ca3 Add _locked xattr get and set calls
The upcoming acl support wants to be able to get and set xattrs from
callers who already have cluster locks and transactions.   We refactor
the existing xattr get and set calls into locked and unlocked variants.

It's mostly boring code motion with the unfortunate situation that the
caller needs to acquire the totl cluster lock before holding a
transaction before calling into the xattr code.   We push the parsing of
the tags to the caller of the locked get and set so that they can know
to acquire the right lock.  (The acl callers will never be setting
scoutfs. prefixed xattrs so they will never have tags.)

Signed-off-by: Zach Brown <zab@versity.com>
2022-09-28 10:11:24 -07:00
Zach Brown
798fbb793e Move to xattr_handler xattr prefix dispatch
Move to the use of the array of xattr_handler structs on the super to
dispatch set and get from generic_ based on the xattr prefix.   This
will make it easier to add handling of the pseudo system. ACL xattrs.

Signed-off-by: Zach Brown <zab@versity.com>
2022-09-21 14:24:52 -07:00
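
A minimal sketch of the dispatch arrangement, assuming the older xattr_handler get/set signatures that pass handler->flags as the final argument (matching the scoutfs_acl_*_xattr functions in the acl.c diff below); the non-ACL handler name is hypothetical:

    static const struct xattr_handler scoutfs_acl_access_handler = {
            .prefix = XATTR_NAME_POSIX_ACL_ACCESS,
            .flags  = ACL_TYPE_ACCESS,
            .get    = scoutfs_acl_get_xattr,
            .set    = scoutfs_acl_set_xattr,
    };

    static const struct xattr_handler *scoutfs_xattr_handlers[] = {
            &scoutfs_acl_access_handler,
            &scoutfs_acl_default_handler,   /* same shape with ACL_TYPE_DEFAULT */
            &scoutfs_main_handler,          /* hypothetical catch-all for other names */
            NULL,
    };

    /* at mount: sb->s_xattr = scoutfs_xattr_handlers; */
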
Zach Brown
d7b16419ef Merge pull request #99 from versity/zab/v1.7
v1.7 Release
2022-08-26 13:20:56 -07:00
Zach Brown
f13aba78b1 v1.7 Release
Finish the release notes for the 1.7 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-08-26 11:38:23 -07:00
Zach Brown
3220c2055c Merge pull request #98 from versity/zab/move_freed_many_commits
Zab/move freed many commits
2022-08-01 09:09:28 -07:00
Zach Brown
1cbc927ccb Only clear trying inode deletion bit when set
try_delete_inode_items() is responsible for making sure that it's safe
to delete an inode's persistent items.  One of the things it has to
check is that there isn't another deletion attempt on the inode in this
mount.  It sets a bit in lock data while it's working and backs off if
the bit is already set.

Unfortunately it was always clearing this bit as it exited, regardless
of whether it set it or not.  This would let the next attempt perform
the deletion again before the working task had finished.  This was often
not a problem because background orphan scanning is the only source of
regular concurrent deletion attempts.

But it's a big problem if a deletion attempt takes a very long time.  It
gives enough time for an orphan scan attempt to clear the bit, try
again, and clobber whoever is performing the very slow deletion.

I hit this in a test that built files with an absurd number of
fragmented extents.  The second concurrent orphan attempt was able to
proceed with deletion and performed a bunch of duplicate data extent
frees and caused corruption.

The fix is to only clear the bit if we set it.  Now all concurrent
attempts will back off until the first task is done.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-29 11:25:01 -07:00
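
A minimal sketch of the fixed back-off, with hypothetical names; only the task that actually set the bit clears it on the way out:

    if (test_and_set_bit(TRYING_DELETION, &ldata->bits))
            return 0;                       /* another attempt is working; back off */

    ret = delete_inode_items(sb, inode);    /* potentially very slow */

    clear_bit(TRYING_DELETION, &ldata->bits);   /* we set it, so we clear it */
    return ret;
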
Zach Brown
acb94dd9b7 Add test of large fragmented free lists
Add a test which gives the server a transaction with a free list block
that contains blknos that each dirty an individual btree block in the
global data free extent btree.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-29 11:25:01 -07:00
Zach Brown
233fbb39f3 Limit alloc_move per-call allocator consumption
Recently scoutfs_alloc_move() was changed to try and limit the amount of
metadata blocks it could allocate or free.  The intent was to stop
concurrent holders of a transaction from fully consuming the available
allocator for the transaction.

The limiting logic was a bit off.  It stopped when the allocator had the
caller's limit remaining, not when it had consumed the caller's limit.
This is overly permissive and could still allow concurrent callers to
consume the allocator.  It was also triggering warning messages when a
call consumed more than its allowed budget while holding a transaction.

Unfortunately, we don't have per-caller tracking of allocator resource
consumption.  The best we can do is sample the allocators as we start
and return if they drop by the caller's limit.  This is overly
conservative in that it accounts consumption by any concurrent caller
to all callers.

This isn't perfect but it makes the failure case less likely and the
impact shouldn't be significant.  We don't often have a lot of
concurrency and the limits are larger than callers will typically
consume.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-29 11:25:01 -07:00
Zach Brown
198d3cda32 Add scoutfs_alloc_meta_low_since()
Add scoutfs_alloc_meta_low_since() to test if the metadata avail or
freed resources have been used by a given amount since a previous
snapshot.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-29 11:24:10 -07:00
Zach Brown
e8c64b4217 Move freed data extents in multiple server commits
As _get_log_trees() in the server prepares the log_trees item for the
client's commit, it moves all the freed data extents from the log_trees
item into core data extent allocator btree items.  If the freed blocks
are very fragmented then it can exceed a commit's metadata allocation
budget trying to dirty blocks in the free data extent btree.

The fix is to move the freed data extents in multiple commits.  First we
move a limited number in the main commit that does all the rest of the
work preparing the commit.  Then we try to move the remaining freed
extents in multiple additional commits.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-28 11:42:33 -07:00
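
A minimal sketch of the multi-commit loop, using the budgeted scoutfs_alloc_move() from this series and a hypothetical commit helper:

    for (;;) {
            ret = scoutfs_alloc_move(sb, alloc, wri, &dst, &src, total,
                                     NULL, NULL, 0, budget);
            if (ret != -EINPROGRESS)
                    break;                  /* finished, empty source, or hard error */
            ret = commit_server_trans(sb);  /* hypothetical: persist, then continue */
            if (ret < 0)
                    break;
    }
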
Zach Brown
89b64ae1f7 Merge pull request #97 from versity/zab/v1_6_release
v1.6 Release
2022-07-07 14:54:26 -07:00
Zach Brown
fc8a5a1b5c v1.6 Release
Finish the release notes for the 1.6 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-07 13:07:55 -07:00
Zach Brown
d4c793e010 Merge pull request #94 from versity/zab/mem_free_fixes
Zab/mem free fixes
2022-07-07 13:07:04 -07:00
Zach Brown
8a3058818c Merge pull request #95 from versity/zab/skip_likely_huge
Add skip-likely-huge print option
2022-07-07 10:27:50 -07:00
Zach Brown
ba9a106f72 Free send attempts to disconnected clients
Callers who send to specific client connections can get -ENOTCONN if
their client has gone away.   We forgot to free the send tracking struct
in that case.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-06 15:16:20 -07:00
Zach Brown
310725eb72 Free omap rid list as server exits
The omap code keeps track of rids that are connected to the server.  It
only freed the tracked rids as the server told it that rids were being
removed.   But that removal only happened as clients were evicted.  If
the server shut down it'd leave the old rid entries around.   They'd be
leaked as the mount was unmounted and could linger and create duplicate
entries if the server started back up and the same clients reconnected.

The fix is to free the tracking rids as the server shuts down.   They'll
be rebuilt as clients reconnect if the server restarts.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-06 15:16:19 -07:00
Zach Brown
51a8236316 Fix missed partial fill_super teardown
If we return an error from .fill_super without having set sb->s_root
then the vfs won't call our put_super.  Our fill_super is careful to
call put_super so that it can tear down partial state, but we weren't
doing this with a few very early errors in fill_super.  This tripped
leak detection when we weren't freeing the sbi when returning errors
from bad option parsing.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-06 15:16:19 -07:00
Zach Brown
f3dd00895b Don't allocate zero size net info
Clients don't use the net conn info and specified that it has 0 size.
The net layer would try and allocate a zero size region which returns
the magic ZERO_SIZE_PTR, which it would then later try and free.  While
that works, it's a little goofy.   We can avoid the allocation when the
size is 0.  The pointer will remain null which kfree also accepts.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-06 15:16:19 -07:00
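
A minimal sketch of the fix; kzalloc(0) returns the magic ZERO_SIZE_PTR, while a NULL pointer is also perfectly fine to kfree() later (the conn field name is hypothetical):

    conn->info = info_size ? kzalloc(info_size, GFP_NOFS) : NULL;
    if (info_size && !conn->info)
            return -ENOMEM;
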
Zach Brown
49df98f5a8 Add skip-likely-huge print option
Add an option to skip printing structures that are likely to be so huge
that the print output becomes completely unwieldy on large systems.

Signed-off-by: Zach Brown <zab@versity.com>
2022-07-06 15:07:57 -07:00
74 changed files with 2552 additions and 1261 deletions


@@ -1,6 +1,123 @@
Versity ScoutFS Release Notes
=============================
---
v1.11
\
*Feb 2, 2023*
Fixed a free extent processing error that could prevent mount from
proceeding when free data extents were sufficiently fragmented. It now
properly handles very fragmented free extent maps.
Fixed a statfs server processing race that could return spurious errors
and shut down the server. With the race closed statfs processing is
reliable.
Fixed a rare livelock in the move\_blocks ioctl. With the right
relationship between ioctl arguments and eventual file extent items the
core loop in the move\_blocks ioctl could get stuck looping on an extent
item and never return. The loop exit conditions were fixed and the loop
will always advance through all extents.
Changed the 'print' scoutfs commands to flush the block cache for the
devices. It was inconvenient to expect cache flushing to be a separate
step to ensure consistency with remote node writes.
---
v1.10
\
*Dec 7, 2022*
Fixed a potential directory entry cache management deadlock that could
occur when many nodes performed heavy metadata write loads across shared
directories and their child subdirectories. The deadlock could halt
invalidation progress on a node, which could then stop use of locks that
needed invalidation on that node, leaving almost all tasks hanging on
locks that would never make progress.
Fixed a circumstance where metadata change sequence index item
modification could leave behind old stale metadata sequence items. The
duplication case required concurrent metadata updates across mounts with
particular open transaction patterns so the duplicate items are rare.
They resulted in a small amount of additional load when walking change
indexes but had no effect on correctness.
Fixed a rare case, found in testing, where sparse file extension might
not write partial blocks of zeros. This required using truncate to
extend files past file sizes that end in partial blocks
along with the right transaction commit and memory reclaim patterns.
This never affected regular non-sparse files nor files prepopulated with
fallocate.
---
v1.9
\
*Oct 29, 2022*
Fix VFS cached directory entry consistency verification that could cause
spurious "no such file or directory" (ENOENT) errors from rename over
NFS under certain conditions. The problem was only ever with the
consistency of in-memory cached dentry objects; persistent data was
correct and eventual eviction of the bad cached objects would stop
generating the errors.
---
v1.8
\
*Oct 18, 2022*
Add support for Linux POSIX Access Control Lists, as described in
acl(5). Mount options are added to enable ("acl") and disable ("noacl")
support. The default is to support ACLs. ACLs are stored in the
existing extended attribute scheme so adding support does not require
a format change.
Add options to control data extent preallocation. The default behavior
does not change. The options can relax the limits on preallocation
which will then trigger under more write patterns and increase the risk
of preallocated space which is never used. The options are described in
scoutfs(5).
---
v1.7
\
*Aug 26, 2022*
* **Fixed possible persistent errors moving freed data extents**
\
Fixed a case where the server could hit persistent errors trying to
move a client's freed extents in one commit. The client had to free
a large number of extents that occupied distant positions in the
global free extent btree. Very large fragmented files could cause
this. The server now moves the freed extents in multiple commits and
can always ensure forward progress.
* **Fixed possible persistent errors from freed duplicate extents**
\
Background orphan deletion wasn't properly synchronizing with
foreground tasks deleting very large files. If a deletion took long
enough then background deletion could also attempt to delete inode items
while the deletion was making progress. This could create duplicate
deletions of data extent items which causes the server to abort when
it later discovers the duplicate extents as it merges free lists.
---
v1.6
\
*Jul 7, 2022*
* **Fix memory leaks in rare corner cases**
\
Analysis tools found a few corner cases that leaked small structures,
generally around error handling or startup and shutdown.
* **Add --skip-likely-huge scoutfs print command option**
\
Add an option to scoutfs print to reduce the size of the output
so that it can be used to see system-wide metadata without being
overwhelmed by file-level details.
---
v1.5
\


@@ -8,6 +8,7 @@ CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
-include $(src)/Makefile.kernelcompat
scoutfs-y += \
acl.o \
avl.o \
alloc.o \
block.o \


@@ -34,3 +34,12 @@ endif
ifneq (,$(shell grep 'FMODE_KABI_ITERATE' include/linux/fs.h))
ccflags-y += -DKC_FMODE_KABI_ITERATE
endif
#
# v4.7-rc2-23-g0d4d717f2583
#
# Added user_ns argument to posix_acl_valid
#
ifneq (,$(shell grep 'posix_acl_valid.*user_ns,' include/linux/posix_acl.h))
ccflags-y += -DKC_POSIX_ACL_VALID_USER_NS
endif

kmod/src/acl.c (new file, 355 lines)

@@ -0,0 +1,355 @@
/*
* Copyright (C) 2022 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/xattr.h>
#include <linux/posix_acl.h>
#include <linux/posix_acl_xattr.h>
#include "format.h"
#include "super.h"
#include "scoutfs_trace.h"
#include "xattr.h"
#include "acl.h"
#include "inode.h"
#include "trans.h"
/*
* POSIX draft ACLs are stored as full xattr items with the entries
* encoded as the kernel's posix_acl_xattr_{header,entry} value structs.
*
* They're accessed and modified via user facing synthetic xattrs, iops
* calls from the kernel, during inode mode changes, and during inode
* creation.
*
* ACL access devolves into xattr access which is relatively expensive
* so we maintain the cached native form in the vfs inode. We drop the
* cache in lock invalidation which means that cached acl access must
* always be performed under cluster locking.
*/
static int acl_xattr_name_len(int type, char **name, size_t *name_len)
{
int ret = 0;
switch (type) {
case ACL_TYPE_ACCESS:
*name = XATTR_NAME_POSIX_ACL_ACCESS;
if (name_len)
*name_len = sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1;
break;
case ACL_TYPE_DEFAULT:
*name = XATTR_NAME_POSIX_ACL_DEFAULT;
if (name_len)
*name_len = sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1;
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock)
{
struct posix_acl *acl;
char *value = NULL;
char *name;
int ret;
if (!IS_POSIXACL(inode))
return NULL;
acl = get_cached_acl(inode, type);
if (acl != ACL_NOT_CACHED)
return acl;
ret = acl_xattr_name_len(type, &name, NULL);
if (ret < 0)
return ERR_PTR(ret);
ret = scoutfs_xattr_get_locked(inode, name, NULL, 0, lock);
if (ret > 0) {
value = kzalloc(ret, GFP_NOFS);
if (!value)
ret = -ENOMEM;
else
ret = scoutfs_xattr_get_locked(inode, name, value, ret, lock);
}
if (ret > 0) {
acl = posix_acl_from_xattr(&init_user_ns, value, ret);
} else if (ret == -ENODATA || ret == 0) {
acl = NULL;
} else {
acl = ERR_PTR(ret);
}
/* can set null negative cache */
if (!IS_ERR(acl))
set_cached_acl(inode, type, acl);
kfree(value);
return acl;
}
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
struct posix_acl *acl;
int ret;
if (!IS_POSIXACL(inode))
return NULL;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
if (ret < 0) {
acl = ERR_PTR(ret);
} else {
acl = scoutfs_get_acl_locked(inode, type, lock);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
}
return acl;
}
/*
* The caller has acquired the locks and dirtied the inode, they'll
* update the inode item if we return 0.
*/
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
struct scoutfs_lock *lock, struct list_head *ind_locks)
{
static const struct scoutfs_xattr_prefix_tags tgs = {0,}; /* never scoutfs. prefix */
bool set_mode = false;
char *value = NULL;
umode_t new_mode;
size_t name_len;
char *name;
int size = 0;
int ret;
ret = acl_xattr_name_len(type, &name, &name_len);
if (ret < 0)
return ret;
switch (type) {
case ACL_TYPE_ACCESS:
if (acl) {
ret = posix_acl_update_mode(inode, &new_mode, &acl);
if (ret < 0)
goto out;
set_mode = true;
}
break;
case ACL_TYPE_DEFAULT:
if (!S_ISDIR(inode->i_mode)) {
ret = acl ? -EINVAL : 0;
goto out;
}
break;
}
if (acl) {
size = posix_acl_xattr_size(acl->a_count);
value = kmalloc(size, GFP_NOFS);
if (!value) {
ret = -ENOMEM;
goto out;
}
ret = posix_acl_to_xattr(&init_user_ns, acl, value, size);
if (ret < 0)
goto out;
}
ret = scoutfs_xattr_set_locked(inode, name, name_len, value, size, 0, &tgs,
lock, NULL, ind_locks);
if (ret == 0 && set_mode) {
inode->i_mode = new_mode;
if (!value) {
/* can be setting an acl that only affects mode, didn't need xattr */
inode_inc_iversion(inode);
inode->i_ctime = CURRENT_TIME;
}
}
out:
if (!ret)
set_cached_acl(inode, type, acl);
kfree(value);
return ret;
}
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
LIST_HEAD(ind_locks);
int ret;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE, SCOUTFS_LKF_REFRESH_INODE, inode, &lock) ?:
scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
if (ret == 0) {
ret = scoutfs_dirty_inode_item(inode, lock) ?:
scoutfs_set_acl_locked(inode, acl, type, lock, &ind_locks);
if (ret == 0)
scoutfs_update_inode_item(inode, lock, &ind_locks);
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
}
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
return ret;
}
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
int type)
{
struct posix_acl *acl;
int ret = 0;
if (!IS_POSIXACL(dentry->d_inode))
return -EOPNOTSUPP;
acl = scoutfs_get_acl(dentry->d_inode, type);
if (IS_ERR(acl))
return PTR_ERR(acl);
if (acl == NULL)
return -ENODATA;
ret = posix_acl_to_xattr(&init_user_ns, acl, value, size);
posix_acl_release(acl);
return ret;
}
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
int flags, int type)
{
struct posix_acl *acl = NULL;
int ret;
if (!inode_owner_or_capable(dentry->d_inode))
return -EPERM;
if (!IS_POSIXACL(dentry->d_inode))
return -EOPNOTSUPP;
if (value) {
acl = posix_acl_from_xattr(&init_user_ns, value, size);
if (IS_ERR(acl))
return PTR_ERR(acl);
if (acl) {
ret = kc_posix_acl_valid(&init_user_ns, acl);
if (ret)
goto out;
}
}
ret = scoutfs_set_acl(dentry->d_inode, acl, type);
out:
posix_acl_release(acl);
return ret;
}
/*
* Apply the parent's default acl to new inodes access acl and inherit
* it as the default for new directories. The caller holds locks and a
* transaction.
*/
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
struct scoutfs_lock *lock, struct scoutfs_lock *dir_lock,
struct list_head *ind_locks)
{
struct posix_acl *acl = NULL;
int ret = 0;
if (!S_ISLNK(inode->i_mode)) {
if (IS_POSIXACL(dir)) {
acl = scoutfs_get_acl_locked(dir, ACL_TYPE_DEFAULT, dir_lock);
if (IS_ERR(acl))
return PTR_ERR(acl);
}
if (!acl)
inode->i_mode &= ~current_umask();
}
if (IS_POSIXACL(dir) && acl) {
if (S_ISDIR(inode->i_mode)) {
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_DEFAULT,
lock, ind_locks);
if (ret)
goto out;
}
ret = posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
if (ret < 0)
return ret;
if (ret > 0)
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_ACCESS,
lock, ind_locks);
} else {
cache_no_acl(inode);
}
out:
posix_acl_release(acl);
return ret;
}
/*
* Update the access ACL based on a newly set mode. If we return an
* error then the xattr wasn't changed.
*
* Annoyingly, setattr_copy has logic that transforms the final set mode
* that we want to use to update the acl. But we don't want to modify
* the other inode fields while discovering the resulting mode. We're
* relying on acl_chmod not caring about the transformation (currently
* just clears sgid). It would be better if we could get the resulting
* mode to give to acl_chmod without modifying the other inode fields.
*
* The caller has the inode mutex, a cluster lock, transaction, and will
* update the inode item if we return success.
*/
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
struct scoutfs_lock *lock, struct list_head *ind_locks)
{
struct posix_acl *acl;
int ret = 0;
if (!IS_POSIXACL(inode) || !(attr->ia_valid & ATTR_MODE))
return 0;
if (S_ISLNK(inode->i_mode))
return -EOPNOTSUPP;
acl = scoutfs_get_acl_locked(inode, ACL_TYPE_ACCESS, lock);
if (IS_ERR_OR_NULL(acl))
return PTR_ERR(acl);
ret = posix_acl_chmod(&acl, GFP_KERNEL, attr->ia_mode);
if (ret)
return ret;
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_ACCESS, lock, ind_locks);
posix_acl_release(acl);
return ret;
}

kmod/src/acl.h (new file, 18 lines)

@@ -0,0 +1,18 @@
#ifndef _SCOUTFS_ACL_H_
#define _SCOUTFS_ACL_H_
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type);
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock);
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type);
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
struct scoutfs_lock *lock, struct list_head *ind_locks);
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
int type);
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
int flags, int type);
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
struct scoutfs_lock *lock, struct list_head *ind_locks);
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
struct scoutfs_lock *lock, struct scoutfs_lock *dir_lock,
struct list_head *ind_locks);
#endif


@@ -892,12 +892,11 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
* -ENOENT is returned if we run out of extents in the source tree
* before moving the total.
*
* If meta_reserved is non-zero then -EINPROGRESS can be returned if the
* current meta allocator's avail blocks or room for freed blocks would
* have fallen under the reserved amount. The could have been
* successfully dirtied in this case but the number of blocks moved is
* not returned. The caller is expected to deal with the partial
* progress by commiting the dirty trees and examining the resulting
If meta_budget is non-zero then -EINPROGRESS can be returned if the
caller's budget is consumed in the allocator during this call
(though not necessarily by us, we don't have per-thread tracking of
allocator consumption :/).  The call can still have made progress and
the caller is expected to commit the dirty trees and examine the resulting
* modified trees to see if they need to continue moving extents.
*
* The caller can specify that extents in the source tree should first
@@ -914,7 +913,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved)
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget)
{
struct alloc_ext_args args = {
.alloc = alloc,
@@ -922,6 +921,8 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
};
struct scoutfs_extent found;
struct scoutfs_extent ext;
u32 avail_start = 0;
u32 freed_start = 0;
u64 moved = 0;
u64 count;
int ret = 0;
@@ -932,6 +933,9 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
vacant = NULL;
}
if (meta_budget != 0)
scoutfs_alloc_meta_remaining(alloc, &avail_start, &freed_start);
while (moved < total) {
count = total - moved;
@@ -964,14 +968,24 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
if (ret < 0)
break;
if (meta_reserved != 0 &&
scoutfs_alloc_meta_low(sb, alloc, meta_reserved +
extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
if (meta_budget != 0 &&
scoutfs_alloc_meta_low_since(alloc, avail_start, freed_start, meta_budget,
extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
ret = -EINPROGRESS;
break;
}
/* return partial if the server alloc can't dirty any more */
if (scoutfs_alloc_meta_low(sb, alloc, 50 + extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
if (WARN_ON_ONCE(!moved))
ret = -ENOSPC;
else
ret = 0;
break;
}
/* searching set start/len, finish initializing alloced extent */
ext.map = found.map ? ext.start - found.start + found.map : 0;
ext.flags = found.flags;
@@ -1351,6 +1365,27 @@ void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total,
} while (read_seqretry(&alloc->seqlock, seq));
}
/*
* Returns true if the caller's consumption of nr from either avail or
* freed would end up exceeding their budget relative to the starting
* remaining snapshot they took.
*/
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
u32 budget, u32 nr)
{
u32 avail_use;
u32 freed_use;
u32 avail;
u32 freed;
scoutfs_alloc_meta_remaining(alloc, &avail, &freed);
avail_use = avail_start - avail;
freed_use = freed_start - freed;
return ((avail_use + nr) > budget) || ((freed_use + nr) > budget);
}
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag)
{
@@ -1547,12 +1582,10 @@ out:
* call the caller's callback. This assumes that the super it's reading
* could be stale and will retry if it encounters stale blocks.
*/
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg)
int scoutfs_alloc_foreach(struct super_block *sb, scoutfs_alloc_foreach_cb_t cb, void *arg)
{
struct scoutfs_super_block *super = NULL;
struct scoutfs_block_ref stale_refs[2] = {{0,}};
struct scoutfs_block_ref refs[2] = {{0,}};
DECLARE_SAVED_REFS(saved);
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
@@ -1561,26 +1594,18 @@ int scoutfs_alloc_foreach(struct super_block *sb,
goto out;
}
retry:
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
do {
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
refs[0] = super->logs_root.ref;
refs[1] = super->srch_root.ref;
ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);
ret = scoutfs_block_check_stale(sb, ret, &saved, &super->logs_root.ref,
&super->srch_root.ref);
} while (ret == -ESTALE);
ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);
out:
if (ret == -ESTALE) {
if (memcmp(&stale_refs, &refs, sizeof(refs)) == 0) {
ret = -EIO;
} else {
BUILD_BUG_ON(sizeof(stale_refs) != sizeof(refs));
memcpy(stale_refs, refs, sizeof(stale_refs));
goto retry;
}
}
kfree(super);
return ret;
}


@@ -19,14 +19,11 @@
(128ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* The largest aligned region that we'll try to allocate at the end of
* the file as it's extended. This is also limited to the current file
* size so we can only waste at most twice the total file size when
* files are less than this. We try to keep this around the point of
* diminishing returns in streaming performance of common data devices
* to limit waste.
* The default size that we'll try to preallocate. This is trying to
* hit the limit of large efficient device writes while minimizing
* wasted preallocation that is never used.
*/
#define SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT \
#define SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS \
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
@@ -131,7 +128,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved);
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
@@ -159,6 +156,8 @@ int scoutfs_alloc_splice_list(struct super_block *sb,
bool scoutfs_alloc_meta_low(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 nr);
void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total, u32 *freed_space);
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
u32 budget, u32 nr);
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag);


@@ -677,7 +677,7 @@ out:
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_block_header *hdr;
struct block_private *bp = NULL;
bool retried = false;
@@ -701,7 +701,7 @@ retry:
set_bit(BLOCK_BIT_CRC_VALID, &bp->bits);
}
if (hdr->magic != cpu_to_le32(magic) || hdr->fsid != super->hdr.fsid ||
if (hdr->magic != cpu_to_le32(magic) || hdr->fsid != cpu_to_le64(sbi->fsid) ||
hdr->seq != ref->seq || hdr->blkno != ref->blkno) {
ret = -ESTALE;
goto out;
@@ -728,6 +728,36 @@ out:
return ret;
}
static bool stale_refs_match(struct scoutfs_block_ref *caller, struct scoutfs_block_ref *saved)
{
return !caller || (caller->blkno == saved->blkno && caller->seq == saved->seq);
}
/*
* Check if a read of a reference that gave ESTALE should be retried or
* should generate a hard error. If this is the second time we got
* ESTALE from the same refs then we return EIO and the caller should
* stop. As long as we keep seeing different refs we'll return ESTALE
* and the caller can keep trying.
*/
int scoutfs_block_check_stale(struct super_block *sb, int ret,
struct scoutfs_block_saved_refs *saved,
struct scoutfs_block_ref *a, struct scoutfs_block_ref *b)
{
if (ret == -ESTALE) {
if (stale_refs_match(a, &saved->refs[0]) && stale_refs_match(b, &saved->refs[1])) {
ret = -EIO;
} else {
if (a)
saved->refs[0] = *a;
if (b)
saved->refs[1] = *b;
}
}
return ret;
}
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl)
{
if (!IS_ERR_OR_NULL(bl))
@@ -797,7 +827,7 @@ int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
u32 magic, struct scoutfs_block **bl_ret,
u64 dirty_blkno, u64 *ref_blkno)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_block *cow_bl = NULL;
struct scoutfs_block *bl = NULL;
struct block_private *exist_bp = NULL;
@@ -865,7 +895,7 @@ int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
hdr = bl->data;
hdr->magic = cpu_to_le32(magic);
hdr->fsid = super->hdr.fsid;
hdr->fsid = cpu_to_le64(sbi->fsid);
hdr->blkno = cpu_to_le64(bl->blkno);
prandom_bytes(&hdr->seq, sizeof(hdr->seq));


@@ -13,6 +13,17 @@ struct scoutfs_block {
void *priv;
};
struct scoutfs_block_saved_refs {
struct scoutfs_block_ref refs[2];
};
#define DECLARE_SAVED_REFS(name) \
struct scoutfs_block_saved_refs name = {{{0,}}}
int scoutfs_block_check_stale(struct super_block *sb, int ret,
struct scoutfs_block_saved_refs *saved,
struct scoutfs_block_ref *a, struct scoutfs_block_ref *b);
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret);
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl);


@@ -356,7 +356,6 @@ static int client_greeting(struct super_block *sb,
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct client_info *client = sbi->client_info;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_greeting *gr = resp;
bool new_server;
int ret;
@@ -371,9 +370,9 @@ static int client_greeting(struct super_block *sb,
goto out;
}
if (gr->fsid != super->hdr.fsid) {
if (gr->fsid != cpu_to_le64(sbi->fsid)) {
scoutfs_warn(sb, "server greeting response fsid 0x%llx did not match client fsid 0x%llx",
le64_to_cpu(gr->fsid), le64_to_cpu(super->hdr.fsid));
le64_to_cpu(gr->fsid), sbi->fsid);
ret = -EINVAL;
goto out;
}
@@ -476,7 +475,6 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_mount_options opts;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
@@ -508,7 +506,7 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
goto out;
/* send a greeting to verify endpoints of each connection */
greet.fsid = super->hdr.fsid;
greet.fsid = cpu_to_le64(sbi->fsid);
greet.fmt_vers = cpu_to_le64(sbi->fmt_vers);
greet.server_term = cpu_to_le64(client->server_term);
greet.rid = cpu_to_le64(sbi->rid);


@@ -75,8 +75,6 @@
EXPAND_COUNTER(data_write_begin_enobufs_retry) \
EXPAND_COUNTER(dentry_revalidate_error) \
EXPAND_COUNTER(dentry_revalidate_invalid) \
EXPAND_COUNTER(dentry_revalidate_locked) \
EXPAND_COUNTER(dentry_revalidate_orphan) \
EXPAND_COUNTER(dentry_revalidate_rcu) \
EXPAND_COUNTER(dentry_revalidate_root) \
EXPAND_COUNTER(dentry_revalidate_valid) \
@@ -189,8 +187,6 @@
EXPAND_COUNTER(srch_search_retry_empty) \
EXPAND_COUNTER(srch_search_sorted) \
EXPAND_COUNTER(srch_search_sorted_block) \
EXPAND_COUNTER(srch_search_stale_eio) \
EXPAND_COUNTER(srch_search_stale_retry) \
EXPAND_COUNTER(srch_search_xattrs) \
EXPAND_COUNTER(srch_read_stale) \
EXPAND_COUNTER(statfs) \


@@ -366,27 +366,27 @@ static inline u64 ext_last(struct scoutfs_extent *ext)
/*
* The caller is writing to a logical iblock that doesn't have an
* allocated extent.
* allocated extent. The caller has searched for an extent containing
* iblock. If it already existed then it must be unallocated and
* offline.
*
* We always allocate an extent starting at the logical iblock. The
* caller has searched for an extent containing iblock. If it already
* existed then it must be unallocated and offline.
* We implement two preallocation strategies. Typically we only
* preallocate for simple streaming writes and limit preallocation while
* the file is small. The largest efficient allocation size is
* typically large enough that it would be unreasonable to allocate that
* much for all small files.
*
* Preallocation is used if we're strictly contiguously extending
* writes. That is, if the logical block offset equals the number of
* online blocks. We try to preallocate the number of blocks existing
* so that small files don't waste inordinate amounts of space and large
* files will eventually see large extents. This only works for
* contiguous single stream writes or stages of files from the first
* block. It doesn't work for concurrent stages, releasing behind
* staging, sparse files, multi-node writes, etc. fallocate() is always
* a better tool to use.
* Optionally, we can simply preallocate large empty aligned regions.
* This can waste a lot of space for small or sparse files but is
* reasonable when a file population is known to be large and dense but
* known to be written with non-streaming write patterns.
*/
static int alloc_block(struct super_block *sb, struct inode *inode,
struct scoutfs_extent *ext, u64 iblock,
struct scoutfs_lock *lock)
{
DECLARE_DATA_INFO(sb, datinf);
struct scoutfs_mount_options opts;
const u64 ino = scoutfs_ino(inode);
struct data_ext_args args = {
.ino = ino,
@@ -394,17 +394,22 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
.lock = lock,
};
struct scoutfs_extent found;
struct scoutfs_extent pre;
struct scoutfs_extent pre = {0,};
bool undo_pre = false;
u64 blkno = 0;
u64 online;
u64 offline;
u8 flags;
u64 start;
u64 count;
u64 rem;
int ret;
int err;
trace_scoutfs_data_alloc_block_enter(sb, ino, iblock, ext);
scoutfs_options_read(sb, &opts);
/* can only allocate over an existing unallocated offline extent */
if (WARN_ON_ONCE(ext->len &&
!(iblock >= ext->start && iblock <= ext_last(ext) &&
@@ -413,66 +418,106 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
mutex_lock(&datinf->mutex);
scoutfs_inode_get_onoff(inode, &online, &offline);
/* default to single allocation at the written block */
start = iblock;
count = 1;
/* copy existing flags for preallocated regions */
flags = ext->len ? ext->flags : 0;
if (ext->len) {
/* limit preallocation to remaining existing (offline) extent */
/*
* Assume that offline writers are going to be writing
* all the offline extents and try to preallocate the
* rest of the unwritten extent.
*/
count = ext->len - (iblock - ext->start);
flags = ext->flags;
} else if (opts.data_prealloc_contig_only) {
/*
* Only preallocate when a quick test of the online
* block count suggests a simple streaming write. Try
* to preallocate up to the next extent but limit the
* preallocation size to the number of online blocks.
*/
scoutfs_inode_get_onoff(inode, &online, &offline);
if (iblock > 1 && iblock == online) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
iblock, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (found.len && found.start > iblock)
count = found.start - iblock;
else
count = opts.data_prealloc_blocks;
count = min(iblock, count);
}
} else {
/* otherwise alloc to next extent */
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
iblock, 1, &found);
/*
* Preallocation of aligned regions only preallocates if
* the aligned region contains no extents at all. This
* could be fooled by offline sparse extents but we
* don't want to iterate over all offline extents in the
* aligned region.
*/
div64_u64_rem(iblock, opts.data_prealloc_blocks, &rem);
start = iblock - rem;
count = opts.data_prealloc_blocks;
ret = scoutfs_ext_next(sb, &data_ext_ops, &args, start, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (found.len && found.start > iblock)
count = found.start - iblock;
else
count = SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT;
flags = 0;
if (found.len && found.start < start + count)
count = 1;
}
/* overall prealloc limit */
count = min_t(u64, count, SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT);
/* only strictly contiguous extending writes will try to preallocate */
if (iblock > 1 && iblock == online)
count = min(iblock, count);
else
count = 1;
count = min_t(u64, count, opts.data_prealloc_blocks);
ret = scoutfs_alloc_data(sb, datinf->alloc, datinf->wri,
&datinf->dalloc, count, &blkno, &count);
if (ret < 0)
goto out;
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock, 1, blkno, 0);
if (ret < 0)
goto out;
/*
* An aligned prealloc attempt that gets a smaller extent can
* fail to cover iblock; make sure that it does. This is a
* pathological case, so we don't move the window past iblock,
* just enough to cover it, which we know is safe.
*/
if (start + count <= iblock)
start += (iblock - (start + count) + 1);
if (count > 1) {
pre.start = iblock + 1;
pre.len = count - 1;
pre.map = blkno + 1;
pre.start = start;
pre.len = count;
pre.map = blkno;
pre.flags = flags | SEF_UNWRITTEN;
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, pre.start,
pre.len, pre.map, pre.flags);
if (ret < 0) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock,
1, 0, flags);
BUG_ON(err); /* couldn't restore original */
if (ret < 0)
goto out;
}
undo_pre = true;
}
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock, 1, blkno + (iblock - start), 0);
if (ret < 0)
goto out;
/* tell the caller we have a single block, could check next? */
ext->start = iblock;
ext->len = 1;
ext->map = blkno;
ext->map = blkno + (iblock - start);
ext->flags = 0;
ret = 0;
out:
if (ret < 0 && blkno > 0) {
if (undo_pre) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args,
pre.start, pre.len, 0, flags);
BUG_ON(err); /* leaked preallocated extent */
}
err = scoutfs_free_data(sb, datinf->alloc, datinf->wri,
&datinf->data_freed, blkno, count);
BUG_ON(err); /* leaked free blocks */
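The two policies above are easier to compare side by side. What follows is a minimal userspace sketch of the region selection, assuming simplified types; prealloc_region, online_blocks, and next_extent_start are hypothetical names standing in for the lock-protected state that alloc_block actually consults.

#include <stdint.h>
#include <stdbool.h>

struct prealloc_opts {
	uint64_t data_prealloc_blocks;
	bool contig_only;
};

/*
 * Sketch: pick the preallocation region for a write at iblock.
 * next_extent_start is the start of the next existing extent at or
 * after the region, or 0 when there is none.
 */
static uint64_t prealloc_region(const struct prealloc_opts *opts,
				uint64_t iblock, uint64_t online_blocks,
				uint64_t next_extent_start, uint64_t *start)
{
	uint64_t count = 1;

	*start = iblock;

	if (opts->contig_only) {
		/* only strictly contiguous extending writes preallocate */
		if (iblock > 1 && iblock == online_blocks) {
			if (next_extent_start > iblock)
				count = next_extent_start - iblock;
			else
				count = opts->data_prealloc_blocks;
			/* don't preallocate more blocks than already exist */
			if (count > iblock)
				count = iblock;
		}
	} else {
		/* aligned window containing iblock */
		*start = iblock - (iblock % opts->data_prealloc_blocks);
		count = opts->data_prealloc_blocks;
		/* any extent already in the window defeats preallocation */
		if (next_extent_start && next_extent_start < *start + count) {
			*start = iblock;
			count = 1;
		}
	}

	/* the option limit applies to both policies */
	if (count > opts->data_prealloc_blocks)
		count = opts->data_prealloc_blocks;
	return count;
}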
@@ -586,8 +631,8 @@ static int scoutfs_get_block_read(struct inode *inode, sector_t iblock,
return ret;
}
static int scoutfs_get_block_write(struct inode *inode, sector_t iblock,
struct buffer_head *bh, int create)
int scoutfs_get_block_write(struct inode *inode, sector_t iblock, struct buffer_head *bh,
int create)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
int ret;
@@ -1147,9 +1192,9 @@ static void truncate_inode_pages_extent(struct inode *inode, u64 start, u64 len)
* explained above the move_blocks ioctl argument structure definition.
*
* The caller has processed the ioctl args and performed the most basic
* inode checks, but we perform more detailed inode checks once we have
* the inode lock and refreshed inodes. Our job is to safely lock the
* two files and move the extents.
* argument sanity and inode checks, but we perform more detailed inode
* checks once we have the inode lock and refreshed inodes. Our job is
* to safely lock the two files and move the extents.
*/
#define MOVE_DATA_EXTENTS_PER_HOLD 16
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
@@ -1209,6 +1254,15 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
count = (byte_len + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
to_iblock = to_off >> SCOUTFS_BLOCK_SM_SHIFT;
/* only move extent blocks inside i_size, careful not to wrap */
from_size = i_size_read(from);
if (from_off >= from_size) {
ret = 0;
goto out;
}
if (from_off + byte_len > from_size)
count = ((from_size - from_off) + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
if (S_ISDIR(from->i_mode) || S_ISDIR(to->i_mode)) {
ret = -EISDIR;
goto out;
@@ -1284,9 +1338,8 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
break;
}
/* only move extents within count and i_size */
if (ext.start >= from_iblock + count ||
ext.start >= i_size_read(from)) {
/* done if next extent starts after moving region */
if (ext.start >= from_iblock + count) {
done = true;
ret = 0;
break;
@@ -1294,13 +1347,15 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
from_start = max(ext.start, from_iblock);
map = ext.map + (from_start - ext.start);
len = min3(from_iblock + count,
round_up((u64)i_size_read(from),
SCOUTFS_BLOCK_SM_SIZE),
ext.start + ext.len) - from_start;
len = min(from_iblock + count, ext.start + ext.len) - from_start;
to_start = to_iblock + (from_start - from_iblock);
/* we'd get stuck, shouldn't happen */
if (WARN_ON_ONCE(len == 0)) {
ret = -EIO;
goto out;
}
if (is_stage) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &to_args,
to_start, 1, &off_ext);

View File

@@ -43,6 +43,9 @@ extern const struct file_operations scoutfs_file_fops;
struct scoutfs_alloc;
struct scoutfs_block_writer;
int scoutfs_get_block_write(struct inode *inode, sector_t iblock, struct buffer_head *bh,
int create);
int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
u64 ino, u64 iblock, u64 last, bool offline,
struct scoutfs_lock *lock);

View File

@@ -32,6 +32,7 @@
#include "hash.h"
#include "omap.h"
#include "forest.h"
#include "acl.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -59,8 +60,6 @@
* All the entries have a dirent struct with the full name in their
* value. The dirent struct contains the name hash and readdir position
* so that any item use can reference all the items for a given entry.
* This is important for deleting all the items given a dentry that was
* populated by lookup.
*/
static unsigned int mode_to_type(umode_t mode)
@@ -99,100 +98,12 @@ static unsigned int dentry_type(enum scoutfs_dentry_type type)
return DT_UNKNOWN;
}
/*
* @lock_cov: tells revalidation that the dentry is still locked and valid.
*
* @pos, @hash: lets us remove items on final unlink without having to
* look them up.
*/
struct dentry_info {
struct scoutfs_lock_coverage lock_cov;
u64 hash;
u64 pos;
};
static struct kmem_cache *dentry_info_cache;
static void scoutfs_d_release(struct dentry *dentry)
{
struct super_block *sb = dentry->d_sb;
struct dentry_info *di = dentry->d_fsdata;
if (di) {
scoutfs_lock_del_coverage(sb, &di->lock_cov);
kmem_cache_free(dentry_info_cache, di);
dentry->d_fsdata = NULL;
}
}
static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags);
static const struct dentry_operations scoutfs_dentry_ops = {
.d_release = scoutfs_d_release,
const struct dentry_operations scoutfs_dentry_ops = {
.d_revalidate = scoutfs_d_revalidate,
};
static int alloc_dentry_info(struct dentry *dentry)
{
struct dentry_info *di;
smp_rmb();
if (dentry->d_op == &scoutfs_dentry_ops)
return 0;
di = kmem_cache_zalloc(dentry_info_cache, GFP_NOFS);
if (!di)
return -ENOMEM;
scoutfs_lock_init_coverage(&di->lock_cov);
spin_lock(&dentry->d_lock);
if (!dentry->d_fsdata) {
dentry->d_fsdata = di;
smp_wmb();
d_set_d_op(dentry, &scoutfs_dentry_ops);
}
spin_unlock(&dentry->d_lock);
if (di != dentry->d_fsdata)
kmem_cache_free(dentry_info_cache, di);
return 0;
}
static void update_dentry_info(struct super_block *sb, struct dentry *dentry,
u64 hash, u64 pos, struct scoutfs_lock *lock)
{
struct dentry_info *di = dentry->d_fsdata;
if (WARN_ON_ONCE(di == NULL))
return;
scoutfs_lock_add_coverage(sb, lock, &di->lock_cov);
di->hash = hash;
di->pos = pos;
}
static u64 dentry_info_hash(struct dentry *dentry)
{
struct dentry_info *di = dentry->d_fsdata;
if (WARN_ON_ONCE(di == NULL))
return 0;
return di->hash;
}
static u64 dentry_info_pos(struct dentry *dentry)
{
struct dentry_info *di = dentry->d_fsdata;
if (WARN_ON_ONCE(di == NULL))
return 0;
return di->pos;
}
static void init_dirent_key(struct scoutfs_key *key, u8 type, u64 ino,
u64 major, u64 minor)
{
@@ -317,62 +228,105 @@ out:
return ret;
}
/*
* Verify that the caller's dentry still precisely matches our dirent
* items.
*
* The caller has a dentry that the vfs revalidated before they acquired
* their locks. If the dentry is still covered by a lock we immediately
* return 0. If not, we check items and return -ENOENT if a positive
* dentry no longer matches the items or -EEXIST if a negative entry's
* name now has an item.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_lock *lock)
static int lookup_dentry_dirent(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_dirent *dent_ret,
struct scoutfs_lock *lock)
{
struct dentry_info *di = dentry->d_fsdata;
return lookup_dirent(sb, dir_ino, dentry->d_name.name, dentry->d_name.len,
dirent_name_hash(dentry->d_name.name, dentry->d_name.len),
dent_ret, lock);
}
static u64 dentry_parent_ino(struct dentry *dentry)
{
struct dentry *parent = NULL;
struct inode *dir;
u64 dir_ino = 0;
if ((parent = dget_parent(dentry)) && (dir = parent->d_inode))
dir_ino = scoutfs_ino(dir);
dput(parent);
return dir_ino;
}
/* negative dentries return 0, our root ino is non-zero (1) */
static u64 dentry_ino(struct dentry *dentry)
{
return dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
}
static void set_dentry_fsdata(struct dentry *dentry, struct scoutfs_lock *lock)
{
void *now = (void *)(unsigned long)lock->refresh_gen;
void *was;
/* didn't want to alloc :/ */
BUILD_BUG_ON(sizeof(dentry->d_fsdata) != sizeof(u64));
BUILD_BUG_ON(sizeof(dentry->d_fsdata) != sizeof(long));
do {
was = dentry->d_fsdata;
} while (cmpxchg(&dentry->d_fsdata, was, now) != was);
}
static bool test_dentry_fsdata(struct dentry *dentry, u64 refresh)
{
u64 fsd = (unsigned long)ACCESS_ONCE(dentry->d_fsdata);
return fsd == refresh;
}
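This removes the allocated dentry_info entirely: the pointer-sized d_fsdata field itself holds the refresh_gen of the lock that validated the name. A standalone sketch of the idea, using hypothetical names and assuming 64-bit pointers as the BUILD_BUG_ONs check:

#include <stdint.h>
#include <stdbool.h>
#include <stdatomic.h>

struct fake_dentry {
	_Atomic uintptr_t fsdata;	/* stands in for dentry->d_fsdata */
};

/* stamp the generation of the lock that just validated the name */
static void set_gen(struct fake_dentry *d, uint64_t refresh_gen)
{
	atomic_store(&d->fsdata, (uintptr_t)refresh_gen);
}

/* a match means the same lock grant is still covering the name */
static bool gen_matches(struct fake_dentry *d, uint64_t refresh_gen)
{
	return atomic_load(&d->fsdata) == (uintptr_t)refresh_gen;
}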
/*
* Validate an operation caller's input dentry argument. If the fsdata
* is valid then the underlying dirent items couldn't have changed and
* we return 0. If fsdata is no longer protected by a lock or its
* fields don't match then we check the dirent item. If the dirent item
* doesn't match what the caller expected given their dentry fields then
* we return an error.
*/
static int validate_dentry(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_lock *lock)
{
u64 ino = dentry_ino(dentry);
struct scoutfs_dirent dent = {0,};
const char *name;
u64 dentry_ino;
int name_len;
u64 hash;
int ret;
if (scoutfs_lock_is_covered(sb, &di->lock_cov))
return 0;
if (test_dentry_fsdata(dentry, lock->refresh_gen)) {
ret = 0;
goto out;
}
dentry_ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
name = dentry->d_name.name;
name_len = dentry->d_name.len;
hash = dirent_name_hash(name, name_len);
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
ret = lookup_dentry_dirent(sb, dir_ino, dentry, &dent, lock);
if (ret < 0 && ret != -ENOENT)
return ret;
goto out;
if (dentry_ino != le64_to_cpu(dent.ino) || di->hash != le64_to_cpu(dent.hash) ||
di->pos != le64_to_cpu(dent.pos)) {
if (dentry_ino)
ret = -ENOENT;
else
ret = -EEXIST;
/* use negative zeroed dent when lookup gave -ENOENT */
if (!ino && dent.ino) {
/* caller expected negative but there was a dirent */
ret = -EEXIST;
} else if (ino && !dent.ino) {
/* caller expected positive but there was no dirent */
ret = -ENOENT;
} else if (ino != le64_to_cpu(dent.ino)) {
/* name linked to different inode than caller's */
ret = -ESTALE;
} else {
/* dirent ino matches dentry ino */
ret = 0;
}
out:
trace_scoutfs_validate_dentry(sb, dentry, dir_ino, ino, le64_to_cpu(dent.ino),
lock->refresh_gen, ret);
return ret;
}
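validate_dentry's outcome reduces to a four-way comparison between the inode the dentry references and the inode the dirent item now names. A minimal sketch of just that comparison, with hypothetical names:

#include <errno.h>
#include <stdint.h>

/* 0 means the dentry still matches; errors mirror validate_dentry() */
static int compare_dentry_dirent(uint64_t dentry_ino, uint64_t dirent_ino)
{
	if (!dentry_ino && dirent_ino)
		return -EEXIST;	/* negative dentry, but the name exists */
	if (dentry_ino && !dirent_ino)
		return -ENOENT;	/* positive dentry, but the name is gone */
	if (dentry_ino != dirent_ino)
		return -ESTALE;	/* name now links to a different inode */
	return 0;
}

Lookup's -ENOENT is mapped to a zeroed dent before the comparison, so "no dirent" shows up here as dirent_ino == 0.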
static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
{
struct super_block *sb = dentry->d_sb;
struct dentry_info *di = dentry->d_fsdata;
struct dentry *parent = dget_parent(dentry);
struct scoutfs_lock *lock = NULL;
struct scoutfs_dirent dent;
bool is_covered = false;
struct inode *dir;
u64 dentry_ino;
u64 dir_ino = dentry_parent_ino(dentry);
int ret;
/* don't think this happens but we can find out */
@@ -394,47 +348,7 @@ static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
goto out;
}
if (WARN_ON_ONCE(di == NULL)) {
ret = 0;
goto out;
}
is_covered = scoutfs_lock_is_covered(sb, &di->lock_cov);
if (is_covered) {
scoutfs_inc_counter(sb, dentry_revalidate_locked);
ret = 1;
goto out;
}
if (!parent || !parent->d_inode) {
scoutfs_inc_counter(sb, dentry_revalidate_orphan);
ret = 0;
goto out;
}
dir = parent->d_inode;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, dir, &lock);
if (ret)
goto out;
ret = lookup_dirent(sb, scoutfs_ino(dir),
dentry->d_name.name, dentry->d_name.len,
dirent_name_hash(dentry->d_name.name,
dentry->d_name.len),
&dent, lock);
if (ret == -ENOENT) {
dent.ino = 0;
dent.hash = 0;
dent.pos = 0;
} else if (ret < 0) {
goto out;
}
dentry_ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
if ((dentry_ino == le64_to_cpu(dent.ino))) {
update_dentry_info(sb, dentry, le64_to_cpu(dent.hash),
le64_to_cpu(dent.pos), lock);
if (test_dentry_fsdata(dentry, scoutfs_lock_ino_refresh_gen(sb, dir_ino))) {
scoutfs_inc_counter(sb, dentry_revalidate_valid);
ret = 1;
} else {
@@ -443,10 +357,7 @@ static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
}
out:
trace_scoutfs_d_revalidate(sb, dentry, flags, parent, is_covered, ret);
dput(parent);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
trace_scoutfs_d_revalidate(sb, dentry, flags, dir_ino, ret);
if (ret < 0 && ret != -ECHILD)
scoutfs_inc_counter(sb, dentry_revalidate_error);
@@ -483,10 +394,6 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
goto out;
}
ret = alloc_dentry_info(dentry);
if (ret)
goto out;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, dir, &dir_lock);
if (ret)
goto out;
@@ -500,8 +407,7 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
ino = le64_to_cpu(dent.ino);
}
if (ret == 0)
update_dentry_info(sb, dentry, le64_to_cpu(dent.hash),
le64_to_cpu(dent.pos), dir_lock);
set_dentry_fsdata(dentry, dir_lock);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_READ);
@@ -725,10 +631,6 @@ static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
int ret = 0;
u64 ino;
ret = alloc_dentry_info(dentry);
if (ret)
return ERR_PTR(ret);
ret = scoutfs_alloc_ino(sb, S_ISDIR(mode), &ino);
if (ret)
return ERR_PTR(ret);
@@ -765,7 +667,8 @@ retry:
if (ret)
goto out_unlock;
ret = scoutfs_new_inode(sb, dir, mode, rdev, ino, *inode_lock, &inode);
ret = scoutfs_new_inode(sb, dir, mode, rdev, ino, *inode_lock, &inode) ?:
scoutfs_init_acl_locked(inode, dir, *inode_lock, *dir_lock, ind_locks);
if (ret < 0)
goto out;
@@ -816,7 +719,7 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
ret = validate_dentry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
@@ -829,7 +732,7 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
if (ret)
goto out;
update_dentry_info(sb, dentry, hash, pos, dir_lock);
set_dentry_fsdata(dentry, dir_lock);
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
@@ -903,19 +806,15 @@ static int scoutfs_link(struct dentry *old_dentry,
if (ret)
return ret;
ret = validate_dentry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out_unlock;
if (inode->i_nlink >= SCOUTFS_LINK_MAX) {
ret = -EMLINK;
goto out_unlock;
}
ret = alloc_dentry_info(dentry);
if (ret)
goto out_unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out_unlock;
dir_size = i_size_read(dir) + dentry->d_name.len;
if (inode->i_nlink == 0) {
@@ -941,7 +840,7 @@ retry:
goto out;
if (del_orphan) {
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock);
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock, inode_lock);
if (ret)
goto out;
}
@@ -953,11 +852,11 @@ retry:
scoutfs_ino(inode), inode->i_mode, dir_lock,
inode_lock);
if (ret) {
err = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
err = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock, inode_lock);
WARN_ON_ONCE(err); /* no orphan, might not scan and delete after crash */
goto out;
}
update_dentry_info(sb, dentry, hash, pos, dir_lock);
set_dentry_fsdata(dentry, dir_lock);
i_size_write(dir, dir_size);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
@@ -1005,9 +904,11 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent dent;
LIST_HEAD(ind_locks);
u64 ind_seq;
int ret = 0;
u64 hash;
int ret;
ret = scoutfs_lock_inodes(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE,
@@ -1016,11 +917,7 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
if (ret)
return ret;
ret = alloc_dentry_info(dentry);
if (ret)
goto unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
ret = validate_dentry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto unlock;
@@ -1029,6 +926,13 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
goto unlock;
}
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
ret = lookup_dirent(sb, scoutfs_ino(dir), dentry->d_name.name, dentry->d_name.len, hash,
&dent, dir_lock);
if (ret < 0)
goto out;
if (should_orphan(inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(inode),
&orph_lock);
@@ -1047,21 +951,20 @@ retry:
goto unlock;
if (should_orphan(inode)) {
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock, inode_lock);
if (ret < 0)
goto out;
}
ret = del_entry_items(sb, scoutfs_ino(dir), dentry_info_hash(dentry),
dentry_info_pos(dentry), scoutfs_ino(inode),
dir_lock, inode_lock);
ret = del_entry_items(sb, scoutfs_ino(dir), le64_to_cpu(dent.hash), le64_to_cpu(dent.pos),
scoutfs_ino(inode), dir_lock, inode_lock);
if (ret) {
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock);
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock, inode_lock);
WARN_ON_ONCE(ret); /* should have been dirty */
goto out;
}
update_dentry_info(sb, dentry, 0, 0, dir_lock);
set_dentry_fsdata(dentry, dir_lock);
dir->i_ctime = ts;
dir->i_mtime = ts;
@@ -1242,10 +1145,11 @@ const struct inode_operations scoutfs_symlink_iops = {
.put_link = scoutfs_put_link,
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = scoutfs_removexattr,
.removexattr = generic_removexattr,
.get_acl = scoutfs_get_acl,
};
/*
@@ -1273,17 +1177,13 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
name_len > PATH_MAX || name_len > SCOUTFS_SYMLINK_MAX_SIZE)
return -ENAMETOOLONG;
ret = alloc_dentry_info(dentry);
if (ret)
return ret;
inode = lock_hold_create(dir, dentry, S_IFLNK|S_IRWXUGO, 0,
&dir_lock, &inode_lock, NULL, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
ret = validate_dentry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
@@ -1301,7 +1201,7 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
if (ret)
goto out;
update_dentry_info(sb, dentry, hash, pos, dir_lock);
set_dentry_fsdata(dentry, dir_lock);
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
@@ -1631,6 +1531,8 @@ static int scoutfs_rename_common(struct inode *old_dir,
struct scoutfs_lock *old_inode_lock = NULL;
struct scoutfs_lock *new_inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_dirent new_dent;
struct scoutfs_dirent old_dent;
struct timespec now;
bool ins_new = false;
bool del_new = false;
@@ -1678,19 +1580,18 @@ static int scoutfs_rename_common(struct inode *old_dir,
if (ret)
goto out_unlock;
/* make sure that the entries assumed by the argument still exist */
ret = validate_dentry(sb, scoutfs_ino(old_dir), old_dentry, old_dir_lock) ?:
validate_dentry(sb, scoutfs_ino(new_dir), new_dentry, new_dir_lock);
if (ret)
goto out_unlock;
/* test dir i_size now that it's refreshed */
if (new_inode && S_ISDIR(new_inode->i_mode) && i_size_read(new_inode)) {
ret = -ENOTEMPTY;
goto out_unlock;
}
/* make sure that the entries assumed by the argument still exist */
ret = alloc_dentry_info(old_dentry) ?:
alloc_dentry_info(new_dentry) ?:
verify_entry(sb, scoutfs_ino(old_dir), old_dentry, old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry, new_dir_lock);
if (ret)
goto out_unlock;
if ((flags & RENAME_NOREPLACE) && (new_inode != NULL)) {
ret = -EEXIST;
@@ -1733,10 +1634,12 @@ retry:
/* remove the new entry if it exists */
if (new_inode) {
ret = del_entry_items(sb, scoutfs_ino(new_dir),
dentry_info_hash(new_dentry),
dentry_info_pos(new_dentry),
scoutfs_ino(new_inode),
ret = lookup_dirent(sb, scoutfs_ino(new_dir), new_dentry->d_name.name,
new_dentry->d_name.len, new_hash, &new_dent, new_dir_lock);
if (ret < 0)
goto out;
ret = del_entry_items(sb, scoutfs_ino(new_dir), le64_to_cpu(new_dent.hash),
le64_to_cpu(new_dent.pos), scoutfs_ino(new_inode),
new_dir_lock, new_inode_lock);
if (ret)
goto out;
@@ -1752,18 +1655,22 @@ retry:
goto out;
del_new = true;
ret = lookup_dirent(sb, scoutfs_ino(old_dir), old_dentry->d_name.name,
old_dentry->d_name.len, old_hash, &old_dent, old_dir_lock);
if (ret < 0)
goto out;
/* remove the old entry */
ret = del_entry_items(sb, scoutfs_ino(old_dir),
dentry_info_hash(old_dentry),
dentry_info_pos(old_dentry),
scoutfs_ino(old_inode),
ret = del_entry_items(sb, scoutfs_ino(old_dir), le64_to_cpu(old_dent.hash),
le64_to_cpu(old_dent.pos), scoutfs_ino(old_inode),
old_dir_lock, old_inode_lock);
if (ret)
goto out;
ins_old = true;
if (should_orphan(new_inode)) {
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(new_inode), orph_lock);
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(new_inode), orph_lock,
new_inode_lock);
if (ret)
goto out;
}
@@ -1771,7 +1678,7 @@ retry:
/* won't fail from here on out, update all the vfs structs */
/* the caller will use d_move to move the old_dentry into place */
update_dentry_info(sb, old_dentry, new_hash, new_pos, new_dir_lock);
set_dentry_fsdata(old_dentry, new_dir_lock);
i_size_write(old_dir, i_size_read(old_dir) - old_dentry->d_name.len);
if (!new_inode)
@@ -1836,8 +1743,8 @@ out:
err = 0;
if (ins_old)
err = add_entry_items(sb, scoutfs_ino(old_dir),
dentry_info_hash(old_dentry),
dentry_info_pos(old_dentry),
le64_to_cpu(old_dent.hash),
le64_to_cpu(old_dent.pos),
old_dentry->d_name.name,
old_dentry->d_name.len,
scoutfs_ino(old_inode),
@@ -1853,8 +1760,8 @@ out:
if (ins_new && err == 0)
err = add_entry_items(sb, scoutfs_ino(new_dir),
dentry_info_hash(new_dentry),
dentry_info_pos(new_dentry),
le64_to_cpu(new_dent.hash),
le64_to_cpu(new_dent.pos),
new_dentry->d_name.name,
new_dentry->d_name.len,
scoutfs_ino(new_inode),
@@ -1925,7 +1832,7 @@ static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mod
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock, inode_lock);
if (ret < 0)
goto out; /* XXX returning error but items created */
@@ -1978,32 +1885,14 @@ const struct inode_operations_wrapper scoutfs_dir_iops = {
.rename = scoutfs_rename,
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = scoutfs_removexattr,
.removexattr = generic_removexattr,
.get_acl = scoutfs_get_acl,
.symlink = scoutfs_symlink,
.permission = scoutfs_permission,
},
.tmpfile = scoutfs_tmpfile,
.rename2 = scoutfs_rename2,
};
void scoutfs_dir_exit(void)
{
if (dentry_info_cache) {
kmem_cache_destroy(dentry_info_cache);
dentry_info_cache = NULL;
}
}
int scoutfs_dir_init(void)
{
dentry_info_cache = kmem_cache_create("scoutfs_dentry_info",
sizeof(struct dentry_info), 0,
SLAB_RECLAIM_ACCOUNT, NULL);
if (!dentry_info_cache)
return -ENOMEM;
return 0;
}

View File

@@ -8,6 +8,8 @@ extern const struct file_operations scoutfs_dir_fops;
extern const struct inode_operations_wrapper scoutfs_dir_iops;
extern const struct inode_operations scoutfs_symlink_iops;
extern const struct dentry_operations scoutfs_dentry_ops;
struct scoutfs_link_backref_entry {
struct list_head head;
u64 dir_ino;
@@ -29,7 +31,4 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
int scoutfs_symlink_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock, u64 i_size);
int scoutfs_dir_init(void);
void scoutfs_dir_exit(void);
#endif

View File

@@ -78,11 +78,6 @@ struct forest_refs {
struct scoutfs_block_ref logs_ref;
};
/* initialize some refs that initially aren't equal */
#define DECLARE_STALE_TRACKING_SUPER_REFS(a, b) \
struct forest_refs a = {{cpu_to_le64(0),}}; \
struct forest_refs b = {{cpu_to_le64(1),}}
struct forest_bloom_nrs {
unsigned int nrs[SCOUTFS_FOREST_BLOOM_NRS];
};
@@ -136,11 +131,11 @@ static struct scoutfs_block *read_bloom_ref(struct super_block *sb, struct scout
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *next)
{
DECLARE_STALE_TRACKING_SUPER_REFS(prev_refs, refs);
struct scoutfs_net_roots roots;
struct scoutfs_btree_root item_root;
struct scoutfs_log_trees *lt;
SCOUTFS_BTREE_ITEM_REF(iref);
DECLARE_SAVED_REFS(saved);
struct scoutfs_key found;
struct scoutfs_key ltk;
bool checked_fs;
@@ -155,8 +150,6 @@ retry:
goto out;
trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
refs.fs_ref = roots.fs_root.ref;
refs.logs_ref = roots.logs_root.ref;
scoutfs_key_init_log_trees(&ltk, 0, 0);
checked_fs = false;
@@ -212,14 +205,10 @@ retry:
}
}
if (ret == -ESTALE) {
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0)
return -EIO;
prev_refs = refs;
ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.fs_root.ref, &roots.logs_root.ref);
if (ret == -ESTALE)
goto retry;
}
out:
return ret;
}
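The open-coded prev_refs/memcmp retry is replaced by the shared scoutfs_block_check_stale helper. A sketch of the underlying compare-and-save pattern; struct refs and check_stale are hypothetical stand-ins, and the saved copy must start out unequal to any real refs, as the removed DECLARE_STALE_TRACKING_SUPER_REFS macro arranged:

#include <string.h>
#include <errno.h>
#include <stdint.h>

struct refs {
	uint64_t fs_ref;
	uint64_t logs_ref;
};

/*
 * Returns -ESTALE when the refs changed and the caller should retry,
 * or -EIO when a retry would reread the very same refs.
 */
static int check_stale(struct refs *saved, const struct refs *cur)
{
	if (memcmp(saved, cur, sizeof(*cur)) == 0)
		return -EIO;
	*saved = *cur;
	return -ESTALE;
}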
@@ -541,9 +530,8 @@ void scoutfs_forest_dec_inode_count(struct super_block *sb)
/*
* Return the total inode count from the super block and all the
* log_btrees it references. This assumes it's working with a block
* reference hierarchy that should be fully consistent. If we see
* ESTALE we've hit persistent corruption.
* log_btrees it references. ESTALE from block reads is returned to the
* caller, who is expected to retry or return a hard error.
*/
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count)
@@ -572,8 +560,6 @@ int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_bloc
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
else if (ret == -ESTALE)
ret = -EIO;
break;
}
}

View File

@@ -19,6 +19,8 @@
#include <linux/pagemap.h>
#include <linux/sched.h>
#include <linux/list_sort.h>
#include <linux/workqueue.h>
#include <linux/buffer_head.h>
#include "format.h"
#include "super.h"
@@ -36,6 +38,7 @@
#include "omap.h"
#include "forest.h"
#include "btree.h"
#include "acl.h"
/*
* XXX
@@ -66,8 +69,10 @@ struct inode_sb_info {
struct delayed_work orphan_scan_dwork;
struct workqueue_struct *iput_workq;
struct work_struct iput_work;
struct llist_head iput_llist;
spinlock_t iput_lock;
struct list_head iput_list;
};
#define DECLARE_INODE_SB_INFO(sb, name) \
@@ -94,7 +99,9 @@ static void scoutfs_inode_ctor(void *obj)
init_rwsem(&si->xattr_rwsem);
INIT_LIST_HEAD(&si->writeback_entry);
scoutfs_lock_init_coverage(&si->ino_lock_cov);
atomic_set(&si->iput_count, 0);
INIT_LIST_HEAD(&si->iput_head);
si->iput_count = 0;
si->iput_flags = 0;
inode_init_once(&si->inode);
}
@@ -136,20 +143,22 @@ void scoutfs_destroy_inode(struct inode *inode)
static const struct inode_operations scoutfs_file_iops = {
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = scoutfs_removexattr,
.removexattr = generic_removexattr,
.get_acl = scoutfs_get_acl,
.fiemap = scoutfs_data_fiemap,
};
static const struct inode_operations scoutfs_special_iops = {
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = scoutfs_listxattr,
.removexattr = scoutfs_removexattr,
.removexattr = generic_removexattr,
.get_acl = scoutfs_get_acl,
};
/*
@@ -322,7 +331,6 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock)
load_inode(inode, &sinode);
atomic64_set(&si->last_refreshed, refresh_gen);
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
si->drop_invalidated = false;
}
} else {
ret = 0;
@@ -354,6 +362,7 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
LIST_HEAD(ind_locks);
int ret;
@@ -364,6 +373,13 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
if (ret)
return ret;
scoutfs_per_task_add(&si->pt_data_lock, &pt_ent, lock);
ret = block_truncate_page(inode->i_mapping, new_size, scoutfs_get_block_write);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
if (ret < 0)
goto unlock;
scoutfs_inode_queue_writeback(inode);
if (new_size != i_size_read(inode))
scoutfs_inode_inc_data_version(inode);
@@ -375,6 +391,7 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
unlock:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
@@ -507,10 +524,15 @@ retry:
if (ret)
goto out;
ret = scoutfs_acl_chmod_locked(inode, attr, lock, &ind_locks);
if (ret < 0)
goto release;
setattr_copy(inode, attr);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
release:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
out:
@@ -947,7 +969,8 @@ void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
static int update_index_items(struct super_block *sb,
struct scoutfs_inode_info *si, u64 ino, u8 type,
u64 major, u32 minor,
struct list_head *lock_list)
struct list_head *lock_list,
struct scoutfs_lock *primary)
{
struct scoutfs_lock *ins_lock;
struct scoutfs_lock *del_lock;
@@ -964,7 +987,7 @@ static int update_index_items(struct super_block *sb,
scoutfs_inode_init_index_key(&ins, type, major, minor, ino);
ins_lock = find_index_lock(lock_list, type, major, minor, ino);
ret = scoutfs_item_create_force(sb, &ins, NULL, 0, ins_lock);
ret = scoutfs_item_create_force(sb, &ins, NULL, 0, ins_lock, primary);
if (ret || !will_del_index(si, type, major, minor))
return ret;
@@ -976,7 +999,7 @@ static int update_index_items(struct super_block *sb,
del_lock = find_index_lock(lock_list, type, get_item_major(si, type),
get_item_minor(si, type), ino);
ret = scoutfs_item_delete_force(sb, &del, del_lock);
ret = scoutfs_item_delete_force(sb, &del, del_lock, primary);
if (ret) {
err = scoutfs_item_delete(sb, &ins, ins_lock);
BUG_ON(err);
@@ -988,7 +1011,8 @@ static int update_index_items(struct super_block *sb,
static int update_indices(struct super_block *sb,
struct scoutfs_inode_info *si, u64 ino, umode_t mode,
struct scoutfs_inode *sinode,
struct list_head *lock_list)
struct list_head *lock_list,
struct scoutfs_lock *primary)
{
struct index_update {
u8 type;
@@ -1008,7 +1032,7 @@ static int update_indices(struct super_block *sb,
continue;
ret = update_index_items(sb, si, ino, upd->type, upd->major,
upd->minor, lock_list);
upd->minor, lock_list, primary);
if (ret)
break;
}
@@ -1048,7 +1072,7 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
/* only race with other inode field stores once */
store_inode(&sinode, inode);
ret = update_indices(sb, si, ino, inode->i_mode, &sinode, lock_list);
ret = update_indices(sb, si, ino, inode->i_mode, &sinode, lock_list, lock);
BUG_ON(ret);
scoutfs_inode_init_key(&key, ino);
@@ -1317,7 +1341,7 @@ void scoutfs_inode_index_unlock(struct super_block *sb, struct list_head *list)
/* this is called on final inode cleanup so enoent is fine */
static int remove_index(struct super_block *sb, u64 ino, u8 type, u64 major,
u32 minor, struct list_head *ind_locks)
u32 minor, struct list_head *ind_locks, struct scoutfs_lock *primary)
{
struct scoutfs_key key;
struct scoutfs_lock *lock;
@@ -1326,7 +1350,7 @@ static int remove_index(struct super_block *sb, u64 ino, u8 type, u64 major,
scoutfs_inode_init_index_key(&key, type, major, minor, ino);
lock = find_index_lock(ind_locks, type, major, minor, ino);
ret = scoutfs_item_delete_force(sb, &key, lock);
ret = scoutfs_item_delete_force(sb, &key, lock, primary);
if (ret == -ENOENT)
ret = 0;
return ret;
@@ -1343,16 +1367,17 @@ static int remove_index(struct super_block *sb, u64 ino, u8 type, u64 major,
*/
static int remove_index_items(struct super_block *sb, u64 ino,
struct scoutfs_inode *sinode,
struct list_head *ind_locks)
struct list_head *ind_locks,
struct scoutfs_lock *primary)
{
umode_t mode = le32_to_cpu(sinode->mode);
int ret;
ret = remove_index(sb, ino, SCOUTFS_INODE_INDEX_META_SEQ_TYPE,
le64_to_cpu(sinode->meta_seq), 0, ind_locks);
le64_to_cpu(sinode->meta_seq), 0, ind_locks, primary);
if (ret == 0 && S_ISREG(mode))
ret = remove_index(sb, ino, SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE,
le64_to_cpu(sinode->data_seq), 0, ind_locks);
le64_to_cpu(sinode->data_seq), 0, ind_locks, primary);
return ret;
}
@@ -1442,7 +1467,6 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
si->have_item = false;
atomic64_set(&si->last_refreshed, lock->refresh_gen);
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
si->drop_invalidated = false;
si->flags = 0;
scoutfs_inode_set_meta_seq(inode);
@@ -1485,22 +1509,24 @@ static void init_orphan_key(struct scoutfs_key *key, u64 ino)
* zone under a write only lock while the caller has the inode protected
* by a write lock.
*/
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock)
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
struct scoutfs_lock *primary)
{
struct scoutfs_key key;
init_orphan_key(&key, ino);
return scoutfs_item_create_force(sb, &key, NULL, 0, lock);
return scoutfs_item_create_force(sb, &key, NULL, 0, lock, primary);
}
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock)
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
struct scoutfs_lock *primary)
{
struct scoutfs_key key;
init_orphan_key(&key, ino);
return scoutfs_item_delete_force(sb, &key, lock);
return scoutfs_item_delete_force(sb, &key, lock, primary);
}
/*
@@ -1553,7 +1579,7 @@ retry:
release = true;
ret = remove_index_items(sb, ino, sinode, &ind_locks);
ret = remove_index_items(sb, ino, sinode, &ind_locks, lock);
if (ret)
goto out;
@@ -1568,7 +1594,7 @@ retry:
if (ret < 0)
goto out;
ret = scoutfs_inode_orphan_delete(sb, ino, orph_lock);
ret = scoutfs_inode_orphan_delete(sb, ino, orph_lock, lock);
if (ret < 0)
goto out;
@@ -1685,6 +1711,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
struct scoutfs_lock *lock = NULL;
struct scoutfs_inode sinode;
struct scoutfs_key key;
bool clear_trying = false;
u64 group_nr;
int bit_nr;
int ret;
@@ -1704,6 +1731,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
ret = 0;
goto out;
}
clear_trying = true;
/* can't delete if it's cached in local or remote mounts */
if (scoutfs_omap_test(sb, ino) || test_bit_le(bit_nr, ldata->map.bits)) {
@@ -1730,7 +1758,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
ret = delete_inode_items(sb, ino, &sinode, lock, orph_lock);
out:
if (ldata)
if (clear_trying)
clear_bit(bit_nr, ldata->trying);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
@@ -1740,18 +1768,18 @@ out:
}
/*
* As we drop an inode we need to decide whether to try to delete its
* items, which is expensive. The two common cases we want to get right
* both have cluster lock coverage and don't want to delete. Dropping
* unused inodes during read lock invalidation has the current lock and
* sees a nonzero nlink and knows not to delete. Final iput after a
* local unlink also has a lock, sees a zero nlink, and tries to perform
* item deletion in the task that dropped the last link, as users
* expect.
* As we evict an inode we need to decide whether to try to delete its
* items, which is expensive. We only try when we have lock coverage
* and the inode has been unlinked. This catches the common case of
* regular deletion so deletion will be performed in the final unlink
* task. It also catches open-unlink or o_tmpfile that aren't cached on
* other nodes.
*
* Evicting an inode outside of cluster locking is the odd slow path
* that involves lock contention during use; the worst is the
* cross-mount open-unlink/delete case.
* Inodes being evicted outside of lock coverage, by referenced dentries
* or inodes that survived the attempt to drop them as their lock was
* invalidated, will not try to delete. This means that cross-mount
* open/unlink will almost certainly fall back to the orphan scanner to
* perform final deletion.
*/
void scoutfs_evict_inode(struct inode *inode)
{
@@ -1767,7 +1795,7 @@ void scoutfs_evict_inode(struct inode *inode)
/* clear before trying to delete tests */
scoutfs_omap_clear(sb, ino);
if (!scoutfs_lock_is_covered(sb, &si->ino_lock_cov) || inode->i_nlink == 0)
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink == 0)
try_delete_inode_items(sb, scoutfs_ino(inode));
}
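The flipped condition is the heart of this hunk: the old test attempted deletion whenever the inode had lost lock coverage or was unlinked, so every uncovered linked inode paid for a deletion attempt. A trivial sketch of the corrected predicate, with hypothetical names:

#include <stdbool.h>

/* only covered, unlinked inodes are worth an expensive deletion attempt */
static bool should_try_delete(bool lock_covered, unsigned int nlink)
{
	return lock_covered && nlink == 0;
}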
@@ -1792,30 +1820,56 @@ int scoutfs_drop_inode(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const bool covered = scoutfs_lock_is_covered(sb, &si->ino_lock_cov);
trace_scoutfs_drop_inode(sb, scoutfs_ino(inode), inode->i_nlink, inode_unhashed(inode),
si->drop_invalidated);
covered);
return si->drop_invalidated || !scoutfs_lock_is_covered(sb, &si->ino_lock_cov) ||
generic_drop_inode(inode);
return !covered || generic_drop_inode(inode);
}
/*
* These iput workers can be concurrent amongst cpus. This lets us get
* some concurrency when these async final iputs end up performing very
* expensive inode deletion. Typically they're dropping linked inodes
* that lost lock coverage and the iput will evict without deleting.
*
* Keep in mind that the dputs in d_prune can ascend into parents and
* end up performing the final iput->evict deletion on other inodes.
*/
static void iput_worker(struct work_struct *work)
{
struct inode_sb_info *inf = container_of(work, struct inode_sb_info, iput_work);
struct scoutfs_inode_info *si;
struct scoutfs_inode_info *tmp;
struct llist_node *inodes;
bool more;
struct inode *inode;
unsigned long count;
unsigned long flags;
inodes = llist_del_all(&inf->iput_llist);
spin_lock(&inf->iput_lock);
while ((si = list_first_entry_or_null(&inf->iput_list, struct scoutfs_inode_info,
iput_head))) {
list_del_init(&si->iput_head);
count = si->iput_count;
flags = si->iput_flags;
si->iput_count = 0;
si->iput_flags = 0;
spin_unlock(&inf->iput_lock);
llist_for_each_entry_safe(si, tmp, inodes, iput_llnode) {
do {
more = atomic_dec_return(&si->iput_count) > 0;
iput(&si->inode);
} while (more);
inode = &si->inode;
/* can't touch during unmount, dcache destroys w/o locks */
if ((flags & SI_IPUT_FLAG_PRUNE) && !inf->stopped)
d_prune_aliases(inode);
while (count-- > 0)
iput(inode);
/* can't touch inode after final iput */
spin_lock(&inf->iput_lock);
}
spin_unlock(&inf->iput_lock);
}
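The worker's locking follows a common drain pattern: pop one entry under the lock, snapshot its count and flags, then drop the lock around the expensive puts because the final put can free the containing object. A userspace sketch of the pattern under pthreads; struct pending, queue_pending, and drain_pending are hypothetical names:

#include <pthread.h>
#include <stddef.h>

struct pending {
	struct pending *next;
	unsigned long count;
	void (*release)(void *obj);	/* set when the entry is initialized */
	void *obj;
};

static pthread_mutex_t pend_lock = PTHREAD_MUTEX_INITIALIZER;
static struct pending *pend_head;

/* multiple puts before the worker runs just bump the count */
static void queue_pending(struct pending *p)
{
	pthread_mutex_lock(&pend_lock);
	if (p->count++ == 0) {		/* only the first put links the node */
		p->next = pend_head;
		pend_head = p;
	}
	pthread_mutex_unlock(&pend_lock);
}

static void drain_pending(void)
{
	struct pending *p;

	pthread_mutex_lock(&pend_lock);
	while ((p = pend_head) != NULL) {
		unsigned long count = p->count;
		void (*release)(void *) = p->release;
		void *obj = p->obj;

		pend_head = p->next;
		p->next = NULL;
		p->count = 0;

		/* the last release may free p's container: from here on
		 * use only the snapshotted locals, never p itself */
		pthread_mutex_unlock(&pend_lock);
		while (count-- > 0)
			release(obj);
		pthread_mutex_lock(&pend_lock);
	}
	pthread_mutex_unlock(&pend_lock);
}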
/*
@@ -1832,15 +1886,21 @@ static void iput_worker(struct work_struct *work)
* Nothing stops multiple puts of an inode before the work runs so we
* can track multiple puts in flight.
*/
void scoutfs_inode_queue_iput(struct inode *inode)
void scoutfs_inode_queue_iput(struct inode *inode, unsigned long flags)
{
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
bool should_queue;
if (atomic_inc_return(&si->iput_count) == 1)
llist_add(&si->iput_llnode, &inf->iput_llist);
smp_wmb(); /* count and list visible before work executes */
schedule_work(&inf->iput_work);
spin_lock(&inf->iput_lock);
si->iput_count++;
si->iput_flags |= flags;
if ((should_queue = list_empty(&si->iput_head)))
list_add_tail(&si->iput_head, &inf->iput_list);
spin_unlock(&inf->iput_lock);
if (should_queue)
queue_work(inf->iput_workq, &inf->iput_work);
}
/*
@@ -2044,7 +2104,7 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
trace_scoutfs_inode_walk_writeback(sb, scoutfs_ino(inode),
write, ret);
if (ret) {
scoutfs_inode_queue_iput(inode);
scoutfs_inode_queue_iput(inode, 0);
goto out;
}
@@ -2060,7 +2120,7 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
if (!write)
list_del_init(&si->writeback_entry);
scoutfs_inode_queue_iput(inode);
scoutfs_inode_queue_iput(inode, 0);
}
spin_unlock(&inf->writeback_lock);
@@ -2085,7 +2145,15 @@ int scoutfs_inode_setup(struct super_block *sb)
spin_lock_init(&inf->ino_alloc.lock);
INIT_DELAYED_WORK(&inf->orphan_scan_dwork, inode_orphan_scan_worker);
INIT_WORK(&inf->iput_work, iput_worker);
init_llist_head(&inf->iput_llist);
spin_lock_init(&inf->iput_lock);
INIT_LIST_HEAD(&inf->iput_list);
/* re-entrant, worker locks with itself and queueing */
inf->iput_workq = alloc_workqueue("scoutfs_inode_iput", WQ_UNBOUND, 0);
if (!inf->iput_workq) {
kfree(inf);
return -ENOMEM;
}
sbi->inode_sb_info = inf;
@@ -2121,14 +2189,18 @@ void scoutfs_inode_flush_iput(struct super_block *sb)
DECLARE_INODE_SB_INFO(sb, inf);
if (inf)
flush_work(&inf->iput_work);
flush_workqueue(inf->iput_workq);
}
void scoutfs_inode_destroy(struct super_block *sb)
{
struct inode_sb_info *inf = SCOUTFS_SB(sb)->inode_sb_info;
kfree(inf);
if (inf) {
if (inf->iput_workq)
destroy_workqueue(inf->iput_workq);
kfree(inf);
}
}
void scoutfs_inode_exit(void)

View File

@@ -56,14 +56,16 @@ struct scoutfs_inode_info {
struct scoutfs_lock_coverage ino_lock_cov;
/* drop if i_count hits 0, allows drop while invalidate holds coverage */
bool drop_invalidated;
struct llist_node iput_llnode;
atomic_t iput_count;
struct list_head iput_head;
unsigned long iput_count;
unsigned long iput_flags;
struct inode inode;
};
/* try to prune dcache aliases with queued iput */
#define SI_IPUT_FLAG_PRUNE (1 << 0)
static inline struct scoutfs_inode_info *SCOUTFS_I(struct inode *inode)
{
return container_of(inode, struct scoutfs_inode_info, inode);
@@ -78,7 +80,7 @@ struct inode *scoutfs_alloc_inode(struct super_block *sb);
void scoutfs_destroy_inode(struct inode *inode);
int scoutfs_drop_inode(struct inode *inode);
void scoutfs_evict_inode(struct inode *inode);
void scoutfs_inode_queue_iput(struct inode *inode);
void scoutfs_inode_queue_iput(struct inode *inode, unsigned long flags);
#define SCOUTFS_IGF_LINKED (1 << 0) /* enoent if nlink == 0 */
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf, int igf);
@@ -125,8 +127,10 @@ int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat);
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
struct scoutfs_lock *primary);
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
struct scoutfs_lock *primary);
void scoutfs_inode_schedule_orphan_dwork(struct super_block *sb);
void scoutfs_inode_queue_writeback(struct inode *inode);

View File

@@ -1676,6 +1676,14 @@ static int lock_safe(struct scoutfs_lock *lock, struct scoutfs_key *key,
return 0;
}
static int optional_lock_mode_match(struct scoutfs_lock *lock, int mode)
{
if (WARN_ON_ONCE(lock && lock->mode != mode))
return -EINVAL;
else
return 0;
}
/*
* Copy the cached item's value into the caller's value. The number of
* bytes copied is returned. A null val returns 0.
@@ -1832,12 +1840,19 @@ out:
* also increase the seqs. It lets us limit the inputs of item merging
* to the last stable seq and ensure that all the items in open
* transactions and granted locks will have greater seqs.
*
* This is a little awkward for WRITE_ONLY locks which can have much
* older versions than the version of locked primary data that they're
* operating on behalf of. Callers can optionally provide that primary
* lock to get the version from. This ensures that items created under
* WRITE_ONLY locks cannot have versions less than their primary data.
*/
static u64 item_seq(struct super_block *sb, struct scoutfs_lock *lock)
static u64 item_seq(struct super_block *sb, struct scoutfs_lock *lock,
struct scoutfs_lock *primary)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return max(sbi->trans_seq, lock->write_seq);
return max3(sbi->trans_seq, lock->write_seq, primary ? primary->write_seq : 0);
}
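The max3 is the whole mechanism: the primary lock's write_seq acts as a floor under the seq chosen for items dirtied through a WRITE_ONLY lock. A sketch with hypothetical names:

#include <stdint.h>

/*
 * Pick the seq for a dirtied item. The optional primary lock raises
 * the floor so WRITE_ONLY items can't sort before their primary data.
 */
static uint64_t pick_item_seq(uint64_t trans_seq, uint64_t lock_write_seq,
			      const uint64_t *primary_write_seq)
{
	uint64_t seq = trans_seq > lock_write_seq ? trans_seq : lock_write_seq;

	if (primary_write_seq && *primary_write_seq > seq)
		seq = *primary_write_seq;
	return seq;
}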
/*
@@ -1872,7 +1887,7 @@ int scoutfs_item_dirty(struct super_block *sb, struct scoutfs_key *key,
if (!item || item->deletion) {
ret = -ENOENT;
} else {
item->seq = item_seq(sb, lock);
item->seq = item_seq(sb, lock, NULL);
mark_item_dirty(sb, cinf, pg, NULL, item);
ret = 0;
}
@@ -1889,10 +1904,10 @@ out:
*/
static int item_create(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock,
int mode, bool force)
struct scoutfs_lock *primary, int mode, bool force)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
const u64 seq = item_seq(sb, lock, primary);
struct cached_item *found;
struct cached_item *item;
struct cached_page *pg;
@@ -1902,7 +1917,8 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
scoutfs_inc_counter(sb, item_create);
if ((ret = lock_safe(lock, key, mode)))
if ((ret = lock_safe(lock, key, mode)) ||
(ret = optional_lock_mode_match(primary, SCOUTFS_LOCK_WRITE)))
goto out;
ret = scoutfs_forest_set_bloom_bits(sb, lock);
@@ -1943,15 +1959,15 @@ out:
int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
return item_create(sb, key, val, val_len, lock,
return item_create(sb, key, val, val_len, lock, NULL,
SCOUTFS_LOCK_READ, false);
}
int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock)
struct scoutfs_lock *lock, struct scoutfs_lock *primary)
{
return item_create(sb, key, val, val_len, lock,
return item_create(sb, key, val, val_len, lock, primary,
SCOUTFS_LOCK_WRITE_ONLY, true);
}
@@ -1965,7 +1981,7 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
const u64 seq = item_seq(sb, lock, NULL);
struct cached_item *item;
struct cached_item *found;
struct cached_page *pg;
@@ -2025,12 +2041,16 @@ out:
* current items so the caller always writes with write only locks. If
* combining the current delta item and the caller's item results in a
* null we can just drop it, we don't have to emit a deletion item.
*
* Delta items don't have to worry about creating items with old
* versions under write_only locks. The versions don't impact how we
* merge two items.
*/
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
const u64 seq = item_seq(sb, lock, NULL);
struct cached_item *item;
struct cached_page *pg;
struct rb_node **pnode;
@@ -2099,10 +2119,11 @@ out:
* deletion item if there isn't one already cached.
*/
static int item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock, int mode, bool force)
struct scoutfs_lock *lock, struct scoutfs_lock *primary,
int mode, bool force)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
const u64 seq = item_seq(sb, lock, primary);
struct cached_item *item;
struct cached_page *pg;
struct rb_node **pnode;
@@ -2111,7 +2132,8 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
scoutfs_inc_counter(sb, item_delete);
if ((ret = lock_safe(lock, key, mode)))
if ((ret = lock_safe(lock, key, mode)) ||
(ret = optional_lock_mode_match(primary, SCOUTFS_LOCK_WRITE)))
goto out;
ret = scoutfs_forest_set_bloom_bits(sb, lock);
@@ -2161,13 +2183,13 @@ out:
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock)
{
return item_delete(sb, key, lock, SCOUTFS_LOCK_WRITE, false);
return item_delete(sb, key, lock, NULL, SCOUTFS_LOCK_WRITE, false);
}
int scoutfs_item_delete_force(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock)
struct scoutfs_lock *lock, struct scoutfs_lock *primary)
{
return item_delete(sb, key, lock, SCOUTFS_LOCK_WRITE_ONLY, true);
return item_delete(sb, key, lock, primary, SCOUTFS_LOCK_WRITE_ONLY, true);
}
u64 scoutfs_item_dirty_pages(struct super_block *sb)

View File

@@ -15,16 +15,15 @@ int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock);
struct scoutfs_lock *lock, struct scoutfs_lock *primary);
int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_delete_force(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_delete_force(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock, struct scoutfs_lock *primary);
u64 scoutfs_item_dirty_pages(struct super_block *sb);
int scoutfs_item_write_dirty(struct super_block *sb);

View File

@@ -46,4 +46,10 @@ static inline int dir_emit_dots(struct file *file, void *dirent,
}
#endif
#ifdef KC_POSIX_ACL_VALID_USER_NS
#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(user_ns, acl)
#else
#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(acl)
#endif
#endif

View File

@@ -18,6 +18,7 @@
#include <linux/mm.h>
#include <linux/sort.h>
#include <linux/ctype.h>
#include <linux/posix_acl.h>
#include "super.h"
#include "lock.h"
@@ -129,16 +130,13 @@ static bool lock_modes_match(int granted, int requested)
* allows deletions to be performed by unlink without having to wait for
* remote cached inodes to be dropped.
*
* If the cached inode was already deferring final inode deletion then
* we can't perform that inline in invalidation. The locking alone
* could deadlock, and it might also take multiple transactions to fully
* delete an inode with significant metadata. We only perform the iput
* inline if we know that possible eviction can't perform the final
* deletion, otherwise we kick it off to async work.
* We kick the d_prune and iput off to async work because they can end
* up in final iput and inode eviction item deletion which would
* deadlock. d_prune->dput can end up in iput on parents under entirely
* different locks.
*/
static void invalidate_inode(struct super_block *sb, u64 ino)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_inode_info *si;
struct inode *inode;
@@ -152,17 +150,9 @@ static void invalidate_inode(struct super_block *sb, u64 ino)
scoutfs_data_wait_changed(inode);
}
/* can't touch during unmount, dcache destroys w/o locks */
if (!linfo->unmounting)
d_prune_aliases(inode);
forget_all_cached_acls(inode);
si->drop_invalidated = true;
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink > 0) {
iput(inode);
} else {
/* defer iput to work context so we don't evict inodes from invalidation */
scoutfs_inode_queue_iput(inode);
}
scoutfs_inode_queue_iput(inode, SI_IPUT_FLAG_PRUNE);
}
}
@@ -198,16 +188,6 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
/* have to invalidate if we're not in the only usable case */
if (!(prev == SCOUTFS_LOCK_WRITE && mode == SCOUTFS_LOCK_READ)) {
retry:
/* invalidate inodes before removing coverage */
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
ino = le64_to_cpu(lock->start.ski_ino);
last = le64_to_cpu(lock->end.ski_ino);
while (ino <= last) {
invalidate_inode(sb, ino);
ino++;
}
}
/* remove cov items to tell users that their cache is stale */
spin_lock(&lock->cov_list_lock);
list_for_each_entry_safe(cov, tmp, &lock->cov_list, head) {
@@ -223,6 +203,16 @@ retry:
}
spin_unlock(&lock->cov_list_lock);
/* invalidate inodes after removing coverage so drop/evict aren't covered */
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
ino = le64_to_cpu(lock->start.ski_ino);
last = le64_to_cpu(lock->end.ski_ino);
while (ino <= last) {
invalidate_inode(sb, ino);
ino++;
}
}
scoutfs_item_invalidate(sb, &lock->start, &lock->end);
}
@@ -1525,6 +1515,38 @@ void scoutfs_lock_flush_invalidate(struct super_block *sb)
flush_work(&linfo->inv_work);
}
static u64 get_held_lock_refresh_gen(struct super_block *sb, struct scoutfs_key *start)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
u64 refresh_gen = 0;
/* this can be called from all manner of places */
if (!linfo)
return 0;
spin_lock(&linfo->lock);
lock = lock_lookup(sb, start, NULL);
if (lock) {
if (lock_mode_can_read(lock->mode))
refresh_gen = lock->refresh_gen;
}
spin_unlock(&linfo->lock);
return refresh_gen;
}
u64 scoutfs_lock_ino_refresh_gen(struct super_block *sb, u64 ino)
{
struct scoutfs_key start;
scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_FS_ZONE;
start.ski_ino = cpu_to_le64(ino & ~(u64)SCOUTFS_LOCK_INODE_GROUP_MASK);
return get_held_lock_refresh_gen(sb, &start);
}
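Inode locks cover groups of inode numbers, so looking up the held lock for an inode means masking down to the group's first inode and using that as the start key. A sketch with an assumed group size; the real mask is SCOUTFS_LOCK_INODE_GROUP_MASK:

#include <stdint.h>

#define INODE_GROUP_MASK 1023ULL	/* assumed group size of 1024 inodes */

/* the start key of an inode's lock holds its group's first inode number */
static uint64_t lock_group_start_ino(uint64_t ino)
{
	return ino & ~INODE_GROUP_MASK;
}

d_revalidate compares the dentry's stamped refresh_gen against the gen of whatever readable lock is currently held for that group; when no such lock is held the helper returns 0, the comparison fails, and the name is looked up again.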
/*
* The caller is going to be shutting down transactions and the client.
* We need to make sure that locking won't call either after we return.

View File

@@ -100,6 +100,8 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
enum scoutfs_lock_mode mode);
u64 scoutfs_lock_ino_refresh_gen(struct super_block *sb, u64 ino);
void scoutfs_free_unused_locks(struct super_block *sb);
int scoutfs_lock_setup(struct super_block *sb);

View File

@@ -355,6 +355,7 @@ static int submit_send(struct super_block *sb,
}
if (rid != 0) {
spin_unlock(&conn->lock);
kfree(msend);
return -ENOTCONN;
}
}
@@ -1345,10 +1346,12 @@ scoutfs_net_alloc_conn(struct super_block *sb,
if (!conn)
return NULL;
conn->info = kzalloc(info_size, GFP_NOFS);
if (!conn->info) {
kfree(conn);
return NULL;
if (info_size) {
conn->info = kzalloc(info_size, GFP_NOFS);
if (!conn->info) {
kfree(conn);
return NULL;
}
}
conn->workq = alloc_workqueue("scoutfs_net_%s",

View File

@@ -157,6 +157,15 @@ static int free_rid(struct omap_rid_list *list, struct omap_rid_entry *entry)
return nr;
}
static void free_rid_list(struct omap_rid_list *list)
{
struct omap_rid_entry *entry;
struct omap_rid_entry *tmp;
list_for_each_entry_safe(entry, tmp, &list->head, head)
free_rid(list, entry);
}
static int copy_rids(struct omap_rid_list *to, struct omap_rid_list *from, spinlock_t *from_lock)
{
struct omap_rid_entry *entry;
@@ -804,6 +813,10 @@ void scoutfs_omap_server_shutdown(struct super_block *sb)
llist_for_each_entry_safe(req, tmp, requests, llnode)
kfree(req);
spin_lock(&ominf->lock);
free_rid_list(&ominf->rids);
spin_unlock(&ominf->lock);
synchronize_rcu();
}
@@ -864,6 +877,10 @@ void scoutfs_omap_destroy(struct super_block *sb)
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
spin_lock(&ominf->lock);
free_rid_list(&ominf->rids);
spin_unlock(&ominf->lock);
rhashtable_destroy(&ominf->group_ht);
rhashtable_destroy(&ominf->req_ht);
kfree(ominf);

View File

@@ -27,16 +27,25 @@
#include "options.h"
#include "super.h"
#include "inode.h"
#include "alloc.h"
enum {
Opt_acl,
Opt_data_prealloc_blocks,
Opt_data_prealloc_contig_only,
Opt_metadev_path,
Opt_noacl,
Opt_orphan_scan_delay_ms,
Opt_quorum_slot_nr,
Opt_err,
};
static const match_table_t tokens = {
{Opt_acl, "acl"},
{Opt_data_prealloc_blocks, "data_prealloc_blocks=%s"},
{Opt_data_prealloc_contig_only, "data_prealloc_contig_only=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_noacl, "noacl"},
{Opt_orphan_scan_delay_ms, "orphan_scan_delay_ms=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_err, NULL}
@@ -106,11 +115,17 @@ static void free_options(struct scoutfs_mount_options *opts)
#define DEFAULT_ORPHAN_SCAN_DELAY_MS (10 * MSEC_PER_SEC)
#define MAX_ORPHAN_SCAN_DELAY_MS (60 * MSEC_PER_SEC)
#define MIN_DATA_PREALLOC_BLOCKS 1ULL
#define MAX_DATA_PREALLOC_BLOCKS ((unsigned long long)SCOUTFS_BLOCK_SM_MAX)
static void init_default_options(struct scoutfs_mount_options *opts)
{
memset(opts, 0, sizeof(*opts));
opts->data_prealloc_blocks = SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS;
opts->data_prealloc_contig_only = 1;
opts->quorum_slot_nr = -1;
opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
opts->orphan_scan_delay_ms = -1;
}
/*
@@ -122,6 +137,7 @@ static void init_default_options(struct scoutfs_mount_options *opts)
static int parse_options(struct super_block *sb, char *options, struct scoutfs_mount_options *opts)
{
substring_t args[MAX_OPT_ARGS];
u64 nr64;
int nr;
int token;
char *p;
@@ -134,12 +150,44 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
token = match_token(p, tokens, args);
switch (token) {
case Opt_acl:
sb->s_flags |= MS_POSIXACL;
break;
case Opt_data_prealloc_blocks:
ret = match_u64(args, &nr64);
if (ret < 0 ||
nr64 < MIN_DATA_PREALLOC_BLOCKS || nr64 > MAX_DATA_PREALLOC_BLOCKS) {
scoutfs_err(sb, "invalid data_prealloc_blocks option, must be between %llu and %llu",
MIN_DATA_PREALLOC_BLOCKS, MAX_DATA_PREALLOC_BLOCKS);
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->data_prealloc_blocks = nr64;
break;
case Opt_data_prealloc_contig_only:
ret = match_int(args, &nr);
if (ret < 0 || nr < 0 || nr > 1) {
scoutfs_err(sb, "invalid data_prealloc_contig_only option, bool must only be 0 or 1");
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->data_prealloc_contig_only = nr;
break;
case Opt_metadev_path:
ret = parse_bdev_path(sb, &args[0], &opts->metadev_path);
if (ret < 0)
return ret;
break;
case Opt_noacl:
sb->s_flags &= ~MS_POSIXACL;
break;
case Opt_orphan_scan_delay_ms:
if (opts->orphan_scan_delay_ms != -1) {
scoutfs_err(sb, "multiple orphan_scan_delay_ms options provided, only provide one.");
@@ -181,6 +229,9 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
}
}
if (opts->orphan_scan_delay_ms == -1)
opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
if (!opts->metadev_path) {
scoutfs_err(sb, "Required mount option \"metadev_path\" not found");
return -EINVAL;
@@ -250,10 +301,17 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
{
struct super_block *sb = root->d_sb;
struct scoutfs_mount_options opts;
const bool is_acl = !!(sb->s_flags & MS_POSIXACL);
scoutfs_options_read(sb, &opts);
if (is_acl)
seq_puts(seq, ",acl");
seq_printf(seq, ",data_prealloc_blocks=%llu", opts.data_prealloc_blocks);
seq_printf(seq, ",data_prealloc_contig_only=%u", opts.data_prealloc_contig_only);
seq_printf(seq, ",metadev_path=%s", opts.metadev_path);
if (!is_acl)
seq_puts(seq, ",noacl");
seq_printf(seq, ",orphan_scan_delay_ms=%u", opts.orphan_scan_delay_ms);
if (opts.quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts.quorum_slot_nr);
@@ -261,6 +319,83 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
return 0;
}
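Putting scoutfs_options_show() together, a mount would render its options in /proc/mounts roughly like the line below (all values are illustrative):

	acl,data_prealloc_blocks=256,data_prealloc_contig_only=1,metadev_path=/dev/mapper/scoutfs-meta,orphan_scan_delay_ms=10000,quorum_slot_nr=0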
static ssize_t data_prealloc_blocks_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%llu", opts.data_prealloc_blocks);
}
static ssize_t data_prealloc_blocks_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[30]; /* more than enough for octal -U64_MAX */
u64 val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtoull(nullterm, 0, &val);
if (ret < 0 || val < MIN_DATA_PREALLOC_BLOCKS || val > MAX_DATA_PREALLOC_BLOCKS) {
scoutfs_err(sb, "invalid data_prealloc_blocks option, must be between %llu and %llu",
MIN_DATA_PREALLOC_BLOCKS, MAX_DATA_PREALLOC_BLOCKS);
return -EINVAL;
}
write_seqlock(&optinf->seqlock);
optinf->opts.data_prealloc_blocks = val;
write_sequnlock(&optinf->seqlock);
return count;
}
SCOUTFS_ATTR_RW(data_prealloc_blocks);
static ssize_t data_prealloc_contig_only_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%u", opts.data_prealloc_contig_only);
}
static ssize_t data_prealloc_contig_only_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[20]; /* more than enough for octal -U32_MAX */
long val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtol(nullterm, 0, &val);
if (ret < 0 || val < 0 || val > 1) {
scoutfs_err(sb, "invalid data_prealloc_contig_only option, bool must be 0 or 1");
return -EINVAL;
}
write_seqlock(&optinf->seqlock);
optinf->opts.data_prealloc_contig_only = val;
write_sequnlock(&optinf->seqlock);
return count;
}
SCOUTFS_ATTR_RW(data_prealloc_contig_only);
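Both store handlers publish under optinf->seqlock, so the read side (scoutfs_options_read(), not shown in this hunk) presumably pairs with it as in this minimal sketch; struct options_info and its members are assumed from the DECLARE_OPTIONS_INFO usage above:

	static void options_read_sketch(struct options_info *optinf,
					struct scoutfs_mount_options *opts)
	{
		unsigned int seq;

		/* retry the copy until it doesn't race a write_seqlock section */
		do {
			seq = read_seqbegin(&optinf->seqlock);
			*opts = optinf->opts;
		} while (read_seqretry(&optinf->seqlock, seq));
	}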
static ssize_t metadev_path_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
@@ -325,6 +460,8 @@ static ssize_t quorum_slot_nr_show(struct kobject *kobj, struct kobj_attribute *
SCOUTFS_ATTR_RO(quorum_slot_nr);
static struct attribute *options_attrs[] = {
SCOUTFS_ATTR_PTR(data_prealloc_blocks),
SCOUTFS_ATTR_PTR(data_prealloc_contig_only),
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),
SCOUTFS_ATTR_PTR(quorum_slot_nr),

View File

@@ -6,6 +6,8 @@
#include "format.h"
struct scoutfs_mount_options {
u64 data_prealloc_blocks;
bool data_prealloc_contig_only;
char *metadev_path;
unsigned int orphan_scan_delay_ms;
int quorum_slot_nr;

View File

@@ -114,6 +114,7 @@ struct quorum_status {
struct quorum_info {
struct super_block *sb;
struct scoutfs_quorum_config qconf;
struct work_struct work;
struct socket *sock;
bool shutdown;
@@ -134,11 +135,18 @@ struct quorum_info {
#define DECLARE_QUORUM_INFO_KOBJ(kobj, name) \
DECLARE_QUORUM_INFO(SCOUTFS_SYSFS_ATTRS_SB(kobj), name)
static bool quorum_slot_present(struct scoutfs_super_block *super, int i)
static bool quorum_slot_present(struct scoutfs_quorum_config *qconf, int i)
{
BUG_ON(i < 0 || i >= SCOUTFS_QUORUM_MAX_SLOTS);
return super->qconf.slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
return qconf->slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
}
static void quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_in *sin)
{
BUG_ON(i < 0 || i >= SCOUTFS_QUORUM_MAX_SLOTS);
scoutfs_addr_to_sin(sin, &qconf->slots[i].addr);
}
static ktime_t election_timeout(void)
@@ -160,7 +168,6 @@ static ktime_t heartbeat_timeout(void)
static int create_socket(struct super_block *sb)
{
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct socket *sock = NULL;
struct sockaddr_in sin;
int addrlen;
@@ -174,7 +181,7 @@ static int create_socket(struct super_block *sb)
sock->sk->sk_allocation = GFP_NOFS;
scoutfs_quorum_slot_sin(super, qinf->our_quorum_slot_nr, &sin);
quorum_slot_sin(&qinf->qconf, qinf->our_quorum_slot_nr, &sin);
addrlen = sizeof(sin);
ret = kernel_bind(sock, (struct sockaddr *)&sin, addrlen);
@@ -204,13 +211,13 @@ static __le32 quorum_message_crc(struct scoutfs_quorum_message *qmes)
static void send_msg_members(struct super_block *sb, int type, u64 term,
int only)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
ktime_t now;
int i;
struct scoutfs_quorum_message qmes = {
.fsid = super->hdr.fsid,
.fsid = cpu_to_le64(sbi->fsid),
.term = cpu_to_le64(term),
.type = type,
.from = qinf->our_quorum_slot_nr,
@@ -234,11 +241,11 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i) ||
if (!quorum_slot_present(&qinf->qconf, i) ||
(only >= 0 && i != only) || i == qinf->our_quorum_slot_nr)
continue;
scoutfs_quorum_slot_sin(super, i, &sin);
scoutfs_quorum_slot_sin(&qinf->qconf, i, &sin);
now = ktime_get();
kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
@@ -266,7 +273,7 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
ktime_t abs_to)
{
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_quorum_message qmes;
struct timeval tv;
ktime_t rel_to;
@@ -309,10 +316,10 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
if (ret != sizeof(qmes) ||
qmes.crc != quorum_message_crc(&qmes) ||
qmes.fsid != super->hdr.fsid ||
qmes.fsid != cpu_to_le64(sbi->fsid) ||
qmes.type >= SCOUTFS_QUORUM_MSG_INVALID ||
qmes.from >= SCOUTFS_QUORUM_MAX_SLOTS ||
!quorum_slot_present(super, qmes.from)) {
!quorum_slot_present(&qinf->qconf, qmes.from)) {
/* should we be trying to open a new socket? */
scoutfs_inc_counter(sb, quorum_recv_invalid);
return -EAGAIN;
@@ -342,7 +349,7 @@ static int read_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_q
bool check_rid)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
const u64 fsid = sbi->fsid;
const u64 rid = sbi->rid;
char msg[150];
__le32 crc;
@@ -367,9 +374,9 @@ static int read_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_q
else if (le32_to_cpu(blk->hdr.magic) != SCOUTFS_BLOCK_MAGIC_QUORUM)
snprintf(msg, sizeof(msg), "blk magic %08x != %08x",
le32_to_cpu(blk->hdr.magic), SCOUTFS_BLOCK_MAGIC_QUORUM);
else if (blk->hdr.fsid != super->hdr.fsid)
else if (blk->hdr.fsid != cpu_to_le64(fsid))
snprintf(msg, sizeof(msg), "blk fsid %016llx != %016llx",
le64_to_cpu(blk->hdr.fsid), le64_to_cpu(super->hdr.fsid));
le64_to_cpu(blk->hdr.fsid), fsid);
else if (le64_to_cpu(blk->hdr.blkno) != blkno)
snprintf(msg, sizeof(msg), "blk blkno %llu != %llu",
le64_to_cpu(blk->hdr.blkno), blkno);
@@ -410,8 +417,7 @@ out:
*/
static void read_greatest_term(struct super_block *sb, u64 *term)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_quorum_block blk;
int ret;
int e;
@@ -420,7 +426,7 @@ static void read_greatest_term(struct super_block *sb, u64 *term)
*term = 0;
for (s = 0; s < SCOUTFS_QUORUM_MAX_SLOTS; s++) {
if (!quorum_slot_present(super, s))
if (!quorum_slot_present(&qinf->qconf, s))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + s, &blk, false);
@@ -514,14 +520,15 @@ static int update_quorum_block(struct super_block *sb, int event, u64 term, bool
* keeps us from being fenced while we allow userspace fencing to take a
* reasonably long time. We still want to timeout eventually.
*/
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_config *qconf,
u64 term)
{
#define NR_OLD 2
struct scoutfs_quorum_block_event old[SCOUTFS_QUORUM_MAX_SLOTS][NR_OLD] = {{{0,}}};
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_quorum_block blk;
struct sockaddr_in sin;
const __le64 lefsid = cpu_to_le64(sbi->fsid);
const u64 rid = sbi->rid;
bool fence_started = false;
u64 fenced = 0;
@@ -534,7 +541,7 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
BUILD_BUG_ON(SCOUTFS_QUORUM_BLOCKS < SCOUTFS_QUORUM_MAX_SLOTS);
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
if (!quorum_slot_present(qconf, i))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
@@ -567,11 +574,11 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
continue;
scoutfs_inc_counter(sb, quorum_fence_leader);
scoutfs_quorum_slot_sin(super, i, &sin);
quorum_slot_sin(qconf, i, &sin);
fence_rid = old[i][j].rid;
scoutfs_info(sb, "fencing previous leader "SCSBF" at term %llu in slot %u with address "SIN_FMT,
SCSB_LEFR_ARGS(super->hdr.fsid, fence_rid),
SCSB_LEFR_ARGS(lefsid, fence_rid),
le64_to_cpu(old[i][j].term), i, SIN_ARG(&sin));
ret = scoutfs_fence_start(sb, le64_to_cpu(fence_rid), sin.sin_addr.s_addr,
SCOUTFS_FENCE_QUORUM_BLOCK_LEADER);
@@ -752,7 +759,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
qst.server_start_term = qst.term;
qst.server_event = SCOUTFS_QUORUM_EVENT_ELECT;
scoutfs_server_start(sb, qst.term);
scoutfs_server_start(sb, &qinf->qconf, qst.term);
}
/*
@@ -877,16 +884,25 @@ out:
*/
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super = NULL;
struct scoutfs_quorum_block blk;
u64 elect_term;
u64 term = 0;
int ret = 0;
int i;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
if (!quorum_slot_present(&super->qconf, i))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
@@ -900,7 +916,7 @@ int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
if (elect_term > term &&
elect_term > le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_STOP].term)) {
term = elect_term;
scoutfs_quorum_slot_sin(super, i, sin);
scoutfs_quorum_slot_sin(&super->qconf, i, sin);
continue;
}
}
@@ -909,6 +925,7 @@ int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
ret = -ENOENT;
out:
kfree(super);
return ret;
}
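The shape above — read the current super into a short-lived heap buffer instead of keeping a long-lived copy — recurs throughout this change set; a minimal sketch, with use_super() as a hypothetical consumer:

	struct scoutfs_super_block *super;
	int ret;

	/* the super block struct is large, so it never lives on the stack */
	super = kmalloc(sizeof(*super), GFP_NOFS);
	if (!super)
		return -ENOMEM;

	ret = scoutfs_read_super(sb, super);
	if (ret == 0)
		ret = use_super(sb, super);	/* hypothetical consumer */

	kfree(super);
	return ret;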
@@ -924,12 +941,9 @@ u8 scoutfs_quorum_votes_needed(struct super_block *sb)
return qinf->votes_needed;
}
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
struct sockaddr_in *sin)
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_in *sin)
{
BUG_ON(i < 0 || i >= SCOUTFS_QUORUM_MAX_SLOTS);
scoutfs_addr_to_sin(sin, &super->qconf.slots[i].addr);
return quorum_slot_sin(qconf, i, sin);
}
static char *role_str(int role)
@@ -1060,11 +1074,10 @@ static inline bool valid_ipv4_port(__be16 port)
return port != 0 && be16_to_cpu(port) != U16_MAX;
}
static int verify_quorum_slots(struct super_block *sb)
static int verify_quorum_slots(struct super_block *sb, struct quorum_info *qinf,
struct scoutfs_quorum_config *qconf)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
char slots[(SCOUTFS_QUORUM_MAX_SLOTS * 3) + 1];
DECLARE_QUORUM_INFO(sb, qinf);
struct sockaddr_in other;
struct sockaddr_in sin;
int found = 0;
@@ -1074,10 +1087,10 @@ static int verify_quorum_slots(struct super_block *sb)
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
if (!quorum_slot_present(qconf, i))
continue;
scoutfs_quorum_slot_sin(super, i, &sin);
scoutfs_quorum_slot_sin(qconf, i, &sin);
if (!valid_ipv4_unicast(sin.sin_addr.s_addr)) {
scoutfs_err(sb, "quorum slot #%d has invalid ipv4 unicast address: "SIN_FMT,
@@ -1092,10 +1105,10 @@ static int verify_quorum_slots(struct super_block *sb)
}
for (j = i + 1; j < SCOUTFS_QUORUM_MAX_SLOTS; j++) {
if (!quorum_slot_present(super, j))
if (!quorum_slot_present(qconf, j))
continue;
scoutfs_quorum_slot_sin(super, j, &other);
scoutfs_quorum_slot_sin(qconf, j, &other);
if (sin.sin_addr.s_addr == other.sin_addr.s_addr &&
sin.sin_port == other.sin_port) {
@@ -1113,11 +1126,11 @@ static int verify_quorum_slots(struct super_block *sb)
return -EINVAL;
}
if (!quorum_slot_present(super, qinf->our_quorum_slot_nr)) {
if (!quorum_slot_present(qconf, qinf->our_quorum_slot_nr)) {
char *str = slots;
*str = '\0';
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (quorum_slot_present(super, i)) {
if (quorum_slot_present(qconf, i)) {
ret = snprintf(str, &slots[ARRAY_SIZE(slots)] - str, "%c%u",
str == slots ? ' ' : ',', i);
if (ret < 2 || ret > 3) {
@@ -1141,16 +1154,22 @@ static int verify_quorum_slots(struct super_block *sb)
else
qinf->votes_needed = (found / 2) + 1;
qinf->qconf = *qconf;
return 0;
}
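As a worked example of the majority arithmetic in the else branch (the if branch is outside this hunk): found = 3 populated slots gives votes_needed = (3 / 2) + 1 = 2, and found = 5 gives (5 / 2) + 1 = 3, so under C integer division a strict majority is required either way.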
/*
* Once this schedules the quorum worker it can be elected leader and
* start the server, possibly before this returns.
* start the server, possibly before this returns. The quorum agent
* would be responsible for tracking the quorum config in the super
* block if it changes.  Until then it uses a static config that it
* reads during setup.
*/
int scoutfs_quorum_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = NULL;
struct scoutfs_mount_options opts;
struct quorum_info *qinf;
int ret;
@@ -1160,7 +1179,9 @@ int scoutfs_quorum_setup(struct super_block *sb)
return 0;
qinf = kzalloc(sizeof(struct quorum_info), GFP_KERNEL);
if (!qinf) {
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_KERNEL);
if (!qinf || !super) {
kfree(qinf);
ret = -ENOMEM;
goto out;
}
@@ -1174,7 +1195,11 @@ int scoutfs_quorum_setup(struct super_block *sb)
sbi->quorum_info = qinf;
qinf->sb = sb;
ret = verify_quorum_slots(sb);
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
ret = verify_quorum_slots(sb, qinf, &super->qconf);
if (ret < 0)
goto out;
@@ -1194,6 +1219,7 @@ out:
if (ret)
scoutfs_quorum_destroy(sb);
kfree(super);
return ret;
}

View File

@@ -4,10 +4,11 @@
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i,
struct sockaddr_in *sin);
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term);
int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_config *qconf,
u64 term);
int scoutfs_quorum_setup(struct super_block *sb);
void scoutfs_quorum_shutdown(struct super_block *sb);

View File

@@ -691,16 +691,16 @@ TRACE_EVENT(scoutfs_evict_inode,
TRACE_EVENT(scoutfs_drop_inode,
TP_PROTO(struct super_block *sb, __u64 ino, unsigned int nlink,
unsigned int unhashed, bool drop_invalidated),
unsigned int unhashed, bool lock_covered),
TP_ARGS(sb, ino, nlink, unhashed, drop_invalidated),
TP_ARGS(sb, ino, nlink, unhashed, lock_covered),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(unsigned int, nlink)
__field(unsigned int, unhashed)
__field(unsigned int, drop_invalidated)
__field(unsigned int, lock_covered)
),
TP_fast_assign(
@@ -708,12 +708,12 @@ TRACE_EVENT(scoutfs_drop_inode,
__entry->ino = ino;
__entry->nlink = nlink;
__entry->unhashed = unhashed;
__entry->drop_invalidated = !!drop_invalidated;
__entry->lock_covered = !!lock_covered;
),
TP_printk(SCSBF" ino %llu nlink %u unhashed %d drop_invalidated %u", SCSB_TRACE_ARGS,
TP_printk(SCSBF" ino %llu nlink %u unhashed %d lock_covered %u", SCSB_TRACE_ARGS,
__entry->ino, __entry->nlink, __entry->unhashed,
__entry->drop_invalidated)
__entry->lock_covered)
);
TRACE_EVENT(scoutfs_inode_walk_writeback,
@@ -1417,42 +1417,71 @@ TRACE_EVENT(scoutfs_rename,
);
TRACE_EVENT(scoutfs_d_revalidate,
TP_PROTO(struct super_block *sb,
struct dentry *dentry, int flags, struct dentry *parent,
bool is_covered, int ret),
TP_PROTO(struct super_block *sb, struct dentry *dentry, int flags, u64 dir_ino, int ret),
TP_ARGS(sb, dentry, flags, parent, is_covered, ret),
TP_ARGS(sb, dentry, flags, dir_ino, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, dentry)
__string(name, dentry->d_name.name)
__field(__u64, ino)
__field(__u64, parent_ino)
__field(__u64, dir_ino)
__field(int, flags)
__field(int, is_root)
__field(int, is_covered)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->dentry = dentry;
__assign_str(name, dentry->d_name.name)
__entry->ino = dentry->d_inode ?
scoutfs_ino(dentry->d_inode) : 0;
__entry->parent_ino = parent->d_inode ?
scoutfs_ino(parent->d_inode) : 0;
__entry->ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
__entry->dir_ino = dir_ino;
__entry->flags = flags;
__entry->is_root = IS_ROOT(dentry);
__entry->is_covered = is_covered;
__entry->ret = ret;
),
TP_printk(SCSBF" name %s ino %llu parent_ino %llu flags 0x%x s_root %u is_covered %u ret %d",
SCSB_TRACE_ARGS, __get_str(name), __entry->ino,
__entry->parent_ino, __entry->flags,
__entry->is_root,
__entry->is_covered,
__entry->ret)
TP_printk(SCSBF" dentry %p name %s ino %llu dir_ino %llu flags 0x%x s_root %u ret %d",
SCSB_TRACE_ARGS, __entry->dentry, __get_str(name), __entry->ino, __entry->dir_ino,
__entry->flags, __entry->is_root, __entry->ret)
);
TRACE_EVENT(scoutfs_validate_dentry,
TP_PROTO(struct super_block *sb, struct dentry *dentry, u64 dir_ino, u64 dentry_ino,
u64 dent_ino, u64 refresh_gen, int ret),
TP_ARGS(sb, dentry, dir_ino, dentry_ino, dent_ino, refresh_gen, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, dentry)
__field(__u64, dir_ino)
__string(name, dentry->d_name.name)
__field(__u64, dentry_ino)
__field(__u64, dent_ino)
__field(__u64, fsdata_gen)
__field(__u64, refresh_gen)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->dentry = dentry;
__entry->dir_ino = dir_ino;
__assign_str(name, dentry->d_name.name)
__entry->dentry_ino = dentry_ino;
__entry->dent_ino = dent_ino;
__entry->fsdata_gen = (unsigned long long)dentry->d_fsdata;
__entry->refresh_gen = refresh_gen;
__entry->ret = ret;
),
TP_printk(SCSBF" dentry %p dir %llu name %s dentry_ino %llu dent_ino %llu fsdata_gen %llu refresh_gen %llu ret %d",
SCSB_TRACE_ARGS, __entry->dentry, __entry->dir_ino, __get_str(name),
__entry->dentry_ino, __entry->dent_ino, __entry->fsdata_gen,
__entry->refresh_gen, __entry->ret)
);
DECLARE_EVENT_CLASS(scoutfs_super_lifecycle_class,

View File

@@ -130,9 +130,9 @@ struct server_info {
struct mutex srch_mutex;
struct mutex mounted_clients_mutex;
/* stable versions stored from commits, given in locks and rpcs */
seqcount_t roots_seqcount;
struct scoutfs_net_roots roots;
/* stable super stored from commits, given in locks and rpcs */
seqcount_t stable_seqcount;
struct scoutfs_super_block stable_super;
/* serializing and get and set volume options */
seqcount_t volopt_seqcount;
@@ -143,11 +143,18 @@ struct server_info {
struct work_struct fence_pending_recov_work;
/* while running we check for fenced mounts to reclaim */
struct delayed_work reclaim_dwork;
/* a running server gets a static quorum config from quorum as it starts */
struct scoutfs_quorum_config qconf;
/* a running server maintains a private dirty super */
struct scoutfs_super_block dirty_super;
};
#define DECLARE_SERVER_INFO(sb, name) \
struct server_info *name = SCOUTFS_SB(sb)->server_info
#define DIRTY_SUPER_SB(sb) (&SCOUTFS_SB(sb)->server_info->dirty_super)
/*
* The server tracks each connected client.
*/
@@ -469,16 +476,22 @@ static void commit_end(struct super_block *sb, struct commit_users *cusers, int
wake_up(&cusers->waitq);
}
static void get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots)
static void get_stable(struct super_block *sb, struct scoutfs_super_block *super,
struct scoutfs_net_roots *roots)
{
DECLARE_SERVER_INFO(sb, server);
unsigned int seq;
do {
seq = read_seqcount_begin(&server->roots_seqcount);
*roots = server->roots;
} while (read_seqcount_retry(&server->roots_seqcount, seq));
seq = read_seqcount_begin(&server->stable_seqcount);
if (super)
*super = server->stable_super;
if (roots) {
roots->fs_root = server->stable_super.fs_root;
roots->logs_root = server->stable_super.logs_root;
roots->srch_root = server->stable_super.srch_root;
}
} while (read_seqcount_retry(&server->stable_seqcount, seq));
}
u64 scoutfs_server_seq(struct super_block *sb)
@@ -510,17 +523,12 @@ void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq)
}
}
static void set_roots(struct server_info *server,
struct scoutfs_btree_root *fs_root,
struct scoutfs_btree_root *logs_root,
struct scoutfs_btree_root *srch_root)
static void set_stable_super(struct server_info *server, struct scoutfs_super_block *super)
{
preempt_disable();
write_seqcount_begin(&server->roots_seqcount);
server->roots.fs_root = *fs_root;
server->roots.logs_root = *logs_root;
server->roots.srch_root = *srch_root;
write_seqcount_end(&server->roots_seqcount);
write_seqcount_begin(&server->stable_seqcount);
server->stable_super = *super;
write_seqcount_end(&server->stable_seqcount);
preempt_enable();
}
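A note on the preempt_disable()/preempt_enable() bracket: get_stable() spins in read_seqcount_retry() until the write section ends, so if the writer were preempted mid-copy, a reader scheduled onto the same CPU could spin for a whole timeslice before the writer resumed. Disabling preemption keeps the write section short and bounded.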
@@ -545,7 +553,7 @@ static void scoutfs_server_commit_func(struct work_struct *work)
struct server_info *server = container_of(work, struct server_info,
commit_work);
struct super_block *sb = server->sb;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct commit_users *cusers = &server->cusers;
int ret;
@@ -603,8 +611,7 @@ static void scoutfs_server_commit_func(struct work_struct *work)
goto out;
}
set_roots(server, &super->fs_root, &super->logs_root,
&super->srch_root);
set_stable_super(server, super);
/* swizzle the active and idle server alloc/freed heads */
server->other_ind ^= 1;
@@ -641,7 +648,7 @@ static int server_alloc_inodes(struct super_block *sb,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_net_inode_alloc ial = { 0, };
COMMIT_HOLD(hold);
__le64 lecount;
@@ -694,13 +701,13 @@ static int alloc_move_refill_zoned(struct super_block *sb, struct scoutfs_alloc_
static int alloc_move_empty(struct super_block *sb,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 meta_reserved)
struct scoutfs_alloc_root *src, u64 meta_budget)
{
DECLARE_SERVER_INFO(sb, server);
return scoutfs_alloc_move(sb, &server->alloc, &server->wri,
dst, src, le64_to_cpu(src->total_len), NULL, NULL, 0,
meta_reserved);
meta_budget);
}
/*
@@ -809,7 +816,7 @@ static void mod_bitmap_bits(__le64 *dst, u64 dst_zone_blocks,
static int get_data_alloc_zone_bits(struct super_block *sb, u64 rid, __le64 *exclusive,
__le64 *vacant, u64 zone_blocks)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_log_trees *lt;
struct scoutfs_key key;
@@ -1040,7 +1047,7 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
u64 rid, struct commit_hold *hold)
{
struct server_info *server = SCOUTFS_SB(sb)->server_info;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_log_merge_status stat;
struct scoutfs_log_merge_range rng;
struct scoutfs_log_trees each_lt;
@@ -1226,6 +1233,82 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
return ret;
}
/*
* The calling get_log_trees ran out of available blocks in its commit's
* metadata allocator while moving extents from the log tree's
* data_freed into the core data_avail. This finishes moving the
* extents in as many additional commits as it takes. The logs mutex
* is nested inside holding commits so we recheck the persistent item
* each time we commit to make sure it's still what we think. The
* caller is still going to send the item to the client so we update the
* caller's copy each time we make progress.  This is a best-effort
* attempt to clean up, and it's valid to leave extents in data_freed, so
* we don't return errors to the caller.  The client will continue the
* work later in get_log_trees or as the rid is reclaimed.
*/
static void try_drain_data_freed(struct super_block *sb, struct scoutfs_log_trees *lt)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
const u64 rid = le64_to_cpu(lt->rid);
const u64 nr = le64_to_cpu(lt->nr);
struct scoutfs_log_trees drain;
struct scoutfs_key key;
COMMIT_HOLD(hold);
int ret = 0;
int err;
scoutfs_key_init_log_trees(&key, rid, nr);
while (lt->data_freed.total_len != 0) {
server_hold_commit(sb, &hold);
mutex_lock(&server->logs_mutex);
ret = find_log_trees_item(sb, &super->logs_root, false, rid, U64_MAX, &drain);
if (ret < 0)
break;
/* careful to only keep draining the caller's specific open trans */
if (drain.nr != lt->nr || drain.get_trans_seq != lt->get_trans_seq ||
drain.commit_trans_seq != lt->commit_trans_seq || drain.flags != lt->flags) {
ret = -ENOENT;
break;
}
ret = scoutfs_btree_dirty(sb, &server->alloc, &server->wri,
&super->logs_root, &key);
if (ret < 0)
break;
/* moving can modify and return errors, always update caller and item */
mutex_lock(&server->alloc_mutex);
ret = alloc_move_empty(sb, &super->data_alloc, &drain.data_freed,
COMMIT_HOLD_ALLOC_BUDGET / 2);
mutex_unlock(&server->alloc_mutex);
if (ret == -EINPROGRESS)
ret = 0;
*lt = drain;
err = scoutfs_btree_force(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &drain, sizeof(drain));
BUG_ON(err < 0); /* dirtying must guarantee success */
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, ret);
if (ret < 0) {
ret = 0; /* don't try to abort, ignoring ret */
break;
}
}
/* try to cleanly abort and write any partial dirty btree blocks, but ignore result */
if (ret < 0) {
mutex_unlock(&server->logs_mutex);
server_apply_commit(sb, &hold, 0);
}
}
/*
* Give the client roots to all the trees that they'll use to build
* their transaction.
@@ -1253,7 +1336,7 @@ static int server_get_log_trees(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
u64 rid = scoutfs_net_client_rid(conn);
DECLARE_SERVER_INFO(sb, server);
__le64 exclusive[SCOUTFS_DATA_ALLOC_ZONE_LE64S];
@@ -1351,7 +1434,9 @@ static int server_get_log_trees(struct super_block *sb,
goto update;
}
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 0);
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 100);
if (ret == -EINPROGRESS)
ret = 0;
if (ret < 0) {
err_str = "emptying committed data_freed";
goto update;
@@ -1429,6 +1514,10 @@ out:
scoutfs_err(sb, "error %d getting log trees for rid %016llx: %s",
ret, rid, err_str);
/* try to drain excessive data_freed with additional commits, if needed, ignoring err */
if (ret == 0)
try_drain_data_freed(sb, &lt);
return scoutfs_net_response(sb, conn, cmd, id, ret, &lt, sizeof(lt));
}
@@ -1442,7 +1531,7 @@ static int server_commit_log_trees(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
const u64 rid = scoutfs_net_client_rid(conn);
DECLARE_SERVER_INFO(sb, server);
SCOUTFS_BTREE_ITEM_REF(iref);
@@ -1497,6 +1586,13 @@ static int server_commit_log_trees(struct super_block *sb,
if (ret < 0 || committed)
goto unlock;
/* make sure _update succeeds before we modify srch items */
ret = scoutfs_btree_dirty(sb, &server->alloc, &server->wri, &super->logs_root, &key);
if (ret < 0) {
err_str = "dirtying lt item";
goto unlock;
}
/* try to rotate the srch log when big enough */
mutex_lock(&server->srch_mutex);
ret = scoutfs_srch_rotate_log(sb, &server->alloc, &server->wri,
@@ -1511,6 +1607,7 @@ static int server_commit_log_trees(struct super_block *sb,
ret = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &lt, sizeof(lt));
BUG_ON(ret < 0); /* dirtying should have guaranteed success */
if (ret < 0)
err_str = "updating log trees item";
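The hunk above, together with the BUG_ON, encodes a reserve-then-apply discipline worth calling out; a minimal sketch using only the calls visible in this change:

	/* 1) dirty the item's btree path first; this is the step that may fail */
	ret = scoutfs_btree_dirty(sb, &server->alloc, &server->wri,
				  &super->logs_root, &key);
	if (ret < 0)
		goto unlock;

	/* 2) perform other fallible work, e.g. scoutfs_srch_rotate_log() */

	/* 3) the update can no longer fail; a failure here is a logic bug */
	ret = scoutfs_btree_update(sb, &server->alloc, &server->wri,
				   &super->logs_root, &key, &lt, sizeof(lt));
	BUG_ON(ret < 0);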
@@ -1542,7 +1639,7 @@ static int server_get_roots(struct super_block *sb,
memset(&roots, 0, sizeof(roots));
ret = -EINVAL;
} else {
get_roots(sb, &roots);
get_stable(sb, NULL, &roots);
ret = 0;
}
@@ -1572,7 +1669,7 @@ static int server_get_roots(struct super_block *sb,
*/
static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
DECLARE_SERVER_INFO(sb, server);
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_log_trees lt;
@@ -1669,9 +1766,8 @@ out:
*/
static int get_stable_trans_seq(struct super_block *sb, u64 *last_seq_ret)
{
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_log_trees *lt;
struct scoutfs_key key;
@@ -1827,9 +1923,8 @@ static int server_srch_get_compact(struct super_block *sb,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
u64 rid = scoutfs_net_client_rid(conn);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_srch_compact *sc = NULL;
COMMIT_HOLD(hold);
int ret;
@@ -1894,8 +1989,7 @@ static int server_srch_commit_compact(struct super_block *sb,
{
DECLARE_SERVER_INFO(sb, server);
u64 rid = scoutfs_net_client_rid(conn);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_srch_compact *sc;
struct scoutfs_alloc_list_head av;
struct scoutfs_alloc_list_head fr;
@@ -1970,8 +2064,7 @@ static int splice_log_merge_completions(struct super_block *sb,
bool no_ranges)
{
struct server_info *server = SCOUTFS_SB(sb)->server_info;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_log_merge_complete comp;
struct scoutfs_log_merge_freeing fr;
struct scoutfs_log_merge_range rng;
@@ -2288,7 +2381,7 @@ static void server_log_merge_free_work(struct work_struct *work)
struct server_info *server = container_of(work, struct server_info,
log_merge_free_work);
struct super_block *sb = server->sb;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_log_merge_freeing fr;
struct scoutfs_key key;
COMMIT_HOLD(hold);
@@ -2380,8 +2473,7 @@ static int server_get_log_merge(struct super_block *sb,
{
DECLARE_SERVER_INFO(sb, server);
u64 rid = scoutfs_net_client_rid(conn);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_log_merge_status stat;
struct scoutfs_log_merge_range rng;
struct scoutfs_log_merge_range remain;
@@ -2664,8 +2756,7 @@ static int server_commit_log_merge(struct super_block *sb,
{
DECLARE_SERVER_INFO(sb, server);
u64 rid = scoutfs_net_client_rid(conn);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_log_merge_request orig_req;
struct scoutfs_log_merge_complete *comp;
struct scoutfs_log_merge_status stat;
@@ -2900,7 +2991,7 @@ static int server_set_volopt(struct super_block *sb, struct scoutfs_net_connecti
u8 cmd, u64 id, void *arg, u16 arg_len)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_volume_options *volopt;
COMMIT_HOLD(hold);
u64 opt;
@@ -2969,7 +3060,7 @@ static int server_clear_volopt(struct super_block *sb, struct scoutfs_net_connec
u8 cmd, u64 id, void *arg, u16 arg_len)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_volume_options *volopt;
COMMIT_HOLD(hold);
__le64 *opt;
@@ -3023,7 +3114,7 @@ static int server_resize_devices(struct super_block *sb, struct scoutfs_net_conn
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_net_resize_devices *nrd;
COMMIT_HOLD(hold);
u64 meta_tot;
@@ -3130,16 +3221,19 @@ static int count_free_blocks(struct super_block *sb, void *arg, int owner,
}
/*
* We calculate the total inode count and free blocks from the current in-memory dirty
* versions of the super block and log_trees structs, so we have to lock them.
* We calculate the total inode count and free blocks from the last
* stable super that was written.  Other users also walk stable blocks,
* so by joining them we avoid having to lock all the dirty structures
* that the summations could reference.
* We handle stale reads by retrying with the most recent stable super.
*/
static int server_statfs(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block super;
struct scoutfs_net_statfs nst = {{0,}};
struct statfs_free_blocks sfb = {0,};
DECLARE_SAVED_REFS(saved);
u64 inode_count;
int ret;
@@ -3148,24 +3242,24 @@ static int server_statfs(struct super_block *sb, struct scoutfs_net_connection *
goto out;
}
mutex_lock(&server->alloc_mutex);
ret = scoutfs_alloc_foreach_super(sb, super, count_free_blocks, &sfb);
mutex_unlock(&server->alloc_mutex);
if (ret < 0)
goto out;
do {
get_stable(sb, &super, NULL);
mutex_lock(&server->logs_mutex);
ret = scoutfs_forest_inode_count(sb, super, &inode_count);
mutex_unlock(&server->logs_mutex);
if (ret < 0)
goto out;
ret = scoutfs_alloc_foreach_super(sb, &super, count_free_blocks, &sfb) ?:
scoutfs_forest_inode_count(sb, &super, &inode_count);
if (ret < 0 && ret != -ESTALE)
goto out;
BUILD_BUG_ON(sizeof(nst.uuid) != sizeof(super->uuid));
memcpy(nst.uuid, super->uuid, sizeof(nst.uuid));
ret = scoutfs_block_check_stale(sb, ret, &saved, &super.logs_root.ref,
&super.srch_root.ref);
} while (ret == -ESTALE);
BUILD_BUG_ON(sizeof(nst.uuid) != sizeof(super.uuid));
memcpy(nst.uuid, super.uuid, sizeof(nst.uuid));
nst.free_meta_blocks = cpu_to_le64(sfb.meta);
nst.total_meta_blocks = super->total_meta_blocks;
nst.total_meta_blocks = super.total_meta_blocks;
nst.free_data_blocks = cpu_to_le64(sfb.data);
nst.total_data_blocks = super->total_data_blocks;
nst.total_data_blocks = super.total_data_blocks;
nst.inode_count = cpu_to_le64(inode_count);
ret = 0;
@@ -3196,7 +3290,7 @@ static int insert_mounted_client(struct super_block *sb, u64 rid, u64 gr_flags,
struct sockaddr_in *sin)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_mounted_client_btree_val mcv;
struct scoutfs_key key;
int ret;
@@ -3222,7 +3316,7 @@ static int lookup_mounted_client_addr(struct super_block *sb, u64 rid,
union scoutfs_inet_addr *addr)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_mounted_client_btree_val *mcv;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
@@ -3256,7 +3350,7 @@ static int lookup_mounted_client_addr(struct super_block *sb, u64 rid,
static int delete_mounted_client(struct super_block *sb, u64 rid)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_key key;
int ret;
@@ -3280,7 +3374,7 @@ static int delete_mounted_client(struct super_block *sb, u64 rid)
static int cancel_srch_compact(struct super_block *sb, u64 rid)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_alloc_list_head av;
struct scoutfs_alloc_list_head fr;
int ret;
@@ -3332,7 +3426,7 @@ static int cancel_srch_compact(struct super_block *sb, u64 rid)
static int cancel_log_merge(struct super_block *sb, u64 rid)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_log_merge_status stat;
struct scoutfs_log_merge_request req;
struct scoutfs_log_merge_range rng;
@@ -3456,7 +3550,7 @@ static int server_greeting(struct super_block *sb,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_net_greeting *gr = arg;
struct scoutfs_net_greeting greet;
DECLARE_SERVER_INFO(sb, server);
@@ -3472,10 +3566,9 @@ static int server_greeting(struct super_block *sb,
goto send_err;
}
if (gr->fsid != super->hdr.fsid) {
if (gr->fsid != cpu_to_le64(sbi->fsid)) {
scoutfs_warn(sb, "client rid %016llx greeting fsid 0x%llx did not match server fsid 0x%llx",
le64_to_cpu(gr->rid), le64_to_cpu(gr->fsid),
le64_to_cpu(super->hdr.fsid));
le64_to_cpu(gr->rid), le64_to_cpu(gr->fsid), sbi->fsid);
ret = -EINVAL;
goto send_err;
}
@@ -3615,7 +3708,7 @@ static void farewell_worker(struct work_struct *work)
struct server_info *server = container_of(work, struct server_info,
farewell_work);
struct super_block *sb = server->sb;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_mounted_client_btree_val *mcv;
struct farewell_request *tmp;
struct farewell_request *fw;
@@ -3977,7 +4070,7 @@ static void recovery_timeout(struct super_block *sb)
static int start_recovery(struct super_block *sb)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
unsigned int nr = 0;
@@ -4094,8 +4187,7 @@ static void scoutfs_server_worker(struct work_struct *work)
struct server_info *server = container_of(work, struct server_info,
work);
struct super_block *sb = server->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_net_connection *conn = NULL;
struct scoutfs_mount_options opts;
DECLARE_WAIT_QUEUE_HEAD(waitq);
@@ -4107,13 +4199,13 @@ static void scoutfs_server_worker(struct work_struct *work)
trace_scoutfs_server_work_enter(sb, 0, 0);
scoutfs_options_read(sb, &opts);
scoutfs_quorum_slot_sin(super, opts.quorum_slot_nr, &sin);
scoutfs_quorum_slot_sin(&server->qconf, opts.quorum_slot_nr, &sin);
scoutfs_info(sb, "server starting at "SIN_FMT, SIN_ARG(&sin));
scoutfs_block_writer_init(sb, &server->wri);
/* first make sure no other servers are still running */
ret = scoutfs_quorum_fence_leaders(sb, server->term);
ret = scoutfs_quorum_fence_leaders(sb, &server->qconf, server->term);
if (ret < 0) {
scoutfs_err(sb, "server error %d attempting to fence previous leaders", ret);
goto out;
@@ -4149,8 +4241,7 @@ static void scoutfs_server_worker(struct work_struct *work)
write_seqcount_end(&server->volopt_seqcount);
atomic64_set(&server->seq_atomic, le64_to_cpu(super->seq));
set_roots(server, &super->fs_root, &super->logs_root,
&super->srch_root);
set_stable_super(server, super);
/* prepare server alloc for this transaction, larger first */
if (le64_to_cpu(super->server_meta_avail[0].total_nr) <
@@ -4244,11 +4335,12 @@ out:
/*
* Start the server but don't wait for it to complete.
*/
void scoutfs_server_start(struct super_block *sb, u64 term)
void scoutfs_server_start(struct super_block *sb, struct scoutfs_quorum_config *qconf, u64 term)
{
DECLARE_SERVER_INFO(sb, server);
if (cmpxchg(&server->status, SERVER_DOWN, SERVER_STARTING) == SERVER_DOWN) {
server->qconf = *qconf;
server->term = term;
queue_work(server->wq, &server->work);
}
@@ -4300,7 +4392,7 @@ int scoutfs_server_setup(struct super_block *sb)
INIT_WORK(&server->log_merge_free_work, server_log_merge_free_work);
mutex_init(&server->srch_mutex);
mutex_init(&server->mounted_clients_mutex);
seqcount_init(&server->roots_seqcount);
seqcount_init(&server->stable_seqcount);
seqcount_init(&server->volopt_seqcount);
mutex_init(&server->volopt_mutex);
INIT_WORK(&server->fence_pending_recov_work, fence_pending_recov_worker);

View File

@@ -75,7 +75,7 @@ u64 scoutfs_server_seq(struct super_block *sb);
u64 scoutfs_server_next_seq(struct super_block *sb);
void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq);
void scoutfs_server_start(struct super_block *sb, u64 term);
void scoutfs_server_start(struct super_block *sb, struct scoutfs_quorum_config *qconf, u64 term);
void scoutfs_server_stop(struct super_block *sb);
void scoutfs_server_stop_wait(struct super_block *sb);
bool scoutfs_server_is_running(struct super_block *sb);

View File

@@ -861,7 +861,6 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
struct scoutfs_srch_rb_root *sroot,
u64 hash, u64 ino, u64 last_ino, bool *done)
{
struct scoutfs_net_roots prev_roots;
struct scoutfs_net_roots roots;
struct scoutfs_srch_entry start;
struct scoutfs_srch_entry end;
@@ -869,6 +868,7 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
struct scoutfs_log_trees lt;
struct scoutfs_srch_file sfl;
SCOUTFS_BTREE_ITEM_REF(iref);
DECLARE_SAVED_REFS(saved);
struct scoutfs_key key;
unsigned long limit = SRCH_LIMIT;
int ret;
@@ -877,7 +877,6 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
*done = false;
srch_init_rb_root(sroot);
memset(&prev_roots, 0, sizeof(prev_roots));
start.hash = cpu_to_le64(hash);
start.ino = cpu_to_le64(ino);
@@ -892,7 +891,6 @@ retry:
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
memset(&roots.fs_root, 0, sizeof(roots.fs_root));
end = final;
@@ -968,16 +966,10 @@ retry:
*done = sre_cmp(&end, &final) == 0;
ret = 0;
out:
if (ret == -ESTALE) {
if (memcmp(&prev_roots, &roots, sizeof(roots)) == 0) {
scoutfs_inc_counter(sb, srch_search_stale_eio);
ret = -EIO;
} else {
scoutfs_inc_counter(sb, srch_search_stale_retry);
prev_roots = roots;
goto retry;
}
}
ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.srch_root.ref,
&roots.logs_root.ref);
if (ret == -ESTALE)
goto retry;
return ret;
}
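The new helper centralizes the retry bookkeeping that the removed block open-coded: retry while the saved refs keep changing, and presumably give up with -EIO once they stop, as the old srch_search_stale_eio path did. A hypothetical caller follows the same shape; read_under_roots() is illustrative:

	DECLARE_SAVED_REFS(saved);
	struct scoutfs_net_roots roots;
	int ret;

retry:
	ret = scoutfs_client_get_roots(sb, &roots);
	if (ret == 0)
		ret = read_under_roots(sb, &roots);	/* hypothetical reader */

	ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.srch_root.ref,
					&roots.logs_root.ref);
	if (ret == -ESTALE)
		goto retry;
	return ret;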
@@ -1003,6 +995,14 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
le64_to_cpu(sfl->ref.blkno), 0);
ret = scoutfs_btree_insert(sb, alloc, wri, root, &key,
sfl, sizeof(*sfl));
/*
* While it's fine to replay moving the client's logging srch
* file to the core btree item, server commits should keep that
* from happening, so we warn if we see it.  This check can be
* removed eventually.
*/
if (WARN_ON_ONCE(ret == -EEXIST))
ret = 0;
if (ret == 0) {
memset(sfl, 0, sizeof(*sfl));
scoutfs_inc_counter(sb, srch_rotate_log);

View File

@@ -47,6 +47,7 @@
#include "omap.h"
#include "volopt.h"
#include "fence.h"
#include "xattr.h"
#include "scoutfs_trace.h"
static struct dentry *scoutfs_debugfs_root;
@@ -460,9 +461,8 @@ static int scoutfs_read_supers(struct super_block *sb)
goto out;
}
sbi->fsid = le64_to_cpu(meta_super->hdr.fsid);
sbi->fmt_vers = le64_to_cpu(meta_super->fmt_vers);
sbi->super = *meta_super;
out:
kfree(meta_super);
kfree(data_super);
@@ -482,8 +482,10 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sb->s_magic = SCOUTFS_SUPER_MAGIC;
sb->s_maxbytes = MAX_LFS_FILESIZE;
sb->s_op = &scoutfs_super_ops;
sb->s_d_op = &scoutfs_dentry_ops;
sb->s_export_op = &scoutfs_export_ops;
sb->s_flags |= MS_I_VERSION;
sb->s_xattr = scoutfs_xattr_handlers;
sb->s_flags |= MS_I_VERSION | MS_POSIXACL;
/* btree blocks use long lived bh->b_data refs */
mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
@@ -496,7 +498,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
ret = assign_random_id(sbi);
if (ret < 0)
return ret;
goto out;
spin_lock_init(&sbi->next_ino_lock);
spin_lock_init(&sbi->data_wait_root.lock);
@@ -505,7 +507,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
/* parse options early for use during setup */
ret = scoutfs_options_early_setup(sb, data);
if (ret < 0)
return ret;
goto out;
scoutfs_options_read(sb, &opts);
ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SM_SIZE);
@@ -628,7 +630,6 @@ MODULE_ALIAS_FS("scoutfs");
static void teardown_module(void)
{
debugfs_remove(scoutfs_debugfs_root);
scoutfs_dir_exit();
scoutfs_inode_exit();
scoutfs_sysfs_exit();
}
@@ -666,7 +667,6 @@ static int __init scoutfs_module_init(void)
goto out;
}
ret = scoutfs_inode_init() ?:
scoutfs_dir_init() ?:
register_filesystem(&scoutfs_fs_type);
out:
if (ret)

View File

@@ -35,11 +35,10 @@ struct scoutfs_sb_info {
struct super_block *sb;
/* assigned once at the start of each mount, read-only */
u64 fsid;
u64 rid;
u64 fmt_vers;
struct scoutfs_super_block super;
struct block_device *meta_bdev;
spinlock_t next_ino_lock;
@@ -135,14 +134,14 @@ static inline bool scoutfs_unmounting(struct super_block *sb)
(int)(le64_to_cpu(fsid) >> SCSB_SHIFT), \
(int)(le64_to_cpu(rid) >> SCSB_SHIFT)
#define SCSB_ARGS(sb) \
(int)(le64_to_cpu(SCOUTFS_SB(sb)->super.hdr.fsid) >> SCSB_SHIFT), \
(int)(SCOUTFS_SB(sb)->fsid >> SCSB_SHIFT), \
(int)(SCOUTFS_SB(sb)->rid >> SCSB_SHIFT)
#define SCSB_TRACE_FIELDS \
__field(__u64, fsid) \
__field(__u64, rid)
#define SCSB_TRACE_ASSIGN(sb) \
__entry->fsid = SCOUTFS_HAS_SBI(sb) ? \
le64_to_cpu(SCOUTFS_SB(sb)->super.hdr.fsid) : 0;\
SCOUTFS_SB(sb)->fsid : 0; \
__entry->rid = SCOUTFS_HAS_SBI(sb) ? \
SCOUTFS_SB(sb)->rid : 0;
#define SCSB_TRACE_ARGS \

View File

@@ -60,10 +60,9 @@ static ssize_t fsid_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return snprintf(buf, PAGE_SIZE, "%016llx\n",
le64_to_cpu(super->hdr.fsid));
return snprintf(buf, PAGE_SIZE, "%016llx\n", sbi->fsid);
}
ATTR_FUNCS_RO(fsid);
@@ -268,7 +267,7 @@ int __init scoutfs_sysfs_init(void)
return 0;
}
void __exit scoutfs_sysfs_exit(void)
void scoutfs_sysfs_exit(void)
{
if (scoutfs_kset)
kset_unregister(scoutfs_kset);

View File

@@ -53,6 +53,6 @@ int scoutfs_setup_sysfs(struct super_block *sb);
void scoutfs_destroy_sysfs(struct super_block *sb);
int __init scoutfs_sysfs_init(void);
void __exit scoutfs_sysfs_exit(void);
void scoutfs_sysfs_exit(void);
#endif

View File

@@ -15,6 +15,7 @@
#include <linux/dcache.h>
#include <linux/xattr.h>
#include <linux/crc32c.h>
#include <linux/posix_acl.h>
#include "format.h"
#include "inode.h"
@@ -26,6 +27,7 @@
#include "xattr.h"
#include "lock.h"
#include "hash.h"
#include "acl.h"
#include "scoutfs_trace.h"
/*
@@ -79,16 +81,6 @@ static void init_xattr_key(struct scoutfs_key *key, u64 ino, u32 name_hash,
#define SCOUTFS_XATTR_PREFIX "scoutfs."
#define SCOUTFS_XATTR_PREFIX_LEN (sizeof(SCOUTFS_XATTR_PREFIX) - 1)
static int unknown_prefix(const char *name)
{
return strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) &&
strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) &&
strncmp(name, XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN)&&
strncmp(name, SCOUTFS_XATTR_PREFIX, SCOUTFS_XATTR_PREFIX_LEN);
}
#define HIDE_TAG "hide."
#define SRCH_TAG "srch."
#define TOTL_TAG "totl."
@@ -455,22 +447,17 @@ out:
* Copy the value for the given xattr name into the caller's buffer, if it
* fits. Return the bytes copied or -ERANGE if it doesn't fit.
*/
ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
size_t size)
int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer, size_t size,
struct scoutfs_lock *lck)
{
struct inode *inode = dentry->d_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_key key;
unsigned int xat_bytes;
size_t name_len;
int ret;
if (unknown_prefix(name))
return -EOPNOTSUPP;
name_len = strlen(name);
if (name_len > SCOUTFS_XATTR_MAX_NAME_LEN)
return -ENODATA;
@@ -480,10 +467,6 @@ ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
if (!xat)
return -ENOMEM;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lck);
if (ret)
goto out;
down_read(&si->xattr_rwsem);
ret = get_next_xattr(inode, &key, xat, xat_bytes, name, name_len, 0, 0, lck);
@@ -509,12 +492,27 @@ ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
ret = copy_xattr_value(sb, &key, xat, xat_bytes, buffer, size, lck);
unlock:
up_read(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_READ);
out:
kfree(xat);
return ret;
}
static int scoutfs_xattr_get(struct dentry *dentry, const char *name, void *buffer, size_t size)
{
struct inode *inode = dentry->d_inode;
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
int ret;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
if (ret == 0) {
ret = scoutfs_xattr_get_locked(inode, name, buffer, size, lock);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
}
return ret;
}
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name)
{
scoutfs_key_set_zeros(key);
@@ -619,30 +617,32 @@ int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
* cause creation to fail if the xattr already exists (_CREATE) or
* doesn't already exist (_REPLACE). xattrs can have a zero length
* value.
*
* The caller has acquired cluster locks, holds a transaction, and has
* dirtied the inode item so that they can update it after we modify it.
* The caller has to know the tags to acquire cluster locks before
* holding the transaction, so they pass in the parsed tags, or all 0s
* for non-"scoutfs." prefixes.
*/
static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags)
int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_len,
const void *value, size_t size, int flags,
const struct scoutfs_xattr_prefix_tags *tgs,
struct scoutfs_lock *lck, struct scoutfs_lock *totl_lock,
struct list_head *ind_locks)
{
struct inode *inode = dentry->d_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_xattr_totl_val tval = {0,};
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_lock *totl_lock = NULL;
size_t name_len = strlen(name);
struct scoutfs_key totl_key;
struct scoutfs_key key;
bool undo_srch = false;
bool undo_totl = false;
LIST_HEAD(ind_locks);
u8 found_parts;
unsigned int xat_bytes_totl;
unsigned int xat_bytes;
unsigned int val_len;
u64 ind_seq;
u64 total;
u64 hash = 0;
u64 id = 0;
@@ -651,6 +651,9 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
trace_scoutfs_xattr_set(sb, name_len, value, size, flags);
if (WARN_ON_ONCE(tgs->totl && !totl_lock))
return -EINVAL;
/* mirror the syscall's errors for large names and values */
if (name_len > SCOUTFS_XATTR_MAX_NAME_LEN)
return -ERANGE;
@@ -661,16 +664,10 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
(flags & ~(XATTR_CREATE | XATTR_REPLACE)))
return -EINVAL;
if (unknown_prefix(name))
return -EOPNOTSUPP;
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
if ((tgs.hide | tgs.srch | tgs.totl) && !capable(CAP_SYS_ADMIN))
if ((tgs->hide | tgs->srch | tgs->totl) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (tgs.totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
if (tgs->totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
return ret;
/* allocate enough to always read an existing xattr's totl */
@@ -679,51 +676,44 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
/* but store partial first item that only includes the new xattr's value */
xat_bytes = first_item_bytes(name_len, size);
xat = kmalloc(xat_bytes_totl, GFP_NOFS);
if (!xat) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lck);
if (ret)
goto out;
if (!xat)
return -ENOMEM;
down_write(&si->xattr_rwsem);
/* find an existing xattr to delete, including possible totl value */
ret = get_next_xattr(inode, &key, xat, xat_bytes_totl, name, name_len, 0, 0, lck);
if (ret < 0 && ret != -ENOENT)
goto unlock;
goto out;
/* check existence constraint flags */
if (ret == -ENOENT && (flags & XATTR_REPLACE)) {
ret = -ENODATA;
goto unlock;
goto out;
} else if (ret >= 0 && (flags & XATTR_CREATE)) {
ret = -EEXIST;
goto unlock;
goto out;
}
/* not an error to delete something that doesn't exist */
if (ret == -ENOENT && !value) {
ret = 0;
goto unlock;
goto out;
}
/* s64 count delta if we create or delete */
if (tgs.totl)
if (tgs->totl)
tval.count = cpu_to_le64((u64)!!(value) - (u64)!!(ret != -ENOENT));
/* found fields in key will also be used */
found_parts = ret >= 0 ? xattr_nr_parts(xat) : 0;
if (found_parts && tgs.totl) {
if (found_parts && tgs->totl) {
/* parse old totl value before we clobber xat buf */
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
ret = parse_totl_u64(&xat->name[xat->name_len], val_len, &total);
if (ret < 0)
goto unlock;
goto out;
le64_add_cpu(&tval.total, -total);
}
@@ -742,15 +732,90 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
min(size, SCOUTFS_XATTR_MAX_PART_SIZE -
offsetof(struct scoutfs_xattr, name[name_len])));
if (tgs.totl) {
if (tgs->totl) {
ret = parse_totl_u64(value, size, &total);
if (ret < 0)
goto unlock;
goto out;
}
le64_add_cpu(&tval.total, total);
}
if (tgs->srch && !(found_parts && value)) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
hash = scoutfs_hash64(name, name_len);
ret = scoutfs_forest_srch_add(sb, hash, ino, id);
if (ret < 0)
goto out;
undo_srch = true;
}
if (tgs->totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
goto out;
undo_totl = true;
}
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), found_parts, lck);
else if (found_parts)
ret = delete_xattr_items(inode, le64_to_cpu(key.skx_name_hash),
le64_to_cpu(key.skx_id), found_parts,
lck);
else
ret = create_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), lck);
if (ret < 0)
goto out;
/* XXX do these want i_mutex or anything? */
inode_inc_iversion(inode);
inode->i_ctime = CURRENT_TIME;
ret = 0;
out:
if (ret < 0 && undo_srch) {
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
if (ret < 0 && undo_totl) {
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
BUG_ON(err);
}
up_write(&si->xattr_rwsem);
kfree(xat);
return ret;
}
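The totl accounting in scoutfs_xattr_set_locked() is carried in wrapping unsigned little-endian fields, which obscures the simple signed math: the count delta is +1 on create, -1 on delete, and 0 when an existing value is replaced, and the undo path applies the negated deltas. A minimal userspace sketch with signed types for clarity (names here are illustrative, not the kernel's):

#include <stdio.h>
#include <stdint.h>

/* delta applied to the totl item's count: +1 on create, -1 on delete,
 * 0 when an existing xattr's value is replaced */
static int64_t count_delta(int new_value_present, int found_existing)
{
        return (int64_t)!!new_value_present - (int64_t)!!found_existing;
}

int main(void)
{
        int64_t d;

        printf("create:  %+lld\n", (long long)count_delta(1, 0)); /* +1 */
        printf("replace: %+lld\n", (long long)count_delta(1, 1)); /*  0 */
        printf("delete:  %+lld\n", (long long)count_delta(0, 1)); /* -1 */

        /* the undo path backs out a failed set by applying the negated delta */
        d = count_delta(1, 0);
        printf("undo:    %+lld\n", (long long)-d);                 /* -1 */
        return 0;
}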
static int scoutfs_xattr_set(struct dentry *dentry, const char *name, const void *value,
size_t size, int flags)
{
struct inode *inode = dentry->d_inode;
struct super_block *sb = inode->i_sb;
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_lock *totl_lock = NULL;
struct scoutfs_lock *lck = NULL;
size_t name_len = strlen(name);
LIST_HEAD(ind_locks);
u64 ind_seq;
int ret;
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lck);
if (ret)
goto unlock;
if (tgs.totl) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret)
@@ -770,80 +835,98 @@ retry:
if (ret < 0)
goto release;
if (tgs.srch && !(found_parts && value)) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
hash = scoutfs_hash64(name, name_len);
ret = scoutfs_forest_srch_add(sb, hash, ino, id);
if (ret < 0)
goto release;
undo_srch = true;
}
if (tgs.totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
goto release;
undo_totl = true;
}
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), found_parts, lck);
else if (found_parts)
ret = delete_xattr_items(inode, le64_to_cpu(key.skx_name_hash),
le64_to_cpu(key.skx_id), found_parts,
lck);
else
ret = create_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), lck);
if (ret < 0)
goto release;
/* XXX do these want i_mutex or anything? */
inode_inc_iversion(inode);
inode->i_ctime = CURRENT_TIME;
scoutfs_update_inode_item(inode, lck, &ind_locks);
ret = 0;
ret = scoutfs_xattr_set_locked(dentry->d_inode, name, name_len, value, size, flags, &tgs,
lck, totl_lock, &ind_locks);
if (ret == 0)
scoutfs_update_inode_item(inode, lck, &ind_locks);
release:
if (ret < 0 && undo_srch) {
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
if (ret < 0 && undo_totl) {
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
BUG_ON(err);
}
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
unlock:
up_write(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
out:
kfree(xat);
return ret;
}
int scoutfs_setxattr(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags)
/*
* Future kernels have this amazing hack to rewind the name to get the
* skipped prefix. We're back in the stone ages without the handler
* arg, so we Just Know that this is possible. This will become a
* compat hook to either call the kernel's xattr_full_name(handler), or
* our hack to use the flags as the prefix length.
*/
static const char *full_name_hack(void *handler, const char *name, int len)
{
if (size == 0)
value = ""; /* set empty value */
return name - len;
}
static int scoutfs_xattr_get_handler(struct dentry *dentry, const char *name,
void *value, size_t size, int handler_flags)
{
name = full_name_hack(NULL, name, handler_flags);
return scoutfs_xattr_get(dentry, name, value, size);
}
static int scoutfs_xattr_set_handler(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags, int handler_flags)
{
name = full_name_hack(NULL, name, handler_flags);
return scoutfs_xattr_set(dentry, name, value, size, flags);
}
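A small userspace demonstration of the rewind trick: the handler tables below store each prefix's length in .flags, the VFS hands the handler the name with the matched prefix stripped, and subtracting the length recovers the full name. This is a sketch of the idea only, not kernel code:

#include <stdio.h>
#include <string.h>

/* the stripped name points into the full name's storage, so rewinding
 * by the prefix length is valid pointer arithmetic */
static const char *full_name(const char *stripped, int prefix_len)
{
        return stripped - prefix_len;
}

int main(void)
{
        const char name[] = "user.demo";
        const char *stripped = name + strlen("user."); /* what a handler sees */

        /* 5 == strlen("user."), the value stored in the handler's .flags */
        printf("%s\n", full_name(stripped, 5));        /* prints user.demo */
        return 0;
}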
int scoutfs_removexattr(struct dentry *dentry, const char *name)
{
return scoutfs_xattr_set(dentry, name, NULL, 0, XATTR_REPLACE);
}
static const struct xattr_handler scoutfs_xattr_user_handler = {
.prefix = XATTR_USER_PREFIX,
.flags = XATTR_USER_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_scoutfs_handler = {
.prefix = SCOUTFS_XATTR_PREFIX,
.flags = SCOUTFS_XATTR_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_trusted_handler = {
.prefix = XATTR_TRUSTED_PREFIX,
.flags = XATTR_TRUSTED_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_security_handler = {
.prefix = XATTR_SECURITY_PREFIX,
.flags = XATTR_SECURITY_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_acl_access_handler = {
.prefix = XATTR_NAME_POSIX_ACL_ACCESS,
.flags = ACL_TYPE_ACCESS,
.get = scoutfs_acl_get_xattr,
.set = scoutfs_acl_set_xattr,
};
static const struct xattr_handler scoutfs_xattr_acl_default_handler = {
.prefix = XATTR_NAME_POSIX_ACL_DEFAULT,
.flags = ACL_TYPE_DEFAULT,
.get = scoutfs_acl_get_xattr,
.set = scoutfs_acl_set_xattr,
};
const struct xattr_handler *scoutfs_xattr_handlers[] = {
&scoutfs_xattr_user_handler,
&scoutfs_xattr_scoutfs_handler,
&scoutfs_xattr_trusted_handler,
&scoutfs_xattr_security_handler,
&scoutfs_xattr_acl_access_handler,
&scoutfs_xattr_acl_default_handler,
NULL
};
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
size_t size, __u32 *hash_pos, __u64 *id_pos,


@@ -1,25 +1,29 @@
#ifndef _SCOUTFS_XATTR_H_
#define _SCOUTFS_XATTR_H_
ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
size_t size);
int scoutfs_setxattr(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags);
int scoutfs_removexattr(struct dentry *dentry, const char *name);
ssize_t scoutfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
size_t size, __u32 *hash_pos, __u64 *id_pos,
bool e_range, bool show_hidden);
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock);
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
srch:1,
totl:1;
};
extern const struct xattr_handler *scoutfs_xattr_handlers[];
int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer, size_t size,
struct scoutfs_lock *lck);
int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_len,
const void *value, size_t size, int flags,
const struct scoutfs_xattr_prefix_tags *tgs,
struct scoutfs_lock *lck, struct scoutfs_lock *totl_lock,
struct list_head *ind_locks);
ssize_t scoutfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
size_t size, __u32 *hash_pos, __u64 *id_pos,
bool e_range, bool show_hidden);
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock);
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs);

tests/.gitignore

@@ -8,3 +8,4 @@ src/bulk_create_paths
src/find_xattrs
src/stage_tmpfile
src/create_xattr_loop
src/o_tmpfile_umask


@@ -10,7 +10,9 @@ BIN := src/createmany \
src/bulk_create_paths \
src/stage_tmpfile \
src/find_xattrs \
src/create_xattr_loop
src/create_xattr_loop \
src/fragmented_data_extents \
src/o_tmpfile_umask
DEPS := $(wildcard src/*.d)


@@ -377,6 +377,14 @@ t_wait_for_leader() {
done
}
t_get_sysfs_mount_option() {
local nr="$1"
local name="$2"
local opt="$(t_sysfs_path $nr)/mount_options/$name"
cat "$opt"
}
t_set_sysfs_mount_option() {
local nr="$1"
local name="$2"
@@ -405,7 +413,7 @@ t_save_all_sysfs_mount_options() {
for i in $(t_fs_nrs); do
opt="$(t_sysfs_path $i)/mount_options/$name"
ind="$name_$i"
ind="${name}_${i}"
_saved_opts[$ind]="$(cat $opt)"
done
@@ -417,7 +425,7 @@ t_restore_all_sysfs_mount_options() {
local i
for i in $(t_fs_nrs); do
ind="$name_$i"
ind="${name}_${i}"
t_set_sysfs_mount_option $i $name "${_saved_opts[$ind]}"
done


@@ -0,0 +1,6 @@
== truncate writes zeroed partial end of file block
0000000 0a79 0a79 0a79 0a79 0a79 0a79 0a79 0a79
*
0006144 0000 0000 0000 0000 0000 0000 0000 0000
*
0012288


@@ -0,0 +1,26 @@
== initial writes smaller than prealloc grow to prealloc size
/mnt/test/test/data-prealloc/file-1: 7 extents found
/mnt/test/test/data-prealloc/file-2: 7 extents found
== larger files get full prealloc extents
/mnt/test/test/data-prealloc/file-1: 9 extents found
/mnt/test/test/data-prealloc/file-2: 9 extents found
== non-streaming writes with contig have per-block extents
/mnt/test/test/data-prealloc/file-1: 32 extents found
/mnt/test/test/data-prealloc/file-2: 32 extents found
== any writes to region prealloc get full extents
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
== streaming offline writes get full extents either way
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
== goofy preallocation amounts work
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 3 extents found
/mnt/test/test/data-prealloc/file-2: 3 extents found


@@ -0,0 +1,3 @@
== creating fragmented extents
== unlink file with moved extents to free extents per block
== cleanup


@@ -1,3 +1,11 @@
== non-acl O_TMPFILE creation honors umask
umask 022
fstat after open(0777): 0100755
stat after linkat: 0100755
umask 077
fstat after open(0777): 0100700
stat after linkat: 0100700
== stage from tmpfile
total file size 33669120
00000000 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 |AAAAAAAAAAAAAAAA|
*


@@ -7,3 +7,4 @@ found second
== changing metadata must increase meta seq
== changing contents must increase data seq
== make sure dirtying doesn't livelock walk
== concurrent update attempts maintain single entries


@@ -40,6 +40,7 @@ generic/092
generic/098
generic/101
generic/104
generic/105
generic/106
generic/107
generic/117
@@ -51,6 +52,7 @@ generic/184
generic/221
generic/228
generic/236
generic/237
generic/245
generic/249
generic/257
@@ -63,6 +65,7 @@ generic/308
generic/309
generic/313
generic/315
generic/319
generic/322
generic/335
generic/336
@@ -72,6 +75,7 @@ generic/342
generic/343
generic/348
generic/360
generic/375
generic/376
generic/377
@@ -282,4 +286,4 @@ shared/004
shared/032
shared/051
shared/289
Passed all 75 tests
Passed all 79 tests


@@ -58,6 +58,7 @@ $(basename $0) options:
-m | Run mkfs on the device before mounting and running
| tests. Implies unmounting existing mounts first.
-n <nr> | The number of devices and mounts to test.
-o <opts> | Add option string to all mounts during all tests.
-P | Enable trace_printk.
-p | Exit script after preparing mounts only, don't run tests.
-q <nr> | The first <nr> mounts will be quorum members. Must be
@@ -68,6 +69,7 @@ $(basename $0) options:
-s | Skip git repo checkouts.
-t | Enabled trace events that match the given glob argument.
| Multiple options enable multiple globbed events.
-T <nr> | Multiply the original trace buffer size by nr during the run.
-X | xfstests git repo. Used by tests/xfstests.sh.
-x | xfstests git branch to checkout and track.
-y | xfstests ./check additional args
@@ -136,6 +138,12 @@ while true; do
T_NR_MOUNTS="$2"
shift
;;
-o)
test -n "$2" || die "-o must have option string argument"
# always appending to existing options
T_MNT_OPTIONS+=",$2"
shift
;;
-P)
T_TRACE_PRINTK="1"
;;
@@ -160,6 +168,11 @@ while true; do
T_TRACE_GLOB+=("$2")
shift
;;
-T)
test -n "$2" || die "-T must have trace buffer size multiplier argument"
T_TRACE_MULT="$2"
shift
;;
-X)
test -n "$2" || die "-X requires xfstests git repo dir argument"
T_XFSTESTS_REPO="$2"
@@ -345,6 +358,13 @@ if [ -n "$T_INSMOD" ]; then
cmd insmod "$T_KMOD/src/scoutfs.ko"
fi
if [ -n "$T_TRACE_MULT" ]; then
orig_trace_size=$(cat /sys/kernel/debug/tracing/buffer_size_kb)
mult_trace_size=$((orig_trace_size * T_TRACE_MULT))
msg "increasing trace buffer size from $orig_trace_size KiB to $mult_trace_size KiB"
echo $mult_trace_size > /sys/kernel/debug/tracing/buffer_size_kb
fi
nr_globs=${#T_TRACE_GLOB[@]}
if [ $nr_globs -gt 0 ]; then
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
@@ -374,6 +394,7 @@ fi
# always describe tracing in the logs
cmd cat /sys/kernel/debug/tracing/set_event
cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/sys/kernel/debug/tracing/buffer_size_kb \
/proc/sys/kernel/ftrace_dump_on_oops
#
@@ -430,6 +451,7 @@ for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
if [ "$i" -lt "$T_QUORUM" ]; then
opts="$opts,quorum_slot_nr=$i"
fi
opts="${opts}${T_MNT_OPTIONS}"
msg "mounting $meta_dev|$data_dev on $dir"
cmd mount -t scoutfs $opts "$data_dev" "$dir" &
@@ -604,6 +626,9 @@ if [ -n "$T_TRACE_GLOB" -o -n "$T_TRACE_PRINTK" ]; then
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
echo 0 > /sys/kernel/debug/tracing/options/trace_printk
cat /sys/kernel/debug/tracing/trace > "$T_RESULTS/traces"
if [ -n "$orig_trace_size" ]; then
echo $orig_trace_size > /sys/kernel/debug/tracing/buffer_size_kb
fi
fi
if [ "$skipped" == 0 -a "$failed" == 0 ]; then


@@ -6,9 +6,12 @@ simple-inode-index.sh
simple-staging.sh
simple-release-extents.sh
fallocate.sh
basic-truncate.sh
data-prealloc.sh
setattr_more.sh
offline-extent-waiting.sh
move-blocks.sh
large-fragmented-free.sh
enospc.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
@@ -24,7 +27,7 @@ createmany-large-names.sh
createmany-rename-large-dir.sh
stage-release-race-alloc.sh
stage-multi-part.sh
stage-tmpfile.sh
o_tmpfile.sh
basic-posix-consistency.sh
dirent-consistency.sh
mkdir-rename-rmdir.sh


@@ -0,0 +1,113 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
/*
* This creates fragmented data extents.
*
* A file is created that has alternating free and allocated extents.
* This also results in the global allocator having the matching
* fragmented free extent pattern. While that file is being created,
* occasionally an allocated extent is moved to another file. This
* results in a file that has fragmented extents at a given stride that
* can be deleted to create free data extents with a given stride.
*
* We don't have hole punching so to do this quickly we use a goofy
* combination of fallocate, truncate, and our move_blocks ioctl.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <linux/types.h>
#include <assert.h>
#include "ioctl.h"
#define BLOCK_SIZE 4096
int main(int argc, char **argv)
{
struct scoutfs_ioctl_move_blocks mb = {0,};
unsigned long long freed_extents;
unsigned long long move_stride;
unsigned long long i;
int alloc_fd;
int trunc_fd;
off_t off;
int ret;
if (argc != 5) {
printf("%s <freed_extents> <move_stride> <alloc_file> <trunc_file>\n", argv[0]);
return 1;
}
freed_extents = strtoull(argv[1], NULL, 0);
move_stride = strtoull(argv[2], NULL, 0);
alloc_fd = open(argv[3], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
if (alloc_fd == -1) {
fprintf(stderr, "error opening %s: %d (%s)\n", argv[3], errno, strerror(errno));
exit(1);
}
trunc_fd = open(argv[4], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
if (trunc_fd == -1) {
fprintf(stderr, "error opening %s: %d (%s)\n", argv[4], errno, strerror(errno));
exit(1);
}
for (i = 0, off = 0; i < freed_extents; i++, off += BLOCK_SIZE * 2) {
ret = fallocate(alloc_fd, 0, off, BLOCK_SIZE * 2);
if (ret < 0) {
fprintf(stderr, "fallocate at off %llu error: %d (%s)\n",
(unsigned long long)off, errno, strerror(errno));
exit(1);
}
ret = ftruncate(alloc_fd, off + BLOCK_SIZE);
if (ret < 0) {
fprintf(stderr, "truncate to off %llu error: %d (%s)\n",
(unsigned long long)off + BLOCK_SIZE, errno, strerror(errno));
exit(1);
}
if ((i % move_stride) == 0) {
mb.from_fd = alloc_fd;
mb.from_off = off;
mb.len = BLOCK_SIZE;
mb.to_off = i * BLOCK_SIZE;
ret = ioctl(trunc_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
if (ret < 0) {
fprintf(stderr, "move from off %llu error: %d (%s)\n",
(unsigned long long)off,
errno, strerror(errno));
}
}
}
if (alloc_fd > -1)
close(alloc_fd);
if (trunc_fd > -1)
close(trunc_fd);
return 0;
}


@@ -0,0 +1,97 @@
/*
* Show the modes of files as we create them with O_TMPFILE and link
* them into the namespace.
*
* Copyright (C) 2022 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/stat.h>
#include <assert.h>
#include <limits.h>
static void linkat_tmpfile_modes(char *dir, char *lpath, mode_t mode)
{
char proc_self[PATH_MAX];
struct stat st;
int ret;
int fd;
umask(mode);
printf("umask 0%o\n", mode);
fd = open(dir, O_RDWR | O_TMPFILE, 0777);
if (fd < 0) {
perror("open(O_TMPFILE)");
exit(1);
}
ret = fstat(fd, &st);
if (ret < 0) {
perror("fstat");
exit(1);
}
printf("fstat after open(0777): 0%o\n", st.st_mode);
snprintf(proc_self, sizeof(proc_self), "/proc/self/fd/%d", fd);
ret = linkat(AT_FDCWD, proc_self, AT_FDCWD, lpath, AT_SYMLINK_FOLLOW);
if (ret < 0) {
perror("linkat");
exit(1);
}
close(fd);
ret = stat(lpath, &st);
if (ret < 0) {
perror("fstat");
exit(1);
}
printf("stat after linkat: 0%o\n", st.st_mode);
ret = unlink(lpath);
if (ret < 0) {
perror("unlink");
exit(1);
}
}
int main(int argc, char **argv)
{
char *lpath;
char *dir;
if (argc < 3) {
printf("%s <open_dir> <linkat_path>\n", argv[0]);
return 1;
}
dir = argv[1];
lpath = argv[2];
linkat_tmpfile_modes(dir, lpath, 022);
linkat_tmpfile_modes(dir, lpath, 077);
return 0;
}


@@ -0,0 +1,21 @@
#
# Test basic correctness of truncate.
#
t_require_commands yes dd od truncate
FILE="$T_D0/file"
#
# We forgot to write a dirty block that zeroed the tail of a partial
# final block as we truncated past it.
#
echo "== truncate writes zeroed partial end of file block"
yes | dd of="$FILE" bs=8K count=1 status=none
sync
truncate -s 6K "$FILE"
truncate -s 12K "$FILE"
echo 3 > /proc/sys/vm/drop_caches
od -Ad -x "$FILE"
t_pass


@@ -0,0 +1,136 @@
#
# test that the data prealloc options behave as expected. We write to
# two files a block at a time so that a single file doesn't naturally
# merge adjacent consecutive allocations. (we don't have multiple
# allocation cursors)
#
t_require_commands scoutfs stat filefrag dd touch truncate
write_forwards()
{
local prefix="$1"
local nr="$2"
local blk
touch "$prefix"-{1,2}
truncate -s 0 "$prefix"-{1,2}
for blk in $(seq 0 1 $((nr - 1))); do
dd if=/dev/zero of="$prefix"-1 bs=4096 seek=$blk count=1 conv=notrunc status=none
dd if=/dev/zero of="$prefix"-2 bs=4096 seek=$blk count=1 conv=notrunc status=none
done
}
write_backwards()
{
local prefix="$1"
local nr="$2"
local blk
touch "$prefix"-{1,2}
truncate -s 0 "$prefix"-{1,2}
for blk in $(seq $((nr - 1)) -1 0); do
dd if=/dev/zero of="$prefix"-1 bs=4096 seek=$blk count=1 conv=notrunc status=none
dd if=/dev/zero of="$prefix"-2 bs=4096 seek=$blk count=1 conv=notrunc status=none
done
}
release_files() {
local prefix="$1"
local size=$(($2 * 4096))
local vers
local f
for f in "$prefix"*; do
size=$(stat -c "%s" "$f")
vers=$(scoutfs stat -s data_version "$f")
scoutfs release "$f" -V "$vers" -o 0 -l $size
done
}
stage_files() {
local prefix="$1"
local nr="$2"
local vers
local f
for blk in $(seq 0 1 $((nr - 1))); do
for f in "$prefix"*; do
vers=$(scoutfs stat -s data_version "$f")
scoutfs stage /dev/zero "$f" -V "$vers" -o $((blk * 4096)) -l 4096
done
done
}
print_extents_found()
{
local prefix="$1"
filefrag "$prefix"* 2>&1 | grep "extent.*found" | t_filter_fs
}
t_save_all_sysfs_mount_options data_prealloc_blocks
t_save_all_sysfs_mount_options data_prealloc_contig_only
restore_options()
{
t_restore_all_sysfs_mount_options data_prealloc_blocks
t_restore_all_sysfs_mount_options data_prealloc_contig_only
}
trap restore_options EXIT
prefix="$T_D0/file"
echo "== initial writes smaller than prealloc grow to prealloc size"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 64
print_extents_found $prefix
echo "== larger files get full prealloc extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 128
print_extents_found $prefix
echo "== non-streaming writes with contig have per-block extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_backwards $prefix 32
print_extents_found $prefix
echo "== any writes to region prealloc get full extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 16
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 64
print_extents_found $prefix
write_backwards $prefix 64
print_extents_found $prefix
echo "== streaming offline writes get full extents either way"
t_set_sysfs_mount_option 0 data_prealloc_blocks 16
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 64
release_files $prefix 64
stage_files $prefix 64
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
release_files $prefix 64
stage_files $prefix 64
print_extents_found $prefix
echo "== goofy preallocation amounts work"
t_set_sysfs_mount_option 0 data_prealloc_blocks 7
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 14
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_blocks 13
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 53
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_blocks 1
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 3
print_extents_found $prefix
t_pass


@@ -0,0 +1,22 @@
#
# Make sure the server can handle a transaction with a data_freed whose
# blocks all hit different btree blocks in the main free list. It
# probably has to be merged in multiple commits.
#
t_require_commands fragmented_data_extents
EXTENTS_PER_BTREE_BLOCK=600
EXTENTS_PER_LIST_BLOCK=8192
FREED_EXTENTS=$((EXTENTS_PER_BTREE_BLOCK * EXTENTS_PER_LIST_BLOCK))
echo "== creating fragmented extents"
fragmented_data_extents $FREED_EXTENTS $EXTENTS_PER_BTREE_BLOCK "$T_D0/alloc" "$T_D0/move"
echo "== unlink file with moved extents to free extents per block"
rm -f "$T_D0/move"
echo "== cleanup"
rm -f "$T_D0/alloc"
t_pass

tests/tests/o_tmpfile.sh

@@ -0,0 +1,16 @@
#
# basic tests of O_TMPFILE
#
t_require_commands stage_tmpfile o_tmpfile_umask hexdump
echo "== non-acl O_TMPFILE creation honors umask"
o_tmpfile_umask "$T_D0" "$T_D0/umask-file"
echo "== stage from tmpfile"
DEST_FILE="$T_D0/dest_file"
stage_tmpfile $T_D0 $DEST_FILE
hexdump -C "$DEST_FILE"
rm -f "$DEST_FILE"
t_pass

View File

@@ -103,4 +103,34 @@ while [ "$nr" -lt 100 ]; do
((nr++))
done
#
# make sure rapid concurrent metadata updates don't create multiple
# meta_seq entries
#
# we had a bug where deletion items created under concurrent_write locks
# could get versions older than the items they're deleting which were
# protected by read/write locks.
#
echo "== concurrent update attempts maintain single entries"
FILES=4
nr=1
while [ "$nr" -lt 10 ]; do
# touch a bunch of files in parallel from all mounts
for i in $(t_fs_nrs); do
eval path="\$T_D${i}"
seq -f "$path/file-%.0f" 1 $FILES | xargs touch &
done
wait || t_fail "concurrent file updates failed"
# make sure no inodes have duplicate entries
sync
scoutfs walk-inodes -p "$T_D0" meta_seq -- 0 -1 | \
grep -v "minor" | \
awk '{print $4}' | \
sort -n | uniq -c | \
awk '($1 != 1)' | \
sort -n
((nr++))
done
t_pass


@@ -36,7 +36,8 @@ test_xattr_lengths() {
else
echo "$name=\"$val\"" > "$T_TMP.good"
fi
cmp "$T_TMP.good" "$T_TMP.got" || exit 1
cmp "$T_TMP.good" "$T_TMP.got" || \
t_fail "cmp failed name len $name_len val len $val_len"
setfattr -x $name "$FILE"
}


@@ -1,15 +0,0 @@
#
# Run tmpfile_stage and check the output with hexdump.
#
t_require_commands stage_tmpfile hexdump
DEST_FILE="$T_D0/dest_file"
stage_tmpfile $T_D0 $DEST_FILE
hexdump -C "$DEST_FILE"
rm -fr "$DEST_FILE"
t_pass


@@ -65,7 +65,6 @@ generic/030 # mmap missing
generic/075 # file content mismatch failures (fds, etc)
generic/080 # mmap missing
generic/103 # enospc causes trans commit failures
generic/105 # needs trigage: something about acls
generic/108 # mount fails on failing device?
generic/112 # file content mismatch failures (fds, etc)
generic/120 # (can't exec 'cause no mmap)
@@ -73,17 +72,14 @@ generic/126 # (can't exec 'cause no mmap)
generic/141 # mmap missing
generic/213 # enospc causes trans commit failures
generic/215 # mmap missing
generic/237 # wrong error return from failing setfacl?
generic/246 # mmap missing
generic/247 # mmap missing
generic/248 # mmap missing
generic/319 # utils output change? update branch?
generic/321 # requires selinux enabled for '+' in ls?
generic/325 # mmap missing
generic/338 # BUG_ON update inode error handling
generic/346 # mmap missing
generic/347 # _dmthin_mount doesn't work?
generic/375 # utils output change? update branch?
EOF
t_restore_output


@@ -1,140 +0,0 @@
#!/usr/bin/bash
# /usr/libexec/scoutfs-fenced/run/ipmi-remote-host
# ipmi configuration
SCOUTFS_IPMI_CONFIG_FILE=${SCOUTFS_IPMI_CONFIG_FILE:-/etc/scoutfs/scoutfs-ipmi.conf}
SCOUTFS_IPMI_HOSTS_FILE=${SCOUTFS_IPMI_HOSTS_FILE:-/etc/scoutfs/scoutfs-ipmi-hosts.conf}
## hosts file format
## SCOUTFS_HOST_IP IPMI_ADDRESS
## ex:
# 192.168.1.1 192.168.10.1
# command setup
IPMI_POWER="/sbin/ipmipower"
SSH_CMD="ssh -o ConnectTimeout=3 -o BatchMode=yes -o StrictHostKeyChecking=no"
LOGGER="/bin/logger -p local3.crit -t scoutfs-fenced"
$LOGGER "ipmi fence script invoked: IP: $SCOUTFS_FENCED_REQ_IP RID: $SCOUTFS_FENCED_REQ_RID TEST: $IPMITEST"
echo_fail() {
echo "$@" >&2
$LOGGER "fence failed: $@"
exit 1
}
echo_log() {
echo "$@" >&2
$LOGGER "fence info: $@"
}
echo_test_pass() {
echo -e "\xE2\x9C\x94 $@"
}
echo_test_fail() {
echo -e "\xE2\x9D\x8C $@"
}
test -n "$SCOUTFS_IPMI_CONFIG_FILE" || \
echo_fail "SCOUTFS_IPMI_CONFIG_FILE isn't set"
test -r "$SCOUTFS_IPMI_CONFIG_FILE" || \
echo_fail "$SCOUTFS_IPMI_CONFIG_FILE isn't readable file"
. "$SCOUTFS_IPMI_CONFIG_FILE"
test -n "$SCOUTFS_IPMI_HOSTS_FILE" || \
echo_fail "SCOUTFS_IPMI_HOSTS_FILE isn't set"
test -r "$SCOUTFS_IPMI_HOSTS_FILE" || \
echo_fail "$SCOUTFS_IPMI_HOSTS_FILE isn't readable file"
test -x "$IPMI_POWER" || \
echo_fail "$IPMI_POWER not found, need to install freeimpi?"
export ip="$SCOUTFS_FENCED_REQ_IP"
export rid="$SCOUTFS_FENCED_REQ_RID"
getIPMIhost () {
host=$(awk -v ip="$1" '$1 == ip {print $2}' "$SCOUTFS_IPMI_HOSTS_FILE") || \
echo_fail "lookup ipmi host failed"
echo "$host"
}
powerOffHost() {
# older versions of ipmipower inverted wait-until-off/wait-until-on, so specify both
$IPMI_POWER $IPMI_OPTS -h "$1" --wait-until-off --wait-until-on --off || \
echo_fail "ipmi power off $1 failed"
ipmioutput=$($IPMI_POWER $IPMI_OPTS -h "$1" --stat) || \
echo_fail "ipmi power stat $1 failed"
if [[ ! "$ipmioutput" =~ off ]]; then
echo_fail "ipmi stat $1 not off"
fi
$LOGGER "ipmi fence power down $1 success"
exit 0
}
if [ -n "$IPMITEST" ]; then
for i in $(awk '!/^($|[[:space:]]*#)/ {print $1}' "$SCOUTFS_IPMI_HOSTS_FILE"); do
if ! $SSH_CMD "$i" /bin/true; then
echo_test_fail "ssh $i"
else
echo_test_pass "ssh $i"
fi
host=$(getIPMIhost "$i")
if [ -z "$host" ]; then
echo_test_fail "ipmi config $i $host"
else
if ! $IPMI_POWER $IPMI_OPTS -h "$host" --stat; then
echo_test_fail "ipmi $i"
else
echo_test_pass "ipmi $i"
fi
fi
done
exit 0
fi
if [ -z "$ip" ]; then
echo_fail "no IP given for fencing"
fi
host=$(getIPMIhost "$ip")
if [ -z "$host" ]; then
echo_fail "no IPMI host found for fence IP"
fi
# first check via ssh if the mount still exists
# if ssh succeeds, we will only power down the node if mounted
if ! output=$($SSH_CMD "$ip" "echo BEGIN; LC_ALL=C egrep -m 1 '(^0x*|^$rid$)' /sys/kernel/boot_params/version /sys/fs/scoutfs/f*r*/rid; echo END"); then
# ssh not working, just power down host
powerOffHost "$host"
fi
if [[ ! "$output" =~ BEGIN ]]; then
# ssh failure
echo_log "no BEGIN"
powerOffHost "$host"
fi
if [[ ! "$output" =~ \/boot_params\/ ]]; then
# ssh failure
echo_log "no boot params"
powerOffHost "$host"
fi
if [[ ! "$output" =~ END ]]; then
# ssh failure
echo_log "no END"
powerOffHost "$host"
fi
if [[ "$output" =~ "rid:$rid" ]]; then
# rid still mounted, power down
echo_log "rid $rid still mounted"
powerOffHost "$host"
fi
$LOGGER "ipmi fence host $ip/$host success (rid $rid not mounted)"
exit 0


@@ -1,36 +0,0 @@
#!/usr/bin/bash
# /usr/libexec/scoutfs-fenced/run/local-force-umount
echo_fail() {
echo "$@" > /dev/stderr
exit 1
}
rid="$SCOUTFS_FENCED_REQ_RID"
#
# Look for a local mount with the rid to fence. Typically we'll at
# least find the mount with the server that requested the fence that
# we're processing. But it's possible that mounts are unmounted
# before, or while, we're running.
#
mnts=$(findmnt -l -n -t scoutfs -o TARGET) || \
echo_fail "findmnt -t scoutfs failed" > /dev/stderr
for mnt in $mnts; do
mnt_rid=$(scoutfs statfs -p "$mnt" -s rid) || \
echo_fail "scoutfs statfs $mnt failed"
if [ "$mnt_rid" == "$rid" ]; then
umount -f "$mnt" || \
echo_fail "umout -f $mnt"
exit 0
fi
done
#
# If the mount doesn't exist on this host then it can't access the
# devices by definition and can be considered fenced.
#
exit 0


@@ -1,139 +0,0 @@
#!/usr/bin/bash
# /usr/libexec/scoutfs-fenced/run/powerman-remote-host
# powerman configuration
SCOUTFS_PM_CONFIG_FILE=${SCOUTFS_PM_CONFIG_FILE:-/etc/scoutfs/scoutfs-pm.conf}
SCOUTFS_PM_HOSTS_FILE=${SCOUTFS_PM_HOSTS_FILE:-/etc/scoutfs/scoutfs-pm-hosts.conf}
## hosts file format
## SCOUTFS_HOST_IP POWERMAN_NODE_NAME
## ex:
# 192.168.1.1 dm1
# command setup
PM_CMD="/usr/bin/pm"
SSH_CMD="ssh -o ConnectTimeout=3 -o BatchMode=yes -o StrictHostKeyChecking=no"
LOGGER="/bin/logger -p local3.crit -t scoutfs-fenced"
$LOGGER "ipmi fence script invoked: IP: $SCOUTFS_FENCED_REQ_IP RID: $SCOUTFS_FENCED_REQ_RID TEST: $IPMITEST"
echo_fail() {
echo "$@" >&2
$LOGGER "fence failed: $@"
exit 1
}
echo_log() {
echo "$@" >&2
$LOGGER "fence info: $@"
}
echo_test_pass() {
echo -e "\xE2\x9C\x94 $@"
}
echo_test_fail() {
echo -e "\xE2\x9D\x8C $@"
}
test -n "$SCOUTFS_PM_CONFIG_FILE" || \
echo_fail "SCOUTFS_PM_CONFIG_FILE isn't set"
test -r "$SCOUTFS_PM_CONFIG_FILE" || \
echo_fail "$SCOUTFS_PM_CONFIG_FILE isn't readable file"
. "$SCOUTFS_PM_CONFIG_FILE"
test -n "$SCOUTFS_PM_HOSTS_FILE" || \
echo_fail "SCOUTFS_PM_HOSTS_FILE isn't set"
test -r "$SCOUTFS_PM_HOSTS_FILE" || \
echo_fail "$SCOUTFS_PM_HOSTS_FILE isn't readable file"
test -x "$PM_CMD" || \
echo_fail "$PMCMD not found, need to install powerman?"
export ip="$SCOUTFS_FENCED_REQ_IP"
fence_rid="$SCOUTFS_FENCED_REQ_RID"
getPMhost () {
host=$(awk -v ip="$1" '$1 == ip {print $2}' "$SCOUTFS_PM_HOSTS_FILE") || \
echo_fail "lookup pm host failed"
echo "$host"
}
powerOffHost() {
$PM_CMD $PM_OPTS "$1" -0 || \
echo_fail "pm power off $host failed"
pmoutput=$($PM_CMD $PM_OPTS "$1" -q | grep "$1") || \
echo_fail "powerman power stat $1 failed"
if [[ ! "$pmoutput" =~ off ]]; then
echo_fail "powerman stat $1 not off"
fi
$LOGGER "powerman fence power down $1 success"
exit 0
}
if [ -n "$PMTEST" ]; then
for i in $(awk '!/^($|[[:space:]]*#)/ {print $1}' "$SCOUTFS_PM_HOSTS_FILE"); do
if ! $SSH_CMD "$i" /bin/true; then
echo_test_fail "ssh $i"
else
echo_test_pass "ssh $i"
fi
host=$(getPMhost "$i")
if [ -z "$host" ]; then
echo_test_fail "pm config $i $host"
else
if ! $PM_CMD $PM_OPTS "$host" -q; then
echo_test_fail "pm $i"
else
echo_test_pass "pm $i"
fi
fi
done
exit 0
fi
if [ -z "$ip" ]; then
echo_fail "no IP given for fencing"
fi
host=$(getPMhost "$ip")
if [ -z "$host" ]; then
echo_fail "no host found for fence IP"
fi
# first check via ssh if the mount still exists
# if ssh succeeds, we will only power down the node if mounted
if ! output=$($SSH_CMD "$ip" "echo BEGIN; LC_ALL=C egrep -m 1 '(^0x*|^$rid$)' /sys/kernel/boot_params/version /sys/fs/scoutfs/f*r*/rid; echo END"); then
# ssh not working, just power down host
powerOffHost "$host"
fi
if [[ ! "$output" =~ BEGIN ]]; then
# ssh failure
echo_log "no BEGIN"
powerOffHost "$host"
fi
if [[ ! "$output" =~ \/boot_params\/ ]]; then
# ssh failure
echo_log "no boot params"
powerOffHost "$host"
fi
if [[ ! "$output" =~ END ]]; then
# ssh failure
echo_log "no END"
powerOffHost "$host"
fi
if [[ "$output" =~ "rid:$rid" ]]; then
# rid still mounted, power down
echo_log "rid $rid still mounted"
powerOffHost "$host"
fi
$LOGGER "powerman fence host $ip/$host success (rid $rid not mounted)"
exit 0


@@ -1,11 +0,0 @@
# /etc/scoutfs/scoutfs-ipmi-hosts.conf
## config file format
##
## SCOUTFS_HOST_IP must match the interface used for scoutfs
## leader/follower communications
##
## SCOUTFS_HOST_IP IPMI_ADDRESS
## ex:
#192.168.1.1 192.168.10.1


@@ -1,10 +0,0 @@
#!/usr/bin/bash
# /etc/scoutfs/scoutfs-ipmi.conf
IPMI_USER="admin"
IPMI_PASSWORD="password"
IPMI_OPTS="-D LAN_2_0 -u $IPMI_USER -p $IPMI_PASSWORD"
# some Intel BMCs need -I 17
# IPMI_OPTS="-D LAN_2_0 -u $IPMI_USER -p $IPMI_PASSWORD -I 17"


@@ -1,11 +0,0 @@
# /etc/scoutfs/scoutfs-ipmi-hosts.conf
## config file format
##
## SCOUTFS_HOST_IP must match the interface used for scoutfs
## leader/follower communications
##
## SCOUTFS_HOST_IP POWERMAN_NODE_NAME
## ex:
#192.168.1.1 node1


@@ -1,8 +0,0 @@
#!/usr/bin/bash
# /etc/scoutfs/scoutfs-pm.conf
PM_OPTS=""
# optionally specify remote powerman server
#PM_OPTS="-h pm-server.localdomain"


@@ -15,12 +15,61 @@ general mount options described in the
.BR mount (8)
manual page.
.TP
.B acl
The acl mount option enables support for POSIX Access Control Lists
as detailed in
.BR acl (5) .
Support for POSIX ACLs is the default.
.TP
.B data_prealloc_blocks=<blocks>
Set the size of preallocation regions of data files, in 4KiB blocks.
Writes to regions that contain no extents will attempt to preallocate
the full region. This can waste a lot of space with small files, files
with sparse regions, and files whose final length isn't a multiple of
the preallocation size. The data_prealloc_contig_only option below,
which is enabled by default, restricts this behavior to waste less
space.
.sp
All the preallocation options can be changed in an active mount by
writing to their respective files in the mount_options directory in the
mount's sysfs directory.
.sp
Note that it is always more efficient to use
.BR fallocate (2)
to precisely allocate large extents for the final size of the file.
Always enable it in software that supports it.
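As a rough sketch of runtime tuning, a program (or an equivalent shell redirection) can write the new value to the option's file under the mount's sysfs directory. The fsid/rid directory name below is a placeholder; the mount_options directory name follows the test suite's helpers:

#include <stdio.h>

int main(void)
{
        /* placeholder path: the per-mount directory name varies by fsid/rid */
        FILE *f = fopen("/sys/fs/scoutfs/FSID_RID/mount_options/data_prealloc_blocks", "w");

        if (!f) {
                perror("fopen");
                return 1;
        }
        fprintf(f, "32\n");     /* preallocate in regions of 32 4KiB blocks */
        return fclose(f) != 0;
}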
.TP
.B data_prealloc_contig_only=<0|1>
This option, currently the default, limits file data preallocation in
two ways. First, it will only preallocate when extending a fully
allocated file. Second, it will limit the size of preallocation to the
existing length of the file. These limits reduce the amount of
preallocation wasted per file at the cost of multiple initial extents in
all files. It only supports simple streaming writes; any other write
pattern will not be recognized and could result in many fragmented
extent allocations.
.sp
This option can be disabled to encourage large allocated extents
regardless of write patterns. This can be helpful if files are written
with initial sparse regions (perhaps by multiple threads writing to
different regions) and wasted space isn't an issue (perhaps because the
file population contains few small files).
.TP
.B metadev_path=<device>
The metadev_path option specifies the path to the block device that
contains the filesystem's metadata.
.sp
This option is required.
.TP
.B noacl
The noacl mount option disables the default support for POSIX Access
Control Lists. Any existing system.posix_acl_default and
system.posix_acl_access extended attributes remain in inodes. They
will appear in listings from
.BR listxattr (2)
but specific retrieval or removal operations will fail. They will be
used for enforcement again if ACL support is later enabled.
.TP
.B orphan_scan_delay_ms=<number>
This option sets the average expected delay, in milliseconds, between
each mount's scan of the global orphaned inode list. Jitter is added to


@@ -597,7 +597,7 @@ format.
.PD
.TP
.BI "print META-DEVICE"
.BI "print {-S|--skip-likely-huge} META-DEVICE"
.sp
Prints out all of the metadata in the file system. This makes no effort
to ensure that the structures are consistent as they're traversed and
@@ -607,13 +607,25 @@ output.
.PD 0
.TP
.sp
.B "-S, --skip-likely-huge"
Skip printing structures that are likely to be very large. The
structures that are skipped tend to be global, with sizes that grow with
the size of the volume. Examples of skipped structures include
the global fs items, srch files, and metadata and data
allocators. Similar structures that are not skipped are related to the
number of mounts and are maintained at a relatively reasonable size.
These include per-mount log trees, srch files, allocators, and the
metadata allocators used by server commits.
.sp
Skipping the larger structures limits the print output to a relatively
constant size rather than a large multiple of the volume's used metadata
space, making the output much more useful for inspection.
.TP
.B "META-DEVICE"
The path to the metadata device for the filesystem whose metadata will be
printed. Since this command reads via the host's buffer cache, it may not
reflect the current blocks in the filesystem possibly written to the shared
block devices from another host, unless
.B blockdev \--flushbufs
command is used first.
printed. An attempt will be made to flush the host's buffer cache for
this device with the BLKFLSBUF ioctl, or with posix_fadvise() if
the path refers to a regular file.
.RE
.PD


@@ -55,21 +55,14 @@ install -m 755 -D src/scoutfs $RPM_BUILD_ROOT%{_sbindir}/scoutfs
install -m 644 -D src/ioctl.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/ioctl.h
install -m 644 -D src/format.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/format.h
install -m 755 -D fenced/scoutfs-fenced $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/scoutfs-fenced
install -m 755 -D fenced/local-force-unmount $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/run/local-force-unmount
install -m 755 -D fenced/ipmi-remote-host $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/run/ipmi-remote-host
install -m 755 -D fenced/powerman-remote-host $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/run/powerman-remote-host
install -m 644 -D fenced/scoutfs-fenced.service $RPM_BUILD_ROOT%{_unitdir}/scoutfs-fenced.service
install -m 644 -D fenced/scoutfs-fenced.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-fenced.conf.example
install -m 644 -D fenced/scoutfs-ipmi.conf $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-ipmi.conf
install -m 644 -D fenced/scoutfs-ipmi-hosts.conf $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-ipmi-hosts.conf
install -m 644 -D fenced/scoutfs-pm.conf $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-pm.conf
install -m 644 -D fenced/scoutfs-pm-hosts.conf $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-pm-hosts.conf
%files
%defattr(644,root,root,755)
%{_mandir}/man*/scoutfs*.gz
%{_unitdir}/scoutfs-fenced.service
%config(noreplace) %{_sysconfdir}/scoutfs
%{_sysconfdir}/scoutfs
%defattr(755,root,root,755)
%{_sbindir}/scoutfs
%{_libexecdir}/scoutfs-fenced


@@ -3,6 +3,7 @@
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <errno.h>
@@ -103,3 +104,44 @@ char *size_str(u64 nr, unsigned size)
return suffixes[i];
}
/*
* Try to flush the local read cache for a device. This is only a best
* effort as these interfaces don't block waiting to fully purge the
* cache. This is OK because it's used by cached readers that are known
* to be racy anyway.
*/
int flush_device(int fd)
{
struct stat st;
int ret;
ret = fstat(fd, &st);
if (ret < 0) {
ret = -errno;
fprintf(stderr, "fstat failed: %s (%d)\n", strerror(errno), errno);
goto out;
}
if (S_ISREG(st.st_mode)) {
ret = posix_fadvise(fd, 0, st.st_size, POSIX_FADV_DONTNEED);
if (ret < 0) {
ret = -errno;
fprintf(stderr, "POSIX_FADV_DONTNEED failed: %s (%d)\n",
strerror(errno), errno);
goto out;
}
} else if (S_ISBLK(st.st_mode)) {
ret = ioctl(fd, BLKFLSBUF, 0);
if (ret < 0) {
ret = -errno;
fprintf(stderr, "BLKFLSBUF, failed: %s (%d)\n", strerror(errno), errno);
goto out;
}
}
ret = 0;
out:
return ret;
}


@@ -14,5 +14,6 @@ int device_size(char *path, int fd,
char *use_type, u64 *size_ret);
float size_flt(u64 nr, unsigned size);
char *size_str(u64 nr, unsigned size);
int flush_device(int fd);
#endif


@@ -118,6 +118,33 @@ struct mkfs_args {
struct scoutfs_quorum_slot slots[SCOUTFS_QUORUM_MAX_SLOTS];
};
static int open_mkfs_dev(struct mkfs_args *args, char *path, mode_t mode, char *which)
{
int ret;
int fd = -1;
fd = open(path, mode);
if (fd < 0) {
ret = -errno;
fprintf(stderr, "failed to open %s dev '%s': %s (%d)\n",
which, path, strerror(errno), errno);
goto out;
}
ret = flush_device(fd);
if (ret < 0)
goto out;
if (!args->force)
ret = check_bdev(fd, path, which);
out:
if (ret < 0 && fd >= 0)
close(fd);
return ret ?: fd;
}
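The return statement uses the GNU "?:" extension: ret ?: fd evaluates to ret unless ret is zero, so an error code takes precedence and the success path yields the open descriptor. A tiny standalone demonstration (requires GNU C, i.e. gcc or clang):

#include <stdio.h>

int main(void)
{
        int ret = 0;
        int fd = 7;

        /* GNU extension: a ?: b yields a unless a is 0, then b */
        printf("%d\n", ret ?: fd);      /* 7: success path returns the fd */
        ret = -2;
        printf("%d\n", ret ?: fd);      /* -2: error takes precedence */
        return 0;
}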
/*
* Make a new file system by writing:
* - super blocks
@@ -156,32 +183,17 @@ static int do_mkfs(struct mkfs_args *args)
gettimeofday(&tv, NULL);
pseudo_random_bytes(&fsid, sizeof(fsid));
meta_fd = open(args->meta_device, O_RDWR | O_EXCL);
meta_fd = open_mkfs_dev(args, args->meta_device, O_RDWR | O_EXCL, "meta");
if (meta_fd < 0) {
ret = -errno;
fprintf(stderr, "failed to open '%s': %s (%d)\n",
args->meta_device, strerror(errno), errno);
ret = meta_fd;
goto out;
}
if (!args->force) {
ret = check_bdev(meta_fd, args->meta_device, "meta");
if (ret)
return ret;
}
data_fd = open(args->data_device, O_RDWR | O_EXCL);
data_fd = open_mkfs_dev(args, args->data_device, O_RDWR | O_EXCL, "data");
if (data_fd < 0) {
ret = -errno;
fprintf(stderr, "failed to open '%s': %s (%d)\n",
args->data_device, strerror(errno), errno);
ret = data_fd;
goto out;
}
if (!args->force) {
ret = check_bdev(data_fd, args->data_device, "data");
if (ret)
return ret;
}
super = calloc(1, SCOUTFS_BLOCK_SM_SIZE);
bt = calloc(1, SCOUTFS_BLOCK_LG_SIZE);


@@ -8,6 +8,7 @@
#include <errno.h>
#include <string.h>
#include <stdarg.h>
#include <stdbool.h>
#include <ctype.h>
#include <uuid/uuid.h>
#include <sys/socket.h>
@@ -26,6 +27,7 @@
#include "avl.h"
#include "srch.h"
#include "leaf_item_hash.h"
#include "dev.h"
static void print_block_header(struct scoutfs_block_header *hdr, int size)
{
@@ -989,9 +991,10 @@ static void print_super_block(struct scoutfs_super_block *super, u64 blkno)
struct print_args {
char *meta_device;
bool skip_likely_huge;
};
static int print_volume(int fd)
static int print_volume(int fd, struct print_args *args)
{
struct scoutfs_super_block *super = NULL;
struct print_recursion_args pa;
@@ -1041,23 +1044,26 @@ static int print_volume(int fd)
ret = err;
}
for (i = 0; i < array_size(super->meta_alloc); i++) {
snprintf(str, sizeof(str), "meta_alloc[%u]", i);
err = print_btree(fd, super, str, &super->meta_alloc[i].root,
if (!args->skip_likely_huge) {
for (i = 0; i < array_size(super->meta_alloc); i++) {
snprintf(str, sizeof(str), "meta_alloc[%u]", i);
err = print_btree(fd, super, str, &super->meta_alloc[i].root,
print_alloc_item, NULL);
if (err && !ret)
ret = err;
}
err = print_btree(fd, super, "data_alloc", &super->data_alloc.root,
print_alloc_item, NULL);
if (err && !ret)
ret = err;
}
err = print_btree(fd, super, "data_alloc", &super->data_alloc.root,
print_alloc_item, NULL);
if (err && !ret)
ret = err;
err = print_btree(fd, super, "srch_root", &super->srch_root,
print_srch_root_item, NULL);
if (err && !ret)
ret = err;
err = print_btree(fd, super, "logs_root", &super->logs_root,
print_log_trees_item, NULL);
if (err && !ret)
@@ -1065,19 +1071,23 @@ static int print_volume(int fd)
pa.super = super;
pa.fd = fd;
err = print_btree_leaf_items(fd, super, &super->srch_root.ref,
print_srch_root_files, &pa);
if (err && !ret)
ret = err;
if (!args->skip_likely_huge) {
err = print_btree_leaf_items(fd, super, &super->srch_root.ref,
print_srch_root_files, &pa);
if (err && !ret)
ret = err;
}
err = print_btree_leaf_items(fd, super, &super->logs_root.ref,
print_log_trees_roots, &pa);
if (err && !ret)
ret = err;
err = print_btree(fd, super, "fs_root", &super->fs_root,
print_fs_item, NULL);
if (err && !ret)
ret = err;
if (!args->skip_likely_huge) {
err = print_btree(fd, super, "fs_root", &super->fs_root,
print_fs_item, NULL);
if (err && !ret)
ret = err;
}
out:
free(super);
@@ -1098,7 +1108,12 @@ static int do_print(struct print_args *args)
return ret;
}
ret = print_volume(fd);
ret = flush_device(fd);
if (ret < 0)
goto out;
ret = print_volume(fd, args);
out:
close(fd);
return ret;
};
@@ -1108,6 +1123,9 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
struct print_args *args = state->input;
switch (key) {
case 'S':
args->skip_likely_huge = true;
break;
case ARGP_KEY_ARG:
if (!args->meta_device)
args->meta_device = strdup_or_error(state, arg);
@@ -1125,8 +1143,13 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
return 0;
}
static struct argp_option options[] = {
{ "skip-likely-huge", 'S', NULL, 0, "Skip large structures to minimize output size"},
{ NULL }
};
static struct argp argp = {
NULL,
options,
parse_opt,
"META-DEV",
"Print metadata structures"