Compare commits

...

151 Commits

Author SHA1 Message Date
Zach Brown
8efb30afbc No i_mutex in aio_read for data_wait_check
When a regular file reader first acquires a cluster lock, it checks the
file's extents to see if the region it is reading contains offline
extents.

When this was written the extent operations didn't have their own
internal locking.  It was up to callers to use vfs mechanisms to
serialize readers and writers.  The aio_read data_wait_check extent
caller tried to use dio_count and an i_mutex acquisition to ensure that
our ioctls wouldn't modify extents.

This creates a bad inversion between the vfs i_mutex and our cluster
inode lock.  There are lots of fs methods which are called by the vfs
with i_mutex held and which then acquire a cluster lock.  This read case
was doing the reverse: holding a cluster lock and then acquiring i_mutex.

Since the data waiting was written the file data extent operations have
added their own extent_sem to protect the extent items from concurrent
callers.  We can rely on that internal locking and drop the bad i_mutex
use which causes the inversion.

Not surprisingly, this was trivial to deadlock by racing simple
utilities like cat and touch.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-27 11:20:40 -07:00
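A minimal sketch of the new ordering described in the commit above,
assuming a hypothetical SCOUTFS_I() accessor and a hypothetical
region_has_offline_extents() helper (the real data_wait_check code
differs):

    /* the cluster lock is already held by the read path; take only the
     * extent_sem that the extent operations use internally, never
     * i_mutex, so the vfs's i_mutex -> cluster lock order holds */
    down_read(&si->extent_sem);
    ret = region_has_offline_extents(inode, pos, len);
    up_read(&si->extent_sem);
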
Zach Brown
df90b3eb90 Add export-lookup-evict-race test
Add a test that creates races between fh_to_dentry and eviction
triggered by lock invalidation.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-27 11:20:40 -07:00
Andy Grover
6eeaab3322 Merge pull request #35 from versity/zab/invalidate_already_pending
Handle back to back invalidation requests
2021-04-23 16:40:45 -07:00
Andy Grover
ac68d14b8d Merge pull request #36 from versity/zab/move_blocks_next_einval
Fix accidental EINVAL in move_blocks
2021-04-23 14:39:29 -07:00
Zach Brown
ecfc8a0d0e Merge pull request #33 from versity/zab/open_ino_map
Zab/open ino map
2021-04-23 10:55:11 -07:00
Zach Brown
63148d426e Fix accidental EINVAL in move_blocks
When move blocks is staging it requires an overlapping offline extent to
cover the entire region to move.

It performs the stage one extent at a time.  If the source extents are
fragmented it modifies each of them in turn across the region.

When looking for the extent to match the source extent it looked from
the iblock of the start of the whole operation, not the start of the
source extent it's matching.  This meant that it would find the first
extent it had just modified, which would no longer be offline, and
would return -EINVAL.

The fix is to have it search from the logical start of the extent it's
trying to match, not the start of the region.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-23 10:39:34 -07:00
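A sketch of the shape of the fix, with hypothetical extent lookup names:

    /* before: searching from the start of the whole region could land
     * on an earlier extent that staging already brought online */
    ret = next_extent(inode, region_start, &found);

    /* after: search from the source extent being matched */
    ret = next_extent(inode, src_ext->iblock, &found);
    if (ret == 0 && !(found.flags & OFFLINE_FLAG))
            ret = -EINVAL;
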
Zach Brown
a27c54568c Handle back to back invalidation requests
The client's incoming lock invalidation request handler triggers a
BUG_ON if it gets a request for a lock that is already processing a
previous invalidation request.  The server is supposed to only send
one request at a time.

The problem is that the batched invalidation request handling will send
responses outside of spinlock coverage before reacquiring the lock and
finishing processing once the response send has been successful.

This gives a window for another invalidation request to arrive after the
response was sent but before the invalidation finished processing.  This
triggers the bug.

The fix is to mark the lock such that we can recognize a valid second
request arriving after we send the response but before we finish
processing.  If it arrives we'll continue invalidation processing with
the arguments from the new request.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-22 17:00:50 -07:00
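A sketch of the marking described above, with hypothetical lock fields:

    spin_lock(&linfo->lock);
    if (lck->invalidate_pending) {
            /* a second request arrived after we sent the previous
             * response but before processing finished: save the new
             * arguments so the running invalidation continues with them */
            lck->inval_args = *args;
            lck->invalidate_again = true;
            spin_unlock(&linfo->lock);
            return 0;
    }
    lck->invalidate_pending = true;
    spin_unlock(&linfo->lock);
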
Zach Brown
dfc2f7a4e8 Remove unused scoutfs_free_unused_locks nr arg
The nr argument wasn't used.  It always tries to free as many as the
shrinker call will let it.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
94dd86f762 Process lock invalidation after shutdown
Lock teardown during unmount involves first calling shutdown and then
destroy.  The shutdown call is meant to ensure that it's safe to tear
down the client network connections.  Once shutdown returns locking is
promising that it won't call into the client to send new lock requests.

The current shutdown implementation is very heavy handed and shuts down
everything.  This creates a deadlock.  After calling lock shutdown, the
client will send its farewell and wait for a response.  The server might
not send the farewell response until other mounts have unmounted if our
client is the mount running the server.  In this case we still have to be
processing lock invalidation requests to allow other unmounting clients
to make forward progress.

This is reasonably easy and safe to do.  We only use the shutdown flag
to stop lock calls that would change lock state and send requests.  We
don't have it stop incoming request processing in the work queueing
functions.  It's safe to keep processing incoming requests between
_shutdown and _destroy because the requests already come in through the
client.  As the client shuts down it will stop calling us.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
841d22e26e Disable task reclaim flags for block cache vmalloc
Even though we can pass in gfp flags to vmalloc, it eventually calls pte
alloc functions which ignore the caller's flags and use user gfp flags.
This risks reclaim re-entering fs paths during allocations in the block
cache.  These allocations that allowed reclaim deep in the fs were
causing lockdep to add RECLAIM dependencies between locks and holler
about deadlocks.

We apply the same pattern that xfs does for disabling reclaim while
allocating vmalloced block payloads.  Setting PF_MEMALLOC_NOIO causes
reclaim in that task to clear __GFP_IO and __GFP_FS, regardless of the
flags passed to individual allocations in the task, preventing recursion.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
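The pattern reduces to bracketing the allocation with the task flag,
roughly as in this sketch (memalloc_noio_save/restore are the kernel
helpers that set and clear PF_MEMALLOC_NOIO; the surrounding block cache
code is elided):

    unsigned int noio_flags;
    void *data;

    /* any reclaim entered from this task now clears __GFP_IO and
     * __GFP_FS, so the pte allocs deep inside vmalloc can't recurse
     * back into the fs */
    noio_flags = memalloc_noio_save();
    data = __vmalloc(size, GFP_NOFS, PAGE_KERNEL);
    memalloc_noio_restore(noio_flags);
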
Zach Brown
ba8bf13ae1 Update dmesg whitelist for recovery
The shared recovery layer outputs different messages than when it ran
only for lock_recovery in the lock server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
2949b6063f Clear lock invalidate_pending during destroy
Locks have a bunch of state that reflects concurrent processing.
Testing that state determines when it's safe to free a lock because
nothing is going on.

During unmount we abruptly stop processing locks.  Unmount will send a
farewell to the server which will remove all the state associated with
the client that's unmounting for all its locks, regardless of the state
the locks were in.

The client unmount path has to clean up the interrupted lock state and
free it, carefully avoiding assertions that would otherwise indicate
that we're freeing used locks.  The move to async lock invalidation
forgot to clean up the invalidation state.  Previously a synchronous
work function would set and clear invalidate_pending while it was
running.  Once we finished waiting for it invalidate_pending would be
clear.  The move to async invalidation work meant that we can still have
invalidate_pending with no work executing.  Lock destruction removed
locks from the invalidation list but forgot to clear the
invalidate_pending flag.

This triggered assertions during unmount that were otherwise harmless.
There was no other use of the lock; we had just forgotten to clean up
its state.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
1e88aa6c0f Shutdown data after trans
The data_info struct holds the data allocator that is filled by
transactions as they commit.  We have to free it after we've shutdown
transactions.  It's more like the forest in this regard so we move its
destruction down by the forest to group similar behaviour.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
d9aea98220 Shutdown locking before transactions
Shutting down the lock client waits for invalidation work and prevents
future work from being queued.  We're currently shutting down the
subsystems that lock calls before lock itself, leading to crashes if we
happen to have invalidations executing as we unmount.

Shutting down locking before its dependencies fixes this.  This was hit
in testing during the inode deletion fixes because they created the
perfect race by acquiring locks during unmount, so that the server was
very likely to send invalidations to one mount on behalf of another as
they both unmounted.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
04f4b8bcb3 Perform final transaction write before shutdown
Shutting down the transaction during unmount relied on the vfs unmount
path to perform a sync of any remaining dirty transaction.  There are
ways that we can dirty a transaction during unmount after it calls
the fs sync, so we try to write any remaining dirty transaction before
shutting down.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
fead263af3 Remove unused sb_info shutdown
We're no longer using the shutdown field in our sb info struct.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
4389c73c14 Fix deadlock between lock invalidate and evict
We've had a long-standing deadlock between lock invalidation and
eviction.  Invalidating a lock wants to lookup inodes and drop their
resources while blocking locks.  Eviction wants to get a lock to perform
final deletion while the inode has I_FREEING set, which blocks lookups.

We only saw this deadlock a handful of times in all of the time we've
run the code, but it's much more common now that we're acquiring locks
in iput to test that nlink is zero instead of only when nlink is zero.
I see unmount hang regularly when testing final inode deletion.

This adds a lookup variant for invalidation which will refuse to
return freeing inodes so they won't be waited on.  Once they're freeing
they can't be seen by future lock users so they don't need to be
invalidated.  This keeps the lock invalidation promise and avoids
sleeping on freeing inodes, which creates the deadlock.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
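On kernels that provide find_inode_nowait() the lookup variant could take
roughly this shape (a sketch; the match callback runs under
inode_hash_lock, must not sleep, and takes its own reference):

    static int inval_match(struct inode *inode, unsigned long ino, void *data)
    {
            if (inode->i_ino != ino)
                    return 0;

            spin_lock(&inode->i_lock);
            if (inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) {
                    /* freeing inodes can't be seen by future lock users,
                     * refuse them rather than sleeping and deadlocking */
                    spin_unlock(&inode->i_lock);
                    return -1;
            }
            atomic_inc(&inode->i_count);    /* what __iget() does */
            spin_unlock(&inode->i_lock);
            return 1;
    }

    inode = find_inode_nowait(sb, ino, inval_match, NULL);
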
Zach Brown
dba88705f7 Fix t_umount mount point number
t_umount had a typo that had it try to unmount a mount based on a
caller's variable, which accidentally happened to work for its only
caller.  Future callers would not have been so lucky.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
715c29aad3 Proactively drop dentry/inode caches outside locks
Previously we wouldn't try to remove cached dentries and inodes as
lock revocation removed cluster lock coverage.  The next time
we tried to use the cached dentries or inodes we'd acquire
a lock and refresh them.

But now cached inodes prevent final inode deletion.  If they linger
outside cluster locking then any final deletion will need to be deferred
until all of the inode's cached copies across the cluster are naturally
dropped at some point in the future.  It might take refreshing the
dentries or memory pressure to push out the old cached inodes.

This tries to proactively drop cached dentries and inodes as we lose
cluster lock coverage if they're not actively referenced.  We need to be
careful not to perform final inode deletion during lock invalidation
because it will deadlock, so we defer an iput which could delete during
evict out to async work.

Now deletion can be done synchronously in the task that is performing
the unlink because previous use of the inode on remote mounts hasn't
left unused cached inodes sitting around.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
b244b2d59c Add inode-deletion test
Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
22371fe5bd Fully destroy inodes after all mounts evict
Today an inode's items are deleted once its nlink reaches zero and the
final iput is called in a local mount.  This can delete inodes from
under other mounts which have opened the inode before it was unlinked on
another mount.

We fix this by adding cached inode tracking.  Each mount maintains
groups of cached inode bitmaps at the same granularity as inode locking.
As a mount performs its final iput it gets a bitmap from the server
which indicates if any other mount has inodes in the group open.

This makes the two fast paths, opening and closing linked files and
deleting a file that was unlinked locally, pay only the moderate cost of
maintaining the bitmap locally and getting the open map once per lock
group.  Removing many files in a group will only lock and get the open
map once per group.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
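A sketch of the bitmap arithmetic, assuming a hypothetical group-size
shift that matches the inode lock granularity:

    #define INO_GROUP_SHIFT 10      /* assumed: 1024 inodes per group */

    u64 group_nr = ino >> INO_GROUP_SHIFT;
    u32 bit_nr = ino & ((1 << INO_GROUP_SHIFT) - 1);

    /* each mount sets the bit as it caches the inode ... */
    set_bit(bit_nr, grp->cached_bitmap);

    /* ... and final iput asks the server for the OR of the other
     * mounts' bitmaps, deleting items only if no one else has it open */
    if (!test_bit(bit_nr, open_map))
            delete_inode_items(sb, ino);
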
Zach Brown
c6fd807638 Use recov to manage lock recovery
Now that we have the recov layer we can have the lock server use it to
track lock recovery.  The lock server no longer needs its own recovery
tracking structures and can instead call recov.  We add a call that the
server uses to kick lock processing once lock recovery finishes.  We
can get rid of the persistent lock_client items now that the server is
driving recovery from the mounted_client items.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 12:10:35 -07:00
Zach Brown
592f472a1c Use recov in server to recover client greetings
The server starts recovery when it finds mounted client items as it
starts up.  The clients are done recovering once they send their
greeting.  If they don't recover in time then they'll be fenced.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 12:10:35 -07:00
Zach Brown
a65775588f Add server recovery helpers
Add a little set of functions to help the server track which clients are
waiting to recover which state.  The open map messages need to wait for
recovery so we're moving recovery out of being only in the lock server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 12:10:35 -07:00
Zach Brown
da1af9b841 Add scoutfs inode ino lock coverage
Add lock coverage which tracks if the inode has been refreshed and is
covered by the inode group cluster lock.  This will be used by
drop_inode and evict_inode to discover that the inode is current and
doesn't need to be refreshed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 12:10:35 -07:00
Zach Brown
accd680a7e Fix block setup always returning 0
Another case of returning 0 instead of ret.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 12:10:35 -07:00
Andy Grover
cbb031bb5d Merge pull request #32 from versity/zab/block_rhashtable_insert_fixes
Zab/block rhashtable insert fixes
2021-04-13 10:42:17 -07:00
Zach Brown
c3290771a0 Block cache use rht _lookup_ insert for EEXIST
The sneaky rhashtable_insert_fast() can't return -EEXIST despite the
last line of the function *REALLY* making it look like it can.  It just
inserts new objects at the head of the bucket lists without comparing
the insertion with existing objects.

The block cache was relying on insertion to resolve duplicate racing
allocated blocks.  Because it couldn't return -EEXIST we could get
duplicate cached blocks present in the hash table.

rhashtable_lookup_insert_fast() fixes this by actually comparing the
inserted object's key with the objects found in the insertion bucket.  A
racing allocator trying to insert a duplicate cached block will get an
error, drop their allocated block, and retry their lookup.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 09:24:23 -07:00
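The resulting insertion pattern is roughly this sketch
(rhashtable_lookup_insert_fast() and its -EEXIST return are the real
kernel API; the scoutfs names around it are assumed):

    ret = rhashtable_lookup_insert_fast(&binfo->ht, &bp->ht_head,
                                        block_ht_params);
    if (ret == -EEXIST) {
            /* a racing allocator already inserted this blkno: drop our
             * allocated block and retry the lookup to find theirs */
            block_free(sb, bp);
            goto retry;
    }
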
Zach Brown
cf3cb3f197 Wait for rhashtable to rehash on insert EBUSY
The rhashtable can return -EBUSY if you insert fast enough to trigger an
expansion while the next larger table is still waiting to be rehashed in
an rcu callback.  If we get -EBUSY from the rhashtable insert we call
synchronize_rcu to wait for the rehash to complete before trying again.

This was hit in testing restores of a very large namespace and took a
few hours to hit.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 09:24:23 -07:00
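Combined with the -EEXIST handling above, the insert loop sketch becomes:

    do {
            ret = rhashtable_lookup_insert_fast(&binfo->ht, &bp->ht_head,
                                                block_ht_params);
            if (ret == -EBUSY) {
                    /* a grown table is waiting to be rehashed in an rcu
                     * callback; wait for it to run, then retry */
                    synchronize_rcu();
            }
    } while (ret == -EBUSY);
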
Andy Grover
cb4ed98b3c Merge pull request #31 from versity/zab/block_shrink_wait_for_rebalance
Block cache shrink restart waits for rcu callbacks
2021-04-08 09:03:12 -07:00
Zach Brown
9ee7f7b9dc Block cache shrink restart waits for rcu callbacks
We're seeing cpu livelocks in block shrinking where counters show that a
single block cache shrink call is only getting EAGAIN from repeated
rhashtable walk attempts.  It occurred to me that the running task might
be preventing an RCU grace period from ending by never blocking.

The hope of this commit is that by waiting for rcu callbacks to run
we'll ensure that any pending rebalance callback runs before we retry
the rhashtable walk again.  I haven't been able to reproduce this easily
so this is a stab in the dark.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-07 12:50:50 -07:00
Zach Brown
300791ecfa Merge pull request #29 from agrover/cleanup
Cleanup
2021-04-07 12:27:00 -07:00
Andy Grover
4630b77b45 cleanup: Use flexible array members instead of 0-length arrays
See Documentation/process/deprecated.rst:217, items[] now preferred over
items[0].

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-07 10:14:47 -07:00
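The change is mechanical; for a hypothetical struct:

    struct scoutfs_example {
            __le32 nr;
            struct scoutfs_key items[];     /* was: items[0] */
    };

    /* sizing is unchanged: sizeof() still excludes the array */
    struct scoutfs_example *ex;

    ex = kmalloc(sizeof(*ex) + nr * sizeof(ex->items[0]), GFP_NOFS);
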
Andy Grover
bdc43ca634 cleanup: Fix ESTALE handling in forest_read_items
Kinda weird to goto back to the out label and then out the bottom. Just
return -EIO, like forest_next_hint() does.

Don't call client_get_roots() right before retry, since that is the
first thing retry does.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-07 10:14:04 -07:00
Andy Grover
6406f05350 cleanup: Remove struct net_lock_grant_response
We're not using the roots member of this struct, so we can just
use struct scoutfs_net_lock directly.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-07 10:13:56 -07:00
Andy Grover
820b7295f0 cleanup: Unused LIST_HEADs
Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-05 16:23:41 -07:00
Zach Brown
b3611103ee Merge pull request #26 from agrover/tmpfile
Support O_TMPFILE and allow MOVE_BLOCKS into released extents
2021-04-05 15:23:41 -07:00
Andy Grover
0deb232d3f Support O_TMPFILE and allow MOVE_BLOCKS into released extents
Support O_TMPFILE: Create an unlinked file and put it on the orphan list.
If it ever gains a link, take it off the orphan list.

Change MOVE_BLOCKS ioctl to allow moving blocks into offline extent ranges.
Ioctl callers must set a new flag to enable this operation mode.

RH-compat: tmpfile support is actually backported by RH into the 3.10 kernel.
We need to use some of their kabi-maintaining wrappers to use it:
use a struct inode_operations_wrapper instead of base struct
inode_operations, set S_IOPS_WRAPPER flag in i_flags. This lets
RH's modified vfs_tmpfile() find our tmpfile fn pointer.

Add a test that covers both creating tmpfiles and moving their
contents into a destination file via MOVE_BLOCKS.

xfstests common/004 now runs because tmpfile is supported.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-05 14:23:44 -07:00
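From userspace the flow this enables is the standard tmpfile pattern, as
in this sketch (paths are hypothetical):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* unlinked from birth; sits on the orphan list until it
             * either gains a link or is deleted on final close */
            int fd = open("/mnt/test/dir", O_TMPFILE | O_WRONLY, 0600);
            char proc_path[40];

            /* ... write contents, then optionally give it a name ... */
            snprintf(proc_path, sizeof(proc_path), "/proc/self/fd/%d", fd);
            linkat(AT_FDCWD, proc_path, AT_FDCWD, "/mnt/test/dir/file",
                   AT_SYMLINK_FOLLOW);
            return close(fd);
    }
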
Andy Grover
1366e254f9 Merge pull request #30 from versity/zab/srch_block_ref_leak
Zab/srch block ref leak
2021-04-01 16:50:34 -07:00
Zach Brown
1259f899a3 srch compaction needs to prepare alloc for commit
The srch client compaction work initializes allocators, dirties blocks,
and writes them out as its transaction.  It forgot to call the
pre-commit allocator prepare function.

The prepare function drops block references used by the meta allocator
during the transaction.  This leaked block references which kept blocks
from being freed by the shrinker under memory pressure.  Eventually
memory was full of leaked blocks and the shrinker walked all of them
looking for blocks to free, resulting in an effective livelock that ground
the system to a crawl.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-01 13:04:40 -07:00
Zach Brown
2d393f435b Warn on leaked block refs on unmount
By the time we get to destroying the block cache we should have put all
our block references.  Warn as we tear down the blocks if we see any
blocks that still have references, implying a ref leak.  This caught a
leak caused by srch compaction forgetting to put allocator list block
refs.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-01 13:04:06 -07:00
Andy Grover
09c879bcf1 Merge pull request #25 from versity/zab/client_greeting_items_exist
Zab/client greeting items exist
2021-03-16 15:57:55 -07:00
Zach Brown
3de703757f Fix weird comment editing error
That comment looked very weird indeed until I recognized that I must
have forgotten to delete the first two attempts at starting the
sentence.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-16 12:02:05 -07:00
Zach Brown
7d67489b0c Handle resent initial client greetings
The very first greeting a client sends is unique because it doesn't yet
have a server_term field set and tells the server to create items to
track the client.

A server processing this request can create the items and then shut down
before the client is able to receive the reply.  They'll resend the
greeting without server_term but then the next server will get -EEXIST
errors as it tries to create items for the client.  This causes the
connection to break, which the client tries to reestablish, and the
pattern repeats indefinitely.

The fix is to simply recognize that -EEXIST is acceptable during item
creation.  Server message handlers always have to address the case where
a resent message was already processed by a previous server but its
response didn't make it to the client.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-16 11:56:26 -07:00
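The shape of the fix, as a sketch with an assumed helper name:

    ret = insert_client_items(sb, rid, greeting);   /* hypothetical */
    if (ret == -EEXIST)
            ret = 0;    /* resent greeting: a previous server created them */
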
Zach Brown
73084462e9 Remove unused client greeting_umb
Remove an old client info field from the unmount barrier mechanism which
was removed a while ago.  It used to be compared to a super field to
decide to finish unmount without reconnecting but now we check for our
mounted_client item in the server's btree.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-16 10:04:42 -07:00
Zach Brown
8c81af2b9b Merge pull request #22 from agrover/ipv6
Reserve space in superblock for IPv6 addresses
2021-03-15 16:04:26 -07:00
Andy Grover
efe5d92458 Reserve space in superblock for IPv6 addresses
Define a family field, and add a union for IPv4 and v6 variants, although
v6 is not supported yet.

Family field is now used to determine presence of address in a quorum slot,
instead of checking if addr is zero.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-03-12 14:10:42 -08:00
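The reserved layout is along these lines (a sketch, not the exact
on-disk struct):

    struct scoutfs_inet_addr {
            __le16 family;          /* 0 == empty quorum slot */
            __le16 port;
            union {
                    struct {
                            __le32 addr;
                    } v4;
                    struct {
                            __u8 addr[16];  /* reserved, unsupported */
                    } v6;
            };
    };
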
Andy Grover
d39e56d953 Merge pull request #24 from versity/zab/fix-block-stale-reads
Zab/fix block stale reads
2021-03-11 09:33:03 -08:00
Zach Brown
5661a1fb02 Fix block-stale-reads test
The block-stale-reads test was built from the ashes of a test that
used counters and triggers to work with the btree when it was
only used on the server.

The initial quick translation to try to trigger block cache retries
while the forest called the btree got a lot wrong.  It was still trying
to use a 'cl' variable that didn't refer to the client any more, the
trigger helpers now call statfs to find paths and can end up firing the
triggers themselves, and many more stale reads can happen throughout the
system while we're working -- not just the one from our trigger.

This fixes it up to consistently use fs numbers instead of
the silly stale cl variable and be less sensitive to triggers firing and
counter differences.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-10 12:36:41 -08:00
Zach Brown
12fa289399 Add t_trigger_arm_silent
t_trigger_arm always output the value of the trigger after arming on the
premise that tests required the trigger being armed.  In the process of
showing the trigger it calls a bunch of t_ helpers that build the path
to the trigger file using statfs_more to get the rid of mounts.

If the trigger being armed is in the server's mount and the specific
trigger test is fired by the server's statfs_more request processing
then the trigger can be fired before we read its value.  Tests can
inconsistently fail as the golden output shows the trigger being armed
or not, depending on whether it was in the server's mount or not.

t_trigger_arm_silent doesn't output the value of the armed trigger.  It
can be used for low level triggers that don't rely on reading the
trigger's value to discover that their effect has happened.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-10 12:36:34 -08:00
Zach Brown
75e8fab57c Add t_counter_diff_changed
Tests can use t_counter_diff to put a message in their golden output
when a specific change in counters is expected.  This adds
t_counter_diff_changed to output a message that indicates change or not,
for tests that want to see counters change but the amount of change
doesn't need to be precisely known.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-10 12:32:04 -08:00
Zach Brown
513d6b2734 Merge pull request #20 from versity/zab/remove_trans_spinlock
Zab/remove trans spinlock
2021-03-04 13:59:07 -08:00
Zach Brown
f8d39610a2 Only get inode writeback_lock when adding inodes
Each transaction maintains a global list of inodes to sync.  It checks
the inode and adds it in each write_end call per OS page.  Locking and
unlocking the global spinlock was showing up in profiles.  At the very
least, we can only get the lock once per large file that's written
during a transaction.  This will reduce spinlock traffic on the lock by
the number of pages written per file.   We'll want a better solution in
the long run, but this helps for now.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-04 11:39:30 -08:00
Zach Brown
c470c1c9f6 Allow read-mostly _alloc_meta_low
Each transaction hold makes multiple calls to _alloc_meta_low to see if
the transaction should be committed to refill allocators before the
caller's hold is acquired and they can dirty blocks in the transaction.

_alloc_meta_low was using a spinlock to sample the allocator list_head
blocks to determine if there was space available.  The lock and unlock
stores were creating significant cacheline contention.

The _alloc_meta_low calls are higher frequency than allocations.  We can
use a seqlock to have exclusive writers and allow concurrent
_alloc_meta_low readers who retry if a writer intervenes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-04 11:39:30 -08:00
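The read side of the seqlock pattern, sketched with assumed field names:

    static bool alloc_meta_low(struct alloc_info *alloc)
    {
            unsigned int seq;
            u64 avail;

            /* read-mostly: no stores to the shared cacheline, retry
             * only if a writer intervened mid-sample */
            do {
                    seq = read_seqbegin(&alloc->seqlock);
                    avail = alloc->avail_meta_blocks;
            } while (read_seqretry(&alloc->seqlock, seq));

            return avail < META_ALLOC_LOW_THRESH;
    }

    /* exclusive writers (actual allocation and refill) wrap updates: */
    write_seqlock(&alloc->seqlock);
    alloc->avail_meta_blocks -= count;
    write_sequnlock(&alloc->seqlock);
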
Andy Grover
cad902b9cd Merge pull request #19 from versity/zab/block_crash_and_consistency
Zab/block crash and consistency
2021-03-04 10:57:27 -08:00
Zach Brown
e163f3b099 Use atomic holders instead of trans info lock
We saw the transaction info lock showing up in profiles.  We were doing
quite a lot of work with that lock held.  We can remove it entirely and
use an atomic.

Instead of a locked holders count and writer boolean we can use an
atomic holders and have a high bit indicate that the write_func is
pending.  This turns the lock/unlock pairs in hold and release into
atomic inc/cmpxchg/dec operations.

Then we were checking allocators under the trans lock.  Now that we have
an atomic holders count we can increment it to prevent the writer from
committing and release it after the checks if we need another commit
before the hold.

And finally, we were freeing our allocated reservation struct under the
lock.  We weren't actually doing anything with the reservation struct so
we can use journal_info as the nested hold counter instead of having it
point to an allocated and freed struct.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 14:18:04 -08:00
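A sketch of the hold/release fast path, with a hypothetical high bit:

    #define TRANS_WRITE_PENDING     (1 << 30)       /* assumed bit */

    static bool trans_try_hold(atomic_t *holders)
    {
            int cur;

            for (;;) {
                    cur = atomic_read(holders);
                    if (cur & TRANS_WRITE_PENDING)
                            return false;   /* commit pending: wait */
                    if (atomic_cmpxchg(holders, cur, cur + 1) == cur)
                            return true;    /* hold acquired */
            }
    }

    static void trans_release(atomic_t *holders, wait_queue_head_t *waitq)
    {
            /* only the pending-write bit left: wake the committer */
            if (atomic_dec_return(holders) == TRANS_WRITE_PENDING)
                    wake_up(waitq);
    }
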
Zach Brown
a508baae76 Remove unused triggers
As the implementation shifted away from the ring of btree blocks and LSM
segments we lost callers to all these triggers.  They're unused and can
be removed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:50:00 -08:00
Zach Brown
208c51d1d2 Update stale block reading test
The previous test that triggered re-reading blocks, as though they were
stale, was written in the era where it only hit btree blocks and
everything else was stored in LSM segments.

This reworks the test to make it clear that it affects all our block
readers today.  The test only exercises the core read retry path, but it
could be expanded to test callers retrying with newer references after
they get -ESTALE errors.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:50:00 -08:00
Zach Brown
9450959ca4 Protect stale block readers from local dirtying
Our block cache consistency mechanism allows readers to try and read
stale block references.  They check block headers of the block they read
to discover if it has been modified and they should retry the read with
newer block references.

For this to be correct the block contents can't change under the
readers.  That's obviously true in the simple imagined case of one node
writing and another node reading.  But we also have the case where the
stale reader and dirtying writer can be concurrent tasks in the same
mount which share a block cache.

There were two failure cases that derive from the order in which readers
and writers work with blocks.

If the reader goes first, the writer could find the existing block in
the cache and modify it while the reader assumes that it is read only.
The fix is to have the writer always remove any existing cached block
and insert a newly allocated block into the cache with the header fields
already changed.  Any existing readers will still have their cached
block references and any new readers will see the modified headers and
return -ESTALE.

The next failure comes from readers trying to invalidate dirty blocks
when they see modified headers.  They assumed that the existing cached
block was old and could be dropped so that a new current version could
be read.  But in this case a local writer has clobbered the reader's
stale block and the reader should immediately return -ESTALE.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:59 -08:00
Zach Brown
6237f0adc5 Add _block_dirty_ref to dirty blocks in one place
To create dirty blocks in memory each block type caller currently gets a
reference on a created block and then dirties it.  The reference it gets
could be an existing cached block that stale readers are currently
using.  This creates a problem with our block consistency protocol where
writers can dirty and modify cached blocks that readers are currently
reading in memory, leading to read corruption.

This commit is the first step in addressing that problem.  We add a
scoutfs_block_dirty_ref() call which returns a reference to a dirtied
block from the block core in one call.  We're only changing the callers
in this patch but we'll be reworking the dirtying mechanism in an
upcoming patch to avoid corrupting readers.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:17 -08:00
Zach Brown
f18fa0e97a Update scoutfs print for centralized block_ref
Update scoutfs print to use the new block_ref struct instead of the
handful of per-block type ref structs that we had accumulated.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:17 -08:00
Zach Brown
0969a94bfc Check one block_ref struct in block core
Each of the different block types had a reading function that read a
block and then checked their reference struct for their block type.

This gets rid of each block reference type and has a single block_ref
type which is then checked by a single ref reading function in the block
core.  By putting ref checking in the core we no longer have to export
checking the block header crc, verifying headers, invalidating blocks,
or even reading raw blocks themselves.  Everyone reads refs and leaves
the checking up to the core.

The changes don't have a significant functional effect.  This is mostly
just changing types and moving code around.  (There are some changes to
visible counters.)

This shares code, which is nice, but this is putting the block reference
checking in one place in the block core so that in a few patches we can
fix problems with writers dirtying blocks that are being read.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:17 -08:00
Zach Brown
b1b75cbe9f Fix block cache shrink and read racing crash
The block cache wasn't safely racing readers walking the rcu radix_tree
and the shrinker walking the LRU list.  A reader could get a reference
to a block that had been removed from the radix and was queued for
freeing.  It'd clobber the free's llist_head union member by putting the
block back on the lru and both the read and free would crash as they
each corrupted each other's memory.  We rarely saw this in heavy load
testing.

The fix is to clean up the use of rcu, refcounting, and freeing.

First, we get rid of the LRU list.  Now we don't have to worry about
resolving racing accesses of blocks between two independent structures.
Instead of the shrinker walking the LRU list, we can mark blocks on access
such that shrinking can walk all blocks randomly and expect to quickly
find candidates to shrink.

To make it easier to concurrently walk all the blocks we switch to the
rhashtable instead of the radix tree.  It also has nice per-bucket
locking so we can get rid of the global lock that protected the LRU list
and radix insertion.  (And it isn't limited to 'long' keys so we can get
rid of the check for max meta blknos that couldn't be cached.)

Now we need to tighten up when read can get a reference and when shrink
can remove blocks.  We have presence in the hash table hold a refcount
but we make it a magic high bit in the refcount so that it can be
differentiated from other references.  Now lookup can atomically get a
reference to blocks that are in the hash table, and shrinking can
atomically remove blocks when it is the only other reference.

We also clean up freeing a bit. It has to wait for the rcu grace period
to ensure that no other rcu readers can reference the blocks it's
freeing.  It has to iterate over the list with _safe because it's
freeing as it goes.

Interestingly, when reworking the shrinker I noticed that we weren't
scaling the nr_to_scan from the pages we returned in previous shrink
calls back to blocks.  We now divide the input from pages back into
blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:15 -08:00
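A sketch of the refcount discipline, assuming a hypothetical
hash-presence bit:

    #define BLOCK_HASH_REF  (1 << 30)       /* assumed 'in hash table' ref */

    /* rcu lookup: only gain a reference while the block is hashed */
    static bool block_get(struct cached_block *bp)
    {
            int cur;

            for (;;) {
                    cur = atomic_read(&bp->refcount);
                    if (!(cur & BLOCK_HASH_REF))
                            return false;   /* being removed: retry lookup */
                    if (atomic_cmpxchg(&bp->refcount, cur, cur + 1) == cur)
                            return true;
            }
    }

    /* shrink: remove only when the hash ref is the sole reference */
    static bool block_try_remove(struct cached_block *bp)
    {
            return atomic_cmpxchg(&bp->refcount, BLOCK_HASH_REF, 0) ==
                   BLOCK_HASH_REF;
    }
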
Zach Brown
0f14826ff8 Merge pull request #18 from versity/zab/quorum_slots_unmount
Zab/quorum slots unmount
2021-02-22 13:34:25 -08:00
Zach Brown
336d521e44 Use spinlock to protect server farewell list
We had a mutex protecting the list of farewell requests.  The critical
sections are all very short so we can use a spinlock and be a bit
clearer and more efficient.  While we're at it, refactor freeing to free
outside of the criticial section.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
4fab75b862 Account for non-quorum in server farewell
The server has to be careful to only send farewell responses to quorum
clients once it knows that it won't need their vote to elect a leader to
serve remaining clients.

The logic for doing this forgot to take non-quorum clients into account.
It would send farewell responses to the final majority of quorum
members once they all tried to unmount.  This could leave non-quorum
clients hung in unmount trying to send their farewell requests.

The fix is to count mounted_clients items for non-quorum clients and hold
off on sending farewell responses to the final majority until those
non-quorum clients have unmounted.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
f6f72e7eae Resume running the mount-unmount-race test
The recent quorum and unmount fixes should have addressed the failures
we were seeing in the mount-unmount-race test.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
9878312b4d Update man pages for quorum slot changes
Update the man pages with descriptions of the new mkfs -Q quorum slot
configuration and quorum_slot_nr mount option.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
7421bd1861 Filter all test device digits to 0
We mask device numbers in command output to 0:0 so that we can have
consistent golden test output.  The device number matching regex
responsible for this missed a few digits.

It didn't show up until we both tested enough mounts to get larger
device minor numbers and fixed multi-mount consistency so that the
affected tests didn't fail for other reasons.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
1db6f8194d Update xfstests to use quorum slot options
Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
2de7692336 Unmount mount point, not device
Our test unmount function unmounted the device instead of the mount
point.  It was written this way back in an old version of the harness
which didn't track mount points.

Now that we have mount points, we can just unmount that.  This stops the
umount command from having to search through all the current mounts
looking for the mountpoint for the device it was asked to unmount.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
8c1d96898a Log wait failure in mount-unmount-race test
I got a test failure where waiting returned an error, but it wasn't
clear what the error was or where it might have come from.  Add more
logging so that we learn more about what might have gone wrong.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
090646aaeb Update repo README.md for quorum slots
Update the example configuration in the README to specify the quorum
slots in mkfs arguments and mount options.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
d53350f9f1 Consistently lock server mounted_clients btree
The mounted_clients btree stores items to track mounted clients.  It's
modified by multiple greeting workers and the farewell work.

The greeting work was serialized by the farewell_mutex, but the
modifications in the farewell thread weren't protected.  This could
result in modifications between the threads being lost if the dirty
block reference updates raced in just the right way.  I saw this in
testing with deletions in farewell being lost and then that lingering
item preventing unmount because the server thought it had to wait for a
remaining quorum member to unmount.

We fix this by adding a mutex specifically to protect the
mounted_clients btree in the server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
57f34e90e9 Use mounted_client item as sign of farewell
As clients unmount they send a farewell request that cleans up
persistent state associated with the mount.  The client needs to be sure
that it gets processed, and we must maintain a majority of quorum
members mounted to be able to elect a server to process farewell
requests.

We had a mechanism using the unmount_barrier fields in the greeting and
super_block to let the final unmounting quorum majority know that their
farewells have been processed and that they didn't need to keep trying
to reconnect.

But we missed that we also need this out of band farewell handling
signal for non-quorum member clients.  The server can send a farewell
response to a non-member client as well as to the final majority and
then tear down all the connections before the non-quorum client can see
its farewell response.  The non-member client also needs to know that
its farewell has been processed before the server lets the final
majority unmount.

We can remove the custom unmount_barrier method and instead have all
unmounting clients check for their mounted_client item in the server's
btree.  This item is removed as the last step of farewell processing so
if the client sees that it has been removed it knows that it doesn't
need to resend the farewell and can finish unmounting.

This fixes a bug where a non-quorum unmount could hang if it raced with
the final majority unmounting.  I was able to trigger this hang in our
tests with 5 mounts and 3 quorum members.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
79f6878355 Clean up block writing in mkfs
scoutfs mkfs had two block writing functions: write_block to fill out
some block header fields including crc calculation, and then
write_block_raw to pwrite the raw buffer to the bytes in the device.

These were used inconsistently as blocks came and went over time.  Most
callers filled out all the header fields themselves and called the raw
writer.  write_block was only used for super writing, which made sense
because it clobbered the block's header with the super's header, so any
magic and seq fields the caller had set would be lost.

This cleans up the mess.  We only have one block writer and the caller
provides all the hdr fields.  Everything uses it instead of filling out
the fields themselves and calling the raw writer.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
740e13e53a Return error from _quorum_setup
Well that's a silly mistake.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
dbb716f1bb Update tests for quorum slots
Update the tests to deal with the mkfs and mount changes for the
specifically configured quorum slots.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
87fcad5428 Update scoutfs mkfs and print for quorum slots
Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
406d157891 Add stringify macro to utils
Add macros for stringifying either the name of a macro or its value.  In
keeping with making our utils/ sort of look like kernel code, we use the
kernel stringify names.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-18 12:57:30 -08:00
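These follow the usual kernel definitions:

    #define __stringify_1(x...)     #x
    #define __stringify(x...)       __stringify_1(x)

    /* __stringify(FOO) expands FOO before stringifying, so a macro's
     * value ends up in the string; #FOO alone would yield "FOO" */
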
Zach Brown
8e34c5d66a Use quorum slots and background election work
Previously quorum configuration specified the number of votes needed to
elect the leader.  This was an excessive amount of freedom in the
configuration of the cluster which created all sorts of problems which
had to be designed around.

Most acutely, though, it required a probabilistic mechanism for mounts
to persistently record that they're starting a server so that future
servers could find and possibly fence them.  They would write to a lot
of quorum blocks and trust that it was unlikely that future servers
would overwrite all of their written blocks.  Overwriting was always
possible, which would be bad enough, but it also required so much IO
that we had to use long election timeouts to avoid spurious fencing.
These longer timeouts had already gone wrong on some storage
configurations, leading to hung mounts.

To fix this and other problems we see coming, like live membership
changes, we now specifically configure the number and identity of mounts
which will be participating in quorum voting.  With specific identities,
mounts now have a corresponding specific block they can write to and
which future servers can read from to see if they're still running.

We change the quorum config in the super block from a single
quorum_count to an array of quorum slots which specify the address of
the mount that is assigned to that slot.  The mount argument to specify
a quorum voter changes from "server_addr=$addr" to "quorum_slot_nr=$nr"
which specifies the mount's slot.  The slot's address is used for udp
election messages and tcp server connections.

Now that we specifically have configured unique IP addresses for all the
quorum members, we can use UDP messages to send and receive the vote
messages in the raft protocol to elect a leader.  The quorum code doesn't
have to read and write disk block votes and is a more reasonable core
loop that either waits for received network messages or timeouts to
advance the raft election state machine.

The quorum blocks are now used for slots to store their persistent raft
term and to set their leader state.  We have event fields in the block
to record the timestamp of the most recent interesting events that
happened to the slot.

Now that raft doesn't use IO, we can leave the quorum election work
running in the background.  The raft work in the quorum members is
always running so we can use a much more typical raft implementation
with heartbeats.  Critically, this decouples the client and election
life cycles.  Quorum is always running and is responsible for starting
and stopping the server.  The client repeatedly tries to connect to a
server, it has nothing to do with deciding to participate in quorum.

Finally, we add a quorum/status sysfs file which shows the state of the
quorum raft protocol in a member mount and has the last messages that
were sent to or received from the other members.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-18 12:57:30 -08:00
Zach Brown
1c7bbd6260 More accurately describe unmounting quorum members
As a client unmounts it sends a farewell request to the server.  We have
to carefully manage unmounting the final quorum members so that there is
always a remaining quorum to elect a leader to start a server to process
all their farewell requests.

The mechanism for doing this described these clients as "voters".
That's not really right, in our terminology voters and candidates are
temporary roles taken on by members during a specific election term in
the raft protocol.  It's more accurate to describe the final set of
clients as quorum members.  They can be voters or candidates depending
on how the raft protocol timeouts work out in any given election.

So we rename the greeting flag, mounted client flag, and the code and
comments on either side of the client and server to be a little clearer.

This only changes symbols and comments, there should be no functional
change.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-11 15:47:39 -08:00
Zach Brown
3ad18b0f3b Update super blkno field tests for meta device
As we read the super we check the first and last meta and data blkno
fields.  The tests weren't updated as we moved from one device to two
metadata and data devices.

Add a helper that tests the range for the device and test both meta and
data ranges fully, instead of only testing the endpoints of each and
assuming they're related because they're living on one device.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-11 15:47:29 -08:00
Andy Grover
79cd7a499b Merge pull request #17 from versity/zab/disable_mount_unmount_test
Disable mount-unmount-race test
2021-02-01 10:09:26 -08:00
Zach Brown
6ad18769cb Disable mount-unmount-race test
The mount-unmount-race test is occasionally hanging, disable it while we
debug it and have test coverage for unrelated work.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-01 10:07:47 -08:00
Zach Brown
49d82fcaaf Merge pull request #14 from agrover/fix-jira-202
utils: Do not assert if release is given unaligned offset or length
2021-02-01 09:46:01 -08:00
Zach Brown
e4e12c1968 Merge pull request #15 from agrover/radix-block
Remove unused radix_block struct
2021-02-01 09:24:59 -08:00
Andy Grover
15fd2ccc02 utils: Do not assert if release is given unaligned offset or length
This is checked for by the kernel ioctl code, so giving unaligned values
will return an error, instead of aborting with an assert.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-29 09:30:57 -08:00
Andy Grover
eea95357d3 Remove unused radix_block struct
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-26 16:07:05 -08:00
Andy Grover
9842c5d13e Merge pull request #13 from versity/zab/multi_mount_test_fixes
Zab/multi mount test fixes
2021-01-26 15:56:33 -08:00
Zach Brown
ade539217e Handle advance_seq being replayed in new server
As a core principle, all server message processing needs to be safe to
replay as servers shut down and requests are resent to new servers.

The advance_seq handler got this wrong.  It would only try to remove a
trans_seq item for the seq sent by the client before inserting a new
item for the next seq.  This change could be committed before the reply
was lost as the server shuts down.  The next server would process the
resent request but wouldn't find the old item for the seq that the
client sent, and would ignore the new item that the previous server
inserted.  It would then insert another greater seq for the same client.

This would leave behind a stale old trans_seq that would be returned as
the last_seq which would forever limit the results that could be
returned from the seq index walks.

This fix is to always remove all previous seq items for the client
before inserting a new one.  This creates O(clients) server work, but
it's minimal.

This manifested as occasional simple-inode-index test failures (say 1 in
5?) which would trigger if an unmount during a previous test happened
to have advance_seq resent across a server shutdown.  With this
change the test now reliably passes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
5a90234c94 Use terminated test name when saving passed stats
We've grown some test names that are prefixes of others
(createmany-parallel, createmany-parallel-mounts).  When we're searching
for lines with the test name we have to search for the exact test name,
by terminating the name with a space, instead of searching for a line
that starts with the test name.

This fixes strange output and saved passed stats for the names that
share a prefix.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
f81e4cb98a Add whitespace to xfstests output message
The message indicating that xfstests output was now being shown was
mashed up against the previous passed stats and it was gross and I hated
it.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
1fc706bf3f Filter hrtimer slow messages from dmesg
When running in debug kernels in guests we can really bog down things
enough to trigger hrtimer warnings.  I don't think there's much we can
reasonably do about that.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
e9c3aa6501 More carefully cancel server farewell work
Farewell work is queued by farewell message processing.  Server shutdown
didn't properly wait for pending farewell work to finish before tearing
down.  As the server work destroyed the server's connection the farewell
work could stlil be running and try to send responses down the socket.

We make the server more carefully avoid queueuing farewell work if it's
in the process of shutting down and wait for farewell work to finish
before destroying the server's resources.

This fixed all manner of crashes that were seen in testing when a bunch
of nodes unmounted, creating farewell work on the server as it itself
unmounted and destroyed the server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
d39268bbc1 Fix spurious EIO from scoutfs_srch_get_compact
scoutfs_srch_get_compact() is building up a compaction request which has
a list of srch files to read and sort and write into a new srch file.
It finds input files by searching for a sufficient number of similar
files: first any unsorted log files and then sorted log files that are
around the same size.

It finds the files by using btree next on the srch zone which has types
for unsorted srch log files, sorted srch files, but also pending and
busy compaction items.

It was being far too cute about iterating over different key types.  It
was trying to adapt to finding the next key and was making assumptions
about the order of key types.  It didn't notice that the pending and
busy key types followed log and sorted and would generate EIO when it
ran into them and found their value length didn't match what it was
expecting.

Rework the next item ref parsing so that it returns -ENOENT if it gets
an unexpected key type, then look for the next key type when checking
-ENOENT.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
35ed1a2438 Add t_require_meta_size function
Add a function that tests can use to skip when the metadata device isn't
large enough.  I thought we needed to avoid enospc in a particular test,
but it turns out the test's failure was unrelated.  So this isn't used
for now but it seems nice to keep around.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
32e7978a6e Extend lock invalidate grace period
The grace period is intended to let lock holders squeeze in more bulk
work before another node pulls the lock out from under them.  The length
of the delay is a balance between getting more work done per lock hold
and adding latency to ping-ponging workloads.

The current grace period was too short.  To do work in the conflicting
case you often have to read the result that the other mount wrote as you
invalidated their lock.  The test was written in the LSM world where
we'd effectively read a single level 0 1MB segment.  In the btree world
we're checking bloom blocks and reading the other mount's btree.  It has
more dependent read latency.

So we turn up the grace period to let conflicting readers squeeze in
more work before pulling the lock out from under them.  This value was
chosen to make lock-conflicting-batch-commit pass in guests sharing nvme
metadata devices in debugging kernels.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
8123b8fc35 fix lock-conflicting-batch-commit conf output
The test had a silly typo in the label it put on the time it took mounts
to perform conflicting metadata changes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
da5911c311 Use d_materialise_unique to splice dir dentries
When we're splicing in dentries in lookup we can be splicing the result
of changes on other nodes into a stale dcache.  The stale dcache might
contain dir entries and the dcache does not allow aliased directories.

Use d_materialise_unique() to splice in dir inodes so that we remove all
aliased dentries which must be stale.

We can still use d_splice_alias() for all other inode types.  Any
existing stale dentries will fail revalidation before they're used.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
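The lookup splice then takes this shape (a sketch; d_materialise_unique()
is the pre-3.19 kernel API named above):

    /* inode was found under the cluster lock; splice into the dcache */
    if (inode && S_ISDIR(inode->i_mode)) {
            /* directories can't be aliased: unhash any stale dentries
             * left behind by changes on other nodes */
            return d_materialise_unique(dentry, inode);
    }

    /* other types: stale dentries fail revalidation before use */
    return d_splice_alias(inode, dentry);
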
Zach Brown
098fc420be Add some item cache page tracing
Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
7a96537210 Leave mounts mounted if run-tests fails
We can lose interesting state if the mounts are unmounted as tests fail,
only unmount if all the tests pass.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
0607dfdac8 Enable and collect trace_printk
Weirdly, run-tests was treating trace_printk not as an option to enable
trace_printk() traces but as an option to print trace events to the
console with printk?  That's not a thing.

Make -P really enable trace_printk tracing and collect it as it would
for enabled trace events.  It needs to be treated separately from the -t
options that enable trace events.

While we're at it treat the -P trace dumping option as a stand-alone
option that works without -t arguments.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
0354bb64c5 More carefully enable tracing in run-tests
run-tests.sh has a -t argument which takes a whitespace separated string
of globs of events to enable.  This was hard to use and made it very
easy to accidentally expand the globs at the wrong place in the script.

This makes each -t argument specify a single word glob which is stored
in an array so the glob isn't expanded until it's applied to the trace
event path.   We also add an error for -t globs that didn't match any
events and add a message with the count of -t arguments and enabled
events.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
631801c45c Don't queue lock invalidation work during shutdown
The lock invalidation work function needs to be careful not to requeue
itself while we're shutting down or we can be left with invalidation
functions racing with shutdown.  Invalidation calls igrab so we can end
up with unmount warning that there are still inodes in use.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
47a1ac92f7 Update ino-path args in basic-posix-consistency
The ino-path calls in basic-posix-consistency weren't updated for the
recent change to scoutfs cli args.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:45:23 -08:00
Zach Brown
004f693af3 Add golden output for mount-unmount-race test
Signed-off-by: Zach Brown <zab@versity.com>
2021-01-25 14:19:35 -08:00
Andy Grover
f271a5d140 Merge pull request #12 from versity/zab/andys_fallocate_fix_minor_cleanup
Retry if transaction cannot alloc for fallocate or write
2021-01-25 12:52:14 -08:00
Andy Grover
355eac79d2 Retry if transaction cannot alloc for fallocate or write
Add a new distinguishable return value (ENOBUFS) from the allocator for
when the transaction cannot allocate space.  This doesn't mean the
filesystem is full -- opening a new transaction may result in forward
progress.

Alter the fallocate and get_blocks code to check for this error value
and retry with a new transaction.  Actual ENOSPC handling can still
happen, of course.

Add counter called "alloc_trans_retry" and increment it from both spots.
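
The retry pattern, sketched with invented transaction and allocation
wrappers standing in for the real calls:

```c
/* hypothetical wrappers around the real transaction and alloc calls */
static int hold_trans(struct super_block *sb);
static void release_trans(struct super_block *sb);
static int alloc_in_trans(struct super_block *sb, u64 count, u64 *blkno);

static int alloc_with_retry(struct super_block *sb, u64 count, u64 *blkno)
{
	int ret;

	do {
		ret = hold_trans(sb);
		if (ret < 0)
			return ret;

		ret = alloc_in_trans(sb, count, blkno);
		release_trans(sb);

		/* -ENOBUFS: this transaction lacked room, a fresh one
		 * may make forward progress; -ENOSPC remains fatal */
		if (ret == -ENOBUFS)
			scoutfs_inc_counter(sb, alloc_trans_retry);
	} while (ret == -ENOBUFS);

	return ret;
}
```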

Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: fixed up write_begin error paths]
2021-01-25 09:32:01 -08:00
Zach Brown
d8b4e94854 Merge pull request #10 from agrover/rm-item-accounting
Remove item accounting
2021-01-21 09:57:53 -08:00
Andy Grover
bed33c7ffd Remove item accounting
Remove kmod/src/count.h
Remove scoutfs_trans_track_item()
Remove reserved/actual fields from scoutfs_reservation

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-20 17:01:08 -08:00
Andy Grover
b370730029 Merge pull request #11 from versity/zab/item_cache_memory_corruption
Fix item cache page memory corruption
2021-01-20 10:27:20 -08:00
Zach Brown
d64dd89ead Fix item cache page memory corruption
The item cache page life cycle is tricky.  There are no proper page
reference counts, everything is done by nesting the page rwlock inside
item_cache_info rwlock.  The intent is that you can only reference pages
while you hold the rwlocks appropriately.  The per-cpu page references
are outside that locking regime so they add a reference count.  Now
there are reference counts for the main cache index reference and for
each per-cpu reference.

The end result of all this is that you can only reference pages outside
of locks if you're protected by references.
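
As an illustration of that rule, with invented names (not the literal
item cache code):

```c
#include <linux/spinlock.h>
#include <linux/atomic.h>

/* hypothetical, minimal types for illustration */
struct cache_page {
	atomic_t refcount;
};
struct item_cache_info {
	rwlock_t rwlock;
};

static struct cache_page *lookup_page(struct item_cache_info *cinf);

/* a reference taken under the rwlock pins the page for use outside it */
static struct cache_page *get_page_ref(struct item_cache_info *cinf)
{
	struct cache_page *pg;

	read_lock(&cinf->rwlock);
	pg = lookup_page(cinf);
	if (pg)
		atomic_inc(&pg->refcount);
	read_unlock(&cinf->rwlock);

	return pg;
}
```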

Lock invalidation messed this up by trying to add its right split page
to the lru after it was unlocked.  Its page reference wasn't protected
at this point.  Shrinking could be freeing that page, and so it could be
putting a freed page's memory back on the lru.

Shrinking had a little bug where it was using list_move to move an
initialized lru_head list_head.  It turns out to be harmless (list_del
will just follow pointers to itself and set itself as next and prev all
over again), but boy does it catch one's eye.  Let's remove all
confusion and drop the reference while holding the cinf->rwlock instead
of trying to optimize freeing outside the locks.

Finally, the big one: inserting a read item after compacting the page to
make room was inserting through stale parent pointers into the old
pre-compacted page, rather than into the new page that was swapped in by
compaction.  This left references to a freed page in the page rbtree and
hilarity ensued.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-20 09:02:29 -08:00
Zach Brown
8d81196e01 Merge pull request #7 from agrover/versioning
Filesystem version instead of format hash check
2021-01-19 11:55:32 -08:00
Andy Grover
d731c1577e Filesystem version instead of format hash check
Instead of hashing headers, define an interop version. Do not mount
superblocks that have a different version, either higher or lower.

Since this is pretty much the same as the format hash except it's a
constant, minimal code changes are needed.

Initial dev version is 0, with the intent that version will be bumped to
1 immediately prior to tagging initial release version.

Update README. Fix comments.

Add interop version to notes and modinfo.
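
A minimal sketch of the mount-time check; the macro name here is an
assumption, and the super's version field matches the greeting check:

```c
#define SCOUTFS_INTEROP_VERSION 0ULL	/* bumped to 1 before release */

static int check_interop_version(struct super_block *sb,
				 struct scoutfs_super_block *super)
{
	if (le64_to_cpu(super->version) != SCOUTFS_INTEROP_VERSION) {
		scoutfs_warn(sb, "super has version %llu, module expects %llu",
			     le64_to_cpu(super->version),
			     (unsigned long long)SCOUTFS_INTEROP_VERSION);
		return -EINVAL;
	}

	return 0;
}
```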

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-15 10:53:00 -08:00
Andy Grover
a421bb0884 Merge pull request #5 from versity/zab/move_blocks_ioctl
Zab/move blocks ioctl
2021-01-14 16:18:45 -08:00
Zach Brown
773eb129ed Add move-blocks test
Add a basic test of the move_blocks ioctl.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-14 13:42:22 -08:00
Zach Brown
eb3981c103 Add move-blocks scoutfs cli command
Add a move-blocks command that translates arguments and calls the
MOVE_BLOCKS ioctl.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-14 13:42:22 -08:00
Zach Brown
3139d3ea68 Add move_blocks ioctl
Add a relatively constrained ioctl that moves extents between regular
files.  This is intended to be used by tasks which combine many existing
files into a much larger file without reading and writing all the file
contents.
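
A hedged userspace sketch of calling it; the real argument struct and
ioctl number live in ioctl.h, and every field name below is an
assumption:

```c
#include <stdint.h>
#include <sys/ioctl.h>
/* assumes ioctl.h is included for SCOUTFS_IOC_MOVE_BLOCKS */

/* hypothetical layout; see ioctl.h for the real definition */
struct scoutfs_ioctl_move_blocks {
	uint64_t from_fd;
	uint64_t from_off;
	uint64_t len;
	uint64_t to_off;
};

/* move a region of extents from one regular file into another */
static int move_region(int to_fd, int from_fd, uint64_t from_off,
		       uint64_t len, uint64_t to_off)
{
	struct scoutfs_ioctl_move_blocks mb = {
		.from_fd = from_fd,
		.from_off = from_off,
		.len = len,
		.to_off = to_off,
	};

	return ioctl(to_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
}
```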

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-14 13:42:22 -08:00
Zach Brown
4da3d47601 Move ALLOC_DETAIL ioctl definition
By convention we have the _IO* ioctl definition after the argument
structs, and ALLOC_DETAIL got it a bit wrong, so move it down.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-14 13:42:22 -08:00
Zach Brown
aa1b1fa34f Add util.h for kernel helpers
Add a little header for inline convenience functions.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-14 13:42:22 -08:00
Zach Brown
8fcc9095e6 Merge pull request #6 from agrover/super
Fix mkfs check for existing ScoutFS superblock
2021-01-14 08:57:53 -08:00
Andy Grover
299062a456 Fix mkfs check for existing ScoutFS superblock
We were checking for the wrong magic value.

We now need to use -f when running mkfs in run-tests for things to work.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-13 16:32:41 -08:00
Andy Grover
7cac1e7136 Merge pull request #1 from agrover/use-argp
Rework scoutfs command-line parsing
2021-01-13 11:14:08 -08:00
Andy Grover
454dbebf59 Categorize not enough mounts as skip, not fail
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
2c5871c253 Change release ioctl to be denominated in bytes not blocks
This more closely matches stage ioctl and other conventions.

Also change release code to use offset/length nomenclature for consistency.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
64a698aa93 Make changes to tests for new scoutfs cmdline syntax
Some different error messages require changes to golden/*

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
d48b447e75 Do not set -Wpadded except for checking kmod-shared headers
Remove now-unneeded manual padding in arg structs.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
5241bba7f6 Update scoutfs.8 man page
Update for cli args and options changes. Reorder subcommands to match
scoutfs built-in help.

Consistent ScoutFS capitalization.

Tighten up some descriptions and verbiage for consistency and omit
descriptions of internals in a few spots.

Add SEE ALSO for blockdev(8) and wipefs(8).

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
e0a2175c2e Use argp info instead of duplicating for cmd_register()
Make it static and then use it for both argp_parse and
cmd_register_argp.

Split commands into five groups to make their purposes easier to
understand.

Mention that each command has its own help text, and that we are being
fancy to keep the user from having to give the fs path.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
f2cd1003f6 Implement argp support for walk-inodes
This has some fancy parsing going on, and I decided to just leave it
in the main function instead of going to the effort to move it all
to the parsing function.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
97c6cc559e Implement argp support for data-waiting and data-wait-err
These both have a lot of required options.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
7c54c86c38 Implement argp support for setattr
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
e1ba508301 Implement argp support for counters
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
f35154eb19 counters: Ensure name_wid[0] is initialized to zero
I was seeing some segfaults and other weirdness without this.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
7befc61482 Implement argp support for mkfs and add --force
Support max-meta-size and max-data-size using KMGTP units with rounding.

Detect other fs signatures using the blkid library.

Detect ScoutFS super using magic value.

Move read_block() from print.c into util.c since blkid also needs it.
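
A sketch of the KMGTP unit parsing (not the literal utils code, and
rounding is left aside):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* parse strings like "512M" or "8T"; returns 0 on success */
static int parse_size(const char *str, uint64_t *ret)
{
	static const char units[] = "KMGTP";
	const char *u;
	char *end;
	uint64_t val;

	val = strtoull(str, &end, 0);
	if (end == str)
		return -1;

	if (*end != '\0') {
		u = strchr(units, *end);
		if (!u || end[1] != '\0')
			return -1;
		val <<= 10 * (u - units + 1);	/* K=2^10 .. P=2^50 */
	}

	*ret = val;
	return 0;
}
```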

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
1383ca1a8d Merge pull request #3 from versity/zab/multithread_write_extra_commits
Consistently sample data alloc total_len
2021-01-12 11:51:15 -08:00
Andy Grover
6b5ddf2b3a Implement argp support for print
Print a warning if printing a data dev; you probably wanted the meta dev.

Change read_block to return an error value.  Otherwise there are
confusing ENOMEM messages when pread() fails, e.g. when trying to print
/dev/null.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 10:47:47 -08:00
Andy Grover
d025122fdd Implement argp support for listxaddr-hidden
Rename to list-hidden-xaddrs.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 10:47:47 -08:00
Andy Grover
706fe9a30e Implement argp support for search-xattrs
Get the fs path via the normal methods, and make the xattr an argument,
not an option.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 10:47:47 -08:00
Andy Grover
0f17ecb9e3 Implement argp support for stage/release
Make offset and length optional.  Allow size units (KMGTP) to be used
  for offset/length.

release: Since off/len are no longer given in 4k blocks, round offset
  and length to 4KiB, down and up respectively; see the sketch after
  this list.  Emit a message if rounding occurs.

Make version a required option.

stage: change ordering to src (the archive file) then the dest (the
  staged file).
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 10:47:47 -08:00
Zach Brown
fc003a5038 Consistently sample data alloc total_len
With many concurrent writers we were seeing excessive commits forced
because it thought the data allocator was running low.  The transaction
was checking the raw total_len value in the data_avail alloc_root for
the number of free data blocks.  But this read wasn't locked, and
allocators could completely remove a large free extent and then
re-insert a slightly smaller free extent as they perform their
allocation.  The transaction could see a temporarily very small total_len
and trigger a commit.

Data allocations are serialized by a heavy mutex so we don't want to
have the reader try and use that to see a consistent total_len.  Instead
we create a data allocator run-time struct that has a consistent
total_len that is updated after all the extent items are manipulated.
This also gives us a place to put the caller's cached extent so that it
can be included in the total_len; previously it wasn't included in the
free total that the transaction saw.

The file data allocator can then initialize and use this struct instead
of its raw use of the root and cached extent.  Then the transaction can
sample its consistent total_len that reflects the root and cached
extent.

A subtle detail is that fallocate can't use _free_data to return an
allocated extent to the avail pool on error.  It instead frees into the
data_free pool like normal frees do.  It doesn't really matter that this
could prematurely drain the avail pool because it's in an error path.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-06 09:25:32 -08:00
Andy Grover
10df01eb7a Implement argp support for ino-path
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-04 11:49:31 -08:00
Andy Grover
68b8e4098d Implement argp support for stat and statfs
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-04 11:49:31 -08:00
Andy Grover
5701184324 Implement argp support for df
Convert arg parsing to use argp. Use new get_path() helper fn.

Add -h human-readable option.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-04 11:49:31 -08:00
Andy Grover
a3035582d3 Add strdup_or_error()
Add a helper function to handle the impossible event that strdup fails.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-04 11:49:31 -08:00
Andy Grover
9e47a32257 Add get_path()
Implement a fallback mechanism for opening paths to a filesystem.  If
a path is explicitly given, use that.  If the env var is set, use that.
Otherwise, use the current working directory.

Use wordexp to expand ~, $HOME, etc.
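
A hedged sketch of the fallback order; the environment variable name
here is an assumption:

```c
#define _GNU_SOURCE
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <wordexp.h>

/* explicit argument, then env var, then the current directory */
static char *get_path(const char *given)
{
	const char *raw = given ? given : getenv("SCOUTFS_PATH");
	wordexp_t we;
	char *path;

	if (!raw)
		return get_current_dir_name();

	/* expand ~, $HOME, etc, refusing command substitution */
	if (wordexp(raw, &we, WRDE_NOCMD) != 0 || we.we_wordc != 1)
		return NULL;

	path = strdup(we.we_wordv[0]);
	wordfree(&we);
	return path;
}
```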

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-04 11:49:31 -08:00
Andy Grover
b4592554af Merge pull request #2 from versity/zab/stage_read_zero_block
Zab/stage read zero block
2020-12-17 16:48:52 -08:00
Zach Brown
1e0f8ee27a Finally change all 'ci' inode info ptrs to 'si'
Finally get rid of the last silly vestige of the ancient 'ci' name and
update the scoutfs_inode_info pointers to si.  This is just a global
search and replace; no functional changes.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-15 15:20:02 -08:00
Zach Brown
511cb04330 Add stage-mulit-part test
Add a test which stages a file in multiple parts while a long-lived
process is blocking on offline extents trying to compare the file to the
known contents.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-15 15:13:42 -08:00
Zach Brown
807ae11ee9 Protect per-inode extent items with extent_sem
Now that we have full precision extents a writer with i_mutex and a page
lock can be modifying large extent items which cover much of the
surrounding pages in the file.  Readers can be in a different page with
only the page lock and try to work with extent items as the writer is
deleting and creating them.

We add a per-inode rwsem which just protects file extent item
manipulation.  We try to acquire it as close to the item use as possible
in data.c, which is the only place we work with file extent items.

This stops rare read corruption we were seeing where get_block in a
reader was racing with extent item deletion in a stager at a further
offset in the file.
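
The locking shape, as a sketch with assumed field and function names:

```c
#include <linux/rwsem.h>

/* hypothetical slice of the inode info, for illustration */
struct inode_info {
	struct rw_semaphore extent_sem;
};

static void read_file_extents(struct inode_info *si)
{
	down_read(&si->extent_sem);
	/* walk extent items: get_block reads, data wait checks, ... */
	up_read(&si->extent_sem);
}

static void modify_file_extents(struct inode_info *si)
{
	down_write(&si->extent_sem);
	/* delete and create extent items: writes, staging, truncate */
	up_write(&si->extent_sem);
}
```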

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-15 11:56:50 -08:00
123 changed files with 9014 additions and 4838 deletions

View File

@@ -31,15 +31,9 @@ functionality hasn't been implemented. It's appropriate for early
adopters and interested developers, not for production use.
In that vein, expect significant incompatible changes to both the format
of network messages and persistent structures. To avoid mistakes the
implementation currently calculates a hash of the format and ioctl
header files in the source tree. The kernel module will refuse to mount
a volume created by userspace utilities with a mismatched hash, and it
will refuse to connect to a remote node with a mismatched hash. This
means having to unmount, mkfs, and remount everything across many
functional changes. Once the format is nailed down we'll wire up
forward and back compat machinery and remove this temporary safety
measure.
of network messages and persistent structures. Since the format hash-checking
has now been removed in preparation for release, if there is any doubt, mkfs
is strongly recommended.
The current kernel module is developed against the RHEL/CentOS 7.x
kernel to minimize the friction of developing and testing with partners'
@@ -71,8 +65,13 @@ The steps for getting scoutfs mounted and operational are:
2. Make a new filesystem on the devices with the userspace utilities
3. Mount the devices on all the nodes
In this example we run all of these commands on three nodes. The names
of the block devices are the same on all the nodes.
In this example we use three nodes. The names of the block devices are
the same on all the nodes. Two of the nodes will be quorum members. A
majority of quorum members must be mounted to elect a leader to run a
server that all the mounts connect to. It should be noted that two
quorum members results in a majority of one, each member itself, so
split brain elections are possible but so unlikely that it's fine for a
demonstration.
1. Get the Kernel Module and Userspace Binaries
@@ -94,24 +93,30 @@ of the block devices are the same on all the nodes.
alias scoutfs=$PWD/scoutfs/utils/src/scoutfs
```
2. Make a New Filesystem (**destroys contents, no questions asked**)
2. Make a New Filesystem (**destroys contents**)
We specify that two of our three nodes must be present to form a
quorum for the system to function.
We specify quorum slots with the addresses of each of the quorum
member nodes, the metadata device, and the data device.
```shell
scoutfs mkfs -Q 2 /dev/meta_dev /dev/data_dev
scoutfs mkfs -Q 0,$NODE0_ADDR,12345 -Q 1,$NODE1_ADDR,12345 /dev/meta_dev /dev/data_dev
```
3. Mount the Filesystem
Each mounting node provides its local IP address on which it will run
an internal server for the other mounts if it is elected the leader by
the quorum.
First, mount each of the quorum nodes so that they can elect and
start a server for the remaining node to connect to. The slot numbers
were specified with the leading "0,..." and "1,..." in the mkfs options
above.
```shell
mkdir /mnt/scoutfs
mount -t scoutfs -o server_addr=$NODE_ADDR,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
mount -t scoutfs -o quorum_slot_nr=$SLOT_NR,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
Then mount the remaining node which can now connect to the running server.
```shell
mount -t scoutfs -o metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
4. For Kicks, Observe the Metadata Change Index

View File

@@ -16,11 +16,7 @@ SCOUTFS_GIT_DESCRIBE := \
$(shell git describe --all --abbrev=6 --long 2>/dev/null || \
echo no-git)
SCOUTFS_FORMAT_HASH := \
$(shell cat src/format.h src/ioctl.h | md5sum | cut -b1-16)
SCOUTFS_ARGS := SCOUTFS_GIT_DESCRIBE=$(SCOUTFS_GIT_DESCRIBE) \
SCOUTFS_FORMAT_HASH=$(SCOUTFS_FORMAT_HASH) \
CONFIG_SCOUTFS_FS=m -C $(SK_KSRC) M=$(CURDIR)/src \
EXTRA_CFLAGS="-Werror"

View File

@@ -1,7 +1,6 @@
obj-$(CONFIG_SCOUTFS_FS) := scoutfs.o
CFLAGS_super.o = -DSCOUTFS_GIT_DESCRIBE=\"$(SCOUTFS_GIT_DESCRIBE)\" \
-DSCOUTFS_FORMAT_HASH=0x$(SCOUTFS_FORMAT_HASH)LLU
CFLAGS_super.o = -DSCOUTFS_GIT_DESCRIBE=\"$(SCOUTFS_GIT_DESCRIBE)\"
CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
@@ -28,9 +27,11 @@ scoutfs-y += \
lock_server.o \
msg.o \
net.o \
omap.o \
options.o \
per_task.o \
quorum.o \
recov.o \
scoutfs_trace.o \
server.o \
sort_priv.o \

View File

@@ -252,7 +252,7 @@ void scoutfs_alloc_init(struct scoutfs_alloc *alloc,
{
memset(alloc, 0, sizeof(struct scoutfs_alloc));
spin_lock_init(&alloc->lock);
seqlock_init(&alloc->seqlock);
mutex_init(&alloc->mutex);
alloc->avail = *avail;
alloc->freed = *freed;
@@ -358,31 +358,24 @@ static void list_block_sort(struct scoutfs_alloc_list_block *lblk)
/*
* We're always reading blocks that we own, so we shouldn't see stale
* references. But the cached block can be stale and we can need to
* invalidate it.
* references but we could retry reads after dropping stale cached
* blocks. If we do see a stale error then we've hit persistent
* corruption.
*/
static int read_list_block(struct super_block *sb,
struct scoutfs_alloc_list_ref *ref,
static int read_list_block(struct super_block *sb, struct scoutfs_block_ref *ref,
struct scoutfs_block **bl_ret)
{
struct scoutfs_block *bl = NULL;
int ret;
bl = scoutfs_block_read(sb, le64_to_cpu(ref->blkno));
if (!IS_ERR_OR_NULL(bl) &&
!scoutfs_block_consistent_ref(sb, bl, ref->seq, ref->blkno,
SCOUTFS_BLOCK_MAGIC_ALLOC_LIST)) {
scoutfs_inc_counter(sb, alloc_stale_cached_list_block);
scoutfs_block_invalidate(sb, bl);
scoutfs_block_put(sb, bl);
bl = scoutfs_block_read(sb, le64_to_cpu(ref->blkno));
}
if (IS_ERR(bl)) {
*bl_ret = NULL;
return PTR_ERR(bl);
}
ret = scoutfs_block_read_ref(sb, ref, SCOUTFS_BLOCK_MAGIC_ALLOC_LIST, bl_ret);
if (ret < 0) {
if (ret == -ESTALE) {
scoutfs_inc_counter(sb, alloc_stale_list_block);
ret = -EIO;
}
};
*bl_ret = bl;
return 0;
return ret;
}
/*
@@ -396,86 +389,12 @@ static int read_list_block(struct super_block *sb,
static int dirty_list_block(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_list_ref *ref,
struct scoutfs_block_ref *ref,
u64 dirty, u64 *old,
struct scoutfs_block **bl_ret)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_block *cow_bl = NULL;
struct scoutfs_block *bl = NULL;
struct scoutfs_alloc_list_block *lblk;
bool undo_alloc = false;
u64 blkno;
int ret;
int err;
blkno = le64_to_cpu(ref->blkno);
if (blkno) {
ret = read_list_block(sb, ref, &bl);
if (ret < 0)
goto out;
if (scoutfs_block_writer_is_dirty(sb, bl)) {
ret = 0;
goto out;
}
}
if (dirty == 0) {
ret = scoutfs_alloc_meta(sb, alloc, wri, &dirty);
if (ret < 0)
goto out;
undo_alloc = true;
}
cow_bl = scoutfs_block_create(sb, dirty);
if (IS_ERR(cow_bl)) {
ret = PTR_ERR(cow_bl);
goto out;
}
if (old) {
*old = blkno;
} else if (blkno) {
ret = scoutfs_free_meta(sb, alloc, wri, blkno);
if (ret < 0)
goto out;
}
if (bl)
memcpy(cow_bl->data, bl->data, SCOUTFS_BLOCK_LG_SIZE);
else
memset(cow_bl->data, 0, SCOUTFS_BLOCK_LG_SIZE);
scoutfs_block_put(sb, bl);
bl = cow_bl;
cow_bl = NULL;
lblk = bl->data;
lblk->hdr.magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_ALLOC_LIST);
lblk->hdr.fsid = super->hdr.fsid;
lblk->hdr.blkno = cpu_to_le64(bl->blkno);
prandom_bytes(&lblk->hdr.seq, sizeof(lblk->hdr.seq));
ref->blkno = lblk->hdr.blkno;
ref->seq = lblk->hdr.seq;
scoutfs_block_writer_mark_dirty(sb, wri, bl);
ret = 0;
out:
scoutfs_block_put(sb, cow_bl);
if (ret < 0 && undo_alloc) {
err = scoutfs_free_meta(sb, alloc, wri, dirty);
BUG_ON(err); /* inconsistent */
}
if (ret < 0) {
scoutfs_block_put(sb, bl);
bl = NULL;
}
*bl_ret = bl;
return ret;
return scoutfs_block_dirty_ref(sb, alloc, wri, ref, SCOUTFS_BLOCK_MAGIC_ALLOC_LIST,
bl_ret, dirty, old);
}
/* Allocate a new dirty list block if we fill up more than 3/4 of the block. */
@@ -497,7 +416,7 @@ static int dirty_alloc_blocks(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri)
{
struct scoutfs_alloc_list_ref orig_freed;
struct scoutfs_block_ref orig_freed;
struct scoutfs_alloc_list_block *lblk;
struct scoutfs_block *av_bl = NULL;
struct scoutfs_block *fr_bl = NULL;
@@ -607,7 +526,8 @@ int scoutfs_alloc_meta(struct super_block *sb, struct scoutfs_alloc *alloc,
if (ret < 0)
goto out;
spin_lock(&alloc->lock);
write_seqlock(&alloc->seqlock);
lblk = alloc->dirty_avail_bl->data;
if (WARN_ON_ONCE(lblk->nr == 0)) {
/* shouldn't happen, transaction should commit first */
@@ -617,7 +537,8 @@ int scoutfs_alloc_meta(struct super_block *sb, struct scoutfs_alloc *alloc,
list_block_remove(&alloc->avail, lblk, 1);
ret = 0;
}
spin_unlock(&alloc->lock);
write_sequnlock(&alloc->seqlock);
out:
if (ret < 0)
@@ -640,7 +561,8 @@ int scoutfs_free_meta(struct super_block *sb, struct scoutfs_alloc *alloc,
if (ret < 0)
goto out;
spin_lock(&alloc->lock);
write_seqlock(&alloc->seqlock);
lblk = alloc->dirty_freed_bl->data;
if (WARN_ON_ONCE(list_block_space(lblk->nr) == 0)) {
/* shouldn't happen, transaction should commit first */
@@ -649,7 +571,8 @@ int scoutfs_free_meta(struct super_block *sb, struct scoutfs_alloc *alloc,
list_block_add(&alloc->freed, lblk, blkno);
ret = 0;
}
spin_unlock(&alloc->lock);
write_sequnlock(&alloc->seqlock);
out:
scoutfs_inc_counter(sb, alloc_free_meta);
@@ -657,6 +580,60 @@ out:
return ret;
}
void scoutfs_dalloc_init(struct scoutfs_data_alloc *dalloc,
struct scoutfs_alloc_root *data_avail)
{
dalloc->root = *data_avail;
memset(&dalloc->cached, 0, sizeof(dalloc->cached));
atomic64_set(&dalloc->total_len, le64_to_cpu(dalloc->root.total_len));
}
void scoutfs_dalloc_get_root(struct scoutfs_data_alloc *dalloc,
struct scoutfs_alloc_root *data_avail)
{
*data_avail = dalloc->root;
}
static void dalloc_update_total_len(struct scoutfs_data_alloc *dalloc)
{
atomic64_set(&dalloc->total_len, le64_to_cpu(dalloc->root.total_len) +
dalloc->cached.len);
}
u64 scoutfs_dalloc_total_len(struct scoutfs_data_alloc *dalloc)
{
return atomic64_read(&dalloc->total_len);
}
/*
* Return the current in-memory cached free extent to extent items in
* the avail root. This should be locked by the caller just like
* _alloc_data and _free_data.
*/
int scoutfs_dalloc_return_cached(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_data_alloc *dalloc)
{
struct alloc_ext_args args = {
.alloc = alloc,
.wri = wri,
.root = &dalloc->root,
.type = SCOUTFS_FREE_EXTENT_BLKNO_TYPE,
};
int ret = 0;
if (dalloc->cached.len) {
ret = scoutfs_ext_insert(sb, &alloc_ext_ops, &args,
dalloc->cached.start,
dalloc->cached.len, 0, 0);
if (ret == 0)
memset(&dalloc->cached, 0, sizeof(dalloc->cached));
}
return ret;
}
/*
* Allocate a data extent. An extent that's smaller than the requested
* size can be returned.
@@ -671,14 +648,13 @@ out:
*/
int scoutfs_alloc_data(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *root,
struct scoutfs_extent *cached, u64 count,
struct scoutfs_data_alloc *dalloc, u64 count,
u64 *blkno_ret, u64 *count_ret)
{
struct alloc_ext_args args = {
.alloc = alloc,
.wri = wri,
.root = root,
.root = &dalloc->root,
.type = SCOUTFS_FREE_EXTENT_LEN_TYPE,
};
struct scoutfs_extent ext;
@@ -699,27 +675,35 @@ int scoutfs_alloc_data(struct super_block *sb, struct scoutfs_alloc *alloc,
}
/* smaller allocations come from a cached extent */
if (cached->len == 0) {
if (dalloc->cached.len == 0) {
ret = scoutfs_ext_alloc(sb, &alloc_ext_ops, &args, 0, 0,
SCOUTFS_ALLOC_DATA_LG_THRESH, cached);
SCOUTFS_ALLOC_DATA_LG_THRESH,
&dalloc->cached);
if (ret < 0)
goto out;
}
len = min(count, cached->len);
len = min(count, dalloc->cached.len);
*blkno_ret = cached->start;
*blkno_ret = dalloc->cached.start;
*count_ret = len;
cached->start += len;
cached->len -= len;
dalloc->cached.start += len;
dalloc->cached.len -= len;
ret = 0;
out:
if (ret < 0) {
/*
* Special retval meaning there wasn't space to alloc from
* this txn. Doesn't mean filesystem is completely full.
* Maybe upper layers want to try again.
*/
if (ret == -ENOENT)
ret = -ENOSPC;
ret = -ENOBUFS;
*blkno_ret = 0;
*count_ret = 0;
} else {
dalloc_update_total_len(dalloc);
}
scoutfs_inc_counter(sb, alloc_alloc_data);
@@ -1045,7 +1029,7 @@ int scoutfs_alloc_splice_list(struct super_block *sb,
struct scoutfs_alloc_list_head *src)
{
struct scoutfs_alloc_list_block *lblk;
struct scoutfs_alloc_list_ref *ref;
struct scoutfs_block_ref *ref;
struct scoutfs_block *prev = NULL;
struct scoutfs_block *bl = NULL;
int ret = 0;
@@ -1086,17 +1070,23 @@ out:
/*
* Returns true if meta avail and free don't have room for the given
* number of alloctions or frees.
* number of allocations or frees. This is called at a significantly
* higher frequency than allocations as writers try to enter
* transactions. This is the only reader of the seqlock which gives
* read-mostly sampling instead of bouncing a spinlock around all the
* cores.
*/
bool scoutfs_alloc_meta_low(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 nr)
{
unsigned int seq;
bool lo;
spin_lock(&alloc->lock);
lo = le32_to_cpu(alloc->avail.first_nr) < nr ||
list_block_space(alloc->freed.first_nr) < nr;
spin_unlock(&alloc->lock);
do {
seq = read_seqbegin(&alloc->seqlock);
lo = le32_to_cpu(alloc->avail.first_nr) < nr ||
list_block_space(alloc->freed.first_nr) < nr;
} while (read_seqretry(&alloc->seqlock, seq));
return lo;
}
@@ -1108,8 +1098,8 @@ bool scoutfs_alloc_meta_low(struct super_block *sb,
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg)
{
struct scoutfs_btree_ref stale_refs[2] = {{0,}};
struct scoutfs_btree_ref refs[2] = {{0,}};
struct scoutfs_block_ref stale_refs[2] = {{0,}};
struct scoutfs_block_ref refs[2] = {{0,}};
struct scoutfs_super_block *super = NULL;
struct scoutfs_srch_compact *sc;
struct scoutfs_log_trees lt;

View File

@@ -72,7 +72,8 @@
* transaction.
*/
struct scoutfs_alloc {
spinlock_t lock;
/* writers rarely modify list_head avail/freed. readers often check for _meta_alloc_low */
seqlock_t seqlock;
struct mutex mutex;
struct scoutfs_block *dirty_avail_bl;
struct scoutfs_block *dirty_freed_bl;
@@ -80,6 +81,18 @@ struct scoutfs_alloc {
struct scoutfs_alloc_list_head freed;
};
/*
* A run-time data allocator. We have a cached extent in memory that is
* a lot cheaper to work with than the extent items, and we have a
* consistent record of the total_len that can be sampled outside of the
* usual heavy serialization of the extent modifications.
*/
struct scoutfs_data_alloc {
struct scoutfs_alloc_root root;
struct scoutfs_extent cached;
atomic64_t total_len;
};
void scoutfs_alloc_init(struct scoutfs_alloc *alloc,
struct scoutfs_alloc_list_head *avail,
struct scoutfs_alloc_list_head *freed);
@@ -92,10 +105,18 @@ int scoutfs_alloc_meta(struct super_block *sb, struct scoutfs_alloc *alloc,
int scoutfs_free_meta(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 blkno);
void scoutfs_dalloc_init(struct scoutfs_data_alloc *dalloc,
struct scoutfs_alloc_root *data_avail);
void scoutfs_dalloc_get_root(struct scoutfs_data_alloc *dalloc,
struct scoutfs_alloc_root *data_avail);
u64 scoutfs_dalloc_total_len(struct scoutfs_data_alloc *dalloc);
int scoutfs_dalloc_return_cached(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_data_alloc *dalloc);
int scoutfs_alloc_data(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *root,
struct scoutfs_extent *cached, u64 count,
struct scoutfs_data_alloc *dalloc, u64 count,
u64 *blkno_ret, u64 *count_ret);
int scoutfs_free_data(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,

File diff suppressed because it is too large

View File

@@ -13,27 +13,16 @@ struct scoutfs_block {
void *priv;
};
__le32 scoutfs_block_calc_crc(struct scoutfs_block_header *hdr, u32 size);
bool scoutfs_block_valid_crc(struct scoutfs_block_header *hdr, u32 size);
bool scoutfs_block_valid_ref(struct super_block *sb,
struct scoutfs_block_header *hdr,
__le64 seq, __le64 blkno);
struct scoutfs_block *scoutfs_block_create(struct super_block *sb, u64 blkno);
struct scoutfs_block *scoutfs_block_read(struct super_block *sb, u64 blkno);
void scoutfs_block_invalidate(struct super_block *sb, struct scoutfs_block *bl);
bool scoutfs_block_consistent_ref(struct super_block *sb,
struct scoutfs_block *bl,
__le64 seq, __le64 blkno, u32 magic);
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret);
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl);
void scoutfs_block_writer_init(struct super_block *sb,
struct scoutfs_block_writer *wri);
void scoutfs_block_writer_mark_dirty(struct super_block *sb,
struct scoutfs_block_writer *wri,
struct scoutfs_block *bl);
bool scoutfs_block_writer_is_dirty(struct super_block *sb,
struct scoutfs_block *bl);
int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_block_ref *ref,
u32 magic, struct scoutfs_block **bl_ret,
u64 dirty_blkno, u64 *ref_blkno);
int scoutfs_block_writer_write(struct super_block *sb,
struct scoutfs_block_writer *wri);
void scoutfs_block_writer_forget_all(struct super_block *sb,

View File

@@ -80,7 +80,7 @@ enum btree_walk_flags {
BTW_NEXT = (1 << 0), /* return >= key */
BTW_PREV = (1 << 1), /* return <= key */
BTW_DIRTY = (1 << 2), /* cow stable blocks */
BTW_ALLOC = (1 << 3), /* allocate a new block for 0 ref */
BTW_ALLOC = (1 << 3), /* allocate a new block for 0 ref, requires dirty */
BTW_INSERT = (1 << 4), /* walking to insert, try splitting */
BTW_DELETE = (1 << 5), /* walking to delete, try joining */
};
@@ -619,140 +619,36 @@ static void move_items(struct scoutfs_btree_block *dst,
* This is used to lookup cached blocks, read blocks, cow blocks for
* dirtying, and allocate new blocks.
*
* Btree blocks don't have rigid cache consistency. We can be following
* block references into cached blocks that are now stale or can be
* following a stale root into blocks that have been overwritten. If we
* hit a block that looks stale we first invalidate the cache and retry,
* returning -ESTALE if it still looks wrong. The caller can retry the
* read from a more current root or decide that this is a persistent
* error.
* If we read a stale block we return stale so the caller can retry with
* a newer root or return an error.
*/
static int get_ref_block(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, int flags,
struct scoutfs_btree_ref *ref,
struct scoutfs_block_ref *ref,
struct scoutfs_block **bl_ret)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_btree_block *bt = NULL;
struct scoutfs_btree_block *new;
struct scoutfs_block *new_bl = NULL;
struct scoutfs_block *bl = NULL;
bool retried = false;
u64 blkno;
u64 seq;
int ret;
/* always get the current block, either to return or cow from */
if (ref && ref->blkno) {
retry:
if (WARN_ON_ONCE((flags & BTW_ALLOC) && !(flags & BTW_DIRTY)))
return -EINVAL;
bl = scoutfs_block_read(sb, le64_to_cpu(ref->blkno));
if (IS_ERR(bl)) {
trace_scoutfs_btree_read_error(sb, ref);
scoutfs_inc_counter(sb, btree_read_error);
ret = PTR_ERR(bl);
goto out;
}
bt = (void *)bl->data;
if (!scoutfs_block_consistent_ref(sb, bl, ref->seq, ref->blkno,
SCOUTFS_BLOCK_MAGIC_BTREE) ||
scoutfs_trigger(sb, BTREE_STALE_READ)) {
scoutfs_inc_counter(sb, btree_stale_read);
scoutfs_block_invalidate(sb, bl);
scoutfs_block_put(sb, bl);
bl = NULL;
if (!retried) {
retried = true;
goto retry;
}
ret = -ESTALE;
goto out;
}
/*
* We need to create a new dirty copy of the block if
* the caller asked for it. If the block is already
* dirty then we can return it.
*/
if (!(flags & BTW_DIRTY) ||
scoutfs_block_writer_is_dirty(sb, bl)) {
ret = 0;
goto out;
}
} else if (!(flags & BTW_ALLOC)) {
if (ref->blkno == 0 && !(flags & BTW_ALLOC)) {
ret = -ENOENT;
goto out;
}
ret = scoutfs_alloc_meta(sb, alloc, wri, &blkno);
if (ret < 0)
goto out;
prandom_bytes(&seq, sizeof(seq));
new_bl = scoutfs_block_create(sb, blkno);
if (IS_ERR(new_bl)) {
ret = scoutfs_free_meta(sb, alloc, wri, blkno);
BUG_ON(ret);
ret = PTR_ERR(new_bl);
goto out;
}
new = (void *)new_bl->data;
/* free old stable blkno we're about to overwrite */
if (ref && ref->blkno) {
ret = scoutfs_free_meta(sb, alloc, wri,
le64_to_cpu(ref->blkno));
if (ret) {
ret = scoutfs_free_meta(sb, alloc, wri, blkno);
BUG_ON(ret);
scoutfs_block_put(sb, new_bl);
new_bl = NULL;
goto out;
}
}
scoutfs_block_writer_mark_dirty(sb, wri, new_bl);
trace_scoutfs_btree_dirty_block(sb, blkno, seq,
bt ? le64_to_cpu(bt->hdr.blkno) : 0,
bt ? le64_to_cpu(bt->hdr.seq) : 0);
if (bt) {
/* returning a cow of an existing block */
memcpy(new, bt, SCOUTFS_BLOCK_LG_SIZE);
scoutfs_block_put(sb, bl);
} else {
/* returning a newly allocated block */
memset(new, 0, SCOUTFS_BLOCK_LG_SIZE);
new->hdr.fsid = super->hdr.fsid;
}
bl = new_bl;
bt = new;
bt->hdr.magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_BTREE);
bt->hdr.blkno = cpu_to_le64(blkno);
bt->hdr.seq = cpu_to_le64(seq);
if (ref) {
ref->blkno = bt->hdr.blkno;
ref->seq = bt->hdr.seq;
}
ret = 0;
if (flags & BTW_DIRTY)
ret = scoutfs_block_dirty_ref(sb, alloc, wri, ref, SCOUTFS_BLOCK_MAGIC_BTREE,
bl_ret, 0, NULL);
else
ret = scoutfs_block_read_ref(sb, ref, SCOUTFS_BLOCK_MAGIC_BTREE, bl_ret);
out:
if (ret) {
scoutfs_block_put(sb, bl);
bl = NULL;
if (ret < 0) {
if (ret == -ESTALE)
scoutfs_inc_counter(sb, btree_stale_read);
}
*bl_ret = bl;
return ret;
}
@@ -766,7 +662,7 @@ static void create_parent_item(struct scoutfs_btree_block *parent,
{
struct scoutfs_avl_node *par;
int cmp;
struct scoutfs_btree_ref ref = {
struct scoutfs_block_ref ref = {
.blkno = child->hdr.blkno,
.seq = child->hdr.seq,
};
@@ -784,7 +680,7 @@ static void update_parent_item(struct scoutfs_btree_block *parent,
struct scoutfs_btree_item *par_item,
struct scoutfs_btree_block *child)
{
struct scoutfs_btree_ref *ref = item_val(parent, par_item);
struct scoutfs_block_ref *ref = item_val(parent, par_item);
par_item->key = *item_key(last_item(child));
ref->blkno = child->hdr.blkno;
@@ -832,12 +728,13 @@ static int try_split(struct super_block *sb,
struct scoutfs_block *par_bl = NULL;
struct scoutfs_btree_block *left;
struct scoutfs_key max_key;
struct scoutfs_block_ref zeros;
int ret;
int err;
/* parents need to leave room for child references */
if (right->level)
val_len = sizeof(struct scoutfs_btree_ref);
val_len = sizeof(struct scoutfs_block_ref);
/* don't need to split if there's enough space for the item */
if (mid_free_item_room(right, val_len))
@@ -849,7 +746,8 @@ static int try_split(struct super_block *sb,
scoutfs_inc_counter(sb, btree_split);
/* alloc split neighbour first to avoid unwinding tree growth */
ret = get_ref_block(sb, alloc, wri, BTW_ALLOC, NULL, &left_bl);
memset(&zeros, 0, sizeof(zeros));
ret = get_ref_block(sb, alloc, wri, BTW_ALLOC | BTW_DIRTY, &zeros, &left_bl);
if (ret)
return ret;
left = left_bl->data;
@@ -857,7 +755,8 @@ static int try_split(struct super_block *sb,
init_btree_block(left, right->level);
if (!parent) {
ret = get_ref_block(sb, alloc, wri, BTW_ALLOC, NULL, &par_bl);
memset(&zeros, 0, sizeof(zeros));
ret = get_ref_block(sb, alloc, wri, BTW_ALLOC | BTW_DIRTY, &zeros, &par_bl);
if (ret) {
err = scoutfs_free_meta(sb, alloc, wri,
le64_to_cpu(left->hdr.blkno));
@@ -905,7 +804,7 @@ static int try_join(struct super_block *sb,
struct scoutfs_btree_item *sib_par_item;
struct scoutfs_btree_block *sib;
struct scoutfs_block *sib_bl;
struct scoutfs_btree_ref *ref;
struct scoutfs_block_ref *ref;
unsigned int sib_tot;
bool move_right;
int to_move;
@@ -1194,7 +1093,7 @@ static int btree_walk(struct super_block *sb,
struct scoutfs_btree_item *prev;
struct scoutfs_avl_node *next_node;
struct scoutfs_avl_node *node;
struct scoutfs_btree_ref *ref;
struct scoutfs_block_ref *ref;
unsigned int level;
unsigned int nr;
int ret;
@@ -1225,8 +1124,7 @@ restart:
if (!(flags & BTW_INSERT)) {
ret = -ENOENT;
} else {
ret = get_ref_block(sb, alloc, wri, BTW_ALLOC,
&root->ref, &bl);
ret = get_ref_block(sb, alloc, wri, BTW_ALLOC | BTW_DIRTY, &root->ref, &bl);
if (ret == 0) {
bt = bl->data;
init_btree_block(bt, 0);

View File

@@ -31,16 +31,14 @@
#include "net.h"
#include "endian_swap.h"
#include "quorum.h"
#include "omap.h"
/*
* The client is responsible for maintaining a connection to the server.
* This includes managing quorum elections that determine which client
* should run the server that all the clients connect to.
*/
#define CLIENT_CONNECT_DELAY_MS (MSEC_PER_SEC / 10)
#define CLIENT_CONNECT_TIMEOUT_MS (1 * MSEC_PER_SEC)
#define CLIENT_QUORUM_TIMEOUT_MS (5 * MSEC_PER_SEC)
struct client_info {
struct super_block *sb;
@@ -52,7 +50,6 @@ struct client_info {
struct delayed_work connect_dwork;
u64 server_term;
u64 greeting_umb;
bool sending_farewell;
int farewell_error;
@@ -121,16 +118,14 @@ int scoutfs_client_get_roots(struct super_block *sb,
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
__le64 before = cpu_to_le64p(seq);
__le64 after;
__le64 leseq;
int ret;
ret = scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
&before, sizeof(before),
&after, sizeof(after));
NULL, 0, &leseq, sizeof(leseq));
if (ret == 0)
*seq = le64_to_cpu(after);
*seq = le64_to_cpu(leseq);
return ret;
}
@@ -156,7 +151,7 @@ static int client_lock_response(struct super_block *sb,
void *resp, unsigned int resp_len,
int error, void *data)
{
if (resp_len != sizeof(struct scoutfs_net_lock_grant_response))
if (resp_len != sizeof(struct scoutfs_net_lock))
return -EINVAL;
/* XXX error? */
@@ -221,6 +216,39 @@ int scoutfs_client_srch_commit_compact(struct super_block *sb,
res, sizeof(*res), NULL, 0);
}
int scoutfs_client_send_omap_response(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map *map)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_response(sb, client->conn, SCOUTFS_NET_CMD_OPEN_INO_MAP,
id, 0, map, sizeof(*map));
}
/* The client is receiving an omap request from the server */
static int client_open_ino_map(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
if (arg_len != sizeof(struct scoutfs_open_ino_map_args))
return -EINVAL;
return scoutfs_omap_client_handle_request(sb, id, arg);
}
/* The client is sending an omap request to the server */
int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
struct scoutfs_open_ino_map *map)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_open_ino_map_args args = {
.group_nr = cpu_to_le64(group_nr),
.req_id = 0,
};
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_OPEN_INO_MAP,
&args, sizeof(args), map, sizeof(*map));
}
/* The client is receiving a invalidation request from the server */
static int client_lock(struct super_block *sb,
struct scoutfs_net_connection *conn, u8 cmd, u64 id,
@@ -282,10 +310,10 @@ static int client_greeting(struct super_block *sb,
goto out;
}
if (gr->format_hash != super->format_hash) {
if (gr->version != super->version) {
scoutfs_warn(sb, "server sent format 0x%llx, client has 0x%llx",
le64_to_cpu(gr->format_hash),
le64_to_cpu(super->format_hash));
le64_to_cpu(gr->version),
le64_to_cpu(super->version));
ret = -EINVAL;
goto out;
}
@@ -294,52 +322,30 @@ static int client_greeting(struct super_block *sb,
scoutfs_net_client_greeting(sb, conn, new_server);
client->server_term = le64_to_cpu(gr->server_term);
client->greeting_umb = le64_to_cpu(gr->unmount_barrier);
ret = 0;
out:
return ret;
}
/*
* This work is responsible for maintaining a connection from the client
* to the server. It's queued on mount and disconnect and we requeue
* the work if the work fails and we're not shutting down.
* The client is deciding if it needs to keep trying to reconnect to
* have its farewell request processed. The server removes our mounted
* client item last so that if we don't see it we know the server has
* processed our farewell and we don't need to reconnect, we can unmount
* safely.
*
* In the typical case a mount reads the super blocks and finds the
* address of the currently running server and connects to it.
* Non-voting clients who can't connect will keep trying alternating
* reading the address and getting connect timeouts.
*
* Voting mounts will try to elect a leader if they can't connect to the
* server. When a quorum can't connect and are able to elect a leader
* then a new server is started. The new server will write its address
* in the super and everyone will be able to connect.
*
* There's a tricky bit of coordination required to safely unmount.
* Clients need to tell the server that they won't be coming back with a
* farewell request. Once a client receives its farewell response it
* can exit. But a majority of clients need to stick around to elect a
* server to process all their farewell requests. This is coordinated
* by having the greeting tell the server that a client is a voter. The
* server then holds on to farewell requests from voters until only
* requests from the final quorum remain. These farewell responses are
* only sent after updating an unmount barrier in the super to indicate
* to the final quorum that they can safely exit without having received
* a farewell response over the network.
* This is peeking at btree blocks that the server could be actively
* freeing with cow updates so it can see stale blocks, we just return
* the error and we'll retry eventually as the connection times out.
*/
static void scoutfs_client_connect_worker(struct work_struct *work)
static int lookup_mounted_client_item(struct super_block *sb, u64 rid)
{
struct client_info *client = container_of(work, struct client_info,
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = NULL;
struct mount_options *opts = &sbi->opts;
const bool am_voter = opts->server_addr.sin_addr.s_addr != 0;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
ktime_t timeout_abs;
u64 elected_term;
struct scoutfs_key key = {
.sk_zone = SCOUTFS_MOUNTED_CLIENT_ZONE,
.skmc_rid = cpu_to_le64(rid),
};
struct scoutfs_super_block *super;
SCOUTFS_BTREE_ITEM_REF(iref);
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
@@ -352,57 +358,77 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
if (ret)
goto out;
/* can safely unmount if we see that server processed our farewell */
if (am_voter && client->sending_farewell &&
(le64_to_cpu(super->unmount_barrier) > client->greeting_umb)) {
ret = scoutfs_btree_lookup(sb, &super->mounted_clients, &key, &iref);
if (ret == 0) {
scoutfs_btree_put_iref(&iref);
ret = 1;
}
if (ret == -ENOENT)
ret = 0;
kfree(super);
out:
return ret;
}
/*
* This work is responsible for maintaining a connection from the client
* to the server. It's queued on mount and disconnect and we requeue
* the work if the work fails and we're not shutting down.
*
* We ask quorum for an address to try and connect to. If there isn't
* one, or it fails, we back off a bit before trying again.
*
* There's a tricky bit of coordination required to safely unmount.
* Clients need to tell the server that they won't be coming back with a
* farewell request. Once the server processes a farewell request from
* the client it can forget the client. If the connection is broken
* before the client gets the farewell response it doesn't want to
* reconnect to send it again.. instead the client can read the metadata
* device to check for the lack of an item which indicates that the
* server has processed its farewell.
*/
static void scoutfs_client_connect_worker(struct work_struct *work)
{
struct client_info *client = container_of(work, struct client_info,
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct mount_options *opts = &sbi->opts;
const bool am_quorum = opts->quorum_slot_nr >= 0;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
int ret;
/* can unmount once server farewell handling removes our item */
if (client->sending_farewell &&
lookup_mounted_client_item(sb, sbi->rid) == 0) {
client->farewell_error = 0;
complete(&client->farewell_comp);
ret = 0;
goto out;
}
/* try to connect to the super's server address */
scoutfs_addr_to_sin(&sin, &super->server_addr);
if (sin.sin_addr.s_addr != 0 && sin.sin_port != 0)
ret = scoutfs_net_connect(sb, client->conn, &sin,
CLIENT_CONNECT_TIMEOUT_MS);
else
ret = -ENOTCONN;
/* voters try to elect a leader if they couldn't connect */
if (ret < 0) {
/* non-voters will keep retrying */
if (!am_voter)
goto out;
/* make sure local server isn't writing super during votes */
scoutfs_server_stop(sb);
timeout_abs = ktime_add_ms(ktime_get(),
CLIENT_QUORUM_TIMEOUT_MS);
ret = scoutfs_quorum_election(sb, timeout_abs,
le64_to_cpu(super->quorum_server_term),
&elected_term);
/* start the server if we were asked to */
if (elected_term > 0)
ret = scoutfs_server_start(sb, &opts->server_addr,
elected_term);
ret = -ENOTCONN;
ret = scoutfs_quorum_server_sin(sb, &sin);
if (ret < 0)
goto out;
ret = scoutfs_net_connect(sb, client->conn, &sin,
CLIENT_CONNECT_TIMEOUT_MS);
if (ret < 0)
goto out;
}
/* send a greeting to verify endpoints of each connection */
greet.fsid = super->hdr.fsid;
greet.format_hash = super->format_hash;
greet.version = super->version;
greet.server_term = cpu_to_le64(client->server_term);
greet.unmount_barrier = cpu_to_le64(client->greeting_umb);
greet.rid = cpu_to_le64(sbi->rid);
greet.flags = 0;
if (client->sending_farewell)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_FAREWELL);
if (am_voter)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_VOTER);
if (am_quorum)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_QUORUM);
ret = scoutfs_net_submit_request(sb, client->conn,
SCOUTFS_NET_CMD_GREETING,
@@ -411,7 +437,6 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
if (ret)
scoutfs_net_shutdown(sb, client->conn);
out:
kfree(super);
/* always have a small delay before retrying to avoid storms */
if (ret && !atomic_read(&client->shutting_down))
@@ -422,6 +447,7 @@ out:
static scoutfs_net_request_t client_req_funcs[] = {
[SCOUTFS_NET_CMD_LOCK] = client_lock,
[SCOUTFS_NET_CMD_LOCK_RECOVER] = client_lock_recover,
[SCOUTFS_NET_CMD_OPEN_INO_MAP] = client_open_ino_map,
};
/*

View File

@@ -22,6 +22,10 @@ int scoutfs_client_srch_get_compact(struct super_block *sb,
struct scoutfs_srch_compact *sc);
int scoutfs_client_srch_commit_compact(struct super_block *sb,
struct scoutfs_srch_compact *res);
int scoutfs_client_send_omap_response(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map *map);
int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
struct scoutfs_open_ino_map *map);
int scoutfs_client_setup(struct super_block *sb);
void scoutfs_client_destroy(struct super_block *sb);

View File

@@ -1,315 +0,0 @@
#ifndef _SCOUTFS_COUNT_H_
#define _SCOUTFS_COUNT_H_
/*
* Our estimate of the space consumed while dirtying items is based on
* the number of items and the size of their values.
*
* The estimate is still a read-only input to entering the transaction.
* We'd like to use it as a clean rhs arg to hold_trans. We define SIC_
* functions which return the count struct. This lets us have a single
* arg and avoid bugs in initializing and passing in struct pointers
* from callers. The internal __count functions are used compose an
* estimate out of the sets of items it manipulates. We program in much
* clearer C instead of in the preprocessor.
*
* Compilers are able to collapse the inlines into constants for the
* constant estimates.
*/
struct scoutfs_item_count {
signed items;
signed vals;
};
/* The caller knows exactly what they're doing. */
static inline const struct scoutfs_item_count SIC_EXACT(signed items,
signed vals)
{
struct scoutfs_item_count cnt = {
.items = items,
.vals = vals,
};
return cnt;
}
/*
* Allocating an inode creates a new set of indexed items.
*/
static inline void __count_alloc_inode(struct scoutfs_item_count *cnt)
{
const int nr_indices = SCOUTFS_INODE_INDEX_NR;
cnt->items += 1 + nr_indices;
cnt->vals += sizeof(struct scoutfs_inode);
}
/*
* Dirtying an inode dirties the inode item and can delete and create
* the full set of indexed items.
*/
static inline void __count_dirty_inode(struct scoutfs_item_count *cnt)
{
const int nr_indices = 2 * SCOUTFS_INODE_INDEX_NR;
cnt->items += 1 + nr_indices;
cnt->vals += sizeof(struct scoutfs_inode);
}
static inline const struct scoutfs_item_count SIC_ALLOC_INODE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_alloc_inode(&cnt);
return cnt;
}
static inline const struct scoutfs_item_count SIC_DIRTY_INODE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_dirty_inode(&cnt);
return cnt;
}
/*
* Directory entries are stored in three items.
*/
static inline void __count_dirents(struct scoutfs_item_count *cnt,
unsigned name_len)
{
cnt->items += 3;
cnt->vals += 3 * offsetof(struct scoutfs_dirent, name[name_len]);
}
static inline void __count_sym_target(struct scoutfs_item_count *cnt,
unsigned size)
{
unsigned nr = DIV_ROUND_UP(size, SCOUTFS_MAX_VAL_SIZE);
cnt->items += nr;
cnt->vals += size;
}
static inline void __count_orphan(struct scoutfs_item_count *cnt)
{
cnt->items += 1;
}
static inline void __count_mknod(struct scoutfs_item_count *cnt,
unsigned name_len)
{
__count_alloc_inode(cnt);
__count_dirents(cnt, name_len);
__count_dirty_inode(cnt);
}
static inline const struct scoutfs_item_count SIC_MKNOD(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_mknod(&cnt, name_len);
return cnt;
}
/*
* Dropping the inode deletes all its items. Potentially enormous numbers
* of items (data mapping, xattrs) are deleted in their own transactions.
*/
static inline const struct scoutfs_item_count SIC_DROP_INODE(int mode,
u64 size)
{
struct scoutfs_item_count cnt = {0,};
if (S_ISLNK(mode))
__count_sym_target(&cnt, size);
__count_dirty_inode(&cnt);
__count_orphan(&cnt);
cnt.vals = 0;
return cnt;
}
static inline const struct scoutfs_item_count SIC_LINK(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_dirents(&cnt, name_len);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
return cnt;
}
/*
* Unlink can add orphan items.
*/
static inline const struct scoutfs_item_count SIC_UNLINK(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_dirents(&cnt, name_len);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_orphan(&cnt);
return cnt;
}
static inline const struct scoutfs_item_count SIC_SYMLINK(unsigned name_len,
unsigned size)
{
struct scoutfs_item_count cnt = {0,};
__count_mknod(&cnt, name_len);
__count_sym_target(&cnt, size);
return cnt;
}
/*
* This assumes the worst case of a rename between directories that
* unlinks an existing target. That'll be worse than the common case
* by a few hundred bytes.
*/
static inline const struct scoutfs_item_count SIC_RENAME(unsigned old_len,
unsigned new_len)
{
struct scoutfs_item_count cnt = {0,};
/* dirty dirs and inodes */
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
/* unlink old and new, link new */
__count_dirents(&cnt, old_len);
__count_dirents(&cnt, new_len);
__count_dirents(&cnt, new_len);
/* orphan the existing target */
__count_orphan(&cnt);
return cnt;
}
/*
* Creating an xattr results in a dirty set of items with values that
* store the xattr header, name, and value. There's always at least one
* item with the header and name. Any previously existing items are
* deleted which dirties their key but removes their value. The two
* sets of items are indexed by different ids so their items don't
* overlap.
*/
static inline const struct scoutfs_item_count SIC_XATTR_SET(unsigned old_parts,
bool creating,
unsigned name_len,
unsigned size)
{
struct scoutfs_item_count cnt = {0,};
unsigned int new_parts;
__count_dirty_inode(&cnt);
if (old_parts)
cnt.items += old_parts;
if (creating) {
new_parts = SCOUTFS_XATTR_NR_PARTS(name_len, size);
cnt.items += new_parts;
cnt.vals += sizeof(struct scoutfs_xattr) + name_len + size;
}
return cnt;
}
/*
* write_begin can have to allocate all the blocks in the page and can
* have to add a big allocation from the server to do so:
* - merge added free extents from the server
* - remove a free extent per block
* - remove an offline extent for every other block
* - add a file extent per block
*/
static inline const struct scoutfs_item_count SIC_WRITE_BEGIN(void)
{
struct scoutfs_item_count cnt = {0,};
unsigned nr_free = (1 + SCOUTFS_BLOCK_SM_PER_PAGE) * 3;
unsigned nr_file = (DIV_ROUND_UP(SCOUTFS_BLOCK_SM_PER_PAGE, 2) +
SCOUTFS_BLOCK_SM_PER_PAGE) * 3;
__count_dirty_inode(&cnt);
cnt.items += nr_free + nr_file;
cnt.vals += nr_file;
return cnt;
}
/*
* Truncating an extent can:
* - delete existing file extent,
* - create two surrounding file extents,
* - add an offline file extent,
* - delete two existing free extents
* - create a merged free extent
*/
static inline const struct scoutfs_item_count
SIC_TRUNC_EXTENT(struct inode *inode)
{
struct scoutfs_item_count cnt = {0,};
unsigned int nr_file = 1 + 2 + 1;
unsigned int nr_free = (2 + 1) * 2;
if (inode)
__count_dirty_inode(&cnt);
cnt.items += nr_file + nr_free;
cnt.vals += nr_file;
return cnt;
}
/*
* Fallocating an extent can, at most:
* - allocate from the server: delete two free and insert merged
* - free an allocated extent: delete one and create two split
* - remove an unallocated file extent: delete one and create two split
* - add an fallocated flie extent: delete two and inset one merged
*/
static inline const struct scoutfs_item_count SIC_FALLOCATE_ONE(void)
{
struct scoutfs_item_count cnt = {0,};
unsigned int nr_free = ((1 + 2) * 2) * 2;
unsigned int nr_file = (1 + 2) * 2;
__count_dirty_inode(&cnt);
cnt.items += nr_free + nr_file;
cnt.vals += nr_file;
return cnt;
}
/*
* ioc_setattr_more can dirty the inode and add a single offline extent.
*/
static inline const struct scoutfs_item_count SIC_SETATTR_MORE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_dirty_inode(&cnt);
cnt.items++;
return cnt;
}
#endif

View File

@@ -20,17 +20,21 @@
EXPAND_COUNTER(alloc_list_freed_hi) \
EXPAND_COUNTER(alloc_move) \
EXPAND_COUNTER(alloc_moved_extent) \
EXPAND_COUNTER(alloc_stale_cached_list_block) \
EXPAND_COUNTER(block_cache_access) \
EXPAND_COUNTER(alloc_stale_list_block) \
EXPAND_COUNTER(block_cache_access_update) \
EXPAND_COUNTER(block_cache_alloc_failure) \
EXPAND_COUNTER(block_cache_alloc_page_order) \
EXPAND_COUNTER(block_cache_alloc_virt) \
EXPAND_COUNTER(block_cache_end_io_error) \
EXPAND_COUNTER(block_cache_forget) \
EXPAND_COUNTER(block_cache_free) \
EXPAND_COUNTER(block_cache_invalidate) \
EXPAND_COUNTER(block_cache_lru_move) \
EXPAND_COUNTER(block_cache_free_work) \
EXPAND_COUNTER(block_cache_remove_stale) \
EXPAND_COUNTER(block_cache_shrink) \
EXPAND_COUNTER(block_cache_shrink_next) \
EXPAND_COUNTER(block_cache_shrink_recent) \
EXPAND_COUNTER(block_cache_shrink_remove) \
EXPAND_COUNTER(block_cache_shrink_restart) \
EXPAND_COUNTER(btree_compact_values) \
EXPAND_COUNTER(btree_compact_values_enomem) \
EXPAND_COUNTER(btree_delete) \
@@ -42,7 +46,6 @@
EXPAND_COUNTER(btree_lookup) \
EXPAND_COUNTER(btree_next) \
EXPAND_COUNTER(btree_prev) \
EXPAND_COUNTER(btree_read_error) \
EXPAND_COUNTER(btree_split) \
EXPAND_COUNTER(btree_stale_read) \
EXPAND_COUNTER(btree_update) \
@@ -58,6 +61,8 @@
EXPAND_COUNTER(corrupt_symlink_inode_size) \
EXPAND_COUNTER(corrupt_symlink_missing_item) \
EXPAND_COUNTER(corrupt_symlink_not_null_term) \
EXPAND_COUNTER(data_fallocate_enobufs_retry) \
EXPAND_COUNTER(data_write_begin_enobufs_retry) \
EXPAND_COUNTER(dentry_revalidate_error) \
EXPAND_COUNTER(dentry_revalidate_invalid) \
EXPAND_COUNTER(dentry_revalidate_locked) \
@@ -71,6 +76,7 @@
EXPAND_COUNTER(ext_op_remove) \
EXPAND_COUNTER(forest_bloom_fail) \
EXPAND_COUNTER(forest_bloom_pass) \
EXPAND_COUNTER(forest_bloom_stale) \
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
@@ -137,18 +143,21 @@
EXPAND_COUNTER(net_recv_invalid_message) \
EXPAND_COUNTER(net_recv_messages) \
EXPAND_COUNTER(net_unknown_request) \
EXPAND_COUNTER(quorum_cycle) \
EXPAND_COUNTER(quorum_elected_leader) \
EXPAND_COUNTER(quorum_election_timeout) \
EXPAND_COUNTER(quorum_failure) \
EXPAND_COUNTER(quorum_read_block) \
EXPAND_COUNTER(quorum_read_block_error) \
EXPAND_COUNTER(quorum_elected) \
EXPAND_COUNTER(quorum_fence_error) \
EXPAND_COUNTER(quorum_fence_leader) \
EXPAND_COUNTER(quorum_read_invalid_block) \
EXPAND_COUNTER(quorum_saw_super_leader) \
EXPAND_COUNTER(quorum_timedout) \
EXPAND_COUNTER(quorum_write_block) \
EXPAND_COUNTER(quorum_write_block_error) \
EXPAND_COUNTER(quorum_fenced) \
EXPAND_COUNTER(quorum_recv_error) \
EXPAND_COUNTER(quorum_recv_heartbeat) \
EXPAND_COUNTER(quorum_recv_invalid) \
EXPAND_COUNTER(quorum_recv_resignation) \
EXPAND_COUNTER(quorum_recv_vote) \
EXPAND_COUNTER(quorum_send_heartbeat) \
EXPAND_COUNTER(quorum_send_resignation) \
EXPAND_COUNTER(quorum_send_request) \
EXPAND_COUNTER(quorum_send_vote) \
EXPAND_COUNTER(quorum_server_shutdown) \
EXPAND_COUNTER(quorum_term_follower) \
EXPAND_COUNTER(server_commit_hold) \
EXPAND_COUNTER(server_commit_queue) \
EXPAND_COUNTER(server_commit_worker) \
@@ -158,7 +167,6 @@
EXPAND_COUNTER(srch_compact_flush) \
EXPAND_COUNTER(srch_compact_log_page) \
EXPAND_COUNTER(srch_compact_removed_entry) \
EXPAND_COUNTER(srch_inconsistent_ref) \
EXPAND_COUNTER(srch_rotate_log) \
EXPAND_COUNTER(srch_search_log) \
EXPAND_COUNTER(srch_search_log_block) \

View File

@@ -37,8 +37,8 @@
#include "lock.h"
#include "file.h"
#include "msg.h"
#include "count.h"
#include "ext.h"
#include "util.h"
/*
* We want to amortize work done after dirtying the shared transaction
@@ -53,9 +53,8 @@ struct data_info {
struct mutex mutex;
struct scoutfs_alloc *alloc;
struct scoutfs_block_writer *wri;
struct scoutfs_alloc_root data_avail;
struct scoutfs_alloc_root data_freed;
struct scoutfs_extent cached_ext;
struct scoutfs_data_alloc dalloc;
};
#define DECLARE_DATA_INFO(sb, name) \
@@ -93,6 +92,16 @@ static void ext_from_item(struct scoutfs_extent *ext,
ext->flags = dv->flags;
}
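/*
 * All extent item ops are expected to run with the inode's extent_sem
 * held (read or write); warn once if a caller gets here without it.
 */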
static void data_ext_op_warn(struct inode *inode)
{
struct scoutfs_inode_info *si;
if (inode) {
si = SCOUTFS_I(inode);
WARN_ON_ONCE(!rwsem_is_locked(&si->extent_sem));
}
}
static int data_ext_next(struct super_block *sb, void *arg, u64 start, u64 len,
struct scoutfs_extent *ext)
{
@@ -102,6 +111,8 @@ static int data_ext_next(struct super_block *sb, void *arg, u64 start, u64 len,
struct scoutfs_key last;
int ret;
data_ext_op_warn(args->inode);
item_from_extent(&last, &dv, args->ino, U64_MAX, 1, 0, 0);
item_from_extent(&key, &dv, args->ino, start, len, 0, 0);
@@ -139,6 +150,8 @@ static int data_ext_insert(struct super_block *sb, void *arg, u64 start,
struct scoutfs_key key;
int ret;
data_ext_op_warn(args->inode);
item_from_extent(&key, &dv, args->ino, start, len, map, flags);
ret = scoutfs_item_create(sb, &key, &dv, sizeof(dv), args->lock);
if (ret == 0 && args->inode)
@@ -154,6 +167,8 @@ static int data_ext_remove(struct super_block *sb, void *arg, u64 start,
struct scoutfs_key key;
int ret;
data_ext_op_warn(args->inode);
item_from_extent(&key, &dv, args->ino, start, len, map, flags);
ret = scoutfs_item_delete(sb, &key, args->lock);
if (ret == 0 && args->inode)
@@ -275,7 +290,7 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
u64 ino, u64 iblock, u64 last, bool offline,
struct scoutfs_lock *lock)
{
struct scoutfs_item_count cnt = SIC_TRUNC_EXTENT(inode);
struct scoutfs_inode_info *si = NULL;
LIST_HEAD(ind_locks);
s64 ret = 0;
@@ -290,12 +305,17 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
if (WARN_ON_ONCE(last < iblock))
return -EINVAL;
if (inode) {
si = SCOUTFS_I(inode);
down_write(&si->extent_sem);
}
while (iblock <= last) {
if (inode)
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks,
true, cnt);
true);
else
ret = scoutfs_hold_trans(sb, cnt);
ret = scoutfs_hold_trans(sb);
if (ret)
break;
@@ -321,6 +341,9 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
ret = 0;
}
if (si)
up_write(&si->extent_sem);
return ret;
}
@@ -407,8 +430,7 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
count = 1;
ret = scoutfs_alloc_data(sb, datinf->alloc, datinf->wri,
&datinf->data_avail, &datinf->cached_ext,
count, &blkno, &count);
&datinf->dalloc, count, &blkno, &count);
if (ret < 0)
goto out;
@@ -533,6 +555,38 @@ out:
return ret;
}
/*
* Typically extent item users are serialized by i_mutex. But page
* readers only hold the page lock and need to be protected from writers
* in other pages which can be manipulating neighbouring extents as
* they split and merge.
*/
static int scoutfs_get_block_read(struct inode *inode, sector_t iblock,
struct buffer_head *bh, int create)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
int ret;
down_read(&si->extent_sem);
ret = scoutfs_get_block(inode, iblock, bh, create);
up_read(&si->extent_sem);
return ret;
}
static int scoutfs_get_block_write(struct inode *inode, sector_t iblock,
struct buffer_head *bh, int create)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
int ret;
down_write(&si->extent_sem);
ret = scoutfs_get_block(inode, iblock, bh, create);
up_write(&si->extent_sem);
return ret;
}
/*
* This is almost never used. We can't block on a cluster lock while
* holding the page lock because lock invalidation gets the page lock
@@ -598,7 +652,7 @@ static int scoutfs_readpage(struct file *file, struct page *page)
return ret;
}
ret = mpage_readpage(page, scoutfs_get_block);
ret = mpage_readpage(page, scoutfs_get_block_read);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
@@ -646,7 +700,7 @@ static int scoutfs_readpages(struct file *file, struct address_space *mapping,
}
}
ret = mpage_readpages(mapping, pages, nr_pages, scoutfs_get_block);
ret = mpage_readpages(mapping, pages, nr_pages, scoutfs_get_block_read);
out:
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
BUG_ON(!list_empty(pages));
@@ -655,13 +709,13 @@ out:
static int scoutfs_writepage(struct page *page, struct writeback_control *wbc)
{
return block_write_full_page(page, scoutfs_get_block, wbc);
return block_write_full_page(page, scoutfs_get_block_write, wbc);
}
static int scoutfs_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
return mpage_writepages(mapping, wbc, scoutfs_get_block);
return mpage_writepages(mapping, wbc, scoutfs_get_block_write);
}
/* fsdata allocated in write_begin and freed in write_end */
@@ -697,13 +751,13 @@ static int scoutfs_write_begin(struct file *file,
goto out;
}
retry:
do {
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &wbd->ind_locks, inode,
true) ?:
scoutfs_inode_index_try_lock_hold(sb, &wbd->ind_locks,
ind_seq,
SIC_WRITE_BEGIN());
ind_seq);
} while (ret > 0);
if (ret < 0)
goto out;
@@ -712,17 +766,22 @@ static int scoutfs_write_begin(struct file *file,
flags |= AOP_FLAG_NOFS;
/* generic write_end updates i_size and calls dirty_inode */
ret = scoutfs_dirty_inode_item(inode, wbd->lock);
if (ret == 0)
ret = block_write_begin(mapping, pos, len, flags, pagep,
scoutfs_get_block);
if (ret)
ret = scoutfs_dirty_inode_item(inode, wbd->lock) ?:
block_write_begin(mapping, pos, len, flags, pagep,
scoutfs_get_block_write);
if (ret < 0) {
scoutfs_release_trans(sb);
out:
if (ret) {
scoutfs_inode_index_unlock(sb, &wbd->ind_locks);
kfree(wbd);
if (ret == -ENOBUFS) {
/* Retry with a new transaction. */
scoutfs_inc_counter(sb, data_write_begin_enobufs_retry);
goto retry;
}
}
out:
if (ret < 0)
kfree(wbd);
return ret;
}
@@ -859,9 +918,8 @@ static s64 fallocate_extents(struct super_block *sb, struct inode *inode,
mutex_lock(&datinf->mutex);
ret = scoutfs_alloc_data(sb, datinf->alloc, datinf->wri,
&datinf->data_avail,
&datinf->cached_ext,
count, &blkno, &count);
&datinf->dalloc, count,
&blkno, &count);
if (ret == 0) {
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock,
count, blkno,
@@ -869,7 +927,7 @@ static s64 fallocate_extents(struct super_block *sb, struct inode *inode,
if (ret < 0) {
err = scoutfs_free_data(sb, datinf->alloc,
datinf->wri,
&datinf->data_avail,
&datinf->data_freed,
blkno, count);
BUG_ON(err); /* inconsistent */
}
@@ -903,6 +961,7 @@ static s64 fallocate_extents(struct super_block *sb, struct inode *inode,
long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
{
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_lock *lock = NULL;
@@ -913,6 +972,7 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
s64 ret;
mutex_lock(&inode->i_mutex);
down_write(&si->extent_sem);
/* XXX support more flags */
if (mode & ~(FALLOC_FL_KEEP_SIZE)) {
@@ -950,8 +1010,7 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
while(iblock <= last) {
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false,
SIC_FALLOCATE_ONE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false);
if (ret)
goto out;
@@ -969,6 +1028,12 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
/* txn couldn't meet the request. Let's try with a new txn */
if (ret == -ENOBUFS) {
scoutfs_inc_counter(sb, data_fallocate_enobufs_retry);
continue;
}
if (ret <= 0)
goto out;
@@ -978,6 +1043,7 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
out:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
up_write(&si->extent_sem);
mutex_unlock(&inode->i_mutex);
trace_scoutfs_data_fallocate(sb, ino, mode, offset, len, ret);
@@ -998,6 +1064,7 @@ int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
struct scoutfs_lock *lock)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct data_ext_args args = {
.ino = scoutfs_ino(inode),
@@ -1019,8 +1086,7 @@ int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
}
/* we're updating meta_seq with offline block count */
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false,
SIC_SETATTR_MORE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false);
if (ret < 0)
goto out;
@@ -1028,8 +1094,10 @@ int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
if (ret < 0)
goto unlock;
down_write(&si->extent_sem);
ret = scoutfs_ext_insert(sb, &data_ext_ops, &args,
0, count, 0, SEF_OFFLINE);
up_write(&si->extent_sem);
if (ret < 0)
goto unlock;
@@ -1043,6 +1111,277 @@ out:
return ret;
}
/*
* We're using truncate_inode_pages_range to maintain consistency
* between the page cache and extents that just changed. We have to
 * call with fully aligned page offsets or it thinks that it should leave
* behind a zeroed partial page.
*/
static void truncate_inode_pages_extent(struct inode *inode, u64 start, u64 len)
{
truncate_inode_pages_range(&inode->i_data,
start << SCOUTFS_BLOCK_SM_SHIFT,
((start + len) << SCOUTFS_BLOCK_SM_SHIFT) - 1);
}
/*
* Move extents from one file to another. The behaviour is more fully
* explained above the move_blocks ioctl argument structure definition.
*
* The caller has processed the ioctl args and performed the most basic
* inode checks, but we perform more detailed inode checks once we have
* the inode lock and refreshed inodes. Our job is to safely lock the
* two files and move the extents.
*/
#define MOVE_DATA_EXTENTS_PER_HOLD 16
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
u64 byte_len, struct inode *to, u64 to_off, bool is_stage,
u64 data_version)
{
struct scoutfs_inode_info *from_si = SCOUTFS_I(from);
struct scoutfs_inode_info *to_si = SCOUTFS_I(to);
struct super_block *sb = from->i_sb;
struct scoutfs_lock *from_lock = NULL;
struct scoutfs_lock *to_lock = NULL;
struct data_ext_args from_args;
struct data_ext_args to_args;
struct scoutfs_extent ext;
struct timespec cur_time;
LIST_HEAD(locks);
bool done = false;
loff_t from_size;
loff_t to_size;
u64 from_offline;
u64 to_offline;
u64 from_start;
u64 to_start;
u64 from_iblock;
u64 to_iblock;
u64 count;
u64 junk;
u64 seq;
u64 map;
u64 len;
int ret;
int err;
int i;
lock_two_nondirectories(from, to);
ret = scoutfs_lock_inodes(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, from, &from_lock,
to, &to_lock, NULL, NULL, NULL, NULL);
if (ret)
goto out;
if ((from_off & SCOUTFS_BLOCK_SM_MASK) ||
(to_off & SCOUTFS_BLOCK_SM_MASK) ||
((byte_len & SCOUTFS_BLOCK_SM_MASK) &&
(from_off + byte_len != i_size_read(from)))) {
ret = -EINVAL;
goto out;
}
if (is_stage && (data_version != SCOUTFS_I(to)->data_version)) {
ret = -ESTALE;
goto out;
}
from_iblock = from_off >> SCOUTFS_BLOCK_SM_SHIFT;
count = (byte_len + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
to_iblock = to_off >> SCOUTFS_BLOCK_SM_SHIFT;
if (S_ISDIR(from->i_mode) || S_ISDIR(to->i_mode)) {
ret = -EISDIR;
goto out;
}
if (!S_ISREG(from->i_mode) || !S_ISREG(to->i_mode)) {
ret = -EINVAL;
goto out;
}
ret = inode_permission(from, MAY_WRITE) ?:
inode_permission(to, MAY_WRITE);
if (ret < 0)
goto out;
/* can't stage once data_version changes */
scoutfs_inode_get_onoff(from, &junk, &from_offline);
scoutfs_inode_get_onoff(to, &junk, &to_offline);
if (from_offline || (to_offline && !is_stage)) {
ret = -ENODATA;
goto out;
}
from_args = (struct data_ext_args) {
.ino = scoutfs_ino(from),
.inode = from,
.lock = from_lock,
};
to_args = (struct data_ext_args) {
.ino = scoutfs_ino(to),
.inode = to,
.lock = to_lock,
};
inode_dio_wait(from);
inode_dio_wait(to);
ret = filemap_write_and_wait_range(&from->i_data, from_off,
from_off + byte_len - 1);
if (ret < 0)
goto out;
for (;;) {
ret = scoutfs_inode_index_start(sb, &seq) ?:
scoutfs_inode_index_prepare(sb, &locks, from, true) ?:
scoutfs_inode_index_prepare(sb, &locks, to, true) ?:
scoutfs_inode_index_try_lock_hold(sb, &locks, seq);
if (ret > 0)
continue;
if (ret < 0)
goto out;
ret = scoutfs_dirty_inode_item(from, from_lock) ?:
scoutfs_dirty_inode_item(to, to_lock);
if (ret < 0)
goto out;
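/* take both extent_sems; the helper is assumed to order them safely */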
down_write_two(&from_si->extent_sem, &to_si->extent_sem);
/* arbitrarily limit the number of extents per trans hold */
for (i = 0; i < MOVE_DATA_EXTENTS_PER_HOLD; i++) {
struct scoutfs_extent off_ext;
/* find the next extent to move */
ret = scoutfs_ext_next(sb, &data_ext_ops, &from_args,
from_iblock, 1, &ext);
if (ret < 0) {
if (ret == -ENOENT) {
done = true;
ret = 0;
}
break;
}
/* only move extents within count and i_size */
if (ext.start >= from_iblock + count ||
ext.start >= i_size_read(from)) {
done = true;
ret = 0;
break;
}
from_start = max(ext.start, from_iblock);
map = ext.map + (from_start - ext.start);
len = min3(from_iblock + count,
round_up((u64)i_size_read(from),
SCOUTFS_BLOCK_SM_SIZE),
ext.start + ext.len) - from_start;
to_start = to_iblock + (from_start - from_iblock);
if (is_stage) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &to_args,
to_start, 1, &off_ext);
if (ret)
break;
if (!scoutfs_ext_inside(to_start, len, &off_ext) ||
!(off_ext.flags & SEF_OFFLINE)) {
ret = -EINVAL;
break;
}
ret = scoutfs_ext_set(sb, &data_ext_ops, &to_args,
to_start, len,
map, ext.flags);
} else {
/* insert the new, fails if it overlaps */
ret = scoutfs_ext_insert(sb, &data_ext_ops, &to_args,
to_start, len,
map, ext.flags);
}
if (ret < 0)
break;
/* remove the old, possibly splitting */
ret = scoutfs_ext_set(sb, &data_ext_ops, &from_args,
from_start, len, 0, 0);
if (ret < 0) {
if (is_stage) {
/* re-mark dest range as offline */
WARN_ON_ONCE(!(off_ext.flags & SEF_OFFLINE));
err = scoutfs_ext_set(sb, &data_ext_ops, &to_args,
to_start, len,
0, off_ext.flags);
} else {
/* remove inserted new on err */
err = scoutfs_ext_remove(sb, &data_ext_ops,
&to_args, to_start,
len);
}
BUG_ON(err); /* XXX inconsistent */
break;
}
trace_scoutfs_data_move_blocks(sb, scoutfs_ino(from),
from_start, len, map,
ext.flags,
scoutfs_ino(to),
to_start);
/* moved extent might extend i_size */
to_size = (to_start + len) << SCOUTFS_BLOCK_SM_SHIFT;
if (to_size > i_size_read(to)) {
/* while maintaining final partial */
from_size = (from_start + len) <<
SCOUTFS_BLOCK_SM_SHIFT;
if (from_size > i_size_read(from))
to_size -= from_size -
i_size_read(from);
i_size_write(to, to_size);
}
}
up_write(&from_si->extent_sem);
up_write(&to_si->extent_sem);
cur_time = CURRENT_TIME;
if (!is_stage) {
to->i_ctime = to->i_mtime = cur_time;
scoutfs_inode_inc_data_version(to);
scoutfs_inode_set_data_seq(to);
}
from->i_ctime = from->i_mtime = cur_time;
scoutfs_inode_inc_data_version(from);
scoutfs_inode_set_data_seq(from);
scoutfs_update_inode_item(from, from_lock, &locks);
scoutfs_update_inode_item(to, to_lock, &locks);
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &locks);
if (ret < 0 || done)
break;
}
/* remove any cached pages from old extents */
truncate_inode_pages_extent(from, from_iblock, count);
truncate_inode_pages_extent(to, to_iblock, count);
out:
scoutfs_unlock(sb, from_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, to_lock, SCOUTFS_LOCK_WRITE);
unlock_two_nondirectories(from, to);
return ret;
}
/*
* This copies to userspace :/
*/
@@ -1075,6 +1414,7 @@ static int fill_extent(struct fiemap_extent_info *fieinfo,
int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
u64 start, u64 len)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_lock *lock = NULL;
@@ -1095,8 +1435,8 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
if (ret)
goto out;
/* XXX overkill? */
mutex_lock(&inode->i_mutex);
down_read(&si->extent_sem);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
if (ret)
@@ -1148,6 +1488,7 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
ret = fill_extent(fieinfo, &cur, last_flags);
unlock:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
up_read(&si->extent_sem);
mutex_unlock(&inode->i_mutex);
out:
@@ -1227,8 +1568,9 @@ static struct scoutfs_data_wait *dw_next(struct scoutfs_data_wait *dw)
* Check if we should wait by looking for extents whose flags match.
* Returns 0 if no extents were found or any error encountered.
*
* The caller must have locked the extents before calling, both across
* mounts and within this mount.
* The caller must have acquired a cluster lock that covers the extent
* items. We acquire the extent_sem to protect our read from writers in
* other tasks.
*
* Returns 1 if any file extents in the caller's region matched. If the
* wait struct is provided then it is initialized to be woken when the
@@ -1240,6 +1582,7 @@ int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
u8 sef, u8 op, struct scoutfs_data_wait *dw,
struct scoutfs_lock *lock)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct data_ext_args args = {
@@ -1272,6 +1615,8 @@ int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
}
}
down_read(&si->extent_sem);
iblock = pos >> SCOUTFS_BLOCK_SM_SHIFT;
last_block = (pos + len - 1) >> SCOUTFS_BLOCK_SM_SHIFT;
@@ -1308,6 +1653,8 @@ int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
iblock = ext.start + ext.len;
}
up_read(&si->extent_sem);
out:
trace_scoutfs_data_wait_check(sb, ino, pos, len, sef, op, &ext, ret);
@@ -1461,7 +1808,7 @@ void scoutfs_data_init_btrees(struct super_block *sb,
datinf->alloc = alloc;
datinf->wri = wri;
datinf->data_avail = lt->data_avail;
scoutfs_dalloc_init(&datinf->dalloc, &lt->data_avail);
datinf->data_freed = lt->data_freed;
mutex_unlock(&datinf->mutex);
@@ -1474,7 +1821,7 @@ void scoutfs_data_get_btrees(struct super_block *sb,
mutex_lock(&datinf->mutex);
lt->data_avail = datinf->data_avail;
scoutfs_dalloc_get_root(&datinf->dalloc, &lt->data_avail);
lt->data_freed = datinf->data_freed;
mutex_unlock(&datinf->mutex);
@@ -1490,31 +1837,20 @@ int scoutfs_data_prepare_commit(struct super_block *sb)
int ret;
mutex_lock(&datinf->mutex);
if (datinf->cached_ext.len) {
ret = scoutfs_free_data(sb, datinf->alloc, datinf->wri,
&datinf->data_avail,
datinf->cached_ext.start,
datinf->cached_ext.len);
if (ret == 0)
memset(&datinf->cached_ext, 0,
sizeof(datinf->cached_ext));
} else {
ret = 0;
}
ret = scoutfs_dalloc_return_cached(sb, datinf->alloc, datinf->wri,
&datinf->dalloc);
mutex_unlock(&datinf->mutex);
return ret;
}
/*
 * This isn't serializing with allocators so it can be a bit racy.
*/
u64 scoutfs_data_alloc_free_bytes(struct super_block *sb)
{
DECLARE_DATA_INFO(sb, datinf);
return le64_to_cpu(datinf->data_avail.total_len) <<
SCOUTFS_BLOCK_SM_SHIFT;
return scoutfs_dalloc_total_len(&datinf->dalloc) <<
SCOUTFS_BLOCK_SM_SHIFT;
}
int scoutfs_data_setup(struct super_block *sb)

View File

@@ -58,6 +58,9 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len);
int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
struct scoutfs_lock *lock);
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
u64 byte_len, struct inode *to, u64 to_off, bool to_stage,
u64 data_version);
int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
u8 sef, u8 op, struct scoutfs_data_wait *ow,

View File

@@ -30,6 +30,7 @@
#include "item.h"
#include "lock.h"
#include "hash.h"
#include "omap.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -463,7 +464,18 @@ out:
else
inode = scoutfs_iget(sb, ino);
return d_splice_alias(inode, dentry);
/*
* We can't splice dir aliases into the dcache. dir entries
* might have changed on other nodes so our dcache could still
* contain them, rather than having been moved in rename. For
 * dirs, we use d_materialise_unique to remove any existing
* aliases which must be stale. Our inode numbers aren't reused
* so inodes pointed to by entries can't change types.
*/
if (!IS_ERR_OR_NULL(inode) && S_ISDIR(inode->i_mode))
return d_materialise_unique(dentry, inode);
else
return d_splice_alias(inode, dentry);
}
/*
@@ -655,7 +667,6 @@ static int del_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
*/
static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
umode_t mode, dev_t rdev,
const struct scoutfs_item_count cnt,
struct scoutfs_lock **dir_lock,
struct scoutfs_lock **inode_lock,
struct list_head *ind_locks)
@@ -694,7 +705,7 @@ retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, ind_locks, dir, true) ?:
scoutfs_inode_index_prepare_ino(sb, ind_locks, ino, mode) ?:
scoutfs_inode_index_try_lock_hold(sb, ind_locks, ind_seq, cnt);
scoutfs_inode_index_try_lock_hold(sb, ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -741,7 +752,6 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
inode = lock_hold_create(dir, dentry, mode, rdev,
SIC_MKNOD(dentry->d_name.len),
&dir_lock, &inode_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
@@ -804,6 +814,7 @@ static int scoutfs_link(struct dentry *old_dentry,
struct scoutfs_lock *dir_lock;
struct scoutfs_lock *inode_lock = NULL;
LIST_HEAD(ind_locks);
bool del_orphan;
u64 dir_size;
u64 ind_seq;
u64 hash;
@@ -832,12 +843,13 @@ static int scoutfs_link(struct dentry *old_dentry,
goto out_unlock;
dir_size = i_size_read(dir) + dentry->d_name.len;
del_orphan = (inode->i_nlink == 0);
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, dir, false) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_LINK(dentry->d_name.len));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -847,6 +859,12 @@ retry:
if (ret)
goto out;
if (del_orphan) {
ret = scoutfs_orphan_dirty(sb, scoutfs_ino(inode));
if (ret)
goto out;
}
pos = SCOUTFS_I(dir)->next_readdir_pos++;
ret = add_entry_items(sb, scoutfs_ino(dir), hash, pos,
@@ -862,6 +880,11 @@ retry:
inode->i_ctime = dir->i_mtime;
inc_nlink(inode);
if (del_orphan) {
ret = scoutfs_orphan_delete(sb, scoutfs_ino(inode));
WARN_ON_ONCE(ret);
}
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -918,8 +941,7 @@ retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, dir, false) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_UNLINK(dentry->d_name.len));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -1154,7 +1176,6 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
return ret;
inode = lock_hold_create(dir, dentry, S_IFLNK|S_IRWXUGO, 0,
SIC_SYMLINK(dentry->d_name.len, name_len),
&dir_lock, &inode_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
@@ -1586,9 +1607,7 @@ retry:
scoutfs_inode_index_prepare(sb, &ind_locks, new_dir, false)) ?:
(new_inode == NULL ? 0 :
scoutfs_inode_index_prepare(sb, &ind_locks, new_inode, false)) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_RENAME(old_dentry->d_name.len,
new_dentry->d_name.len));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -1756,6 +1775,42 @@ static int scoutfs_dir_open(struct inode *inode, struct file *file)
}
#endif
static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
LIST_HEAD(ind_locks);
int ret;
if (dentry->d_name.len > SCOUTFS_NAME_LEN)
return -ENAMETOOLONG;
inode = lock_hold_create(dir, dentry, mode, 0,
&dir_lock, &inode_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
insert_inode_hash(inode);
d_tmpfile(dentry, inode);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
ret = scoutfs_orphan_inode(inode);
WARN_ON_ONCE(ret); /* XXX returning error but items deleted */
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
return ret;
}
const struct file_operations scoutfs_dir_fops = {
.KC_FOP_READDIR = scoutfs_readdir,
#ifdef KC_FMODE_KABI_ITERATE
@@ -1766,7 +1821,10 @@ const struct file_operations scoutfs_dir_fops = {
.llseek = generic_file_llseek,
};
const struct inode_operations scoutfs_dir_iops = {
const struct inode_operations_wrapper scoutfs_dir_iops = {
.ops = {
.lookup = scoutfs_lookup,
.mknod = scoutfs_mknod,
.create = scoutfs_create,
@@ -1783,6 +1841,8 @@ const struct inode_operations scoutfs_dir_iops = {
.removexattr = scoutfs_removexattr,
.symlink = scoutfs_symlink,
.permission = scoutfs_permission,
},
.tmpfile = scoutfs_tmpfile,
};
void scoutfs_dir_exit(void)

View File

@@ -5,7 +5,7 @@
#include "lock.h"
extern const struct file_operations scoutfs_dir_fops;
extern const struct inode_operations scoutfs_dir_iops;
extern const struct inode_operations_wrapper scoutfs_dir_iops;
extern const struct inode_operations scoutfs_symlink_iops;
struct scoutfs_link_backref_entry {
@@ -14,7 +14,7 @@ struct scoutfs_link_backref_entry {
u64 dir_pos;
u16 name_len;
struct scoutfs_dirent dent;
/* the full name is allocated and stored in dent.name[0] */
/* the full name is allocated and stored in dent.name[] */
};
int scoutfs_dir_get_backref_path(struct super_block *sb, u64 ino, u64 dir_ino,

View File

@@ -38,7 +38,7 @@ static bool ext_overlap(struct scoutfs_extent *ext, u64 start, u64 len)
return !(e_end < start || ext->start > end);
}
static bool ext_inside(u64 start, u64 len, struct scoutfs_extent *out)
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out)
{
u64 in_end = start + len - 1;
u64 out_end = out->start + out->len - 1;
@@ -241,7 +241,7 @@ int scoutfs_ext_remove(struct super_block *sb, struct scoutfs_ext_ops *ops,
goto out;
/* removed extent must be entirely within found */
if (!ext_inside(start, len, &found)) {
if (!scoutfs_ext_inside(start, len, &found)) {
ret = -EINVAL;
goto out;
}
@@ -341,7 +341,7 @@ int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
if (ret == 0 && ext_overlap(&found, start, len)) {
/* set extent must be entirely within found */
if (!ext_inside(start, len, &found)) {
if (!scoutfs_ext_inside(start, len, &found)) {
ret = -EINVAL;
goto out;
}

View File

@@ -31,5 +31,6 @@ int scoutfs_ext_alloc(struct super_block *sb, struct scoutfs_ext_ops *ops,
struct scoutfs_extent *ext);
int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags);
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out);
#endif

View File

@@ -27,8 +27,14 @@
#include "file.h"
#include "inode.h"
#include "per_task.h"
#include "omap.h"
/* TODO: Direct I/O, AIO */
/*
* Start a high level file read. We check for offline extents in the
* read region here so that we only check the extents once. We use the
* dio count to prevent releasing while we're reading after we've
* checked the extents.
*/
ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos)
{
@@ -42,30 +48,32 @@ ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
int ret;
retry:
/* protect checked extents from release */
mutex_lock(&inode->i_mutex);
atomic_inc(&inode->i_dio_count);
mutex_unlock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
/* protect checked extents from stage/release */
mutex_lock(&inode->i_mutex);
atomic_inc(&inode->i_dio_count);
mutex_unlock(&inode->i_mutex);
ret = scoutfs_data_wait_check_iov(inode, iov, nr_segs, pos,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ,
&dw, inode_lock);
if (ret != 0)
goto out;
} else {
WARN_ON_ONCE(true);
}
ret = generic_file_aio_read(iocb, iov, nr_segs, pos);
out:
if (scoutfs_per_task_del(&si->pt_data_lock, &pt_ent))
inode_dio_done(inode);
inode_dio_done(inode);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
if (scoutfs_data_wait_found(&dw)) {

View File

@@ -66,8 +66,8 @@ struct forest_info {
struct forest_info *name = SCOUTFS_SB(sb)->forest_info
struct forest_refs {
struct scoutfs_btree_ref fs_ref;
struct scoutfs_btree_ref logs_ref;
struct scoutfs_block_ref fs_ref;
struct scoutfs_block_ref logs_ref;
};
/* initialize some refs that initially aren't equal */
@@ -96,20 +96,16 @@ static void calc_bloom_nrs(struct forest_bloom_nrs *bloom,
}
}
static struct scoutfs_block *read_bloom_ref(struct super_block *sb,
struct scoutfs_btree_ref *ref)
static struct scoutfs_block *read_bloom_ref(struct super_block *sb, struct scoutfs_block_ref *ref)
{
struct scoutfs_block *bl;
int ret;
bl = scoutfs_block_read(sb, le64_to_cpu(ref->blkno));
if (IS_ERR(bl))
return bl;
if (!scoutfs_block_consistent_ref(sb, bl, ref->seq, ref->blkno,
SCOUTFS_BLOCK_MAGIC_BLOOM)) {
scoutfs_block_invalidate(sb, bl);
scoutfs_block_put(sb, bl);
return ERR_PTR(-ESTALE);
ret = scoutfs_block_read_ref(sb, ref, SCOUTFS_BLOCK_MAGIC_BLOOM, &bl);
if (ret < 0) {
if (ret == -ESTALE)
scoutfs_inc_counter(sb, forest_bloom_stale);
bl = ERR_PTR(ret);
}
return bl;
@@ -280,7 +276,6 @@ int scoutfs_forest_read_items(struct super_block *sb,
scoutfs_inc_counter(sb, forest_read_items);
calc_bloom_nrs(&bloom, &lock->start);
roots = lock->roots;
retry:
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
@@ -353,15 +348,9 @@ retry:
ret = 0;
out:
if (ret == -ESTALE) {
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0) {
ret = -EIO;
goto out;
}
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0)
return -EIO;
prev_refs = refs;
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
goto retry;
}
@@ -381,18 +370,14 @@ out:
int scoutfs_forest_set_bloom_bits(struct super_block *sb,
struct scoutfs_lock *lock)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
DECLARE_FOREST_INFO(sb, finf);
struct scoutfs_block *new_bl = NULL;
struct scoutfs_block *bl = NULL;
struct scoutfs_bloom_block *bb;
struct scoutfs_btree_ref *ref;
struct scoutfs_block_ref *ref;
struct forest_bloom_nrs bloom;
int nr_set = 0;
u64 blkno;
u64 nr;
int ret;
int err;
int i;
nr = le64_to_cpu(finf->our_log.nr);
@@ -410,53 +395,11 @@ int scoutfs_forest_set_bloom_bits(struct super_block *sb,
ref = &finf->our_log.bloom_ref;
if (ref->blkno) {
bl = read_bloom_ref(sb, ref);
if (IS_ERR(bl)) {
ret = PTR_ERR(bl);
goto unlock;
}
bb = bl->data;
}
if (!ref->blkno || !scoutfs_block_writer_is_dirty(sb, bl)) {
ret = scoutfs_alloc_meta(sb, finf->alloc, finf->wri, &blkno);
if (ret < 0)
goto unlock;
new_bl = scoutfs_block_create(sb, blkno);
if (IS_ERR(new_bl)) {
err = scoutfs_free_meta(sb, finf->alloc, finf->wri,
blkno);
BUG_ON(err); /* could have dirtied */
ret = PTR_ERR(new_bl);
goto unlock;
}
if (bl) {
err = scoutfs_free_meta(sb, finf->alloc, finf->wri,
le64_to_cpu(ref->blkno));
BUG_ON(err); /* could have dirtied */
memcpy(new_bl->data, bl->data, SCOUTFS_BLOCK_LG_SIZE);
} else {
memset(new_bl->data, 0, SCOUTFS_BLOCK_LG_SIZE);
}
scoutfs_block_writer_mark_dirty(sb, finf->wri, new_bl);
scoutfs_block_put(sb, bl);
bl = new_bl;
bb = bl->data;
new_bl = NULL;
bb->hdr.magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_BLOOM);
bb->hdr.fsid = super->hdr.fsid;
bb->hdr.blkno = cpu_to_le64(blkno);
prandom_bytes(&bb->hdr.seq, sizeof(bb->hdr.seq));
ref->blkno = bb->hdr.blkno;
ref->seq = bb->hdr.seq;
}
ret = scoutfs_block_dirty_ref(sb, finf->alloc, finf->wri, ref, SCOUTFS_BLOCK_MAGIC_BLOOM,
&bl, 0, NULL);
if (ret < 0)
goto unlock;
bb = bl->data;
for (i = 0; i < ARRAY_SIZE(bloom.nrs); i++) {
if (!test_and_set_bit_le(bloom.nrs[i], bb->bits)) {

View File

@@ -1,6 +1,9 @@
#ifndef _SCOUTFS_FORMAT_H_
#define _SCOUTFS_FORMAT_H_
#define SCOUTFS_INTEROP_VERSION 0ULL
#define SCOUTFS_INTEROP_VERSION_STR __stringify(0)
/* statfs(2) f_type */
#define SCOUTFS_SUPER_MAGIC 0x554f4353 /* "SCOU" */
@@ -11,6 +14,7 @@
#define SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK 0x897e4a7d
#define SCOUTFS_BLOCK_MAGIC_SRCH_PARENT 0xb23a2a05
#define SCOUTFS_BLOCK_MAGIC_ALLOC_LIST 0x8a93ac83
#define SCOUTFS_BLOCK_MAGIC_QUORUM 0xbc310868
/*
* The super block, quorum block, and file data allocation granularity
@@ -51,15 +55,19 @@
#define SCOUTFS_SUPER_BLKNO ((64ULL * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* A reasonably large region of aligned quorum blocks follow the super
* block. Each voting cycle reads the entire region so we don't want it
* to be too enormous. 256K seems like a reasonably chunky single IO.
* The number of blocks in the region also determines the number of
* mounts that have a reasonable probability of not overwriting each
* other's random block locations.
* A small number of quorum blocks follow the super block, enough of
* them to match the starting offset of the super block so the region is
* aligned to the power of two that contains it.
*/
#define SCOUTFS_QUORUM_BLKNO ((256ULL * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
#define SCOUTFS_QUORUM_BLOCKS ((256ULL * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
#define SCOUTFS_QUORUM_BLKNO (SCOUTFS_SUPER_BLKNO + 1)
#define SCOUTFS_QUORUM_BLOCKS (SCOUTFS_SUPER_BLKNO - 1)
/*
* Free metadata blocks start after the quorum blocks
*/
#define SCOUTFS_META_DEV_START_BLKNO \
((SCOUTFS_QUORUM_BLKNO + SCOUTFS_QUORUM_BLOCKS) >> \
SCOUTFS_BLOCK_SM_LG_SHIFT)
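/*
 * Worked example (the shift values here are assumptions, not from this
 * diff): with 4KB small blocks (SCOUTFS_BLOCK_SM_SHIFT == 12) and 64KB
 * large blocks (SCOUTFS_BLOCK_SM_LG_SHIFT == 4):
 *
 *   SCOUTFS_SUPER_BLKNO   = (64 * 1024) >> 12     = small block 16
 *   SCOUTFS_QUORUM_BLKNO  = 16 + 1                = small block 17
 *   SCOUTFS_QUORUM_BLOCKS = 16 - 1                = 15 blocks (17..31)
 *   SCOUTFS_META_DEV_START_BLKNO = (17 + 15) >> 4 = large block 2
 *
 * Small blocks 0..31 (128KB, a power of two) hold the super and quorum
 * blocks, and the 15 quorum blocks line up with the 15 slots of
 * SCOUTFS_QUORUM_MAX_SLOTS defined below.
 */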
/*
* Start data on the data device aligned as well.
@@ -78,11 +86,33 @@ struct scoutfs_timespec {
__u8 __pad[4];
};
/* XXX ipv6 */
struct scoutfs_inet_addr {
__le32 addr;
enum scoutfs_inet_family {
SCOUTFS_AF_NONE = 0,
SCOUTFS_AF_IPV4 = 1,
SCOUTFS_AF_IPV6 = 2,
};
struct scoutfs_inet_addr4 {
__le16 family;
__le16 port;
__u8 __pad[2];
__le32 addr;
};
/*
* Not yet supported by code.
*/
struct scoutfs_inet_addr6 {
__le16 family;
__le16 port;
__u8 addr[16];
__le32 flow_info;
__le32 scope_id;
__u8 __pad[4];
};
union scoutfs_inet_addr {
struct scoutfs_inet_addr4 v4;
struct scoutfs_inet_addr6 v6;
};
/*
@@ -98,6 +128,15 @@ struct scoutfs_block_header {
__le64 blkno;
};
/*
* A reference to a block. The corresponding fields in the block_header
* must match after having read the block contents.
*/
struct scoutfs_block_ref {
__le64 blkno;
__le64 seq;
};
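/*
 * A minimal sketch of the check this implies (illustrative helper, not
 * from the source; the header's seq field is used by the forest bloom
 * code above):
 */
static bool block_ref_matches(const struct scoutfs_block_header *hdr,
			      const struct scoutfs_block_ref *ref)
{
	/* both fields are __le64 so they compare directly */
	return hdr->blkno == ref->blkno && hdr->seq == ref->seq;
}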
/*
* scoutfs identifies all file system metadata items by a small key
* struct.
@@ -156,9 +195,6 @@ struct scoutfs_key {
#define sklt_rid _sk_first
#define sklt_nr _sk_second
/* lock clients */
#define sklc_rid _sk_first
/* seqs */
#define skts_trans_seq _sk_first
#define skts_rid _sk_second
@@ -173,19 +209,6 @@ struct scoutfs_key {
#define skfl_neglen _sk_second
#define skfl_blkno _sk_third
struct scoutfs_radix_block {
struct scoutfs_block_header hdr;
union {
struct scoutfs_radix_ref {
__le64 blkno;
__le64 seq;
__le64 sm_total;
__le64 lg_total;
} refs[0];
__le64 bits[0];
};
};
struct scoutfs_avl_root {
__le16 node;
};
@@ -207,17 +230,12 @@ struct scoutfs_avl_node {
*/
#define SCOUTFS_BTREE_MAX_HEIGHT 20
struct scoutfs_btree_ref {
__le64 blkno;
__le64 seq;
};
/*
* A height of X means that the first block read will have level X-1 and
* the leaves will have level 0.
*/
struct scoutfs_btree_root {
struct scoutfs_btree_ref ref;
struct scoutfs_block_ref ref;
__u8 height;
__u8 __pad[7];
};
@@ -238,7 +256,7 @@ struct scoutfs_btree_block {
__le16 mid_free_len;
__u8 level;
__u8 __pad[7];
struct scoutfs_btree_item items[0];
struct scoutfs_btree_item items[];
/* leaf blocks have a fixed size item offset hash table at the end */
};
@@ -258,18 +276,13 @@ struct scoutfs_btree_block {
#define SCOUTFS_BTREE_LEAF_ITEM_HASH_BYTES \
(SCOUTFS_BTREE_LEAF_ITEM_HASH_NR * sizeof(__le16))
struct scoutfs_alloc_list_ref {
__le64 blkno;
__le64 seq;
};
/*
* first_nr tracks the nr of the first block in the list and is used for
* allocation sizing. total_nr is the sum of the nr of all the blocks in
* the list and is used for calculating total free block counts.
*/
struct scoutfs_alloc_list_head {
struct scoutfs_alloc_list_ref ref;
struct scoutfs_block_ref ref;
__le64 total_nr;
__le32 first_nr;
__u8 __pad[4];
@@ -288,10 +301,10 @@ struct scoutfs_alloc_list_head {
*/
struct scoutfs_alloc_list_block {
struct scoutfs_block_header hdr;
struct scoutfs_alloc_list_ref next;
struct scoutfs_block_ref next;
__le32 start;
__le32 nr;
__le64 blknos[0]; /* naturally aligned for sorting */
__le64 blknos[]; /* naturally aligned for sorting */
};
#define SCOUTFS_ALLOC_LIST_MAX_BLOCKS \
@@ -316,7 +329,7 @@ struct scoutfs_mounted_client_btree_val {
__u8 flags;
};
#define SCOUTFS_MOUNTED_CLIENT_VOTER (1 << 0)
#define SCOUTFS_MOUNTED_CLIENT_QUORUM (1 << 0)
/*
* srch files are a contiguous run of blocks with compressed entries
@@ -334,15 +347,10 @@ struct scoutfs_srch_entry {
#define SCOUTFS_SRCH_ENTRY_MAX_BYTES (2 + (sizeof(__u64) * 3))
struct scoutfs_srch_ref {
__le64 blkno;
__le64 seq;
};
struct scoutfs_srch_file {
struct scoutfs_srch_entry first;
struct scoutfs_srch_entry last;
struct scoutfs_srch_ref ref;
struct scoutfs_block_ref ref;
__le64 blocks;
__le64 entries;
__u8 height;
@@ -351,13 +359,13 @@ struct scoutfs_srch_file {
struct scoutfs_srch_parent {
struct scoutfs_block_header hdr;
struct scoutfs_srch_ref refs[0];
struct scoutfs_block_ref refs[];
};
#define SCOUTFS_SRCH_PARENT_REFS \
((SCOUTFS_BLOCK_LG_SIZE - \
offsetof(struct scoutfs_srch_parent, refs)) / \
sizeof(struct scoutfs_srch_ref))
sizeof(struct scoutfs_block_ref))
struct scoutfs_srch_block {
struct scoutfs_block_header hdr;
@@ -366,7 +374,7 @@ struct scoutfs_srch_block {
struct scoutfs_srch_entry tail;
__le32 entry_nr;
__le32 entry_bytes;
__u8 entries[0];
__u8 entries[];
};
/*
@@ -428,7 +436,7 @@ struct scoutfs_log_trees {
struct scoutfs_alloc_list_head meta_avail;
struct scoutfs_alloc_list_head meta_freed;
struct scoutfs_btree_root item_root;
struct scoutfs_btree_ref bloom_ref;
struct scoutfs_block_ref bloom_ref;
struct scoutfs_alloc_root data_avail;
struct scoutfs_alloc_root data_freed;
struct scoutfs_srch_file srch_file;
@@ -441,7 +449,7 @@ struct scoutfs_log_item_value {
__le64 vers;
__u8 flags;
__u8 __pad[7];
__u8 data[0];
__u8 data[];
};
/*
@@ -456,7 +464,7 @@ struct scoutfs_log_item_value {
struct scoutfs_bloom_block {
struct scoutfs_block_header hdr;
__le64 total_set;
__le64 bits[0];
__le64 bits[];
};
/*
@@ -482,11 +490,10 @@ struct scoutfs_bloom_block {
#define SCOUTFS_LOCK_ZONE 4
/* Items only stored in server btrees */
#define SCOUTFS_LOG_TREES_ZONE 6
#define SCOUTFS_LOCK_CLIENTS_ZONE 7
#define SCOUTFS_TRANS_SEQ_ZONE 8
#define SCOUTFS_MOUNTED_CLIENT_ZONE 9
#define SCOUTFS_SRCH_ZONE 10
#define SCOUTFS_FREE_EXTENT_ZONE 11
#define SCOUTFS_TRANS_SEQ_ZONE 7
#define SCOUTFS_MOUNTED_CLIENT_ZONE 8
#define SCOUTFS_SRCH_ZONE 9
#define SCOUTFS_FREE_EXTENT_ZONE 10
/* inode index zone */
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 1
@@ -538,7 +545,7 @@ struct scoutfs_xattr {
__le16 val_len;
__u8 name_len;
__u8 __pad[5];
__u8 name[0];
__u8 name[];
};
@@ -547,56 +554,84 @@ struct scoutfs_xattr {
#define SCOUTFS_UUID_BYTES 16
/*
* Mounts read all the quorum blocks and write to one random quorum
* block during a cycle. The min cycle time limits the per-mount iop
* load during elections. The random cycle delay makes it less likely
* that mounts will read and write at the same time and miss each
* other's writes. An election only completes if a quorum of mounts
* vote for a leader before any of their elections timeout. This is
* made less likely by the probability that mounts will overwrite each
* others random block locations. The max quorum count limits that
* probability. 9 mounts only have a 55% chance of writing to unique 4k
* blocks in a 256k region. The election timeout is set to include
* enough cycles to usually complete the election. Once a leader is
* elected it spends a number of cycles writing out blocks with itself
* logged as a leader. This reduces the possibility that servers
* will have their log entries overwritten and not be fenced.
*/
#define SCOUTFS_QUORUM_MAX_COUNT 9
#define SCOUTFS_QUORUM_CYCLE_LO_MS 10
#define SCOUTFS_QUORUM_CYCLE_HI_MS 20
#define SCOUTFS_QUORUM_TERM_LO_MS 250
#define SCOUTFS_QUORUM_TERM_HI_MS 500
#define SCOUTFS_QUORUM_ELECTED_LOG_CYCLES 10
#define SCOUTFS_QUORUM_MAX_SLOTS 15
struct scoutfs_quorum_block {
/*
* To elect a leader, members race to have their variable election
* timeouts expire. If they're first to send a vote request with a
* greater term to a majority of waiting members they'll be elected with
* a majority. If the timeouts are too close, the vote may be split and
* everyone will wait for another cycle of variable timeouts to expire.
*
* These determine how long it will take to elect a leader once there's
* no evidence of a server (no leader quorum blocks on mount; heartbeat
* timeout expired.)
*/
#define SCOUTFS_QUORUM_ELECT_MIN_MS 250
#define SCOUTFS_QUORUM_ELECT_VAR_MS 100
/*
* Once a leader is elected they send out heartbeats at regular
* intervals to force members to wait the much longer heartbeat timeout.
* Once heartbeat timeout expires without receiving a heartbeat they'll
* switch over the performing elections.
*
* These determine how long it could take members to notice that a
* leader has gone silent and start to elect a new leader.
*/
#define SCOUTFS_QUORUM_HB_IVAL_MS 100
#define SCOUTFS_QUORUM_HB_TIMEO_MS (5 * MSEC_PER_SEC)
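/*
 * A sketch of how a member could derive its randomized election
 * timeout from these constants (illustrative helper, not from the
 * source):
 */
static inline unsigned long quorum_elect_timeout_ms(void)
{
	/* uniform random timeout in [ELECT_MIN, ELECT_MIN + ELECT_VAR) */
	return SCOUTFS_QUORUM_ELECT_MIN_MS +
	       prandom_u32_max(SCOUTFS_QUORUM_ELECT_VAR_MS);
}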
struct scoutfs_quorum_message {
__le64 fsid;
__le64 blkno;
__le64 version;
__le64 term;
__le64 write_nr;
__le64 voter_rid;
__le64 vote_for_rid;
__u8 type;
__u8 from;
__u8 __pad[2];
__le32 crc;
__u8 log_nr;
__u8 __pad[3];
struct scoutfs_quorum_log {
__le64 term;
__le64 rid;
struct scoutfs_inet_addr addr;
} log[0];
};
#define SCOUTFS_QUORUM_LOG_MAX \
((SCOUTFS_BLOCK_SM_SIZE - sizeof(struct scoutfs_quorum_block)) / \
sizeof(struct scoutfs_quorum_log))
/* a candidate requests a vote */
#define SCOUTFS_QUORUM_MSG_REQUEST_VOTE 0
/* followers send votes to candidates */
#define SCOUTFS_QUORUM_MSG_VOTE 1
/* elected leaders broadcast heartbeats to delay elections */
#define SCOUTFS_QUORUM_MSG_HEARTBEAT 2
/* leaders broadcast as they leave to break heartbeat timeout */
#define SCOUTFS_QUORUM_MSG_RESIGNATION 3
#define SCOUTFS_QUORUM_MSG_INVALID 4
/*
* The version is currently always 0, but will be used by mounts to
* discover that membership has changed.
*/
struct scoutfs_quorum_config {
__le64 version;
struct scoutfs_quorum_slot {
union scoutfs_inet_addr addr;
} slots[SCOUTFS_QUORUM_MAX_SLOTS];
};
struct scoutfs_quorum_block {
struct scoutfs_block_header hdr;
__le64 term;
__le64 random_write_mark;
__le64 flags;
struct scoutfs_quorum_block_event {
__le64 rid;
struct scoutfs_timespec ts;
} write, update_term, set_leader, clear_leader, fenced;
};
#define SCOUTFS_QUORUM_BLOCK_LEADER (1 << 0)
#define SCOUTFS_FLAG_IS_META_BDEV 0x01
struct scoutfs_super_block {
struct scoutfs_block_header hdr;
__le64 id;
__le64 format_hash;
__le64 version;
__le64 flags;
__u8 uuid[SCOUTFS_UUID_BYTES];
__le64 next_ino;
@@ -607,19 +642,13 @@ struct scoutfs_super_block {
__le64 total_data_blocks;
__le64 first_data_blkno;
__le64 last_data_blkno;
__le64 quorum_fenced_term;
__le64 quorum_server_term;
__le64 unmount_barrier;
__u8 quorum_count;
__u8 __pad[7];
struct scoutfs_inet_addr server_addr;
struct scoutfs_quorum_config qconf;
struct scoutfs_alloc_root meta_alloc[2];
struct scoutfs_alloc_root data_alloc;
struct scoutfs_alloc_list_head server_meta_avail[2];
struct scoutfs_alloc_list_head server_meta_freed[2];
struct scoutfs_btree_root fs_root;
struct scoutfs_btree_root logs_root;
struct scoutfs_btree_root lock_clients;
struct scoutfs_btree_root trans_seqs;
struct scoutfs_btree_root mounted_clients;
struct scoutfs_btree_root srch_root;
@@ -695,7 +724,7 @@ struct scoutfs_dirent {
__le64 pos;
__u8 type;
__u8 __pad[7];
__u8 name[0];
__u8 name[];
};
#define SCOUTFS_NAME_LEN 255
@@ -746,12 +775,6 @@ enum scoutfs_dentry_type {
 * the same server after receiving a greeting response and to a new
* server after failover.
*
* @unmount_barrier: Incremented every time the remaining majority of
* quorum members all agree to leave. The server tells a quorum member
* the value that it's connecting under so that if the client sees the
* value increase in the super block then it knows that the server has
* processed its farewell and can safely unmount.
*
* @rid: The client's random id that was generated once as the mount
* started up. This identifies a specific remote mount across
* connections and servers. It's set to the client's rid in both the
@@ -759,15 +782,14 @@ enum scoutfs_dentry_type {
*/
struct scoutfs_net_greeting {
__le64 fsid;
__le64 format_hash;
__le64 version;
__le64 server_term;
__le64 unmount_barrier;
__le64 rid;
__le64 flags;
};
#define SCOUTFS_NET_GREETING_FLAG_FAREWELL (1 << 0)
#define SCOUTFS_NET_GREETING_FLAG_VOTER (1 << 1)
#define SCOUTFS_NET_GREETING_FLAG_QUORUM (1 << 1)
#define SCOUTFS_NET_GREETING_FLAG_INVALID (~(__u64)0 << 2)
/*
@@ -800,7 +822,7 @@ struct scoutfs_net_header {
__u8 flags;
__u8 error;
__u8 __pad[3];
__u8 data[0];
__u8 data[];
};
#define SCOUTFS_NET_FLAG_RESPONSE (1 << 0)
@@ -818,6 +840,7 @@ enum scoutfs_net_cmd {
SCOUTFS_NET_CMD_LOCK_RECOVER,
SCOUTFS_NET_CMD_SRCH_GET_COMPACT,
SCOUTFS_NET_CMD_SRCH_COMMIT_COMPACT,
SCOUTFS_NET_CMD_OPEN_INO_MAP,
SCOUTFS_NET_CMD_FAREWELL,
SCOUTFS_NET_CMD_UNKNOWN,
};
@@ -868,15 +891,10 @@ struct scoutfs_net_lock {
__u8 __pad[6];
};
struct scoutfs_net_lock_grant_response {
struct scoutfs_net_lock nl;
struct scoutfs_net_roots roots;
};
struct scoutfs_net_lock_recover {
__le16 nr;
__u8 __pad[6];
struct scoutfs_net_lock locks[0];
struct scoutfs_net_lock locks[];
};
#define SCOUTFS_NET_LOCK_MAX_RECOVER_NR \
@@ -943,4 +961,42 @@ enum scoutfs_corruption_sources {
#define SC_NR_LONGS DIV_ROUND_UP(SC_NR_SOURCES, BITS_PER_LONG)
#define SCOUTFS_OPEN_INO_MAP_SHIFT 10
#define SCOUTFS_OPEN_INO_MAP_BITS (1 << SCOUTFS_OPEN_INO_MAP_SHIFT)
#define SCOUTFS_OPEN_INO_MAP_MASK (SCOUTFS_OPEN_INO_MAP_BITS - 1)
#define SCOUTFS_OPEN_INO_MAP_LE64S (SCOUTFS_OPEN_INO_MAP_BITS / 64)
/*
* The request and response conversation is as follows:
*
* client[init] -> server:
* group_nr = G
* req_id = 0 (I)
* server -> client[*]
* group_nr = G
* req_id = R
* client[*] -> server
* group_nr = G (I)
* req_id = R
* bits
* server -> client[init]
* group_nr = G (I)
* req_id = R (I)
* bits
*
* Many of the fields in individual messages are ignored ("I") because
* the net id or the omap req_id can be used to identify the
* conversation. We always include them on the wire to make inspected
* messages easier to follow.
*/
struct scoutfs_open_ino_map_args {
__le64 group_nr;
__le64 req_id;
};
struct scoutfs_open_ino_map {
struct scoutfs_open_ino_map_args args;
__le64 bits[SCOUTFS_OPEN_INO_MAP_LE64S];
};
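/*
 * A sketch of how an inode number maps to a group and bit in these
 * messages (illustrative helper, not from the source):
 */
static inline bool open_ino_map_test(const struct scoutfs_open_ino_map *map,
				     u64 ino)
{
	u64 group_nr = ino >> SCOUTFS_OPEN_INO_MAP_SHIFT;
	u32 bit = ino & SCOUTFS_OPEN_INO_MAP_MASK;

	/* a map only answers for its own group of 1024 inodes */
	return le64_to_cpu(map->args.group_nr) == group_nr &&
	       test_bit_le(bit, map->bits);
}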
#endif

View File

@@ -33,6 +33,7 @@
#include "item.h"
#include "client.h"
#include "cmp.h"
#include "omap.h"
/*
* XXX
@@ -71,29 +72,32 @@ static struct kmem_cache *scoutfs_inode_cachep;
*/
static void scoutfs_inode_ctor(void *obj)
{
struct scoutfs_inode_info *ci = obj;
struct scoutfs_inode_info *si = obj;
mutex_init(&ci->item_mutex);
seqcount_init(&ci->seqcount);
ci->staging = false;
scoutfs_per_task_init(&ci->pt_data_lock);
atomic64_set(&ci->data_waitq.changed, 0);
init_waitqueue_head(&ci->data_waitq.waitq);
init_rwsem(&ci->xattr_rwsem);
RB_CLEAR_NODE(&ci->writeback_node);
init_rwsem(&si->extent_sem);
mutex_init(&si->item_mutex);
seqcount_init(&si->seqcount);
si->staging = false;
scoutfs_per_task_init(&si->pt_data_lock);
atomic64_set(&si->data_waitq.changed, 0);
init_waitqueue_head(&si->data_waitq.waitq);
init_rwsem(&si->xattr_rwsem);
RB_CLEAR_NODE(&si->writeback_node);
scoutfs_lock_init_coverage(&si->ino_lock_cov);
atomic_set(&si->inv_iput_count, 0);
inode_init_once(&ci->inode);
inode_init_once(&si->inode);
}
struct inode *scoutfs_alloc_inode(struct super_block *sb)
{
struct scoutfs_inode_info *ci;
struct scoutfs_inode_info *si;
ci = kmem_cache_alloc(scoutfs_inode_cachep, GFP_NOFS);
if (!ci)
si = kmem_cache_alloc(scoutfs_inode_cachep, GFP_NOFS);
if (!si)
return NULL;
return &ci->inode;
return &si->inode;
}
static void scoutfs_i_callback(struct rcu_head *head)
@@ -140,12 +144,15 @@ static void remove_writeback_inode(struct inode_sb_info *inf,
void scoutfs_destroy_inode(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
spin_lock(&inf->writeback_lock);
remove_writeback_inode(inf, SCOUTFS_I(inode));
spin_unlock(&inf->writeback_lock);
scoutfs_lock_del_coverage(inode->i_sb, &si->ino_lock_cov);
call_rcu(&inode->i_rcu, scoutfs_i_callback);
}
@@ -181,7 +188,8 @@ static void set_inode_ops(struct inode *inode)
inode->i_fop = &scoutfs_file_fops;
break;
case S_IFDIR:
inode->i_op = &scoutfs_dir_iops;
inode->i_op = &scoutfs_dir_iops.ops;
inode->i_flags |= S_IOPS_WRAPPER;
inode->i_fop = &scoutfs_dir_fops;
break;
case S_IFLNK:
@@ -221,7 +229,7 @@ static void set_item_info(struct scoutfs_inode_info *si,
static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
i_size_write(inode, le64_to_cpu(cinode->size));
set_nlink(inode, le32_to_cpu(cinode->nlink));
@@ -236,23 +244,23 @@ static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
inode->i_ctime.tv_sec = le64_to_cpu(cinode->ctime.sec);
inode->i_ctime.tv_nsec = le32_to_cpu(cinode->ctime.nsec);
ci->meta_seq = le64_to_cpu(cinode->meta_seq);
ci->data_seq = le64_to_cpu(cinode->data_seq);
ci->data_version = le64_to_cpu(cinode->data_version);
ci->online_blocks = le64_to_cpu(cinode->online_blocks);
ci->offline_blocks = le64_to_cpu(cinode->offline_blocks);
ci->next_readdir_pos = le64_to_cpu(cinode->next_readdir_pos);
ci->next_xattr_id = le64_to_cpu(cinode->next_xattr_id);
ci->flags = le32_to_cpu(cinode->flags);
si->meta_seq = le64_to_cpu(cinode->meta_seq);
si->data_seq = le64_to_cpu(cinode->data_seq);
si->data_version = le64_to_cpu(cinode->data_version);
si->online_blocks = le64_to_cpu(cinode->online_blocks);
si->offline_blocks = le64_to_cpu(cinode->offline_blocks);
si->next_readdir_pos = le64_to_cpu(cinode->next_readdir_pos);
si->next_xattr_id = le64_to_cpu(cinode->next_xattr_id);
si->flags = le32_to_cpu(cinode->flags);
/*
* i_blocks is initialized from online and offline and is then
* maintained as blocks come and go.
*/
inode->i_blocks = (ci->online_blocks + ci->offline_blocks)
inode->i_blocks = (si->online_blocks + si->offline_blocks)
<< SCOUTFS_BLOCK_SM_SECTOR_SHIFT;
set_item_info(ci, cinode);
set_item_info(si, cinode);
}
static void init_inode_key(struct scoutfs_key *key, u64 ino)
@@ -305,6 +313,8 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock,
if (ret == 0) {
load_inode(inode, &sinode);
atomic64_set(&si->last_refreshed, refresh_gen);
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
si->drop_invalidated = false;
}
} else {
ret = 0;
@@ -334,7 +344,7 @@ int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
u64 new_size, bool truncate)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
LIST_HEAD(ind_locks);
int ret;
@@ -342,8 +352,7 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
if (!S_ISREG(inode->i_mode))
return 0;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, true,
SIC_DIRTY_INODE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, true);
if (ret)
return ret;
@@ -353,7 +362,7 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
truncate_setsize(inode, new_size);
inode->i_ctime = inode->i_mtime = CURRENT_TIME;
if (truncate)
ci->flags |= SCOUTFS_INO_FLAG_TRUNCATE;
si->flags |= SCOUTFS_INO_FLAG_TRUNCATE;
scoutfs_inode_set_data_seq(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
@@ -365,17 +374,16 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
static int clear_truncate_flag(struct inode *inode, struct scoutfs_lock *lock)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
LIST_HEAD(ind_locks);
int ret;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false,
SIC_DIRTY_INODE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false);
if (ret)
return ret;
ci->flags &= ~SCOUTFS_INO_FLAG_TRUNCATE;
si->flags &= ~SCOUTFS_INO_FLAG_TRUNCATE;
scoutfs_update_inode_item(inode, lock, &ind_locks);
scoutfs_release_trans(sb);
@@ -386,13 +394,13 @@ static int clear_truncate_flag(struct inode *inode, struct scoutfs_lock *lock)
int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
u64 start;
int ret, err;
trace_scoutfs_complete_truncate(inode, ci->flags);
trace_scoutfs_complete_truncate(inode, si->flags);
if (!(ci->flags & SCOUTFS_INO_FLAG_TRUNCATE))
if (!(si->flags & SCOUTFS_INO_FLAG_TRUNCATE))
return 0;
start = (i_size_read(inode) + SCOUTFS_BLOCK_SM_SIZE - 1) >>
@@ -486,8 +494,7 @@ retry:
}
}
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false,
SIC_DIRTY_INODE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false);
if (ret)
goto out;
@@ -643,19 +650,19 @@ void scoutfs_inode_get_onoff(struct inode *inode, s64 *on, s64 *off)
static int scoutfs_iget_test(struct inode *inode, void *arg)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
u64 *ino = arg;
return ci->ino == *ino;
return si->ino == *ino;
}
static int scoutfs_iget_set(struct inode *inode, void *arg)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
u64 *ino = arg;
inode->i_ino = *ino;
ci->ino = *ino;
si->ino = *ino;
return 0;
}
@@ -665,6 +672,28 @@ struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino)
return ilookup5(sb, ino, scoutfs_iget_test, &ino);
}
static int iget_test_nofreeing(struct inode *inode, void *arg)
{
return !(inode->i_state & I_FREEING) && scoutfs_iget_test(inode, arg);
}
/*
* There's a natural risk of a deadlock between lock invalidation and
* eviction. Invalidation blocks locks while looking up inodes and
* invalidating local caches. Inode eviction gets a lock to check final
* inode deletion while the inode is marked FREEING which blocks
* lookups.
*
* We have a lookup variant which ignores I_FREEING inodes instead of
* waiting on them.  If an inode has made it to I_FREEING
* then it doesn't have any local caches that are reachable and the lock
* invalidation promise is kept.
*/
struct inode *scoutfs_ilookup_nofreeing(struct super_block *sb, u64 ino)
{
return ilookup5(sb, ino, iget_test_nofreeing, &ino);
}
struct inode *scoutfs_iget(struct super_block *sb, u64 ino)
{
struct scoutfs_lock *lock = NULL;
@@ -689,6 +718,8 @@ struct inode *scoutfs_iget(struct super_block *sb, u64 ino)
atomic64_set(&si->last_refreshed, 0);
ret = scoutfs_inode_refresh(inode, lock, 0);
if (ret == 0)
ret = scoutfs_omap_inc(sb, ino);
if (ret) {
iget_failed(inode);
inode = ERR_PTR(ret);
@@ -705,7 +736,7 @@ out:
static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
u64 online_blocks;
u64 offline_blocks;
@@ -732,9 +763,9 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
cinode->data_version = cpu_to_le64(scoutfs_inode_data_version(inode));
cinode->online_blocks = cpu_to_le64(online_blocks);
cinode->offline_blocks = cpu_to_le64(offline_blocks);
cinode->next_readdir_pos = cpu_to_le64(ci->next_readdir_pos);
cinode->next_xattr_id = cpu_to_le64(ci->next_xattr_id);
cinode->flags = cpu_to_le32(ci->flags);
cinode->next_readdir_pos = cpu_to_le64(si->next_readdir_pos);
cinode->next_xattr_id = cpu_to_le64(si->next_xattr_id);
cinode->flags = cpu_to_le32(si->flags);
}
/*
@@ -1188,8 +1219,7 @@ int scoutfs_inode_index_start(struct super_block *sb, u64 *seq)
* Returns > 0 if the seq changed and the locks should be retried.
*/
int scoutfs_inode_index_try_lock_hold(struct super_block *sb,
struct list_head *list, u64 seq,
const struct scoutfs_item_count cnt)
struct list_head *list, u64 seq)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct index_lock *ind_lock;
@@ -1205,7 +1235,7 @@ int scoutfs_inode_index_try_lock_hold(struct super_block *sb,
goto out;
}
ret = scoutfs_hold_trans(sb, cnt);
ret = scoutfs_hold_trans(sb);
if (ret == 0 && seq != sbi->trans_seq) {
scoutfs_release_trans(sb);
ret = 1;
@@ -1219,8 +1249,7 @@ out:
}
int scoutfs_inode_index_lock_hold(struct inode *inode, struct list_head *list,
bool set_data_seq,
const struct scoutfs_item_count cnt)
bool set_data_seq)
{
struct super_block *sb = inode->i_sb;
int ret;
@@ -1230,7 +1259,7 @@ int scoutfs_inode_index_lock_hold(struct inode *inode, struct list_head *list,
ret = scoutfs_inode_index_start(sb, &seq) ?:
scoutfs_inode_index_prepare(sb, list, inode,
set_data_seq) ?:
scoutfs_inode_index_try_lock_hold(sb, list, seq, cnt);
scoutfs_inode_index_try_lock_hold(sb, list, seq);
} while (ret > 0);
return ret;
@@ -1368,7 +1397,7 @@ struct inode *scoutfs_new_inode(struct super_block *sb, struct inode *dir,
umode_t mode, dev_t rdev, u64 ino,
struct scoutfs_lock *lock)
{
struct scoutfs_inode_info *ci;
struct scoutfs_inode_info *si;
struct scoutfs_key key;
struct scoutfs_inode sinode;
struct inode *inode;
@@ -1378,16 +1407,18 @@ struct inode *scoutfs_new_inode(struct super_block *sb, struct inode *dir,
if (!inode)
return ERR_PTR(-ENOMEM);
ci = SCOUTFS_I(inode);
ci->ino = ino;
ci->data_version = 0;
ci->online_blocks = 0;
ci->offline_blocks = 0;
ci->next_readdir_pos = SCOUTFS_DIRENT_FIRST_POS;
ci->next_xattr_id = 0;
ci->have_item = false;
atomic64_set(&ci->last_refreshed, lock->refresh_gen);
ci->flags = 0;
si = SCOUTFS_I(inode);
si->ino = ino;
si->data_version = 0;
si->online_blocks = 0;
si->offline_blocks = 0;
si->next_readdir_pos = SCOUTFS_DIRENT_FIRST_POS;
si->next_xattr_id = 0;
si->have_item = false;
atomic64_set(&si->last_refreshed, lock->refresh_gen);
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
si->drop_invalidated = false;
si->flags = 0;
scoutfs_inode_set_meta_seq(inode);
scoutfs_inode_set_data_seq(inode);
@@ -1402,10 +1433,17 @@ struct inode *scoutfs_new_inode(struct super_block *sb, struct inode *dir,
store_inode(&sinode, inode);
init_inode_key(&key, scoutfs_ino(inode));
ret = scoutfs_omap_inc(sb, ino);
if (ret < 0)
goto out;
ret = scoutfs_item_create(sb, &key, &sinode, sizeof(sinode), lock);
if (ret < 0)
scoutfs_omap_dec(sb, ino);
out:
if (ret) {
iput(inode);
return ERR_PTR(ret);
inode = ERR_PTR(ret);
}
return inode;
@@ -1421,7 +1459,18 @@ static void init_orphan_key(struct scoutfs_key *key, u64 rid, u64 ino)
};
}
static int remove_orphan_item(struct super_block *sb, u64 ino)
int scoutfs_orphan_dirty(struct super_block *sb, u64 ino)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_lock *lock = sbi->rid_lock;
struct scoutfs_key key;
init_orphan_key(&key, sbi->rid, ino);
return scoutfs_item_dirty(sb, &key, lock);
}
int scoutfs_orphan_delete(struct super_block *sb, u64 ino)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_lock *lock = sbi->rid_lock;
@@ -1439,15 +1488,15 @@ static int remove_orphan_item(struct super_block *sb, u64 ino)
/*
* Remove all the items associated with a given inode. This is only
* called once nlink has dropped to zero so we don't have to worry about
* dirents referencing the inode or link backrefs. Dropping nlink to 0
* also created an orphan item. That orphan item will continue
* triggering attempts to finish previous partial deletion until all
* deletion is complete and the orphan item is removed.
* called once nlink has dropped to zero and nothing has the inode open
* so we don't have to worry about dirents referencing the inode or link
* backrefs. Dropping nlink to 0 also created an orphan item. That
* orphan item will continue triggering attempts to finish previous
* partial deletion until all deletion is complete and the orphan item
* is removed.
*/
static int delete_inode_items(struct super_block *sb, u64 ino)
static int delete_inode_items(struct super_block *sb, u64 ino, struct scoutfs_lock *lock)
{
struct scoutfs_lock *lock = NULL;
struct scoutfs_inode sinode;
struct scoutfs_key key;
LIST_HEAD(ind_locks);
@@ -1457,10 +1506,6 @@ static int delete_inode_items(struct super_block *sb, u64 ino)
u64 size;
int ret;
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_WRITE, 0, ino, &lock);
if (ret)
return ret;
init_inode_key(&key, ino);
ret = scoutfs_item_lookup_exact(sb, &key, &sinode, sizeof(sinode),
@@ -1498,8 +1543,7 @@ static int delete_inode_items(struct super_block *sb, u64 ino)
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
prepare_index_deletion(sb, &ind_locks, ino, mode, &sinode) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_DROP_INODE(mode, size));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -1521,23 +1565,29 @@ retry:
if (ret)
goto out;
ret = remove_orphan_item(sb, ino);
ret = scoutfs_orphan_delete(sb, ino);
out:
if (release)
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
return ret;
}
/*
* iput_final has already written out the dirty pages to the inode
* before we get here. We're left with a clean inode that we have to
* tear down. If there are no more links to the inode then we also
* remove all its persistent structures.
* tear down. We use locking and open inode number bitmaps to decide if
* we should finally destroy an inode that is no longer open nor
* reachable through directory entries.
*/
void scoutfs_evict_inode(struct inode *inode)
{
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_lock *lock;
int ret;
trace_scoutfs_evict_inode(inode->i_sb, scoutfs_ino(inode),
inode->i_nlink, is_bad_inode(inode));
@@ -1546,19 +1596,45 @@ void scoutfs_evict_inode(struct inode *inode)
truncate_inode_pages_final(&inode->i_data);
if (inode->i_nlink == 0)
delete_inode_items(inode->i_sb, scoutfs_ino(inode));
ret = scoutfs_omap_should_delete(sb, inode, &lock);
if (ret > 0) {
ret = delete_inode_items(inode->i_sb, scoutfs_ino(inode), lock);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
}
if (ret < 0)
scoutfs_err(sb, "error %d while checking to delete inode nr %llu, it might linger.",
ret, ino);
scoutfs_omap_dec(sb, ino);
clear:
clear_inode(inode);
}
/*
* We want to remove inodes from the cache as their count goes to 0 if
* they're no longer covered by a cluster lock or if while locked they
* were unlinked.
*
* We don't want unused cached inodes to linger outside of cluster
* locking so that they don't prevent final inode deletion on other
* nodes. We don't have specific per-inode or per-dentry locks which
* would otherwise remove the stale caches as they're invalidated.
* Stale cached inodes provide little value because they're going to be
* refreshed the next time they're locked. Populating the item cache
* and loading the inode item is a lot more expensive than initializing
* and inserting a newly allocated vfs inode.
*/
int scoutfs_drop_inode(struct inode *inode)
{
int ret = generic_drop_inode(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
trace_scoutfs_drop_inode(inode->i_sb, scoutfs_ino(inode),
inode->i_nlink, inode_unhashed(inode));
return ret;
trace_scoutfs_drop_inode(sb, scoutfs_ino(inode), inode->i_nlink, inode_unhashed(inode),
si->drop_invalidated);
return si->drop_invalidated || !scoutfs_lock_is_covered(sb, &si->ino_lock_cov) ||
generic_drop_inode(inode);
}
/*
@@ -1575,8 +1651,10 @@ int scoutfs_scan_orphans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_lock *lock = sbi->rid_lock;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_key key;
struct scoutfs_key last;
u64 ino;
int err = 0;
int ret;
@@ -1592,7 +1670,13 @@ int scoutfs_scan_orphans(struct super_block *sb)
if (ret < 0)
goto out;
ret = delete_inode_items(sb, le64_to_cpu(key.sko_ino));
ino = le64_to_cpu(key.sko_ino);
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_WRITE, 0, ino, &inode_lock);
if (ret == 0) {
ret = delete_inode_items(sb, le64_to_cpu(key.sko_ino), inode_lock);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
}
if (ret && ret != -ENOENT && !err)
err = ret;
@@ -1626,19 +1710,28 @@ int scoutfs_orphan_inode(struct inode *inode)
}
/*
* Track an inode that could have dirty pages. Used to kick off writeback
* on all dirty pages during transaction commit without tying ourselves in
* knots trying to call through the high level vfs sync methods.
* Track an inode that could have dirty pages. Used to kick off
* writeback on all dirty pages during transaction commit without tying
* ourselves in knots trying to call through the high level vfs sync
* methods.
*
* This is called by writers who hold the inode and transaction. The
* inode's presence in the rbtree is removed either by destroy_inode,
* which is prevented by the inode hold, or by committing the
* transaction, which is prevented by holding the transaction.  The
* inode can only go from
* empty to on the rbtree while we're here.
*/
void scoutfs_inode_queue_writeback(struct inode *inode)
{
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
spin_lock(&inf->writeback_lock);
if (RB_EMPTY_NODE(&si->writeback_node))
insert_writeback_inode(inf, si);
spin_unlock(&inf->writeback_lock);
if (RB_EMPTY_NODE(&si->writeback_node)) {
spin_lock(&inf->writeback_lock);
if (RB_EMPTY_NODE(&si->writeback_node))
insert_writeback_inode(inf, si);
spin_unlock(&inf->writeback_lock);
}
}
/*


@@ -4,7 +4,6 @@
#include "key.h"
#include "lock.h"
#include "per_task.h"
#include "count.h"
#include "format.h"
#include "data.h"
@@ -22,6 +21,14 @@ struct scoutfs_inode_info {
u64 offline_blocks;
u32 flags;
/*
* Protects per-inode extent items, most particularly readers
* who want to serialize writers without holding i_mutex. (only
* used in data.c, it's the only place that understands file
* extent items)
*/
struct rw_semaphore extent_sem;
/*
* The in-memory item info caches the current index item values
* so that we can decide to update them with comparisons instead
@@ -44,6 +51,13 @@ struct scoutfs_inode_info {
struct rw_semaphore xattr_rwsem;
struct rb_node writeback_node;
struct scoutfs_lock_coverage ino_lock_cov;
/* drop if i_count hits 0, allows drop while invalidate holds coverage */
bool drop_invalidated;
struct llist_node inv_iput_llnode;
atomic_t inv_iput_count;
struct inode inode;
};
@@ -65,6 +79,7 @@ int scoutfs_orphan_inode(struct inode *inode);
struct inode *scoutfs_iget(struct super_block *sb, u64 ino);
struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino);
struct inode *scoutfs_ilookup_nofreeing(struct super_block *sb, u64 ino);
void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
u32 minor, u64 ino);
@@ -75,11 +90,9 @@ int scoutfs_inode_index_prepare_ino(struct super_block *sb,
struct list_head *list, u64 ino,
umode_t mode);
int scoutfs_inode_index_try_lock_hold(struct super_block *sb,
struct list_head *list, u64 seq,
const struct scoutfs_item_count cnt);
struct list_head *list, u64 seq);
int scoutfs_inode_index_lock_hold(struct inode *inode, struct list_head *list,
bool set_data_seq,
const struct scoutfs_item_count cnt);
bool set_data_seq);
void scoutfs_inode_index_unlock(struct super_block *sb, struct list_head *list);
int scoutfs_dirty_inode_item(struct inode *inode, struct scoutfs_lock *lock);
@@ -109,6 +122,8 @@ int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
int scoutfs_scan_orphans(struct super_block *sb);
int scoutfs_orphan_dirty(struct super_block *sb, u64 ino);
int scoutfs_orphan_delete(struct super_block *sb, u64 ino);
void scoutfs_inode_queue_writeback(struct inode *inode);
int scoutfs_inode_walk_writeback(struct super_block *sb, bool write);


@@ -12,6 +12,7 @@
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/uaccess.h>
#include <linux/compiler.h>
#include <linux/uio.h>
@@ -274,8 +275,8 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
struct super_block *sb = inode->i_sb;
struct scoutfs_ioctl_release args;
struct scoutfs_lock *lock = NULL;
loff_t start;
loff_t end_inc;
u64 sblock;
u64 eblock;
u64 online;
u64 offline;
u64 isize;
@@ -286,9 +287,11 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
trace_scoutfs_ioc_release(sb, scoutfs_ino(inode), &args);
if (args.count == 0)
if (args.length == 0)
return 0;
if ((args.block + args.count) < args.block)
if (((args.offset + args.length) < args.offset) ||
(args.offset & SCOUTFS_BLOCK_SM_MASK) ||
(args.length & SCOUTFS_BLOCK_SM_MASK))
return -EINVAL;
@@ -321,23 +324,24 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
inode_dio_wait(inode);
/* drop all clean and dirty cached blocks in the range */
start = args.block << SCOUTFS_BLOCK_SM_SHIFT;
end_inc = ((args.block + args.count) << SCOUTFS_BLOCK_SM_SHIFT) - 1;
truncate_inode_pages_range(&inode->i_data, start, end_inc);
truncate_inode_pages_range(&inode->i_data, args.offset,
args.offset + args.length - 1);
sblock = args.offset >> SCOUTFS_BLOCK_SM_SHIFT;
eblock = (args.offset + args.length - 1) >> SCOUTFS_BLOCK_SM_SHIFT;
ret = scoutfs_data_truncate_items(sb, inode, scoutfs_ino(inode),
args.block,
args.block + args.count - 1, true,
sblock,
eblock, true,
lock);
if (ret == 0) {
scoutfs_inode_get_onoff(inode, &online, &offline);
isize = i_size_read(inode);
if (online == 0 && isize) {
start = (isize + SCOUTFS_BLOCK_SM_SIZE - 1)
sblock = (isize + SCOUTFS_BLOCK_SM_SIZE - 1)
>> SCOUTFS_BLOCK_SM_SHIFT;
ret = scoutfs_data_truncate_items(sb, inode,
scoutfs_ino(inode),
start, U64_MAX,
sblock, U64_MAX,
false, lock);
}
}
@@ -459,23 +463,24 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
trace_scoutfs_ioc_stage(sb, scoutfs_ino(inode), &args);
end_size = args.offset + args.count;
end_size = args.offset + args.length;
/* verify arg constraints that aren't dependent on file */
if (args.count < 0 || (end_size < args.offset) ||
args.offset & SCOUTFS_BLOCK_SM_MASK)
if (args.length < 0 || (end_size < args.offset) ||
args.offset & SCOUTFS_BLOCK_SM_MASK) {
return -EINVAL;
}
if (args.count == 0)
if (args.length == 0)
return 0;
/* the iocb is really only used for the file pointer :P */
init_sync_kiocb(&kiocb, file);
kiocb.ki_pos = args.offset;
kiocb.ki_left = args.count;
kiocb.ki_nbytes = args.count;
kiocb.ki_left = args.length;
kiocb.ki_nbytes = args.length;
iov.iov_base = (void __user *)(unsigned long)args.buf_ptr;
iov.iov_len = args.count;
iov.iov_len = args.length;
ret = mnt_want_write_file(file);
if (ret)
@@ -514,11 +519,11 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
written = 0;
do {
ret = generic_file_buffered_write(&kiocb, &iov, 1, pos, &pos,
args.count, written);
args.length, written);
BUG_ON(ret == -EIOCBQUEUED);
if (ret > 0)
written += ret;
} while (ret > 0 && written < args.count);
} while (ret > 0 && written < args.length);
si->staging = false;
current->backing_dev_info = NULL;
@@ -669,8 +674,7 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
/* setting only so we don't see 0 data seq with nonzero data_version */
set_data_seq = sm.data_version != 0 ? true : false;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq,
SIC_SETATTR_MORE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq);
if (ret)
goto unlock;
@@ -933,6 +937,60 @@ static long scoutfs_ioc_alloc_detail(struct file *file, unsigned long arg)
args.copied;
}
static long scoutfs_ioc_move_blocks(struct file *file, unsigned long arg)
{
struct inode *to = file_inode(file);
struct super_block *sb = to->i_sb;
struct scoutfs_ioctl_move_blocks __user *umb = (void __user *)arg;
struct scoutfs_ioctl_move_blocks mb;
struct file *from_file;
struct inode *from;
int ret;
if (copy_from_user(&mb, umb, sizeof(mb)))
return -EFAULT;
if (mb.len == 0)
return 0;
if (mb.from_off + mb.len < mb.from_off ||
mb.to_off + mb.len < mb.to_off)
return -EOVERFLOW;
from_file = fget(mb.from_fd);
if (!from_file)
return -EBADF;
from = file_inode(from_file);
if (from == to) {
ret = -EINVAL;
goto out;
}
if (from->i_sb != sb) {
ret = -EXDEV;
goto out;
}
if (mb.flags & SCOUTFS_IOC_MB_UNKNOWN) {
ret = -EINVAL;
goto out;
}
ret = mnt_want_write_file(file);
if (ret < 0)
goto out;
ret = scoutfs_data_move_blocks(from, mb.from_off, mb.len,
to, mb.to_off, !!(mb.flags & SCOUTFS_IOC_MB_STAGE),
mb.data_version);
mnt_drop_write_file(file);
out:
fput(from_file);
return ret;
}
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
@@ -960,6 +1018,8 @@ long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
return scoutfs_ioc_data_wait_err(file, arg);
case SCOUTFS_IOC_ALLOC_DETAIL:
return scoutfs_ioc_alloc_detail(file, arg);
case SCOUTFS_IOC_MOVE_BLOCKS:
return scoutfs_ioc_move_blocks(file, arg);
}
return -ENOTTY;


@@ -163,7 +163,7 @@ struct scoutfs_ioctl_ino_path_result {
__u64 dir_pos;
__u16 path_bytes;
__u8 _pad[6];
__u8 path[0];
__u8 path[];
};
/* Get a single path from the root to the given inode number */
@@ -176,8 +176,8 @@ struct scoutfs_ioctl_ino_path_result {
* an offline record is left behind to trigger demand staging if the
* file is read.
*
* The starting block offset and number of blocks to release are in
* units 4KB blocks.
* The starting file offset and number of bytes to release must be in
* multiples of 4KB.
*
* The specified range can extend past i_size and can straddle sparse
* regions or blocks that are already offline. The only change it makes
@@ -193,8 +193,8 @@ struct scoutfs_ioctl_ino_path_result {
* presentation of the data in the file.
*/
struct scoutfs_ioctl_release {
__u64 block;
__u64 count;
__u64 offset;
__u64 length;
__u64 data_version;
};
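As a usage illustration (not part of this change set), here is a minimal userspace sketch of the new byte-based interface. Only the struct fields above come from this diff; the header include path and the SCOUTFS_IOC_RELEASE request macro are assumptions.

#include <string.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"	/* assumed header name */

/* release the first 1MB of fd's data; offset and length are byte counts */
static int release_first_mb(int fd, unsigned long long data_version)
{
	struct scoutfs_ioctl_release rel;

	memset(&rel, 0, sizeof(rel));
	rel.offset = 0;			/* must be a 4KB multiple */
	rel.length = 1024 * 1024;	/* must be a 4KB multiple */
	rel.data_version = data_version;

	/* SCOUTFS_IOC_RELEASE is assumed to be defined alongside the struct */
	return ioctl(fd, SCOUTFS_IOC_RELEASE, &rel);
}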
@@ -205,7 +205,7 @@ struct scoutfs_ioctl_stage {
__u64 data_version;
__u64 buf_ptr;
__u64 offset;
__s32 count;
__s32 length;
__u32 _pad;
};
@@ -259,7 +259,7 @@ struct scoutfs_ioctl_data_waiting {
__u8 _pad[6];
};
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U8_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U64_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING _IOR(SCOUTFS_IOCTL_MAGIC, 6, \
struct scoutfs_ioctl_data_waiting)
@@ -279,7 +279,7 @@ struct scoutfs_ioctl_setattr_more {
};
#define SCOUTFS_IOC_SETATTR_MORE_OFFLINE (1 << 0)
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U8_MAX << 1)
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U64_MAX << 1)
#define SCOUTFS_IOC_SETATTR_MORE _IOW(SCOUTFS_IOCTL_MAGIC, 7, \
struct scoutfs_ioctl_setattr_more)
@@ -395,9 +395,6 @@ struct scoutfs_ioctl_data_wait_err {
struct scoutfs_ioctl_data_wait_err)
#define SCOUTFS_IOC_ALLOC_DETAIL _IOR(SCOUTFS_IOCTL_MAGIC, 12, \
struct scoutfs_ioctl_alloc_detail)
struct scoutfs_ioctl_alloc_detail {
__u64 entries_ptr;
__u64 entries_nr;
@@ -413,4 +410,70 @@ struct scoutfs_ioctl_alloc_detail_entry {
__u8 __pad[6];
};
#define SCOUTFS_IOC_ALLOC_DETAIL _IOR(SCOUTFS_IOCTL_MAGIC, 12, \
struct scoutfs_ioctl_alloc_detail)
/*
* Move extents from one regular file to another at a different offset,
* on the same file system.
*
* from_fd specifies the source file and the ioctl is called on the
* destination file. Both files must have write access. from_off specifies
* the byte offset in the source, to_off is the byte offset in the
* destination, and len is the number of bytes in the region to move. All of
* the offsets and lengths must be in multiples of 4KB, except in the case
* where the from_off + len ends at the i_size of the source
* file. data_version is only used when STAGE flag is set (see below). flags
* field is currently only used to optionally specify STAGE behavior.
*
* This interface only moves extents which are block granular, it does
* not perform RMW of sub-block byte extents and it does not overwrite
* existing extents in the destination. It will split extents in the
* source.
*
* Only extents within i_size on the source are moved. The destination
* i_size will be updated if extents are moved beyond its current
* i_size. The i_size update will maintain final partial blocks in the
* source.
*
* If STAGE flag is not set, it will return an error if either of the files
* have offline extents. It will return 0 when all of the extents in the
* source region have been moved to the destination. Moving extents updates
* the ctime, mtime, meta_seq, data_seq, and data_version fields of both the
* source and destination inodes. If an error is returned then partial
* progress may have been made and inode fields may have been updated.
*
* If STAGE flag is set, as above except destination range must be in an
* offline extent. Fields are updated only for source inode.
*
* Errors specific to this interface include:
*
* EINVAL: from_off, len, or to_off aren't a multiple of 4KB; the source
* and destination files are the same inode; either the source or
* destination is not a regular file; the destination file has
* an existing overlapping extent (if STAGE flag not set); the
* destination range is not in an offline extent (if STAGE set).
* EOVERFLOW: either from_off + len or to_off + len exceeded 64bits.
* EBADF: from_fd isn't a valid open file descriptor.
* EXDEV: the source and destination files are in different filesystems.
* EISDIR: either the source or destination is a directory.
* ENODATA: either the source or destination file has offline extents and
* STAGE flag is not set.
* ESTALE: data_version does not match destination data_version.
*/
#define SCOUTFS_IOC_MB_STAGE (1 << 0)
#define SCOUTFS_IOC_MB_UNKNOWN (U64_MAX << 1)
struct scoutfs_ioctl_move_blocks {
__u64 from_fd;
__u64 from_off;
__u64 len;
__u64 to_off;
__u64 data_version;
__u64 flags;
};
#define SCOUTFS_IOC_MOVE_BLOCKS _IOR(SCOUTFS_IOCTL_MAGIC, 13, \
struct scoutfs_ioctl_move_blocks)
#endif
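A hedged userspace sketch of driving the new ioctl, following the description above: the ioctl is issued on the destination file and names the source via from_fd. The struct fields and the STAGE flag come from this diff; the header include path is an assumption, and error handling is simplified.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"	/* assumed header name */

/* move the first 1MB of extents from src_path into dst_path at offset 0 */
static int move_first_mb(const char *src_path, const char *dst_path)
{
	struct scoutfs_ioctl_move_blocks mb;
	int from_fd, to_fd, ret;

	from_fd = open(src_path, O_RDWR);
	to_fd = open(dst_path, O_RDWR);
	if (from_fd < 0 || to_fd < 0)
		return -1;

	memset(&mb, 0, sizeof(mb));
	mb.from_fd = from_fd;
	mb.from_off = 0;
	mb.to_off = 0;
	mb.len = 1024 * 1024;	/* offsets and length are 4KB multiples */
	mb.flags = 0;		/* not staging: no SCOUTFS_IOC_MB_STAGE */

	/* the kernel returns the EINVAL/EOVERFLOW/EXDEV cases listed above */
	ret = ioctl(to_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
	close(from_fd);
	close(to_fd);
	return ret;
}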


@@ -1339,7 +1339,10 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key,
/* split needs multiple items, sparse may not have enough */
if (!left)
return -ENOMEM;
compact_page_items(sb, pg, left);
found = item_rbtree_walk(&pg->item_root, key, NULL, &par,
&pnode);
}
item = alloc_item(pg, key, liv, val, val_len);
@@ -1491,6 +1494,8 @@ retry:
rbtree_erase(&rd->node, &root);
rbtree_insert(&rd->node, par, pnode, &cinf->pg_root);
lru_accessed(sb, cinf, rd);
trace_scoutfs_item_read_page(sb, key, &rd->start,
&rd->end);
continue;
}
@@ -2342,6 +2347,8 @@ retry:
write_lock(&pg->rwlock);
pgi = trim_page_intersection(sb, cinf, pg, right, start, end);
trace_scoutfs_item_invalidate_page(sb, start, end,
&pg->start, &pg->end, pgi);
BUG_ON(pgi == PGI_DISJOINT); /* walk wouldn't ret disjoint */
if (pgi == PGI_INSIDE) {
@@ -2364,9 +2371,9 @@ retry:
/* inv was entirely inside page, done after bisect */
write_trylock_will_succeed(&right->rwlock);
rbtree_insert(&right->node, par, pnode, &cinf->pg_root);
lru_accessed(sb, cinf, right);
write_unlock(&right->rwlock);
write_unlock(&pg->rwlock);
lru_accessed(sb, cinf, right);
right = NULL;
break;
}
@@ -2396,7 +2403,6 @@ static int item_lru_shrink(struct shrinker *shrink,
struct active_reader *active;
struct cached_page *tmp;
struct cached_page *pg;
LIST_HEAD(list);
int nr;
if (sc->nr_to_scan == 0)
@@ -2433,21 +2439,17 @@ static int item_lru_shrink(struct shrinker *shrink,
__lru_remove(sb, cinf, pg);
rbtree_erase(&pg->node, &cinf->pg_root);
list_move_tail(&pg->lru_head, &list);
invalidate_pcpu_page(pg);
write_unlock(&pg->rwlock);
put_pg(sb, pg);
if (--nr == 0)
break;
}
write_unlock(&cinf->rwlock);
spin_unlock(&cinf->lru_lock);
list_for_each_entry_safe(pg, tmp, &list, lru_head) {
list_del_init(&pg->lru_head);
put_pg(sb, pg);
}
out:
return min_t(unsigned long, cinf->lru_pages, INT_MAX);
}


@@ -34,6 +34,7 @@
#include "data.h"
#include "xattr.h"
#include "item.h"
#include "omap.h"
/*
* scoutfs uses a lock service to manage item cache consistency between
@@ -65,7 +66,7 @@
* relative to that lock state we resend.
*/
#define GRACE_PERIOD_KT ms_to_ktime(2)
#define GRACE_PERIOD_KT ms_to_ktime(10)
/*
* allocated per-super, freed on unmount.
@@ -74,6 +75,7 @@ struct lock_info {
struct super_block *sb;
spinlock_t lock;
bool shutdown;
bool unmounting;
struct rb_root lock_tree;
struct rb_root lock_range_tree;
struct shrinker shrinker;
@@ -87,6 +89,9 @@ struct lock_info {
struct work_struct shrink_work;
struct list_head shrink_list;
atomic64_t next_refresh_gen;
struct work_struct inv_iput_work;
struct llist_head inv_iput_llist;
struct dentry *tseq_dentry;
struct scoutfs_tseq_tree tseq_tree;
};
@@ -122,21 +127,81 @@ static bool lock_modes_match(int granted, int requested)
}
/*
* invalidate cached data associated with an inode whose lock is going
* away.
* Final iput can get into evict and perform final inode deletion which
* can delete a lot of items under locks and transactions. We really
* don't want to be doing all that in an iput during invalidation. When
* invalidation sees that iput might perform final deletion it puts the
* inode on a list and queues this work.
*
* Nothing stops multiple puts for multiple invalidations of an inode
* before the work runs so we can track multiple puts in flight.
*/
static void lock_inv_iput_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info, inv_iput_work);
struct scoutfs_inode_info *si;
struct scoutfs_inode_info *tmp;
struct llist_node *inodes;
bool more;
inodes = llist_del_all(&linfo->inv_iput_llist);
llist_for_each_entry_safe(si, tmp, inodes, inv_iput_llnode) {
do {
more = atomic_dec_return(&si->inv_iput_count) > 0;
iput(&si->inode);
} while (more);
}
}
/*
* Invalidate cached data associated with an inode whose lock is going
* away.  We ignore inodes with I_FREEING instead of waiting on them to
* avoid a deadlock, if they're freeing then they won't be visible to
* future lock users and we don't need to invalidate them.
*
* We try to drop cached dentries and inodes covered by the lock if they
* aren't referenced. This removes them from the mount's open map and
* allows deletions to be performed by unlink without having to wait for
* remote cached inodes to be dropped.
*
* If the cached inode was already deferring final inode deletion then
* we can't perform that inline in invalidation.  The locking alone
* would deadlock, and it might also take multiple transactions to fully
* delete an inode with significant metadata. We only perform the iput
* inline if we know that possible eviction can't perform the final
* deletion, otherwise we kick it off to async work.
*/
static void invalidate_inode(struct super_block *sb, u64 ino)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_inode_info *si;
struct inode *inode;
inode = scoutfs_ilookup(sb, ino);
inode = scoutfs_ilookup_nofreeing(sb, ino);
if (inode) {
si = SCOUTFS_I(inode);
scoutfs_inc_counter(sb, lock_invalidate_inode);
if (S_ISREG(inode->i_mode)) {
truncate_inode_pages(inode->i_mapping, 0);
scoutfs_data_wait_changed(inode);
}
iput(inode);
/* can't touch during unmount, dcache destroys w/o locks */
if (!linfo->unmounting)
d_prune_aliases(inode);
si->drop_invalidated = true;
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink > 0) {
iput(inode);
} else {
/* defer iput to work context so we don't evict inodes from invalidation */
if (atomic_inc_return(&si->inv_iput_count) == 1)
llist_add(&si->inv_iput_llnode, &linfo->inv_iput_llist);
smp_wmb(); /* count and list visible before work executes */
queue_work(linfo->workq, &linfo->inv_iput_work);
}
}
}
@@ -172,6 +237,16 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
/* have to invalidate if we're not in the only usable case */
if (!(prev == SCOUTFS_LOCK_WRITE && mode == SCOUTFS_LOCK_READ)) {
retry:
/* invalidate inodes before removing coverage */
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
ino = le64_to_cpu(lock->start.ski_ino);
last = le64_to_cpu(lock->end.ski_ino);
while (ino <= last) {
invalidate_inode(sb, ino);
ino++;
}
}
/* remove cov items to tell users that their cache is stale */
spin_lock(&lock->cov_list_lock);
list_for_each_entry_safe(cov, tmp, &lock->cov_list, head) {
@@ -187,15 +262,6 @@ retry:
}
spin_unlock(&lock->cov_list_lock);
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
ino = le64_to_cpu(lock->start.ski_ino);
last = le64_to_cpu(lock->end.ski_ino);
while (ino <= last) {
invalidate_inode(sb, ino);
ino++;
}
}
scoutfs_item_invalidate(sb, &lock->start, &lock->end);
}
@@ -229,6 +295,7 @@ static void lock_free(struct lock_info *linfo, struct scoutfs_lock *lock)
BUG_ON(!list_empty(&lock->shrink_head));
BUG_ON(!list_empty(&lock->cov_list));
scoutfs_omap_free_lock_data(lock->omap_data);
kfree(lock);
}
@@ -264,6 +331,7 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
lock->mode = SCOUTFS_LOCK_NULL;
atomic64_set(&lock->forest_bloom_nr, 0);
spin_lock_init(&lock->omap_spinlock);
trace_scoutfs_lock_alloc(sb, lock);
@@ -553,7 +621,7 @@ static void queue_grant_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->grant_list) && !linfo->shutdown)
if (!list_empty(&linfo->grant_list))
queue_work(linfo->workq, &linfo->grant_work);
}
@@ -569,7 +637,7 @@ static void queue_inv_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->inv_list) && !linfo->shutdown)
if (!list_empty(&linfo->inv_list))
mod_delayed_work(linfo->workq, &linfo->inv_dwork, 0);
}
@@ -638,7 +706,6 @@ static void lock_grant_worker(struct work_struct *work)
struct lock_info *linfo = container_of(work, struct lock_info,
grant_work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock_grant_response *gr;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
@@ -648,8 +715,7 @@ static void lock_grant_worker(struct work_struct *work)
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->grant_list, grant_head) {
gr = &lock->grant_resp;
nl = &lock->grant_resp.nl;
nl = &lock->grant_nl;
/* wait for reordered invalidation to finish */
if (lock->mode != nl->old_mode)
@@ -667,7 +733,6 @@ static void lock_grant_worker(struct work_struct *work)
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_version = le64_to_cpu(nl->write_version);
lock->roots = gr->roots;
if (lock_count_match_exists(nl->new_mode, lock->waiters))
extend_grace(sb, lock);
@@ -689,9 +754,8 @@ static void lock_grant_worker(struct work_struct *work)
* work to process.
*/
int scoutfs_lock_grant_response(struct super_block *sb,
struct scoutfs_net_lock_grant_response *gr)
struct scoutfs_net_lock *nl)
{
struct scoutfs_net_lock *nl = &gr->nl;
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
@@ -705,7 +769,7 @@ int scoutfs_lock_grant_response(struct super_block *sb,
trace_scoutfs_lock_grant_response(sb, lock);
BUG_ON(!lock->request_pending);
lock->grant_resp = *gr;
lock->grant_nl = *nl;
list_add_tail(&lock->grant_head, &linfo->grant_list);
queue_grant_work(linfo);
@@ -717,7 +781,9 @@ int scoutfs_lock_grant_response(struct super_block *sb,
/*
* Each lock has received a lock invalidation request from the server
* which specifies a new mode for the lock. The server will only send
* one invalidation request at a time for each lock.
* one invalidation request at a time for each lock. The server can
* send another invalidate request after we send the response but before
* we reacquire the lock and finish invalidation.
*
* This is an unsolicited request from the server so it can arrive at
* any time after we make the server aware of the lock by initially
@@ -770,16 +836,6 @@ static void lock_invalidate_worker(struct work_struct *work)
list_for_each_entry_safe(lock, tmp, &linfo->inv_list, inv_head) {
nl = &lock->inv_nl;
/* skip if grace hasn't elapsed, record earliest */
deadline = lock->grace_deadline;
if (ktime_before(now, deadline)) {
delay = min(delay,
nsecs_to_jiffies(ktime_to_ns(
ktime_sub(deadline, now))));
scoutfs_inc_counter(linfo->sb, lock_grace_wait);
continue;
}
/* wait for reordered grant to finish */
if (lock->mode != nl->old_mode)
continue;
@@ -788,6 +844,15 @@ static void lock_invalidate_worker(struct work_struct *work)
if (!lock_counts_match(nl->new_mode, lock->users))
continue;
/* skip if grace hasn't elapsed, record earliest */
deadline = lock->grace_deadline;
if (!linfo->shutdown && ktime_before(now, deadline)) {
delay = min(delay,
nsecs_to_jiffies(ktime_to_ns(
ktime_sub(deadline, now))));
scoutfs_inc_counter(linfo->sb, lock_grace_wait);
continue;
}
/* set the new mode, no incompatible users during inval */
lock->mode = nl->new_mode;
@@ -805,8 +870,14 @@ static void lock_invalidate_worker(struct work_struct *work)
nl = &lock->inv_nl;
net_id = lock->inv_net_id;
ret = lock_invalidate(sb, lock, nl->old_mode, nl->new_mode);
BUG_ON(ret);
/* only lock protocol, inv can't call subsystems after shutdown */
if (!linfo->shutdown) {
ret = lock_invalidate(sb, lock, nl->old_mode, nl->new_mode);
BUG_ON(ret);
}
/* allow another request after we respond but before we finish */
lock->inv_net_id = 0;
/* respond with the key and modes from the request */
ret = scoutfs_client_lock_response(sb, net_id, nl);
@@ -819,11 +890,13 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &ready, inv_head) {
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
trace_scoutfs_lock_invalidated(sb, lock);
wake_up(&lock->waitq);
if (lock->inv_net_id == 0) {
/* finish if another request didn't arrive */
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
wake_up(&lock->waitq);
}
put_lock(linfo, lock);
}
@@ -838,34 +911,47 @@ out:
}
/*
* Record an incoming invalidate request from the server and add its lock
* to the list for processing.
* Record an incoming invalidate request from the server and add its
* lock to the list for processing. This request can be from a new
* server and racing with invalidation that frees from an old server.
* It's fine to not find the requested lock and send an immediate
* response.
*
* This is trusting the server and will crash if it's sent bad requests :/
* The invalidation process drops the linfo lock to send responses. The
* moment it does so we can receive another invalidation request (the
* server can ask us to go from write->read then read->null). We allow
* for one chain like this but it's a bug if we receive more concurrent
* invalidation requests than that. The server should be only sending
* one at a time.
*/
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
int ret = 0;
scoutfs_inc_counter(sb, lock_invalidate_request);
spin_lock(&linfo->lock);
lock = get_lock(sb, &nl->key);
BUG_ON(!lock);
if (lock) {
BUG_ON(lock->invalidate_pending);
lock->invalidate_pending = 1;
lock->inv_nl = *nl;
BUG_ON(lock->inv_net_id != 0);
lock->inv_net_id = net_id;
list_add_tail(&lock->inv_head, &linfo->inv_list);
lock->inv_nl = *nl;
if (list_empty(&lock->inv_head)) {
list_add_tail(&lock->inv_head, &linfo->inv_list);
lock->invalidate_pending = 1;
}
trace_scoutfs_lock_invalidate_request(sb, lock);
queue_inv_work(linfo);
}
spin_unlock(&linfo->lock);
return 0;
if (!lock)
ret = scoutfs_client_lock_response(sb, net_id, nl);
return ret;
}
/*
@@ -996,7 +1082,7 @@ static int lock_key_range(struct super_block *sb, enum scoutfs_lock_mode mode, i
lock_inc_count(lock->waiters, mode);
for (;;) {
if (linfo->shutdown) {
if (WARN_ON_ONCE(linfo->shutdown)) {
ret = -ESHUTDOWN;
break;
}
@@ -1478,7 +1564,7 @@ restart:
BUG_ON(lock->mode == SCOUTFS_LOCK_NULL);
BUG_ON(!list_empty(&lock->shrink_head));
if (linfo->shutdown || nr-- == 0)
if (nr-- == 0)
break;
__lock_del_lru(linfo, lock);
@@ -1505,7 +1591,7 @@ out:
return ret;
}
void scoutfs_free_unused_locks(struct super_block *sb, unsigned long nr)
void scoutfs_free_unused_locks(struct super_block *sb)
{
struct lock_info *linfo = SCOUTFS_SB(sb)->lock_info;
struct shrink_control sc = {
@@ -1533,15 +1619,40 @@ static void lock_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
}
/*
* The caller is going to be calling _destroy soon and, critically, is
* about to shutdown networking before calling us so that we don't get
* any callbacks while we're destroying. We have to ensure that we
* won't call networking after this returns.
* shrink_dcache_for_umount() tears down dentries with no locking. We
* need to make sure that our invalidation won't touch dentries before
* we return and the caller calls the generic vfs unmount path.
*/
void scoutfs_lock_unmount_begin(struct super_block *sb)
{
DECLARE_LOCK_INFO(sb, linfo);
if (linfo) {
linfo->unmounting = true;
flush_delayed_work(&linfo->inv_dwork);
}
}
/*
* The caller is going to be shutting down transactions and the client.
* We need to make sure that locking won't call either after we return.
*
* Internal fs threads can be using locking, and locking can have async
* work pending. We use ->shutdown to force callers to return
* -ESHUTDOWN and to prevent the future queueing of work that could call
* networking. Locks whose work is stopped will be torn down by _destroy.
* At this point all fs callers and internal services that use locks
* should have stopped. We won't have any callers initiating lock
* transitions and sending requests. We set the shutdown flag to catch
* anyone who breaks this rule.
*
* We unregister the shrinker so that we won't try and send null
* requests in response to memory pressure. The locks will all be
* unceremoniously dropped once we get a farewell response from the
* server which indicates that they destroyed our locking state.
*
* We will still respond to invalidation requests that have to be
* processed to let unmount in other mounts acquire locks and make
* progress. However, we don't fully process the invalidation because
* we're shutting down. We only update the lock state and send the
* response. We shouldn't have any users of locking that require
* invalidation correctness at this point.
*/
void scoutfs_lock_shutdown(struct super_block *sb)
{
@@ -1554,19 +1665,18 @@ void scoutfs_lock_shutdown(struct super_block *sb)
trace_scoutfs_lock_shutdown(sb, linfo);
spin_lock(&linfo->lock);
/* stop the shrinker from queueing work */
unregister_shrinker(&linfo->shrinker);
flush_work(&linfo->shrink_work);
/* cause current and future lock calls to return errors */
spin_lock(&linfo->lock);
linfo->shutdown = true;
for (node = rb_first(&linfo->lock_tree); node; node = rb_next(node)) {
lock = rb_entry(node, struct scoutfs_lock, node);
wake_up(&lock->waitq);
}
spin_unlock(&linfo->lock);
flush_work(&linfo->grant_work);
flush_delayed_work(&linfo->inv_dwork);
flush_work(&linfo->shrink_work);
}
/*
@@ -1594,8 +1704,6 @@ void scoutfs_lock_destroy(struct super_block *sb)
trace_scoutfs_lock_destroy(sb, linfo);
/* stop the shrinker from queueing work */
unregister_shrinker(&linfo->shrinker);
/* make sure that no one's actively using locks */
spin_lock(&linfo->lock);
@@ -1641,8 +1749,10 @@ void scoutfs_lock_destroy(struct super_block *sb)
__lock_del_lru(linfo, lock);
if (!list_empty(&lock->grant_head))
list_del_init(&lock->grant_head);
if (!list_empty(&lock->inv_head))
if (!list_empty(&lock->inv_head)) {
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
}
if (!list_empty(&lock->shrink_head))
list_del_init(&lock->shrink_head);
lock_remove(linfo, lock);
@@ -1679,6 +1789,8 @@ int scoutfs_lock_setup(struct super_block *sb)
INIT_WORK(&linfo->shrink_work, lock_shrink_worker);
INIT_LIST_HEAD(&linfo->shrink_list);
atomic64_set(&linfo->next_refresh_gen, 0);
INIT_WORK(&linfo->inv_iput_work, lock_inv_iput_worker);
init_llist_head(&linfo->inv_iput_llist);
scoutfs_tseq_tree_init(&linfo->tseq_tree, lock_tseq_show);
sbi->lock_info = linfo;


@@ -10,6 +10,8 @@
#define SCOUTFS_LOCK_NR_MODES SCOUTFS_LOCK_INVALID
struct scoutfs_omap_lock;
/*
* A few fields (start, end, refresh_gen, write_version, granted_mode)
* are referenced by code outside lock.c.
@@ -23,7 +25,6 @@ struct scoutfs_lock {
u64 refresh_gen;
u64 write_version;
u64 dirty_trans_seq;
struct scoutfs_net_roots roots;
struct list_head lru_head;
wait_queue_head_t waitq;
ktime_t grace_deadline;
@@ -31,7 +32,7 @@ struct scoutfs_lock {
invalidate_pending:1;
struct list_head grant_head;
struct scoutfs_net_lock_grant_response grant_resp;
struct scoutfs_net_lock grant_nl;
struct list_head inv_head;
struct scoutfs_net_lock inv_nl;
u64 inv_net_id;
@@ -48,6 +49,10 @@ struct scoutfs_lock {
/* the forest tracks which log tree last saw bloom bit updates */
atomic64_t forest_bloom_nr;
/* open ino mapping has a valid map for a held write lock */
spinlock_t omap_spinlock;
struct scoutfs_omap_lock_data *omap_data;
};
struct scoutfs_lock_coverage {
@@ -57,7 +62,7 @@ struct scoutfs_lock_coverage {
};
int scoutfs_lock_grant_response(struct super_block *sb,
struct scoutfs_net_lock_grant_response *gr);
struct scoutfs_net_lock *nl);
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl);
int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
@@ -96,9 +101,10 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
enum scoutfs_lock_mode mode);
void scoutfs_free_unused_locks(struct super_block *sb, unsigned long nr);
void scoutfs_free_unused_locks(struct super_block *sb);
int scoutfs_lock_setup(struct super_block *sb);
void scoutfs_lock_unmount_begin(struct super_block *sb);
void scoutfs_lock_shutdown(struct super_block *sb);
void scoutfs_lock_destroy(struct super_block *sb);


@@ -20,10 +20,10 @@
#include "tseq.h"
#include "spbm.h"
#include "block.h"
#include "btree.h"
#include "msg.h"
#include "scoutfs_trace.h"
#include "lock_server.h"
#include "recov.h"
/*
* The scoutfs server implements a simple lock service. Client mounts
@@ -56,14 +56,11 @@
* Message requests and responses are reliably delivered in order across
* reconnection.
*
* The server maintains a persistent record of connected clients. A new
* server instance discovers these and waits for previously connected
* clients to reconnect and recover their state before proceeding. If
* clients don't reconnect they are forcefully prevented from unsafely
* accessing the shared persistent storage. (fenced, according to the
* rules of the platform.. could range from being powered off to having
* their switch port disabled to having their local block device set
* read-only.)
* As a new server comes up it recovers lock state from existing clients
* which were connected to a previous lock server. Recover requests are
* sent to clients as they connect and they respond with all their
* locks.  Once all clients and locks are accounted for, normal
* processing can resume.
*
* The lock server doesn't respond to memory pressure. The only way
* locks are freed is if they are invalidated to null on behalf of a
@@ -77,12 +74,8 @@ struct lock_server_info {
struct super_block *sb;
spinlock_t lock;
struct mutex mutex;
struct rb_root locks_root;
struct scoutfs_spbm recovery_pending;
struct delayed_work recovery_dwork;
struct scoutfs_tseq_tree tseq_tree;
struct dentry *tseq_dentry;
@@ -430,7 +423,7 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
goto out;
}
/* XXX should always have a server lock here? recovery? */
/* XXX should always have a server lock here? */
snode = get_server_lock(inf, &nl->key, NULL, false);
if (!snode) {
ret = -EINVAL;
@@ -473,18 +466,14 @@ out:
* so we unlock the snode mutex.
*
* All progress must wait for all clients to finish with recovery
* because we don't know which locks they'll hold. The unlocked
* recovery_pending test here is OK. It's filled by setup before
* anything runs. It's emptied by recovery completion. We can get a
* false nonempty result if we race with recovery completion, but that's
* OK because recovery completion processes all the locks that have
* requests after emptying, including the unlikely loser of that race.
* because we don't know which locks they'll hold. Once recover
* finishes the server calls us to kick all the locks that were waiting
* during recovery.
*/
static int process_waiting_requests(struct super_block *sb,
struct server_lock_node *snode)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_net_lock_grant_response gres;
struct scoutfs_net_lock nl;
struct client_lock_entry *req;
struct client_lock_entry *req_tmp;
@@ -497,7 +486,7 @@ static int process_waiting_requests(struct super_block *sb,
/* processing waits for all invalidation responses or recovery */
if (!list_empty(&snode->invalidated) ||
!scoutfs_spbm_empty(&inf->recovery_pending)) {
scoutfs_recov_next_pending(sb, SCOUTFS_RECOV_LOCKS) != 0) {
ret = 0;
goto out;
}
@@ -547,11 +536,8 @@ static int process_waiting_requests(struct super_block *sb,
nl.write_version = cpu_to_le64(wv);
}
gres.nl = nl;
scoutfs_server_get_roots(sb, &gres.roots);
ret = scoutfs_server_lock_response(sb, req->rid,
req->net_id, &gres);
req->net_id, &nl);
if (ret)
goto out;
@@ -573,85 +559,39 @@ out:
return ret;
}
static void init_lock_clients_key(struct scoutfs_key *key, u64 rid)
{
*key = (struct scoutfs_key) {
.sk_zone = SCOUTFS_LOCK_CLIENTS_ZONE,
.sklc_rid = cpu_to_le64(rid),
};
}
/*
* The server received a greeting from a client for the first time. If
* the client had already talked to the server then we must find an
* existing record for it and should begin recovery. If it doesn't have
* a record then its timed out and we can't allow it to reconnect. If
* its connecting for the first time then we insert a new record. If
* the client is in lock recovery then we send the initial lock request.
*
* This is running in concurrent client greeting processing contexts.
*/
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
bool should_exist)
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
init_lock_clients_key(&key, rid);
mutex_lock(&inf->mutex);
if (should_exist) {
ret = scoutfs_btree_lookup(sb, &super->lock_clients, &key,
&iref);
if (ret == 0)
scoutfs_btree_put_iref(&iref);
} else {
ret = scoutfs_btree_insert(sb, inf->alloc, inf->wri,
&super->lock_clients,
&key, NULL, 0);
}
mutex_unlock(&inf->mutex);
if (should_exist && ret == 0) {
if (scoutfs_recov_is_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) {
scoutfs_key_set_zeros(&key);
ret = scoutfs_server_lock_recover_request(sb, rid, &key);
if (ret)
goto out;
} else {
ret = 0;
}
out:
return ret;
}
/*
* A client sent their last recovery response and can exit recovery. If
* they were the last client in recovery then we can process all the
* server locks that had requests.
* All clients have finished lock recovery, so we can make forward progress
* on all the queued requests that were waiting on recovery.
*/
static int finished_recovery(struct super_block *sb, u64 rid, bool cancel)
int scoutfs_lock_server_finished_recovery(struct super_block *sb)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct server_lock_node *snode;
struct scoutfs_key key;
bool still_pending;
int ret = 0;
spin_lock(&inf->lock);
scoutfs_spbm_clear(&inf->recovery_pending, rid);
still_pending = !scoutfs_spbm_empty(&inf->recovery_pending);
spin_unlock(&inf->lock);
if (still_pending)
return 0;
if (cancel)
cancel_delayed_work_sync(&inf->recovery_dwork);
scoutfs_key_set_zeros(&key);
scoutfs_info(sb, "all lock clients recovered");
while ((snode = get_server_lock(inf, &key, NULL, true))) {
key = snode->key;
@@ -695,16 +635,15 @@ int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
int i;
/* client must be in recovery */
spin_lock(&inf->lock);
if (!scoutfs_spbm_test(&inf->recovery_pending, rid))
if (!scoutfs_recov_is_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) {
ret = -EINVAL;
spin_unlock(&inf->lock);
if (ret)
goto out;
}
/* client has sent us all their locks */
if (nlr->nr == 0) {
ret = finished_recovery(sb, rid, true);
scoutfs_server_recov_finish(sb, rid, SCOUTFS_RECOV_LOCKS);
ret = 0;
goto out;
}
@@ -755,101 +694,15 @@ out:
return ret;
}
static int get_rid_and_put_ref(struct scoutfs_btree_item_ref *iref, u64 *rid)
{
int ret;
if (iref->val_len == 0) {
*rid = le64_to_cpu(iref->key->sklc_rid);
ret = 0;
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(iref);
return ret;
}
/*
* This work executes if enough time passes without all of the clients
* finishing with recovery and canceling the work. We walk through the
* client records and find any that still have their recovery pending.
*/
static void scoutfs_lock_server_recovery_timeout(struct work_struct *work)
{
struct lock_server_info *inf = container_of(work,
struct lock_server_info,
recovery_dwork.work);
struct super_block *sb = inf->sb;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
bool timed_out;
u64 rid;
int ret;
ret = scoutfs_server_hold_commit(sb);
if (ret)
goto out;
/* we enter recovery if there are any client records */
for (rid = 0; ; rid++) {
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_next(sb, &super->lock_clients, &key, &iref);
if (ret == -ENOENT) {
ret = 0;
break;
}
if (ret == 0)
ret = get_rid_and_put_ref(&iref, &rid);
if (ret < 0)
break;
spin_lock(&inf->lock);
if (scoutfs_spbm_test(&inf->recovery_pending, rid)) {
scoutfs_spbm_clear(&inf->recovery_pending, rid);
timed_out = true;
} else {
timed_out = false;
}
spin_unlock(&inf->lock);
if (!timed_out)
continue;
scoutfs_err(sb, "client rid %016llx lock recovery timed out",
rid);
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
&super->lock_clients, &key);
if (ret)
break;
}
ret = scoutfs_server_apply_commit(sb, ret);
out:
/* force processing all pending lock requests */
if (ret == 0)
ret = finished_recovery(sb, 0, false);
if (ret < 0) {
scoutfs_err(sb, "lock server saw err %d while timing out clients, shutting down", ret);
scoutfs_server_abort(sb);
}
}
/*
* A client is leaving the lock service. They aren't using locks and
* won't send any more requests. We tear down all the state we had for
* them. This can be called multiple times for a given client as their
* farewell is resent to new servers. It's OK to not find any state.
* If we fail to delete a persistent entry then we have to shut down and
* hope that the next server has more luck.
*/
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct client_lock_entry *clent;
struct client_lock_entry *tmp;
struct server_lock_node *snode;
@@ -858,20 +711,7 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
bool freed;
int ret = 0;
mutex_lock(&inf->mutex);
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
&super->lock_clients, &key);
mutex_unlock(&inf->mutex);
if (ret == -ENOENT) {
ret = 0;
goto out;
}
if (ret < 0)
goto out;
scoutfs_key_set_zeros(&key);
while ((snode = get_server_lock(inf, &key, NULL, true))) {
freed = false;
@@ -956,23 +796,14 @@ static void lock_server_tseq_show(struct seq_file *m,
/*
* Setup the lock server. This is called before networking can deliver
* requests. If we find existing client records then we enter recovery.
* Lock request processing is deferred until recovery is resolved for
* all the existing clients, either they reconnect and replay locks or
* we time them out.
* requests.
*/
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 max_vers)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct lock_server_info *inf;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
unsigned int nr;
u64 rid;
int ret;
inf = kzalloc(sizeof(struct lock_server_info), GFP_KERNEL);
if (!inf)
@@ -980,11 +811,7 @@ int scoutfs_lock_server_setup(struct super_block *sb,
inf->sb = sb;
spin_lock_init(&inf->lock);
mutex_init(&inf->mutex);
inf->locks_root = RB_ROOT;
scoutfs_spbm_init(&inf->recovery_pending);
INIT_DELAYED_WORK(&inf->recovery_dwork,
scoutfs_lock_server_recovery_timeout);
scoutfs_tseq_tree_init(&inf->tseq_tree, lock_server_tseq_show);
inf->alloc = alloc;
inf->wri = wri;
@@ -999,36 +826,7 @@ int scoutfs_lock_server_setup(struct super_block *sb,
sbi->lock_server_info = inf;
/* we enter recovery if there are any client records */
nr = 0;
for (rid = 0; ; rid++) {
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_next(sb, &super->lock_clients, &key, &iref);
if (ret == -ENOENT)
break;
if (ret == 0)
ret = get_rid_and_put_ref(&iref, &rid);
if (ret < 0)
goto out;
ret = scoutfs_spbm_set(&inf->recovery_pending, rid);
if (ret)
goto out;
nr++;
if (rid == U64_MAX)
break;
}
ret = 0;
if (nr) {
schedule_delayed_work(&inf->recovery_dwork,
msecs_to_jiffies(LOCK_SERVER_RECOVERY_MS));
scoutfs_info(sb, "waiting for %u lock clients to recover", nr);
}
out:
return ret;
return 0;
}
/*
@@ -1046,8 +844,6 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
LIST_HEAD(list);
if (inf) {
cancel_delayed_work_sync(&inf->recovery_dwork);
debugfs_remove(inf->tseq_dentry);
rbtree_postorder_for_each_entry_safe(snode, stmp,
@@ -1066,8 +862,6 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
kfree(snode);
}
scoutfs_spbm_destroy(&inf->recovery_pending);
kfree(inf);
sbi->lock_server_info = NULL;
}


@@ -3,10 +3,10 @@
int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock_recover *nlr);
int scoutfs_lock_server_finished_recovery(struct super_block *sb);
int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
u64 net_id, struct scoutfs_net_lock *nl);
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
bool should_exist);
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid);
int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid);


@@ -944,7 +944,6 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
struct scoutfs_net_connection *acc_conn;
DECLARE_WAIT_QUEUE_HEAD(waitq);
struct socket *acc_sock;
LIST_HEAD(conn_list);
int ret;
trace_scoutfs_net_listen_work_enter(sb, 0, 0);
@@ -1546,9 +1545,8 @@ void scoutfs_net_client_greeting(struct super_block *sb,
* response and they can disconnect cleanly.
*
* At this point our connection is idle except for send submissions and
* shutdown being queued. Once we shut down a We completely own a We
* have exclusive access to a previous conn once its shutdown and we set
* _freeing.
* shutdown being queued. We have exclusive access to the previous conn
* once it's shutdown and we set _freeing.
*/
void scoutfs_net_server_greeting(struct super_block *sb,
struct scoutfs_net_connection *conn,


@@ -90,19 +90,13 @@ enum conn_flags {
#define SIN_ARG(sin) sin, be16_to_cpu((sin)->sin_port)
static inline void scoutfs_addr_to_sin(struct sockaddr_in *sin,
struct scoutfs_inet_addr *addr)
union scoutfs_inet_addr *addr)
{
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->addr));
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->port));
}
BUG_ON(addr->v4.family != cpu_to_le16(SCOUTFS_AF_IPV4));
static inline void scoutfs_addr_from_sin(struct scoutfs_inet_addr *addr,
struct sockaddr_in *sin)
{
addr->addr = be32_to_le32(sin->sin_addr.s_addr);
addr->port = be16_to_le16(sin->sin_port);
memset(addr->__pad, 0, sizeof(addr->__pad));
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->v4.addr));
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->v4.port));
}
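The parameter change from struct to union scoutfs_inet_addr implies a family tag ahead of the per-family members, which is what the BUG_ON above checks before the v4 fields are trusted. For orientation, a hypothetical shape for such a union (the real layout lives in format.h and is not part of this diff):

/* Illustrative only -- not the definition from format.h. */
union example_inet_addr {
	struct {
		__le16 family;	/* e.g. SCOUTFS_AF_IPV4 */
		__le16 port;
		__le32 addr;	/* IPv4 address, little-endian on disk */
		__u8 __pad[8];
	} v4;
	__u8 raw[16];
};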
struct scoutfs_net_connection *

kmod/src/omap.c (new file, 1046 lines)

File diff suppressed because it is too large.

kmod/src/omap.h (new file, 24 lines)

@@ -0,0 +1,24 @@
#ifndef _SCOUTFS_OMAP_H_
#define _SCOUTFS_OMAP_H_
int scoutfs_omap_inc(struct super_block *sb, u64 ino);
void scoutfs_omap_dec(struct super_block *sb, u64 ino);
int scoutfs_omap_should_delete(struct super_block *sb, struct inode *inode,
struct scoutfs_lock **lock_ret);
void scoutfs_omap_free_lock_data(struct scoutfs_omap_lock_data *ldata);
int scoutfs_omap_client_handle_request(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map_args *args);
int scoutfs_omap_add_rid(struct super_block *sb, u64 rid);
int scoutfs_omap_remove_rid(struct super_block *sb, u64 rid);
int scoutfs_omap_finished_recovery(struct super_block *sb);
int scoutfs_omap_server_handle_request(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_open_ino_map_args *args);
int scoutfs_omap_server_handle_response(struct super_block *sb, u64 rid,
struct scoutfs_open_ino_map *resp_map);
void scoutfs_omap_server_shutdown(struct super_block *sb);
int scoutfs_omap_setup(struct super_block *sb);
void scoutfs_omap_destroy(struct super_block *sb);
#endif
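Taken together, the client half of this interface is just reference counting: a mount bumps a per-inode open count when it starts caring about an inode and drops it at eviction, and scoutfs_omap_client_handle_request() reports the resulting bitmap when the server asks. A minimal sketch of that bracketing, with hypothetical example_* callers (only the scoutfs_omap_* calls come from this header):

/* Sketch: count an inode as open for the life of its in-memory copy. */
static int example_track_inode(struct super_block *sb, u64 ino)
{
	/* sets our count for ino so other mounts' deleters see it open */
	return scoutfs_omap_inc(sb, ino);
}

static void example_untrack_inode(struct super_block *sb, u64 ino)
{
	/* once every mount has dropped to zero, final deletion can proceed */
	scoutfs_omap_dec(sb, ino);
}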


@@ -28,7 +28,7 @@
#include "super.h"
static const match_table_t tokens = {
{Opt_server_addr, "server_addr=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_err, NULL}
};
@@ -43,46 +43,6 @@ u32 scoutfs_option_u32(struct super_block *sb, int token)
return 0;
}
/* The caller's string is null terminated and can be clobbered */
static int parse_ipv4(struct super_block *sb, char *str,
struct sockaddr_in *sin)
{
unsigned long port = 0;
__be32 addr;
char *c;
int ret;
/* null term port, if specified */
c = strchr(str, ':');
if (c)
*c = '\0';
/* parse addr */
addr = in_aton(str);
if (ipv4_is_multicast(addr) || ipv4_is_lbcast(addr) ||
ipv4_is_zeronet(addr) ||
ipv4_is_local_multicast(addr)) {
scoutfs_err(sb, "invalid unicast ipv4 address: %s", str);
return -EINVAL;
}
/* parse port, if specified */
if (c) {
c++;
ret = kstrtoul(c, 0, &port);
if (ret != 0 || port == 0 || port >= U16_MAX) {
scoutfs_err(sb, "invalid port in ipv4 address: %s", c);
return -EINVAL;
}
}
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = addr;
sin->sin_port = cpu_to_be16(port);
return 0;
}
static int parse_bdev_path(struct super_block *sb, substring_t *substr,
char **bdev_path_ret)
{
@@ -132,14 +92,15 @@ out:
int scoutfs_parse_options(struct super_block *sb, char *options,
struct mount_options *parsed)
{
char ipstr[INET_ADDRSTRLEN + 1];
substring_t args[MAX_OPT_ARGS];
int nr;
int token;
char *p;
int ret;
/* Set defaults */
memset(parsed, 0, sizeof(*parsed));
parsed->quorum_slot_nr = -1;
while ((p = strsep(&options, ",")) != NULL) {
if (!*p)
@@ -147,12 +108,23 @@ int scoutfs_parse_options(struct super_block *sb, char *options,
token = match_token(p, tokens, args);
switch (token) {
case Opt_server_addr:
case Opt_quorum_slot_nr:
match_strlcpy(ipstr, args, ARRAY_SIZE(ipstr));
ret = parse_ipv4(sb, ipstr, &parsed->server_addr);
if (ret < 0)
if (parsed->quorum_slot_nr != -1) {
scoutfs_err(sb, "multiple quorum_slot_nr options provided, only provide one.");
return -EINVAL;
}
ret = match_int(args, &nr);
if (ret < 0 || nr < 0 ||
nr >= SCOUTFS_QUORUM_MAX_SLOTS) {
scoutfs_err(sb, "invalid quorum_slot_nr option, must be between 0 and %u",
SCOUTFS_QUORUM_MAX_SLOTS - 1);
if (ret == 0)
ret = -EINVAL;
return ret;
}
parsed->quorum_slot_nr = nr;
break;
case Opt_metadev_path:


@@ -6,13 +6,13 @@
#include "format.h"
enum scoutfs_mount_options {
Opt_server_addr,
Opt_quorum_slot_nr,
Opt_metadev_path,
Opt_err,
};
struct mount_options {
struct sockaddr_in server_addr;
int quorum_slot_nr;
char *metadev_path;
};

File diff suppressed because it is too large.


@@ -1,10 +1,15 @@
#ifndef _SCOUTFS_QUORUM_H_
#define _SCOUTFS_QUORUM_H_
int scoutfs_quorum_election(struct super_block *sb, ktime_t timeout_abs,
u64 prev_term, u64 *elected_term);
void scoutfs_quorum_clear_leader(struct super_block *sb);
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
void scoutfs_quorum_server_shutdown(struct super_block *sb);
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
struct sockaddr_in *sin);
int scoutfs_quorum_setup(struct super_block *sb);
void scoutfs_quorum_shutdown(struct super_block *sb);
void scoutfs_quorum_destroy(struct super_block *sb);
#endif
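Note that scoutfs_quorum_server_sin() now supplies the listening address, so server startup only needs the elected term (see scoutfs_server_start(sb, term) in server.h further down). A plausible leader path, sketched with a hypothetical wrapper around only the calls declared here:

/* Sketch: win an election for a new term, then start the server. */
static int example_become_leader(struct super_block *sb, u64 prev_term)
{
	u64 elected_term;
	int ret;

	/* give the election five seconds from now to resolve */
	ret = scoutfs_quorum_election(sb, ktime_add_ms(ktime_get(), 5000),
				      prev_term, &elected_term);
	if (ret < 0)
		return ret;

	return scoutfs_server_start(sb, elected_term);
}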

kmod/src/recov.c (new file, 280 lines)

@@ -0,0 +1,280 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/rhashtable.h>
#include <linux/rcupdate.h>
#include "super.h"
#include "recov.h"
/*
* There are a few server messages which can't be processed until they
* know that they have state for all possibly active clients. These
* little helpers track which clients have recovered what state and give
* those message handlers a call to check if recovery has completed. We
* track the timeout here, but all we do is call back into the server to
* take steps to evict timed out clients and then let us know that their
* recovery has finished.
*/
struct recov_info {
struct super_block *sb;
spinlock_t lock;
struct list_head pending;
struct timer_list timer;
void (*timeout_fn)(struct super_block *);
};
#define DECLARE_RECOV_INFO(sb, name) \
struct recov_info *name = SCOUTFS_SB(sb)->recov_info
struct recov_pending {
struct list_head head;
u64 rid;
int which;
};
static struct recov_pending *find_pending(struct recov_info *recinf, u64 rid, int which)
{
struct recov_pending *pend;
list_for_each_entry(pend, &recinf->pending, head) {
if ((rid == 0 || pend->rid == rid) && (pend->which & which))
return pend;
}
return NULL;
}
/*
* Record that we'll be waiting for a client to recover something.
* _finished will eventually be called for every _prepare, either
* because recovery naturally finished or because it timed out and the
* server evicted the client.
*/
int scoutfs_recov_prepare(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *alloc;
struct recov_pending *pend;
if (WARN_ON_ONCE(which & SCOUTFS_RECOV_INVALID))
return -EINVAL;
alloc = kmalloc(sizeof(*pend), GFP_NOFS);
if (!alloc)
return -ENOMEM;
spin_lock(&recinf->lock);
pend = find_pending(recinf, rid, SCOUTFS_RECOV_ALL);
if (pend) {
pend->which |= which;
} else {
swap(pend, alloc);
pend->rid = rid;
pend->which = which;
list_add(&pend->head, &recinf->pending);
}
spin_unlock(&recinf->lock);
kfree(alloc);
return 0;
}
/*
* Recovery is only finished once we've begun (which sets the timer) and
* all clients have finished. If we didn't test the timer we could
* claim it finished prematurely as clients are being prepared.
*/
static int recov_finished(struct recov_info *recinf)
{
return !!(recinf->timeout_fn != NULL && list_empty(&recinf->pending));
}
static void timer_callback(struct timer_list *timer)
{
struct recov_info *recinf = from_timer(recinf, timer, timer);
recinf->timeout_fn(recinf->sb);
}
/*
* Begin waiting for recovery once we've prepared all the clients. If
* the timeout period elapses before _finish is called on all prepared
* clients then the timer will call the callback.
*
* Returns > 0 if all the prepared clients finish recovery before begin
* is called.
*/
int scoutfs_recov_begin(struct super_block *sb, void (*timeout_fn)(struct super_block *),
unsigned int timeout_ms)
{
DECLARE_RECOV_INFO(sb, recinf);
int ret;
spin_lock(&recinf->lock);
recinf->timeout_fn = timeout_fn;
recinf->timer.expires = jiffies + msecs_to_jiffies(timeout_ms);
add_timer(&recinf->timer);
ret = recov_finished(recinf);
spin_unlock(&recinf->lock);
if (ret > 0)
del_timer_sync(&recinf->timer);
return ret;
}
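A sketch of how a new server might drive _prepare and _begin together (example_* names are hypothetical, the timeout value is arbitrary, and example_timeout_fn is sketched after scoutfs_recov_next_pending() below). The > 0 return matters because a client can finish between its _prepare and our _begin:

/* Sketch: prepare each known client, then arm the recovery timeout. */
static int example_start_recovery(struct super_block *sb, u64 *rids, int nr)
{
	int ret;
	int i;

	for (i = 0; i < nr; i++) {
		ret = scoutfs_recov_prepare(sb, rids[i],
					    SCOUTFS_RECOV_GREETING |
					    SCOUTFS_RECOV_LOCKS);
		if (ret < 0)
			return ret;
	}

	ret = scoutfs_recov_begin(sb, example_timeout_fn, 30000);
	if (ret > 0) {
		/* everyone finished before begin; no timer is armed */
		example_recovery_complete(sb);
		ret = 0;
	}
	return ret;
}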
/*
* A given client has recovered the given state. If it's finished all
* recovery then we free it, and if all clients have finished recovery
* then we cancel the timeout timer.
*
* Returns > 0 if _begin has been called and all clients have finished.
* The caller will only see > 0 returned once.
*/
int scoutfs_recov_finish(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
int ret = 0;
spin_lock(&recinf->lock);
pend = find_pending(recinf, rid, which);
if (pend) {
pend->which &= ~which;
if (pend->which) {
pend = NULL;
} else {
list_del(&pend->head);
ret = recov_finished(recinf);
}
}
spin_unlock(&recinf->lock);
if (ret > 0)
del_timer_sync(&recinf->timer);
kfree(pend);
return ret;
}
/*
* Returns true if the given client is still trying to recover
* the given state.
*/
bool scoutfs_recov_is_pending(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
bool is_pending;
spin_lock(&recinf->lock);
is_pending = find_pending(recinf, rid, which) != NULL;
spin_unlock(&recinf->lock);
return is_pending;
}
/*
* Returns 0 if there are no rids waiting for the given state to be
* recovered. Returns the rid of a client still waiting if there are
* any, in no specified order.
*
* This is inherently racy. Callers are responsible for reconciling any
* actions they take based on a pending rid with recovery finishing,
* which can happen before we even return.
*/
u64 scoutfs_recov_next_pending(struct super_block *sb, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
u64 rid;
spin_lock(&recinf->lock);
pend = find_pending(recinf, 0, which);
rid = pend ? pend->rid : 0;
spin_unlock(&recinf->lock);
return rid;
}
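That racing caveat is exactly what a timeout handler has to live with: it sweeps whatever is still pending and evicts it, and a client that finishes concurrently is simply no longer returned. A sketch of the example_timeout_fn referenced above (eviction itself is hypothetical glue, and a real server would likely defer it to a workqueue since the timer callback runs in atomic context):

/* Sketch: evict every client that has not finished recovery. */
static void example_timeout_fn(struct super_block *sb)
{
	u64 rid;

	while ((rid = scoutfs_recov_next_pending(sb, SCOUTFS_RECOV_ALL))) {
		example_evict_client(sb, rid);
		/* eviction resolves all of the client's pending state */
		scoutfs_recov_finish(sb, rid, SCOUTFS_RECOV_ALL);
	}
}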
/*
* The server is shutting down and doesn't need to worry about recovery
* anymore. It'll be built up again by the next server, if needed.
*/
void scoutfs_recov_shutdown(struct super_block *sb)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
struct recov_pending *tmp;
LIST_HEAD(list);
del_timer_sync(&recinf->timer);
spin_lock(&recinf->lock);
list_splice_init(&recinf->pending, &list);
recinf->timeout_fn = NULL;
spin_unlock(&recinf->lock);
/* free everything we just spliced off of pending */
list_for_each_entry_safe(pend, tmp, &list, head) {
list_del(&pend->head);
kfree(pend);
}
}
int scoutfs_recov_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct recov_info *recinf;
int ret;
recinf = kzalloc(sizeof(struct recov_info), GFP_KERNEL);
if (!recinf) {
ret = -ENOMEM;
goto out;
}
recinf->sb = sb;
spin_lock_init(&recinf->lock);
INIT_LIST_HEAD(&recinf->pending);
timer_setup(&recinf->timer, timer_callback, 0);
sbi->recov_info = recinf;
ret = 0;
out:
return ret;
}
void scoutfs_recov_destroy(struct super_block *sb)
{
DECLARE_RECOV_INFO(sb, recinf);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (recinf) {
scoutfs_recov_shutdown(sb);
kfree(recinf);
sbi->recov_info = NULL;
}
}

kmod/src/recov.h (new file, 23 lines)

@@ -0,0 +1,23 @@
#ifndef _SCOUTFS_RECOV_H_
#define _SCOUTFS_RECOV_H_
enum {
SCOUTFS_RECOV_GREETING = ( 1 << 0),
SCOUTFS_RECOV_LOCKS = ( 1 << 1),
SCOUTFS_RECOV_INVALID = (~0 << 2),
SCOUTFS_RECOV_ALL = (~SCOUTFS_RECOV_INVALID),
};
int scoutfs_recov_prepare(struct super_block *sb, u64 rid, int which);
int scoutfs_recov_begin(struct super_block *sb, void (*timeout_fn)(struct super_block *),
unsigned int timeout_ms);
int scoutfs_recov_finish(struct super_block *sb, u64 rid, int which);
bool scoutfs_recov_is_pending(struct super_block *sb, u64 rid, int which);
u64 scoutfs_recov_next_pending(struct super_block *sb, int which);
void scoutfs_recov_shutdown(struct super_block *sb);
int scoutfs_recov_setup(struct super_block *sb);
void scoutfs_recov_destroy(struct super_block *sb);
#endif


@@ -31,7 +31,6 @@
#include "lock.h"
#include "super.h"
#include "ioctl.h"
#include "count.h"
#include "export.h"
#include "dir.h"
#include "server.h"
@@ -169,6 +168,40 @@ TRACE_EVENT(scoutfs_data_fallocate,
__entry->len, __entry->ret)
);
TRACE_EVENT(scoutfs_data_move_blocks,
TP_PROTO(struct super_block *sb, u64 from_ino, u64 from_start, u64 len,
u64 map, u8 flags, u64 to_ino, u64 to_start),
TP_ARGS(sb, from_ino, from_start, len, map, flags, to_ino, to_start),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, from_ino)
__field(__u64, from_start)
__field(__u64, len)
__field(__u64, map)
__field(__u8, flags)
__field(__u64, to_ino)
__field(__u64, to_start)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->from_ino = from_ino;
__entry->from_start = from_start;
__entry->len = len;
__entry->map = map;
__entry->flags = flags;
__entry->to_ino = to_ino;
__entry->to_start = to_start;
),
TP_printk(SCSBF" from_ino %llu from_start %llu len %llu map %llu flags 0x%x to_ino %llu to_start %llu\n",
SCSB_TRACE_ARGS, __entry->from_ino, __entry->from_start,
__entry->len, __entry->map, __entry->flags, __entry->to_ino,
__entry->to_start)
);
TRACE_EVENT(scoutfs_data_fiemap,
TP_PROTO(struct super_block *sb, __u64 start, __u64 len, int ret),
@@ -390,135 +423,34 @@ TRACE_EVENT(scoutfs_trans_write_func,
TP_printk(SCSBF" dirty %lu", SCSB_TRACE_ARGS, __entry->dirty)
);
TRACE_EVENT(scoutfs_release_trans,
TP_PROTO(struct super_block *sb, void *rsv, unsigned int rsv_holders,
struct scoutfs_item_count *res,
struct scoutfs_item_count *act, unsigned int tri_holders,
unsigned int tri_writing, unsigned int tri_items,
unsigned int tri_vals),
DECLARE_EVENT_CLASS(scoutfs_trans_hold_release_class,
TP_PROTO(struct super_block *sb, void *journal_info, int holders),
TP_ARGS(sb, rsv, rsv_holders, res, act, tri_holders, tri_writing,
tri_items, tri_vals),
TP_ARGS(sb, journal_info, holders),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, rsv)
__field(unsigned int, rsv_holders)
__field(int, res_items)
__field(int, res_vals)
__field(int, act_items)
__field(int, act_vals)
__field(unsigned int, tri_holders)
__field(unsigned int, tri_writing)
__field(unsigned int, tri_items)
__field(unsigned int, tri_vals)
__field(unsigned long, journal_info)
__field(int, holders)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->rsv = rsv;
__entry->rsv_holders = rsv_holders;
__entry->res_items = res->items;
__entry->res_vals = res->vals;
__entry->act_items = act->items;
__entry->act_vals = act->vals;
__entry->tri_holders = tri_holders;
__entry->tri_writing = tri_writing;
__entry->tri_items = tri_items;
__entry->tri_vals = tri_vals;
__entry->journal_info = (unsigned long)journal_info;
__entry->holders = holders;
),
TP_printk(SCSBF" rsv %p holders %u reserved %u.%u actual "
"%d.%d, trans holders %u writing %u reserved "
"%u.%u", SCSB_TRACE_ARGS, __entry->rsv, __entry->rsv_holders,
__entry->res_items, __entry->res_vals, __entry->act_items,
__entry->act_vals, __entry->tri_holders, __entry->tri_writing,
__entry->tri_items, __entry->tri_vals)
TP_printk(SCSBF" journal_info 0x%0lx holders %d",
SCSB_TRACE_ARGS, __entry->journal_info, __entry->holders)
);
TRACE_EVENT(scoutfs_trans_acquired_hold,
TP_PROTO(struct super_block *sb, const struct scoutfs_item_count *cnt,
void *rsv, unsigned int rsv_holders,
struct scoutfs_item_count *res,
struct scoutfs_item_count *act, unsigned int tri_holders,
unsigned int tri_writing, unsigned int tri_items,
unsigned int tri_vals),
TP_ARGS(sb, cnt, rsv, rsv_holders, res, act, tri_holders, tri_writing,
tri_items, tri_vals),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(int, cnt_items)
__field(int, cnt_vals)
__field(void *, rsv)
__field(unsigned int, rsv_holders)
__field(int, res_items)
__field(int, res_vals)
__field(int, act_items)
__field(int, act_vals)
__field(unsigned int, tri_holders)
__field(unsigned int, tri_writing)
__field(unsigned int, tri_items)
__field(unsigned int, tri_vals)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->cnt_items = cnt->items;
__entry->cnt_vals = cnt->vals;
__entry->rsv = rsv;
__entry->rsv_holders = rsv_holders;
__entry->res_items = res->items;
__entry->res_vals = res->vals;
__entry->act_items = act->items;
__entry->act_vals = act->vals;
__entry->tri_holders = tri_holders;
__entry->tri_writing = tri_writing;
__entry->tri_items = tri_items;
__entry->tri_vals = tri_vals;
),
TP_printk(SCSBF" cnt %u.%u, rsv %p holders %u reserved %u.%u "
"actual %d.%d, trans holders %u writing %u reserved "
"%u.%u", SCSB_TRACE_ARGS, __entry->cnt_items,
__entry->cnt_vals, __entry->rsv, __entry->rsv_holders,
__entry->res_items, __entry->res_vals, __entry->act_items,
__entry->act_vals, __entry->tri_holders, __entry->tri_writing,
__entry->tri_items, __entry->tri_vals)
DEFINE_EVENT(scoutfs_trans_hold_release_class, scoutfs_trans_acquired_hold,
TP_PROTO(struct super_block *sb, void *journal_info, int holders),
TP_ARGS(sb, journal_info, holders)
);
TRACE_EVENT(scoutfs_trans_track_item,
TP_PROTO(struct super_block *sb, int delta_items, int delta_vals,
int act_items, int act_vals, int res_items, int res_vals),
TP_ARGS(sb, delta_items, delta_vals, act_items, act_vals, res_items,
res_vals),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(int, delta_items)
__field(int, delta_vals)
__field(int, act_items)
__field(int, act_vals)
__field(int, res_items)
__field(int, res_vals)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->delta_items = delta_items;
__entry->delta_vals = delta_vals;
__entry->act_items = act_items;
__entry->act_vals = act_vals;
__entry->res_items = res_items;
__entry->res_vals = res_vals;
),
TP_printk(SCSBF" delta_items %d delta_vals %d act_items %d act_vals %d res_items %d res_vals %d",
SCSB_TRACE_ARGS, __entry->delta_items, __entry->delta_vals,
__entry->act_items, __entry->act_vals, __entry->res_items,
__entry->res_vals)
DEFINE_EVENT(scoutfs_trans_hold_release_class, scoutfs_release_trans,
TP_PROTO(struct super_block *sb, void *journal_info, int holders),
TP_ARGS(sb, journal_info, holders)
);
TRACE_EVENT(scoutfs_ioc_release,
@@ -530,22 +462,22 @@ TRACE_EVENT(scoutfs_ioc_release,
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(__u64, block)
__field(__u64, count)
__field(__u64, offset)
__field(__u64, length)
__field(__u64, vers)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->block = args->block;
__entry->count = args->count;
__entry->offset = args->offset;
__entry->length = args->length;
__entry->vers = args->data_version;
),
TP_printk(SCSBF" ino %llu block %llu count %llu vers %llu",
SCSB_TRACE_ARGS, __entry->ino, __entry->block,
__entry->count, __entry->vers)
TP_printk(SCSBF" ino %llu offset %llu length %llu vers %llu",
SCSB_TRACE_ARGS, __entry->ino, __entry->offset,
__entry->length, __entry->vers)
);
DEFINE_EVENT(scoutfs_ino_ret_class, scoutfs_ioc_release_ret,
@@ -564,7 +496,7 @@ TRACE_EVENT(scoutfs_ioc_stage,
__field(__u64, ino)
__field(__u64, vers)
__field(__u64, offset)
__field(__s32, count)
__field(__s32, length)
),
TP_fast_assign(
@@ -572,12 +504,12 @@ TRACE_EVENT(scoutfs_ioc_stage,
__entry->ino = ino;
__entry->vers = args->data_version;
__entry->offset = args->offset;
__entry->count = args->count;
__entry->length = args->length;
),
TP_printk(SCSBF" ino %llu vers %llu offset %llu count %d",
TP_printk(SCSBF" ino %llu vers %llu offset %llu length %d",
SCSB_TRACE_ARGS, __entry->ino, __entry->vers,
__entry->offset, __entry->count)
__entry->offset, __entry->length)
);
TRACE_EVENT(scoutfs_ioc_data_wait_err,
@@ -758,15 +690,16 @@ TRACE_EVENT(scoutfs_evict_inode,
TRACE_EVENT(scoutfs_drop_inode,
TP_PROTO(struct super_block *sb, __u64 ino, unsigned int nlink,
unsigned int unhashed),
unsigned int unhashed, bool drop_invalidated),
TP_ARGS(sb, ino, nlink, unhashed),
TP_ARGS(sb, ino, nlink, unhashed, drop_invalidated),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(unsigned int, nlink)
__field(unsigned int, unhashed)
__field(unsigned int, drop_invalidated)
),
TP_fast_assign(
@@ -774,10 +707,12 @@ TRACE_EVENT(scoutfs_drop_inode,
__entry->ino = ino;
__entry->nlink = nlink;
__entry->unhashed = unhashed;
__entry->drop_invalidated = !!drop_invalidated;
),
TP_printk(SCSBF" ino %llu nlink %u unhashed %d", SCSB_TRACE_ARGS,
__entry->ino, __entry->nlink, __entry->unhashed)
TP_printk(SCSBF" ino %llu nlink %u unhashed %d drop_invalidated %u", SCSB_TRACE_ARGS,
__entry->ino, __entry->nlink, __entry->unhashed,
__entry->drop_invalidated)
);
TRACE_EVENT(scoutfs_inode_walk_writeback,
@@ -1652,7 +1587,7 @@ TRACE_EVENT(scoutfs_get_name,
);
TRACE_EVENT(scoutfs_btree_read_error,
TP_PROTO(struct super_block *sb, struct scoutfs_btree_ref *ref),
TP_PROTO(struct super_block *sb, struct scoutfs_block_ref *ref),
TP_ARGS(sb, ref),
@@ -1672,37 +1607,10 @@ TRACE_EVENT(scoutfs_btree_read_error,
SCSB_TRACE_ARGS, __entry->blkno, __entry->seq)
);
TRACE_EVENT(scoutfs_btree_dirty_block,
TP_PROTO(struct super_block *sb, u64 blkno, u64 seq,
u64 bt_blkno, u64 bt_seq),
TP_ARGS(sb, blkno, seq, bt_blkno, bt_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, blkno)
__field(__u64, seq)
__field(__u64, bt_blkno)
__field(__u64, bt_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->blkno = blkno;
__entry->seq = seq;
__entry->bt_blkno = bt_blkno;
__entry->bt_seq = bt_seq;
),
TP_printk(SCSBF" blkno %llu seq %llu bt_blkno %llu bt_seq %llu",
SCSB_TRACE_ARGS, __entry->blkno, __entry->seq,
__entry->bt_blkno, __entry->bt_seq)
);
TRACE_EVENT(scoutfs_btree_walk,
TP_PROTO(struct super_block *sb, struct scoutfs_btree_root *root,
struct scoutfs_key *key, int flags, int level,
struct scoutfs_btree_ref *ref),
struct scoutfs_block_ref *ref),
TP_ARGS(sb, root, key, flags, level, ref),
@@ -1838,118 +1746,69 @@ TRACE_EVENT(scoutfs_lock_message,
__entry->old_mode, __entry->new_mode)
);
DECLARE_EVENT_CLASS(scoutfs_quorum_message_class,
TP_PROTO(struct super_block *sb, u64 term, u8 type, int nr),
TRACE_EVENT(scoutfs_quorum_election,
TP_PROTO(struct super_block *sb, u64 prev_term),
TP_ARGS(sb, prev_term),
TP_ARGS(sb, term, type, nr),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, prev_term)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->prev_term = prev_term;
),
TP_printk(SCSBF" prev_term %llu",
SCSB_TRACE_ARGS, __entry->prev_term)
);
TRACE_EVENT(scoutfs_quorum_election_ret,
TP_PROTO(struct super_block *sb, int ret, u64 elected_term),
TP_ARGS(sb, ret, elected_term),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(int, ret)
__field(__u64, elected_term)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ret = ret;
__entry->elected_term = elected_term;
),
TP_printk(SCSBF" ret %d elected_term %llu",
SCSB_TRACE_ARGS, __entry->ret, __entry->elected_term)
);
TRACE_EVENT(scoutfs_quorum_election_vote,
TP_PROTO(struct super_block *sb, int role, u64 term, u64 vote_for_rid,
int votes, int log_cycles, int quorum_count),
TP_ARGS(sb, role, term, vote_for_rid, votes, log_cycles, quorum_count),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(int, role)
__field(__u64, term)
__field(__u64, vote_for_rid)
__field(int, votes)
__field(int, log_cycles)
__field(int, quorum_count)
__field(__u8, type)
__field(int, nr)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->role = role;
__entry->term = term;
__entry->vote_for_rid = vote_for_rid;
__entry->votes = votes;
__entry->log_cycles = log_cycles;
__entry->quorum_count = quorum_count;
__entry->type = type;
__entry->nr = nr;
),
TP_printk(SCSBF" role %d term %llu vote_for_rid %016llx votes %d log_cycles %d quorum_count %d",
SCSB_TRACE_ARGS, __entry->role, __entry->term,
__entry->vote_for_rid, __entry->votes, __entry->log_cycles,
__entry->quorum_count)
TP_printk(SCSBF" term %llu type %u nr %d",
SCSB_TRACE_ARGS, __entry->term, __entry->type, __entry->nr)
);
DEFINE_EVENT(scoutfs_quorum_message_class, scoutfs_quorum_send_message,
TP_PROTO(struct super_block *sb, u64 term, u8 type, int nr),
TP_ARGS(sb, term, type, nr)
);
DEFINE_EVENT(scoutfs_quorum_message_class, scoutfs_quorum_recv_message,
TP_PROTO(struct super_block *sb, u64 term, u8 type, int nr),
TP_ARGS(sb, term, type, nr)
);
DECLARE_EVENT_CLASS(scoutfs_quorum_block_class,
TP_PROTO(struct super_block *sb, struct scoutfs_quorum_block *blk),
TRACE_EVENT(scoutfs_quorum_loop,
TP_PROTO(struct super_block *sb, int role, u64 term, int vote_for,
unsigned long vote_bits, struct timespec64 timeout),
TP_ARGS(sb, blk),
TP_ARGS(sb, role, term, vote_for, vote_bits, timeout),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, blkno)
__field(__u64, term)
__field(__u64, write_nr)
__field(__u64, voter_rid)
__field(__u64, vote_for_rid)
__field(__u32, crc)
__field(__u8, log_nr)
__field(int, role)
__field(int, vote_for)
__field(unsigned long, vote_bits)
__field(unsigned long, vote_count)
__field(unsigned long long, timeout_sec)
__field(int, timeout_nsec)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->blkno = le64_to_cpu(blk->blkno);
__entry->term = le64_to_cpu(blk->term);
__entry->write_nr = le64_to_cpu(blk->write_nr);
__entry->voter_rid = le64_to_cpu(blk->voter_rid);
__entry->vote_for_rid = le64_to_cpu(blk->vote_for_rid);
__entry->crc = le32_to_cpu(blk->crc);
__entry->log_nr = blk->log_nr;
__entry->term = term;
__entry->role = role;
__entry->vote_for = vote_for;
__entry->vote_bits = vote_bits;
__entry->vote_count = hweight_long(vote_bits);
__entry->timeout_sec = timeout.tv_sec;
__entry->timeout_nsec = timeout.tv_nsec;
),
TP_printk(SCSBF" blkno %llu term %llu write_nr %llu voter_rid %016llx vote_for_rid %016llx crc 0x%08x log_nr %u",
SCSB_TRACE_ARGS, __entry->blkno, __entry->term,
__entry->write_nr, __entry->voter_rid, __entry->vote_for_rid,
__entry->crc, __entry->log_nr)
);
DEFINE_EVENT(scoutfs_quorum_block_class, scoutfs_quorum_read_block,
TP_PROTO(struct super_block *sb, struct scoutfs_quorum_block *blk),
TP_ARGS(sb, blk)
);
DEFINE_EVENT(scoutfs_quorum_block_class, scoutfs_quorum_write_block,
TP_PROTO(struct super_block *sb, struct scoutfs_quorum_block *blk),
TP_ARGS(sb, blk)
TP_printk(SCSBF" term %llu role %d vote_for %d vote_bits 0x%lx vote_count %lu timeout %llu.%u",
SCSB_TRACE_ARGS, __entry->term, __entry->role,
__entry->vote_for, __entry->vote_bits, __entry->vote_count,
__entry->timeout_sec, __entry->timeout_nsec)
);
/*
@@ -1979,31 +1838,27 @@ DEFINE_EVENT(scoutfs_clock_sync_class, scoutfs_recv_clock_sync,
);
TRACE_EVENT(scoutfs_trans_seq_advance,
TP_PROTO(struct super_block *sb, u64 rid, u64 prev_seq,
u64 next_seq),
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
TP_ARGS(sb, rid, prev_seq, next_seq),
TP_ARGS(sb, rid, trans_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, s_rid)
__field(__u64, prev_seq)
__field(__u64, next_seq)
__field(__u64, trans_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->s_rid = rid;
__entry->prev_seq = prev_seq;
__entry->next_seq = next_seq;
__entry->trans_seq = trans_seq;
),
TP_printk(SCSBF" rid %016llx prev_seq %llu next_seq %llu",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->prev_seq,
__entry->next_seq)
TP_printk(SCSBF" rid %016llx trans_seq %llu\n",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->trans_seq)
);
TRACE_EVENT(scoutfs_trans_seq_farewell,
TRACE_EVENT(scoutfs_trans_seq_remove,
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
TP_ARGS(sb, rid, trans_seq),
@@ -2083,8 +1938,8 @@ DEFINE_EVENT(scoutfs_forest_bloom_class, scoutfs_forest_bloom_search,
);
TRACE_EVENT(scoutfs_forest_prepare_commit,
TP_PROTO(struct super_block *sb, struct scoutfs_btree_ref *item_ref,
struct scoutfs_btree_ref *bloom_ref),
TP_PROTO(struct super_block *sb, struct scoutfs_block_ref *item_ref,
struct scoutfs_block_ref *bloom_ref),
TP_ARGS(sb, item_ref, bloom_ref),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
@@ -2150,18 +2005,45 @@ TRACE_EVENT(scoutfs_forest_init_our_log,
__entry->blkno, __entry->seq)
);
TRACE_EVENT(scoutfs_block_dirty_ref,
TP_PROTO(struct super_block *sb, u64 ref_blkno, u64 ref_seq,
u64 block_blkno, u64 block_seq),
TP_ARGS(sb, ref_blkno, ref_seq, block_blkno, block_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ref_blkno)
__field(__u64, ref_seq)
__field(__u64, block_blkno)
__field(__u64, block_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ref_blkno = ref_blkno;
__entry->ref_seq = ref_seq;
__entry->block_blkno = block_blkno;
__entry->block_seq = block_seq;
),
TP_printk(SCSBF" ref_blkno %llu ref_seq %llu block_blkno %llu block_seq %llu",
SCSB_TRACE_ARGS, __entry->ref_blkno, __entry->ref_seq,
__entry->block_blkno, __entry->block_seq)
);
DECLARE_EVENT_CLASS(scoutfs_block_class,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits, u64 lru_moved),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, lru_moved),
TP_PROTO(struct super_block *sb, void *bp, u64 blkno, int refcount, int io_count,
unsigned long bits, __u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, bp)
__field(__u64, blkno)
__field(int, refcount)
__field(int, io_count)
__field(unsigned long, bits)
__field(__u64, lru_moved)
__field(long, bits)
__field(__u64, accessed)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
@@ -2170,57 +2052,71 @@ DECLARE_EVENT_CLASS(scoutfs_block_class,
__entry->refcount = refcount;
__entry->io_count = io_count;
__entry->bits = bits;
__entry->lru_moved = lru_moved;
__entry->accessed = accessed;
),
TP_printk(SCSBF" bp %p blkno %llu refcount %d io_count %d bits 0x%lx lru_moved %llu",
SCSB_TRACE_ARGS, __entry->bp, __entry->blkno,
__entry->refcount, __entry->io_count, __entry->bits,
__entry->lru_moved)
TP_printk(SCSBF" bp %p blkno %llu refcount %d io_count %d bits 0x%lx accessed %llu",
SCSB_TRACE_ARGS, __entry->bp, __entry->blkno, __entry->refcount,
__entry->io_count, __entry->bits, __entry->accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_allocate,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits, u64 lru_moved),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, lru_moved)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_free,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits, u64 lru_moved),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, lru_moved)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_insert,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits, u64 lru_moved),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, lru_moved)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_remove,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_end_io,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits, u64 lru_moved),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, lru_moved)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_submit,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits, u64 lru_moved),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, lru_moved)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_invalidate,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits, u64 lru_moved),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, lru_moved)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_mark_dirty,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits, u64 lru_moved),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, lru_moved)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_forget,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits, u64 lru_moved),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, lru_moved)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_shrink,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits, u64 lru_moved),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, lru_moved)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DECLARE_EVENT_CLASS(scoutfs_ext_next_class,
@@ -2462,6 +2358,136 @@ TRACE_EVENT(scoutfs_alloc_move,
__entry->ret)
);
TRACE_EVENT(scoutfs_item_read_page,
TP_PROTO(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *pg_start, struct scoutfs_key *pg_end),
TP_ARGS(sb, key, pg_start, pg_end),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
sk_trace_define(key)
sk_trace_define(pg_start)
sk_trace_define(pg_end)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
sk_trace_assign(key, key);
sk_trace_assign(pg_start, pg_start);
sk_trace_assign(pg_end, pg_end);
),
TP_printk(SCSBF" key "SK_FMT" pg_start "SK_FMT" pg_end "SK_FMT,
SCSB_TRACE_ARGS, sk_trace_args(key), sk_trace_args(pg_start),
sk_trace_args(pg_end))
);
TRACE_EVENT(scoutfs_item_invalidate_page,
TP_PROTO(struct super_block *sb, struct scoutfs_key *start,
struct scoutfs_key *end, struct scoutfs_key *pg_start,
struct scoutfs_key *pg_end, int pgi),
TP_ARGS(sb, start, end, pg_start, pg_end, pgi),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
sk_trace_define(start)
sk_trace_define(end)
sk_trace_define(pg_start)
sk_trace_define(pg_end)
__field(int, pgi)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
sk_trace_assign(start, start);
sk_trace_assign(end, end);
sk_trace_assign(pg_start, pg_start);
sk_trace_assign(pg_end, pg_end);
__entry->pgi = pgi;
),
TP_printk(SCSBF" start "SK_FMT" end "SK_FMT" pg_start "SK_FMT" pg_end "SK_FMT" pgi %d",
SCSB_TRACE_ARGS, sk_trace_args(start), sk_trace_args(end),
sk_trace_args(pg_start), sk_trace_args(pg_end), __entry->pgi)
);
DECLARE_EVENT_CLASS(scoutfs_omap_group_class,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr, int bit_count),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr, bit_count),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, grp)
__field(__u64, group_nr)
__field(unsigned int, group_total)
__field(int, bit_nr)
__field(int, bit_count)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->grp = grp;
__entry->group_nr = group_nr;
__entry->group_total = group_total;
__entry->bit_nr = bit_nr;
__entry->bit_count = bit_count;
),
TP_printk(SCSBF" grp %p group_nr %llu group_total %u bit_nr %d bit_count %d",
SCSB_TRACE_ARGS, __entry->grp, __entry->group_nr, __entry->group_total,
__entry->bit_nr, __entry->bit_count)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_alloc,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr, int bit_count),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr, bit_count)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_free,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr, int bit_count),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr, bit_count)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_inc,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr, int bit_count),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr, bit_count)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_dec,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr, int bit_count),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr, bit_count)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_request,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr, int bit_count),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr, bit_count)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_destroy,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr, int bit_count),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr, bit_count)
);
TRACE_EVENT(scoutfs_omap_should_delete,
TP_PROTO(struct super_block *sb, u64 ino, unsigned int nlink, int ret),
TP_ARGS(sb, ino, nlink, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(unsigned int, nlink)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->nlink = nlink;
__entry->ret = ret;
),
TP_printk(SCSBF" ino %llu nlink %u ret %d",
SCSB_TRACE_ARGS, __entry->ino, __entry->nlink, __entry->ret)
);
#endif /* _TRACE_SCOUTFS_H */
/* This part must be outside protection */

File diff suppressed because it is too large.


@@ -59,18 +59,21 @@ do { \
int scoutfs_server_lock_request(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_server_lock_response(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_net_lock_grant_response *gr);
struct scoutfs_net_lock *nl);
int scoutfs_server_lock_recover_request(struct super_block *sb, u64 rid,
struct scoutfs_key *key);
void scoutfs_server_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots);
int scoutfs_server_hold_commit(struct super_block *sb);
int scoutfs_server_apply_commit(struct super_block *sb, int err);
void scoutfs_server_recov_finish(struct super_block *sb, u64 rid, int which);
int scoutfs_server_send_omap_request(struct super_block *sb, u64 rid,
struct scoutfs_open_ino_map_args *args);
int scoutfs_server_send_omap_response(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_open_ino_map *map, int err);
struct sockaddr_in;
struct scoutfs_quorum_elected_info;
int scoutfs_server_start(struct super_block *sb, struct sockaddr_in *sin,
u64 term);
int scoutfs_server_start(struct super_block *sb, u64 term);
void scoutfs_server_abort(struct super_block *sb);
void scoutfs_server_stop(struct super_block *sb);


@@ -255,24 +255,9 @@ static u8 height_for_blk(u64 blk)
return hei;
}
static void init_file_block(struct super_block *sb, struct scoutfs_block *bl,
int level)
static inline u32 srch_level_magic(int level)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_block_header *hdr;
/* don't leak uninit kernel mem.. block should do this for us? */
memset(bl->data, 0, SCOUTFS_BLOCK_LG_SIZE);
hdr = bl->data;
hdr->fsid = super->hdr.fsid;
hdr->blkno = cpu_to_le64(bl->blkno);
prandom_bytes(&hdr->seq, sizeof(hdr->seq));
if (level)
hdr->magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_SRCH_PARENT);
else
hdr->magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK);
return level ? SCOUTFS_BLOCK_MAGIC_SRCH_PARENT : SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK;
}
/*
@@ -284,39 +269,15 @@ static void init_file_block(struct super_block *sb, struct scoutfs_block *bl,
*/
static int read_srch_block(struct super_block *sb,
struct scoutfs_block_writer *wri, int level,
struct scoutfs_srch_ref *ref,
struct scoutfs_block_ref *ref,
struct scoutfs_block **bl_ret)
{
struct scoutfs_block *bl;
int retries = 0;
int ret = 0;
int mag;
u32 magic = srch_level_magic(level);
int ret;
mag = level ? SCOUTFS_BLOCK_MAGIC_SRCH_PARENT :
SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK;
retry:
bl = scoutfs_block_read(sb, le64_to_cpu(ref->blkno));
if (!IS_ERR_OR_NULL(bl) &&
!scoutfs_block_consistent_ref(sb, bl, ref->seq, ref->blkno, mag)) {
scoutfs_inc_counter(sb, srch_inconsistent_ref);
scoutfs_block_writer_forget(sb, wri, bl);
scoutfs_block_invalidate(sb, bl);
scoutfs_block_put(sb, bl);
bl = NULL;
if (retries++ == 0)
goto retry;
bl = ERR_PTR(-ESTALE);
ret = scoutfs_block_read_ref(sb, ref, magic, bl_ret);
if (ret == -ESTALE)
scoutfs_inc_counter(sb, srch_read_stale);
}
if (IS_ERR(bl)) {
ret = PTR_ERR(bl);
bl = NULL;
}
*bl_ret = bl;
return ret;
}
@@ -333,7 +294,7 @@ static int read_path_block(struct super_block *sb,
{
struct scoutfs_block *bl = NULL;
struct scoutfs_srch_parent *srp;
struct scoutfs_srch_ref ref;
struct scoutfs_block_ref ref;
int level;
int ind;
int ret;
@@ -392,12 +353,10 @@ static int get_file_block(struct super_block *sb,
struct scoutfs_block_header *hdr;
struct scoutfs_block *bl = NULL;
struct scoutfs_srch_parent *srp;
struct scoutfs_block *new_bl;
struct scoutfs_srch_ref *ref;
u64 blkno = 0;
struct scoutfs_block_ref new_root_ref;
struct scoutfs_block_ref *ref;
int level;
int ind;
int err;
int ret;
u8 hei;
@@ -409,29 +368,21 @@ static int get_file_block(struct super_block *sb,
goto out;
}
ret = scoutfs_alloc_meta(sb, alloc, wri, &blkno);
memset(&new_root_ref, 0, sizeof(new_root_ref));
level = sfl->height;
ret = scoutfs_block_dirty_ref(sb, alloc, wri, &new_root_ref,
srch_level_magic(level), &bl, 0, NULL);
if (ret < 0)
goto out;
bl = scoutfs_block_create(sb, blkno);
if (IS_ERR(bl)) {
ret = PTR_ERR(bl);
goto out;
}
blkno = 0;
scoutfs_block_writer_mark_dirty(sb, wri, bl);
init_file_block(sb, bl, sfl->height);
if (sfl->height) {
if (level) {
srp = bl->data;
srp->refs[0].blkno = sfl->ref.blkno;
srp->refs[0].seq = sfl->ref.seq;
srp->refs[0] = sfl->ref;
}
hdr = bl->data;
sfl->ref.blkno = hdr->blkno;
sfl->ref.seq = hdr->seq;
sfl->ref = new_root_ref;
sfl->height++;
scoutfs_block_put(sb, bl);
bl = NULL;
@@ -447,54 +398,13 @@ static int get_file_block(struct super_block *sb,
goto out;
}
/* read an existing block */
if (ref->blkno) {
ret = read_srch_block(sb, wri, level, ref, &bl);
if (ret < 0)
goto out;
}
/* allocate a new block if we need it */
if (!ref->blkno || ((flags & GFB_DIRTY) &&
!scoutfs_block_writer_is_dirty(sb, bl))) {
ret = scoutfs_alloc_meta(sb, alloc, wri, &blkno);
if (ret < 0)
goto out;
new_bl = scoutfs_block_create(sb, blkno);
if (IS_ERR(new_bl)) {
ret = PTR_ERR(new_bl);
goto out;
}
if (bl) {
/* cow old block if we have one */
ret = scoutfs_free_meta(sb, alloc, wri,
bl->blkno);
if (ret)
goto out;
memcpy(new_bl->data, bl->data,
SCOUTFS_BLOCK_LG_SIZE);
scoutfs_block_put(sb, bl);
bl = new_bl;
hdr = bl->data;
hdr->blkno = cpu_to_le64(bl->blkno);
prandom_bytes(&hdr->seq, sizeof(hdr->seq));
} else {
/* init new allocated block */
bl = new_bl;
init_file_block(sb, bl, level);
}
blkno = 0;
scoutfs_block_writer_mark_dirty(sb, wri, bl);
/* update file or parent block ref */
hdr = bl->data;
ref->blkno = hdr->blkno;
ref->seq = hdr->seq;
}
if (flags & GFB_DIRTY)
ret = scoutfs_block_dirty_ref(sb, alloc, wri, ref, srch_level_magic(level),
&bl, 0, NULL);
else
ret = scoutfs_block_read_ref(sb, ref, srch_level_magic(level), &bl);
if (ret < 0)
goto out;
if (level == 0) {
ret = 0;
@@ -514,12 +424,6 @@ static int get_file_block(struct super_block *sb,
out:
scoutfs_block_put(sb, parent);
/* return allocated blkno on error */
if (blkno > 0) {
err = scoutfs_free_meta(sb, alloc, wri, blkno);
BUG_ON(err); /* radix should have been dirty */
}
if (ret < 0) {
scoutfs_block_put(sb, bl);
bl = NULL;
@@ -1198,14 +1102,10 @@ int scoutfs_srch_get_compact(struct super_block *sb,
for (;;scoutfs_key_inc(&key)) {
ret = scoutfs_btree_next(sb, root, &key, &iref);
if (ret == -ENOENT) {
ret = 0;
sc->nr = 0;
goto out;
}
if (ret == 0) {
if (iref.val_len == sizeof(struct scoutfs_srch_file)) {
if (iref.key->sk_type != type) {
ret = -ENOENT;
} else if (iref.val_len == sizeof(sfl)) {
key = *iref.key;
memcpy(&sfl, iref.val, iref.val_len);
} else {
@@ -1213,24 +1113,25 @@ int scoutfs_srch_get_compact(struct super_block *sb,
}
scoutfs_btree_put_iref(&iref);
}
if (ret < 0)
if (ret < 0) {
/* see if we ran out of log files or files entirely */
if (ret == -ENOENT) {
sc->nr = 0;
if (type == SCOUTFS_SRCH_LOG_TYPE) {
type = SCOUTFS_SRCH_BLOCKS_TYPE;
init_srch_key(&key, type, 0, 0);
continue;
} else {
ret = 0;
}
}
goto out;
}
/* skip any files already being compacted */
if (scoutfs_spbm_test(&busy, le64_to_cpu(sfl.ref.blkno)))
continue;
/* see if we ran out of log files or files entirely */
if (key.sk_type != type) {
sc->nr = 0;
if (key.sk_type == SCOUTFS_SRCH_BLOCKS_TYPE) {
type = SCOUTFS_SRCH_BLOCKS_TYPE;
} else {
ret = 0;
goto out;
}
}
/* reset if we iterated into the next size category */
if (type == SCOUTFS_SRCH_BLOCKS_TYPE) {
order = fls64(le64_to_cpu(sfl.blocks)) /
@@ -2255,7 +2156,8 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
if (ret < 0)
goto commit;
ret = scoutfs_block_writer_write(sb, &wri);
ret = scoutfs_alloc_prepare_commit(sb, &alloc, &wri) ?:
scoutfs_block_writer_write(sb, &wri);
commit:
/* the server won't use our partial compact if _ERROR is set */
sc->meta_avail = alloc.avail;


@@ -44,6 +44,8 @@
#include "srch.h"
#include "item.h"
#include "alloc.h"
#include "recov.h"
#include "omap.h"
#include "scoutfs_trace.h"
static struct dentry *scoutfs_debugfs_root;
@@ -166,7 +168,7 @@ out:
* try to free as many locks as possible.
*/
if (scoutfs_trigger(sb, STATFS_LOCK_PURGE))
scoutfs_free_unused_locks(sb, -1UL);
scoutfs_free_unused_locks(sb);
return ret;
}
@@ -176,7 +178,8 @@ static int scoutfs_show_options(struct seq_file *seq, struct dentry *root)
struct super_block *sb = root->d_sb;
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
seq_printf(seq, ",server_addr="SIN_FMT, SIN_ARG(&opts->server_addr));
if (opts->quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts->quorum_slot_nr);
seq_printf(seq, ",metadev_path=%s", opts->metadev_path);
return 0;
@@ -192,20 +195,19 @@ static ssize_t metadev_path_show(struct kobject *kobj,
}
SCOUTFS_ATTR_RO(metadev_path);
static ssize_t server_addr_show(struct kobject *kobj,
static ssize_t quorum_server_nr_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
return snprintf(buf, PAGE_SIZE, SIN_FMT"\n",
SIN_ARG(&opts->server_addr));
return snprintf(buf, PAGE_SIZE, "%d\n", opts->quorum_slot_nr);
}
SCOUTFS_ATTR_RO(server_addr);
SCOUTFS_ATTR_RO(quorum_server_nr);
static struct attribute *mount_options_attrs[] = {
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(server_addr),
SCOUTFS_ATTR_PTR(quorum_server_nr),
NULL,
};
@@ -243,28 +245,26 @@ static void scoutfs_put_super(struct super_block *sb)
trace_scoutfs_put_super(sb);
sbi->shutdown = true;
scoutfs_data_destroy(sb);
scoutfs_srch_destroy(sb);
scoutfs_unlock(sb, sbi->rid_lock, SCOUTFS_LOCK_WRITE);
sbi->rid_lock = NULL;
scoutfs_lock_shutdown(sb);
scoutfs_shutdown_trans(sb);
scoutfs_client_destroy(sb);
scoutfs_inode_destroy(sb);
scoutfs_item_destroy(sb);
scoutfs_forest_destroy(sb);
scoutfs_data_destroy(sb);
/* the server locks the listen address and compacts */
scoutfs_lock_shutdown(sb);
scoutfs_quorum_destroy(sb);
scoutfs_server_destroy(sb);
scoutfs_recov_destroy(sb);
scoutfs_net_destroy(sb);
scoutfs_lock_destroy(sb);
/* server clears quorum leader flag during shutdown */
scoutfs_quorum_destroy(sb);
scoutfs_omap_destroy(sb);
scoutfs_block_destroy(sb);
scoutfs_destroy_triggers(sb);
@@ -309,6 +309,34 @@ int scoutfs_write_super(struct super_block *sb,
sizeof(struct scoutfs_super_block));
}
static bool invalid_blkno_limits(struct super_block *sb, char *which,
u64 start, __le64 first, __le64 last,
struct block_device *bdev, int shift)
{
u64 blkno;
if (le64_to_cpu(first) < start) {
scoutfs_err(sb, "super block first %s blkno %llu is within first valid blkno %llu",
which, le64_to_cpu(first), start);
return true;
}
if (le64_to_cpu(first) > le64_to_cpu(last)) {
scoutfs_err(sb, "super block first %s blkno %llu is greater than last %s blkno %llu",
which, le64_to_cpu(first), which, le64_to_cpu(last));
return true;
}
blkno = (i_size_read(bdev->bd_inode) >> shift) - 1;
if (le64_to_cpu(last) > blkno) {
scoutfs_err(sb, "super block last %s blkno %llu is beyond device size last blkno %llu",
which, le64_to_cpu(last), blkno);
return true;
}
return false;
}
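The device-size check above is worth a concrete number: the last addressable block is just the byte size shifted down by the block shift, minus one. A standalone sketch of that arithmetic (the 64 KiB large-block size is assumed here for illustration):

/* Sketch: last valid blkno for a device, as in invalid_blkno_limits().
 * Example: a 1 TiB metadata device with 64 KiB blocks (shift 16):
 * (1ULL << 40) >> 16 == 1 << 24 blocks, so the last blkno is
 * (1 << 24) - 1. Any super block "last" beyond that is rejected. */
static u64 example_last_device_blkno(u64 dev_bytes, int shift)
{
	return (dev_bytes >> shift) - 1;
}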
/*
* Read super, specifying bdev.
*/
@@ -316,9 +344,9 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
struct block_device *bdev,
struct scoutfs_super_block *super_res)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super;
__le32 calc;
u64 blkno;
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
@@ -352,58 +380,27 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
}
if (super->format_hash != cpu_to_le64(SCOUTFS_FORMAT_HASH)) {
scoutfs_err(sb, "super block has invalid format hash 0x%llx, expected 0x%llx",
le64_to_cpu(super->format_hash),
SCOUTFS_FORMAT_HASH);
if (super->version != cpu_to_le64(SCOUTFS_INTEROP_VERSION)) {
scoutfs_err(sb, "super block has invalid version %llu, expected %llu",
le64_to_cpu(super->version),
SCOUTFS_INTEROP_VERSION);
ret = -EINVAL;
goto out;
}
/* XXX do we want more rigorous invalid super checking? */
if (super->quorum_count == 0 ||
super->quorum_count > SCOUTFS_QUORUM_MAX_COUNT) {
scoutfs_err(sb, "super block has invalid quorum count %u, must be > 0 and <= %u",
super->quorum_count, SCOUTFS_QUORUM_MAX_COUNT);
if (invalid_blkno_limits(sb, "meta",
SCOUTFS_META_DEV_START_BLKNO,
super->first_meta_blkno,
super->last_meta_blkno, sbi->meta_bdev,
SCOUTFS_BLOCK_LG_SHIFT) ||
invalid_blkno_limits(sb, "data",
SCOUTFS_DATA_DEV_START_BLKNO,
super->first_data_blkno,
super->last_data_blkno, sb->s_bdev,
SCOUTFS_BLOCK_SM_SHIFT)) {
ret = -EINVAL;
goto out;
}
blkno = (SCOUTFS_QUORUM_BLKNO + SCOUTFS_QUORUM_BLOCKS) >>
SCOUTFS_BLOCK_SM_LG_SHIFT;
if (le64_to_cpu(super->first_meta_blkno) < blkno) {
scoutfs_err(sb, "super block first meta blkno %llu is within quorum blocks",
le64_to_cpu(super->first_meta_blkno));
ret = -EINVAL;
goto out;
}
if (le64_to_cpu(super->first_meta_blkno) >
le64_to_cpu(super->last_meta_blkno)) {
scoutfs_err(sb, "super block first meta blkno %llu is greater than last meta blkno %llu",
le64_to_cpu(super->first_meta_blkno),
le64_to_cpu(super->last_meta_blkno));
ret = -EINVAL;
goto out;
}
if (le64_to_cpu(super->first_data_blkno) >
le64_to_cpu(super->last_data_blkno)) {
scoutfs_err(sb, "super block first data blkno %llu is greater than last data blkno %llu",
le64_to_cpu(super->first_data_blkno),
le64_to_cpu(super->last_data_blkno));
ret = -EINVAL;
goto out;
}
blkno = (i_size_read(sb->s_bdev->bd_inode) >>
SCOUTFS_BLOCK_SM_SHIFT) - 1;
if (le64_to_cpu(super->last_data_blkno) > blkno) {
scoutfs_err(sb, "super block last data blkno %llu is outside device size last blkno %llu",
le64_to_cpu(super->last_data_blkno), blkno);
ret = -EINVAL;
goto out;
}
out:
@@ -597,10 +594,12 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
scoutfs_inode_setup(sb) ?:
scoutfs_data_setup(sb) ?:
scoutfs_setup_trans(sb) ?:
scoutfs_omap_setup(sb) ?:
scoutfs_lock_setup(sb) ?:
scoutfs_net_setup(sb) ?:
scoutfs_quorum_setup(sb) ?:
scoutfs_recov_setup(sb) ?:
scoutfs_server_setup(sb) ?:
scoutfs_quorum_setup(sb) ?:
scoutfs_client_setup(sb) ?:
scoutfs_lock_rid(sb, SCOUTFS_LOCK_WRITE, 0, sbi->rid,
&sbi->rid_lock) ?:
@@ -649,6 +648,9 @@ static void scoutfs_kill_sb(struct super_block *sb)
{
trace_scoutfs_kill_sb(sb);
if (SCOUTFS_HAS_SBI(sb))
scoutfs_lock_unmount_begin(sb);
kill_block_super(sb);
}
@@ -682,6 +684,10 @@ static int __init scoutfs_module_init(void)
".section .note.git_describe,\"a\"\n"
".string \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_interop_version,\"a\"\n"
".string \""SCOUTFS_INTEROP_VERSION_STR"\\n\"\n"
".previous\n");
scoutfs_init_counters();
@@ -714,3 +720,4 @@ module_exit(scoutfs_module_exit)
MODULE_AUTHOR("Zach Brown <zab@versity.com>");
MODULE_LICENSE("GPL");
MODULE_INFO(git_describe, SCOUTFS_GIT_DESCRIBE);
MODULE_INFO(scoutfs_interop_version, SCOUTFS_INTEROP_VERSION_STR);


@@ -26,6 +26,8 @@ struct net_info;
struct block_info;
struct forest_info;
struct srch_info;
struct recov_info;
struct omap_info;
struct scoutfs_sb_info {
struct super_block *sb;
@@ -48,6 +50,7 @@ struct scoutfs_sb_info {
struct block_info *block_info;
struct forest_info *forest_info;
struct srch_info *srch_info;
struct omap_info *omap_info;
struct item_cache_info *item_cache_info;
wait_queue_head_t trans_hold_wq;
@@ -70,6 +73,7 @@ struct scoutfs_sb_info {
struct lock_server_info *lock_server_info;
struct client_info *client_info;
struct server_info *server_info;
struct recov_info *recov_info;
struct sysfs_info *sfsinfo;
struct scoutfs_counters *counters;
@@ -81,8 +85,6 @@ struct scoutfs_sb_info {
struct dentry *debug_root;
bool shutdown;
unsigned long corruption_messages_once[SC_NR_LONGS];
};


@@ -39,17 +39,15 @@
* track the relationships between dirty blocks so there's only ever one
* transaction being built.
*
* The copy of the on-disk super block in the fs sb info has its header
* sequence advanced so that new dirty blocks inherit this dirty
* sequence number. It's only advanced once all those dirty blocks are
* reachable after having first written them all out and then the new
* super with that seq. It's first incremented at mount.
* Committing the current dirty transaction can be triggered by sync, a
* regular background commit interval, reaching a dirty block threshold,
* or the transaction running out of its private allocator resources.
* Once all the current holders release the writing func writes out the
* dirty blocks while excluding holders until it finishes.
*
* Unfortunately writers can nest. We don't bother trying to special
* case holding a transaction that you're already holding because that
* requires per-task storage. We just let anyone hold transactions
* regardless of waiters waiting to write, which risks waiters waiting a
* very long time.
* Unfortunately writing holders can nest. We track nested hold callers
* with the per-task journal_info pointer to avoid deadlocks between
* holders that might otherwise wait for a pending commit.
*/
/* sync dirty data at least this often */
@@ -59,11 +57,7 @@
* XXX move the rest of the super trans_ fields here.
*/
struct trans_info {
spinlock_t lock;
unsigned reserved_items;
unsigned reserved_vals;
unsigned holders;
bool writing;
atomic_t holders;
struct scoutfs_log_trees lt;
struct scoutfs_alloc alloc;
@@ -73,17 +67,9 @@ struct trans_info {
#define DECLARE_TRANS_INFO(sb, name) \
struct trans_info *name = SCOUTFS_SB(sb)->trans_info
static bool drained_holders(struct trans_info *tri)
{
bool drained;
spin_lock(&tri->lock);
tri->writing = true;
drained = tri->holders == 0;
spin_unlock(&tri->lock);
return drained;
}
/* avoid the high sign bit out of an abundance of caution */
#define TRANS_HOLDERS_WRITE_FUNC_BIT (1 << 30)
#define TRANS_HOLDERS_COUNT_MASK (TRANS_HOLDERS_WRITE_FUNC_BIT - 1)
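To make the new encoding concrete, here is a stand-alone sketch (plain C11 atomics, not scoutfs code) of the protocol built on these macros: holders count in the low bits, the write func owns the high bit, and each side backs off while the other is present. The kernel code sleeps on trans_hold_wq where this sketch simply spins.
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#define WRITE_FUNC_BIT	(1 << 30)
#define COUNT_MASK	(WRITE_FUNC_BIT - 1)
static atomic_int holders;
/* holders retry the increment; they back off while the writer bit is set */
static bool sketch_inc_holders_unless_writer(void)
{
	int cur;
	do {
		cur = atomic_load(&holders);
		if (cur & WRITE_FUNC_BIT)
			return false;
	} while (!atomic_compare_exchange_weak(&holders, &cur, cur + 1));
	return true;
}
/* the writer blocks new holders, then waits for existing ones to drain */
static void sketch_writer_wait_drained(void)
{
	atomic_fetch_add(&holders, WRITE_FUNC_BIT);
	while (atomic_load(&holders) & COUNT_MASK)
		/* kernel: wait_event(trans_hold_wq, drained_holders(tri)) */ ;
}
int main(void)
{
	assert(sketch_inc_holders_unless_writer());	/* count is now 1 */
	atomic_fetch_sub(&holders, 1);			/* release */
	sketch_writer_wait_drained();			/* writer excludes holds */
	assert(!sketch_inc_holders_unless_writer());	/* holds fail while writing */
	atomic_fetch_sub(&holders, WRITE_FUNC_BIT);
	return 0;
}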
static int commit_btrees(struct super_block *sb)
{
@@ -128,6 +114,36 @@ bool scoutfs_trans_has_dirty(struct super_block *sb)
return scoutfs_block_writer_has_dirty(sb, &tri->wri);
}
/*
* This is racing with wait_event conditions, make sure our atomic
* stores and waitqueue loads are ordered.
*/
static void sub_holders_and_wake(struct super_block *sb, int val)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
atomic_sub(val, &tri->holders);
smp_mb(); /* make sure sub is visible before we wake */
if (waitqueue_active(&sbi->trans_hold_wq))
wake_up(&sbi->trans_hold_wq);
}
/*
* called as a wait_event condition, needs to be careful to not change
* task state and is racing with waking paths that sub_return, test, and
* wake.
*/
static bool drained_holders(struct trans_info *tri)
{
int holders;
smp_mb(); /* make sure task in wait_event queue before atomic read */
holders = atomic_read(&tri->holders) & TRANS_HOLDERS_COUNT_MASK;
return holders == 0;
}
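/*
Pairing note: the barrier in sub_holders_and_wake() orders its
atomic_sub() before the waitqueue_active() load, while the barrier
above orders the waiter's queueing before the atomic_read().  Each
side therefore observes the other's store, so the final wakeup that
lets the write func proceed can't be lost.
*/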
/*
* This work func is responsible for writing out all the dirty blocks
* that make up the current dirty transaction. It prevents writers from
@@ -164,6 +180,9 @@ void scoutfs_trans_write_func(struct work_struct *work)
sbi->trans_task = current;
/* mark that we're writing so holders wait for us to finish and clear our bit */
atomic_add(TRANS_HOLDERS_WRITE_FUNC_BIT, &tri->holders);
wait_event(sbi->trans_hold_wq, drained_holders(tri));
trace_scoutfs_trans_write_func(sb,
@@ -215,11 +234,8 @@ out:
spin_unlock(&sbi->trans_write_lock);
wake_up(&sbi->trans_write_wq);
spin_lock(&tri->lock);
tri->writing = false;
spin_unlock(&tri->lock);
wake_up(&sbi->trans_hold_wq);
/* we're done, wake waiting holders */
sub_holders_and_wake(sb, TRANS_HOLDERS_WRITE_FUNC_BIT);
sbi->trans_task = NULL;
@@ -311,64 +327,83 @@ void scoutfs_trans_restart_sync_deadline(struct super_block *sb)
}
/*
* Each thread reserves space in the segment for their dirty items while
* they hold the transaction. This is calculated before the first
* transaction hold is acquired. It includes all the potential nested
* item manipulation that could happen with the transaction held.
* Including nested holds avoids having to deal with writing out partial
* transactions while a caller still holds the transaction.
* We store nested holders in the lower bits of journal_info. We use
* some higher bits as a magic value to detect if something goes
* horribly wrong and it gets clobbered.
*/
#define SCOUTFS_RESERVATION_MAGIC 0xd57cd13b
struct scoutfs_reservation {
unsigned magic;
unsigned holders;
struct scoutfs_item_count reserved;
struct scoutfs_item_count actual;
};
#define TRANS_JI_MAGIC 0xd5700000
#define TRANS_JI_MAGIC_MASK 0xfff00000
#define TRANS_JI_COUNT_MASK 0x000fffff
/* returns true if a caller already had a holder counted in journal_info */
static bool inc_journal_info_holders(void)
{
unsigned long holders = (unsigned long)current->journal_info;
WARN_ON_ONCE(holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC));
if (holders == 0)
holders = TRANS_JI_MAGIC;
holders++;
current->journal_info = (void *)holders;
return (holders > (TRANS_JI_MAGIC | 1));
}
static void dec_journal_info_holders(void)
{
unsigned long holders = (unsigned long)current->journal_info;
WARN_ON_ONCE(holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC));
WARN_ON_ONCE((holders & TRANS_JI_COUNT_MASK) == 0);
holders--;
if (holders == TRANS_JI_MAGIC)
holders = 0;
current->journal_info = (void *)holders;
}
/*
* Try to hold the transaction. If a caller already holds the trans then
* we piggy back on their hold. We wait if the writer is trying to
write out the transaction. And if our items won't fit then we kick off
* a write.
* This is called as the wait_event condition for holding a transaction.
* Increment the holder count unless the writer is present. We return
* false to wait until the writer finishes and wakes us.
*
* This is called as a condition for wait_event. It is very limited in
the locking (blocking) it can do because the caller has set the task
state before testing the condition, and so must safely race with wakers
that set the condition. Our checking the amount of dirty metadata
* blocks and free data blocks is racy, but we don't mind the risk of
* delaying or prematurely forcing commits.
This can be racing with itself while there are no waiters. We retry
* the cmpxchg instead of returning and waiting.
*/
static bool acquired_hold(struct super_block *sb,
struct scoutfs_reservation *rsv,
const struct scoutfs_item_count *cnt)
static bool inc_holders_unless_writer(struct trans_info *tri)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
bool acquired = false;
unsigned items;
unsigned vals;
int holders;
spin_lock(&tri->lock);
do {
smp_mb(); /* make sure we read after wait puts task in queue */
holders = atomic_read(&tri->holders);
if (holders & TRANS_HOLDERS_WRITE_FUNC_BIT)
return false;
trace_scoutfs_trans_acquired_hold(sb, cnt, rsv, rsv->holders,
&rsv->reserved, &rsv->actual,
tri->holders, tri->writing,
tri->reserved_items,
tri->reserved_vals);
} while (atomic_cmpxchg(&tri->holders, holders, holders + 1) != holders);
/* use a caller's existing reservation */
if (rsv->holders)
goto hold;
return true;
}
/* wait until the writing thread is finished */
if (tri->writing)
goto out;
/* see if we can reserve space for our item count */
items = tri->reserved_items + cnt->items;
vals = tri->reserved_vals + cnt->vals;
/*
* As we drop the last trans holder we try to wake a writing thread that
* was waiting for us to finish.
*/
static void release_holders(struct super_block *sb)
{
dec_journal_info_holders();
sub_holders_and_wake(sb, 1);
}
/*
* The caller has incremented holders so it is blocking commits. We
* make some quick checks to see if we need to trigger and wait for
* another commit before proceeding.
*/
static bool commit_before_hold(struct super_block *sb, struct trans_info *tri)
{
/*
* In theory each dirty item page could be straddling two full
* blocks, requiring 4 allocations for each item cache page.
@@ -378,11 +413,9 @@ static bool acquired_hold(struct super_block *sb,
* that it accounts for having to dirty parent blocks and
* whatever dirtying is done during the transaction hold.
*/
if (scoutfs_alloc_meta_low(sb, &tri->alloc,
scoutfs_item_dirty_pages(sb) * 2)) {
if (scoutfs_alloc_meta_low(sb, &tri->alloc, scoutfs_item_dirty_pages(sb) * 2)) {
scoutfs_inc_counter(sb, trans_commit_dirty_meta_full);
queue_trans_work(sbi);
goto out;
return true;
}
/*
@@ -394,71 +427,74 @@ static bool acquired_hold(struct super_block *sb,
*/
if (scoutfs_alloc_meta_low(sb, &tri->alloc, 16)) {
scoutfs_inc_counter(sb, trans_commit_meta_alloc_low);
queue_trans_work(sbi);
goto out;
return true;
}
/* Try to refill data allocator before premature enospc */
if (scoutfs_data_alloc_free_bytes(sb) <= SCOUTFS_TRANS_DATA_ALLOC_LWM) {
scoutfs_inc_counter(sb, trans_commit_data_alloc_low);
queue_trans_work(sbi);
return true;
}
return false;
}
static bool acquired_hold(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
bool acquired;
/* if a caller already has a hold we acquire unconditionally */
if (inc_journal_info_holders()) {
atomic_inc(&tri->holders);
acquired = true;
goto out;
}
tri->reserved_items = items;
tri->reserved_vals = vals;
/* wait if the writer is blocking holds */
if (!inc_holders_unless_writer(tri)) {
dec_journal_info_holders();
acquired = false;
goto out;
}
rsv->reserved.items = cnt->items;
rsv->reserved.vals = cnt->vals;
/* wait if we're triggering another commit */
if (commit_before_hold(sb, tri)) {
release_holders(sb);
queue_trans_work(sbi);
acquired = false;
goto out;
}
hold:
rsv->holders++;
tri->holders++;
trace_scoutfs_trans_acquired_hold(sb, current->journal_info, atomic_read(&tri->holders));
acquired = true;
out:
spin_unlock(&tri->lock);
return acquired;
}
int scoutfs_hold_trans(struct super_block *sb,
const struct scoutfs_item_count cnt)
/*
* Try to hold the transaction. Holding the transaction prevents it
* from being committed. If a transaction is currently being written
* then we'll block until it's done and our hold can be granted.
*
* If a caller already holds the trans then we unconditionally acquire
* our hold and return to avoid deadlocks with our caller, the writing
thread, and us. We count nested holds within a call stack using the
journal_info pointer in the task_struct.
*
* The writing thread marks itself as a global trans_task which
* short-circuits all the hold machinery so it can call code that would
* otherwise try to hold transactions while it is writing.
*/
int scoutfs_hold_trans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv;
int ret;
/*
* Caller shouldn't provide garbage counts, nor counts that
* can't fit in segments by themselves.
*/
if (WARN_ON_ONCE(cnt.items <= 0 || cnt.vals < 0))
return -EINVAL;
if (current == sbi->trans_task)
return 0;
rsv = current->journal_info;
if (rsv == NULL) {
rsv = kzalloc(sizeof(struct scoutfs_reservation), GFP_NOFS);
if (!rsv)
return -ENOMEM;
rsv->magic = SCOUTFS_RESERVATION_MAGIC;
current->journal_info = rsv;
}
BUG_ON(rsv->magic != SCOUTFS_RESERVATION_MAGIC);
ret = wait_event_interruptible(sbi->trans_hold_wq,
acquired_hold(sb, rsv, &cnt));
if (ret && rsv->holders == 0) {
current->journal_info = NULL;
kfree(rsv);
}
return ret;
return wait_event_interruptible(sbi->trans_hold_wq, acquired_hold(sb));
}
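For a concrete picture of the simplified calling convention, here is a hypothetical caller sketch (the function name and the item work are invented; the xattr.c hunk further down shows a real conversion):
/* hypothetical caller of the simplified hold API */
static int example_update_items(struct super_block *sb)
{
	int ret;

	ret = scoutfs_hold_trans(sb);
	if (ret < 0)
		return ret;

	/* create or dirty items while the transaction is held open */

	scoutfs_release_trans(sb);
	return 0;
}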
/*
@@ -468,86 +504,22 @@ int scoutfs_hold_trans(struct super_block *sb,
*/
bool scoutfs_trans_held(void)
{
struct scoutfs_reservation *rsv = current->journal_info;
unsigned long holders = (unsigned long)current->journal_info;
return rsv && rsv->magic == SCOUTFS_RESERVATION_MAGIC;
return (holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) == TRANS_JI_MAGIC));
}
/*
* Record a transaction holder's individual contribution to the dirty
* items in the current transaction. We're making sure that the
* reservation matches the possible item manipulations while they hold
* the reservation.
*
* It is possible and legitimate for an individual contribution to be
* negative if they delete dirty items. The item cache makes sure that
* the total dirty item count doesn't fall below zero.
*/
void scoutfs_trans_track_item(struct super_block *sb, signed items,
signed vals)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv = current->journal_info;
if (current == sbi->trans_task)
return;
BUG_ON(!rsv || rsv->magic != SCOUTFS_RESERVATION_MAGIC);
rsv->actual.items += items;
rsv->actual.vals += vals;
trace_scoutfs_trans_track_item(sb, items, vals, rsv->actual.items,
rsv->actual.vals, rsv->reserved.items,
rsv->reserved.vals);
WARN_ON_ONCE(rsv->actual.items > rsv->reserved.items);
WARN_ON_ONCE(rsv->actual.vals > rsv->reserved.vals);
}
/*
* As we drop the last hold in the reservation we try and wake other
* hold attempts that were waiting for space. As we drop the last trans
* holder we try to wake a writing thread that was waiting for us to
* finish.
*/
void scoutfs_release_trans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv;
DECLARE_TRANS_INFO(sb, tri);
bool wake = false;
if (current == sbi->trans_task)
return;
rsv = current->journal_info;
BUG_ON(!rsv || rsv->magic != SCOUTFS_RESERVATION_MAGIC);
release_holders(sb);
spin_lock(&tri->lock);
trace_scoutfs_release_trans(sb, rsv, rsv->holders, &rsv->reserved,
&rsv->actual, tri->holders, tri->writing,
tri->reserved_items, tri->reserved_vals);
BUG_ON(rsv->holders <= 0);
BUG_ON(tri->holders <= 0);
if (--rsv->holders == 0) {
tri->reserved_items -= rsv->reserved.items;
tri->reserved_vals -= rsv->reserved.vals;
current->journal_info = NULL;
kfree(rsv);
wake = true;
}
if (--tri->holders == 0)
wake = true;
spin_unlock(&tri->lock);
if (wake)
wake_up(&sbi->trans_hold_wq);
trace_scoutfs_release_trans(sb, current->journal_info, atomic_read(&tri->holders));
}
/*
@@ -576,7 +548,7 @@ int scoutfs_setup_trans(struct super_block *sb)
if (!tri)
return -ENOMEM;
spin_lock_init(&tri->lock);
atomic_set(&tri->holders, 0);
scoutfs_block_writer_init(sb, &tri->wri);
sbi->trans_write_workq = alloc_workqueue("scoutfs_trans",
@@ -592,8 +564,15 @@ int scoutfs_setup_trans(struct super_block *sb)
}
/*
* kill_sb calls sync before getting here so we know that dirty data
* should be in flight. We just have to wait for it to quiesce.
* While the vfs will have done an fs level sync before calling
put_super, we may have done work at our level after all the fs
ops were done. An example is final inode deletion in iput, which is
* done in generic_shutdown_super after the sync and before calling our
* put_super.
*
* So we always try to write any remaining dirty transactions before
* shutting down. Typically there won't be any dirty data and the
* worker will just return.
*/
void scoutfs_shutdown_trans(struct super_block *sb)
{
@@ -601,13 +580,18 @@ void scoutfs_shutdown_trans(struct super_block *sb)
DECLARE_TRANS_INFO(sb, tri);
if (tri) {
scoutfs_block_writer_forget_all(sb, &tri->wri);
if (sbi->trans_write_workq) {
/* immediately queues pending timer */
flush_delayed_work(&sbi->trans_write_work);
/* prevents re-arming if it has to wait */
cancel_delayed_work_sync(&sbi->trans_write_work);
destroy_workqueue(sbi->trans_write_workq);
/* trans work that schedules after shutdown sees NULL */
sbi->trans_write_workq = NULL;
}
scoutfs_block_writer_forget_all(sb, &tri->wri);
kfree(tri);
sbi->trans_info = NULL;
}

View File

@@ -6,21 +6,16 @@
/* the client will force commits if data allocators get too low */
#define SCOUTFS_TRANS_DATA_ALLOC_LWM (256ULL * 1024 * 1024)
#include "count.h"
void scoutfs_trans_write_func(struct work_struct *work);
int scoutfs_trans_sync(struct super_block *sb, int wait);
int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
int datasync);
void scoutfs_trans_restart_sync_deadline(struct super_block *sb);
int scoutfs_hold_trans(struct super_block *sb,
const struct scoutfs_item_count cnt);
int scoutfs_hold_trans(struct super_block *sb);
bool scoutfs_trans_held(void);
void scoutfs_release_trans(struct super_block *sb);
u64 scoutfs_trans_sample_seq(struct super_block *sb);
void scoutfs_trans_track_item(struct super_block *sb, signed items,
signed vals);
int scoutfs_trans_get_log_trees(struct super_block *sb);
bool scoutfs_trans_has_dirty(struct super_block *sb);

View File

@@ -38,10 +38,7 @@ struct scoutfs_triggers {
struct scoutfs_triggers *name = SCOUTFS_SB(sb)->triggers
static char *names[] = {
[SCOUTFS_TRIGGER_BTREE_STALE_READ] = "btree_stale_read",
[SCOUTFS_TRIGGER_BTREE_ADVANCE_RING_HALF] = "btree_advance_ring_half",
[SCOUTFS_TRIGGER_HARD_STALE_ERROR] = "hard_stale_error",
[SCOUTFS_TRIGGER_SEG_STALE_READ] = "seg_stale_read",
[SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE] = "block_remove_stale",
[SCOUTFS_TRIGGER_STATFS_LOCK_PURGE] = "statfs_lock_purge",
};

View File

@@ -2,10 +2,7 @@
#define _SCOUTFS_TRIGGERS_H_
enum scoutfs_trigger {
SCOUTFS_TRIGGER_BTREE_STALE_READ,
SCOUTFS_TRIGGER_BTREE_ADVANCE_RING_HALF,
SCOUTFS_TRIGGER_HARD_STALE_ERROR,
SCOUTFS_TRIGGER_SEG_STALE_READ,
SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE,
SCOUTFS_TRIGGER_STATFS_LOCK_PURGE,
SCOUTFS_TRIGGER_NR,
};

20
kmod/src/util.h Normal file
View File

@@ -0,0 +1,20 @@
#ifndef _SCOUTFS_UTIL_H_
#define _SCOUTFS_UTIL_H_
/*
* Little utility helpers that probably belong upstream.
*/
static inline void down_write_two(struct rw_semaphore *a,
struct rw_semaphore *b)
{
BUG_ON(a == b);
if (a > b)
swap(a, b);
down_write(a);
down_write_nested(b, SINGLE_DEPTH_NESTING);
}
#endif
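To illustrate the ordering trick, a hypothetical caller (struct foo and lock_foo_pair() are invented for this sketch) would take both semaphores through the helper; because every caller acquires the pair in ascending address order, two tasks locking the same two semaphores can't deadlock against each other:
struct foo {
	struct rw_semaphore sem;
};

static void lock_foo_pair(struct foo *x, struct foo *y)
{
	down_write_two(&x->sem, &y->sem);
	/* both now held for write, taken in address order */
	up_write(&y->sem);
	up_write(&x->sem);
}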

View File

@@ -577,10 +577,7 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_XATTR_SET(found_parts,
value != NULL,
name_len, size));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -781,7 +778,7 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
&tgs) != 0)
memset(&tgs, 0, sizeof(tgs));
ret = scoutfs_hold_trans(sb, SIC_EXACT(2, 0));
ret = scoutfs_hold_trans(sb);
if (ret < 0)
break;
release = true;

View File

@@ -1,4 +1,4 @@
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing -I ../kmod/src
SHELL := /usr/bin/bash
# each binary command is built from a single .c file
@@ -6,6 +6,7 @@ BIN := src/createmany \
src/dumb_setxattr \
src/handle_cat \
src/bulk_create_paths \
src/stage_tmpfile \
src/find_xattrs
DEPS := $(wildcard src/*.d)

View File

@@ -3,7 +3,7 @@
t_filter_fs()
{
sed -e 's@mnt/test\.[0-9]*@mnt/test@g' \
-e 's@Device: [a-fA-F0-7]*h/[0-9]*d@Device: 0h/0d@g'
-e 's@Device: [a-fA-F0-9]*h/[0-9]*d@Device: 0h/0d@g'
}
#
@@ -52,12 +52,15 @@ t_filter_dmesg()
# tests that drop unmount io triggers fencing
re="$re|scoutfs .* error: fencing "
re="$re|scoutfs .*: waiting for .* lock clients"
re="$re|scoutfs .*: all lock clients recovered"
re="$re|scoutfs .*: waiting for .* clients"
re="$re|scoutfs .*: all clients recovered"
re="$re|scoutfs .* error: client rid.*lock recovery timed out"
# some tests mount w/o options
re="$re|scoutfs .* error: Required mount option \"metadev_path\" not found"
# in debugging kernels we can slow things down a bit
re="$re|hrtimer: interrupt took .*"
egrep -v "($re)"
}

View File

@@ -28,8 +28,8 @@ t_ident()
local fsid
local rid
fsid=$(scoutfs statfs -s fsid "$mnt")
rid=$(scoutfs statfs -s rid "$mnt")
fsid=$(scoutfs statfs -s fsid -p "$mnt")
rid=$(scoutfs statfs -s rid -p "$mnt")
echo "f.${fsid:0:6}.r.${rid:0:6}"
}
@@ -99,6 +99,19 @@ t_first_client_nr()
t_fail "t_first_client_nr didn't find any clients"
}
#
# The number of quorum members needed to form a majority to start the
# server, e.g. 3 when T_QUORUM is 5.
#
t_majority_count()
{
if [ "$T_QUORUM" -lt 3 ]; then
echo 1
else
echo $(((T_QUORUM / 2) + 1))
fi
}
t_mount()
{
local nr="$1"
@@ -116,7 +129,7 @@ t_umount()
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet umount \$T_DB$i
eval t_quiet umount \$T_M$nr
}
#
@@ -196,12 +209,19 @@ t_trigger_show() {
echo "trigger $which $string: $(t_trigger_get $which $nr)"
}
t_trigger_arm() {
t_trigger_arm_silent() {
local which="$1"
local nr="$2"
local path=$(t_trigger_path "$nr")
echo 1 > "$path/$which"
}
t_trigger_arm() {
local which="$1"
local nr="$2"
t_trigger_arm_silent $which $nr
t_trigger_show $which armed $nr
}
@@ -216,16 +236,44 @@ t_counter() {
cat "$(t_sysfs_path $nr)/counters/$which"
}
#
# output the difference between the current value of a counter and the
# caller's provided previous value.
#
t_counter_diff_value() {
local which="$1"
local old="$2"
local nr="$3"
local new="$(t_counter $which $nr)"
echo "$((new - old))"
}
#
# output the value of the given counter for the given mount, defaulting
# to mount 0 if a mount isn't specified.
# to mount 0 if a mount isn't specified. For tests which expect a
# specific difference in counters.
#
t_counter_diff() {
local which="$1"
local old="$2"
local nr="$3"
local new
new="$(t_counter $which $nr)"
echo "counter $which diff $((new - old))"
echo "counter $which diff $(t_counter_diff_value $which $old $nr)"
}
#
# output a message indicating whether or not the counter value changed.
# For tests that expect the counter to change (or not) when the exact
# amount of the difference isn't significant.
#
t_counter_diff_changed() {
local which="$1"
local old="$2"
local nr="$3"
local diff="$(t_counter_diff_value $which $old $nr)"
test "$diff" -eq 0 && \
echo "counter $which didn't change" ||
echo "counter $which changed"
}

View File

@@ -21,5 +21,20 @@ t_require_mounts() {
local req="$1"
test "$T_NR_MOUNTS" -ge "$req" || \
t_fail "$req mounts required, only have $T_NR_MOUNTS"
t_skip "$req mounts required, only have $T_NR_MOUNTS"
}
#
# Require that the meta device be at least the size string argument, as
# parsed by numfmt using single char base 2 suffixes (iec), e.g. 64G.
#
t_require_meta_size() {
local dev="$T_META_DEVICE"
local req_iec="$1"
local req_bytes=$(numfmt --from=iec --to=none $req_iec)
local dev_bytes=$(blockdev --getsize64 $dev)
local dev_iec=$(numfmt --from=auto --to=iec $dev_bytes)
test "$dev_bytes" -ge "$req_bytes" || \
t_skip "$dev must be at least $req_iec, is $dev_iec"
}

View File

@@ -0,0 +1,52 @@
== create shared test file
== set and get xattrs between mount pairs while retrying
# file: /mnt/test/test/block-stale-reads/file
user.xat="1"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="2"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="3"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="4"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="5"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="6"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="7"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="8"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="9"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="10"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed

View File

View File

@@ -0,0 +1,27 @@
== basic unlink deletes
ino found in dseq index
ino not found in dseq index
== local open-unlink waits for close to delete
contents after rm: contents
ino found in dseq index
ino not found in dseq index
== multiple local opens are protected
contents after rm 1: contents
contents after rm 2: contents
ino found in dseq index
ino not found in dseq index
== remote unopened unlink deletes
ino not found in dseq index
ino not found in dseq index
== unlink wait for open on other mount
mount 0 contents after mount 1 rm: contents
ino found in dseq index
ino found in dseq index
stat: cannot stat /mnt/test/test/inode-deletion/file: No such file or directory
ino not found in dseq index
ino not found in dseq index
== lots of deletions use one open map
== open files survive remote scanning orphans
mount 0 contents after mount 1 remounted: contents
ino not found in dseq index
ino not found in dseq index

View File

@@ -0,0 +1,3 @@
== create per mount files
== 30s of racing random mount/umount
== mounting any unmounted

33
tests/golden/move-blocks Normal file
View File

@@ -0,0 +1,33 @@
== build test files
== wrapped offsets should fail
ioctl failed on '/mnt/test/test/move-blocks/to': Value too large for defined data type (75)
scoutfs: move-blocks failed: Value too large for defined data type (75)
ioctl failed on '/mnt/test/test/move-blocks/to': Value too large for defined data type (75)
scoutfs: move-blocks failed: Value too large for defined data type (75)
== specifying same file fails
ioctl failed on '/mnt/test/test/move-blocks/hardlink': Invalid argument (22)
scoutfs: move-blocks failed: Invalid argument (22)
== specifying files in other file systems fails
ioctl failed on '/mnt/test/test/move-blocks/to': Invalid cross-device link (18)
scoutfs: move-blocks failed: Invalid cross-device link (18)
== offsets must be multiples of 4KB
ioctl failed on '/mnt/test/test/move-blocks/to': Invalid argument (22)
scoutfs: move-blocks failed: Invalid argument (22)
ioctl failed on '/mnt/test/test/move-blocks/to': Invalid argument (22)
scoutfs: move-blocks failed: Invalid argument (22)
ioctl failed on '/mnt/test/test/move-blocks/to': Invalid argument (22)
scoutfs: move-blocks failed: Invalid argument (22)
== can't move onto existing extent
ioctl failed on '/mnt/test/test/move-blocks/to': Invalid argument (22)
scoutfs: move-blocks failed: Invalid argument (22)
== can't move between files with offline extents
ioctl failed on '/mnt/test/test/move-blocks/to': No data available (61)
scoutfs: move-blocks failed: No data available (61)
ioctl failed on '/mnt/test/test/move-blocks/to': No data available (61)
scoutfs: move-blocks failed: No data available (61)
== basic moves work
== moving final partial block sets partial i_size
123
== moving updates inode fields
== moving blocks backwards works
== combine many files into one

View File

@@ -1,6 +1,6 @@
== create files
== waiter shows up in ioctl
offline wating should be empty:
offline waiting should be empty:
0
offline waiting should now have one known entry:
== multiple waiters on same block listed once
@@ -8,7 +8,7 @@ offline waiting still has one known entry:
== different blocks show up
offline waiting now has two known entries:
== staging wakes everyone
offline wating should be empty again:
offline waiting should be empty again:
0
== interruption does no harm
offline waiting should now have one known entry:

View File

@@ -1,9 +1,9 @@
== 0 data_version arg fails
setattr_more ioctl failed on '/mnt/test/test/setattr_more/file': Invalid argument (22)
scoutfs: setattr failed: Invalid argument (22)
setattr: data version must not be 0
Try `setattr --help' or `setattr --usage' for more information.
== args must specify size and offline
setattr_more ioctl failed on '/mnt/test/test/setattr_more/file': Invalid argument (22)
scoutfs: setattr failed: Invalid argument (22)
setattr: must provide size if using --offline option
Try `setattr --help' or `setattr --usage' for more information.
== only works on regular files
failed to open '/mnt/test/test/setattr_more/dir': Is a directory (21)
scoutfs: setattr failed: Is a directory (21)

View File

@@ -8,16 +8,16 @@
release ioctl failed: Invalid argument (22)
scoutfs: release failed: Invalid argument (22)
== releasing non-file fails
ioctl failed on '/mnt/test/test/simple-release-extents/file-char': Inappropriate ioctl for device (25)
release ioctl failed: Inappropriate ioctl for device (25)
scoutfs: release failed: Inappropriate ioctl for device (25)
ioctl failed: Inappropriate ioctl for device (25)
release: must provide file version --data-version
Try `release --help' or `release --usage' for more information.
== releasing a non-scoutfs file fails
ioctl failed on '/dev/null': Inappropriate ioctl for device (25)
release ioctl failed: Inappropriate ioctl for device (25)
scoutfs: release failed: Inappropriate ioctl for device (25)
ioctl failed: Inappropriate ioctl for device (25)
release: must provide file version --data-version
Try `release --help' or `release --usage' for more information.
== releasing bad version fails
release ioctl failed: Stale file handle (116)
scoutfs: release failed: Stale file handle (116)
release: must provide file version --data-version
Try `release --help' or `release --usage' for more information.
== verify small release merging
0 0 0: (0 0 1) (1 101 4)
0 0 1: (0 0 2) (2 102 3)

View File

@@ -4,8 +4,8 @@
== release+stage shouldn't change stat, data seq or vers
== stage does change meta_seq
== can't use stage to extend online file
stage returned -1, not 4096: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
stage: must provide file version with --data-version
Try `stage --help' or `stage --usage' for more information.
== wrapped region fails
stage returned -1, not 4096: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
@@ -18,6 +18,6 @@ scoutfs: stage failed: Input/output error (5)
== partial final block that writes to i_size does work
== zero length stage doesn't bring blocks online
== stage of non-regular file fails
ioctl failed on '/mnt/test/test/simple-staging/file-char': Inappropriate ioctl for device (25)
stage returned -1, not 1: error Inappropriate ioctl for device (25)
scoutfs: stage failed: Input/output error (5)
ioctl failed: Inappropriate ioctl for device (25)
stage: must provide file version with --data-version
Try `stage --help' or `stage --usage' for more information.

View File

View File

@@ -0,0 +1,18 @@
total file size 33669120
00000000 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 |AAAAAAAAAAAAAAAA|
*
00400000 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 |BBBBBBBBBBBBBBBB|
*
00801000 43 43 43 43 43 43 43 43 43 43 43 43 43 43 43 43 |CCCCCCCCCCCCCCCC|
*
00c03000 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 |DDDDDDDDDDDDDDDD|
*
01006000 45 45 45 45 45 45 45 45 45 45 45 45 45 45 45 45 |EEEEEEEEEEEEEEEE|
*
0140a000 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 |FFFFFFFFFFFFFFFF|
*
0180f000 47 47 47 47 47 47 47 47 47 47 47 47 47 47 47 47 |GGGGGGGGGGGGGGGG|
*
01c15000 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 |HHHHHHHHHHHHHHHH|
*
0201c000

View File

@@ -1,11 +0,0 @@
== create file for xattr ping pong
# file: /mnt/test/test/stale-btree-read/file
user.xat="initial"
== retry btree block read
trigger btree_stale_read armed: 1
# file: /mnt/test/test/stale-btree-read/file
user.xat="btree"
trigger btree_stale_read after: 0
counter btree_stale_read diff 1

View File

@@ -1,6 +1,7 @@
Ran:
generic/001
generic/002
generic/004
generic/005
generic/006
generic/007
@@ -73,7 +74,6 @@ generic/376
generic/377
Not
run:
generic/004
generic/008
generic/009
generic/012
@@ -278,4 +278,4 @@ shared/004
shared/032
shared/051
shared/289
Passed all 72 tests
Passed all 73 tests

View File

@@ -52,16 +52,17 @@ $(basename $0) options:
| the file system to be tested. Will be clobbered by -m mkfs.
-m | Run mkfs on the device before mounting and running
| tests. Implies unmounting existing mounts first.
-n | The number of devices and mounts to test.
-P | Output trace events with printk as they're generated.
-n <nr> | The number of devices and mounts to test.
-P | Enable trace_printk.
-p | Exit script after preparing mounts only, don't run tests.
-q <nr> | Specify the quorum count needed to mount. This is
| used when running mkfs and is needed by a few tests.
-q <nr> | The first <nr> mounts will be quorum members. Must be
| at least 1 and no greater than -n number of mounts.
-r <dir> | Specify the directory in which to store results of
| test runs. The directory will be created if it doesn't
| exist. Previous results will be deleted as each test runs.
-s | Skip git repo checkouts.
-t | Enable trace events that match the given glob argument.
| Multiple options enable multiple globbed events.
-X | xfstests git repo. Used by tests/xfstests.sh.
-x | xfstests git branch to checkout and track.
-y | xfstests ./check additional args
@@ -77,6 +78,9 @@ done
T_TRACE_DUMP="0"
T_TRACE_PRINTK="0"
# array declarations to be able to use array ops
declare -a T_TRACE_GLOB
while true; do
case $1 in
-a)
@@ -147,7 +151,7 @@ while true; do
;;
-t)
test -n "$2" || die "-t must have trace glob argument"
T_TRACE_GLOB="$2"
T_TRACE_GLOB+=("$2")
shift
;;
-X)
@@ -195,7 +199,6 @@ test -e "$T_EX_META_DEV" || die "extra meta device -f '$T_EX_META_DEV' doesn't e
test -n "$T_EX_DATA_DEV" || die "must specify -e extra data device"
test -e "$T_EX_DATA_DEV" || die "extra data device -e '$T_EX_DATA_DEV' doesn't exist"
test -n "$T_MKFS" -a -z "$T_QUORUM" && die "mkfs (-m) requires quorum (-q)"
test -n "$T_RESULTS" || die "must specify -r results dir"
test -n "$T_XFSTESTS_REPO" -a -z "$T_XFSTESTS_BRANCH" -a -z "$T_SKIP_CHECKOUT" && \
die "-X xfstests repo requires -x xfstests branch"
@@ -205,6 +208,12 @@ test -n "$T_XFSTESTS_BRANCH" -a -z "$T_XFSTESTS_REPO" -a -z "$T_SKIP_CHECKOUT" &
test -n "$T_NR_MOUNTS" || die "must specify -n nr mounts"
test "$T_NR_MOUNTS" -ge 1 -a "$T_NR_MOUNTS" -le 8 || \
die "-n nr mounts must be >= 1 and <= 8"
test -n "$T_QUORUM" || \
die "must specify -q number of mounts that are quorum members"
test "$T_QUORUM" -ge "1" || \
die "-q quorum mmembers must be at least 1"
test "$T_QUORUM" -le "$T_NR_MOUNTS" || \
die "-q quorum mmembers must not be greater than -n mounts"
# top level paths
T_KMOD=$(realpath "$(dirname $0)/../kmod")
@@ -303,8 +312,14 @@ if [ -n "$T_UNMOUNT" ]; then
unmount_all
fi
quo=""
if [ -n "$T_MKFS" ]; then
cmd scoutfs mkfs -Q "$T_QUORUM" "$T_META_DEVICE" "$T_DATA_DEVICE"
for i in $(seq 0 $((T_QUORUM - 1))); do
quo="$quo -Q $i,127.0.0.1,$((42000 + i))"
done
msg "making new filesystem with $T_QUORUM quorum members"
cmd scoutfs mkfs -f $quo "$T_META_DEVICE" "$T_DATA_DEVICE"
fi
if [ -n "$T_INSMOD" ]; then
@@ -314,23 +329,37 @@ if [ -n "$T_INSMOD" ]; then
cmd insmod "$T_KMOD/src/scoutfs.ko"
fi
if [ -n "$T_TRACE_GLOB" ]; then
msg "enabling trace events"
nr_globs=${#T_TRACE_GLOB[@]}
if [ $nr_globs -gt 0 ]; then
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
for g in $T_TRACE_GLOB; do
for g in "${T_TRACE_GLOB[@]}"; do
for e in /sys/kernel/debug/tracing/events/scoutfs/$g/enable; do
echo 1 > $e
if test -w "$e"; then
echo 1 > "$e"
else
die "-t glob '$g' matched no scoutfs events"
fi
done
done
echo "$T_TRACE_DUMP" > /proc/sys/kernel/ftrace_dump_on_oops
echo "$T_TRACE_PRINTK" > /sys/kernel/debug/tracing/options/trace_printk
cmd cat /sys/kernel/debug/tracing/set_event
cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/proc/sys/kernel/ftrace_dump_on_oops
nr_events=$(cat /sys/kernel/debug/tracing/set_event | wc -l)
msg "enabled $nr_events trace events from $nr_globs -t globs"
fi
if [ -n "$T_TRACE_PRINTK" ]; then
echo "$T_TRACE_PRINTK" > /sys/kernel/debug/tracing/options/trace_printk
fi
if [ -n "$T_TRACE_DUMP" ]; then
echo "$T_TRACE_DUMP" > /proc/sys/kernel/ftrace_dump_on_oops
fi
# always describe tracing in the logs
cmd cat /sys/kernel/debug/tracing/set_event
cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/proc/sys/kernel/ftrace_dump_on_oops
#
# mount concurrently so that a quorum is present to elect the leader and
# start a server.
@@ -347,8 +376,12 @@ for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
dir="/mnt/test.$i"
test -d "$dir" || cmd mkdir -p "$dir"
opts="-o metadev_path=$meta_dev"
if [ "$i" -lt "$T_QUORUM" ]; then
opts="$opts,quorum_slot_nr=$i"
fi
msg "mounting $meta_dev|$data_dev on $dir"
opts="-o server_addr=127.0.0.1,metadev_path=$meta_dev"
cmd mount -t scoutfs $opts "$data_dev" "$dir" &
p="$!"
@@ -434,7 +467,7 @@ for t in $tests; do
# get stats from previous pass
last="$T_RESULTS/last-passed-test-stats"
stats=$(grep -s "^$test_name" "$last" | cut -d " " -f 2-)
stats=$(grep -s "^$test_name " "$last" | cut -d " " -f 2-)
test -n "$stats" && stats="last: $stats"
printf " %-30s $stats" "$test_name"
@@ -497,7 +530,7 @@ for t in $tests; do
echo " passed: $stats"
((passed++))
# save stats for passed test
grep -s -v "^$test_name" "$last" > "$last.tmp"
grep -s -v "^$test_name " "$last" > "$last.tmp"
echo "$test_name $stats" >> "$last.tmp"
mv -f "$last.tmp" "$last"
elif [ "$sts" == "$T_SKIP_STATUS" ]; then
@@ -515,23 +548,24 @@ done
msg "all tests run: $passed passed, $skipped skipped, $failed failed"
unmount_all
if [ -n "$T_TRACE_GLOB" ]; then
if [ -n "$T_TRACE_GLOB" -o -n "$T_TRACE_PRINTK" ]; then
msg "saving traces and disabling tracing"
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
echo 0 > /sys/kernel/debug/tracing/options/trace_printk
cat /sys/kernel/debug/tracing/trace > "$T_RESULTS/traces"
fi
if [ "$skipped" == 0 -a "$failed" == 0 ]; then
msg "all tests passed"
unmount_all
exit 0
fi
if [ "$skipped" != 0 ]; then
msg "$skipped tests skipped, check skip.log"
msg "$skipped tests skipped, check skip.log, still mounted"
fi
if [ "$failed" != 0 ]; then
msg "$failed tests failed, check fail.log"
msg "$failed tests failed, check fail.log, still mounted"
fi
exit 1

View File

@@ -6,16 +6,20 @@ simple-staging.sh
simple-release-extents.sh
setattr_more.sh
offline-extent-waiting.sh
move-blocks.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
lock-refleak.sh
lock-shrink-consistency.sh
lock-pr-cw-conflict.sh
lock-revoke-getcwd.sh
export-lookup-evict-race.sh
createmany-parallel.sh
createmany-large-names.sh
createmany-rename-large-dir.sh
stage-release-race-alloc.sh
stage-multi-part.sh
stage-tmpfile.sh
basic-posix-consistency.sh
dirent-consistency.sh
lock-ex-race-processes.sh
@@ -26,5 +30,6 @@ setup-error-teardown.sh
mount-unmount-race.sh
createmany-parallel-mounts.sh
archive-light-cycle.sh
stale-btree-read.sh
block-stale-reads.sh
inode-deletion.sh
xfstests.sh

145
tests/src/stage_tmpfile.c Normal file
View File

@@ -0,0 +1,145 @@
/*
* Exercise O_TMPFILE creation as well as staging from tmpfiles into
* a released destination file.
*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <linux/types.h>
#include <assert.h>
#include "ioctl.h"
#define array_size(arr) (sizeof(arr) / sizeof(arr[0]))
/*
* Write known data into 8 tmpfiles.
* Make a new file X and release it
* Move contents of 8 tmpfiles into X.
*/
struct sub_tmp_info {
int fd;
unsigned int offset;
unsigned int length;
};
#define SZ 4096
char buf[SZ];
int main(int argc, char **argv)
{
struct scoutfs_ioctl_release ioctl_args = {0};
struct scoutfs_ioctl_move_blocks mb;
struct sub_tmp_info sub_tmps[8];
int tot_size = 0;
char *dest_file;
int dest_fd;
char *mnt;
int ret;
int i;
if (argc < 3) {
printf("%s <mountpoint> <dest_file>\n", argv[0]);
return 1;
}
mnt = argv[1];
dest_file = argv[2];
for (i = 0; i < array_size(sub_tmps); i++) {
struct sub_tmp_info *sub_tmp = &sub_tmps[i];
int remaining;
sub_tmp->fd = open(mnt, O_RDWR | O_TMPFILE, S_IRUSR | S_IWUSR);
if (sub_tmp->fd < 0) {
perror("error");
exit(1);
}
sub_tmp->offset = tot_size;
/* first tmpfile is 4MB; each subsequent one is 4KB bigger */
sub_tmp->length = (i + 1024) * sizeof(buf);
remaining = sub_tmp->length;
/* Each sub tmpfile written with 'A', 'B', etc. */
memset(buf, 'A' + i, sizeof(buf));
while (remaining) {
int written;
written = write(sub_tmp->fd, buf, sizeof(buf));
assert(written == sizeof(buf));
tot_size += sizeof(buf);
remaining -= written;
}
}
printf("total file size %d\n", tot_size);
dest_fd = open(dest_file, O_RDWR | O_CREAT, S_IRUSR | S_IWUSR);
if (dest_fd == -1) {
perror("error");
exit(1);
}
// make dest file big
ret = posix_fallocate(dest_fd, 0, tot_size);
if (ret) {
perror("error");
exit(1);
}
// release everything in dest file
ioctl_args.offset = 0;
ioctl_args.length = tot_size;
ioctl_args.data_version = 0;
ret = ioctl(dest_fd, SCOUTFS_IOC_RELEASE, &ioctl_args);
if (ret < 0) {
perror("error");
exit(1);
}
// move contents into dest in reverse order
for (i = array_size(sub_tmps) - 1; i >= 0 ; i--) {
struct sub_tmp_info *sub_tmp = &sub_tmps[i];
mb.from_fd = sub_tmp->fd;
mb.from_off = 0;
mb.len = sub_tmp->length;
mb.to_off = sub_tmp->offset;
mb.data_version = 0;
mb.flags = SCOUTFS_IOC_MB_STAGE;
ret = ioctl(dest_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
if (ret < 0) {
perror("error");
exit(1);
}
}
return 0;
}

View File

@@ -161,9 +161,9 @@ for n in $(t_fs_nrs); do
echo "bash $gen $blocks $n $p $f > $path" >> $create
echo "cmp $path <(bash $gen $blocks $n $p $f)" >> $verify
echo "vers=\$(scoutfs stat -s data_version $path)" >> $release
echo "scoutfs release $path \$vers 0 $blocks" >> $release
echo "scoutfs release $path -V \$vers -o 0 -l $bytes" >> $release
echo "vers=\$(scoutfs stat -s data_version $path)" >> $stage
echo "scoutfs stage $path \$vers 0 $bytes <(bash $gen $blocks $n $p $f)" >> $stage
echo "scoutfs stage <(bash $gen $blocks $n $p $f) $path -V \$vers -o 0 -l $bytes " >> $stage
echo "rm -f $path" >> $unlink
echo "x=\$(scoutfs stat -s online_blocks $path)" >> $online

View File

@@ -9,14 +9,14 @@ t_require_commands scoutfs dd truncate touch mkdir rm rmdir
release_vers() {
local file="$1"
local vers="$2"
local block="$3"
local count="$4"
local offset="$3"
local length="$4"
if [ "$vers" == "stat" ]; then
vers=$(scoutfs stat -s data_version "$file")
fi
scoutfs release "$file" "$vers" "$block" "$count"
scoutfs release "$file" -V "$vers" -o "$offset" -l "$length"
}
# if vers is "stat" then we ask stat_more for the data_version
@@ -24,14 +24,14 @@ stage_vers() {
local file="$1"
local vers="$2"
local offset="$3"
local count="$4"
local length="$4"
local contents="$5"
if [ "$vers" == "stat" ]; then
vers=$(scoutfs stat -s data_version "$file")
fi
scoutfs stage "$file" "$vers" "$offset" "$count" "$contents"
scoutfs stage "$contents" "$file" -V "$vers" -o "$offset" -l "$length"
}
echo_blocks()
@@ -57,15 +57,15 @@ dd if=/dev/zero of="$FILE" bs=4K count=1 conv=notrunc oflag=append status=none
echo_blocks "$FILE"
echo "== release"
release_vers "$FILE" stat 0 2
release_vers "$FILE" stat 0 8K
echo_blocks "$FILE"
echo "== duplicate release"
release_vers "$FILE" stat 0 2
release_vers "$FILE" stat 0 8K
echo_blocks "$FILE"
echo "== duplicate release past i_size"
release_vers "$FILE" stat 0 16
release_vers "$FILE" stat 0 64K
echo_blocks "$FILE"
echo "== stage"

View File

@@ -160,8 +160,8 @@ for i in $(seq 1 1); do
mkdir -p $(dirname $lnk)
ln "$T_D0/file" $lnk
scoutfs ino-path $ino "$T_M0" > "$T_TMP.0"
scoutfs ino-path $ino "$T_M1" > "$T_TMP.1"
scoutfs ino-path -p "$T_M0" $ino > "$T_TMP.0"
scoutfs ino-path -p "$T_M1" $ino > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
done
done
@@ -169,32 +169,32 @@ rm -rf "$T_D0/dir"
echo "== inode indexes match after syncing existing"
t_sync_seq_index
scoutfs walk-inodes meta_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes meta_seq 0 -1 "$T_M1" > "$T_TMP.1"
scoutfs walk-inodes -p "$T_M0" -- meta_seq 0 -1 > "$T_TMP.0"
scoutfs walk-inodes -p "$T_M1" -- meta_seq 0 -1 > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
scoutfs walk-inodes data_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes data_seq 0 -1 "$T_M1" > "$T_TMP.1"
scoutfs walk-inodes -p "$T_M0" -- data_seq 0 -1 > "$T_TMP.0"
scoutfs walk-inodes -p "$T_M1" -- data_seq 0 -1 > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== inode indexes match after copying and syncing"
mkdir "$T_D0/dir"
cp -ar /boot/conf* "$T_D0/dir"
t_sync_seq_index
scoutfs walk-inodes meta_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes meta_seq 0 -1 "$T_M1" > "$T_TMP.1"
scoutfs walk-inodes -p "$T_M0" -- meta_seq 0 -1 > "$T_TMP.0"
scoutfs walk-inodes -p "$T_M1" -- meta_seq 0 -1 > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
scoutfs walk-inodes data_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes data_seq 0 -1 "$T_M1" > "$T_TMP.1"
scoutfs walk-inodes -p "$T_M0" -- data_seq 0 -1 > "$T_TMP.0"
scoutfs walk-inodes -p "$T_M1" -- data_seq 0 -1 > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== inode indexes match after removing and syncing"
rm -f "$T_D1/dir/conf*"
t_sync_seq_index
scoutfs walk-inodes meta_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes meta_seq 0 -1 "$T_M1" > "$T_TMP.1"
scoutfs walk-inodes -p "$T_M0" -- meta_seq 0 -1 > "$T_TMP.0"
scoutfs walk-inodes -p "$T_M1" -- meta_seq 0 -1 > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
scoutfs walk-inodes data_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes data_seq 0 -1 "$T_M1" > "$T_TMP.1"
scoutfs walk-inodes -p "$T_M0" -- data_seq 0 -1 > "$T_TMP.0"
scoutfs walk-inodes -p "$T_M1" -- data_seq 0 -1 > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
t_pass

View File

@@ -0,0 +1,61 @@
#
# Exercise stale block reading.
#
# It would be very difficult to manipulate the allocators, cache, and
# persistent blocks to create stale block reading scenarios. Instead
# we use triggers to exercise how readers encounter stale blocks.
#
t_require_commands touch setfattr getfattr
inc_wrap_fs_nr()
{
local nr="$(($1 + 1))"
if [ "$nr" == "$T_NR_MOUNTS" ]; then
nr=0
fi
echo $nr
}
GETFATTR="getfattr --absolute-names"
SETFATTR="setfattr"
echo "== create shared test file"
touch "$T_D0/file"
$SETFATTR -n user.xat -v 0 "$T_D0/file"
#
# Trigger retries in the block cache as we bounce xattr values around
# between sequential pairs of mounts. This is a little silly because if
# either of the mounts is the server then it'll almost certainly have
# its trigger fired prematurely by message handling btree calls while
# working with the t_ helpers long before we work with the xattrs. But
# the block cache stale retry path is still being exercised.
#
echo "== set and get xattrs between mount pairs while retrying"
set_nr=0
get_nr=$(inc_wrap_fs_nr $set_nr)
for i in $(seq 1 10); do
eval set_file="\$T_D${set_nr}/file"
eval get_file="\$T_D${get_nr}/file"
old_set=$(t_counter block_cache_remove_stale $set_nr)
old_get=$(t_counter block_cache_remove_stale $get_nr)
t_trigger_arm_silent block_remove_stale $set_nr
t_trigger_arm_silent block_remove_stale $get_nr
$SETFATTR -n user.xat -v $i "$set_file"
$GETFATTR -n user.xat "$get_file" 2>&1 | t_filter_fs
t_counter_diff_changed block_cache_remove_stale $old_set $set_nr
t_counter_diff_changed block_cache_remove_stale $old_get $get_nr
set_nr="$get_nr"
get_nr=$(inc_wrap_fs_nr $set_nr)
done
t_pass

View File

@@ -0,0 +1,32 @@
#
# test racing fh_to_dentry with evict from lock invalidation. We've
# had deadlocks between the ordering of iget and evict when they acquire
# cluster locks.
#
t_require_commands touch stat handle_cat
t_require_mounts 2
CPUS=$(getconf _NPROCESSORS_ONLN)
NR=$((CPUS * 4))
END=$((SECONDS + 30))
touch "$T_D0/file"
ino=$(stat -c "%i" "$T_D0/file")
while test $SECONDS -lt $END; do
for i in $(seq 1 $NR); do
fs=$((RANDOM % T_NR_MOUNTS))
eval dir="\$T_D${fs}"
write=$((RANDOM & 1))
if [ "$write" == 1 ]; then
touch "$dir/file" &
else
handle_cat "$dir" "$ino" &
fi
done
wait
done
t_pass

View File

@@ -0,0 +1,98 @@
#
# test deleting an inode once all its links and references are gone.
#
t_require_commands cat scoutfs
t_require_mounts 2
FILE="$T_D0/file"
check_ino_index() {
local ino="$1"
local dseq="$2"
local mnt="$3"
t_sync_seq_index
scoutfs walk-inodes -p "$mnt" -- data_seq $dseq $(($dseq + 1)) |
awk 'BEGIN { not = "not " }
($4 == '$ino') { not = ""; exit; }
END { print "ino " not "found in dseq index" }'
}
echo "== basic unlink deletes"
echo "contents" > "$FILE"
ino=$(stat -c "%i" "$FILE")
dseq=$(scoutfs stat -s data_seq "$FILE")
check_ino_index "$ino" "$dseq" "$T_M0"
rm -f "$FILE"
check_ino_index "$ino" "$dseq" "$T_M0"
echo "== local open-unlink waits for close to delete"
echo "contents" > "$FILE"
ino=$(stat -c "%i" "$FILE")
dseq=$(scoutfs stat -s data_seq "$FILE")
exec {FD}<"$FILE" # open unused fd, assign to FD
rm -f "$FILE"
echo "contents after rm: $(cat <&$FD)"
check_ino_index "$ino" "$dseq" "$T_M0"
exec {FD}>&- # close
check_ino_index "$ino" "$dseq" "$T_M0"
echo "== multiple local opens are protected"
echo "contents" > "$FILE"
ino=$(stat -c "%i" "$FILE")
dseq=$(scoutfs stat -s data_seq "$FILE")
exec {FD1}<"$FILE"
exec {FD2}<"$FILE"
rm -f "$FILE"
echo "contents after rm 1: $(cat <&$FD1)"
echo "contents after rm 2: $(cat <&$FD2)"
check_ino_index "$ino" "$dseq" "$T_M0"
exec {FD1}>&- # close
exec {FD2}>&- # close
check_ino_index "$ino" "$dseq" "$T_M0"
echo "== remote unopened unlink deletes"
echo "contents" > "$T_D0/file"
ino=$(stat -c "%i" "$T_D0/file")
dseq=$(scoutfs stat -s data_seq "$T_D0/file")
rm -f "$T_D1/file"
check_ino_index "$ino" "$dseq" "$T_M0"
check_ino_index "$ino" "$dseq" "$T_M1"
echo "== unlink wait for open on other mount"
echo "contents" > "$T_D0/file"
ino=$(stat -c "%i" "$T_D0/file")
dseq=$(scoutfs stat -s data_seq "$T_D0/file")
exec {FD}<"$T_D0/file"
rm -f "$T_D1/file"
echo "mount 0 contents after mount 1 rm: $(cat <&$FD)"
check_ino_index "$ino" "$dseq" "$T_M0"
check_ino_index "$ino" "$dseq" "$T_M1"
exec {FD}>&- # close
# we know that revalidating will unhash the remote dentry
stat "$T_D0/file" 2>&1 | t_filter_fs
check_ino_index "$ino" "$dseq" "$T_M0"
check_ino_index "$ino" "$dseq" "$T_M1"
echo "== lots of deletions use one open map"
mkdir "$T_D0/dir"
touch "$T_D0/dir"/files-{1..5}
rm -f "$T_D0/dir"/files-*
rmdir "$T_D0/dir"
echo "== open files survive remote scanning orphans"
echo "contents" > "$T_D0/file"
ino=$(stat -c "%i" "$T_D0/file")
dseq=$(scoutfs stat -s data_seq "$T_D0/file")
exec {FD}<"$T_D0/file"
rm -f "$T_D0/file"
t_umount 1
t_mount 1
echo "mount 0 contents after mount 1 remounted: $(cat <&$FD)"
exec {FD}>&- # close
check_ino_index "$ino" "$dseq" "$T_M0"
check_ino_index "$ino" "$dseq" "$T_M1"
t_pass

View File

@@ -30,7 +30,7 @@ echo "== create files and sync"
dd if=/dev/zero of="$DIR/truncate" bs=4096 count=1 status=none
dd if=/dev/zero of="$DIR/stage" bs=4096 count=1 status=none
vers=$(scoutfs stat -s data_version "$DIR/stage")
scoutfs release "$DIR/stage" $vers 0 1
scoutfs release "$DIR/stage" -V $vers -o 0 -l 4K
dd if=/dev/zero of="$DIR/release" bs=4096 count=1 status=none
touch "$DIR/write_end"
mkdir "$DIR"/{mknod_dir,link_dir,unlink_dir,symlink_dir,rename_dir}
@@ -41,9 +41,9 @@ sync; sync
echo "== modify files"
truncate -s 0 "$DIR/truncate"
vers=$(scoutfs stat -s data_version "$DIR/stage")
scoutfs stage "$DIR/stage" $vers 0 4096 /dev/zero
scoutfs stage /dev/zero "$DIR/stage" -V $vers -o 0 -l 4096
vers=$(scoutfs stat -s data_version "$DIR/release")
scoutfs release "$DIR/release" $vers 0 1
scoutfs release "$DIR/release" -V $vers -o 0 -l 4K
dd if=/dev/zero of="$DIR/write_end" bs=4096 count=1 status=none conv=notrunc
touch $DIR/mknod_dir/mknod_file
touch $DIR/link_dir/link_targ

View File

@@ -50,7 +50,7 @@ for m in 0 1; do
done
wait
CONF="$((SECONDS - START))"
echo "conf: $IND" >> $T_TMP.log
echo "conf: $CONF" >> $T_TMP.log
if [ "$CONF" -gt "$((IND * 5))" ]; then
t_fail "conflicting $CONF secs is more than 5x independent $IND secs"

View File

@@ -9,7 +9,7 @@ FILE="$T_D0/file"
echo "== race writing and index walking"
for i in $(seq 1 10); do
dd if=/dev/zero of="$FILE" bs=4K count=1 status=none conv=notrunc &
scoutfs walk-inodes data_seq 0 -1 "$T_M0" > /dev/null &
scoutfs walk-inodes -p "$T_M0" -- data_seq 0 -1 > /dev/null &
wait
done

View File

@@ -4,25 +4,23 @@
# At the start of the test all mounts are mounted. Each iteration
# randomly decides to change each mount or to leave it alone.
#
# They create dirty items before unmounting to encourage compaction
# while unmounting
# Each iteration creates dirty items across the mounts randomly, giving
# unmount some work to do.
#
# For this test to be meaningful it needs multiple mounts beyond the
# quorum set which can be racing to mount and unmount. A reasonable
# config would be 5 mounts with 3 quorum. But the test will run with
# whatever count it finds.
# quorum majority which can be racing to mount and unmount. A
# reasonable config would be 5 mounts with 3 quorum members. But the
# test will run with whatever count it finds.
#
# This assumes that all the mounts are configured as voting servers. We
# could update it to be more clever and know that it can always safely
# unmount mounts that aren't configured as servers.
# The test assumes that the first mounts are the quorum members.
#
# nothing to do if we can't unmount
test "$T_NR_MOUNTS" == "$T_QUORUM" && \
t_skip "only quorum members mounted, can't unmount"
majority_nr=$(t_majority_count)
quorum_nr=$T_QUORUM
nr_mounted=$T_NR_MOUNTS
nr_quorum=$T_QUORUM
cur_quorum=$quorum_nr
test "$cur_quorum" == "$majority_nr" && \
t_skip "all quorum members make up majority, need more mounts to unmount"
echo "== create per mount files"
for i in $(t_fs_nrs); do
@@ -55,25 +53,42 @@ while [ "$SECONDS" -lt "$END" ]; do
fi
if [ "${mounted[$i]}" == 1 ]; then
if [ "$nr_mounted" -gt "$nr_quorum" ]; then
#
# can always unmount non-quorum mounts,
# can only unmount quorum members beyond majority
#
if [ "$i" -ge "$quorum_nr" -o \
"$cur_quorum" -gt "$majority_nr" ]; then
t_umount $i &
pid=$!
echo "umount $i pid $pid quo $cur_quorum" \
>> $T_TMP.log
pids="$pids $pid"
mounted[$i]=0
(( nr_mounted-- ))
if [ "$i" -lt "$quorum_nr" ]; then
(( cur_quorum-- ))
fi
fi
else
t_mount $i &
pid=$!
pids="$pids $pid"
echo "mount $i pid $pid quo $cur_quorum" >> $T_TMP.log
mounted[$i]=1
(( nr_mounted++ ))
if [ "$i" -lt "$quorum_nr" ]; then
(( cur_quorum++ ))
fi
fi
done
echo "waiting (secs $SECONDS)" >> $T_TMP.log
for p in $pids; do
t_quiet wait $p
wait $p
rc=$?
if [ "$rc" != 0 ]; then
echo "waiting for pid $p returned $rc"
t_fail "background mount/umount returned error"
fi
done
echo "done waiting (secs $SECONDS))" >> $T_TMP.log
done

169
tests/tests/move-blocks.sh Normal file
View File

@@ -0,0 +1,169 @@
#
# test MOVE_BLOCKS ioctl, mostly basic error testing and functionality,
# but a bit of expected use.
#
t_require_commands scoutfs dd
FROM="$T_D0/from"
TO="$T_D0/to"
HARD="$T_D0/hardlink"
OTHER="$T_TMP.other"
BLOCKS=8
BS=4096
PART=123
LEN=$(((BS * BLOCKS) + PART))
PIECES=8
regenerate_files() {
rm -f "$FROM"
rm -f "$TO"
dd if=/dev/urandom of="$FROM" bs=$LEN count=1 status=none
touch "$TO"
}
set_updated_fields() {
local arr="$1"
local path="$2"
eval $arr["ctime"]="$(stat -c '%Z' "$path")"
eval $arr["mtime"]="$(stat -c '%Y' "$path")"
eval $arr["data_version"]="$(scoutfs stat -s data_version "$path")"
eval $arr["meta_seq"]="$(scoutfs stat -s meta_seq "$path")"
eval $arr["data_seq"]="$(scoutfs stat -s data_seq "$path")"
}
#
# before moving extents manually copy the byte regions so that we have
# expected good file contents to compare to. We know that the byte
# regions are 4KB block aligned (with an allowance for a len that ends
# at the from file's i_size).
#
move_and_compare() {
local from="$1"
local from_off="$2"
local from_blk="$((from_off / BS))"
local len="$3"
local blocks="$(((len + BS - 1) / BS))"
local to="$4"
local to_off="$5"
local to_blk="$((to_off / BS))"
local right_start=$((from_blk + blocks))
local from_size=$(stat -c "%s" "$from")
local from_blocks=$(( (from_size + BS - 1) / BS ))
local right_len=$((from_blocks - right_start))
# copying around instead of punching hole
dd if="$from" of="$from.expected" bs="$BS" \
skip=0 seek=0 count="$from_blk" \
status=none
dd if="$from" of="$from.expected" bs="$BS" \
skip="$right_start" seek="$right_start" count="$right_len" \
status=none conv=notrunc
# moving doesn't truncate, expect full size when no data
truncate -s "$from_size" "$from.expected"
cp "$to" "$to.expected"
dd if="$from" of="$to.expected" bs="$BS" \
skip="$from_blk" seek="$to_blk" count="$blocks" \
status=none conv=notrunc
scoutfs move-blocks "$from" -f "$from_off" -l "$len" "$to" -t "$to_off" \
2>&1 | t_filter_fs
cmp "$from" "$from.expected"
cmp "$to" "$to.expected"
}
echo "== build test files"
regenerate_files
touch "$OTHER"
ln "$FROM" "$HARD"
echo "== wrapped offsets should fail"
HUGE=0x8000000000000000
scoutfs move-blocks "$FROM" -f "$HUGE" -l "$HUGE" "$TO" -t 0 2>&1 | t_filter_fs
scoutfs move-blocks "$FROM" -f 0 -l "$HUGE" "$TO" -t "$HUGE" 2>&1 | t_filter_fs
echo "== specifying same file fails"
scoutfs move-blocks "$FROM" -f 0 -l "$BS" "$HARD" -t 0 2>&1 | t_filter_fs
echo "== specifying files in other file systems fails"
scoutfs move-blocks "$OTHER" -f 0 -l "$BS" "$TO" -t 0 2>&1 | t_filter_fs
echo "== offsets must be multiples of 4KB"
scoutfs move-blocks "$FROM" -f 1 -l "$BS" "$TO" -t 0 2>&1 | t_filter_fs
scoutfs move-blocks "$FROM" -f 0 -l 1 "$TO" -t 0 2>&1 | t_filter_fs
scoutfs move-blocks "$FROM" -f 0 -l "$BS" "$TO" -t 1 2>&1 | t_filter_fs
echo "== can't move onto existing extent"
dd if=/dev/urandom of="$TO" bs=$BS count=1 status=none
scoutfs move-blocks "$FROM" -f 0 -l "$BS" "$TO" -t 0 2>&1 | t_filter_fs
echo "== can't move between files with offline extents"
dd if=/dev/zero of="$TO" bs=$BS count=1 status=none
vers=$(scoutfs stat -s data_version "$TO")
scoutfs release "$TO" -V "$vers" -o 0 -l $BS
scoutfs move-blocks "$FROM" -f 0 -l "$BS" "$TO" -t 0 2>&1 | t_filter_fs
regenerate_files
vers=$(scoutfs stat -s data_version "$FROM")
scoutfs release "$FROM" -V "$vers" -o 0 -l $BS
scoutfs move-blocks "$FROM" -f 0 -l "$BS" "$TO" -t 0 2>&1 | t_filter_fs
regenerate_files
echo "== basic moves work"
move_and_compare "$FROM" 0 "$BS" "$TO" 0
regenerate_files
move_and_compare "$FROM" 0 "$BS" "$TO" "$BS"
regenerate_files
move_and_compare "$FROM" 0 "$LEN" "$TO" 0
regenerate_files
echo "== moving final partial block sets partial i_size"
move_and_compare "$FROM" $((LEN - PART)) "$PART" "$TO" 0
stat -c '%s' "$TO"
regenerate_files
echo "== moving updates inode fields"
declare -A from_before from_after to_before to_after
set_updated_fields from_before "$FROM"
set_updated_fields to_before "$TO"
t_quiet sync
sleep 1
move_and_compare "$FROM" 0 "$BS" "$TO" 0
set_updated_fields from_after "$FROM"
set_updated_fields to_after "$TO"
for k in ${!from_after[@]}; do
if [ "${from_before[$k]}" == "${from_after[$k]}" ]; then
echo "move didn't change from $k ${from_before[$k]}"
fi
if [ "${to_before[$k]}" == "${to_after[$k]}" ]; then
echo "move didn't change to $k ${to_before[$k]}"
fi
done
regenerate_files
echo "== moving blocks backwards works"
cp "$FROM" "$FROM.orig"
move_and_compare "$FROM" $((LEN - PART)) "$PART" "$TO" $((LEN - PART))
for i in $(seq $((BLOCKS - 1)) -1 0); do
move_and_compare "$FROM" $((i * BS)) "$BS" "$TO" $((i * BS))
done
cmp "$TO" "$FROM.orig"
regenerate_files
echo "== combine many files into one"
for i in $(seq 0 $((PIECES - 1))); do
dd if=/dev/urandom of="$FROM.$i" bs=$BS count=$BLOCKS status=none
cat "$FROM.$i" >> "$TO.large"
move_and_compare "$FROM.$i" 0 "$((BS * BLOCKS))" \
"$TO" $((i * BS * BLOCKS))
done
((i++))
cat "$FROM" >> "$TO.large"
move_and_compare "$FROM" 0 "$LEN" "$TO" $((i * BS * BLOCKS))
cmp "$TO.large" "$TO"
t_pass


@@ -24,7 +24,7 @@ expect_wait()
shift
done
scoutfs data-waiting 0 0 "$file" > $T_TMP.wait.output
scoutfs data-waiting -B 0 -I 0 -p "$file" > $T_TMP.wait.output
diff -u $T_TMP.wait.expected $T_TMP.wait.output
}
@@ -37,9 +37,9 @@ ino=$(stat -c "%i" "$DIR/file")
vers=$(scoutfs stat -s data_version "$DIR/file")
echo "== waiter shows up in ioctl"
echo "offline wating should be empty:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
echo "offline waiting should be empty:"
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
scoutfs release "$DIR/file" -V "$vers" -o 0 -l $BYTES
cat "$DIR/file" > /dev/null &
sleep .1
echo "offline waiting should now have one known entry:"
@@ -58,13 +58,13 @@ echo "offline waiting now has two known entries:"
expect_wait "$DIR/file" "read" $ino 0 $ino 1
echo "== staging wakes everyone"
scoutfs stage "$DIR/file" "$vers" 0 $BYTES "$DIR/golden"
scoutfs stage "$DIR/golden" "$DIR/file" -V "$vers" -o 0 -l $BYTES
sleep .1
echo "offline wating should be empty again:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
echo "offline waiting should be empty again:"
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
echo "== interruption does no harm"
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
scoutfs release "$DIR/file" -V "$vers" -o 0 -l $BYTES
cat "$DIR/file" > /dev/null 2>&1 &
pid="$!"
sleep .1
@@ -74,7 +74,7 @@ kill "$pid"
# silence terminated message
wait "$pid" 2> /dev/null
echo "offline waiting should be empty again:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
echo "== EIO injection for waiting readers works"
ino=$(stat -c "%i" "$DIR/file")
@@ -86,23 +86,23 @@ dd if="$DIR/file" bs=$BS skip=1 of=/dev/null 2>&1 | \
pid2="$!"
sleep .1
echo "offline waiting should now have two known entries:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
expect_wait "$DIR/file" "read" $ino 0 $ino 1
scoutfs data-wait-err "$DIR" "$ino" "$vers" 0 $((BS*2)) read -5
scoutfs data-wait-err -p "$DIR" -I "$ino" -V "$vers" -F 0 -C $((BS*2)) -O read -E -5
sleep .1
echo "offline waiting should now have 0 known entries:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
# silence terminated message
wait "$pid" 2> /dev/null
wait "$pid2" 2> /dev/null
cat $T_TMP.cat1
cat $T_TMP.cat2
echo "offline waiting should be empty again:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
echo "== readahead while offline does no harm"
xfs_io -c "fadvise -w 0 $BYTES" "$DIR/file"
scoutfs stage "$DIR/file" "$vers" 0 $BYTES "$DIR/golden"
scoutfs stage "$DIR/golden" "$DIR/file" -V "$vers" -o 0 -l $BYTES
cmp "$DIR/file" "$DIR/golden"
echo "== waiting on interesting blocks works"
@@ -113,65 +113,65 @@ for base in $(echo 0 $(($BLOCKS / 2)) $(($BLOCKS - 2))); do
done
done
for b in $blocks; do
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
scoutfs release "$DIR/file" -V "$vers" -o 0 -l $BYTES
dd if="$DIR/file" of=/dev/null \
status=none bs=$BS count=1 skip=$b 2> /dev/null &
sleep .1
scoutfs stage "$DIR/file" "$vers" 0 $BYTES "$DIR/golden"
scoutfs stage "$DIR/golden" "$DIR/file" -V "$vers" -o 0 -l $BYTES
sleep .1
echo "offline waiting is empty at block $b"
scoutfs data-waiting 0 0 "$DIR" | wc -l
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
done
echo "== contents match when staging blocks forward"
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
scoutfs release "$DIR/file" -V "$vers" -o 0 -l $BYTES
cat "$DIR/file" > "$DIR/forward" &
for b in $(seq 0 1 $((BLOCKS - 1))); do
dd if="$DIR/golden" of="$DIR/block" status=none bs=$BS skip=$b count=1
scoutfs stage "$DIR/file" "$vers" $((b * $BS)) $BS "$DIR/block"
scoutfs stage "$DIR/block" "$DIR/file" -V "$vers" -o $((b * $BS)) -l $BS
done
sleep .1
cmp "$DIR/golden" "$DIR/forward"
echo "== contents match when staging blocks backwards"
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
scoutfs release "$DIR/file" -V "$vers" -o 0 -l $BYTES
cat "$DIR/file" > "$DIR/backward" &
for b in $(seq $((BLOCKS - 1)) -1 0); do
dd if="$DIR/golden" of="$DIR/block" status=none bs=$BS skip=$b count=1
scoutfs stage "$DIR/file" "$vers" $((b * $BS)) $BS "$DIR/block"
scoutfs stage "$DIR/block" "$DIR/file" -V "$vers" -o $((b * $BS)) -l $BS
done
sleep .1
cmp "$DIR/golden" "$DIR/backward"
echo "== truncate to same size doesn't wait"
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
scoutfs release "$DIR/file" -V "$vers" -o 0 -l $BYTES
truncate -s "$BYTES" "$DIR/file" &
sleep .1
echo "offline wating should be empty:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
echo "== truncating does wait"
truncate -s "$BS" "$DIR/file" &
sleep .1
echo "truncate should be waiting for first block:"
expect_wait "$DIR/file" "change_size" $ino 0
scoutfs stage "$DIR/file" "$vers" 0 $BYTES "$DIR/golden"
scoutfs stage "$DIR/golden" "$DIR/file" -V "$vers" -o 0 -l $BYTES
sleep .1
echo "trunate should no longer be waiting:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
cat "$DIR/golden" > "$DIR/file"
vers=$(scoutfs stat -s data_version "$DIR/file")
echo "== writing waits"
dd if=/dev/urandom of="$DIR/other" bs=$BS count=$BLOCKS status=none
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
scoutfs release "$DIR/file" -V "$vers" -o 0 -l $BYTES
# overwrite, not truncate+write
dd if="$DIR/other" of="$DIR/file" \
bs=$BS count=$BLOCKS conv=notrunc status=none &
sleep .1
echo "should be waiting for write"
expect_wait "$DIR/file" "write" $ino 0
scoutfs stage "$DIR/file" "$vers" 0 $BYTES "$DIR/golden"
scoutfs stage "$DIR/golden" "$DIR/file" -V "$vers" -o 0 -l $BYTES
cmp "$DIR/file" "$DIR/other"
echo "== cleanup"


@@ -8,63 +8,63 @@ FILE="$T_D0/file"
echo "== 0 data_version arg fails"
touch "$FILE"
scoutfs setattr -d 0 -s 1 -f "$FILE" 2>&1 | t_filter_fs
scoutfs setattr -V 0 -s 1 "$FILE" 2>&1 | t_filter_fs
rm "$FILE"
echo "== args must specify size and offline"
touch "$FILE"
scoutfs setattr -d 1 -o -s 0 -f "$FILE" 2>&1 | t_filter_fs
scoutfs setattr -V 1 -o -s 0 "$FILE" 2>&1 | t_filter_fs
rm "$FILE"
echo "== only works on regular files"
mkdir "$T_D0/dir"
scoutfs setattr -d 1 -s 1 -f "$T_D0/dir" 2>&1 | t_filter_fs
scoutfs setattr -V 1 -s 1 "$T_D0/dir" 2>&1 | t_filter_fs
rmdir "$T_D0/dir"
mknod "$T_D0/char" c 1 3
scoutfs setattr -d 1 -s 1 -f "$T_D0/char" 2>&1 | t_filter_fs
scoutfs setattr -V 1 -s 1 "$T_D0/char" 2>&1 | t_filter_fs
rm "$T_D0/char"
echo "== non-zero file size fails"
echo contents > "$FILE"
scoutfs setattr -d 1 -s 1 -f "$FILE" 2>&1 | t_filter_fs
scoutfs setattr -V 1 -s 1 "$FILE" 2>&1 | t_filter_fs
rm "$FILE"
echo "== non-zero file data_version fails"
touch "$FILE"
truncate -s 1M "$FILE"
truncate -s 0 "$FILE"
scoutfs setattr -d 1 -o -s 1 -f "$FILE" 2>&1 | t_filter_fs
scoutfs setattr -V 1 -o -s 1 "$FILE" 2>&1 | t_filter_fs
rm "$FILE"
echo "== large size is set"
touch "$FILE"
scoutfs setattr -d 1 -s 578437695752307201 -f "$FILE" 2>&1 | t_filter_fs
scoutfs setattr -V 1 -s 578437695752307201 "$FILE" 2>&1 | t_filter_fs
stat -c "%s" "$FILE"
rm "$FILE"
echo "== large data_version is set"
touch "$FILE"
scoutfs setattr -d 578437695752307201 -s 1 -f "$FILE" 2>&1 | t_filter_fs
scoutfs setattr -V 578437695752307201 -s 1 "$FILE" 2>&1 | t_filter_fs
scoutfs stat -s data_version "$FILE"
rm "$FILE"
echo "== large ctime is set"
touch "$FILE"
# only doing 32bit sec 'cause stat gets confused
scoutfs setattr -c 67305985.999999999 -d 1 -s 1 -f "$FILE" 2>&1 | t_filter_fs
scoutfs setattr -t 67305985.999999999 -V 1 -s 1 "$FILE" 2>&1 | t_filter_fs
TZ=GMT stat -c "%z" "$FILE"
rm "$FILE"
echo "== large offline extents are created"
touch "$FILE"
scoutfs setattr -d 1 -o -s $((10007 * 4096)) -f "$FILE" 2>&1 | t_filter_fs
scoutfs setattr -V 1 -o -s $((10007 * 4096)) "$FILE" 2>&1 | t_filter_fs
filefrag -v -b4096 "$FILE" 2>&1 | t_filter_fs
rm "$FILE"
# had a bug where we were creating extents that were too long
echo "== correct offline extent length"
touch "$FILE"
scoutfs setattr -d 1 -o -s 4000000000 -f "$FILE" 2>&1 | t_filter_fs
scoutfs setattr -V 1 -o -s 4000000000 "$FILE" 2>&1 | t_filter_fs
scoutfs stat -s offline_blocks "$FILE"
rm "$FILE"


@@ -14,7 +14,7 @@ query_index() {
local first="${2:-0}"
local last="${3:--1}"
scoutfs walk-inodes $which $first $last "$T_M0"
scoutfs walk-inodes -p "$T_M0" -- $which $first $last
}
# print the major in the index for the ino if it's found
@@ -22,7 +22,7 @@ ino_major() {
local which="$1"
local ino="$2"
scoutfs walk-inodes $which 0 -1 "$T_M0" | \
scoutfs walk-inodes -p "$T_M0" -- $which 0 -1 | \
awk '($4 == "'$ino'") {print $2}'
}


@@ -23,14 +23,14 @@ create_file() {
release_vers() {
local file="$1"
local vers="$2"
local block="$3"
local count="$4"
local offset="$3"
local length="$4"
if [ "$vers" == "stat" ]; then
vers=$(scoutfs stat -s data_version "$file")
fi
scoutfs release "$file" "$vers" "$block" "$count"
scoutfs release "$file" -V "$vers" -o "$offset" -l "$length"
}
FILE="$T_D0/file"
@@ -38,41 +38,41 @@ CHAR="$FILE-char"
echo "== simple whole file multi-block releasing"
create_file "$FILE" 65536
release_vers "$FILE" stat 0 16
release_vers "$FILE" stat 0 64K
rm "$FILE"
echo "== release last block that straddles i_size"
create_file "$FILE" 6144
release_vers "$FILE" stat 1 1
release_vers "$FILE" stat 4K 4K
rm "$FILE"
echo "== release entire file past i_size"
create_file "$FILE" 8192
release_vers "$FILE" stat 0 100
release_vers "$FILE" stat 0 400K
# not deleting for the following little tests
echo "== releasing offline extents is fine"
release_vers "$FILE" stat 0 100
release_vers "$FILE" stat 0 400K
echo "== 0 count is fine"
release_vers "$FILE" stat 0 0
echo "== release past i_size is fine"
release_vers "$FILE" stat 100 1
release_vers "$FILE" stat 400K 4K
echo "== wrapped blocks fails"
release_vers "$FILE" stat $vers 0x8000000000000000 0x8000000000000000
echo "== releasing non-file fails"
mknod "$CHAR" c 1 3
release_vers "$CHAR" stat 0 1 2>&1 | t_filter_fs
release_vers "$CHAR" stat 0 4K 2>&1 | t_filter_fs
rm "$CHAR"
echo "== releasing a non-scoutfs file fails"
release_vers "/dev/null" stat 0 1
release_vers "/dev/null" stat 0 4K
echo "== releasing bad version fails"
release_vers "$FILE" 0 0 1
release_vers "$FILE" 0 0 4K
rm "$FILE"
@@ -108,9 +108,9 @@ for c in $(seq 0 4); do
start=$(fiemap_file "$FILE" | \
awk '($1 == "0:"){print substr($4, 0, length($4)- 2)}')
release_vers "$FILE" stat $a 1
release_vers "$FILE" stat $b 1
release_vers "$FILE" stat $c 1
release_vers "$FILE" stat $(($a * 4))K 4K
release_vers "$FILE" stat $(($b * 4))K 4K
release_vers "$FILE" stat $(($c * 4))K 4K
echo -n "$a $b $c:"


@@ -29,14 +29,14 @@ create_file() {
release_vers() {
local file="$1"
local vers="$2"
local block="$3"
local count="$4"
local offset="$3"
local length="$4"
if [ "$vers" == "stat" ]; then
vers=$(scoutfs stat -s data_version "$file")
fi
scoutfs release "$file" "$vers" "$block" "$count"
scoutfs release "$file" -V "$vers" -o "$offset" -l "$length"
}
# if vers is "stat" then we ask stat_more for the data_version
@@ -44,14 +44,14 @@ stage_vers() {
local file="$1"
local vers="$2"
local offset="$3"
local count="$4"
local length="$4"
local contents="$5"
if [ "$vers" == "stat" ]; then
vers=$(scoutfs stat -s data_version "$file")
fi
scoutfs stage "$file" "$vers" "$offset" "$count" "$contents"
scoutfs stage "$contents" "$file" -V "$vers" -o "$offset" -l "$length"
}
FILE="$T_D0/file"
@@ -60,7 +60,7 @@ CHAR="$FILE-char"
echo "== create/release/stage single block file"
create_file "$FILE" 4096
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 1
release_vers "$FILE" stat 0 4K
# make sure there are only offline extents
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
stage_vers "$FILE" stat 0 4096 "$T_TMP"
@@ -70,7 +70,7 @@ rm -f "$FILE"
echo "== create/release/stage larger file"
create_file "$FILE" $((4096 * 4096))
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 4096
release_vers "$FILE" stat 0 16M
# make sure there are only offline extents
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
stage_vers "$FILE" stat 0 $((4096 * 4096)) "$T_TMP"
@@ -83,7 +83,7 @@ cp "$FILE" "$T_TMP"
nr=1
while [ "$nr" -lt 10 ]; do
echo "attempt $nr" >> $seqres.full 2>&1
release_vers "$FILE" stat 0 1024
release_vers "$FILE" stat 0 4096K
sync
echo 3 > /proc/sys/vm/drop_caches
stage_vers "$FILE" stat 0 $((4096 * 1024)) "$T_TMP"
@@ -100,7 +100,7 @@ sync
stat "$FILE" > "$T_TMP.before"
scoutfs stat -s data_seq "$FILE" >> "$T_TMP.before"
scoutfs stat -s data_version "$FILE" >> "$T_TMP.before"
release_vers "$FILE" stat 0 1
release_vers "$FILE" stat 0 4K
stage_vers "$FILE" stat 0 4096 "$T_TMP"
stat "$FILE" > "$T_TMP.after"
scoutfs stat -s data_seq "$FILE" >> "$T_TMP.after"
@@ -110,7 +110,7 @@ rm -f "$FILE"
echo "== stage does change meta_seq"
create_file "$FILE" 4096
release_vers "$FILE" stat 0 1
release_vers "$FILE" stat 0 4K
sync
before=$(scoutfs stat -s meta_seq "$FILE")
stage_vers "$FILE" stat 0 4096 "$T_TMP"
@@ -121,7 +121,7 @@ rm -f "$FILE"
# XXX this now waits, demand staging should be own test
#echo "== can't write to offline"
#create_file "$FILE" 4096
#release_vers "$FILE" stat 0 1
#release_vers "$FILE" stat 0 4K
## make sure there are only offline extents
#fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
#dd if=/dev/zero of="$FILE" conv=notrunc bs=4096 count=1 2>&1 | t_filter_fs
@@ -144,13 +144,13 @@ rm -f "$FILE"
echo "== wrapped region fails"
create_file "$FILE" 4096
stage_vers "$FILE" stat 0xFFFFFFFFFFFFFFFF 4096 /dev/zero
stage_vers "$FILE" stat 0xFFFFFFFFFFFFF000 4096 /dev/zero
rm -f "$FILE"
echo "== non-block aligned offset fails"
create_file "$FILE" 4096
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 1
release_vers "$FILE" stat 0 4K
stage_vers "$FILE" stat 1 4095 "$T_TMP"
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
rm -f "$FILE"
@@ -158,7 +158,7 @@ rm -f "$FILE"
echo "== non-block aligned len within block fails"
create_file "$FILE" 4096
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 1
release_vers "$FILE" stat 0 4K
stage_vers "$FILE" stat 0 1024 "$T_TMP"
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
rm -f "$FILE"
@@ -166,14 +166,14 @@ rm -f "$FILE"
echo "== partial final block that writes to i_size does work"
create_file "$FILE" 2048
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 1
release_vers "$FILE" stat 0 4K
stage_vers "$FILE" stat 0 2048 "$T_TMP"
cmp "$FILE" "$T_TMP"
rm -f "$FILE"
echo "== zero length stage doesn't bring blocks online"
create_file "$FILE" $((4096 * 100))
release_vers "$FILE" stat 0 100
release_vers "$FILE" stat 0 400K
stage_vers "$FILE" stat 4096 0 /dev/zero
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
rm -f "$FILE"
@@ -188,7 +188,7 @@ rm -f "$FILE"
#create_file "$FILE" 4096
#cp "$FILE" "$T_TMP"
#sync
#release_vers "$FILE" stat 0 1
#release_vers "$FILE" stat 0 4K
#md5sum "$FILE" 2>&1 | t_filter_fs
#stage_vers "$FILE" stat 0 4096 "$T_TMP"
#cmp "$FILE" "$T_TMP"


@@ -17,7 +17,7 @@ diff_srch_find()
local n="$1"
sync
scoutfs search-xattrs -n "$n" -f "$T_M0" > "$T_TMP.srch"
scoutfs search-xattrs "$n" -p "$T_M0" > "$T_TMP.srch"
find_xattrs -d "$T_D0" -m "$T_M0" -n "$n" > "$T_TMP.find"
diff -u "$T_TMP.srch" "$T_TMP.find"


@@ -0,0 +1,69 @@
#
# Stage a large file in multiple parts and have a reader read it while
# it's being staged. This has found problems with extent access
# locking.
#
t_require_commands scoutfs perl cmp rm
FILE_BYTES=$((4 * 1024 * 1024 * 1024))
FILE_BLOCKS=$((FILE_BYTES / 4096))
FRAG_BYTES=$((128 * 1024 * 1024))
FRAG_BLOCKS=$((FRAG_BYTES / 4096))
NR_FRAGS=$((FILE_BLOCKS / FRAG_BLOCKS))
#
# A high-bandwidth way to generate predictable file contents. We use
# ascii lines containing the block's identity, padded to 4KB with
# spaces.
#
# $1 is number of 4k blocks to write, and each block gets its block
# number in the line. $2, $3, and $4 are fields that are put in every
# block.
#
gen() {
perl -e 'for (my $i = 0; $i < '$1'; $i++) { printf("mount %020u process %020u file %020u blkno %020u%s\n", '$2', '$3', '$4', $i, " " x 3987); }'
}
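#
# Each generated line is exactly 4096 bytes: the four labels and their
# 20-digit zero-padded fields add up to 108 bytes, plus 3987 spaces of
# padding and a trailing newline.
#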
release_file() {
local path="$1"
local vers=$(scoutfs stat -s data_version "$path")
scoutfs release "$path" -V "$vers" -o 0 -l $FILE_BYTES
}
stage_file() {
local path="$1"
local vers=$(scoutfs stat -s data_version "$path")
local off=0
for a in $(seq 1 $NR_FRAGS); do
scoutfs stage <(gen $FRAG_BLOCKS $a $a $a) "$path" -V "$vers" \
-o $off -l $FRAG_BYTES
((off+=$FRAG_BYTES))
done
}
FILE="$T_D0/file"
whole_file() {
for a in $(seq 1 $NR_FRAGS); do
gen $FRAG_BLOCKS $a $a $a
done
}
#
# just one pass through the file.
#
whole_file > "$FILE"
release_file "$FILE"
cmp "$FILE" <(whole_file) &
pid=$!
stage_file "$FILE"
wait $pid || t_fail "comparison failed"
t_pass


@@ -15,7 +15,7 @@ release_file() {
local vers=$(scoutfs stat -s data_version "$path")
echo "releasing $path" >> "$T_TMP.log"
scoutfs release "$path" "$vers" 0 $BLOCKS
scoutfs release "$path" -V "$vers" -o 0 -l $BYTES
echo "released $path" >> "$T_TMP.log"
}
@@ -24,8 +24,8 @@ stage_file() {
local vers=$(scoutfs stat -s data_version "$path")
echo "staging $path" >> "$T_TMP.log"
scoutfs stage "$path" "$vers" 0 $BYTES \
"$DIR/good/$(basename $path)"
scoutfs stage "$DIR/good/$(basename $path)" "$path" -V "$vers" -o 0 -l $BYTES
echo "staged $path" >> "$T_TMP.log"
}


@@ -0,0 +1,15 @@
#
# Run stage_tmpfile and check the output with hexdump.
#
t_require_commands stage_tmpfile hexdump
DEST_FILE="$T_D0/dest_file"
stage_tmpfile $T_D0 $DEST_FILE
hexdump -C "$DEST_FILE"
rm -fr "$DEST_FILE"
t_pass


@@ -1,40 +0,0 @@
#
# verify stale btree block reading
#
t_require_commands touch stat setfattr getfattr createmany
t_require_mounts 2
GETFATTR="getfattr --absolute-names"
SETFATTR="setfattr"
#
# This exercises the soft retry of btree blocks when
# inconsistent cached versions are found. It ensures that basic hard
# error returning turns into EIO in the case where the persistent reread
# blocks and segments really are inconsistent.
#
# The triggers apply across all execution in the file system. So to
# trigger btree block retries in the client we make sure that the server
# is running on the other node.
#
cl=$(t_first_client_nr)
sv=$(t_server_nr)
eval cl_dir="\$T_D${cl}"
eval sv_dir="\$T_D${sv}"
echo "== create file for xattr ping pong"
touch "$sv_dir/file"
$SETFATTR -n user.xat -v initial "$sv_dir/file"
$GETFATTR -n user.xat "$sv_dir/file" 2>&1 | t_filter_fs
echo "== retry btree block read"
$SETFATTR -n user.xat -v btree "$sv_dir/file"
t_trigger_arm btree_stale_read $cl
old=$(t_counter btree_stale_read $cl)
$GETFATTR -n user.xat "$cl_dir/file" 2>&1 | t_filter_fs
t_trigger_show btree_stale_read "after" $cl
t_counter_diff btree_stale_read $old $cl
t_pass


@@ -37,17 +37,25 @@ t_quiet make
t_quiet sync
# pwd stays in xfstests dir to build config and run
#
# Each filesystem needs specific mkfs and mount options because we put
# quorum member addresses in mkfs options and the metadata device in
# mount options.
#
cat << EOF > local.config
export FSTYP=scoutfs
export MKFS_OPTIONS="-Q 1"
export MKFS_OPTIONS="-f"
export MKFS_TEST_OPTIONS="-Q 0,127.0.0.1,42000"
export MKFS_SCRATCH_OPTIONS="-Q 0,127.0.0.1,43000"
export MKFS_DEV_OPTIONS="-Q 0,127.0.0.1,44000"
export TEST_DEV=$T_DB0
export TEST_DIR=$T_M0
export SCRATCH_META_DEV=$T_EX_META_DEV
export SCRATCH_DEV=$T_EX_DATA_DEV
export SCRATCH_MNT="$T_TMPDIR/mnt.scratch"
export SCOUTFS_SCRATCH_MOUNT_OPTIONS="-o server_addr=127.0.0.1,metadev_path=$T_EX_META_DEV"
export MOUNT_OPTIONS="-o server_addr=127.0.0.1,metadev_path=$T_MB0"
export TEST_FS_MOUNT_OPTS="-o server_addr=127.0.0.1,metadev_path=$T_MB0"
export SCOUTFS_SCRATCH_MOUNT_OPTIONS="-o quorum_slot_nr=0,metadev_path=$T_EX_META_DEV"
export MOUNT_OPTIONS="-o quorum_slot_nr=0,metadev_path=$T_MB0"
export TEST_FS_MOUNT_OPTS="-o quorum_slot_nr=0,metadev_path=$T_MB0"
EOF
cat << EOF > local.exclude
@@ -83,7 +91,7 @@ generic/375 # utils output change? update branch?
EOF
t_restore_output
echo "(showing output of xfstests)"
echo " (showing output of xfstests)"
args="-E local.exclude ${T_XFSTESTS_ARGS:--g quick}"
./check $args


@@ -1,25 +1,12 @@
#
# The userspace utils and kernel module share definitions of physical
# structures and ioctls. If we're in the repo we include the kmod
# headers directly and hash them to calculate the format hash.
#
# If we're creating a standalone tarball for distribution we copy the
# headers out of the kmod dir into the tarball. And then when we're
# building in that tarball we use the headers in src/ directly.
#
FMTIOC_H := format.h ioctl.h
FMTIOC_DIST := $(addprefix src/,$(FMTIOC_H))
FMTIOC_KMOD := $(addprefix ../kmod/src/,$(FMTIOC_H))
ifneq ($(wildcard $(firstword $(FMTIOC_KMOD))),)
HASH_FILES := $(FMTIOC_KMOD)
else
HASH_FILES := $(FMTIOC_DIST)
endif
SCOUTFS_FORMAT_HASH := $(shell cat $(HASH_FILES) | md5sum | cut -b1-16)
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -g -msse4.2 \
-Wpadded \
-fno-strict-aliasing \
-DSCOUTFS_FORMAT_HASH=0x$(SCOUTFS_FORMAT_HASH)LLU
@@ -47,7 +34,7 @@ endif
$(BIN): $(OBJ)
$(QU) [BIN $@]
$(VE)gcc -o $@ $^ -luuid -lm -lcrypto
$(VE)gcc -o $@ $^ -luuid -lm -lcrypto -lblkid
%.o %.d: %.c Makefile sparse.sh
$(QU) [CC $<]


@@ -21,21 +21,19 @@ contains the filesystem's metadata.
.sp
This option is required.
.TP
.B server_addr=<ipv4:port>
The server_addr option indicates that this mount will participate in
quorum election to try and run a server for all the mounts of its
filesystem. The option specifies the local TCP IPv4 address that the
mount's elected server will listen on for connections from all other
mounts of the filesystem.
.B quorum_slot_nr=<number>
The quorum_slot_nr option assigns a quorum member slot to the mount.
The mount will use the slot assignment to claim exclusive ownership of
the slot's configured address and an associated metadata device block.
Each slot number must be used by only one mount at any given time.
.sp
The IPv4 address must be specified as a dotted quad, name resolution is
not supported. A specific port may be provided after a seperating
colon. If no port is specified then a random port will be chosen. The
address will be used for the lifetime of the mount and can not be
changed. The mount must be unmounted to specify a different address.
When a mount is assigned a quorum slot it becomes a quorum member and
will participate in the raft leader election process and could start
the server for the filesystem if it is elected leader.
.sp
If server_addr is not specified then the mount will read the filesystem
until it sees the address of an elected server to connect to.
The assigned number must match one of the slots defined with \-Q options
when the filesystem was created with mkfs. If the number assigned
doesn't match a number created during mkfs then the mount will fail.
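.sp
For example, an illustrative sketch of mounting a quorum member (the
device paths are hypothetical):
.sp
.nf
# slot 0 must have been configured with -Q 0,... at mkfs time
mount -t scoutfs -o quorum_slot_nr=0,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
.fi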
.SH FURTHER READING
A
.B scoutfs


@@ -3,51 +3,304 @@
scoutfs \- scoutfs management utility
.SH DESCRIPTION
The
.b
scoutfs
utility provides commands to manage a scoutfs filesystem.
.B scoutfs
utility provides commands to create and manage a ScoutFS filesystem.
.SH COMMANDS
Note: Commands taking the
.B --path
option will, when the option is omitted, fall back to using the value of the
.I SCOUTFS_MOUNT_PATH
environment variable. If that variable is also absent the current working
directory will be used.
.TP
.BI "counters [\-t\] <sysfs topdir>"
.BI "df [-h|--human-readable] [-p|--path PATH]"
.sp
Displays the counters and their values for a mounted scoutfs filesystem.
Each counter and its value are printed on a line to stdout with
sufficient spaces seperating the name and value to align the values
after
Display available and used space on the ScoutFS data and metadata devices.
.RS 1.0i
.PD 0
.TP
.sp
.B "\-t"
Format the counters into a table that fills the display instead of
printing one counter per line. The names and values are padded to
create columns that fill the current width of the terminal.
.B "-h, --human-readable"
Output sizes in human-readable size units (e.g. 500G, 1.2P) rather than number
of ScoutFS allocation blocks.
.TP
.B "sysfs topdir"
Specify the mount's sysfs directory in which to find the
.B counters/
directory when then contains files for each counter.
The sysfs directory is typically
of the form
.I /sys/fs/scoutfs/f.<fsid>.r.<rid>/
\&.
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "data-waiting <ino> <iblock> <path>"
.BI "mkfs META-DEVICE DATA-DEVICE {-Q|--quorum-slot} NR,ADDR,PORT [-m|--max-meta-size SIZE] [-d|--max-data-size SIZE] [-f|--force]"
.sp
Displays all the files and blocks for which there is a task blocked waiting on
Initialize a new ScoutFS filesystem on the target devices. Since ScoutFS uses
separate block devices for its metadata and data storage, two are required.
.sp
If the
.B --force
option is not given, mkfs will check for existing filesystem signatures. It is
recommended to use
.B wipefs(8)
to remove non-ScoutFS filesystem signatures before proceeding, and
.B --force
to overwrite a previous ScoutFS filesystem.
.RS 1.0i
.PD 0
.TP
.sp
.B META-DEVICE
The path to the block device to be used for ScoutFS metadata. If possible, use
a faster block device for the metadata device.
.TP
.B DATA-DEVICE
The path to the block device to be used for ScoutFS file data. If possible, use
a larger block device for the data device.
.TP
.B "-Q, --quorum-slot NR,ADDR,PORT"
Each \-Q option configures a quorum slot. The NR specifies the number
of the slot to configure which must be between 0 and 14. Each slot
number must only be used once, but they can be used in any order and
they need not be consecutive. This is to allow natural relationships
between slot numbers and nodes which may have arbitrary numbering
schemes. ADDR and PORT are the numerical IPv4 address and port which
will be used as the UDP endpoint for leader elections and as the TCP
listening address for server connections. The number of configured
slots determines the size of the quorum of member mounts which must be
present to start the server for the filesystem to operate. A simple
majority is typically required, while one mount is sufficient if only
one or two slots are configured. Until the majority quorum are present,
all mounts will hang waiting for a server to connect to.
.TP
.B "-m, --max-meta-size SIZE"
Limit the space used by ScoutFS on the metadata device to the
given size, rather than using the entire block device. Size is given as
an integer followed by a unit letter: "K", "M", "G", "T", "P", denoting
kibibytes, mebibytes, etc.
.TP
.B "-d, --max-data-size SIZE"
Same as previous, but for limiting the size of the data device.
.TP
.B "-f, --force"
Ignore presence of existing data on the data and metadata devices.
.RE
.PD
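.sp
For example, an illustrative sketch of creating a three-slot filesystem
(device paths and addresses are hypothetical):
.sp
.nf
# three quorum slots; a majority of two members elects the server
scoutfs mkfs /dev/meta_dev /dev/data_dev -Q 0,10.0.0.1,42000 -Q 1,10.0.0.2,42000 -Q 2,10.0.0.3,42000
.fi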
.TP
.BI "stat FILE [-s|--single-field FIELD-NAME]"
.sp
Display ScoutFS-specific metadata fields for the given file.
.RS 1.0i
.PD 0
.TP
.sp
.B "FILE"
Path to the file.
.TP
.B "-s, --single-field FIELD-NAME"
Only output a single field's value instead of the default of printing
every field, one per line.
.sp
.TP
.RE
.PD
The fields are:
.RS 1.0i
.PD 0
.TP
.B "meta_seq"
The metadata change sequence. This changes each time the inode's metadata
is changed.
.TP
.B "data_seq"
The data change sequence. This changes each time the inode's data
is changed.
.TP
.B "data_version"
The data version changes every time the contents of the file changes,
or the file grows or shrinks.
.TP
.B "online_blocks"
The number of 4KB data blocks that contain data and can be read.
.TP
.B "offline_blocks"
The number of 4KB data blocks that are offline and would need to be
staged to be read.
.RE
.PD
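.sp
For example, an illustrative sketch of reading a single field (the path
is hypothetical):
.sp
.nf
# print only the file's data_version
scoutfs stat -s data_version /mnt/scoutfs/file
.fi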
.TP
.BI "statfs [-s|--single-field FIELD-NAME] [-p|--path PATH]"
.sp
Display ScoutFS-specific filesystem-wide metadata fields.
.RS 1.0i
.PD 0
.TP
.sp
.B "-s, --single-field FIELD-NAME"
Only output a single field instead of all the fields, one per
line. The possible field names are those given in the output.
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.sp
.TP
.RE
.PD
The fields are:
.RS 1.0i
.PD 0
.TP
.B "fsid"
The unique 64bit filesystem identifier for this filesystem.
.TP
.B "rid"
The unique 64bit random identifier for this mount of the filesystem.
This is generated for every new mount of the file system.
.TP
.B "committed_seq"
All seqs up to and including this seq have been
committed. Can be compared with meta_seq and data_seq from inodes in
.B stat
to discover if changes to a file have been committed to disk.
.TP
.B "total_meta_blocks"
The total number of 64K metadata blocks in the filesystem.
.TP
.B "total_data_blocks"
The total number of 4K data blocks in the filesystem.
.RE
.PD
.TP
.BI "counters [-t|--table] SYSFS-DIR"
.sp
Display the counters and their values for a mounted ScoutFS filesystem.
.RS 1.0i
.PD 0
.sp
.TP
.B SYSFS-DIR
The mount's sysfs directory in which to find the
.B counters/
directory which contains a file for each counter.
The sysfs directory is
of the form
.I /sys/fs/scoutfs/f.<fsid>.r.<rid>/
\&.
.TP
.B "-t, --table"
Format the counters into a columnar table that fills the width of the display
instead of printing one counter per line.
.RE
.PD
.TP
.BI "search-xattrs XATTR-NAME [-p|--path PATH]"
.sp
Display the inode numbers of inodes in the filesystem which may have
an extended attribute with the given name.
.sp
The results may contain false positives. The returned inode numbers
should be checked to verify that the extended attribute is in fact
present on the inode.
.RS 1.0i
.PD 0
.TP
.sp
.B XATTR-NAME
The full name of the extended attribute to search for as
described in the
.BR xattr (7)
manual page.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
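.sp
For example, an illustrative sketch (the xattr name and path are
hypothetical):
.sp
.nf
# candidate inodes must be checked for false positives
scoutfs search-xattrs user.archive.id -p /mnt/scoutfs
.fi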
.TP
.BI "list-hidden-xattrs FILE"
.sp
Display extended attributes starting with the
.BR scoutfs.
prefix and containing the
.BR hide.
tag
which makes them invisible to
.BR listxattr (2) .
The names of each attribute are output, one per line. Their order
is not specified.
.RS 1.0i
.PD 0
.TP
.sp
.B "FILE"
The path to a file within a ScoutFS filesystem. File permissions must allow
reading.
.RE
.PD
.TP
.BI "walk-inodes {meta_seq|data_seq} FIRST-INODE LAST-INODE [-p|--path PATH]"
.sp
Walk an inode index in the file system and output the inode numbers
that are found between the first and last positions in the index.
.RS 1.0i
.PD 0
.sp
.TP
.BR meta_seq , data_seq
Which index to walk.
.TP
.B "FIRST-INODE"
An integer index value giving the starting position of the index walk.
.I 0
is the first possible position.
.TP
.B "LAST-INODE"
An integer index value giving the last position to include in the index walk.
.I \-1
can be given to indicate the last possible position.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
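.sp
For example, an illustrative sketch of walking the whole meta_seq index
(the path is hypothetical):
.sp
.nf
# -- keeps the trailing -1 from being parsed as an option
scoutfs walk-inodes -p /mnt/scoutfs -- meta_seq 0 -1
.fi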
.TP
.BI "ino-path INODE-NUM [-p|--path PATH]"
.sp
Display all paths that reference an inode number.
.sp
Ongoing filesystem changes, such as renaming a common parent of multiple paths,
can cause displayed paths to be inconsistent.
.RS 1.0i
.PD 0
.sp
.TP
.B "INODE-NUM"
The inode number of the target inode.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "data-waiting {-I|--inode} INODE-NUM {-B|--block} BLOCK-NUM [-p|--path PATH]"
.sp
Display all the files and blocks for which there is a task blocked waiting on
offline data.
.sp
The results are sorted by the file's inode number and the
logical block offset that is being waited on.
.sp
Each line of output specifies a block in a file that has a task waiting
Each line of output describes a block in a file that has a task waiting
and is formatted as:
.I "ino <nr> iblock <nr> ops [str]"
\&. The ops string indicates blocked operations separated by commas and can
include
.B read
for a read operation,
.B write
@@ -58,156 +311,151 @@ for a truncate or extending write.
.PD 0
.sp
.TP
.B "ino"
.B "-I, --inode INODE-NUM"
Start iterating over waiting tasks from the given inode number.
Specifying 0 will show all waiting tasks.
A value of 0 will show all waiting tasks.
.TP
.B "iblock"
.B "-B, --block BLOCK-NUM"
Start iterating over waiting tasks from the given logical block number
in the starting inode. Specifying 0 will show blocks in the first inode
in the starting inode. A value of 0 will show blocks in the first inode
and then continue to show all blocks with tasks waiting in all the
remaining inodes.
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
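.sp
For example, an illustrative sketch of listing every waiting task (the
path is hypothetical):
.sp
.nf
# start from inode 0, block 0 to show all waiters
scoutfs data-waiting -B 0 -I 0 -p /mnt/scoutfs
.fi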
.TP
.BI "data-wait-err {-I|--inode} INODE-NUM {-V|--version} VER-NUM {-F|--offset} OFF-NUM {-C|--count} COUNT {-O|--op} OP {-E|--err} ERR [-p|--path PATH]"
.sp
Return the given error to matching tasks waiting on offline data,
causing their blocked operations to fail.
.RS 1.0i
.PD 0
.sp
.TP
.B "-C, --count COUNT"
Count.
.TP
.B "-E, --err ERR"
Error.
.TP
.B "-F, --offset OFF-NUM"
Offset. May be expressed in bytes, or with KMGTP (Kibi, Mibi, etc.) size
suffixes.
.TP
.B "-I, --inode INODE-NUM"
Inode number.
.TP
.B "-O, --op OP"
Operation. One of: "read", "write", "change_size".
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
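.sp
For example, an illustrative sketch of failing blocked readers of the
first two 4KB blocks with EIO (values are hypothetical):
.sp
.nf
# -5 is -EIO; ino and vers were read with stat beforehand
scoutfs data-wait-err -p /mnt/scoutfs -I "$ino" -V "$vers" -F 0 -C 8192 -O read -E -5
.fi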
.TP
.BI "stage ARCHIVE-FILE FILE {-V|--version} VERSION [-o, --offset OFF-NUM] [-l, --length LENGTH]"
.sp
.B Stage
(i.e. return to online) the previously-offline contents of a file by copying a
region from another file, the archive, without updating regular inode
metadata. Any operations that are blocked by the existence of an offline
region will proceed once the region has been staged.
.RS 1.0i
.PD 0
.TP
.sp
.B "ARCHIVE-FILE"
The source file for the file contents being staged.
.TP
.B "FILE"
The regular file whose contents will be staged.
.TP
.B "-V, --version VERSION"
The data_version of the contents to be staged. It must match the
current data_version of the file.
.TP
.B "-o, --offset OFF-NUM"
The starting byte offset of the region to write. May be expressed in bytes, or with
KMGTP (Kibi, Mebi, etc.) size suffixes. Default is 0.
.TP
.B "-l, --length LENGTH"
Length of range (bytes or KMGTP units) of file to stage. Default is the file's
total size.
.RE
.PD
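.sp
For example, an illustrative sketch of staging the first 4KB of a file
from an archive copy (the paths are hypothetical):
.sp
.nf
# vers must match the file's current data_version
scoutfs stage /archive/file.copy /mnt/scoutfs/file -V "$vers" -o 0 -l 4K
.fi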
.TP
.BI "release FILE {-V|--version} VERSION [-o, --offset OFF-NUM] [-l, --length LENGTH]"
.sp
.B Release
the given region of the file. That is, remove the region's backing data and
leave an offline data region. Future attempts to read or write the offline
region will block until the region is restored by a
.B stage
write. This is used by userspace archive managers to free data space in the
ScoutFS filesystem once the file data has been archived.
.sp
Note: This only works on regular files with write permission. Releasing regions
that are already offline or sparse, including regions extending past the end of
the file, will silently succeed.
.RS 1.0i
.PD 0
.TP
.sp
.B "path"
A path to any inode in the target filesystem, typically the root
directory.
The path to the regular file whose region will be released.
.TP
.B "-V, --version VERSION"
The data_version of the contents to be released. It must match the current
data_version of the file. This ensures that a release operation is truncating
the same version of the data that was archived. (Use the
.BI "stat"
subcommand to obtain the data version of a file.)
.TP
.B "-o, --offset OFF-NUM"
The starting byte offset of the region to release. May be expressed in bytes,
or with KMGTP (Kibi, Mebi, etc.) size suffixes. Default is 0.
.TP
.B "-l, --length LENGTH"
Length of the range (bytes or KMGTP units) of the file to release. Default is
the file's total size.
.RE
.PD
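.sp
For example, an illustrative sketch of releasing a file's entire
contents after archiving (the path is hypothetical):
.sp
.nf
# read the current data_version, then release the whole file
vers=$(scoutfs stat -s data_version /mnt/scoutfs/file)
scoutfs release /mnt/scoutfs/file -V "$vers"
.fi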
.TP
.BI "find-xattrs <\-n\ name> <\-f path>"
.BI "setattr FILE [-d, --data-version=VERSION [-s, --size=SIZE [-o, --offline]]] [-t, --ctime=TIMESPEC]"
.sp
Displays the inode numbers of inodes in the filesystem which may have
an extended attribute with the given name.
.sp
The results may contain false positives. The returned inode numbers
should be checked to verify that the extended attribute is in fact
present on the inode.
.RS 1.0i
.PD 0
.TP
.sp
.B "-n name"
Specifies the full name of the extended attribute to search for as
described in the
.BR xattr (7)
manual page.
.TP
.B "-f path"
Specifies the path to any inode in the filesystem to search.
.RE
.PD
.TP
.BI "ino-path <ino> <path>"
.sp
Displays all the paths to links to the given inode number.
.sp
All the relative paths from the root directory to each link of the
target inode are output, one result per line. Each output path is
guaranteed to have been a valid path to a link at some point in the
past. An individual path won't be corrupted by a rename that occurs
during the search. The set of paths can be modified while the search is
running. A rename of a parent directory of all the paths, for example,
can result in output where the parent directory name component changes
in the middle of outputting all the paths.
Set ScoutFS-specific attributes on a newly created zero-length file.
.RS 1.0i
.PD 0
.sp
.TP
.B "ino"
The inode number of the target inode to resolve.
.B "-V, --data-version=VERSION"
Set data version.
.TP
.B "path"
A path to any inode in the target filesystem, typically the root
directory.
.B "-o, --offline"
Set file contents as offline, not sparse. Requires the
.I --size
option to also be present.
.TP
.B "-s, --size=SIZE"
Set file size. May be expressed in bytes, or with
KMGTP (Kibi, Mebi, etc.) size suffixes. Requires the
.I --data-version
option to also be present.
.TP
.B "-t, --ctime=TIMESPEC"
Set creation time using
.I "<seconds-since-epoch>.<nanoseconds>"
format.
.RE
.PD
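.sp
For example, an illustrative sketch of recreating an archived file's
metadata as a fully offline region (values are hypothetical):
.sp
.nf
# only valid on a newly created, zero-length file
touch /mnt/scoutfs/file
scoutfs setattr -V 1 -o -s 1G /mnt/scoutfs/file
.fi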
.TP
.BI "listxattr-hidden <\-f path>"
.sp
Displays all the extended attributes starting with the
.BR scoutfs.
prefix and which contain the
.BR hide.
tag
which makes them invisible to
.BR listxattr (2)
\&.
The names of each attribute are output, one name per line. Their order
is determined by internal indexing implementation details and should not
be relied on.
.RS 1.0i
.PD 0
.TP
.sp
.B "-f path"
The path to the file whose extended attributes will be listed. The
user must have read permission to the inode.
.RE
.PD
.TP
.BI "mkfs <\-Q nr> <meta_dev_path> <data_dev_path> [-M meta_size] [-D data_size]"
.sp
Initialize a new empty filesystem in the target devices by writing empty
structures and a new superblock. Since ScoutFS uses separate block
devices for its metadata and data storage, both must be given.
.sp
This
.B unconditionally destroys
the contents of the devices, regardless of what they contain or who may be
using them. It simply writes new data structures into known offsets.
.B Be very careful that the devices do not contain data and are not actively in use.
.RS 1.0i
.PD 0
.TP
.sp
.B "-Q nr"
Specify the number of mounts needed to reach quorum and elect a mount
to start the server. Mounts of the filesystem will hang until this many
mounts are operational and can elect a server amongst themselves.
.sp
Mounts with the
.B server_addr
mount option participate in quorum. The safest quorum number is the
smallest majority of an odd number of participating mounts. For
example,
two out of three total mounts. This ensures that there can only be one
set of mounts that can establish quorum.
.sp
Degenerate quorums are possible, for example by specifying half of an
even number of mounts or less than half of the mount count, down to even
just one mount establishing quorum. These minority quorums carry the
risk of multiple quorums being established concurrently. Each quorum's
elected servers race to fence each other and can have the unlikely
outcome of continually racing to fence each other resulting in a
persistent loss of service.
.TP
.B "meta_dev_path"
The path to the device to be used for ScoutFS metadata. If possible,
use a faster block device for the metadata device. Its contents will be
unconditionally destroyed.
.TP
.B "data_dev_path"
The path to the device to be used for ScoutFS file data. If possible,
use a larger block device for the data device. Its contents will be
unconditionally destroyed.
.TP
.B "-M meta_size"
Limit the space used by the filesystem on the metadata device to the
given size, rather than using the entire block device. Size is given as
an integer followed by a units digit: "K", "M", "G", "T", "P", to denote
kibibytes, mebibytes, etc.
.TP
.B "-D data_size"
Same as previous, but for limiting the size of the data device.
.RE
.PD
.TP
.BI "print <path>"
.BI "print META-DEVICE"
.sp
Prints out all of the metadata in the file system. This makes no effort
to ensure that the structures are consistent as they're traversed and
@@ -217,236 +465,21 @@ output.
.PD 0
.TP
.sp
.B "path"
The path to the metadata device for filesystem whose metadata will
be printed. The command reads from the buffer cache of the device which
may not reflect the current blocks in the filesystem that may have been
written through another host or device. The local device's cache can be
manually flushed before printing, perhaps with the
.B \--flushbufs
command in the
.BR blockdev (8)
command.
.RE
.PD
.TP
.BI "release <path> <vers> <4KB block offset> <4KB block count>"
.sp
.B Release
the given logical block region of the file. That is, truncate away
any data blocks but leave behind offline data regions and do not change
the main inode metadata. Future attempts to read or write the block
region
will block until the region is restored by a
.B stage
write. This is used by userspace archive managers to store file data
in a remote archive tier.
.sp
This only works on regular files and with write permission. Releasing
regions that are already offline or are sparse, including past the end
of the file, silently succeed.
.RS 1.0i
.PD 0
.TP
.sp
.B "path"
The path to the regular file whose region will be released.
.TP
.B "version"
The current data version of the contents of the file. This ensures
that a release operation is truncating the version of the data that it
expects. It can't throw away data that was newly written while it was
performing its release operation. An inode's data_version is read
by the SCOUTFS_IOC_STATFS_MORE
ioctl.
.TP
.B "4KB block offset"
The 64bit logical block offset of the start of the region in units of 4KB.
.TP
.B "4KB block count"
The 64bit length of the region to release in units of 4KB blocks.
.RE
.PD
.TP
.BI "setattr <\-c ctime> <\-d data_version> -o <\-s i_size> <\-f path>
.sp
Set scoutfs specific metadata on a newly created inode without updating
other inode metadata.
.RS 1.0i
.PD 0
.TP
.sp
.B "-c ctime"
Specify the inode's creation GMT timespec with 64bit seconds and 32bit
nanoseconds formatted as
.B sec.nsec
\&.
.TP
.B "-d data_version"
Specify the inode's data version. This can only be set on regular files whose
current data_version is 0.
.TP
.B "-o"
Create an offline region for all of the file's data up to the specified
file size. This can only be set on regular files whose data_version is
0 and i_size must also be specified.
.TP
.B "-s i_size"
Set the inode's i_size. This can only be set on regular files whose
data_version is 0.
.TP
.B "-f path"
The file whose metadata will be set.
.RE
.PD
.TP
.BI "stage <file> <vers> <offset> <count> <archive file>"
.sp
.B Stage
the contents of the file by reading a region of another archive file and writing it
into the file region without updating regular inode metadata. Any tasks
that are blocked by the offline region will proceed once it has been
staged.
.RS 1.0i
.PD 0
.TP
.sp
.B "file"
The regular file whose contents will be staged.
.TP
.B "vers"
The data_version of the contents to be staged. It must match the
current data_version of the file.
.TP
.B "offset"
The starting byte offset of the region to write. This must be aligned
to 4KB blocks.
.TP
.B "count"
The length of the region to write in bytes. A length of 0 is a noop
and will immediately return success. The length must be a multiple
of 4KB blocks unless it is writing the final partial block in which
case it must end at i_size.
.TP
.B "archive file"
A file whose contents will be read and written as the staged region.
The start of the archive file will be used as the start of the region.
.RE
.PD
.TP
.BI "stat [-s single] <path>"
.sp
Display scoutfs metadata fields for the given inode.
.RS 1.0i
.PD 0
.TP
.sp
.B "-s single"
Only ontput a single stat instead of all the stats with one stat per
line. The possible stat names are those given in the output.
.TP
.B "path"
The path to the file whose inode field will be output.
.sp
.TP
.RE
.PD
The fields are as follows:
.RS 1.0i
.PD 0
.TP
.B "meta_seq"
The metadata change sequence. This changes each time the inode's metadata
is changed during a mount's transaction.
.TP
.B "data_seq"
The data change sequence. This changes each time the inode's data
is changed during a mount's transaction.
.TP
.B "data_version"
The data version changes every time any contents of the file changes,
including size changes. It can change many times during a syscall in a
transactions.
.TP
.B "online_blocks"
The number of 4Kb data blocks that contain data and can be read.
.TP
.B "online_blocks"
The number of 4Kb data blocks that are offline and would need to be
staged to be read.
.RE
.PD
.TP
.BI "statfs [-s single] <path>"
.sp
Display scoutfs metadata fields for a scoutfs filesystem.
.RS 1.0i
.PD 0
.TP
.sp
.B "-s single"
Only ontput a single stat instead of all the stats with one stat per
line. The possible stat names are those given in the output.
.TP
.B "path"
The path to any inode in the filesystem.
.sp
.TP
.RE
.PD
The fields are as follows:
.RS 1.0i
.PD 0
.TP
.B "fsid"
The unique 64bit filesystem identifier for this filesystem.
.TP
.B "rid"
The unique 64bit random identifier for this mount of the filesystem.
This is generated for every new mount of the file system.
.RE
.PD
.TP
.BI "walk-inodes <index> <first> <last> <path>"
.sp
Walks an inode index in the file system and outputs the inode numbers
that are found within the first and last positions in the index.
.RS 1.0i
.PD 0
.sp
.TP
.B "index"
Specifies the index to walk. The currently supported indices are
.B meta_seq
and
.B data_seq
\&.
.TP
.B "first"
The starting position of the index walk.
.I 0
is the first possible position in every index.
.TP
.B "last"
The last position to include in the index walk.
.I \-1
can be given as shorthand for the U64_MAX last possible position in
every index.
.TP
.B "path"
A path to any inode in the filesystem, typically the root directory.
.B "META-DEVICE"
The path to the metadata device for the filesystem whose metadata will be
printed. Since this command reads via the host's buffer cache, it may not
reflect blocks recently written to the shared block devices by another host
unless the
.B blockdev \--flushbufs
command is used first.
.RE
.PD
.SH SEE ALSO
.BR scoutfs (5),
.BR xattr (7).
.BR xattr (7),
.BR blockdev (8),
.BR wipefs (8)
.SH AUTHORS
Zach Brown <zab@versity.com>


@@ -16,6 +16,7 @@ BuildRequires: git
BuildRequires: gzip
BuildRequires: libuuid-devel
BuildRequires: openssl-devel
BuildRequires: libblkid-devel
#Requires: kmod-scoutfs = %{version}

utils/src/blkid.c

@@ -0,0 +1,94 @@
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <stdbool.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <blkid/blkid.h>
#include "util.h"
#include "format.h"
#include "blkid.h"
static int check_bdev_blkid(int fd, char *devname, char *usage)
{
	blkid_probe pr;
	int ret = 0;

	pr = blkid_new_probe_from_filename(devname);
	if (!pr) {
		fprintf(stderr, "%s: failed to create a new libblkid probe\n",
			devname);
		ret = -1;
		goto out;
	}

	/* enable partitions probing (superblocks are enabled by default) */
	ret = blkid_probe_enable_partitions(pr, true);
	if (ret == -1) {
		fprintf(stderr, "%s: blkid_probe_enable_partitions() failed\n",
			devname);
		goto out;
	}

	ret = blkid_do_fullprobe(pr);
	if (ret == -1) {
		fprintf(stderr, "%s: blkid_do_fullprobe() failed\n", devname);
		goto out;
	} else if (ret == 0) {
		const char *type;

		if (!blkid_probe_lookup_value(pr, "TYPE", &type, NULL)) {
			fprintf(stderr, "%s: appears to contain an existing "
				"%s superblock\n", devname, type);
			ret = -1;
			goto out;
		}
		if (!blkid_probe_lookup_value(pr, "PTTYPE", &type, NULL)) {
			fprintf(stderr, "%s: appears to contain a partition "
				"table (%s)\n", devname, type);
			ret = -1;
			goto out;
		}
	} else {
		/* fullprobe returned 1: nothing detected, device is ok */
		ret = 0;
	}
out:
	blkid_free_probe(pr);
	return ret;
}

static int check_bdev_scoutfs(int fd, char *devname, char *usage)
{
	struct scoutfs_super_block *super = NULL;
	int ret;

	ret = read_block(fd, SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT,
			 (void **)&super);
	if (ret)
		return ret;

	if (le32_to_cpu(super->hdr.magic) == SCOUTFS_BLOCK_MAGIC_SUPER) {
		fprintf(stderr, "%s: appears to contain an existing "
			"ScoutFS superblock\n", devname);
		ret = -EINVAL;
	}

	free(super);
	return ret;
}

/*
 * Returns 0 if the device looks safe to use, non-zero on error or when
 * an existing signature is found.
 */
int check_bdev(int fd, char *devname, char *usage)
{
	return check_bdev_blkid(fd, devname, usage) ?:
	       /* Our sig is not in blkid (yet) so check explicitly for us. */
	       check_bdev_scoutfs(fd, devname, usage);
}

utils/src/blkid.h

@@ -0,0 +1,6 @@
#ifndef _BLKID_H_
#define _BLKID_H_
int check_bdev(int fd, char *path, char *usage);
#endif

Some files were not shown because too many files have changed in this diff.