Compare commits

..

332 Commits

Author SHA1 Message Date
Zach Brown
2634fadfcb Merge pull request #71 from versity/zab/v1_1_release
Zab/v1 1 release
2022-02-04 11:35:39 -08:00
Zach Brown
0c1f19556d Prepare v1.2-rc release
Add the v1.2-rc section to the release notes so that we can add entries
with commits as needed.

Signed-off-by: Zach Brown <zab@versity.com>
2022-02-04 11:32:53 -08:00
Zach Brown
19caae3da8 v1.1 Release
Finish off the release notes for the 1.1 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-02-04 11:32:37 -08:00
Zach Brown
2989afbf46 Merge pull request #70 from versity/zab/silence_duplicate_log_merge_complete_error
Silence resent log merge commit error
2022-02-02 14:35:01 -08:00
Zach Brown
730a84af92 Silence resent log merge commit error
The server's log merge complete request handler was considering the
absence of the client's original request as a failure.  Unfortunately,
this case is possible if a previous server successfully completed the
client's request but the response was lost because it stopped for
whatever reason.

The failure was being logged as a hard error to the console which was
causing tests to occasionally fail during server failover that hit just
as the log merge completion was being processed.

The error was being sent to the client as a response; we just need to
silence the console message for these expected but rare errors.

We also fix the related case where the server printed the even more
harsh WARN_ON if there was a next original request but it wasn't the one
we expected to find from our requesting client.

Signed-off-by: Zach Brown <zab@versity.com>
2022-02-02 11:26:36 -08:00
Zach Brown
5b77133c3b Merge pull request #68 from versity/zab/collection_of_fixes
Zab/collection of fixes
2022-01-24 11:22:41 -08:00
Zach Brown
329ac0347d Remove unused scoutfs_net_cancel_request()
The net _cancel_request call hasn't been used or tested in approximately
a bazillion years.  Best to get rid of it and add and test it again
if we think we need it.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-24 09:40:08 -08:00
Zach Brown
15d7eec1f9 Disallow opening unlinked files by handle
Our open by handle functions didn't care that the inode wasn't
referenced and let tasks open unlinked inodes by number.  This
interacted badly with the inode deletion mechanisms which required that
inodes couldn't be cached on other nodes after the transaction which
removed their final reference.

If a task did accidentally open a file by inode while it was being
deleted it could see the inode items in an inconsistent state and return
very confusing errors that look like corruption.

The fix is to give the handle iget callers a flag to tell iget to only
get the inode if it has a positive nlink.  If iget sees that the inode
has been unlinked it returns -ENOENT.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-24 09:40:08 -08:00
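
A hedged, caller-side sketch of the rule described above (the lookup helper
here is hypothetical, not scoutfs's actual iget signature): handle-based
opens now refuse inodes whose link count has dropped to zero.

    /* sketch only: open-by-handle returns -ENOENT for unlinked inodes */
    inode = lookup_inode_by_number(sb, ino);    /* hypothetical helper */
    if (!IS_ERR(inode) && inode->i_nlink == 0) {
            iput(inode);
            inode = ERR_PTR(-ENOENT);           /* matches the described fix */
    }
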
Zach Brown
cff17a4cae Remove unused flags scoutfs_inode_refresh arg
The flags argument to scoutfs_inode_refresh wasn't being used.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-24 09:40:08 -08:00
Zach Brown
9fa2c6af89 Use get-allocated-inos in orphan-inodes test
The orphan inodes test needs to test if inode items exist as it
manipulates inodes.  It used to open the inode by a handle but we're
fixing that to not allow opening unlinked files.   The
get-allocated-inos ioctl tests for the presence of items owned by the
inode regardless of any other vfs state so we can use it to verify what
scoutfs is doing as we work with the vfs inodes.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-24 09:40:08 -08:00
Zach Brown
e067961714 Add get-allocated-inos scoutfs command
Add the get-allocated-inos scoutfs command which wraps the
GET_ALLOCATED_INOS ioctl.   It'll be used by tests to find items
associated with an inode instead of trying to open the inode by a
constructed handle after it was unlinked.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-24 09:40:08 -08:00
Zach Brown
7a96e03148 Add get_allocated_inos ioctl
Add an ioctl that can give some indication of inodes that have inode
items.   We're exposing this for tests that verify the handling of open
unlinked inodes.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-24 09:40:08 -08:00
Zach Brown
e9b3cc873a Export scoutfs_inode_init_key
We're adding an ioctl that wants to build inode item keys so let's
export the private inode key initializer.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-24 09:40:08 -08:00
Zach Brown
5f2259c48f Revert "Fix client/server race btwn lock recov and farewell"
This reverts commit 61ad844891.

This fix was trying to ensure that lock recovery response handling
can't run after farewell calls reclaim_rid() by jumping through a bunch
of hoops to tear down locking state as the first farewell request
arrived.

It introduced a very slippery use-after-free during shutdown.  It appears
that it was from drain_workqueue() previously being able to stop
chaining work.   That's no longer possible when you're trying to drain
two workqueues that can queue work in each other.

We found a much clearer way to solve the problem so we can toss this.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-24 09:40:08 -08:00
Zach Brown
e14912974d Wait for lock recovery before sending farewell
We recently found that the server can send a farewell response and try
to tear down a client's lock state while it is still in lock recovery
with the client.   The lock recovery response could add a lock
for the client after farewell's reclaim_rid() had thought the client was
gone forever and tore down its locks.

This left a lock in the lock server that wasn't associated with any
clients and so could never be invalidated.   Attempts to acquire
conflicting locks with it would hang forever, which we saw as hangs in
testing with lots of unmounting.

We tried to fix it by serializing incoming request handling and
forcefully clobbering the client's lock state as we first got
the farewell request.   That went very badly.

This takes another approach of trying to explicitly wait for lock
recovery to finish before sending farewell responses.   It's more in
line with the overall pattern of having the client be up and functional
until farewell tears it down.

With this in place we can revert the other attempted fix that was
causing so many problems.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-24 09:39:51 -08:00
Zach Brown
813ce24d79 Move local-force-unmount test script into tests/
The local-force-unmount fenced fencing script only works when all the
mounts are on the local host and it uses force unmount.   It is only
used in our specific local testing scripts.  Packaging it as an example
led people to believe that it could be used to cobble together a
multi-host testing network, however temporary.

Move it from being in utils and packaged to being private to our tests so
that it doesn't present an attractive nuisance.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-19 11:33:34 -08:00
Zach Brown
e2ce5ab6da Free pending recovery state on shutdown
scoutfs_recov_shutdown() tried to move the recovery tracking structs off
the shared list and into a private list so they could be freed.  But
then it went and walked the now empty shared list to free entries.  It
should walk the private list.

This would leak a small amount of memory in the rare cases where the
server was shutdown while recovery was still pending.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-19 09:22:48 -08:00
Zach Brown
89ca903c41 Print log trees get/commit seqs
Back when we added the get/commit transaction sequence numbers to the
log_trees we forgot to add them to the scoutfs print output.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-19 09:21:02 -08:00
Zach Brown
e3c7e21c40 Use write memory barrier in set_shutting_down
The server's little set_shutting_down() helper accidentally used a read
barrier instead of a write barrier.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-19 09:17:38 -08:00
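
For illustration, the intended barrier pairing looks like the sketch below;
the struct and field names are assumptions, only set_shutting_down comes
from the commit subject.

    /* minimal sketch: the writer publishes the flag with smp_wmb() (the
     * code accidentally used a read barrier here) and readers pair with
     * smp_rmb() */
    static void set_shutting_down(struct server_info *server, bool down)
    {
            server->shutting_down = down;
            smp_wmb();
    }

    static bool is_shutting_down(struct server_info *server)
    {
            smp_rmb();
            return server->shutting_down;
    }
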
Zach Brown
e97ea5407d Merge pull request #64 from bgly/bduffyly/quorum_race
Fix client/server race between lock recov and farewell processing
2022-01-14 09:03:00 -08:00
Bryant G. Duffy-Ly
8db5c118c3 Change clent to c_ent
To make it clearer, change clent to c_ent to represent a
client entry.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2022-01-13 13:33:05 -06:00
Bryant G. Duffy-Ly
61ad844891 Fix client/server race btwn lock recov and farewell
Tear down client lock server state and set a boolean so that
there is no race between client/server processing lock recovery
at the same time as farewell.

Currently there is a bug where if server and clients are unmounted
then work from the client is processed out of order, which leaves
behind a server_lock for a RID that no longer exists.
In order to fix this we need to serialize SCOUTFS_NET_CMD_FAREWELL
in recv_worker.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2022-01-13 13:32:56 -06:00
Zach Brown
2c8f5d8fc1 Merge pull request #65 from versity/zab/item_cache_move_page_seq
Preserve item cache page max_seq as items move
2022-01-13 09:12:23 -08:00
Bryant G. Duffy-Ly
8a504cd5ae Add client/server unmount race on lock_recov unit test
This unit test reproduces the race we have between
client and server doing lock recovery while farewell
is processed.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2022-01-12 21:29:00 -06:00
Zach Brown
99a1cc704f Preserve item cache page max_seq as items move
The max_seq and active reader mechanisms in the item cache stop readers
from reading old items and inserting them in the cache after newer items
have been reclaimed by memory pressure.  The max_seq field in the pages
must reflect the greatest seq of the items in the page so that reclaim
knows that the page contains items newer than old readers and must not
be removed.

We update the page max_seq as items are inserted or as they're dirtied
in the page.   There's an additional subtle effect that the max_seq can
also protect items which have been erased.  Deletion items are erased
from the pages as a commit completes.   The max_seq in that page will
still protect it from being reclaimed even though no items have that seq
value themselves.

That protection fails if the range of keys containing the erased item is
moved to another page with a lower max_seq.   The item mover only
updated the destination page's max_seq for each item that was moved.  It
missed that the empty space between the items might have a larger
max_seq from an erased item.  We don't know where the erased item is so
we have to assume that a larger max_seq in the source page must be set
on the destination page.

This could explain very rare item cache corruption where nodes were
seeing deleted directory entry items reappearing.  It would take a
specific sequence of events involving large directories with an isolated
removal, a delayed item cache reader, a commit, and then enough
insertions to split the page all happening in precisely the wrong
sequence.

Signed-off-by: Zach Brown <zab@versity.com>
2022-01-12 10:23:55 -08:00
Zach Brown
166ab58b99 Merge pull request #62 from versity/zab/change_quorum_config
Zab/change quorum config
2021-11-29 12:18:15 -08:00
Zach Brown
8bc1ee8346 Add change-quorum-config command
Add a command to change the quorum config, which starts by only supporting
updating the super block while the file system is offline.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-24 15:41:04 -08:00
Zach Brown
285b68879a Set quorum config ver to 1 in mkfs and print
We're adding a command to change the quorum config which updates its
version number.  Let's make the version a little more visible and start
it at the more humane 1.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-24 15:41:04 -08:00
Zach Brown
1ac3efe701 Add meta_super_in_use utils helper
Move the code that checks that the super is in use from
change-format-version into its own function in util.c.   We'll use it in
an upcoming command to change the quorum config.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-24 15:40:25 -08:00
Zach Brown
ce76682db7 Make mkfs quorum helpers available
Move functions for printing and validating the quorum config from mkfs.c
to quorum.c so that they can be used in an upcoming command to change
the quorum config.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-24 13:44:51 -08:00
Zach Brown
686f8515bc Fix --quorum-count typo in mkfs error message
The change from --quorum-count to --quorum-slot forgot to update a
mention of the option in an error message in mkfs when it wasn't
provided.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-24 13:44:51 -08:00
Zach Brown
93bc52cc54 Merge pull request #60 from bgly/bduffyly/block_stale_reads
Fix block-stale-read test case
2021-11-24 10:25:26 -08:00
Zach Brown
1108d1288a Merge pull request #61 from bgly/bduffyly/rename2
Add basic renameat2 syscall support
2021-11-24 10:24:23 -08:00
Bryant G. Duffy-Ly
0abcd5a004 Take generic/025/078 off expunge list adding 23/24
We want to enable the test case for:
generic/023 - tests that renameat2 syscall exists
generic/024 - renameat2 with NOREPLACE flag

Move both generic/025 and 078 to the no run list so that
we can test the [not run] output when unsupported flags
are passed.

Example output:
generic/025      [not run] fs doesn't support RENAME_EXCHANGE
generic/078      [not run] fs doesn't support RENAME_WHITEOUT

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-19 17:54:19 -06:00
Bryant G. Duffy-Ly
888ad8ec5c Add renameat2 unit test case
The goal of the test case is to have two mount points each make an
async renameat2 call so that the two calls race with
RENAME_NOREPLACE.  When this happens you expect one of them to fail
with -EEXIST, which validates that the new flag works: one of the
two renameat2 calls should hit the new RENAME_NOREPLACE check and
exit early.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-19 17:54:13 -06:00
Bryant G. Duffy-Ly
16ea0ef671 Add syscall wrapper for renameat2
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-19 17:54:08 -06:00
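
On kernels with the syscall but libc versions without a wrapper, a
userspace renameat2 wrapper generally looks like the sketch below; this
shows the common approach, not necessarily the exact wrapper added in this
commit.

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    /* fall back to the raw syscall when libc has no renameat2() wrapper */
    static int do_renameat2(int olddirfd, const char *oldpath,
                            int newdirfd, const char *newpath,
                            unsigned int flags)
    {
            return syscall(SYS_renameat2, olddirfd, oldpath,
                           newdirfd, newpath, flags);
    }

The test then passes RENAME_NOREPLACE (from <linux/fs.h>) in flags.
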
Bryant G. Duffy-Ly
1b8e3f7c05 Add basic renameat2 syscall support
Support the generic renameat2 syscall, then add support for the
RENAME_NOREPLACE flag. To support the flag we need to check for the
existence of both entries and return -EEXIST.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-19 17:54:02 -06:00
Bryant G. Duffy-Ly
3ae0ebd0d8 Fix block-stale-read test case
The current test case attempts to create state to read
by calling setattr and getattr in an attempt to force block
cache reads. It so happens that this does not always force
block cache reads, which in rare cases causes this test case
to fail.

The new test case removes all the extra bouncing around of mount
points and just directly calls scoutfs df, which walks
everyone's allocators, which are guaranteed to exist, to summarize
the block counts. Therefore, we do not have to create any state
prior to trying to force a read.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-19 15:41:54 -06:00
Zach Brown
714b7f2a84 Merge pull request #54 from bgly/bduffyly/abort_conn
Fix client/server abort conn on force unmount
2021-11-09 13:29:20 -08:00
Zach Brown
945f8b4828 Merge pull request #58 from bgly/bduffyly/print_data
Fix scoutfs print <data_dev> hang
2021-11-09 09:50:14 -08:00
Zach Brown
b5ccefeeb9 Merge pull request #59 from versity/zab/v1_release_notes
Add release notes with the 1.0 GA release
2021-11-08 16:09:42 -08:00
Zach Brown
ea08942824 Add release notes with the 1.0 GA release
Let's try maintaining release notes in a file in the repo.  There are
lots of schemes for associating commits and release notes and this seems
like the simplest place to start.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-08 14:42:33 -08:00
Bryant G. Duffy-Ly
95f2a87864 Fix scoutfs print <data_dev> hang
If a user tries to print a data device, detect that it is the
data device and exit early.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-08 16:16:13 -06:00
Bryant G. Duffy-Ly
38ee2defd5 Add a filter for forced unmount error output
[85164.299902] scoutfs f.8c19e1.r.facf2e error: server error writing btree blocks: -5
[144308.589596] scoutfs f.c9397a.r.8ae97f error: server error -5 freeing merged btree blocks: looping commit del/upd freeing item
[174646.005596] scoutfs f.15f0b3.r.1862df error: server error -5 freeing merged btree blocks: final commit del/upd freeing item
[146653.893676] scoutfs f.c7f188.r.34e23c error: server error writing super block: -5
[273218.436675] scoutfs f.dd4157.r.f0da7e error: server failed to bind to 127.0.0.1:42002, err -98
[376832.542823] scoutfs f.049985.r.1a8987 error: error -5 reading quorum block 19 to update event 1 term 3

The above is an example output that will be filtered out

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-08 07:36:02 -06:00
Bryant G. Duffy-Ly
0fc8ccb122 Fix exiting out of btree_walk early for force_umnt
We do not want to short-circuit btree_walk early; it is
better to handle the force unmount on the caller side.
Therefore, remove this check from btree_walk.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-05 15:21:09 -05:00
Bryant G. Duffy-Ly
e4a3c2b95d Break client/server out of waiting network replies
If there is a forced unmount we call _net_shutdown from
umount_begin in order to tell the server and clients to
break out of pending network replies. We then add the call
to abort within the shutdown_worker since most of the mucking
with send and resend queues is done there.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-05 15:21:04 -05:00
Bryant G. Duffy-Ly
cf4e6611d3 Fix inconsistency assertions at commit_log_merge
Only BUG_ON for inconsistency, not for commit errors
or failure to delete the original request.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-05 15:18:57 -05:00
Bryant G. Duffy-Ly
65429a9cc4 Ensure that writer_init and alloc_init are cleaned
In scoutfs_server_worker we do not properly handle the cleanup
of _block_writer_init and alloc_init. On error paths, if either of
those contexts is initialized, we can call alloc_prepare_commit or
writer_forget_all to ensure we drop the block references and clear
the dirty status of all the blocks in the writer.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-05 15:05:42 -05:00
Zach Brown
d764ed7c43 Merge pull request #57 from versity/zab/update_readme
Update README.md
2021-11-05 11:34:44 -07:00
Zach Brown
465e5ee769 Update README.md
Remove a bunch of old language from the README.  We're no longer in the
early days of the open release so we can remove all the alpha quality
language.   And the system has grown sufficiently that the repo README
isn't a great place for a small getting started doc.  There just isn't
room to do the subject justice.   If we need such a thing for the
project we'll put it as a first order doc in the repo that'd be
distributed along with everything else.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-05 11:16:57 -07:00
Bryant G. Duffy-Ly
83a6bbb640 Fix inconsistency in server_log_merge_free_work
In order to safely free blocks we need to first dirty
the work.  This allows it to resume later on without a double
free.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-03 17:09:51 -05:00
Zach Brown
f02d68f567 Merge pull request #55 from versity/zab/v1_format_version
Zab/v1 format version
2021-11-03 10:18:50 -07:00
Zach Brown
5d6a510e25 Merge pull request #56 from versity/zab/xattr_shrink_bad_items
Fix xattr update out of bounds access
2021-11-02 10:17:06 -07:00
Zach Brown
1b4d291bf7 Fix xattr update out of bounds access
As we update xattrs we need to update any existing old items with the
contents of the new xattr that uses those items.   The loop that updated
existing items only took the old xattr size into account and assumed
that the new xattr would use those items.   If the new xattr size used
fewer parts then the attempt to update all the old parts that weren't
covered by the new size would go very wrong.   The length of the region
in the new xattr would be negative so it'd try to use the max part
length.  Worse, it'd copy these max part length regions outside the
input new xattr buffer.  Typically this would land in addressable memory
and copy garbage into the unused old items before they were later
deleted.

However, it could access so far outside the input buffer that it could
cross a page boundary into inaccessible memory and fault.  We saw this in
the field while trying to repeatedly incrementally shrink a large xattr.

This fixes the loop that updates overlapping items between the new and
old xattr to start with the smaller of their two item counts.  Now it
will only update items that are actually used by both xattrs and will
only safely access the new xattr input buffer.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-01 11:33:17 -07:00
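
A hedged sketch of the loop bound described above (the helper name is a
placeholder): the update loop is limited to the smaller of the two item
counts so it never reads past the new xattr's input buffer.

    /* illustrative only: update just the items used by both the old and
     * new xattr; the leftover old items are deleted separately */
    unsigned int old_nr = xattr_nr_items(old_val_len);  /* placeholder helper */
    unsigned int new_nr = xattr_nr_items(new_val_len);
    unsigned int nr_update = old_nr < new_nr ? old_nr : new_nr;
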
Zach Brown
223ee5deef Declare v1 of the stable persistent format
From now on if we make incompatible changes to structures or messages
then we update the format version and ensure that the code can deal with
all the versions in its supported range.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
8f60ac06c5 Clean up our ioctl numbers
We had arbitrarily chosen an ioctl code 's' to match scoutfs, but of
course that conflicts.  This chooses an arbitrary hole in the upstream
reservations from ioctl-number.rst.

Then we make sure to have our _IO[WR] usage reflect the direction of the
final type parameter.  For most of our ioctls userspace is writing an
argument parameter to perform an operation (that often has side
effects).   Most of our ioctls should be _IOW because userspace is
writing the parameter, not _IOR (though the operation tends to read
state).  A few ioctls copy output back to userspace in the parameter so
they're _IOWR.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
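
For reference, the _IOW/_IOWR convention described above looks like the
sketch below; the code value, number, and struct are placeholders rather
than scoutfs's real definitions.

    #include <linux/ioctl.h>
    #include <linux/types.h>

    struct example_ioctl_args {         /* placeholder argument struct */
            __u64 pos;
            __u64 count;
    };

    /* userspace writes the argument struct, so the direction is _IOW even
     * though the operation reads state; ioctls that copy results back into
     * the struct would use _IOWR instead */
    #define EXAMPLE_IOCTL_CODE 0xbf     /* placeholder code in an unused hole */
    #define EXAMPLE_IOCTL_OP   _IOW(EXAMPLE_IOCTL_CODE, 1, struct example_ioctl_args)
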
Zach Brown
932a842ae3 Remove valid_bytes from stat _more ioctls
The idea here was that we'd expand the size of the struct and
valid_bytes would tell the kernel which fields were present in
userspace's struct.  That doesn't combine well with the ioctl convention
of having the size of the type baked into the ioctl number.   We'll
remove this to make the world less surprising.  If we expand the
interface we'd add additional ioctls and types.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
618a7a4c47 Remove unused lock server alloc and wri
While checking in on some other code I noticed that we have lingering
allocator and writer contexts over in the lock server.  The lock server
used to manage its own client state and recovery.  We've since moved
that into shared recov functionality in the server.  The lock server no
longer manipulates its own btrees and doesn't need these unused
references to the server's contexts.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
9ebf43db99 Spread out key zone and type values
Introduce some space between the current key zone and type values so
that we have room to insert new keys amongst the current keys if we need
to.   A spacing of 4 is arbitrarily chosen as small enough to still give
us intuitively small numbers while leaving enough room to grow, given
how long it's taken to come to the current number of keys.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
e38beee85a Stop using inode index key type as array index
The code that updates inode index items on behalf of indexed fields uses
an array to track changes in the fields.  Those array indexes were the
raw key type values.

We're about to introduce some sparse space between all the key values so
that we have some room to add keys in the future at arbitrary sort
positions amongst the previous keys.

We don't want the inode index item updating code to keep using raw types
as array indices when the type values are no longer small dense values.
We introduce indirection from type values to array indices to keep the
tracking array in the in-memory inode struct small.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
20ac2e35fa Remove clock_sync field from net message
As we freeze the format let's remove this old experiment to try and make
it easier to line up traces from different mounts.   It never worked
particularly well and I think it could be argued that trying to merge
trace logs on different machines isn't a particularly meaningful thing
to do.   You care about how they interact not what they were doing at
the same time with their independent resources.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
80ee2c6d57 Harden client transaction processing
There are a few bad corner cases in the state machine that governs how
client transactions are opened, modified, and committed.

The worst problem is on the server side.   All server request handlers
need to cope with resent requests without causing bad side effects.
Both get_log_trees and commit_log_trees would try to fully process
resent requests.  _get_log_trees() looks safe because it works with the
log_trees that was stored previously.  _commit_log_trees() is not safe
because it can rotate out the srch log file referenced by the sent
log_trees every time it's processed.  This could create extra srch
entries which would delete the first instance of entries.  Worse still,
by injecting the same block structure into the system multiple times it
ends up causing multiple frees of the blocks that make up the srch file.

The client side problems are slightly different, but related.   There
aren't strong constraints which guarantee that we'll only send a commit
request after a get request succeeds.   In crazy circumstances the
commit request in the write worker could come before the first get in
mount succeeds.   Far worse is that we can send multiple commit requests
for one transaction if it changes as we get errors during multiple
queued write attempts, particularly if we get errors from get_log_trees
after having successfully committed.

This hardens all these paths to ensure a strict sequence of
get_log_trees, transaction modification, and commit_log_trees.

On the server we add *_trans_seq fields to the log_trees struct so that
both get_ and commit_ can see that they've already prepared a commit to
send or have already committed the incoming commit, respectively.   We
can use the get_trans_seq field as the trans_seq of the open transaction
and get rid of the entire separate mechanism we used to have for
tracking open trans seqs in the clients.  We can get the same info by
walking the log_trees and looking at their *_trans_seq fields.

In the client we have the write worker immediately return success if
mount hasn't opened the first transaction.   Then we don't have the
worker return to allow further modification until it has gotten success
from get_log_trees.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
42c4c6dd24 Move transaction sbi fields to trans_info
The transaction code was built a million years ago and put all of its
data in our core super block info.   This finally moves the rest of the
private transaction fields out of the core super block and into the
transaction info.   This makes it clear that it's private to trans.c and
brings it in line with the rest of the subsystems in the tree.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
7d71b610af Add server extent motion tracking
Add tracking in the alloc functions that the server uses to move extents
between allocator structures on behalf of client mounts.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
70ede28e39 Remove unused traced_extent leavings
Remove some lingering support helpers for the traced_extent struct that
we haven't used in a while.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
b477604339 Don't clobber srch compact errors
The srch compaction worker will wait a bit before attempting another
compaction after it finishes a compaction that failed.

Unfortunately, it clobbered the errors it got during compaction with the
result of sending the commit to the server with the error flag.  If the
commit is successful then it thinks there were no errors and immediately
re-queues itself to try the next compaction.

If the error is persistent, as it was with a bug in how we merged log
files with a single page's worth of entries, then we can spin
indefinitely getting an error, clobbering the error with the commit
result, and immediately queueing our work to do it all over again.

This fix preserves existing errors when getting the result of the commit
and will correctly back off.  If we get persistent merge errors at least
they won't consume significant resources.  We add a counter for the commit
errors so we can get some visibility if this happens.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
75f9aabe75 Allow compacting logs down to a single page
The k-way merge function at the core of the srch file entry merging had
some bookkeeping math (calculating number of parents) that couldn't
handle merging a single incoming entry stream, so it threw a warning and
returned an error.  When refusing to handle that case, it was assuming
that the caller was trying to merge down a single log file, which doesn't
make any sense.

But in the case of multiple small unsorted logs we can absolutely end up
with their entries stored in one sorted page.   We have one sorted input
page that's merging multiple log files.  The merge function is also the
path that writes to the output file so we absolutely need to handle this
case.

We more carefully calculate the number of parents, clamping it to one
parent when we'd otherwise get "(roundup(1) -> 1) - 1 == 0" when
calculating the number of parents from the number of inputs.  We can
relax the warning and error to refuse to merge nothing.

The test triggers this case by putting single search entries in the log
files for mounts and unmounting them to force rotation of the mount log
files into mergable rotated log files.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
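
A minimal sketch of the clamp described above; calc_parents_from_inputs is
a hypothetical stand-in for the existing bookkeeping math that returned
zero parents for a single input.

    /* however the parent count is derived, never let a single sorted input
     * page end up with zero parents */
    nr_parents = calc_parents_from_inputs(nr_inputs);   /* hypothetical */
    if (nr_parents < 1)
            nr_parents = 1;
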
Zach Brown
cf512c5fcf Use inode_count field for statfs file counts
Our statfs implementation had clients reading the super block and using
the next free inode number to guess how many inodes there might be.  We
are very aggressive with giving directories private pools of inode
numbers to allocate from.   They're often not used at all, creating huge
gaps in allocated inode numbers.   The ratio of the average number of
allocations per directory to the batch size given to each directory is
the factor that the used inode count can be off by.

Now that we have a precise count of active inodes we can use that to
return accurate counts of inodes in the files fields in the statfs
struct.  We still don't have static inode allocation so the fields don't
make a ton of sense.  We fake the total and free count to give a
reasonable estimate of the total files that doesn't change while the
free count is calculated from the correct count of used inodes.

While we're at it we add a request to get the summed fields that the
server can cheaply discover in cache rather than having the client
always perform read IOs.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
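
A hedged sketch of the statfs idea; the fabricated total and the used-inode
argument are illustrative, not the actual scoutfs names or values.

    #include <linux/statfs.h>
    #include <linux/types.h>

    /* report a stable fake total and derive free files from the precise
     * count of used inodes */
    #define FAKE_TOTAL_FILES (1ULL << 40)       /* arbitrary stable total */

    static void fill_file_counts(struct kstatfs *kst, u64 inodes_used)
    {
            kst->f_files = FAKE_TOTAL_FILES;
            kst->f_ffree = FAKE_TOTAL_FILES - inodes_used;
    }
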
Zach Brown
a53d6d1a8e Add scoutfs_alloc_foreach_super which takes super
Add an alloc_foreach variant which uses the caller's super to walk the
allocators rather than always reading it off the device.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
95ed36f9d3 Maintain inode count in super and log trees
Add a count of used inodes to the super block and a change in the inode
count to the log_trees struct.   Client transactions track the change in
inode count as they create and delete inodes.   The log_trees delta is
added to the count in the super as finalized log_trees are deleted.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
94e5bc1457 Remove unused scoutfs_last_ino()
Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
366f615c9f Add support for our format version
We had previously started on a relatively simple notion of an
interoperability version which wasn't quite right.  This fleshes out
support for a more functional format version.   The super blocks have a
single version that defines behaviour of the running system.   The code
supports a range of versions and we add some initial interfaces for
updating the version while the system is offline.   All of this together
should let us safely change the underlying format over time.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
ac2587017e Add write_nr to quorum blocks
Add a write_nr field to the quorum block header which is incremented
with every write.  Each event also gets a write_nr field that is set to
the incremented value from the header.   This gives us a history of the
order of event updates that isn't sensitive to misconfigured time.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
1cdcf41ac7 Move more block read/write functions to util
We're adding another command that does block IO so move some block
reading and writing functions out of mkfs.   We also grow a few function
variants and call the write_sync variant from mkfs instead of having it
manually sync.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
024426df28 Add a file for userspace quorum config helpers
Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
a0690070ae Don't null terminate our note strings
The code that shows the note sections as files uses the section size to
define the size of the notes payload.  We don't need to null terminate
the strings to define their lengths.  Doing so puts a null in the notes
file which isn't appreciated by many readers.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
4e00f95014 run-tests builds our targets with -j
The test harness might as well use all cpus when building.  It's
reasonably safe to assume both that the test systems are otherwise idle
and that the build is likely to succeed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
0c95388f3b Set TCP_USER_TIMEOUT in addition to keepalives
TCP keepalive probes only work when the connection is idle.  They're not
sent when there's unacked send data being retransmitted.  If the server
fails while we're retransmitting we don't break the connection and try
to elect and connect to a new server until the very long default
connection timeouts expire or the server comes back and the stale
connection is aborted.

We can set TCP_USER_TIMEOUT to break an unresponsive connection when
there's written data.  It changes the behavior of the keepalive probes
so we rework them a bit to clearly apply our timeout consistently
between the two mechanisms.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
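
For reference, the userspace equivalents of the socket options described
above look roughly like this; the timeout and keepalive values are
illustrative, and the in-kernel code uses the kernel's own setsockopt
paths.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* break an unresponsive connection whether it is idle (keepalive
     * probes) or stuck retransmitting written data (TCP_USER_TIMEOUT) */
    static int set_conn_timeouts(int fd, unsigned int timeout_ms)
    {
            int on = 1;
            int idle = timeout_ms / 1000, intvl = 1, cnt = 3;

            setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
            return setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                              &timeout_ms, sizeof(timeout_ms));
    }
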
Zach Brown
d255dd3b32 Fix SCOUTFs typo in totl name nr define
Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:10:45 -07:00
Zach Brown
9b4ac64312 Consistently stop fencing as server stops
As the server comes up it needs to fence any previous servers before it
assumes exclusive access to the device.  If fencing fails it can leave
fence requests behind.   The error path for these very early failures
didn't shut down fencing so we'd have lingering fence requests that span the
life cycle of server startup and shutdown.  The next time the server
starts up in this mount it can try to create the fence request again,
get an error because a lingering one already exists, and immediately
shut down.

The result is that fencing errors that hit that initial attempt during
server startup can become persistent fencing errors for the lifetime of
that mount, preventing it from ever successfully starting the server.

Moving the fence stop call to hit all exiting error paths consistently
cleans up fence requests and avoids this problem.  The next server
instance will get a chance to process the fence request again.  It might
well hit the same error, but at least it gets a chance.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:10:45 -07:00
Zach Brown
22f9ab4dab Merge pull request #53 from bgly/fix_mkdir_test
Fix mkdir-rename-rmdir test script
2021-10-26 11:53:15 -07:00
Bryant Duffy-Ly
501953d69e Fix mkdir-rename-rmdir test script
The current script gets stuck in an infinite loop when the test
suite is started with 1 mount point. This is due to the advancement
part of the script in which it advances the ops for each mount.
The current while loop checks for when the op_mnt wraps by checking if
it equals 0. But the problem is we set each of the op_mnts to 0 during
the advancement, so when it wraps it still equates to 0, so it is an
infinite loop. Therefore, the fix is to check at the end of the loop
whether the last op's mount number wrapped, and if so just break out.

Signed-off-by: Bryant Duffy-Ly <bduffyly@versity.com>
2021-10-21 11:41:02 -05:00
Bryant Duffy-Ly
66b8c5fbd7 Enhance clarity of some kfree paths
In some of the allocation paths there are goto statements
that end up calling kfree(). That is fine, but in cases
where the pointer is not initially set to NULL we
might have undefined behavior. kfree on a NULL pointer
does nothing, so essentially these changes should not
change behavior, but they clarify the code paths.

Signed-off-by: Bryant Duffy-Ly <bduffyly@versity.com>
2021-10-06 18:07:27 -05:00
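
The pattern being adopted is the common kernel idiom sketched below (the
struct names and do_work are placeholders, not the actual scoutfs call
sites).

    /* initialize to NULL so error-path kfree() calls are always safe;
     * kfree(NULL) is a no-op */
    struct foo *a = NULL;
    struct bar *b = NULL;
    int ret;

    a = kmalloc(sizeof(*a), GFP_KERNEL);
    if (!a) {
            ret = -ENOMEM;
            goto out;
    }
    b = kmalloc(sizeof(*b), GFP_KERNEL);
    if (!b) {
            ret = -ENOMEM;
            goto out;
    }

    ret = do_work(a, b);        /* placeholder for the real work */
out:
    kfree(b);                   /* safe even when b was never allocated */
    kfree(a);
    return ret;
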
Zach Brown
3c6c2194bd Merge pull request #51 from versity/zab/totl_xattr_tag
Zab/totl xattr tag
2021-09-13 18:06:28 -07:00
Zach Brown
6ca8c0eec2 Consistently initialize dentry info
Unfortunately, we're back in kernels that don't yet have d_op->d_init.
We allocate our dentry info manually as we're given dentries.  The
recent verification work forgot to consistently make sure the info was
allocated before using it.   Fix that up, and while we're at it be a bit
more robust in how we check to see that it's been initialized without
grabbing the d_lock.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-13 14:41:07 -07:00
Zach Brown
ea2b01434e Add support for i_version
This adds i_version to our inode and maintains it as we allocate, load,
modify, and store inodes.  We set the flag in the superblock so
in-kernel users can use i_version to see changes in our inodes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-13 14:41:07 -07:00
Zach Brown
d5eec7d001 Fix uninitialized srch ret that won't happen
More recent gcc notices that ret in delete_files can be undefined if nr
is 0 while missing that we won't call delete_files in that case.  Seems
worth fixing, regardless.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-13 14:41:07 -07:00
Zach Brown
ab92d8d251 Add quick test for racing creates
Add a quick test to make sure that create is validating stale dentries
before deciding if it should create or return -EEXIST.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-13 14:41:07 -07:00
Zach Brown
b9a0f1709f Add xattr .totl. tag
Add the .totl. xattr tag.  When the tag is set the end of the name
specifies a total name with 3 encoded u64s separated by dots.  The value
of the xattr is a u64 that is added to the named total.   An ioctl is
added to read the totals.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-13 14:41:07 -07:00
Zach Brown
a59fd5865d Add seq and flags to btree items
The fs log btrees have values that start with a header that stores the
item's seq and flags.  There's a lot of sketchy code that manipulates
the value header as items are passed around.

This adds the seq and flags as core item fields in the btree.   They're
only set by the interfaces that are used to store fs items: _insert_list
and _merge.  The rest of the btree items that use the main interface
don't work with the fields.

This was done to help delta items discover when logged items have been
merged before the finalized log btrees are deleted and the code ends up
being quite a bit cleaner.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-09 14:44:55 -07:00
Zach Brown
46edf82b6b Add inode crtime creation time
Add an inode creation time field.  It's created for all new inodes.
It's visible to stat_more.  setattr_more can set it during
restore.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-03 11:14:41 -07:00
Zach Brown
e9078d83bf Merge pull request #50 from versity/zab/verify_dentries
Verify dentries after locking
2021-08-31 11:48:29 -07:00
Zach Brown
79fbaa6481 Verify dentries after locking
Our dir methods were trusting dentry args.  The vfs code paths use
i_mutex to protect dentries across revalidate or lookup and method
calls.  But that doesn't protect methods running in other mounts.
Multiple nodes can interleave the initial lookup or revalidate then
actual method call.

Rename got this right.  It is very paranoid about verifying inputs after
acquiring all the locks it needs.

We extend this pattern to the rest of the methods that need to use the
mapping of name to inode (and our hash and pos) in dentries.  Once we
acquire the parent dir lock we verify that the dentry is still current,
returning -EEXIST or -ENOENT as appropriate.

Along these lines, we tighten up dentry info correctness a bit by
updating our dentry info (recording lock coverage and hash/pos) for
negative dentries produced by lookup or as the result of unlink.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-31 09:49:32 -07:00
Zach Brown
9b9d3cf6fc Merge pull request #49 from versity/zab/btree_merge_fixes
Zab/btree merge fixes
2021-08-25 11:50:40 -07:00
Zach Brown
ad5662b892 Handle dupe invalidation requests during recovery
Client lock invalidation handling was very strict about not receiving
duplicate invalidation requests from the server because it could only
track one pending request.  The promise to only send one invalidate at a
time is made by one server, it can't be enforced across server failover.
Particularly because invalidation processing can have to do quite a lot
of work with the server as it tears down state associated with the lock.

We fix this by recording and processing each individual incoming
invalidation request on the lock.

The code that handled reordering of incoming grant responses and
invalidation requests waited for the lock's mode to match the old mode
in the invalidation request before proceeding.  That would have
prevented duplicate invalidation requests from making forward progress.

To fix this we make lock client receive processing synchronous instead
of going through async work which can reorder.  Now grant responses are
processed as they're received and will always be resolved before all the
invalidation requests are queued and processed in order.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
f5577e26b1 Reset item state when retrying stale forest reads
The forest reader reads items from the fs_root and all log btrees and
gives them to the caller who tracks them to resolve version differences.

The reads can run into stale blocks which have been overwritten.  The
forest reader was implementing the retry under the item state in the
caller.  This can corrupt items that are only seen first in an old fs
root before a merge and then only seen in the fs_root after a merge.  In
this case the item won't have any versioning and the existing version
from the old fs_root is preferred.  This is particularly bad when the
new version was deleted -- in that case we have no metadata which would
tell us to drop the old item that was read from the old fs_root.

This is fixed by pushing the retry up to callers who wipe the item state
before each retry.  Now each set of items is related to a single
snapshot of the fs_root and logs at one point in time.

I haven't seen definitive evidence of this happening in practice.  I
found this problem after putting on my craziest thinking toque and
auditing the code for places where we could lose item updates.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
5f57785790 Fix btree merge input item iteration
Btree merging attempted to build an rbtree of the input roots with only
one version of an item present in the rbtree at a time.  It really
messed this up by completely dropping an input root when a root with a
newer version of its item tried to take its place in the rbtree.  What
it should have done is advance to the next item in the older root, which
itself could have required advancing some other older root.  Dropping
the root entirely is catastrophically wrong because it hides the rest of
the items in the root from merging.  This has been manifesting as
occasional mysterious item loss during tests where memory pressure, item
update patterns, and merging all lined up just so.

This fixes the problem by more clearly keeping the next item in each
root in the rbtree.   We sort by newest to oldest version so that once
we merge the most recent version of an item it's easy to skip all the
older versions of the items in the next rbtree entries for the
rest of the input roots.

While we're at it we work with references to the static cached input
btree blocks.  The old code was a first pass that used an expensive
btree walk per item and copied the value payload.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
2a33b9faf0 Add some error testing to srch-basic-functionality
When the xattr inode searches fail the test will eventually fail when the
output differs, but that could take a while.  Have it fail much sooner
so that we can have tighter debugging iterations and trace ring buffer
contents that are likely to be a lot closer to the first failure.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
3740c0a995 More carefully scan for orphan inodes
The current orphan scan uses the forest_next_hint to look for candidate
orphan items to delete.  It doesn't skip deleted items and checks the
forest of log btrees so it'd return hints for every single item that
existed in all the log btrees across the system.  And we call the hint
per-item.

When the system is deleting a lot of files we end up generating a huge
load where all mounts are constantly getting the btree roots from the
server, reading all the newest log btree blocks, finding deleted orphan
items for inodes that have already been deleted, and moving on to the
next deleted orphan item.

The fix is to use a read-only traversal of only one version of the fs
root for all the items in one scan.   This avoids all the deleted orphan
items that exist in the log btrees which will disappear when they're
merged.  It lets the item iteration happen in a single read-only cached
btree instead of constantly reading in the most recently written root
block of every log btree.

The result is an enormous speedup of large deletions.  I don't want to
describe exactly how enormous.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
a4f5293e78 Flush invalidate and iput inode references
We can be performing final deletion as inodes are evicted during
unmount.  We have to keep full locking, transactions, and networking up
and running for the evict_inodes() call in generic_shutdown_super().
Unfortunately, this means that workers can be using inode references
during evict_inodes() which prevents them from being evicted.  Those
workers can then remain running as we tear down the system, causing
crashes and deadlocks as the final iputs try to use resources that have
been destroyed.

The fix is to first properly stop orphan scanning, which can instantiate
new cached inodes, before the call to kill_block_super ends up trying
to evict all inodes.  Then we just need to wait for any pending iput and
invalidate work to finish and perform the final iput, which will always
evict because generic_shutdown_super has cleared MS_ACTIVE.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
0c3026a2b7 Add simple per-lock server message count stats
Add some simple tracking of message counts for each lock in the lock
server so that we can start to see where conflicts may be happening in a
running system.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
5bc95fac7d Add scoutfs_unmounting()
Add a quick helper that can be used to avoid doing work if we know that
we're already shutting down.  This can be a single coarser indicator
than adding functions to each subsystem to track that we're shutting
down.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
36fcc4665d Align first free ino to lock group
Currently the first inode number that can be allocated directly follows
the root inode.  This means the first batch of allocated inodes are in
the same lock group as the root inode.

The root inode is a bit special.  It is always hot as absolute path
lookups and inode-to-path resolution always read directory entries from
the root.

Let's try aligning the first free inode number to the next inode lock
group boundary.  This will stop work in those inodes from necessarily
conflicting with work in the root inode.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
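
A minimal sketch of the alignment described above; the lock group size and
root inode constant are assumptions rather than scoutfs's real values.

    /* start free inode numbers at the next lock group boundary past the
     * root inode so new inodes don't land in the root's lock group */
    #define INO_LOCK_GROUP_NR 1024ULL           /* assumed group size */
    #define ROOT_INO          1ULL              /* assumed root inode number */

    u64 first_free_ino = ALIGN(ROOT_INO + 1, INO_LOCK_GROUP_NR);
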
Zach Brown
b0a08eb922 Remove lock grace period
We had some logic to try and delay lock invalidation while the lock was
still actively in use.  This was trying to reduce the cost of
pathological lock conflict cases but it had some severe fairness
problems.

It was first introduced to deal with bad patterns in userspace that no
longer exist and it was built on top of the LSM transaction machinery
that also no longer exists.   It hasn't aged well.

Instead of introducing invalidation latency in the hopes that it leads
to more batched work, which it can't always, let's aim more towards
reducing latency in all parts of the write-invalidate-read path and
also aim towards reducing contention in the first place.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
bb571377dc Don't merge newer items past older
We have a problem where items can appear to go backwards in time because
of the way we chose which log btrees to finalize and merge.

Because we don't have versions in items in the fs_root, and even might
not have items at all if they were deleted, we always assume items in
log btrees are newer than items in the fs root.

This creates the requirement that we can't merge a log btree if it has
items that are also present in older versions in other log btrees which
are not being merged.  The unmerged old item in the log btree would take
precedent over the newer merged item in the fs root.

We weren't enforcing this requirement at all.  We used the max_item_seq
to ensure that all items were older than the current stable seq but that
says nothing about the relationship between older items in the finalized
and active log btrees.  Nothing at all stops an active btree from having
an old version of a newer item that is present in another mount's
finalized log btree.

To reliably fix this we create a strict item seq discontinuity between
all the finalized merge inputs and all the active log btrees.  Once any
log btree is naturally finalized the server forces all the clients to
group up and finalize all their open log btrees.  A merge operation can
then safely operate on all the finalized trees before any new trees are
given to clients who would start using increasing item seqs.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
5897f4d889 Add a trivial trace_printk wrapper
Make it a bit easier to include the fsid and rid in trace_printk
messages when we're experimenting.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:12:20 -07:00
Zach Brown
999093bfc9 Add sync log trees network command
Add a command for the server to request that clients commit their open
transaction.   This will be used to create groups of finalized log
btrees for consistent merging.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:12:17 -07:00
Zach Brown
05b5d93365 Verify that quorum_slot_nr references valid slot
We were checking that quorum_slot_nr was within the range of possible
slots allowed by the format as it was parsed.  We weren't checking that
it referenced a configured slot.  Make sure, and give a nice error
message that shows the configured slots.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:11:40 -07:00
Zach Brown
4d7191dc48 Print messages on extent ins/rem errors
Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:11:40 -07:00
Zach Brown
4495dbdce6 Set initial quorum term from max of all blocks
During rough forced unmount testing we saw a seemingly mysterious
concurrent election.  It could be explained if mounts coming up don't
start with the same term.  Let's try having mounts initialize their term
to the greatest of all the terms they can see in the quorum blocks.
This will prevent the situation where some new quorum actors with
greater terms start out ignoring all the messages from others.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:11:40 -07:00
Zach Brown
70569b0448 Trivial quorum test;set -> test_and_set
Nothing interesting here, just a minor convenience to use test and set
instead of testing and then setting.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:11:40 -07:00
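
The general kernel idiom being adopted is shown below; the flag and work
names are placeholders rather than the actual quorum code.

    /* test_and_set_bit() sets the bit and returns its previous value in one
     * atomic call, replacing a separate test_bit()/set_bit() pair */
    if (!test_and_set_bit(QUORUM_FLAG_NR, &quorum_flags))
            do_flag_work();     /* only runs when the bit wasn't already set */
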
Zach Brown
823838cf01 Add more messages to server processing errors
The server doesn't give us much to go on when it gets an error handling
requests to work with log trees from the client.  This adds a lot of
specific error messages so we can get a better understanding of
failures.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:11:40 -07:00
Zach Brown
89b5865a4c Verify that log tree commit is for sending rid
We were trusting the rid in the log trees struct that the client sent.
Compare it to our recorded rid on the connection and fail if the client
sent the wrong rid.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-17 12:13:01 -07:00
Zach Brown
7cf9cd8c20 Merge pull request #48 from versity/zab/missed_invalidate_wakeup
Queue invalidation during previous request
2021-08-09 09:50:39 -07:00
Zach Brown
65ac42831f Queue invalidation during previous request
The locking protocol only allows one outstanding invalidation request
for a lock at a time.  The client invalidation state is a bit hairy and
involves removing the lock from the invalidation list while it is being
processed which includes sending the response.  This means that another
request can arrive while the lock is not on the invalidation list.  We
have fields in the lock to record another incoming request which puts
the lock back on the list.

But the invalidation work wasn't always queued again in this case.  It
*looks* like the incoming request path would queue the work, but by
definition the lock isn't on the invalidation list during this race.  If
it's the only lock in play then the invalidation list will be empty and
the work won't be queued.  The lock can get stuck with a pending
invalidation if nothing else kicks the invalidation worker.  We saw this
in testing when the root inode lock group missed the wakeup.

The fix is to have the work requeue itself after putting the lock back
on the invalidation list when it notices that another request came in.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-06 15:41:11 -07:00
Zach Brown
dde6dab0a1 Merge pull request #47 from versity/zab/stability_fixes
Zab/stability fixes
2021-08-02 12:22:44 -07:00
Zach Brown
cb1726681c Fix net BUG_ON if reconnection farewell send races
When a client socket disconnects we save the connection state to re-use
later if the client reconnects.  A newly accepted connection finds the
old connection associated with the reconnecting client and migrates
state from the old idle connection to the newly accepted connection.

While moving messages between the old and new send and resend queues the
code had an aggressive BUG_ON that was asserting that the newly accepted
connection couldn't have any messages in its resend queue.

This BUG can be tripped due to the ordering of greeting processing and
connection state migration.  The server greeting processing path sends
the farewell response to the client before it calls the net code to
migrate connection state.  When it "sends" the farewell response it puts
the message on the send queue and kicks the send work.  It's possible
for the send work to execute and move the farewell response to the
resend queue and trip the BUG_ON.

This is harmless.   The sent greeting response is going to end up on the
resend queue either way, there's no reason for the reconnection
migration to assert that it can't have happened yet.  It is going to be
dropped the moment we get a message from the client with a recv_seq that
is necessarily past the greeting response which always gets a seq of 1
from the newly accepted connection.

We remove the BUG_ON and try to splice the old resend queue after the
possible response at the head of the resend_queue so that it is the
first to be dropped.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-02 11:15:57 -07:00
Zach Brown
cdff272163 Fix alloc list exhaustion calculation
The last thing server commits do is move extents from the freed list
into freed extents.  It moves as many as it can until it runs out of
avail meta blocks and space for freed meta blocks in the current
allocator's lists.

The calculation for whether the lists had resources to move an extent
was quite off.  It missed that the first move might have to dirty the
current allocator or the list block, that the btree could join/split
blocks at each level down the paths, and boy does it look like the
height component of the calculation was just bonkers.

With the wrong calculation the server could overflow the freed list
while moving extents and trigger a BUG_ON.   We rarely saw this in
testing.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-01 14:31:57 -07:00
Zach Brown
7e935898ab Avoid premature metadata enospc
server_get_log_trees() sets the low flag in a mount's meta_avail
allocator, triggering enospc for any space consuming allocations in the
mount, if the server's global meta_avail pool falls below the reserved
block count.  Before each server transaction opens we swap the global
meta_avail and meta_freed allocators to ensure that the transaction has
at least the reserved count of blocks available.

This creates a risk of premature enospc as the global meta_avail pool
drains and swaps to the larger meta_freed.  The pool can be close to the
reserved count, perhaps exactly at it.  _get_log_trees can fill the
client's mount, even a little, and drop the global meta_avail total
under the reserved count, triggering enospc, even though meta_freed
could have had quite a lot of blocks.

The fix is to ensure that the global meta_avail has 2x the reserved
count, swapping if it falls under that.  This ensures that a server
transaction can consume an entire reserved count and still have enough
to avoid triggering enospc.
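
In effect the server's swap check becomes something like this (names
are illustrative):

    /* keep a full reserved count of headroom so that one server
     * transaction can't drop the pool below the reserved count and
     * trigger premature enospc */
    if (total_meta_avail_blocks < 2 * reserved_meta_blocks)
            swap(super->meta_avail, super->meta_freed);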

This fixes a scattering of rare premature enospc returns that were
hitting during tests.  It was rare for meta_avail to fall just at the
reserved count and for get_log_trees to have to refill the client
allocator, but it happened.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:26:32 -07:00
Zach Brown
6d0694f1b0 Add resize_devices ioctl and scoutfs command
Add a scoutfs command that uses an ioctl to send a request to the server
to safely use a device that has grown.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:26:32 -07:00
Zach Brown
fd686cab86 Fix total_data_blocks calculation in mkfs
mkfs was incorrectly initializing total_data_blocks.  The field is meant
to record the number of blocks from the start of the device that the
filesystem could access.  mkfs was subtracting the initial reserved area
of the device, so the field only counted the blocks past that area that
the filesystem might actually access.
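
Illustrative arithmetic only (field and variable names here are
assumptions):

    /* the field describes every block from the start of the device... */
    super->total_data_blocks = cpu_to_le64(device_bytes >> block_shift);

    /* ...while the initial free extent still starts past the reserved
     * area at the front of the device */
    free_start = reserved_blocks;
    free_len = (device_bytes >> block_shift) - reserved_blocks;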

This could allow accesses past the end of the device if mount checks its size
against the smaller total_data_blocks.

And we're about to use total_data_blocks as the start of a new extent to
add when growing the volume.  It needs to be fixed so that this new
grown free extent doesn't overlap with the end of the existing free
extents.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:26:32 -07:00
Zach Brown
4c1181c055 Remove first_ and last_ super blkno fields
There are fields in the super block that specify the range of blocks
that would be used for metadata or data.  They are from the time when a
single block device was carved up into regions for metadata and data.

They don't make sense now that we have separate metadata and data block
devices.  The starting blkno is static and we go to the end of the
device.

This removes the fields now that they serve no purpose.   Their only
use, checking that freed extents fell within the correct bounds, can
still be performed by using the static starting number or, roughly, the
size of the devices.  It's not perfect, but this is already only a
check to see that the blknos aren't utter nonsense.

We're removing the fields now to avoid having to update them while
worrying about users when resizing devices.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
d6bed7181f Remove almost all interruptible waits
As subsystems were built I tended to use interruptible waits in the hope
that we'd let users break out of most waits.

The reality is that we have significant code paths that have trouble
unwinding.  Final inode deletion during iput->evict in a task is a good
example.  It's madness to have a pending signal turn an inode deletion
from an efficient inline operation to a deferred background orphan inode
scan deletion.

It also happens that golang built pre-emptive thread scheduling around
signals.  Under load we see a surprising amount of signal spam and it
has created surprising error cases which would have otherwise been fine.

This changes waits to expect that IOs (including network commands) will
complete reasonably promptly.  We remove all interruptible waits with
the notable exception of breaking out of a pending mount.  That requires
shuffling setup around a little bit so that the first network message we
wait for is the lock for getting the root inode.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
4893a6f915 scoutfs_dirents_equal should return bool
It looks like it returned u64 because it was derived from _name_hash().

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
384590f016 Sync net shouldn't wait for errored submits
If async network request submission fails then the response handler will
never be called.  The sync request wrapper made the mistake of trying to
wait for completion when initial submission failed.  This never happened
in normal operation but we're able to trigger it with some regularity
with forced unmount during tests.  Unmount would hang waiting for work
to shut down, which was waiting for request responses that would never
happen.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
192f077c16 Update data_version when fallocate changes size
Changing the file size can change the file contents -- reads will
change when they stop returning data.  fallocate can change the file
size and if it does it should increment the data_version, just like
setattr does.
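
Something along these lines in the fallocate path (the data_version
helper name is an assumption):

    /* extending i_size changes what reads return, just like a
     * size-changing setattr, so the data_version has to move */
    if (new_size > i_size_read(inode))
            scoutfs_inode_inc_data_version(inode);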

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
a9baeab22e stage_tmpfile test gets current data_version
The stage_tmpfile test util was written when fallocate didn't update
data_version for size extensions.  It is more correct to get the
data_version after fallocate has changed the data_version for however many
transactions, extent allocations, and i_size extensions it took to
allocate space.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
b7ab26539a Avoid lockdep warning about upstream inversion
Some kernels have blkdev_reread_part acquire the bd_mutex and then call
into drop_partitions which calls fsync_bdev which acquires s_umount.
This inverts the usual pattern of deactivate_super getting s_umount and
then using blkdev_put in kill_sb->put_super to drop a second device.

The inversion has been fixed upstream by years of rewrites.  We can't go
back in time to fix the kernels that we're testing against,
unfortunately, so we disable lockdep around our valid leg of the
inversion that lockdep is noticing in our testing.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
c51f0c37da Defer dirty inode data writeback (and use list)
iput() can only be used in contexts that could perform final inode
deletion which requires cluster locks and transactions.  This is
absolutely true for the transaction committing worker.  We can't have
deletion during transaction commit trying to get locks and dirty *more*
items in the transaction.

Now that we're properly getting locks in final inode deletion and
O_TMPFILE support has put pressure on deletion, we're seeing deadlocks
between inode eviction during transaction commit getting an index lock
and index lock invalidation trying to commit.

We use the newly offered queued iput to defer the iputs that come from
walking our dirty inodes.   The transaction commit will be able to
proceed while
the iput worker is off waiting for a lock.
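
The deferred iput is roughly this shape, with assumed field names (sbi,
si, iput_list and friends are illustrative):

    /* instead of iput() in the commit path, which could recurse into
     * final deletion needing cluster locks and a transaction: */
    spin_lock(&sbi->iput_lock);
    list_add_tail(&si->iput_entry, &sbi->iput_list);
    spin_unlock(&sbi->iput_lock);
    queue_work(sbi->iput_workq, &sbi->iput_work);

    /* ...the work function then performs the iput()s and is free to
     * block on locks without stalling the commit */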

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:20:40 -07:00
Zach Brown
52107424dd Promote deferred iput to inode call
Lock invalidation had the ability to kick iput off to work context.  We
need to use it for inode writeback as well so we move the mechanism over
to inode.c and give it a proper call.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
099a65ab07 Try recovering from truncate errors and more info
We're seeing errors during truncate that are surprising.  Let's try and
recover from them and provide more info when they happen so that we can
dig deeper.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
21c5724dd5 Update fenced service file StartLimitBurst
The first draft was written against an older schema, StartLimitBurst is
in [Service] now.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
3974d98f6b Don't use "/dev/*" redirections near systemd
It sets up stdout and stderr as sockets, not pipes, so these links don't
work.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
2901b43906 Also allow omap requests to disconnected clients
We recently fixed problems sending omap responses to originating clients
which can race with the clients disconnecting.  We need to handle the
requests sent to clients on behalf of an origination request in exactly
the same way.  The send can race with the client being evicted.  It'll
be cleaned up after the race, which is safely ignored because the
client's rid is removed from the server's request tracking.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
03d7a4e7fe Show relative times in quorum status file output
The times in the quorum status file are in absolute monotonic kernel
time since bootup.  That's not particularly helpful especially when
comparing across hosts with different boot times.

This shows relative times in timespec64 seconds until or since the times
in question.   While we're at it we also collect the send and receive
timestamps closer to each send or receive call.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
d5d3b12986 Specifically shutdown quorum during forced unmount
Generally, forced unmount works by returning errors for all IO.  Quorum
is pretty resilient in that it can have the IO errors eaten by server
startup and does its own messaging that won't return errors.  Trying to
force unmount can have the quorum service continually participate in
electing a server that immediately fails and shuts down.

This specifically shuts down the internal quorum service when it sees
that unmount is being forced.  This is easier and cleaner than having
the network IO return errors and then having that trigger shutdown.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
e4dca8ddcc Don't shutdown quorum if server startup fails
The quorum service shuts down if it sees errors that mean that it can't
do its job.

This is mostly fatal errors gathering resources at startup or runtime IO
errors but it was also shutting down if server startup fails.   That's
not quite right.  This should be treated like the server shutting down
on errors.  Quorum needs to stay around to participate in electing the
next server.

Fence timeouts could trigger this.   A quorum mount could crash, the
next server without a fence script could have a fence request time out
and shut down, and now the third remaining server is left to indefinitely
send vote requests into the void.

With this fixed, continuing that example, the quorum service in the
second mount stays around to elect the third server, which has a working
fence script, once the second server shuts down when its fence request
times out.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
011b7d52e5 Merge pull request #45 from versity/ben/systemd_configs
Add fenced systemd and example configs
2021-07-09 08:39:18 -07:00
Ben McClelland
3a9db45194 Add fenced systemd and example configs
This should be good enough to get single node mounts up and running with
fenced with minimal effort.  The example config will need to be copied
to /etc/scoutfs/scoutfs-fenced.conf for it to be functional, so this
still requires specific opt-in and won't accidentally run for multi-node
systems.

Signed-off-by: Ben McClelland <ben.mcclelland@versity.com>
2021-07-09 08:22:39 -07:00
Zach Brown
53f11f5479 Merge pull request #46 from versity/zab/orphan_deletion_and_enospc
Zab/orphan deletion and enospc
2021-07-08 10:52:53 -07:00
Zach Brown
b4ede2ac6a Allow omap responses to disconnected originators
The omap message lifecycle is a little different than the server's usual
handling that sends a response from the request handler.  The response
is sent long after the initial receive handler is pinning the connection
to the client.   It's fine for the response to be dropped.

The main server request handler handled this case but other response
senders didn't.  Put this error handling in the server response sender
itself so that all callers are covered.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-08 09:36:07 -07:00
Zach Brown
cbe8d77f78 Prevent duplicate inode item deletion
We hide I_FREEING inodes from inode lookup to avoid inversions with
cluster locking.  This can result in duplicate inode structs for a
given inode number.  They can both race to try and delete the same items
for their shared inode number.  This leads to error messages from
evict_inode and could lead to corruption if they, for example, both try
and free the same data extents.

This adds very basic serialization so only one instance can try to
delete items at a time.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-07 14:13:14 -07:00
Zach Brown
5f682dabb5 Item cache invalidation uses seqs to avoid readers
The item cache has to be careful not to insert stale read items when
previously dirty items have been written and invalidated while a read
was in flight.

This was previously done by recording the possible range of items that a
reader could see based on the key range of its lock.   This is
disastrous when a workload operates entirely within one lock.  I ran
into this when testing a small number of files with massive amounts of
xattrs.  While any reader is in flight all pages can't be invalidated
because they all intersect with the one lock that covers all the items
in use.

The fix is to more naturally reflect the problem by tracking the
greatest item seq in pages and the earliest seq that any readers
can't see.  This lets invalidate only skip pages with items
that weren't visible to the earliest reader.
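
The invalidation decision then reduces to a seq comparison, roughly
(names are illustrative):

    /* a page can be invalidated unless it holds items newer than what
     * the earliest in-flight reader could have seen */
    static bool can_invalidate_page(u64 page_max_item_seq, u64 earliest_unseen_seq)
    {
            return page_max_item_seq < earliest_unseen_seq;
    }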

This more naturally reflects that the problem is due to the age of the
items, not their position in the key space.  Now only a few of the most
recently modified pages could be skipped and they'll be at the end
of the LRU and won't typically be visited.  As an added benefit it's
now much cheaper to add, delete, and test the active readers.

This fix took rm -rf of a full system's worth of xattrs from taking
minutes constantly spinning and skipping all pages in the LRU down to
seconds of doing real removal work.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-07 14:13:14 -07:00
Zach Brown
120c2d342a Add create_xattr_loop test tool
Add a quick tool that creates xattrs in a tight loop.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-07 14:13:14 -07:00
Zach Brown
84454b38c5 Add mkfs -A for small device sizes
Normally mkfs would fail if we specify meta or data devices that are too
small.  We'd like to use small devices for test scenarios, though, so
add an option to allow specifying sizes smaller than the minimum
required sizes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-07 14:13:14 -07:00
Zach Brown
29cfa81574 Remove unused leftovers from quorum changes
These forward declarations were for interfaces that have since been
removed or changed and are no longer needed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-07 14:13:14 -07:00
Zach Brown
73bf916182 Return ENOSPC as space gets low
Returning ENOSPC is challenging because we have clients working on
allocators which are a fraction of the whole and we use COW transactions
so we need to be able to allocate to free.  This adds support for
returning ENOSPC to client posix allocators as free space gets low.

For metadata, we reserve a number of free blocks for making progress
with client and server transactions which can free space.  The server
sets the low flag in a client's allocator if we start to dip into
reserved blocks.  In the client we add an argument to entering a
transaction which indicates if we're allocating new space (as opposed to
just modifying existing data or freeing).  When an allocating
transaction runs low and the server low flag is set then we return
ENOSPC.
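
The client side check when entering a transaction is roughly (the flag
and variable names are assumptions):

    /* only allocating holders see ENOSPC; modifying or freeing existing
     * items can still proceed so that space can be released */
    if (allocating && server_set_low_flag && avail_blocks <= low_threshold)
            return -ENOSPC;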

Adding an argument to transaction holders and having it return ENOSPC
gave us the opportunity to clean it up and make it a little clearer.
More work is done outside the wait_event function and it now
specifically waits for a transaction to cycle when it forces a commit
rather than spinning until the transaction worker acquires the lock and
stops it.

For data the same pattern applies except there are no reserved blocks
and we don't COW data so it's a simple case of returning the hard ENOSPC
when the data allocator flag is set.

The server needs to consider the reserved count when refilling the
client's meta_avail allocator and when swapping between the two
meta_avail and meta_free allocators.

We add the reserved metadata block count to statfs_more so that df can
subtract it from the free meta blocks and make it clear when enospc is
going to be returned for metadata allocations.

We increase the minimum device size in mkfs so that small testing
devices provide sufficient reserved blocks.

And finally we add a little test that makes sure we can fill both
metadata and data to ENOSPC and then recover by deleting what we filled.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-07 14:13:14 -07:00
Zach Brown
9db3b475c0 Stop log merge work earlier during unmount
The forest log merge work calls into the client to send commit requests
to the server.  The forest is usually destroyed relatively late in the
sequence and can still be running after the client is destroyed.

Adding a _forest_stop call lets us stop the log merging work
before the client is destroyed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-02 10:54:56 -07:00
Zach Brown
24d682bf81 Add orphan-inodes test
Signed-off-by: Zach Brown <zab@versity.com>
2021-07-02 10:54:56 -07:00
Zach Brown
2957f3e301 Avoid warnings when evict has signals pending
Killing a task can end up in evict and break out of acquiring the locks
to perform final inode deletion.  This isn't necessarily fatal.  The
orphan task will come around and will delete the inode when it is truly
no longer referenced.

So let's silence the error and keep track of how many times it happens.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-02 10:54:56 -07:00
Zach Brown
07210b5734 Reliably delete orphaned inodes
Orphaned items haven't been deleted for quite a while -- the call to the
orphan inode scanner has been commented out for ages.  The deletion of
the orphan item didn't take rid zone locking into account as we moved
deletion from being strictly local to being performed by whoever last
used the inode.

This reworks orphan item management and brings back orphan inode
scanning to correctly delete orphaned inodes.

We get rid of the rid zone that was always _WRITE locked by each mount.
That made it impossible for other mounts to get a _WRITE lock to delete
orphan items.  Instead we rename it to the orphan zone and have orphan
item callers get _WRITE_ONLY locks inside their inode locks.  Now all
nodes can create and delete orphan items as they have _WRITE locks on
the associated inodes.

Then we refresh the orphan inode scanning function.  It now runs
regularly in the background of all mounts.  It avoids creating cluster
lock contention by finding candidates with unlocked forest hint reads
and by testing inode caches locally and via the open map before properly
locking and trying to delete the inode's items.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-02 10:52:46 -07:00
Zach Brown
0374661a92 Merge pull request #43 from versity/zab/btree_merging
Zab/btree merging
2021-06-22 13:16:30 -07:00
Zach Brown
28759f3269 Rotate srch files as log trees items are reclaimed
The log merging work deletes log trees items once their item roots are
merged back into the fs root.  Those deleted items could still have
populated srch files that would be lost.  We force rotation of the srch
files in the items as they're reclaimed to turn them into rotated srch
files that can be compacted.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:37:45 -07:00
Zach Brown
5c3fdb48af Fix btree join item movement
Refilling a btree block by moving items from its siblings as it falls
under the join threshold had some pretty serious mistakes.  It used the
target block's total item count instead of the sibling's when deciding
how many items to move.  It didn't take item moving overruns into
account when deciding to compact so it could run out of contiguous free
space as it moved the last item.  And once it compacted it returned
without moving because the return was meant to be in the error case.

This is all fixed by correctly examining the sibling block to determine
if we should join a block up to 75% full or move a big chunk over,
compacting if the free space doesn't have room for an excessive worst
case overrun, and fixing the compaction error checking return typo.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
a7828a6410 Add log merge item allocators to alloc detail
The alloc iterator needs to find and include the totals of the avail and
freed allocator list heads in the log merge items.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
a1d46e1a92 Fix mkfs btree item offset calculation
mkfs was miscalculating the offset of the start of the free region in
the center of blocks as it populated blocks with items.  It was using
the length of the free region as its offset in the block.  To find
the offset of the end of the free region in the block it has to be
taken relative to the end of the item array.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
d67db6662b Fix item cache val_len alignment math
Some item_val_len() callers were applying alignment twice, which isn't
needed.

And additions to erased_bytes as value lengths change  didn't take
alignment into account.  They could end up double counting if val_len
changes within the alignment are then accounted for again as the full
item and alignment is later deleted.  Additions to erased_bytes based on
val_len should always take alignment into account.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
c5c050bef0 Item cache might free null page on alloc error
The item cache allocates a page and a little tracking struct for each
cached page.  If the page allocation fails it might try to free a null
page pointer, which isn't allowed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
96d286d6e5 Zero btree item padding as items are created
Item creation, which fills out a new item at the end of the array of
item structs at the start of the block, didn't explicitly zero the item
struct padding.  It would only have been zero if the memory was
already zero, which is likely for new blocks, but isn't necessarily true
if the memory had previously been used by deleted values.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
9febc6b5dc Update btree block validator for 8byte alignment
The change to aligning values didn't update the btree block verifier's
total length calculation, and while we're in there we can also check
that values are correctly aligned.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
045b3ca8d4 Expand unused btree verifying walker
Previously we had an unused function that could be flipped on to verify
btree blocks during traversal.   This refactors the block verifier a bit
to be called by a verifying walker.  This will let callers walk paths to
leaves to verify the tree around operations, rather than verification
being performed during the next walk.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
ff882a4c4f Add btree total_above_join_low_water() test
Take the condition used to decide if a btree block needs to be joined
and put it in total_above_join_low_water() so that btree_merging will be
able to call it to see if the leaf block it's merging into needs to be
joined.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
3d1a0f06c0 Add scoutfs_btree_free_blocks
Add a btree function for freeing all the blocks in a btree without
having to cow the blocks to track which refs have been freed.  We use a
key from the caller to track which portions of the tree have been freed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
3488b4e6e0 Add scoutfs print support for log merge items
Add support for printing all the items in the log_merge tree that the
server uses to track log merging.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
c482204fcf Clean up btree root printing in superblock
Over time the printing of the btree roots embedded in the super block
has gotten a little out of hand.  Add a helper macro for the printf
format and args and re-order them to match their order in the
superblock.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
9711fef122 Update for core, trans, and item seq use
We now have a core seq number in the super that is advanced for multiple
users.    The client transaction seq comes from the core seq so we
remove the trans_seq from the super.  The item version is also converted
to use a seq that's derived from the core seq.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
91acf92666 Add client btree merge processing
Add the client work which is regularly scheduled to ask the server for
log merging work to do.  The relatively simple client work gets a
request from the server, finds the log roots to merge given the request
seq, performs the merge with a btree call and callbacks, and commits the
result to the server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
9c2122f7de Add server btree merge processing
This adds the server processing side of the btree merge functionality.
The client isn't yet sending the log_merge messages so no merging will
be performed.

The bulk of the work happens as the server processes a get_log_merge
message to build a merge request for the client.  It starts a log merge
if one isn't in flight.  If one is in flight it checks to see if it
should be spliced and maybe finished.  In the common case it finds the
next range to be merged and sends the request to the client to process.

The commit_log_merge handler is the completion side of that request.  If
the request failed then we unwind its resources based on the stored
request item.  If it succeeds we record it in an item for get_
processing to splice eventually.

Then we modify two existing server code paths.

First, get_log_tree doesn't just create or use a single existing log
btree for a client mount.  If the existing log btree is large enough it
sets its finalized flag and advances the nr to use a new log btree.
That makes the old finalized log btree available for merging.

Then we need to be a bit more careful when reclaiming the open log btree
for a client.  We can't use next to find the only open log btree, we use
prev to find the last and make sure that it isn't already finalized.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
4d3ea3b59b Add format support for log btree merging
Add the format specification for the upcoming btree merging.  Log btrees
gain a finalized field, we add the super btree root and all the items
that the server will use to coordinate merging amongst clients, and we
add the two client net messages which the server will implement.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
298a6a8865 Add server get_stable_trans_seq()
Extract part of the get_last_seq handler into a call that finds the last
stable client transaction seq.  Log merging needs this to determine a
cutoff for stable items in log btrees.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
082924df1a Add scoutfs_key_is_ones()
Add a quick inline for testing that a key is all ones.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
d8478ed6f1 Add scoutfs_btree_rebalance()
Add a btree call that just dirties the path to a leaf block, joining and splitting
along the way so that the blocks in the path satisfy the balance
constraints.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
0538c882bc Add btree_merge()
Add a btree function for merging the items in a range from a number of
read-only input btrees into a destination btree.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-17 09:36:00 -07:00
Zach Brown
3a03a6a20c Add SUBTREE btree walk flag to restrict join/merge
Add a BTW_SUBTREE flag to btree_walk() to restrict splitting or joining
of the root block.   When clients are merging into the root built from a
reference to the last parent in the fs tree we want to be careful that
we maintain a single root block that can be spliced back into the fs
tree.   We specifically check that the root block remains within the
split/join thresholds.  If it falls out of compliance we return an error
so that it can be spliced back into the fs tree and then split/joined
with its siblings.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-15 15:25:14 -07:00
Zach Brown
b6d0a45f6d Add btree_{get,set}_parent
Add calls for working with subtrees built around references to blocks in
the last level of parents.  This will let the server farm out btree
merging work where concurrency is built around safely working with all
the items and leaves that fall under a given parent block.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-15 15:25:14 -07:00
Zach Brown
d7f8896fac Add scoutfs_btree_parent_range
Add a btree helper for finding the range of keys which are found in
leaves referenced by the last parent block when searching for a given
key.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-15 15:25:14 -07:00
Zach Brown
65c39e5f97 Item seq is max of trans and lock write_seq
Rename the item version to seq and set it to the max of the transaction
seq and the lock's write_seq.  This lets btree item merging choose a seq
such that all dirty items written in future commits must have greater
seqs.  It can drop the seqs from items written to the fs tree during
btree merging knowing that there aren't any older items out in
transactions that could be mistaken for newer items.
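
In other words, roughly, at item creation time:

    /* any later commit's trans seq and any later write lock's write_seq
     * are both guaranteed to be greater than this */
    item->seq = max(trans_seq, lock->write_seq);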

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-15 15:25:14 -07:00
Zach Brown
3c69861c03 Use core seq for lock write_seq
Rename the write_version lock field to write_seq and get it from the
core seq in the super block.

We're doing this to create a relationship between a client transaction's
seq and a lock's write_seq.  New transactions will have a greater seq
than all previously granted write locks and new write locks will have a
greater seq than all open transactions.  This will be used to resolve
ambiguities in item merging as transaction seqs are written out of order
and write locks span transactions.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-15 15:24:23 -07:00
Zach Brown
05ae756b74 Get trans seq from core seq
Get the next seq for a client transaction from the core seq in the super
block.  Remove its specific next_trans_seq field.

While making this change we switch to only using le64 in the network
message payloads, the rest of the processing now uses natural u64s.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-01 13:46:19 -07:00
Zach Brown
9051ceb6fc Add core seq to the super block
Add a new seq field to the super block which will be the source of all
incremented seqs throughout the system.  We give out incremented seqs to
callers with an atomic64_t in memory which is synced back to the super
block as we commit transactions in the server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-01 13:33:30 -07:00
Zach Brown
bad1c602f9 server hold_commit returns void
When we moved to the current allocator we fixed up the server commit
path to initialize the pair of allocators as a commit is finished rather
than before it starts.  This removed all the error cases from
hold_commit.  Remove the error handling from hold_commit calls to make
the system just a bit simpler.

Signed-off-by: Zach Brown <zab@versity.com>
2021-06-01 13:32:26 -07:00
Zach Brown
cee6ad34d3 Merge pull request #42 from versity/zab/fencing_and_reclaiming
Zab/fencing and reclaiming
2021-06-01 11:12:51 -07:00
Zach Brown
38a4a56741 Stop writing to other quorum slot blocks
The core quorum work loop assumes that it has exclusive access to its
slot's quorum block.  It uniquely marks blocks it writes and verifies
the marks on read to discover if another mount has written to its slot
under the assumption that this must be a configuration error that put
two mounts in the same slot.

But the design of the leader bit in the block violates the invariant
that a slot's block is only written by that slot.   As the server comes up and
fences previous leaders it writes to their block to clear their leader
bit.

The final hole in the design is that because we're fencing mounts, not
slots, each slot can have two mounts in play.  An active mount can be
using the slot and there can still be a persistent record of a previous
mount in the slot that crashed that needs to be fenced.

All this comes together to have the server fence an old mount in a slot
while a new mount is coming up.  The new mount sees the mark change and
freaks out and stops participating in quorum.

The fix is to rework the quorum blocks so that each slot only writes to
its own block.  Instead of the server writing to each fenced mount's
slot, it writes a fence event to its block once all previous mounts have
been fenced.  We add a bit of bookkeeping so that the server can
discover when all block leader fence operations have completed.  Each
event gets its own term so we can compare events to discover live
servers.

We get rid of the write marks and instead have an event that is written
as a quorum agent starts up and is then checked on every read to make
sure it still matches.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-31 13:10:45 -07:00
Zach Brown
76076011a2 Add scoutfs-fenced man page
Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:39 -07:00
Zach Brown
bdc0282fa7 Describe fencing in the scoutfs.5 man page
Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:39 -07:00
Zach Brown
1199bac91d Fix quorum server shutdown
If the server shuts down it calls into quorum to tell it that the
server has exited.  This stops quorum from sending heartbeats that
suppress other leader elections.

The function that did this got the logic wrong.  It was setting the bit
instead of clearing it, having been initially written to set a bit when
the server exited.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:39 -07:00
Zach Brown
1e460e5cb0 Add scoutfs-fenced and its run scripts to spec
Install the scoutfs-fenced daemon and its run scripts in the rpm spec
file.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:39 -07:00
Zach Brown
877e30d60f Add client address to mounted_client item
Add the peername of the client's connected socket to its mounted_client
item as it mounts.  If the client doesn't recover then fencing can use
the IP to find the host to fence.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:39 -07:00
Zach Brown
a972e42fba Update dmesg filters for fencing and reclaim
Add regexes for the messages that come from fencing and reclaiming
resources from fenced mounts.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
0706669047 Clean up quorum block read error messages
The error messages from reading quorum blocks were confusing.  The mark
was being checked when the block had already seen an error, and we got
multiple messages for some errors.

This cleans it up a bit so we only get one error message for each error
source and each message contains relevant context.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
76cef6fdfc Let _recov_next_pending iterate over rids
Currently the server's recovery timeout work synchronously reclaims
resources for each client whose recovery timed out.
scoutfs_recov_next_pending() can always return the head of the pending
list because its caller will always remove it from the list as it
iterates.

As we move to real fencing the server will be creating fence requests
for all the timed out clients concurrently.  It will need to iterate
over all the rids for clients in recovery.

So we sort recovery's pending list by rid and change _recov_next_pending
to return the next pending rid after a rid argument.  This lets the
server iterate over all the pending rids at once.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
aad2d3db59 Add stage_tmpfile to .gitignore
We missed adding this newly added binary to .gitignore.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
933fc687c3 omap remove_rid might not find entry
Client recovery in the server doesn't add the omap rid for all the
clients that it's waiting for.  It only adds the rid as they connect.  A
client whose recovery timeout expires and is evicted will try to have
its omap rid removed without being added.

Today this triggers a warning and returns an error from a time when the
omap rid lifecycle was more rigid.  Now that it's being called by the
server's reclaim_rid, along with a bunch of other functions that succeed
if called for non-existent clients, let's have the omap remove_rid do
the same.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
6663034295 Run the fence agent in the background of tests
Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
ab5466a771 Protect server shutting down with smp barriers
I saw a confusing hang that looked like a lack of ordering between
a waker setting shutting_down and a wait event testing it after
being woken up.  Let's see if more barriers help.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
f3764b873b Save previous connected client address
Our connection state spans sockets that can disconnect and reconnect.
While sockets are connected we store the socket's remote address in the
connection's peername and we clear it as sockets disconnect.

Fencing wants to know the last connected address of the mount.  It's a
bit of metadata we know about the mount that can be used to find it and
fence it.  As we store the peer address we also stash it away as the
last known peer address for the socket.  Fencing can then use that
instead of the current socket peer address which is guaranteed to be
uninitialized because there's no socket connected.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
9ebc9d0f66 Manage client reconnect delay
The client currently always queues immediate connect work when its
notify_down is called.  It was assuming that notify_down is only called
from a healthy established connection.   But it's also called for
unsuccessful connect attempts that might not have timed out.  Say the
host is up but the port isn't listening.

This results in spamming connection attempts while an old stale leader
block remains, until a new server is elected, fences the previous
leader, and updates their quorum block.

The fix is to explicitly manage the connection work queueing delay.  We
only set it to immediately queue on mount and when we see a greeting
reply from the server.  We always set it to a longer timeout as we start
a connection attempt.  This means we'll always have a long reconnect
delay unless we really connected to a server.
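
The resulting pattern looks roughly like this (delay values and field
names are illustrative):

    /* starting a connection attempt: assume it may fail, back off */
    client->connect_delay = msecs_to_jiffies(CONNECT_RETRY_MS);

    /* at mount, or on a greeting reply from a real server: reconnect
     * immediately the next time notify_down fires */
    client->connect_delay = 0;

    /* notify_down always queues with the managed delay */
    queue_delayed_work(client->workq, &client->connect_dwork,
                       client->connect_delay);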

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
8b78f701a1 Add fence-and-reclaim test
Add a test which exercises the various reasons for fencing mounts and
checks that we reclaim the resources that they had.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
1f1f40f079 Add fence agent that processes fence requests
Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
943351944a Call fencing from the server
The server is responsible for calling the fencing subsystem.  It is the
source of fencing requests as it decides that previous mounts are
unresponsive.  It is responsible for reclaiming resources for fenced
mounts and freeing their associated fence request.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:28 -07:00
Zach Brown
b060eb4f5d Add fencing subsystem
Add the subsystem which tracks pending fence requests and exposes them
to userspace for processing.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:25 -07:00
Zach Brown
2dde729791 Add sysfs create attr w/ parent
Add sysfs attribute creation that can provide the parent dir kobject
instead of always creating the sysfs object dir off of the main
per-mount dir.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:19 -07:00
Zach Brown
ccb7c0bf4b Add rw sysfs attr wrapper
Add a wrapper around __ATTR_RW so that callers can add attributes with a
_store function.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:18:07 -07:00
Zach Brown
e9d04dcf8d Add forced unmount support
Add super_ops->umount_begin so that we can implement a forced unmount
which tries to avoid issuing any more network or storage ops.  It can
return errors and lose unsynchronized data.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-26 14:02:20 -07:00
Zach Brown
5dceac32db Merge pull request #40 from versity/zab/data_alloc_zones
Zab/data alloc zones
2021-05-24 13:00:48 -07:00
Zach Brown
ef440ead28 Add -z to run-test for data-alloc-zone-blocks
Add an option to run-tests which gets passed through to the
data-alloc-zone-blocks argument for mkfs.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-21 15:31:02 -07:00
Zach Brown
d0b04e790c Add data-alloc-zone-blocks argument to mkfs
Add an argument to mkfs which sets the data_alloc_zone_blocks volume
option.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-21 15:31:02 -07:00
Zach Brown
54644a5074 Add data_alloc_zone_blocks volume option
Add the data_alloc_zone_blocks volume option.  This changes the
behaviour of the server to try and give mounts free data extents which
fall in exclusive fixed-size zones.

We add the field to the scoutfs_volume_options struct and add it to the
set_volopt server handler which enforces constrains on the size of the
zones.

We then add fields to the log_trees struct which records the size of the
zones and sets bits for the zones that contain free extents in the
data_avail allocator root.  The get_log_trees handler is changed to read
all the zone bitmaps from all the items, pass those bitmaps in to
_alloc_move to direct data allocations, and finally update the bitmaps
in the log_trees items to cover the newly allocated extents.  The
log_trees data_alloc_zone fields are cleared as the mount's logs are
reclaimed to indicate that the mount is no longer writing to the zone.

The policy mechanism of finding free extents based on the bitmaps is
implemented down in _data_alloc_move().

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-21 15:31:02 -07:00
Zach Brown
52c2a465db Add zone awareness to scoutfs_alloc_move()
Add parameters so that scoutfs_alloc_move() can first search for source
extents in specified zones.  It uses relatively cheap searches through
the order items to find extents that intersect with the regions
described by the zone bitmaps.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-21 15:31:02 -07:00
Zach Brown
bc4975fad4 Add scoutfs_alloc_extents_cb()
Add an allocator call for getting a callback for all the extents in
btree items in an allocator root.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-21 15:31:02 -07:00
Zach Brown
9de3ae6dcb Index free extents by order of length
Allocators store free extents in two items, one sorted by their blkno
position and the other by their precise length.

The length index makes it easy to search for precise extent lengths, but
it makes it hard to search for a large extent within a given blkno
region.  Skipping in the blkno dimension has to be done for every
precise length value.

We don't need that level of precision.  If we index the extents by a
coarser order of the length then we have a fixed number of orders in
which we have to skip in the blkno dimension when searching within a
specific region.

This changes the length item to be stored at the log(8) order of the
length of the extents.  This groups extents into orders that are close
to the human-friendly base 10 orders of magnitude.

With this change the order field in the key no longer stores the precise
extent length.  To preserve the length of the extent we need to use
another field.  The only 64bit field remaining is the first, which has a
higher comparison priority than the type.  So we use the highest
comparison priority zone field to differentiate the position and order
indexes and can now use all three 64bit fields in the key.

Finally, we have to be careful when constructing a key to use _next when
searching for a large extent.  Previously keys were relying on the magic
property that building a key from an extent length of 0 ended up at the
key value -0 = 0.  That only worked because we never stored zero length
extents.  We now store zero length orders so we can't use the negative
trick anymore.  We explicitly treat 0 length extents carefully when
building keys and we subtract the order from U64_MAX to store the orders
from largest to smallest.
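
A model of the order calculation and key encoding, not the exact
on-disk math:

    /* group extent lengths into coarse base-8 orders so searches only
     * have to skip a fixed number of orders per blkno region */
    static u8 extent_len_order(u64 len)
    {
            u8 order = 0;

            while (len >= 8) {
                    len >>= 3;
                    order++;
            }
            return order;
    }

    /* store the order inverted so larger extents sort first */
    key->order = U64_MAX - extent_len_order(len);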

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-21 15:25:56 -07:00
Zach Brown
0aa6005c99 Add volume options super, server, and sysfs
Introduce global volume options.  They're stored in the superblock and
can be seen in sysfs files that use network commands to get and
set the options on the server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-05-19 14:15:06 -07:00
Zach Brown
973dc4fd1c Merge pull request #38 from versity/zab/read_xattr_deadlocks
Zab/read xattr deadlocks
2021-05-03 09:44:57 -07:00
Zach Brown
a5ca5ee36d Put back-to-back invalidated locks back on list
A lock that is undergoing invalidation is put on a list of locks in the
super block.  Invalidation requests put locks on the list.  While locks
are invalidated they're temporarily put on a private list.

To support a request arriving while the lock is being processed we
carefully manage the invalidation fields in the lock between the
invalidation worker and the incoming request.  The worker correctly
noticed that a new invalidation request had arrived but it left the lock
on its private list instead of putting it back on the invalidation list
for further processing.  The lock was unreachable, wouldn't get
invalidated, and caused everyone trying to use the lock to block
indefinitely.

When the worker sees another request arrive for an invalidating lock it
needs to move the lock from the private list back to the invalidation
list.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-30 10:00:07 -07:00
Zach Brown
603af327ac Ignore I_FREEING in all inode hash lookups
Previously we added a ilookup variant that ignored I_FREEING inodes
to avoid a deadlock between lock invalidation (lock->I_FREEING) and
eviction (I_FREEING->lock).

Now we're seeing similar deadlocks between eviction (I_FREEING->lock)
and fh_to_dentry's iget (lock->I_FREEING).

I think it's reasonable to ignore all inodes with I_FREEING set when
we're using our _test callback in ilookup or iget.  We can remove the
_nofreeing ilookup variant and move its I_FREEING test into the
iget_test callback provided to both ilookup and iget.

Callers will get the same result, it will just happen without waiting
for a previously I_FREEING inode to leave.  They'll get NULL from
ilookup instead of waiting.  They'll allocate and start to initialize a
newer instance of the inode and insert it alongside the previous
instance.

We don't have inode number re-use so we don't have the problem where a
newly allocated inode number is relying on inode cache serialization to
not find a previously allocated inode that is being evicted.

This change does allow for concurrent iget of an inode number that is
being deleted on a local node.  This could happen in fh_to_dentry with a
raw inode number.  But this was already a problem between mounts because
they don't have a shared inode cache to serialize them.  Once we fix
that between nodes, we fix it on a single node as well.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-28 12:22:10 -07:00
Zach Brown
ca320d02cb Get i_mutex before cluster lock in file aio_read
The vfs often calls filesystem methods with i_mutex held.  This creates
a natural ordering of i_mutex outside of cluster locks.  The file
aio_read method acquired i_mutex after its cluster lock, creating a
deadlock with other vfs methods like setattr.

The acquisition of i_mutex after the cluster lock was due to using the
pattern where we use the per-task lock to discover if we're the first
user of the lock in a call chain.  Readpage has to do this, but file
aio_read doesn't.  It should never be called recursively.  So we can
acquire the i_mutex outside of the cluster lock and warn if we ever are
called recursively.
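
The resulting ordering, sketched with assumed scoutfs lock helper names:

    mutex_lock(&inode->i_mutex);            /* i_mutex first, matching the vfs */
    ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
    if (ret == 0) {
            /* ...the usual read path runs under both locks... */
            scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
    }
    mutex_unlock(&inode->i_mutex);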

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-28 12:11:06 -07:00
Zach Brown
5231cf4034 Add export-lookup-evict-race test
Add a test that creates races between fh_to_dentry and eviction
triggered by lock invalidation.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-28 12:11:06 -07:00
Andy Grover
f631058265 Merge pull request #37 from versity/zab/test_mkdir_rename_unlink
Add mkdir-rename-rmdir test
2021-04-27 13:21:27 -07:00
Zach Brown
1b4e60cae4 Add mkdir-rename-rmdir test
Add a test which performs mkdir, two renames of the dir, and rmdir on
all possible combinations of mounts.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-27 12:01:43 -07:00
Andy Grover
6eeaab3322 Merge pull request #35 from versity/zab/invalidate_already_pending
Handle back to back invalidation requests
2021-04-23 16:40:45 -07:00
Andy Grover
ac68d14b8d Merge pull request #36 from versity/zab/move_blocks_next_einval
Fix accidental EINVAL in move_blocks
2021-04-23 14:39:29 -07:00
Zach Brown
ecfc8a0d0e Merge pull request #33 from versity/zab/open_ino_map
Zab/open ino map
2021-04-23 10:55:11 -07:00
Zach Brown
63148d426e Fix accidental EINVAL in move_blocks
When move blocks is staging it requires an overlapping offline extent to
cover the entire region to move.

It performs the stage by modifying one extent at a time.  If the source
extents are fragmented it will modify each of them in turn within the
region.

When looking for the extent to match the source extent it looked from
the iblock of the start of the whole operation, not the start of the
source extent it's matching.  This meant that it would find the first
extent it had just modified, which would now be online rather than
offline, and would return -EINVAL.

The fix is to have it search from the logical start of the extent it's
trying to match, not the start of the region.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-23 10:39:34 -07:00
Zach Brown
a27c54568c Handle back to back invalidation requests
The client's incoming lock invalidation request handler triggers a
BUG_ON if it gets a request for a lock that is already processing a
previous invalidation request.  The server is supposed to only send
one request at a time.

The problem is that the batched invalidation request handling will send
responses outside of spinlock coverage before reacquiring the lock and
finishing processing once the response send has been successful.

This gives a window for another invalidation request to arrive after the
response was sent but before the invalidation finished processing.  This
triggers the bug.

The fix is to mark the lock such that we can recognize a valid second
request arriving after we send the response but before we finish
processing.  If it arrives we'll continue invalidation processing with
the arguments from the new request.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-22 17:00:50 -07:00
Zach Brown
dfc2f7a4e8 Remove unused scoutfs_free_unused_locks nr arg
The nr argument wasn't used.  It always tries to free as many as the
shrinker call will let it.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
94dd86f762 Process lock invalidation after shutdown
Lock teardown during unmount involves first calling shutdown and then
destroy.  The shutdown call is meant to ensure that it's safe to tear
down the client network connections.  Once shutdown returns locking is
promising that it won't call into the client to send new lock requests.

The current shutdown implementation is very heavy handed and shuts down
everything.  This creates a deadlock.  After calling lock shutdown, the
client will send its farewell and wait for a response.  The server might
not send the farewell response until other mounts have unmounted if our
client is in the server's mount.  In this case we still have to be
processing lock invalidation requests to allow other unmounting clients
to make forward progress.

This is reasonably easy and safe to do.  We only use the shutdown flag
to stop lock calls that would change lock state and send requests.  We
don't have it stop incoming requests processing in the work queueing
functions.  It's safe to keep processing incoming requests between
_shutdown and _destroy because the requests already come in through the
client.  As the client shuts down it will stop calling us.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
841d22e26e Disable task reclaim flags for block cache vmalloc
Even though we can pass in gfp flags to vmalloc it eventually calls pte
alloc functions which ignore the caller's flags and use user gfp flags.
This risks reclaim re-entering fs paths during allocations in the block
cache.  These allocs that allowed reclaim deep in the fs were causing
lockdep to add RECLAIM dependencies between locks and holler about
deadlocks.

We apply the same pattern that xfs does for disabling reclaim while
allocating vmalloced block payloads.  Setting PF_MEMALLOC_NOIO causes
reclaim in that task to clear __GFP_IO and __GFP_FS, regardless of the
individual allocation flags in the task, preventing recursion.
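
The pattern around the vmalloc call, with our variable names assumed:

    unsigned int noio_flags;

    /* reclaim from this task clears __GFP_IO/__GFP_FS while vmalloc's
     * internal page table allocations run, preventing recursion back
     * into the filesystem */
    noio_flags = memalloc_noio_save();
    bp->data = vmalloc(size);
    memalloc_noio_restore(noio_flags);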

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
ba8bf13ae1 Update dmesg whitelist for recovery
The shared recovery layer outputs different messages than when it ran
only for lock_recovery in the lock server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
2949b6063f Clear lock invalidate_pending during destroy
Locks have a bunch of state that reflects concurrent processing.
Testing that state determines when it's safe to free a lock because
nothing is going on.

During unmount we abruptly stop processing locks.  Unmount will send a
farewell to the server which will remove all the state associated with
the client that's unmounting for all its locks, regardless of the state
the locks were in.

The client unmount path has to clean up the interrupted lock state and
free it, carefully avoiding assertions that would otherwise indicate
that we're freeing used locks.  The move to async lock invalidation
forgot to clean up the invalidation state.  Previously a synchronous
work function would set and clear invalidate_pending while it was
running.  Once we finished waiting for it invalidate_pending would be
clear.  The move to async invalidation work meant that we can still have
invalidate_pending with no work executing.  Lock destruction removed
locks from the invalidation list but forgot to clear the
invalidate_pending flag.

This triggered assertions during unmount that were otherwise harmless.
There was no other use of the lock; we just forgot to clean up the lock
state.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
1e88aa6c0f Shutdown data after trans
The data_info struct holds the data allocator that is filled by
transactions as they commit.  We have to free it after we've shutdown
transactions.  It's more like the forest in this regard so we move its
destruction down by the forest to group similar behaviour.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
d9aea98220 Shutdown locking before transactions
Shutting down the lock client waits for invalidation work and prevents
future work from being queued.  We're currently shutting down the
subsystems that lock calls before lock itself, leading to crashes if we
happen to have invalidations executing as we unmount.

Shutting down locking before its dependencies fixes this.  This was hit
in testing during the inode deletion fixes because they created the
perfect race by acquiring locks during unmount, right as the server
could send invalidations to one mount on behalf of another as they both
unmounted.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
04f4b8bcb3 Perform final transaction write before shutdown
Shutting down the transaction during unmount relied on the vfs unmount
path to perform a sync of any remaining dirty transaction.  There are
ways that we can dirty a transaction during unmount after it calls
the fs sync, so we try to write any remaining dirty transaction before
shutting down.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
fead263af3 Remove unused sb_info shutdown
We're no longer using the shutdown field in our sb info struct.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
4389c73c14 Fix deadlock between lock invalidate and evict
We've had a long-standing deadlock between lock invalidation and
eviction.  Invalidating a lock wants to look up inodes and drop their
resources while blocking locks.  Eviction wants to get a lock to perform
final deletion while the inode has I_FREEING set, which blocks lookups.

We only saw this deadlock a handful of times in all the time we've run
the code, but it's much more common now that we're acquiring locks in
iput to test that nlink is zero instead of only when nlink is zero.  I
see unmount hang regularly when testing final inode deletion.

This adds a lookup variant for invalidation which will refuse to
return freeing inodes so they won't be waited on.  Once they're freeing
they can't be seen by future lock users so they don't need to be
invalidated.  This keeps the lock invalidation promise and avoids
sleeping on freeing inodes which creates the deadlock.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
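
A rough sketch of the idea, using the generic inode cache convention
(I_FREEING/I_WILL_FREE checks under i_lock) rather than scoutfs's actual
lookup helper: during invalidation, inodes that are already being freed
are skipped instead of waited on.

```c
	/* inside a hypothetical invalidation-only lookup helper */
	spin_lock(&inode->i_lock);
	if (inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) {
		/* eviction owns it now; future lock users can't see it */
		spin_unlock(&inode->i_lock);
		return NULL;
	}
	__iget(inode);			/* take a reference without waiting */
	spin_unlock(&inode->i_lock);
	return inode;
```
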
Zach Brown
dba88705f7 Fix t_umount mount point number
t_umount had a typo that had it try to unmount a mount based on a
caller's variable, which accidentally happened to work for its only
caller.  Future callers would not have been so lucky.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
715c29aad3 Proactively drop dentry/inode caches outside locks
Previously we wouldn't try to remove cached dentries and inodes as
lock revocation removed cluster lock coverage.  The next time
we tried to use the cached dentries or inodes we'd acquire
a lock and refresh them.

But now cached inodes prevent final inode deletion.  If they linger
outside cluster locking then any final deletion will need to be deferred
until all its cached inodes are naturally dropped at some point in the
future across the cluster.  It might take refreshing the dentries or for
memory pressure to push out the old cached inodes.

This tries to proactively drop cached dentries and inodes as we lose
cluster lock coverage if they're not actively referenced.  We need to be
careful not to perform final inode deletion during lock invalidation
because it will deadlock, so we defer an iput which could delete during
evict out to async work.

Now deletion can be done synchronously in the task that is performing
the unlink because previous use of the inode on remote mounts hasn't
left unused cached inodes sitting around.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
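
A sketch of the shape of the change, with invented names (si->iput_llnode,
sbi->iput_llist, sbi->iput_work); the real invalidation code differs but
follows the same outline: prune unused dentries as lock coverage is lost
and push the final iput off to async work so a possible deletion in evict
can't deadlock the invalidation path.

```c
	/* drop unreferenced dentries that were covered by the lock */
	d_prune_aliases(inode);

	/* defer the final iput to async work; a deletion in evict would
	 * deadlock if it ran here in the invalidation path */
	llist_add(&si->iput_llnode, &sbi->iput_llist);
	queue_work(sbi->wq, &sbi->iput_work);	/* worker drains the list and iputs */
```
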
Zach Brown
b244b2d59c Add inode-deletion test
Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
22371fe5bd Fully destroy inodes after all mounts evict
Today an inode's items are deleted once its nlink reaches zero and the
final iput is called in a local mount.  This can delete inodes from
under other mounts which have opened the inode before it was unlinked on
another mount.

We fix this by adding cached inode tracking.  Each mount maintains
groups of cached inode bitmaps at the same granularity as inode locking.
As a mount performs its final iput it gets a bitmap from the server
which indicates if any other mount has inodes in the group open.

This keeps the cost moderate for the two fast paths: opening and closing
linked files only maintains the bitmap locally, and deleting a file that
was unlinked locally only gets the open map once per lock group.
Removing many files in a group will only lock and get the open map once
per group.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-21 12:17:33 -07:00
Zach Brown
c6fd807638 Use recov to manage lock recovery
Now that we have the recov layer we can have the lock server use it to
track lock recovery.  The lock server no longer needs its own recovery
tracking structures and can instead call recov.  We add a call for the
server to kick lock processing once lock recovery finishes.  We
can get rid of the persistent lock_client items now that the server is
driving recovery from the mounted_client items.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 12:10:35 -07:00
Zach Brown
592f472a1c Use recov in server to recover client greetings
The server starts recovery when it finds mounted client items as it
starts up.  The clients are done recovering once they send their
greeting.  If they don't recover in time then they'll be fenced.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 12:10:35 -07:00
Zach Brown
a65775588f Add server recovery helpers
Add a little set of functions to help the server track which clients are
waiting to recover which state.  The open map messages need to wait for
recovery so we're moving recovery out of being only in the lock server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 12:10:35 -07:00
Zach Brown
da1af9b841 Add scoutfs inode ino lock coverage
Add lock coverage which tracks if the inode has been refreshed and is
covered by the inode group cluster lock.  This will be used by
drop_inode and evict_inode to discover that the inode is current and
doesn't need to be refreshed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 12:10:35 -07:00
Zach Brown
accd680a7e Fix block setup always returning 0
Another case of returning 0 instead of ret.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 12:10:35 -07:00
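
The shape of that class of bug, purely illustrative rather than the
actual scoutfs function or helpers:

```c
static int block_setup(struct super_block *sb)
{
	int ret;

	ret = setup_block_cache(sb);	/* hypothetical call that may fail */
	if (ret < 0)
		goto out;

	ret = 0;
out:
	return 0;	/* bug: always reports success; should be "return ret;" */
}
```
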
Andy Grover
cbb031bb5d Merge pull request #32 from versity/zab/block_rhashtable_insert_fixes
Zab/block rhashtable insert fixes
2021-04-13 10:42:17 -07:00
Zach Brown
c3290771a0 Block cache use rht _lookup_ insert for EEXIST
The sneaky rhashtable_insert_fast() can't return -EEXIST despite the
last line of the function *REALLY* making it look like it can.  It just
inserts new objects at the head of the bucket lists without comparing
the insertion with existing objects.

The block cache was relying on insertion to resolve duplicate racing
allocated blocks.  Because it couldn't return -EEXIST we could get
duplicate cached blocks present in the hash table.

rhashtable_lookup_insert_fast() fixes this by actually comparing the
inserted object's key with the objects found in the insertion bucket.  A
racing allocator trying to insert a duplicate cached block will get an
error, drop their allocated block, and retry their lookup.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 09:24:23 -07:00
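
A sketch of the insertion retry described above, with hypothetical block
cache names (cache->ht, blk->ht_head, blk_ht_params, block_free,
block_lookup); rhashtable_lookup_insert_fast() compares the new object's
key against existing entries and returns -EEXIST instead of silently
adding a duplicate:

```c
	ret = rhashtable_lookup_insert_fast(&cache->ht, &blk->ht_head,
					    blk_ht_params);
	if (ret == -EEXIST) {
		/* lost the race: drop our block and use the cached one */
		block_free(blk);
		blk = block_lookup(cache, blkno);
	}
```
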
Zach Brown
cf3cb3f197 Wait for rhashtable to rehash on insert EBUSY
The rhashtable can return EBUSY if you insert fast enough to trigger an
expansion of the next table size that is waiting to be rehashed in an
rcu callback.  If we get EBUSY from rhashtable_insert we call
synchronize_rcu to wait for the rehash to complete before trying again.

This was hit in testing restores of a very large namespace and took a
few hours to hit.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 09:24:23 -07:00
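
The retry itself is small; a sketch using the same hypothetical names as
the previous example:

```c
	do {
		ret = rhashtable_lookup_insert_fast(&cache->ht, &blk->ht_head,
						    blk_ht_params);
		if (ret == -EBUSY)
			synchronize_rcu();	/* let the pending rehash run */
	} while (ret == -EBUSY);
```
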
Andy Grover
cb4ed98b3c Merge pull request #31 from versity/zab/block_shrink_wait_for_rebalance
Block cache shrink restart waits for rcu callbacks
2021-04-08 09:03:12 -07:00
Zach Brown
9ee7f7b9dc Block cache shrink restart waits for rcu callbacks
We're seeing cpu livelocks in block shrinking where counters show that a
single block cache shrink call is only getting EAGAIN from repeated
rhashtable walk attempts.  It occurred to me that the running task might
be preventing an RCU grace period from ending by never blocking.

The hope of this commit is that by waiting for rcu callbacks to run
we'll ensure that any pending rebalance callback runs before we retry
the rhashtable walk again.  I haven't been able to reproduce this easily
so this is a stab in the dark.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-07 12:50:50 -07:00
Zach Brown
300791ecfa Merge pull request #29 from agrover/cleanup
Cleanup
2021-04-07 12:27:00 -07:00
Andy Grover
4630b77b45 cleanup: Use flexible array members instead of 0-length arrays
See Documentation/process/deprecated.rst:217; items[] is now preferred
over items[0].

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-07 10:14:47 -07:00
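
For example (illustrative struct names, not actual scoutfs definitions):

```c
#include <linux/types.h>

/* before: deprecated zero-length array */
struct example_old {
	__le16 nr;
	struct scoutfs_key keys[0];
};

/* after: flexible array member, same layout, better compiler checking */
struct example_new {
	__le16 nr;
	struct scoutfs_key keys[];
};
```
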
Andy Grover
bdc43ca634 cleanup: Fix ESTALE handling in forest_read_items
Kinda weird to goto back to the out label and then out the bottom. Just
return -EIO, like forest_next_hint() does.

Don't call client_get_roots() right before retry, since it is the first thing
retry does.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-07 10:14:04 -07:00
Andy Grover
6406f05350 cleanup: Remove struct net_lock_grant_response
We're not using the roots member of this struct, so we can just
use struct scoutfs_net_lock directly.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-07 10:13:56 -07:00
Andy Grover
820b7295f0 cleanup: Unused LIST_HEADs
Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-05 16:23:41 -07:00
Zach Brown
b3611103ee Merge pull request #26 from agrover/tmpfile
Support O_TMPFILE and allow MOVE_BLOCKS into released extents
2021-04-05 15:23:41 -07:00
Andy Grover
0deb232d3f Support O_TMPFILE and allow MOVE_BLOCKS into released extents
Support O_TMPFILE: Create an unlinked file and put it on the orphan list.
If it ever gains a link, take it off the orphan list.

Change MOVE_BLOCKS ioctl to allow moving blocks into offline extent ranges.
Ioctl callers must set a new flag to enable this operation mode.

RH-compat: tmpfile support is actually backported by RH into the 3.10 kernel.
We need to use some of their kabi-maintaining wrappers to use it:
use a struct inode_operations_wrapper instead of base struct
inode_operations, set S_IOPS_WRAPPER flag in i_flags. This lets
RH's modified vfs_tmpfile() find our tmpfile fn pointer.

Add a test that covers both creating tmpfiles and moving their contents
into a destination file via MOVE_BLOCKS.

xfstests common/004 now runs because tmpfile is supported.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-05 14:23:44 -07:00
Andy Grover
1366e254f9 Merge pull request #30 from versity/zab/srch_block_ref_leak
Zab/srch block ref leak
2021-04-01 16:50:34 -07:00
Zach Brown
1259f899a3 srch compaction needs to prepare alloc for commit
The srch client compaction work initializes allocators, dirties blocks,
and writes them out as its transaction.  It forgot to call the
pre-commit allocator prepare function.

The prepare function drops block references used by the meta allocator
during the transaction.  This leaked block references which kept blocks
from being freed by the shrinker under memory pressure.  Eventually
memory was full of leaked blocks and the shrinker walked all of them
looking for blocks to free, resulting in an effective livelock that ground
the system to a crawl.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-01 13:04:40 -07:00
Zach Brown
2d393f435b Warn on leaked block refs on unmount
By the time we get to destroying the block cache we should have put all
our block references.  Warn as we tear down the blocks if we see any
blocks that still have references, implying a ref leak.  This caught a
leak caused by srch compaction forgetting to put allocator list block
refs.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-01 13:04:06 -07:00
Andy Grover
09c879bcf1 Merge pull request #25 from versity/zab/client_greeting_items_exist
Zab/client greeting items exist
2021-03-16 15:57:55 -07:00
Zach Brown
3de703757f Fix weird comment editing error
That comment looked very weird indeed until I recognized that I must
have forgotten to delete the first two attempts at starting the
sentence.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-16 12:02:05 -07:00
Zach Brown
7d67489b0c Handle resent initial client greetings
The very first greeting a client sends is unique because it doesn't yet
have a server_term field set and tells the server to create items to
track the client.

A server processing this request can create the items and then shut down
before the client is able to receive the reply.  They'll resend the
greeting without server_term but then the next server will get -EEXIST
errors as it tries to create items for the client.  This causes the
connection to break, which the client tries to reestablish, and the
pattern repeats indefinitely.

The fix is to simply recognize that -EEXIST is acceptable during item
creation.  Server message handlers always have to address the case where
a resent message was already processed by a previous server but its
response didn't make it to the client.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-16 11:56:26 -07:00
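
In the handler this amounts to treating -EEXIST from item creation as
success; a sketch with a hypothetical helper name:

```c
	ret = create_greeting_items(sb, rid);	/* hypothetical helper */
	if (ret == -EEXIST) {
		/* a previous server already created them for this resend */
		ret = 0;
	}
```
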
Zach Brown
73084462e9 Remove unused client greeting_umb
Remove an old client info field from the unmount barrier mechanism which
was removed a while ago.  It used to be compared to a super field to
decide to finish unmount without reconnecting but now we check for our
mounted_client item in the server's btree.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-16 10:04:42 -07:00
Zach Brown
8c81af2b9b Merge pull request #22 from agrover/ipv6
Reserve space in superblock for IPv6 addresses
2021-03-15 16:04:26 -07:00
Andy Grover
efe5d92458 Reserve space in superblock for IPv6 addresses
Define a family field, and add a union for IPv4 and v6 variants, although
v6 is not supported yet.

The family field is now used to determine the presence of an address in a
quorum slot, instead of checking whether the addr is zero.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-03-12 14:10:42 -08:00
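
A sketch of the kind of on-disk layout this describes, with guessed
struct, field, and constant names rather than the real format.h
definitions:

```c
#include <linux/types.h>

struct example_quorum_addr {
	__le16 family;			/* e.g. NONE / IPV4 / IPV6 */
	__le16 port;
	union {
		struct {
			__le32 addr;	/* IPv4 address */
		} v4;
		struct {
			__u8 addr[16];	/* IPv6, reserved for later */
		} v6;
	};
} __packed;
```
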
Andy Grover
d39e56d953 Merge pull request #24 from versity/zab/fix-block-stale-reads
Zab/fix block stale reads
2021-03-11 09:33:03 -08:00
Zach Brown
5661a1fb02 Fix block-stale-reads test
The block-stale-reads test was built from the ashes of a test that
used counters and triggers to work with the btree when it was
only used on the server.

The initial quick translation to try and trigger block cache retries
while the forest called the btree got a lot wrong.  It was still trying
to use a 'cl' variable that didn't refer to the client any more, the
trigger helpers now call statfs to find paths and can end up firing the
trigger themselves, and many more stale reads of counters can happen
throughout the system while we're working -- not just the one from our
trigger.

This fixes it up to consistently use fs numbers instead of
the silly stale cl variable and be less sensitive to triggers firing and
counter differences.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-10 12:36:41 -08:00
Zach Brown
12fa289399 Add t_trigger_arm_silent
t_trigger_arm always output the value of the trigger after arming on the
premise that tests required the trigger being armed.  In the process of
showing the trigger it calls a bunch of t_ helpers that build the path
to the trigger file using statfs_more to get the rid of mounts.

If the trigger being armed is in the server's mount and the specific
trigger test is fired by the server's statfs_more request processing
then the trigger can be fired before we read its value.  Tests can
inconsistently fail as the golden output shows the trigger being armed
or not, depending on whether it was in the server's mount.

t_trigger_arm_silent doesn't output the value of the armed trigger.  It
can be used for low level triggers that don't rely on reading the
trigger's value to discover that their effect has happened.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-10 12:36:34 -08:00
Zach Brown
75e8fab57c Add t_counter_diff_changed
Tests can use t_counter_diff to put a message in their golden output
when a specific change in counters is expected.  This adds
t_counter_diff_changed to output a message that indicates change or not,
for tests that want to see counters change but the amount of change
doesn't need to be precisely known.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-10 12:32:04 -08:00
Zach Brown
513d6b2734 Merge pull request #20 from versity/zab/remove_trans_spinlock
Zab/remove trans spinlock
2021-03-04 13:59:07 -08:00
Zach Brown
f8d39610a2 Only get inode writeback_lock when adding inodes
Each transaction maintains a global list of inodes to sync.  It checks
the inode and adds it in each write_end call per OS page.  Locking and
unlocking the global spinlock was showing up in profiles.  At the very
least, we can only get the lock once per large file that's written
during a transaction.  This will reduce spinlock traffic on the lock by
the number of pages written per file.   We'll want a better solution in
the long run, but this helps for now.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-04 11:39:30 -08:00
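
A sketch of the per-file short-circuit, with invented field names: only
the first write_end for an inode in a transaction takes the global lock,
checked again under the lock to close the race.

```c
	if (list_empty(&si->writeback_entry)) {
		spin_lock(&trans->writeback_lock);
		if (list_empty(&si->writeback_entry))
			list_add_tail(&si->writeback_entry,
				      &trans->writeback_list);
		spin_unlock(&trans->writeback_lock);
	}
```
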
Zach Brown
c470c1c9f6 Allow read-mostly _alloc_meta_low
Each transaction hold makes multiple calls to _alloc_meta_low to see if
the transaction should be committed to refill allocators before the
caller's hold is acquired and they can dirty blocks in the transaction.

_alloc_meta_low was using a spinlock to sample the allocator list_head
blocks to determine if there was space available.  The lock and unlock
stores were creating significant cacheline contention.

The _alloc_meta_low calls are higher frequency than allocations.  We can
use a seqlock to have exclusive writers and allow concurrent
_alloc_meta_low readers who retry if a writer intervenes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-04 11:39:30 -08:00
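
A sketch of the read side with made-up struct and field names; writers
update the sampled totals under write_seqlock() while readers retry if a
write intervened:

```c
#include <linux/seqlock.h>

struct alloc_info {			/* made-up stand-in for the allocator */
	seqlock_t seqlock;
	u64 avail_blocks;
};

static bool meta_alloc_low(struct alloc_info *alloc, u64 needed)
{
	unsigned int seq;
	u64 avail;

	do {
		seq = read_seqbegin(&alloc->seqlock);
		avail = alloc->avail_blocks;	/* sampled without the writer lock */
	} while (read_seqretry(&alloc->seqlock, seq));

	return avail < needed;
}
```
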
Andy Grover
cad902b9cd Merge pull request #19 from versity/zab/block_crash_and_consistency
Zab/block crash and consistency
2021-03-04 10:57:27 -08:00
Zach Brown
e163f3b099 Use atomic holders instead of trans info lock
We saw the transaction info lock showing up in profiles.  We were doing
quite a lot of work with that lock held.  We can remove it entirely and
use an atomic.

Instead of a locked holders count and writer boolean we can use an
atomic holders and have a high bit indicate that the write_func is
pending.  This turns the lock/unlock pairs in hold and release into
atomic inc/cmpxchg/dec operations.

Then we were checking allocators under the trans lock.  Now that we have
an atomic holders count we can increment it to prevent the writer from
committing and release it after the checks if we need another commit
before the hold.

And finally, we were freeing our allocated reservation struct under the
lock.  We weren't actually doing anything with the reservation struct so
we can use journal_info as the nested hold counter instead of having it
point to an allocated and freed struct.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 14:18:04 -08:00
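
A sketch of the encoding with made-up names: the low bits of one atomic
count holders and a high bit marks that the write_func is pending, so
hold becomes an inc/cmpxchg loop instead of a lock/unlock pair.

```c
#include <linux/atomic.h>

#define TRANS_WRITE_PENDING	(1 << 30)	/* sketch value only */

static bool trans_try_hold(atomic_t *holders)
{
	int cur = atomic_read(holders);
	int old;

	do {
		if (cur & TRANS_WRITE_PENDING)
			return false;		/* commit pending, caller backs off */
		old = cur;
		cur = atomic_cmpxchg(holders, old, old + 1);
	} while (cur != old);

	return true;
}
```
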
Zach Brown
a508baae76 Remove unused triggers
As the implementation shifted away from the ring of btree blocks and LSM
segments we lost callers to all these triggers.  They're unused and can
be removed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:50:00 -08:00
Zach Brown
208c51d1d2 Update stale block reading test
The previous test that triggered re-reading blocks, as though they were
stale, was written in the era where it only hit btree blocks and
everything else was stored in LSM segments.

This reworks the test to make it clear that it affects all our block
readers today.  The test only exercises the core read retry path, but it
could be expanded to test callers retrying with newer references after
they get -ESTALE errors.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:50:00 -08:00
Zach Brown
9450959ca4 Protect stale block readers from local dirtying
Our block cache consistency mechanism allows readers to try and read
stale block references.  They check block headers of the block they read
to discover if it has been modified and they should retry the read with
newer block references.

For this to be correct the block contents can't change under the
readers.  That's obviously true in the simple imagined case of one node
writing and another node reading.  But we also have the case where the
stale reader and dirtying writer can be concurrent tasks in the same
mount which share a block cache.

There were two failure cases that derive from the order of readers and
writers working with blocks.

If the reader goes first, the writer could find the existing block in
the cache and modify it while the reader assumes that it is read only.
The fix is to have the writer always remove any existing cached block
and insert a newly allocated block into the cache with the header fields
already changed.  Any existing readers will still have their cached
block references and any new readers will see the modified headers and
return -ESTALE.

The next failure comes from readers trying to invalidate dirty blocks
when they see modified headers.  They assumed that the existing cached
block was old and could be dropped so that a new current version could
be read.  But in this case a local writer has clobbered the reader's
stale block and the reader should immediately return -ESTALE.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:59 -08:00
Zach Brown
6237f0adc5 Add _block_dirty_ref to dirty blocks in one place
To create dirty blocks in memory each block type caller currently gets a
reference on a created block and then dirties it.  The reference it gets
could be an existing cached block that stale readers are currently
using.  This creates a problem with our block consistency protocol where
writers can dirty and modify cached blocks that readers are currently
reading in memory, leading to read corruption.

This commit is the first step in addressing that problem.  We add a
scoutfs_block_dirty_ref() call which returns a reference to a dirtied
block from the block core in one call.  We're only changing the callers
in this patch but we'll be reworking the dirtying mechanism in an
upcoming patch to avoid corrupting readers.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:17 -08:00
Zach Brown
f18fa0e97a Update scoutfs print for centralized block_ref
Update scoutfs print to use the new block_ref struct instead of the
handful of per-block type ref structs that we had accumulated.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:17 -08:00
Zach Brown
0969a94bfc Check one block_ref struct in block core
Each of the different block types had a reading function that read a
block and then checked their reference struct for their block type.

This gets rid of each block reference type and has a single block_ref
type which is then checked by a single ref reading function in the block
core.  By putting ref checking in the core we no longer have to export
checking the block header crc, verifying headers, invalidating blocks,
or even reading raw blocks themselves.  Everyone reads refs and leaves
the checking up to the core.

The changes don't have a significant functional effect.  This is mostly
just changing types and moving code around.  (There are some changes to
visible counters.)

This shares code, which is nice, but this is putting the block reference
checking in one place in the block core so that in a few patches we can
fix problems with writers dirtying blocks that are being read.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:17 -08:00
Zach Brown
b1b75cbe9f Fix block cache shrink and read racing crash
The block cache wasn't safely racing readers walking the rcu radix_tree
and the shrinker walking the LRU list.  A reader could get a reference
to a block that had been removed from the radix and was queued for
freeing.  It'd clobber the free's llist_head union member by putting the
block back on the lru and both the read and free would crash as they
each corrupted each other's memory.  We rarely saw this in heavy load
testing.

The fix is to clean up the use of rcu, refcounting, and freeing.

First, we get rid of the LRU list.  Now we don't have to worry about
resolving racing accesses of blocks between two independent structures.
Instead of shrinking walking the LRU list, we can mark blocks on access
such that shrinking can walk all blocks randomly and expect to quickly
find candidates to shrink.

To make it easier to concurrently walk all the blocks we switch to the
rhashtable instead of the radix tree.  It also has nice per-bucket
locking so we can get rid of the global lock that protected the LRU list
and radix insertion.  (And it isn't limited to 'long' keys so we can get
rid of the check for max meta blknos that couldn't be cached.)

Now we need to tighten up when read can get a reference and when shrink
can remove blocks.  Presence in the hash table now holds a refcount,
but we make it a magic high bit in the refcount so that it can be
differentiated from other references.  Now lookup can atomically get a
reference to blocks that are in the hash table, and shrinking can
atomically remove blocks when it is the only other reference.

We also clean up freeing a bit. It has to wait for the rcu grace period
to ensure that no other rcu readers can reference the blocks it's
freeing.  It has to iterate over the list with _safe because it's
freeing as it goes.

Interestingly, when reworking the shrinker I noticed that we weren't
scaling the nr_to_scan from the pages we returned in previous shrink
calls back to blocks.  We now divide the input from pages back into
blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:15 -08:00
Zach Brown
0f14826ff8 Merge pull request #18 from versity/zab/quorum_slots_unmount
Zab/quorum slots unmount
2021-02-22 13:34:25 -08:00
Zach Brown
336d521e44 Use spinlock to protect server farewell list
We had a mutex protecting the list of farewell requests.  The critical
sections are all very short so we can use a spinlock and be a bit
clearer and more efficient.  While we're at it, refactor freeing to free
outside of the critical section.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
4fab75b862 Account for non-quorum in server farewell
The server has to be careful to only send farewell responses to quorum
clients once it knows that it won't need their vote to elect a leader to
server remaining clients.

The logic for doing this forgot to take non-quorum clients into account.
It would send farewell requests to all the final majority of quorum
members once they all tried to unmount.  This could leave non-quorum
clients hung in unmount trying to send their farewell requests.

The fix is to count mouted_clients items for non-quorum clients and hold
off on sending farewell requests to the final majority until those
non-quorum clients have unmounted.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
f6f72e7eae Resume running the mount-unmount-race test
The recent quorum and unmount fixes should have addressed the failures
we were seeing in the mount-unmount-race test.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
9878312b4d Update man pages for quorum slot changes
Update the man pages with descriptions of the new mkfs -Q quorum slot
configuration and quorum_slot_nr mount option.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
7421bd1861 Filter all test device digits to 0
We mask device numbers in command output to 0:0 so that we can have
consistent golden test output.  The device number matching regex
responsible for this missed a few digits.

It didn't show up until we both tested enough mounts to get larger
device minor numbers and fixed multi-mount consistency so that the
affected tests didn't fail for other reasons.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
1db6f8194d Update xfstests to use quorum slot options
Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
2de7692336 Unmount mount point, not device
Our test unmount function unmounted the device instead of the mount
point.  It was written this way back in an old version of the harness
which didn't track mount points.

Now that we have mount points, we can just unmount that.  This stops the
umount command from having to search through all the current mounts
looking for the mountpoint for the device it was asked to unmount.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
8c1d96898a Log wait failure in mount-unmount-race test
I got a test failure where waiting returned an error, but it wasn't
clear what the error was or where it might have come from.  Add more
logging so that we learn more about what might have gone wrong.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
090646aaeb Update repo README.md for quorum slots
Update the example configuration in the README to specify the quorum
slots in mkfs arguments and mount options.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
d53350f9f1 Consistently lock server mounted_clients btree
The mounted_clients btree stores items to track mounted clients.  It's
modified by multiple greeting workers and the farewell work.

The greeting work was serialized by the farewell_mutex, but the
modifications in the farewell thread weren't protected.  This could
result in modifications between the threads being lost if the dirty
block reference updates raced in just the right way.  I saw this in
testing with deletions in farewell being lost and then that lingering
item preventing unmount because the server thought it had to wait for a
remaining quorum member to unmount.

We fix this by adding a mutex specifically to protect the
mounted_clients btree in the server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
57f34e90e9 Use mounted_client item as sign of farewell
As clients unmount they send a farewell request that cleans up
persistent state associated with the mount.  The client needs to be sure
that it gets processed, and we must maintain a majority of quorum
members mounted to be able to elect a server to process farewell
requests.

We had a mechanism using the unmount_barrier fields in the greeting and
super_block to let the final unmounting quorum majority know that their
farewells have been processed and that they didn't need to keep trying
to reconnect.

But we missed that we also need this out of band farewell handling
signal for non-quorum member clients as well.  The server can send
farewells to a non-member client as well as the final majority and then
tear down all the connections before the non-quorum client can see its
farewell response.  It also needs to be able to know that its farewell
has been processed before the server lets the final majority unmount.

We can remove the custom unmount_barrier method and instead have all
unmounting clients check for their mounted_client item in the server's
btree.  This item is removed as the last step of farewell processing so
if the client sees that it has been removed it knows that it doesn't
need to resend the farewell and can finish unmounting.

This fixes a bug where a non-quorum unmount could hang if it raced with
the final majority unmounting.  I was able to trigger this hang in our
tests with 5 mounts and 3 quorum members.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
79f6878355 Clean up block writing in mkfs
scoutfs mkfs had two block writing functions: write_block to fill out
some block header fields including crc calculation, and then
write_block_raw to pwrite the raw buffer to the bytes in the device.

These were used inconsistently as blocks came and went over time.  Most
callers filled out all the header fields themselves and called the raw
writer.  write_block was only used for super writing, which made sense
because it clobbered the block's header with the super header, so the
magic and seq fields set by the caller would be lost.

This cleans up the mess.  We only have one block writer and the caller
provides all the hdr fields.  Everything uses it instead of filling out
the fields themselves and calling the raw writer.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
740e13e53a Return error from _quorum_setup
Well that's a silly mistake.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
dbb716f1bb Update tests for quorum slots
Update the tests to deal with the mkfs and mount changes for the
specifically configured quorum slots.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
87fcad5428 Update scoutfs mkfs and print for quorum slots
Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
406d157891 Add stringify macro to utils
Add macros for stringifying either the name of a macro or its value.  In
keeping with making our utils/ sort of look like kernel code, we use the
kernel stringify names.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-18 12:57:30 -08:00
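
The pattern is the kernel's two-level stringify, which expands a macro
argument before turning it into a string (QUORUM_BLOCKS below is just a
hypothetical example macro):

```c
#define __stringify_1(x...)	#x
#define __stringify(x...)	__stringify_1(x)

/* if QUORUM_BLOCKS is defined as 8, __stringify(QUORUM_BLOCKS) becomes
 * "8", while __stringify_1(QUORUM_BLOCKS) would become "QUORUM_BLOCKS" */
```
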
Zach Brown
8e34c5d66a Use quorum slots and background election work
Previously quorum configuration specified the number of votes needed to
elect the leader.  This was an excessive amount of freedom in the
configuration of the cluster which created all sorts of problems which
had to be designed around.

Most acutely, though, it required a probabilistic mechanism for mounts
to persistently record that they're starting a server so that future
servers could find and possibly fence them.  They would write to a lot
of quorum blocks and trust that it was unlikely that future servers
would overwrite all of their written blocks.  Overwriting was always
possible, which would be bad enough, but it also required so much IO
that we had to use long election timeouts to avoid spurious fencing.
These longer timeouts had already gone wrong on some storage
configurations, leading to hung mounts.

To fix this and other problems we see coming, like live membership
changes, we now specifically configure the number and identity of mounts
which will be participating in quorum voting.  With specific identities,
mounts now have a corresponding specific block they can write to and
which future servers can read from to see if they're still running.

We change the quorum config in the super block from a single
quorum_count to an array of quorum slots which specify the address of
the mount that is assigned to that slot.  The mount argument to specify
a quorum voter changes from "server_addr=$addr" to "quorum_slot_nr=$nr"
which specifies the mount's slot.  The slot's address is used for udp
election messages and tcp server connections.

Now that we specifically have configured unique IP addresses for all the
quorum members, we can use UDP messages to send and receive the vote
messages in the raft protocol to elect a leader.  The quorum code doesn't
have to read and write disk block votes and is a more reasonable core
loop that either waits for received network messages or timeouts to
advance the raft election state machine.

The quorum blocks are now used for slots to store their persistent raft
term and to set their leader state.  We have event fields in the block
to record the timestamp of the most recent interesting events that
happened to the slot.

Now that raft doesn't use IO, we can leave the quorum election work
running in the background.  The raft work in the quorum members is
always running so we can use a much more typical raft implementation
with heartbeats.  Critically, this decouples the client and election
life cycles.  Quorum is always running and is responsible for starting
and stopping the server.  The client repeatedly tries to connect to a
server, it has nothing to do with deciding to participate in quorum.

Finally, we add a quorum/status sysfs file which shows the state of the
quorum raft protocol in a member mount and has the last messages that
were sent to or received from the other members.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-18 12:57:30 -08:00
Zach Brown
1c7bbd6260 More accurately describe unmounting quorum members
As a client unmounts it sends a farewell request to the server.  We have
to carefully manage unmounting the final quorum members so that there is
always a remaining quorum to elect a leader to start a server to process
all their farewell requests.

The mechanism for doing this described these clients as "voters".
That's not really right; in our terminology voters and candidates are
temporary roles taken on by members during a specific election term in
the raft protocol.  It's more accurate to describe the final set of
clients as quorum members.  They can be voters or candidates depending
on how the raft protocol timeouts work out in any given election.

So we rename the greeting flag, mounted client flag, and the code and
comments on either side of the client and server to be a little clearer.

This only changes symbols and comments, there should be no functional
change.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-11 15:47:39 -08:00
Zach Brown
3ad18b0f3b Update super blkno field tests for meta device
As we read the super we check the first and last meta and data blkno
fields.  The tests weren't updated as we moved from one device to two
metadata and data devices.

Add a helper that tests the range for the device and test both meta and
data ranges fully, instead of only testing the endpoints of each and
assuming they're related because they're living on one device.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-11 15:47:29 -08:00
Andy Grover
79cd7a499b Merge pull request #17 from versity/zab/disable_mount_unmount_test
Disable mount-unmount-race test
2021-02-01 10:09:26 -08:00
Zach Brown
6ad18769cb Disable mount-unmount-race test
The mount-unmount-race test is occasionally hanging, disable it while we
debug it and have test coverage for unrelated work.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-01 10:07:47 -08:00
Zach Brown
49d82fcaaf Merge pull request #14 from agrover/fix-jira-202
utils: Do not assert if release is given unaligned offset or length
2021-02-01 09:46:01 -08:00
Zach Brown
e4e12c1968 Merge pull request #15 from agrover/radix-block
Remove unused radix_block struct
2021-02-01 09:24:59 -08:00
Andy Grover
15fd2ccc02 utils: Do not assert if release is given unaligned offset or length
This is checked for by the kernel ioctl code, so giving unaligned values
will return an error, instead of aborting with an assert.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-29 09:30:57 -08:00
Andy Grover
eea95357d3 Remove unused radix_block struct
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-26 16:07:05 -08:00
Andy Grover
9842c5d13e Merge pull request #13 from versity/zab/multi_mount_test_fixes
Zab/multi mount test fixes
2021-01-26 15:56:33 -08:00
Zach Brown
ade539217e Handle advance_seq being replayed in new server
As a core principle, all server message processing needs to be safe to
replay as servers shut down and requests are resent to new servers.

The advance_seq handler got this wrong.  It would only try to remove a
trans_seq item for the seq sent by the client before inserting a new
item for the next seq.  This change could be committed before the reply
was lost as the server shuts down.  The next server would process the
resent request but wouldn't find the old item for the seq that the
client sent, and would ignore the new item that the previous server
inserted.  It would then insert another greater seq for the same client.

This would leave behind a stale old trans_seq that would be returned as
the last_seq which would forever limit the results that could be
returned from the seq index walks.

This fix is to always remove all previous seq items for the client
before inserting a new one.  This creates O(clients) server work, but
it's minimal.

This manifested as occasional simple-inode-index test failures (say 1 in
5?) which would trigger if the unmounts during previous tests happened
to have advance_seq resent across server shutdowns.  With this change
the test now reliably passes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
5a90234c94 Use terminated test name when saving passed stats
We've grown some test names that are prefixes of others
(createmany-parallel, createmany-parallel-mounts).  When we're searching
for lines with the test name we have to search for the exact test name,
by terminating the name with a space, instead of searching for a line
that starts with the test name.

This fixes strange output and saved passed stats for the names that
share a prefix.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
f81e4cb98a Add whitespace to xfstests output message
The message indicating that xfstests output was now being shown was
mashed up against the previous passed stats and it was gross and I hated
it.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
1fc706bf3f Filter hrtimer slow messages from dmesg
When running in debug kernels in guests we can really bog down things
enough to trigger hrtimer warnings.  I don't think there's much we can
reasonably do about that.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
e9c3aa6501 More carefully cancel server farewell work
Farewell work is queued by farewell message processing.  Server shutdown
didn't properly wait for pending farewell work to finish before tearing
down.  As the server work destroyed the server's connection the farewell
work could still be running and try to send responses down the socket.

We make the server more carefully avoid queueing farewell work if it's
in the process of shutting down and wait for farewell work to finish
before destroying the server's resources.

This fixed all manner of crashes that were seen in testing when a bunch
of nodes unmounted, creating farewell work on the server as it itself
unmounted and destroyed the server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
d39268bbc1 Fix spurious EIO from scoutfs_srch_get_compact
scoutfs_srch_get_compact() is building up a compaction request which has
a list of srch files to read and sort and write into a new srch file.
It finds input files by searching for a sufficient number of similar
files: first any unsorted log files and then sorted log files that are
around the same size.

It finds the files by using btree next on the srch zone which has types
for unsorted srch log files, sorted srch files, but also pending and
busy compaction items.

It was being far too cute about iterating over different key types.  It
was trying to adapt to finding the next key and was making assumptions
about the order of key types.  It didn't notice that the pending and
busy key types followed log and sorted and would generate EIO when it
ran into them and found their value length didn't match what it was
expecting.

Rework the next item ref parsing so that it returns -ENOENT if it gets
an unexpected key type, then look for the next key type when checking
enoent.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
35ed1a2438 Add t_require_meta_size function
Add a function that tests can use to skip when the metadata device isn't
large enough.  I thought we needed to avoid enospc in a particular test,
but it turns out the test's failure was unrelated.  So this isn't used
for now but it seems nice to keep around.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
32e7978a6e Extend lock invalidate grace period
The grace period is intended to let lock holders squeeze in more bulk
work before another node pulls the lock out from under them.  The length
of the delay is a balance between getting more work done per lock hold
and adding latency to ping-ponging workloads.

The current grace period was too short.  To do work in the conflicting
case you often have to read the result that the other mount wrote as you
invalidated their lock.  The test was written in the LSM world where
we'd effectively read a single level 0 1MB segment.  In the btree world
we're checking bloom blocks and reading the other mount's btree.  It has
more dependent read latency.

So we turn up the grace period to let conflicting readers squeeze in
more work before pulling the lock out from under them.  This value was
chosen to make lock-conflicting-batch-commit pass in guests sharing nvme
metadata devices in debugging kernels.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
8123b8fc35 fix lock-conflicting-batch-commit conf output
The test had a silly typo in the label it put on the time it took mounts
to perform conflicting metadata changes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
da5911c311 Use d_materialise_unique to splice dir dentries
When we're splicing in dentries in lookup we can be splicing the result
of changes on other nodes into a stale dcache.  The stale dcache might
contain dir entries and the dcache does not allow aliased directories.

Use d_materialise_unique() to splice in dir inodes so that we remove all
aliased dentries which must be stale.

We can still use d_splice_alias() for all other inode types.  Any
existing stale dentries will fail revalidation before they're used.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
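
A sketch of the splice decision in lookup; error handling and the
surrounding code are omitted and the helper name is invented, but the
two VFS calls are the ones the commit describes:

```c
static struct dentry *splice_found_inode(struct dentry *dentry,
					 struct inode *inode)
{
	/* directories: drop any stale aliases left by remote changes */
	if (inode && S_ISDIR(inode->i_mode))
		return d_materialise_unique(dentry, inode);

	/* other types: stale dentries fail revalidation before use */
	return d_splice_alias(inode, dentry);
}
```
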
Zach Brown
098fc420be Add some item cache page tracing
Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
7a96537210 Leave mounts mounted if run-tests fails
We can lose interesting state if the mounts are unmounted as tests fail,
only unmount if all the tests pass.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
0607dfdac8 Enable and collect trace_printk
Weirdly, run-tests was treating trace_printk not as an option to enable
trace_printk() traces but as an option to print trace events to the
console with printk?  That's not a thing.

Make -P really enable trace_printk tracing and collect it as it would
for enabled trace events.  It needs to be treated separately from the -t
options that enable trace events.

While we're at it treat the -P trace dumping option as a stand-alone
option that works without -t arguments.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
0354bb64c5 More carefully enable tracing in run-tests
run-tests.sh has a -t argument which takes a whitespace separated string
of globs of events to enable.  This was hard to use and made it very
easy to accidentally expand the globs at the wrong place in the script.

This makes each -t argument specify a single word glob which is stored
in an array so the glob isn't expanded until it's applied to the trace
event path.   We also add an error for -t globs that didn't match any
events and add a message with the count of -t arguments and enabled
events.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
631801c45c Don't queue lock invalidation work during shutdown
The lock invalidation work function needs to be careful not to requeue
itself while we're shutting down or we can be left with invalidation
functions racing with shutdown.  Invalidation calls igrab so we can end
up with unmount warning that there are still inodes in use.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
47a1ac92f7 Update ino-path args in basic-posix-consistency
The ino-path calls in basic-posix-consistency weren't updated for the
recent change to scoutfs cli args.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:45:23 -08:00
Zach Brown
004f693af3 Add golden output for mount-unmount-race test
Signed-off-by: Zach Brown <zab@versity.com>
2021-01-25 14:19:35 -08:00
Andy Grover
f271a5d140 Merge pull request #12 from versity/zab/andys_fallocate_fix_minor_cleanup
Retry if transaction cannot alloc for fallocate or write
2021-01-25 12:52:14 -08:00
Andy Grover
355eac79d2 Retry if transaction cannot alloc for fallocate or write
Add a new distinguishable return value (ENOBUFS) from the allocator for
when the transaction cannot alloc space. This doesn't mean the filesystem is
full -- opening a new transaction may result in forward progress.

Alter fallocate and get_blocks code to check for this err val and retry
with a new transaction. Handling actual ENOSPC can still happen, of
course.

Add counter called "alloc_trans_retry" and increment it from both spots.

Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: fixed up write_begin error paths]
2021-01-25 09:32:01 -08:00
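
A sketch of the retry loop with hypothetical transaction helpers;
-ENOBUFS only means the current transaction's allocator ran low, so the
caller commits and retries with a fresh transaction rather than
returning ENOSPC:

```c
	do {
		ret = hold_transaction(sb);	/* hypothetical; may commit+refill */
		if (ret)
			break;
		ret = allocate_file_blocks(inode, iblock, count);
		release_transaction(sb);
		/* the commit also bumps an "alloc_trans_retry" counter here */
	} while (ret == -ENOBUFS);
```
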
Zach Brown
d8b4e94854 Merge pull request #10 from agrover/rm-item-accounting
Remove item accounting
2021-01-21 09:57:53 -08:00
Andy Grover
bed33c7ffd Remove item accounting
Remove kmod/src/count.h
Remove scoutfs_trans_track_item()
Remove reserved/actual fields from scoutfs_reservation

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-20 17:01:08 -08:00
Andy Grover
b370730029 Merge pull request #11 from versity/zab/item_cache_memory_corruption
Fix item cache page memory corruption
2021-01-20 10:27:20 -08:00
Zach Brown
d64dd89ead Fix item cache page memory corruption
The item cache page life cycle is tricky.  There are no proper page
reference counts, everything is done by nesting the page rwlock inside
item_cache_info rwlock.  The intent is that you can only reference pages
while you hold the rwlocks appropriately.  The per-cpu page references
are outside that locking regime so they add a reference count.  Now
there are reference counts for the main cache index reference and for
each per-cpu reference.

The end result of all this is that you can only reference pages outside
of locks if you're protected by references.

Lock invalidation messed this up by trying to add its right split page
to the lru after it was unlocked.  Its page reference wasn't protected
at this point.  Shrinking could be freeing that page, and so it could be
putting a freed page's memory back on the lru.

Shrinking had a little bug that it was using list_move to move an
initialized lru_head list_head.  It turns out to be harmless (list_del
will just follow pointers to itself and set itself as next and prev all
over again), but boy does it catch one's eye.  Let's remove all
confusion and drop the reference while holding the cinf->rwlock instead
of trying to optimize freeing outside locks.

Finally, the big one: inserting a read item after compacting the page to
make room was inserting through stale parent pointers into the old
pre-compacted page, rather than the new page that was swapped in by
compaction.  This left references to a freed page in the page rbtree and
hilarity ensued.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-20 09:02:29 -08:00
Zach Brown
8d81196e01 Merge pull request #7 from agrover/versioning
Filesystem version instead of format hash check
2021-01-19 11:55:32 -08:00
Andy Grover
d731c1577e Filesystem version instead of format hash check
Instead of hashing headers, define an interop version. Do not mount
superblocks that have a different version, either higher or lower.

Since this is pretty much the same as the format hash except it's a
constant, minimal code changes are needed.

Initial dev version is 0, with the intent that version will be bumped to
1 immediately prior to tagging initial release version.

Update README. Fix comments.

Add interop version to notes and modinfo.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-15 10:53:00 -08:00
146 changed files with 16968 additions and 5518 deletions

128
README.md

@@ -1,130 +1,24 @@
# Introduction
scoutfs is a clustered in-kernel Linux filesystem designed and built
from the ground up to support large archival systems.
scoutfs is a clustered in-kernel Linux filesystem designed to support
large archival systems. It features additional interfaces and metadata
so that archive agents can perform their maintenance workflows without
walking all the files in the namespace. Its cluster support lets
deployments add nodes to satisfy archival tier bandwidth targets.
Its key differentiating features are:
The design goal is to reach file populations in the trillions, with the
archival bandwidth to match, while remaining operational and responsive.
- Integrated consistent indexing accelerates archival maintenance operations
- Commit logs allow nodes to write concurrently without contention
It meets best of breed expectations:
Highlights of the design and implementation include:
* Fully consistent POSIX semantics between nodes
* Rich metadata to ensure the integrity of metadata references
* Atomic transactions to maintain consistent persistent structures
* First class kernel implementation for high performance and low latency
* Integrated archival metadata replaces syncing to external databases
* Dynamic seperation of resources lets nodes write in parallel
* 64bit throughout; no limits on file or directory sizes or counts
* Open GPLv2 implementation
Learn more in the [white paper](https://docs.wixstatic.com/ugd/aaa89b_88a5cc84be0b4d1a90f60d8900834d28.pdf).
# Current Status
**Alpha Open Source Development**
scoutfs is under heavy active development. We're developing it in the
open to give the community an opportunity to affect the design and
implementation.
The core architectural design elements are in place. Much surrounding
functionality hasn't been implemented. It's appropriate for early
adopters and interested developers, not for production use.
In that vein, expect significant incompatible changes to both the format
of network messages and persistent structures. To avoid mistakes the
implementation currently calculates a hash of the format and ioctl
header files in the source tree. The kernel module will refuse to mount
a volume created by userspace utilities with a mismatched hash, and it
will refuse to connect to a remote node with a mismatched hash. This
means having to unmount, mkfs, and remount everything across many
functional changes. Once the format is nailed down we'll wire up
forward and back compat machinery and remove this temporary safety
measure.
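For reference, that hash is derived from the two headers exactly as the Makefile rule removed later in this compare computes it; as a standalone shell command it is:

```shell
# compute the 16-hex-digit format hash from the format and ioctl headers,
# mirroring the SCOUTFS_FORMAT_HASH Makefile rule shown further down
cat src/format.h src/ioctl.h | md5sum | cut -b1-16
```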
The current kernel module is developed against the RHEL/CentOS 7.x
kernel to minimize the friction of developing and testing with partners'
existing infrastructure. Once we're happy with the design we'll shift
development to the upstream kernel while maintaining distro
compatibility branches.
# Community Mailing List
Please join us on the open scoutfs-devel@scoutfs.org [mailing list
hosted on Google Groups](https://groups.google.com/a/scoutfs.org/forum/#!forum/scoutfs-devel)
for all discussion of scoutfs.
# Quick Start
**The following is a very rough example of the procedure to get up and
running; experience will be needed to fill in the gaps. We're happy to
help on the mailing list.**
The requirements for running scoutfs on a small cluster are:
1. One or more nodes running x86-64 CentOS/RHEL 7.4 (or 7.3)
2. Access to two shared block devices
3. IPv4 connectivity between the nodes
The steps for getting scoutfs mounted and operational are:
1. Get the kernel module running on the nodes
2. Make a new filesystem on the devices with the userspace utilities
3. Mount the devices on all the nodes
In this example we run all of these commands on three nodes. The names
of the block devices are the same on all the nodes.
1. Get the Kernel Module and Userspace Binaries
* Either use snapshot RPMs built from git by Versity:
```shell
rpm -i https://scoutfs.s3-us-west-2.amazonaws.com/scoutfs-repo-0.0.1-1.el7_4.noarch.rpm
yum install scoutfs-utils kmod-scoutfs
```
* Or use the binaries built from checked out git repositories:
```shell
yum install kernel-devel
git clone git@github.com:versity/scoutfs.git
make -C scoutfs
modprobe libcrc32c
insmod scoutfs/kmod/src/scoutfs.ko
alias scoutfs=$PWD/scoutfs/utils/src/scoutfs
```
2. Make a New Filesystem (**destroys contents, no questions asked**)
We specify that two of our three nodes must be present to form a
quorum for the system to function.
```shell
scoutfs mkfs -Q 2 /dev/meta_dev /dev/data_dev
```
3. Mount the Filesystem
Each mounting node provides its local IP address on which it will run
an internal server for the other mounts if it is elected the leader by
the quorum.
```shell
mkdir /mnt/scoutfs
mount -t scoutfs -o server_addr=$NODE_ADDR,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
4. For Kicks, Observe the Metadata Change Index
The `meta_seq` index tracks the inodes that are changed in each
transaction.
```shell
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/two; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
```
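As a small sanity check to go with the walk above, standard tools can confirm the mount from step 3 is in place before and after the touches; nothing scoutfs-specific is assumed here.

```shell
# confirm the example scoutfs mount from step 3 is present and healthy
findmnt -t scoutfs
df -h /mnt/scoutfs
```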

39
ReleaseNotes.md Normal file
View File

@@ -0,0 +1,39 @@
Versity ScoutFS Release Notes
=============================
---
v1.2-rc
\
*TBD*
---
v1.1
\
*Feb 4, 2022*
* **Add scoutfs(1) change-quorum-config command**
\
Add a change-quorum-config command to scoutfs(1) to change the quorum
configuration stored in the metadata device while the file system is
unmounted. This can be used to change the mounts that will
participate in quorum and the IP addresses they use.
* **Fix Rare Risk of Item Cache Corruption**
\
Code review found a rare potential source of item cache corruption.
If this happened it would look as though deleted parts of the filesystem
had returned, but only as they were at the time they were deleted. Old
deleted items are not affected. This problem only affected the item
cache, never persistent storage. Unmounting and remounting would drop
the bad item cache and resync it with the correct persistent data.
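The unmount/remount workaround mentioned above is just a mount cycle; with the example devices and mount point from the Quick Start earlier in this compare it would look like this (a sketch, not a required procedure):

```shell
# cycling the mount drops the in-memory item cache and resyncs it from
# the correct persistent data; paths are the Quick Start example values
umount /mnt/scoutfs
mount -t scoutfs -o server_addr=$NODE_ADDR,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```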
---
v1.0
\
*Nov 8, 2021*
* **Initial Release**
\
Version 1.0 marks the first GA release.

View File

@@ -16,11 +16,7 @@ SCOUTFS_GIT_DESCRIBE := \
$(shell git describe --all --abbrev=6 --long 2>/dev/null || \
echo no-git)
SCOUTFS_FORMAT_HASH := \
$(shell cat src/format.h src/ioctl.h | md5sum | cut -b1-16)
SCOUTFS_ARGS := SCOUTFS_GIT_DESCRIBE=$(SCOUTFS_GIT_DESCRIBE) \
SCOUTFS_FORMAT_HASH=$(SCOUTFS_FORMAT_HASH) \
CONFIG_SCOUTFS_FS=m -C $(SK_KSRC) M=$(CURDIR)/src \
EXTRA_CFLAGS="-Werror"

View File

@@ -1,7 +1,6 @@
obj-$(CONFIG_SCOUTFS_FS) := scoutfs.o
CFLAGS_super.o = -DSCOUTFS_GIT_DESCRIBE=\"$(SCOUTFS_GIT_DESCRIBE)\" \
-DSCOUTFS_FORMAT_HASH=0x$(SCOUTFS_FORMAT_HASH)LLU
CFLAGS_super.o = -DSCOUTFS_GIT_DESCRIBE=\"$(SCOUTFS_GIT_DESCRIBE)\"
CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
@@ -19,6 +18,7 @@ scoutfs-y += \
dir.o \
export.o \
ext.o \
fence.o \
file.o \
forest.o \
inode.o \
@@ -28,9 +28,11 @@ scoutfs-y += \
lock_server.o \
msg.o \
net.o \
omap.o \
options.o \
per_task.o \
quorum.o \
recov.o \
scoutfs_trace.o \
server.o \
sort_priv.o \
@@ -41,6 +43,7 @@ scoutfs-y += \
trans.o \
triggers.o \
tseq.o \
volopt.o \
xattr.o
#

File diff suppressed because it is too large

View File

@@ -38,6 +38,10 @@
#define SCOUTFS_ALLOC_DATA_LG_THRESH \
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/* the client will force commits if data allocators get too low */
#define SCOUTFS_ALLOC_DATA_REFILL_THRESH \
((256ULL * 1024 * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* Fill client alloc roots to the target when they fall below the lo
* threshold.
@@ -55,15 +59,16 @@
#define SCOUTFS_SERVER_DATA_FILL_LO \
(1ULL * 1024 * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* Each of the server meta_alloc roots will try to keep a minimum amount
* of free blocks. The server will swap roots when its current avail
* falls below the threshold while the freed root is still above it. It
must have room for the largest allocation attempted in a
* transaction on the server.
* Log merge meta allocations are only used for one request and will
* never use more than the dirty limit.
*/
#define SCOUTFS_SERVER_META_ALLOC_MIN \
(SCOUTFS_SERVER_META_FILL_TARGET * 2)
#define SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT (64ULL * 1024 * 1024)
/* a few extra blocks for alloc blocks */
#define SCOUTFS_SERVER_MERGE_FILL_TARGET \
((SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT >> SCOUTFS_BLOCK_LG_SHIFT) + 4)
#define SCOUTFS_SERVER_MERGE_FILL_LO SCOUTFS_SERVER_MERGE_FILL_TARGET
/*
* A run-time use of a pair of persistent avail/freed roots as a
@@ -72,7 +77,8 @@
* transaction.
*/
struct scoutfs_alloc {
spinlock_t lock;
/* writers rarely modify list_head avail/freed. readers often check for _meta_alloc_low */
seqlock_t seqlock;
struct mutex mutex;
struct scoutfs_block *dirty_avail_bl;
struct scoutfs_block *dirty_freed_bl;
@@ -124,7 +130,14 @@ int scoutfs_free_data(struct super_block *sb, struct scoutfs_alloc *alloc,
int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total);
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
int scoutfs_alloc_remove(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
int scoutfs_alloc_fill_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
@@ -145,11 +158,20 @@ int scoutfs_alloc_splice_list(struct super_block *sb,
bool scoutfs_alloc_meta_low(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 nr);
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag);
typedef int (*scoutfs_alloc_foreach_cb_t)(struct super_block *sb, void *arg,
int owner, u64 id,
bool meta, bool avail, u64 blocks);
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg);
int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_block *super,
scoutfs_alloc_foreach_cb_t cb, void *arg);
typedef void (*scoutfs_alloc_extent_cb_t)(struct super_block *sb, void *cb_arg,
struct scoutfs_extent *ext);
int scoutfs_alloc_extents_cb(struct super_block *sb, struct scoutfs_alloc_root *root,
scoutfs_alloc_extent_cb_t cb, void *cb_arg);
#endif

File diff suppressed because it is too large

View File

@@ -13,27 +13,16 @@ struct scoutfs_block {
void *priv;
};
__le32 scoutfs_block_calc_crc(struct scoutfs_block_header *hdr, u32 size);
bool scoutfs_block_valid_crc(struct scoutfs_block_header *hdr, u32 size);
bool scoutfs_block_valid_ref(struct super_block *sb,
struct scoutfs_block_header *hdr,
__le64 seq, __le64 blkno);
struct scoutfs_block *scoutfs_block_create(struct super_block *sb, u64 blkno);
struct scoutfs_block *scoutfs_block_read(struct super_block *sb, u64 blkno);
void scoutfs_block_invalidate(struct super_block *sb, struct scoutfs_block *bl);
bool scoutfs_block_consistent_ref(struct super_block *sb,
struct scoutfs_block *bl,
__le64 seq, __le64 blkno, u32 magic);
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret);
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl);
void scoutfs_block_writer_init(struct super_block *sb,
struct scoutfs_block_writer *wri);
void scoutfs_block_writer_mark_dirty(struct super_block *sb,
struct scoutfs_block_writer *wri,
struct scoutfs_block *bl);
bool scoutfs_block_writer_is_dirty(struct super_block *sb,
struct scoutfs_block *bl);
int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_block_ref *ref,
u32 magic, struct scoutfs_block **bl_ret,
u64 dirty_blkno, u64 *ref_blkno);
int scoutfs_block_writer_write(struct super_block *sb,
struct scoutfs_block_writer *wri);
void scoutfs_block_writer_forget_all(struct super_block *sb,

File diff suppressed because it is too large

View File

@@ -20,13 +20,15 @@ struct scoutfs_btree_item_ref {
/* caller gives an item to the callback */
typedef int (*scoutfs_btree_item_cb)(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_key *key, u64 seq, u8 flags,
void *val, int val_len, void *arg);
/* simple singly-linked list of items */
struct scoutfs_btree_item_list {
struct scoutfs_btree_item_list *next;
struct scoutfs_key key;
u64 seq;
u8 flags;
int val_len;
u8 val[0];
};
@@ -82,6 +84,49 @@ int scoutfs_btree_insert_list(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_btree_item_list *lst);
int scoutfs_btree_parent_range(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_key *start,
struct scoutfs_key *end);
int scoutfs_btree_get_parent(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_btree_root *par_root);
int scoutfs_btree_set_parent(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_btree_root *par_root);
int scoutfs_btree_rebalance(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key);
/* merge input is a list of roots */
struct scoutfs_btree_root_head {
struct list_head head;
struct scoutfs_btree_root root;
};
int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_key *start,
struct scoutfs_key *end,
struct scoutfs_key *next_ret,
struct scoutfs_btree_root *root,
struct list_head *input_list,
bool subtree, int dirty_limit, int alloc_low);
int scoutfs_btree_free_blocks(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_key *key,
struct scoutfs_btree_root *root, int alloc_low);
void scoutfs_btree_put_iref(struct scoutfs_btree_item_ref *iref);
#endif

View File

@@ -31,16 +31,15 @@
#include "net.h"
#include "endian_swap.h"
#include "quorum.h"
#include "omap.h"
#include "trans.h"
/*
* The client is responsible for maintaining a connection to the server.
* This includes managing quorum elections that determine which client
* should run the server that all the clients connect to.
*/
#define CLIENT_CONNECT_DELAY_MS (MSEC_PER_SEC / 10)
#define CLIENT_CONNECT_TIMEOUT_MS (1 * MSEC_PER_SEC)
#define CLIENT_QUORUM_TIMEOUT_MS (5 * MSEC_PER_SEC)
struct client_info {
struct super_block *sb;
@@ -50,9 +49,9 @@ struct client_info {
struct workqueue_struct *workq;
struct delayed_work connect_dwork;
unsigned long connect_delay_jiffies;
u64 server_term;
u64 greeting_umb;
bool sending_farewell;
int farewell_error;
@@ -118,23 +117,6 @@ int scoutfs_client_get_roots(struct super_block *sb,
NULL, 0, roots, sizeof(*roots));
}
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
__le64 before = cpu_to_le64p(seq);
__le64 after;
int ret;
ret = scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
&before, sizeof(before),
&after, sizeof(after));
if (ret == 0)
*seq = le64_to_cpu(after);
return ret;
}
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
@@ -156,7 +138,7 @@ static int client_lock_response(struct super_block *sb,
void *resp, unsigned int resp_len,
int error, void *data)
{
if (resp_len != sizeof(struct scoutfs_net_lock_grant_response))
if (resp_len != sizeof(struct scoutfs_net_lock))
return -EINVAL;
/* XXX error? */
@@ -221,6 +203,120 @@ int scoutfs_client_srch_commit_compact(struct super_block *sb,
res, sizeof(*res), NULL, 0);
}
int scoutfs_client_get_log_merge(struct super_block *sb,
struct scoutfs_log_merge_request *req)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_GET_LOG_MERGE,
NULL, 0, req, sizeof(*req));
}
int scoutfs_client_commit_log_merge(struct super_block *sb,
struct scoutfs_log_merge_complete *comp)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_COMMIT_LOG_MERGE,
comp, sizeof(*comp), NULL, 0);
}
int scoutfs_client_send_omap_response(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map *map)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_response(sb, client->conn, SCOUTFS_NET_CMD_OPEN_INO_MAP,
id, 0, map, sizeof(*map));
}
/* The client is receiving an omap request from the server */
static int client_open_ino_map(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
if (arg_len != sizeof(struct scoutfs_open_ino_map_args))
return -EINVAL;
return scoutfs_omap_client_handle_request(sb, id, arg);
}
/* The client is sending an omap request to the server */
int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
struct scoutfs_open_ino_map *map)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_open_ino_map_args args = {
.group_nr = cpu_to_le64(group_nr),
.req_id = 0,
};
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_OPEN_INO_MAP,
&args, sizeof(args), map, sizeof(*map));
}
/* The client is asking the server for the current volume options */
int scoutfs_client_get_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_GET_VOLOPT,
NULL, 0, volopt, sizeof(*volopt));
}
/* The client is asking the server to update volume options */
int scoutfs_client_set_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_SET_VOLOPT,
volopt, sizeof(*volopt), NULL, 0);
}
/* The client is asking the server to clear volume options */
int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_CLEAR_VOLOPT,
volopt, sizeof(*volopt), NULL, 0);
}
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_RESIZE_DEVICES,
nrd, sizeof(*nrd), NULL, 0);
}
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_STATFS,
NULL, 0, nst, sizeof(*nst));
}
/*
* The server is asking that we trigger a commit of the current log
* trees so that they can ensure an item seq discontinuity between
* finalized log btrees and the next set of open log btrees. If we're
* shutting down then we're already going to perform a final commit.
*/
static int sync_log_trees(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
if (arg_len != 0)
return -EINVAL;
if (!scoutfs_unmounting(sb))
scoutfs_trans_sync(sb, 0);
return scoutfs_net_response(sb, conn, cmd, id, 0, NULL, 0);
}
/* The client is receiving an invalidation request from the server */
static int client_lock(struct super_block *sb,
struct scoutfs_net_connection *conn, u8 cmd, u64 id,
@@ -258,7 +354,8 @@ static int client_greeting(struct super_block *sb,
void *resp, unsigned int resp_len, int error,
void *data)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct client_info *client = sbi->client_info;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_greeting *gr = resp;
bool new_server;
@@ -275,17 +372,15 @@ static int client_greeting(struct super_block *sb,
}
if (gr->fsid != super->hdr.fsid) {
scoutfs_warn(sb, "server sent fsid 0x%llx, client has 0x%llx",
le64_to_cpu(gr->fsid),
le64_to_cpu(super->hdr.fsid));
scoutfs_warn(sb, "server greeting response fsid 0x%llx did not match client fsid 0x%llx",
le64_to_cpu(gr->fsid), le64_to_cpu(super->hdr.fsid));
ret = -EINVAL;
goto out;
}
if (gr->format_hash != super->format_hash) {
scoutfs_warn(sb, "server sent format 0x%llx, client has 0x%llx",
le64_to_cpu(gr->format_hash),
le64_to_cpu(super->format_hash));
if (le64_to_cpu(gr->fmt_vers) != sbi->fmt_vers) {
scoutfs_warn(sb, "server greeting response format version %llu did not match client format version %llu",
le64_to_cpu(gr->fmt_vers), sbi->fmt_vers);
ret = -EINVAL;
goto out;
}
@@ -294,52 +389,31 @@ static int client_greeting(struct super_block *sb,
scoutfs_net_client_greeting(sb, conn, new_server);
client->server_term = le64_to_cpu(gr->server_term);
client->greeting_umb = le64_to_cpu(gr->unmount_barrier);
client->connect_delay_jiffies = 0;
ret = 0;
out:
return ret;
}
/*
* This work is responsible for maintaining a connection from the client
* to the server. It's queued on mount and disconnect and we requeue
* the work if the work fails and we're not shutting down.
* The client is deciding if it needs to keep trying to reconnect to
* have its farewell request processed. The server removes our mounted
* client item last so that if we don't see it we know the server has
processed our farewell and we don't need to reconnect; we can unmount
safely.
*
* In the typical case a mount reads the super blocks and finds the
* address of the currently running server and connects to it.
* Non-voting clients who can't connect will keep trying alternating
* reading the address and getting connect timeouts.
*
* Voting mounts will try to elect a leader if they can't connect to the
* server. When a quorum can't connect and are able to elect a leader
* then a new server is started. The new server will write its address
* in the super and everyone will be able to connect.
*
* There's a tricky bit of coordination required to safely unmount.
* Clients need to tell the server that they won't be coming back with a
* farewell request. Once a client receives its farewell response it
* can exit. But a majority of clients need to stick around to elect a
* server to process all their farewell requests. This is coordinated
* by having the greeting tell the server that a client is a voter. The
* server then holds on to farewell requests from voters until only
* requests from the final quorum remain. These farewell responses are
* only sent after updating an unmount barrier in the super to indicate
* to the final quorum that they can safely exit without having received
* a farewell response over the network.
* This is peeking at btree blocks that the server could be actively
freeing with cow updates so it can see stale blocks; we just return
the error and we'll retry eventually as the connection times out.
*/
static void scoutfs_client_connect_worker(struct work_struct *work)
static int lookup_mounted_client_item(struct super_block *sb, u64 rid)
{
struct client_info *client = container_of(work, struct client_info,
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = NULL;
struct mount_options *opts = &sbi->opts;
const bool am_voter = opts->server_addr.sin_addr.s_addr != 0;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
ktime_t timeout_abs;
u64 elected_term;
struct scoutfs_key key = {
.sk_zone = SCOUTFS_MOUNTED_CLIENT_ZONE,
.skmc_rid = cpu_to_le64(rid),
};
struct scoutfs_super_block *super;
SCOUTFS_BTREE_ITEM_REF(iref);
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
@@ -352,57 +426,94 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
if (ret)
goto out;
/* can safely unmount if we see that server processed our farewell */
if (am_voter && client->sending_farewell &&
(le64_to_cpu(super->unmount_barrier) > client->greeting_umb)) {
ret = scoutfs_btree_lookup(sb, &super->mounted_clients, &key, &iref);
if (ret == 0) {
scoutfs_btree_put_iref(&iref);
ret = 1;
}
if (ret == -ENOENT)
ret = 0;
kfree(super);
out:
return ret;
}
/*
* If we're not seeing successful connections we want to back off. Each
* connection attempt starts by setting a long connection work delay.
* We only set a shorter delay if we see a greeting response from the
* server. At that point we'll try to immediately reconnect if the
* connection is broken.
*/
static void queue_connect_dwork(struct super_block *sb, struct client_info *client)
{
if (!atomic_read(&client->shutting_down) && !scoutfs_forcing_unmount(sb))
queue_delayed_work(client->workq, &client->connect_dwork,
client->connect_delay_jiffies);
}
/*
* This work is responsible for maintaining a connection from the client
* to the server. It's queued on mount and disconnect and we requeue
* the work if the work fails and we're not shutting down.
*
* We ask quorum for an address to try and connect to. If there isn't
* one, or it fails, we back off a bit before trying again.
*
* There's a tricky bit of coordination required to safely unmount.
* Clients need to tell the server that they won't be coming back with a
* farewell request. Once the server processes a farewell request from
* the client it can forget the client. If the connection is broken
* before the client gets the farewell response it doesn't want to
reconnect to send it again. Instead the client can read the metadata
* device to check for the lack of an item which indicates that the
* server has processed its farewell.
*/
static void scoutfs_client_connect_worker(struct work_struct *work)
{
struct client_info *client = container_of(work, struct client_info,
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct mount_options *opts = &sbi->opts;
const bool am_quorum = opts->quorum_slot_nr >= 0;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
int ret;
/* can unmount once server farewell handling removes our item */
if (client->sending_farewell &&
lookup_mounted_client_item(sb, sbi->rid) == 0) {
client->farewell_error = 0;
complete(&client->farewell_comp);
ret = 0;
goto out;
}
/* try to connect to the super's server address */
scoutfs_addr_to_sin(&sin, &super->server_addr);
if (sin.sin_addr.s_addr != 0 && sin.sin_port != 0)
ret = scoutfs_net_connect(sb, client->conn, &sin,
CLIENT_CONNECT_TIMEOUT_MS);
else
ret = -ENOTCONN;
/* always wait a bit until a greeting response sets a lower delay */
client->connect_delay_jiffies = msecs_to_jiffies(CLIENT_CONNECT_DELAY_MS);
/* voters try to elect a leader if they couldn't connect */
if (ret < 0) {
/* non-voters will keep retrying */
if (!am_voter)
goto out;
/* make sure local server isn't writing super during votes */
scoutfs_server_stop(sb);
timeout_abs = ktime_add_ms(ktime_get(),
CLIENT_QUORUM_TIMEOUT_MS);
ret = scoutfs_quorum_election(sb, timeout_abs,
le64_to_cpu(super->quorum_server_term),
&elected_term);
/* start the server if we were asked to */
if (elected_term > 0)
ret = scoutfs_server_start(sb, &opts->server_addr,
elected_term);
ret = -ENOTCONN;
ret = scoutfs_quorum_server_sin(sb, &sin);
if (ret < 0)
goto out;
ret = scoutfs_net_connect(sb, client->conn, &sin,
CLIENT_CONNECT_TIMEOUT_MS);
if (ret < 0)
goto out;
}
/* send a greeting to verify endpoints of each connection */
greet.fsid = super->hdr.fsid;
greet.format_hash = super->format_hash;
greet.fmt_vers = cpu_to_le64(sbi->fmt_vers);
greet.server_term = cpu_to_le64(client->server_term);
greet.unmount_barrier = cpu_to_le64(client->greeting_umb);
greet.rid = cpu_to_le64(sbi->rid);
greet.flags = 0;
if (client->sending_farewell)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_FAREWELL);
if (am_voter)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_VOTER);
if (am_quorum)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_QUORUM);
ret = scoutfs_net_submit_request(sb, client->conn,
SCOUTFS_NET_CMD_GREETING,
@@ -411,17 +522,15 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
if (ret)
scoutfs_net_shutdown(sb, client->conn);
out:
kfree(super);
/* always have a small delay before retrying to avoid storms */
if (ret && !atomic_read(&client->shutting_down))
queue_delayed_work(client->workq, &client->connect_dwork,
msecs_to_jiffies(CLIENT_CONNECT_DELAY_MS));
if (ret)
queue_connect_dwork(sb, client);
}
static scoutfs_net_request_t client_req_funcs[] = {
[SCOUTFS_NET_CMD_SYNC_LOG_TREES] = sync_log_trees,
[SCOUTFS_NET_CMD_LOCK] = client_lock,
[SCOUTFS_NET_CMD_LOCK_RECOVER] = client_lock_recover,
[SCOUTFS_NET_CMD_OPEN_INO_MAP] = client_open_ino_map,
};
/*
@@ -434,8 +543,7 @@ static void client_notify_down(struct super_block *sb,
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
if (!atomic_read(&client->shutting_down))
queue_delayed_work(client->workq, &client->connect_dwork, 0);
queue_connect_dwork(sb, client);
}
int scoutfs_client_setup(struct super_block *sb)
@@ -470,7 +578,7 @@ int scoutfs_client_setup(struct super_block *sb)
goto out;
}
queue_delayed_work(client->workq, &client->connect_dwork, 0);
queue_connect_dwork(sb, client);
ret = 0;
out:
@@ -527,7 +635,7 @@ void scoutfs_client_destroy(struct super_block *sb)
if (client == NULL)
return;
if (client->server_term != 0) {
if (client->server_term != 0 && !scoutfs_forcing_unmount(sb)) {
client->sending_farewell = true;
ret = scoutfs_net_submit_request(sb, client->conn,
SCOUTFS_NET_CMD_FAREWELL,
@@ -535,10 +643,8 @@ void scoutfs_client_destroy(struct super_block *sb)
client_farewell_response,
NULL, NULL);
if (ret == 0) {
ret = wait_for_completion_interruptible(
&client->farewell_comp);
if (ret == 0)
ret = client->farewell_error;
wait_for_completion(&client->farewell_comp);
ret = client->farewell_error;
}
if (ret) {
scoutfs_inc_counter(sb, client_farewell_error);
@@ -562,3 +668,11 @@ void scoutfs_client_destroy(struct super_block *sb)
kfree(client);
sbi->client_info = NULL;
}
void scoutfs_client_net_shutdown(struct super_block *sb)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
if (client && client->conn)
scoutfs_net_shutdown(sb, client->conn);
}

View File

@@ -10,7 +10,6 @@ int scoutfs_client_commit_log_trees(struct super_block *sb,
int scoutfs_client_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots);
u64 *scoutfs_client_bulk_alloc(struct super_block *sb);
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_lock_request(struct super_block *sb,
struct scoutfs_net_lock *nl);
@@ -22,7 +21,21 @@ int scoutfs_client_srch_get_compact(struct super_block *sb,
struct scoutfs_srch_compact *sc);
int scoutfs_client_srch_commit_compact(struct super_block *sb,
struct scoutfs_srch_compact *res);
int scoutfs_client_get_log_merge(struct super_block *sb,
struct scoutfs_log_merge_request *req);
int scoutfs_client_commit_log_merge(struct super_block *sb,
struct scoutfs_log_merge_complete *comp);
int scoutfs_client_send_omap_response(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map *map);
int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
struct scoutfs_open_ino_map *map);
int scoutfs_client_get_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_set_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd);
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst);
void scoutfs_client_net_shutdown(struct super_block *sb);
int scoutfs_client_setup(struct super_block *sb);
void scoutfs_client_destroy(struct super_block *sb);

View File

@@ -1,315 +0,0 @@
#ifndef _SCOUTFS_COUNT_H_
#define _SCOUTFS_COUNT_H_
/*
* Our estimate of the space consumed while dirtying items is based on
* the number of items and the size of their values.
*
* The estimate is still a read-only input to entering the transaction.
* We'd like to use it as a clean rhs arg to hold_trans. We define SIC_
* functions which return the count struct. This lets us have a single
* arg and avoid bugs in initializing and passing in struct pointers
from callers. The internal __count functions are used to compose an
* estimate out of the sets of items it manipulates. We program in much
* clearer C instead of in the preprocessor.
*
* Compilers are able to collapse the inlines into constants for the
* constant estimates.
*/
struct scoutfs_item_count {
signed items;
signed vals;
};
/* The caller knows exactly what they're doing. */
static inline const struct scoutfs_item_count SIC_EXACT(signed items,
signed vals)
{
struct scoutfs_item_count cnt = {
.items = items,
.vals = vals,
};
return cnt;
}
/*
* Allocating an inode creates a new set of indexed items.
*/
static inline void __count_alloc_inode(struct scoutfs_item_count *cnt)
{
const int nr_indices = SCOUTFS_INODE_INDEX_NR;
cnt->items += 1 + nr_indices;
cnt->vals += sizeof(struct scoutfs_inode);
}
/*
* Dirtying an inode dirties the inode item and can delete and create
* the full set of indexed items.
*/
static inline void __count_dirty_inode(struct scoutfs_item_count *cnt)
{
const int nr_indices = 2 * SCOUTFS_INODE_INDEX_NR;
cnt->items += 1 + nr_indices;
cnt->vals += sizeof(struct scoutfs_inode);
}
static inline const struct scoutfs_item_count SIC_ALLOC_INODE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_alloc_inode(&cnt);
return cnt;
}
static inline const struct scoutfs_item_count SIC_DIRTY_INODE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_dirty_inode(&cnt);
return cnt;
}
/*
* Directory entries are stored in three items.
*/
static inline void __count_dirents(struct scoutfs_item_count *cnt,
unsigned name_len)
{
cnt->items += 3;
cnt->vals += 3 * offsetof(struct scoutfs_dirent, name[name_len]);
}
static inline void __count_sym_target(struct scoutfs_item_count *cnt,
unsigned size)
{
unsigned nr = DIV_ROUND_UP(size, SCOUTFS_MAX_VAL_SIZE);
cnt->items += nr;
cnt->vals += size;
}
static inline void __count_orphan(struct scoutfs_item_count *cnt)
{
cnt->items += 1;
}
static inline void __count_mknod(struct scoutfs_item_count *cnt,
unsigned name_len)
{
__count_alloc_inode(cnt);
__count_dirents(cnt, name_len);
__count_dirty_inode(cnt);
}
static inline const struct scoutfs_item_count SIC_MKNOD(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_mknod(&cnt, name_len);
return cnt;
}
/*
* Dropping the inode deletes all its items. Potentially enormous numbers
* of items (data mapping, xattrs) are deleted in their own transactions.
*/
static inline const struct scoutfs_item_count SIC_DROP_INODE(int mode,
u64 size)
{
struct scoutfs_item_count cnt = {0,};
if (S_ISLNK(mode))
__count_sym_target(&cnt, size);
__count_dirty_inode(&cnt);
__count_orphan(&cnt);
cnt.vals = 0;
return cnt;
}
static inline const struct scoutfs_item_count SIC_LINK(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_dirents(&cnt, name_len);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
return cnt;
}
/*
* Unlink can add orphan items.
*/
static inline const struct scoutfs_item_count SIC_UNLINK(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_dirents(&cnt, name_len);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_orphan(&cnt);
return cnt;
}
static inline const struct scoutfs_item_count SIC_SYMLINK(unsigned name_len,
unsigned size)
{
struct scoutfs_item_count cnt = {0,};
__count_mknod(&cnt, name_len);
__count_sym_target(&cnt, size);
return cnt;
}
/*
* This assumes the worst case of a rename between directories that
* unlinks an existing target. That'll be worse than the common case
* by a few hundred bytes.
*/
static inline const struct scoutfs_item_count SIC_RENAME(unsigned old_len,
unsigned new_len)
{
struct scoutfs_item_count cnt = {0,};
/* dirty dirs and inodes */
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
/* unlink old and new, link new */
__count_dirents(&cnt, old_len);
__count_dirents(&cnt, new_len);
__count_dirents(&cnt, new_len);
/* orphan the existing target */
__count_orphan(&cnt);
return cnt;
}
/*
* Creating an xattr results in a dirty set of items with values that
* store the xattr header, name, and value. There's always at least one
* item with the header and name. Any previously existing items are
* deleted which dirties their key but removes their value. The two
* sets of items are indexed by different ids so their items don't
* overlap.
*/
static inline const struct scoutfs_item_count SIC_XATTR_SET(unsigned old_parts,
bool creating,
unsigned name_len,
unsigned size)
{
struct scoutfs_item_count cnt = {0,};
unsigned int new_parts;
__count_dirty_inode(&cnt);
if (old_parts)
cnt.items += old_parts;
if (creating) {
new_parts = SCOUTFS_XATTR_NR_PARTS(name_len, size);
cnt.items += new_parts;
cnt.vals += sizeof(struct scoutfs_xattr) + name_len + size;
}
return cnt;
}
/*
* write_begin can have to allocate all the blocks in the page and can
* have to add a big allocation from the server to do so:
* - merge added free extents from the server
* - remove a free extent per block
* - remove an offline extent for every other block
* - add a file extent per block
*/
static inline const struct scoutfs_item_count SIC_WRITE_BEGIN(void)
{
struct scoutfs_item_count cnt = {0,};
unsigned nr_free = (1 + SCOUTFS_BLOCK_SM_PER_PAGE) * 3;
unsigned nr_file = (DIV_ROUND_UP(SCOUTFS_BLOCK_SM_PER_PAGE, 2) +
SCOUTFS_BLOCK_SM_PER_PAGE) * 3;
__count_dirty_inode(&cnt);
cnt.items += nr_free + nr_file;
cnt.vals += nr_file;
return cnt;
}
/*
* Truncating an extent can:
* - delete existing file extent,
* - create two surrounding file extents,
* - add an offline file extent,
* - delete two existing free extents
* - create a merged free extent
*/
static inline const struct scoutfs_item_count
SIC_TRUNC_EXTENT(struct inode *inode)
{
struct scoutfs_item_count cnt = {0,};
unsigned int nr_file = 1 + 2 + 1;
unsigned int nr_free = (2 + 1) * 2;
if (inode)
__count_dirty_inode(&cnt);
cnt.items += nr_file + nr_free;
cnt.vals += nr_file;
return cnt;
}
/*
* Fallocating an extent can, at most:
* - allocate from the server: delete two free and insert merged
* - free an allocated extent: delete one and create two split
* - remove an unallocated file extent: delete one and create two split
- add a fallocated file extent: delete two and insert one merged
*/
static inline const struct scoutfs_item_count SIC_FALLOCATE_ONE(void)
{
struct scoutfs_item_count cnt = {0,};
unsigned int nr_free = ((1 + 2) * 2) * 2;
unsigned int nr_file = (1 + 2) * 2;
__count_dirty_inode(&cnt);
cnt.items += nr_free + nr_file;
cnt.vals += nr_file;
return cnt;
}
/*
* ioc_setattr_more can dirty the inode and add a single offline extent.
*/
static inline const struct scoutfs_item_count SIC_SETATTR_MORE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_dirty_inode(&cnt);
cnt.items++;
return cnt;
}
#endif

View File

@@ -20,17 +20,21 @@
EXPAND_COUNTER(alloc_list_freed_hi) \
EXPAND_COUNTER(alloc_move) \
EXPAND_COUNTER(alloc_moved_extent) \
EXPAND_COUNTER(alloc_stale_cached_list_block) \
EXPAND_COUNTER(block_cache_access) \
EXPAND_COUNTER(alloc_stale_list_block) \
EXPAND_COUNTER(block_cache_access_update) \
EXPAND_COUNTER(block_cache_alloc_failure) \
EXPAND_COUNTER(block_cache_alloc_page_order) \
EXPAND_COUNTER(block_cache_alloc_virt) \
EXPAND_COUNTER(block_cache_end_io_error) \
EXPAND_COUNTER(block_cache_forget) \
EXPAND_COUNTER(block_cache_free) \
EXPAND_COUNTER(block_cache_invalidate) \
EXPAND_COUNTER(block_cache_lru_move) \
EXPAND_COUNTER(block_cache_free_work) \
EXPAND_COUNTER(block_cache_remove_stale) \
EXPAND_COUNTER(block_cache_shrink) \
EXPAND_COUNTER(block_cache_shrink_next) \
EXPAND_COUNTER(block_cache_shrink_recent) \
EXPAND_COUNTER(block_cache_shrink_remove) \
EXPAND_COUNTER(block_cache_shrink_restart) \
EXPAND_COUNTER(btree_compact_values) \
EXPAND_COUNTER(btree_compact_values_enomem) \
EXPAND_COUNTER(btree_delete) \
@@ -40,9 +44,18 @@
EXPAND_COUNTER(btree_insert) \
EXPAND_COUNTER(btree_leaf_item_hash_search) \
EXPAND_COUNTER(btree_lookup) \
EXPAND_COUNTER(btree_merge) \
EXPAND_COUNTER(btree_merge_alloc_low) \
EXPAND_COUNTER(btree_merge_delete) \
EXPAND_COUNTER(btree_merge_delta_combined) \
EXPAND_COUNTER(btree_merge_delta_null) \
EXPAND_COUNTER(btree_merge_dirty_limit) \
EXPAND_COUNTER(btree_merge_drop_old) \
EXPAND_COUNTER(btree_merge_insert) \
EXPAND_COUNTER(btree_merge_update) \
EXPAND_COUNTER(btree_merge_walk) \
EXPAND_COUNTER(btree_next) \
EXPAND_COUNTER(btree_prev) \
EXPAND_COUNTER(btree_read_error) \
EXPAND_COUNTER(btree_split) \
EXPAND_COUNTER(btree_stale_read) \
EXPAND_COUNTER(btree_update) \
@@ -58,6 +71,8 @@
EXPAND_COUNTER(corrupt_symlink_inode_size) \
EXPAND_COUNTER(corrupt_symlink_missing_item) \
EXPAND_COUNTER(corrupt_symlink_not_null_term) \
EXPAND_COUNTER(data_fallocate_enobufs_retry) \
EXPAND_COUNTER(data_write_begin_enobufs_retry) \
EXPAND_COUNTER(dentry_revalidate_error) \
EXPAND_COUNTER(dentry_revalidate_invalid) \
EXPAND_COUNTER(dentry_revalidate_locked) \
@@ -71,12 +86,15 @@
EXPAND_COUNTER(ext_op_remove) \
EXPAND_COUNTER(forest_bloom_fail) \
EXPAND_COUNTER(forest_bloom_pass) \
EXPAND_COUNTER(forest_bloom_stale) \
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
EXPAND_COUNTER(item_clear_dirty) \
EXPAND_COUNTER(item_create) \
EXPAND_COUNTER(item_delete) \
EXPAND_COUNTER(item_delta) \
EXPAND_COUNTER(item_delta_written) \
EXPAND_COUNTER(item_dirty) \
EXPAND_COUNTER(item_invalidate) \
EXPAND_COUNTER(item_invalidate_page) \
@@ -106,12 +124,8 @@
EXPAND_COUNTER(item_write_dirty) \
EXPAND_COUNTER(lock_alloc) \
EXPAND_COUNTER(lock_free) \
EXPAND_COUNTER(lock_grace_extended) \
EXPAND_COUNTER(lock_grace_set) \
EXPAND_COUNTER(lock_grace_wait) \
EXPAND_COUNTER(lock_grant_request) \
EXPAND_COUNTER(lock_grant_response) \
EXPAND_COUNTER(lock_grant_work) \
EXPAND_COUNTER(lock_invalidate_coverage) \
EXPAND_COUNTER(lock_invalidate_inode) \
EXPAND_COUNTER(lock_invalidate_request) \
@@ -137,28 +151,37 @@
EXPAND_COUNTER(net_recv_invalid_message) \
EXPAND_COUNTER(net_recv_messages) \
EXPAND_COUNTER(net_unknown_request) \
EXPAND_COUNTER(quorum_cycle) \
EXPAND_COUNTER(quorum_elected_leader) \
EXPAND_COUNTER(quorum_election_timeout) \
EXPAND_COUNTER(quorum_failure) \
EXPAND_COUNTER(quorum_read_block) \
EXPAND_COUNTER(quorum_read_block_error) \
EXPAND_COUNTER(orphan_scan) \
EXPAND_COUNTER(orphan_scan_cached) \
EXPAND_COUNTER(orphan_scan_error) \
EXPAND_COUNTER(orphan_scan_item) \
EXPAND_COUNTER(orphan_scan_omap_set) \
EXPAND_COUNTER(orphan_scan_read) \
EXPAND_COUNTER(quorum_elected) \
EXPAND_COUNTER(quorum_fence_error) \
EXPAND_COUNTER(quorum_fence_leader) \
EXPAND_COUNTER(quorum_read_invalid_block) \
EXPAND_COUNTER(quorum_saw_super_leader) \
EXPAND_COUNTER(quorum_timedout) \
EXPAND_COUNTER(quorum_write_block) \
EXPAND_COUNTER(quorum_write_block_error) \
EXPAND_COUNTER(quorum_fenced) \
EXPAND_COUNTER(quorum_recv_error) \
EXPAND_COUNTER(quorum_recv_heartbeat) \
EXPAND_COUNTER(quorum_recv_invalid) \
EXPAND_COUNTER(quorum_recv_resignation) \
EXPAND_COUNTER(quorum_recv_vote) \
EXPAND_COUNTER(quorum_send_heartbeat) \
EXPAND_COUNTER(quorum_send_resignation) \
EXPAND_COUNTER(quorum_send_request) \
EXPAND_COUNTER(quorum_send_vote) \
EXPAND_COUNTER(quorum_server_shutdown) \
EXPAND_COUNTER(quorum_term_follower) \
EXPAND_COUNTER(server_commit_hold) \
EXPAND_COUNTER(server_commit_queue) \
EXPAND_COUNTER(server_commit_worker) \
EXPAND_COUNTER(srch_add_entry) \
EXPAND_COUNTER(srch_compact_dirty_block) \
EXPAND_COUNTER(srch_compact_entry) \
EXPAND_COUNTER(srch_compact_error) \
EXPAND_COUNTER(srch_compact_flush) \
EXPAND_COUNTER(srch_compact_log_page) \
EXPAND_COUNTER(srch_compact_removed_entry) \
EXPAND_COUNTER(srch_inconsistent_ref) \
EXPAND_COUNTER(srch_rotate_log) \
EXPAND_COUNTER(srch_search_log) \
EXPAND_COUNTER(srch_search_log_block) \
@@ -170,6 +193,11 @@
EXPAND_COUNTER(srch_search_xattrs) \
EXPAND_COUNTER(srch_read_stale) \
EXPAND_COUNTER(statfs) \
EXPAND_COUNTER(totl_read_copied) \
EXPAND_COUNTER(totl_read_finalized) \
EXPAND_COUNTER(totl_read_fs) \
EXPAND_COUNTER(totl_read_item) \
EXPAND_COUNTER(totl_read_logged) \
EXPAND_COUNTER(trans_commit_data_alloc_low) \
EXPAND_COUNTER(trans_commit_dirty_meta_full) \
EXPAND_COUNTER(trans_commit_fsync) \

View File

@@ -37,7 +37,6 @@
#include "lock.h"
#include "file.h"
#include "msg.h"
#include "count.h"
#include "ext.h"
#include "util.h"
@@ -208,6 +207,7 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
u64 offset;
s64 ret;
u8 flags;
int err;
int i;
flags = offline ? SEF_OFFLINE : 0;
@@ -247,6 +247,18 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
tr.len = min(ext.len - offset, last - iblock + 1);
tr.flags = ext.flags;
trace_scoutfs_data_extent_truncated(sb, ino, &tr);
ret = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, 0, flags);
if (ret < 0) {
if (WARN_ON_ONCE(ret == -EINVAL)) {
scoutfs_err(sb, "unexpected truncate inconsistency: ino %llu iblock %llu last %llu, start %llu len %llu",
ino, iblock, last, tr.start, tr.len);
}
break;
}
if (tr.map) {
mutex_lock(&datinf->mutex);
ret = scoutfs_free_data(sb, datinf->alloc,
@@ -254,16 +266,16 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
&datinf->data_freed,
tr.map, tr.len);
mutex_unlock(&datinf->mutex);
if (ret < 0)
if (ret < 0) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, tr.map, tr.flags);
if (err < 0)
scoutfs_err(sb, "truncate err %d restoring extent after error %lld: ino %llu start %llu len %llu",
err, ret, ino, tr.start, tr.len);
break;
}
}
trace_scoutfs_data_extent_truncated(sb, ino, &tr);
ret = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, 0, flags);
BUG_ON(ret); /* inconsistent, could prealloc items */
iblock += tr.len;
}
@@ -291,7 +303,6 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
u64 ino, u64 iblock, u64 last, bool offline,
struct scoutfs_lock *lock)
{
struct scoutfs_item_count cnt = SIC_TRUNC_EXTENT(inode);
struct scoutfs_inode_info *si = NULL;
LIST_HEAD(ind_locks);
s64 ret = 0;
@@ -314,10 +325,9 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
while (iblock <= last) {
if (inode)
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks,
true, cnt);
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, true, false);
else
ret = scoutfs_hold_trans(sb, cnt);
ret = scoutfs_hold_trans(sb, false);
if (ret)
break;
@@ -753,13 +763,12 @@ static int scoutfs_write_begin(struct file *file,
goto out;
}
retry:
do {
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &wbd->ind_locks, inode,
true) ?:
scoutfs_inode_index_try_lock_hold(sb, &wbd->ind_locks,
ind_seq,
SIC_WRITE_BEGIN());
scoutfs_inode_index_try_lock_hold(sb, &wbd->ind_locks, ind_seq, true);
} while (ret > 0);
if (ret < 0)
goto out;
@@ -768,17 +777,22 @@ static int scoutfs_write_begin(struct file *file,
flags |= AOP_FLAG_NOFS;
/* generic write_end updates i_size and calls dirty_inode */
ret = scoutfs_dirty_inode_item(inode, wbd->lock);
if (ret == 0)
ret = block_write_begin(mapping, pos, len, flags, pagep,
scoutfs_get_block_write);
if (ret)
ret = scoutfs_dirty_inode_item(inode, wbd->lock) ?:
block_write_begin(mapping, pos, len, flags, pagep,
scoutfs_get_block_write);
if (ret < 0) {
scoutfs_release_trans(sb);
out:
if (ret) {
scoutfs_inode_index_unlock(sb, &wbd->ind_locks);
kfree(wbd);
if (ret == -ENOBUFS) {
/* Retry with a new transaction. */
scoutfs_inc_counter(sb, data_write_begin_enobufs_retry);
goto retry;
}
}
out:
if (ret < 0)
kfree(wbd);
return ret;
}
@@ -816,6 +830,7 @@ static int scoutfs_write_end(struct file *file, struct address_space *mapping,
scoutfs_inode_inc_data_version(inode);
}
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, wbd->lock, &wbd->ind_locks);
scoutfs_inode_queue_writeback(inode);
}
@@ -1007,8 +1022,7 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
while(iblock <= last) {
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false,
SIC_FALLOCATE_ONE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
if (ret)
goto out;
@@ -1018,14 +1032,23 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
end = (iblock + ret) << SCOUTFS_BLOCK_SM_SHIFT;
if (end > offset + len)
end = offset + len;
if (end > i_size_read(inode))
if (end > i_size_read(inode)) {
i_size_write(inode, end);
inode_inc_iversion(inode);
scoutfs_inode_inc_data_version(inode);
}
}
if (ret >= 0)
scoutfs_update_inode_item(inode, lock, &ind_locks);
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
/* txn couldn't meet the request. Let's try with a new txn */
if (ret == -ENOBUFS) {
scoutfs_inc_counter(sb, data_fallocate_enobufs_retry);
continue;
}
if (ret <= 0)
goto out;
@@ -1078,8 +1101,7 @@ int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
}
/* we're updating meta_seq with offline block count */
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false,
SIC_SETATTR_MORE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
if (ret < 0)
goto out;
@@ -1128,7 +1150,8 @@ static void truncate_inode_pages_extent(struct inode *inode, u64 start, u64 len)
*/
#define MOVE_DATA_EXTENTS_PER_HOLD 16
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
u64 byte_len, struct inode *to, u64 to_off)
u64 byte_len, struct inode *to, u64 to_off, bool is_stage,
u64 data_version)
{
struct scoutfs_inode_info *from_si = SCOUTFS_I(from);
struct scoutfs_inode_info *to_si = SCOUTFS_I(to);
@@ -1138,6 +1161,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
struct data_ext_args from_args;
struct data_ext_args to_args;
struct scoutfs_extent ext;
struct timespec cur_time;
LIST_HEAD(locks);
bool done = false;
loff_t from_size;
@@ -1173,6 +1197,11 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
goto out;
}
if (is_stage && (data_version != SCOUTFS_I(to)->data_version)) {
ret = -ESTALE;
goto out;
}
from_iblock = from_off >> SCOUTFS_BLOCK_SM_SHIFT;
count = (byte_len + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
to_iblock = to_off >> SCOUTFS_BLOCK_SM_SHIFT;
@@ -1195,7 +1224,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
/* can't stage once data_version changes */
scoutfs_inode_get_onoff(from, &junk, &from_offline);
scoutfs_inode_get_onoff(to, &junk, &to_offline);
if (from_offline || to_offline) {
if (from_offline || (to_offline && !is_stage)) {
ret = -ENODATA;
goto out;
}
@@ -1224,8 +1253,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
ret = scoutfs_inode_index_start(sb, &seq) ?:
scoutfs_inode_index_prepare(sb, &locks, from, true) ?:
scoutfs_inode_index_prepare(sb, &locks, to, true) ?:
scoutfs_inode_index_try_lock_hold(sb, &locks, seq,
SIC_EXACT(1, 1));
scoutfs_inode_index_try_lock_hold(sb, &locks, seq, false);
if (ret > 0)
continue;
if (ret < 0)
@@ -1240,6 +1268,8 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
/* arbitrarily limit the number of extents per trans hold */
for (i = 0; i < MOVE_DATA_EXTENTS_PER_HOLD; i++) {
struct scoutfs_extent off_ext;
/* find the next extent to move */
ret = scoutfs_ext_next(sb, &data_ext_ops, &from_args,
from_iblock, 1, &ext);
@@ -1268,10 +1298,27 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
to_start = to_iblock + (from_start - from_iblock);
/* insert the new, fails if it overlaps */
ret = scoutfs_ext_insert(sb, &data_ext_ops, &to_args,
to_start, len,
map, ext.flags);
if (is_stage) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &to_args,
to_start, 1, &off_ext);
if (ret)
break;
if (!scoutfs_ext_inside(to_start, len, &off_ext) ||
!(off_ext.flags & SEF_OFFLINE)) {
ret = -EINVAL;
break;
}
ret = scoutfs_ext_set(sb, &data_ext_ops, &to_args,
to_start, len,
map, ext.flags);
} else {
/* insert the new, fails if it overlaps */
ret = scoutfs_ext_insert(sb, &data_ext_ops, &to_args,
to_start, len,
map, ext.flags);
}
if (ret < 0)
break;
@@ -1279,10 +1326,18 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
ret = scoutfs_ext_set(sb, &data_ext_ops, &from_args,
from_start, len, 0, 0);
if (ret < 0) {
/* remove inserted new on err */
err = scoutfs_ext_remove(sb, &data_ext_ops,
&to_args, to_start,
len);
if (is_stage) {
/* re-mark dest range as offline */
WARN_ON_ONCE(!(off_ext.flags & SEF_OFFLINE));
err = scoutfs_ext_set(sb, &data_ext_ops, &to_args,
to_start, len,
0, off_ext.flags);
} else {
/* remove inserted new on err */
err = scoutfs_ext_remove(sb, &data_ext_ops,
&to_args, to_start,
len);
}
BUG_ON(err); /* XXX inconsistent */
break;
}
@@ -1310,12 +1365,17 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
up_write(&from_si->extent_sem);
up_write(&to_si->extent_sem);
from->i_ctime = from->i_mtime =
to->i_ctime = to->i_mtime = CURRENT_TIME;
cur_time = CURRENT_TIME;
if (!is_stage) {
to->i_ctime = to->i_mtime = cur_time;
inode_inc_iversion(to);
scoutfs_inode_inc_data_version(to);
scoutfs_inode_set_data_seq(to);
}
from->i_ctime = from->i_mtime = cur_time;
inode_inc_iversion(from);
scoutfs_inode_inc_data_version(from);
scoutfs_inode_inc_data_version(to);
scoutfs_inode_set_data_seq(from);
scoutfs_inode_set_data_seq(to);
scoutfs_update_inode_item(from, from_lock, &locks);
scoutfs_update_inode_item(to, to_lock, &locks);
@@ -1801,13 +1861,17 @@ int scoutfs_data_prepare_commit(struct super_block *sb)
return ret;
}
u64 scoutfs_data_alloc_free_bytes(struct super_block *sb)
/*
* Return true if the data allocator is lower than the caller's
* requirement and we haven't been told by the server that we're out of
* free extents.
*/
bool scoutfs_data_alloc_should_refill(struct super_block *sb, u64 blocks)
{
DECLARE_DATA_INFO(sb, datinf);
return scoutfs_dalloc_total_len(&datinf->dalloc) <<
SCOUTFS_BLOCK_SM_SHIFT;
return (scoutfs_dalloc_total_len(&datinf->dalloc) < blocks) &&
!(le32_to_cpu(datinf->dalloc.root.flags) & SCOUTFS_ALLOC_FLAG_LOW);
}
int scoutfs_data_setup(struct super_block *sb)

View File

@@ -38,13 +38,6 @@ struct scoutfs_data_wait {
.err = 0, \
}
struct scoutfs_traced_extent {
u64 iblock;
u64 count;
u64 blkno;
u8 flags;
};
extern const struct address_space_operations scoutfs_file_aops;
extern const struct file_operations scoutfs_file_fops;
struct scoutfs_alloc;
@@ -59,7 +52,8 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len);
int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
struct scoutfs_lock *lock);
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
u64 byte_len, struct inode *to, u64 to_off);
u64 byte_len, struct inode *to, u64 to_off, bool to_stage,
u64 data_version);
int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
u8 sef, u8 op, struct scoutfs_data_wait *ow,
@@ -85,7 +79,7 @@ void scoutfs_data_init_btrees(struct super_block *sb,
void scoutfs_data_get_btrees(struct super_block *sb,
struct scoutfs_log_trees *lt);
int scoutfs_data_prepare_commit(struct super_block *sb);
u64 scoutfs_data_alloc_free_bytes(struct super_block *sb);
bool scoutfs_data_alloc_should_refill(struct super_block *sb, u64 blocks);
int scoutfs_data_setup(struct super_block *sb);
void scoutfs_data_destroy(struct super_block *sb);

View File

@@ -30,6 +30,8 @@
#include "item.h"
#include "lock.h"
#include "hash.h"
#include "omap.h"
#include "forest.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -134,8 +136,8 @@ static int alloc_dentry_info(struct dentry *dentry)
{
struct dentry_info *di;
/* XXX read mb? */
if (dentry->d_fsdata)
smp_rmb();
if (dentry->d_op == &scoutfs_dentry_ops)
return 0;
di = kmem_cache_zalloc(dentry_info_cache, GFP_NOFS);
@@ -147,6 +149,7 @@ static int alloc_dentry_info(struct dentry *dentry)
spin_lock(&dentry->d_lock);
if (!dentry->d_fsdata) {
dentry->d_fsdata = di;
smp_wmb();
d_set_d_op(dentry, &scoutfs_dentry_ops);
}
spin_unlock(&dentry->d_lock);
@@ -252,7 +255,7 @@ static u64 dirent_name_hash(const char *name, unsigned int name_len)
((u64)dirent_name_fingerprint(name, name_len) << 32);
}
static u64 dirent_names_equal(const char *a_name, unsigned int a_len,
static bool dirent_names_equal(const char *a_name, unsigned int a_len,
const char *b_name, unsigned int b_len)
{
return a_len == b_len && memcmp(a_name, b_name, a_len) == 0;
@@ -274,8 +277,7 @@ static int lookup_dirent(struct super_block *sb, u64 dir_ino, const char *name,
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
ret = -ENOMEM;
goto out;
return -ENOMEM;
}
init_dirent_key(&key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, 0);
@@ -315,6 +317,52 @@ out:
return ret;
}
/*
* Verify that the caller's dentry still precisely matches our dirent
* items.
*
* The caller has a dentry that the vfs revalidated before they acquired
* their locks. If the dentry is still covered by a lock we immediately
* return 0. If not, we check items and return -ENOENT if a positive
* dentry no longer matches the items or -EEXIST if a negative entry's
* name now has an item.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_lock *lock)
{
struct dentry_info *di = dentry->d_fsdata;
struct scoutfs_dirent dent = {0,};
const char *name;
u64 dentry_ino;
int name_len;
u64 hash;
int ret;
if (scoutfs_lock_is_covered(sb, &di->lock_cov))
return 0;
dentry_ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
name = dentry->d_name.name;
name_len = dentry->d_name.len;
hash = dirent_name_hash(name, name_len);
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
if (ret < 0 && ret != -ENOENT)
return ret;
if (dentry_ino != le64_to_cpu(dent.ino) || di->hash != le64_to_cpu(dent.hash) ||
di->pos != le64_to_cpu(dent.pos)) {
if (dentry_ino)
ret = -ENOENT;
else
ret = -EEXIST;
} else {
ret = 0;
}
return ret;
}
static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
{
struct super_block *sb = dentry->d_sb;
@@ -422,7 +470,7 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
{
struct super_block *sb = dir->i_sb;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent dent;
struct scoutfs_dirent dent = {0,};
struct inode *inode;
u64 ino = 0;
u64 hash;
@@ -450,9 +498,11 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
ret = 0;
} else if (ret == 0) {
ino = le64_to_cpu(dent.ino);
}
if (ret == 0)
update_dentry_info(sb, dentry, le64_to_cpu(dent.hash),
le64_to_cpu(dent.pos), dir_lock);
}
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_READ);
out:
@@ -461,9 +511,20 @@ out:
else if (ino == 0)
inode = NULL;
else
inode = scoutfs_iget(sb, ino);
inode = scoutfs_iget(sb, ino, 0, 0);
return d_splice_alias(inode, dentry);
/*
* We can't splice dir aliases into the dcache. dir entries
* might have changed on other nodes so our dcache could still
* contain them, rather than having been moved in rename. For
* dirs, we use d_materialise_unique to remove any existing
* aliases which must be stale. Our inode numbers aren't reused
* so inodes pointed to by entries can't change types.
*/
if (!IS_ERR_OR_NULL(inode) && S_ISDIR(inode->i_mode))
return d_materialise_unique(dentry, inode);
else
return d_splice_alias(inode, dentry);
}
/*
@@ -478,10 +539,10 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
{
struct inode *inode = file_inode(file);
struct super_block *sb = inode->i_sb;
struct scoutfs_dirent *dent;
struct scoutfs_key key;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent *dent = NULL;
struct scoutfs_key last_key;
struct scoutfs_lock *dir_lock;
struct scoutfs_key key;
int name_len;
u64 pos;
int ret;
@@ -491,8 +552,7 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
ret = -ENOMEM;
goto out;
return -ENOMEM;
}
init_dirent_key(&last_key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
@@ -559,18 +619,17 @@ static int add_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
u64 ino, umode_t mode, struct scoutfs_lock *dir_lock,
struct scoutfs_lock *inode_lock)
{
struct scoutfs_dirent *dent = NULL;
struct scoutfs_key rdir_key;
struct scoutfs_key ent_key;
struct scoutfs_key lb_key;
struct scoutfs_dirent *dent;
bool del_ent = false;
bool del_rdir = false;
bool del_ent = false;
int ret;
dent = alloc_dirent(name_len);
if (!dent) {
ret = -ENOMEM;
goto out;
return -ENOMEM;
}
/* initialize the dent */
@@ -655,9 +714,9 @@ static int del_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
*/
static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
umode_t mode, dev_t rdev,
const struct scoutfs_item_count cnt,
struct scoutfs_lock **dir_lock,
struct scoutfs_lock **inode_lock,
struct scoutfs_lock **orph_lock,
struct list_head *ind_locks)
{
struct super_block *sb = dir->i_sb;
@@ -690,11 +749,17 @@ static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
if (ret)
goto out_unlock;
if (orph_lock) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, ino, orph_lock);
if (ret < 0)
goto out_unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, ind_locks, dir, true) ?:
scoutfs_inode_index_prepare_ino(sb, ind_locks, ino, mode) ?:
scoutfs_inode_index_try_lock_hold(sb, ind_locks, ind_seq, cnt);
scoutfs_inode_index_try_lock_hold(sb, ind_locks, ind_seq, true);
if (ret > 0)
goto retry;
if (ret)
@@ -714,9 +779,13 @@ out_unlock:
if (ret) {
scoutfs_inode_index_unlock(sb, ind_locks);
scoutfs_unlock(sb, *dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, *inode_lock, SCOUTFS_LOCK_WRITE);
*dir_lock = NULL;
scoutfs_unlock(sb, *inode_lock, SCOUTFS_LOCK_WRITE);
*inode_lock = NULL;
if (orph_lock) {
scoutfs_unlock(sb, *orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
*orph_lock = NULL;
}
inode = ERR_PTR(ret);
}
@@ -731,6 +800,7 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
u64 hash;
u64 pos;
@@ -741,10 +811,14 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
inode = lock_hold_create(dir, dentry, mode, rdev,
SIC_MKNOD(dentry->d_name.len),
&dir_lock, &inode_lock, &ind_locks);
&dir_lock, &inode_lock, NULL, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
pos = SCOUTFS_I(dir)->next_readdir_pos++;
@@ -760,6 +834,10 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode->i_mtime = inode->i_atime = inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_mtime;
inode_inc_iversion(dir);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
if (S_ISDIR(mode)) {
inc_nlink(inode);
@@ -803,12 +881,15 @@ static int scoutfs_link(struct dentry *old_dentry,
struct super_block *sb = dir->i_sb;
struct scoutfs_lock *dir_lock;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
LIST_HEAD(ind_locks);
bool del_orphan = false;
u64 dir_size;
u64 ind_seq;
u64 hash;
u64 pos;
int ret;
int err;
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
@@ -831,13 +912,25 @@ static int scoutfs_link(struct dentry *old_dentry,
if (ret)
goto out_unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out_unlock;
dir_size = i_size_read(dir) + dentry->d_name.len;
if (inode->i_nlink == 0) {
del_orphan = true;
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(inode),
&orph_lock);
if (ret < 0)
goto out_unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, dir, false) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_LINK(dentry->d_name.len));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, true);
if (ret > 0)
goto retry;
if (ret)
@@ -847,20 +940,31 @@ retry:
if (ret)
goto out;
if (del_orphan) {
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock);
if (ret)
goto out;
}
pos = SCOUTFS_I(dir)->next_readdir_pos++;
ret = add_entry_items(sb, scoutfs_ino(dir), hash, pos,
dentry->d_name.name, dentry->d_name.len,
scoutfs_ino(inode), inode->i_mode, dir_lock,
inode_lock);
if (ret)
if (ret) {
err = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
WARN_ON_ONCE(err); /* no orphan, might not scan and delete after crash */
goto out;
}
update_dentry_info(sb, dentry, hash, pos, dir_lock);
i_size_write(dir, dir_size);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode->i_ctime = dir->i_mtime;
inc_nlink(inode);
inode_inc_iversion(dir);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -873,6 +977,8 @@ out_unlock:
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
@@ -897,6 +1003,7 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
struct inode *inode = dentry->d_inode;
struct timespec ts = current_kernel_time();
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_lock *dir_lock = NULL;
LIST_HEAD(ind_locks);
u64 ind_seq;
@@ -909,43 +1016,58 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
if (ret)
return ret;
ret = alloc_dentry_info(dentry);
if (ret)
goto unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto unlock;
if (S_ISDIR(inode->i_mode) && i_size_read(inode)) {
ret = -ENOTEMPTY;
goto unlock;
}
if (should_orphan(inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(inode),
&orph_lock);
if (ret < 0)
goto unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, dir, false) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_UNLINK(dentry->d_name.len));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, false);
if (ret > 0)
goto retry;
if (ret)
goto unlock;
if (should_orphan(inode)) {
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
if (ret < 0)
goto out;
}
ret = del_entry_items(sb, scoutfs_ino(dir), dentry_info_hash(dentry),
dentry_info_pos(dentry), scoutfs_ino(inode),
dir_lock, inode_lock);
if (ret)
if (ret) {
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock);
WARN_ON_ONCE(ret); /* should have been dirty */
goto out;
if (should_orphan(inode)) {
/*
* Insert the orphan item before we modify any inode
* metadata so we can gracefully exit should it
* fail.
*/
ret = scoutfs_orphan_inode(inode);
WARN_ON_ONCE(ret); /* XXX returning error but items deleted */
if (ret)
goto out;
}
update_dentry_info(sb, dentry, 0, 0, dir_lock);
dir->i_ctime = ts;
dir->i_mtime = ts;
i_size_write(dir, i_size_read(dir) - dentry->d_name.len);
inode_inc_iversion(dir);
inode_inc_iversion(inode);
inode->i_ctime = ts;
drop_nlink(inode);
@@ -962,6 +1084,7 @@ unlock:
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
@@ -1137,6 +1260,7 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
u64 hash;
u64 pos;
@@ -1154,10 +1278,14 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
return ret;
inode = lock_hold_create(dir, dentry, S_IFLNK|S_IRWXUGO, 0,
SIC_SYMLINK(dentry->d_name.len, name_len),
&dir_lock, &inode_lock, &ind_locks);
&dir_lock, &inode_lock, NULL, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
ret = symlink_item_ops(sb, SYM_CREATE, scoutfs_ino(inode), inode_lock,
symname, name_len);
@@ -1177,9 +1305,13 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode_inc_iversion(dir);
inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_ctime;
i_size_write(inode, name_len);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -1232,10 +1364,10 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
u64 dir_ino, u64 dir_pos,
struct list_head *list)
{
struct scoutfs_link_backref_entry *ent;
struct scoutfs_link_backref_entry *ent = NULL;
struct scoutfs_lock *lock = NULL;
struct scoutfs_key last_key;
struct scoutfs_key key;
struct scoutfs_lock *lock = NULL;
int len;
int ret;
@@ -1455,26 +1587,6 @@ static int verify_ancestors(struct super_block *sb, u64 p1, u64 p2,
return ret;
}
/*
* Make sure that a dirent from the dir to the inode exists at the name.
* The caller has the name locked in the dir.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, const char *name,
unsigned name_len, u64 hash, u64 ino,
struct scoutfs_lock *lock)
{
struct scoutfs_dirent dent;
int ret;
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
if (ret == 0 && le64_to_cpu(dent.ino) != ino)
ret = -ENOENT;
else if (ret == -ENOENT && ino == 0)
ret = 0;
return ret;
}
/*
* The vfs performs checks on cached inodes and dirents before calling
* here. It doesn't hold any locks so all of those checks can be based
@@ -1503,8 +1615,9 @@ static int verify_entry(struct super_block *sb, u64 dir_ino, const char *name,
* from using parent/child locking orders as two groups can have both
* parent and child relationships to each other.
*/
static int scoutfs_rename(struct inode *old_dir, struct dentry *old_dentry,
struct inode *new_dir, struct dentry *new_dentry)
static int scoutfs_rename_common(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
{
struct super_block *sb = old_dir->i_sb;
struct inode *old_inode = old_dentry->d_inode;
@@ -1514,6 +1627,7 @@ static int scoutfs_rename(struct inode *old_dir, struct dentry *old_dentry,
struct scoutfs_lock *new_dir_lock = NULL;
struct scoutfs_lock *old_inode_lock = NULL;
struct scoutfs_lock *new_inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct timespec now;
bool ins_new = false;
bool del_new = false;
@@ -1568,16 +1682,25 @@ static int scoutfs_rename(struct inode *old_dir, struct dentry *old_dentry,
}
/* make sure that the entries assumed by the argument still exist */
ret = verify_entry(sb, scoutfs_ino(old_dir), old_dentry->d_name.name,
old_dentry->d_name.len, old_hash,
scoutfs_ino(old_inode), old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry->d_name.name,
new_dentry->d_name.len, new_hash,
new_inode ? scoutfs_ino(new_inode) : 0,
new_dir_lock);
ret = alloc_dentry_info(old_dentry) ?:
alloc_dentry_info(new_dentry) ?:
verify_entry(sb, scoutfs_ino(old_dir), old_dentry, old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry, new_dir_lock);
if (ret)
goto out_unlock;
if ((flags & RENAME_NOREPLACE) && (new_inode != NULL)) {
ret = -EEXIST;
goto out_unlock;
}
if (should_orphan(new_inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(new_inode),
&orph_lock);
if (ret < 0)
goto out_unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, old_dir, false) ?:
@@ -1586,9 +1709,7 @@ retry:
scoutfs_inode_index_prepare(sb, &ind_locks, new_dir, false)) ?:
(new_inode == NULL ? 0 :
scoutfs_inode_index_prepare(sb, &ind_locks, new_inode, false)) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_RENAME(old_dentry->d_name.len,
new_dentry->d_name.len));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, true);
if (ret > 0)
goto retry;
if (ret)
@@ -1639,7 +1760,7 @@ retry:
ins_old = true;
if (should_orphan(new_inode)) {
ret = scoutfs_orphan_inode(new_inode);
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(new_inode), orph_lock);
if (ret)
goto out;
}
@@ -1679,6 +1800,13 @@ retry:
if (new_inode)
old_inode->i_ctime = now;
inode_inc_iversion(old_dir);
inode_inc_iversion(old_inode);
if (new_dir != old_dir)
inode_inc_iversion(new_dir);
if (new_inode)
inode_inc_iversion(new_inode);
scoutfs_update_inode_item(old_dir, old_dir_lock, &ind_locks);
scoutfs_update_inode_item(old_inode, old_inode_lock, &ind_locks);
if (new_dir != old_dir)
@@ -1743,10 +1871,28 @@ out_unlock:
scoutfs_unlock(sb, old_dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, new_dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, rename_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
static int scoutfs_rename(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry)
{
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, 0);
}
static int scoutfs_rename2(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
{
if (flags & ~RENAME_NOREPLACE)
return -EINVAL;
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, flags);
}
#ifdef KC_FMODE_KABI_ITERATE
/* we only need this to set the iterate flag for kabi :/ */
static int scoutfs_dir_open(struct inode *inode, struct file *file)
@@ -1756,6 +1902,55 @@ static int scoutfs_dir_open(struct inode *inode, struct file *file)
}
#endif
static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
int ret;
if (dentry->d_name.len > SCOUTFS_NAME_LEN)
return -ENAMETOOLONG;
inode = lock_hold_create(dir, dentry, mode, 0,
&dir_lock, &inode_lock, &orph_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
if (ret < 0) {
iput(inode);
goto out; /* XXX returning error but items created */
}
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
si->crtime = inode->i_mtime;
insert_inode_hash(inode);
ihold(inode); /* need to update inode modifications in d_tmpfile */
d_tmpfile(dentry, inode);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
scoutfs_inode_index_unlock(sb, &ind_locks);
iput(inode);
out:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
const struct file_operations scoutfs_dir_fops = {
.KC_FOP_READDIR = scoutfs_readdir,
#ifdef KC_FMODE_KABI_ITERATE
@@ -1766,7 +1961,10 @@ const struct file_operations scoutfs_dir_fops = {
.llseek = generic_file_llseek,
};
const struct inode_operations scoutfs_dir_iops = {
const struct inode_operations_wrapper scoutfs_dir_iops = {
.ops = {
.lookup = scoutfs_lookup,
.mknod = scoutfs_mknod,
.create = scoutfs_create,
@@ -1783,6 +1981,9 @@ const struct inode_operations scoutfs_dir_iops = {
.removexattr = scoutfs_removexattr,
.symlink = scoutfs_symlink,
.permission = scoutfs_permission,
},
.tmpfile = scoutfs_tmpfile,
.rename2 = scoutfs_rename2,
};
void scoutfs_dir_exit(void)


@@ -5,7 +5,7 @@
#include "lock.h"
extern const struct file_operations scoutfs_dir_fops;
extern const struct inode_operations scoutfs_dir_iops;
extern const struct inode_operations_wrapper scoutfs_dir_iops;
extern const struct inode_operations scoutfs_symlink_iops;
struct scoutfs_link_backref_entry {
@@ -14,7 +14,7 @@ struct scoutfs_link_backref_entry {
u64 dir_pos;
u16 name_len;
struct scoutfs_dirent dent;
/* the full name is allocated and stored in dent.name[0] */
/* the full name is allocated and stored in dent.name[] */
};
int scoutfs_dir_get_backref_path(struct super_block *sb, u64 ino, u64 dir_ino,


@@ -81,7 +81,7 @@ static struct dentry *scoutfs_fh_to_dentry(struct super_block *sb,
trace_scoutfs_fh_to_dentry(sb, fh_type, sfid);
if (scoutfs_valid_fileid(fh_type))
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino));
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino), 0, SCOUTFS_IGF_LINKED);
return d_obtain_alias(inode);
}
@@ -100,7 +100,7 @@ static struct dentry *scoutfs_fh_to_parent(struct super_block *sb,
if (scoutfs_valid_fileid(fh_type) &&
fh_type == FILEID_SCOUTFS_WITH_PARENT)
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino));
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino), 0, SCOUTFS_IGF_LINKED);
return d_obtain_alias(inode);
}
@@ -123,7 +123,7 @@ static struct dentry *scoutfs_get_parent(struct dentry *child)
scoutfs_dir_free_backref_path(sb, &list);
trace_scoutfs_get_parent(sb, inode, ino);
inode = scoutfs_iget(sb, ino);
inode = scoutfs_iget(sb, ino, 0, SCOUTFS_IGF_LINKED);
return d_obtain_alias(inode);
}


@@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/fs.h>
#include "msg.h"
#include "ext.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -38,7 +39,7 @@ static bool ext_overlap(struct scoutfs_extent *ext, u64 start, u64 len)
return !(e_end < start || ext->start > end);
}
static bool ext_inside(u64 start, u64 len, struct scoutfs_extent *out)
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out)
{
u64 in_end = start + len - 1;
u64 out_end = out->start + out->len - 1;
@@ -191,6 +192,9 @@ int scoutfs_ext_insert(struct super_block *sb, struct scoutfs_ext_ops *ops,
/* inserting extent must not overlap */
if (found.len && ext_overlap(&ins, found.start, found.len)) {
if (ops->insert_overlap_warn)
scoutfs_err(sb, "inserting extent %llu.%llu overlaps existing %llu.%llu",
start, len, found.start, found.len);
ret = -EINVAL;
goto out;
}
@@ -241,7 +245,9 @@ int scoutfs_ext_remove(struct super_block *sb, struct scoutfs_ext_ops *ops,
goto out;
/* removed extent must be entirely within found */
if (!ext_inside(start, len, &found)) {
if (!scoutfs_ext_inside(start, len, &found)) {
scoutfs_err(sb, "error removing extent %llu.%llu, isn't inside existing %llu.%llu",
start, len, found.start, found.len);
ret = -EINVAL;
goto out;
}
@@ -341,7 +347,7 @@ int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
if (ret == 0 && ext_overlap(&found, start, len)) {
/* set extent must be entirely within found */
if (!ext_inside(start, len, &found)) {
if (!scoutfs_ext_inside(start, len, &found)) {
ret = -EINVAL;
goto out;
}
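For reference, scoutfs_ext_inside() answers whether the range [start, start + len) lies entirely within the found extent, which is why both remove and set refuse partial coverage. A quick worked example with made-up numbers:
/*
 * Illustrative only: with found = { .start = 100, .len = 10 }, i.e.
 * blocks 100..109,
 *
 *   scoutfs_ext_inside(102, 4,  &found)  -> true   (102..105 is inside)
 *   scoutfs_ext_inside(98,  4,  &found)  -> false  (starts before 100)
 *   scoutfs_ext_inside(105, 10, &found)  -> false  (runs past 109)
 */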


@@ -15,6 +15,8 @@ struct scoutfs_ext_ops {
u64 start, u64 len, u64 map, u8 flags);
int (*remove)(struct super_block *sb, void *arg, u64 start, u64 len,
u64 map, u8 flags);
bool insert_overlap_warn;
};
bool scoutfs_ext_can_merge(struct scoutfs_extent *left,
@@ -31,5 +33,6 @@ int scoutfs_ext_alloc(struct super_block *sb, struct scoutfs_ext_ops *ops,
struct scoutfs_extent *ext);
int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags);
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out);
#endif

kmod/src/fence.c (new file, 480 lines)

@@ -0,0 +1,480 @@
/*
* Copyright (C) 2019 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include <linux/device.h>
#include <linux/timer.h>
#include <asm/barrier.h>
#include "super.h"
#include "msg.h"
#include "sysfs.h"
#include "server.h"
#include "fence.h"
/*
* Fencing ensures that a given mount can no longer write to the
* metadata or data devices. It's necessary to ensure that it's safe to
* give another mount access to a resource that is currently owned by a
* mount that has stopped responding.
*
* Fencing is performed in collaboration between the currently elected
* quorum leader mount and userspace running on its host. The kernel
* creates fencing requests as it notices that mounts have stopped
* participating. The fence requests are published as directories in
* sysfs. Userspace agents watch for directories, take action, and
* write to files in the directory to indicate that the mount has been
* fenced. Once the mount is fenced the server can reclaim the
* resources previously held by the fenced mount.
*
* The fence requests contain metadata identifying the specific instance
* of the mount that needs to be fenced. This lets a fencing agent
* ensure that a specific mount has been fenced without necessarily
* destroying the node that was hosting it. Maybe the node had rebooted
* and the mount is no longer there, maybe the mount can be force
* unmounted, maybe the node can be configured to isolate the mount from
* the devices.
*
* The fencing mechanism is asynchronous and can fail but the server
* cannot make progress until it completes. If a fence request times
* out the server shuts down in the hope that another instance of a
* server might have more luck fencing a non-responsive mount.
*
* Sources of fencing are fundamentally anchored in shared persistent
* state. It is possible, though unlikely, that servers can fence a
* node and then themselves fail, leaving the next server to try and
* fence the mount again.
*/
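To make the sysfs handshake concrete, here is a minimal userspace sketch of a fencing agent completing one request. It is illustrative only and not part of this change: the request directory path, the isolation helper, and the exit codes are assumptions; only the rid/ipv4_addr/fenced/error attribute names come from the code that follows.
/* Illustrative userspace sketch; not part of this patch. */
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* any write to "fenced" or "error" completes the request */
static int write_flag(const char *req_dir, const char *name)
{
        char path[PATH_MAX];
        FILE *f;

        snprintf(path, sizeof(path), "%s/%s", req_dir, name);
        f = fopen(path, "w");
        if (!f)
                return -errno;
        fputs("1\n", f);
        fclose(f);
        return 0;
}

int main(int argc, char **argv)
{
        char addr[64] = "";
        char path[PATH_MAX];
        const char *req_dir;
        FILE *f;
        int fenced_ok;

        if (argc < 2)
                return 1;
        /* assumed layout: a watcher passes the request directory,
         * e.g. <sysfs sb dir>/fence/<rid>, as the only argument */
        req_dir = argv[1];

        snprintf(path, sizeof(path), "%s/ipv4_addr", req_dir);
        f = fopen(path, "r");
        if (f) {
                if (!fgets(addr, sizeof(addr), f))
                        addr[0] = '\0';
                fclose(f);
        }
        printf("fencing mount at %s\n", addr);

        /* site-specific isolation goes here; this helper path is hypothetical */
        fenced_ok = system("/usr/local/sbin/scoutfs-isolate-mount") == 0;

        return write_flag(req_dir, fenced_ok ? "fenced" : "error") ? 2 : 0;
}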
struct fence_info {
struct kset *kset;
struct kobject fence_dir_kobj;
struct workqueue_struct *wq;
wait_queue_head_t waitq;
spinlock_t lock;
struct list_head list;
};
#define DECLARE_FENCE_INFO(sb, name) \
struct fence_info *name = SCOUTFS_SB(sb)->fence_info
struct pending_fence {
struct super_block *sb;
struct scoutfs_sysfs_attrs ssa;
struct list_head entry;
struct timer_list timer;
ktime_t start_kt;
__be32 ipv4_addr;
bool fenced;
bool error;
int reason;
u64 rid;
};
#define FENCE_FROM_KOBJ(kobj) \
container_of(SCOUTFS_SYSFS_ATTRS(kobj), struct pending_fence, ssa)
#define DECLARE_FENCE_FROM_KOBJ(name, kobj) \
struct pending_fence *name = FENCE_FROM_KOBJ(kobj)
static void destroy_fence(struct pending_fence *fence)
{
struct super_block *sb = fence->sb;
scoutfs_sysfs_destroy_attrs(sb, &fence->ssa);
del_timer_sync(&fence->timer);
kfree(fence);
}
static ssize_t elapsed_secs_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
ktime_t now = ktime_get();
struct timeval tv = { 0, };
if (ktime_after(now, fence->start_kt))
tv = ktime_to_timeval(ktime_sub(now, fence->start_kt));
return snprintf(buf, PAGE_SIZE, "%llu", (long long)tv.tv_sec);
}
SCOUTFS_ATTR_RO(elapsed_secs);
static ssize_t fenced_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%u", !!fence->fenced);
}
/*
* any write to the fenced file from userspace indicates that the mount
* has been safely fenced and can no longer write to the shared device.
*/
static ssize_t fenced_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
DECLARE_FENCE_INFO(fence->sb, fi);
if (!fence->fenced) {
del_timer_sync(&fence->timer);
fence->fenced = true;
wake_up(&fi->waitq);
}
return count;
}
SCOUTFS_ATTR_RW(fenced);
static ssize_t error_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%u", !!fence->error);
}
/*
* The fencing agent can tell us that it was unable to fence the given
* mount. We can't continue if the mount can't be isolated, so we shut
* down the server.
*/
static ssize_t error_store(struct kobject *kobj, struct kobj_attribute *attr, const char *buf,
size_t count)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
struct super_block *sb = fence->sb;
DECLARE_FENCE_INFO(fence->sb, fi);
if (!fence->error) {
fence->error = true;
scoutfs_err(sb, "error indicated by fence action for rid %016llx", fence->rid);
wake_up(&fi->waitq);
}
return count;
}
SCOUTFS_ATTR_RW(error);
static ssize_t ipv4_addr_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%pI4", &fence->ipv4_addr);
}
SCOUTFS_ATTR_RO(ipv4_addr);
static ssize_t reason_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
unsigned r = fence->reason;
char *str = "unknown";
static char *reasons[] = {
[SCOUTFS_FENCE_CLIENT_RECOVERY] = "client_recovery",
[SCOUTFS_FENCE_CLIENT_RECONNECT] = "client_reconnect",
[SCOUTFS_FENCE_QUORUM_BLOCK_LEADER] = "quorum_block_leader",
};
if (r < ARRAY_SIZE(reasons) && reasons[r])
str = reasons[r];
return snprintf(buf, PAGE_SIZE, "%s", str);
}
SCOUTFS_ATTR_RO(reason);
static ssize_t rid_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%016llx", fence->rid);
}
SCOUTFS_ATTR_RO(rid);
static struct attribute *fence_attrs[] = {
SCOUTFS_ATTR_PTR(elapsed_secs),
SCOUTFS_ATTR_PTR(fenced),
SCOUTFS_ATTR_PTR(error),
SCOUTFS_ATTR_PTR(ipv4_addr),
SCOUTFS_ATTR_PTR(reason),
SCOUTFS_ATTR_PTR(rid),
NULL,
};
#define FENCE_TIMEOUT_MS (MSEC_PER_SEC * 30)
static void fence_timeout(struct timer_list *timer)
{
struct pending_fence *fence = from_timer(fence, timer, timer);
struct super_block *sb = fence->sb;
DECLARE_FENCE_INFO(sb, fi);
fence->error = true;
scoutfs_err(sb, "fence request for rid %016llx was not serviced in %lums, raising error",
fence->rid, FENCE_TIMEOUT_MS);
wake_up(&fi->waitq);
}
int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int reason)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
int ret;
fence = kzalloc(sizeof(struct pending_fence), GFP_NOFS);
if (!fence) {
ret = -ENOMEM;
goto out;
}
fence->sb = sb;
scoutfs_sysfs_init_attrs(sb, &fence->ssa);
fence->start_kt = ktime_get();
fence->ipv4_addr = ipv4_addr;
fence->fenced = false;
fence->error = false;
fence->reason = reason;
fence->rid = rid;
ret = scoutfs_sysfs_create_attrs_parent(sb, &fi->kset->kobj,
&fence->ssa, fence_attrs,
"%016llx", rid);
if (ret < 0) {
kfree(fence);
goto out;
}
timer_setup(&fence->timer, fence_timeout, 0);
fence->timer.expires = jiffies + msecs_to_jiffies(FENCE_TIMEOUT_MS);
add_timer(&fence->timer);
spin_lock(&fi->lock);
list_add_tail(&fence->entry, &fi->list);
spin_unlock(&fi->lock);
out:
return ret;
}
/*
* Give the caller the rid of the next fence request that has been
* fenced. There's no iteration cursor for returning the "next" request
* because the caller either frees the fence request it's given or shuts
* down.
*/
int scoutfs_fence_next(struct super_block *sb, u64 *rid, int *reason, bool *error)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
int ret = -ENOENT;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->fenced || fence->error) {
*rid = fence->rid;
*reason = fence->reason;
*error = fence->error;
ret = 0;
break;
}
}
spin_unlock(&fi->lock);
return ret;
}
int scoutfs_fence_reason_pending(struct super_block *sb, int reason)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
bool pending = false;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->reason == reason) {
pending = true;
break;
}
}
spin_unlock(&fi->lock);
return pending;
}
int scoutfs_fence_free(struct super_block *sb, u64 rid)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
int ret = -ENOENT;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->rid == rid) {
list_del_init(&fence->entry);
ret = 0;
break;
}
}
spin_unlock(&fi->lock);
if (ret == 0) {
destroy_fence(fence);
wake_up(&fi->waitq);
}
return ret;
}
static bool all_fenced(struct fence_info *fi, bool *error)
{
struct pending_fence *fence;
bool all = true;
*error = false;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->error) {
*error = true;
all = true;
break;
}
if (!fence->fenced) {
all = false;
break;
}
}
spin_unlock(&fi->lock);
return all;
}
/*
* The caller waits for all the current requests to be fenced, but not
* necessarily reclaimed.
*/
int scoutfs_fence_wait_fenced(struct super_block *sb, long timeout_jiffies)
{
DECLARE_FENCE_INFO(sb, fi);
bool error;
long ret;
ret = wait_event_timeout(fi->waitq, all_fenced(fi, &error), timeout_jiffies);
if (ret == 0)
ret = -ETIMEDOUT;
else if (ret > 0)
ret = 0;
else if (error)
ret = -EIO;
return ret;
}
/*
* This must be called early during startup to guarantee that no other
* subsystems can call fence_start while we're waiting for testing
* fence requests to complete.
*/
int scoutfs_fence_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct mount_options *opts = &sbi->opts;
struct fence_info *fi;
int ret;
/* can only fence if we can be elected by quorum */
if (opts->quorum_slot_nr == -1) {
ret = 0;
goto out;
}
fi = kzalloc(sizeof(struct fence_info), GFP_KERNEL);
if (!fi) {
ret = -ENOMEM;
goto out;
}
init_waitqueue_head(&fi->waitq);
spin_lock_init(&fi->lock);
INIT_LIST_HEAD(&fi->list);
sbi->fence_info = fi;
fi->kset = kset_create_and_add("fence", NULL, scoutfs_sysfs_sb_dir(sb));
if (!fi->kset) {
ret = -ENOMEM;
goto out;
}
fi->wq = alloc_workqueue("scoutfs_fence",
WQ_UNBOUND | WQ_NON_REENTRANT, 0);
if (!fi->wq) {
ret = -ENOMEM;
goto out;
}
ret = 0;
out:
if (ret)
scoutfs_fence_destroy(sb);
return ret;
}
/*
* Tear down all pending fence requests because the server is shutting down.
*/
void scoutfs_fence_stop(struct super_block *sb)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
do {
spin_lock(&fi->lock);
fence = list_first_entry_or_null(&fi->list, struct pending_fence, entry);
if (fence)
list_del_init(&fence->entry);
spin_unlock(&fi->lock);
if (fence) {
destroy_fence(fence);
wake_up(&fi->waitq);
}
} while (fence);
}
void scoutfs_fence_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct fence_info *fi = SCOUTFS_SB(sb)->fence_info;
struct pending_fence *fence;
struct pending_fence *tmp;
if (fi) {
if (fi->wq)
destroy_workqueue(fi->wq);
list_for_each_entry_safe(fence, tmp, &fi->list, entry)
destroy_fence(fence);
if (fi->kset)
kset_unregister(fi->kset);
kfree(fi);
sbi->fence_info = NULL;
}
}

kmod/src/fence.h (new file, 20 lines)

@@ -0,0 +1,20 @@
#ifndef _SCOUTFS_FENCE_H_
#define _SCOUTFS_FENCE_H_
enum {
SCOUTFS_FENCE_CLIENT_RECOVERY,
SCOUTFS_FENCE_CLIENT_RECONNECT,
SCOUTFS_FENCE_QUORUM_BLOCK_LEADER,
};
int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int reason);
int scoutfs_fence_next(struct super_block *sb, u64 *rid, int *reason, bool *error);
int scoutfs_fence_reason_pending(struct super_block *sb, int reason);
int scoutfs_fence_free(struct super_block *sb, u64 rid);
int scoutfs_fence_wait_fenced(struct super_block *sb, long timeout_jiffies);
int scoutfs_fence_setup(struct super_block *sb);
void scoutfs_fence_stop(struct super_block *sb);
void scoutfs_fence_destroy(struct super_block *sb);
#endif


@@ -27,8 +27,14 @@
#include "file.h"
#include "inode.h"
#include "per_task.h"
#include "omap.h"
/* TODO: Direct I/O, AIO */
/*
* Start a high level file read. We check for offline extents in the
* read region here so that we only check the extents once. We use the
* dio count to prevent releasing while we're reading after we've
* checked the extents.
*/
ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos)
{
@@ -42,30 +48,32 @@ ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
int ret;
retry:
/* protect checked extents from release */
mutex_lock(&inode->i_mutex);
atomic_inc(&inode->i_dio_count);
mutex_unlock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
/* protect checked extents from stage/release */
mutex_lock(&inode->i_mutex);
atomic_inc(&inode->i_dio_count);
mutex_unlock(&inode->i_mutex);
ret = scoutfs_data_wait_check_iov(inode, iov, nr_segs, pos,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ,
&dw, inode_lock);
if (ret != 0)
goto out;
} else {
WARN_ON_ONCE(true);
}
ret = generic_file_aio_read(iocb, iov, nr_segs, pos);
out:
if (scoutfs_per_task_del(&si->pt_data_lock, &pt_ent))
inode_dio_done(inode);
inode_dio_done(inode);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
if (scoutfs_data_wait_found(&dw)) {


@@ -26,6 +26,7 @@
#include "hash.h"
#include "srch.h"
#include "counters.h"
#include "xattr.h"
#include "scoutfs_trace.h"
/*
@@ -37,9 +38,9 @@
*
* The log btrees are modified by multiple transactions over time so
* there is no consistent ordering relationship between the items in
* different btrees. Each item in a log btree stores a version number
* for the item. Readers check log btrees for the most recent version
* that it should use.
* different btrees. Each item in a log btree stores a seq for the
* item. Readers check log btrees for the most recent seq that it
* should use.
*
* The item cache reads items in bulk from stable btrees, and writes a
* transaction's worth of dirty items into the item log btree.
@@ -52,6 +53,8 @@
*/
struct forest_info {
struct super_block *sb;
struct mutex mutex;
struct scoutfs_alloc *alloc;
struct scoutfs_block_writer *wri;
@@ -60,14 +63,19 @@ struct forest_info {
struct mutex srch_mutex;
struct scoutfs_srch_file srch_file;
struct scoutfs_block *srch_bl;
struct workqueue_struct *workq;
struct delayed_work log_merge_dwork;
atomic64_t inode_count_delta;
};
#define DECLARE_FOREST_INFO(sb, name) \
struct forest_info *name = SCOUTFS_SB(sb)->forest_info
struct forest_refs {
struct scoutfs_btree_ref fs_ref;
struct scoutfs_btree_ref logs_ref;
struct scoutfs_block_ref fs_ref;
struct scoutfs_block_ref logs_ref;
};
/* initialize some refs that initially aren't equal */
@@ -96,20 +104,16 @@ static void calc_bloom_nrs(struct forest_bloom_nrs *bloom,
}
}
static struct scoutfs_block *read_bloom_ref(struct super_block *sb,
struct scoutfs_btree_ref *ref)
static struct scoutfs_block *read_bloom_ref(struct super_block *sb, struct scoutfs_block_ref *ref)
{
struct scoutfs_block *bl;
int ret;
bl = scoutfs_block_read(sb, le64_to_cpu(ref->blkno));
if (IS_ERR(bl))
return bl;
if (!scoutfs_block_consistent_ref(sb, bl, ref->seq, ref->blkno,
SCOUTFS_BLOCK_MAGIC_BLOOM)) {
scoutfs_block_invalidate(sb, bl);
scoutfs_block_put(sb, bl);
return ERR_PTR(-ESTALE);
ret = scoutfs_block_read_ref(sb, ref, SCOUTFS_BLOCK_MAGIC_BLOOM, &bl);
if (ret < 0) {
if (ret == -ESTALE)
scoutfs_inc_counter(sb, forest_bloom_stale);
bl = ERR_PTR(ret);
}
return bl;
@@ -220,25 +224,17 @@ out:
}
struct forest_read_items_data {
bool is_fs;
int fic;
scoutfs_forest_item_cb cb;
void *cb_arg;
};
static int forest_read_items(struct super_block *sb, struct scoutfs_key *key,
static int forest_read_items(struct super_block *sb, struct scoutfs_key *key, u64 seq, u8 flags,
void *val, int val_len, void *arg)
{
struct forest_read_items_data *rid = arg;
struct scoutfs_log_item_value _liv = {0,};
struct scoutfs_log_item_value *liv = &_liv;
if (!rid->is_fs) {
liv = val;
val += sizeof(struct scoutfs_log_item_value);
val_len -= sizeof(struct scoutfs_log_item_value);
}
return rid->cb(sb, key, liv, val, val_len, rid->cb_arg);
return rid->cb(sb, key, seq, flags, val, val_len, rid->fic, rid->cb_arg);
}
/*
@@ -250,19 +246,16 @@ static int forest_read_items(struct super_block *sb, struct scoutfs_key *key,
* that covers all the blocks. Any keys outside of this range can't be
* trusted because we didn't visit all the trees to check their items.
*
* If we hit stale blocks and retry we can call the callback for
* duplicate items. This is harmless because the items are stable while
* the caller holds their cluster lock and the caller has to filter out
* item versions anyway.
* We return -ESTALE if we hit stale blocks to give the caller a chance
* to reset their state and retry with a newer version of the btrees.
*/
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_lock *lock,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg)
{
DECLARE_STALE_TRACKING_SUPER_REFS(prev_refs, refs);
struct forest_read_items_data rid = {
.cb = cb,
.cb_arg = arg,
@@ -274,32 +267,30 @@ int scoutfs_forest_read_items(struct super_block *sb,
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_block *bl;
struct scoutfs_key ltk;
struct scoutfs_key orig_start = *start;
struct scoutfs_key orig_end = *end;
int ret;
int i;
scoutfs_inc_counter(sb, forest_read_items);
calc_bloom_nrs(&bloom, &lock->start);
calc_bloom_nrs(&bloom, bloom_key);
roots = lock->roots;
retry:
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
refs.fs_ref = roots.fs_root.ref;
refs.logs_ref = roots.logs_root.ref;
*start = lock->start;
*end = lock->end;
*start = orig_start;
*end = orig_end;
/* start with fs root items */
rid.is_fs = true;
rid.fic |= FIC_FS_ROOT;
ret = scoutfs_btree_read_items(sb, &roots.fs_root, key, start, end,
forest_read_items, &rid);
if (ret < 0)
goto out;
rid.is_fs = false;
rid.fic &= ~FIC_FS_ROOT;
scoutfs_key_init_log_trees(&ltk, 0, 0);
for (;; scoutfs_key_inc(&ltk)) {
@@ -344,30 +335,40 @@ retry:
scoutfs_inc_counter(sb, forest_bloom_pass);
if ((le64_to_cpu(lt.flags) & SCOUTFS_LOG_TREES_FINALIZED))
rid.fic |= FIC_FINALIZED;
ret = scoutfs_btree_read_items(sb, &lt.item_root, key, start,
end, forest_read_items, &rid);
if (ret < 0)
goto out;
rid.fic &= ~FIC_FINALIZED;
}
ret = 0;
out:
if (ret == -ESTALE) {
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0) {
ret = -EIO;
goto out;
}
prev_refs = refs;
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
goto retry;
}
return ret;
}
/*
* If the items are deltas then combine the src with the destination
* value and store the result in the destination.
*
* Returns:
* -errno: fatal error, no change
* 0: not delta items, no change
* +ve: SCOUTFS_DELTA_ values indicating when dst and/or src can be dropped
*/
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
void *src, int src_len)
{
if (key->sk_zone == SCOUTFS_XATTR_TOTL_ZONE)
return scoutfs_xattr_combine_totl(dst, dst_len, src, src_len);
return 0;
}
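A hedged sketch of how a caller might act on these return values; the helper and its drop flags are invented for illustration and are not part of this patch:
/* hypothetical caller sketch; only the combine call is real */
static int example_combine(struct scoutfs_key *key, void *dst, int dst_len,
                           void *src, int src_len,
                           bool *drop_dst, bool *drop_src)
{
        int ret;

        *drop_dst = false;
        *drop_src = false;

        ret = scoutfs_forest_combine_deltas(key, dst, dst_len, src, src_len);
        if (ret < 0)
                return ret;                     /* fatal error, nothing changed */

        if (ret == SCOUTFS_DELTA_COMBINED) {
                *drop_src = true;               /* src was folded into dst */
        } else if (ret == SCOUTFS_DELTA_COMBINED_NULL) {
                *drop_src = true;               /* combined value carries no data */
                *drop_dst = true;
        }
        /* ret == 0: not delta items, keep both unchanged */
        return 0;
}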
/*
* Make sure that the bloom bits for the lock's start key are all set in
* the current log's bloom block. We record the nr of our log tree in
@@ -381,18 +382,14 @@ out:
int scoutfs_forest_set_bloom_bits(struct super_block *sb,
struct scoutfs_lock *lock)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
DECLARE_FOREST_INFO(sb, finf);
struct scoutfs_block *new_bl = NULL;
struct scoutfs_block *bl = NULL;
struct scoutfs_bloom_block *bb;
struct scoutfs_btree_ref *ref;
struct scoutfs_block_ref *ref;
struct forest_bloom_nrs bloom;
int nr_set = 0;
u64 blkno;
u64 nr;
int ret;
int err;
int i;
nr = le64_to_cpu(finf->our_log.nr);
@@ -410,53 +407,11 @@ int scoutfs_forest_set_bloom_bits(struct super_block *sb,
ref = &finf->our_log.bloom_ref;
if (ref->blkno) {
bl = read_bloom_ref(sb, ref);
if (IS_ERR(bl)) {
ret = PTR_ERR(bl);
goto unlock;
}
bb = bl->data;
}
if (!ref->blkno || !scoutfs_block_writer_is_dirty(sb, bl)) {
ret = scoutfs_alloc_meta(sb, finf->alloc, finf->wri, &blkno);
if (ret < 0)
goto unlock;
new_bl = scoutfs_block_create(sb, blkno);
if (IS_ERR(new_bl)) {
err = scoutfs_free_meta(sb, finf->alloc, finf->wri,
blkno);
BUG_ON(err); /* could have dirtied */
ret = PTR_ERR(new_bl);
goto unlock;
}
if (bl) {
err = scoutfs_free_meta(sb, finf->alloc, finf->wri,
le64_to_cpu(ref->blkno));
BUG_ON(err); /* could have dirtied */
memcpy(new_bl->data, bl->data, SCOUTFS_BLOCK_LG_SIZE);
} else {
memset(new_bl->data, 0, SCOUTFS_BLOCK_LG_SIZE);
}
scoutfs_block_writer_mark_dirty(sb, finf->wri, new_bl);
scoutfs_block_put(sb, bl);
bl = new_bl;
bb = bl->data;
new_bl = NULL;
bb->hdr.magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_BLOOM);
bb->hdr.fsid = super->hdr.fsid;
bb->hdr.blkno = cpu_to_le64(blkno);
prandom_bytes(&bb->hdr.seq, sizeof(bb->hdr.seq));
ref->blkno = bb->hdr.blkno;
ref->seq = bb->hdr.seq;
}
ret = scoutfs_block_dirty_ref(sb, finf->alloc, finf->wri, ref, SCOUTFS_BLOCK_MAGIC_BLOOM,
&bl, 0, NULL);
if (ret < 0)
goto unlock;
bb = bl->data;
for (i = 0; i < ARRAY_SIZE(bloom.nrs); i++) {
if (!test_and_set_bit_le(bloom.nrs[i], bb->bits)) {
@@ -483,29 +438,29 @@ out:
/*
* The caller is committing items in the transaction and has found the
* greatest item version amongst them. We store it in the log_trees root
* greatest item seq amongst them. We store it in the log_trees root
* to send to the server.
*/
void scoutfs_forest_set_max_vers(struct super_block *sb, u64 max_vers)
void scoutfs_forest_set_max_seq(struct super_block *sb, u64 max_seq)
{
DECLARE_FOREST_INFO(sb, finf);
finf->our_log.max_item_vers = cpu_to_le64(max_vers);
finf->our_log.max_item_seq = cpu_to_le64(max_seq);
}
/*
* The server is calling during setup to find the greatest item version
* The server is calling during setup to find the greatest item seq
* amongst all the log tree roots. They have the authoritative current
* super.
*
* Item versions are only used to compare items in log trees, not in the
* main fs tree. All we have to do is find the greatest version amongst
* the log_trees so that new locks will have a write_version greater
* than all the items in the log_trees.
* Item seqs are only used to compare items in log trees, not in the
* main fs tree. All we have to do is find the greatest seq amongst the
* log_trees so that the core seq will have a greater seq than all the
* items in the log_trees.
*/
int scoutfs_forest_get_max_vers(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *vers)
int scoutfs_forest_get_max_seq(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *seq)
{
struct scoutfs_log_trees *lt;
SCOUTFS_BTREE_ITEM_REF(iref);
@@ -513,7 +468,7 @@ int scoutfs_forest_get_max_vers(struct super_block *sb,
int ret;
scoutfs_key_init_log_trees(&ltk, 0, 0);
*vers = 0;
*seq = 0;
for (;; scoutfs_key_inc(&ltk)) {
ret = scoutfs_btree_next(sb, &super->logs_root, &ltk, &iref);
@@ -521,8 +476,7 @@ int scoutfs_forest_get_max_vers(struct super_block *sb,
if (iref.val_len == sizeof(struct scoutfs_log_trees)) {
ltk = *iref.key;
lt = iref.val;
*vers = max(*vers,
le64_to_cpu(lt->max_item_vers));
*seq = max(*seq, le64_to_cpu(lt->max_item_seq));
} else {
ret = -EIO;
}
@@ -571,6 +525,62 @@ int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id)
return ret;
}
void scoutfs_forest_inc_inode_count(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
atomic64_inc(&finf->inode_count_delta);
}
void scoutfs_forest_dec_inode_count(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
atomic64_dec(&finf->inode_count_delta);
}
/*
* Return the total inode count from the super block and all the
* log_btrees it references. This assumes it's working with a block
* reference hierarchy that should be fully consistent. If we see
* ESTALE we've hit persistent corruption.
*/
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count)
{
struct scoutfs_log_trees *lt;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
*inode_count = le64_to_cpu(super->inode_count);
scoutfs_key_init_log_trees(&key, 0, 0);
for (;;) {
ret = scoutfs_btree_next(sb, &super->logs_root, &key, &iref);
if (ret == 0) {
if (iref.val_len == sizeof(*lt)) {
key = *iref.key;
scoutfs_key_inc(&key);
lt = iref.val;
*inode_count += le64_to_cpu(lt->inode_count_delta);
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(&iref);
}
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
else if (ret == -ESTALE)
ret = -EIO;
break;
}
}
return ret;
}
/*
* This is called from transactions as a new transaction opens and is
* serialized with all writers.
@@ -591,7 +601,7 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
memset(&finf->our_log, 0, sizeof(finf->our_log));
finf->our_log.item_root = lt->item_root;
finf->our_log.bloom_ref = lt->bloom_ref;
finf->our_log.max_item_vers = lt->max_item_vers;
finf->our_log.max_item_seq = lt->max_item_seq;
finf->our_log.rid = lt->rid;
finf->our_log.nr = lt->nr;
finf->srch_file = lt->srch_file;
@@ -599,6 +609,8 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
WARN_ON_ONCE(finf->srch_bl); /* committing should have put the block */
finf->srch_bl = NULL;
atomic64_set(&finf->inode_count_delta, le64_to_cpu(lt->inode_count_delta));
trace_scoutfs_forest_init_our_log(sb, le64_to_cpu(lt->rid),
le64_to_cpu(lt->nr),
le64_to_cpu(lt->item_root.ref.blkno),
@@ -621,15 +633,137 @@ void scoutfs_forest_get_btrees(struct super_block *sb,
lt->item_root = finf->our_log.item_root;
lt->bloom_ref = finf->our_log.bloom_ref;
lt->srch_file = finf->srch_file;
lt->max_item_vers = finf->our_log.max_item_vers;
lt->max_item_seq = finf->our_log.max_item_seq;
scoutfs_block_put(sb, finf->srch_bl);
finf->srch_bl = NULL;
lt->inode_count_delta = cpu_to_le64(atomic64_read(&finf->inode_count_delta));
trace_scoutfs_forest_prepare_commit(sb, &lt->item_root.ref,
&lt->bloom_ref);
}
#define LOG_MERGE_DELAY_MS (5 * MSEC_PER_SEC)
/*
* Regularly try to get a log merge request from the server. If we get
* a request we walk the log_trees items to find input trees and pass
* them to btree_merge. All of our work is done in dirty blocks
* allocated from available free blocks that the server gave us. If we
* hit an error then we drop our dirty blocks without writing them and
* send an error flag to the server so they can reclaim our allocators
* and ignore the rest of our work.
*/
static void scoutfs_forest_log_merge_worker(struct work_struct *work)
{
struct forest_info *finf = container_of(work, struct forest_info,
log_merge_dwork.work);
struct super_block *sb = finf->sb;
struct scoutfs_btree_root_head *rhead = NULL;
struct scoutfs_btree_root_head *tmp;
struct scoutfs_log_merge_complete comp;
struct scoutfs_log_merge_request req;
struct scoutfs_log_trees *lt;
struct scoutfs_block_writer wri;
struct scoutfs_alloc alloc;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key next;
struct scoutfs_key key;
unsigned long delay;
LIST_HEAD(inputs);
int ret;
ret = scoutfs_client_get_log_merge(sb, &req);
if (ret < 0)
goto resched;
comp.root = req.root;
comp.start = req.start;
comp.end = req.end;
comp.remain = req.end;
comp.rid = req.rid;
comp.seq = req.seq;
comp.flags = 0;
scoutfs_alloc_init(&alloc, &req.meta_avail, &req.meta_freed);
scoutfs_block_writer_init(sb, &wri);
/* find finalized input log trees within the input seq */
for (scoutfs_key_init_log_trees(&key, 0, 0); ; scoutfs_key_inc(&key)) {
if (!rhead) {
rhead = kmalloc(sizeof(*rhead), GFP_NOFS);
if (!rhead) {
ret = -ENOMEM;
goto out;
}
}
ret = scoutfs_btree_next(sb, &req.logs_root, &key, &iref);
if (ret == 0) {
if (iref.val_len == sizeof(*lt)) {
key = *iref.key;
lt = iref.val;
if (lt->item_root.ref.blkno != 0 &&
(le64_to_cpu(lt->flags) & SCOUTFS_LOG_TREES_FINALIZED) &&
(le64_to_cpu(lt->finalize_seq) < le64_to_cpu(req.input_seq))) {
rhead->root = lt->item_root;
list_add_tail(&rhead->head, &inputs);
rhead = NULL;
}
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(&iref);
}
if (ret < 0) {
if (ret == -ENOENT) {
ret = 0;
break;
}
goto out;
}
}
/* shouldn't be possible, but it's harmless */
if (list_empty(&inputs)) {
ret = 0;
goto out;
}
ret = scoutfs_btree_merge(sb, &alloc, &wri, &req.start, &req.end,
&next, &comp.root, &inputs,
!!(req.flags & cpu_to_le64(SCOUTFS_LOG_MERGE_REQUEST_SUBTREE)),
SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT, 10);
if (ret == -ERANGE) {
comp.remain = next;
le64_add_cpu(&comp.flags, SCOUTFS_LOG_MERGE_COMP_REMAIN);
ret = 0;
}
out:
scoutfs_alloc_prepare_commit(sb, &alloc, &wri);
if (ret == 0)
ret = scoutfs_block_writer_write(sb, &wri);
scoutfs_block_writer_forget_all(sb, &wri);
comp.meta_avail = alloc.avail;
comp.meta_freed = alloc.freed;
if (ret < 0)
le64_add_cpu(&comp.flags, SCOUTFS_LOG_MERGE_COMP_ERROR);
ret = scoutfs_client_commit_log_merge(sb, &comp);
kfree(rhead);
list_for_each_entry_safe(rhead, tmp, &inputs, head)
kfree(rhead);
resched:
delay = ret == 0 ? 0 : msecs_to_jiffies(LOG_MERGE_DELAY_MS);
queue_delayed_work(finf->workq, &finf->log_merge_dwork, delay);
}
int scoutfs_forest_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
@@ -643,10 +777,20 @@ int scoutfs_forest_setup(struct super_block *sb)
}
/* the finf fields will be setup as we open a transaction */
finf->sb = sb;
mutex_init(&finf->mutex);
mutex_init(&finf->srch_mutex);
INIT_DELAYED_WORK(&finf->log_merge_dwork,
scoutfs_forest_log_merge_worker);
sbi->forest_info = finf;
finf->workq = alloc_workqueue("scoutfs_log_merge", WQ_NON_REENTRANT |
WQ_UNBOUND | WQ_HIGHPRI, 0);
if (!finf->workq) {
ret = -ENOMEM;
goto out;
}
ret = 0;
out:
if (ret)
@@ -655,6 +799,24 @@ out:
return 0;
}
void scoutfs_forest_start(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
queue_delayed_work(finf->workq, &finf->log_merge_dwork,
msecs_to_jiffies(LOG_MERGE_DELAY_MS));
}
void scoutfs_forest_stop(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
if (finf && finf->workq) {
cancel_delayed_work_sync(&finf->log_merge_dwork);
destroy_workqueue(finf->workq);
}
}
void scoutfs_forest_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
@@ -662,6 +824,7 @@ void scoutfs_forest_destroy(struct super_block *sb)
if (finf) {
scoutfs_block_put(sb, finf->srch_bl);
kfree(finf);
sbi->forest_info = NULL;
}


@@ -8,29 +8,36 @@ struct scoutfs_block;
#include "btree.h"
/* caller gives an item to the callback */
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_log_item_value *liv,
void *val, int val_len, void *arg);
enum {
FIC_FS_ROOT = (1 << 0),
FIC_FINALIZED = (1 << 1),
};
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb, struct scoutfs_key *key, u64 seq,
u8 flags, void *val, int val_len, int fic, void *arg);
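As a rough illustration of the new callback signature, a reader callback could look like the sketch below. The state struct is invented for the example, and treating the u8 flags argument as the btree item flags (SCOUTFS_ITEM_FLAG_DELETION) is an assumption here rather than something this header spells out.
/* hypothetical scoutfs_forest_item_cb implementation, for illustration */
struct example_item_state {
        u64 newest_seq;
        bool deleted;
};

static int example_item_cb(struct super_block *sb, struct scoutfs_key *key,
                           u64 seq, u8 flags, void *val, int val_len,
                           int fic, void *arg)
{
        struct example_item_state *st = arg;

        /* fic carries source hints: FIC_FS_ROOT, FIC_FINALIZED */
        if (seq <= st->newest_seq)
                return 0;       /* an older version of this key, ignore it */

        st->newest_seq = seq;
        st->deleted = !!(flags & SCOUTFS_ITEM_FLAG_DELETION);  /* assumed flag source */
        return 0;
}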
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *next);
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_lock *lock,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg);
int scoutfs_forest_set_bloom_bits(struct super_block *sb,
struct scoutfs_lock *lock);
void scoutfs_forest_set_max_vers(struct super_block *sb, u64 max_vers);
int scoutfs_forest_get_max_vers(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *vers);
void scoutfs_forest_set_max_seq(struct super_block *sb, u64 max_seq);
int scoutfs_forest_get_max_seq(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *seq);
int scoutfs_forest_insert_list(struct super_block *sb,
struct scoutfs_btree_item_list *lst);
int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id);
void scoutfs_forest_inc_inode_count(struct super_block *sb);
void scoutfs_forest_dec_inode_count(struct super_block *sb);
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count);
void scoutfs_forest_init_btrees(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
@@ -38,7 +45,15 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
void scoutfs_forest_get_btrees(struct super_block *sb,
struct scoutfs_log_trees *lt);
/* > 0 error codes */
#define SCOUTFS_DELTA_COMBINED 1 /* src val was combined, drop src */
#define SCOUTFS_DELTA_COMBINED_NULL 2 /* combined val has no data, drop both */
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
void *src, int src_len);
int scoutfs_forest_setup(struct super_block *sb);
void scoutfs_forest_start(struct super_block *sb);
void scoutfs_forest_stop(struct super_block *sb);
void scoutfs_forest_destroy(struct super_block *sb);
#endif


@@ -1,6 +1,16 @@
#ifndef _SCOUTFS_FORMAT_H_
#define _SCOUTFS_FORMAT_H_
/*
* The format version defines the format of structures on devices,
* structures that are communicated over the wire, and the protocol
* behind the structures.
*/
#define SCOUTFS_FORMAT_VERSION_MIN 1
#define SCOUTFS_FORMAT_VERSION_MIN_STR __stringify(SCOUTFS_FORMAT_VERSION_MIN)
#define SCOUTFS_FORMAT_VERSION_MAX 1
#define SCOUTFS_FORMAT_VERSION_MAX_STR __stringify(SCOUTFS_FORMAT_VERSION_MAX)
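A minimal sketch of how these bounds might be consumed at mount or connection time; the helper is hypothetical, this patch only defines the constants:
/* hypothetical helper; not defined by this patch */
static inline int scoutfs_format_version_supported(__u64 vers)
{
        return vers >= SCOUTFS_FORMAT_VERSION_MIN &&
               vers <= SCOUTFS_FORMAT_VERSION_MAX;
}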
/* statfs(2) f_type */
#define SCOUTFS_SUPER_MAGIC 0x554f4353 /* "SCOU" */
@@ -11,6 +21,7 @@
#define SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK 0x897e4a7d
#define SCOUTFS_BLOCK_MAGIC_SRCH_PARENT 0xb23a2a05
#define SCOUTFS_BLOCK_MAGIC_ALLOC_LIST 0x8a93ac83
#define SCOUTFS_BLOCK_MAGIC_QUORUM 0xbc310868
/*
* The super block, quorum block, and file data allocation granularity
@@ -51,15 +62,19 @@
#define SCOUTFS_SUPER_BLKNO ((64ULL * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* A reasonably large region of aligned quorum blocks follow the super
* block. Each voting cycle reads the entire region so we don't want it
* to be too enormous. 256K seems like a reasonably chunky single IO.
* The number of blocks in the region also determines the number of
* mounts that have a reasonable probability of not overwriting each
* other's random block locations.
* A small number of quorum blocks follow the super block, enough of
* them to match the starting offset of the super block so the region is
* aligned to the power of two that contains it.
*/
#define SCOUTFS_QUORUM_BLKNO ((256ULL * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
#define SCOUTFS_QUORUM_BLOCKS ((256ULL * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
#define SCOUTFS_QUORUM_BLKNO (SCOUTFS_SUPER_BLKNO + 1)
#define SCOUTFS_QUORUM_BLOCKS (SCOUTFS_SUPER_BLKNO - 1)
/*
* Free metadata blocks start after the quorum blocks
*/
#define SCOUTFS_META_DEV_START_BLKNO \
((SCOUTFS_QUORUM_BLKNO + SCOUTFS_QUORUM_BLOCKS) >> \
SCOUTFS_BLOCK_SM_LG_SHIFT)
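To make the layout concrete, a worked example under the assumption that SCOUTFS_BLOCK_SM_SHIFT is 12 (4 KiB small blocks); the shift value isn't shown in this diff, so treat the numbers as illustrative:
/*
 * Assuming SCOUTFS_BLOCK_SM_SHIFT == 12 (4 KiB small blocks):
 *
 *   SCOUTFS_SUPER_BLKNO   = (64 * 1024) >> 12       = 16
 *   SCOUTFS_QUORUM_BLKNO  = SCOUTFS_SUPER_BLKNO + 1 = 17
 *   SCOUTFS_QUORUM_BLOCKS = SCOUTFS_SUPER_BLKNO - 1 = 15
 *
 * so the quorum region spans blocks 17..31 and the combined super plus
 * quorum region ends at the 128 KiB boundary, a power-of-two alignment.
 */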
/*
* Start data on the data device aligned as well.
@@ -78,11 +93,33 @@ struct scoutfs_timespec {
__u8 __pad[4];
};
/* XXX ipv6 */
struct scoutfs_inet_addr {
__le32 addr;
enum scoutfs_inet_family {
SCOUTFS_AF_NONE = 0,
SCOUTFS_AF_IPV4 = 1,
SCOUTFS_AF_IPV6 = 2,
};
struct scoutfs_inet_addr4 {
__le16 family;
__le16 port;
__u8 __pad[2];
__le32 addr;
};
/*
* Not yet supported by code.
*/
struct scoutfs_inet_addr6 {
__le16 family;
__le16 port;
__u8 addr[16];
__le32 flow_info;
__le32 scope_id;
__u8 __pad[4];
};
union scoutfs_inet_addr {
struct scoutfs_inet_addr4 v4;
struct scoutfs_inet_addr6 v6;
};
/*
@@ -98,6 +135,15 @@ struct scoutfs_block_header {
__le64 blkno;
};
/*
* A reference to a block. The corresponding fields in the block_header
* must match after having read the block contents.
*/
struct scoutfs_block_ref {
__le64 blkno;
__le64 seq;
};
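A minimal sketch of the check that comment describes, assuming the block header carries matching blkno and seq fields; the helper name is invented:

/* illustrative only */
static bool block_matches_ref(struct scoutfs_block_header *hdr,
                              struct scoutfs_block_ref *ref)
{
        return hdr->blkno == ref->blkno && hdr->seq == ref->seq;
}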
/*
* scoutfs identifies all file system metadata items by a small key
* struct.
@@ -129,6 +175,11 @@ struct scoutfs_key {
#define sko_rid _sk_first
#define sko_ino _sk_second
/* xattr totl */
#define skxt_a _sk_first
#define skxt_b _sk_second
#define skxt_c _sk_third
/* inode */
#define ski_ino _sk_first
@@ -156,35 +207,16 @@ struct scoutfs_key {
#define sklt_rid _sk_first
#define sklt_nr _sk_second
/* lock clients */
#define sklc_rid _sk_first
/* seqs */
#define skts_trans_seq _sk_first
#define skts_rid _sk_second
/* mounted clients */
#define skmc_rid _sk_first
/* free extents by blkno */
#define skfb_end _sk_second
#define skfb_len _sk_third
/* free extents by len */
#define skfl_neglen _sk_second
#define skfl_blkno _sk_third
struct scoutfs_radix_block {
struct scoutfs_block_header hdr;
union {
struct scoutfs_radix_ref {
__le64 blkno;
__le64 seq;
__le64 sm_total;
__le64 lg_total;
} refs[0];
__le64 bits[0];
};
};
#define skfb_end _sk_first
#define skfb_len _sk_second
/* free extents by order */
#define skfo_revord _sk_first
#define skfo_end _sk_second
#define skfo_len _sk_third
struct scoutfs_avl_root {
__le16 node;
@@ -207,17 +239,12 @@ struct scoutfs_avl_node {
*/
#define SCOUTFS_BTREE_MAX_HEIGHT 20
struct scoutfs_btree_ref {
__le64 blkno;
__le64 seq;
};
/*
* A height of X means that the first block read will have level X-1 and
* the leaves will have level 0.
*/
struct scoutfs_btree_root {
struct scoutfs_btree_ref ref;
struct scoutfs_block_ref ref;
__u8 height;
__u8 __pad[7];
};
@@ -225,11 +252,15 @@ struct scoutfs_btree_root {
struct scoutfs_btree_item {
struct scoutfs_avl_node node;
struct scoutfs_key key;
__le64 seq;
__le16 val_off;
__le16 val_len;
__u8 __pad[4];
__u8 flags;
__u8 __pad[3];
};
#define SCOUTFS_ITEM_FLAG_DELETION (1 << 0)
struct scoutfs_btree_block {
struct scoutfs_block_header hdr;
struct scoutfs_avl_root item_root;
@@ -238,7 +269,7 @@ struct scoutfs_btree_block {
__le16 mid_free_len;
__u8 level;
__u8 __pad[7];
struct scoutfs_btree_item items[0];
struct scoutfs_btree_item items[];
/* leaf blocks have a fixed size item offset hash table at the end */
};
@@ -258,23 +289,19 @@ struct scoutfs_btree_block {
#define SCOUTFS_BTREE_LEAF_ITEM_HASH_BYTES \
(SCOUTFS_BTREE_LEAF_ITEM_HASH_NR * sizeof(__le16))
struct scoutfs_alloc_list_ref {
__le64 blkno;
__le64 seq;
};
/*
* first_nr tracks the nr of the first block in the list and is used for
* allocation sizing. total_nr is the sum of the nr of all the blocks in
* the list and is used for calculating total free block counts.
*/
struct scoutfs_alloc_list_head {
struct scoutfs_alloc_list_ref ref;
struct scoutfs_block_ref ref;
__le64 total_nr;
__le32 first_nr;
__u8 __pad[4];
__le32 flags;
};
/*
* While the main allocator uses extent items in btree blocks, metadata
* allocations for a single transaction are recorded in arrays in
@@ -288,10 +315,10 @@ struct scoutfs_alloc_list_head {
*/
struct scoutfs_alloc_list_block {
struct scoutfs_block_header hdr;
struct scoutfs_alloc_list_ref next;
struct scoutfs_block_ref next;
__le32 start;
__le32 nr;
__le64 blknos[0]; /* naturally aligned for sorting */
__le64 blknos[]; /* naturally aligned for sorting */
};
#define SCOUTFS_ALLOC_LIST_MAX_BLOCKS \
@@ -303,20 +330,28 @@ struct scoutfs_alloc_list_block {
*/
struct scoutfs_alloc_root {
__le64 total_len;
__le32 flags;
__le32 _pad;
struct scoutfs_btree_root root;
};
/* Shared by _alloc_list_head and _alloc_root */
#define SCOUTFS_ALLOC_FLAG_LOW (1U << 0)
/* types of allocators, exposed to alloc_detail ioctl */
#define SCOUTFS_ALLOC_OWNER_NONE 0
#define SCOUTFS_ALLOC_OWNER_SERVER 1
#define SCOUTFS_ALLOC_OWNER_MOUNT 2
#define SCOUTFS_ALLOC_OWNER_SRCH 3
#define SCOUTFS_ALLOC_OWNER_LOG_MERGE 4
struct scoutfs_mounted_client_btree_val {
union scoutfs_inet_addr addr;
__u8 flags;
__u8 __pad[7];
};
#define SCOUTFS_MOUNTED_CLIENT_VOTER (1 << 0)
#define SCOUTFS_MOUNTED_CLIENT_QUORUM (1 << 0)
/*
* srch files are a contiguous run of blocks with compressed entries
@@ -334,15 +369,10 @@ struct scoutfs_srch_entry {
#define SCOUTFS_SRCH_ENTRY_MAX_BYTES (2 + (sizeof(__u64) * 3))
struct scoutfs_srch_ref {
__le64 blkno;
__le64 seq;
};
struct scoutfs_srch_file {
struct scoutfs_srch_entry first;
struct scoutfs_srch_entry last;
struct scoutfs_srch_ref ref;
struct scoutfs_block_ref ref;
__le64 blocks;
__le64 entries;
__u8 height;
@@ -351,13 +381,13 @@ struct scoutfs_srch_file {
struct scoutfs_srch_parent {
struct scoutfs_block_header hdr;
struct scoutfs_srch_ref refs[0];
struct scoutfs_block_ref refs[];
};
#define SCOUTFS_SRCH_PARENT_REFS \
((SCOUTFS_BLOCK_LG_SIZE - \
offsetof(struct scoutfs_srch_parent, refs)) / \
sizeof(struct scoutfs_srch_ref))
sizeof(struct scoutfs_block_ref))
struct scoutfs_srch_block {
struct scoutfs_block_header hdr;
@@ -366,7 +396,7 @@ struct scoutfs_srch_block {
struct scoutfs_srch_entry tail;
__le32 entry_nr;
__le32 entry_bytes;
__u8 entries[0];
__u8 entries[];
};
/*
@@ -419,44 +449,50 @@ struct scoutfs_srch_compact {
/* client -> server: compaction failed */
#define SCOUTFS_SRCH_COMPACT_FLAG_ERROR (1 << 5)
#define SCOUTFS_DATA_ALLOC_MAX_ZONES 1024
#define SCOUTFS_DATA_ALLOC_ZONE_BYTES DIV_ROUND_UP(SCOUTFS_DATA_ALLOC_MAX_ZONES, 8)
#define SCOUTFS_DATA_ALLOC_ZONE_LE64S DIV_ROUND_UP(SCOUTFS_DATA_ALLOC_MAX_ZONES, 64)
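Worked out, these evaluate to a fixed-size bitmap:

/*
 * SCOUTFS_DATA_ALLOC_ZONE_BYTES = DIV_ROUND_UP(1024, 8)  = 128
 * SCOUTFS_DATA_ALLOC_ZONE_LE64S = DIV_ROUND_UP(1024, 64) = 16
 *
 * i.e. data_alloc_zones[] below is a 1024-bit (128 byte) bitmap.
 */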
/*
* XXX I imagine we should rename these now that they've evolved to track
* all the btrees that clients use during a transaction. It's not just
* about item logs, it's about clients making changes to trees.
*
* @get_trans_seq, @commit_trans_seq: This pair of sequence numbers
* determines if a transaction is currently open for the mount that owns
* the log_trees struct. get_trans_seq is advanced by the server as the
* transaction is opened. The server sets commit_trans_seq equal to
* get_trans_seq as the transaction is committed.
*/
struct scoutfs_log_trees {
struct scoutfs_alloc_list_head meta_avail;
struct scoutfs_alloc_list_head meta_freed;
struct scoutfs_btree_root item_root;
struct scoutfs_btree_ref bloom_ref;
struct scoutfs_block_ref bloom_ref;
struct scoutfs_alloc_root data_avail;
struct scoutfs_alloc_root data_freed;
struct scoutfs_srch_file srch_file;
__le64 max_item_vers;
__le64 data_alloc_zone_blocks;
__le64 data_alloc_zones[SCOUTFS_DATA_ALLOC_ZONE_LE64S];
__le64 inode_count_delta;
__le64 get_trans_seq;
__le64 commit_trans_seq;
__le64 max_item_seq;
__le64 finalize_seq;
__le64 rid;
__le64 nr;
__le64 flags;
};
struct scoutfs_log_item_value {
__le64 vers;
__u8 flags;
__u8 __pad[7];
__u8 data[0];
};
#define SCOUTFS_LOG_TREES_FINALIZED (1ULL << 0)
/*
* FS items are limited by the max btree value length with the log item
* value header.
*/
#define SCOUTFS_MAX_VAL_SIZE \
(SCOUTFS_BTREE_MAX_VAL_LEN - sizeof(struct scoutfs_log_item_value))
#define SCOUTFS_LOG_ITEM_FLAG_DELETION (1 << 0)
/* FS items are limited by the max btree value length */
#define SCOUTFS_MAX_VAL_SIZE SCOUTFS_BTREE_MAX_VAL_LEN
struct scoutfs_bloom_block {
struct scoutfs_block_header hdr;
__le64 total_set;
__le64 bits[0];
__le64 bits[];
};
/*
@@ -473,50 +509,122 @@ struct scoutfs_bloom_block {
member_sizeof(struct scoutfs_bloom_block, bits[0]) * 8)
#define SCOUTFS_FOREST_BLOOM_FUNC_BITS (SCOUTFS_BLOCK_LG_SHIFT + 3)
/*
* A private server btree item which records the status of a log merge
* operation that is in progress.
*/
struct scoutfs_log_merge_status {
struct scoutfs_key next_range_key;
__le64 nr_requests;
__le64 nr_complete;
__le64 seq;
};
/*
* A request is sent to the client and stored in a server btree item to
* record resources that would be reclaimed if the client failed. It
* has all the inputs needed for the client to perform its portion of a
* merge.
*/
struct scoutfs_log_merge_request {
struct scoutfs_alloc_list_head meta_avail;
struct scoutfs_alloc_list_head meta_freed;
struct scoutfs_btree_root logs_root;
struct scoutfs_btree_root root;
struct scoutfs_key start;
struct scoutfs_key end;
__le64 input_seq;
__le64 rid;
__le64 seq;
__le64 flags;
};
/* request root is a subtree of the fs root at a parent, which restricts merging modifications */
#define SCOUTFS_LOG_MERGE_REQUEST_SUBTREE (1ULL << 0)
/*
* The output of a client's merge of log btree items into a subtree
* rooted at a parent in the fs_root. The client sends it to the
* server, who stores it in a btree item for later splicing/rebalancing.
*/
struct scoutfs_log_merge_complete {
struct scoutfs_alloc_list_head meta_avail;
struct scoutfs_alloc_list_head meta_freed;
struct scoutfs_btree_root root;
struct scoutfs_key start;
struct scoutfs_key end;
struct scoutfs_key remain;
__le64 rid;
__le64 seq;
__le64 flags;
};
/* merge failed, ignore completion and reclaim stored request */
#define SCOUTFS_LOG_MERGE_COMP_ERROR (1ULL << 0)
/* merge didn't complete range, restart from remain */
#define SCOUTFS_LOG_MERGE_COMP_REMAIN (1ULL << 1)
/*
* Range items record the ranges of the fs keyspace that still need to
* be merged. They're added as a merge starts, removed as requests are
* sent and added back if the request didn't consume its entire range.
*/
struct scoutfs_log_merge_range {
struct scoutfs_key start;
struct scoutfs_key end;
};
struct scoutfs_log_merge_freeing {
struct scoutfs_btree_root root;
struct scoutfs_key key;
__le64 seq;
};
/*
* Keys are first sorted by major key zones.
*/
#define SCOUTFS_INODE_INDEX_ZONE 1
#define SCOUTFS_RID_ZONE 2
#define SCOUTFS_FS_ZONE 3
#define SCOUTFS_LOCK_ZONE 4
#define SCOUTFS_INODE_INDEX_ZONE 4
#define SCOUTFS_ORPHAN_ZONE 8
#define SCOUTFS_XATTR_TOTL_ZONE 12
#define SCOUTFS_FS_ZONE 16
#define SCOUTFS_LOCK_ZONE 20
/* Items only stored in server btrees */
#define SCOUTFS_LOG_TREES_ZONE 6
#define SCOUTFS_LOCK_CLIENTS_ZONE 7
#define SCOUTFS_TRANS_SEQ_ZONE 8
#define SCOUTFS_MOUNTED_CLIENT_ZONE 9
#define SCOUTFS_SRCH_ZONE 10
#define SCOUTFS_FREE_EXTENT_ZONE 11
#define SCOUTFS_LOG_TREES_ZONE 24
#define SCOUTFS_MOUNTED_CLIENT_ZONE 28
#define SCOUTFS_SRCH_ZONE 32
#define SCOUTFS_FREE_EXTENT_BLKNO_ZONE 36
#define SCOUTFS_FREE_EXTENT_ORDER_ZONE 40
/* Items only stored in log merge server btrees */
#define SCOUTFS_LOG_MERGE_STATUS_ZONE 44
#define SCOUTFS_LOG_MERGE_RANGE_ZONE 48
#define SCOUTFS_LOG_MERGE_REQUEST_ZONE 52
#define SCOUTFS_LOG_MERGE_COMPLETE_ZONE 56
#define SCOUTFS_LOG_MERGE_FREEING_ZONE 60
/* inode index zone */
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 1
#define SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE 2
#define SCOUTFS_INODE_INDEX_NR 3 /* don't forget to update */
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 4
#define SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE 8
/* rid zone (also used in server alloc btree) */
#define SCOUTFS_ORPHAN_TYPE 1
/* orphan zone, redundant type used for clarity */
#define SCOUTFS_ORPHAN_TYPE 4
/* fs zone */
#define SCOUTFS_INODE_TYPE 1
#define SCOUTFS_XATTR_TYPE 2
#define SCOUTFS_DIRENT_TYPE 3
#define SCOUTFS_READDIR_TYPE 4
#define SCOUTFS_LINK_BACKREF_TYPE 5
#define SCOUTFS_SYMLINK_TYPE 6
#define SCOUTFS_DATA_EXTENT_TYPE 7
#define SCOUTFS_INODE_TYPE 4
#define SCOUTFS_XATTR_TYPE 8
#define SCOUTFS_DIRENT_TYPE 12
#define SCOUTFS_READDIR_TYPE 16
#define SCOUTFS_LINK_BACKREF_TYPE 20
#define SCOUTFS_SYMLINK_TYPE 24
#define SCOUTFS_DATA_EXTENT_TYPE 28
/* lock zone, only ever found in lock ranges, never in persistent items */
#define SCOUTFS_RENAME_TYPE 1
#define SCOUTFS_RENAME_TYPE 4
/* srch zone, only in server btrees */
#define SCOUTFS_SRCH_LOG_TYPE 1
#define SCOUTFS_SRCH_BLOCKS_TYPE 2
#define SCOUTFS_SRCH_PENDING_TYPE 3
#define SCOUTFS_SRCH_BUSY_TYPE 4
/* free extents in allocator btrees in client and server, by blkno or len */
#define SCOUTFS_FREE_EXTENT_BLKNO_TYPE 1
#define SCOUTFS_FREE_EXTENT_LEN_TYPE 2
#define SCOUTFS_SRCH_LOG_TYPE 4
#define SCOUTFS_SRCH_BLOCKS_TYPE 8
#define SCOUTFS_SRCH_PENDING_TYPE 12
#define SCOUTFS_SRCH_BUSY_TYPE 16
/* file data extents have start and len in key */
struct scoutfs_data_extent_val {
@@ -538,91 +646,172 @@ struct scoutfs_xattr {
__le16 val_len;
__u8 name_len;
__u8 __pad[5];
__u8 name[0];
__u8 name[];
};
/*
* .totl. xattrs are mapped to items. The dotted u64s in the xattr name
* map to the item key. The item value total is the sum of all the
* xattr values. The item value count records the number of xattrs
* contributing to the total and is used when combining logged items to
* determine if totals are being created or destroyed.
*/
struct scoutfs_xattr_totl_val {
__le64 total;
__le64 count;
};
/* XXX does this exist upstream somewhere? */
#define member_sizeof(TYPE, MEMBER) (sizeof(((TYPE *)0)->MEMBER))
#define SCOUTFS_UUID_BYTES 16
/*
* Mounts read all the quorum blocks and write to one random quorum
* block during a cycle. The min cycle time limits the per-mount iop
* load during elections. The random cycle delay makes it less likely
* that mounts will read and write at the same time and miss each
* other's writes. An election only completes if a quorum of mounts
* vote for a leader before any of their elections time out. This is
* made less likely by the probability that mounts will overwrite each
* other's random block locations. The max quorum count limits that
* probability. 9 mounts only have a 55% chance of writing to unique 4k
* blocks in a 256k region. The election timeout is set to include
* enough cycles to usually complete the election. Once a leader is
* elected it spends a number of cycles writing out blocks with itself
* logged as a leader. This reduces the possibility that servers
* will have their log entries overwritten and not be fenced.
*/
#define SCOUTFS_QUORUM_MAX_COUNT 9
#define SCOUTFS_QUORUM_CYCLE_LO_MS 10
#define SCOUTFS_QUORUM_CYCLE_HI_MS 20
#define SCOUTFS_QUORUM_TERM_LO_MS 250
#define SCOUTFS_QUORUM_TERM_HI_MS 500
#define SCOUTFS_QUORUM_ELECTED_LOG_CYCLES 10
#define SCOUTFS_QUORUM_MAX_SLOTS 15
struct scoutfs_quorum_block {
/*
* To elect a leader, members race to have their variable election
* timeouts expire. If they're first to send a vote request with a
* greater term to a majority of waiting members they'll be elected with
* a majority. If the timeouts are too close, the vote may be split and
* everyone will wait for another cycle of variable timeouts to expire.
*
* These determine how long it will take to elect a leader once there's
* no evidence of a server (no leader quorum blocks on mount; heartbeat
* timeout expired.)
*/
#define SCOUTFS_QUORUM_ELECT_MIN_MS 250
#define SCOUTFS_QUORUM_ELECT_VAR_MS 100
/*
* Once a leader is elected they send out heartbeats at regular
* intervals to force members to wait the much longer heartbeat timeout.
* Once the heartbeat timeout expires without receiving a heartbeat they'll
* switch over to performing elections.
*
* These determine how long it could take members to notice that a
* leader has gone silent and start to elect a new leader.
*/
#define SCOUTFS_QUORUM_HB_IVAL_MS 100
#define SCOUTFS_QUORUM_HB_TIMEO_MS (5 * MSEC_PER_SEC)
/*
* A newly elected leader will give fencing some time before giving up and
* shutting down.
*/
#define SCOUTFS_QUORUM_FENCE_TO_MS (15 * MSEC_PER_SEC)
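Putting the constants together gives a rough failover timeline; this is just the arithmetic implied by the defines above:

/*
 * An election waits SCOUTFS_QUORUM_ELECT_MIN_MS plus up to
 * SCOUTFS_QUORUM_ELECT_VAR_MS, so 250-350ms per attempt.  A silent
 * leader is only noticed after SCOUTFS_QUORUM_HB_TIMEO_MS (5s) without
 * any of its 100ms heartbeats, and a newly elected leader then allows
 * SCOUTFS_QUORUM_FENCE_TO_MS (15s) for fencing before giving up.
 */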
struct scoutfs_quorum_message {
__le64 fsid;
__le64 blkno;
__le64 version;
__le64 term;
__le64 write_nr;
__le64 voter_rid;
__le64 vote_for_rid;
__u8 type;
__u8 from;
__u8 __pad[2];
__le32 crc;
__u8 log_nr;
__u8 __pad[3];
struct scoutfs_quorum_log {
__le64 term;
__le64 rid;
struct scoutfs_inet_addr addr;
} log[0];
};
#define SCOUTFS_QUORUM_LOG_MAX \
((SCOUTFS_BLOCK_SM_SIZE - sizeof(struct scoutfs_quorum_block)) / \
sizeof(struct scoutfs_quorum_log))
/* a candidate requests a vote */
#define SCOUTFS_QUORUM_MSG_REQUEST_VOTE 0
/* followers send votes to candidates */
#define SCOUTFS_QUORUM_MSG_VOTE 1
/* elected leaders broadcast heartbeats to delay elections */
#define SCOUTFS_QUORUM_MSG_HEARTBEAT 2
/* leaders broadcast as they leave to break heartbeat timeout */
#define SCOUTFS_QUORUM_MSG_RESIGNATION 3
#define SCOUTFS_QUORUM_MSG_INVALID 4
/*
* The version is currently always 0, but will be used by mounts to
* discover that membership has changed.
*/
struct scoutfs_quorum_config {
__le64 version;
struct scoutfs_quorum_slot {
union scoutfs_inet_addr addr;
} slots[SCOUTFS_QUORUM_MAX_SLOTS];
};
enum {
SCOUTFS_QUORUM_EVENT_BEGIN, /* quorum service starting up */
SCOUTFS_QUORUM_EVENT_TERM, /* updated persistent term */
SCOUTFS_QUORUM_EVENT_ELECT, /* won election */
SCOUTFS_QUORUM_EVENT_FENCE, /* server fenced others */
SCOUTFS_QUORUM_EVENT_STOP, /* server stopped */
SCOUTFS_QUORUM_EVENT_END, /* quorum service shutting down */
SCOUTFS_QUORUM_EVENT_NR,
};
struct scoutfs_quorum_block {
struct scoutfs_block_header hdr;
__le64 write_nr;
struct scoutfs_quorum_block_event {
__le64 write_nr;
__le64 rid;
__le64 term;
struct scoutfs_timespec ts;
} events[SCOUTFS_QUORUM_EVENT_NR];
};
/*
* Tunable options that apply to the entire system. They can be set in
* mkfs or in sysfs files which send an rpc to the server to make the
* change. The super version defines the options that exist.
*
* @set_bits: each bit corresponds to the 64bit option field at that
* offset after set_bits and indicates whether that option is set.
*
* @data_alloc_zone_blocks: if set, the data device is logically divided
* into contiguous zones of this many blocks. Data allocation will try
* and isolate allocated extents for each mount to their own zone. The
* zone size must be larger than the data alloc high water mark and
* large enough such that the number of zones is kept within its static
* limit.
*/
struct scoutfs_volume_options {
__le64 set_bits;
__le64 data_alloc_zone_blocks;
__le64 __future_expansion[63];
};
#define scoutfs_volopt_nr(field) \
((offsetof(struct scoutfs_volume_options, field) - \
(offsetof(struct scoutfs_volume_options, set_bits) + \
member_sizeof(struct scoutfs_volume_options, set_bits))) / sizeof(__le64))
#define scoutfs_volopt_bit(field) \
(1ULL << scoutfs_volopt_nr(field))
#define SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_NR \
scoutfs_volopt_nr(data_alloc_zone_blocks)
#define SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_BIT \
scoutfs_volopt_bit(data_alloc_zone_blocks)
#define SCOUTFS_VOLOPT_EXPANSION_BITS \
(~(scoutfs_volopt_bit(__future_expansion) - 1))
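Tracing the macros for the one defined option shows how the bit numbering falls out of the struct layout:

/*
 * scoutfs_volopt_nr(data_alloc_zone_blocks)
 *   = (offsetof(data_alloc_zone_blocks) - (offsetof(set_bits) + 8)) / 8
 *   = (8 - 8) / 8 = 0
 *
 * so SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_BIT is 1ULL << 0, and
 * SCOUTFS_VOLOPT_EXPANSION_BITS (with scoutfs_volopt_nr(__future_expansion)
 * == 1) masks every bit above that first defined option.
 */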
#define SCOUTFS_FLAG_IS_META_BDEV 0x01
struct scoutfs_super_block {
struct scoutfs_block_header hdr;
__le64 id;
__le64 format_hash;
__le64 fmt_vers;
__le64 flags;
__u8 uuid[SCOUTFS_UUID_BYTES];
__le64 seq;
__le64 next_ino;
__le64 next_trans_seq;
__le64 inode_count;
__le64 total_meta_blocks; /* both static and dynamic */
__le64 first_meta_blkno; /* first dynamically allocated */
__le64 last_meta_blkno;
__le64 total_data_blocks;
__le64 first_data_blkno;
__le64 last_data_blkno;
__le64 quorum_fenced_term;
__le64 quorum_server_term;
__le64 unmount_barrier;
__u8 quorum_count;
__u8 __pad[7];
struct scoutfs_inet_addr server_addr;
struct scoutfs_quorum_config qconf;
struct scoutfs_alloc_root meta_alloc[2];
struct scoutfs_alloc_root data_alloc;
struct scoutfs_alloc_list_head server_meta_avail[2];
struct scoutfs_alloc_list_head server_meta_freed[2];
struct scoutfs_btree_root fs_root;
struct scoutfs_btree_root logs_root;
struct scoutfs_btree_root lock_clients;
struct scoutfs_btree_root trans_seqs;
struct scoutfs_btree_root log_merge;
struct scoutfs_btree_root mounted_clients;
struct scoutfs_btree_root srch_root;
struct scoutfs_volume_options volopt;
};
#define SCOUTFS_ROOT_INO 1
@@ -646,13 +835,6 @@ struct scoutfs_super_block {
*
* @offline_blocks: The number of fixed 4k blocks that could be made
* online by staging.
*
* XXX
* - otime?
* - compat flags?
* - version?
* - generation?
* - be more careful with rdev?
*/
struct scoutfs_inode {
__le64 size;
@@ -663,6 +845,7 @@ struct scoutfs_inode {
__le64 offline_blocks;
__le64 next_readdir_pos;
__le64 next_xattr_id;
__le64 version;
__le32 nlink;
__le32 uid;
__le32 gid;
@@ -672,6 +855,7 @@ struct scoutfs_inode {
struct scoutfs_timespec atime;
struct scoutfs_timespec ctime;
struct scoutfs_timespec mtime;
struct scoutfs_timespec crtime;
};
#define SCOUTFS_INO_FLAG_TRUNCATE 0x1
@@ -695,7 +879,7 @@ struct scoutfs_dirent {
__le64 pos;
__u8 type;
__u8 __pad[7];
__u8 name[0];
__u8 name[];
};
#define SCOUTFS_NAME_LEN 255
@@ -723,6 +907,7 @@ enum scoutfs_dentry_type {
#define SCOUTFS_XATTR_MAX_NAME_LEN 255
#define SCOUTFS_XATTR_MAX_VAL_LEN 65535
#define SCOUTFS_XATTR_MAX_PART_SIZE SCOUTFS_MAX_VAL_SIZE
#define SCOUTFS_XATTR_MAX_TOTL_U64 23 /* octal U64_MAX */
#define SCOUTFS_XATTR_NR_PARTS(name_len, val_len) \
DIV_ROUND_UP(sizeof(struct scoutfs_xattr) + name_len + val_len, \
@@ -746,12 +931,6 @@ enum scoutfs_dentry_type {
* the same server after receiving a greeting response and to a new
* server after failover.
*
* @unmount_barrier: Incremented every time the remaining majority of
* quorum members all agree to leave. The server tells a quorum member
* the value that it's connecting under so that if the client sees the
* value increase in the super block then it knows that the server has
* processed its farewell and can safely unmount.
*
* @rid: The client's random id that was generated once as the mount
* started up. This identifies a specific remote mount across
* connections and servers. It's set to the client's rid in both the
@@ -759,15 +938,14 @@ enum scoutfs_dentry_type {
*/
struct scoutfs_net_greeting {
__le64 fsid;
__le64 format_hash;
__le64 fmt_vers;
__le64 server_term;
__le64 unmount_barrier;
__le64 rid;
__le64 flags;
};
#define SCOUTFS_NET_GREETING_FLAG_FAREWELL (1 << 0)
#define SCOUTFS_NET_GREETING_FLAG_VOTER (1 << 1)
#define SCOUTFS_NET_GREETING_FLAG_QUORUM (1 << 1)
#define SCOUTFS_NET_GREETING_FLAG_INVALID (~(__u64)0 << 2)
/*
@@ -791,7 +969,6 @@ struct scoutfs_net_greeting {
* response messages.
*/
struct scoutfs_net_header {
__le64 clock_sync_id;
__le64 seq;
__le64 recv_seq;
__le64 id;
@@ -800,7 +977,7 @@ struct scoutfs_net_header {
__u8 flags;
__u8 error;
__u8 __pad[3];
__u8 data[0];
__u8 data[];
};
#define SCOUTFS_NET_FLAG_RESPONSE (1 << 0)
@@ -811,13 +988,21 @@ enum scoutfs_net_cmd {
SCOUTFS_NET_CMD_ALLOC_INODES,
SCOUTFS_NET_CMD_GET_LOG_TREES,
SCOUTFS_NET_CMD_COMMIT_LOG_TREES,
SCOUTFS_NET_CMD_SYNC_LOG_TREES,
SCOUTFS_NET_CMD_GET_ROOTS,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
SCOUTFS_NET_CMD_GET_LAST_SEQ,
SCOUTFS_NET_CMD_LOCK,
SCOUTFS_NET_CMD_LOCK_RECOVER,
SCOUTFS_NET_CMD_SRCH_GET_COMPACT,
SCOUTFS_NET_CMD_SRCH_COMMIT_COMPACT,
SCOUTFS_NET_CMD_GET_LOG_MERGE,
SCOUTFS_NET_CMD_COMMIT_LOG_MERGE,
SCOUTFS_NET_CMD_OPEN_INO_MAP,
SCOUTFS_NET_CMD_GET_VOLOPT,
SCOUTFS_NET_CMD_SET_VOLOPT,
SCOUTFS_NET_CMD_CLEAR_VOLOPT,
SCOUTFS_NET_CMD_RESIZE_DEVICES,
SCOUTFS_NET_CMD_STATFS,
SCOUTFS_NET_CMD_FAREWELL,
SCOUTFS_NET_CMD_UNKNOWN,
};
@@ -860,23 +1045,32 @@ struct scoutfs_net_roots {
struct scoutfs_btree_root srch_root;
};
struct scoutfs_net_resize_devices {
__le64 new_total_meta_blocks;
__le64 new_total_data_blocks;
};
struct scoutfs_net_statfs {
__u8 uuid[SCOUTFS_UUID_BYTES];
__le64 free_meta_blocks;
__le64 total_meta_blocks;
__le64 free_data_blocks;
__le64 total_data_blocks;
__le64 inode_count;
};
struct scoutfs_net_lock {
struct scoutfs_key key;
__le64 write_version;
__le64 write_seq;
__u8 old_mode;
__u8 new_mode;
__u8 __pad[6];
};
struct scoutfs_net_lock_grant_response {
struct scoutfs_net_lock nl;
struct scoutfs_net_roots roots;
};
struct scoutfs_net_lock_recover {
__le16 nr;
__u8 __pad[6];
struct scoutfs_net_lock locks[0];
struct scoutfs_net_lock locks[];
};
#define SCOUTFS_NET_LOCK_MAX_RECOVER_NR \
@@ -891,6 +1085,7 @@ enum scoutfs_lock_trace {
SLT_INVALIDATE,
SLT_REQUEST,
SLT_RESPONSE,
SLT_NR,
};
/*
@@ -943,4 +1138,42 @@ enum scoutfs_corruption_sources {
#define SC_NR_LONGS DIV_ROUND_UP(SC_NR_SOURCES, BITS_PER_LONG)
#define SCOUTFS_OPEN_INO_MAP_SHIFT 10
#define SCOUTFS_OPEN_INO_MAP_BITS (1 << SCOUTFS_OPEN_INO_MAP_SHIFT)
#define SCOUTFS_OPEN_INO_MAP_MASK (SCOUTFS_OPEN_INO_MAP_BITS - 1)
#define SCOUTFS_OPEN_INO_MAP_LE64S (SCOUTFS_OPEN_INO_MAP_BITS / 64)
/*
* The request and response conversation is as follows:
*
* client[init] -> server:
* group_nr = G
* req_id = 0 (I)
* server -> client[*]
* group_nr = G
* req_id = R
* client[*] -> server
* group_nr = G (I)
* req_id = R
* bits
* server -> client[init]
* group_nr = G (I)
* req_id = R (I)
* bits
*
* Many of the fields in individual messages are ignored ("I") because
* the net id or the omap req_id can be used to identify the
* conversation. We always include them on the wire to make inspected
* messages easier to follow.
*/
struct scoutfs_open_ino_map_args {
__le64 group_nr;
__le64 req_id;
};
struct scoutfs_open_ino_map {
struct scoutfs_open_ino_map_args args;
__le64 bits[SCOUTFS_OPEN_INO_MAP_LE64S];
};
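A sketch of how a client could test one inode against a returned map using only the defines above; the helper name is invented:

/* illustrative only; the map covers group ino >> SCOUTFS_OPEN_INO_MAP_SHIFT */
static bool open_ino_map_test(struct scoutfs_open_ino_map *map, u64 ino)
{
        u64 bit = ino & SCOUTFS_OPEN_INO_MAP_MASK;

        return !!(le64_to_cpu(map->bits[bit / 64]) & (1ULL << (bit % 64)));
}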
#endif

File diff suppressed because it is too large Load Diff

View File

@@ -4,12 +4,13 @@
#include "key.h"
#include "lock.h"
#include "per_task.h"
#include "count.h"
#include "format.h"
#include "data.h"
struct scoutfs_lock;
#define SCOUTFS_INODE_NR_INDICES 2
struct scoutfs_inode_info {
/* read or initialized for each inode instance */
u64 ino;
@@ -21,6 +22,7 @@ struct scoutfs_inode_info {
u64 online_blocks;
u64 offline_blocks;
u32 flags;
struct timespec crtime;
/*
* Protects per-inode extent items, most particularly readers
@@ -38,8 +40,8 @@ struct scoutfs_inode_info {
*/
struct mutex item_mutex;
bool have_item;
u64 item_majors[SCOUTFS_INODE_INDEX_NR];
u32 item_minors[SCOUTFS_INODE_INDEX_NR];
u64 item_majors[SCOUTFS_INODE_NR_INDICES];
u32 item_minors[SCOUTFS_INODE_NR_INDICES];
/* updated at on each new lock acquisition */
atomic64_t last_refreshed;
@@ -50,7 +52,14 @@ struct scoutfs_inode_info {
struct scoutfs_per_task pt_data_lock;
struct scoutfs_data_waitq data_waitq;
struct rw_semaphore xattr_rwsem;
struct rb_node writeback_node;
struct list_head writeback_entry;
struct scoutfs_lock_coverage ino_lock_cov;
/* drop if i_count hits 0, allows drop while invalidate holds coverage */
bool drop_invalidated;
struct llist_node iput_llnode;
atomic_t iput_count;
struct inode inode;
};
@@ -69,11 +78,13 @@ struct inode *scoutfs_alloc_inode(struct super_block *sb);
void scoutfs_destroy_inode(struct inode *inode);
int scoutfs_drop_inode(struct inode *inode);
void scoutfs_evict_inode(struct inode *inode);
int scoutfs_orphan_inode(struct inode *inode);
void scoutfs_inode_queue_iput(struct inode *inode);
struct inode *scoutfs_iget(struct super_block *sb, u64 ino);
#define SCOUTFS_IGF_LINKED (1 << 0) /* enoent if nlink == 0 */
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf, int igf);
struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino);
void scoutfs_inode_init_key(struct scoutfs_key *key, u64 ino);
void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
u32 minor, u64 ino);
int scoutfs_inode_index_start(struct super_block *sb, u64 *seq);
@@ -83,11 +94,9 @@ int scoutfs_inode_index_prepare_ino(struct super_block *sb,
struct list_head *list, u64 ino,
umode_t mode);
int scoutfs_inode_index_try_lock_hold(struct super_block *sb,
struct list_head *list, u64 seq,
const struct scoutfs_item_count cnt);
struct list_head *list, u64 seq, bool allocing);
int scoutfs_inode_index_lock_hold(struct inode *inode, struct list_head *list,
bool set_data_seq,
const struct scoutfs_item_count cnt);
bool set_data_seq, bool allocing);
void scoutfs_inode_index_unlock(struct super_block *sb, struct list_head *list);
int scoutfs_dirty_inode_item(struct inode *inode, struct scoutfs_lock *lock);
@@ -110,23 +119,24 @@ u64 scoutfs_inode_data_version(struct inode *inode);
void scoutfs_inode_get_onoff(struct inode *inode, s64 *on, s64 *off);
int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock);
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock,
int flags);
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock);
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat);
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
int scoutfs_scan_orphans(struct super_block *sb);
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
void scoutfs_inode_queue_writeback(struct inode *inode);
int scoutfs_inode_walk_writeback(struct super_block *sb, bool write);
u64 scoutfs_last_ino(struct super_block *sb);
void scoutfs_inode_exit(void);
int scoutfs_inode_init(void);
int scoutfs_inode_setup(struct super_block *sb);
void scoutfs_inode_start(struct super_block *sb);
void scoutfs_inode_orphan_stop(struct super_block *sb);
void scoutfs_inode_flush_iput(struct super_block *sb);
void scoutfs_inode_destroy(struct super_block *sb);
#endif

View File

@@ -21,6 +21,7 @@
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/aio.h>
#include <linux/list_sort.h>
#include "format.h"
#include "key.h"
@@ -38,6 +39,8 @@
#include "hash.h"
#include "srch.h"
#include "alloc.h"
#include "server.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
@@ -540,19 +543,17 @@ out:
static long scoutfs_ioc_stat_more(struct file *file, unsigned long arg)
{
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct scoutfs_ioctl_stat_more stm;
if (get_user(stm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
stm.valid_bytes = min_t(u64, stm.valid_bytes,
sizeof(struct scoutfs_ioctl_stat_more));
stm.meta_seq = scoutfs_inode_meta_seq(inode);
stm.data_seq = scoutfs_inode_data_seq(inode);
stm.data_version = scoutfs_inode_data_version(inode);
scoutfs_inode_get_onoff(inode, &stm.online_blocks, &stm.offline_blocks);
stm.crtime_sec = si->crtime.tv_sec;
stm.crtime_nsec = si->crtime.tv_nsec;
if (copy_to_user((void __user *)arg, &stm, stm.valid_bytes))
if (copy_to_user((void __user *)arg, &stm, sizeof(stm)))
return -EFAULT;
return 0;
@@ -616,6 +617,7 @@ static long scoutfs_ioc_data_waiting(struct file *file, unsigned long arg)
static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
{
struct inode *inode = file->f_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_ioctl_setattr_more __user *usm = (void __user *)arg;
struct scoutfs_ioctl_setattr_more sm;
@@ -674,8 +676,7 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
/* setting only so we don't see 0 data seq with nonzero data_version */
set_data_seq = sm.data_version != 0 ? true : false;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq,
SIC_SETATTR_MORE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq, false);
if (ret)
goto unlock;
@@ -685,6 +686,8 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
i_size_write(inode, sm.i_size);
inode->i_ctime.tv_sec = sm.ctime_sec;
inode->i_ctime.tv_nsec = sm.ctime_nsec;
si->crtime.tv_sec = sm.crtime_sec;
si->crtime.tv_nsec = sm.crtime_nsec;
scoutfs_update_inode_item(inode, lock, &ind_locks);
ret = 0;
@@ -867,28 +870,35 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super;
struct scoutfs_ioctl_statfs_more sfm;
int ret;
if (get_user(sfm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super)
return -ENOMEM;
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
sfm.valid_bytes = min_t(u64, sfm.valid_bytes,
sizeof(struct scoutfs_ioctl_statfs_more));
sfm.fsid = le64_to_cpu(super->hdr.fsid);
sfm.rid = sbi->rid;
sfm.total_meta_blocks = le64_to_cpu(super->total_meta_blocks);
sfm.total_data_blocks = le64_to_cpu(super->total_data_blocks);
sfm.reserved_meta_blocks = scoutfs_server_reserved_meta_blocks(sb);
ret = scoutfs_client_get_last_seq(sb, &sfm.committed_seq);
if (ret)
return ret;
goto out;
if (copy_to_user((void __user *)arg, &sfm, sfm.valid_bytes))
return -EFAULT;
return 0;
if (copy_to_user((void __user *)arg, &sfm, sizeof(sfm)))
ret = -EFAULT;
else
ret = 0;
out:
kfree(super);
return ret;
}
struct copy_alloc_detail_args {
@@ -973,12 +983,18 @@ static long scoutfs_ioc_move_blocks(struct file *file, unsigned long arg)
goto out;
}
if (mb.flags & SCOUTFS_IOC_MB_UNKNOWN) {
ret = -EINVAL;
goto out;
}
ret = mnt_want_write_file(file);
if (ret < 0)
goto out;
ret = scoutfs_data_move_blocks(from, mb.from_off, mb.len,
to, mb.to_off);
to, mb.to_off, !!(mb.flags & SCOUTFS_IOC_MB_STAGE),
mb.data_version);
mnt_drop_write_file(file);
out:
fput(from_file);
@@ -986,6 +1002,402 @@ out:
return ret;
}
static long scoutfs_ioc_resize_devices(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_resize_devices __user *urd = (void __user *)arg;
struct scoutfs_ioctl_resize_devices rd;
struct scoutfs_net_resize_devices nrd;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&rd, urd, sizeof(rd))) {
ret = -EFAULT;
goto out;
}
nrd.new_total_meta_blocks = cpu_to_le64(rd.new_total_meta_blocks);
nrd.new_total_data_blocks = cpu_to_le64(rd.new_total_data_blocks);
ret = scoutfs_client_resize_devices(sb, &nrd);
out:
return ret;
}
struct xattr_total_entry {
struct rb_node node;
struct scoutfs_ioctl_xattr_total xt;
u64 fs_seq;
u64 fs_total;
u64 fs_count;
u64 fin_seq;
u64 fin_total;
s64 fin_count;
u64 log_seq;
u64 log_total;
s64 log_count;
};
static int cmp_xt_entry_name(const struct xattr_total_entry *a,
const struct xattr_total_entry *b)
{
return scoutfs_cmp_u64s(a->xt.name[0], b->xt.name[0]) ?:
scoutfs_cmp_u64s(a->xt.name[1], b->xt.name[1]) ?:
scoutfs_cmp_u64s(a->xt.name[2], b->xt.name[2]);
}
/*
* Record the contribution of the three classes of logged items we can
* see: the item in the fs_root, items from finalized log btrees, and
* items from active log btrees. Once we have the full set the caller
* can decide which of the items contribute to the total it sends to the
* user.
*/
static int read_xattr_total_item(struct super_block *sb, struct scoutfs_key *key,
u64 seq, u8 flags, void *val, int val_len, int fic, void *arg)
{
struct scoutfs_xattr_totl_val *tval = val;
struct xattr_total_entry *ent;
struct xattr_total_entry rd;
struct rb_root *root = arg;
struct rb_node *parent;
struct rb_node **node;
int cmp;
rd.xt.name[0] = le64_to_cpu(key->skxt_a);
rd.xt.name[1] = le64_to_cpu(key->skxt_b);
rd.xt.name[2] = le64_to_cpu(key->skxt_c);
/* find entry matching name */
node = &root->rb_node;
parent = NULL;
cmp = -1;
while (*node) {
parent = *node;
ent = container_of(*node, struct xattr_total_entry, node);
/* sort merge items by key then newest to oldest */
cmp = cmp_xt_entry_name(&rd, ent);
if (cmp < 0)
node = &(*node)->rb_left;
else if (cmp > 0)
node = &(*node)->rb_right;
else
break;
}
/* allocate and insert new node if we need to */
if (cmp != 0) {
ent = kzalloc(sizeof(*ent), GFP_KERNEL);
if (!ent)
return -ENOMEM;
memcpy(&ent->xt.name, &rd.xt.name, sizeof(ent->xt.name));
rb_link_node(&ent->node, parent, node);
rb_insert_color(&ent->node, root);
}
if (fic & FIC_FS_ROOT) {
ent->fs_seq = seq;
ent->fs_total = le64_to_cpu(tval->total);
ent->fs_count = le64_to_cpu(tval->count);
} else if (fic & FIC_FINALIZED) {
ent->fin_seq = seq;
ent->fin_total += le64_to_cpu(tval->total);
ent->fin_count += le64_to_cpu(tval->count);
} else {
ent->log_seq = seq;
ent->log_total += le64_to_cpu(tval->total);
ent->log_count += le64_to_cpu(tval->count);
}
scoutfs_inc_counter(sb, totl_read_item);
return 0;
}
/* these are always _safe, node stores next */
#define for_each_xt_ent(ent, node, root) \
for (node = rb_first(root); \
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
node = rb_next(node), 1); )
#define for_each_xt_ent_reverse(ent, node, root) \
for (node = rb_last(root); \
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
node = rb_prev(node), 1); )
static void free_xt_ent(struct rb_root *root, struct xattr_total_entry *ent)
{
rb_erase(&ent->node, root);
kfree(ent);
}
static void free_all_xt_ents(struct rb_root *root)
{
struct xattr_total_entry *ent;
struct rb_node *node;
for_each_xt_ent(ent, node, root)
free_xt_ent(root, ent);
}
/*
* Starting from the caller's pos_name, copy the names, totals, and
* counts for the .totl. tagged xattrs in the system sorted by their
* name until the user's buffer is full. This only sees xattrs that
* have been committed. It doesn't use locking to force commits and
* block writers so it can be a little bit out of date with respect to
* dirty xattrs in memory across the system.
*
* Our reader has to be careful because the log btree merging code can
* write partial results to the fs_root. This means that a reader can
* see both cases where new finalized logs should be applied to the old
* fs items and where old finalized logs have already been applied to
* the partially merged fs items. Currently active logged items are
* always applied on top of all cases.
*
* These cases are differentiated with a combination of sequence numbers
* in items, the count of contributing xattrs, and a flag
* differentiating finalized and active logged items. This lets us
* recognize all cases, including when finalized logs were merged and
* deleted the fs item.
*
* We're allocating a tracking struct for each totl name we see while
* traversing the item btrees. The forest reader is providing the items
* it finds in leaf blocks that contain the search key. In the worst
* case all of these blocks are full and none of the items overlap. At
* most, figure order a thousand names per mount. But in practice many
* of these factors fall away: leaf blocks aren't fill, leaf items
* overlap, there aren't finalized log btrees, and not all mounts are
* actively changing totals. We're much more likely to only read a
* leaf block's worth of totals that have been long since merged into
* the fs_root.
*/
static long scoutfs_ioc_read_xattr_totals(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_read_xattr_totals __user *urxt = (void __user *)arg;
struct scoutfs_ioctl_read_xattr_totals rxt;
struct scoutfs_ioctl_xattr_total __user *uxt;
struct xattr_total_entry *ent;
struct scoutfs_key key;
struct scoutfs_key bloom_key;
struct scoutfs_key start;
struct scoutfs_key end;
struct rb_root root = RB_ROOT;
struct rb_node *node;
int count = 0;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&rxt, urxt, sizeof(rxt))) {
ret = -EFAULT;
goto out;
}
uxt = (void __user *)rxt.totals_ptr;
if ((rxt.totals_ptr & (sizeof(__u64) - 1)) ||
(rxt.totals_bytes < sizeof(struct scoutfs_ioctl_xattr_total))) {
ret = -EINVAL;
goto out;
}
scoutfs_key_set_zeros(&bloom_key);
bloom_key.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_xattr_init_totl_key(&start, rxt.pos_name);
while (rxt.totals_bytes >= sizeof(struct scoutfs_ioctl_xattr_total)) {
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
if (scoutfs_key_compare(&start, &end) > 0)
break;
key = start;
ret = scoutfs_forest_read_items(sb, &key, &bloom_key, &start, &end,
read_xattr_total_item, &root);
if (ret < 0) {
if (ret == -ESTALE) {
free_all_xt_ents(&root);
continue;
}
goto out;
}
if (RB_EMPTY_ROOT(&root))
break;
/* trim totals that fall outside of the consistent range */
for_each_xt_ent(ent, node, &root) {
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
if (scoutfs_key_compare(&key, &start) < 0) {
free_xt_ent(&root, ent);
} else {
break;
}
}
for_each_xt_ent_reverse(ent, node, &root) {
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
if (scoutfs_key_compare(&key, &end) > 0) {
free_xt_ent(&root, ent);
} else {
break;
}
}
/* copy resulting unique non-zero totals to userspace */
for_each_xt_ent(ent, node, &root) {
if (rxt.totals_bytes < sizeof(ent->xt))
break;
/* start with the fs item if we have it */
if (ent->fs_seq != 0) {
ent->xt.total = ent->fs_total;
ent->xt.count = ent->fs_count;
scoutfs_inc_counter(sb, totl_read_fs);
}
/* apply finalized logs if they're newer or creating */
if (((ent->fs_seq != 0) && (ent->fin_seq > ent->fs_seq)) ||
((ent->fs_seq == 0) && (ent->fin_count > 0))) {
ent->xt.total += ent->fin_total;
ent->xt.count += ent->fin_count;
scoutfs_inc_counter(sb, totl_read_finalized);
}
/* always apply active logs which must be newer than fs and finalized */
if (ent->log_seq > 0) {
ent->xt.total += ent->log_total;
ent->xt.count += ent->log_count;
scoutfs_inc_counter(sb, totl_read_logged);
}
if (ent->xt.total != 0 || ent->xt.count != 0) {
if (copy_to_user(uxt, &ent->xt, sizeof(ent->xt))) {
ret = -EFAULT;
goto out;
}
uxt++;
rxt.totals_bytes -= sizeof(ent->xt);
count++;
scoutfs_inc_counter(sb, totl_read_copied);
}
free_xt_ent(&root, ent);
}
/* continue after the last possible key read */
start = end;
scoutfs_key_inc(&start);
}
ret = 0;
out:
free_all_xt_ents(&root);
return ret ?: count;
}
static long scoutfs_ioc_get_allocated_inos(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_get_allocated_inos __user *ugai = (void __user *)arg;
struct scoutfs_ioctl_get_allocated_inos gai;
struct scoutfs_lock *lock = NULL;
struct scoutfs_key key;
struct scoutfs_key end;
u64 __user *uinos;
u64 bytes;
u64 ino;
int nr;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&gai, ugai, sizeof(gai))) {
ret = -EFAULT;
goto out;
}
if ((gai.inos_ptr & (sizeof(__u64) - 1)) || (gai.inos_bytes < sizeof(__u64))) {
ret = -EINVAL;
goto out;
}
scoutfs_inode_init_key(&key, gai.start_ino);
scoutfs_inode_init_key(&end, gai.start_ino | SCOUTFS_LOCK_INODE_GROUP_MASK);
uinos = (void __user *)gai.inos_ptr;
bytes = gai.inos_bytes;
nr = 0;
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, gai.start_ino, &lock);
if (ret < 0)
goto out;
while (bytes >= sizeof(*uinos)) {
ret = scoutfs_item_next(sb, &key, &end, NULL, 0, lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
break;
}
if (key.sk_zone != SCOUTFS_FS_ZONE) {
ret = 0;
break;
}
/* all fs items are owned by allocated inodes, and _first is always ino */
ino = le64_to_cpu(key._sk_first);
if (put_user(ino, uinos)) {
ret = -EFAULT;
break;
}
uinos++;
bytes -= sizeof(*uinos);
if (++nr == INT_MAX)
break;
scoutfs_inode_init_key(&key, ino + 1);
}
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
out:
return ret ?: nr;
}
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
@@ -1015,6 +1427,12 @@ long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
return scoutfs_ioc_alloc_detail(file, arg);
case SCOUTFS_IOC_MOVE_BLOCKS:
return scoutfs_ioc_move_blocks(file, arg);
case SCOUTFS_IOC_RESIZE_DEVICES:
return scoutfs_ioc_resize_devices(file, arg);
case SCOUTFS_IOC_READ_XATTR_TOTALS:
return scoutfs_ioc_read_xattr_totals(file, arg);
case SCOUTFS_IOC_GET_ALLOCATED_INOS:
return scoutfs_ioc_get_allocated_inos(file, arg);
}
return -ENOTTY;

View File

@@ -13,8 +13,7 @@
* This is enforced by pahole scripting in external build environments.
*/
/* XXX I have no idea how these are chosen. */
#define SCOUTFS_IOCTL_MAGIC 's'
#define SCOUTFS_IOCTL_MAGIC 0xE8 /* arbitrarily chosen hole in ioctl-number.rst */
/*
* Packed scoutfs keys rarely cross the ioctl boundary so we have a
@@ -88,7 +87,7 @@ enum scoutfs_ino_walk_seq_type {
* Adds entries to the user's buffer for each inode that is found in the
* given index between the first and last positions.
*/
#define SCOUTFS_IOC_WALK_INODES _IOR(SCOUTFS_IOCTL_MAGIC, 1, \
#define SCOUTFS_IOC_WALK_INODES _IOW(SCOUTFS_IOCTL_MAGIC, 1, \
struct scoutfs_ioctl_walk_inodes)
/*
@@ -163,11 +162,11 @@ struct scoutfs_ioctl_ino_path_result {
__u64 dir_pos;
__u16 path_bytes;
__u8 _pad[6];
__u8 path[0];
__u8 path[];
};
/* Get a single path from the root to the given inode number */
#define SCOUTFS_IOC_INO_PATH _IOR(SCOUTFS_IOCTL_MAGIC, 2, \
#define SCOUTFS_IOC_INO_PATH _IOW(SCOUTFS_IOCTL_MAGIC, 2, \
struct scoutfs_ioctl_ino_path)
/*
@@ -215,23 +214,16 @@ struct scoutfs_ioctl_stage {
/*
* Give the user inode fields that are not otherwise visible. statx()
* isn't always available and xattrs are relatively expensive.
*
* @valid_bytes stores the number of bytes that are valid in the
* structure. The caller sets this to the size of the struct that they
* understand. The kernel then fills and copies back the min of the
* size they and the user caller understand. The user can tell if a
* field is set if all of its bytes are within the valid_bytes that the
* kernel set on return.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_stat_more {
__u64 valid_bytes;
__u64 meta_seq;
__u64 data_seq;
__u64 data_version;
__u64 online_blocks;
__u64 offline_blocks;
__u64 crtime_sec;
__u32 crtime_nsec;
__u8 _pad[4];
};
#define SCOUTFS_IOC_STAT_MORE _IOR(SCOUTFS_IOCTL_MAGIC, 5, \
@@ -259,15 +251,16 @@ struct scoutfs_ioctl_data_waiting {
__u8 _pad[6];
};
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U8_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U64_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING _IOR(SCOUTFS_IOCTL_MAGIC, 6, \
#define SCOUTFS_IOC_DATA_WAITING _IOW(SCOUTFS_IOCTL_MAGIC, 6, \
struct scoutfs_ioctl_data_waiting)
/*
* If i_size is set then data_version must be non-zero. If the offline
* flag is set then i_size must be set and a offline extent will be
* created from offset 0 to i_size.
* created from offset 0 to i_size. The time fields are always applied
* to the inode.
*/
struct scoutfs_ioctl_setattr_more {
__u64 data_version;
@@ -275,11 +268,12 @@ struct scoutfs_ioctl_setattr_more {
__u64 flags;
__u64 ctime_sec;
__u32 ctime_nsec;
__u8 _pad[4];
__u32 crtime_nsec;
__u64 crtime_sec;
};
#define SCOUTFS_IOC_SETATTR_MORE_OFFLINE (1 << 0)
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U8_MAX << 1)
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U64_MAX << 1)
#define SCOUTFS_IOC_SETATTR_MORE _IOW(SCOUTFS_IOCTL_MAGIC, 7, \
struct scoutfs_ioctl_setattr_more)
@@ -291,8 +285,8 @@ struct scoutfs_ioctl_listxattr_hidden {
__u32 hash_pos;
};
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOR(SCOUTFS_IOCTL_MAGIC, 8, \
struct scoutfs_ioctl_listxattr_hidden)
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOWR(SCOUTFS_IOCTL_MAGIC, 8, \
struct scoutfs_ioctl_listxattr_hidden)
/*
* Return the inode numbers of inodes which might contain the given
@@ -345,32 +339,23 @@ struct scoutfs_ioctl_search_xattrs {
/* set in output_flags if returned inodes reached last_ino */
#define SCOUTFS_SEARCH_XATTRS_OFLAG_END (1ULL << 0)
#define SCOUTFS_IOC_SEARCH_XATTRS _IOR(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_search_xattrs)
#define SCOUTFS_IOC_SEARCH_XATTRS _IOW(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_search_xattrs)
/*
* Give the user information about the filesystem.
*
* @valid_bytes stores the number of bytes that are valid in the
* structure. The caller sets this to the size of the struct that they
* understand. The kernel then fills and copies back the min of the
* sizes that it and the user caller understand. The user can tell if a
* field is set if all of its bytes are within the valid_bytes that the
* kernel set on return.
*
* @committed_seq: All seqs up to and including this seq have been
* committed. Can be compared with meta_seq and data_seq from inodes in
* stat_more to discover if changes have been committed to disk.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_statfs_more {
__u64 valid_bytes;
__u64 fsid;
__u64 rid;
__u64 committed_seq;
__u64 total_meta_blocks;
__u64 total_data_blocks;
__u64 reserved_meta_blocks;
};
#define SCOUTFS_IOC_STATFS_MORE _IOR(SCOUTFS_IOCTL_MAGIC, 10, \
@@ -391,7 +376,7 @@ struct scoutfs_ioctl_data_wait_err {
__s64 err;
};
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOR(SCOUTFS_IOCTL_MAGIC, 11, \
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOW(SCOUTFS_IOCTL_MAGIC, 11, \
struct scoutfs_ioctl_data_wait_err)
@@ -410,7 +395,7 @@ struct scoutfs_ioctl_alloc_detail_entry {
__u8 __pad[6];
};
#define SCOUTFS_IOC_ALLOC_DETAIL _IOR(SCOUTFS_IOCTL_MAGIC, 12, \
#define SCOUTFS_IOC_ALLOC_DETAIL _IOW(SCOUTFS_IOCTL_MAGIC, 12, \
struct scoutfs_ioctl_alloc_detail)
/*
@@ -418,12 +403,13 @@ struct scoutfs_ioctl_alloc_detail_entry {
* on the same file system.
*
* from_fd specifies the source file and the ioctl is called on the
* destination file. Both files must have write access. from_off
* specifies the byte offset in the source, to_off is the byte offset in
* the destination, and len is the number of bytes in the region to
* move. All of the offsets and lengths must be in multiples of 4KB,
* except in the case where the from_off + len ends at the i_size of the
* source file.
* destination file. Both files must have write access. from_off specifies
* the byte offset in the source, to_off is the byte offset in the
* destination, and len is the number of bytes in the region to move. All of
* the offsets and lengths must be in multiples of 4KB, except in the case
* where the from_off + len ends at the i_size of the source
* file. data_version is only used when STAGE flag is set (see below). flags
* field is currently only used to optionally specify STAGE behavior.
*
* This interface only moves extents which are block granular, it does
* not perform RMW of sub-block byte extents and it does not overwrite
@@ -435,33 +421,142 @@ struct scoutfs_ioctl_alloc_detail_entry {
* i_size. The i_size update will maintain final partial blocks in the
* source.
*
* It will return an error if either of the files have offline extents.
* It will return 0 when all of the extents in the source region have
* been moved to the destination. Moving extents updates the ctime,
* mtime, meta_seq, data_seq, and data_version fields of both the source
* and destination inodes. If an error is returned then partial
* If STAGE flag is not set, it will return an error if either of the files
* have offline extents. It will return 0 when all of the extents in the
* source region have been moved to the destination. Moving extents updates
* the ctime, mtime, meta_seq, data_seq, and data_version fields of both the
* source and destination inodes. If an error is returned then partial
* progress may have been made and inode fields may have been updated.
*
* If STAGE flag is set, as above except destination range must be in an
* offline extent. Fields are updated only for source inode.
*
* Errors specific to this interface include:
*
* EINVAL: from_off, len, or to_off aren't a multiple of 4KB; the source
* and destination files are the same inode; either the source or
* destination is not a regular file; the destination file has
* an existing overlapping extent.
* an existing overlapping extent (if STAGE flag not set); the
* destination range is not in an offline extent (if STAGE set).
* EOVERFLOW: either from_off + len or to_off + len exceeded 64bits.
* EBADF: from_fd isn't a valid open file descriptor.
* EXDEV: the source and destination files are in different filesystems.
* EISDIR: either the source or destination is a directory.
* ENODATA: either the source or destination file have offline extents.
* ENODATA: either the source or destination file have offline extents and
* STAGE flag is not set.
* ESTALE: data_version does not match destination data_version.
*/
#define SCOUTFS_IOC_MB_STAGE (1 << 0)
#define SCOUTFS_IOC_MB_UNKNOWN (U64_MAX << 1)
struct scoutfs_ioctl_move_blocks {
__u64 from_fd;
__u64 from_off;
__u64 len;
__u64 to_off;
__u64 data_version;
__u64 flags;
};
#define SCOUTFS_IOC_MOVE_BLOCKS _IOR(SCOUTFS_IOCTL_MAGIC, 13, \
#define SCOUTFS_IOC_MOVE_BLOCKS _IOW(SCOUTFS_IOCTL_MAGIC, 13, \
struct scoutfs_ioctl_move_blocks)
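A hedged userspace sketch of the staging case described above; the descriptors, offsets, length, and data_version are placeholders the caller would already have:

struct scoutfs_ioctl_move_blocks mb = {
        .from_fd      = from_fd,        /* file holding the staged data */
        .from_off     = 0,
        .to_off       = offline_off,    /* must fall in an offline extent */
        .len          = region_len,     /* multiple of 4KB, or ends at source i_size */
        .data_version = dv,             /* must match the destination's data_version */
        .flags        = SCOUTFS_IOC_MB_STAGE,
};

if (ioctl(dest_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb) < 0)
        perror("SCOUTFS_IOC_MOVE_BLOCKS");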
struct scoutfs_ioctl_resize_devices {
__u64 new_total_meta_blocks;
__u64 new_total_data_blocks;
};
#define SCOUTFS_IOC_RESIZE_DEVICES \
_IOW(SCOUTFS_IOCTL_MAGIC, 14, struct scoutfs_ioctl_resize_devices)
#define SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR 3
/*
* Copy global totals of .totl. xattr value payloads to the user. This
* only sees xattrs which have been committed and this doesn't force
* commits of dirty data throughout the system. This can be out of sync
* by the amount of xattrs that can be dirty in open transactions that
* are being built throughout the system.
*
* pos_name: The array name of the first total that can be returned.
* The name is derived from the key of the xattrs that contribute to the
* total. For xattrs with a .totl.1.2.3 key, the pos_name[] should be
* {1, 2, 3}.
*
* totals_ptr: An aligned pointer to a buffer that will be filled with
* an array of scoutfs_ioctl_xattr_total structs for each total copied.
*
* totals_bytes: The size of the buffer in bytes. There must be room
* for at least one struct element so that returning 0 can promise that
* there were no more totals to copy after the pos_name.
*
* The number of copied elements is returned and 0 is returned if there
* were no more totals to copy after the pos_name.
*
* In addition to the usual errnos (EIO, EINVAL, EPERM, EFAULT) this
* adds:
*
* EINVAL: The totals_ buffer was not aligned or was not large enough
* for a single struct entry.
*/
struct scoutfs_ioctl_read_xattr_totals {
__u64 pos_name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
__u64 totals_ptr;
__u64 totals_bytes;
};
/*
* An individual total that is given to userspace. The total is the
* sum of all the values in the xattr payloads matching the name. The
* count is the number of xattrs, not number of files, contributing to
* the total.
*/
struct scoutfs_ioctl_xattr_total {
__u64 name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
__u64 total;
__u64 count;
};
#define SCOUTFS_IOC_READ_XATTR_TOTALS \
_IOW(SCOUTFS_IOCTL_MAGIC, 15, struct scoutfs_ioctl_read_xattr_totals)
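A rough userspace loop that drains all totals by resuming after the last name copied; the buffer size is arbitrary and the final name increment assumes name[2] doesn't wrap:

struct scoutfs_ioctl_xattr_total xts[64];
struct scoutfs_ioctl_read_xattr_totals rxt = {
        .pos_name     = { 0, 0, 0 },
        .totals_ptr   = (unsigned long)xts,
        .totals_bytes = sizeof(xts),
};
int nr;

while ((nr = ioctl(fd, SCOUTFS_IOC_READ_XATTR_TOTALS, &rxt)) > 0) {
        /* consume xts[0..nr-1], then continue past the last name seen */
        memcpy(rxt.pos_name, xts[nr - 1].name, sizeof(rxt.pos_name));
        rxt.pos_name[2]++;
}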
/*
* This fills the caller's inos array with inode numbers that are in use
* after the start ino, within an internal inode group.
*
* This only makes a promise about the state of the inode numbers within
* the first and last numbers returned by one call. At one time, all of
* those inodes were still allocated. They could have changed before
* the call returned. And any numbers outside of the first and last
* (or single) are undefined.
*
* This doesn't iterate over all allocated inodes, it only probes a
* single group that the start inode is within. This interface was
* first introduced to support tests that needed to find out about a
* specific inode, while having some other similarly niche uses. It is
* unsuitable for a consistent iteration over all the inode numbers in
* use.
*
* This test of inode items doesn't serialize with the inode lifetime
* mechanism. It only tells you the numbers of inodes that were once
* active in the system and haven't yet been fully deleted. The inode
* numbers returned could have been in the process of being deleted and
* were already unreachable even before the call started.
*
* @start_ino: the first inode number that could be returned
* @inos_ptr: pointer to an aligned array of 64bit inode numbers
* @inos_bytes: the number of bytes available in the inos_ptr array
*
* Returns errors or the count of inode numbers returned, quite possibly
* including 0.
*/
struct scoutfs_ioctl_get_allocated_inos {
__u64 start_ino;
__u64 inos_ptr;
__u64 inos_bytes;
};
#define SCOUTFS_IOC_GET_ALLOCATED_INOS \
_IOW(SCOUTFS_IOCTL_MAGIC, 16, struct scoutfs_ioctl_get_allocated_inos)
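And a similar sketch of probing the single inode group containing a known inode, matching the test usage the comment describes:

__u64 inos[32];
struct scoutfs_ioctl_get_allocated_inos gai = {
        .start_ino  = ino,              /* probes the group containing ino */
        .inos_ptr   = (unsigned long)inos,
        .inos_bytes = sizeof(inos),
};
int nr = ioctl(fd, SCOUTFS_IOC_GET_ALLOCATED_INOS, &gai);

for (int i = 0; i < nr; i++)
        printf("allocated ino %llu\n", (unsigned long long)inos[i]);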
#endif

View File

@@ -95,7 +95,7 @@ struct item_cache_info {
/* written by page readers, read by shrink */
spinlock_t active_lock;
struct rb_root active_root;
struct list_head active_list;
};
#define DECLARE_ITEM_CACHE_INFO(sb, name) \
@@ -127,6 +127,7 @@ struct cached_page {
unsigned long lru_time;
struct list_head dirty_list;
struct list_head dirty_head;
u64 max_seq;
struct page *page;
unsigned int page_off;
unsigned int erased_bytes;
@@ -138,10 +139,11 @@ struct cached_item {
struct list_head dirty_head;
unsigned int dirty:1, /* needs to be written */
persistent:1, /* in btrees, needs deletion item */
deletion:1; /* negative del item for writing */
deletion:1, /* negative del item for writing */
delta:1; /* item vales are combined, freed after write */
unsigned int val_len;
struct scoutfs_key key;
struct scoutfs_log_item_value liv;
u64 seq;
char val[0];
};
@@ -149,7 +151,8 @@ struct cached_item {
static int item_val_bytes(int val_len)
{
return round_up(offsetof(struct cached_item, val[val_len]), CACHED_ITEM_ALIGN);
return round_up(offsetof(struct cached_item, val[val_len]),
CACHED_ITEM_ALIGN);
}
/*
@@ -345,7 +348,8 @@ static struct cached_page *alloc_pg(struct super_block *sb, gfp_t gfp)
page = alloc_page(GFP_NOFS | gfp);
if (!page || !pg) {
kfree(pg);
__free_page(page);
if (page)
__free_page(page);
return NULL;
}
@@ -383,6 +387,12 @@ static void put_pg(struct super_block *sb, struct cached_page *pg)
}
}
static void update_pg_max_seq(struct cached_page *pg, struct cached_item *item)
{
if (item->seq > pg->max_seq)
pg->max_seq = item->seq;
}
/*
* Allocate space for a new item from the free offset at the end of a
* cached page. This isn't a blocking allocation, and it's likely that
@@ -390,8 +400,7 @@ static void put_pg(struct super_block *sb, struct cached_page *pg)
* page or checking the free space first.
*/
static struct cached_item *alloc_item(struct cached_page *pg,
struct scoutfs_key *key,
struct scoutfs_log_item_value *liv,
struct scoutfs_key *key, u64 seq, bool deletion,
void *val, int val_len)
{
struct cached_item *item;
@@ -406,22 +415,24 @@ static struct cached_item *alloc_item(struct cached_page *pg,
INIT_LIST_HEAD(&item->dirty_head);
item->dirty = 0;
item->persistent = 0;
item->deletion = !!(liv->flags & SCOUTFS_LOG_ITEM_FLAG_DELETION);
item->deletion = !!deletion;
item->delta = 0;
item->val_len = val_len;
item->key = *key;
item->liv = *liv;
item->seq = seq;
if (val_len)
memcpy(item->val, val, val_len);
update_pg_max_seq(pg, item);
return item;
}
static void erase_item(struct cached_page *pg, struct cached_item *item)
{
rbtree_erase(&item->node, &pg->item_root);
pg->erased_bytes += round_up(item_val_bytes(item->val_len),
CACHED_ITEM_ALIGN);
pg->erased_bytes += item_val_bytes(item->val_len);
}
static void lru_add(struct super_block *sb, struct item_cache_info *cinf,
@@ -621,6 +632,8 @@ static void mark_item_dirty(struct super_block *sb,
list_add_tail(&item->dirty_head, &pg->dirty_list);
item->dirty = 1;
}
update_pg_max_seq(pg, item);
}
static void clear_item_dirty(struct super_block *sb,
@@ -672,6 +685,12 @@ static void erase_page_items(struct cached_page *pg,
* to the dirty list after the left page, and by adding items to the
* tail of right's dirty list in key sort order.
*
* The max_seq of the source page might be larger than any of its items'
* seqs when it is protecting an erased item from being reclaimed while
* an older read is in flight.  We don't know where that item might have
* been in the source page, so we have to assume that it was in the key
* range being moved and update the destination page's max_seq accordingly.
*
* The caller is responsible for page locking and managing the lru.
*/
static void move_page_items(struct super_block *sb,
@@ -697,7 +716,7 @@ static void move_page_items(struct super_block *sb,
if (stop && scoutfs_key_compare(&from->key, stop) >= 0)
break;
to = alloc_item(right, &from->key, &from->liv, from->val,
to = alloc_item(right, &from->key, from->seq, from->deletion, from->val,
from->val_len);
rbtree_insert(&to->node, par, pnode, &right->item_root);
par = &to->node;
@@ -709,10 +728,13 @@ static void move_page_items(struct super_block *sb,
}
to->persistent = from->persistent;
to->deletion = from->deletion;
to->delta = from->delta;
erase_item(left, from);
}
if (left->max_seq > right->max_seq)
right->max_seq = left->max_seq;
}
enum page_intersection_type {
@@ -852,8 +874,7 @@ static void compact_page_items(struct super_block *sb,
for (from = first_item(&pg->item_root); from; from = next_item(from)) {
to = page_address(empty->page) + page_off;
page_off += round_up(item_val_bytes(from->val_len),
CACHED_ITEM_ALIGN);
page_off += item_val_bytes(from->val_len);
/* copy the entire item, struct members and all */
memcpy(to, from, item_val_bytes(from->val_len));
@@ -1260,46 +1281,76 @@ static int cache_empty_page(struct super_block *sb,
return 0;
}
/*
* Readers operate independently from dirty items and transactions.
* They read a set of persistent items and insert them into the cache
* when there aren't already pages whose key range contains the items.
* This naturally prefers cached dirty items over stale read items.
*
* We have to deal with the case where dirty items are written and
* invalidated while a read is in flight. The reader won't have seen
* the items that were dirty in their persistent roots as they started
* reading. By the time they insert their read pages the previously
* dirty items have been reclaimed and are not in the cache. The old
* stale items will be inserted in their place, effectively corrupting
* the cache by making the dirty items disappear.
*
* We fix this by tracking the max seq of items in pages. As readers
* start they record the current transaction seq. Invalidation skips
* pages with a max seq greater than the first reader seq because the
* items in the page have to stick around to prevent the readers stale
* items from being inserted.
*
* This naturally only affects a small set of pages with items that were
* written relatively recently.  If we're under memory pressure then we
* probably have a lot of pages and they'll naturally have items that
* were visible to any readers.  We don't bother with the complicated and
* expensive further refinement of tracking the ranges that are being
* read and comparing those with pages to invalidate.
*/
struct active_reader {
struct rb_node node;
struct scoutfs_key start;
struct scoutfs_key end;
struct list_head head;
u64 seq;
};
static struct active_reader *active_rbtree_walk(struct rb_root *root,
struct scoutfs_key *start,
struct scoutfs_key *end,
struct rb_node **par,
struct rb_node ***pnode)
#define INIT_ACTIVE_READER(rdr) \
struct active_reader rdr = { .head = LIST_HEAD_INIT(rdr.head) }
static void add_active_reader(struct super_block *sb, struct active_reader *active)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
BUG_ON(!list_empty(&active->head));
active->seq = scoutfs_trans_sample_seq(sb);
spin_lock(&cinf->active_lock);
list_add_tail(&active->head, &cinf->active_list);
spin_unlock(&cinf->active_lock);
}
static u64 first_active_reader_seq(struct item_cache_info *cinf)
{
struct rb_node **node = &root->rb_node;
struct rb_node *parent = NULL;
struct active_reader *ret = NULL;
struct active_reader *active;
int cmp;
u64 first;
while (*node) {
parent = *node;
active = container_of(*node, struct active_reader, node);
/* only the calling task adds or deletes this active */
spin_lock(&cinf->active_lock);
active = list_first_entry_or_null(&cinf->active_list, struct active_reader, head);
first = active ? active->seq : U64_MAX;
spin_unlock(&cinf->active_lock);
cmp = scoutfs_key_compare_ranges(start, end, &active->start,
&active->end);
if (cmp < 0) {
node = &(*node)->rb_left;
} else if (cmp > 0) {
node = &(*node)->rb_right;
} else {
ret = active;
node = &(*node)->rb_left;
}
return first;
}
static void del_active_reader(struct item_cache_info *cinf, struct active_reader *active)
{
/* only the calling task adds or deletes this active */
if (!list_empty(&active->head)) {
spin_lock(&cinf->active_lock);
list_del_init(&active->head);
spin_unlock(&cinf->active_lock);
}
if (par)
*par = parent;
if (pnode)
*pnode = node;
return ret;
}
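
To illustrate the rule these helpers implement, below is a tiny standalone model (simplified stand-ins, not the kernel structures) of how shrink compares a page's max_seq against the seq sampled by the earliest active reader.

/*
 * Toy userspace model of the reader/shrink interaction: a page may only
 * be reclaimed if every item in it predates the seq sampled by the
 * earliest active reader.  Names here are invented for the sketch.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_READERS 8

static uint64_t reader_seqs[MAX_READERS];	/* 0 == slot unused */

/* like first_active_reader_seq(): U64_MAX when no readers are active */
static uint64_t model_first_reader_seq(void)
{
	uint64_t first = UINT64_MAX;
	int i;

	for (i = 0; i < MAX_READERS; i++) {
		if (reader_seqs[i] && reader_seqs[i] < first)
			first = reader_seqs[i];
	}
	return first;
}

/* mirrors shrink's test: the page is skipped if first_reader_seq <= max_seq */
static bool model_can_invalidate(uint64_t pg_max_seq)
{
	return model_first_reader_seq() > pg_max_seq;
}

int main(void)
{
	reader_seqs[0] = 100;	/* a reader sampled trans seq 100 */

	/* page whose newest item predates the reader's sampled seq: may drop */
	printf("max_seq 90: %s\n", model_can_invalidate(90) ? "drop" : "keep");
	/* page holds an item newer than the reader's sampled seq: keep */
	printf("max_seq 120: %s\n", model_can_invalidate(120) ? "drop" : "keep");
	return 0;
}

In the kernel change the same comparison appears in item_lru_shrink() as first_reader_seq <= pg->max_seq, which skips the page.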
/*
@@ -1308,16 +1359,16 @@ static struct active_reader *active_rbtree_walk(struct rb_root *root,
* on our root and aren't in dirty or lru lists.
*
* We need to store deletion items here as we read items from all the
* btrees so that they can override older versions of the items. The
* deletion items will be deleted before we insert the pages into the
* cache. We don't insert old versions of items into the tree here so
* that the trees don't have to compare versions.
* btrees so that they can override older items. The deletion items
* will be deleted before we insert the pages into the cache. We don't
* insert old versions of items into the tree here so that the trees
* don't have to compare seqs.
*/
static int read_page_item(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_log_item_value *liv, void *val,
int val_len, void *arg)
static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 seq, u8 flags,
void *val, int val_len, int fic, void *arg)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const bool deletion = !!(flags & SCOUTFS_ITEM_FLAG_DELETION);
struct rb_root *root = arg;
struct cached_page *right = NULL;
struct cached_page *left = NULL;
@@ -1331,7 +1382,7 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key,
pg = page_rbtree_walk(sb, root, key, key, NULL, NULL, &p_par, &p_pnode);
found = item_rbtree_walk(&pg->item_root, key, NULL, &par, &pnode);
if (found && (le64_to_cpu(found->liv.vers) >= le64_to_cpu(liv->vers)))
if (found && (found->seq >= seq))
return 0;
if (!page_has_room(pg, val_len)) {
@@ -1339,10 +1390,13 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key,
/* split needs multiple items, sparse may not have enough */
if (!left)
return -ENOMEM;
compact_page_items(sb, pg, left);
found = item_rbtree_walk(&pg->item_root, key, NULL, &par,
&pnode);
}
item = alloc_item(pg, key, liv, val, val_len);
item = alloc_item(pg, key, seq, deletion, val, val_len);
if (!item) {
/* simpler split of private pages, no locking/dirty/lru */
if (!left)
@@ -1365,7 +1419,7 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key,
put_pg(sb, pg);
pg = scoutfs_key_compare(key, &left->end) <= 0 ? left : right;
item = alloc_item(pg, key, liv, val, val_len);
item = alloc_item(pg, key, seq, deletion, val, val_len);
found = item_rbtree_walk(&pg->item_root, key, NULL, &par,
&pnode);
@@ -1396,22 +1450,20 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key,
* locks held, but without locking the cache. The regions we read can
* be stale with respect to the current cache, which can be read and
* dirtied by other cluster lock holders on our node, but the cluster
* locks protect the stable items we read.
* locks protect the stable items we read. Invalidation is careful not
* to drop pages that have items that we couldn't see because they were
* dirty when we started reading.
*
* There's also the exciting case where a reader can populate the cache
* with stale old persistent data which was read before another local
* cluster lock holder was able to read, dirty, write, and then shrink
* the cache. In this case the cache couldn't be cleared by lock
* invalidation because the caller is actively holding the lock. But
* shrinking could evict the cache within the held lock. So we record
* that we're an active reader in the range covered by the lock and
* shrink will refuse to reclaim any pages that intersect with our read.
* The forest item reader is reading stable trees that could be
* overwritten. It can return -ESTALE which we return to the caller who
* will retry the operation and work with a new set of more recent
* btrees.
*/
static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
struct scoutfs_key *key, struct scoutfs_lock *lock)
{
struct rb_root root = RB_ROOT;
struct active_reader active;
INIT_ACTIVE_READER(active);
struct cached_page *right = NULL;
struct cached_page *pg;
struct cached_page *rd;
@@ -1427,15 +1479,6 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
int pgi;
int ret;
/* stop shrink from freeing new clean data, would let us cache stale */
active.start = lock->start;
active.end = lock->end;
spin_lock(&cinf->active_lock);
active_rbtree_walk(&cinf->active_root, &active.start, &active.end,
&par, &pnode);
rbtree_insert(&active.node, par, pnode, &cinf->active_root);
spin_unlock(&cinf->active_lock);
/* start with an empty page that covers the whole lock */
pg = alloc_pg(sb, 0);
if (!pg) {
@@ -1446,8 +1489,12 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
pg->end = lock->end;
rbtree_insert(&pg->node, NULL, &root.rb_node, &root);
ret = scoutfs_forest_read_items(sb, lock, key, &start, &end,
read_page_item, &root);
/* set active reader seq before reading persistent roots */
add_active_reader(sb, &active);
start = lock->start;
end = lock->end;
ret = scoutfs_forest_read_items(sb, key, &lock->start, &start, &end, read_page_item, &root);
if (ret < 0)
goto out;
@@ -1491,6 +1538,8 @@ retry:
rbtree_erase(&rd->node, &root);
rbtree_insert(&rd->node, par, pnode, &cinf->pg_root);
lru_accessed(sb, cinf, rd);
trace_scoutfs_item_read_page(sb, key, &rd->start,
&rd->end);
continue;
}
@@ -1521,9 +1570,7 @@ retry:
ret = 0;
out:
spin_lock(&cinf->active_lock);
rbtree_erase(&active.node, &cinf->active_root);
spin_unlock(&cinf->active_lock);
del_active_reader(cinf, &active);
/* free any pages we left dangling on error */
for_each_page_safe(&root, rd, pg_tmp) {
@@ -1582,7 +1629,7 @@ retry:
&lock->end);
else
ret = read_pages(sb, cinf, key, lock);
if (ret < 0)
if (ret < 0 && ret != -ESTALE)
goto out;
goto retry;
}
@@ -1778,6 +1825,21 @@ out:
return ret;
}
/*
* An item's seq is the greater of the client transaction's seq and the
* lock's write_seq. This ensures that multiple commits in one lock
* grant will have increasing seqs, and new locks in open commits will
* also increase the seqs. It lets us limit the inputs of item merging
* to the last stable seq and ensure that all the items in open
* transactions and granted locks will have greater seqs.
*/
static u64 item_seq(struct super_block *sb, struct scoutfs_lock *lock)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return max(sbi->trans_seq, lock->write_seq);
}
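
As a concrete example of the rule above: with the open transaction at seq 100 and a lock granted with write_seq 120, new items get seq max(100, 120) = 120; a later commit under the same grant that bumps the transaction seq to 130 produces items at seq 130, and a fresh lock with write_seq 200 granted during an open commit pushes its items to 200, so seqs only ever move forward across commits and grants.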
/*
* Mark the item dirty. Dirtying while holding a transaction pins the
* page holding the item and guarantees that the item can be deleted or
@@ -1810,8 +1872,8 @@ int scoutfs_item_dirty(struct super_block *sb, struct scoutfs_key *key,
if (!item || item->deletion) {
ret = -ENOENT;
} else {
item->seq = item_seq(sb, lock);
mark_item_dirty(sb, cinf, pg, NULL, item);
item->liv.vers = cpu_to_le64(lock->write_version);
ret = 0;
}
@@ -1830,9 +1892,7 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
int mode, bool force)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
struct scoutfs_log_item_value liv = {
.vers = cpu_to_le64(lock->write_version),
};
const u64 seq = item_seq(sb, lock);
struct cached_item *found;
struct cached_item *item;
struct cached_page *pg;
@@ -1860,7 +1920,7 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
goto unlock;
}
item = alloc_item(pg, key, &liv, val, val_len);
item = alloc_item(pg, key, seq, false, val, val_len);
rbtree_insert(&item->node, par, pnode, &pg->item_root);
mark_item_dirty(sb, cinf, pg, NULL, item);
@@ -1905,9 +1965,7 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
struct scoutfs_log_item_value liv = {
.vers = cpu_to_le64(lock->write_version),
};
const u64 seq = item_seq(sb, lock);
struct cached_item *item;
struct cached_item *found;
struct cached_page *pg;
@@ -1939,12 +1997,13 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
if (val_len)
memcpy(found->val, val, val_len);
if (val_len < found->val_len)
pg->erased_bytes += found->val_len - val_len;
pg->erased_bytes += item_val_bytes(found->val_len) -
item_val_bytes(val_len);
found->val_len = val_len;
found->liv.vers = liv.vers;
found->seq = seq;
mark_item_dirty(sb, cinf, pg, NULL, found);
} else {
item = alloc_item(pg, key, &liv, val, val_len);
item = alloc_item(pg, key, seq, false, val, val_len);
item->persistent = found->persistent;
rbtree_insert(&item->node, par, pnode, &pg->item_root);
mark_item_dirty(sb, cinf, pg, NULL, item);
@@ -1960,6 +2019,77 @@ out:
return ret;
}
/*
* Add a delta item. Delta items are an incremental change relative to
* the current persistent delta items. We never have to read the
* current items so the caller always writes with write only locks. If
* combining the current delta item and the caller's item results in a
* null we can just drop it; we don't have to emit a deletion item.
*/
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
struct cached_item *item;
struct cached_page *pg;
struct rb_node **pnode;
struct rb_node *par;
int ret;
scoutfs_inc_counter(sb, item_delta);
if ((ret = lock_safe(lock, key, SCOUTFS_LOCK_WRITE_ONLY)))
goto out;
ret = scoutfs_forest_set_bloom_bits(sb, lock);
if (ret < 0)
goto out;
ret = get_cached_page(sb, cinf, lock, key, true, true, val_len, &pg);
if (ret < 0)
goto out;
__acquire(pg->rwlock);
item = item_rbtree_walk(&pg->item_root, key, NULL, &par, &pnode);
if (item) {
if (!item->delta) {
ret = -EIO;
goto unlock;
}
ret = scoutfs_forest_combine_deltas(key, item->val, item->val_len, val, val_len);
if (ret <= 0) {
if (ret == 0)
ret = -EIO;
goto unlock;
}
if (ret == SCOUTFS_DELTA_COMBINED) {
item->seq = seq;
mark_item_dirty(sb, cinf, pg, NULL, item);
} else if (ret == SCOUTFS_DELTA_COMBINED_NULL) {
clear_item_dirty(sb, cinf, pg, item);
erase_item(pg, item);
} else {
ret = -EIO;
goto unlock;
}
ret = 0;
} else {
item = alloc_item(pg, key, seq, false, val, val_len);
rbtree_insert(&item->node, par, pnode, &pg->item_root);
mark_item_dirty(sb, cinf, pg, NULL, item);
item->delta = 1;
ret = 0;
}
unlock:
write_unlock(&pg->rwlock);
out:
return ret;
}
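
As a sketch of the combine-to-null case described above, here is a toy additive-counter model; the real combining rules live in scoutfs_forest_combine_deltas() and everything named here is invented for illustration.

/*
 * Toy model only: the cached delta value absorbs the caller's increment,
 * and a combined value of zero means the cached item can simply be
 * erased instead of emitting a deletion item.
 */
#include <stdint.h>
#include <stdio.h>

enum { TOY_DELTA_COMBINED = 1, TOY_DELTA_COMBINED_NULL = 2 };

static int toy_combine_deltas(int64_t *cached, int64_t incoming)
{
	*cached += incoming;
	return *cached == 0 ? TOY_DELTA_COMBINED_NULL : TOY_DELTA_COMBINED;
}

int main(void)
{
	int64_t cached = 5;

	/* +3 leaves a non-zero total, the cached item stays dirty */
	printf("ret %d cached %lld\n", toy_combine_deltas(&cached, 3),
	       (long long)cached);
	/* -8 combines to null, the item can be dropped outright */
	printf("ret %d cached %lld\n", toy_combine_deltas(&cached, -8),
	       (long long)cached);
	return 0;
}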
/*
* Delete an item from the cache. We can leave behind a dirty deletion
* item if there is a persistent item that needs to be overwritten.
@@ -1972,9 +2102,7 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock, int mode, bool force)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
struct scoutfs_log_item_value liv = {
.vers = cpu_to_le64(lock->write_version),
};
const u64 seq = item_seq(sb, lock);
struct cached_item *item;
struct cached_page *pg;
struct rb_node **pnode;
@@ -2002,7 +2130,7 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
}
if (!item) {
item = alloc_item(pg, key, &liv, NULL, 0);
item = alloc_item(pg, key, seq, false, NULL, 0);
rbtree_insert(&item->node, par, pnode, &pg->item_root);
}
@@ -2015,10 +2143,10 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
erase_item(pg, item);
} else {
/* must emit deletion to clobber old persistent item */
item->liv.vers = cpu_to_le64(lock->write_version);
item->liv.flags |= SCOUTFS_LOG_ITEM_FLAG_DELETION;
item->seq = seq;
item->deletion = 1;
pg->erased_bytes += item->val_len;
pg->erased_bytes += item_val_bytes(item->val_len) -
item_val_bytes(0);
item->val_len = 0;
mark_item_dirty(sb, cinf, pg, NULL, item);
}
@@ -2101,17 +2229,11 @@ int scoutfs_item_write_dirty(struct super_block *sb)
struct page *page;
LIST_HEAD(pages);
LIST_HEAD(pos);
u64 max_vers = 0;
int val_len;
u64 max_seq = 0;
int bytes;
int off;
int ret;
/* we're relying on struct layout to prepend item value headers */
BUILD_BUG_ON(offsetof(struct cached_item, val) !=
(offsetof(struct cached_item, liv) +
member_sizeof(struct cached_item, liv)));
if (atomic_read(&cinf->dirty_pages) == 0)
return 0;
@@ -2163,10 +2285,9 @@ int scoutfs_item_write_dirty(struct super_block *sb)
list_sort(NULL, &pg->dirty_list, cmp_item_key);
list_for_each_entry(item, &pg->dirty_list, dirty_head) {
val_len = sizeof(item->liv) + item->val_len;
bytes = offsetof(struct scoutfs_btree_item_list,
val[val_len]);
max_vers = max(max_vers, le64_to_cpu(item->liv.vers));
val[item->val_len]);
max_seq = max(max_seq, item->seq);
if (off + bytes > PAGE_SIZE) {
page = second;
@@ -2182,8 +2303,10 @@ int scoutfs_item_write_dirty(struct super_block *sb)
prev = &lst->next;
lst->key = item->key;
lst->val_len = val_len;
memcpy(lst->val, &item->liv, val_len);
lst->seq = item->seq;
lst->flags = item->deletion ? SCOUTFS_ITEM_FLAG_DELETION : 0;
lst->val_len = item->val_len;
memcpy(lst->val, item->val, item->val_len);
}
spin_lock(&cinf->dirty_lock);
@@ -2196,8 +2319,8 @@ int scoutfs_item_write_dirty(struct super_block *sb)
read_unlock(&pg->rwlock);
}
/* store max item vers in forest's log_trees */
scoutfs_forest_set_max_vers(sb, max_vers);
/* store max item seq in forest's log_trees */
scoutfs_forest_set_max_seq(sb, max_seq);
/* write all the dirty items into log btree blocks */
ret = scoutfs_forest_insert_list(sb, first);
@@ -2241,8 +2364,11 @@ retry:
dirty_head) {
clear_item_dirty(sb, cinf, pg, item);
if (item->delta)
scoutfs_inc_counter(sb, item_delta_written);
/* free deletion items */
if (item->deletion)
if (item->deletion || item->delta)
erase_item(pg, item);
else
item->persistent = 1;
@@ -2342,6 +2468,8 @@ retry:
write_lock(&pg->rwlock);
pgi = trim_page_intersection(sb, cinf, pg, right, start, end);
trace_scoutfs_item_invalidate_page(sb, start, end,
&pg->start, &pg->end, pgi);
BUG_ON(pgi == PGI_DISJOINT); /* walk wouldn't ret disjoint */
if (pgi == PGI_INSIDE) {
@@ -2364,9 +2492,9 @@ retry:
/* inv was entirely inside page, done after bisect */
write_trylock_will_succeed(&right->rwlock);
rbtree_insert(&right->node, par, pnode, &cinf->pg_root);
lru_accessed(sb, cinf, right);
write_unlock(&right->rwlock);
write_unlock(&pg->rwlock);
lru_accessed(sb, cinf, right);
right = NULL;
break;
}
@@ -2382,9 +2510,9 @@ retry:
/*
* Shrink the size the item cache. We're operating against the fast
* path lock ordering and we skip pages if we can't acquire locks.
* Similarly, we can run into dirty pages or pages which intersect with
* active readers that we can't shrink and also choose to skip.
* path lock ordering and we skip pages if we can't acquire locks. We
* can run into dirty pages or pages with items that weren't visible to
* the earliest active reader which must be skipped.
*/
static int item_lru_shrink(struct shrinker *shrink,
struct shrink_control *sc)
@@ -2393,27 +2521,24 @@ static int item_lru_shrink(struct shrinker *shrink,
struct item_cache_info,
shrinker);
struct super_block *sb = cinf->sb;
struct active_reader *active;
struct cached_page *tmp;
struct cached_page *pg;
LIST_HEAD(list);
u64 first_reader_seq;
int nr;
if (sc->nr_to_scan == 0)
goto out;
nr = sc->nr_to_scan;
/* can't invalidate pages with items that weren't visible to first reader */
first_reader_seq = first_active_reader_seq(cinf);
write_lock(&cinf->rwlock);
spin_lock(&cinf->lru_lock);
list_for_each_entry_safe(pg, tmp, &cinf->lru_list, lru_head) {
/* can't invalidate ranges being read, reader might be stale */
spin_lock(&cinf->active_lock);
active = active_rbtree_walk(&cinf->active_root, &pg->start,
&pg->end, NULL, NULL);
spin_unlock(&cinf->active_lock);
if (active) {
if (first_reader_seq <= pg->max_seq) {
scoutfs_inc_counter(sb, item_shrink_page_reader);
continue;
}
@@ -2433,21 +2558,17 @@ static int item_lru_shrink(struct shrinker *shrink,
__lru_remove(sb, cinf, pg);
rbtree_erase(&pg->node, &cinf->pg_root);
list_move_tail(&pg->lru_head, &list);
invalidate_pcpu_page(pg);
write_unlock(&pg->rwlock);
put_pg(sb, pg);
if (--nr == 0)
break;
}
write_unlock(&cinf->rwlock);
spin_unlock(&cinf->lru_lock);
list_for_each_entry_safe(pg, tmp, &list, lru_head) {
list_del_init(&pg->lru_head);
put_pg(sb, pg);
}
out:
return min_t(unsigned long, cinf->lru_pages, INT_MAX);
}
@@ -2486,7 +2607,7 @@ int scoutfs_item_setup(struct super_block *sb)
spin_lock_init(&cinf->lru_lock);
INIT_LIST_HEAD(&cinf->lru_list);
spin_lock_init(&cinf->active_lock);
cinf->active_root = RB_ROOT;
INIT_LIST_HEAD(&cinf->active_list);
cinf->pcpu_pages = alloc_percpu(struct item_percpu_pages);
if (!cinf->pcpu_pages)
@@ -2517,7 +2638,7 @@ void scoutfs_item_destroy(struct super_block *sb)
int cpu;
if (cinf) {
BUG_ON(!RB_EMPTY_ROOT(&cinf->active_root));
BUG_ON(!list_empty(&cinf->active_list));
unregister_hotcpu_notifier(&cinf->notifier);
unregister_shrinker(&cinf->shrinker);

View File

@@ -18,6 +18,8 @@ int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_delete_force(struct super_block *sb,

View File

@@ -108,6 +108,16 @@ static inline void scoutfs_key_set_ones(struct scoutfs_key *key)
memset(key->__pad, 0, sizeof(key->__pad));
}
static inline bool scoutfs_key_is_ones(struct scoutfs_key *key)
{
return key->sk_zone == U8_MAX &&
key->_sk_first == cpu_to_le64(U64_MAX) &&
key->sk_type == U8_MAX &&
key->_sk_second == cpu_to_le64(U64_MAX) &&
key->_sk_third == cpu_to_le64(U64_MAX) &&
key->_sk_fourth == U8_MAX;
}
/*
* Return a -1/0/1 comparison of keys.
*

View File

@@ -34,6 +34,7 @@
#include "data.h"
#include "xattr.h"
#include "item.h"
#include "omap.h"
/*
* scoutfs uses a lock service to manage item cache consistency between
@@ -65,8 +66,6 @@
* relative to that lock state we resend.
*/
#define GRACE_PERIOD_KT ms_to_ktime(2)
/*
* allocated per-super, freed on unmount.
*/
@@ -74,19 +73,19 @@ struct lock_info {
struct super_block *sb;
spinlock_t lock;
bool shutdown;
bool unmounting;
struct rb_root lock_tree;
struct rb_root lock_range_tree;
struct shrinker shrinker;
struct list_head lru_list;
unsigned long long lru_nr;
struct workqueue_struct *workq;
struct work_struct grant_work;
struct list_head grant_list;
struct delayed_work inv_dwork;
struct work_struct inv_work;
struct list_head inv_list;
struct work_struct shrink_work;
struct list_head shrink_list;
atomic64_t next_refresh_gen;
struct dentry *tseq_dentry;
struct scoutfs_tseq_tree tseq_tree;
};
@@ -122,21 +121,48 @@ static bool lock_modes_match(int granted, int requested)
}
/*
* invalidate cached data associated with an inode whose lock is going
* Invalidate cached data associated with an inode whose lock is going
* away.
*
* We try to drop cached dentries and inodes covered by the lock if they
* aren't referenced. This removes them from the mount's open map and
* allows deletions to be performed by unlink without having to wait for
* remote cached inodes to be dropped.
*
* If the cached inode was already deferring final inode deletion then
* we can't perform that inline in invalidation.  The locking alone
* would deadlock, and it might also take multiple transactions to fully
* delete an inode with significant metadata.  We only perform the iput
* inline if we know that a possible eviction can't perform the final
* deletion; otherwise we kick it off to async work.
*/
static void invalidate_inode(struct super_block *sb, u64 ino)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_inode_info *si;
struct inode *inode;
inode = scoutfs_ilookup(sb, ino);
if (inode) {
si = SCOUTFS_I(inode);
scoutfs_inc_counter(sb, lock_invalidate_inode);
if (S_ISREG(inode->i_mode)) {
truncate_inode_pages(inode->i_mapping, 0);
scoutfs_data_wait_changed(inode);
}
iput(inode);
/* can't touch during unmount, dcache destroys w/o locks */
if (!linfo->unmounting)
d_prune_aliases(inode);
si->drop_invalidated = true;
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink > 0) {
iput(inode);
} else {
/* defer iput to work context so we don't evict inodes from invalidation */
scoutfs_inode_queue_iput(inode);
}
}
}
@@ -172,6 +198,16 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
/* have to invalidate if we're not in the only usable case */
if (!(prev == SCOUTFS_LOCK_WRITE && mode == SCOUTFS_LOCK_READ)) {
retry:
/* invalidate inodes before removing coverage */
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
ino = le64_to_cpu(lock->start.ski_ino);
last = le64_to_cpu(lock->end.ski_ino);
while (ino <= last) {
invalidate_inode(sb, ino);
ino++;
}
}
/* remove cov items to tell users that their cache is stale */
spin_lock(&lock->cov_list_lock);
list_for_each_entry_safe(cov, tmp, &lock->cov_list, head) {
@@ -187,15 +223,6 @@ retry:
}
spin_unlock(&lock->cov_list_lock);
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
ino = le64_to_cpu(lock->start.ski_ino);
last = le64_to_cpu(lock->end.ski_ino);
while (ino <= last) {
invalidate_inode(sb, ino);
ino++;
}
}
scoutfs_item_invalidate(sb, &lock->start, &lock->end);
}
@@ -224,11 +251,11 @@ static void lock_free(struct lock_info *linfo, struct scoutfs_lock *lock)
BUG_ON(!RB_EMPTY_NODE(&lock->node));
BUG_ON(!RB_EMPTY_NODE(&lock->range_node));
BUG_ON(!list_empty(&lock->lru_head));
BUG_ON(!list_empty(&lock->grant_head));
BUG_ON(!list_empty(&lock->inv_head));
BUG_ON(!list_empty(&lock->shrink_head));
BUG_ON(!list_empty(&lock->cov_list));
scoutfs_omap_free_lock_data(lock->omap_data);
kfree(lock);
}
@@ -251,8 +278,8 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
RB_CLEAR_NODE(&lock->node);
RB_CLEAR_NODE(&lock->range_node);
INIT_LIST_HEAD(&lock->lru_head);
INIT_LIST_HEAD(&lock->grant_head);
INIT_LIST_HEAD(&lock->inv_head);
INIT_LIST_HEAD(&lock->inv_list);
INIT_LIST_HEAD(&lock->shrink_head);
spin_lock_init(&lock->cov_list_lock);
INIT_LIST_HEAD(&lock->cov_list);
@@ -264,6 +291,7 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
lock->mode = SCOUTFS_LOCK_NULL;
atomic64_set(&lock->forest_bloom_nr, 0);
spin_lock_init(&lock->omap_spinlock);
trace_scoutfs_lock_alloc(sb, lock);
@@ -298,23 +326,6 @@ static bool lock_counts_match(int granted, unsigned int *counts)
return true;
}
/*
* Returns true if there are any mode counts that match with the desired
* mode. There can be other non-matching counts as well but we're only
* testing for the existence of any matching counts.
*/
static bool lock_count_match_exists(int desired, unsigned int *counts)
{
enum scoutfs_lock_mode mode;
for (mode = 0; mode < SCOUTFS_LOCK_NR_MODES; mode++) {
if (counts[mode] && lock_modes_match(desired, mode))
return true;
}
return false;
}
/*
* An idle lock has nothing going on. It can be present in the lru and
* can be freed by the final put when it has a null mode.
@@ -532,45 +543,15 @@ static void put_lock(struct lock_info *linfo,struct scoutfs_lock *lock)
}
/*
* Locks have a grace period that extends after activity and prevents
* invalidation. It's intended to let nodes do reasonable batches of
* work as locks ping pong between nodes that are doing conflicting
* work.
*/
static void extend_grace(struct super_block *sb, struct scoutfs_lock *lock)
{
ktime_t now = ktime_get();
if (ktime_after(now, lock->grace_deadline))
scoutfs_inc_counter(sb, lock_grace_set);
else
scoutfs_inc_counter(sb, lock_grace_extended);
lock->grace_deadline = ktime_add(now, GRACE_PERIOD_KT);
}
static void queue_grant_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->grant_list) && !linfo->shutdown)
queue_work(linfo->workq, &linfo->grant_work);
}
/*
* We immediately queue work on the assumption that the caller might
* have made a change (set a lock mode) which can let one of the
* invalidating locks make forward progress, even if other locks are
* waiting for their grace period to elapse. It's a trade-off between
* invalidation latency and burning cpu repeatedly finding that locks
* are still in their grace period.
* The caller has made a change (set a lock mode) which can let one of the
* invalidating locks make forward progress.
*/
static void queue_inv_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->inv_list) && !linfo->shutdown)
mod_delayed_work(linfo->workq, &linfo->inv_dwork, 0);
if (!list_empty(&linfo->inv_list))
queue_work(linfo->workq, &linfo->inv_work);
}
/*
@@ -618,80 +599,17 @@ static void bug_on_inconsistent_grant_cache(struct super_block *sb,
}
/*
* Each lock has received a grant response message from the server.
* The client is receiving a grant response message from the server.
* This is being called synchronously in the networking receive path so
* our work should be quick and reasonably non-blocking.
*
* Grant responses can be reordered with incoming invalidation requests
* from the server so we have to be careful to only set the new mode
* once the old mode matches.
*
* We extend the grace period as we grant the lock if there is a waiting
* locker who can use the lock. This stops invalidation from pulling
* the granted lock out from under the requester, resulting in a lot of
* churn with no forward progress. Using the grace period avoids having
* to identify a specific waiter and give it an acquired lock. It's
* also very similar to waking up the locker and having it win the race
* against the invalidation. In that case they'd extend the grace
* period anyway as they unlock.
*/
static void lock_grant_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info,
grant_work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock_grant_response *gr;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
scoutfs_inc_counter(sb, lock_grant_work);
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->grant_list, grant_head) {
gr = &lock->grant_resp;
nl = &lock->grant_resp.nl;
/* wait for reordered invalidation to finish */
if (lock->mode != nl->old_mode)
continue;
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode,
nl->new_mode);
if (!lock_mode_can_read(nl->old_mode) &&
lock_mode_can_read(nl->new_mode)) {
lock->refresh_gen =
atomic64_inc_return(&linfo->next_refresh_gen);
}
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_version = le64_to_cpu(nl->write_version);
lock->roots = gr->roots;
if (lock_count_match_exists(nl->new_mode, lock->waiters))
extend_grace(sb, lock);
trace_scoutfs_lock_granted(sb, lock);
list_del_init(&lock->grant_head);
wake_up(&lock->waitq);
put_lock(linfo, lock);
}
/* invalidations might be waiting for our reordered grant */
queue_inv_work(linfo);
spin_unlock(&linfo->lock);
}
/*
* The client is receiving a grant response message from the server. We
* find the lock, record the response, and add it to the list for grant
* work to process.
* The server's state machine can immediately send an invalidate request
* after sending this grant response. We won't process the incoming
* invalidate request until after processing this grant response.
*/
int scoutfs_lock_grant_response(struct super_block *sb,
struct scoutfs_net_lock_grant_response *gr)
struct scoutfs_net_lock *nl)
{
struct scoutfs_net_lock *nl = &gr->nl;
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
@@ -705,62 +623,61 @@ int scoutfs_lock_grant_response(struct super_block *sb,
trace_scoutfs_lock_grant_response(sb, lock);
BUG_ON(!lock->request_pending);
lock->grant_resp = *gr;
list_add_tail(&lock->grant_head, &linfo->grant_list);
queue_grant_work(linfo);
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode, nl->new_mode);
if (!lock_mode_can_read(nl->old_mode) && lock_mode_can_read(nl->new_mode))
lock->refresh_gen = atomic64_inc_return(&linfo->next_refresh_gen);
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_seq = le64_to_cpu(nl->write_seq);
trace_scoutfs_lock_granted(sb, lock);
wake_up(&lock->waitq);
put_lock(linfo, lock);
spin_unlock(&linfo->lock);
return 0;
}
struct inv_req {
struct list_head head;
struct scoutfs_lock *lock;
u64 net_id;
struct scoutfs_net_lock nl;
};
/*
* Each lock has received a lock invalidation request from the server
* which specifies a new mode for the lock. The server will only send
* one invalidation request at a time for each lock.
* which specifies a new mode for the lock. Our processing state
* machine and server failover and lock recovery can both conspire to
* give us triplicate invalidation requests. The incoming requests for
* a given lock need to be processed in order, but we can process locks
* in any order.
*
* This is an unsolicited request from the server so it can arrive at
* any time after we make the server aware of the lock by initially
* requesting it. We wait for users of the current mode to unlock
* before invalidating.
* any time after we make the server aware of the lock. We wait for
* users of the current mode to unlock before invalidating.
*
* This can arrive on behalf of our request for a mode that conflicts
* with our current mode. We have to proceed while we have a request
* pending. We can also be racing with shrink requests being sent while
* we're invalidating.
*
* This can be processed concurrently and experience reordering with a
* grant response sent back-to-back from the server. We carefully only
* invalidate once the lock mode matches what the server told us to
* invalidate.
*
* We delay invalidation processing until a grace period has elapsed
* since the last unlock. The intent is to let users do a reasonable
* batch of work before dropping the lock. Continuous unlocking can
* continuously extend the deadline.
*
* Before we start invalidating the lock we set the lock to the new
* mode, preventing further incompatible users of the old mode from
* using the lock while we're invalidating.
*
* This does a lot of serialized inode invalidation in one context and
* performs a lot of repeated calls to sync. It would be nice to get
* some concurrent inode invalidation and to more carefully only call
* sync when needed.
*/
static void lock_invalidate_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info,
inv_dwork.work);
struct lock_info *linfo = container_of(work, struct lock_info, inv_work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
unsigned long delay = MAX_JIFFY_OFFSET;
ktime_t now = ktime_get();
ktime_t deadline;
struct inv_req *ireq;
LIST_HEAD(ready);
u64 net_id;
int ret;
scoutfs_inc_counter(sb, lock_invalidate_work);
@@ -768,21 +685,8 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->inv_list, inv_head) {
nl = &lock->inv_nl;
/* skip if grace hasn't elapsed, record earliest */
deadline = lock->grace_deadline;
if (ktime_before(now, deadline)) {
delay = min(delay,
nsecs_to_jiffies(ktime_to_ns(
ktime_sub(deadline, now))));
scoutfs_inc_counter(linfo->sb, lock_grace_wait);
continue;
}
/* wait for reordered grant to finish */
if (lock->mode != nl->old_mode)
continue;
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
nl = &ireq->nl;
/* wait until incompatible holders unlock */
if (!lock_counts_match(nl->new_mode, lock->users))
@@ -798,18 +702,23 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_unlock(&linfo->lock);
if (list_empty(&ready))
goto out;
return;
/* invalidate once the lock is read */
list_for_each_entry(lock, &ready, inv_head) {
nl = &lock->inv_nl;
net_id = lock->inv_net_id;
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
nl = &ireq->nl;
ret = lock_invalidate(sb, lock, nl->old_mode, nl->new_mode);
BUG_ON(ret);
/* only lock protocol, inv can't call subsystems after shutdown */
if (!linfo->shutdown) {
ret = lock_invalidate(sb, lock, nl->old_mode, nl->new_mode);
BUG_ON(ret);
}
/* respond with the key and modes from the request */
ret = scoutfs_client_lock_response(sb, net_id, nl);
/* respond with the key and modes from the request, server might have died */
ret = scoutfs_client_lock_response(sb, ireq->net_id, nl);
if (ret == -ENOTCONN)
ret = 0;
BUG_ON(ret);
scoutfs_inc_counter(sb, lock_invalidate_response);
@@ -819,53 +728,89 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &ready, inv_head) {
list_del_init(&lock->inv_head);
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
lock->invalidate_pending = 0;
trace_scoutfs_lock_invalidated(sb, lock);
wake_up(&lock->waitq);
list_del(&ireq->head);
kfree(ireq);
if (list_empty(&lock->inv_list)) {
/* finish if another request didn't arrive */
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
wake_up(&lock->waitq);
} else {
/* another request arrived, back on the list and requeue */
list_move_tail(&lock->inv_head, &linfo->inv_list);
queue_inv_work(linfo);
}
put_lock(linfo, lock);
}
/* grant might have been waiting for invalidate request */
queue_grant_work(linfo);
spin_unlock(&linfo->lock);
out:
/* queue delayed work if invalidations waiting on grace deadline */
if (delay != MAX_JIFFY_OFFSET)
queue_delayed_work(linfo->workq, &linfo->inv_dwork, delay);
}
/*
* Record an incoming invalidate request from the server and add its lock
* to the list for processing.
* Add an incoming invalidation request to the end of the list on the
* lock and queue it for blocking invalidation work. This is being
* called synchronously in the net recv path to avoid reordering with
* grants that were sent immediately before the server sent this
* invalidation.
*
* This is trusting the server and will crash if it's sent bad requests :/
* Incoming invalidation requests are a function of the remote lock
* server's state machine and are slightly decoupled from our lock
* state. We can receive duplicate requests if the server is quick
* enough to send the next request after we send a previous reply, or if
* pending invalidation spans server failover and lock recovery.
*
* Similarly, we can get a request to invalidate a lock we don't have if
* invalidation finished just after lock recovery to a new server.
* Happily we can just reply because we satisfy the invalidation
* response promise to not be using the old lock's mode if the lock
* doesn't exist.
*/
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
struct scoutfs_lock *lock = NULL;
struct inv_req *ireq;
int ret = 0;
scoutfs_inc_counter(sb, lock_invalidate_request);
ireq = kmalloc(sizeof(struct inv_req), GFP_NOFS);
BUG_ON(!ireq); /* lock server doesn't handle response errors */
if (ireq == NULL) {
ret = -ENOMEM;
goto out;
}
spin_lock(&linfo->lock);
lock = get_lock(sb, &nl->key);
BUG_ON(!lock);
if (lock) {
BUG_ON(lock->invalidate_pending);
lock->invalidate_pending = 1;
lock->inv_nl = *nl;
lock->inv_net_id = net_id;
list_add_tail(&lock->inv_head, &linfo->inv_list);
trace_scoutfs_lock_invalidate_request(sb, lock);
queue_inv_work(linfo);
ireq->lock = lock;
ireq->net_id = net_id;
ireq->nl = *nl;
if (list_empty(&lock->inv_list)) {
list_add_tail(&lock->inv_head, &linfo->inv_list);
lock->invalidate_pending = 1;
queue_inv_work(linfo);
}
list_add_tail(&ireq->head, &lock->inv_list);
}
spin_unlock(&linfo->lock);
return 0;
out:
if (!lock) {
ret = scoutfs_client_lock_response(sb, net_id, nl);
BUG_ON(ret); /* lock server doesn't fence timed out client requests */
}
return ret;
}
/*
@@ -901,7 +846,7 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
for (i = 0; lock && i < SCOUTFS_NET_LOCK_MAX_RECOVER_NR; i++) {
nlr->locks[i].key = lock->start;
nlr->locks[i].write_version = cpu_to_le64(lock->write_version);
nlr->locks[i].write_seq = cpu_to_le64(lock->write_seq);
nlr->locks[i].old_mode = lock->mode;
nlr->locks[i].new_mode = lock->mode;
@@ -996,7 +941,7 @@ static int lock_key_range(struct super_block *sb, enum scoutfs_lock_mode mode, i
lock_inc_count(lock->waiters, mode);
for (;;) {
if (linfo->shutdown) {
if (WARN_ON_ONCE(linfo->shutdown)) {
ret = -ESHUTDOWN;
break;
}
@@ -1041,8 +986,14 @@ static int lock_key_range(struct super_block *sb, enum scoutfs_lock_mode mode, i
trace_scoutfs_lock_wait(sb, lock);
ret = wait_event_interruptible(lock->waitq,
lock_wait_cond(sb, lock, mode));
if (flags & SCOUTFS_LKF_INTERRUPTIBLE) {
ret = wait_event_interruptible(lock->waitq,
lock_wait_cond(sb, lock, mode));
} else {
wait_event(lock->waitq, lock_wait_cond(sb, lock, mode));
ret = 0;
}
spin_lock(&linfo->lock);
if (ret)
break;
@@ -1099,7 +1050,7 @@ int scoutfs_lock_inode(struct super_block *sb, enum scoutfs_lock_mode mode, int
goto out;
if (flags & SCOUTFS_LKF_REFRESH_INODE) {
ret = scoutfs_inode_refresh(inode, *lock, flags);
ret = scoutfs_inode_refresh(inode, *lock);
if (ret < 0) {
scoutfs_unlock(sb, *lock, mode);
*lock = NULL;
@@ -1260,37 +1211,46 @@ int scoutfs_lock_inode_index(struct super_block *sb, enum scoutfs_lock_mode mode
}
/*
* The rid lock protects a mount's private persistent items in the rid
* zone. It's held for the duration of the mount. It lets the mount
* modify the rid items at will and signals to other mounts that we're
* still alive and our rid items shouldn't be reclaimed.
* Orphan items are stored in their own zone.  They are modified with
* shared write_only locks and are read inconsistently without locks by
* background scanning work.
*
* Being held for the entire mount prevents other nodes from reclaiming
* our items, like free blocks, when it would make sense for them to be
* able to. Maybe we have a bunch free and they're trying to allocate
* and are getting ENOSPC.
* Since we only use write_only locks we just lock the entire zone, but
* the api provides the inode in case we ever change the locking scheme.
*/
int scoutfs_lock_rid(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 rid, struct scoutfs_lock **lock)
int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int flags, u64 ino,
struct scoutfs_lock **lock)
{
struct scoutfs_key start;
struct scoutfs_key end;
scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_RID_ZONE;
start.sko_rid = cpu_to_le64(rid);
start.sk_zone = SCOUTFS_ORPHAN_ZONE;
start.sko_ino = 0;
start.sk_type = SCOUTFS_ORPHAN_TYPE;
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_RID_ZONE;
end.sko_rid = cpu_to_le64(rid);
scoutfs_key_set_zeros(&end);
end.sk_zone = SCOUTFS_ORPHAN_ZONE;
end.sko_ino = cpu_to_le64(U64_MAX);
end.sk_type = SCOUTFS_ORPHAN_TYPE;
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock)
{
struct scoutfs_key start;
struct scoutfs_key end;
scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
/*
* As we unlock we always extend the grace period to give the caller
* another pass at the lock before its invalidated.
*/
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scoutfs_lock_mode mode)
{
DECLARE_LOCK_INFO(sb, linfo);
@@ -1303,7 +1263,6 @@ void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scou
spin_lock(&linfo->lock);
lock_dec_count(lock->users, mode);
extend_grace(sb, lock);
if (lock_mode_can_write(mode))
lock->dirty_trans_seq = scoutfs_trans_sample_seq(sb);
@@ -1478,7 +1437,7 @@ restart:
BUG_ON(lock->mode == SCOUTFS_LOCK_NULL);
BUG_ON(!list_empty(&lock->shrink_head));
if (linfo->shutdown || nr-- == 0)
if (nr-- == 0)
break;
__lock_del_lru(linfo, lock);
@@ -1505,7 +1464,7 @@ out:
return ret;
}
void scoutfs_free_unused_locks(struct super_block *sb, unsigned long nr)
void scoutfs_free_unused_locks(struct super_block *sb)
{
struct lock_info *linfo = SCOUTFS_SB(sb)->lock_info;
struct shrink_control sc = {
@@ -1533,15 +1492,48 @@ static void lock_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
}
/*
* The caller is going to be calling _destroy soon and, critically, is
* about to shutdown networking before calling us so that we don't get
* any callbacks while we're destroying. We have to ensure that we
* won't call networking after this returns.
* shrink_dcache_for_umount() tears down dentries with no locking. We
* need to make sure that our invalidation won't touch dentries before
* we return and the caller calls the generic vfs unmount path.
*/
void scoutfs_lock_unmount_begin(struct super_block *sb)
{
DECLARE_LOCK_INFO(sb, linfo);
if (linfo) {
linfo->unmounting = true;
flush_work(&linfo->inv_work);
}
}
void scoutfs_lock_flush_invalidate(struct super_block *sb)
{
DECLARE_LOCK_INFO(sb, linfo);
if (linfo)
flush_work(&linfo->inv_work);
}
/*
* The caller is going to be shutting down transactions and the client.
* We need to make sure that locking won't call either after we return.
*
* Internal fs threads can be using locking, and locking can have async
* work pending. We use ->shutdown to force callers to return
* -ESHUTDOWN and to prevent the future queueing of work that could call
* networking. Locks whose work is stopped will be torn down by _destroy.
* At this point all fs callers and internal services that use locks
* should have stopped. We won't have any callers initiating lock
* transitions and sending requests. We set the shutdown flag to catch
* anyone who breaks this rule.
*
* We unregister the shrinker so that we won't try and send null
* requests in response to memory pressure. The locks will all be
* unceremoniously dropped once we get a farewell response from the
* server which indicates that they destroyed our locking state.
*
* We will still respond to invalidation requests that have to be
* processed to let unmount in other mounts acquire locks and make
* progress. However, we don't fully process the invalidation because
* we're shutting down. We only update the lock state and send the
* response. We shouldn't have any users of locking that require
* invalidation correctness at this point.
*/
void scoutfs_lock_shutdown(struct super_block *sb)
{
@@ -1554,19 +1546,18 @@ void scoutfs_lock_shutdown(struct super_block *sb)
trace_scoutfs_lock_shutdown(sb, linfo);
spin_lock(&linfo->lock);
/* stop the shrinker from queueing work */
unregister_shrinker(&linfo->shrinker);
flush_work(&linfo->shrink_work);
/* cause current and future lock calls to return errors */
spin_lock(&linfo->lock);
linfo->shutdown = true;
for (node = rb_first(&linfo->lock_tree); node; node = rb_next(node)) {
lock = rb_entry(node, struct scoutfs_lock, node);
wake_up(&lock->waitq);
}
spin_unlock(&linfo->lock);
flush_work(&linfo->grant_work);
flush_delayed_work(&linfo->inv_dwork);
flush_work(&linfo->shrink_work);
}
/*
@@ -1586,6 +1577,8 @@ void scoutfs_lock_destroy(struct super_block *sb)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
struct inv_req *ireq_tmp;
struct inv_req *ireq;
struct rb_node *node;
enum scoutfs_lock_mode mode;
@@ -1594,8 +1587,6 @@ void scoutfs_lock_destroy(struct super_block *sb)
trace_scoutfs_lock_destroy(sb, linfo);
/* stop the shrinker from queueing work */
unregister_shrinker(&linfo->shrinker);
/* make sure that no one's actively using locks */
spin_lock(&linfo->lock);
@@ -1614,8 +1605,6 @@ void scoutfs_lock_destroy(struct super_block *sb)
spin_unlock(&linfo->lock);
if (linfo->workq) {
/* pending grace work queues normal work */
flush_workqueue(linfo->workq);
/* now all work won't queue itself */
destroy_workqueue(linfo->workq);
}
@@ -1632,22 +1621,31 @@ void scoutfs_lock_destroy(struct super_block *sb)
* of free).
*/
spin_lock(&linfo->lock);
node = rb_first(&linfo->lock_tree);
while (node) {
lock = rb_entry(node, struct scoutfs_lock, node);
node = rb_next(node);
list_for_each_entry_safe(ireq, ireq_tmp, &lock->inv_list, head) {
list_del_init(&ireq->head);
put_lock(linfo, ireq->lock);
kfree(ireq);
}
lock->request_pending = 0;
if (!list_empty(&lock->lru_head))
__lock_del_lru(linfo, lock);
if (!list_empty(&lock->grant_head))
list_del_init(&lock->grant_head);
if (!list_empty(&lock->inv_head))
if (!list_empty(&lock->inv_head)) {
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
}
if (!list_empty(&lock->shrink_head))
list_del_init(&lock->shrink_head);
lock_remove(linfo, lock);
lock_free(linfo, lock);
}
spin_unlock(&linfo->lock);
kfree(linfo);
@@ -1672,9 +1670,7 @@ int scoutfs_lock_setup(struct super_block *sb)
linfo->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&linfo->shrinker);
INIT_LIST_HEAD(&linfo->lru_list);
INIT_WORK(&linfo->grant_work, lock_grant_worker);
INIT_LIST_HEAD(&linfo->grant_list);
INIT_DELAYED_WORK(&linfo->inv_dwork, lock_invalidate_worker);
INIT_WORK(&linfo->inv_work, lock_invalidate_worker);
INIT_LIST_HEAD(&linfo->inv_list);
INIT_WORK(&linfo->shrink_work, lock_shrink_worker);
INIT_LIST_HEAD(&linfo->shrink_list);

View File

@@ -6,12 +6,15 @@
#define SCOUTFS_LKF_REFRESH_INODE 0x01 /* update stale inode from item */
#define SCOUTFS_LKF_NONBLOCK 0x02 /* only use already held locks */
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_NONBLOCK << 1) - 1))
#define SCOUTFS_LKF_INTERRUPTIBLE 0x04 /* pending signals return -ERESTARTSYS */
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_INTERRUPTIBLE << 1) - 1))
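As a quick check of the updated mask: with SCOUTFS_LKF_INTERRUPTIBLE at 0x04, (SCOUTFS_LKF_INTERRUPTIBLE << 1) - 1 works out to 0x07, covering all three defined flag bits, so SCOUTFS_LKF_INVALID becomes ~0x07, the set of bits that no known flag uses.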
#define SCOUTFS_LOCK_NR_MODES SCOUTFS_LOCK_INVALID
struct scoutfs_omap_lock;
/*
* A few fields (start, end, refresh_gen, write_version, granted_mode)
* A few fields (start, end, refresh_gen, write_seq, granted_mode)
* are referenced by code outside lock.c.
*/
struct scoutfs_lock {
@@ -21,20 +24,15 @@ struct scoutfs_lock {
struct rb_node node;
struct rb_node range_node;
u64 refresh_gen;
u64 write_version;
u64 write_seq;
u64 dirty_trans_seq;
struct scoutfs_net_roots roots;
struct list_head lru_head;
wait_queue_head_t waitq;
ktime_t grace_deadline;
unsigned long request_pending:1,
invalidate_pending:1;
struct list_head grant_head;
struct scoutfs_net_lock_grant_response grant_resp;
struct list_head inv_head;
struct scoutfs_net_lock inv_nl;
u64 inv_net_id;
struct list_head inv_head; /* entry in linfo's list of locks with invalidations */
struct list_head inv_list; /* list of lock's invalidation requests */
struct list_head shrink_head;
spinlock_t cov_list_lock;
@@ -48,6 +46,10 @@ struct scoutfs_lock {
/* the forest tracks which log tree last saw bloom bit updates */
atomic64_t forest_bloom_nr;
/* open ino mapping has a valid map for a held write lock */
spinlock_t omap_spinlock;
struct scoutfs_omap_lock_data *omap_data;
};
struct scoutfs_lock_coverage {
@@ -57,7 +59,7 @@ struct scoutfs_lock_coverage {
};
int scoutfs_lock_grant_response(struct super_block *sb,
struct scoutfs_net_lock_grant_response *gr);
struct scoutfs_net_lock *nl);
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl);
int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
@@ -80,8 +82,10 @@ int scoutfs_lock_inodes(struct super_block *sb, enum scoutfs_lock_mode mode, int
struct inode *d, struct scoutfs_lock **D_lock);
int scoutfs_lock_rename(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
int scoutfs_lock_rid(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 rid, struct scoutfs_lock **lock);
int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 ino, struct scoutfs_lock **lock);
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock,
enum scoutfs_lock_mode mode);
@@ -96,9 +100,11 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
enum scoutfs_lock_mode mode);
void scoutfs_free_unused_locks(struct super_block *sb, unsigned long nr);
void scoutfs_free_unused_locks(struct super_block *sb);
int scoutfs_lock_setup(struct super_block *sb);
void scoutfs_lock_unmount_begin(struct super_block *sb);
void scoutfs_lock_flush_invalidate(struct super_block *sb);
void scoutfs_lock_shutdown(struct super_block *sb);
void scoutfs_lock_destroy(struct super_block *sb);

View File

@@ -20,10 +20,10 @@
#include "tseq.h"
#include "spbm.h"
#include "block.h"
#include "btree.h"
#include "msg.h"
#include "scoutfs_trace.h"
#include "lock_server.h"
#include "recov.h"
/*
* The scoutfs server implements a simple lock service. Client mounts
@@ -56,14 +56,11 @@
* Message requests and responses are reliably delivered in order across
* reconnection.
*
* The server maintains a persistent record of connected clients. A new
* server instance discovers these and waits for previously connected
* clients to reconnect and recover their state before proceeding. If
* clients don't reconnect they are forcefully prevented from unsafely
* accessing the shared persistent storage. (fenced, according to the
* rules of the platform.. could range from being powered off to having
* their switch port disabled to having their local block device set
* read-only.)
* As a new server comes up it recovers lock state from existing clients
* which were connected to a previous lock server. Recover requests are
* sent to clients as they connect and they respond with all their
* locks.  Once all clients and locks are accounted for, normal
* processing can resume.
*
* The lock server doesn't respond to memory pressure. The only way
* locks are freed is if they are invalidated to null on behalf of a
@@ -77,19 +74,12 @@ struct lock_server_info {
struct super_block *sb;
spinlock_t lock;
struct mutex mutex;
struct rb_root locks_root;
struct scoutfs_spbm recovery_pending;
struct delayed_work recovery_dwork;
struct scoutfs_tseq_tree tseq_tree;
struct dentry *tseq_dentry;
struct scoutfs_alloc *alloc;
struct scoutfs_block_writer *wri;
atomic64_t write_version;
struct scoutfs_tseq_tree stats_tseq_tree;
struct dentry *stats_tseq_dentry;
};
#define DECLARE_LOCK_SERVER_INFO(sb, name) \
@@ -116,6 +106,9 @@ struct server_lock_node {
struct list_head granted;
struct list_head requested;
struct list_head invalidated;
struct scoutfs_tseq_entry stats_tseq_entry;
u64 stats[SLT_NR];
};
/*
@@ -160,30 +153,30 @@ enum {
*/
static void add_client_entry(struct server_lock_node *snode,
struct list_head *list,
struct client_lock_entry *clent)
struct client_lock_entry *c_ent)
{
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
if (list_empty(&clent->head))
list_add_tail(&clent->head, list);
if (list_empty(&c_ent->head))
list_add_tail(&c_ent->head, list);
else
list_move_tail(&clent->head, list);
list_move_tail(&c_ent->head, list);
clent->on_list = list == &snode->granted ? OL_GRANTED :
c_ent->on_list = list == &snode->granted ? OL_GRANTED :
list == &snode->requested ? OL_REQUESTED :
OL_INVALIDATED;
}
static void free_client_entry(struct lock_server_info *inf,
struct server_lock_node *snode,
struct client_lock_entry *clent)
struct client_lock_entry *c_ent)
{
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
if (!list_empty(&clent->head))
list_del_init(&clent->head);
scoutfs_tseq_del(&inf->tseq_tree, &clent->tseq_entry);
kfree(clent);
if (!list_empty(&c_ent->head))
list_del_init(&c_ent->head);
scoutfs_tseq_del(&inf->tseq_tree, &c_ent->tseq_entry);
kfree(c_ent);
}
static bool invalid_mode(u8 mode)
@@ -305,6 +298,8 @@ static struct server_lock_node *alloc_server_lock(struct lock_server_info *inf,
snode = get_server_lock(inf, key, ins, false);
if (snode != ins)
kfree(ins);
else
scoutfs_tseq_add(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
}
}
@@ -334,21 +329,23 @@ static void put_server_lock(struct lock_server_info *inf,
mutex_unlock(&snode->mutex);
if (should_free)
if (should_free) {
scoutfs_tseq_del(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
kfree(snode);
}
}
static struct client_lock_entry *find_entry(struct server_lock_node *snode,
struct list_head *list,
u64 rid)
{
struct client_lock_entry *clent;
struct client_lock_entry *c_ent;
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
list_for_each_entry(clent, list, head) {
if (clent->rid == rid)
return clent;
list_for_each_entry(c_ent, list, head) {
if (c_ent->rid == rid)
return c_ent;
}
return NULL;
@@ -367,7 +364,7 @@ int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
u64 net_id, struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *clent;
struct client_lock_entry *c_ent;
struct server_lock_node *snode;
int ret;
@@ -379,27 +376,29 @@ int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
goto out;
}
clent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!clent) {
c_ent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!c_ent) {
ret = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&clent->head);
clent->rid = rid;
clent->net_id = net_id;
clent->mode = nl->new_mode;
INIT_LIST_HEAD(&c_ent->head);
c_ent->rid = rid;
c_ent->net_id = net_id;
c_ent->mode = nl->new_mode;
snode = alloc_server_lock(inf, &nl->key);
if (snode == NULL) {
kfree(clent);
kfree(c_ent);
ret = -ENOMEM;
goto out;
}
clent->snode = snode;
add_client_entry(snode, &snode->requested, clent);
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
snode->stats[SLT_REQUEST]++;
c_ent->snode = snode;
add_client_entry(snode, &snode->requested, c_ent);
scoutfs_tseq_add(&inf->tseq_tree, &c_ent->tseq_entry);
ret = process_waiting_requests(sb, snode);
out:
@@ -418,7 +417,7 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *clent;
struct client_lock_entry *c_ent;
struct server_lock_node *snode;
int ret;
@@ -430,25 +429,27 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
goto out;
}
/* XXX should always have a server lock here? recovery? */
/* XXX should always have a server lock here? */
snode = get_server_lock(inf, &nl->key, NULL, false);
if (!snode) {
ret = -EINVAL;
goto out;
}
clent = find_entry(snode, &snode->invalidated, rid);
if (!clent) {
snode->stats[SLT_RESPONSE]++;
c_ent = find_entry(snode, &snode->invalidated, rid);
if (!c_ent) {
put_server_lock(inf, snode);
ret = -EINVAL;
goto out;
}
if (nl->new_mode == SCOUTFS_LOCK_NULL) {
free_client_entry(inf, snode, clent);
free_client_entry(inf, snode, c_ent);
} else {
clent->mode = nl->new_mode;
add_client_entry(snode, &snode->granted, clent);
c_ent->mode = nl->new_mode;
add_client_entry(snode, &snode->granted, c_ent);
}
ret = process_waiting_requests(sb, snode);
@@ -473,31 +474,27 @@ out:
* so we unlock the snode mutex.
*
* All progress must wait for all clients to finish with recovery
* because we don't know which locks they'll hold. The unlocked
* recovery_pending test here is OK. It's filled by setup before
* anything runs. It's emptied by recovery completion. We can get a
* false nonempty result if we race with recovery completion, but that's
* OK because recovery completion processes all the locks that have
* requests after emptying, including the unlikely loser of that race.
* because we don't know which locks they'll hold. Once recovery
* finishes, the server calls us to kick all the locks that were waiting
* during recovery.
*/
static int process_waiting_requests(struct super_block *sb,
struct server_lock_node *snode)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_net_lock_grant_response gres;
struct scoutfs_net_lock nl;
struct client_lock_entry *req;
struct client_lock_entry *req_tmp;
struct client_lock_entry *gr;
struct client_lock_entry *gr_tmp;
u64 wv;
u64 seq;
int ret;
BUG_ON(!mutex_is_locked(&snode->mutex));
/* processing waits for all invalidation responses or recovery */
if (!list_empty(&snode->invalidated) ||
!scoutfs_spbm_empty(&inf->recovery_pending)) {
scoutfs_recov_next_pending(sb, 0, SCOUTFS_RECOV_LOCKS) != 0) {
ret = 0;
goto out;
}
@@ -521,6 +518,7 @@ static int process_waiting_requests(struct super_block *sb,
trace_scoutfs_lock_message(sb, SLT_SERVER,
SLT_INVALIDATE, SLT_REQUEST,
gr->rid, 0, &nl);
snode->stats[SLT_INVALIDATE]++;
add_client_entry(snode, &snode->invalidated, gr);
}
@@ -531,6 +529,7 @@ static int process_waiting_requests(struct super_block *sb,
nl.key = snode->key;
nl.new_mode = req->mode;
nl.write_seq = 0;
/* see if there's an existing compatible grant to replace */
gr = find_entry(snode, &snode->granted, req->rid);
@@ -543,21 +542,20 @@ static int process_waiting_requests(struct super_block *sb,
if (nl.new_mode == SCOUTFS_LOCK_WRITE ||
nl.new_mode == SCOUTFS_LOCK_WRITE_ONLY) {
wv = atomic64_inc_return(&inf->write_version);
nl.write_version = cpu_to_le64(wv);
/* doesn't commit seq update, recovered with locks */
seq = scoutfs_server_next_seq(sb);
nl.write_seq = cpu_to_le64(seq);
}
gres.nl = nl;
scoutfs_server_get_roots(sb, &gres.roots);
ret = scoutfs_server_lock_response(sb, req->rid,
req->net_id, &gres);
req->net_id, &nl);
if (ret)
goto out;
trace_scoutfs_lock_message(sb, SLT_SERVER, SLT_GRANT,
SLT_RESPONSE, req->rid,
req->net_id, &nl);
snode->stats[SLT_GRANT]++;
/* don't track null client locks, track all else */
if (req->mode == SCOUTFS_LOCK_NULL)
@@ -573,85 +571,39 @@ out:
return ret;
}
static void init_lock_clients_key(struct scoutfs_key *key, u64 rid)
{
*key = (struct scoutfs_key) {
.sk_zone = SCOUTFS_LOCK_CLIENTS_ZONE,
.sklc_rid = cpu_to_le64(rid),
};
}
/*
* The server received a greeting from a client for the first time. If
* the client had already talked to the server then we must find an
* existing record for it and should begin recovery. If it doesn't have
* a record then it has timed out and we can't allow it to reconnect. If
* it's connecting for the first time then we insert a new record. If
* the client is in lock recovery then we send the initial lock request.
*
* This is running in concurrent client greeting processing contexts.
*/
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
bool should_exist)
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
init_lock_clients_key(&key, rid);
mutex_lock(&inf->mutex);
if (should_exist) {
ret = scoutfs_btree_lookup(sb, &super->lock_clients, &key,
&iref);
if (ret == 0)
scoutfs_btree_put_iref(&iref);
} else {
ret = scoutfs_btree_insert(sb, inf->alloc, inf->wri,
&super->lock_clients,
&key, NULL, 0);
}
mutex_unlock(&inf->mutex);
if (should_exist && ret == 0) {
if (scoutfs_recov_is_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) {
scoutfs_key_set_zeros(&key);
ret = scoutfs_server_lock_recover_request(sb, rid, &key);
if (ret)
goto out;
} else {
ret = 0;
}
out:
return ret;
}
/*
* A client sent their last recovery response and can exit recovery. If
* they were the last client in recovery then we can process all the
* server locks that had requests.
* All clients have finished lock recovery, so we can make forward
* progress on all the queued requests that were waiting on recovery.
*/
static int finished_recovery(struct super_block *sb, u64 rid, bool cancel)
int scoutfs_lock_server_finished_recovery(struct super_block *sb)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct server_lock_node *snode;
struct scoutfs_key key;
bool still_pending;
int ret = 0;
spin_lock(&inf->lock);
scoutfs_spbm_clear(&inf->recovery_pending, rid);
still_pending = !scoutfs_spbm_empty(&inf->recovery_pending);
spin_unlock(&inf->lock);
if (still_pending)
return 0;
if (cancel)
cancel_delayed_work_sync(&inf->recovery_dwork);
scoutfs_key_set_zeros(&key);
scoutfs_info(sb, "all lock clients recovered");
while ((snode = get_server_lock(inf, &key, NULL, true))) {
key = snode->key;
@@ -669,14 +621,6 @@ static int finished_recovery(struct super_block *sb, u64 rid, bool cancel)
return ret;
}
static void set_max_write_version(struct lock_server_info *inf, u64 new)
{
u64 old;
while (new > (old = atomic64_read(&inf->write_version)) &&
(atomic64_cmpxchg(&inf->write_version, old, new) != old));
}
/*
* We sent a lock recover request to the client when we received its
* greeting while in recovery. Here we instantiate all the locks it
@@ -688,62 +632,61 @@ int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *existing;
struct client_lock_entry *clent;
struct client_lock_entry *c_ent;
struct server_lock_node *snode;
struct scoutfs_key key;
int ret = 0;
int i;
/* client must be in recovery */
spin_lock(&inf->lock);
if (!scoutfs_spbm_test(&inf->recovery_pending, rid))
if (!scoutfs_recov_is_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) {
ret = -EINVAL;
spin_unlock(&inf->lock);
if (ret)
goto out;
}
/* client has sent us all their locks */
if (nlr->nr == 0) {
ret = finished_recovery(sb, rid, true);
scoutfs_server_recov_finish(sb, rid, SCOUTFS_RECOV_LOCKS);
ret = 0;
goto out;
}
for (i = 0; i < le16_to_cpu(nlr->nr); i++) {
clent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!clent) {
c_ent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!c_ent) {
ret = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&clent->head);
clent->rid = rid;
clent->net_id = 0;
clent->mode = nlr->locks[i].new_mode;
INIT_LIST_HEAD(&c_ent->head);
c_ent->rid = rid;
c_ent->net_id = 0;
c_ent->mode = nlr->locks[i].new_mode;
snode = alloc_server_lock(inf, &nlr->locks[i].key);
if (snode == NULL) {
kfree(clent);
kfree(c_ent);
ret = -ENOMEM;
goto out;
}
existing = find_entry(snode, &snode->granted, rid);
if (existing) {
kfree(clent);
kfree(c_ent);
put_server_lock(inf, snode);
ret = -EEXIST;
goto out;
}
clent->snode = snode;
add_client_entry(snode, &snode->granted, clent);
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
c_ent->snode = snode;
add_client_entry(snode, &snode->granted, c_ent);
scoutfs_tseq_add(&inf->tseq_tree, &c_ent->tseq_entry);
put_server_lock(inf, snode);
/* make sure next write lock is greater than all recovered */
set_max_write_version(inf,
le64_to_cpu(nlr->locks[i].write_version));
/* make sure next core seq is greater than all lock write seq */
scoutfs_server_set_seq_if_greater(sb,
le64_to_cpu(nlr->locks[i].write_seq));
}
/* send request for next batch of keys */
@@ -755,102 +698,16 @@ out:
return ret;
}
static int get_rid_and_put_ref(struct scoutfs_btree_item_ref *iref, u64 *rid)
{
int ret;
if (iref->val_len == 0) {
*rid = le64_to_cpu(iref->key->sklc_rid);
ret = 0;
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(iref);
return ret;
}
/*
* This work executes if enough time passes without all of the clients
* finishing with recovery and canceling the work. We walk through the
* client records and find any that still have their recovery pending.
*/
static void scoutfs_lock_server_recovery_timeout(struct work_struct *work)
{
struct lock_server_info *inf = container_of(work,
struct lock_server_info,
recovery_dwork.work);
struct super_block *sb = inf->sb;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
bool timed_out;
u64 rid;
int ret;
ret = scoutfs_server_hold_commit(sb);
if (ret)
goto out;
/* we enter recovery if there are any client records */
for (rid = 0; ; rid++) {
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_next(sb, &super->lock_clients, &key, &iref);
if (ret == -ENOENT) {
ret = 0;
break;
}
if (ret == 0)
ret = get_rid_and_put_ref(&iref, &rid);
if (ret < 0)
break;
spin_lock(&inf->lock);
if (scoutfs_spbm_test(&inf->recovery_pending, rid)) {
scoutfs_spbm_clear(&inf->recovery_pending, rid);
timed_out = true;
} else {
timed_out = false;
}
spin_unlock(&inf->lock);
if (!timed_out)
continue;
scoutfs_err(sb, "client rid %016llx lock recovery timed out",
rid);
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
&super->lock_clients, &key);
if (ret)
break;
}
ret = scoutfs_server_apply_commit(sb, ret);
out:
/* force processing all pending lock requests */
if (ret == 0)
ret = finished_recovery(sb, 0, false);
if (ret < 0) {
scoutfs_err(sb, "lock server saw err %d while timing out clients, shutting down", ret);
scoutfs_server_abort(sb);
}
}
/*
* A client is leaving the lock service. They aren't using locks and
* won't send any more requests. We tear down all the state we had for
* them. This can be called multiple times for a given client as their
* farewell is resent to new servers. It's OK to not find any state.
* If we fail to delete a persistent entry then we have to shut down and
* hope that the next server has more luck.
*/
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct client_lock_entry *clent;
struct client_lock_entry *c_ent;
struct client_lock_entry *tmp;
struct server_lock_node *snode;
struct scoutfs_key key;
@@ -858,20 +715,7 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
bool freed;
int ret = 0;
mutex_lock(&inf->mutex);
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
&super->lock_clients, &key);
mutex_unlock(&inf->mutex);
if (ret == -ENOENT) {
ret = 0;
goto out;
}
if (ret < 0)
goto out;
scoutfs_key_set_zeros(&key);
while ((snode = get_server_lock(inf, &key, NULL, true))) {
freed = false;
@@ -880,9 +724,9 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
(list == &snode->requested) ? &snode->invalidated :
NULL) {
list_for_each_entry_safe(clent, tmp, list, head) {
if (clent->rid == rid) {
free_client_entry(inf, snode, clent);
list_for_each_entry_safe(c_ent, tmp, list, head) {
if (c_ent->rid == rid) {
free_client_entry(inf, snode, c_ent);
freed = true;
}
}
@@ -943,36 +787,35 @@ static char *lock_on_list_string(u8 on_list)
static void lock_server_tseq_show(struct seq_file *m,
struct scoutfs_tseq_entry *ent)
{
struct client_lock_entry *clent = container_of(ent,
struct client_lock_entry *c_ent = container_of(ent,
struct client_lock_entry,
tseq_entry);
struct server_lock_node *snode = clent->snode;
struct server_lock_node *snode = c_ent->snode;
seq_printf(m, SK_FMT" %s %s rid %016llx net_id %llu\n",
SK_ARG(&snode->key), lock_mode_string(clent->mode),
lock_on_list_string(clent->on_list), clent->rid,
clent->net_id);
SK_ARG(&snode->key), lock_mode_string(c_ent->mode),
lock_on_list_string(c_ent->on_list), c_ent->rid,
c_ent->net_id);
}
static void stats_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
{
struct server_lock_node *snode = container_of(ent, struct server_lock_node,
stats_tseq_entry);
seq_printf(m, SK_FMT" req %llu inv %llu rsp %llu gr %llu\n",
SK_ARG(&snode->key), snode->stats[SLT_REQUEST], snode->stats[SLT_INVALIDATE],
snode->stats[SLT_RESPONSE], snode->stats[SLT_GRANT]);
}
/*
* Setup the lock server. This is called before networking can deliver
* requests. If we find existing client records then we enter recovery.
* Lock request processing is deferred until recovery is resolved for
* all the existing clients, either they reconnect and replay locks or
* we time them out.
* requests.
*/
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 max_vers)
int scoutfs_lock_server_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct lock_server_info *inf;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
unsigned int nr;
u64 rid;
int ret;
inf = kzalloc(sizeof(struct lock_server_info), GFP_KERNEL);
if (!inf)
@@ -980,15 +823,9 @@ int scoutfs_lock_server_setup(struct super_block *sb,
inf->sb = sb;
spin_lock_init(&inf->lock);
mutex_init(&inf->mutex);
inf->locks_root = RB_ROOT;
scoutfs_spbm_init(&inf->recovery_pending);
INIT_DELAYED_WORK(&inf->recovery_dwork,
scoutfs_lock_server_recovery_timeout);
scoutfs_tseq_tree_init(&inf->tseq_tree, lock_server_tseq_show);
inf->alloc = alloc;
inf->wri = wri;
atomic64_set(&inf->write_version, max_vers); /* inc_return gives +1 */
scoutfs_tseq_tree_init(&inf->stats_tseq_tree, stats_tseq_show);
inf->tseq_dentry = scoutfs_tseq_create("server_locks", sbi->debug_root,
&inf->tseq_tree);
@@ -997,38 +834,17 @@ int scoutfs_lock_server_setup(struct super_block *sb,
return -ENOMEM;
}
inf->stats_tseq_dentry = scoutfs_tseq_create("server_lock_stats", sbi->debug_root,
&inf->stats_tseq_tree);
if (!inf->stats_tseq_dentry) {
debugfs_remove(inf->tseq_dentry);
kfree(inf);
return -ENOMEM;
}
sbi->lock_server_info = inf;
/* we enter recovery if there are any client records */
nr = 0;
for (rid = 0; ; rid++) {
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_next(sb, &super->lock_clients, &key, &iref);
if (ret == -ENOENT)
break;
if (ret == 0)
ret = get_rid_and_put_ref(&iref, &rid);
if (ret < 0)
goto out;
ret = scoutfs_spbm_set(&inf->recovery_pending, rid);
if (ret)
goto out;
nr++;
if (rid == U64_MAX)
break;
}
ret = 0;
if (nr) {
schedule_delayed_work(&inf->recovery_dwork,
msecs_to_jiffies(LOCK_SERVER_RECOVERY_MS));
scoutfs_info(sb, "waiting for %u lock clients to recover", nr);
}
out:
return ret;
return 0;
}
/*
@@ -1041,14 +857,13 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct server_lock_node *snode;
struct server_lock_node *stmp;
struct client_lock_entry *clent;
struct client_lock_entry *c_ent;
struct client_lock_entry *ctmp;
LIST_HEAD(list);
if (inf) {
cancel_delayed_work_sync(&inf->recovery_dwork);
debugfs_remove(inf->tseq_dentry);
debugfs_remove(inf->stats_tseq_dentry);
rbtree_postorder_for_each_entry_safe(snode, stmp,
&inf->locks_root, node) {
@@ -1058,16 +873,14 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
list_splice_init(&snode->invalidated, &list);
mutex_lock(&snode->mutex);
list_for_each_entry_safe(clent, ctmp, &list, head) {
free_client_entry(inf, snode, clent);
list_for_each_entry_safe(c_ent, ctmp, &list, head) {
free_client_entry(inf, snode, c_ent);
}
mutex_unlock(&snode->mutex);
kfree(snode);
}
scoutfs_spbm_destroy(&inf->recovery_pending);
kfree(inf);
sbi->lock_server_info = NULL;
}


@@ -3,17 +3,15 @@
int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock_recover *nlr);
int scoutfs_lock_server_finished_recovery(struct super_block *sb);
int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
u64 net_id, struct scoutfs_net_lock *nl);
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
bool should_exist);
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid);
int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid);
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 max_vers);
int scoutfs_lock_server_setup(struct super_block *sb);
void scoutfs_lock_server_destroy(struct super_block *sb);
#endif


@@ -4,6 +4,7 @@
#include <linux/bitops.h>
#include "key.h"
#include "counters.h"
#include "super.h"
void __printf(4, 5) scoutfs_msg(struct super_block *sb, const char *prefix,
const char *str, const char *fmt, ...);
@@ -23,6 +24,9 @@ do { \
#define scoutfs_info(sb, fmt, args...) \
scoutfs_msg_check(sb, KERN_INFO, "", fmt, ##args)
#define scoutfs_tprintk(sb, fmt, args...) \
trace_printk(SCSBF " " fmt "\n", SCSB_ARGS(sb), ##args);
#define scoutfs_bug_on(sb, cond, fmt, args...) \
do { \
if (cond) { \


@@ -30,6 +30,7 @@
#include "net.h"
#include "endian_swap.h"
#include "tseq.h"
#include "fence.h"
/*
* scoutfs networking delivers requests and responses between nodes.
@@ -330,6 +331,9 @@ static int submit_send(struct super_block *sb,
WARN_ON_ONCE(id == 0 && (flags & SCOUTFS_NET_FLAG_RESPONSE)))
return -EINVAL;
if (scoutfs_forcing_unmount(sb))
return -EIO;
msend = kmalloc(offsetof(struct message_send,
nh.data[data_len]), GFP_NOFS);
if (!msend)
@@ -420,6 +424,16 @@ static int process_request(struct scoutfs_net_connection *conn,
mrecv->nh.data, le16_to_cpu(mrecv->nh.data_len));
}
static int call_resp_func(struct super_block *sb, struct scoutfs_net_connection *conn,
scoutfs_net_response_t resp_func, void *resp_data,
void *resp, unsigned int resp_len, int error)
{
if (resp_func)
return resp_func(sb, conn, resp, resp_len, error, resp_data);
else
return 0;
}
/*
* An incoming response finds the queued request and calls its response
* function. The response function for a given request will only be
@@ -434,7 +448,6 @@ static int process_response(struct scoutfs_net_connection *conn,
struct message_send *msend;
scoutfs_net_response_t resp_func = NULL;
void *resp_data;
int ret = 0;
spin_lock(&conn->lock);
@@ -449,11 +462,8 @@ static int process_response(struct scoutfs_net_connection *conn,
spin_unlock(&conn->lock);
if (resp_func)
ret = resp_func(sb, conn, mrecv->nh.data,
le16_to_cpu(mrecv->nh.data_len),
net_err_to_host(mrecv->nh.error), resp_data);
return ret;
return call_resp_func(sb, conn, resp_func, resp_data, mrecv->nh.data,
le16_to_cpu(mrecv->nh.data_len), net_err_to_host(mrecv->nh.error));
}
/*
@@ -619,8 +629,6 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
break;
}
trace_scoutfs_recv_clock_sync(nh.clock_sync_id);
data_len = le16_to_cpu(nh.data_len);
scoutfs_inc_counter(sb, net_recv_messages);
@@ -667,8 +675,15 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
scoutfs_tseq_add(&ninf->msg_tseq_tree, &mrecv->tseq_entry);
/* synchronously process greeting before next recvmsg */
if (nh.cmd == SCOUTFS_NET_CMD_GREETING)
/*
* Initial received greetings are processed
* synchronously before any other incoming messages.
*
* Incoming requests or responses to the lock client are
* called synchronously to avoid reordering.
*/
if (nh.cmd == SCOUTFS_NET_CMD_GREETING ||
(nh.cmd == SCOUTFS_NET_CMD_LOCK && !conn->listening_conn))
scoutfs_net_proc_worker(&mrecv->proc_work);
else
queue_work(conn->workq, &mrecv->proc_work);
@@ -768,9 +783,6 @@ static void scoutfs_net_send_worker(struct work_struct *work)
trace_scoutfs_net_send_message(sb, &conn->sockname,
&conn->peername, &msend->nh);
msend->nh.clock_sync_id = scoutfs_clock_sync_id();
trace_scoutfs_send_clock_sync(msend->nh.clock_sync_id);
ret = sendmsg_full(conn->sock, &msend->nh, len);
spin_lock(&conn->lock);
@@ -823,11 +835,9 @@ static void scoutfs_net_destroy_worker(struct work_struct *work)
if (conn->listening_conn && conn->notify_down)
conn->notify_down(sb, conn, conn->info, conn->rid);
/* free all messages, refactor and complete for forced unmount? */
list_splice_init(&conn->resend_queue, &conn->send_queue);
list_for_each_entry_safe(msend, tmp, &conn->send_queue, head) {
list_for_each_entry_safe(msend, tmp, &conn->send_queue, head)
free_msend(ninf, msend);
}
/* accepted sockets are removed from their listener's list */
if (conn->listening_conn) {
@@ -857,13 +867,31 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
}
/*
* Have a pretty aggressive keepalive timeout of around 10 seconds. The
* TCP keepalives are being processed out of task context so they should
* be responsive even when mounts are under load.
* By default, TCP would maintain a connection to an unresponsive peer
* for a very long time indeed. We can't do that because quorum
* members will only participate in an election when they don't have a
* healthy connection to a server. We use the KEEPALIVE* and
* TCP_USER_TIMEOUT options to ensure that we'll break an unresponsive
* connection and return to the quorum and client connection paths to
* try and establish a new connection to an active server.
*
* The TCP_KEEP* and TCP_USER_TIMEOUT option interaction is subtle.
* TCP_USER_TIMEOUT only applies if there is unacked written data in the
* send queue. It doesn't work if the connection is idle. Adding
* keepalive probes with user_timeout set changes how the keepalive
* timeout is calculated. CNT no longer matters. Each time
* additional probes (not the first) are sent the user timeout is
* checked against the last time data was received. If none of the
* keepalives are responded to then eventually the user timeout applies.
*
* Given all this, we start with the overall unresponsive timeout. Then
* we set the probes to start sending towards the end of the timeout.
* We give the peer a few chances to respond to probes before the user
* timeout elapses during the probe timer processing that follows the
* unsuccessful probes.
*/
#define KEEPCNT 3
#define KEEPIDLE 7
#define KEEPINTVL 1
#define UNRESPONSIVE_TIMEOUT_SECS 10
#define UNRESPONSIVE_PROBES 3
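For a concrete sense of how these values combine (an illustrative sketch based on the defines above, not part of the change itself), the timeline for an unresponsive peer works out roughly as follows:

/*
 * Illustrative timeline, assuming the values above and a peer that stops
 * responding (exact timing depends on the kernel's TCP implementation):
 *
 *   t =  0s      last data received from the peer
 *   t =  7s      first keepalive probe (TCP_KEEPIDLE = 10 - 3)
 *   t =  8s, 9s  additional probes at TCP_KEEPINTVL = 1s intervals
 *   t ~ 10s      TCP_USER_TIMEOUT (10000 ms since data was last received)
 *                is exceeded and the connection is torn down, returning
 *                control to the quorum and client reconnection paths
 */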
static int sock_opts_and_names(struct scoutfs_net_connection *conn,
struct socket *sock)
{
@@ -872,7 +900,7 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
int optval;
int ret;
/* but use a keepalive timeout instead of send timeout */
/* we use a keepalive timeout instead of send timeout */
tv.tv_sec = 0;
tv.tv_usec = 0;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
@@ -880,24 +908,32 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
if (ret)
goto out;
optval = KEEPCNT;
/* not checked when user_timeout != 0, but for clarity */
optval = UNRESPONSIVE_PROBES;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = KEEPIDLE;
BUILD_BUG_ON(UNRESPONSIVE_PROBES >= UNRESPONSIVE_TIMEOUT_SECS);
optval = UNRESPONSIVE_TIMEOUT_SECS - (UNRESPONSIVE_PROBES);
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = KEEPINTVL;
optval = 1;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = UNRESPONSIVE_TIMEOUT_SECS * MSEC_PER_SEC;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
(char *)&optval, sizeof(optval));
@@ -925,6 +961,8 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
ret = -EAFNOSUPPORT;
if (ret)
goto out;
conn->last_peername = conn->peername;
out:
return ret;
}
@@ -944,7 +982,6 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
struct scoutfs_net_connection *acc_conn;
DECLARE_WAIT_QUEUE_HEAD(waitq);
struct socket *acc_sock;
LIST_HEAD(conn_list);
int ret;
trace_scoutfs_net_listen_work_enter(sb, 0, 0);
@@ -1089,9 +1126,11 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
struct scoutfs_net_connection *listener;
struct scoutfs_net_connection *acc_conn;
scoutfs_net_response_t resp_func;
struct message_send *msend;
struct message_send *tmp;
unsigned long delay;
void *resp_data;
trace_scoutfs_net_shutdown_work_enter(sb, 0, 0);
trace_scoutfs_conn_shutdown_start(conn);
@@ -1137,6 +1176,30 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
/* and wait for accepted conn shutdown work to finish */
wait_event(conn->waitq, empty_accepted_list(conn));
/*
* Forced unmount will cause net submit to fail once it's
* started and it calls shutdown to interrupt any previous
* senders waiting for a response. The response callbacks can
* do quite a lot of work so we're careful to call them outside
* the lock.
*/
if (scoutfs_forcing_unmount(sb)) {
spin_lock(&conn->lock);
list_splice_tail_init(&conn->send_queue, &conn->resend_queue);
while ((msend = list_first_entry_or_null(&conn->resend_queue,
struct message_send, head))) {
resp_func = msend->resp_func;
resp_data = msend->resp_data;
free_msend(ninf, msend);
spin_unlock(&conn->lock);
call_resp_func(sb, conn, resp_func, resp_data, NULL, 0, -ECONNABORTED);
spin_lock(&conn->lock);
}
spin_unlock(&conn->lock);
}
spin_lock(&conn->lock);
/* greetings aren't resent across sockets */
@@ -1206,6 +1269,7 @@ static void scoutfs_net_reconn_free_worker(struct work_struct *work)
unsigned long now = jiffies;
unsigned long deadline = 0;
bool requeue = false;
int ret;
trace_scoutfs_net_reconn_free_work_enter(sb, 0, 0);
@@ -1219,10 +1283,18 @@ restart:
time_after_eq(now, acc->reconn_deadline))) {
set_conn_fl(acc, reconn_freeing);
spin_unlock(&conn->lock);
if (!test_conn_fl(conn, shutting_down))
scoutfs_info(sb, "client timed out "SIN_FMT" -> "SIN_FMT", can not reconnect",
SIN_ARG(&acc->sockname),
SIN_ARG(&acc->peername));
if (!test_conn_fl(conn, shutting_down)) {
scoutfs_info(sb, "client "SIN_FMT" reconnect timed out, fencing",
SIN_ARG(&acc->last_peername));
ret = scoutfs_fence_start(sb, acc->rid,
acc->last_peername.sin_addr.s_addr,
SCOUTFS_FENCE_CLIENT_RECONNECT);
if (ret) {
scoutfs_err(sb, "client fence returned err %d, shutting down server",
ret);
scoutfs_server_abort(sb);
}
}
destroy_conn(acc);
goto restart;
}
@@ -1293,6 +1365,7 @@ scoutfs_net_alloc_conn(struct super_block *sb,
init_waitqueue_head(&conn->waitq);
conn->sockname.sin_family = AF_INET;
conn->peername.sin_family = AF_INET;
conn->last_peername.sin_family = AF_INET;
INIT_LIST_HEAD(&conn->accepted_head);
INIT_LIST_HEAD(&conn->accepted_list);
conn->next_send_seq = 1;
@@ -1459,8 +1532,7 @@ int scoutfs_net_connect(struct super_block *sb,
struct scoutfs_net_connection *conn,
struct sockaddr_in *sin, unsigned long timeout_ms)
{
int error = 0;
int ret;
int ret = 0;
spin_lock(&conn->lock);
conn->connect_sin = *sin;
@@ -1468,10 +1540,8 @@ int scoutfs_net_connect(struct super_block *sb,
spin_unlock(&conn->lock);
queue_work(conn->workq, &conn->connect_work);
ret = wait_event_interruptible(conn->waitq,
connect_result(conn, &error));
return ret ?: error;
wait_event(conn->waitq, connect_result(conn, &ret));
return ret;
}
static void set_valid_greeting(struct scoutfs_net_connection *conn)
@@ -1546,9 +1616,8 @@ void scoutfs_net_client_greeting(struct super_block *sb,
* response and they can disconnect cleanly.
*
* At this point our connection is idle except for send submissions and
* shutdown being queued. Once we shut down a We completely own a We
* have exclusive access to a previous conn once its shutdown and we set
* _freeing.
* shutdown being queued. We have exclusive access to the previous conn
* once it's shutdown and we set _freeing.
*/
void scoutfs_net_server_greeting(struct super_block *sb,
struct scoutfs_net_connection *conn,
@@ -1608,10 +1677,10 @@ restart:
conn->next_send_id = reconn->next_send_id;
atomic64_set(&conn->recv_seq, atomic64_read(&reconn->recv_seq));
/* greeting response/ack will be on conn send queue */
/* reconn should be idle while in reconn_wait */
BUG_ON(!list_empty(&reconn->send_queue));
BUG_ON(!list_empty(&conn->resend_queue));
list_splice_init(&reconn->resend_queue, &conn->resend_queue);
/* queued greeting response is racing, can be in send or resend queue */
list_splice_tail_init(&reconn->resend_queue, &conn->resend_queue);
/* new conn info is unused, swap, old won't call down */
swap(conn->info, reconn->info);
@@ -1703,23 +1772,6 @@ int scoutfs_net_response_node(struct super_block *sb,
NULL, NULL, NULL);
}
/*
* The response function that was submitted with the request is not
* called if the request is canceled here.
*/
void scoutfs_net_cancel_request(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, u64 id)
{
struct message_send *msend;
spin_lock(&conn->lock);
msend = find_request(conn, cmd, id);
if (msend)
complete_send(conn, msend);
spin_unlock(&conn->lock);
}
struct sync_request_completion {
struct completion comp;
void *resp;
@@ -1775,11 +1827,10 @@ int scoutfs_net_sync_request(struct super_block *sb,
ret = scoutfs_net_submit_request(sb, conn, cmd, arg, arg_len,
sync_response, &sreq, &id);
ret = wait_for_completion_interruptible(&sreq.comp);
if (ret == -ERESTARTSYS)
scoutfs_net_cancel_request(sb, conn, cmd, id);
else
if (ret == 0) {
wait_for_completion(&sreq.comp);
ret = sreq.error;
}
return ret;
}


@@ -49,6 +49,7 @@ struct scoutfs_net_connection {
u64 greeting_id;
struct sockaddr_in sockname;
struct sockaddr_in peername;
struct sockaddr_in last_peername;
struct list_head accepted_head;
struct scoutfs_net_connection *listening_conn;
@@ -90,19 +91,23 @@ enum conn_flags {
#define SIN_ARG(sin) sin, be16_to_cpu((sin)->sin_port)
static inline void scoutfs_addr_to_sin(struct sockaddr_in *sin,
struct scoutfs_inet_addr *addr)
union scoutfs_inet_addr *addr)
{
BUG_ON(addr->v4.family != cpu_to_le16(SCOUTFS_AF_IPV4));
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->addr));
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->port));
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->v4.addr));
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->v4.port));
}
static inline void scoutfs_addr_from_sin(struct scoutfs_inet_addr *addr,
struct sockaddr_in *sin)
static inline void scoutfs_sin_to_addr(union scoutfs_inet_addr *addr, struct sockaddr_in *sin)
{
addr->addr = be32_to_le32(sin->sin_addr.s_addr);
addr->port = be16_to_le16(sin->sin_port);
memset(addr->__pad, 0, sizeof(addr->__pad));
BUG_ON(sin->sin_family != AF_INET);
memset(addr, 0, sizeof(union scoutfs_inet_addr));
addr->v4.family = cpu_to_le16(SCOUTFS_AF_IPV4);
addr->v4.addr = be32_to_le32(sin->sin_addr.s_addr);
addr->v4.port = be16_to_le16(sin->sin_port);
}
struct scoutfs_net_connection *
@@ -129,9 +134,6 @@ int scoutfs_net_submit_request_node(struct super_block *sb,
u64 rid, u8 cmd, void *arg, u16 arg_len,
scoutfs_net_response_t resp_func,
void *resp_data, u64 *id_ret);
void scoutfs_net_cancel_request(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, u64 id);
int scoutfs_net_sync_request(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, void *arg, unsigned arg_len,

kmod/src/omap.c Normal file (1052 lines)

File diff suppressed because it is too large

kmod/src/omap.h Normal file (24 lines)

@@ -0,0 +1,24 @@
#ifndef _SCOUTFS_OMAP_H_
#define _SCOUTFS_OMAP_H_
int scoutfs_omap_inc(struct super_block *sb, u64 ino);
void scoutfs_omap_dec(struct super_block *sb, u64 ino);
int scoutfs_omap_should_delete(struct super_block *sb, struct inode *inode,
struct scoutfs_lock **lock_ret, struct scoutfs_lock **orph_lock_ret);
void scoutfs_omap_free_lock_data(struct scoutfs_omap_lock_data *ldata);
int scoutfs_omap_client_handle_request(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map_args *args);
int scoutfs_omap_add_rid(struct super_block *sb, u64 rid);
int scoutfs_omap_remove_rid(struct super_block *sb, u64 rid);
int scoutfs_omap_finished_recovery(struct super_block *sb);
int scoutfs_omap_server_handle_request(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_open_ino_map_args *args);
int scoutfs_omap_server_handle_response(struct super_block *sb, u64 rid,
struct scoutfs_open_ino_map *resp_map);
void scoutfs_omap_server_shutdown(struct super_block *sb);
int scoutfs_omap_setup(struct super_block *sb);
void scoutfs_omap_destroy(struct super_block *sb);
#endif


@@ -28,7 +28,7 @@
#include "super.h"
static const match_table_t tokens = {
{Opt_server_addr, "server_addr=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_err, NULL}
};
@@ -43,46 +43,6 @@ u32 scoutfs_option_u32(struct super_block *sb, int token)
return 0;
}
/* The caller's string is null terminated and can be clobbered */
static int parse_ipv4(struct super_block *sb, char *str,
struct sockaddr_in *sin)
{
unsigned long port = 0;
__be32 addr;
char *c;
int ret;
/* null term port, if specified */
c = strchr(str, ':');
if (c)
*c = '\0';
/* parse addr */
addr = in_aton(str);
if (ipv4_is_multicast(addr) || ipv4_is_lbcast(addr) ||
ipv4_is_zeronet(addr) ||
ipv4_is_local_multicast(addr)) {
scoutfs_err(sb, "invalid unicast ipv4 address: %s", str);
return -EINVAL;
}
/* parse port, if specified */
if (c) {
c++;
ret = kstrtoul(c, 0, &port);
if (ret != 0 || port == 0 || port >= U16_MAX) {
scoutfs_err(sb, "invalid port in ipv4 address: %s", c);
return -EINVAL;
}
}
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = addr;
sin->sin_port = cpu_to_be16(port);
return 0;
}
static int parse_bdev_path(struct super_block *sb, substring_t *substr,
char **bdev_path_ret)
{
@@ -132,14 +92,15 @@ out:
int scoutfs_parse_options(struct super_block *sb, char *options,
struct mount_options *parsed)
{
char ipstr[INET_ADDRSTRLEN + 1];
substring_t args[MAX_OPT_ARGS];
int nr;
int token;
char *p;
int ret;
/* Set defaults */
memset(parsed, 0, sizeof(*parsed));
parsed->quorum_slot_nr = -1;
while ((p = strsep(&options, ",")) != NULL) {
if (!*p)
@@ -147,12 +108,23 @@ int scoutfs_parse_options(struct super_block *sb, char *options,
token = match_token(p, tokens, args);
switch (token) {
case Opt_server_addr:
case Opt_quorum_slot_nr:
match_strlcpy(ipstr, args, ARRAY_SIZE(ipstr));
ret = parse_ipv4(sb, ipstr, &parsed->server_addr);
if (ret < 0)
if (parsed->quorum_slot_nr != -1) {
scoutfs_err(sb, "multiple quorum_slot_nr options provided, only provide one.");
return -EINVAL;
}
ret = match_int(args, &nr);
if (ret < 0 || nr < 0 ||
nr >= SCOUTFS_QUORUM_MAX_SLOTS) {
scoutfs_err(sb, "invalid quorum_slot_nr option, must be between 0 and %u",
SCOUTFS_QUORUM_MAX_SLOTS - 1);
if (ret == 0)
ret = -EINVAL;
return ret;
}
parsed->quorum_slot_nr = nr;
break;
case Opt_metadev_path:


@@ -6,13 +6,13 @@
#include "format.h"
enum scoutfs_mount_options {
Opt_server_addr,
Opt_quorum_slot_nr,
Opt_metadev_path,
Opt_err,
};
struct mount_options {
struct sockaddr_in server_addr;
int quorum_slot_nr;
char *metadev_path;
};

File diff suppressed because it is too large


@@ -1,10 +1,18 @@
#ifndef _SCOUTFS_QUORUM_H_
#define _SCOUTFS_QUORUM_H_
int scoutfs_quorum_election(struct super_block *sb, ktime_t timeout_abs,
u64 prev_term, u64 *elected_term);
void scoutfs_quorum_clear_leader(struct super_block *sb);
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
void scoutfs_quorum_server_shutdown(struct super_block *sb, u64 term);
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
struct sockaddr_in *sin);
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term);
int scoutfs_quorum_fence_complete(struct super_block *sb, u64 term);
int scoutfs_quorum_setup(struct super_block *sb);
void scoutfs_quorum_shutdown(struct super_block *sb);
void scoutfs_quorum_destroy(struct super_block *sb);
#endif

kmod/src/recov.c Normal file (305 lines)

@@ -0,0 +1,305 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/rhashtable.h>
#include <linux/rcupdate.h>
#include <linux/list_sort.h>
#include "super.h"
#include "recov.h"
#include "cmp.h"
/*
* There are a few server messages which can't be processed until they
* know that they have state for all possibly active clients. These
* little helpers track which clients have recovered what state and give
* those message handlers a call to check if recovery has completed. We
* track the timeout here, but all we do is call back into the server to
* take steps to evict timed out clients and then let us know that their
* recovery has finished.
*/
struct recov_info {
struct super_block *sb;
spinlock_t lock;
struct list_head pending;
struct timer_list timer;
void (*timeout_fn)(struct super_block *);
};
#define DECLARE_RECOV_INFO(sb, name) \
struct recov_info *name = SCOUTFS_SB(sb)->recov_info
struct recov_pending {
struct list_head head;
u64 rid;
int which;
};
static struct recov_pending *next_pending(struct recov_info *recinf, u64 rid, int which)
{
struct recov_pending *pend;
list_for_each_entry(pend, &recinf->pending, head) {
if (pend->rid > rid && pend->which & which)
return pend;
}
return NULL;
}
static struct recov_pending *lookup_pending(struct recov_info *recinf, u64 rid, int which)
{
struct recov_pending *pend;
pend = next_pending(recinf, rid - 1, which);
if (pend && pend->rid == rid)
return pend;
return NULL;
}
/*
* We keep the pending list sorted by rid so that we can iterate over
* them. The list should be small and shouldn't be used often.
*/
static int cmp_pending_rid(void *priv, struct list_head *A, struct list_head *B)
{
struct recov_pending *a = list_entry(A, struct recov_pending, head);
struct recov_pending *b = list_entry(B, struct recov_pending, head);
return scoutfs_cmp_u64s(a->rid, b->rid);
}
/*
* Record that we'll be waiting for a client to recover something.
* _finished will eventually be called for every _prepare, either
* because recovery naturally finished or because it timed out and the
* server evicted the client.
*/
int scoutfs_recov_prepare(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *alloc;
struct recov_pending *pend;
if (WARN_ON_ONCE(which & SCOUTFS_RECOV_INVALID))
return -EINVAL;
alloc = kmalloc(sizeof(*pend), GFP_NOFS);
if (!alloc)
return -ENOMEM;
spin_lock(&recinf->lock);
pend = lookup_pending(recinf, rid, SCOUTFS_RECOV_ALL);
if (pend) {
pend->which |= which;
} else {
swap(pend, alloc);
pend->rid = rid;
pend->which = which;
list_add_tail(&pend->head, &recinf->pending);
list_sort(NULL, &recinf->pending, cmp_pending_rid);
}
spin_unlock(&recinf->lock);
kfree(alloc);
return 0;
}
/*
* Recovery is only finished once we've begun (which sets the timer) and
* all clients have finished. If we didn't test the timer we could
* claim it finished prematurely as clients are being prepared.
*/
static int recov_finished(struct recov_info *recinf)
{
return !!(recinf->timeout_fn != NULL && list_empty(&recinf->pending));
}
static void timer_callback(struct timer_list *timer)
{
struct recov_info *recinf = from_timer(recinf, timer, timer);
recinf->timeout_fn(recinf->sb);
}
/*
* Begin waiting for recovery once we've prepared all the clients. If
* the timeout period elapses before _finish is called on all prepared
* clients then the timer will call the callback.
*
* Returns > 0 if all the prepared clients finish recovery before begin
* is called.
*/
int scoutfs_recov_begin(struct super_block *sb, void (*timeout_fn)(struct super_block *),
unsigned int timeout_ms)
{
DECLARE_RECOV_INFO(sb, recinf);
int ret;
spin_lock(&recinf->lock);
recinf->timeout_fn = timeout_fn;
recinf->timer.expires = jiffies + msecs_to_jiffies(timeout_ms);
add_timer(&recinf->timer);
ret = recov_finished(recinf);
spin_unlock(&recinf->lock);
if (ret > 0)
del_timer_sync(&recinf->timer);
return ret;
}
/*
* A given client has recovered the given state. If it's finished all
* recovery then we free it, and if all clients have finished recovery
* then we cancel the timeout timer.
*
* Returns > 0 if _begin has been called and all clients have finished.
* The caller will only see > 0 returned once.
*/
int scoutfs_recov_finish(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
int ret = 0;
spin_lock(&recinf->lock);
pend = lookup_pending(recinf, rid, which);
if (pend) {
pend->which &= ~which;
if (pend->which) {
pend = NULL;
} else {
list_del(&pend->head);
ret = recov_finished(recinf);
}
}
spin_unlock(&recinf->lock);
if (ret > 0)
del_timer_sync(&recinf->timer);
kfree(pend);
return ret;
}
/*
* Returns true if the given client is still trying to recover
* the given state.
*/
bool scoutfs_recov_is_pending(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
bool is_pending;
spin_lock(&recinf->lock);
is_pending = lookup_pending(recinf, rid, which) != NULL;
spin_unlock(&recinf->lock);
return is_pending;
}
/*
* Return the next rid after the given rid of a client waiting for the
* given state to be recovered. Start with rid 0, returns 0 when there
* are no more clients waiting for recovery.
*
* This is inherently racy. Callers are responsible for reconciling any
* actions they take based on a pending entry with recovery finishing,
* perhaps before we return.
*/
u64 scoutfs_recov_next_pending(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
spin_lock(&recinf->lock);
pend = next_pending(recinf, rid, which);
rid = pend ? pend->rid : 0;
spin_unlock(&recinf->lock);
return rid;
}
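As a usage illustration (a hypothetical caller, not part of this file), a walk over the clients still pending lock recovery could look like:

/* Hypothetical sketch: visit every client still pending lock recovery. */
static void walk_pending_lock_recovery(struct super_block *sb)
{
	u64 rid = 0;

	/* next_pending returns 0 once no pending clients remain */
	while ((rid = scoutfs_recov_next_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) != 0) {
		/* act on the pending client; as noted above, the caller must
		 * reconcile its actions with recovery finishing in parallel */
	}
}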
/*
* The server is shutting down and doesn't need to worry about recovery
* anymore. It'll be built up again by the next server, if needed.
*/
void scoutfs_recov_shutdown(struct super_block *sb)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
struct recov_pending *tmp;
LIST_HEAD(list);
del_timer_sync(&recinf->timer);
spin_lock(&recinf->lock);
list_splice_init(&recinf->pending, &list);
recinf->timeout_fn = NULL;
spin_unlock(&recinf->lock);
list_for_each_entry_safe(pend, tmp, &list, head) {
list_del(&pend->head);
kfree(pend);
}
}
int scoutfs_recov_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct recov_info *recinf;
int ret;
recinf = kzalloc(sizeof(struct recov_info), GFP_KERNEL);
if (!recinf) {
ret = -ENOMEM;
goto out;
}
recinf->sb = sb;
spin_lock_init(&recinf->lock);
INIT_LIST_HEAD(&recinf->pending);
timer_setup(&recinf->timer, timer_callback, 0);
sbi->recov_info = recinf;
ret = 0;
out:
return ret;
}
void scoutfs_recov_destroy(struct super_block *sb)
{
DECLARE_RECOV_INFO(sb, recinf);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (recinf) {
scoutfs_recov_shutdown(sb);
kfree(recinf);
sbi->recov_info = NULL;
}
}

kmod/src/recov.h Normal file (23 lines)

@@ -0,0 +1,23 @@
#ifndef _SCOUTFS_RECOV_H_
#define _SCOUTFS_RECOV_H_
enum {
SCOUTFS_RECOV_GREETING = ( 1 << 0),
SCOUTFS_RECOV_LOCKS = ( 1 << 1),
SCOUTFS_RECOV_INVALID = (~0 << 2),
SCOUTFS_RECOV_ALL = (~SCOUTFS_RECOV_INVALID),
};
int scoutfs_recov_prepare(struct super_block *sb, u64 rid, int which);
int scoutfs_recov_begin(struct super_block *sb, void (*timeout_fn)(struct super_block *),
unsigned int timeout_ms);
int scoutfs_recov_finish(struct super_block *sb, u64 rid, int which);
bool scoutfs_recov_is_pending(struct super_block *sb, u64 rid, int which);
u64 scoutfs_recov_next_pending(struct super_block *sb, u64 rid, int which);
void scoutfs_recov_shutdown(struct super_block *sb);
int scoutfs_recov_setup(struct super_block *sb);
void scoutfs_recov_destroy(struct super_block *sb);
#endif
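Putting the interface together, a rough sketch of the prepare/begin/finish lifecycle described in recov.c might look like the following (hypothetical server-side code; the callback and helper names are made up for illustration):

/* Hypothetical sketch of the recovery lifecycle; names are illustrative. */
static void recovery_timed_out(struct super_block *sb)
{
	/* the server would evict or fence unresponsive clients here and
	 * then call scoutfs_recov_finish() on their behalf */
}

static int start_lock_recovery(struct super_block *sb, u64 *rids, int nr)
{
	int ret;
	int i;

	/* record each client whose lock state we expect to recover */
	for (i = 0; i < nr; i++) {
		ret = scoutfs_recov_prepare(sb, rids[i], SCOUTFS_RECOV_LOCKS);
		if (ret < 0)
			return ret;
	}

	/*
	 * Arm the timeout. A return > 0 means every prepared client has
	 * already finished; otherwise scoutfs_recov_finish() returns > 0
	 * to whichever caller sees the last client finish.
	 */
	ret = scoutfs_recov_begin(sb, recovery_timed_out, 30 * MSEC_PER_SEC);
	if (ret > 0)
		ret = 0;	/* nothing left to wait for */
	return ret;
}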

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -56,21 +56,28 @@ do { \
__entry->name##_data_len, __entry->name##_cmd, __entry->name##_flags, \
__entry->name##_error
u64 scoutfs_server_reserved_meta_blocks(struct super_block *sb);
int scoutfs_server_lock_request(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_server_lock_response(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_net_lock_grant_response *gr);
struct scoutfs_net_lock *nl);
int scoutfs_server_lock_recover_request(struct super_block *sb, u64 rid,
struct scoutfs_key *key);
void scoutfs_server_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots);
int scoutfs_server_hold_commit(struct super_block *sb);
void scoutfs_server_hold_commit(struct super_block *sb);
int scoutfs_server_apply_commit(struct super_block *sb, int err);
void scoutfs_server_recov_finish(struct super_block *sb, u64 rid, int which);
struct sockaddr_in;
struct scoutfs_quorum_elected_info;
int scoutfs_server_start(struct super_block *sb, struct sockaddr_in *sin,
u64 term);
int scoutfs_server_send_omap_request(struct super_block *sb, u64 rid,
struct scoutfs_open_ino_map_args *args);
int scoutfs_server_send_omap_response(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_open_ino_map *map, int err);
u64 scoutfs_server_seq(struct super_block *sb);
u64 scoutfs_server_next_seq(struct super_block *sb);
void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq);
int scoutfs_server_start(struct super_block *sb, u64 term);
void scoutfs_server_abort(struct super_block *sb);
void scoutfs_server_stop(struct super_block *sb);


@@ -28,6 +28,7 @@
#include "btree.h"
#include "spbm.h"
#include "client.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
@@ -255,24 +256,9 @@ static u8 height_for_blk(u64 blk)
return hei;
}
static void init_file_block(struct super_block *sb, struct scoutfs_block *bl,
int level)
static inline u32 srch_level_magic(int level)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_block_header *hdr;
/* don't leak uninit kernel mem.. block should do this for us? */
memset(bl->data, 0, SCOUTFS_BLOCK_LG_SIZE);
hdr = bl->data;
hdr->fsid = super->hdr.fsid;
hdr->blkno = cpu_to_le64(bl->blkno);
prandom_bytes(&hdr->seq, sizeof(hdr->seq));
if (level)
hdr->magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_SRCH_PARENT);
else
hdr->magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK);
return level ? SCOUTFS_BLOCK_MAGIC_SRCH_PARENT : SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK;
}
/*
@@ -284,39 +270,15 @@ static void init_file_block(struct super_block *sb, struct scoutfs_block *bl,
*/
static int read_srch_block(struct super_block *sb,
struct scoutfs_block_writer *wri, int level,
struct scoutfs_srch_ref *ref,
struct scoutfs_block_ref *ref,
struct scoutfs_block **bl_ret)
{
struct scoutfs_block *bl;
int retries = 0;
int ret = 0;
int mag;
u32 magic = srch_level_magic(level);
int ret;
mag = level ? SCOUTFS_BLOCK_MAGIC_SRCH_PARENT :
SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK;
retry:
bl = scoutfs_block_read(sb, le64_to_cpu(ref->blkno));
if (!IS_ERR_OR_NULL(bl) &&
!scoutfs_block_consistent_ref(sb, bl, ref->seq, ref->blkno, mag)) {
scoutfs_inc_counter(sb, srch_inconsistent_ref);
scoutfs_block_writer_forget(sb, wri, bl);
scoutfs_block_invalidate(sb, bl);
scoutfs_block_put(sb, bl);
bl = NULL;
if (retries++ == 0)
goto retry;
bl = ERR_PTR(-ESTALE);
ret = scoutfs_block_read_ref(sb, ref, magic, bl_ret);
if (ret == -ESTALE)
scoutfs_inc_counter(sb, srch_read_stale);
}
if (IS_ERR(bl)) {
ret = PTR_ERR(bl);
bl = NULL;
}
*bl_ret = bl;
return ret;
}
@@ -333,7 +295,7 @@ static int read_path_block(struct super_block *sb,
{
struct scoutfs_block *bl = NULL;
struct scoutfs_srch_parent *srp;
struct scoutfs_srch_ref ref;
struct scoutfs_block_ref ref;
int level;
int ind;
int ret;
@@ -392,12 +354,10 @@ static int get_file_block(struct super_block *sb,
struct scoutfs_block_header *hdr;
struct scoutfs_block *bl = NULL;
struct scoutfs_srch_parent *srp;
struct scoutfs_block *new_bl;
struct scoutfs_srch_ref *ref;
u64 blkno = 0;
struct scoutfs_block_ref new_root_ref;
struct scoutfs_block_ref *ref;
int level;
int ind;
int err;
int ret;
u8 hei;
@@ -409,29 +369,21 @@ static int get_file_block(struct super_block *sb,
goto out;
}
ret = scoutfs_alloc_meta(sb, alloc, wri, &blkno);
memset(&new_root_ref, 0, sizeof(new_root_ref));
level = sfl->height;
ret = scoutfs_block_dirty_ref(sb, alloc, wri, &new_root_ref,
srch_level_magic(level), &bl, 0, NULL);
if (ret < 0)
goto out;
bl = scoutfs_block_create(sb, blkno);
if (IS_ERR(bl)) {
ret = PTR_ERR(bl);
goto out;
}
blkno = 0;
scoutfs_block_writer_mark_dirty(sb, wri, bl);
init_file_block(sb, bl, sfl->height);
if (sfl->height) {
if (level) {
srp = bl->data;
srp->refs[0].blkno = sfl->ref.blkno;
srp->refs[0].seq = sfl->ref.seq;
srp->refs[0] = sfl->ref;
}
hdr = bl->data;
sfl->ref.blkno = hdr->blkno;
sfl->ref.seq = hdr->seq;
sfl->ref = new_root_ref;
sfl->height++;
scoutfs_block_put(sb, bl);
bl = NULL;
@@ -447,54 +399,13 @@ static int get_file_block(struct super_block *sb,
goto out;
}
/* read an existing block */
if (ref->blkno) {
ret = read_srch_block(sb, wri, level, ref, &bl);
if (ret < 0)
goto out;
}
/* allocate a new block if we need it */
if (!ref->blkno || ((flags & GFB_DIRTY) &&
!scoutfs_block_writer_is_dirty(sb, bl))) {
ret = scoutfs_alloc_meta(sb, alloc, wri, &blkno);
if (ret < 0)
goto out;
new_bl = scoutfs_block_create(sb, blkno);
if (IS_ERR(new_bl)) {
ret = PTR_ERR(new_bl);
goto out;
}
if (bl) {
/* cow old block if we have one */
ret = scoutfs_free_meta(sb, alloc, wri,
bl->blkno);
if (ret)
goto out;
memcpy(new_bl->data, bl->data,
SCOUTFS_BLOCK_LG_SIZE);
scoutfs_block_put(sb, bl);
bl = new_bl;
hdr = bl->data;
hdr->blkno = cpu_to_le64(bl->blkno);
prandom_bytes(&hdr->seq, sizeof(hdr->seq));
} else {
/* init new allocated block */
bl = new_bl;
init_file_block(sb, bl, level);
}
blkno = 0;
scoutfs_block_writer_mark_dirty(sb, wri, bl);
/* update file or parent block ref */
hdr = bl->data;
ref->blkno = hdr->blkno;
ref->seq = hdr->seq;
}
if (flags & GFB_DIRTY)
ret = scoutfs_block_dirty_ref(sb, alloc, wri, ref, srch_level_magic(level),
&bl, 0, NULL);
else
ret = scoutfs_block_read_ref(sb, ref, srch_level_magic(level), &bl);
if (ret < 0)
goto out;
if (level == 0) {
ret = 0;
@@ -514,12 +425,6 @@ static int get_file_block(struct super_block *sb,
out:
scoutfs_block_put(sb, parent);
/* return allocated blkno on error */
if (blkno > 0) {
err = scoutfs_free_meta(sb, alloc, wri, blkno);
BUG_ON(err); /* radix should have been dirty */
}
if (ret < 0) {
scoutfs_block_put(sb, bl);
bl = NULL;
@@ -1085,12 +990,13 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_srch_file *sfl)
struct scoutfs_srch_file *sfl, bool force)
{
struct scoutfs_key key;
int ret;
if (le64_to_cpu(sfl->blocks) < SCOUTFS_SRCH_LOG_BLOCK_LIMIT)
if (sfl->ref.blkno == 0 ||
(!force && le64_to_cpu(sfl->blocks) < SCOUTFS_SRCH_LOG_BLOCK_LIMIT))
return 0;
init_srch_key(&key, SCOUTFS_SRCH_LOG_TYPE,
@@ -1198,14 +1104,10 @@ int scoutfs_srch_get_compact(struct super_block *sb,
for (;;scoutfs_key_inc(&key)) {
ret = scoutfs_btree_next(sb, root, &key, &iref);
if (ret == -ENOENT) {
ret = 0;
sc->nr = 0;
goto out;
}
if (ret == 0) {
if (iref.val_len == sizeof(struct scoutfs_srch_file)) {
if (iref.key->sk_type != type) {
ret = -ENOENT;
} else if (iref.val_len == sizeof(sfl)) {
key = *iref.key;
memcpy(&sfl, iref.val, iref.val_len);
} else {
@@ -1213,24 +1115,25 @@ int scoutfs_srch_get_compact(struct super_block *sb,
}
scoutfs_btree_put_iref(&iref);
}
if (ret < 0)
if (ret < 0) {
/* see if we ran out of log files or files entirely */
if (ret == -ENOENT) {
sc->nr = 0;
if (type == SCOUTFS_SRCH_LOG_TYPE) {
type = SCOUTFS_SRCH_BLOCKS_TYPE;
init_srch_key(&key, type, 0, 0);
continue;
} else {
ret = 0;
}
}
goto out;
}
/* skip any files already being compacted */
if (scoutfs_spbm_test(&busy, le64_to_cpu(sfl.ref.blkno)))
continue;
/* see if we ran out of log files or files entirely */
if (key.sk_type != type) {
sc->nr = 0;
if (key.sk_type == SCOUTFS_SRCH_BLOCKS_TYPE) {
type = SCOUTFS_SRCH_BLOCKS_TYPE;
} else {
ret = 0;
goto out;
}
}
/* reset if we iterated into the next size category */
if (type == SCOUTFS_SRCH_BLOCKS_TYPE) {
order = fls64(le64_to_cpu(sfl.blocks)) /
@@ -1579,10 +1482,11 @@ static int kway_merge(struct super_block *sb,
int ind;
int i;
if (WARN_ON_ONCE(nr <= 1))
if (WARN_ON_ONCE(nr <= 0))
return -EINVAL;
nr_parents = roundup_pow_of_two(nr) - 1;
/* always at least one parent for single leaf */
nr_parents = max_t(unsigned long, 1, roundup_pow_of_two(nr) - 1);
/* root at [1] for easy sib/parent index calc, final pad for odd sib */
nr_nodes = 1 + nr_parents + nr + 1;
tnodes = __vmalloc(nr_nodes * sizeof(struct tourn_node),
@@ -2179,7 +2083,7 @@ static int delete_files(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_srch_compact *sc)
{
int ret;
int ret = 0;
int i;
for (i = 0; i < sc->nr; i++) {
@@ -2225,6 +2129,7 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
struct scoutfs_alloc alloc;
unsigned long delay;
int ret;
int err;
sc = kmalloc(sizeof(struct scoutfs_srch_compact), GFP_NOFS);
if (sc == NULL) {
@@ -2255,17 +2160,22 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
if (ret < 0)
goto commit;
ret = scoutfs_block_writer_write(sb, &wri);
ret = scoutfs_alloc_prepare_commit(sb, &alloc, &wri) ?:
scoutfs_block_writer_write(sb, &wri);
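/*
The a ?: b form is GNU C's conditional with the middle operand
omitted: b is evaluated only when a is zero, so chains like this run
each step only while the previous calls returned 0 and ret ends up
holding the first nonzero error code.
*/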
commit:
/* the server won't use our partial compact if _ERROR is set */
sc->meta_avail = alloc.avail;
sc->meta_freed = alloc.freed;
sc->flags |= ret < 0 ? SCOUTFS_SRCH_COMPACT_FLAG_ERROR : 0;
ret = scoutfs_client_srch_commit_compact(sb, sc);
err = scoutfs_client_srch_commit_compact(sb, sc);
if (err < 0 && ret == 0)
ret = err;
out:
/* our allocators and files should be stable */
WARN_ON_ONCE(ret == -ESTALE);
if (ret < 0)
scoutfs_inc_counter(sb, srch_compact_error);
scoutfs_block_writer_forget_all(sb, &wri);
if (!atomic_read(&srinf->shutdown)) {


@@ -37,7 +37,7 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_srch_file *sfl);
struct scoutfs_srch_file *sfl, bool force);
int scoutfs_srch_get_compact(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,


@@ -20,7 +20,6 @@
#include <linux/statfs.h>
#include <linux/sched.h>
#include <linux/debugfs.h>
#include <linux/percpu.h>
#include "super.h"
#include "block.h"
@@ -44,70 +43,42 @@
#include "srch.h"
#include "item.h"
#include "alloc.h"
#include "recov.h"
#include "omap.h"
#include "volopt.h"
#include "fence.h"
#include "scoutfs_trace.h"
static struct dentry *scoutfs_debugfs_root;
static DEFINE_PER_CPU(u64, clock_sync_ids) = 0;
/*
* Give the caller a unique clock sync id for a message they're about to
* send. We make the ids reasonably globally unique by using randomly
* initialized per-cpu 64bit counters.
*/
__le64 scoutfs_clock_sync_id(void)
/* the statfs file fields can be small (and signed?) :/ */
static __statfs_word saturate_truncated_word(u64 files)
{
u64 rnd = 0;
u64 ret;
u64 *id;
__statfs_word word = files;
retry:
preempt_disable();
id = this_cpu_ptr(&clock_sync_ids);
if (*id == 0) {
if (rnd == 0) {
preempt_enable();
get_random_bytes(&rnd, sizeof(rnd));
goto retry;
}
*id = rnd;
if (word != files) {
word = ~0ULL;
if (word < 0)
word = (unsigned long)word >> 1;
}
ret = ++(*id);
preempt_enable();
return cpu_to_le64(ret);
}
struct statfs_free_blocks {
u64 meta;
u64 data;
};
static int count_free_blocks(struct super_block *sb, void *arg, int owner,
u64 id, bool meta, bool avail, u64 blocks)
{
struct statfs_free_blocks *sfb = arg;
if (meta)
sfb->meta += blocks;
else
sfb->data += blocks;
return 0;
return word;
}
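
A minimal userspace sketch of the same clamping idea, assuming for illustration a 32-bit signed statfs word (the real __statfs_word depends on the architecture and statfs variant):

#include <stdio.h>
#include <stdint.h>

/* stand-in for __statfs_word on a 32-bit statfs ABI (assumption) */
typedef int32_t statfs_word;

static statfs_word saturate(uint64_t files)
{
	statfs_word word = files;

	if ((uint64_t)word != files) {
		/* truncated: clamp to the largest positive value */
		word = ~0;
		if (word < 0)
			word = (uint32_t)word >> 1;
	}
	return word;
}

int main(void)
{
	printf("%d\n", saturate(1000));        /* fits: 1000 */
	printf("%d\n", saturate(1ULL << 40));  /* truncated: 2147483647 */
	return 0;
}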
/*
* Build the free block counts by having alloc read all the persistent
* blocks which contain allocators and calling us for each of them.
* Only the super block reads aren't cached so repeatedly calling statfs
* is like repeated O_DIRECT IO. We can add a cache and stale results
* if that IO becomes a problem.
* The server gives us the current sum of free blocks and the total
* inode count that it can see across all the clients' log trees. It
* won't see allocations and inode creations or deletions that are dirty
in client memory as they build transactions.
*
* We fake the number of free inodes value by assuming that we can fill
free blocks with a certain number of inodes. We then add the number of
* current inodes to that free count to determine the total possible
* inodes.
* We don't have static limits on the number of files so the statfs
fields for the total possible files and the number free aren't
* particularly helpful. What we do want to report is the number of
* inodes, so we fake a max possible number of inodes given a
* conservative estimate of the total space consumption per file and
* then find the free by subtracting our precise count of active inodes.
* This seems like the least surprising compromise where the file max
* doesn't change and the caller gets the correct count of used inodes.
*
* The fsid that we report is constructed from the xor of the first two
* and second two little endian u32s that make up the uuid bytes.
@@ -115,41 +86,33 @@ static int count_free_blocks(struct super_block *sb, void *arg, int owner,
static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
{
struct super_block *sb = dentry->d_inode->i_sb;
struct scoutfs_super_block *super = NULL;
struct statfs_free_blocks sfb = {0,};
struct scoutfs_net_statfs nst;
u64 files;
u64 ffree;
__le32 uuid[4];
int ret;
scoutfs_inc_counter(sb, statfs);
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super(sb, super);
ret = scoutfs_client_statfs(sb, &nst);
if (ret)
goto out;
ret = scoutfs_alloc_foreach(sb, count_free_blocks, &sfb);
if (ret < 0)
goto out;
kst->f_bfree = (sfb.meta << SCOUTFS_BLOCK_SM_LG_SHIFT) + sfb.data;
kst->f_bfree = (le64_to_cpu(nst.free_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(nst.free_data_blocks);
kst->f_type = SCOUTFS_SUPER_MAGIC;
kst->f_bsize = SCOUTFS_BLOCK_SM_SIZE;
kst->f_blocks = (le64_to_cpu(super->total_meta_blocks) <<
SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(super->total_data_blocks);
kst->f_blocks = (le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(nst.total_data_blocks);
kst->f_bavail = kst->f_bfree;
/* arbitrarily assume ~1K / empty file */
kst->f_ffree = sfb.meta * (SCOUTFS_BLOCK_LG_SIZE / 1024);
kst->f_files = kst->f_ffree + le64_to_cpu(super->next_ino);
files = div_u64(le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_LG_SHIFT, 2048);
ffree = files - le64_to_cpu(nst.inode_count);
kst->f_files = saturate_truncated_word(files);
kst->f_ffree = saturate_truncated_word(ffree);
BUILD_BUG_ON(sizeof(uuid) != sizeof(super->uuid));
memcpy(uuid, super->uuid, sizeof(uuid));
BUILD_BUG_ON(sizeof(uuid) != sizeof(nst.uuid));
memcpy(uuid, nst.uuid, sizeof(uuid));
kst->f_fsid.val[0] = le32_to_cpu(uuid[0]) ^ le32_to_cpu(uuid[1]);
kst->f_fsid.val[1] = le32_to_cpu(uuid[2]) ^ le32_to_cpu(uuid[3]);
kst->f_namelen = SCOUTFS_NAME_LEN;
@@ -158,15 +121,13 @@ static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
/* the vfs fills f_flags */
ret = 0;
out:
kfree(super);
/*
* We don't take cluster locks in statfs which makes it a very
* convenient place to trigger lock reclaim for debugging. We
* try to free as many locks as possible.
*/
if (scoutfs_trigger(sb, STATFS_LOCK_PURGE))
scoutfs_free_unused_locks(sb, -1UL);
scoutfs_free_unused_locks(sb);
return ret;
}
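
A rough worked example of the estimate above; the 64KiB large metadata block size and the block counts are assumptions for illustration, only the 2048-byte-per-file divisor comes from the code:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t total_meta_blocks = 1024 * 1024;  /* assumed: 1M large blocks */
	unsigned block_lg_shift = 16;              /* assumed: 64KiB large blocks */
	uint64_t inode_count = 5000000;            /* "used" inodes from the server */

	uint64_t files = (total_meta_blocks << block_lg_shift) / 2048;
	uint64_t ffree = files - inode_count;

	/* f_files stays fixed for a device size; f_ffree shrinks as inodes are created */
	printf("f_files=%llu f_ffree=%llu\n",
	       (unsigned long long)files, (unsigned long long)ffree);
	return 0;
}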
@@ -176,7 +137,8 @@ static int scoutfs_show_options(struct seq_file *seq, struct dentry *root)
struct super_block *sb = root->d_sb;
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
seq_printf(seq, ",server_addr="SIN_FMT, SIN_ARG(&opts->server_addr));
if (opts->quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts->quorum_slot_nr);
seq_printf(seq, ",metadev_path=%s", opts->metadev_path);
return 0;
@@ -192,20 +154,19 @@ static ssize_t metadev_path_show(struct kobject *kobj,
}
SCOUTFS_ATTR_RO(metadev_path);
static ssize_t server_addr_show(struct kobject *kobj,
static ssize_t quorum_server_nr_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
return snprintf(buf, PAGE_SIZE, SIN_FMT"\n",
SIN_ARG(&opts->server_addr));
return snprintf(buf, PAGE_SIZE, "%d\n", opts->quorum_slot_nr);
}
SCOUTFS_ATTR_RO(server_addr);
SCOUTFS_ATTR_RO(quorum_server_nr);
static struct attribute *mount_options_attrs[] = {
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(server_addr),
SCOUTFS_ATTR_PTR(quorum_server_nr),
NULL,
};
@@ -226,7 +187,15 @@ static void scoutfs_metadev_close(struct super_block *sb)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (sbi->meta_bdev) {
/*
* Some kernels have blkdev_reread_part which calls
* fsync_bdev while holding the bd_mutex which inverts
* the s_umount hold in deactivate_super and blkdev_put
* from kill_sb->put_super.
*/
lockdep_off();
blkdev_put(sbi->meta_bdev, SCOUTFS_META_BDEV_MODE);
lockdep_on();
sbi->meta_bdev = NULL;
}
}
@@ -243,31 +212,39 @@ static void scoutfs_put_super(struct super_block *sb)
trace_scoutfs_put_super(sb);
sbi->shutdown = true;
/*
* Wait for invalidation and iput to finish with any lingering
* inode references that escaped the evict_inodes in
* generic_shutdown_super. MS_ACTIVE is clear so final iput
* will always evict.
*/
scoutfs_lock_flush_invalidate(sb);
scoutfs_inode_flush_iput(sb);
WARN_ON_ONCE(!list_empty(&sb->s_inodes));
scoutfs_data_destroy(sb);
scoutfs_forest_stop(sb);
scoutfs_srch_destroy(sb);
scoutfs_unlock(sb, sbi->rid_lock, SCOUTFS_LOCK_WRITE);
sbi->rid_lock = NULL;
scoutfs_lock_shutdown(sb);
scoutfs_shutdown_trans(sb);
scoutfs_volopt_destroy(sb);
scoutfs_client_destroy(sb);
scoutfs_inode_destroy(sb);
scoutfs_item_destroy(sb);
scoutfs_forest_destroy(sb);
scoutfs_data_destroy(sb);
/* the server locks the listen address and compacts */
scoutfs_lock_shutdown(sb);
scoutfs_quorum_destroy(sb);
scoutfs_server_destroy(sb);
scoutfs_recov_destroy(sb);
scoutfs_net_destroy(sb);
scoutfs_lock_destroy(sb);
/* server clears quorum leader flag during shutdown */
scoutfs_quorum_destroy(sb);
scoutfs_omap_destroy(sb);
scoutfs_block_destroy(sb);
scoutfs_destroy_triggers(sb);
scoutfs_fence_destroy(sb);
scoutfs_options_destroy(sb);
scoutfs_sysfs_destroy_attrs(sb, &sbi->mopts_ssa);
debugfs_remove(sbi->debug_root);
@@ -281,6 +258,23 @@ static void scoutfs_put_super(struct super_block *sb)
sb->s_fs_info = NULL;
}
/*
* Record that we're performing a forced unmount. As put_super drives
* destruction of the filesystem we won't issue more network or storage
* operations because we assume that they'll hang. Pending operations
* can return errors when it's possible to do so. We may be racing with
* pending operations which can't be canceled.
*/
static void scoutfs_umount_begin(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
scoutfs_warn(sb, "forcing unmount, can return errors and lose unsynced data");
sbi->forced_unmount = true;
scoutfs_client_net_shutdown(sb);
}
static const struct super_operations scoutfs_super_ops = {
.alloc_inode = scoutfs_alloc_inode,
.drop_inode = scoutfs_drop_inode,
@@ -290,6 +284,7 @@ static const struct super_operations scoutfs_super_ops = {
.statfs = scoutfs_statfs,
.show_options = scoutfs_show_options,
.put_super = scoutfs_put_super,
.umount_begin = scoutfs_umount_begin,
};
/*
@@ -309,6 +304,22 @@ int scoutfs_write_super(struct super_block *sb,
sizeof(struct scoutfs_super_block));
}
static bool small_bdev(struct super_block *sb, char *which, u64 blocks,
struct block_device *bdev, int shift)
{
u64 size = (u64)i_size_read(bdev->bd_inode);
u64 count = size >> shift;
if (blocks > count) {
scoutfs_err(sb, "super block records %llu %s blocks, but device %u:%u size %llu only allows %llu blocks",
blocks, which, MAJOR(bdev->bd_dev), MINOR(bdev->bd_dev), size, count);
return true;
}
return false;
}
/*
* Read super, specifying bdev.
*/
@@ -316,9 +327,9 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
struct block_device *bdev,
struct scoutfs_super_block *super_res)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super;
__le32 calc;
u64 blkno;
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
@@ -351,59 +362,33 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
goto out;
}
if (le64_to_cpu(super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN ||
le64_to_cpu(super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX) {
scoutfs_err(sb, "super block has format version %llu outside of supported version range %u-%u",
le64_to_cpu(super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
ret = -EINVAL;
goto out;
}
if (super->format_hash != cpu_to_le64(SCOUTFS_FORMAT_HASH)) {
scoutfs_err(sb, "super block has invalid format hash 0x%llx, expected 0x%llx",
le64_to_cpu(super->format_hash),
SCOUTFS_FORMAT_HASH);
/*
* fill_supers checks the fmt_vers in both supers and then decides to use it.
* From then on we verify that the supers we read have that version.
*/
if (sbi->fmt_vers != 0 && le64_to_cpu(super->fmt_vers) != sbi->fmt_vers) {
scoutfs_err(sb, "super block has format version %llu different than %llu read at mount",
le64_to_cpu(super->fmt_vers), sbi->fmt_vers);
ret = -EINVAL;
goto out;
}
/* XXX do we want more rigorous invalid super checking? */
if (super->quorum_count == 0 ||
super->quorum_count > SCOUTFS_QUORUM_MAX_COUNT) {
scoutfs_err(sb, "super block has invalid quorum count %u, must be > 0 and <= %u",
super->quorum_count, SCOUTFS_QUORUM_MAX_COUNT);
if (small_bdev(sb, "metadata", le64_to_cpu(super->total_meta_blocks), sbi->meta_bdev,
SCOUTFS_BLOCK_LG_SHIFT) ||
small_bdev(sb, "data", le64_to_cpu(super->total_data_blocks), sb->s_bdev,
SCOUTFS_BLOCK_SM_SHIFT)) {
ret = -EINVAL;
goto out;
}
blkno = (SCOUTFS_QUORUM_BLKNO + SCOUTFS_QUORUM_BLOCKS) >>
SCOUTFS_BLOCK_SM_LG_SHIFT;
if (le64_to_cpu(super->first_meta_blkno) < blkno) {
scoutfs_err(sb, "super block first meta blkno %llu is within quorum blocks",
le64_to_cpu(super->first_meta_blkno));
ret = -EINVAL;
goto out;
}
if (le64_to_cpu(super->first_meta_blkno) >
le64_to_cpu(super->last_meta_blkno)) {
scoutfs_err(sb, "super block first meta blkno %llu is greater than last meta blkno %llu",
le64_to_cpu(super->first_meta_blkno),
le64_to_cpu(super->last_meta_blkno));
ret = -EINVAL;
goto out;
}
if (le64_to_cpu(super->first_data_blkno) >
le64_to_cpu(super->last_data_blkno)) {
scoutfs_err(sb, "super block first data blkno %llu is greater than last data blkno %llu",
le64_to_cpu(super->first_data_blkno),
le64_to_cpu(super->last_data_blkno));
ret = -EINVAL;
goto out;
}
blkno = (i_size_read(sb->s_bdev->bd_inode) >>
SCOUTFS_BLOCK_SM_SHIFT) - 1;
if (le64_to_cpu(super->last_data_blkno) > blkno) {
scoutfs_err(sb, "super block last data blkno %llu is outside device size last blkno %llu",
le64_to_cpu(super->last_data_blkno), blkno);
ret = -EINVAL;
goto out;
}
out:
@@ -509,6 +494,14 @@ static int scoutfs_read_supers(struct super_block *sb)
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) != le64_to_cpu(data_super->fmt_vers)) {
scoutfs_err(sb, "meta device format version %llu != data device format version %llu",
le64_to_cpu(meta_super->fmt_vers), le64_to_cpu(data_super->fmt_vers));
goto out;
}
sbi->fmt_vers = le64_to_cpu(meta_super->fmt_vers);
sbi->super = *meta_super;
out:
kfree(meta_super);
@@ -530,6 +523,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sb->s_maxbytes = MAX_LFS_FILESIZE;
sb->s_op = &scoutfs_super_ops;
sb->s_export_op = &scoutfs_export_ops;
sb->s_flags |= MS_I_VERSION;
/* btree blocks use long lived bh->b_data refs */
mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
@@ -545,12 +539,8 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
return ret;
spin_lock_init(&sbi->next_ino_lock);
init_waitqueue_head(&sbi->trans_hold_wq);
spin_lock_init(&sbi->data_wait_root.lock);
sbi->data_wait_root.root = RB_ROOT;
spin_lock_init(&sbi->trans_write_lock);
INIT_DELAYED_WORK(&sbi->trans_write_work, scoutfs_trans_write_func);
init_waitqueue_head(&sbi->trans_write_wq);
scoutfs_sysfs_init_attrs(sb, &sbi->mopts_ssa);
ret = scoutfs_parse_options(sb, data, &opts);
@@ -591,27 +581,31 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
scoutfs_sysfs_create_attrs(sb, &sbi->mopts_ssa,
mount_options_attrs, "mount_options") ?:
scoutfs_setup_triggers(sb) ?:
scoutfs_fence_setup(sb) ?:
scoutfs_block_setup(sb) ?:
scoutfs_forest_setup(sb) ?:
scoutfs_item_setup(sb) ?:
scoutfs_inode_setup(sb) ?:
scoutfs_data_setup(sb) ?:
scoutfs_setup_trans(sb) ?:
scoutfs_omap_setup(sb) ?:
scoutfs_lock_setup(sb) ?:
scoutfs_net_setup(sb) ?:
scoutfs_quorum_setup(sb) ?:
scoutfs_recov_setup(sb) ?:
scoutfs_server_setup(sb) ?:
scoutfs_quorum_setup(sb) ?:
scoutfs_client_setup(sb) ?:
scoutfs_lock_rid(sb, SCOUTFS_LOCK_WRITE, 0, sbi->rid,
&sbi->rid_lock) ?:
scoutfs_trans_get_log_trees(sb) ?:
scoutfs_volopt_setup(sb) ?:
scoutfs_srch_setup(sb);
if (ret)
goto out;
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO);
/* this interruptible iget lets hung mount be aborted with ctrl-c */
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO, SCOUTFS_LKF_INTERRUPTIBLE, 0);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
if (ret == -ERESTARTSYS)
ret = -EINTR;
goto out;
}
@@ -621,12 +615,15 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
goto out;
}
ret = scoutfs_client_advance_seq(sb, &sbi->trans_seq);
/* send requests once iget progress shows we had a server */
ret = scoutfs_trans_get_log_trees(sb);
if (ret)
goto out;
/* start up background services that use everything else */
scoutfs_inode_start(sb);
scoutfs_forest_start(sb);
scoutfs_trans_restart_sync_deadline(sb);
// scoutfs_scan_orphans(sb);
ret = 0;
out:
/* on error, generic_shutdown_super calls put_super if s_root */
@@ -647,7 +644,17 @@ static struct dentry *scoutfs_mount(struct file_system_type *fs_type, int flags,
*/
static void scoutfs_kill_sb(struct super_block *sb)
{
trace_scoutfs_kill_sb(sb);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (sbi) {
sbi->unmounting = true;
smp_wmb();
}
if (SCOUTFS_HAS_SBI(sb)) {
scoutfs_inode_orphan_stop(sb);
scoutfs_lock_unmount_begin(sb);
}
kill_block_super(sb);
}
@@ -680,7 +687,15 @@ static int __init scoutfs_module_init(void)
*/
__asm__ __volatile__ (
".section .note.git_describe,\"a\"\n"
".string \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".ascii \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_format_version_min,\"a\"\n"
".ascii \""SCOUTFS_FORMAT_VERSION_MIN_STR"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_format_version_max,\"a\"\n"
".ascii \""SCOUTFS_FORMAT_VERSION_MAX_STR"\\n\"\n"
".previous\n");
scoutfs_init_counters();
@@ -714,3 +729,5 @@ module_exit(scoutfs_module_exit)
MODULE_AUTHOR("Zach Brown <zab@versity.com>");
MODULE_LICENSE("GPL");
MODULE_INFO(git_describe, SCOUTFS_GIT_DESCRIBE);
MODULE_INFO(scoutfs_format_version_min, SCOUTFS_FORMAT_VERSION_MIN_STR);
MODULE_INFO(scoutfs_format_version_max, SCOUTFS_FORMAT_VERSION_MAX_STR);


@@ -26,13 +26,17 @@ struct net_info;
struct block_info;
struct forest_info;
struct srch_info;
struct recov_info;
struct omap_info;
struct volopt_info;
struct fence_info;
struct scoutfs_sb_info {
struct super_block *sb;
/* assigned once at the start of each mount, read-only */
u64 rid;
struct scoutfs_lock *rid_lock;
u64 fmt_vers;
struct scoutfs_super_block super;
@@ -48,28 +52,23 @@ struct scoutfs_sb_info {
struct block_info *block_info;
struct forest_info *forest_info;
struct srch_info *srch_info;
struct omap_info *omap_info;
struct volopt_info *volopt_info;
struct item_cache_info *item_cache_info;
wait_queue_head_t trans_hold_wq;
struct task_struct *trans_task;
struct fence_info *fence_info;
/* tracks tasks waiting for data extents */
struct scoutfs_data_wait_root data_wait_root;
spinlock_t trans_write_lock;
u64 trans_write_count;
/* set as transaction opens with trans holders excluded */
u64 trans_seq;
int trans_write_ret;
struct delayed_work trans_write_work;
wait_queue_head_t trans_write_wq;
struct workqueue_struct *trans_write_workq;
bool trans_deadline_expired;
struct trans_info *trans_info;
struct lock_info *lock_info;
struct lock_server_info *lock_server_info;
struct client_info *client_info;
struct server_info *server_info;
struct recov_info *recov_info;
struct sysfs_info *sfsinfo;
struct scoutfs_counters *counters;
@@ -81,7 +80,8 @@ struct scoutfs_sb_info {
struct dentry *debug_root;
bool shutdown;
bool forced_unmount;
bool unmounting;
unsigned long corruption_messages_once[SC_NR_LONGS];
};
@@ -103,6 +103,26 @@ static inline bool SCOUTFS_IS_META_BDEV(struct scoutfs_super_block *super_block)
#define SCOUTFS_META_BDEV_MODE (FMODE_READ | FMODE_WRITE | FMODE_EXCL)
static inline bool scoutfs_forcing_unmount(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return sbi->forced_unmount;
}
/*
* True if we're shutting down the system and can be used as a coarse
* indicator that we can avoid doing some work that no longer makes
* sense.
*/
static inline bool scoutfs_unmounting(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
smp_rmb();
return !sbi || sbi->unmounting;
}
/*
* A small string embedded in messages that's used to identify a
* specific mount. It's the three most significant bytes of the fsid
@@ -140,6 +160,4 @@ int scoutfs_write_super(struct super_block *sb,
/* to keep this out of the ioctl.h public interface definition */
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
__le64 scoutfs_clock_sync_id(void);
#endif


@@ -37,6 +37,16 @@ struct attr_funcs {
#define ATTR_FUNCS_RO(_name) \
static struct attr_funcs _name##_attr_funcs = __ATTR_RO(_name)
static ssize_t format_version_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return snprintf(buf, PAGE_SIZE, "%llu\n", sbi->fmt_vers);
}
ATTR_FUNCS_RO(format_version);
static ssize_t fsid_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
@@ -91,6 +101,7 @@ static ssize_t attr_funcs_show(struct kobject *kobj, struct attribute *attr,
static struct attribute *sb_id_attrs[] = {
&format_version_attr_funcs.attr,
&fsid_attr_funcs.attr,
&rid_attr_funcs.attr,
NULL,
@@ -131,9 +142,10 @@ void scoutfs_sysfs_init_attrs(struct super_block *sb,
* If this returns success then the file will be visible and show can
* be called until unmount.
*/
int scoutfs_sysfs_create_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...)
int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
struct kobject *parent,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...)
{
va_list args;
size_t name_len;
@@ -174,8 +186,8 @@ int scoutfs_sysfs_create_attrs(struct super_block *sb,
goto out;
}
ret = kobject_init_and_add(&ssa->kobj, &ssa->ktype,
scoutfs_sysfs_sb_dir(sb), "%s", ssa->name);
ret = kobject_init_and_add(&ssa->kobj, &ssa->ktype, parent,
"%s", ssa->name);
out:
if (ret) {
kfree(ssa->name);


@@ -10,6 +10,8 @@
#define SCOUTFS_ATTR_RO(_name) \
static struct kobj_attribute scoutfs_attr_##_name = __ATTR_RO(_name)
#define SCOUTFS_ATTR_RW(_name) \
static struct kobj_attribute scoutfs_attr_##_name = __ATTR_RW(_name)
#define SCOUTFS_ATTR_PTR(_name) \
&scoutfs_attr_##_name.attr
@@ -34,9 +36,14 @@ struct scoutfs_sysfs_attrs {
void scoutfs_sysfs_init_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa);
int scoutfs_sysfs_create_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...);
int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
struct kobject *parent,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...);
#define scoutfs_sysfs_create_attrs(sb, ssa, attrs, fmt, args...) \
scoutfs_sysfs_create_attrs_parent(sb, scoutfs_sysfs_sb_dir(sb), \
ssa, attrs, fmt, ##args)
void scoutfs_sysfs_destroy_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa);


@@ -17,6 +17,7 @@
#include <linux/atomic.h>
#include <linux/writeback.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include "super.h"
#include "trans.h"
@@ -39,51 +40,46 @@
* track the relationships between dirty blocks so there's only ever one
* transaction being built.
*
* The copy of the on-disk super block in the fs sb info has its header
* sequence advanced so that new dirty blocks inherit this dirty
* sequence number. It's only advanced once all those dirty blocks are
* reachable after having first written them all out and then the new
* super with that seq. It's first incremented at mount.
* Committing the current dirty transaction can be triggered by sync, a
* regular background commit interval, reaching a dirty block threshold,
* or the transaction running out of its private allocator resources.
* Once all the current holders release the writing func writes out the
* dirty blocks while excluding holders until it finishes.
*
* Unfortunately writers can nest. We don't bother trying to special
* case holding a transaction that you're already holding because that
* requires per-task storage. We just let anyone hold transactions
* regardless of waiters waiting to write, which risks waiters waiting a
* very long time.
* Unfortunately writing holders can nest. We track nested hold callers
* with the per-task journal_info pointer to avoid deadlocks between
* holders that might otherwise wait for a pending commit.
*/
/* sync dirty data at least this often */
#define TRANS_SYNC_DELAY (HZ * 10)
/*
* XXX move the rest of the super trans_ fields here.
*/
struct trans_info {
spinlock_t lock;
unsigned reserved_items;
unsigned reserved_vals;
unsigned holders;
bool writing;
struct super_block *sb;
atomic_t holders;
struct scoutfs_log_trees lt;
struct scoutfs_alloc alloc;
struct scoutfs_block_writer wri;
wait_queue_head_t hold_wq;
struct task_struct *task;
spinlock_t write_lock;
u64 write_count;
int write_ret;
struct delayed_work write_work;
wait_queue_head_t write_wq;
struct workqueue_struct *write_workq;
bool deadline_expired;
};
#define DECLARE_TRANS_INFO(sb, name) \
struct trans_info *name = SCOUTFS_SB(sb)->trans_info
static bool drained_holders(struct trans_info *tri)
{
bool drained;
spin_lock(&tri->lock);
tri->writing = true;
drained = tri->holders == 0;
spin_unlock(&tri->lock);
return drained;
}
/* avoid the high sign bit out of an abundance of caution */
#define TRANS_HOLDERS_WRITE_FUNC_BIT (1 << 30)
#define TRANS_HOLDERS_COUNT_MASK (TRANS_HOLDERS_WRITE_FUNC_BIT - 1)
static int commit_btrees(struct super_block *sb)
{
@@ -105,6 +101,7 @@ static int commit_btrees(struct super_block *sb)
*/
int scoutfs_trans_get_log_trees(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_log_trees lt;
int ret = 0;
@@ -117,6 +114,11 @@ int scoutfs_trans_get_log_trees(struct super_block *sb)
scoutfs_forest_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
scoutfs_data_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
/* first set during mount from 0 to nonzero allows commits */
spin_lock(&tri->write_lock);
sbi->trans_seq = le64_to_cpu(lt.get_trans_seq);
spin_unlock(&tri->write_lock);
}
return ret;
}
@@ -128,6 +130,35 @@ bool scoutfs_trans_has_dirty(struct super_block *sb)
return scoutfs_block_writer_has_dirty(sb, &tri->wri);
}
/*
* This is racing with wait_event conditions, make sure our atomic
* stores and waitqueue loads are ordered.
*/
static void sub_holders_and_wake(struct super_block *sb, int val)
{
DECLARE_TRANS_INFO(sb, tri);
atomic_sub(val, &tri->holders);
smp_mb(); /* make sure sub is visible before we wake */
if (waitqueue_active(&tri->hold_wq))
wake_up(&tri->hold_wq);
}
/*
* called as a wait_event condition, needs to be careful to not change
* task state and is racing with waking paths that sub_return, test, and
* wake.
*/
static bool drained_holders(struct trans_info *tri)
{
int holders;
smp_mb(); /* make sure task in wait_event queue before atomic read */
holders = atomic_read(&tri->holders) & TRANS_HOLDERS_COUNT_MASK;
return holders == 0;
}
/*
* This work func is responsible for writing out all the dirty blocks
* that make up the current dirty transaction. It prevents writers from
@@ -138,90 +169,93 @@ bool scoutfs_trans_has_dirty(struct super_block *sb)
* functions that would try to hold the transaction. We record the task
that is committing the transaction so that holding won't deadlock.
*
* Any dirty block had to have allocated a new blkno which would have
* created dirty allocator metadata blocks. We can avoid writing
* entirely if we don't have any dirty metadata blocks. This is
* important because we don't try to serialize this work during
unmount; we can execute as the vfs is shutting down, so we need to
* decide that nothing is dirty without calling the vfs at all.
* Once we clear the write func bit in holders then waiting holders can
* enter the transaction and continue modifying the transaction. Once
* we start writing we consider the transaction done and won't exit,
* clearing the write func bit, until get_log_trees has opened the next
* transaction. The exception is forced unmount which is allowed to
* generate errors and throw away data.
*
* We first try to sync the dirty inodes and write their dirty data blocks,
* then we write all our dirty metadata blocks, and only when those succeed
* do we write the new super that references all of these newly written blocks.
*
* If there are write errors then blocks are kept dirty in memory and will
* be written again at the next sync.
* This means that the only way fsync can return an error is if we're in
* forced unmount.
*/
void scoutfs_trans_write_func(struct work_struct *work)
{
struct scoutfs_sb_info *sbi = container_of(work, struct scoutfs_sb_info,
trans_write_work.work);
struct super_block *sb = sbi->sb;
DECLARE_TRANS_INFO(sb, tri);
u64 trans_seq = sbi->trans_seq;
struct trans_info *tri = container_of(work, struct trans_info, write_work.work);
struct super_block *sb = tri->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
bool retrying = false;
char *s = NULL;
int ret = 0;
sbi->trans_task = current;
tri->task = current;
wait_event(sbi->trans_hold_wq, drained_holders(tri));
/* mark that we're writing so holders wait for us to finish and clear our bit */
atomic_add(TRANS_HOLDERS_WRITE_FUNC_BIT, &tri->holders);
trace_scoutfs_trans_write_func(sb,
scoutfs_block_writer_dirty_bytes(sb, &tri->wri));
wait_event(tri->hold_wq, drained_holders(tri));
if (!scoutfs_block_writer_has_dirty(sb, &tri->wri) &&
!scoutfs_item_dirty_pages(sb)) {
if (sbi->trans_deadline_expired) {
/*
* If we're not writing data then we only advance the
* seq at the sync deadline interval. This keeps idle
* mounts from pinning a seq and stopping readers of the
* seq indices but doesn't send a message for every sync
* syscall.
*/
ret = scoutfs_client_advance_seq(sb, &trans_seq);
if (ret < 0)
s = "clean advance seq";
}
/* mount hasn't opened first transaction yet, still complete sync */
if (sbi->trans_seq == 0) {
ret = 0;
goto out;
}
if (sbi->trans_deadline_expired)
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
goto out;
}
trace_scoutfs_trans_write_func(sb, scoutfs_block_writer_dirty_bytes(sb, &tri->wri),
scoutfs_item_dirty_pages(sb));
if (tri->deadline_expired)
scoutfs_inc_counter(sb, trans_commit_timer);
scoutfs_inc_counter(sb, trans_commit_written);
/* XXX this all needs serious work for dealing with errors */
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb,
&tri->alloc, &tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "advance seq", scoutfs_client_advance_seq(sb, &trans_seq)) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
do {
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb, &tri->alloc,
&tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
if (ret < 0) {
if (!retrying) {
scoutfs_warn(sb, "critical transaction commit failure: %s = %d, retrying",
s, ret);
retrying = true;
}
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
break;
}
msleep(2 * MSEC_PER_SEC);
} else if (retrying) {
scoutfs_info(sb, "retried transaction commit succeeded");
}
} while (ret < 0);
out:
if (ret < 0)
scoutfs_err(sb, "critical transaction commit failure: %s, %d",
s, ret);
spin_lock(&tri->write_lock);
tri->write_count++;
tri->write_ret = ret;
spin_unlock(&tri->write_lock);
wake_up(&tri->write_wq);
spin_lock(&sbi->trans_write_lock);
sbi->trans_write_count++;
sbi->trans_write_ret = ret;
sbi->trans_seq = trans_seq;
spin_unlock(&sbi->trans_write_lock);
wake_up(&sbi->trans_write_wq);
/* we're done, wake waiting holders */
sub_holders_and_wake(sb, TRANS_HOLDERS_WRITE_FUNC_BIT);
spin_lock(&tri->lock);
tri->writing = false;
spin_unlock(&tri->lock);
wake_up(&sbi->trans_hold_wq);
sbi->trans_task = NULL;
tri->task = NULL;
scoutfs_trans_restart_sync_deadline(sb);
}
@@ -232,17 +266,17 @@ struct write_attempt {
};
/* this is called as a wait_event() condition so it can't change task state */
static int write_attempted(struct scoutfs_sb_info *sbi,
struct write_attempt *attempt)
static int write_attempted(struct super_block *sb, struct write_attempt *attempt)
{
DECLARE_TRANS_INFO(sb, tri);
int done = 1;
spin_lock(&sbi->trans_write_lock);
if (sbi->trans_write_count > attempt->count)
attempt->ret = sbi->trans_write_ret;
spin_lock(&tri->write_lock);
if (tri->write_count > attempt->count)
attempt->ret = tri->write_ret;
else
done = 0;
spin_unlock(&sbi->trans_write_lock);
spin_unlock(&tri->write_lock);
return done;
}
@@ -252,10 +286,12 @@ static int write_attempted(struct scoutfs_sb_info *sbi,
* We always have delayed sync work pending but the caller wants it
* to execute immediately.
*/
static void queue_trans_work(struct scoutfs_sb_info *sbi)
static void queue_trans_work(struct super_block *sb)
{
sbi->trans_deadline_expired = false;
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work, 0);
DECLARE_TRANS_INFO(sb, tri);
tri->deadline_expired = false;
mod_delayed_work(tri->write_workq, &tri->write_work, 0);
}
/*
@@ -268,26 +304,24 @@ static void queue_trans_work(struct scoutfs_sb_info *sbi)
*/
int scoutfs_trans_sync(struct super_block *sb, int wait)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct write_attempt attempt;
DECLARE_TRANS_INFO(sb, tri);
struct write_attempt attempt = { .ret = 0 };
int ret;
if (!wait) {
queue_trans_work(sbi);
queue_trans_work(sb);
return 0;
}
spin_lock(&sbi->trans_write_lock);
attempt.count = sbi->trans_write_count;
spin_unlock(&sbi->trans_write_lock);
spin_lock(&tri->write_lock);
attempt.count = tri->write_count;
spin_unlock(&tri->write_lock);
queue_trans_work(sbi);
queue_trans_work(sb);
ret = wait_event_interruptible(sbi->trans_write_wq,
write_attempted(sbi, &attempt));
if (ret == 0)
ret = attempt.ret;
wait_event(tri->write_wq, write_attempted(sb, &attempt));
ret = attempt.ret;
return ret;
}
@@ -303,72 +337,91 @@ int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
void scoutfs_trans_restart_sync_deadline(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
sbi->trans_deadline_expired = true;
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work,
tri->deadline_expired = true;
mod_delayed_work(tri->write_workq, &tri->write_work,
TRANS_SYNC_DELAY);
}
/*
* Each thread reserves space in the segment for their dirty items while
* they hold the transaction. This is calculated before the first
* transaction hold is acquired. It includes all the potential nested
* item manipulation that could happen with the transaction held.
* Including nested holds avoids having to deal with writing out partial
* transactions while a caller still holds the transaction.
* We store nested holders in the lower bits of journal_info. We use
* some higher bits as a magic value to detect if something goes
* horribly wrong and it gets clobbered.
*/
#define SCOUTFS_RESERVATION_MAGIC 0xd57cd13b
struct scoutfs_reservation {
unsigned magic;
unsigned holders;
struct scoutfs_item_count reserved;
struct scoutfs_item_count actual;
};
#define TRANS_JI_MAGIC 0xd5700000
#define TRANS_JI_MAGIC_MASK 0xfff00000
#define TRANS_JI_COUNT_MASK 0x000fffff
/* returns true if a caller already had a holder counted in journal_info */
static bool inc_journal_info_holders(void)
{
unsigned long holders = (unsigned long)current->journal_info;
WARN_ON_ONCE(holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC));
if (holders == 0)
holders = TRANS_JI_MAGIC;
holders++;
current->journal_info = (void *)holders;
return (holders > (TRANS_JI_MAGIC | 1));
}
static void dec_journal_info_holders(void)
{
unsigned long holders = (unsigned long)current->journal_info;
WARN_ON_ONCE(holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC));
WARN_ON_ONCE((holders & TRANS_JI_COUNT_MASK) == 0);
holders--;
if (holders == TRANS_JI_MAGIC)
holders = 0;
current->journal_info = (void *)holders;
}
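
A minimal userspace sketch of the journal_info encoding above; the uintptr_t variable standing in for current->journal_info and the function names are illustrative, not scoutfs API:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define JI_MAGIC       0xd5700000UL
#define JI_MAGIC_MASK  0xfff00000UL
#define JI_COUNT_MASK  0x000fffffUL

static uintptr_t journal_info;	/* stand-in for current->journal_info */

/* returns nonzero if this task already held, i.e. this is a nested hold */
static int inc_holders(void)
{
	uintptr_t h = journal_info;

	if (h == 0)
		h = JI_MAGIC;
	h++;
	journal_info = h;
	return h > (JI_MAGIC | 1);
}

static void dec_holders(void)
{
	uintptr_t h = journal_info;

	assert((h & JI_MAGIC_MASK) == JI_MAGIC);
	assert((h & JI_COUNT_MASK) != 0);
	h--;
	if (h == JI_MAGIC)
		h = 0;
	journal_info = h;
}

int main(void)
{
	assert(inc_holders() == 0);	/* first hold */
	assert(inc_holders() != 0);	/* nested hold piggybacks on the first */
	dec_holders();
	dec_holders();
	assert(journal_info == 0);	/* fully released, pointer back to NULL */
	printf("ok\n");
	return 0;
}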
/*
* Try to hold the transaction. If a caller already holds the trans then
* we piggy back on their hold. We wait if the writer is trying to
write out the transaction. And if our items won't fit then we kick off
* a write.
* This is called as the wait_event condition for holding a transaction.
* Increment the holder count unless the writer is present. We return
* false to wait until the writer finishes and wakes us.
*
* This is called as a condition for wait_event. It is very limited in
* the locking (blocking) it can do because the caller has set the task
state before testing the condition to safely race with waking after
* setting the condition. Our checking the amount of dirty metadata
* blocks and free data blocks is racy, but we don't mind the risk of
* delaying or prematurely forcing commits.
This can be racing with itself while there are no waiters. We retry
* the cmpxchg instead of returning and waiting.
*/
static bool acquired_hold(struct super_block *sb,
struct scoutfs_reservation *rsv,
const struct scoutfs_item_count *cnt)
static bool inc_holders_unless_writer(struct trans_info *tri)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
bool acquired = false;
unsigned items;
unsigned vals;
int holders;
spin_lock(&tri->lock);
do {
smp_mb(); /* make sure we read after wait puts task in queue */
holders = atomic_read(&tri->holders);
if (holders & TRANS_HOLDERS_WRITE_FUNC_BIT)
return false;
trace_scoutfs_trans_acquired_hold(sb, cnt, rsv, rsv->holders,
&rsv->reserved, &rsv->actual,
tri->holders, tri->writing,
tri->reserved_items,
tri->reserved_vals);
} while (atomic_cmpxchg(&tri->holders, holders, holders + 1) != holders);
/* use a caller's existing reservation */
if (rsv->holders)
goto hold;
return true;
}
/* wait until the writing thread is finished */
if (tri->writing)
goto out;
/* see if we can reserve space for our item count */
items = tri->reserved_items + cnt->items;
vals = tri->reserved_vals + cnt->vals;
/*
* As we drop the last trans holder we try to wake a writing thread that
* was waiting for us to finish.
*/
static void release_holders(struct super_block *sb)
{
dec_journal_info_holders();
sub_holders_and_wake(sb, 1);
}
/*
* The caller has incremented holders so it is blocking commits. We
* make some quick checks to see if we need to trigger and wait for
* another commit before proceeding.
*/
static bool commit_before_hold(struct super_block *sb, struct trans_info *tri)
{
/*
* In theory each dirty item page could be straddling two full
* blocks, requiring 4 allocations for each item cache page.
@@ -378,11 +431,9 @@ static bool acquired_hold(struct super_block *sb,
* that it accounts for having to dirty parent blocks and
* whatever dirtying is done during the transaction hold.
*/
if (scoutfs_alloc_meta_low(sb, &tri->alloc,
scoutfs_item_dirty_pages(sb) * 2)) {
if (scoutfs_alloc_meta_low(sb, &tri->alloc, scoutfs_item_dirty_pages(sb) * 2)) {
scoutfs_inc_counter(sb, trans_commit_dirty_meta_full);
queue_trans_work(sbi);
goto out;
return true;
}
/*
@@ -394,70 +445,100 @@ static bool acquired_hold(struct super_block *sb,
*/
if (scoutfs_alloc_meta_low(sb, &tri->alloc, 16)) {
scoutfs_inc_counter(sb, trans_commit_meta_alloc_low);
queue_trans_work(sbi);
goto out;
return true;
}
/* Try to refill data allocator before premature enospc */
if (scoutfs_data_alloc_free_bytes(sb) <= SCOUTFS_TRANS_DATA_ALLOC_LWM) {
/* if we're low and can't refill then alloc could empty and return enospc */
if (scoutfs_data_alloc_should_refill(sb, SCOUTFS_ALLOC_DATA_REFILL_THRESH)) {
scoutfs_inc_counter(sb, trans_commit_data_alloc_low);
queue_trans_work(sbi);
goto out;
return true;
}
tri->reserved_items = items;
tri->reserved_vals = vals;
rsv->reserved.items = cnt->items;
rsv->reserved.vals = cnt->vals;
hold:
rsv->holders++;
tri->holders++;
acquired = true;
out:
spin_unlock(&tri->lock);
return acquired;
return false;
}
int scoutfs_hold_trans(struct super_block *sb,
const struct scoutfs_item_count cnt)
/*
* called as a wait_event condition, needs to be careful to not change
* task state and is racing with waking paths that sub_return, test, and
* wake.
*/
static bool holders_no_writer(struct trans_info *tri)
{
smp_mb(); /* make sure task in wait_event queue before atomic read */
return !(atomic_read(&tri->holders) & TRANS_HOLDERS_WRITE_FUNC_BIT);
}
/*
* Try to hold the transaction. Holding the transaction prevents it
* from being committed. If a transaction is currently being written
* then we'll block until it's done and our hold can be granted.
*
* If a caller already holds the trans then we unconditionally acquire
* our hold and return to avoid deadlocks with our caller, the writing
* thread, and us. We record nested holds in a call stack with the
* journal_info pointer in the task_struct.
*
* The writing thread marks itself as a global trans_task which
* short-circuits all the hold machinery so it can call code that would
* otherwise try to hold transactions while it is writing.
*
* If the caller is adding metadata items that will eventually consume
* free space -- not dirtying existing items or adding deletion items --
* then we can return enospc if our metadata allocator indicates that
* we're low on space.
*/
int scoutfs_hold_trans(struct super_block *sb, bool allocing)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv;
DECLARE_TRANS_INFO(sb, tri);
u64 seq;
int ret;
/*
* Caller shouldn't provide garbage counts, nor counts that
* can't fit in segments by themselves.
*/
if (WARN_ON_ONCE(cnt.items <= 0 || cnt.vals < 0))
return -EINVAL;
if (current == sbi->trans_task)
if (current == tri->task)
return 0;
rsv = current->journal_info;
if (rsv == NULL) {
rsv = kzalloc(sizeof(struct scoutfs_reservation), GFP_NOFS);
if (!rsv)
return -ENOMEM;
for (;;) {
/* shouldn't get holders until mount finishes (not locking for cheap test) */
if (WARN_ON_ONCE(sbi->trans_seq == 0)) {
ret = -EINVAL;
break;
}
rsv->magic = SCOUTFS_RESERVATION_MAGIC;
current->journal_info = rsv;
/* if a caller already has a hold we acquire unconditionally */
if (inc_journal_info_holders()) {
atomic_inc(&tri->holders);
ret = 0;
break;
}
/* wait until the writer work is finished */
if (!inc_holders_unless_writer(tri)) {
dec_journal_info_holders();
wait_event(tri->hold_wq, holders_no_writer(tri));
continue;
}
/* return enospc if server is into reserved blocks and we're allocating */
if (allocing && scoutfs_alloc_test_flag(sb, &tri->alloc, SCOUTFS_ALLOC_FLAG_LOW)) {
release_holders(sb);
ret = -ENOSPC;
break;
}
/* see if we need to trigger and wait for a commit before holding */
if (commit_before_hold(sb, tri)) {
seq = scoutfs_trans_sample_seq(sb);
release_holders(sb);
queue_trans_work(sb);
wait_event(tri->hold_wq, scoutfs_trans_sample_seq(sb) != seq);
continue;
}
ret = 0;
break;
}
BUG_ON(rsv->magic != SCOUTFS_RESERVATION_MAGIC);
ret = wait_event_interruptible(sbi->trans_hold_wq,
acquired_hold(sb, rsv, &cnt));
if (ret && rsv->holders == 0) {
current->journal_info = NULL;
kfree(rsv);
}
trace_scoutfs_hold_trans(sb, current->journal_info, atomic_read(&tri->holders), ret);
return ret;
}
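
A standalone sketch of the holders encoding with its reserved writer bit, approximated with C11 atomics; the kernel version sleeps on hold_wq with the barriers shown above rather than spinning, and the names here are illustrative:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define WRITE_FUNC_BIT (1 << 30)
#define COUNT_MASK     (WRITE_FUNC_BIT - 1)

static atomic_int holders;

/* holder side: take a reference unless the committing writer is active */
static bool hold_unless_writer(void)
{
	int h;

	do {
		h = atomic_load(&holders);
		if (h & WRITE_FUNC_BIT)
			return false;	/* caller would wait for the writer */
	} while (!atomic_compare_exchange_weak(&holders, &h, h + 1));

	return true;
}

static void release_hold(void)
{
	atomic_fetch_sub(&holders, 1);
}

/* writer side: announce the commit, then wait for the holder count to drain */
static void writer_begin(void)
{
	atomic_fetch_add(&holders, WRITE_FUNC_BIT);
	while (atomic_load(&holders) & COUNT_MASK)
		;	/* the kernel waits on a waitqueue here, not a spin */
}

static void writer_end(void)
{
	atomic_fetch_sub(&holders, WRITE_FUNC_BIT);
}

int main(void)
{
	printf("%d\n", hold_unless_writer());	/* 1: no writer active */
	release_hold();
	writer_begin();
	printf("%d\n", hold_unless_writer());	/* 0: writer bit is set */
	writer_end();
	return 0;
}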
@@ -468,86 +549,21 @@ int scoutfs_hold_trans(struct super_block *sb,
*/
bool scoutfs_trans_held(void)
{
struct scoutfs_reservation *rsv = current->journal_info;
unsigned long holders = (unsigned long)current->journal_info;
return rsv && rsv->magic == SCOUTFS_RESERVATION_MAGIC;
return (holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) == TRANS_JI_MAGIC));
}
/*
* Record a transaction holder's individual contribution to the dirty
* items in the current transaction. We're making sure that the
* reservation matches the possible item manipulations while they hold
* the reservation.
*
* It is possible and legitimate for an individual contribution to be
* negative if they delete dirty items. The item cache makes sure that
* the total dirty item count doesn't fall below zero.
*/
void scoutfs_trans_track_item(struct super_block *sb, signed items,
signed vals)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv = current->journal_info;
if (current == sbi->trans_task)
return;
BUG_ON(!rsv || rsv->magic != SCOUTFS_RESERVATION_MAGIC);
rsv->actual.items += items;
rsv->actual.vals += vals;
trace_scoutfs_trans_track_item(sb, items, vals, rsv->actual.items,
rsv->actual.vals, rsv->reserved.items,
rsv->reserved.vals);
WARN_ON_ONCE(rsv->actual.items > rsv->reserved.items);
WARN_ON_ONCE(rsv->actual.vals > rsv->reserved.vals);
}
/*
* As we drop the last hold in the reservation we try and wake other
* hold attempts that were waiting for space. As we drop the last trans
* holder we try to wake a writing thread that was waiting for us to
* finish.
*/
void scoutfs_release_trans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv;
DECLARE_TRANS_INFO(sb, tri);
bool wake = false;
if (current == sbi->trans_task)
if (current == tri->task)
return;
rsv = current->journal_info;
BUG_ON(!rsv || rsv->magic != SCOUTFS_RESERVATION_MAGIC);
release_holders(sb);
spin_lock(&tri->lock);
trace_scoutfs_release_trans(sb, rsv, rsv->holders, &rsv->reserved,
&rsv->actual, tri->holders, tri->writing,
tri->reserved_items, tri->reserved_vals);
BUG_ON(rsv->holders <= 0);
BUG_ON(tri->holders <= 0);
if (--rsv->holders == 0) {
tri->reserved_items -= rsv->reserved.items;
tri->reserved_vals -= rsv->reserved.vals;
current->journal_info = NULL;
kfree(rsv);
wake = true;
}
if (--tri->holders == 0)
wake = true;
spin_unlock(&tri->lock);
if (wake)
wake_up(&sbi->trans_hold_wq);
trace_scoutfs_release_trans(sb, current->journal_info, atomic_read(&tri->holders), 0);
}
/*
@@ -557,12 +573,13 @@ void scoutfs_release_trans(struct super_block *sb)
*/
u64 scoutfs_trans_sample_seq(struct super_block *sb)
{
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
u64 ret;
spin_lock(&sbi->trans_write_lock);
spin_lock(&tri->write_lock);
ret = sbi->trans_seq;
spin_unlock(&sbi->trans_write_lock);
spin_unlock(&tri->write_lock);
return ret;
}
@@ -576,12 +593,17 @@ int scoutfs_setup_trans(struct super_block *sb)
if (!tri)
return -ENOMEM;
spin_lock_init(&tri->lock);
tri->sb = sb;
atomic_set(&tri->holders, 0);
scoutfs_block_writer_init(sb, &tri->wri);
sbi->trans_write_workq = alloc_workqueue("scoutfs_trans",
WQ_UNBOUND, 1);
if (!sbi->trans_write_workq) {
spin_lock_init(&tri->write_lock);
INIT_DELAYED_WORK(&tri->write_work, scoutfs_trans_write_func);
init_waitqueue_head(&tri->write_wq);
init_waitqueue_head(&tri->hold_wq);
tri->write_workq = alloc_workqueue("scoutfs_trans", WQ_UNBOUND, 1);
if (!tri->write_workq) {
kfree(tri);
return -ENOMEM;
}
@@ -592,8 +614,15 @@ int scoutfs_setup_trans(struct super_block *sb)
}
/*
* kill_sb calls sync before getting here so we know that dirty data
* should be in flight. We just have to wait for it to quiesce.
* While the vfs will have done an fs level sync before calling
* put_super, we may have done work down in our level after all the fs
* ops were done. An example is final inode deletion in iput, that's
* done in generic_shutdown_super after the sync and before calling our
* put_super.
*
* So we always try to write any remaining dirty transactions before
* shutting down. Typically there won't be any dirty data and the
* worker will just return.
*/
void scoutfs_shutdown_trans(struct super_block *sb)
{
@@ -601,13 +630,18 @@ void scoutfs_shutdown_trans(struct super_block *sb)
DECLARE_TRANS_INFO(sb, tri);
if (tri) {
scoutfs_block_writer_forget_all(sb, &tri->wri);
if (sbi->trans_write_workq) {
cancel_delayed_work_sync(&sbi->trans_write_work);
destroy_workqueue(sbi->trans_write_workq);
if (tri->write_workq) {
/* immediately queues pending timer */
flush_delayed_work(&tri->write_work);
/* prevents re-arming if it has to wait */
cancel_delayed_work_sync(&tri->write_work);
destroy_workqueue(tri->write_workq);
/* trans work scheduled after shutdown sees null */
sbi->trans_write_workq = NULL;
tri->write_workq = NULL;
}
scoutfs_block_writer_forget_all(sb, &tri->wri);
kfree(tri);
sbi->trans_info = NULL;
}


@@ -1,26 +1,16 @@
#ifndef _SCOUTFS_TRANS_H_
#define _SCOUTFS_TRANS_H_
/* the server will attempt to fill data allocs for each trans */
#define SCOUTFS_TRANS_DATA_ALLOC_HWM (2ULL * 1024 * 1024 * 1024)
/* the client will force commits if data allocators get too low */
#define SCOUTFS_TRANS_DATA_ALLOC_LWM (256ULL * 1024 * 1024)
#include "count.h"
void scoutfs_trans_write_func(struct work_struct *work);
int scoutfs_trans_sync(struct super_block *sb, int wait);
int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
int datasync);
void scoutfs_trans_restart_sync_deadline(struct super_block *sb);
int scoutfs_hold_trans(struct super_block *sb,
const struct scoutfs_item_count cnt);
int scoutfs_hold_trans(struct super_block *sb, bool allocing);
bool scoutfs_trans_held(void);
void scoutfs_release_trans(struct super_block *sb);
u64 scoutfs_trans_sample_seq(struct super_block *sb);
void scoutfs_trans_track_item(struct super_block *sb, signed items,
signed vals);
int scoutfs_trans_get_log_trees(struct super_block *sb);
bool scoutfs_trans_has_dirty(struct super_block *sb);


@@ -38,10 +38,7 @@ struct scoutfs_triggers {
struct scoutfs_triggers *name = SCOUTFS_SB(sb)->triggers
static char *names[] = {
[SCOUTFS_TRIGGER_BTREE_STALE_READ] = "btree_stale_read",
[SCOUTFS_TRIGGER_BTREE_ADVANCE_RING_HALF] = "btree_advance_ring_half",
[SCOUTFS_TRIGGER_HARD_STALE_ERROR] = "hard_stale_error",
[SCOUTFS_TRIGGER_SEG_STALE_READ] = "seg_stale_read",
[SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE] = "block_remove_stale",
[SCOUTFS_TRIGGER_STATFS_LOCK_PURGE] = "statfs_lock_purge",
};


@@ -2,10 +2,7 @@
#define _SCOUTFS_TRIGGERS_H_
enum scoutfs_trigger {
SCOUTFS_TRIGGER_BTREE_STALE_READ,
SCOUTFS_TRIGGER_BTREE_ADVANCE_RING_HALF,
SCOUTFS_TRIGGER_HARD_STALE_ERROR,
SCOUTFS_TRIGGER_SEG_STALE_READ,
SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE,
SCOUTFS_TRIGGER_STATFS_LOCK_PURGE,
SCOUTFS_TRIGGER_NR,
};

kmod/src/volopt.c Normal file

@@ -0,0 +1,188 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include "super.h"
#include "client.h"
#include "volopt.h"
/*
* Volume options are exposed through a sysfs directory. Getting and
* setting the values sends rpcs to the server who owns the options in
* the super block.
*/
struct volopt_info {
struct super_block *sb;
struct scoutfs_sysfs_attrs ssa;
};
#define DECLARE_VOLOPT_INFO(sb, name) \
struct volopt_info *name = SCOUTFS_SB(sb)->volopt_info
#define DECLARE_VOLOPT_INFO_KOBJ(kobj, name) \
DECLARE_VOLOPT_INFO(SCOUTFS_SYSFS_ATTRS_SB(kobj), name)
/*
* attribute arrays need to be dense but the options we export could
well become sparse over time. .show and .store are generic and we
* have a lookup table to map the attributes array indexes to the number
* and name of the option.
*/
static struct volopt_nr_name {
int nr;
char *name;
} volopt_table[] = {
{ SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_NR, "data_alloc_zone_blocks" },
};
/* initialized by setup, pointer array is null terminated */
static struct kobj_attribute volopt_attrs[ARRAY_SIZE(volopt_table)];
static struct attribute *volopt_attr_ptrs[ARRAY_SIZE(volopt_table) + 1];
static void get_opt_data(struct kobj_attribute *attr, struct scoutfs_volume_options *volopt,
u64 *bit, __le64 **opt)
{
size_t index = attr - &volopt_attrs[0];
int nr = volopt_table[index].nr;
*bit = 1ULL << nr;
*opt = &volopt->set_bits + 1 + nr;
}
static ssize_t volopt_attr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
DECLARE_VOLOPT_INFO_KOBJ(kobj, vinf);
struct super_block *sb = vinf->sb;
struct scoutfs_volume_options volopt;
__le64 *opt;
u64 bit;
int ret;
ret = scoutfs_client_get_volopt(sb, &volopt);
if (ret < 0)
return ret;
get_opt_data(attr, &volopt, &bit, &opt);
if (le64_to_cpu(volopt.set_bits) & bit) {
return snprintf(buf, PAGE_SIZE, "%llu", le64_to_cpup(opt));
} else {
buf[0] = '\0';
return 0;
}
}
static ssize_t volopt_attr_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
DECLARE_VOLOPT_INFO_KOBJ(kobj, vinf);
struct super_block *sb = vinf->sb;
struct scoutfs_volume_options volopt = {0,};
u8 chars[32];
__le64 *opt;
u64 bit;
u64 val;
int ret;
if (count == 0)
return 0;
if (count > sizeof(chars) - 1)
return -ERANGE;
get_opt_data(attr, &volopt, &bit, &opt);
if (buf[0] == '\n' || buf[0] == '\r') {
volopt.set_bits = cpu_to_le64(bit);
ret = scoutfs_client_clear_volopt(sb, &volopt);
} else {
memcpy(chars, buf, count);
chars[count] = '\0';
ret = kstrtoull(chars, 0, &val);
if (ret < 0)
return ret;
volopt.set_bits = cpu_to_le64(bit);
*opt = cpu_to_le64(val);
ret = scoutfs_client_set_volopt(sb, &volopt);
}
if (ret == 0)
ret = count;
return ret;
}
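
A userspace sketch of exercising the store semantics above; the sysfs path is a placeholder, not taken from this diff:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* hypothetical location of the option file created by volopt setup */
#define OPT_PATH "/sys/fs/scoutfs/FSID/volume_options/data_alloc_zone_blocks"

static int write_opt(const char *s)
{
	int fd = open(OPT_PATH, O_WRONLY);
	int ret;

	if (fd < 0)
		return -1;
	ret = write(fd, s, strlen(s)) < 0 ? -1 : 0;
	close(fd);
	return ret;
}

int main(void)
{
	write_opt("65536\n");	/* parsed with kstrtoull, sent as a set_volopt rpc */
	write_opt("\n");	/* a leading newline clears the option via clear_volopt */
	return 0;
}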
/*
* The volume option sysfs files are slim shims around RPCs so this
* should be called after the client is setup and before it is torn
* down.
*/
int scoutfs_volopt_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct volopt_info *vinf;
int ret;
int i;
/* persistent volume options are always a bitmap u64 then the 64 options */
BUILD_BUG_ON(sizeof(struct scoutfs_volume_options) != (1 + 64) * 8);
vinf = kzalloc(sizeof(struct volopt_info), GFP_KERNEL);
if (!vinf) {
ret = -ENOMEM;
goto out;
}
scoutfs_sysfs_init_attrs(sb, &vinf->ssa);
vinf->sb = sb;
sbi->volopt_info = vinf;
for (i = 0; i < ARRAY_SIZE(volopt_table); i++) {
volopt_attrs[i] = (struct kobj_attribute) {
.attr = { .name = volopt_table[i].name, .mode = S_IWUSR | S_IRUGO },
.show = volopt_attr_show,
.store = volopt_attr_store,
};
volopt_attr_ptrs[i] = &volopt_attrs[i].attr;
}
BUILD_BUG_ON(ARRAY_SIZE(volopt_table) != ARRAY_SIZE(volopt_attr_ptrs) - 1);
volopt_attr_ptrs[i] = NULL;
ret = scoutfs_sysfs_create_attrs(sb, &vinf->ssa, volopt_attr_ptrs, "volume_options");
if (ret < 0)
goto out;
out:
if (ret)
scoutfs_volopt_destroy(sb);
return ret;
}
void scoutfs_volopt_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct volopt_info *vinf = SCOUTFS_SB(sb)->volopt_info;
if (vinf) {
scoutfs_sysfs_destroy_attrs(sb, &vinf->ssa);
kfree(vinf);
sbi->volopt_info = NULL;
}
}

kmod/src/volopt.h Normal file

@@ -0,0 +1,7 @@
#ifndef _SCOUTFS_VOLOPT_H_
#define _SCOUTFS_VOLOPT_H_
int scoutfs_volopt_setup(struct super_block *sb);
void scoutfs_volopt_destroy(struct super_block *sb);
#endif


@@ -97,6 +97,7 @@ static int unknown_prefix(const char *name)
#define HIDE_TAG "hide."
#define SRCH_TAG "srch."
#define TOTL_TAG "totl."
#define TAG_LEN (sizeof(HIDE_TAG) - 1)
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
@@ -119,6 +120,9 @@ int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
} else if (!strncmp(name, SRCH_TAG, TAG_LEN)) {
if (++tgs->srch == 0)
return -EINVAL;
} else if (!strncmp(name, TOTL_TAG, TAG_LEN)) {
if (++tgs->totl == 0)
return -EINVAL;
} else {
/* only reason to use scoutfs. is tags */
if (!found)
@@ -364,7 +368,7 @@ static int change_xattr_items(struct inode *inode, u64 id,
}
/* update dirtied overlapping existing items, last partial first */
for (i = old_parts - 1; i >= 0; i--) {
for (i = min(old_parts, new_parts) - 1; i >= 0; i--) {
off = i * SCOUTFS_XATTR_MAX_PART_SIZE;
bytes = min_t(unsigned int, new_bytes - off,
SCOUTFS_XATTR_MAX_PART_SIZE);
@@ -468,6 +472,100 @@ out:
return ret;
}
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name)
{
scoutfs_key_set_zeros(key);
key->sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
key->skxt_a = cpu_to_le64(name[0]);
key->skxt_b = cpu_to_le64(name[1]);
key->skxt_c = cpu_to_le64(name[2]);
}
/*
* Parse a u64 in any base after null terminating it while forbidding
* the leading + and trailing \n that kstrtoull allows.
*/
static int parse_totl_u64(const char *s, int len, u64 *res)
{
char str[SCOUTFS_XATTR_MAX_TOTL_U64 + 1];
if (len <= 0 || len >= ARRAY_SIZE(str) || s[0] == '+' || s[len - 1] == '\n')
return -EINVAL;
memcpy(str, s, len);
str[len] = '\0';
return kstrtoull(str, 0, res) != 0 ? -EINVAL : 0;
}
/*
* Non-destructive, relatively quick parse of the last 3 dotted u64s that
* make up the name of the xattr total.  -EINVAL is returned if there is
* anything but 3 valid u64 encodings between single dots at the end
* of the name.
*/
static int parse_totl_key(struct scoutfs_key *key, const char *name, int name_len)
{
u64 tot_name[3];
int end = name_len;
int nr = 0;
int len;
int ret;
int i;
/* parse name elements in reverse order from end of xattr name string */
for (i = name_len - 1; i >= 0 && nr < ARRAY_SIZE(tot_name); i--) {
if (name[i] != '.')
continue;
len = end - (i + 1);
ret = parse_totl_u64(&name[i + 1], len, &tot_name[nr]);
if (ret < 0)
goto out;
end = i;
nr++;
}
if (nr == ARRAY_SIZE(tot_name)) {
/* swap to account for parsing in reverse */
swap(tot_name[0], tot_name[2]);
scoutfs_xattr_init_totl_key(key, tot_name);
ret = 0;
} else {
ret = -EINVAL;
}
out:
return ret;
}
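For reference, a sketch of the xattr shape this parser accepts, consistent with the totl-xattr-tag golden output later in this diff (file paths are illustrative; setting scoutfs.-tagged xattrs requires CAP_SYS_ADMIN):

setfattr -n scoutfs.totl.1.2.3 -v 100 /mnt/test/file-a    # totl key (1, 2, 3), value 100
setfattr -n scoutfs.totl.1.2.3 -v 10  /mnt/test/file-b    # same key contributed by another file
# the delta items for key 1.2.3 now sum to total=110, count=2; names without
# exactly three trailing dotted u64s, or values with a leading '+' or a
# trailing newline, are rejected with -EINVAL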
static int apply_totl_delta(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_xattr_totl_val *tval, struct scoutfs_lock *lock)
{
if (tval->total == 0 && tval->count == 0)
return 0;
return scoutfs_item_delta(sb, key, tval, sizeof(*tval), lock);
}
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
{
struct scoutfs_xattr_totl_val *s_tval = src;
struct scoutfs_xattr_totl_val *d_tval = dst;
if (src_len != sizeof(*s_tval) || dst_len != src_len)
return -EIO;
le64_add_cpu(&d_tval->total, le64_to_cpu(s_tval->total));
le64_add_cpu(&d_tval->count, le64_to_cpu(s_tval->count));
if (d_tval->total == 0 && d_tval->count == 0)
return SCOUTFS_DELTA_COMBINED_NULL;
return SCOUTFS_DELTA_COMBINED;
}
/*
* The confusing swiss army knife of creating, modifying, and deleting
* xattrs.
@@ -486,16 +584,22 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_xattr_totl_val tval = {0,};
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_lock *totl_lock = NULL;
size_t name_len = strlen(name);
struct scoutfs_key totl_key;
struct scoutfs_key key;
bool undo_srch = false;
bool undo_totl = false;
LIST_HEAD(ind_locks);
u8 found_parts;
unsigned int bytes;
unsigned int val_len;
u64 ind_seq;
u64 total;
u64 hash = 0;
u64 id = 0;
int ret;
@@ -519,11 +623,15 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
if ((tgs.hide || tgs.srch) && !capable(CAP_SYS_ADMIN))
if ((tgs.hide | tgs.srch | tgs.totl) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (tgs.totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
return ret;
bytes = sizeof(struct scoutfs_xattr) + name_len + size;
xat = __vmalloc(bytes, GFP_NOFS, PAGE_KERNEL);
/* alloc enough to read old totl value */
xat = __vmalloc(bytes + SCOUTFS_XATTR_MAX_TOTL_U64, GFP_NOFS, PAGE_KERNEL);
if (!xat) {
ret = -ENOMEM;
goto out;
@@ -536,9 +644,9 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
down_write(&si->xattr_rwsem);
/* find an existing xattr to delete */
/* find an existing xattr to delete, including possible totl value */
ret = get_next_xattr(inode, &key, xat,
sizeof(struct scoutfs_xattr) + name_len,
sizeof(struct scoutfs_xattr) + name_len + SCOUTFS_XATTR_MAX_TOTL_U64,
name, name_len, 0, 0, lck);
if (ret < 0 && ret != -ENOENT)
goto unlock;
@@ -558,9 +666,23 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
goto unlock;
}
/* s64 count delta if we create or delete */
if (tgs.totl)
tval.count = cpu_to_le64((u64)!!(value) - (u64)!!(ret != -ENOENT));
/* found fields in key will also be used */
found_parts = ret >= 0 ? xattr_nr_parts(xat) : 0;
if (found_parts && tgs.totl) {
/* parse old totl value before we clobber xat buf */
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
ret = parse_totl_u64(&xat->name[xat->name_len], val_len, &total);
if (ret < 0)
goto unlock;
le64_add_cpu(&tval.total, -total);
}
/* prepare our xattr */
if (value) {
if (found_parts)
@@ -572,15 +694,26 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
memset(xat->__pad, 0, sizeof(xat->__pad));
memcpy(xat->name, name, name_len);
memcpy(&xat->name[xat->name_len], value, size);
if (tgs.totl) {
ret = parse_totl_u64(value, size, &total);
if (ret < 0)
goto unlock;
le64_add_cpu(&tval.total, total);
}
}
if (tgs.totl) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret)
goto unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_XATTR_SET(found_parts,
value != NULL,
name_len, size));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, true);
if (ret > 0)
goto retry;
if (ret)
@@ -600,6 +733,13 @@ retry:
undo_srch = true;
}
if (tgs.totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
goto release;
undo_totl = true;
}
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, bytes,
xattr_nr_parts(xat), found_parts, lck);
@@ -623,12 +763,20 @@ release:
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
if (ret < 0 && undo_totl) {
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
BUG_ON(err);
}
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
unlock:
up_write(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
out:
vfree(xat);
@@ -749,15 +897,22 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
{
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *totl_lock = NULL;
struct scoutfs_xattr_totl_val tval;
struct scoutfs_key totl_key;
struct scoutfs_key last;
struct scoutfs_key key;
bool release = false;
unsigned int bytes;
unsigned int val_len;
void *value;
u64 total;
u64 hash;
int ret;
/* need a buffer large enough for all possible names */
bytes = sizeof(struct scoutfs_xattr) + SCOUTFS_XATTR_MAX_NAME_LEN;
/* need a buffer large enough for all possible names and totl value */
bytes = sizeof(struct scoutfs_xattr) + SCOUTFS_XATTR_MAX_NAME_LEN +
SCOUTFS_XATTR_MAX_TOTL_U64;
xat = kmalloc(bytes, GFP_NOFS);
if (!xat) {
ret = -ENOMEM;
@@ -776,12 +931,38 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
break;
}
if (key.skx_part == 0 && (ret < sizeof(struct scoutfs_xattr) ||
ret < offsetof(struct scoutfs_xattr, name[xat->name_len]))) {
ret = -EIO;
break;
}
if (key.skx_part != 0 ||
scoutfs_xattr_parse_tags(xat->name, xat->name_len,
&tgs) != 0)
memset(&tgs, 0, sizeof(tgs));
ret = scoutfs_hold_trans(sb, SIC_EXACT(2, 0));
if (tgs.totl) {
value = &xat->name[xat->name_len];
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
if (val_len != le16_to_cpu(xat->val_len)) {
ret = -EIO;
goto out;
}
ret = parse_totl_key(&totl_key, xat->name, xat->name_len) ?:
parse_totl_u64(value, val_len, &total);
if (ret < 0)
break;
}
if (tgs.totl && totl_lock == NULL) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret < 0)
break;
}
ret = scoutfs_hold_trans(sb, false);
if (ret < 0)
break;
release = true;
@@ -798,6 +979,14 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
break;
}
if (tgs.totl) {
tval.total = cpu_to_le64(-total);
tval.count = cpu_to_le64(-1LL);
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
break;
}
scoutfs_release_trans(sb);
release = false;
@@ -806,6 +995,7 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
if (release)
scoutfs_release_trans(sb);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
kfree(xat);
out:
return ret;

View File

@@ -16,10 +16,14 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
srch:1;
srch:1,
totl:1;
};
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs);
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name);
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len);
#endif

tests/.gitignore

@@ -1,6 +1,9 @@
src/*.d
src/createmany
src/dumb_renameat2
src/dumb_setxattr
src/handle_cat
src/bulk_create_paths
src/find_xattrs
src/stage_tmpfile
src/create_xattr_loop

View File

@@ -1,12 +1,15 @@
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing -I ../kmod/src
SHELL := /usr/bin/bash
# each binary command is built from a single .c file
BIN := src/createmany \
src/dumb_renameat2 \
src/dumb_setxattr \
src/handle_cat \
src/bulk_create_paths \
src/find_xattrs
src/stage_tmpfile \
src/find_xattrs \
src/create_xattr_loop
DEPS := $(wildcard src/*.d)

View File

@@ -0,0 +1,35 @@
#!/usr/bin/bash
echo_fail() {
echo "$@" > /dev/stderr
exit 1
}
rid="$SCOUTFS_FENCED_REQ_RID"
#
# Look for a local mount with the rid to fence. Typically we'll at
# least find the mount with the server that requested the fence that
# we're processing. But it's possible that mounts are unmounted
# before, or while, we're running.
#
mnts=$(findmnt -l -n -t scoutfs -o TARGET) || \
echo_fail "findmnt -t scoutfs failed" > /dev/stderr
for mnt in $mnts; do
mnt_rid=$(scoutfs statfs -p "$mnt" -s rid) || \
echo_fail "scoutfs statfs $mnt failed"
if [ "$mnt_rid" == "$rid" ]; then
umount -f "$mnt" || \
echo_fail "umout -f $mnt"
exit 0
fi
done
#
# If the mount doesn't exist on this host then it can't access the
# devices by definition and can be considered fenced.
#
exit 0
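A sketch of how the fencing agent is expected to invoke this script; the rid value is illustrative, and SCOUTFS_FENCED_REQ_RID is the variable read above:

SCOUTFS_FENCED_REQ_RID=1234abcd5678ef90 ./fenced-local-force-unmount.sh
echo $?    # 0: the rid was force-unmounted here (or was never mounted here); non-zero: fencing failed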

View File

@@ -3,7 +3,7 @@
t_filter_fs()
{
sed -e 's@mnt/test\.[0-9]*@mnt/test@g' \
-e 's@Device: [a-fA-F0-7]*h/[0-9]*d@Device: 0h/0d@g'
-e 's@Device: [a-fA-F0-9]*h/[0-9]*d@Device: 0h/0d@g'
}
#
@@ -40,7 +40,7 @@ t_filter_dmesg()
# mount and unmount spew a bunch
re="$re|scoutfs.*client connected"
re="$re|scoutfs.*client disconnected"
re="$re|scoutfs.*server setting up"
re="$re|scoutfs.*server starting"
re="$re|scoutfs.*server ready"
re="$re|scoutfs.*server accepted"
re="$re|scoutfs.*server closing"
@@ -52,12 +52,32 @@ t_filter_dmesg()
# tests that drop unmount io triggers fencing
re="$re|scoutfs .* error: fencing "
re="$re|scoutfs .*: waiting for .* lock clients"
re="$re|scoutfs .*: all lock clients recovered"
re="$re|scoutfs .*: waiting for .* clients"
re="$re|scoutfs .*: all clients recovered"
re="$re|scoutfs .* error: client rid.*lock recovery timed out"
# some tests mount w/o options
re="$re|scoutfs .* error: Required mount option \"metadev_path\" not found"
# in debugging kernels we can slow things down a bit
re="$re|hrtimer: interrupt took .*"
# fencing tests force unmounts and trigger timeouts
re="$re|scoutfs .* forcing unmount"
re="$re|scoutfs .* reconnect timed out"
re="$re|scoutfs .* recovery timeout expired"
re="$re|scoutfs .* fencing previous leader"
re="$re|scoutfs .* reclaimed resources"
re="$re|scoutfs .* quorum .* error"
re="$re|scoutfs .* error reading quorum block"
re="$re|scoutfs .* error .* writing quorum block"
re="$re|scoutfs .* error .* while checking to delete inode"
re="$re|scoutfs .* error .*writing btree blocks.*"
re="$re|scoutfs .* error .*writing super block.*"
re="$re|scoutfs .* error .* freeing merged btree blocks.*.looping commit del.*upd freeing item"
re="$re|scoutfs .* error .* freeing merged btree blocks.*.final commit del.upd freeing item"
re="$re|scoutfs .* error .*reading quorum block.*to update event.*"
re="$re|scoutfs .* error.*server failed to bind to.*"
egrep -v "($re)"
}

View File

@@ -17,6 +17,17 @@ t_sync_seq_index()
t_quiet sync
}
t_mount_rid()
{
local nr="${1:-0}"
local mnt="$(eval echo \$T_M$nr)"
local rid
rid=$(scoutfs statfs -s rid -p "$mnt")
echo "$rid"
}
#
# Output the "f.$fsid.r.$rid" identifier string for the given mount
# number, 0 is used by default if none is specified.
@@ -99,6 +110,19 @@ t_first_client_nr()
t_fail "t_first_client_nr didn't find any clients"
}
#
# The number of quorum members needed to form a majority to start the
# server.
#
t_majority_count()
{
if [ "$T_QUORUM" -lt 3 ]; then
echo 1
else
echo $(((T_QUORUM / 2) + 1))
fi
}
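A few worked values for the helper above, as a quick sanity check:

# T_QUORUM=1 -> 1, T_QUORUM=2 -> 1 (special-cased below 3),
# T_QUORUM=3 -> 2, T_QUORUM=4 -> 3, T_QUORUM=5 -> 3
needed=$(t_majority_count)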
t_mount()
{
local nr="$1"
@@ -116,7 +140,17 @@ t_umount()
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet umount \$T_DB$i
eval t_quiet umount \$T_M$nr
}
t_force_umount()
{
local nr="$1"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet umount -f \$T_M$nr
}
#
@@ -196,12 +230,19 @@ t_trigger_show() {
echo "trigger $which $string: $(t_trigger_get $which $nr)"
}
t_trigger_arm() {
t_trigger_arm_silent() {
local which="$1"
local nr="$2"
local path=$(t_trigger_path "$nr")
echo 1 > "$path/$which"
}
t_trigger_arm() {
local which="$1"
local nr="$2"
t_trigger_arm_silent $which $nr
t_trigger_show $which armed $nr
}
@@ -216,16 +257,108 @@ t_counter() {
cat "$(t_sysfs_path $nr)/counters/$which"
}
#
# output the difference between the current value of a counter and the
# caller's provided previous value.
#
t_counter_diff_value() {
local which="$1"
local old="$2"
local nr="$3"
local new="$(t_counter $which $nr)"
echo "$((new - old))"
}
#
# output the value of the given counter for the given mount, defaulting
# to mount 0 if a mount isn't specified.
# to mount 0 if a mount isn't specified. For tests which expect a
# specific difference in counters.
#
t_counter_diff() {
local which="$1"
local old="$2"
local nr="$3"
local new
new="$(t_counter $which $nr)"
echo "counter $which diff $((new - old))"
echo "counter $which diff $(t_counter_diff_value $which $old $nr)"
}
#
# output a message indicating whether or not the counter value changed.
# For tests that expect the counter to change (or not change) but where
# the exact amount of the difference isn't significant.
#
t_counter_diff_changed() {
local which="$1"
local old="$2"
local nr="$3"
local diff="$(t_counter_diff_value $which $old $nr)"
test "$diff" -eq 0 && \
echo "counter $which didn't change" ||
echo "counter $which changed"
}
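The usual pattern for these counter helpers, mirroring how block-stale-reads.sh uses them later in this diff:

old=$(t_counter block_cache_remove_stale 0)    # snapshot before the operation
scoutfs df -p "$T_M0" > /dev/null              # do something that should bump the counter
t_counter_diff_changed block_cache_remove_stale $old 0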
#
# See if we can find a local mount with the caller's rid.
#
t_rid_is_mounted() {
local rid="$1"
local fr
for fr in /sys/fs/scoutfs/*; do
if [ "$(cat $fr/rid)" == "$rid" ]; then
return 0
fi
done
return 1
}
#
# A given mount is being fenced if any mount has a fence request pending
# for it which hasn't finished and been removed.
#
t_rid_is_fencing() {
local rid="$1"
local fr
for fr in /sys/fs/scoutfs/*; do
if [ -d "$fr/fence/$rid" ]; then
return 0
fi
done
return 1
}
#
# Wait until the mount identified by the first rid arg is not in any
# states specified by the remaining state description word args.
#
t_wait_if_rid_is() {
local rid="$1"
while ( [[ $* =~ mounted ]] && t_rid_is_mounted $rid ) ||
( [[ $* =~ fencing ]] && t_rid_is_fencing $rid ) ; do
sleep .5
done
}
#
# Wait until any mount identifies itself as the elected leader. We can
# be waiting while tests mount and unmount so mounts may not be mounted
# at the test's expected mount points.
#
t_wait_for_leader() {
local i
while sleep .25; do
for i in $(t_fs_nrs); do
local ldr="$(t_sysfs_path $i 2>/dev/null)/quorum/is_leader"
if [ "$(cat $ldr 2>/dev/null)" == "1" ]; then
return
fi
done
done
}
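A hedged sketch of how a fencing test might string these helpers together; the mount number is illustrative:

rid=$(t_mount_rid 1)
t_force_umount 1
t_wait_if_rid_is $rid mounted fencing    # wait for the stale rid to disappear and any fence request to finish
t_wait_for_leader                        # make sure a server has been elected before remounting
t_mount 1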

View File

@@ -23,3 +23,18 @@ t_require_mounts() {
test "$T_NR_MOUNTS" -ge "$req" || \
t_skip "$req mounts required, only have $T_NR_MOUNTS"
}
#
# Require that the meta device be at least the size string argument, as
# parsed by numfmt using single-char base 2 suffixes (iec): 64G, etc.
#
t_require_meta_size() {
local dev="$T_META_DEVICE"
local req_iec="$1"
local req_bytes=$(numfmt --from=iec --to=none $req_iec)
local dev_bytes=$(blockdev --getsize64 $dev)
local dev_iec=$(numfmt --from=auto --to=iec $dev_bytes)
test "$dev_bytes" -ge "$req_bytes" || \
t_skip "$dev must be at least $req_iec, is $dev_iec"
}
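For example, a test that needs a large metadata device could start with:

t_require_meta_size 64G    # skip unless the meta device is at least 64 GiB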

View File

@@ -53,3 +53,5 @@ mv: cannot move /mnt/test/test/basic-posix-consistency/dir/c/clobber to
== inode indexes match after syncing existing
== inode indexes match after copying and syncing
== inode indexes match after removing and syncing
== concurrent creates make one file
one-file

View File

@@ -0,0 +1,2 @@
== Issue scoutfs df to force block reads to trigger stale invalidation/retry
counter block_cache_remove_stale changed

View File

@@ -0,0 +1 @@
== 60s of unmounting non-quorum clients during recovery

tests/golden/enospc

@@ -0,0 +1,8 @@
== prepare directories and files
== fallocate until enospc
== remove all the files and verify free data blocks
== make small meta fs
== create large xattrs until we fill up metadata
== remove files with xattrs after enospc
== make sure we can create again
== cleanup small meta fs

View File

View File

@@ -0,0 +1,5 @@
== make sure all mounts can see each other
== force unmount one client, connection timeout, fence nop, mount
== force unmount all non-server, connection timeout, fence nop, mount
== force unmount server, quorum elects new leader, fence nop, mount
== force unmount everything, new server fences all previous

View File

@@ -0,0 +1,27 @@
== basic unlink deletes
ino found in dseq index
ino not found in dseq index
== local open-unlink waits for close to delete
contents after rm: contents
ino found in dseq index
ino not found in dseq index
== multiple local opens are protected
contents after rm 1: contents
contents after rm 2: contents
ino found in dseq index
ino not found in dseq index
== remote unopened unlink deletes
ino not found in dseq index
ino not found in dseq index
== unlink wait for open on other mount
mount 0 contents after mount 1 rm: contents
ino found in dseq index
ino found in dseq index
stat: cannot stat /mnt/test/test/inode-deletion/file: No such file or directory
ino not found in dseq index
ino not found in dseq index
== lots of deletions use one open map
== open files survive remote scanning orphans
mount 0 contents after mount 1 remounted: contents
ino not found in dseq index
ino not found in dseq index

View File

@@ -1,4 +0,0 @@
== create per mount files
== time independent modification
== time concurrent independent modification
== time concurrent conflicting modification

View File

View File

@@ -0,0 +1,3 @@
== create per mount files
== 30s of racing random mount/umount
== mounting any unmounted

View File

@@ -0,0 +1,4 @@
== test our inode existance function
== unlinked and opened inodes still exist
== orphan from failed evict deletion is picked up
== orphaned inos in all mounts all deleted

View File

@@ -0,0 +1,2 @@
=== renameat2 noreplace flag test
=== run two asynchronous calls to renameat2 NOREPLACE

View File

@@ -0,0 +1,27 @@
== make initial small fs
== 0s do nothing
== shrinking fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== existing sizes do nothing
== growing outside device fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== resizing meta works
== resizing data works
== shrinking back fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== resizing again does nothing
== resizing to full works
== cleanup extra fs

View File

@@ -16,3 +16,4 @@ setfattr: /mnt/test/test/simple-xattr-unit/file: Numerical result out of range
setfattr: /mnt/test/test/simple-xattr-unit/file: Argument list too long
=== good length boundaries
=== 500 random lengths
=== alternate val size between interesting sizes

View File

@@ -2,6 +2,7 @@
== update existing xattr
== remove an xattr
== remove xattr with files
== trigger small log merges by rotating single block with unmount
== create entries in current log
== delete small fraction
== remove files

View File

@@ -0,0 +1,18 @@
total file size 33669120
00000000 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 |AAAAAAAAAAAAAAAA|
*
00400000 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 |BBBBBBBBBBBBBBBB|
*
00801000 43 43 43 43 43 43 43 43 43 43 43 43 43 43 43 43 |CCCCCCCCCCCCCCCC|
*
00c03000 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 |DDDDDDDDDDDDDDDD|
*
01006000 45 45 45 45 45 45 45 45 45 45 45 45 45 45 45 45 |EEEEEEEEEEEEEEEE|
*
0140a000 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 |FFFFFFFFFFFFFFFF|
*
0180f000 47 47 47 47 47 47 47 47 47 47 47 47 47 47 47 47 |GGGGGGGGGGGGGGGG|
*
01c15000 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 |HHHHHHHHHHHHHHHH|
*
0201c000

View File

@@ -1,11 +0,0 @@
== create file for xattr ping pong
# file: /mnt/test/test/stale-btree-read/file
user.xat="initial"
== retry btree block read
trigger btree_stale_read armed: 1
# file: /mnt/test/test/stale-btree-read/file
user.xat="btree"
trigger btree_stale_read after: 0
counter btree_stale_read diff 1

View File

@@ -0,0 +1,30 @@
== single file
1.2.3 = 1, 1
4.5.6 = 1, 1
== multiple files add up
1.2.3 = 2, 2
4.5.6 = 2, 2
== removing xattr updates total
1.2.3 = 2, 2
4.5.6 = 1, 1
== updating xattr updates total
1.2.3 = 11, 2
4.5.6 = 1, 1
== removing files update total
1.2.3 = 10, 1
== multiple files/names in one transaction
1.2.3 = 55, 10
== testing invalid names
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
== testing invalid values
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
== larger population that could merge

View File

@@ -1,6 +1,7 @@
Ran:
generic/001
generic/002
generic/004
generic/005
generic/006
generic/007
@@ -8,6 +9,8 @@ generic/011
generic/013
generic/014
generic/020
generic/023
generic/024
generic/028
generic/032
generic/034
@@ -73,7 +76,6 @@ generic/376
generic/377
Not
run:
generic/004
generic/008
generic/009
generic/012
@@ -82,6 +84,7 @@ generic/016
generic/018
generic/021
generic/022
generic/025
generic/026
generic/031
generic/033
@@ -93,6 +96,7 @@ generic/060
generic/061
generic/063
generic/064
generic/078
generic/079
generic/081
generic/082
@@ -278,4 +282,4 @@ shared/004
shared/032
shared/051
shared/289
Passed all 72 tests
Passed all 75 tests

View File

@@ -18,10 +18,15 @@ die() {
exit 1
}
timestamp()
{
date '+%F %T.%N'
}
# output a message with a timestamp to the run.log
log()
{
echo "[$(date '+%F %T.%N')] $*" >> "$T_RESULTS/run.log"
echo "[$(timestamp)] $*" >> "$T_RESULTS/run.log"
}
# run a logged command, exiting if it fails
@@ -52,19 +57,21 @@ $(basename $0) options:
| the file system to be tested. Will be clobbered by -m mkfs.
-m | Run mkfs on the device before mounting and running
| tests. Implies unmounting existing mounts first.
-n | The number of devices and mounts to test.
-P | Output trace events with printk as they're generated.
-n <nr> | The number of devices and mounts to test.
-P | Enable trace_printk.
-p | Exit script after preparing mounts only, don't run tests.
-q <nr> | Specify the quorum count needed to mount. This is
| used when running mkfs and is needed by a few tests.
-q <nr> | The first <nr> mounts will be quorum members. Must be
| at least 1 and no greater than -n number of mounts.
-r <dir> | Specify the directory in which to store results of
| test runs. The directory will be created if it doesn't
| exist. Previous results will be deleted as each test runs.
-s | Skip git repo checkouts.
-t | Enable trace events that match the given glob argument.
| Multiple options enable multiple globbed events.
-X | xfstests git repo. Used by tests/xfstests.sh.
-x | xfstests git branch to checkout and track.
-y | xfstests ./check additional args
-z <nr> | set data-alloc-zone-blocks in mkfs
EOF
}
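As a hedged example of the new -n/-q/-t semantics (the script name, the trace globs, and the elided device/results arguments are placeholders for whatever your setup uses):

./run-tests.sh -m -n 4 -q 3 -r /var/tmp/scoutfs-results \
    -t 'scoutfs_trans*' -t 'scoutfs_server*' ...
# -n 4 creates four mounts, the first three of which (-q 3) are quorum members;
# each -t glob enables the matching set of scoutfs trace events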
@@ -77,6 +84,9 @@ done
T_TRACE_DUMP="0"
T_TRACE_PRINTK="0"
# array declarations to be able to use array ops
declare -a T_TRACE_GLOB
while true; do
case $1 in
-a)
@@ -147,7 +157,7 @@ while true; do
;;
-t)
test -n "$2" || die "-t must have trace glob argument"
T_TRACE_GLOB="$2"
T_TRACE_GLOB+=("$2")
shift
;;
-X)
@@ -165,6 +175,11 @@ while true; do
T_XFSTESTS_ARGS="$2"
shift
;;
-z)
test -n "$2" || die "-z must have nr mounts argument"
T_DATA_ALLOC_ZONE_BLOCKS="-z $2"
shift
;;
-h|-\?|--help)
show_help
exit 1
@@ -195,7 +210,6 @@ test -e "$T_EX_META_DEV" || die "extra meta device -f '$T_EX_META_DEV' doesn't e
test -n "$T_EX_DATA_DEV" || die "must specify -e extra data device"
test -e "$T_EX_DATA_DEV" || die "extra data device -e '$T_EX_DATA_DEV' doesn't exist"
test -n "$T_MKFS" -a -z "$T_QUORUM" && die "mkfs (-m) requires quorum (-q)"
test -n "$T_RESULTS" || die "must specify -r results dir"
test -n "$T_XFSTESTS_REPO" -a -z "$T_XFSTESTS_BRANCH" -a -z "$T_SKIP_CHECKOUT" && \
die "-X xfstests repo requires -x xfstests branch"
@@ -205,10 +219,17 @@ test -n "$T_XFSTESTS_BRANCH" -a -z "$T_XFSTESTS_REPO" -a -z "$T_SKIP_CHECKOUT" &
test -n "$T_NR_MOUNTS" || die "must specify -n nr mounts"
test "$T_NR_MOUNTS" -ge 1 -a "$T_NR_MOUNTS" -le 8 || \
die "-n nr mounts must be >= 1 and <= 8"
test -n "$T_QUORUM" || \
die "must specify -q number of mounts that are quorum members"
test "$T_QUORUM" -ge "1" || \
die "-q quorum mmembers must be at least 1"
test "$T_QUORUM" -le "$T_NR_MOUNTS" || \
die "-q quorum mmembers must not be greater than -n mounts"
# top level paths
T_KMOD=$(realpath "$(dirname $0)/../kmod")
T_UTILS=$(realpath "$T_KMOD/../utils")
T_TESTS=$(realpath "$(dirname $0)")
T_KMOD=$(realpath "$T_TESTS/../kmod")
T_UTILS=$(realpath "$T_TESTS/../utils")
test -d "$T_KMOD" || die "kmod/ repo dir $T_KMOD not directory"
test -d "$T_UTILS" || die "utils/ repo dir $T_UTILS not directory"
@@ -234,17 +255,20 @@ test -e "$T_RESULTS" || mkdir -p "$T_RESULTS"
test -d "$T_RESULTS" || \
die "$T_RESULTS dir is not a directory"
# might as well build our stuff with all cpus, assuming idle system
MAKE_ARGS="-j $(getconf _NPROCESSORS_ONLN)"
# build kernel module
msg "building kmod/ dir $T_KMOD"
cmd cd "$T_KMOD"
cmd make
cmd make $MAKE_ARGS
cmd sync
cmd cd -
# build utils
msg "building utils/ dir $T_UTILS"
cmd cd "$T_UTILS"
cmd make
cmd make $MAKE_ARGS
cmd sync
cmd cd -
@@ -261,7 +285,7 @@ fi
# building our test binaries
msg "building test binaries"
cmd make
cmd make $MAKE_ARGS
# set any options implied by others
test -n "$T_MKFS" && T_UNMOUNT=1
@@ -303,8 +327,15 @@ if [ -n "$T_UNMOUNT" ]; then
unmount_all
fi
quo=""
if [ -n "$T_MKFS" ]; then
cmd scoutfs mkfs -Q "$T_QUORUM" "$T_META_DEVICE" "$T_DATA_DEVICE" -f
for i in $(seq 0 $((T_QUORUM - 1))); do
quo="$quo -Q $i,127.0.0.1,$((42000 + i))"
done
msg "making new filesystem with $T_QUORUM quorum members"
cmd scoutfs mkfs -f $quo $T_DATA_ALLOC_ZONE_BLOCKS \
"$T_META_DEVICE" "$T_DATA_DEVICE"
fi
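For example, with T_QUORUM=3 (and no -z data alloc zone option) the loop above expands to:

scoutfs mkfs -f -Q 0,127.0.0.1,42000 -Q 1,127.0.0.1,42001 \
    -Q 2,127.0.0.1,42002 "$T_META_DEVICE" "$T_DATA_DEVICE"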
if [ -n "$T_INSMOD" ]; then
@@ -314,23 +345,70 @@ if [ -n "$T_INSMOD" ]; then
cmd insmod "$T_KMOD/src/scoutfs.ko"
fi
if [ -n "$T_TRACE_GLOB" ]; then
msg "enabling trace events"
nr_globs=${#T_TRACE_GLOB[@]}
if [ $nr_globs -gt 0 ]; then
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
for g in $T_TRACE_GLOB; do
for g in "${T_TRACE_GLOB[@]}"; do
for e in /sys/kernel/debug/tracing/events/scoutfs/$g/enable; do
echo 1 > $e
if test -w "$e"; then
echo 1 > "$e"
else
die "-t glob '$g' matched no scoutfs events"
fi
done
done
echo "$T_TRACE_DUMP" > /proc/sys/kernel/ftrace_dump_on_oops
echo "$T_TRACE_PRINTK" > /sys/kernel/debug/tracing/options/trace_printk
cmd cat /sys/kernel/debug/tracing/set_event
cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/proc/sys/kernel/ftrace_dump_on_oops
nr_events=$(cat /sys/kernel/debug/tracing/set_event | wc -l)
msg "enabled $nr_events trace events from $nr_globs -t globs"
fi
if [ -n "$T_TRACE_PRINTK" ]; then
echo "$T_TRACE_PRINTK" > /sys/kernel/debug/tracing/options/trace_printk
fi
if [ -n "$T_TRACE_DUMP" ]; then
echo "$T_TRACE_DUMP" > /proc/sys/kernel/ftrace_dump_on_oops
fi
# always describe tracing in the logs
cmd cat /sys/kernel/debug/tracing/set_event
cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/proc/sys/kernel/ftrace_dump_on_oops
#
# Build a fenced config that runs scripts out of the repository rather
# than the default system directory
#
conf="$T_RESULTS/scoutfs-fencd.conf"
cat > $conf << EOF
SCOUTFS_FENCED_DELAY=1
SCOUTFS_FENCED_RUN=$T_TESTS/fenced-local-force-unmount.sh
SCOUTFS_FENCED_RUN_ARGS=""
EOF
export SCOUTFS_FENCED_CONFIG_FILE="$conf"
#
# Run the agent in the background, log its output, and kill it when we
# exit
#
fenced_log()
{
echo "[$(timestamp)] $*" >> "$T_RESULTS/fenced.stdout.log"
}
fenced_pid=""
kill_fenced()
{
if test -n "$fenced_pid" -a -d "/proc/$fenced_pid" ; then
fenced_log "killing fenced pid $fenced_pid"
kill "$fenced_pid"
fi
}
trap kill_fenced EXIT
$T_UTILS/fenced/scoutfs-fenced > "$T_RESULTS/fenced.stdout.log" 2> "$T_RESULTS/fenced.stderr.log" &
fenced_pid=$!
fenced_log "started fenced pid $fenced_pid in the background"
#
# mount concurrently so that a quorum is present to elect the leader and
# start a server.
@@ -347,8 +425,12 @@ for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
dir="/mnt/test.$i"
test -d "$dir" || cmd mkdir -p "$dir"
opts="-o metadev_path=$meta_dev"
if [ "$i" -lt "$T_QUORUM" ]; then
opts="$opts,quorum_slot_nr=$i"
fi
msg "mounting $meta_dev|$data_dev on $dir"
opts="-o server_addr=127.0.0.1,metadev_path=$meta_dev"
cmd mount -t scoutfs $opts "$data_dev" "$dir" &
p="$!"
@@ -434,7 +516,7 @@ for t in $tests; do
# get stats from previous pass
last="$T_RESULTS/last-passed-test-stats"
stats=$(grep -s "^$test_name" "$last" | cut -d " " -f 2-)
stats=$(grep -s "^$test_name " "$last" | cut -d " " -f 2-)
test -n "$stats" && stats="last: $stats"
printf " %-30s $stats" "$test_name"
@@ -497,7 +579,7 @@ for t in $tests; do
echo " passed: $stats"
((passed++))
# save stats for passed test
grep -s -v "^$test_name" "$last" > "$last.tmp"
grep -s -v "^$test_name " "$last" > "$last.tmp"
echo "$test_name $stats" >> "$last.tmp"
mv -f "$last.tmp" "$last"
elif [ "$sts" == "$T_SKIP_STATUS" ]; then
@@ -515,24 +597,24 @@ done
msg "all tests run: $passed passed, $skipped skipped, $failed failed"
unmount_all
if [ -n "$T_TRACE_GLOB" ]; then
if [ -n "$T_TRACE_GLOB" -o -n "$T_TRACE_PRINTK" ]; then
msg "saving traces and disabling tracing"
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
echo 0 > /sys/kernel/debug/tracing/options/trace_printk
cat /sys/kernel/debug/tracing/trace > "$T_RESULTS/traces"
fi
status=1
if [ "$failed" == 0 ]; then
if [ "$skipped" == 0 -a "$failed" == 0 ]; then
msg "all tests passed"
status=0
unmount_all
exit 0
fi
if [ "$skipped" != 0 ]; then
msg "$skipped tests skipped, check skip.log"
msg "$skipped tests skipped, check skip.log, still mounted"
fi
if [ "$failed" != 0 ]; then
msg "$failed tests failed, check fail.log"
msg "$failed tests failed, check fail.log, still mounted"
fi
exit $status
exit 1

View File

@@ -7,26 +7,36 @@ simple-release-extents.sh
setattr_more.sh
offline-extent-waiting.sh
move-blocks.sh
enospc.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
totl-xattr-tag.sh
lock-refleak.sh
lock-shrink-consistency.sh
lock-pr-cw-conflict.sh
lock-revoke-getcwd.sh
export-lookup-evict-race.sh
createmany-parallel.sh
createmany-large-names.sh
createmany-rename-large-dir.sh
stage-release-race-alloc.sh
stage-multi-part.sh
stage-tmpfile.sh
basic-posix-consistency.sh
dirent-consistency.sh
mkdir-rename-rmdir.sh
lock-ex-race-processes.sh
lock-conflicting-batch-commit.sh
cross-mount-data-free.sh
persistent-item-vers.sh
setup-error-teardown.sh
resize-devices.sh
fence-and-reclaim.sh
orphan-inodes.sh
mount-unmount-race.sh
client-unmount-recovery.sh
createmany-parallel-mounts.sh
archive-light-cycle.sh
stale-btree-read.sh
block-stale-reads.sh
inode-deletion.sh
renameat2-noreplace.sh
xfstests.sh

View File

@@ -0,0 +1,113 @@
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/xattr.h>
#include <ctype.h>
#include <string.h>
#include <errno.h>
#include <limits.h>
static void exit_usage(void)
{
printf(" -h/-? output this usage message and exit\n"
" -c <count> number of xattrs to create\n"
" -n <string> xattr name prefix, -NR is appended\n"
" -p <path> string with path to file with xattrs\n"
" -s <size> xattr value size\n");
exit(1);
}
int main(int argc, char **argv)
{
char *pref = NULL;
char *path = NULL;
char *val;
char *name;
unsigned long long count = 0;
unsigned long long size = 0;
unsigned long long i;
int ret;
int c;
while ((c = getopt(argc, argv, "+c:n:p:s:")) != -1) {
switch (c) {
case 'c':
count = strtoull(optarg, NULL, 0);
break;
case 'n':
pref = strdup(optarg);
break;
case 'p':
path = strdup(optarg);
break;
case 's':
size = strtoull(optarg, NULL, 0);
break;
case '?':
printf("unknown argument: %c\n", optind);
case 'h':
exit_usage();
}
}
if (count == 0) {
printf("specify count of xattrs to create with -c\n");
exit(1);
}
if (count == ULLONG_MAX) {
printf("invalid -c count\n");
exit(1);
}
if (size == 0) {
printf("specify xattrs value size with -s\n");
exit(1);
}
if (size == ULLONG_MAX || size < 2) {
printf("invalid -s size\n");
exit(1);
}
if (path == NULL) {
printf("specify path to file with -p\n");
exit(1);
}
if (pref == NULL) {
printf("specify xattr name prefix string with -n\n");
exit(1);
}
ret = snprintf(NULL, 0, "%s-%llu", pref, ULLONG_MAX) + 1;
name = malloc(ret);
if (!name) {
printf("couldn't allocate xattr name buffer\n");
exit(1);
}
val = malloc(size);
if (!val) {
printf("couldn't allocate xattr value buffer\n");
exit(1);
}
memset(val, 'a', size - 1);
val[size - 1] = '\0';
for (i = 0; i < count; i++) {
sprintf(name, "%s-%llu", pref, i);
ret = setxattr(path, name, val, size, 0);
if (ret) {
printf("returned %d errno %d (%s)\n",
ret, errno, strerror(errno));
return 1;
}
}
return 0;
}
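A usage sketch matching how enospc.sh drives this tool later in the diff: 1000 xattrs with ~64KiB values on a single file:

./create_xattr_loop -c 1000 -n user.scoutfs-enospc -p "$file" -s 65535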

View File

@@ -0,0 +1,93 @@
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#ifndef RENAMEAT2_EXIST
#include <unistd.h>
#include <sys/syscall.h>
#if !defined(SYS_renameat2) && defined(__x86_64__)
#define SYS_renameat2 316 /* from arch/x86/entry/syscalls/syscall_64.tbl */
#endif
static int renameat2(int olddfd, const char *old_dir,
int newdfd, const char *new_dir,
unsigned int flags)
{
#ifdef SYS_renameat2
return syscall(SYS_renameat2, olddfd, old_dir, newdfd, new_dir, flags);
#else
errno = ENOSYS;
return -1;
#endif
}
#endif
#ifndef RENAME_NOREPLACE
#define RENAME_NOREPLACE (1 << 0) /* Don't overwrite newpath of rename */
#endif
#ifndef RENAME_EXCHANGE
#define RENAME_EXCHANGE (1 << 1) /* Exchange oldpath and newpath */
#endif
#ifndef RENAME_WHITEOUT
#define RENAME_WHITEOUT (1 << 2) /* Whiteout oldpath */
#endif
static void exit_usage(char **argv)
{
fprintf(stderr,
"usage: %s [-n|-x|-w] old_path new_path\n"
" -n noreplace\n"
" -x exchange\n"
" -w whiteout\n", argv[0]);
exit(1);
}
int main(int argc, char **argv)
{
const char *old_path = NULL;
const char *new_path = NULL;
unsigned int flags = 0;
int ret;
int c;
for (c = 1; c < argc; c++) {
if (argv[c][0] == '-') {
switch (argv[c][1]) {
case 'n':
flags |= RENAME_NOREPLACE;
break;
case 'x':
flags |= RENAME_EXCHANGE;
break;
case 'w':
flags |= RENAME_WHITEOUT;
break;
default:
exit_usage(argv);
}
} else if (!old_path) {
old_path = argv[c];
} else if (!new_path) {
new_path = argv[c];
} else {
exit_usage(argv);
}
}
if (!old_path || !new_path) {
printf("specify the correct directory path\n");
errno = ENOENT;
return 1;
}
ret = renameat2(AT_FDCWD, old_path, AT_FDCWD, new_path, flags);
if (ret == -1) {
perror("Error");
return 1;
}
return 0;
}
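A sketch of racing two NOREPLACE renames at one destination, as the renameat2-noreplace golden output describes; paths are hypothetical:

./dumb_renameat2 -n "$dir/a" "$dir/target" &
./dumb_renameat2 -n "$dir/b" "$dir/target" &
wait    # exactly one rename should win; the other fails with EEXIST because of RENAME_NOREPLACE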

tests/src/stage_tmpfile.c

@@ -0,0 +1,153 @@
/*
* Exercise O_TMPFILE creation as well as staging from tmpfiles into
* a released destination file.
*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <linux/types.h>
#include <assert.h>
#include "ioctl.h"
#define array_size(arr) (sizeof(arr) / sizeof(arr[0]))
/*
* Write known data into 8 tmpfiles.
* Make a new file X and release it
* Move contents of 8 tmpfiles into X.
*/
struct sub_tmp_info {
int fd;
unsigned int offset;
unsigned int length;
};
#define SZ 4096
char buf[SZ];
int main(int argc, char **argv)
{
struct scoutfs_ioctl_release rel = {0};
struct scoutfs_ioctl_move_blocks mb;
struct scoutfs_ioctl_stat_more stm;
struct sub_tmp_info sub_tmps[8];
int tot_size = 0;
char *dest_file;
int dest_fd;
char *mnt;
int ret;
int i;
if (argc < 3) {
printf("%s <mountpoint> <dest_file>\n", argv[0]);
return 1;
}
mnt = argv[1];
dest_file = argv[2];
for (i = 0; i < array_size(sub_tmps); i++) {
struct sub_tmp_info *sub_tmp = &sub_tmps[i];
int remaining;
sub_tmp->fd = open(mnt, O_RDWR | O_TMPFILE, S_IRUSR | S_IWUSR);
if (sub_tmp->fd < 0) {
perror("error");
exit(1);
}
sub_tmp->offset = tot_size;
/* First tmp file is 4MB */
/* Each is 4k bigger than last */
sub_tmp->length = (i + 1024) * sizeof(buf);
remaining = sub_tmp->length;
/* Each sub tmpfile written with 'A', 'B', etc. */
memset(buf, 'A' + i, sizeof(buf));
while (remaining) {
int written;
written = write(sub_tmp->fd, buf, sizeof(buf));
assert(written == sizeof(buf));
tot_size += sizeof(buf);
remaining -= written;
}
}
printf("total file size %d\n", tot_size);
dest_fd = open(dest_file, O_RDWR | O_CREAT, S_IRUSR | S_IWUSR);
if (dest_fd == -1) {
perror("error");
exit(1);
}
// make dest file big
ret = posix_fallocate(dest_fd, 0, tot_size);
if (ret) {
perror("error");
exit(1);
}
// get current data_version after fallocate's size extensions
ret = ioctl(dest_fd, SCOUTFS_IOC_STAT_MORE, &stm);
if (ret < 0) {
perror("stat_more ioctl error");
exit(1);
}
// release everything in dest file
rel.offset = 0;
rel.length = tot_size;
rel.data_version = stm.data_version;
ret = ioctl(dest_fd, SCOUTFS_IOC_RELEASE, &rel);
if (ret < 0) {
perror("error");
exit(1);
}
// move contents into dest in reverse order
for (i = array_size(sub_tmps) - 1; i >= 0 ; i--) {
struct sub_tmp_info *sub_tmp = &sub_tmps[i];
mb.from_fd = sub_tmp->fd;
mb.from_off = 0;
mb.len = sub_tmp->length;
mb.to_off = sub_tmp->offset;
mb.data_version = stm.data_version;
mb.flags = SCOUTFS_IOC_MB_STAGE;
ret = ioctl(dest_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
if (ret < 0) {
perror("error");
exit(1);
}
}
return 0;
}
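A usage sketch; paths are illustrative, and the expected dump pattern is the one shown in the golden output earlier in this diff:

./stage_tmpfile /mnt/test /mnt/test/staged-file    # prints "total file size 33669120"
hexdump -C /mnt/test/staged-file | head            # staged regions are filled with 'A'..'H'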

View File

@@ -160,8 +160,8 @@ for i in $(seq 1 1); do
mkdir -p $(dirname $lnk)
ln "$T_D0/file" $lnk
scoutfs ino-path $ino "$T_M0" > "$T_TMP.0"
scoutfs ino-path $ino "$T_M1" > "$T_TMP.1"
scoutfs ino-path -p "$T_M0" $ino > "$T_TMP.0"
scoutfs ino-path -p "$T_M1" $ino > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
done
done
@@ -197,4 +197,13 @@ scoutfs walk-inodes -p "$T_M0" -- data_seq 0 -1 > "$T_TMP.0"
scoutfs walk-inodes -p "$T_M1" -- data_seq 0 -1 > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== concurrent creates make one file"
mkdir "$T_D0/concurrent"
for i in $(t_fs_nrs); do
eval p="\$T_D${i}/concurrent/one-file"
touch "$p" 2>&1 > "$T_TMP.multi-create.$i" &
done
wait
ls "$T_D0/concurrent"
t_pass

View File

@@ -0,0 +1,22 @@
#
# Exercise stale block reading.
#
# It would be very difficult to manipulate the allocators, cache, and
# persistent blocks to create stale block reading scenarios. Instead
# we use triggers to exercise how readers encounter stale blocks.
#
# Trigger retries in the block cache by calling scoutfs df
# which in turn will call scoutfs_ioctl_alloc_detail. This
# is guaranteed to exist, which will force block cache reads.
echo "== Issue scoutfs df to force block reads to trigger stale invalidation/retry"
nr=0
old=$(t_counter block_cache_remove_stale $nr)
t_trigger_arm_silent block_remove_stale $nr
scoutfs df -p "$T_M0" > /dev/null
t_counter_diff_changed block_cache_remove_stale $old $nr
t_pass

View File

@@ -0,0 +1,61 @@
#
# Unmount the server and unmount non-quorum clients as they're replaying to the remaining server
#
majority_nr=$(t_majority_count)
quorum_nr=$T_QUORUM
test "$quorum_nr" == "$majority_nr" && \
t_skip "all quorum members make up majority, need more mounts to unmount"
test "$T_NR_MOUNTS" -lt "$T_QUORUM" && \
t_skip "Need enough non-quorum clients to unmount"
for i in $(t_fs_nrs); do
mounted[$i]=1
done
LENGTH=60
echo "== ${LENGTH}s of unmounting non-quorum clients during recovery"
END=$((SECONDS + LENGTH))
while [ "$SECONDS" -lt "$END" ]; do
sv=$(t_server_nr)
rid=$(t_mount_rid $sv)
echo "sv $sv rid $rid" >> "$T_TMP.log"
sync
t_umount $sv &
for i in $(t_fs_nrs); do
if [ "$i" -ge "$quorum_nr" ]; then
t_umount $i &
echo "umount $i pid $pid quo $quorum_nr" \
>> $T_TMP.log
mounted[$i]=0
fi
done
wait
t_mount $sv &
for i in $(t_fs_nrs); do
if [ "${mounted[$i]}" == 0 ]; then
t_mount $i &
fi
done
wait
declare RID_LIST=$(cat /sys/fs/scoutfs/*/rid | sort -u)
read -a rid_arr <<< $RID_LIST
declare LOCK_LIST=$(cut -d' ' -f 5 /sys/kernel/debug/scoutfs/*/server_locks | sort -u)
read -a lock_arr <<< $LOCK_LIST
for i in "${lock_arr[@]}"; do
if [[ ! " ${rid_arr[*]} " =~ " $i " ]]; then
t_fail "RID($i): exists when not mounted"
fi
done
done
t_pass

tests/tests/enospc.sh

@@ -0,0 +1,100 @@
#
# test hitting enospc by filling with data or metadata and
# then recovering by removing what we filled.
#
# Type Size Total Used Free Use%
#MetaData 64KB 1048576 32782 1015794 3
# Data 4KB 16777152 0 16777152 0
free_blocks() {
local md="$1"
local mnt="$2"
scoutfs df -p "$mnt" | awk '($1 == "'$md'") { print $5; exit }'
}
t_require_commands scoutfs stat fallocate createmany
echo "== prepare directories and files"
for n in $(t_fs_nrs); do
eval path="\$T_D${n}/dir-$n/file-$n"
mkdir -p $(dirname $path)
touch $path
done
sync
echo "== fallocate until enospc"
before=$(free_blocks Data "$T_M0")
finished=0
while [ $finished != 1 ]; do
for n in $(t_fs_nrs); do
eval path="\$T_D${n}/dir-$n/file-$n"
off=$(stat -c "%s" "$path")
LC_ALL=C fallocate -o $off -l 128MiB "$path" > $T_TMP.fallocate 2>&1
err="$?"
if grep -qi "no space" $T_TMP.fallocate; then
finished=1
break
fi
if [ "$err" != "0" ]; then
t_fail "fallocate failed with $err"
fi
done
done
echo "== remove all the files and verify free data blocks"
for n in $(t_fs_nrs); do
eval dir="\$T_D${n}/dir-$n"
rm -rf "$dir"
done
sync
after=$(free_blocks Data "$T_M0")
# nothing else should be modifying data blocks
test "$before" == "$after" || \
t_fail "$after free data blocks after rm, expected $before"
# XXX this is all pretty manual, would be nice to have helpers
echo "== make small meta fs"
# meta device just big enough for reserves and the metadata we'll fill
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m 10G "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
t_fail "mkfs failed"
SCR="/mnt/scoutfs.enospc"
mkdir -p "$SCR"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"
echo "== create large xattrs until we fill up metadata"
mkdir -p "$SCR/xattrs"
for f in $(seq 1 100000); do
file="$SCR/xattrs/file-$f"
touch "$file"
LC_ALL=C create_xattr_loop -c 1000 -n user.scoutfs-enospc -p "$file" -s 65535 > $T_TMP.cxl 2>&1
err="$?"
if grep -qi "no space" $T_TMP.cxl; then
echo "enospc at f $f" >> $T_TMP.cxl
break
fi
if [ "$err" != "0" ]; then
t_fail "create_xattr_loop failed with $err"
fi
done
echo "== remove files with xattrs after enospc"
rm -rf "$SCR/xattrs"
echo "== make sure we can create again"
file="$SCR/file-after"
touch $file
setfattr -n user.scoutfs-enospc -v 1 "$file"
sync
rm -f "$file"
echo "== cleanup small meta fs"
umount "$SCR"
rmdir "$SCR"
t_pass

Some files were not shown because too many files have changed in this diff.