Compare commits

...

116 Commits

Author SHA1 Message Date
Zach Brown
5bea29a168 Use cwskip for the item cache
The use of pages in the item cache got us pretty far but it
fundamentally couldn't escape the contention around the global or
per-page read locks.  Some loads became bottlenecked in contention
in the item cache.   Worse, we were seeing inconsistency in the
per-cpu cached mappings of key ranges to pages.

All the users of items in the cache are transitioned from searching for
items in locked pages to searching for items in the cwskip list.  It's
fundamentally built around a seqlock-like begin/retry pattern so most of
the item work gets wrapped around search and retry helpers.

Without pages we no longer have a global list of dirty pages.   Instead
we have per-cpu lists of dirty items that are later sorted and handed to
the btree insertion iterator.   We take the opportunity to clean up that
interface now that it's very easy for us to iterate through the stable
list of dirty items.

Rather than a global lru of pages we have an algorithm for maintaining
items in rough groups of ages.  Shrinking randomly walks the cwskip list
looking for regions of sufficiently old items rather than walking a
precise global lru list of pages.

Signed-off-by: Zach Brown <zab@versity.com>
2021-12-23 15:11:54 -08:00
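
A minimal sketch of the seqlock-like begin/retry pattern the item work is
wrapped in; all names here are hypothetical stand-ins, not the module's
real helpers:

    /* read items optimistically and retry if a writer raced with us */
    struct example_item *item;
    unsigned int seq;

    do {
        seq = cwskip_read_begin(list);      /* assumed helper */
        item = cwskip_search(list, key);    /* may race with writers */
        if (item)
            copy_item(found, item);         /* copy out under the seq */
    } while (cwskip_read_retry(list, seq)); /* retry if seq changed */
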
Zach Brown
7a999f2657 Add cwskip skip list
Add the cwskip list that is built for concurrent writers.   We're about
to use it to build the item cache around items instead of pages.

Signed-off-by: Zach Brown <zab@versity.com>
2021-12-23 15:11:54 -08:00
Zach Brown
166ab58b99 Merge pull request #62 from versity/zab/change_quorum_config
Zab/change quorum config
2021-11-29 12:18:15 -08:00
Zach Brown
8bc1ee8346 Add change-quorum-config command
Add a command to change the quorum config, which starts by only
supporting updating the super block while the file system is offline.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-24 15:41:04 -08:00
Zach Brown
285b68879a Set quorum config ver to 1 in mkfs and print
We're adding a command to change the quorum config which updates its
version number.  Let's make the version a little more visible and start
it at the more humane 1.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-24 15:41:04 -08:00
Zach Brown
1ac3efe701 Add meta_super_in_use utils helper
Move the code that checks that the super is in use from
change-format-version into its own function in util.c.   We'll use it in
an upcoming command to change the quorum config.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-24 15:40:25 -08:00
Zach Brown
ce76682db7 Make mkfs quorum helpers available
Move functions for printing and validating the quorum config from mkfs.c
to quorum.c so that they can be used in an upcoming command to change
the quorum config.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-24 13:44:51 -08:00
Zach Brown
686f8515bc Fix --quorum-count typo in mkfs error message
The change from --quorum-count to --quorum-slot forgot to update a
mention of the option in the error message mkfs prints when it isn't
provided.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-24 13:44:51 -08:00
Zach Brown
93bc52cc54 Merge pull request #60 from bgly/bduffyly/block_stale_reads
Fix block-stale-read test case
2021-11-24 10:25:26 -08:00
Zach Brown
1108d1288a Merge pull request #61 from bgly/bduffyly/rename2
Add basic renameat2 syscall support
2021-11-24 10:24:23 -08:00
Bryant G. Duffy-Ly
0abcd5a004 Take generic/025/078 off expunge list adding 23/24
We want to enable the test cases for:
generic/023 - tests that renameat2 syscall exists
generic/024 - renameat2 with NOREPLACE flag

Move both generic/025 and 078 to the no-run list so that
we can test the [not run] output when flags are passed
that are not supported.

Example output:
generic/025      [not run] fs doesn't support RENAME_EXCHANGE
generic/078      [not run] fs doesn't support RENAME_WHITEOUT

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-19 17:54:19 -06:00
Bryant G. Duffy-Ly
888ad8ec5c Add renameat2 unit test case
The goal of the test case is to have two mount points
with two async calls made to do renameat2. This allows
the two calls to race to call renameat2 RENAME_NOREPLACE.
When this happens you expect one of them to fail with
-EEXIST, which validates that the new flag works.
Essentially one of the two calls to renameat2 should hit the
new RENAME_NOREPLACE code and exit early.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-19 17:54:13 -06:00
Bryant G. Duffy-Ly
16ea0ef671 Add syscall wrapper for renameat2
Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-19 17:54:08 -06:00
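
A sketch of what such a wrapper typically looks like on systems whose
libc predates the renameat2() wrapper (glibc only gained one in 2.28);
the real test util may differ:

    #include <sys/syscall.h>
    #include <unistd.h>

    static int do_renameat2(int olddirfd, const char *oldpath,
                            int newdirfd, const char *newpath,
                            unsigned int flags)
    {
        /* no libc wrapper, so issue the raw syscall */
        return syscall(SYS_renameat2, olddirfd, oldpath,
                       newdirfd, newpath, flags);
    }
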
Bryant G. Duffy-Ly
1b8e3f7c05 Add basic renameat2 syscall support
Support the generic renameat2 syscall, then add support for the
RENAME_NOREPLACE flag. To support the flag we need to check
the existence of both entries and return -EEXIST.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-19 17:54:02 -06:00
Bryant G. Duffy-Ly
3ae0ebd0d8 Fix block-stale-read test case
The current test case attempts to create state to read
by calling setattr and getattr in an attempt to force block
cache reads. It so happens that this does not always force
block cache reads, which in rare cases causes this test case
to fail.

The new test case removes all the extra bouncing around of mount
points; we just directly call scoutfs df, which walks everyone's
allocators to summarize the block counts, which are guaranteed
to exist. Therefore, we do not have to create any sort of state
prior to trying to force a read.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-19 15:41:54 -06:00
Zach Brown
714b7f2a84 Merge pull request #54 from bgly/bduffyly/abort_conn
Fix client/server abort conn on force unmount
2021-11-09 13:29:20 -08:00
Zach Brown
945f8b4828 Merge pull request #58 from bgly/bduffyly/print_data
Fix scoutfs print <data_dev> hang
2021-11-09 09:50:14 -08:00
Zach Brown
b5ccefeeb9 Merge pull request #59 from versity/zab/v1_release_notes
Add release notes with the 1.0 GA release
2021-11-08 16:09:42 -08:00
Zach Brown
ea08942824 Add release notes with the 1.0 GA release
Let's try maintaining release notes in a file in the repo.  There are
lots of schemes for associating commits and release notes and this seems
like the simplest place to start.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-08 14:42:33 -08:00
Bryant G. Duffy-Ly
95f2a87864 Fix scoutfs print <data_dev> hang
If a user tries to print a data device, detect that it is a
data device and exit early instead of hanging.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-08 16:16:13 -06:00
Bryant G. Duffy-Ly
38ee2defd5 Add a filter for forced unmount error output
[85164.299902] scoutfs f.8c19e1.r.facf2e error: server error writing btree blocks: -5
[144308.589596] scoutfs f.c9397a.r.8ae97f error: server error -5 freeing merged btree blocks: looping commit del/upd freeing item
[174646.005596] scoutfs f.15f0b3.r.1862df error: server error -5 freeing merged btree blocks: final commit del/upd freeing item
[146653.893676] scoutfs f.c7f188.r.34e23c error: server error writing super block: -5
[273218.436675] scoutfs f.dd4157.r.f0da7e error: server failed to bind to 127.0.0.1:42002, err -98
[376832.542823] scoutfs f.049985.r.1a8987 error: error -5 reading quorum block 19 to update event 1 term 3

The above is example output that will be filtered out.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-08 07:36:02 -06:00
Bryant G. Duffy-Ly
0fc8ccb122 Fix exiting out of btree_walk early for force_umnt
We do not want to short-circuit btree_walk early; it is
better to handle the force unmount on the caller side.
Therefore, remove this from btree_walk.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-05 15:21:09 -05:00
Bryant G. Duffy-Ly
e4a3c2b95d Break client/server out of waiting network replies
If there is a forced unmount we call _net_shutdown from
umount_begin in order to tell the server and clients to
break out of pending network replies. We then add the call
to abort within the shutdown_worker since most of the mucking
with send and resend queues is done there.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-05 15:21:04 -05:00
Bryant G. Duffy-Ly
cf4e6611d3 Fix inconsistency assertions at commit_log_merge
Only BUG_ON for inconsistency; don't do it for commit errors
or failure to delete the original request.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-05 15:18:57 -05:00
Bryant G. Duffy-Ly
65429a9cc4 Ensure that writer_init and alloc_init are cleaned
In scoutfs_server_worker we do not properly handle the cleanup
of _block_writer_init and alloc_init. On error paths, if either
of those contexts is initialized, we can call
alloc_prepare_commit or writer_forget_all to ensure we drop
the block references and clear the dirty status of all the blocks
in the writer.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-05 15:05:42 -05:00
Zach Brown
d764ed7c43 Merge pull request #57 from versity/zab/update_readme
Update README.md
2021-11-05 11:34:44 -07:00
Zach Brown
465e5ee769 Update README.md
Remove a bunch of old language from the README.  We're no longer in the
early days of the open release so we can remove all the alpha quality
language.   And the system has grown sufficiently that the repo README
isn't a great place for a small getting started doc.  There just isn't
room to do the subject justice.   If we need such a thing for the
project we'll put it as a first order doc in the repo that'd be
distributed along with everything else.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-05 11:16:57 -07:00
Bryant G. Duffy-Ly
83a6bbb640 Fix inconsistency in server_log_merge_free_work
In order to safely free blocks we need to first dirty
the work. This allows it to resume later without a double
free.

Signed-off-by: Bryant G. Duffy-Ly <bduffyly@versity.com>
2021-11-03 17:09:51 -05:00
Zach Brown
f02d68f567 Merge pull request #55 from versity/zab/v1_format_version
Zab/v1 format version
2021-11-03 10:18:50 -07:00
Zach Brown
5d6a510e25 Merge pull request #56 from versity/zab/xattr_shrink_bad_items
Fix xattr update out of bounds access
2021-11-02 10:17:06 -07:00
Zach Brown
1b4d291bf7 Fix xattr update out of bounds access
As we update xattrs we need to update any existing old items with the
contents of the new xattr that uses those items.   The loop that updated
existing items only took the old xattr size into account and assumed
that the new xattr would use those items.   If the new xattr size used
fewer parts then the attempt to update all the old parts that weren't
covered by the new size would go very wrong.   The length of the region
in the new xattr would be negative so it'd try to use the max part
length.  Worse, it'd copy these max part length regions outside the
input new xattr buffer.  Typically this would land in addressable memory
and copy garbage into the unused old items before they were later
deleted.

However, it could access so far outside the input buffer that it could
cross a page boundary into inaccessible memory and fault.  We saw this in
the field while trying to repeatedly incrementally shrink a large xattr.

This fixes the loop that updates overlapping items between the new and
old xattr to start with the smaller of their two item counts.  Now it
will only update items that are actually used by both xattrs and will
only safely access the new xattr input buffer.

Signed-off-by: Zach Brown <zab@versity.com>
2021-11-01 11:33:17 -07:00
Zach Brown
223ee5deef Declare v1 of the stable persistent format
From now on if we make incompatible changes to structures or messages
then we update the format version and ensure that the code can deal with
all the versions in its supported range.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
8f60ac06c5 Clean up our ioctl numbers
We had arbitrarily chosen an ioctl code 's' to match scoutfs, but of
course that conflicts.  This chooses an arbitrary hole in the upstream
reservations from ioctl-number.rst.

Then we make sure to have our _IO[WR] usage reflect the direction of the
final type parameter.  For most of our ioctls userspace is writing an
argument parameter to perform an operation (that often has side
effects).   Most of our ioctls should be _IOW because userspace is
writing the parameter, not _IOR (though the operation tends to read
state).  A few ioctls copy output back to userspace in the parameter so
they're _IOWR.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
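
A hedged illustration of the _IO[WR] convention described above, using a
made-up magic character and struct rather than the real scoutfs
definitions:

    #include <linux/ioctl.h>
    #include <linux/types.h>

    struct example_args {
        __u64 ino;  /* userspace fills this in */
    };

    /* userspace writes the argument to perform the operation, so this
     * is _IOW even though the operation itself reads state */
    #define EXAMPLE_IOC_OP      _IOW('z', 1, struct example_args)

    /* an ioctl that also copies results back to userspace is _IOWR */
    #define EXAMPLE_IOC_OP_GET  _IOWR('z', 2, struct example_args)
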
Zach Brown
932a842ae3 Remove valid_bytes from stat _more ioctls
The idea here was that we'd expand the size of the struct and
valid_bytes would tell the kernel which fields were present in
userspace's struct.  That doesn't combine well with the ioctl convention
of having the size of the type baked into the ioctl number.   We'll
remove this to make the world less surprising.  If we expand the
interface we'd add additional ioctls and types.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
618a7a4c47 Remove unused lock server alloc and wri
While checking in on some other code I noticed that we have lingering
allocator and writer contexts over in the lock server.  The lock server
used to manage its own client state and recovery.  We've since moved
that into shared recov functionality in the server.  The lock server no
longer manipulates its own btrees and doesn't need these unused
references to the server's contexts.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
9ebf43db99 Spread out key zone and type values
Introduce some space between the current key zone and type values so
that we have room to insert new keys amongst the current keys if we need
to.   A spacing of 4 is arbitrarily chosen as small enough to still give
us intuitively small numbers while leaving enough room to grow, given
how long it's taken to come to the current number of keys.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
e38beee85a Stop using inode index key type as array index
The code that updates inode index items on behalf of indexed fields uses
an array to track changes in the fields.  Those array indexes were the
raw key type values.

We're about to introduce some sparse space between all the key values so
that we have some room to add keys in the future at arbitrary sort
positions amongst the previous keys.

We don't want the inode index item updating code to keep using raw types
as array indices when the type values are no longer small dense values.
We introduce indirection from type values to array indices to keep the
tracking array in the in-memory inode struct small.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
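
A minimal sketch of the indirection described above; the type names and
values are invented for illustration:

    /* map sparse on-disk key type values to dense array slots so the
     * tracking array in the in-memory inode struct stays small */
    static int index_type_to_slot(unsigned int type)
    {
        switch (type) {
        case EXAMPLE_INDEX_META_SEQ_TYPE: return 0; /* assumed types */
        case EXAMPLE_INDEX_DATA_SEQ_TYPE: return 1;
        default:                          return -1;
        }
    }
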
Zach Brown
20ac2e35fa Remove clock_sync field from net message
As we freeze the format let's remove this old experiment to try and make
it easier to line up traces from different mounts.   It never worked
particularly well and I think it could be argued that trying to merge
trace logs on different machines isn't a particularly meaningful thing
to do.   You care about how they interact not what they were doing at
the same time with their independent resources.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
80ee2c6d57 Harden client transaction processing
There are a few bad corner cases in the state machine that governs how
client transactions are opened, modified, and committed.

The worst problem is on the server side.   All server request handlers
need to cope with resent requests without causing bad side effects.
Both get_log_trees and commit_log_trees would try to fully process
resent requests.  _get_log_trees() looks safe because it works with the
log_trees that was stored previously.  _commit_log_trees() is not safe
because it can rotate out the srch log file referenced by the sent
log_trees every time it's processed.  This could create extra srch
entries which would delete the first instance of entries.  Worse still,
by injecting the same block structure into the system multiple times it
ends up causing multiple frees of the blocks that make up the srch file.

The client side problems are slightly different, but related.   There
aren't strong constraints which guarantee that we'll only send a commit
request after a get request succeeds.   In crazy circumstances the
commit request in the write worker could come before the first get in
mount succeeds.   Far worse is that we can send multiple commit requests
for one transaction if it changes as we get errors during multiple
queued write attempts, particularly if we get errors from get_log_trees
after having successfully committed.

This hardens all these paths to ensure a strict sequence of
get_log_trees, transaction modification, and commit_log_trees.

On the server we add *_trans_seq fields to the log_trees struct so that
both get_ and commit_ can see that they've already prepared a commit to
send or have already committed the incoming commit, respectively.   We
can use the get_trans_seq field as the trans_seq of the open transaction
and get rid of the entire separate mechanism we used to have for
tracking open trans seqs in the clients.  We can get the same info by
walking the log_trees and looking at their *_trans_seq fields.

In the client we have the write worker immediately return success if
mount hasn't opened the first transaction.   Then we don't have the
worker return to allow further modification until it has gotten success
from get_log_trees.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
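
A sketch of the shape of the struct change on the server side, with
illustrative names and field placement; the real log_trees struct has
many more fields:

    struct example_log_trees {
        /* ... existing allocator and btree root fields ... */
        __le64 get_trans_seq;    /* set when get_log_trees prepares a commit */
        __le64 commit_trans_seq; /* set when commit_log_trees applies one */
    };

    /* resent get/commit requests compare their seq against these fields
     * and return the previous result instead of reprocessing */
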
Zach Brown
42c4c6dd24 Move transaction sbi fields to trans_info
The transaction code was built a million years ago and put all of its
data in our core super block info.   This finally moves the rest of the
private transaction fields out of the core super block and into the
transaction info.   This makes it clear that it's private to trans.c and
brings it in line with the rest of the subsystems in the tree.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
7d71b610af Add server extent motion tracking
Add tracking in the alloc functions that the server uses to move extents
between allocator structures on behalf of client mounts.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
70ede28e39 Remove unused traced_extent leavings
Remove some lingering support helpers for the traced_extent struct that
we haven't used in a while.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
b477604339 Don't clobber srch compact errors
The srch compaction worker will wait a bit before attempting another
compaction as it finishes a compaction that failed.

Unfortunately, it clobbered the errors it got during compaction with the
result of sending the commit to the server with the error flag.  If the
commit is successful then it thinks there were no errors and immediately
re-queues itself to try the next compaction.

If the error is persistent, as it was with a bug in how we merged log
files with a single page's worth of entries, then we can spin
indefinitely getting an error, clobbering the error with the commit
result, and immediately queueing our work to do it all over again.

This fix preserves existing errors when getting the result of the commit
and will correctly back off.  If we get persistent merge errors at least
they won't consume significant resources.  We add a counter for the
commit errors so we can get some visibility if this happens.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
75f9aabe75 Allow compacting logs down to a single page
The k-way merge function at the core of the srch file entry merging had
some bookkeeping math (calculating number of parents) that couldn't
handle merging a single incoming entry stream, so it threw a warning and
returned an error.  When refusing to handle that case, it was assuming
that the caller was trying to merge down a single log file, which
doesn't make any sense.

But in the case of multiple small unsorted logs we can absolutely end up
with their entries stored in one sorted page.   We have one sorted input
page that's merging multiple log files.  The merge function is also the
path that writes to the output file so we absolutely need to handle this
case.

We more carefully calculate the number of parents, clamping it to one
parent when we'd otherwise get "(roundup(1) -> 1) - 1 == 0" when
calculating the number of parents from the number of inputs.  We can
relax the warning and error to refuse to merge nothing.

The test triggers this case by putting single search entries in the log
files for mounts and unmounting them to force rotation of the mount log
files into mergeable rotated log files.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
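
A sketch of the clamped calculation; parents_from_inputs() stands in for
the real bookkeeping math:

    /* old math gave (roundup(1) -> 1) - 1 == 0 parents for one input,
     * but a single sorted input page is legitimate */
    nr_parents = parents_from_inputs(nr_inputs);  /* assumed helper */
    if (nr_parents == 0)
        nr_parents = 1;

    /* the warning and error are relaxed to only refuse merging nothing */
    if (nr_inputs == 0)
        return -EINVAL;
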
Zach Brown
cf512c5fcf Use inode_count field for statfs file counts
Our statfs implementation had clients reading the super block and using
the next free inode number to guess how many inodes there might be.  We
are very aggressive with giving directories private pools of inode
numbers to allocate from.   They're often not used at all, creating huge
gaps in allocated inode numbers.   The ratio of the average number of
allocations per directory to the batch size given to each directory is
the factor that the used inode count can be off by.

Now that we have a precise count of active inodes we can use that to
return accurate counts of inodes in the files fields in the statfs
struct.  We still don't have static inode allocation so the fields don't
make a ton of sense.  We fake the total and free count to give a
reasonable estimate of the total files that doesn't change while the
free count is calculated from the correct count of used inodes.

While we're at it we add a request to get the summed fields that the
server can cheaply discover in cache rather than having the client
always perform read IOs.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
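
A minimal sketch of the statfs filling described above; the fake total
and the name of the counter on the scoutfs side are illustrative:

    /* fake a stable total and compute free from the precise used count */
    buf->f_files = EXAMPLE_FAKE_TOTAL_FILES;
    buf->f_ffree = EXAMPLE_FAKE_TOTAL_FILES -
                   le64_to_cpu(super->inode_count);
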
Zach Brown
a53d6d1a8e Add scoutfs_alloc_foreach_super which takes super
Add an alloc_foreach variant which uses the caller's super to walk the
allocators rather than always reading it off the device.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
95ed36f9d3 Maintain inode count in super and log trees
Add a count of used inodes to the super block and a change in the inode
count to the log_trees struct.   Client transactions track the change in
inode count as they create and delete inodes.   The log_trees delta is
added to the count in the super as finalized log_trees are deleted.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:47 -07:00
Zach Brown
94e5bc1457 Remove unused scoutfs_last_ino()
Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
366f615c9f Add support for our format version
We had previously started on a relatively simple notion of an
interoperability version which wasn't quite right.  This fleshes out
support for a more functional format version.   The super blocks have a
single version that defines behaviour of the running system.   The code
supports a range of versions and we add some initial interfaces for
updating the version while the system is offline.   All of this together
should let us safely change the underlying format over time.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
ac2587017e Add write_nr to quorum blocks
Add a write_nr field to the quorum block header which is incremented
with every write.  Each event also gets a write_nr field that is set to
the incremented value from the header.   This gives us a history of the
order of event updates that isn't sensitive to misconfigured time.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
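
A sketch of the layout change described above, with hypothetical struct
names:

    struct example_quorum_block_header {
        __le64 write_nr;  /* incremented with every write of the block */
    };

    struct example_quorum_event {
        /* set from the header's freshly incremented write_nr, giving an
         * update ordering that isn't sensitive to misconfigured time */
        __le64 write_nr;
    };
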
Zach Brown
1cdcf41ac7 Move more block read/write functions to util
We're adding another command that does block IO so move some block
reading and writing functions out of mkfs.   We also grow a few function
variants and call the write_sync variant from mkfs instead of having it
manually sync.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
024426df28 Add a file for userspace quorum config helpers
Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
a0690070ae Don't null terminate our note strings
The code that shows the note sections as files uses the section size to
define the size of the notes payload.  We don't need to null terminate
the strings to define their lengths.  Doing so puts a null in the notes
file which isn't appreciated by many readers.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
4e00f95014 run-tests builds our targets with -j
The test harness might as well use all cpus when building.  It's
reasonably safe to assume both that the test systems are otherwise idle
and that the build is likely to succeed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
Zach Brown
0c95388f3b Set TCP_USER_TIMEOUT in addition to keepalives
TCP keepalive probes only work when the connection is idle.  They're not
sent when there's unacked send data being retransmitted.  If the server
fails while we're retransmitting we don't break the connection and try
to elect and connect to a new server until the very long default
connection timeouts fire or the server comes back and the stale
connection is aborted.

We can set TCP_USER_TIMEOUT to break an unresponsive connection when
there's written data.  It changes the behavior of the keepalive probes
so we rework them a bit to clearly apply our timeout consistently
between the two mechanisms.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:30:46 -07:00
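
The userspace form of the option described above, as a rough sketch (the
kernel module sets it through the in-kernel socket interfaces, and the
timeout value here is an assumption):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static void set_conn_timeouts(int fd)
    {
        int one = 1;
        unsigned int timeout_ms = 30 * 1000;  /* assumed value */

        /* keepalive probes only cover idle connections... */
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &one, sizeof(one));
        /* ...TCP_USER_TIMEOUT also breaks connections that are stuck
         * retransmitting unacked data */
        setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                   &timeout_ms, sizeof(timeout_ms));
    }
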
Zach Brown
d255dd3b32 Fix SCOUTFs typo in totl name nr define
Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:10:45 -07:00
Zach Brown
9b4ac64312 Consistently stop fencing as server stops
As the server comes up it needs to fence any previous servers before it
assumes exclusive access to the device.  If fencing fails it can leave
fence requests behind.   The error path for these very early failures
didn't shut down fencing so we'd have lingering fence requests span the
life cycle of server startup and shutdown.  The next time the server
starts up in this mount it can try to create the fence request again,
get an error because a lingering one already exists, and immediately
shut down.

The result is that fencing errors that hit that initial attempt during
server startup can become persistent fencing errors for the lifetime of
that mount, preventing it from ever successfully starting the server.

Moving the fence stop call to hit all exiting error paths consistently
cleans up fence requests and avoids this problem.  The next server
instance will get a chance to process the fence request again.  It might
well hit the same error, but at least it gets a chance.

Signed-off-by: Zach Brown <zab@versity.com>
2021-10-28 12:10:45 -07:00
Zach Brown
22f9ab4dab Merge pull request #53 from bgly/fix_mkdir_test
Fix mkdir-rename-rmdir test script
2021-10-26 11:53:15 -07:00
Bryant Duffy-Ly
501953d69e Fix mkdir-rename-rmdir test script
The current script gets stuck in an infinite loop when the test
suite is started with 1 mount point. This is due to the advancement
part of the script in which it advances the ops for each mount.
The current while loop checks for when the op_mnt wraps by checking if
it equals 0. But the problem is we set each of the op_mnts to 0 during
the advancement, so when it wraps it still equates to 0, so it is an
infinite loop. Therefore, the fix is to check at the end of the loop
whether the last op's mount number wrapped. If so, just break out.

Signed-off-by: Bryant Duffy-Ly <bduffyly@versity.com>
2021-10-21 11:41:02 -05:00
Bryant Duffy-Ly
66b8c5fbd7 Enhance clarity of some kfree paths
In some of the allocation paths there are goto statements
that end up calling kfree(). That is fine, but in cases
where the pointer is not initially set to NULL we
might have undefined behavior. kfree on a NULL pointer
does nothing, so essentially these changes should not
change behavior, but they clarify the code paths better.

Signed-off-by: Bryant Duffy-Ly <bduffyly@versity.com>
2021-10-06 18:07:27 -05:00
Zach Brown
3c6c2194bd Merge pull request #51 from versity/zab/totl_xattr_tag
Zab/totl xattr tag
2021-09-13 18:06:28 -07:00
Zach Brown
6ca8c0eec2 Consistently initialize dentry info
Unfortunately, we're back in kernels that don't yet have d_op->d_init.
We allocate our dentry info manually as we're given dentries.  The
recent verification work forgot to consistently make sure the info was
allocated before using it.   Fix that up, and while we're at it be a bit
more robust in how we check to see that it's been initialized without
grabbing the d_lock.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-13 14:41:07 -07:00
Zach Brown
ea2b01434e Add support for i_version
This adds i_version to our inode and maintains it as we allocate, load,
modify, and store inodes.  We set the flag in the superblock so
in-kernel users can use i_version to see changes in our inodes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-13 14:41:07 -07:00
Zach Brown
d5eec7d001 Fix uninitialized srch ret that won't happen
More recent gcc notices that ret in delete_files can be undefined if nr
is 0 while missing that we won't call delete_files in that case.  Seems
worth fixing, regardless.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-13 14:41:07 -07:00
Zach Brown
ab92d8d251 Add quick test for racing creates
Add a quick test to make sure that create is validating stale dentries
before deciding if it should create or return -EEXIST.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-13 14:41:07 -07:00
Zach Brown
b9a0f1709f Add xattr .totl. tag
Add the .totl. xattr tag.  When the tag is set the end of the name
specifies a total name with 3 encoded u64s separated by dots.  The value
of the xattr is a u64 that is added to the named total.   An ioctl is
added to read the totals.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-13 14:41:07 -07:00
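
A hedged usage sketch; the exact xattr name prefix and the value
encoding are assumptions here, not taken from the commit:

    #include <sys/xattr.h>

    static int add_to_total(const char *path)
    {
        unsigned long long delta = 4096;  /* u64 added to the named total */

        /* "1.2.3" is the total name: three encoded u64s separated by
         * dots; the "scoutfs.totl." prefix is an assumption */
        return setxattr(path, "scoutfs.totl.1.2.3",
                        &delta, sizeof(delta), 0);
    }
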
Zach Brown
a59fd5865d Add seq and flags to btree items
The fs log btrees have values that start with a header that stores the
item's seq and flags.  There's a lot of sketchy code that manipulates
the value header as items are passed around.

This adds the seq and flags as core item fields in the btree.   They're
only set by the interfaces that are used to store fs items: _insert_list
and _merge.  The rest of the btree items that use the main interface
don't work with the fields.

This was done to help delta items discover when logged items have been
merged before the finalized log btrees are deleted and the code ends up
being quite a bit cleaner.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-09 14:44:55 -07:00
Zach Brown
46edf82b6b Add inode crtime creation time
Add an inode creation time field.  It's created for all new inodes.
It's visible to stat_more.  setattr_more can set it during
restore.

Signed-off-by: Zach Brown <zab@versity.com>
2021-09-03 11:14:41 -07:00
Zach Brown
e9078d83bf Merge pull request #50 from versity/zab/verify_dentries
Verify dentries after locking
2021-08-31 11:48:29 -07:00
Zach Brown
79fbaa6481 Verify dentries after locking
Our dir methods were trusting dentry args.  The vfs code paths use
i_mutex to protect dentries across revalidate or lookup and method
calls.  But that doesn't protect methods running in other mounts.
Multiple nodes can interleave the initial lookup or revalidate then
actual method call.

Rename got this right.  It is very paranoid about verifying inputs after
acquiring all the locks it needs.

We extend this pattern to the rest of the methods that need to use the
mapping of name to inode (and our hash and pos) in dentries.  Once we
acquire the parent dir lock we verify that the dentry is still current,
returning -EEXIST or -ENOENT as appropriate.

Along these lines, we tighten up dentry info correctness a bit by
updating our dentry info (recording lock coverage and hash/pos) for
negative dentries produced by lookup or as the result of unlink.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-31 09:49:32 -07:00
Zach Brown
9b9d3cf6fc Merge pull request #49 from versity/zab/btree_merge_fixes
Zab/btree merge fixes
2021-08-25 11:50:40 -07:00
Zach Brown
ad5662b892 Handle dupe invalidation requests during recovery
Client lock invalidation handling was very strict about not receiving
duplicate invalidation requests from the server because it could only
track one pending request.  The promise to only send one invalidate at a
time is made by one server; it can't be enforced across server failover.
Particularly because invalidation processing can have to do quite a lot
of work with the server as it tears down state associated with the lock.

We fix this by recording and processing each individual incoming
invalidation request on the lock.

The code that handled reordering of incoming grant responses and
invalidation requests waited for the lock's mode to match the old mode
in the invalidation request before proceeding.  That would have
prevented duplicate invalidation requests from making forward progress.

To fix this we make lock client receive processing synchronous instead
of going through async work which can reorder.  Now grant responses are
processed as they're received and will always be resolved before all the
invalidation requests are queued and processed in order.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
f5577e26b1 Reset item state when retrying stale forest reads
The forest reader reads items from the fs_root and all log btrees and
gives them to the caller who tracks them to resolve version differences.

The reads can run into stale blocks which have been overwritten.  The
forest reader was implementing the retry under the item state in the
caller.  This can corrupt items that are only seen first in an old fs
root before a merge and then only seen in the fs_root after a merge.  In
this case the item won't have any versioning and the existing version
from the old fs_root is preferred.  This is particularly bad when the
new version was deleted -- in that case we have no metadata which would
tell us to drop the old item that was read from the old fs_root.

This is fixed by pushing the retry up to callers who wipe the item state
before each retry.  Now each set of items is related to a single
snapshot of the fs_root and logs at one point in time.

I haven't seen definitive evidence of this happening in practice.  I
found this problem after putting on my craziest thinking toque and
auditing the code for places where we could lose item updates.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
5f57785790 Fix btree merge input item iteration
Btree merging attempted to build an rbtree of the input roots with only
one version of an item present in the rbtree at a time.  It really
messed this up by completely dropping an input root when a root with a
newer version of its item tried to take its place in the rbtree.  What
it should have done is advance to the next item in the older root, which
itself could have required advancing some other older root.  Dropping
the root entirely is catastrophically wrong because it hides the rest of
the items in the root from merging.  This has been manifesting as
occasional mysterious item loss during tests where memory pressure, item
update patterns, and merging all lined up just so.

This fixes the problem by more clearly keeping the next item in each
root in the rbtree.   We sort by newest to oldest version so that once
we merge the most recent version of an item it's easy to skip all the
older versions of the items in the next rbtree entries for the
rest of the input roots.

While we're at it we work with references to the static cached input
btree blocks.  The old code was a first pass that used an expensive
btree walk per item and copied the value payload.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
2a33b9faf0 Add some error testing to srch-basic-functionality
When the xattr inode searches fail the test will eventually fail when the
output differs, but that could take a while.  Have it fail much sooner
so that we can have tighter debugging iterations and trace ring buffer
contents that are likely to be a lot closer to the first failure.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
3740c0a995 More carefully scan for orphan inodes
The current orphan scan uses the forest_next_hint to look for candidate
orphan items to delete.  It doesn't skip deleted items and checks the
forest of log btrees so it'd return hints for every single item that
existed in all the log btrees across the system.  And we make the hint
call per item.

When the system is deleting a lot of files we end up generating a huge
load where all mounts are constantly getting the btree roots from the
server, reading all the newest log btree blocks, finding deleted orphan
items for inodes that have already been deleted, and moving on to the
next deleted orphan item.

The fix is to use a read-only traversal of only one version of the fs
root for all the items in one scan.   This avoids all the deleted orphan
items that exist in the log btrees which will disappear when they're
merged.  It lets the item iteration happen in a single read-only cached
btree instead of constantly reading in the most recently written root
block of every log btree.

The result is an enormous speedup of large deletions.  I don't want to
describe exactly how enormous.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
a4f5293e78 Flush invalidate and iput inode references
We can be performing final deletion as inodes are evicted during
unmount.  We have to keep full locking, transactions, and networking up
and running for the evict_inodes() call in generic_shutdown_super().
Unfortunately, this means that workers can be using inode references
during evict_inodes() which prevents them from being evicted.  Those
workers can then remain running as we tear down the system, causing
crashes and deadlocks as the final iputs try to use resources that have
been destroyed.

The fix is to first properly stop orphan scanning, which can instantiate
new cached inodes, before the call to kill_block_super ends up trying
to evict all inodes.  Then we just need to wait for any pending iput and
invalidate work to finish and perform the final iput, which will always
evict because generic_shutdown_super has cleared MS_ACTIVE.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
0c3026a2b7 Add simple per-lock server message count stats
Add some simple tracking of message counts for each lock in the lock
server so that we can start to see where conflicts may be happening in a
running system.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
5bc95fac7d Add scoutfs_unmounting()
Add a quick helper that can be used to avoid doing work if we know that
we're already shutting down.  This can be a single coarser indicator
than adding functions to each subsystem to track that we're shutting
down.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
36fcc4665d Align first free ino to lock group
Currently the first inode number that can be allocated directly follows
the root inode.  This means the first batch of allocated inodes are in
the same lock group as the root inode.

The root inode is a bit special.  It is always hot as absolute path
lookups and inode-to-path resolution always read directory entries from
the root.

Let's try aligning the first free inode number to the next inode lock
group boundary.  This will stop work on those inodes from necessarily
conflicting with work on the root inode.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
b0a08eb922 Remove lock grace period
We had some logic to try and delay lock invalidation while the lock was
still actively in use.  This was trying to reduce the cost of
pathological lock conflict cases but it had some severe fairness
problems.

It was first introduced to deal with bad patterns in userspace that no
longer exist and it was built on top of the LSM transaction machinery
that also no longer exists.   It hasn't aged well.

Instead of introducing invalidation latency in the hopes that it leads
to more batched work, which it can't always, let's aim more towards
reducing latency in all parts of the write-invalidate-read path and
also aim towards reducing contention in the first place.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
bb571377dc Don't merge newer items past older
We have a problem where items can appear to go backwards in time because
of the way we chose which log btrees to finalize and merge.

Because we don't have versions in items in the fs_root, and even might
not have items at all if they were deleted, we always assume items in
log btrees are newer than items in the fs root.

This creates the requirement that we can't merge a log btree if it has
items that are also present in older versions in other log btrees which
are not being merged.  The unmerged old item in the log btree would take
precedence over the newer merged item in the fs root.

We weren't enforcing this requirement at all.  We used the max_item_seq
to ensure that all items were older than the current stable seq but that
says nothing about the relationship between older items in the finalized
and active log btrees.  Nothing at all stops an active btree from having
an old version of a newer item that is present in another mount's
finalized log btree.

To reliably fix this we create a strict item seq discontinuity between
all the finalized merge inputs and all the active log btrees.  Once any
log btree is naturally finalized the server forces all the clients to
group up and finalize all their open log btrees.   A merge operation can
then safely operate on all the finalized trees before any new trees are
given to clients who would start using increasing item seqs.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-25 10:14:38 -07:00
Zach Brown
5897f4d889 Add a trivial trace_printk wrapper
Make it a bit easier to include the fsid and rid in trace_printk
messages when we're experimenting.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:12:20 -07:00
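
A sketch of the kind of wrapper described, with hypothetical accessors
for the fsid and rid:

    /* prefix experimental trace_printk messages with the fsid and rid
     * so output from different mounts can be told apart */
    #define scoutfs_tprintk(sb, fmt, args...)                     \
        trace_printk("f.%llx r.%llx " fmt,                        \
                     example_fsid(sb), example_rid(sb), ##args)
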
Zach Brown
999093bfc9 Add sync log trees network command
Add a command for the server to request that clients commit their open
transaction.   This will be used to create groups of finalized log
btrees for consistent merging.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:12:17 -07:00
Zach Brown
05b5d93365 Verify that quorum_slot_nr references valid slot
We were checking that quorum_slot_nr was within the range of possible
slots allowed by the format as it was parsed.  We weren't checking that
it referenced a configured slot.  Make sure, and give a nice error
message that shows the configured slots.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:11:40 -07:00
Zach Brown
4d7191dc48 Print messages on extent ins/rem errors
Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:11:40 -07:00
Zach Brown
4495dbdce6 Set initial quorum term from max of all blocks
During rough forced unmount testing we saw a seemingly mysterious
concurrent election.  It could be explained if mounts coming up don't
start with the same term.  Let's try having mounts initialize their term
to the greatest of all the terms they can see in the quorum blocks.
This will prevent the situation where some new quorum actors with
greater terms start out ignoring all the messages from others.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:11:40 -07:00
Zach Brown
70569b0448 Trivial quorum test;set -> test_and_set
Nothing interesting here, just a minor convenience to use test and set
instead of testing and then setting.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:11:40 -07:00
Zach Brown
823838cf01 Add more messages to server processing errors
The server doesn't give us much to go on when it gets an error handling
requests to work with log trees from the client.  This adds a lot of
specific error messages so we can get a better understanding of
failures.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-24 09:11:40 -07:00
Zach Brown
89b5865a4c Verify that log tree commit is for sending rid
We were trusting the rid in the log trees struct that the client sent.
Compare it to our recorded rid on the connection and fail if the client
sent the wrong rid.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-17 12:13:01 -07:00
Zach Brown
7cf9cd8c20 Merge pull request #48 from versity/zab/missed_invalidate_wakeup
Queue invalidation during previous request
2021-08-09 09:50:39 -07:00
Zach Brown
65ac42831f Queue invalidation during previous request
The locking protocol only allows one outstanding invalidation request
for a lock at a time.  The client invalidation state is a bit hairy and
involves removing the lock from the invalidation list while it is being
processed which includes sending the response.  This means that another
request can arrive while the lock is not on the invalidation list.  We
have fields in the lock to record another incoming request which puts
the lock back on the list.

But the invalidation work wasn't always queued again in this case.  It
*looks* like the incoming request path would queue the work, but by
definition the lock isn't on the invalidation list during this race.  If
it's the only lock in play then the invalidation list will be empty and
the work won't be queued.  The lock can get stuck with a pending
invalidation if nothing else kicks the invalidation worker.  We saw this
in testing when the root inode lock group missed the wakeup.

The fix is to have the work requeue itself after putting the lock back
on the invalidation list when it notices that another request came in.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-06 15:41:11 -07:00
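
A sketch of the requeue fix with hypothetical names; the point is that
the worker requeues itself instead of relying on a list-emptiness check
that can miss this race:

    /* after processing, another request may have arrived while the lock
     * was off the invalidation list */
    if (lock->request_pending) {
        list_add_tail(&lock->inval_entry, &linfo->inval_list);
        queue_work(linfo->wq, &linfo->inval_work);  /* requeue ourselves */
    }
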
Zach Brown
dde6dab0a1 Merge pull request #47 from versity/zab/stability_fixes
Zab/stability fixes
2021-08-02 12:22:44 -07:00
Zach Brown
cb1726681c Fix net BUG_ON if reconnection farewell send races
When a client socket disconnects we save the connection state to re-use
later if the client reconnects.  A newly accepted connection finds the
old connection associated with the reconnecting client and migrates
state from the old idle connection to the newly accepted connection.

While moving messages between the old and new send and resend queues the
code had an aggressive BUG_ON that was asserting that the newly accepted
connection couldn't have any messages in its resend queue.

This BUG can be tripped due to the ordering of greeting processing and
connection state migration.  The server greeting processing path sends
the farewell response to the client before it calls the net code to
migrate connection state.  When it "sends" the farewell response it puts
the message on the send queue and kicks the send work.  It's possible
for the send work to execute and move the farewell response to the
resend queue and trip the BUG_ON.

This is harmless.   The sent greeting response is going to end up on the
resend queue either way; there's no reason for the reconnection
migration to assert that it can't have happened yet.  It is going to be
dropped the moment we get a message from the client with a recv_seq that
is necessarily past the greeting response which always gets a seq of 1
from the newly accepted connection.

We remove the BUG_ON and try to splice the old resend queue after the
possible response at the head of the resend_queue so that it is the
first to be dropped.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-02 11:15:57 -07:00
Zach Brown
cdff272163 Fix alloc list exhaustion calculation
The last thing server commits do is move extents from the freed list
into freed extents.  It moves as many as it can until it runs out of
avail meta blocks and space for freed meta blocks in the current
allocator's lists.

The calculation for whether the lists had resources to move an extent
was quite off.  It missed that the first move might have to dirty the
current allocator or the list block, that the btree could join/split
blocks at each level down the paths, and boy does it look like the
height component of the calculation was just bonkers.

With the wrong calculation the server could overflow the freed list
while moving extents and trigger a BUG_ON.   We rarely saw this in
testing.

Signed-off-by: Zach Brown <zab@versity.com>
2021-08-01 14:31:57 -07:00
Zach Brown
7e935898ab Avoid premature metadata enospc
server_get_log_trees() sets the low flag in a mount's meta_avail
allocator, triggering enospc for any space consuming allocations in the
mount, if the server's global meta_avail pool falls below the reserved
block count.  Before each server transaction opens we swap the global
meta_avail and meta_freed allocators to ensure that the transaction has
at least the reserved count of blocks available.

This creates a risk of premature enospc as the global meta_avail pool
drains and swaps to the larger meta_freed.  The pool can be close to the
reserved count, perhaps at it exactly.  _get_log_trees can fill the
client's mount, even a little, and drop the global meta_avail total
under the reserved count, triggering enospc, even though meta_freed
could have had quite a lot of blocks.

The fix is to ensure that the global meta_avail has 2x the reserved
count and swapping if it falls under that.  This ensures that a server
transaction can consume an entire reserved count and still have enough
to avoid triggering enospc.

This fixes a scattering of rare premature enospc returns that were
hitting during tests.  It was rare for meta_avail to fall just at the
reserved count and for get_log_trees to have to refill the client
allocator, but it happened.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:26:32 -07:00
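
A sketch of the new threshold with illustrative names:

    /* swap in meta_freed while a full reserved count of slack remains,
     * so one server transaction can't drag the pool below the low mark */
    if (meta_avail_total(super) < 2 * reserved_meta_blocks(super))
        swap_alloc_roots(&super->meta_avail, &super->meta_freed);
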
Zach Brown
6d0694f1b0 Add resize_devices ioctl and scoutfs command
Add a scoutfs command that uses an ioctl to send a request to the server
to safely use a device that has grown.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:26:32 -07:00
Zach Brown
fd686cab86 Fix total_data_blocks calculation in mkfs
mkfs was incorrectly initializing total_data_blocks.  The field is meant
to record the number of blocks from the start of the device that the
filesystem could access.  mkfs was subtracting the initial reserved area
of the device, which instead gives the number of blocks that the
filesystem might access.

This could allow accesses past the end of the device if mount checks
the device size against the smaller total_data_blocks.

And we're about to use total_data_blocks as the start of a new extent to
add when growing the volume.  It needs to be fixed so that this new
grown free extent doesn't overlap with the end of the existing free
extents.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:26:32 -07:00
Zach Brown
4c1181c055 Remove first_ and last_ super blkno fields
There are fields in the super block that specify the range of blocks
that would be used for metadata or data.  They are from the time when a
single block device was carved up into regions for metadata and data.

They don't make sense now that we have separate metadata and data block
devices.  The starting blkno is static and we go to the end of the
device.

This removes the fields now that they serve no purpose.   Their only
use, checking that freed extents fell within the correct bounds, can
still be performed by using the static starting number or roughly using
the size of the devices.  It's not perfect, but this is already only
a check to see that the blknos aren't utter nonsense.

We're removing the fields now to avoid having to update them while
worrying about users when resizing devices.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
d6bed7181f Remove almost all interruptible waits
As subsystems were built I tended to use interruptible waits in the hope
that we'd let users break out of most waits.

The reality is that we have significant code paths that have trouble
unwinding.  Final inode deletion during iput->evict in a task is a good
example.  It's madness to have a pending signal turn an inode deletion
from an efficient inline operation to a deferred background orphan inode
scan deletion.

It also happens that golang built pre-emptive thread scheduling around
signals.  Under load we see a surprising amount of signal spam and it
has created surprising error cases which would have otherwise been fine.

This changes waits to expect that IOs (including network commands) will
complete reasonably promptly.  We remove all interruptible waits with
the notable exception of breaking out of a pending mount.  That requires
shuffling setup around a little bit so that the first network message we
wait for is the lock for getting the root inode.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
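
The shape of the change, using the standard wait_event helpers; the
waitqueue and condition are illustrative:

    /* before: a pending signal (golang spams these) aborts the wait */
    ret = wait_event_interruptible(waitq, io_done(req));

    /* after: expect IOs and network commands to complete promptly */
    wait_event(waitq, io_done(req));
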
Zach Brown
4893a6f915 scoutfs_dirents_equal should return bool
It looks like it returned u64 because it was derived from _name_hash().

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
384590f016 Sync net shouldn't wait for errored submits
If async network request submission fails then the response handler will
never be called.  The sync request wrapper made the mistake of trying to
wait for completion when initial submission failed.  This never happened
in normal operation but we're able to trigger it with some regularity
with forced unmount during tests.  Unmount would hang waiting for work
to shutdown which was waiting for request responses that would never
happen.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
192f077c16 Update data_version when fallocate changes size
Changing the file size can change the file contents -- reads will
change when they stop returning data.  fallocate can change the file
size and if it does it should increment the data_version, just like
setattr does.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
a9baeab22e stage_tmpfile test gets current data_version
The stage_tmpfile test util was written when fallocate didn't update
data_version for size extensions.  It is more correct to get the
data_version after fallocate changes data_versions for however many
transactions, extent allocations, and i_size extensions it took to
allocate space.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
b7ab26539a Avoid lockdep warning about upstream inversion
Some kernels have blkdev_reread_part acquire the bd_mutex and then call
into drop_partitions which calls fsync_bdev which acquires s_umount.
This inverts the usual pattern of deactivate_super getting s_umount and
then using blkdev_put in kill_sb->put_super to drop a second device.

The inversion has been fixed upstream by years of rewrites.  We can't go
back in time to fix the kernels that we're testing against,
unfortunately, so we disable lockdep around our valid leg of the
inversion that lockdep is noticing in our testing.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:22:42 -07:00
Zach Brown
c51f0c37da Defer dirty inode data writeback (and use list)
iput() can only be used in contexts that could perform final inode
deletion which requires cluster locks and transactions.  This is
absolutely true for the transaction committing worker.  We can't have
deletion during transaction commit trying to get locks and dirty *more*
items in the transaction.

Now that we're properly getting locks in final inode deletion and
O_TMPFILE support has put pressure on deletion, we're seeing deadlocks
between inode eviction during transaction commit getting an index lock
and index lock invalidation trying to commit.

We use the newly offered queued iput to defer the iput from walking our
dirty inodes.   The transaction commit will be able to proceed while
the iput worker is off waiting for a lock.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 13:20:40 -07:00
Zach Brown
52107424dd Promote deferred iput to inode call
Lock invalidation had the ability to kick iput off to work context.  We
need to use it for inode writeback as well so we move the mechanism over
to inode.c and give it a proper call.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
099a65ab07 Try recovering from truncate errors and more info
We're seeing errors during truncate that are surprising.  Let's try and
recover from them and provide more info when they happen so that we can
dig deeper.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
21c5724dd5 Update fenced service file StartLimitBurst
The first draft was written against an older schema; StartLimitBurst is
in [Service] now.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
3974d98f6b Don't use "/dev/*" redirections near systemd
systemd sets up stdout and stderr as sockets, not pipes, so these
redirections don't work.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
2901b43906 Also allow omap requests to disconnected clients
We recently fixed problems sending omap responses to originating
clients, which can race with the clients disconnecting.  We need to
handle the requests sent to clients on behalf of an originating request
in exactly the same way.  The send can race with the client being
evicted.  After the race the send is safely ignored because the
client's rid has been removed from the server's request tracking.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
03d7a4e7fe Show relative times in quorum status file output
The times in the quorum status file are in absolute monotonic kernel
time since bootup.  That's not particularly helpful, especially when
comparing across hosts with different boot times.

This shows relative times in timespec64 seconds until or since the times
in question.   While we're at it we also collect the send and receive
timestamps closer to each send or receive call.
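
A hedged sketch of the conversion, assuming the timespec64 helpers are
available (show_rel_time() and its seq_file output are illustrative,
not the actual status-file code):

```c
#include <linux/ktime.h>
#include <linux/seq_file.h>
#include <linux/time64.h>

/* Print a stored monotonic stamp as seconds until or since now rather
 * than as raw boot-relative time, which isn't comparable across hosts. */
static void show_rel_time(struct seq_file *m, struct timespec64 stamp)
{
	struct timespec64 now, rel;

	ktime_get_ts64(&now);	/* same monotonic base as the stamps */
	if (timespec64_compare(&stamp, &now) >= 0) {
		rel = timespec64_sub(stamp, now);
		seq_printf(m, "in %lld.%09ld secs", (long long)rel.tv_sec, rel.tv_nsec);
	} else {
		rel = timespec64_sub(now, stamp);
		seq_printf(m, "%lld.%09ld secs ago", (long long)rel.tv_sec, rel.tv_nsec);
	}
}
```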

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
d5d3b12986 Specifically shut down quorum during forced unmount
Generally, forced unmount works by returning errors for all IO.  Quorum
is pretty resilient in that it can have the IO errors eaten by server
startup and does its own messaging that won't return errors.  Trying to
force unmount can have the quorum service continually participate in
electing a server that immediately fails and shuts down.

This specifically shuts down the internal quorum service when it sees
that unmount is being forced.  This is easier and cleaner than having
the network IO return errors and then having that trigger shutdown.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
e4dca8ddcc Don't shutdown quorum if server startup fails
The quorum service shuts down if it sees errors that mean that it can't
do its job.

Mostly that means fatal errors gathering resources at startup or
runtime IO errors, but it was also shutting down if server startup
failed.  That's not quite right.  This should be treated like the
server shutting down on errors.  Quorum needs to stay around to
participate in electing the next server.

Fence timeouts could trigger this.  A quorum mount could crash, the
next server without a fence script could have a fence request time out
and shut down, and now the third remaining server is left indefinitely
sending vote requests into the void.

With this fixed, continuing that example, the quorum service in the
second mount stays around to elect the third server, which has a
working fence script, once the second server shuts down when its fence
request times out.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-30 11:34:52 -07:00
Zach Brown
011b7d52e5 Merge pull request #45 from versity/ben/systemd_configs
Add fenced systemd and example configs
2021-07-09 08:39:18 -07:00
Ben McClelland
3a9db45194 Add fenced systemd and example configs
This should be good enough to get single-node mounts up and running
with fenced with minimal effort.  The example config will need to be
copied to /etc/scoutfs/scoutfs-fenced.conf for it to be functional, so
this still requires specific opt-in and won't accidentally run on
multi-node systems.

Signed-off-by: Ben McClelland <ben.mcclelland@versity.com>
2021-07-09 08:22:39 -07:00
92 changed files with 7093 additions and 4234 deletions

133
README.md
View File

@@ -1,135 +1,24 @@
# Introduction
scoutfs is a clustered in-kernel Linux filesystem designed and built
from the ground up to support large archival systems.
scoutfs is a clustered in-kernel Linux filesystem designed to support
large archival systems. It features additional interfaces and metadata
so that archive agents can perform their maintenance workflows without
walking all the files in the namespace. Its cluster support lets
deployments add nodes to satisfy archival tier bandwidth targets.
Its key differentiating features are:
The design goal is to reach file populations in the trillions, with the
archival bandwidth to match, while remaining operational and responsive.
- Integrated consistent indexing accelerates archival maintenance operations
- Commit logs allow nodes to write concurrently without contention
It meets best of breed expectations:
Highlights of the design and implementation include:
* Fully consistent POSIX semantics between nodes
* Rich metadata to ensure the integrity of metadata references
* Atomic transactions to maintain consistent persistent structures
* First class kernel implementation for high performance and low latency
* Integrated archival metadata replaces syncing to external databases
* Dynamic separation of resources lets nodes write in parallel
* 64-bit throughout; no limits on file or directory sizes or counts
* Open GPLv2 implementation
Learn more in the [white paper](https://docs.wixstatic.com/ugd/aaa89b_88a5cc84be0b4d1a90f60d8900834d28.pdf).
# Current Status
**Alpha Open Source Development**
scoutfs is under heavy active development. We're developing it in the
open to give the community an opportunity to affect the design and
implementation.
The core architectural design elements are in place. Much surrounding
functionality hasn't been implemented. It's appropriate for early
adopters and interested developers, not for production use.
In that vein, expect significant incompatible changes to both the format
of network messages and persistent structures. Since the format hash-checking
has now been removed in preparation for release, running mkfs again is
strongly recommended if there is any doubt.
The current kernel module is developed against the RHEL/CentOS 7.x
kernel to minimize the friction of developing and testing with partners'
existing infrastructure. Once we're happy with the design we'll shift
development to the upstream kernel while maintaining distro
compatibility branches.
# Community Mailing List
Please join us on the open scoutfs-devel@scoutfs.org [mailing list
hosted on Google Groups](https://groups.google.com/a/scoutfs.org/forum/#!forum/scoutfs-devel)
for all discussion of scoutfs.
# Quick Start
**The following is a very rough example of the procedure for getting up
and running; experience will be needed to fill in the gaps. We're happy
to help on the mailing list.**
The requirements for running scoutfs on a small cluster are:
1. One or more nodes running x86-64 CentOS/RHEL 7.4 (or 7.3)
2. Access to two shared block devices
3. IPv4 connectivity between the nodes
The steps for getting scoutfs mounted and operational are:
1. Get the kernel module running on the nodes
2. Make a new filesystem on the devices with the userspace utilities
3. Mount the devices on all the nodes
In this example we use three nodes. The names of the block devices are
the same on all the nodes. Two of the nodes will be quorum members. A
majority of quorum members must be mounted to elect a leader to run a
server that all the mounts connect to. Note that two quorum members
gives a majority of one -- each member by itself -- so split-brain
elections are possible, but so unlikely that it's fine for a
demonstration.
1. Get the Kernel Module and Userspace Binaries
* Either use snapshot RPMs built from git by Versity:
```shell
rpm -i https://scoutfs.s3-us-west-2.amazonaws.com/scoutfs-repo-0.0.1-1.el7_4.noarch.rpm
yum install scoutfs-utils kmod-scoutfs
```
* Or use the binaries built from checked out git repositories:
```shell
yum install kernel-devel
git clone git@github.com:versity/scoutfs.git
make -C scoutfs
modprobe libcrc32c
insmod scoutfs/kmod/src/scoutfs.ko
alias scoutfs=$PWD/scoutfs/utils/src/scoutfs
```
2. Make a New Filesystem (**destroys contents**)
We specify quorum slots with the addresses of each of the quorum
member nodes, the metadata device, and the data device.
```shell
scoutfs mkfs -Q 0,$NODE0_ADDR,12345 -Q 1,$NODE1_ADDR,12345 /dev/meta_dev /dev/data_dev
```
3. Mount the Filesystem
First, mount each of the quorum nodes so that they can elect and
start a server for the remaining node to connect to. The slot numbers
were specified with the leading "0,..." and "1,..." in the mkfs options
above.
```shell
mount -t scoutfs -o quorum_slot_nr=$SLOT_NR,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
Then mount the remaining node which can now connect to the running server.
```shell
mount -t scoutfs -o metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
4. For Kicks, Observe the Metadata Change Index
The `meta_seq` index tracks the inodes that are changed in each
transaction.
```shell
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/two; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
```

25
ReleaseNotes.md Normal file
View File

@@ -0,0 +1,25 @@
Versity ScoutFS Release Notes
=============================
---
v1.x
\
*TBD*
* **Add scoutfs(1) change-quorum-config command**
\
Add a change-quorum-config command to scoutfs(1) to change the quorum
configuration stored in the metadata device while the file system is
unmounted. This can be used to change the mounts that will
participate in quorum and the IP addresses they use.
---
v1.0
\
*Nov 8, 2021*
* **Initial Release**
\
Version 1.0 marks the first GA release.

View File

@@ -13,6 +13,7 @@ scoutfs-y += \
block.o \
btree.o \
client.o \
cwskip.o \
counters.o \
data.o \
dir.o \

View File

@@ -252,6 +252,7 @@ static struct scoutfs_ext_ops alloc_ext_ops = {
.next = alloc_ext_next,
.insert = alloc_ext_insert,
.remove = alloc_ext_remove,
.insert_overlap_warn = true,
};
static bool invalid_extent(u64 start, u64 end, u64 first, u64 last)
@@ -261,20 +262,17 @@ static bool invalid_extent(u64 start, u64 end, u64 first, u64 last)
static bool invalid_meta_blkno(struct super_block *sb, u64 blkno)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
u64 last_meta = (i_size_read(sbi->meta_bdev->bd_inode) >> SCOUTFS_BLOCK_LG_SHIFT) - 1;
return invalid_extent(blkno, blkno,
le64_to_cpu(super->first_meta_blkno),
le64_to_cpu(super->last_meta_blkno));
return invalid_extent(blkno, blkno, SCOUTFS_META_DEV_START_BLKNO, last_meta);
}
static bool invalid_data_extent(struct super_block *sb, u64 start, u64 len)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
u64 last_data = (i_size_read(sb->s_bdev->bd_inode) >> SCOUTFS_BLOCK_SM_SHIFT) - 1;
return invalid_extent(start, start + len - 1,
le64_to_cpu(super->first_data_blkno),
le64_to_cpu(super->last_data_blkno));
return invalid_extent(start, start + len - 1, SCOUTFS_DATA_DEV_START_BLKNO, last_data);
}
void scoutfs_alloc_init(struct scoutfs_alloc *alloc,
@@ -972,6 +970,8 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
moved += ext.len;
scoutfs_inc_counter(sb, alloc_moved_extent);
trace_scoutfs_alloc_move_extent(sb, &ext);
}
scoutfs_inc_counter(sb, alloc_move);
@@ -980,6 +980,39 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
return ret;
}
/*
* Add new free space to an allocator. _ext_insert will make sure that it doesn't
* overlap with any existing extents. This is done by the server in a transaction that
* also updates total_*_blocks in the super so we don't verify.
*/
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len)
{
struct alloc_ext_args args = {
.alloc = alloc,
.wri = wri,
.root = root,
.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE,
};
return scoutfs_ext_insert(sb, &alloc_ext_ops, &args, start, len, 0, 0);
}
int scoutfs_alloc_remove(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len)
{
struct alloc_ext_args args = {
.alloc = alloc,
.wri = wri,
.root = root,
.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE,
};
return scoutfs_ext_remove(sb, &alloc_ext_ops, &args, start, len);
}
/*
* We only trim one block, instead of looping trimming all, because the
* caller is assuming that we do a fixed amount of work when they check
@@ -1026,18 +1059,31 @@ out:
}
/*
* True if the allocator has enough free blocks to cow (alloc and free)
* a list block and all the btree blocks that store extent items.
* True if the allocator has enough blocks in the avail list and space
* in the freed list to be able to perform the caller's operations.  If
* false the caller should back off and return partial progress rather
* than completely exhausting the avail list or overflowing the freed
* list.
*
* At most, an extent operation can dirty down three paths of the tree
* to modify a blkno item and two distant order items. We can grow and
* split the root, and then those three paths could share blocks but each
* modify two leaf blocks.
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
*
* The caller tells us how many extents they're about to modify and how
* many additional blocks they may cow manually.  And finally, the
* caller could be the first to dirty the avail and freed blocks in the
* allocator itself.
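*
* For example, with extents = 1 against a height-3 root, tree_blocks
* works out to (((1 + 3) * 2) * 3) * 1 = 24, so most = 1 + 24 plus the
* caller's addl_blocks.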
*/
static bool list_can_cow(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *root)
static bool list_has_blocks(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *root, u32 extents, u32 addl_blocks)
{
u32 most = 1 + (1 + 1 + (3 * (1 - root->root.height + 1)));
u32 tree_blocks = (((1 + root->root.height) * 2) * 3) * extents;
u32 most = 1 + tree_blocks + addl_blocks;
if (le32_to_cpu(alloc->avail.first_nr) < most) {
scoutfs_inc_counter(sb, alloc_list_avail_lo);
@@ -1101,8 +1147,7 @@ int scoutfs_alloc_fill_list(struct super_block *sb,
goto out;
lblk = bl->data;
while (le32_to_cpu(lblk->nr) < target &&
list_can_cow(sb, alloc, root)) {
while (le32_to_cpu(lblk->nr) < target && list_has_blocks(sb, alloc, root, 1, 0)) {
ret = scoutfs_ext_alloc(sb, &alloc_ext_ops, &args, 0, 0,
target - le32_to_cpu(lblk->nr), &ext);
@@ -1114,6 +1159,8 @@ int scoutfs_alloc_fill_list(struct super_block *sb,
for (i = 0; i < ext.len; i++)
list_block_add(lhead, lblk, ext.start + i);
trace_scoutfs_alloc_fill_extent(sb, &ext);
}
out:
@@ -1146,7 +1193,7 @@ int scoutfs_alloc_empty_list(struct super_block *sb,
if (WARN_ON_ONCE(lhead_in_alloc(alloc, lhead)))
return -EINVAL;
while (lhead->ref.blkno && list_can_cow(sb, alloc, args.root)) {
while (lhead->ref.blkno && list_has_blocks(sb, alloc, args.root, 1, 1)) {
if (lhead->first_nr == 0) {
ret = trim_empty_first_block(sb, alloc, wri, lhead);
@@ -1182,6 +1229,8 @@ int scoutfs_alloc_empty_list(struct super_block *sb,
break;
list_block_remove(lhead, lblk, ext.len);
trace_scoutfs_alloc_empty_extent(sb, &ext);
}
scoutfs_block_put(sb, bl);
@@ -1284,15 +1333,17 @@ bool scoutfs_alloc_test_flag(struct super_block *sb,
}
/*
* Call the callers callback for every persistent allocator structure
* we can find.
* Iterate over the allocator structures referenced by the caller's
* super and call the caller's callback with summaries of the blocks
* found in each structure.
*
* The caller's responsible for the stability of the referenced blocks.
* If the blocks could be stale the caller must deal with retrying when
* it sees ESTALE.
*/
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg)
int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_block *super,
scoutfs_alloc_foreach_cb_t cb, void *arg)
{
struct scoutfs_block_ref stale_refs[2] = {{0,}};
struct scoutfs_block_ref refs[2] = {{0,}};
struct scoutfs_super_block *super = NULL;
struct scoutfs_srch_compact *sc;
struct scoutfs_log_merge_request *lmreq;
struct scoutfs_log_merge_complete *lmcomp;
@@ -1305,21 +1356,12 @@ int scoutfs_alloc_foreach(struct super_block *sb,
u64 id;
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
sc = kmalloc(sizeof(struct scoutfs_srch_compact), GFP_NOFS);
if (!super || !sc) {
if (!sc) {
ret = -ENOMEM;
goto out;
}
retry:
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
refs[0] = super->logs_root.ref;
refs[1] = super->srch_root.ref;
/* all the server allocators */
ret = cb(sb, arg, SCOUTFS_ALLOC_OWNER_SERVER, 0, true, true,
le64_to_cpu(super->meta_alloc[0].total_len)) ?:
@@ -1462,6 +1504,40 @@ retry:
ret = 0;
out:
kfree(sc);
return ret;
}
/*
* Read the current on-disk super and use it to walk the allocators and
* call the caller's callback. This assumes that the super it's reading
* could be stale and will retry if it encounters stale blocks.
*/
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg)
{
struct scoutfs_super_block *super = NULL;
struct scoutfs_block_ref stale_refs[2] = {{0,}};
struct scoutfs_block_ref refs[2] = {{0,}};
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
retry:
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
refs[0] = super->logs_root.ref;
refs[1] = super->srch_root.ref;
ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);
out:
if (ret == -ESTALE) {
if (memcmp(&stale_refs, &refs, sizeof(refs)) == 0) {
ret = -EIO;
@@ -1473,18 +1549,16 @@ out:
}
kfree(super);
kfree(sc);
return ret;
}
struct foreach_cb_args {
scoutfs_alloc_extent_cb_t cb;
void *cb_arg;
};
static int alloc_btree_extent_item_cb(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, void *arg)
static int alloc_btree_extent_item_cb(struct super_block *sb, struct scoutfs_key *key, u64 seq,
u8 flags, void *val, int val_len, void *arg)
{
struct foreach_cb_args *cba = arg;
struct scoutfs_extent ext;

View File

@@ -132,6 +132,12 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
int scoutfs_alloc_remove(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
int scoutfs_alloc_fill_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
@@ -160,6 +166,8 @@ typedef int (*scoutfs_alloc_foreach_cb_t)(struct super_block *sb, void *arg,
bool meta, bool avail, u64 blocks);
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg);
int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_block *super,
scoutfs_alloc_foreach_cb_t cb, void *arg);
typedef void (*scoutfs_alloc_extent_cb_t)(struct super_block *sb, void *cb_arg,
struct scoutfs_extent *ext);

View File

@@ -645,9 +645,11 @@ static struct block_private *block_read(struct super_block *sb, u64 blkno)
goto out;
}
ret = wait_event_interruptible(binf->waitq, uptodate_or_error(bp));
if (ret == 0 && test_bit(BLOCK_BIT_ERROR, &bp->bits))
wait_event(binf->waitq, uptodate_or_error(bp));
if (test_bit(BLOCK_BIT_ERROR, &bp->bits))
ret = -EIO;
else
ret = 0;
out:
if (ret < 0) {

View File

@@ -30,6 +30,7 @@
#include "avl.h"
#include "hash.h"
#include "sort_priv.h"
#include "forest.h"
#include "scoutfs_trace.h"
@@ -502,9 +503,8 @@ static __le16 insert_value(struct scoutfs_btree_block *bt, __le16 item_off,
* This only consumes free space. It's safe to use references to block
* structures after this call.
*/
static void create_item(struct scoutfs_btree_block *bt,
struct scoutfs_key *key, void *val, unsigned val_len,
struct scoutfs_avl_node *parent, int cmp)
static void create_item(struct scoutfs_btree_block *bt, struct scoutfs_key *key, u64 seq, u8 flags,
void *val, unsigned val_len, struct scoutfs_avl_node *parent, int cmp)
{
struct scoutfs_btree_item *item;
@@ -516,6 +516,8 @@ static void create_item(struct scoutfs_btree_block *bt,
item = end_item(bt);
item->key = *key;
item->seq = cpu_to_le64(seq);
item->flags = flags;
scoutfs_avl_insert(&bt->item_root, parent, &item->node, cmp);
leaf_item_hash_insert(bt, item_key(item), ptr_off(bt, item));
@@ -558,6 +560,8 @@ static void delete_item(struct scoutfs_btree_block *bt,
/* move the final item into the deleted space */
if (end != item) {
item->key = end->key;
item->seq = end->seq;
item->flags = end->flags;
item->val_off = end->val_off;
item->val_len = end->val_len;
leaf_item_hash_change(bt, &end->key, ptr_off(bt, item),
@@ -606,8 +610,8 @@ static void move_items(struct scoutfs_btree_block *dst,
else
next = next_item(src, from);
create_item(dst, item_key(from), item_val(src, from),
item_val_len(from), par, cmp);
create_item(dst, item_key(from), le64_to_cpu(from->seq), from->flags,
item_val(src, from), item_val_len(from), par, cmp);
if (move_right) {
if (par)
@@ -680,7 +684,7 @@ static void create_parent_item(struct scoutfs_btree_block *parent,
scoutfs_avl_search(&parent->item_root, cmp_key_item, key, &cmp, &par,
NULL, NULL);
create_item(parent, key, &ref, sizeof(ref), par, cmp);
create_item(parent, key, 0, 0, &ref, sizeof(ref), par, cmp);
}
/*
@@ -1229,10 +1233,6 @@ static int btree_walk(struct super_block *sb,
WARN_ON_ONCE((flags & (BTW_GET_PAR|BTW_SET_PAR)) && !par_root))
return -EINVAL;
/* all ops come through walk and walk calls all reads */
if (scoutfs_forcing_unmount(sb))
return -EIO;
scoutfs_inc_counter(sb, btree_walk);
restart:
@@ -1529,7 +1529,7 @@ int scoutfs_btree_insert(struct super_block *sb,
if (node) {
ret = -EEXIST;
} else {
create_item(bt, key, val, val_len, par, cmp);
create_item(bt, key, 0, 0, val, val_len, par, cmp);
ret = 0;
}
}
@@ -1630,7 +1630,7 @@ int scoutfs_btree_force(struct super_block *sb,
} else {
scoutfs_avl_search(&bt->item_root, cmp_key_item, key,
&cmp, &par, NULL, NULL);
create_item(bt, key, val, val_len, par, cmp);
create_item(bt, key, 0, 0, val, val_len, par, cmp);
}
ret = 0;
@@ -1849,8 +1849,8 @@ int scoutfs_btree_read_items(struct super_block *sb,
if (scoutfs_key_compare(&item->key, end) > 0)
break;
ret = cb(sb, item_key(item), item_val(bt, item),
item_val_len(item), arg);
ret = cb(sb, item_key(item), le64_to_cpu(item->seq), item->flags,
item_val(bt, item), item_val_len(item), arg);
if (ret < 0)
break;
@@ -1870,13 +1870,16 @@ out:
* This can make partial progress before returning an error, leaving
* dirty btree blocks with only some of the caller's items. It's up to
* the caller to resolve this.
*
* This, along with merging, are the only places that seq and flags are
* set in btree items. They're only used for fs items written through
* the item cache and forest of log btrees.
*/
int scoutfs_btree_insert_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_btree_item_list *lst)
int scoutfs_btree_insert_list(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_btree_root *root,
scoutfs_btree_item_iter_cb iter_cb, void *pos, void *arg)
{
struct scoutfs_btree_item_desc desc;
struct scoutfs_btree_item *item;
struct btree_walk_key_range kr;
struct scoutfs_btree_block *bt;
@@ -1885,29 +1888,46 @@ int scoutfs_btree_insert_list(struct super_block *sb,
int cmp;
int ret = 0;
while (lst) {
pos = iter_cb(sb, &desc, pos, arg);
while (pos) {
ret = btree_walk(sb, alloc, wri, root, BTW_DIRTY | BTW_INSERT,
&lst->key, lst->val_len, &bl, &kr, NULL);
desc.key, desc.val_len, &bl, &kr, NULL);
if (ret < 0)
goto out;
bt = bl->data;
do {
item = leaf_item_hash_search(sb, bt, &lst->key);
item = leaf_item_hash_search(sb, bt, desc.key);
if (item) {
update_item_value(bt, item, lst->val,
lst->val_len);
/* try to merge delta values, _NULL not deleted; merge will */
ret = scoutfs_forest_combine_deltas(desc.key,
item_val(bt, item),
item_val_len(item),
desc.val, desc.val_len);
if (ret < 0) {
scoutfs_block_put(sb, bl);
goto out;
}
item->seq = cpu_to_le64(desc.seq);
item->flags = desc.flags;
if (ret == 0)
update_item_value(bt, item, desc.val, desc.val_len);
else
ret = 0;
} else {
scoutfs_avl_search(&bt->item_root,
cmp_key_item, &lst->key,
cmp_key_item, desc.key,
&cmp, &par, NULL, NULL);
create_item(bt, &lst->key, lst->val,
lst->val_len, par, cmp);
create_item(bt, desc.key, desc.seq, desc.flags, desc.val,
desc.val_len, par, cmp);
}
lst = lst->next;
} while (lst && scoutfs_key_compare(&lst->key, &kr.end) <= 0 &&
mid_free_item_room(bt, lst->val_len));
pos = iter_cb(sb, &desc, pos, arg);
} while (pos && scoutfs_key_compare(desc.key, &kr.end) <= 0 &&
mid_free_item_room(bt, desc.val_len));
scoutfs_block_put(sb, bl);
}
@@ -2013,94 +2033,16 @@ int scoutfs_btree_rebalance(struct super_block *sb,
struct merge_pos {
struct rb_node node;
struct scoutfs_btree_root *root;
struct scoutfs_key key;
struct scoutfs_block *bl;
struct scoutfs_btree_block *bt;
struct scoutfs_avl_node *avl;
struct scoutfs_key *key;
u64 seq;
u8 flags;
unsigned int val_len;
u8 val[SCOUTFS_BTREE_MAX_VAL_LEN];
u8 *val;
};
/*
* Find the next item in the mpos's root after its key and make sure
* that it's in its sorted position in the rbtree. We're responsible
* for freeing the mpos if we don't put it back in the pos_root. This
happens naturally when its item_root has no more items to
* merge.
*/
static int reset_mpos(struct super_block *sb, struct rb_root *pos_root,
struct merge_pos *mpos, struct scoutfs_key *end,
scoutfs_btree_merge_cmp_t merge_cmp)
{
SCOUTFS_BTREE_ITEM_REF(iref);
struct merge_pos *walk;
struct rb_node *parent;
struct rb_node **node;
int key_cmp;
int val_cmp;
int ret;
restart:
if (!RB_EMPTY_NODE(&mpos->node)) {
rb_erase(&mpos->node, pos_root);
RB_CLEAR_NODE(&mpos->node);
}
/* find the next item in the root within end */
ret = scoutfs_btree_next(sb, mpos->root, &mpos->key, &iref);
if (ret == 0) {
if (scoutfs_key_compare(iref.key, end) > 0) {
ret = -ENOENT;
} else {
mpos->key = *iref.key;
mpos->val_len = iref.val_len;
memcpy(mpos->val, iref.val, iref.val_len);
}
scoutfs_btree_put_iref(&iref);
}
if (ret < 0) {
kfree(mpos);
if (ret == -ENOENT)
ret = 0;
goto out;
}
rewalk:
/* sort merge items by key then oldest to newest */
node = &pos_root->rb_node;
parent = NULL;
while (*node) {
parent = *node;
walk = container_of(*node, struct merge_pos, node);
key_cmp = scoutfs_key_compare(&mpos->key, &walk->key);
val_cmp = merge_cmp(mpos->val, mpos->val_len,
walk->val, walk->val_len);
/* drop old versions of logged keys as we discover them */
if (key_cmp == 0) {
scoutfs_inc_counter(sb, btree_merge_drop_old);
if (val_cmp < 0) {
scoutfs_key_inc(&mpos->key);
goto restart;
} else {
BUG_ON(val_cmp == 0);
rb_erase(&walk->node, pos_root);
kfree(walk);
goto rewalk;
}
}
if ((key_cmp ?: val_cmp) < 0)
node = &(*node)->rb_left;
else
node = &(*node)->rb_right;
}
rb_link_node(&mpos->node, parent, node);
rb_insert_color(&mpos->node, pos_root);
ret = 0;
out:
return ret;
}
static struct merge_pos *first_mpos(struct rb_root *root)
{
struct rb_node *node = rb_first(root);
@@ -2109,22 +2051,178 @@ static struct merge_pos *first_mpos(struct rb_root *root)
return NULL;
}
static struct merge_pos *next_mpos(struct merge_pos *mpos)
{
struct rb_node *node;
if (mpos && (node = rb_next(&mpos->node)))
return container_of(node, struct merge_pos, node);
else
return NULL;
}
static void free_mpos(struct super_block *sb, struct merge_pos *mpos)
{
scoutfs_block_put(sb, mpos->bl);
kfree(mpos);
}
static void insert_mpos(struct rb_root *pos_root, struct merge_pos *ins)
{
struct rb_node **node = &pos_root->rb_node;
struct rb_node *parent = NULL;
struct merge_pos *mpos;
int cmp;
parent = NULL;
while (*node) {
parent = *node;
mpos = container_of(*node, struct merge_pos, node);
/* sort merge items by key then newest to oldest */
cmp = scoutfs_key_compare(ins->key, mpos->key) ?:
-scoutfs_cmp(ins->seq, mpos->seq);
if (cmp < 0)
node = &(*node)->rb_left;
else
node = &(*node)->rb_right;
}
rb_link_node(&ins->node, parent, node);
rb_insert_color(&ins->node, pos_root);
}
/*
* Find the next item in the merge_pos root in the caller's range and
* insert it into the rbtree sorted by key and version so that merging
* can find the next newest item at the front of the rbtree. We free
* the mpos on error or if there are no more items in the range.
*/
static int reset_mpos(struct super_block *sb, struct rb_root *pos_root, struct merge_pos *mpos,
struct scoutfs_key *start, struct scoutfs_key *end)
{
struct scoutfs_btree_item *item;
struct scoutfs_avl_node *next;
struct btree_walk_key_range kr;
struct scoutfs_key walk_key;
int ret = 0;
/* always erase before freeing or inserting */
if (!RB_EMPTY_NODE(&mpos->node)) {
rb_erase(&mpos->node, pos_root);
RB_CLEAR_NODE(&mpos->node);
}
/*
* advance to next item via the avl tree. The caller's pos is
* only ever incremented past the last key so we can use next to
* iterate rather than using search to skip past multiple items.
*/
if (mpos->avl)
mpos->avl = scoutfs_avl_next(&mpos->bt->item_root, mpos->avl);
/* find the next leaf with the key if we run out of items */
walk_key = *start;
while (!mpos->avl && !scoutfs_key_is_zeros(&walk_key)) {
scoutfs_block_put(sb, mpos->bl);
mpos->bl = NULL;
ret = btree_walk(sb, NULL, NULL, mpos->root, BTW_NEXT, &walk_key,
0, &mpos->bl, &kr, NULL);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
free_mpos(sb, mpos);
goto out;
}
mpos->bt = mpos->bl->data;
mpos->avl = scoutfs_avl_search(&mpos->bt->item_root, cmp_key_item,
start, NULL, NULL, &next, NULL) ?: next;
if (mpos->avl == NULL)
walk_key = kr.iter_next;
}
/* see if we're out of items within the range */
item = node_item(mpos->avl);
if (!item || scoutfs_key_compare(item_key(item), end) > 0) {
free_mpos(sb, mpos);
ret = 0;
goto out;
}
/* insert the next item within range at its version */
mpos->key = item_key(item);
mpos->seq = le64_to_cpu(item->seq);
mpos->flags = item->flags;
mpos->val_len = item_val_len(item);
mpos->val = item_val(mpos->bt, item);
insert_mpos(pos_root, mpos);
ret = 0;
out:
return ret;
}
/*
* The caller has reset all the merge positions for all the input log
* btree roots and wants the next logged item it should try and merge
* with the items in the fs_root.
*
* We look ahead in the logged item stream to see if we should merge any
* older logged delta items into one result for the caller. We also
* take this opportunity to skip and reset the mpos for any older
* versions of the first item.
*/
static int next_resolved_mpos(struct super_block *sb, struct rb_root *pos_root,
struct scoutfs_key *end, struct merge_pos **mpos_ret)
{
struct merge_pos *mpos;
struct merge_pos *next;
struct scoutfs_key key;
int ret = 0;
while ((mpos = first_mpos(pos_root)) && (next = next_mpos(mpos)) &&
!scoutfs_key_compare(mpos->key, next->key)) {
ret = scoutfs_forest_combine_deltas(mpos->key, mpos->val, mpos->val_len,
next->val, next->val_len);
if (ret < 0)
break;
/* reset advances to the next item */
key = *mpos->key;
scoutfs_key_inc(&key);
/* always skip next combined or older version */
ret = reset_mpos(sb, pos_root, next, &key, end);
if (ret < 0)
break;
if (ret == SCOUTFS_DELTA_COMBINED) {
scoutfs_inc_counter(sb, btree_merge_delta_combined);
} else if (ret == SCOUTFS_DELTA_COMBINED_NULL) {
scoutfs_inc_counter(sb, btree_merge_delta_null);
/* if merging resulted in no info, skip current */
ret = reset_mpos(sb, pos_root, mpos, &key, end);
if (ret < 0)
break;
}
}
*mpos_ret = mpos;
return ret;
}
/*
* Merge items from a number of read-only input roots into a writable
* destination root.  The order of the input roots doesn't matter; the
* items are merged in sorted key order.
*
* The merge_cmp callback determines the order that the input items are
* merged in. The is_del callback determines if a merging item should
* be removed from the destination.
*
* subtree indicates that the destination root is in fact one of many
* parent blocks and shouldn't be split or allowed to fall below the
* join low water mark.
*
* drop_val indicates the initial length of the value that should be
* dropped when merging items into destination items.
*
* -ERANGE is returned if the merge doesn't fully exhaust the range, due
* to allocators running low or needing to join/split the parent.
* *next_ret is set to the next key which hasn't been merged so that the
@@ -2138,9 +2236,7 @@ int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_key *next_ret,
struct scoutfs_btree_root *root,
struct list_head *inputs,
scoutfs_btree_merge_cmp_t merge_cmp,
scoutfs_btree_merge_is_del_t merge_is_del, bool subtree,
int drop_val, int dirty_limit, int alloc_low)
bool subtree, int dirty_limit, int alloc_low)
{
struct scoutfs_btree_root_head *rhead;
struct rb_root pos_root = RB_ROOT;
@@ -2149,11 +2245,13 @@ int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_block *bl = NULL;
struct btree_walk_key_range kr;
struct scoutfs_avl_node *par;
struct scoutfs_key next;
struct merge_pos *mpos;
struct merge_pos *tmp;
int walk_val_len;
int walk_flags;
bool is_del;
int delta;
int cmp;
int ret;
@@ -2161,17 +2259,16 @@ int scoutfs_btree_merge(struct super_block *sb,
scoutfs_inc_counter(sb, btree_merge);
list_for_each_entry(rhead, inputs, head) {
mpos = kmalloc(sizeof(*mpos), GFP_NOFS);
mpos = kzalloc(sizeof(*mpos), GFP_NOFS);
if (!mpos) {
ret = -ENOMEM;
goto out;
}
RB_CLEAR_NODE(&mpos->node);
mpos->key = *start;
mpos->root = &rhead->root;
ret = reset_mpos(sb, &pos_root, mpos, end, merge_cmp);
ret = reset_mpos(sb, &pos_root, mpos, start, end);
if (ret < 0)
goto out;
}
@@ -2181,58 +2278,75 @@ int scoutfs_btree_merge(struct super_block *sb,
walk_flags |= BTW_SUBTREE;
walk_val_len = 0;
while ((mpos = first_mpos(&pos_root))) {
while ((ret = next_resolved_mpos(sb, &pos_root, end, &mpos)) == 0 && mpos) {
if (scoutfs_block_writer_dirty_bytes(sb, wri) >= dirty_limit) {
scoutfs_inc_counter(sb, btree_merge_dirty_limit);
ret = -ERANGE;
*next_ret = mpos->key;
*next_ret = *mpos->key;
goto out;
}
if (scoutfs_alloc_meta_low(sb, alloc, alloc_low)) {
scoutfs_inc_counter(sb, btree_merge_alloc_low);
ret = -ERANGE;
*next_ret = mpos->key;
*next_ret = *mpos->key;
goto out;
}
scoutfs_block_put(sb, bl);
bl = NULL;
ret = btree_walk(sb, alloc, wri, root, walk_flags,
&mpos->key, walk_val_len, &bl, &kr, NULL);
mpos->key, walk_val_len, &bl, &kr, NULL);
if (ret < 0) {
if (ret == -ERANGE)
*next_ret = mpos->key;
*next_ret = *mpos->key;
goto out;
}
bt = bl->data;
scoutfs_inc_counter(sb, btree_merge_walk);
for (; mpos; mpos = first_mpos(&pos_root)) {
/* catch non-root blocks that fell under low, maybe from null deltas */
if (root->ref.blkno != bt->hdr.blkno && !total_above_join_low_water(bt)) {
walk_flags |= BTW_DELETE;
continue;
}
/* val must have at least what we need to drop */
if (mpos->val_len < drop_val) {
ret = -EIO;
goto out;
}
while ((ret = next_resolved_mpos(sb, &pos_root, end, &mpos)) == 0 && mpos) {
/* walk to new leaf if we exceed parent ref key */
if (scoutfs_key_compare(&mpos->key, &kr.end) > 0)
if (scoutfs_key_compare(mpos->key, &kr.end) > 0)
break;
/* see if there's an existing item */
item = leaf_item_hash_search(sb, bt, &mpos->key);
is_del = merge_is_del(mpos->val, mpos->val_len);
item = leaf_item_hash_search(sb, bt, mpos->key);
is_del = !!(mpos->flags & SCOUTFS_ITEM_FLAG_DELETION);
/* see if we're merging delta items */
if (item && !is_del)
delta = scoutfs_forest_combine_deltas(mpos->key,
item_val(bt, item),
item_val_len(item),
mpos->val, mpos->val_len);
else
delta = 0;
if (delta < 0) {
ret = delta;
goto out;
} else if (delta == SCOUTFS_DELTA_COMBINED) {
scoutfs_inc_counter(sb, btree_merge_delta_combined);
} else if (delta == SCOUTFS_DELTA_COMBINED_NULL) {
scoutfs_inc_counter(sb, btree_merge_delta_null);
}
trace_scoutfs_btree_merge_items(sb, mpos->root,
&mpos->key, mpos->val_len,
mpos->key, mpos->val_len,
item ? root : NULL,
item ? item_key(item) : NULL,
item ? item_val_len(item) : 0, is_del);
/* rewalk and split if ins/update needs room */
if (!is_del && !mid_free_item_room(bt, mpos->val_len)) {
if (!is_del && !delta && !mid_free_item_room(bt, mpos->val_len)) {
walk_flags |= BTW_INSERT;
walk_val_len = mpos->val_len;
break;
@@ -2241,22 +2355,39 @@ int scoutfs_btree_merge(struct super_block *sb,
/* insert missing non-deletion merge items */
if (!item && !is_del) {
scoutfs_avl_search(&bt->item_root,
cmp_key_item, &mpos->key,
cmp_key_item, mpos->key,
&cmp, &par, NULL, NULL);
create_item(bt, &mpos->key,
mpos->val + drop_val,
mpos->val_len - drop_val, par, cmp);
create_item(bt, mpos->key, mpos->seq, mpos->flags,
mpos->val, mpos->val_len, par, cmp);
scoutfs_inc_counter(sb, btree_merge_insert);
}
/* update existing items */
if (item && !is_del) {
update_item_value(bt, item,
mpos->val + drop_val,
mpos->val_len - drop_val);
if (item && !is_del && !delta) {
item->seq = cpu_to_le64(mpos->seq);
item->flags = mpos->flags;
update_item_value(bt, item, mpos->val, mpos->val_len);
scoutfs_inc_counter(sb, btree_merge_update);
}
/* update combined delta item seq */
if (delta == SCOUTFS_DELTA_COMBINED) {
item->seq = cpu_to_le64(mpos->seq);
}
/*
* combined delta items that aren't needed are
* immediately dropped. We don't back off if
* the deletion would fall under the low water
* mark because we've already modified the
* value, we don't want to retry after a join
* and apply the value a second time.
*/
if (delta == SCOUTFS_DELTA_COMBINED_NULL) {
delete_item(bt, item, NULL);
scoutfs_inc_counter(sb, btree_merge_delta_null);
}
/* delete if merge item was deletion */
if (item && is_del) {
/* rewalk and join if non-root falls under low water mark */
@@ -2273,12 +2404,12 @@ int scoutfs_btree_merge(struct super_block *sb,
walk_flags &= ~(BTW_INSERT | BTW_DELETE);
walk_val_len = 0;
/* finished with this merge item */
scoutfs_key_inc(&mpos->key);
ret = reset_mpos(sb, &pos_root, mpos, end, merge_cmp);
/* finished with this key, skip any older items */
next = *mpos->key;
scoutfs_key_inc(&next);
ret = reset_mpos(sb, &pos_root, mpos, &next, end);
if (ret < 0)
goto out;
mpos = NULL;
}
}
@@ -2286,7 +2417,7 @@ int scoutfs_btree_merge(struct super_block *sb,
out:
scoutfs_block_put(sb, bl);
rbtree_postorder_for_each_entry_safe(mpos, tmp, &pos_root, node) {
kfree(mpos);
free_mpos(sb, mpos);
}
return ret;

View File

@@ -18,15 +18,30 @@ struct scoutfs_btree_item_ref {
#define SCOUTFS_BTREE_ITEM_REF(name) \
struct scoutfs_btree_item_ref name = {NULL,}
/* caller gives an item to the callback */
/* btree gives an item to caller */
typedef int (*scoutfs_btree_item_cb)(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_key *key, u64 seq, u8 flags,
void *val, int val_len, void *arg);
struct scoutfs_btree_item_desc {
struct scoutfs_key *key;
void *val;
u64 seq;
u8 flags;
unsigned val_len;
};
/* btree iterates through items from caller */
typedef void *(*scoutfs_btree_item_iter_cb)(struct super_block *sb,
struct scoutfs_btree_item_desc *desc,
void *pos, void *arg);
/* simple singly-linked list of items */
struct scoutfs_btree_item_list {
struct scoutfs_btree_item_list *next;
struct scoutfs_key key;
u64 seq;
u8 flags;
int val_len;
u8 val[0];
};
@@ -76,11 +91,9 @@ int scoutfs_btree_read_items(struct super_block *sb,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_btree_item_cb cb, void *arg);
int scoutfs_btree_insert_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_btree_item_list *lst);
int scoutfs_btree_insert_list(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_btree_root *root,
scoutfs_btree_item_iter_cb iter_cb, void *pos, void *arg);
int scoutfs_btree_parent_range(struct super_block *sb,
struct scoutfs_btree_root *root,
@@ -108,14 +121,7 @@ struct scoutfs_btree_root_head {
struct list_head head;
struct scoutfs_btree_root root;
};
/*
* Compare the values of merge input items whose keys are equal to
* determine their merge order.
*/
typedef int (*scoutfs_btree_merge_cmp_t)(void *a_val, int a_val_len,
void *b_val, int b_val_len);
/* whether merging item should be removed from destination */
typedef bool (*scoutfs_btree_merge_is_del_t)(void *val, int val_len);
int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
@@ -124,9 +130,7 @@ int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_key *next_ret,
struct scoutfs_btree_root *root,
struct list_head *input_list,
scoutfs_btree_merge_cmp_t merge_cmp,
scoutfs_btree_merge_is_del_t merge_is_del, bool subtree,
int drop_val, int dirty_limit, int alloc_low);
bool subtree, int dirty_limit, int alloc_low);
int scoutfs_btree_free_blocks(struct super_block *sb,
struct scoutfs_alloc *alloc,

View File

@@ -32,6 +32,7 @@
#include "endian_swap.h"
#include "quorum.h"
#include "omap.h"
#include "trans.h"
/*
* The client is responsible for maintaining a connection to the server.
@@ -116,21 +117,6 @@ int scoutfs_client_get_roots(struct super_block *sb,
NULL, 0, roots, sizeof(*roots));
}
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
__le64 leseq;
int ret;
ret = scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
NULL, 0, &leseq, sizeof(leseq));
if (ret == 0)
*seq = le64_to_cpu(leseq);
return ret;
}
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
@@ -297,6 +283,40 @@ int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_op
volopt, sizeof(*volopt), NULL, 0);
}
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_RESIZE_DEVICES,
nrd, sizeof(*nrd), NULL, 0);
}
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_STATFS,
NULL, 0, nst, sizeof(*nst));
}
/*
* The server is asking that we trigger a commit of the current log
* trees so that they can ensure an item seq discontinuity between
* finalized log btrees and the next set of open log btrees. If we're
* shutting down then we're already going to perform a final commit.
*/
static int sync_log_trees(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
if (arg_len != 0)
return -EINVAL;
if (!scoutfs_unmounting(sb))
scoutfs_trans_sync(sb, 0);
return scoutfs_net_response(sb, conn, cmd, id, 0, NULL, 0);
}
/* The client is receiving an invalidation request from the server */
static int client_lock(struct super_block *sb,
struct scoutfs_net_connection *conn, u8 cmd, u64 id,
@@ -334,7 +354,8 @@ static int client_greeting(struct super_block *sb,
void *resp, unsigned int resp_len, int error,
void *data)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct client_info *client = sbi->client_info;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_greeting *gr = resp;
bool new_server;
@@ -351,17 +372,15 @@ static int client_greeting(struct super_block *sb,
}
if (gr->fsid != super->hdr.fsid) {
scoutfs_warn(sb, "server sent fsid 0x%llx, client has 0x%llx",
le64_to_cpu(gr->fsid),
le64_to_cpu(super->hdr.fsid));
scoutfs_warn(sb, "server greeting response fsid 0x%llx did not match client fsid 0x%llx",
le64_to_cpu(gr->fsid), le64_to_cpu(super->hdr.fsid));
ret = -EINVAL;
goto out;
}
if (gr->version != super->version) {
scoutfs_warn(sb, "server sent format 0x%llx, client has 0x%llx",
le64_to_cpu(gr->version),
le64_to_cpu(super->version));
if (le64_to_cpu(gr->fmt_vers) != sbi->fmt_vers) {
scoutfs_warn(sb, "server greeting response format version %llu did not match client format version %llu",
le64_to_cpu(gr->fmt_vers), sbi->fmt_vers);
ret = -EINVAL;
goto out;
}
@@ -487,7 +506,7 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
/* send a greeting to verify endpoints of each connection */
greet.fsid = super->hdr.fsid;
greet.version = super->version;
greet.fmt_vers = cpu_to_le64(sbi->fmt_vers);
greet.server_term = cpu_to_le64(client->server_term);
greet.rid = cpu_to_le64(sbi->rid);
greet.flags = 0;
@@ -508,6 +527,7 @@ out:
}
static scoutfs_net_request_t client_req_funcs[] = {
[SCOUTFS_NET_CMD_SYNC_LOG_TREES] = sync_log_trees,
[SCOUTFS_NET_CMD_LOCK] = client_lock,
[SCOUTFS_NET_CMD_LOCK_RECOVER] = client_lock_recover,
[SCOUTFS_NET_CMD_OPEN_INO_MAP] = client_open_ino_map,
@@ -623,10 +643,8 @@ void scoutfs_client_destroy(struct super_block *sb)
client_farewell_response,
NULL, NULL);
if (ret == 0) {
ret = wait_for_completion_interruptible(
&client->farewell_comp);
if (ret == 0)
ret = client->farewell_error;
wait_for_completion(&client->farewell_comp);
ret = client->farewell_error;
}
if (ret) {
scoutfs_inc_counter(sb, client_farewell_error);
@@ -650,3 +668,11 @@ void scoutfs_client_destroy(struct super_block *sb)
kfree(client);
sbi->client_info = NULL;
}
void scoutfs_client_net_shutdown(struct super_block *sb)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
if (client && client->conn)
scoutfs_net_shutdown(sb, client->conn);
}

View File

@@ -10,7 +10,6 @@ int scoutfs_client_commit_log_trees(struct super_block *sb,
int scoutfs_client_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots);
u64 *scoutfs_client_bulk_alloc(struct super_block *sb);
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_lock_request(struct super_block *sb,
struct scoutfs_net_lock *nl);
@@ -33,7 +32,10 @@ int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
int scoutfs_client_get_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_set_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd);
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst);
void scoutfs_client_net_shutdown(struct super_block *sb);
int scoutfs_client_setup(struct super_block *sb);
void scoutfs_client_destroy(struct super_block *sb);

View File

@@ -47,6 +47,8 @@
EXPAND_COUNTER(btree_merge) \
EXPAND_COUNTER(btree_merge_alloc_low) \
EXPAND_COUNTER(btree_merge_delete) \
EXPAND_COUNTER(btree_merge_delta_combined) \
EXPAND_COUNTER(btree_merge_delta_null) \
EXPAND_COUNTER(btree_merge_dirty_limit) \
EXPAND_COUNTER(btree_merge_drop_old) \
EXPAND_COUNTER(btree_merge_insert) \
@@ -88,45 +90,33 @@
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
EXPAND_COUNTER(inode_evict_intr) \
EXPAND_COUNTER(item_alloc_bytes) \
EXPAND_COUNTER(item_clear_dirty) \
EXPAND_COUNTER(item_create) \
EXPAND_COUNTER(item_delete) \
EXPAND_COUNTER(item_delta) \
EXPAND_COUNTER(item_delta_written) \
EXPAND_COUNTER(item_dirty) \
EXPAND_COUNTER(item_free_bytes) \
EXPAND_COUNTER(item_invalidate) \
EXPAND_COUNTER(item_invalidate_page) \
EXPAND_COUNTER(item_invalidate_item) \
EXPAND_COUNTER(item_lookup) \
EXPAND_COUNTER(item_mark_dirty) \
EXPAND_COUNTER(item_next) \
EXPAND_COUNTER(item_page_accessed) \
EXPAND_COUNTER(item_page_alloc) \
EXPAND_COUNTER(item_page_clear_dirty) \
EXPAND_COUNTER(item_page_compact) \
EXPAND_COUNTER(item_page_free) \
EXPAND_COUNTER(item_page_lru_add) \
EXPAND_COUNTER(item_page_lru_remove) \
EXPAND_COUNTER(item_page_mark_dirty) \
EXPAND_COUNTER(item_page_rbtree_walk) \
EXPAND_COUNTER(item_page_split) \
EXPAND_COUNTER(item_pcpu_add_replaced) \
EXPAND_COUNTER(item_pcpu_page_hit) \
EXPAND_COUNTER(item_pcpu_page_miss) \
EXPAND_COUNTER(item_pcpu_page_miss_keys) \
EXPAND_COUNTER(item_read_pages_split) \
EXPAND_COUNTER(item_shrink_page) \
EXPAND_COUNTER(item_shrink_page_dirty) \
EXPAND_COUNTER(item_shrink_page_reader) \
EXPAND_COUNTER(item_shrink_page_trylock) \
EXPAND_COUNTER(item_shrink) \
EXPAND_COUNTER(item_shrink_all) \
EXPAND_COUNTER(item_shrink_exhausted) \
EXPAND_COUNTER(item_shrink_read_search) \
EXPAND_COUNTER(item_shrink_removed) \
EXPAND_COUNTER(item_shrink_searched) \
EXPAND_COUNTER(item_shrink_skipped) \
EXPAND_COUNTER(item_shrink_write_search) \
EXPAND_COUNTER(item_update) \
EXPAND_COUNTER(item_write_dirty) \
EXPAND_COUNTER(lock_alloc) \
EXPAND_COUNTER(lock_free) \
EXPAND_COUNTER(lock_grace_extended) \
EXPAND_COUNTER(lock_grace_set) \
EXPAND_COUNTER(lock_grace_wait) \
EXPAND_COUNTER(lock_grant_request) \
EXPAND_COUNTER(lock_grant_response) \
EXPAND_COUNTER(lock_grant_work) \
EXPAND_COUNTER(lock_invalidate_coverage) \
EXPAND_COUNTER(lock_invalidate_inode) \
EXPAND_COUNTER(lock_invalidate_request) \
@@ -179,6 +169,7 @@
EXPAND_COUNTER(srch_add_entry) \
EXPAND_COUNTER(srch_compact_dirty_block) \
EXPAND_COUNTER(srch_compact_entry) \
EXPAND_COUNTER(srch_compact_error) \
EXPAND_COUNTER(srch_compact_flush) \
EXPAND_COUNTER(srch_compact_log_page) \
EXPAND_COUNTER(srch_compact_removed_entry) \
@@ -193,6 +184,11 @@
EXPAND_COUNTER(srch_search_xattrs) \
EXPAND_COUNTER(srch_read_stale) \
EXPAND_COUNTER(statfs) \
EXPAND_COUNTER(totl_read_copied) \
EXPAND_COUNTER(totl_read_finalized) \
EXPAND_COUNTER(totl_read_fs) \
EXPAND_COUNTER(totl_read_item) \
EXPAND_COUNTER(totl_read_logged) \
EXPAND_COUNTER(trans_commit_data_alloc_low) \
EXPAND_COUNTER(trans_commit_dirty_meta_full) \
EXPAND_COUNTER(trans_commit_fsync) \

584
kmod/src/cwskip.c Normal file
View File

@@ -0,0 +1,584 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/rcupdate.h>
#include <linux/random.h>
#include "cwskip.h"
/*
* This skip list is built to allow concurrent modification and limit
* contention to the region of the list around the modification. All
* node references are protected by RCU. Each node has a write_seq
* that works like a seqlock; the big differences are that we nest them
* and use trylock to acquire them.
*
* Readers sample the write_seqs of nodes containing links as they
* traverse them, verifying that the node hasn't been modified before
* traversing to the node referenced by the link.
*
* Writers remember the seqs of all the nodes they traversed to end up
* at their final node. They try to acquire the lock of all the nodes
* needed to modify the list at a given height. Their trylocks will
* fail if any of the nodes have changed since their traversal.
*
* The interface is built around references to adjacent pairs of nodes
* and their sequence numbers. This lets readers and writers traverse
* through their local region of the list until they hit contention and
* must start over with a full search.
*
* The caller is responsible for allocating and freeing nodes. The
* interface is built around the caller's objects, which each have
* embedded nodes.
*/
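/*
 * An example read-side lookup under the begin/retry pattern (a sketch;
 * struct thing, thing_key, and copy_out() are hypothetical caller
 * code, not part of this interface):
 *
 *	struct scoutfs_cwskip_reader rd;
 *	struct thing *prev;
 *	struct thing *found;
 *	bool valid;
 *	int cmp;
 *
 *	do {
 *		scoutfs_cwskip_read_begin(root, &thing_key, (void **)&prev,
 *					  (void **)&found, &cmp, &rd);
 *		if (found && cmp == 0)
 *			copy_out(found);
 *		valid = scoutfs_cwskip_read_valid(&rd);
 *		scoutfs_cwskip_read_end(&rd);
 *	} while (!valid);
 */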
/*
* node_off is the positive offset of the cwskip node within the
* container structs stored in the list. The node_off is subtracted
* from node pointers to give the caller a pointer to their stored
* container struct.
*/
void scoutfs_cwskip_init_root(struct scoutfs_cwskip_root *root, scoutfs_cwskip_cmp_t cmp_fn,
unsigned long node_off)
{
memset(root, 0, sizeof(*root));
root->cmp_fn = cmp_fn;
root->node_off = node_off;
}
/* This is completely racy and should be used accordingly. */
bool scoutfs_cwskip_empty(struct scoutfs_cwskip_root *root)
{
int i;
for (i = 0; i < SCOUTFS_CWSKIP_MAX_HEIGHT; i++) {
if (root->node.links[i] != NULL)
return false;
}
return true;
}
/*
* Return a random height between 1 and max height, inclusive. Using
* ffs means that each greater height relies on all lower height bits
* being clear and we get the height distribution we want: 1 = 1/2,
* 2 = 1/4, 3 = 1/8, etc.
*/
int scoutfs_cwskip_rand_height(void)
{
return ffs(prandom_u32() | (1 << (SCOUTFS_CWSKIP_MAX_HEIGHT - 1)));
}
static void *node_container(struct scoutfs_cwskip_root *root, struct scoutfs_cwskip_node *node)
{
return node ? (void *)((unsigned long)node - root->node_off) : NULL;
}
/*
* Set the caller's containers for the given nodes. There isn't a
* previous container when the previous node is the root's static
* full-height node.
*/
static void set_containers(struct scoutfs_cwskip_root *root, struct scoutfs_cwskip_node *prev,
struct scoutfs_cwskip_node *node, void **prev_cont, void **node_cont)
{
if (prev_cont)
*prev_cont = (prev != &root->node) ? node_container(root, prev) : NULL;
if (node_cont)
*node_cont = node_container(root, node);
}
static struct scoutfs_cwskip_node *node_read_begin(struct scoutfs_cwskip_node *node,
unsigned int *seq)
{
if (node) {
*seq = READ_ONCE(node->write_seq) & ~1U;
smp_rmb();
} else {
*seq = 1; /* caller shouldn't use if we return null, being careful */
}
return node;
}
static bool node_read_retry(struct scoutfs_cwskip_node *node, unsigned int seq)
{
if (node) {
smp_rmb();
return READ_ONCE(node->write_seq) != seq;
}
return false;
}
/*
* write_seq is only an int to reduce the size of nodes and full-height
* seq arrays; it could be a long if archs have trouble with int
* cmpxchg.
*/
static bool __node_trylock(struct scoutfs_cwskip_node *node, unsigned int seq)
{
if (seq & 1)
return false;
return cmpxchg(&node->write_seq, seq, seq + 1) == seq;
}
static bool node_trylock(struct scoutfs_cwskip_node *node, unsigned int seq)
{
bool locked = __node_trylock(node, seq);
if (locked)
smp_wmb();
return locked;
}
static void __node_unlock(struct scoutfs_cwskip_node *node)
{
node->write_seq++;
}
static void node_unlock(struct scoutfs_cwskip_node *node)
{
__node_unlock(node);
smp_wmb();
}
/* return -1/1 to go left/right, never 0 */
static int random_cmp(void *K, void *C)
{
return (int)(prandom_u32() & 2) - 1;
}
static void cwskip_search(struct scoutfs_cwskip_root *root, void *key, int *node_cmp,
struct scoutfs_cwskip_reader *rd, struct scoutfs_cwskip_writer *wr,
unsigned int *prev_seqs)
{
struct scoutfs_cwskip_node *prev;
struct scoutfs_cwskip_node *node;
scoutfs_cwskip_cmp_t cmp_fn;
unsigned int prev_seq;
unsigned int node_seq;
int level;
int cmp;
if (key == NULL)
cmp_fn = random_cmp;
else
cmp_fn = root->cmp_fn;
restart:
prev = node_read_begin(&root->node, &prev_seq);
node = NULL;
node_seq = 1;
cmp = -1;
level = SCOUTFS_CWSKIP_MAX_HEIGHT - 1;
while (prev && level >= 0) {
node = node_read_begin(prev->links[level], &node_seq);
if (!node) {
cmp = -1;
level--;
continue;
}
cmp = cmp_fn(key, node_container(root, node));
if (cmp > 0) {
if (node_read_retry(prev, prev_seq))
goto restart;
prev = node;
prev_seq = node_seq;
node = NULL;
continue;
}
if (wr) {
wr->prevs[level] = prev;
prev_seqs[level] = prev_seq;
}
level--;
}
rd->prev = prev;
rd->prev_seq = prev_seq;
rd->node = node;
rd->node_seq = node_seq;
*node_cmp = cmp;
}
static void init_reader(struct scoutfs_cwskip_reader *rd, struct scoutfs_cwskip_root *root)
{
memset(rd, 0, sizeof(struct scoutfs_cwskip_reader));
rd->root = root;
}
/*
* Find and return the nodes that surround the search key.
*
* Either prev or node can be null if there are no nodes before or after
* the search key. *node_cmp is set to the final comparison of the key
* and the returned node's container key, it will be 0 if an exact match
* is found.
*
* This starts an RCU read critical section and is fully concurrent with
* both other readers and writers. The nodes won't be freed until
* after the section so it's always safe to reference them but their
* contents might be nonsense if they're modified during the read.
* Nothing learned from the list during the read section should have an
* effect until after _read_valid has said it was OK.
*
* _read_valid can be called after referencing the nodes to see if they
* were stable during the read. _read_next can be used to iterate
* forward through the list without repeating the search. The caller
* must always call a matching _read_end once they're done.
*/
void scoutfs_cwskip_read_begin(struct scoutfs_cwskip_root *root, void *key, void **prev_cont,
void **node_cont, int *node_cmp, struct scoutfs_cwskip_reader *rd)
__acquires(RCU) /* :/ */
{
init_reader(rd, root);
rcu_read_lock();
cwskip_search(root, key, node_cmp, rd, NULL, NULL);
set_containers(root, rd->prev, rd->node, prev_cont, node_cont);
}
/*
* Returns true if the nodes referenced by the reader haven't been
* modified and any references of them were consistent. This does not
* end the reader critical section and can be called multiple times.
*/
bool scoutfs_cwskip_read_valid(struct scoutfs_cwskip_reader *rd)
{
return !(node_read_retry(rd->prev, rd->prev_seq) ||
node_read_retry(rd->node, rd->node_seq));
}
/*
* Advance from the current prev/node to the next pair of nodes in the
* list. prev_cont is set to what node_cont was before the call.
* node_cont is set to the next node after the current node_cont.
*
* This returns true if it found a next node and its load of the next
* pointer from node was valid and stable. Returning false means the
* caller should retry; there could still be more items in the list.
*/
bool scoutfs_cwskip_read_next(struct scoutfs_cwskip_reader *rd, void **prev_cont, void **node_cont)
{
struct scoutfs_cwskip_node *next;
unsigned int next_seq;
bool valid_next;
next = rd->node ? node_read_begin(rd->node->links[0], &next_seq) : NULL;
valid_next = scoutfs_cwskip_read_valid(rd) && next;
if (valid_next) {
rd->prev = rd->node;
rd->prev_seq = rd->node_seq;
rd->node = next;
rd->node_seq = next_seq;
set_containers(rd->root, rd->prev, rd->node, prev_cont, node_cont);
}
return valid_next;
}
/*
* End the critical section started with _read_begin.
*/
void scoutfs_cwskip_read_end(struct scoutfs_cwskip_reader *rd)
__releases(RCU) /* :/ */
{
rcu_read_unlock();
}
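
A minimal read-side lookup sketch built on these helpers; the item struct, key type, and value copy are hypothetical stand-ins for a real caller like the item cache:

struct item {
	struct my_key key;		/* hypothetical caller key type */
	u64 value;
	struct scoutfs_cwskip_node node;
};

static bool lookup_value(struct scoutfs_cwskip_root *root, struct my_key *key, u64 *value_ret)
{
	struct scoutfs_cwskip_reader rd;
	struct item *item;
	bool found;
	int cmp;

	for (;;) {
		scoutfs_cwskip_read_begin(root, key, NULL, (void **)&item, &cmp, &rd);
		found = item && cmp == 0;
		if (found)
			*value_ret = item->value;	/* may be torn, validated below */
		if (scoutfs_cwskip_read_valid(&rd)) {
			scoutfs_cwskip_read_end(&rd);
			return found;
		}
		scoutfs_cwskip_read_end(&rd);		/* raced with a writer, retry */
	}
}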
/*
* Higher locks are more likely to cause contention so we unlock them
* first.
*/
static void writer_unlock(struct scoutfs_cwskip_writer *wr)
{
int i;
for (i = wr->locked_height - 1; i >= 0; i--) {
if (i == 0 || (wr->prevs[i - 1] != wr->prevs[i]))
__node_unlock(wr->prevs[i]);
}
if (wr->node_locked)
__node_unlock(wr->node);
smp_wmb();
wr->locked_height = 0;
wr->node_locked = false;
}
/*
* A search traversal has saved all the previous nodes at each level.
*
* We try to acquire the write_seq locks for all the prevs up to height
* from the seqs that we read during the search. The search was
* protected by read sections so the prevs represent a consistent
* version of the list at some point in the past. If nodes have been
* locked since we read them we won't be able to acquire the locks.
* Nodes aren't re-inserted after removal so we shouldn't see nodes in
* multiple places (which would deadlock).
*
* The same node can be in multiple prev slots. We're careful to only
* try locking the lowest duplicate slot.
*
* We lock from the highest level down. This only matters when there's
* contention. The higher nodes are more likely to see contention so
* we want trylock to fail early to avoid useless locking churn on lower
* nodes.
*/
static bool writer_trylock(struct scoutfs_cwskip_writer *wr, unsigned int *prev_seqs, int height)
{
int i;
if (WARN_ON_ONCE(wr->locked_height != 0) ||
WARN_ON_ONCE(height < 1 || height > ARRAY_SIZE(wr->prevs)))
return false;
for (i = height - 1; i >= 0; i--) {
if ((i == 0 || wr->prevs[i - 1] != wr->prevs[i]) &&
!__node_trylock(wr->prevs[i], prev_seqs[i]))
break;
}
if (i >= 0) {
/* drop the higher prevs we did lock, lowest dup slot only */
while (++i < height) {
if (wr->prevs[i - 1] != wr->prevs[i])
__node_unlock(wr->prevs[i]);
}
if (wr->node_locked)
__node_unlock(wr->node);
smp_wmb();
wr->node_locked = false;
return false;
}
wr->locked_height = height;
/* paranoid debugging verification */
for (i = 0; i < wr->locked_height; i++) {
BUG_ON(wr->prevs[i]->height <= i);
BUG_ON(wr->node && i < wr->node->height && wr->prevs[i]->links[i] != wr->node);
}
smp_mb();
return true;
}
static void init_writer(struct scoutfs_cwskip_writer *wr, struct scoutfs_cwskip_root *root)
{
memset(wr, 0, sizeof(struct scoutfs_cwskip_writer));
wr->root = root;
}
/*
* Search for and return references to the two nodes that surround the
* search key, with the nodes locked.
*
* Either node can be null if there are no nodes before or after the
* search key. We still hold a lock on the static root node if the
* search key falls before the first node in the list.
*
* If lock_height is 0 then the caller is saying that they just want to
* lock the surrounding nodes and not modify their position in the list.
* We only lock those two nodes. Any greater lock_height represents a
* height that we need to lock so the caller can insert an allocated
* node with that height.
*
* The caller can use the writer context to iterate through locked nodes
* via the lowest level list that contains all nodes. If they hit a
* node that's higher than the locked height in the writer then they
* have to unlock and restart because we don't have the previous node
* for that height. We lock to at least a minimum height to reduce the
* possibility of hitting higher nodes and retrying.
*/
#define MIN_LOCKED_HEIGHT 4
void scoutfs_cwskip_write_begin(struct scoutfs_cwskip_root *root, void *key, int lock_height,
void **prev_cont, void **node_cont, int *node_cmp,
struct scoutfs_cwskip_writer *wr)
__acquires(RCU) /* :/ */
{
unsigned int prev_seqs[SCOUTFS_CWSKIP_MAX_HEIGHT];
struct scoutfs_cwskip_reader rd;
int node_height;
int use_height;
bool locked;
BUG_ON(WARN_ON_ONCE(lock_height < 0 || lock_height > SCOUTFS_CWSKIP_MAX_HEIGHT));
do {
init_reader(&rd, root);
init_writer(wr, root);
rcu_read_lock();
cwskip_search(root, key, node_cmp, &rd, wr, prev_seqs);
wr->node = rd.node;
if (wr->node) {
/* _trylock of prevs will issue barrier on success */
if (!__node_trylock(wr->node, rd.node_seq)) {
rcu_read_unlock();
locked = false;
continue;
}
wr->node_locked = true;
node_height = wr->node->height;
} else {
node_height = 0;
}
if (lock_height > 0)
use_height = max3(MIN_LOCKED_HEIGHT, node_height, lock_height);
else
use_height = 1;
locked = writer_trylock(wr, prev_seqs, use_height);
if (!locked)
rcu_read_unlock();
} while (!locked);
set_containers(root, wr->prevs[0], wr->node, prev_cont, node_cont);
}
/*
* Insert a new node between the writer's two locked nodes. The
* inserting node is locked and replaces the existing node in the writer
* which is unlocked.
*
* The next node may not exist. The previous nodes will always exist
* though they may be the static root node.
*
* The inserting node is visible to readers the moment we store the
* first link to it in previous nodes. We first lock it with a write
* barrier so that any readers will retry if they visit it before all
* its links are updated and it's unlocked.
*
* We don't unlock prevs that are higher than the inserting node. This
* lets the caller continue iterating through nodes that are higher than
* insertion but still under the locked height.
*/
void scoutfs_cwskip_write_insert(struct scoutfs_cwskip_writer *wr,
struct scoutfs_cwskip_node *ins)
{
struct scoutfs_cwskip_node *node = wr->node;
int i;
BUG_ON(ins->height > wr->locked_height);
node_trylock(ins, ins->write_seq);
for (i = 0; i < ins->height; i++) {
ins->links[i] = wr->prevs[i]->links[i];
wr->prevs[i]->links[i] = ins;
}
if (node)
node_unlock(node);
wr->node = ins;
wr->node_locked = true;
}
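
A sketch of inserting an allocated item with these calls, reusing the hypothetical item struct from the reader example above; allocation and the duplicate-key policy are up to the real caller:

static int insert_item(struct scoutfs_cwskip_root *root, struct item *ins)
{
	struct scoutfs_cwskip_writer wr;
	struct item *found;
	int cmp;
	int ret;

	/* ins->node.height was set from scoutfs_cwskip_rand_height() at alloc,
	 * and a zeroed write_seq leaves the new node unlocked */
	scoutfs_cwskip_write_begin(root, &ins->key, ins->node.height, NULL,
				   (void **)&found, &cmp, &wr);
	if (found && cmp == 0) {
		ret = -EEXIST;
	} else {
		scoutfs_cwskip_write_insert(&wr, &ins->node);
		ret = 0;
	}
	scoutfs_cwskip_write_end(&wr);

	return ret;
}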
/*
* Remove the node in the writer from the list. The writer's node
* pointer is not advanced because we don't want this to be able to fail
* if trylock on the next node fails. The caller can call _write_next
* on this writer and it will try to iterate from prevs[0].
*
* The caller's removal argument must be the node pointer in the writer.
* This is redundant but meant to communicate to the caller that they're
* responsible for the node after removing it (presumably queueing it
* for freeing before _write_end leaves rcu).
*
* Readers can be traversing our node as we modify its pointers and can
* read a temporarily inconsistent state. We have the node locked so
* the reader will immediately retry once they check the seqs after
* hitting our node that's being removed.
*/
void scoutfs_cwskip_write_remove(struct scoutfs_cwskip_writer *wr,
struct scoutfs_cwskip_node *node)
{
int i;
BUG_ON(node != wr->node);
BUG_ON(node->height > wr->locked_height);
for (i = 0; i < node->height; i++) {
wr->prevs[i]->links[i] = node->links[i];
node->links[i] = NULL;
}
node_unlock(node);
wr->node = NULL;
wr->node_locked = false;
}
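
And a matching removal sketch; the rcu_head member is an assumption about the caller's container so it can be freed once concurrent readers drain, and the retry with a larger lock_height reflects that unlinking needs all of the node's levels locked:

static int remove_item(struct scoutfs_cwskip_root *root, struct my_key *key)
{
	struct scoutfs_cwskip_writer wr;
	struct item *found;
	int lock_height = 0;
	int cmp;
	int ret;

retry:
	scoutfs_cwskip_write_begin(root, key, lock_height, NULL, (void **)&found, &cmp, &wr);
	if (!found || cmp != 0) {
		ret = -ENOENT;
	} else if (found->node.height > wr.locked_height) {
		/* all of the node's levels must be locked to unlink it */
		lock_height = found->node.height;
		scoutfs_cwskip_write_end(&wr);
		goto retry;
	} else {
		scoutfs_cwskip_write_remove(&wr, &found->node);
		kfree_rcu(found, rcu);	/* assumes a struct rcu_head rcu member */
		ret = 0;
	}
	scoutfs_cwskip_write_end(&wr);
	return ret;
}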
/*
* Advance through the list by setting prevs to node and node to the
* next node in the list after locking it. Returns true only if there
* was a next node that we were able to lock. Returning false can mean
* that we weren't able to lock the next node and the caller should
* retry a full search.
*
* This may be called after _write_remove clears node so we try to
* iterate from prev if there is no node.
*
* If lock_height is greater than zero then the caller needs at least
* that lock_height to insert a node of that height. If locked_height
* doesn't cover it then we return false so the caller can retry
* _write_begin with the needed height.
*
* Like insertion, we don't unlock prevs higher than the height of the
* next node. They're not strictly needed to modify the next node but
* we want to keep them locked so the caller can continue to iterate
* through nodes up to the locked height.
*/
bool scoutfs_cwskip_write_next(struct scoutfs_cwskip_writer *wr, int lock_height,
void **prev_cont, void **node_cont)
{
struct scoutfs_cwskip_node *next;
int i;
if (WARN_ON_ONCE(lock_height < 0 || lock_height > SCOUTFS_CWSKIP_MAX_HEIGHT))
return false;
if (wr->node)
next = rcu_dereference(wr->node->links[0]);
else
next = rcu_dereference(wr->prevs[0]->links[0]);
if (!next ||
(lock_height > wr->locked_height) ||
(lock_height > 0 && next->height > wr->locked_height) ||
!__node_trylock(next, next->write_seq))
return false;
if (!wr->node) {
/* set next as missing node */
wr->node = next;
wr->node_locked = true;
} else {
/* existing node becomes prevs for its height */
__node_unlock(wr->prevs[0]);
for (i = 0; i < wr->node->height; i++)
wr->prevs[i] = wr->node;
wr->node = next;
}
smp_wmb(); /* next locked and prev unlocked */
set_containers(wr->root, wr->prevs[0], wr->node, prev_cont, node_cont);
return true;
}
void scoutfs_cwskip_write_end(struct scoutfs_cwskip_writer *wr)
__releases(RCU) /* :/ */
{
writer_unlock(wr);
rcu_read_unlock();
}

kmod/src/cwskip.h Normal file

@@ -0,0 +1,68 @@
#ifndef _SCOUTFS_CWSKIP_H_
#define _SCOUTFS_CWSKIP_H_
/* A billion seems like a lot. */
#define SCOUTFS_CWSKIP_MAX_HEIGHT 30
struct scoutfs_cwskip_node {
int height;
unsigned int write_seq;
struct scoutfs_cwskip_node *links[];
};
#define SCOUTFS_CWSKIP_FULL_NODE_BYTES \
offsetof(struct scoutfs_cwskip_node, links[SCOUTFS_CWSKIP_MAX_HEIGHT + 1])
typedef int (*scoutfs_cwskip_cmp_t)(void *K, void *C);
struct scoutfs_cwskip_root {
scoutfs_cwskip_cmp_t cmp_fn;
unsigned long node_off;
union {
struct scoutfs_cwskip_node node;
__u8 __full_root_node[SCOUTFS_CWSKIP_FULL_NODE_BYTES];
};
};
struct scoutfs_cwskip_reader {
struct scoutfs_cwskip_root *root;
struct scoutfs_cwskip_node *prev;
struct scoutfs_cwskip_node *node;
unsigned int prev_seq;
unsigned int node_seq;
};
/*
* The full height prevs array makes these pretty enormous :/.
*/
struct scoutfs_cwskip_writer {
struct scoutfs_cwskip_root *root;
bool node_locked;
int locked_height;
struct scoutfs_cwskip_node *node;
struct scoutfs_cwskip_node *prevs[SCOUTFS_CWSKIP_MAX_HEIGHT];
};
void scoutfs_cwskip_init_root(struct scoutfs_cwskip_root *root, scoutfs_cwskip_cmp_t cmp_fn,
unsigned long node_off);
bool scoutfs_cwskip_empty(struct scoutfs_cwskip_root *root);
int scoutfs_cwskip_rand_height(void);
void scoutfs_cwskip_read_begin(struct scoutfs_cwskip_root *root, void *key, void **prev_cont,
void **node_cont, int *node_cmp, struct scoutfs_cwskip_reader *rd);
bool scoutfs_cwskip_read_valid(struct scoutfs_cwskip_reader *rd);
bool scoutfs_cwskip_read_next(struct scoutfs_cwskip_reader *rd, void **prev_cont, void **node_cont);
void scoutfs_cwskip_read_end(struct scoutfs_cwskip_reader *rd);
void scoutfs_cwskip_write_begin(struct scoutfs_cwskip_root *root, void *key, int lock_height,
void **prev_cont, void **node_cont, int *node_cmp,
struct scoutfs_cwskip_writer *wr);
void scoutfs_cwskip_write_insert(struct scoutfs_cwskip_writer *wr,
struct scoutfs_cwskip_node *ins);
void scoutfs_cwskip_write_remove(struct scoutfs_cwskip_writer *wr,
struct scoutfs_cwskip_node *node);
bool scoutfs_cwskip_write_next(struct scoutfs_cwskip_writer *wr, int lock_height,
void **prev_cont, void **node_cont);
void scoutfs_cwskip_write_end(struct scoutfs_cwskip_writer *wr);
#endif
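
The node_off offset is how the list maps between nodes and their containers: callers embed a flexibly sized node at the tail of their item and tell the root where it lives. A sketch of that layout and its initialization, with the key type and comparison as hypothetical caller code:

struct item {
	struct my_key key;
	struct scoutfs_cwskip_node node;	/* must be last, links[] is flexible */
};

static int item_cmp(void *K, void *C)
{
	struct my_key *key = K;
	struct item *item = C;

	return my_key_compare(key, &item->key);	/* hypothetical -1/0/1 compare */
}

static void init_item_list(struct scoutfs_cwskip_root *root)
{
	scoutfs_cwskip_init_root(root, item_cmp, offsetof(struct item, node));
}

/* size the flexible links[] array by the node's random height;
 * kzalloc leaves write_seq even so the node starts unlocked */
static struct item *alloc_item(gfp_t gfp)
{
	int height = scoutfs_cwskip_rand_height();
	struct item *item;

	item = kzalloc(offsetof(struct item, node.links[height]), gfp);
	if (item)
		item->node.height = height;
	return item;
}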


@@ -207,6 +207,7 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
u64 offset;
s64 ret;
u8 flags;
int err;
int i;
flags = offline ? SEF_OFFLINE : 0;
@@ -246,6 +247,18 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
tr.len = min(ext.len - offset, last - iblock + 1);
tr.flags = ext.flags;
trace_scoutfs_data_extent_truncated(sb, ino, &tr);
ret = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, 0, flags);
if (ret < 0) {
if (WARN_ON_ONCE(ret == -EINVAL)) {
scoutfs_err(sb, "unexpected truncate inconsistency: ino %llu iblock %llu last %llu, start %llu len %llu",
ino, iblock, last, tr.start, tr.len);
}
break;
}
if (tr.map) {
mutex_lock(&datinf->mutex);
ret = scoutfs_free_data(sb, datinf->alloc,
@@ -253,16 +266,16 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
&datinf->data_freed,
tr.map, tr.len);
mutex_unlock(&datinf->mutex);
if (ret < 0)
if (ret < 0) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, tr.map, tr.flags);
if (err < 0)
scoutfs_err(sb, "truncate err %d restoring extent after error %lld: ino %llu start %llu len %llu",
err, ret, ino, tr.start, tr.len);
break;
}
}
trace_scoutfs_data_extent_truncated(sb, ino, &tr);
ret = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, 0, flags);
BUG_ON(ret); /* inconsistent, could prealloc items */
iblock += tr.len;
}
@@ -817,6 +830,7 @@ static int scoutfs_write_end(struct file *file, struct address_space *mapping,
scoutfs_inode_inc_data_version(inode);
}
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, wbd->lock, &wbd->ind_locks);
scoutfs_inode_queue_writeback(inode);
}
@@ -1018,8 +1032,11 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
end = (iblock + ret) << SCOUTFS_BLOCK_SM_SHIFT;
if (end > offset + len)
end = offset + len;
if (end > i_size_read(inode))
if (end > i_size_read(inode)) {
i_size_write(inode, end);
inode_inc_iversion(inode);
scoutfs_inode_inc_data_version(inode);
}
}
if (ret >= 0)
scoutfs_update_inode_item(inode, lock, &ind_locks);
@@ -1351,10 +1368,12 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
cur_time = CURRENT_TIME;
if (!is_stage) {
to->i_ctime = to->i_mtime = cur_time;
inode_inc_iversion(to);
scoutfs_inode_inc_data_version(to);
scoutfs_inode_set_data_seq(to);
}
from->i_ctime = from->i_mtime = cur_time;
inode_inc_iversion(from);
scoutfs_inode_inc_data_version(from);
scoutfs_inode_set_data_seq(from);


@@ -38,13 +38,6 @@ struct scoutfs_data_wait {
.err = 0, \
}
struct scoutfs_traced_extent {
u64 iblock;
u64 count;
u64 blkno;
u8 flags;
};
extern const struct address_space_operations scoutfs_file_aops;
extern const struct file_operations scoutfs_file_fops;
struct scoutfs_alloc;


@@ -31,6 +31,7 @@
#include "lock.h"
#include "hash.h"
#include "omap.h"
#include "forest.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -135,8 +136,8 @@ static int alloc_dentry_info(struct dentry *dentry)
{
struct dentry_info *di;
/* XXX read mb? */
if (dentry->d_fsdata)
smp_rmb();
if (dentry->d_op == &scoutfs_dentry_ops)
return 0;
di = kmem_cache_zalloc(dentry_info_cache, GFP_NOFS);
@@ -148,6 +149,7 @@ static int alloc_dentry_info(struct dentry *dentry)
spin_lock(&dentry->d_lock);
if (!dentry->d_fsdata) {
dentry->d_fsdata = di;
smp_wmb();
d_set_d_op(dentry, &scoutfs_dentry_ops);
}
spin_unlock(&dentry->d_lock);
@@ -253,7 +255,7 @@ static u64 dirent_name_hash(const char *name, unsigned int name_len)
((u64)dirent_name_fingerprint(name, name_len) << 32);
}
static u64 dirent_names_equal(const char *a_name, unsigned int a_len,
static bool dirent_names_equal(const char *a_name, unsigned int a_len,
const char *b_name, unsigned int b_len)
{
return a_len == b_len && memcmp(a_name, b_name, a_len) == 0;
@@ -275,8 +277,7 @@ static int lookup_dirent(struct super_block *sb, u64 dir_ino, const char *name,
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
ret = -ENOMEM;
goto out;
return -ENOMEM;
}
init_dirent_key(&key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, 0);
@@ -316,6 +317,52 @@ out:
return ret;
}
/*
* Verify that the caller's dentry still precisely matches our dirent
* items.
*
* The caller has a dentry that the vfs revalidated before they acquired
* their locks. If the dentry is still covered by a lock we immediately
* return 0. If not, we check items and return -ENOENT if a positive
* dentry no longer matches the items or -EEXIST if a negative entry's
* name now has an item.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_lock *lock)
{
struct dentry_info *di = dentry->d_fsdata;
struct scoutfs_dirent dent = {0,};
const char *name;
u64 dentry_ino;
int name_len;
u64 hash;
int ret;
if (scoutfs_lock_is_covered(sb, &di->lock_cov))
return 0;
dentry_ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
name = dentry->d_name.name;
name_len = dentry->d_name.len;
hash = dirent_name_hash(name, name_len);
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
if (ret < 0 && ret != -ENOENT)
return ret;
if (dentry_ino != le64_to_cpu(dent.ino) || di->hash != le64_to_cpu(dent.hash) ||
di->pos != le64_to_cpu(dent.pos)) {
if (dentry_ino)
ret = -ENOENT;
else
ret = -EEXIST;
} else {
ret = 0;
}
return ret;
}
static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
{
struct super_block *sb = dentry->d_sb;
@@ -423,7 +470,7 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
{
struct super_block *sb = dir->i_sb;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent dent;
struct scoutfs_dirent dent = {0,};
struct inode *inode;
u64 ino = 0;
u64 hash;
@@ -451,9 +498,11 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
ret = 0;
} else if (ret == 0) {
ino = le64_to_cpu(dent.ino);
}
if (ret == 0)
update_dentry_info(sb, dentry, le64_to_cpu(dent.hash),
le64_to_cpu(dent.pos), dir_lock);
}
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_READ);
out:
@@ -462,7 +511,7 @@ out:
else if (ino == 0)
inode = NULL;
else
inode = scoutfs_iget(sb, ino);
inode = scoutfs_iget(sb, ino, 0);
/*
* We can't splice dir aliases into the dcache. dir entries
@@ -490,10 +539,10 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
{
struct inode *inode = file_inode(file);
struct super_block *sb = inode->i_sb;
struct scoutfs_dirent *dent;
struct scoutfs_key key;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent *dent = NULL;
struct scoutfs_key last_key;
struct scoutfs_lock *dir_lock;
struct scoutfs_key key;
int name_len;
u64 pos;
int ret;
@@ -503,8 +552,7 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
ret = -ENOMEM;
goto out;
return -ENOMEM;
}
init_dirent_key(&last_key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
@@ -571,18 +619,17 @@ static int add_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
u64 ino, umode_t mode, struct scoutfs_lock *dir_lock,
struct scoutfs_lock *inode_lock)
{
struct scoutfs_dirent *dent = NULL;
struct scoutfs_key rdir_key;
struct scoutfs_key ent_key;
struct scoutfs_key lb_key;
struct scoutfs_dirent *dent;
bool del_ent = false;
bool del_rdir = false;
bool del_ent = false;
int ret;
dent = alloc_dirent(name_len);
if (!dent) {
ret = -ENOMEM;
goto out;
return -ENOMEM;
}
/* initialize the dent */
@@ -753,6 +800,7 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
u64 hash;
u64 pos;
@@ -766,6 +814,11 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
&dir_lock, &inode_lock, NULL, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
pos = SCOUTFS_I(dir)->next_readdir_pos++;
@@ -781,6 +834,10 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode->i_mtime = inode->i_atime = inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_mtime;
inode_inc_iversion(dir);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
if (S_ISDIR(mode)) {
inc_nlink(inode);
@@ -855,6 +912,10 @@ static int scoutfs_link(struct dentry *old_dentry,
if (ret)
goto out_unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out_unlock;
dir_size = i_size_read(dir) + dentry->d_name.len;
if (inode->i_nlink == 0) {
@@ -902,6 +963,8 @@ retry:
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode->i_ctime = dir->i_mtime;
inc_nlink(inode);
inode_inc_iversion(dir);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -953,6 +1016,14 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
if (ret)
return ret;
ret = alloc_dentry_info(dentry);
if (ret)
goto unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto unlock;
if (S_ISDIR(inode->i_mode) && i_size_read(inode)) {
ret = -ENOTEMPTY;
goto unlock;
@@ -990,9 +1061,13 @@ retry:
goto out;
}
update_dentry_info(sb, dentry, 0, 0, dir_lock);
dir->i_ctime = ts;
dir->i_mtime = ts;
i_size_write(dir, i_size_read(dir) - dentry->d_name.len);
inode_inc_iversion(dir);
inode_inc_iversion(inode);
inode->i_ctime = ts;
drop_nlink(inode);
@@ -1185,6 +1260,7 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
u64 hash;
u64 pos;
@@ -1205,6 +1281,11 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
&dir_lock, &inode_lock, NULL, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
ret = symlink_item_ops(sb, SYM_CREATE, scoutfs_ino(inode), inode_lock,
symname, name_len);
@@ -1224,9 +1305,13 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode_inc_iversion(dir);
inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_ctime;
i_size_write(inode, name_len);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -1279,10 +1364,10 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
u64 dir_ino, u64 dir_pos,
struct list_head *list)
{
struct scoutfs_link_backref_entry *ent;
struct scoutfs_link_backref_entry *ent = NULL;
struct scoutfs_lock *lock = NULL;
struct scoutfs_key last_key;
struct scoutfs_key key;
struct scoutfs_lock *lock = NULL;
int len;
int ret;
@@ -1502,26 +1587,6 @@ static int verify_ancestors(struct super_block *sb, u64 p1, u64 p2,
return ret;
}
/*
* Make sure that a dirent from the dir to the inode exists at the name.
* The caller has the name locked in the dir.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, const char *name,
unsigned name_len, u64 hash, u64 ino,
struct scoutfs_lock *lock)
{
struct scoutfs_dirent dent;
int ret;
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
if (ret == 0 && le64_to_cpu(dent.ino) != ino)
ret = -ENOENT;
else if (ret == -ENOENT && ino == 0)
ret = 0;
return ret;
}
/*
* The vfs performs checks on cached inodes and dirents before calling
* here. It doesn't hold any locks so all of those checks can be based
@@ -1550,8 +1615,9 @@ static int verify_entry(struct super_block *sb, u64 dir_ino, const char *name,
* from using parent/child locking orders as two groups can have both
* parent and child relationships to each other.
*/
static int scoutfs_rename(struct inode *old_dir, struct dentry *old_dentry,
struct inode *new_dir, struct dentry *new_dentry)
static int scoutfs_rename_common(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
{
struct super_block *sb = old_dir->i_sb;
struct inode *old_inode = old_dentry->d_inode;
@@ -1616,16 +1682,18 @@ static int scoutfs_rename(struct inode *old_dir, struct dentry *old_dentry,
}
/* make sure that the entries assumed by the argument still exist */
ret = verify_entry(sb, scoutfs_ino(old_dir), old_dentry->d_name.name,
old_dentry->d_name.len, old_hash,
scoutfs_ino(old_inode), old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry->d_name.name,
new_dentry->d_name.len, new_hash,
new_inode ? scoutfs_ino(new_inode) : 0,
new_dir_lock);
ret = alloc_dentry_info(old_dentry) ?:
alloc_dentry_info(new_dentry) ?:
verify_entry(sb, scoutfs_ino(old_dir), old_dentry, old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry, new_dir_lock);
if (ret)
goto out_unlock;
if ((flags & RENAME_NOREPLACE) && (new_inode != NULL)) {
ret = -EEXIST;
goto out_unlock;
}
if (should_orphan(new_inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(new_inode),
&orph_lock);
@@ -1732,6 +1800,13 @@ retry:
if (new_inode)
old_inode->i_ctime = now;
inode_inc_iversion(old_dir);
inode_inc_iversion(old_inode);
if (new_dir != old_dir)
inode_inc_iversion(new_dir);
if (new_inode)
inode_inc_iversion(new_inode);
scoutfs_update_inode_item(old_dir, old_dir_lock, &ind_locks);
scoutfs_update_inode_item(old_inode, old_inode_lock, &ind_locks);
if (new_dir != old_dir)
@@ -1801,6 +1876,23 @@ out_unlock:
return ret;
}
static int scoutfs_rename(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry)
{
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, 0);
}
static int scoutfs_rename2(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
{
if (flags & ~RENAME_NOREPLACE)
return -EINVAL;
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, flags);
}
#ifdef KC_FMODE_KABI_ITERATE
/* we only need this to set the iterate flag for kabi :/ */
static int scoutfs_dir_open(struct inode *inode, struct file *file)
@@ -1817,6 +1909,7 @@ static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mod
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
int ret;
@@ -1827,6 +1920,7 @@ static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mod
&dir_lock, &inode_lock, &orph_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
if (ret < 0) {
@@ -1835,9 +1929,12 @@ static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mod
}
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
si->crtime = inode->i_mtime;
insert_inode_hash(inode);
ihold(inode); /* need to update inode modifications in d_tmpfile */
d_tmpfile(dentry, inode);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -1886,6 +1983,7 @@ const struct inode_operations_wrapper scoutfs_dir_iops = {
.permission = scoutfs_permission,
},
.tmpfile = scoutfs_tmpfile,
.rename2 = scoutfs_rename2,
};
void scoutfs_dir_exit(void)


@@ -81,7 +81,7 @@ static struct dentry *scoutfs_fh_to_dentry(struct super_block *sb,
trace_scoutfs_fh_to_dentry(sb, fh_type, sfid);
if (scoutfs_valid_fileid(fh_type))
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino));
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino), 0);
return d_obtain_alias(inode);
}
@@ -100,7 +100,7 @@ static struct dentry *scoutfs_fh_to_parent(struct super_block *sb,
if (scoutfs_valid_fileid(fh_type) &&
fh_type == FILEID_SCOUTFS_WITH_PARENT)
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino));
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino), 0);
return d_obtain_alias(inode);
}
@@ -123,7 +123,7 @@ static struct dentry *scoutfs_get_parent(struct dentry *child)
scoutfs_dir_free_backref_path(sb, &list);
trace_scoutfs_get_parent(sb, inode, ino);
inode = scoutfs_iget(sb, ino);
inode = scoutfs_iget(sb, ino, 0);
return d_obtain_alias(inode);
}


@@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/fs.h>
#include "msg.h"
#include "ext.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -191,6 +192,9 @@ int scoutfs_ext_insert(struct super_block *sb, struct scoutfs_ext_ops *ops,
/* inserting extent must not overlap */
if (found.len && ext_overlap(&ins, found.start, found.len)) {
if (ops->insert_overlap_warn)
scoutfs_err(sb, "inserting extent %llu.%llu overlaps existing %llu.%llu",
start, len, found.start, found.len);
ret = -EINVAL;
goto out;
}
@@ -242,6 +246,8 @@ int scoutfs_ext_remove(struct super_block *sb, struct scoutfs_ext_ops *ops,
/* removed extent must be entirely within found */
if (!scoutfs_ext_inside(start, len, &found)) {
scoutfs_err(sb, "error removing extent %llu.%llu, isn't inside existing %llu.%llu",
start, len, found.start, found.len);
ret = -EINVAL;
goto out;
}


@@ -15,6 +15,8 @@ struct scoutfs_ext_ops {
u64 start, u64 len, u64 map, u8 flags);
int (*remove)(struct super_block *sb, void *arg, u64 start, u64 len,
u64 map, u8 flags);
bool insert_overlap_warn;
};
bool scoutfs_ext_can_merge(struct scoutfs_extent *left,


@@ -376,7 +376,7 @@ int scoutfs_fence_wait_fenced(struct super_block *sb, long timeout_jiffies)
bool error;
long ret;
ret = wait_event_interruptible_timeout(fi->waitq, all_fenced(fi, &error), timeout_jiffies);
ret = wait_event_timeout(fi->waitq, all_fenced(fi, &error), timeout_jiffies);
if (ret == 0)
ret = -ETIMEDOUT;
else if (ret > 0)


@@ -26,6 +26,7 @@
#include "hash.h"
#include "srch.h"
#include "counters.h"
#include "xattr.h"
#include "scoutfs_trace.h"
/*
@@ -65,6 +66,8 @@ struct forest_info {
struct workqueue_struct *workq;
struct delayed_work log_merge_dwork;
atomic64_t inode_count_delta;
};
#define DECLARE_FOREST_INFO(sb, name) \
@@ -221,25 +224,17 @@ out:
}
struct forest_read_items_data {
bool is_fs;
int fic;
scoutfs_forest_item_cb cb;
void *cb_arg;
};
static int forest_read_items(struct super_block *sb, struct scoutfs_key *key,
static int forest_read_items(struct super_block *sb, struct scoutfs_key *key, u64 seq, u8 flags,
void *val, int val_len, void *arg)
{
struct forest_read_items_data *rid = arg;
struct scoutfs_log_item_value _liv = {0,};
struct scoutfs_log_item_value *liv = &_liv;
if (!rid->is_fs) {
liv = val;
val += sizeof(struct scoutfs_log_item_value);
val_len -= sizeof(struct scoutfs_log_item_value);
}
return rid->cb(sb, key, liv, val, val_len, rid->cb_arg);
return rid->cb(sb, key, seq, flags, val, val_len, rid->fic, rid->cb_arg);
}
/*
@@ -251,19 +246,16 @@ static int forest_read_items(struct super_block *sb, struct scoutfs_key *key,
* that covers all the blocks. Any keys outside of this range can't be
* trusted because we didn't visit all the trees to check their items.
*
* If we hit stale blocks and retry we can call the callback for
* duplicate items. This is harmless because the items are stable while
* the caller holds their cluster lock and the caller has to filter out
* item seqs anyway.
* We return -ESTALE if we hit stale blocks to give the caller a chance
* to reset their state and retry with a newer version of the btrees.
*/
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_lock *lock,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg)
{
DECLARE_STALE_TRACKING_SUPER_REFS(prev_refs, refs);
struct forest_read_items_data rid = {
.cb = cb,
.cb_arg = arg,
@@ -275,31 +267,30 @@ int scoutfs_forest_read_items(struct super_block *sb,
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_block *bl;
struct scoutfs_key ltk;
struct scoutfs_key orig_start = *start;
struct scoutfs_key orig_end = *end;
int ret;
int i;
scoutfs_inc_counter(sb, forest_read_items);
calc_bloom_nrs(&bloom, &lock->start);
calc_bloom_nrs(&bloom, bloom_key);
retry:
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
refs.fs_ref = roots.fs_root.ref;
refs.logs_ref = roots.logs_root.ref;
*start = lock->start;
*end = lock->end;
*start = orig_start;
*end = orig_end;
/* start with fs root items */
rid.is_fs = true;
rid.fic |= FIC_FS_ROOT;
ret = scoutfs_btree_read_items(sb, &roots.fs_root, key, start, end,
forest_read_items, &rid);
if (ret < 0)
goto out;
rid.is_fs = false;
rid.fic &= ~FIC_FS_ROOT;
scoutfs_key_init_log_trees(&ltk, 0, 0);
for (;; scoutfs_key_inc(&ltk)) {
@@ -344,24 +335,40 @@ retry:
scoutfs_inc_counter(sb, forest_bloom_pass);
if ((le64_to_cpu(lt.flags) & SCOUTFS_LOG_TREES_FINALIZED))
rid.fic |= FIC_FINALIZED;
ret = scoutfs_btree_read_items(sb, &lt.item_root, key, start,
end, forest_read_items, &rid);
if (ret < 0)
goto out;
rid.fic &= ~FIC_FINALIZED;
}
ret = 0;
out:
if (ret == -ESTALE) {
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0)
return -EIO;
prev_refs = refs;
goto retry;
}
return ret;
}
/*
* If the items are deltas then combine the src with the destination
* value and store the result in the destination.
*
* Returns:
* -errno: fatal error, no change
* 0: not delta items, no change
* +ve: SCOUTFS_DELTA_ values indicating when dst and/or src can be dropped
*/
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
void *src, int src_len)
{
if (key->sk_zone == SCOUTFS_XATTR_TOTL_ZONE)
return scoutfs_xattr_combine_totl(dst, dst_len, src, src_len);
return 0;
}
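
scoutfs_xattr_combine_totl() isn't part of this hunk; a sketch of the combining it implies, using the scoutfs_xattr_totl_val layout added later in this diff and assuming that a zero total and count means the combined item carries no data:

int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
{
	struct scoutfs_xattr_totl_val *d = dst;
	struct scoutfs_xattr_totl_val *s = src;

	if (dst_len != sizeof(*d) || src_len != sizeof(*s))
		return -EIO;

	le64_add_cpu(&d->total, le64_to_cpu(s->total));
	le64_add_cpu(&d->count, le64_to_cpu(s->count));

	if (d->total == 0 && d->count == 0)
		return SCOUTFS_DELTA_COMBINED_NULL;

	return SCOUTFS_DELTA_COMBINED;
}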
/*
* Make sure that the bloom bits for the lock's start key are all set in
* the current log's bloom block. We record the nr of our log tree in
@@ -487,13 +494,13 @@ out:
return ret;
}
int scoutfs_forest_insert_list(struct super_block *sb,
struct scoutfs_btree_item_list *lst)
int scoutfs_forest_insert_list(struct super_block *sb, scoutfs_btree_item_iter_cb cb,
void *pos, void *arg)
{
DECLARE_FOREST_INFO(sb, finf);
return scoutfs_btree_insert_list(sb, finf->alloc, finf->wri,
&finf->our_log.item_root, lst);
&finf->our_log.item_root, cb, pos, arg);
}
/*
@@ -518,6 +525,62 @@ int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id)
return ret;
}
void scoutfs_forest_inc_inode_count(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
atomic64_inc(&finf->inode_count_delta);
}
void scoutfs_forest_dec_inode_count(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
atomic64_dec(&finf->inode_count_delta);
}
/*
* Return the total inode count from the super block and all the
* log_btrees it references. This assumes it's working with a block
* reference hierarchy that should be fully consistent. If we see
* ESTALE we've hit persistent corruption.
*/
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count)
{
struct scoutfs_log_trees *lt;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
*inode_count = le64_to_cpu(super->inode_count);
scoutfs_key_init_log_trees(&key, 0, 0);
for (;;) {
ret = scoutfs_btree_next(sb, &super->logs_root, &key, &iref);
if (ret == 0) {
if (iref.val_len == sizeof(*lt)) {
key = *iref.key;
scoutfs_key_inc(&key);
lt = iref.val;
*inode_count += le64_to_cpu(lt->inode_count_delta);
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(&iref);
}
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
else if (ret == -ESTALE)
ret = -EIO;
break;
}
}
return ret;
}
/*
* This is called from transactions as a new transaction opens and is
* serialized with all writers.
@@ -546,6 +609,8 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
WARN_ON_ONCE(finf->srch_bl); /* committing should have put the block */
finf->srch_bl = NULL;
atomic64_set(&finf->inode_count_delta, le64_to_cpu(lt->inode_count_delta));
trace_scoutfs_forest_init_our_log(sb, le64_to_cpu(lt->rid),
le64_to_cpu(lt->nr),
le64_to_cpu(lt->item_root.ref.blkno),
@@ -573,30 +638,12 @@ void scoutfs_forest_get_btrees(struct super_block *sb,
scoutfs_block_put(sb, finf->srch_bl);
finf->srch_bl = NULL;
lt->inode_count_delta = cpu_to_le64(atomic64_read(&finf->inode_count_delta));
trace_scoutfs_forest_prepare_commit(sb, &lt->item_root.ref,
&lt->bloom_ref);
}
/*
* Compare input items to merge by their log item value seq when their
* keys match.
*/
static int merge_cmp(void *a_val, int a_val_len, void *b_val, int b_val_len)
{
struct scoutfs_log_item_value *a = a_val;
struct scoutfs_log_item_value *b = b_val;
/* sort merge item by seq */
return scoutfs_cmp(le64_to_cpu(a->seq), le64_to_cpu(b->seq));
}
static bool merge_is_del(void *val, int val_len)
{
struct scoutfs_log_item_value *liv = val;
return !!(liv->flags & SCOUTFS_LOG_ITEM_FLAG_DELETION);
}
#define LOG_MERGE_DELAY_MS (5 * MSEC_PER_SEC)
/*
@@ -642,7 +689,7 @@ static void scoutfs_forest_log_merge_worker(struct work_struct *work)
scoutfs_alloc_init(&alloc, &req.meta_avail, &req.meta_freed);
scoutfs_block_writer_init(sb, &wri);
/* find finalized input log trees up to last_seq */
/* find finalized input log trees within the input seq */
for (scoutfs_key_init_log_trees(&key, 0, 0); ; scoutfs_key_inc(&key)) {
if (!rhead) {
@@ -658,10 +705,9 @@ static void scoutfs_forest_log_merge_worker(struct work_struct *work)
if (iref.val_len == sizeof(*lt)) {
key = *iref.key;
lt = iref.val;
if ((le64_to_cpu(lt->flags) &
SCOUTFS_LOG_TREES_FINALIZED) &&
(le64_to_cpu(lt->max_item_seq) <=
le64_to_cpu(req.last_seq))) {
if (lt->item_root.ref.blkno != 0 &&
(le64_to_cpu(lt->flags) & SCOUTFS_LOG_TREES_FINALIZED) &&
(le64_to_cpu(lt->finalize_seq) < le64_to_cpu(req.input_seq))) {
rhead->root = lt->item_root;
list_add_tail(&rhead->head, &inputs);
rhead = NULL;
@@ -687,10 +733,8 @@ static void scoutfs_forest_log_merge_worker(struct work_struct *work)
}
ret = scoutfs_btree_merge(sb, &alloc, &wri, &req.start, &req.end,
&next, &comp.root, &inputs, merge_cmp,
merge_is_del,
&next, &comp.root, &inputs,
!!(req.flags & cpu_to_le64(SCOUTFS_LOG_MERGE_REQUEST_SUBTREE)),
sizeof(struct scoutfs_log_item_value),
SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT, 10);
if (ret == -ERANGE) {
comp.remain = next;
@@ -747,9 +791,6 @@ int scoutfs_forest_setup(struct super_block *sb)
goto out;
}
queue_delayed_work(finf->workq, &finf->log_merge_dwork,
msecs_to_jiffies(LOG_MERGE_DELAY_MS));
ret = 0;
out:
if (ret)
@@ -758,6 +799,14 @@ out:
return 0;
}
void scoutfs_forest_start(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
queue_delayed_work(finf->workq, &finf->log_merge_dwork,
msecs_to_jiffies(LOG_MERGE_DELAY_MS));
}
void scoutfs_forest_stop(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);


@@ -8,16 +8,18 @@ struct scoutfs_block;
#include "btree.h"
/* caller gives an item to the callback */
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_log_item_value *liv,
void *val, int val_len, void *arg);
enum {
FIC_FS_ROOT = (1 << 0),
FIC_FINALIZED = (1 << 1),
};
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb, struct scoutfs_key *key, u64 seq,
u8 flags, void *val, int val_len, int fic, void *arg);
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *next);
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_lock *lock,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg);
@@ -27,10 +29,15 @@ void scoutfs_forest_set_max_seq(struct super_block *sb, u64 max_seq);
int scoutfs_forest_get_max_seq(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *seq);
int scoutfs_forest_insert_list(struct super_block *sb,
struct scoutfs_btree_item_list *lst);
int scoutfs_forest_insert_list(struct super_block *sb, scoutfs_btree_item_iter_cb cb,
void *pos, void *arg);
int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id);
void scoutfs_forest_inc_inode_count(struct super_block *sb);
void scoutfs_forest_dec_inode_count(struct super_block *sb);
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count);
void scoutfs_forest_init_btrees(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
@@ -38,7 +45,14 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
void scoutfs_forest_get_btrees(struct super_block *sb,
struct scoutfs_log_trees *lt);
/* > 0 error codes */
#define SCOUTFS_DELTA_COMBINED 1 /* src val was combined, drop src */
#define SCOUTFS_DELTA_COMBINED_NULL 2 /* combined val has no data, drop both */
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
void *src, int src_len);
int scoutfs_forest_setup(struct super_block *sb);
void scoutfs_forest_start(struct super_block *sb);
void scoutfs_forest_stop(struct super_block *sb);
void scoutfs_forest_destroy(struct super_block *sb);


@@ -1,8 +1,15 @@
#ifndef _SCOUTFS_FORMAT_H_
#define _SCOUTFS_FORMAT_H_
#define SCOUTFS_INTEROP_VERSION 0ULL
#define SCOUTFS_INTEROP_VERSION_STR __stringify(0)
/*
* The format version defines the format of structures on devices,
* structures that are communicated over the wire, and the protocol
* behind the structures.
*/
#define SCOUTFS_FORMAT_VERSION_MIN 1
#define SCOUTFS_FORMAT_VERSION_MIN_STR __stringify(SCOUTFS_FORMAT_VERSION_MIN)
#define SCOUTFS_FORMAT_VERSION_MAX 1
#define SCOUTFS_FORMAT_VERSION_MAX_STR __stringify(SCOUTFS_FORMAT_VERSION_MAX)
/* statfs(2) f_type */
#define SCOUTFS_SUPER_MAGIC 0x554f4353 /* "SCOU" */
@@ -168,6 +175,11 @@ struct scoutfs_key {
#define sko_rid _sk_first
#define sko_ino _sk_second
/* xattr totl */
#define skxt_a _sk_first
#define skxt_b _sk_second
#define skxt_c _sk_third
/* inode */
#define ski_ino _sk_first
@@ -195,10 +207,6 @@ struct scoutfs_key {
#define sklt_rid _sk_first
#define sklt_nr _sk_second
/* seqs */
#define skts_trans_seq _sk_first
#define skts_rid _sk_second
/* mounted clients */
#define skmc_rid _sk_first
@@ -244,11 +252,15 @@ struct scoutfs_btree_root {
struct scoutfs_btree_item {
struct scoutfs_avl_node node;
struct scoutfs_key key;
__le64 seq;
__le16 val_off;
__le16 val_len;
__u8 __pad[4];
__u8 flags;
__u8 __pad[3];
};
#define SCOUTFS_ITEM_FLAG_DELETION (1 << 0)
struct scoutfs_btree_block {
struct scoutfs_block_header hdr;
struct scoutfs_avl_root item_root;
@@ -445,6 +457,12 @@ struct scoutfs_srch_compact {
* XXX I imagine we should rename these now that they've evolved to track
* all the btrees that clients use during a transaction. It's not just
* about item logs, it's about clients making changes to trees.
*
* @get_trans_seq, @commit_trans_seq: This pair of sequence numbers
* determines if a transaction is currently open for the mount that owns
* the log_trees struct. get_trans_seq is advanced by the server as the
* transaction is opened. The server sets commit_trans_seq equal to
* get_ as the transaction is committed.
*/
struct scoutfs_log_trees {
struct scoutfs_alloc_list_head meta_avail;
@@ -456,7 +474,11 @@ struct scoutfs_log_trees {
struct scoutfs_srch_file srch_file;
__le64 data_alloc_zone_blocks;
__le64 data_alloc_zones[SCOUTFS_DATA_ALLOC_ZONE_LE64S];
__le64 inode_count_delta;
__le64 get_trans_seq;
__le64 commit_trans_seq;
__le64 max_item_seq;
__le64 finalize_seq;
__le64 rid;
__le64 nr;
__le64 flags;
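
Given the seq pairing described in the comment above, the open-transaction test reduces to a comparison; a hypothetical helper for illustration, not part of this diff:

static inline bool log_trees_trans_open(const struct scoutfs_log_trees *lt)
{
	/* the server advances get_ at open and copies it to commit_ at commit */
	return le64_to_cpu(lt->get_trans_seq) != le64_to_cpu(lt->commit_trans_seq);
}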
@@ -464,21 +486,8 @@ struct scoutfs_log_trees {
#define SCOUTFS_LOG_TREES_FINALIZED (1ULL << 0)
struct scoutfs_log_item_value {
__le64 seq;
__u8 flags;
__u8 __pad[7];
__u8 data[];
};
/*
* FS items are limited by the max btree value length with the log item
* value header.
*/
#define SCOUTFS_MAX_VAL_SIZE \
(SCOUTFS_BTREE_MAX_VAL_LEN - sizeof(struct scoutfs_log_item_value))
#define SCOUTFS_LOG_ITEM_FLAG_DELETION (1 << 0)
/* FS items are limited by the max btree value length */
#define SCOUTFS_MAX_VAL_SIZE SCOUTFS_BTREE_MAX_VAL_LEN
struct scoutfs_bloom_block {
struct scoutfs_block_header hdr;
@@ -508,7 +517,6 @@ struct scoutfs_log_merge_status {
struct scoutfs_key next_range_key;
__le64 nr_requests;
__le64 nr_complete;
__le64 last_seq;
__le64 seq;
};
@@ -525,7 +533,7 @@ struct scoutfs_log_merge_request {
struct scoutfs_btree_root root;
struct scoutfs_key start;
struct scoutfs_key end;
__le64 last_seq;
__le64 input_seq;
__le64 rid;
__le64 seq;
__le64 flags;
@@ -575,49 +583,48 @@ struct scoutfs_log_merge_freeing {
/*
* Keys are first sorted by major key zones.
*/
#define SCOUTFS_INODE_INDEX_ZONE 1
#define SCOUTFS_ORPHAN_ZONE 2
#define SCOUTFS_FS_ZONE 3
#define SCOUTFS_LOCK_ZONE 4
#define SCOUTFS_INODE_INDEX_ZONE 4
#define SCOUTFS_ORPHAN_ZONE 8
#define SCOUTFS_XATTR_TOTL_ZONE 12
#define SCOUTFS_FS_ZONE 16
#define SCOUTFS_LOCK_ZONE 20
/* Items only stored in server btrees */
#define SCOUTFS_LOG_TREES_ZONE 6
#define SCOUTFS_TRANS_SEQ_ZONE 7
#define SCOUTFS_MOUNTED_CLIENT_ZONE 8
#define SCOUTFS_SRCH_ZONE 9
#define SCOUTFS_FREE_EXTENT_BLKNO_ZONE 10
#define SCOUTFS_FREE_EXTENT_ORDER_ZONE 11
#define SCOUTFS_LOG_TREES_ZONE 24
#define SCOUTFS_MOUNTED_CLIENT_ZONE 28
#define SCOUTFS_SRCH_ZONE 32
#define SCOUTFS_FREE_EXTENT_BLKNO_ZONE 36
#define SCOUTFS_FREE_EXTENT_ORDER_ZONE 40
/* Items only stored in log merge server btrees */
#define SCOUTFS_LOG_MERGE_STATUS_ZONE 12
#define SCOUTFS_LOG_MERGE_RANGE_ZONE 13
#define SCOUTFS_LOG_MERGE_REQUEST_ZONE 14
#define SCOUTFS_LOG_MERGE_COMPLETE_ZONE 15
#define SCOUTFS_LOG_MERGE_FREEING_ZONE 16
#define SCOUTFS_LOG_MERGE_STATUS_ZONE 44
#define SCOUTFS_LOG_MERGE_RANGE_ZONE 48
#define SCOUTFS_LOG_MERGE_REQUEST_ZONE 52
#define SCOUTFS_LOG_MERGE_COMPLETE_ZONE 56
#define SCOUTFS_LOG_MERGE_FREEING_ZONE 60
/* inode index zone */
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 1
#define SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE 2
#define SCOUTFS_INODE_INDEX_NR 3 /* don't forget to update */
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 4
#define SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE 8
/* orphan zone, redundant type used for clarity */
#define SCOUTFS_ORPHAN_TYPE 1
#define SCOUTFS_ORPHAN_TYPE 4
/* fs zone */
#define SCOUTFS_INODE_TYPE 1
#define SCOUTFS_XATTR_TYPE 2
#define SCOUTFS_DIRENT_TYPE 3
#define SCOUTFS_READDIR_TYPE 4
#define SCOUTFS_LINK_BACKREF_TYPE 5
#define SCOUTFS_SYMLINK_TYPE 6
#define SCOUTFS_DATA_EXTENT_TYPE 7
#define SCOUTFS_INODE_TYPE 4
#define SCOUTFS_XATTR_TYPE 8
#define SCOUTFS_DIRENT_TYPE 12
#define SCOUTFS_READDIR_TYPE 16
#define SCOUTFS_LINK_BACKREF_TYPE 20
#define SCOUTFS_SYMLINK_TYPE 24
#define SCOUTFS_DATA_EXTENT_TYPE 28
/* lock zone, only ever found in lock ranges, never in persistent items */
#define SCOUTFS_RENAME_TYPE 1
#define SCOUTFS_RENAME_TYPE 4
/* srch zone, only in server btrees */
#define SCOUTFS_SRCH_LOG_TYPE 1
#define SCOUTFS_SRCH_BLOCKS_TYPE 2
#define SCOUTFS_SRCH_PENDING_TYPE 3
#define SCOUTFS_SRCH_BUSY_TYPE 4
#define SCOUTFS_SRCH_LOG_TYPE 4
#define SCOUTFS_SRCH_BLOCKS_TYPE 8
#define SCOUTFS_SRCH_PENDING_TYPE 12
#define SCOUTFS_SRCH_BUSY_TYPE 16
/* file data extents have start and len in key */
struct scoutfs_data_extent_val {
@@ -642,6 +649,17 @@ struct scoutfs_xattr {
__u8 name[];
};
/*
* .totl. xattrs are mapped to items. The dotted u64s in the xattr name
* map to the item key. The item value total is the sum of all the
* xattr values. The item value count records the number of xattrs
* contributing to the total and is used when combining logged items to
* determine if totals are being created or destroyed.
*/
struct scoutfs_xattr_totl_val {
__le64 total;
__le64 count;
};
/* XXX does this exist upstream somewhere? */
#define member_sizeof(TYPE, MEMBER) (sizeof(((TYPE *)0)->MEMBER))
@@ -725,7 +743,9 @@ enum {
struct scoutfs_quorum_block {
struct scoutfs_block_header hdr;
__le64 write_nr;
struct scoutfs_quorum_block_event {
__le64 write_nr;
__le64 rid;
__le64 term;
struct scoutfs_timespec ts;
@@ -773,17 +793,14 @@ struct scoutfs_volume_options {
struct scoutfs_super_block {
struct scoutfs_block_header hdr;
__le64 id;
__le64 version;
__le64 fmt_vers;
__le64 flags;
__u8 uuid[SCOUTFS_UUID_BYTES];
__le64 seq;
__le64 next_ino;
__le64 inode_count;
__le64 total_meta_blocks; /* both static and dynamic */
__le64 first_meta_blkno; /* first dynamically allocated */
__le64 last_meta_blkno;
__le64 total_data_blocks;
__le64 first_data_blkno;
__le64 last_data_blkno;
struct scoutfs_quorum_config qconf;
struct scoutfs_alloc_root meta_alloc[2];
struct scoutfs_alloc_root data_alloc;
@@ -792,7 +809,6 @@ struct scoutfs_super_block {
struct scoutfs_btree_root fs_root;
struct scoutfs_btree_root logs_root;
struct scoutfs_btree_root log_merge;
struct scoutfs_btree_root trans_seqs;
struct scoutfs_btree_root mounted_clients;
struct scoutfs_btree_root srch_root;
struct scoutfs_volume_options volopt;
@@ -819,13 +835,6 @@ struct scoutfs_super_block {
*
* @offline_blocks: The number of fixed 4k blocks that could be made
* online by staging.
*
* XXX
* - otime?
* - compat flags?
* - version?
* - generation?
* - be more careful with rdev?
*/
struct scoutfs_inode {
__le64 size;
@@ -836,6 +845,7 @@ struct scoutfs_inode {
__le64 offline_blocks;
__le64 next_readdir_pos;
__le64 next_xattr_id;
__le64 version;
__le32 nlink;
__le32 uid;
__le32 gid;
@@ -845,6 +855,7 @@ struct scoutfs_inode {
struct scoutfs_timespec atime;
struct scoutfs_timespec ctime;
struct scoutfs_timespec mtime;
struct scoutfs_timespec crtime;
};
#define SCOUTFS_INO_FLAG_TRUNCATE 0x1
@@ -896,6 +907,7 @@ enum scoutfs_dentry_type {
#define SCOUTFS_XATTR_MAX_NAME_LEN 255
#define SCOUTFS_XATTR_MAX_VAL_LEN 65535
#define SCOUTFS_XATTR_MAX_PART_SIZE SCOUTFS_MAX_VAL_SIZE
#define SCOUTFS_XATTR_MAX_TOTL_U64 23 /* octal U64_MAX */
#define SCOUTFS_XATTR_NR_PARTS(name_len, val_len) \
DIV_ROUND_UP(sizeof(struct scoutfs_xattr) + name_len + val_len, \
@@ -926,7 +938,7 @@ enum scoutfs_dentry_type {
*/
struct scoutfs_net_greeting {
__le64 fsid;
__le64 version;
__le64 fmt_vers;
__le64 server_term;
__le64 rid;
__le64 flags;
@@ -957,7 +969,6 @@ struct scoutfs_net_greeting {
* response messages.
*/
struct scoutfs_net_header {
__le64 clock_sync_id;
__le64 seq;
__le64 recv_seq;
__le64 id;
@@ -977,8 +988,8 @@ enum scoutfs_net_cmd {
SCOUTFS_NET_CMD_ALLOC_INODES,
SCOUTFS_NET_CMD_GET_LOG_TREES,
SCOUTFS_NET_CMD_COMMIT_LOG_TREES,
SCOUTFS_NET_CMD_SYNC_LOG_TREES,
SCOUTFS_NET_CMD_GET_ROOTS,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
SCOUTFS_NET_CMD_GET_LAST_SEQ,
SCOUTFS_NET_CMD_LOCK,
SCOUTFS_NET_CMD_LOCK_RECOVER,
@@ -990,6 +1001,8 @@ enum scoutfs_net_cmd {
SCOUTFS_NET_CMD_GET_VOLOPT,
SCOUTFS_NET_CMD_SET_VOLOPT,
SCOUTFS_NET_CMD_CLEAR_VOLOPT,
SCOUTFS_NET_CMD_RESIZE_DEVICES,
SCOUTFS_NET_CMD_STATFS,
SCOUTFS_NET_CMD_FAREWELL,
SCOUTFS_NET_CMD_UNKNOWN,
};
@@ -1032,6 +1045,20 @@ struct scoutfs_net_roots {
struct scoutfs_btree_root srch_root;
};
struct scoutfs_net_resize_devices {
__le64 new_total_meta_blocks;
__le64 new_total_data_blocks;
};
struct scoutfs_net_statfs {
__u8 uuid[SCOUTFS_UUID_BYTES];
__le64 free_meta_blocks;
__le64 total_meta_blocks;
__le64 free_data_blocks;
__le64 total_data_blocks;
__le64 inode_count;
};
struct scoutfs_net_lock {
struct scoutfs_key key;
__le64 write_seq;
@@ -1058,6 +1085,7 @@ enum scoutfs_lock_trace {
SLT_INVALIDATE,
SLT_REQUEST,
SLT_RESPONSE,
SLT_NR,
};
/*


@@ -35,6 +35,7 @@
#include "cmp.h"
#include "omap.h"
#include "forest.h"
#include "btree.h"
/*
* XXX
@@ -59,7 +60,7 @@ struct inode_sb_info {
bool stopped;
spinlock_t writeback_lock;
struct rb_root writeback_inodes;
struct list_head writeback_list;
struct inode_allocator dir_ino_alloc;
struct inode_allocator ino_alloc;
@@ -68,6 +69,9 @@ struct inode_sb_info {
/* serialize multiple inode ->evict trying to delete same ino's items */
spinlock_t deleting_items_lock;
struct list_head deleting_items_list;
struct work_struct iput_work;
struct llist_head iput_llist;
};
#define DECLARE_INODE_SB_INFO(sb, name) \
@@ -92,9 +96,9 @@ static void scoutfs_inode_ctor(void *obj)
atomic64_set(&si->data_waitq.changed, 0);
init_waitqueue_head(&si->data_waitq.waitq);
init_rwsem(&si->xattr_rwsem);
RB_CLEAR_NODE(&si->writeback_node);
INIT_LIST_HEAD(&si->writeback_entry);
scoutfs_lock_init_coverage(&si->ino_lock_cov);
atomic_set(&si->inv_iput_count, 0);
atomic_set(&si->iput_count, 0);
inode_init_once(&si->inode);
}
@@ -118,47 +122,14 @@ static void scoutfs_i_callback(struct rcu_head *head)
kmem_cache_free(scoutfs_inode_cachep, SCOUTFS_I(inode));
}
static void insert_writeback_inode(struct inode_sb_info *inf,
struct scoutfs_inode_info *ins)
{
struct rb_root *root = &inf->writeback_inodes;
struct rb_node **node = &root->rb_node;
struct rb_node *parent = NULL;
struct scoutfs_inode_info *si;
while (*node) {
parent = *node;
si = container_of(*node, struct scoutfs_inode_info,
writeback_node);
if (ins->ino < si->ino)
node = &(*node)->rb_left;
else if (ins->ino > si->ino)
node = &(*node)->rb_right;
else
BUG();
}
rb_link_node(&ins->writeback_node, parent, node);
rb_insert_color(&ins->writeback_node, root);
}
static void remove_writeback_inode(struct inode_sb_info *inf,
struct scoutfs_inode_info *si)
{
if (!RB_EMPTY_NODE(&si->writeback_node)) {
rb_erase(&si->writeback_node, &inf->writeback_inodes);
RB_CLEAR_NODE(&si->writeback_node);
}
}
void scoutfs_destroy_inode(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
spin_lock(&inf->writeback_lock);
remove_writeback_inode(inf, SCOUTFS_I(inode));
if (!list_empty(&si->writeback_entry))
list_del_init(&si->writeback_entry);
spin_unlock(&inf->writeback_lock);
scoutfs_lock_del_coverage(inode->i_sb, &si->ino_lock_cov);
@@ -215,6 +186,37 @@ static void set_inode_ops(struct inode *inode)
mapping_set_gfp_mask(inode->i_mapping, GFP_USER);
}
static unsigned int item_index_arr_ind(u8 type)
{
switch (type) {
case SCOUTFS_INODE_INDEX_META_SEQ_TYPE: return 0;
case SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE: return 1;
/* should never get here, we control callers, not untrusted data */
default: BUG();
}
}
static void set_item_major(struct scoutfs_inode_info *si, u8 type, __le64 maj)
{
unsigned int ind = item_index_arr_ind(type);
si->item_majors[ind] = le64_to_cpu(maj);
}
static u64 get_item_major(struct scoutfs_inode_info *si, u8 type)
{
unsigned int ind = item_index_arr_ind(type);
return si->item_majors[ind];
}
static u64 get_item_minor(struct scoutfs_inode_info *si, u8 type)
{
unsigned int ind = item_index_arr_ind(type);
return si->item_minors[ind];
}
/*
* The caller has ensured that the fields in the incoming scoutfs inode
* reflect both the inode item and the inode index items. This happens
@@ -231,10 +233,8 @@ static void set_item_info(struct scoutfs_inode_info *si,
memset(si->item_minors, 0, sizeof(si->item_minors));
si->have_item = true;
si->item_majors[SCOUTFS_INODE_INDEX_META_SEQ_TYPE] =
le64_to_cpu(sinode->meta_seq);
si->item_majors[SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE] =
le64_to_cpu(sinode->data_seq);
set_item_major(si, SCOUTFS_INODE_INDEX_META_SEQ_TYPE, sinode->meta_seq);
set_item_major(si, SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE, sinode->data_seq);
}
static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
@@ -242,6 +242,7 @@ static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
i_size_write(inode, le64_to_cpu(cinode->size));
inode->i_version = le64_to_cpu(cinode->version);
set_nlink(inode, le32_to_cpu(cinode->nlink));
i_uid_write(inode, le32_to_cpu(cinode->uid));
i_gid_write(inode, le32_to_cpu(cinode->gid));
@@ -262,6 +263,8 @@ static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
si->next_readdir_pos = le64_to_cpu(cinode->next_readdir_pos);
si->next_xattr_id = le64_to_cpu(cinode->next_xattr_id);
si->flags = le32_to_cpu(cinode->flags);
si->crtime.tv_sec = le64_to_cpu(cinode->crtime.sec);
si->crtime.tv_nsec = le32_to_cpu(cinode->crtime.nsec);
/*
* i_blocks is initialized from online and offline and is then
@@ -374,6 +377,7 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
if (truncate)
si->flags |= SCOUTFS_INO_FLAG_TRUNCATE;
scoutfs_inode_set_data_seq(inode);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
scoutfs_release_trans(sb);
@@ -509,6 +513,7 @@ retry:
goto out;
setattr_copy(inode, attr);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
scoutfs_release_trans(sb);
@@ -692,14 +697,14 @@ struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino)
return ilookup5(sb, ino, scoutfs_iget_test, &ino);
}
struct inode *scoutfs_iget(struct super_block *sb, u64 ino)
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf)
{
struct scoutfs_lock *lock = NULL;
struct scoutfs_inode_info *si;
struct inode *inode;
int ret;
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, ino, &lock);
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, lkf, ino, &lock);
if (ret)
return ERR_PTR(ret);
@@ -714,6 +719,7 @@ struct inode *scoutfs_iget(struct super_block *sb, u64 ino)
/* XXX ensure refresh, instead clear in drop_inode? */
si = SCOUTFS_I(inode);
atomic64_set(&si->last_refreshed, 0);
inode->i_version = 0;
ret = scoutfs_inode_refresh(inode, lock, 0);
if (ret == 0)
@@ -741,6 +747,7 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
scoutfs_inode_get_onoff(inode, &online_blocks, &offline_blocks);
cinode->size = cpu_to_le64(i_size_read(inode));
cinode->version = cpu_to_le64(inode->i_version);
cinode->nlink = cpu_to_le32(inode->i_nlink);
cinode->uid = cpu_to_le32(i_uid_read(inode));
cinode->gid = cpu_to_le32(i_gid_read(inode));
@@ -764,6 +771,9 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
cinode->next_readdir_pos = cpu_to_le64(si->next_readdir_pos);
cinode->next_xattr_id = cpu_to_le64(si->next_xattr_id);
cinode->flags = cpu_to_le32(si->flags);
cinode->crtime.sec = cpu_to_le64(si->crtime.tv_sec);
cinode->crtime.nsec = cpu_to_le32(si->crtime.tv_nsec);
memset(cinode->crtime.__pad, 0, sizeof(cinode->crtime.__pad));
}
/*
@@ -814,16 +824,14 @@ static bool will_del_index(struct scoutfs_inode_info *si,
u8 type, u64 major, u32 minor)
{
return si && si->have_item &&
(si->item_majors[type] != major ||
si->item_minors[type] != minor);
(get_item_major(si, type) != major || get_item_minor(si, type) != minor);
}
static bool will_ins_index(struct scoutfs_inode_info *si,
u8 type, u64 major, u32 minor)
{
return !si || !si->have_item ||
(si->item_majors[type] != major ||
si->item_minors[type] != minor);
(get_item_major(si, type) != major || get_item_minor(si, type) != minor);
}
static bool inode_has_index(umode_t mode, u8 type)
@@ -931,14 +939,14 @@ static int update_index_items(struct super_block *sb,
if (ret || !will_del_index(si, type, major, minor))
return ret;
trace_scoutfs_delete_index_item(sb, type, si->item_majors[type],
si->item_minors[type], ino);
trace_scoutfs_delete_index_item(sb, type, get_item_major(si, type),
get_item_minor(si, type), ino);
scoutfs_inode_init_index_key(&del, type, si->item_majors[type],
si->item_minors[type], ino);
scoutfs_inode_init_index_key(&del, type, get_item_major(si, type),
get_item_minor(si, type), ino);
del_lock = find_index_lock(lock_list, type, si->item_majors[type],
si->item_minors[type], ino);
del_lock = find_index_lock(lock_list, type, get_item_major(si, type),
get_item_minor(si, type), ino);
ret = scoutfs_item_delete_force(sb, &del, del_lock);
if (ret) {
err = scoutfs_item_delete(sb, &ins, ins_lock);
@@ -1075,8 +1083,8 @@ static int prepare_index_items(struct scoutfs_inode_info *si,
}
if (will_del_index(si, type, major, minor)) {
ret = add_index_lock(list, ino, type, si->item_majors[type],
si->item_minors[type]);
ret = add_index_lock(list, ino, type, get_item_major(si, type),
get_item_minor(si, type));
if (ret)
return ret;
}
@@ -1095,7 +1103,7 @@ static u64 upd_data_seq(struct scoutfs_sb_info *sbi,
if (!si || !si->have_item || set_data_seq)
return sbi->trans_seq;
return si->item_majors[SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE];
return get_item_major(si, SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE);
}
/*
@@ -1319,22 +1327,6 @@ static int remove_index_items(struct super_block *sb, u64 ino,
return ret;
}
/*
* A quick atomic sample of the last inode number that's been allocated.
*/
u64 scoutfs_last_ino(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
u64 last;
spin_lock(&sbi->next_ino_lock);
last = le64_to_cpu(super->next_ino);
spin_unlock(&sbi->next_ino_lock);
return last;
}
/*
* Return an allocated and unused inode number. Returns -ENOSPC if
* we're out of inodes.
@@ -1614,6 +1606,8 @@ retry:
goto out;
ret = scoutfs_inode_orphan_delete(sb, ino, orph_lock);
if (ret == 0)
scoutfs_forest_dec_inode_count(sb);
out:
del_deleting_ino(inf, &del);
if (release)
@@ -1657,11 +1651,6 @@ void scoutfs_evict_inode(struct inode *inode)
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
}
if (ret == -ERESTARTSYS) {
/* can be in task with pending, could be found as orphan */
scoutfs_inc_counter(sb, inode_evict_intr);
ret = 0;
}
if (ret < 0) {
scoutfs_err(sb, "error %d while checking to delete inode nr %llu, it might linger.",
ret, ino);
@@ -1699,6 +1688,49 @@ int scoutfs_drop_inode(struct inode *inode)
generic_drop_inode(inode);
}
static void iput_worker(struct work_struct *work)
{
struct inode_sb_info *inf = container_of(work, struct inode_sb_info, iput_work);
struct scoutfs_inode_info *si;
struct scoutfs_inode_info *tmp;
struct llist_node *inodes;
bool more;
inodes = llist_del_all(&inf->iput_llist);
llist_for_each_entry_safe(si, tmp, inodes, iput_llnode) {
do {
more = atomic_dec_return(&si->iput_count) > 0;
iput(&si->inode);
} while (more);
}
}
/*
* Final iput can get into evict and perform final inode deletion which
* can delete a lot of items spanning multiple cluster locks and
* transactions. It should be understood as a heavy high level
* operation, more like file writing and less like dropping a refcount.
*
* Unfortunately we also have incentives to use igrab/iput from internal
* contexts that have no business doing that work, like lock
* invalidation or dirty inode writeback during transaction commit.
*
* In those cases we can kick iput off to background work context.
* Nothing stops multiple puts of an inode before the work runs so we
* can track multiple puts in flight.
*/
void scoutfs_inode_queue_iput(struct inode *inode)
{
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
if (atomic_inc_return(&si->iput_count) == 1)
llist_add(&si->iput_llnode, &inf->iput_llist);
smp_wmb(); /* count and list visible before work executes */
schedule_work(&inf->iput_work);
}
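To make the counting scheme concrete, here is a minimal userspace analogue of the queue/drain pattern above, using C11 atomics in place of the kernel's atomic_t and llist; every name in it is illustrative rather than taken from scoutfs. It demonstrates the invariant that only the put which raises the count from zero links the node, while the worker performs one iput per counted put, including puts that race in after linking.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int put_count;			/* stands in for si->iput_count */

/* analogue of scoutfs_inode_queue_iput(): only the first putter links */
static void queue_put(void)
{
	if (atomic_fetch_add(&put_count, 1) == 0)
		printf("linked node, scheduled work\n");
}

/* analogue of iput_worker(): drop every reference that was counted */
static void drain_puts(void)
{
	bool more;

	do {
		more = atomic_fetch_sub(&put_count, 1) - 1 > 0;
		printf("iput\n");
	} while (more);
}

int main(void)
{
	queue_put();	/* links and schedules */
	queue_put();	/* races in before the work runs; only counted */
	drain_puts();	/* performs both puts */
	return 0;
}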
/*
* All mounts are performing this work concurrently. We introduce
* significant jitter between them to try and keep them from all
@@ -1727,15 +1759,20 @@ static void schedule_orphan_dwork(struct inode_sb_info *inf)
* the cached inodes pinning the inode fail to delete as they are
* evicted from the cache -- either through crashing or errors.
*
* This work runs in all mounts in the background looking for orphaned
* inodes that should be deleted.
* This work runs in all mounts in the background looking for those
* orphaned inodes that weren't fully deleted.
*
* We use the forest hint call to read the persistent forest trees
* looking for orphan items without creating lock contention. Orphan
* items exist for O_TMPFILE users and we don't want to force them to
* commit by trying to acquire a conflicting read lock on the orphan zone.
* There's no rush to reclaim deleted items, eventually they will be
* found in the persistent item btrees.
* First, we search for items in the current persistent fs root. We'll
* only find orphan items that made it to the fs root after being merged
* from a mount's log btree. This naturally avoids orphan items that
* exist while inodes have been unlinked but are still cached, including
* O_TMPFILE inodes that are actively used during normal operations.
* Scanning the read-only persistent fs root uses cached blocks and
* avoids the lock contention we'd cause if we tried to use the
* consistent item cache. The downside is that it adds a bit of
* latency. If an orphan was created in error it'll take until the
* mount's log btree is finalized and merged. A crash will have the log
* btree merged after it is fenced.
*
* Once we find candidate orphan items we can first check our local
* inode cache for inodes that are already on their way to eviction and
@@ -1743,10 +1780,6 @@ static void schedule_orphan_dwork(struct inode_sb_info *inf)
* the inode. Only if we don't have it cached, and no one else does, do
* we try and read it into our cache and evict it to trigger the final
* inode deletion process.
*
* Orphaned items that make it that far should be very rare. They can
* only exist if all the mounts that were using an inode after it had
* been unlinked (or created with o_tmpfile) didn't unmount cleanly.
*/
static void inode_orphan_scan_worker(struct work_struct *work)
{
@@ -1754,8 +1787,9 @@ static void inode_orphan_scan_worker(struct work_struct *work)
orphan_scan_dwork.work);
struct super_block *sb = inf->sb;
struct scoutfs_open_ino_map omap;
struct scoutfs_net_roots roots;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key last;
struct scoutfs_key next;
struct scoutfs_key key;
struct inode *inode;
u64 group_nr;
@@ -1768,6 +1802,10 @@ static void inode_orphan_scan_worker(struct work_struct *work)
init_orphan_key(&last, U64_MAX);
omap.args.group_nr = cpu_to_le64(U64_MAX);
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
for (ino = SCOUTFS_ROOT_INO + 1; ino != 0; ino++) {
if (inf->stopped) {
ret = 0;
@@ -1776,18 +1814,21 @@ static void inode_orphan_scan_worker(struct work_struct *work)
/* find the next orphan item */
init_orphan_key(&key, ino);
ret = scoutfs_forest_next_hint(sb, &key, &next);
ret = scoutfs_btree_next(sb, &roots.fs_root, &key, &iref);
if (ret < 0) {
if (ret == -ENOENT)
break;
goto out;
}
if (scoutfs_key_compare(&next, &last) > 0)
key = *iref.key;
scoutfs_btree_put_iref(&iref);
if (scoutfs_key_compare(&key, &last) > 0)
break;
scoutfs_inc_counter(sb, orphan_scan_item);
ino = le64_to_cpu(next.sko_ino);
ino = le64_to_cpu(key.sko_ino);
/* locally cached inodes will already be deleted */
inode = scoutfs_ilookup(sb, ino);
@@ -1814,7 +1855,7 @@ static void inode_orphan_scan_worker(struct work_struct *work)
}
/* try to cache and evict an unused inode to delete, can be racing */
inode = scoutfs_iget(sb, ino);
inode = scoutfs_iget(sb, ino, 0);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
if (ret == -ENOENT)
@@ -1843,30 +1884,33 @@ out:
* ourselves in knots trying to call through the high level vfs sync
* methods.
*
* File data block allocations tend to advance through free space so we
* add the inode to the end of the list to roughly encourage sequential
* IO.
*
* This is called by writers who hold the inode and transaction. The
* inode's presence in the rbtree is removed by destroy_inode, prevented
* by the inode hold, and by committing the transaction, which is
* prevented by holding the transaction. The inode can only go from
* empty to on the rbtree while we're here.
* inode is removed from the list by evict->destroy if it's unlinked
* during the transaction or by committing the transaction. Pruning the
* icache won't try to evict the inode as long as it has dirty buffers.
*/
void scoutfs_inode_queue_writeback(struct inode *inode)
{
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
if (RB_EMPTY_NODE(&si->writeback_node)) {
if (list_empty(&si->writeback_entry)) {
spin_lock(&inf->writeback_lock);
if (RB_EMPTY_NODE(&si->writeback_node))
insert_writeback_inode(inf, si);
if (list_empty(&si->writeback_entry))
list_add_tail(&si->writeback_entry, &inf->writeback_list);
spin_unlock(&inf->writeback_lock);
}
}
/*
* Walk our dirty inodes in ino order and either start dirty page
* writeback or wait for writeback to complete.
* Walk our dirty inodes and either start dirty page writeback or wait
* for writeback to complete.
*
* This is called by transaction commiting so other writers are
* This is called by transaction committing so other writers are
* excluded. We're still very careful to iterate over the tree while it
* and the inodes could be changing.
*
@@ -1879,29 +1923,19 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
{
DECLARE_INODE_SB_INFO(sb, inf);
struct scoutfs_inode_info *si;
struct rb_node *node;
struct scoutfs_inode_info *tmp;
struct inode *inode;
struct inode *defer_iput = NULL;
int ret;
spin_lock(&inf->writeback_lock);
node = rb_first(&inf->writeback_inodes);
while (node) {
si = container_of(node, struct scoutfs_inode_info,
writeback_node);
node = rb_next(node);
list_for_each_entry_safe(si, tmp, &inf->writeback_list, writeback_entry) {
inode = igrab(&si->inode);
if (!inode)
continue;
spin_unlock(&inf->writeback_lock);
if (defer_iput) {
iput(defer_iput);
defer_iput = NULL;
}
if (write)
ret = filemap_fdatawrite(inode->i_mapping);
else
@@ -1909,28 +1943,28 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
trace_scoutfs_inode_walk_writeback(sb, scoutfs_ino(inode),
write, ret);
if (ret) {
iput(inode);
scoutfs_inode_queue_iput(inode);
goto out;
}
spin_lock(&inf->writeback_lock);
if (WARN_ON_ONCE(RB_EMPTY_NODE(&si->writeback_node)))
node = rb_first(&inf->writeback_inodes);
/* restore tmp after reacquiring lock */
if (WARN_ON_ONCE(list_empty(&si->writeback_entry)))
tmp = list_first_entry(&inf->writeback_list, struct scoutfs_inode_info,
writeback_entry);
else
node = rb_next(&si->writeback_node);
tmp = list_next_entry(si, writeback_entry);
if (!write)
remove_writeback_inode(inf, si);
list_del_init(&si->writeback_entry);
/* avoid iput->destroy lock deadlock */
defer_iput = inode;
scoutfs_inode_queue_iput(inode);
}
spin_unlock(&inf->writeback_lock);
out:
if (defer_iput)
iput(defer_iput);
return ret;
}
@@ -1945,12 +1979,14 @@ int scoutfs_inode_setup(struct super_block *sb)
inf->sb = sb;
spin_lock_init(&inf->writeback_lock);
inf->writeback_inodes = RB_ROOT;
INIT_LIST_HEAD(&inf->writeback_list);
spin_lock_init(&inf->dir_ino_alloc.lock);
spin_lock_init(&inf->ino_alloc.lock);
INIT_DELAYED_WORK(&inf->orphan_scan_dwork, inode_orphan_scan_worker);
spin_lock_init(&inf->deleting_items_lock);
INIT_LIST_HEAD(&inf->deleting_items_list);
INIT_WORK(&inf->iput_work, iput_worker);
init_llist_head(&inf->iput_llist);
sbi->inode_sb_info = inf;
@@ -1962,15 +1998,18 @@ int scoutfs_inode_setup(struct super_block *sb)
* many other subsystems like networking and the server. We only kick
* it off once everything is ready.
*/
int scoutfs_inode_start(struct super_block *sb)
void scoutfs_inode_start(struct super_block *sb)
{
DECLARE_INODE_SB_INFO(sb, inf);
schedule_orphan_dwork(inf);
return 0;
}
void scoutfs_inode_stop(struct super_block *sb)
/*
* Orphan scanning can instantiate inodes. We shut it down before
* calling into the vfs to tear down dentries and inodes during unmount.
*/
void scoutfs_inode_orphan_stop(struct super_block *sb)
{
DECLARE_INODE_SB_INFO(sb, inf);
@@ -1980,6 +2019,14 @@ void scoutfs_inode_stop(struct super_block *sb)
}
}
void scoutfs_inode_flush_iput(struct super_block *sb)
{
DECLARE_INODE_SB_INFO(sb, inf);
if (inf)
flush_work(&inf->iput_work);
}
void scoutfs_inode_destroy(struct super_block *sb)
{
struct inode_sb_info *inf = SCOUTFS_SB(sb)->inode_sb_info;


@@ -9,6 +9,8 @@
struct scoutfs_lock;
#define SCOUTFS_INODE_NR_INDICES 2
struct scoutfs_inode_info {
/* read or initialized for each inode instance */
u64 ino;
@@ -20,6 +22,7 @@ struct scoutfs_inode_info {
u64 online_blocks;
u64 offline_blocks;
u32 flags;
struct timespec crtime;
/*
* Protects per-inode extent items, most particularly readers
@@ -37,8 +40,8 @@ struct scoutfs_inode_info {
*/
struct mutex item_mutex;
bool have_item;
u64 item_majors[SCOUTFS_INODE_INDEX_NR];
u32 item_minors[SCOUTFS_INODE_INDEX_NR];
u64 item_majors[SCOUTFS_INODE_NR_INDICES];
u32 item_minors[SCOUTFS_INODE_NR_INDICES];
/* updated at on each new lock acquisition */
atomic64_t last_refreshed;
@@ -49,14 +52,14 @@ struct scoutfs_inode_info {
struct scoutfs_per_task pt_data_lock;
struct scoutfs_data_waitq data_waitq;
struct rw_semaphore xattr_rwsem;
struct rb_node writeback_node;
struct list_head writeback_entry;
struct scoutfs_lock_coverage ino_lock_cov;
/* drop if i_count hits 0, allows drop while invalidate holds coverage */
bool drop_invalidated;
struct llist_node inv_iput_llnode;
atomic_t inv_iput_count;
struct llist_node iput_llnode;
atomic_t iput_count;
struct inode inode;
};
@@ -75,8 +78,9 @@ struct inode *scoutfs_alloc_inode(struct super_block *sb);
void scoutfs_destroy_inode(struct inode *inode);
int scoutfs_drop_inode(struct inode *inode);
void scoutfs_evict_inode(struct inode *inode);
void scoutfs_inode_queue_iput(struct inode *inode);
struct inode *scoutfs_iget(struct super_block *sb, u64 ino);
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf);
struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino);
void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
@@ -125,14 +129,13 @@ int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_
void scoutfs_inode_queue_writeback(struct inode *inode);
int scoutfs_inode_walk_writeback(struct super_block *sb, bool write);
u64 scoutfs_last_ino(struct super_block *sb);
void scoutfs_inode_exit(void);
int scoutfs_inode_init(void);
int scoutfs_inode_setup(struct super_block *sb);
int scoutfs_inode_start(struct super_block *sb);
void scoutfs_inode_stop(struct super_block *sb);
void scoutfs_inode_start(struct super_block *sb);
void scoutfs_inode_orphan_stop(struct super_block *sb);
void scoutfs_inode_flush_iput(struct super_block *sb);
void scoutfs_inode_destroy(struct super_block *sb);
#endif


@@ -21,6 +21,7 @@
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/aio.h>
#include <linux/list_sort.h>
#include "format.h"
#include "key.h"
@@ -39,6 +40,7 @@
#include "srch.h"
#include "alloc.h"
#include "server.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
@@ -541,19 +543,17 @@ out:
static long scoutfs_ioc_stat_more(struct file *file, unsigned long arg)
{
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct scoutfs_ioctl_stat_more stm;
if (get_user(stm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
stm.valid_bytes = min_t(u64, stm.valid_bytes,
sizeof(struct scoutfs_ioctl_stat_more));
stm.meta_seq = scoutfs_inode_meta_seq(inode);
stm.data_seq = scoutfs_inode_data_seq(inode);
stm.data_version = scoutfs_inode_data_version(inode);
scoutfs_inode_get_onoff(inode, &stm.online_blocks, &stm.offline_blocks);
stm.crtime_sec = si->crtime.tv_sec;
stm.crtime_nsec = si->crtime.tv_nsec;
if (copy_to_user((void __user *)arg, &stm, stm.valid_bytes))
if (copy_to_user((void __user *)arg, &stm, sizeof(stm)))
return -EFAULT;
return 0;
@@ -617,6 +617,7 @@ static long scoutfs_ioc_data_waiting(struct file *file, unsigned long arg)
static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
{
struct inode *inode = file->f_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_ioctl_setattr_more __user *usm = (void __user *)arg;
struct scoutfs_ioctl_setattr_more sm;
@@ -685,6 +686,8 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
i_size_write(inode, sm.i_size);
inode->i_ctime.tv_sec = sm.ctime_sec;
inode->i_ctime.tv_nsec = sm.ctime_nsec;
si->crtime.tv_sec = sm.crtime_sec;
si->crtime.tv_nsec = sm.crtime_nsec;
scoutfs_update_inode_item(inode, lock, &ind_locks);
ret = 0;
@@ -867,15 +870,18 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_super_block *super;
struct scoutfs_ioctl_statfs_more sfm;
int ret;
if (get_user(sfm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super)
return -ENOMEM;
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
sfm.valid_bytes = min_t(u64, sfm.valid_bytes,
sizeof(struct scoutfs_ioctl_statfs_more));
sfm.fsid = le64_to_cpu(super->hdr.fsid);
sfm.rid = sbi->rid;
sfm.total_meta_blocks = le64_to_cpu(super->total_meta_blocks);
@@ -884,12 +890,15 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
ret = scoutfs_client_get_last_seq(sb, &sfm.committed_seq);
if (ret)
return ret;
goto out;
if (copy_to_user((void __user *)arg, &sfm, sfm.valid_bytes))
return -EFAULT;
return 0;
if (copy_to_user((void __user *)arg, &sfm, sizeof(sfm)))
ret = -EFAULT;
else
ret = 0;
out:
kfree(super);
return ret;
}
struct copy_alloc_detail_args {
@@ -993,6 +1002,324 @@ out:
return ret;
}
static long scoutfs_ioc_resize_devices(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_resize_devices __user *urd = (void __user *)arg;
struct scoutfs_ioctl_resize_devices rd;
struct scoutfs_net_resize_devices nrd;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&rd, urd, sizeof(rd))) {
ret = -EFAULT;
goto out;
}
nrd.new_total_meta_blocks = cpu_to_le64(rd.new_total_meta_blocks);
nrd.new_total_data_blocks = cpu_to_le64(rd.new_total_data_blocks);
ret = scoutfs_client_resize_devices(sb, &nrd);
out:
return ret;
}
struct xattr_total_entry {
struct rb_node node;
struct scoutfs_ioctl_xattr_total xt;
u64 fs_seq;
u64 fs_total;
u64 fs_count;
u64 fin_seq;
u64 fin_total;
s64 fin_count;
u64 log_seq;
u64 log_total;
s64 log_count;
};
static int cmp_xt_entry_name(const struct xattr_total_entry *a,
const struct xattr_total_entry *b)
{
return scoutfs_cmp_u64s(a->xt.name[0], b->xt.name[0]) ?:
scoutfs_cmp_u64s(a->xt.name[1], b->xt.name[1]) ?:
scoutfs_cmp_u64s(a->xt.name[2], b->xt.name[2]);
}
/*
* Record the contribution of the three classes of logged items we can
* see: the item in the fs_root, items from finalized log btrees, and
* items from active log btrees. Once we have the full set the caller
* can decide which of the items contribute to the total it sends to the
* user.
*/
static int read_xattr_total_item(struct super_block *sb, struct scoutfs_key *key,
u64 seq, u8 flags, void *val, int val_len, int fic, void *arg)
{
struct scoutfs_xattr_totl_val *tval = val;
struct xattr_total_entry *ent;
struct xattr_total_entry rd;
struct rb_root *root = arg;
struct rb_node *parent;
struct rb_node **node;
int cmp;
rd.xt.name[0] = le64_to_cpu(key->skxt_a);
rd.xt.name[1] = le64_to_cpu(key->skxt_b);
rd.xt.name[2] = le64_to_cpu(key->skxt_c);
/* find entry matching name */
node = &root->rb_node;
parent = NULL;
cmp = -1;
while (*node) {
parent = *node;
ent = container_of(*node, struct xattr_total_entry, node);
/* sort merge items by key then newest to oldest */
cmp = cmp_xt_entry_name(&rd, ent);
if (cmp < 0)
node = &(*node)->rb_left;
else if (cmp > 0)
node = &(*node)->rb_right;
else
break;
}
/* allocate and insert new node if we need to */
if (cmp != 0) {
ent = kzalloc(sizeof(*ent), GFP_KERNEL);
if (!ent)
return -ENOMEM;
memcpy(&ent->xt.name, &rd.xt.name, sizeof(ent->xt.name));
rb_link_node(&ent->node, parent, node);
rb_insert_color(&ent->node, root);
}
if (fic & FIC_FS_ROOT) {
ent->fs_seq = seq;
ent->fs_total = le64_to_cpu(tval->total);
ent->fs_count = le64_to_cpu(tval->count);
} else if (fic & FIC_FINALIZED) {
ent->fin_seq = seq;
ent->fin_total += le64_to_cpu(tval->total);
ent->fin_count += le64_to_cpu(tval->count);
} else {
ent->log_seq = seq;
ent->log_total += le64_to_cpu(tval->total);
ent->log_count += le64_to_cpu(tval->count);
}
scoutfs_inc_counter(sb, totl_read_item);
return 0;
}
/* these are always _safe, node stores next */
#define for_each_xt_ent(ent, node, root) \
for (node = rb_first(root); \
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
node = rb_next(node), 1); )
#define for_each_xt_ent_reverse(ent, node, root) \
for (node = rb_last(root); \
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
node = rb_prev(node), 1); )
static void free_xt_ent(struct rb_root *root, struct xattr_total_entry *ent)
{
rb_erase(&ent->node, root);
kfree(ent);
}
static void free_all_xt_ents(struct rb_root *root)
{
struct xattr_total_entry *ent;
struct rb_node *node;
for_each_xt_ent(ent, node, root)
free_xt_ent(root, ent);
}
/*
* Starting from the caller's pos_name, copy the names, totals, and
* counts for the .totl. tagged xattrs in the system sorted by their
* name until the user's buffer is full. This only sees xattrs that
* have been committed. It doesn't use locking to force commits and
* block writers so it can be a little bit out of date with respect to
* dirty xattrs in memory across the system.
*
* Our reader has to be careful because the log btree merging code can
* write partial results to the fs_root. This means that a reader can
* see both cases where new finalized logs should be applied to the old
* fs items and where old finalized logs have already been applied to
* the partially merged fs items. Currently active logged items are
* always applied on top of all cases.
*
* These cases are differentiated with a combination of sequence numbers
* in items, the count of contributing xattrs, and a flag
* differentiating finalized and active logged items. This lets us
* recognize all cases, including when finalized logs were merged and
* deleted the fs item.
*
* We're allocating a tracking struct for each totl name we see while
* traversing the item btrees. The forest reader is providing the items
* it finds in leaf blocks that contain the search key. In the worst
* case all of these blocks are full and none of the items overlap. At
* most, figure on the order of a thousand names per mount. But in practice many
* of these factors fall away: leaf blocks aren't full, leaf items
* overlap, there aren't finalized log btrees, and not all mounts are
* actively changing totals. We're much more likely to only read a
* leaf block's worth of totals that have been long since merged into
* the fs_root.
*/
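As a hypothetical worked example of those cases: suppose a name has an fs_root item with seq 10 and total 100, finalized log items summing to +5 at seq 12, and active log items summing to +2. Because fin_seq > fs_seq the finalized delta hasn't been merged yet and is applied, active log items always apply, and the reported total is 100 + 5 + 2 = 107. If the finalized logs had already been merged into the fs item (fin_seq <= fs_seq) only the active delta would apply, giving 102; and with no fs item at all the finalized delta only applies when its positive count shows it creating the item.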
static long scoutfs_ioc_read_xattr_totals(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_read_xattr_totals __user *urxt = (void __user *)arg;
struct scoutfs_ioctl_read_xattr_totals rxt;
struct scoutfs_ioctl_xattr_total __user *uxt;
struct xattr_total_entry *ent;
struct scoutfs_key key;
struct scoutfs_key bloom_key;
struct scoutfs_key start;
struct scoutfs_key end;
struct rb_root root = RB_ROOT;
struct rb_node *node;
int count = 0;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&rxt, urxt, sizeof(rxt))) {
ret = -EFAULT;
goto out;
}
uxt = (void __user *)rxt.totals_ptr;
if ((rxt.totals_ptr & (sizeof(__u64) - 1)) ||
(rxt.totals_bytes < sizeof(struct scoutfs_ioctl_xattr_total))) {
ret = -EINVAL;
goto out;
}
scoutfs_key_set_zeros(&bloom_key);
bloom_key.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_xattr_init_totl_key(&start, rxt.pos_name);
while (rxt.totals_bytes >= sizeof(struct scoutfs_ioctl_xattr_total)) {
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
if (scoutfs_key_compare(&start, &end) > 0)
break;
key = start;
ret = scoutfs_forest_read_items(sb, &key, &bloom_key, &start, &end,
read_xattr_total_item, &root);
if (ret < 0) {
if (ret == -ESTALE) {
free_all_xt_ents(&root);
continue;
}
goto out;
}
if (RB_EMPTY_ROOT(&root))
break;
/* trim totals that fall outside of the consistent range */
for_each_xt_ent(ent, node, &root) {
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
if (scoutfs_key_compare(&key, &start) < 0) {
free_xt_ent(&root, ent);
} else {
break;
}
}
for_each_xt_ent_reverse(ent, node, &root) {
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
if (scoutfs_key_compare(&key, &end) > 0) {
free_xt_ent(&root, ent);
} else {
break;
}
}
/* copy resulting unique non-zero totals to userspace */
for_each_xt_ent(ent, node, &root) {
if (rxt.totals_bytes < sizeof(ent->xt))
break;
/* start with the fs item if we have it */
if (ent->fs_seq != 0) {
ent->xt.total = ent->fs_total;
ent->xt.count = ent->fs_count;
scoutfs_inc_counter(sb, totl_read_fs);
}
/* apply finalized logs if they're newer or creating */
if (((ent->fs_seq != 0) && (ent->fin_seq > ent->fs_seq)) ||
((ent->fs_seq == 0) && (ent->fin_count > 0))) {
ent->xt.total += ent->fin_total;
ent->xt.count += ent->fin_count;
scoutfs_inc_counter(sb, totl_read_finalized);
}
/* always apply active logs which must be newer than fs and finalized */
if (ent->log_seq > 0) {
ent->xt.total += ent->log_total;
ent->xt.count += ent->log_count;
scoutfs_inc_counter(sb, totl_read_logged);
}
if (ent->xt.total != 0 || ent->xt.count != 0) {
if (copy_to_user(uxt, &ent->xt, sizeof(ent->xt))) {
ret = -EFAULT;
goto out;
}
uxt++;
rxt.totals_bytes -= sizeof(ent->xt);
count++;
scoutfs_inc_counter(sb, totl_read_copied);
}
free_xt_ent(&root, ent);
}
/* continue after the last possible key read */
start = end;
scoutfs_key_inc(&start);
}
ret = 0;
out:
free_all_xt_ents(&root);
return ret ?: count;
}
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
@@ -1022,6 +1349,10 @@ long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
return scoutfs_ioc_alloc_detail(file, arg);
case SCOUTFS_IOC_MOVE_BLOCKS:
return scoutfs_ioc_move_blocks(file, arg);
case SCOUTFS_IOC_RESIZE_DEVICES:
return scoutfs_ioc_resize_devices(file, arg);
case SCOUTFS_IOC_READ_XATTR_TOTALS:
return scoutfs_ioc_read_xattr_totals(file, arg);
}
return -ENOTTY;


@@ -13,8 +13,7 @@
* This is enforced by pahole scripting in external build environments.
*/
/* XXX I have no idea how these are chosen. */
#define SCOUTFS_IOCTL_MAGIC 's'
#define SCOUTFS_IOCTL_MAGIC 0xE8 /* arbitrarily chosen hole in ioctl-number.rst */
/*
* Packed scoutfs keys rarely cross the ioctl boundary so we have a
@@ -88,7 +87,7 @@ enum scoutfs_ino_walk_seq_type {
* Adds entries to the user's buffer for each inode that is found in the
* given index between the first and last positions.
*/
#define SCOUTFS_IOC_WALK_INODES _IOR(SCOUTFS_IOCTL_MAGIC, 1, \
#define SCOUTFS_IOC_WALK_INODES _IOW(SCOUTFS_IOCTL_MAGIC, 1, \
struct scoutfs_ioctl_walk_inodes)
/*
@@ -167,7 +166,7 @@ struct scoutfs_ioctl_ino_path_result {
};
/* Get a single path from the root to the given inode number */
#define SCOUTFS_IOC_INO_PATH _IOR(SCOUTFS_IOCTL_MAGIC, 2, \
#define SCOUTFS_IOC_INO_PATH _IOW(SCOUTFS_IOCTL_MAGIC, 2, \
struct scoutfs_ioctl_ino_path)
/*
@@ -215,23 +214,16 @@ struct scoutfs_ioctl_stage {
/*
* Give the user inode fields that are not otherwise visible. statx()
* isn't always available and xattrs are relatively expensive.
*
* @valid_bytes stores the number of bytes that are valid in the
* structure. The caller sets this to the size of the struct that they
* understand. The kernel then fills and copies back the min of the
* size they and the user caller understand. The user can tell if a
* field is set if all of its bytes are within the valid_bytes that the
* kernel set on return.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_stat_more {
__u64 valid_bytes;
__u64 meta_seq;
__u64 data_seq;
__u64 data_version;
__u64 online_blocks;
__u64 offline_blocks;
__u64 crtime_sec;
__u32 crtime_nsec;
__u8 _pad[4];
};
#define SCOUTFS_IOC_STAT_MORE _IOR(SCOUTFS_IOCTL_MAGIC, 5, \
@@ -261,13 +253,14 @@ struct scoutfs_ioctl_data_waiting {
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U64_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING _IOR(SCOUTFS_IOCTL_MAGIC, 6, \
#define SCOUTFS_IOC_DATA_WAITING _IOW(SCOUTFS_IOCTL_MAGIC, 6, \
struct scoutfs_ioctl_data_waiting)
/*
* If i_size is set then data_version must be non-zero. If the offline
* flag is set then i_size must be set and a offline extent will be
* created from offset 0 to i_size.
* created from offset 0 to i_size. The time fields are always applied
* to the inode.
*/
struct scoutfs_ioctl_setattr_more {
__u64 data_version;
@@ -275,7 +268,8 @@ struct scoutfs_ioctl_setattr_more {
__u64 flags;
__u64 ctime_sec;
__u32 ctime_nsec;
__u8 _pad[4];
__u32 crtime_nsec;
__u64 crtime_sec;
};
#define SCOUTFS_IOC_SETATTR_MORE_OFFLINE (1 << 0)
@@ -291,8 +285,8 @@ struct scoutfs_ioctl_listxattr_hidden {
__u32 hash_pos;
};
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOR(SCOUTFS_IOCTL_MAGIC, 8, \
struct scoutfs_ioctl_listxattr_hidden)
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOWR(SCOUTFS_IOCTL_MAGIC, 8, \
struct scoutfs_ioctl_listxattr_hidden)
/*
* Return the inode numbers of inodes which might contain the given
@@ -345,27 +339,17 @@ struct scoutfs_ioctl_search_xattrs {
/* set in output_flags if returned inodes reached last_ino */
#define SCOUTFS_SEARCH_XATTRS_OFLAG_END (1ULL << 0)
#define SCOUTFS_IOC_SEARCH_XATTRS _IOR(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_search_xattrs)
#define SCOUTFS_IOC_SEARCH_XATTRS _IOW(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_search_xattrs)
/*
* Give the user information about the filesystem.
*
* @valid_bytes stores the number of bytes that are valid in the
* structure. The caller sets this to the size of the struct that they
* understand. The kernel then fills and copies back the min of the
* size they and the user caller understand. The user can tell if a
* field is set if all of its bytes are within the valid_bytes that the
* kernel set on return.
*
* @committed_seq: All seqs up to and including this seq have been
* committed. Can be compared with meta_seq and data_seq from inodes in
* stat_more to discover if changes have been committed to disk.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_statfs_more {
__u64 valid_bytes;
__u64 fsid;
__u64 rid;
__u64 committed_seq;
@@ -392,7 +376,7 @@ struct scoutfs_ioctl_data_wait_err {
__s64 err;
};
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOR(SCOUTFS_IOCTL_MAGIC, 11, \
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOW(SCOUTFS_IOCTL_MAGIC, 11, \
struct scoutfs_ioctl_data_wait_err)
@@ -411,7 +395,7 @@ struct scoutfs_ioctl_alloc_detail_entry {
__u8 __pad[6];
};
#define SCOUTFS_IOC_ALLOC_DETAIL _IOR(SCOUTFS_IOCTL_MAGIC, 12, \
#define SCOUTFS_IOC_ALLOC_DETAIL _IOW(SCOUTFS_IOCTL_MAGIC, 12, \
struct scoutfs_ioctl_alloc_detail)
/*
@@ -474,7 +458,66 @@ struct scoutfs_ioctl_move_blocks {
__u64 flags;
};
#define SCOUTFS_IOC_MOVE_BLOCKS _IOR(SCOUTFS_IOCTL_MAGIC, 13, \
#define SCOUTFS_IOC_MOVE_BLOCKS _IOW(SCOUTFS_IOCTL_MAGIC, 13, \
struct scoutfs_ioctl_move_blocks)
struct scoutfs_ioctl_resize_devices {
__u64 new_total_meta_blocks;
__u64 new_total_data_blocks;
};
#define SCOUTFS_IOC_RESIZE_DEVICES \
_IOW(SCOUTFS_IOCTL_MAGIC, 14, struct scoutfs_ioctl_resize_devices)
#define SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR 3
/*
* Copy global totals of .totl. xattr value payloads to the user. This
* only sees xattrs which have been committed and this doesn't force
* commits of dirty data throughout the system. This can be out of sync
* by the amount of xattrs that can be dirty in open transactions that
* are being built throughout the system.
*
* pos_name: The array name of the first total that can be returned.
* The name is derived from the key of the xattrs that contribute to the
* total. For xattrs with a .totl.1.2.3 key, the pos_name[] should be
* {1, 2, 3}.
*
* totals_ptr: An aligned pointer to a buffer that will be filled with
* an array of scoutfs_ioctl_xattr_total structs for each total copied.
*
* totals_bytes: The size of the buffer in bytes. There must be room
* for at least one struct element so that returning 0 can promise that
* there were no more totals to copy after the pos_name.
*
* The number of copied elements is returned and 0 is returned if there
* were no more totals to copy after the pos_name.
*
* In addition to the usual errnos (EIO, EINVAL, EPERM, EFAULT) this
* adds:
*
* EINVAL: The totals_ buffer was not aligned or was not large enough
* for a single struct entry.
*/
struct scoutfs_ioctl_read_xattr_totals {
__u64 pos_name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
__u64 totals_ptr;
__u64 totals_bytes;
};
/*
* An individual total that is given to userspace. The total is the
* sum of all the values in the xattr payloads matching the name. The
* count is the number of xattrs, not number of files, contributing to
* the total.
*/
struct scoutfs_ioctl_xattr_total {
__u64 name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
__u64 total;
__u64 count;
};
#define SCOUTFS_IOC_READ_XATTR_TOTALS \
_IOW(SCOUTFS_IOCTL_MAGIC, 15, struct scoutfs_ioctl_read_xattr_totals)
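To make the calling convention concrete, here is a hedged userspace sketch that walks every total on a mounted scoutfs file system; the header's install name, the caller's choice of open file descriptor, and the naive pos_name advancement are all assumptions for illustration.

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

#include "scoutfs_ioctl.h"	/* assumed install name for this header */

/* fd is any open file descriptor on the scoutfs mount */
static int dump_totals(int fd)
{
	struct scoutfs_ioctl_read_xattr_totals rxt;
	struct scoutfs_ioctl_xattr_total tots[64];
	long nr, i;

	memset(&rxt, 0, sizeof(rxt));	/* pos_name {0,0,0}: start at the first total */
	rxt.totals_ptr = (unsigned long)tots;
	rxt.totals_bytes = sizeof(tots);

	while ((nr = ioctl(fd, SCOUTFS_IOC_READ_XATTR_TOTALS, &rxt)) > 0) {
		for (i = 0; i < nr; i++)
			printf("%llu.%llu.%llu total %llu count %llu\n",
			       (unsigned long long)tots[i].name[0],
			       (unsigned long long)tots[i].name[1],
			       (unsigned long long)tots[i].name[2],
			       (unsigned long long)tots[i].total,
			       (unsigned long long)tots[i].count);

		/* resume after the last copied name; assumes name[2] doesn't wrap */
		memcpy(rxt.pos_name, tots[nr - 1].name, sizeof(rxt.pos_name));
		rxt.pos_name[2]++;
	}

	return nr < 0 ? -1 : 0;
}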
#endif

File diff suppressed because it is too large


@@ -18,13 +18,15 @@ int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_delete_force(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_lock *lock);
u64 scoutfs_item_dirty_pages(struct super_block *sb);
u64 scoutfs_item_dirty_bytes(struct super_block *sb);
int scoutfs_item_write_dirty(struct super_block *sb);
int scoutfs_item_write_done(struct super_block *sb);
bool scoutfs_item_range_cached(struct super_block *sb,


@@ -66,8 +66,6 @@
* relative to that lock state we resend.
*/
#define GRACE_PERIOD_KT ms_to_ktime(10)
/*
* allocated per-super, freed on unmount.
*/
@@ -82,15 +80,11 @@ struct lock_info {
struct list_head lru_list;
unsigned long long lru_nr;
struct workqueue_struct *workq;
struct work_struct grant_work;
struct list_head grant_list;
struct delayed_work inv_dwork;
struct work_struct inv_work;
struct list_head inv_list;
struct work_struct shrink_work;
struct list_head shrink_list;
atomic64_t next_refresh_gen;
struct work_struct inv_iput_work;
struct llist_head inv_iput_llist;
struct dentry *tseq_dentry;
struct scoutfs_tseq_tree tseq_tree;
@@ -126,34 +120,6 @@ static bool lock_modes_match(int granted, int requested)
requested == SCOUTFS_LOCK_READ);
}
/*
* Final iput can get into evict and perform final inode deletion which
* can delete a lot of items under locks and transactions. We really
* don't want to be doing all that in an iput during invalidation. When
* invalidation sees that iput might perform final deletion it puts them
* on a list and queues this work.
*
* Nothing stops multiple puts for multiple invalidations of an inode
* before the work runs so we can track multiple puts in flight.
*/
static void lock_inv_iput_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info, inv_iput_work);
struct scoutfs_inode_info *si;
struct scoutfs_inode_info *tmp;
struct llist_node *inodes;
bool more;
inodes = llist_del_all(&linfo->inv_iput_llist);
llist_for_each_entry_safe(si, tmp, inodes, inv_iput_llnode) {
do {
more = atomic_dec_return(&si->inv_iput_count) > 0;
iput(&si->inode);
} while (more);
}
}
/*
* Invalidate cached data associated with an inode whose lock is going
* away.
@@ -194,11 +160,8 @@ static void invalidate_inode(struct super_block *sb, u64 ino)
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink > 0) {
iput(inode);
} else {
/* defer iput to work context so we don't evict inodes from invalidation */
if (atomic_inc_return(&si->inv_iput_count) == 1)
llist_add(&si->inv_iput_llnode, &linfo->inv_iput_llist);
smp_wmb(); /* count and list visible before work executes */
queue_work(linfo->workq, &linfo->inv_iput_work);
/* defer iput to work context so we don't evict inodes from invalidation */
scoutfs_inode_queue_iput(inode);
}
}
}
@@ -288,7 +251,6 @@ static void lock_free(struct lock_info *linfo, struct scoutfs_lock *lock)
BUG_ON(!RB_EMPTY_NODE(&lock->node));
BUG_ON(!RB_EMPTY_NODE(&lock->range_node));
BUG_ON(!list_empty(&lock->lru_head));
BUG_ON(!list_empty(&lock->grant_head));
BUG_ON(!list_empty(&lock->inv_head));
BUG_ON(!list_empty(&lock->shrink_head));
BUG_ON(!list_empty(&lock->cov_list));
@@ -316,8 +278,8 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
RB_CLEAR_NODE(&lock->node);
RB_CLEAR_NODE(&lock->range_node);
INIT_LIST_HEAD(&lock->lru_head);
INIT_LIST_HEAD(&lock->grant_head);
INIT_LIST_HEAD(&lock->inv_head);
INIT_LIST_HEAD(&lock->inv_list);
INIT_LIST_HEAD(&lock->shrink_head);
spin_lock_init(&lock->cov_list_lock);
INIT_LIST_HEAD(&lock->cov_list);
@@ -364,23 +326,6 @@ static bool lock_counts_match(int granted, unsigned int *counts)
return true;
}
/*
* Returns true if there are any mode counts that match with the desired
* mode. There can be other non-matching counts as well but we're only
* testing for the existence of any matching counts.
*/
static bool lock_count_match_exists(int desired, unsigned int *counts)
{
enum scoutfs_lock_mode mode;
for (mode = 0; mode < SCOUTFS_LOCK_NR_MODES; mode++) {
if (counts[mode] && lock_modes_match(desired, mode))
return true;
}
return false;
}
/*
* An idle lock has nothing going on. It can be present in the lru and
* can be freed by the final put when it has a null mode.
@@ -598,45 +543,15 @@ static void put_lock(struct lock_info *linfo,struct scoutfs_lock *lock)
}
/*
* Locks have a grace period that extends after activity and prevents
* invalidation. It's intended to let nodes do reasonable batches of
* work as locks ping pong between nodes that are doing conflicting
* work.
*/
static void extend_grace(struct super_block *sb, struct scoutfs_lock *lock)
{
ktime_t now = ktime_get();
if (ktime_after(now, lock->grace_deadline))
scoutfs_inc_counter(sb, lock_grace_set);
else
scoutfs_inc_counter(sb, lock_grace_extended);
lock->grace_deadline = ktime_add(now, GRACE_PERIOD_KT);
}
static void queue_grant_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->grant_list))
queue_work(linfo->workq, &linfo->grant_work);
}
/*
* We immediately queue work on the assumption that the caller might
* have made a change (set a lock mode) which can let one of the
* invalidating locks make forward progress, even if other locks are
* waiting for their grace period to elapse. It's a trade-off between
* invalidation latency and burning cpu repeatedly finding that locks
* are still in their grace period.
* The caller has made a change (set a lock mode) which can let one of the
* invalidating locks make forward progress.
*/
static void queue_inv_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->inv_list))
mod_delayed_work(linfo->workq, &linfo->inv_dwork, 0);
queue_work(linfo->workq, &linfo->inv_work);
}
/*
@@ -684,72 +599,13 @@ static void bug_on_inconsistent_grant_cache(struct super_block *sb,
}
/*
* Each lock has received a grant response message from the server.
* The client is receiving a grant response message from the server.
* This is being called synchronously in the networking receive path so
* our work should be quick and reasonably non-blocking.
*
* Grant responses can be reordered with incoming invalidation requests
* from the server so we have to be careful to only set the new mode
* once the old mode matches.
*
* We extend the grace period as we grant the lock if there is a waiting
* locker who can use the lock. This stops invalidation from pulling
* the granted lock out from under the requester, resulting in a lot of
* churn with no forward progress. Using the grace period avoids having
* to identify a specific waiter and give it an acquired lock. It's
* also very similar to waking up the locker and having it win the race
* against the invalidation. In that case they'd extend the grace
* period anyway as they unlock.
*/
static void lock_grant_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info,
grant_work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
scoutfs_inc_counter(sb, lock_grant_work);
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->grant_list, grant_head) {
nl = &lock->grant_nl;
/* wait for reordered invalidation to finish */
if (lock->mode != nl->old_mode)
continue;
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode,
nl->new_mode);
if (!lock_mode_can_read(nl->old_mode) &&
lock_mode_can_read(nl->new_mode)) {
lock->refresh_gen =
atomic64_inc_return(&linfo->next_refresh_gen);
}
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_seq = le64_to_cpu(nl->write_seq);
if (lock_count_match_exists(nl->new_mode, lock->waiters))
extend_grace(sb, lock);
trace_scoutfs_lock_granted(sb, lock);
list_del_init(&lock->grant_head);
wake_up(&lock->waitq);
put_lock(linfo, lock);
}
/* invalidations might be waiting for our reordered grant */
queue_inv_work(linfo);
spin_unlock(&linfo->lock);
}
/*
* The client is receiving a grant response message from the server. We
* find the lock, record the response, and add it to the list for grant
* work to process.
* The server's state machine can immediately send an invalidate request
* after sending this grant response. We won't process the incoming
* invalidate request until after processing this grant response.
*/
int scoutfs_lock_grant_response(struct super_block *sb,
struct scoutfs_net_lock *nl)
@@ -767,64 +623,61 @@ int scoutfs_lock_grant_response(struct super_block *sb,
trace_scoutfs_lock_grant_response(sb, lock);
BUG_ON(!lock->request_pending);
lock->grant_nl = *nl;
list_add_tail(&lock->grant_head, &linfo->grant_list);
queue_grant_work(linfo);
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode, nl->new_mode);
if (!lock_mode_can_read(nl->old_mode) && lock_mode_can_read(nl->new_mode))
lock->refresh_gen = atomic64_inc_return(&linfo->next_refresh_gen);
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_seq = le64_to_cpu(nl->write_seq);
trace_scoutfs_lock_granted(sb, lock);
wake_up(&lock->waitq);
put_lock(linfo, lock);
spin_unlock(&linfo->lock);
return 0;
}
struct inv_req {
struct list_head head;
struct scoutfs_lock *lock;
u64 net_id;
struct scoutfs_net_lock nl;
};
/*
* Each lock has received a lock invalidation request from the server
* which specifies a new mode for the lock. The server will only send
* one invalidation request at a time for each lock. The server can
* send another invalidate request after we send the response but before
* we reacquire the lock and finish invalidation.
* which specifies a new mode for the lock. Our processing state
* machine and server failover and lock recovery can both conspire to
* give us triplicate invalidation requests. The incoming requests for
* a given lock need to be processed in order, but we can process locks
* in any order.
*
* This is an unsolicited request from the server so it can arrive at
* any time after we make the server aware of the lock by initially
* requesting it. We wait for users of the current mode to unlock
* before invalidating.
* any time after we make the server aware of the lock. We wait for
* users of the current mode to unlock before invalidating.
*
* This can arrive on behalf of our request for a mode that conflicts
* with our current mode. We have to proceed while we have a request
* pending. We can also be racing with shrink requests being sent while
* we're invalidating.
*
* This can be processed concurrently and experience reordering with a
* grant response sent back-to-back from the server. We carefully only
* invalidate once the lock mode matches what the server told us to
* invalidate.
*
* We delay invalidation processing until a grace period has elapsed
* since the last unlock. The intent is to let users do a reasonable
* batch of work before dropping the lock. Continuous unlocking can
* continuously extend the deadline.
*
* Before we start invalidating the lock we set the lock to the new
* mode, preventing further incompatible users of the old mode from
* using the lock while we're invalidating.
*
* This does a lot of serialized inode invalidation in one context and
* performs a lot of repeated calls to sync. It would be nice to get
* some concurrent inode invalidation and to more carefully only call
* sync when needed.
*/
static void lock_invalidate_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info,
inv_dwork.work);
struct lock_info *linfo = container_of(work, struct lock_info, inv_work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
unsigned long delay = MAX_JIFFY_OFFSET;
ktime_t now = ktime_get();
ktime_t deadline;
struct inv_req *ireq;
LIST_HEAD(ready);
u64 net_id;
int ret;
scoutfs_inc_counter(sb, lock_invalidate_work);
@@ -832,25 +685,13 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->inv_list, inv_head) {
nl = &lock->inv_nl;
/* wait for reordered grant to finish */
if (lock->mode != nl->old_mode)
continue;
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
nl = &ireq->nl;
/* wait until incompatible holders unlock */
if (!lock_counts_match(nl->new_mode, lock->users))
continue;
/* skip if grace hasn't elapsed, record earliest */
deadline = lock->grace_deadline;
if (!linfo->shutdown && ktime_before(now, deadline)) {
delay = min(delay,
nsecs_to_jiffies(ktime_to_ns(
ktime_sub(deadline, now))));
scoutfs_inc_counter(linfo->sb, lock_grace_wait);
continue;
}
/* set the new mode, no incompatible users during inval */
lock->mode = nl->new_mode;
@@ -861,12 +702,12 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_unlock(&linfo->lock);
if (list_empty(&ready))
goto out;
return;
/* invalidate once the lock is ready */
list_for_each_entry(lock, &ready, inv_head) {
nl = &lock->inv_nl;
net_id = lock->inv_net_id;
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
nl = &ireq->nl;
/* only lock protocol, inv can't call subsystems after shutdown */
if (!linfo->shutdown) {
@@ -874,11 +715,10 @@ static void lock_invalidate_worker(struct work_struct *work)
BUG_ON(ret);
}
/* allow another request after we respond but before we finish */
lock->inv_net_id = 0;
/* respond with the key and modes from the request */
ret = scoutfs_client_lock_response(sb, net_id, nl);
/* respond with the key and modes from the request, server might have died */
ret = scoutfs_client_lock_response(sb, ireq->net_id, nl);
if (ret == -ENOTCONN)
ret = 0;
BUG_ON(ret);
scoutfs_inc_counter(sb, lock_invalidate_response);
@@ -888,69 +728,87 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &ready, inv_head) {
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
trace_scoutfs_lock_invalidated(sb, lock);
if (lock->inv_net_id == 0) {
list_del(&ireq->head);
kfree(ireq);
if (list_empty(&lock->inv_list)) {
/* finish if another request didn't arrive */
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
wake_up(&lock->waitq);
} else {
/* another request filled nl/net_id, put it back on the list */
/* another request arrived, back on the list and requeue */
list_move_tail(&lock->inv_head, &linfo->inv_list);
queue_inv_work(linfo);
}
put_lock(linfo, lock);
}
/* grant might have been waiting for invalidate request */
queue_grant_work(linfo);
spin_unlock(&linfo->lock);
out:
/* queue delayed work if invalidations waiting on grace deadline */
if (delay != MAX_JIFFY_OFFSET)
queue_delayed_work(linfo->workq, &linfo->inv_dwork, delay);
}
/*
* Record an incoming invalidate request from the server and add its
* lock to the list for processing. This request can be from a new
* server and racing with invalidation that frees from an old server.
* It's fine to not find the requested lock and send an immediate
* response.
* Add an incoming invalidation request to the end of the list on the
* lock and queue it for blocking invalidation work. This is being
* called synchronously in the net recv path to avoid reordering with
* grants that were sent immediately before the server sent this
* invalidation.
*
* The invalidation process drops the linfo lock to send responses. The
* moment it does so we can receive another invalidation request (the
* server can ask us to go from write->read then read->null). We allow
* for one chain like this but it's a bug if we receive more concurrent
* invalidation requests than that. The server should be only sending
* one at a time.
* Incoming invalidation requests are a function of the remote lock
* server's state machine and are slightly decoupled from our lock
* state. We can receive duplicate requests if the server is quick
* enough to send the next request after we send a previous reply, or if
* pending invalidation spans server failover and lock recovery.
*
* Similarly, we can get a request to invalidate a lock we don't have if
* invalidation finished just after lock recovery to a new server.
* Happily we can just reply because we satisfy the invalidation
* response promise to not be using the old lock's mode if the lock
* doesn't exist.
*/
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
struct scoutfs_lock *lock = NULL;
struct inv_req *ireq;
int ret = 0;
scoutfs_inc_counter(sb, lock_invalidate_request);
ireq = kmalloc(sizeof(struct inv_req), GFP_NOFS);
BUG_ON(!ireq); /* lock server doesn't handle response errors */
if (ireq == NULL) {
ret = -ENOMEM;
goto out;
}
spin_lock(&linfo->lock);
lock = get_lock(sb, &nl->key);
if (lock) {
BUG_ON(lock->inv_net_id != 0);
lock->inv_net_id = net_id;
lock->inv_nl = *nl;
if (list_empty(&lock->inv_head)) {
trace_scoutfs_lock_invalidate_request(sb, lock);
ireq->lock = lock;
ireq->net_id = net_id;
ireq->nl = *nl;
if (list_empty(&lock->inv_list)) {
list_add_tail(&lock->inv_head, &linfo->inv_list);
lock->invalidate_pending = 1;
queue_inv_work(linfo);
}
trace_scoutfs_lock_invalidate_request(sb, lock);
queue_inv_work(linfo);
list_add_tail(&ireq->head, &lock->inv_list);
}
spin_unlock(&linfo->lock);
if (!lock)
out:
if (!lock) {
ret = scoutfs_client_lock_response(sb, net_id, nl);
BUG_ON(ret); /* lock server doesn't fence timed out client requests */
}
return ret;
}
@@ -1128,8 +986,14 @@ static int lock_key_range(struct super_block *sb, enum scoutfs_lock_mode mode, i
trace_scoutfs_lock_wait(sb, lock);
ret = wait_event_interruptible(lock->waitq,
lock_wait_cond(sb, lock, mode));
if (flags & SCOUTFS_LKF_INTERRUPTIBLE) {
ret = wait_event_interruptible(lock->waitq,
lock_wait_cond(sb, lock, mode));
} else {
wait_event(lock->waitq, lock_wait_cond(sb, lock, mode));
ret = 0;
}
spin_lock(&linfo->lock);
if (ret)
break;
@@ -1373,10 +1237,20 @@ int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
/*
* As we unlock we always extend the grace period to give the caller
* another pass at the lock before it's invalidated.
*/
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock)
{
struct scoutfs_key start;
struct scoutfs_key end;
scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
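For context, a hedged sketch of how a writer might pair this zone lock with the delta items declared in item.h above; the WRITE_ONLY mode, the call sequence, and the omitted transaction handling are assumptions for illustration, not a call site taken from this diff.

/* hypothetical caller: apply a delta to one .totl. item under the zone lock */
static int totl_apply_delta(struct super_block *sb, u64 *name,
			    struct scoutfs_xattr_totl_val *tval)
{
	struct scoutfs_lock *lock = NULL;
	struct scoutfs_key key;
	int ret;

	ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &lock);
	if (ret)
		return ret;

	scoutfs_xattr_init_totl_key(&key, name);
	ret = scoutfs_item_delta(sb, &key, tval, sizeof(*tval), lock);

	scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE_ONLY);
	return ret;
}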
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scoutfs_lock_mode mode)
{
DECLARE_LOCK_INFO(sb, linfo);
@@ -1389,7 +1263,6 @@ void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scou
spin_lock(&linfo->lock);
lock_dec_count(lock->users, mode);
extend_grace(sb, lock);
if (lock_mode_can_write(mode))
lock->dirty_trans_seq = scoutfs_trans_sample_seq(sb);
@@ -1629,10 +1502,18 @@ void scoutfs_lock_unmount_begin(struct super_block *sb)
if (linfo) {
linfo->unmounting = true;
flush_delayed_work(&linfo->inv_dwork);
flush_work(&linfo->inv_work);
}
}
void scoutfs_lock_flush_invalidate(struct super_block *sb)
{
DECLARE_LOCK_INFO(sb, linfo);
if (linfo)
flush_work(&linfo->inv_work);
}
/*
* The caller is going to be shutting down transactions and the client.
* We need to make sure that locking won't call either after we return.
@@ -1696,6 +1577,8 @@ void scoutfs_lock_destroy(struct super_block *sb)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
struct inv_req *ireq_tmp;
struct inv_req *ireq;
struct rb_node *node;
enum scoutfs_lock_mode mode;
@@ -1722,8 +1605,6 @@ void scoutfs_lock_destroy(struct super_block *sb)
spin_unlock(&linfo->lock);
if (linfo->workq) {
/* pending grace work queues normal work */
flush_workqueue(linfo->workq);
/* now all work won't queue itself */
destroy_workqueue(linfo->workq);
}
@@ -1740,15 +1621,21 @@ void scoutfs_lock_destroy(struct super_block *sb)
* of free).
*/
spin_lock(&linfo->lock);
node = rb_first(&linfo->lock_tree);
while (node) {
lock = rb_entry(node, struct scoutfs_lock, node);
node = rb_next(node);
list_for_each_entry_safe(ireq, ireq_tmp, &lock->inv_list, head) {
list_del_init(&ireq->head);
put_lock(linfo, ireq->lock);
kfree(ireq);
}
lock->request_pending = 0;
if (!list_empty(&lock->lru_head))
__lock_del_lru(linfo, lock);
if (!list_empty(&lock->grant_head))
list_del_init(&lock->grant_head);
if (!list_empty(&lock->inv_head)) {
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
@@ -1758,6 +1645,7 @@ void scoutfs_lock_destroy(struct super_block *sb)
lock_remove(linfo, lock);
lock_free(linfo, lock);
}
spin_unlock(&linfo->lock);
kfree(linfo);
@@ -1782,15 +1670,11 @@ int scoutfs_lock_setup(struct super_block *sb)
linfo->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&linfo->shrinker);
INIT_LIST_HEAD(&linfo->lru_list);
INIT_WORK(&linfo->grant_work, lock_grant_worker);
INIT_LIST_HEAD(&linfo->grant_list);
INIT_DELAYED_WORK(&linfo->inv_dwork, lock_invalidate_worker);
INIT_WORK(&linfo->inv_work, lock_invalidate_worker);
INIT_LIST_HEAD(&linfo->inv_list);
INIT_WORK(&linfo->shrink_work, lock_shrink_worker);
INIT_LIST_HEAD(&linfo->shrink_list);
atomic64_set(&linfo->next_refresh_gen, 0);
INIT_WORK(&linfo->inv_iput_work, lock_inv_iput_worker);
init_llist_head(&linfo->inv_iput_llist);
scoutfs_tseq_tree_init(&linfo->tseq_tree, lock_tseq_show);
sbi->lock_info = linfo;

View File

@@ -6,7 +6,8 @@
#define SCOUTFS_LKF_REFRESH_INODE 0x01 /* update stale inode from item */
#define SCOUTFS_LKF_NONBLOCK 0x02 /* only use already held locks */
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_NONBLOCK << 1) - 1))
#define SCOUTFS_LKF_INTERRUPTIBLE 0x04 /* pending signals return -ERESTARTSYS */
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_INTERRUPTIBLE << 1) - 1))
#define SCOUTFS_LOCK_NR_MODES SCOUTFS_LOCK_INVALID
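The new INTERRUPTIBLE bit slots in under the same valid-flag mask pattern, ~((highest << 1) - 1). A hedged caller sketch, assuming a scoutfs_lock_inode() entry point shaped like the lock helpers below and a read mode named SCOUTFS_LOCK_READ:

	struct scoutfs_lock *lock = NULL;
	int ret;

	/* hypothetical caller: a pending signal ends the wait early */
	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
				 SCOUTFS_LKF_REFRESH_INODE | SCOUTFS_LKF_INTERRUPTIBLE,
				 inode, &lock);
	if (ret == -ERESTARTSYS)
		return ret;		/* signal arrived while waiting */
	if (ret == 0) {
		/* ... read items covered by the lock ... */
		scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
	}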
@@ -27,15 +28,11 @@ struct scoutfs_lock {
u64 dirty_trans_seq;
struct list_head lru_head;
wait_queue_head_t waitq;
ktime_t grace_deadline;
unsigned long request_pending:1,
invalidate_pending:1;
struct list_head grant_head;
struct scoutfs_net_lock grant_nl;
struct list_head inv_head;
struct scoutfs_net_lock inv_nl;
u64 inv_net_id;
struct list_head inv_head; /* entry in linfo's list of locks with invalidations */
struct list_head inv_list; /* list of lock's invalidation requests */
struct list_head shrink_head;
spinlock_t cov_list_lock;
@@ -87,6 +84,8 @@ int scoutfs_lock_rename(struct super_block *sb, enum scoutfs_lock_mode mode, int
struct scoutfs_lock **lock);
int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 ino, struct scoutfs_lock **lock);
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock,
enum scoutfs_lock_mode mode);
@@ -105,6 +104,7 @@ void scoutfs_free_unused_locks(struct super_block *sb);
int scoutfs_lock_setup(struct super_block *sb);
void scoutfs_lock_unmount_begin(struct super_block *sb);
void scoutfs_lock_flush_invalidate(struct super_block *sb);
void scoutfs_lock_shutdown(struct super_block *sb);
void scoutfs_lock_destroy(struct super_block *sb);

View File

@@ -78,9 +78,8 @@ struct lock_server_info {
struct scoutfs_tseq_tree tseq_tree;
struct dentry *tseq_dentry;
struct scoutfs_alloc *alloc;
struct scoutfs_block_writer *wri;
struct scoutfs_tseq_tree stats_tseq_tree;
struct dentry *stats_tseq_dentry;
};
#define DECLARE_LOCK_SERVER_INFO(sb, name) \
@@ -107,6 +106,9 @@ struct server_lock_node {
struct list_head granted;
struct list_head requested;
struct list_head invalidated;
struct scoutfs_tseq_entry stats_tseq_entry;
u64 stats[SLT_NR];
};
/*
@@ -296,6 +298,8 @@ static struct server_lock_node *alloc_server_lock(struct lock_server_info *inf,
snode = get_server_lock(inf, key, ins, false);
if (snode != ins)
kfree(ins);
else
scoutfs_tseq_add(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
}
}
@@ -325,8 +329,10 @@ static void put_server_lock(struct lock_server_info *inf,
mutex_unlock(&snode->mutex);
if (should_free)
if (should_free) {
scoutfs_tseq_del(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
kfree(snode);
}
}
static struct client_lock_entry *find_entry(struct server_lock_node *snode,
@@ -388,6 +394,8 @@ int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
goto out;
}
snode->stats[SLT_REQUEST]++;
clent->snode = snode;
add_client_entry(snode, &snode->requested, clent);
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
@@ -428,6 +436,8 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
goto out;
}
snode->stats[SLT_RESPONSE]++;
clent = find_entry(snode, &snode->invalidated, rid);
if (!clent) {
put_server_lock(inf, snode);
@@ -508,6 +518,7 @@ static int process_waiting_requests(struct super_block *sb,
trace_scoutfs_lock_message(sb, SLT_SERVER,
SLT_INVALIDATE, SLT_REQUEST,
gr->rid, 0, &nl);
snode->stats[SLT_INVALIDATE]++;
add_client_entry(snode, &snode->invalidated, gr);
}
@@ -544,6 +555,7 @@ static int process_waiting_requests(struct super_block *sb,
trace_scoutfs_lock_message(sb, SLT_SERVER, SLT_GRANT,
SLT_RESPONSE, req->rid,
req->net_id, &nl);
snode->stats[SLT_GRANT]++;
/* don't track null client locks, track all else */
if (req->mode == SCOUTFS_LOCK_NULL)
@@ -786,13 +798,21 @@ static void lock_server_tseq_show(struct seq_file *m,
clent->net_id);
}
static void stats_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
{
struct server_lock_node *snode = container_of(ent, struct server_lock_node,
stats_tseq_entry);
seq_printf(m, SK_FMT" req %llu inv %llu rsp %llu gr %llu\n",
SK_ARG(&snode->key), snode->stats[SLT_REQUEST], snode->stats[SLT_INVALIDATE],
snode->stats[SLT_RESPONSE], snode->stats[SLT_GRANT]);
}
/*
* Setup the lock server. This is called before networking can deliver
* requests.
*/
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri)
int scoutfs_lock_server_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct lock_server_info *inf;
@@ -805,8 +825,7 @@ int scoutfs_lock_server_setup(struct super_block *sb,
spin_lock_init(&inf->lock);
inf->locks_root = RB_ROOT;
scoutfs_tseq_tree_init(&inf->tseq_tree, lock_server_tseq_show);
inf->alloc = alloc;
inf->wri = wri;
scoutfs_tseq_tree_init(&inf->stats_tseq_tree, stats_tseq_show);
inf->tseq_dentry = scoutfs_tseq_create("server_locks", sbi->debug_root,
&inf->tseq_tree);
@@ -815,6 +834,14 @@ int scoutfs_lock_server_setup(struct super_block *sb,
return -ENOMEM;
}
inf->stats_tseq_dentry = scoutfs_tseq_create("server_lock_stats", sbi->debug_root,
&inf->stats_tseq_tree);
if (!inf->stats_tseq_dentry) {
debugfs_remove(inf->tseq_dentry);
kfree(inf);
return -ENOMEM;
}
sbi->lock_server_info = inf;
return 0;
@@ -836,6 +863,7 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
if (inf) {
debugfs_remove(inf->tseq_dentry);
debugfs_remove(inf->stats_tseq_dentry);
rbtree_postorder_for_each_entry_safe(snode, stmp,
&inf->locks_root, node) {

View File

@@ -11,9 +11,7 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid);
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri);
int scoutfs_lock_server_setup(struct super_block *sb);
void scoutfs_lock_server_destroy(struct super_block *sb);
#endif

View File

@@ -4,6 +4,7 @@
#include <linux/bitops.h>
#include "key.h"
#include "counters.h"
#include "super.h"
void __printf(4, 5) scoutfs_msg(struct super_block *sb, const char *prefix,
const char *str, const char *fmt, ...);
@@ -23,6 +24,9 @@ do { \
#define scoutfs_info(sb, fmt, args...) \
scoutfs_msg_check(sb, KERN_INFO, "", fmt, ##args)
#define scoutfs_tprintk(sb, fmt, args...) \
trace_printk(SCSBF " " fmt "\n", SCSB_ARGS(sb), ##args)
#define scoutfs_bug_on(sb, cond, fmt, args...) \
do { \
if (cond) { \

View File

@@ -629,8 +629,6 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
break;
}
trace_scoutfs_recv_clock_sync(nh.clock_sync_id);
data_len = le16_to_cpu(nh.data_len);
scoutfs_inc_counter(sb, net_recv_messages);
@@ -677,8 +675,15 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
scoutfs_tseq_add(&ninf->msg_tseq_tree, &mrecv->tseq_entry);
/* synchronously process greeting before next recvmsg */
if (nh.cmd == SCOUTFS_NET_CMD_GREETING)
/*
* Initial received greetings are processed
* synchronously before any other incoming messages.
*
* Incoming requests or responses to the lock client are
* processed synchronously to avoid reordering.
*/
if (nh.cmd == SCOUTFS_NET_CMD_GREETING ||
(nh.cmd == SCOUTFS_NET_CMD_LOCK && !conn->listening_conn))
scoutfs_net_proc_worker(&mrecv->proc_work);
else
queue_work(conn->workq, &mrecv->proc_work);
@@ -778,9 +783,6 @@ static void scoutfs_net_send_worker(struct work_struct *work)
trace_scoutfs_net_send_message(sb, &conn->sockname,
&conn->peername, &msend->nh);
msend->nh.clock_sync_id = scoutfs_clock_sync_id();
trace_scoutfs_send_clock_sync(msend->nh.clock_sync_id);
ret = sendmsg_full(conn->sock, &msend->nh, len);
spin_lock(&conn->lock);
@@ -833,17 +835,9 @@ static void scoutfs_net_destroy_worker(struct work_struct *work)
if (conn->listening_conn && conn->notify_down)
conn->notify_down(sb, conn, conn->info, conn->rid);
/*
* Usually networking is idle and we destroy pending sends, but when forcing unmount
* we may have to wake up waiters by failing pending sends.
*/
list_splice_init(&conn->resend_queue, &conn->send_queue);
list_for_each_entry_safe(msend, tmp, &conn->send_queue, head) {
if (scoutfs_forcing_unmount(sb))
call_resp_func(sb, conn, msend->resp_func, msend->resp_data,
NULL, 0, -ECONNABORTED);
list_for_each_entry_safe(msend, tmp, &conn->send_queue, head)
free_msend(ninf, msend);
}
/* accepted sockets are removed from their listener's list */
if (conn->listening_conn) {
@@ -873,13 +867,31 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
}
/*
* Have a pretty aggressive keepalive timeout of around 10 seconds. The
* TCP keepalives are being processed out of task context so they should
* be responsive even when mounts are under load.
* By default, TCP would maintain a connection to an unresponsive peer
* for a very long time indeed. We can't do that because quorum
* members will only participate in an election when they don't have a
* healthy connection to a server. We use the KEEPALIVE* and
* TCP_USER_TIMEOUT options to ensure that we'll break an unresponsive
* connection and return to the quorum and client connection paths to
* try to establish a new connection to an active server.
*
* The TCP_KEEP* and TCP_USER_TIMEOUT option interaction is subtle.
* TCP_USER_TIMEOUT only applies if there is unacked written data in the
* send queue. It doesn't work if the connection is idle. Adding
* keepalive probes with user_timeout set changes how the keepalive
* timeout is calculated. CNT no longer matters. Each time
* additional probes (not the first) are sent the user timeout is
* checked against the last time data was received. If none of the
* keepalives are responded to then eventually the user timeout applies.
*
* Given all this, we start with the overall unresponsive timeout. Then
* we set the probes to start sending towards the end of the timeout.
* We give the probes a few tries at getting a successful response; if
* none arrives, the user timeout elapses during the probe timer
* processing that follows the unsuccessful probes.
*/
#define KEEPCNT 3
#define KEEPIDLE 7
#define KEEPINTVL 1
#define UNRESPONSIVE_TIMEOUT_SECS 10
#define UNRESPONSIVE_PROBES 3
static int sock_opts_and_names(struct scoutfs_net_connection *conn,
struct socket *sock)
{
@@ -888,7 +900,7 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
int optval;
int ret;
/* but use a keepalive timeout instead of send timeout */
/* we use a keepalive timeout instead of send timeout */
tv.tv_sec = 0;
tv.tv_usec = 0;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
@@ -896,24 +908,32 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
if (ret)
goto out;
optval = KEEPCNT;
/* not checked when user_timeout != 0, but set for clarity */
optval = UNRESPONSIVE_PROBES;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = KEEPIDLE;
BUILD_BUG_ON(UNRESPONSIVE_PROBES >= UNRESPONSIVE_TIMEOUT_SECS);
optval = UNRESPONSIVE_TIMEOUT_SECS - (UNRESPONSIVE_PROBES);
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = KEEPINTVL;
optval = 1;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = UNRESPONSIVE_TIMEOUT_SECS * MSEC_PER_SEC;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
(char *)&optval, sizeof(optval));
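For reference, the same unresponsive-peer policy can be sketched from userspace with setsockopt(); a minimal example assuming Linux and <netinet/tcp.h>, mirroring the constants above (10 second window, probes in the last 3 seconds):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int set_unresponsive_timeout(int fd)
{
	int secs = 10;			/* UNRESPONSIVE_TIMEOUT_SECS */
	int probes = 3;			/* UNRESPONSIVE_PROBES */
	int idle = secs - probes;	/* start probing near the end */
	int intvl = 1;			/* then one probe per second */
	int user_ms = secs * 1000;	/* cap checked as probes are sent */
	int on = 1;

	if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &probes, sizeof(probes)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &user_ms, sizeof(user_ms)) ||
	    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)))
		return -1;
	return 0;
}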
@@ -1106,9 +1126,11 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
struct scoutfs_net_connection *listener;
struct scoutfs_net_connection *acc_conn;
scoutfs_net_response_t resp_func;
struct message_send *msend;
struct message_send *tmp;
unsigned long delay;
void *resp_data;
trace_scoutfs_net_shutdown_work_enter(sb, 0, 0);
trace_scoutfs_conn_shutdown_start(conn);
@@ -1154,6 +1176,30 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
/* and wait for accepted conn shutdown work to finish */
wait_event(conn->waitq, empty_accepted_list(conn));
/*
* Forced unmount causes net submits to fail once it has
* started, and it calls shutdown to interrupt any previous
* senders waiting for a response. The response callbacks can
* do quite a lot of work so we're careful to call them outside
* the lock.
*/
if (scoutfs_forcing_unmount(sb)) {
spin_lock(&conn->lock);
list_splice_tail_init(&conn->send_queue, &conn->resend_queue);
while ((msend = list_first_entry_or_null(&conn->resend_queue,
struct message_send, head))) {
resp_func = msend->resp_func;
resp_data = msend->resp_data;
free_msend(ninf, msend);
spin_unlock(&conn->lock);
call_resp_func(sb, conn, resp_func, resp_data, NULL, 0, -ECONNABORTED);
spin_lock(&conn->lock);
}
spin_unlock(&conn->lock);
}
spin_lock(&conn->lock);
/* greetings aren't resent across sockets */
@@ -1486,8 +1532,7 @@ int scoutfs_net_connect(struct super_block *sb,
struct scoutfs_net_connection *conn,
struct sockaddr_in *sin, unsigned long timeout_ms)
{
int error = 0;
int ret;
int ret = 0;
spin_lock(&conn->lock);
conn->connect_sin = *sin;
@@ -1495,10 +1540,8 @@ int scoutfs_net_connect(struct super_block *sb,
spin_unlock(&conn->lock);
queue_work(conn->workq, &conn->connect_work);
ret = wait_event_interruptible(conn->waitq,
connect_result(conn, &error));
return ret ?: error;
wait_event(conn->waitq, connect_result(conn, &ret));
return ret;
}
static void set_valid_greeting(struct scoutfs_net_connection *conn)
@@ -1634,10 +1677,10 @@ restart:
conn->next_send_id = reconn->next_send_id;
atomic64_set(&conn->recv_seq, atomic64_read(&reconn->recv_seq));
/* greeting response/ack will be on conn send queue */
/* reconn should be idle while in reconn_wait */
BUG_ON(!list_empty(&reconn->send_queue));
BUG_ON(!list_empty(&conn->resend_queue));
list_splice_init(&reconn->resend_queue, &conn->resend_queue);
/* queued greeting response is racing, can be in send or resend queue */
list_splice_tail_init(&reconn->resend_queue, &conn->resend_queue);
/* new conn info is unused, swap, old won't call down */
swap(conn->info, reconn->info);
@@ -1801,11 +1844,10 @@ int scoutfs_net_sync_request(struct super_block *sb,
ret = scoutfs_net_submit_request(sb, conn, cmd, arg, arg_len,
sync_response, &sreq, &id);
ret = wait_for_completion_interruptible(&sreq.comp);
if (ret == -ERESTARTSYS)
scoutfs_net_cancel_request(sb, conn, cmd, id);
else
if (ret == 0) {
wait_for_completion(&sreq.comp);
ret = sreq.error;
}
return ret;
}

View File

@@ -97,7 +97,7 @@ struct quorum_host_msg {
struct last_msg {
struct quorum_host_msg msg;
struct timespec64 ts;
ktime_t ts;
};
enum quorum_role { FOLLOWER, CANDIDATE, LEADER };
@@ -209,7 +209,7 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
DECLARE_QUORUM_INFO(sb, qinf);
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct timespec64 ts;
ktime_t now;
int i;
struct scoutfs_quorum_message qmes = {
@@ -235,7 +235,6 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
qmes.crc = quorum_message_crc(&qmes);
ts = ktime_to_timespec64(ktime_get());
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i) ||
@@ -243,12 +242,13 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
continue;
scoutfs_quorum_slot_sin(super, i, &sin);
now = ktime_get();
kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
spin_lock(&qinf->show_lock);
qinf->last_send[i].msg.term = term;
qinf->last_send[i].msg.type = type;
qinf->last_send[i].ts = ts;
qinf->last_send[i].ts = now;
spin_unlock(&qinf->show_lock);
if (i == only)
@@ -308,6 +308,8 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
if (ret < 0)
return ret;
now = ktime_get();
if (ret != sizeof(qmes) ||
qmes.crc != quorum_message_crc(&qmes) ||
qmes.fsid != super->hdr.fsid ||
@@ -327,7 +329,7 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
spin_lock(&qinf->show_lock);
qinf->last_recv[msg->from].msg = *msg;
qinf->last_recv[msg->from].ts = ktime_to_timespec64(ktime_get());
qinf->last_recv[msg->from].ts = now;
spin_unlock(&qinf->show_lock);
return 0;
@@ -390,6 +392,51 @@ out:
return ret;
}
/*
* It's really important in raft elections that the term not go
* backwards in time. We achieve this by having each participant record
* the greatest term they've seen in their quorum block. It's also
* important that participants agree on the greatest term. It can
* happen that one gets ahead of the rest, perhaps by being forcefully
* shut down after having just been elected. As everyone starts up it's
* possible to have N-1 have term T-1 while just one participant thinks
* the term is T. That single participant will ignore all messages
* from older terms. If its timeout is greater than the others' it can
* immediately override the election of the majority and request votes
* and become elected.
*
* A best-effort workaround is to have everyone try to start from the
* greatest term that they can find in everyone's blocks. If it works
* then you avoid having those with greater terms ignore others. If it
* doesn't work the elections will eventually stabilize after rocky
* periods of fencing from what looks like concurrent elections.
*/
static void read_greatest_term(struct super_block *sb, u64 *term)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_quorum_block blk;
int ret;
int e;
int s;
*term = 0;
for (s = 0; s < SCOUTFS_QUORUM_MAX_SLOTS; s++) {
if (!quorum_slot_present(super, s))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + s, &blk, false);
if (ret < 0)
continue;
for (e = 0; e < ARRAY_SIZE(blk.events); e++) {
if (blk.events[e].rid)
*term = max(*term, le64_to_cpu(blk.events[e].term));
}
}
}
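The failure mode that motivates this comes from the usual receive-side rule that stale-term messages are ignored. A hypothetical fragment of that check, with field names borrowed from the message and status structures here:

	/* a lone member that believes term T silently drops everything
	 * from the majority still at term T-1 */
	if (msg.term < qst.term)
		continue;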
static void set_quorum_block_event(struct super_block *sb, struct scoutfs_quorum_block *blk,
int event, u64 term)
{
@@ -401,8 +448,10 @@ static void set_quorum_block_event(struct super_block *sb, struct scoutfs_quorum
return;
getnstimeofday64(&ts);
le64_add_cpu(&blk->write_nr, 1);
ev = &blk->events[event];
ev->write_nr = blk->write_nr;
ev->rid = cpu_to_le64(sbi->rid);
ev->term = cpu_to_le64(term);
ev->ts.sec = cpu_to_le64(ts.tv_sec);
@@ -556,10 +605,8 @@ out:
ret = err;
}
if (ret < 0) {
scoutfs_err(sb, "error %d attempting to find and fence previous leaders", ret);
if (ret < 0)
scoutfs_inc_counter(sb, quorum_fence_error);
}
return ret;
}
@@ -576,29 +623,24 @@ static void scoutfs_quorum_worker(struct work_struct *work)
struct quorum_info *qinf = container_of(work, struct quorum_info, work);
struct super_block *sb = qinf->sb;
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
struct scoutfs_quorum_block blk;
struct sockaddr_in unused;
struct quorum_host_msg msg;
struct quorum_status qst;
u64 blkno;
int ret;
int err;
/* recording votes from slots as native single word bitmap */
BUILD_BUG_ON(SCOUTFS_QUORUM_MAX_SLOTS > BITS_PER_LONG);
/* get our starting term from our persistent block */
blkno = SCOUTFS_QUORUM_BLKNO + opts->quorum_slot_nr;
ret = read_quorum_block(sb, blkno, &blk, false);
if (ret < 0)
goto out;
/* start out as a follower */
qst.role = FOLLOWER;
qst.term = le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_TERM].term);
qst.term = 0;
qst.vote_for = -1;
qst.vote_bits = 0;
/* read our starting term from the greatest across all events in all slots */
read_greatest_term(sb, &qst.term);
/* see if there's a server to choose heartbeat or election timeout */
if (scoutfs_quorum_server_sin(sb, &unused) == 0)
qst.timeout = heartbeat_timeout();
@@ -610,7 +652,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
if (ret < 0)
goto out;
while (!qinf->shutdown) {
while (!(qinf->shutdown || scoutfs_forcing_unmount(sb))) {
ret = recv_msg(sb, &msg, qst.timeout);
if (ret < 0) {
@@ -697,11 +739,10 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* candidates count votes in their term */
if (qst.role == CANDIDATE &&
msg.type == SCOUTFS_QUORUM_MSG_VOTE) {
if (test_bit(msg.from, &qst.vote_bits)) {
if (test_and_set_bit(msg.from, &qst.vote_bits)) {
scoutfs_warn(sb, "already received vote from %u in term %llu, are there multiple mounts with quorum_slot_nr=%u?",
msg.from, qst.term, msg.from);
}
set_bit(msg.from, &qst.vote_bits);
scoutfs_inc_counter(sb, quorum_recv_vote);
}
@@ -733,13 +774,15 @@ static void scoutfs_quorum_worker(struct work_struct *work)
ret = scoutfs_server_start(sb, qst.term);
if (ret < 0) {
clear_bit(QINF_FLAG_SERVER, &qinf->flags);
scoutfs_err(sb, "server startup failed with %d", ret);
/* store our increased term */
err = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP, qst.term,
true);
if (err < 0 && ret == 0)
if (err < 0) {
ret = err;
goto out;
goto out;
}
ret = 0;
continue;
}
}
@@ -785,11 +828,11 @@ static void scoutfs_quorum_worker(struct work_struct *work)
qst.term);
}
/* informational event that we're shutting down, nothing relies on it */
/* record that this slot no longer has an active quorum */
update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_END, qst.term, true);
out:
if (ret < 0) {
scoutfs_err(sb, "quorum service saw error %d, shutting down. Cluster will be degraded until this slot is remounted to restart the quorum service",
scoutfs_err(sb, "quorum service saw error %d, shutting down. This mount is no longer participating in quorum. It should be remounted to restore service.",
ret);
}
}
@@ -915,6 +958,7 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
struct quorum_status qst;
struct last_msg last;
struct timespec64 ts;
const ktime_t now = ktime_get();
size_t size;
int ret;
int i;
@@ -936,9 +980,9 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
qst.vote_for);
snprintf_ret(buf, size, &ret, "vote_bits 0x%lx (count %lu)\n",
qst.vote_bits, hweight_long(qst.vote_bits));
ts = ktime_to_timespec64(qst.timeout);
snprintf_ret(buf, size, &ret, "timeout %llu.%u\n",
(u64)ts.tv_sec, (int)ts.tv_nsec);
ts = ktime_to_timespec64(ktime_sub(qst.timeout, now));
snprintf_ret(buf, size, &ret, "timeout_in_secs %lld.%09u\n",
(s64)ts.tv_sec, (int)ts.tv_nsec);
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
spin_lock(&qinf->show_lock);
@@ -948,10 +992,11 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
if (last.msg.term == 0)
continue;
ts = ktime_to_timespec64(ktime_sub(now, last.ts));
snprintf_ret(buf, size, &ret,
"last_send to %u term %llu type %u ts %llu.%u\n",
"last_send to %u term %llu type %u secs_since %lld.%09u\n",
i, last.msg.term, last.msg.type,
(u64)last.ts.tv_sec, (int)last.ts.tv_nsec);
(s64)ts.tv_sec, (int)ts.tv_nsec);
}
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
@@ -961,10 +1006,12 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
if (last.msg.term == 0)
continue;
ts = ktime_to_timespec64(ktime_sub(now, last.ts));
snprintf_ret(buf, size, &ret,
"last_recv from %u term %llu type %u ts %llu.%u\n",
"last_recv from %u term %llu type %u secs_since %lld.%09u\n",
i, last.msg.term, last.msg.type,
(u64)last.ts.tv_sec, (int)last.ts.tv_nsec);
(s64)ts.tv_sec, (int)ts.tv_nsec);
}
return ret;
@@ -1001,13 +1048,17 @@ static inline bool valid_ipv4_port(__be16 port)
static int verify_quorum_slots(struct super_block *sb)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
char slots[(SCOUTFS_QUORUM_MAX_SLOTS * 3) + 1];
DECLARE_QUORUM_INFO(sb, qinf);
struct sockaddr_in other;
struct sockaddr_in sin;
int found = 0;
int ret;
int i;
int j;
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
continue;
@@ -1048,6 +1099,25 @@ static int verify_quorum_slots(struct super_block *sb)
return -EINVAL;
}
if (!quorum_slot_present(super, opts->quorum_slot_nr)) {
char *str = slots;
*str = '\0';
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (quorum_slot_present(super, i)) {
ret = snprintf(str, &slots[ARRAY_SIZE(slots)] - str, "%c%u",
str == slots ? ' ' : ',', i);
if (ret < 2 || ret > 3) {
scoutfs_err(sb, "error gathering populated slots");
return -EINVAL;
}
str += ret;
}
}
scoutfs_err(sb, "quorum_slot_nr=%u option references unused slot, must be one of the following configured slots:%s",
opts->quorum_slot_nr, slots);
return -EINVAL;
}
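The slot-list formatting emits a leading space before the first entry and commas between the rest. A standalone sketch of the same loop (slots {0,2,3} assumed) prints "configured slots: 0,2,3":

#include <stdio.h>

int main(void)
{
	char slots[(8 * 3) + 1];
	int present[8] = { 1, 0, 1, 1, 0, 0, 0, 0 };
	char *str = slots;
	int i;

	*str = '\0';
	for (i = 0; i < 8; i++) {
		if (present[i])
			str += snprintf(str, &slots[sizeof(slots)] - str,
					"%c%u", str == slots ? ' ' : ',', i);
	}
	printf("configured slots:%s\n", slots);
	return 0;
}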
/*
* Always require a majority except in the pathological cases of
* 1 or 2 members.

View File

@@ -58,9 +58,6 @@ struct lock_info;
__entry->pref##_map, \
__entry->pref##_flags
#define DECLARE_TRACED_EXTENT(name) \
struct scoutfs_traced_extent name = {0}
DECLARE_EVENT_CLASS(scoutfs_ino_ret_class,
TP_PROTO(struct super_block *sb, u64 ino, int ret),
@@ -406,21 +403,24 @@ TRACE_EVENT(scoutfs_sync_fs,
);
TRACE_EVENT(scoutfs_trans_write_func,
TP_PROTO(struct super_block *sb, unsigned long dirty),
TP_PROTO(struct super_block *sb, u64 dirty_block_bytes, u64 dirty_item_bytes),
TP_ARGS(sb, dirty),
TP_ARGS(sb, dirty_block_bytes, dirty_item_bytes),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(unsigned long, dirty)
__field(__u64, dirty_block_bytes)
__field(__u64, dirty_item_bytes)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->dirty = dirty;
__entry->dirty_block_bytes = dirty_block_bytes;
__entry->dirty_item_bytes = dirty_item_bytes;
),
TP_printk(SCSBF" dirty %lu", SCSB_TRACE_ARGS, __entry->dirty)
TP_printk(SCSBF" dirty_block_bytes %llu dirty_item_bytes %llu",
SCSB_TRACE_ARGS, __entry->dirty_block_bytes, __entry->dirty_item_bytes)
);
DECLARE_EVENT_CLASS(scoutfs_trans_hold_release_class,
@@ -1954,74 +1954,6 @@ TRACE_EVENT(scoutfs_quorum_loop,
__entry->timeout_sec, __entry->timeout_nsec)
);
/*
* We can emit trace events to make it easier to synchronize the
* monotonic clocks in trace logs between nodes. By looking at the send
* and recv times of many messages flowing between nodes we can get
* surprisingly good estimates of the clock offset between them.
*/
DECLARE_EVENT_CLASS(scoutfs_clock_sync_class,
TP_PROTO(__le64 clock_sync_id),
TP_ARGS(clock_sync_id),
TP_STRUCT__entry(
__field(__u64, clock_sync_id)
),
TP_fast_assign(
__entry->clock_sync_id = le64_to_cpu(clock_sync_id);
),
TP_printk("clock_sync_id %016llx", __entry->clock_sync_id)
);
DEFINE_EVENT(scoutfs_clock_sync_class, scoutfs_send_clock_sync,
TP_PROTO(__le64 clock_sync_id),
TP_ARGS(clock_sync_id)
);
DEFINE_EVENT(scoutfs_clock_sync_class, scoutfs_recv_clock_sync,
TP_PROTO(__le64 clock_sync_id),
TP_ARGS(clock_sync_id)
);
TRACE_EVENT(scoutfs_trans_seq_advance,
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
TP_ARGS(sb, rid, trans_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, s_rid)
__field(__u64, trans_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->s_rid = rid;
__entry->trans_seq = trans_seq;
),
TP_printk(SCSBF" rid %016llx trans_seq %llu\n",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->trans_seq)
);
TRACE_EVENT(scoutfs_trans_seq_remove,
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
TP_ARGS(sb, rid, trans_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, s_rid)
__field(__u64, trans_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->s_rid = rid;
__entry->trans_seq = trans_seq;
),
TP_printk(SCSBF" rid %016llx trans_seq %llu",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->trans_seq)
);
TRACE_EVENT(scoutfs_trans_seq_last,
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
@@ -2045,9 +1977,9 @@ TRACE_EVENT(scoutfs_trans_seq_last,
TRACE_EVENT(scoutfs_get_log_merge_status,
TP_PROTO(struct super_block *sb, u64 rid, struct scoutfs_key *next_range_key,
u64 nr_requests, u64 nr_complete, u64 last_seq, u64 seq),
u64 nr_requests, u64 nr_complete, u64 seq),
TP_ARGS(sb, rid, next_range_key, nr_requests, nr_complete, last_seq, seq),
TP_ARGS(sb, rid, next_range_key, nr_requests, nr_complete, seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
@@ -2055,7 +1987,6 @@ TRACE_EVENT(scoutfs_get_log_merge_status,
sk_trace_define(next_range_key)
__field(__u64, nr_requests)
__field(__u64, nr_complete)
__field(__u64, last_seq)
__field(__u64, seq)
),
@@ -2065,21 +1996,20 @@ TRACE_EVENT(scoutfs_get_log_merge_status,
sk_trace_assign(next_range_key, next_range_key);
__entry->nr_requests = nr_requests;
__entry->nr_complete = nr_complete;
__entry->last_seq = last_seq;
__entry->seq = seq;
),
TP_printk(SCSBF" rid %016llx next_range_key "SK_FMT" nr_requests %llu nr_complete %llu last_seq %llu seq %llu",
TP_printk(SCSBF" rid %016llx next_range_key "SK_FMT" nr_requests %llu nr_complete %llu seq %llu",
SCSB_TRACE_ARGS, __entry->s_rid, sk_trace_args(next_range_key),
__entry->nr_requests, __entry->nr_complete, __entry->last_seq, __entry->seq)
__entry->nr_requests, __entry->nr_complete, __entry->seq)
);
TRACE_EVENT(scoutfs_get_log_merge_request,
TP_PROTO(struct super_block *sb, u64 rid,
struct scoutfs_btree_root *root, struct scoutfs_key *start,
struct scoutfs_key *end, u64 last_seq, u64 seq),
struct scoutfs_key *end, u64 input_seq, u64 seq),
TP_ARGS(sb, rid, root, start, end, last_seq, seq),
TP_ARGS(sb, rid, root, start, end, input_seq, seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
@@ -2089,7 +2019,7 @@ TRACE_EVENT(scoutfs_get_log_merge_request,
__field(__u8, root_height)
sk_trace_define(start)
sk_trace_define(end)
__field(__u64, last_seq)
__field(__u64, input_seq)
__field(__u64, seq)
),
@@ -2101,14 +2031,14 @@ TRACE_EVENT(scoutfs_get_log_merge_request,
__entry->root_height = root->height;
sk_trace_assign(start, start);
sk_trace_assign(end, end);
__entry->last_seq = last_seq;
__entry->input_seq = input_seq;
__entry->seq = seq;
),
TP_printk(SCSBF" rid %016llx root blkno %llu seq %llu height %u start "SK_FMT" end "SK_FMT" last_seq %llu seq %llu",
TP_printk(SCSBF" rid %016llx root blkno %llu seq %llu height %u start "SK_FMT" end "SK_FMT" input_seq %llu seq %llu",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->root_blkno,
__entry->root_seq, __entry->root_height,
sk_trace_args(start), sk_trace_args(end), __entry->last_seq,
sk_trace_args(start), sk_trace_args(end), __entry->input_seq,
__entry->seq)
);
@@ -2611,6 +2541,36 @@ TRACE_EVENT(scoutfs_alloc_move,
__entry->ret)
);
DECLARE_EVENT_CLASS(scoutfs_alloc_extent_class,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
STE_FIELDS(ext)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
STE_ASSIGN(ext, ext);
),
TP_printk(SCSBF" ext "STE_FMT, SCSB_TRACE_ARGS, STE_ENTRY_ARGS(ext))
);
DEFINE_EVENT(scoutfs_alloc_extent_class, scoutfs_alloc_move_extent,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext)
);
DEFINE_EVENT(scoutfs_alloc_extent_class, scoutfs_alloc_fill_extent,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext)
);
DEFINE_EVENT(scoutfs_alloc_extent_class, scoutfs_alloc_empty_extent,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext)
);
TRACE_EVENT(scoutfs_item_read_page,
TP_PROTO(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *pg_start, struct scoutfs_key *pg_end),

File diff suppressed because it is too large

View File

@@ -28,6 +28,7 @@
#include "btree.h"
#include "spbm.h"
#include "client.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
@@ -1481,10 +1482,11 @@ static int kway_merge(struct super_block *sb,
int ind;
int i;
if (WARN_ON_ONCE(nr <= 1))
if (WARN_ON_ONCE(nr <= 0))
return -EINVAL;
nr_parents = roundup_pow_of_two(nr) - 1;
/* always at least one parent for single leaf */
nr_parents = max_t(unsigned long, 1, roundup_pow_of_two(nr) - 1);
/* root at [1] for easy sib/parent index calc, final pad for odd sib */
nr_nodes = 1 + nr_parents + nr + 1;
tnodes = __vmalloc(nr_nodes * sizeof(struct tourn_node),
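The clamp matters for a single input: roundup_pow_of_two(1) - 1 would leave zero parents and no root at [1]. A standalone sketch of the sizing, with roundup_pow_of_two() reimplemented to mimic the kernel helper for nr >= 1:

#include <stdio.h>

static unsigned long roundup_pow_of_two_ul(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

int main(void)
{
	unsigned long nr, parents;

	for (nr = 1; nr <= 5; nr++) {
		parents = roundup_pow_of_two_ul(nr) - 1;
		if (parents < 1)
			parents = 1;	/* a single leaf still gets a parent */
		/* unused [0] + parents + leaves + pad for an odd sibling */
		printf("nr %lu parents %lu nodes %lu\n",
		       nr, parents, 1 + parents + nr + 1);
	}
	return 0;
}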
@@ -2081,7 +2083,7 @@ static int delete_files(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_srch_compact *sc)
{
int ret;
int ret = 0;
int i;
for (i = 0; i < sc->nr; i++) {
@@ -2127,6 +2129,7 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
struct scoutfs_alloc alloc;
unsigned long delay;
int ret;
int err;
sc = kmalloc(sizeof(struct scoutfs_srch_compact), GFP_NOFS);
if (sc == NULL) {
@@ -2165,10 +2168,14 @@ commit:
sc->meta_freed = alloc.freed;
sc->flags |= ret < 0 ? SCOUTFS_SRCH_COMPACT_FLAG_ERROR : 0;
ret = scoutfs_client_srch_commit_compact(sb, sc);
err = scoutfs_client_srch_commit_compact(sb, sc);
if (err < 0 && ret == 0)
ret = err;
out:
/* our allocators and files should be stable */
WARN_ON_ONCE(ret == -ESTALE);
if (ret < 0)
scoutfs_inc_counter(sb, srch_compact_error);
scoutfs_block_writer_forget_all(sb, &wri);
if (!atomic_read(&srinf->shutdown)) {

View File

@@ -20,7 +20,6 @@
#include <linux/statfs.h>
#include <linux/sched.h>
#include <linux/debugfs.h>
#include <linux/percpu.h>
#include "super.h"
#include "block.h"
@@ -52,66 +51,34 @@
static struct dentry *scoutfs_debugfs_root;
static DEFINE_PER_CPU(u64, clock_sync_ids) = 0;
/*
* Give the caller a unique clock sync id for a message they're about to
* send. We make the ids reasonably globally unique by using randomly
* initialized per-cpu 64bit counters.
*/
__le64 scoutfs_clock_sync_id(void)
/* the statfs file fields can be small (and signed?) :/ */
static __statfs_word saturate_truncated_word(u64 files)
{
u64 rnd = 0;
u64 ret;
u64 *id;
__statfs_word word = files;
retry:
preempt_disable();
id = this_cpu_ptr(&clock_sync_ids);
if (*id == 0) {
if (rnd == 0) {
preempt_enable();
get_random_bytes(&rnd, sizeof(rnd));
goto retry;
}
*id = rnd;
if (word != files) {
word = ~0ULL;
if (word < 0)
word = (unsigned long)word >> 1;
}
ret = ++(*id);
preempt_enable();
return cpu_to_le64(ret);
}
struct statfs_free_blocks {
u64 meta;
u64 data;
};
static int count_free_blocks(struct super_block *sb, void *arg, int owner,
u64 id, bool meta, bool avail, u64 blocks)
{
struct statfs_free_blocks *sfb = arg;
if (meta)
sfb->meta += blocks;
else
sfb->data += blocks;
return 0;
return word;
}
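A standalone sketch of the clamp and the estimate it protects, using long as a stand-in for __statfs_word and assumed sizes (1 TiB of metadata, ~2KiB per file, 1M inodes in use):

#include <stdint.h>
#include <stdio.h>

typedef long statfs_word;	/* stand-in; a signed 32-bit word on some ABIs */

static statfs_word saturate(uint64_t v)
{
	statfs_word word = v;

	if ((uint64_t)word != v) {	/* truncated, clamp instead */
		word = ~0ULL;
		if (word < 0)		/* signed word, clamp to its max */
			word = (unsigned long)word >> 1;
	}
	return word;
}

int main(void)
{
	uint64_t meta_bytes = 1ULL << 40;	/* 1 TiB, assumed */
	uint64_t files = meta_bytes / 2048;	/* ~2KiB per file */
	uint64_t ffree = files - (1ULL << 20);	/* 1M inodes in use */

	printf("f_files %ld f_ffree %ld\n",
	       (long)saturate(files), (long)saturate(ffree));
	return 0;
}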
/*
* Build the free block counts by having alloc read all the persistent
* blocks which contain allocators and calling us for each of them.
* Only the super block reads aren't cached so repeatedly calling statfs
* is like repeated O_DIRECT IO. We can add a cache and stale results
* if that IO becomes a problem.
* The server gives us the current sum of free blocks and the total
* inode count that it can see across all the clients' log trees. It
* won't see allocations and inode creations or deletions that are dirty
* in client memory as it builds a transaction.
*
* We fake the number of free inodes value by assuming that we can fill
* free blocks with a certain number of inodes. We then the number of
* current inodes to that free count to determine the total possible
* inodes.
* We don't have static limits on the number of files so the statfs
* fields for the total possible files and the number free isn't
* particularly helpful. What we do want to report is the number of
* inodes, so we fake a max possible number of inodes given a
* conservative estimate of the total space consumption per file and
* then find the free by subtracting our precise count of active inodes.
* This seems like the least surprising compromise where the file max
* doesn't change and the caller gets the correct count of used inodes.
*
* The fsid that we report is constructed from the xor of the first two
* and second two little endian u32s that make up the uuid bytes.
@@ -119,41 +86,33 @@ static int count_free_blocks(struct super_block *sb, void *arg, int owner,
static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
{
struct super_block *sb = dentry->d_inode->i_sb;
struct scoutfs_super_block *super = NULL;
struct statfs_free_blocks sfb = {0,};
struct scoutfs_net_statfs nst;
u64 files;
u64 ffree;
__le32 uuid[4];
int ret;
scoutfs_inc_counter(sb, statfs);
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super(sb, super);
ret = scoutfs_client_statfs(sb, &nst);
if (ret)
goto out;
ret = scoutfs_alloc_foreach(sb, count_free_blocks, &sfb);
if (ret < 0)
goto out;
kst->f_bfree = (sfb.meta << SCOUTFS_BLOCK_SM_LG_SHIFT) + sfb.data;
kst->f_bfree = (le64_to_cpu(nst.free_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(nst.free_data_blocks);
kst->f_type = SCOUTFS_SUPER_MAGIC;
kst->f_bsize = SCOUTFS_BLOCK_SM_SIZE;
kst->f_blocks = (le64_to_cpu(super->total_meta_blocks) <<
SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(super->total_data_blocks);
kst->f_blocks = (le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(nst.total_data_blocks);
kst->f_bavail = kst->f_bfree;
/* arbitrarily assume ~1K / empty file */
kst->f_ffree = sfb.meta * (SCOUTFS_BLOCK_LG_SIZE / 1024);
kst->f_files = kst->f_ffree + le64_to_cpu(super->next_ino);
files = div_u64(le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_LG_SHIFT, 2048);
ffree = files - le64_to_cpu(nst.inode_count);
kst->f_files = saturate_truncated_word(files);
kst->f_ffree = saturate_truncated_word(ffree);
BUILD_BUG_ON(sizeof(uuid) != sizeof(super->uuid));
memcpy(uuid, super->uuid, sizeof(uuid));
BUILD_BUG_ON(sizeof(uuid) != sizeof(nst.uuid));
memcpy(uuid, nst.uuid, sizeof(uuid));
kst->f_fsid.val[0] = le32_to_cpu(uuid[0]) ^ le32_to_cpu(uuid[1]);
kst->f_fsid.val[1] = le32_to_cpu(uuid[2]) ^ le32_to_cpu(uuid[3]);
kst->f_namelen = SCOUTFS_NAME_LEN;
@@ -162,8 +121,6 @@ static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
/* the vfs fills f_flags */
ret = 0;
out:
kfree(super);
/*
* We don't take cluster locks in statfs which makes it a very
* convenient place to trigger lock reclaim for debugging. We
@@ -230,7 +187,15 @@ static void scoutfs_metadev_close(struct super_block *sb)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (sbi->meta_bdev) {
/*
* Some kernels have blkdev_reread_part which calls
* fsync_bdev while holding bd_mutex, which inverts against
* the s_umount held across deactivate_super and blkdev_put
* from kill_sb->put_super.
*/
lockdep_off();
blkdev_put(sbi->meta_bdev, SCOUTFS_META_BDEV_MODE);
lockdep_on();
sbi->meta_bdev = NULL;
}
}
@@ -247,7 +212,16 @@ static void scoutfs_put_super(struct super_block *sb)
trace_scoutfs_put_super(sb);
scoutfs_inode_stop(sb);
/*
* Wait for invalidation and iput to finish with any lingering
* inode references that escaped the evict_inodes in
* generic_shutdown_super. MS_ACTIVE is clear so final iput
* will always evict.
*/
scoutfs_lock_flush_invalidate(sb);
scoutfs_inode_flush_iput(sb);
WARN_ON_ONCE(!list_empty(&sb->s_inodes));
scoutfs_forest_stop(sb);
scoutfs_srch_destroy(sb);
@@ -297,6 +271,8 @@ static void scoutfs_umount_begin(struct super_block *sb)
scoutfs_warn(sb, "forcing unmount, can return errors and lose unsynced data");
sbi->forced_unmount = true;
scoutfs_client_net_shutdown(sb);
}
static const struct super_operations scoutfs_super_ops = {
@@ -328,28 +304,16 @@ int scoutfs_write_super(struct super_block *sb,
sizeof(struct scoutfs_super_block));
}
static bool invalid_blkno_limits(struct super_block *sb, char *which,
u64 start, __le64 first, __le64 last,
struct block_device *bdev, int shift)
static bool small_bdev(struct super_block *sb, char *which, u64 blocks,
struct block_device *bdev, int shift)
{
u64 blkno;
u64 size = (u64)i_size_read(bdev->bd_inode);
u64 count = size >> shift;
if (le64_to_cpu(first) < start) {
scoutfs_err(sb, "super block first %s blkno %llu is within first valid blkno %llu",
which, le64_to_cpu(first), start);
return true;
}
if (blocks > count) {
scoutfs_err(sb, "super block records %llu %s blocks, but device %u:%u size %llu only allows %llu blocks",
blocks, which, MAJOR(bdev->bd_dev), MINOR(bdev->bd_dev), size, count);
if (le64_to_cpu(first) > le64_to_cpu(last)) {
scoutfs_err(sb, "super block first %s blkno %llu is greater than last %s blkno %llu",
which, le64_to_cpu(first), which, le64_to_cpu(last));
return true;
}
blkno = (i_size_read(bdev->bd_inode) >> shift) - 1;
if (le64_to_cpu(last) > blkno) {
scoutfs_err(sb, "super block last %s blkno %llu is beyond device size last blkno %llu",
which, le64_to_cpu(last), blkno);
return true;
}
@@ -398,27 +362,32 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
goto out;
}
if (le64_to_cpu(super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN ||
le64_to_cpu(super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX) {
scoutfs_err(sb, "super block has format version %llu outside of supported version range %u-%u",
le64_to_cpu(super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
ret = -EINVAL;
goto out;
}
if (super->version != cpu_to_le64(SCOUTFS_INTEROP_VERSION)) {
scoutfs_err(sb, "super block has invalid version %llu, expected %llu",
le64_to_cpu(super->version),
SCOUTFS_INTEROP_VERSION);
/*
* fill_supers checks the fmt_vers in both supers and then decides to use it.
* From then on we verify that the supers we read have that version.
*/
if (sbi->fmt_vers != 0 && le64_to_cpu(super->fmt_vers) != sbi->fmt_vers) {
scoutfs_err(sb, "super block has format version %llu than %llu read at mount",
le64_to_cpu(super->fmt_vers), sbi->fmt_vers);
ret = -EINVAL;
goto out;
}
/* XXX do we want more rigorous invalid super checking? */
if (invalid_blkno_limits(sb, "meta",
SCOUTFS_META_DEV_START_BLKNO,
super->first_meta_blkno,
super->last_meta_blkno, sbi->meta_bdev,
SCOUTFS_BLOCK_LG_SHIFT) ||
invalid_blkno_limits(sb, "data",
SCOUTFS_DATA_DEV_START_BLKNO,
super->first_data_blkno,
super->last_data_blkno, sb->s_bdev,
SCOUTFS_BLOCK_SM_SHIFT)) {
if (small_bdev(sb, "metadata", le64_to_cpu(super->total_meta_blocks), sbi->meta_bdev,
SCOUTFS_BLOCK_LG_SHIFT) ||
small_bdev(sb, "data", le64_to_cpu(super->total_data_blocks), sb->s_bdev,
SCOUTFS_BLOCK_SM_SHIFT)) {
ret = -EINVAL;
}
@@ -525,6 +494,14 @@ static int scoutfs_read_supers(struct super_block *sb)
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) != le64_to_cpu(data_super->fmt_vers)) {
scoutfs_err(sb, "meta device format version %llu != data device format version %llu",
le64_to_cpu(meta_super->fmt_vers), le64_to_cpu(data_super->fmt_vers));
ret = -EINVAL;
goto out;
}
sbi->fmt_vers = le64_to_cpu(meta_super->fmt_vers);
sbi->super = *meta_super;
out:
kfree(meta_super);
@@ -546,6 +523,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sb->s_maxbytes = MAX_LFS_FILESIZE;
sb->s_op = &scoutfs_super_ops;
sb->s_export_op = &scoutfs_export_ops;
sb->s_flags |= MS_I_VERSION;
/* btree blocks use long lived bh->b_data refs */
mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
@@ -561,12 +539,8 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
return ret;
spin_lock_init(&sbi->next_ino_lock);
init_waitqueue_head(&sbi->trans_hold_wq);
spin_lock_init(&sbi->data_wait_root.lock);
sbi->data_wait_root.root = RB_ROOT;
spin_lock_init(&sbi->trans_write_lock);
INIT_DELAYED_WORK(&sbi->trans_write_work, scoutfs_trans_write_func);
init_waitqueue_head(&sbi->trans_write_wq);
scoutfs_sysfs_init_attrs(sb, &sbi->mopts_ssa);
ret = scoutfs_parse_options(sb, data, &opts);
@@ -622,15 +596,16 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
scoutfs_quorum_setup(sb) ?:
scoutfs_client_setup(sb) ?:
scoutfs_volopt_setup(sb) ?:
scoutfs_trans_get_log_trees(sb) ?:
scoutfs_srch_setup(sb) ?:
scoutfs_inode_start(sb);
scoutfs_srch_setup(sb);
if (ret)
goto out;
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO);
/* this interruptible iget lets a hung mount be aborted with ctrl-c */
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO, SCOUTFS_LKF_INTERRUPTIBLE);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
if (ret == -ERESTARTSYS)
ret = -EINTR;
goto out;
}
@@ -640,10 +615,14 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
goto out;
}
ret = scoutfs_client_advance_seq(sb, &sbi->trans_seq);
/* send requests once iget progress shows we had a server */
ret = scoutfs_trans_get_log_trees(sb);
if (ret)
goto out;
/* start up background services that use everything else */
scoutfs_inode_start(sb);
scoutfs_forest_start(sb);
scoutfs_trans_restart_sync_deadline(sb);
ret = 0;
out:
@@ -665,10 +644,17 @@ static struct dentry *scoutfs_mount(struct file_system_type *fs_type, int flags,
*/
static void scoutfs_kill_sb(struct super_block *sb)
{
trace_scoutfs_kill_sb(sb);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (SCOUTFS_HAS_SBI(sb))
if (sbi) {
sbi->unmounting = true;
smp_wmb();
}
if (SCOUTFS_HAS_SBI(sb)) {
scoutfs_inode_orphan_stop(sb);
scoutfs_lock_unmount_begin(sb);
}
kill_block_super(sb);
}
@@ -701,11 +687,15 @@ static int __init scoutfs_module_init(void)
*/
__asm__ __volatile__ (
".section .note.git_describe,\"a\"\n"
".string \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".ascii \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_interop_version,\"a\"\n"
".string \""SCOUTFS_INTEROP_VERSION_STR"\\n\"\n"
".section .note.scoutfs_format_version_min,\"a\"\n"
".ascii \""SCOUTFS_FORMAT_VERSION_MIN_STR"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_format_version_max,\"a\"\n"
".ascii \""SCOUTFS_FORMAT_VERSION_MAX_STR"\\n\"\n"
".previous\n");
scoutfs_init_counters();
@@ -739,4 +729,5 @@ module_exit(scoutfs_module_exit)
MODULE_AUTHOR("Zach Brown <zab@versity.com>");
MODULE_LICENSE("GPL");
MODULE_INFO(git_describe, SCOUTFS_GIT_DESCRIBE);
MODULE_INFO(scoutfs_interop_version, SCOUTFS_INTEROP_VERSION_STR);
MODULE_INFO(scoutfs_format_version_min, SCOUTFS_FORMAT_VERSION_MIN_STR);
MODULE_INFO(scoutfs_format_version_max, SCOUTFS_FORMAT_VERSION_MAX_STR);

View File

@@ -36,6 +36,7 @@ struct scoutfs_sb_info {
/* assigned once at the start of each mount, read-only */
u64 rid;
u64 fmt_vers;
struct scoutfs_super_block super;
@@ -56,20 +57,11 @@ struct scoutfs_sb_info {
struct item_cache_info *item_cache_info;
struct fence_info *fence_info;
wait_queue_head_t trans_hold_wq;
struct task_struct *trans_task;
/* tracks tasks waiting for data extents */
struct scoutfs_data_wait_root data_wait_root;
spinlock_t trans_write_lock;
u64 trans_write_count;
/* set as transaction opens with trans holders excluded */
u64 trans_seq;
int trans_write_ret;
struct delayed_work trans_write_work;
wait_queue_head_t trans_write_wq;
struct workqueue_struct *trans_write_workq;
bool trans_deadline_expired;
struct trans_info *trans_info;
struct lock_info *lock_info;
@@ -89,6 +81,7 @@ struct scoutfs_sb_info {
struct dentry *debug_root;
bool forced_unmount;
bool unmounting;
unsigned long corruption_messages_once[SC_NR_LONGS];
};
@@ -117,6 +110,19 @@ static inline bool scoutfs_forcing_unmount(struct super_block *sb)
return sbi->forced_unmount;
}
/*
* True once we're shutting down the system. It's a coarse
* indicator that lets callers skip work that no longer makes
* sense.
*/
static inline bool scoutfs_unmounting(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
smp_rmb();
return !sbi || sbi->unmounting;
}
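A hedged sketch of the intended use, with a hypothetical background worker as the caller:

/* hypothetical worker private data */
struct example_info {
	struct super_block *sb;
	struct work_struct work;
};

static void example_worker(struct work_struct *work)
{
	struct example_info *inf = container_of(work, struct example_info, work);

	/* bail before doing periodic work once unmount has started */
	if (scoutfs_unmounting(inf->sb))
		return;

	/* ... scan, send, or re-queue as usual ... */
}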
/*
* A small string embedded in messages that's used to identify a
* specific mount. It's the three most significant bytes of the fsid
@@ -154,6 +160,4 @@ int scoutfs_write_super(struct super_block *sb,
/* to keep this out of the ioctl.h public interface definition */
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
__le64 scoutfs_clock_sync_id(void);
#endif

View File

@@ -37,6 +37,16 @@ struct attr_funcs {
#define ATTR_FUNCS_RO(_name) \
static struct attr_funcs _name##_attr_funcs = __ATTR_RO(_name)
static ssize_t format_version_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return snprintf(buf, PAGE_SIZE, "%llu\n", sbi->fmt_vers);
}
ATTR_FUNCS_RO(format_version);
static ssize_t fsid_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
@@ -91,6 +101,7 @@ static ssize_t attr_funcs_show(struct kobject *kobj, struct attribute *attr,
static struct attribute *sb_id_attrs[] = {
&format_version_attr_funcs.attr,
&fsid_attr_funcs.attr,
&rid_attr_funcs.attr,
NULL,

View File

@@ -17,6 +17,7 @@
#include <linux/atomic.h>
#include <linux/writeback.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include "super.h"
#include "trans.h"
@@ -53,15 +54,24 @@
/* sync dirty data at least this often */
#define TRANS_SYNC_DELAY (HZ * 10)
/*
* XXX move the rest of the super trans_ fields here.
*/
struct trans_info {
struct super_block *sb;
atomic_t holders;
struct scoutfs_log_trees lt;
struct scoutfs_alloc alloc;
struct scoutfs_block_writer wri;
wait_queue_head_t hold_wq;
struct task_struct *task;
spinlock_t write_lock;
u64 write_count;
int write_ret;
struct delayed_work write_work;
wait_queue_head_t write_wq;
struct workqueue_struct *write_workq;
bool deadline_expired;
};
#define DECLARE_TRANS_INFO(sb, name) \
@@ -91,6 +101,7 @@ static int commit_btrees(struct super_block *sb)
*/
int scoutfs_trans_get_log_trees(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_log_trees lt;
int ret = 0;
@@ -103,6 +114,11 @@ int scoutfs_trans_get_log_trees(struct super_block *sb)
scoutfs_forest_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
scoutfs_data_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
/* first set during mount from 0 to nonzero allows commits */
spin_lock(&tri->write_lock);
sbi->trans_seq = le64_to_cpu(lt.get_trans_seq);
spin_unlock(&tri->write_lock);
}
return ret;
}
@@ -120,13 +136,12 @@ bool scoutfs_trans_has_dirty(struct super_block *sb)
*/
static void sub_holders_and_wake(struct super_block *sb, int val)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
atomic_sub(val, &tri->holders);
smp_mb(); /* make sure sub is visible before we wake */
if (waitqueue_active(&sbi->trans_hold_wq))
wake_up(&sbi->trans_hold_wq);
if (waitqueue_active(&tri->hold_wq))
wake_up(&tri->hold_wq);
}
/*
@@ -154,96 +169,93 @@ static bool drained_holders(struct trans_info *tri)
* functions that would try to hold the transaction. We record the task
* that's committing the transaction so that holding won't deadlock.
*
* Any dirty block had to have allocated a new blkno which would have
* created dirty allocator metadata blocks. We can avoid writing
* entirely if we don't have any dirty metadata blocks. This is
* important because we don't try to serialize this work during
* unmount.. we can execute as the vfs is shutting down.. we need to
* decide that nothing is dirty without calling the vfs at all.
* Once we clear the write func bit in holders, waiting holders can
* enter the transaction and continue modifying the transaction. Once
* we start writing we consider the transaction done and won't exit,
* clearing the write func bit, until get_log_trees has opened the next
* transaction. The exception is forced unmount which is allowed to
* generate errors and throw away data.
*
* We first try to sync the dirty inodes and write their dirty data blocks,
* then we write all our dirty metadata blocks, and only when those succeed
* do we write the new super that references all of these newly written blocks.
*
* If there are write errors then blocks are kept dirty in memory and will
* be written again at the next sync.
* This means that the only way fsync can return an error is if we're in
* forced unmount.
*/
void scoutfs_trans_write_func(struct work_struct *work)
{
struct scoutfs_sb_info *sbi = container_of(work, struct scoutfs_sb_info,
trans_write_work.work);
struct super_block *sb = sbi->sb;
DECLARE_TRANS_INFO(sb, tri);
u64 trans_seq = sbi->trans_seq;
struct trans_info *tri = container_of(work, struct trans_info, write_work.work);
struct super_block *sb = tri->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
bool retrying = false;
char *s = NULL;
int ret = 0;
sbi->trans_task = current;
tri->task = current;
/* mark that we're writing so holders wait for us to finish and clear our bit */
atomic_add(TRANS_HOLDERS_WRITE_FUNC_BIT, &tri->holders);
wait_event(sbi->trans_hold_wq, drained_holders(tri));
wait_event(tri->hold_wq, drained_holders(tri));
/* mount hasn't opened first transaction yet, still complete sync */
if (sbi->trans_seq == 0) {
ret = 0;
goto out;
}
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
goto out;
}
trace_scoutfs_trans_write_func(sb,
scoutfs_block_writer_dirty_bytes(sb, &tri->wri));
trace_scoutfs_trans_write_func(sb, scoutfs_block_writer_dirty_bytes(sb, &tri->wri),
scoutfs_item_dirty_bytes(sb));
if (!scoutfs_block_writer_has_dirty(sb, &tri->wri) &&
!scoutfs_item_dirty_pages(sb)) {
if (sbi->trans_deadline_expired) {
/*
* If we're not writing data then we only advance the
* seq at the sync deadline interval. This keeps idle
* mounts from pinning a seq and stopping readers of the
* seq indices but doesn't send a message for every sync
* syscall.
*/
ret = scoutfs_client_advance_seq(sb, &trans_seq);
if (ret < 0)
s = "clean advance seq";
}
goto err;
}
if (sbi->trans_deadline_expired)
if (tri->deadline_expired)
scoutfs_inc_counter(sb, trans_commit_timer);
scoutfs_inc_counter(sb, trans_commit_written);
/* XXX this all needs serious work for dealing with errors */
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb,
&tri->alloc, &tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "advance seq", scoutfs_client_advance_seq(sb, &trans_seq)) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
err:
if (ret < 0)
scoutfs_err(sb, "critical transaction commit failure: %s, %d",
s, ret);
do {
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb, &tri->alloc,
&tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
if (ret < 0) {
if (!retrying) {
scoutfs_warn(sb, "critical transaction commit failure: %s = %d, retrying",
s, ret);
retrying = true;
}
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
break;
}
msleep(2 * MSEC_PER_SEC);
} else if (retrying) {
scoutfs_info(sb, "retried transaction commit succeeded");
}
} while (ret < 0);
out:
spin_lock(&sbi->trans_write_lock);
sbi->trans_write_count++;
sbi->trans_write_ret = ret;
sbi->trans_seq = trans_seq;
spin_unlock(&sbi->trans_write_lock);
wake_up(&sbi->trans_write_wq);
spin_lock(&tri->write_lock);
tri->write_count++;
tri->write_ret = ret;
spin_unlock(&tri->write_lock);
wake_up(&tri->write_wq);
/* we're done, wake waiting holders */
sub_holders_and_wake(sb, TRANS_HOLDERS_WRITE_FUNC_BIT);
sbi->trans_task = NULL;
tri->task = NULL;
scoutfs_trans_restart_sync_deadline(sb);
}
@@ -254,17 +266,17 @@ struct write_attempt {
};
/* this is called as a wait_event() condition so it can't change task state */
static int write_attempted(struct scoutfs_sb_info *sbi,
struct write_attempt *attempt)
static int write_attempted(struct super_block *sb, struct write_attempt *attempt)
{
DECLARE_TRANS_INFO(sb, tri);
int done = 1;
spin_lock(&sbi->trans_write_lock);
if (sbi->trans_write_count > attempt->count)
attempt->ret = sbi->trans_write_ret;
spin_lock(&tri->write_lock);
if (tri->write_count > attempt->count)
attempt->ret = tri->write_ret;
else
done = 0;
spin_unlock(&sbi->trans_write_lock);
spin_unlock(&tri->write_lock);
return done;
}
@@ -274,10 +286,12 @@ static int write_attempted(struct scoutfs_sb_info *sbi,
* We always have delayed sync work pending but the caller wants it
* to execute immediately.
*/
static void queue_trans_work(struct scoutfs_sb_info *sbi)
static void queue_trans_work(struct super_block *sb)
{
sbi->trans_deadline_expired = false;
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work, 0);
DECLARE_TRANS_INFO(sb, tri);
tri->deadline_expired = false;
mod_delayed_work(tri->write_workq, &tri->write_work, 0);
}
/*
@@ -290,26 +304,24 @@ static void queue_trans_work(struct scoutfs_sb_info *sbi)
*/
int scoutfs_trans_sync(struct super_block *sb, int wait)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct write_attempt attempt;
DECLARE_TRANS_INFO(sb, tri);
struct write_attempt attempt = { .ret = 0 };
int ret;
if (!wait) {
queue_trans_work(sbi);
queue_trans_work(sb);
return 0;
}
spin_lock(&sbi->trans_write_lock);
attempt.count = sbi->trans_write_count;
spin_unlock(&sbi->trans_write_lock);
spin_lock(&tri->write_lock);
attempt.count = tri->write_count;
spin_unlock(&tri->write_lock);
queue_trans_work(sbi);
queue_trans_work(sb);
ret = wait_event_interruptible(sbi->trans_write_wq,
write_attempted(sbi, &attempt));
if (ret == 0)
ret = attempt.ret;
wait_event(tri->write_wq, write_attempted(sb, &attempt));
ret = attempt.ret;
return ret;
}
@@ -325,10 +337,10 @@ int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
void scoutfs_trans_restart_sync_deadline(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
sbi->trans_deadline_expired = true;
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work,
tri->deadline_expired = true;
mod_delayed_work(tri->write_workq, &tri->write_work,
TRANS_SYNC_DELAY);
}
@@ -410,16 +422,18 @@ static void release_holders(struct super_block *sb)
*/
static bool commit_before_hold(struct super_block *sb, struct trans_info *tri)
{
u64 dirty_blocks = (scoutfs_item_dirty_bytes(sb) >> SCOUTFS_BLOCK_LG_SHIFT) + 1;
/*
* In theory each dirty item page could be straddling two full
* blocks, requiring 4 allocations for each item cache page.
* That's much too conservative, typically many dirty item cache
* pages that are near each other all land in one block. This
* In theory each dirty item could be added to a full block that
* has to split, requiring 2 meta block allocs for each dirty
* item. That's much too conservative, typically many dirty
* items that are near each other all land in one block. This
* rough estimate is still so far beyond what typically happens
* that it accounts for having to dirty parent blocks and
* whatever dirtying is done during the transaction hold.
*/
if (scoutfs_alloc_meta_low(sb, &tri->alloc, scoutfs_item_dirty_pages(sb) * 2)) {
if (scoutfs_alloc_meta_low(sb, &tri->alloc, dirty_blocks * 4)) {
scoutfs_inc_counter(sb, trans_commit_dirty_meta_full);
return true;
}
@@ -482,10 +496,16 @@ int scoutfs_hold_trans(struct super_block *sb, bool allocing)
u64 seq;
int ret;
if (current == sbi->trans_task)
if (current == tri->task)
return 0;
for (;;) {
/* shouldn't get holders until mount finishes, (not locking for cheap test) */
if (WARN_ON_ONCE(sbi->trans_seq == 0)) {
ret = -EINVAL;
break;
}
/* if a caller already has a hold we acquire unconditionally */
if (inc_journal_info_holders()) {
atomic_inc(&tri->holders);
@@ -496,9 +516,7 @@ int scoutfs_hold_trans(struct super_block *sb, bool allocing)
/* wait until the writer work is finished */
if (!inc_holders_unless_writer(tri)) {
dec_journal_info_holders();
ret = wait_event_interruptible(sbi->trans_hold_wq, holders_no_writer(tri));
if (ret < 0)
break;
wait_event(tri->hold_wq, holders_no_writer(tri));
continue;
}
@@ -513,11 +531,8 @@ int scoutfs_hold_trans(struct super_block *sb, bool allocing)
if (commit_before_hold(sb, tri)) {
seq = scoutfs_trans_sample_seq(sb);
release_holders(sb);
queue_trans_work(sbi);
ret = wait_event_interruptible(sbi->trans_hold_wq,
scoutfs_trans_sample_seq(sb) != seq);
if (ret < 0)
break;
queue_trans_work(sb);
wait_event(tri->hold_wq, scoutfs_trans_sample_seq(sb) != seq);
continue;
}
@@ -543,10 +558,9 @@ bool scoutfs_trans_held(void)
void scoutfs_release_trans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
if (current == sbi->trans_task)
if (current == tri->task)
return;
release_holders(sb);
@@ -561,12 +575,13 @@ void scoutfs_release_trans(struct super_block *sb)
*/
u64 scoutfs_trans_sample_seq(struct super_block *sb)
{
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
u64 ret;
spin_lock(&sbi->trans_write_lock);
spin_lock(&tri->write_lock);
ret = sbi->trans_seq;
spin_unlock(&sbi->trans_write_lock);
spin_unlock(&tri->write_lock);
return ret;
}
@@ -580,12 +595,17 @@ int scoutfs_setup_trans(struct super_block *sb)
if (!tri)
return -ENOMEM;
tri->sb = sb;
atomic_set(&tri->holders, 0);
scoutfs_block_writer_init(sb, &tri->wri);
sbi->trans_write_workq = alloc_workqueue("scoutfs_trans",
WQ_UNBOUND, 1);
if (!sbi->trans_write_workq) {
spin_lock_init(&tri->write_lock);
INIT_DELAYED_WORK(&tri->write_work, scoutfs_trans_write_func);
init_waitqueue_head(&tri->write_wq);
init_waitqueue_head(&tri->hold_wq);
tri->write_workq = alloc_workqueue("scoutfs_trans", WQ_UNBOUND, 1);
if (!tri->write_workq) {
kfree(tri);
return -ENOMEM;
}
@@ -612,14 +632,14 @@ void scoutfs_shutdown_trans(struct super_block *sb)
DECLARE_TRANS_INFO(sb, tri);
if (tri) {
if (sbi->trans_write_workq) {
if (tri->write_workq) {
/* immediately queues pending timer */
flush_delayed_work(&sbi->trans_write_work);
flush_delayed_work(&tri->write_work);
/* prevents re-arming if it has to wait */
cancel_delayed_work_sync(&sbi->trans_write_work);
destroy_workqueue(sbi->trans_write_workq);
cancel_delayed_work_sync(&tri->write_work);
destroy_workqueue(tri->write_workq);
/* trans work schedules after shutdown see null */
sbi->trans_write_workq = NULL;
tri->write_workq = NULL;
}
scoutfs_block_writer_forget_all(sb, &tri->wri);

View File

@@ -97,6 +97,7 @@ static int unknown_prefix(const char *name)
#define HIDE_TAG "hide."
#define SRCH_TAG "srch."
#define TOTL_TAG "totl."
#define TAG_LEN (sizeof(HIDE_TAG) - 1)
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
@@ -119,6 +120,9 @@ int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
} else if (!strncmp(name, SRCH_TAG, TAG_LEN)) {
if (++tgs->srch == 0)
return -EINVAL;
} else if (!strncmp(name, TOTL_TAG, TAG_LEN)) {
if (++tgs->totl == 0)
return -EINVAL;
} else {
/* only reason to use scoutfs. is tags */
if (!found)
@@ -364,7 +368,7 @@ static int change_xattr_items(struct inode *inode, u64 id,
}
/* update dirtied overlapping existing items, last partial first */
for (i = old_parts - 1; i >= 0; i--) {
for (i = min(old_parts, new_parts) - 1; i >= 0; i--) {
off = i * SCOUTFS_XATTR_MAX_PART_SIZE;
bytes = min_t(unsigned int, new_bytes - off,
SCOUTFS_XATTR_MAX_PART_SIZE);
@@ -468,6 +472,100 @@ out:
return ret;
}
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name)
{
scoutfs_key_set_zeros(key);
key->sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
key->skxt_a = cpu_to_le64(name[0]);
key->skxt_b = cpu_to_le64(name[1]);
key->skxt_c = cpu_to_le64(name[2]);
}
/*
* Parse a u64 in any base after null terminating it while forbidding
* the leading + and trailing \n that kstrtoull allows.
*/
static int parse_totl_u64(const char *s, int len, u64 *res)
{
char str[SCOUTFS_XATTR_MAX_TOTL_U64 + 1];
if (len <= 0 || len >= ARRAY_SIZE(str) || s[0] == '+' || s[len - 1] == '\n')
return -EINVAL;
memcpy(str, s, len);
str[len] = '\0';
return kstrtoull(str, 0, res) != 0 ? -EINVAL : 0;
}
/*
* Non-destructive, relatively quick parse of the last 3 dotted u64s that
* make up the name of the xattr total. -EINVAL is returned if there
* is anything but 3 valid u64 encodings between single dots at the end
* of the name.
*/
static int parse_totl_key(struct scoutfs_key *key, const char *name, int name_len)
{
u64 tot_name[3];
int end = name_len;
int nr = 0;
int len;
int ret;
int i;
/* parse name elements in reverse order from end of xattr name string */
for (i = name_len - 1; i >= 0 && nr < ARRAY_SIZE(tot_name); i--) {
if (name[i] != '.')
continue;
len = end - (i + 1);
ret = parse_totl_u64(&name[i + 1], len, &tot_name[nr]);
if (ret < 0)
goto out;
end = i;
nr++;
}
if (nr == ARRAY_SIZE(tot_name)) {
/* swap to account for parsing in reverse */
swap(tot_name[0], tot_name[2]);
scoutfs_xattr_init_totl_key(key, tot_name);
ret = 0;
} else {
ret = -EINVAL;
}
out:
return ret;
}
static int apply_totl_delta(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_xattr_totl_val *tval, struct scoutfs_lock *lock)
{
if (tval->total == 0 && tval->count == 0)
return 0;
return scoutfs_item_delta(sb, key, tval, sizeof(*tval), lock);
}
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
{
struct scoutfs_xattr_totl_val *s_tval = src;
struct scoutfs_xattr_totl_val *d_tval = dst;
if (src_len != sizeof(*s_tval) || dst_len != src_len)
return -EIO;
le64_add_cpu(&d_tval->total, le64_to_cpu(s_tval->total));
le64_add_cpu(&d_tval->count, le64_to_cpu(s_tval->count));
if (d_tval->total == 0 && d_tval->count == 0)
return SCOUTFS_DELTA_COMBINED_NULL;
return SCOUTFS_DELTA_COMBINED;
}
/*
* The confusing swiss army knife of creating, modifying, and deleting
* xattrs.
@@ -486,16 +584,22 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_xattr_totl_val tval = {0,};
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_lock *totl_lock = NULL;
size_t name_len = strlen(name);
struct scoutfs_key totl_key;
struct scoutfs_key key;
bool undo_srch = false;
bool undo_totl = false;
LIST_HEAD(ind_locks);
u8 found_parts;
unsigned int bytes;
unsigned int val_len;
u64 ind_seq;
u64 total;
u64 hash = 0;
u64 id = 0;
int ret;
@@ -519,11 +623,15 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
if ((tgs.hide || tgs.srch) && !capable(CAP_SYS_ADMIN))
if ((tgs.hide | tgs.srch | tgs.totl) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (tgs.totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
return ret;
bytes = sizeof(struct scoutfs_xattr) + name_len + size;
xat = __vmalloc(bytes, GFP_NOFS, PAGE_KERNEL);
/* alloc enough to read old totl value */
xat = __vmalloc(bytes + SCOUTFS_XATTR_MAX_TOTL_U64, GFP_NOFS, PAGE_KERNEL);
if (!xat) {
ret = -ENOMEM;
goto out;
@@ -536,9 +644,9 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
down_write(&si->xattr_rwsem);
/* find an existing xattr to delete */
/* find an existing xattr to delete, including possible totl value */
ret = get_next_xattr(inode, &key, xat,
sizeof(struct scoutfs_xattr) + name_len,
sizeof(struct scoutfs_xattr) + name_len + SCOUTFS_XATTR_MAX_TOTL_U64,
name, name_len, 0, 0, lck);
if (ret < 0 && ret != -ENOENT)
goto unlock;
@@ -558,9 +666,23 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
goto unlock;
}
/* s64 count delta if we create or delete */
if (tgs.totl)
tval.count = cpu_to_le64((u64)!!(value) - (u64)!!(ret != -ENOENT));
/* found fields in key will also be used */
found_parts = ret >= 0 ? xattr_nr_parts(xat) : 0;
if (found_parts && tgs.totl) {
/* parse old totl value before we clobber xat buf */
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
ret = parse_totl_u64(&xat->name[xat->name_len], val_len, &total);
if (ret < 0)
goto unlock;
le64_add_cpu(&tval.total, -total);
}
/* prepare our xattr */
if (value) {
if (found_parts)
@@ -572,6 +694,20 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
memset(xat->__pad, 0, sizeof(xat->__pad));
memcpy(xat->name, name, name_len);
memcpy(&xat->name[xat->name_len], value, size);
if (tgs.totl) {
ret = parse_totl_u64(value, size, &total);
if (ret < 0)
goto unlock;
}
le64_add_cpu(&tval.total, total);
}
if (tgs.totl) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret)
goto unlock;
}
retry:
@@ -597,6 +733,13 @@ retry:
undo_srch = true;
}
if (tgs.totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
goto release;
undo_totl = true;
}
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, bytes,
xattr_nr_parts(xat), found_parts, lck);
@@ -620,12 +763,20 @@ release:
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
if (ret < 0 && undo_totl) {
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
BUG_ON(err);
}
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
unlock:
up_write(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
out:
vfree(xat);
@@ -746,15 +897,22 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
{
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *totl_lock = NULL;
struct scoutfs_xattr_totl_val tval;
struct scoutfs_key totl_key;
struct scoutfs_key last;
struct scoutfs_key key;
bool release = false;
unsigned int bytes;
unsigned int val_len;
void *value;
u64 total;
u64 hash;
int ret;
/* need a buffer large enough for all possible names */
bytes = sizeof(struct scoutfs_xattr) + SCOUTFS_XATTR_MAX_NAME_LEN;
/* need a buffer large enough for all possible names and totl value */
bytes = sizeof(struct scoutfs_xattr) + SCOUTFS_XATTR_MAX_NAME_LEN +
SCOUTFS_XATTR_MAX_TOTL_U64;
xat = kmalloc(bytes, GFP_NOFS);
if (!xat) {
ret = -ENOMEM;
@@ -773,11 +931,37 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
break;
}
if (key.skx_part == 0 && (ret < sizeof(struct scoutfs_xattr) ||
ret < offsetof(struct scoutfs_xattr, name[xat->name_len]))) {
ret = -EIO;
break;
}
if (key.skx_part != 0 ||
scoutfs_xattr_parse_tags(xat->name, xat->name_len,
&tgs) != 0)
memset(&tgs, 0, sizeof(tgs));
if (tgs.totl) {
value = &xat->name[xat->name_len];
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
if (val_len != le16_to_cpu(xat->val_len)) {
ret = -EIO;
goto out;
}
ret = parse_totl_key(&totl_key, xat->name, xat->name_len) ?:
parse_totl_u64(value, val_len, &total);
if (ret < 0)
break;
}
if (tgs.totl && totl_lock == NULL) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret < 0)
break;
}
ret = scoutfs_hold_trans(sb, false);
if (ret < 0)
break;
@@ -795,6 +979,14 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
break;
}
if (tgs.totl) {
tval.total = cpu_to_le64(-total);
tval.count = cpu_to_le64(-1LL);
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
break;
}
scoutfs_release_trans(sb);
release = false;
@@ -803,6 +995,7 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
if (release)
scoutfs_release_trans(sb);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
kfree(xat);
out:
return ret;

View File

@@ -16,10 +16,14 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
srch:1;
srch:1,
totl:1;
};
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs);
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name);
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len);
#endif

tests/.gitignore
View File

@@ -1,5 +1,6 @@
src/*.d
src/createmany
src/dumb_renameat2
src/dumb_setxattr
src/handle_cat
src/bulk_create_paths

View File

@@ -3,6 +3,7 @@ SHELL := /usr/bin/bash
# each binary command is built from a single .c file
BIN := src/createmany \
src/dumb_renameat2 \
src/dumb_setxattr \
src/handle_cat \
src/bulk_create_paths \

View File

@@ -40,7 +40,7 @@ t_filter_dmesg()
# mount and unmount spew a bunch
re="$re|scoutfs.*client connected"
re="$re|scoutfs.*client disconnected"
re="$re|scoutfs.*server setting up"
re="$re|scoutfs.*server starting"
re="$re|scoutfs.*server ready"
re="$re|scoutfs.*server accepted"
re="$re|scoutfs.*server closing"
@@ -72,6 +72,12 @@ t_filter_dmesg()
re="$re|scoutfs .* error reading quorum block"
re="$re|scoutfs .* error .* writing quorum block"
re="$re|scoutfs .* error .* while checking to delete inode"
re="$re|scoutfs .* error .*writing btree blocks.*"
re="$re|scoutfs .* error .*writing super block.*"
re="$re|scoutfs .* error .* freeing merged btree blocks.*.looping commit del.*upd freeing item"
re="$re|scoutfs .* error .* freeing merged btree blocks.*.final commit del.upd freeing item"
re="$re|scoutfs .* error .*reading quorum block.*to update event.*"
re="$re|scoutfs .* error.*server failed to bind to.*"
egrep -v "($re)"
}

View File

@@ -53,3 +53,5 @@ mv: cannot move /mnt/test/test/basic-posix-consistency/dir/c/clobber to
== inode indexes match after syncing existing
== inode indexes match after copying and syncing
== inode indexes match after removing and syncing
== concurrent creates make one file
one-file

View File

@@ -1,52 +1,2 @@
== create shared test file
== set and get xattrs between mount pairs while retrying
# file: /mnt/test/test/block-stale-reads/file
user.xat="1"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="2"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="3"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="4"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="5"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="6"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="7"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="8"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="9"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="10"
counter block_cache_remove_stale changed
== Issue scoutfs df to force block reads to trigger stale invalidation/retry
counter block_cache_remove_stale changed

View File

@@ -1,4 +0,0 @@
== create per mount files
== time independent modification
== time concurrent independent modification
== time concurrent conflicting modification

View File

@@ -0,0 +1,2 @@
=== renameat2 noreplace flag test
=== run two asynchronous calls to renameat2 NOREPLACE

View File

@@ -0,0 +1,27 @@
== make initial small fs
== 0s do nothing
== shrinking fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== existing sizes do nothing
== growing outside device fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== resizing meta works
== resizing data works
== shrinking back fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== resizing again does nothing
== resizing to full works
== cleanup extra fs

View File

@@ -16,3 +16,4 @@ setfattr: /mnt/test/test/simple-xattr-unit/file: Numerical result out of range
setfattr: /mnt/test/test/simple-xattr-unit/file: Argument list too long
=== good length boundaries
=== 500 random lengths
=== alternate val size between interesting sizes

View File

@@ -2,6 +2,7 @@
== update existing xattr
== remove an xattr
== remove xattr with files
== trigger small log merges by rotating single block with unmount
== create entries in current log
== delete small fraction
== remove files

View File

@@ -0,0 +1,30 @@
== single file
1.2.3 = 1, 1
4.5.6 = 1, 1
== multiple files add up
1.2.3 = 2, 2
4.5.6 = 2, 2
== removing xattr updates total
1.2.3 = 2, 2
4.5.6 = 1, 1
== updating xattr updates total
1.2.3 = 11, 2
4.5.6 = 1, 1
== removing files update total
1.2.3 = 10, 1
== multiple files/names in one transaction
1.2.3 = 55, 10
== testing invalid names
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
== testing invalid values
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
== larger population that could merge

View File

@@ -9,6 +9,8 @@ generic/011
generic/013
generic/014
generic/020
generic/023
generic/024
generic/028
generic/032
generic/034
@@ -82,6 +84,7 @@ generic/016
generic/018
generic/021
generic/022
generic/025
generic/026
generic/031
generic/033
@@ -93,6 +96,7 @@ generic/060
generic/061
generic/063
generic/064
generic/078
generic/079
generic/081
generic/082
@@ -278,4 +282,4 @@ shared/004
shared/032
shared/051
shared/289
Passed all 73 tests
Passed all 75 tests

View File

@@ -254,17 +254,20 @@ test -e "$T_RESULTS" || mkdir -p "$T_RESULTS"
test -d "$T_RESULTS" || \
die "$T_RESULTS dir is not a directory"
# might as well build our stuff with all cpus, assuming idle system
MAKE_ARGS="-j $(getconf _NPROCESSORS_ONLN)"
# build kernel module
msg "building kmod/ dir $T_KMOD"
cmd cd "$T_KMOD"
cmd make
cmd make $MAKE_ARGS
cmd sync
cmd cd -
# build utils
msg "building utils/ dir $T_UTILS"
cmd cd "$T_UTILS"
cmd make
cmd make $MAKE_ARGS
cmd sync
cmd cd -
@@ -281,7 +284,7 @@ fi
# building our test binaries
msg "building test binaries"
cmd make
cmd make $MAKE_ARGS
# set any options implied by others
test -n "$T_MKFS" && T_UNMOUNT=1

View File

@@ -10,6 +10,7 @@ move-blocks.sh
enospc.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
totl-xattr-tag.sh
lock-refleak.sh
lock-shrink-consistency.sh
lock-pr-cw-conflict.sh
@@ -25,10 +26,10 @@ basic-posix-consistency.sh
dirent-consistency.sh
mkdir-rename-rmdir.sh
lock-ex-race-processes.sh
lock-conflicting-batch-commit.sh
cross-mount-data-free.sh
persistent-item-vers.sh
setup-error-teardown.sh
resize-devices.sh
fence-and-reclaim.sh
orphan-inodes.sh
mount-unmount-race.sh
@@ -36,4 +37,5 @@ createmany-parallel-mounts.sh
archive-light-cycle.sh
block-stale-reads.sh
inode-deletion.sh
renameat2-noreplace.sh
xfstests.sh

View File

@@ -0,0 +1,93 @@
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#ifndef RENAMEAT2_EXIST
#include <unistd.h>
#include <sys/syscall.h>
#if !defined(SYS_renameat2) && defined(__x86_64__)
#define SYS_renameat2 316 /* from arch/x86/entry/syscalls/syscall_64.tbl */
#endif
static int renameat2(int olddfd, const char *old_dir,
int newdfd, const char *new_dir,
unsigned int flags)
{
#ifdef SYS_renameat2
return syscall(SYS_renameat2, olddfd, old_dir, newdfd, new_dir, flags);
#else
errno = ENOSYS;
return -1;
#endif
}
#endif
#ifndef RENAME_NOREPLACE
#define RENAME_NOREPLACE (1 << 0) /* Don't overwrite newpath of rename */
#endif
#ifndef RENAME_EXCHANGE
#define RENAME_EXCHANGE (1 << 1) /* Exchange oldpath and newpath */
#endif
#ifndef RENAME_WHITEOUT
#define RENAME_WHITEOUT (1 << 2) /* Whiteout oldpath */
#endif
static void exit_usage(char **argv)
{
fprintf(stderr,
"usage: %s [-n|-x|-w] old_path new_path\n"
" -n noreplace\n"
" -x exchange\n"
" -w whiteout\n", argv[0]);
exit(1);
}
int main(int argc, char **argv)
{
const char *old_path = NULL;
const char *new_path = NULL;
unsigned int flags = 0;
int ret;
int c;
for (c = 1; c < argc; c++) {
if (argv[c][0] == '-') {
switch (argv[c][1]) {
case 'n':
flags |= RENAME_NOREPLACE;
break;
case 'x':
flags |= RENAME_EXCHANGE;
break;
case 'w':
flags |= RENAME_WHITEOUT;
break;
default:
exit_usage(argv);
}
} else if (!old_path) {
old_path = argv[c];
} else if (!new_path) {
new_path = argv[c];
} else {
exit_usage(argv);
}
}
if (!old_path || !new_path) {
printf("specify the correct directory path\n");
errno = ENOENT;
return 1;
}
ret = renameat2(AT_FDCWD, old_path, AT_FDCWD, new_path, flags);
if (ret == -1) {
perror("Error");
return 1;
}
return 0;
}

View File

@@ -48,8 +48,9 @@ char buf[SZ];
int main(int argc, char **argv)
{
struct scoutfs_ioctl_release ioctl_args = {0};
struct scoutfs_ioctl_release rel = {0};
struct scoutfs_ioctl_move_blocks mb;
struct scoutfs_ioctl_stat_more stm;
struct sub_tmp_info sub_tmps[8];
int tot_size = 0;
char *dest_file;
@@ -111,12 +112,19 @@ int main(int argc, char **argv)
exit(1);
}
// release everything in dest file
ioctl_args.offset = 0;
ioctl_args.length = tot_size;
ioctl_args.data_version = 0;
// get current data_version after fallocate's size extensions
ret = ioctl(dest_fd, SCOUTFS_IOC_STAT_MORE, &stm);
if (ret < 0) {
perror("stat_more ioctl error");
exit(1);
}
ret = ioctl(dest_fd, SCOUTFS_IOC_RELEASE, &ioctl_args);
// release everything in dest file
rel.offset = 0;
rel.length = tot_size;
rel.data_version = stm.data_version;
ret = ioctl(dest_fd, SCOUTFS_IOC_RELEASE, &rel);
if (ret < 0) {
perror("error");
exit(1);
@@ -130,7 +138,7 @@ int main(int argc, char **argv)
mb.from_off = 0;
mb.len = sub_tmp->length;
mb.to_off = sub_tmp->offset;
mb.data_version = 0;
mb.data_version = stm.data_version;
mb.flags = SCOUTFS_IOC_MB_STAGE;
ret = ioctl(dest_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);

View File

@@ -197,4 +197,13 @@ scoutfs walk-inodes -p "$T_M0" -- data_seq 0 -1 > "$T_TMP.0"
scoutfs walk-inodes -p "$T_M1" -- data_seq 0 -1 > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== concurrent creates make one file"
mkdir "$T_D0/concurrent"
for i in $(t_fs_nrs); do
eval p="\$T_D${i}/concurrent/one-file"
touch "$p" 2>&1 > "$T_TMP.multi-create.$i" &
done
wait
ls "$T_D0/concurrent"
t_pass

View File

@@ -5,57 +5,18 @@
# persistent blocks to create stable block reading scenarios. Instead
# we use triggers to exercise how readers encounter stale blocks.
#
# Trigger retries in the block cache by calling scoutfs df
# which in turn will call scoutfs_ioctl_alloc_detail. This
# is guaranteed to exist, which will force block cache reads.
t_require_commands touch setfattr getfattr
echo "== Issue scoutfs df to force block reads to trigger stale invalidation/retry"
nr=0
inc_wrap_fs_nr()
{
local nr="$(($1 + 1))"
old=$(t_counter block_cache_remove_stale $nr)
t_trigger_arm_silent block_remove_stale $nr
if [ "$nr" == "$T_NR_MOUNTS" ]; then
nr=0
fi
scoutfs df -p "$T_M0" > /dev/null
echo $nr
}
GETFATTR="getfattr --absolute-names"
SETFATTR="setfattr"
echo "== create shared test file"
touch "$T_D0/file"
$SETFATTR -n user.xat -v 0 "$T_D0/file"
#
# Trigger retries in the block cache as we bounce xattr values around
# between sequential pairs of mounts. This is a little silly because if
# either of the mounts is the server then it'll almost certainly have
# their trigger fired prematurely by message handling btree calls while
# working with the t_ helpers long before we work with the xattrs. But
# the block cache stale retry path is still being exercised.
#
echo "== set and get xattrs between mount pairs while retrying"
set_nr=0
get_nr=$(inc_wrap_fs_nr $set_nr)
for i in $(seq 1 10); do
eval set_file="\$T_D${set_nr}/file"
eval get_file="\$T_D${get_nr}/file"
old_set=$(t_counter block_cache_remove_stale $set_nr)
old_get=$(t_counter block_cache_remove_stale $get_nr)
t_trigger_arm_silent block_remove_stale $set_nr
t_trigger_arm_silent block_remove_stale $get_nr
$SETFATTR -n user.xat -v $i "$set_file"
$GETFATTR -n user.xat "$get_file" 2>&1 | t_filter_fs
t_counter_diff_changed block_cache_remove_stale $old_set $set_nr
t_counter_diff_changed block_cache_remove_stale $old_get $get_nr
set_nr="$get_nr"
get_nr=$(inc_wrap_fs_nr $set_nr)
done
t_counter_diff_changed block_cache_remove_stale $old $nr
t_pass

View File

@@ -1,59 +0,0 @@
#
# If bulk work accidentally conflicts in the worst way we'd like to have
# it not result in catastrophic performance. Make sure that each
# instance of bulk work is given the opportunity to get as much as it
# can into the transaction under a lock before the lock is revoked
# and the transaction is committed.
#
t_require_commands setfattr
t_require_mounts 2
NR=3000
echo "== create per mount files"
for m in 0 1; do
eval dir="\$T_D${m}/dir/$m"
t_quiet mkdir -p "$dir"
for a in $(seq 1 $NR); do touch "$dir/$a"; done
done
echo "== time independent modification"
for m in 0 1; do
eval dir="\$T_D${m}/dir/$m"
START=$SECONDS
for a in $(seq 1 $NR); do
setfattr -n user.test_grace -v $a "$dir/$a"
done
echo "mount $m: $((SECONDS - START))" >> $T_TMP.log
done
echo "== time concurrent independent modification"
START=$SECONDS
for m in 0 1; do
eval dir="\$T_D${m}/dir/$m"
(for a in $(seq 1 $NR); do
setfattr -n user.test_grace -v $a "$dir/$a";
done) &
done
wait
IND="$((SECONDS - START))"
echo "ind: $IND" >> $T_TMP.log
echo "== time concurrent conflicting modification"
START=$SECONDS
for m in 0 1; do
eval dir="\$T_D${m}/dir/0"
(for a in $(seq 1 $NR); do
setfattr -n user.test_grace -v $a "$dir/$a";
done) &
done
wait
CONF="$((SECONDS - START))"
echo "conf: $CONF" >> $T_TMP.log
if [ "$CONF" -gt "$((IND * 5))" ]; then
t_fail "conflicting $CONF secs is more than 5x independent $IND secs"
fi
t_pass

View File

@@ -23,9 +23,7 @@ else
NR_MNTS=$T_NR_MOUNTS
fi
# test until final op mount dir wraps
while [ ${op_mnt[$NR_OPS]} == 0 ]; do
while : ; do
# sequentially perform each op from its mount dir
for op in $(seq 0 $((NR_OPS - 1))); do
m=${op_mnt[$op]}
@@ -45,7 +43,7 @@ while [ ${op_mnt[$NR_OPS]} == 0 ]; do
# advance through mnt nrs for each op
i=0
while [ ${op_mnt[$NR_OPS]} == 0 ]; do
while [ $i -lt $NR_OPS ]; do
((op_mnt[$i]++))
if [ ${op_mnt[$i]} -ge $NR_MNTS ]; then
op_mnt[$i]=0
@@ -54,6 +52,9 @@ while [ ${op_mnt[$NR_OPS]} == 0 ]; do
break
fi
done
# done when the last op's mnt nr wrapped
[ $i -ge $NR_OPS ] && break
done
t_pass

View File

@@ -0,0 +1,37 @@
#
# simple renameat2 NOREPLACE unit test
#
t_require_commands dumb_renameat2
t_require_mounts 2
echo "=== renameat2 noreplace flag test"
# give each mount their own dir (lock group) to minimize create contention
mkdir $T_M0/dir0
mkdir $T_M1/dir1
echo "=== run two asynchronous calls to renameat2 NOREPLACE"
for i in $(seq 0 100); do
# prepare inputs in isolation
touch "$T_M0/dir0/old0"
touch "$T_M1/dir1/old1"
# race doing noreplace renames, both can't succeed
dumb_renameat2 -n "$T_M0/dir0/old0" "$T_M0/dir0/sharednew" 2> /dev/null &
pid0=$!
dumb_renameat2 -n "$T_M1/dir1/old1" "$T_M1/dir0/sharednew" 2> /dev/null &
pid1=$!
wait $pid0
rc0=$?
wait $pid1
rc1=$?
test "$rc0" == 0 -a "$rc1" == 0 && t_fail "both renames succeeded"
# blow away possible files for either race outcome
rm -f "$T_M0/dir0/old0" "$T_M1/dir1/old1" "$T_M0/dir0/sharednew" "$T_M1/dir1/sharednew"
done
t_pass

View File

@@ -0,0 +1,149 @@
#
# Some basic tests of online resizing metadata and data devices.
#
statfs_total() {
local single="total_$1_blocks"
local mnt="$2"
scoutfs statfs -s $single -p "$mnt"
}
df_free() {
local md="$1"
local mnt="$2"
scoutfs df -p "$mnt" | awk '($1 == "'$md'") { print $5; exit }'
}
same_totals() {
cur_meta_tot=$(statfs_total meta "$SCR")
cur_data_tot=$(statfs_total data "$SCR")
test "$cur_meta_tot" == "$exp_meta_tot" || \
t_fail "cur total_meta_blocks $cur_meta_tot != expected $exp_meta_tot"
test "$cur_data_tot" == "$exp_data_tot" || \
t_fail "cur total_data_blocks $cur_data_tot != expected $exp_data_tot"
}
#
# make sure that the specified devices have grown by doubling. The
# total blocks can be tested exactly but the df reported total needs
# some slop to account for reserved blocks and concurrent allocation.
#
devices_grew() {
cur_meta_tot=$(statfs_total meta "$SCR")
cur_data_tot=$(statfs_total data "$SCR")
cur_meta_df=$(df_free MetaData "$SCR")
cur_data_df=$(df_free Data "$SCR")
local grow_meta_tot=$(echo "$exp_meta_tot * 2" | bc)
local grow_data_tot=$(echo "$exp_data_tot * 2" | bc)
local grow_meta_df=$(echo "($exp_meta_df * 1.95)/1" | bc)
local grow_data_df=$(echo "($exp_data_df * 1.95)/1" | bc)
if [ "$1" == "meta" ]; then
test "$cur_meta_tot" == "$grow_meta_tot" || \
t_fail "cur total_meta_blocks $cur_meta_tot != grown $grow_meta_tot"
test "$cur_meta_df" -lt "$grow_meta_df" && \
t_fail "cur meta df total $cur_meta_df < grown $grow_meta_df"
exp_meta_tot=$cur_meta_tot
exp_meta_df=$cur_meta_df
shift
fi
if [ "$1" == "data" ]; then
test "$cur_data_tot" == "$grow_data_tot" || \
t_fail "cur total_data_blocks $cur_data_tot != grown $grow_data_tot"
test "$cur_data_df" -lt "$grow_data_df" && \
t_fail "cur data df total $cur_data_df < grown $grow_data_df"
exp_data_tot=$cur_data_tot
exp_data_df=$cur_data_df
fi
}
# first calculate small mkfs based on device size
size_meta=$(blockdev --getsize64 "$T_EX_META_DEV")
size_data=$(blockdev --getsize64 "$T_EX_DATA_DEV")
quarter_meta=$(echo "$size_meta / 4" | bc)
quarter_data=$(echo "$size_data / 4" | bc)
# XXX this is all pretty manual, would be nice to have helpers
echo "== make initial small fs"
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m $quarter_meta -d $quarter_data \
"$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
t_fail "mkfs failed"
SCR="/mnt/scoutfs.enospc"
mkdir -p "$SCR"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"
# then calculate sizes based on blocks that mkfs used
quarter_meta=$(echo "$(statfs_total meta "$SCR") * 64 * 1024" | bc)
quarter_data=$(echo "$(statfs_total data "$SCR") * 4 * 1024" | bc)
whole_meta=$(echo "$quarter_meta * 4" | bc)
whole_data=$(echo "$quarter_data * 4" | bc)
outsize_meta=$(echo "$whole_meta * 2" | bc)
outsize_data=$(echo "$whole_data * 2" | bc)
half_meta=$(echo "$whole_meta / 2" | bc)
half_data=$(echo "$whole_data / 2" | bc)
shrink_meta=$(echo "$quarter_meta / 2" | bc)
shrink_data=$(echo "$quarter_data / 2" | bc)
# and save expected values for checks
exp_meta_tot=$(statfs_total meta "$SCR")
exp_meta_df=$(df_free MetaData "$SCR")
exp_data_tot=$(statfs_total data "$SCR")
exp_data_df=$(df_free Data "$SCR")
echo "== 0s do nothing"
scoutfs resize-devices -p "$SCR"
scoutfs resize-devices -p "$SCR" -m 0
scoutfs resize-devices -p "$SCR" -d 0
scoutfs resize-devices -p "$SCR" -m 0 -d 0
echo "== shrinking fails"
scoutfs resize-devices -p "$SCR" -m $shrink_meta
scoutfs resize-devices -p "$SCR" -d $shrink_data
scoutfs resize-devices -p "$SCR" -m $shrink_meta -d $shrink_data
same_totals
echo "== existing sizes do nothing"
scoutfs resize-devices -p "$SCR" -m $quarter_meta
scoutfs resize-devices -p "$SCR" -d $quarter_data
scoutfs resize-devices -p "$SCR" -m $quarter_meta -d $quarter_data
same_totals
echo "== growing outside device fails"
scoutfs resize-devices -p "$SCR" -m $outsize_meta
scoutfs resize-devices -p "$SCR" -d $outsize_data
scoutfs resize-devices -p "$SCR" -m $outsize_meta -d $outsize_data
same_totals
echo "== resizing meta works"
scoutfs resize-devices -p "$SCR" -m $half_meta
devices_grew meta
echo "== resizing data works"
scoutfs resize-devices -p "$SCR" -d $half_data
devices_grew data
echo "== shrinking back fails"
scoutfs resize-devices -p "$SCR" -m $quarter_meta
scoutfs resize-devices -p "$SCR" -m $quarter_data
same_totals
echo "== resizing again does nothing"
scoutfs resize-devices -p "$SCR" -m $half_meta
scoutfs resize-devices -p "$SCR" -m $half_data
same_totals
echo "== resizing to full works"
scoutfs resize-devices -p "$SCR" -m $whole_meta -d $whole_data
devices_grew meta data
echo "== cleanup extra fs"
umount "$SCR"
rmdir "$SCR"
t_pass

View File

@@ -46,6 +46,35 @@ print_and_run() {
"$@" || echo "returned nonzero status: $?"
}
# fill a buffer with strings that identify their byte offset
offs=""
for o in $(seq 0 7 $((65535 - 7))); do
offs+="$(printf "[%5u]" $o)"
done
change_val_sizes() {
local name="$1"
local file="$2"
local from="$3"
local to="$4"
while : ; do
setfattr -x "$name" "$file" > /dev/null 2>&1
setfattr -n "$name" -v "${offs:0:$from}" "$file"
setfattr -n "$name" -v "${offs:0:$to}" "$file"
if ! diff -u <(echo -n "${offs:0:$to}") <(getfattr --absolute-names --only-values -n "$name" $file) ; then
echo "setting $name from $from to $to failed"
fi
if [ $from == $3 ]; then
from=$4
to=$3
else
break
fi
done
}
echo "=== XATTR_ flag combinations"
touch "$FILE"
print_and_run dumb_setxattr -p "$FILE" -n user.test -v val -c -r
@@ -80,4 +109,17 @@ for i in $(seq 1 $NR); do
test_xattr_lengths $name_len $val_len
done
echo "=== alternate val size between interesting sizes"
name="user.test"
ITEM=896
HDR=$((8 + 9))
# one full item apart
change_val_sizes $name "$FILE" $(((ITEM * 2) - HDR)) $(((ITEM * 3) - HDR))
# multiple full items apart
change_val_sizes $name "$FILE" $(((ITEM * 6) - HDR)) $(((ITEM * 9) - HDR))
# item boundary fence posts
change_val_sizes $name "$FILE" $(((ITEM * 5) - HDR - 1)) $(((ITEM * 13) - HDR + 1))
# min and max
change_val_sizes $name "$FILE" 1 65535
t_pass

View File

@@ -17,8 +17,10 @@ diff_srch_find()
local n="$1"
sync
scoutfs search-xattrs "$n" -p "$T_M0" > "$T_TMP.srch"
find_xattrs -d "$T_D0" -m "$T_M0" -n "$n" > "$T_TMP.find"
scoutfs search-xattrs "$n" -p "$T_M0" > "$T_TMP.srch" || \
t_fail "search-xattrs failed"
find_xattrs -d "$T_D0" -m "$T_M0" -n "$n" > "$T_TMP.find" || \
t_fail "find_xattrs failed"
diff -u "$T_TMP.srch" "$T_TMP.find"
}
@@ -40,6 +42,31 @@ echo "== remove xattr with files"
rm -f "$T_D0/"{create,update}
diff_srch_find scoutfs.srch.test
echo "== trigger small log merges by rotating single block with unmount"
sv=$(t_server_nr)
i=1
while [ "$i" -lt "8" ]; do
for nr in $(t_fs_nrs); do
# not checking, can go over limit by fs_nrs
((i++))
if [ $nr == $sv ]; then
continue;
fi
eval path="\$T_D${nr}/single-block-$i"
touch "$path"
setfattr -n scoutfs.srch.single-block-logs -v $i "$path"
t_umount $nr
t_mount $nr
((i++))
done
done
# wait for srch compaction worker delay
sleep 10
rm -rf "$T_D0/single-block-*"
echo "== create entries in current log"
DIR="$T_D0/dir"
NR=$((LOG / 4))

View File

@@ -0,0 +1,126 @@
t_require_commands touch rm setfattr scoutfs find_xattrs
read_xattr_totals()
{
sync
scoutfs read-xattr-totals -p "$T_M0"
}
echo "== single file"
touch "$T_D0/file-1"
setfattr -n scoutfs.totl.test.1.2.3 -v 1 "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.4.5.6 -v 1 "$T_D0/file-1" 2>&1 | t_filter_fs
read_xattr_totals
echo "== multiple files add up"
touch "$T_D0/file-2"
setfattr -n scoutfs.totl.test.1.2.3 -v 1 "$T_D0/file-2" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.4.5.6 -v 1 "$T_D0/file-2" 2>&1 | t_filter_fs
read_xattr_totals
echo "== removing xattr updates total"
setfattr -x scoutfs.totl.test.4.5.6 "$T_D0/file-2" 2>&1 | t_filter_fs
read_xattr_totals
echo "== updating xattr updates total"
setfattr -n scoutfs.totl.test.1.2.3 -v 10 "$T_D0/file-2" 2>&1 | t_filter_fs
read_xattr_totals
echo "== removing files update total"
rm -f "$T_D0/file-1"
read_xattr_totals
rm -f "$T_D0/file-2"
read_xattr_totals
echo "== multiple files/names in one transaction"
for a in $(seq 1 10); do
touch "$T_D0/file-$a"
setfattr -n scoutfs.totl.test.1.2.3 -v $a "$T_D0/file-$a" 2>&1 | t_filter_fs
done
read_xattr_totals
rm -rf "$T_D0"/file-[0-9]*
echo "== testing invalid names"
touch "$T_D0/invalid"
setfattr -n scoutfs.totl.test... -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test..2.3 -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1..3 -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2. -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1 -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2 -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
echo "== testing invalid values"
setfattr -n scoutfs.totl.test.1.2.3 -v "+1" "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2.3 -v "10." "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2.3 -v "-" "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2.3 -v "junk10" "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2.3 -v "10junk" "$T_D0/invalid" 2>&1 | t_filter_fs
rm -f "$T_D0/invalid"
echo "== larger population that could merge"
NR=5000
TOTS=100
CHECK=100
PER_DIR=1000
PER_FILE=10
declare -A totals counts
LOTS="$T_D0/lots"
for i in $(seq 0 $PER_DIR $NR); do
p="$LOTS/$((i / PER_DIR))"
mkdir -p $p
done
for i in $(seq 0 $PER_FILE $NR); do
p="$LOTS/$((i / PER_DIR))/file-$((i / PER_FILE))"
touch $p
done
for phase in create update remove; do
for i in $(seq 0 $NR); do
p="$LOTS/$((i / PER_DIR))/file-$((i / PER_FILE))"
t=$((i % TOTS))
n="scoutfs.totl.test-$i.$t.0.0"
case $phase in
create)
v="$i"
setfattr -n "$n" -v "$v" "$p" 2>&1 >> $T_TMP.sfa
((totals[$t]+=$v))
((counts[$t]++))
;;
update)
v=$((i * 3))
delta=$((i * 2))
setfattr -n "$n" -v "$v" "$p" 2>&1 >> $T_TMP.sfa
((totals[$t]+=$delta))
;;
remove)
v=$((i * 3))
setfattr -x "$n" "$p" 2>&1 >> $T_TMP.sfa
((totals[$t]-=$v))
((counts[$t]--))
;;
esac
if [ "$i" -gt 0 -a "$((i % CHECK))" == "0" ]; then
echo "checking $phase $i" > $T_TMP.check_arr
echo "checking $phase $i" > $T_TMP.check_read
( for k in ${!totals[@]}; do
echo "$k.0.0 = ${totals[$k]}, ${counts[$k]}"
done ) | grep -v "= 0, 0$" | sort -n >> $T_TMP.check_arr
sync
read_xattr_totals | sort -n >> $T_TMP.check_read
diff -u $T_TMP.check_arr $T_TMP.check_read || \
t_fail "totals read didn't match expected arrays"
fi
done
done
rm -rf "$T_D0/merging"
t_pass

View File

@@ -60,13 +60,9 @@ EOF
cat << EOF > local.exclude
generic/003 # missing atime update in buffered read
generic/023 # renameat2 not implemented
generic/024 # renameat2 not implemented
generic/025 # renameat2 not implemented
generic/029 # mmap missing
generic/030 # mmap missing
generic/075 # file content mismatch failures (fds, etc)
generic/078 # renameat2 not implemented
generic/080 # mmap missing
generic/103 # enospc causes trans commit failures
generic/105 # needs trigage: something about acls

View File

@@ -7,7 +7,7 @@ message_output()
error_message()
{
message_output "$@" >> /dev/stderr
message_output "$@" >&2
}
error_exit()
@@ -18,7 +18,7 @@ error_exit()
log_message()
{
message_output "$@" >> /dev/stdout
message_output "$@"
}
# restart if we catch hup to re-read the config

View File

@@ -0,0 +1,3 @@
SCOUTFS_FENCED_DELAY=1
SCOUTFS_FENCED_RUN=/usr/libexec/scoutfs-fenced/run/local-force-unmount
SCOUTFS_FENCED_RUN_ARGS=""

View File

@@ -0,0 +1,11 @@
[Unit]
Description=ScoutFS fenced
[Service]
Restart=on-failure
RestartSec=5s
StartLimitBurst=5
ExecStart=/usr/libexec/scoutfs-fenced/scoutfs-fenced
[Install]
WantedBy=default.target

View File

@@ -142,7 +142,142 @@ If the
file is written to then the server cannot make forward progress and
shuts down. The request can similarly enter an errored state if enough
time passes before userspace completes the request.
.SH EXTENDED ATTRIBUTE TAGS
.B scoutfs
adds the
.IB scoutfs.
extended attribute namespace which uses a system of tags to extend the
functionality of extended attributes. Immediately following the
scoutfs. prefix is a series of tag words separated by dots.
Any text starting after the last recognized tag is considered the xattr
name and is not parsed.
.sp
Tags may be combined in any order. Specifying a tag more than once
will return an error. There is no explicit boundary between the end of
tags and the start of the name so unknown or incorrect tags will be
successfully parsed as part of the name of the xattr. Tags can only be
created, updated, or removed with the CAP_SYS_ADMIN capability.
The following tags are currently supported:
.RS
.TP
.B .hide.
Attributes with the .hide. tag are not visible to the
.BR listxattr(2)
system call. They will instead be included in the output of the
.IB LISTXATTR_HIDDEN
ioctl. This is meant to be used by archival management agents to store
metadata that is bound to a specific volume and should not be
transferred with the file by tools that read extended attributes, like
.BR tar(1) .
.TP
.B .srch.
Attributes with the .srch. tag are indexed so that they can be
found by the
.IB SEARCH_XATTRS
ioctl. The search ioctl takes an extended attribute name and returns
the inode number of all the inodes which contain an extended attribute
with that name. The indexing structures behind .srch. tags are designed
to efficiently handle a large number of .srch. attributes per file with
no limits on the number of indexed files.
.TP
.B .totl.
Attributes with the .totl. tag are used to efficiently maintain counts
across all files in the system. The attribute's name must end in three
64-bit values separated by dots that specify the global total that the
extended attribute will contribute to. The value of the extended
attribute is a string representation of the 64-bit quantity which will be
added to the total. As attributes are added, updated, or removed (and
particularly as a file is finally deleted), the corresponding global
total is also updated by the file system. All the totals with their
name, total value, and a count of contributing attributes can be read
with the
.IB READ_XATTR_TOTALS
ioctl.
.RE
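.sp
As an illustrative sketch (the mount point, file, and total name below
are placeholders, not requirements), a .totl. attribute contributes its
string value to a shared total which can be read back through the
.BR scoutfs (8)
utility:
.sp
.nf
# contribute 7 to the global total named 1.2.3
setfattr -n scoutfs.totl.example.1.2.3 -v 7 /mnt/scoutfs/file
# print each total as "name = total, count" via READ_XATTR_TOTALS
scoutfs read-xattr-totals -p /mnt/scoutfs
.fi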
.SH FORMAT VERSION
The format version defines the layout and use of structures stored on
devices and passed over the network. The version is incremented for
every change in structures that is not backwards compatible with
previous versions. A single version implies all changes, individual
changes can't be selectively adopted.
.sp
As a new file system is created the format version is stored in both of
the super blocks written to the metadata and data devices. By default
the greatest supported version is written while an older supported
version may be specified.
.sp
During mount the kernel module verifies that the format versions stored
in both of the super blocks match and are supported. That version
defines the set of features and behavior of all the mounts using the
file system, including the network protocol that is communicated over
the wire.
.sp
Any combination of software release versions that support the current
format version of the file system can safely be used concurrently. This
allows for rolling software updates of multiple mounts using a shared
file system.
.sp
To use new incompatible features added in newer format versions, the super blocks must
be updated. This can currently only be safely performed on a
completely and cleanly unmounted file system. The
.BR scoutfs (8)
.I change-format-version
command can be used with the
.I --offline
option to write a newer supported version into the super blocks. It
will fail if it sees any indication of unresolved mounts that may be
using the devices: either active quorum members working with their
quorum blocks or persistent records of mounted clients that haven't been
resolved. Like creating a new file system, there is no protection
against multiple invocations of the change command corrupting the
system. Once the version is updated older software can no longer use
the file system so this change should be performed with care. Once the
newer format version is successfully written it can be mounted and newer
features can be used.
.sp
Each layer of the system can show its supported format versions:
.RS
.TP
.B Userspace utilities
.B scoutfs --help
includes the range of supported format versions for a given release
of the userspace utilities.
.TP
.B Kernel module
.I modinfo MODULE
shows the range of supported versions for a kernel module file in the
.I scoutfs_format_version_min
and
.I scoutfs_format_version_max
fields.
.TP
.B Inserted module
The supported version range of an inserted module can be found in
.I .note.scoutfs_format_version_min
and
.I .note.scoutfs_format_version_max
notes files in the sysfs notes directory for the inserted module,
typically
.I /sys/module/scoutfs/notes/
.TP
.B Metadata and data devices
.I scoutfs print DEVICE
shows the
.I fmt_vers
field in the initial output of the super block on the device.
.TP
.B Mounted filesystem
The version that a mount is using is shown in the
.I format_version
file in the mount's sysfs directory, typically
.I /sys/fs/scoutfs/f.FSID.r.RID/
.RE
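.sp
As a rough example (the grep patterns and the FSID and RID components
of the sysfs path are illustrative placeholders), each of the version
sources above might be inspected with:
.sp
.nf
scoutfs --help 2>&1 | grep -i version
modinfo scoutfs | grep scoutfs_format_version
cat /sys/fs/scoutfs/f.FSID.r.RID/format_version
.fi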
.SH CORRUPTION DETECTION
A
.B scoutfs

View File

@@ -14,6 +14,68 @@ option will, when the option is omitted, fall back to using the value of the
environment variable. If that variable is also absent the current working
directory will be used.
.TP
.BI "change-format-version [-V, --format-version VERS] [-F|--offline META-DEVICE DATA-DEVICE]"
.sp
Change the format version of an existing file system. The maximum
supported version is used by default. A specific version in the range
can be specified. The range of supported versions is shown in the
output of --help.
.RS 1.0i
.PD 0
.TP
.sp
.B "-F, --offline META-DEVICE DATA-DEVICE"
Change the format version by writing directly to the metadata and data
devices. Like mkfs, this writes directly to the devices without
protection and must only be used on completely unmounted devices. The
command will fail if it sees evidence of active quorum use of the device
or of previously connected clients which haven't been reclaimed. The
only way to avoid these checks is to fully mount and cleanly unmount the
file system.
.sp
This is not an atomic operation because it writes to blocks on two
devices. Write failure can result in the versions becoming out of sync,
which will prevent the system from mounting. To recover, the error must
be resolved so the command can be repeated and successfully write to
the super blocks on both devices.
.RE
.PD
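.sp
A minimal sketch of the offline flow (the device paths are
placeholders): with the file system cleanly unmounted everywhere, write
the new version and then confirm it in the super block:
.sp
.nf
scoutfs change-format-version --offline /dev/meta_dev /dev/data_dev
scoutfs print /dev/meta_dev | grep fmt_vers
.fi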
.TP
.BI "change-quorum-config {-Q|--quorum-slot} NR,ADDR,PORT [-F|--offline META-DEVICE DATA-DEVICE]"
.sp
Change the quorum configuration for an existing file system. The new
configuration completely replaces the old configuration. Any slots
from the old configuration that should be retained must be described
with arguments in the new configuration.
.sp
Currently the configuration may only be changed offline.
.sp
.RS 1.0i
.PD 0
.TP
.B "-Q, --quorum-slot NR,ADDR,PORT"
The quorum configuration is built by specifying configured slots with
multiple arguments as described in the
.B mkfs
command.
.TP
.B "-F, --offline META-DEVICE"
Perform the change offline by updating the superblock in the metadata
device. The command will read the super block and refuse to make the
change if it sees any evidence that the metadata device is currently in
use. The file system must be successfully unmounted after possibly
recovering any previously unresolved mounts for the change to be
successful. After the change succeeds the newly configured slots can
be used by mounts.
.sp
The offline change directly reads from and writes to the device and does
not protect against concurrent use of the device. It must be carefully
run when the file system will not be mounted.
.RE
.PD
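.sp
A hedged sketch (slot numbers, addresses, and the device path are
placeholders) that replaces the entire old configuration with two slots
while the file system is offline, giving only the metadata device as
described above:
.sp
.nf
scoutfs change-quorum-config -F /dev/meta_dev -Q 0,10.0.0.1,53000 -Q 1,10.0.0.2,53000
.fi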
.TP
.BI "df [-h|--human-readable] [-p|--path PATH]"
.sp
@@ -32,7 +94,7 @@ A path within a ScoutFS filesystem.
.PD
.TP
.BI "mkfs META-DEVICE DATA-DEVICE {-Q|--quorum-slot} NR,ADDR,PORT [-m|--max-meta-size SIZE] [-d|--max-data-size SIZE] [-z|--data-alloc-zone-blocks BLOCKS] [-f|--force] [-A|--allow-small-size]"
.BI "mkfs META-DEVICE DATA-DEVICE {-Q|--quorum-slot} NR,ADDR,PORT [-m|--max-meta-size SIZE] [-d|--max-data-size SIZE] [-z|--data-alloc-zone-blocks BLOCKS] [-f|--force] [-A|--allow-small-size] [-V|--format-version VERS]"
.sp
Initialize a new ScoutFS filesystem on the target devices. Since ScoutFS uses
separate block devices for its metadata and data storage, two are required.
@@ -99,10 +161,74 @@ Set the data_alloc_zone_blocks volume option, as described in
.TP
.B "-f, --force"
Ignore presence of existing data on the data and metadata devices.
.TP
.B "-V, --format-verson"
Specify the format version to use in the newly created file system.
The range of supported versions is visible in the output of
+.BR scoutfs (8)
+.I --help
.
.RE
.PD
.TP
.BI "resize-devices [-p|--path PATH] [-m|--meta-size SIZE] [-d|--data-size SIZE]"
.sp
Resize the metadata or data devices of a mounted ScoutFS filesystem.
.sp
ScoutFS metadata has free extent records and fields in the super block
that reflect the size of the devices in use. This command sends a
request to the server to change the size of the device that can be used
by updating free extents and setting the super block fields.
.sp
The specified sizes are in bytes and are translated into block counts.
If the specified sizes are not a multiple of the metadata or data block
sizes then a message is output and the resized size is truncated down to
the next whole block. Specifying either a size of 0 or the current
device size makes no change. The current size of the devices can be
seen, in units of their respective block sizes, in the total_meta_blocks
and total_data_blocks fields returned by the scoutfs statfs command (via
the statfs_more ioctl).
.sp
Shrinking is not supported. Specifying a smaller size for either device
will return an error and neither device will be resized.
.sp
Specifying a larger size will expand the initial size of the device that
will be used. Free space records are added for the expanded region and
can be used once the resizing transaction is complete.
.sp
The resizing action is performed in a transaction on the server. This
command will hang until a server is elected and running and can service
the request. The server serializes any concurrent requests to resize.
.sp
The new sizes must fit within the current sizes of the mounted devices.
Presumably this command is being performed as part of a larger
coordinated resize of the underlying devices. The device must be
expanded before ScoutFS can use the larger size, and ScoutFS would have
to stop using a region before that region could be removed from the
device to shrink (which is not currently supported).
.sp
The resize will be committed by the server before the response is sent
to the client. The system can be using the new device size before the
result is communicated through the client and this command completes.
The client could crash and the server could still have performed the
resize.
.RS 1.0i
.PD 0
.TP
.sp
.B "-p, --path PATH"
A path in the mounted ScoutFS filesystem which will have its devices
resized.
.TP
.B "-m, --meta-size SIZE"
.B "-d, --data-size SIZE"
The new size of the metadata or data device to use, in bytes. Size is given as
an integer followed by a unit suffix: "K", "M", "G", "T", or "P", to denote
kibibytes, mebibytes, etc.
.RE
.PD
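.sp
A brief sketch (the mount point and sizes are placeholders) that grows
both devices of a mounted file system after the underlying devices have
already been expanded:
.sp
.nf
scoutfs resize-devices -p /mnt/scoutfs -m 512G -d 100T
.fi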
.BI "stat FILE [-s|--single-field FIELD-NAME]"
.sp
Display ScoutFS-specific metadata fields for the given file.

View File

@@ -56,10 +56,14 @@ install -m 644 -D src/ioctl.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/ioctl.h
install -m 644 -D src/format.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/format.h
install -m 755 -D fenced/scoutfs-fenced $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/scoutfs-fenced
install -m 755 -D fenced/local-force-unmount $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/run/local-force-unmount
install -m 644 -D fenced/scoutfs-fenced.service $RPM_BUILD_ROOT%{_unitdir}/scoutfs-fenced.service
install -m 644 -D fenced/scoutfs-fenced.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-fenced.conf.example
%files
%defattr(644,root,root,755)
%{_mandir}/man*/scoutfs*.gz
%{_unitdir}/scoutfs-fenced.service
%{_sysconfdir}/scoutfs
%defattr(755,root,root,755)
%{_sbindir}/scoutfs
%{_libexecdir}/scoutfs-fenced

@@ -75,6 +75,9 @@ void btree_append_item(struct scoutfs_btree_block *bt,
le16_add_cpu(&bt->total_item_bytes, sizeof(struct scoutfs_btree_item));
item->key = *key;
item->seq = cpu_to_le64(1);
item->flags = 0;
leaf_item_hash_insert(bt, &item->key,
cpu_to_le16((void *)item - (void *)bt));
if (val_len == 0)

@@ -0,0 +1,247 @@
#define _GNU_SOURCE /* O_DIRECT */
#include <unistd.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/time.h>
#include <uuid/uuid.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <assert.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <ctype.h>
#include <inttypes.h>
#include <argp.h>
#include "sparse.h"
#include "cmd.h"
#include "util.h"
#include "format.h"
#include "parse.h"
#include "crc.h"
#include "rand.h"
#include "dev.h"
#include "key.h"
#include "bitops.h"
#include "btree.h"
#include "leaf_item_hash.h"
#include "blkid.h"
#include "quorum.h"
struct change_fmt_vers_args {
char *meta_device;
char *data_device;
u64 fmt_vers;
bool offline;
};
static int do_change_fmt_vers(struct change_fmt_vers_args *args)
{
struct scoutfs_super_block *meta_super = NULL;
struct scoutfs_super_block *data_super = NULL;
bool wrote_meta = false;
char uuid_str[37];
int meta_fd = -1;
int data_fd = -1;
int ret;
meta_fd = open(args->meta_device, O_DIRECT | O_SYNC | O_RDWR | O_EXCL);
if (meta_fd < 0) {
ret = -errno;
fprintf(stderr, "failed to open meta device '%s': %s (%d)\n",
args->meta_device, strerror(errno), errno);
goto out;
}
data_fd = open(args->data_device, O_DIRECT | O_SYNC | O_RDWR | O_EXCL);
if (data_fd < 0) {
ret = -errno;
fprintf(stderr, "failed to open data device '%s': %s (%d)\n",
args->data_device, strerror(errno), errno);
goto out;
}
ret = read_block_verify(meta_fd, SCOUTFS_BLOCK_MAGIC_SUPER, 0, SCOUTFS_SUPER_BLKNO,
SCOUTFS_BLOCK_SM_SHIFT, (void **)&meta_super);
if (ret) {
fprintf(stderr, "failed to read meta super block: %s (%d)\n",
strerror(-ret), -ret);
goto out;
}
ret = read_block_verify(data_fd, SCOUTFS_BLOCK_MAGIC_SUPER,
le64_to_cpu(meta_super->hdr.fsid), SCOUTFS_SUPER_BLKNO,
SCOUTFS_BLOCK_SM_SHIFT, (void **)&data_super);
if (ret) {
fprintf(stderr, "failed to read data super block: %s (%d)\n",
strerror(-ret), -ret);
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) == args->fmt_vers &&
meta_super->fmt_vers == data_super->fmt_vers) {
printf("both metadata and data device format version are already %llu, nothing to do.\n",
args->fmt_vers);
ret = 0;
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN ||
le64_to_cpu(meta_super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX) {
fprintf(stderr, "meta super block has format version %llu outside of supported version range %u-%u",
le64_to_cpu(meta_super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
ret = -EINVAL;
goto out;
}
if (le64_to_cpu(data_super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN ||
le64_to_cpu(data_super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX) {
fprintf(stderr, "data super block has format version %llu outside of supported version range %u-%u",
le64_to_cpu(data_super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
ret = -EINVAL;
goto out;
}
ret = meta_super_in_use(meta_fd, meta_super);
if (ret < 0) {
if (ret == -EBUSY)
fprintf(stderr, "The filesystem must be fully recovered and cleanly unmounted to change the format version\n");
goto out;
}
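/*
 * Write the meta super first, then the data super; a failure between
 * the two writes leaves them out of sync, which is reported so the
 * command can be retried.
 */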
if (le64_to_cpu(meta_super->fmt_vers) != args->fmt_vers) {
meta_super->fmt_vers = cpu_to_le64(args->fmt_vers);
ret = write_block(meta_fd, SCOUTFS_BLOCK_MAGIC_SUPER, meta_super->hdr.fsid, 1,
SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT, &meta_super->hdr);
if (ret)
goto out;
wrote_meta = true;
}
if (le64_to_cpu(data_super->fmt_vers) != args->fmt_vers) {
data_super->fmt_vers = cpu_to_le64(args->fmt_vers);
ret = write_block(data_fd, SCOUTFS_BLOCK_MAGIC_SUPER, data_super->hdr.fsid, 1,
SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT, &data_super->hdr);
if (ret < 0 && wrote_meta) {
fprintf(stderr, "Error writing data super block after writing the meta\n"
"super block. The two super blocks may now be out of sync which\n"
"would prevent mounting. Correct the source of the write error\n"
"and retry changing the version to write both super blocks.\n");
goto out;
}
}
uuid_unparse(meta_super->uuid, uuid_str);
printf("Successfully updated format version for scoutfs filesystem:\n"
" meta device path: %s\n"
" data device path: %s\n"
" fsid: %llx\n"
" uuid: %s\n"
" format version: %llu\n",
args->meta_device,
args->data_device,
le64_to_cpu(meta_super->hdr.fsid),
uuid_str,
le64_to_cpu(meta_super->fmt_vers));
out:
if (meta_super)
free(meta_super);
if (data_super)
free(data_super);
if (meta_fd != -1)
close(meta_fd);
if (data_fd != -1)
close(data_fd);
return ret;
}
static int parse_opt(int key, char *arg, struct argp_state *state)
{
struct change_fmt_vers_args *args = state->input;
int ret;
switch (key) {
case 'F':
args->offline = true;
break;
case 'V':
ret = parse_u64(arg, &args->fmt_vers);
if (ret)
return ret;
if (args->fmt_vers < SCOUTFS_FORMAT_VERSION_MIN ||
args->fmt_vers > SCOUTFS_FORMAT_VERSION_MAX)
argp_error(state, "format-version %llu is outside supported range of %u-%u",
args->fmt_vers, SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
break;
case ARGP_KEY_ARG:
if (!args->meta_device)
args->meta_device = strdup_or_error(state, arg);
else if (!args->data_device)
args->data_device = strdup_or_error(state, arg);
else
argp_error(state, "more than two device arguments given");
break;
case ARGP_KEY_FINI:
if (!args->offline)
argp_error(state, "must specify --offline");
if (!args->meta_device)
argp_error(state, "no metadata device argument given");
if (!args->data_device)
argp_error(state, "no data device argument given");
break;
default:
break;
}
return 0;
}
static struct argp_option options[] = {
{ "offline", 'F', NULL, 0, "Write format version in offline device super blocks"},
{ "format-version", 'V', "VERS", 0, "Specify a format version within supported range ("SCOUTFS_FORMAT_VERSION_MIN_STR"-"SCOUTFS_FORMAT_VERSION_MAX_STR", default "SCOUTFS_FORMAT_VERSION_MAX_STR")"},
{ NULL }
};
static struct argp argp = {
options,
parse_opt,
"",
"Change format version of an existing ScoutFS filesystem"
};
static int change_fmt_vers_cmd(int argc, char *argv[])
{
struct change_fmt_vers_args change_fmt_vers_args = {
.offline = false,
.fmt_vers = SCOUTFS_FORMAT_VERSION_MAX,
};
int ret;
ret = argp_parse(&argp, argc, argv, 0, NULL, &change_fmt_vers_args);
if (ret)
return ret;
return do_change_fmt_vers(&change_fmt_vers_args);
}
static void __attribute__((constructor)) change_fmt_vers_ctor(void)
{
cmd_register_argp("change-format-version", &argp, GROUP_CORE, change_fmt_vers_cmd);
}

@@ -0,0 +1,171 @@
#define _GNU_SOURCE /* O_DIRECT */
#include <unistd.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <uuid/uuid.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <assert.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <inttypes.h>
#include <argp.h>
#include "sparse.h"
#include "cmd.h"
#include "util.h"
#include "format.h"
#include "parse.h"
#include "dev.h"
#include "quorum.h"
struct change_quorum_args {
char *meta_device;
bool offline;
int nr_slots;
struct scoutfs_quorum_slot slots[SCOUTFS_QUORUM_MAX_SLOTS];
};
static int do_change_quorum(struct change_quorum_args *args)
{
struct scoutfs_super_block *meta_super = NULL;
char uuid_str[37];
int meta_fd = -1;
int ret;
meta_fd = open(args->meta_device, O_DIRECT | O_SYNC | O_RDWR | O_EXCL);
if (meta_fd < 0) {
ret = -errno;
fprintf(stderr, "failed to open meta device '%s': %s (%d)\n",
args->meta_device, strerror(errno), errno);
goto out;
}
ret = read_block_verify(meta_fd, SCOUTFS_BLOCK_MAGIC_SUPER, 0, SCOUTFS_SUPER_BLKNO,
SCOUTFS_BLOCK_SM_SHIFT, (void **)&meta_super);
if (ret) {
fprintf(stderr, "failed to read meta super block: %s (%d)\n",
strerror(-ret), -ret);
goto out;
}
ret = meta_super_in_use(meta_fd, meta_super);
if (ret < 0) {
if (ret == -EBUSY)
fprintf(stderr, "The filesystem must be fully recovered and cleanly unmounted to change the quorum config\n");
goto out;
}
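/* replace the slot table wholesale and bump the quorum config version */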
assert(sizeof(meta_super->qconf.slots) == sizeof(args->slots));
memcpy(meta_super->qconf.slots, args->slots, sizeof(meta_super->qconf.slots));
le64_add_cpu(&meta_super->qconf.version, 1);
ret = write_block(meta_fd, SCOUTFS_BLOCK_MAGIC_SUPER, meta_super->hdr.fsid, 1,
SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT, &meta_super->hdr);
if (ret)
goto out;
uuid_unparse(meta_super->uuid, uuid_str);
printf("Successfully changed quorum config for scoutfs filesystem:\n"
" meta device path: %s\n"
" fsid: %llx\n"
" uuid: %s\n"
" quorum config version: %llu\n"
" quorum slots: ",
args->meta_device,
le64_to_cpu(meta_super->hdr.fsid),
uuid_str,
le64_to_cpu(meta_super->qconf.version));
print_quorum_slots(meta_super->qconf.slots, array_size(meta_super->qconf.slots),
" ");
out:
if (meta_super)
free(meta_super);
if (meta_fd != -1)
close(meta_fd);
return ret;
}
static int parse_opt(int key, char *arg, struct argp_state *state)
{
struct change_quorum_args *args = state->input;
struct scoutfs_quorum_slot slot;
int ret;
switch (key) {
case 'F':
args->offline = true;
break;
case 'Q':
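/* parse_quorum_slot() is expected to return the parsed slot nr, or a negative errno */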
ret = parse_quorum_slot(&slot, arg);
if (ret < 0)
return ret;
if (args->slots[ret].addr.v4.family != cpu_to_le16(SCOUTFS_AF_NONE))
argp_error(state, "Quorum slot %u already specified before slot '%s'\n",
ret, arg);
args->slots[ret] = slot;
args->nr_slots++;
break;
case ARGP_KEY_ARG:
if (!args->meta_device)
args->meta_device = strdup_or_error(state, arg);
else
argp_error(state, "more than one metadata device argument given");
break;
case ARGP_KEY_FINI:
if (!args->offline)
argp_error(state, "must specify --offline");
if (!args->meta_device)
argp_error(state, "no metadata device argument given");
if (!args->nr_slots)
argp_error(state, "must specify at least one quorum slot with --quorum-slot|-Q");
if (!valid_quorum_slots(args->slots))
argp_error(state, "invalid quorum slot configuration");
break;
default:
break;
}
return 0;
}
static struct argp_option options[] = {
{ "quorum-slot", 'Q', "NR,ADDR,PORT", 0, "Specify quorum slot addresses [Required]"},
{ "offline", 'F', NULL, 0, "Write format version in offline device super blocks [Currently Required]"},
{ NULL }
};
static struct argp argp = {
options,
parse_opt,
"",
"Change quorum slots and addresses of an existing ScoutFS filesystem"
};
static int change_quorum_cmd(int argc, char *argv[])
{
struct change_quorum_args change_quorum_args = {
.offline = false,
};
int ret;
ret = argp_parse(&argp, argc, argv, 0, NULL, &change_quorum_args);
if (ret)
return ret;
return do_change_quorum(&change_quorum_args);
}
static void __attribute__((constructor)) change_quorum_ctor(void)
{
cmd_register_argp("change-quorum-config", &argp, GROUP_CORE, change_quorum_cmd);
}

@@ -8,6 +8,7 @@
#include "cmd.h"
#include "util.h"
#include "format.h"
static struct argp_command {
char *name;
@@ -69,6 +70,9 @@ static void usage(void)
fprintf(stderr, "Selected fs defaults to current working directory.\n");
fprintf(stderr, "See <command> --help for more details.\n");
fprintf(stderr, "\nSupported format version: %u-%u\n",
SCOUTFS_FORMAT_VERSION_MIN, SCOUTFS_FORMAT_VERSION_MAX);
fprintf(stderr, "\nCore admin:\n");
print_cmds_for_group(GROUP_CORE);
fprintf(stderr, "\nAdditional Information:\n");

@@ -48,7 +48,6 @@ static int do_df(struct df_args *args)
if (fd < 0)
return fd;
sfm.valid_bytes = sizeof(struct scoutfs_ioctl_statfs_more);
ret = ioctl(fd, SCOUTFS_IOC_STATFS_MORE, &sfm);
if (ret < 0) {
fprintf(stderr, "statfs_more returned %d: error %s (%d)\n",

@@ -31,31 +31,8 @@
#include "btree.h"
#include "leaf_item_hash.h"
#include "blkid.h"
#include "quorum.h"
/*
* Update the block header fields and write out the block.
*/
static int write_block(int fd, u32 magic, __le64 fsid, u64 seq, u64 blkno,
int shift, struct scoutfs_block_header *hdr)
{
size_t size = 1ULL << shift;
ssize_t ret;
hdr->magic = cpu_to_le32(magic);
hdr->fsid = fsid;
hdr->blkno = cpu_to_le64(blkno);
hdr->seq = cpu_to_le64(seq);
hdr->crc = cpu_to_le32(crc_block(hdr, size));
ret = pwrite(fd, hdr, size, blkno << shift);
if (ret != size) {
fprintf(stderr, "write to blkno %llu returned %zd: %s (%d)\n",
blkno, ret, strerror(errno), errno);
return -errno;
}
return 0;
}
/*
* Return the order of the length of a free extent, which we define as
@@ -134,6 +111,7 @@ struct mkfs_args {
unsigned long long max_meta_size;
unsigned long long max_data_size;
u64 data_alloc_zone_blocks;
u64 fmt_vers;
bool force;
bool allow_small_size;
int nr_slots;
@@ -162,7 +140,6 @@ static int do_mkfs(struct mkfs_args *args)
int data_fd = -1;
char uuid_str[37];
void *zeros = NULL;
char *indent;
u64 blkno;
u64 meta_size;
u64 data_size;
@@ -236,20 +213,18 @@ static int do_mkfs(struct mkfs_args *args)
/* partially initialize the super so we can use it to init others */
memset(super, 0, SCOUTFS_BLOCK_SM_SIZE);
super->version = cpu_to_le64(SCOUTFS_INTEROP_VERSION);
super->fmt_vers = cpu_to_le64(args->fmt_vers);
uuid_generate(super->uuid);
super->next_ino = cpu_to_le64(SCOUTFS_ROOT_INO + 1);
super->next_ino = cpu_to_le64(round_up(SCOUTFS_ROOT_INO + 1, SCOUTFS_LOCK_INODE_GROUP_NR));
super->inode_count = cpu_to_le64(1);
super->seq = cpu_to_le64(1);
super->total_meta_blocks = cpu_to_le64(last_meta + 1);
super->first_meta_blkno = cpu_to_le64(next_meta);
super->last_meta_blkno = cpu_to_le64(last_meta);
super->total_data_blocks = cpu_to_le64(last_data - first_data + 1);
super->first_data_blkno = cpu_to_le64(first_data);
super->last_data_blkno = cpu_to_le64(last_data);
super->total_data_blocks = cpu_to_le64(last_data + 1);
assert(sizeof(args->slots) ==
member_sizeof(struct scoutfs_super_block, qconf.slots));
memcpy(super->qconf.slots, args->slots, sizeof(args->slots));
super->qconf.version = cpu_to_le64(1);
if (invalid_data_alloc_zone_blocks(le64_to_cpu(super->total_data_blocks),
args->data_alloc_zone_blocks)) {
@@ -320,7 +295,7 @@ static int do_mkfs(struct mkfs_args *args)
blkno = next_meta++;
ret = write_alloc_root(meta_fd, fsid, &super->data_alloc, bt,
1, blkno, first_data,
le64_to_cpu(super->total_data_blocks));
last_data - first_data + 1);
if (ret < 0)
goto out;
@@ -360,68 +335,44 @@ static int do_mkfs(struct mkfs_args *args)
}
/* write the super block to data dev and meta dev */
ret = write_block(data_fd, SCOUTFS_BLOCK_MAGIC_SUPER, fsid, 1,
SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT,
&super->hdr);
ret = write_block_sync(data_fd, SCOUTFS_BLOCK_MAGIC_SUPER, fsid, 1,
SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT,
&super->hdr);
if (ret)
goto out;
if (fsync(data_fd)) {
ret = -errno;
fprintf(stderr, "failed to fsync '%s': %s (%d)\n",
args->data_device, strerror(errno), errno);
goto out;
}
super->flags |= cpu_to_le64(SCOUTFS_FLAG_IS_META_BDEV);
ret = write_block(meta_fd, SCOUTFS_BLOCK_MAGIC_SUPER, fsid,
1, SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT,
&super->hdr);
ret = write_block_sync(meta_fd, SCOUTFS_BLOCK_MAGIC_SUPER, fsid,
1, SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT,
&super->hdr);
if (ret)
goto out;
if (fsync(meta_fd)) {
ret = -errno;
fprintf(stderr, "failed to fsync '%s': %s (%d)\n",
args->meta_device, strerror(errno), errno);
goto out;
}
uuid_unparse(super->uuid, uuid_str);
printf("Created scoutfs filesystem:\n"
" meta device path: %s\n"
" data device path: %s\n"
" fsid: %llx\n"
" version: %llx\n"
" uuid: %s\n"
" 64KB metadata blocks: "SIZE_FMT"\n"
" 4KB data blocks: "SIZE_FMT"\n"
" quorum slots: ",
" meta device path: %s\n"
" data device path: %s\n"
" fsid: %llx\n"
" uuid: %s\n"
" format version: %llu\n"
" 64KB metadata blocks: "SIZE_FMT"\n"
" 4KB data blocks: "SIZE_FMT"\n"
" quorum config version: %llu\n"
" quorum slots: ",
args->meta_device,
args->data_device,
le64_to_cpu(super->hdr.fsid),
le64_to_cpu(super->version),
uuid_str,
le64_to_cpu(super->fmt_vers),
SIZE_ARGS(le64_to_cpu(super->total_meta_blocks),
SCOUTFS_BLOCK_LG_SIZE),
SIZE_ARGS(le64_to_cpu(super->total_data_blocks),
SCOUTFS_BLOCK_SM_SIZE));
SCOUTFS_BLOCK_SM_SIZE),
le64_to_cpu(super->qconf.version));
indent = "";
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
struct scoutfs_quorum_slot *sl = &super->qconf.slots[i];
struct in_addr in;
if (sl->addr.v4.family != cpu_to_le16(SCOUTFS_AF_IPV4))
continue;
in.s_addr = htonl(le32_to_cpu(sl->addr.v4.addr));
printf("%s%u: %s:%u", indent,
i, inet_ntoa(in), le16_to_cpu(sl->addr.v4.port));
indent = "\n ";
}
printf("\n");
print_quorum_slots(super->qconf.slots, array_size(super->qconf.slots),
" ");
ret = 0;
out:
@@ -438,45 +389,6 @@ out:
return ret;
}
static bool valid_quorum_slots(struct scoutfs_quorum_slot *slots)
{
struct in_addr in;
bool valid = true;
char *addr;
int i;
int j;
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_NONE))
continue;
if (slots[i].addr.v4.family != cpu_to_le16(SCOUTFS_AF_IPV4)) {
fprintf(stderr, "quorum slot nr %u has invalid family %u\n",
i, le16_to_cpu(slots[i].addr.v4.family));
valid = false;
}
for (j = i + 1; j < SCOUTFS_QUORUM_MAX_SLOTS; j++) {
if (slots[i].addr.v4.family != cpu_to_le16(SCOUTFS_AF_IPV4))
continue;
if (slots[i].addr.v4.addr == slots[j].addr.v4.addr &&
slots[i].addr.v4.port == slots[j].addr.v4.port) {
in.s_addr =
htonl(le32_to_cpu(slots[i].addr.v4.addr));
addr = inet_ntoa(in);
fprintf(stderr, "quorum slot nr %u and %u have the same address %s:%u\n",
i, j, addr,
le16_to_cpu(slots[i].addr.v4.port));
valid = false;
}
}
}
return valid;
}
static int parse_opt(int key, char *arg, struct argp_state *state)
{
struct mkfs_args *args = state->input;
@@ -526,6 +438,16 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
case 'A':
args->allow_small_size = true;
break;
case 'V':
ret = parse_u64(arg, &args->fmt_vers);
if (ret)
return ret;
if (args->fmt_vers < SCOUTFS_FORMAT_VERSION_MIN ||
args->fmt_vers > SCOUTFS_FORMAT_VERSION_MAX)
argp_error(state, "format-version %llu is outside supported range of %u-%u",
args->fmt_vers, SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
break;
case 'z': /* data-alloc-zone-blocks */
{
ret = parse_u64(arg, &args->data_alloc_zone_blocks);
@@ -547,7 +469,7 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
break;
case ARGP_KEY_FINI:
if (!args->nr_slots)
argp_error(state, "must specify at least one quorum slot with --quorum-count|-Q");
argp_error(state, "must specify at least one quorum slot with --quorum-slot|-Q");
if (!args->meta_device)
argp_error(state, "no metadata device argument given");
if (!args->data_device)
@@ -569,6 +491,7 @@ static struct argp_option options[] = {
{ "max-meta-size", 'm', "SIZE", 0, "Use a size less than the base metadata device size (bytes or KMGTP units)"},
{ "max-data-size", 'd', "SIZE", 0, "Use a size less than the base data device size (bytes or KMGTP units)"},
{ "data-alloc-zone-blocks", 'z', "BLOCKS", 0, "Divide data device into block zones so each mounts writes to a zone (4KB blocks)"},
{ "format-version", 'V', "version", 0, "Specify a format version within supported range, ("SCOUTFS_FORMAT_VERSION_MIN_STR"-"SCOUTFS_FORMAT_VERSION_MAX_STR", default "SCOUTFS_FORMAT_VERSION_MAX_STR")"},
{ NULL }
};
@@ -581,7 +504,9 @@ static struct argp argp = {
static int mkfs_cmd(int argc, char *argv[])
{
struct mkfs_args mkfs_args = {NULL,};
struct mkfs_args mkfs_args = {
.fmt_vers = SCOUTFS_FORMAT_VERSION_MAX,
};
int ret;
ret = argp_parse(&argp, argc, argv, 0, NULL, &mkfs_args);

@@ -47,13 +47,14 @@ static void print_inode(struct scoutfs_key *key, void *val, int val_len)
{
struct scoutfs_inode *inode = val;
printf(" inode: ino %llu size %llu nlink %u\n"
printf(" inode: ino %llu size %llu version %llu nlink %u\n"
" uid %u gid %u mode 0%o rdev 0x%x flags 0x%x\n"
" next_readdir_pos %llu meta_seq %llu data_seq %llu data_version %llu\n"
" atime %llu.%08u ctime %llu.%08u\n"
" mtime %llu.%08u\n",
le64_to_cpu(key->ski_ino),
le64_to_cpu(inode->size),
le64_to_cpu(inode->version),
le32_to_cpu(inode->nlink), le32_to_cpu(inode->uid),
le32_to_cpu(inode->gid), le32_to_cpu(inode->mode),
le32_to_cpu(inode->rdev),
@@ -75,6 +76,17 @@ static void print_orphan(struct scoutfs_key *key, void *val, int val_len)
printf(" orphan: ino %llu\n", le64_to_cpu(key->sko_ino));
}
static void print_xattr_totl(struct scoutfs_key *key, void *val, int val_len)
{
struct scoutfs_xattr_totl_val *tval = val;
printf(" xattr totl: %llu.%llu.%llu = %lld, %lld\n",
le64_to_cpu(key->skxt_a), le64_to_cpu(key->skxt_b),
le64_to_cpu(key->skxt_c), le64_to_cpu(tval->total),
le64_to_cpu(tval->count));
}
static u8 *global_printable_name(u8 *name, int name_len)
{
static u8 name_buf[SCOUTFS_NAME_LEN + 1];
@@ -163,6 +175,9 @@ static print_func_t find_printer(u8 zone, u8 type)
return print_orphan;
}
if (zone == SCOUTFS_XATTR_TOTL_ZONE)
return print_xattr_totl;
if (zone == SCOUTFS_FS_ZONE) {
switch(type) {
case SCOUTFS_INODE_TYPE: return print_inode;
@@ -178,15 +193,19 @@ static print_func_t find_printer(u8 zone, u8 type)
return NULL;
}
static int print_fs_item(struct scoutfs_key *key, void *val,
#define flag_char(val, bit, c) \
(((val) & (bit)) ? (c) : '-')
static int print_fs_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg)
{
print_func_t printer;
printf(" "SK_FMT"\n", SK_ARG(key));
printf(" "SK_FMT" %llu %c\n",
SK_ARG(key), seq, flag_char(flags, SCOUTFS_ITEM_FLAG_DELETION, 'd'));
/* only items in leaf blocks have values */
if (val) {
if (val != NULL && !(flags & SCOUTFS_ITEM_FLAG_DELETION)) {
printer = find_printer(key->sk_zone, key->sk_type);
if (printer)
printer(key, val, val_len);
@@ -198,37 +217,6 @@ static int print_fs_item(struct scoutfs_key *key, void *val,
return 0;
}
/* same as fs item but with a small header in the value */
static int print_logs_item(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_log_item_value *liv;
print_func_t printer;
printf(" "SK_FMT"\n", SK_ARG(key));
/* only items in leaf blocks have values */
if (val) {
liv = val;
printf(" log_item_value: seq %llu flags %x\n",
le64_to_cpu(liv->seq), liv->flags);
/* deletion items don't have values */
if (!(liv->flags & SCOUTFS_LOG_ITEM_FLAG_DELETION)) {
printer = find_printer(key->sk_zone,
key->sk_type);
if (printer)
printer(key, val + sizeof(*liv),
val_len - sizeof(*liv));
else
printf(" (unknown zone %u type %u)\n",
key->sk_zone, key->sk_type);
}
}
return 0;
}
#define BTREF_F \
"blkno %llu seq %llu"
#define BTREF_A(ref) \
@@ -269,7 +257,7 @@ static int print_logs_item(struct scoutfs_key *key, void *val,
le64_to_cpu((srf)->ref.seq)
/* same as fs item but with a small header in the value */
static int print_log_trees_item(struct scoutfs_key *key, void *val,
static int print_log_trees_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_log_trees *lt = val;
@@ -289,7 +277,9 @@ static int print_log_trees_item(struct scoutfs_key *key, void *val,
" data_avail: "ALCROOT_F"\n"
" data_freed: "ALCROOT_F"\n"
" srch_file: "SRF_FMT"\n"
" inode_count_delta: %lld\n"
" max_item_seq: %llu\n"
" finalize_seq: %llu\n"
" rid: %016llx\n"
" nr: %llu\n"
" flags: %llx\n"
@@ -305,7 +295,9 @@ static int print_log_trees_item(struct scoutfs_key *key, void *val,
ALCROOT_A(&lt->data_avail),
ALCROOT_A(&lt->data_freed),
SRF_A(&lt->srch_file),
le64_to_cpu(lt->inode_count_delta),
le64_to_cpu(lt->max_item_seq),
le64_to_cpu(lt->finalize_seq),
le64_to_cpu(lt->rid),
le64_to_cpu(lt->nr),
le64_to_cpu(lt->flags),
@@ -328,7 +320,7 @@ static int print_log_trees_item(struct scoutfs_key *key, void *val,
return 0;
}
static int print_srch_root_item(struct scoutfs_key *key, void *val,
static int print_srch_root_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_srch_compact *sc;
@@ -361,16 +353,7 @@ static int print_srch_root_item(struct scoutfs_key *key, void *val,
return 0;
}
static int print_trans_seqs_entry(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
printf(" trans_seq %llu rid %016llx\n",
le64_to_cpu(key->skts_trans_seq), le64_to_cpu(key->skts_rid));
return 0;
}
static int print_mounted_client_entry(struct scoutfs_key *key, void *val,
static int print_mounted_client_entry(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_mounted_client_btree_val *mcv = val;
@@ -385,8 +368,8 @@ static int print_mounted_client_entry(struct scoutfs_key *key, void *val,
return 0;
}
static int print_log_merge_item(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
static int print_log_merge_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_log_merge_status *stat;
struct scoutfs_log_merge_range *rng;
@@ -397,12 +380,10 @@ static int print_log_merge_item(struct scoutfs_key *key, void *val,
switch (key->sk_zone) {
case SCOUTFS_LOG_MERGE_STATUS_ZONE:
stat = val;
printf(" status: next_range_key "SK_FMT" nr_req %llu nr_comp %llu"
" last_seq %llu seq %llu\n",
printf(" status: next_range_key "SK_FMT" nr_req %llu nr_comp %llu seq %llu\n",
SK_ARG(&stat->next_range_key),
le64_to_cpu(stat->nr_requests),
le64_to_cpu(stat->nr_complete),
le64_to_cpu(stat->last_seq),
le64_to_cpu(stat->seq));
break;
case SCOUTFS_LOG_MERGE_RANGE_ZONE:
@@ -414,12 +395,12 @@ static int print_log_merge_item(struct scoutfs_key *key, void *val,
case SCOUTFS_LOG_MERGE_REQUEST_ZONE:
req = val;
printf(" request: logs_root "BTROOT_F" logs_root "BTROOT_F" start "SK_FMT
" end "SK_FMT" last_seq %llu rid %016llx seq %llu flags 0x%llx\n",
" end "SK_FMT" input_seq %llu rid %016llx seq %llu flags 0x%llx\n",
BTROOT_A(&req->logs_root),
BTROOT_A(&req->root),
SK_ARG(&req->start),
SK_ARG(&req->end),
le64_to_cpu(req->last_seq),
le64_to_cpu(req->input_seq),
le64_to_cpu(req->rid),
le64_to_cpu(req->seq),
le64_to_cpu(req->flags));
@@ -451,7 +432,7 @@ static int print_log_merge_item(struct scoutfs_key *key, void *val,
return 0;
}
static int print_alloc_item(struct scoutfs_key *key, void *val,
static int print_alloc_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg)
{
if (key->sk_zone == SCOUTFS_FREE_EXTENT_BLKNO_ZONE)
@@ -469,7 +450,7 @@ static int print_alloc_item(struct scoutfs_key *key, void *val,
return 0;
}
typedef int (*print_item_func)(struct scoutfs_key *key, void *val,
typedef int (*print_item_func)(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg);
static int print_block_ref(struct scoutfs_key *key, void *val,
@@ -477,7 +458,7 @@ static int print_block_ref(struct scoutfs_key *key, void *val,
{
struct scoutfs_block_ref *ref = val;
func(key, NULL, 0, arg);
func(key, 0, 0, NULL, 0, arg);
printf(" ref blkno %llu seq %llu\n",
le64_to_cpu(ref->blkno), le64_to_cpu(ref->seq));
@@ -586,7 +567,7 @@ static int print_btree_block(int fd, struct scoutfs_super_block *super,
if (level)
print_block_ref(key, val, val_len, func, arg);
else
func(key, val, val_len, arg);
func(key, le64_to_cpu(item->seq), item->flags, val, val_len, arg);
}
free(bt);
@@ -744,8 +725,8 @@ struct print_recursion_args {
};
/* same as fs item but with a small header in the value */
static int print_log_trees_roots(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
static int print_log_trees_roots(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_log_trees *lt = val;
struct print_recursion_args *pa = arg;
@@ -776,14 +757,14 @@ static int print_log_trees_roots(struct scoutfs_key *key, void *val,
ret = err;
err = print_btree(pa->fd, pa->super, "", &lt->item_root,
print_logs_item, NULL);
print_fs_item, NULL);
if (err && !ret)
ret = err;
return ret;
}
static int print_srch_root_files(struct scoutfs_key *key, void *val,
static int print_srch_root_files(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg)
{
struct print_recursion_args *pa = arg;
@@ -843,7 +824,7 @@ static int print_btree_leaf_items(int fd, struct scoutfs_super_block *super,
break;
continue;
} else {
func(key, val, val_len, arg);
func(key, le64_to_cpu(item->seq), item->flags, val, val_len, arg);
}
node = avl_next(&bt->item_root, node);
@@ -910,13 +891,15 @@ static int print_quorum_blocks(int fd, struct scoutfs_super_block *super)
printf("quorum blkno %llu (slot %llu)\n",
blkno, blkno - SCOUTFS_QUORUM_BLKNO);
print_block_header(&blk->hdr, SCOUTFS_BLOCK_SM_SIZE);
printf(" write_nr %llu\n", le64_to_cpu(blk->write_nr));
for (e = 0; e < array_size(event_names); e++) {
ev = &blk->events[e];
printf(" %12s: rid %016llx term %llu ts %llu.%08u\n",
printf(" %12s: rid %016llx term %llu write_nr %llu ts %llu.%08u\n",
event_names[e], le64_to_cpu(ev->rid), le64_to_cpu(ev->term),
le64_to_cpu(ev->ts.sec), le32_to_cpu(ev->ts.nsec));
le64_to_cpu(ev->write_nr), le64_to_cpu(ev->ts.sec),
le32_to_cpu(ev->ts.nsec));
}
}
@@ -939,20 +922,15 @@ static void print_super_block(struct scoutfs_super_block *super, u64 blkno)
uuid_unparse(super->uuid, uuid_str);
if (!(le64_to_cpu(super->flags) && SCOUTFS_FLAG_IS_META_BDEV))
fprintf(stderr,
"**** Printing metadata from a data device! Did you mean to do this? ****\n");
printf("super blkno %llu\n", blkno);
print_block_header(&super->hdr, SCOUTFS_BLOCK_SM_SIZE);
printf(" version %llx uuid %s\n",
le64_to_cpu(super->version), uuid_str);
printf(" fmt_vers %llu uuid %s\n",
le64_to_cpu(super->fmt_vers), uuid_str);
printf(" flags: 0x%016llx\n", le64_to_cpu(super->flags));
/* XXX these are all in a crazy order */
printf(" next_ino %llu seq %llu\n"
" total_meta_blocks %llu first_meta_blkno %llu last_meta_blkno %llu\n"
" total_data_blocks %llu first_data_blkno %llu last_data_blkno %llu\n"
printf(" next_ino %llu inode_count %llu seq %llu\n"
" total_meta_blocks %llu total_data_blocks %llu\n"
" meta_alloc[0]: "ALCROOT_F"\n"
" meta_alloc[1]: "ALCROOT_F"\n"
" data_alloc: "ALCROOT_F"\n"
@@ -963,17 +941,13 @@ static void print_super_block(struct scoutfs_super_block *super, u64 blkno)
" fs_root: "BTR_FMT"\n"
" logs_root: "BTR_FMT"\n"
" log_merge: "BTR_FMT"\n"
" trans_seqs: "BTR_FMT"\n"
" mounted_clients: "BTR_FMT"\n"
" srch_root: "BTR_FMT"\n",
le64_to_cpu(super->next_ino),
le64_to_cpu(super->inode_count),
le64_to_cpu(super->seq),
le64_to_cpu(super->total_meta_blocks),
le64_to_cpu(super->first_meta_blkno),
le64_to_cpu(super->last_meta_blkno),
le64_to_cpu(super->total_data_blocks),
le64_to_cpu(super->first_data_blkno),
le64_to_cpu(super->last_data_blkno),
ALCROOT_A(&super->meta_alloc[0]),
ALCROOT_A(&super->meta_alloc[1]),
ALCROOT_A(&super->data_alloc),
@@ -984,7 +958,6 @@ static void print_super_block(struct scoutfs_super_block *super, u64 blkno)
BTR_ARG(&super->fs_root),
BTR_ARG(&super->logs_root),
BTR_ARG(&super->log_merge),
BTR_ARG(&super->trans_seqs),
BTR_ARG(&super->mounted_clients),
BTR_ARG(&super->srch_root));
@@ -1029,6 +1002,13 @@ static int print_volume(int fd)
print_super_block(super, SCOUTFS_SUPER_BLKNO);
if (!(le64_to_cpu(super->flags) & SCOUTFS_FLAG_IS_META_BDEV)) {
fprintf(stderr,
"**** Printing from data device is not allowed ****\n");
ret = -EINVAL;
goto out;
}
ret = print_quorum_blocks(fd, super);
err = print_btree(fd, super, "mounted_clients", &super->mounted_clients,
@@ -1036,11 +1016,6 @@ static int print_volume(int fd)
if (err && !ret)
ret = err;
err = print_btree(fd, super, "trans_seqs", &super->trans_seqs,
print_trans_seqs_entry, NULL);
if (err && !ret)
ret = err;
err = print_btree(fd, super, "log_merge", &super->log_merge,
print_log_merge_item, NULL);
if (err && !ret)
@@ -1100,6 +1075,7 @@ static int print_volume(int fd)
if (err && !ret)
ret = err;
out:
free(super);
return ret;

utils/src/quorum.c

@@ -0,0 +1,79 @@
#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include "sparse.h"
#include "util.h"
#include "format.h"
#include "quorum.h"
bool quorum_slot_present(struct scoutfs_super_block *super, int i)
{
return super->qconf.slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
}
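/*
 * A valid config requires that every populated slot be IPv4 and that
 * no two slots share the same address and port.
 */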
bool valid_quorum_slots(struct scoutfs_quorum_slot *slots)
{
struct in_addr in;
bool valid = true;
char *addr;
int i;
int j;
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_NONE))
continue;
if (slots[i].addr.v4.family != cpu_to_le16(SCOUTFS_AF_IPV4)) {
fprintf(stderr, "quorum slot nr %u has invalid family %u\n",
i, le16_to_cpu(slots[i].addr.v4.family));
valid = false;
}
for (j = i + 1; j < SCOUTFS_QUORUM_MAX_SLOTS; j++) {
if (slots[i].addr.v4.family != cpu_to_le16(SCOUTFS_AF_IPV4) ||
    slots[j].addr.v4.family != cpu_to_le16(SCOUTFS_AF_IPV4))
continue;
if (slots[i].addr.v4.addr == slots[j].addr.v4.addr &&
slots[i].addr.v4.port == slots[j].addr.v4.port) {
in.s_addr =
htonl(le32_to_cpu(slots[i].addr.v4.addr));
addr = inet_ntoa(in);
fprintf(stderr, "quorum slot nr %u and %u have the same address %s:%u\n",
i, j, addr,
le16_to_cpu(slots[i].addr.v4.port));
valid = false;
}
}
}
return valid;
}
/*
* Print quorum slots to stdout, a line at a time. The first line is
* not indented and the rest of the lines use the indent string from the
* caller.
*/
void print_quorum_slots(struct scoutfs_quorum_slot *slots, int nr, char *indent)
{
struct scoutfs_quorum_slot *sl;
struct in_addr in;
bool first = true;
int i;
for (i = 0, sl = slots; i < SCOUTFS_QUORUM_MAX_SLOTS; i++, sl++) {
if (sl->addr.v4.family != cpu_to_le16(SCOUTFS_AF_IPV4))
continue;
in.s_addr = htonl(le32_to_cpu(sl->addr.v4.addr));
printf("%s%u: %s:%u\n", first ? "" : indent,
i, inet_ntoa(in), le16_to_cpu(sl->addr.v4.port));
first = false;
}
}

utils/src/quorum.h

@@ -0,0 +1,10 @@
#ifndef _QUORUM_H_
#define _QUORUM_H_
#include <stdbool.h>
bool quorum_slot_present(struct scoutfs_super_block *super, int i);
bool valid_quorum_slots(struct scoutfs_quorum_slot *slots);
void print_quorum_slots(struct scoutfs_quorum_slot *slots, int nr, char *indent);
#endif

@@ -0,0 +1,120 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include <argp.h>
#include "sparse.h"
#include "parse.h"
#include "util.h"
#include "format.h"
#include "ioctl.h"
#include "cmd.h"
struct xattr_args {
char *path;
};
static int do_read_xattr_totals(struct xattr_args *args)
{
struct scoutfs_ioctl_read_xattr_totals rxt;
struct scoutfs_ioctl_xattr_total *xts = NULL;
struct scoutfs_ioctl_xattr_total *xt;
u64 bytes = 1024 * 1024;
int fd = -1;
int ret;
int i;
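/* fetch the totals from the kernel in 1MB batches */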
xts = malloc(bytes);
if (!xts) {
fprintf(stderr, "xattr total mem alloc failed\n");
ret = -ENOMEM;
goto out;
}
fd = get_path(args->path, O_RDONLY);
if (fd < 0) {
ret = fd;
goto out;
}
memset(&rxt, 0, sizeof(rxt));
rxt.totals_ptr = (unsigned long)xts;
rxt.totals_bytes = bytes;
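/* each successful call returns the number of entries copied into the buffer; 0 means no more */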
for (;;) {
ret = ioctl(fd, SCOUTFS_IOC_READ_XATTR_TOTALS, &rxt);
if (ret == 0)
break;
if (ret < 0) {
ret = -errno;
fprintf(stderr, "read_xattr_totals ioctl failed: "
"%s (%d)\n", strerror(errno), errno);
goto out;
}
for (i = 0, xt = xts; i < ret; i++, xt++)
printf("%llu.%llu.%llu = %lld, %lld\n",
xt->name[0], xt->name[1], xt->name[2], xt->total, xt->count);
memcpy(&rxt.pos_name, &xts[ret - 1].name, sizeof(rxt.pos_name));
if (++rxt.pos_name[2] == 0 && ++rxt.pos_name[1] == 0 && ++rxt.pos_name[0] == 0)
break;
}
ret = 0;
out:
if (fd >= 0)
close(fd);
free(xts);
return ret;
}
static int parse_opt(int key, char *arg, struct argp_state *state)
{
struct xattr_args *args = state->input;
switch (key) {
case 'p':
args->path = strdup_or_error(state, arg);
break;
default:
break;
}
return 0;
}
static struct argp_option options[] = {
{ "path", 'p', "PATH", 0, "Path to ScoutFS filesystem"},
{ NULL }
};
static struct argp argp = {
options,
parse_opt,
"",
"Print global value totals of .totl. xattrs"
};
static int read_xattr_totals_cmd(int argc, char **argv)
{
struct xattr_args xattr_args = {NULL};
int ret;
ret = argp_parse(&argp, argc, argv, 0, NULL, &xattr_args);
if (ret)
return ret;
return do_read_xattr_totals(&xattr_args);
}
static void __attribute__((constructor)) read_xattr_totals_ctor(void)
{
cmd_register_argp("read-xattr-totals", &argp, GROUP_INFO, read_xattr_totals_cmd);
}

utils/src/resize_devices.c

@@ -0,0 +1,120 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include <argp.h>
#include "sparse.h"
#include "parse.h"
#include "util.h"
#include "format.h"
#include "ioctl.h"
#include "cmd.h"
struct resize_args {
char *path;
u64 meta_size;
u64 data_size;
};
static int do_resize_devices(struct resize_args *args)
{
struct scoutfs_ioctl_resize_devices rd;
int ret;
int fd;
if (args->meta_size & SCOUTFS_BLOCK_LG_MASK) {
printf("metadata device size %llu is not a multiple of %u metadata block size, truncating down to %llu byte size\n",
args->meta_size, SCOUTFS_BLOCK_LG_SIZE,
args->meta_size & ~(u64)SCOUTFS_BLOCK_LG_MASK);
}
if (args->data_size & SCOUTFS_BLOCK_SM_MASK) {
printf("data device size %llu is not a multiple of %u data block size, truncating down to %llu byte size\n",
args->data_size, SCOUTFS_BLOCK_SM_SIZE,
args->data_size & ~(u64)SCOUTFS_BLOCK_SM_MASK);
}
fd = get_path(args->path, O_RDONLY);
if (fd < 0)
return fd;
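/* the ioctl takes whole block counts; the shifts drop any partial trailing block */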
rd.new_total_meta_blocks = args->meta_size >> SCOUTFS_BLOCK_LG_SHIFT;
rd.new_total_data_blocks = args->data_size >> SCOUTFS_BLOCK_SM_SHIFT;
ret = ioctl(fd, SCOUTFS_IOC_RESIZE_DEVICES, &rd);
if (ret < 0) {
ret = -errno;
fprintf(stderr, "resize_devices ioctl failed: %s (%d)\n", strerror(errno), errno);
}
close(fd);
return ret;
}
static int parse_opt(int key, char *arg, struct argp_state *state)
{
struct resize_args *args = state->input;
int ret;
switch (key) {
case 'm': /* meta-size */
{
ret = parse_human(arg, &args->meta_size);
if (ret)
return ret;
break;
}
case 'd': /* data-size */
{
ret = parse_human(arg, &args->data_size);
if (ret)
return ret;
break;
}
case 'p':
args->path = strdup_or_error(state, arg);
break;
default:
break;
}
return 0;
}
static struct argp_option options[] = {
{ "path", 'p', "PATH", 0, "Path to ScoutFS filesystem"},
{ "meta-size", 'm', "SIZE", 0, "New metadata device size (bytes or KMGTP units)"},
{ "data-size", 'd', "SIZE", 0, "New data device size (bytes or KMGTP units)"},
{ NULL }
};
static struct argp argp = {
options,
parse_opt,
"",
"Online resize of metadata and/or data devices",
};
static int resize_devices_cmd(int argc, char **argv)
{
struct resize_args resize_args = {NULL,};
int ret;
ret = argp_parse(&argp, argc, argv, 0, NULL, &resize_args);
if (ret)
return ret;
return do_resize_devices(&resize_args);
}
static void __attribute__((constructor)) resize_devices_ctor(void)
{
cmd_register_argp("resize-devices", &argp, GROUP_CORE, resize_devices_cmd);
}

@@ -21,6 +21,7 @@
struct setattr_args {
char *filename;
struct timespec ctime;
struct timespec crtime;
u64 data_version;
u64 i_size;
bool offline;
@@ -42,6 +43,8 @@ static int do_setattr(struct setattr_args *args)
sm.ctime_sec = args->ctime.tv_sec;
sm.ctime_nsec = args->ctime.tv_nsec;
sm.crtime_sec = args->crtime.tv_sec;
sm.crtime_nsec = args->crtime.tv_nsec;
sm.data_version = args->data_version;
if (args->offline)
sm.flags |= SCOUTFS_IOC_SETATTR_MORE_OFFLINE;
@@ -73,6 +76,11 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
if (ret)
return ret;
break;
case 'r': /* timespec */
ret = parse_timespec(arg, &args->crtime);
if (ret)
return ret;
break;
case 'V': /* data version */
ret = parse_u64(arg, &args->data_version);
if (ret)
@@ -112,7 +120,8 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
}
static struct argp_option options[] = {
{ "ctime", 't', "TIMESPEC", 0, "Set creation time using \"<seconds-since-epoch>.<nanoseconds>\" format"},
{ "ctime", 't', "TIMESPEC", 0, "Set change time using \"<seconds-since-epoch>.<nanoseconds>\" format"},
{ "crtime", 'r', "TIMESPEC", 0, "Set creation time using \"<seconds-since-epoch>.<nanoseconds>\" format"},
{ "data-version", 'V', "VERSION", 0, "Set data version"},
{ "size", 's', "SIZE", 0, "Set file size (bytes or KMGTP units). Requires --data-version"},
{ "offline", 'o', NULL, 0, "Set file contents as offline, not sparse. Requires --size"},

@@ -37,6 +37,7 @@ static struct stat_more_field inode_fields[] = {
INODE_FIELD(data_version),
INODE_FIELD(online_blocks),
INODE_FIELD(offline_blocks),
{ .name = "crtime", .offset = INODE_FIELD_OFF(crtime_sec) },
{ NULL, }
};
@@ -60,6 +61,9 @@ static void print_inode_field(void *st, size_t off)
case INODE_FIELD_OFF(offline_blocks):
printf("%llu", stm->offline_blocks);
break;
case INODE_FIELD_OFF(crtime_sec):
printf("%llu.%09u", stm->crtime_sec, stm->crtime_nsec);
break;
};
}
@@ -127,12 +131,10 @@ static int do_stat(struct stat_args *args)
if (args->is_inode) {
cmd = SCOUTFS_IOC_STAT_MORE;
fields = inode_fields;
st.stm.valid_bytes = sizeof(struct scoutfs_ioctl_stat_more);
pr = print_inode_field;
} else {
cmd = SCOUTFS_IOC_STATFS_MORE;
fields = fs_fields;
st.sfm.valid_bytes = sizeof(struct scoutfs_ioctl_statfs_more);
pr = print_fs_field;
}

@@ -10,6 +10,9 @@
#include <wordexp.h>
#include "util.h"
#include "format.h"
#include "crc.h"
#include "quorum.h"
#define ENV_PATH "SCOUTFS_MOUNT_PATH"
@@ -77,11 +80,16 @@ int read_block(int fd, u64 blkno, int shift, void **ret_val)
void *buf;
int ret;
buf = NULL;
*ret_val = NULL;
buf = malloc(size);
if (!buf)
return -ENOMEM;
ret = posix_memalign(&buf, size, size);
if (ret != 0) {
/* posix_memalign returns the error number directly and does not set errno */
fprintf(stderr, "%zu byte aligned buffer allocation failed: %s (%d)\n",
size, strerror(ret), ret);
return -ret;
}
ret = pread(fd, buf, size, blkno << shift);
if (ret == -1) {
@@ -98,3 +106,152 @@ int read_block(int fd, u64 blkno, int shift, void **ret_val)
return 0;
}
}
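/* read a block and verify that its stored crc matches its contents */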
int read_block_crc(int fd, u64 blkno, int shift, void **ret_val)
{
struct scoutfs_block_header *hdr;
size_t size = 1ULL << shift;
int ret;
u32 crc;
ret = read_block(fd, blkno, shift, ret_val);
if (ret == 0) {
hdr = *ret_val;
crc = crc_block(hdr, size);
if (crc != le32_to_cpu(hdr->crc)) {
fprintf(stderr, "crc of read blkno %llu failed, stored %08x != calculated %08x\n",
blkno, le32_to_cpu(hdr->crc), crc);
free(*ret_val);
*ret_val = NULL;
ret = -EIO;
}
}
return ret;
}
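/* read a block, verifying its crc, magic, fsid (when nonzero), and blkno */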
int read_block_verify(int fd, u32 magic, u64 fsid, u64 blkno, int shift, void **ret_val)
{
struct scoutfs_block_header *hdr = NULL;
int ret;
ret = read_block_crc(fd, blkno, shift, ret_val);
if (ret == 0) {
hdr = *ret_val;
ret = -EIO;
if (le32_to_cpu(hdr->magic) != magic)
fprintf(stderr, "read blkno %llu has bad magic %08x != expected %08x\n",
blkno, le32_to_cpu(hdr->magic), magic);
else if (fsid != 0 && le64_to_cpu(hdr->fsid) != fsid)
fprintf(stderr, "read blkno %llu has bad fsid %016llx != expected %016llx\n",
blkno, le64_to_cpu(hdr->fsid), fsid);
else if (le64_to_cpu(hdr->blkno) != blkno)
fprintf(stderr, "read blkno %llu has bad blkno %llu != expected %llu\n",
blkno, le64_to_cpu(hdr->blkno), blkno);
else
ret = 0;
if (ret < 0) {
free(*ret_val);
*ret_val = NULL;
}
}
return ret;
}
/*
* Update the block header fields and write out the block.
*/
int write_block(int fd, u32 magic, __le64 fsid, u64 seq, u64 blkno,
int shift, struct scoutfs_block_header *hdr)
{
size_t size = 1ULL << shift;
ssize_t ret;
hdr->magic = cpu_to_le32(magic);
hdr->fsid = fsid;
hdr->blkno = cpu_to_le64(blkno);
hdr->seq = cpu_to_le64(seq);
hdr->crc = cpu_to_le32(crc_block(hdr, size));
ret = pwrite(fd, hdr, size, blkno << shift);
if (ret != size) {
fprintf(stderr, "write to blkno %llu returned %zd: %s (%d)\n",
blkno, ret, strerror(errno), errno);
return -errno;
}
return 0;
}
int write_block_sync(int fd, u32 magic, __le64 fsid, u64 seq, u64 blkno,
int shift, struct scoutfs_block_header *hdr)
{
int ret = write_block(fd, magic, fsid, seq, blkno, shift, hdr);
if (ret != 0)
return ret;
if (fsync(fd)) {
ret = -errno;
fprintf(stderr, "fsync after write to blkno %llu failed: %s (%d)\n",
blkno, strerror(errno), errno);
return ret;
}
return 0;
}
/*
* Check to see if the metadata super block indicates that there might
* be active mounts using the system. Returns -errno, 0, or -EBUSY if
* we found evidence that the device might be in use.
*/
int meta_super_in_use(int meta_fd, struct scoutfs_super_block *meta_super)
{
struct scoutfs_quorum_block *qblk = NULL;
struct scoutfs_quorum_block_event *beg;
struct scoutfs_quorum_block_event *end;
int ret = 0;
int i;
if (meta_super->mounted_clients.ref.blkno != 0) {
fprintf(stderr, "meta superblock mounted clients btree is not empty.\n");
ret = -EBUSY;
goto out;
}
/* check for active quorum slots */
for (i = 0; i < SCOUTFS_QUORUM_BLOCKS; i++) {
if (!quorum_slot_present(meta_super, i))
continue;
ret = read_block(meta_fd, SCOUTFS_QUORUM_BLKNO + i, SCOUTFS_BLOCK_SM_SHIFT,
(void **)&qblk);
if (ret < 0) {
fprintf(stderr, "error reading quorum block for slot %u\n", i);
goto out;
}
beg = &qblk->events[SCOUTFS_QUORUM_EVENT_BEGIN];
end = &qblk->events[SCOUTFS_QUORUM_EVENT_END];
if (le64_to_cpu(beg->write_nr) > le64_to_cpu(end->write_nr)) {
fprintf(stderr, "mount in quorum slot %u could still be running.\n"
" begin event: write_nr %llu timestamp %llu.%08u\n"
" end event: write_nr %llu timestamp %llu.%08u\n",
i, le64_to_cpu(beg->write_nr), le64_to_cpu(beg->ts.sec),
le32_to_cpu(beg->ts.nsec),
le64_to_cpu(end->write_nr), le64_to_cpu(end->ts.sec),
le32_to_cpu(end->ts.nsec));
ret = -EBUSY;
goto out;
}
free(qblk);
qblk = NULL;
}
out:
free(qblk);
return ret;
}

@@ -113,6 +113,16 @@ static inline int memcmp_lens(const void *a, int a_len,
int get_path(char *path, int flags);
int read_block(int fd, u64 blkno, int shift, void **ret_val);
int read_block_crc(int fd, u64 blkno, int shift, void **ret_val);
int read_block_verify(int fd, u32 magic, u64 fsid, u64 blkno, int shift, void **ret_val);
struct scoutfs_block_header;
struct scoutfs_super_block;
int write_block(int fd, u32 magic, __le64 fsid, u64 seq, u64 blkno,
int shift, struct scoutfs_block_header *hdr);
int write_block_sync(int fd, u32 magic, __le64 fsid, u64 seq, u64 blkno,
int shift, struct scoutfs_block_header *hdr);
int meta_super_in_use(int meta_fd, struct scoutfs_super_block *meta_super);
#define __stringify_1(x) #x
#define __stringify(x) __stringify_1(x)