Compare commits

...

507 Commits

Author SHA1 Message Date
Zach Brown
c3290771a0 Block cache use rht _lookup_ insert for EEXIST
The sneaky rhashtable_insert_fast() can't return -EEXIST despite the
last line of the function *REALLY* making it look like it can.  It just
inserts new objects at the head of the bucket lists without comparing
the insertion with existing objects.

The block cache was relying on insertion to resolve duplicate racing
allocated blocks.  Because it couldn't return -EEXIST we could get
duplicate cached blocks present in the hash table.

rhashtable_lookup_insert_fast() fixes this by actually comparing the
inserted object's key with the objects found in the insertion bucket.  A
racing allocator trying to insert a duplicate cached block will get an
error, drop their allocated block, and retry their lookup.
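
For illustration, a minimal sketch of that retry pattern; the cache,
block, and params names are hypothetical stand-ins, not the actual
scoutfs definitions:

    ret = rhashtable_lookup_insert_fast(&cache->ht, &new->hash_head,
                                        block_ht_params);
    if (ret == -EEXIST) {
        /* a racing allocator already cached this blkno: drop our
         * newly allocated block and retry the lookup to use theirs */
        block_free(new);
        goto retry;
    }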

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 09:24:23 -07:00
Zach Brown
cf3cb3f197 Wait for rhashtable to rehash on insert EBUSY
The rhashtable can return EBUSY if you insert fast enough to trigger an
expansion of the next table size that is waiting to be rehashed in an
rcu callback.  If we get EBUSY from rhashtable_insert we call
synchronize_rcu to wait for the rehash to complete before trying again.
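
Roughly, the retry looks like the following sketch; the struct and
params names are illustrative:

    do {
        ret = rhashtable_insert_fast(&cache->ht, &blk->hash_head,
                                     block_ht_params);
        if (ret == -EBUSY)
            synchronize_rcu();  /* let the pending rehash complete */
    } while (ret == -EBUSY);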

This was hit in testing restores of a very large namespace and took a
few hours to hit.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-13 09:24:23 -07:00
Andy Grover
cb4ed98b3c Merge pull request #31 from versity/zab/block_shrink_wait_for_rebalance
Block cache shrink restart waits for rcu callbacks
2021-04-08 09:03:12 -07:00
Zach Brown
9ee7f7b9dc Block cache shrink restart waits for rcu callbacks
We're seeing cpu livelocks in block shrinking where counters show that a
single block cache shrink call is only getting EAGAIN from repeated
rhashtable walk attempts.  It occurred to me that the running task might
be preventing an RCU grace period from ending by never blocking.

The hope of this commit is that by waiting for rcu callbacks to run
we'll ensure that any pending rebalance callback runs before we retry
the rhashtable walk again.  I haven't been able to reproduce this easily
so this is a stab in the dark.
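
If that guess is right, the shape of the retry would be something like
this sketch, where the walk helper is a made-up name and rcu_barrier()
is one way to wait for pending callbacks to run:

    while (shrink_walk_and_free(cache) == -EAGAIN) {
        /* a queued rhashtable rehash callback may be what keeps the
         * walk returning -EAGAIN; wait for pending callbacks to run */
        rcu_barrier();
    }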

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-07 12:50:50 -07:00
Zach Brown
300791ecfa Merge pull request #29 from agrover/cleanup
Cleanup
2021-04-07 12:27:00 -07:00
Andy Grover
4630b77b45 cleanup: Use flexible array members instead of 0-length arrays
See Documentation/process/deprecated.rst:217; items[] is now preferred over
items[0].
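
For example, the preferred form looks like this (an illustrative
struct, not one of the scoutfs headers):

    struct example {
        __le32 nr;
        struct item items[];    /* was: struct item items[0]; */
    };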

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-07 10:14:47 -07:00
Andy Grover
bdc43ca634 cleanup: Fix ESTALE handling in forest_read_items
Kinda weird to goto back to the out label and then out the bottom. Just
return -EIO, like forest_next_hint() does.

Don't call client_get_roots() right before retry, since it is the first thing
retry does.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-07 10:14:04 -07:00
Andy Grover
6406f05350 cleanup: Remove struct net_lock_grant_response
We're not using the roots member of this struct, so we can just
use struct scoutfs_net_lock directly.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-07 10:13:56 -07:00
Andy Grover
820b7295f0 cleanup: Unused LIST_HEADs
Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-05 16:23:41 -07:00
Zach Brown
b3611103ee Merge pull request #26 from agrover/tmpfile
Support O_TMPFILE and allow MOVE_BLOCKS into released extents
2021-04-05 15:23:41 -07:00
Andy Grover
0deb232d3f Support O_TMPFILE and allow MOVE_BLOCKS into released extents
Support O_TMPFILE: Create an unlinked file and put it on the orphan list.
If it ever gains a link, take it off the orphan list.

Change MOVE_BLOCKS ioctl to allow moving blocks into offline extent ranges.
Ioctl callers must set a new flag to enable this operation mode.

RH-compat: tmpfile support is actually backported by RH into the 3.10 kernel.
We need to use some of their kabi-maintaining wrappers to use it:
use a struct inode_operations_wrapper instead of base struct
inode_operations, set S_IOPS_WRAPPER flag in i_flags. This lets
RH's modified vfs_tmpfile() find our tmpfile fn pointer.
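
A rough sketch of that wiring, assuming the RHEL wrapper layout; the
scoutfs operation names here are placeholders:

    static const struct inode_operations_wrapper scoutfs_dir_iops = {
        .ops = {
            .lookup = scoutfs_lookup,
            /* ... */
        },
        .tmpfile = scoutfs_tmpfile,
    };

    /* when initializing a directory inode */
    inode->i_op = &scoutfs_dir_iops.ops;
    inode->i_flags |= S_IOPS_WRAPPER;   /* lets vfs_tmpfile() find .tmpfile */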

Add a test that covers both creating tmpfiles and moving their
contents into a destination file via MOVE_BLOCKS.

xfstests common/004 now runs because tmpfile is supported.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-04-05 14:23:44 -07:00
Andy Grover
1366e254f9 Merge pull request #30 from versity/zab/srch_block_ref_leak
Zab/srch block ref leak
2021-04-01 16:50:34 -07:00
Zach Brown
1259f899a3 srch compaction needs to prepare alloc for commit
The srch client compaction work initializes allocators, dirties blocks,
and writes them out as its transaction.  It forgot to call the
pre-commit allocator prepare function.

The prepare function drops block references used by the meta allocator
during the transaction.  This leaked block references which kept blocks
from being freed by the shrinker under memory pressure.  Eventually
memory was full of leaked blocks and the shrinker walked all of them
looking for blocks to free, resulting in an effective livelock that ground
the system to a crawl.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-01 13:04:40 -07:00
Zach Brown
2d393f435b Warn on leaked block refs on unmount
By the time we get to destroying the block cache we should have put all
our block references.  Warn as we tear down the blocks if we see any
blocks that still have references, implying a ref leak.  This caught a
leak caused by srch compaction forgetting to put allocator list block
refs.

Signed-off-by: Zach Brown <zab@versity.com>
2021-04-01 13:04:06 -07:00
Andy Grover
09c879bcf1 Merge pull request #25 from versity/zab/client_greeting_items_exist
Zab/client greeting items exist
2021-03-16 15:57:55 -07:00
Zach Brown
3de703757f Fix weird comment editing error
That comment looked very weird indeed until I recognized that I must
have forgotten to delete the first two attempts at starting the
sentence.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-16 12:02:05 -07:00
Zach Brown
7d67489b0c Handle resent initial client greetings
The very first greeting a client sends is unique because it doesn't yet
have a server_term field set and tells the server to create items to
track the client.

A server processing this request can create the items and then shut down
before the client is able to receive the reply.  They'll resend the
greeting without server_term but then the next server will get -EEXIST
errors as it tries to create items for the client.  This causes the
connection to break, which the client tries to reestablish, and the
pattern repeats indefinitely.

The fix is to simply recognize that -EEXIST is acceptable during item
creation.  Server message handlers always have to address the case where
a resent message was already processed by a previous server but its
response didn't make it to the client.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-16 11:56:26 -07:00
Zach Brown
73084462e9 Remove unused client greeting_umb
Remove an old client info field from the unmount barrier mechanism which
was removed a while ago.  It used to be compared to a super field to
decide to finish unmount without reconnecting but now we check for our
mounted_client item in the server's btree.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-16 10:04:42 -07:00
Zach Brown
8c81af2b9b Merge pull request #22 from agrover/ipv6
Reserve space in superblock for IPv6 addresses
2021-03-15 16:04:26 -07:00
Andy Grover
efe5d92458 Reserve space in superblock for IPv6 addresses
Define a family field, and add a union for IPv4 and v6 variants, although
v6 is not supported yet.

Family field is now used to determine presence of address in a quorum slot,
instead of checking if addr is zero.
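
Roughly, the reserved layout looks like this sketch; the actual struct
name, field names, and sizes in the format header may differ:

    struct quorum_slot_addr {
        __u8 family;    /* 0 means the slot is unused */
        union {
            struct { __be32 addr; __be16 port; } v4;
            struct { __u8 addr[16]; __be16 port; } v6;  /* reserved */
        };
    };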

Signed-off-by: Andy Grover <agrover@versity.com>
2021-03-12 14:10:42 -08:00
Andy Grover
d39e56d953 Merge pull request #24 from versity/zab/fix-block-stale-reads
Zab/fix block stale reads
2021-03-11 09:33:03 -08:00
Zach Brown
5661a1fb02 Fix block-stale-reads test
The block-stale-reads test was built from the ashes of a test that
used counters and triggers to work with the btree when it was
only used on the server.

The initial quick translation to try and trigger block cache retries
while the forest called the btree got so much wrong.  It was still
trying to use some 'cl' variable that didn't refer to the client any
more, the trigger helpers now call statfs to find paths and can end up
triggering themselves, and many more stale reads can happen
throughout the system while we're working -- not just the one from our
trigger.

This fixes it up to consistently use fs numbers instead of
the silly stale cl variable and be less sensitive to triggers firing and
counter differences.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-10 12:36:41 -08:00
Zach Brown
12fa289399 Add t_trigger_arm_silent
t_trigger_arm always output the value of the trigger after arming on the
premise that tests required the trigger being armed.  In the process of
showing the trigger it calls a bunch of t_ helpers that build the path
to the trigger file using statfs_more to get the rid of mounts.

If the trigger being armed is in the server's mount and the specific
trigger test is fired by the server's statfs_more request processing
then the trigger can be fired before we read its value.  Tests can
inconsistently fail as the golden output shows the trigger being armed
or not depending on whether it was in the server's mount.

t_trigger_arm_silent doesn't output the value of the armed trigger.  It
can be used for low level triggers that don't rely on reading the
trigger's value to discover that their effect has happened.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-10 12:36:34 -08:00
Zach Brown
75e8fab57c Add t_counter_diff_changed
Tests can use t_counter_diff to put a message in their golden output
when a specific change in counters is expected.  This adds
t_counter_diff_changed to output a message that indicates change or not,
for tests that want to see counters change but the amount of change
doesn't need to be precisely known.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-10 12:32:04 -08:00
Zach Brown
513d6b2734 Merge pull request #20 from versity/zab/remove_trans_spinlock
Zab/remove trans spinlock
2021-03-04 13:59:07 -08:00
Zach Brown
f8d39610a2 Only get inode writeback_lock when adding inodes
Each transaction maintains a global list of inodes to sync.  It checks
the inode and adds it in each write_end call per OS page.  Locking and
unlocking the global spinlock was showing up in profiles.  At the very
least, we can only get the lock once per large file that's written
during a transaction.  This will reduce spinlock traffic on the lock by
the number of pages written per file.   We'll want a better solution in
the long run, but this helps for now.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-04 11:39:30 -08:00
Zach Brown
c470c1c9f6 Allow read-mostly _alloc_meta_low
Each transaction hold makes multiple calls to _alloc_meta_low to see if
the transaction should be committed to refill allocators before the
caller's hold is acquired and they can dirty blocks in the transaction.

_alloc_meta_low was using a spinlock to sample the allocator list_head
blocks to determine if there was space available.  The lock and unlock
stores were creating significant cacheline contention.

The _alloc_meta_low calls are higher frequency than allocations.  We can
use a seqlock to have exclusive writers and allow concurrent
_alloc_meta_low readers who retry if a writer intervenes.
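
The read side then becomes something like this sketch (field and helper
names are illustrative; writers pair it with write_seqlock/write_sequnlock):

    unsigned int seq;
    bool low;

    do {
        seq = read_seqbegin(&alloc->seqlock);
        low = free_meta_blocks(alloc) < lo_thresh;
    } while (read_seqretry(&alloc->seqlock, seq));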

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-04 11:39:30 -08:00
Andy Grover
cad902b9cd Merge pull request #19 from versity/zab/block_crash_and_consistency
Zab/block crash and consistency
2021-03-04 10:57:27 -08:00
Zach Brown
e163f3b099 Use atomic holders instead of trans info lock
We saw the transaction info lock showing up in profiles.  We were doing
quite a lot of work with that lock held.  We can remove it entirely and
use an atomic.

Instead of a locked holders count and writer boolean we can use an
atomic holders and have a high bit indicate that the write_func is
pending.  This turns the lock/unlock pairs in hold and release into
atomic inc/cmpxchg/dec operations.
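
As a sketch, the hold attempt looks roughly like this, with made-up
names and an illustrative bit value:

    #define TRANS_WRITER_PENDING (1 << 30)

    static bool trans_try_hold(atomic_t *holders)
    {
        int cur = atomic_read(holders), old;

        for (;;) {
            if (cur & TRANS_WRITER_PENDING)
                return false;       /* a commit is pending, caller waits */
            old = atomic_cmpxchg(holders, cur, cur + 1);
            if (old == cur)
                return true;        /* hold acquired */
            cur = old;
        }
    }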

Then we were checking allocators under the trans lock.  Now that we have
an atomic holders count we can increment it to prevent the writer from
committing and release it after the checks if we need another commit
before the hold.

And finally, we were freeing our allocated reservation struct under the
lock.  We weren't actually doing anything with the reservation struct so
we can use journal_info as the nested hold counter instead of having it
point to an allocated and freed struct.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 14:18:04 -08:00
Zach Brown
a508baae76 Remove unused triggers
As the implementation shifted away from the ring of btree blocks and LSM
segments we lost callers to all these triggers.  They're unused and can
be removed.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:50:00 -08:00
Zach Brown
208c51d1d2 Update stale block reading test
The previous test that triggered re-reading blocks, as though they were
stale, was written in the era where it only hit btree blocks and
everything else was stored in LSM segments.

This reworks the test to make it clear that it affects all our block
readers today.  The test only exercise the core read retry path, but it
could be expanded to test callers retrying with newer references after
they get -ESTALE errors.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:50:00 -08:00
Zach Brown
9450959ca4 Protect stale block readers from local dirtying
Our block cache consistency mechanism allows readers to try and read
stale block references.  They check block headers of the block they read
to discover if it has been modified and they should retry the read with
newer block references.

For this to be correct the block contents can't change under the
readers.  That's obviously true in the simple imagined case of one node
writing and another node reading.  But we also have the case where the
stale reader and dirtying writer can be concurrent tasks in the same
mount which share a block cache.

There were two failure cases that derive from the order of readers and
writers working with blocks.

If the reader goes first, the writer could find the existing block in
the cache and modify it while the reader assumes that it is read only.
The fix is to have the writer always remove any existing cached block
and insert a newly allocated block into the cache with the header fields
already changed.  Any existing readers will still have their cached
block references and any new readers will see the modified headers and
return -ESTALE.

The next failure comes from readers trying to invalidate dirty blocks
when they see modified headers.  They assumed that the existing cached
block was old and could be dropped so that a new current version could
be read.  But in this case a local writer has clobbered the reader's
stale block and the reader should immediately return -ESTALE.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:59 -08:00
Zach Brown
6237f0adc5 Add _block_dirty_ref to dirty blocks in one place
To create dirty blocks in memory each block type caller currently gets a
reference on a created block and then dirties it.  The reference it gets
could be an existing cached block that stale readers are currently
using.  This creates a problem with our block consistency protocol where
writers can dirty and modify cached blocks that readers are currently
reading in memory, leading to read corruption.

This commit is the first step in addressing that problem.  We add a
scoutfs_block_dirty_ref() call which returns a reference to a dirtied
block from the block core in one call.  We're only changing the callers
in this patch but we'll be reworking the dirtying mechanism in an
upcoming patch to avoid corrupting readers.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:17 -08:00
Zach Brown
f18fa0e97a Update scoutfs print for centralized block_ref
Update scoutfs print to use the new block_ref struct instead of the
handful of per-block type ref structs that we had accumulated.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:17 -08:00
Zach Brown
0969a94bfc Check one block_ref struct in block core
Each of the different block types had a reading function that read a
block and then checked their reference struct for their block type.

This gets rid of each block reference type and has a single block_ref
type which is then checked by a single ref reading function in the block
core.  By putting ref checking in the core we no longer have to export
checking the block header crc, verifying headers, invalidating blocks,
or even reading raw blocks themselves.  Everyone reads refs and leaves
the checking up to the core.

The changes don't have a significant functional effect.  This is mostly
just changing types and moving code around.  (There are some changes to
visible counters.)

This shares code, which is nice, but this is putting the block reference
checking in one place in the block core so that in a few patches we can
fix problems with writers dirtying blocks that are being read.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:17 -08:00
Zach Brown
b1b75cbe9f Fix block cache shrink and read racing crash
The block cache wasn't safely racing readers walking the rcu radix_tree
and the shrinker walking the LRU list.  A reader could get a reference
to a block that had been removed from the radix and was queued for
freeing.  It'd clobber the free's llist_head union member by putting the
block back on the lru and both the read and free would crash as they
each corrupted each other's memory.  We rarely saw this in heavy load
testing.

The fix is to clean up the use of rcu, refcounting, and freeing.

First, we get rid of the LRU list.  Now we don't have to worry about
resolving racing accesses of blocks between two independent structures.
Instead of the shrinker walking the LRU list, we can mark blocks on access
such that shrinking can walk all blocks randomly and expect to quickly
find candidates to shrink.

To make it easier to concurrently walk all the blocks we switch to the
rhashtable instead of the radix tree.  It also has nice per-bucket
locking so we can get rid of the global lock that protected the LRU list
and radix insertion.  (And it isn't limited to 'long' keys so we can get
rid of the check for max meta blknos that couldn't be cached.)

Now we need to tighten up when read can get a reference and when shrink
can remove blocks.  We have presence in the hash table hold a refcount
but we make it a magic high bit in the refcount so that it can be
differentiated from other references.  Now lookup can atomically get a
reference to blocks that are in the hash table, and shrinking can
atomically remove blocks when it is the only other reference.
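
A sketch of those two atomic operations, with an illustrative bit value
and names:

    #define BLOCK_HASH_REF (1 << 30)    /* "present in the hash table" */

    /* lookup: only take a reference while the block is still hashed */
    static bool block_get_if_hashed(struct cached_block *blk)
    {
        int cur = atomic_read(&blk->refcount), old;

        while (cur & BLOCK_HASH_REF) {
            old = atomic_cmpxchg(&blk->refcount, cur, cur + 1);
            if (old == cur)
                return true;
            cur = old;
        }
        return false;
    }

    /* shrink: remove only when the hash ref is the only reference left */
    static bool block_unhash_if_idle(struct cached_block *blk)
    {
        return atomic_cmpxchg(&blk->refcount, BLOCK_HASH_REF, 0) == BLOCK_HASH_REF;
    }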

We also clean up freeing a bit. It has to wait for the rcu grace period
to ensure that no other rcu readers can reference the blocks its
freeing.  It has to iterate over the list with _safe because it's
freeing as it goes.

Interestingly, when reworking the shrinker I noticed that we weren't
scaling the nr_to_scan from the pages we returned in previous shrink
calls back to blocks.  We now divide the input from pages back into
blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2021-03-01 09:49:15 -08:00
Zach Brown
0f14826ff8 Merge pull request #18 from versity/zab/quorum_slots_unmount
Zab/quorum slots unmount
2021-02-22 13:34:25 -08:00
Zach Brown
336d521e44 Use spinlock to protect server farewell list
We had a mutex protecting the list of farewell requests.  The critical
sections are all very short so we can use a spinlock and be a bit
clearer and more efficient.  While we're at it, refactor freeing to free
outside of the critical section.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
4fab75b862 Account for non-quorum in server farewell
The server has to be careful to only send farewell responses to quorum
clients once it knows that it won't need their vote to elect a leader to
serve remaining clients.

The logic for doing this forgot to take non-quorum clients into account.
It would send farewell requests to all the final majority of quorum
members once they all tried to unmount.  This could leave non-quorum
clients hung in unmount trying to send their farewell requests.

The fix is to count mounted_clients items for non-quorum clients and hold
off on sending farewell responses to the final majority until those
non-quorum clients have unmounted.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
f6f72e7eae Resume running the mount-unmount-race test
The recent quorum and unmount fixes should have addressed the failures
we were seeing in the mount-unmount-race test.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
9878312b4d Update man pages for quorum slot changes
Update the man pages with descriptions of the new mkfs -Q quorum slot
configuration and quorum_slot_nr mount option.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
7421bd1861 Filter all test device digits to 0
We mask device numbers in command output to 0:0 so that we can have
consistent golden test output.  The device number matching regex
responsible for this missed a few digits.

It didn't show up until we both tested enough mounts to get larger
device minor numbers and fixed multi-mount consistency so that the
affected tests didn't fail for other reasons.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
1db6f8194d Update xfstests to use quorum slot options
Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
2de7692336 Unmount mount point, not device
Our test unmount function unmounted the device instead of the mount
point.  It was written this way back in an old version of the harness
which didn't track mount points.

Now that we have mount points, we can just unmount that.  This stops the
umount command from having to search through all the current mounts
looking for the mountpoint for the device it was asked to unmount.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
8c1d96898a Log wait failure in mount-unmount-race test
I got a test failure where waiting returned an error, but it wasn't
clear what the error was or where it might have come from.  Add more
logging so that we learn more about what might have gone wrong.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
090646aaeb Update repo README.md for quorum slots
Update the example configuration in the README to specify the quorum
slots in mkfs arguments and mount options.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
d53350f9f1 Consistently lock server mounted_clients btree
The mounted_clients btree stores items to track mounted clients.  It's
modified by multiple greeting workers and the farewell work.

The greeting work was serialized by the farewell_mutex, but the
modifications in the farewell thread weren't protected.  This could
result in modifications between the threads being lost if the dirty
block reference updates raced in just the right way.  I saw this in
testing with deletions in farewell being lost and then that lingering
item preventing unmount because the server thought it had to wait for a
remaining quorum member to unmount.

We fix this by adding a mutex specifically to protect the
mounted_clients btree in the server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
57f34e90e9 Use mounted_client item as sign of farewell
As clients unmount they send a farewell request that cleans up
persistent state associated with the mount.  The client needs to be sure
that it gets processed, and we must maintain a majority of quorum
members mounted to be able to elect a server to process farewell
requests.

We had a mechanism using the unmount_barrier fields in the greeting and
super_block to let the final unmounting quorum majority know that their
farewells have been processed and that they didn't need to keep trying
to reconnect.

But we missed that we also need this out of band farewell handling
signal for non-quorum member clients.  The server can send
farewells to a non-member client as well as the final majority and then
tear down all the connections before the non-quorum client can see its
farewell response.  It also needs to be able to know that its farewell
has been processed before the server lets the final majority unmount.

We can remove the custom unmount_barrier method and instead have all
unmounting clients check for their mounted_client item in the server's
btree.  This item is removed as the last step of farewell processing so
if the client sees that it has been removed it knows that it doesn't
need to resend the farewell and can finish unmounting.

This fixes a bug where a non-quorum unmount could hang if it raced with
the final majority unmounting.  I was able to trigger this hang in our
tests with 5 mounts and 3 quorum members.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
79f6878355 Clean up block writing in mkfs
scoutfs mkfs had two block writing functions: write_block to fill out
some block header fields including crc calculation, and then
write_block_raw to pwrite the raw buffer to the bytes in the device.

These were used inconsistently as blocks came and went over time.  Most
callers filled out all the header fields themselves and called the raw
writer.  write_block was only used for super writing, which made sense
because it clobbered the block's header with the super header, so the
header magic and seq fields set by the caller would be lost.

This cleans up the mess.  We only have one block writer and the caller
provides all the hdr fields.  Everything uses it instead of filling out
the fields themselves and calling the raw writer.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
740e13e53a Return error from _quorum_setup
Well that's a silly mistake.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
dbb716f1bb Update tests for quorum slots
Update the tests to deal with the mkfs and mount changes for the
specifically configured quorum slots.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
87fcad5428 Update scoutfs mkfs and print for quorum slots
Signed-off-by: Zach Brown <zab@versity.com>
2021-02-22 13:28:38 -08:00
Zach Brown
406d157891 Add stringify macro to utils
Add macros for stringifying either the name of a macro or its value.  In
keeping with making our utils/ sort of look like kernel code, we use the
kernel stringify names.
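
For reference, this is the usual two-level kernel pattern (as in
include/linux/stringify.h):

    #define __stringify_1(x...) #x
    #define __stringify(x...)   __stringify_1(x)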

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-18 12:57:30 -08:00
Zach Brown
8e34c5d66a Use quorum slots and background election work
Previously quorum configuration specified the number of votes needed to
elect the leader.  This was an excessive amount of freedom in the
configuration of the cluster which created all sorts of problems which
had to be designed around.

Most acutely, though, it required a probabilistic mechanism for mounts
to persistently record that they're starting a server so that future
servers could find and possibly fence them.  They would write to a lot
of quorum blocks and trust that it was unlikely that future servers
would overwrite all of their written blocks.  Overwriting was always
possible, which would be bad enough, but it also required so much IO
that we had to use long election timeouts to avoid spurious fencing.
These longer timeouts had already gone wrong on some storage
configurations, leading to hung mounts.

To fix this and other problems we see coming, like live membership
changes, we now specifically configure the number and identity of mounts
which will be participating in quorum voting.  With specific identities,
mounts now have a corresponding specific block they can write to and
which future servers can read from to see if they're still running.

We change the quorum config in the super block from a single
quorum_count to an array of quorum slots which specify the address of
the mount that is assigned to that slot.  The mount argument to specify
a quorum voter changes from "server_addr=$addr" to "quorum_slot_nr=$nr"
which specifies the mount's slot.  The slot's address is used for udp
election messages and tcp server connections.

Now that we specifically have configured unique IP addresses for all the
quorum members, we can use UDP messages to send and receive the vote
messages in the raft protocol to elect a leader.  The quorum code doesn't
have to read and write disk block votes and is a more reasonable core
loop that either waits for received network messages or timeouts to
advance the raft election state machine.

The quorum blocks are now used for slots to store their persistent raft
term and to set their leader state.  We have event fields in the block
to record the timestamp of the most recent interesting events that
happened to the slot.

Now that raft doesn't use IO, we can leave the quorum election work
running in the background.  The raft work in the quorum members is
always running so we can use a much more typical raft implementation
with heartbeats.  Critically, this decouples the client and election
life cycles.  Quorum is always running and is responsible for starting
and stopping the server.  The client repeatedly tries to connect to a
server, it has nothing to do with deciding to participate in quorum.

Finally, we add a quorum/status sysfs file which shows the state of the
quorum raft protocol in a member mount and has the last messages that
were sent to or received from the other members.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-18 12:57:30 -08:00
Zach Brown
1c7bbd6260 More accurately describe unmounting quorum members
As a client unmounts it sends a farewell request to the server.  We have
to carefully manage unmounting the final quorum members so that there is
always a remaining quorum to elect a leader to start a server to process
all their farewell requests.

The mechanism for doing this described these clients as "voters".
That's not really right; in our terminology voters and candidates are
temporary roles taken on by members during a specific election term in
the raft protocol.  It's more accurate to describe the final set of
clients as quorum members.  They can be voters or candidates depending
on how the raft protocol timeouts work out in any given election.

So we rename the greeting flag, mounted client flag, and the code and
comments on either side of the client and server to be a little clearer.

This only changes symbols and comments, there should be no functional
change.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-11 15:47:39 -08:00
Zach Brown
3ad18b0f3b Update super blkno field tests for meta device
As we read the super we check the first and last meta and data blkno
fields.  The tests weren't updated as we moved from one device to two
metadata and data devices.

Add a helper that tests the range for the device and test both meta and
data ranges fully, instead of only testing the endpoints of each and
assuming they're related because they're living on one device.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-11 15:47:29 -08:00
Andy Grover
79cd7a499b Merge pull request #17 from versity/zab/disable_mount_unmount_test
Disable mount-unmount-race test
2021-02-01 10:09:26 -08:00
Zach Brown
6ad18769cb Disable mount-unmount-race test
The mount-unmount-race test is occasionally hanging, disable it while we
debug it and have test coverage for unrelated work.

Signed-off-by: Zach Brown <zab@versity.com>
2021-02-01 10:07:47 -08:00
Zach Brown
49d82fcaaf Merge pull request #14 from agrover/fix-jira-202
utils: Do not assert if release is given unaligned offset or length
2021-02-01 09:46:01 -08:00
Zach Brown
e4e12c1968 Merge pull request #15 from agrover/radix-block
Remove unused radix_block struct
2021-02-01 09:24:59 -08:00
Andy Grover
15fd2ccc02 utils: Do not assert if release is given unaligned offset or length
This is checked for by the kernel ioctl code, so giving unaligned values
will return an error, instead of aborting with an assert.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-29 09:30:57 -08:00
Andy Grover
eea95357d3 Remove unused radix_block struct
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-26 16:07:05 -08:00
Andy Grover
9842c5d13e Merge pull request #13 from versity/zab/multi_mount_test_fixes
Zab/multi mount test fixes
2021-01-26 15:56:33 -08:00
Zach Brown
ade539217e Handle advance_seq being replayed in new server
As a core principle, all server message processing needs to be safe to
replay as servers shut down and requests are resent to new servers.

The advance_seq handler got this wrong.  It would only try to remove a
trans_seq item for the seq sent by the client before inserting a new
item for the next seq.  This change could be committed before the reply
was lost as the server shuts down.  The next server would process the
resent request but wouldn't find the old item for the seq that the
client sent, and would ignore the new item that the previous server
inserted.  It would then insert another greater seq for the same client.

This would leave behind a stale old trans_seq that would be returned as
the last_seq which would forever limit the results that could be
returned from the seq index walks.

This fix is to always remove all previous seq items for the client
before inserting a new one.  This creates O(clients) server work, but
it's minimal.

This manifested as occasional simple-inode-index test failures (say 1 in
5?) which would trigger if the unmounts during previous tests would
happen to have advance_seq resent across server shutdowns.  With this
change the test now reliably passes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
5a90234c94 Use terminated test name when saving passed stats
We've grown some test names that are prefixes of others
(createmany-parallel, createmany-parallel-mounts).  When we're searching
for lines with the test name we have to search for the exact test name,
by terminating the name with a space, instead of searching for a line
that starts with the test name.

This fixes strange output and saved passed stats for the names that
share a prefix.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
f81e4cb98a Add whitespace to xfstests output message
The message indicating that xfstests output was now being shown was
mashed up against the previous passed stats and it was gross and I hated
it.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
1fc706bf3f Filter hrtimer slow messages from dmesg
When running in debug kernels in guests we can really bog down things
enough to trigger hrtimer warnings.  I don't think there's much we can
reasonably do about that.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
e9c3aa6501 More carefully cancel server farewell work
Farewell work is queued by farewell message processing.  Server shutdown
didn't properly wait for pending farewell work to finish before tearing
down.  As the server work destroyed the server's connection the farewell
work could still be running and try to send responses down the socket.

We make the server more carefully avoid queueing farewell work if it's
in the process of shutting down and wait for farewell work to finish
before destroying the server's resources.

This fixed all manner of crashes that were seen in testing when a bunch
of nodes unmounted, creating farewell work on the server as it itself
unmounted and destroyed the server.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
d39268bbc1 Fix spurious EIO from scoutfs_srch_get_compact
scoutfs_srch_get_compact() is building up a compaction request which has
a list of srch files to read and sort and write into a new srch file.
It finds input files by searching for a sufficient number of similar
files: first any unsorted log files and then sorted log files that are
around the same size.

It finds the files by using btree next on the srch zone which has types
for unsorted srch log files, sorted srch files, but also pending and
busy compaction items.

It was being far too cute about iterating over different key types.  It
was trying to adapt to finding the next key and was making assumptions
about the order of key types.  It didn't notice that the pending and
busy key types followed log and sorted and would generate EIO when it
ran into them and found their value length didn't match what it was
expecting.

Rework the next item ref parsing so that it returns -ENOENT if it gets
an unexpected key type, then look for the next key type when checking
for -ENOENT.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
35ed1a2438 Add t_require_meta_size function
Add a function that tests can use to skip when the metadata device isn't
large enough.  I thought we needed to avoid enospc in a particular test,
but it turns out the test's failure was unrelated.  So this isn't used
for now but it seems nice to keep around.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
32e7978a6e Extend lock invalidate grace period
The grace period is intended to let lock holders squeeze in more bulk
work before another node pulls the lock out from under them.  The length
of the delay is a balance between getting more work done per lock hold
and adding latency to ping-ponging workloads.

The current grace period was too short.  To do work in the conflicting
case you often have to read the result that the other mount wrote as you
invalidated their lock.  The test was written in the LSM world where
we'd effectively read a single level 0 1MB segment.  In the btree world
we're checking bloom blocks and reading the other mount's btree.  It has
more dependent read latency.

So we turn up the grace period to let conflicting readers squeeze in
more work before pulling the lock out from under them.  This value was
chosen to make lock-conflicting-batch-commit pass in guests sharing nvme
metadata devices in debugging kernels.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
8123b8fc35 fix lock-conflicting-batch-commit conf output
The test had a silly typo in the label it put on the time it took mounts
to perform conflicting metadata changes.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
da5911c311 Use d_materialise_unique to splice dir dentries
When we're splicing in dentries in lookup we can be splicing the result
of changes on other nodes into a stale dcache.  The stale dcache might
contain dir entries and the dcache does not allow aliased directories.

Use d_materialise_unique() to splice in dir inodes so that we remove all
aliased dentries which must be stale.

We can still use d_splice_alias() for all other inode types.  Any
existing stale dentries will fail revalidation before they're used.
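
In lookup the splice decision then reduces to roughly this sketch:

    if (inode && S_ISDIR(inode->i_mode))
        return d_materialise_unique(dentry, inode); /* drops stale dir aliases */

    return d_splice_alias(inode, dentry);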

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
098fc420be Add some item cache page tracing
Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
7a96537210 Leave mounts mounted if run-tests fails
We can lose interesting state if the mounts are unmounted as tests fail,
only unmount if all the tests pass.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
0607dfdac8 Enable and collect trace_printk
Weirdly, run-tests was treating trace_printk not as an option to enable
trace_printk() traces but as an option to print trace events to the
console with printk?  That's not a thing.

Make -P really enable trace_printk tracing and collect it as it would
for enabled trace events.  It needs to be treated separately from the -t
options that enable trace events.

While we're at it treat the -P trace dumping option as a stand-alone
option that works without -t arguments.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
0354bb64c5 More carefully enable tracing in run-tests
run-tests.sh has a -t argument which takes a whitespace separated string
of globs of events to enable.  This was hard to use and made it very
easy to accidentally expand the globs at the wrong place in the script.

This makes each -t argument specify a single word glob which is stored
in an array so the glob isn't expanded until it's applied to the trace
event path.   We also add an error for -t globs that didn't match any
events and add a message with the count of -t arguments and enabled
events.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
631801c45c Don't queue lock invalidation work during shutdown
The lock invalidation work function needs to be careful not to requeue
itself while we're shutting down or we can be left with invalidation
functions racing with shutdown.  Invalidation calls igrab so we can end
up with unmount warning that there are still inodes in use.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:46:07 -08:00
Zach Brown
47a1ac92f7 Update ino-path args in basic-posix-consistency
The ino-path calls in basic-posix-consistency weren't updated for the
recent change to scoutfs cli args.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-26 14:45:23 -08:00
Zach Brown
004f693af3 Add golden output for mount-unmount-race test
Signed-off-by: Zach Brown <zab@versity.com>
2021-01-25 14:19:35 -08:00
Andy Grover
f271a5d140 Merge pull request #12 from versity/zab/andys_fallocate_fix_minor_cleanup
Retry if transaction cannot alloc for fallocate or write
2021-01-25 12:52:14 -08:00
Andy Grover
355eac79d2 Retry if transaction cannot alloc for fallocate or write
Add a new distinguishable return value (ENOBUFS) from allocator for if
the transaction cannot alloc space. This doesn't mean the filesystem is
full -- opening a new transaction may result in forward progress.

Alter fallocate and get_blocks code to check for this err val and retry
with a new transaction. Handling actual ENOSPC can still happen, of
course.
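
The callers' retry loop looks roughly like this sketch, with
placeholder transaction and allocation helper names:

    do {
        ret = hold_trans(sb);
        if (ret)
            break;
        ret = alloc_data_block(inode, iblock, &blkno);
        release_trans(sb);
        /* -ENOBUFS: this transaction's allocator ran dry, a fresh
         * transaction may refill it; -ENOSPC is still a real error */
    } while (ret == -ENOBUFS);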

Add counter called "alloc_trans_retry" and increment it from both spots.

Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: fixed up write_begin error paths]
2021-01-25 09:32:01 -08:00
Zach Brown
d8b4e94854 Merge pull request #10 from agrover/rm-item-accounting
Remove item accounting
2021-01-21 09:57:53 -08:00
Andy Grover
bed33c7ffd Remove item accounting
Remove kmod/src/count.h
Remove scoutfs_trans_track_item()
Remove reserved/actual fields from scoutfs_reservation

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-20 17:01:08 -08:00
Andy Grover
b370730029 Merge pull request #11 from versity/zab/item_cache_memory_corruption
Fix item cache page memory corruption
2021-01-20 10:27:20 -08:00
Zach Brown
d64dd89ead Fix item cache page memory corruption
The item cache page life cycle is tricky.  There are no proper page
reference counts, everything is done by nesting the page rwlock inside
item_cache_info rwlock.  The intent is that you can only reference pages
while you hold the rwlocks appropriately.  The per-cpu page references
are outside that locking regime so they add a reference count.  Now
there are reference counts for the main cache index reference and for
each per-cpu reference.

The end result of all this is that you can only reference pages outside
of locks if you're protected by references.

Lock invalidation messed this up by trying to add its right split page
to the lru after it was unlocked.  Its page reference wasn't protected
at this point.  Shrinking could be freeing that page, and so it could be
putting a freed page's memory back on the lru.

Shrinking had a little bug that it was using list_move to move an
initialized lru_head list_head.  It turns out to be harmless (list_del
will just follow pointers to itself and set itself as next and prev all
over again), but boy does it catch one's eye.  Let's remove all
confusion and drop the reference while holding the cinf->rwlock instead
of trying to optimize freeing outside locks.

Finally, the big one: inserting a read item after compacting the page to
make room was inserting through stale parent pointers into the old
pre-compacted page, rather than the new page that was swapped in by
compaction.  This left references to a freed page in the page rbtree and
hilarity ensued.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-20 09:02:29 -08:00
Zach Brown
8d81196e01 Merge pull request #7 from agrover/versioning
Filesystem version instead of format hash check
2021-01-19 11:55:32 -08:00
Andy Grover
d731c1577e Filesystem version instead of format hash check
Instead of hashing headers, define an interop version. Do not mount
superblocks that have a different version, either higher or lower.
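
The mount-time check then amounts to something like this sketch; the
constant and field names are illustrative:

    if (le64_to_cpu(super->version) != SCOUTFS_INTEROP_VERSION) {
        printk(KERN_ERR "scoutfs: unsupported format version %llu, need %llu\n",
               (unsigned long long)le64_to_cpu(super->version),
               (unsigned long long)SCOUTFS_INTEROP_VERSION);
        return -EINVAL;
    }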

Since this is pretty much the same as the format hash except it's a
constant, minimal code changes are needed.

Initial dev version is 0, with the intent that version will be bumped to
1 immediately prior to tagging initial release version.

Update README. Fix comments.

Add interop version to notes and modinfo.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-15 10:53:00 -08:00
Andy Grover
a421bb0884 Merge pull request #5 from versity/zab/move_blocks_ioctl
Zab/move blocks ioctl
2021-01-14 16:18:45 -08:00
Zach Brown
773eb129ed Add move-blocks test
Add a basic test of the move_blocks ioctl.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-14 13:42:22 -08:00
Zach Brown
eb3981c103 Add move-blocks scoutfs cli command
Add a move-blocks command that translates arguments and calls the
MOVE_BLOCKS ioctl.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-14 13:42:22 -08:00
Zach Brown
3139d3ea68 Add move_blocks ioctl
Add a relatively constrained ioctl that moves extents between regular
files.  This is intended to be used by tasks which combine many existing
files into a much larger file without reading and writing all the file
contents.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-14 13:42:22 -08:00
Zach Brown
4da3d47601 Move ALLOC_DETAIL ioctl definition
By convention we have the _IO* ioctl definition after the argument
structs and ALLOC_DETAIL got it a bit wrong so move it down.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-14 13:42:22 -08:00
Zach Brown
aa1b1fa34f Add util.h for kernel helpers
Add a little header for inline convenience functions.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-14 13:42:22 -08:00
Zach Brown
8fcc9095e6 Merge pull request #6 from agrover/super
Fix mkfs check for existing ScoutFS superblock
2021-01-14 08:57:53 -08:00
Andy Grover
299062a456 Fix mkfs check for existing ScoutFS superblock
We were checking for the wrong magic value.

We now need to use -f when running mkfs in run-tests for things to work.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-13 16:32:41 -08:00
Andy Grover
7cac1e7136 Merge pull request #1 from agrover/use-argp
Rework scoutfs command-line parsing
2021-01-13 11:14:08 -08:00
Andy Grover
454dbebf59 Categorize not enough mounts as skip, not fail
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
2c5871c253 Change release ioctl to be denominated in bytes not blocks
This more closely matches stage ioctl and other conventions.

Also change release code to use offset/length nomenclature for consistency.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
64a698aa93 Make changes to tests for new scoutfs cmdline syntax
Some different error messages require changes to golden/*

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
d48b447e75 Do not set -Wpadded except for checking kmod-shared headers
Remove now-unneeded manual padding in arg structs.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
5241bba7f6 Update scoutfs.8 man page
Update for cli args and options changes. Reorder subcommands to match
scoutfs built-in help.

Consistent ScoutFS capitalization.

Tighten up some descriptions and verbiage for consistency and omit
descriptions of internals in a few spots.

Add SEE ALSO for blockdev(8) and wipefs(8).

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
e0a2175c2e Use argp info instead of duplicating for cmd_register()
Make it static and then use it both for argp_parse as well as
cmd_register_argp.

Split commands into five groups, to help understanding of their
usefulness.

Mention that each command has its own help text, and that we are being
fancy to keep the user from having to give fs path.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
f2cd1003f6 Implement argp support for walk-inodes
This has some fancy parsing going on, and I decided to just leave it
in the main function instead of going to the effort to move it all
to the parsing function.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
97c6cc559e Implement argp support for data-waiting and data-wait-err
These both have a lot of required options.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
7c54c86c38 Implement argp support for setattr
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
e1ba508301 Implement argp support for counters
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
f35154eb19 counters: Ensure name_wid[0] is initialized to zero
I was seeing some segfaults and other weirdness without this.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
7befc61482 Implement argp support for mkfs and add --force
Support max-meta-size and max-data-size using KMGTP units with rounding.

Detect other fs signatures using blkid library.

Detect ScoutFS super using magic value.

Move read_block() from print.c into util.c since blkid also needs it.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 16:29:42 -08:00
Andy Grover
1383ca1a8d Merge pull request #3 from versity/zab/multithread_write_extra_commits
Consistently sample data alloc total_len
2021-01-12 11:51:15 -08:00
Andy Grover
6b5ddf2b3a Implement argp support for print
Print warning if printing a data dev, you probably wanted the meta dev.

Change read_block to return err value. Otherwise there are confusing
ENOMEM messages when pread() fails. e.g. try to print /dev/null.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 10:47:47 -08:00
Andy Grover
d025122fdd Implement argp support for listxaddr-hidden
Rename to list-hidden-xaddrs.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 10:47:47 -08:00
Andy Grover
706fe9a30e Implement argp support for search-xattrs
Get fs path via normal methods, and make xattr an argument not an option.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 10:47:47 -08:00
Andy Grover
0f17ecb9e3 Implement argp support for stage/release
Make offset and length optional. Allow size units (KMGTP) to be used
  for offset/length.

release: Since off/len no longer given in 4k blocks, round offset and
  length to 4KiB, down and up respectively. Emit a message if rounding
  occurs.

Make version a required option.

stage: change ordering to src (the archive file) then the dest (the
  staged file).

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-12 10:47:47 -08:00
Zach Brown
fc003a5038 Consistently sample data alloc total_len
With many concurrent writers we were seeing excessive commits forced
because it thought the data allocator was running low.  The transaction
was checking the raw total_len value in the data_avail alloc_root for
the number of free data blocks.  But this read wasn't locked, and
allocators could completely remove a large free extent and then
re-insert a slightly smaller free extent as they perform their
alloction.  The transaction could see a temporary very small total_len
and trigger a commit.

Data allocations are serialized by a heavy mutex so we don't want to
have the reader try and use that to see a consistent total_len.  Instead
we create a data allocator run-time struct that has a consistent
total_len that is updated after all the extent items are manipulated.
This also gives us a place to put the caller's cached extent so that it
can be included in the total_len, previously it wasn't included in the
free total that the transaction saw.

The file data allocator can then initialize and use this struct instead
of its raw use of the root and cached extent.  Then the transaction can
sample its consistent total_len that reflects the root and cached
extent.

A subtle detail is that fallocate can't use _free_data to return an
allocated extent on error to the avail pool.  It instead frees into the
data_free pool like normal frees.  It doesn't really matter that this
could prematurely drain the avail pool because it's in an error path.

Signed-off-by: Zach Brown <zab@versity.com>
2021-01-06 09:25:32 -08:00
Andy Grover
10df01eb7a Implement argp support for ino-path
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-04 11:49:31 -08:00
Andy Grover
68b8e4098d Implement argp support for stat and statfs
Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-04 11:49:31 -08:00
Andy Grover
5701184324 Implement argp support for df
Convert arg parsing to use argp. Use new get_path() helper fn.

Add -h human-readable option.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-04 11:49:31 -08:00
Andy Grover
a3035582d3 Add strdup_or_error()
Add a helper function to handle the impossible event that strdup fails.

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-04 11:49:31 -08:00
Andy Grover
9e47a32257 Add get_path()
Implement a fallback mechanism for opening paths to a filesystem. If
explicitly given, use that. If env var is set, use that. Otherwise, use
current working directory.

Use wordexp to expand ~, $HOME, etc.
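
A sketch of the fallback order; the environment variable name and error
handling here are illustrative rather than the exact utils code:

    #include <stdlib.h>
    #include <string.h>
    #include <wordexp.h>

    static char *get_path(const char *arg)
    {
        const char *p = arg ? arg : getenv("SCOUTFS_PATH");
        wordexp_t we;
        char *ret = NULL;

        if (!p)
            p = ".";                /* fall back to the current directory */
        if (wordexp(p, &we, 0) != 0)
            return NULL;
        if (we.we_wordc > 0)
            ret = strdup(we.we_wordv[0]);   /* expands ~, $HOME, etc. */
        wordfree(&we);
        return ret;
    }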

Signed-off-by: Andy Grover <agrover@versity.com>
2021-01-04 11:49:31 -08:00
Andy Grover
b4592554af Merge pull request #2 from versity/zab/stage_read_zero_block
Zab/stage read zero block
2020-12-17 16:48:52 -08:00
Zach Brown
1e0f8ee27a Finally change all 'ci' inode info ptrs to 'si'
Finally get rid of the last silly vestige of the ancient 'ci' name and
update the scoutfs_inode_info pointers to si.  This is just a global
search and replace, nothing functional changes.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-15 15:20:02 -08:00
Zach Brown
511cb04330 Add stage-mulit-part test
Add a test which stages a file in multiple parts while a long-lived
process is blocking on offline extents trying to compare the file to the
known contents.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-15 15:13:42 -08:00
Zach Brown
807ae11ee9 Protect per-inode extent items with extent_sem
Now that we have full precision extents a writer with i_mutex and a page
lock can be modifying large extent items which cover much of the
surrounding pages in the file.  Readers can be in a different page with
only the page lock and try to work with extent items as the writer is
deleting and creating them.

We add a per-inode rwsem which just protects file extent item
manipulation.  We try to acquire it as close to the item use as possible
in data.c which is the only place we work with file extent items.
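
The usage pattern in data.c is roughly this sketch, with placeholder
extent helpers:

    /* reader, e.g. get_block */
    down_read(&si->extent_sem);
    ret = lookup_file_extent(inode, iblock, &ext);
    up_read(&si->extent_sem);

    /* writer deleting and creating extent items */
    down_write(&si->extent_sem);
    ret = modify_file_extents(inode, &ext);
    up_write(&si->extent_sem);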

This stops rare read corruption we were seeing where get_block in a
reader was racing with extent item deletion in a stager at a further
offset in the file.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-15 11:56:50 -08:00
Zach Brown
7ca3672a67 Update repo README.md, remove from kmod
Move the main scoutfs README.md from the old kmod/ location into the top
of the new single repository.  We update the language and instructions
just a bit to reflect that we can checkout and build the module and
utilities from the single repo.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-07 10:39:20 -08:00
Zach Brown
eb22425bad Update tests/ README
The README in tests/ had gone a bit stale.  While it was originally
written to be a README.md displayed in the github repo, we can
still use it in place as a quick introduction to the tests.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-07 10:39:20 -08:00
Zach Brown
e386b900ee Remove README.md from utils
This was just boilerplate for the utils repo.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-07 10:39:20 -08:00
Zach Brown
6415814f92 Use kmod and utils subdirs instead of repos
When we had three repos the run-tests harness helped by checking
branches in kmod and utils repos to build and test.  Now that we have
one repo we can just use the sibling kmod/ and utils/ dirs in the repo.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-07 10:39:20 -08:00
Zach Brown
86cf3ec4ab Remove format.h and ioctl.h from utils
Now that we're in one repo utils can get its format and ioctl headers
from the authoritative kmod files.  When we're building a dist tarball
we copy the files over so that the build from the dist tarball can use
them.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-07 10:39:20 -08:00
Zach Brown
aa6e210ac7 Fix kmod spec path in dist tarball
For some reason, the make dist rule in kmod/ put the spec file in a
scoutfs-$ver/ directory instead of scoutfs-kmod-$ver/ like the rest of
the files, and unlike the scoutfs-utils-$ver/ directory that the utils
spec file is put in within the utils dist tarball.

This adds -kmod to the path for the spec file so that it matches the
rest of the kmod dist tarball.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-07 10:39:20 -08:00
Zach Brown
e648063baa Add simple top-level Makefile
Add a trivial top-level Makefile that just runs Make in all the subdirs.
This will probably expand over time.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-07 10:39:20 -08:00
Zach Brown
bc09012836 Merge scoutfs-tests repo filtered to tests/
Signed-off-by: Zach Brown <zab@versity.com>
2020-12-07 09:49:17 -08:00
Zach Brown
cf78e92eaf Merge scoutfs-utils-dev repo filtered to utils/
Signed-off-by: Zach Brown <zab@versity.com>
2020-12-07 09:49:02 -08:00
Zach Brown
19f5c1d7bf Merge scoutfs-kmod-dev repo filtered to kmod/
Signed-off-by: Zach Brown <zab@versity.com>
2020-12-07 09:48:00 -08:00
Zach Brown
bb0ed34786 Initial commit 2020-12-07 09:47:12 -08:00
Zach Brown
14530471c4 scoutfs-tests: add srch-basic-functionality
Add basic functional testing of finding inodes by their xattrs.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 13:40:33 -08:00
Zach Brown
88aefc381a scoutfs-tests: add find_xattrs
Add a utility that mimics our search_xattrs ioctl with directory entry
walking and fgetxattr as efficiently as it can so we can use it to test
large file populations.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 13:40:33 -08:00
Zach Brown
8982750266 scoutfs-tests: bulk create more clearly sets xattr
Just set the value using a single char; the previous code mistakenly
passed the size of the pointer as the value size.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 13:40:33 -08:00
Zach Brown
e2dfffcab9 scoutfs: search_xattrs name requires srch tag
The search_xattrs ioctl is only going to find entries for xattrs with
the .srch. tag which create srch entries as they're created and
destroyed.  Export the xattr tag parsing so that the ioctl can return
-EINVAL for xattrs which don't have the scoutfs prefix and the .srch.
tag.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 09:58:35 -08:00
Zach Brown
f0ddf5ff04 scoutfs: search_xattrs returns each ino once
Hash collisions can lead to multiple xattr ids in an inode being found
for a given name hash value.  If this happens we only want to return the
inode number once.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 09:58:35 -08:00
Zach Brown
18aee0ebbd scoutfs: fix lost entries in resumed srch compact
Compacting very large srch files can use all of a given operation's
metadata allocator.  When this happens we record the position in the
srch files of the compaction in the pending item.

We could lose entries when this happens because the kway_next callback
would advance the srch file position as it read entries and put them in
the tournament tree leaves, not as it put them in the output file.  We'd
continue from the entries that were next to go in the tournament leaves,
not from what was in the leaves.

This refactors the kway merge callbacks to differentiate between getting
entries at the position and advancing the positions.  We initialize the
tournament leaves by getting entries at the positions and only advance
the position as entries leave the tournament tree and are either stored
in the output srch files or are dropped.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 09:58:35 -08:00
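To make the peek/advance distinction concrete, here is a standalone C
toy that merges sorted int arrays with a simple linear scan rather than
the tournament tree the srch code uses; the names and structure are
illustrative, not the actual kway callbacks.

#include <stdio.h>

struct src {
    const int *vals;
    int nr;
    int pos;    /* only advanced when an entry is emitted */
};

/* look at the entry at the current position without consuming it */
static int peek(struct src *s, int *val)
{
    if (s->pos >= s->nr)
        return 0;
    *val = s->vals[s->pos];
    return 1;
}

/* advance past the entry only once it has left the merge */
static void advance(struct src *s)
{
    s->pos++;
}

static void kway_merge(struct src *srcs, int k)
{
    for (;;) {
        int best = -1, bv = 0, v, i;

        for (i = 0; i < k; i++) {
            if (peek(&srcs[i], &v) && (best < 0 || v < bv)) {
                best = i;
                bv = v;
            }
        }
        if (best < 0)
            break;

        printf("%d\n", bv);     /* entry stored in the output */
        advance(&srcs[best]);   /* position moves only now */
    }
}

int main(void)
{
    const int a[] = { 1, 4, 9 }, b[] = { 2, 3, 10 };
    struct src srcs[] = {
        { .vals = a, .nr = 3 },
        { .vals = b, .nr = 3 },
    };

    kway_merge(srcs, 2);
    return 0;
}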
Zach Brown
c35f1ff324 scoutfs: inc end when search xattrs retries
In the rare case that searching for xattrs only finds deletions within
its window it retries the search past the window.  The end entry is
inclusive and is the last entry that can be returned.  When retrying the
search we need to start from the entry after that to ensure forward
progress.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 09:58:35 -08:00
Zach Brown
6770a31683 scoutfs: consistently trim srch entry range
We have to limit the number of srch entries that we'll track while
performing a search for all the inodes that contain xattrs that match
the search hash value.

As we hit the limit on the number of entries to track we have to drop
entries.  As we drop entries we can't return any inodes for entries
past the dropped entries.  We were updating the end point of the search
as we dropped entries past the tracked set, but we weren't updating the
search end point if we dropped the last currently tracked entry.

And we were setting the end point to the dropped entry, not to the entry
before it.  This could lead us to spuriously returning deleted entries
if we drop the creation entry and then allow tracking its deletion
later.

This fixes both those problems.  We now properly set the end point to
just before the dropped entry for all entries that we drop.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 09:58:35 -08:00
Zach Brown
9395360324 scoutfs: add srch entry inc/dec
We're going to need to increment and decrement srch entries in coming
fixes.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 09:58:35 -08:00
Zach Brown
7c5823ad12 scoutfs: drop duplicate compacted srch entries
The k-way merge used by srch file compaction only dropped the second
entry in a pair of duplicate entries.  Duplicate entries are both
supposed to be removed so that entries for removed xattrs don't take up
space in the files.

This both drops the second entry and removes the first encoded entry.
As we encode entries we remember their starting offset and the previous
entry that they were encoded from.  When we hit a duplicate entry
we undo the encoding of the previous entry.

This only works within srch file blocks.  We can still have duplicate
entries that span blocks but that's unlikely and relatively harmless.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 09:58:35 -08:00
Zach Brown
560c91a0e4 scoutfs: fix binary search for sorted srch block
The search_xattrs ioctl looks for srch entries in srch files that map
the caller's hashed xattr name to inodes.  As it searches it maintains a
range of entries that it is looking for.  When it searches sorted srch
files for entries it first performs a binary search for the start of the
range and then iterates over the blocks until it reaches the end of its
range.

The binary search for the start of the range was a bit wrong.  If the
start of the range was less than all the blocks then the binary search
could wrap the left index, try to get a file block at a negative index,
and return an error for the search.

This is relatively hard to hit in practice.  You have to search for the
xattr name with the smallest hashed value and have a sorted srch file
that's just the right size so that blk offset 0 is the last block
compared in the binary search, which sets the right index to -1.  If
there are lots of xattrs, or sorted files of the wrong length, it'll
work.

This fixes the binary search so that it specifically records the first
block offset that intersects with the range and tests that the left and
right offsets haven't been inverted.  Now that we're not breaking out of
the binary search loop we can more obviously put each block reference
that we get.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 09:58:35 -08:00
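The shape of the corrected search can be sketched in plain C over an
array of per-block greatest keys; this only illustrates the "remember
the first intersecting block and never let the indices wrap" idea, it
is not the srch code itself.

#include <stdio.h>

/* last[i] is the greatest key stored in block i, blocks are sorted */
static int first_intersecting_block(const int *last, int nr, int start)
{
    int left = 0, right = nr - 1, found = -1;

    while (left <= right) {
        int mid = left + (right - left) / 2;

        if (last[mid] >= start) {
            /* block mid could contain the range start, remember it */
            found = mid;
            right = mid - 1;
        } else {
            left = mid + 1;
        }
    }

    return found;   /* -1: the range starts past all blocks */
}

int main(void)
{
    const int last[] = { 10, 20, 30, 40 };

    /* start smaller than every key in the file -> block 0, not -1 */
    printf("%d\n", first_intersecting_block(last, 4, 5));
    printf("%d\n", first_intersecting_block(last, 4, 25));
    printf("%d\n", first_intersecting_block(last, 4, 99));
    return 0;
}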
Zach Brown
4647a6ccb2 scoutfs: fix srch btree iref puts
The srch code was putting btree item refs outside of success.  This is
fine, but they only need to be put when btree ops return success and
have set the reference.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-03 09:58:35 -08:00
Andy Grover
1bef610416 scoutfs: Don't destroy sroot unless srch_search_xattrs() was called
Until then, sroot is uninitialized so it's not safe to call
destroy_rb_root().

Signed-off-by: Andy Grover <agrover@versity.com>
2020-12-03 09:02:31 -08:00
Zach Brown
9375b9d3b7 scoutfs: commit while enough meta for dirty items
Dirty items in a client transaction are stored in OS pages.  When the
transaction is committed each item is stored in its position in a dirty
btree block in the client's existing log btree.  Allocators are refilled
between transaction commits so a given commit must have sufficient meta
allocator space (avail blocks and unused freed entries) for all the
btree blocks that are dirtied.

The number of btree blocks that are written, thus the number of cow
allocations and frees, depends on the number of blocks in the log btree
and the distribution of dirty items amongst those blocks.  In a typical
load items will be near each other and many dirty items in smaller
kernel pages will be stored in fewer larger btree blocks.

But with the right circumstances, the ratio of dirty pages to dirty
blocks can be much smaller.  With a very large directory and random
entry renames you can easily have 1 btree block dirtied for every page
of dirty items.

Our existing meta allocator fill targets and the number of dirty item
cache pages we allowed did not properly take this into account.  It was
possible (and, it turned out, relatively easy to test for with a huge
directory and random renames) to run out of meta avail blocks while
storing dirty items in dirtied btree blocks.

This rebalances our targets and thresholds to make it more likely that
we'll have enough allocator resources to commit dirty items.  Instead of
having an arbitrary limit on the number of dirty item cache pages, we
require that a given number of dirty item cache pages have a given
number of allocator blocks available.

We require a decent number of available blocks for each dirty page, so
we increase the server's target number of blocks to give the client so
that it can still build large transactions.

This code is conservative and should not be a problem in practice, but
it's theoretically possible to build a log btree and set of dirty items
that would dirty more blocks than this code assumes.  We will probably
revisit this as we add proper support for ENOSPC.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-02 09:25:13 -08:00
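The relationship between dirty item cache pages and allocator headroom
might look roughly like the toy check below; the struct, the field
names, and the two-blocks-per-page ratio are all invented for
illustration, not the module's values.

#include <stdbool.h>
#include <stdio.h>

#define META_BLOCKS_PER_DIRTY_PAGE 2    /* hypothetical reserve ratio */

struct trans {
    unsigned long dirty_item_pages;
    unsigned long meta_avail_blocks;
    unsigned long meta_freed_space;
};

/* can we dirty one more item cache page and still commit safely? */
static bool can_dirty_another_page(const struct trans *t)
{
    unsigned long needed =
        (t->dirty_item_pages + 1) * META_BLOCKS_PER_DIRTY_PAGE;

    return t->meta_avail_blocks >= needed &&
           t->meta_freed_space >= needed;
}

int main(void)
{
    struct trans t = {
        .dirty_item_pages = 100,
        .meta_avail_blocks = 300,
        .meta_freed_space = 300,
    };

    printf("%s\n", can_dirty_another_page(&t) ? "ok" : "commit first");
    return 0;
}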
Zach Brown
ae286bf837 scoutfs: update srch _alloc_meta_low callers
The srch system checks that it has allocator space while deleting srch
files and while merging them and dirtying output blocks.  Update the
callers to check for the correct number of avail or freed blocks that it
needs between each check.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-02 09:25:13 -08:00
Zach Brown
a5d9ac5514 scoutfs: rework scoutfs_alloc_meta_low, takes arg
Previously, scoutfs_alloc_meta_lo_thresh() returned true when a small
static number of metadata blocks were either available to allocate or
had space for freeing.  This didn't make a lot of sense as the correct
number depends on how many allocations each caller will make during
their atomic transaction.

Rework the call to take an argument for the number of avail or freed
blocks available to test.  This first pass just uses the existing
number, we'll get to the callers.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-02 09:25:13 -08:00
Zach Brown
7b2310442b scoutfs-tests: add createmany-rename-large-dir
Add a test that randomly renames entries in a single large directory.
This has caught bugs in the reservation of allocator resources for
client transactions.

Signed-off-by: Zach Brown <zab@versity.com>
2020-12-02 09:23:15 -08:00
Andy Grover
cf278f5fa0 scoutfs: Tidy some enum usage
Prefer named to anonymous enums. This helps readability a little.

Use enum as param type if possible (a couple spots).

Remove unused enum in lock_server.c.

Define enum spbm_flags using shift notation for consistency.

Rename get_file_block()'s "gfb" parameter to "flags" for consistency.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-11-30 13:35:44 -08:00
Andy Grover
73333af364 scoutfs: Use enum for lock mode
Signed-off-by: Andy Grover <agrover@versity.com>
2020-11-30 13:35:44 -08:00
Andy Grover
9a647a98f1 scoutfs-utils: Header changes to match kmod PR 41
Signed-off-by: Andy Grover <agrover@versity.com>
2020-11-30 13:35:39 -08:00
Zach Brown
2f3d1c395e scoutfs: show metadev_path in sysfs/mount_options
We forgot to add metadev_path to the options that are found in the
mount_options sysfs directory.

Signed-off-by: Zach Brown <zab@versity.com>
2020-11-24 14:02:02 -08:00
Zach Brown
222e5f1b9d scoutfs: convert endian in SCOUTFS_IS_META_BDEV
We missed that flags is le64.

Signed-off-by: Zach Brown <zab@versity.com>
2020-11-24 14:02:02 -08:00
Andy Grover
30668c1cdd scoutfs-utils: Fix df
Not initializing wid[] can cause incorrect output.

Also, we only need 6 columns if we reference the array from 0.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-11-19 13:03:31 -08:00
Zach Brown
84bb170e3a scoutfs-tests: add dmesg for missing metadev_path
The xfstests generic/067 test is a bit of a stinker in that it's trying
to make sure a mount fails when the device is invalid.  It does this
with raw mount calls without any filesystem-specific conventions.  Our
mount fails, so the test passes, but not for the reason the test
assumes.  It's not a great test.  But we expect it to not be great and
produce this message.

Signed-off-by: Zach Brown <zab@versity.com>
2020-11-19 11:42:04 -08:00
Zach Brown
320c411678 scoutfs-tests: add another expected ext4 dmesg
Add another expected message that comes from attempting to mount an ext4
filesystem from a device that returns read errors.

Signed-off-by: Zach Brown <zab@versity.com>
2020-11-19 11:42:04 -08:00
Zach Brown
c08f818b64 scoutfs-tests: fix T_SKIP_CHECKOUTS
The tests were checking that the literal string was zero, which it never
was.  Once we check the value of the variable we notice that the sense
of some tests went from -n || to -n &&, so switch those to -z.

Signed-off-by: Zach Brown <zab@versity.com>
2020-11-19 11:42:04 -08:00
Andy Grover
0e5fb021a2 scoutfs-tests: xfstests.sh: Changes for metadata device for scratch dev
Change MOUNT_OPTIONS and define SCOUTFS_SCRATCH_MOUNT_OPTIONS.

Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: minor comment fixes]
2020-11-19 11:42:04 -08:00
Andy Grover
b40f53633f scoutfs-tests: Support for specifying scratch meta device
For xfstests, we need to be able to specify a metadata device for the
scratch device as well.

Using -e and -f for now, but we should really switch to long options.

Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: minor arg message fixes]
2020-11-19 11:42:04 -08:00
Andy Grover
aed9f66410 scoutfs-tests: xfstests: honor SKIP_CHECKOUT
Signed-off-by: Andy Grover <agrover@versity.com>
2020-11-19 11:42:04 -08:00
Andy Grover
09256fdf15 scoutfs-tests: Changes for use of separate block devices for meta and data
Add -z option to run-tests.sh to specify metadata device.

Do a bunch of things twice.

Fix up setup-error-teardown test.

Signed-off-by: Andy Grover <agrover@versity.com>
[zab@versity.com: minor arg message fixes, golden output]
2020-11-19 11:42:04 -08:00
Andy Grover
8f72d16609 scoutfs-utils: Use separate block devices for metadata and data
mkfs: Take two block devices as arguments. Write everything to metadata
dev, and the superblock to the data dev. UUIDs match. Differentiate by
checking a bit in a new "flags" field in the superblock.

Refactor device_size() a little. Convert spaces to tabs.

Move code to pretty-print sizes to dev.c so we can use it in error
messages there, as well as in mkfs.c.

print: Include flags in output.

Add -D and -M options for setting max dev sizes

Allow sizes to be specified using units like "K", "G" etc.

Note: -D option replaces -S option, and uses above units rather than
the number of 4k data blocks.

Update man pages for cmdline changes.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-11-19 11:41:54 -08:00
Zach Brown
08eb75c508 scoutfs: update README.md for metadev_path
Update the README.md introduction to scoutfs to mention the need for and
use of metadata and data block devices.

Signed-off-by: Zach Brown <zab@versity.com>
2020-11-19 11:41:20 -08:00
Andy Grover
9f151fde92 scoutfs: Use separate block devices for metadata and data
Require a second path to metadata bdev be given via mount option.

Verify meta sb matches sb also written to data sb. Change code as needed
in super.c to allow both to be read. Remove check for overlapping
meta and data blknos, since they are now on entirely separate bdevs.

Use meta_bdev for superblock, quorum, and block.c reads and writes.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-11-19 11:41:20 -08:00
Zach Brown
f46ab548a4 scoutfs-utils: format df in two rows
It was too tricky to pick out the difference between metadata and data
usage in the previous format.  This makes it much more clear which
values are for either metadata or data.

Signed-off-by: Zach Brown <zab@versity.com>
2020-11-10 15:23:14 -08:00
Zach Brown
ff532eba75 scoutfs: recover max lock write_version
Write locks are given an increasing version number as they're granted
which makes its way into items in the log btrees and is used to find the
most recent version of an item.

The initialization of the lock server's next write_version for granted
locks dates back to the initial prototype of the forest of log btrees.
It is only initialized to zero as the module is loaded.  This means that
reloading the module, perhaps by rebooting, resets all the item versions
to 0 and can lead to newly written items being ignored in favour of
older existing items with greater versions from a previous mount.

To fix this we initialize the lock server's write_version to the
greatest of all the versions in items in log btrees.  We add a field to
the log_trees struct which records the greatest version which is
maintained as we write out items in transactions.  These are read by the
server as it starts.

Then lock recovery needs to include the write_version so that the
lock_server can be sure to set the next write_version past the greatest
version in the currently granted locks.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-30 11:14:10 -07:00
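A condensed sketch of that initialization with simplified types; the
field and parameter names here (max_item_vers, recovered_lock_vers) are
stand-ins rather than the module's exact names.

#include <stdio.h>

struct log_trees_entry {
    unsigned long long max_item_vers;  /* greatest item vers in the log */
};

static unsigned long long next_write_version(const struct log_trees_entry *lt,
                                             int nr,
                                             unsigned long long recovered_lock_vers)
{
    unsigned long long max = recovered_lock_vers;
    int i;

    /* never hand out a version <= any version already in log btrees */
    for (i = 0; i < nr; i++) {
        if (lt[i].max_item_vers > max)
            max = lt[i].max_item_vers;
    }

    return max + 1;
}

int main(void)
{
    struct log_trees_entry lt[] = { { 10 }, { 42 }, { 7 } };

    printf("%llu\n", next_write_version(lt, 3, 30));
    return 0;
}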
Zach Brown
736d9d7df8 scoutfs: remove struct scoutfs_log_trees_val
The log_trees structs store the data that is used by client commits.
The primary struct is communicated over the wire so it includes the rid
and nr that identify the log.  The _val struct was stored in btree item
values and was missing the rid and nr because those were stored in the
item's key.

It's madness to duplicate the entire struct just to shave off those two
fields.  We can remove the _val struct and store the main struct in item
values, including the rid and nr.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-30 11:14:10 -07:00
Zach Brown
45e2209123 scoutfs-tests: add persistent-item-vers test
Add a test which makes sure that we don't initialize the lock server's
write version to a version less than existing log tree items.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-30 11:13:00 -07:00
Zach Brown
9cf2a6ced0 scoutfs-tests: add remounting test helpers
Add functions to remount all the mounts, including after having removed
and reinserted the module.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-30 11:13:00 -07:00
Zach Brown
66c6331131 scoutfs-utils: add max item vers to log trees
Add a field to the log_trees struct which records the greatest item
version seen in items in the tree.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-30 11:12:52 -07:00
Zach Brown
42bf0980b6 scoutfs-utils: remove scoutfs_log_trees_val
We're just using the one log_trees struct for both network messages and
persistent btree item values.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-30 11:12:52 -07:00
Andy Grover
e6228ead73 scoutfs: Ensure padding in structs remains zeroed
Audit code for structs allocated on stack without initialization, or
using kmalloc() instead of kzalloc().

- avl.c: zero padding in avl_node on insert.
- btree.c: Verify item padding is zero, or WARN_ONCE.
- inode.c: scoutfs_inode contains scoutfs_timespecs, which have padding.
- net.c: zero pad in net header.
- net.h: scoutfs_net_addr has padding, zero it in scoutfs_addr_from_sin().
- xattr.c: scoutfs_xattr has padding, zero it.
- forest.c: item_root in forest_next_hint() appears to either be
    assigned-to or unused, so no need to zero it.
- key.h: Ensure padding is zeroed in scoutfs_key_set_{zeros,ones}

Signed-off-by: Andy Grover <agrover@versity.com>
2020-10-29 14:15:33 -07:00
Andy Grover
13438c8f5d scoutfs: Remove struct scoutfs_betimespec
Unused.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-10-29 14:15:33 -07:00
Andy Grover
d9d9b65f14 scoutfs: remove __packed from all struct definitions
Instead, explicitly add padding field, and adjust member ordering to
eliminate compiler-added padding between members, and at the end of the
struct (if possible: some structs end in a u8[0] array.)

This should prevent unaligned accesses. Not a big deal on x86_64, but
other archs like aarch64 really want this.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-10-29 14:15:33 -07:00
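The layout rule can be shown with a made-up struct (not an actual
scoutfs structure): order members from largest to smallest, spell out
any remaining padding, and assert the size so the compiler can't add
hidden holes.

#include <stdint.h>

struct example_key {
    uint64_t ino;       /* 8 bytes, 8-aligned */
    uint32_t type;      /* 4 bytes */
    uint16_t len;       /* 2 bytes */
    uint8_t  flags;     /* 1 byte */
    uint8_t  __pad;     /* explicit padding, must be zeroed */
};

/* with explicit padding the size is stable without __packed */
_Static_assert(sizeof(struct example_key) == 16,
               "example_key must be 16 bytes with no hidden padding");

int main(void)
{
    return 0;
}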
Andy Grover
5e1c8586cc scoutfs: ensure btree values end on 8-byte-alignment boundary
Round val_len up to BTREE_VALUE_ALIGN (8), to keep mid_free_len aligned.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-10-29 14:15:33 -07:00
Andy Grover
68d7a2e2cb scoutfs: align items in item cache to 8 bytes
This will ensure structs, which are internally 8 byte aligned, remain
so when in the item cache.

16 byte alignment doesn't seem like it's needed so just do 8.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-10-29 14:15:33 -07:00
Andy Grover
87cb971630 scoutfs: fix hash compiler warnings
Signed-off-by: Andy Grover <agrover@versity.com>
2020-10-29 14:15:33 -07:00
Zach Brown
dc47ec65e4 scoutfs: remove btree value owner footer offset
We were using a trailing owner offset to iterate over btree item values
from the back of the block towards the front.  We did this to reclaim
fragmented free space in a block to satisfy an allocation instead of
having to split the block, which is expensive mostly because it has to
allocate and free metadata blocks.

In the before times, we used to compact items by sorting items by their
offset, moving them, and then sorting them by their keys again.  The
sorting by keys was expensive so we added these owner offsets to be able
to compact without sorting.

But the complexity of maintaining the owner metadata is not worth it.
We can avoid the expensive sorting by keys by allocating a temporary
array of item offsets and sorting only it by the value offset.  That's
nice and quick, it was the key comparisons that were expensive.  Then we
can remove the owner offset entirely, as well as the block header final
free region that compaction needed.

And we also don't compact as often in the modern era because we do the
bulk of our work in the item cache instead of in the btree, and we've
changed the split/merge/compaction heuristics to avoid constantly
splitting/merging/compacting when an item population happens to hover
right around a shared threshold.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-29 14:15:33 -07:00
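A toy demonstration of the compaction trick in userspace C: sort a
temporary array of item indices by value offset only (cheap), then
slide the values together, leaving the key-sorted item index untouched.
The block layout and sizes are invented for the example.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLK_SIZE 64

struct item {
    int off;    /* value offset in the block */
    int len;    /* value length */
};

static struct item *items_for_qsort;

static int cmp_by_off(const void *a, const void *b)
{
    int ia = *(const int *)a, ib = *(const int *)b;

    return items_for_qsort[ia].off - items_for_qsort[ib].off;
}

static void compact(char *blk, struct item *items, int nr)
{
    int idx[16];
    int dst = 0;
    int i;

    for (i = 0; i < nr; i++)
        idx[i] = i;

    /* sort only by value offset, key order is untouched */
    items_for_qsort = items;
    qsort(idx, nr, sizeof(idx[0]), cmp_by_off);

    /* slide every value down to the lowest free offset */
    for (i = 0; i < nr; i++) {
        struct item *it = &items[idx[i]];

        memmove(blk + dst, blk + it->off, it->len);
        it->off = dst;
        dst += it->len;
    }
}

int main(void)
{
    char blk[BLK_SIZE] = { 0 };
    struct item items[3] = { { 40, 3 }, { 10, 5 }, { 25, 4 } };

    memcpy(blk + 40, "ccc", 3);
    memcpy(blk + 10, "aaaaa", 5);
    memcpy(blk + 25, "bbbb", 4);

    compact(blk, items, 3);

    printf("%.3s %.5s %.4s\n", blk + items[0].off,
           blk + items[1].off, blk + items[2].off);
    return 0;
}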
Zach Brown
dbea353b92 scoutfs: bring back sort_priv
Bring back sort_priv, we have need for sorting with a caller argument.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-29 14:15:33 -07:00
Andy Grover
5701182665 scoutfs-utils: Enable -Wpadded
The compiler will complain if it sees any padding.

Fix a spot in print.c for this.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-10-29 14:15:22 -07:00
Andy Grover
6fea9f90c4 scoutfs-utils: Sync latest headers with kernel code
__packed no longer used.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-10-29 14:15:22 -07:00
Andy Grover
e78ba2b427 scoutfs-utils: specify types for some long constants
This avoids warnings on Centos 7 from gcc 4.8.5.
2020-10-29 14:15:22 -07:00
Andy Grover
8bd6646d9a scoutfs-utils: avoid redeclarations of __[be,le][16,32,64] in sparse.h
dev.c includes linux/fs.h which includes linux/types.h, which defines
these types, __be16 etc. These are also defined in sparse.h, but I don't
think these are needed.

Definitions in linux/types.h include the attr(bitwise) handling when
__CHECKER__ is defined, so we can remove __sp_bitwise.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-10-29 14:15:22 -07:00
Zach Brown
6b1dd980f0 scoutfs-utils: remove btree item owner
We no longer have an owner offset trailing btree item values.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-29 14:15:22 -07:00
Zach Brown
ea7c41d876 scoutfs-utils: remove free_*_blocks super fields
The kernel is no longer storing the total free space in all allocators
in super block fields.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
669e7f733b scoutfs-utils: add -S to limit device size
Add an option to mkfs to have it limit the size of the device that's
used by mkfs.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
4bd86d1a00 scoutfs-utils: return error for small device
The check for a small device didn't return an error code because it was
copied from error tests of ret that already had an error code.  It has
to generate one itself, so do that.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
b424208555 scoutfs-utils: remove unused packed extents
Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
4ca0b3ff74 scoutfs-utils: try compacting srch more frequently
Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
838e293413 scoutfs-utils: update compaction item printing
We now only use one srch file compaction struct and we store it in
PENDING and BUSY key types.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
a19e151277 scoutfs-utils: add df which uses alloc_detail
Add the df command which uses the new alloc_detail ioctl to show df for
the metadata and data devices separately.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
36c426d555 scoutfs-utils: add total m/d blocks to statfs
Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
fddfde62e6 scoutfs-utils: add endian size swapping macros
Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
e6385784f5 scoutfs-utils: remove unused radix format
Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
23711f05f6 scoutfs-utils: alloc and data uses full extents
Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
d87e2e0166 scoutfs-utils: add btree insertion for mkfs
Use little helpers to insert items into new single block btrees for
mkfs.  We're about to insert a whole bunch more items.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:41 -07:00
Zach Brown
2e7053497e scoutfs: remove free_*_blocks super fields
Remove the old superblock fields which were used to track free blocks
found in the radix allocators.  We now walk all the allocators when we
need to know the free totals, rather than trying to keep fields in sync.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
735c2c6905 scoutfs: fix btree split/join setting parent keys
Before the introduction of the AVL tree to sort btree items, the items
were sorted by sorting a small packed array of offsets.  The final
offset in that array pointed to the item in the block with the greatest
key.

With the move to sorting items in an AVL tree by nodes embedded in item
structs, we now don't have the array of offsets and instead have a dense
array of items.  Creation and deletion of items always works with the
final item in the array.

last_item() used to return the item with the greatest key by returning
the item pointed to by the final entry in the sorted offset array, then
it returned the final entry in the item array for creation and deletion
but that was no longer the item with the greatest key.

But splitting and joining still used last_item() to find the item in the
block with the greatest key for updating references to blocks in
parents.  Since the introduction of the AVL tree, splitting and joining
have been corrupting the tree by setting parent block reference keys to
whatever item happened to be at the end of the array, not the item with
the greatest key.

The extent code recently pushed hard enough to hit this by working with
relatively random extent items in the core allocation btrees.
Eventually the parent block reference keys got out of sync and we'd fail
to find items by descending into the wrong children when looking for
them.  Extent deletion hit this during allocation, returned -ENOENT, and
the allocator turned that into -ENOSPC.

With this fixed we can repeatedly create and delete millions of files with
heavily fragmented extents in a tiny metadata device.  Eventually it
actually runs out of space instead of spuriously returning ENOSPC in a
matter of minutes.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
a848477e64 scoutfs: remove unused packed extents
We use full data extent items now, we don't need the packed extent
structures.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
b094b18618 scoutfs: compact fewer srch files each time
With the introduction of incremental srch file compaction we added some
fields to the srch_compact struct to record the position of compaction
in each file.  This increased the size of the struct past the limit the
btree places on the size of item values.

We decrease the number of files per compaction from 8 to 4 to cut the
size of the srch_compact struct in half.  This compacts twice as often,
but still relatively infrequently, and it uses half the space for srch
files waiting to hit the compaction threshold.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
7a3749d591 scoutfs: incremental srch compaction
Previously the srch compaction work would output the entire compacted
file and delete the input files in one atomic commit.  The server would
send the input files and an allocator to the client, and the client
would send back an output file and an allocator that included the
deletion of the input files.  The server would merge in the allocator
and replace the input file items with the output file item.

Doing it this way required giving an enormous allocation pool to the
client in a radix, which would deal with recursive operations
(allocating from and freeing to the radix that is being modified).  We
no longer have the radix allocator, and we use single block avail/free
lists instead of recursively modifying the btrees with free extent
items.  The compaction RPC needs to work with a finite amount of
allocator resources that can be stored in an alloc list block.

The compaction work now does a fixed amount of work and a compaction
operation spans multiple work iterations.

A single compaction struct is now sent between the client and server in
the get_compact and commit_compact messages.  The client records any
partial progress in the struct.  The server writes that position into
PENDING items.  It first searches for pending items to give to clients
before searching for files to start a new compaction operation.

The compact struct has flags to indicate whether the output file is
being written or the input files are being deleted.  The server manages
the flags and sets the input file deletion flag only once the result of
the compaction has been reflected in the btree items which record srch
files.

We added the progress fields to the compaction struct, making it even
bigger than it already was, so we take the time to allocate them rather
than declaring them on the stack.

It's worth mentioning that each operation now taking a reasonably
bounded amount of time will make it feasible to decide that it has
failed and needs to be fenced.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
d589881855 scoutfs: add tot m/d device blocks to statfs_more
The total_{meta,data}_blocks scoutfs_super_block fields initialized by
mkfs aren't visible to userspace anywhere.  Add them to statfs_more so
that tools can get the totals (and use them for df, in this particular
case).

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
2073a672a0 scoutfs: remove unused statfs RPC
Remove the statfs RPC from the client and server now that we're using
allocator iteration to calculate free blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
33374d8fe6 scoutfs: get statfs free blocks with alloc_foreach
Use alloc_foreach to count the free blocks in all the allocators instead
of sending an RPC to the server.  We cache the results so that constant
df calls don't generate a constant stream of IO.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
3d790b24d5 scoutfs: add alloc_detail ioctl
An an ioctl which copies details of each persistent allocator to
userspace.  This will be used by a scoutfs command to give information
about the allocators in the system.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
fb66372988 scoutfs: add alloc foreach cb iterator
Add an alloc call which reads all the persistent allocators and calls a
callback for each.  This is going to be used to calculate free blocks
in clients for df, and in an ioctl to give a more detailed view of
allocators.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
8bf4c078df scoutfs: fix item cache page split key choice
The algorithm for choosing the split key assumed that there were
multiple items in the page.  That wasn't always true and it could result
in choosing the first item as the split key, which could end up
decrementing the left page's end key before its start key.

We've since added compaction to the paths that split pages so we now
guarantee that we have at least two items in the page being split.  With
that we can be sure to use the second item's key and ensure that we're
never creating invalid keys for the pages created by the split.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
27bc0ef095 scoutfs: fix item cache page trim
The tests for the various page range intersections were out of order.
The edge overlap case could trigger before the bisection case and we'd
fail to remove the initial items in the page.  That would leave items
before the start key which would later be used as a midpoint for a
split, causing all kinds of chaos.

Rework the cases so that the overlap cases are last.  The unique bisect
case will be caught before we can mistake it for an edge overlap case.
And minimize the number of comparisons we calculate by storing the
handful that all the cases need.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
c4663ea1a1 scoutfs: compact items in item cache pages
The first pass of the item cache didn't try to reclaim freed space at
all.  It would leave behind very sparse pages.  The oldest of which
would be reclaimed by memory pressure.

While this worked, it created much more stress on the system than is
necessary.  Splitting a page with one key also makes it hard to
calculate the boundaries of the split pages, given that the start and
end keys could be the single item.

This adds a header field which tracks the free space in item cache
pages.  Free space is created before the alloc offset by removing items
from the rbtree, but also from shrinking item values when updating or
deleting items.

If we try to split a page with sufficient free space to insert the
largest possible item then we compact the page instead of splitting it.
We copy the items into the front of an unused page and swap the pages.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
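The compact-versus-split decision reduces to something like the check
below; the page size, the worst-case item size, and the field names are
examples, not the module's values.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_DATA_SIZE  4096    /* example page payload size */
#define LARGEST_ITEM    512     /* example worst-case item footprint */

struct item_page {
    unsigned int alloc_off;     /* bump allocator position */
    unsigned int free_space;    /* holes left by removed/shrunk items */
};

static bool should_compact_instead_of_split(const struct item_page *pg)
{
    unsigned int tail = PAGE_DATA_SIZE - pg->alloc_off;

    /*
     * If copying the live items to the front of an unused page would
     * leave room for the largest possible item, compaction avoids a
     * split.
     */
    return tail + pg->free_space >= LARGEST_ITEM;
}

int main(void)
{
    struct item_page pg = { .alloc_off = 4000, .free_space = 700 };

    printf("%s\n", should_compact_instead_of_split(&pg) ?
           "compact" : "split");
    return 0;
}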
Zach Brown
e347ca3606 scoutfs: add unused item page rbtree verification
Add a quick function that walks the rbtree and makes sure it doesn't see
any obvious key errors.  This is far too expensive to use regularly but
it's handy to have around and add calls to when debugging.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
005cf99f42 scoutfs: use vmalloc for high order xattr allocs
The xattr item stream is constructed from a large contiguous region
that contains the struct header, the key, and the value.  The value
can be larger than a page so kmalloc is likely to fail as the system
gets fragmented.

Our recent move to the item cache added a significant source of page
allocation churn which moved the system towards fragmentation much more
quickly and was causing high-order allocation failures in testing.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
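The general pattern looks like the kernel-side sketch below, which uses
the kvmalloc()/kvfree() helpers; the actual module code may spell the
fallback differently (for example with an explicit __vmalloc() path on
older kernels).

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* buffer holds header + key + value, which can exceed a page */
static void *alloc_xattr_buf(size_t size)
{
    /* kvmalloc tries kmalloc first and falls back to vmalloc */
    return kvmalloc(size, GFP_NOFS);
}

static void free_xattr_buf(void *buf)
{
    /* kvfree picks kfree or vfree based on the address */
    kvfree(buf);
}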
Zach Brown
c61175e796 scoutfs: remove unused radix code
Remove the radix allocator that was added as we experimented with packed
extent items.  It didn't work out.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
e60f4e7082 scoutfs: use full extents for data and alloc
Previously we'd avoided full extents in file data mapping items because
we were deleting items from forest btrees directly.  That created
deletion items for every version of file extents as they were modified.
Now we have the item cache which can remove deleted items from memory
when deletion items aren't necessary.

By layering file data extents on an extent layer, we can also transition
allocators to use extents and fix a lot of problems in the radix block
allocator.

Most of this change is churn from changing allocator function and struct
names.

File data extents no longer have to manage loading and storing from and
to packed extent items at a fixed granularity.  All those loops are torn
out and data operations now call the extent layer with their callbacks
instead of calling its packed item extent functions.  This now means
that fallocate and especially restoring offline extents can use larger
extents.  Small file block allocation now comes from a cached extent
which reduces item calls for small file data streaming writes.

The big change in the server is to use more root structures to manage
recursive modification instead of relying on the allocator to notice and
do the right thing.  The radix allocator tried to notice when it was
actively operating on a root that it was also using to allocate and free
metadata blocks.  This resulted in a lot of bugs.  Instead we now double
buffer the server's avail and freed roots so that the server fills and
drains the stable roots from the previous transaction.  We also double
buffer the core fs metadata avail root so that we can increase the time
to reuse freed metadata blocks.

The server now only moves free extents into client allocators when they
fall below a low threshold.  This reduces the shared modification of the
client's allocator roots which requires cold block reads on both the
client and server.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
8f946aa478 scoutfs: add btree item extent allocator
Add an allocator which uses btree items to store extents.  Both the
client and server will use this for btree blocks, the client will use it
for srch blocks and data extents, and the server will move extents
between the core fs allocator btree roots and the clients' roots.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
Zach Brown
b605407c29 scoutfs: add extent layer
Add infrastructure for working with extents.  Callers provide callbacks
which operate on their extent storage while this code performs the
fiddly splitting and merging of extents.  This layer doesn't have any
persistent structures itself; it only operates on native structs in
memory.

Signed-off-by: Zach Brown <zab@versity.com>
2020-10-26 15:19:03 -07:00
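The interface might be pictured along these lines; the struct and
callback names below are guesses for illustration, not the module's
definitions.

struct example_extent {
    unsigned long long start;
    unsigned long long len;
};

struct example_extent_ops {
    /* find the extent containing or following start, if any */
    int (*next)(void *arg, unsigned long long start,
                struct example_extent *ext);
    /* store a new extent in the caller's items */
    int (*insert)(void *arg, const struct example_extent *ext);
    /* remove a previously stored extent */
    int (*remove)(void *arg, const struct example_extent *ext);
};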
Andy Grover
84d6904de8 Add -s (skip checkouts) to run-tests
It can be handy to skip checking out specific branches from the
required repos, so the -s option will skip doing so for kmod/utils/xfstests.

Also fix utils die messages to reference -U/u instead of -K/k.

Signed-off-by: Andy Grover <agrover@versity.com>
2020-10-06 09:05:46 -07:00
Zach Brown
85a27b2198 scoutfs-utils: per sec bulk_create_path banners
bulk_create_paths was inspired by createmany when it was outputting
status lines every 10000 files.  That's far too often if we're creating
files very quickly.  And it only tried to output a line after entire
directories, so output could stall for very large directories.

Behave more in line with vmstat, iostat, etc., and output a line at a
regular time interval.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:39 -07:00
Zach Brown
c1229644da scoutfs-tests: create xattrs in bulk_create_paths
Add options to bulk_create_paths for creating xattrs as we create files.
We can create normal xattrs, or .srch. tagged xattrs where all, some, or
none of the files share the same xattr name.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:39 -07:00
Zach Brown
a65eccd0f5 scoutfs-tests: remove tiny btree and segment use
The test that exercises re-reading stale cached blocks was still
trying to use both tiny btree blocks and segments, both of which have
been removed.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:39 -07:00
Zach Brown
e2a919492d scoutfs-utils: remove unused xattr index items
We're now using the .srch. xattr tags.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
1e2dc6c1df scoutfs-utils: add committed_seq to statfs_more
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
f04a636229 scoutfs-utils: add support for srch
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
e82cce36d9 scoutfs-utils: rework get_fs_roots to get_roots
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
e85fc5b1a7 scoutfs-utils: increase btree item value limit
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
c0fdd37e5a scoutfs-utils: add get_unaligned helpers
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
35d1ad1422 scoutfs-utils: switch to using fnv1a for hashing
Track the scoutfs module's switch to FNV1a for hashing.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
9bb32b8003 scoutfs-utils: fix last data blkno
The calculation of the last valid data blkno was off by one.  It was
calculating the total number of small blocks that fit in the device
size.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
5f0dbc5f85 scoutfs-utils: remove radix _first fields
The recent cleanup of the radix allocator included removing tracking of
the first set bits or references in blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
ffc1e5aa86 scoutfs-utils: update net root format
Track the changes in the kernel to communicate btree roots over the
network.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
39993d8b5f scoutfs-utils: use larger metadata blocks
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
b86a1bebbb scoutfs-utils: support btree avl and hash
Update the internal structure of btree blocks to use the avl item index
and hash table direct item lookup.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
aa84f7c601 scoutfs-utils: use scoutfs_key as btree key
Track the kernel changes to use the scoutfs_key struct as the btree key
instead of a big-endian binary blob.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
ac2d465b66 scoutfs-utils: print key zone and type numerically
The kernel has long since moved away from symbolic printing of key
zones and types, and it just removed the MAX values from the format
header.  Let's follow suit and get rid of the zone and type strings.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
4e546b2e7c scoutfs-utils: generate end_size_add_cpu()
We had manually implemented a few of the functions to add values to
specific endian types.  Make a macro to generate the function and
generate them for all the endian types we use.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:28 -07:00
Zach Brown
b28acdf904 scoutfs: use larger percpu_counter batch
The percpu_counter library merges the per-cpu counters with a shared
count when the per-cpu counter gets larger than a certain value.  The
default is very small, so we often end up taking a shared lock to update
the count.  Use a larger batch so that we take the lock less often.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
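In outline the change amounts to passing an explicit batch, as in the
sketch below; the constant is an arbitrary example, and on older
kernels the same call is spelled __percpu_counter_add() rather than
percpu_counter_add_batch().

#include <linux/percpu_counter.h>

#define EXAMPLE_COUNTER_BATCH   (1 << 10)   /* arbitrary example value */

static inline void example_counter_add(struct percpu_counter *pcpu, s64 delta)
{
    /*
     * A larger batch keeps more updates purely per-cpu before they
     * drain into the shared count under its lock.
     */
    percpu_counter_add_batch(pcpu, delta, EXAMPLE_COUNTER_BATCH);
}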
Zach Brown
ae97ffd6fc scoutfs: remove unused kvec.h
We've removed the last use of kvecs to describe item values.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
12067e99ab scoutfs: remove item granular work from forest
Now that the item cache is bearing the load of high frequency item
calls, we can remove all the item granular work that the forest was
trying to do.  The item cache amortizes the cost of the forest so its
remaining methods can go straight to the btrees and don't need
complicated state to reduce the overhead of item calls.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
6bacd95aea scoutfs: fs uses item cache instead of forest
Use the new item cache for all the item work in the fs instead of
calling into the forest of btrees.  Most of this is mechanical
conversion from the _forest calls to the _item calls.  The item cache
no longer supports the kvec argument for describing values so all the
callers pass in the value pointer and length directly.

The item cache doesn't support saving items as they're deleted and later
restoring them from an error unwinding path.  There were only two users
of this.  Directory entries can easily guarantee that deletion won't
fail by dirtying the items first in the item cache.  Xattr updates were
a little trickier.  They can combine dirtying, creating, updating, and
deleting to atomically switch between items that describe different
versions of a multi-item value.  This also fixed a bug in the srch
xattrs where replacing an xattr would create a new id for the xattr and
leave existing srch items referencing a now deleted id.  Replacing now
reuses the old id.

And finally we add back in the locking and transaction item cache
integration.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
45e594396f scoutfs: add an item cache above the btrees
Add an item cache between fs callers and the forest of btrees.  Calling
out to the btrees for every item operation was far too expensive.  This
gives us a flexible in-memory structure for working with items that
isn't bound by the constraints of persistent block IO.  We can rarely
stream large groups of items to and from the btrees and then use
efficient kernel memory structures for more frequent item operations.

This adds the infrastructure, nothing is calling it yet.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
b1757a061e scoutfs: add forest methods for item cache
Add forest calls that the item cache will use.  It needs to read all the
items in the leaf blocks of forest btree which could contain the key,
write dirty items to the log btree, and dirty bits in the bloom block as
items are dirtied.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
1a994137f4 scoutfs: add btree methods for item cache
Add btree calls to call a callback for all items in a leaf, and to
insert a list of items into their leaf blocks.  These will be used by
the item cache to populate the cache and to write dirty items into dirty
btree blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
57af2bd34b scoutfs: give btree walk callers more keys
The current btree walk recorded the start and end of child subtrees as
it walked, and it could give the caller the next key to iterate towards
after the block it returned.  Future methods want to get at the key
bounds of child subtrees, so we add a key range struct that all walk
callers provide and fill it with all the interesting keys calculated
during the walk.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
9e975dffe1 scoutfs: refactor btree split condition
Btree traversal doesn't split a block if it has room for the caller's
item.  Extract this test into a function so that an upcoming btree call
can test that each of multiple insertions into a leaf will fit.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
d440056e6f scoutfs: remove unused xattr index code
Remove the last remnants of the indexed xattrs which used fs items.
This makes the significant change of renumbering the key zones so I
wanted it in its own commit.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
d1e62a43c9 scoutfs: fix leaking alloc bits in merge
In a merge where the input and source trees are the same, the input
block can be an initial pre-cow version of the dirty source block.
Dirtying blocks in the change will clear allocations in the dirty source
block but they will remain in the pre-cow input block.  The merge can
then set these blocks in the dst, even though they were also used by
allocation, because they're still set in the pre-cow input block.

This fix is clumsy, but minimal and specific to this problem.  A more
thorough fix is being worked on which introduces more staging
allocator trees and should stop calls that are modifying the currently
active avail or free trees.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
289caeb353 scoutfs: trace leaf_bit of modified radix bits
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
ba879b977a scoutfs: expand radix merge tracing
Add a trace event for entering _radix_merge() and rename the current
per-merge trace event.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
5c6b263d97 scoutfs: trace radix bit ops before assertions
Trace operations before they can trigger assertions so we can see the
violating operation in the traces.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
ca6b7f1e6d scoutfs: lock invalidate only syncs dirty
Lock invalidation has to make sure that changes are visible to future
readers.  It was syncing if the current transaction is dirty.  This was
never optimal, but it wasn't catastrophic when concurrent invalidation
work could all block on one sync in progress.

With the move to a single invalidation worker serially invalidating
locks it became unacceptable.  Invalidation happening in the presence of
writers would constantly sync the current transaction while very old
unused write locks were invalidated.  Their changes had long since been
committed in previous transactions.

We add a lock field to remember the transaction sequence which could
have been dirtied under the lock.  If that transaction has already been
committed by the time we invalidate the lock it doesn't have to sync.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
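A simplified model of that check, with invented field names: remember
the sequence of the transaction that could have dirtied items under the
lock, and only sync during invalidation when that transaction has not
yet committed.

#include <stdbool.h>
#include <stdio.h>

struct example_lock {
    unsigned long long dirty_trans_seq;  /* seq of last possible dirtying */
};

static bool invalidation_needs_sync(const struct example_lock *lck,
                                    unsigned long long committed_seq)
{
    /* if the dirtying transaction already committed, skip the sync */
    return lck->dirty_trans_seq > committed_seq;
}

int main(void)
{
    struct example_lock lck = { .dirty_trans_seq = 5 };

    printf("%s\n", invalidation_needs_sync(&lck, 7) ? "sync" : "skip");
    return 0;
}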
Zach Brown
55dde87bb1 scoutfs: fix lock invalidation work deadlock
The client lock network message processing callbacks were built to
simply perform the processing work for the message in the networking
work context that it was called in.  This particularly makes sense for
invalidation because it has to interact with other components that
require blocking contexts (syncing commits, invalidating inodes,
truncating pages, etc).

The problem is that these messages are per-lock.  With the right
workloads we can use all the capacity for executing work just in lock
invalidation work.  There is no more work execution available for other
network processing.  Critically, the blocked invalidation work is
waiting for the commit thread to get its network responses before
invalidation can make forward progress.  I was easily reproducing
deadlocks by leaving behind a lot of locks and then triggering a flood
of invalidation requests on behalf of shrinking due to memory pressure.

The fix is to put locks on lists and have a small fixed number of work
contexts process all the locks pending for each message type.  The
network callbacks don't block, they just put the lock on the list and
queue the work that will walk the lists.  Invalidation now blocks one
work context, not the number of incoming requests.

There were some wait conditions in work that used to use the lock workq.
Other paths that change those conditions now have to know to queue the
work specifically, not just wake tasks which included blocked work
executors.

The other subtle impact of the change is that we can no longer rely on
networking to shutdown message processing work that was happening in its
callbacks.  We have to specifically stop our work queues in _shutdown.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
f4db553c28 scoutfs: fix error unwinding in server advance_seq
While checking for lost server commit holds, I noticed that the
advance_seq request path had obviously incorrect unwinding after getting
an error.  Fix it up so that it always unlocks and applies its commit.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
4b9c02ba32 scoutfs: add committed_seq to statfs_more
Add the committed_seq to statfs_more which gives the greatest seq which
has been committed.  This lets callers discover that a seq for a change
they made has been committed.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
6356440073 scoutfs: add error message for client commit error
We had a debugging WARN_ON that warns when a client has an error
commiting their transaction.  Let's add a bit more detail and promote it
to a proper error.  These should not happen.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
9658412d09 scoutfs: add forest counters
Add a bunch of counters to track significant events in the forest.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
57c7caf348 scoutfs: fix forest dirty log tracking
The forest code is responsible for constructing a consistent fs image
out of the items spread across all the btrees written by mounts in the
system.

Usually readers walk a btree looking for log trees that they should
read.  As a mount modifies items in its dirty log tree, readers need to
be sure to check that in-memory dirty log tree even though it isn't
present in the btree that records persistent log trees.

The code did this by setting a flag to indicate that readers using a
lock should check the dirty log tree.  But the flag usage wasn't
properly locked and left a race where a reader and writer could race,
leaving future readers to not know that they should check the dirty log
tree.  When we rarely hit that race we'd see item errors that made no
sense, like not being able to find an inode item to update after having
just created it in the current transaction.

To fix this, we clean up the tree tracking in the forest code.

We get rid of the static forest_root structs in the lock_private that
were used to track the two special-case roots that aren't found in log
tree items: the in-memory dirty log root and the final fs root.  All
roots are now dynamically allocated.  We use a flag in the root to
identify it as the dirty log root, and identify the fs root by its
rid/nr.  This results in a bunch of caller churn as we remove lpriv from
root identifying functions.

We get rid of the idea of the writer adding a static root to the list as
well as marking the log as needing to read the root.  Instead we make
all root management happen as we refresh the list.  The forest maintains
a commit sequence and writers set state in the lock to indicate that the
lock has dirty items in the log during this transaction.  Iteration then
compares the state set by the commit, writer, and the last refresh to
determine if a new refresh needs to happen.

Properly tracking the presence of dirty items lets us recognize when the
lock no longer has dirty items in the log and we can stop locking and
reading the dirty log and fall back to reading the committed stable
version.  The previous code didn't do that, it would lock and read the
dirty root forever.

While we're in here, we fix the locking around setting bloom bits and
have it track the version of the log tree that was set so that we don't
have to clear set bits as the log version is rotated out by the server.

There was also a subtle bug where we could hit two stale errors for the
same root and return -EIO because the refresh we triggered also returned
stale.  We rework the retrying logic to use a separate error code to force
refreshing so that we can't accidentally trigger -EIO by conflating
reading stale blocks and forcing refreshing.

And finally, we no longer record that we need the dirty log tree in a
root if we have a lock that could never read.  It's a minor optimization
that doesn't change functional behaviour.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
f8bf1718a0 scoutfs: add a bunch of btree counters
Add some counters for the most basic btree events.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
c415cab1e9 scoutfs: use srch to track .srch. xattrs
Using strictly coherent btree items to map the hash of xattr names to
inode numbers proved the value of the functionality, but it was too
expensive.  We now have the more efficient srch infrastructure to use.

We change from the .indx. to the .srch. tag, and change the ioctl from
find_xattr to search_xattrs.  The idea is to communicate that these are
accelerated searches, not precise index lookups and are relatively
expensive.

Rather than maintaining btree items, xattr setting and deleting emit
srch entries which either track the xattr or combine with the previous
tracker and remove the entry.  These are done under the lock that
protects the main xattr item, so we can remove the separate locking of
the previous index items.

The semantics of the search ioctl needs to change a bit.  Because
searches are so expensive we now return a flag to indicate that the
search completed.  While we're there, we also allow a last_ino parameter
so that searches can be divided up and run in parallel.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
f8e1812288 scoutfs: add srch infrastructure
This introduces the srch mechanism that we'll use to accelerate finding
files based on the presence of a given named xattr.  This is an
optimized version of the initial prototype that was using locked btree
items for .indx. xattrs.

This is built around specific compressed data structures, having the
operation cost match the reality of orders of magnitude more writers
than readers, and adopting a relaxed locking model.  Combine all of this
and maintaining the xattrs no longer tanks creation rates while still
delivering excellent search latencies, given that searches are defined
as rare and relatively expensive.

The core data type is the srch entry which maps a hashed name to an
inode number.  Mounts can append entries to the end of unsorted log
files during their transaction.  The server tracks these files and
rotates them into a list of files as they get large enough.  Mounts have
compaction work that regularly asks the server for a set of files to
read and combine into a single sorted output file.  The server only
initiates compactions when it sees a number of files of roughly the same
size.  Searches then walk all the committed srch files, both log files
and sorted compacted files, looking for entries that associate an xattr
name with an inode number.
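
For illustration only, a srch entry boils down to something like the following kernel-style struct; the field names are assumptions, not the on-disk format:

    /* A hashed .srch. xattr name paired with the inode that set it,
     * appended to unsorted log files and later merged into sorted files
     * by compaction. */
    struct example_srch_entry {
            __le64 hash;    /* 64-bit hash of the xattr name */
            __le64 ino;     /* inode number the entry refers to */
    };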

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
cca83b1758 scoutfs: rework get_fs_roots to get_roots
The get_fs_roots rpc and server interfaces were built around individual
roots.  Rebuild it around passing around a struct so that we can add
roots without impacting all the current users.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
8c114ddb87 scoutfs: increase max btree item size
Now that we have larger blocks we can have a larger max item.  This was
increased to make room for the srch compaction items which store a good
number of srch files in their value.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
ab271f4682 scoutfs: report sm metadata blocks in statfs
The conversion of the super block metadata block counters to units of
large metadata blocks forgot to scale back to the small block size when
filling out the block count fields in the statfs rpc.   This resulted in
the free and total metadata use being off by the factor of large to
small block size (default of ~16x at the moment).
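
The missing conversion amounts to a shift by the difference of the block size shifts.  A minimal kernel-style sketch, assuming 64KB large and 4KB small blocks:

    /* Convert a count of large metadata blocks into the small-block units
     * that the statfs fields report; with 64KB and 4KB blocks that's 16x. */
    static u64 lg_to_sm_blocks(u64 lg_blocks)
    {
            return lg_blocks << (16 - 12);  /* LG shift - SM shift */
    }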

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
42e7fbb4f7 scoutfs: switch to using fnv1a for hashing
We had a few uses of crc for hashing.  That was fine enough for initial
testing, but the huge number of xattrs that srch is recording was
producing very bad collisions from the clumsy combination of crc32c into
a 64bit hash.  Replace it with FNV for now.

This also takes the opportunity to use 3 hash functions in the forest
bloom filter so that we can extract them from the 64bit hash of the key
rather than iterating and recalculating hashes for each function.
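
For reference, 64-bit FNV-1a is small enough to sketch in full.  This is a kernel-style sketch of the general technique; the exact scoutfs helper and how it seeds or folds the hash may differ:

    /* xor each byte into the hash, then multiply by the FNV prime */
    static u64 fnv1a_64(const void *data, unsigned int len)
    {
            const u8 *p = data;
            u64 hash = 0xcbf29ce484222325ULL;       /* FNV offset basis */

            while (len--) {
                    hash ^= *p++;
                    hash *= 0x100000001b3ULL;       /* FNV prime */
            }
            return hash;
    }

The three bloom functions can then be taken from disjoint bit ranges of this single 64-bit result rather than rehashing the key for each function.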

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
f48112e2a7 scoutfs: allocate contig block pages with nowarn
We first attempt to allocate our large logically contiguous cached
blocks with physically contiguous pages to minimize the impact on the
tlb.  When that fails we fall back to vmalloc()ed blocks.  Sadly,
high-order page allocation failure is expected and we forgot to provide
the flag that suppresses the page allocation failure message.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
07ba053021 scoutfs: check super blkno fields
We had a bug where mkfs would set a free data blkno allocator bit past
the end of the device.  (Just at it, in fact.  Those fenceposts.)  Add
some checks at mount to make sure that the allocator blkno ranges in the
super don't have obvious mistakes.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
69e5f5ae5f scoutfs: add btree walk trace point
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
2980edac53 scoutfs: restore btree block verification
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
f9ff25db23 scoutfs: add dirent name fingerprint
Entries in a directory are indexed by the hash of their name.  This
introduces a perfectly random access pattern.  And this results in a cow
storm as directories get large enough such that the leaf blocks that
store their entries are larger than our commits.  Each commit ends up
being full of cowed leaf blocks that contain a single new entry.

The dirent name fingerprints change the dirent key to first start with a
fingerprint of the name.  This reduces the scope of hash randomization
from the entire directory to entries with the same fingerprint.
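
One plausible construction, purely to illustrate the idea and not necessarily the fingerprint scoutfs computes, packs the leading bytes of the name into an integer that leads the key:

    /* Illustrative only: derive a sort prefix from the first bytes of the
     * name so that related names land near each other in the key space,
     * ahead of the fully random hash component. */
    static u64 example_name_fingerprint(const char *name, unsigned int len)
    {
            u64 fp = 0;
            unsigned int i;

            for (i = 0; i < 8; i++) {
                    fp <<= 8;
                    if (i < len)
                            fp |= (u8)name[i];
            }
            return fp;
    }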

On real customer dir sizes and file names we saw roughly 3x create rate
improvements from being able to create more entries in leaf blocks
within a commit.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
0a47e8f936 Revert "scoutfs: add block visited bit"
The radix allocator no longer uses the block visited bit because it
maintains its own much richer private per-block data stored off the priv
pointer.

Signed-off-by: Zach Brown <zab@versity.com>

This reverts commit 294b6d1f79e6d00ba60e26960c764d10c7f4b8a5.
2020-08-26 14:39:12 -07:00
Zach Brown
3a82090ab1 scoutfs: have per-fs inode nr allocators
We had previously seen lock contention between mounts that were either
resolving paths by looking up entries in directories or writing xattrs
in file inodes as they did archiving work.

The previous attempt to avoid this contention was to give each directory
its own inode number allocator which ensured that inodes created for
entries in the directory wouldn't share lock groups with inodes in other
directories.

But this creates the problem of operating on few files per lock for
reasonably small directories.  It also creates more server commits as
each new directory gets its inode allocation reservation.

The fix is to have mount-wide separate allocators for directories and
for everything else.  This puts directories and files in separate groups
and locks, regardless of directory population.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
4d0b78f5cb scoutfs: add counters for server commits
Add some counters for server commits.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
e6ae397d12 Revert "scoutfs: switch block cache to rbtree"
We had switched away from the radix_tree because we were adding a
_block_move call which couldn't fail.  We no longer need that call, so
we can go back to storing cached blocks in the radix tree which can use
RCU lookups.

This revert has some conflict resolution around recent commits to add
the IO_BUSY block flag and the switch to _LG_ blocks.

This reverts commit 10205a5670dd96af350cf481a3336817871a9a5b.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
e5f5ee2679 Revert "scoutfs: add scoutfs_block_move"
We added _block_move for the radix allocator, but it no longer needs it.

This reverts commit 6bb0726689981eb9699296ae2cb4c8599add5b90.
2020-08-26 14:39:12 -07:00
Zach Brown
8fe683dab8 scoutfs: cow dirty radix blocks instead of moving
The radix allocator has to be careful to not get lost in recursion
trying to allocate metadata blocks for its dirty radix blocks while
allocating metadata blocks for others.

The first pass had used path data structures to record the references to
all the blocks we'd need to modify to reflect the frees and allocations
performed while dirtying radix blocks.  Once it had all the path blocks
it moved the old clean blocks into new dirty locations so that the
dirtying couldn't fail.

This had two very bad performance implications.  First, it meant that
trying to read clean versions of dirtied trees would always read the old
blocks again because their clean version had been moved to the dirty
version.  Typically this wouldn't happen but the server does exactly
this every time it tries to merge freed blocks back into its avail
allocator.  This created a significant IO load on the server.  Secondly,
that block cache move not being allowed to fail motivated us to move to
a locked rbtree for the block cache instead of the lockless rcu
radix_tree.

This changes the recursion avoidance to use per-block private metadata
to track every block that we allocate and cow rather than move.  Each
dirty block knows its parent ref and the blknos it would clear and set.
If dirtying fails we can walk back through all the blocks we dirty and
restore their original references before dropping all the dirty blocks
and returning an error.  This lets us get rid of the path structure
entirely and results in a much cleaner system.
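
A hedged sketch of what the per-block private data might carry to make that unwinding possible; the names and fields are assumptions, not the actual scoutfs structures:

    /* Each block dirtied during a radix pass remembers the parent ref it
     * rewrote and the bit changes it made, so a failed pass can walk back
     * through the chain and restore the original references before
     * dropping the dirty blocks. */
    struct example_radix_priv {
            struct example_radix_priv *dirtied_prev; /* chain of blocks dirtied this pass */
            __le64 *parent_ref;                      /* ref in the parent that was rewritten */
            u64 orig_blkno;                          /* original blkno to restore on error */
            u64 cleared_blkno;                       /* avail bit consumed while dirtying */
            u64 set_blkno;                           /* freed bit recorded while dirtying */
    };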

This change meant tracking free blocks without clearing them as they're
used to satisfy dirty block allocations.  The change now has a cursor
that walks the avail metadata tree without modifying it.  While building
this it became clear that tracking the first set bits of refs doesn't
provide any value if we're always searching from a cursor.  The cursor
ends up providing the same benefit of avoiding constantly searching empty
initial bits and refs.  Maintaining the first metadata was just
overhead.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
6d7b8233c6 scoutfs: add radix merge retry counter
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
26ccaca80b scoutfs: add commit written counter
Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
ca8abeebb1 scoutfs: check fs root in forest hint
The forest code has a hint call that gives iterators a place to start
reading from before they acquire locks.  It was checking all the log
trees but it wasn't checking the main fs tree.  This happened to be OK
today because we're not yet merging items from the log trees into the
main fs tree, but we don't want to miss them once we do start merging
the trees.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
b7943c5412 scoutfs: avoid reading forest roots with block IO
The forest item operations were reading the super block to find the
roots that it should read items from.

This was easiest to implement to start, but it is too expensive.  We
have to find the roots for every newly acquired lock and every call to
walk the inode seq indexes.

To avoid all these reads we first send the current stable versions of
the fs and logs btree roots along with lock grants.  Then we add a net
command to get the current stable roots from the server.  This is used
to refresh the roots if stale blocks are encountered and on the seq
index queries.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
304dbbbafa scoutfs: merge partial allocator blocks
The server fills radix allocators for the client to consume while
allocating during a transaction.  The radix merge function used to move
an entire radix block at a time.  With larger blocks this becomes much
too coarse and can move way too much in one call.

This moves allocator bits a word at a time and more precisely moves the
amount that the caller asked for.
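
A minimal kernel-style sketch of word-granularity moving, assuming disjoint source and destination bitmaps and made-up names:

    /* Move whole 64-bit words of free bits from src to dst until roughly
     * the requested count has moved, instead of moving an entire radix
     * block's bitmap in one call. */
    static u64 move_free_words(u64 *dst, u64 *src, int nr_words, u64 wanted)
    {
            u64 moved = 0;
            int i;

            for (i = 0; i < nr_words && moved < wanted; i++) {
                    u64 bits = src[i];

                    dst[i] |= bits;
                    src[i] = 0;
                    moved += hweight64(bits);
            }
            return moved;
    }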

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
177af7f746 scoutfs: use larger metadata blocks
Introduce different constants for small and large metadata block
sizes.

The small 4KB size is used for the super block, quorum blocks, and as
the granularity of file data block allocation.  The larger 64KB size is
used for the radix, btree, and forest bloom metadata block structures.

The bulk of this are obvious transitions from the old single constant to
the appropriate new constant.  But there are a few more involved
changes, though just barely.

The block crc calculation now needs the caller to pass in the size of
the block.  The radix function that returned free bytes now returns free
blocks, and the caller is responsible for knowing how big its managed
blocks are.
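
For illustration, the two sizes might be expressed as paired constants along these lines; the names are assumptions, not necessarily those in format.h:

    /* 4KB small blocks: super block, quorum blocks, and data allocation
     * granularity.  64KB large blocks: radix, btree, and bloom blocks. */
    #define EX_BLOCK_SM_SHIFT       12
    #define EX_BLOCK_SM_SIZE        (1 << EX_BLOCK_SM_SHIFT)        /* 4KB */
    #define EX_BLOCK_LG_SHIFT       16
    #define EX_BLOCK_LG_SIZE        (1 << EX_BLOCK_LG_SHIFT)        /* 64KB */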

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
99bc710f03 scoutfs: remove tiny btree block option
It used to take significant effort to create very tall btrees because
they only stored small references to large LSM segments.  Now they store
all file system metadata and we can easily create sufficiently large
btrees for testing.  We don't need the tiny btree option.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
ac0e58839d scoutfs: remove btree _before and _after
There's no users of these variants of _prev and _next so they can be
removed.  Support for them was also dropped in the previous reworking of
the internal structure of the btree blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
efd9763355 scoutfs: use efficient btree block structures
This btree implementation was first built for the relatively light duty
of indexing segments in the LSM item implementation.  We're now using it
as the core metadata index.  It's already using a lot of cpu to do its
job with small blocks and it only gets more expensive as the block size
increases.  These changes reduce the CPU use of working with the btree
block structures.

We use a balanced binary tree to index items by key in the block.  This
gives us rare tree balancing cost on insertion and deletion instead of
the memmove overhead of maintaining a dense array of item offsets sorted
by key.  The keys are stored in the item struct which are stored in an
array at the front of the block so searching for an item uses contiguous
cachelines.

We add a trailing owner offset to values so that we can iterate through
them.  This is used to track space freed up by values instead of paying
the memmove cost of keeping all the values at the end of the block.  We
occasionally reclaim the fragmented value free space instead of
splitting the block.

Direct item lookups use a small hash table at the end of the block
which maps offsets to items.  It uses linear probing and is guaranteed
to have a light load factor so lookups are very likely to only need
a single cache lookup.
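
A hedged sketch of the item bookkeeping this describes, with illustrative names and assuming the scoutfs_key struct from format.h; the real on-disk structures may differ:

    struct example_avl_node {
            __le16 parent;
            __le16 left;
            __le16 right;
            __u8 height;
    };

    /* Items sit in an array at the front of the block so key searches walk
     * contiguous cachelines; each item embeds its AVL node and key and
     * references its value by offset within the block. */
    struct example_btree_item {
            struct example_avl_node node;
            struct scoutfs_key key;
            __le16 val_off;
            __le16 val_len;
    };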

We adjust the watermark for triggering a join from half of a block down
to a quarter.  This results in less utilized blocks on average.  But it
creates distance between the join and split thresholds so we get less
cpu use from constantly joining and splitting if item populations happen
to hover around the previously shared threshold.

While shifting the implementation we chose not to add support for some
features that no longer make sense.  There are no longer callers of
_before and _after, and having synthetic tests use small btree blocks
no longer makes sense when we can easily create very tall trees.  Both
those btree interfaces and the tiny btree block support will be removed.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
f59336085d scoutfs: add avl
Add the little avl implementation that we're going to use for indexing
items within the btree blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
ad99636af8 scoutfs: use scoutfs_key as btree key
The btree currently uses variable length big-endian buffers that are
compared with memcmp() as keys.  This is a historical relic of the time
when keys could be very large.  We had dirent keys that included the
name and manifest entries that included those fs keys.

But now all the btree callers are jumping through hoops to translate
their fs keys into big-endian btree keys.  And the memcmp() of the
keys is showing up in profiles.

This makes the btree take native scoutfs_key structs as its key.  The
forest callers which are working with fs keys can just pass their keys
straight through.  The server btree callers with their private btrees
get key fields defined for their use instead of having individual
big-endian key structs.

A nice side-effect of this is that splitting parents doesn't have to
assume that a maximal key will be inserted by a child split.  We can
have more keys in parents and wider trees.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
f9df3ada6c scoutfs: remove MAX key TYPE and ZONE
These were used for constructing arrays of string mappings of key
fields.  We don't print keys with symbolic strings anymore so we don't
need to maintain these values anymore.

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
22716c0389 scoutfs: add scoutfs_key_is_zeros()
Add a little function for testing if a given scoutfs key is all zeros.
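
A minimal kernel-style sketch of the helper, assuming struct scoutfs_key from format.h; the actual implementation may compare fields directly:

    static inline bool scoutfs_key_is_zeros(struct scoutfs_key *key)
    {
            static const struct scoutfs_key zeros;

            /* an all-zero key compares equal to a static zeroed key */
            return memcmp(key, &zeros, sizeof(zeros)) == 0;
    }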

Signed-off-by: Zach Brown <zab@versity.com>
2020-08-26 14:39:12 -07:00
Zach Brown
d2d32c8776 scoutfs-tests: add bulk_create_paths
Add a test utility which efficiently recreates paths read from stdin.

Signed-off-by: Zach Brown <zab@versity.com>
2020-06-02 10:12:50 -07:00
Benjamin LaHaise
492afae552 scoutfs: add data_wait_err for reporting errors
Add support for reporting errors to data waiters via a new
SCOUTFS_IOC_DATA_WAIT_ERR ioctl.  This allows waiters to return an error
to readers when staging fails.

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
[zab: renamed to data_wait_err, took ino arg]
Signed-off-by: Zach Brown <zab@versity.com>
2020-05-29 13:50:35 -07:00
Zach Brown
8cf6f73744 scoutfs-tests: filter another ext4 kernel message
Add another expected warning from ext4 during xfstests that should not
cause failure.

Signed-off-by: Zach Brown <zab@versity.com>
2020-05-29 13:50:35 -07:00
Benjamin LaHaise
74f85ff93d scoutfs: add data_wait_err for reporting errors
Add support for reporting errors to data waiters via a new
SCOUTFS_IOC_DATA_WAIT_ERR ioctl.  This allows waiters to return an error
to readers when staging fails.

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
[zab: renamed to data_wait_err, took ino arg]
Signed-off-by: Zach Brown <zab@versity.com>
2020-05-29 13:50:25 -07:00
Zach Brown
79e235af6e scoutfs-utils: fix argv typo in error message
The data_waiting ioctl used the wrong argv index when printing the path
arg string that it failed to open.

Signed-off-by: Zach Brown <zab@versity.com>
2020-05-29 13:50:25 -07:00
Zach Brown
0a8faf3e94 scoutfs-utils: add parse_s64()
Signed-off-by: Zach Brown <zab@versity.com>
2020-05-29 13:50:25 -07:00
Zach Brown
63cccfa582 scoutfs-tests: check setattr_more offline extent
We had a bug where we were creating extent lengths that were rounded up
to the size of the packed extent items instead of being limited by
i_size.  As it happens the last setattr_more test would have found it if
I'd actually done the math to check that the extent length was correct.
We add an explicit offline blocks count test because that's what led us
to notice that the offline extent length was wrong.

Signed-off-by: Zach Brown <zab@versity.com>
2020-04-03 14:47:04 -07:00
Zach Brown
e44fb23064 scoutfs-tests: add setattr_more tests
We had a bug where offline extent creation during setattr_more just
wasn't making it all the way to persistent items.  This adds basic
sanity tests of the setattr_more interface.

Signed-off-by: Zach Brown <zab@versity.com>
2020-03-16 15:48:09 -07:00
Zach Brown
247e22f56f scoutfs-utils: remove unused corruption sources
Remove the definitions and descriptions of sources of corruption that
are no longer identified by the kernel module.

Signed-off-by: Zach Brown <zab@versity.com>
2020-03-05 09:01:54 -08:00
Zach Brown
3c7d1f3935 scoutfs-utils: quick forest bloom comment update
Signed-off-by: Zach Brown <zab@versity.com>
2020-03-05 09:01:54 -08:00
Zach Brown
91c64dfa2d scoutfs-utils: print packed extents
Add support for printing the individual extents stored in packed extent
items.

Signed-off-by: Zach Brown <zab@versity.com>
2020-03-02 12:11:39 -08:00
Zach Brown
53f29d3f2a scoutfs-utils: add ilog2() helper
It's handy to use ilog2 in the format header for defining shifts based
on values.  Add a userspace helper that uses glibc's log2 functions.
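
A rough userspace sketch of that kind of helper, built on glibc's log2() as described; the utility's actual helper may differ:

    #include <math.h>

    /* floor(log2(val)) for deriving shifts from sizes, e.g. ex_ilog2(65536) == 16 */
    static inline int ex_ilog2(unsigned long long val)
    {
            return (int)log2((double)val);
    }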

Signed-off-by: Zach Brown <zab@versity.com>
2020-03-02 12:11:39 -08:00
Zach Brown
3308bf8d8c scoutfs-tests: use fallocate to get large extent
The simple-release-extents test wanted to create a file with a single
large extent, but it did it with a streaming write.  While we'd like
our data allocator to create a large extent from initial writes, it
certainly doesn't guarantee it.  Fallocate is much more likely to
create a large extent.

Signed-off-by: Zach Brown <zab@versity.com>
2020-02-25 12:04:44 -08:00
Zach Brown
cce20dbeb6 scoutfs-tests: only check for new dmesg entries
The dmesg check was creating false positives when unexpected messages
from before the test run were forced out of the ring.  The evicted
messages were showing up as removals in the diff.

We only want to see new messages that were created during the test run.
So we format the diff to only output added lines.

Signed-off-by: Zach Brown <zab@versity.com>
2020-02-25 12:04:44 -08:00
Zach Brown
503011b777 scoutfs-tests: prepend our paths to PATH
We add directories of our built binaries for tests to find.  Let's
prepend them to PATH so that we find them before any installed
binaries in the system.

Signed-off-by: Zach Brown <zab@versity.com>
2020-02-25 12:04:44 -08:00
Zach Brown
ec782fff8d scoutfs-utils: meta and data free blocks
The super block now tracks free metadata and data blocks in separate
counters.

Signed-off-by: Zach Brown <zab@versity.com>
2020-02-25 12:04:17 -08:00
Zach Brown
6b66e583f2 scoutfs-utils: fix printing block hdr fields
The block header printing helper had the identifiers for the blkno and
seq in the format string swapped.

Signed-off-by: Zach Brown <zab@versity.com>
2020-02-25 12:04:17 -08:00
Zach Brown
ff436db49b scoutfs-utils: add support for radix alloc
Add support for initializing radix allocator blocks that describe free
space in mkfs and support for printing them out.

Signed-off-by: Zach Brown <zab@versity.com>
2020-02-25 12:04:17 -08:00
Zach Brown
34c3d903d9 scoutfs-utils: add round_down() and flsll()
Add quick helpers for these two kernel functions.
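
Minimal userspace sketches that mirror the kernel semantics; the helpers actually added may differ:

    /* round x down to a multiple of y, where y is a power of two */
    #define round_down(x, y)        ((x) & ~((typeof(x))((y) - 1)))

    /* find last (most significant) set bit, 1-based; flsll(0) == 0 */
    static inline int flsll(long long x)
    {
            return x ? 64 - __builtin_clzll((unsigned long long)x) : 0;
    }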

Signed-off-by: Zach Brown <zab@versity.com>
2020-02-25 12:04:17 -08:00
Zach Brown
794277053f scoutfs-utils: add a few more man pages
Add an overview man page for scoutfs and add a manpage for the userspace
utility and its commands.

Signed-off-by: Zach Brown <zab@versity.com>
2020-01-20 14:25:16 -08:00
Zach Brown
4c225c2061 scoutfs-tests: add -y for xfstests args
Add a -y argument so we can specify additional args to ./xfstests, and
clean up our xfstest a bit while we're in there.

Signed-off-by: Zach Brown <zab@versity.com>
2020-01-17 11:23:03 -08:00
Zach Brown
1ce084fcd9 scoutfs-tests: mount-unmount-race describe skip
Add a message describing when mount-unmount-race has to be skipped
because it doesn't have enough mounts to unmount while maintaining
quorum.

Signed-off-by: Zach Brown <zab@versity.com>
2020-01-17 11:23:03 -08:00
Zach Brown
7dc3d7d732 scoutfs-tests: fix t_require_mounts
t_require_mounts never actually did anything because bash is the best.

Signed-off-by: Zach Brown <zab@versity.com>
2020-01-17 11:23:03 -08:00
Zach Brown
89fcb207a7 scoutfs-tests: remove segment-cache-fwd-back-iter
The segment-cache-fwd-back-iter test only applied to populating the item
cache from segments, and we don't do that anymore.  The test can
be removed.

Signed-off-by: Zach Brown <zab@versity.com>
2020-01-17 11:23:03 -08:00
Zach Brown
3ce6061907 scoutfs-tests: offer ftrace printk and dump opts
Signed-off-by: Zach Brown <zab@versity.com>
2020-01-17 11:23:03 -08:00
Zach Brown
12b00d0058 scoutfs-tests: create dir in 0 mount
When running a test we only create the test dir through one mount, but
we were off-by-one when deciding that we were iterating through the
first mount.

Signed-off-by: Zach Brown <zab@versity.com>
2020-01-17 11:23:03 -08:00
Zach Brown
920fca752c scoutfs-utils: have xattr use max val size
Signed-off-by: Zach Brown <zab@versity.com>
2020-01-17 11:22:04 -08:00
Zach Brown
e0a49c46a7 scoutfs-utils: add packed extents and bitmaps
Signed-off-by: Zach Brown <zab@versity.com>
2020-01-17 11:22:04 -08:00
Zach Brown
c87a9f3a07 scoutfs-utils: resurrect bitops
We've had these in the past and we need them again for the block
allocator item bitmaps.

Signed-off-by: Zach Brown <zab@versity.com>
2020-01-17 11:22:04 -08:00
Zach Brown
3776c18c66 scoutfs-utils: switch to btree forest
Remove all the lsm code from mkfs and print, replacing
it with the forest of btrees.

Signed-off-by: Zach Brown <zab@versity.com>
2020-01-17 11:22:04 -08:00
Zach Brown
0fee134133 scoutfs-tests: add setup-error-teardown
Add a test which makes sure that errors during setup can be properly
torn down.  This found an assertion that was being triggered during lock
shutdown.

Signed-off-by: Zach Brown <zab@versity.com>
2019-09-10 09:57:47 -07:00
Zach Brown
4326a95b9b scoutfs-tests: create results dir before logging
We can't use cmd() to create the results dir because it tries to
redirect output to the results dir, which fails, so mkdir isn't run and
we don't create the results dir.

Signed-off-by: Zach Brown <zab@versity.com>
2019-09-10 09:57:47 -07:00
Zach Brown
a471c7716e scoutfs-tests: verify branch name with origin
We check out the specified git branch with "origin/" prepended, but we
weren't verifying that same full branch so the verification failed
because it couldn't distinguish differentiate amongst possible named
branches.

Signed-off-by: Zach Brown <zab@versity.com>
2019-09-10 09:57:47 -07:00
Zach Brown
70efa2f905 scoutfs-utils: add statfs wrapper
Add a scoutfs command wrapper around the statfs_more ioctl.  It's the
same as the stat_more ioctl but has different fields and a different
ioctl.

Signed-off-by: Zach Brown <zab@versity.com>
2019-08-20 15:52:17 -07:00
Zach Brown
7cd8738add scoutfs-utils: net uses rid instead of node_id
Now that networking is identifying clients by their rid, some persistent
structures are using that to store records of clients.

Signed-off-by: Zach Brown <zab@versity.com>
2019-08-20 15:52:17 -07:00
Zach Brown
3670a5b80d scoutfs-utils: remove quorum slot config
The format no longer has statically configured named slots.  The only
persistent config is the number of mounts that must be voting to reach
quorum.  The quorum blocks now have a log of successful elections.

Signed-off-by: Zach Brown <zab@versity.com>
2019-08-20 15:52:17 -07:00
Zach Brown
fc15b816b0 scoutfs-utils: update format for rid
Signed-off-by: Zach Brown <zab@versity.com>
2019-08-20 15:52:17 -07:00
Zach Brown
2dc611a433 scoutfs-utils: update sysfs dir to use fr identity
Signed-off-by: Zach Brown <zab@versity.com>
2019-08-20 15:52:17 -07:00
Zach Brown
2b966fd45c scoutfs-tests: use larger fr ident strings
The kernel is now using three bytes from the ids to form the fr ident
string for a mount.

Signed-off-by: Zach Brown <zab@versity.com>
2019-08-16 14:16:30 -07:00
Zach Brown
3981f944dd scoutfs-tests: more dmesg filters
Add some more filters for device-mapper output and keep up with the lock
recovery messages in the kernel.

Signed-off-by: Zach Brown <zab@versity.com>
2019-08-16 14:15:52 -07:00
Zach Brown
b9bd7d1293 scoutfs-tests: initial commit
The first commit of the scoutfs-tests suite which uses multiple mounts
on one host to test multi-node scoutfs.

Signed-off-by: Zach Brown <zab@versity.com>
2019-08-02 16:51:34 -07:00
Zach Brown
adadd51815 scoutfs-utils: update for listxattr_hidden
listxattr_raw was renamed to listxattr_hidden to more accurately
describe the only reason that it exists.

Signed-off-by: Zach Brown <zab@versity.com>
2019-06-28 10:23:59 -07:00
Zach Brown
9a087be46c scoutfs-utils: update ioctl _IO usage
Signed-off-by: Zach Brown <zab@versity.com>
2019-06-28 10:23:59 -07:00
Zach Brown
8597fd0bfc scoutfs-utils: naturally align ioctl structs
Use naturally aligned and explicitly padded ioctl structs.

Signed-off-by: Zach Brown <zab@versity.com>
2019-06-27 11:39:19 -07:00
Zach Brown
674224d454 scoutfs-utils: hidden and indexed xattrs
Add support for the xattr tags which can hide or index xattrs by their
name.  We get an item that indexes inodes by the presence of an xattr, a
listxattr_raw ioctl which can show hidden xattrs, and an ioctl that
finds inodes which have an xattr.

Signed-off-by: Zach Brown <zab@versity.com>
2019-06-24 10:08:35 -07:00
Zach Brown
8d505668fe scoutfs-utils: add quorum block listening flag
Signed-off-by: Zach Brown <zab@versity.com>
2019-05-30 15:00:56 -07:00
Zach Brown
da185b214b scoutfs: return non-zero status on error
The error return conventions were confused, resulting in main exiting
with success when command execution failed.

Signed-off-by: Zach Brown <zab@versity.com>
2019-05-30 13:45:57 -07:00
Zach Brown
336a6a155d scoutfs-utils: add setattr more command
Add a command that wraps the setattr_more ioctl.

Signed-off-by: Zach Brown <zab@versity.com>
2019-05-30 13:45:57 -07:00
Zach Brown
ffe15c2d82 scoutfs-utils: add string parsing functions
We're starting to collect a few of these.  Let's put them in one place.

Signed-off-by: Zach Brown <zab@versity.com>
2019-05-30 13:45:57 -07:00
Zach Brown
57da5fae4c scoutfs-utils: add waiting ioctl command
Add a quick command that lists the results of the new waiting ioctl.

Signed-off-by: Zach Brown <zab@versity.com>
2019-05-21 11:33:42 -07:00
Zach Brown
77bd0c20ab scoutfs-utils: add flags to quorum block
Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
514418421c scoutfs-utils: add support for unmount_barrier
Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
a9b46213b3 scoutfs-utils: remove ctrstat command
Remove the ctrstat command.  It was built back when we had a handful of
counters.  Its output format doesn't make much sense now that we have
an absolute ton of counters.  If we want fancy counter output in the
future we'd add it to the counters command.

Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
841fbc1b66 scoutfs-utils: add counters command
Add a command to output the sysfs counters for a volume, with the option
of generating a table that fits the terminal.

Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
3c9eeeb2ef scoutfs-utils: add transaction seq btree
Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
587760edb3 scoutfs-utils: add clock sync id to messages
Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
3d64c46fcd scoutfs-utils: add lock clients btree
Show the lock client btree entries in print.

Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
4c611474e8 scoutfs-utils: update for reliable messaging
Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
64bdda717c scoutfs-utils: move super id to block hdr magic
Move the magic value that identifies the super block into the block
header and use it for btree blocks as well.

Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
dd117593da scoutfs-utils: update format for locking service
Update the format header to reflect that the kernel now uses a locking
service instead of using an fs/dlm lockspace.  Nothing in userspace uses
locking.

Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
02d2edb467 scoutfs-utils: remove super server_addr
The server no longer stores the address to connect to in the super
block.  It's now stored in the quorum config and voting blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
ea969a5dde scoutfs-utils: update format.h for quorum
Signed-off-by: Zach Brown <zab@versity.com>
2019-04-12 10:54:20 -07:00
Zach Brown
f59dfe8b73 scoutfs-utils: make scoutfs binary executable
The %defattr in the %files section was accidentally setting the
installed scoutfs binary's mode to 644.

Signed-off-by: Zach Brown <zab@versity.com>
2018-09-14 16:18:33 -07:00
Zach Brown
266b6d8bdd scoutfs-utils: add a README.md
Add a markdown README for github.

Signed-off-by: Zach Brown <zab@versity.com>
2018-09-14 13:52:05 -07:00
Zach Brown
92f22358a7 scoutfs-utils: add rpm build make dist helpers
Add make targets to build a spec file and tarball with a version based
on a git tag.

Signed-off-by: Zach Brown <zab@versity.com>
2018-09-14 13:51:09 -07:00
Zach Brown
bbfa71361f scoutfs-utils: compaction request format update
Signed-off-by: Zach Brown <zab@versity.com>
2018-08-28 15:34:33 -07:00
Zach Brown
bf014a4c57 scoutfs-utils: update format network requests
We updated the format header when relaxing the restrictions on duplicate
request message processing.

Signed-off-by: Zach Brown <zab@versity.com>
2018-08-28 15:34:33 -07:00
Zach Brown
078d2f6073 scoutfs-utils: update format for greeting node_id
Signed-off-by: Zach Brown <zab@versity.com>
2018-08-28 15:34:33 -07:00
Zach Brown
7abf5c1e2b scoutfs-utils: calculate segment crc in mkfs
Signed-off-by: Zach Brown <zab@versity.com>
2018-08-21 13:27:37 -07:00
Zach Brown
c3ad8282a3 scoutfs-utils: update net format
Signed-off-by: Zach Brown <zab@versity.com>
2018-07-27 09:50:29 -07:00
Zach Brown
f368686b89 scoutfs-utils: add net free extents
Signed-off-by: Zach Brown <zab@versity.com>
2018-07-05 16:19:39 -07:00
Zach Brown
ea2ec838ec scoutfs-utils: use one super and verify its crc
Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 15:56:36 -07:00
Zach Brown
51a48fbbb6 scoutfs-utils: add TeX paper
Add the start of a paper that documents the scoutfs design.

Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
b96feaa5b0 scoutfs-utils: add scoutfs_net_extent to format.h
Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
35b5f1f9c5 scoutfs-utils: add fallocate corruption source
Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
0a62ffbc2f scoutfs-utils: buffer staging
The stage command was trivially implemented by allocating, reading, and
staging the entire region in one buffer.  This is unreasonable for large
file regions.  Implement the stage command by having it read each
portion of the region into a smaller buffer, starting with a meg.

Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
445ac62172 scoutfs-utils: add extent corruption sources
Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
3ab93baa55 scoutfs-utils: update format for unwritten extents
Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
59739e0057 scoutfs-utils: remove sneaky tab in mkfs output
We had a tab in the mkfs output that'd cause it to be misaligned.

Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
cfc8cb8800 scoutfs-utils: support server extent allocation
Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
98d06c7a6b scoutfs-utils: mkfs requires 16 segments
mkfs needs to make sure that a device is large enough for a file system.
We had a tiny limit that almost certainly wouldn't have worked.
Increase the limit to a still absurdly small but arguably possible 16
segments.

Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
35e4ab92f0 scoutfs-utils: support file and node free extents
Add support for printing the items used to track file mapping extents
and free extents.

Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
f649edd65d scoutfs-utils: add block count corruption
Signed-off-by: Zach Brown <zab@versity.com>
2018-06-29 14:42:08 -07:00
Zach Brown
37d5aae4d2 scoutfs-utils: add corruption messages
Update the format header and add a man page which describes the
corruption messages that the kernel module can spit out.

Signed-off-by: Zach Brown <zab@versity.com>
2018-04-27 09:06:40 -07:00
Zach Brown
f275020baa scoutfs-utils: update btree constants
Signed-off-by: Zach Brown <zab@versity.com>
2018-04-13 08:59:06 -07:00
Zach Brown
8e6c18a0fa scoutfs-utils: support small keys
Make the changes to support the new small key struct.  mkfs and print
work with simpler keys, segment items, and manifest entries.  The item
cache keys ioctl now just needs to work with arrays of keys.

Signed-off-by: Zach Brown <zab@versity.com>
2018-04-04 09:15:54 -05:00
Zach Brown
837310e8e6 scoutfs-utils: add le64_add_cpu
Signed-off-by: Zach Brown <zab@versity.com>
2018-04-04 09:15:54 -05:00
Zach Brown
65ce5c6ad5 scoutfs-utils: clean up _MAX defines
Signed-off-by: Zach Brown <zab@versity.com>
2018-04-04 09:15:54 -05:00
Zach Brown
0770cc8c57 scoutfs-utils: support single dirent format
Signed-off-by: Zach Brown <zab@versity.com>
2018-04-04 09:15:54 -05:00
Zach Brown
787555158a scoutfs-utils: builtin rand returns int
When we added these externs to silence some spurious sparse warning we
forgot to give them a return value.

Signed-off-by: Zach Brown <zab@versity.com>
2018-04-03 11:18:45 -07:00
Zach Brown
8119a56c92 scoutfs-utils: update format for xattr cleanups
xattr items are now stored at the hash of the name and have a header in
the first part.

Signed-off-by: Zach Brown <zab@versity.com>
2018-03-15 09:23:53 -07:00
Zach Brown
ac1065014b scoutfs-utils: add stat -s option
Lots of tests run scoutfs stat and parse a single value.  Give them an
option to have the only output be that value so they don't have to pull
it out of the output.

Signed-off-by: Zach Brown <zab@versity.com>
2018-02-21 09:36:49 -08:00
Zach Brown
02204c36fc scoutfs-utils: clean up 'stat' output
The previous formatting was modeled after the free form 'stat' output
and it's a real mess.  Just make it a simple "name value" table.

Signed-off-by: Zach Brown <zab@versity.com>
2018-02-21 09:36:49 -08:00
Zach Brown
2527b4906e scoutfs-utils: remove inode blocks field
It's the sum of online and offline and is redundant.

Signed-off-by: Zach Brown <zab@versity.com>
2018-02-21 09:36:49 -08:00
Zach Brown
d796fbf15e scoutfs: track online and offline blocks
Signed-off-by: Zach Brown <zab@versity.com>
2018-02-21 09:36:49 -08:00
Zach Brown
e68a999ed5 scoutfs-utils: remove locks command
scoutfs now directly uses the kernel dlm subsystem and offers a debugfs
file with the current lock state.  We don't need userspace to read and
format the contents of a debugging file.

Signed-off-by: Zach Brown <zab@versity.com>
2018-02-14 15:01:16 -08:00
Zach Brown
7d674fa4bf scoutfs-utils: remove size inode index items
With the removal of the size index items we no longer have to print them
or be able to walk the index.  mkfs only needs to create a meta seq
index item for the root inode.

Signed-off-by: Zach Brown <zab@versity.com>
2018-01-30 15:03:50 -08:00
Mark Fasheh
7c30294e1b scoutfs-utils: update format.h with file handle definition
Signed-off-by: Mark Fasheh <mfasheh@versity.com>
2018-01-26 12:03:09 -08:00
Zach Brown
33fa14b730 scoutfs: remove SCOUTFS_LOCK_INODE_GROUP_OFFSET
This is an unused artifact from a previous key format.

Signed-off-by: Zach Brown <zab@versity.com>
2017-12-08 12:18:57 -06:00
Mark Fasheh
3ecc099589 scoutfs-utils: add command to print locking state
This command takes a device and dumps all dlmglue locks and their state to
the console. It also computes some average lock wait times. We provide a
couple of options:

--lvbs=[yes|no] turns on or off printing of lvb data (default is off)

--oneline provides a more concise per-lock printout.

Signed-off-by: Mark Fasheh <mfasheh@versity.com>
2017-11-28 14:47:50 -08:00
Mark Fasheh
7df8b87128 scoutfs-utils: cmd_register - pass a parsing friendly argv
We were chopping off the command string when passing the argument array into
registered commands. getopt expects a program name as the first argument, so
change cmd_execute() to only chop off the scoutfs program name.  Now we
can parse command arguments in an easy and standard manner.

This necessitates a small update of each commands usage of argv/argc.

Signed-off-by: Mark Fasheh <mfasheh@versity.com>
2017-11-28 14:47:50 -08:00
Zach Brown
0876fb31c6 scoutfs-utils: remove btree item bit augmentation
We no longer need the complexity of augmenting the btree to find items
with bits set.

Signed-off-by: Zach Brown <zab@versity.com>
2017-10-26 14:48:14 -07:00
Zach Brown
80e0c4bd56 scoutfs-utils: add support for btree migration key
Signed-off-by: Zach Brown <zab@versity.com>
2017-10-26 14:48:14 -07:00
Mark Fasheh
0acab247e3 scoutfs-utils: update scoutfs_inode definition
We need the flags field from -kmod.

Signed-off-by: Mark Fasheh <mfasheh@versity.com>
2017-10-18 13:23:06 -07:00
Zach Brown
4ab22d8f09 scoutfs-utils: update format for net greeting
Signed-off-by: Zach Brown <zab@versity.com>
2017-10-12 13:58:11 -07:00
Zach Brown
b3d11925c7 scoutfs-utils: add support for format_hash
Calculate the hash of format.h and ioctl.h, put it in the super during
mkfs, and print it out.

Signed-off-by: Zach Brown <zab@versity.com>
2017-10-12 13:58:11 -07:00
Zach Brown
34fc095392 scoutfs-utils: update btree ring calc
Update the calculation of the largest number of btree blocks based on
the format.h update that provides the min free space in parent blocks
instead of the free limit for the entire block.

Signed-off-by: Zach Brown <zab@versity.com>
2017-10-12 13:58:11 -07:00
Zach Brown
362fc0ab62 scoutfs-utils: update format.h
The kernel format.h has built up some changes that the userspace utils
don't use.  We're about to start enforcing exact matching of the source
files at run time so let's bring these back in sync.

Signed-off-by: Zach Brown <zab@versity.com>
2017-10-12 13:58:11 -07:00
Zach Brown
f02944bd73 scoutfs-utils: update inode index item types
Signed-off-by: Zach Brown <zab@versity.com>
2017-10-09 15:32:03 -07:00
Zach Brown
589e9d10b9 scoutfs-utils: move to block mapping items
Update the format header and print output for the mapping items and free
bitmaps.

Signed-off-by: Zach Brown <zab@versity.com>
2017-09-19 11:26:00 -07:00
Zach Brown
288a752f42 scoutfs-utils: update key printing
The kernel key printing code was refactored to more carefully print
keys.  Import this updated code by adding supporting functions around it
so that we don't have to make edits to it and can easily update the
import in the future.

Signed-off-by: Zach Brown <zab@versity.com>
2017-09-06 10:37:18 -07:00
Mark Fasheh
affdaddc15 scoutfs-utils: zero minor variable in parse_walk_entry()
Signed-off-by: Mark Fasheh <mfasheh@versity.com>
2017-08-31 11:19:36 -07:00
Mark Fasheh
2c89ff3a07 scoutfs-utils: remove inode ctime and mtime index items
These were removed in the kernel, we no longer need
them in userspace.

Signed-off-by: Mark Fasheh <mfasheh@versity.com>
2017-08-22 15:58:48 -07:00
Zach Brown
7684e7fcf6 scoutfs-utils: use exported types
format.h and ioctl.h are copied from the kernel module.  It had a habit
of accidentally using types that aren't exported to userspace.  It's
since added build checks that enforce exported types.  This copies the
fixed use of exported types over for hopefully the last time.

Signed-off-by: Zach Brown <zab@versity.com>
2017-08-17 15:27:22 -07:00
Zach Brown
cf291e2483 scoutfs-utils: make release block granular
Update messaging and the code to reflect that the release file region is
specified in terms of 4K blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2017-08-14 09:19:32 -07:00
Zach Brown
7bbe49fde2 scoutfs-utils: sort keys by zones, then types
Our item cache protocol is tied to holding DLM locks which cover a
region of the item namespace.  We want locks to cover all the data
associated with an inode and other locks to cover the indexes.  So we
resort the items first by major (index, fs) then by inode type (inode,
dirent, etc).

Signed-off-by: Zach Brown <zab@versity.com>
2017-07-19 10:33:12 -07:00
Zach Brown
6c37e3dee0 scoutfs-utils: add btree ring storage
Manifest entries and segment allocation bitmap regions are now stored in
btree items instead of the ring log.  This lets us work with them
incrementally and share them between nodes.

Signed-off-by: Zach Brown <zab@versity.com>
2017-07-17 13:43:37 -07:00
Zach Brown
d78649e065 scoutfs-utils: support symlink item with nr field
Signed-off-by: Zach Brown <zab@versity.com>
2017-06-27 14:08:50 -07:00
Zach Brown
6ae8e9743f scoutfs-utils: add support for skip list segments
Signed-off-by: Zach Brown <zab@versity.com>
2017-06-27 14:07:25 -07:00
Zach Brown
c6eaccbf90 scoutfs-utils: add item cache keys commands
Add ioctls to get the keys for cached ranges and items and print them.

Signed-off-by: Zach Brown <zab@versity.com>
2017-06-15 22:22:10 -07:00
Zach Brown
51ae302d81 scoutfs-utils: add key printing
Just lift the key printer from the kernel and use it to print
item keys in segments and in manifest entries.

Signed-off-by: Zach Brown <zab@versity.com>
2017-06-15 21:54:04 -07:00
Zach Brown
228c5d8b4b scoutfs-utils: support meta and data seqs
Signed-off-by: Zach Brown <zab@versity.com>
2017-05-23 12:17:52 -07:00
Zach Brown
08aaa5b430 scoutfs-utils: add stat command
The scoutfs stat command is modeled after stat(1) and uses the STAT_MORE
ioctl.

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-23 12:17:50 -07:00
Zach Brown
9fc99a8c31 scoutfs-utils: add support for inode index items
Add support for the inode index items which are replacing the seq walks
from the old btree structures.  We create the index items for the root
inode, can print out the items, and add a command to walk the indices.

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 11:14:10 -07:00
Zach Brown
9c00602051 scoutfs-utils: print extent flags
Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 11:14:10 -07:00
Zach Brown
a9cb464d49 scoutfs-utils: rename __bitwise
Recent kernel headers have leaked __bitwise into userspace.  Rename our
use of __bitwise in userspace sparse builds to avoid the collision.

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 11:14:10 -07:00
Zach Brown
4585d57153 scoutfs-utils: only print recent super
It's a bit confusing to always see both the old and current super block.
Let's only print the first one.  We could add an argument to print all
of them.

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 11:13:59 -07:00
Zach Brown
1c9a407059 scoutfs-utils: print extent items
Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 11:13:47 -07:00
Zach Brown
e09a216762 Support simpler ring entries
Add mkfs and print support for the simpler rings that the segment bitmap
allocator and manifest are now using.  Some other recent format header
updates come along for the ride.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
bd54995599 Add a simple native bitmap
Nothing fancy at all.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
f86ce74ffd Add BITS_PER_LONG define
Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
a147239022 Remove dead block, btree, and buddy code
Remove the last bits of the dead code from the old btree design.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
2e2ee3b2f1 Print symlink items
Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
77d0268cb2 Add printing xattrs
For now we only print the xattr names, not the values.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
13b2d9bb88 Remove find_xattr commands
We're no longer maintaining xattr backrefs.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
02993a2dd7 Update ino_path for the large cursor
Previously we could iterate over backref items with a small u64.  Now we
need a larger opaque buffer.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
16da3c182a Add printing link backref items
Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
acda5a3bf1 Add support for free_segs in super
The allocator records the total number of free segments in the super
block.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
44f8551fb6 Print data items
Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
52291b2c75 Update format for readdir_pos
We now track each parent dir's next readdir pos and the readdir pos of
each dirent.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
38c8a4901f Print orphan items
Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
c4f2563cc1 Update tools to new segment item layout
The segment item struct used to have fiddly packed offsets and lengths.
Now it's just normal fields so we can work with them directly and get
rid of the native item indirection.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
e81c256a22 Remove the bitops helpers
We don't have any use for the bitops today, we'll resurrect this in
simpler form if it's needed again.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
34c62824e5 Use a treap walker to print segments
We were using a bitmap to record segments during manifest printing and
then walking that bitmap to print segments.  It's a little silly to have
a second data structure record the referenced segments when we could
just walk the manifest again to print the segments.

So refactor node printing into a treap walker that calls a function for
each node.  Then we can have functions that print the node data
structures for each treap and then one that prints the segments that are
referenced by manifest nodes.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
26a4266964 Set manifest keys to precise segment keys
We had changed the manifest keys to fully cover the space around the
segments in the hopes that it'd let item reading easily find negative
cached regions around items.

But that makes compaction think that segments intersect with items when
they really don't.  We'd much rather avoid unnecessary compaction by
having the manifest entries precisely reflect the keys in the segment.

Item reading can do more work at run time to find the bounds of the key
space that are around the edges of the segments it works with.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
c2b47d84c1 Add next_seg_seq field to super
Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
484b34057a Update mkfs and print for treap ring
Update mkfs and print now that the manifest and allocator are stored in
treaps in the ring.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:43 -07:00
Zach Brown
7c4bc528c6 Make sure manifests cover all keys
Make sure that the manifest entries for a given level fully
cover the possible key space.  This helps item reading describe
cached key ranges that extend around items.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:42 -07:00
Zach Brown
c3b6dd0763 Describe ring log with index,nr
Update mkfs and print to describe the ring blocks with a starting index
and number of blocks instead of a head and tail index.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:42 -07:00
Zach Brown
19b674cb38 Print dirent and readdir items
Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:42 -07:00
Zach Brown
7cd70ab2bb Don't double increment segno when printing
Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:42 -07:00
Zach Brown
818e149643 Update mkfs and print for lsm writing
Adapt mkfs and print for the format changes made to support writing
segments.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:42 -07:00
Zach Brown
eb4baa88f5 Print LSM structures
Print segments and their items instead of btree blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:29 -07:00
Zach Brown
c96b833a36 mkfs LSM segment and ring structures
Make a new file system by writing a root inode in a segment and storing
a manifest entry in the ring that references the segment.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 14:20:02 -07:00
Zach Brown
9d3fe27929 Add data_since command
Add a data_since command that operates just like inodes-since.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-18 10:52:25 -08:00
Zach Brown
5fcf70b53e Catch up to kernel's scoutfs_extent
Signed-off-by: Zach Brown <zab@versity.com>
2016-11-17 19:48:05 -08:00
Zach Brown
41e3ca0f41 Consistently use __u8 in format.h
Signed-off-by: Zach Brown <zab@versity.com>
2016-11-17 19:47:39 -08:00
Zach Brown
ec702b9bb3 Update the data_version ioctl to return the u64
We updated the code to use the new iteration of the data_version ioctl
but we forgot to update the ioctl definition so it didn't actually work.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-17 15:45:56 -08:00
Zach Brown
c3f122a5f1 Fix mkfs buddy initialization
mkfs was setting free blk bits starting from 0 instead of from
the blkno offset of the first free block.  This resulted in
the highest order above a used blkno being marked free.  Freeing
that blkno would then set its lowest order bit, so that blkno could be
allocated from two orders.  That, eventually, can lead to blocks
being doubly allocated and users trampling on each other.

While auditing the code to chase this bug down I also noticed that
write_buddy_blocks() was using a min() that makes no sense at all.  Here
'blk' is inclusive, the modulo math works on its own.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-17 15:43:23 -08:00
Zach Brown
932b0776d1 Add commands for working with offline data
Add the data_version, stage, and release commands for working with
offline extents of file data.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-16 14:40:09 -08:00
Zach Brown
22140c93d1 Print extents instead of bmap items
Print the extent items now that we're not using bmap items any more.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-16 14:40:04 -08:00
Zach Brown
c2cfb0227f Print the new inode data_version field
We've added a data_version field to the inode for tracking changes to
the file's data.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-16 14:39:39 -08:00
Zach Brown
f1d8955303 Support the ino_path ioctl
We updated the inode_paths ioctl to return one path and use a counter
cursor and renamed it to ino_path.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-16 14:39:00 -08:00
Zach Brown
fb16af7b7d btree nr_items is now a le16
The btree block now has a le16 nr_items field to make room for the
number of items that larger blocks can hold.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-16 14:38:04 -08:00
Zach Brown
cd0d045c93 Add support for full radix buddy blocks
Update mkfs and print for the full radix buddy allocators.  mkfs has to
calculate the number of blocks and the height of the tree and has to
initialize the paths down the left and right side of the tree.
Print needs to dump the new radix blocks and super block fields.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-04 14:16:25 -07:00
Zach Brown
40b9f19ec4 Add bitops.c for find_next_bit_le()
The upcoming buddy changes are going to need a find_next_bit_le().

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-04 14:16:25 -07:00
Zach Brown
871db60fb2 Add U16_MAX
Add a simple U16_MAX define for upcoming buddy changes.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-04 14:16:25 -07:00
Zach Brown
a901db2ff7 Print seqs in bmap items
The bmap items now have the sequence number that wrote each mapped
block.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-04 14:16:25 -07:00
Zach Brown
b436772376 Add orphan key
Printing the raw item is enough, it doesn't have a value to decode.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-04 14:16:25 -07:00
Zach Brown
e6222223c2 Update format.h for kernel code helpers
Update format.h for some format defines that have so far only been used
by the kernel code.

Signed-off-by: Zach Brown <zab@versity.com>
2016-11-04 14:16:22 -07:00
Zach Brown
0dff7f55a6 Use openssl for pseudo random bytes
The pseudo random byte wrapper function used the Intel instructions
so that it could deal with high call rates, like initializing random
node priorities for a large treap.

But this is obviously not remotely portable and has the annoying habit
of tripping up versions of valgrind that haven't yet learned about these
instructions.

We don't actually have high bandwidth callers so let's back off and just
let openssl take care of this for us.

Signed-off-by: Zach Brown <zab@versity.com>
2016-09-27 09:47:50 -07:00
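A minimal sketch of such a wrapper built on OpenSSL's RAND_bytes(); the function name and error handling here are illustrative, not the scoutfs utility code. It links against libcrypto (-lcrypto).

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <openssl/rand.h>

/* fill buf with len pseudo random bytes, aborting on failure */
static void pseudo_random_bytes(void *buf, size_t len)
{
	if (RAND_bytes(buf, (int)len) != 1) {
		fprintf(stderr, "RAND_bytes failed\n");
		exit(1);
	}
}

int main(void)
{
	uint64_t prio;

	pseudo_random_bytes(&prio, sizeof(prio));
	printf("treap priority: %llu\n", (unsigned long long)prio);
	return 0;
}
```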
Zach Brown
4ccb80a8ec Initialize all the buddy slot free order fields
Initialize the free_order field in all the slots of the buddy index
block so that the kernel will try to allocate from them and will
initialize and populate the first block.

Signed-off-by: Zach Brown <zab@versity.com>
2016-09-08 16:40:39 -07:00
Zach Brown
86ffdf24a2 Add symlink support
Print out the raw symlink items.

Signed-off-by: Zach Brown <zab@versity.com>
2016-08-29 10:25:46 -07:00
Zach Brown
a89f6c10b1 Add buddy indirect order totals
The total count of all the set order bits in all the child buddy blocks
is needed for statfs.

Signed-off-by: Zach Brown <zab@versity.com>
2016-08-23 16:41:57 -07:00
Zach Brown
2f91a9a735 Make command listing less noisy
It's still not great, but at least it's a little clearer.

Signed-off-by: Zach Brown <zab@versity.com>
2016-08-23 12:31:03 -07:00
Zach Brown
c17a7036ed Add find xattr commands
Add commands that use the find-xattr ioctls to show the inode numbers of
inodes which probably contain xattrs matching the specified name or
value.

Signed-off-by: Zach Brown <zab@versity.com>
2016-08-23 12:21:47 -07:00
Zach Brown
43619a245d Add inode-paths via link backrefs
Add the inode-paths command which uses the ioctl to display all the
paths that lead to the given inode.  We add support for printing
the new link backref items and inode and dirent fields.

Signed-off-by: Zach Brown <zab@versity.com>
2016-08-11 16:50:57 -07:00
Zach Brown
be4a137479 Add support for printing block map items
Signed-off-by: Zach Brown <zab@versity.com>
2016-08-10 15:19:09 -07:00
Zach Brown
25e3b03d94 Add support for simpler btree block
Update mkfs and print to the new simpler btree block format.

Signed-off-by: Zach Brown <zab@versity.com>
2016-08-02 13:31:06 -07:00
Zach Brown
0af40547b5 Update to smaller block size
We're going to try using a smaller fixed block size to reduce complexity
in the file data extent code.

Signed-off-by: Zach Brown <zab@versity.com>
2016-08-02 13:30:40 -07:00
Zach Brown
6a97aa3c9a Add support for the radix buddy bitmaps
Update mkfs and print to support the buddy allocator that's indexed by
radix blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2016-08-02 13:29:51 -07:00
Zach Brown
4b86256904 Ignore sparse warning for builtin fpclassify
Signed-off-by: Zach Brown <zab@versity.com>
2016-07-27 13:56:01 -07:00
Zach Brown
99167f6d66 Expand little endian bitops functions
We had the start of functions that operated on little endian bitmaps.
This adds more operations and uses __packed to support unaligned bitmaps
on platforms where unaligned accesses are a problem.

Signed-off-by: Zach Brown <zab@versity.com>
2016-07-27 13:50:51 -07:00
Zach Brown
c48e08a378 Add -fno-strict-aliasing
We modify the same memory through pointers of different types all the
live long day.

Signed-off-by: Zach Brown <zab@versity.com>
2016-07-27 13:49:27 -07:00
Zach Brown
1cacc50de0 Remove old unused lebitmap code
Signed-off-by: Zach Brown <zab@versity.com>
2016-07-22 15:04:07 -07:00
Zach Brown
fc37ece26b Remove homebrew tracing
Happily, it turns out that there are crash extensions for extracting
trace messages from crash dumps.  That's good enough for us.

Signed-off-by: Zach Brown <zab@versity.com>
2016-07-22 13:54:10 -07:00
Zach Brown
54044508fa Add inodes-since command
The kernel now has an ioctl to give us inode numbers with their sequence
number for every inode that's been modified since a given tree update
sequence number.

Update mkfs and print to the on-disk format changes and add a trivial
inodes-since command which calls the ioctl and prints the results.

Signed-off-by: Zach Brown <zab@versity.com>
2016-07-05 17:49:13 -04:00
Zach Brown
a069bdd945 Add format header updates for xattrs
The kernel now has items and structs for xattrs.

Signed-off-by: Zach Brown <zab@versity.com>
2016-07-04 11:02:07 -07:00
Zach Brown
d774e5308b Add support for printing traces from files
We can extract the formats and records from a crash dump and
print them from the files.

Signed-off-by: Zach Brown <zab@versity.com>
2016-05-28 12:41:30 -07:00
Zach Brown
54867b0f9c Add support for printing kernel traces
Add a 'trace' command which uses the debugfs file created by the scoutfs
kernel module to read and print trace messages.

Signed-off-by: Zach Brown <zab@versity.com>
2016-05-28 11:10:08 -07:00
Zach Brown
29c1f529f1 Get rid of max dirent collision nr in inode
The slightly tweaked format that uses linear probing to mitigate dirent
name hash collisions doesn't need a record of the greatest number of
collisions in the dir inode.

Signed-off-by: Zach Brown <zab@versity.com>
2016-05-02 21:40:02 -07:00
Zach Brown
67ad29508d Update for next_ino in super block
Add support for storing the next allocated inode in the super block.

Signed-off-by: Zach Brown <zab@versity.com>
2016-05-01 09:11:52 -07:00
Zach Brown
77c673f984 Add mkfs and print support for buddy alloc
Initialize the block count fields in the super block on mkfs and print
out the buddy allocator fields and blocks.

Signed-off-by: Zach Brown <zab@versity.com>
2016-04-30 12:25:59 -07:00
Zach Brown
1235f04c4a Print parent block ref item values
Signed-off-by: Zach Brown <zab@versity.com>
2016-04-14 12:59:37 -07:00
Zach Brown
56077b61a1 Move to btree blocks
Update mkfs and printing for the btree experiment.

Signed-off-by: Zach Brown <zab@versity.com>
2016-04-12 19:33:32 -07:00
Zach Brown
c4fcf40097 Update ring manifest deletion entries
The ring now stores full manifest entries that are deleted
rather than just their block number.

Signed-off-by: Zach Brown <zab@versity.com>
2016-04-02 20:30:45 -04:00
Zach Brown
544fd1ba9a Add ctrstat command
Like vmstat and iostat, this prints out our counters over time.

Signed-off-by: Zach Brown <zab@versity.com>
2016-04-01 00:04:26 -04:00
Zach Brown
af2975111a Update format for smaller bloom
Update our format for the smaller bloom sizes.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-29 13:10:45 -04:00
Zach Brown
7ea78502c8 Read both super blocks and use current
When printing try to read both super blocks and use the most recent one
instead of just using the first one.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-29 13:07:00 -04:00
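A standalone sketch of that approach with a hypothetical super layout (fixed offsets, a magic, and a sequence number; none of these are the scoutfs format): read both copies, ignore any that fail validation, and keep whichever valid copy has the higher sequence.

```c
#include <stdint.h>
#include <unistd.h>

#define SUPER_OFF_A	(0ULL * 4096)	/* hypothetical locations */
#define SUPER_OFF_B	(1ULL * 4096)
#define SUPER_MAGIC	0x1234abcd5678ef90ULL

struct example_super {
	uint64_t magic;
	uint64_t seq;
	uint8_t pad[4096 - 16];
};

static int read_one(int fd, off_t off, struct example_super *sup)
{
	if (pread(fd, sup, sizeof(*sup), off) != (ssize_t)sizeof(*sup))
		return -1;
	return sup->magic == SUPER_MAGIC ? 0 : -1;
}

/* read both copies and return whichever valid copy has the higher seq */
static int read_current_super(int fd, struct example_super *out)
{
	struct example_super a, b;
	int have_a = read_one(fd, SUPER_OFF_A, &a) == 0;
	int have_b = read_one(fd, SUPER_OFF_B, &b) == 0;

	if (!have_a && !have_b)
		return -1;

	if (have_a && (!have_b || a.seq >= b.seq))
		*out = a;
	else
		*out = b;
	return 0;
}
```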
Zach Brown
10cf83ffc5 Update key type value format change
Adding file data items changed the item key values.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-26 14:00:19 -04:00
Zach Brown
339c719e4e Print dirents in print command
Add support for printing dirent items to scoutfs print.  We're careful
to change non-printable characters to ".".

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-25 00:24:48 -04:00
Zach Brown
e1c1c50ead Update to multiple dirent hash format
Update print to show the inode fields in the newer dirent hashing
scheme.  mkfs doesn't create directory entries.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-24 21:09:51 -07:00
Zach Brown
e0e6179156 Fix bloom filter bugs
The bloom filter had two bad bugs.

First the calculation was adding the bit width of newly hashed data to
the hash value instead of the record of the hashed bits available.

And the block offset calculation for each bit wasn't truncated to the
number of bloom blocks.  While fixing this we can clean up the code and
make it faster by recording the bits in terms of their block and bit
offset instead of their large bit value.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-23 22:23:21 -04:00
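The corrected mapping amounts to splitting each hashed index into a block offset wrapped to the number of bloom blocks and a bit offset within that block. A minimal sketch with made-up sizes (not the scoutfs format constants):

```c
#include <stdint.h>

#define BLOOM_BLOCKS		4		/* made-up sizes */
#define BLOOM_BITS_PER_BLOCK	(4096 * 8)

struct bloom_bit {
	uint32_t block;		/* which bloom block */
	uint32_t bit;		/* bit offset within that block */
};

/*
 * Split a hashed index into (block, bit): the block offset is truncated
 * to the number of bloom blocks rather than carrying one large absolute
 * bit number around.
 */
static struct bloom_bit calc_bloom_bit(uint64_t index)
{
	struct bloom_bit b = {
		.block = (uint32_t)((index / BLOOM_BITS_PER_BLOCK) % BLOOM_BLOCKS),
		.bit   = (uint32_t)(index % BLOOM_BITS_PER_BLOCK),
	};

	return b;
}
```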
Zach Brown
ddf5ef1017 Fix set_bit_le() type width problems
The swizzle value was defined in terms of longs but the code used u64s.
And the bare shifted value was an int so it'd get truncated.  Switch it
all to using longs.

The ratio of bugs to lines of code in that first attempt was through the
roof!

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-23 22:21:11 -04:00
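A minimal sketch of the long-based version on a little-endian host (no byte swizzle shown; this is illustrative rather than the scoutfs bitops code). The mask is built as 1UL so a bit number of 32 or more is never truncated through an int-width shift:

```c
#include <limits.h>

#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)

static inline void set_bit_le_sketch(unsigned long nr, unsigned long *addr)
{
	/* all of the index math and the shifted mask stay in unsigned long */
	addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}
```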
Zach Brown
502783e1bc Update to segment format with skiplists and bloom
Update to the format rev which has large log segments that start with
bloom filter blocks, have items linked in a skip list, and item values
stored at offsets in the block.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-23 15:23:54 -07:00
Zach Brown
463f5e5a07 Correctly store last random word
pseudo_random_bytes() was accidentally copying the last partial long to
the beginning of the buffer instead of the end.  The final partial long
bytes weren't being filled.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-23 15:16:58 -07:00
Zach Brown
d0429e1c88 Add minimal bloom filter helpers
mkfs just needs to initialize bloom filter blocks with the bits for the
single root inode key.  We can get away with these skeletal functions
for now.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-23 14:01:16 -07:00
Zach Brown
8471134328 Add trivial set_bit_le in bitops.h
We're going to need to start setting bloom filter bits in mkfs so we'll
add this trivial inline.  It might grow later.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-23 14:00:07 -07:00
Zach Brown
f3de3b1817 Add DIV_ROUND_UP() to util.h
We're going to need this in some upcoming format.h changes.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-23 13:58:59 -07:00
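The helper is the conventional round-up integer division macro; a short sketch of the usual definition with a worked example:

```c
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* e.g. DIV_ROUND_UP(10, 4) == 3: ten bytes need three 4-byte words */
```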
Zach Brown
a0a3ef9675 Mark all mkfs chunks allocated in bitmap
The initial bitmap entry written in the ring by mkfs was off by one.
Three chunks were written but the 0th chunk is also used by the supers.
It has to mark the first four chunks as allocated.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-17 17:05:24 -07:00
Zach Brown
e59d0af199 Print full map and ring blocks
In the first pass we'd only printed the first map and ring blocks.

This reads the number of used map blocks into an allocation large enough
for the maximum number of map blocks.

Then we use the block numbers from the map blocks to print the active
ring blocks which are described by the super.

Signed-off-by: Zach Brown <zab@versity.com>
2016-03-17 17:05:19 -07:00
Zach Brown
d8f76cb893 Minor ring manifest format updates
Update to the format changes that were made while implementing ring
replay in the kernel.
2016-02-25 22:45:06 -08:00
Zach Brown
906c0186bc Get path size with stat or ioctl
If we're making a file system on a real device then we need to get
the device size with an ioctl.
2016-02-25 22:40:48 -08:00
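A minimal sketch of that distinction (not the scoutfs utility code): stat() reports the size of a regular file, while a block device needs the BLKGETSIZE64 ioctl to report its size in bytes.

```c
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>

/* return the usable size in bytes of a regular file or block device */
static int path_size(int fd, uint64_t *size)
{
	struct stat st;

	if (fstat(fd, &st) < 0)
		return -1;

	if (S_ISBLK(st.st_mode))
		return ioctl(fd, BLKGETSIZE64, size);	/* bytes, as a u64 */

	*size = (uint64_t)st.st_size;
	return 0;
}
```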
Zach Brown
e9baa4559b Introduce chunk and segment terminology
The use of 'log' for all the large sizes was pretty confusing.  Let's
use 'chunk' to describe the large alloc size.  Other things live in them
as well as logs.  Then use 'log segment' to describe the larger log
structure stored in a chunk that's made up of all the little blocks.
2016-02-23 17:04:28 -08:00
Zach Brown
de1bf39614 Get rid of bricks
Get rid of the explicit distinction between brick and block numbers.
The format is now defined in terms of fixed 4k blocks.  Logs become a
logical structure that's made up of a fixed number of blocks.  The
allocator still manages large log sized regions.
2016-02-19 15:40:04 -08:00
Zach Brown
a7b8f955fe write ring brick as brick in mkfs
The only ring brick was being written as a full block which made its
brick checksum cover the entire block instead of just the brick.
2016-02-19 08:53:14 -08:00
Zach Brown
2c2f090168 Initial commit
This initial commit has enough to make a new file system and print out
its structures.

Signed-off-by: Zach Brown <zab@versity.com>
2016-02-12 15:58:41 -08:00
202 changed files with 28208 additions and 8437 deletions

0
.gitignore vendored Normal file
View File

17
Makefile Normal file
View File

@@ -0,0 +1,17 @@
#
# Typically development is done in each subdir, but we have a tiny
# makefile here to make it easy to run simple targets across all the
# subdirs.
#
SUBDIRS := kmod utils tests
NOTTESTS := kmod utils
all clean: $(SUBDIRS) FORCE
dist: $(NOTTESTS) FORCE
$(SUBDIRS): FORCE
$(MAKE) -C $@ $(MAKECMDGOALS)
all:
FORCE:

View File

@@ -6,7 +6,7 @@ from the ground up to support large archival systems.
Its key differentiating features are:
- Integrated consistent indexing accelerates archival maintenance operations
- Log-structured commits allow nodes to write concurrently without contention
- Commit logs allow nodes to write concurrently without contention
It meets best of breed expectations:
@@ -31,15 +31,9 @@ functionality hasn't been implemented. It's appropriate for early
adopters and interested developers, not for production use.
In that vein, expect significant incompatible changes to both the format
of network messages and persistent structures. To avoid mistakes the
implementation currently calculates a hash of the format and ioctl
header files in the source tree. The kernel module will refuse to mount
a volume created by userspace utilities with a mismatched hash, and it
will refuse to connect to a remote node with a mismatched hash. This
means having to unmount, mkfs, and remount everything across many
functional changes. Once the format is nailed down we'll wire up
forward and back compat machinery and remove this temporary safety
measure.
of network messages and persistent structures. Since the format hash-checking
has now been removed in preparation for release, if there is any doubt, mkfs
is strongly recommended.
The current kernel module is developed against the RHEL/CentOS 7.x
kernel to minimize the friction of developing and testing with partners'
@@ -62,17 +56,22 @@ help on the mailing list.**
The requirements for running scoutfs on a small cluster are:
1. One or more nodes running x86-64 CentOS/RHEL 7.4 (or 7.3)
2. Access to a single shared block device
2. Access to two shared block devices
3. IPv4 connectivity between the nodes
The steps for getting scoutfs mounted and operational are:
1. Get the kernel module running on the nodes
2. Make a new filesystem on the device with the userspace utilities
3. Mount the device on all the nodes
2. Make a new filesystem on the devices with the userspace utilities
3. Mount the devices on all the nodes
In this example we run all of these commands on three nodes. The block
device name is the same on all the nodes.
In this example we use three nodes. The names of the block devices are
the same on all the nodes. Two of the nodes will be quorum members. A
majority of quorum members must be mounted to elect a leader to run a
server that all the mounts connect to. It should be noted that two
quorum members results in a majority of one, each member itself, so
split brain elections are possible but so unlikely that it's fine for a
demonstration.
1. Get the Kernel Module and Userspace Binaries
@@ -87,34 +86,37 @@ device name is the same on all the nodes.
```shell
yum install kernel-devel
git clone git@github.com:versity/scoutfs-kmod-dev.git
make -C scoutfs-kmod-dev module
git clone git@github.com:versity/scoutfs.git
make -C scoutfs
modprobe libcrc32c
insmod scoutfs-kmod-dev/src/scoutfs.ko
git clone git@github.com:versity/scoutfs-utils-dev.git
make -C scoutfs-utils-dev
alias scoutfs=$PWD/scoutfs-utils-dev/src/scoutfs
insmod scoutfs/kmod/src/scoutfs.ko
alias scoutfs=$PWD/scoutfs/utils/src/scoutfs
```
2. Make a New Filesystem (**destroys contents, no questions asked**)
2. Make a New Filesystem (**destroys contents**)
We specify that two of our three nodes must be present to form a
quorum for the system to function.
We specify quorum slots with the addresses of each of the quorum
member nodes, the metadata device, and the data device.
```shell
scoutfs mkfs -Q 2 /dev/shared_block_device
scoutfs mkfs -Q 0,$NODE0_ADDR,12345 -Q 1,$NODE1_ADDR,12345 /dev/meta_dev /dev/data_dev
```
3. Mount the Filesystem
Each mounting node provides its local IP address on which it will run
an internal server for the other mounts if it is elected the leader by
the quorum.
First, mount each of the quorum nodes so that they can elect and
start a server for the remaining node to connect to. The slot numbers
were specified with the leading "0,..." and "1,..." in the mkfs options
above.
```shell
mkdir /mnt/scoutfs
mount -t scoutfs -o server_addr=$NODE_ADDR /dev/shared_block_device /mnt/scoutfs
mount -t scoutfs -o quorum_slot_nr=$SLOT_NR,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
Then mount the remaining node which can now connect to the running server.
```shell
mount -t scoutfs -o metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
4. For Kicks, Observe the Metadata Change Index

View File

@@ -16,11 +16,7 @@ SCOUTFS_GIT_DESCRIBE := \
$(shell git describe --all --abbrev=6 --long 2>/dev/null || \
echo no-git)
SCOUTFS_FORMAT_HASH := \
$(shell cat src/format.h src/ioctl.h | md5sum | cut -b1-16)
SCOUTFS_ARGS := SCOUTFS_GIT_DESCRIBE=$(SCOUTFS_GIT_DESCRIBE) \
SCOUTFS_FORMAT_HASH=$(SCOUTFS_FORMAT_HASH) \
CONFIG_SCOUTFS_FS=m -C $(SK_KSRC) M=$(CURDIR)/src \
EXTRA_CFLAGS="-Werror"
@@ -51,7 +47,7 @@ modules_install:
dist: scoutfs-kmod.spec
git archive --format=tar --prefix scoutfs-kmod-$(RPM_VERSION)/ HEAD^{tree} > $(TARFILE)
@ tar rf $(TARFILE) --transform="s@\(.*\)@scoutfs-$(RPM_VERSION)/\1@" scoutfs-kmod.spec
@ tar rf $(TARFILE) --transform="s@\(.*\)@scoutfs-kmod-$(RPM_VERSION)/\1@" scoutfs-kmod.spec
clean:
make $(SCOUTFS_ARGS) clean

View File

@@ -1,7 +1,6 @@
obj-$(CONFIG_SCOUTFS_FS) := scoutfs.o
CFLAGS_super.o = -DSCOUTFS_GIT_DESCRIBE=\"$(SCOUTFS_GIT_DESCRIBE)\" \
-DSCOUTFS_FORMAT_HASH=0x$(SCOUTFS_FORMAT_HASH)LLU
CFLAGS_super.o = -DSCOUTFS_GIT_DESCRIBE=\"$(SCOUTFS_GIT_DESCRIBE)\"
CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
@@ -9,6 +8,8 @@ CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
-include $(src)/Makefile.kernelcompat
scoutfs-y += \
avl.o \
alloc.o \
block.o \
btree.o \
client.o \
@@ -16,10 +17,12 @@ scoutfs-y += \
data.o \
dir.o \
export.o \
ext.o \
file.o \
forest.o \
inode.o \
ioctl.o \
item.o \
lock.o \
lock_server.o \
msg.o \
@@ -27,10 +30,11 @@ scoutfs-y += \
options.o \
per_task.o \
quorum.o \
radix.o \
scoutfs_trace.o \
server.o \
sort_priv.o \
spbm.o \
srch.o \
super.o \
sysfs.o \
trans.o \
@@ -50,5 +54,9 @@ $(src)/check_exported_types:
echo "no raw types in exported headers, preface with __"; \
exit 1; \
fi
@if egrep '\<__packed\>' $(src)/format.h $(src)/ioctl.h; then \
echo "no __packed allowed in exported headers"; \
exit 1; \
fi
extra-y += check_exported_types

1229
kmod/src/alloc.c Normal file

File diff suppressed because it is too large Load Diff

156
kmod/src/alloc.h Normal file
View File

@@ -0,0 +1,156 @@
#ifndef _SCOUTFS_ALLOC_H_
#define _SCOUTFS_ALLOC_H_
#include "ext.h"
/*
* These are implementation-specific metrics, they don't need to be
* consistent across implementations. They should probably be run-time
* knobs.
*/
/*
* The largest extent that we'll try to allocate with fallocate. We're
* trying not to completely consume a transactions data allocation all
* at once. This is only allocation granularity, repeated allocations
* can produce large contiguous extents.
*/
#define SCOUTFS_FALLOCATE_ALLOC_LIMIT \
(128ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* The largest aligned region that we'll try to allocate at the end of
* the file as it's extended. This is also limited to the current file
* size so we can only waste at most twice the total file size when
* files are less than this. We try to keep this around the point of
* diminishing returns in streaming performance of common data devices
* to limit waste.
*/
#define SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT \
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* Small data allocations are satisfied by cached extents stored in
* the run-time alloc struct to minimize item operations for small
* block allocations. Large allocations come directly from btree
* extent items, and this defines the threshold between them.
*/
#define SCOUTFS_ALLOC_DATA_LG_THRESH \
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* Fill client alloc roots to the target when they fall below the lo
* threshold.
*
* We're giving the client the most available meta blocks we can so that
* it has the freedom to build large transactions before worrying that
* it might run out of meta allocs during commits.
*/
#define SCOUTFS_SERVER_META_FILL_TARGET \
SCOUTFS_ALLOC_LIST_MAX_BLOCKS
#define SCOUTFS_SERVER_META_FILL_LO \
(SCOUTFS_ALLOC_LIST_MAX_BLOCKS / 2)
#define SCOUTFS_SERVER_DATA_FILL_TARGET \
(4ULL * 1024 * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
#define SCOUTFS_SERVER_DATA_FILL_LO \
(1ULL * 1024 * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* Each of the server meta_alloc roots will try to keep a minimum amount
* of free blocks. The server will swap roots when its current avail
* falls below the threshold while the freed root is still above it. It
* must have room for all the largest allocation attempted in a
* transaction on the server.
*/
#define SCOUTFS_SERVER_META_ALLOC_MIN \
(SCOUTFS_SERVER_META_FILL_TARGET * 2)
/*
* A run-time use of a pair of persistent avail/freed roots as a
* metadata allocator. It has the machinery needed to lock and avoid
* recursion when dirtying the list blocks that are used during the
* transaction.
*/
struct scoutfs_alloc {
/* writers rarely modify list_head avail/freed. readers often check for _meta_alloc_low */
seqlock_t seqlock;
struct mutex mutex;
struct scoutfs_block *dirty_avail_bl;
struct scoutfs_block *dirty_freed_bl;
struct scoutfs_alloc_list_head avail;
struct scoutfs_alloc_list_head freed;
};
/*
* A run-time data allocator. We have a cached extent in memory that is
* a lot cheaper to work with than the extent items, and we have a
* consistent record of the total_len that can be sampled outside of the
* usual heavy serialization of the extent modifications.
*/
struct scoutfs_data_alloc {
struct scoutfs_alloc_root root;
struct scoutfs_extent cached;
atomic64_t total_len;
};
void scoutfs_alloc_init(struct scoutfs_alloc *alloc,
struct scoutfs_alloc_list_head *avail,
struct scoutfs_alloc_list_head *freed);
int scoutfs_alloc_prepare_commit(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri);
int scoutfs_alloc_meta(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 *blkno);
int scoutfs_free_meta(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 blkno);
void scoutfs_dalloc_init(struct scoutfs_data_alloc *dalloc,
struct scoutfs_alloc_root *data_avail);
void scoutfs_dalloc_get_root(struct scoutfs_data_alloc *dalloc,
struct scoutfs_alloc_root *data_avail);
u64 scoutfs_dalloc_total_len(struct scoutfs_data_alloc *dalloc);
int scoutfs_dalloc_return_cached(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_data_alloc *dalloc);
int scoutfs_alloc_data(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_data_alloc *dalloc, u64 count,
u64 *blkno_ret, u64 *count_ret);
int scoutfs_free_data(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *root, u64 blkno, u64 count);
int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total);
int scoutfs_alloc_fill_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_list_head *lhead,
struct scoutfs_alloc_root *root,
u64 lo, u64 target);
int scoutfs_alloc_empty_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *root,
struct scoutfs_alloc_list_head *lhead);
int scoutfs_alloc_splice_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_list_head *dst,
struct scoutfs_alloc_list_head *src);
bool scoutfs_alloc_meta_low(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 nr);
typedef int (*scoutfs_alloc_foreach_cb_t)(struct super_block *sb, void *arg,
int owner, u64 id,
bool meta, bool avail, u64 blocks);
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg);
#endif

405
kmod/src/avl.c Normal file
View File

@@ -0,0 +1,405 @@
/*
* Copyright (C) 2020 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/string.h>
#include "format.h"
#include "avl.h"
/*
* We use a simple avl to index items in btree blocks. The interface
* looks a bit like the kernel rbtree interface in that the caller
* manages locking and storage for the nodes. Node references are
* stored as byte offsets from the root so that the implementation
* doesn't have to know anything about the caller's container.
*
* We store the full height in each node, rather than just 2 bits for
* the balance, so that we can use the extra redundancy to verify the
* integrity of the tree.
*/
static struct scoutfs_avl_node *node_ptr(struct scoutfs_avl_root *root,
__le16 off)
{
return off ? (void *)root + le16_to_cpu(off) : NULL;
}
static __le16 node_off(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
return node ? cpu_to_le16((void *)node - (void *)root) : 0;
}
static __u8 node_height(struct scoutfs_avl_node *node)
{
return node ? node->height : 0;
}
struct scoutfs_avl_node *
scoutfs_avl_search(struct scoutfs_avl_root *root,
scoutfs_avl_compare_t compare, void *arg, int *cmp_ret,
struct scoutfs_avl_node **par,
struct scoutfs_avl_node **next,
struct scoutfs_avl_node **prev)
{
struct scoutfs_avl_node *node = node_ptr(root, root->node);
int cmp;
if (cmp_ret)
*cmp_ret = -1;
if (par)
*par = NULL;
if (next)
*next = NULL;
if (prev)
*prev = NULL;
while (node) {
cmp = compare(arg, node);
if (par)
*par = node;
if (cmp_ret)
*cmp_ret = cmp;
if (cmp < 0) {
if (next)
*next = node;
node = node_ptr(root, node->left);
} else if (cmp > 0) {
if (prev)
*prev = node;
node = node_ptr(root, node->right);
} else {
return node;
}
}
return NULL;
}
struct scoutfs_avl_node *scoutfs_avl_first(struct scoutfs_avl_root *root)
{
struct scoutfs_avl_node *node = node_ptr(root, root->node);
while (node && node->left)
node = node_ptr(root, node->left);
return node;
}
struct scoutfs_avl_node *scoutfs_avl_last(struct scoutfs_avl_root *root)
{
struct scoutfs_avl_node *node = node_ptr(root, root->node);
while (node && node->right)
node = node_ptr(root, node->right);
return node;
}
struct scoutfs_avl_node *scoutfs_avl_next(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
struct scoutfs_avl_node *parent;
if (node->right) {
node = node_ptr(root, node->right);
while (node->left)
node = node_ptr(root, node->left);
return node;
}
while ((parent = node_ptr(root, node->parent)) &&
node == node_ptr(root, parent->right))
node = parent;
return parent;
}
struct scoutfs_avl_node *scoutfs_avl_prev(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
struct scoutfs_avl_node *parent;
if (node->left) {
node = node_ptr(root, node->left);
while (node->right)
node = node_ptr(root, node->right);
return node;
}
while ((parent = node_ptr(root, node->parent)) &&
node == node_ptr(root, parent->left))
node = parent;
return parent;
}
static void set_parent_left_right(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *parent,
struct scoutfs_avl_node *old,
struct scoutfs_avl_node *new)
{
__le16 *off;
if (parent == NULL)
off = &root->node;
else if (parent->left == node_off(root, old))
off = &parent->left;
else
off = &parent->right;
*off = node_off(root, new);
}
static void set_height(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
struct scoutfs_avl_node *left = node_ptr(root, node->left);
struct scoutfs_avl_node *right = node_ptr(root, node->right);
node->height = 1 + max(node_height(left), node_height(right));
}
static int node_balance(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
if (node == NULL)
return 0;
return (int)node_height(node_ptr(root, node->right)) -
(int)node_height(node_ptr(root, node->left));
}
/*
* d b
* / \ rotate right -> / \
* b e a d
* / \ <- rotate left / \
* a c c e
*
* The rotate functions are always called with the higher node as the
* earlier argument. Links to a and e are constant. We have to update
* the forward and back refs between parents and nodes for the three links
* along root->[db]->[bd]->c.
*/
static void rotate_right(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *d)
{
struct scoutfs_avl_node *gpa = node_ptr(root, d->parent);
struct scoutfs_avl_node *b = node_ptr(root, d->left);
struct scoutfs_avl_node *c = node_ptr(root, b->right);
set_parent_left_right(root, gpa, d, b);
b->parent = node_off(root, gpa);
b->right = node_off(root, d);
d->parent = node_off(root, b);
d->left = node_off(root, c);
if (c)
c->parent = node_off(root, d);
set_height(root, d);
set_height(root, b);
}
static void rotate_left(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *b)
{
struct scoutfs_avl_node *gpa = node_ptr(root, b->parent);
struct scoutfs_avl_node *d = node_ptr(root, b->right);
struct scoutfs_avl_node *c = node_ptr(root, d->left);
set_parent_left_right(root, gpa, b, d);
d->parent = node_off(root, gpa);
d->left = node_off(root, b);
b->parent = node_off(root, d);
b->right = node_off(root, c);
if (c)
c->parent = node_off(root, b);
set_height(root, b);
set_height(root, d);
}
/*
* Check the balance factor for the given node and perform rotations if
* its two child subtrees are too far out of balance. Return either the
* node again or the root of the newly balanced subtree.
*/
static struct scoutfs_avl_node *
rotate_imbalance(struct scoutfs_avl_root *root, struct scoutfs_avl_node *node)
{
int bal = node_balance(root, node);
struct scoutfs_avl_node *child;
if (bal >= -1 && bal <= 1)
return node;
if (bal > 0) {
/* turn right-left case into right-right */
child = node_ptr(root, node->right);
if (node_balance(root, child) < 0)
rotate_right(root, child);
/* rotate left to address right-right */
rotate_left(root, node);
} else {
/* or do the mirror for the left- cases */
child = node_ptr(root, node->left);
if (node_balance(root, child) > 0)
rotate_left(root, child);
rotate_right(root, node);
}
return node_ptr(root, node->parent);
}
void scoutfs_avl_insert(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *parent,
struct scoutfs_avl_node *node, int cmp)
{
node->parent = 0;
node->left = 0;
node->right = 0;
set_height(root, node);
memset(node->__pad, 0, sizeof(node->__pad));
if (parent == NULL) {
root->node = node_off(root, node);
node->parent = 0;
return;
}
if (cmp < 0)
parent->left = node_off(root, node);
else
parent->right = node_off(root, node);
node->parent = node_off(root, parent);
while (parent) {
set_height(root, parent);
parent = rotate_imbalance(root, parent);
parent = node_ptr(root, parent->parent);
}
}
static struct scoutfs_avl_node *avl_successor(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
node = node_ptr(root, node->right);
while (node->left)
node = node_ptr(root, node->left);
return node;
}
/*
* Find a node's next successor and then swap the positions of the two
* nodes with each other in the tree. This is only tricky because the
* successor can be a direct child of the node and if we weren't careful
* we'd be modifying each of the nodes through the pointers between
* them.
*/
static void swap_with_successor(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
struct scoutfs_avl_node *succ = avl_successor(root, node);
struct scoutfs_avl_node *succ_par = node_ptr(root, succ->parent);
struct scoutfs_avl_node *succ_right = node_ptr(root, succ->right);
struct scoutfs_avl_node *parent;
struct scoutfs_avl_node *left;
struct scoutfs_avl_node *right;
/* Link old node's parent and left child with the successor */
succ->parent = node->parent;
parent = node_ptr(root, succ->parent);
set_parent_left_right(root, parent, node, succ);
succ->left = node->left;
left = node_ptr(root, succ->left);
if (left)
left->parent = node_off(root, succ);
/*
* Link the old node's right with successor and the old
* successor's parent with the node, they could have pointed to
* each other.
*/
if (succ_par == node) {
succ->right = node_off(root, node);
node->parent = node_off(root, succ);
} else {
succ->right = node->right;
right = node_ptr(root, succ->right);
if (right)
right->parent = node_off(root, succ);
set_parent_left_right(root, succ_par, succ, node);
node->parent = node_off(root, succ_par);
}
/* Link the old successor's right with the node, it can't have left */
node->right = node_off(root, succ_right);
if (succ_right)
succ_right->parent = node_off(root, node);
node->left = 0;
swap(node->height, succ->height);
}
void scoutfs_avl_delete(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
struct scoutfs_avl_node *parent;
struct scoutfs_avl_node *child;
if (node->left && node->right)
swap_with_successor(root, node);
parent = node_ptr(root, node->parent);
child = node_ptr(root, node->left ?: node->right);
set_parent_left_right(root, parent, node, child);
if (child)
child->parent = node->parent;
while (parent) {
set_height(root, parent);
parent = rotate_imbalance(root, parent);
parent = node_ptr(root, parent->parent);
}
}
/*
* Move the contents of a node to a new node location in memory. The
* logical position of the node in the tree does not change.
*/
void scoutfs_avl_relocate(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *to,
struct scoutfs_avl_node *from)
{
struct scoutfs_avl_node *parent = node_ptr(root, from->parent);
struct scoutfs_avl_node *left = node_ptr(root, from->left);
struct scoutfs_avl_node *right = node_ptr(root, from->right);
set_parent_left_right(root, parent, from, to);
to->parent = from->parent;
to->left = from->left;
if (left)
left->parent = node_off(root, to);
to->right = from->right;
if (right)
right->parent = node_off(root, to);
to->height = from->height;
}

30
kmod/src/avl.h Normal file
View File

@@ -0,0 +1,30 @@
#ifndef _SCOUTFS_AVL_H_
#define _SCOUTFS_AVL_H_
#include "format.h"
typedef int (*scoutfs_avl_compare_t)(void *arg,
struct scoutfs_avl_node *node);
struct scoutfs_avl_node *
scoutfs_avl_search(struct scoutfs_avl_root *root,
scoutfs_avl_compare_t compare, void *arg, int *cmp_ret,
struct scoutfs_avl_node **par,
struct scoutfs_avl_node **next,
struct scoutfs_avl_node **prev);
struct scoutfs_avl_node *scoutfs_avl_first(struct scoutfs_avl_root *root);
struct scoutfs_avl_node *scoutfs_avl_last(struct scoutfs_avl_root *root);
struct scoutfs_avl_node *scoutfs_avl_next(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node);
struct scoutfs_avl_node *scoutfs_avl_prev(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node);
void scoutfs_avl_insert(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *parent,
struct scoutfs_avl_node *node, int cmp);
void scoutfs_avl_delete(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node);
void scoutfs_avl_relocate(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *to,
struct scoutfs_avl_node *from);
#endif

File diff suppressed because it is too large Load Diff

View File

@@ -10,33 +10,19 @@ struct scoutfs_block_writer {
struct scoutfs_block {
u64 blkno;
void *data;
void *priv;
};
__le32 scoutfs_block_calc_crc(struct scoutfs_block_header *hdr);
bool scoutfs_block_valid_crc(struct scoutfs_block_header *hdr);
bool scoutfs_block_valid_ref(struct super_block *sb,
struct scoutfs_block_header *hdr,
__le64 seq, __le64 blkno);
bool scoutfs_block_tas_visited(struct super_block *sb,
struct scoutfs_block *bl);
void scoutfs_block_clear_visited(struct super_block *sb,
struct scoutfs_block *bl);
struct scoutfs_block *scoutfs_block_create(struct super_block *sb, u64 blkno);
struct scoutfs_block *scoutfs_block_read(struct super_block *sb, u64 blkno);
void scoutfs_block_invalidate(struct super_block *sb, struct scoutfs_block *bl);
bool scoutfs_block_consistent_ref(struct super_block *sb,
struct scoutfs_block *bl,
__le64 seq, __le64 blkno, u32 magic);
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret);
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl);
void scoutfs_block_writer_init(struct super_block *sb,
struct scoutfs_block_writer *wri);
void scoutfs_block_writer_mark_dirty(struct super_block *sb,
struct scoutfs_block_writer *wri,
struct scoutfs_block *bl);
bool scoutfs_block_writer_is_dirty(struct super_block *sb,
struct scoutfs_block *bl);
int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_block_ref *ref,
u32 magic, struct scoutfs_block **bl_ret,
u64 dirty_blkno, u64 *ref_blkno);
int scoutfs_block_writer_write(struct super_block *sb,
struct scoutfs_block_writer *wri);
void scoutfs_block_writer_forget_all(struct super_block *sb,
@@ -44,18 +30,17 @@ void scoutfs_block_writer_forget_all(struct super_block *sb,
void scoutfs_block_writer_forget(struct super_block *sb,
struct scoutfs_block_writer *wri,
struct scoutfs_block *bl);
void scoutfs_block_move(struct super_block *sb,
struct scoutfs_block_writer *wri,
struct scoutfs_block *bl, u64 blkno);
bool scoutfs_block_writer_has_dirty(struct super_block *sb,
struct scoutfs_block_writer *wri);
u64 scoutfs_block_writer_dirty_bytes(struct super_block *sb,
struct scoutfs_block_writer *wri);
int scoutfs_block_read_sm(struct super_block *sb, u64 blkno,
int scoutfs_block_read_sm(struct super_block *sb,
struct block_device *bdev, u64 blkno,
struct scoutfs_block_header *hdr, size_t len,
__le32 *blk_crc);
int scoutfs_block_write_sm(struct super_block *sb, u64 blkno,
int scoutfs_block_write_sm(struct super_block *sb,
struct block_device *bdev, u64 blkno,
struct scoutfs_block_header *hdr, size_t len);
int scoutfs_block_setup(struct super_block *sb);

File diff suppressed because it is too large Load Diff

View File

@@ -3,15 +3,14 @@
#include <linux/uio.h>
struct scoutfs_radix_allocator;
struct scoutfs_alloc;
struct scoutfs_block_writer;
struct scoutfs_block;
struct scoutfs_btree_item_ref {
struct super_block *sb;
struct scoutfs_block *bl;
void *key;
unsigned key_len;
struct scoutfs_key *key;
void *val;
unsigned val_len;
};
@@ -19,50 +18,69 @@ struct scoutfs_btree_item_ref {
#define SCOUTFS_BTREE_ITEM_REF(name) \
struct scoutfs_btree_item_ref name = {NULL,}
/* caller gives an item to the callback */
typedef int (*scoutfs_btree_item_cb)(struct super_block *sb,
struct scoutfs_key *key,
void *val, int val_len, void *arg);
int scoutfs_btree_lookup(struct super_block *sb, struct scoutfs_btree_root *root,
void *key, unsigned key_len,
/* simple singly-linked list of items */
struct scoutfs_btree_item_list {
struct scoutfs_btree_item_list *next;
struct scoutfs_key key;
int val_len;
u8 val[0];
};
int scoutfs_btree_lookup(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_btree_item_ref *iref);
int scoutfs_btree_insert(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
void *key, unsigned key_len,
struct scoutfs_key *key,
void *val, unsigned val_len);
int scoutfs_btree_update(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
void *key, unsigned key_len,
struct scoutfs_key *key,
void *val, unsigned val_len);
int scoutfs_btree_force(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
void *key, unsigned key_len,
struct scoutfs_key *key,
void *val, unsigned val_len);
int scoutfs_btree_delete(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
void *key, unsigned key_len);
struct scoutfs_key *key);
int scoutfs_btree_next(struct super_block *sb, struct scoutfs_btree_root *root,
void *key, unsigned key_len,
struct scoutfs_key *key,
struct scoutfs_btree_item_ref *iref);
int scoutfs_btree_after(struct super_block *sb, struct scoutfs_btree_root *root,
void *key, unsigned key_len,
struct scoutfs_btree_item_ref *iref);
int scoutfs_btree_prev(struct super_block *sb, struct scoutfs_btree_root *root,
void *key, unsigned key_len,
struct scoutfs_key *key,
struct scoutfs_btree_item_ref *iref);
int scoutfs_btree_before(struct super_block *sb, struct scoutfs_btree_root *root,
void *key, unsigned key_len,
struct scoutfs_btree_item_ref *iref);
int scoutfs_btree_dirty(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
void *key, unsigned key_len);
struct scoutfs_key *key);
int scoutfs_btree_read_items(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_btree_item_cb cb, void *arg);
int scoutfs_btree_insert_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_btree_item_list *lst);
void scoutfs_btree_put_iref(struct scoutfs_btree_item_ref *iref);

View File

@@ -34,13 +34,10 @@
/*
* The client is responsible for maintaining a connection to the server.
* This includes managing quorum elections that determine which client
* should run the server that all the clients connect to.
*/
#define CLIENT_CONNECT_DELAY_MS (MSEC_PER_SEC / 10)
#define CLIENT_CONNECT_TIMEOUT_MS (1 * MSEC_PER_SEC)
#define CLIENT_QUORUM_TIMEOUT_MS (5 * MSEC_PER_SEC)
struct client_info {
struct super_block *sb;
@@ -52,7 +49,6 @@ struct client_info {
struct delayed_work connect_dwork;
u64 server_term;
u64 greeting_umb;
bool sending_farewell;
int farewell_error;
@@ -108,19 +104,27 @@ int scoutfs_client_commit_log_trees(struct super_block *sb,
lt, sizeof(*lt), NULL, 0);
}
int scoutfs_client_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_GET_ROOTS,
NULL, 0, roots, sizeof(*roots));
}
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
__le64 before = cpu_to_le64p(seq);
__le64 after;
__le64 leseq;
int ret;
ret = scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
&before, sizeof(before),
&after, sizeof(after));
NULL, 0, &leseq, sizeof(leseq));
if (ret == 0)
*seq = le64_to_cpu(after);
*seq = le64_to_cpu(leseq);
return ret;
}
@@ -140,17 +144,6 @@ int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq)
return ret;
}
int scoutfs_client_statfs(struct super_block *sb,
struct scoutfs_net_statfs *nstatfs)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_STATFS, NULL, 0,
nstatfs,
sizeof(struct scoutfs_net_statfs));
}
/* process an incoming grant response from the server */
static int client_lock_response(struct super_block *sb,
struct scoutfs_net_connection *conn,
@@ -200,6 +193,28 @@ int scoutfs_client_lock_recover_response(struct super_block *sb, u64 net_id,
net_id, 0, nlr, bytes);
}
/* Find srch files that need to be compacted. */
int scoutfs_client_srch_get_compact(struct super_block *sb,
struct scoutfs_srch_compact *sc)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_SRCH_GET_COMPACT,
NULL, 0, sc, sizeof(*sc));
}
/* Commit the result of a srch file compaction. */
int scoutfs_client_srch_commit_compact(struct super_block *sb,
struct scoutfs_srch_compact *res)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_SRCH_COMMIT_COMPACT,
res, sizeof(*res), NULL, 0);
}
/* The client is receiving a invalidation request from the server */
static int client_lock(struct super_block *sb,
struct scoutfs_net_connection *conn, u8 cmd, u64 id,
@@ -261,10 +276,10 @@ static int client_greeting(struct super_block *sb,
goto out;
}
if (gr->format_hash != super->format_hash) {
if (gr->version != super->version) {
scoutfs_warn(sb, "server sent format 0x%llx, client has 0x%llx",
le64_to_cpu(gr->format_hash),
le64_to_cpu(super->format_hash));
le64_to_cpu(gr->version),
le64_to_cpu(super->version));
ret = -EINVAL;
goto out;
}
@@ -273,52 +288,30 @@ static int client_greeting(struct super_block *sb,
scoutfs_net_client_greeting(sb, conn, new_server);
client->server_term = le64_to_cpu(gr->server_term);
client->greeting_umb = le64_to_cpu(gr->unmount_barrier);
ret = 0;
out:
return ret;
}
/*
* This work is responsible for maintaining a connection from the client
* to the server. It's queued on mount and disconnect and we requeue
* the work if the work fails and we're not shutting down.
* The client is deciding if it needs to keep trying to reconnect to
* have its farewell request processed. The server removes our mounted
* client item last so that if we don't see it we know the server has
* processed our farewell and we don't need to reconnect, we can unmount
* safely.
*
* In the typical case a mount reads the super blocks and finds the
* address of the currently running server and connects to it.
* Non-voting clients who can't connect will keep trying alternating
* reading the address and getting connect timeouts.
*
* Voting mounts will try to elect a leader if they can't connect to the
* server. When a quorum can't connect and are able to elect a leader
* then a new server is started. The new server will write its address
* in the super and everyone will be able to connect.
*
* There's a tricky bit of coordination required to safely unmount.
* Clients need to tell the server that they won't be coming back with a
* farewell request. Once a client receives its farewell response it
* can exit. But a majority of clients need to stick around to elect a
* server to process all their farewell requests. This is coordinated
* by having the greeting tell the server that a client is a voter. The
* server then holds on to farewell requests from voters until only
* requests from the final quorum remain. These farewell responses are
* only sent after updating an unmount barrier in the super to indicate
* to the final quorum that they can safely exit without having received
* a farewell response over the network.
* This is peeking at btree blocks that the server could be actively
* freeing with cow updates so it can see stale blocks, we just return
* the error and we'll retry eventually as the connection times out.
*/
static void scoutfs_client_connect_worker(struct work_struct *work)
static int lookup_mounted_client_item(struct super_block *sb, u64 rid)
{
struct client_info *client = container_of(work, struct client_info,
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = NULL;
struct mount_options *opts = &sbi->opts;
const bool am_voter = opts->server_addr.sin_addr.s_addr != 0;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
ktime_t timeout_abs;
u64 elected_term;
struct scoutfs_key key = {
.sk_zone = SCOUTFS_MOUNTED_CLIENT_ZONE,
.skmc_rid = cpu_to_le64(rid),
};
struct scoutfs_super_block *super;
SCOUTFS_BTREE_ITEM_REF(iref);
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
@@ -331,57 +324,77 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
if (ret)
goto out;
/* can safely unmount if we see that server processed our farewell */
if (am_voter && client->sending_farewell &&
(le64_to_cpu(super->unmount_barrier) > client->greeting_umb)) {
ret = scoutfs_btree_lookup(sb, &super->mounted_clients, &key, &iref);
if (ret == 0) {
scoutfs_btree_put_iref(&iref);
ret = 1;
}
if (ret == -ENOENT)
ret = 0;
kfree(super);
out:
return ret;
}
/*
* This work is responsible for maintaining a connection from the client
* to the server. It's queued on mount and disconnect and we requeue
* the work if the work fails and we're not shutting down.
*
* We ask quorum for an address to try and connect to. If there isn't
* one, or it fails, we back off a bit before trying again.
*
* There's a tricky bit of coordination required to safely unmount.
* Clients need to tell the server that they won't be coming back with a
* farewell request. Once the server processes a farewell request from
* the client it can forget the client. If the connection is broken
* before the client gets the farewell response it doesn't want to
reconnect to send it again.  Instead the client can read the metadata
* device to check for the lack of an item which indicates that the
* server has processed its farewell.
*/
static void scoutfs_client_connect_worker(struct work_struct *work)
{
struct client_info *client = container_of(work, struct client_info,
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct mount_options *opts = &sbi->opts;
const bool am_quorum = opts->quorum_slot_nr >= 0;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
int ret;
/* can unmount once server farewell handling removes our item */
if (client->sending_farewell &&
lookup_mounted_client_item(sb, sbi->rid) == 0) {
client->farewell_error = 0;
complete(&client->farewell_comp);
ret = 0;
goto out;
}
/* try to connect to the super's server address */
scoutfs_addr_to_sin(&sin, &super->server_addr);
if (sin.sin_addr.s_addr != 0 && sin.sin_port != 0)
ret = scoutfs_net_connect(sb, client->conn, &sin,
CLIENT_CONNECT_TIMEOUT_MS);
else
ret = -ENOTCONN;
/* voters try to elect a leader if they couldn't connect */
if (ret < 0) {
/* non-voters will keep retrying */
if (!am_voter)
goto out;
/* make sure local server isn't writing super during votes */
scoutfs_server_stop(sb);
timeout_abs = ktime_add_ms(ktime_get(),
CLIENT_QUORUM_TIMEOUT_MS);
ret = scoutfs_quorum_election(sb, timeout_abs,
le64_to_cpu(super->quorum_server_term),
&elected_term);
/* start the server if we were asked to */
if (elected_term > 0)
ret = scoutfs_server_start(sb, &opts->server_addr,
elected_term);
ret = -ENOTCONN;
ret = scoutfs_quorum_server_sin(sb, &sin);
if (ret < 0)
goto out;
ret = scoutfs_net_connect(sb, client->conn, &sin,
CLIENT_CONNECT_TIMEOUT_MS);
if (ret < 0)
goto out;
}
/* send a greeting to verify endpoints of each connection */
greet.fsid = super->hdr.fsid;
greet.format_hash = super->format_hash;
greet.version = super->version;
greet.server_term = cpu_to_le64(client->server_term);
greet.unmount_barrier = cpu_to_le64(client->greeting_umb);
greet.rid = cpu_to_le64(sbi->rid);
greet.flags = 0;
if (client->sending_farewell)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_FAREWELL);
if (am_voter)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_VOTER);
if (am_quorum)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_QUORUM);
ret = scoutfs_net_submit_request(sb, client->conn,
SCOUTFS_NET_CMD_GREETING,
@@ -390,7 +403,6 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
if (ret)
scoutfs_net_shutdown(sb, client->conn);
out:
kfree(super);
/* always have a small delay before retrying to avoid storms */
if (ret && !atomic_read(&client->shutting_down))

View File

@@ -7,17 +7,21 @@ int scoutfs_client_get_log_trees(struct super_block *sb,
struct scoutfs_log_trees *lt);
int scoutfs_client_commit_log_trees(struct super_block *sb,
struct scoutfs_log_trees *lt);
int scoutfs_client_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots);
u64 *scoutfs_client_bulk_alloc(struct super_block *sb);
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_statfs(struct super_block *sb,
struct scoutfs_net_statfs *nstatfs);
int scoutfs_client_lock_request(struct super_block *sb,
struct scoutfs_net_lock *nl);
int scoutfs_client_lock_response(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl);
int scoutfs_client_lock_recover_response(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock_recover *nlr);
int scoutfs_client_srch_get_compact(struct super_block *sb,
struct scoutfs_srch_compact *sc);
int scoutfs_client_srch_commit_compact(struct super_block *sb,
struct scoutfs_srch_compact *res);
int scoutfs_client_setup(struct super_block *sb);
void scoutfs_client_destroy(struct super_block *sb);

View File

@@ -1,319 +0,0 @@
#ifndef _SCOUTFS_COUNT_H_
#define _SCOUTFS_COUNT_H_
/*
* Our estimate of the space consumed while dirtying items is based on
* the number of items and the size of their values.
*
* The estimate is still a read-only input to entering the transaction.
* We'd like to use it as a clean rhs arg to hold_trans. We define SIC_
* functions which return the count struct. This lets us have a single
* arg and avoid bugs in initializing and passing in struct pointers
* from callers. The internal __count functions are used to compose an
* estimate out of the sets of items it manipulates. We program in much
* clearer C instead of in the preprocessor.
*
* Compilers are able to collapse the inlines into constants for the
* constant estimates.
*/
struct scoutfs_item_count {
signed items;
signed vals;
};
/* The caller knows exactly what they're doing. */
static inline const struct scoutfs_item_count SIC_EXACT(signed items,
signed vals)
{
struct scoutfs_item_count cnt = {
.items = items,
.vals = vals,
};
return cnt;
}
/*
* Allocating an inode creates a new set of indexed items.
*/
static inline void __count_alloc_inode(struct scoutfs_item_count *cnt)
{
const int nr_indices = SCOUTFS_INODE_INDEX_NR;
cnt->items += 1 + nr_indices;
cnt->vals += sizeof(struct scoutfs_inode);
}
/*
* Dirtying an inode dirties the inode item and can delete and create
* the full set of indexed items.
*/
static inline void __count_dirty_inode(struct scoutfs_item_count *cnt)
{
const int nr_indices = 2 * SCOUTFS_INODE_INDEX_NR;
cnt->items += 1 + nr_indices;
cnt->vals += sizeof(struct scoutfs_inode);
}
static inline const struct scoutfs_item_count SIC_ALLOC_INODE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_alloc_inode(&cnt);
return cnt;
}
static inline const struct scoutfs_item_count SIC_DIRTY_INODE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_dirty_inode(&cnt);
return cnt;
}
/*
* Directory entries are stored in three items.
*/
static inline void __count_dirents(struct scoutfs_item_count *cnt,
unsigned name_len)
{
cnt->items += 3;
cnt->vals += 3 * offsetof(struct scoutfs_dirent, name[name_len]);
}
static inline void __count_sym_target(struct scoutfs_item_count *cnt,
unsigned size)
{
unsigned nr = DIV_ROUND_UP(size, SCOUTFS_MAX_VAL_SIZE);
cnt->items += nr;
cnt->vals += size;
}
static inline void __count_orphan(struct scoutfs_item_count *cnt)
{
cnt->items += 1;
}
static inline void __count_mknod(struct scoutfs_item_count *cnt,
unsigned name_len)
{
__count_alloc_inode(cnt);
__count_dirents(cnt, name_len);
__count_dirty_inode(cnt);
}
static inline const struct scoutfs_item_count SIC_MKNOD(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_mknod(&cnt, name_len);
return cnt;
}
/*
* Dropping the inode deletes all its items. Potentially enormous numbers
* of items (data mapping, xattrs) are deleted in their own transactions.
*/
static inline const struct scoutfs_item_count SIC_DROP_INODE(int mode,
u64 size)
{
struct scoutfs_item_count cnt = {0,};
if (S_ISLNK(mode))
__count_sym_target(&cnt, size);
__count_dirty_inode(&cnt);
__count_orphan(&cnt);
cnt.vals = 0;
return cnt;
}
static inline const struct scoutfs_item_count SIC_LINK(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_dirents(&cnt, name_len);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
return cnt;
}
/*
* Unlink can add orphan items.
*/
static inline const struct scoutfs_item_count SIC_UNLINK(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_dirents(&cnt, name_len);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_orphan(&cnt);
return cnt;
}
static inline const struct scoutfs_item_count SIC_SYMLINK(unsigned name_len,
unsigned size)
{
struct scoutfs_item_count cnt = {0,};
__count_mknod(&cnt, name_len);
__count_sym_target(&cnt, size);
return cnt;
}
/*
* This assumes the worst case of a rename between directories that
* unlinks an existing target. That'll be worse than the common case
* by a few hundred bytes.
*/
static inline const struct scoutfs_item_count SIC_RENAME(unsigned old_len,
unsigned new_len)
{
struct scoutfs_item_count cnt = {0,};
/* dirty dirs and inodes */
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
/* unlink old and new, link new */
__count_dirents(&cnt, old_len);
__count_dirents(&cnt, new_len);
__count_dirents(&cnt, new_len);
/* orphan the existing target */
__count_orphan(&cnt);
return cnt;
}
/*
* Creating an xattr results in a dirty set of items with values that
* store the xattr header, name, and value. There's always at least one
* item with the header and name. Any previously existing items are
* deleted which dirties their key but removes their value. The two
* sets of items are indexed by different ids so their items don't
* overlap. If the xattr name is indexed then we modify one xattr index
* item.
*/
static inline const struct scoutfs_item_count SIC_XATTR_SET(unsigned old_parts,
bool creating,
unsigned name_len,
unsigned size,
bool indexed)
{
struct scoutfs_item_count cnt = {0,};
unsigned int new_parts;
__count_dirty_inode(&cnt);
if (old_parts)
cnt.items += old_parts;
if (indexed)
cnt.items++;
if (creating) {
new_parts = SCOUTFS_XATTR_NR_PARTS(name_len, size);
cnt.items += new_parts;
cnt.vals += sizeof(struct scoutfs_xattr) + name_len + size;
}
return cnt;
}
/*
* write_begin can have to allocate all the blocks in the page and can
* have to add a big allocation from the server to do so:
* - merge added free extents from the server
* - remove a free extent per block
* - remove an offline extent for every other block
* - add a file extent per block
*/
static inline const struct scoutfs_item_count SIC_WRITE_BEGIN(void)
{
struct scoutfs_item_count cnt = {0,};
unsigned nr_free = (1 + SCOUTFS_BLOCKS_PER_PAGE) * 3;
unsigned nr_file = (DIV_ROUND_UP(SCOUTFS_BLOCKS_PER_PAGE, 2) +
SCOUTFS_BLOCKS_PER_PAGE) * 3;
__count_dirty_inode(&cnt);
cnt.items += nr_free + nr_file;
cnt.vals += nr_file;
return cnt;
}
/*
* Truncating an extent can:
* - delete existing file extent,
* - create two surrounding file extents,
* - add an offline file extent,
* - delete two existing free extents
* - create a merged free extent
*/
static inline const struct scoutfs_item_count
SIC_TRUNC_EXTENT(struct inode *inode)
{
struct scoutfs_item_count cnt = {0,};
unsigned int nr_file = 1 + 2 + 1;
unsigned int nr_free = (2 + 1) * 2;
if (inode)
__count_dirty_inode(&cnt);
cnt.items += nr_file + nr_free;
cnt.vals += nr_file;
return cnt;
}
/*
* Fallocating an extent can, at most:
* - allocate from the server: delete two free and insert merged
* - free an allocated extent: delete one and create two split
* - remove an unallocated file extent: delete one and create two split
* - add a fallocated file extent: delete two and insert one merged
*/
static inline const struct scoutfs_item_count SIC_FALLOCATE_ONE(void)
{
struct scoutfs_item_count cnt = {0,};
unsigned int nr_free = ((1 + 2) * 2) * 2;
unsigned int nr_file = (1 + 2) * 2;
__count_dirty_inode(&cnt);
cnt.items += nr_free + nr_file;
cnt.vals += nr_file;
return cnt;
}
/*
* ioc_setattr_more can dirty the inode and add a single offline extent.
*/
static inline const struct scoutfs_item_count SIC_SETATTR_MORE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_dirty_inode(&cnt);
cnt.items++;
return cnt;
}
#endif
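For context, a short sketch of how these estimates were composed and consumed before this change; SIC_EXAMPLE_ADD_ENTRY and its caller are hypothetical, but the __count helpers and the count-taking scoutfs_hold_trans() call follow the old pattern visible elsewhere in this diff.

/*
 * A hypothetical estimate for an operation that adds one directory
 * entry and dirties one inode, composed from the __count helpers above.
 */
static inline const struct scoutfs_item_count SIC_EXAMPLE_ADD_ENTRY(unsigned name_len)
{
	struct scoutfs_item_count cnt = {0,};

	__count_dirents(&cnt, name_len);
	__count_dirty_inode(&cnt);

	return cnt;
}

A caller would then enter the transaction with the estimate, e.g.
ret = scoutfs_hold_trans(sb, SIC_EXAMPLE_ADD_ENTRY(name_len));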


@@ -12,18 +12,45 @@
* other places by this macro. Don't forget to update LAST_COUNTER.
*/
#define EXPAND_EACH_COUNTER \
EXPAND_COUNTER(block_cache_access) \
EXPAND_COUNTER(alloc_alloc_data) \
EXPAND_COUNTER(alloc_alloc_meta) \
EXPAND_COUNTER(alloc_free_data) \
EXPAND_COUNTER(alloc_free_meta) \
EXPAND_COUNTER(alloc_list_avail_lo) \
EXPAND_COUNTER(alloc_list_freed_hi) \
EXPAND_COUNTER(alloc_move) \
EXPAND_COUNTER(alloc_moved_extent) \
EXPAND_COUNTER(alloc_stale_list_block) \
EXPAND_COUNTER(block_cache_access_update) \
EXPAND_COUNTER(block_cache_alloc_failure) \
EXPAND_COUNTER(block_cache_alloc_page_order) \
EXPAND_COUNTER(block_cache_alloc_virt) \
EXPAND_COUNTER(block_cache_end_io_error) \
EXPAND_COUNTER(block_cache_forget) \
EXPAND_COUNTER(block_cache_free) \
EXPAND_COUNTER(block_cache_invalidate) \
EXPAND_COUNTER(block_cache_lru_move) \
EXPAND_COUNTER(block_cache_free_work) \
EXPAND_COUNTER(block_cache_remove_stale) \
EXPAND_COUNTER(block_cache_shrink) \
EXPAND_COUNTER(btree_read_error) \
EXPAND_COUNTER(block_cache_shrink_next) \
EXPAND_COUNTER(block_cache_shrink_recent) \
EXPAND_COUNTER(block_cache_shrink_remove) \
EXPAND_COUNTER(block_cache_shrink_restart) \
EXPAND_COUNTER(btree_compact_values) \
EXPAND_COUNTER(btree_compact_values_enomem) \
EXPAND_COUNTER(btree_delete) \
EXPAND_COUNTER(btree_dirty) \
EXPAND_COUNTER(btree_force) \
EXPAND_COUNTER(btree_join) \
EXPAND_COUNTER(btree_insert) \
EXPAND_COUNTER(btree_leaf_item_hash_search) \
EXPAND_COUNTER(btree_lookup) \
EXPAND_COUNTER(btree_next) \
EXPAND_COUNTER(btree_prev) \
EXPAND_COUNTER(btree_split) \
EXPAND_COUNTER(btree_stale_read) \
EXPAND_COUNTER(btree_update) \
EXPAND_COUNTER(btree_walk) \
EXPAND_COUNTER(btree_walk_restart) \
EXPAND_COUNTER(client_farewell_error) \
EXPAND_COUNTER(corrupt_btree_block_level) \
EXPAND_COUNTER(corrupt_btree_no_child_ref) \
@@ -34,6 +61,8 @@
EXPAND_COUNTER(corrupt_symlink_inode_size) \
EXPAND_COUNTER(corrupt_symlink_missing_item) \
EXPAND_COUNTER(corrupt_symlink_not_null_term) \
EXPAND_COUNTER(data_fallocate_enobufs_retry) \
EXPAND_COUNTER(data_write_begin_enobufs_retry) \
EXPAND_COUNTER(dentry_revalidate_error) \
EXPAND_COUNTER(dentry_revalidate_invalid) \
EXPAND_COUNTER(dentry_revalidate_locked) \
@@ -42,25 +71,66 @@
EXPAND_COUNTER(dentry_revalidate_root) \
EXPAND_COUNTER(dentry_revalidate_valid) \
EXPAND_COUNTER(dir_backref_excessive_retries) \
EXPAND_COUNTER(ext_op_insert) \
EXPAND_COUNTER(ext_op_next) \
EXPAND_COUNTER(ext_op_remove) \
EXPAND_COUNTER(forest_bloom_fail) \
EXPAND_COUNTER(forest_bloom_pass) \
EXPAND_COUNTER(forest_bloom_stale) \
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
EXPAND_COUNTER(item_clear_dirty) \
EXPAND_COUNTER(item_create) \
EXPAND_COUNTER(item_delete) \
EXPAND_COUNTER(item_dirty) \
EXPAND_COUNTER(item_invalidate) \
EXPAND_COUNTER(item_invalidate_page) \
EXPAND_COUNTER(item_lookup) \
EXPAND_COUNTER(item_mark_dirty) \
EXPAND_COUNTER(item_next) \
EXPAND_COUNTER(item_page_accessed) \
EXPAND_COUNTER(item_page_alloc) \
EXPAND_COUNTER(item_page_clear_dirty) \
EXPAND_COUNTER(item_page_compact) \
EXPAND_COUNTER(item_page_free) \
EXPAND_COUNTER(item_page_lru_add) \
EXPAND_COUNTER(item_page_lru_remove) \
EXPAND_COUNTER(item_page_mark_dirty) \
EXPAND_COUNTER(item_page_rbtree_walk) \
EXPAND_COUNTER(item_page_split) \
EXPAND_COUNTER(item_pcpu_add_replaced) \
EXPAND_COUNTER(item_pcpu_page_hit) \
EXPAND_COUNTER(item_pcpu_page_miss) \
EXPAND_COUNTER(item_pcpu_page_miss_keys) \
EXPAND_COUNTER(item_read_pages_split) \
EXPAND_COUNTER(item_shrink_page) \
EXPAND_COUNTER(item_shrink_page_dirty) \
EXPAND_COUNTER(item_shrink_page_reader) \
EXPAND_COUNTER(item_shrink_page_trylock) \
EXPAND_COUNTER(item_update) \
EXPAND_COUNTER(item_write_dirty) \
EXPAND_COUNTER(lock_alloc) \
EXPAND_COUNTER(lock_free) \
EXPAND_COUNTER(lock_grace_elapsed) \
EXPAND_COUNTER(lock_grace_extended) \
EXPAND_COUNTER(lock_grace_set) \
EXPAND_COUNTER(lock_grace_wait) \
EXPAND_COUNTER(lock_grant_request) \
EXPAND_COUNTER(lock_grant_response) \
EXPAND_COUNTER(lock_invalidate_commit) \
EXPAND_COUNTER(lock_grant_work) \
EXPAND_COUNTER(lock_invalidate_coverage) \
EXPAND_COUNTER(lock_invalidate_inode) \
EXPAND_COUNTER(lock_invalidate_request) \
EXPAND_COUNTER(lock_invalidate_response) \
EXPAND_COUNTER(lock_invalidate_sync) \
EXPAND_COUNTER(lock_invalidate_work) \
EXPAND_COUNTER(lock_lock) \
EXPAND_COUNTER(lock_lock_error) \
EXPAND_COUNTER(lock_nonblock_eagain) \
EXPAND_COUNTER(lock_recover_request) \
EXPAND_COUNTER(lock_shrink_queued) \
EXPAND_COUNTER(lock_shrink_request_aborted) \
EXPAND_COUNTER(lock_shrink_attempted) \
EXPAND_COUNTER(lock_shrink_aborted) \
EXPAND_COUNTER(lock_shrink_work) \
EXPAND_COUNTER(lock_unlock) \
EXPAND_COUNTER(lock_wait) \
EXPAND_COUNTER(net_dropped_response) \
@@ -73,29 +143,51 @@
EXPAND_COUNTER(net_recv_invalid_message) \
EXPAND_COUNTER(net_recv_messages) \
EXPAND_COUNTER(net_unknown_request) \
EXPAND_COUNTER(quorum_cycle) \
EXPAND_COUNTER(quorum_elected_leader) \
EXPAND_COUNTER(quorum_election_timeout) \
EXPAND_COUNTER(quorum_failure) \
EXPAND_COUNTER(quorum_read_block) \
EXPAND_COUNTER(quorum_read_block_error) \
EXPAND_COUNTER(quorum_elected) \
EXPAND_COUNTER(quorum_fence_error) \
EXPAND_COUNTER(quorum_fence_leader) \
EXPAND_COUNTER(quorum_read_invalid_block) \
EXPAND_COUNTER(quorum_saw_super_leader) \
EXPAND_COUNTER(quorum_timedout) \
EXPAND_COUNTER(quorum_write_block) \
EXPAND_COUNTER(quorum_write_block_error) \
EXPAND_COUNTER(quorum_fenced) \
EXPAND_COUNTER(radix_enospc_data) \
EXPAND_COUNTER(radix_enospc_paths) \
EXPAND_COUNTER(radix_enospc_synth) \
EXPAND_COUNTER(quorum_recv_error) \
EXPAND_COUNTER(quorum_recv_heartbeat) \
EXPAND_COUNTER(quorum_recv_invalid) \
EXPAND_COUNTER(quorum_recv_resignation) \
EXPAND_COUNTER(quorum_recv_vote) \
EXPAND_COUNTER(quorum_send_heartbeat) \
EXPAND_COUNTER(quorum_send_resignation) \
EXPAND_COUNTER(quorum_send_request) \
EXPAND_COUNTER(quorum_send_vote) \
EXPAND_COUNTER(quorum_server_shutdown) \
EXPAND_COUNTER(quorum_term_follower) \
EXPAND_COUNTER(server_commit_hold) \
EXPAND_COUNTER(server_commit_queue) \
EXPAND_COUNTER(server_commit_worker) \
EXPAND_COUNTER(srch_add_entry) \
EXPAND_COUNTER(srch_compact_dirty_block) \
EXPAND_COUNTER(srch_compact_entry) \
EXPAND_COUNTER(srch_compact_flush) \
EXPAND_COUNTER(srch_compact_log_page) \
EXPAND_COUNTER(srch_compact_removed_entry) \
EXPAND_COUNTER(srch_rotate_log) \
EXPAND_COUNTER(srch_search_log) \
EXPAND_COUNTER(srch_search_log_block) \
EXPAND_COUNTER(srch_search_retry_empty) \
EXPAND_COUNTER(srch_search_sorted) \
EXPAND_COUNTER(srch_search_sorted_block) \
EXPAND_COUNTER(srch_search_stale_eio) \
EXPAND_COUNTER(srch_search_stale_retry) \
EXPAND_COUNTER(srch_search_xattrs) \
EXPAND_COUNTER(srch_read_stale) \
EXPAND_COUNTER(statfs) \
EXPAND_COUNTER(trans_commit_data_alloc_low) \
EXPAND_COUNTER(trans_commit_dirty_meta_full) \
EXPAND_COUNTER(trans_commit_fsync) \
EXPAND_COUNTER(trans_commit_full) \
EXPAND_COUNTER(trans_commit_meta_alloc_low) \
EXPAND_COUNTER(trans_commit_sync_fs) \
EXPAND_COUNTER(trans_commit_timer)
EXPAND_COUNTER(trans_commit_timer) \
EXPAND_COUNTER(trans_commit_written)
#define FIRST_COUNTER block_cache_access
#define LAST_COUNTER trans_commit_timer
#define FIRST_COUNTER alloc_alloc_data
#define LAST_COUNTER trans_commit_written
#undef EXPAND_COUNTER
#define EXPAND_COUNTER(which) struct percpu_counter which;
@@ -113,11 +205,21 @@ struct scoutfs_counters {
pcpu <= &SCOUTFS_SB(sb)->counters->LAST_COUNTER; \
pcpu++)
#define scoutfs_inc_counter(sb, which) \
percpu_counter_inc(&SCOUTFS_SB(sb)->counters->which)
/*
* We always read with _sum; we have no use for the shared count and
* certainly don't want to pay the cost of a shared lock to update it.
* The default batch of 32 makes counter increments show up significantly
* in profiles.
*/
#define SCOUTFS_PCPU_COUNTER_BATCH (1 << 30)
#define scoutfs_add_counter(sb, which, cnt) \
percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, cnt)
#define scoutfs_inc_counter(sb, which) \
__percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, 1, \
SCOUTFS_PCPU_COUNTER_BATCH)
#define scoutfs_add_counter(sb, which, cnt) \
__percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, cnt, \
SCOUTFS_PCPU_COUNTER_BATCH)
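/*
 * Illustrative only: because the batch is so large the shared count is
 * never kept up to date, so a hypothetical reader sums the per-cpu
 * deltas with percpu_counter_sum(), roughly like this.
 */
static inline s64 scoutfs_read_counter_example(struct super_block *sb)
{
	return percpu_counter_sum(&SCOUTFS_SB(sb)->counters->block_cache_shrink);
}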
void __init scoutfs_init_counters(void);
int scoutfs_setup_counters(struct super_block *sb);

File diff suppressed because it is too large


@@ -47,7 +47,7 @@ struct scoutfs_traced_extent {
extern const struct address_space_operations scoutfs_file_aops;
extern const struct file_operations scoutfs_file_fops;
struct scoutfs_radix_allocator;
struct scoutfs_alloc;
struct scoutfs_block_writer;
int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
@@ -58,6 +58,9 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len);
int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
struct scoutfs_lock *lock);
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
u64 byte_len, struct inode *to, u64 to_off, bool to_stage,
u64 data_version);
int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
u8 sef, u8 op, struct scoutfs_data_wait *ow,
@@ -77,11 +80,12 @@ int scoutfs_data_waiting(struct super_block *sb, u64 ino, u64 iblock,
unsigned int nr);
void scoutfs_data_init_btrees(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_log_trees *lt);
void scoutfs_data_get_btrees(struct super_block *sb,
struct scoutfs_log_trees *lt);
int scoutfs_data_prepare_commit(struct super_block *sb);
u64 scoutfs_data_alloc_free_bytes(struct super_block *sb);
int scoutfs_data_setup(struct super_block *sb);


@@ -13,7 +13,6 @@
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/crc32c.h>
#include <linux/uio.h>
#include <linux/xattr.h>
#include <linux/namei.h>
@@ -28,9 +27,9 @@
#include "super.h"
#include "trans.h"
#include "xattr.h"
#include "kvec.h"
#include "forest.h"
#include "item.h"
#include "lock.h"
#include "hash.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -79,7 +78,7 @@ static unsigned int mode_to_type(umode_t mode)
#undef S_SHIFT
}
static unsigned int dentry_type(unsigned int type)
static unsigned int dentry_type(enum scoutfs_dentry_type type)
{
static unsigned char types[] = {
[SCOUTFS_DT_FIFO] = DT_FIFO,
@@ -213,12 +212,44 @@ static struct scoutfs_dirent *alloc_dirent(unsigned int name_len)
return kmalloc(dirent_bytes(name_len), GFP_NOFS);
}
/*
* Test a bit number as though an array of bytes is a large len-bit
* big-endian value. nr 0 is the LSB of the final byte, nr (len - 1) is
* the MSB of the first byte.
*/
static int test_be_bytes_bit(int nr, const char *bytes, int len)
{
return bytes[(len - 1 - nr) >> 3] & (1 << (nr & 7));
}
/*
* Generate a 32bit "fingerprint" of the name by extracting 32 evenly
* distributed bits from the name. The intent is to have the sort order
* of the fingerprints reflect the memcmp() sort order of the names
* while mapping large names down to small fs keys.
*
* Names that are smaller than 32 bits are biased towards the high bits
* of the fingerprint so that the most significant bits of the fingerprints
* consistently reflect the initial characters of the names.
*/
static u32 dirent_name_fingerprint(const char *name, unsigned int name_len)
{
int name_bits = name_len * 8;
int skip = max(name_bits / 32, 1);
u32 fp = 0;
int f;
int n;
for (f = 31, n = name_bits - 1; f >= 0 && n >= 0; f--, n -= skip)
fp |= !!test_be_bytes_bit(n, name, name_bits) << f;
return fp;
}
static u64 dirent_name_hash(const char *name, unsigned int name_len)
{
unsigned int half = (name_len + 1) / 2;
return crc32c(~0, name, half) |
((u64)crc32c(~0, name + name_len - half, half) << 32);
return scoutfs_hash32(name, name_len) |
((u64)dirent_name_fingerprint(name, name_len) << 32);
}
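/*
 * Illustrative only, not part of the change: a hypothetical self-check
 * of the ordering property described above.  For single-byte names the
 * whole byte lands in the top bits of the fingerprint, so fingerprint
 * order matches memcmp() order of the names.
 */
static __maybe_unused void dirent_fingerprint_example(void)
{
	u32 a = dirent_name_fingerprint("a", 1);	/* 0x61000000 */
	u32 b = dirent_name_fingerprint("b", 1);	/* 0x62000000 */

	WARN_ON_ONCE(!(a < b));
}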
static u64 dirent_names_equal(const char *a_name, unsigned int a_len,
@@ -239,7 +270,6 @@ static int lookup_dirent(struct super_block *sb, u64 dir_ino, const char *name,
struct scoutfs_key last_key;
struct scoutfs_key key;
struct scoutfs_dirent *dent = NULL;
struct kvec val;
int ret;
dent = alloc_dirent(SCOUTFS_NAME_LEN);
@@ -250,10 +280,10 @@ static int lookup_dirent(struct super_block *sb, u64 dir_ino, const char *name,
init_dirent_key(&key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, 0);
init_dirent_key(&last_key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, U64_MAX);
kvec_init(&val, dent, dirent_bytes(SCOUTFS_NAME_LEN));
for (;;) {
ret = scoutfs_forest_next(sb, &key, &last_key, &val, lock);
ret = scoutfs_item_next(sb, &key, &last_key, dent,
dirent_bytes(SCOUTFS_NAME_LEN), lock);
if (ret < 0)
break;
@@ -433,7 +463,18 @@ out:
else
inode = scoutfs_iget(sb, ino);
return d_splice_alias(inode, dentry);
/*
* We can't splice dir aliases into the dcache.  Dir entries
* might have changed on other nodes, so our dcache could still
* contain stale aliases rather than having seen them moved by
* rename.  For dirs, we use d_materialise_unique to remove any existing
* aliases which must be stale. Our inode numbers aren't reused
* so inodes pointed to by entries can't change types.
*/
if (!IS_ERR_OR_NULL(inode) && S_ISDIR(inode->i_mode))
return d_materialise_unique(dentry, inode);
else
return d_splice_alias(inode, dentry);
}
/*
@@ -452,7 +493,6 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
struct scoutfs_key key;
struct scoutfs_key last_key;
struct scoutfs_lock *dir_lock;
struct kvec val;
int name_len;
u64 pos;
int ret;
@@ -468,7 +508,6 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
init_dirent_key(&last_key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
SCOUTFS_DIRENT_LAST_POS, 0);
kvec_init(&val, dent, dirent_bytes(SCOUTFS_NAME_LEN));
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &dir_lock);
if (ret)
@@ -478,7 +517,9 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
init_dirent_key(&key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
kc_readdir_pos(file, ctx), 0);
ret = scoutfs_forest_next(sb, &key, &last_key, &val, dir_lock);
ret = scoutfs_item_next(sb, &key, &last_key, dent,
dirent_bytes(SCOUTFS_NAME_LEN),
dir_lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
@@ -535,7 +576,6 @@ static int add_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
struct scoutfs_dirent *dent;
bool del_ent = false;
bool del_rdir = false;
struct kvec val;
int ret;
dent = alloc_dirent(name_len);
@@ -554,25 +594,27 @@ static int add_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
init_dirent_key(&ent_key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, pos);
init_dirent_key(&rdir_key, SCOUTFS_READDIR_TYPE, dir_ino, pos, 0);
init_dirent_key(&lb_key, SCOUTFS_LINK_BACKREF_TYPE, ino, dir_ino, pos);
kvec_init(&val, dent, dirent_bytes(name_len));
ret = scoutfs_forest_create(sb, &ent_key, &val, dir_lock);
ret = scoutfs_item_create(sb, &ent_key, dent, dirent_bytes(name_len),
dir_lock);
if (ret)
goto out;
del_ent = true;
ret = scoutfs_forest_create(sb, &rdir_key, &val, dir_lock);
ret = scoutfs_item_create(sb, &rdir_key, dent, dirent_bytes(name_len),
dir_lock);
if (ret)
goto out;
del_rdir = true;
ret = scoutfs_forest_create(sb, &lb_key, &val, inode_lock);
ret = scoutfs_item_create(sb, &lb_key, dent, dirent_bytes(name_len),
inode_lock);
out:
if (ret < 0) {
if (del_ent)
scoutfs_forest_delete_dirty(sb, &ent_key);
scoutfs_item_delete(sb, &ent_key, dir_lock);
if (del_rdir)
scoutfs_forest_delete_dirty(sb, &rdir_key);
scoutfs_item_delete(sb, &rdir_key, dir_lock);
}
kfree(dent);
@@ -594,23 +636,20 @@ static int del_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
struct scoutfs_key rdir_key;
struct scoutfs_key ent_key;
struct scoutfs_key lb_key;
LIST_HEAD(dir_saved);
LIST_HEAD(inode_saved);
int ret;
init_dirent_key(&ent_key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, pos);
init_dirent_key(&rdir_key, SCOUTFS_READDIR_TYPE, dir_ino, pos, 0);
init_dirent_key(&lb_key, SCOUTFS_LINK_BACKREF_TYPE, ino, dir_ino, pos);
ret = scoutfs_forest_delete_save(sb, &ent_key, &dir_saved, dir_lock) ?:
scoutfs_forest_delete_save(sb, &rdir_key, &dir_saved, dir_lock) ?:
scoutfs_forest_delete_save(sb, &lb_key, &inode_saved, inode_lock);
if (ret < 0) {
scoutfs_forest_restore(sb, &dir_saved, dir_lock);
scoutfs_forest_restore(sb, &inode_saved, inode_lock);
} else {
scoutfs_forest_free_batch(sb, &dir_saved);
scoutfs_forest_free_batch(sb, &inode_saved);
ret = scoutfs_item_dirty(sb, &ent_key, dir_lock) ?:
scoutfs_item_dirty(sb, &rdir_key, dir_lock) ?:
scoutfs_item_dirty(sb, &lb_key, inode_lock);
if (ret == 0) {
ret = scoutfs_item_delete(sb, &ent_key, dir_lock) ?:
scoutfs_item_delete(sb, &rdir_key, dir_lock) ?:
scoutfs_item_delete(sb, &lb_key, inode_lock);
BUG_ON(ret); /* _dirty should have guaranteed success */
}
return ret;
@@ -627,7 +666,6 @@ static int del_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
*/
static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
umode_t mode, dev_t rdev,
const struct scoutfs_item_count cnt,
struct scoutfs_lock **dir_lock,
struct scoutfs_lock **inode_lock,
struct list_head *ind_locks)
@@ -642,7 +680,7 @@ static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
if (ret)
return ERR_PTR(ret);
ret = scoutfs_alloc_ino(dir, &ino);
ret = scoutfs_alloc_ino(sb, S_ISDIR(mode), &ino);
if (ret)
return ERR_PTR(ret);
@@ -666,7 +704,7 @@ retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, ind_locks, dir, true) ?:
scoutfs_inode_index_prepare_ino(sb, ind_locks, ino, mode) ?:
scoutfs_inode_index_try_lock_hold(sb, ind_locks, ind_seq, cnt);
scoutfs_inode_index_try_lock_hold(sb, ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -713,7 +751,6 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
inode = lock_hold_create(dir, dentry, mode, rdev,
SIC_MKNOD(dentry->d_name.len),
&dir_lock, &inode_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
@@ -776,6 +813,7 @@ static int scoutfs_link(struct dentry *old_dentry,
struct scoutfs_lock *dir_lock;
struct scoutfs_lock *inode_lock = NULL;
LIST_HEAD(ind_locks);
bool del_orphan;
u64 dir_size;
u64 ind_seq;
u64 hash;
@@ -804,12 +842,13 @@ static int scoutfs_link(struct dentry *old_dentry,
goto out_unlock;
dir_size = i_size_read(dir) + dentry->d_name.len;
del_orphan = (inode->i_nlink == 0);
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, dir, false) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_LINK(dentry->d_name.len));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -819,6 +858,12 @@ retry:
if (ret)
goto out;
if (del_orphan) {
ret = scoutfs_orphan_dirty(sb, scoutfs_ino(inode));
if (ret)
goto out;
}
pos = SCOUTFS_I(dir)->next_readdir_pos++;
ret = add_entry_items(sb, scoutfs_ino(dir), hash, pos,
@@ -834,6 +879,11 @@ retry:
inode->i_ctime = dir->i_mtime;
inc_nlink(inode);
if (del_orphan) {
ret = scoutfs_orphan_delete(sb, scoutfs_ino(inode));
WARN_ON_ONCE(ret);
}
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -890,8 +940,7 @@ retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, dir, false) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_UNLINK(dentry->d_name.len));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -960,17 +1009,16 @@ static void init_symlink_key(struct scoutfs_key *key, u64 ino, u8 nr)
* The target name can be null for deletion when val isn't used. Size
* still has to be provided to determine the number of items.
*/
enum {
enum symlink_ops {
SYM_CREATE = 0,
SYM_LOOKUP,
SYM_DELETE,
};
static int symlink_item_ops(struct super_block *sb, int op, u64 ino,
static int symlink_item_ops(struct super_block *sb, enum symlink_ops op, u64 ino,
struct scoutfs_lock *lock, const char *target,
size_t size)
{
struct scoutfs_key key;
struct kvec val;
unsigned bytes;
unsigned nr;
int ret;
@@ -985,14 +1033,16 @@ static int symlink_item_ops(struct super_block *sb, int op, u64 ino,
init_symlink_key(&key, ino, i);
bytes = min_t(u64, size, SCOUTFS_MAX_VAL_SIZE);
kvec_init(&val, (void *)target, bytes);
if (op == SYM_CREATE)
ret = scoutfs_forest_create(sb, &key, &val, lock);
ret = scoutfs_item_create(sb, &key, (void *)target,
bytes, lock);
else if (op == SYM_LOOKUP)
ret = scoutfs_forest_lookup_exact(sb, &key, &val, lock);
ret = scoutfs_item_lookup_exact(sb, &key,
(void *)target, bytes,
lock);
else if (op == SYM_DELETE)
ret = scoutfs_forest_delete(sb, &key, lock);
ret = scoutfs_item_delete(sb, &key, lock);
if (ret)
break;
@@ -1125,7 +1175,6 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
return ret;
inode = lock_hold_create(dir, dentry, S_IFLNK|S_IRWXUGO, 0,
SIC_SYMLINK(dentry->d_name.len, name_len),
&dir_lock, &inode_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
@@ -1207,7 +1256,6 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
struct scoutfs_key last_key;
struct scoutfs_key key;
struct scoutfs_lock *lock = NULL;
struct kvec val;
int len;
int ret;
@@ -1223,13 +1271,13 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
init_dirent_key(&key, SCOUTFS_LINK_BACKREF_TYPE, ino, dir_ino, dir_pos);
init_dirent_key(&last_key, SCOUTFS_LINK_BACKREF_TYPE, ino, U64_MAX,
U64_MAX);
kvec_init(&val, &ent->dent, dirent_bytes(SCOUTFS_NAME_LEN));
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, ino, &lock);
if (ret)
goto out;
ret = scoutfs_forest_next(sb, &key, &last_key, &val, lock);
ret = scoutfs_item_next(sb, &key, &last_key, &ent->dent,
dirent_bytes(SCOUTFS_NAME_LEN), lock);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
lock = NULL;
if (ret < 0)
@@ -1558,9 +1606,7 @@ retry:
scoutfs_inode_index_prepare(sb, &ind_locks, new_dir, false)) ?:
(new_inode == NULL ? 0 :
scoutfs_inode_index_prepare(sb, &ind_locks, new_inode, false)) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_RENAME(old_dentry->d_name.len,
new_dentry->d_name.len));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -1728,6 +1774,42 @@ static int scoutfs_dir_open(struct inode *inode, struct file *file)
}
#endif
static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
LIST_HEAD(ind_locks);
int ret;
if (dentry->d_name.len > SCOUTFS_NAME_LEN)
return -ENAMETOOLONG;
inode = lock_hold_create(dir, dentry, mode, 0,
&dir_lock, &inode_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
insert_inode_hash(inode);
d_tmpfile(dentry, inode);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
scoutfs_inode_index_unlock(sb, &ind_locks);
ret = scoutfs_orphan_inode(inode);
WARN_ON_ONCE(ret); /* XXX returning error but items deleted */
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
return ret;
}
const struct file_operations scoutfs_dir_fops = {
.KC_FOP_READDIR = scoutfs_readdir,
#ifdef KC_FMODE_KABI_ITERATE
@@ -1738,7 +1820,10 @@ const struct file_operations scoutfs_dir_fops = {
.llseek = generic_file_llseek,
};
const struct inode_operations scoutfs_dir_iops = {
const struct inode_operations_wrapper scoutfs_dir_iops = {
.ops = {
.lookup = scoutfs_lookup,
.mknod = scoutfs_mknod,
.create = scoutfs_create,
@@ -1755,6 +1840,8 @@ const struct inode_operations scoutfs_dir_iops = {
.removexattr = scoutfs_removexattr,
.symlink = scoutfs_symlink,
.permission = scoutfs_permission,
},
.tmpfile = scoutfs_tmpfile,
};
void scoutfs_dir_exit(void)


@@ -5,7 +5,7 @@
#include "lock.h"
extern const struct file_operations scoutfs_dir_fops;
extern const struct inode_operations scoutfs_dir_iops;
extern const struct inode_operations_wrapper scoutfs_dir_iops;
extern const struct inode_operations scoutfs_symlink_iops;
struct scoutfs_link_backref_entry {
@@ -14,7 +14,7 @@ struct scoutfs_link_backref_entry {
u64 dir_pos;
u16 name_len;
struct scoutfs_dirent dent;
/* the full name is allocated and stored in dent.name[0] */
/* the full name is allocated and stored in dent.name[] */
};
int scoutfs_dir_get_backref_path(struct super_block *sb, u64 ino, u64 dir_ino,

kmod/src/ext.c (new file, 394 lines)

@@ -0,0 +1,394 @@
/*
* Copyright (C) 2020 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include "ext.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
* Extents are used to track free block regions and to map logical file
* regions to device blocks. Extents can be split and merged as
* they're modified. These helpers implement all the fiddly extent
* manipulations. Callers provide callbacks which implement the actual
* storage of extents in either the item cache or btree items.
*/
static void ext_zero(struct scoutfs_extent *ext)
{
memset(ext, 0, sizeof(struct scoutfs_extent));
}
static bool ext_overlap(struct scoutfs_extent *ext, u64 start, u64 len)
{
u64 e_end = ext->start + ext->len - 1;
u64 end = start + len - 1;
return !(e_end < start || ext->start > end);
}
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out)
{
u64 in_end = start + len - 1;
u64 out_end = out->start + out->len - 1;
return out->start <= start && out_end >= in_end;
}
/* we only translate mappings when they exist */
static inline u64 ext_map_add(u64 map, u64 diff)
{
return map ? map + diff : 0;
}
/*
* Extents can merge if they're logically contiguous, both don't have
* mappings or have mappings which are also contiguous, and have
* matching flags.
*/
bool scoutfs_ext_can_merge(struct scoutfs_extent *left,
struct scoutfs_extent *right)
{
return (left->start + left->len == right->start) &&
((!left->map && !right->map) ||
(left->map + left->len == right->map)) &&
(left->flags == right->flags);
}
/*
* Split an existing extent into left and right extents by removing
* an interior range.  Either split extent is all zeros if the removed
* range extends to that end of the existing extent.
*/
static void ext_split(struct scoutfs_extent *ext, u64 start, u64 len,
struct scoutfs_extent *left,
struct scoutfs_extent *right)
{
if (ext->start < start) {
left->start = ext->start;
left->len = start - ext->start;
left->map = ext->map;
left->flags = ext->flags;
} else {
ext_zero(left);
}
if (ext->start + ext->len > start + len) {
right->start = start + len;
right->len = ext->start + ext->len - right->start;
right->map = ext_map_add(ext->map, right->start - ext->start);
right->flags = ext->flags;
} else {
ext_zero(right);
}
}
#define op_call(sb, ops, arg, which, args...) \
({ \
int _ret; \
_ret = ops->which(sb, arg, ##args); \
scoutfs_inc_counter(sb, ext_op_##which); \
trace_scoutfs_ext_op_##which(sb, ##args, _ret); \
_ret; \
})
struct extent_changes {
struct scoutfs_extent exts[4];
bool ins[4];
u8 nr;
};
static void add_change(struct extent_changes *chg,
struct scoutfs_extent *ext, bool ins)
{
BUILD_BUG_ON(ARRAY_SIZE(chg->ins) != ARRAY_SIZE(chg->exts));
if (ext->len) {
BUG_ON(chg->nr == ARRAY_SIZE(chg->exts));
chg->exts[chg->nr] = *ext;
chg->ins[chg->nr] = !!ins;
chg->nr++;
}
}
static int apply_changes(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, struct extent_changes *chg)
{
int ret = 0;
int err;
int i;
for (i = 0; i < chg->nr; i++) {
if (chg->ins[i])
ret = op_call(sb, ops, arg, insert, chg->exts[i].start,
chg->exts[i].len, chg->exts[i].map,
chg->exts[i].flags);
else
ret = op_call(sb, ops, arg, remove, chg->exts[i].start,
chg->exts[i].len, chg->exts[i].map,
chg->exts[i].flags);
if (ret < 0)
break;
}
while (ret < 0 && --i >= 0) {
if (chg->ins[i])
err = op_call(sb, ops, arg, remove, chg->exts[i].start,
chg->exts[i].len, chg->exts[i].map,
chg->exts[i].flags);
else
err = op_call(sb, ops, arg, insert, chg->exts[i].start,
chg->exts[i].len, chg->exts[i].map,
chg->exts[i].flags);
BUG_ON(err); /* inconsistent */
}
return ret;
}
int scoutfs_ext_next(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, struct scoutfs_extent *ext)
{
int ret;
ret = op_call(sb, ops, arg, next, start, len, ext);
trace_scoutfs_ext_next(sb, start, len, ext, ret);
return ret;
}
/*
* Insert the given extent.  EINVAL is returned if it overlaps an
* existing extent.  The inserted extent can merge with its neighbours.
*/
int scoutfs_ext_insert(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags)
{
struct extent_changes chg = { .nr = 0 };
struct scoutfs_extent found;
struct scoutfs_extent ins;
int ret;
ins.start = start;
ins.len = len;
ins.map = map;
ins.flags = flags;
/* find right neighbour and check for overlap */
ret = op_call(sb, ops, arg, next, start, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
/* inserting extent must not overlap */
if (found.len && ext_overlap(&ins, found.start, found.len)) {
ret = -EINVAL;
goto out;
}
/* merge with right if we can */
if (found.len && scoutfs_ext_can_merge(&ins, &found)) {
ins.len += found.len;
add_change(&chg, &found, false);
}
/* see if we can merge with a left neighbour */
if (start > 0) {
ret = op_call(sb, ops, arg, next, start - 1, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (ret == 0 && scoutfs_ext_can_merge(&found, &ins)) {
ins.start = found.start;
ins.map = found.map;
ins.len += found.len;
add_change(&chg, &found, false);
}
}
add_change(&chg, &ins, true);
ret = apply_changes(sb, ops, arg, &chg);
out:
trace_scoutfs_ext_insert(sb, start, len, map, flags, ret);
return ret;
}
/*
* Remove the given extent. The extent to remove must be found entirely
* in an existing extent.  If the existing extent is larger than the
* removed range we leave the remainder behind, splitting the existing
* extent if needed.
*/
int scoutfs_ext_remove(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len)
{
struct extent_changes chg = { .nr = 0 };
struct scoutfs_extent found;
struct scoutfs_extent left;
struct scoutfs_extent right;
int ret;
ret = op_call(sb, ops, arg, next, start, 1, &found);
if (ret < 0)
goto out;
/* removed extent must be entirely within found */
if (!scoutfs_ext_inside(start, len, &found)) {
ret = -EINVAL;
goto out;
}
ext_split(&found, start, len, &left, &right);
add_change(&chg, &found, false);
add_change(&chg, &left, true);
add_change(&chg, &right, true);
ret = apply_changes(sb, ops, arg, &chg);
out:
trace_scoutfs_ext_remove(sb, start, len, 0, 0, ret);
return ret;
}
/*
* Find and remove the next extent, removing only a portion if the
* extent is larger than the count. Returns ENOENT if it didn't
* find any extents.
*
* This does not search for merge candidates so it's safe to call with
* extents indexed by length.
*/
int scoutfs_ext_alloc(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 count,
struct scoutfs_extent *ext)
{
struct extent_changes chg = { .nr = 0 };
struct scoutfs_extent found;
struct scoutfs_extent ins;
int ret;
ret = op_call(sb, ops, arg, next, start, len, &found);
if (ret < 0)
goto out;
add_change(&chg, &found, false);
if (found.len > count) {
ins.start = found.start + count;
ins.len = found.len - count;
ins.map = ext_map_add(found.map, count);
ins.flags = found.flags;
add_change(&chg, &ins, true);
}
ret = apply_changes(sb, ops, arg, &chg);
out:
if (ret == 0) {
ext->start = found.start;
ext->len = min(found.len, count);
ext->map = found.map;
ext->flags = found.flags;
} else {
ext_zero(ext);
}
trace_scoutfs_ext_alloc(sb, start, len, count, ext, ret);
return ret;
}
/*
* Set the map and flags for an extent region, with the magical property
* that extents with map and flags set to 0 are removed.
*
* If we're modifying an existing extent then the modification must be
* fully inside the existing extent. The modification can leave edges
* of the extent which need to be inserted. If the modification extends
* to the end of the existing extent then we need to check for adjacent
* neighbouring extents which might now be able to be merged.
*
* Inserting a new extent is like the case of modifying the entire
* existing extent. We need to check neighbours of the inserted extent
* to see if they can be merged.
*/
int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags)
{
struct extent_changes chg = { .nr = 0 };
struct scoutfs_extent found;
struct scoutfs_extent left;
struct scoutfs_extent right;
struct scoutfs_extent set;
int ret;
set.start = start;
set.len = len;
set.map = map;
set.flags = flags;
/* find extent to remove */
ret = op_call(sb, ops, arg, next, start, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (ret == 0 && ext_overlap(&found, start, len)) {
/* set extent must be entirely within found */
if (!scoutfs_ext_inside(start, len, &found)) {
ret = -EINVAL;
goto out;
}
add_change(&chg, &found, false);
ext_split(&found, start, len, &left, &right);
} else {
ext_zero(&found);
ext_zero(&left);
ext_zero(&right);
}
if (left.len) {
/* inserting split left, won't merge */
add_change(&chg, &left, true);
} else if (start > 0) {
ret = op_call(sb, ops, arg, next, start - 1, 1, &left);
if (ret < 0 && ret != -ENOENT)
goto out;
else if (ret == 0 && scoutfs_ext_can_merge(&left, &set)) {
/* remove found left, merging */
set.start = left.start;
set.map = left.map;
set.len += left.len;
add_change(&chg, &left, false);
}
}
if (right.len) {
/* inserting split right, won't merge */
add_change(&chg, &right, true);
} else {
ret = op_call(sb, ops, arg, next, start + len, 1, &right);
if (ret < 0 && ret != -ENOENT)
goto out;
else if (ret == 0 && scoutfs_ext_can_merge(&set, &right)) {
/* remove found right, merging */
set.len += right.len;
add_change(&chg, &right, false);
}
}
if (set.flags || set.map)
add_change(&chg, &set, true);
ret = apply_changes(sb, ops, arg, &chg);
out:
trace_scoutfs_ext_set(sb, start, len, map, flags, ret);
return ret;
}
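/*
 * Illustrative only: a hypothetical caller mapping a logical file
 * region to device blocks and later clearing the mapping.  Because
 * extents whose map and flags are both zero are removed, the second
 * call erases the region entirely.
 */
static __maybe_unused int example_map_then_unmap(struct super_block *sb,
						 struct scoutfs_ext_ops *ops,
						 void *arg, u64 iblock,
						 u64 count, u64 blkno)
{
	int ret;

	ret = scoutfs_ext_set(sb, ops, arg, iblock, count, blkno, 0);
	if (ret < 0)
		return ret;

	return scoutfs_ext_set(sb, ops, arg, iblock, count, 0, 0);
}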

kmod/src/ext.h (new file, 36 lines)

@@ -0,0 +1,36 @@
#ifndef _SCOUTFS_EXT_H_
#define _SCOUTFS_EXT_H_
struct scoutfs_extent {
u64 start;
u64 len;
u64 map;
u8 flags;
};
struct scoutfs_ext_ops {
int (*next)(struct super_block *sb, void *arg,
u64 start, u64 len, struct scoutfs_extent *ext);
int (*insert)(struct super_block *sb, void *arg,
u64 start, u64 len, u64 map, u8 flags);
int (*remove)(struct super_block *sb, void *arg, u64 start, u64 len,
u64 map, u8 flags);
};
bool scoutfs_ext_can_merge(struct scoutfs_extent *left,
struct scoutfs_extent *right);
int scoutfs_ext_next(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, struct scoutfs_extent *ext);
int scoutfs_ext_insert(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags);
int scoutfs_ext_remove(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len);
int scoutfs_ext_alloc(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 limit,
struct scoutfs_extent *ext);
int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags);
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out);
#endif
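A minimal sketch of the callback contract these declarations imply, backed by a toy in-memory array; every name below is hypothetical, and a real backend stores extents in item-cache or btree items as the callers changed in this diff do.

struct toy_ext_root {
	struct scoutfs_extent exts[16];
	unsigned int nr;
};

/* return the lowest-starting extent whose end is at or past start */
static int toy_ext_next(struct super_block *sb, void *arg, u64 start,
			u64 len, struct scoutfs_extent *ext)
{
	struct toy_ext_root *root = arg;
	struct scoutfs_extent *found = NULL;
	unsigned int i;

	memset(ext, 0, sizeof(*ext));
	for (i = 0; i < root->nr; i++) {
		struct scoutfs_extent *cur = &root->exts[i];

		if (cur->start + cur->len - 1 < start)
			continue;
		if (!found || cur->start < found->start)
			found = cur;
	}
	if (!found)
		return -ENOENT;

	*ext = *found;
	return 0;
}

static int toy_ext_insert(struct super_block *sb, void *arg, u64 start,
			  u64 len, u64 map, u8 flags)
{
	struct toy_ext_root *root = arg;
	struct scoutfs_extent *ext;

	if (root->nr == ARRAY_SIZE(root->exts))
		return -ENOSPC;

	ext = &root->exts[root->nr++];
	ext->start = start;
	ext->len = len;
	ext->map = map;
	ext->flags = flags;
	return 0;
}

/* remove an extent previously returned by next(), matched exactly */
static int toy_ext_remove(struct super_block *sb, void *arg, u64 start,
			  u64 len, u64 map, u8 flags)
{
	struct toy_ext_root *root = arg;
	unsigned int i;

	for (i = 0; i < root->nr; i++) {
		struct scoutfs_extent *cur = &root->exts[i];

		if (cur->start == start && cur->len == len &&
		    cur->map == map && cur->flags == flags) {
			root->exts[i] = root->exts[--root->nr];
			return 0;
		}
	}
	return -ENOENT;
}

static struct scoutfs_ext_ops toy_ext_ops = {
	.next	= toy_ext_next,
	.insert	= toy_ext_insert,
	.remove	= toy_ext_remove,
};

Free block ranges would then be recorded with scoutfs_ext_insert(sb, &toy_ext_ops, &root, start, len, 0, 0) and carved back out with scoutfs_ext_alloc().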

File diff suppressed because it is too large


@@ -1,54 +1,43 @@
#ifndef _SCOUTFS_FOREST_H_
#define _SCOUTFS_FOREST_H_
struct scoutfs_radix_allocator;
struct scoutfs_alloc;
struct scoutfs_block_writer;
struct scoutfs_block;
#include "btree.h"
/* caller gives an item to the callback */
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_log_item_value *liv,
void *val, int val_len, void *arg);
int scoutfs_forest_lookup(struct super_block *sb, struct scoutfs_key *key,
struct kvec *val, struct scoutfs_lock *lock);
int scoutfs_forest_lookup_exact(struct super_block *sb,
struct scoutfs_key *key, struct kvec *val,
struct scoutfs_lock *lock);
int scoutfs_forest_next(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *last, struct kvec *val,
struct scoutfs_lock *lock);
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *next);
int scoutfs_forest_prev(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *first, struct kvec *val,
struct scoutfs_lock *lock);
int scoutfs_forest_create(struct super_block *sb, struct scoutfs_key *key,
struct kvec *val, struct scoutfs_lock *lock);
int scoutfs_forest_create_force(struct super_block *sb,
struct scoutfs_key *key, struct kvec *val,
struct scoutfs_lock *lock);
int scoutfs_forest_update(struct super_block *sb, struct scoutfs_key *key,
struct kvec *val, struct scoutfs_lock *lock);
int scoutfs_forest_delete_dirty(struct super_block *sb,
struct scoutfs_key *key);
int scoutfs_forest_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_forest_delete_force(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_forest_delete_save(struct super_block *sb,
struct scoutfs_key *key,
struct list_head *list,
struct scoutfs_lock *lock);
int scoutfs_forest_restore(struct super_block *sb, struct list_head *list,
struct scoutfs_lock *lock);
void scoutfs_forest_free_batch(struct super_block *sb, struct list_head *list);
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_lock *lock,
struct scoutfs_key *key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg);
int scoutfs_forest_set_bloom_bits(struct super_block *sb,
struct scoutfs_lock *lock);
void scoutfs_forest_set_max_vers(struct super_block *sb, u64 max_vers);
int scoutfs_forest_get_max_vers(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *vers);
int scoutfs_forest_insert_list(struct super_block *sb,
struct scoutfs_btree_item_list *lst);
int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id);
void scoutfs_forest_init_btrees(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_log_trees *lt);
void scoutfs_forest_get_btrees(struct super_block *sb,
struct scoutfs_log_trees *lt);
void scoutfs_forest_clear_lock(struct super_block *sb,
struct scoutfs_lock *lock);
int scoutfs_forest_setup(struct super_block *sb);
void scoutfs_forest_destroy(struct super_block *sb);

File diff suppressed because it is too large


@@ -1,15 +1,49 @@
#ifndef _SCOUTFS_HASH_H_
#define _SCOUTFS_HASH_H_
#include <linux/crc32c.h>
/*
* We're using FNV1a for now. It's fine. Ish.
*
* The longer term plan is xxh3 but it looks like it'll take just a bit
* more time to be declared stable and then it needs to be ported to the
* kernel.
*
* - https://fastcompression.blogspot.com/2019/03/presenting-xxh3.html
* - https://github.com/Cyan4973/xxHash/releases/tag/v0.7.4
*/
static inline u32 fnv1a32(const void *data, unsigned int len)
{
u32 hash = 0x811c9dc5;
while (len--) {
hash ^= *(u8 *)(data++);
hash *= 0x01000193;
}
return hash;
}
static inline u64 fnv1a64(const void *data, unsigned int len)
{
u64 hash = 0xcbf29ce484222325ULL;
while (len--) {
hash ^= *(u8 *)(data++);
hash *= 0x100000001b3ULL;
}
return hash;
}
static inline u32 scoutfs_hash32(const void *data, unsigned int len)
{
return fnv1a32(data, len);
}
/* XXX replace with xxhash */
static inline u64 scoutfs_hash64(const void *data, unsigned int len)
{
unsigned int half = (len + 1) / 2;
return crc32c(~0, data, half) |
((u64)crc32c(~0, data + len - half, half) << 32);
return fnv1a64(data, len);
}
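/*
 * Illustrative only: FNV-1a has well-known published test vectors, so a
 * hypothetical sanity check of the helpers above could compare against
 * one.  With the basis and prime used here, the 32-bit hash of the
 * single byte "a" is 0xe40c292c.
 */
static inline bool scoutfs_hash32_example_ok(void)
{
	return scoutfs_hash32("a", 1) == 0xe40c292c;
}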
#endif


@@ -30,8 +30,7 @@
#include "xattr.h"
#include "trans.h"
#include "msg.h"
#include "kvec.h"
#include "forest.h"
#include "item.h"
#include "client.h"
#include "cmp.h"
@@ -47,9 +46,17 @@
* - describe data locking size problems
*/
struct inode_allocator {
spinlock_t lock;
u64 ino;
u64 nr;
};
struct inode_sb_info {
spinlock_t writeback_lock;
struct rb_root writeback_inodes;
struct inode_allocator dir_ino_alloc;
struct inode_allocator ino_alloc;
};
#define DECLARE_INODE_SB_INFO(sb, name) \
@@ -64,30 +71,30 @@ static struct kmem_cache *scoutfs_inode_cachep;
*/
static void scoutfs_inode_ctor(void *obj)
{
struct scoutfs_inode_info *ci = obj;
struct scoutfs_inode_info *si = obj;
mutex_init(&ci->item_mutex);
seqcount_init(&ci->seqcount);
ci->staging = false;
scoutfs_per_task_init(&ci->pt_data_lock);
atomic64_set(&ci->data_waitq.changed, 0);
init_waitqueue_head(&ci->data_waitq.waitq);
init_rwsem(&ci->xattr_rwsem);
RB_CLEAR_NODE(&ci->writeback_node);
spin_lock_init(&ci->ino_alloc.lock);
init_rwsem(&si->extent_sem);
mutex_init(&si->item_mutex);
seqcount_init(&si->seqcount);
si->staging = false;
scoutfs_per_task_init(&si->pt_data_lock);
atomic64_set(&si->data_waitq.changed, 0);
init_waitqueue_head(&si->data_waitq.waitq);
init_rwsem(&si->xattr_rwsem);
RB_CLEAR_NODE(&si->writeback_node);
inode_init_once(&ci->inode);
inode_init_once(&si->inode);
}
struct inode *scoutfs_alloc_inode(struct super_block *sb)
{
struct scoutfs_inode_info *ci;
struct scoutfs_inode_info *si;
ci = kmem_cache_alloc(scoutfs_inode_cachep, GFP_NOFS);
if (!ci)
si = kmem_cache_alloc(scoutfs_inode_cachep, GFP_NOFS);
if (!si)
return NULL;
return &ci->inode;
return &si->inode;
}
static void scoutfs_i_callback(struct rcu_head *head)
@@ -175,7 +182,8 @@ static void set_inode_ops(struct inode *inode)
inode->i_fop = &scoutfs_file_fops;
break;
case S_IFDIR:
inode->i_op = &scoutfs_dir_iops;
inode->i_op = &scoutfs_dir_iops.ops;
inode->i_flags |= S_IOPS_WRAPPER;
inode->i_fop = &scoutfs_dir_fops;
break;
case S_IFLNK:
@@ -215,7 +223,7 @@ static void set_item_info(struct scoutfs_inode_info *si,
static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
i_size_write(inode, le64_to_cpu(cinode->size));
set_nlink(inode, le32_to_cpu(cinode->nlink));
@@ -230,23 +238,23 @@ static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
inode->i_ctime.tv_sec = le64_to_cpu(cinode->ctime.sec);
inode->i_ctime.tv_nsec = le32_to_cpu(cinode->ctime.nsec);
ci->meta_seq = le64_to_cpu(cinode->meta_seq);
ci->data_seq = le64_to_cpu(cinode->data_seq);
ci->data_version = le64_to_cpu(cinode->data_version);
ci->online_blocks = le64_to_cpu(cinode->online_blocks);
ci->offline_blocks = le64_to_cpu(cinode->offline_blocks);
ci->next_readdir_pos = le64_to_cpu(cinode->next_readdir_pos);
ci->next_xattr_id = le64_to_cpu(cinode->next_xattr_id);
ci->flags = le32_to_cpu(cinode->flags);
si->meta_seq = le64_to_cpu(cinode->meta_seq);
si->data_seq = le64_to_cpu(cinode->data_seq);
si->data_version = le64_to_cpu(cinode->data_version);
si->online_blocks = le64_to_cpu(cinode->online_blocks);
si->offline_blocks = le64_to_cpu(cinode->offline_blocks);
si->next_readdir_pos = le64_to_cpu(cinode->next_readdir_pos);
si->next_xattr_id = le64_to_cpu(cinode->next_xattr_id);
si->flags = le32_to_cpu(cinode->flags);
/*
* i_blocks is initialized from online and offline and is then
* maintained as blocks come and go.
*/
inode->i_blocks = (ci->online_blocks + ci->offline_blocks)
<< SCOUTFS_BLOCK_SECTOR_SHIFT;
inode->i_blocks = (si->online_blocks + si->offline_blocks)
<< SCOUTFS_BLOCK_SM_SECTOR_SHIFT;
set_item_info(ci, cinode);
set_item_info(si, cinode);
}
static void init_inode_key(struct scoutfs_key *key, u64 ino)
@@ -276,7 +284,6 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock,
struct super_block *sb = inode->i_sb;
struct scoutfs_key key;
struct scoutfs_inode sinode;
struct kvec val;
const u64 refresh_gen = lock->refresh_gen;
int ret;
@@ -292,11 +299,11 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock,
return 0;
init_inode_key(&key, scoutfs_ino(inode));
kvec_init(&val, &sinode, sizeof(sinode));
mutex_lock(&si->item_mutex);
if (atomic64_read(&si->last_refreshed) < refresh_gen) {
ret = scoutfs_forest_lookup_exact(sb, &key, &val, lock);
ret = scoutfs_item_lookup_exact(sb, &key, &sinode,
sizeof(sinode), lock);
if (ret == 0) {
load_inode(inode, &sinode);
atomic64_set(&si->last_refreshed, refresh_gen);
@@ -329,7 +336,7 @@ int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
u64 new_size, bool truncate)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
LIST_HEAD(ind_locks);
int ret;
@@ -337,8 +344,7 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
if (!S_ISREG(inode->i_mode))
return 0;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, true,
SIC_DIRTY_INODE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, true);
if (ret)
return ret;
@@ -348,7 +354,7 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
truncate_setsize(inode, new_size);
inode->i_ctime = inode->i_mtime = CURRENT_TIME;
if (truncate)
ci->flags |= SCOUTFS_INO_FLAG_TRUNCATE;
si->flags |= SCOUTFS_INO_FLAG_TRUNCATE;
scoutfs_inode_set_data_seq(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
@@ -360,17 +366,16 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
static int clear_truncate_flag(struct inode *inode, struct scoutfs_lock *lock)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
LIST_HEAD(ind_locks);
int ret;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false,
SIC_DIRTY_INODE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false);
if (ret)
return ret;
ci->flags &= ~SCOUTFS_INO_FLAG_TRUNCATE;
si->flags &= ~SCOUTFS_INO_FLAG_TRUNCATE;
scoutfs_update_inode_item(inode, lock, &ind_locks);
scoutfs_release_trans(sb);
@@ -381,16 +386,17 @@ static int clear_truncate_flag(struct inode *inode, struct scoutfs_lock *lock)
int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
u64 start;
int ret, err;
trace_scoutfs_complete_truncate(inode, ci->flags);
trace_scoutfs_complete_truncate(inode, si->flags);
if (!(ci->flags & SCOUTFS_INO_FLAG_TRUNCATE))
if (!(si->flags & SCOUTFS_INO_FLAG_TRUNCATE))
return 0;
start = (i_size_read(inode) + SCOUTFS_BLOCK_SIZE - 1) >> SCOUTFS_BLOCK_SHIFT;
start = (i_size_read(inode) + SCOUTFS_BLOCK_SM_SIZE - 1) >>
SCOUTFS_BLOCK_SM_SHIFT;
ret = scoutfs_data_truncate_items(inode->i_sb, inode,
scoutfs_ino(inode), start, ~0ULL,
false, lock);
@@ -480,8 +486,7 @@ retry:
}
}
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false,
SIC_DIRTY_INODE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false);
if (ret)
goto out;
@@ -573,7 +578,7 @@ void scoutfs_inode_add_onoff(struct inode *inode, s64 on, s64 off)
si->online_blocks += on;
si->offline_blocks += off;
/* XXX not sure if this is right */
inode->i_blocks += (on + off) * SCOUTFS_BLOCK_SECTORS;
inode->i_blocks += (on + off) * SCOUTFS_BLOCK_SM_SECTORS;
trace_scoutfs_online_offline_blocks(inode, on, off,
si->online_blocks,
@@ -637,19 +642,19 @@ void scoutfs_inode_get_onoff(struct inode *inode, s64 *on, s64 *off)
static int scoutfs_iget_test(struct inode *inode, void *arg)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
u64 *ino = arg;
return ci->ino == *ino;
return si->ino == *ino;
}
static int scoutfs_iget_set(struct inode *inode, void *arg)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
u64 *ino = arg;
inode->i_ino = *ino;
ci->ino = *ino;
si->ino = *ino;
return 0;
}
@@ -681,8 +686,6 @@ struct inode *scoutfs_iget(struct super_block *sb, u64 ino)
/* XXX ensure refresh, instead clear in drop_inode? */
si = SCOUTFS_I(inode);
atomic64_set(&si->last_refreshed, 0);
si->ino_alloc.ino = 0;
si->ino_alloc.nr = 0;
ret = scoutfs_inode_refresh(inode, lock, 0);
if (ret) {
@@ -701,7 +704,7 @@ out:
static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
{
struct scoutfs_inode_info *ci = SCOUTFS_I(inode);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
u64 online_blocks;
u64 offline_blocks;
@@ -715,19 +718,22 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
cinode->rdev = cpu_to_le32(inode->i_rdev);
cinode->atime.sec = cpu_to_le64(inode->i_atime.tv_sec);
cinode->atime.nsec = cpu_to_le32(inode->i_atime.tv_nsec);
memset(cinode->atime.__pad, 0, sizeof(cinode->atime.__pad));
cinode->ctime.sec = cpu_to_le64(inode->i_ctime.tv_sec);
cinode->ctime.nsec = cpu_to_le32(inode->i_ctime.tv_nsec);
memset(cinode->ctime.__pad, 0, sizeof(cinode->ctime.__pad));
cinode->mtime.sec = cpu_to_le64(inode->i_mtime.tv_sec);
cinode->mtime.nsec = cpu_to_le32(inode->i_mtime.tv_nsec);
memset(cinode->mtime.__pad, 0, sizeof(cinode->mtime.__pad));
cinode->meta_seq = cpu_to_le64(scoutfs_inode_meta_seq(inode));
cinode->data_seq = cpu_to_le64(scoutfs_inode_data_seq(inode));
cinode->data_version = cpu_to_le64(scoutfs_inode_data_version(inode));
cinode->online_blocks = cpu_to_le64(online_blocks);
cinode->offline_blocks = cpu_to_le64(offline_blocks);
cinode->next_readdir_pos = cpu_to_le64(ci->next_readdir_pos);
cinode->next_xattr_id = cpu_to_le64(ci->next_xattr_id);
cinode->flags = cpu_to_le32(ci->flags);
cinode->next_readdir_pos = cpu_to_le64(si->next_readdir_pos);
cinode->next_xattr_id = cpu_to_le64(si->next_xattr_id);
cinode->flags = cpu_to_le32(si->flags);
}
/*
@@ -753,15 +759,13 @@ int scoutfs_dirty_inode_item(struct inode *inode, struct scoutfs_lock *lock)
struct super_block *sb = inode->i_sb;
struct scoutfs_inode sinode;
struct scoutfs_key key;
struct kvec val;
int ret;
store_inode(&sinode, inode);
kvec_init(&val, &sinode, sizeof(sinode));
init_inode_key(&key, scoutfs_ino(inode));
ret = scoutfs_forest_update(sb, &key, &val, lock);
ret = scoutfs_item_update(sb, &key, &sinode, sizeof(sinode), lock);
if (!ret)
trace_scoutfs_dirty_inode(inode);
return ret;
@@ -893,7 +897,7 @@ static int update_index_items(struct super_block *sb,
scoutfs_inode_init_index_key(&ins, type, major, minor, ino);
ins_lock = find_index_lock(lock_list, type, major, minor, ino);
ret = scoutfs_forest_create_force(sb, &ins, NULL, ins_lock);
ret = scoutfs_item_create_force(sb, &ins, NULL, 0, ins_lock);
if (ret || !will_del_index(si, type, major, minor))
return ret;
@@ -905,9 +909,9 @@ static int update_index_items(struct super_block *sb,
del_lock = find_index_lock(lock_list, type, si->item_majors[type],
si->item_minors[type], ino);
ret = scoutfs_forest_delete_force(sb, &del, del_lock);
ret = scoutfs_item_delete_force(sb, &del, del_lock);
if (ret) {
err = scoutfs_forest_delete(sb, &ins, ins_lock);
err = scoutfs_item_delete(sb, &ins, ins_lock);
BUG_ON(err);
}
@@ -966,7 +970,6 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
const u64 ino = scoutfs_ino(inode);
struct scoutfs_key key;
struct scoutfs_inode sinode;
struct kvec val;
int ret;
int err;
@@ -982,9 +985,8 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
BUG_ON(ret);
init_inode_key(&key, ino);
kvec_init(&val, &sinode, sizeof(sinode));
err = scoutfs_forest_update(sb, &key, &val, lock);
err = scoutfs_item_update(sb, &key, &sinode, sizeof(sinode), lock);
if (err) {
scoutfs_err(sb, "inode %llu update err %d", ino, err);
BUG_ON(err);
@@ -1185,8 +1187,7 @@ int scoutfs_inode_index_start(struct super_block *sb, u64 *seq)
* Returns > 0 if the seq changed and the locks should be retried.
*/
int scoutfs_inode_index_try_lock_hold(struct super_block *sb,
struct list_head *list, u64 seq,
const struct scoutfs_item_count cnt)
struct list_head *list, u64 seq)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct index_lock *ind_lock;
@@ -1202,7 +1203,7 @@ int scoutfs_inode_index_try_lock_hold(struct super_block *sb,
goto out;
}
ret = scoutfs_hold_trans(sb, cnt);
ret = scoutfs_hold_trans(sb);
if (ret == 0 && seq != sbi->trans_seq) {
scoutfs_release_trans(sb);
ret = 1;
@@ -1216,8 +1217,7 @@ out:
}
int scoutfs_inode_index_lock_hold(struct inode *inode, struct list_head *list,
bool set_data_seq,
const struct scoutfs_item_count cnt)
bool set_data_seq)
{
struct super_block *sb = inode->i_sb;
int ret;
@@ -1227,7 +1227,7 @@ int scoutfs_inode_index_lock_hold(struct inode *inode, struct list_head *list,
ret = scoutfs_inode_index_start(sb, &seq) ?:
scoutfs_inode_index_prepare(sb, list, inode,
set_data_seq) ?:
scoutfs_inode_index_try_lock_hold(sb, list, seq, cnt);
scoutfs_inode_index_try_lock_hold(sb, list, seq);
} while (ret > 0);
return ret;
@@ -1259,7 +1259,7 @@ static int remove_index(struct super_block *sb, u64 ino, u8 type, u64 major,
scoutfs_inode_init_index_key(&key, type, major, minor, ino);
lock = find_index_lock(ind_locks, type, major, minor, ino);
ret = scoutfs_forest_delete_force(sb, &key, lock);
ret = scoutfs_item_delete_force(sb, &key, lock);
if (ret == -ENOENT)
ret = 0;
return ret;
@@ -1321,14 +1321,16 @@ u64 scoutfs_last_ino(struct super_block *sb)
* minimize that loss while still being large enough for typical
* directory file counts.
*/
int scoutfs_alloc_ino(struct inode *parent, u64 *ino_ret)
int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret)
{
struct scoutfs_inode_allocator *ia = &SCOUTFS_I(parent)->ino_alloc;
struct super_block *sb = parent->i_sb;
DECLARE_INODE_SB_INFO(sb, inf);
struct inode_allocator *ia;
u64 ino;
u64 nr;
int ret;
ia = is_dir ? &inf->dir_ino_alloc : &inf->ino_alloc;
spin_lock(&ia->lock);
if (ia->nr == 0) {
@@ -1363,29 +1365,26 @@ struct inode *scoutfs_new_inode(struct super_block *sb, struct inode *dir,
umode_t mode, dev_t rdev, u64 ino,
struct scoutfs_lock *lock)
{
struct scoutfs_inode_info *ci;
struct scoutfs_inode_info *si;
struct scoutfs_key key;
struct scoutfs_inode sinode;
struct inode *inode;
struct kvec val;
int ret;
inode = new_inode(sb);
if (!inode)
return ERR_PTR(-ENOMEM);
ci = SCOUTFS_I(inode);
ci->ino = ino;
ci->data_version = 0;
ci->online_blocks = 0;
ci->offline_blocks = 0;
ci->next_readdir_pos = SCOUTFS_DIRENT_FIRST_POS;
ci->next_xattr_id = 0;
ci->have_item = false;
atomic64_set(&ci->last_refreshed, lock->refresh_gen);
ci->flags = 0;
ci->ino_alloc.ino = 0;
ci->ino_alloc.nr = 0;
si = SCOUTFS_I(inode);
si->ino = ino;
si->data_version = 0;
si->online_blocks = 0;
si->offline_blocks = 0;
si->next_readdir_pos = SCOUTFS_DIRENT_FIRST_POS;
si->next_xattr_id = 0;
si->have_item = false;
atomic64_set(&si->last_refreshed, lock->refresh_gen);
si->flags = 0;
scoutfs_inode_set_meta_seq(inode);
scoutfs_inode_set_data_seq(inode);
@@ -1399,9 +1398,8 @@ struct inode *scoutfs_new_inode(struct super_block *sb, struct inode *dir,
store_inode(&sinode, inode);
init_inode_key(&key, scoutfs_ino(inode));
kvec_init(&val, &sinode, sizeof(sinode));
ret = scoutfs_forest_create(sb, &key, &val, lock);
ret = scoutfs_item_create(sb, &key, &sinode, sizeof(sinode), lock);
if (ret) {
iput(inode);
return ERR_PTR(ret);
@@ -1420,7 +1418,18 @@ static void init_orphan_key(struct scoutfs_key *key, u64 rid, u64 ino)
};
}
static int remove_orphan_item(struct super_block *sb, u64 ino)
int scoutfs_orphan_dirty(struct super_block *sb, u64 ino)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_lock *lock = sbi->rid_lock;
struct scoutfs_key key;
init_orphan_key(&key, sbi->rid, ino);
return scoutfs_item_dirty(sb, &key, lock);
}
int scoutfs_orphan_delete(struct super_block *sb, u64 ino)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_lock *lock = sbi->rid_lock;
@@ -1429,7 +1438,7 @@ static int remove_orphan_item(struct super_block *sb, u64 ino)
init_orphan_key(&key, sbi->rid, ino);
ret = scoutfs_forest_delete(sb, &key, lock);
ret = scoutfs_item_delete(sb, &key, lock);
if (ret == -ENOENT)
ret = 0;
@@ -1451,7 +1460,6 @@ static int delete_inode_items(struct super_block *sb, u64 ino)
struct scoutfs_key key;
LIST_HEAD(ind_locks);
bool release = false;
struct kvec val;
umode_t mode;
u64 ind_seq;
u64 size;
@@ -1462,9 +1470,9 @@ static int delete_inode_items(struct super_block *sb, u64 ino)
return ret;
init_inode_key(&key, ino);
kvec_init(&val, &sinode, sizeof(sinode));
ret = scoutfs_forest_lookup_exact(sb, &key, &val, lock);
ret = scoutfs_item_lookup_exact(sb, &key, &sinode, sizeof(sinode),
lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
@@ -1498,8 +1506,7 @@ static int delete_inode_items(struct super_block *sb, u64 ino)
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
prepare_index_deletion(sb, &ind_locks, ino, mode, &sinode) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_DROP_INODE(mode, size));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -1517,11 +1524,11 @@ retry:
goto out;
}
ret = scoutfs_forest_delete(sb, &key, lock);
ret = scoutfs_item_delete(sb, &key, lock);
if (ret)
goto out;
ret = remove_orphan_item(sb, ino);
ret = scoutfs_orphan_delete(sb, ino);
out:
if (release)
scoutfs_release_trans(sb);
@@ -1586,7 +1593,7 @@ int scoutfs_scan_orphans(struct super_block *sb)
init_orphan_key(&last, sbi->rid, ~0ULL);
while (1) {
ret = scoutfs_forest_next(sb, &key, &last, NULL, lock);
ret = scoutfs_item_next(sb, &key, &last, NULL, 0, lock);
if (ret == -ENOENT) /* No more orphan items */
break;
if (ret < 0)
@@ -1620,25 +1627,34 @@ int scoutfs_orphan_inode(struct inode *inode)
init_orphan_key(&key, sbi->rid, scoutfs_ino(inode));
ret = scoutfs_forest_create(sb, &key, NULL, lock);
ret = scoutfs_item_create(sb, &key, NULL, 0, lock);
return ret;
}
/*
* Track an inode that could have dirty pages. Used to kick off writeback
* on all dirty pages during transaction commit without tying ourselves in
* knots trying to call through the high level vfs sync methods.
* Track an inode that could have dirty pages. Used to kick off
* writeback on all dirty pages during transaction commit without tying
* ourselves in knots trying to call through the high level vfs sync
* methods.
*
* This is called by writers who hold the inode and transaction. The
* inode's presence in the rbtree is removed either by destroy_inode,
* which is prevented by our hold on the inode, or by the transaction
* committing, which is prevented by holding the transaction. The inode can only go from
* empty to on the rbtree while we're here.
*/
void scoutfs_inode_queue_writeback(struct inode *inode)
{
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
spin_lock(&inf->writeback_lock);
if (RB_EMPTY_NODE(&si->writeback_node))
insert_writeback_inode(inf, si);
spin_unlock(&inf->writeback_lock);
if (RB_EMPTY_NODE(&si->writeback_node)) {
spin_lock(&inf->writeback_lock);
if (RB_EMPTY_NODE(&si->writeback_node))
insert_writeback_inode(inf, si);
spin_unlock(&inf->writeback_lock);
}
}
/*
@@ -1724,6 +1740,8 @@ int scoutfs_inode_setup(struct super_block *sb)
spin_lock_init(&inf->writeback_lock);
inf->writeback_inodes = RB_ROOT;
spin_lock_init(&inf->dir_ino_alloc.lock);
spin_lock_init(&inf->ino_alloc.lock);
sbi->inode_sb_info = inf;


@@ -4,18 +4,11 @@
#include "key.h"
#include "lock.h"
#include "per_task.h"
#include "count.h"
#include "format.h"
#include "data.h"
struct scoutfs_lock;
struct scoutfs_inode_allocator {
spinlock_t lock;
u64 ino;
u64 nr;
};
struct scoutfs_inode_info {
/* read or initialized for each inode instance */
u64 ino;
@@ -28,6 +21,14 @@ struct scoutfs_inode_info {
u64 offline_blocks;
u32 flags;
/*
* Protects per-inode extent items, most particularly readers
* who want to serialize writers without holding i_mutex. (only
* used in data.c, it's the only place that understands file
* extent items)
*/
struct rw_semaphore extent_sem;
/*
* The in-memory item info caches the current index item values
* so that we can decide to update them with comparisons instead
@@ -42,9 +43,6 @@ struct scoutfs_inode_info {
/* updated at on each new lock acquisition */
atomic64_t last_refreshed;
/* reset for every new inode instance */
struct scoutfs_inode_allocator ino_alloc;
/* initialized once for slab object */
seqcount_t seqcount;
bool staging; /* holder of i_mutex is staging */
@@ -84,18 +82,16 @@ int scoutfs_inode_index_prepare_ino(struct super_block *sb,
struct list_head *list, u64 ino,
umode_t mode);
int scoutfs_inode_index_try_lock_hold(struct super_block *sb,
struct list_head *list, u64 seq,
const struct scoutfs_item_count cnt);
struct list_head *list, u64 seq);
int scoutfs_inode_index_lock_hold(struct inode *inode, struct list_head *list,
bool set_data_seq,
const struct scoutfs_item_count cnt);
bool set_data_seq);
void scoutfs_inode_index_unlock(struct super_block *sb, struct list_head *list);
int scoutfs_dirty_inode_item(struct inode *inode, struct scoutfs_lock *lock);
void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
struct list_head *ind_locks);
int scoutfs_alloc_ino(struct inode *parent, u64 *ino);
int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret);
struct inode *scoutfs_new_inode(struct super_block *sb, struct inode *dir,
umode_t mode, dev_t rdev, u64 ino,
struct scoutfs_lock *lock);
@@ -118,6 +114,8 @@ int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
int scoutfs_scan_orphans(struct super_block *sb);
int scoutfs_orphan_dirty(struct super_block *sb, u64 ino);
int scoutfs_orphan_delete(struct super_block *sb, u64 ino);
void scoutfs_inode_queue_writeback(struct inode *inode);
int scoutfs_inode_walk_writeback(struct super_block *sb, bool write);


@@ -12,6 +12,7 @@
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/uaccess.h>
#include <linux/compiler.h>
#include <linux/uio.h>
@@ -27,6 +28,7 @@
#include "ioctl.h"
#include "super.h"
#include "inode.h"
#include "item.h"
#include "forest.h"
#include "data.h"
#include "client.h"
@@ -34,6 +36,8 @@
#include "trans.h"
#include "xattr.h"
#include "hash.h"
#include "srch.h"
#include "alloc.h"
#include "scoutfs_trace.h"
/*
@@ -109,7 +113,7 @@ static long scoutfs_ioc_walk_inodes(struct file *file, unsigned long arg)
for (nr = 0; nr < walk.nr_entries; ) {
ret = scoutfs_forest_next(sb, &key, &last_key, NULL, lock);
ret = scoutfs_item_next(sb, &key, &last_key, NULL, 0, lock);
if (ret < 0 && ret != -ENOENT)
break;
@@ -271,8 +275,8 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
struct super_block *sb = inode->i_sb;
struct scoutfs_ioctl_release args;
struct scoutfs_lock *lock = NULL;
loff_t start;
loff_t end_inc;
u64 sblock;
u64 eblock;
u64 online;
u64 offline;
u64 isize;
@@ -283,9 +287,11 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
trace_scoutfs_ioc_release(sb, scoutfs_ino(inode), &args);
if (args.count == 0)
if (args.length == 0)
return 0;
if ((args.block + args.count) < args.block)
if (((args.offset + args.length) < args.offset) ||
(args.offset & SCOUTFS_BLOCK_SM_MASK) ||
(args.length & SCOUTFS_BLOCK_SM_MASK))
return -EINVAL;
@@ -318,23 +324,24 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
inode_dio_wait(inode);
/* drop all clean and dirty cached blocks in the range */
start = args.block << SCOUTFS_BLOCK_SHIFT;
end_inc = ((args.block + args.count) << SCOUTFS_BLOCK_SHIFT) - 1;
truncate_inode_pages_range(&inode->i_data, start, end_inc);
truncate_inode_pages_range(&inode->i_data, args.offset,
args.offset + args.length - 1);
sblock = args.offset >> SCOUTFS_BLOCK_SM_SHIFT;
eblock = (args.offset + args.length - 1) >> SCOUTFS_BLOCK_SM_SHIFT;
ret = scoutfs_data_truncate_items(sb, inode, scoutfs_ino(inode),
args.block,
args.block + args.count - 1, true,
sblock,
eblock, true,
lock);
if (ret == 0) {
scoutfs_inode_get_onoff(inode, &online, &offline);
isize = i_size_read(inode);
if (online == 0 && isize) {
start = (isize + SCOUTFS_BLOCK_SIZE - 1)
>> SCOUTFS_BLOCK_SHIFT;
sblock = (isize + SCOUTFS_BLOCK_SM_SIZE - 1)
>> SCOUTFS_BLOCK_SM_SHIFT;
ret = scoutfs_data_truncate_items(sb, inode,
scoutfs_ino(inode),
start, U64_MAX,
sblock, U64_MAX,
false, lock);
}
}
@@ -371,8 +378,8 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
trace_scoutfs_ioc_data_wait_err(sb, &args);
sblock = args.offset >> SCOUTFS_BLOCK_SHIFT;
eblock = (args.offset + args.count - 1) >> SCOUTFS_BLOCK_SHIFT;
sblock = args.offset >> SCOUTFS_BLOCK_SM_SHIFT;
eblock = (args.offset + args.count - 1) >> SCOUTFS_BLOCK_SM_SHIFT;
if (sblock > eblock)
return -EINVAL;
@@ -456,23 +463,24 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
trace_scoutfs_ioc_stage(sb, scoutfs_ino(inode), &args);
end_size = args.offset + args.count;
end_size = args.offset + args.length;
/* verify arg constraints that aren't dependent on file */
if (args.count < 0 || (end_size < args.offset) ||
args.offset & SCOUTFS_BLOCK_MASK)
if (args.length < 0 || (end_size < args.offset) ||
args.offset & SCOUTFS_BLOCK_SM_MASK) {
return -EINVAL;
}
if (args.count == 0)
if (args.length == 0)
return 0;
/* the iocb is really only used for the file pointer :P */
init_sync_kiocb(&kiocb, file);
kiocb.ki_pos = args.offset;
kiocb.ki_left = args.count;
kiocb.ki_nbytes = args.count;
kiocb.ki_left = args.length;
kiocb.ki_nbytes = args.length;
iov.iov_base = (void __user *)(unsigned long)args.buf_ptr;
iov.iov_len = args.count;
iov.iov_len = args.length;
ret = mnt_want_write_file(file);
if (ret)
@@ -494,7 +502,7 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
(file->f_flags & (O_APPEND | O_DIRECT | O_DSYNC)) ||
IS_SYNC(file->f_mapping->host) ||
(end_size > isize) ||
((end_size & SCOUTFS_BLOCK_MASK) && (end_size != isize))) {
((end_size & SCOUTFS_BLOCK_SM_MASK) && (end_size != isize))) {
ret = -EINVAL;
goto out;
}
@@ -511,11 +519,11 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
written = 0;
do {
ret = generic_file_buffered_write(&kiocb, &iov, 1, pos, &pos,
args.count, written);
args.length, written);
BUG_ON(ret == -EIOCBQUEUED);
if (ret > 0)
written += ret;
} while (ret > 0 && written < args.count);
} while (ret > 0 && written < args.length);
si->staging = false;
current->backing_dev_info = NULL;
@@ -666,8 +674,7 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
/* setting only so we don't see 0 data seq with nonzero data_version */
set_data_seq = sm.data_version != 0 ? true : false;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq,
SIC_SETATTR_MORE());
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq);
if (ret)
goto unlock;
@@ -759,18 +766,20 @@ out:
* but we don't check that the callers xattr name contains the tag and
* search for it regardless.
*/
static long scoutfs_ioc_find_xattrs(struct file *file, unsigned long arg)
static long scoutfs_ioc_search_xattrs(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_find_xattrs __user *ufx = (void __user *)arg;
struct scoutfs_ioctl_find_xattrs fx;
struct scoutfs_lock *lock = NULL;
struct scoutfs_key last;
struct scoutfs_key key;
struct scoutfs_ioctl_search_xattrs __user *usx = (void __user *)arg;
struct scoutfs_ioctl_search_xattrs sx;
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_srch_rb_root sroot;
struct scoutfs_srch_rb_node *snode;
u64 __user *uinos;
struct rb_node *node;
char *name = NULL;
int total = 0;
u64 hash;
u64 ino;
bool done = false;
u64 prev_ino = 0;
u64 total = 0;
int ret;
if (!(file->f_mode & FMODE_READ)) {
@@ -783,67 +792,73 @@ static long scoutfs_ioc_find_xattrs(struct file *file, unsigned long arg)
goto out;
}
if (copy_from_user(&fx, ufx, sizeof(fx))) {
if (copy_from_user(&sx, usx, sizeof(sx))) {
ret = -EFAULT;
goto out;
}
uinos = (u64 __user *)sx.inodes_ptr;
if (fx.name_bytes > SCOUTFS_XATTR_MAX_NAME_LEN) {
if (sx.name_bytes > SCOUTFS_XATTR_MAX_NAME_LEN) {
ret = -EINVAL;
goto out;
}
name = kmalloc(fx.name_bytes, GFP_KERNEL);
if (sx.nr_inodes == 0 || sx.last_ino < sx.next_ino) {
ret = 0;
goto out;
}
name = kmalloc(sx.name_bytes, GFP_KERNEL);
if (!name) {
ret = -ENOMEM;
goto out;
}
if (copy_from_user(name, (void __user *)fx.name_ptr, fx.name_bytes)) {
if (copy_from_user(name, (void __user *)sx.name_ptr, sx.name_bytes)) {
ret = -EFAULT;
goto out;
}
hash = scoutfs_hash64(name, fx.name_bytes);
scoutfs_xattr_index_key(&key, hash, fx.next_ino, 0);
scoutfs_xattr_index_key(&last, hash, U64_MAX, U64_MAX);
ino = 0;
if (scoutfs_xattr_parse_tags(name, sx.name_bytes, &tgs) < 0 ||
!tgs.srch) {
ret = -EINVAL;
goto out;
}
ret = scoutfs_lock_xattr_index(sb, SCOUTFS_LOCK_READ, 0, hash, &lock);
ret = scoutfs_srch_search_xattrs(sb, &sroot,
scoutfs_hash64(name, sx.name_bytes),
sx.next_ino, sx.last_ino, &done);
if (ret < 0)
goto out;
while (fx.nr_inodes) {
prev_ino = 0;
scoutfs_srch_foreach_rb_node(snode, node, &sroot) {
if (prev_ino == snode->ino)
continue;
ret = scoutfs_forest_next(sb, &key, &last, NULL, lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
if (put_user(snode->ino, uinos + total)) {
ret = -EFAULT;
break;
}
prev_ino = snode->ino;
/* xattrs hashes can collide and add multiple entries */
if (le64_to_cpu(key.skxi_ino) != ino) {
ino = le64_to_cpu(key.skxi_ino);
if (put_user(ino, (u64 __user *)fx.inodes_ptr)) {
ret = -EFAULT;
break;
}
fx.inodes_ptr += sizeof(u64);
fx.nr_inodes--;
total++;
ret = 0;
}
scoutfs_key_inc(&key);
if (++total == sx.nr_inodes)
break;
}
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
sx.output_flags = 0;
if (done && total == sroot.nr)
sx.output_flags |= SCOUTFS_SEARCH_XATTRS_OFLAG_END;
if (put_user(sx.output_flags, &usx->output_flags))
ret = -EFAULT;
else
ret = 0;
scoutfs_srch_destroy_rb_root(&sroot);
out:
kfree(name);
return ret ?: total;
}
@@ -853,6 +868,7 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_ioctl_statfs_more sfm;
int ret;
if (get_user(sfm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
@@ -861,6 +877,12 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
sizeof(struct scoutfs_ioctl_statfs_more));
sfm.fsid = le64_to_cpu(super->hdr.fsid);
sfm.rid = sbi->rid;
sfm.total_meta_blocks = le64_to_cpu(super->total_meta_blocks);
sfm.total_data_blocks = le64_to_cpu(super->total_data_blocks);
ret = scoutfs_client_get_last_seq(sb, &sfm.committed_seq);
if (ret)
return ret;
if (copy_to_user((void __user *)arg, &sfm, sfm.valid_bytes))
return -EFAULT;
@@ -868,6 +890,107 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
return 0;
}
struct copy_alloc_detail_args {
struct scoutfs_ioctl_alloc_detail_entry __user *uade;
u64 nr;
u64 copied;
};
static int copy_alloc_detail_to_user(struct super_block *sb, void *arg,
int owner, u64 id, bool meta, bool avail,
u64 blocks)
{
struct copy_alloc_detail_args *args = arg;
struct scoutfs_ioctl_alloc_detail_entry ade;
if (args->copied == args->nr)
return -EOVERFLOW;
ade.blocks = blocks;
ade.id = id;
ade.meta = !!meta;
ade.avail = !!avail;
if (copy_to_user(&args->uade[args->copied], &ade, sizeof(ade)))
return -EFAULT;
args->copied++;
return 0;
}
static long scoutfs_ioc_alloc_detail(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_alloc_detail __user *uad = (void __user *)arg;
struct scoutfs_ioctl_alloc_detail ad;
struct copy_alloc_detail_args args;
if (copy_from_user(&ad, uad, sizeof(ad)))
return -EFAULT;
args.uade = (struct scoutfs_ioctl_alloc_detail_entry __user *)
(uintptr_t)ad.entries_ptr;
args.nr = ad.entries_nr;
args.copied = 0;
return scoutfs_alloc_foreach(sb, copy_alloc_detail_to_user, &args) ?:
args.copied;
}
static long scoutfs_ioc_move_blocks(struct file *file, unsigned long arg)
{
struct inode *to = file_inode(file);
struct super_block *sb = to->i_sb;
struct scoutfs_ioctl_move_blocks __user *umb = (void __user *)arg;
struct scoutfs_ioctl_move_blocks mb;
struct file *from_file;
struct inode *from;
int ret;
if (copy_from_user(&mb, umb, sizeof(mb)))
return -EFAULT;
if (mb.len == 0)
return 0;
if (mb.from_off + mb.len < mb.from_off ||
mb.to_off + mb.len < mb.to_off)
return -EOVERFLOW;
from_file = fget(mb.from_fd);
if (!from_file)
return -EBADF;
from = file_inode(from_file);
if (from == to) {
ret = -EINVAL;
goto out;
}
if (from->i_sb != sb) {
ret = -EXDEV;
goto out;
}
if (mb.flags & SCOUTFS_IOC_MB_UNKNOWN) {
ret = -EINVAL;
goto out;
}
ret = mnt_want_write_file(file);
if (ret < 0)
goto out;
ret = scoutfs_data_move_blocks(from, mb.from_off, mb.len,
to, mb.to_off, !!(mb.flags & SCOUTFS_IOC_MB_STAGE),
mb.data_version);
mnt_drop_write_file(file);
out:
fput(from_file);
return ret;
}
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
@@ -887,12 +1010,16 @@ long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
return scoutfs_ioc_setattr_more(file, arg);
case SCOUTFS_IOC_LISTXATTR_HIDDEN:
return scoutfs_ioc_listxattr_hidden(file, arg);
case SCOUTFS_IOC_FIND_XATTRS:
return scoutfs_ioc_find_xattrs(file, arg);
case SCOUTFS_IOC_SEARCH_XATTRS:
return scoutfs_ioc_search_xattrs(file, arg);
case SCOUTFS_IOC_STATFS_MORE:
return scoutfs_ioc_statfs_more(file, arg);
case SCOUTFS_IOC_DATA_WAIT_ERR:
return scoutfs_ioc_data_wait_err(file, arg);
case SCOUTFS_IOC_ALLOC_DETAIL:
return scoutfs_ioc_alloc_detail(file, arg);
case SCOUTFS_IOC_MOVE_BLOCKS:
return scoutfs_ioc_move_blocks(file, arg);
}
return -ENOTTY;


@@ -78,7 +78,7 @@ struct scoutfs_ioctl_walk_inodes {
__u8 _pad[11]; /* padded to align walk_inodes_entry total size */
};
enum {
enum scoutfs_ino_walk_seq_type {
SCOUTFS_IOC_WALK_INODES_META_SEQ = 0,
SCOUTFS_IOC_WALK_INODES_DATA_SEQ,
SCOUTFS_IOC_WALK_INODES_UNKNOWN,
@@ -163,7 +163,7 @@ struct scoutfs_ioctl_ino_path_result {
__u64 dir_pos;
__u16 path_bytes;
__u8 _pad[6];
__u8 path[0];
__u8 path[];
};
/* Get a single path from the root to the given inode number */
@@ -176,8 +176,8 @@ struct scoutfs_ioctl_ino_path_result {
* an offline record is left behind to trigger demand staging if the
* file is read.
*
* The starting block offset and number of blocks to release are in
* units 4KB blocks.
* The starting file offset and number of bytes to release must be in
* multiples of 4KB.
*
* The specified range can extend past i_size and can straddle sparse
* regions or blocks that are already offline. The only change it makes
@@ -193,8 +193,8 @@ struct scoutfs_ioctl_ino_path_result {
* presentation of the data in the file.
*/
struct scoutfs_ioctl_release {
__u64 block;
__u64 count;
__u64 offset;
__u64 length;
__u64 data_version;
};
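Purely as illustration of the new byte-granular fields, a minimal user-space sketch of issuing a release follows. It assumes the SCOUTFS_IOC_RELEASE request number defined elsewhere in this header, the usual libc headers, and a writable fd; the 4KB alignment check mirrors the SCOUTFS_BLOCK_SM_MASK tests in the kernel code earlier in this change.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
/* plus this ioctl header, however it is packaged for user space */

/* hypothetical helper: release one 4KB-aligned byte range of an open file */
static int release_range(int fd, uint64_t offset, uint64_t length,
			 uint64_t data_version)
{
	struct scoutfs_ioctl_release rel;

	/* offset and length must be multiples of 4KB or the kernel returns EINVAL */
	if ((offset | length) & 4095)
		return -1;

	memset(&rel, 0, sizeof(rel));
	rel.offset = offset;
	rel.length = length;
	rel.data_version = data_version;

	return ioctl(fd, SCOUTFS_IOC_RELEASE, &rel);
}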
@@ -205,7 +205,7 @@ struct scoutfs_ioctl_stage {
__u64 data_version;
__u64 buf_ptr;
__u64 offset;
__s32 count;
__s32 length;
__u32 _pad;
};
@@ -259,7 +259,7 @@ struct scoutfs_ioctl_data_waiting {
__u8 _pad[6];
};
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U8_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U64_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING _IOR(SCOUTFS_IOCTL_MAGIC, 6, \
struct scoutfs_ioctl_data_waiting)
@@ -279,7 +279,7 @@ struct scoutfs_ioctl_setattr_more {
};
#define SCOUTFS_IOC_SETATTR_MORE_OFFLINE (1 << 0)
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U8_MAX << 1)
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U64_MAX << 1)
#define SCOUTFS_IOC_SETATTR_MORE _IOW(SCOUTFS_IOCTL_MAGIC, 7, \
struct scoutfs_ioctl_setattr_more)
@@ -296,34 +296,57 @@ struct scoutfs_ioctl_listxattr_hidden {
/*
* Return the inode numbers of inodes which might contain the given
* named xattr. The inode may not have a set xattr with that name, the
* caller must check the returned inodes to see if they match.
* xattr. The inode may not have a set xattr with that name, the caller
* must check the returned inodes to see if they match.
*
* @next_ino: The next inode number that could be returned. Initialized
* to 0 when first searching and set to one past the last inode number
* returned to continue searching.
* @name_ptr: The address of the name of the xattr to search for. It does
* not need to be null terminated.
* @inodes_ptr: The address of the array of uint64_t inode numbers in which
* to store inode numbers that may contain the xattr. EFAULT may be returned
* if this address is not naturally aligned.
* @name_bytes: The number of non-null bytes found in the name at name_ptr.
* @last_ino: The last inode number that could be returned. U64_MAX to
* find all inodes.
* @name_ptr: The address of the name of the xattr to search for. It is
* not null terminated.
* @inodes_ptr: The address of the array of uint64_t inode numbers in
* which to store inode numbers that may contain the xattr. EFAULT may
* be returned if this address is not naturally aligned.
* @output_flags: Set as success is returned. If an error is returned
* then this field is undefined and should not be read.
* @nr_inodes: The number of elements in the array found at inodes_ptr.
* @name_bytes: The number of non-null bytes found in the name at
* name_ptr.
*
* This requires the CAP_SYS_ADMIN capability and will return -EPERM if
* it's not granted.
*
* The number of inode numbers stored in the inodes_ptr array is
* returned. If nr_inodes is 0 or last_ino is less than next_ino then 0
* will be immediately returned.
*
* Partial progress can be returned if an error is hit or if nr_inodes
* was larger than the internal limit on the number of inodes returned
* in a search pass. The _END output flag is set if all the results
* including last_ino were searched in this pass.
*
* It's valuable to provide a large inodes array so that all the results
* can be found in one search pass and _END can be set. There are
* significant constant costs for performing each search pass.
*/
struct scoutfs_ioctl_find_xattrs {
struct scoutfs_ioctl_search_xattrs {
__u64 next_ino;
__u64 last_ino;
__u64 name_ptr;
__u64 inodes_ptr;
__u64 output_flags;
__u64 nr_inodes;
__u16 name_bytes;
__u16 nr_inodes;
__u8 _pad[4];
__u8 _pad[6];
};
#define SCOUTFS_IOC_FIND_XATTRS _IOR(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_find_xattrs)
/* set in output_flags if returned inodes reached last_ino */
#define SCOUTFS_SEARCH_XATTRS_OFLAG_END (1ULL << 0)
#define SCOUTFS_IOC_SEARCH_XATTRS _IOR(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_search_xattrs)
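To make the multi-pass protocol described above concrete, here is a rough, non-authoritative user-space sketch that resumes passes by advancing next_ino until the _END output flag is reported. Candidate verification and error reporting are elided, and the libc and ioctl header includes are assumed to be packaged for user space.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
/* plus this ioctl header, however it is packaged for user space */

/* hypothetical helper: run search passes until every inode up to last_ino is covered */
static int search_all_passes(int fd, const char *name, uint64_t *inos, uint64_t nr)
{
	struct scoutfs_ioctl_search_xattrs sx;
	int found;

	memset(&sx, 0, sizeof(sx));
	sx.last_ino = UINT64_MAX;
	sx.name_ptr = (uintptr_t)name;
	sx.name_bytes = strlen(name);
	sx.inodes_ptr = (uintptr_t)inos;
	sx.nr_inodes = nr;

	for (;;) {
		found = ioctl(fd, SCOUTFS_IOC_SEARCH_XATTRS, &sx);
		if (found < 0)
			return -1;

		/* ... verify each of the 'found' candidate inodes here ... */

		if (sx.output_flags & SCOUTFS_SEARCH_XATTRS_OFLAG_END)
			return 0;
		if (found == 0)
			return 0;	/* defensive: nothing to resume from */
		sx.next_ino = inos[found - 1] + 1;
	}
}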
/*
* Give the user information about the filesystem.
@@ -335,13 +358,20 @@ struct scoutfs_ioctl_find_xattrs {
* field is set if all of its bytes are within the valid_bytes that the
* kernel set on return.
*
* @committed_seq: All seqs up to and including this seq have been
* committed. Can be compared with meta_seq and data_seq from inodes in
* stat_more to discover if changes have been committed to disk.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_statfs_more {
__u64 valid_bytes;
__u64 fsid;
__u64 rid;
} __packed;
__u64 committed_seq;
__u64 total_meta_blocks;
__u64 total_data_blocks;
};
#define SCOUTFS_IOC_STATFS_MORE _IOR(SCOUTFS_IOCTL_MAGIC, 10, \
struct scoutfs_ioctl_statfs_more)
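A brief sketch of the valid_bytes handshake described above, assuming user space includes this header and the usual libc headers: the caller states how large a struct it understands and only trusts fields that fall entirely within the valid_bytes the kernel copies back.

#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
/* plus this ioctl header, however it is packaged for user space */

/* hypothetical helper: read committed_seq only if the kernel filled it in */
static int read_committed_seq(int fd, uint64_t *seq)
{
	struct scoutfs_ioctl_statfs_more sfm;

	memset(&sfm, 0, sizeof(sfm));
	sfm.valid_bytes = sizeof(sfm);	/* how much this caller understands */

	if (ioctl(fd, SCOUTFS_IOC_STATFS_MORE, &sfm) < 0)
		return -1;

	/* a field is valid only if all of its bytes are within valid_bytes */
	if (sfm.valid_bytes < offsetof(struct scoutfs_ioctl_statfs_more, committed_seq) +
			      sizeof(sfm.committed_seq))
		return -1;

	*seq = sfm.committed_seq;
	return 0;
}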
@@ -364,4 +394,86 @@ struct scoutfs_ioctl_data_wait_err {
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOR(SCOUTFS_IOCTL_MAGIC, 11, \
struct scoutfs_ioctl_data_wait_err)
struct scoutfs_ioctl_alloc_detail {
__u64 entries_ptr;
__u64 entries_nr;
};
struct scoutfs_ioctl_alloc_detail_entry {
__u64 id;
__u64 blocks;
__u8 type;
__u8 meta:1,
avail:1;
__u8 __bit_pad:6;
__u8 __pad[6];
};
#define SCOUTFS_IOC_ALLOC_DETAIL _IOR(SCOUTFS_IOCTL_MAGIC, 12, \
struct scoutfs_ioctl_alloc_detail)
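As an aside, a hedged user-space sketch of draining the alloc detail entries, growing the array and retrying when the kernel reports EOVERFLOW because more allocators exist than entries_nr; the helper name and starting size are invented.

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
/* plus this ioctl header, however it is packaged for user space */

/* hypothetical helper: fetch all allocator entries, retrying on EOVERFLOW */
static int fetch_alloc_detail(int fd, struct scoutfs_ioctl_alloc_detail_entry **out)
{
	struct scoutfs_ioctl_alloc_detail ad;
	struct scoutfs_ioctl_alloc_detail_entry *ents = NULL;
	uint64_t nr = 64;
	int ret;

	for (;;) {
		free(ents);
		ents = calloc(nr, sizeof(*ents));
		if (!ents)
			return -1;

		ad.entries_ptr = (uintptr_t)ents;
		ad.entries_nr = nr;

		ret = ioctl(fd, SCOUTFS_IOC_ALLOC_DETAIL, &ad);
		if (ret >= 0 || errno != EOVERFLOW)
			break;
		nr *= 2;	/* more allocators than entries, try a larger array */
	}

	if (ret < 0) {
		free(ents);
		return -1;
	}

	*out = ents;
	return ret;	/* number of entries copied */
}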
/*
* Move extents from one regular file to another at a different offset,
* on the same file system.
*
* from_fd specifies the source file and the ioctl is called on the
* destination file. Both files must have write access. from_off specifies
* the byte offset in the source, to_off is the byte offset in the
* destination, and len is the number of bytes in the region to move. All of
* the offsets and lengths must be in multiples of 4KB, except in the case
* where the from_off + len ends at the i_size of the source
* file. data_version is only used when STAGE flag is set (see below). flags
* field is currently only used to optionally specify STAGE behavior.
*
* This interface only moves extents which are block granular, it does
* not perform RMW of sub-block byte extents and it does not overwrite
* existing extents in the destination. It will split extents in the
* source.
*
* Only extents within i_size on the source are moved. The destination
* i_size will be updated if extents are moved beyond its current
* i_size. The i_size update will maintain final partial blocks in the
* source.
*
* If STAGE flag is not set, it will return an error if either of the files
* have offline extents. It will return 0 when all of the extents in the
* source region have been moved to the destination. Moving extents updates
* the ctime, mtime, meta_seq, data_seq, and data_version fields of both the
* source and destination inodes. If an error is returned then partial
* progress may have been made and inode fields may have been updated.
*
* If STAGE flag is set, as above except destination range must be in an
* offline extent. Fields are updated only for source inode.
*
* Errors specific to this interface include:
*
* EINVAL: from_off, len, or to_off aren't a multiple of 4KB; the source
* and destination files are the same inode; either the source or
* destination is not a regular file; the destination file has
* an existing overlapping extent (if STAGE flag not set); the
* destination range is not in an offline extent (if STAGE set).
* EOVERFLOW: either from_off + len or to_off + len exceeded 64bits.
* EBADF: from_fd isn't a valid open file descriptor.
* EXDEV: the source and destination files are in different filesystems.
* EISDIR: either the source or destination is a directory.
* ENODATA: either the source or destination file has offline extents and
* STAGE flag is not set.
* ESTALE: data_version does not match destination data_version.
*/
#define SCOUTFS_IOC_MB_STAGE (1 << 0)
#define SCOUTFS_IOC_MB_UNKNOWN (U64_MAX << 1)
struct scoutfs_ioctl_move_blocks {
__u64 from_fd;
__u64 from_off;
__u64 len;
__u64 to_off;
__u64 data_version;
__u64 flags;
};
#define SCOUTFS_IOC_MOVE_BLOCKS _IOR(SCOUTFS_IOCTL_MAGIC, 13, \
struct scoutfs_ioctl_move_blocks)
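Rounding out the documentation above, a non-authoritative sketch of a caller moving a fully aligned, online byte range between two files on the same mount; the fds and offsets are placeholders supplied by the caller, and the includes are assumed to be available to user space.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
/* plus this ioctl header, however it is packaged for user space */

/* hypothetical helper: move a 4KB-aligned byte range between two files */
static int move_range(int from_fd, int to_fd, uint64_t from_off,
		      uint64_t to_off, uint64_t len)
{
	struct scoutfs_ioctl_move_blocks mb;

	memset(&mb, 0, sizeof(mb));
	mb.from_fd = from_fd;
	mb.from_off = from_off;
	mb.to_off = to_off;
	mb.len = len;
	mb.flags = 0;	/* no STAGE: errors if either file has offline extents */

	/* the ioctl is issued on the destination file */
	return ioctl(to_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
}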
#endif

kmod/src/item.c (new file, 2543 lines)

File diff suppressed because it is too large.

kmod/src/item.h (new file, 39 lines)

@@ -0,0 +1,39 @@
#ifndef _SCOUTFS_ITEM_H_
#define _SCOUTFS_ITEM_H_
int scoutfs_item_lookup(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_lookup_exact(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock);
int scoutfs_item_next(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *last, void *val, int val_len,
struct scoutfs_lock *lock);
int scoutfs_item_dirty(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock);
int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_delete_force(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_lock *lock);
u64 scoutfs_item_dirty_pages(struct super_block *sb);
int scoutfs_item_write_dirty(struct super_block *sb);
int scoutfs_item_write_done(struct super_block *sb);
bool scoutfs_item_range_cached(struct super_block *sb,
struct scoutfs_key *start,
struct scoutfs_key *end, bool *dirty);
void scoutfs_item_invalidate(struct super_block *sb, struct scoutfs_key *start,
struct scoutfs_key *end);
int scoutfs_item_setup(struct super_block *sb);
void scoutfs_item_destroy(struct super_block *sb);
#endif


@@ -78,6 +78,14 @@ static inline void scoutfs_key_set_zeros(struct scoutfs_key *key)
key->_sk_second = 0;
key->_sk_third = 0;
key->_sk_fourth = 0;
memset(key->__pad, 0, sizeof(key->__pad));
}
static inline bool scoutfs_key_is_zeros(struct scoutfs_key *key)
{
return key->sk_zone == 0 && key->_sk_first == 0 && key->sk_type == 0 &&
key->_sk_second == 0 && key->_sk_third == 0 &&
key->_sk_fourth == 0;
}
static inline void scoutfs_key_copy_or_zeros(struct scoutfs_key *dst,
@@ -97,6 +105,7 @@ static inline void scoutfs_key_set_ones(struct scoutfs_key *key)
key->_sk_second = cpu_to_le64(U64_MAX);
key->_sk_third = cpu_to_le64(U64_MAX);
key->_sk_fourth = U8_MAX;
memset(key->__pad, 0, sizeof(key->__pad));
}
/*
@@ -179,29 +188,19 @@ static inline void scoutfs_key_dec(struct scoutfs_key *key)
key->sk_zone--;
}
static inline void scoutfs_key_to_be(struct scoutfs_key_be *be,
struct scoutfs_key *key)
{
BUILD_BUG_ON(sizeof(struct scoutfs_key_be) !=
sizeof(struct scoutfs_key));
/*
* Some key types are used by multiple subsystems and shouldn't have
* duplicate private key init functions.
*/
be->sk_zone = key->sk_zone;
be->_sk_first = le64_to_be64(key->_sk_first);
be->sk_type = key->sk_type;
be->_sk_second = le64_to_be64(key->_sk_second);
be->_sk_third = le64_to_be64(key->_sk_third);
be->_sk_fourth = key->_sk_fourth;
}
static inline void scoutfs_key_from_be(struct scoutfs_key *key,
struct scoutfs_key_be *be)
static inline void scoutfs_key_init_log_trees(struct scoutfs_key *key,
u64 rid, u64 nr)
{
key->sk_zone = be->sk_zone;
key->_sk_first = be64_to_le64(be->_sk_first);
key->sk_type = be->sk_type;
key->_sk_second = be64_to_le64(be->_sk_second);
key->_sk_third = be64_to_le64(be->_sk_third);
key->_sk_fourth = be->_sk_fourth;
*key = (struct scoutfs_key) {
.sk_zone = SCOUTFS_LOG_TREES_ZONE,
.sklt_rid = cpu_to_le64(rid),
.sklt_nr = cpu_to_le64(nr),
};
}
#endif


@@ -1,12 +0,0 @@
#ifndef _SCOUTFS_KVEC_H_
#define _SCOUTFS_KVEC_H_
#include <linux/uio.h>
static inline void kvec_init(struct kvec *kv, void *base, size_t len)
{
kv->iov_base = base;
kv->iov_len = len;
}
#endif


@@ -21,7 +21,6 @@
#include "super.h"
#include "lock.h"
#include "forest.h"
#include "scoutfs_trace.h"
#include "msg.h"
#include "cmp.h"
@@ -34,6 +33,7 @@
#include "client.h"
#include "data.h"
#include "xattr.h"
#include "item.h"
/*
* scoutfs uses a lock service to manage item cache consistency between
@@ -65,7 +65,7 @@
* relative to that lock state we resend.
*/
#define GRACE_PERIOD_KT ms_to_ktime(2)
#define GRACE_PERIOD_KT ms_to_ktime(10)
/*
* allocated per-super, freed on unmount.
@@ -80,6 +80,12 @@ struct lock_info {
struct list_head lru_list;
unsigned long long lru_nr;
struct workqueue_struct *workq;
struct work_struct grant_work;
struct list_head grant_list;
struct delayed_work inv_dwork;
struct list_head inv_list;
struct work_struct shrink_work;
struct list_head shrink_list;
atomic64_t next_refresh_gen;
struct dentry *tseq_dentry;
struct scoutfs_tseq_tree tseq_tree;
@@ -88,19 +94,17 @@ struct lock_info {
#define DECLARE_LOCK_INFO(sb, name) \
struct lock_info *name = SCOUTFS_SB(sb)->lock_info
static void scoutfs_lock_shrink_worker(struct work_struct *work);
static bool lock_mode_invalid(int mode)
static bool lock_mode_invalid(enum scoutfs_lock_mode mode)
{
return (unsigned)mode >= SCOUTFS_LOCK_INVALID;
}
static bool lock_mode_can_read(int mode)
static bool lock_mode_can_read(enum scoutfs_lock_mode mode)
{
return mode == SCOUTFS_LOCK_READ || mode == SCOUTFS_LOCK_WRITE;
}
static bool lock_mode_can_write(int mode)
static bool lock_mode_can_write(enum scoutfs_lock_mode mode)
{
return mode == SCOUTFS_LOCK_WRITE || mode == SCOUTFS_LOCK_WRITE_ONLY;
}
@@ -143,7 +147,7 @@ static void invalidate_inode(struct super_block *sb, u64 ino)
* leave cached items behind in the case of invalidating to a read lock.
*/
static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
int prev, int mode)
enum scoutfs_lock_mode prev, enum scoutfs_lock_mode mode)
{
struct scoutfs_lock_coverage *cov;
struct scoutfs_lock_coverage *tmp;
@@ -156,15 +160,13 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
BUG_ON(!(prev == SCOUTFS_LOCK_WRITE && mode == SCOUTFS_LOCK_READ) &&
mode != SCOUTFS_LOCK_NULL);
/* any transition from a mode allowed to dirty items has to write */
if (lock_mode_can_write(prev) && scoutfs_trans_has_dirty(sb)) {
/* sync when a write lock could have dirtied the current transaction */
if (lock_mode_can_write(prev) &&
(lock->dirty_trans_seq == scoutfs_trans_sample_seq(sb))) {
scoutfs_inc_counter(sb, lock_invalidate_sync);
ret = scoutfs_trans_sync(sb, 1);
if (ret < 0)
return ret;
if (ret > 0) {
scoutfs_add_counter(sb, lock_invalidate_commit, ret);
ret = 0;
}
}
/* have to invalidate if we're not in the only usable case */
@@ -193,6 +195,8 @@ retry:
ino++;
}
}
scoutfs_item_invalidate(sb, &lock->start, &lock->end);
}
return ret;
@@ -220,9 +224,11 @@ static void lock_free(struct lock_info *linfo, struct scoutfs_lock *lock)
BUG_ON(!RB_EMPTY_NODE(&lock->node));
BUG_ON(!RB_EMPTY_NODE(&lock->range_node));
BUG_ON(!list_empty(&lock->lru_head));
BUG_ON(!list_empty(&lock->grant_head));
BUG_ON(!list_empty(&lock->inv_head));
BUG_ON(!list_empty(&lock->shrink_head));
BUG_ON(!list_empty(&lock->cov_list));
scoutfs_forest_clear_lock(sb, lock);
kfree(lock);
}
@@ -245,7 +251,9 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
RB_CLEAR_NODE(&lock->node);
RB_CLEAR_NODE(&lock->range_node);
INIT_LIST_HEAD(&lock->lru_head);
INIT_LIST_HEAD(&lock->grant_head);
INIT_LIST_HEAD(&lock->inv_head);
INIT_LIST_HEAD(&lock->shrink_head);
spin_lock_init(&lock->cov_list_lock);
INIT_LIST_HEAD(&lock->cov_list);
@@ -253,21 +261,22 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
lock->end = *end;
lock->sb = sb;
init_waitqueue_head(&lock->waitq);
INIT_WORK(&lock->shrink_work, scoutfs_lock_shrink_worker);
lock->mode = SCOUTFS_LOCK_NULL;
atomic64_set(&lock->forest_bloom_nr, 0);
trace_scoutfs_lock_alloc(sb, lock);
return lock;
}
static void lock_inc_count(unsigned int *counts, int mode)
static void lock_inc_count(unsigned int *counts, enum scoutfs_lock_mode mode)
{
BUG_ON(mode < 0 || mode >= SCOUTFS_LOCK_NR_MODES);
counts[mode]++;
}
static void lock_dec_count(unsigned int *counts, int mode)
static void lock_dec_count(unsigned int *counts, enum scoutfs_lock_mode mode)
{
BUG_ON(mode < 0 || mode >= SCOUTFS_LOCK_NR_MODES);
counts[mode]--;
@@ -279,7 +288,7 @@ static void lock_dec_count(unsigned int *counts, int mode)
*/
static bool lock_counts_match(int granted, unsigned int *counts)
{
int mode;
enum scoutfs_lock_mode mode;
for (mode = 0; mode < SCOUTFS_LOCK_NR_MODES; mode++) {
if (counts[mode] && !lock_modes_match(granted, mode))
@@ -296,7 +305,7 @@ static bool lock_counts_match(int granted, unsigned int *counts)
*/
static bool lock_count_match_exists(int desired, unsigned int *counts)
{
int mode;
enum scoutfs_lock_mode mode;
for (mode = 0; mode < SCOUTFS_LOCK_NR_MODES; mode++) {
if (counts[mode] && lock_modes_match(desired, mode))
@@ -312,7 +321,7 @@ static bool lock_count_match_exists(int desired, unsigned int *counts)
*/
static bool lock_idle(struct scoutfs_lock *lock)
{
int mode;
enum scoutfs_lock_mode mode;
if (lock->request_pending || lock->invalidate_pending)
return false;
@@ -540,11 +549,80 @@ static void extend_grace(struct super_block *sb, struct scoutfs_lock *lock)
lock->grace_deadline = ktime_add(now, GRACE_PERIOD_KT);
}
static void queue_grant_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->grant_list) && !linfo->shutdown)
queue_work(linfo->workq, &linfo->grant_work);
}
/*
* The client is receiving a lock response message from the server.
* This can be reordered with incoming invalidation requests from the
* server so we have to be careful to only set the new mode once the old
* mode matches.
* We immediately queue work on the assumption that the caller might
* have made a change (set a lock mode) which can let one of the
* invalidating locks make forward progress, even if other locks are
* waiting for their grace period to elapse. It's a trade-off between
* invalidation latency and burning cpu repeatedly finding that locks
* are still in their grace period.
*/
static void queue_inv_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->inv_list) && !linfo->shutdown)
mod_delayed_work(linfo->workq, &linfo->inv_dwork, 0);
}
/*
* The given lock is processing a received a grant response. Trigger a
* bug if the cache is inconsistent.
*
* We only have two modes that can create dirty items. We can't have
* dirty items when transitioning from write_only to write because the
* writer can't trust the cached items for reading. And we
* don't currently transition directly from write to write_only, we
* first go through null. So if we have dirty items as we're granted a
* mode it's always incorrect.
*
* And we can't have cached items that we're going to use for reading if
* the previous mode didn't allow reading.
*
* Inconsistencies have come from all sorts of bugs: invalidation missed
* items, the cache was populated outside of locking coverage, lock
* holders performed the wrong item operations under their lock,
* overlapping locks, out of order granting or invalidating, etc.
*/
static void bug_on_inconsistent_grant_cache(struct super_block *sb,
struct scoutfs_lock *lock,
int old_mode, int new_mode)
{
bool cached;
bool dirty;
cached = scoutfs_item_range_cached(sb, &lock->start, &lock->end,
&dirty);
if (dirty ||
(cached && (!lock_mode_can_read(old_mode) ||
!lock_mode_can_read(new_mode)))) {
scoutfs_err(sb, "granted lock item cache inconsistency, cached %u dirty %u old_mode %d new_mode %d: start "SK_FMT" end "SK_FMT" refresh_gen %llu mode %u waiters: rd %u wr %u wo %u users: rd %u wr %u wo %u",
cached, dirty, old_mode, new_mode, SK_ARG(&lock->start),
SK_ARG(&lock->end), lock->refresh_gen, lock->mode,
lock->waiters[SCOUTFS_LOCK_READ],
lock->waiters[SCOUTFS_LOCK_WRITE],
lock->waiters[SCOUTFS_LOCK_WRITE_ONLY],
lock->users[SCOUTFS_LOCK_READ],
lock->users[SCOUTFS_LOCK_WRITE],
lock->users[SCOUTFS_LOCK_WRITE_ONLY]);
BUG();
}
}
/*
* Each lock has received a grant response message from the server.
*
* Grant responses can be reordered with incoming invalidation requests
* from the server so we have to be careful to only set the new mode
* once the old mode matches.
*
* We extend the grace period as we grant the lock if there is a waiting
* locker who can use the lock. This stops invalidation from pulling
@@ -555,6 +633,58 @@ static void extend_grace(struct super_block *sb, struct scoutfs_lock *lock)
* against the invalidation. In that case they'd extend the grace
* period anyway as they unlock.
*/
static void lock_grant_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info,
grant_work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
scoutfs_inc_counter(sb, lock_grant_work);
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->grant_list, grant_head) {
nl = &lock->grant_nl;
/* wait for reordered invalidation to finish */
if (lock->mode != nl->old_mode)
continue;
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode,
nl->new_mode);
if (!lock_mode_can_read(nl->old_mode) &&
lock_mode_can_read(nl->new_mode)) {
lock->refresh_gen =
atomic64_inc_return(&linfo->next_refresh_gen);
}
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_version = le64_to_cpu(nl->write_version);
if (lock_count_match_exists(nl->new_mode, lock->waiters))
extend_grace(sb, lock);
trace_scoutfs_lock_granted(sb, lock);
list_del_init(&lock->grant_head);
wake_up(&lock->waitq);
put_lock(linfo, lock);
}
/* invalidations might be waiting for our reordered grant */
queue_inv_work(linfo);
spin_unlock(&linfo->lock);
}
/*
* The client is receiving a grant response message from the server. We
* find the lock, record the response, and add it to the list for grant
* work to process.
*/
int scoutfs_lock_grant_response(struct super_block *sb,
struct scoutfs_net_lock *nl)
{
@@ -568,34 +698,12 @@ int scoutfs_lock_grant_response(struct super_block *sb,
/* lock must already be busy with request_pending */
lock = lock_lookup(sb, &nl->key, NULL);
BUG_ON(!lock);
trace_scoutfs_lock_grant_response(sb, lock);
BUG_ON(!lock->request_pending);
trace_scoutfs_lock_grant_response(sb, lock);
/* resolve unlikely work reordering with invalidation request */
while (lock->mode != nl->old_mode) {
spin_unlock(&linfo->lock);
/* implicit read barrier from waitq locks */
wait_event(lock->waitq, lock->mode == nl->old_mode);
spin_lock(&linfo->lock);
}
if (!lock_mode_can_read(nl->old_mode) &&
lock_mode_can_read(nl->new_mode)) {
lock->refresh_gen =
atomic64_inc_return(&linfo->next_refresh_gen);
}
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_version = le64_to_cpu(nl->write_version);
if (lock_count_match_exists(nl->new_mode, lock->waiters))
extend_grace(sb, lock);
trace_scoutfs_lock_granted(sb, lock);
wake_up(&lock->waitq);
put_lock(linfo, lock);
lock->grant_nl = *nl;
list_add_tail(&lock->grant_head, &linfo->grant_list);
queue_grant_work(linfo);
spin_unlock(&linfo->lock);
@@ -603,34 +711,9 @@ int scoutfs_lock_grant_response(struct super_block *sb,
}
/*
* Invalidation waits until the old mode indicates that we've resolved
* unlikely races with reordered grant responses from the server and
* until the new mode satisfies active users.
*
* Once it's safe to proceed we set the lock mode here under the lock to
* prevent additional users of the old mode while we're invalidating.
*/
static bool lock_invalidate_safe(struct lock_info *linfo,
struct scoutfs_lock *lock,
int old_mode, int new_mode)
{
bool safe;
spin_lock(&linfo->lock);
safe = (lock->mode == old_mode) &&
lock_counts_match(new_mode, lock->users);
if (safe)
lock->mode = new_mode;
spin_unlock(&linfo->lock);
return safe;
}
/*
* The client is receiving a lock invalidation request from the server
* Each lock has received a lock invalidation request from the server
* which specifies a new mode for the lock. The server will only send
* one invalidation request at a time. This is executing in a blocking
* net receive work context.
* one invalidation request at a time for each lock.
*
* This is an unsolicited request from the server so it can arrive at
* any time after we make the server aware of the lock by initially
@@ -647,70 +730,134 @@ static bool lock_invalidate_safe(struct lock_info *linfo,
* invalidate once the lock mode matches what the server told us to
* invalidate.
*
* We delay invalidation processing until a grace period has elapsed since
* the last unlock. The intent is to let users do a reasonable batch of
* work before dropping the lock. Continuous unlocking can continuously
* extend the deadline.
* We delay invalidation processing until a grace period has elapsed
* since the last unlock. The intent is to let users do a reasonable
* batch of work before dropping the lock. Continuous unlocking can
* continuously extend the deadline.
*
* Before we start invalidating the lock we set the lock to the new
* mode, preventing further incompatible users of the old mode from
* using the lock while we're invalidating.
*
* This does a lot of serialized inode invalidation in one context and
* performs a lot of repeated calls to sync. It would be nice to get
* some concurrent inode invalidation and to more carefully only call
* sync when needed.
*/
static void lock_invalidate_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info,
inv_dwork.work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
unsigned long delay = MAX_JIFFY_OFFSET;
ktime_t now = ktime_get();
ktime_t deadline;
LIST_HEAD(ready);
u64 net_id;
int ret;
scoutfs_inc_counter(sb, lock_invalidate_work);
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->inv_list, inv_head) {
nl = &lock->inv_nl;
/* wait for reordered grant to finish */
if (lock->mode != nl->old_mode)
continue;
/* wait until incompatible holders unlock */
if (!lock_counts_match(nl->new_mode, lock->users))
continue;
/* skip if grace hasn't elapsed, record earliest */
deadline = lock->grace_deadline;
if (!linfo->shutdown && ktime_before(now, deadline)) {
delay = min(delay,
nsecs_to_jiffies(ktime_to_ns(
ktime_sub(deadline, now))));
scoutfs_inc_counter(linfo->sb, lock_grace_wait);
continue;
}
/* set the new mode, no incompatible users during inval */
lock->mode = nl->new_mode;
/* move everyone that's ready to our private list */
list_move_tail(&lock->inv_head, &ready);
}
spin_unlock(&linfo->lock);
if (list_empty(&ready))
goto out;
/* invalidate each lock that's now ready */
list_for_each_entry(lock, &ready, inv_head) {
nl = &lock->inv_nl;
net_id = lock->inv_net_id;
ret = lock_invalidate(sb, lock, nl->old_mode, nl->new_mode);
BUG_ON(ret);
/* respond with the key and modes from the request */
ret = scoutfs_client_lock_response(sb, net_id, nl);
BUG_ON(ret);
scoutfs_inc_counter(sb, lock_invalidate_response);
}
/* and finish all the invalidated locks */
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &ready, inv_head) {
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
trace_scoutfs_lock_invalidated(sb, lock);
wake_up(&lock->waitq);
put_lock(linfo, lock);
}
/* grant might have been waiting for invalidate request */
queue_grant_work(linfo);
spin_unlock(&linfo->lock);
out:
/* queue delayed work if invalidations waiting on grace deadline */
if (delay != MAX_JIFFY_OFFSET)
queue_delayed_work(linfo->workq, &linfo->inv_dwork, delay);
}
/*
* Record an incoming invalidate request from the server and add its lock
* to the list for processing.
*
* This is trusting the server and will crash if it's sent bad requests :/
*/
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
ktime_t deadline;
bool grace_waited = false;
int ret;
scoutfs_inc_counter(sb, lock_invalidate_request);
spin_lock(&linfo->lock);
lock = get_lock(sb, &nl->key);
if (lock) {
BUG_ON(lock->invalidate_pending); /* XXX trusting server :/ */
lock->invalidate_pending = 1;
deadline = lock->grace_deadline;
trace_scoutfs_lock_invalidate_request(sb, lock);
}
spin_unlock(&linfo->lock);
BUG_ON(!lock);
/* wait for a grace period after the most recent unlock */
while (ktime_before(ktime_get(), deadline)) {
grace_waited = true;
scoutfs_inc_counter(linfo->sb, lock_grace_wait);
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_hrtimeout(&deadline, HRTIMER_MODE_ABS);
spin_lock(&linfo->lock);
deadline = lock->grace_deadline;
spin_unlock(&linfo->lock);
if (lock) {
BUG_ON(lock->invalidate_pending);
lock->invalidate_pending = 1;
lock->inv_nl = *nl;
lock->inv_net_id = net_id;
list_add_tail(&lock->inv_head, &linfo->inv_list);
trace_scoutfs_lock_invalidate_request(sb, lock);
queue_inv_work(linfo);
}
if (grace_waited)
scoutfs_inc_counter(linfo->sb, lock_grace_elapsed);
/* sets the lock mode to prevent use of old mode during invalidate */
wait_event(lock->waitq, lock_invalidate_safe(linfo, lock, nl->old_mode,
nl->new_mode));
ret = lock_invalidate(sb, lock, nl->old_mode, nl->new_mode);
BUG_ON(ret);
/* respond with the key and modes from the request */
ret = scoutfs_client_lock_response(sb, net_id, nl);
BUG_ON(ret);
scoutfs_inc_counter(sb, lock_invalidate_response);
spin_lock(&linfo->lock);
lock->invalidate_pending = 0;
trace_scoutfs_lock_invalidated(sb, lock);
wake_up(&lock->waitq);
put_lock(linfo, lock);
spin_unlock(&linfo->lock);
return 0;
@@ -749,6 +896,7 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
for (i = 0; lock && i < SCOUTFS_NET_LOCK_MAX_RECOVER_NR; i++) {
nlr->locks[i].key = lock->start;
nlr->locks[i].write_version = cpu_to_le64(lock->write_version);
nlr->locks[i].old_mode = lock->mode;
nlr->locks[i].new_mode = lock->mode;
@@ -769,7 +917,7 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
}
static bool lock_wait_cond(struct super_block *sb, struct scoutfs_lock *lock,
int mode)
enum scoutfs_lock_mode mode)
{
DECLARE_LOCK_INFO(sb, linfo);
bool wake;
@@ -803,7 +951,7 @@ static bool lock_flags_invalid(int flags)
* won't process our request until it receives our invalidation
* response.
*/
static int lock_key_range(struct super_block *sb, int mode, int flags,
static int lock_key_range(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_key *start, struct scoutfs_key *end,
struct scoutfs_lock **ret_lock)
{
@@ -911,7 +1059,7 @@ out_unlock:
return ret;
}
int scoutfs_lock_ino(struct super_block *sb, int mode, int flags, u64 ino,
int scoutfs_lock_ino(struct super_block *sb, enum scoutfs_lock_mode mode, int flags, u64 ino,
struct scoutfs_lock **ret_lock)
{
struct scoutfs_key start;
@@ -936,7 +1084,7 @@ int scoutfs_lock_ino(struct super_block *sb, int mode, int flags, u64 ino,
* is incremented as new locks are acquired and then indicates that an
* old inode with a smaller refresh_gen needs to be refreshed.
*/
int scoutfs_lock_inode(struct super_block *sb, int mode, int flags,
int scoutfs_lock_inode(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct inode *inode, struct scoutfs_lock **lock)
{
int ret;
@@ -999,7 +1147,7 @@ static void swap_arg(void *A, void *B, int size)
*
* (pretty great collision with d_lock() here)
*/
int scoutfs_lock_inodes(struct super_block *sb, int mode, int flags,
int scoutfs_lock_inodes(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct inode *a, struct scoutfs_lock **a_lock,
struct inode *b, struct scoutfs_lock **b_lock,
struct inode *c, struct scoutfs_lock **c_lock,
@@ -1047,7 +1195,7 @@ int scoutfs_lock_inodes(struct super_block *sb, int mode, int flags,
/*
* The rename lock is magical because it's global.
*/
int scoutfs_lock_rename(struct super_block *sb, int mode, int flags,
int scoutfs_lock_rename(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock)
{
struct scoutfs_key key = {
@@ -1094,7 +1242,7 @@ void scoutfs_lock_get_index_item_range(u8 type, u64 major, u64 ino,
* Lock the given index item. We use the index masks to calculate the
* start and end key values that are covered by the lock.
*/
int scoutfs_lock_inode_index(struct super_block *sb, int mode,
int scoutfs_lock_inode_index(struct super_block *sb, enum scoutfs_lock_mode mode,
u8 type, u64 major, u64 ino,
struct scoutfs_lock **ret_lock)
{
@@ -1106,24 +1254,6 @@ int scoutfs_lock_inode_index(struct super_block *sb, int mode,
return lock_key_range(sb, mode, 0, &start, &end, ret_lock);
}
/*
* Today we lock a hash value entirely. If we went to finer grained ino
* locking as well we'd need to check the manifest to find the next
* possible ino to lock so that we didn't try to iterate over all of
* them.
*/
int scoutfs_lock_xattr_index(struct super_block *sb, int mode, int flags,
u64 hash, struct scoutfs_lock **ret_lock)
{
struct scoutfs_key start;
struct scoutfs_key end;
scoutfs_xattr_index_key(&start, hash, 0, 0);
scoutfs_xattr_index_key(&end, hash, U64_MAX, U64_MAX);
return lock_key_range(sb, mode, flags, &start, &end, ret_lock);
}
/*
* The rid lock protects a mount's private persistent items in the rid
* zone. It's held for the duration of the mount. It lets the mount
@@ -1135,7 +1265,7 @@ int scoutfs_lock_xattr_index(struct super_block *sb, int mode, int flags,
* able to. Maybe we have a bunch free and they're trying to allocate
* and are getting ENOSPC.
*/
int scoutfs_lock_rid(struct super_block *sb, int mode, int flags,
int scoutfs_lock_rid(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 rid, struct scoutfs_lock **lock)
{
struct scoutfs_key start;
@@ -1156,7 +1286,7 @@ int scoutfs_lock_rid(struct super_block *sb, int mode, int flags,
* As we unlock we always extend the grace period to give the caller
* another pass at the lock before it's invalidated.
*/
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, int mode)
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scoutfs_lock_mode mode)
{
DECLARE_LOCK_INFO(sb, linfo);
@@ -1169,9 +1299,12 @@ void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, int mode)
lock_dec_count(lock->users, mode);
extend_grace(sb, lock);
if (lock_mode_can_write(mode))
lock->dirty_trans_seq = scoutfs_trans_sample_seq(sb);
trace_scoutfs_lock_unlock(sb, lock);
wake_up(&lock->waitq);
queue_inv_work(linfo);
put_lock(linfo, lock);
spin_unlock(&linfo->lock);
@@ -1246,7 +1379,7 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
* the mode and keys from changing.
*/
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
int mode)
enum scoutfs_lock_mode mode)
{
signed char lock_mode = ACCESS_ONCE(lock->mode);
@@ -1256,38 +1389,50 @@ bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
}
/*
* The shrink callback got the lock, marked it request_pending, and
* handed it off to us. We kick off a null request and the lock will
* be freed by the response once all users drain. If this races with
* The shrink callback got the lock, marked it request_pending, and put
* it on the shrink list. We send a null request and the lock will be
* freed by the response once all users drain. If this races with
* invalidation then the server will only send the grant response once
* the invalidation is finished.
*/
static void scoutfs_lock_shrink_worker(struct work_struct *work)
static void lock_shrink_worker(struct work_struct *work)
{
struct scoutfs_lock *lock = container_of(work, struct scoutfs_lock,
shrink_work);
struct super_block *sb = lock->sb;
DECLARE_LOCK_INFO(sb, linfo);
struct lock_info *linfo = container_of(work, struct lock_info,
shrink_work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
LIST_HEAD(list);
int ret;
/* unlocked lock access, but should be stable since we queued */
nl.key = lock->start;
nl.old_mode = lock->mode;
nl.new_mode = SCOUTFS_LOCK_NULL;
scoutfs_inc_counter(sb, lock_shrink_work);
ret = scoutfs_client_lock_request(sb, &nl);
if (ret) {
/* oh well, not freeing */
scoutfs_inc_counter(sb, lock_shrink_request_aborted);
spin_lock(&linfo->lock);
list_splice_init(&linfo->shrink_list, &list);
spin_unlock(&linfo->lock);
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &list, shrink_head) {
list_del_init(&lock->shrink_head);
lock->request_pending = 0;
wake_up(&lock->waitq);
put_lock(linfo, lock);
/* unlocked lock access, but should be stable since we queued */
nl.key = lock->start;
nl.old_mode = lock->mode;
nl.new_mode = SCOUTFS_LOCK_NULL;
spin_unlock(&linfo->lock);
ret = scoutfs_client_lock_request(sb, &nl);
if (ret) {
/* oh well, not freeing */
scoutfs_inc_counter(sb, lock_shrink_aborted);
spin_lock(&linfo->lock);
lock->request_pending = 0;
wake_up(&lock->waitq);
put_lock(linfo, lock);
spin_unlock(&linfo->lock);
}
}
}
@@ -1312,6 +1457,7 @@ static int scoutfs_lock_shrink(struct shrinker *shrink,
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
unsigned long nr;
bool added = false;
int ret;
nr = sc->nr_to_scan;
@@ -1325,15 +1471,17 @@ restart:
BUG_ON(!lock_idle(lock));
BUG_ON(lock->mode == SCOUTFS_LOCK_NULL);
BUG_ON(!list_empty(&lock->shrink_head));
if (nr-- == 0)
if (linfo->shutdown || nr-- == 0)
break;
__lock_del_lru(linfo, lock);
lock->request_pending = 1;
queue_work(linfo->workq, &lock->shrink_work);
list_add_tail(&lock->shrink_head, &linfo->shrink_list);
added = true;
scoutfs_inc_counter(sb, lock_shrink_queued);
scoutfs_inc_counter(sb, lock_shrink_attempted);
trace_scoutfs_lock_shrink(sb, lock);
/* could have bazillions of idle locks */
@@ -1343,6 +1491,9 @@ restart:
spin_unlock(&linfo->lock);
if (added)
queue_work(linfo->workq, &linfo->shrink_work);
out:
ret = min_t(unsigned long, linfo->lru_nr, INT_MAX);
trace_scoutfs_lock_shrink_exit(sb, sc->nr_to_scan, ret);
@@ -1377,10 +1528,15 @@ static void lock_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
}
/*
* We're going to be destroying the locks soon. We shouldn't have any
* normal task holders that would have prevented unmount. We can have
* internal threads blocked in locks. We force all currently blocked
* and future lock calls to return -ESHUTDOWN.
* The caller is going to be calling _destroy soon and, critically, is
* about to shut down networking before calling us so that we don't get
* any callbacks while we're destroying. We have to ensure that we
* won't call networking after this returns.
*
* Internal fs threads can be using locking, and locking can have async
* work pending. We use ->shutdown to force callers to return
* -ESHUTDOWN and to prevent the future queueing of work that could call
* networking. Locks whose work is stopped will be torn down by _destroy.
*/
void scoutfs_lock_shutdown(struct super_block *sb)
{
@@ -1402,6 +1558,10 @@ void scoutfs_lock_shutdown(struct super_block *sb)
}
spin_unlock(&linfo->lock);
flush_work(&linfo->grant_work);
flush_delayed_work(&linfo->inv_dwork);
flush_work(&linfo->shrink_work);
}
/*
@@ -1422,7 +1582,7 @@ void scoutfs_lock_destroy(struct super_block *sb)
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
struct rb_node *node;
int mode;
enum scoutfs_lock_mode mode;
if (!linfo)
return;
@@ -1474,6 +1634,12 @@ void scoutfs_lock_destroy(struct super_block *sb)
lock->request_pending = 0;
if (!list_empty(&lock->lru_head))
__lock_del_lru(linfo, lock);
if (!list_empty(&lock->grant_head))
list_del_init(&lock->grant_head);
if (!list_empty(&lock->inv_head))
list_del_init(&lock->inv_head);
if (!list_empty(&lock->shrink_head))
list_del_init(&lock->shrink_head);
lock_remove(linfo, lock);
lock_free(linfo, lock);
}
@@ -1501,6 +1667,12 @@ int scoutfs_lock_setup(struct super_block *sb)
linfo->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&linfo->shrinker);
INIT_LIST_HEAD(&linfo->lru_list);
INIT_WORK(&linfo->grant_work, lock_grant_worker);
INIT_LIST_HEAD(&linfo->grant_list);
INIT_DELAYED_WORK(&linfo->inv_dwork, lock_invalidate_worker);
INIT_LIST_HEAD(&linfo->inv_list);
INIT_WORK(&linfo->shrink_work, lock_shrink_worker);
INIT_LIST_HEAD(&linfo->shrink_list);
atomic64_set(&linfo->next_refresh_gen, 0);
scoutfs_tseq_tree_init(&linfo->tseq_tree, lock_tseq_show);


@@ -22,24 +22,31 @@ struct scoutfs_lock {
struct rb_node range_node;
u64 refresh_gen;
u64 write_version;
u64 dirty_trans_seq;
struct list_head lru_head;
wait_queue_head_t waitq;
struct work_struct shrink_work;
ktime_t grace_deadline;
unsigned long request_pending:1,
invalidate_pending:1;
struct list_head grant_head;
struct scoutfs_net_lock grant_nl;
struct list_head inv_head;
struct scoutfs_net_lock inv_nl;
u64 inv_net_id;
struct list_head shrink_head;
spinlock_t cov_list_lock;
struct list_head cov_list;
int mode;
enum scoutfs_lock_mode mode;
unsigned int waiters[SCOUTFS_LOCK_NR_MODES];
unsigned int users[SCOUTFS_LOCK_NR_MODES];
struct scoutfs_tseq_entry tseq_entry;
/* the forest btree code stores data per lock */
struct forest_lock_private *forest_private;
/* the forest tracks which log tree last saw bloom bit updates */
atomic64_t forest_bloom_nr;
};
struct scoutfs_lock_coverage {
@@ -55,29 +62,27 @@ int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
struct scoutfs_key *key);
int scoutfs_lock_inode(struct super_block *sb, int mode, int flags,
int scoutfs_lock_inode(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct inode *inode, struct scoutfs_lock **ret_lock);
int scoutfs_lock_ino(struct super_block *sb, int mode, int flags, u64 ino,
int scoutfs_lock_ino(struct super_block *sb, enum scoutfs_lock_mode mode, int flags, u64 ino,
struct scoutfs_lock **ret_lock);
void scoutfs_lock_get_index_item_range(u8 type, u64 major, u64 ino,
struct scoutfs_key *start,
struct scoutfs_key *end);
int scoutfs_lock_inode_index(struct super_block *sb, int mode,
int scoutfs_lock_inode_index(struct super_block *sb, enum scoutfs_lock_mode mode,
u8 type, u64 major, u64 ino,
struct scoutfs_lock **ret_lock);
int scoutfs_lock_xattr_index(struct super_block *sb, int mode, int flags,
u64 hash, struct scoutfs_lock **ret_lock);
int scoutfs_lock_inodes(struct super_block *sb, int mode, int flags,
int scoutfs_lock_inodes(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct inode *a, struct scoutfs_lock **a_lock,
struct inode *b, struct scoutfs_lock **b_lock,
struct inode *c, struct scoutfs_lock **c_lock,
struct inode *d, struct scoutfs_lock **D_lock);
int scoutfs_lock_rename(struct super_block *sb, int mode, int flags,
int scoutfs_lock_rename(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
int scoutfs_lock_rid(struct super_block *sb, int mode, int flags,
int scoutfs_lock_rid(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 rid, struct scoutfs_lock **lock);
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock,
int level);
enum scoutfs_lock_mode mode);
void scoutfs_lock_init_coverage(struct scoutfs_lock_coverage *cov);
void scoutfs_lock_add_coverage(struct super_block *sb,
@@ -88,7 +93,7 @@ bool scoutfs_lock_is_covered(struct super_block *sb,
void scoutfs_lock_del_coverage(struct super_block *sb,
struct scoutfs_lock_coverage *cov);
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
int mode);
enum scoutfs_lock_mode mode);
void scoutfs_free_unused_locks(struct super_block *sb, unsigned long nr);


@@ -20,7 +20,6 @@
#include "tseq.h"
#include "spbm.h"
#include "block.h"
#include "radix.h"
#include "btree.h"
#include "msg.h"
#include "scoutfs_trace.h"
@@ -87,8 +86,10 @@ struct lock_server_info {
struct scoutfs_tseq_tree tseq_tree;
struct dentry *tseq_dentry;
struct scoutfs_radix_allocator *alloc;
struct scoutfs_alloc *alloc;
struct scoutfs_block_writer *wri;
atomic64_t write_version;
};
#define DECLARE_LOCK_SERVER_INFO(sb, name) \
@@ -117,12 +118,6 @@ struct server_lock_node {
struct list_head invalidated;
};
enum {
CLE_GRANTED,
CLE_REQUESTED,
CLE_INVALIDATED,
};
/*
* Interactions with the client are tracked with these little mode
* wrappers.
@@ -494,7 +489,6 @@ static int process_waiting_requests(struct super_block *sb,
struct client_lock_entry *req_tmp;
struct client_lock_entry *gr;
struct client_lock_entry *gr_tmp;
static atomic64_t write_version = ATOMIC64_INIT(0);
u64 wv;
int ret;
@@ -548,7 +542,7 @@ static int process_waiting_requests(struct super_block *sb,
if (nl.new_mode == SCOUTFS_LOCK_WRITE ||
nl.new_mode == SCOUTFS_LOCK_WRITE_ONLY) {
wv = atomic64_inc_return(&write_version);
wv = atomic64_inc_return(&inf->write_version);
nl.write_version = cpu_to_le64(wv);
}
@@ -575,12 +569,22 @@ out:
return ret;
}
static void init_lock_clients_key(struct scoutfs_key *key, u64 rid)
{
*key = (struct scoutfs_key) {
.sk_zone = SCOUTFS_LOCK_CLIENTS_ZONE,
.sklc_rid = cpu_to_le64(rid),
};
}
/*
* The server received a greeting from a client for the first time. If
* the client had already talked to the server then we must find an
* existing record for it and should begin recovery. If it doesn't have
* a record then it's timed out and we can't allow it to reconnect. If
* it's connecting for the first time then we insert a new record. If
* we're creating a new record for a client we can see EEXIST if the
* greeting is resent to a new server after the record was committed but
* before the response was received by the client.
*
* This is running in concurrent client greeting processing contexts.
*/
@@ -589,23 +593,24 @@ int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_lock_client_btree_key cbk;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
cbk.rid = cpu_to_be64(rid);
init_lock_clients_key(&key, rid);
mutex_lock(&inf->mutex);
if (should_exist) {
ret = scoutfs_btree_lookup(sb, &super->lock_clients,
&cbk, sizeof(cbk), &iref);
ret = scoutfs_btree_lookup(sb, &super->lock_clients, &key,
&iref);
if (ret == 0)
scoutfs_btree_put_iref(&iref);
} else {
ret = scoutfs_btree_insert(sb, inf->alloc, inf->wri,
&super->lock_clients,
&cbk, sizeof(cbk), NULL, 0);
&key, NULL, 0);
if (ret == -EEXIST)
ret = 0;
}
mutex_unlock(&inf->mutex);
@@ -664,6 +669,14 @@ static int finished_recovery(struct super_block *sb, u64 rid, bool cancel)
return ret;
}
static void set_max_write_version(struct lock_server_info *inf, u64 new)
{
u64 old;
while (new > (old = atomic64_read(&inf->write_version)) &&
(atomic64_cmpxchg(&inf->write_version, old, new) != old));
}
/*
* We sent a lock recover request to the client when we received its
* greeting while in recovery. Here we instantiate all the locks it
@@ -727,6 +740,10 @@ int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
put_server_lock(inf, snode);
/* make sure next write lock is greater than all recovered */
set_max_write_version(inf,
le64_to_cpu(nlr->locks[i].write_version));
}
/* send request for next batch of keys */
@@ -738,15 +755,12 @@ out:
return ret;
}
static int get_rid_and_put_ref(struct scoutfs_btree_item_ref *iref,
u64 *rid)
static int get_rid_and_put_ref(struct scoutfs_btree_item_ref *iref, u64 *rid)
{
struct scoutfs_lock_client_btree_key *cbk;
int ret;
if (iref->key_len == sizeof(*cbk) && iref->val_len == 0) {
cbk = iref->key;
*rid = be64_to_cpu(cbk->rid);
if (iref->val_len == 0) {
*rid = le64_to_cpu(iref->key->sklc_rid);
ret = 0;
} else {
ret = -EIO;
@@ -767,8 +781,8 @@ static void scoutfs_lock_server_recovery_timeout(struct work_struct *work)
recovery_dwork.work);
struct super_block *sb = inf->sb;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_lock_client_btree_key cbk;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
bool timed_out;
u64 rid;
int ret;
@@ -779,9 +793,8 @@ static void scoutfs_lock_server_recovery_timeout(struct work_struct *work)
/* we enter recovery if there are any client records */
for (rid = 0; ; rid++) {
cbk.rid = cpu_to_be64(rid);
ret = scoutfs_btree_next(sb, &super->lock_clients,
&cbk, sizeof(cbk), &iref);
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_next(sb, &super->lock_clients, &key, &iref);
if (ret == -ENOENT) {
ret = 0;
break;
@@ -806,10 +819,9 @@ static void scoutfs_lock_server_recovery_timeout(struct work_struct *work)
scoutfs_err(sb, "client rid %016llx lock recovery timed out",
rid);
cbk.rid = cpu_to_be64(rid);
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
&super->lock_clients,
&cbk, sizeof(cbk));
&super->lock_clients, &key);
if (ret)
break;
}
@@ -838,7 +850,6 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_lock_client_btree_key cli;
struct client_lock_entry *clent;
struct client_lock_entry *tmp;
struct server_lock_node *snode;
@@ -847,10 +858,10 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
bool freed;
int ret = 0;
cli.rid = cpu_to_be64(rid);
mutex_lock(&inf->mutex);
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
&super->lock_clients, &cli, sizeof(cli));
&super->lock_clients, &key);
mutex_unlock(&inf->mutex);
if (ret == -ENOENT) {
ret = 0;
@@ -951,14 +962,14 @@ static void lock_server_tseq_show(struct seq_file *m,
* we time them out.
*/
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri)
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 max_vers)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct lock_server_info *inf;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_lock_client_btree_key cbk;
struct scoutfs_key key;
unsigned int nr;
u64 rid;
int ret;
@@ -977,6 +988,7 @@ int scoutfs_lock_server_setup(struct super_block *sb,
scoutfs_tseq_tree_init(&inf->tseq_tree, lock_server_tseq_show);
inf->alloc = alloc;
inf->wri = wri;
atomic64_set(&inf->write_version, max_vers); /* inc_return gives +1 */
inf->tseq_dentry = scoutfs_tseq_create("server_locks", sbi->debug_root,
&inf->tseq_tree);
@@ -990,9 +1002,8 @@ int scoutfs_lock_server_setup(struct super_block *sb,
/* we enter recovery if there are any client records */
nr = 0;
for (rid = 0; ; rid++) {
cbk.rid = cpu_to_be64(rid);
ret = scoutfs_btree_next(sb, &super->lock_clients,
&cbk, sizeof(cbk), &iref);
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_next(sb, &super->lock_clients, &key, &iref);
if (ret == -ENOENT)
break;
if (ret == 0)


@@ -12,8 +12,8 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid);
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri);
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 max_vers);
void scoutfs_lock_server_destroy(struct super_block *sb);
#endif


@@ -100,7 +100,7 @@ do { \
} while (0)
/* listening and their accepting sockets have a fixed locking order */
enum {
enum spin_lock_subtype {
CONN_LOCK_LISTENER,
CONN_LOCK_ACCEPTED,
};
@@ -369,6 +369,7 @@ static int submit_send(struct super_block *sb,
msend->nh.cmd = cmd;
msend->nh.flags = flags;
msend->nh.error = net_err;
memset(msend->nh.__pad, 0, sizeof(msend->nh.__pad));
msend->nh.data_len = cpu_to_le16(data_len);
if (data_len)
memcpy(msend->nh.data, data, data_len);
@@ -943,7 +944,6 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
struct scoutfs_net_connection *acc_conn;
DECLARE_WAIT_QUEUE_HEAD(waitq);
struct socket *acc_sock;
LIST_HEAD(conn_list);
int ret;
trace_scoutfs_net_listen_work_enter(sb, 0, 0);
@@ -1545,9 +1545,8 @@ void scoutfs_net_client_greeting(struct super_block *sb,
* response and they can disconnect cleanly.
*
* At this point our connection is idle except for send submissions and
* shutdown being queued. Once we shut down a We completely own a We
* have exclusive access to a previous conn once its shutdown and we set
* _freeing.
* shutdown being queued. We have exclusive access to the previous conn
* once it's shutdown and we set _freeing.
*/
void scoutfs_net_server_greeting(struct super_block *sb,
struct scoutfs_net_connection *conn,


@@ -76,7 +76,7 @@ struct scoutfs_net_connection {
void *info;
};
enum {
enum conn_flags {
CONN_FL_valid_greeting = (1UL << 0), /* other commands can proceed */
CONN_FL_established = (1UL << 1), /* added sends queue send work */
CONN_FL_shutting_down = (1UL << 2), /* shutdown work was queued */
@@ -90,18 +90,13 @@ enum {
#define SIN_ARG(sin) sin, be16_to_cpu((sin)->sin_port)
static inline void scoutfs_addr_to_sin(struct sockaddr_in *sin,
struct scoutfs_inet_addr *addr)
union scoutfs_inet_addr *addr)
{
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->addr));
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->port));
}
BUG_ON(addr->v4.family != cpu_to_le16(SCOUTFS_AF_IPV4));
static inline void scoutfs_addr_from_sin(struct scoutfs_inet_addr *addr,
struct sockaddr_in *sin)
{
addr->addr = be32_to_le32(sin->sin_addr.s_addr);
addr->port = be16_to_le16(sin->sin_port);
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->v4.addr));
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->v4.port));
}
struct scoutfs_net_connection *


@@ -16,6 +16,7 @@
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/debugfs.h>
#include <linux/namei.h>
#include <linux/parser.h>
#include <linux/inet.h>
@@ -27,80 +28,79 @@
#include "super.h"
static const match_table_t tokens = {
{Opt_server_addr, "server_addr=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_err, NULL}
};
struct options_sb_info {
struct dentry *debugfs_dir;
u32 btree_force_tiny_blocks;
};
u32 scoutfs_option_u32(struct super_block *sb, int token)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct options_sb_info *osi = sbi->options;
switch(token) {
case Opt_btree_force_tiny_blocks:
return osi->btree_force_tiny_blocks;
}
WARN_ON_ONCE(1);
return 0;
}
/* The caller's string is null terminated and can be clobbered */
static int parse_ipv4(struct super_block *sb, char *str,
struct sockaddr_in *sin)
static int parse_bdev_path(struct super_block *sb, substring_t *substr,
char **bdev_path_ret)
{
unsigned long port = 0;
__be32 addr;
char *c;
char *bdev_path;
struct inode *bdev_inode;
struct path path;
bool got_path = false;
int ret;
/* null term port, if specified */
c = strchr(str, ':');
if (c)
*c = '\0';
/* parse addr */
addr = in_aton(str);
if (ipv4_is_multicast(addr) || ipv4_is_lbcast(addr) ||
ipv4_is_zeronet(addr) ||
ipv4_is_local_multicast(addr)) {
scoutfs_err(sb, "invalid unicast ipv4 address: %s", str);
return -EINVAL;
bdev_path = match_strdup(substr);
if (!bdev_path) {
scoutfs_err(sb, "bdev string dup failed");
ret = -ENOMEM;
goto out;
}
/* parse port, if specified */
if (c) {
c++;
ret = kstrtoul(c, 0, &port);
if (ret != 0 || port == 0 || port >= U16_MAX) {
scoutfs_err(sb, "invalid port in ipv4 address: %s", c);
return -EINVAL;
}
ret = kern_path(bdev_path, LOOKUP_FOLLOW, &path);
if (ret) {
scoutfs_err(sb, "path %s not found for bdev: error %d",
bdev_path, ret);
goto out;
}
got_path = true;
bdev_inode = d_inode(path.dentry);
if (!S_ISBLK(bdev_inode->i_mode)) {
scoutfs_err(sb, "path %s for bdev is not a block device",
bdev_path);
ret = -ENOTBLK;
goto out;
}
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = addr;
sin->sin_port = cpu_to_be16(port);
out:
if (got_path) {
path_put(&path);
}
return 0;
if (ret < 0) {
kfree(bdev_path);
} else {
*bdev_path_ret = bdev_path;
}
return ret;
}
int scoutfs_parse_options(struct super_block *sb, char *options,
struct mount_options *parsed)
{
char ipstr[INET_ADDRSTRLEN + 1];
substring_t args[MAX_OPT_ARGS];
int nr;
int token;
char *p;
int ret;
/* Set defaults */
memset(parsed, 0, sizeof(*parsed));
parsed->quorum_slot_nr = -1;
while ((p = strsep(&options, ",")) != NULL) {
if (!*p)
@@ -108,10 +108,28 @@ int scoutfs_parse_options(struct super_block *sb, char *options,
token = match_token(p, tokens, args);
switch (token) {
case Opt_server_addr:
case Opt_quorum_slot_nr:
match_strlcpy(ipstr, args, ARRAY_SIZE(ipstr));
ret = parse_ipv4(sb, ipstr, &parsed->server_addr);
if (parsed->quorum_slot_nr != -1) {
scoutfs_err(sb, "multiple quorum_slot_nr options provided, only provide one.");
return -EINVAL;
}
ret = match_int(args, &nr);
if (ret < 0 || nr < 0 ||
nr >= SCOUTFS_QUORUM_MAX_SLOTS) {
scoutfs_err(sb, "invalid quorum_slot_nr option, must be between 0 and %u",
SCOUTFS_QUORUM_MAX_SLOTS - 1);
if (ret == 0)
ret = -EINVAL;
return ret;
}
parsed->quorum_slot_nr = nr;
break;
case Opt_metadev_path:
ret = parse_bdev_path(sb, &args[0],
&parsed->metadev_path);
if (ret < 0)
return ret;
break;
@@ -122,6 +140,11 @@ int scoutfs_parse_options(struct super_block *sb, char *options,
}
}
if (!parsed->metadev_path) {
scoutfs_err(sb, "Required mount option \"metadev_path\" not found");
return -EINVAL;
}
return 0;
}
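As a quick illustration of the new option surface, here is a minimal sketch of calling the parser directly; it is not code from this change. The device path is a placeholder that would have to name a real block device for parse_bdev_path() to accept it, and the buffer is an array because strsep() consumes it in place.

	/* hedged sketch: sb is assumed to be a valid super block */
	static int example_parse_options(struct super_block *sb)
	{
		char opts[] = "quorum_slot_nr=0,metadev_path=/dev/vg0/meta";
		struct mount_options parsed;
		int ret;

		ret = scoutfs_parse_options(sb, opts, &parsed);
		if (ret)
			return ret;

		/* parsed.quorum_slot_nr == 0; parsed.metadev_path is a
		 * match_strdup() copy that the caller must kfree() */
		kfree(parsed.metadev_path);
		return 0;
	}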
@@ -143,13 +166,6 @@ int scoutfs_options_setup(struct super_block *sb)
goto out;
}
if (!debugfs_create_bool("btree_force_tiny_blocks", 0644,
osi->debugfs_dir,
&osi->btree_force_tiny_blocks)) {
ret = -ENOMEM;
goto out;
}
ret = 0;
out:
if (ret)


@@ -5,18 +5,15 @@
#include <linux/in.h>
#include "format.h"
enum {
/*
* For debugging we can quickly create huge trees by limiting
* the number of items in each block as though the blocks were tiny.
*/
Opt_btree_force_tiny_blocks,
Opt_server_addr,
enum scoutfs_mount_options {
Opt_quorum_slot_nr,
Opt_metadev_path,
Opt_err,
};
struct mount_options {
struct sockaddr_in server_addr;
int quorum_slot_nr;
char *metadev_path;
};
int scoutfs_parse_options(struct super_block *sb, char *options,

File diff suppressed because it is too large


@@ -1,10 +1,15 @@
#ifndef _SCOUTFS_QUORUM_H_
#define _SCOUTFS_QUORUM_H_
int scoutfs_quorum_election(struct super_block *sb, ktime_t timeout_abs,
u64 prev_term, u64 *elected_term);
void scoutfs_quorum_clear_leader(struct super_block *sb);
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
void scoutfs_quorum_server_shutdown(struct super_block *sb);
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
struct sockaddr_in *sin);
int scoutfs_quorum_setup(struct super_block *sb);
void scoutfs_quorum_shutdown(struct super_block *sb);
void scoutfs_quorum_destroy(struct super_block *sb);
#endif

File diff suppressed because it is too large


@@ -1,45 +0,0 @@
#ifndef _SCOUTFS_RADIX_H_
#define _SCOUTFS_RADIX_H_
#include "per_task.h"
struct scoutfs_block_writer;
struct scoutfs_radix_allocator {
struct mutex mutex;
struct scoutfs_radix_root avail;
struct scoutfs_radix_root freed;
};
int scoutfs_radix_alloc(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri, u64 *blkno);
int scoutfs_radix_alloc_data(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_radix_root *root,
int count, u64 *blkno_ret, int *count_ret);
int scoutfs_radix_free(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri, u64 blkno);
int scoutfs_radix_free_data(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_radix_root *root,
u64 blkno, int count);
int scoutfs_radix_merge(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_radix_root *dst,
struct scoutfs_radix_root *src,
struct scoutfs_radix_root *inp, bool meta, u64 count);
void scoutfs_radix_init_alloc(struct scoutfs_radix_allocator *alloc,
struct scoutfs_radix_root *avail,
struct scoutfs_radix_root *freed);
void scoutfs_radix_root_init(struct super_block *sb,
struct scoutfs_radix_root *root, bool meta);
u64 scoutfs_radix_root_free_bytes(struct super_block *sb,
struct scoutfs_radix_root *root);
u64 scoutfs_radix_bit_leaf_nr(u64 bit);
#endif

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -58,8 +58,8 @@ do { \
int scoutfs_server_lock_request(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_server_lock_response(struct super_block *sb, u64 rid,
u64 id, struct scoutfs_net_lock *nl);
int scoutfs_server_lock_response(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_net_lock *nl);
int scoutfs_server_lock_recover_request(struct super_block *sb, u64 rid,
struct scoutfs_key *key);
int scoutfs_server_hold_commit(struct super_block *sb);
@@ -67,8 +67,7 @@ int scoutfs_server_apply_commit(struct super_block *sb, int err);
struct sockaddr_in;
struct scoutfs_quorum_elected_info;
int scoutfs_server_start(struct super_block *sb, struct sockaddr_in *sin,
u64 term);
int scoutfs_server_start(struct super_block *sb, u64 term);
void scoutfs_server_abort(struct super_block *sb);
void scoutfs_server_stop(struct super_block *sb);

kmod/src/sort_priv.c (new file, 71 lines)

@@ -0,0 +1,71 @@
/*
* A copy of sort() from upstream with a priv argument that's passed
* to comparison, like list_sort().
*/
/* ------------------------ */
/*
* A fast, small, non-recursive O(nlog n) sort for the Linux kernel
*
* Jan 23 2005 Matt Mackall <mpm@selenic.com>
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sort.h>
#include <linux/slab.h>
#include "sort_priv.h"
/**
* sort_priv - sort an array of elements
* @priv: caller's pointer to pass to comparison and swap functions
* @base: pointer to data to sort
* @num: number of elements
* @size: size of each element
* @cmp_func: pointer to comparison function
* @swap_func: pointer to swap function
*
* This function does a heapsort on the given array. You may provide a
* swap_func function optimized to your element type.
*
* Sorting time is O(n log n) both on average and worst-case. While
* qsort is about 20% faster on average, it suffers from exploitable
* O(n*n) worst-case behavior and extra memory requirements that make
* it less suitable for kernel use.
*/
void sort_priv(void *priv, void *base, size_t num, size_t size,
int (*cmp_func)(void *priv, const void *, const void *),
void (*swap_func)(void *priv, void *, void *, int size))
{
/* pre-scale counters for performance */
int i = (num/2 - 1) * size, n = num * size, c, r;
/* heapify */
for ( ; i >= 0; i -= size) {
for (r = i; r * 2 + size < n; r = c) {
c = r * 2 + size;
if (c < n - size &&
cmp_func(priv, base + c, base + c + size) < 0)
c += size;
if (cmp_func(priv, base + r, base + c) >= 0)
break;
swap_func(priv, base + r, base + c, size);
}
}
/* sort */
for (i = n - size; i > 0; i -= size) {
swap_func(priv, base, base + i, size);
for (r = 0; r * 2 + size < i; r = c) {
c = r * 2 + size;
if (c < i - size &&
cmp_func(priv, base + c, base + c + size) < 0)
c += size;
if (cmp_func(priv, base + r, base + c) >= 0)
break;
swap_func(priv, base + r, base + c, size);
}
}
}
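To show how the priv argument reaches both callbacks, here is a brief usage sketch; the ordering struct, comparison, and descending-sort wrapper below are hypothetical, not part of the module.

	#include <linux/types.h>
	#include "sort_priv.h"

	/* hypothetical ordering state threaded through priv */
	struct key_order {
		bool reverse;
	};

	static int cmp_key(void *priv, const void *a, const void *b)
	{
		struct key_order *ko = priv;
		u64 x = *(const u64 *)a;
		u64 y = *(const u64 *)b;
		int cmp = (x > y) - (x < y);

		return ko->reverse ? -cmp : cmp;
	}

	static void swap_key(void *priv, void *a, void *b, int size)
	{
		u64 tmp = *(u64 *)a;

		*(u64 *)a = *(u64 *)b;
		*(u64 *)b = tmp;
	}

	/* sort nr keys largest-first */
	static void sort_keys_descending(u64 *keys, size_t nr)
	{
		struct key_order ko = { .reverse = true };

		sort_priv(&ko, keys, nr, sizeof(keys[0]), cmp_key, swap_key);
	}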

kmod/src/sort_priv.h (new file, 8 lines)

@@ -0,0 +1,8 @@
#ifndef _SCOUTFS_SORT_PRIV_H_
#define _SCOUTFS_SORT_PRIV_H_
void sort_priv(void *priv, void *base, size_t num, size_t size,
int (*cmp_func)(void *priv, const void *, const void *),
void (*swap_func)(void *priv, void *, void *, int size));
#endif


@@ -47,9 +47,9 @@ bool scoutfs_spbm_empty(struct scoutfs_spbm *spbm)
return RB_EMPTY_ROOT(&spbm->root);
}
enum {
enum spbm_flags {
/* if a node isn't found then return an allocated new node */
SPBM_FIND_ALLOC = 0x1,
SPBM_FIND_ALLOC = (1 << 0),
};
static struct spbm_node *find_node(struct scoutfs_spbm *spbm, u64 index,
int flags)

kmod/src/srch.c (new file, 2233 lines): diff suppressed because it is too large

kmod/src/srch.h (new file, 68 lines)

@@ -0,0 +1,68 @@
#ifndef _SCOUTFS_SRCH_H_
#define _SCOUTFS_SRCH_H_
struct scoutfs_block;
struct scoutfs_srch_rb_root {
struct rb_root root;
struct rb_node *last;
unsigned long nr;
};
struct scoutfs_srch_rb_node {
struct rb_node node;
u64 ino;
u64 id;
};
#define scoutfs_srch_foreach_rb_node(snode, node, sroot) \
for (node = rb_first(&(sroot)->root); \
node && (snode = container_of(node, struct scoutfs_srch_rb_node, \
node), 1); \
node = rb_next(node))
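A short usage sketch of the iterator, assuming the root was just filled by scoutfs_srch_search_xattrs(); the printing helper is illustrative only.

	/* walk every (ino, id) hit and then free the result tree */
	static void print_and_free_results(struct scoutfs_srch_rb_root *sroot)
	{
		struct scoutfs_srch_rb_node *snode;
		struct rb_node *node;

		scoutfs_srch_foreach_rb_node(snode, node, sroot)
			pr_info("xattr hit: ino %llu id %llu\n",
				snode->ino, snode->id);

		scoutfs_srch_destroy_rb_root(sroot);
	}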
int scoutfs_srch_add(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_srch_file *sfl,
struct scoutfs_block **bl_ret,
u64 hash, u64 ino, u64 id);
void scoutfs_srch_destroy_rb_root(struct scoutfs_srch_rb_root *sroot);
int scoutfs_srch_search_xattrs(struct super_block *sb,
struct scoutfs_srch_rb_root *sroot,
u64 hash, u64 ino, u64 last_ino, bool *done);
int scoutfs_srch_rotate_log(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_srch_file *sfl);
int scoutfs_srch_get_compact(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
u64 rid, struct scoutfs_srch_compact *sc);
int scoutfs_srch_update_compact(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root, u64 rid,
struct scoutfs_srch_compact *sc);
int scoutfs_srch_commit_compact(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root, u64 rid,
struct scoutfs_srch_compact *res,
struct scoutfs_alloc_list_head *av,
struct scoutfs_alloc_list_head *fr);
int scoutfs_srch_cancel_compact(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root, u64 rid,
struct scoutfs_alloc_list_head *av,
struct scoutfs_alloc_list_head *fr);
void scoutfs_srch_destroy(struct super_block *sb);
int scoutfs_srch_setup(struct super_block *sb);
#endif


@@ -41,6 +41,9 @@
#include "sysfs.h"
#include "quorum.h"
#include "forest.h"
#include "srch.h"
#include "item.h"
#include "alloc.h"
#include "scoutfs_trace.h"
static struct dentry *scoutfs_debugfs_root;
@@ -76,11 +79,30 @@ retry:
return cpu_to_le64(ret);
}
struct statfs_free_blocks {
u64 meta;
u64 data;
};
static int count_free_blocks(struct super_block *sb, void *arg, int owner,
u64 id, bool meta, bool avail, u64 blocks)
{
struct statfs_free_blocks *sfb = arg;
if (meta)
sfb->meta += blocks;
else
sfb->data += blocks;
return 0;
}
/*
* Ask the server for the current statfs fields. The message is very
* cheap so we're not worrying about spinning in statfs flooding the
* server with requests. We can add a cache and stale results if that
* becomes a problem.
* Build the free block counts by having alloc read all the persistent
* blocks which contain allocators and calling us for each of them.
* Only the super block reads aren't cached so repeatedly calling statfs
* is like repeated O_DIRECT IO. We can add a cache and stale results
* if that IO becomes a problem.
*
* We fake the number of free inodes value by assuming that we can fill
* free blocks with a certain number of inodes. We then the number of
@@ -93,30 +115,50 @@ retry:
static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
{
struct super_block *sb = dentry->d_inode->i_sb;
struct scoutfs_net_statfs nstatfs;
struct scoutfs_super_block *super = NULL;
struct statfs_free_blocks sfb = {0,};
__le32 uuid[4];
int ret;
ret = scoutfs_client_statfs(sb, &nstatfs);
if (ret)
return ret;
scoutfs_inc_counter(sb, statfs);
kst->f_bfree = le64_to_cpu(nstatfs.bfree);
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
ret = scoutfs_alloc_foreach(sb, count_free_blocks, &sfb);
if (ret < 0)
goto out;
kst->f_bfree = (sfb.meta << SCOUTFS_BLOCK_SM_LG_SHIFT) + sfb.data;
kst->f_type = SCOUTFS_SUPER_MAGIC;
kst->f_bsize = SCOUTFS_BLOCK_SIZE;
kst->f_blocks = le64_to_cpu(nstatfs.total_blocks);
kst->f_bsize = SCOUTFS_BLOCK_SM_SIZE;
kst->f_blocks = (le64_to_cpu(super->total_meta_blocks) <<
SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(super->total_data_blocks);
kst->f_bavail = kst->f_bfree;
kst->f_ffree = kst->f_bfree * 16;
kst->f_files = kst->f_ffree + le64_to_cpu(nstatfs.next_ino);
/* arbitrarily assume ~1K / empty file */
kst->f_ffree = sfb.meta * (SCOUTFS_BLOCK_LG_SIZE / 1024);
kst->f_files = kst->f_ffree + le64_to_cpu(super->next_ino);
BUILD_BUG_ON(sizeof(uuid) != sizeof(nstatfs.uuid));
memcpy(uuid, &nstatfs, sizeof(uuid));
BUILD_BUG_ON(sizeof(uuid) != sizeof(super->uuid));
memcpy(uuid, super->uuid, sizeof(uuid));
kst->f_fsid.val[0] = le32_to_cpu(uuid[0]) ^ le32_to_cpu(uuid[1]);
kst->f_fsid.val[1] = le32_to_cpu(uuid[2]) ^ le32_to_cpu(uuid[3]);
kst->f_namelen = SCOUTFS_NAME_LEN;
kst->f_frsize = SCOUTFS_BLOCK_SIZE;
kst->f_frsize = SCOUTFS_BLOCK_SM_SIZE;
/* the vfs fills f_flags */
ret = 0;
out:
kfree(super);
/*
* We don't take cluster locks in statfs which makes it a very
@@ -126,7 +168,7 @@ static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
if (scoutfs_trigger(sb, STATFS_LOCK_PURGE))
scoutfs_free_unused_locks(sb, -1UL);
return 0;
return ret;
}
static int scoutfs_show_options(struct seq_file *seq, struct dentry *root)
@@ -134,24 +176,36 @@ static int scoutfs_show_options(struct seq_file *seq, struct dentry *root)
struct super_block *sb = root->d_sb;
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
seq_printf(seq, ",server_addr="SIN_FMT, SIN_ARG(&opts->server_addr));
if (opts->quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts->quorum_slot_nr);
seq_printf(seq, ",metadev_path=%s", opts->metadev_path);
return 0;
}
static ssize_t server_addr_show(struct kobject *kobj,
static ssize_t metadev_path_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
return snprintf(buf, PAGE_SIZE, "%s", opts->metadev_path);
}
SCOUTFS_ATTR_RO(metadev_path);
static ssize_t quorum_server_nr_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
return snprintf(buf, PAGE_SIZE, SIN_FMT"\n",
SIN_ARG(&opts->server_addr));
return snprintf(buf, PAGE_SIZE, "%d\n", opts->quorum_slot_nr);
}
SCOUTFS_ATTR_RO(server_addr);
SCOUTFS_ATTR_RO(quorum_server_nr);
static struct attribute *mount_options_attrs[] = {
SCOUTFS_ATTR_PTR(server_addr),
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(quorum_server_nr),
NULL,
};
@@ -163,6 +217,20 @@ static int scoutfs_sync_fs(struct super_block *sb, int wait)
return scoutfs_trans_sync(sb, wait);
}
/*
* Data dev is closed by generic code, but we have to explicitly close the meta
* dev.
*/
static void scoutfs_metadev_close(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (sbi->meta_bdev) {
blkdev_put(sbi->meta_bdev, SCOUTFS_META_BDEV_MODE);
sbi->meta_bdev = NULL;
}
}
/*
* This destroys all the state that's built up in the sb info during
* mount. It's called by us on errors during mount if we haven't set
@@ -178,6 +246,7 @@ static void scoutfs_put_super(struct super_block *sb)
sbi->shutdown = true;
scoutfs_data_destroy(sb);
scoutfs_srch_destroy(sb);
scoutfs_unlock(sb, sbi->rid_lock, SCOUTFS_LOCK_WRITE);
sbi->rid_lock = NULL;
@@ -185,17 +254,15 @@ static void scoutfs_put_super(struct super_block *sb)
scoutfs_shutdown_trans(sb);
scoutfs_client_destroy(sb);
scoutfs_inode_destroy(sb);
scoutfs_item_destroy(sb);
scoutfs_forest_destroy(sb);
/* the server locks the listen address and compacts */
scoutfs_quorum_destroy(sb);
scoutfs_lock_shutdown(sb);
scoutfs_server_destroy(sb);
scoutfs_net_destroy(sb);
scoutfs_lock_destroy(sb);
/* server clears quorum leader flag during shutdown */
scoutfs_quorum_destroy(sb);
scoutfs_block_destroy(sb);
scoutfs_destroy_triggers(sb);
scoutfs_options_destroy(sb);
@@ -203,6 +270,9 @@ static void scoutfs_put_super(struct super_block *sb)
debugfs_remove(sbi->debug_root);
scoutfs_destroy_counters(sb);
scoutfs_destroy_sysfs(sb);
scoutfs_metadev_close(sb);
kfree(sbi->opts.metadev_path);
kfree(sbi);
sb->s_fs_info = NULL;
@@ -227,19 +297,51 @@ static const struct super_operations scoutfs_super_ops = {
int scoutfs_write_super(struct super_block *sb,
struct scoutfs_super_block *super)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
le64_add_cpu(&super->hdr.seq, 1);
return scoutfs_block_write_sm(sb, SCOUTFS_SUPER_BLKNO, &super->hdr,
return scoutfs_block_write_sm(sb, sbi->meta_bdev, SCOUTFS_SUPER_BLKNO,
&super->hdr,
sizeof(struct scoutfs_super_block));
}
/*
* Read the super block. If it's valid store it in the caller's super
* struct.
*/
int scoutfs_read_super(struct super_block *sb,
struct scoutfs_super_block *super_res)
static bool invalid_blkno_limits(struct super_block *sb, char *which,
u64 start, __le64 first, __le64 last,
struct block_device *bdev, int shift)
{
u64 blkno;
if (le64_to_cpu(first) < start) {
scoutfs_err(sb, "super block first %s blkno %llu is within first valid blkno %llu",
which, le64_to_cpu(first), start);
return true;
}
if (le64_to_cpu(first) > le64_to_cpu(last)) {
scoutfs_err(sb, "super block first %s blkno %llu is greater than last %s blkno %llu",
which, le64_to_cpu(first), which, le64_to_cpu(last));
return true;
}
blkno = (i_size_read(bdev->bd_inode) >> shift) - 1;
if (le64_to_cpu(last) > blkno) {
scoutfs_err(sb, "super block last %s blkno %llu is beyond device size last blkno %llu",
which, le64_to_cpu(last), blkno);
return true;
}
return false;
}
/*
* Read super, specifying bdev.
*/
static int scoutfs_read_super_from_bdev(struct super_block *sb,
struct block_device *bdev,
struct scoutfs_super_block *super_res)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super;
__le32 calc;
int ret;
@@ -248,9 +350,8 @@ int scoutfs_read_super(struct super_block *sb,
if (!super)
return -ENOMEM;
ret = scoutfs_block_read_sm(sb, SCOUTFS_SUPER_BLKNO, &super->hdr,
sizeof(struct scoutfs_super_block),
&calc);
ret = scoutfs_block_read_sm(sb, bdev, SCOUTFS_SUPER_BLKNO, &super->hdr,
sizeof(struct scoutfs_super_block), &calc);
if (ret < 0)
goto out;
@@ -276,31 +377,48 @@ int scoutfs_read_super(struct super_block *sb,
}
if (super->format_hash != cpu_to_le64(SCOUTFS_FORMAT_HASH)) {
scoutfs_err(sb, "super block has invalid format hash 0x%llx, expected 0x%llx",
le64_to_cpu(super->format_hash),
SCOUTFS_FORMAT_HASH);
if (super->version != cpu_to_le64(SCOUTFS_INTEROP_VERSION)) {
scoutfs_err(sb, "super block has invalid version %llu, expected %llu",
le64_to_cpu(super->version),
SCOUTFS_INTEROP_VERSION);
ret = -EINVAL;
goto out;
}
/* XXX do we want more rigorous invalid super checking? */
if (super->quorum_count == 0 ||
super->quorum_count > SCOUTFS_QUORUM_MAX_COUNT) {
scoutfs_err(sb, "super block has invalid quorum count %u, must be > 0 and <= %u",
super->quorum_count, SCOUTFS_QUORUM_MAX_COUNT);
if (invalid_blkno_limits(sb, "meta",
SCOUTFS_META_DEV_START_BLKNO,
super->first_meta_blkno,
super->last_meta_blkno, sbi->meta_bdev,
SCOUTFS_BLOCK_LG_SHIFT) ||
invalid_blkno_limits(sb, "data",
SCOUTFS_DATA_DEV_START_BLKNO,
super->first_data_blkno,
super->last_data_blkno, sb->s_bdev,
SCOUTFS_BLOCK_SM_SHIFT)) {
ret = -EINVAL;
goto out;
}
*super_res = *super;
ret = 0;
out:
if (ret == 0)
*super_res = *super;
kfree(super);
return ret;
}
/*
* Read the super block from meta dev.
*/
int scoutfs_read_super(struct super_block *sb,
struct scoutfs_super_block *super_res)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return scoutfs_read_super_from_bdev(sb, sbi->meta_bdev, super_res);
}
/*
* This needs to be set up after reading the super because it uses the
* fsid found in the super block.
@@ -337,10 +455,66 @@ static int assign_random_id(struct scoutfs_sb_info *sbi)
return 0;
}
/*
* Ensure superblock copies in metadata and data block devices are valid, and
* fill in the in-memory superblock if so.
*/
static int scoutfs_read_supers(struct super_block *sb)
{
struct scoutfs_super_block *meta_super = NULL;
struct scoutfs_super_block *data_super = NULL;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
int ret = 0;
meta_super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
data_super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!meta_super || !data_super) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super_from_bdev(sb, sbi->meta_bdev, meta_super);
if (ret < 0) {
scoutfs_err(sb, "could not get meta_super: error %d", ret);
goto out;
}
ret = scoutfs_read_super_from_bdev(sb, sb->s_bdev, data_super);
if (ret < 0) {
scoutfs_err(sb, "could not get data_super: error %d", ret);
goto out;
}
if (!SCOUTFS_IS_META_BDEV(meta_super)) {
scoutfs_err(sb, "meta_super META flag not set");
ret = -EINVAL;
goto out;
}
if (SCOUTFS_IS_META_BDEV(data_super)) {
scoutfs_err(sb, "data_super META flag set");
ret = -EINVAL;
goto out;
}
if (memcmp(meta_super->uuid, data_super->uuid, SCOUTFS_UUID_BYTES)) {
scoutfs_err(sb, "superblock UUID mismatch");
ret = -EINVAL;
goto out;
}
sbi->super = *meta_super;
out:
kfree(meta_super);
kfree(data_super);
return ret;
}
static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
{
struct scoutfs_sb_info *sbi;
struct mount_options opts;
struct block_device *meta_bdev;
struct inode *inode;
int ret;
@@ -379,14 +553,31 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sbi->opts = opts;
ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SIZE);
if (ret != SCOUTFS_BLOCK_SIZE) {
ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SM_SIZE);
if (ret != SCOUTFS_BLOCK_SM_SIZE) {
scoutfs_err(sb, "failed to set blocksize, returned %d", ret);
ret = -EIO;
goto out;
}
ret = scoutfs_read_super(sb, &SCOUTFS_SB(sb)->super) ?:
meta_bdev =
blkdev_get_by_path(sbi->opts.metadev_path,
SCOUTFS_META_BDEV_MODE, sb);
if (IS_ERR(meta_bdev)) {
scoutfs_err(sb, "could not open metadev: error %ld",
PTR_ERR(meta_bdev));
ret = PTR_ERR(meta_bdev);
goto out;
}
sbi->meta_bdev = meta_bdev;
ret = set_blocksize(sbi->meta_bdev, SCOUTFS_BLOCK_SM_SIZE);
if (ret != 0) {
scoutfs_err(sb, "failed to set metadev blocksize, returned %d",
ret);
goto out;
}
ret = scoutfs_read_supers(sb) ?:
scoutfs_debugfs_setup(sb) ?:
scoutfs_setup_sysfs(sb) ?:
scoutfs_setup_counters(sb) ?:
@@ -396,17 +587,19 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
scoutfs_setup_triggers(sb) ?:
scoutfs_block_setup(sb) ?:
scoutfs_forest_setup(sb) ?:
scoutfs_item_setup(sb) ?:
scoutfs_inode_setup(sb) ?:
scoutfs_data_setup(sb) ?:
scoutfs_setup_trans(sb) ?:
scoutfs_lock_setup(sb) ?:
scoutfs_net_setup(sb) ?:
scoutfs_quorum_setup(sb) ?:
scoutfs_server_setup(sb) ?:
scoutfs_quorum_setup(sb) ?:
scoutfs_client_setup(sb) ?:
scoutfs_lock_rid(sb, SCOUTFS_LOCK_WRITE, 0, sbi->rid,
&sbi->rid_lock) ?:
scoutfs_trans_get_log_trees(sb);
scoutfs_trans_get_log_trees(sb) ?:
scoutfs_srch_setup(sb);
if (ret)
goto out;
@@ -483,6 +676,10 @@ static int __init scoutfs_module_init(void)
".section .note.git_describe,\"a\"\n"
".string \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_interop_version,\"a\"\n"
".string \""SCOUTFS_INTEROP_VERSION_STR"\\n\"\n"
".previous\n");
scoutfs_init_counters();
@@ -515,3 +712,4 @@ module_exit(scoutfs_module_exit)
MODULE_AUTHOR("Zach Brown <zab@versity.com>");
MODULE_LICENSE("GPL");
MODULE_INFO(git_describe, SCOUTFS_GIT_DESCRIBE);
MODULE_INFO(scoutfs_interop_version, SCOUTFS_INTEROP_VERSION_STR);


@@ -25,6 +25,7 @@ struct options_sb_info;
struct net_info;
struct block_info;
struct forest_info;
struct srch_info;
struct scoutfs_sb_info {
struct super_block *sb;
@@ -35,6 +36,8 @@ struct scoutfs_sb_info {
struct scoutfs_super_block super;
struct block_device *meta_bdev;
spinlock_t next_ino_lock;
struct data_info *data_info;
@@ -44,6 +47,8 @@ struct scoutfs_sb_info {
struct quorum_info *quorum_info;
struct block_info *block_info;
struct forest_info *forest_info;
struct srch_info *srch_info;
struct item_cache_info *item_cache_info;
wait_queue_head_t trans_hold_wq;
struct task_struct *trans_task;
@@ -91,6 +96,13 @@ static inline bool SCOUTFS_HAS_SBI(struct super_block *sb)
return (sb != NULL) && (SCOUTFS_SB(sb) != NULL);
}
static inline bool SCOUTFS_IS_META_BDEV(struct scoutfs_super_block *super_block)
{
return !!(le64_to_cpu(super_block->flags) & SCOUTFS_FLAG_IS_META_BDEV);
}
#define SCOUTFS_META_BDEV_MODE (FMODE_READ | FMODE_WRITE | FMODE_EXCL)
/*
* A small string embedded in messages that's used to identify a
* specific mount. It's the three most significant bytes of the fsid


@@ -25,8 +25,10 @@
#include "counters.h"
#include "client.h"
#include "inode.h"
#include "radix.h"
#include "alloc.h"
#include "block.h"
#include "msg.h"
#include "item.h"
#include "scoutfs_trace.h"
/*
@@ -37,17 +39,15 @@
* track the relationships between dirty blocks so there's only ever one
* transaction being built.
*
* The copy of the on-disk super block in the fs sb info has its header
* sequence advanced so that new dirty blocks inherit this dirty
* sequence number. It's only advanced once all those dirty blocks are
* reachable after having first written them all out and then the new
* super with that seq. It's first incremented at mount.
* Committing the current dirty transaction can be triggered by sync, a
* regular background commit interval, reaching a dirty block threshold,
* or the transaction running out of its private allocator resources.
* Once all the current holders release, the writing func writes out
* dirty blocks while excluding holders until it finishes.
*
* Unfortunately writers can nest. We don't bother trying to special
* case holding a transaction that you're already holding because that
* requires per-task storage. We just let anyone hold transactions
* regardless of waiters waiting to write, which risks waiters waiting a
* very long time.
* Unfortunately writing holders can nest. We track nested hold callers
* with the per-task journal_info pointer to avoid deadlocks between
* holders that might otherwise wait for a pending commit.
*/
/* sync dirty data at least this often */
@@ -57,31 +57,19 @@
* XXX move the rest of the super trans_ fields here.
*/
struct trans_info {
spinlock_t lock;
unsigned reserved_items;
unsigned reserved_vals;
unsigned holders;
bool writing;
atomic_t holders;
struct scoutfs_log_trees lt;
struct scoutfs_radix_allocator alloc;
struct scoutfs_alloc alloc;
struct scoutfs_block_writer wri;
};
#define DECLARE_TRANS_INFO(sb, name) \
struct trans_info *name = SCOUTFS_SB(sb)->trans_info
static bool drained_holders(struct trans_info *tri)
{
bool drained;
spin_lock(&tri->lock);
tri->writing = true;
drained = tri->holders == 0;
spin_unlock(&tri->lock);
return drained;
}
/* avoid the high sign bit out of an abundance of caution */
#define TRANS_HOLDERS_WRITE_FUNC_BIT (1 << 30)
#define TRANS_HOLDERS_COUNT_MASK (TRANS_HOLDERS_WRITE_FUNC_BIT - 1)
static int commit_btrees(struct super_block *sb)
{
@@ -110,8 +98,7 @@ int scoutfs_trans_get_log_trees(struct super_block *sb)
ret = scoutfs_client_get_log_trees(sb, &lt);
if (ret == 0) {
tri->lt = lt;
scoutfs_radix_init_alloc(&tri->alloc, &lt.meta_avail,
&lt.meta_freed);
scoutfs_alloc_init(&tri->alloc, &lt.meta_avail, &lt.meta_freed);
scoutfs_block_writer_init(sb, &tri->wri);
scoutfs_forest_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
@@ -126,6 +113,37 @@ bool scoutfs_trans_has_dirty(struct super_block *sb)
return scoutfs_block_writer_has_dirty(sb, &tri->wri);
}
/*
* This is racing with wait_event conditions, so make sure our atomic
* stores and waitqueue loads are ordered.
*/
static void sub_holders_and_wake(struct super_block *sb, int val)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
atomic_sub(val, &tri->holders);
smp_mb(); /* make sure sub is visible before we wake */
if (waitqueue_active(&sbi->trans_hold_wq))
wake_up(&sbi->trans_hold_wq);
}
/*
* called as a wait_event condition, needs to be careful to not change
* task state and is racing with waking paths that sub_return, test, and
* wake.
*/
static bool drained_holders(struct trans_info *tri)
{
int holders;
smp_mb(); /* make sure task in wait_event queue before atomic read */
holders = atomic_read(&tri->holders) & TRANS_HOLDERS_COUNT_MASK;
return holders == 0;
}
/*
* This work func is responsible for writing out all the dirty blocks
* that make up the current dirty transaction. It prevents writers from
@@ -156,54 +174,68 @@ void scoutfs_trans_write_func(struct work_struct *work)
trans_write_work.work);
struct super_block *sb = sbi->sb;
DECLARE_TRANS_INFO(sb, tri);
u64 trans_seq = sbi->trans_seq;
char *s = NULL;
int ret = 0;
sbi->trans_task = current;
/* mark that we're writing so holders wait for us to finish and clear our bit */
atomic_add(TRANS_HOLDERS_WRITE_FUNC_BIT, &tri->holders);
wait_event(sbi->trans_hold_wq, drained_holders(tri));
trace_scoutfs_trans_write_func(sb,
scoutfs_block_writer_dirty_bytes(sb, &tri->wri));
if (scoutfs_block_writer_has_dirty(sb, &tri->wri)) {
if (sbi->trans_deadline_expired)
scoutfs_inc_counter(sb, trans_commit_timer);
ret = scoutfs_inode_walk_writeback(sb, true) ?:
scoutfs_block_writer_write(sb, &tri->wri) ?:
scoutfs_inode_walk_writeback(sb, false) ?:
commit_btrees(sb) ?:
scoutfs_client_advance_seq(sb, &sbi->trans_seq) ?:
scoutfs_trans_get_log_trees(sb);
if (ret)
goto out;
} else if (sbi->trans_deadline_expired) {
/*
* If we're not writing data then we only advance the
* seq at the sync deadline interval. This keeps idle
* mounts from pinning a seq and stopping readers of the
* seq indices but doesn't send a message for every sync
* syscall.
*/
ret = scoutfs_client_advance_seq(sb, &sbi->trans_seq);
if (!scoutfs_block_writer_has_dirty(sb, &tri->wri) &&
!scoutfs_item_dirty_pages(sb)) {
if (sbi->trans_deadline_expired) {
/*
* If we're not writing data then we only advance the
* seq at the sync deadline interval. This keeps idle
* mounts from pinning a seq and stopping readers of the
* seq indices but doesn't send a message for every sync
* syscall.
*/
ret = scoutfs_client_advance_seq(sb, &trans_seq);
if (ret < 0)
s = "clean advance seq";
}
goto out;
}
out:
if (sbi->trans_deadline_expired)
scoutfs_inc_counter(sb, trans_commit_timer);
scoutfs_inc_counter(sb, trans_commit_written);
/* XXX this all needs serious work for dealing with errors */
WARN_ON_ONCE(ret);
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb,
&tri->alloc, &tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "advance seq", scoutfs_client_advance_seq(sb, &trans_seq)) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
out:
if (ret < 0)
scoutfs_err(sb, "critical transaction commit failure: %s, %d",
s, ret);
spin_lock(&sbi->trans_write_lock);
sbi->trans_write_count++;
sbi->trans_write_ret = ret;
sbi->trans_seq = trans_seq;
spin_unlock(&sbi->trans_write_lock);
wake_up(&sbi->trans_write_wq);
spin_lock(&tri->lock);
tri->writing = false;
spin_unlock(&tri->lock);
wake_up(&sbi->trans_hold_wq);
/* we're done, wake waiting holders */
sub_holders_and_wake(sb, TRANS_HOLDERS_WRITE_FUNC_BIT);
sbi->trans_task = NULL;
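The commit chain above uses the comma operator to record which step is about to run so the error message can name the first one that fails. A tiny standalone illustration of that idiom, with made-up step functions and using the same GNU ?: shorthand as the kernel code:

	#include <stdio.h>

	static int step_ok(void)   { return 0; }
	static int step_fail(void) { return -5; }

	int main(void)
	{
		const char *s = NULL;
		int ret;

		/* the ?: chain stops at the first nonzero return; the comma
		 * operator has already stored that step's label in s */
		ret = (s = "first step", step_ok()) ?:
		      (s = "second step", step_fail());
		if (ret)
			printf("critical failure: %s, %d\n", s, ret);
		return 0;
	}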
@@ -295,133 +327,174 @@ void scoutfs_trans_restart_sync_deadline(struct super_block *sb)
}
/*
* Each thread reserves space in the segment for their dirty items while
* they hold the transaction. This is calculated before the first
* transaction hold is acquired. It includes all the potential nested
* item manipulation that could happen with the transaction held.
* Including nested holds avoids having to deal with writing out partial
* transactions while a caller still holds the transaction.
* We store nested holders in the lower bits of journal_info. We use
* some higher bits as a magic value to detect if something goes
* horribly wrong and it gets clobbered.
*/
#define SCOUTFS_RESERVATION_MAGIC 0xd57cd13b
struct scoutfs_reservation {
unsigned magic;
unsigned holders;
struct scoutfs_item_count reserved;
struct scoutfs_item_count actual;
};
#define TRANS_JI_MAGIC 0xd5700000
#define TRANS_JI_MAGIC_MASK 0xfff00000
#define TRANS_JI_COUNT_MASK 0x000fffff
/* returns true if a caller already had a holder counted in journal_info */
static bool inc_journal_info_holders(void)
{
unsigned long holders = (unsigned long)current->journal_info;
WARN_ON_ONCE(holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC));
if (holders == 0)
holders = TRANS_JI_MAGIC;
holders++;
current->journal_info = (void *)holders;
return (holders > (TRANS_JI_MAGIC | 1));
}
static void dec_journal_info_holders(void)
{
unsigned long holders = (unsigned long)current->journal_info;
WARN_ON_ONCE(holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC));
WARN_ON_ONCE((holders & TRANS_JI_COUNT_MASK) == 0);
holders--;
if (holders == TRANS_JI_MAGIC)
holders = 0;
current->journal_info = (void *)holders;
}
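To make the encoding concrete, a worked example of the values journal_info takes, derived from the masks above:

	/*
	 * first hold:     0 -> TRANS_JI_MAGIC + 1 = 0xd5700001, returns false
	 * nested hold:    0xd5700001 -> 0xd5700002, returns true (piggybacks)
	 * nested release: 0xd5700002 -> 0xd5700001
	 * final release:  0xd5700001 -> 0xd5700000 == TRANS_JI_MAGIC -> reset to 0
	 */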
/*
* Try to hold the transaction. If a caller already holds the trans then
* we piggyback on their hold. We wait if the writer is trying to
* write out the transaction. And if our items won't fit then we kick off
* a write.
* This is called as the wait_event condition for holding a transaction.
* Increment the holder count unless the writer is present. We return
* false to wait until the writer finishes and wakes us.
*
* This is called as a condition for wait_event. It is very limited in
* the locking (blocking) it can do because the caller has set the task
* state before testing the condition so that it can safely race with a
* wake-up after the condition is set. Our check of the dirty metadata
* blocks and free data blocks is racy, but we don't mind the risk of
* delaying or prematurely forcing commits.
* This can be racing with itself while there's no waiters. We retry
* the cmpxchg instead of returning and waiting.
*/
static bool acquired_hold(struct super_block *sb,
struct scoutfs_reservation *rsv,
const struct scoutfs_item_count *cnt)
static bool inc_holders_unless_writer(struct trans_info *tri)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
bool acquired = false;
unsigned items;
unsigned vals;
int holders;
spin_lock(&tri->lock);
do {
smp_mb(); /* make sure we read after wait puts task in queue */
holders = atomic_read(&tri->holders);
if (holders & TRANS_HOLDERS_WRITE_FUNC_BIT)
return false;
trace_scoutfs_trans_acquired_hold(sb, cnt, rsv, rsv->holders,
&rsv->reserved, &rsv->actual,
tri->holders, tri->writing,
tri->reserved_items,
tri->reserved_vals);
} while (atomic_cmpxchg(&tri->holders, holders, holders + 1) != holders);
/* use a caller's existing reservation */
if (rsv->holders)
goto hold;
return true;
}
/* wait until the writing thread is finished */
if (tri->writing)
goto out;
/*
* As we drop the last trans holder we try to wake a writing thread that
* was waiting for us to finish.
*/
static void release_holders(struct super_block *sb)
{
dec_journal_info_holders();
sub_holders_and_wake(sb, 1);
}
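The global holder count works the same way in miniature: holders live in the low bits of one atomic and the writer parks a reserved bit there to fence out new holds. A rough userspace model using C11 atomics follows; the real value of TRANS_HOLDERS_WRITE_FUNC_BIT isn't shown in this hunk, so the bit below is an assumption.

#include <stdatomic.h>
#include <stdbool.h>

#define WRITE_FUNC_BIT (1 << 30)	/* assumed value, for illustration only */

static atomic_int holders;

/* like inc_holders_unless_writer(): refuse a hold while the writer bit is set */
static bool hold_unless_writer(void)
{
	int cur = atomic_load(&holders);

	do {
		if (cur & WRITE_FUNC_BIT)
			return false;
	} while (!atomic_compare_exchange_weak(&holders, &cur, cur + 1));

	return true;
}

/* like release_holders(), minus the journal_info and wake-up bookkeeping */
static void drop_hold(void)
{
	atomic_fetch_sub(&holders, 1);
}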
/* see if we can reserve space for our item count */
items = tri->reserved_items + cnt->items;
vals = tri->reserved_vals + cnt->vals;
/*
* The caller has incremented holders so it is blocking commits. We
* make some quick checks to see if we need to trigger and wait for
* another commit before proceeding.
*/
static bool commit_before_hold(struct super_block *sb, struct trans_info *tri)
{
/*
* In theory each dirty item page could be straddling two full
* blocks, requiring 4 allocations for each item cache page.
* That's much too conservative; typically many dirty item cache
* pages that are near each other all land in one block. This
* rough estimate is still so far beyond what typically happens
* that it accounts for having to dirty parent blocks and
* whatever dirtying is done during the transaction hold.
*/
if (scoutfs_alloc_meta_low(sb, &tri->alloc, scoutfs_item_dirty_pages(sb) * 2)) {
scoutfs_inc_counter(sb, trans_commit_dirty_meta_full);
return true;
}
/* XXX arbitrarily limit to 8 meg transactions */
if (scoutfs_block_writer_dirty_bytes(sb, &tri->wri) >=
(8 * 1024 * 1024)) {
scoutfs_inc_counter(sb, trans_commit_full);
queue_trans_work(sbi);
goto out;
/*
* Extent modifications can use meta allocators without creating
* dirty items so we have to check the meta alloc specifically.
* The sizes of the client's avail and freed roots are bounded so
* we're unlikely to need very many block allocations per
* transaction hold. XXX This should be more precisely tuned.
*/
if (scoutfs_alloc_meta_low(sb, &tri->alloc, 16)) {
scoutfs_inc_counter(sb, trans_commit_meta_alloc_low);
return true;
}
/* Try to refill data allocator before premature enospc */
if (scoutfs_data_alloc_free_bytes(sb) <= SCOUTFS_TRANS_DATA_ALLOC_LWM) {
scoutfs_inc_counter(sb, trans_commit_data_alloc_low);
queue_trans_work(sbi);
return true;
}
return false;
}
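Taken together the checks reduce to a small predicate. Here is a sketch using the thresholds visible in this diff (dirty item pages doubled as a metadata block estimate, a floor of 16 metadata blocks, and the 256MB data allocator low water mark from trans.h); the parameter names are illustrative, not the module's API:

#include <stdbool.h>
#include <stdint.h>

#define DATA_ALLOC_LWM (256ULL * 1024 * 1024)	/* SCOUTFS_TRANS_DATA_ALLOC_LWM */

static bool should_commit_before_hold(uint64_t meta_blocks_free,
				      uint64_t dirty_item_pages,
				      uint64_t data_bytes_free)
{
	/* each dirty item cache page may need blocks; doubling is a rough upper bound */
	if (meta_blocks_free < dirty_item_pages * 2)
		return true;

	/* extent work allocates metadata blocks without dirtying items */
	if (meta_blocks_free < 16)
		return true;

	/* refill the data allocator before returning a premature ENOSPC */
	if (data_bytes_free <= DATA_ALLOC_LWM)
		return true;

	return false;
}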
static bool acquired_hold(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
bool acquired;
/* if a caller already has a hold we acquire unconditionally */
if (inc_journal_info_holders()) {
atomic_inc(&tri->holders);
acquired = true;
goto out;
}
tri->reserved_items = items;
tri->reserved_vals = vals;
/* wait if the writer is blocking holds */
if (!inc_holders_unless_writer(tri)) {
dec_journal_info_holders();
acquired = false;
goto out;
}
rsv->reserved.items = cnt->items;
rsv->reserved.vals = cnt->vals;
/* wait if we're triggering another commit */
if (commit_before_hold(sb, tri)) {
release_holders(sb);
queue_trans_work(sbi);
acquired = false;
goto out;
}
hold:
rsv->holders++;
tri->holders++;
trace_scoutfs_trans_acquired_hold(sb, current->journal_info, atomic_read(&tri->holders));
acquired = true;
out:
spin_unlock(&tri->lock);
return acquired;
}
int scoutfs_hold_trans(struct super_block *sb,
const struct scoutfs_item_count cnt)
/*
* Try to hold the transaction. Holding the transaction prevents it
* from being committed. If a transaction is currently being written
* then we'll block until it's done and our hold can be granted.
*
* If a caller already holds the trans then we unconditionally acquire
* our hold and return to avoid deadlocks with our caller, the writing
* thread, and us. We record nested holds in a call stack with the
* journal_info pointer in the task_struct.
*
* The writing thread marks itself as a global trans_task which
* short-circuits all the hold machinery so it can call code that would
* otherwise try to hold transactions while it is writing.
*/
int scoutfs_hold_trans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv;
int ret;
/*
* Caller shouldn't provide garbage counts, nor counts that
* can't fit in segments by themselves.
*/
if (WARN_ON_ONCE(cnt.items <= 0 || cnt.vals < 0))
return -EINVAL;
if (current == sbi->trans_task)
return 0;
rsv = current->journal_info;
if (rsv == NULL) {
rsv = kzalloc(sizeof(struct scoutfs_reservation), GFP_NOFS);
if (!rsv)
return -ENOMEM;
rsv->magic = SCOUTFS_RESERVATION_MAGIC;
current->journal_info = rsv;
}
BUG_ON(rsv->magic != SCOUTFS_RESERVATION_MAGIC);
ret = wait_event_interruptible(sbi->trans_hold_wq,
acquired_hold(sb, rsv, &cnt));
if (ret && rsv->holders == 0) {
current->journal_info = NULL;
kfree(rsv);
}
return ret;
return wait_event_interruptible(sbi->trans_hold_wq, acquired_hold(sb));
}
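The typical caller pattern, as in scoutfs_xattr_drop() later in this diff, is hold, modify items, release. A short sketch where modify_items() stands in for whatever item calls the caller actually makes:

static int modify_under_trans(struct super_block *sb)
{
	int ret;

	ret = scoutfs_hold_trans(sb);
	if (ret < 0)
		return ret;

	/* item changes made here are pinned into this transaction until release */
	ret = modify_items(sb);		/* hypothetical helper */

	scoutfs_release_trans(sb);
	return ret;
}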
/*
@@ -431,86 +504,39 @@ int scoutfs_hold_trans(struct super_block *sb,
*/
bool scoutfs_trans_held(void)
{
struct scoutfs_reservation *rsv = current->journal_info;
unsigned long holders = (unsigned long)current->journal_info;
return rsv && rsv->magic == SCOUTFS_RESERVATION_MAGIC;
return (holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) == TRANS_JI_MAGIC));
}
/*
* Record a transaction holder's individual contribution to the dirty
* items in the current transaction. We're making sure that the
* reservation matches the possible item manipulations while they hold
* the reservation.
*
* It is possible and legitimate for an individual contribution to be
* negative if they delete dirty items. The item cache makes sure that
* the total dirty item count doesn't fall below zero.
*/
void scoutfs_trans_track_item(struct super_block *sb, signed items,
signed vals)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv = current->journal_info;
if (current == sbi->trans_task)
return;
BUG_ON(!rsv || rsv->magic != SCOUTFS_RESERVATION_MAGIC);
rsv->actual.items += items;
rsv->actual.vals += vals;
trace_scoutfs_trans_track_item(sb, items, vals, rsv->actual.items,
rsv->actual.vals, rsv->reserved.items,
rsv->reserved.vals);
WARN_ON_ONCE(rsv->actual.items > rsv->reserved.items);
WARN_ON_ONCE(rsv->actual.vals > rsv->reserved.vals);
}
/*
* As we drop the last hold in the reservation we try and wake other
* hold attempts that were waiting for space. As we drop the last trans
* holder we try to wake a writing thread that was waiting for us to
* finish.
*/
void scoutfs_release_trans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv;
DECLARE_TRANS_INFO(sb, tri);
bool wake = false;
if (current == sbi->trans_task)
return;
rsv = current->journal_info;
BUG_ON(!rsv || rsv->magic != SCOUTFS_RESERVATION_MAGIC);
release_holders(sb);
spin_lock(&tri->lock);
trace_scoutfs_release_trans(sb, current->journal_info, atomic_read(&tri->holders));
}
trace_scoutfs_release_trans(sb, rsv, rsv->holders, &rsv->reserved,
&rsv->actual, tri->holders, tri->writing,
tri->reserved_items, tri->reserved_vals);
/*
* Return the current transaction sequence. Whether this is racing with
* the transaction write thread is entirely dependent on the caller's
* context.
*/
u64 scoutfs_trans_sample_seq(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
u64 ret;
BUG_ON(rsv->holders <= 0);
BUG_ON(tri->holders <= 0);
spin_lock(&sbi->trans_write_lock);
ret = sbi->trans_seq;
spin_unlock(&sbi->trans_write_lock);
if (--rsv->holders == 0) {
tri->reserved_items -= rsv->reserved.items;
tri->reserved_vals -= rsv->reserved.vals;
current->journal_info = NULL;
kfree(rsv);
wake = true;
}
if (--tri->holders == 0)
wake = true;
spin_unlock(&tri->lock);
if (wake)
wake_up(&sbi->trans_hold_wq);
return ret;
}
int scoutfs_setup_trans(struct super_block *sb)
@@ -522,7 +548,7 @@ int scoutfs_setup_trans(struct super_block *sb)
if (!tri)
return -ENOMEM;
spin_lock_init(&tri->lock);
atomic_set(&tri->holders, 0);
scoutfs_block_writer_init(sb, &tri->wri);
sbi->trans_write_workq = alloc_workqueue("scoutfs_trans",


@@ -6,20 +6,16 @@
/* the client will force commits if data allocators get too low */
#define SCOUTFS_TRANS_DATA_ALLOC_LWM (256ULL * 1024 * 1024)
#include "count.h"
void scoutfs_trans_write_func(struct work_struct *work);
int scoutfs_trans_sync(struct super_block *sb, int wait);
int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
int datasync);
void scoutfs_trans_restart_sync_deadline(struct super_block *sb);
int scoutfs_hold_trans(struct super_block *sb,
const struct scoutfs_item_count cnt);
int scoutfs_hold_trans(struct super_block *sb);
bool scoutfs_trans_held(void);
void scoutfs_release_trans(struct super_block *sb);
void scoutfs_trans_track_item(struct super_block *sb, signed items,
signed vals);
u64 scoutfs_trans_sample_seq(struct super_block *sb);
int scoutfs_trans_get_log_trees(struct super_block *sb);
bool scoutfs_trans_has_dirty(struct super_block *sb);


@@ -38,10 +38,7 @@ struct scoutfs_triggers {
struct scoutfs_triggers *name = SCOUTFS_SB(sb)->triggers
static char *names[] = {
[SCOUTFS_TRIGGER_BTREE_STALE_READ] = "btree_stale_read",
[SCOUTFS_TRIGGER_BTREE_ADVANCE_RING_HALF] = "btree_advance_ring_half",
[SCOUTFS_TRIGGER_HARD_STALE_ERROR] = "hard_stale_error",
[SCOUTFS_TRIGGER_SEG_STALE_READ] = "seg_stale_read",
[SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE] = "block_remove_stale",
[SCOUTFS_TRIGGER_STATFS_LOCK_PURGE] = "statfs_lock_purge",
};


@@ -1,11 +1,8 @@
#ifndef _SCOUTFS_TRIGGERS_H_
#define _SCOUTFS_TRIGGERS_H_
enum {
SCOUTFS_TRIGGER_BTREE_STALE_READ,
SCOUTFS_TRIGGER_BTREE_ADVANCE_RING_HALF,
SCOUTFS_TRIGGER_HARD_STALE_ERROR,
SCOUTFS_TRIGGER_SEG_STALE_READ,
enum scoutfs_trigger {
SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE,
SCOUTFS_TRIGGER_STATFS_LOCK_PURGE,
SCOUTFS_TRIGGER_NR,
};

20
kmod/src/util.h Normal file

@@ -0,0 +1,20 @@
#ifndef _SCOUTFS_UTIL_H_
#define _SCOUTFS_UTIL_H_
/*
* Little utility helpers that probably belong upstream.
*/
static inline void down_write_two(struct rw_semaphore *a,
struct rw_semaphore *b)
{
BUG_ON(a == b);
if (a > b)
swap(a, b);
down_write(a);
down_write_nested(b, SINGLE_DEPTH_NESTING);
}
#endif
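A usage sketch for the helper above: both rwsems are taken for write in address order so concurrent callers can't deadlock on each other, and the helper requires two distinct semaphores. Unlock order doesn't matter.

static void update_both(struct rw_semaphore *a, struct rw_semaphore *b)
{
	/* always locks the lower address first and nests the second */
	down_write_two(a, b);

	/* ... modify the state protected by both ... */

	up_write(a);
	up_write(b);
}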


@@ -20,7 +20,7 @@
#include "inode.h"
#include "key.h"
#include "super.h"
#include "kvec.h"
#include "item.h"
#include "forest.h"
#include "trans.h"
#include "xattr.h"
@@ -94,21 +94,17 @@ static int unknown_prefix(const char *name)
strncmp(name, SCOUTFS_XATTR_PREFIX, SCOUTFS_XATTR_PREFIX_LEN);
}
struct prefix_tags {
unsigned long hide:1,
indx:1;
};
#define HIDE_TAG "hide."
#define INDX_TAG "indx."
#define SRCH_TAG "srch."
#define TAG_LEN (sizeof(HIDE_TAG) - 1)
static int parse_tags(const char *name, unsigned int name_len,
struct prefix_tags *tgs)
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs)
{
bool found;
memset(tgs, 0, sizeof(struct prefix_tags));
memset(tgs, 0, sizeof(struct scoutfs_xattr_prefix_tags));
if ((name_len < (SCOUTFS_XATTR_PREFIX_LEN + TAG_LEN + 1)) ||
strncmp(name, SCOUTFS_XATTR_PREFIX, SCOUTFS_XATTR_PREFIX_LEN))
@@ -120,8 +116,8 @@ static int parse_tags(const char *name, unsigned int name_len,
if (!strncmp(name, HIDE_TAG, TAG_LEN)) {
if (++tgs->hide == 0)
return -EINVAL;
} else if (!strncmp(name, INDX_TAG, TAG_LEN)) {
if (++tgs->indx == 0)
} else if (!strncmp(name, SRCH_TAG, TAG_LEN)) {
if (++tgs->srch == 0)
return -EINVAL;
} else {
/* only reason to use scoutfs. is tags */
@@ -136,17 +132,6 @@ static int parse_tags(const char *name, unsigned int name_len,
return 0;
}
void scoutfs_xattr_index_key(struct scoutfs_key *key,
u64 hash, u64 ino, u64 id)
{
scoutfs_key_set_zeros(key);
key->sk_zone = SCOUTFS_XATTR_INDEX_ZONE;
key->skxi_hash = cpu_to_le64(hash);
key->sk_type = SCOUTFS_XATTR_INDEX_NAME_TYPE;
key->skxi_ino = cpu_to_le64(ino);
key->skxi_id = cpu_to_le64(id);
}
/*
* Find the next xattr and copy the key, xattr header, and as much of
* the name and value into the callers buffer as we can. Returns the
@@ -171,7 +156,6 @@ static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
{
struct super_block *sb = inode->i_sb;
struct scoutfs_key last;
struct kvec val;
u8 last_part;
int total;
u8 part;
@@ -194,8 +178,9 @@ static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
for (;;) {
key->skx_part = part;
kvec_init(&val, (void *)xat + total, bytes - total);
ret = scoutfs_forest_next(sb, key, &last, &val, lock);
ret = scoutfs_item_next(sb, key, &last,
(void *)xat + total, bytes - total,
lock);
if (ret < 0) {
/* XXX corruption, ran out of parts */
if (ret == -ENOENT && part > 0)
@@ -271,7 +256,6 @@ static int create_xattr_items(struct inode *inode, u64 id,
struct scoutfs_key key;
unsigned int part_bytes;
unsigned int total;
struct kvec val;
int ret;
init_xattr_key(&key, scoutfs_ino(inode),
@@ -282,12 +266,13 @@ static int create_xattr_items(struct inode *inode, u64 id,
while (total < bytes) {
part_bytes = min_t(unsigned int, bytes - total,
SCOUTFS_XATTR_MAX_PART_SIZE);
kvec_init(&val, (void *)xat + total, part_bytes);
ret = scoutfs_forest_create(sb, &key, &val, lock);
ret = scoutfs_item_create(sb, &key,
(void *)xat + total, part_bytes,
lock);
if (ret) {
while (key.skx_part-- > 0)
scoutfs_forest_delete_dirty(sb, &key);
scoutfs_item_delete(sb, &key, lock);
break;
}
@@ -299,24 +284,114 @@ static int create_xattr_items(struct inode *inode, u64 id,
}
/*
* Delete and save the items that make up the given xattr. If this
* returns an error then the deleted and saved items are left on the
* list for the caller to restore.
* Delete the items that make up the given xattr. If this returns an
* error then no items have been deleted.
*/
static int delete_xattr_items(struct inode *inode, u32 name_hash, u64 id,
u8 nr_parts, struct list_head *list,
struct scoutfs_lock *lock)
u8 nr_parts, struct scoutfs_lock *lock)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_key key;
int ret;
int ret = 0;
int i;
init_xattr_key(&key, scoutfs_ino(inode), name_hash, id);
do {
ret = scoutfs_forest_delete_save(sb, &key, list, lock);
} while (ret == 0 && ++key.skx_part < nr_parts);
/* dirty additional existing old items */
for (i = 1; i < nr_parts; i++) {
key.skx_part = i;
ret = scoutfs_item_dirty(sb, &key, lock);
if (ret)
goto out;
}
for (i = 0; i < nr_parts; i++) {
key.skx_part = i;
ret = scoutfs_item_delete(sb, &key, lock);
if (ret)
break;
}
out:
return ret;
}
/*
* The caller needs to overwrite existing old xattr items with new
* items. We carefully stage the changes so that we can always unwind
* to the original items if we return an error. Both the old and new
* xattrs have at least one part; either can have more parts. We dirty
* and create first because we can always unwind those. We delete last
* after dirtying so that it can't fail and we don't have to restore the
* deleted items.
*/
static int change_xattr_items(struct inode *inode, u64 id,
struct scoutfs_xattr *new_xat,
unsigned int new_bytes, u8 new_parts,
u8 old_parts, struct scoutfs_lock *lock)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_key key;
int last_created = -1;
int bytes;
int off;
int i;
int ret;
init_xattr_key(&key, scoutfs_ino(inode),
xattr_name_hash(new_xat->name, new_xat->name_len), id);
/* dirty existing old items */
for (i = 0; i < old_parts; i++) {
key.skx_part = i;
ret = scoutfs_item_dirty(sb, &key, lock);
if (ret)
goto out;
}
/* create any new items past the old */
for (i = old_parts; i < new_parts; i++) {
off = i * SCOUTFS_XATTR_MAX_PART_SIZE;
bytes = min_t(unsigned int, new_bytes - off,
SCOUTFS_XATTR_MAX_PART_SIZE);
key.skx_part = i;
ret = scoutfs_item_create(sb, &key, (void *)new_xat + off,
bytes, lock);
if (ret)
goto out;
last_created = i;
}
/* update dirtied overlapping existing items, last partial first */
for (i = old_parts - 1; i >= 0; i--) {
off = i * SCOUTFS_XATTR_MAX_PART_SIZE;
bytes = min_t(unsigned int, new_bytes - off,
SCOUTFS_XATTR_MAX_PART_SIZE);
key.skx_part = i;
ret = scoutfs_item_update(sb, &key, (void *)new_xat + off,
bytes, lock);
/* only last partial can fail, then we unwind created */
if (ret < 0)
goto out;
}
/* delete any dirtied old items past new */
for (i = new_parts; i < old_parts; i++) {
key.skx_part = i;
scoutfs_item_delete(sb, &key, lock);
}
ret = 0;
out:
if (ret < 0) {
/* delete any newly created items */
for (i = old_parts; i <= last_created; i++) {
key.skx_part = i;
scoutfs_item_delete(sb, &key, lock);
}
}
return ret;
}
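The offset and length math shared by these item helpers is simple: the packed xattr (header, name, value) is cut into parts of at most SCOUTFS_XATTR_MAX_PART_SIZE bytes, one item per part. A standalone sketch with a stand-in part size:

#include <stdio.h>

#define PART_SIZE 1000u		/* stand-in for SCOUTFS_XATTR_MAX_PART_SIZE */

static unsigned int nr_parts(unsigned int total)
{
	return (total + PART_SIZE - 1) / PART_SIZE;
}

int main(void)
{
	unsigned int total = 2600;	/* header + name + value bytes */
	unsigned int i, off, bytes;

	for (i = 0; i < nr_parts(total); i++) {
		off = i * PART_SIZE;
		bytes = total - off < PART_SIZE ? total - off : PART_SIZE;
		printf("part %u: offset %u, %u bytes\n", i, off, bytes);
	}
	return 0;
}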
@@ -346,7 +421,7 @@ ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
/* only need enough for caller's name and value sizes */
bytes = sizeof(struct scoutfs_xattr) + name_len + size;
xat = kmalloc(bytes, GFP_NOFS);
xat = __vmalloc(bytes, GFP_NOFS, PAGE_KERNEL);
if (!xat)
return -ENOMEM;
@@ -389,7 +464,7 @@ ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
ret = le16_to_cpu(xat->val_len);
memcpy(buffer, &xat->name[xat->name_len], ret);
out:
kfree(xat);
vfree(xat);
return ret;
}
@@ -411,20 +486,17 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *indx_lock = NULL;
struct scoutfs_lock *lck = NULL;
size_t name_len = strlen(name);
struct scoutfs_key indx_key;
struct scoutfs_key key;
struct prefix_tags tgs;
bool undo_indx = false;
bool undo_srch = false;
LIST_HEAD(ind_locks);
LIST_HEAD(saved);
u8 found_parts;
unsigned int bytes;
u64 ind_seq;
u64 hash;
u64 hash = 0;
u64 id = 0;
int ret;
int err;
@@ -444,14 +516,14 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
if (unknown_prefix(name))
return -EOPNOTSUPP;
if (parse_tags(name, name_len, &tgs) != 0)
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
if ((tgs.hide || tgs.indx) && !capable(CAP_SYS_ADMIN))
if ((tgs.hide || tgs.srch) && !capable(CAP_SYS_ADMIN))
return -EPERM;
bytes = sizeof(struct scoutfs_xattr) + name_len + size;
xat = kmalloc(bytes, GFP_NOFS);
xat = __vmalloc(bytes, GFP_NOFS, PAGE_KERNEL);
if (!xat) {
ret = -ENOMEM;
goto out;
@@ -491,29 +563,21 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
/* prepare our xattr */
if (value) {
id = si->next_xattr_id++;
if (found_parts)
id = le64_to_cpu(key.skx_id);
else
id = si->next_xattr_id++;
xat->name_len = name_len;
xat->val_len = cpu_to_le16(size);
memset(xat->__pad, 0, sizeof(xat->__pad));
memcpy(xat->name, name, name_len);
memcpy(&xat->name[xat->name_len], value, size);
}
if (tgs.indx && !(found_parts && value)) {
hash = scoutfs_hash64(name, name_len);
ret = scoutfs_lock_xattr_index(sb, SCOUTFS_LOCK_WRITE_ONLY, 0,
hash, &indx_lock);
if (ret < 0)
goto unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_XATTR_SET(found_parts,
value != NULL,
name_len, size,
tgs.indx));
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -523,34 +587,27 @@ retry:
if (ret < 0)
goto release;
if (tgs.indx && !(found_parts && value)) {
if (tgs.srch && !(found_parts && value)) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
hash = scoutfs_hash64(name, name_len);
scoutfs_xattr_index_key(&indx_key, hash, ino, id);
if (value)
ret = scoutfs_forest_create_force(sb, &indx_key, NULL,
indx_lock);
else
ret = scoutfs_forest_delete_force(sb, &indx_key,
indx_lock);
ret = scoutfs_forest_srch_add(sb, hash, ino, id);
if (ret < 0)
goto release;
undo_indx = true;
undo_srch = true;
}
ret = 0;
if (found_parts)
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, bytes,
xattr_nr_parts(xat), found_parts, lck);
else if (found_parts)
ret = delete_xattr_items(inode, le64_to_cpu(key.skx_name_hash),
le64_to_cpu(key.skx_id), found_parts,
&saved, lck);
if (value && ret == 0)
lck);
else
ret = create_xattr_items(inode, id, xat, bytes, lck);
if (ret < 0) {
scoutfs_forest_restore(sb, &saved, lck);
if (ret < 0)
goto release;
}
scoutfs_forest_free_batch(sb, &saved);
/* XXX do these want i_mutex or anything? */
inode_inc_iversion(inode);
@@ -559,13 +616,8 @@ retry:
ret = 0;
release:
if (ret < 0 && undo_indx) {
if (value)
err = scoutfs_forest_delete_force(sb, &indx_key,
indx_lock);
else
err = scoutfs_forest_create_force(sb, &indx_key, NULL,
indx_lock);
if (ret < 0 && undo_srch) {
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
@@ -573,10 +625,9 @@ release:
scoutfs_inode_index_unlock(sb, &ind_locks);
unlock:
up_write(&si->xattr_rwsem);
scoutfs_unlock(sb, indx_lock, SCOUTFS_LOCK_WRITE_ONLY);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_WRITE);
out:
kfree(xat);
vfree(xat);
return ret;
}
@@ -601,10 +652,10 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_key key;
struct prefix_tags tgs;
unsigned int bytes;
ssize_t total = 0;
u32 name_hash = 0;
@@ -640,8 +691,8 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
break;
}
is_hidden = parse_tags(xat->name, xat->name_len, &tgs) == 0 &&
tgs.hide;
is_hidden = scoutfs_xattr_parse_tags(xat->name, xat->name_len,
&tgs) == 0 && tgs.hide;
if (show_hidden == is_hidden) {
if (size) {
@@ -693,15 +744,12 @@ ssize_t scoutfs_listxattr(struct dentry *dentry, char *buffer, size_t size)
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock)
{
struct scoutfs_lock *indx_lock = NULL;
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_key indx_key;
struct scoutfs_key last;
struct scoutfs_key key;
struct prefix_tags tgs;
bool release = false;
unsigned int bytes;
struct kvec val;
u64 hash;
int ret;
@@ -717,8 +765,8 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
init_xattr_key(&last, ino, U32_MAX, U64_MAX);
for (;;) {
kvec_init(&val, (void *)xat, bytes);
ret = scoutfs_forest_next(sb, &key, &last, &val, lock);
ret = scoutfs_item_next(sb, &key, &last, (void *)xat, bytes,
lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
@@ -726,32 +774,23 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
}
if (key.skx_part != 0 ||
parse_tags(xat->name, xat->name_len, &tgs) != 0)
scoutfs_xattr_parse_tags(xat->name, xat->name_len,
&tgs) != 0)
memset(&tgs, 0, sizeof(tgs));
if (tgs.indx) {
hash = scoutfs_hash64(xat->name, xat->name_len);
scoutfs_xattr_index_key(&indx_key, hash, ino,
le64_to_cpu(key.skx_id));
ret = scoutfs_lock_xattr_index(sb,
SCOUTFS_LOCK_WRITE_ONLY,
0, hash, &indx_lock);
if (ret < 0)
break;
}
ret = scoutfs_hold_trans(sb, SIC_EXACT(2, 0));
ret = scoutfs_hold_trans(sb);
if (ret < 0)
break;
release = true;
ret = scoutfs_forest_delete(sb, &key, lock);
ret = scoutfs_item_delete(sb, &key, lock);
if (ret < 0)
break;
if (tgs.indx) {
ret = scoutfs_forest_delete_force(sb, &indx_key,
indx_lock);
if (tgs.srch) {
hash = scoutfs_hash64(xat->name, xat->name_len);
ret = scoutfs_forest_srch_add(sb, hash, ino,
le64_to_cpu(key.skx_id));
if (ret < 0)
break;
}
@@ -759,15 +798,11 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
scoutfs_release_trans(sb);
release = false;
scoutfs_unlock(sb, indx_lock, SCOUTFS_LOCK_WRITE_ONLY);
indx_lock = NULL;
/* don't need to inc, next won't see deleted item */
}
if (release)
scoutfs_release_trans(sb);
scoutfs_unlock(sb, indx_lock, SCOUTFS_LOCK_WRITE_ONLY);
kfree(xat);
out:
return ret;


@@ -14,7 +14,12 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock);
void scoutfs_xattr_index_key(struct scoutfs_key *key,
u64 hash, u64 ino, u64 id);
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
srch:1;
};
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs);
#endif
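A usage sketch for the declaration above; the names are made up and the accepted tag combinations are whatever scoutfs_xattr_parse_tags() itself enforces:

struct scoutfs_xattr_prefix_tags tgs;

/* "hide." keeps the xattr out of normal listxattr output */
scoutfs_xattr_parse_tags("scoutfs.hide.demo", strlen("scoutfs.hide.demo"), &tgs);
/* expect tgs.hide == 1, tgs.srch == 0 */

/* "srch." logs the xattr into the search index when it is set or removed */
scoutfs_xattr_parse_tags("scoutfs.srch.demo", strlen("scoutfs.srch.demo"), &tgs);
/* expect tgs.hide == 0, tgs.srch == 1 */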

6
tests/.gitignore vendored Normal file

@@ -0,0 +1,6 @@
src/*.d
src/createmany
src/dumb_setxattr
src/handle_cat
src/bulk_create_paths
src/find_xattrs

50
tests/Makefile Normal file

@@ -0,0 +1,50 @@
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing -I ../kmod/src
SHELL := /usr/bin/bash
# each binary command is built from a single .c file
BIN := src/createmany \
src/dumb_setxattr \
src/handle_cat \
src/bulk_create_paths \
src/stage_tmpfile \
src/find_xattrs
DEPS := $(wildcard src/*.d)
all: $(BIN)
ifneq ($(DEPS),)
-include $(DEPS)
endif
$(BIN): %: %.c Makefile
gcc $(CFLAGS) -MD -MP -MF $*.d $< -o $@
.PHONY: clean
clean:
@rm -f $(BIN) $(DEPS)
#
# Make sure we only have all three items needed for each test: entry in
# sequence, test script in tests/, and output in golden/.
#
.PHONY: check-test-files
check-test-files:
@for t in $$(grep -v "^#" sequence); do \
test -e "tests/$$t" || \
echo "no test for list entry: $$t"; \
t=$${t%%.sh}; \
test -e "golden/$$t" || \
echo "no output for list entry: $$t"; \
done; \
for t in golden/*; do \
t=$$(basename "$$t"); \
grep -q "^$$t.sh$$" sequence || \
echo "output not in list: $$t"; \
done; \
for t in tests/*; do \
t=$$(basename "$$t"); \
test "$$t" == "list" && continue; \
grep -q "^$$t$$" sequence || \
echo "test not in list: $$t"; \
done

123
tests/README.md Normal file

@@ -0,0 +1,123 @@
This test suite exercises multi-node scoutfs by using multiple mounts on
one host to simulate multiple nodes across a network.
It also contains a light test wrapper that executes xfstests on one of
the test mounts.
## Invoking Tests
The basic test invocation has to specify the devices for the fs, the
number of mounts to test, whether to create a new fs and insert the
built module, and where to put the results.
# bash ./run-tests.sh \
-M /dev/vda \
-D /dev/vdb \
-i \
-m \
-n 3 \
-q 2 \
-r ./results
All options can be seen by running with -h.
This script is built to test multi-node systems on one host by using
different mounts of the same devices. The script creates a fake block
device in front of each fs block device for each mount that will be
tested. Currently it will create free loop devices and will mount on
/mnt/test.[0-9].
All tests will be run by default. Particular tests can be included or
excluded by providing test name regular expressions with the -I and -E
options. The definitive list of tests and the order in which they'll be
run is found in the sequence file.
## xfstests
The last test that is run checks out, builds, and runs xfstests. It
needs -X and -x options for the xfstests git repo and branch. It also
needs spare devices on which to make scratch scoutfs volumes. The test
verifies that the expected set of xfstests tests ran and passed.
-f /dev/vdc \
-e /dev/vdd \
-X $HOME/git/scoutfs-xfstests \
-x scoutfs \
An xfstests repo that knows about scoutfs is only required to sprinkle
the scoutfs cases throughout the xfstests harness.
## Individual Test Invocation
Each test is run in a new bash invocation. A set of directories in the
test volume and in the results path are created for the test. Each
test's working directory isn't managed.
Test output, temp files, and dmesg snapshots are all put in a tmp/ dir
in the results/ dir. Per-test dirs are only destroyed before each test
invocation.
The harness will check for unexpected output in dmesg after each
individual test.
Each test that fails will have its results appended to the fail.log file
in the results/ directory. The details of the failure can be examined
in the directories for each test in results/output/ and results/tmp/.
## Writing tests
Tests have access to a set of t\_ prefixed bash functions that are found
in files in funcs/.
Tests complete by calling t\_ functions which indicate the result of the
test and can return a message. If the test passes then its output is
compared with known good output. If the output doesn't match then the
test fails. The t\_ completion functions return specific status codes so
that returning without calling one can be detected.
The golden output has to be consistent across test platforms so there
are a number of filter functions which strip out local details from
command output. t\_filter\_fs is by far the most used; it canonicalizes
fs mount paths and block device details.
Tests can be relatively loose about checking errors. If commands
produce output in failure cases then the test will fail without having
to specifically test for errors on every command execution. Care should
be taken to make sure that blowing through a bunch of commands with no
error checking doesn't produce catastrophic results. Usually tests are
simple and it's fine.
A bare sync will sync all the mounted filesystems and ensure that
no mounts have dirty data. sync -f can be used to sync just a specific
filesystem, though it doesn't exist on all platforms.
The harness doesn't currently ensure that all mounts are restored after
each test invocation. It probably should. Currently it's the
responsibility of the test to restore any mounts it alters and there are
t\_ functions to mount all configured mount points.
## Environment Variables
Tests have a number of exported environment variables that are commonly
used during the test.
| Variable | Description | Origin | Example |
| ---------------- | ------------------- | --------------- | ----------------- |
| T\_MB[0-9] | per-mount meta bdev | created per run | /dev/loop0 |
| T\_DB[0-9] | per-mount data bdev | created per run | /dev/loop1 |
| T\_D[0-9] | per-mount test dir | made for test | /mnt/test.[0-9]/t |
| T\_META\_DEVICE | main FS meta bdev | -M | /dev/vda |
| T\_DATA\_DEVICE | main FS data bdev | -D | /dev/vdb |
| T\_EX\_META\_DEV | scratch meta bdev | -f | /dev/vdd |
| T\_EX\_DATA\_DEV | scratch data bdev | -e | /dev/vdc |
| T\_M[0-9] | mount paths | mounted per run | /mnt/test.[0-9]/ |
| T\_NR\_MOUNTS | number of mounts | -n | 3 |
| T\_O[0-9] | mount options | created per run | -o server\_addr= |
| T\_QUORUM | quorum count | -q | 2 |
| T\_TMP | per-test tmp prefix | made for test | results/tmp/t/tmp |
| T\_TMPDIR | per-test tmp dir | made for test | results/tmp/t |
There are also a number of variables that are set in response to options
and are exported but their use is rare so they aren't included here.

58
tests/funcs/exec.sh Normal file

@@ -0,0 +1,58 @@
t_status_msg()
{
echo "$*" > "$T_TMPDIR/status.msg"
}
export T_PASS_STATUS=100
export T_SKIP_STATUS=101
export T_FAIL_STATUS=102
export T_FIRST_STATUS="$T_PASS_STATUS"
export T_LAST_STATUS="$T_FAIL_STATUS"
t_pass()
{
exit $T_PASS_STATUS
}
t_skip()
{
t_status_msg "$@"
exit $T_SKIP_STATUS
}
t_fail()
{
t_status_msg "$@"
exit $T_FAIL_STATUS
}
#
# Quietly run a command during a test. If it succeeds then we have a
# log of its execution but its output isn't included in the test's
# compared output. If it fails then the test fails.
#
t_quiet()
{
echo "# $*" >> "$T_TMPDIR/quiet.log"
"$@" > "$T_TMPDIR/quiet.log" 2>&1 || \
t_fail "quiet command failed"
}
#
# redirect test output back to the output of the invoking script instead
# of the compared output.
#
t_restore_output()
{
exec >&6 2>&1
}
#
# redirect a command's output back to the compared output after the
# test has restored its output
#
t_compare_output()
{
"$@" >&7 2>&1
}

66
tests/funcs/filter.sh Normal file

@@ -0,0 +1,66 @@
# filter out device ids and mount paths
t_filter_fs()
{
sed -e 's@mnt/test\.[0-9]*@mnt/test@g' \
-e 's@Device: [a-fA-F0-9]*h/[0-9]*d@Device: 0h/0d@g'
}
#
# Filter out expected messages. Putting messages here implies that
# tests aren't relying on messages to discover failures.. they're
# directly testing the result of whatever it is that's generating the
# message.
#
t_filter_dmesg()
{
local re
# the kernel can just be noisy
re=" used greatest stack depth: "
# mkfs/mount checks partition tables
re="$re|unknown partition table"
# dm swizzling
re="$re|device doesn't appear to be in the dev hash table"
re="$re|device-mapper:.*uevent:.*version"
re="$re|device-mapper:.*ioctl:.*initialised"
# some tests try invalid devices
re="$re|scoutfs .* error reading super block"
re="$re| EXT4-fs (.*): get root inode failed"
re="$re| EXT4-fs (.*): mount failed"
re="$re| EXT4-fs (.*): no journal found"
re="$re| EXT4-fs (.*): VFS: Can't find ext4 filesystem"
# dropping caches is fine
re="$re| drop_caches: "
# mount and unmount spew a bunch
re="$re|scoutfs.*client connected"
re="$re|scoutfs.*client disconnected"
re="$re|scoutfs.*server setting up"
re="$re|scoutfs.*server ready"
re="$re|scoutfs.*server accepted"
re="$re|scoutfs.*server closing"
re="$re|scoutfs.*server shutting down"
re="$re|scoutfs.*server stopped"
# xfstests records test execution in dmesg
re="$re| run fstests "
# tests that drop unmount io triggers fencing
re="$re|scoutfs .* error: fencing "
re="$re|scoutfs .*: waiting for .* lock clients"
re="$re|scoutfs .*: all lock clients recovered"
re="$re|scoutfs .* error: client rid.*lock recovery timed out"
# some tests mount w/o options
re="$re|scoutfs .* error: Required mount option \"metadev_path\" not found"
# in debugging kernels we can slow things down a bit
re="$re|hrtimer: interrupt took .*"
egrep -v "($re)"
}

279
tests/funcs/fs.sh Normal file

@@ -0,0 +1,279 @@
#
# Make all previously dirty items in memory in all mounts synced and
# visible in the inode seq indexes. We have to force a sync on every
# node by dirtying data as that's the only way to guarantee advancing
# the sequence number on each node which limits index visibility. Some
# distros don't have sync -f so we dirty our mounts then sync
# everything.
#
t_sync_seq_index()
{
local m
for m in $T_MS; do
t_quiet touch $m
done
t_quiet sync
}
#
# Output the "f.$fsid.r.$rid" identifier string for the given mount
# number, 0 is used by default if none is specified.
#
t_ident()
{
local nr="${1:-0}"
local mnt="$(eval echo \$T_M$nr)"
local fsid
local rid
fsid=$(scoutfs statfs -s fsid -p "$mnt")
rid=$(scoutfs statfs -s rid -p "$mnt")
echo "f.${fsid:0:6}.r.${rid:0:6}"
}
#
# Output the mount's sysfs path, defaulting to mount 0 if none is
# specified.
#
t_sysfs_path()
{
local nr="$1"
echo "/sys/fs/scoutfs/$(t_ident $nr)"
}
#
# Output the mount's debugfs path, defaulting to mount 0 if none is
# specified.
#
t_debugfs_path()
{
local nr="$1"
echo "/sys/kernel/debug/scoutfs/$(t_ident $nr)"
}
#
# output all the configured test nrs for iteration
#
t_fs_nrs()
{
seq 0 $((T_NR_MOUNTS - 1))
}
#
# Output the mount nr of the current server. This takes no steps to
# ensure that the server doesn't shut down and have some other mount
# take over.
#
t_server_nr()
{
for i in $(t_fs_nrs); do
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader)" == "1" ]; then
echo $i
return
fi
done
t_fail "t_server_nr didn't find a server"
}
#
# Output the mount nr of the first client that we find. There can be
# no clients if there's only one mount who has to be the server. This
# takes no steps to ensure that the client doesn't become a server at
# any point.
#
t_first_client_nr()
{
for i in $(t_fs_nrs); do
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader)" == "0" ]; then
echo $i
return
fi
done
t_fail "t_first_client_nr didn't find any clients"
}
#
# The number of quorum members needed to form a majority to start the
# server.
#
t_majority_count()
{
if [ "$T_QUORUM" -lt 3 ]; then
echo 1
else
echo $(((T_QUORUM / 2) + 1))
fi
}
t_mount()
{
local nr="$1"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet mount -t scoutfs \$T_O$nr \$T_DB$nr \$T_M$nr
}
t_umount()
{
local nr="$1"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet umount \$T_M$nr
}
#
# Attempt to mount all the configured mounts, assuming that they're
# not already mounted.
#
t_mount_all()
{
local pids=""
local p
for i in $(t_fs_nrs); do
t_mount $i &
p="$!"
pids="$pids $!"
done
for p in $pids; do
t_quiet wait $p
done
}
#
# Attempt to unmount all the configured mounts, assuming that they're
# all mounted.
#
t_umount_all()
{
local pids=""
local p
for i in $(t_fs_nrs); do
t_umount $i &
p="$!"
pids="$pids $!"
done
for p in $pids; do
t_quiet wait $p
done
}
t_remount_all()
{
t_quiet t_umount_all || t_fail "umounting all failed"
t_quiet t_mount_all || t_fail "mounting all failed"
}
t_reinsert_remount_all()
{
t_quiet t_umount_all || t_fail "umounting all failed"
t_quiet rmmod scoutfs || \
t_fail "rmmod scoutfs failed"
t_quiet insmod "$T_KMOD/src/scoutfs.ko" ||
t_fail "insmod scoutfs failed"
t_quiet t_mount_all || t_fail "mounting all failed"
}
t_trigger_path() {
local nr="$1"
echo "/sys/kernel/debug/scoutfs/$(t_ident $nr)/trigger"
}
t_trigger_get() {
local which="$1"
local nr="$2"
cat "$(t_trigger_path "$nr")/$which"
}
t_trigger_show() {
local which="$1"
local string="$2"
local nr="$3"
echo "trigger $which $string: $(t_trigger_get $which $nr)"
}
t_trigger_arm_silent() {
local which="$1"
local nr="$2"
local path=$(t_trigger_path "$nr")
echo 1 > "$path/$which"
}
t_trigger_arm() {
local which="$1"
local nr="$2"
t_trigger_arm_silent $which $nr
t_trigger_show $which armed $nr
}
#
# output the value of the given counter for the given mount, defaulting
# to mount 0 if a mount isn't specified.
#
t_counter() {
local which="$1"
local nr="$2"
cat "$(t_sysfs_path $nr)/counters/$which"
}
#
# output the difference between the current value of a counter and the
# caller's provided previous value.
#
t_counter_diff_value() {
local which="$1"
local old="$2"
local nr="$3"
local new="$(t_counter $which $nr)"
echo "$((new - old))"
}
#
# output the value of the given counter for the given mount, defaulting
# to mount 0 if a mount isn't specified. For tests which expect a
# specific difference in counters.
#
t_counter_diff() {
local which="$1"
local old="$2"
local nr="$3"
echo "counter $which diff $(t_counter_diff_value $which $old $nr)"
}
#
# output a message indicating whether or not the counter value changed.
# For tests that expect a difference, or not, but the amount of
# difference isn't significant.
#
t_counter_diff_changed() {
local which="$1"
local old="$2"
local nr="$3"
local diff="$(t_counter_diff_value $which $old $nr)"
test "$diff" -eq 0 && \
echo "counter $which didn't change" ||
echo "counter $which changed"
}

40
tests/funcs/require.sh Normal file

@@ -0,0 +1,40 @@
#
# Make sure that all the base command arguments are found in the path.
# This isn't strictly necessary as the test will naturally fail if the
# command isn't found, but it's nice to fail fast and clearly
# communicate why.
#
t_require_commands() {
local c
for c in "$@"; do
which "$c" >/dev/null 2>&1 || \
t_fail "command $c not found in path"
done
}
#
# make sure that we have at least this many mounts
#
t_require_mounts() {
local req="$1"
test "$T_NR_MOUNTS" -ge "$req" || \
t_skip "$req mounts required, only have $T_NR_MOUNTS"
}
#
# Require that the meta device be at least the size string argument, as
# parsed by numfmt using single-char base 2 suffixes (iec), e.g. 64G.
#
t_require_meta_size() {
local dev="$T_META_DEVICE"
local req_iec="$1"
local req_bytes=$(numfmt --from=iec --to=none $req_iec)
local dev_bytes=$(blockdev --getsize64 $dev)
local dev_iec=$(numfmt --from=auto --to=iec $dev_bytes)
test "$dev_bytes" -ge "$req_bytes" || \
t_skip "$dev must be at least $req_iec, is $dev_iec"
}


@@ -0,0 +1,36 @@
== calculate number of files
== create per mount dirs
== generate phase scripts
== round 1: create
== round 1: online
== round 1: verify
== round 1: release
== round 1: offline
== round 1: stage
== round 1: online
== round 1: verify
== round 1: release
== round 1: offline
== round 1: unlink
== round 2: create
== round 2: online
== round 2: verify
== round 2: release
== round 2: offline
== round 2: stage
== round 2: online
== round 2: verify
== round 2: release
== round 2: offline
== round 2: unlink
== round 3: create
== round 3: online
== round 3: verify
== round 3: release
== round 3: offline
== round 3: stage
== round 3: online
== round 3: verify
== round 3: release
== round 3: offline
== round 3: unlink


@@ -0,0 +1,53 @@
== single block write
online: 1
offline: 0
st_blocks: 8
== single block overwrite
online: 1
offline: 0
st_blocks: 8
== append
online: 2
offline: 0
st_blocks: 16
== release
online: 0
offline: 2
st_blocks: 16
== duplicate release
online: 0
offline: 2
st_blocks: 16
== duplicate release past i_size
online: 0
offline: 2
st_blocks: 16
== stage
online: 2
offline: 0
st_blocks: 16
== duplicate stage
online: 2
offline: 0
st_blocks: 16
== larger file
online: 256
offline: 0
st_blocks: 2048
== partial truncate
online: 128
offline: 0
st_blocks: 1024
== single sparse block
online: 1
offline: 0
st_blocks: 8
== empty file
online: 0
offline: 0
st_blocks: 0
== non-regular file
online: 0
offline: 0
st_blocks: 0
== cleanup


@@ -0,0 +1,55 @@
== root inode updates flow back and forth
== stat of created file matches
== written file contents match
== overwritten file contents match
== appended file contents match
== fiemap matches after racey appends
== unlinked file isn't found
== symlink targets match
/mnt/test/test/basic-posix-consistency/file.targ
/mnt/test/test/basic-posix-consistency/file.targ
/mnt/test/test/basic-posix-consistency/file.targ2
/mnt/test/test/basic-posix-consistency/file.targ2
== new xattrs are visible
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="1"
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="1"
== modified xattrs are updated
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="2"
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="2"
== deleted xattrs
/mnt/test/test/basic-posix-consistency/file: user.xat: No such attribute
/mnt/test/test/basic-posix-consistency/file: user.xat: No such attribute
== readdir after modification
one
two
three
four
one
two
three
four
two
four
two
four
== can delete empty dir
== some easy rename cases
--- file between dirs
--- file within dir
--- dir within dir
--- overwrite file
--- can't overwrite non-empty dir
mv: cannot move /mnt/test/test/basic-posix-consistency/dir/c/clobber to /mnt/test/test/basic-posix-consistency/dir/a/dir: Directory not empty
--- can overwrite empty dir
== path resoluion
== inode indexes match after syncing existing
== inode indexes match after copying and syncing
== inode indexes match after removing and syncing


@@ -0,0 +1,52 @@
== create shared test file
== set and get xattrs between mount pairs while retrying
# file: /mnt/test/test/block-stale-reads/file
user.xat="1"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="2"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="3"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="4"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="5"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="6"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="7"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="8"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="9"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="10"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed



@@ -0,0 +1,4 @@
Run createmany in /mnt/test/test/createmany-parallel/0
Run createmany in /mnt/test/test/createmany-parallel/1
Run createmany in /mnt/test/test/createmany-parallel/2
Run createmany in /mnt/test/test/createmany-parallel/3


@@ -0,0 +1,3 @@
== measure initial createmany
== measure initial createmany
== measure two concurrent createmany runs


@@ -0,0 +1,2 @@
== create large directory with 1220608 files
== randomly renaming 5000 files


@@ -0,0 +1,2 @@
== repeated cross-mount alloc+free, totalling 2x free
== remove empty test file


@@ -0,0 +1,10 @@
== create per node dirs
== touch files on each node
== recreate the files
== turn the files into directories
== rename parent dirs
== rename parent dirs back
== create some hard links
== recreate one of the hard links
== delete the remaining hard link
== race to blow everything away



@@ -0,0 +1,4 @@
== create files and sync
== modify files
== mount and unmount
== verify files


@@ -0,0 +1,4 @@
== create per mount files
== time independent modification
== time concurrent independent modification
== time concurrent conflicting modification


@@ -0,0 +1,2 @@
=== setup files ===
=== ping-pong xattr ops ===


@@ -0,0 +1 @@
== race writing and index walking


@@ -0,0 +1,3 @@
== make test dir
== do enough stuff to make lock leaks visible
== make sure nothing has leaked


@@ -0,0 +1,2 @@
=== getcwd after lock revocation
trigger statfs_lock_purge armed: 1


@@ -0,0 +1,15 @@
=== setup test file ===
# file: /mnt/test/test/lock-shrink-consistency/dir/file
user.test="aaa"
=== commit dirty trans and revoke lock ===
trigger statfs_lock_purge armed: 1
trigger statfs_lock_purge after it fired: 0
=== change xattr on other mount ===
# file: /mnt/test/test/lock-shrink-consistency/dir/file
user.test="bbb"
=== verify new xattr under new lock on first mount ===
# file: /mnt/test/test/lock-shrink-consistency/dir/file
user.test="bbb"


@@ -0,0 +1,3 @@
== create per mount files
== 30s of racing random mount/umount
== mounting any unmounted

33
tests/golden/move-blocks Normal file

@@ -0,0 +1,33 @@
== build test files
== wrapped offsets should fail
ioctl failed on '/mnt/test/test/move-blocks/to': Value too large for defined data type (75)
scoutfs: move-blocks failed: Value too large for defined data type (75)
ioctl failed on '/mnt/test/test/move-blocks/to': Value too large for defined data type (75)
scoutfs: move-blocks failed: Value too large for defined data type (75)
== specifying same file fails
ioctl failed on '/mnt/test/test/move-blocks/hardlink': Invalid argument (22)
scoutfs: move-blocks failed: Invalid argument (22)
== specifying files in other file systems fails
ioctl failed on '/mnt/test/test/move-blocks/to': Invalid cross-device link (18)
scoutfs: move-blocks failed: Invalid cross-device link (18)
== offsets must be multiples of 4KB
ioctl failed on '/mnt/test/test/move-blocks/to': Invalid argument (22)
scoutfs: move-blocks failed: Invalid argument (22)
ioctl failed on '/mnt/test/test/move-blocks/to': Invalid argument (22)
scoutfs: move-blocks failed: Invalid argument (22)
ioctl failed on '/mnt/test/test/move-blocks/to': Invalid argument (22)
scoutfs: move-blocks failed: Invalid argument (22)
== can't move onto existing extent
ioctl failed on '/mnt/test/test/move-blocks/to': Invalid argument (22)
scoutfs: move-blocks failed: Invalid argument (22)
== can't move between files with offline extents
ioctl failed on '/mnt/test/test/move-blocks/to': No data available (61)
scoutfs: move-blocks failed: No data available (61)
ioctl failed on '/mnt/test/test/move-blocks/to': No data available (61)
scoutfs: move-blocks failed: No data available (61)
== basic moves work
== moving final partial block sets partial i_size
123
== moving updates inode fields
== moving blocks backwards works
== combine many files into one


@@ -0,0 +1,56 @@
== create files
== waiter shows up in ioctl
offline waiting should be empty:
0
offline waiting should now have one known entry:
== multiple waiters on same block listed once
offline waiting still has one known entry:
== different blocks show up
offline waiting now has two known entries:
== staging wakes everyone
offline waiting should be empty again:
0
== interruption does no harm
offline waiting should now have one known entry:
offline waiting should be empty again:
0
== EIO injection for waiting readers works
offline waiting should now have two known entries:
2
data_wait_err found 2 waiters.
offline waiting should now have 0 known entries:
0
dd: error reading /mnt/test/test/offline-extent-waiting/dir/file: Input/output error
0+0 records in
0+0 records out
dd: error reading /mnt/test/test/offline-extent-waiting/dir/file: Input/output error
0+0 records in
0+0 records out
offline waiting should be empty again:
0
== readahead while offline does no harm
== waiting on interesting blocks works
offline waiting is empty at block 0
0
offline waiting is empty at block 1
0
offline waiting is empty at block 128
0
offline waiting is empty at block 129
0
offline waiting is empty at block 254
0
offline waiting is empty at block 255
0
== contents match when staging blocks forward
== contents match when staging blocks backwards
== truncate to same size doesn't wait
offline wating should be empty:
0
== truncating does wait
truncate should be waiting for first block:
trunate should no longer be waiting:
0
== writing waits
should be waiting for write
== cleanup


@@ -0,0 +1,4 @@
== advance lock version by creating unrelated files
== create before file version
== verify before version, touch after version
== verify after version

31
tests/golden/setattr_more Normal file

@@ -0,0 +1,31 @@
== 0 data_version arg fails
setattr: data version must not be 0
Try `setattr --help' or `setattr --usage' for more information.
== args must specify size and offline
setattr: must provide size if using --offline option
Try `setattr --help' or `setattr --usage' for more information.
== only works on regular files
failed to open '/mnt/test/test/setattr_more/dir': Is a directory (21)
scoutfs: setattr failed: Is a directory (21)
setattr_more ioctl failed on '/mnt/test/test/setattr_more/char': Inappropriate ioctl for device (25)
scoutfs: setattr failed: Inappropriate ioctl for device (25)
== non-zero file size fails
setattr_more ioctl failed on '/mnt/test/test/setattr_more/file': Invalid argument (22)
scoutfs: setattr failed: Invalid argument (22)
== non-zero file data_version fails
setattr_more ioctl failed on '/mnt/test/test/setattr_more/file': Invalid argument (22)
scoutfs: setattr failed: Invalid argument (22)
== large size is set
578437695752307201
== large data_version is set
578437695752307201
== large ctime is set
1972-02-19 00:06:25.999999999 +0000
== large offline extents are created
Filesystem type is: 554f4353
File size of /mnt/test/test/setattr_more/file is 40988672 (10007 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 10006: 0.. 10006: 10007: unknown,eof
/mnt/test/test/setattr_more/file: 1 extent found
== correct offline extent length
976563


@@ -0,0 +1 @@
== interrupt waiting mount


@@ -0,0 +1,9 @@
== dirs shouldn't appear in data_seq queries
== two created files are present and come after each other
found first
found second
== unlinked entries must not be present
== dirty inodes can not be present
== changing metadata must increase meta seq
== changing contents must increase data seq
== make sure dirtying doesn't livelock walk


@@ -0,0 +1,146 @@
== simple whole file multi-block releasing
== release last block that straddles i_size
== release entire file past i_size
== releasing offline extents is fine
== 0 count is fine
== release past i_size is fine
== wrapped blocks fails
release ioctl failed: Invalid argument (22)
scoutfs: release failed: Invalid argument (22)
== releasing non-file fails
ioctl failed: Inappropriate ioctl for device (25)
release: must provide file version --data-version
Try `release --help' or `release --usage' for more information.
== releasing a non-scoutfs file fails
ioctl failed: Inappropriate ioctl for device (25)
release: must provide file version --data-version
Try `release --help' or `release --usage' for more information.
== releasing bad version fails
release: must provide file version --data-version
Try `release --help' or `release --usage' for more information.
== verify small release merging
0 0 0: (0 0 1) (1 101 4)
0 0 1: (0 0 2) (2 102 3)
0 0 2: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
0 0 3: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
0 0 4: (0 0 1) (1 101 3) (4 0 1)
0 1 0: (0 0 2) (2 102 3)
0 1 1: (0 0 2) (2 102 3)
0 1 2: (0 0 3) (3 103 2)
0 1 3: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
0 1 4: (0 0 2) (2 102 2) (4 0 1)
0 2 0: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
0 2 1: (0 0 3) (3 103 2)
0 2 2: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
0 2 3: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
0 2 4: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
0 3 0: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
0 3 1: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
0 3 2: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
0 3 3: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
0 3 4: (0 0 1) (1 101 2) (3 0 2)
0 4 0: (0 0 1) (1 101 3) (4 0 1)
0 4 1: (0 0 2) (2 102 2) (4 0 1)
0 4 2: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
0 4 3: (0 0 1) (1 101 2) (3 0 2)
0 4 4: (0 0 1) (1 101 3) (4 0 1)
1 0 0: (0 0 2) (2 102 3)
1 0 1: (0 0 2) (2 102 3)
1 0 2: (0 0 3) (3 103 2)
1 0 3: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
1 0 4: (0 0 2) (2 102 2) (4 0 1)
1 1 0: (0 0 2) (2 102 3)
1 1 1: (0 100 1) (1 0 1) (2 102 3)
1 1 2: (0 100 1) (1 0 2) (3 103 2)
1 1 3: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
1 1 4: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
1 2 0: (0 0 3) (3 103 2)
1 2 1: (0 100 1) (1 0 2) (3 103 2)
1 2 2: (0 100 1) (1 0 2) (3 103 2)
1 2 3: (0 100 1) (1 0 3) (4 104 1)
1 2 4: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
1 3 0: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
1 3 1: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
1 3 2: (0 100 1) (1 0 3) (4 104 1)
1 3 3: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
1 3 4: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
1 4 0: (0 0 2) (2 102 2) (4 0 1)
1 4 1: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
1 4 2: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
1 4 3: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
1 4 4: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
2 0 0: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
2 0 1: (0 0 3) (3 103 2)
2 0 2: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
2 0 3: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
2 0 4: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
2 1 0: (0 0 3) (3 103 2)
2 1 1: (0 100 1) (1 0 2) (3 103 2)
2 1 2: (0 100 1) (1 0 2) (3 103 2)
2 1 3: (0 100 1) (1 0 3) (4 104 1)
2 1 4: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
2 2 0: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
2 2 1: (0 100 1) (1 0 2) (3 103 2)
2 2 2: (0 100 2) (2 0 1) (3 103 2)
2 2 3: (0 100 2) (2 0 2) (4 104 1)
2 2 4: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
2 3 0: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
2 3 1: (0 100 1) (1 0 3) (4 104 1)
2 3 2: (0 100 2) (2 0 2) (4 104 1)
2 3 3: (0 100 2) (2 0 2) (4 104 1)
2 3 4: (0 100 2) (2 0 3)
2 4 0: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
2 4 1: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
2 4 2: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
2 4 3: (0 100 2) (2 0 3)
2 4 4: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
3 0 0: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
3 0 1: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
3 0 2: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
3 0 3: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
3 0 4: (0 0 1) (1 101 2) (3 0 2)
3 1 0: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
3 1 1: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
3 1 2: (0 100 1) (1 0 3) (4 104 1)
3 1 3: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
3 1 4: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
3 2 0: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
3 2 1: (0 100 1) (1 0 3) (4 104 1)
3 2 2: (0 100 2) (2 0 2) (4 104 1)
3 2 3: (0 100 2) (2 0 2) (4 104 1)
3 2 4: (0 100 2) (2 0 3)
3 3 0: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
3 3 1: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
3 3 2: (0 100 2) (2 0 2) (4 104 1)
3 3 3: (0 100 3) (3 0 1) (4 104 1)
3 3 4: (0 100 3) (3 0 2)
3 4 0: (0 0 1) (1 101 2) (3 0 2)
3 4 1: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
3 4 2: (0 100 2) (2 0 3)
3 4 3: (0 100 3) (3 0 2)
3 4 4: (0 100 3) (3 0 2)
4 0 0: (0 0 1) (1 101 3) (4 0 1)
4 0 1: (0 0 2) (2 102 2) (4 0 1)
4 0 2: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
4 0 3: (0 0 1) (1 101 2) (3 0 2)
4 0 4: (0 0 1) (1 101 3) (4 0 1)
4 1 0: (0 0 2) (2 102 2) (4 0 1)
4 1 1: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
4 1 2: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
4 1 3: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
4 1 4: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
4 2 0: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
4 2 1: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
4 2 2: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
4 2 3: (0 100 2) (2 0 3)
4 2 4: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
4 3 0: (0 0 1) (1 101 2) (3 0 2)
4 3 1: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
4 3 2: (0 100 2) (2 0 3)
4 3 3: (0 100 3) (3 0 2)
4 3 4: (0 100 3) (3 0 2)
4 4 0: (0 0 1) (1 101 3) (4 0 1)
4 4 1: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
4 4 2: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
4 4 3: (0 100 3) (3 0 2)
4 4 4: (0 100 4) (4 0 1)
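The table above reads as the extent map of a five-block file after releasing blocks i, j and k (the three leading indices): each "(logical value count)" tuple appears to be an extent, with a value of 0 for released (offline) blocks, online values following the 100 + logical-block pattern seen in the output, and adjacent compatible extents merged. As a purely illustrative sketch under that reading (not code from the test suite), something like the following reproduces individual rows:

/* Hypothetical sketch, not from the scoutfs test suite: recompute one
 * row of the release-merging table above.  Assumes a five-block file
 * whose online values are 100 + logical block and that a released
 * block's value becomes 0, as the output above suggests.
 */
#include <stdio.h>

#define NR_BLOCKS 5

static void print_extents(int i, int j, int k)
{
	unsigned long val[NR_BLOCKS];
	int b;

	for (b = 0; b < NR_BLOCKS; b++)
		val[b] = 100 + b;	/* online */
	val[i] = val[j] = val[k] = 0;	/* released: offline */

	printf("%d %d %d:", i, j, k);
	for (b = 0; b < NR_BLOCKS; ) {
		int start = b;
		int len = 1;

		/* merge adjacent offline blocks, and online blocks whose
		 * values remain contiguous */
		while (b + len < NR_BLOCKS &&
		       ((val[start] == 0 && val[b + len] == 0) ||
			(val[start] != 0 &&
			 val[b + len] == val[start] + len)))
			len++;

		printf(" (%d %lu %d)", start, val[start], len);
		b += len;
	}
	printf("\n");
}

int main(void)
{
	print_extents(0, 0, 0);	/* "0 0 0: (0 0 1) (1 101 4)" */
	print_extents(1, 2, 3);	/* "1 2 3: (0 100 1) (1 0 3) (4 104 1)" */
	return 0;
}

Run as-is, the two calls in main() print lines matching the "0 0 0" and "1 2 3" rows of the table above.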

@@ -0,0 +1,23 @@
== create/release/stage single block file
== create/release/stage larger file
== multiple release,drop_cache,stage cycles
== release+stage shouldn't change stat, data seq or vers
== stage does change meta_seq
== can't use stage to extend online file
stage: must provide file version with --data-version
Try `stage --help' or `stage --usage' for more information.
== wrapped region fails
stage returned -1, not 4096: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
== non-block aligned offset fails
stage returned -1, not 4095: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
== non-block aligned len within block fails
stage returned -1, not 1024: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
== partial final block that writes to i_size does work
== zero length stage doesn't bring blocks online
== stage of non-regular file fails
ioctl failed: Inappropriate ioctl for device (25)
stage: must provide file version with --data-version
Try `stage --help' or `stage --usage' for more information.

@@ -0,0 +1,18 @@
=== XATTR_ flag combinations
dumb_setxattr -p /mnt/test/test/simple-xattr-unit/file -n user.test -v val -c -r
returned -1 errno 22 (Invalid argument)
dumb_setxattr -p /mnt/test/test/simple-xattr-unit/file -n user.test -v val -r
returned -1 errno 61 (No data available)
dumb_setxattr -p /mnt/test/test/simple-xattr-unit/file -n user.test -v val -c
returned 0
dumb_setxattr -p /mnt/test/test/simple-xattr-unit/file -n user.test -v val -c
returned -1 errno 17 (File exists)
dumb_setxattr -p /mnt/test/test/simple-xattr-unit/file -n user.test -v val -r
returned 0
=== bad lengths
setfattr: /mnt/test/test/simple-xattr-unit/file: Operation not supported
setfattr: /mnt/test/test/simple-xattr-unit/file: Numerical result out of range
setfattr: /mnt/test/test/simple-xattr-unit/file: Numerical result out of range
setfattr: /mnt/test/test/simple-xattr-unit/file: Argument list too long
=== good length boundaries
=== 500 random lengths
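The "=== XATTR_ flag combinations" results above follow the usual setxattr(2) flag semantics: both flags together are rejected with EINVAL, XATTR_REPLACE on a missing attribute fails with ENODATA, and XATTR_CREATE on an existing one fails with EEXIST. As a rough, hypothetical stand-in for the test's dumb_setxattr helper (whose source isn't shown here), a minimal sketch might look like:

/* Hypothetical stand-in for the dumb_setxattr helper used above; the
 * "user.test" name and "val" value only mirror the test output, and
 * this is not the test suite's source.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/xattr.h>

static void try_set(const char *path, int flags)
{
	if (setxattr(path, "user.test", "val", 3, flags) == 0)
		printf("returned 0\n");
	else
		printf("returned -1 errno %d (%s)\n", errno, strerror(errno));
}

int main(int argc, char **argv)
{
	if (argc < 2)
		return 1;

	try_set(argv[1], XATTR_CREATE | XATTR_REPLACE); /* rejected: EINVAL above */
	try_set(argv[1], XATTR_REPLACE);                /* missing xattr: ENODATA */
	try_set(argv[1], XATTR_CREATE);                 /* created: returns 0 */
	try_set(argv[1], XATTR_CREATE);                 /* already exists: EEXIST */
	try_set(argv[1], XATTR_REPLACE);                /* replaced: returns 0 */
	return 0;
}

Pointed at a fresh file, the five calls produce the same sequence of return codes and errnos shown in the flag-combination output above.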

@@ -0,0 +1,13 @@
== create new xattrs
== update existing xattr
== remove an xattr
== remove xattr with files
== create entries in current log
== delete small fraction
== remove files
== create entries that exceed one log
== delete fractions in phases
== remove files
== create entries that exceed search entry limit
== delete half
== entirely remove third batch

Some files were not shown because too many files have changed in this diff.