Compare commits

..

1 Commits

Author: Zach Brown
SHA1: 8ee41caa24
Date: 2021-03-31 10:55:18 -07:00

scoutfs: introduce CodingStyle.txt

Add a coding style document that tries to record the conventions that
the project uses.  It seemed more appropriate to put it up in the -kmod
git repo context rather than in src/ which would end up in fs/scoutfs/
upstream.

Signed-off-by: Zach Brown <zab@versity.com>
147 changed files with 3637 additions and 16332 deletions

CodingStyle.txt (new file, 82 lines)

@@ -0,0 +1,82 @@
We try to maintain a consistent coding style across the project. It's
admittedly arbitrary and starts with upstream's
Documentation/CodingStyle. Conventions are added here as they come up
during review. We'll demonstrate each stylistic preference with a diff
snippet.
== Try to make one exit point for reasonably long functions
{
- void *a;
- void *b;
+ void *a = NULL;
+ void *b = NULL;
+ int ret;
a = kalloc();
- if (!a)
- return 1;
+ if (!a) {
+ ret = 1;
+ goto out;
+ }
b = kalloc();
if (!b) {
- kfree(a);
- return 2;
+ ret = 2;
+ goto out;
}
- return 3
+ ret = 3;
+out:
+ kfree(a);
+ kfree(b);
+ return ret;
}
The idea is to initialize all state at the top of the function, modify
it throughout, and clean it all up at the end. Having one exit point
also gives us a place to add tracing of function exit.
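For reference, here is a fuller sketch of the same pattern; struct foo,
struct bar, and example_setup() are hypothetical names, with kzalloc()
and kfree() standing in for whatever setup and teardown is needed.

static int example_setup(struct foo **foo_ret, struct bar **bar_ret)
{
        struct foo *foo = NULL;
        struct bar *bar = NULL;
        int ret;

        foo = kzalloc(sizeof(*foo), GFP_KERNEL);
        if (!foo) {
                ret = -ENOMEM;
                goto out;
        }

        bar = kzalloc(sizeof(*bar), GFP_KERNEL);
        if (!bar) {
                ret = -ENOMEM;
                goto out;
        }

        /* success: hand ownership to the caller, nothing left to free */
        *foo_ret = foo;
        *bar_ret = bar;
        foo = NULL;
        bar = NULL;
        ret = 0;
out:
        kfree(foo);
        kfree(bar);
        return ret;
}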
== Multiple declarations on a line
- int i, j;
+ int i;
+ int j;
Declare function variables one per line. The verbose declarations
create pressure to think about excessive stack use or over-long
functions, make initializers clear, and leave room for comments.
== Balance braces
- if (IS_ERR(super_block))
+ if (IS_ERR(super_block)) {
return PTR_ERR(super_block);
- else {
+ } else {
*super_res = *super_block;
kfree(super_block);
return 0;
}
*nervous twitch*
== Cute variable definition waterfalls
+ struct block_device *meta_bdev;
struct scoutfs_sb_info *sbi;
struct mount_options opts;
- struct block_device *meta_bdev;
struct inode *inode;
This isn't strictly necessary, but it's nice to try to arrange the
declarations in a pretty descending order of line length. It often has
the accidental effect of sorting definitions by decreasing complexity. I
tend to group types when the name lengths are pretty close, even if
they're not strictly sorted, so that all the ints, u64s, keys, etc. are
all together.

README.md (133 changed lines)

@@ -1,24 +1,135 @@
# Introduction
scoutfs is a clustered in-kernel Linux filesystem designed to support
large archival systems. It features additional interfaces and metadata
so that archive agents can perform their maintenance workflows without
walking all the files in the namespace. Its cluster support lets
deployments add nodes to satisfy archival tier bandwidth targets.
scoutfs is a clustered in-kernel Linux filesystem designed and built
from the ground up to support large archival systems.
The design goal is to reach file populations in the trillions, with the
archival bandwidth to match, while remaining operational and responsive.
Its key differentiating features are:
Highlights of the design and implementation include:
- Integrated consistent indexing accelerates archival maintenance operations
- Commit logs allow nodes to write concurrently without contention
It meets best-of-breed expectations:
* Fully consistent POSIX semantics between nodes
* Rich metadata to ensure the integrity of metadata references
* Atomic transactions to maintain consistent persistent structures
* Integrated archival metadata replaces syncing to external databases
* Dynamic separation of resources lets nodes write in parallel
* 64-bit throughout; no limits on file or directory sizes or counts
* First class kernel implementation for high performance and low latency
* Open GPLv2 implementation
Learn more in the [white paper](https://docs.wixstatic.com/ugd/aaa89b_88a5cc84be0b4d1a90f60d8900834d28.pdf).
# Current Status
**Alpha Open Source Development**
scoutfs is under heavy active development. We're developing it in the
open to give the community an opportunity to affect the design and
implementation.
The core architectural design elements are in place. Much surrounding
functionality hasn't been implemented. It's appropriate for early
adopters and interested developers, not for production use.
In that vein, expect significant incompatible changes to both the network
message format and the persistent structures. Since the format hash-checking
has now been removed in preparation for release, running mkfs again is
strongly recommended if there is any doubt.
The current kernel module is developed against the RHEL/CentOS 7.x
kernel to minimize the friction of developing and testing with partners'
existing infrastructure. Once we're happy with the design we'll shift
development to the upstream kernel while maintaining distro
compatibility branches.
# Community Mailing List
Please join us on the open scoutfs-devel@scoutfs.org [mailing list
hosted on Google Groups](https://groups.google.com/a/scoutfs.org/forum/#!forum/scoutfs-devel)
for all discussion of scoutfs.
# Quick Start
**The following is a very rough example of the procedure to get up and
running; experience will be needed to fill in the gaps. We're happy to
help on the mailing list.**
The requirements for running scoutfs on a small cluster are:
1. One or more nodes running x86-64 CentOS/RHEL 7.4 (or 7.3)
2. Access to two shared block devices
3. IPv4 connectivity between the nodes
The steps for getting scoutfs mounted and operational are:
1. Get the kernel module running on the nodes
2. Make a new filesystem on the devices with the userspace utilities
3. Mount the devices on all the nodes
In this example we use three nodes. The names of the block devices are
the same on all the nodes. Two of the nodes will be quorum members. A
majority of quorum members must be mounted to elect a leader to run a
server that all the mounts connect to. Note that with two quorum members
the majority is one (each member by itself), so split-brain elections are
possible but so unlikely that it's fine for a demonstration.
1. Get the Kernel Module and Userspace Binaries
* Either use snapshot RPMs built from git by Versity:
```shell
rpm -i https://scoutfs.s3-us-west-2.amazonaws.com/scoutfs-repo-0.0.1-1.el7_4.noarch.rpm
yum install scoutfs-utils kmod-scoutfs
```
* Or use the binaries built from checked out git repositories:
```shell
yum install kernel-devel
git clone git@github.com:versity/scoutfs.git
make -C scoutfs
modprobe libcrc32c
insmod scoutfs/kmod/src/scoutfs.ko
alias scoutfs=$PWD/scoutfs/utils/src/scoutfs
```
2. Make a New Filesystem (**destroys contents**)
We specify quorum slots with the address of each quorum member node,
followed by the metadata device and the data device.
```shell
scoutfs mkfs -Q 0,$NODE0_ADDR,12345 -Q 1,$NODE1_ADDR,12345 /dev/meta_dev /dev/data_dev
```
3. Mount the Filesystem
First, mount each of the quorum nodes so that they can elect and
start a server for the remaining node to connect to. The slot numbers
were specified with the leading "0,..." and "1,..." in the mkfs options
above.
```shell
mount -t scoutfs -o quorum_slot_nr=$SLOT_NR,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
Then mount the remaining node which can now connect to the running server.
```shell
mount -t scoutfs -o metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
4. For Kicks, Observe the Metadata Change Index
The `meta_seq` index tracks the inodes that are changed in each
transaction.
```shell
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/two; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
```
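When you're done experimenting, the sketch below shows one way to check
and tear down the mounts with standard tools; nothing here is
scoutfs-specific, and the module removal only applies if it was loaded
by hand with insmod.
```shell
# confirm the scoutfs mounts
grep scoutfs /proc/mounts
# unmount on every node
umount /mnt/scoutfs
# remove the hand-loaded module, if applicable
rmmod scoutfs
```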


@@ -1,116 +0,0 @@
Versity ScoutFS Release Notes
=============================
---
v1.5
\
*Jun 21, 2022*
* **Fix persistent error during server startup**
\
Fixed a case where the server would always hit a persistent error on
startup, preventing the system from mounting. This required a rare
but valid state across the clients.
* **Fix a client hang that would lead to fencing**
\
The client module's use of in-kernel networking was missing an
annotation, which could lead to communication hanging. The server would
fence the client when it stopped communicating. This could be identified
by the server fencing a client after it disconnected with no attempt by
the client to reconnect.
---
v1.4
\
*May 6, 2022*
* **Fix possible client crash during server failover**
\
Fixed a narrow window during server failover and lock recovery that
could cause a client mount to believe that it had an inconsistent item
cache and panic. This required very specific lock state and messaging
patterns between multiple mounts and multiple servers which made it
unlikely to occur in the field.
---
v1.3
\
*Apr 7, 2022*
* **Fix rare server instability under heavy load**
\
Fixed a case of server instability under heavy load due to concurrent
work fully exhausting metadata block allocation pools reserved for a
single server transaction. This would cause a brief interruption as the
server shut down and the next server started up and made progress as
pending work was retried.
* **Fix slow fencing preventing server startup**
\
If a server had to process many fence requests with a slow fencing
mechanism it could be interrupted before it finished. The server
now makes sure heartbeat messages are sent while it is making progress
on fencing requests so that other quorum members don't interrupt the
process.
* **Performance improvement in getxattr and setxattr**
\
Kernel allocation patterns in the getxattr and setxattr
implementations were causing significant contention between CPUs. Their
allocation strategy was changed so that concurrent tasks can call these
xattr methods without degrading performance.
---
v1.2
\
*Mar 14, 2022*
* **Fix deadlock between fallocate() and read() system calls**
\
Fixed a lock inversion that could cause two tasks to deadlock if they
performed fallocate() and read() on a file at the same time. The
deadlock was uninterruptible so the machine needed to be rebooted. This
was relatively rare as fallocate() is usually used to prepare files
before they're used.
* **Fix instability from heavy file deletion workloads**
\
Fixed rare circumstances under which background file deletion cleanup
tasks could try to delete a file while it is being deleted by another
task. Heavy load across multiple nodes, either many files being deleted
or large files being deleted, increased the chances of this happening.
Heavy staging could cause this problem because staging can create many
internal temporary files that need to be deleted.
---
v1.1
\
*Feb 4, 2022*
* **Add scoutfs(1) change-quorum-config command**
\
Add a change-quorum-config command to scoutfs(1) to change the quorum
configuration stored in the metadata device while the file system is
unmounted. This can be used to change the mounts that will
participate in quorum and the IP addresses they use.
* **Fix Rare Risk of Item Cache Corruption**
\
Code review found a rare potential source of item cache corruption.
If this happened it would look as though deleted parts of the filesystem
returned, but only at the time they were deleted. Old deleted items are
not affected. This problem only affected the item cache, never
persistent storage. Unmounting and remounting would drop the bad item
cache and resync it with the correct persistent data.
---
v1.0
\
*Nov 8, 2021*
* **Initial Release**
\
Version 1.0 marks the first GA release.


@@ -18,7 +18,6 @@ scoutfs-y += \
dir.o \
export.o \
ext.o \
fence.o \
file.o \
forest.o \
inode.o \
@@ -28,11 +27,9 @@ scoutfs-y += \
lock_server.o \
msg.o \
net.o \
omap.o \
options.o \
per_task.o \
quorum.o \
recov.o \
scoutfs_trace.o \
server.o \
sort_priv.o \
@@ -43,7 +40,6 @@ scoutfs-y += \
trans.o \
triggers.o \
tseq.o \
volopt.o \
xattr.o
#


@@ -29,8 +29,8 @@
* The core allocator uses extent items in btrees rooted in the super.
* Each free extent is stored in two items. The first item is indexed
* by block location and is used to merge adjacent extents when freeing.
* The second item is indexed by the order of the length and is used to
* find large extents to allocate from.
* The second item is indexed by length and is used to find large
* extents to allocate from.
*
* Free extent always consumes the front of the largest extent. This
* attempts to discourage fragmentation by giving smaller freed extents
@@ -66,68 +66,26 @@
* blocks to modify the next blocks, and swaps them at each transaction.
*/
/*
* Return the order of the length of a free extent, which we define as
* floor(log_8(len)): 0..7 = 0, 8..63 = 1, etc.
*/
static u64 free_extent_order(u64 len)
{
return (fls64(len | 1) - 1) / 3;
}
/*
* The smallest (non-zero) length that will be mapped to the same order
* as the given length.
*/
static u64 smallest_order_length(u64 len)
{
return 1ULL << (free_extent_order(len) * 3);
}
/*
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
*/
static u32 extent_mod_blocks(u32 height)
{
return ((1 + height) * 2) * 3;
}
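/*
* Illustrative example: a tree of height 3 budgets
* ((1 + 3) * 2) * 3 = 24 blocks per extent modification.
*/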
/*
* Free extents don't have flags and are stored in two indexes sorted by
* block location and by length order, largest first. The location key
* field is set to the final block in the extent so that we can find
* intersections by calling _next() with the start of the range we're
* searching for.
*
* We never store 0 length extents but we do build keys for searching
* the order index from 0,0 without having to map it to a real extent.
* block location and by length, largest first. The block location key
* is set to the final block in the extent so that we can find
* intersections by calling _next() iterators starting with the block
* we're searching for.
*/
static void init_ext_key(struct scoutfs_key *key, int zone, u64 start, u64 len)
static void init_ext_key(struct scoutfs_key *key, int type, u64 start, u64 len)
{
*key = (struct scoutfs_key) {
.sk_zone = zone,
.sk_zone = SCOUTFS_FREE_EXTENT_ZONE,
.sk_type = type,
};
if (len == 0) {
/* we only use 0 len extents for magic 0,0 order lookups */
WARN_ON_ONCE(zone != SCOUTFS_FREE_EXTENT_ORDER_ZONE || start != 0);
return;
}
if (zone == SCOUTFS_FREE_EXTENT_BLKNO_ZONE) {
if (type == SCOUTFS_FREE_EXTENT_BLKNO_TYPE) {
key->skfb_end = cpu_to_le64(start + len - 1);
key->skfb_len = cpu_to_le64(len);
} else if (zone == SCOUTFS_FREE_EXTENT_ORDER_ZONE) {
key->skfo_revord = cpu_to_le64(U64_MAX - free_extent_order(len));
key->skfo_end = cpu_to_le64(start + len - 1);
key->skfo_len = cpu_to_le64(len);
} else if (type == SCOUTFS_FREE_EXTENT_LEN_TYPE) {
key->skfl_neglen = cpu_to_le64(-len);
key->skfl_blkno = cpu_to_le64(start);
} else {
BUG();
}
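/*
* Illustrative example: an extent with start 100 and len 50 gets a
* blkno-sorted key with skfb_end = 149 and skfb_len = 50, and a
* len-sorted key with skfl_neglen = -50 and skfl_blkno = 100.
*/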
@@ -135,27 +93,23 @@ static void init_ext_key(struct scoutfs_key *key, int zone, u64 start, u64 len)
static void ext_from_key(struct scoutfs_extent *ext, struct scoutfs_key *key)
{
if (key->sk_zone == SCOUTFS_FREE_EXTENT_BLKNO_ZONE) {
if (key->sk_type == SCOUTFS_FREE_EXTENT_BLKNO_TYPE) {
ext->start = le64_to_cpu(key->skfb_end) -
le64_to_cpu(key->skfb_len) + 1;
ext->len = le64_to_cpu(key->skfb_len);
} else {
ext->start = le64_to_cpu(key->skfo_end) -
le64_to_cpu(key->skfo_len) + 1;
ext->len = le64_to_cpu(key->skfo_len);
ext->start = le64_to_cpu(key->skfl_blkno);
ext->len = -le64_to_cpu(key->skfl_neglen);
}
ext->map = 0;
ext->flags = 0;
/* we never store 0 length extents */
WARN_ON_ONCE(ext->len == 0);
}
struct alloc_ext_args {
struct scoutfs_alloc *alloc;
struct scoutfs_block_writer *wri;
struct scoutfs_alloc_root *root;
int zone;
int type;
};
static int alloc_ext_next(struct super_block *sb, void *arg,
@@ -166,13 +120,13 @@ static int alloc_ext_next(struct super_block *sb, void *arg,
struct scoutfs_key key;
int ret;
init_ext_key(&key, args->zone, start, len);
init_ext_key(&key, args->type, start, len);
ret = scoutfs_btree_next(sb, &args->root->root, &key, &iref);
if (ret == 0) {
if (iref.val_len != 0)
ret = -EIO;
else if (iref.key->sk_zone != args->zone)
else if (iref.key->sk_type != args->type)
ret = -ENOENT;
else
ext_from_key(ext, iref.key);
@@ -185,19 +139,19 @@ static int alloc_ext_next(struct super_block *sb, void *arg,
return ret;
}
static int other_zone(int zone)
static int other_type(int type)
{
if (zone == SCOUTFS_FREE_EXTENT_BLKNO_ZONE)
return SCOUTFS_FREE_EXTENT_ORDER_ZONE;
else if (zone == SCOUTFS_FREE_EXTENT_ORDER_ZONE)
return SCOUTFS_FREE_EXTENT_BLKNO_ZONE;
if (type == SCOUTFS_FREE_EXTENT_BLKNO_TYPE)
return SCOUTFS_FREE_EXTENT_LEN_TYPE;
else if (type == SCOUTFS_FREE_EXTENT_LEN_TYPE)
return SCOUTFS_FREE_EXTENT_BLKNO_TYPE;
else
BUG();
}
/*
* Insert an extent along with its matching item which is indexed by
* opposite of its order or blkno. If we succeed we update the root's
* opposite of its len or blkno. If we succeed we update the root's
* record of the total length of all the stored extents.
*/
static int alloc_ext_insert(struct super_block *sb, void *arg,
@@ -213,8 +167,8 @@ static int alloc_ext_insert(struct super_block *sb, void *arg,
if (WARN_ON_ONCE(map || flags))
return -EINVAL;
init_ext_key(&key, args->zone, start, len);
init_ext_key(&other, other_zone(args->zone), start, len);
init_ext_key(&key, args->type, start, len);
init_ext_key(&other, other_type(args->type), start, len);
ret = scoutfs_btree_insert(sb, args->alloc, args->wri,
&args->root->root, &key, NULL, 0);
@@ -242,8 +196,8 @@ static int alloc_ext_remove(struct super_block *sb, void *arg,
int ret;
int err;
init_ext_key(&key, args->zone, start, len);
init_ext_key(&other, other_zone(args->zone), start, len);
init_ext_key(&key, args->type, start, len);
init_ext_key(&other, other_type(args->type), start, len);
ret = scoutfs_btree_delete(sb, args->alloc, args->wri,
&args->root->root, &key);
@@ -267,7 +221,6 @@ static struct scoutfs_ext_ops alloc_ext_ops = {
.next = alloc_ext_next,
.insert = alloc_ext_insert,
.remove = alloc_ext_remove,
.insert_overlap_warn = true,
};
static bool invalid_extent(u64 start, u64 end, u64 first, u64 last)
@@ -277,17 +230,20 @@ static bool invalid_extent(u64 start, u64 end, u64 first, u64 last)
static bool invalid_meta_blkno(struct super_block *sb, u64 blkno)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
u64 last_meta = (i_size_read(sbi->meta_bdev->bd_inode) >> SCOUTFS_BLOCK_LG_SHIFT) - 1;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
return invalid_extent(blkno, blkno, SCOUTFS_META_DEV_START_BLKNO, last_meta);
return invalid_extent(blkno, blkno,
le64_to_cpu(super->first_meta_blkno),
le64_to_cpu(super->last_meta_blkno));
}
static bool invalid_data_extent(struct super_block *sb, u64 start, u64 len)
{
u64 last_data = (i_size_read(sb->s_bdev->bd_inode) >> SCOUTFS_BLOCK_SM_SHIFT) - 1;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
return invalid_extent(start, start + len - 1, SCOUTFS_DATA_DEV_START_BLKNO, last_data);
return invalid_extent(start, start + len - 1,
le64_to_cpu(super->first_data_blkno),
le64_to_cpu(super->last_data_blkno));
}
void scoutfs_alloc_init(struct scoutfs_alloc *alloc,
@@ -663,7 +619,7 @@ int scoutfs_dalloc_return_cached(struct super_block *sb,
.alloc = alloc,
.wri = wri,
.root = &dalloc->root,
.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE,
.type = SCOUTFS_FREE_EXTENT_BLKNO_TYPE,
};
int ret = 0;
@@ -689,14 +645,6 @@ int scoutfs_dalloc_return_cached(struct super_block *sb,
*
* Unlike meta allocations, the caller is expected to serialize
* allocations from the root.
*
* ENOBUFS is returned if the data allocator ran out of space and we can
* probably refill it from the server. The caller is expected to back
* out, commit the transaction, and try again.
*
* ENOSPC is returned if the data allocator ran out of space but we have
* a flag from the server telling us that there's no more space
* available. This is a hard error and should be returned.
*/
int scoutfs_alloc_data(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
@@ -707,7 +655,7 @@ int scoutfs_alloc_data(struct super_block *sb, struct scoutfs_alloc *alloc,
.alloc = alloc,
.wri = wri,
.root = &dalloc->root,
.zone = SCOUTFS_FREE_EXTENT_ORDER_ZONE,
.type = SCOUTFS_FREE_EXTENT_LEN_TYPE,
};
struct scoutfs_extent ext;
u64 len;
@@ -745,13 +693,13 @@ int scoutfs_alloc_data(struct super_block *sb, struct scoutfs_alloc *alloc,
ret = 0;
out:
if (ret < 0) {
if (ret == -ENOENT) {
if (le32_to_cpu(dalloc->root.flags) & SCOUTFS_ALLOC_FLAG_LOW)
ret = -ENOSPC;
else
ret = -ENOBUFS;
}
/*
* Special retval meaning there wasn't space to alloc from
* this txn. Doesn't mean filesystem is completely full.
* Maybe upper layers want to try again.
*/
if (ret == -ENOENT)
ret = -ENOBUFS;
*blkno_ret = 0;
*count_ret = 0;
} else {
@@ -780,7 +728,7 @@ int scoutfs_free_data(struct super_block *sb, struct scoutfs_alloc *alloc,
.alloc = alloc,
.wri = wri,
.root = root,
.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE,
.type = SCOUTFS_FREE_EXTENT_BLKNO_TYPE,
};
int ret;
@@ -793,95 +741,6 @@ int scoutfs_free_data(struct super_block *sb, struct scoutfs_alloc *alloc,
return ret;
}
/*
* Return the first zone bit that the extent intersects with.
*/
static int first_extent_zone(struct scoutfs_extent *ext, __le64 *zones, u64 zone_blocks)
{
int first;
int last;
int nr;
first = div64_u64(ext->start, zone_blocks);
last = div64_u64(ext->start + ext->len - 1, zone_blocks);
nr = find_next_bit_le(zones, SCOUTFS_DATA_ALLOC_MAX_ZONES, first);
if (nr <= last)
return nr;
return SCOUTFS_DATA_ALLOC_MAX_ZONES;
}
/*
* Find an extent in specific zones to satisfy an allocation. We use
* the order items to search for the largest extent that intersects with
* the zones whose bits are set in the caller's bitmap.
*/
static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *root,
__le64 *zones, u64 zone_blocks,
struct scoutfs_extent *found_ret, u64 count,
struct scoutfs_extent *ext_ret)
{
struct alloc_ext_args args = {
.root = root,
.zone = SCOUTFS_FREE_EXTENT_ORDER_ZONE,
};
struct scoutfs_extent found;
struct scoutfs_extent ext;
u64 start;
u64 len;
int nr;
int ret;
/* don't bother when there are no bits set */
if (find_next_bit_le(zones, SCOUTFS_DATA_ALLOC_MAX_ZONES, 0) ==
SCOUTFS_DATA_ALLOC_MAX_ZONES)
return -ENOENT;
/* start searching for largest extent from the first zone */
len = smallest_order_length(SCOUTFS_BLOCK_SM_MAX);
nr = 0;
for (;;) {
/* search for extents in the next zone at our order */
nr = find_next_bit_le(zones, SCOUTFS_DATA_ALLOC_MAX_ZONES, nr);
if (nr >= SCOUTFS_DATA_ALLOC_MAX_ZONES) {
/* wrap down to next smaller order if we run out of bits */
len >>= 3;
if (len == 0) {
ret = -ENOENT;
break;
}
nr = find_next_bit_le(zones, SCOUTFS_DATA_ALLOC_MAX_ZONES, 0);
}
start = (u64)nr * zone_blocks;
ret = scoutfs_ext_next(sb, &alloc_ext_ops, &args, start, len, &found);
if (ret < 0)
break;
/* see if the next extent intersects any zones */
nr = first_extent_zone(&found, zones, zone_blocks);
if (nr < SCOUTFS_DATA_ALLOC_MAX_ZONES) {
start = (u64)nr * zone_blocks;
ext.start = max(start, found.start);
ext.len = min(count, found.start + found.len - ext.start);
*found_ret = found;
*ext_ret = ext;
ret = 0;
break;
}
/* continue searching past extent */
nr = div64_u64(found.start + found.len - 1, zone_blocks) + 1;
len = smallest_order_length(found.len);
}
return ret;
}
/*
* Move extent items adding up to the requested total length from the
@@ -892,19 +751,6 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
* -ENOENT is returned if we run out of extents in the source tree
* before moving the total.
*
* If meta_reserved is non-zero then -EINPROGRESS can be returned if the
* current meta allocator's avail blocks or room for freed blocks would
* have fallen under the reserved amount. The trees could have been
* successfully dirtied in this case but the number of blocks moved is
* not returned. The caller is expected to deal with the partial
* progress by committing the dirty trees and examining the resulting
* modified trees to see if they need to continue moving extents.
*
* The caller can specify that extents in the source tree should first
* be found based on their zone bitmaps. We'll first try to find
* extents in the exclusive zones, then vacant zones, and then we'll
* fall back to normal allocation that ignores zones.
*
* This first pass is not optimal because it performs full btree walks
* per extent. We could optimize this with more clever btree item
* manipulation functions which can iterate through src and dst blocks
@@ -913,85 +759,32 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved)
struct scoutfs_alloc_root *src, u64 total)
{
struct alloc_ext_args args = {
.alloc = alloc,
.wri = wri,
};
struct scoutfs_extent found;
struct scoutfs_extent ext;
u64 moved = 0;
u64 count;
int ret = 0;
int err;
if (zone_blocks == 0) {
exclusive = NULL;
vacant = NULL;
}
while (moved < total) {
count = total - moved;
if (exclusive) {
/* first try to find extents in our exclusive zones */
ret = find_zone_extent(sb, src, exclusive, zone_blocks,
&found, count, &ext);
if (ret == -ENOENT) {
exclusive = NULL;
continue;
}
} else if (vacant) {
/* then try to find extents in vacant zones */
ret = find_zone_extent(sb, src, vacant, zone_blocks,
&found, count, &ext);
if (ret == -ENOENT) {
vacant = NULL;
continue;
}
} else {
/* otherwise fall back to finding extents anywhere */
args.root = src;
args.zone = SCOUTFS_FREE_EXTENT_ORDER_ZONE;
ret = scoutfs_ext_next(sb, &alloc_ext_ops, &args, 0, 0, &found);
if (ret == 0) {
ext.start = found.start;
ext.len = min(count, found.len);
}
}
if (ret < 0)
break;
if (meta_reserved != 0 &&
scoutfs_alloc_meta_low(sb, alloc, meta_reserved +
extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
ret = -EINPROGRESS;
break;
}
/* searching set start/len, finish initializing alloced extent */
ext.map = found.map ? ext.start - found.start + found.map : 0;
ext.flags = found.flags;
/* remove the allocation from the found extent */
args.root = src;
args.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE;
ret = scoutfs_ext_remove(sb, &alloc_ext_ops, &args, ext.start, ext.len);
args.type = SCOUTFS_FREE_EXTENT_LEN_TYPE;
ret = scoutfs_ext_alloc(sb, &alloc_ext_ops, &args,
0, 0, total - moved, &ext);
if (ret < 0)
break;
/* insert the allocated extent into the dest */
args.root = dst;
args.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE;
args.type = SCOUTFS_FREE_EXTENT_BLKNO_TYPE;
ret = scoutfs_ext_insert(sb, &alloc_ext_ops, &args, ext.start,
ext.len, ext.map, ext.flags);
if (ret < 0) {
/* and put it back in src if insertion failed */
args.root = src;
args.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE;
args.type = SCOUTFS_FREE_EXTENT_BLKNO_TYPE;
err = scoutfs_ext_insert(sb, &alloc_ext_ops, &args,
ext.start, ext.len, ext.map,
ext.flags);
@@ -1001,8 +794,6 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
moved += ext.len;
scoutfs_inc_counter(sb, alloc_moved_extent);
trace_scoutfs_alloc_move_extent(sb, &ext);
}
scoutfs_inc_counter(sb, alloc_move);
@@ -1011,39 +802,6 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
return ret;
}
/*
* Add new free space to an allocator. _ext_insert will make sure that it doesn't
* overlap with any existing extents. This is done by the server in a transaction that
* also updates total_*_blocks in the super so we don't verify.
*/
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len)
{
struct alloc_ext_args args = {
.alloc = alloc,
.wri = wri,
.root = root,
.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE,
};
return scoutfs_ext_insert(sb, &alloc_ext_ops, &args, start, len, 0, 0);
}
int scoutfs_alloc_remove(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len)
{
struct alloc_ext_args args = {
.alloc = alloc,
.wri = wri,
.root = root,
.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE,
};
return scoutfs_ext_remove(sb, &alloc_ext_ops, &args, start, len);
}
/*
* We only trim one block, instead of looping trimming all, because the
* caller is assuming that we do a fixed amount of work when they check
@@ -1090,22 +848,18 @@ out:
}
/*
* True if the allocator has enough blocks in the avail list and space
* in the freed list to be able to perform the callers operations. If
* false the caller should back off and return partial progress rather
* than completely exhausting the avail list or overflowing the freed
* list.
* True if the allocator has enough free blocks to cow (alloc and free)
* a list block and all the btree blocks that store extent items.
*
* The caller tells us how many extents they're about to modify and how
* many other additional blocks they may cow manually. And finally, the
* caller could be the first to dirty the avail and freed blocks in the
* allocator,
* At most, an extent operation can dirty down three paths of the tree
* to modify a blkno item and two distant len items. We can grow and
* split the root, and then those three paths could share blocks but each
* modify two leaf blocks.
*/
static bool list_has_blocks(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *root, u32 extents, u32 addl_blocks)
static bool list_can_cow(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *root)
{
u32 tree_blocks = extent_mod_blocks(root->root.height) * extents;
u32 most = 1 + tree_blocks + addl_blocks;
u32 most = 1 + (1 + 1 + (3 * (1 - root->root.height + 1)));
if (le32_to_cpu(alloc->avail.first_nr) < most) {
scoutfs_inc_counter(sb, alloc_list_avail_lo);
@@ -1147,7 +901,7 @@ int scoutfs_alloc_fill_list(struct super_block *sb,
.alloc = alloc,
.wri = wri,
.root = root,
.zone = SCOUTFS_FREE_EXTENT_ORDER_ZONE,
.type = SCOUTFS_FREE_EXTENT_LEN_TYPE,
};
struct scoutfs_alloc_list_block *lblk;
struct scoutfs_block *bl = NULL;
@@ -1169,7 +923,8 @@ int scoutfs_alloc_fill_list(struct super_block *sb,
goto out;
lblk = bl->data;
while (le32_to_cpu(lblk->nr) < target && list_has_blocks(sb, alloc, root, 1, 0)) {
while (le32_to_cpu(lblk->nr) < target &&
list_can_cow(sb, alloc, root)) {
ret = scoutfs_ext_alloc(sb, &alloc_ext_ops, &args, 0, 0,
target - le32_to_cpu(lblk->nr), &ext);
@@ -1181,8 +936,6 @@ int scoutfs_alloc_fill_list(struct super_block *sb,
for (i = 0; i < ext.len; i++)
list_block_add(lhead, lblk, ext.start + i);
trace_scoutfs_alloc_fill_extent(sb, &ext);
}
out:
@@ -1205,7 +958,7 @@ int scoutfs_alloc_empty_list(struct super_block *sb,
.alloc = alloc,
.wri = wri,
.root = root,
.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE,
.type = SCOUTFS_FREE_EXTENT_BLKNO_TYPE,
};
struct scoutfs_alloc_list_block *lblk = NULL;
struct scoutfs_block *bl = NULL;
@@ -1215,7 +968,7 @@ int scoutfs_alloc_empty_list(struct super_block *sb,
if (WARN_ON_ONCE(lhead_in_alloc(alloc, lhead)))
return -EINVAL;
while (lhead->ref.blkno && list_has_blocks(sb, alloc, args.root, 1, 1)) {
while (lhead->ref.blkno && list_can_cow(sb, alloc, args.root)) {
if (lhead->first_nr == 0) {
ret = trim_empty_first_block(sb, alloc, wri, lhead);
@@ -1251,8 +1004,6 @@ int scoutfs_alloc_empty_list(struct super_block *sb,
break;
list_block_remove(lhead, lblk, ext.len);
trace_scoutfs_alloc_empty_extent(sb, &ext);
}
scoutfs_block_put(sb, bl);
@@ -1340,61 +1091,37 @@ bool scoutfs_alloc_meta_low(struct super_block *sb,
return lo;
}
void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total, u32 *freed_space)
{
unsigned int seq;
do {
seq = read_seqbegin(&alloc->seqlock);
*avail_total = le32_to_cpu(alloc->avail.first_nr);
*freed_space = list_block_space(alloc->freed.first_nr);
} while (read_seqretry(&alloc->seqlock, seq));
}
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag)
{
unsigned int seq;
bool set;
do {
seq = read_seqbegin(&alloc->seqlock);
set = !!(le32_to_cpu(alloc->avail.flags) & flag);
} while (read_seqretry(&alloc->seqlock, seq));
return set;
}
/*
* Iterate over the allocator structures referenced by the caller's
* super and call the caller's callback with summaries of the blocks
* found in each structure.
*
* The caller's responsible for the stability of the referenced blocks.
* If the blocks could be stale the caller must deal with retrying when
* it sees ESTALE.
* Call the caller's callback for every persistent allocator structure
* we can find.
*/
int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_block *super,
scoutfs_alloc_foreach_cb_t cb, void *arg)
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg)
{
struct scoutfs_block_ref stale_refs[2] = {{0,}};
struct scoutfs_block_ref refs[2] = {{0,}};
struct scoutfs_super_block *super = NULL;
struct scoutfs_srch_compact *sc;
struct scoutfs_log_merge_request *lmreq;
struct scoutfs_log_merge_complete *lmcomp;
struct scoutfs_log_trees lt;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int expected;
u64 avail_tot;
u64 freed_tot;
u64 id;
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
sc = kmalloc(sizeof(struct scoutfs_srch_compact), GFP_NOFS);
if (!sc) {
if (!super || !sc) {
ret = -ENOMEM;
goto out;
}
retry:
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
refs[0] = super->logs_root.ref;
refs[1] = super->srch_root.ref;
/* all the server allocators */
ret = cb(sb, arg, SCOUTFS_ALLOC_OWNER_SERVER, 0, true, true,
le64_to_cpu(super->meta_alloc[0].total_len)) ?:
@@ -1484,93 +1211,8 @@ int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_blo
scoutfs_key_inc(&key);
}
/* log merge allocators */
memset(&key, 0, sizeof(key));
key.sk_zone = SCOUTFS_LOG_MERGE_REQUEST_ZONE;
expected = sizeof(*lmreq);
id = 0;
avail_tot = 0;
freed_tot = 0;
for (;;) {
ret = scoutfs_btree_next(sb, &super->log_merge, &key, &iref);
if (ret == 0) {
if (iref.key->sk_zone != key.sk_zone) {
ret = -ENOENT;
} else if (iref.val_len == expected) {
key = *iref.key;
if (key.sk_zone == SCOUTFS_LOG_MERGE_REQUEST_ZONE) {
lmreq = iref.val;
id = le64_to_cpu(lmreq->rid);
avail_tot = le64_to_cpu(lmreq->meta_avail.total_nr);
freed_tot = le64_to_cpu(lmreq->meta_freed.total_nr);
} else {
lmcomp = iref.val;
id = le64_to_cpu(lmcomp->rid);
avail_tot = le64_to_cpu(lmcomp->meta_avail.total_nr);
freed_tot = le64_to_cpu(lmcomp->meta_freed.total_nr);
}
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(&iref);
}
if (ret == -ENOENT) {
if (key.sk_zone == SCOUTFS_LOG_MERGE_REQUEST_ZONE) {
memset(&key, 0, sizeof(key));
key.sk_zone = SCOUTFS_LOG_MERGE_COMPLETE_ZONE;
expected = sizeof(*lmcomp);
continue;
}
break;
}
if (ret < 0)
goto out;
ret = cb(sb, arg, SCOUTFS_ALLOC_OWNER_LOG_MERGE, id, true, true, avail_tot) ?:
cb(sb, arg, SCOUTFS_ALLOC_OWNER_LOG_MERGE, id, true, false, freed_tot);
if (ret < 0)
goto out;
scoutfs_key_inc(&key);
}
ret = 0;
out:
kfree(sc);
return ret;
}
/*
* Read the current on-disk super and use it to walk the allocators and
* call the caller's callback. This assumes that the super it's reading
* could be stale and will retry if it encounters stale blocks.
*/
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg)
{
struct scoutfs_super_block *super = NULL;
struct scoutfs_block_ref stale_refs[2] = {{0,}};
struct scoutfs_block_ref refs[2] = {{0,}};
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
retry:
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
refs[0] = super->logs_root.ref;
refs[1] = super->srch_root.ref;
ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);
out:
if (ret == -ESTALE) {
if (memcmp(&stale_refs, &refs, sizeof(refs)) == 0) {
ret = -EIO;
@@ -1582,64 +1224,6 @@ out:
}
kfree(super);
return ret;
}
struct foreach_cb_args {
scoutfs_alloc_extent_cb_t cb;
void *cb_arg;
};
static int alloc_btree_extent_item_cb(struct super_block *sb, struct scoutfs_key *key, u64 seq,
u8 flags, void *val, int val_len, void *arg)
{
struct foreach_cb_args *cba = arg;
struct scoutfs_extent ext;
if (key->sk_zone != SCOUTFS_FREE_EXTENT_BLKNO_ZONE)
return -ENOENT;
ext_from_key(&ext, key);
cba->cb(sb, cba->cb_arg, &ext);
return 0;
}
/*
* Call the caller's callback on each extent stored in the allocator's
* btree. The callback sees extents called in order by starting blkno.
*/
int scoutfs_alloc_extents_cb(struct super_block *sb, struct scoutfs_alloc_root *root,
scoutfs_alloc_extent_cb_t cb, void *cb_arg)
{
struct foreach_cb_args cba = {
.cb = cb,
.cb_arg = cb_arg,
};
struct scoutfs_key start;
struct scoutfs_key end;
struct scoutfs_key key;
int ret;
init_ext_key(&key, SCOUTFS_FREE_EXTENT_BLKNO_ZONE, 0, 1);
for (;;) {
/* will stop at order items before getting stuck in final block */
BUILD_BUG_ON(SCOUTFS_FREE_EXTENT_BLKNO_ZONE > SCOUTFS_FREE_EXTENT_ORDER_ZONE);
init_ext_key(&start, SCOUTFS_FREE_EXTENT_BLKNO_ZONE, 0, 1);
init_ext_key(&end, SCOUTFS_FREE_EXTENT_ORDER_ZONE, 0, 1);
ret = scoutfs_btree_read_items(sb, &root->root, &key, &start, &end,
alloc_btree_extent_item_cb, &cba);
if (ret < 0 || end.sk_zone != SCOUTFS_FREE_EXTENT_BLKNO_ZONE) {
if (ret == -ENOENT)
ret = 0;
break;
}
key = end;
scoutfs_key_inc(&key);
}
kfree(sc);
return ret;
}


@@ -38,10 +38,6 @@
#define SCOUTFS_ALLOC_DATA_LG_THRESH \
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/* the client will force commits if data allocators get too low */
#define SCOUTFS_ALLOC_DATA_REFILL_THRESH \
((256ULL * 1024 * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* Fill client alloc roots to the target when they fall below the lo
* threshold.
@@ -59,16 +55,15 @@
#define SCOUTFS_SERVER_DATA_FILL_LO \
(1ULL * 1024 * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* Log merge meta allocations are only used for one request and will
* never use more than the dirty limit.
* Each of the server meta_alloc roots will try to keep a minimum amount
* of free blocks. The server will swap roots when its current avail
* falls below the threshold while the freed root is still above it. It
* must have room for all the largest allocation attempted in a
* transaction on the server.
*/
#define SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT (64ULL * 1024 * 1024)
/* a few extra blocks for alloc blocks */
#define SCOUTFS_SERVER_MERGE_FILL_TARGET \
((SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT >> SCOUTFS_BLOCK_LG_SHIFT) + 4)
#define SCOUTFS_SERVER_MERGE_FILL_LO SCOUTFS_SERVER_MERGE_FILL_TARGET
#define SCOUTFS_SERVER_META_ALLOC_MIN \
(SCOUTFS_SERVER_META_FILL_TARGET * 2)
/*
* A run-time use of a pair of persistent avail/freed roots as a
@@ -130,14 +125,7 @@ int scoutfs_free_data(struct super_block *sb, struct scoutfs_alloc *alloc,
int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
int scoutfs_alloc_remove(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
struct scoutfs_alloc_root *src, u64 total);
int scoutfs_alloc_fill_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
@@ -158,21 +146,11 @@ int scoutfs_alloc_splice_list(struct super_block *sb,
bool scoutfs_alloc_meta_low(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 nr);
void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total, u32 *freed_space);
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag);
typedef int (*scoutfs_alloc_foreach_cb_t)(struct super_block *sb, void *arg,
int owner, u64 id,
bool meta, bool avail, u64 blocks);
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg);
int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_block *super,
scoutfs_alloc_foreach_cb_t cb, void *arg);
typedef void (*scoutfs_alloc_extent_cb_t)(struct super_block *sb, void *cb_arg,
struct scoutfs_extent *ext);
int scoutfs_alloc_extents_cb(struct super_block *sb, struct scoutfs_alloc_root *root,
scoutfs_alloc_extent_cb_t cb, void *cb_arg);
#endif


@@ -128,7 +128,6 @@ static __le32 block_calc_crc(struct scoutfs_block_header *hdr, u32 size)
static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
{
struct block_private *bp;
unsigned int noio_flags;
/*
* If we had multiple blocks per page we'd need to be a little
@@ -148,19 +147,8 @@ static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
set_bit(BLOCK_BIT_PAGE_ALLOC, &bp->bits);
bp->bl.data = page_address(bp->page);
} else {
/*
* __vmalloc doesn't pass the gfp flags down to pte
* allocs, they're done with user alloc flags.
* Unfortunately, some lockdep doesn't know that
* PF_NOMEMALLOC prevents __GFP_FS reclaim and generates
* spurious reclaim-on dependencies and warnings.
*/
lockdep_off();
noio_flags = memalloc_noio_save();
bp->virt = __vmalloc(SCOUTFS_BLOCK_LG_SIZE, GFP_NOFS | __GFP_HIGHMEM, PAGE_KERNEL);
memalloc_noio_restore(noio_flags);
lockdep_on();
bp->virt = __vmalloc(SCOUTFS_BLOCK_LG_SIZE,
GFP_NOFS | __GFP_HIGHMEM, PAGE_KERNEL);
if (!bp->virt) {
kfree(bp);
bp = NULL;
@@ -200,9 +188,7 @@ static void block_free(struct super_block *sb, struct block_private *bp)
else
BUG();
/* ok to tear down dirty blocks when forcing unmount */
WARN_ON_ONCE(!scoutfs_forcing_unmount(sb) && !list_empty(&bp->dirty_entry));
WARN_ON_ONCE(!list_empty(&bp->dirty_entry));
WARN_ON_ONCE(atomic_read(&bp->refcount));
WARN_ON_ONCE(atomic_read(&bp->io_count));
kfree(bp);
@@ -300,16 +286,10 @@ static int block_insert(struct super_block *sb, struct block_private *bp)
WARN_ON_ONCE(atomic_read(&bp->refcount) & BLOCK_REF_INSERTED);
retry:
atomic_add(BLOCK_REF_INSERTED, &bp->refcount);
ret = rhashtable_lookup_insert_fast(&binf->ht, &bp->ht_head, block_ht_params);
ret = rhashtable_insert_fast(&binf->ht, &bp->ht_head, block_ht_params);
if (ret < 0) {
atomic_sub(BLOCK_REF_INSERTED, &bp->refcount);
if (ret == -EBUSY) {
/* wait for pending rebalance to finish */
synchronize_rcu();
goto retry;
}
} else {
atomic_inc(&binf->total_inserted);
TRACE_BLOCK(insert, bp);
@@ -416,7 +396,6 @@ static void block_remove_all(struct super_block *sb)
if (block_get_if_inserted(bp)) {
block_remove(sb, bp);
WARN_ON_ONCE(atomic_read(&bp->refcount) != 1);
block_put(sb, bp);
}
}
@@ -487,9 +466,6 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
sector_t sector;
int ret = 0;
if (scoutfs_forcing_unmount(sb))
return -EIO;
sector = bp->bl.blkno << (SCOUTFS_BLOCK_LG_SHIFT - 9);
WARN_ON_ONCE(bp->bl.blkno == U64_MAX);
@@ -645,11 +621,9 @@ static struct block_private *block_read(struct super_block *sb, u64 blkno)
goto out;
}
wait_event(binf->waitq, uptodate_or_error(bp));
if (test_bit(BLOCK_BIT_ERROR, &bp->bits))
ret = wait_event_interruptible(binf->waitq, uptodate_or_error(bp));
if (ret == 0 && test_bit(BLOCK_BIT_ERROR, &bp->bits))
ret = -EIO;
else
ret = 0;
out:
if (ret < 0) {
@@ -1099,11 +1073,10 @@ restart:
if (bp == NULL)
break;
if (bp == ERR_PTR(-EAGAIN)) {
/* hard exit to wait for rcu rebalance to finish */
/* hard reset to not hold rcu grace period across retries */
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
scoutfs_inc_counter(sb, block_cache_shrink_restart);
synchronize_rcu();
goto restart;
}
@@ -1155,7 +1128,7 @@ static void sm_block_bio_end_io(struct bio *bio, int err)
* only layer that sees the full block buffer so we pass the calculated
* crc to the caller for them to check in their context.
*/
static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw, u64 blkno,
static int sm_block_io(struct block_device *bdev, int rw, u64 blkno,
struct scoutfs_block_header *hdr, size_t len,
__le32 *blk_crc)
{
@@ -1167,9 +1140,6 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw
BUILD_BUG_ON(PAGE_SIZE < SCOUTFS_BLOCK_SM_SIZE);
if (scoutfs_forcing_unmount(sb))
return -EIO;
if (WARN_ON_ONCE(len > SCOUTFS_BLOCK_SM_SIZE) ||
WARN_ON_ONCE(!(rw & WRITE) && !blk_crc))
return -EINVAL;
@@ -1222,14 +1192,14 @@ int scoutfs_block_read_sm(struct super_block *sb,
struct scoutfs_block_header *hdr, size_t len,
__le32 *blk_crc)
{
return sm_block_io(sb, bdev, READ, blkno, hdr, len, blk_crc);
return sm_block_io(bdev, READ, blkno, hdr, len, blk_crc);
}
int scoutfs_block_write_sm(struct super_block *sb,
struct block_device *bdev, u64 blkno,
struct scoutfs_block_header *hdr, size_t len)
{
return sm_block_io(sb, bdev, WRITE, blkno, hdr, len, NULL);
return sm_block_io(bdev, WRITE, blkno, hdr, len, NULL);
}
int scoutfs_block_setup(struct super_block *sb)
@@ -1267,7 +1237,7 @@ out:
if (ret)
scoutfs_block_destroy(sb);
return ret;
return 0;
}
void scoutfs_block_destroy(struct super_block *sb)

File diff suppressed because it is too large


@@ -20,15 +20,13 @@ struct scoutfs_btree_item_ref {
/* caller gives an item to the callback */
typedef int (*scoutfs_btree_item_cb)(struct super_block *sb,
struct scoutfs_key *key, u64 seq, u8 flags,
struct scoutfs_key *key,
void *val, int val_len, void *arg);
/* simple singly-linked list of items */
struct scoutfs_btree_item_list {
struct scoutfs_btree_item_list *next;
struct scoutfs_key key;
u64 seq;
u8 flags;
int val_len;
u8 val[0];
};
@@ -84,49 +82,6 @@ int scoutfs_btree_insert_list(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_btree_item_list *lst);
int scoutfs_btree_parent_range(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_key *start,
struct scoutfs_key *end);
int scoutfs_btree_get_parent(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_btree_root *par_root);
int scoutfs_btree_set_parent(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_btree_root *par_root);
int scoutfs_btree_rebalance(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key);
/* merge input is a list of roots */
struct scoutfs_btree_root_head {
struct list_head head;
struct scoutfs_btree_root root;
};
int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_key *start,
struct scoutfs_key *end,
struct scoutfs_key *next_ret,
struct scoutfs_btree_root *root,
struct list_head *input_list,
bool subtree, int dirty_limit, int alloc_low);
int scoutfs_btree_free_blocks(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_key *key,
struct scoutfs_btree_root *root, int free_budget);
void scoutfs_btree_put_iref(struct scoutfs_btree_item_ref *iref);
#endif


@@ -31,8 +31,6 @@
#include "net.h"
#include "endian_swap.h"
#include "quorum.h"
#include "omap.h"
#include "trans.h"
/*
* The client is responsible for maintaining a connection to the server.
@@ -49,7 +47,6 @@ struct client_info {
struct workqueue_struct *workq;
struct delayed_work connect_dwork;
unsigned long connect_delay_jiffies;
u64 server_term;
@@ -117,6 +114,21 @@ int scoutfs_client_get_roots(struct super_block *sb,
NULL, 0, roots, sizeof(*roots));
}
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
__le64 leseq;
int ret;
ret = scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
NULL, 0, &leseq, sizeof(leseq));
if (ret == 0)
*seq = le64_to_cpu(leseq);
return ret;
}
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
@@ -138,7 +150,7 @@ static int client_lock_response(struct super_block *sb,
void *resp, unsigned int resp_len,
int error, void *data)
{
if (resp_len != sizeof(struct scoutfs_net_lock))
if (resp_len != sizeof(struct scoutfs_net_lock_grant_response))
return -EINVAL;
/* XXX error? */
@@ -203,120 +215,6 @@ int scoutfs_client_srch_commit_compact(struct super_block *sb,
res, sizeof(*res), NULL, 0);
}
int scoutfs_client_get_log_merge(struct super_block *sb,
struct scoutfs_log_merge_request *req)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_GET_LOG_MERGE,
NULL, 0, req, sizeof(*req));
}
int scoutfs_client_commit_log_merge(struct super_block *sb,
struct scoutfs_log_merge_complete *comp)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_COMMIT_LOG_MERGE,
comp, sizeof(*comp), NULL, 0);
}
int scoutfs_client_send_omap_response(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map *map)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_response(sb, client->conn, SCOUTFS_NET_CMD_OPEN_INO_MAP,
id, 0, map, sizeof(*map));
}
/* The client is receiving an omap request from the server */
static int client_open_ino_map(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
if (arg_len != sizeof(struct scoutfs_open_ino_map_args))
return -EINVAL;
return scoutfs_omap_client_handle_request(sb, id, arg);
}
/* The client is sending an omap request to the server */
int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
struct scoutfs_open_ino_map *map)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_open_ino_map_args args = {
.group_nr = cpu_to_le64(group_nr),
.req_id = 0,
};
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_OPEN_INO_MAP,
&args, sizeof(args), map, sizeof(*map));
}
/* The client is asking the server for the current volume options */
int scoutfs_client_get_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_GET_VOLOPT,
NULL, 0, volopt, sizeof(*volopt));
}
/* The client is asking the server to update volume options */
int scoutfs_client_set_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_SET_VOLOPT,
volopt, sizeof(*volopt), NULL, 0);
}
/* The client is asking the server to clear volume options */
int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_CLEAR_VOLOPT,
volopt, sizeof(*volopt), NULL, 0);
}
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_RESIZE_DEVICES,
nrd, sizeof(*nrd), NULL, 0);
}
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_STATFS,
NULL, 0, nst, sizeof(*nst));
}
/*
* The server is asking that we trigger a commit of the current log
* trees so that they can ensure an item seq discontinuity between
* finalized log btrees and the next set of open log btrees. If we're
* shutting down then we're already going to perform a final commit.
*/
static int sync_log_trees(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
if (arg_len != 0)
return -EINVAL;
if (!scoutfs_unmounting(sb))
scoutfs_trans_sync(sb, 0);
return scoutfs_net_response(sb, conn, cmd, id, 0, NULL, 0);
}
/* The client is receiving an invalidation request from the server */
static int client_lock(struct super_block *sb,
struct scoutfs_net_connection *conn, u8 cmd, u64 id,
@@ -354,8 +252,7 @@ static int client_greeting(struct super_block *sb,
void *resp, unsigned int resp_len, int error,
void *data)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct client_info *client = sbi->client_info;
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_greeting *gr = resp;
bool new_server;
@@ -372,15 +269,17 @@ static int client_greeting(struct super_block *sb,
}
if (gr->fsid != super->hdr.fsid) {
scoutfs_warn(sb, "server greeting response fsid 0x%llx did not match client fsid 0x%llx",
le64_to_cpu(gr->fsid), le64_to_cpu(super->hdr.fsid));
scoutfs_warn(sb, "server sent fsid 0x%llx, client has 0x%llx",
le64_to_cpu(gr->fsid),
le64_to_cpu(super->hdr.fsid));
ret = -EINVAL;
goto out;
}
if (le64_to_cpu(gr->fmt_vers) != sbi->fmt_vers) {
scoutfs_warn(sb, "server greeting response format version %llu did not match client format version %llu",
le64_to_cpu(gr->fmt_vers), sbi->fmt_vers);
if (gr->version != super->version) {
scoutfs_warn(sb, "server sent format 0x%llx, client has 0x%llx",
le64_to_cpu(gr->version),
le64_to_cpu(super->version));
ret = -EINVAL;
goto out;
}
@@ -389,7 +288,6 @@ static int client_greeting(struct super_block *sb,
scoutfs_net_client_greeting(sb, conn, new_server);
client->server_term = le64_to_cpu(gr->server_term);
client->connect_delay_jiffies = 0;
ret = 0;
out:
return ret;
@@ -439,20 +337,6 @@ out:
return ret;
}
/*
* If we're not seeing successful connections we want to back off. Each
* connection attempt starts by setting a long connection work delay.
* We only set a shorter delay if we see a greeting response from the
* server. At that point we'll try to immediately reconnect if the
* connection is broken.
*/
static void queue_connect_dwork(struct super_block *sb, struct client_info *client)
{
if (!atomic_read(&client->shutting_down) && !scoutfs_forcing_unmount(sb))
queue_delayed_work(client->workq, &client->connect_dwork,
client->connect_delay_jiffies);
}
/*
* This work is responsible for maintaining a connection from the client
* to the server. It's queued on mount and disconnect and we requeue
@@ -477,15 +361,12 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_mount_options opts;
struct mount_options *opts = &sbi->opts;
const bool am_quorum = opts->quorum_slot_nr >= 0;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
bool am_quorum;
int ret;
scoutfs_options_read(sb, &opts);
am_quorum = opts.quorum_slot_nr >= 0;
/* can unmount once server farewell handling removes our item */
if (client->sending_farewell &&
lookup_mounted_client_item(sb, sbi->rid) == 0) {
@@ -495,9 +376,6 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
goto out;
}
/* always wait a bit until a greeting response sets a lower delay */
client->connect_delay_jiffies = msecs_to_jiffies(CLIENT_CONNECT_DELAY_MS);
ret = scoutfs_quorum_server_sin(sb, &sin);
if (ret < 0)
goto out;
@@ -509,7 +387,7 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
/* send a greeting to verify endpoints of each connection */
greet.fsid = super->hdr.fsid;
greet.fmt_vers = cpu_to_le64(sbi->fmt_vers);
greet.version = super->version;
greet.server_term = cpu_to_le64(client->server_term);
greet.rid = cpu_to_le64(sbi->rid);
greet.flags = 0;
@@ -525,15 +403,16 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
if (ret)
scoutfs_net_shutdown(sb, client->conn);
out:
if (ret)
queue_connect_dwork(sb, client);
/* always have a small delay before retrying to avoid storms */
if (ret && !atomic_read(&client->shutting_down))
queue_delayed_work(client->workq, &client->connect_dwork,
msecs_to_jiffies(CLIENT_CONNECT_DELAY_MS));
}
static scoutfs_net_request_t client_req_funcs[] = {
[SCOUTFS_NET_CMD_SYNC_LOG_TREES] = sync_log_trees,
[SCOUTFS_NET_CMD_LOCK] = client_lock,
[SCOUTFS_NET_CMD_LOCK_RECOVER] = client_lock_recover,
[SCOUTFS_NET_CMD_OPEN_INO_MAP] = client_open_ino_map,
};
/*
@@ -546,7 +425,8 @@ static void client_notify_down(struct super_block *sb,
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
queue_connect_dwork(sb, client);
if (!atomic_read(&client->shutting_down))
queue_delayed_work(client->workq, &client->connect_dwork, 0);
}
int scoutfs_client_setup(struct super_block *sb)
@@ -581,7 +461,7 @@ int scoutfs_client_setup(struct super_block *sb)
goto out;
}
queue_connect_dwork(sb, client);
queue_delayed_work(client->workq, &client->connect_dwork, 0);
ret = 0;
out:
@@ -638,7 +518,7 @@ void scoutfs_client_destroy(struct super_block *sb)
if (client == NULL)
return;
if (client->server_term != 0 && !scoutfs_forcing_unmount(sb)) {
if (client->server_term != 0) {
client->sending_farewell = true;
ret = scoutfs_net_submit_request(sb, client->conn,
SCOUTFS_NET_CMD_FAREWELL,
@@ -646,8 +526,10 @@ void scoutfs_client_destroy(struct super_block *sb)
client_farewell_response,
NULL, NULL);
if (ret == 0) {
wait_for_completion(&client->farewell_comp);
ret = client->farewell_error;
ret = wait_for_completion_interruptible(
&client->farewell_comp);
if (ret == 0)
ret = client->farewell_error;
}
if (ret) {
scoutfs_inc_counter(sb, client_farewell_error);
@@ -671,11 +553,3 @@ void scoutfs_client_destroy(struct super_block *sb)
kfree(client);
sbi->client_info = NULL;
}
void scoutfs_client_net_shutdown(struct super_block *sb)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
if (client && client->conn)
scoutfs_net_shutdown(sb, client->conn);
}


@@ -10,6 +10,7 @@ int scoutfs_client_commit_log_trees(struct super_block *sb,
int scoutfs_client_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots);
u64 *scoutfs_client_bulk_alloc(struct super_block *sb);
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_lock_request(struct super_block *sb,
struct scoutfs_net_lock *nl);
@@ -21,21 +22,7 @@ int scoutfs_client_srch_get_compact(struct super_block *sb,
struct scoutfs_srch_compact *sc);
int scoutfs_client_srch_commit_compact(struct super_block *sb,
struct scoutfs_srch_compact *res);
int scoutfs_client_get_log_merge(struct super_block *sb,
struct scoutfs_log_merge_request *req);
int scoutfs_client_commit_log_merge(struct super_block *sb,
struct scoutfs_log_merge_complete *comp);
int scoutfs_client_send_omap_response(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map *map);
int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
struct scoutfs_open_ino_map *map);
int scoutfs_client_get_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_set_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd);
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst);
void scoutfs_client_net_shutdown(struct super_block *sb);
int scoutfs_client_setup(struct super_block *sb);
void scoutfs_client_destroy(struct super_block *sb);


@@ -44,16 +44,6 @@
EXPAND_COUNTER(btree_insert) \
EXPAND_COUNTER(btree_leaf_item_hash_search) \
EXPAND_COUNTER(btree_lookup) \
EXPAND_COUNTER(btree_merge) \
EXPAND_COUNTER(btree_merge_alloc_low) \
EXPAND_COUNTER(btree_merge_delete) \
EXPAND_COUNTER(btree_merge_delta_combined) \
EXPAND_COUNTER(btree_merge_delta_null) \
EXPAND_COUNTER(btree_merge_dirty_limit) \
EXPAND_COUNTER(btree_merge_drop_old) \
EXPAND_COUNTER(btree_merge_insert) \
EXPAND_COUNTER(btree_merge_update) \
EXPAND_COUNTER(btree_merge_walk) \
EXPAND_COUNTER(btree_next) \
EXPAND_COUNTER(btree_prev) \
EXPAND_COUNTER(btree_split) \
@@ -93,8 +83,6 @@
EXPAND_COUNTER(item_clear_dirty) \
EXPAND_COUNTER(item_create) \
EXPAND_COUNTER(item_delete) \
EXPAND_COUNTER(item_delta) \
EXPAND_COUNTER(item_delta_written) \
EXPAND_COUNTER(item_dirty) \
EXPAND_COUNTER(item_invalidate) \
EXPAND_COUNTER(item_invalidate_page) \
@@ -124,8 +112,12 @@
EXPAND_COUNTER(item_write_dirty) \
EXPAND_COUNTER(lock_alloc) \
EXPAND_COUNTER(lock_free) \
EXPAND_COUNTER(lock_grace_extended) \
EXPAND_COUNTER(lock_grace_set) \
EXPAND_COUNTER(lock_grace_wait) \
EXPAND_COUNTER(lock_grant_request) \
EXPAND_COUNTER(lock_grant_response) \
EXPAND_COUNTER(lock_grant_work) \
EXPAND_COUNTER(lock_invalidate_coverage) \
EXPAND_COUNTER(lock_invalidate_inode) \
EXPAND_COUNTER(lock_invalidate_request) \
@@ -151,13 +143,6 @@
EXPAND_COUNTER(net_recv_invalid_message) \
EXPAND_COUNTER(net_recv_messages) \
EXPAND_COUNTER(net_unknown_request) \
EXPAND_COUNTER(orphan_scan) \
EXPAND_COUNTER(orphan_scan_attempts) \
EXPAND_COUNTER(orphan_scan_cached) \
EXPAND_COUNTER(orphan_scan_error) \
EXPAND_COUNTER(orphan_scan_item) \
EXPAND_COUNTER(orphan_scan_omap_set) \
EXPAND_COUNTER(quorum_candidate_server_stopping) \
EXPAND_COUNTER(quorum_elected) \
EXPAND_COUNTER(quorum_fence_error) \
EXPAND_COUNTER(quorum_fence_leader) \
@@ -179,7 +164,6 @@
EXPAND_COUNTER(srch_add_entry) \
EXPAND_COUNTER(srch_compact_dirty_block) \
EXPAND_COUNTER(srch_compact_entry) \
EXPAND_COUNTER(srch_compact_error) \
EXPAND_COUNTER(srch_compact_flush) \
EXPAND_COUNTER(srch_compact_log_page) \
EXPAND_COUNTER(srch_compact_removed_entry) \
@@ -194,11 +178,6 @@
EXPAND_COUNTER(srch_search_xattrs) \
EXPAND_COUNTER(srch_read_stale) \
EXPAND_COUNTER(statfs) \
EXPAND_COUNTER(totl_read_copied) \
EXPAND_COUNTER(totl_read_finalized) \
EXPAND_COUNTER(totl_read_fs) \
EXPAND_COUNTER(totl_read_item) \
EXPAND_COUNTER(totl_read_logged) \
EXPAND_COUNTER(trans_commit_data_alloc_low) \
EXPAND_COUNTER(trans_commit_dirty_meta_full) \
EXPAND_COUNTER(trans_commit_fsync) \


@@ -207,7 +207,6 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
u64 offset;
s64 ret;
u8 flags;
int err;
int i;
flags = offline ? SEF_OFFLINE : 0;
@@ -247,18 +246,6 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
tr.len = min(ext.len - offset, last - iblock + 1);
tr.flags = ext.flags;
trace_scoutfs_data_extent_truncated(sb, ino, &tr);
ret = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, 0, flags);
if (ret < 0) {
if (WARN_ON_ONCE(ret == -EINVAL)) {
scoutfs_err(sb, "unexpected truncate inconsistency: ino %llu iblock %llu last %llu, start %llu len %llu",
ino, iblock, last, tr.start, tr.len);
}
break;
}
if (tr.map) {
mutex_lock(&datinf->mutex);
ret = scoutfs_free_data(sb, datinf->alloc,
@@ -266,16 +253,16 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
&datinf->data_freed,
tr.map, tr.len);
mutex_unlock(&datinf->mutex);
if (ret < 0) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, tr.map, tr.flags);
if (err < 0)
scoutfs_err(sb, "truncate err %d restoring extent after error %lld: ino %llu start %llu len %llu",
err, ret, ino, tr.start, tr.len);
if (ret < 0)
break;
}
}
trace_scoutfs_data_extent_truncated(sb, ino, &tr);
ret = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, 0, flags);
BUG_ON(ret); /* inconsistent, could prealloc items */
iblock += tr.len;
}
@@ -325,9 +312,10 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
while (iblock <= last) {
if (inode)
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, true, false);
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks,
true);
else
ret = scoutfs_hold_trans(sb, false);
ret = scoutfs_hold_trans(sb);
if (ret)
break;
@@ -768,7 +756,8 @@ retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &wbd->ind_locks, inode,
true) ?:
scoutfs_inode_index_try_lock_hold(sb, &wbd->ind_locks, ind_seq, true);
scoutfs_inode_index_try_lock_hold(sb, &wbd->ind_locks,
ind_seq);
} while (ret > 0);
if (ret < 0)
goto out;
@@ -830,7 +819,6 @@ static int scoutfs_write_end(struct file *file, struct address_space *mapping,
scoutfs_inode_inc_data_version(inode);
}
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, wbd->lock, &wbd->ind_locks);
scoutfs_inode_queue_writeback(inode);
}
@@ -983,6 +971,9 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
u64 last;
s64 ret;
mutex_lock(&inode->i_mutex);
down_write(&si->extent_sem);
/* XXX support more flags */
if (mode & ~(FALLOC_FL_KEEP_SIZE)) {
ret = -EOPNOTSUPP;
@@ -1000,22 +991,18 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
goto out;
}
mutex_lock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (ret)
goto out_mutex;
goto out;
inode_dio_wait(inode);
down_write(&si->extent_sem);
if (!(mode & FALLOC_FL_KEEP_SIZE) &&
(offset + len > i_size_read(inode))) {
ret = inode_newsize_ok(inode, offset + len);
if (ret)
goto out_extent;
goto out;
}
iblock = offset >> SCOUTFS_BLOCK_SM_SHIFT;
@@ -1023,9 +1010,9 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
while(iblock <= last) {
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false);
if (ret)
goto out_extent;
goto out;
ret = fallocate_extents(sb, inode, iblock, last, lock);
@@ -1033,11 +1020,8 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
end = (iblock + ret) << SCOUTFS_BLOCK_SM_SHIFT;
if (end > offset + len)
end = offset + len;
if (end > i_size_read(inode)) {
if (end > i_size_read(inode))
i_size_write(inode, end);
inode_inc_iversion(inode);
scoutfs_inode_inc_data_version(inode);
}
}
if (ret >= 0)
scoutfs_update_inode_item(inode, lock, &ind_locks);
@@ -1051,19 +1035,17 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
}
if (ret <= 0)
goto out_extent;
goto out;
iblock += ret;
ret = 0;
}
out_extent:
up_write(&si->extent_sem);
out_mutex:
out:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
up_write(&si->extent_sem);
mutex_unlock(&inode->i_mutex);
out:
trace_scoutfs_data_fallocate(sb, ino, mode, offset, len, ret);
return ret;
}
@@ -1104,7 +1086,7 @@ int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
}
/* we're updating meta_seq with offline block count */
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false);
if (ret < 0)
goto out;
@@ -1153,8 +1135,7 @@ static void truncate_inode_pages_extent(struct inode *inode, u64 start, u64 len)
*/
#define MOVE_DATA_EXTENTS_PER_HOLD 16
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
u64 byte_len, struct inode *to, u64 to_off, bool is_stage,
u64 data_version)
u64 byte_len, struct inode *to, u64 to_off)
{
struct scoutfs_inode_info *from_si = SCOUTFS_I(from);
struct scoutfs_inode_info *to_si = SCOUTFS_I(to);
@@ -1164,7 +1145,6 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
struct data_ext_args from_args;
struct data_ext_args to_args;
struct scoutfs_extent ext;
struct timespec cur_time;
LIST_HEAD(locks);
bool done = false;
loff_t from_size;
@@ -1200,11 +1180,6 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
goto out;
}
if (is_stage && (data_version != SCOUTFS_I(to)->data_version)) {
ret = -ESTALE;
goto out;
}
from_iblock = from_off >> SCOUTFS_BLOCK_SM_SHIFT;
count = (byte_len + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
to_iblock = to_off >> SCOUTFS_BLOCK_SM_SHIFT;
@@ -1227,7 +1202,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
/* can't stage once data_version changes */
scoutfs_inode_get_onoff(from, &junk, &from_offline);
scoutfs_inode_get_onoff(to, &junk, &to_offline);
if (from_offline || (to_offline && !is_stage)) {
if (from_offline || to_offline) {
ret = -ENODATA;
goto out;
}
@@ -1256,7 +1231,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
ret = scoutfs_inode_index_start(sb, &seq) ?:
scoutfs_inode_index_prepare(sb, &locks, from, true) ?:
scoutfs_inode_index_prepare(sb, &locks, to, true) ?:
scoutfs_inode_index_try_lock_hold(sb, &locks, seq, false);
scoutfs_inode_index_try_lock_hold(sb, &locks, seq);
if (ret > 0)
continue;
if (ret < 0)
@@ -1271,8 +1246,6 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
/* arbitrarily limit the number of extents per trans hold */
for (i = 0; i < MOVE_DATA_EXTENTS_PER_HOLD; i++) {
struct scoutfs_extent off_ext;
/* find the next extent to move */
ret = scoutfs_ext_next(sb, &data_ext_ops, &from_args,
from_iblock, 1, &ext);
@@ -1301,27 +1274,10 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
to_start = to_iblock + (from_start - from_iblock);
if (is_stage) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &to_args,
to_start, 1, &off_ext);
if (ret)
break;
if (!scoutfs_ext_inside(to_start, len, &off_ext) ||
!(off_ext.flags & SEF_OFFLINE)) {
ret = -EINVAL;
break;
}
ret = scoutfs_ext_set(sb, &data_ext_ops, &to_args,
to_start, len,
map, ext.flags);
} else {
/* insert the new, fails if it overlaps */
ret = scoutfs_ext_insert(sb, &data_ext_ops, &to_args,
to_start, len,
map, ext.flags);
}
/* insert the new, fails if it overlaps */
ret = scoutfs_ext_insert(sb, &data_ext_ops, &to_args,
to_start, len,
map, ext.flags);
if (ret < 0)
break;
@@ -1329,18 +1285,10 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
ret = scoutfs_ext_set(sb, &data_ext_ops, &from_args,
from_start, len, 0, 0);
if (ret < 0) {
if (is_stage) {
/* re-mark dest range as offline */
WARN_ON_ONCE(!(off_ext.flags & SEF_OFFLINE));
err = scoutfs_ext_set(sb, &data_ext_ops, &to_args,
to_start, len,
0, off_ext.flags);
} else {
/* remove inserted new on err */
err = scoutfs_ext_remove(sb, &data_ext_ops,
&to_args, to_start,
len);
}
/* remove inserted new on err */
err = scoutfs_ext_remove(sb, &data_ext_ops,
&to_args, to_start,
len);
BUG_ON(err); /* XXX inconsistent */
break;
}
@@ -1368,17 +1316,12 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
up_write(&from_si->extent_sem);
up_write(&to_si->extent_sem);
cur_time = CURRENT_TIME;
if (!is_stage) {
to->i_ctime = to->i_mtime = cur_time;
inode_inc_iversion(to);
scoutfs_inode_inc_data_version(to);
scoutfs_inode_set_data_seq(to);
}
from->i_ctime = from->i_mtime = cur_time;
inode_inc_iversion(from);
from->i_ctime = from->i_mtime =
to->i_ctime = to->i_mtime = CURRENT_TIME;
scoutfs_inode_inc_data_version(from);
scoutfs_inode_inc_data_version(to);
scoutfs_inode_set_data_seq(from);
scoutfs_inode_set_data_seq(to);
scoutfs_update_inode_item(from, from_lock, &locks);
scoutfs_update_inode_item(to, to_lock, &locks);
@@ -1864,17 +1807,13 @@ int scoutfs_data_prepare_commit(struct super_block *sb)
return ret;
}
/*
* Return true if the data allocator is lower than the caller's
* requirement and we haven't been told by the server that we're out of
* free extents.
*/
bool scoutfs_data_alloc_should_refill(struct super_block *sb, u64 blocks)
u64 scoutfs_data_alloc_free_bytes(struct super_block *sb)
{
DECLARE_DATA_INFO(sb, datinf);
return (scoutfs_dalloc_total_len(&datinf->dalloc) < blocks) &&
!(le32_to_cpu(datinf->dalloc.root.flags) & SCOUTFS_ALLOC_FLAG_LOW);
return scoutfs_dalloc_total_len(&datinf->dalloc) <<
SCOUTFS_BLOCK_SM_SHIFT;
}
int scoutfs_data_setup(struct super_block *sb)


@@ -38,6 +38,13 @@ struct scoutfs_data_wait {
.err = 0, \
}
struct scoutfs_traced_extent {
u64 iblock;
u64 count;
u64 blkno;
u8 flags;
};
extern const struct address_space_operations scoutfs_file_aops;
extern const struct file_operations scoutfs_file_fops;
struct scoutfs_alloc;
@@ -52,8 +59,7 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len);
int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
struct scoutfs_lock *lock);
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
u64 byte_len, struct inode *to, u64 to_off, bool to_stage,
u64 data_version);
u64 byte_len, struct inode *to, u64 to_off);
int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
u8 sef, u8 op, struct scoutfs_data_wait *ow,
@@ -79,7 +85,7 @@ void scoutfs_data_init_btrees(struct super_block *sb,
void scoutfs_data_get_btrees(struct super_block *sb,
struct scoutfs_log_trees *lt);
int scoutfs_data_prepare_commit(struct super_block *sb);
bool scoutfs_data_alloc_should_refill(struct super_block *sb, u64 blocks);
u64 scoutfs_data_alloc_free_bytes(struct super_block *sb);
int scoutfs_data_setup(struct super_block *sb);
void scoutfs_data_destroy(struct super_block *sb);


@@ -30,8 +30,6 @@
#include "item.h"
#include "lock.h"
#include "hash.h"
#include "omap.h"
#include "forest.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -136,8 +134,8 @@ static int alloc_dentry_info(struct dentry *dentry)
{
struct dentry_info *di;
smp_rmb();
if (dentry->d_op == &scoutfs_dentry_ops)
/* XXX read mb? */
if (dentry->d_fsdata)
return 0;
di = kmem_cache_zalloc(dentry_info_cache, GFP_NOFS);
@@ -149,7 +147,6 @@ static int alloc_dentry_info(struct dentry *dentry)
spin_lock(&dentry->d_lock);
if (!dentry->d_fsdata) {
dentry->d_fsdata = di;
smp_wmb();
d_set_d_op(dentry, &scoutfs_dentry_ops);
}
spin_unlock(&dentry->d_lock);
@@ -255,7 +252,7 @@ static u64 dirent_name_hash(const char *name, unsigned int name_len)
((u64)dirent_name_fingerprint(name, name_len) << 32);
}
static bool dirent_names_equal(const char *a_name, unsigned int a_len,
static u64 dirent_names_equal(const char *a_name, unsigned int a_len,
const char *b_name, unsigned int b_len)
{
return a_len == b_len && memcmp(a_name, b_name, a_len) == 0;
@@ -277,7 +274,8 @@ static int lookup_dirent(struct super_block *sb, u64 dir_ino, const char *name,
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
return -ENOMEM;
ret = -ENOMEM;
goto out;
}
init_dirent_key(&key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, 0);
@@ -317,52 +315,6 @@ out:
return ret;
}
/*
* Verify that the caller's dentry still precisely matches our dirent
* items.
*
* The caller has a dentry that the vfs revalidated before they acquired
* their locks. If the dentry is still covered by a lock we immediately
* return 0. If not, we check items and return -ENOENT if a positive
* dentry no longer matches the items or -EEXIST if a negative entry's
* name now has an item.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_lock *lock)
{
struct dentry_info *di = dentry->d_fsdata;
struct scoutfs_dirent dent = {0,};
const char *name;
u64 dentry_ino;
int name_len;
u64 hash;
int ret;
if (scoutfs_lock_is_covered(sb, &di->lock_cov))
return 0;
dentry_ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
name = dentry->d_name.name;
name_len = dentry->d_name.len;
hash = dirent_name_hash(name, name_len);
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
if (ret < 0 && ret != -ENOENT)
return ret;
if (dentry_ino != le64_to_cpu(dent.ino) || di->hash != le64_to_cpu(dent.hash) ||
di->pos != le64_to_cpu(dent.pos)) {
if (dentry_ino)
ret = -ENOENT;
else
ret = -EEXIST;
} else {
ret = 0;
}
return ret;
}
static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
{
struct super_block *sb = dentry->d_sb;
@@ -470,7 +422,7 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
{
struct super_block *sb = dir->i_sb;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent dent = {0,};
struct scoutfs_dirent dent;
struct inode *inode;
u64 ino = 0;
u64 hash;
@@ -498,11 +450,9 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
ret = 0;
} else if (ret == 0) {
ino = le64_to_cpu(dent.ino);
}
if (ret == 0)
update_dentry_info(sb, dentry, le64_to_cpu(dent.hash),
le64_to_cpu(dent.pos), dir_lock);
}
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_READ);
out:
@@ -511,7 +461,7 @@ out:
else if (ino == 0)
inode = NULL;
else
inode = scoutfs_iget(sb, ino, 0, 0);
inode = scoutfs_iget(sb, ino);
/*
* We can't splice dir aliases into the dcache. dir entries
@@ -539,10 +489,10 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
{
struct inode *inode = file_inode(file);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent *dent = NULL;
struct scoutfs_key last_key;
struct scoutfs_dirent *dent;
struct scoutfs_key key;
struct scoutfs_key last_key;
struct scoutfs_lock *dir_lock;
int name_len;
u64 pos;
int ret;
@@ -552,7 +502,8 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
return -ENOMEM;
ret = -ENOMEM;
goto out;
}
init_dirent_key(&last_key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
@@ -619,17 +570,18 @@ static int add_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
u64 ino, umode_t mode, struct scoutfs_lock *dir_lock,
struct scoutfs_lock *inode_lock)
{
struct scoutfs_dirent *dent = NULL;
struct scoutfs_key rdir_key;
struct scoutfs_key ent_key;
struct scoutfs_key lb_key;
bool del_rdir = false;
struct scoutfs_dirent *dent;
bool del_ent = false;
bool del_rdir = false;
int ret;
dent = alloc_dirent(name_len);
if (!dent) {
return -ENOMEM;
ret = -ENOMEM;
goto out;
}
/* initialize the dent */
@@ -716,11 +668,10 @@ static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
umode_t mode, dev_t rdev,
struct scoutfs_lock **dir_lock,
struct scoutfs_lock **inode_lock,
struct scoutfs_lock **orph_lock,
struct list_head *ind_locks)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
struct inode *inode;
u64 ind_seq;
int ret = 0;
u64 ino;
@@ -749,25 +700,21 @@ static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
if (ret)
goto out_unlock;
if (orph_lock) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, ino, orph_lock);
if (ret < 0)
goto out_unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, ind_locks, dir, true) ?:
scoutfs_inode_index_prepare_ino(sb, ind_locks, ino, mode) ?:
scoutfs_inode_index_try_lock_hold(sb, ind_locks, ind_seq, true);
scoutfs_inode_index_try_lock_hold(sb, ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
goto out_unlock;
ret = scoutfs_new_inode(sb, dir, mode, rdev, ino, *inode_lock, &inode);
if (ret < 0)
inode = scoutfs_new_inode(sb, dir, mode, rdev, ino, *inode_lock);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
goto out;
}
ret = scoutfs_dirty_inode_item(dir, *dir_lock);
out:
@@ -777,16 +724,10 @@ out_unlock:
if (ret) {
scoutfs_inode_index_unlock(sb, ind_locks);
scoutfs_unlock(sb, *dir_lock, SCOUTFS_LOCK_WRITE);
*dir_lock = NULL;
scoutfs_unlock(sb, *inode_lock, SCOUTFS_LOCK_WRITE);
*dir_lock = NULL;
*inode_lock = NULL;
if (orph_lock) {
scoutfs_unlock(sb, *orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
*orph_lock = NULL;
}
if (!IS_ERR_OR_NULL(inode))
iput(inode);
inode = ERR_PTR(ret);
}
@@ -800,7 +741,6 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
u64 hash;
u64 pos;
@@ -811,14 +751,9 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
inode = lock_hold_create(dir, dentry, mode, rdev,
&dir_lock, &inode_lock, NULL, &ind_locks);
&dir_lock, &inode_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
pos = SCOUTFS_I(dir)->next_readdir_pos++;
@@ -834,10 +769,6 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode->i_mtime = inode->i_atime = inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_mtime;
inode_inc_iversion(dir);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
if (S_ISDIR(mode)) {
inc_nlink(inode);
@@ -881,15 +812,12 @@ static int scoutfs_link(struct dentry *old_dentry,
struct super_block *sb = dir->i_sb;
struct scoutfs_lock *dir_lock;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
LIST_HEAD(ind_locks);
bool del_orphan = false;
u64 dir_size;
u64 ind_seq;
u64 hash;
u64 pos;
int ret;
int err;
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
@@ -912,25 +840,12 @@ static int scoutfs_link(struct dentry *old_dentry,
if (ret)
goto out_unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out_unlock;
dir_size = i_size_read(dir) + dentry->d_name.len;
if (inode->i_nlink == 0) {
del_orphan = true;
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(inode),
&orph_lock);
if (ret < 0)
goto out_unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, dir, false) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, true);
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -940,31 +855,20 @@ retry:
if (ret)
goto out;
if (del_orphan) {
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock);
if (ret)
goto out;
}
pos = SCOUTFS_I(dir)->next_readdir_pos++;
ret = add_entry_items(sb, scoutfs_ino(dir), hash, pos,
dentry->d_name.name, dentry->d_name.len,
scoutfs_ino(inode), inode->i_mode, dir_lock,
inode_lock);
if (ret) {
err = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
WARN_ON_ONCE(err); /* no orphan, might not scan and delete after crash */
if (ret)
goto out;
}
update_dentry_info(sb, dentry, hash, pos, dir_lock);
i_size_write(dir, dir_size);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode->i_ctime = dir->i_mtime;
inc_nlink(inode);
inode_inc_iversion(dir);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -977,8 +881,6 @@ out_unlock:
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
@@ -1003,7 +905,6 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
struct inode *inode = dentry->d_inode;
struct timespec ts = current_kernel_time();
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_lock *dir_lock = NULL;
LIST_HEAD(ind_locks);
u64 ind_seq;
@@ -1016,58 +917,42 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
if (ret)
return ret;
ret = alloc_dentry_info(dentry);
if (ret)
goto unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto unlock;
if (S_ISDIR(inode->i_mode) && i_size_read(inode)) {
ret = -ENOTEMPTY;
goto unlock;
}
if (should_orphan(inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(inode),
&orph_lock);
if (ret < 0)
goto unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, dir, false) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, false);
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
goto unlock;
if (should_orphan(inode)) {
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
if (ret < 0)
goto out;
}
ret = del_entry_items(sb, scoutfs_ino(dir), dentry_info_hash(dentry),
dentry_info_pos(dentry), scoutfs_ino(inode),
dir_lock, inode_lock);
if (ret) {
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock);
WARN_ON_ONCE(ret); /* should have been dirty */
if (ret)
goto out;
}
update_dentry_info(sb, dentry, 0, 0, dir_lock);
if (should_orphan(inode)) {
/*
* Insert the orphan item before we modify any inode
* metadata so we can gracefully exit should it
* fail.
*/
ret = scoutfs_orphan_inode(inode);
WARN_ON_ONCE(ret); /* XXX returning error but items deleted */
if (ret)
goto out;
}
dir->i_ctime = ts;
dir->i_mtime = ts;
i_size_write(dir, i_size_read(dir) - dentry->d_name.len);
inode_inc_iversion(dir);
inode_inc_iversion(inode);
inode->i_ctime = ts;
drop_nlink(inode);
@@ -1084,7 +969,6 @@ unlock:
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
@@ -1260,7 +1144,6 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
u64 hash;
u64 pos;
@@ -1278,14 +1161,9 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
return ret;
inode = lock_hold_create(dir, dentry, S_IFLNK|S_IRWXUGO, 0,
&dir_lock, &inode_lock, NULL, &ind_locks);
&dir_lock, &inode_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
ret = symlink_item_ops(sb, SYM_CREATE, scoutfs_ino(inode), inode_lock,
symname, name_len);
@@ -1305,13 +1183,9 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode_inc_iversion(dir);
inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_ctime;
i_size_write(inode, name_len);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -1319,11 +1193,11 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
insert_inode_hash(inode);
/* XXX need to set i_op/fop before here for sec callbacks */
d_instantiate(dentry, inode);
inode = NULL;
ret = 0;
out:
if (ret < 0) {
/* XXX remove inode items */
if (!IS_ERR_OR_NULL(inode))
iput(inode);
symlink_item_ops(sb, SYM_DELETE, scoutfs_ino(inode), inode_lock,
NULL, name_len);
@@ -1334,9 +1208,6 @@ out:
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
if (!IS_ERR_OR_NULL(inode))
iput(inode);
return ret;
}
@@ -1367,10 +1238,10 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
u64 dir_ino, u64 dir_pos,
struct list_head *list)
{
struct scoutfs_link_backref_entry *ent = NULL;
struct scoutfs_lock *lock = NULL;
struct scoutfs_link_backref_entry *ent;
struct scoutfs_key last_key;
struct scoutfs_key key;
struct scoutfs_lock *lock = NULL;
int len;
int ret;
@@ -1590,6 +1461,26 @@ static int verify_ancestors(struct super_block *sb, u64 p1, u64 p2,
return ret;
}
/*
* Make sure that a dirent from the dir to the inode exists at the name.
* The caller has the name locked in the dir.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, const char *name,
unsigned name_len, u64 hash, u64 ino,
struct scoutfs_lock *lock)
{
struct scoutfs_dirent dent;
int ret;
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
if (ret == 0 && le64_to_cpu(dent.ino) != ino)
ret = -ENOENT;
else if (ret == -ENOENT && ino == 0)
ret = 0;
return ret;
}
/*
* The vfs performs checks on cached inodes and dirents before calling
* here. It doesn't hold any locks so all of those checks can be based
@@ -1618,9 +1509,8 @@ static int verify_ancestors(struct super_block *sb, u64 p1, u64 p2,
* from using parent/child locking orders as two groups can have both
* parent and child relationships to each other.
*/
static int scoutfs_rename_common(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
static int scoutfs_rename(struct inode *old_dir, struct dentry *old_dentry,
struct inode *new_dir, struct dentry *new_dentry)
{
struct super_block *sb = old_dir->i_sb;
struct inode *old_inode = old_dentry->d_inode;
@@ -1630,7 +1520,6 @@ static int scoutfs_rename_common(struct inode *old_dir,
struct scoutfs_lock *new_dir_lock = NULL;
struct scoutfs_lock *old_inode_lock = NULL;
struct scoutfs_lock *new_inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct timespec now;
bool ins_new = false;
bool del_new = false;
@@ -1685,25 +1574,16 @@ static int scoutfs_rename_common(struct inode *old_dir,
}
/* make sure that the entries assumed by the argument still exist */
ret = alloc_dentry_info(old_dentry) ?:
alloc_dentry_info(new_dentry) ?:
verify_entry(sb, scoutfs_ino(old_dir), old_dentry, old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry, new_dir_lock);
ret = verify_entry(sb, scoutfs_ino(old_dir), old_dentry->d_name.name,
old_dentry->d_name.len, old_hash,
scoutfs_ino(old_inode), old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry->d_name.name,
new_dentry->d_name.len, new_hash,
new_inode ? scoutfs_ino(new_inode) : 0,
new_dir_lock);
if (ret)
goto out_unlock;
if ((flags & RENAME_NOREPLACE) && (new_inode != NULL)) {
ret = -EEXIST;
goto out_unlock;
}
if (should_orphan(new_inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(new_inode),
&orph_lock);
if (ret < 0)
goto out_unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, old_dir, false) ?:
@@ -1712,7 +1592,7 @@ retry:
scoutfs_inode_index_prepare(sb, &ind_locks, new_dir, false)) ?:
(new_inode == NULL ? 0 :
scoutfs_inode_index_prepare(sb, &ind_locks, new_inode, false)) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, true);
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -1763,7 +1643,7 @@ retry:
ins_old = true;
if (should_orphan(new_inode)) {
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(new_inode), orph_lock);
ret = scoutfs_orphan_inode(new_inode);
if (ret)
goto out;
}
@@ -1803,13 +1683,6 @@ retry:
if (new_inode)
old_inode->i_ctime = now;
inode_inc_iversion(old_dir);
inode_inc_iversion(old_inode);
if (new_dir != old_dir)
inode_inc_iversion(new_dir);
if (new_inode)
inode_inc_iversion(new_inode);
scoutfs_update_inode_item(old_dir, old_dir_lock, &ind_locks);
scoutfs_update_inode_item(old_inode, old_inode_lock, &ind_locks);
if (new_dir != old_dir)
@@ -1874,28 +1747,10 @@ out_unlock:
scoutfs_unlock(sb, old_dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, new_dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, rename_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
static int scoutfs_rename(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry)
{
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, 0);
}
static int scoutfs_rename2(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
{
if (flags & ~RENAME_NOREPLACE)
return -EINVAL;
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, flags);
}
#ifdef KC_FMODE_KABI_ITERATE
/* we only need this to set the iterate flag for kabi :/ */
static int scoutfs_dir_open(struct inode *inode, struct file *file)
@@ -1905,55 +1760,6 @@ static int scoutfs_dir_open(struct inode *inode, struct file *file)
}
#endif
static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
int ret;
if (dentry->d_name.len > SCOUTFS_NAME_LEN)
return -ENAMETOOLONG;
inode = lock_hold_create(dir, dentry, mode, 0,
&dir_lock, &inode_lock, &orph_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
if (ret < 0)
goto out; /* XXX returning error but items created */
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
si->crtime = inode->i_mtime;
insert_inode_hash(inode);
ihold(inode); /* need to update inode modifications in d_tmpfile */
d_tmpfile(dentry, inode);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
scoutfs_inode_index_unlock(sb, &ind_locks);
out:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
if (!IS_ERR_OR_NULL(inode))
iput(inode);
return ret;
}
const struct file_operations scoutfs_dir_fops = {
.KC_FOP_READDIR = scoutfs_readdir,
#ifdef KC_FMODE_KABI_ITERATE
@@ -1964,10 +1770,7 @@ const struct file_operations scoutfs_dir_fops = {
.llseek = generic_file_llseek,
};
const struct inode_operations_wrapper scoutfs_dir_iops = {
.ops = {
const struct inode_operations scoutfs_dir_iops = {
.lookup = scoutfs_lookup,
.mknod = scoutfs_mknod,
.create = scoutfs_create,
@@ -1984,9 +1787,6 @@ const struct inode_operations_wrapper scoutfs_dir_iops = {
.removexattr = scoutfs_removexattr,
.symlink = scoutfs_symlink,
.permission = scoutfs_permission,
},
.tmpfile = scoutfs_tmpfile,
.rename2 = scoutfs_rename2,
};
void scoutfs_dir_exit(void)


@@ -5,7 +5,7 @@
#include "lock.h"
extern const struct file_operations scoutfs_dir_fops;
extern const struct inode_operations_wrapper scoutfs_dir_iops;
extern const struct inode_operations scoutfs_dir_iops;
extern const struct inode_operations scoutfs_symlink_iops;
struct scoutfs_link_backref_entry {
@@ -14,7 +14,7 @@ struct scoutfs_link_backref_entry {
u64 dir_pos;
u16 name_len;
struct scoutfs_dirent dent;
/* the full name is allocated and stored in dent.name[] */
/* the full name is allocated and stored in dent.name[0] */
};
int scoutfs_dir_get_backref_path(struct super_block *sb, u64 ino, u64 dir_ino,


@@ -81,7 +81,7 @@ static struct dentry *scoutfs_fh_to_dentry(struct super_block *sb,
trace_scoutfs_fh_to_dentry(sb, fh_type, sfid);
if (scoutfs_valid_fileid(fh_type))
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino), 0, SCOUTFS_IGF_LINKED);
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino));
return d_obtain_alias(inode);
}
@@ -100,7 +100,7 @@ static struct dentry *scoutfs_fh_to_parent(struct super_block *sb,
if (scoutfs_valid_fileid(fh_type) &&
fh_type == FILEID_SCOUTFS_WITH_PARENT)
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino), 0, SCOUTFS_IGF_LINKED);
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino));
return d_obtain_alias(inode);
}
@@ -123,7 +123,7 @@ static struct dentry *scoutfs_get_parent(struct dentry *child)
scoutfs_dir_free_backref_path(sb, &list);
trace_scoutfs_get_parent(sb, inode, ino);
inode = scoutfs_iget(sb, ino, 0, SCOUTFS_IGF_LINKED);
inode = scoutfs_iget(sb, ino);
return d_obtain_alias(inode);
}


@@ -13,7 +13,6 @@
#include <linux/kernel.h>
#include <linux/fs.h>
#include "msg.h"
#include "ext.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -39,7 +38,7 @@ static bool ext_overlap(struct scoutfs_extent *ext, u64 start, u64 len)
return !(e_end < start || ext->start > end);
}
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out)
static bool ext_inside(u64 start, u64 len, struct scoutfs_extent *out)
{
u64 in_end = start + len - 1;
u64 out_end = out->start + out->len - 1;
@@ -192,9 +191,6 @@ int scoutfs_ext_insert(struct super_block *sb, struct scoutfs_ext_ops *ops,
/* inserting extent must not overlap */
if (found.len && ext_overlap(&ins, found.start, found.len)) {
if (ops->insert_overlap_warn)
scoutfs_err(sb, "inserting extent %llu.%llu overlaps existing %llu.%llu",
start, len, found.start, found.len);
ret = -EINVAL;
goto out;
}
@@ -245,9 +241,7 @@ int scoutfs_ext_remove(struct super_block *sb, struct scoutfs_ext_ops *ops,
goto out;
/* removed extent must be entirely within found */
if (!scoutfs_ext_inside(start, len, &found)) {
scoutfs_err(sb, "error removing extent %llu.%llu, isn't inside existing %llu.%llu",
start, len, found.start, found.len);
if (!ext_inside(start, len, &found)) {
ret = -EINVAL;
goto out;
}
@@ -347,7 +341,7 @@ int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
if (ret == 0 && ext_overlap(&found, start, len)) {
/* set extent must be entirely within found */
if (!scoutfs_ext_inside(start, len, &found)) {
if (!ext_inside(start, len, &found)) {
ret = -EINVAL;
goto out;
}
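
Both predicates used in these checks reduce to closed-interval arithmetic on [start, start + len - 1]. A standalone sketch with concrete numbers; the helper names are illustrative stand-ins, not the functions in this file:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* do [a_start, a_start + a_len - 1] and [b_start, b_start + b_len - 1]
     * share any block? */
    static bool ranges_overlap(uint64_t a_start, uint64_t a_len,
                               uint64_t b_start, uint64_t b_len)
    {
            uint64_t a_end = a_start + a_len - 1;
            uint64_t b_end = b_start + b_len - 1;

            return !(a_end < b_start || a_start > b_end);
    }

    /* does the inner range lie entirely within the outer range? */
    static bool range_inside(uint64_t in_start, uint64_t in_len,
                             uint64_t out_start, uint64_t out_len)
    {
            uint64_t in_end = in_start + in_len - 1;
            uint64_t out_end = out_start + out_len - 1;

            return in_start >= out_start && in_end <= out_end;
    }

    int main(void)
    {
            assert(ranges_overlap(10, 5, 14, 5));   /* 10..14 touches 14..18 */
            assert(!ranges_overlap(10, 5, 15, 5));  /* 10..14 vs 15..19: adjacent */
            assert(range_inside(12, 2, 10, 5));     /* 12..13 sits inside 10..14 */
            assert(!range_inside(12, 4, 10, 5));    /* 12..15 spills past 14 */
            return 0;
    }
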


@@ -15,8 +15,6 @@ struct scoutfs_ext_ops {
u64 start, u64 len, u64 map, u8 flags);
int (*remove)(struct super_block *sb, void *arg, u64 start, u64 len,
u64 map, u8 flags);
bool insert_overlap_warn;
};
bool scoutfs_ext_can_merge(struct scoutfs_extent *left,
@@ -33,6 +31,5 @@ int scoutfs_ext_alloc(struct super_block *sb, struct scoutfs_ext_ops *ops,
struct scoutfs_extent *ext);
int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags);
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out);
#endif


@@ -1,481 +0,0 @@
/*
* Copyright (C) 2019 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include <linux/device.h>
#include <linux/timer.h>
#include <asm/barrier.h>
#include "super.h"
#include "msg.h"
#include "sysfs.h"
#include "server.h"
#include "fence.h"
/*
* Fencing ensures that a given mount can no longer write to the
* metadata or data devices. It's necessary to ensure that it's safe to
* give another mount access to a resource that is currently owned by a
* mount that has stopped responding.
*
* Fencing is performed in collaboration between the currently elected
* quorum leader mount and userspace running on its host. The kernel
* creates fencing requests as it notices that mounts have stopped
* participating. The fence requests are published as directories in
* sysfs. Userspace agents watch for directories, take action, and
* write to files in the directory to indicate that the mount has been
* fenced. Once the mount is fenced the server can reclaim the
* resources previously held by the fenced mount.
*
* The fence requests contain metadata identifying the specific instance
* of the mount that needs to be fenced. This lets a fencing agent
* ensure that a specific mount has been fenced without necessarily
* destroying the node that was hosting it. Maybe the node had rebooted
* and the mount is no longer there, maybe the mount can be force
* unmounted, maybe the node can be configured to isolate the mount from
* the devices.
*
* The fencing mechanism is asynchronous and can fail but the server
* cannot make progress until it completes. If a fence request times
* out the server shuts down in the hope that another instance of a
* server might have more luck fencing a non-responsive mount.
*
* Sources of fencing are fundamentally anchored in shared persistent
* state. It is possible, though unlikely, that servers can fence a
* node and then themselves fail, leaving the next server to try and
* fence the mount again.
*/
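
From userspace, the protocol described above is just directory scanning and one-byte writes. A hypothetical agent loop might look like the sketch below; the per-rid directory and its "ipv4_addr", "fenced", and "error" files match the attributes published later in this file, but the isolate_host() helper, the polling interval, and the command-line path handling are assumptions for illustration only:

    #include <dirent.h>
    #include <stdio.h>
    #include <unistd.h>

    /* placeholder for the site-specific action that cuts a host off
     * from the shared devices (power fencing, switch isolation, ...) */
    static int isolate_host(const char *ipv4_addr)
    {
            (void)ipv4_addr;
            return 0;               /* pretend isolation succeeded */
    }

    static void handle_request(const char *fence_dir, const char *rid_dir)
    {
            char path[512], addr[64] = "";
            FILE *f;

            /* read the address of the mount that must be cut off */
            snprintf(path, sizeof(path), "%s/%s/ipv4_addr", fence_dir, rid_dir);
            if ((f = fopen(path, "r"))) {
                    if (!fgets(addr, sizeof(addr), f))
                            addr[0] = '\0';
                    fclose(f);
            }

            /* any write to "fenced" reports success, a write to "error"
             * tells the server that fencing failed */
            snprintf(path, sizeof(path), "%s/%s/%s", fence_dir, rid_dir,
                     isolate_host(addr) == 0 ? "fenced" : "error");
            if ((f = fopen(path, "w"))) {
                    fputs("1\n", f);
                    fclose(f);
            }
    }

    /* usage: fence_agent <path-to-fence-kset-directory> */
    int main(int argc, char **argv)
    {
            if (argc != 2)
                    return 1;

            for (;;) {
                    DIR *d = opendir(argv[1]);
                    struct dirent *de;

                    if (d) {
                            while ((de = readdir(d)))
                                    if (de->d_name[0] != '.')
                                            handle_request(argv[1], de->d_name);
                            closedir(d);
                    }
                    sleep(5);       /* a real agent would use inotify instead */
            }
            return 0;
    }
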
struct fence_info {
struct kset *kset;
struct kobject fence_dir_kobj;
struct workqueue_struct *wq;
wait_queue_head_t waitq;
spinlock_t lock;
struct list_head list;
};
#define DECLARE_FENCE_INFO(sb, name) \
struct fence_info *name = SCOUTFS_SB(sb)->fence_info
struct pending_fence {
struct super_block *sb;
struct scoutfs_sysfs_attrs ssa;
struct list_head entry;
struct timer_list timer;
ktime_t start_kt;
__be32 ipv4_addr;
bool fenced;
bool error;
int reason;
u64 rid;
};
#define FENCE_FROM_KOBJ(kobj) \
container_of(SCOUTFS_SYSFS_ATTRS(kobj), struct pending_fence, ssa)
#define DECLARE_FENCE_FROM_KOBJ(name, kobj) \
struct pending_fence *name = FENCE_FROM_KOBJ(kobj)
static void destroy_fence(struct pending_fence *fence)
{
struct super_block *sb = fence->sb;
scoutfs_sysfs_destroy_attrs(sb, &fence->ssa);
del_timer_sync(&fence->timer);
kfree(fence);
}
static ssize_t elapsed_secs_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
ktime_t now = ktime_get();
struct timeval tv = { 0, };
if (ktime_after(now, fence->start_kt))
tv = ktime_to_timeval(ktime_sub(now, fence->start_kt));
return snprintf(buf, PAGE_SIZE, "%llu", (long long)tv.tv_sec);
}
SCOUTFS_ATTR_RO(elapsed_secs);
static ssize_t fenced_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%u", !!fence->fenced);
}
/*
* any write to the fenced file from userspace indicates that the mount
* has been safely fenced and can no longer write to the shared device.
*/
static ssize_t fenced_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
DECLARE_FENCE_INFO(fence->sb, fi);
if (!fence->fenced) {
del_timer_sync(&fence->timer);
fence->fenced = true;
wake_up(&fi->waitq);
}
return count;
}
SCOUTFS_ATTR_RW(fenced);
static ssize_t error_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%u", !!fence->error);
}
/*
* The fencing agent can tell us that it was unable to fence the given mount.
* We can't continue if the mount can't be isolated so we shut down the
* server.
*/
static ssize_t error_store(struct kobject *kobj, struct kobj_attribute *attr, const char *buf,
size_t count)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
struct super_block *sb = fence->sb;
DECLARE_FENCE_INFO(fence->sb, fi);
if (!fence->error) {
fence->error = true;
scoutfs_err(sb, "error indicated by fence action for rid %016llx", fence->rid);
wake_up(&fi->waitq);
}
return count;
}
SCOUTFS_ATTR_RW(error);
static ssize_t ipv4_addr_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%pI4", &fence->ipv4_addr);
}
SCOUTFS_ATTR_RO(ipv4_addr);
static ssize_t reason_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
unsigned r = fence->reason;
char *str = "unknown";
static char *reasons[] = {
[SCOUTFS_FENCE_CLIENT_RECOVERY] = "client_recovery",
[SCOUTFS_FENCE_CLIENT_RECONNECT] = "client_reconnect",
[SCOUTFS_FENCE_QUORUM_BLOCK_LEADER] = "quorum_block_leader",
};
if (r < ARRAY_SIZE(reasons) && reasons[r])
str = reasons[r];
return snprintf(buf, PAGE_SIZE, "%s", str);
}
SCOUTFS_ATTR_RO(reason);
static ssize_t rid_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%016llx", fence->rid);
}
SCOUTFS_ATTR_RO(rid);
static struct attribute *fence_attrs[] = {
SCOUTFS_ATTR_PTR(elapsed_secs),
SCOUTFS_ATTR_PTR(fenced),
SCOUTFS_ATTR_PTR(error),
SCOUTFS_ATTR_PTR(ipv4_addr),
SCOUTFS_ATTR_PTR(reason),
SCOUTFS_ATTR_PTR(rid),
NULL,
};
#define FENCE_TIMEOUT_MS (MSEC_PER_SEC * 30)
static void fence_timeout(struct timer_list *timer)
{
struct pending_fence *fence = from_timer(fence, timer, timer);
struct super_block *sb = fence->sb;
DECLARE_FENCE_INFO(sb, fi);
fence->error = true;
scoutfs_err(sb, "fence request for rid %016llx was not serviced in %lums, raising error",
fence->rid, FENCE_TIMEOUT_MS);
wake_up(&fi->waitq);
}
int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int reason)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
int ret;
fence = kzalloc(sizeof(struct pending_fence), GFP_NOFS);
if (!fence) {
ret = -ENOMEM;
goto out;
}
fence->sb = sb;
scoutfs_sysfs_init_attrs(sb, &fence->ssa);
fence->start_kt = ktime_get();
fence->ipv4_addr = ipv4_addr;
fence->fenced = false;
fence->error = false;
fence->reason = reason;
fence->rid = rid;
ret = scoutfs_sysfs_create_attrs_parent(sb, &fi->kset->kobj,
&fence->ssa, fence_attrs,
"%016llx", rid);
if (ret < 0) {
kfree(fence);
goto out;
}
timer_setup(&fence->timer, fence_timeout, 0);
fence->timer.expires = jiffies + msecs_to_jiffies(FENCE_TIMEOUT_MS);
add_timer(&fence->timer);
spin_lock(&fi->lock);
list_add_tail(&fence->entry, &fi->list);
spin_unlock(&fi->lock);
out:
return ret;
}
/*
* Give the caller the rid of the next fence request which has been
* fenced. It doesn't take a position to iterate from because the caller
* either frees the fence request it's given or shuts down before asking
* again.
*/
int scoutfs_fence_next(struct super_block *sb, u64 *rid, int *reason, bool *error)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
int ret = -ENOENT;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->fenced || fence->error) {
*rid = fence->rid;
*reason = fence->reason;
*error = fence->error;
ret = 0;
break;
}
}
spin_unlock(&fi->lock);
return ret;
}
int scoutfs_fence_reason_pending(struct super_block *sb, int reason)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
bool pending = false;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->reason == reason) {
pending = true;
break;
}
}
spin_unlock(&fi->lock);
return pending;
}
int scoutfs_fence_free(struct super_block *sb, u64 rid)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
int ret = -ENOENT;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->rid == rid) {
list_del_init(&fence->entry);
ret = 0;
break;
}
}
spin_unlock(&fi->lock);
if (ret == 0) {
destroy_fence(fence);
wake_up(&fi->waitq);
}
return ret;
}
static bool all_fenced(struct fence_info *fi, bool *error)
{
struct pending_fence *fence;
bool all = true;
*error = false;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->error) {
*error = true;
all = true;
break;
}
if (!fence->fenced) {
all = false;
break;
}
}
spin_unlock(&fi->lock);
return all;
}
/*
* The caller waits for all the current requests to be fenced, but not
* necessarily reclaimed.
*/
int scoutfs_fence_wait_fenced(struct super_block *sb, long timeout_jiffies)
{
DECLARE_FENCE_INFO(sb, fi);
bool error;
long ret;
ret = wait_event_timeout(fi->waitq, all_fenced(fi, &error), timeout_jiffies);
if (ret == 0)
ret = -ETIMEDOUT;
else if (ret > 0)
ret = 0;
else if (error)
ret = -EIO;
return ret;
}
/*
* This must be called early during startup so that it is guaranteed that
* no other subsystems will try to call fence_start while we're waiting
* for testing fence requests to complete.
*/
int scoutfs_fence_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_mount_options opts;
struct fence_info *fi;
int ret;
/* can only fence if we can be elected by quorum */
scoutfs_options_read(sb, &opts);
if (opts.quorum_slot_nr == -1) {
ret = 0;
goto out;
}
fi = kzalloc(sizeof(struct fence_info), GFP_KERNEL);
if (!fi) {
ret = -ENOMEM;
goto out;
}
init_waitqueue_head(&fi->waitq);
spin_lock_init(&fi->lock);
INIT_LIST_HEAD(&fi->list);
sbi->fence_info = fi;
fi->kset = kset_create_and_add("fence", NULL, scoutfs_sysfs_sb_dir(sb));
if (!fi->kset) {
ret = -ENOMEM;
goto out;
}
fi->wq = alloc_workqueue("scoutfs_fence",
WQ_UNBOUND | WQ_NON_REENTRANT, 0);
if (!fi->wq) {
ret = -ENOMEM;
goto out;
}
ret = 0;
out:
if (ret)
scoutfs_fence_destroy(sb);
return ret;
}
/*
* Tear down all pending fence requests because the server is shutting down.
*/
void scoutfs_fence_stop(struct super_block *sb)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
do {
spin_lock(&fi->lock);
fence = list_first_entry_or_null(&fi->list, struct pending_fence, entry);
if (fence)
list_del_init(&fence->entry);
spin_unlock(&fi->lock);
if (fence) {
destroy_fence(fence);
wake_up(&fi->waitq);
}
} while (fence);
}
void scoutfs_fence_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct fence_info *fi = SCOUTFS_SB(sb)->fence_info;
struct pending_fence *fence;
struct pending_fence *tmp;
if (fi) {
if (fi->wq)
destroy_workqueue(fi->wq);
list_for_each_entry_safe(fence, tmp, &fi->list, entry)
destroy_fence(fence);
if (fi->kset)
kset_unregister(fi->kset);
kfree(fi);
sbi->fence_info = NULL;
}
}


@@ -1,20 +0,0 @@
#ifndef _SCOUTFS_FENCE_H_
#define _SCOUTFS_FENCE_H_
enum {
SCOUTFS_FENCE_CLIENT_RECOVERY,
SCOUTFS_FENCE_CLIENT_RECONNECT,
SCOUTFS_FENCE_QUORUM_BLOCK_LEADER,
};
int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int reason);
int scoutfs_fence_next(struct super_block *sb, u64 *rid, int *reason, bool *error);
int scoutfs_fence_reason_pending(struct super_block *sb, int reason);
int scoutfs_fence_free(struct super_block *sb, u64 rid);
int scoutfs_fence_wait_fenced(struct super_block *sb, long timeout_jiffies);
int scoutfs_fence_setup(struct super_block *sb);
void scoutfs_fence_stop(struct super_block *sb);
void scoutfs_fence_destroy(struct super_block *sb);
#endif


@@ -27,14 +27,8 @@
#include "file.h"
#include "inode.h"
#include "per_task.h"
#include "omap.h"
/*
* Start a high level file read. We check for offline extents in the
* read region here so that we only check the extents once. We use the
* dio count to prevent releasing while we're reading after we've
* checked the extents.
*/
/* TODO: Direct I/O, AIO */
ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos)
{
@@ -48,32 +42,30 @@ ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
int ret;
retry:
/* protect checked extents from release */
mutex_lock(&inode->i_mutex);
atomic_inc(&inode->i_dio_count);
mutex_unlock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
/* protect checked extents from stage/release */
mutex_lock(&inode->i_mutex);
atomic_inc(&inode->i_dio_count);
mutex_unlock(&inode->i_mutex);
ret = scoutfs_data_wait_check_iov(inode, iov, nr_segs, pos,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ,
&dw, inode_lock);
if (ret != 0)
goto out;
} else {
WARN_ON_ONCE(true);
}
ret = generic_file_aio_read(iocb, iov, nr_segs, pos);
out:
inode_dio_done(inode);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
if (scoutfs_per_task_del(&si->pt_data_lock, &pt_ent))
inode_dio_done(inode);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
if (scoutfs_data_wait_found(&dw)) {


@@ -26,7 +26,6 @@
#include "hash.h"
#include "srch.h"
#include "counters.h"
#include "xattr.h"
#include "scoutfs_trace.h"
/*
@@ -38,9 +37,9 @@
*
* The log btrees are modified by multiple transactions over time so
* there is no consistent ordering relationship between the items in
* different btrees. Each item in a log btree stores a seq for the
* item. Readers check log btrees for the most recent seq that it
* should use.
* different btrees. Each item in a log btree stores a version number
* for the item. Readers check log btrees for the most recent version
* that it should use.
*
* The item cache reads items in bulk from stable btrees, and writes a
* transaction's worth of dirty items into the item log btree.
@@ -53,8 +52,6 @@
*/
struct forest_info {
struct super_block *sb;
struct mutex mutex;
struct scoutfs_alloc *alloc;
struct scoutfs_block_writer *wri;
@@ -63,11 +60,6 @@ struct forest_info {
struct mutex srch_mutex;
struct scoutfs_srch_file srch_file;
struct scoutfs_block *srch_bl;
struct workqueue_struct *workq;
struct delayed_work log_merge_dwork;
atomic64_t inode_count_delta;
};
#define DECLARE_FOREST_INFO(sb, name) \
@@ -224,17 +216,25 @@ out:
}
struct forest_read_items_data {
int fic;
bool is_fs;
scoutfs_forest_item_cb cb;
void *cb_arg;
};
static int forest_read_items(struct super_block *sb, struct scoutfs_key *key, u64 seq, u8 flags,
static int forest_read_items(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, void *arg)
{
struct forest_read_items_data *rid = arg;
struct scoutfs_log_item_value _liv = {0,};
struct scoutfs_log_item_value *liv = &_liv;
return rid->cb(sb, key, seq, flags, val, val_len, rid->fic, rid->cb_arg);
if (!rid->is_fs) {
liv = val;
val += sizeof(struct scoutfs_log_item_value);
val_len -= sizeof(struct scoutfs_log_item_value);
}
return rid->cb(sb, key, liv, val, val_len, rid->cb_arg);
}
/*
@@ -246,16 +246,19 @@ static int forest_read_items(struct super_block *sb, struct scoutfs_key *key, u6
* that covers all the blocks. Any keys outside of this range can't be
* trusted because we didn't visit all the trees to check their items.
*
* We return -ESTALE if we hit stale blocks to give the caller a chance
* to reset their state and retry with a newer version of the btrees.
* If we hit stale blocks and retry we can call the callback for
* duplicate items. This is harmless because the items are stable while
* the caller holds their cluster lock and the caller has to filter out
* item versions anyway.
*/
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_lock *lock,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg)
{
DECLARE_STALE_TRACKING_SUPER_REFS(prev_refs, refs);
struct forest_read_items_data rid = {
.cb = cb,
.cb_arg = arg,
@@ -267,30 +270,32 @@ int scoutfs_forest_read_items(struct super_block *sb,
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_block *bl;
struct scoutfs_key ltk;
struct scoutfs_key orig_start = *start;
struct scoutfs_key orig_end = *end;
int ret;
int i;
scoutfs_inc_counter(sb, forest_read_items);
calc_bloom_nrs(&bloom, bloom_key);
calc_bloom_nrs(&bloom, &lock->start);
roots = lock->roots;
retry:
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
refs.fs_ref = roots.fs_root.ref;
refs.logs_ref = roots.logs_root.ref;
*start = orig_start;
*end = orig_end;
*start = lock->start;
*end = lock->end;
/* start with fs root items */
rid.fic |= FIC_FS_ROOT;
rid.is_fs = true;
ret = scoutfs_btree_read_items(sb, &roots.fs_root, key, start, end,
forest_read_items, &rid);
if (ret < 0)
goto out;
rid.fic &= ~FIC_FS_ROOT;
rid.is_fs = false;
scoutfs_key_init_log_trees(&ltk, 0, 0);
for (;; scoutfs_key_inc(&ltk)) {
@@ -335,40 +340,30 @@ int scoutfs_forest_read_items(struct super_block *sb,
scoutfs_inc_counter(sb, forest_bloom_pass);
if ((le64_to_cpu(lt.flags) & SCOUTFS_LOG_TREES_FINALIZED))
rid.fic |= FIC_FINALIZED;
ret = scoutfs_btree_read_items(sb, &lt.item_root, key, start,
end, forest_read_items, &rid);
if (ret < 0)
goto out;
rid.fic &= ~FIC_FINALIZED;
}
ret = 0;
out:
if (ret == -ESTALE) {
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0) {
ret = -EIO;
goto out;
}
prev_refs = refs;
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
goto retry;
}
return ret;
}
/*
* If the items are deltas then combine the src with the destination
* value and store the result in the destination.
*
* Returns:
* -errno: fatal error, no change
* 0: not delta items, no change
* +ve: SCOUTFS_DELTA_ values indicating when dst and/or src can be dropped
*/
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
void *src, int src_len)
{
if (key->sk_zone == SCOUTFS_XATTR_TOTL_ZONE)
return scoutfs_xattr_combine_totl(dst, dst_len, src, src_len);
return 0;
}
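
A hedged caller-side sketch of acting on those positive return values; the drop_*() helpers are invented placeholders for the caller's item bookkeeping and are not part of this diff:

        ret = scoutfs_forest_combine_deltas(key, dst, dst_len, src, src_len);
        if (ret < 0)
                goto out;                       /* fatal error, nothing changed */
        if (ret == SCOUTFS_DELTA_COMBINED)
                drop_src();                     /* src was folded into dst */
        else if (ret == SCOUTFS_DELTA_COMBINED_NULL)
                drop_both();                    /* combined value is empty, drop both */
        /* ret == 0: not delta items, keep both unchanged */
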
/*
* Make sure that the bloom bits for the lock's start key are all set in
* the current log's bloom block. We record the nr of our log tree in
@@ -438,29 +433,29 @@ out:
/*
* The caller is committing items in the transaction and has found the
* greatest item seq amongst them. We store it in the log_trees root
* greatest item version amongst them. We store it in the log_trees root
* to send to the server.
*/
void scoutfs_forest_set_max_seq(struct super_block *sb, u64 max_seq)
void scoutfs_forest_set_max_vers(struct super_block *sb, u64 max_vers)
{
DECLARE_FOREST_INFO(sb, finf);
finf->our_log.max_item_seq = cpu_to_le64(max_seq);
finf->our_log.max_item_vers = cpu_to_le64(max_vers);
}
/*
* The server is calling during setup to find the greatest item seq
* The server is calling during setup to find the greatest item version
* amongst all the log tree roots. They have the authoritative current
* super.
*
* Item seqs are only used to compare items in log trees, not in the
* main fs tree. All we have to do is find the greatest seq amongst the
* log_trees so that the core seq will have a greater seq than all the
* items in the log_trees.
* Item versions are only used to compare items in log trees, not in the
* main fs tree. All we have to do is find the greatest version amongst
* the log_trees so that new locks will have a write_version greater
* than all the items in the log_trees.
*/
int scoutfs_forest_get_max_seq(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *seq)
int scoutfs_forest_get_max_vers(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *vers)
{
struct scoutfs_log_trees *lt;
SCOUTFS_BTREE_ITEM_REF(iref);
@@ -468,7 +463,7 @@ int scoutfs_forest_get_max_seq(struct super_block *sb,
int ret;
scoutfs_key_init_log_trees(&ltk, 0, 0);
*seq = 0;
*vers = 0;
for (;; scoutfs_key_inc(&ltk)) {
ret = scoutfs_btree_next(sb, &super->logs_root, &ltk, &iref);
@@ -476,7 +471,8 @@ int scoutfs_forest_get_max_seq(struct super_block *sb,
if (iref.val_len == sizeof(struct scoutfs_log_trees)) {
ltk = *iref.key;
lt = iref.val;
*seq = max(*seq, le64_to_cpu(lt->max_item_seq));
*vers = max(*vers,
le64_to_cpu(lt->max_item_vers));
} else {
ret = -EIO;
}
@@ -525,62 +521,6 @@ int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id)
return ret;
}
void scoutfs_forest_inc_inode_count(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
atomic64_inc(&finf->inode_count_delta);
}
void scoutfs_forest_dec_inode_count(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
atomic64_dec(&finf->inode_count_delta);
}
/*
* Return the total inode count from the super block and all the
* log_btrees it references. This assumes it's working with a block
* reference hierarchy that should be fully consistent. If we see
* ESTALE we've hit persistent corruption.
*/
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count)
{
struct scoutfs_log_trees *lt;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
*inode_count = le64_to_cpu(super->inode_count);
scoutfs_key_init_log_trees(&key, 0, 0);
for (;;) {
ret = scoutfs_btree_next(sb, &super->logs_root, &key, &iref);
if (ret == 0) {
if (iref.val_len == sizeof(*lt)) {
key = *iref.key;
scoutfs_key_inc(&key);
lt = iref.val;
*inode_count += le64_to_cpu(lt->inode_count_delta);
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(&iref);
}
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
else if (ret == -ESTALE)
ret = -EIO;
break;
}
}
return ret;
}
/*
* This is called from transactions as a new transaction opens and is
* serialized with all writers.
@@ -601,7 +541,7 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
memset(&finf->our_log, 0, sizeof(finf->our_log));
finf->our_log.item_root = lt->item_root;
finf->our_log.bloom_ref = lt->bloom_ref;
finf->our_log.max_item_seq = lt->max_item_seq;
finf->our_log.max_item_vers = lt->max_item_vers;
finf->our_log.rid = lt->rid;
finf->our_log.nr = lt->nr;
finf->srch_file = lt->srch_file;
@@ -609,8 +549,6 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
WARN_ON_ONCE(finf->srch_bl); /* committing should have put the block */
finf->srch_bl = NULL;
atomic64_set(&finf->inode_count_delta, le64_to_cpu(lt->inode_count_delta));
trace_scoutfs_forest_init_our_log(sb, le64_to_cpu(lt->rid),
le64_to_cpu(lt->nr),
le64_to_cpu(lt->item_root.ref.blkno),
@@ -633,137 +571,15 @@ void scoutfs_forest_get_btrees(struct super_block *sb,
lt->item_root = finf->our_log.item_root;
lt->bloom_ref = finf->our_log.bloom_ref;
lt->srch_file = finf->srch_file;
lt->max_item_seq = finf->our_log.max_item_seq;
lt->max_item_vers = finf->our_log.max_item_vers;
scoutfs_block_put(sb, finf->srch_bl);
finf->srch_bl = NULL;
lt->inode_count_delta = cpu_to_le64(atomic64_read(&finf->inode_count_delta));
trace_scoutfs_forest_prepare_commit(sb, &lt->item_root.ref,
&lt->bloom_ref);
}
#define LOG_MERGE_DELAY_MS (5 * MSEC_PER_SEC)
/*
* Regularly try to get a log merge request from the server. If we get
* a request we walk the log_trees items to find input trees and pass
* them to btree_merge. All of our work is done in dirty blocks
* allocated from available free blocks that the server gave us. If we
* hit an error then we drop our dirty blocks without writing them and
* send an error flag to the server so they can reclaim our allocators
* and ignore the rest of our work.
*/
static void scoutfs_forest_log_merge_worker(struct work_struct *work)
{
struct forest_info *finf = container_of(work, struct forest_info,
log_merge_dwork.work);
struct super_block *sb = finf->sb;
struct scoutfs_btree_root_head *rhead = NULL;
struct scoutfs_btree_root_head *tmp;
struct scoutfs_log_merge_complete comp;
struct scoutfs_log_merge_request req;
struct scoutfs_log_trees *lt;
struct scoutfs_block_writer wri;
struct scoutfs_alloc alloc;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key next;
struct scoutfs_key key;
unsigned long delay;
LIST_HEAD(inputs);
int ret;
ret = scoutfs_client_get_log_merge(sb, &req);
if (ret < 0)
goto resched;
comp.root = req.root;
comp.start = req.start;
comp.end = req.end;
comp.remain = req.end;
comp.rid = req.rid;
comp.seq = req.seq;
comp.flags = 0;
scoutfs_alloc_init(&alloc, &req.meta_avail, &req.meta_freed);
scoutfs_block_writer_init(sb, &wri);
/* find finalized input log trees within the input seq */
for (scoutfs_key_init_log_trees(&key, 0, 0); ; scoutfs_key_inc(&key)) {
if (!rhead) {
rhead = kmalloc(sizeof(*rhead), GFP_NOFS);
if (!rhead) {
ret = -ENOMEM;
goto out;
}
}
ret = scoutfs_btree_next(sb, &req.logs_root, &key, &iref);
if (ret == 0) {
if (iref.val_len == sizeof(*lt)) {
key = *iref.key;
lt = iref.val;
if (lt->item_root.ref.blkno != 0 &&
(le64_to_cpu(lt->flags) & SCOUTFS_LOG_TREES_FINALIZED) &&
(le64_to_cpu(lt->finalize_seq) < le64_to_cpu(req.input_seq))) {
rhead->root = lt->item_root;
list_add_tail(&rhead->head, &inputs);
rhead = NULL;
}
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(&iref);
}
if (ret < 0) {
if (ret == -ENOENT) {
ret = 0;
break;
}
goto out;
}
}
/* shouldn't be possible, but it's harmless */
if (list_empty(&inputs)) {
ret = 0;
goto out;
}
ret = scoutfs_btree_merge(sb, &alloc, &wri, &req.start, &req.end,
&next, &comp.root, &inputs,
!!(req.flags & cpu_to_le64(SCOUTFS_LOG_MERGE_REQUEST_SUBTREE)),
SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT, 10);
if (ret == -ERANGE) {
comp.remain = next;
le64_add_cpu(&comp.flags, SCOUTFS_LOG_MERGE_COMP_REMAIN);
ret = 0;
}
out:
scoutfs_alloc_prepare_commit(sb, &alloc, &wri);
if (ret == 0)
ret = scoutfs_block_writer_write(sb, &wri);
scoutfs_block_writer_forget_all(sb, &wri);
comp.meta_avail = alloc.avail;
comp.meta_freed = alloc.freed;
if (ret < 0)
le64_add_cpu(&comp.flags, SCOUTFS_LOG_MERGE_COMP_ERROR);
ret = scoutfs_client_commit_log_merge(sb, &comp);
kfree(rhead);
list_for_each_entry_safe(rhead, tmp, &inputs, head)
kfree(rhead);
resched:
delay = ret == 0 ? 0 : msecs_to_jiffies(LOG_MERGE_DELAY_MS);
queue_delayed_work(finf->workq, &finf->log_merge_dwork, delay);
}
int scoutfs_forest_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
@@ -777,20 +593,10 @@ int scoutfs_forest_setup(struct super_block *sb)
}
/* the finf fields will be setup as we open a transaction */
finf->sb = sb;
mutex_init(&finf->mutex);
mutex_init(&finf->srch_mutex);
INIT_DELAYED_WORK(&finf->log_merge_dwork,
scoutfs_forest_log_merge_worker);
sbi->forest_info = finf;
finf->workq = alloc_workqueue("scoutfs_log_merge", WQ_NON_REENTRANT |
WQ_UNBOUND | WQ_HIGHPRI, 0);
if (!finf->workq) {
ret = -ENOMEM;
goto out;
}
ret = 0;
out:
if (ret)
@@ -799,24 +605,6 @@ out:
return 0;
}
void scoutfs_forest_start(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
queue_delayed_work(finf->workq, &finf->log_merge_dwork,
msecs_to_jiffies(LOG_MERGE_DELAY_MS));
}
void scoutfs_forest_stop(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
if (finf && finf->workq) {
cancel_delayed_work_sync(&finf->log_merge_dwork);
destroy_workqueue(finf->workq);
}
}
void scoutfs_forest_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
@@ -824,7 +612,6 @@ void scoutfs_forest_destroy(struct super_block *sb)
if (finf) {
scoutfs_block_put(sb, finf->srch_bl);
kfree(finf);
sbi->forest_info = NULL;
}


@@ -8,36 +8,29 @@ struct scoutfs_block;
#include "btree.h"
/* caller gives an item to the callback */
enum {
FIC_FS_ROOT = (1 << 0),
FIC_FINALIZED = (1 << 1),
};
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb, struct scoutfs_key *key, u64 seq,
u8 flags, void *val, int val_len, int fic, void *arg);
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_log_item_value *liv,
void *val, int val_len, void *arg);
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *next);
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_lock *lock,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg);
int scoutfs_forest_set_bloom_bits(struct super_block *sb,
struct scoutfs_lock *lock);
void scoutfs_forest_set_max_seq(struct super_block *sb, u64 max_seq);
int scoutfs_forest_get_max_seq(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *seq);
void scoutfs_forest_set_max_vers(struct super_block *sb, u64 max_vers);
int scoutfs_forest_get_max_vers(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *vers);
int scoutfs_forest_insert_list(struct super_block *sb,
struct scoutfs_btree_item_list *lst);
int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id);
void scoutfs_forest_inc_inode_count(struct super_block *sb);
void scoutfs_forest_dec_inode_count(struct super_block *sb);
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count);
void scoutfs_forest_init_btrees(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
@@ -45,15 +38,7 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
void scoutfs_forest_get_btrees(struct super_block *sb,
struct scoutfs_log_trees *lt);
/* > 0 error codes */
#define SCOUTFS_DELTA_COMBINED 1 /* src val was combined, drop src */
#define SCOUTFS_DELTA_COMBINED_NULL 2 /* combined val has no data, drop both */
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
void *src, int src_len);
int scoutfs_forest_setup(struct super_block *sb);
void scoutfs_forest_start(struct super_block *sb);
void scoutfs_forest_stop(struct super_block *sb);
void scoutfs_forest_destroy(struct super_block *sb);
#endif


@@ -1,15 +1,8 @@
#ifndef _SCOUTFS_FORMAT_H_
#define _SCOUTFS_FORMAT_H_
/*
* The format version defines the format of structures on devices,
* structures that are communicated over the wire, and the protocol
* behind the structures.
*/
#define SCOUTFS_FORMAT_VERSION_MIN 1
#define SCOUTFS_FORMAT_VERSION_MIN_STR __stringify(SCOUTFS_FORMAT_VERSION_MIN)
#define SCOUTFS_FORMAT_VERSION_MAX 1
#define SCOUTFS_FORMAT_VERSION_MAX_STR __stringify(SCOUTFS_FORMAT_VERSION_MAX)
#define SCOUTFS_INTEROP_VERSION 0ULL
#define SCOUTFS_INTEROP_VERSION_STR __stringify(0)
/* statfs(2) f_type */
#define SCOUTFS_SUPER_MAGIC 0x554f4353 /* "SCOU" */
@@ -175,11 +168,6 @@ struct scoutfs_key {
#define sko_rid _sk_first
#define sko_ino _sk_second
/* xattr totl */
#define skxt_a _sk_first
#define skxt_b _sk_second
#define skxt_c _sk_third
/* inode */
#define ski_ino _sk_first
@@ -207,16 +195,22 @@ struct scoutfs_key {
#define sklt_rid _sk_first
#define sklt_nr _sk_second
/* lock clients */
#define sklc_rid _sk_first
/* seqs */
#define skts_trans_seq _sk_first
#define skts_rid _sk_second
/* mounted clients */
#define skmc_rid _sk_first
/* free extents by blkno */
#define skfb_end _sk_first
#define skfb_len _sk_second
/* free extents by order */
#define skfo_revord _sk_first
#define skfo_end _sk_second
#define skfo_len _sk_third
#define skfb_end _sk_second
#define skfb_len _sk_third
/* free extents by len */
#define skfl_neglen _sk_second
#define skfl_blkno _sk_third
struct scoutfs_avl_root {
__le16 node;
@@ -252,15 +246,11 @@ struct scoutfs_btree_root {
struct scoutfs_btree_item {
struct scoutfs_avl_node node;
struct scoutfs_key key;
__le64 seq;
__le16 val_off;
__le16 val_len;
__u8 flags;
__u8 __pad[3];
__u8 __pad[4];
};
#define SCOUTFS_ITEM_FLAG_DELETION (1 << 0)
struct scoutfs_btree_block {
struct scoutfs_block_header hdr;
struct scoutfs_avl_root item_root;
@@ -269,7 +259,7 @@ struct scoutfs_btree_block {
__le16 mid_free_len;
__u8 level;
__u8 __pad[7];
struct scoutfs_btree_item items[];
struct scoutfs_btree_item items[0];
/* leaf blocks have a fixed size item offset hash table at the end */
};
@@ -298,10 +288,9 @@ struct scoutfs_alloc_list_head {
struct scoutfs_block_ref ref;
__le64 total_nr;
__le32 first_nr;
__le32 flags;
__u8 __pad[4];
};
/*
* While the main allocator uses extent items in btree blocks, metadata
* allocations for a single transaction are recorded in arrays in
@@ -318,7 +307,7 @@ struct scoutfs_alloc_list_block {
struct scoutfs_block_ref next;
__le32 start;
__le32 nr;
__le64 blknos[]; /* naturally aligned for sorting */
__le64 blknos[0]; /* naturally aligned for sorting */
};
#define SCOUTFS_ALLOC_LIST_MAX_BLOCKS \
@@ -330,25 +319,17 @@ struct scoutfs_alloc_list_block {
*/
struct scoutfs_alloc_root {
__le64 total_len;
__le32 flags;
__le32 _pad;
struct scoutfs_btree_root root;
};
/* Shared by _alloc_list_head and _alloc_root */
#define SCOUTFS_ALLOC_FLAG_LOW (1U << 0)
/* types of allocators, exposed to alloc_detail ioctl */
#define SCOUTFS_ALLOC_OWNER_NONE 0
#define SCOUTFS_ALLOC_OWNER_SERVER 1
#define SCOUTFS_ALLOC_OWNER_MOUNT 2
#define SCOUTFS_ALLOC_OWNER_SRCH 3
#define SCOUTFS_ALLOC_OWNER_LOG_MERGE 4
struct scoutfs_mounted_client_btree_val {
union scoutfs_inet_addr addr;
__u8 flags;
__u8 __pad[7];
};
#define SCOUTFS_MOUNTED_CLIENT_QUORUM (1 << 0)
@@ -381,7 +362,7 @@ struct scoutfs_srch_file {
struct scoutfs_srch_parent {
struct scoutfs_block_header hdr;
struct scoutfs_block_ref refs[];
struct scoutfs_block_ref refs[0];
};
#define SCOUTFS_SRCH_PARENT_REFS \
@@ -396,7 +377,7 @@ struct scoutfs_srch_block {
struct scoutfs_srch_entry tail;
__le32 entry_nr;
__le32 entry_bytes;
__u8 entries[];
__u8 entries[0];
};
/*
@@ -449,20 +430,10 @@ struct scoutfs_srch_compact {
/* client -> server: compaction failed */
#define SCOUTFS_SRCH_COMPACT_FLAG_ERROR (1 << 5)
#define SCOUTFS_DATA_ALLOC_MAX_ZONES 1024
#define SCOUTFS_DATA_ALLOC_ZONE_BYTES DIV_ROUND_UP(SCOUTFS_DATA_ALLOC_MAX_ZONES, 8)
#define SCOUTFS_DATA_ALLOC_ZONE_LE64S DIV_ROUND_UP(SCOUTFS_DATA_ALLOC_MAX_ZONES, 64)
/*
* XXX I imagine we should rename these now that they've evolved to track
* all the btrees that clients use during a transaction. It's not just
* about item logs, it's about clients making changes to trees.
*
* @get_trans_seq, @commit_trans_seq: This pair of sequence numbers
* determines whether a transaction is currently open for the mount that owns
* the log_trees struct. get_trans_seq is advanced by the server as the
* transaction is opened. The server sets commit_trans_seq equal to
* get_trans_seq as the transaction is committed.
*/
struct scoutfs_log_trees {
struct scoutfs_alloc_list_head meta_avail;
@@ -472,27 +443,31 @@ struct scoutfs_log_trees {
struct scoutfs_alloc_root data_avail;
struct scoutfs_alloc_root data_freed;
struct scoutfs_srch_file srch_file;
__le64 data_alloc_zone_blocks;
__le64 data_alloc_zones[SCOUTFS_DATA_ALLOC_ZONE_LE64S];
__le64 inode_count_delta;
__le64 get_trans_seq;
__le64 commit_trans_seq;
__le64 max_item_seq;
__le64 finalize_seq;
__le64 max_item_vers;
__le64 rid;
__le64 nr;
__le64 flags;
};
#define SCOUTFS_LOG_TREES_FINALIZED (1ULL << 0)
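
Reading the @get_trans_seq/@commit_trans_seq description above, whether the owning mount currently has a transaction open can presumably be derived by comparing the pair. A hedged sketch of that check; the helper is illustrative only and does not exist in this diff:

        static inline bool log_trees_trans_open(struct scoutfs_log_trees *lt)
        {
                /* get_ advances as a transaction opens, commit_ catches up at commit */
                return le64_to_cpu(lt->get_trans_seq) >
                       le64_to_cpu(lt->commit_trans_seq);
        }
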
struct scoutfs_log_item_value {
__le64 vers;
__u8 flags;
__u8 __pad[7];
__u8 data[0];
};
/* FS items are limited by the max btree value length */
#define SCOUTFS_MAX_VAL_SIZE SCOUTFS_BTREE_MAX_VAL_LEN
/*
* FS items are limited by the max btree value length with the log item
* value header.
*/
#define SCOUTFS_MAX_VAL_SIZE \
(SCOUTFS_BTREE_MAX_VAL_LEN - sizeof(struct scoutfs_log_item_value))
#define SCOUTFS_LOG_ITEM_FLAG_DELETION (1 << 0)
struct scoutfs_bloom_block {
struct scoutfs_block_header hdr;
__le64 total_set;
__le64 bits[];
__le64 bits[0];
};
/*
@@ -509,122 +484,50 @@ struct scoutfs_bloom_block {
member_sizeof(struct scoutfs_bloom_block, bits[0]) * 8)
#define SCOUTFS_FOREST_BLOOM_FUNC_BITS (SCOUTFS_BLOCK_LG_SHIFT + 3)
/*
* A private server btree item which records the status of a log merge
* operation that is in progress.
*/
struct scoutfs_log_merge_status {
struct scoutfs_key next_range_key;
__le64 nr_requests;
__le64 nr_complete;
__le64 seq;
};
/*
* A request is sent to the client and stored in a server btree item to
* record resources that would be reclaimed if the client failed. It
* has all the inputs needed for the client to perform its portion of a
* merge.
*/
struct scoutfs_log_merge_request {
struct scoutfs_alloc_list_head meta_avail;
struct scoutfs_alloc_list_head meta_freed;
struct scoutfs_btree_root logs_root;
struct scoutfs_btree_root root;
struct scoutfs_key start;
struct scoutfs_key end;
__le64 input_seq;
__le64 rid;
__le64 seq;
__le64 flags;
};
/* request root is subtree of fs root at parent, restricted merging modifications */
#define SCOUTFS_LOG_MERGE_REQUEST_SUBTREE (1ULL << 0)
/*
* The output of a client's merge of log btree items into a subtree
* rooted at a parent in the fs_root. The client sends it to the
* server, who stores it in a btree item for later splicing/rebalancing.
*/
struct scoutfs_log_merge_complete {
struct scoutfs_alloc_list_head meta_avail;
struct scoutfs_alloc_list_head meta_freed;
struct scoutfs_btree_root root;
struct scoutfs_key start;
struct scoutfs_key end;
struct scoutfs_key remain;
__le64 rid;
__le64 seq;
__le64 flags;
};
/* merge failed, ignore completion and reclaim stored request */
#define SCOUTFS_LOG_MERGE_COMP_ERROR (1ULL << 0)
/* merge didn't complete range, restart from remain */
#define SCOUTFS_LOG_MERGE_COMP_REMAIN (1ULL << 1)
/*
* Range items record the ranges of the fs keyspace that still need to
* be merged. They're added as a merge starts, removed as requests are
* sent and added back if the request didn't consume its entire range.
*/
struct scoutfs_log_merge_range {
struct scoutfs_key start;
struct scoutfs_key end;
};
struct scoutfs_log_merge_freeing {
struct scoutfs_btree_root root;
struct scoutfs_key key;
__le64 seq;
};
/*
* Keys are first sorted by major key zones.
*/
#define SCOUTFS_INODE_INDEX_ZONE 4
#define SCOUTFS_ORPHAN_ZONE 8
#define SCOUTFS_XATTR_TOTL_ZONE 12
#define SCOUTFS_FS_ZONE 16
#define SCOUTFS_LOCK_ZONE 20
#define SCOUTFS_INODE_INDEX_ZONE 1
#define SCOUTFS_RID_ZONE 2
#define SCOUTFS_FS_ZONE 3
#define SCOUTFS_LOCK_ZONE 4
/* Items only stored in server btrees */
#define SCOUTFS_LOG_TREES_ZONE 24
#define SCOUTFS_MOUNTED_CLIENT_ZONE 28
#define SCOUTFS_SRCH_ZONE 32
#define SCOUTFS_FREE_EXTENT_BLKNO_ZONE 36
#define SCOUTFS_FREE_EXTENT_ORDER_ZONE 40
/* Items only stored in log merge server btrees */
#define SCOUTFS_LOG_MERGE_STATUS_ZONE 44
#define SCOUTFS_LOG_MERGE_RANGE_ZONE 48
#define SCOUTFS_LOG_MERGE_REQUEST_ZONE 52
#define SCOUTFS_LOG_MERGE_COMPLETE_ZONE 56
#define SCOUTFS_LOG_MERGE_FREEING_ZONE 60
#define SCOUTFS_LOG_TREES_ZONE 6
#define SCOUTFS_LOCK_CLIENTS_ZONE 7
#define SCOUTFS_TRANS_SEQ_ZONE 8
#define SCOUTFS_MOUNTED_CLIENT_ZONE 9
#define SCOUTFS_SRCH_ZONE 10
#define SCOUTFS_FREE_EXTENT_ZONE 11
/* inode index zone */
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 4
#define SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE 8
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 1
#define SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE 2
#define SCOUTFS_INODE_INDEX_NR 3 /* don't forget to update */
/* orphan zone, redundant type used for clarity */
#define SCOUTFS_ORPHAN_TYPE 4
/* rid zone (also used in server alloc btree) */
#define SCOUTFS_ORPHAN_TYPE 1
/* fs zone */
#define SCOUTFS_INODE_TYPE 4
#define SCOUTFS_XATTR_TYPE 8
#define SCOUTFS_DIRENT_TYPE 12
#define SCOUTFS_READDIR_TYPE 16
#define SCOUTFS_LINK_BACKREF_TYPE 20
#define SCOUTFS_SYMLINK_TYPE 24
#define SCOUTFS_DATA_EXTENT_TYPE 28
#define SCOUTFS_INODE_TYPE 1
#define SCOUTFS_XATTR_TYPE 2
#define SCOUTFS_DIRENT_TYPE 3
#define SCOUTFS_READDIR_TYPE 4
#define SCOUTFS_LINK_BACKREF_TYPE 5
#define SCOUTFS_SYMLINK_TYPE 6
#define SCOUTFS_DATA_EXTENT_TYPE 7
/* lock zone, only ever found in lock ranges, never in persistent items */
#define SCOUTFS_RENAME_TYPE 4
#define SCOUTFS_RENAME_TYPE 1
/* srch zone, only in server btrees */
#define SCOUTFS_SRCH_LOG_TYPE 4
#define SCOUTFS_SRCH_BLOCKS_TYPE 8
#define SCOUTFS_SRCH_PENDING_TYPE 12
#define SCOUTFS_SRCH_BUSY_TYPE 16
#define SCOUTFS_SRCH_LOG_TYPE 1
#define SCOUTFS_SRCH_BLOCKS_TYPE 2
#define SCOUTFS_SRCH_PENDING_TYPE 3
#define SCOUTFS_SRCH_BUSY_TYPE 4
/* free extents in allocator btrees in client and server, by blkno or len */
#define SCOUTFS_FREE_EXTENT_BLKNO_TYPE 1
#define SCOUTFS_FREE_EXTENT_LEN_TYPE 2
/* file data extents have start and len in key */
struct scoutfs_data_extent_val {
@@ -646,20 +549,9 @@ struct scoutfs_xattr {
__le16 val_len;
__u8 name_len;
__u8 __pad[5];
__u8 name[];
__u8 name[0];
};
/*
* .totl. xattrs are mapped to items. The dotted u64s in the xattr name
* map to the item key. The item value total is the sum of all the
* xattr values. The item value count records the number of xattrs
* contributing to the total and is used when combining logged items to
* determine if totals are being created or destroyed.
*/
struct scoutfs_xattr_totl_val {
__le64 total;
__le64 count;
};
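
A hedged illustration of the mapping described above, with invented numbers: a .totl. xattr whose name carries the dotted u64s 7.42.1 and the value 100 would land in an item keyed by those numbers, whose value sums every contributing xattr:

        struct scoutfs_key key = {
                .sk_zone = SCOUTFS_XATTR_TOTL_ZONE,
                .skxt_a = cpu_to_le64(7),       /* first dotted u64 in the name */
                .skxt_b = cpu_to_le64(42),
                .skxt_c = cpu_to_le64(1),
        };
        struct scoutfs_xattr_totl_val tval = {
                .total = cpu_to_le64(100),      /* sum of the xattr values */
                .count = cpu_to_le64(1),        /* one contributing xattr so far */
        };
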
/* XXX does this exist upstream somewhere? */
#define member_sizeof(TYPE, MEMBER) (sizeof(((TYPE *)0)->MEMBER))
@@ -694,12 +586,6 @@ struct scoutfs_xattr_totl_val {
#define SCOUTFS_QUORUM_HB_IVAL_MS 100
#define SCOUTFS_QUORUM_HB_TIMEO_MS (5 * MSEC_PER_SEC)
/*
* A newly elected leader will give fencing some time before giving up and
* shutting down.
*/
#define SCOUTFS_QUORUM_FENCE_TO_MS (15 * MSEC_PER_SEC)
struct scoutfs_quorum_message {
__le64 fsid;
__le64 version;
@@ -731,76 +617,35 @@ struct scoutfs_quorum_config {
} slots[SCOUTFS_QUORUM_MAX_SLOTS];
};
enum {
SCOUTFS_QUORUM_EVENT_BEGIN, /* quorum service starting up */
SCOUTFS_QUORUM_EVENT_TERM, /* updated persistent term */
SCOUTFS_QUORUM_EVENT_ELECT, /* won election */
SCOUTFS_QUORUM_EVENT_FENCE, /* server fenced others */
SCOUTFS_QUORUM_EVENT_STOP, /* server stopped */
SCOUTFS_QUORUM_EVENT_END, /* quorum service shutting down */
SCOUTFS_QUORUM_EVENT_NR,
};
struct scoutfs_quorum_block {
struct scoutfs_block_header hdr;
__le64 write_nr;
__le64 term;
__le64 random_write_mark;
__le64 flags;
struct scoutfs_quorum_block_event {
__le64 write_nr;
__le64 rid;
__le64 term;
struct scoutfs_timespec ts;
} events[SCOUTFS_QUORUM_EVENT_NR];
} write, update_term, set_leader, clear_leader, fenced;
};
/*
* Tunable options that apply to the entire system. They can be set in
* mkfs or in sysfs files which send an rpc to the server to make the
* change. The super version defines the options that exist.
*
* @set_bits: bits for each 64bit starting offset after set_bits
* indicate which logical option is set.
*
* @data_alloc_zone_blocks: if set, the data device is logically divided
* into contiguous zones of this many blocks. Data allocation will try
* and isolate allocated extents for each mount to their own zone. The
* zone size must be larger than the data alloc high water mark and
* large enough such that the number of zones is kept within its static
* limit.
*/
struct scoutfs_volume_options {
__le64 set_bits;
__le64 data_alloc_zone_blocks;
__le64 __future_expansion[63];
};
#define scoutfs_volopt_nr(field) \
((offsetof(struct scoutfs_volume_options, field) - \
(offsetof(struct scoutfs_volume_options, set_bits) + \
member_sizeof(struct scoutfs_volume_options, set_bits))) / sizeof(__le64))
#define scoutfs_volopt_bit(field) \
(1ULL << scoutfs_volopt_nr(field))
#define SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_NR \
scoutfs_volopt_nr(data_alloc_zone_blocks)
#define SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_BIT \
scoutfs_volopt_bit(data_alloc_zone_blocks)
#define SCOUTFS_VOLOPT_EXPANSION_BITS \
(~(scoutfs_volopt_bit(__future_expansion) - 1))
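
As a worked expansion of those macros (my arithmetic, not text from the diff): data_alloc_zone_blocks is the first option slot after set_bits, so its option number is 0 and its bit is the lowest bit of set_bits. A check for the option might then look like this sketch, where volopt and zone_blocks are assumed locals:

        /*
         * scoutfs_volopt_nr(data_alloc_zone_blocks)
         *      == (8 - (0 + 8)) / sizeof(__le64) == 0
         * scoutfs_volopt_bit(data_alloc_zone_blocks) == 1ULL << 0
         */
        if (le64_to_cpu(volopt.set_bits) & SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_BIT)
                zone_blocks = le64_to_cpu(volopt.data_alloc_zone_blocks);
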
#define SCOUTFS_QUORUM_BLOCK_LEADER (1 << 0)
#define SCOUTFS_FLAG_IS_META_BDEV 0x01
struct scoutfs_super_block {
struct scoutfs_block_header hdr;
__le64 id;
__le64 fmt_vers;
__le64 version;
__le64 flags;
__u8 uuid[SCOUTFS_UUID_BYTES];
__le64 seq;
__le64 next_ino;
__le64 inode_count;
__le64 next_trans_seq;
__le64 total_meta_blocks; /* both static and dynamic */
__le64 first_meta_blkno; /* first dynamically allocated */
__le64 last_meta_blkno;
__le64 total_data_blocks;
__le64 first_data_blkno;
__le64 last_data_blkno;
struct scoutfs_quorum_config qconf;
struct scoutfs_alloc_root meta_alloc[2];
struct scoutfs_alloc_root data_alloc;
@@ -808,10 +653,10 @@ struct scoutfs_super_block {
struct scoutfs_alloc_list_head server_meta_freed[2];
struct scoutfs_btree_root fs_root;
struct scoutfs_btree_root logs_root;
struct scoutfs_btree_root log_merge;
struct scoutfs_btree_root lock_clients;
struct scoutfs_btree_root trans_seqs;
struct scoutfs_btree_root mounted_clients;
struct scoutfs_btree_root srch_root;
struct scoutfs_volume_options volopt;
};
#define SCOUTFS_ROOT_INO 1
@@ -835,6 +680,13 @@ struct scoutfs_super_block {
*
* @offline_blocks: The number of fixed 4k blocks that could be made
* online by staging.
*
* XXX
* - otime?
* - compat flags?
* - version?
* - generation?
* - be more careful with rdev?
*/
struct scoutfs_inode {
__le64 size;
@@ -845,7 +697,6 @@ struct scoutfs_inode {
__le64 offline_blocks;
__le64 next_readdir_pos;
__le64 next_xattr_id;
__le64 version;
__le32 nlink;
__le32 uid;
__le32 gid;
@@ -855,7 +706,6 @@ struct scoutfs_inode {
struct scoutfs_timespec atime;
struct scoutfs_timespec ctime;
struct scoutfs_timespec mtime;
struct scoutfs_timespec crtime;
};
#define SCOUTFS_INO_FLAG_TRUNCATE 0x1
@@ -879,7 +729,7 @@ struct scoutfs_dirent {
__le64 pos;
__u8 type;
__u8 __pad[7];
__u8 name[];
__u8 name[0];
};
#define SCOUTFS_NAME_LEN 255
@@ -907,7 +757,6 @@ enum scoutfs_dentry_type {
#define SCOUTFS_XATTR_MAX_NAME_LEN 255
#define SCOUTFS_XATTR_MAX_VAL_LEN 65535
#define SCOUTFS_XATTR_MAX_PART_SIZE SCOUTFS_MAX_VAL_SIZE
#define SCOUTFS_XATTR_MAX_TOTL_U64 23 /* octal U64_MAX */
#define SCOUTFS_XATTR_NR_PARTS(name_len, val_len) \
DIV_ROUND_UP(sizeof(struct scoutfs_xattr) + name_len + val_len, \
@@ -938,7 +787,7 @@ enum scoutfs_dentry_type {
*/
struct scoutfs_net_greeting {
__le64 fsid;
__le64 fmt_vers;
__le64 version;
__le64 server_term;
__le64 rid;
__le64 flags;
@@ -969,6 +818,7 @@ struct scoutfs_net_greeting {
* response messages.
*/
struct scoutfs_net_header {
__le64 clock_sync_id;
__le64 seq;
__le64 recv_seq;
__le64 id;
@@ -977,7 +827,7 @@ struct scoutfs_net_header {
__u8 flags;
__u8 error;
__u8 __pad[3];
__u8 data[];
__u8 data[0];
};
#define SCOUTFS_NET_FLAG_RESPONSE (1 << 0)
@@ -988,21 +838,13 @@ enum scoutfs_net_cmd {
SCOUTFS_NET_CMD_ALLOC_INODES,
SCOUTFS_NET_CMD_GET_LOG_TREES,
SCOUTFS_NET_CMD_COMMIT_LOG_TREES,
SCOUTFS_NET_CMD_SYNC_LOG_TREES,
SCOUTFS_NET_CMD_GET_ROOTS,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
SCOUTFS_NET_CMD_GET_LAST_SEQ,
SCOUTFS_NET_CMD_LOCK,
SCOUTFS_NET_CMD_LOCK_RECOVER,
SCOUTFS_NET_CMD_SRCH_GET_COMPACT,
SCOUTFS_NET_CMD_SRCH_COMMIT_COMPACT,
SCOUTFS_NET_CMD_GET_LOG_MERGE,
SCOUTFS_NET_CMD_COMMIT_LOG_MERGE,
SCOUTFS_NET_CMD_OPEN_INO_MAP,
SCOUTFS_NET_CMD_GET_VOLOPT,
SCOUTFS_NET_CMD_SET_VOLOPT,
SCOUTFS_NET_CMD_CLEAR_VOLOPT,
SCOUTFS_NET_CMD_RESIZE_DEVICES,
SCOUTFS_NET_CMD_STATFS,
SCOUTFS_NET_CMD_FAREWELL,
SCOUTFS_NET_CMD_UNKNOWN,
};
@@ -1045,32 +887,23 @@ struct scoutfs_net_roots {
struct scoutfs_btree_root srch_root;
};
struct scoutfs_net_resize_devices {
__le64 new_total_meta_blocks;
__le64 new_total_data_blocks;
};
struct scoutfs_net_statfs {
__u8 uuid[SCOUTFS_UUID_BYTES];
__le64 free_meta_blocks;
__le64 total_meta_blocks;
__le64 free_data_blocks;
__le64 total_data_blocks;
__le64 inode_count;
};
struct scoutfs_net_lock {
struct scoutfs_key key;
__le64 write_seq;
__le64 write_version;
__u8 old_mode;
__u8 new_mode;
__u8 __pad[6];
};
struct scoutfs_net_lock_grant_response {
struct scoutfs_net_lock nl;
struct scoutfs_net_roots roots;
};
struct scoutfs_net_lock_recover {
__le16 nr;
__u8 __pad[6];
struct scoutfs_net_lock locks[];
struct scoutfs_net_lock locks[0];
};
#define SCOUTFS_NET_LOCK_MAX_RECOVER_NR \
@@ -1085,7 +918,6 @@ enum scoutfs_lock_trace {
SLT_INVALIDATE,
SLT_REQUEST,
SLT_RESPONSE,
SLT_NR,
};
/*
@@ -1138,42 +970,4 @@ enum scoutfs_corruption_sources {
#define SC_NR_LONGS DIV_ROUND_UP(SC_NR_SOURCES, BITS_PER_LONG)
#define SCOUTFS_OPEN_INO_MAP_SHIFT 10
#define SCOUTFS_OPEN_INO_MAP_BITS (1 << SCOUTFS_OPEN_INO_MAP_SHIFT)
#define SCOUTFS_OPEN_INO_MAP_MASK (SCOUTFS_OPEN_INO_MAP_BITS - 1)
#define SCOUTFS_OPEN_INO_MAP_LE64S (SCOUTFS_OPEN_INO_MAP_BITS / 64)
/*
* The request and response conversation is as follows:
*
* client[init] -> server:
* group_nr = G
* req_id = 0 (I)
* server -> client[*]
* group_nr = G
* req_id = R
* client[*] -> server
* group_nr = G (I)
* req_id = R
* bits
* server -> client[init]
* group_nr = G (I)
* req_id = R (I)
* bits
*
* Many of the fields in individual messages are ignored ("I") because
* the net id or the omap req_id can be used to identify the
* conversation. We always include them on the wire to make inspected
* messages easier to follow.
*/
struct scoutfs_open_ino_map_args {
__le64 group_nr;
__le64 req_id;
};
struct scoutfs_open_ino_map {
struct scoutfs_open_ino_map_args args;
__le64 bits[SCOUTFS_OPEN_INO_MAP_LE64S];
};
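
The SHIFT/MASK constants above imply how an inode number is split into an open ino map group and a bit within that group's bitmap. A hedged sketch of that mapping, inferred from the constant names rather than spelled out in this diff:

        static inline u64 open_ino_map_group(u64 ino)
        {
                return ino >> SCOUTFS_OPEN_INO_MAP_SHIFT;
        }

        static inline bool open_ino_map_test(struct scoutfs_open_ino_map *map, u64 ino)
        {
                u64 bit = ino & SCOUTFS_OPEN_INO_MAP_MASK;

                return !!(le64_to_cpu(map->bits[bit / 64]) & (1ULL << (bit % 64)));
        }
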
#endif

File diff suppressed because it is too large


@@ -9,8 +9,6 @@
struct scoutfs_lock;
#define SCOUTFS_INODE_NR_INDICES 2
struct scoutfs_inode_info {
/* read or initialized for each inode instance */
u64 ino;
@@ -22,7 +20,6 @@ struct scoutfs_inode_info {
u64 online_blocks;
u64 offline_blocks;
u32 flags;
struct timespec crtime;
/*
* Protects per-inode extent items, most particularly readers
@@ -40,8 +37,8 @@ struct scoutfs_inode_info {
*/
struct mutex item_mutex;
bool have_item;
u64 item_majors[SCOUTFS_INODE_NR_INDICES];
u32 item_minors[SCOUTFS_INODE_NR_INDICES];
u64 item_majors[SCOUTFS_INODE_INDEX_NR];
u32 item_minors[SCOUTFS_INODE_INDEX_NR];
/* updated at on each new lock acquisition */
atomic64_t last_refreshed;
@@ -52,14 +49,7 @@ struct scoutfs_inode_info {
struct scoutfs_per_task pt_data_lock;
struct scoutfs_data_waitq data_waitq;
struct rw_semaphore xattr_rwsem;
struct list_head writeback_entry;
struct scoutfs_lock_coverage ino_lock_cov;
/* drop if i_count hits 0, allows drop while invalidate holds coverage */
bool drop_invalidated;
struct llist_node iput_llnode;
atomic_t iput_count;
struct rb_node writeback_node;
struct inode inode;
};
@@ -78,15 +68,11 @@ struct inode *scoutfs_alloc_inode(struct super_block *sb);
void scoutfs_destroy_inode(struct inode *inode);
int scoutfs_drop_inode(struct inode *inode);
void scoutfs_evict_inode(struct inode *inode);
void scoutfs_inode_queue_iput(struct inode *inode);
int scoutfs_orphan_inode(struct inode *inode);
#define SCOUTFS_IGF_LINKED (1 << 0) /* enoent if nlink == 0 */
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf, int igf);
struct inode *scoutfs_ilookup_nowait(struct super_block *sb, u64 ino);
struct inode *scoutfs_ilookup_nowait_nonewfree(struct super_block *sb, u64 ino);
struct inode *scoutfs_iget(struct super_block *sb, u64 ino);
struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino);
void scoutfs_inode_init_key(struct scoutfs_key *key, u64 ino);
void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
u32 minor, u64 ino);
int scoutfs_inode_index_start(struct super_block *sb, u64 *seq);
@@ -96,9 +82,9 @@ int scoutfs_inode_index_prepare_ino(struct super_block *sb,
struct list_head *list, u64 ino,
umode_t mode);
int scoutfs_inode_index_try_lock_hold(struct super_block *sb,
struct list_head *list, u64 seq, bool allocing);
struct list_head *list, u64 seq);
int scoutfs_inode_index_lock_hold(struct inode *inode, struct list_head *list,
bool set_data_seq, bool allocing);
bool set_data_seq);
void scoutfs_inode_index_unlock(struct super_block *sb, struct list_head *list);
int scoutfs_dirty_inode_item(struct inode *inode, struct scoutfs_lock *lock);
@@ -106,8 +92,9 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
struct list_head *ind_locks);
int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret);
int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, dev_t rdev,
u64 ino, struct scoutfs_lock *lock, struct inode **inode_ret);
struct inode *scoutfs_new_inode(struct super_block *sb, struct inode *dir,
umode_t mode, dev_t rdev, u64 ino,
struct scoutfs_lock *lock);
void scoutfs_inode_set_meta_seq(struct inode *inode);
void scoutfs_inode_set_data_seq(struct inode *inode);
@@ -120,25 +107,23 @@ u64 scoutfs_inode_data_version(struct inode *inode);
void scoutfs_inode_get_onoff(struct inode *inode, s64 *on, s64 *off);
int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock);
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock);
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock,
int flags);
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat);
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
void scoutfs_inode_schedule_orphan_dwork(struct super_block *sb);
int scoutfs_scan_orphans(struct super_block *sb);
void scoutfs_inode_queue_writeback(struct inode *inode);
int scoutfs_inode_walk_writeback(struct super_block *sb, bool write);
u64 scoutfs_last_ino(struct super_block *sb);
void scoutfs_inode_exit(void);
int scoutfs_inode_init(void);
int scoutfs_inode_setup(struct super_block *sb);
void scoutfs_inode_start(struct super_block *sb);
void scoutfs_inode_orphan_stop(struct super_block *sb);
void scoutfs_inode_flush_iput(struct super_block *sb);
void scoutfs_inode_destroy(struct super_block *sb);
#endif

View File

@@ -21,7 +21,6 @@
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/aio.h>
#include <linux/list_sort.h>
#include "format.h"
#include "key.h"
@@ -39,8 +38,6 @@
#include "hash.h"
#include "srch.h"
#include "alloc.h"
#include "server.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
@@ -387,7 +384,7 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
if (sblock > eblock)
return -EINVAL;
inode = scoutfs_ilookup_nowait_nonewfree(sb, args.ino);
inode = scoutfs_ilookup(sb, args.ino);
if (!inode) {
ret = -ESTALE;
goto out;
@@ -543,17 +540,19 @@ out:
static long scoutfs_ioc_stat_more(struct file *file, unsigned long arg)
{
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct scoutfs_ioctl_stat_more stm;
if (get_user(stm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
stm.valid_bytes = min_t(u64, stm.valid_bytes,
sizeof(struct scoutfs_ioctl_stat_more));
stm.meta_seq = scoutfs_inode_meta_seq(inode);
stm.data_seq = scoutfs_inode_data_seq(inode);
stm.data_version = scoutfs_inode_data_version(inode);
scoutfs_inode_get_onoff(inode, &stm.online_blocks, &stm.offline_blocks);
stm.crtime_sec = si->crtime.tv_sec;
stm.crtime_nsec = si->crtime.tv_nsec;
if (copy_to_user((void __user *)arg, &stm, sizeof(stm)))
if (copy_to_user((void __user *)arg, &stm, stm.valid_bytes))
return -EFAULT;
return 0;
@@ -617,7 +616,6 @@ static long scoutfs_ioc_data_waiting(struct file *file, unsigned long arg)
static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
{
struct inode *inode = file->f_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_ioctl_setattr_more __user *usm = (void __user *)arg;
struct scoutfs_ioctl_setattr_more sm;
@@ -676,7 +674,7 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
/* setting only so we don't see 0 data seq with nonzero data_version */
set_data_seq = sm.data_version != 0 ? true : false;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq, false);
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq);
if (ret)
goto unlock;
@@ -686,8 +684,6 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
i_size_write(inode, sm.i_size);
inode->i_ctime.tv_sec = sm.ctime_sec;
inode->i_ctime.tv_nsec = sm.ctime_nsec;
si->crtime.tv_sec = sm.crtime_sec;
si->crtime.tv_nsec = sm.crtime_nsec;
scoutfs_update_inode_item(inode, lock, &ind_locks);
ret = 0;
@@ -870,35 +866,28 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super;
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_ioctl_statfs_more sfm;
int ret;
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super)
return -ENOMEM;
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
if (get_user(sfm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
sfm.valid_bytes = min_t(u64, sfm.valid_bytes,
sizeof(struct scoutfs_ioctl_statfs_more));
sfm.fsid = le64_to_cpu(super->hdr.fsid);
sfm.rid = sbi->rid;
sfm.total_meta_blocks = le64_to_cpu(super->total_meta_blocks);
sfm.total_data_blocks = le64_to_cpu(super->total_data_blocks);
sfm.reserved_meta_blocks = scoutfs_server_reserved_meta_blocks(sb);
ret = scoutfs_client_get_last_seq(sb, &sfm.committed_seq);
if (ret)
goto out;
return ret;
if (copy_to_user((void __user *)arg, &sfm, sizeof(sfm)))
ret = -EFAULT;
else
ret = 0;
out:
kfree(super);
return ret;
if (copy_to_user((void __user *)arg, &sfm, sfm.valid_bytes))
return -EFAULT;
return 0;
}
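
A hedged userspace sketch of the two "_more" ioctls working together, using the valid_bytes handshake and the committed_seq comparison documented in ioctl.h; fd setup, includes, and error detail are elided and nothing here is lifted from an existing tool:

        /* 1: the file's last meta and data changes are on disk, 0: not yet, -1: error */
        static int file_changes_committed(int fd)
        {
                struct scoutfs_ioctl_stat_more stm = { .valid_bytes = sizeof(stm) };
                struct scoutfs_ioctl_statfs_more sfm = { .valid_bytes = sizeof(sfm) };

                if (ioctl(fd, SCOUTFS_IOC_STAT_MORE, &stm) ||
                    ioctl(fd, SCOUTFS_IOC_STATFS_MORE, &sfm))
                        return -1;

                /* all seqs up to and including committed_seq have been committed */
                return stm.meta_seq <= sfm.committed_seq &&
                       stm.data_seq <= sfm.committed_seq;
        }
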
struct copy_alloc_detail_args {
@@ -983,18 +972,12 @@ static long scoutfs_ioc_move_blocks(struct file *file, unsigned long arg)
goto out;
}
if (mb.flags & SCOUTFS_IOC_MB_UNKNOWN) {
ret = -EINVAL;
goto out;
}
ret = mnt_want_write_file(file);
if (ret < 0)
goto out;
ret = scoutfs_data_move_blocks(from, mb.from_off, mb.len,
to, mb.to_off, !!(mb.flags & SCOUTFS_IOC_MB_STAGE),
mb.data_version);
to, mb.to_off);
mnt_drop_write_file(file);
out:
fput(from_file);
@@ -1002,402 +985,6 @@ out:
return ret;
}
static long scoutfs_ioc_resize_devices(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_resize_devices __user *urd = (void __user *)arg;
struct scoutfs_ioctl_resize_devices rd;
struct scoutfs_net_resize_devices nrd;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&rd, urd, sizeof(rd))) {
ret = -EFAULT;
goto out;
}
nrd.new_total_meta_blocks = cpu_to_le64(rd.new_total_meta_blocks);
nrd.new_total_data_blocks = cpu_to_le64(rd.new_total_data_blocks);
ret = scoutfs_client_resize_devices(sb, &nrd);
out:
return ret;
}
struct xattr_total_entry {
struct rb_node node;
struct scoutfs_ioctl_xattr_total xt;
u64 fs_seq;
u64 fs_total;
u64 fs_count;
u64 fin_seq;
u64 fin_total;
s64 fin_count;
u64 log_seq;
u64 log_total;
s64 log_count;
};
static int cmp_xt_entry_name(const struct xattr_total_entry *a,
const struct xattr_total_entry *b)
{
return scoutfs_cmp_u64s(a->xt.name[0], b->xt.name[0]) ?:
scoutfs_cmp_u64s(a->xt.name[1], b->xt.name[1]) ?:
scoutfs_cmp_u64s(a->xt.name[2], b->xt.name[2]);
}
/*
* Record the contribution of the three classes of logged items we can
* see: the item in the fs_root, items from finalized log btrees, and
* items from active log btrees. Once we have the full set the caller
* can decide which of the items contribute to the total it sends to the
* user.
*/
static int read_xattr_total_item(struct super_block *sb, struct scoutfs_key *key,
u64 seq, u8 flags, void *val, int val_len, int fic, void *arg)
{
struct scoutfs_xattr_totl_val *tval = val;
struct xattr_total_entry *ent;
struct xattr_total_entry rd;
struct rb_root *root = arg;
struct rb_node *parent;
struct rb_node **node;
int cmp;
rd.xt.name[0] = le64_to_cpu(key->skxt_a);
rd.xt.name[1] = le64_to_cpu(key->skxt_b);
rd.xt.name[2] = le64_to_cpu(key->skxt_c);
/* find entry matching name */
node = &root->rb_node;
parent = NULL;
cmp = -1;
while (*node) {
parent = *node;
ent = container_of(*node, struct xattr_total_entry, node);
/* sort merge items by key then newest to oldest */
cmp = cmp_xt_entry_name(&rd, ent);
if (cmp < 0)
node = &(*node)->rb_left;
else if (cmp > 0)
node = &(*node)->rb_right;
else
break;
}
/* allocate and insert new node if we need to */
if (cmp != 0) {
ent = kzalloc(sizeof(*ent), GFP_KERNEL);
if (!ent)
return -ENOMEM;
memcpy(&ent->xt.name, &rd.xt.name, sizeof(ent->xt.name));
rb_link_node(&ent->node, parent, node);
rb_insert_color(&ent->node, root);
}
if (fic & FIC_FS_ROOT) {
ent->fs_seq = seq;
ent->fs_total = le64_to_cpu(tval->total);
ent->fs_count = le64_to_cpu(tval->count);
} else if (fic & FIC_FINALIZED) {
ent->fin_seq = seq;
ent->fin_total += le64_to_cpu(tval->total);
ent->fin_count += le64_to_cpu(tval->count);
} else {
ent->log_seq = seq;
ent->log_total += le64_to_cpu(tval->total);
ent->log_count += le64_to_cpu(tval->count);
}
scoutfs_inc_counter(sb, totl_read_item);
return 0;
}
/* these are always _safe, node stores next */
#define for_each_xt_ent(ent, node, root) \
for (node = rb_first(root); \
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
node = rb_next(node), 1); )
#define for_each_xt_ent_reverse(ent, node, root) \
for (node = rb_last(root); \
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
node = rb_prev(node), 1); )
static void free_xt_ent(struct rb_root *root, struct xattr_total_entry *ent)
{
rb_erase(&ent->node, root);
kfree(ent);
}
static void free_all_xt_ents(struct rb_root *root)
{
struct xattr_total_entry *ent;
struct rb_node *node;
for_each_xt_ent(ent, node, root)
free_xt_ent(root, ent);
}
/*
* Starting from the caller's pos_name, copy the names, totals, and
* counts for the .totl. tagged xattrs in the system sorted by their
* name until the user's buffer is full. This only sees xattrs that
* have been committed. It doesn't use locking to force commits and
* block writers so it can be a little bit out of date with respect to
* dirty xattrs in memory across the system.
*
* Our reader has to be careful because the log btree merging code can
* write partial results to the fs_root. This means that a reader can
* see both cases where new finalized logs should be applied to the old
* fs items and where old finalized logs have already been applied to
* the partially merged fs items. Currently active logged items are
* always applied on top of all cases.
*
* These cases are differentiated with a combination of sequence numbers
* in items, the count of contributing xattrs, and a flag
* differentiating finalized and active logged items. This lets us
* recognize all cases, including when finalized logs were merged and
* deleted the fs item.
*
* We're allocating a tracking struct for each totl name we see while
* traversing the item btrees. The forest reader is providing the items
* it finds in leaf blocks that contain the search key. In the worst
* case all of these blocks are full and none of the items overlap. At
* most, figure on the order of a thousand names per mount. But in practice
* many of these factors fall away: leaf blocks aren't full, leaf items
* overlap, there aren't finalized log btrees, and not all mounts are
* actively changing totals. We're much more likely to only read a
* leaf block's worth of totals that have been long since merged into
* the fs_root.
*/
static long scoutfs_ioc_read_xattr_totals(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_read_xattr_totals __user *urxt = (void __user *)arg;
struct scoutfs_ioctl_read_xattr_totals rxt;
struct scoutfs_ioctl_xattr_total __user *uxt;
struct xattr_total_entry *ent;
struct scoutfs_key key;
struct scoutfs_key bloom_key;
struct scoutfs_key start;
struct scoutfs_key end;
struct rb_root root = RB_ROOT;
struct rb_node *node;
int count = 0;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&rxt, urxt, sizeof(rxt))) {
ret = -EFAULT;
goto out;
}
uxt = (void __user *)rxt.totals_ptr;
if ((rxt.totals_ptr & (sizeof(__u64) - 1)) ||
(rxt.totals_bytes < sizeof(struct scoutfs_ioctl_xattr_total))) {
ret = -EINVAL;
goto out;
}
scoutfs_key_set_zeros(&bloom_key);
bloom_key.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_xattr_init_totl_key(&start, rxt.pos_name);
while (rxt.totals_bytes >= sizeof(struct scoutfs_ioctl_xattr_total)) {
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
if (scoutfs_key_compare(&start, &end) > 0)
break;
key = start;
ret = scoutfs_forest_read_items(sb, &key, &bloom_key, &start, &end,
read_xattr_total_item, &root);
if (ret < 0) {
if (ret == -ESTALE) {
free_all_xt_ents(&root);
continue;
}
goto out;
}
if (RB_EMPTY_ROOT(&root))
break;
/* trim totals that fall outside of the consistent range */
for_each_xt_ent(ent, node, &root) {
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
if (scoutfs_key_compare(&key, &start) < 0) {
free_xt_ent(&root, ent);
} else {
break;
}
}
for_each_xt_ent_reverse(ent, node, &root) {
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
if (scoutfs_key_compare(&key, &end) > 0) {
free_xt_ent(&root, ent);
} else {
break;
}
}
/* copy resulting unique non-zero totals to userspace */
for_each_xt_ent(ent, node, &root) {
if (rxt.totals_bytes < sizeof(ent->xt))
break;
/* start with the fs item if we have it */
if (ent->fs_seq != 0) {
ent->xt.total = ent->fs_total;
ent->xt.count = ent->fs_count;
scoutfs_inc_counter(sb, totl_read_fs);
}
/* apply finalized logs if they're newer or creating */
if (((ent->fs_seq != 0) && (ent->fin_seq > ent->fs_seq)) ||
((ent->fs_seq == 0) && (ent->fin_count > 0))) {
ent->xt.total += ent->fin_total;
ent->xt.count += ent->fin_count;
scoutfs_inc_counter(sb, totl_read_finalized);
}
/* always apply active logs which must be newer than fs and finalized */
if (ent->log_seq > 0) {
ent->xt.total += ent->log_total;
ent->xt.count += ent->log_count;
scoutfs_inc_counter(sb, totl_read_logged);
}
if (ent->xt.total != 0 || ent->xt.count != 0) {
if (copy_to_user(uxt, &ent->xt, sizeof(ent->xt))) {
ret = -EFAULT;
goto out;
}
uxt++;
rxt.totals_bytes -= sizeof(ent->xt);
count++;
scoutfs_inc_counter(sb, totl_read_copied);
}
free_xt_ent(&root, ent);
}
/* continue after the last possible key read */
start = end;
scoutfs_key_inc(&start);
}
ret = 0;
out:
free_all_xt_ents(&root);
return ret ?: count;
}
static long scoutfs_ioc_get_allocated_inos(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_get_allocated_inos __user *ugai = (void __user *)arg;
struct scoutfs_ioctl_get_allocated_inos gai;
struct scoutfs_lock *lock = NULL;
struct scoutfs_key key;
struct scoutfs_key end;
u64 __user *uinos;
u64 bytes;
u64 ino;
int nr;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&gai, ugai, sizeof(gai))) {
ret = -EFAULT;
goto out;
}
if ((gai.inos_ptr & (sizeof(__u64) - 1)) || (gai.inos_bytes < sizeof(__u64))) {
ret = -EINVAL;
goto out;
}
scoutfs_inode_init_key(&key, gai.start_ino);
scoutfs_inode_init_key(&end, gai.start_ino | SCOUTFS_LOCK_INODE_GROUP_MASK);
uinos = (void __user *)gai.inos_ptr;
bytes = gai.inos_bytes;
nr = 0;
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, gai.start_ino, &lock);
if (ret < 0)
goto out;
while (bytes >= sizeof(*uinos)) {
ret = scoutfs_item_next(sb, &key, &end, NULL, 0, lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
break;
}
if (key.sk_zone != SCOUTFS_FS_ZONE) {
ret = 0;
break;
}
/* all fs items are owned by allocated inodes, and _first is always ino */
ino = le64_to_cpu(key._sk_first);
if (put_user(ino, uinos)) {
ret = -EFAULT;
break;
}
uinos++;
bytes -= sizeof(*uinos);
if (++nr == INT_MAX)
break;
scoutfs_inode_init_key(&key, ino + 1);
}
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
out:
return ret ?: nr;
}
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
@@ -1427,12 +1014,6 @@ long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
return scoutfs_ioc_alloc_detail(file, arg);
case SCOUTFS_IOC_MOVE_BLOCKS:
return scoutfs_ioc_move_blocks(file, arg);
case SCOUTFS_IOC_RESIZE_DEVICES:
return scoutfs_ioc_resize_devices(file, arg);
case SCOUTFS_IOC_READ_XATTR_TOTALS:
return scoutfs_ioc_read_xattr_totals(file, arg);
case SCOUTFS_IOC_GET_ALLOCATED_INOS:
return scoutfs_ioc_get_allocated_inos(file, arg);
}
return -ENOTTY;


@@ -13,7 +13,8 @@
* This is enforced by pahole scripting in external build environments.
*/
#define SCOUTFS_IOCTL_MAGIC 0xE8 /* arbitrarily chosen hole in ioctl-number.rst */
/* XXX I have no idea how these are chosen. */
#define SCOUTFS_IOCTL_MAGIC 's'
/*
* Packed scoutfs keys rarely cross the ioctl boundary so we have a
@@ -87,7 +88,7 @@ enum scoutfs_ino_walk_seq_type {
* Adds entries to the user's buffer for each inode that is found in the
* given index between the first and last positions.
*/
#define SCOUTFS_IOC_WALK_INODES _IOW(SCOUTFS_IOCTL_MAGIC, 1, \
#define SCOUTFS_IOC_WALK_INODES _IOR(SCOUTFS_IOCTL_MAGIC, 1, \
struct scoutfs_ioctl_walk_inodes)
/*
@@ -162,11 +163,11 @@ struct scoutfs_ioctl_ino_path_result {
__u64 dir_pos;
__u16 path_bytes;
__u8 _pad[6];
__u8 path[];
__u8 path[0];
};
/* Get a single path from the root to the given inode number */
#define SCOUTFS_IOC_INO_PATH _IOW(SCOUTFS_IOCTL_MAGIC, 2, \
#define SCOUTFS_IOC_INO_PATH _IOR(SCOUTFS_IOCTL_MAGIC, 2, \
struct scoutfs_ioctl_ino_path)
/*
@@ -214,16 +215,23 @@ struct scoutfs_ioctl_stage {
/*
* Give the user inode fields that are not otherwise visible. statx()
* isn't always available and xattrs are relatively expensive.
*
* @valid_bytes stores the number of bytes that are valid in the
* structure. The caller sets this to the size of the struct that they
* understand. The kernel then fills and copies back the min of the
* sizes that it and the user caller understand. The user can tell if a
* field is set if all of its bytes are within the valid_bytes that the
* kernel set on return.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_stat_more {
__u64 valid_bytes;
__u64 meta_seq;
__u64 data_seq;
__u64 data_version;
__u64 online_blocks;
__u64 offline_blocks;
__u64 crtime_sec;
__u32 crtime_nsec;
__u8 _pad[4];
};
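
A hedged userspace sketch of the valid_bytes rule above: an older kernel may return fewer valid bytes than the caller's struct, so the caller only trusts a field whose bytes fall entirely within valid_bytes. use_crtime() and fd are assumed:

        struct scoutfs_ioctl_stat_more stm = { .valid_bytes = sizeof(stm) };

        if (ioctl(fd, SCOUTFS_IOC_STAT_MORE, &stm) == 0 &&
            stm.valid_bytes >= offsetof(struct scoutfs_ioctl_stat_more, crtime_nsec) +
                               sizeof(stm.crtime_nsec))
                use_crtime(stm.crtime_sec, stm.crtime_nsec);
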
#define SCOUTFS_IOC_STAT_MORE _IOR(SCOUTFS_IOCTL_MAGIC, 5, \
@@ -251,16 +259,15 @@ struct scoutfs_ioctl_data_waiting {
__u8 _pad[6];
};
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U64_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U8_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING _IOW(SCOUTFS_IOCTL_MAGIC, 6, \
#define SCOUTFS_IOC_DATA_WAITING _IOR(SCOUTFS_IOCTL_MAGIC, 6, \
struct scoutfs_ioctl_data_waiting)
/*
* If i_size is set then data_version must be non-zero. If the offline
* flag is set then i_size must be set and a offline extent will be
* created from offset 0 to i_size. The time fields are always applied
* to the inode.
* created from offset 0 to i_size.
*/
struct scoutfs_ioctl_setattr_more {
__u64 data_version;
@@ -268,12 +275,11 @@ struct scoutfs_ioctl_setattr_more {
__u64 flags;
__u64 ctime_sec;
__u32 ctime_nsec;
__u32 crtime_nsec;
__u64 crtime_sec;
__u8 _pad[4];
};
#define SCOUTFS_IOC_SETATTR_MORE_OFFLINE (1 << 0)
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U64_MAX << 1)
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U8_MAX << 1)
#define SCOUTFS_IOC_SETATTR_MORE _IOW(SCOUTFS_IOCTL_MAGIC, 7, \
struct scoutfs_ioctl_setattr_more)
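
A hedged sketch of an archive agent restoring a released file's metadata with this ioctl, following the rules above; fd, size, data_version, and the ctime values are assumed locals and nothing here comes from an existing restore tool:

        struct scoutfs_ioctl_setattr_more sm = {
                .data_version = data_version,   /* must be non-zero when i_size is set */
                .i_size = size,
                .flags = SCOUTFS_IOC_SETATTR_MORE_OFFLINE, /* offline extent from 0 to i_size */
                .ctime_sec = ctime_sec,
                .ctime_nsec = ctime_nsec,
        };

        if (ioctl(fd, SCOUTFS_IOC_SETATTR_MORE, &sm))
                perror("SCOUTFS_IOC_SETATTR_MORE");
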
@@ -285,8 +291,8 @@ struct scoutfs_ioctl_listxattr_hidden {
__u32 hash_pos;
};
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOWR(SCOUTFS_IOCTL_MAGIC, 8, \
struct scoutfs_ioctl_listxattr_hidden)
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOR(SCOUTFS_IOCTL_MAGIC, 8, \
struct scoutfs_ioctl_listxattr_hidden)
/*
* Return the inode numbers of inodes which might contain the given
@@ -339,23 +345,32 @@ struct scoutfs_ioctl_search_xattrs {
/* set in output_flags if returned inodes reached last_ino */
#define SCOUTFS_SEARCH_XATTRS_OFLAG_END (1ULL << 0)
#define SCOUTFS_IOC_SEARCH_XATTRS _IOW(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_search_xattrs)
#define SCOUTFS_IOC_SEARCH_XATTRS _IOR(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_search_xattrs)
/*
* Give the user information about the filesystem.
*
* @valid_bytes stores the number of bytes that are valid in the
* structure. The caller sets this to the size of the struct that they
* understand. The kernel then fills and copies back the min of the
* sizes that it and the user caller understand. The user can tell if a
* field is set if all of its bytes are within the valid_bytes that the
* kernel set on return.
*
* @committed_seq: All seqs up to and including this seq have been
* committed. Can be compared with meta_seq and data_seq from inodes in
* stat_more to discover if changes have been committed to disk.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_statfs_more {
__u64 valid_bytes;
__u64 fsid;
__u64 rid;
__u64 committed_seq;
__u64 total_meta_blocks;
__u64 total_data_blocks;
__u64 reserved_meta_blocks;
};
#define SCOUTFS_IOC_STATFS_MORE _IOR(SCOUTFS_IOCTL_MAGIC, 10, \
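/*
 * Not part of the header above: a hedged userspace sketch of the
 * committed_seq comparison the comment describes, reusing the
 * stat_more struct from earlier in this header.  Helper name is
 * illustrative.
 */
#include <string.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"	/* illustrative name for the header above */

/* returns 1 if the file's last changes are committed, 0 if not, -1 on error */
static int file_changes_committed(int fd)
{
	struct scoutfs_ioctl_statfs_more sfm;
	struct scoutfs_ioctl_stat_more stm;

	memset(&sfm, 0, sizeof(sfm));
	memset(&stm, 0, sizeof(stm));
	sfm.valid_bytes = sizeof(sfm);
	stm.valid_bytes = sizeof(stm);

	if (ioctl(fd, SCOUTFS_IOC_STATFS_MORE, &sfm) < 0 ||
	    ioctl(fd, SCOUTFS_IOC_STAT_MORE, &stm) < 0)
		return -1;

	/* all seqs up to and including committed_seq are stable on disk */
	return stm.meta_seq <= sfm.committed_seq &&
	       stm.data_seq <= sfm.committed_seq;
}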
@@ -376,7 +391,7 @@ struct scoutfs_ioctl_data_wait_err {
__s64 err;
};
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOW(SCOUTFS_IOCTL_MAGIC, 11, \
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOR(SCOUTFS_IOCTL_MAGIC, 11, \
struct scoutfs_ioctl_data_wait_err)
@@ -395,7 +410,7 @@ struct scoutfs_ioctl_alloc_detail_entry {
__u8 __pad[6];
};
#define SCOUTFS_IOC_ALLOC_DETAIL _IOW(SCOUTFS_IOCTL_MAGIC, 12, \
#define SCOUTFS_IOC_ALLOC_DETAIL _IOR(SCOUTFS_IOCTL_MAGIC, 12, \
struct scoutfs_ioctl_alloc_detail)
/*
@@ -403,13 +418,12 @@ struct scoutfs_ioctl_alloc_detail_entry {
* on the same file system.
*
* from_fd specifies the source file and the ioctl is called on the
* destination file. Both files must have write access. from_off specifies
* the byte offset in the source, to_off is the byte offset in the
* destination, and len is the number of bytes in the region to move. All of
* the offsets and lengths must be in multiples of 4KB, except in the case
* where the from_off + len ends at the i_size of the source
* file. data_version is only used when STAGE flag is set (see below). flags
* field is currently only used to optionally specify STAGE behavior.
* destination file. Both files must have write access. from_off
* specifies the byte offset in the source, to_off is the byte offset in
* the destination, and len is the number of bytes in the region to
* move. All of the offsets and lengths must be in multiples of 4KB,
* except in the case where the from_off + len ends at the i_size of the
* source file.
*
* This interface only moves extents which are block granular, it does
* not perform RMW of sub-block byte extents and it does not overwrite
@@ -421,142 +435,33 @@ struct scoutfs_ioctl_alloc_detail_entry {
* i_size. The i_size update will maintain final partial blocks in the
* source.
*
* If STAGE flag is not set, it will return an error if either of the files
* have offline extents. It will return 0 when all of the extents in the
* source region have been moved to the destination. Moving extents updates
* the ctime, mtime, meta_seq, data_seq, and data_version fields of both the
* source and destination inodes. If an error is returned then partial
* It will return an error if either of the files have offline extents.
* It will return 0 when all of the extents in the source region have
* been moved to the destination. Moving extents updates the ctime,
* mtime, meta_seq, data_seq, and data_version fields of both the source
* and destination inodes. If an error is returned then partial
* progress may have been made and inode fields may have been updated.
*
* If STAGE flag is set, as above except destination range must be in an
* offline extent. Fields are updated only for source inode.
*
* Errors specific to this interface include:
*
* EINVAL: from_off, len, or to_off aren't a multiple of 4KB; the source
* and destination files are the same inode; either the source or
* destination is not a regular file; the destination file has
* an existing overlapping extent (if STAGE flag not set); the
* destination range is not in an offline extent (if STAGE set).
* an existing overlapping extent.
* EOVERFLOW: either from_off + len or to_off + len exceeded 64bits.
* EBADF: from_fd isn't a valid open file descriptor.
* EXDEV: the source and destination files are in different filesystems.
* EISDIR: either the source or destination is a directory.
* ENODATA: either the source or destination file have offline extents and
* STAGE flag is not set.
* ESTALE: data_version does not match destination data_version.
* ENODATA: either the source or destination file have offline extents.
*/
#define SCOUTFS_IOC_MB_STAGE (1 << 0)
#define SCOUTFS_IOC_MB_UNKNOWN (U64_MAX << 1)
struct scoutfs_ioctl_move_blocks {
__u64 from_fd;
__u64 from_off;
__u64 len;
__u64 to_off;
__u64 data_version;
__u64 flags;
};
#define SCOUTFS_IOC_MOVE_BLOCKS _IOW(SCOUTFS_IOCTL_MAGIC, 13, \
#define SCOUTFS_IOC_MOVE_BLOCKS _IOR(SCOUTFS_IOCTL_MAGIC, 13, \
struct scoutfs_ioctl_move_blocks)
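/*
 * Not part of the header above: a hedged userspace sketch of a plain
 * (non-staging) move of a 4KB-aligned region from src_fd into dst_fd,
 * as described in the comment.  The ioctl is issued on the destination
 * descriptor; helper name is illustrative.
 */
#include <sys/ioctl.h>
#include <linux/types.h>
#include "scoutfs_ioctl.h"	/* illustrative name for the header above */

static int move_region(int dst_fd, int src_fd, __u64 from_off, __u64 to_off,
		       __u64 len)
{
	struct scoutfs_ioctl_move_blocks mb = {
		.from_fd = src_fd,
		.from_off = from_off,
		.len = len,
		.to_off = to_off,
		/* data_version and flags stay zero for a plain move */
	};

	return ioctl(dst_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
}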
struct scoutfs_ioctl_resize_devices {
__u64 new_total_meta_blocks;
__u64 new_total_data_blocks;
};
#define SCOUTFS_IOC_RESIZE_DEVICES \
_IOW(SCOUTFS_IOCTL_MAGIC, 14, struct scoutfs_ioctl_resize_devices)
#define SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR 3
/*
* Copy global totals of .totl. xattr value payloads to the user. This
* only sees xattrs which have been committed and this doesn't force
* commits of dirty data throughout the system. This can be out of sync
* by the amount of xattrs that can be dirty in open transactions that
* are being built throughout the system.
*
* pos_name: The array name of the first total that can be returned.
* The name is derived from the key of the xattrs that contribute to the
* total. For xattrs with a .totl.1.2.3 key, the pos_name[] should be
* {1, 2, 3}.
*
* totals_ptr: An aligned pointer to a buffer that will be filled with
* an array of scoutfs_ioctl_xattr_total structs for each total copied.
*
* totals_bytes: The size of the buffer in bytes. There must be room
* for at least one struct element so that returning 0 can promise that
* there were no more totals to copy after the pos_name.
*
* The number of copied elements is returned and 0 is returned if there
* were no more totals to copy after the pos_name.
*
* In addition to the usual errnos (EIO, EINVAL, EPERM, EFAULT) this
* adds:
*
* EINVAL: The totals_ buffer was not aligned or was not large enough
* for a single struct entry.
*/
struct scoutfs_ioctl_read_xattr_totals {
__u64 pos_name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
__u64 totals_ptr;
__u64 totals_bytes;
};
/*
* An individual total that is given to userspace. The total is the
* sum of all the values in the xattr payloads matching the name. The
* count is the number of xattrs, not number of files, contributing to
* the total.
*/
struct scoutfs_ioctl_xattr_total {
__u64 name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
__u64 total;
__u64 count;
};
#define SCOUTFS_IOC_READ_XATTR_TOTALS \
_IOW(SCOUTFS_IOCTL_MAGIC, 15, struct scoutfs_ioctl_read_xattr_totals)
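/*
 * Not part of the header above: a hedged userspace sketch that walks
 * all committed .totl. totals.  It assumes pos_name[0] is the most
 * significant component when advancing the cursor past the last
 * returned total; buffer size and helper name are illustrative.
 */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"	/* illustrative name for the header above */

static int dump_xattr_totals(int fd)
{
	struct scoutfs_ioctl_xattr_total tots[128];
	struct scoutfs_ioctl_read_xattr_totals rxt;
	int ret;
	int i;

	memset(&rxt, 0, sizeof(rxt));
	rxt.totals_ptr = (unsigned long)tots;
	rxt.totals_bytes = sizeof(tots);

	for (;;) {
		ret = ioctl(fd, SCOUTFS_IOC_READ_XATTR_TOTALS, &rxt);
		if (ret <= 0)
			return ret;	/* 0: no more totals, < 0: error */

		for (i = 0; i < ret; i++)
			printf("%llu.%llu.%llu total %llu count %llu\n",
			       (unsigned long long)tots[i].name[0],
			       (unsigned long long)tots[i].name[1],
			       (unsigned long long)tots[i].name[2],
			       (unsigned long long)tots[i].total,
			       (unsigned long long)tots[i].count);

		/* advance the cursor just past the last returned name */
		memcpy(rxt.pos_name, tots[ret - 1].name, sizeof(rxt.pos_name));
		for (i = SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR - 1; i >= 0; i--) {
			if (++rxt.pos_name[i] != 0)
				break;
		}
	}
}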
/*
* This fills the caller's inos array with inode numbers that are in use
* after the start ino, within an internal inode group.
*
* This only makes a promise about the state of the inode numbers within
* the first and last numbers returned by one call. At one time, all of
* those inodes were still allocated. They could have changed before
* the call returned. And any numbers outside of the first and last
* (or single) are undefined.
*
* This doesn't iterate over all allocated inodes, it only probes a
* single group that the start inode is within. This interface was
* first introduced to support tests that needed to find out about a
* specific inode, while having some other similarly niche uses. It is
* unsuitable for a consistent iteration over all the inode numbers in
* use.
*
* This test of inode items doesn't serialize with the inode lifetime
* mechanism. It only tells you the numbers of inodes that were once
* active in the system and haven't yet been fully deleted. The inode
* numbers returned could have been in the process of being deleted and
* were already unreachable even before the call started.
*
* @start_ino: the first inode number that could be returned
* @inos_ptr: pointer to an aligned array of 64bit inode numbers
* @inos_bytes: the number of bytes available in the inos_ptr array
*
* Returns errors or the count of inode numbers returned, quite possibly
* including 0.
*/
struct scoutfs_ioctl_get_allocated_inos {
__u64 start_ino;
__u64 inos_ptr;
__u64 inos_bytes;
};
#define SCOUTFS_IOC_GET_ALLOCATED_INOS \
_IOW(SCOUTFS_IOCTL_MAGIC, 16, struct scoutfs_ioctl_get_allocated_inos)
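/*
 * Not part of the header above: a hedged userspace sketch that tests
 * whether a specific inode number was still allocated at the time of
 * the call, within the caveats described above.  Buffer size and
 * helper name are illustrative.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/types.h>
#include "scoutfs_ioctl.h"	/* illustrative name for the header above */

/* returns 1 if ino was in use, 0 if not, -1 on error */
static int ino_was_allocated(int fd, __u64 ino)
{
	struct scoutfs_ioctl_get_allocated_inos gai;
	__u64 inos[64];
	int ret;
	int i;

	memset(&gai, 0, sizeof(gai));
	gai.start_ino = ino;
	gai.inos_ptr = (unsigned long)inos;
	gai.inos_bytes = sizeof(inos);

	ret = ioctl(fd, SCOUTFS_IOC_GET_ALLOCATED_INOS, &gai);
	if (ret < 0)
		return -1;

	/* only numbers between the first and last returned are meaningful */
	for (i = 0; i < ret; i++) {
		if (inos[i] == ino)
			return 1;
	}
	return 0;
}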
#endif


@@ -95,7 +95,7 @@ struct item_cache_info {
/* written by page readers, read by shrink */
spinlock_t active_lock;
struct list_head active_list;
struct rb_root active_root;
};
#define DECLARE_ITEM_CACHE_INFO(sb, name) \
@@ -127,7 +127,6 @@ struct cached_page {
unsigned long lru_time;
struct list_head dirty_list;
struct list_head dirty_head;
u64 max_seq;
struct page *page;
unsigned int page_off;
unsigned int erased_bytes;
@@ -139,11 +138,10 @@ struct cached_item {
struct list_head dirty_head;
unsigned int dirty:1, /* needs to be written */
persistent:1, /* in btrees, needs deletion item */
deletion:1, /* negative del item for writing */
			delta:1;	/* item values are combined, freed after write */
deletion:1; /* negative del item for writing */
unsigned int val_len;
struct scoutfs_key key;
u64 seq;
struct scoutfs_log_item_value liv;
char val[0];
};
@@ -151,8 +149,7 @@ struct cached_item {
static int item_val_bytes(int val_len)
{
return round_up(offsetof(struct cached_item, val[val_len]),
CACHED_ITEM_ALIGN);
return round_up(offsetof(struct cached_item, val[val_len]), CACHED_ITEM_ALIGN);
}
/*
@@ -348,8 +345,7 @@ static struct cached_page *alloc_pg(struct super_block *sb, gfp_t gfp)
page = alloc_page(GFP_NOFS | gfp);
if (!page || !pg) {
kfree(pg);
if (page)
__free_page(page);
__free_page(page);
return NULL;
}
@@ -387,12 +383,6 @@ static void put_pg(struct super_block *sb, struct cached_page *pg)
}
}
static void update_pg_max_seq(struct cached_page *pg, struct cached_item *item)
{
if (item->seq > pg->max_seq)
pg->max_seq = item->seq;
}
/*
* Allocate space for a new item from the free offset at the end of a
* cached page. This isn't a blocking allocation, and it's likely that
@@ -400,7 +390,8 @@ static void update_pg_max_seq(struct cached_page *pg, struct cached_item *item)
* page or checking the free space first.
*/
static struct cached_item *alloc_item(struct cached_page *pg,
struct scoutfs_key *key, u64 seq, bool deletion,
struct scoutfs_key *key,
struct scoutfs_log_item_value *liv,
void *val, int val_len)
{
struct cached_item *item;
@@ -415,24 +406,22 @@ static struct cached_item *alloc_item(struct cached_page *pg,
INIT_LIST_HEAD(&item->dirty_head);
item->dirty = 0;
item->persistent = 0;
item->deletion = !!deletion;
item->delta = 0;
item->deletion = !!(liv->flags & SCOUTFS_LOG_ITEM_FLAG_DELETION);
item->val_len = val_len;
item->key = *key;
item->seq = seq;
item->liv = *liv;
if (val_len)
memcpy(item->val, val, val_len);
update_pg_max_seq(pg, item);
return item;
}
static void erase_item(struct cached_page *pg, struct cached_item *item)
{
rbtree_erase(&item->node, &pg->item_root);
pg->erased_bytes += item_val_bytes(item->val_len);
pg->erased_bytes += round_up(item_val_bytes(item->val_len),
CACHED_ITEM_ALIGN);
}
static void lru_add(struct super_block *sb, struct item_cache_info *cinf,
@@ -632,8 +621,6 @@ static void mark_item_dirty(struct super_block *sb,
list_add_tail(&item->dirty_head, &pg->dirty_list);
item->dirty = 1;
}
update_pg_max_seq(pg, item);
}
static void clear_item_dirty(struct super_block *sb,
@@ -685,12 +672,6 @@ static void erase_page_items(struct cached_page *pg,
* to the dirty list after the left page, and by adding items to the
* tail of right's dirty list in key sort order.
*
 * The max_seq of the source page might be larger than the seq of any of
 * its items if it was raised to protect an erased item from being
 * reclaimed while an older read was in flight.  We don't know where
 * that seq might fall in the source page so we have to assume that it's
 * in the key range being moved and update the destination page's
 * max_seq accordingly.
*
* The caller is responsible for page locking and managing the lru.
*/
static void move_page_items(struct super_block *sb,
@@ -716,7 +697,7 @@ static void move_page_items(struct super_block *sb,
if (stop && scoutfs_key_compare(&from->key, stop) >= 0)
break;
to = alloc_item(right, &from->key, from->seq, from->deletion, from->val,
to = alloc_item(right, &from->key, &from->liv, from->val,
from->val_len);
rbtree_insert(&to->node, par, pnode, &right->item_root);
par = &to->node;
@@ -728,13 +709,10 @@ static void move_page_items(struct super_block *sb,
}
to->persistent = from->persistent;
to->delta = from->delta;
to->deletion = from->deletion;
erase_item(left, from);
}
if (left->max_seq > right->max_seq)
right->max_seq = left->max_seq;
}
enum page_intersection_type {
@@ -874,7 +852,8 @@ static void compact_page_items(struct super_block *sb,
for (from = first_item(&pg->item_root); from; from = next_item(from)) {
to = page_address(empty->page) + page_off;
page_off += item_val_bytes(from->val_len);
page_off += round_up(item_val_bytes(from->val_len),
CACHED_ITEM_ALIGN);
/* copy the entire item, struct members and all */
memcpy(to, from, item_val_bytes(from->val_len));
@@ -1281,76 +1260,46 @@ static int cache_empty_page(struct super_block *sb,
return 0;
}
/*
* Readers operate independently from dirty items and transactions.
* They read a set of persistent items and insert them into the cache
* when there aren't already pages whose key range contains the items.
* This naturally prefers cached dirty items over stale read items.
*
* We have to deal with the case where dirty items are written and
* invalidated while a read is in flight. The reader won't have seen
* the items that were dirty in their persistent roots as they started
* reading. By the time they insert their read pages the previously
* dirty items have been reclaimed and are not in the cache. The old
 * stale items will be inserted in their place, effectively corrupting
 * the cache by making the dirty items disappear.
*
* We fix this by tracking the max seq of items in pages. As readers
* start they record the current transaction seq. Invalidation skips
* pages with a max seq greater than the first reader seq because the
* items in the page have to stick around to prevent the readers stale
* items from being inserted.
*
* This naturally only affects a small set of pages with items that were
 * written relatively recently.  If we're under memory pressure then we
 * probably have a lot of pages and they'll naturally have items that
 * were visible to any readers.  We don't bother with the complicated and
* expensive further refinement of tracking the ranges that are being
* read and comparing those with pages to invalidate.
*/
struct active_reader {
struct list_head head;
u64 seq;
struct rb_node node;
struct scoutfs_key start;
struct scoutfs_key end;
};
#define INIT_ACTIVE_READER(rdr) \
struct active_reader rdr = { .head = LIST_HEAD_INIT(rdr.head) }
static void add_active_reader(struct super_block *sb, struct active_reader *active)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
BUG_ON(!list_empty(&active->head));
active->seq = scoutfs_trans_sample_seq(sb);
spin_lock(&cinf->active_lock);
list_add_tail(&active->head, &cinf->active_list);
spin_unlock(&cinf->active_lock);
}
static u64 first_active_reader_seq(struct item_cache_info *cinf)
static struct active_reader *active_rbtree_walk(struct rb_root *root,
struct scoutfs_key *start,
struct scoutfs_key *end,
struct rb_node **par,
struct rb_node ***pnode)
{
struct rb_node **node = &root->rb_node;
struct rb_node *parent = NULL;
struct active_reader *ret = NULL;
struct active_reader *active;
u64 first;
int cmp;
/* only the calling task adds or deletes this active */
spin_lock(&cinf->active_lock);
active = list_first_entry_or_null(&cinf->active_list, struct active_reader, head);
first = active ? active->seq : U64_MAX;
spin_unlock(&cinf->active_lock);
while (*node) {
parent = *node;
active = container_of(*node, struct active_reader, node);
return first;
}
static void del_active_reader(struct item_cache_info *cinf, struct active_reader *active)
{
/* only the calling task adds or deletes this active */
if (!list_empty(&active->head)) {
spin_lock(&cinf->active_lock);
list_del_init(&active->head);
spin_unlock(&cinf->active_lock);
cmp = scoutfs_key_compare_ranges(start, end, &active->start,
&active->end);
if (cmp < 0) {
node = &(*node)->rb_left;
} else if (cmp > 0) {
node = &(*node)->rb_right;
} else {
ret = active;
node = &(*node)->rb_left;
}
}
if (par)
*par = parent;
if (pnode)
*pnode = node;
return ret;
}
/*
@@ -1359,16 +1308,16 @@ static void del_active_reader(struct item_cache_info *cinf, struct active_reader
* on our root and aren't in dirty or lru lists.
*
* We need to store deletion items here as we read items from all the
* btrees so that they can override older items. The deletion items
* will be deleted before we insert the pages into the cache. We don't
* insert old versions of items into the tree here so that the trees
* don't have to compare seqs.
* btrees so that they can override older versions of the items. The
* deletion items will be deleted before we insert the pages into the
* cache. We don't insert old versions of items into the tree here so
* that the trees don't have to compare versions.
*/
static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 seq, u8 flags,
void *val, int val_len, int fic, void *arg)
static int read_page_item(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_log_item_value *liv, void *val,
int val_len, void *arg)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const bool deletion = !!(flags & SCOUTFS_ITEM_FLAG_DELETION);
struct rb_root *root = arg;
struct cached_page *right = NULL;
struct cached_page *left = NULL;
@@ -1382,7 +1331,7 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
pg = page_rbtree_walk(sb, root, key, key, NULL, NULL, &p_par, &p_pnode);
found = item_rbtree_walk(&pg->item_root, key, NULL, &par, &pnode);
if (found && (found->seq >= seq))
if (found && (le64_to_cpu(found->liv.vers) >= le64_to_cpu(liv->vers)))
return 0;
if (!page_has_room(pg, val_len)) {
@@ -1396,7 +1345,7 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
&pnode);
}
item = alloc_item(pg, key, seq, deletion, val, val_len);
item = alloc_item(pg, key, liv, val, val_len);
if (!item) {
/* simpler split of private pages, no locking/dirty/lru */
if (!left)
@@ -1419,7 +1368,7 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
put_pg(sb, pg);
pg = scoutfs_key_compare(key, &left->end) <= 0 ? left : right;
item = alloc_item(pg, key, seq, deletion, val, val_len);
item = alloc_item(pg, key, liv, val, val_len);
found = item_rbtree_walk(&pg->item_root, key, NULL, &par,
&pnode);
@@ -1450,20 +1399,22 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
* locks held, but without locking the cache. The regions we read can
* be stale with respect to the current cache, which can be read and
* dirtied by other cluster lock holders on our node, but the cluster
* locks protect the stable items we read. Invalidation is careful not
* to drop pages that have items that we couldn't see because they were
* dirty when we started reading.
* locks protect the stable items we read.
*
* The forest item reader is reading stable trees that could be
* overwritten. It can return -ESTALE which we return to the caller who
* will retry the operation and work with a new set of more recent
* btrees.
* There's also the exciting case where a reader can populate the cache
* with stale old persistent data which was read before another local
* cluster lock holder was able to read, dirty, write, and then shrink
* the cache. In this case the cache couldn't be cleared by lock
* invalidation because the caller is actively holding the lock. But
* shrinking could evict the cache within the held lock. So we record
* that we're an active reader in the range covered by the lock and
* shrink will refuse to reclaim any pages that intersect with our read.
*/
static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
struct scoutfs_key *key, struct scoutfs_lock *lock)
{
struct rb_root root = RB_ROOT;
INIT_ACTIVE_READER(active);
struct active_reader active;
struct cached_page *right = NULL;
struct cached_page *pg;
struct cached_page *rd;
@@ -1479,6 +1430,15 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
int pgi;
int ret;
/* stop shrink from freeing new clean data, would let us cache stale */
active.start = lock->start;
active.end = lock->end;
spin_lock(&cinf->active_lock);
active_rbtree_walk(&cinf->active_root, &active.start, &active.end,
&par, &pnode);
rbtree_insert(&active.node, par, pnode, &cinf->active_root);
spin_unlock(&cinf->active_lock);
/* start with an empty page that covers the whole lock */
pg = alloc_pg(sb, 0);
if (!pg) {
@@ -1489,12 +1449,8 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
pg->end = lock->end;
rbtree_insert(&pg->node, NULL, &root.rb_node, &root);
/* set active reader seq before reading persistent roots */
add_active_reader(sb, &active);
start = lock->start;
end = lock->end;
ret = scoutfs_forest_read_items(sb, key, &lock->start, &start, &end, read_page_item, &root);
ret = scoutfs_forest_read_items(sb, lock, key, &start, &end,
read_page_item, &root);
if (ret < 0)
goto out;
@@ -1570,7 +1526,9 @@ retry:
ret = 0;
out:
del_active_reader(cinf, &active);
spin_lock(&cinf->active_lock);
rbtree_erase(&active.node, &cinf->active_root);
spin_unlock(&cinf->active_lock);
/* free any pages we left dangling on error */
for_each_page_safe(&root, rd, pg_tmp) {
@@ -1629,7 +1587,7 @@ retry:
&lock->end);
else
ret = read_pages(sb, cinf, key, lock);
if (ret < 0 && ret != -ESTALE)
if (ret < 0)
goto out;
goto retry;
}
@@ -1825,21 +1783,6 @@ out:
return ret;
}
/*
 * An item's seq is the greater of the client transaction's seq and the
* lock's write_seq. This ensures that multiple commits in one lock
* grant will have increasing seqs, and new locks in open commits will
* also increase the seqs. It lets us limit the inputs of item merging
* to the last stable seq and ensure that all the items in open
* transactions and granted locks will have greater seqs.
*/
static u64 item_seq(struct super_block *sb, struct scoutfs_lock *lock)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return max(sbi->trans_seq, lock->write_seq);
}
/*
* Mark the item dirty. Dirtying while holding a transaction pins the
* page holding the item and guarantees that the item can be deleted or
@@ -1872,8 +1815,8 @@ int scoutfs_item_dirty(struct super_block *sb, struct scoutfs_key *key,
if (!item || item->deletion) {
ret = -ENOENT;
} else {
item->seq = item_seq(sb, lock);
mark_item_dirty(sb, cinf, pg, NULL, item);
item->liv.vers = cpu_to_le64(lock->write_version);
ret = 0;
}
@@ -1892,7 +1835,9 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
int mode, bool force)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
struct scoutfs_log_item_value liv = {
.vers = cpu_to_le64(lock->write_version),
};
struct cached_item *found;
struct cached_item *item;
struct cached_page *pg;
@@ -1920,7 +1865,7 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
goto unlock;
}
item = alloc_item(pg, key, seq, false, val, val_len);
item = alloc_item(pg, key, &liv, val, val_len);
rbtree_insert(&item->node, par, pnode, &pg->item_root);
mark_item_dirty(sb, cinf, pg, NULL, item);
@@ -1965,7 +1910,9 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
struct scoutfs_log_item_value liv = {
.vers = cpu_to_le64(lock->write_version),
};
struct cached_item *item;
struct cached_item *found;
struct cached_page *pg;
@@ -1997,13 +1944,12 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
if (val_len)
memcpy(found->val, val, val_len);
if (val_len < found->val_len)
pg->erased_bytes += item_val_bytes(found->val_len) -
item_val_bytes(val_len);
pg->erased_bytes += found->val_len - val_len;
found->val_len = val_len;
found->seq = seq;
found->liv.vers = liv.vers;
mark_item_dirty(sb, cinf, pg, NULL, found);
} else {
item = alloc_item(pg, key, seq, false, val, val_len);
item = alloc_item(pg, key, &liv, val, val_len);
item->persistent = found->persistent;
rbtree_insert(&item->node, par, pnode, &pg->item_root);
mark_item_dirty(sb, cinf, pg, NULL, item);
@@ -2019,77 +1965,6 @@ out:
return ret;
}
/*
* Add a delta item. Delta items are an incremental change relative to
* the current persistent delta items. We never have to read the
* current items so the caller always writes with write only locks. If
* combining the current delta item and the caller's item results in a
* null we can just drop it, we don't have to emit a deletion item.
*/
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
struct cached_item *item;
struct cached_page *pg;
struct rb_node **pnode;
struct rb_node *par;
int ret;
scoutfs_inc_counter(sb, item_delta);
if ((ret = lock_safe(lock, key, SCOUTFS_LOCK_WRITE_ONLY)))
goto out;
ret = scoutfs_forest_set_bloom_bits(sb, lock);
if (ret < 0)
goto out;
ret = get_cached_page(sb, cinf, lock, key, true, true, val_len, &pg);
if (ret < 0)
goto out;
__acquire(pg->rwlock);
item = item_rbtree_walk(&pg->item_root, key, NULL, &par, &pnode);
if (item) {
if (!item->delta) {
ret = -EIO;
goto unlock;
}
ret = scoutfs_forest_combine_deltas(key, item->val, item->val_len, val, val_len);
if (ret <= 0) {
if (ret == 0)
ret = -EIO;
goto unlock;
}
if (ret == SCOUTFS_DELTA_COMBINED) {
item->seq = seq;
mark_item_dirty(sb, cinf, pg, NULL, item);
} else if (ret == SCOUTFS_DELTA_COMBINED_NULL) {
clear_item_dirty(sb, cinf, pg, item);
erase_item(pg, item);
} else {
ret = -EIO;
goto unlock;
}
ret = 0;
} else {
item = alloc_item(pg, key, seq, false, val, val_len);
rbtree_insert(&item->node, par, pnode, &pg->item_root);
mark_item_dirty(sb, cinf, pg, NULL, item);
item->delta = 1;
ret = 0;
}
unlock:
write_unlock(&pg->rwlock);
out:
return ret;
}
/*
* Delete an item from the cache. We can leave behind a dirty deletion
* item if there is a persistent item that needs to be overwritten.
@@ -2102,7 +1977,9 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock, int mode, bool force)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
struct scoutfs_log_item_value liv = {
.vers = cpu_to_le64(lock->write_version),
};
struct cached_item *item;
struct cached_page *pg;
struct rb_node **pnode;
@@ -2130,7 +2007,7 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
}
if (!item) {
item = alloc_item(pg, key, seq, false, NULL, 0);
item = alloc_item(pg, key, &liv, NULL, 0);
rbtree_insert(&item->node, par, pnode, &pg->item_root);
}
@@ -2143,10 +2020,10 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
erase_item(pg, item);
} else {
/* must emit deletion to clobber old persistent item */
item->seq = seq;
item->liv.vers = cpu_to_le64(lock->write_version);
item->liv.flags |= SCOUTFS_LOG_ITEM_FLAG_DELETION;
item->deletion = 1;
pg->erased_bytes += item_val_bytes(item->val_len) -
item_val_bytes(0);
pg->erased_bytes += item->val_len;
item->val_len = 0;
mark_item_dirty(sb, cinf, pg, NULL, item);
}
@@ -2229,11 +2106,17 @@ int scoutfs_item_write_dirty(struct super_block *sb)
struct page *page;
LIST_HEAD(pages);
LIST_HEAD(pos);
u64 max_seq = 0;
u64 max_vers = 0;
int val_len;
int bytes;
int off;
int ret;
/* we're relying on struct layout to prepend item value headers */
BUILD_BUG_ON(offsetof(struct cached_item, val) !=
(offsetof(struct cached_item, liv) +
member_sizeof(struct cached_item, liv)));
if (atomic_read(&cinf->dirty_pages) == 0)
return 0;
@@ -2285,9 +2168,10 @@ int scoutfs_item_write_dirty(struct super_block *sb)
list_sort(NULL, &pg->dirty_list, cmp_item_key);
list_for_each_entry(item, &pg->dirty_list, dirty_head) {
val_len = sizeof(item->liv) + item->val_len;
bytes = offsetof(struct scoutfs_btree_item_list,
val[item->val_len]);
max_seq = max(max_seq, item->seq);
val[val_len]);
max_vers = max(max_vers, le64_to_cpu(item->liv.vers));
if (off + bytes > PAGE_SIZE) {
page = second;
@@ -2303,10 +2187,8 @@ int scoutfs_item_write_dirty(struct super_block *sb)
prev = &lst->next;
lst->key = item->key;
lst->seq = item->seq;
lst->flags = item->deletion ? SCOUTFS_ITEM_FLAG_DELETION : 0;
lst->val_len = item->val_len;
memcpy(lst->val, item->val, item->val_len);
lst->val_len = val_len;
memcpy(lst->val, &item->liv, val_len);
}
spin_lock(&cinf->dirty_lock);
@@ -2319,8 +2201,8 @@ int scoutfs_item_write_dirty(struct super_block *sb)
read_unlock(&pg->rwlock);
}
/* store max item seq in forest's log_trees */
scoutfs_forest_set_max_seq(sb, max_seq);
/* store max item vers in forest's log_trees */
scoutfs_forest_set_max_vers(sb, max_vers);
/* write all the dirty items into log btree blocks */
ret = scoutfs_forest_insert_list(sb, first);
@@ -2364,11 +2246,8 @@ retry:
dirty_head) {
clear_item_dirty(sb, cinf, pg, item);
if (item->delta)
scoutfs_inc_counter(sb, item_delta_written);
/* free deletion items */
if (item->deletion || item->delta)
if (item->deletion)
erase_item(pg, item);
else
item->persistent = 1;
@@ -2510,9 +2389,9 @@ retry:
/*
* Shrink the size the item cache. We're operating against the fast
* path lock ordering and we skip pages if we can't acquire locks. We
* can run into dirty pages or pages with items that weren't visible to
* the earliest active reader which must be skipped.
* path lock ordering and we skip pages if we can't acquire locks.
* Similarly, we can run into dirty pages or pages which intersect with
* active readers that we can't shrink and also choose to skip.
*/
static int item_lru_shrink(struct shrinker *shrink,
struct shrink_control *sc)
@@ -2521,24 +2400,26 @@ static int item_lru_shrink(struct shrinker *shrink,
struct item_cache_info,
shrinker);
struct super_block *sb = cinf->sb;
struct active_reader *active;
struct cached_page *tmp;
struct cached_page *pg;
u64 first_reader_seq;
int nr;
if (sc->nr_to_scan == 0)
goto out;
nr = sc->nr_to_scan;
/* can't invalidate pages with items that weren't visible to first reader */
first_reader_seq = first_active_reader_seq(cinf);
write_lock(&cinf->rwlock);
spin_lock(&cinf->lru_lock);
list_for_each_entry_safe(pg, tmp, &cinf->lru_list, lru_head) {
if (first_reader_seq <= pg->max_seq) {
/* can't invalidate ranges being read, reader might be stale */
spin_lock(&cinf->active_lock);
active = active_rbtree_walk(&cinf->active_root, &pg->start,
&pg->end, NULL, NULL);
spin_unlock(&cinf->active_lock);
if (active) {
scoutfs_inc_counter(sb, item_shrink_page_reader);
continue;
}
@@ -2607,7 +2488,7 @@ int scoutfs_item_setup(struct super_block *sb)
spin_lock_init(&cinf->lru_lock);
INIT_LIST_HEAD(&cinf->lru_list);
spin_lock_init(&cinf->active_lock);
INIT_LIST_HEAD(&cinf->active_list);
cinf->active_root = RB_ROOT;
cinf->pcpu_pages = alloc_percpu(struct item_percpu_pages);
if (!cinf->pcpu_pages)
@@ -2638,7 +2519,7 @@ void scoutfs_item_destroy(struct super_block *sb)
int cpu;
if (cinf) {
BUG_ON(!list_empty(&cinf->active_list));
BUG_ON(!RB_EMPTY_ROOT(&cinf->active_root));
unregister_hotcpu_notifier(&cinf->notifier);
unregister_shrinker(&cinf->shrinker);


@@ -18,8 +18,6 @@ int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_delete_force(struct super_block *sb,


@@ -108,16 +108,6 @@ static inline void scoutfs_key_set_ones(struct scoutfs_key *key)
memset(key->__pad, 0, sizeof(key->__pad));
}
static inline bool scoutfs_key_is_ones(struct scoutfs_key *key)
{
return key->sk_zone == U8_MAX &&
key->_sk_first == cpu_to_le64(U64_MAX) &&
key->sk_type == U8_MAX &&
key->_sk_second == cpu_to_le64(U64_MAX) &&
key->_sk_third == cpu_to_le64(U64_MAX) &&
key->_sk_fourth == U8_MAX;
}
/*
* Return a -1/0/1 comparison of keys.
*


@@ -34,7 +34,6 @@
#include "data.h"
#include "xattr.h"
#include "item.h"
#include "omap.h"
/*
* scoutfs uses a lock service to manage item cache consistency between
@@ -66,6 +65,8 @@
* relative to that lock state we resend.
*/
#define GRACE_PERIOD_KT ms_to_ktime(10)
/*
* allocated per-super, freed on unmount.
*/
@@ -73,19 +74,19 @@ struct lock_info {
struct super_block *sb;
spinlock_t lock;
bool shutdown;
bool unmounting;
struct rb_root lock_tree;
struct rb_root lock_range_tree;
struct shrinker shrinker;
struct list_head lru_list;
unsigned long long lru_nr;
struct workqueue_struct *workq;
struct work_struct inv_work;
struct work_struct grant_work;
struct list_head grant_list;
struct delayed_work inv_dwork;
struct list_head inv_list;
struct work_struct shrink_work;
struct list_head shrink_list;
atomic64_t next_refresh_gen;
struct dentry *tseq_dentry;
struct scoutfs_tseq_tree tseq_tree;
};
@@ -121,48 +122,21 @@ static bool lock_modes_match(int granted, int requested)
}
/*
* Invalidate cached data associated with an inode whose lock is going
* invalidate cached data associated with an inode whose lock is going
* away.
*
* We try to drop cached dentries and inodes covered by the lock if they
* aren't referenced. This removes them from the mount's open map and
* allows deletions to be performed by unlink without having to wait for
* remote cached inodes to be dropped.
*
* If the cached inode was already deferring final inode deletion then
 * we can't perform that inline in invalidation.  The locking alone
 * would deadlock, and it might also take multiple transactions to fully
* delete an inode with significant metadata. We only perform the iput
* inline if we know that possible eviction can't perform the final
* deletion, otherwise we kick it off to async work.
*/
static void invalidate_inode(struct super_block *sb, u64 ino)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_inode_info *si;
struct inode *inode;
inode = scoutfs_ilookup_nowait_nonewfree(sb, ino);
inode = scoutfs_ilookup(sb, ino);
if (inode) {
si = SCOUTFS_I(inode);
scoutfs_inc_counter(sb, lock_invalidate_inode);
if (S_ISREG(inode->i_mode)) {
truncate_inode_pages(inode->i_mapping, 0);
scoutfs_data_wait_changed(inode);
}
/* can't touch during unmount, dcache destroys w/o locks */
if (!linfo->unmounting)
d_prune_aliases(inode);
si->drop_invalidated = true;
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink > 0) {
iput(inode);
} else {
/* defer iput to work context so we don't evict inodes from invalidation */
scoutfs_inode_queue_iput(inode);
}
iput(inode);
}
}
@@ -198,16 +172,6 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
/* have to invalidate if we're not in the only usable case */
if (!(prev == SCOUTFS_LOCK_WRITE && mode == SCOUTFS_LOCK_READ)) {
retry:
/* invalidate inodes before removing coverage */
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
ino = le64_to_cpu(lock->start.ski_ino);
last = le64_to_cpu(lock->end.ski_ino);
while (ino <= last) {
invalidate_inode(sb, ino);
ino++;
}
}
/* remove cov items to tell users that their cache is stale */
spin_lock(&lock->cov_list_lock);
list_for_each_entry_safe(cov, tmp, &lock->cov_list, head) {
@@ -223,6 +187,15 @@ retry:
}
spin_unlock(&lock->cov_list_lock);
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
ino = le64_to_cpu(lock->start.ski_ino);
last = le64_to_cpu(lock->end.ski_ino);
while (ino <= last) {
invalidate_inode(sb, ino);
ino++;
}
}
scoutfs_item_invalidate(sb, &lock->start, &lock->end);
}
@@ -251,11 +224,11 @@ static void lock_free(struct lock_info *linfo, struct scoutfs_lock *lock)
BUG_ON(!RB_EMPTY_NODE(&lock->node));
BUG_ON(!RB_EMPTY_NODE(&lock->range_node));
BUG_ON(!list_empty(&lock->lru_head));
BUG_ON(!list_empty(&lock->grant_head));
BUG_ON(!list_empty(&lock->inv_head));
BUG_ON(!list_empty(&lock->shrink_head));
BUG_ON(!list_empty(&lock->cov_list));
kfree(lock->inode_deletion_data);
kfree(lock);
}
@@ -278,8 +251,8 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
RB_CLEAR_NODE(&lock->node);
RB_CLEAR_NODE(&lock->range_node);
INIT_LIST_HEAD(&lock->lru_head);
INIT_LIST_HEAD(&lock->grant_head);
INIT_LIST_HEAD(&lock->inv_head);
INIT_LIST_HEAD(&lock->inv_list);
INIT_LIST_HEAD(&lock->shrink_head);
spin_lock_init(&lock->cov_list_lock);
INIT_LIST_HEAD(&lock->cov_list);
@@ -289,7 +262,6 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
lock->sb = sb;
init_waitqueue_head(&lock->waitq);
lock->mode = SCOUTFS_LOCK_NULL;
lock->invalidating_mode = SCOUTFS_LOCK_NULL;
atomic64_set(&lock->forest_bloom_nr, 0);
@@ -326,6 +298,23 @@ static bool lock_counts_match(int granted, unsigned int *counts)
return true;
}
/*
* Returns true if there are any mode counts that match with the desired
* mode. There can be other non-matching counts as well but we're only
* testing for the existence of any matching counts.
*/
static bool lock_count_match_exists(int desired, unsigned int *counts)
{
enum scoutfs_lock_mode mode;
for (mode = 0; mode < SCOUTFS_LOCK_NR_MODES; mode++) {
if (counts[mode] && lock_modes_match(desired, mode))
return true;
}
return false;
}
/*
* An idle lock has nothing going on. It can be present in the lru and
* can be freed by the final put when it has a null mode.
@@ -543,15 +532,45 @@ static void put_lock(struct lock_info *linfo,struct scoutfs_lock *lock)
}
/*
* The caller has made a change (set a lock mode) which can let one of the
* invalidating locks make forward progress.
* Locks have a grace period that extends after activity and prevents
* invalidation. It's intended to let nodes do reasonable batches of
* work as locks ping pong between nodes that are doing conflicting
* work.
*/
static void extend_grace(struct super_block *sb, struct scoutfs_lock *lock)
{
ktime_t now = ktime_get();
if (ktime_after(now, lock->grace_deadline))
scoutfs_inc_counter(sb, lock_grace_set);
else
scoutfs_inc_counter(sb, lock_grace_extended);
lock->grace_deadline = ktime_add(now, GRACE_PERIOD_KT);
}
static void queue_grant_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->grant_list) && !linfo->shutdown)
queue_work(linfo->workq, &linfo->grant_work);
}
/*
* We immediately queue work on the assumption that the caller might
* have made a change (set a lock mode) which can let one of the
* invalidating locks make forward progress, even if other locks are
* waiting for their grace period to elapse. It's a trade-off between
* invalidation latency and burning cpu repeatedly finding that locks
* are still in their grace period.
*/
static void queue_inv_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->inv_list))
queue_work(linfo->workq, &linfo->inv_work);
if (!list_empty(&linfo->inv_list) && !linfo->shutdown)
mod_delayed_work(linfo->workq, &linfo->inv_dwork, 0);
}
/*
@@ -599,17 +618,80 @@ static void bug_on_inconsistent_grant_cache(struct super_block *sb,
}
/*
* The client is receiving a grant response message from the server.
* This is being called synchronously in the networking receive path so
* our work should be quick and reasonably non-blocking.
* Each lock has received a grant response message from the server.
*
* The server's state machine can immediately send an invalidate request
* after sending this grant response. We won't process the incoming
* invalidate request until after processing this grant response.
* Grant responses can be reordered with incoming invalidation requests
* from the server so we have to be careful to only set the new mode
* once the old mode matches.
*
* We extend the grace period as we grant the lock if there is a waiting
* locker who can use the lock. This stops invalidation from pulling
* the granted lock out from under the requester, resulting in a lot of
* churn with no forward progress. Using the grace period avoids having
* to identify a specific waiter and give it an acquired lock. It's
* also very similar to waking up the locker and having it win the race
* against the invalidation. In that case they'd extend the grace
* period anyway as they unlock.
*/
static void lock_grant_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info,
grant_work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock_grant_response *gr;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
scoutfs_inc_counter(sb, lock_grant_work);
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->grant_list, grant_head) {
gr = &lock->grant_resp;
nl = &lock->grant_resp.nl;
/* wait for reordered invalidation to finish */
if (lock->mode != nl->old_mode)
continue;
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode,
nl->new_mode);
if (!lock_mode_can_read(nl->old_mode) &&
lock_mode_can_read(nl->new_mode)) {
lock->refresh_gen =
atomic64_inc_return(&linfo->next_refresh_gen);
}
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_version = le64_to_cpu(nl->write_version);
lock->roots = gr->roots;
if (lock_count_match_exists(nl->new_mode, lock->waiters))
extend_grace(sb, lock);
trace_scoutfs_lock_granted(sb, lock);
list_del_init(&lock->grant_head);
wake_up(&lock->waitq);
put_lock(linfo, lock);
}
/* invalidations might be waiting for our reordered grant */
queue_inv_work(linfo);
spin_unlock(&linfo->lock);
}
/*
* The client is receiving a grant response message from the server. We
* find the lock, record the response, and add it to the list for grant
* work to process.
*/
int scoutfs_lock_grant_response(struct super_block *sb,
struct scoutfs_net_lock *nl)
struct scoutfs_net_lock_grant_response *gr)
{
struct scoutfs_net_lock *nl = &gr->nl;
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
@@ -623,63 +705,62 @@ int scoutfs_lock_grant_response(struct super_block *sb,
trace_scoutfs_lock_grant_response(sb, lock);
BUG_ON(!lock->request_pending);
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode, nl->new_mode);
if (!lock_mode_can_read(nl->old_mode) && lock_mode_can_read(nl->new_mode))
lock->refresh_gen = atomic64_inc_return(&linfo->next_refresh_gen);
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_seq = le64_to_cpu(nl->write_seq);
trace_scoutfs_lock_granted(sb, lock);
wake_up(&lock->waitq);
put_lock(linfo, lock);
lock->grant_resp = *gr;
list_add_tail(&lock->grant_head, &linfo->grant_list);
queue_grant_work(linfo);
spin_unlock(&linfo->lock);
return 0;
}
struct inv_req {
struct list_head head;
struct scoutfs_lock *lock;
u64 net_id;
struct scoutfs_net_lock nl;
};
/*
* Each lock has received a lock invalidation request from the server
* which specifies a new mode for the lock. Our processing state
* machine and server failover and lock recovery can both conspire to
* give us triplicate invalidation requests. The incoming requests for
* a given lock need to be processed in order, but we can process locks
* in any order.
* which specifies a new mode for the lock. The server will only send
* one invalidation request at a time for each lock.
*
* This is an unsolicited request from the server so it can arrive at
* any time after we make the server aware of the lock. We wait for
* users of the current mode to unlock before invalidating.
* any time after we make the server aware of the lock by initially
* requesting it. We wait for users of the current mode to unlock
* before invalidating.
*
* This can arrive on behalf of our request for a mode that conflicts
* with our current mode. We have to proceed while we have a request
* pending. We can also be racing with shrink requests being sent while
* we're invalidating.
*
* This can be processed concurrently and experience reordering with a
* grant response sent back-to-back from the server. We carefully only
* invalidate once the lock mode matches what the server told us to
* invalidate.
*
* We delay invalidation processing until a grace period has elapsed
* since the last unlock. The intent is to let users do a reasonable
* batch of work before dropping the lock. Continuous unlocking can
* continuously extend the deadline.
*
* Before we start invalidating the lock we set the lock to the new
* mode, preventing further incompatible users of the old mode from
* using the lock while we're invalidating. We record the previously
* granted mode so that we can send lock recover responses with the old
* granted mode during invalidation.
* using the lock while we're invalidating.
*
* This does a lot of serialized inode invalidation in one context and
* performs a lot of repeated calls to sync. It would be nice to get
* some concurrent inode invalidation and to more carefully only call
* sync when needed.
*/
static void lock_invalidate_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info, inv_work);
struct lock_info *linfo = container_of(work, struct lock_info,
inv_dwork.work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
struct inv_req *ireq;
unsigned long delay = MAX_JIFFY_OFFSET;
ktime_t now = ktime_get();
ktime_t deadline;
LIST_HEAD(ready);
u64 net_id;
int ret;
scoutfs_inc_counter(sb, lock_invalidate_work);
@@ -687,15 +768,26 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->inv_list, inv_head) {
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
nl = &ireq->nl;
nl = &lock->inv_nl;
/* wait for reordered grant to finish */
if (lock->mode != nl->old_mode)
continue;
/* wait until incompatible holders unlock */
if (!lock_counts_match(nl->new_mode, lock->users))
continue;
/* set the new mode, no incompatible users during inval, recov needs old */
lock->invalidating_mode = lock->mode;
/* skip if grace hasn't elapsed, record earliest */
deadline = lock->grace_deadline;
if (!linfo->shutdown && ktime_before(now, deadline)) {
delay = min(delay,
nsecs_to_jiffies(ktime_to_ns(
ktime_sub(deadline, now))));
scoutfs_inc_counter(linfo->sb, lock_grace_wait);
continue;
}
/* set the new mode, no incompatible users during inval */
lock->mode = nl->new_mode;
/* move everyone that's ready to our private list */
@@ -705,23 +797,18 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_unlock(&linfo->lock);
if (list_empty(&ready))
return;
goto out;
/* invalidate once the lock is read */
list_for_each_entry(lock, &ready, inv_head) {
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
nl = &ireq->nl;
nl = &lock->inv_nl;
net_id = lock->inv_net_id;
/* only lock protocol, inv can't call subsystems after shutdown */
if (!linfo->shutdown) {
ret = lock_invalidate(sb, lock, nl->old_mode, nl->new_mode);
BUG_ON(ret);
}
ret = lock_invalidate(sb, lock, nl->old_mode, nl->new_mode);
BUG_ON(ret);
/* respond with the key and modes from the request, server might have died */
ret = scoutfs_client_lock_response(sb, ireq->net_id, nl);
if (ret == -ENOTCONN)
ret = 0;
/* respond with the key and modes from the request */
ret = scoutfs_client_lock_response(sb, net_id, nl);
BUG_ON(ret);
scoutfs_inc_counter(sb, lock_invalidate_response);
@@ -731,91 +818,53 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &ready, inv_head) {
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
trace_scoutfs_lock_invalidated(sb, lock);
list_del(&ireq->head);
kfree(ireq);
lock->invalidating_mode = SCOUTFS_LOCK_NULL;
if (list_empty(&lock->inv_list)) {
/* finish if another request didn't arrive */
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
wake_up(&lock->waitq);
} else {
/* another request arrived, back on the list and requeue */
list_move_tail(&lock->inv_head, &linfo->inv_list);
queue_inv_work(linfo);
}
wake_up(&lock->waitq);
put_lock(linfo, lock);
}
/* grant might have been waiting for invalidate request */
queue_grant_work(linfo);
spin_unlock(&linfo->lock);
out:
/* queue delayed work if invalidations waiting on grace deadline */
if (delay != MAX_JIFFY_OFFSET)
queue_delayed_work(linfo->workq, &linfo->inv_dwork, delay);
}
/*
* Add an incoming invalidation request to the end of the list on the
* lock and queue it for blocking invalidation work. This is being
* called synchronously in the net recv path to avoid reordering with
* grants that were sent immediately before the server sent this
* invalidation.
* Record an incoming invalidate request from the server and add its lock
* to the list for processing.
*
* Incoming invalidation requests are a function of the remote lock
* server's state machine and are slightly decoupled from our lock
* state. We can receive duplicate requests if the server is quick
* enough to send the next request after we send a previous reply, or if
* pending invalidation spans server failover and lock recovery.
*
* Similarly, we can get a request to invalidate a lock we don't have if
* invalidation finished just after lock recovery to a new server.
* Happily we can just reply because we satisfy the invalidation
* response promise to not be using the old lock's mode if the lock
* doesn't exist.
* This is trusting the server and will crash if it's sent bad requests :/
*/
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock = NULL;
struct inv_req *ireq;
int ret = 0;
struct scoutfs_lock *lock;
scoutfs_inc_counter(sb, lock_invalidate_request);
ireq = kmalloc(sizeof(struct inv_req), GFP_NOFS);
BUG_ON(!ireq); /* lock server doesn't handle response errors */
if (ireq == NULL) {
ret = -ENOMEM;
goto out;
}
spin_lock(&linfo->lock);
lock = get_lock(sb, &nl->key);
BUG_ON(!lock);
if (lock) {
BUG_ON(lock->invalidate_pending);
lock->invalidate_pending = 1;
lock->inv_nl = *nl;
lock->inv_net_id = net_id;
list_add_tail(&lock->inv_head, &linfo->inv_list);
trace_scoutfs_lock_invalidate_request(sb, lock);
ireq->lock = lock;
ireq->net_id = net_id;
ireq->nl = *nl;
if (list_empty(&lock->inv_list)) {
list_add_tail(&lock->inv_head, &linfo->inv_list);
lock->invalidate_pending = 1;
queue_inv_work(linfo);
}
list_add_tail(&ireq->head, &lock->inv_list);
queue_inv_work(linfo);
}
spin_unlock(&linfo->lock);
out:
if (!lock) {
ret = scoutfs_client_lock_response(sb, net_id, nl);
BUG_ON(ret); /* lock server doesn't fence timed out client requests */
}
return ret;
return 0;
}
/*
@@ -830,7 +879,6 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_net_lock_recover *nlr;
enum scoutfs_lock_mode mode;
struct scoutfs_lock *lock;
struct scoutfs_lock *next;
struct rb_node *node;
@@ -851,15 +899,10 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
for (i = 0; lock && i < SCOUTFS_NET_LOCK_MAX_RECOVER_NR; i++) {
if (lock->invalidating_mode != SCOUTFS_LOCK_NULL)
mode = lock->invalidating_mode;
else
mode = lock->mode;
nlr->locks[i].key = lock->start;
nlr->locks[i].write_seq = cpu_to_le64(lock->write_seq);
nlr->locks[i].old_mode = mode;
nlr->locks[i].new_mode = mode;
nlr->locks[i].write_version = cpu_to_le64(lock->write_version);
nlr->locks[i].old_mode = lock->mode;
nlr->locks[i].new_mode = lock->mode;
node = rb_next(&lock->node);
if (node)
@@ -952,7 +995,7 @@ static int lock_key_range(struct super_block *sb, enum scoutfs_lock_mode mode, i
lock_inc_count(lock->waiters, mode);
for (;;) {
if (WARN_ON_ONCE(linfo->shutdown)) {
if (linfo->shutdown) {
ret = -ESHUTDOWN;
break;
}
@@ -997,14 +1040,8 @@ static int lock_key_range(struct super_block *sb, enum scoutfs_lock_mode mode, i
trace_scoutfs_lock_wait(sb, lock);
if (flags & SCOUTFS_LKF_INTERRUPTIBLE) {
ret = wait_event_interruptible(lock->waitq,
lock_wait_cond(sb, lock, mode));
} else {
wait_event(lock->waitq, lock_wait_cond(sb, lock, mode));
ret = 0;
}
ret = wait_event_interruptible(lock->waitq,
lock_wait_cond(sb, lock, mode));
spin_lock(&linfo->lock);
if (ret)
break;
@@ -1061,7 +1098,7 @@ int scoutfs_lock_inode(struct super_block *sb, enum scoutfs_lock_mode mode, int
goto out;
if (flags & SCOUTFS_LKF_REFRESH_INODE) {
ret = scoutfs_inode_refresh(inode, *lock);
ret = scoutfs_inode_refresh(inode, *lock, flags);
if (ret < 0) {
scoutfs_unlock(sb, *lock, mode);
*lock = NULL;
@@ -1222,46 +1259,37 @@ int scoutfs_lock_inode_index(struct super_block *sb, enum scoutfs_lock_mode mode
}
/*
* Orphan items are stored in their own zone which are modified with
* shared write_only locks and are read inconsistently without locks by
* background scanning work.
* The rid lock protects a mount's private persistent items in the rid
* zone. It's held for the duration of the mount. It lets the mount
* modify the rid items at will and signals to other mounts that we're
* still alive and our rid items shouldn't be reclaimed.
*
* Since we only use write_only locks we just lock the entire zone, but
* the api provides the inode in case we ever change the locking scheme.
* Being held for the entire mount prevents other nodes from reclaiming
* our items, like free blocks, when it would make sense for them to be
* able to. Maybe we have a bunch free and they're trying to allocate
* and are getting ENOSPC.
*/
int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int flags, u64 ino,
struct scoutfs_lock **lock)
int scoutfs_lock_rid(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 rid, struct scoutfs_lock **lock)
{
struct scoutfs_key start;
struct scoutfs_key end;
scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_ORPHAN_ZONE;
start.sko_ino = 0;
start.sk_type = SCOUTFS_ORPHAN_TYPE;
start.sk_zone = SCOUTFS_RID_ZONE;
start.sko_rid = cpu_to_le64(rid);
scoutfs_key_set_zeros(&end);
end.sk_zone = SCOUTFS_ORPHAN_ZONE;
end.sko_ino = cpu_to_le64(U64_MAX);
end.sk_type = SCOUTFS_ORPHAN_TYPE;
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock)
{
struct scoutfs_key start;
struct scoutfs_key end;
scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
end.sk_zone = SCOUTFS_RID_ZONE;
end.sko_rid = cpu_to_le64(rid);
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
/*
* As we unlock we always extend the grace period to give the caller
* another pass at the lock before its invalidated.
*/
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scoutfs_lock_mode mode)
{
DECLARE_LOCK_INFO(sb, linfo);
@@ -1274,6 +1302,7 @@ void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scou
spin_lock(&linfo->lock);
lock_dec_count(lock->users, mode);
extend_grace(sb, lock);
if (lock_mode_can_write(mode))
lock->dirty_trans_seq = scoutfs_trans_sample_seq(sb);
@@ -1448,7 +1477,7 @@ restart:
BUG_ON(lock->mode == SCOUTFS_LOCK_NULL);
BUG_ON(!list_empty(&lock->shrink_head));
if (nr-- == 0)
if (linfo->shutdown || nr-- == 0)
break;
__lock_del_lru(linfo, lock);
@@ -1475,7 +1504,7 @@ out:
return ret;
}
void scoutfs_free_unused_locks(struct super_block *sb)
void scoutfs_free_unused_locks(struct super_block *sb, unsigned long nr)
{
struct lock_info *linfo = SCOUTFS_SB(sb)->lock_info;
struct shrink_control sc = {
@@ -1503,48 +1532,15 @@ static void lock_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
}
/*
* shrink_dcache_for_umount() tears down dentries with no locking. We
* need to make sure that our invalidation won't touch dentries before
* we return and the caller calls the generic vfs unmount path.
*/
void scoutfs_lock_unmount_begin(struct super_block *sb)
{
DECLARE_LOCK_INFO(sb, linfo);
if (linfo) {
linfo->unmounting = true;
flush_work(&linfo->inv_work);
}
}
void scoutfs_lock_flush_invalidate(struct super_block *sb)
{
DECLARE_LOCK_INFO(sb, linfo);
if (linfo)
flush_work(&linfo->inv_work);
}
/*
* The caller is going to be shutting down transactions and the client.
* We need to make sure that locking won't call either after we return.
* The caller is going to be calling _destroy soon and, critically, is
* about to shutdown networking before calling us so that we don't get
* any callbacks while we're destroying. We have to ensure that we
* won't call networking after this returns.
*
* At this point all fs callers and internal services that use locks
* should have stopped. We won't have any callers initiating lock
* transitions and sending requests. We set the shutdown flag to catch
* anyone who breaks this rule.
*
* We unregister the shrinker so that we won't try and send null
* requests in response to memory pressure. The locks will all be
* unceremoniously dropped once we get a farewell response from the
* server which indicates that they destroyed our locking state.
*
* We will still respond to invalidation requests that have to be
* processed to let unmounts on other mounts acquire locks and make
* progress. However, we don't fully process the invalidation because
* we're shutting down. We only update the lock state and send the
* response. We shouldn't have any users of locking that require
* invalidation correctness at this point.
* Internal fs threads can be using locking, and locking can have async
* work pending. We use ->shutdown to force callers to return
* -ESHUTDOWN and to prevent the future queueing of work that could call
* networking. Locks whose work is stopped will be torn down by _destroy.
*/
void scoutfs_lock_shutdown(struct super_block *sb)
{
@@ -1557,18 +1553,19 @@ void scoutfs_lock_shutdown(struct super_block *sb)
trace_scoutfs_lock_shutdown(sb, linfo);
/* stop the shrinker from queueing work */
unregister_shrinker(&linfo->shrinker);
flush_work(&linfo->shrink_work);
/* cause current and future lock calls to return errors */
spin_lock(&linfo->lock);
linfo->shutdown = true;
for (node = rb_first(&linfo->lock_tree); node; node = rb_next(node)) {
lock = rb_entry(node, struct scoutfs_lock, node);
wake_up(&lock->waitq);
}
spin_unlock(&linfo->lock);
flush_work(&linfo->grant_work);
flush_delayed_work(&linfo->inv_dwork);
flush_work(&linfo->shrink_work);
}
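As a rough sketch of the pattern described above (the helper name is hypothetical and not part of this change), a lock entry point bails out under the linfo spinlock once the flag is set:

	static int example_check_shutdown(struct lock_info *linfo)
	{
		int ret = 0;

		spin_lock(&linfo->lock);
		if (linfo->shutdown)
			ret = -ESHUTDOWN;
		spin_unlock(&linfo->lock);

		return ret;
	}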
/*
@@ -1588,8 +1585,6 @@ void scoutfs_lock_destroy(struct super_block *sb)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
struct inv_req *ireq_tmp;
struct inv_req *ireq;
struct rb_node *node;
enum scoutfs_lock_mode mode;
@@ -1598,6 +1593,8 @@ void scoutfs_lock_destroy(struct super_block *sb)
trace_scoutfs_lock_destroy(sb, linfo);
/* stop the shrinker from queueing work */
unregister_shrinker(&linfo->shrinker);
/* make sure that no one's actively using locks */
spin_lock(&linfo->lock);
@@ -1616,6 +1613,8 @@ void scoutfs_lock_destroy(struct super_block *sb)
spin_unlock(&linfo->lock);
if (linfo->workq) {
/* pending grace work queues normal work */
flush_workqueue(linfo->workq);
/* now all work won't queue itself */
destroy_workqueue(linfo->workq);
}
@@ -1632,31 +1631,22 @@ void scoutfs_lock_destroy(struct super_block *sb)
* of free).
*/
spin_lock(&linfo->lock);
node = rb_first(&linfo->lock_tree);
while (node) {
lock = rb_entry(node, struct scoutfs_lock, node);
node = rb_next(node);
list_for_each_entry_safe(ireq, ireq_tmp, &lock->inv_list, head) {
list_del_init(&ireq->head);
put_lock(linfo, ireq->lock);
kfree(ireq);
}
lock->request_pending = 0;
if (!list_empty(&lock->lru_head))
__lock_del_lru(linfo, lock);
if (!list_empty(&lock->inv_head)) {
if (!list_empty(&lock->grant_head))
list_del_init(&lock->grant_head);
if (!list_empty(&lock->inv_head))
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
}
if (!list_empty(&lock->shrink_head))
list_del_init(&lock->shrink_head);
lock_remove(linfo, lock);
lock_free(linfo, lock);
}
spin_unlock(&linfo->lock);
kfree(linfo);
@@ -1681,7 +1671,9 @@ int scoutfs_lock_setup(struct super_block *sb)
linfo->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&linfo->shrinker);
INIT_LIST_HEAD(&linfo->lru_list);
INIT_WORK(&linfo->inv_work, lock_invalidate_worker);
INIT_WORK(&linfo->grant_work, lock_grant_worker);
INIT_LIST_HEAD(&linfo->grant_list);
INIT_DELAYED_WORK(&linfo->inv_dwork, lock_invalidate_worker);
INIT_LIST_HEAD(&linfo->inv_list);
INIT_WORK(&linfo->shrink_work, lock_shrink_worker);
INIT_LIST_HEAD(&linfo->shrink_list);


@@ -6,15 +6,12 @@
#define SCOUTFS_LKF_REFRESH_INODE 0x01 /* update stale inode from item */
#define SCOUTFS_LKF_NONBLOCK 0x02 /* only use already held locks */
#define SCOUTFS_LKF_INTERRUPTIBLE 0x04 /* pending signals return -ERESTARTSYS */
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_INTERRUPTIBLE << 1) - 1))
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_NONBLOCK << 1) - 1))
#define SCOUTFS_LOCK_NR_MODES SCOUTFS_LOCK_INVALID
struct inode_deletion_lock_data;
/*
* A few fields (start, end, refresh_gen, write_seq, granted_mode)
* A few fields (start, end, refresh_gen, write_version, granted_mode)
* are referenced by code outside lock.c.
*/
struct scoutfs_lock {
@@ -24,22 +21,26 @@ struct scoutfs_lock {
struct rb_node node;
struct rb_node range_node;
u64 refresh_gen;
u64 write_seq;
u64 write_version;
u64 dirty_trans_seq;
struct scoutfs_net_roots roots;
struct list_head lru_head;
wait_queue_head_t waitq;
ktime_t grace_deadline;
unsigned long request_pending:1,
invalidate_pending:1;
struct list_head inv_head; /* entry in linfo's list of locks with invalidations */
struct list_head inv_list; /* list of lock's invalidation requests */
struct list_head grant_head;
struct scoutfs_net_lock_grant_response grant_resp;
struct list_head inv_head;
struct scoutfs_net_lock inv_nl;
u64 inv_net_id;
struct list_head shrink_head;
spinlock_t cov_list_lock;
struct list_head cov_list;
enum scoutfs_lock_mode mode;
enum scoutfs_lock_mode invalidating_mode;
unsigned int waiters[SCOUTFS_LOCK_NR_MODES];
unsigned int users[SCOUTFS_LOCK_NR_MODES];
@@ -47,9 +48,6 @@ struct scoutfs_lock {
/* the forest tracks which log tree last saw bloom bit updates */
atomic64_t forest_bloom_nr;
/* inode deletion tracks some state per lock */
struct inode_deletion_lock_data *inode_deletion_data;
};
struct scoutfs_lock_coverage {
@@ -59,7 +57,7 @@ struct scoutfs_lock_coverage {
};
int scoutfs_lock_grant_response(struct super_block *sb,
struct scoutfs_net_lock *nl);
struct scoutfs_net_lock_grant_response *gr);
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl);
int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
@@ -82,10 +80,8 @@ int scoutfs_lock_inodes(struct super_block *sb, enum scoutfs_lock_mode mode, int
struct inode *d, struct scoutfs_lock **D_lock);
int scoutfs_lock_rename(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 ino, struct scoutfs_lock **lock);
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
int scoutfs_lock_rid(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 rid, struct scoutfs_lock **lock);
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock,
enum scoutfs_lock_mode mode);
@@ -100,11 +96,9 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
enum scoutfs_lock_mode mode);
void scoutfs_free_unused_locks(struct super_block *sb);
void scoutfs_free_unused_locks(struct super_block *sb, unsigned long nr);
int scoutfs_lock_setup(struct super_block *sb);
void scoutfs_lock_unmount_begin(struct super_block *sb);
void scoutfs_lock_flush_invalidate(struct super_block *sb);
void scoutfs_lock_shutdown(struct super_block *sb);
void scoutfs_lock_destroy(struct super_block *sb);


@@ -20,10 +20,10 @@
#include "tseq.h"
#include "spbm.h"
#include "block.h"
#include "btree.h"
#include "msg.h"
#include "scoutfs_trace.h"
#include "lock_server.h"
#include "recov.h"
/*
* The scoutfs server implements a simple lock service. Client mounts
@@ -56,11 +56,14 @@
* Message requests and responses are reliably delivered in order across
* reconnection.
*
* As a new server comes up it recovers lock state from existing clients
* which were connected to a previous lock server. Recover requests are
* sent to clients as they connect and they respond with all their
* locks. Once all clients and locks are accounted for, normal
* processing can resume.
* The server maintains a persistent record of connected clients. A new
* server instance discovers these and waits for previously connected
* clients to reconnect and recover their state before proceeding. If
* clients don't reconnect they are forcefully prevented from unsafely
* accessing the shared persistent storage (fenced, according to the
* rules of the platform; this could range from being powered off to
* having their switch port disabled to having their local block device
* set read-only).
*
* The lock server doesn't respond to memory pressure. The only way
* locks are freed is if they are invalidated to null on behalf of a
@@ -74,12 +77,19 @@ struct lock_server_info {
struct super_block *sb;
spinlock_t lock;
struct mutex mutex;
struct rb_root locks_root;
struct scoutfs_spbm recovery_pending;
struct delayed_work recovery_dwork;
struct scoutfs_tseq_tree tseq_tree;
struct dentry *tseq_dentry;
struct scoutfs_tseq_tree stats_tseq_tree;
struct dentry *stats_tseq_dentry;
struct scoutfs_alloc *alloc;
struct scoutfs_block_writer *wri;
atomic64_t write_version;
};
#define DECLARE_LOCK_SERVER_INFO(sb, name) \
@@ -106,9 +116,6 @@ struct server_lock_node {
struct list_head granted;
struct list_head requested;
struct list_head invalidated;
struct scoutfs_tseq_entry stats_tseq_entry;
u64 stats[SLT_NR];
};
/*
@@ -153,30 +160,30 @@ enum {
*/
static void add_client_entry(struct server_lock_node *snode,
struct list_head *list,
struct client_lock_entry *c_ent)
struct client_lock_entry *clent)
{
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
if (list_empty(&c_ent->head))
list_add_tail(&c_ent->head, list);
if (list_empty(&clent->head))
list_add_tail(&clent->head, list);
else
list_move_tail(&c_ent->head, list);
list_move_tail(&clent->head, list);
c_ent->on_list = list == &snode->granted ? OL_GRANTED :
clent->on_list = list == &snode->granted ? OL_GRANTED :
list == &snode->requested ? OL_REQUESTED :
OL_INVALIDATED;
}
static void free_client_entry(struct lock_server_info *inf,
struct server_lock_node *snode,
struct client_lock_entry *c_ent)
struct client_lock_entry *clent)
{
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
if (!list_empty(&c_ent->head))
list_del_init(&c_ent->head);
scoutfs_tseq_del(&inf->tseq_tree, &c_ent->tseq_entry);
kfree(c_ent);
if (!list_empty(&clent->head))
list_del_init(&clent->head);
scoutfs_tseq_del(&inf->tseq_tree, &clent->tseq_entry);
kfree(clent);
}
static bool invalid_mode(u8 mode)
@@ -298,8 +305,6 @@ static struct server_lock_node *alloc_server_lock(struct lock_server_info *inf,
snode = get_server_lock(inf, key, ins, false);
if (snode != ins)
kfree(ins);
else
scoutfs_tseq_add(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
}
}
@@ -329,23 +334,21 @@ static void put_server_lock(struct lock_server_info *inf,
mutex_unlock(&snode->mutex);
if (should_free) {
scoutfs_tseq_del(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
if (should_free)
kfree(snode);
}
}
static struct client_lock_entry *find_entry(struct server_lock_node *snode,
struct list_head *list,
u64 rid)
{
struct client_lock_entry *c_ent;
struct client_lock_entry *clent;
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
list_for_each_entry(c_ent, list, head) {
if (c_ent->rid == rid)
return c_ent;
list_for_each_entry(clent, list, head) {
if (clent->rid == rid)
return clent;
}
return NULL;
@@ -364,7 +367,7 @@ int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
u64 net_id, struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *c_ent;
struct client_lock_entry *clent;
struct server_lock_node *snode;
int ret;
@@ -376,29 +379,27 @@ int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
goto out;
}
c_ent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!c_ent) {
clent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!clent) {
ret = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&c_ent->head);
c_ent->rid = rid;
c_ent->net_id = net_id;
c_ent->mode = nl->new_mode;
INIT_LIST_HEAD(&clent->head);
clent->rid = rid;
clent->net_id = net_id;
clent->mode = nl->new_mode;
snode = alloc_server_lock(inf, &nl->key);
if (snode == NULL) {
kfree(c_ent);
kfree(clent);
ret = -ENOMEM;
goto out;
}
snode->stats[SLT_REQUEST]++;
c_ent->snode = snode;
add_client_entry(snode, &snode->requested, c_ent);
scoutfs_tseq_add(&inf->tseq_tree, &c_ent->tseq_entry);
clent->snode = snode;
add_client_entry(snode, &snode->requested, clent);
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
ret = process_waiting_requests(sb, snode);
out:
@@ -417,7 +418,7 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *c_ent;
struct client_lock_entry *clent;
struct server_lock_node *snode;
int ret;
@@ -429,27 +430,25 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
goto out;
}
/* XXX should always have a server lock here? */
/* XXX should always have a server lock here? recovery? */
snode = get_server_lock(inf, &nl->key, NULL, false);
if (!snode) {
ret = -EINVAL;
goto out;
}
snode->stats[SLT_RESPONSE]++;
c_ent = find_entry(snode, &snode->invalidated, rid);
if (!c_ent) {
clent = find_entry(snode, &snode->invalidated, rid);
if (!clent) {
put_server_lock(inf, snode);
ret = -EINVAL;
goto out;
}
if (nl->new_mode == SCOUTFS_LOCK_NULL) {
free_client_entry(inf, snode, c_ent);
free_client_entry(inf, snode, clent);
} else {
c_ent->mode = nl->new_mode;
add_client_entry(snode, &snode->granted, c_ent);
clent->mode = nl->new_mode;
add_client_entry(snode, &snode->granted, clent);
}
ret = process_waiting_requests(sb, snode);
@@ -474,27 +473,31 @@ out:
* so we unlock the snode mutex.
*
* All progress must wait for all clients to finish with recovery
* because we don't know which locks they'll hold. Once recover
* finishes the server calls us to kick all the locks that were waiting
* during recovery.
* because we don't know which locks they'll hold. The unlocked
* recovery_pending test here is OK. It's filled by setup before
* anything runs. It's emptied by recovery completion. We can get a
* false nonempty result if we race with recovery completion, but that's
* OK because recovery completion processes all the locks that have
* requests after emptying, including the unlikely loser of that race.
*/
static int process_waiting_requests(struct super_block *sb,
struct server_lock_node *snode)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_net_lock_grant_response gres;
struct scoutfs_net_lock nl;
struct client_lock_entry *req;
struct client_lock_entry *req_tmp;
struct client_lock_entry *gr;
struct client_lock_entry *gr_tmp;
u64 seq;
u64 wv;
int ret;
BUG_ON(!mutex_is_locked(&snode->mutex));
/* processing waits for all invalidation responses or recovery */
if (!list_empty(&snode->invalidated) ||
scoutfs_recov_next_pending(sb, 0, SCOUTFS_RECOV_LOCKS) != 0) {
!scoutfs_spbm_empty(&inf->recovery_pending)) {
ret = 0;
goto out;
}
@@ -518,7 +521,6 @@ static int process_waiting_requests(struct super_block *sb,
trace_scoutfs_lock_message(sb, SLT_SERVER,
SLT_INVALIDATE, SLT_REQUEST,
gr->rid, 0, &nl);
snode->stats[SLT_INVALIDATE]++;
add_client_entry(snode, &snode->invalidated, gr);
}
@@ -529,7 +531,6 @@ static int process_waiting_requests(struct super_block *sb,
nl.key = snode->key;
nl.new_mode = req->mode;
nl.write_seq = 0;
/* see if there's an existing compatible grant to replace */
gr = find_entry(snode, &snode->granted, req->rid);
@@ -542,20 +543,21 @@ static int process_waiting_requests(struct super_block *sb,
if (nl.new_mode == SCOUTFS_LOCK_WRITE ||
nl.new_mode == SCOUTFS_LOCK_WRITE_ONLY) {
/* doesn't commit seq update, recovered with locks */
seq = scoutfs_server_next_seq(sb);
nl.write_seq = cpu_to_le64(seq);
wv = atomic64_inc_return(&inf->write_version);
nl.write_version = cpu_to_le64(wv);
}
gres.nl = nl;
scoutfs_server_get_roots(sb, &gres.roots);
ret = scoutfs_server_lock_response(sb, req->rid,
req->net_id, &nl);
req->net_id, &gres);
if (ret)
goto out;
trace_scoutfs_lock_message(sb, SLT_SERVER, SLT_GRANT,
SLT_RESPONSE, req->rid,
req->net_id, &nl);
snode->stats[SLT_GRANT]++;
/* don't track null client locks, track all else */
if (req->mode == SCOUTFS_LOCK_NULL)
@@ -571,39 +573,89 @@ out:
return ret;
}
static void init_lock_clients_key(struct scoutfs_key *key, u64 rid)
{
*key = (struct scoutfs_key) {
.sk_zone = SCOUTFS_LOCK_CLIENTS_ZONE,
.sklc_rid = cpu_to_le64(rid),
};
}
/*
* The server received a greeting from a client for the first time. If
* the client is in lock recovery then we send the initial lock request.
* the client had already talked to the server then we must find an
* existing record for it and should begin recovery. If it doesn't have
* a record then it's timed out and we can't allow it to reconnect. If
* we're creating a new record for a client we can see EEXIST if the
* greeting is resent to a new server after the record was committed but
* before the response was received by the client.
*
* This is running in concurrent client greeting processing contexts.
*/
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid)
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
bool should_exist)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
if (scoutfs_recov_is_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) {
init_lock_clients_key(&key, rid);
mutex_lock(&inf->mutex);
if (should_exist) {
ret = scoutfs_btree_lookup(sb, &super->lock_clients, &key,
&iref);
if (ret == 0)
scoutfs_btree_put_iref(&iref);
} else {
ret = scoutfs_btree_insert(sb, inf->alloc, inf->wri,
&super->lock_clients,
&key, NULL, 0);
if (ret == -EEXIST)
ret = 0;
}
mutex_unlock(&inf->mutex);
if (should_exist && ret == 0) {
scoutfs_key_set_zeros(&key);
ret = scoutfs_server_lock_recover_request(sb, rid, &key);
} else {
ret = 0;
if (ret)
goto out;
}
out:
return ret;
}
/*
* All clients have finished lock recovery, so we can make forward progress
* on all the queued requests that were waiting on recovery.
* A client sent their last recovery response and can exit recovery. If
* they were the last client in recovery then we can process all the
* server locks that had requests.
*/
int scoutfs_lock_server_finished_recovery(struct super_block *sb)
static int finished_recovery(struct super_block *sb, u64 rid, bool cancel)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct server_lock_node *snode;
struct scoutfs_key key;
bool still_pending;
int ret = 0;
spin_lock(&inf->lock);
scoutfs_spbm_clear(&inf->recovery_pending, rid);
still_pending = !scoutfs_spbm_empty(&inf->recovery_pending);
spin_unlock(&inf->lock);
if (still_pending)
return 0;
if (cancel)
cancel_delayed_work_sync(&inf->recovery_dwork);
scoutfs_key_set_zeros(&key);
scoutfs_info(sb, "all lock clients recovered");
while ((snode = get_server_lock(inf, &key, NULL, true))) {
key = snode->key;
@@ -621,6 +673,14 @@ int scoutfs_lock_server_finished_recovery(struct super_block *sb)
return ret;
}
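/*
 * Atomically raise write_version to at least "new".  The cmpxchg loop
 * retries if another updater raced in between the read and the swap,
 * and stops as soon as the stored value is already >= "new".
 */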
static void set_max_write_version(struct lock_server_info *inf, u64 new)
{
u64 old;
while (new > (old = atomic64_read(&inf->write_version)) &&
(atomic64_cmpxchg(&inf->write_version, old, new) != old));
}
/*
* We sent a lock recover request to the client when we received its
* greeting while in recovery. Here we instantiate all the locks it
@@ -632,61 +692,62 @@ int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *existing;
struct client_lock_entry *c_ent;
struct client_lock_entry *clent;
struct server_lock_node *snode;
struct scoutfs_key key;
int ret = 0;
int i;
/* client must be in recovery */
if (!scoutfs_recov_is_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) {
spin_lock(&inf->lock);
if (!scoutfs_spbm_test(&inf->recovery_pending, rid))
ret = -EINVAL;
spin_unlock(&inf->lock);
if (ret)
goto out;
}
/* client has sent us all their locks */
if (nlr->nr == 0) {
scoutfs_server_recov_finish(sb, rid, SCOUTFS_RECOV_LOCKS);
ret = 0;
ret = finished_recovery(sb, rid, true);
goto out;
}
for (i = 0; i < le16_to_cpu(nlr->nr); i++) {
c_ent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!c_ent) {
clent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!clent) {
ret = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&c_ent->head);
c_ent->rid = rid;
c_ent->net_id = 0;
c_ent->mode = nlr->locks[i].new_mode;
INIT_LIST_HEAD(&clent->head);
clent->rid = rid;
clent->net_id = 0;
clent->mode = nlr->locks[i].new_mode;
snode = alloc_server_lock(inf, &nlr->locks[i].key);
if (snode == NULL) {
kfree(c_ent);
kfree(clent);
ret = -ENOMEM;
goto out;
}
existing = find_entry(snode, &snode->granted, rid);
if (existing) {
kfree(c_ent);
kfree(clent);
put_server_lock(inf, snode);
ret = -EEXIST;
goto out;
}
c_ent->snode = snode;
add_client_entry(snode, &snode->granted, c_ent);
scoutfs_tseq_add(&inf->tseq_tree, &c_ent->tseq_entry);
clent->snode = snode;
add_client_entry(snode, &snode->granted, clent);
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
put_server_lock(inf, snode);
/* make sure next core seq is greater than all lock write seq */
scoutfs_server_set_seq_if_greater(sb,
le64_to_cpu(nlr->locks[i].write_seq));
/* make sure next write lock is greater than all recovered */
set_max_write_version(inf,
le64_to_cpu(nlr->locks[i].write_version));
}
/* send request for next batch of keys */
@@ -698,16 +759,102 @@ out:
return ret;
}
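/*
 * Lock client records store the rid in the key and carry an empty value
 * (they're inserted with a NULL, 0 value in the greeting path above),
 * so a non-empty value indicates a damaged item and is returned as -EIO.
 */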
static int get_rid_and_put_ref(struct scoutfs_btree_item_ref *iref, u64 *rid)
{
int ret;
if (iref->val_len == 0) {
*rid = le64_to_cpu(iref->key->sklc_rid);
ret = 0;
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(iref);
return ret;
}
/*
* This work executes if enough time passes without all of the clients
* finishing with recovery and canceling the work. We walk through the
* client records and find any that still have their recovery pending.
*/
static void scoutfs_lock_server_recovery_timeout(struct work_struct *work)
{
struct lock_server_info *inf = container_of(work,
struct lock_server_info,
recovery_dwork.work);
struct super_block *sb = inf->sb;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
bool timed_out;
u64 rid;
int ret;
ret = scoutfs_server_hold_commit(sb);
if (ret)
goto out;
/* we enter recovery if there are any client records */
for (rid = 0; ; rid++) {
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_next(sb, &super->lock_clients, &key, &iref);
if (ret == -ENOENT) {
ret = 0;
break;
}
if (ret == 0)
ret = get_rid_and_put_ref(&iref, &rid);
if (ret < 0)
break;
spin_lock(&inf->lock);
if (scoutfs_spbm_test(&inf->recovery_pending, rid)) {
scoutfs_spbm_clear(&inf->recovery_pending, rid);
timed_out = true;
} else {
timed_out = false;
}
spin_unlock(&inf->lock);
if (!timed_out)
continue;
scoutfs_err(sb, "client rid %016llx lock recovery timed out",
rid);
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
&super->lock_clients, &key);
if (ret)
break;
}
ret = scoutfs_server_apply_commit(sb, ret);
out:
/* force processing all pending lock requests */
if (ret == 0)
ret = finished_recovery(sb, 0, false);
if (ret < 0) {
scoutfs_err(sb, "lock server saw err %d while timing out clients, shutting down", ret);
scoutfs_server_abort(sb);
}
}
/*
* A client is leaving the lock service. They aren't using locks and
* won't send any more requests. We tear down all the state we had for
* them. This can be called multiple times for a given client as their
* farewell is resent to new servers. It's OK to not find any state.
* If we fail to delete a persistent entry then we have to shut down and
* hope that the next server has more luck.
*/
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *c_ent;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct client_lock_entry *clent;
struct client_lock_entry *tmp;
struct server_lock_node *snode;
struct scoutfs_key key;
@@ -715,7 +862,20 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
bool freed;
int ret = 0;
mutex_lock(&inf->mutex);
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
&super->lock_clients, &key);
mutex_unlock(&inf->mutex);
if (ret == -ENOENT) {
ret = 0;
goto out;
}
if (ret < 0)
goto out;
scoutfs_key_set_zeros(&key);
while ((snode = get_server_lock(inf, &key, NULL, true))) {
freed = false;
@@ -724,9 +884,9 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
(list == &snode->requested) ? &snode->invalidated :
NULL) {
list_for_each_entry_safe(c_ent, tmp, list, head) {
if (c_ent->rid == rid) {
free_client_entry(inf, snode, c_ent);
list_for_each_entry_safe(clent, tmp, list, head) {
if (clent->rid == rid) {
free_client_entry(inf, snode, clent);
freed = true;
}
}
@@ -749,7 +909,7 @@ out:
if (ret < 0) {
scoutfs_err(sb, "lock server err %d during client rid %016llx farewell, shutting down",
ret, rid);
scoutfs_server_stop(sb);
scoutfs_server_abort(sb);
}
return ret;
@@ -787,35 +947,36 @@ static char *lock_on_list_string(u8 on_list)
static void lock_server_tseq_show(struct seq_file *m,
struct scoutfs_tseq_entry *ent)
{
struct client_lock_entry *c_ent = container_of(ent,
struct client_lock_entry *clent = container_of(ent,
struct client_lock_entry,
tseq_entry);
struct server_lock_node *snode = c_ent->snode;
struct server_lock_node *snode = clent->snode;
seq_printf(m, SK_FMT" %s %s rid %016llx net_id %llu\n",
SK_ARG(&snode->key), lock_mode_string(c_ent->mode),
lock_on_list_string(c_ent->on_list), c_ent->rid,
c_ent->net_id);
}
static void stats_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
{
struct server_lock_node *snode = container_of(ent, struct server_lock_node,
stats_tseq_entry);
seq_printf(m, SK_FMT" req %llu inv %llu rsp %llu gr %llu\n",
SK_ARG(&snode->key), snode->stats[SLT_REQUEST], snode->stats[SLT_INVALIDATE],
snode->stats[SLT_RESPONSE], snode->stats[SLT_GRANT]);
SK_ARG(&snode->key), lock_mode_string(clent->mode),
lock_on_list_string(clent->on_list), clent->rid,
clent->net_id);
}
/*
* Setup the lock server. This is called before networking can deliver
* requests.
* requests. If we find existing client records then we enter recovery.
* Lock request processing is deferred until recovery is resolved for
* all the existing clients: either they reconnect and replay locks or
* we time them out.
*/
int scoutfs_lock_server_setup(struct super_block *sb)
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 max_vers)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct lock_server_info *inf;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
unsigned int nr;
u64 rid;
int ret;
inf = kzalloc(sizeof(struct lock_server_info), GFP_KERNEL);
if (!inf)
@@ -823,9 +984,15 @@ int scoutfs_lock_server_setup(struct super_block *sb)
inf->sb = sb;
spin_lock_init(&inf->lock);
mutex_init(&inf->mutex);
inf->locks_root = RB_ROOT;
scoutfs_spbm_init(&inf->recovery_pending);
INIT_DELAYED_WORK(&inf->recovery_dwork,
scoutfs_lock_server_recovery_timeout);
scoutfs_tseq_tree_init(&inf->tseq_tree, lock_server_tseq_show);
scoutfs_tseq_tree_init(&inf->stats_tseq_tree, stats_tseq_show);
inf->alloc = alloc;
inf->wri = wri;
atomic64_set(&inf->write_version, max_vers); /* inc_return gives +1 */
inf->tseq_dentry = scoutfs_tseq_create("server_locks", sbi->debug_root,
&inf->tseq_tree);
@@ -834,17 +1001,38 @@ int scoutfs_lock_server_setup(struct super_block *sb)
return -ENOMEM;
}
inf->stats_tseq_dentry = scoutfs_tseq_create("server_lock_stats", sbi->debug_root,
&inf->stats_tseq_tree);
if (!inf->stats_tseq_dentry) {
debugfs_remove(inf->tseq_dentry);
kfree(inf);
return -ENOMEM;
}
sbi->lock_server_info = inf;
return 0;
/* we enter recovery if there are any client records */
nr = 0;
for (rid = 0; ; rid++) {
init_lock_clients_key(&key, rid);
ret = scoutfs_btree_next(sb, &super->lock_clients, &key, &iref);
if (ret == -ENOENT)
break;
if (ret == 0)
ret = get_rid_and_put_ref(&iref, &rid);
if (ret < 0)
goto out;
ret = scoutfs_spbm_set(&inf->recovery_pending, rid);
if (ret)
goto out;
nr++;
if (rid == U64_MAX)
break;
}
ret = 0;
if (nr) {
schedule_delayed_work(&inf->recovery_dwork,
msecs_to_jiffies(LOCK_SERVER_RECOVERY_MS));
scoutfs_info(sb, "waiting for %u lock clients to recover", nr);
}
out:
return ret;
}
/*
@@ -857,13 +1045,14 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct server_lock_node *snode;
struct server_lock_node *stmp;
struct client_lock_entry *c_ent;
struct client_lock_entry *clent;
struct client_lock_entry *ctmp;
LIST_HEAD(list);
if (inf) {
cancel_delayed_work_sync(&inf->recovery_dwork);
debugfs_remove(inf->tseq_dentry);
debugfs_remove(inf->stats_tseq_dentry);
rbtree_postorder_for_each_entry_safe(snode, stmp,
&inf->locks_root, node) {
@@ -873,14 +1062,16 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
list_splice_init(&snode->invalidated, &list);
mutex_lock(&snode->mutex);
list_for_each_entry_safe(c_ent, ctmp, &list, head) {
free_client_entry(inf, snode, c_ent);
list_for_each_entry_safe(clent, ctmp, &list, head) {
free_client_entry(inf, snode, clent);
}
mutex_unlock(&snode->mutex);
kfree(snode);
}
scoutfs_spbm_destroy(&inf->recovery_pending);
kfree(inf);
sbi->lock_server_info = NULL;
}


@@ -3,15 +3,17 @@
int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock_recover *nlr);
int scoutfs_lock_server_finished_recovery(struct super_block *sb);
int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
u64 net_id, struct scoutfs_net_lock *nl);
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid);
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
bool should_exist);
int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid);
int scoutfs_lock_server_setup(struct super_block *sb);
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 max_vers);
void scoutfs_lock_server_destroy(struct super_block *sb);
#endif


@@ -4,7 +4,6 @@
#include <linux/bitops.h>
#include "key.h"
#include "counters.h"
#include "super.h"
void __printf(4, 5) scoutfs_msg(struct super_block *sb, const char *prefix,
const char *str, const char *fmt, ...);
@@ -24,9 +23,6 @@ do { \
#define scoutfs_info(sb, fmt, args...) \
scoutfs_msg_check(sb, KERN_INFO, "", fmt, ##args)
#define scoutfs_tprintk(sb, fmt, args...) \
trace_printk(SCSBF " " fmt "\n", SCSB_ARGS(sb), ##args);
#define scoutfs_bug_on(sb, cond, fmt, args...) \
do { \
if (cond) { \


@@ -30,7 +30,6 @@
#include "net.h"
#include "endian_swap.h"
#include "tseq.h"
#include "fence.h"
/*
* scoutfs networking delivers requests and responses between nodes.
@@ -331,9 +330,6 @@ static int submit_send(struct super_block *sb,
WARN_ON_ONCE(id == 0 && (flags & SCOUTFS_NET_FLAG_RESPONSE)))
return -EINVAL;
if (scoutfs_forcing_unmount(sb))
return -EIO;
msend = kmalloc(offsetof(struct message_send,
nh.data[data_len]), GFP_NOFS);
if (!msend)
@@ -424,16 +420,6 @@ static int process_request(struct scoutfs_net_connection *conn,
mrecv->nh.data, le16_to_cpu(mrecv->nh.data_len));
}
static int call_resp_func(struct super_block *sb, struct scoutfs_net_connection *conn,
scoutfs_net_response_t resp_func, void *resp_data,
void *resp, unsigned int resp_len, int error)
{
if (resp_func)
return resp_func(sb, conn, resp, resp_len, error, resp_data);
else
return 0;
}
/*
* An incoming response finds the queued request and calls its response
* function. The response function for a given request will only be
@@ -448,6 +434,7 @@ static int process_response(struct scoutfs_net_connection *conn,
struct message_send *msend;
scoutfs_net_response_t resp_func = NULL;
void *resp_data;
int ret = 0;
spin_lock(&conn->lock);
@@ -462,8 +449,11 @@ static int process_response(struct scoutfs_net_connection *conn,
spin_unlock(&conn->lock);
return call_resp_func(sb, conn, resp_func, resp_data, mrecv->nh.data,
le16_to_cpu(mrecv->nh.data_len), net_err_to_host(mrecv->nh.error));
if (resp_func)
ret = resp_func(sb, conn, mrecv->nh.data,
le16_to_cpu(mrecv->nh.data_len),
net_err_to_host(mrecv->nh.error), resp_data);
return ret;
}
/*
@@ -629,6 +619,8 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
break;
}
trace_scoutfs_recv_clock_sync(nh.clock_sync_id);
data_len = le16_to_cpu(nh.data_len);
scoutfs_inc_counter(sb, net_recv_messages);
@@ -675,15 +667,8 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
scoutfs_tseq_add(&ninf->msg_tseq_tree, &mrecv->tseq_entry);
/*
* Initial received greetings are processed
* synchronously before any other incoming messages.
*
* Incoming requests or responses to the lock client are
* called synchronously to avoid reordering.
*/
if (nh.cmd == SCOUTFS_NET_CMD_GREETING ||
(nh.cmd == SCOUTFS_NET_CMD_LOCK && !conn->listening_conn))
/* synchronously process greeting before next recvmsg */
if (nh.cmd == SCOUTFS_NET_CMD_GREETING)
scoutfs_net_proc_worker(&mrecv->proc_work);
else
queue_work(conn->workq, &mrecv->proc_work);
@@ -783,6 +768,9 @@ static void scoutfs_net_send_worker(struct work_struct *work)
trace_scoutfs_net_send_message(sb, &conn->sockname,
&conn->peername, &msend->nh);
msend->nh.clock_sync_id = scoutfs_clock_sync_id();
trace_scoutfs_send_clock_sync(msend->nh.clock_sync_id);
ret = sendmsg_full(conn->sock, &msend->nh, len);
spin_lock(&conn->lock);
@@ -835,9 +823,11 @@ static void scoutfs_net_destroy_worker(struct work_struct *work)
if (conn->listening_conn && conn->notify_down)
conn->notify_down(sb, conn, conn->info, conn->rid);
/* free all messages, refactor and complete for forced unmount? */
list_splice_init(&conn->resend_queue, &conn->send_queue);
list_for_each_entry_safe(msend, tmp, &conn->send_queue, head)
list_for_each_entry_safe(msend, tmp, &conn->send_queue, head) {
free_msend(ninf, msend);
}
/* accepted sockets are removed from their listener's list */
if (conn->listening_conn) {
@@ -867,31 +857,13 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
}
/*
* By default, TCP would maintain a connection to an unresponsive peer
* for a very long time indeed. We can't do that because quorum
* members will only participate in an election when they don't have a
* healthy connection to a server. We use the KEEPALIVE* and
* TCP_USER_TIMEOUT options to ensure that we'll break an unresponsive
* connection and return to the quorum and client connection paths to
* try and establish a new connection to an active server.
*
* The TCP_KEEP* and TCP_USER_TIMEOUT option interaction is subtle.
* TCP_USER_TIMEOUT only applies if there is unacked written data in the
* send queue. It doesn't work if the connection is idle. Adding
* keepalive probes with user_timeout set changes how the keepalive
* timeout is calculated. CNT no longer matters. Each time
* additional probes (not the first) are sent the user timeout is
* checked against the last time data was received. If none of the
* keepalives are responded to then eventually the user timeout applies.
*
* Given all this, we start with the overall unresponsive timeout. Then
* we set the probes to start sending towards the end of the timeout.
* We give it a few tries for a successful response before the timeout
* elapses during the probe timer processing after the unsuccessful
* probes.
* Have a pretty aggressive keepalive timeout of around 10 seconds. The
* TCP keepalives are being processed out of task context so they should
* be responsive even when mounts are under load.
*/
#define UNRESPONSIVE_TIMEOUT_SECS 10
#define UNRESPONSIVE_PROBES 3
#define KEEPCNT 3
#define KEEPIDLE 7
#define KEEPINTVL 1
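With these values an idle but unresponsive peer is dropped after roughly KEEPIDLE + (KEEPCNT * KEEPINTVL) = 7 + (3 * 1) = 10 seconds of silence, which lines up with the roughly 10 second target described in the comment above.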
static int sock_opts_and_names(struct scoutfs_net_connection *conn,
struct socket *sock)
{
@@ -900,7 +872,7 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
int optval;
int ret;
/* we use a keepalive timeout instead of send timeout */
/* but use a keepalive timeout instead of send timeout */
tv.tv_sec = 0;
tv.tv_usec = 0;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
@@ -908,32 +880,24 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
if (ret)
goto out;
/* not checked when user_timeout != 0, but for clarity */
optval = UNRESPONSIVE_PROBES;
optval = KEEPCNT;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
BUILD_BUG_ON(UNRESPONSIVE_PROBES >= UNRESPONSIVE_TIMEOUT_SECS);
optval = UNRESPONSIVE_TIMEOUT_SECS - (UNRESPONSIVE_PROBES);
optval = KEEPIDLE;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = 1;
optval = KEEPINTVL;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = UNRESPONSIVE_TIMEOUT_SECS * MSEC_PER_SEC;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
(char *)&optval, sizeof(optval));
@@ -961,8 +925,6 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
ret = -EAFNOSUPPORT;
if (ret)
goto out;
conn->last_peername = conn->peername;
out:
return ret;
}
@@ -982,6 +944,7 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
struct scoutfs_net_connection *acc_conn;
DECLARE_WAIT_QUEUE_HEAD(waitq);
struct socket *acc_sock;
LIST_HEAD(conn_list);
int ret;
trace_scoutfs_net_listen_work_enter(sb, 0, 0);
@@ -991,8 +954,6 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
if (ret < 0)
break;
acc_sock->sk->sk_allocation = GFP_NOFS;
/* inherit accepted request funcs from listening conn */
acc_conn = scoutfs_net_alloc_conn(sb, conn->notify_up,
conn->notify_down,
@@ -1055,8 +1016,6 @@ static void scoutfs_net_connect_worker(struct work_struct *work)
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
/* caller specified connect timeout */
tv.tv_sec = conn->connect_timeout_ms / MSEC_PER_SEC;
tv.tv_usec = (conn->connect_timeout_ms % MSEC_PER_SEC) * USEC_PER_MSEC;
@@ -1130,11 +1089,9 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
struct scoutfs_net_connection *listener;
struct scoutfs_net_connection *acc_conn;
scoutfs_net_response_t resp_func;
struct message_send *msend;
struct message_send *tmp;
unsigned long delay;
void *resp_data;
trace_scoutfs_net_shutdown_work_enter(sb, 0, 0);
trace_scoutfs_conn_shutdown_start(conn);
@@ -1180,30 +1137,6 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
/* and wait for accepted conn shutdown work to finish */
wait_event(conn->waitq, empty_accepted_list(conn));
/*
* Forced unmount will cause net submit to fail once it's
* started and it calls shutdown to interrupt any previous
* senders waiting for a response. The response callbacks can
* do quite a lot of work so we're careful to call them outside
* the lock.
*/
if (scoutfs_forcing_unmount(sb)) {
spin_lock(&conn->lock);
list_splice_tail_init(&conn->send_queue, &conn->resend_queue);
while ((msend = list_first_entry_or_null(&conn->resend_queue,
struct message_send, head))) {
resp_func = msend->resp_func;
resp_data = msend->resp_data;
free_msend(ninf, msend);
spin_unlock(&conn->lock);
call_resp_func(sb, conn, resp_func, resp_data, NULL, 0, -ECONNABORTED);
spin_lock(&conn->lock);
}
spin_unlock(&conn->lock);
}
spin_lock(&conn->lock);
/* greetings aren't resent across sockets */
@@ -1273,7 +1206,6 @@ static void scoutfs_net_reconn_free_worker(struct work_struct *work)
unsigned long now = jiffies;
unsigned long deadline = 0;
bool requeue = false;
int ret;
trace_scoutfs_net_reconn_free_work_enter(sb, 0, 0);
@@ -1287,18 +1219,10 @@ restart:
time_after_eq(now, acc->reconn_deadline))) {
set_conn_fl(acc, reconn_freeing);
spin_unlock(&conn->lock);
if (!test_conn_fl(conn, shutting_down)) {
scoutfs_info(sb, "client "SIN_FMT" reconnect timed out, fencing",
SIN_ARG(&acc->last_peername));
ret = scoutfs_fence_start(sb, acc->rid,
acc->last_peername.sin_addr.s_addr,
SCOUTFS_FENCE_CLIENT_RECONNECT);
if (ret) {
scoutfs_err(sb, "client fence returned err %d, shutting down server",
ret);
scoutfs_server_stop(sb);
}
}
if (!test_conn_fl(conn, shutting_down))
scoutfs_info(sb, "client timed out "SIN_FMT" -> "SIN_FMT", can not reconnect",
SIN_ARG(&acc->sockname),
SIN_ARG(&acc->peername));
destroy_conn(acc);
goto restart;
}
@@ -1369,7 +1293,6 @@ scoutfs_net_alloc_conn(struct super_block *sb,
init_waitqueue_head(&conn->waitq);
conn->sockname.sin_family = AF_INET;
conn->peername.sin_family = AF_INET;
conn->last_peername.sin_family = AF_INET;
INIT_LIST_HEAD(&conn->accepted_head);
INIT_LIST_HEAD(&conn->accepted_list);
conn->next_send_seq = 1;
@@ -1454,8 +1377,6 @@ int scoutfs_net_bind(struct super_block *sb,
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
(char *)&optval, sizeof(optval));
@@ -1538,7 +1459,8 @@ int scoutfs_net_connect(struct super_block *sb,
struct scoutfs_net_connection *conn,
struct sockaddr_in *sin, unsigned long timeout_ms)
{
int ret = 0;
int error = 0;
int ret;
spin_lock(&conn->lock);
conn->connect_sin = *sin;
@@ -1546,8 +1468,10 @@ int scoutfs_net_connect(struct super_block *sb,
spin_unlock(&conn->lock);
queue_work(conn->workq, &conn->connect_work);
wait_event(conn->waitq, connect_result(conn, &ret));
return ret;
ret = wait_event_interruptible(conn->waitq,
connect_result(conn, &error));
return ret ?: error;
}
static void set_valid_greeting(struct scoutfs_net_connection *conn)
@@ -1683,10 +1607,10 @@ restart:
conn->next_send_id = reconn->next_send_id;
atomic64_set(&conn->recv_seq, atomic64_read(&reconn->recv_seq));
/* reconn should be idle while in reconn_wait */
/* greeting response/ack will be on conn send queue */
BUG_ON(!list_empty(&reconn->send_queue));
/* queued greeting response is racing, can be in send or resend queue */
list_splice_tail_init(&reconn->resend_queue, &conn->resend_queue);
BUG_ON(!list_empty(&conn->resend_queue));
list_splice_init(&reconn->resend_queue, &conn->resend_queue);
/* new conn info is unused, swap, old won't call down */
swap(conn->info, reconn->info);
@@ -1778,6 +1702,23 @@ int scoutfs_net_response_node(struct super_block *sb,
NULL, NULL, NULL);
}
/*
* The response function that was submitted with the request is not
* called if the request is canceled here.
*/
void scoutfs_net_cancel_request(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, u64 id)
{
struct message_send *msend;
spin_lock(&conn->lock);
msend = find_request(conn, cmd, id);
if (msend)
complete_send(conn, msend);
spin_unlock(&conn->lock);
}
struct sync_request_completion {
struct completion comp;
void *resp;
@@ -1833,10 +1774,11 @@ int scoutfs_net_sync_request(struct super_block *sb,
ret = scoutfs_net_submit_request(sb, conn, cmd, arg, arg_len,
sync_response, &sreq, &id);
if (ret == 0) {
wait_for_completion(&sreq.comp);
ret = wait_for_completion_interruptible(&sreq.comp);
if (ret == -ERESTARTSYS)
scoutfs_net_cancel_request(sb, conn, cmd, id);
else
ret = sreq.error;
}
return ret;
}


@@ -49,7 +49,6 @@ struct scoutfs_net_connection {
u64 greeting_id;
struct sockaddr_in sockname;
struct sockaddr_in peername;
struct sockaddr_in last_peername;
struct list_head accepted_head;
struct scoutfs_net_connection *listening_conn;
@@ -100,16 +99,6 @@ static inline void scoutfs_addr_to_sin(struct sockaddr_in *sin,
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->v4.port));
}
static inline void scoutfs_sin_to_addr(union scoutfs_inet_addr *addr, struct sockaddr_in *sin)
{
BUG_ON(sin->sin_family != AF_INET);
memset(addr, 0, sizeof(union scoutfs_inet_addr));
addr->v4.family = cpu_to_le16(SCOUTFS_AF_IPV4);
addr->v4.addr = be32_to_le32(sin->sin_addr.s_addr);
addr->v4.port = be16_to_le16(sin->sin_port);
}
struct scoutfs_net_connection *
scoutfs_net_alloc_conn(struct super_block *sb,
scoutfs_net_notify_t notify_up,
@@ -134,6 +123,9 @@ int scoutfs_net_submit_request_node(struct super_block *sb,
u64 rid, u8 cmd, void *arg, u16 arg_len,
scoutfs_net_response_t resp_func,
void *resp_data, u64 *id_ret);
void scoutfs_net_cancel_request(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, u64 id);
int scoutfs_net_sync_request(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, void *arg, unsigned arg_len,


@@ -1,872 +0,0 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/rhashtable.h>
#include <linux/rcupdate.h>
#include "format.h"
#include "counters.h"
#include "cmp.h"
#include "inode.h"
#include "client.h"
#include "server.h"
#include "omap.h"
#include "recov.h"
#include "scoutfs_trace.h"
/*
* As a client removes an inode from its cache with an nlink of 0 it
* needs to decide if it is the last client using the inode and should
* fully delete all the inode's items. It needs to know if other mounts
* still have the inode in use.
*
* We need a way to communicate between mounts that an inode is in use.
* We don't want to pay the synchronous per-file locking round trip
* costs associated with per-inode open locks that you'd typically see
* in systems to solve this problem. The first prototypes of this
* tracked open file handles so this was coined the open map, though it
* now tracks cached inodes.
*
* Clients maintain bitmaps that cover groups of inodes. As inodes
* enter the cache their bit is set and as the inode is evicted the bit
* is cleared. As deletion is attempted, either by scanning orphans or
* evicting an inode with an nlink of 0, messages are sent around the
* cluster to get the current bitmaps for that inode's group from all
* active mounts. If the inode's bit is clear then it can be deleted.
*
* This layer maintains a list of client rids to send messages to. The
* server calls us as clients enter and leave the cluster. We can't
* process requests until all clients are present as a server starts up,
* so we hook into recovery and delay processing until all previously
* existing clients are recovered or fenced.
*/
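As a rough illustration of the lifecycle described above (the caller below is hypothetical; the real hooks live in the inode cache paths and aren't part of this file), the set/clear helpers defined later pair up like this:

	static int example_track_inode(struct super_block *sb, u64 ino)
	{
		int ret;

		/* set the inode's bit as it enters the cache */
		ret = scoutfs_omap_set(sb, ino);
		if (ret < 0)
			return ret;

		/* ... the inode is cached and used ... */

		/* clear the bit as the inode is evicted */
		scoutfs_omap_clear(sb, ino);
		return 0;
	}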
struct omap_rid_list {
int nr_rids;
struct list_head head;
};
struct omap_rid_entry {
struct list_head head;
u64 rid;
};
struct omap_info {
/* client */
struct rhashtable group_ht;
/* server */
struct rhashtable req_ht;
struct llist_head requests;
spinlock_t lock;
struct omap_rid_list rids;
atomic64_t next_req_id;
};
#define DECLARE_OMAP_INFO(sb, name) \
struct omap_info *name = SCOUTFS_SB(sb)->omap_info
/*
* The presence of an inode in the inode cache sets its bit in the lock
* group's bitmap.
*
* We don't want to add additional global synchronization of inode cache
* maintenance so these are tracked in an rcu hash table. Once their
* total reaches zero they're removed from the hash and queued for
* freeing and readers should ignore them.
*/
struct omap_group {
struct super_block *sb;
struct rhash_head ht_head;
struct rcu_head rcu;
u64 nr;
spinlock_t lock;
unsigned int total;
__le64 bits[SCOUTFS_OPEN_INO_MAP_LE64S];
};
#define trace_group(sb, which, group, bit_nr) \
do { \
__typeof__(group) _grp = (group); \
__typeof__(bit_nr) _nr = (bit_nr); \
\
trace_scoutfs_omap_group_##which(sb, _grp, _grp->nr, _grp->total, _nr); \
} while (0)
/*
* Each request is initialized with the rids of currently mounted
* clients. As each responds we remove their rid and send the response
* once everyone has contributed.
*
* The request frequency will typically be low, but in a mass rm -rf
* load we will see O(groups * clients) messages flying around.
*/
struct omap_request {
struct llist_node llnode;
struct rhash_head ht_head;
struct rcu_head rcu;
spinlock_t lock;
u64 client_rid;
u64 client_id;
struct omap_rid_list rids;
struct scoutfs_open_ino_map map;
};
static inline void init_rid_list(struct omap_rid_list *list)
{
INIT_LIST_HEAD(&list->head);
list->nr_rids = 0;
}
/*
* Negative searches almost never happen.
*/
static struct omap_rid_entry *find_rid(struct omap_rid_list *list, u64 rid)
{
struct omap_rid_entry *entry;
list_for_each_entry(entry, &list->head, head) {
if (rid == entry->rid)
return entry;
}
return NULL;
}
static int free_rid(struct omap_rid_list *list, struct omap_rid_entry *entry)
{
int nr;
list_del(&entry->head);
nr = --list->nr_rids;
kfree(entry);
return nr;
}
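/*
 * Copy the rids in "from" into "to".  Entries can't be allocated while
 * holding the spinlock, so "to" is first grown or shrunk to match the
 * source count with the lock dropped, retrying if the count changed in
 * the meantime, and then the rid values themselves are copied under the
 * lock.  Returns -ENOMEM if allocating an entry fails.
 */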
static int copy_rids(struct omap_rid_list *to, struct omap_rid_list *from, spinlock_t *from_lock)
{
struct omap_rid_entry *entry;
struct omap_rid_entry *src;
struct omap_rid_entry *dst;
int nr;
spin_lock(from_lock);
while (to->nr_rids != from->nr_rids) {
nr = from->nr_rids;
spin_unlock(from_lock);
while (to->nr_rids < nr) {
entry = kmalloc(sizeof(struct omap_rid_entry), GFP_NOFS);
if (!entry)
return -ENOMEM;
list_add_tail(&entry->head, &to->head);
to->nr_rids++;
}
while (to->nr_rids > nr) {
entry = list_first_entry(&to->head, struct omap_rid_entry, head);
list_del(&entry->head);
kfree(entry);
to->nr_rids--;
}
spin_lock(from_lock);
}
dst = list_first_entry(&to->head, struct omap_rid_entry, head);
list_for_each_entry(src, &from->head, head) {
dst->rid = src->rid;
dst = list_next_entry(dst, head);
}
spin_unlock(from_lock);
return 0;
}
static void free_rids(struct omap_rid_list *list)
{
struct omap_rid_entry *entry;
struct omap_rid_entry *tmp;
list_for_each_entry_safe(entry, tmp, &list->head, head) {
list_del(&entry->head);
kfree(entry);
}
}
void scoutfs_omap_calc_group_nrs(u64 ino, u64 *group_nr, int *bit_nr)
{
*group_nr = ino >> SCOUTFS_OPEN_INO_MAP_SHIFT;
*bit_nr = ino & SCOUTFS_OPEN_INO_MAP_MASK;
}
static struct omap_group *alloc_group(struct super_block *sb, u64 group_nr)
{
struct omap_group *group;
group = kzalloc(sizeof(struct omap_group), GFP_NOFS);
if (group) {
group->sb = sb;
group->nr = group_nr;
spin_lock_init(&group->lock);
trace_group(sb, alloc, group, -1);
}
return group;
}
static void free_group(struct super_block *sb, struct omap_group *group)
{
trace_group(sb, free, group, -1);
kfree(group);
}
static void free_group_rcu(struct rcu_head *rcu)
{
struct omap_group *group = container_of(rcu, struct omap_group, rcu);
free_group(group->sb, group);
}
static const struct rhashtable_params group_ht_params = {
.key_len = member_sizeof(struct omap_group, nr),
.key_offset = offsetof(struct omap_group, nr),
.head_offset = offsetof(struct omap_group, ht_head),
};
/*
* Track a cached inode in its group. Our set can be racing with a
* final clear that removes the group from the hash, sets total to
* UINT_MAX, and calls rcu free. We can retry until the dead group is
* no longer visible in the hash table and we can insert a new allocated
* group.
*
* The caller must ensure that the bit is clear; -EEXIST will be
* returned otherwise.
*/
int scoutfs_omap_set(struct super_block *sb, u64 ino)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_group *group;
u64 group_nr;
int bit_nr;
bool found;
int ret = 0;
scoutfs_omap_calc_group_nrs(ino, &group_nr, &bit_nr);
retry:
found = false;
rcu_read_lock();
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
if (group) {
spin_lock(&group->lock);
if (group->total < UINT_MAX) {
found = true;
if (WARN_ON_ONCE(test_and_set_bit_le(bit_nr, group->bits)))
ret = -EEXIST;
else
group->total++;
}
trace_group(sb, inc, group, bit_nr);
spin_unlock(&group->lock);
}
rcu_read_unlock();
if (!found) {
group = alloc_group(sb, group_nr);
if (group) {
ret = rhashtable_lookup_insert_fast(&ominf->group_ht, &group->ht_head,
group_ht_params);
if (ret < 0)
free_group(sb, group);
if (ret == -EEXIST)
ret = 0;
if (ret == -EBUSY) {
/* wait for rehash to finish */
synchronize_rcu();
ret = 0;
}
if (ret == 0)
goto retry;
} else {
ret = -ENOMEM;
}
}
return ret;
}
bool scoutfs_omap_test(struct super_block *sb, u64 ino)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_group *group;
bool ret = false;
u64 group_nr;
int bit_nr;
scoutfs_omap_calc_group_nrs(ino, &group_nr, &bit_nr);
rcu_read_lock();
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
if (group) {
spin_lock(&group->lock);
ret = !!test_bit_le(bit_nr, group->bits);
spin_unlock(&group->lock);
}
rcu_read_unlock();
return ret;
}
/*
* Clear a previously set ino bit. Trying to clear a bit that's already
* clear implies imbalanced set/clear or bugs freeing groups. We only
* free groups here as the last clear drops the group's total to 0.
*/
void scoutfs_omap_clear(struct super_block *sb, u64 ino)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_group *group;
u64 group_nr;
int bit_nr;
scoutfs_omap_calc_group_nrs(ino, &group_nr, &bit_nr);
rcu_read_lock();
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
if (group) {
spin_lock(&group->lock);
WARN_ON_ONCE(!test_bit_le(bit_nr, group->bits));
WARN_ON_ONCE(group->total == 0);
WARN_ON_ONCE(group->total == UINT_MAX);
if (test_and_clear_bit_le(bit_nr, group->bits)) {
if (--group->total == 0) {
group->total = UINT_MAX;
rhashtable_remove_fast(&ominf->group_ht, &group->ht_head,
group_ht_params);
call_rcu(&group->rcu, free_group_rcu);
}
}
trace_group(sb, dec, group, bit_nr);
spin_unlock(&group->lock);
}
rcu_read_unlock();
WARN_ON_ONCE(!group);
}
/*
* The server adds rids as it discovers clients. We add them to the
* list of rids to send map requests to.
*/
int scoutfs_omap_add_rid(struct super_block *sb, u64 rid)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_rid_entry *entry;
struct omap_rid_entry *found;
entry = kmalloc(sizeof(struct omap_rid_entry), GFP_NOFS);
if (!entry)
return -ENOMEM;
spin_lock(&ominf->lock);
found = find_rid(&ominf->rids, rid);
if (!found) {
entry->rid = rid;
list_add_tail(&entry->head, &ominf->rids.head);
ominf->rids.nr_rids++;
}
spin_unlock(&ominf->lock);
if (found)
kfree(entry);
return 0;
}
static void free_req(struct omap_request *req)
{
free_rids(&req->rids);
kfree(req);
}
static void free_req_rcu(struct rcu_head *rcu)
{
struct omap_request *req = container_of(rcu, struct omap_request, rcu);
free_req(req);
}
static const struct rhashtable_params req_ht_params = {
.key_len = member_sizeof(struct omap_request, map.args.req_id),
.key_offset = offsetof(struct omap_request, map.args.req_id),
.head_offset = offsetof(struct omap_request, ht_head),
};
/*
* Remove a rid from all the pending requests. If it's the last rid we
* give the caller the details to send a response, they'll call back to
* keep removing. If their send fails they're going to shutdown the
* server so we can queue freeing the request as we give it to them.
*/
static int remove_rid_from_reqs(struct omap_info *ominf, u64 rid, u64 *resp_rid, u64 *resp_id,
struct scoutfs_open_ino_map *map)
{
struct omap_rid_entry *entry;
struct rhashtable_iter iter;
struct omap_request *req;
int ret = 0;
rhashtable_walk_enter(&ominf->req_ht, &iter);
rhashtable_walk_start(&iter);
for (;;) {
req = rhashtable_walk_next(&iter);
if (req == NULL)
break;
if (req == ERR_PTR(-EAGAIN))
continue;
spin_lock(&req->lock);
entry = find_rid(&req->rids, rid);
if (entry && free_rid(&req->rids, entry) == 0) {
*resp_rid = req->client_rid;
*resp_id = req->client_id;
memcpy(map, &req->map, sizeof(struct scoutfs_open_ino_map));
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
call_rcu(&req->rcu, free_req_rcu);
ret = 1;
}
spin_unlock(&req->lock);
if (ret > 0)
break;
}
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
if (ret <= 0) {
*resp_rid = 0;
*resp_id = 0;
}
return ret;
}
/*
* A client has been evicted. Remove its rid from the list and walk
* through all the pending requests and remove its rids, sending the
* response if it was the last rid waiting for a response.
*
* If this returns an error then the server will shut down.
*
* This can be called multiple times by different servers if there are
* errors reclaiming an evicted mount, so we allow asking to remove a
* rid that hasn't been added.
*/
int scoutfs_omap_remove_rid(struct super_block *sb, u64 rid)
{
DECLARE_OMAP_INFO(sb, ominf);
struct scoutfs_open_ino_map *map = NULL;
struct omap_rid_entry *entry;
u64 resp_rid = 0;
u64 resp_id = 0;
int ret;
spin_lock(&ominf->lock);
entry = find_rid(&ominf->rids, rid);
if (entry)
free_rid(&ominf->rids, entry);
spin_unlock(&ominf->lock);
if (!entry) {
ret = 0;
goto out;
}
map = kmalloc(sizeof(struct scoutfs_open_ino_map), GFP_NOFS);
if (!map) {
ret = -ENOMEM;
goto out;
}
/* remove the rid from all pending requests, sending responses if it was final */
for (;;) {
ret = remove_rid_from_reqs(ominf, rid, &resp_rid, &resp_id, map);
if (ret <= 0)
break;
ret = scoutfs_server_send_omap_response(sb, resp_rid, resp_id, map, 0);
if (ret < 0)
break;
}
out:
kfree(map);
return ret;
}
/*
* Handle a single incoming request in the server. This could have been
* delayed by recovery. This only returns an error if we couldn't send
* a processing error response to the client.
*/
static int handle_request(struct super_block *sb, struct omap_request *req)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_rid_list priv_rids;
struct omap_rid_entry *entry;
int ret;
init_rid_list(&priv_rids);
ret = copy_rids(&priv_rids, &ominf->rids, &ominf->lock);
if (ret < 0)
goto out;
/* don't send a request to the client who originated this request */
entry = find_rid(&priv_rids, req->client_rid);
if (entry && free_rid(&priv_rids, entry) == 0) {
ret = scoutfs_server_send_omap_response(sb, req->client_rid, req->client_id,
&req->map, 0);
kfree(req);
req = NULL;
goto out;
}
/* this lock isn't needed but sparse gave warnings with conditional locking */
ret = copy_rids(&req->rids, &priv_rids, &ominf->lock);
if (ret < 0)
goto out;
do {
ret = rhashtable_insert_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
if (ret == -EBUSY)
synchronize_rcu(); /* wait for rehash to finish */
} while (ret == -EBUSY);
if (ret < 0)
goto out;
/*
* We can start getting responses the moment we send the first request.  After
* we send the last request the req can be freed.
*/
while ((entry = list_first_entry_or_null(&priv_rids.head, struct omap_rid_entry, head))) {
ret = scoutfs_server_send_omap_request(sb, entry->rid, &req->map.args);
if (ret < 0) {
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
goto out;
}
free_rid(&priv_rids, entry);
}
ret = 0;
out:
free_rids(&priv_rids);
if (ret < 0) {
ret = scoutfs_server_send_omap_response(sb, req->client_rid, req->client_id,
NULL, ret);
free_req(req);
}
return ret;
}
/*
* Handle all previously received omap requests from clients. Once
* we've finished recovery and can send requests to all clients we can
* handle all pending requests. The handling function frees the request
* and only returns an error if it couldn't send a response to the
* client.
*/
static int handle_requests(struct super_block *sb)
{
DECLARE_OMAP_INFO(sb, ominf);
struct llist_node *requests;
struct omap_request *req;
struct omap_request *tmp;
int ret;
int err;
if (scoutfs_recov_next_pending(sb, 0, SCOUTFS_RECOV_GREETING))
return 0;
ret = 0;
requests = llist_del_all(&ominf->requests);
llist_for_each_entry_safe(req, tmp, requests, llnode) {
err = handle_request(sb, req);
if (err < 0 && ret == 0)
ret = err;
}
return ret;
}
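/*
 * A minimal userspace sketch (not scoutfs code) of the queueing pattern
 * in handle_requests()/scoutfs_omap_server_handle_request(): producers
 * push requests one at a time and the consumer detaches the whole list
 * with a single atomic exchange, mirroring llist_add()/llist_del_all().
 * C11 atomics stand in for the kernel's llist primitives; as with
 * llist, the drained list comes back newest-first.
 */
#include <stdatomic.h>
#include <stddef.h>

struct qnode {
	struct qnode *next;
};

static _Atomic(struct qnode *) queue_head;

/* producer: push a node onto the front of the list */
static void queue_push(struct qnode *node)
{
	struct qnode *old = atomic_load(&queue_head);

	do {
		node->next = old;
	} while (!atomic_compare_exchange_weak(&queue_head, &old, node));
}

/* consumer: atomically take every queued node, leaving the list empty */
static struct qnode *queue_drain(void)
{
	return atomic_exchange(&queue_head, NULL);
}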
int scoutfs_omap_finished_recovery(struct super_block *sb)
{
return handle_requests(sb);
}
/*
* The server is receiving a request from a client for the bitmap of all
* open inodes around their ino. Queue it for processing which is
* typically immediate and inline but which can be deferred by recovery
* as the server first starts up.
*/
int scoutfs_omap_server_handle_request(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_open_ino_map_args *args)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_request *req;
req = kzalloc(sizeof(struct omap_request), GFP_NOFS);
if (req == NULL)
return -ENOMEM;
spin_lock_init(&req->lock);
req->client_rid = rid;
req->client_id = id;
init_rid_list(&req->rids);
req->map.args.group_nr = args->group_nr;
req->map.args.req_id = cpu_to_le64(atomic64_inc_return(&ominf->next_req_id));
llist_add(&req->llnode, &ominf->requests);
return handle_requests(sb);
}
/*
* The client is receiving a request from the server for its map for the
* given group. Look up the group and copy the bits to the map.
*
* The mount originating the request for this bitmap has the inode group
* write locked. We can't be adding links to any inodes in the group
* because that requires the lock.  Inode bits can be set and cleared
* while we're sampling the bitmap.  These races are fine; they can't be
* adding cached inodes if nlink is 0 and we don't have the lock. If
* the caller is removing a set bit then they're about to try and delete
* the inode themselves and will first have to acquire the cluster lock
* themselves.
*/
int scoutfs_omap_client_handle_request(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map_args *args)
{
DECLARE_OMAP_INFO(sb, ominf);
u64 group_nr = le64_to_cpu(args->group_nr);
struct scoutfs_open_ino_map *map;
struct omap_group *group;
bool copied = false;
int ret;
map = kmalloc(sizeof(struct scoutfs_open_ino_map), GFP_NOFS);
if (!map)
return -ENOMEM;
map->args = *args;
rcu_read_lock();
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
if (group) {
spin_lock(&group->lock);
trace_group(sb, request, group, -1);
if (group->total > 0 && group->total < UINT_MAX) {
memcpy(map->bits, group->bits, sizeof(map->bits));
copied = true;
}
spin_unlock(&group->lock);
}
rcu_read_unlock();
if (!copied)
memset(map->bits, 0, sizeof(map->bits));
ret = scoutfs_client_send_omap_response(sb, id, map);
kfree(map);
return ret;
}
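/*
 * A small sketch (not scoutfs code) of the group/bit split that the
 * open inode map implies: an inode number selects a group covering a
 * fixed span of inodes and a bit within that group's bitmap.  The
 * group size of 4096 bits below is only an assumption for the example;
 * the real size is SCOUTFS_OPEN_INO_MAP_BITS.
 */
#include <stdint.h>

#define EXAMPLE_BITS_PER_GROUP 4096ULL	/* assumed group size */

static void example_calc_group_nrs(uint64_t ino, uint64_t *group_nr, int *bit_nr)
{
	*group_nr = ino / EXAMPLE_BITS_PER_GROUP;
	*bit_nr = (int)(ino % EXAMPLE_BITS_PER_GROUP);
}

/* e.g. ino 10000 falls in group 2 at bit 1808 with this assumed group size */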
/*
* The server has received an open ino map response from a client. Find
* the original request that it's serving, OR the response's map bits
* into the request's accumulated map, and send a reply if this was the
* last response from a client we were waiting for.
*
* We can get responses for requests we're no longer tracking if, for
* example, sending to a client gets an error. We'll have already sent
* the response to the requesting client so we drop these responses on
* the floor.
*/
int scoutfs_omap_server_handle_response(struct super_block *sb, u64 rid,
struct scoutfs_open_ino_map *resp_map)
{
DECLARE_OMAP_INFO(sb, ominf);
struct scoutfs_open_ino_map *map;
struct omap_rid_entry *entry;
bool send_response = false;
struct omap_request *req;
u64 resp_rid;
u64 resp_id;
int ret;
map = kmalloc(sizeof(struct scoutfs_open_ino_map), GFP_NOFS);
if (!map) {
ret = -ENOMEM;
goto out;
}
rcu_read_lock();
req = rhashtable_lookup(&ominf->req_ht, &resp_map->args.req_id, req_ht_params);
if (req) {
spin_lock(&req->lock);
entry = find_rid(&req->rids, rid);
if (entry) {
bitmap_or((unsigned long *)req->map.bits, (unsigned long *)req->map.bits,
(unsigned long *)resp_map->bits, SCOUTFS_OPEN_INO_MAP_BITS);
if (free_rid(&req->rids, entry) == 0)
send_response = true;
}
spin_unlock(&req->lock);
if (send_response) {
resp_rid = req->client_rid;
resp_id = req->client_id;
memcpy(map, &req->map, sizeof(struct scoutfs_open_ino_map));
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
call_rcu(&req->rcu, free_req_rcu);
}
}
rcu_read_unlock();
if (send_response)
ret = scoutfs_server_send_omap_response(sb, resp_rid, resp_id, map, 0);
else
ret = 0;
kfree(map);
out:
return ret;
}
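/*
 * A minimal userspace sketch (not scoutfs code) of the aggregation in
 * scoutfs_omap_server_handle_response() above: each client's bitmap is
 * OR'd into the request's accumulated map and the combined map is only
 * sent back once the last outstanding client has answered.  The map
 * size and field names here are hypothetical.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define EXAMPLE_MAP_WORDS 64

struct pending_req {
	uint64_t map[EXAMPLE_MAP_WORDS];	/* accumulated open inode bits */
	unsigned int outstanding;		/* clients still expected to answer */
};

/* returns true when the caller should send the combined map as the response */
static bool absorb_response(struct pending_req *req, const uint64_t *resp_map)
{
	size_t i;

	for (i = 0; i < EXAMPLE_MAP_WORDS; i++)
		req->map[i] |= resp_map[i];

	return --req->outstanding == 0;
}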
/*
* The server is shutting down. Free all the server state associated
* with ongoing request processing. Clients who still have requests
* pending will resend them to the next server.
*/
void scoutfs_omap_server_shutdown(struct super_block *sb)
{
DECLARE_OMAP_INFO(sb, ominf);
struct rhashtable_iter iter;
struct llist_node *requests;
struct omap_request *req;
struct omap_request *tmp;
rhashtable_walk_enter(&ominf->req_ht, &iter);
rhashtable_walk_start(&iter);
for (;;) {
req = rhashtable_walk_next(&iter);
if (req == NULL)
break;
if (req == ERR_PTR(-EAGAIN))
continue;
if (req->rids.nr_rids != 0) {
free_rids(&req->rids);
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
call_rcu(&req->rcu, free_req_rcu);
}
}
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
requests = llist_del_all(&ominf->requests);
llist_for_each_entry_safe(req, tmp, requests, llnode)
kfree(req);
synchronize_rcu();
}
int scoutfs_omap_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct omap_info *ominf;
int ret;
ominf = kzalloc(sizeof(struct omap_info), GFP_KERNEL);
if (!ominf) {
ret = -ENOMEM;
goto out;
}
ret = rhashtable_init(&ominf->group_ht, &group_ht_params);
if (ret < 0) {
kfree(ominf);
goto out;
}
ret = rhashtable_init(&ominf->req_ht, &req_ht_params);
if (ret < 0) {
rhashtable_destroy(&ominf->group_ht);
kfree(ominf);
goto out;
}
init_llist_head(&ominf->requests);
spin_lock_init(&ominf->lock);
init_rid_list(&ominf->rids);
atomic64_set(&ominf->next_req_id, 0);
sbi->omap_info = ominf;
ret = 0;
out:
return ret;
}
/*
* To get here the server must have shut down, freeing requests, and
* evict must have been called on all cached inodes so we can just
* synchronize all the pending group frees.
*/
void scoutfs_omap_destroy(struct super_block *sb)
{
DECLARE_OMAP_INFO(sb, ominf);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct rhashtable_iter iter;
if (ominf) {
synchronize_rcu();
/* double check that all the groups decremented to 0 and were freed */
rhashtable_walk_enter(&ominf->group_ht, &iter);
rhashtable_walk_start(&iter);
WARN_ON_ONCE(rhashtable_walk_peek(&iter) != NULL);
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
rhashtable_destroy(&ominf->group_ht);
rhashtable_destroy(&ominf->req_ht);
kfree(ominf);
sbi->omap_info = NULL;
}
}


@@ -1,23 +0,0 @@
#ifndef _SCOUTFS_OMAP_H_
#define _SCOUTFS_OMAP_H_
int scoutfs_omap_set(struct super_block *sb, u64 ino);
bool scoutfs_omap_test(struct super_block *sb, u64 ino);
void scoutfs_omap_clear(struct super_block *sb, u64 ino);
int scoutfs_omap_client_handle_request(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map_args *args);
void scoutfs_omap_calc_group_nrs(u64 ino, u64 *group_nr, int *bit_nr);
int scoutfs_omap_add_rid(struct super_block *sb, u64 rid);
int scoutfs_omap_remove_rid(struct super_block *sb, u64 rid);
int scoutfs_omap_finished_recovery(struct super_block *sb);
int scoutfs_omap_server_handle_request(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_open_ino_map_args *args);
int scoutfs_omap_server_handle_response(struct super_block *sb, u64 rid,
struct scoutfs_open_ino_map *resp_map);
void scoutfs_omap_server_shutdown(struct super_block *sb);
int scoutfs_omap_setup(struct super_block *sb);
void scoutfs_omap_destroy(struct super_block *sb);
#endif


@@ -26,30 +26,22 @@
#include "msg.h"
#include "options.h"
#include "super.h"
#include "inode.h"
enum {
Opt_metadev_path,
Opt_orphan_scan_delay_ms,
Opt_quorum_slot_nr,
Opt_err,
};
static const match_table_t tokens = {
{Opt_metadev_path, "metadev_path=%s"},
{Opt_orphan_scan_delay_ms, "orphan_scan_delay_ms=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_err, NULL}
};
struct options_info {
seqlock_t seqlock;
struct scoutfs_mount_options opts;
struct scoutfs_sysfs_attrs sysfs_attrs;
struct options_sb_info {
struct dentry *debugfs_dir;
};
#define DECLARE_OPTIONS_INFO(sb, name) \
struct options_info *name = SCOUTFS_SB(sb)->options_info
u32 scoutfs_option_u32(struct super_block *sb, int token)
{
WARN_ON_ONCE(1);
return 0;
}
static int parse_bdev_path(struct super_block *sb, substring_t *substr,
char **bdev_path_ret)
@@ -97,29 +89,8 @@ out:
return ret;
}
static void free_options(struct scoutfs_mount_options *opts)
{
kfree(opts->metadev_path);
}
#define MIN_ORPHAN_SCAN_DELAY_MS 100UL
#define DEFAULT_ORPHAN_SCAN_DELAY_MS (10 * MSEC_PER_SEC)
#define MAX_ORPHAN_SCAN_DELAY_MS (60 * MSEC_PER_SEC)
static void init_default_options(struct scoutfs_mount_options *opts)
{
memset(opts, 0, sizeof(*opts));
opts->quorum_slot_nr = -1;
opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
}
/*
* Parse the option string into our options struct. This can allocate
* memory in the struct. The caller is responsible for always calling
* free_options() when the struct is destroyed, including when we return
* an error.
*/
static int parse_options(struct super_block *sb, char *options, struct scoutfs_mount_options *opts)
int scoutfs_parse_options(struct super_block *sb, char *options,
struct mount_options *parsed)
{
substring_t args[MAX_OPT_ARGS];
int nr;
@@ -127,61 +98,49 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
char *p;
int ret;
/* Set defaults */
memset(parsed, 0, sizeof(*parsed));
parsed->quorum_slot_nr = -1;
while ((p = strsep(&options, ",")) != NULL) {
if (!*p)
continue;
token = match_token(p, tokens, args);
switch (token) {
case Opt_metadev_path:
ret = parse_bdev_path(sb, &args[0], &opts->metadev_path);
if (ret < 0)
return ret;
break;
case Opt_orphan_scan_delay_ms:
if (opts->orphan_scan_delay_ms != -1) {
scoutfs_err(sb, "multiple orphan_scan_delay_ms options provided, only provide one.");
return -EINVAL;
}
ret = match_int(args, &nr);
if (ret < 0 ||
nr < MIN_ORPHAN_SCAN_DELAY_MS || nr > MAX_ORPHAN_SCAN_DELAY_MS) {
scoutfs_err(sb, "invalid orphan_scan_delay_ms option, must be between %lu and %lu",
MIN_ORPHAN_SCAN_DELAY_MS, MAX_ORPHAN_SCAN_DELAY_MS);
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->orphan_scan_delay_ms = nr;
break;
case Opt_quorum_slot_nr:
if (opts->quorum_slot_nr != -1) {
if (parsed->quorum_slot_nr != -1) {
scoutfs_err(sb, "multiple quorum_slot_nr options provided, only provide one.");
return -EINVAL;
}
ret = match_int(args, &nr);
if (ret < 0 || nr < 0 || nr >= SCOUTFS_QUORUM_MAX_SLOTS) {
if (ret < 0 || nr < 0 ||
nr >= SCOUTFS_QUORUM_MAX_SLOTS) {
scoutfs_err(sb, "invalid quorum_slot_nr option, must be between 0 and %u",
SCOUTFS_QUORUM_MAX_SLOTS - 1);
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->quorum_slot_nr = nr;
parsed->quorum_slot_nr = nr;
break;
case Opt_metadev_path:
ret = parse_bdev_path(sb, &args[0],
&parsed->metadev_path);
if (ret < 0)
return ret;
break;
default:
scoutfs_err(sb, "Unknown or malformed option, \"%s\"", p);
return -EINVAL;
scoutfs_err(sb, "Unknown or malformed option, \"%s\"",
p);
break;
}
}
if (!opts->metadev_path) {
if (!parsed->metadev_path) {
scoutfs_err(sb, "Required mount option \"metadev_path\" not found");
return -EINVAL;
}
@@ -189,181 +148,40 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
return 0;
}
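/*
 * A minimal userspace sketch (not scoutfs code) of the option parsing
 * above: split the comma separated mount option string and recognize
 * the metadev_path= and quorum_slot_nr= prefixes.  Error handling and
 * range checking are pared down to the bare minimum for the example.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct example_opts {
	char *metadev_path;
	int quorum_slot_nr;
};

static int example_parse(char *options, struct example_opts *parsed)
{
	char *p;

	memset(parsed, 0, sizeof(*parsed));
	parsed->quorum_slot_nr = -1;

	while ((p = strsep(&options, ",")) != NULL) {
		if (!*p)
			continue;

		if (strncmp(p, "metadev_path=", strlen("metadev_path=")) == 0) {
			parsed->metadev_path = strdup(p + strlen("metadev_path="));
			if (!parsed->metadev_path)
				return -1;
		} else if (strncmp(p, "quorum_slot_nr=", strlen("quorum_slot_nr=")) == 0) {
			parsed->quorum_slot_nr = atoi(p + strlen("quorum_slot_nr="));
		} else {
			fprintf(stderr, "unknown option \"%s\"\n", p);
			return -1;
		}
	}

	if (!parsed->metadev_path) {
		fprintf(stderr, "metadev_path is required\n");
		return -1;
	}
	return 0;
}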
void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts)
{
DECLARE_OPTIONS_INFO(sb, optinf);
unsigned int seq;
if (WARN_ON_ONCE(optinf == NULL)) {
/* trying to use options before early setup or after destroy */
init_default_options(opts);
return;
}
do {
seq = read_seqbegin(&optinf->seqlock);
memcpy(opts, &optinf->opts, sizeof(struct scoutfs_mount_options));
} while (read_seqretry(&optinf->seqlock, seq));
}
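/*
 * A minimal userspace sketch (not scoutfs code) of the seqlock pattern
 * used by scoutfs_options_read() above: the writer bumps a sequence
 * counter to an odd value before touching the data and back to even
 * afterwards, and readers retry their snapshot whenever they observe
 * an odd counter or a counter that changed under them.  Writers are
 * assumed to be serialized, and the memory-ordering details that the
 * kernel's seqlock_t handles are glossed over here; only the retry
 * logic is shown.
 */
#include <stdatomic.h>

struct shared_opts {
	atomic_uint seq;
	unsigned int orphan_scan_delay_ms;
	int quorum_slot_nr;
};

static void write_opts(struct shared_opts *shared, unsigned int delay_ms, int slot)
{
	atomic_fetch_add(&shared->seq, 1);	/* odd: write in progress */
	shared->orphan_scan_delay_ms = delay_ms;
	shared->quorum_slot_nr = slot;
	atomic_fetch_add(&shared->seq, 1);	/* even: write complete */
}

static void read_opts(struct shared_opts *shared, unsigned int *delay_ms, int *slot)
{
	unsigned int start;

	for (;;) {
		start = atomic_load(&shared->seq);
		if (start & 1)
			continue;	/* writer in progress, try again */
		*delay_ms = shared->orphan_scan_delay_ms;
		*slot = shared->quorum_slot_nr;
		if (atomic_load(&shared->seq) == start)
			break;		/* stable snapshot */
	}
}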
/*
* Early setup that parses and stores the options so that the rest of
* setup can use them. Full options setup that relies on other
* components will be done later.
*/
int scoutfs_options_early_setup(struct super_block *sb, char *options)
int scoutfs_options_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_mount_options opts;
struct options_info *optinf;
struct options_sb_info *osi;
int ret;
init_default_options(&opts);
osi = kzalloc(sizeof(struct options_sb_info), GFP_KERNEL);
if (!osi)
return -ENOMEM;
ret = parse_options(sb, options, &opts);
if (ret < 0)
goto out;
sbi->options = osi;
optinf = kzalloc(sizeof(struct options_info), GFP_KERNEL);
if (!optinf) {
osi->debugfs_dir = debugfs_create_dir("options", sbi->debug_root);
if (!osi->debugfs_dir) {
ret = -ENOMEM;
goto out;
}
seqlock_init(&optinf->seqlock);
scoutfs_sysfs_init_attrs(sb, &optinf->sysfs_attrs);
write_seqlock(&optinf->seqlock);
optinf->opts = opts;
write_sequnlock(&optinf->seqlock);
sbi->options_info = optinf;
ret = 0;
out:
if (ret < 0)
free_options(&opts);
return ret;
}
int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
{
struct super_block *sb = root->d_sb;
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
seq_printf(seq, ",metadev_path=%s", opts.metadev_path);
seq_printf(seq, ",orphan_scan_delay_ms=%u", opts.orphan_scan_delay_ms);
if (opts.quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts.quorum_slot_nr);
return 0;
}
static ssize_t metadev_path_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%s", opts.metadev_path);
}
SCOUTFS_ATTR_RO(metadev_path);
static ssize_t orphan_scan_delay_ms_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%u", opts.orphan_scan_delay_ms);
}
static ssize_t orphan_scan_delay_ms_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[20]; /* more than enough for octal -U32_MAX */
long val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtol(nullterm, 0, &val);
if (ret < 0 || val < MIN_ORPHAN_SCAN_DELAY_MS || val > MAX_ORPHAN_SCAN_DELAY_MS) {
scoutfs_err(sb, "invalid orphan_scan_delay_ms value written to options sysfs file, must be between %lu and %lu",
MIN_ORPHAN_SCAN_DELAY_MS, MAX_ORPHAN_SCAN_DELAY_MS);
return -EINVAL;
}
write_seqlock(&optinf->seqlock);
optinf->opts.orphan_scan_delay_ms = val;
write_sequnlock(&optinf->seqlock);
scoutfs_inode_schedule_orphan_dwork(sb);
return count;
}
SCOUTFS_ATTR_RW(orphan_scan_delay_ms);
static ssize_t quorum_slot_nr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%d\n", opts.quorum_slot_nr);
}
SCOUTFS_ATTR_RO(quorum_slot_nr);
static struct attribute *options_attrs[] = {
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),
SCOUTFS_ATTR_PTR(quorum_slot_nr),
NULL,
};
int scoutfs_options_setup(struct super_block *sb)
{
DECLARE_OPTIONS_INFO(sb, optinf);
int ret;
ret = scoutfs_sysfs_create_attrs(sb, &optinf->sysfs_attrs, options_attrs, "mount_options");
if (ret < 0)
if (ret)
scoutfs_options_destroy(sb);
return ret;
}
/*
* We remove the sysfs files early in unmount so that they can't try to call other subsystems
* as they're being destroyed.
*/
void scoutfs_options_stop(struct super_block *sb)
{
DECLARE_OPTIONS_INFO(sb, optinf);
if (optinf)
scoutfs_sysfs_destroy_attrs(sb, &optinf->sysfs_attrs);
}
void scoutfs_options_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_OPTIONS_INFO(sb, optinf);
struct options_sb_info *osi = sbi->options;
scoutfs_options_stop(sb);
if (optinf) {
free_options(&optinf->opts);
kfree(optinf);
sbi->options_info = NULL;
if (osi) {
if (osi->debugfs_dir)
debugfs_remove_recursive(osi->debugfs_dir);
kfree(osi);
sbi->options = NULL;
}
}


@@ -5,19 +5,23 @@
#include <linux/in.h>
#include "format.h"
struct scoutfs_mount_options {
char *metadev_path;
unsigned int orphan_scan_delay_ms;
int quorum_slot_nr;
enum scoutfs_mount_options {
Opt_quorum_slot_nr,
Opt_metadev_path,
Opt_err,
};
void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts);
int scoutfs_options_show(struct seq_file *seq, struct dentry *root);
struct mount_options {
int quorum_slot_nr;
char *metadev_path;
};
int scoutfs_options_early_setup(struct super_block *sb, char *options);
int scoutfs_parse_options(struct super_block *sb, char *options,
struct mount_options *parsed);
int scoutfs_options_setup(struct super_block *sb);
void scoutfs_options_stop(struct super_block *sb);
void scoutfs_options_destroy(struct super_block *sb);
u32 scoutfs_option_u32(struct super_block *sb, int token);
#define scoutfs_option_bool scoutfs_option_u32
#endif /* _SCOUTFS_OPTIONS_H_ */


@@ -32,7 +32,6 @@
#include "block.h"
#include "net.h"
#include "sysfs.h"
#include "fence.h"
#include "scoutfs_trace.h"
/*
@@ -61,9 +60,10 @@
* running (maybe they've deadlocked, or lost network communications).
* In addition to a configuration slot in the super block, each quorum
* member also has a known block location that represents their slot.
* The block contains an array of events which are updated during the life
* time of the quorum agent. The elected leader sets its elected event
* and can then start the server.
* They set a flag in their block indicating that they've been elected
* leader, then read slots for all the other blocks looking for
* previously active leaders to fence. After that it can start the
* server.
*
* It's critical to raft elections that a participant's term not go
* backwards in time so each mount also uses its quorum block to store
@@ -97,7 +97,7 @@ struct quorum_host_msg {
struct last_msg {
struct quorum_host_msg msg;
ktime_t ts;
struct timespec64 ts;
};
enum quorum_role { FOLLOWER, CANDIDATE, LEADER };
@@ -105,8 +105,6 @@ enum quorum_role { FOLLOWER, CANDIDATE, LEADER };
struct quorum_status {
enum quorum_role role;
u64 term;
u64 server_start_term;
int server_event;
int vote_for;
unsigned long vote_bits;
ktime_t timeout;
@@ -118,7 +116,7 @@ struct quorum_info {
struct socket *sock;
bool shutdown;
int our_quorum_slot_nr;
unsigned long flags;
int votes_needed;
spinlock_t show_lock;
@@ -129,6 +127,8 @@ struct quorum_info {
struct scoutfs_sysfs_attrs ssa;
};
#define QINF_FLAG_SERVER 0
#define DECLARE_QUORUM_INFO(sb, name) \
struct quorum_info *name = SCOUTFS_SB(sb)->quorum_info
#define DECLARE_QUORUM_INFO_KOBJ(kobj, name) \
@@ -160,7 +160,9 @@ static ktime_t heartbeat_timeout(void)
static int create_socket(struct super_block *sb)
{
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct mount_options *opts = &sbi->opts;
struct scoutfs_super_block *super = &sbi->super;
struct socket *sock = NULL;
struct sockaddr_in sin;
int addrlen;
@@ -174,7 +176,7 @@ static int create_socket(struct super_block *sb)
sock->sk->sk_allocation = GFP_NOFS;
scoutfs_quorum_slot_sin(super, qinf->our_quorum_slot_nr, &sin);
scoutfs_quorum_slot_sin(super, opts->quorum_slot_nr, &sin);
addrlen = sizeof(sin);
ret = kernel_bind(sock, (struct sockaddr *)&sin, addrlen);
@@ -205,15 +207,16 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
int only)
{
DECLARE_QUORUM_INFO(sb, qinf);
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
ktime_t now;
struct timespec64 ts;
int i;
struct scoutfs_quorum_message qmes = {
.fsid = super->hdr.fsid,
.term = cpu_to_le64(term),
.type = type,
.from = qinf->our_quorum_slot_nr,
.from = opts->quorum_slot_nr,
};
struct kvec kv = {
.iov_base = &qmes,
@@ -232,20 +235,20 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
qmes.crc = quorum_message_crc(&qmes);
ts = ktime_to_timespec64(ktime_get());
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i) ||
(only >= 0 && i != only) || i == qinf->our_quorum_slot_nr)
(only >= 0 && i != only) || i == opts->quorum_slot_nr)
continue;
scoutfs_quorum_slot_sin(super, i, &sin);
now = ktime_get();
kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
spin_lock(&qinf->show_lock);
qinf->last_send[i].msg.term = term;
qinf->last_send[i].msg.type = type;
qinf->last_send[i].ts = now;
qinf->last_send[i].ts = ts;
spin_unlock(&qinf->show_lock);
if (i == only)
@@ -305,8 +308,6 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
if (ret < 0)
return ret;
now = ktime_get();
if (ret != sizeof(qmes) ||
qmes.crc != quorum_message_crc(&qmes) ||
qmes.fsid != super->hdr.fsid ||
@@ -326,25 +327,24 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
spin_lock(&qinf->show_lock);
qinf->last_recv[msg->from].msg = *msg;
qinf->last_recv[msg->from].ts = now;
qinf->last_recv[msg->from].ts = ktime_to_timespec64(ktime_get());
spin_unlock(&qinf->show_lock);
return 0;
}
/*
* Read and verify block fields before giving it to the caller. We
* should have exclusive write access to the block. We know that
* something has gone horribly wrong if we don't see our rid in the
* begin event after we've written it as we started up.
* The caller can provide a mark that they're using to track their
* written blocks. It's updated as they write the block and we can
* compare it with what we read to see if there have been unexpected
* intervening writes to the block -- the caller is supposed to have
* exclusive access to the block (or was fenced).
*/
static int read_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_quorum_block *blk,
bool check_rid)
static int read_quorum_block(struct super_block *sb, u64 blkno,
struct scoutfs_quorum_block *blk, __le64 *mark)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
const u64 rid = sbi->rid;
char msg[150];
__le32 crc;
int ret;
@@ -355,283 +355,203 @@ static int read_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_q
ret = scoutfs_block_read_sm(sb, sbi->meta_bdev, blkno,
&blk->hdr, sizeof(*blk), &crc);
if (ret < 0) {
scoutfs_err(sb, "quorum block read error %d", ret);
goto out;
}
/* detect invalid blocks */
if (blk->hdr.crc != crc)
snprintf(msg, sizeof(msg), "blk crc %08x != %08x",
le32_to_cpu(blk->hdr.crc), le32_to_cpu(crc));
else if (le32_to_cpu(blk->hdr.magic) != SCOUTFS_BLOCK_MAGIC_QUORUM)
snprintf(msg, sizeof(msg), "blk magic %08x != %08x",
le32_to_cpu(blk->hdr.magic), SCOUTFS_BLOCK_MAGIC_QUORUM);
else if (blk->hdr.fsid != super->hdr.fsid)
snprintf(msg, sizeof(msg), "blk fsid %016llx != %016llx",
le64_to_cpu(blk->hdr.fsid), le64_to_cpu(super->hdr.fsid));
else if (le64_to_cpu(blk->hdr.blkno) != blkno)
snprintf(msg, sizeof(msg), "blk blkno %llu != %llu",
le64_to_cpu(blk->hdr.blkno), blkno);
else if (check_rid && le64_to_cpu(blk->events[SCOUTFS_QUORUM_EVENT_BEGIN].rid) != rid)
snprintf(msg, sizeof(msg), "quorum block begin rid %016llx != our rid %016llx, are multiple mounts configured with this slot?",
le64_to_cpu(blk->events[SCOUTFS_QUORUM_EVENT_BEGIN].rid), rid);
else
msg[0] = '\0';
if (msg[0] != '\0') {
scoutfs_err(sb, "read invalid quorum block, %s", msg);
if (ret == 0 &&
((blk->hdr.crc != crc) ||
(le32_to_cpu(blk->hdr.magic) != SCOUTFS_BLOCK_MAGIC_QUORUM) ||
(blk->hdr.fsid != super->hdr.fsid) ||
(le64_to_cpu(blk->hdr.blkno) != blkno))) {
scoutfs_inc_counter(sb, quorum_read_invalid_block);
ret = -EIO;
goto out;
}
out:
if (mark && *mark != 0 && blk->random_write_mark != *mark) {
scoutfs_err(sb, "read unexpected quorum block write mark, are multiple mounts configured with the same slot?");
ret = -EIO;
}
if (ret < 0)
scoutfs_err(sb, "quorum block read error %d", ret);
return ret;
}
/*
* It's really important in raft elections that the term not go
* backwards in time. We achieve this by having each participant record
* the greatest term they've seen in their quorum block. It's also
* important that participants agree on the greatest term. It can
* happen that one gets ahead of the rest, perhaps by being forcefully
* shut down after having just been elected. As everyone starts up it's
* possible to have N-1 have term T-1 while just one participant thinks
* the term is T. That single participant will ignore all messages
* from older terms. If its timeout is greater than the others' it can
* immediately override the election of the majority and request votes
* and become elected.
*
* A best-effort workaround is to have everyone try to start from the
* greatest term that they can find in everyone's blocks. If it works
* then you avoid having those with greater terms ignore others. If it
* doesn't work the elections will eventually stabilize after rocky
* periods of fencing from what looks like concurrent elections.
*/
static void read_greatest_term(struct super_block *sb, u64 *term)
static void set_quorum_block_event(struct super_block *sb,
struct scoutfs_quorum_block *blk,
struct scoutfs_quorum_block_event *ev)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_quorum_block blk;
int ret;
int e;
int s;
*term = 0;
for (s = 0; s < SCOUTFS_QUORUM_MAX_SLOTS; s++) {
if (!quorum_slot_present(super, s))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + s, &blk, false);
if (ret < 0)
continue;
for (e = 0; e < ARRAY_SIZE(blk.events); e++) {
if (blk.events[e].rid)
*term = max(*term, le64_to_cpu(blk.events[e].term));
}
}
}
static void set_quorum_block_event(struct super_block *sb, struct scoutfs_quorum_block *blk,
int event, u64 term)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_quorum_block_event *ev;
struct timespec64 ts;
if (WARN_ON_ONCE(event < 0 || event >= SCOUTFS_QUORUM_EVENT_NR))
return;
getnstimeofday64(&ts);
le64_add_cpu(&blk->write_nr, 1);
ev = &blk->events[event];
ev->write_nr = blk->write_nr;
ev->rid = cpu_to_le64(sbi->rid);
ev->term = cpu_to_le64(term);
ev->ts.sec = cpu_to_le64(ts.tv_sec);
ev->ts.nsec = cpu_to_le32(ts.tv_nsec);
}
static int write_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_quorum_block *blk)
/*
* Every time we write a block we update the write stamp and random
* write mark so readers can see our write.
*/
static int write_quorum_block(struct super_block *sb, u64 blkno,
struct scoutfs_quorum_block *blk, __le64 *mark)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
int ret;
if (WARN_ON_ONCE(blkno < SCOUTFS_QUORUM_BLKNO) ||
WARN_ON_ONCE(blkno >= (SCOUTFS_QUORUM_BLKNO +
SCOUTFS_QUORUM_BLOCKS)))
return -EINVAL;
return scoutfs_block_write_sm(sb, sbi->meta_bdev, blkno, &blk->hdr, sizeof(*blk));
}
do {
get_random_bytes(&blk->random_write_mark,
sizeof(blk->random_write_mark));
} while (blk->random_write_mark == 0);
/*
* Read the caller's slot's quorum block, make a change, and write it
* back out.
*/
static int update_quorum_block(struct super_block *sb, int event, u64 term, bool check_rid)
{
DECLARE_QUORUM_INFO(sb, qinf);
u64 blkno = SCOUTFS_QUORUM_BLKNO + qinf->our_quorum_slot_nr;
struct scoutfs_quorum_block blk;
int ret;
if (mark)
*mark = blk->random_write_mark;
ret = read_quorum_block(sb, blkno, &blk, check_rid);
if (ret == 0) {
set_quorum_block_event(sb, &blk, event, term);
ret = write_quorum_block(sb, blkno, &blk);
if (ret < 0)
scoutfs_err(sb, "error %d reading quorum block %llu to update event %d term %llu",
ret, blkno, event, term);
} else {
scoutfs_err(sb, "error %d writing quorum block %llu after updating event %d term %llu",
ret, blkno, event, term);
}
set_quorum_block_event(sb, blk, &blk->write);
ret = scoutfs_block_write_sm(sb, sbi->meta_bdev, blkno,
&blk->hdr, sizeof(*blk));
if (ret < 0)
scoutfs_err(sb, "quorum block write error %d", ret);
return ret;
}
/*
* The calling server has been elected and has started running but can't
* yet assume that it has exclusive access to the metadata device. We
* read all the quorum blocks looking for previously elected leaders to
* fence so that we're the only leader running.
*
* We're relying on the invariant that there can't be two mounts running
* with the same slot nr at the same time. With this constraint there
* can be at most two previous leaders per slot that need to be fenced:
* a persistent record of an old mount on the slot, and an active mount.
*
* If we start fence requests then we only wait for them to complete
* before returning. The server will reclaim their resources once it is
* up and running and will call us to update the fence event. If we
* don't start fence requests then we update the fence event
* immediately, the server has nothing more to do.
*
* Quorum will be sending heartbeats while we wait for fencing. That
* keeps us from being fenced while we allow userspace fencing to take a
* reasonably long time. We still want to time out eventually.
* Read the caller's slot's current quorum block, make a change, and
* write it back out. If the caller provides a mark it can cause read
* errors if we read a mark that doesn't match the last mark that the
* caller wrote.
*/
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
static int update_quorum_block(struct super_block *sb, u64 blkno,
__le64 *mark, int role, u64 term)
{
struct scoutfs_quorum_block blk;
u64 flags;
u64 bits;
u64 set;
int ret;
ret = read_quorum_block(sb, blkno, &blk, mark);
if (ret == 0) {
if (blk.term != cpu_to_le64(term)) {
blk.term = cpu_to_le64(term);
set_quorum_block_event(sb, &blk, &blk.update_term);
}
flags = le64_to_cpu(blk.flags);
bits = SCOUTFS_QUORUM_BLOCK_LEADER;
set = role == LEADER ? SCOUTFS_QUORUM_BLOCK_LEADER : 0;
if ((flags & bits) != set)
set_quorum_block_event(sb, &blk,
set ? &blk.set_leader :
&blk.clear_leader);
blk.flags = cpu_to_le64((flags & ~bits) | set);
ret = write_quorum_block(sb, blkno, &blk, mark);
}
return ret;
}
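/*
 * A minimal userspace sketch (not scoutfs code) of the write-mark check
 * described above: every write stamps the block with a fresh nonzero
 * random mark, and a writer that remembers the mark it last wrote can
 * detect that someone else has written the block in the meantime.  The
 * rand()-based mark here is illustrative only; the kernel code uses
 * get_random_bytes().
 */
#include <stdint.h>
#include <stdlib.h>

struct marked_block {
	uint64_t random_write_mark;
	/* ... rest of the block payload ... */
};

static void write_block(struct marked_block *blk, uint64_t *my_mark)
{
	uint64_t mark;

	do {
		mark = ((uint64_t)rand() << 32) | (uint64_t)rand();
	} while (mark == 0);	/* zero means "no previous mark to compare" */

	blk->random_write_mark = mark;
	*my_mark = mark;
}

/* returns 0 if the block still carries our last mark, -1 if another writer intervened */
static int check_mark(const struct marked_block *blk, uint64_t my_mark)
{
	if (my_mark != 0 && blk->random_write_mark != my_mark)
		return -1;
	return 0;
}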
/*
* The calling server has been elected and updated their block, but
* can't yet assume that it has exclusive access to the metadata device.
* We read all the quorum blocks looking for previously elected leaders
* to fence so that we're the only leader running.
*/
static int fence_leader_blocks(struct super_block *sb)
{
#define NR_OLD 2
struct scoutfs_quorum_block_event old[SCOUTFS_QUORUM_MAX_SLOTS][NR_OLD] = {{{0,}}};
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct mount_options *opts = &sbi->opts;
struct scoutfs_quorum_block blk;
struct sockaddr_in sin;
const u64 rid = sbi->rid;
bool fence_started = false;
u64 fenced = 0;
__le64 fence_rid;
u64 blkno;
int ret = 0;
int err;
int i;
int j;
BUILD_BUG_ON(SCOUTFS_QUORUM_BLOCKS < SCOUTFS_QUORUM_MAX_SLOTS);
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
if (i == opts->quorum_slot_nr)
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
blkno = SCOUTFS_QUORUM_BLKNO + i;
ret = read_quorum_block(sb, blkno, &blk, NULL);
if (ret < 0)
goto out;
/* elected leader still running */
if (le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_ELECT].term) >
le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_STOP].term))
old[i][0] = blk.events[SCOUTFS_QUORUM_EVENT_ELECT];
if (!(le64_to_cpu(blk.flags) & SCOUTFS_QUORUM_BLOCK_LEADER))
continue;
/* persistent record of previous server before elected */
if ((le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_FENCE].term) >
le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_STOP].term)) &&
(le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_FENCE].term) <
le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_ELECT].term)))
old[i][1] = blk.events[SCOUTFS_QUORUM_EVENT_FENCE];
scoutfs_inc_counter(sb, quorum_fence_leader);
scoutfs_quorum_slot_sin(super, i, &sin);
/* find greatest term that has fenced everything before it */
fenced = max(fenced, le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_FENCE].term));
}
scoutfs_err(sb, "fencing "SCSBF" at "SIN_FMT,
SCSB_LEFR_ARGS(super->hdr.fsid, blk.set_leader.rid),
SIN_ARG(&sin));
/* now actually fence any old leaders which haven't been fenced yet */
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
for (j = 0; j < NR_OLD; j++) {
if (le64_to_cpu(old[i][j].term) == 0 || /* uninitialized */
le64_to_cpu(old[i][j].term) < fenced || /* already fenced */
le64_to_cpu(old[i][j].term) > term || /* newer than us */
le64_to_cpu(old[i][j].rid) == rid) /* us */
continue;
blk.flags &= ~cpu_to_le64(SCOUTFS_QUORUM_BLOCK_LEADER);
set_quorum_block_event(sb, &blk, &blk.fenced);
scoutfs_inc_counter(sb, quorum_fence_leader);
scoutfs_quorum_slot_sin(super, i, &sin);
fence_rid = old[i][j].rid;
scoutfs_info(sb, "fencing previous leader "SCSBF" at term %llu in slot %u with address "SIN_FMT,
SCSB_LEFR_ARGS(super->hdr.fsid, fence_rid),
le64_to_cpu(old[i][j].term), i, SIN_ARG(&sin));
ret = scoutfs_fence_start(sb, le64_to_cpu(fence_rid), sin.sin_addr.s_addr,
SCOUTFS_FENCE_QUORUM_BLOCK_LEADER);
if (ret < 0)
goto out;
fence_started = true;
}
ret = write_quorum_block(sb, blkno, &blk, NULL);
if (ret < 0)
goto out;
}
out:
err = scoutfs_fence_wait_fenced(sb, msecs_to_jiffies(SCOUTFS_QUORUM_FENCE_TO_MS));
if (ret == 0)
ret = err;
if (ret < 0)
if (ret < 0) {
scoutfs_err(sb, "error %d fencing active", ret);
scoutfs_inc_counter(sb, quorum_fence_error);
}
return ret;
}
/*
* The main quorum task maintains its private status. It seemed cleaner
* to occasionally copy the status for showing in sysfs/debugfs files
* than to have the two of them serialize access to shared status with a lock. The show copy is
* updated after being modified before the quorum task sleeps for a
* significant amount of time, either waiting on timeouts or interacting
* with the server.
*/
static void update_show_status(struct quorum_info *qinf, struct quorum_status *qst)
{
spin_lock(&qinf->show_lock);
qinf->show_status = *qst;
spin_unlock(&qinf->show_lock);
}
/*
* The quorum work always runs in the background of quorum member
* mounts. It's responsible for starting and stopping the server if
* it's elected leader. While it's leader it sends heartbeats to
* suppress other quorum work from standing for election.
* it's elected leader, and the server can call back into it to let it
* know that it has shut itself down (perhaps due to error) so that the
* work should stop sending heartbeats.
*/
static void scoutfs_quorum_worker(struct work_struct *work)
{
struct quorum_info *qinf = container_of(work, struct quorum_info, work);
struct super_block *sb = qinf->sb;
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
struct scoutfs_quorum_block blk;
struct sockaddr_in unused;
struct quorum_host_msg msg;
struct quorum_status qst = {0,};
struct quorum_status qst;
__le64 mark;
u64 blkno;
int ret;
int err;
/* recording votes from slots as native single word bitmap */
BUILD_BUG_ON(SCOUTFS_QUORUM_MAX_SLOTS > BITS_PER_LONG);
/* get our starting term from our persistent block */
mark = 0;
blkno = SCOUTFS_QUORUM_BLKNO + opts->quorum_slot_nr;
ret = read_quorum_block(sb, blkno, &blk, &mark);
if (ret < 0)
goto out;
/* start out as a follower */
qst.role = FOLLOWER;
qst.term = le64_to_cpu(blk.term);
qst.vote_for = -1;
/* read our starting term from greatest in all events in all slots */
read_greatest_term(sb, &qst.term);
qst.vote_bits = 0;
/* see if there's a server to chose heartbeat or election timeout */
if (scoutfs_quorum_server_sin(sb, &unused) == 0)
@@ -639,14 +559,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
else
qst.timeout = election_timeout();
/* record that we're up and running, readers check that it isn't updated */
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_BEGIN, qst.term, false);
if (ret < 0)
goto out;
while (!(qinf->shutdown || scoutfs_forcing_unmount(sb))) {
update_show_status(qinf, &qst);
while (!qinf->shutdown) {
ret = recv_msg(sb, &msg, qst.timeout);
if (ret < 0) {
@@ -664,6 +577,29 @@ static void scoutfs_quorum_worker(struct work_struct *work)
msg.term < qst.term)
msg.type = SCOUTFS_QUORUM_MSG_INVALID;
/* if the server has shutdown we become follower */
if (!test_bit(QINF_FLAG_SERVER, &qinf->flags) &&
qst.role == LEADER) {
qst.role = FOLLOWER;
qst.vote_for = -1;
qst.vote_bits = 0;
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_server_shutdown);
send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION,
qst.term);
scoutfs_inc_counter(sb, quorum_send_resignation);
ret = update_quorum_block(sb, blkno, &mark,
qst.role, qst.term);
if (ret < 0)
goto out;
}
spin_lock(&qinf->show_lock);
qinf->show_status = qst;
spin_unlock(&qinf->show_lock);
trace_scoutfs_quorum_loop(sb, qst.role, qst.term, qst.vote_for,
qst.vote_bits,
ktime_to_timespec64(qst.timeout));
@@ -674,6 +610,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
if (qst.role == LEADER) {
scoutfs_warn(sb, "saw msg type %u from %u for term %llu while leader in term %llu, shutting down server.",
msg.type, msg.from, msg.term, qst.term);
scoutfs_server_stop(sb);
}
qst.role = FOLLOWER;
qst.term = msg.term;
@@ -687,7 +624,8 @@ static void scoutfs_quorum_worker(struct work_struct *work)
qst.timeout = election_timeout();
/* store our increased term */
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_TERM, qst.term, true);
ret = update_quorum_block(sb, blkno, &mark,
qst.role, qst.term);
if (ret < 0)
goto out;
}
@@ -695,36 +633,25 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* followers and candidates start new election on timeout */
if (qst.role != LEADER &&
ktime_after(ktime_get(), qst.timeout)) {
/* .. but only if their server has stopped */
if (!scoutfs_server_is_down(sb)) {
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_candidate_server_stopping);
continue;
}
qst.role = CANDIDATE;
qst.term++;
qst.vote_for = -1;
qst.vote_bits = 0;
set_bit(qinf->our_quorum_slot_nr, &qst.vote_bits);
set_bit(opts->quorum_slot_nr, &qst.vote_bits);
send_msg_others(sb, SCOUTFS_QUORUM_MSG_REQUEST_VOTE,
qst.term);
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_send_request);
/* store our increased term */
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_TERM, qst.term, true);
if (ret < 0)
goto out;
}
/* candidates count votes in their term */
if (qst.role == CANDIDATE &&
msg.type == SCOUTFS_QUORUM_MSG_VOTE) {
if (test_and_set_bit(msg.from, &qst.vote_bits)) {
if (test_bit(msg.from, &qst.vote_bits)) {
scoutfs_warn(sb, "already received vote from %u in term %llu, are there multiple mounts with quorum_slot_nr=%u?",
msg.from, qst.term, msg.from);
}
set_bit(msg.from, &qst.vote_bits);
scoutfs_inc_counter(sb, quorum_recv_vote);
}
@@ -743,69 +670,24 @@ static void scoutfs_quorum_worker(struct work_struct *work)
qst.term);
qst.timeout = heartbeat_interval();
update_show_status(qinf, &qst);
/* record that we've been elected before starting up server */
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_ELECT, qst.term, true);
/* set our leader flag and fence */
ret = update_quorum_block(sb, blkno, &mark,
qst.role, qst.term) ?:
fence_leader_blocks(sb);
if (ret < 0)
goto out;
qst.server_start_term = qst.term;
qst.server_event = SCOUTFS_QUORUM_EVENT_ELECT;
scoutfs_server_start(sb, qst.term);
}
/*
* This leader's server is up, having finished fencing
* previous leaders. We update the fence event with the
* current term to let future leaders know that previous
* servers have been fenced.
*/
if (qst.role == LEADER && qst.server_event != SCOUTFS_QUORUM_EVENT_FENCE &&
scoutfs_server_is_up(sb)) {
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_FENCE, qst.term, true);
if (ret < 0)
goto out;
qst.server_event = SCOUTFS_QUORUM_EVENT_FENCE;
}
/*
* Stop a running server if we're no longer leader in
* its term.
*/
if (!(qst.role == LEADER && qst.term == qst.server_start_term) &&
scoutfs_server_is_running(sb)) {
/* make very sure server is fully shut down */
scoutfs_server_stop(sb);
}
/* set server bit before server shutdown could clear */
set_bit(QINF_FLAG_SERVER, &qinf->flags);
/*
* A previously running server has stopped. The quorum
* protocol might have shut it down by changing roles or
* it might have stopped on its own, perhaps on errors.
* If we're still a leader then we become a follower and
* send resignations to encourage the next election.
* Always update the _STOP event to stop connections and
* fencing.
*/
if (qst.server_start_term > 0 && scoutfs_server_is_down(sb)) {
if (qst.role == LEADER) {
qst.role = FOLLOWER;
qst.vote_for = -1;
qst.vote_bits = 0;
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_server_shutdown);
send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION,
qst.server_start_term);
scoutfs_inc_counter(sb, quorum_send_resignation);
}
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP,
qst.server_start_term, true);
if (ret < 0)
ret = scoutfs_server_start(sb, qst.term);
if (ret < 0) {
scoutfs_err(sb, "server startup failed with %d",
ret);
goto out;
qst.server_start_term = 0;
}
}
/* leaders regularly send heartbeats to delay elections */
@@ -842,70 +724,80 @@ static void scoutfs_quorum_worker(struct work_struct *work)
}
}
update_show_status(qinf, &qst);
/* always try to stop a running server as we stop */
if (scoutfs_server_is_running(sb)) {
scoutfs_server_stop_wait(sb);
send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION, qst.term);
if (qst.server_start_term > 0) {
err = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP,
qst.server_start_term, true);
if (err < 0 && ret == 0)
ret = err;
}
if (test_bit(QINF_FLAG_SERVER, &qinf->flags)) {
scoutfs_server_stop(sb);
send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION,
qst.term);
}
/* record that this slot no longer has an active quorum */
update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_END, qst.term, true);
/* always try to clear leader block as we stop to avoid fencing */
if (qst.role == LEADER) {
ret = update_quorum_block(sb, blkno, &mark,
FOLLOWER, qst.term);
if (ret < 0)
goto out;
}
out:
if (ret < 0) {
scoutfs_err(sb, "quorum service saw error %d, shutting down. This mount is no longer participating in quorum. It should be remounted to restore service.",
scoutfs_err(sb, "quorum service saw error %d, shutting down. Cluster will be degraded until this slot is remounted to restart the quorum service",
ret);
}
}
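/*
 * A minimal userspace sketch (not scoutfs code) of the vote accounting
 * in the quorum worker above: votes are recorded as bits in a word so
 * a duplicate vote from a misconfigured slot is easy to spot, and the
 * tally is a cheap population count (standing in for hweight_long()).
 * __builtin_popcountl() assumes GCC or clang.
 */
#include <stdbool.h>
#include <stdio.h>

struct election {
	unsigned long vote_bits;
	int votes_needed;
};

/* returns true once enough distinct slots have voted for this candidate */
static bool record_vote(struct election *el, int from_slot)
{
	unsigned long bit = 1UL << from_slot;

	if (el->vote_bits & bit)
		fprintf(stderr, "duplicate vote from slot %d\n", from_slot);
	el->vote_bits |= bit;

	return __builtin_popcountl(el->vote_bits) >= el->votes_needed;
}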
/*
* Set a flag for the quorum work's next iteration to indicate that the
* server has shutdown and that it should step down as leader, update
* quorum blocks, and stop sending heartbeats.
*/
void scoutfs_quorum_server_shutdown(struct super_block *sb)
{
DECLARE_QUORUM_INFO(sb, qinf);
set_bit(QINF_FLAG_SERVER, &qinf->flags);
}
/*
* Clients read quorum blocks looking for the leader with a server whose
* address they can try to connect to.
*
* There can be records of multiple previous elected leaders if the
* current server hasn't yet fenced any old servers. We use the elected
* leader with the greatest elected term. If we get it wrong the
* connection will time out and the client will try again.
* There can be multiple running servers if a client checks before a
* server has had a chance to fence any old servers. We try to use the
* block with the most recent timestamp. If we get it wrong the
* connection will time out and the client will try again, presumably
* finding a single server block.
*/
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_quorum_block blk;
u64 elect_term;
u64 term = 0;
int ret = 0;
struct timespec64 recent = {0,};
struct timespec64 ts;
int ret;
int i;
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk,
NULL);
if (ret < 0) {
scoutfs_err(sb, "error reading quorum block nr %u: %d",
i, ret);
goto out;
}
elect_term = le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_ELECT].term);
if (elect_term > term &&
elect_term > le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_STOP].term)) {
term = elect_term;
ts.tv_sec = le64_to_cpu(blk.set_leader.ts.sec);
ts.tv_nsec = le32_to_cpu(blk.set_leader.ts.nsec);
if ((le64_to_cpu(blk.flags) & SCOUTFS_QUORUM_BLOCK_LEADER) &&
(timespec64_to_ns(&ts) > timespec64_to_ns(&recent))) {
recent = ts;
scoutfs_quorum_slot_sin(super, i, sin);
continue;
}
}
if (term == 0)
if (timespec64_to_ns(&recent) == 0)
ret = -ENOENT;
out:
@@ -968,10 +860,10 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
DECLARE_QUORUM_INFO_KOBJ(kobj, qinf);
struct mount_options *opts = &SCOUTFS_SB(qinf->sb)->opts;
struct quorum_status qst;
struct last_msg last;
struct timespec64 ts;
const ktime_t now = ktime_get();
size_t size;
int ret;
int i;
@@ -984,20 +876,18 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
ret = 0;
snprintf_ret(buf, size, &ret, "quorum_slot_nr %u\n",
qinf->our_quorum_slot_nr);
opts->quorum_slot_nr);
snprintf_ret(buf, size, &ret, "term %llu\n",
qst.term);
snprintf_ret(buf, size, &ret, "server_start_term %llu\n", qst.server_start_term);
snprintf_ret(buf, size, &ret, "server_event %d\n", qst.server_event);
snprintf_ret(buf, size, &ret, "role %d (%s)\n",
qst.role, role_str(qst.role));
snprintf_ret(buf, size, &ret, "vote_for %d\n",
qst.vote_for);
snprintf_ret(buf, size, &ret, "vote_bits 0x%lx (count %lu)\n",
qst.vote_bits, hweight_long(qst.vote_bits));
ts = ktime_to_timespec64(ktime_sub(qst.timeout, now));
snprintf_ret(buf, size, &ret, "timeout_in_secs %lld.%09u\n",
(s64)ts.tv_sec, (int)ts.tv_nsec);
ts = ktime_to_timespec64(qst.timeout);
snprintf_ret(buf, size, &ret, "timeout %llu.%u\n",
(u64)ts.tv_sec, (int)ts.tv_nsec);
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
spin_lock(&qinf->show_lock);
@@ -1007,11 +897,10 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
if (last.msg.term == 0)
continue;
ts = ktime_to_timespec64(ktime_sub(now, last.ts));
snprintf_ret(buf, size, &ret,
"last_send to %u term %llu type %u secs_since %lld.%09u\n",
"last_send to %u term %llu type %u ts %llu.%u\n",
i, last.msg.term, last.msg.type,
(s64)ts.tv_sec, (int)ts.tv_nsec);
(u64)last.ts.tv_sec, (int)last.ts.tv_nsec);
}
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
@@ -1021,12 +910,10 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
if (last.msg.term == 0)
continue;
ts = ktime_to_timespec64(ktime_sub(now, last.ts));
snprintf_ret(buf, size, &ret,
"last_recv from %u term %llu type %u secs_since %lld.%09u\n",
"last_recv from %u term %llu type %u ts %llu.%u\n",
i, last.msg.term, last.msg.type,
(s64)ts.tv_sec, (int)ts.tv_nsec);
(u64)last.ts.tv_sec, (int)last.ts.tv_nsec);
}
return ret;
@@ -1063,16 +950,13 @@ static inline bool valid_ipv4_port(__be16 port)
static int verify_quorum_slots(struct super_block *sb)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
char slots[(SCOUTFS_QUORUM_MAX_SLOTS * 3) + 1];
DECLARE_QUORUM_INFO(sb, qinf);
struct sockaddr_in other;
struct sockaddr_in sin;
int found = 0;
int ret;
int i;
int j;
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
continue;
@@ -1113,25 +997,6 @@ static int verify_quorum_slots(struct super_block *sb)
return -EINVAL;
}
if (!quorum_slot_present(super, qinf->our_quorum_slot_nr)) {
char *str = slots;
*str = '\0';
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (quorum_slot_present(super, i)) {
ret = snprintf(str, &slots[ARRAY_SIZE(slots)] - str, "%c%u",
str == slots ? ' ' : ',', i);
if (ret < 2 || ret > 3) {
scoutfs_err(sb, "error gathering populated slots");
return -EINVAL;
}
str += ret;
}
}
scoutfs_err(sb, "quorum_slot_nr=%u option references unused slot, must be one of the following configured slots:%s",
qinf->our_quorum_slot_nr, slots);
return -EINVAL;
}
/*
* Always require a majority except in the pathological cases of
* 1 or 2 members.
@@ -1151,12 +1016,11 @@ static int verify_quorum_slots(struct super_block *sb)
int scoutfs_quorum_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_mount_options opts;
struct mount_options *opts = &sbi->opts;
struct quorum_info *qinf;
int ret;
scoutfs_options_read(sb, &opts);
if (opts.quorum_slot_nr < 0)
if (opts->quorum_slot_nr < 0)
return 0;
qinf = kzalloc(sizeof(struct quorum_info), GFP_KERNEL);
@@ -1168,8 +1032,6 @@ int scoutfs_quorum_setup(struct super_block *sb)
spin_lock_init(&qinf->show_lock);
INIT_WORK(&qinf->work, scoutfs_quorum_worker);
scoutfs_sysfs_init_attrs(sb, &qinf->ssa);
/* static for the lifetime of the mount */
qinf->our_quorum_slot_nr = opts.quorum_slot_nr;
sbi->quorum_info = qinf;
qinf->sb = sb;


@@ -2,13 +2,12 @@
#define _SCOUTFS_QUORUM_H_
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
void scoutfs_quorum_server_shutdown(struct super_block *sb);
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
struct sockaddr_in *sin);
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term);
int scoutfs_quorum_setup(struct super_block *sb);
void scoutfs_quorum_shutdown(struct super_block *sb);
void scoutfs_quorum_destroy(struct super_block *sb);


@@ -1,305 +0,0 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/rhashtable.h>
#include <linux/rcupdate.h>
#include <linux/list_sort.h>
#include "super.h"
#include "recov.h"
#include "cmp.h"
/*
* There are a few server messages which can't be processed until they
* know that they have state for all possibly active clients. These
* little helpers track which clients have recovered what state and give
* those message handlers a call to check if recovery has completed. We
* track the timeout here, but all we do is call back into the server to
* take steps to evict timed out clients and then let us know that their
* recovery has finished.
*/
struct recov_info {
struct super_block *sb;
spinlock_t lock;
struct list_head pending;
struct timer_list timer;
void (*timeout_fn)(struct super_block *);
};
#define DECLARE_RECOV_INFO(sb, name) \
struct recov_info *name = SCOUTFS_SB(sb)->recov_info
struct recov_pending {
struct list_head head;
u64 rid;
int which;
};
static struct recov_pending *next_pending(struct recov_info *recinf, u64 rid, int which)
{
struct recov_pending *pend;
list_for_each_entry(pend, &recinf->pending, head) {
if (pend->rid > rid && pend->which & which)
return pend;
}
return NULL;
}
static struct recov_pending *lookup_pending(struct recov_info *recinf, u64 rid, int which)
{
struct recov_pending *pend;
pend = next_pending(recinf, rid - 1, which);
if (pend && pend->rid == rid)
return pend;
return NULL;
}
/*
* We keep the pending list sorted by rid so that we can iterate over
* them. The list should be small and shouldn't be used often.
*/
static int cmp_pending_rid(void *priv, struct list_head *A, struct list_head *B)
{
struct recov_pending *a = list_entry(A, struct recov_pending, head);
struct recov_pending *b = list_entry(B, struct recov_pending, head);
return scoutfs_cmp_u64s(a->rid, b->rid);
}
/*
* Record that we'll be waiting for a client to recover something.
* _finished will eventually be called for every _prepare, either
* because recovery naturally finished or because it timed out and the
* server evicted the client.
*/
int scoutfs_recov_prepare(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *alloc;
struct recov_pending *pend;
if (WARN_ON_ONCE(which & SCOUTFS_RECOV_INVALID))
return -EINVAL;
alloc = kmalloc(sizeof(*pend), GFP_NOFS);
if (!alloc)
return -ENOMEM;
spin_lock(&recinf->lock);
pend = lookup_pending(recinf, rid, SCOUTFS_RECOV_ALL);
if (pend) {
pend->which |= which;
} else {
swap(pend, alloc);
pend->rid = rid;
pend->which = which;
list_add_tail(&pend->head, &recinf->pending);
list_sort(NULL, &recinf->pending, cmp_pending_rid);
}
spin_unlock(&recinf->lock);
kfree(alloc);
return 0;
}
/*
* Recovery is only finished once we've begun (which sets the timer) and
* all clients have finished. If we didn't test the timer we could
* claim it finished prematurely as clients are being prepared.
*/
static int recov_finished(struct recov_info *recinf)
{
return !!(recinf->timeout_fn != NULL && list_empty(&recinf->pending));
}
static void timer_callback(struct timer_list *timer)
{
struct recov_info *recinf = from_timer(recinf, timer, timer);
recinf->timeout_fn(recinf->sb);
}
/*
* Begin waiting for recovery once we've prepared all the clients. If
* the timeout period elapses before _finish is called on all prepared
* clients then the timer will call the callback.
*
* Returns > 0 if all the prepared clients finish recovery before begin
* is called.
*/
int scoutfs_recov_begin(struct super_block *sb, void (*timeout_fn)(struct super_block *),
unsigned int timeout_ms)
{
DECLARE_RECOV_INFO(sb, recinf);
int ret;
spin_lock(&recinf->lock);
recinf->timeout_fn = timeout_fn;
recinf->timer.expires = jiffies + msecs_to_jiffies(timeout_ms);
add_timer(&recinf->timer);
ret = recov_finished(recinf);
spin_unlock(&recinf->lock);
if (ret > 0)
del_timer_sync(&recinf->timer);
return ret;
}
/*
* A given client has recovered the given state. If it's finished all
* recovery then we free it, and if all clients have finished recovery
* then we cancel the timeout timer.
*
* Returns > 0 if _begin has been called and all clients have finished.
* The caller will only see > 0 returned once.
*/
int scoutfs_recov_finish(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
int ret = 0;
spin_lock(&recinf->lock);
pend = lookup_pending(recinf, rid, which);
if (pend) {
pend->which &= ~which;
if (pend->which) {
pend = NULL;
} else {
list_del(&pend->head);
ret = recov_finished(recinf);
}
}
spin_unlock(&recinf->lock);
if (ret > 0)
del_timer_sync(&recinf->timer);
kfree(pend);
return ret;
}
/*
* Returns true if the given client is still trying to recover
* the given state.
*/
bool scoutfs_recov_is_pending(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
bool is_pending;
spin_lock(&recinf->lock);
is_pending = lookup_pending(recinf, rid, which) != NULL;
spin_unlock(&recinf->lock);
return is_pending;
}
/*
* Return the next rid after the given rid of a client waiting for the
* given state to be recovered. Start with rid 0, returns 0 when there
* are no more clients waiting for recovery.
*
 * This is inherently racy.  Callers are responsible for reconciling any
 * actions they take based on a pending result with the recovery
 * finishing, perhaps before we even return.
*/
u64 scoutfs_recov_next_pending(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
spin_lock(&recinf->lock);
pend = next_pending(recinf, rid, which);
rid = pend ? pend->rid : 0;
spin_unlock(&recinf->lock);
return rid;
}
/*
* The server is shutting down and doesn't need to worry about recovery
* anymore. It'll be built up again by the next server, if needed.
*/
void scoutfs_recov_shutdown(struct super_block *sb)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
struct recov_pending *tmp;
LIST_HEAD(list);
del_timer_sync(&recinf->timer);
spin_lock(&recinf->lock);
list_splice_init(&recinf->pending, &list);
recinf->timeout_fn = NULL;
spin_unlock(&recinf->lock);
list_for_each_entry_safe(pend, tmp, &list, head) {
list_del(&pend->head);
kfree(pend);
}
}
int scoutfs_recov_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct recov_info *recinf;
int ret;
recinf = kzalloc(sizeof(struct recov_info), GFP_KERNEL);
if (!recinf) {
ret = -ENOMEM;
goto out;
}
recinf->sb = sb;
spin_lock_init(&recinf->lock);
INIT_LIST_HEAD(&recinf->pending);
timer_setup(&recinf->timer, timer_callback, 0);
sbi->recov_info = recinf;
ret = 0;
out:
return ret;
}
void scoutfs_recov_destroy(struct super_block *sb)
{
DECLARE_RECOV_INFO(sb, recinf);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (recinf) {
scoutfs_recov_shutdown(sb);
kfree(recinf);
sbi->recov_info = NULL;
}
}
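Taken together, the comments above describe a simple lifecycle: prepare every client you expect to recover, begin with a timeout, have message handlers call finish as clients check in, and iterate what's still pending. A hypothetical server-side caller might look like the sketch below; the function names, the 30 second timeout, and the elided error handling are illustrative only and not part of this tree.

static void example_recov_timed_out(struct super_block *sb)
{
	/* a real server would evict or fence the clients still pending */
}

static int example_wait_for_recovery(struct super_block *sb, u64 *rids, int nr)
{
	u64 rid = 0;
	int ret;
	int i;

	/* record each client we expect to hear from again */
	for (i = 0; i < nr; i++) {
		ret = scoutfs_recov_prepare(sb, rids[i],
					    SCOUTFS_RECOV_GREETING | SCOUTFS_RECOV_LOCKS);
		if (ret < 0)
			return ret;
	}

	/* arm the timeout; > 0 means every prepared client already finished */
	ret = scoutfs_recov_begin(sb, example_recov_timed_out, 30 * MSEC_PER_SEC);
	if (ret > 0)
		return 0;

	/* message handlers call scoutfs_recov_finish() as clients check in;
	 * here we only walk what is still outstanding, starting from rid 0 */
	while ((rid = scoutfs_recov_next_pending(sb, rid, SCOUTFS_RECOV_ALL)) != 0)
		; /* e.g. resend a recovery request to rid */

	return 0;
}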


@@ -1,23 +0,0 @@
#ifndef _SCOUTFS_RECOV_H_
#define _SCOUTFS_RECOV_H_
enum {
SCOUTFS_RECOV_GREETING = ( 1 << 0),
SCOUTFS_RECOV_LOCKS = ( 1 << 1),
SCOUTFS_RECOV_INVALID = (~0 << 2),
SCOUTFS_RECOV_ALL = (~SCOUTFS_RECOV_INVALID),
};
int scoutfs_recov_prepare(struct super_block *sb, u64 rid, int which);
int scoutfs_recov_begin(struct super_block *sb, void (*timeout_fn)(struct super_block *),
unsigned int timeout_ms);
int scoutfs_recov_finish(struct super_block *sb, u64 rid, int which);
bool scoutfs_recov_is_pending(struct super_block *sb, u64 rid, int which);
u64 scoutfs_recov_next_pending(struct super_block *sb, u64 rid, int which);
void scoutfs_recov_shutdown(struct super_block *sb);
int scoutfs_recov_setup(struct super_block *sb);
void scoutfs_recov_destroy(struct super_block *sb);
#endif
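For reference, with a 32-bit int those flag definitions evaluate as below (a worked example, not text from the header); this is why the WARN_ON_ONCE(which & SCOUTFS_RECOV_INVALID) check in scoutfs_recov_prepare() rejects any bit outside the defined set.

	SCOUTFS_RECOV_GREETING = 0x00000001
	SCOUTFS_RECOV_LOCKS    = 0x00000002
	SCOUTFS_RECOV_INVALID  = 0xfffffffc	/* every bit above the defined flags */
	SCOUTFS_RECOV_ALL      = 0x00000003	/* GREETING | LOCKS */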


@@ -58,6 +58,9 @@ struct lock_info;
__entry->pref##_map, \
__entry->pref##_flags
#define DECLARE_TRACED_EXTENT(name) \
struct scoutfs_traced_extent name = {0}
DECLARE_EVENT_CLASS(scoutfs_ino_ret_class,
TP_PROTO(struct super_block *sb, u64 ino, int ret),
@@ -403,36 +406,32 @@ TRACE_EVENT(scoutfs_sync_fs,
);
TRACE_EVENT(scoutfs_trans_write_func,
TP_PROTO(struct super_block *sb, u64 dirty_block_bytes, u64 dirty_item_pages),
TP_PROTO(struct super_block *sb, unsigned long dirty),
TP_ARGS(sb, dirty_block_bytes, dirty_item_pages),
TP_ARGS(sb, dirty),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, dirty_block_bytes)
__field(__u64, dirty_item_pages)
__field(unsigned long, dirty)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->dirty_block_bytes = dirty_block_bytes;
__entry->dirty_item_pages = dirty_item_pages;
__entry->dirty = dirty;
),
TP_printk(SCSBF" dirty_block_bytes %llu dirty_item_pages %llu",
SCSB_TRACE_ARGS, __entry->dirty_block_bytes, __entry->dirty_item_pages)
TP_printk(SCSBF" dirty %lu", SCSB_TRACE_ARGS, __entry->dirty)
);
DECLARE_EVENT_CLASS(scoutfs_trans_hold_release_class,
TP_PROTO(struct super_block *sb, void *journal_info, int holders, int ret),
TP_PROTO(struct super_block *sb, void *journal_info, int holders),
TP_ARGS(sb, journal_info, holders, ret),
TP_ARGS(sb, journal_info, holders),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(unsigned long, journal_info)
__field(int, holders)
__field(int, ret)
),
TP_fast_assign(
@@ -441,17 +440,17 @@ DECLARE_EVENT_CLASS(scoutfs_trans_hold_release_class,
__entry->holders = holders;
),
TP_printk(SCSBF" journal_info 0x%0lx holders %d ret %d",
SCSB_TRACE_ARGS, __entry->journal_info, __entry->holders, __entry->ret)
TP_printk(SCSBF" journal_info 0x%0lx holders %d",
SCSB_TRACE_ARGS, __entry->journal_info, __entry->holders)
);
DEFINE_EVENT(scoutfs_trans_hold_release_class, scoutfs_hold_trans,
TP_PROTO(struct super_block *sb, void *journal_info, int holders, int ret),
TP_ARGS(sb, journal_info, holders, ret)
DEFINE_EVENT(scoutfs_trans_hold_release_class, scoutfs_trans_acquired_hold,
TP_PROTO(struct super_block *sb, void *journal_info, int holders),
TP_ARGS(sb, journal_info, holders)
);
DEFINE_EVENT(scoutfs_trans_hold_release_class, scoutfs_release_trans,
TP_PROTO(struct super_block *sb, void *journal_info, int holders, int ret),
TP_ARGS(sb, journal_info, holders, ret)
TP_PROTO(struct super_block *sb, void *journal_info, int holders),
TP_ARGS(sb, journal_info, holders)
);
TRACE_EVENT(scoutfs_ioc_release,
@@ -691,16 +690,15 @@ TRACE_EVENT(scoutfs_evict_inode,
TRACE_EVENT(scoutfs_drop_inode,
TP_PROTO(struct super_block *sb, __u64 ino, unsigned int nlink,
unsigned int unhashed, bool drop_invalidated),
unsigned int unhashed),
TP_ARGS(sb, ino, nlink, unhashed, drop_invalidated),
TP_ARGS(sb, ino, nlink, unhashed),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(unsigned int, nlink)
__field(unsigned int, unhashed)
__field(unsigned int, drop_invalidated)
),
TP_fast_assign(
@@ -708,12 +706,10 @@ TRACE_EVENT(scoutfs_drop_inode,
__entry->ino = ino;
__entry->nlink = nlink;
__entry->unhashed = unhashed;
__entry->drop_invalidated = !!drop_invalidated;
),
TP_printk(SCSBF" ino %llu nlink %u unhashed %d drop_invalidated %u", SCSB_TRACE_ARGS,
__entry->ino, __entry->nlink, __entry->unhashed,
__entry->drop_invalidated)
TP_printk(SCSBF" ino %llu nlink %u unhashed %d", SCSB_TRACE_ARGS,
__entry->ino, __entry->nlink, __entry->unhashed)
);
TRACE_EVENT(scoutfs_inode_walk_writeback,
@@ -986,6 +982,22 @@ TRACE_EVENT(scoutfs_delete_inode,
__entry->mode, __entry->size)
);
TRACE_EVENT(scoutfs_scan_orphans,
TP_PROTO(struct super_block *sb),
TP_ARGS(sb),
TP_STRUCT__entry(
__field(dev_t, dev)
),
TP_fast_assign(
__entry->dev = sb->s_dev;
),
TP_printk("dev %d,%d", MAJOR(__entry->dev), MINOR(__entry->dev))
);
DECLARE_EVENT_CLASS(scoutfs_key_class,
TP_PROTO(struct super_block *sb, struct scoutfs_key *key),
TP_ARGS(sb, key),
@@ -1629,164 +1641,6 @@ TRACE_EVENT(scoutfs_btree_walk,
__entry->level, __entry->ref_blkno, __entry->ref_seq)
);
TRACE_EVENT(scoutfs_btree_set_parent,
TP_PROTO(struct super_block *sb,
struct scoutfs_btree_root *root, struct scoutfs_key *key,
struct scoutfs_btree_root *par_root),
TP_ARGS(sb, root, key, par_root),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, root_blkno)
__field(__u64, root_seq)
__field(__u8, root_height)
sk_trace_define(key)
__field(__u64, par_root_blkno)
__field(__u64, par_root_seq)
__field(__u8, par_root_height)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->root_blkno = le64_to_cpu(root->ref.blkno);
__entry->root_seq = le64_to_cpu(root->ref.seq);
__entry->root_height = root->height;
sk_trace_assign(key, key);
__entry->par_root_blkno = le64_to_cpu(par_root->ref.blkno);
__entry->par_root_seq = le64_to_cpu(par_root->ref.seq);
__entry->par_root_height = par_root->height;
),
TP_printk(SCSBF" root blkno %llu seq %llu height %u, key "SK_FMT", par_root blkno %llu seq %llu height %u",
SCSB_TRACE_ARGS, __entry->root_blkno, __entry->root_seq,
__entry->root_height, sk_trace_args(key),
__entry->par_root_blkno, __entry->par_root_seq,
__entry->par_root_height)
);
TRACE_EVENT(scoutfs_btree_merge,
TP_PROTO(struct super_block *sb, struct scoutfs_btree_root *root,
struct scoutfs_key *start, struct scoutfs_key *end),
TP_ARGS(sb, root, start, end),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, root_blkno)
__field(__u64, root_seq)
__field(__u8, root_height)
sk_trace_define(start)
sk_trace_define(end)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->root_blkno = le64_to_cpu(root->ref.blkno);
__entry->root_seq = le64_to_cpu(root->ref.seq);
__entry->root_height = root->height;
sk_trace_assign(start, start);
sk_trace_assign(end, end);
),
TP_printk(SCSBF" root blkno %llu seq %llu height %u start "SK_FMT" end "SK_FMT,
SCSB_TRACE_ARGS, __entry->root_blkno, __entry->root_seq,
__entry->root_height, sk_trace_args(start),
sk_trace_args(end))
);
TRACE_EVENT(scoutfs_btree_merge_items,
TP_PROTO(struct super_block *sb,
struct scoutfs_btree_root *m_root,
struct scoutfs_key *m_key, int m_val_len,
struct scoutfs_btree_root *f_root,
struct scoutfs_key *f_key, int f_val_len,
int is_del),
TP_ARGS(sb, m_root, m_key, m_val_len, f_root, f_key, f_val_len, is_del),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, m_root_blkno)
__field(__u64, m_root_seq)
__field(__u8, m_root_height)
sk_trace_define(m_key)
__field(int, m_val_len)
__field(__u64, f_root_blkno)
__field(__u64, f_root_seq)
__field(__u8, f_root_height)
sk_trace_define(f_key)
__field(int, f_val_len)
__field(int, is_del)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->m_root_blkno = m_root ?
le64_to_cpu(m_root->ref.blkno) : 0;
__entry->m_root_seq = m_root ? le64_to_cpu(m_root->ref.seq) : 0;
__entry->m_root_height = m_root ? m_root->height : 0;
sk_trace_assign(m_key, m_key);
__entry->m_val_len = m_val_len;
__entry->f_root_blkno = f_root ?
le64_to_cpu(f_root->ref.blkno) : 0;
__entry->f_root_seq = f_root ? le64_to_cpu(f_root->ref.seq) : 0;
__entry->f_root_height = f_root ? f_root->height : 0;
sk_trace_assign(f_key, f_key);
__entry->f_val_len = f_val_len;
__entry->is_del = !!is_del;
),
TP_printk(SCSBF" merge item root blkno %llu seq %llu height %u key "SK_FMT" val_len %d, fs item root blkno %llu seq %llu height %u key "SK_FMT" val_len %d, is_del %d",
SCSB_TRACE_ARGS, __entry->m_root_blkno, __entry->m_root_seq,
__entry->m_root_height, sk_trace_args(m_key),
__entry->m_val_len, __entry->f_root_blkno,
__entry->f_root_seq, __entry->f_root_height,
sk_trace_args(f_key), __entry->f_val_len, __entry->is_del)
);
DECLARE_EVENT_CLASS(scoutfs_btree_free_blocks,
TP_PROTO(struct super_block *sb, struct scoutfs_btree_root *root,
u64 blkno),
TP_ARGS(sb, root, blkno),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, root_blkno)
__field(__u64, root_seq)
__field(__u8, root_height)
__field(__u64, blkno)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->root_blkno = le64_to_cpu(root->ref.blkno);
__entry->root_seq = le64_to_cpu(root->ref.seq);
__entry->root_height = root->height;
__entry->blkno = blkno;
),
TP_printk(SCSBF" root blkno %llu seq %llu height %u, free blkno %llu",
SCSB_TRACE_ARGS, __entry->root_blkno, __entry->root_seq,
__entry->root_height, __entry->blkno)
);
DEFINE_EVENT(scoutfs_btree_free_blocks, scoutfs_btree_free_blocks_single,
TP_PROTO(struct super_block *sb, struct scoutfs_btree_root *root,
u64 blkno),
TP_ARGS(sb, root, blkno)
);
DEFINE_EVENT(scoutfs_btree_free_blocks, scoutfs_btree_free_blocks_leaf,
TP_PROTO(struct super_block *sb, struct scoutfs_btree_root *root,
u64 blkno),
TP_ARGS(sb, root, blkno)
);
DEFINE_EVENT(scoutfs_btree_free_blocks, scoutfs_btree_free_blocks_parent,
TP_PROTO(struct super_block *sb, struct scoutfs_btree_root *root,
u64 blkno),
TP_ARGS(sb, root, blkno)
);
TRACE_EVENT(scoutfs_online_offline_blocks,
TP_PROTO(struct inode *inode, s64 on_delta, s64 off_delta,
u64 on_now, u64 off_now),
@@ -1843,53 +1697,6 @@ DEFINE_EVENT(scoutfs_server_client_count_class, scoutfs_server_client_down,
TP_ARGS(sb, rid, nr_clients)
);
DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(int, holding)
__field(int, applying)
__field(int, nr_holders)
__field(__u32, avail_before)
__field(__u32, freed_before)
__field(int, exceeded)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->holding = !!holding;
__entry->applying = !!applying;
__entry->nr_holders = nr_holders;
__entry->avail_before = avail_before;
__entry->freed_before = freed_before;
__entry->exceeded = !!exceeded;
),
TP_printk(SCSBF" holding %u applying %u nr %u avail_before %u freed_before %u exceeded %u",
SCSB_TRACE_ARGS, __entry->holding, __entry->applying, __entry->nr_holders,
__entry->avail_before, __entry->freed_before, __entry->exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_hold,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_apply,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_start,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_end,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
#define slt_symbolic(mode) \
__print_symbolic(mode, \
{ SLT_CLIENT, "client" }, \
@@ -2001,7 +1808,54 @@ TRACE_EVENT(scoutfs_quorum_loop,
__entry->timeout_sec, __entry->timeout_nsec)
);
TRACE_EVENT(scoutfs_trans_seq_last,
/*
* We can emit trace events to make it easier to synchronize the
* monotonic clocks in trace logs between nodes. By looking at the send
* and recv times of many messages flowing between nodes we can get
* surprisingly good estimates of the clock offset between them.
*/
DECLARE_EVENT_CLASS(scoutfs_clock_sync_class,
TP_PROTO(__le64 clock_sync_id),
TP_ARGS(clock_sync_id),
TP_STRUCT__entry(
__field(__u64, clock_sync_id)
),
TP_fast_assign(
__entry->clock_sync_id = le64_to_cpu(clock_sync_id);
),
TP_printk("clock_sync_id %016llx", __entry->clock_sync_id)
);
DEFINE_EVENT(scoutfs_clock_sync_class, scoutfs_send_clock_sync,
TP_PROTO(__le64 clock_sync_id),
TP_ARGS(clock_sync_id)
);
DEFINE_EVENT(scoutfs_clock_sync_class, scoutfs_recv_clock_sync,
TP_PROTO(__le64 clock_sync_id),
TP_ARGS(clock_sync_id)
);
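The clock sync comment above only hints at how the offset is recovered from the traces. One common approach is the NTP-style estimate applied to a matched send/recv pair in each direction; the sketch below is offline post-processing over trace timestamps, not code from this module, and the function name and nanosecond units are assumptions.

/* t1: node A sends, t2: node B receives, t3: B sends its reply, t4: A
 * receives it; each timestamp is read from that node's own monotonic clock */
static long long estimate_clock_offset_ns(long long t1, long long t2,
					  long long t3, long long t4)
{
	/* B's clock minus A's clock, assuming roughly symmetric network delay */
	return ((t2 - t1) + (t3 - t4)) / 2;
}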
TRACE_EVENT(scoutfs_trans_seq_advance,
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
TP_ARGS(sb, rid, trans_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, s_rid)
__field(__u64, trans_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->s_rid = rid;
__entry->trans_seq = trans_seq;
),
TP_printk(SCSBF" rid %016llx trans_seq %llu\n",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->trans_seq)
);
TRACE_EVENT(scoutfs_trans_seq_remove,
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
TP_ARGS(sb, rid, trans_seq),
@@ -2022,112 +1876,25 @@ TRACE_EVENT(scoutfs_trans_seq_last,
SCSB_TRACE_ARGS, __entry->s_rid, __entry->trans_seq)
);
TRACE_EVENT(scoutfs_get_log_merge_status,
TP_PROTO(struct super_block *sb, u64 rid, struct scoutfs_key *next_range_key,
u64 nr_requests, u64 nr_complete, u64 seq),
TRACE_EVENT(scoutfs_trans_seq_last,
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
TP_ARGS(sb, rid, next_range_key, nr_requests, nr_complete, seq),
TP_ARGS(sb, rid, trans_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, s_rid)
sk_trace_define(next_range_key)
__field(__u64, nr_requests)
__field(__u64, nr_complete)
__field(__u64, seq)
__field(__u64, trans_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->s_rid = rid;
sk_trace_assign(next_range_key, next_range_key);
__entry->nr_requests = nr_requests;
__entry->nr_complete = nr_complete;
__entry->seq = seq;
__entry->trans_seq = trans_seq;
),
TP_printk(SCSBF" rid %016llx next_range_key "SK_FMT" nr_requests %llu nr_complete %llu seq %llu",
SCSB_TRACE_ARGS, __entry->s_rid, sk_trace_args(next_range_key),
__entry->nr_requests, __entry->nr_complete, __entry->seq)
);
TRACE_EVENT(scoutfs_get_log_merge_request,
TP_PROTO(struct super_block *sb, u64 rid,
struct scoutfs_btree_root *root, struct scoutfs_key *start,
struct scoutfs_key *end, u64 input_seq, u64 seq),
TP_ARGS(sb, rid, root, start, end, input_seq, seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, s_rid)
__field(__u64, root_blkno)
__field(__u64, root_seq)
__field(__u8, root_height)
sk_trace_define(start)
sk_trace_define(end)
__field(__u64, input_seq)
__field(__u64, seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->s_rid = rid;
__entry->root_blkno = le64_to_cpu(root->ref.blkno);
__entry->root_seq = le64_to_cpu(root->ref.seq);
__entry->root_height = root->height;
sk_trace_assign(start, start);
sk_trace_assign(end, end);
__entry->input_seq = input_seq;
__entry->seq = seq;
),
TP_printk(SCSBF" rid %016llx root blkno %llu seq %llu height %u start "SK_FMT" end "SK_FMT" input_seq %llu seq %llu",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->root_blkno,
__entry->root_seq, __entry->root_height,
sk_trace_args(start), sk_trace_args(end), __entry->input_seq,
__entry->seq)
);
TRACE_EVENT(scoutfs_get_log_merge_complete,
TP_PROTO(struct super_block *sb, u64 rid,
struct scoutfs_btree_root *root, struct scoutfs_key *start,
struct scoutfs_key *end, struct scoutfs_key *remain,
u64 seq, u64 flags),
TP_ARGS(sb, rid, root, start, end, remain, seq, flags),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, s_rid)
__field(__u64, root_blkno)
__field(__u64, root_seq)
__field(__u8, root_height)
sk_trace_define(start)
sk_trace_define(end)
sk_trace_define(remain)
__field(__u64, seq)
__field(__u64, flags)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->s_rid = rid;
__entry->root_blkno = le64_to_cpu(root->ref.blkno);
__entry->root_seq = le64_to_cpu(root->ref.seq);
__entry->root_height = root->height;
sk_trace_assign(start, start);
sk_trace_assign(end, end);
sk_trace_assign(remain, remain);
__entry->seq = seq;
__entry->flags = flags;
),
TP_printk(SCSBF" rid %016llx root blkno %llu seq %llu height %u start "SK_FMT" end "SK_FMT" remain "SK_FMT" seq %llu flags 0x%llx",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->root_blkno,
__entry->root_seq, __entry->root_height,
sk_trace_args(start), sk_trace_args(end),
sk_trace_args(remain), __entry->seq, __entry->flags)
TP_printk(SCSBF" rid %016llx trans_seq %llu",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->trans_seq)
);
DECLARE_EVENT_CLASS(scoutfs_forest_bloom_class,
@@ -2588,36 +2355,6 @@ TRACE_EVENT(scoutfs_alloc_move,
__entry->ret)
);
DECLARE_EVENT_CLASS(scoutfs_alloc_extent_class,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
STE_FIELDS(ext)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
STE_ASSIGN(ext, ext);
),
TP_printk(SCSBF" ext "STE_FMT, SCSB_TRACE_ARGS, STE_ENTRY_ARGS(ext))
);
DEFINE_EVENT(scoutfs_alloc_extent_class, scoutfs_alloc_move_extent,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext)
);
DEFINE_EVENT(scoutfs_alloc_extent_class, scoutfs_alloc_fill_extent,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext)
);
DEFINE_EVENT(scoutfs_alloc_extent_class, scoutfs_alloc_empty_extent,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext)
);
TRACE_EVENT(scoutfs_item_read_page,
TP_PROTO(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *pg_start, struct scoutfs_key *pg_end),
@@ -2665,87 +2402,6 @@ TRACE_EVENT(scoutfs_item_invalidate_page,
sk_trace_args(pg_start), sk_trace_args(pg_end), __entry->pgi)
);
DECLARE_EVENT_CLASS(scoutfs_omap_group_class,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, grp)
__field(__u64, group_nr)
__field(unsigned int, group_total)
__field(int, bit_nr)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->grp = grp;
__entry->group_nr = group_nr;
__entry->group_total = group_total;
__entry->bit_nr = bit_nr;
),
TP_printk(SCSBF" grp %p group_nr %llu group_total %u bit_nr %d",
SCSB_TRACE_ARGS, __entry->grp, __entry->group_nr, __entry->group_total,
__entry->bit_nr)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_alloc,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_free,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_inc,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_dec,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_request,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr)
);
DEFINE_EVENT(scoutfs_omap_group_class, scoutfs_omap_group_destroy,
TP_PROTO(struct super_block *sb, void *grp, u64 group_nr, unsigned int group_total,
int bit_nr),
TP_ARGS(sb, grp, group_nr, group_total, bit_nr)
);
TRACE_EVENT(scoutfs_omap_should_delete,
TP_PROTO(struct super_block *sb, u64 ino, unsigned int nlink, int ret),
TP_ARGS(sb, ino, nlink, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(unsigned int, nlink)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->nlink = nlink;
__entry->ret = ret;
),
TP_printk(SCSBF" ino %llu nlink %u ret %d",
SCSB_TRACE_ARGS, __entry->ino, __entry->nlink, __entry->ret)
);
#endif /* _TRACE_SCOUTFS_H */
/* This part must be outside protection */

File diff suppressed because it is too large.


@@ -56,31 +56,22 @@ do { \
__entry->name##_data_len, __entry->name##_cmd, __entry->name##_flags, \
__entry->name##_error
u64 scoutfs_server_reserved_meta_blocks(struct super_block *sb);
int scoutfs_server_lock_request(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_server_lock_response(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_net_lock *nl);
struct scoutfs_net_lock_grant_response *gr);
int scoutfs_server_lock_recover_request(struct super_block *sb, u64 rid,
struct scoutfs_key *key);
void scoutfs_server_recov_finish(struct super_block *sb, u64 rid, int which);
void scoutfs_server_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots);
int scoutfs_server_hold_commit(struct super_block *sb);
int scoutfs_server_apply_commit(struct super_block *sb, int err);
int scoutfs_server_send_omap_request(struct super_block *sb, u64 rid,
struct scoutfs_open_ino_map_args *args);
int scoutfs_server_send_omap_response(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_open_ino_map *map, int err);
u64 scoutfs_server_seq(struct super_block *sb);
u64 scoutfs_server_next_seq(struct super_block *sb);
void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq);
void scoutfs_server_start(struct super_block *sb, u64 term);
struct sockaddr_in;
struct scoutfs_quorum_elected_info;
int scoutfs_server_start(struct super_block *sb, u64 term);
void scoutfs_server_abort(struct super_block *sb);
void scoutfs_server_stop(struct super_block *sb);
void scoutfs_server_stop_wait(struct super_block *sb);
bool scoutfs_server_is_running(struct super_block *sb);
bool scoutfs_server_is_up(struct super_block *sb);
bool scoutfs_server_is_down(struct super_block *sb);
int scoutfs_server_setup(struct super_block *sb);
void scoutfs_server_destroy(struct super_block *sb);


@@ -28,7 +28,6 @@
#include "btree.h"
#include "spbm.h"
#include "client.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
@@ -990,13 +989,12 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_srch_file *sfl, bool force)
struct scoutfs_srch_file *sfl)
{
struct scoutfs_key key;
int ret;
if (sfl->ref.blkno == 0 ||
(!force && le64_to_cpu(sfl->blocks) < SCOUTFS_SRCH_LOG_BLOCK_LIMIT))
if (le64_to_cpu(sfl->blocks) < SCOUTFS_SRCH_LOG_BLOCK_LIMIT)
return 0;
init_srch_key(&key, SCOUTFS_SRCH_LOG_TYPE,
@@ -1482,11 +1480,10 @@ static int kway_merge(struct super_block *sb,
int ind;
int i;
if (WARN_ON_ONCE(nr <= 0))
if (WARN_ON_ONCE(nr <= 1))
return -EINVAL;
/* always at least one parent for single leaf */
nr_parents = max_t(unsigned long, 1, roundup_pow_of_two(nr) - 1);
nr_parents = roundup_pow_of_two(nr) - 1;
/* root at [1] for easy sib/parent index calc, final pad for odd sib */
nr_nodes = 1 + nr_parents + nr + 1;
tnodes = __vmalloc(nr_nodes * sizeof(struct tourn_node),
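As a worked sizing example for the tournament tree above (numbers are illustrative): merging nr = 5 sources rounds up to 8, giving nr_parents = 8 - 1 = 7 interior nodes and nr_nodes = 1 + 7 + 5 + 1 = 14, where the unused [0] slot and the root at [1] presumably let node i find its sibling at i ^ 1 and its parent at i / 2.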
@@ -2083,7 +2080,7 @@ static int delete_files(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_srch_compact *sc)
{
int ret = 0;
int ret;
int i;
for (i = 0; i < sc->nr; i++) {
@@ -2129,7 +2126,6 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
struct scoutfs_alloc alloc;
unsigned long delay;
int ret;
int err;
sc = kmalloc(sizeof(struct scoutfs_srch_compact), GFP_NOFS);
if (sc == NULL) {
@@ -2160,22 +2156,17 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
if (ret < 0)
goto commit;
ret = scoutfs_alloc_prepare_commit(sb, &alloc, &wri) ?:
scoutfs_block_writer_write(sb, &wri);
ret = scoutfs_block_writer_write(sb, &wri);
commit:
/* the server won't use our partial compact if _ERROR is set */
sc->meta_avail = alloc.avail;
sc->meta_freed = alloc.freed;
sc->flags |= ret < 0 ? SCOUTFS_SRCH_COMPACT_FLAG_ERROR : 0;
err = scoutfs_client_srch_commit_compact(sb, sc);
if (err < 0 && ret == 0)
ret = err;
ret = scoutfs_client_srch_commit_compact(sb, sc);
out:
/* our allocators and files should be stable */
WARN_ON_ONCE(ret == -ESTALE);
if (ret < 0)
scoutfs_inc_counter(sb, srch_compact_error);
scoutfs_block_writer_forget_all(sb, &wri);
if (!atomic_read(&srinf->shutdown)) {


@@ -37,7 +37,7 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_srch_file *sfl, bool force);
struct scoutfs_srch_file *sfl);
int scoutfs_srch_get_compact(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,


@@ -20,6 +20,7 @@
#include <linux/statfs.h>
#include <linux/sched.h>
#include <linux/debugfs.h>
#include <linux/percpu.h>
#include "super.h"
#include "block.h"
@@ -43,42 +44,70 @@
#include "srch.h"
#include "item.h"
#include "alloc.h"
#include "recov.h"
#include "omap.h"
#include "volopt.h"
#include "fence.h"
#include "scoutfs_trace.h"
static struct dentry *scoutfs_debugfs_root;
/* the statfs file fields can be small (and signed?) :/ */
static __statfs_word saturate_truncated_word(u64 files)
{
__statfs_word word = files;
static DEFINE_PER_CPU(u64, clock_sync_ids) = 0;
if (word != files) {
word = ~0ULL;
if (word < 0)
word = (unsigned long)word >> 1;
/*
* Give the caller a unique clock sync id for a message they're about to
* send. We make the ids reasonably globally unique by using randomly
* initialized per-cpu 64bit counters.
*/
__le64 scoutfs_clock_sync_id(void)
{
u64 rnd = 0;
u64 ret;
u64 *id;
retry:
preempt_disable();
id = this_cpu_ptr(&clock_sync_ids);
if (*id == 0) {
if (rnd == 0) {
preempt_enable();
get_random_bytes(&rnd, sizeof(rnd));
goto retry;
}
*id = rnd;
}
return word;
ret = ++(*id);
preempt_enable();
return cpu_to_le64(ret);
}
struct statfs_free_blocks {
u64 meta;
u64 data;
};
static int count_free_blocks(struct super_block *sb, void *arg, int owner,
u64 id, bool meta, bool avail, u64 blocks)
{
struct statfs_free_blocks *sfb = arg;
if (meta)
sfb->meta += blocks;
else
sfb->data += blocks;
return 0;
}
/*
* The server gives us the current sum of free blocks and the total
* inode count that it can see across all the clients' log trees. It
* won't see allocations and inode creations or deletions that are dirty
* in client memory as it builds a transaction.
* Build the free block counts by having alloc read all the persistent
* blocks which contain allocators and calling us for each of them.
* Only the super block reads aren't cached so repeatedly calling statfs
* is like repeated O_DIRECT IO. We can add a cache and stale results
* if that IO becomes a problem.
*
* We don't have static limits on the number of files so the statfs
 * fields for the total possible files and the number free aren't
* particularly helpful. What we do want to report is the number of
* inodes, so we fake a max possible number of inodes given a
* conservative estimate of the total space consumption per file and
* then find the free by subtracting our precise count of active inodes.
* This seems like the least surprising compromise where the file max
* doesn't change and the caller gets the correct count of used inodes.
 * We fake the number of free inodes by assuming that we can fill free
 * blocks with a certain number of inodes.  We then add the number of
 * current inodes to that free count to determine the total possible
 * inodes.
*
* The fsid that we report is constructed from the xor of the first two
* and second two little endian u32s that make up the uuid bytes.
@@ -86,33 +115,41 @@ static __statfs_word saturate_truncated_word(u64 files)
static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
{
struct super_block *sb = dentry->d_inode->i_sb;
struct scoutfs_net_statfs nst;
u64 files;
u64 ffree;
struct scoutfs_super_block *super = NULL;
struct statfs_free_blocks sfb = {0,};
__le32 uuid[4];
int ret;
scoutfs_inc_counter(sb, statfs);
ret = scoutfs_client_statfs(sb, &nst);
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
kst->f_bfree = (le64_to_cpu(nst.free_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(nst.free_data_blocks);
ret = scoutfs_alloc_foreach(sb, count_free_blocks, &sfb);
if (ret < 0)
goto out;
kst->f_bfree = (sfb.meta << SCOUTFS_BLOCK_SM_LG_SHIFT) + sfb.data;
kst->f_type = SCOUTFS_SUPER_MAGIC;
kst->f_bsize = SCOUTFS_BLOCK_SM_SIZE;
kst->f_blocks = (le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(nst.total_data_blocks);
kst->f_blocks = (le64_to_cpu(super->total_meta_blocks) <<
SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(super->total_data_blocks);
kst->f_bavail = kst->f_bfree;
files = div_u64(le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_LG_SHIFT, 2048);
ffree = files - le64_to_cpu(nst.inode_count);
kst->f_files = saturate_truncated_word(files);
kst->f_ffree = saturate_truncated_word(ffree);
/* arbitrarily assume ~1K / empty file */
kst->f_ffree = sfb.meta * (SCOUTFS_BLOCK_LG_SIZE / 1024);
kst->f_files = kst->f_ffree + le64_to_cpu(super->next_ino);
BUILD_BUG_ON(sizeof(uuid) != sizeof(nst.uuid));
memcpy(uuid, nst.uuid, sizeof(uuid));
BUILD_BUG_ON(sizeof(uuid) != sizeof(super->uuid));
memcpy(uuid, super->uuid, sizeof(uuid));
kst->f_fsid.val[0] = le32_to_cpu(uuid[0]) ^ le32_to_cpu(uuid[1]);
kst->f_fsid.val[1] = le32_to_cpu(uuid[2]) ^ le32_to_cpu(uuid[3]);
kst->f_namelen = SCOUTFS_NAME_LEN;
@@ -121,17 +158,57 @@ static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
/* the vfs fills f_flags */
ret = 0;
out:
kfree(super);
/*
* We don't take cluster locks in statfs which makes it a very
* convenient place to trigger lock reclaim for debugging. We
* try to free as many locks as possible.
*/
if (scoutfs_trigger(sb, STATFS_LOCK_PURGE))
scoutfs_free_unused_locks(sb);
scoutfs_free_unused_locks(sb, -1UL);
return ret;
}
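To make the fake inode accounting above concrete (a restatement of the code shown, not new behavior): f_ffree is the count of free large metadata blocks times SCOUTFS_BLOCK_LG_SIZE / 1024, i.e. one reported free inode per 1KiB of free metadata, and f_files adds super->next_ino on top of that, so f_files - f_ffree always equals the number of inode numbers handed out so far.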
static int scoutfs_show_options(struct seq_file *seq, struct dentry *root)
{
struct super_block *sb = root->d_sb;
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
if (opts->quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts->quorum_slot_nr);
seq_printf(seq, ",metadev_path=%s", opts->metadev_path);
return 0;
}
static ssize_t metadev_path_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
return snprintf(buf, PAGE_SIZE, "%s", opts->metadev_path);
}
SCOUTFS_ATTR_RO(metadev_path);
static ssize_t quorum_server_nr_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
return snprintf(buf, PAGE_SIZE, "%d\n", opts->quorum_slot_nr);
}
SCOUTFS_ATTR_RO(quorum_server_nr);
static struct attribute *mount_options_attrs[] = {
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(quorum_server_nr),
NULL,
};
static int scoutfs_sync_fs(struct super_block *sb, int wait)
{
trace_scoutfs_sync_fs(sb, wait);
@@ -149,15 +226,7 @@ static void scoutfs_metadev_close(struct super_block *sb)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (sbi->meta_bdev) {
/*
* Some kernels have blkdev_reread_part which calls
* fsync_bdev while holding the bd_mutex which inverts
* the s_umount hold in deactivate_super and blkdev_put
* from kill_sb->put_super.
*/
lockdep_off();
blkdev_put(sbi->meta_bdev, SCOUTFS_META_BDEV_MODE);
lockdep_on();
sbi->meta_bdev = NULL;
}
}
@@ -174,67 +243,41 @@ static void scoutfs_put_super(struct super_block *sb)
trace_scoutfs_put_super(sb);
/*
* Wait for invalidation and iput to finish with any lingering
* inode references that escaped the evict_inodes in
* generic_shutdown_super. MS_ACTIVE is clear so final iput
* will always evict.
*/
scoutfs_lock_flush_invalidate(sb);
scoutfs_inode_flush_iput(sb);
WARN_ON_ONCE(!list_empty(&sb->s_inodes));
sbi->shutdown = true;
scoutfs_forest_stop(sb);
scoutfs_data_destroy(sb);
scoutfs_srch_destroy(sb);
scoutfs_lock_shutdown(sb);
scoutfs_unlock(sb, sbi->rid_lock, SCOUTFS_LOCK_WRITE);
sbi->rid_lock = NULL;
scoutfs_shutdown_trans(sb);
scoutfs_volopt_destroy(sb);
scoutfs_client_destroy(sb);
scoutfs_inode_destroy(sb);
scoutfs_item_destroy(sb);
scoutfs_forest_destroy(sb);
scoutfs_data_destroy(sb);
scoutfs_quorum_destroy(sb);
scoutfs_lock_shutdown(sb);
scoutfs_server_destroy(sb);
scoutfs_recov_destroy(sb);
scoutfs_net_destroy(sb);
scoutfs_lock_destroy(sb);
scoutfs_omap_destroy(sb);
scoutfs_block_destroy(sb);
scoutfs_destroy_triggers(sb);
scoutfs_fence_destroy(sb);
scoutfs_options_destroy(sb);
scoutfs_sysfs_destroy_attrs(sb, &sbi->mopts_ssa);
debugfs_remove(sbi->debug_root);
scoutfs_destroy_counters(sb);
scoutfs_destroy_sysfs(sb);
scoutfs_metadev_close(sb);
kfree(sbi->opts.metadev_path);
kfree(sbi);
sb->s_fs_info = NULL;
}
/*
* Record that we're performing a forced unmount. As put_super drives
* destruction of the filesystem we won't issue more network or storage
* operations because we assume that they'll hang. Pending operations
* can return errors when it's possible to do so. We may be racing with
* pending operations which can't be canceled.
*/
static void scoutfs_umount_begin(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
scoutfs_warn(sb, "forcing unmount, can return errors and lose unsynced data");
sbi->forced_unmount = true;
scoutfs_client_net_shutdown(sb);
}
static const struct super_operations scoutfs_super_ops = {
.alloc_inode = scoutfs_alloc_inode,
.drop_inode = scoutfs_drop_inode,
@@ -242,9 +285,8 @@ static const struct super_operations scoutfs_super_ops = {
.destroy_inode = scoutfs_destroy_inode,
.sync_fs = scoutfs_sync_fs,
.statfs = scoutfs_statfs,
.show_options = scoutfs_options_show,
.show_options = scoutfs_show_options,
.put_super = scoutfs_put_super,
.umount_begin = scoutfs_umount_begin,
};
/*
@@ -264,16 +306,28 @@ int scoutfs_write_super(struct super_block *sb,
sizeof(struct scoutfs_super_block));
}
static bool small_bdev(struct super_block *sb, char *which, u64 blocks,
struct block_device *bdev, int shift)
static bool invalid_blkno_limits(struct super_block *sb, char *which,
u64 start, __le64 first, __le64 last,
struct block_device *bdev, int shift)
{
u64 size = (u64)i_size_read(bdev->bd_inode);
u64 count = size >> shift;
u64 blkno;
if (blocks > count) {
scoutfs_err(sb, "super block records %llu %s blocks, but device %u:%u size %llu only allows %llu blocks",
blocks, which, MAJOR(bdev->bd_dev), MINOR(bdev->bd_dev), size, count);
if (le64_to_cpu(first) < start) {
scoutfs_err(sb, "super block first %s blkno %llu is within first valid blkno %llu",
which, le64_to_cpu(first), start);
return true;
}
if (le64_to_cpu(first) > le64_to_cpu(last)) {
scoutfs_err(sb, "super block first %s blkno %llu is greater than last %s blkno %llu",
which, le64_to_cpu(first), which, le64_to_cpu(last));
return true;
}
blkno = (i_size_read(bdev->bd_inode) >> shift) - 1;
if (le64_to_cpu(last) > blkno) {
scoutfs_err(sb, "super block last %s blkno %llu is beyond device size last blkno %llu",
which, le64_to_cpu(last), blkno);
return true;
}
@@ -322,32 +376,27 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
goto out;
}
if (le64_to_cpu(super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN ||
le64_to_cpu(super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX) {
scoutfs_err(sb, "super block has format version %llu outside of supported version range %u-%u",
le64_to_cpu(super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
ret = -EINVAL;
goto out;
}
/*
* fill_supers checks the fmt_vers in both supers and then decides to use it.
* From then on we verify that the supers we read have that version.
*/
if (sbi->fmt_vers != 0 && le64_to_cpu(super->fmt_vers) != sbi->fmt_vers) {
scoutfs_err(sb, "super block has format version %llu than %llu read at mount",
le64_to_cpu(super->fmt_vers), sbi->fmt_vers);
if (super->version != cpu_to_le64(SCOUTFS_INTEROP_VERSION)) {
scoutfs_err(sb, "super block has invalid version %llu, expected %llu",
le64_to_cpu(super->version),
SCOUTFS_INTEROP_VERSION);
ret = -EINVAL;
goto out;
}
/* XXX do we want more rigorous invalid super checking? */
if (small_bdev(sb, "metadata", le64_to_cpu(super->total_meta_blocks), sbi->meta_bdev,
SCOUTFS_BLOCK_LG_SHIFT) ||
small_bdev(sb, "data", le64_to_cpu(super->total_data_blocks), sb->s_bdev,
SCOUTFS_BLOCK_SM_SHIFT)) {
if (invalid_blkno_limits(sb, "meta",
SCOUTFS_META_DEV_START_BLKNO,
super->first_meta_blkno,
super->last_meta_blkno, sbi->meta_bdev,
SCOUTFS_BLOCK_LG_SHIFT) ||
invalid_blkno_limits(sb, "data",
SCOUTFS_DATA_DEV_START_BLKNO,
super->first_data_blkno,
super->last_data_blkno, sb->s_bdev,
SCOUTFS_BLOCK_SM_SHIFT)) {
ret = -EINVAL;
}
@@ -454,14 +503,6 @@ static int scoutfs_read_supers(struct super_block *sb)
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) != le64_to_cpu(data_super->fmt_vers)) {
scoutfs_err(sb, "meta device format version %llu != data device format version %llu",
le64_to_cpu(meta_super->fmt_vers), le64_to_cpu(data_super->fmt_vers));
goto out;
}
sbi->fmt_vers = le64_to_cpu(meta_super->fmt_vers);
sbi->super = *meta_super;
out:
kfree(meta_super);
@@ -471,9 +512,9 @@ out:
static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
{
struct scoutfs_mount_options opts;
struct block_device *meta_bdev;
struct scoutfs_sb_info *sbi;
struct mount_options opts;
struct block_device *meta_bdev;
struct inode *inode;
int ret;
@@ -483,7 +524,6 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sb->s_maxbytes = MAX_LFS_FILESIZE;
sb->s_op = &scoutfs_super_ops;
sb->s_export_op = &scoutfs_export_ops;
sb->s_flags |= MS_I_VERSION;
/* btree blocks use long lived bh->b_data refs */
mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
@@ -499,14 +539,19 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
return ret;
spin_lock_init(&sbi->next_ino_lock);
init_waitqueue_head(&sbi->trans_hold_wq);
spin_lock_init(&sbi->data_wait_root.lock);
sbi->data_wait_root.root = RB_ROOT;
spin_lock_init(&sbi->trans_write_lock);
INIT_DELAYED_WORK(&sbi->trans_write_work, scoutfs_trans_write_func);
init_waitqueue_head(&sbi->trans_write_wq);
scoutfs_sysfs_init_attrs(sb, &sbi->mopts_ssa);
/* parse options early for use during setup */
ret = scoutfs_options_early_setup(sb, data);
if (ret < 0)
return ret;
scoutfs_options_read(sb, &opts);
ret = scoutfs_parse_options(sb, data, &opts);
if (ret)
goto out;
sbi->opts = opts;
ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SM_SIZE);
if (ret != SCOUTFS_BLOCK_SM_SIZE) {
@@ -515,7 +560,9 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
goto out;
}
meta_bdev = blkdev_get_by_path(opts.metadev_path, SCOUTFS_META_BDEV_MODE, sb);
meta_bdev =
blkdev_get_by_path(sbi->opts.metadev_path,
SCOUTFS_META_BDEV_MODE, sb);
if (IS_ERR(meta_bdev)) {
scoutfs_err(sb, "could not open metadev: error %ld",
PTR_ERR(meta_bdev));
@@ -535,32 +582,30 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
scoutfs_setup_sysfs(sb) ?:
scoutfs_setup_counters(sb) ?:
scoutfs_options_setup(sb) ?:
scoutfs_sysfs_create_attrs(sb, &sbi->mopts_ssa,
mount_options_attrs, "mount_options") ?:
scoutfs_setup_triggers(sb) ?:
scoutfs_fence_setup(sb) ?:
scoutfs_block_setup(sb) ?:
scoutfs_forest_setup(sb) ?:
scoutfs_item_setup(sb) ?:
scoutfs_inode_setup(sb) ?:
scoutfs_data_setup(sb) ?:
scoutfs_setup_trans(sb) ?:
scoutfs_omap_setup(sb) ?:
scoutfs_lock_setup(sb) ?:
scoutfs_net_setup(sb) ?:
scoutfs_recov_setup(sb) ?:
scoutfs_server_setup(sb) ?:
scoutfs_quorum_setup(sb) ?:
scoutfs_client_setup(sb) ?:
scoutfs_volopt_setup(sb) ?:
scoutfs_lock_rid(sb, SCOUTFS_LOCK_WRITE, 0, sbi->rid,
&sbi->rid_lock) ?:
scoutfs_trans_get_log_trees(sb) ?:
scoutfs_srch_setup(sb);
if (ret)
goto out;
/* this interruptible iget lets hung mount be aborted with ctl-c */
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO, SCOUTFS_LKF_INTERRUPTIBLE, 0);
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
if (ret == -ERESTARTSYS)
ret = -EINTR;
goto out;
}
@@ -570,15 +615,12 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
goto out;
}
/* send requests once iget progress shows we had a server */
ret = scoutfs_trans_get_log_trees(sb);
ret = scoutfs_client_advance_seq(sb, &sbi->trans_seq);
if (ret)
goto out;
/* start up background services that use everything else */
scoutfs_inode_start(sb);
scoutfs_forest_start(sb);
scoutfs_trans_restart_sync_deadline(sb);
// scoutfs_scan_orphans(sb);
ret = 0;
out:
/* on error, generic_shutdown_super calls put_super if s_root */
@@ -599,18 +641,7 @@ static struct dentry *scoutfs_mount(struct file_system_type *fs_type, int flags,
*/
static void scoutfs_kill_sb(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (sbi) {
sbi->unmounting = true;
smp_wmb();
}
if (SCOUTFS_HAS_SBI(sb)) {
scoutfs_options_stop(sb);
scoutfs_inode_orphan_stop(sb);
scoutfs_lock_unmount_begin(sb);
}
trace_scoutfs_kill_sb(sb);
kill_block_super(sb);
}
@@ -643,15 +674,11 @@ static int __init scoutfs_module_init(void)
*/
__asm__ __volatile__ (
".section .note.git_describe,\"a\"\n"
".ascii \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".string \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_format_version_min,\"a\"\n"
".ascii \""SCOUTFS_FORMAT_VERSION_MIN_STR"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_format_version_max,\"a\"\n"
".ascii \""SCOUTFS_FORMAT_VERSION_MAX_STR"\\n\"\n"
".section .note.scoutfs_interop_version,\"a\"\n"
".string \""SCOUTFS_INTEROP_VERSION_STR"\\n\"\n"
".previous\n");
scoutfs_init_counters();
@@ -685,5 +712,4 @@ module_exit(scoutfs_module_exit)
MODULE_AUTHOR("Zach Brown <zab@versity.com>");
MODULE_LICENSE("GPL");
MODULE_INFO(git_describe, SCOUTFS_GIT_DESCRIBE);
MODULE_INFO(scoutfs_format_version_min, SCOUTFS_FORMAT_VERSION_MIN_STR);
MODULE_INFO(scoutfs_format_version_max, SCOUTFS_FORMAT_VERSION_MAX_STR);
MODULE_INFO(scoutfs_interop_version, SCOUTFS_INTEROP_VERSION_STR);


@@ -26,17 +26,13 @@ struct net_info;
struct block_info;
struct forest_info;
struct srch_info;
struct recov_info;
struct omap_info;
struct volopt_info;
struct fence_info;
struct scoutfs_sb_info {
struct super_block *sb;
/* assigned once at the start of each mount, read-only */
u64 rid;
u64 fmt_vers;
struct scoutfs_lock *rid_lock;
struct scoutfs_super_block super;
@@ -44,7 +40,6 @@ struct scoutfs_sb_info {
spinlock_t next_ino_lock;
struct options_info *options_info;
struct data_info *data_info;
struct inode_sb_info *inode_sb_info;
struct btree_info *btree_info;
@@ -53,32 +48,40 @@ struct scoutfs_sb_info {
struct block_info *block_info;
struct forest_info *forest_info;
struct srch_info *srch_info;
struct omap_info *omap_info;
struct volopt_info *volopt_info;
struct item_cache_info *item_cache_info;
struct fence_info *fence_info;
wait_queue_head_t trans_hold_wq;
struct task_struct *trans_task;
/* tracks tasks waiting for data extents */
struct scoutfs_data_wait_root data_wait_root;
/* set as transaction opens with trans holders excluded */
spinlock_t trans_write_lock;
u64 trans_write_count;
u64 trans_seq;
int trans_write_ret;
struct delayed_work trans_write_work;
wait_queue_head_t trans_write_wq;
struct workqueue_struct *trans_write_workq;
bool trans_deadline_expired;
struct trans_info *trans_info;
struct lock_info *lock_info;
struct lock_server_info *lock_server_info;
struct client_info *client_info;
struct server_info *server_info;
struct recov_info *recov_info;
struct sysfs_info *sfsinfo;
struct scoutfs_counters *counters;
struct scoutfs_triggers *triggers;
struct mount_options opts;
struct options_sb_info *options;
struct scoutfs_sysfs_attrs mopts_ssa;
struct dentry *debug_root;
bool forced_unmount;
bool unmounting;
bool shutdown;
unsigned long corruption_messages_once[SC_NR_LONGS];
};
@@ -100,26 +103,6 @@ static inline bool SCOUTFS_IS_META_BDEV(struct scoutfs_super_block *super_block)
#define SCOUTFS_META_BDEV_MODE (FMODE_READ | FMODE_WRITE | FMODE_EXCL)
static inline bool scoutfs_forcing_unmount(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return sbi->forced_unmount;
}
/*
* True if we're shutting down the system and can be used as a coarse
* indicator that we can avoid doing some work that no longer makes
* sense.
*/
static inline bool scoutfs_unmounting(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
smp_rmb();
return !sbi || sbi->unmounting;
}
/*
* A small string embedded in messages that's used to identify a
* specific mount. It's the three most significant bytes of the fsid
@@ -157,4 +140,6 @@ int scoutfs_write_super(struct super_block *sb,
/* to keep this out of the ioctl.h public interface definition */
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
__le64 scoutfs_clock_sync_id(void);
#endif


@@ -37,25 +37,6 @@ struct attr_funcs {
#define ATTR_FUNCS_RO(_name) \
static struct attr_funcs _name##_attr_funcs = __ATTR_RO(_name)
static ssize_t data_device_maj_min_show(struct kobject *kobj, struct attribute *attr, char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
return snprintf(buf, PAGE_SIZE, "%u:%u\n",
MAJOR(sb->s_bdev->bd_dev), MINOR(sb->s_bdev->bd_dev));
}
ATTR_FUNCS_RO(data_device_maj_min);
static ssize_t format_version_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return snprintf(buf, PAGE_SIZE, "%llu\n", sbi->fmt_vers);
}
ATTR_FUNCS_RO(format_version);
static ssize_t fsid_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
@@ -110,8 +91,6 @@ static ssize_t attr_funcs_show(struct kobject *kobj, struct attribute *attr,
static struct attribute *sb_id_attrs[] = {
&data_device_maj_min_attr_funcs.attr,
&format_version_attr_funcs.attr,
&fsid_attr_funcs.attr,
&rid_attr_funcs.attr,
NULL,
@@ -152,10 +131,9 @@ void scoutfs_sysfs_init_attrs(struct super_block *sb,
* If this returns success then the file will be visible and show can
* be called until unmount.
*/
int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
struct kobject *parent,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...)
int scoutfs_sysfs_create_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...)
{
va_list args;
size_t name_len;
@@ -196,8 +174,8 @@ int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
goto out;
}
ret = kobject_init_and_add(&ssa->kobj, &ssa->ktype, parent,
"%s", ssa->name);
ret = kobject_init_and_add(&ssa->kobj, &ssa->ktype,
scoutfs_sysfs_sb_dir(sb), "%s", ssa->name);
out:
if (ret) {
kfree(ssa->name);


@@ -10,8 +10,6 @@
#define SCOUTFS_ATTR_RO(_name) \
static struct kobj_attribute scoutfs_attr_##_name = __ATTR_RO(_name)
#define SCOUTFS_ATTR_RW(_name) \
static struct kobj_attribute scoutfs_attr_##_name = __ATTR_RW(_name)
#define SCOUTFS_ATTR_PTR(_name) \
&scoutfs_attr_##_name.attr
@@ -36,14 +34,9 @@ struct scoutfs_sysfs_attrs {
void scoutfs_sysfs_init_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa);
int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
struct kobject *parent,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...);
#define scoutfs_sysfs_create_attrs(sb, ssa, attrs, fmt, args...) \
scoutfs_sysfs_create_attrs_parent(sb, scoutfs_sysfs_sb_dir(sb), \
ssa, attrs, fmt, ##args)
int scoutfs_sysfs_create_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...);
void scoutfs_sysfs_destroy_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa);


@@ -17,7 +17,6 @@
#include <linux/atomic.h>
#include <linux/writeback.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include "super.h"
#include "trans.h"
@@ -54,24 +53,15 @@
/* sync dirty data at least this often */
#define TRANS_SYNC_DELAY (HZ * 10)
/*
* XXX move the rest of the super trans_ fields here.
*/
struct trans_info {
struct super_block *sb;
atomic_t holders;
struct scoutfs_log_trees lt;
struct scoutfs_alloc alloc;
struct scoutfs_block_writer wri;
wait_queue_head_t hold_wq;
struct task_struct *task;
spinlock_t write_lock;
u64 write_count;
int write_ret;
struct delayed_work write_work;
wait_queue_head_t write_wq;
struct workqueue_struct *write_workq;
bool deadline_expired;
};
#define DECLARE_TRANS_INFO(sb, name) \
@@ -101,7 +91,6 @@ static int commit_btrees(struct super_block *sb)
*/
int scoutfs_trans_get_log_trees(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_log_trees lt;
int ret = 0;
@@ -114,11 +103,6 @@ int scoutfs_trans_get_log_trees(struct super_block *sb)
scoutfs_forest_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
scoutfs_data_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
/* first set during mount from 0 to nonzero allows commits */
spin_lock(&tri->write_lock);
sbi->trans_seq = le64_to_cpu(lt.get_trans_seq);
spin_unlock(&tri->write_lock);
}
return ret;
}
@@ -136,12 +120,13 @@ bool scoutfs_trans_has_dirty(struct super_block *sb)
*/
static void sub_holders_and_wake(struct super_block *sb, int val)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
atomic_sub(val, &tri->holders);
smp_mb(); /* make sure sub is visible before we wake */
if (waitqueue_active(&tri->hold_wq))
wake_up(&tri->hold_wq);
if (waitqueue_active(&sbi->trans_hold_wq))
wake_up(&sbi->trans_hold_wq);
}
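The commit path in the hunk below excludes transaction holders by mixing a sentinel bit into the same atomic holders count it uses for ordinary holds. A rough sketch of that scheme follows; the bit value and helper names are illustrative, not taken from this tree.

/* illustrative: one bit far above any realistic holder count */
#define EXAMPLE_WRITE_FUNC_BIT	(1 << 30)

/* the commit work owns the transaction once only its own bit remains */
static bool example_drained(atomic_t *holders)
{
	return atomic_read(holders) == EXAMPLE_WRITE_FUNC_BIT;
}

/* a would-be holder must wait while the commit bit is set */
static bool example_can_hold(atomic_t *holders)
{
	return !(atomic_read(holders) & EXAMPLE_WRITE_FUNC_BIT);
}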
/*
@@ -169,93 +154,90 @@ static bool drained_holders(struct trans_info *tri)
* functions that would try to hold the transaction. We record the task
 * who's committing the transaction so that holding won't deadlock.
*
* Once we clear the write func bit in holders then waiting holders can
* enter the transaction and continue modifying the transaction. Once
* we start writing we consider the transaction done and won't exit,
* clearing the write func bit, until get_log_trees has opened the next
* transaction. The exception is forced unmount which is allowed to
* generate errors and throw away data.
* Any dirty block had to have allocated a new blkno which would have
* created dirty allocator metadata blocks. We can avoid writing
* entirely if we don't have any dirty metadata blocks. This is
* important because we don't try to serialize this work during
 * unmount; we can execute as the vfs is shutting down, so we need to
* decide that nothing is dirty without calling the vfs at all.
*
* This means that the only way fsync can return an error is if we're in
* forced unmount.
* We first try to sync the dirty inodes and write their dirty data blocks,
* then we write all our dirty metadata blocks, and only when those succeed
* do we write the new super that references all of these newly written blocks.
*
* If there are write errors then blocks are kept dirty in memory and will
* be written again at the next sync.
*/
void scoutfs_trans_write_func(struct work_struct *work)
{
struct trans_info *tri = container_of(work, struct trans_info, write_work.work);
struct super_block *sb = tri->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
bool retrying = false;
struct scoutfs_sb_info *sbi = container_of(work, struct scoutfs_sb_info,
trans_write_work.work);
struct super_block *sb = sbi->sb;
DECLARE_TRANS_INFO(sb, tri);
u64 trans_seq = sbi->trans_seq;
char *s = NULL;
int ret = 0;
tri->task = current;
sbi->trans_task = current;
/* mark that we're writing so holders wait for us to finish and clear our bit */
atomic_add(TRANS_HOLDERS_WRITE_FUNC_BIT, &tri->holders);
wait_event(tri->hold_wq, drained_holders(tri));
wait_event(sbi->trans_hold_wq, drained_holders(tri));
/* mount hasn't opened first transaction yet, still complete sync */
if (sbi->trans_seq == 0) {
ret = 0;
trace_scoutfs_trans_write_func(sb,
scoutfs_block_writer_dirty_bytes(sb, &tri->wri));
if (!scoutfs_block_writer_has_dirty(sb, &tri->wri) &&
!scoutfs_item_dirty_pages(sb)) {
if (sbi->trans_deadline_expired) {
/*
* If we're not writing data then we only advance the
* seq at the sync deadline interval. This keeps idle
* mounts from pinning a seq and stopping readers of the
* seq indices but doesn't send a message for every sync
* syscall.
*/
ret = scoutfs_client_advance_seq(sb, &trans_seq);
if (ret < 0)
s = "clean advance seq";
}
goto out;
}
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
goto out;
}
trace_scoutfs_trans_write_func(sb, scoutfs_block_writer_dirty_bytes(sb, &tri->wri),
scoutfs_item_dirty_pages(sb));
if (tri->deadline_expired)
if (sbi->trans_deadline_expired)
scoutfs_inc_counter(sb, trans_commit_timer);
scoutfs_inc_counter(sb, trans_commit_written);
do {
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb, &tri->alloc,
&tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
if (ret < 0) {
if (!retrying) {
scoutfs_warn(sb, "critical transaction commit failure: %s = %d, retrying",
s, ret);
retrying = true;
}
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
break;
}
msleep(2 * MSEC_PER_SEC);
} else if (retrying) {
scoutfs_info(sb, "retried transaction commit succeeded");
}
} while (ret < 0);
/* XXX this all needs serious work for dealing with errors */
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb,
&tri->alloc, &tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "advance seq", scoutfs_client_advance_seq(sb, &trans_seq)) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
out:
spin_lock(&tri->write_lock);
tri->write_count++;
tri->write_ret = ret;
spin_unlock(&tri->write_lock);
wake_up(&tri->write_wq);
if (ret < 0)
scoutfs_err(sb, "critical transaction commit failure: %s, %d",
s, ret);
spin_lock(&sbi->trans_write_lock);
sbi->trans_write_count++;
sbi->trans_write_ret = ret;
sbi->trans_seq = trans_seq;
spin_unlock(&sbi->trans_write_lock);
wake_up(&sbi->trans_write_wq);
/* we're done, wake waiting holders */
sub_holders_and_wake(sb, TRANS_HOLDERS_WRITE_FUNC_BIT);
tri->task = NULL;
sbi->trans_task = NULL;
scoutfs_trans_restart_sync_deadline(sb);
}
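
The chain of commit stages above leans on two C idioms: the GNU a ?: b operator only evaluates b when a is zero, and the comma operator records the name of the stage just before it runs, so the first failing stage is the one reported. A minimal standalone sketch of the idiom, with hypothetical stage functions rather than scoutfs calls:

    #include <stdio.h>

    /* hypothetical stages: return 0 on success, negative errno on failure */
    static int stage_a(void) { return 0; }
    static int stage_b(void) { return -5; }
    static int stage_c(void) { return 0; }

    int main(void)
    {
            const char *s = NULL;
            int ret;

            /* stops at the first nonzero return; s names that stage */
            ret = (s = "stage a", stage_a()) ?:
                  (s = "stage b", stage_b()) ?:
                  (s = "stage c", stage_c());

            if (ret < 0)
                    printf("failed at %s: %d\n", s, ret);
            return 0;
    }

Built with gcc's default GNU dialect this prints "failed at stage b: -5" and never calls stage_c.
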
@@ -266,17 +248,17 @@ struct write_attempt {
};
/* this is called as a wait_event() condition so it can't change task state */
static int write_attempted(struct super_block *sb, struct write_attempt *attempt)
static int write_attempted(struct scoutfs_sb_info *sbi,
struct write_attempt *attempt)
{
DECLARE_TRANS_INFO(sb, tri);
int done = 1;
spin_lock(&tri->write_lock);
if (tri->write_count > attempt->count)
attempt->ret = tri->write_ret;
spin_lock(&sbi->trans_write_lock);
if (sbi->trans_write_count > attempt->count)
attempt->ret = sbi->trans_write_ret;
else
done = 0;
spin_unlock(&tri->write_lock);
spin_unlock(&sbi->trans_write_lock);
return done;
}
@@ -286,12 +268,10 @@ static int write_attempted(struct super_block *sb, struct write_attempt *attempt
* We always have delayed sync work pending but the caller wants it
* to execute immediately.
*/
static void queue_trans_work(struct super_block *sb)
static void queue_trans_work(struct scoutfs_sb_info *sbi)
{
DECLARE_TRANS_INFO(sb, tri);
tri->deadline_expired = false;
mod_delayed_work(tri->write_workq, &tri->write_work, 0);
sbi->trans_deadline_expired = false;
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work, 0);
}
/*
@@ -304,24 +284,26 @@ static void queue_trans_work(struct super_block *sb)
*/
int scoutfs_trans_sync(struct super_block *sb, int wait)
{
DECLARE_TRANS_INFO(sb, tri);
struct write_attempt attempt = { .ret = 0 };
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct write_attempt attempt;
int ret;
if (!wait) {
queue_trans_work(sb);
queue_trans_work(sbi);
return 0;
}
spin_lock(&tri->write_lock);
attempt.count = tri->write_count;
spin_unlock(&tri->write_lock);
spin_lock(&sbi->trans_write_lock);
attempt.count = sbi->trans_write_count;
spin_unlock(&sbi->trans_write_lock);
queue_trans_work(sb);
queue_trans_work(sbi);
wait_event(tri->write_wq, write_attempted(sb, &attempt));
ret = attempt.ret;
ret = wait_event_interruptible(sbi->trans_write_wq,
write_attempted(sbi, &attempt));
if (ret == 0)
ret = attempt.ret;
return ret;
}
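
The sync path above is a small completion-count handshake: sample the writer's completion count, kick the commit work, then wait until the count moves past the sample before reading the attempt's result. A hedged userspace sketch of the same pattern with pthreads (the names are illustrative, not scoutfs symbols; the kernel sleeps on a waitqueue rather than a condvar):

    #include <pthread.h>

    struct writer_state {
            pthread_mutex_t lock;
            pthread_cond_t cond;
            unsigned long count;    /* bumped after every commit attempt */
            int last_ret;           /* result of the most recent attempt */
    };

    /* the commit worker calls this once per finished attempt */
    static void attempt_finished(struct writer_state *ws, int ret)
    {
            pthread_mutex_lock(&ws->lock);
            ws->count++;
            ws->last_ret = ret;
            pthread_cond_broadcast(&ws->cond);
            pthread_mutex_unlock(&ws->lock);
    }

    /* a syncer waits for an attempt that finishes after it sampled */
    static int wait_for_next_attempt(struct writer_state *ws)
    {
            unsigned long sampled;
            int ret;

            pthread_mutex_lock(&ws->lock);
            sampled = ws->count;
            /* the real code queues the work here, between sample and wait */
            while (ws->count <= sampled)
                    pthread_cond_wait(&ws->cond, &ws->lock);
            ret = ws->last_ret;
            pthread_mutex_unlock(&ws->lock);
            return ret;
    }
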
@@ -337,10 +319,10 @@ int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
void scoutfs_trans_restart_sync_deadline(struct super_block *sb)
{
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
tri->deadline_expired = true;
mod_delayed_work(tri->write_workq, &tri->write_work,
sbi->trans_deadline_expired = true;
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work,
TRANS_SYNC_DELAY);
}
@@ -448,8 +430,8 @@ static bool commit_before_hold(struct super_block *sb, struct trans_info *tri)
return true;
}
/* if we're low and can't refill then alloc could empty and return enospc */
if (scoutfs_data_alloc_should_refill(sb, SCOUTFS_ALLOC_DATA_REFILL_THRESH)) {
/* Try to refill data allocator before premature enospc */
if (scoutfs_data_alloc_free_bytes(sb) <= SCOUTFS_TRANS_DATA_ALLOC_LWM) {
scoutfs_inc_counter(sb, trans_commit_data_alloc_low);
return true;
}
@@ -457,15 +439,38 @@ static bool commit_before_hold(struct super_block *sb, struct trans_info *tri)
return false;
}
/*
* called as a wait_event condition, needs to be careful to not change
* task state and is racing with waking paths that sub_return, test, and
* wake.
*/
static bool holders_no_writer(struct trans_info *tri)
static bool acquired_hold(struct super_block *sb)
{
smp_mb(); /* make sure task in wait_event queue before atomic read */
return !(atomic_read(&tri->holders) & TRANS_HOLDERS_WRITE_FUNC_BIT);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
bool acquired;
/* if a caller already has a hold we acquire unconditionally */
if (inc_journal_info_holders()) {
atomic_inc(&tri->holders);
acquired = true;
goto out;
}
/* wait if the writer is blocking holds */
if (!inc_holders_unless_writer(tri)) {
dec_journal_info_holders();
acquired = false;
goto out;
}
/* wait if we're triggering another commit */
if (commit_before_hold(sb, tri)) {
release_holders(sb);
queue_trans_work(sbi);
acquired = false;
goto out;
}
trace_scoutfs_trans_acquired_hold(sb, current->journal_info, atomic_read(&tri->holders));
acquired = true;
out:
return acquired;
}
/*
@@ -481,65 +486,15 @@ static bool holders_no_writer(struct trans_info *tri)
* The writing thread marks itself as a global trans_task which
* short-circuits all the hold machinery so it can call code that would
* otherwise try to hold transactions while it is writing.
*
* If the caller is adding metadata items that will eventually consume
* free space -- not dirtying existing items or adding deletion items --
* then we can return enospc if our metadata allocator indicates that
* we're low on space.
*/
int scoutfs_hold_trans(struct super_block *sb, bool allocing)
int scoutfs_hold_trans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
u64 seq;
int ret;
if (current == tri->task)
if (current == sbi->trans_task)
return 0;
for (;;) {
/* shouldn't get holders until mount finishes, (not locking for cheap test) */
if (WARN_ON_ONCE(sbi->trans_seq == 0)) {
ret = -EINVAL;
break;
}
/* if a caller already has a hold we acquire unconditionally */
if (inc_journal_info_holders()) {
atomic_inc(&tri->holders);
ret = 0;
break;
}
/* wait until the writer work is finished */
if (!inc_holders_unless_writer(tri)) {
dec_journal_info_holders();
wait_event(tri->hold_wq, holders_no_writer(tri));
continue;
}
/* return enospc if server is into reserved blocks and we're allocating */
if (allocing && scoutfs_alloc_test_flag(sb, &tri->alloc, SCOUTFS_ALLOC_FLAG_LOW)) {
release_holders(sb);
ret = -ENOSPC;
break;
}
/* see if we need to trigger and wait for a commit before holding */
if (commit_before_hold(sb, tri)) {
seq = scoutfs_trans_sample_seq(sb);
release_holders(sb);
queue_trans_work(sb);
wait_event(tri->hold_wq, scoutfs_trans_sample_seq(sb) != seq);
continue;
}
ret = 0;
break;
}
trace_scoutfs_hold_trans(sb, current->journal_info, atomic_read(&tri->holders), ret);
return ret;
return wait_event_interruptible(sbi->trans_hold_wq, acquired_hold(sb));
}
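
The hold path above packs two things into one atomic: the low bits count active holders and a single high bit marks that the commit worker is draining them, which blocks new holds. A minimal userspace sketch of the bit-in-a-counter pattern using C11 atomics (the constant and names are illustrative, not the kernel's):

    #include <stdatomic.h>
    #include <stdbool.h>

    /* high bit reserved for the writer, low bits count holders */
    #define WRITER_BIT (1u << 30)

    static atomic_uint holders;

    /* a holder only gets in while the writer bit is clear */
    static bool try_hold(void)
    {
            unsigned int cur = atomic_load(&holders);

            while (!(cur & WRITER_BIT)) {
                    if (atomic_compare_exchange_weak(&holders, &cur, cur + 1))
                            return true;
                    /* cur was reloaded by the failed exchange; retry */
            }
            return false;   /* writer active, the caller sleeps and retries */
    }

    static void release_hold(void)
    {
            atomic_fetch_sub(&holders, 1);
    }

    /* the writer sets its bit, then waits for the holder count to drain */
    static void writer_block_and_drain(void)
    {
            atomic_fetch_add(&holders, WRITER_BIT);
            while ((atomic_load(&holders) & ~WRITER_BIT) != 0)
                    ;       /* the kernel sleeps on a waitqueue here */
    }

    static void writer_unblock(void)
    {
            atomic_fetch_sub(&holders, WRITER_BIT);
    }
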
/*
@@ -556,14 +511,15 @@ bool scoutfs_trans_held(void)
void scoutfs_release_trans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
if (current == tri->task)
if (current == sbi->trans_task)
return;
release_holders(sb);
trace_scoutfs_release_trans(sb, current->journal_info, atomic_read(&tri->holders), 0);
trace_scoutfs_release_trans(sb, current->journal_info, atomic_read(&tri->holders));
}
/*
@@ -573,13 +529,12 @@ void scoutfs_release_trans(struct super_block *sb)
*/
u64 scoutfs_trans_sample_seq(struct super_block *sb)
{
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
u64 ret;
spin_lock(&tri->write_lock);
spin_lock(&sbi->trans_write_lock);
ret = sbi->trans_seq;
spin_unlock(&tri->write_lock);
spin_unlock(&sbi->trans_write_lock);
return ret;
}
@@ -593,17 +548,12 @@ int scoutfs_setup_trans(struct super_block *sb)
if (!tri)
return -ENOMEM;
tri->sb = sb;
atomic_set(&tri->holders, 0);
scoutfs_block_writer_init(sb, &tri->wri);
spin_lock_init(&tri->write_lock);
INIT_DELAYED_WORK(&tri->write_work, scoutfs_trans_write_func);
init_waitqueue_head(&tri->write_wq);
init_waitqueue_head(&tri->hold_wq);
tri->write_workq = alloc_workqueue("scoutfs_trans", WQ_UNBOUND, 1);
if (!tri->write_workq) {
sbi->trans_write_workq = alloc_workqueue("scoutfs_trans",
WQ_UNBOUND, 1);
if (!sbi->trans_write_workq) {
kfree(tri);
return -ENOMEM;
}
@@ -614,15 +564,8 @@ int scoutfs_setup_trans(struct super_block *sb)
}
/*
* While the vfs will have done an fs level sync before calling
* put_super, we may have done work down in our level after all the fs
* ops were done. An example is final inode deletion in iput, that's
* done in generic_shutdown_super after the sync and before calling our
* put_super.
*
* So we always try to write any remaining dirty transactions before
* shutting down. Typically there won't be any dirty data and the
* worker will just return.
* kill_sb calls sync before getting here so we know that dirty data
* should be in flight. We just have to wait for it to quiesce.
*/
void scoutfs_shutdown_trans(struct super_block *sb)
{
@@ -630,19 +573,13 @@ void scoutfs_shutdown_trans(struct super_block *sb)
DECLARE_TRANS_INFO(sb, tri);
if (tri) {
if (tri->write_workq) {
/* immediately queues pending timer */
flush_delayed_work(&tri->write_work);
/* prevents re-arming if it has to wait */
cancel_delayed_work_sync(&tri->write_work);
destroy_workqueue(tri->write_workq);
/* trans work schedules after shutdown see null */
tri->write_workq = NULL;
}
scoutfs_alloc_prepare_commit(sb, &tri->alloc, &tri->wri);
scoutfs_block_writer_forget_all(sb, &tri->wri);
if (sbi->trans_write_workq) {
cancel_delayed_work_sync(&sbi->trans_write_work);
destroy_workqueue(sbi->trans_write_workq);
/* trans work schedules after shutdown see null */
sbi->trans_write_workq = NULL;
}
kfree(tri);
sbi->trans_info = NULL;
}


@@ -1,13 +1,18 @@
#ifndef _SCOUTFS_TRANS_H_
#define _SCOUTFS_TRANS_H_
/* the server will attempt to fill data allocs for each trans */
#define SCOUTFS_TRANS_DATA_ALLOC_HWM (2ULL * 1024 * 1024 * 1024)
/* the client will force commits if data allocators get too low */
#define SCOUTFS_TRANS_DATA_ALLOC_LWM (256ULL * 1024 * 1024)
void scoutfs_trans_write_func(struct work_struct *work);
int scoutfs_trans_sync(struct super_block *sb, int wait);
int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
int datasync);
void scoutfs_trans_restart_sync_deadline(struct super_block *sb);
int scoutfs_hold_trans(struct super_block *sb, bool allocing);
int scoutfs_hold_trans(struct super_block *sb);
bool scoutfs_trans_held(void);
void scoutfs_release_trans(struct super_block *sb);
u64 scoutfs_trans_sample_seq(struct super_block *sb);


@@ -1,188 +0,0 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include "super.h"
#include "client.h"
#include "volopt.h"
/*
* Volume options are exposed through a sysfs directory. Getting and
* setting the values sends RPCs to the server that owns the options in
* the super block.
*/
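
Each option therefore shows up as one small sysfs file, and reads and writes become get/set/clear RPCs. A hedged userspace sketch of driving a single option (the sysfs directory name depends on the mount's fsid/rid, so the path below is a placeholder):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* assumed layout: /sys/fs/scoutfs/<f.FSID.r.RID>/volume_options/<option> */
            const char *path =
                    "/sys/fs/scoutfs/f.FSID.r.RID/volume_options/data_alloc_zone_blocks";
            char buf[64];
            ssize_t n;
            int fd;

            /* writing a number sets the option, writing just "\n" clears it */
            fd = open(path, O_WRONLY);
            if (fd >= 0) {
                    n = write(fd, "1024\n", 5);
                    close(fd);
            }

            /* reading returns the value, or an empty string when unset */
            fd = open(path, O_RDONLY);
            if (fd >= 0) {
                    n = read(fd, buf, sizeof(buf) - 1);
                    if (n >= 0) {
                            buf[n] = '\0';
                            printf("data_alloc_zone_blocks: %s\n", buf);
                    }
                    close(fd);
            }
            return 0;
    }
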
struct volopt_info {
struct super_block *sb;
struct scoutfs_sysfs_attrs ssa;
};
#define DECLARE_VOLOPT_INFO(sb, name) \
struct volopt_info *name = SCOUTFS_SB(sb)->volopt_info
#define DECLARE_VOLOPT_INFO_KOBJ(kobj, name) \
DECLARE_VOLOPT_INFO(SCOUTFS_SYSFS_ATTRS_SB(kobj), name)
/*
* attribute arrays need to be dense but the options we export could
* well become sparse over time. .show and .store are generic and we
* have a lookup table to map the attributes array indexes to the number
* and name of the option.
*/
static struct volopt_nr_name {
int nr;
char *name;
} volopt_table[] = {
{ SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_NR, "data_alloc_zone_blocks" },
};
/* initialized by setup, pointer array is null terminated */
static struct kobj_attribute volopt_attrs[ARRAY_SIZE(volopt_table)];
static struct attribute *volopt_attr_ptrs[ARRAY_SIZE(volopt_table) + 1];
static void get_opt_data(struct kobj_attribute *attr, struct scoutfs_volume_options *volopt,
u64 *bit, __le64 **opt)
{
size_t index = attr - &volopt_attrs[0];
int nr = volopt_table[index].nr;
*bit = 1ULL << nr;
*opt = &volopt->set_bits + 1 + nr;
}
static ssize_t volopt_attr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
DECLARE_VOLOPT_INFO_KOBJ(kobj, vinf);
struct super_block *sb = vinf->sb;
struct scoutfs_volume_options volopt;
__le64 *opt;
u64 bit;
int ret;
ret = scoutfs_client_get_volopt(sb, &volopt);
if (ret < 0)
return ret;
get_opt_data(attr, &volopt, &bit, &opt);
if (le64_to_cpu(volopt.set_bits) & bit) {
return snprintf(buf, PAGE_SIZE, "%llu", le64_to_cpup(opt));
} else {
buf[0] = '\0';
return 0;
}
}
static ssize_t volopt_attr_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
DECLARE_VOLOPT_INFO_KOBJ(kobj, vinf);
struct super_block *sb = vinf->sb;
struct scoutfs_volume_options volopt = {0,};
u8 chars[32];
__le64 *opt;
u64 bit;
u64 val;
int ret;
if (count == 0)
return 0;
if (count > sizeof(chars) - 1)
return -ERANGE;
get_opt_data(attr, &volopt, &bit, &opt);
if (buf[0] == '\n' || buf[0] == '\r') {
volopt.set_bits = cpu_to_le64(bit);
ret = scoutfs_client_clear_volopt(sb, &volopt);
} else {
memcpy(chars, buf, count);
chars[count] = '\0';
ret = kstrtoull(chars, 0, &val);
if (ret < 0)
return ret;
volopt.set_bits = cpu_to_le64(bit);
*opt = cpu_to_le64(val);
ret = scoutfs_client_set_volopt(sb, &volopt);
}
if (ret == 0)
ret = count;
return ret;
}
/*
* The volume option sysfs files are slim shims around RPCs so this
* should be called after the client is set up and before it is torn
* down.
*/
int scoutfs_volopt_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct volopt_info *vinf;
int ret;
int i;
/* persistent volume options are always a bitmap u64 then the 64 options */
BUILD_BUG_ON(sizeof(struct scoutfs_volume_options) != (1 + 64) * 8);
vinf = kzalloc(sizeof(struct volopt_info), GFP_KERNEL);
if (!vinf) {
ret = -ENOMEM;
goto out;
}
scoutfs_sysfs_init_attrs(sb, &vinf->ssa);
vinf->sb = sb;
sbi->volopt_info = vinf;
for (i = 0; i < ARRAY_SIZE(volopt_table); i++) {
volopt_attrs[i] = (struct kobj_attribute) {
.attr = { .name = volopt_table[i].name, .mode = S_IWUSR | S_IRUGO },
.show = volopt_attr_show,
.store = volopt_attr_store,
};
volopt_attr_ptrs[i] = &volopt_attrs[i].attr;
}
BUILD_BUG_ON(ARRAY_SIZE(volopt_table) != ARRAY_SIZE(volopt_attr_ptrs) - 1);
volopt_attr_ptrs[i] = NULL;
ret = scoutfs_sysfs_create_attrs(sb, &vinf->ssa, volopt_attr_ptrs, "volume_options");
if (ret < 0)
goto out;
out:
if (ret)
scoutfs_volopt_destroy(sb);
return ret;
}
void scoutfs_volopt_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct volopt_info *vinf = SCOUTFS_SB(sb)->volopt_info;
if (vinf) {
scoutfs_sysfs_destroy_attrs(sb, &vinf->ssa);
kfree(vinf);
sbi->volopt_info = NULL;
}
}


@@ -1,7 +0,0 @@
#ifndef _SCOUTFS_VOLOPT_H_
#define _SCOUTFS_VOLOPT_H_
int scoutfs_volopt_setup(struct super_block *sb);
void scoutfs_volopt_destroy(struct super_block *sb);
#endif


@@ -57,6 +57,12 @@ static u32 xattr_names_equal(const char *a_name, unsigned int a_len,
return a_len == b_len && memcmp(a_name, b_name, a_len) == 0;
}
static unsigned int xattr_full_bytes(struct scoutfs_xattr *xat)
{
return offsetof(struct scoutfs_xattr,
name[xat->name_len + le16_to_cpu(xat->val_len)]);
}
static unsigned int xattr_nr_parts(struct scoutfs_xattr *xat)
{
return SCOUTFS_XATTR_NR_PARTS(xat->name_len,
@@ -91,7 +97,6 @@ static int unknown_prefix(const char *name)
#define HIDE_TAG "hide."
#define SRCH_TAG "srch."
#define TOTL_TAG "totl."
#define TAG_LEN (sizeof(HIDE_TAG) - 1)
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
@@ -114,9 +119,6 @@ int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
} else if (!strncmp(name, SRCH_TAG, TAG_LEN)) {
if (++tgs->srch == 0)
return -EINVAL;
} else if (!strncmp(name, TOTL_TAG, TAG_LEN)) {
if (++tgs->totl == 0)
return -EINVAL;
} else {
/* only reason to use scoutfs. is tags */
if (!found)
@@ -131,29 +133,12 @@ int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
}
/*
* xattrs are stored in multiple items. The first item is a
* concatenation of an initial header, the name, and then as much of the
* value as fits in the remainder of the first item. This returns the
* size of the first item that'd store an xattr with the given name
* length and value payload size.
*/
static int first_item_bytes(int name_len, size_t size)
{
if (WARN_ON_ONCE(name_len <= 0) ||
WARN_ON_ONCE(name_len > SCOUTFS_XATTR_MAX_NAME_LEN))
return 0;
return min_t(int, sizeof(struct scoutfs_xattr) + name_len + size,
SCOUTFS_XATTR_MAX_PART_SIZE);
}
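
The layout described above means an xattr is just a header, name, and value concatenated and then cut into items of at most SCOUTFS_XATTR_MAX_PART_SIZE bytes. A hedged sketch of that arithmetic with made-up constants (the real sizes live in the format headers and may differ):

    #include <stdio.h>

    /* illustrative stand-ins for the real format constants */
    #define XATTR_HDR_BYTES         8
    #define XATTR_MAX_PART_SIZE     976

    /* header + name + value form one stream that is split into parts */
    static unsigned int nr_parts(unsigned int name_len, unsigned int val_len)
    {
            unsigned int total = XATTR_HDR_BYTES + name_len + val_len;

            return (total + XATTR_MAX_PART_SIZE - 1) / XATTR_MAX_PART_SIZE;
    }

    int main(void)
    {
            /* a 20 byte name with a 3000 byte value needs 4 parts here */
            printf("%u\n", nr_parts(20, 3000));
            return 0;
    }
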
/*
* Find the next xattr, set the caller's key, and copy as much of the
* first item into the callers buffer as we can. Returns the number of
* bytes copied which can include the header, name, and start of the
* value from the first item. The caller is responsible for comparing
* their lengths, the header, and the returned length before safely
* using the buffer.
* Find the next xattr and copy the key, xattr header, and as much of
* the name and value into the callers buffer as we can. Returns the
* number of bytes copied which include the header, name, and value and
* can be limited by the xattr length or the callers buffer. The caller
* is responsible for comparing their lengths, the header, and the
* returned length before safely using the xattr.
*
* If a name is provided then we'll iterate over items with a matching
* name_hash until we find a matching name. If we don't find a matching
@@ -165,17 +150,20 @@ static int first_item_bytes(int name_len, size_t size)
* Returns -ENOENT if it didn't find a next item.
*/
static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
struct scoutfs_xattr *xat, unsigned int xat_bytes,
struct scoutfs_xattr *xat, unsigned int bytes,
const char *name, unsigned int name_len,
u64 name_hash, u64 id, struct scoutfs_lock *lock)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_key last;
u8 last_part;
int total;
u8 part;
int ret;
/* need to be able to see the name we're looking for */
if (WARN_ON_ONCE(name_len > 0 &&
xat_bytes < offsetof(struct scoutfs_xattr, name[name_len])))
if (WARN_ON_ONCE(name_len > 0 && bytes < offsetof(struct scoutfs_xattr,
name[name_len])))
return -EINVAL;
if (name_len)
@@ -184,15 +172,26 @@ static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
init_xattr_key(key, scoutfs_ino(inode), name_hash, id);
init_xattr_key(&last, scoutfs_ino(inode), U32_MAX, U64_MAX);
last_part = 0;
part = 0;
total = 0;
for (;;) {
ret = scoutfs_item_next(sb, key, &last, xat, xat_bytes, lock);
if (ret < 0)
key->skx_part = part;
ret = scoutfs_item_next(sb, key, &last,
(void *)xat + total, bytes - total,
lock);
if (ret < 0) {
/* XXX corruption, ran out of parts */
if (ret == -ENOENT && part > 0)
ret = -EIO;
break;
}
trace_scoutfs_xattr_get_next_key(sb, key);
/* XXX corruption */
if (key->skx_part != 0) {
if (key->skx_part != part) {
ret = -EIO;
break;
}
@@ -202,7 +201,8 @@ static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
* the first part and if the next xattr name fits in our
* buffer then the item must have included it.
*/
if ((ret < sizeof(struct scoutfs_xattr) ||
if (part == 0 &&
(ret < sizeof(struct scoutfs_xattr) ||
(xat->name_len <= name_len &&
ret < offsetof(struct scoutfs_xattr,
name[xat->name_len])) ||
@@ -212,7 +212,7 @@ static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
break;
}
if (name_len > 0) {
if (part == 0 && name_len) {
/* ran out of names that could match */
if (le64_to_cpu(key->skx_name_hash) != name_hash) {
ret = -ENOENT;
@@ -220,126 +220,64 @@ static int get_next_xattr(struct inode *inode, struct scoutfs_key *key,
}
/* keep looking for our name */
if (!xattr_names_equal(name, name_len, xat->name, xat->name_len)) {
if (!xattr_names_equal(name, name_len,
xat->name, xat->name_len)) {
part = 0;
le64_add_cpu(&key->skx_id, 1);
continue;
}
/* use the matching name we found */
last_part = xattr_nr_parts(xat) - 1;
}
/* found next name */
break;
total += ret;
if (total == bytes || part == last_part) {
/* copied as much as we could */
ret = total;
break;
}
part++;
}
return ret;
}
/*
* The caller has already read and verified the xattr's first item.
* Copy the value from the tail of the first item and from any future
* items into the destination buffer.
*/
static int copy_xattr_value(struct super_block *sb, struct scoutfs_key *xat_key,
struct scoutfs_xattr *xat, int xat_bytes,
char *buffer, size_t size,
struct scoutfs_lock *lock)
{
struct scoutfs_key key;
size_t copied = 0;
int val_tail;
int bytes;
int ret;
int i;
/* must have first item up to value */
if (WARN_ON_ONCE(xat_bytes < sizeof(struct scoutfs_xattr)) ||
WARN_ON_ONCE(xat_bytes < offsetof(struct scoutfs_xattr, name[xat->name_len])))
return -EINVAL;
/* only ever copy up to the full value */
size = min_t(size_t, size, le16_to_cpu(xat->val_len));
/* must have full first item if caller needs value from second item */
val_tail = SCOUTFS_XATTR_MAX_PART_SIZE -
offsetof(struct scoutfs_xattr, name[xat->name_len]);
if (WARN_ON_ONCE(size > val_tail && xat_bytes != SCOUTFS_XATTR_MAX_PART_SIZE))
return -EINVAL;
/* copy from tail of first item */
bytes = min_t(unsigned int, size, val_tail);
if (bytes > 0) {
memcpy(buffer, &xat->name[xat->name_len], bytes);
copied += bytes;
}
key = *xat_key;
for (i = 1; copied < size; i++) {
key.skx_part = i;
bytes = min_t(unsigned int, size - copied, SCOUTFS_XATTR_MAX_PART_SIZE);
ret = scoutfs_item_lookup(sb, &key, buffer + copied, bytes, lock);
if (ret >= 0 && ret != bytes)
ret = -EIO;
if (ret < 0)
return ret;
copied += ret;
}
return copied;
}
/*
* The caller is working with items that are either in the allocated
* first compound item or further items that are offsets into a value
* buffer. Give them a pointer and length of the start of the item.
*/
static void xattr_item_part_buffer(void **buf, int *len, int part,
struct scoutfs_xattr *xat, unsigned int xat_bytes,
const char *value, size_t size)
{
int off;
if (part == 0) {
*buf = xat;
*len = xat_bytes;
} else {
off = (part * SCOUTFS_XATTR_MAX_PART_SIZE) -
offsetof(struct scoutfs_xattr, name[xat->name_len]);
BUG_ON(off >= size); /* calls limited by number of parts */
*buf = (void *)value + off;
*len = min_t(size_t, size - off, SCOUTFS_XATTR_MAX_PART_SIZE);
}
}
/*
* Create all the items associated with the given xattr. If this
* returns an error it will have already cleaned up any items it created
* before seeing the error.
*/
static int create_xattr_items(struct inode *inode, u64 id, struct scoutfs_xattr *xat,
int xat_bytes, const char *value, size_t size, u8 new_parts,
static int create_xattr_items(struct inode *inode, u64 id,
struct scoutfs_xattr *xat, unsigned int bytes,
struct scoutfs_lock *lock)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_key key;
int ret = 0;
void *buf;
int len;
int i;
unsigned int part_bytes;
unsigned int total;
int ret;
init_xattr_key(&key, scoutfs_ino(inode),
xattr_name_hash(xat->name, xat->name_len), id);
for (i = 0; i < new_parts; i++) {
key.skx_part = i;
xattr_item_part_buffer(&buf, &len, i, xat, xat_bytes, value, size);
total = 0;
ret = 0;
while (total < bytes) {
part_bytes = min_t(unsigned int, bytes - total,
SCOUTFS_XATTR_MAX_PART_SIZE);
ret = scoutfs_item_create(sb, &key, buf, len, lock);
if (ret < 0) {
ret = scoutfs_item_create(sb, &key,
(void *)xat + total, part_bytes,
lock);
if (ret) {
while (key.skx_part-- > 0)
scoutfs_item_delete(sb, &key, lock);
break;
}
total += part_bytes;
key.skx_part++;
}
return ret;
@@ -387,20 +325,20 @@ out:
* deleted items.
*/
static int change_xattr_items(struct inode *inode, u64 id,
struct scoutfs_xattr *xat, int xat_bytes,
const char *value, size_t size,
u8 new_parts, u8 old_parts, struct scoutfs_lock *lock)
struct scoutfs_xattr *new_xat,
unsigned int new_bytes, u8 new_parts,
u8 old_parts, struct scoutfs_lock *lock)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_key key;
int last_created = -1;
void *buf;
int len;
int bytes;
int off;
int i;
int ret;
init_xattr_key(&key, scoutfs_ino(inode),
xattr_name_hash(xat->name, xat->name_len), id);
xattr_name_hash(new_xat->name, new_xat->name_len), id);
/* dirty existing old items */
for (i = 0; i < old_parts; i++) {
@@ -412,10 +350,13 @@ static int change_xattr_items(struct inode *inode, u64 id,
/* create any new items past the old */
for (i = old_parts; i < new_parts; i++) {
key.skx_part = i;
xattr_item_part_buffer(&buf, &len, i, xat, xat_bytes, value, size);
off = i * SCOUTFS_XATTR_MAX_PART_SIZE;
bytes = min_t(unsigned int, new_bytes - off,
SCOUTFS_XATTR_MAX_PART_SIZE);
ret = scoutfs_item_create(sb, &key, buf, len, lock);
key.skx_part = i;
ret = scoutfs_item_create(sb, &key, (void *)new_xat + off,
bytes, lock);
if (ret)
goto out;
@@ -423,11 +364,14 @@ static int change_xattr_items(struct inode *inode, u64 id,
}
/* update dirtied overlapping existing items, last partial first */
for (i = min(old_parts, new_parts) - 1; i >= 0; i--) {
key.skx_part = i;
xattr_item_part_buffer(&buf, &len, i, xat, xat_bytes, value, size);
for (i = old_parts - 1; i >= 0; i--) {
off = i * SCOUTFS_XATTR_MAX_PART_SIZE;
bytes = min_t(unsigned int, new_bytes - off,
SCOUTFS_XATTR_MAX_PART_SIZE);
ret = scoutfs_item_update(sb, &key, buf, len, lock);
key.skx_part = i;
ret = scoutfs_item_update(sb, &key, (void *)new_xat + off,
bytes, lock);
/* only last partial can fail, then we unwind created */
if (ret < 0)
goto out;
@@ -464,7 +408,7 @@ ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_key key;
unsigned int xat_bytes;
unsigned int bytes;
size_t name_len;
int ret;
@@ -475,8 +419,9 @@ ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
if (name_len > SCOUTFS_XATTR_MAX_NAME_LEN)
return -ENODATA;
xat_bytes = first_item_bytes(name_len, size);
xat = kmalloc(xat_bytes, GFP_NOFS);
/* only need enough for caller's name and value sizes */
bytes = sizeof(struct scoutfs_xattr) + name_len + size;
xat = __vmalloc(bytes, GFP_NOFS, PAGE_KERNEL);
if (!xat)
return -ENOMEM;
@@ -486,129 +431,43 @@ ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
down_read(&si->xattr_rwsem);
ret = get_next_xattr(inode, &key, xat, xat_bytes, name, name_len, 0, 0, lck);
ret = get_next_xattr(inode, &key, xat, bytes,
name, name_len, 0, 0, lck);
up_read(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_READ);
if (ret < 0) {
if (ret == -ENOENT)
ret = -ENODATA;
goto unlock;
goto out;
}
/* the caller just wants to know the size */
if (size == 0) {
ret = le16_to_cpu(xat->val_len);
goto unlock;
goto out;
}
/* the caller's buffer wasn't big enough */
if (size < le16_to_cpu(xat->val_len)) {
ret = -ERANGE;
goto unlock;
goto out;
}
ret = copy_xattr_value(sb, &key, xat, xat_bytes, buffer, size, lck);
unlock:
up_read(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_READ);
/* XXX corruption, the items didn't match the header */
if (ret < xattr_full_bytes(xat)) {
ret = -EIO;
goto out;
}
ret = le16_to_cpu(xat->val_len);
memcpy(buffer, &xat->name[xat->name_len], ret);
out:
kfree(xat);
vfree(xat);
return ret;
}
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name)
{
scoutfs_key_set_zeros(key);
key->sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
key->skxt_a = cpu_to_le64(name[0]);
key->skxt_b = cpu_to_le64(name[1]);
key->skxt_c = cpu_to_le64(name[2]);
}
/*
* Parse a u64 in any base after null terminating it while forbidding
* the leading + and trailing \n that kstrtoull allows.
*/
static int parse_totl_u64(const char *s, int len, u64 *res)
{
char str[SCOUTFS_XATTR_MAX_TOTL_U64 + 1];
if (len <= 0 || len >= ARRAY_SIZE(str) || s[0] == '+' || s[len - 1] == '\n')
return -EINVAL;
memcpy(str, s, len);
str[len] = '\0';
return kstrtoull(str, 0, res) != 0 ? -EINVAL : 0;
}
/*
* A non-destructive, relatively quick parse of the last 3 dotted u64s
* that make up the name of the xattr total. -EINVAL is returned if
* there is anything but 3 valid u64 encodings between single dots at
* the end of the name.
*/
static int parse_totl_key(struct scoutfs_key *key, const char *name, int name_len)
{
u64 tot_name[3];
int end = name_len;
int nr = 0;
int len;
int ret;
int i;
/* parse name elements in reverse order from end of xattr name string */
for (i = name_len - 1; i >= 0 && nr < ARRAY_SIZE(tot_name); i--) {
if (name[i] != '.')
continue;
len = end - (i + 1);
ret = parse_totl_u64(&name[i + 1], len, &tot_name[nr]);
if (ret < 0)
goto out;
end = i;
nr++;
}
if (nr == ARRAY_SIZE(tot_name)) {
/* swap to account for parsing in reverse */
swap(tot_name[0], tot_name[2]);
scoutfs_xattr_init_totl_key(key, tot_name);
ret = 0;
} else {
ret = -EINVAL;
}
out:
return ret;
}
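
Putting the two parsers together: a totl xattr's name must end in three dotted u64s and its value must be a bare u64, and the filesystem folds both into a shared total item. A hedged userspace example of setting one (the scoutfs.totl. prefix comes from the tag parsing above; the path and numbers are made up, and the tagged scoutfs. prefixes are gated by CAP_SYS_ADMIN in the set path):

    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
            const char *path = argc > 1 ? argv[1] : "/mnt/test/file";
            /* the name carries the 3 dotted u64s, the value is a plain u64 */
            const char *name = "scoutfs.totl.1.2.3";
            const char *val = "10";

            if (setxattr(path, name, val, strlen(val), 0) != 0) {
                    perror("setxattr");
                    return 1;
            }
            printf("added %s to total %s\n", val, name + strlen("scoutfs.totl."));
            return 0;
    }
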
static int apply_totl_delta(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_xattr_totl_val *tval, struct scoutfs_lock *lock)
{
if (tval->total == 0 && tval->count == 0)
return 0;
return scoutfs_item_delta(sb, key, tval, sizeof(*tval), lock);
}
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
{
struct scoutfs_xattr_totl_val *s_tval = src;
struct scoutfs_xattr_totl_val *d_tval = dst;
if (src_len != sizeof(*s_tval) || dst_len != src_len)
return -EIO;
le64_add_cpu(&d_tval->total, le64_to_cpu(s_tval->total));
le64_add_cpu(&d_tval->count, le64_to_cpu(s_tval->count));
if (d_tval->total == 0 && d_tval->count == 0)
return SCOUTFS_DELTA_COMBINED_NULL;
return SCOUTFS_DELTA_COMBINED;
}
/*
* The confusing swiss army knife of creating, modifying, and deleting
* xattrs.
@@ -627,23 +486,16 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_xattr_totl_val tval = {0,};
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_lock *totl_lock = NULL;
size_t name_len = strlen(name);
struct scoutfs_key totl_key;
struct scoutfs_key key;
bool undo_srch = false;
bool undo_totl = false;
LIST_HEAD(ind_locks);
u8 found_parts;
unsigned int xat_bytes_totl;
unsigned int xat_bytes;
unsigned int val_len;
unsigned int bytes;
u64 ind_seq;
u64 total;
u64 hash = 0;
u64 id = 0;
int ret;
@@ -667,18 +519,11 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
if ((tgs.hide | tgs.srch | tgs.totl) && !capable(CAP_SYS_ADMIN))
if ((tgs.hide || tgs.srch) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (tgs.totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
return ret;
/* allocate enough to always read an existing xattr's totl */
xat_bytes_totl = first_item_bytes(name_len,
max_t(size_t, size, SCOUTFS_XATTR_MAX_TOTL_U64));
/* but store partial first item that only includes the new xattr's value */
xat_bytes = first_item_bytes(name_len, size);
xat = kmalloc(xat_bytes_totl, GFP_NOFS);
bytes = sizeof(struct scoutfs_xattr) + name_len + size;
xat = __vmalloc(bytes, GFP_NOFS, PAGE_KERNEL);
if (!xat) {
ret = -ENOMEM;
goto out;
@@ -691,8 +536,10 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
down_write(&si->xattr_rwsem);
/* find an existing xattr to delete, including possible totl value */
ret = get_next_xattr(inode, &key, xat, xat_bytes_totl, name, name_len, 0, 0, lck);
/* find an existing xattr to delete */
ret = get_next_xattr(inode, &key, xat,
sizeof(struct scoutfs_xattr) + name_len,
name, name_len, 0, 0, lck);
if (ret < 0 && ret != -ENOENT)
goto unlock;
@@ -711,24 +558,10 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
goto unlock;
}
/* s64 count delta if we create or delete */
if (tgs.totl)
tval.count = cpu_to_le64((u64)!!(value) - (u64)!!(ret != -ENOENT));
/* found fields in key will also be used */
found_parts = ret >= 0 ? xattr_nr_parts(xat) : 0;
if (found_parts && tgs.totl) {
/* parse old totl value before we clobber xat buf */
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
ret = parse_totl_u64(&xat->name[xat->name_len], val_len, &total);
if (ret < 0)
goto unlock;
le64_add_cpu(&tval.total, -total);
}
/* prepare the xattr header, name, and start of value in first item */
/* prepare our xattr */
if (value) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
@@ -738,29 +571,13 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
xat->val_len = cpu_to_le16(size);
memset(xat->__pad, 0, sizeof(xat->__pad));
memcpy(xat->name, name, name_len);
memcpy(&xat->name[name_len], value,
min(size, SCOUTFS_XATTR_MAX_PART_SIZE -
offsetof(struct scoutfs_xattr, name[name_len])));
if (tgs.totl) {
ret = parse_totl_u64(value, size, &total);
if (ret < 0)
goto unlock;
}
le64_add_cpu(&tval.total, total);
}
if (tgs.totl) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret)
goto unlock;
memcpy(&xat->name[xat->name_len], value, size);
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, true);
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq);
if (ret > 0)
goto retry;
if (ret)
@@ -780,23 +597,15 @@ retry:
undo_srch = true;
}
if (tgs.totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
goto release;
undo_totl = true;
}
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, xat_bytes, value, size,
ret = change_xattr_items(inode, id, xat, bytes,
xattr_nr_parts(xat), found_parts, lck);
else if (found_parts)
ret = delete_xattr_items(inode, le64_to_cpu(key.skx_name_hash),
le64_to_cpu(key.skx_id), found_parts,
lck);
else
ret = create_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), lck);
ret = create_xattr_items(inode, id, xat, bytes, lck);
if (ret < 0)
goto release;
@@ -811,22 +620,14 @@ release:
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
if (ret < 0 && undo_totl) {
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
BUG_ON(err);
}
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
unlock:
up_write(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
out:
kfree(xat);
vfree(xat);
return ret;
}
@@ -855,7 +656,7 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_key key;
unsigned int xat_bytes;
unsigned int bytes;
ssize_t total = 0;
u32 name_hash = 0;
bool is_hidden;
@@ -868,8 +669,8 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
id = *id_pos;
/* need a buffer large enough for all possible names */
xat_bytes = first_item_bytes(SCOUTFS_XATTR_MAX_NAME_LEN, 0);
xat = kmalloc(xat_bytes, GFP_NOFS);
bytes = sizeof(struct scoutfs_xattr) + SCOUTFS_XATTR_MAX_NAME_LEN;
xat = kmalloc(bytes, GFP_NOFS);
if (!xat) {
ret = -ENOMEM;
goto out;
@@ -882,7 +683,8 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
down_read(&si->xattr_rwsem);
for (;;) {
ret = get_next_xattr(inode, &key, xat, xat_bytes, NULL, 0, name_hash, id, lck);
ret = get_next_xattr(inode, &key, xat, bytes,
NULL, 0, name_hash, id, lck);
if (ret < 0) {
if (ret == -ENOENT)
ret = total;
@@ -944,22 +746,15 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
{
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *totl_lock = NULL;
struct scoutfs_xattr_totl_val tval;
struct scoutfs_key totl_key;
struct scoutfs_key last;
struct scoutfs_key key;
bool release = false;
unsigned int bytes;
unsigned int val_len;
void *value;
u64 total;
u64 hash;
int ret;
/* need a buffer large enough for all possible names and totl value */
bytes = sizeof(struct scoutfs_xattr) + SCOUTFS_XATTR_MAX_NAME_LEN +
SCOUTFS_XATTR_MAX_TOTL_U64;
/* need a buffer large enough for all possible names */
bytes = sizeof(struct scoutfs_xattr) + SCOUTFS_XATTR_MAX_NAME_LEN;
xat = kmalloc(bytes, GFP_NOFS);
if (!xat) {
ret = -ENOMEM;
@@ -978,38 +773,12 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
break;
}
if (key.skx_part == 0 && (ret < sizeof(struct scoutfs_xattr) ||
ret < offsetof(struct scoutfs_xattr, name[xat->name_len]))) {
ret = -EIO;
break;
}
if (key.skx_part != 0 ||
scoutfs_xattr_parse_tags(xat->name, xat->name_len,
&tgs) != 0)
memset(&tgs, 0, sizeof(tgs));
if (tgs.totl) {
value = &xat->name[xat->name_len];
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
if (val_len != le16_to_cpu(xat->val_len)) {
ret = -EIO;
goto out;
}
ret = parse_totl_key(&totl_key, xat->name, xat->name_len) ?:
parse_totl_u64(value, val_len, &total);
if (ret < 0)
break;
}
if (tgs.totl && totl_lock == NULL) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret < 0)
break;
}
ret = scoutfs_hold_trans(sb, false);
ret = scoutfs_hold_trans(sb);
if (ret < 0)
break;
release = true;
@@ -1026,14 +795,6 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
break;
}
if (tgs.totl) {
tval.total = cpu_to_le64(-total);
tval.count = cpu_to_le64(-1LL);
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
break;
}
scoutfs_release_trans(sb);
release = false;
@@ -1042,7 +803,6 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
if (release)
scoutfs_release_trans(sb);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
kfree(xat);
out:
return ret;


@@ -16,14 +16,10 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
srch:1,
totl:1;
srch:1;
};
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs);
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name);
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len);
#endif

tests/.gitignore

@@ -1,10 +1,6 @@
src/*.d
src/createmany
src/dumb_renameat2
src/dumb_setxattr
src/handle_cat
src/handle_fsetxattr
src/bulk_create_paths
src/find_xattrs
src/stage_tmpfile
src/create_xattr_loop


@@ -1,16 +1,12 @@
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing -I ../kmod/src
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing
SHELL := /usr/bin/bash
# each binary command is built from a single .c file
BIN := src/createmany \
src/dumb_renameat2 \
src/dumb_setxattr \
src/handle_cat \
src/handle_fsetxattr \
src/bulk_create_paths \
src/stage_tmpfile \
src/find_xattrs \
src/create_xattr_loop
src/find_xattrs
DEPS := $(wildcard src/*.d)


@@ -1,43 +0,0 @@
#!/usr/bin/bash
#
# This fencing script is used for testing clusters of multiple mounts on
# a single host. It finds mounts to fence by looking for their rids and
# only knows how to "fence" by using forced unmount.
#
echo "$0 running rid '$SCOUTFS_FENCED_REQ_RID' ip '$SCOUTFS_FENCED_REQ_IP' args '$@'"
log() {
echo "$@" > /dev/stderr
exit 1
}
echo_fail() {
echo "$@" > /dev/stderr
exit 1
}
rid="$SCOUTFS_FENCED_REQ_RID"
for fs in /sys/fs/scoutfs/*; do
[ ! -d "$fs" ] && continue
fs_rid="$(cat $fs/rid)" || \
echo_fail "failed to get rid in $fs"
if [ "$fs_rid" != "$rid" ]; then
continue
fi
nr="$(cat $fs/data_device_maj_min)" || \
echo_fail "failed to get data device major:minor in $fs"
mnts=$(findmnt -l -n -t scoutfs -o TARGET -S $nr) || \
echo_fail "findmnt -t scoutfs -S $nr failed"
for mnt in $mnts; do
umount -f "$mnt" || \
echo_fail "umout -f $mnt failed"
done
done
exit 0


@@ -40,7 +40,7 @@ t_filter_dmesg()
# mount and unmount spew a bunch
re="$re|scoutfs.*client connected"
re="$re|scoutfs.*client disconnected"
re="$re|scoutfs.*server starting"
re="$re|scoutfs.*server setting up"
re="$re|scoutfs.*server ready"
re="$re|scoutfs.*server accepted"
re="$re|scoutfs.*server closing"
@@ -52,35 +52,15 @@ t_filter_dmesg()
# tests that drop unmount io triggers fencing
re="$re|scoutfs .* error: fencing "
re="$re|scoutfs .*: waiting for .* clients"
re="$re|scoutfs .*: all clients recovered"
re="$re|scoutfs .*: waiting for .* lock clients"
re="$re|scoutfs .*: all lock clients recovered"
re="$re|scoutfs .* error: client rid.*lock recovery timed out"
# we test bad devices and options
# some tests mount w/o options
re="$re|scoutfs .* error: Required mount option \"metadev_path\" not found"
re="$re|scoutfs .* error: meta_super META flag not set"
re="$re|scoutfs .* error: could not open metadev:.*"
re="$re|scoutfs .* error: Unknown or malformed option,.*"
# in debugging kernels we can slow things down a bit
re="$re|hrtimer: interrupt took .*"
# fencing tests force unmounts and trigger timeouts
re="$re|scoutfs .* forcing unmount"
re="$re|scoutfs .* reconnect timed out"
re="$re|scoutfs .* recovery timeout expired"
re="$re|scoutfs .* fencing previous leader"
re="$re|scoutfs .* reclaimed resources"
re="$re|scoutfs .* quorum .* error"
re="$re|scoutfs .* error reading quorum block"
re="$re|scoutfs .* error .* writing quorum block"
re="$re|scoutfs .* error .* while checking to delete inode"
re="$re|scoutfs .* error .*writing btree blocks.*"
re="$re|scoutfs .* error .*writing super block.*"
re="$re|scoutfs .* error .* freeing merged btree blocks.*.looping commit del.*upd freeing item"
re="$re|scoutfs .* error .* freeing merged btree blocks.*.final commit del.upd freeing item"
re="$re|scoutfs .* error .*reading quorum block.*to update event.*"
re="$re|scoutfs .* error.*server failed to bind to.*"
egrep -v "($re)"
}


@@ -17,17 +17,6 @@ t_sync_seq_index()
t_quiet sync
}
t_mount_rid()
{
local nr="${1:-0}"
local mnt="$(eval echo \$T_M$nr)"
local rid
rid=$(scoutfs statfs -s rid -p "$mnt")
echo "$rid"
}
#
# Output the "f.$fsid.r.$rid" identifier string for the given mount
# number, 0 is used by default if none is specified.
@@ -75,20 +64,6 @@ t_fs_nrs()
seq 0 $((T_NR_MOUNTS - 1))
}
#
# outputs "1" if the fs number has "1" in its quorum/is_leader file.
# All other cases output 0, including the fs nr being a client which
# won't have a quorum/ dir.
#
t_fs_is_leader()
{
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader 2>/dev/null)" == "1" ]; then
echo "1"
else
echo "0"
fi
}
#
# Output the mount nr of the current server. This takes no steps to
# ensure that the server doesn't shut down and have some other mount
@@ -97,7 +72,7 @@ t_fs_is_leader()
t_server_nr()
{
for i in $(t_fs_nrs); do
if [ "$(t_fs_is_leader $i)" == "1" ]; then
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader)" == "1" ]; then
echo $i
return
fi
@@ -115,7 +90,7 @@ t_server_nr()
t_first_client_nr()
{
for i in $(t_fs_nrs); do
if [ "$(t_fs_is_leader $i)" == "0" ]; then
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader)" == "0" ]; then
echo $i
return
fi
@@ -154,17 +129,7 @@ t_umount()
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet umount \$T_M$nr
}
t_force_umount()
{
local nr="$1"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet umount -f \$T_M$nr
eval t_quiet umount \$T_M$i
}
#
@@ -312,113 +277,3 @@ t_counter_diff_changed() {
echo "counter $which didn't change" ||
echo "counter $which changed"
}
#
# See if we can find a local mount with the caller's rid.
#
t_rid_is_mounted() {
local rid="$1"
local fr="$1"
for fr in /sys/fs/scoutfs/*; do
if [ "$(cat $fr/rid)" == "$rid" ]; then
return 0
fi
done
return 1
}
#
# A given mount is being fenced if any mount has a fence request pending
# for it which hasn't finished and been removed.
#
t_rid_is_fencing() {
local rid="$1"
local fr
for fr in /sys/fs/scoutfs/*; do
if [ -d "$fr/fence/$rid" ]; then
return 0
fi
done
return 1
}
#
# Wait until the mount identified by the first rid arg is not in any
# states specified by the remaining state description word args.
#
t_wait_if_rid_is() {
local rid="$1"
while ( [[ $* =~ mounted ]] && t_rid_is_mounted $rid ) ||
( [[ $* =~ fencing ]] && t_rid_is_fencing $rid ) ; do
sleep .5
done
}
#
# Wait until any mount identifies itself as the elected leader. We can
# be waiting while tests mount and unmount so mounts may not be mounted
# at the test's expected mount points.
#
t_wait_for_leader() {
local i
while sleep .25; do
for i in $(t_fs_nrs); do
local ldr="$(t_sysfs_path $i 2>/dev/null)/quorum/is_leader"
if [ "$(cat $ldr 2>/dev/null)" == "1" ]; then
return
fi
done
done
}
t_set_sysfs_mount_option() {
local nr="$1"
local name="$2"
local val="$3"
local opt="$(t_sysfs_path $nr)/mount_options/$name"
echo "$val" > "$opt"
}
t_set_all_sysfs_mount_options() {
local name="$1"
local val="$2"
local i
for i in $(t_fs_nrs); do
t_set_sysfs_mount_option $i $name $val
done
}
declare -A _saved_opts
t_save_all_sysfs_mount_options() {
local name="$1"
local ind
local opt
local i
for i in $(t_fs_nrs); do
opt="$(t_sysfs_path $i)/mount_options/$name"
ind="$name_$i"
_saved_opts[$ind]="$(cat $opt)"
done
}
t_restore_all_sysfs_mount_options() {
local name="$1"
local ind
local i
for i in $(t_fs_nrs); do
ind="$name_$i"
t_set_sysfs_mount_option $i $name "${_saved_opts[$ind]}"
done
}


@@ -1,6 +0,0 @@
== prepare devices, mount point, and logs
== bad devices, bad options
== swapped devices
== both meta devices
== both data devices
== good volume, bad option and good options


@@ -53,5 +53,3 @@ mv: cannot move /mnt/test/test/basic-posix-consistency/dir/c/clobber to
== inode indexes match after syncing existing
== inode indexes match after copying and syncing
== inode indexes match after removing and syncing
== concurrent creates make one file
one-file


@@ -1,2 +1,52 @@
== Issue scoutfs df to force block reads to trigger stale invalidation/retry
== create shared test file
== set and get xattrs between mount pairs while retrying
# file: /mnt/test/test/block-stale-reads/file
user.xat="1"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="2"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="3"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="4"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="5"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="6"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="7"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="8"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="9"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed
# file: /mnt/test/test/block-stale-reads/file
user.xat="10"
counter block_cache_remove_stale changed
counter block_cache_remove_stale changed


@@ -1 +0,0 @@
== 60s of unmounting non-quorum clients during recovery


@@ -1,8 +0,0 @@
== prepare directories and files
== fallocate until enospc
== remove all the files and verify free data blocks
== make small meta fs
== create large xattrs until we fill up metadata
== remove files with xattrs after enospc
== make sure we can create again
== cleanup small meta fs


@@ -1,3 +0,0 @@
== creating reasonably large per-mount files
== 10s of racing cold reads and fallocate nop
== cleaning up files


@@ -1,5 +0,0 @@
== make sure all mounts can see each other
== force unmount one client, connection timeout, fence nop, mount
== force unmount all non-server, connection timeout, fence nop, mount
== force unmount server, quorum elects new leader, fence nop, mount
== force unmount everything, new server fences all previous


@@ -1,27 +0,0 @@
== basic unlink deletes
ino found in dseq index
ino not found in dseq index
== local open-unlink waits for close to delete
contents after rm: contents
ino found in dseq index
ino not found in dseq index
== multiple local opens are protected
contents after rm 1: contents
contents after rm 2: contents
ino found in dseq index
ino not found in dseq index
== remote unopened unlink deletes
ino not found in dseq index
ino not found in dseq index
== unlink wait for open on other mount
mount 0 contents after mount 1 rm: contents
ino found in dseq index
ino found in dseq index
stat: cannot stat /mnt/test/test/inode-deletion/file: No such file or directory
ino not found in dseq index
ino not found in dseq index
== lots of deletions use one open map
== open files survive remote scanning orphans
mount 0 contents after mount 1 remounted: contents
ino not found in dseq index
ino not found in dseq index


@@ -0,0 +1,4 @@
== create per mount files
== time independent modification
== time concurrent independent modification
== time concurrent conflicting modification


@@ -1,3 +0,0 @@
== starting background invalidating read/write load
== 60s of lock recovery during invalidating load
== stopping background load


@@ -1,5 +0,0 @@
== test our inode existance function
== unlinked and opened inodes still exist
== orphan from failed evict deletion is picked up
== orphaned inos in all mounts all deleted
== 30s of racing evict deletion, orphan scanning, and open by handle


@@ -1,2 +0,0 @@
=== renameat2 noreplace flag test
=== run two asynchronous calls to renameat2 NOREPLACE


@@ -1,27 +0,0 @@
== make initial small fs
== 0s do nothing
== shrinking fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== existing sizes do nothing
== growing outside device fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== resizing meta works
== resizing data works
== shrinking back fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== resizing again does nothing
== resizing to full works
== cleanup extra fs


@@ -16,4 +16,3 @@ setfattr: /mnt/test/test/simple-xattr-unit/file: Numerical result out of range
setfattr: /mnt/test/test/simple-xattr-unit/file: Argument list too long
=== good length boundaries
=== 500 random lengths
=== alternate val size between interesting sizes


@@ -2,7 +2,6 @@
== update existing xattr
== remove an xattr
== remove xattr with files
== trigger small log merges by rotating single block with unmount
== create entries in current log
== delete small fraction
== remove files


@@ -1,18 +0,0 @@
total file size 33669120
00000000 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 |AAAAAAAAAAAAAAAA|
*
00400000 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 |BBBBBBBBBBBBBBBB|
*
00801000 43 43 43 43 43 43 43 43 43 43 43 43 43 43 43 43 |CCCCCCCCCCCCCCCC|
*
00c03000 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 |DDDDDDDDDDDDDDDD|
*
01006000 45 45 45 45 45 45 45 45 45 45 45 45 45 45 45 45 |EEEEEEEEEEEEEEEE|
*
0140a000 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 |FFFFFFFFFFFFFFFF|
*
0180f000 47 47 47 47 47 47 47 47 47 47 47 47 47 47 47 47 |GGGGGGGGGGGGGGGG|
*
01c15000 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 |HHHHHHHHHHHHHHHH|
*
0201c000


@@ -1,30 +0,0 @@
== single file
1.2.3 = 1, 1
4.5.6 = 1, 1
== multiple files add up
1.2.3 = 2, 2
4.5.6 = 2, 2
== removing xattr updates total
1.2.3 = 2, 2
4.5.6 = 1, 1
== updating xattr updates total
1.2.3 = 11, 2
4.5.6 = 1, 1
== removing files update total
1.2.3 = 10, 1
== multiple files/names in one transaction
1.2.3 = 55, 10
== testing invalid names
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
== testing invalid values
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
== larger population that could merge


@@ -1,7 +1,6 @@
Ran:
generic/001
generic/002
generic/004
generic/005
generic/006
generic/007
@@ -9,8 +8,6 @@ generic/011
generic/013
generic/014
generic/020
generic/023
generic/024
generic/028
generic/032
generic/034
@@ -76,6 +73,7 @@ generic/376
generic/377
Not
run:
generic/004
generic/008
generic/009
generic/012
@@ -84,7 +82,6 @@ generic/016
generic/018
generic/021
generic/022
generic/025
generic/026
generic/031
generic/033
@@ -96,7 +93,6 @@ generic/060
generic/061
generic/063
generic/064
generic/078
generic/079
generic/081
generic/082
@@ -282,4 +278,4 @@ shared/004
shared/032
shared/051
shared/289
Passed all 75 tests
Passed all 72 tests


@@ -18,15 +18,10 @@ die() {
exit 1
}
timestamp()
{
date '+%F %T.%N'
}
# output a message with a timestamp to the run.log
log()
{
echo "[$(timestamp)] $*" >> "$T_RESULTS/run.log"
echo "[$(date '+%F %T.%N')] $*" >> "$T_RESULTS/run.log"
}
# run a logged command, exiting if it fails
@@ -71,7 +66,6 @@ $(basename $0) options:
-X | xfstests git repo. Used by tests/xfstests.sh.
-x | xfstests git branch to checkout and track.
-y | xfstests ./check additional args
-z <nr> | set data-alloc-zone-blocks in mkfs
EOF
}
@@ -175,11 +169,6 @@ while true; do
T_XFSTESTS_ARGS="$2"
shift
;;
-z)
test -n "$2" || die "-z must have nr mounts argument"
T_DATA_ALLOC_ZONE_BLOCKS="-z $2"
shift
;;
-h|-\?|--help)
show_help
exit 1
@@ -227,9 +216,8 @@ test "$T_QUORUM" -le "$T_NR_MOUNTS" || \
die "-q quorum mmembers must not be greater than -n mounts"
# top level paths
T_TESTS=$(realpath "$(dirname $0)")
T_KMOD=$(realpath "$T_TESTS/../kmod")
T_UTILS=$(realpath "$T_TESTS/../utils")
T_KMOD=$(realpath "$(dirname $0)/../kmod")
T_UTILS=$(realpath "$T_KMOD/../utils")
test -d "$T_KMOD" || die "kmod/ repo dir $T_KMOD not directory"
test -d "$T_UTILS" || die "utils/ repo dir $T_UTILS not directory"
@@ -255,20 +243,17 @@ test -e "$T_RESULTS" || mkdir -p "$T_RESULTS"
test -d "$T_RESULTS" || \
die "$T_RESULTS dir is not a directory"
# might as well build our stuff with all cpus, assuming idle system
MAKE_ARGS="-j $(getconf _NPROCESSORS_ONLN)"
# build kernel module
msg "building kmod/ dir $T_KMOD"
cmd cd "$T_KMOD"
cmd make $MAKE_ARGS
cmd make
cmd sync
cmd cd -
# build utils
msg "building utils/ dir $T_UTILS"
cmd cd "$T_UTILS"
cmd make $MAKE_ARGS
cmd make
cmd sync
cmd cd -
@@ -285,7 +270,7 @@ fi
# building our test binaries
msg "building test binaries"
cmd make $MAKE_ARGS
cmd make
# set any options implied by others
test -n "$T_MKFS" && T_UNMOUNT=1
@@ -334,8 +319,7 @@ if [ -n "$T_MKFS" ]; then
done
msg "making new filesystem with $T_QUORUM quorum members"
cmd scoutfs mkfs -f $quo $T_DATA_ALLOC_ZONE_BLOCKS \
"$T_META_DEVICE" "$T_DATA_DEVICE"
cmd scoutfs mkfs -f $quo "$T_META_DEVICE" "$T_DATA_DEVICE"
fi
if [ -n "$T_INSMOD" ]; then
@@ -376,40 +360,6 @@ cmd cat /sys/kernel/debug/tracing/set_event
cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/proc/sys/kernel/ftrace_dump_on_oops
#
# Build a fenced config that runs scripts out of the repository rather
# than the default system directory
#
conf="$T_RESULTS/scoutfs-fenced.conf"
cat > $conf << EOF
SCOUTFS_FENCED_DELAY=1
SCOUTFS_FENCED_RUN=$T_TESTS/fenced-local-force-unmount.sh
SCOUTFS_FENCED_RUN_ARGS="ignored run args"
EOF
export SCOUTFS_FENCED_CONFIG_FILE="$conf"
T_FENCED_LOG="$T_RESULTS/fenced.log"
#
# Run the agent in the background, log its output, and kill it if we
# exit
#
fenced_log()
{
echo "[$(timestamp)] $*" >> "$T_FENCED_LOG"
}
fenced_pid=""
kill_fenced()
{
if test -n "$fenced_pid" -a -d "/proc/$fenced_pid" ; then
fenced_log "killing fenced pid $fenced_pid"
kill "$fenced_pid"
fi
}
trap kill_fenced EXIT
$T_UTILS/fenced/scoutfs-fenced > "$T_FENCED_LOG" 2>&1 &
fenced_pid=$!
fenced_log "started fenced pid $fenced_pid in the background"
#
# mount concurrently so that a quorum is present to elect the leader and
# start a server.

View File

@@ -1,45 +1,32 @@
export-get-name-parent.sh
basic-block-counts.sh
basic-bad-mounts.sh
inode-items-updated.sh
simple-inode-index.sh
simple-staging.sh
simple-release-extents.sh
fallocate.sh
setattr_more.sh
offline-extent-waiting.sh
move-blocks.sh
enospc.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
totl-xattr-tag.sh
lock-refleak.sh
lock-shrink-consistency.sh
lock-pr-cw-conflict.sh
lock-revoke-getcwd.sh
lock-recover-invalidate.sh
export-lookup-evict-race.sh
createmany-parallel.sh
createmany-large-names.sh
createmany-rename-large-dir.sh
stage-release-race-alloc.sh
stage-multi-part.sh
stage-tmpfile.sh
basic-posix-consistency.sh
dirent-consistency.sh
mkdir-rename-rmdir.sh
lock-ex-race-processes.sh
lock-conflicting-batch-commit.sh
cross-mount-data-free.sh
persistent-item-vers.sh
setup-error-teardown.sh
resize-devices.sh
fence-and-reclaim.sh
orphan-inodes.sh
mount-unmount-race.sh
client-unmount-recovery.sh
createmany-parallel-mounts.sh
archive-light-cycle.sh
block-stale-reads.sh
inode-deletion.sh
renameat2-noreplace.sh
xfstests.sh

View File

@@ -1,113 +0,0 @@
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/xattr.h>
#include <ctype.h>
#include <string.h>
#include <errno.h>
#include <limits.h>
static void exit_usage(void)
{
printf(" -h/-? output this usage message and exit\n"
" -c <count> number of xattrs to create\n"
" -n <string> xattr name prefix, -NR is appended\n"
" -p <path> string with path to file with xattrs\n"
" -s <size> xattr value size\n");
exit(1);
}
int main(int argc, char **argv)
{
char *pref = NULL;
char *path = NULL;
char *val;
char *name;
unsigned long long count = 0;
unsigned long long size = 0;
unsigned long long i;
int ret;
int c;
while ((c = getopt(argc, argv, "+c:n:p:s:")) != -1) {
switch (c) {
case 'c':
count = strtoull(optarg, NULL, 0);
break;
case 'n':
pref = strdup(optarg);
break;
case 'p':
path = strdup(optarg);
break;
case 's':
size = strtoull(optarg, NULL, 0);
break;
case '?':
printf("unknown argument: %c\n", optind);
case 'h':
exit_usage();
}
}
if (count == 0) {
printf("specify count of xattrs to create with -c\n");
exit(1);
}
if (count == ULLONG_MAX) {
printf("invalid -c count\n");
exit(1);
}
if (size == 0) {
printf("specify xattrs value size with -s\n");
exit(1);
}
if (size == ULLONG_MAX || size < 2) {
printf("invalid -s size\n");
exit(1);
}
if (path == NULL) {
printf("specify path to file with -p\n");
exit(1);
}
if (pref == NULL) {
printf("specify xattr name prefix string with -n\n");
exit(1);
}
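/*
 * snprintf(NULL, 0, ...) returns the formatted length without writing
 * anything, so size the name buffer for the longest possible "-NR"
 * suffix plus the terminating NUL.
 */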
ret = snprintf(NULL, 0, "%s-%llu", pref, ULLONG_MAX) + 1;
name = malloc(ret);
if (!name) {
printf("couldn't allocate xattr name buffer\n");
exit(1);
}
val = malloc(size);
if (!val) {
printf("couldn't allocate xattr value buffer\n");
exit(1);
}
memset(val, 'a', size - 1);
val[size - 1] = '\0';
for (i = 0; i < count; i++) {
sprintf(name, "%s-%llu", pref, i);
ret = setxattr(path, name, val, size, 0);
if (ret) {
printf("returned %d errno %d (%s)\n",
ret, errno, strerror(errno));
return 1;
}
}
return 0;
}

View File

@@ -1,93 +0,0 @@
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#ifndef RENAMEAT2_EXIST
#include <unistd.h>
#include <sys/syscall.h>
#if !defined(SYS_renameat2) && defined(__x86_64__)
#define SYS_renameat2 316 /* from arch/x86/entry/syscalls/syscall_64.tbl */
#endif
static int renameat2(int olddfd, const char *old_dir,
int newdfd, const char *new_dir,
unsigned int flags)
{
#ifdef SYS_renameat2
return syscall(SYS_renameat2, olddfd, old_dir, newdfd, new_dir, flags);
#else
errno = ENOSYS;
return -1;
#endif
}
#endif
#ifndef RENAME_NOREPLACE
#define RENAME_NOREPLACE (1 << 0) /* Don't overwrite newpath of rename */
#endif
#ifndef RENAME_EXCHANGE
#define RENAME_EXCHANGE (1 << 1) /* Exchange oldpath and newpath */
#endif
#ifndef RENAME_WHITEOUT
#define RENAME_WHITEOUT (1 << 2) /* Whiteout oldpath */
#endif
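/*
 * Flag semantics, for reference: RENAME_NOREPLACE makes the rename fail
 * with EEXIST if new_path already exists, RENAME_EXCHANGE atomically
 * swaps the two paths, and RENAME_WHITEOUT leaves a whiteout entry at
 * old_path (used by overlay filesystems).
 */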
static void exit_usage(char **argv)
{
fprintf(stderr,
"usage: %s [-n|-x|-w] old_path new_path\n"
" -n noreplace\n"
" -x exchange\n"
" -w whiteout\n", argv[0]);
exit(1);
}
int main(int argc, char **argv)
{
const char *old_path = NULL;
const char *new_path = NULL;
unsigned int flags = 0;
int ret;
int c;
for (c = 1; c < argc; c++) {
if (argv[c][0] == '-') {
switch (argv[c][1]) {
case 'n':
flags |= RENAME_NOREPLACE;
break;
case 'x':
flags |= RENAME_EXCHANGE;
break;
case 'w':
flags |= RENAME_WHITEOUT;
break;
default:
exit_usage(argv);
}
} else if (!old_path) {
old_path = argv[c];
} else if (!new_path) {
new_path = argv[c];
} else {
exit_usage(argv);
}
}
if (!old_path || !new_path) {
printf("specify the correct directory path\n");
errno = ENOENT;
return 1;
}
ret = renameat2(AT_FDCWD, old_path, AT_FDCWD, new_path, flags);
if (ret == -1) {
perror("Error");
return 1;
}
return 0;
}

View File

@@ -1,189 +0,0 @@
/*
* Copyright (C) 2022 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <inttypes.h>
#include <errno.h>
#include <string.h>
#include <endian.h>
#include <time.h>
#include <linux/types.h>
#include <sys/xattr.h>
#define FILEID_SCOUTFS 0x81
#define FILEID_SCOUTFS_WITH_PARENT 0x82
struct our_handle {
struct file_handle handle;
/*
* scoutfs file handle can be ino or ino/parent. The
* handle_type field of struct file_handle denotes which
* version is in use. We only use the ino variant here.
*/
__le64 scoutfs_ino;
};
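/*
 * Note: the FILEID_SCOUTFS_WITH_PARENT variant would presumably carry a
 * second __le64 with the parent inode number; this test only ever builds
 * the plain FILEID_SCOUTFS handle above.
 */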
#define DEFAULT_NAME "user.handle_fsetxattr"
#define DEFAULT_VALUE "value"
static void exit_usage(void)
{
printf(" -h/-? output this usage message and exit\n"
" -e keep trying on enoent, consider success an error\n"
" -i <num> 64bit inode number for handle open, can be multiple\n"
" -m <string> scoutfs mount path string for ioctl fd\n"
" -n <string> optional xattr name string, defaults to \""DEFAULT_NAME"\"\n"
" -s <num> loop for num seconds, defaults to 0 for one iteration"
" -v <string> optional xattr value string, defaults to \""DEFAULT_VALUE"\"\n");
exit(1);
}
int main(int argc, char **argv)
{
struct our_handle handle;
struct timespec ts;
bool enoent_success_err = false;
uint64_t seconds = 0;
char *value = NULL;
char *name = NULL;
char *mnt = NULL;
int nr_inos = 0;
uint64_t *inos;
uint64_t i;
int *fds;
int mntfd;
int fd;
int ret;
int c;
int j;
/* can't have more inos than args */
inos = calloc(argc, sizeof(inos[0]));
fds = calloc(argc, sizeof(fds[0]));
if (!inos || !fds) {
perror("calloc");
exit(1);
}
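/* mark every handle fd slot as not yet opened */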
for (i = 0; i < argc; i++)
fds[i] = -1;
while ((c = getopt(argc, argv, "+ei:m:n:s:v:")) != -1) {
switch (c) {
case 'e':
enoent_success_err = true;
break;
case 'i':
inos[nr_inos] = strtoll(optarg, NULL, 0);
nr_inos++;
break;
case 'm':
mnt = strdup(optarg);
break;
case 'n':
name = strdup(optarg);
break;
case 's':
seconds = strtoll(optarg, NULL, 0);
break;
case 'v':
value = strdup(optarg);
break;
case '?':
printf("unknown argument: %c\n", optind);
case 'h':
exit_usage();
}
}
if (nr_inos == 0) {
printf("specify non-zero inode number with -i\n");
exit(1);
}
if (!mnt) {
printf("specify scoutfs mount path for ioctl with -p\n");
exit(1);
}
if (name == NULL)
name = DEFAULT_NAME;
if (value == NULL)
value = DEFAULT_VALUE;
mntfd = open(mnt, O_RDONLY);
if (mntfd == -1) {
perror("opening mountpoint");
return 1;
}
clock_gettime(CLOCK_REALTIME, &ts);
seconds += ts.tv_sec;
for (i = 0; ; i++) {
for (j = 0; j < nr_inos; j++) {
fd = fds[j];
if (fd < 0) {
handle.handle.handle_bytes = sizeof(struct our_handle);
handle.handle.handle_type = FILEID_SCOUTFS;
handle.scoutfs_ino = htole64(inos[j]);
fd = open_by_handle_at(mntfd, &handle.handle, O_RDWR);
if (fd == -1) {
if (!enoent_success_err || errno != ENOENT) {
perror("open_by_handle_at");
return 1;
}
continue;
}
fds[j] = fd;
}
ret = fsetxattr(fd, name, value, strlen(value), 0);
if (ret < 0) {
perror("fsetxattr");
return 1;
}
}
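/* only check the clock every tenth pass so the setxattr loop stays hot */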
if ((i % 10) == 0) {
clock_gettime(CLOCK_REALTIME, &ts);
if (ts.tv_sec >= seconds)
break;
}
}
if (enoent_success_err) {
bool able = false;
for (i = 0; i < nr_inos; i++) {
if (fds[i] >= 0) {
printf("was able to open ino %"PRIu64"\n", inos[i]);
able = true;
}
}
if (able)
exit(1);
}
/* not bothering to close or free */
return 0;
}

View File

@@ -1,153 +0,0 @@
/*
* Exercise O_TMPFILE creation as well as staging from tmpfiles into
* a released destination file.
*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <linux/types.h>
#include <assert.h>
#include "ioctl.h"
#define array_size(arr) (sizeof(arr) / sizeof(arr[0]))
/*
* Write known data into 8 tmpfiles.
* Make a new file X and release it
* Move contents of 8 tmpfiles into X.
*/
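/*
 * The data_version sampled after fallocate is passed to both the release
 * and the stage (move_blocks) ioctls, presumably so a concurrent change
 * to the destination file is detected and refused rather than silently
 * mixing old and new data.
 */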
struct sub_tmp_info {
int fd;
unsigned int offset;
unsigned int length;
};
#define SZ 4096
char buf[SZ];
int main(int argc, char **argv)
{
struct scoutfs_ioctl_release rel = {0};
struct scoutfs_ioctl_move_blocks mb;
struct scoutfs_ioctl_stat_more stm;
struct sub_tmp_info sub_tmps[8];
int tot_size = 0;
char *dest_file;
int dest_fd;
char *mnt;
int ret;
int i;
if (argc < 3) {
printf("%s <mountpoint> <dest_file>\n", argv[0]);
return 1;
}
mnt = argv[1];
dest_file = argv[2];
for (i = 0; i < array_size(sub_tmps); i++) {
struct sub_tmp_info *sub_tmp = &sub_tmps[i];
int remaining;
sub_tmp->fd = open(mnt, O_RDWR | O_TMPFILE, S_IRUSR | S_IWUSR);
if (sub_tmp->fd < 0) {
perror("error");
exit(1);
}
sub_tmp->offset = tot_size;
/* First tmp file is 4MB */
/* Each is 4k bigger than last */
sub_tmp->length = (i + 1024) * sizeof(buf);
remaining = sub_tmp->length;
/* Each sub tmpfile written with 'A', 'B', etc. */
memset(buf, 'A' + i, sizeof(buf));
while (remaining) {
int written;
written = write(sub_tmp->fd, buf, sizeof(buf));
assert(written == sizeof(buf));
tot_size += sizeof(buf);
remaining -= written;
}
}
printf("total file size %d\n", tot_size);
dest_fd = open(dest_file, O_RDWR | O_CREAT, S_IRUSR | S_IWUSR);
if (dest_fd == -1) {
perror("error");
exit(1);
}
// make dest file big
ret = posix_fallocate(dest_fd, 0, tot_size);
if (ret) {
perror("error");
exit(1);
}
// get current data_version after fallocate's size extensions
ret = ioctl(dest_fd, SCOUTFS_IOC_STAT_MORE, &stm);
if (ret < 0) {
perror("stat_more ioctl error");
exit(1);
}
// release everything in dest file
rel.offset = 0;
rel.length = tot_size;
rel.data_version = stm.data_version;
ret = ioctl(dest_fd, SCOUTFS_IOC_RELEASE, &rel);
if (ret < 0) {
perror("error");
exit(1);
}
// move contents into dest in reverse order
for (i = array_size(sub_tmps) - 1; i >= 0 ; i--) {
struct sub_tmp_info *sub_tmp = &sub_tmps[i];
mb.from_fd = sub_tmp->fd;
mb.from_off = 0;
mb.len = sub_tmp->length;
mb.to_off = sub_tmp->offset;
mb.data_version = stm.data_version;
mb.flags = SCOUTFS_IOC_MB_STAGE;
ret = ioctl(dest_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
if (ret < 0) {
perror("error");
exit(1);
}
}
return 0;
}

View File

@@ -1,36 +0,0 @@
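# mount_fail expects the mount attempt to fail; if the mount unexpectedly
# succeeds we unmount it and fail the test.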
mount_fail()
{
local mnt=${!#}
echo "mounting $@" >> $T_TMP.mount.out
mount -t scoutfs "$@" >> $T_TMP.mount.out 2>&1
if [ $? == 0 ]; then
umount "$mnt" || t_fail "couldn't unmount"
t_fail "bad mount succeeded"
fi
}
echo "== prepare devices, mount point, and logs"
SCR="/mnt/scoutfs.extra"
mkdir -p "$SCR"
> $T_TMP.mount.out
scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 \
|| t_fail "mkfs failed"
echo "== bad devices, bad options"
mount_fail -o _bad /dev/null /dev/null "$SCR"
echo "== swapped devices"
mount_fail -o metadev_path=$T_EX_DATA_DEV,quorum_slot_nr=0 "$T_EX_META_DEV" "$SCR"
echo "== both meta devices"
mount_fail -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$T_EX_META_DEV" "$SCR"
echo "== both data devices"
mount_fail -o metadev_path=$T_EX_DATA_DEV,quorum_slot_nr=0 "$T_EX_DATA_DEV" "$SCR"
echo "== good volume, bad option and good options"
mount_fail -o _bad,metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$T_EX_DATA_DEV" "$SCR"
t_pass

View File

@@ -197,13 +197,4 @@ scoutfs walk-inodes -p "$T_M0" -- data_seq 0 -1 > "$T_TMP.0"
scoutfs walk-inodes -p "$T_M1" -- data_seq 0 -1 > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== concurrent creates make one file"
mkdir "$T_D0/concurrent"
for i in $(t_fs_nrs); do
eval p="\$T_D${i}/concurrent/one-file"
touch "$p" 2>&1 > "$T_TMP.multi-create.$i" &
done
wait
ls "$T_D0/concurrent"
t_pass

View File

@@ -5,18 +5,57 @@
# persistent blocks to create stable block reading scenarios. Instead
# we use triggers to exercise how readers encounter stale blocks.
#
# Trigger retries in the block cache by calling scoutfs df
# which in turn will call scoutfs_ioctl_alloc_detail. This
# is guaranteed to exist, which will force block cache reads.
echo "== Issue scoutfs df to force block reads to trigger stale invalidation/retry"
nr=0
t_require_commands touch setfattr getfattr
old=$(t_counter block_cache_remove_stale $nr)
t_trigger_arm_silent block_remove_stale $nr
inc_wrap_fs_nr()
{
local nr="$(($1 + 1))"
scoutfs df -p "$T_M0" > /dev/null
if [ "$nr" == "$T_NR_MOUNTS" ]; then
nr=0
fi
t_counter_diff_changed block_cache_remove_stale $old $nr
echo $nr
}
GETFATTR="getfattr --absolute-names"
SETFATTR="setfattr"
echo "== create shared test file"
touch "$T_D0/file"
$SETFATTR -n user.xat -v 0 "$T_D0/file"
#
# Trigger retries in the block cache as we bounce xattr values around
# between sequential pairs of mounts. This is a little silly because if
# either of the mounts are the server then they'll almost certainly have
# their trigger fired prematurely by message handling btree calls while
# working with the t_ helpers long before we work with the xattrs. But
# the block cache stale retry path is still being exercised.
#
echo "== set and get xattrs between mount pairs while retrying"
set_nr=0
get_nr=$(inc_wrap_fs_nr $set_nr)
for i in $(seq 1 10); do
eval set_file="\$T_D${set_nr}/file"
eval get_file="\$T_D${get_nr}/file"
old_set=$(t_counter block_cache_remove_stale $set_nr)
old_get=$(t_counter block_cache_remove_stale $get_nr)
t_trigger_arm_silent block_remove_stale $set_nr
t_trigger_arm_silent block_remove_stale $get_nr
$SETFATTR -n user.xat -v $i "$set_file"
$GETFATTR -n user.xat "$get_file" 2>&1 | t_filter_fs
t_counter_diff_changed block_cache_remove_stale $old_set $set_nr
t_counter_diff_changed block_cache_remove_stale $old_get $get_nr
set_nr="$get_nr"
get_nr=$(inc_wrap_fs_nr $set_nr)
done
t_pass

View File

@@ -1,61 +0,0 @@
#
# Unmount Server and unmount a client as it's replaying to a remaining server
#
majority_nr=$(t_majority_count)
quorum_nr=$T_QUORUM
test "$quorum_nr" == "$majority_nr" && \
t_skip "all quorum members make up majority, need more mounts to unmount"
test "$T_NR_MOUNTS" -lt "$T_QUORUM" && \
t_skip "Need enough non-quorum clients to unmount"
for i in $(t_fs_nrs); do
mounted[$i]=1
done
LENGTH=60
echo "== ${LENGTH}s of unmounting non-quorum clients during recovery"
END=$((SECONDS + LENGTH))
while [ "$SECONDS" -lt "$END" ]; do
sv=$(t_server_nr)
rid=$(t_mount_rid $sv)
echo "sv $sv rid $rid" >> "$T_TMP.log"
sync
t_umount $sv &
for i in $(t_fs_nrs); do
if [ "$i" -ge "$quorum_nr" ]; then
t_umount $i &
echo "umount $i pid $pid quo $quorum_nr" \
>> $T_TMP.log
mounted[$i]=0
fi
done
wait
t_mount $sv &
for i in $(t_fs_nrs); do
if [ "${mounted[$i]}" == 0 ]; then
t_mount $i &
fi
done
wait
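# every rid that still holds server locks should belong to a mounted
# client; a leftover rid here would suggest recovery left stale lock
# state behind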
declare RID_LIST=$(cat /sys/fs/scoutfs/*/rid | sort -u)
read -a rid_arr <<< $RID_LIST
declare LOCK_LIST=$(cut -d' ' -f 5 /sys/kernel/debug/scoutfs/*/server_locks | sort -u)
read -a lock_arr <<< $LOCK_LIST
for i in "${lock_arr[@]}"; do
if [[ ! " ${rid_arr[*]} " =~ " $i " ]]; then
t_fail "RID($i): exists when not mounted"
fi
done
done
t_pass

View File

@@ -1,100 +0,0 @@
#
# test hitting enospc by filling with data or metadata and
# then recovering by removing what we filled.
#
# Type Size Total Used Free Use%
#MetaData 64KB 1048576 32782 1015794 3
# Data 4KB 16777152 0 16777152 0
free_blocks() {
local md="$1"
local mnt="$2"
scoutfs df -p "$mnt" | awk '($1 == "'$md'") { print $5; exit }'
}
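# e.g. "free_blocks Data $T_M0" prints the Free column of the Data row
# from scoutfs df, matching the sample output above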
t_require_commands scoutfs stat fallocate createmany
echo "== prepare directories and files"
for n in $(t_fs_nrs); do
eval path="\$T_D${n}/dir-$n/file-$n"
mkdir -p $(dirname $path)
touch $path
done
sync
echo "== fallocate until enospc"
before=$(free_blocks Data "$T_M0")
finished=0
while [ $finished != 1 ]; do
for n in $(t_fs_nrs); do
eval path="\$T_D${n}/dir-$n/file-$n"
off=$(stat -c "%s" "$path")
LC_ALL=C fallocate -o $off -l 128MiB "$path" > $T_TMP.fallocate 2>&1
err="$?"
if grep -qi "no space" $T_TMP.fallocate; then
finished=1
break
fi
if [ "$err" != "0" ]; then
t_fail "fallocate failed with $err"
fi
done
done
echo "== remove all the files and verify free data blocks"
for n in $(t_fs_nrs); do
eval dir="\$T_D${n}/dir-$n"
rm -rf "$dir"
done
sync
after=$(free_blocks Data "$T_M0")
# nothing else should be modifying data blocks
test "$before" == "$after" || \
t_fail "$after free data blocks after rm, expected $before"
# XXX this is all pretty manual, would be nice to have helpers
echo "== make small meta fs"
# meta device just big enough for reserves and the metadata we'll fill
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m 10G "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
t_fail "mkfs failed"
SCR="/mnt/scoutfs.enospc"
mkdir -p "$SCR"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"
echo "== create large xattrs until we fill up metadata"
mkdir -p "$SCR/xattrs"
for f in $(seq 1 100000); do
file="$SCR/xattrs/file-$f"
touch "$file"
LC_ALL=C create_xattr_loop -c 1000 -n user.scoutfs-enospc -p "$file" -s 65535 > $T_TMP.cxl 2>&1
err="$?"
if grep -qi "no space" $T_TMP.cxl; then
echo "enospc at f $f" >> $T_TMP.cxl
break
fi
if [ "$err" != "0" ]; then
t_fail "create_xattr_loop failed with $err"
fi
done
echo "== remove files with xattrs after enospc"
rm -rf "$SCR/xattrs"
echo "== make sure we can create again"
file="$SCR/file-after"
touch $file
setfattr -n user.scoutfs-enospc -v 1 "$file"
sync
rm -f "$file"
echo "== cleanup small meta fs"
umount "$SCR"
rmdir "$SCR"
t_pass

View File

@@ -1,32 +0,0 @@
#
# test racing fh_to_dentry with evict from lock invalidation. We've
# had deadlocks between the ordering of iget and evict when they acquire
# cluster locks.
#
t_require_commands touch stat handle_cat
t_require_mounts 2
CPUS=$(getconf _NPROCESSORS_ONLN)
NR=$((CPUS * 4))
END=$((SECONDS + 30))
touch "$T_D0/file"
ino=$(stat -c "%i" "$T_D0/file")
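# roughly half the workers touch the file through the namespace (taking
# cluster locks whose invalidation can evict the inode) while the rest
# open it by handle, racing fh_to_dentry's iget against eviction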
while test $SECONDS -lt $END; do
for i in $(seq 1 $NR); do
fs=$((RANDOM % T_NR_MOUNTS))
eval dir="\$T_D${fs}"
write=$((RANDOM & 1))
if [ "$write" == 1 ]; then
touch "$dir/file" &
else
handle_cat "$dir" "$ino" &
fi
done
wait
done
t_pass

Some files were not shown because too many files have changed in this diff.