Compare commits

..

2 Commits

Author SHA1 Message Date
Benjamin LaHaise
e064c439ff scoutfs: mmap: add support for writable shared mmap()ings
Add support for writable MAP_SHARED mmap()ings.  Avoid issues with late
writepage()s building transactions by doing the block_write_begin() work in
scoutfs_data_page_mkwrite().  Ensure the page is marked dirty and prepared
for write, then let the VM complete the write when the page is flushed or
invalidated.

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
2021-03-31 10:43:47 -07:00
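Below is a minimal sketch of the approach described in the commit message above, not the scoutfs code itself: it uses only generic kernel helpers with the RHEL 7-era page_mkwrite prototype, and example_get_block / example_data_page_mkwrite are made-up names for illustration.

```c
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/buffer_head.h>

/* placeholder block-mapping callback; a real filesystem maps iblock here */
static int example_get_block(struct inode *inode, sector_t iblock,
			     struct buffer_head *bh_result, int create)
{
	return -EIO;
}

/*
 * Do the block_write_begin() style preparation at fault time so a later
 * writepage() never has to build a transaction; leave the page locked,
 * dirty, and ready for the VM to write back later.
 */
static int example_data_page_mkwrite(struct vm_area_struct *vma,
				     struct vm_fault *vmf)
{
	struct page *page = vmf->page;
	struct inode *inode = file_inode(vma->vm_file);
	loff_t size;
	unsigned int len;
	int ret;

	sb_start_pagefault(inode->i_sb);
	file_update_time(vma->vm_file);

	lock_page(page);
	size = i_size_read(inode);
	if (page->mapping != inode->i_mapping || page_offset(page) >= size) {
		/* raced with truncate; let the fault be retried */
		unlock_page(page);
		ret = VM_FAULT_NOPAGE;
		goto out;
	}

	/* only prepare the part of the page that lies inside i_size */
	if (((loff_t)page->index + 1) * PAGE_SIZE > size)
		len = size & (PAGE_SIZE - 1);
	else
		len = PAGE_SIZE;

	/* the block_write_begin() work, done while we can still allocate */
	ret = __block_write_begin(page, 0, len, example_get_block);
	if (ret) {
		unlock_page(page);
		ret = (ret == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
		goto out;
	}

	block_commit_write(page, 0, len);
	set_page_dirty(page);
	wait_for_stable_page(page);
	ret = VM_FAULT_LOCKED;	/* page stays locked and dirty for writeback */
out:
	sb_end_pagefault(inode->i_sb);
	return ret;
}
```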
Benjamin LaHaise
91e68b1f83 mmap: add support for read only mmap()
Add support for read only mmap().

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
2021-03-31 10:42:37 -07:00
261 changed files with 10281 additions and 42606 deletions

.gitignore vendored
View File

@@ -1,17 +0,0 @@
#
# Typically development is done in each subdir, but we have a tiny
# makefile here to make it easy to run simple targets across all the
# subdirs.
#
SUBDIRS := kmod utils tests
NOTTESTS := kmod utils
all clean: $(SUBDIRS) FORCE
dist: $(NOTTESTS) FORCE
$(SUBDIRS): FORCE
$(MAKE) -C $@ $(MAKECMDGOALS)
all:
FORCE:

View File

@@ -1,24 +0,0 @@
# Introduction
scoutfs is a clustered in-kernel Linux filesystem designed to support
large archival systems. It features additional interfaces and metadata
so that archive agents can perform their maintenance workflows without
walking all the files in the namespace. Its cluster support lets
deployments add nodes to satisfy archival tier bandwidth targets.
The design goal is to reach file populations in the trillions, with the
archival bandwidth to match, while remaining operational and responsive.
Highlights of the design and implementation include:
* Fully consistent POSIX semantics between nodes
* Atomic transactions to maintain consistent persistent structures
* Integrated archival metadata replaces syncing to external databases
* Dynamic separation of resources lets nodes write in parallel
* 64-bit throughout; no limits on file or directory sizes or counts
* Open GPLv2 implementation
# Community Mailing List
Please join us on the open scoutfs-devel@scoutfs.org [mailing list
hosted on Google Groups](https://groups.google.com/a/scoutfs.org/forum/#!forum/scoutfs-devel)

View File

@@ -1,97 +0,0 @@
Versity ScoutFS Release Notes
=============================
---
v1.4
\
*May 6, 2022*
* **Fix possible client crash during server failover**
\
Fixed a narrow window during server failover and lock recovery that
could cause a client mount to believe that it had an inconsistent item
cache and panic. This required very specific lock state and messaging
patterns between multiple mounts and multiple servers which made it
unlikely to occur in the field.
---
v1.3
\
*Apr 7, 2022*
* **Fix rare server instability under heavy load**
\
Fixed a case of server instability under heavy load due to concurrent
work fully exhausting metadata block allocation pools reserved for a
single server transaction. This would cause brief interruption as the
server shut down and the next server started up and made progress as
pending work was retried.
* **Fix slow fencing preventing server startup**
\
If a server had to process many fence requests with a slow fencing
mechanism it could be interrupted before it finished. The server
now makes sure heartbeat messages are sent while it is making progress
on fencing requests so that other quorum members don't interrupt the
process.
* **Performance improvement in getxattr and setxattr**
\
Kernel allocation patterns in the getxattr and setxattr
implementations were causing significant contention between CPUs. Their
allocation strategy was changed so that concurrent tasks can call these
xattr methods without degrading performance.
---
v1.2
\
*Mar 14, 2022*
* **Fix deadlock between fallocate() and read() system calls**
\
Fixed a lock inversion that could cause two tasks to deadlock if they
performed fallocate() and read() on a file at the same time. The
deadlock was uninterruptible so the machine needed to be rebooted. This
was relatively rare as fallocate() is usually used to prepare files
before they're used.
* **Fix instability from heavy file deletion workloads**
\
Fixed rare circumstances under which background file deletion cleanup
tasks could try to delete a file while it is being deleted by another
task. Heavy load across multiple nodes, either many files being deleted
or large files being deleted, increased the chances of this happening.
Heavy staging could cause this problem because staging can create many
internal temporary files that need to be deleted.
---
v1.1
\
*Feb 4, 2022*
* **Add scoutfs(1) change-quorum-config command**
\
Add a change-quorum-config command to scoutfs(1) to change the quorum
configuration stored in the metadata device while the file system is
unmounted. This can be used to change the mounts that will
participate in quorum and the IP addresses they use.
* **Fix Rare Risk of Item Cache Corruption**
\
Code review found a rare potential source of item cache corruption.
If this happened it would look as though deleted parts of the filesystem
returned, but only at the time they were deleted. Old deleted items are
not affected. This problem only affected the item cache, never
persistent storage. Unmounting and remounting would drop the bad item
cache and resync it with the correct persistent data.
---
v1.0
\
*Nov 8, 2021*
* **Initial Release**
\
Version 1.0 marks the first GA release.

View File

@@ -16,7 +16,11 @@ SCOUTFS_GIT_DESCRIBE := \
$(shell git describe --all --abbrev=6 --long 2>/dev/null || \
echo no-git)
SCOUTFS_FORMAT_HASH := \
$(shell cat src/format.h src/ioctl.h | md5sum | cut -b1-16)
SCOUTFS_ARGS := SCOUTFS_GIT_DESCRIBE=$(SCOUTFS_GIT_DESCRIBE) \
SCOUTFS_FORMAT_HASH=$(SCOUTFS_FORMAT_HASH) \
CONFIG_SCOUTFS_FS=m -C $(SK_KSRC) M=$(CURDIR)/src \
EXTRA_CFLAGS="-Werror"
@@ -47,7 +51,7 @@ modules_install:
dist: scoutfs-kmod.spec
git archive --format=tar --prefix scoutfs-kmod-$(RPM_VERSION)/ HEAD^{tree} > $(TARFILE)
@ tar rf $(TARFILE) --transform="s@\(.*\)@scoutfs-kmod-$(RPM_VERSION)/\1@" scoutfs-kmod.spec
@ tar rf $(TARFILE) --transform="s@\(.*\)@scoutfs-$(RPM_VERSION)/\1@" scoutfs-kmod.spec
clean:
make $(SCOUTFS_ARGS) clean

kmod/README.md Normal file
View File

@@ -0,0 +1,133 @@
# Introduction
scoutfs is a clustered in-kernel Linux filesystem designed and built
from the ground up to support large archival systems.
Its key differentiating features are:
- Integrated consistent indexing accelerates archival maintenance operations
- Log-structured commits allow nodes to write concurrently without contention
It meets best-of-breed expectations:
* Fully consistent POSIX semantics between nodes
* Rich metadata to ensure the integrity of metadata references
* Atomic transactions to maintain consistent persistent structures
* First class kernel implementation for high performance and low latency
* Open GPLv2 implementation
Learn more in the [white paper](https://docs.wixstatic.com/ugd/aaa89b_88a5cc84be0b4d1a90f60d8900834d28.pdf).
# Current Status
**Alpha Open Source Development**
scoutfs is under heavy active development. We're developing it in the
open to give the community an opportunity to affect the design and
implementation.
The core architectural design elements are in place. Much surrounding
functionality hasn't been implemented. It's appropriate for early
adopters and interested developers, not for production use.
In that vein, expect significant incompatible changes to both the format
of network messages and persistent structures. To avoid mistakes the
implementation currently calculates a hash of the format and ioctl
header files in the source tree. The kernel module will refuse to mount
a volume created by userspace utilities with a mismatched hash, and it
will refuse to connect to a remote node with a mismatched hash. This
means having to unmount, mkfs, and remount everything across many
functional changes. Once the format is nailed down we'll wire up
forward and back compat machinery and remove this temporary safety
measure.
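For orientation, here is a hedged sketch of what that mount-time check can look like, using names suggested by the Makefile and greeting code elsewhere in this diff; SCOUTFS_FORMAT_HASH is injected by the build, and super->format_hash plus the function itself are assumptions, not the verbatim scoutfs source.

```c
/* refuse to mount a volume whose mkfs-time format hash doesn't match ours */
static int example_check_format_hash(struct super_block *sb,
				     struct scoutfs_super_block *super)
{
	u64 ours = SCOUTFS_FORMAT_HASH;	/* from -DSCOUTFS_FORMAT_HASH=0x...LLU */
	u64 theirs = le64_to_cpu(super->format_hash);

	if (ours != theirs) {
		printk(KERN_ERR "scoutfs: format hash mismatch: module 0x%016llx, volume 0x%016llx\n",
		       ours, theirs);
		return -EINVAL;
	}

	return 0;
}
```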
The current kernel module is developed against the RHEL/CentOS 7.x
kernel to minimize the friction of developing and testing with partners'
existing infrastructure. Once we're happy with the design we'll shift
development to the upstream kernel while maintaining distro
compatibility branches.
# Community Mailing List
Please join us on the open scoutfs-devel@scoutfs.org [mailing list
hosted on Google Groups](https://groups.google.com/a/scoutfs.org/forum/#!forum/scoutfs-devel)
for all discussion of scoutfs.
# Quick Start
**The following is a very rough example of the procedure to get up and
running; experience will be needed to fill in the gaps. We're happy to
help on the mailing list.**
The requirements for running scoutfs on a small cluster are:
1. One or more nodes running x86-64 CentOS/RHEL 7.4 (or 7.3)
2. Access to a single shared block device
3. IPv4 connectivity between the nodes
The steps for getting scoutfs mounted and operational are:
1. Get the kernel module running on the nodes
2. Make a new filesystem on the device with the userspace utilities
3. Mount the device on all the nodes
In this example we run all of these commands on three nodes. The block
device name is the same on all the nodes.
1. Get the Kernel Module and Userspace Binaries
* Either use snapshot RPMs built from git by Versity:
```shell
rpm -i https://scoutfs.s3-us-west-2.amazonaws.com/scoutfs-repo-0.0.1-1.el7_4.noarch.rpm
yum install scoutfs-utils kmod-scoutfs
```
* Or use the binaries built from checked out git repositories:
```shell
yum install kernel-devel
git clone git@github.com:versity/scoutfs-kmod-dev.git
make -C scoutfs-kmod-dev module
modprobe libcrc32c
insmod scoutfs-kmod-dev/src/scoutfs.ko
git clone git@github.com:versity/scoutfs-utils-dev.git
make -C scoutfs-utils-dev
alias scoutfs=$PWD/scoutfs-utils-dev/src/scoutfs
```
2. Make a New Filesystem (**destroys contents, no questions asked**)
We specify that two of our three nodes must be present to form a
quorum for the system to function.
```shell
scoutfs mkfs -Q 2 /dev/shared_block_device
```
3. Mount the Filesystem
Each mounting node provides its local IP address on which it will run
an internal server for the other mounts if it is elected the leader by
the quorum.
```shell
mkdir /mnt/scoutfs
mount -t scoutfs -o server_addr=$NODE_ADDR /dev/shared_block_device /mnt/scoutfs
```
4. For Kicks, Observe the Metadata Change Index
The `meta_seq` index tracks the inodes that are changed in each
transaction.
```shell
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/two; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
```

View File

@@ -1,6 +1,7 @@
obj-$(CONFIG_SCOUTFS_FS) := scoutfs.o
CFLAGS_super.o = -DSCOUTFS_GIT_DESCRIBE=\"$(SCOUTFS_GIT_DESCRIBE)\"
CFLAGS_super.o = -DSCOUTFS_GIT_DESCRIBE=\"$(SCOUTFS_GIT_DESCRIBE)\" \
-DSCOUTFS_FORMAT_HASH=0x$(SCOUTFS_FORMAT_HASH)LLU
CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
@@ -8,8 +9,6 @@ CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
-include $(src)/Makefile.kernelcompat
scoutfs-y += \
avl.o \
alloc.o \
block.o \
btree.o \
client.o \
@@ -17,33 +16,26 @@ scoutfs-y += \
data.o \
dir.o \
export.o \
ext.o \
fence.o \
file.o \
forest.o \
inode.o \
ioctl.o \
item.o \
lock.o \
lock_server.o \
msg.o \
net.o \
omap.o \
options.o \
per_task.o \
quorum.o \
recov.o \
radix.o \
scoutfs_trace.o \
server.o \
sort_priv.o \
spbm.o \
srch.o \
super.o \
sysfs.o \
trans.o \
triggers.o \
tseq.o \
volopt.o \
xattr.o
#
@@ -58,9 +50,5 @@ $(src)/check_exported_types:
echo "no raw types in exported headers, preface with __"; \
exit 1; \
fi
@if egrep '\<__packed\>' $(src)/format.h $(src)/ioctl.h; then \
echo "no __packed allowed in exported headers"; \
exit 1; \
fi
extra-y += check_exported_types

File diff suppressed because it is too large

View File

@@ -1,178 +0,0 @@
#ifndef _SCOUTFS_ALLOC_H_
#define _SCOUTFS_ALLOC_H_
#include "ext.h"
/*
* These are implementation-specific metrics, they don't need to be
* consistent across implementations. They should probably be run-time
* knobs.
*/
/*
* The largest extent that we'll try to allocate with fallocate. We're
trying not to completely consume a transaction's data allocation all
at once. This is only the allocation granularity; repeated allocations
* can produce large contiguous extents.
*/
#define SCOUTFS_FALLOCATE_ALLOC_LIMIT \
(128ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* The largest aligned region that we'll try to allocate at the end of
* the file as it's extended. This is also limited to the current file
* size so we can only waste at most twice the total file size when
* files are less than this. We try to keep this around the point of
* diminishing returns in streaming performance of common data devices
* to limit waste.
*/
#define SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT \
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* Small data allocations are satisfied by cached extents stored in
* the run-time alloc struct to minimize item operations for small
* block allocations. Large allocations come directly from btree
extent items, and this defines the threshold between them.
*/
#define SCOUTFS_ALLOC_DATA_LG_THRESH \
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/* the client will force commits if data allocators get too low */
#define SCOUTFS_ALLOC_DATA_REFILL_THRESH \
((256ULL * 1024 * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
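A quick worked example of these byte-to-block conversions, assuming SCOUTFS_BLOCK_SM_SHIFT is 12 (4 KiB small blocks); that shift value is an assumption here, it isn't shown in this diff.

```c
/* illustrative constants only, assuming 4 KiB small blocks (shift of 12) */
enum {
	EXAMPLE_SM_SHIFT        = 12,
	EXAMPLE_FALLOC_LIMIT    = (128 * 1024 * 1024) >> EXAMPLE_SM_SHIFT, /* 32768 blocks */
	EXAMPLE_EXTEND_PREALLOC = (8 * 1024 * 1024)   >> EXAMPLE_SM_SHIFT, /*  2048 blocks */
	EXAMPLE_DATA_REFILL     = (256 * 1024 * 1024) >> EXAMPLE_SM_SHIFT, /* 65536 blocks */
};
```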
/*
* Fill client alloc roots to the target when they fall below the lo
* threshold.
*
* We're giving the client the most available meta blocks we can so that
* it has the freedom to build large transactions before worrying that
* it might run out of meta allocs during commits.
*/
#define SCOUTFS_SERVER_META_FILL_TARGET \
SCOUTFS_ALLOC_LIST_MAX_BLOCKS
#define SCOUTFS_SERVER_META_FILL_LO \
(SCOUTFS_ALLOC_LIST_MAX_BLOCKS / 2)
#define SCOUTFS_SERVER_DATA_FILL_TARGET \
(4ULL * 1024 * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
#define SCOUTFS_SERVER_DATA_FILL_LO \
(1ULL * 1024 * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* Log merge meta allocations are only used for one request and will
* never use more than the dirty limit.
*/
#define SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT (64ULL * 1024 * 1024)
/* a few extra blocks for alloc blocks */
#define SCOUTFS_SERVER_MERGE_FILL_TARGET \
((SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT >> SCOUTFS_BLOCK_LG_SHIFT) + 4)
#define SCOUTFS_SERVER_MERGE_FILL_LO SCOUTFS_SERVER_MERGE_FILL_TARGET
/*
* A run-time use of a pair of persistent avail/freed roots as a
* metadata allocator. It has the machinery needed to lock and avoid
* recursion when dirtying the list blocks that are used during the
* transaction.
*/
struct scoutfs_alloc {
/* writers rarely modify list_head avail/freed. readers often check for _meta_alloc_low */
seqlock_t seqlock;
struct mutex mutex;
struct scoutfs_block *dirty_avail_bl;
struct scoutfs_block *dirty_freed_bl;
struct scoutfs_alloc_list_head avail;
struct scoutfs_alloc_list_head freed;
};
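A hedged sketch of the read side that comment describes: sample allocator state under the seqlock instead of taking the mutex. The avail.total_nr field name is an assumption for the example, not taken from this diff.

```c
/* illustrative only: lockless sampling of remaining meta blocks */
static u64 example_avail_meta_blocks(struct scoutfs_alloc *alloc)
{
	unsigned int seq;
	u64 nr;

	do {
		seq = read_seqbegin(&alloc->seqlock);
		nr = le64_to_cpu(alloc->avail.total_nr); /* assumed field */
	} while (read_seqretry(&alloc->seqlock, seq));

	return nr;
}
```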
/*
* A run-time data allocator. We have a cached extent in memory that is
* a lot cheaper to work with than the extent items, and we have a
* consistent record of the total_len that can be sampled outside of the
* usual heavy serialization of the extent modifications.
*/
struct scoutfs_data_alloc {
struct scoutfs_alloc_root root;
struct scoutfs_extent cached;
atomic64_t total_len;
};
void scoutfs_alloc_init(struct scoutfs_alloc *alloc,
struct scoutfs_alloc_list_head *avail,
struct scoutfs_alloc_list_head *freed);
int scoutfs_alloc_prepare_commit(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri);
int scoutfs_alloc_meta(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 *blkno);
int scoutfs_free_meta(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, u64 blkno);
void scoutfs_dalloc_init(struct scoutfs_data_alloc *dalloc,
struct scoutfs_alloc_root *data_avail);
void scoutfs_dalloc_get_root(struct scoutfs_data_alloc *dalloc,
struct scoutfs_alloc_root *data_avail);
u64 scoutfs_dalloc_total_len(struct scoutfs_data_alloc *dalloc);
int scoutfs_dalloc_return_cached(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_data_alloc *dalloc);
int scoutfs_alloc_data(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_data_alloc *dalloc, u64 count,
u64 *blkno_ret, u64 *count_ret);
int scoutfs_free_data(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *root, u64 blkno, u64 count);
int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
int scoutfs_alloc_remove(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
int scoutfs_alloc_fill_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_list_head *lhead,
struct scoutfs_alloc_root *root,
u64 lo, u64 target);
int scoutfs_alloc_empty_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *root,
struct scoutfs_alloc_list_head *lhead);
int scoutfs_alloc_splice_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_list_head *dst,
struct scoutfs_alloc_list_head *src);
bool scoutfs_alloc_meta_low(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 nr);
void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total, u32 *freed_space);
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag);
typedef int (*scoutfs_alloc_foreach_cb_t)(struct super_block *sb, void *arg,
int owner, u64 id,
bool meta, bool avail, u64 blocks);
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg);
int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_block *super,
scoutfs_alloc_foreach_cb_t cb, void *arg);
typedef void (*scoutfs_alloc_extent_cb_t)(struct super_block *sb, void *cb_arg,
struct scoutfs_extent *ext);
int scoutfs_alloc_extents_cb(struct super_block *sb, struct scoutfs_alloc_root *root,
scoutfs_alloc_extent_cb_t cb, void *cb_arg);
#endif

View File

@@ -1,405 +0,0 @@
/*
* Copyright (C) 2020 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/string.h>
#include "format.h"
#include "avl.h"
/*
* We use a simple avl to index items in btree blocks. The interface
* looks a bit like the kernel rbtree interface in that the caller
* manages locking and storage for the nodes. Node references are
* stored as byte offsets from the root so that the implementation
* doesn't have to know anything about the caller's container.
*
* We store the full height in each node, rather than just 2 bits for
* the balance, so that we can use the extra redundancy to verify the
* integrity of the tree.
*/
static struct scoutfs_avl_node *node_ptr(struct scoutfs_avl_root *root,
__le16 off)
{
return off ? (void *)root + le16_to_cpu(off) : NULL;
}
static __le16 node_off(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
return node ? cpu_to_le16((void *)node - (void *)root) : 0;
}
static __u8 node_height(struct scoutfs_avl_node *node)
{
return node ? node->height : 0;
}
struct scoutfs_avl_node *
scoutfs_avl_search(struct scoutfs_avl_root *root,
scoutfs_avl_compare_t compare, void *arg, int *cmp_ret,
struct scoutfs_avl_node **par,
struct scoutfs_avl_node **next,
struct scoutfs_avl_node **prev)
{
struct scoutfs_avl_node *node = node_ptr(root, root->node);
int cmp;
if (cmp_ret)
*cmp_ret = -1;
if (par)
*par = NULL;
if (next)
*next = NULL;
if (prev)
*prev = NULL;
while (node) {
cmp = compare(arg, node);
if (par)
*par = node;
if (cmp_ret)
*cmp_ret = cmp;
if (cmp < 0) {
if (next)
*next = node;
node = node_ptr(root, node->left);
} else if (cmp > 0) {
if (prev)
*prev = node;
node = node_ptr(root, node->right);
} else {
return node;
}
}
return NULL;
}
struct scoutfs_avl_node *scoutfs_avl_first(struct scoutfs_avl_root *root)
{
struct scoutfs_avl_node *node = node_ptr(root, root->node);
while (node && node->left)
node = node_ptr(root, node->left);
return node;
}
struct scoutfs_avl_node *scoutfs_avl_last(struct scoutfs_avl_root *root)
{
struct scoutfs_avl_node *node = node_ptr(root, root->node);
while (node && node->right)
node = node_ptr(root, node->right);
return node;
}
struct scoutfs_avl_node *scoutfs_avl_next(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
struct scoutfs_avl_node *parent;
if (node->right) {
node = node_ptr(root, node->right);
while (node->left)
node = node_ptr(root, node->left);
return node;
}
while ((parent = node_ptr(root, node->parent)) &&
node == node_ptr(root, parent->right))
node = parent;
return parent;
}
struct scoutfs_avl_node *scoutfs_avl_prev(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
struct scoutfs_avl_node *parent;
if (node->left) {
node = node_ptr(root, node->left);
while (node->right)
node = node_ptr(root, node->right);
return node;
}
while ((parent = node_ptr(root, node->parent)) &&
node == node_ptr(root, parent->left))
node = parent;
return parent;
}
static void set_parent_left_right(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *parent,
struct scoutfs_avl_node *old,
struct scoutfs_avl_node *new)
{
__le16 *off;
if (parent == NULL)
off = &root->node;
else if (parent->left == node_off(root, old))
off = &parent->left;
else
off = &parent->right;
*off = node_off(root, new);
}
static void set_height(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
struct scoutfs_avl_node *left = node_ptr(root, node->left);
struct scoutfs_avl_node *right = node_ptr(root, node->right);
node->height = 1 + max(node_height(left), node_height(right));
}
static int node_balance(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
if (node == NULL)
return 0;
return (int)node_height(node_ptr(root, node->right)) -
(int)node_height(node_ptr(root, node->left));
}
/*
*      d                         b
*     / \      rotate right ->  / \
*    b   e                     a   d
*   / \        <- rotate left     / \
*  a   c                         c   e
*
* The rotate functions are always called with the higher node as the
* earlier argument. Links to a and e are constant. We have to update
* the forward and back refs between parents and nodes for the three links
* along root->[db]->[bd]->c.
*/
static void rotate_right(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *d)
{
struct scoutfs_avl_node *gpa = node_ptr(root, d->parent);
struct scoutfs_avl_node *b = node_ptr(root, d->left);
struct scoutfs_avl_node *c = node_ptr(root, b->right);
set_parent_left_right(root, gpa, d, b);
b->parent = node_off(root, gpa);
b->right = node_off(root, d);
d->parent = node_off(root, b);
d->left = node_off(root, c);
if (c)
c->parent = node_off(root, d);
set_height(root, d);
set_height(root, b);
}
static void rotate_left(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *b)
{
struct scoutfs_avl_node *gpa = node_ptr(root, b->parent);
struct scoutfs_avl_node *d = node_ptr(root, b->right);
struct scoutfs_avl_node *c = node_ptr(root, d->left);
set_parent_left_right(root, gpa, b, d);
d->parent = node_off(root, gpa);
d->left = node_off(root, b);
b->parent = node_off(root, d);
b->right = node_off(root, c);
if (c)
c->parent = node_off(root, b);
set_height(root, b);
set_height(root, d);
}
/*
* Check the balance factor for the given node and perform rotations if
* its two child subtrees are too far out of balance. Return either the
* node again or the root of the newly balanced subtree.
*/
static struct scoutfs_avl_node *
rotate_imbalance(struct scoutfs_avl_root *root, struct scoutfs_avl_node *node)
{
int bal = node_balance(root, node);
struct scoutfs_avl_node *child;
if (bal >= -1 && bal <= 1)
return node;
if (bal > 0) {
/* turn right-left case into right-right */
child = node_ptr(root, node->right);
if (node_balance(root, child) < 0)
rotate_right(root, child);
/* rotate left to address right-right */
rotate_left(root, node);
} else {
/* or do the mirror for the left- cases */
child = node_ptr(root, node->left);
if (node_balance(root, child) > 0)
rotate_left(root, child);
rotate_right(root, node);
}
return node_ptr(root, node->parent);
}
void scoutfs_avl_insert(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *parent,
struct scoutfs_avl_node *node, int cmp)
{
node->parent = 0;
node->left = 0;
node->right = 0;
set_height(root, node);
memset(node->__pad, 0, sizeof(node->__pad));
if (parent == NULL) {
root->node = node_off(root, node);
node->parent = 0;
return;
}
if (cmp < 0)
parent->left = node_off(root, node);
else
parent->right = node_off(root, node);
node->parent = node_off(root, parent);
while (parent) {
set_height(root, parent);
parent = rotate_imbalance(root, parent);
parent = node_ptr(root, parent->parent);
}
}
static struct scoutfs_avl_node *avl_successor(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
node = node_ptr(root, node->right);
while (node->left)
node = node_ptr(root, node->left);
return node;
}
/*
Find a node's successor and then swap the positions of the two
* nodes with each other in the tree. This is only tricky because the
* successor can be a direct child of the node and if we weren't careful
* we'd be modifying each of the nodes through the pointers between
* them.
*/
static void swap_with_successor(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
struct scoutfs_avl_node *succ = avl_successor(root, node);
struct scoutfs_avl_node *succ_par = node_ptr(root, succ->parent);
struct scoutfs_avl_node *succ_right = node_ptr(root, succ->right);
struct scoutfs_avl_node *parent;
struct scoutfs_avl_node *left;
struct scoutfs_avl_node *right;
/* Link old node's parent and left child with the successor */
succ->parent = node->parent;
parent = node_ptr(root, succ->parent);
set_parent_left_right(root, parent, node, succ);
succ->left = node->left;
left = node_ptr(root, succ->left);
if (left)
left->parent = node_off(root, succ);
/*
* Link the old node's right with successor and the old
* successor's parent with the node, they could have pointed to
* each other.
*/
if (succ_par == node) {
succ->right = node_off(root, node);
node->parent = node_off(root, succ);
} else {
succ->right = node->right;
right = node_ptr(root, succ->right);
if (right)
right->parent = node_off(root, succ);
set_parent_left_right(root, succ_par, succ, node);
node->parent = node_off(root, succ_par);
}
/* Link the old successor's right with the node, it can't have left */
node->right = node_off(root, succ_right);
if (succ_right)
succ_right->parent = node_off(root, node);
node->left = 0;
swap(node->height, succ->height);
}
void scoutfs_avl_delete(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node)
{
struct scoutfs_avl_node *parent;
struct scoutfs_avl_node *child;
if (node->left && node->right)
swap_with_successor(root, node);
parent = node_ptr(root, node->parent);
child = node_ptr(root, node->left ?: node->right);
set_parent_left_right(root, parent, node, child);
if (child)
child->parent = node->parent;
while (parent) {
set_height(root, parent);
parent = rotate_imbalance(root, parent);
parent = node_ptr(root, parent->parent);
}
}
/*
* Move the contents of a node to a new node location in memory. The
* logical position of the node in the tree does not change.
*/
void scoutfs_avl_relocate(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *to,
struct scoutfs_avl_node *from)
{
struct scoutfs_avl_node *parent = node_ptr(root, from->parent);
struct scoutfs_avl_node *left = node_ptr(root, from->left);
struct scoutfs_avl_node *right = node_ptr(root, from->right);
set_parent_left_right(root, parent, from, to);
to->parent = from->parent;
to->left = from->left;
if (left)
left->parent = node_off(root, to);
to->right = from->right;
if (right)
right->parent = node_off(root, to);
to->height = from->height;
}

View File

@@ -1,30 +0,0 @@
#ifndef _SCOUTFS_AVL_H_
#define _SCOUTFS_AVL_H_
#include "format.h"
typedef int (*scoutfs_avl_compare_t)(void *arg,
struct scoutfs_avl_node *node);
struct scoutfs_avl_node *
scoutfs_avl_search(struct scoutfs_avl_root *root,
scoutfs_avl_compare_t compare, void *arg, int *cmp_ret,
struct scoutfs_avl_node **par,
struct scoutfs_avl_node **next,
struct scoutfs_avl_node **prev);
struct scoutfs_avl_node *scoutfs_avl_first(struct scoutfs_avl_root *root);
struct scoutfs_avl_node *scoutfs_avl_last(struct scoutfs_avl_root *root);
struct scoutfs_avl_node *scoutfs_avl_next(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node);
struct scoutfs_avl_node *scoutfs_avl_prev(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node);
void scoutfs_avl_insert(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *parent,
struct scoutfs_avl_node *node, int cmp);
void scoutfs_avl_delete(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *node);
void scoutfs_avl_relocate(struct scoutfs_avl_root *root,
struct scoutfs_avl_node *to,
struct scoutfs_avl_node *from);
#endif
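A hedged usage sketch of the offset-based AVL interface declared above, assuming the kernel and format.h definitions from the listings. The containing entry struct, compare callback, and helper names are invented for illustration; the one real constraint implied by the __le16 offsets is that the root and every node must live in the same contiguous region (here, the same block).

```c
struct example_entry {
	struct scoutfs_avl_node node;
	__le64 key;
};

/* compare the search key against the entry that contains the node */
static int example_compare(void *arg, struct scoutfs_avl_node *node)
{
	u64 key = *(u64 *)arg;
	struct example_entry *ent = container_of(node, struct example_entry, node);

	return key < le64_to_cpu(ent->key) ? -1 :
	       key > le64_to_cpu(ent->key) ?  1 : 0;
}

static struct example_entry *example_lookup(struct scoutfs_avl_root *root, u64 key)
{
	struct scoutfs_avl_node *node;

	node = scoutfs_avl_search(root, example_compare, &key, NULL,
				  NULL, NULL, NULL);
	return node ? container_of(node, struct example_entry, node) : NULL;
}

/* insert an entry whose key isn't already present */
static void example_insert(struct scoutfs_avl_root *root,
			   struct example_entry *ent, u64 key)
{
	struct scoutfs_avl_node *par = NULL;
	int cmp = 0;

	ent->key = cpu_to_le64(key);
	/* the search records the would-be parent and the final comparison */
	scoutfs_avl_search(root, example_compare, &key, &cmp, &par, NULL, NULL);
	scoutfs_avl_insert(root, par, &ent->node, cmp);
}
```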

File diff suppressed because it is too large

View File

@@ -10,19 +10,33 @@ struct scoutfs_block_writer {
struct scoutfs_block {
u64 blkno;
void *data;
void *priv;
};
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret);
__le32 scoutfs_block_calc_crc(struct scoutfs_block_header *hdr);
bool scoutfs_block_valid_crc(struct scoutfs_block_header *hdr);
bool scoutfs_block_valid_ref(struct super_block *sb,
struct scoutfs_block_header *hdr,
__le64 seq, __le64 blkno);
bool scoutfs_block_tas_visited(struct super_block *sb,
struct scoutfs_block *bl);
void scoutfs_block_clear_visited(struct super_block *sb,
struct scoutfs_block *bl);
struct scoutfs_block *scoutfs_block_create(struct super_block *sb, u64 blkno);
struct scoutfs_block *scoutfs_block_read(struct super_block *sb, u64 blkno);
void scoutfs_block_invalidate(struct super_block *sb, struct scoutfs_block *bl);
bool scoutfs_block_consistent_ref(struct super_block *sb,
struct scoutfs_block *bl,
__le64 seq, __le64 blkno, u32 magic);
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl);
void scoutfs_block_writer_init(struct super_block *sb,
struct scoutfs_block_writer *wri);
int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_block_ref *ref,
u32 magic, struct scoutfs_block **bl_ret,
u64 dirty_blkno, u64 *ref_blkno);
void scoutfs_block_writer_mark_dirty(struct super_block *sb,
struct scoutfs_block_writer *wri,
struct scoutfs_block *bl);
bool scoutfs_block_writer_is_dirty(struct super_block *sb,
struct scoutfs_block *bl);
int scoutfs_block_writer_write(struct super_block *sb,
struct scoutfs_block_writer *wri);
void scoutfs_block_writer_forget_all(struct super_block *sb,
@@ -30,17 +44,18 @@ void scoutfs_block_writer_forget_all(struct super_block *sb,
void scoutfs_block_writer_forget(struct super_block *sb,
struct scoutfs_block_writer *wri,
struct scoutfs_block *bl);
void scoutfs_block_move(struct super_block *sb,
struct scoutfs_block_writer *wri,
struct scoutfs_block *bl, u64 blkno);
bool scoutfs_block_writer_has_dirty(struct super_block *sb,
struct scoutfs_block_writer *wri);
u64 scoutfs_block_writer_dirty_bytes(struct super_block *sb,
struct scoutfs_block_writer *wri);
int scoutfs_block_read_sm(struct super_block *sb,
struct block_device *bdev, u64 blkno,
int scoutfs_block_read_sm(struct super_block *sb, u64 blkno,
struct scoutfs_block_header *hdr, size_t len,
__le32 *blk_crc);
int scoutfs_block_write_sm(struct super_block *sb,
struct block_device *bdev, u64 blkno,
int scoutfs_block_write_sm(struct super_block *sb, u64 blkno,
struct scoutfs_block_header *hdr, size_t len);
int scoutfs_block_setup(struct super_block *sb);

File diff suppressed because it is too large

View File

@@ -3,14 +3,15 @@
#include <linux/uio.h>
struct scoutfs_alloc;
struct scoutfs_radix_allocator;
struct scoutfs_block_writer;
struct scoutfs_block;
struct scoutfs_btree_item_ref {
struct super_block *sb;
struct scoutfs_block *bl;
struct scoutfs_key *key;
void *key;
unsigned key_len;
void *val;
unsigned val_len;
};
@@ -18,114 +19,50 @@ struct scoutfs_btree_item_ref {
#define SCOUTFS_BTREE_ITEM_REF(name) \
struct scoutfs_btree_item_ref name = {NULL,}
/* caller gives an item to the callback */
typedef int (*scoutfs_btree_item_cb)(struct super_block *sb,
struct scoutfs_key *key, u64 seq, u8 flags,
void *val, int val_len, void *arg);
/* simple singly-linked list of items */
struct scoutfs_btree_item_list {
struct scoutfs_btree_item_list *next;
struct scoutfs_key key;
u64 seq;
u8 flags;
int val_len;
u8 val[0];
};
int scoutfs_btree_lookup(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
int scoutfs_btree_lookup(struct super_block *sb, struct scoutfs_btree_root *root,
void *key, unsigned key_len,
struct scoutfs_btree_item_ref *iref);
int scoutfs_btree_insert(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
void *key, unsigned key_len,
void *val, unsigned val_len);
int scoutfs_btree_update(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
void *key, unsigned key_len,
void *val, unsigned val_len);
int scoutfs_btree_force(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
void *key, unsigned key_len,
void *val, unsigned val_len);
int scoutfs_btree_delete(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key);
void *key, unsigned key_len);
int scoutfs_btree_next(struct super_block *sb, struct scoutfs_btree_root *root,
struct scoutfs_key *key,
void *key, unsigned key_len,
struct scoutfs_btree_item_ref *iref);
int scoutfs_btree_after(struct super_block *sb, struct scoutfs_btree_root *root,
void *key, unsigned key_len,
struct scoutfs_btree_item_ref *iref);
int scoutfs_btree_prev(struct super_block *sb, struct scoutfs_btree_root *root,
struct scoutfs_key *key,
void *key, unsigned key_len,
struct scoutfs_btree_item_ref *iref);
int scoutfs_btree_before(struct super_block *sb, struct scoutfs_btree_root *root,
void *key, unsigned key_len,
struct scoutfs_btree_item_ref *iref);
int scoutfs_btree_dirty(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key);
int scoutfs_btree_read_items(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_btree_item_cb cb, void *arg);
int scoutfs_btree_insert_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_btree_item_list *lst);
int scoutfs_btree_parent_range(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_key *start,
struct scoutfs_key *end);
int scoutfs_btree_get_parent(struct super_block *sb,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_btree_root *par_root);
int scoutfs_btree_set_parent(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key,
struct scoutfs_btree_root *par_root);
int scoutfs_btree_rebalance(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_key *key);
/* merge input is a list of roots */
struct scoutfs_btree_root_head {
struct list_head head;
struct scoutfs_btree_root root;
};
int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_key *start,
struct scoutfs_key *end,
struct scoutfs_key *next_ret,
struct scoutfs_btree_root *root,
struct list_head *input_list,
bool subtree, int dirty_limit, int alloc_low);
int scoutfs_btree_free_blocks(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_key *key,
struct scoutfs_btree_root *root, int free_budget);
void *key, unsigned key_len);
void scoutfs_btree_put_iref(struct scoutfs_btree_item_ref *iref);

View File

@@ -31,15 +31,16 @@
#include "net.h"
#include "endian_swap.h"
#include "quorum.h"
#include "omap.h"
#include "trans.h"
/*
* The client is responsible for maintaining a connection to the server.
* This includes managing quorum elections that determine which client
* should run the server that all the clients connect to.
*/
#define CLIENT_CONNECT_DELAY_MS (MSEC_PER_SEC / 10)
#define CLIENT_CONNECT_TIMEOUT_MS (1 * MSEC_PER_SEC)
#define CLIENT_QUORUM_TIMEOUT_MS (5 * MSEC_PER_SEC)
struct client_info {
struct super_block *sb;
@@ -49,9 +50,9 @@ struct client_info {
struct workqueue_struct *workq;
struct delayed_work connect_dwork;
unsigned long connect_delay_jiffies;
u64 server_term;
u64 greeting_umb;
bool sending_farewell;
int farewell_error;
@@ -107,14 +108,21 @@ int scoutfs_client_commit_log_trees(struct super_block *sb,
lt, sizeof(*lt), NULL, 0);
}
int scoutfs_client_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots)
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
__le64 before = cpu_to_le64p(seq);
__le64 after;
int ret;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_GET_ROOTS,
NULL, 0, roots, sizeof(*roots));
ret = scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
&before, sizeof(before),
&after, sizeof(after));
if (ret == 0)
*seq = le64_to_cpu(after);
return ret;
}
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq)
@@ -132,6 +140,17 @@ int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq)
return ret;
}
int scoutfs_client_statfs(struct super_block *sb,
struct scoutfs_net_statfs *nstatfs)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_STATFS, NULL, 0,
nstatfs,
sizeof(struct scoutfs_net_statfs));
}
/* process an incoming grant response from the server */
static int client_lock_response(struct super_block *sb,
struct scoutfs_net_connection *conn,
@@ -181,142 +200,6 @@ int scoutfs_client_lock_recover_response(struct super_block *sb, u64 net_id,
net_id, 0, nlr, bytes);
}
/* Find srch files that need to be compacted. */
int scoutfs_client_srch_get_compact(struct super_block *sb,
struct scoutfs_srch_compact *sc)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_SRCH_GET_COMPACT,
NULL, 0, sc, sizeof(*sc));
}
/* Commit the result of a srch file compaction. */
int scoutfs_client_srch_commit_compact(struct super_block *sb,
struct scoutfs_srch_compact *res)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_SRCH_COMMIT_COMPACT,
res, sizeof(*res), NULL, 0);
}
int scoutfs_client_get_log_merge(struct super_block *sb,
struct scoutfs_log_merge_request *req)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_GET_LOG_MERGE,
NULL, 0, req, sizeof(*req));
}
int scoutfs_client_commit_log_merge(struct super_block *sb,
struct scoutfs_log_merge_complete *comp)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_COMMIT_LOG_MERGE,
comp, sizeof(*comp), NULL, 0);
}
int scoutfs_client_send_omap_response(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map *map)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_response(sb, client->conn, SCOUTFS_NET_CMD_OPEN_INO_MAP,
id, 0, map, sizeof(*map));
}
/* The client is receiving an omap request from the server */
static int client_open_ino_map(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
if (arg_len != sizeof(struct scoutfs_open_ino_map_args))
return -EINVAL;
return scoutfs_omap_client_handle_request(sb, id, arg);
}
/* The client is sending an omap request to the server */
int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
struct scoutfs_open_ino_map *map)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_open_ino_map_args args = {
.group_nr = cpu_to_le64(group_nr),
.req_id = 0,
};
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_OPEN_INO_MAP,
&args, sizeof(args), map, sizeof(*map));
}
/* The client is asking the server for the current volume options */
int scoutfs_client_get_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_GET_VOLOPT,
NULL, 0, volopt, sizeof(*volopt));
}
/* The client is asking the server to update volume options */
int scoutfs_client_set_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_SET_VOLOPT,
volopt, sizeof(*volopt), NULL, 0);
}
/* The client is asking the server to clear volume options */
int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_CLEAR_VOLOPT,
volopt, sizeof(*volopt), NULL, 0);
}
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_RESIZE_DEVICES,
nrd, sizeof(*nrd), NULL, 0);
}
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_STATFS,
NULL, 0, nst, sizeof(*nst));
}
/*
* The server is asking that we trigger a commit of the current log
* trees so that they can ensure an item seq discontinuity between
* finalized log btrees and the next set of open log btrees. If we're
* shutting down then we're already going to perform a final commit.
*/
static int sync_log_trees(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
if (arg_len != 0)
return -EINVAL;
if (!scoutfs_unmounting(sb))
scoutfs_trans_sync(sb, 0);
return scoutfs_net_response(sb, conn, cmd, id, 0, NULL, 0);
}
/* The client is receiving a invalidation request from the server */
static int client_lock(struct super_block *sb,
struct scoutfs_net_connection *conn, u8 cmd, u64 id,
@@ -354,8 +237,7 @@ static int client_greeting(struct super_block *sb,
void *resp, unsigned int resp_len, int error,
void *data)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct client_info *client = sbi->client_info;
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_greeting *gr = resp;
bool new_server;
@@ -372,15 +254,17 @@ static int client_greeting(struct super_block *sb,
}
if (gr->fsid != super->hdr.fsid) {
scoutfs_warn(sb, "server greeting response fsid 0x%llx did not match client fsid 0x%llx",
le64_to_cpu(gr->fsid), le64_to_cpu(super->hdr.fsid));
scoutfs_warn(sb, "server sent fsid 0x%llx, client has 0x%llx",
le64_to_cpu(gr->fsid),
le64_to_cpu(super->hdr.fsid));
ret = -EINVAL;
goto out;
}
if (le64_to_cpu(gr->fmt_vers) != sbi->fmt_vers) {
scoutfs_warn(sb, "server greeting response format version %llu did not match client format version %llu",
le64_to_cpu(gr->fmt_vers), sbi->fmt_vers);
if (gr->format_hash != super->format_hash) {
scoutfs_warn(sb, "server sent format 0x%llx, client has 0x%llx",
le64_to_cpu(gr->format_hash),
le64_to_cpu(super->format_hash));
ret = -EINVAL;
goto out;
}
@@ -389,31 +273,52 @@ static int client_greeting(struct super_block *sb,
scoutfs_net_client_greeting(sb, conn, new_server);
client->server_term = le64_to_cpu(gr->server_term);
client->connect_delay_jiffies = 0;
client->greeting_umb = le64_to_cpu(gr->unmount_barrier);
ret = 0;
out:
return ret;
}
/*
* The client is deciding if it needs to keep trying to reconnect to
* have its farewell request processed. The server removes our mounted
* client item last so that if we don't see it we know the server has
* processed our farewell and we don't need to reconnect, we can unmount
* safely.
* This work is responsible for maintaining a connection from the client
* to the server. It's queued on mount and disconnect and we requeue
* the work if the work fails and we're not shutting down.
*
* This is peeking at btree blocks that the server could be actively
* freeing with cow updates so it can see stale blocks, we just return
* the error and we'll retry eventually as the connection times out.
* In the typical case a mount reads the super blocks and finds the
* address of the currently running server and connects to it.
* Non-voting clients who can't connect will keep trying alternating
* reading the address and getting connect timeouts.
*
* Voting mounts will try to elect a leader if they can't connect to the
* server. When a quorum can't connect and are able to elect a leader
* then a new server is started. The new server will write its address
* in the super and everyone will be able to connect.
*
* There's a tricky bit of coordination required to safely unmount.
* Clients need to tell the server that they won't be coming back with a
* farewell request. Once a client receives its farewell response it
* can exit. But a majority of clients need to stick around to elect a
* server to process all their farewell requests. This is coordinated
* by having the greeting tell the server that a client is a voter. The
* server then holds on to farewell requests from voters until only
* requests from the final quorum remain. These farewell responses are
* only sent after updating an unmount barrier in the super to indicate
* to the final quorum that they can safely exit without having received
* a farewell response over the network.
*/
static int lookup_mounted_client_item(struct super_block *sb, u64 rid)
static void scoutfs_client_connect_worker(struct work_struct *work)
{
struct scoutfs_key key = {
.sk_zone = SCOUTFS_MOUNTED_CLIENT_ZONE,
.skmc_rid = cpu_to_le64(rid),
};
struct scoutfs_super_block *super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct client_info *client = container_of(work, struct client_info,
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = NULL;
struct mount_options *opts = &sbi->opts;
const bool am_voter = opts->server_addr.sin_addr.s_addr != 0;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
ktime_t timeout_abs;
u64 elected_term;
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
@@ -426,97 +331,57 @@ static int lookup_mounted_client_item(struct super_block *sb, u64 rid)
if (ret)
goto out;
ret = scoutfs_btree_lookup(sb, &super->mounted_clients, &key, &iref);
if (ret == 0) {
scoutfs_btree_put_iref(&iref);
ret = 1;
}
if (ret == -ENOENT)
ret = 0;
kfree(super);
out:
return ret;
}
/*
* If we're not seeing successful connections we want to back off. Each
* connection attempt starts by setting a long connection work delay.
* We only set a shorter delay if we see a greeting response from the
* server. At that point we'll try to immediately reconnect if the
* connection is broken.
*/
static void queue_connect_dwork(struct super_block *sb, struct client_info *client)
{
if (!atomic_read(&client->shutting_down) && !scoutfs_forcing_unmount(sb))
queue_delayed_work(client->workq, &client->connect_dwork,
client->connect_delay_jiffies);
}
/*
* This work is responsible for maintaining a connection from the client
* to the server. It's queued on mount and disconnect and we requeue
* the work if the work fails and we're not shutting down.
*
* We ask quorum for an address to try and connect to. If there isn't
* one, or it fails, we back off a bit before trying again.
*
* There's a tricky bit of coordination required to safely unmount.
* Clients need to tell the server that they won't be coming back with a
* farewell request. Once the server processes a farewell request from
* the client it can forget the client. If the connection is broken
* before the client gets the farewell response it doesn't want to
reconnect to send it again; instead the client can read the metadata
* device to check for the lack of an item which indicates that the
* server has processed its farewell.
*/
static void scoutfs_client_connect_worker(struct work_struct *work)
{
struct client_info *client = container_of(work, struct client_info,
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_mount_options opts;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
bool am_quorum;
int ret;
scoutfs_options_read(sb, &opts);
am_quorum = opts.quorum_slot_nr >= 0;
/* can unmount once server farewell handling removes our item */
if (client->sending_farewell &&
lookup_mounted_client_item(sb, sbi->rid) == 0) {
/* can safely unmount if we see that server processed our farewell */
if (am_voter && client->sending_farewell &&
(le64_to_cpu(super->unmount_barrier) > client->greeting_umb)) {
client->farewell_error = 0;
complete(&client->farewell_comp);
ret = 0;
goto out;
}
/* always wait a bit until a greeting response sets a lower delay */
client->connect_delay_jiffies = msecs_to_jiffies(CLIENT_CONNECT_DELAY_MS);
/* try to connect to the super's server address */
scoutfs_addr_to_sin(&sin, &super->server_addr);
if (sin.sin_addr.s_addr != 0 && sin.sin_port != 0)
ret = scoutfs_net_connect(sb, client->conn, &sin,
CLIENT_CONNECT_TIMEOUT_MS);
else
ret = -ENOTCONN;
ret = scoutfs_quorum_server_sin(sb, &sin);
if (ret < 0)
goto out;
/* voters try to elect a leader if they couldn't connect */
if (ret < 0) {
/* non-voters will keep retrying */
if (!am_voter)
goto out;
ret = scoutfs_net_connect(sb, client->conn, &sin,
CLIENT_CONNECT_TIMEOUT_MS);
if (ret < 0)
/* make sure local server isn't writing super during votes */
scoutfs_server_stop(sb);
timeout_abs = ktime_add_ms(ktime_get(),
CLIENT_QUORUM_TIMEOUT_MS);
ret = scoutfs_quorum_election(sb, timeout_abs,
le64_to_cpu(super->quorum_server_term),
&elected_term);
/* start the server if we were asked to */
if (elected_term > 0)
ret = scoutfs_server_start(sb, &opts->server_addr,
elected_term);
ret = -ENOTCONN;
goto out;
}
/* send a greeting to verify endpoints of each connection */
greet.fsid = super->hdr.fsid;
greet.fmt_vers = cpu_to_le64(sbi->fmt_vers);
greet.format_hash = super->format_hash;
greet.server_term = cpu_to_le64(client->server_term);
greet.unmount_barrier = cpu_to_le64(client->greeting_umb);
greet.rid = cpu_to_le64(sbi->rid);
greet.flags = 0;
if (client->sending_farewell)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_FAREWELL);
if (am_quorum)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_QUORUM);
if (am_voter)
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_VOTER);
ret = scoutfs_net_submit_request(sb, client->conn,
SCOUTFS_NET_CMD_GREETING,
@@ -525,15 +390,17 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
if (ret)
scoutfs_net_shutdown(sb, client->conn);
out:
if (ret)
queue_connect_dwork(sb, client);
kfree(super);
/* always have a small delay before retrying to avoid storms */
if (ret && !atomic_read(&client->shutting_down))
queue_delayed_work(client->workq, &client->connect_dwork,
msecs_to_jiffies(CLIENT_CONNECT_DELAY_MS));
}
static scoutfs_net_request_t client_req_funcs[] = {
[SCOUTFS_NET_CMD_SYNC_LOG_TREES] = sync_log_trees,
[SCOUTFS_NET_CMD_LOCK] = client_lock,
[SCOUTFS_NET_CMD_LOCK_RECOVER] = client_lock_recover,
[SCOUTFS_NET_CMD_OPEN_INO_MAP] = client_open_ino_map,
};
/*
@@ -546,7 +413,8 @@ static void client_notify_down(struct super_block *sb,
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
queue_connect_dwork(sb, client);
if (!atomic_read(&client->shutting_down))
queue_delayed_work(client->workq, &client->connect_dwork, 0);
}
int scoutfs_client_setup(struct super_block *sb)
@@ -581,7 +449,7 @@ int scoutfs_client_setup(struct super_block *sb)
goto out;
}
queue_connect_dwork(sb, client);
queue_delayed_work(client->workq, &client->connect_dwork, 0);
ret = 0;
out:
@@ -638,7 +506,7 @@ void scoutfs_client_destroy(struct super_block *sb)
if (client == NULL)
return;
if (client->server_term != 0 && !scoutfs_forcing_unmount(sb)) {
if (client->server_term != 0) {
client->sending_farewell = true;
ret = scoutfs_net_submit_request(sb, client->conn,
SCOUTFS_NET_CMD_FAREWELL,
@@ -646,8 +514,10 @@ void scoutfs_client_destroy(struct super_block *sb)
client_farewell_response,
NULL, NULL);
if (ret == 0) {
wait_for_completion(&client->farewell_comp);
ret = client->farewell_error;
ret = wait_for_completion_interruptible(
&client->farewell_comp);
if (ret == 0)
ret = client->farewell_error;
}
if (ret) {
scoutfs_inc_counter(sb, client_farewell_error);
@@ -671,11 +541,3 @@ void scoutfs_client_destroy(struct super_block *sb)
kfree(client);
sbi->client_info = NULL;
}
void scoutfs_client_net_shutdown(struct super_block *sb)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
if (client && client->conn)
scoutfs_net_shutdown(sb, client->conn);
}

View File

@@ -7,35 +7,18 @@ int scoutfs_client_get_log_trees(struct super_block *sb,
struct scoutfs_log_trees *lt);
int scoutfs_client_commit_log_trees(struct super_block *sb,
struct scoutfs_log_trees *lt);
int scoutfs_client_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots);
u64 *scoutfs_client_bulk_alloc(struct super_block *sb);
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_statfs(struct super_block *sb,
struct scoutfs_net_statfs *nstatfs);
int scoutfs_client_lock_request(struct super_block *sb,
struct scoutfs_net_lock *nl);
int scoutfs_client_lock_response(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl);
int scoutfs_client_lock_recover_response(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock_recover *nlr);
int scoutfs_client_srch_get_compact(struct super_block *sb,
struct scoutfs_srch_compact *sc);
int scoutfs_client_srch_commit_compact(struct super_block *sb,
struct scoutfs_srch_compact *res);
int scoutfs_client_get_log_merge(struct super_block *sb,
struct scoutfs_log_merge_request *req);
int scoutfs_client_commit_log_merge(struct super_block *sb,
struct scoutfs_log_merge_complete *comp);
int scoutfs_client_send_omap_response(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map *map);
int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
struct scoutfs_open_ino_map *map);
int scoutfs_client_get_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_set_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd);
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst);
void scoutfs_client_net_shutdown(struct super_block *sb);
int scoutfs_client_setup(struct super_block *sb);
void scoutfs_client_destroy(struct super_block *sb);

kmod/src/count.h (new file, 319 lines)

@@ -0,0 +1,319 @@
#ifndef _SCOUTFS_COUNT_H_
#define _SCOUTFS_COUNT_H_
/*
* Our estimate of the space consumed while dirtying items is based on
* the number of items and the size of their values.
*
* The estimate is still a read-only input to entering the transaction.
* We'd like to use it as a clean rhs arg to hold_trans. We define SIC_
* functions which return the count struct. This lets us have a single
* arg and avoid bugs in initializing and passing in struct pointers
* from callers. The internal __count functions are used to compose an
* estimate out of the sets of items an operation manipulates. We program in much
* clearer C instead of in the preprocessor.
*
* Compilers are able to collapse the inlines into constants for the
* constant estimates.
*/
struct scoutfs_item_count {
signed items;
signed vals;
};
/* The caller knows exactly what they're doing. */
static inline const struct scoutfs_item_count SIC_EXACT(signed items,
signed vals)
{
struct scoutfs_item_count cnt = {
.items = items,
.vals = vals,
};
return cnt;
}
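/*
 * Illustrative usage, not part of this change: a SIC_ helper is
 * evaluated right in the argument list when entering a transaction,
 * as the dir.c call sites later in this diff do:
 *
 *	ret = scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
 *					SIC_LINK(dentry->d_name.len));
 *
 * Returning the struct by value means callers can't forget to
 * initialize it or pass in a stale pointer.
 */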
/*
* Allocating an inode creates a new set of indexed items.
*/
static inline void __count_alloc_inode(struct scoutfs_item_count *cnt)
{
const int nr_indices = SCOUTFS_INODE_INDEX_NR;
cnt->items += 1 + nr_indices;
cnt->vals += sizeof(struct scoutfs_inode);
}
/*
* Dirtying an inode dirties the inode item and can delete and create
* the full set of indexed items.
*/
static inline void __count_dirty_inode(struct scoutfs_item_count *cnt)
{
const int nr_indices = 2 * SCOUTFS_INODE_INDEX_NR;
cnt->items += 1 + nr_indices;
cnt->vals += sizeof(struct scoutfs_inode);
}
static inline const struct scoutfs_item_count SIC_ALLOC_INODE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_alloc_inode(&cnt);
return cnt;
}
static inline const struct scoutfs_item_count SIC_DIRTY_INODE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_dirty_inode(&cnt);
return cnt;
}
/*
* Directory entries are stored in three items.
*/
static inline void __count_dirents(struct scoutfs_item_count *cnt,
unsigned name_len)
{
cnt->items += 3;
cnt->vals += 3 * offsetof(struct scoutfs_dirent, name[name_len]);
}
static inline void __count_sym_target(struct scoutfs_item_count *cnt,
unsigned size)
{
unsigned nr = DIV_ROUND_UP(size, SCOUTFS_MAX_VAL_SIZE);
cnt->items += nr;
cnt->vals += size;
}
static inline void __count_orphan(struct scoutfs_item_count *cnt)
{
cnt->items += 1;
}
static inline void __count_mknod(struct scoutfs_item_count *cnt,
unsigned name_len)
{
__count_alloc_inode(cnt);
__count_dirents(cnt, name_len);
__count_dirty_inode(cnt);
}
static inline const struct scoutfs_item_count SIC_MKNOD(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_mknod(&cnt, name_len);
return cnt;
}
/*
* Dropping the inode deletes all its items. Potentially enormous numbers
* of items (data mapping, xattrs) are deleted in their own transactions.
*/
static inline const struct scoutfs_item_count SIC_DROP_INODE(int mode,
u64 size)
{
struct scoutfs_item_count cnt = {0,};
if (S_ISLNK(mode))
__count_sym_target(&cnt, size);
__count_dirty_inode(&cnt);
__count_orphan(&cnt);
cnt.vals = 0;
return cnt;
}
static inline const struct scoutfs_item_count SIC_LINK(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_dirents(&cnt, name_len);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
return cnt;
}
/*
* Unlink can add orphan items.
*/
static inline const struct scoutfs_item_count SIC_UNLINK(unsigned name_len)
{
struct scoutfs_item_count cnt = {0,};
__count_dirents(&cnt, name_len);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_orphan(&cnt);
return cnt;
}
static inline const struct scoutfs_item_count SIC_SYMLINK(unsigned name_len,
unsigned size)
{
struct scoutfs_item_count cnt = {0,};
__count_mknod(&cnt, name_len);
__count_sym_target(&cnt, size);
return cnt;
}
/*
* This assumes the worst case of a rename between directories that
* unlinks an existing target. That'll be worse than the common case
* by a few hundred bytes.
*/
static inline const struct scoutfs_item_count SIC_RENAME(unsigned old_len,
unsigned new_len)
{
struct scoutfs_item_count cnt = {0,};
/* dirty dirs and inodes */
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
__count_dirty_inode(&cnt);
/* unlink old and new, link new */
__count_dirents(&cnt, old_len);
__count_dirents(&cnt, new_len);
__count_dirents(&cnt, new_len);
/* orphan the existing target */
__count_orphan(&cnt);
return cnt;
}
/*
* Creating an xattr results in a dirty set of items with values that
* store the xattr header, name, and value. There's always at least one
* item with the header and name. Any previously existing items are
* deleted, which dirties their key but removes their value. The two
* sets of items are indexed by different ids so their items don't
* overlap. If the xattr name is indexed then we modify one xattr index
* item.
*/
static inline const struct scoutfs_item_count SIC_XATTR_SET(unsigned old_parts,
bool creating,
unsigned name_len,
unsigned size,
bool indexed)
{
struct scoutfs_item_count cnt = {0,};
unsigned int new_parts;
__count_dirty_inode(&cnt);
if (old_parts)
cnt.items += old_parts;
if (indexed)
cnt.items++;
if (creating) {
new_parts = SCOUTFS_XATTR_NR_PARTS(name_len, size);
cnt.items += new_parts;
cnt.vals += sizeof(struct scoutfs_xattr) + name_len + size;
}
return cnt;
}
/*
* write_begin may have to allocate all the blocks in the page and may
* have to merge in a large allocation from the server to do so:
* - merge added free extents from the server
* - remove a free extent per block
* - remove an offline extent for every other block
* - add a file extent per block
*/
static inline const struct scoutfs_item_count SIC_WRITE_BEGIN(void)
{
struct scoutfs_item_count cnt = {0,};
unsigned nr_free = (1 + SCOUTFS_BLOCKS_PER_PAGE) * 3;
unsigned nr_file = (DIV_ROUND_UP(SCOUTFS_BLOCKS_PER_PAGE, 2) +
SCOUTFS_BLOCKS_PER_PAGE) * 3;
__count_dirty_inode(&cnt);
cnt.items += nr_free + nr_file;
cnt.vals += nr_file;
return cnt;
}
/*
* Truncating an extent can:
* - delete existing file extent,
* - create two surrounding file extents,
* - add an offline file extent,
* - delete two existing free extents
* - create a merged free extent
*/
static inline const struct scoutfs_item_count
SIC_TRUNC_EXTENT(struct inode *inode)
{
struct scoutfs_item_count cnt = {0,};
unsigned int nr_file = 1 + 2 + 1;
unsigned int nr_free = (2 + 1) * 2;
if (inode)
__count_dirty_inode(&cnt);
cnt.items += nr_file + nr_free;
cnt.vals += nr_file;
return cnt;
}
/*
* Fallocating an extent can, at most:
* - allocate from the server: delete two free and insert merged
* - free an allocated extent: delete one and create two split
* - remove an unallocated file extent: delete one and create two split
* - add a fallocated file extent: delete two and insert one merged
*/
static inline const struct scoutfs_item_count SIC_FALLOCATE_ONE(void)
{
struct scoutfs_item_count cnt = {0,};
unsigned int nr_free = ((1 + 2) * 2) * 2;
unsigned int nr_file = (1 + 2) * 2;
__count_dirty_inode(&cnt);
cnt.items += nr_free + nr_file;
cnt.vals += nr_file;
return cnt;
}
/*
* ioc_setattr_more can dirty the inode and add a single offline extent.
*/
static inline const struct scoutfs_item_count SIC_SETATTR_MORE(void)
{
struct scoutfs_item_count cnt = {0,};
__count_dirty_inode(&cnt);
cnt.items++;
return cnt;
}
#endif


@@ -12,55 +12,18 @@
* other places by this macro. Don't forget to update LAST_COUNTER.
*/
#define EXPAND_EACH_COUNTER \
EXPAND_COUNTER(alloc_alloc_data) \
EXPAND_COUNTER(alloc_alloc_meta) \
EXPAND_COUNTER(alloc_free_data) \
EXPAND_COUNTER(alloc_free_meta) \
EXPAND_COUNTER(alloc_list_avail_lo) \
EXPAND_COUNTER(alloc_list_freed_hi) \
EXPAND_COUNTER(alloc_move) \
EXPAND_COUNTER(alloc_moved_extent) \
EXPAND_COUNTER(alloc_stale_list_block) \
EXPAND_COUNTER(block_cache_access_update) \
EXPAND_COUNTER(block_cache_access) \
EXPAND_COUNTER(block_cache_alloc_failure) \
EXPAND_COUNTER(block_cache_alloc_page_order) \
EXPAND_COUNTER(block_cache_alloc_virt) \
EXPAND_COUNTER(block_cache_end_io_error) \
EXPAND_COUNTER(block_cache_forget) \
EXPAND_COUNTER(block_cache_free) \
EXPAND_COUNTER(block_cache_free_work) \
EXPAND_COUNTER(block_cache_remove_stale) \
EXPAND_COUNTER(block_cache_invalidate) \
EXPAND_COUNTER(block_cache_lru_move) \
EXPAND_COUNTER(block_cache_shrink) \
EXPAND_COUNTER(block_cache_shrink_next) \
EXPAND_COUNTER(block_cache_shrink_recent) \
EXPAND_COUNTER(block_cache_shrink_remove) \
EXPAND_COUNTER(block_cache_shrink_restart) \
EXPAND_COUNTER(btree_compact_values) \
EXPAND_COUNTER(btree_compact_values_enomem) \
EXPAND_COUNTER(btree_delete) \
EXPAND_COUNTER(btree_dirty) \
EXPAND_COUNTER(btree_force) \
EXPAND_COUNTER(btree_join) \
EXPAND_COUNTER(btree_insert) \
EXPAND_COUNTER(btree_leaf_item_hash_search) \
EXPAND_COUNTER(btree_lookup) \
EXPAND_COUNTER(btree_merge) \
EXPAND_COUNTER(btree_merge_alloc_low) \
EXPAND_COUNTER(btree_merge_delete) \
EXPAND_COUNTER(btree_merge_delta_combined) \
EXPAND_COUNTER(btree_merge_delta_null) \
EXPAND_COUNTER(btree_merge_dirty_limit) \
EXPAND_COUNTER(btree_merge_drop_old) \
EXPAND_COUNTER(btree_merge_insert) \
EXPAND_COUNTER(btree_merge_update) \
EXPAND_COUNTER(btree_merge_walk) \
EXPAND_COUNTER(btree_next) \
EXPAND_COUNTER(btree_prev) \
EXPAND_COUNTER(btree_split) \
EXPAND_COUNTER(btree_read_error) \
EXPAND_COUNTER(btree_stale_read) \
EXPAND_COUNTER(btree_update) \
EXPAND_COUNTER(btree_walk) \
EXPAND_COUNTER(btree_walk_restart) \
EXPAND_COUNTER(client_farewell_error) \
EXPAND_COUNTER(corrupt_btree_block_level) \
EXPAND_COUNTER(corrupt_btree_no_child_ref) \
@@ -71,8 +34,6 @@
EXPAND_COUNTER(corrupt_symlink_inode_size) \
EXPAND_COUNTER(corrupt_symlink_missing_item) \
EXPAND_COUNTER(corrupt_symlink_not_null_term) \
EXPAND_COUNTER(data_fallocate_enobufs_retry) \
EXPAND_COUNTER(data_write_begin_enobufs_retry) \
EXPAND_COUNTER(dentry_revalidate_error) \
EXPAND_COUNTER(dentry_revalidate_invalid) \
EXPAND_COUNTER(dentry_revalidate_locked) \
@@ -81,64 +42,25 @@
EXPAND_COUNTER(dentry_revalidate_root) \
EXPAND_COUNTER(dentry_revalidate_valid) \
EXPAND_COUNTER(dir_backref_excessive_retries) \
EXPAND_COUNTER(ext_op_insert) \
EXPAND_COUNTER(ext_op_next) \
EXPAND_COUNTER(ext_op_remove) \
EXPAND_COUNTER(forest_bloom_fail) \
EXPAND_COUNTER(forest_bloom_pass) \
EXPAND_COUNTER(forest_bloom_stale) \
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
EXPAND_COUNTER(item_clear_dirty) \
EXPAND_COUNTER(item_create) \
EXPAND_COUNTER(item_delete) \
EXPAND_COUNTER(item_delta) \
EXPAND_COUNTER(item_delta_written) \
EXPAND_COUNTER(item_dirty) \
EXPAND_COUNTER(item_invalidate) \
EXPAND_COUNTER(item_invalidate_page) \
EXPAND_COUNTER(item_lookup) \
EXPAND_COUNTER(item_mark_dirty) \
EXPAND_COUNTER(item_next) \
EXPAND_COUNTER(item_page_accessed) \
EXPAND_COUNTER(item_page_alloc) \
EXPAND_COUNTER(item_page_clear_dirty) \
EXPAND_COUNTER(item_page_compact) \
EXPAND_COUNTER(item_page_free) \
EXPAND_COUNTER(item_page_lru_add) \
EXPAND_COUNTER(item_page_lru_remove) \
EXPAND_COUNTER(item_page_mark_dirty) \
EXPAND_COUNTER(item_page_rbtree_walk) \
EXPAND_COUNTER(item_page_split) \
EXPAND_COUNTER(item_pcpu_add_replaced) \
EXPAND_COUNTER(item_pcpu_page_hit) \
EXPAND_COUNTER(item_pcpu_page_miss) \
EXPAND_COUNTER(item_pcpu_page_miss_keys) \
EXPAND_COUNTER(item_read_pages_split) \
EXPAND_COUNTER(item_shrink_page) \
EXPAND_COUNTER(item_shrink_page_dirty) \
EXPAND_COUNTER(item_shrink_page_reader) \
EXPAND_COUNTER(item_shrink_page_trylock) \
EXPAND_COUNTER(item_update) \
EXPAND_COUNTER(item_write_dirty) \
EXPAND_COUNTER(lock_alloc) \
EXPAND_COUNTER(lock_free) \
EXPAND_COUNTER(lock_grace_elapsed) \
EXPAND_COUNTER(lock_grace_extended) \
EXPAND_COUNTER(lock_grace_set) \
EXPAND_COUNTER(lock_grace_wait) \
EXPAND_COUNTER(lock_grant_request) \
EXPAND_COUNTER(lock_grant_response) \
EXPAND_COUNTER(lock_invalidate_commit) \
EXPAND_COUNTER(lock_invalidate_coverage) \
EXPAND_COUNTER(lock_invalidate_inode) \
EXPAND_COUNTER(lock_invalidate_request) \
EXPAND_COUNTER(lock_invalidate_response) \
EXPAND_COUNTER(lock_invalidate_sync) \
EXPAND_COUNTER(lock_invalidate_work) \
EXPAND_COUNTER(lock_lock) \
EXPAND_COUNTER(lock_lock_error) \
EXPAND_COUNTER(lock_nonblock_eagain) \
EXPAND_COUNTER(lock_recover_request) \
EXPAND_COUNTER(lock_shrink_attempted) \
EXPAND_COUNTER(lock_shrink_aborted) \
EXPAND_COUNTER(lock_shrink_work) \
EXPAND_COUNTER(lock_shrink_queued) \
EXPAND_COUNTER(lock_shrink_request_aborted) \
EXPAND_COUNTER(lock_unlock) \
EXPAND_COUNTER(lock_wait) \
EXPAND_COUNTER(net_dropped_response) \
@@ -151,64 +73,29 @@
EXPAND_COUNTER(net_recv_invalid_message) \
EXPAND_COUNTER(net_recv_messages) \
EXPAND_COUNTER(net_unknown_request) \
EXPAND_COUNTER(orphan_scan) \
EXPAND_COUNTER(orphan_scan_attempts) \
EXPAND_COUNTER(orphan_scan_cached) \
EXPAND_COUNTER(orphan_scan_error) \
EXPAND_COUNTER(orphan_scan_item) \
EXPAND_COUNTER(orphan_scan_omap_set) \
EXPAND_COUNTER(quorum_candidate_server_stopping) \
EXPAND_COUNTER(quorum_elected) \
EXPAND_COUNTER(quorum_fence_error) \
EXPAND_COUNTER(quorum_fence_leader) \
EXPAND_COUNTER(quorum_cycle) \
EXPAND_COUNTER(quorum_elected_leader) \
EXPAND_COUNTER(quorum_election_timeout) \
EXPAND_COUNTER(quorum_failure) \
EXPAND_COUNTER(quorum_read_block) \
EXPAND_COUNTER(quorum_read_block_error) \
EXPAND_COUNTER(quorum_read_invalid_block) \
EXPAND_COUNTER(quorum_recv_error) \
EXPAND_COUNTER(quorum_recv_heartbeat) \
EXPAND_COUNTER(quorum_recv_invalid) \
EXPAND_COUNTER(quorum_recv_resignation) \
EXPAND_COUNTER(quorum_recv_vote) \
EXPAND_COUNTER(quorum_send_heartbeat) \
EXPAND_COUNTER(quorum_send_resignation) \
EXPAND_COUNTER(quorum_send_request) \
EXPAND_COUNTER(quorum_send_vote) \
EXPAND_COUNTER(quorum_server_shutdown) \
EXPAND_COUNTER(quorum_term_follower) \
EXPAND_COUNTER(server_commit_hold) \
EXPAND_COUNTER(server_commit_queue) \
EXPAND_COUNTER(server_commit_worker) \
EXPAND_COUNTER(srch_add_entry) \
EXPAND_COUNTER(srch_compact_dirty_block) \
EXPAND_COUNTER(srch_compact_entry) \
EXPAND_COUNTER(srch_compact_error) \
EXPAND_COUNTER(srch_compact_flush) \
EXPAND_COUNTER(srch_compact_log_page) \
EXPAND_COUNTER(srch_compact_removed_entry) \
EXPAND_COUNTER(srch_rotate_log) \
EXPAND_COUNTER(srch_search_log) \
EXPAND_COUNTER(srch_search_log_block) \
EXPAND_COUNTER(srch_search_retry_empty) \
EXPAND_COUNTER(srch_search_sorted) \
EXPAND_COUNTER(srch_search_sorted_block) \
EXPAND_COUNTER(srch_search_stale_eio) \
EXPAND_COUNTER(srch_search_stale_retry) \
EXPAND_COUNTER(srch_search_xattrs) \
EXPAND_COUNTER(srch_read_stale) \
EXPAND_COUNTER(statfs) \
EXPAND_COUNTER(totl_read_copied) \
EXPAND_COUNTER(totl_read_finalized) \
EXPAND_COUNTER(totl_read_fs) \
EXPAND_COUNTER(totl_read_item) \
EXPAND_COUNTER(totl_read_logged) \
EXPAND_COUNTER(quorum_saw_super_leader) \
EXPAND_COUNTER(quorum_timedout) \
EXPAND_COUNTER(quorum_write_block) \
EXPAND_COUNTER(quorum_write_block_error) \
EXPAND_COUNTER(quorum_fenced) \
EXPAND_COUNTER(radix_enospc_data) \
EXPAND_COUNTER(radix_enospc_paths) \
EXPAND_COUNTER(radix_enospc_synth) \
EXPAND_COUNTER(trans_commit_data_alloc_low) \
EXPAND_COUNTER(trans_commit_dirty_meta_full) \
EXPAND_COUNTER(trans_commit_fsync) \
EXPAND_COUNTER(trans_commit_meta_alloc_low) \
EXPAND_COUNTER(trans_commit_full) \
EXPAND_COUNTER(trans_commit_sync_fs) \
EXPAND_COUNTER(trans_commit_timer) \
EXPAND_COUNTER(trans_commit_written)
EXPAND_COUNTER(trans_commit_timer)
#define FIRST_COUNTER alloc_alloc_data
#define LAST_COUNTER trans_commit_written
#define FIRST_COUNTER block_cache_access
#define LAST_COUNTER trans_commit_timer
#undef EXPAND_COUNTER
#define EXPAND_COUNTER(which) struct percpu_counter which;
@@ -226,21 +113,11 @@ struct scoutfs_counters {
pcpu <= &SCOUTFS_SB(sb)->counters->LAST_COUNTER; \
pcpu++)
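/*
 * Sketch of the X-macro idiom above (the expansion shown here is
 * illustrative, not part of this diff): each user redefines
 * EXPAND_COUNTER and then invokes EXPAND_EACH_COUNTER, so the counter
 * list is written once and expands into struct fields and the other
 * places the list's comment mentions.  For the struct definition the
 * expansion is roughly:
 *
 *	#undef EXPAND_COUNTER
 *	#define EXPAND_COUNTER(which) struct percpu_counter which;
 *	struct scoutfs_counters {
 *		EXPAND_EACH_COUNTER
 *	};
 */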
/*
* We always read with _sum, so we have no use for the shared count and
* certainly don't want to pay the cost of a shared lock to update it.
* The default batch of 32 makes counter increments show up significantly
* in profiles.
*/
#define SCOUTFS_PCPU_COUNTER_BATCH (1 << 30)
#define scoutfs_inc_counter(sb, which) \
percpu_counter_inc(&SCOUTFS_SB(sb)->counters->which)
#define scoutfs_inc_counter(sb, which) \
__percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, 1, \
SCOUTFS_PCPU_COUNTER_BATCH)
#define scoutfs_add_counter(sb, which, cnt) \
__percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, cnt, \
SCOUTFS_PCPU_COUNTER_BATCH)
#define scoutfs_add_counter(sb, which, cnt) \
percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, cnt)
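/*
 * Read-side sketch (assumed helper context, not shown in this hunk):
 * because the batch above is enormous the shared count is never
 * meaningful, so totals are always folded from the per-cpu deltas:
 *
 *	s64 total = percpu_counter_sum(&SCOUTFS_SB(sb)->counters->which);
 */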
void __init scoutfs_init_counters(void);
int scoutfs_setup_counters(struct super_block *sb);

File diff suppressed because it is too large.


@@ -38,9 +38,16 @@ struct scoutfs_data_wait {
.err = 0, \
}
struct scoutfs_traced_extent {
u64 iblock;
u64 count;
u64 blkno;
u8 flags;
};
extern const struct address_space_operations scoutfs_file_aops;
extern const struct file_operations scoutfs_file_fops;
struct scoutfs_alloc;
struct scoutfs_radix_allocator;
struct scoutfs_block_writer;
int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
@@ -51,9 +58,6 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len);
int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
struct scoutfs_lock *lock);
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
u64 byte_len, struct inode *to, u64 to_off, bool to_stage,
u64 data_version);
int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
u8 sef, u8 op, struct scoutfs_data_wait *ow,
@@ -73,13 +77,12 @@ int scoutfs_data_waiting(struct super_block *sb, u64 ino, u64 iblock,
unsigned int nr);
void scoutfs_data_init_btrees(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_log_trees *lt);
void scoutfs_data_get_btrees(struct super_block *sb,
struct scoutfs_log_trees *lt);
int scoutfs_data_prepare_commit(struct super_block *sb);
bool scoutfs_data_alloc_should_refill(struct super_block *sb, u64 blocks);
u64 scoutfs_data_alloc_free_bytes(struct super_block *sb);
int scoutfs_data_setup(struct super_block *sb);
void scoutfs_data_destroy(struct super_block *sb);


@@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/crc32c.h>
#include <linux/uio.h>
#include <linux/xattr.h>
#include <linux/namei.h>
@@ -27,11 +28,9 @@
#include "super.h"
#include "trans.h"
#include "xattr.h"
#include "item.h"
#include "lock.h"
#include "hash.h"
#include "omap.h"
#include "kvec.h"
#include "forest.h"
#include "lock.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -80,7 +79,7 @@ static unsigned int mode_to_type(umode_t mode)
#undef S_SHIFT
}
static unsigned int dentry_type(enum scoutfs_dentry_type type)
static unsigned int dentry_type(unsigned int type)
{
static unsigned char types[] = {
[SCOUTFS_DT_FIFO] = DT_FIFO,
@@ -136,8 +135,8 @@ static int alloc_dentry_info(struct dentry *dentry)
{
struct dentry_info *di;
smp_rmb();
if (dentry->d_op == &scoutfs_dentry_ops)
/* XXX read mb? */
if (dentry->d_fsdata)
return 0;
di = kmem_cache_zalloc(dentry_info_cache, GFP_NOFS);
@@ -149,7 +148,6 @@ static int alloc_dentry_info(struct dentry *dentry)
spin_lock(&dentry->d_lock);
if (!dentry->d_fsdata) {
dentry->d_fsdata = di;
smp_wmb();
d_set_d_op(dentry, &scoutfs_dentry_ops);
}
spin_unlock(&dentry->d_lock);
@@ -215,47 +213,15 @@ static struct scoutfs_dirent *alloc_dirent(unsigned int name_len)
return kmalloc(dirent_bytes(name_len), GFP_NOFS);
}
/*
* Test a bit number as though an array of bytes is a large len-bit
* big-endian value. nr 0 is the LSB of the final byte, nr (len - 1) is
* the MSB of the first byte.
*/
static int test_be_bytes_bit(int nr, const char *bytes, int len)
{
return bytes[(len - 1 - nr) >> 3] & (1 << (nr & 7));
}
/*
* Generate a 32bit "fingerprint" of the name by extracting 32 evenly
* distributed bits from the name. The intent is to have the sort order
* of the fingerprints reflect the memcmp() sort order of the names
* while mapping large names down to small fs keys.
*
* Names that are smaller than 32bits are biased towards the high bits
* of the fingerprint so that most significant bits of the fingerprints
* consistently reflect the initial characters of the names.
*/
static u32 dirent_name_fingerprint(const char *name, unsigned int name_len)
{
int name_bits = name_len * 8;
int skip = max(name_bits / 32, 1);
u32 fp = 0;
int f;
int n;
for (f = 31, n = name_bits - 1; f >= 0 && n >= 0; f--, n -= skip)
fp |= !!test_be_bytes_bit(n, name, name_bits) << f;
return fp;
}
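/*
 * Worked example (illustrative): for a 2-byte name, name_bits is 16
 * and skip is max(16 / 32, 1) = 1, so the loop copies name bits
 * n = 15..0 into fingerprint bits f = 31..16 and leaves the low 16
 * bits zero -- the high-bit bias described above.  For an 8-byte name,
 * name_bits is 64 and skip is 2, so every other bit is sampled,
 * starting from the MSB of the first byte.
 */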
static u64 dirent_name_hash(const char *name, unsigned int name_len)
{
return scoutfs_hash32(name, name_len) |
((u64)dirent_name_fingerprint(name, name_len) << 32);
unsigned int half = (name_len + 1) / 2;
return crc32c(~0, name, half) |
((u64)crc32c(~0, name + name_len - half, half) << 32);
}
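/*
 * Note on the crc32c variant above (illustrative numbers): half is
 * (name_len + 1) / 2, so the low 32 bits hash the first half of the
 * name and the high 32 bits hash the last half, with odd lengths
 * overlapping by one byte.  e.g. name_len = 5 gives half = 3, hashing
 * bytes [0,3) and [2,5).
 */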
static bool dirent_names_equal(const char *a_name, unsigned int a_len,
static u64 dirent_names_equal(const char *a_name, unsigned int a_len,
const char *b_name, unsigned int b_len)
{
return a_len == b_len && memcmp(a_name, b_name, a_len) == 0;
@@ -273,19 +239,21 @@ static int lookup_dirent(struct super_block *sb, u64 dir_ino, const char *name,
struct scoutfs_key last_key;
struct scoutfs_key key;
struct scoutfs_dirent *dent = NULL;
struct kvec val;
int ret;
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
return -ENOMEM;
ret = -ENOMEM;
goto out;
}
init_dirent_key(&key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, 0);
init_dirent_key(&last_key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, U64_MAX);
kvec_init(&val, dent, dirent_bytes(SCOUTFS_NAME_LEN));
for (;;) {
ret = scoutfs_item_next(sb, &key, &last_key, dent,
dirent_bytes(SCOUTFS_NAME_LEN), lock);
ret = scoutfs_forest_next(sb, &key, &last_key, &val, lock);
if (ret < 0)
break;
@@ -317,52 +285,6 @@ out:
return ret;
}
/*
* Verify that the caller's dentry still precisely matches our dirent
* items.
*
* The caller has a dentry that the vfs revalidated before they acquired
* their locks. If the dentry is still covered by a lock we immediately
* return 0. If not, we check items and return -ENOENT if a positive
* dentry no longer matches the items or -EEXIST if a negative entry's
* name now has an item.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_lock *lock)
{
struct dentry_info *di = dentry->d_fsdata;
struct scoutfs_dirent dent = {0,};
const char *name;
u64 dentry_ino;
int name_len;
u64 hash;
int ret;
if (scoutfs_lock_is_covered(sb, &di->lock_cov))
return 0;
dentry_ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
name = dentry->d_name.name;
name_len = dentry->d_name.len;
hash = dirent_name_hash(name, name_len);
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
if (ret < 0 && ret != -ENOENT)
return ret;
if (dentry_ino != le64_to_cpu(dent.ino) || di->hash != le64_to_cpu(dent.hash) ||
di->pos != le64_to_cpu(dent.pos)) {
if (dentry_ino)
ret = -ENOENT;
else
ret = -EEXIST;
} else {
ret = 0;
}
return ret;
}
static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
{
struct super_block *sb = dentry->d_sb;
@@ -470,7 +392,7 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
{
struct super_block *sb = dir->i_sb;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent dent = {0,};
struct scoutfs_dirent dent;
struct inode *inode;
u64 ino = 0;
u64 hash;
@@ -498,11 +420,9 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
ret = 0;
} else if (ret == 0) {
ino = le64_to_cpu(dent.ino);
}
if (ret == 0)
update_dentry_info(sb, dentry, le64_to_cpu(dent.hash),
le64_to_cpu(dent.pos), dir_lock);
}
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_READ);
out:
@@ -511,20 +431,9 @@ out:
else if (ino == 0)
inode = NULL;
else
inode = scoutfs_iget(sb, ino, 0, 0);
inode = scoutfs_iget(sb, ino);
/*
* We can't splice dir aliases into the dcache. dir entries
* might have changed on other nodes so our dcache could still
* contain them, rather than having been moved in rename. For
* dirs, we use d_materialize_unique to remove any existing
* aliases which must be stale. Our inode numbers aren't reused
* so inodes pointed to by entries can't change types.
*/
if (!IS_ERR_OR_NULL(inode) && S_ISDIR(inode->i_mode))
return d_materialise_unique(dentry, inode);
else
return d_splice_alias(inode, dentry);
return d_splice_alias(inode, dentry);
}
/*
@@ -539,10 +448,11 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
{
struct inode *inode = file_inode(file);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent *dent = NULL;
struct scoutfs_key last_key;
struct scoutfs_dirent *dent;
struct scoutfs_key key;
struct scoutfs_key last_key;
struct scoutfs_lock *dir_lock;
struct kvec val;
int name_len;
u64 pos;
int ret;
@@ -552,11 +462,13 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
return -ENOMEM;
ret = -ENOMEM;
goto out;
}
init_dirent_key(&last_key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
SCOUTFS_DIRENT_LAST_POS, 0);
kvec_init(&val, dent, dirent_bytes(SCOUTFS_NAME_LEN));
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &dir_lock);
if (ret)
@@ -566,9 +478,7 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
init_dirent_key(&key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
kc_readdir_pos(file, ctx), 0);
ret = scoutfs_item_next(sb, &key, &last_key, dent,
dirent_bytes(SCOUTFS_NAME_LEN),
dir_lock);
ret = scoutfs_forest_next(sb, &key, &last_key, &val, dir_lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
@@ -619,17 +529,19 @@ static int add_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
u64 ino, umode_t mode, struct scoutfs_lock *dir_lock,
struct scoutfs_lock *inode_lock)
{
struct scoutfs_dirent *dent = NULL;
struct scoutfs_key rdir_key;
struct scoutfs_key ent_key;
struct scoutfs_key lb_key;
bool del_rdir = false;
struct scoutfs_dirent *dent;
bool del_ent = false;
bool del_rdir = false;
struct kvec val;
int ret;
dent = alloc_dirent(name_len);
if (!dent) {
return -ENOMEM;
ret = -ENOMEM;
goto out;
}
/* initialize the dent */
@@ -642,27 +554,25 @@ static int add_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
init_dirent_key(&ent_key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, pos);
init_dirent_key(&rdir_key, SCOUTFS_READDIR_TYPE, dir_ino, pos, 0);
init_dirent_key(&lb_key, SCOUTFS_LINK_BACKREF_TYPE, ino, dir_ino, pos);
kvec_init(&val, dent, dirent_bytes(name_len));
ret = scoutfs_item_create(sb, &ent_key, dent, dirent_bytes(name_len),
dir_lock);
ret = scoutfs_forest_create(sb, &ent_key, &val, dir_lock);
if (ret)
goto out;
del_ent = true;
ret = scoutfs_item_create(sb, &rdir_key, dent, dirent_bytes(name_len),
dir_lock);
ret = scoutfs_forest_create(sb, &rdir_key, &val, dir_lock);
if (ret)
goto out;
del_rdir = true;
ret = scoutfs_item_create(sb, &lb_key, dent, dirent_bytes(name_len),
inode_lock);
ret = scoutfs_forest_create(sb, &lb_key, &val, inode_lock);
out:
if (ret < 0) {
if (del_ent)
scoutfs_item_delete(sb, &ent_key, dir_lock);
scoutfs_forest_delete_dirty(sb, &ent_key);
if (del_rdir)
scoutfs_item_delete(sb, &rdir_key, dir_lock);
scoutfs_forest_delete_dirty(sb, &rdir_key);
}
kfree(dent);
@@ -684,20 +594,23 @@ static int del_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
struct scoutfs_key rdir_key;
struct scoutfs_key ent_key;
struct scoutfs_key lb_key;
LIST_HEAD(dir_saved);
LIST_HEAD(inode_saved);
int ret;
init_dirent_key(&ent_key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, pos);
init_dirent_key(&rdir_key, SCOUTFS_READDIR_TYPE, dir_ino, pos, 0);
init_dirent_key(&lb_key, SCOUTFS_LINK_BACKREF_TYPE, ino, dir_ino, pos);
ret = scoutfs_item_dirty(sb, &ent_key, dir_lock) ?:
scoutfs_item_dirty(sb, &rdir_key, dir_lock) ?:
scoutfs_item_dirty(sb, &lb_key, inode_lock);
if (ret == 0) {
ret = scoutfs_item_delete(sb, &ent_key, dir_lock) ?:
scoutfs_item_delete(sb, &rdir_key, dir_lock) ?:
scoutfs_item_delete(sb, &lb_key, inode_lock);
BUG_ON(ret); /* _dirty should have guaranteed success */
ret = scoutfs_forest_delete_save(sb, &ent_key, &dir_saved, dir_lock) ?:
scoutfs_forest_delete_save(sb, &rdir_key, &dir_saved, dir_lock) ?:
scoutfs_forest_delete_save(sb, &lb_key, &inode_saved, inode_lock);
if (ret < 0) {
scoutfs_forest_restore(sb, &dir_saved, dir_lock);
scoutfs_forest_restore(sb, &inode_saved, inode_lock);
} else {
scoutfs_forest_free_batch(sb, &dir_saved);
scoutfs_forest_free_batch(sb, &inode_saved);
}
return ret;
@@ -714,13 +627,13 @@ static int del_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
*/
static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
umode_t mode, dev_t rdev,
const struct scoutfs_item_count cnt,
struct scoutfs_lock **dir_lock,
struct scoutfs_lock **inode_lock,
struct scoutfs_lock **orph_lock,
struct list_head *ind_locks)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
struct inode *inode;
u64 ind_seq;
int ret = 0;
u64 ino;
@@ -729,7 +642,7 @@ static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
if (ret)
return ERR_PTR(ret);
ret = scoutfs_alloc_ino(sb, S_ISDIR(mode), &ino);
ret = scoutfs_alloc_ino(dir, &ino);
if (ret)
return ERR_PTR(ret);
@@ -749,25 +662,21 @@ static struct inode *lock_hold_create(struct inode *dir, struct dentry *dentry,
if (ret)
goto out_unlock;
if (orph_lock) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, ino, orph_lock);
if (ret < 0)
goto out_unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, ind_locks, dir, true) ?:
scoutfs_inode_index_prepare_ino(sb, ind_locks, ino, mode) ?:
scoutfs_inode_index_try_lock_hold(sb, ind_locks, ind_seq, true);
scoutfs_inode_index_try_lock_hold(sb, ind_locks, ind_seq, cnt);
if (ret > 0)
goto retry;
if (ret)
goto out_unlock;
ret = scoutfs_new_inode(sb, dir, mode, rdev, ino, *inode_lock, &inode);
if (ret < 0)
inode = scoutfs_new_inode(sb, dir, mode, rdev, ino, *inode_lock);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
goto out;
}
ret = scoutfs_dirty_inode_item(dir, *dir_lock);
out:
@@ -777,16 +686,10 @@ out_unlock:
if (ret) {
scoutfs_inode_index_unlock(sb, ind_locks);
scoutfs_unlock(sb, *dir_lock, SCOUTFS_LOCK_WRITE);
*dir_lock = NULL;
scoutfs_unlock(sb, *inode_lock, SCOUTFS_LOCK_WRITE);
*dir_lock = NULL;
*inode_lock = NULL;
if (orph_lock) {
scoutfs_unlock(sb, *orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
*orph_lock = NULL;
}
if (!IS_ERR_OR_NULL(inode))
iput(inode);
inode = ERR_PTR(ret);
}
@@ -800,7 +703,6 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
u64 hash;
u64 pos;
@@ -811,14 +713,10 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
inode = lock_hold_create(dir, dentry, mode, rdev,
&dir_lock, &inode_lock, NULL, &ind_locks);
SIC_MKNOD(dentry->d_name.len),
&dir_lock, &inode_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
pos = SCOUTFS_I(dir)->next_readdir_pos++;
@@ -834,10 +732,6 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode->i_mtime = inode->i_atime = inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_mtime;
inode_inc_iversion(dir);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
if (S_ISDIR(mode)) {
inc_nlink(inode);
@@ -881,15 +775,12 @@ static int scoutfs_link(struct dentry *old_dentry,
struct super_block *sb = dir->i_sb;
struct scoutfs_lock *dir_lock;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
LIST_HEAD(ind_locks);
bool del_orphan = false;
u64 dir_size;
u64 ind_seq;
u64 hash;
u64 pos;
int ret;
int err;
hash = dirent_name_hash(dentry->d_name.name, dentry->d_name.len);
@@ -912,25 +803,13 @@ static int scoutfs_link(struct dentry *old_dentry,
if (ret)
goto out_unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out_unlock;
dir_size = i_size_read(dir) + dentry->d_name.len;
if (inode->i_nlink == 0) {
del_orphan = true;
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(inode),
&orph_lock);
if (ret < 0)
goto out_unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, dir, false) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, true);
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_LINK(dentry->d_name.len));
if (ret > 0)
goto retry;
if (ret)
@@ -940,31 +819,20 @@ retry:
if (ret)
goto out;
if (del_orphan) {
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock);
if (ret)
goto out;
}
pos = SCOUTFS_I(dir)->next_readdir_pos++;
ret = add_entry_items(sb, scoutfs_ino(dir), hash, pos,
dentry->d_name.name, dentry->d_name.len,
scoutfs_ino(inode), inode->i_mode, dir_lock,
inode_lock);
if (ret) {
err = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
WARN_ON_ONCE(err); /* no orphan, might not scan and delete after crash */
if (ret)
goto out;
}
update_dentry_info(sb, dentry, hash, pos, dir_lock);
i_size_write(dir, dir_size);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode->i_ctime = dir->i_mtime;
inc_nlink(inode);
inode_inc_iversion(dir);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -977,8 +845,6 @@ out_unlock:
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
@@ -1003,7 +869,6 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
struct inode *inode = dentry->d_inode;
struct timespec ts = current_kernel_time();
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_lock *dir_lock = NULL;
LIST_HEAD(ind_locks);
u64 ind_seq;
@@ -1016,63 +881,43 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
if (ret)
return ret;
ret = alloc_dentry_info(dentry);
if (ret)
goto unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto unlock;
if (S_ISDIR(inode->i_mode) && i_size_read(inode)) {
ret = -ENOTEMPTY;
goto unlock;
}
if (scoutfs_inode_worm_denied(inode)) {
ret = -EACCES;
goto unlock;
}
if (should_orphan(inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(inode),
&orph_lock);
if (ret < 0)
goto unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, dir, false) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, inode, false) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, false);
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_UNLINK(dentry->d_name.len));
if (ret > 0)
goto retry;
if (ret)
goto unlock;
if (should_orphan(inode)) {
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
if (ret < 0)
goto out;
}
ret = del_entry_items(sb, scoutfs_ino(dir), dentry_info_hash(dentry),
dentry_info_pos(dentry), scoutfs_ino(inode),
dir_lock, inode_lock);
if (ret) {
ret = scoutfs_inode_orphan_delete(sb, scoutfs_ino(inode), orph_lock);
WARN_ON_ONCE(ret); /* should have been dirty */
if (ret)
goto out;
}
update_dentry_info(sb, dentry, 0, 0, dir_lock);
if (should_orphan(inode)) {
/*
* Insert the orphan item before we modify any inode
* metadata so we can gracefully exit should it
* fail.
*/
ret = scoutfs_orphan_inode(inode);
WARN_ON_ONCE(ret); /* XXX returning error but items deleted */
if (ret)
goto out;
}
dir->i_ctime = ts;
dir->i_mtime = ts;
i_size_write(dir, i_size_read(dir) - dentry->d_name.len);
inode_inc_iversion(dir);
inode_inc_iversion(inode);
inode->i_ctime = ts;
drop_nlink(inode);
@@ -1089,7 +934,6 @@ unlock:
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
@@ -1116,16 +960,17 @@ static void init_symlink_key(struct scoutfs_key *key, u64 ino, u8 nr)
* The target name can be null for deletion when val isn't used. Size
* still has to be provided to determine the number of items.
*/
enum symlink_ops {
enum {
SYM_CREATE = 0,
SYM_LOOKUP,
SYM_DELETE,
};
static int symlink_item_ops(struct super_block *sb, enum symlink_ops op, u64 ino,
static int symlink_item_ops(struct super_block *sb, int op, u64 ino,
struct scoutfs_lock *lock, const char *target,
size_t size)
{
struct scoutfs_key key;
struct kvec val;
unsigned bytes;
unsigned nr;
int ret;
@@ -1140,16 +985,14 @@ static int symlink_item_ops(struct super_block *sb, enum symlink_ops op, u64 ino
init_symlink_key(&key, ino, i);
bytes = min_t(u64, size, SCOUTFS_MAX_VAL_SIZE);
kvec_init(&val, (void *)target, bytes);
if (op == SYM_CREATE)
ret = scoutfs_item_create(sb, &key, (void *)target,
bytes, lock);
ret = scoutfs_forest_create(sb, &key, &val, lock);
else if (op == SYM_LOOKUP)
ret = scoutfs_item_lookup_exact(sb, &key,
(void *)target, bytes,
lock);
ret = scoutfs_forest_lookup_exact(sb, &key, &val, lock);
else if (op == SYM_DELETE)
ret = scoutfs_item_delete(sb, &key, lock);
ret = scoutfs_forest_delete(sb, &key, lock);
if (ret)
break;
@@ -1265,7 +1108,6 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
u64 hash;
u64 pos;
@@ -1283,14 +1125,10 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
return ret;
inode = lock_hold_create(dir, dentry, S_IFLNK|S_IRWXUGO, 0,
&dir_lock, &inode_lock, NULL, &ind_locks);
SIC_SYMLINK(dentry->d_name.len, name_len),
&dir_lock, &inode_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
ret = symlink_item_ops(sb, SYM_CREATE, scoutfs_ino(inode), inode_lock,
symname, name_len);
@@ -1310,13 +1148,9 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode_inc_iversion(dir);
inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_ctime;
i_size_write(inode, name_len);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -1324,11 +1158,11 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
insert_inode_hash(inode);
/* XXX need to set i_op/fop before here for sec callbacks */
d_instantiate(dentry, inode);
inode = NULL;
ret = 0;
out:
if (ret < 0) {
/* XXX remove inode items */
if (!IS_ERR_OR_NULL(inode))
iput(inode);
symlink_item_ops(sb, SYM_DELETE, scoutfs_ino(inode), inode_lock,
NULL, name_len);
@@ -1339,9 +1173,6 @@ out:
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
if (!IS_ERR_OR_NULL(inode))
iput(inode);
return ret;
}
@@ -1372,10 +1203,11 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
u64 dir_ino, u64 dir_pos,
struct list_head *list)
{
struct scoutfs_link_backref_entry *ent = NULL;
struct scoutfs_lock *lock = NULL;
struct scoutfs_link_backref_entry *ent;
struct scoutfs_key last_key;
struct scoutfs_key key;
struct scoutfs_lock *lock = NULL;
struct kvec val;
int len;
int ret;
@@ -1391,13 +1223,13 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
init_dirent_key(&key, SCOUTFS_LINK_BACKREF_TYPE, ino, dir_ino, dir_pos);
init_dirent_key(&last_key, SCOUTFS_LINK_BACKREF_TYPE, ino, U64_MAX,
U64_MAX);
kvec_init(&val, &ent->dent, dirent_bytes(SCOUTFS_NAME_LEN));
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, ino, &lock);
if (ret)
goto out;
ret = scoutfs_item_next(sb, &key, &last_key, &ent->dent,
dirent_bytes(SCOUTFS_NAME_LEN), lock);
ret = scoutfs_forest_next(sb, &key, &last_key, &val, lock);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
lock = NULL;
if (ret < 0)
@@ -1595,6 +1427,26 @@ static int verify_ancestors(struct super_block *sb, u64 p1, u64 p2,
return ret;
}
/*
* Make sure that a dirent from the dir to the inode exists at the name.
* The caller has the name locked in the dir.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, const char *name,
unsigned name_len, u64 hash, u64 ino,
struct scoutfs_lock *lock)
{
struct scoutfs_dirent dent;
int ret;
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
if (ret == 0 && le64_to_cpu(dent.ino) != ino)
ret = -ENOENT;
else if (ret == -ENOENT && ino == 0)
ret = 0;
return ret;
}
/*
* The vfs performs checks on cached inodes and dirents before calling
* here. It doesn't hold any locks so all of those checks can be based
@@ -1623,9 +1475,8 @@ static int verify_ancestors(struct super_block *sb, u64 p1, u64 p2,
* from using parent/child locking orders as two groups can have both
* parent and child relationships to each other.
*/
static int scoutfs_rename_common(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
static int scoutfs_rename(struct inode *old_dir, struct dentry *old_dentry,
struct inode *new_dir, struct dentry *new_dentry)
{
struct super_block *sb = old_dir->i_sb;
struct inode *old_inode = old_dentry->d_inode;
@@ -1635,7 +1486,6 @@ static int scoutfs_rename_common(struct inode *old_dir,
struct scoutfs_lock *new_dir_lock = NULL;
struct scoutfs_lock *old_inode_lock = NULL;
struct scoutfs_lock *new_inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct timespec now;
bool ins_new = false;
bool del_new = false;
@@ -1690,31 +1540,16 @@ static int scoutfs_rename_common(struct inode *old_dir,
}
/* make sure that the entries assumed by the argument still exist */
ret = alloc_dentry_info(old_dentry) ?:
alloc_dentry_info(new_dentry) ?:
verify_entry(sb, scoutfs_ino(old_dir), old_dentry, old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry, new_dir_lock);
ret = verify_entry(sb, scoutfs_ino(old_dir), old_dentry->d_name.name,
old_dentry->d_name.len, old_hash,
scoutfs_ino(old_inode), old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry->d_name.name,
new_dentry->d_name.len, new_hash,
new_inode ? scoutfs_ino(new_inode) : 0,
new_dir_lock);
if (ret)
goto out_unlock;
if ((flags & RENAME_NOREPLACE) && (new_inode != NULL)) {
ret = -EEXIST;
goto out_unlock;
}
if ((old_inode && scoutfs_inode_worm_denied(old_inode)) ||
(new_inode && scoutfs_inode_worm_denied(new_inode))) {
ret = -EACCES;
goto out_unlock;
}
if (should_orphan(new_inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(new_inode),
&orph_lock);
if (ret < 0)
goto out_unlock;
}
retry:
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
scoutfs_inode_index_prepare(sb, &ind_locks, old_dir, false) ?:
@@ -1723,7 +1558,9 @@ retry:
scoutfs_inode_index_prepare(sb, &ind_locks, new_dir, false)) ?:
(new_inode == NULL ? 0 :
scoutfs_inode_index_prepare(sb, &ind_locks, new_inode, false)) ?:
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq, true);
scoutfs_inode_index_try_lock_hold(sb, &ind_locks, ind_seq,
SIC_RENAME(old_dentry->d_name.len,
new_dentry->d_name.len));
if (ret > 0)
goto retry;
if (ret)
@@ -1774,7 +1611,7 @@ retry:
ins_old = true;
if (should_orphan(new_inode)) {
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(new_inode), orph_lock);
ret = scoutfs_orphan_inode(new_inode);
if (ret)
goto out;
}
@@ -1814,13 +1651,6 @@ retry:
if (new_inode)
old_inode->i_ctime = now;
inode_inc_iversion(old_dir);
inode_inc_iversion(old_inode);
if (new_dir != old_dir)
inode_inc_iversion(new_dir);
if (new_inode)
inode_inc_iversion(new_inode);
scoutfs_update_inode_item(old_dir, old_dir_lock, &ind_locks);
scoutfs_update_inode_item(old_inode, old_inode_lock, &ind_locks);
if (new_dir != old_dir)
@@ -1885,28 +1715,10 @@ out_unlock:
scoutfs_unlock(sb, old_dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, new_dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, rename_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
return ret;
}
static int scoutfs_rename(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry)
{
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, 0);
}
static int scoutfs_rename2(struct inode *old_dir,
struct dentry *old_dentry, struct inode *new_dir,
struct dentry *new_dentry, unsigned int flags)
{
if (flags & ~RENAME_NOREPLACE)
return -EINVAL;
return scoutfs_rename_common(old_dir, old_dentry, new_dir, new_dentry, flags);
}
#ifdef KC_FMODE_KABI_ITERATE
/* we only need this to set the iterate flag for kabi :/ */
static int scoutfs_dir_open(struct inode *inode, struct file *file)
@@ -1916,55 +1728,6 @@ static int scoutfs_dir_open(struct inode *inode, struct file *file)
}
#endif
static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *orph_lock = NULL;
struct scoutfs_inode_info *si;
LIST_HEAD(ind_locks);
int ret;
if (dentry->d_name.len > SCOUTFS_NAME_LEN)
return -ENAMETOOLONG;
inode = lock_hold_create(dir, dentry, mode, 0,
&dir_lock, &inode_lock, &orph_lock, &ind_locks);
if (IS_ERR(inode))
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = scoutfs_inode_orphan_create(sb, scoutfs_ino(inode), orph_lock);
if (ret < 0)
goto out; /* XXX returning error but items created */
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
si->crtime = inode->i_mtime;
insert_inode_hash(inode);
ihold(inode); /* need to update inode modifications in d_tmpfile */
d_tmpfile(dentry, inode);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
scoutfs_inode_index_unlock(sb, &ind_locks);
out:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
if (!IS_ERR_OR_NULL(inode))
iput(inode);
return ret;
}
const struct file_operations scoutfs_dir_fops = {
.KC_FOP_READDIR = scoutfs_readdir,
#ifdef KC_FMODE_KABI_ITERATE
@@ -1975,10 +1738,7 @@ const struct file_operations scoutfs_dir_fops = {
.llseek = generic_file_llseek,
};
const struct inode_operations_wrapper scoutfs_dir_iops = {
.ops = {
const struct inode_operations scoutfs_dir_iops = {
.lookup = scoutfs_lookup,
.mknod = scoutfs_mknod,
.create = scoutfs_create,
@@ -1995,9 +1755,6 @@ const struct inode_operations_wrapper scoutfs_dir_iops = {
.removexattr = scoutfs_removexattr,
.symlink = scoutfs_symlink,
.permission = scoutfs_permission,
},
.tmpfile = scoutfs_tmpfile,
.rename2 = scoutfs_rename2,
};
void scoutfs_dir_exit(void)


@@ -5,7 +5,7 @@
#include "lock.h"
extern const struct file_operations scoutfs_dir_fops;
extern const struct inode_operations_wrapper scoutfs_dir_iops;
extern const struct inode_operations scoutfs_dir_iops;
extern const struct inode_operations scoutfs_symlink_iops;
struct scoutfs_link_backref_entry {
@@ -14,7 +14,7 @@ struct scoutfs_link_backref_entry {
u64 dir_pos;
u16 name_len;
struct scoutfs_dirent dent;
/* the full name is allocated and stored in dent.name[] */
/* the full name is allocated and stored in dent.name[0] */
};
int scoutfs_dir_get_backref_path(struct super_block *sb, u64 ino, u64 dir_ino,


@@ -81,7 +81,7 @@ static struct dentry *scoutfs_fh_to_dentry(struct super_block *sb,
trace_scoutfs_fh_to_dentry(sb, fh_type, sfid);
if (scoutfs_valid_fileid(fh_type))
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino), 0, SCOUTFS_IGF_LINKED);
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino));
return d_obtain_alias(inode);
}
@@ -100,7 +100,7 @@ static struct dentry *scoutfs_fh_to_parent(struct super_block *sb,
if (scoutfs_valid_fileid(fh_type) &&
fh_type == FILEID_SCOUTFS_WITH_PARENT)
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino), 0, SCOUTFS_IGF_LINKED);
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino));
return d_obtain_alias(inode);
}
@@ -123,7 +123,7 @@ static struct dentry *scoutfs_get_parent(struct dentry *child)
scoutfs_dir_free_backref_path(sb, &list);
trace_scoutfs_get_parent(sb, inode, ino);
inode = scoutfs_iget(sb, ino, 0, SCOUTFS_IGF_LINKED);
inode = scoutfs_iget(sb, ino);
return d_obtain_alias(inode);
}


@@ -1,400 +0,0 @@
/*
* Copyright (C) 2020 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include "msg.h"
#include "ext.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
* Extents are used to track free block regions and to map logical file
* regions to device blocks. Extents can be split and merged as
* they're modified. These helpers implement all the fiddly extent
* manipulations. Callers provide callbacks which implement the actual
* storage of extents in either the item cache or btree items.
*/
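/*
 * Illustrative only, not part of this diff: minimal caller-side wiring
 * of the callbacks.  The example_* names are made up; a real caller
 * implements them against the item cache or btree items as described
 * above, and passes &example_ops plus its own arg to the
 * scoutfs_ext_*() helpers.
 */
static int example_next(struct super_block *sb, void *arg,
			u64 start, u64 len, struct scoutfs_extent *ext)
{
	return -ENOENT;	/* pretend no extents are stored yet */
}

static int example_insert(struct super_block *sb, void *arg,
			  u64 start, u64 len, u64 map, u8 flags)
{
	return 0;	/* a real callback would create a stored extent */
}

static int example_remove(struct super_block *sb, void *arg,
			  u64 start, u64 len, u64 map, u8 flags)
{
	return 0;	/* a real callback would delete the stored extent */
}

static struct scoutfs_ext_ops example_ops = {
	.next = example_next,
	.insert = example_insert,
	.remove = example_remove,
	.insert_overlap_warn = true,
};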
static void ext_zero(struct scoutfs_extent *ext)
{
memset(ext, 0, sizeof(struct scoutfs_extent));
}
static bool ext_overlap(struct scoutfs_extent *ext, u64 start, u64 len)
{
u64 e_end = ext->start + ext->len - 1;
u64 end = start + len - 1;
return !(e_end < start || ext->start > end);
}
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out)
{
u64 in_end = start + len - 1;
u64 out_end = out->start + out->len - 1;
return out->start <= start && out_end >= in_end;
}
/* we only translate mappings when they exist */
static inline u64 ext_map_add(u64 map, u64 diff)
{
return map ? map + diff : 0;
}
/*
* Extents can merge if they're logically contiguous, both don't have
* mappings or have mappings which are also contiguous, and have
* matching flags.
*/
bool scoutfs_ext_can_merge(struct scoutfs_extent *left,
struct scoutfs_extent *right)
{
return (left->start + left->len == right->start) &&
((!left->map && !right->map) ||
(left->map + left->len == right->map)) &&
(left->flags == right->flags);
}
/*
* Split an existing extent into left and right extents by removing
* an interior range. A split extent is left all zeros if the removed
* range extends to that end of the existing extent.
*/
static void ext_split(struct scoutfs_extent *ext, u64 start, u64 len,
struct scoutfs_extent *left,
struct scoutfs_extent *right)
{
if (ext->start < start) {
left->start = ext->start;
left->len = start - ext->start;
left->map = ext->map;
left->flags = ext->flags;
} else {
ext_zero(left);
}
if (ext->start + ext->len > start + len) {
right->start = start + len;
right->len = ext->start + ext->len - right->start;
right->map = ext_map_add(ext->map, right->start - ext->start);
right->flags = ext->flags;
} else {
ext_zero(right);
}
}
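/*
 * Worked example (illustrative numbers): splitting ext
 * {start 10, len 10, map 100} by removing start 12, len 3 leaves
 * left {10, 2, map 100} and right {15, 5, map 105}.  Removing
 * start 10, len 3 instead zeroes left and leaves right {13, 7, map 103}.
 */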
#define op_call(sb, ops, arg, which, args...) \
({ \
int _ret; \
_ret = ops->which(sb, arg, ##args); \
scoutfs_inc_counter(sb, ext_op_##which); \
trace_scoutfs_ext_op_##which(sb, ##args, _ret); \
_ret; \
})
struct extent_changes {
struct scoutfs_extent exts[4];
bool ins[4];
u8 nr;
};
static void add_change(struct extent_changes *chg,
struct scoutfs_extent *ext, bool ins)
{
BUILD_BUG_ON(ARRAY_SIZE(chg->ins) != ARRAY_SIZE(chg->exts));
if (ext->len) {
BUG_ON(chg->nr == ARRAY_SIZE(chg->exts));
chg->exts[chg->nr] = *ext;
chg->ins[chg->nr] = !!ins;
chg->nr++;
}
}
static int apply_changes(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, struct extent_changes *chg)
{
int ret = 0;
int err;
int i;
for (i = 0; i < chg->nr; i++) {
if (chg->ins[i])
ret = op_call(sb, ops, arg, insert, chg->exts[i].start,
chg->exts[i].len, chg->exts[i].map,
chg->exts[i].flags);
else
ret = op_call(sb, ops, arg, remove, chg->exts[i].start,
chg->exts[i].len, chg->exts[i].map,
chg->exts[i].flags);
if (ret < 0)
break;
}
while (ret < 0 && --i >= 0) {
if (chg->ins[i])
err = op_call(sb, ops, arg, remove, chg->exts[i].start,
chg->exts[i].len, chg->exts[i].map,
chg->exts[i].flags);
else
err = op_call(sb, ops, arg, insert, chg->exts[i].start,
chg->exts[i].len, chg->exts[i].map,
chg->exts[i].flags);
BUG_ON(err); /* inconsistent */
}
return ret;
}
int scoutfs_ext_next(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, struct scoutfs_extent *ext)
{
int ret;
ret = op_call(sb, ops, arg, next, start, len, ext);
trace_scoutfs_ext_next(sb, start, len, ext, ret);
return ret;
}
/*
* Insert the given extent. EINVAL is returned if there's already an existing
* overlapping extent. This can merge with its neighbours.
*/
int scoutfs_ext_insert(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags)
{
struct extent_changes chg = { .nr = 0 };
struct scoutfs_extent found;
struct scoutfs_extent ins;
int ret;
ins.start = start;
ins.len = len;
ins.map = map;
ins.flags = flags;
/* find right neighbour and check for overlap */
ret = op_call(sb, ops, arg, next, start, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
/* inserting extent must not overlap */
if (found.len && ext_overlap(&ins, found.start, found.len)) {
if (ops->insert_overlap_warn)
scoutfs_err(sb, "inserting extent %llu.%llu overlaps existing %llu.%llu",
start, len, found.start, found.len);
ret = -EINVAL;
goto out;
}
/* merge with right if we can */
if (found.len && scoutfs_ext_can_merge(&ins, &found)) {
ins.len += found.len;
add_change(&chg, &found, false);
}
/* see if we can merge with a left neighbour */
if (start > 0) {
ret = op_call(sb, ops, arg, next, start - 1, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (ret == 0 && scoutfs_ext_can_merge(&found, &ins)) {
ins.start = found.start;
ins.map = found.map;
ins.len += found.len;
add_change(&chg, &found, false);
}
}
add_change(&chg, &ins, true);
ret = apply_changes(sb, ops, arg, &chg);
out:
trace_scoutfs_ext_insert(sb, start, len, map, flags, ret);
return ret;
}
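/*
 * Worked example (illustrative numbers): with {10, 5, map 105} already
 * stored, inserting {5, 5, map 100} finds it as the right neighbour,
 * sees that the starts and maps are contiguous and the flags match,
 * and ends up removing {10, 5} and inserting the merged {5, 10, map 100}.
 */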
/*
* Remove the given extent. The extent to remove must be found entirely
* in an existing extent. If the existing extent is larger, we leave
* behind the remaining extent; the existing extent can be split.
*/
int scoutfs_ext_remove(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len)
{
struct extent_changes chg = { .nr = 0 };
struct scoutfs_extent found;
struct scoutfs_extent left;
struct scoutfs_extent right;
int ret;
ret = op_call(sb, ops, arg, next, start, 1, &found);
if (ret < 0)
goto out;
/* removed extent must be entirely within found */
if (!scoutfs_ext_inside(start, len, &found)) {
scoutfs_err(sb, "error removing extent %llu.%llu, isn't inside existing %llu.%llu",
start, len, found.start, found.len);
ret = -EINVAL;
goto out;
}
ext_split(&found, start, len, &left, &right);
add_change(&chg, &found, false);
add_change(&chg, &left, true);
add_change(&chg, &right, true);
ret = apply_changes(sb, ops, arg, &chg);
out:
trace_scoutfs_ext_remove(sb, start, len, 0, 0, ret);
return ret;
}
/*
* Find and remove the next extent, removing only a portion if the
* extent is larger than the count. Returns ENOENT if it didn't
* find any extents.
*
* This does not search for merge candidates so it's safe to call with
* extents indexed by length.
*/
int scoutfs_ext_alloc(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 count,
struct scoutfs_extent *ext)
{
struct extent_changes chg = { .nr = 0 };
struct scoutfs_extent found;
struct scoutfs_extent ins;
int ret;
ret = op_call(sb, ops, arg, next, start, len, &found);
if (ret < 0)
goto out;
add_change(&chg, &found, false);
if (found.len > count) {
ins.start = found.start + count;
ins.len = found.len - count;
ins.map = ext_map_add(found.map, count);
ins.flags = found.flags;
add_change(&chg, &ins, true);
}
ret = apply_changes(sb, ops, arg, &chg);
out:
if (ret == 0) {
ext->start = found.start;
ext->len = min(found.len, count);
ext->map = found.map;
ext->flags = found.flags;
} else {
ext_zero(ext);
}
trace_scoutfs_ext_alloc(sb, start, len, count, ext, ret);
return ret;
}
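/*
 * Worked allocation example (values invented, and assuming that
 * ext_map_add() advances a non-zero mapping by the given count): if the
 * next extent found is 100.8 -> 500 and count is 4, the caller is
 * returned 100.4 -> 500 and the remainder is reinserted as 104.4 -> 504.
 */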
/*
* Set the map and flags for an extent region, with the magical property
* that extents with map and flags set to 0 are removed.
*
* If we're modifying an existing extent then the modification must be
* fully inside the existing extent. The modification can leave edges
* of the extent which need to be inserted. If the modification extends
* to the end of the existing extent then we need to check for adjacent
* neighbouring extents which might now be able to be merged.
*
* Inserting a new extent is like the case of modifying the entire
* existing extent. We need to check neighbours of the inserted extent
* to see if they can be merged.
*/
int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags)
{
struct extent_changes chg = { .nr = 0 };
struct scoutfs_extent found;
struct scoutfs_extent left;
struct scoutfs_extent right;
struct scoutfs_extent set;
int ret;
set.start = start;
set.len = len;
set.map = map;
set.flags = flags;
/* find extent to remove */
ret = op_call(sb, ops, arg, next, start, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (ret == 0 && ext_overlap(&found, start, len)) {
/* set extent must be entirely within found */
if (!scoutfs_ext_inside(start, len, &found)) {
ret = -EINVAL;
goto out;
}
add_change(&chg, &found, false);
ext_split(&found, start, len, &left, &right);
} else {
ext_zero(&found);
ext_zero(&left);
ext_zero(&right);
}
if (left.len) {
/* inserting split left, won't merge */
add_change(&chg, &left, true);
} else if (start > 0) {
ret = op_call(sb, ops, arg, next, start - 1, 1, &left);
if (ret < 0 && ret != -ENOENT)
goto out;
else if (ret == 0 && scoutfs_ext_can_merge(&left, &set)) {
/* remove found left, merging */
set.start = left.start;
set.map = left.map;
set.len += left.len;
add_change(&chg, &left, false);
}
}
if (right.len) {
/* inserting split right, won't merge */
add_change(&chg, &right, true);
} else {
ret = op_call(sb, ops, arg, next, start + len, 1, &right);
if (ret < 0 && ret != -ENOENT)
goto out;
else if (ret == 0 && scoutfs_ext_can_merge(&set, &right)) {
/* remove found right, merging */
set.len += right.len;
add_change(&chg, &right, false);
}
}
if (set.flags || set.map)
add_change(&chg, &set, true);
ret = apply_changes(sb, ops, arg, &chg);
out:
trace_scoutfs_ext_set(sb, start, len, map, flags, ret);
return ret;
}
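/*
 * Worked example of the zero-map/zero-flags removal case (values
 * invented, and assuming ext_split() advances the right half's non-zero
 * mapping past the split): with an existing extent 100.8 -> 500,
 * scoutfs_ext_set(sb, ops, arg, 102, 2, 0, 0) removes blocks 102 and
 * 103 and leaves 100.2 -> 500 and 104.4 -> 504 behind.
 */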


@@ -1,38 +0,0 @@
#ifndef _SCOUTFS_EXT_H_
#define _SCOUTFS_EXT_H_
struct scoutfs_extent {
u64 start;
u64 len;
u64 map;
u8 flags;
};
struct scoutfs_ext_ops {
int (*next)(struct super_block *sb, void *arg,
u64 start, u64 len, struct scoutfs_extent *ext);
int (*insert)(struct super_block *sb, void *arg,
u64 start, u64 len, u64 map, u8 flags);
int (*remove)(struct super_block *sb, void *arg, u64 start, u64 len,
u64 map, u8 flags);
bool insert_overlap_warn;
};
bool scoutfs_ext_can_merge(struct scoutfs_extent *left,
struct scoutfs_extent *right);
int scoutfs_ext_next(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, struct scoutfs_extent *ext);
int scoutfs_ext_insert(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags);
int scoutfs_ext_remove(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len);
int scoutfs_ext_alloc(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 limit,
struct scoutfs_extent *ext);
int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
void *arg, u64 start, u64 len, u64 map, u8 flags);
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out);
#endif
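/*
 * A minimal caller sketch, not taken from the scoutfs sources: the
 * my_ext_* callbacks (declared but not defined here) and the example
 * function are hypothetical stand-ins for the item-backed
 * implementations that the allocator and data paths supply through
 * struct scoutfs_ext_ops.  The meaning of the start/len search
 * arguments is defined by the ->next callback.
 */
static int my_ext_next(struct super_block *sb, void *arg,
                       u64 start, u64 len, struct scoutfs_extent *ext);
static int my_ext_insert(struct super_block *sb, void *arg,
                         u64 start, u64 len, u64 map, u8 flags);
static int my_ext_remove(struct super_block *sb, void *arg,
                         u64 start, u64 len, u64 map, u8 flags);

static struct scoutfs_ext_ops my_ext_ops = {
        .next = my_ext_next,
        .insert = my_ext_insert,
        .remove = my_ext_remove,
        .insert_overlap_warn = true,
};

/* free a region back into the extent store, then carve some of it out again */
static int my_free_then_alloc(struct super_block *sb, void *arg)
{
        struct scoutfs_extent ext;
        int ret;

        ret = scoutfs_ext_insert(sb, &my_ext_ops, arg, 1000, 8, 0, 0);
        if (ret < 0)
                return ret;

        return scoutfs_ext_alloc(sb, &my_ext_ops, arg, 1000, 1, 4, &ext);
}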


@@ -1,481 +0,0 @@
/*
* Copyright (C) 2019 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include <linux/device.h>
#include <linux/timer.h>
#include <asm/barrier.h>
#include "super.h"
#include "msg.h"
#include "sysfs.h"
#include "server.h"
#include "fence.h"
/*
* Fencing ensures that a given mount can no longer write to the
* metadata or data devices. It's necessary to ensure that it's safe to
* give another mount access to a resource that is currently owned by a
* mount that has stopped responding.
*
* Fencing is performed in collaboration between the currently elected
* quorum leader mount and userspace running on its host. The kernel
* creates fencing requests as it notices that mounts have stopped
* participating. The fence requests are published as directories in
* sysfs. Userspace agents watch for directories, take action, and
* write to files in the directory to indicate that the mount has been
* fenced. Once the mount is fenced the server can reclaim the
* resources previously held by the fenced mount.
*
* The fence requests contain metadata identifying the specific instance
* of the mount that needs to be fenced. This lets a fencing agent
* ensure that a specific mount has been fenced without necessarily
* destroying the node that was hosting it. Maybe the node had rebooted
* and the mount is no longer there, maybe the mount can be force
* unmounted, maybe the node can be configured to isolate the mount from
* the devices.
*
* The fencing mechanism is asynchronous and can fail but the server
* cannot make progress until it completes. If a fence request times
* out the server shuts down in the hope that another instance of a
* server might have more luck fencing a non-responsive mount.
*
* Sources of fencing are fundamentally anchored in shared persistent
* state. It is possible, though unlikely, that servers can fence a
* node and then themselves fail, leaving the next server to try to
* fence the mount again.
*/
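/*
 * For illustration only (the rid below is made up): a fence request for
 * rid 0xabcd shows up under the per-mount sysfs directory as
 *
 *   fence/000000000000abcd/{elapsed_secs,fenced,error,ipv4_addr,reason,rid}
 *
 * An agent reads ipv4_addr, rid, and reason, isolates that mount
 * instance, and writes to "fenced"; if it cannot isolate the mount it
 * writes to "error" and the server shuts down.
 */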
struct fence_info {
struct kset *kset;
struct kobject fence_dir_kobj;
struct workqueue_struct *wq;
wait_queue_head_t waitq;
spinlock_t lock;
struct list_head list;
};
#define DECLARE_FENCE_INFO(sb, name) \
struct fence_info *name = SCOUTFS_SB(sb)->fence_info
struct pending_fence {
struct super_block *sb;
struct scoutfs_sysfs_attrs ssa;
struct list_head entry;
struct timer_list timer;
ktime_t start_kt;
__be32 ipv4_addr;
bool fenced;
bool error;
int reason;
u64 rid;
};
#define FENCE_FROM_KOBJ(kobj) \
container_of(SCOUTFS_SYSFS_ATTRS(kobj), struct pending_fence, ssa)
#define DECLARE_FENCE_FROM_KOBJ(name, kobj) \
struct pending_fence *name = FENCE_FROM_KOBJ(kobj)
static void destroy_fence(struct pending_fence *fence)
{
struct super_block *sb = fence->sb;
scoutfs_sysfs_destroy_attrs(sb, &fence->ssa);
del_timer_sync(&fence->timer);
kfree(fence);
}
static ssize_t elapsed_secs_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
ktime_t now = ktime_get();
struct timeval tv = { 0, };
if (ktime_after(now, fence->start_kt))
tv = ktime_to_timeval(ktime_sub(now, fence->start_kt));
return snprintf(buf, PAGE_SIZE, "%llu", (long long)tv.tv_sec);
}
SCOUTFS_ATTR_RO(elapsed_secs);
static ssize_t fenced_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%u", !!fence->fenced);
}
/*
* any write to the fenced file from userspace indicates that the mount
* has been safely fenced and can no longer write to the shared device.
*/
static ssize_t fenced_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
DECLARE_FENCE_INFO(fence->sb, fi);
if (!fence->fenced) {
del_timer_sync(&fence->timer);
fence->fenced = true;
wake_up(&fi->waitq);
}
return count;
}
SCOUTFS_ATTR_RW(fenced);
static ssize_t error_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%u", !!fence->error);
}
/*
* The fencing agent can tell us that it was unable to fence the given mount.
* We can't continue if the mount can't be isolated so we shut down the
* server.
*/
static ssize_t error_store(struct kobject *kobj, struct kobj_attribute *attr, const char *buf,
size_t count)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
struct super_block *sb = fence->sb;
DECLARE_FENCE_INFO(fence->sb, fi);
if (!fence->error) {
fence->error = true;
scoutfs_err(sb, "error indicated by fence action for rid %016llx", fence->rid);
wake_up(&fi->waitq);
}
return count;
}
SCOUTFS_ATTR_RW(error);
static ssize_t ipv4_addr_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%pI4", &fence->ipv4_addr);
}
SCOUTFS_ATTR_RO(ipv4_addr);
static ssize_t reason_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
unsigned r = fence->reason;
char *str = "unknown";
static char *reasons[] = {
[SCOUTFS_FENCE_CLIENT_RECOVERY] = "client_recovery",
[SCOUTFS_FENCE_CLIENT_RECONNECT] = "client_reconnect",
[SCOUTFS_FENCE_QUORUM_BLOCK_LEADER] = "quorum_block_leader",
};
if (r < ARRAY_SIZE(reasons) && reasons[r])
str = reasons[r];
return snprintf(buf, PAGE_SIZE, "%s", str);
}
SCOUTFS_ATTR_RO(reason);
static ssize_t rid_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
return snprintf(buf, PAGE_SIZE, "%016llx", fence->rid);
}
SCOUTFS_ATTR_RO(rid);
static struct attribute *fence_attrs[] = {
SCOUTFS_ATTR_PTR(elapsed_secs),
SCOUTFS_ATTR_PTR(fenced),
SCOUTFS_ATTR_PTR(error),
SCOUTFS_ATTR_PTR(ipv4_addr),
SCOUTFS_ATTR_PTR(reason),
SCOUTFS_ATTR_PTR(rid),
NULL,
};
#define FENCE_TIMEOUT_MS (MSEC_PER_SEC * 30)
static void fence_timeout(struct timer_list *timer)
{
struct pending_fence *fence = from_timer(fence, timer, timer);
struct super_block *sb = fence->sb;
DECLARE_FENCE_INFO(sb, fi);
fence->error = true;
scoutfs_err(sb, "fence request for rid %016llx was not serviced in %lums, raising error",
fence->rid, FENCE_TIMEOUT_MS);
wake_up(&fi->waitq);
}
int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int reason)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
int ret;
fence = kzalloc(sizeof(struct pending_fence), GFP_NOFS);
if (!fence) {
ret = -ENOMEM;
goto out;
}
fence->sb = sb;
scoutfs_sysfs_init_attrs(sb, &fence->ssa);
fence->start_kt = ktime_get();
fence->ipv4_addr = ipv4_addr;
fence->fenced = false;
fence->error = false;
fence->reason = reason;
fence->rid = rid;
ret = scoutfs_sysfs_create_attrs_parent(sb, &fi->kset->kobj,
&fence->ssa, fence_attrs,
"%016llx", rid);
if (ret < 0) {
kfree(fence);
goto out;
}
timer_setup(&fence->timer, fence_timeout, 0);
fence->timer.expires = jiffies + msecs_to_jiffies(FENCE_TIMEOUT_MS);
add_timer(&fence->timer);
spin_lock(&fi->lock);
list_add_tail(&fence->entry, &fi->list);
spin_unlock(&fi->lock);
out:
return ret;
}
/*
* Give the caller the rid of the next fence request which has been
* fenced or has hit an error. There is no resume position because the
* caller either frees the request it's given or shuts down.
*/
int scoutfs_fence_next(struct super_block *sb, u64 *rid, int *reason, bool *error)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
int ret = -ENOENT;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->fenced || fence->error) {
*rid = fence->rid;
*reason = fence->reason;
*error = fence->error;
ret = 0;
break;
}
}
spin_unlock(&fi->lock);
return ret;
}
int scoutfs_fence_reason_pending(struct super_block *sb, int reason)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
bool pending = false;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->reason == reason) {
pending = true;
break;
}
}
spin_unlock(&fi->lock);
return pending;
}
int scoutfs_fence_free(struct super_block *sb, u64 rid)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
int ret = -ENOENT;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->rid == rid) {
list_del_init(&fence->entry);
ret = 0;
break;
}
}
spin_unlock(&fi->lock);
if (ret == 0) {
destroy_fence(fence);
wake_up(&fi->waitq);
}
return ret;
}
static bool all_fenced(struct fence_info *fi, bool *error)
{
struct pending_fence *fence;
bool all = true;
*error = false;
spin_lock(&fi->lock);
list_for_each_entry(fence, &fi->list, entry) {
if (fence->error) {
*error = true;
all = true;
break;
}
if (!fence->fenced) {
all = false;
break;
}
}
spin_unlock(&fi->lock);
return all;
}
/*
* The caller waits for all the current requests to be fenced, but not
* necessarily reclaimed.
*/
int scoutfs_fence_wait_fenced(struct super_block *sb, long timeout_jiffies)
{
DECLARE_FENCE_INFO(sb, fi);
bool error;
long ret;
ret = wait_event_timeout(fi->waitq, all_fenced(fi, &error), timeout_jiffies);
if (ret == 0)
ret = -ETIMEDOUT;
else if (ret > 0)
ret = 0;
else if (error)
ret = -EIO;
return ret;
}
/*
* This must be called early during startup so that no other subsystem
* can call fence_start while we're waiting
* for testing fence requests to complete.
*/
int scoutfs_fence_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_mount_options opts;
struct fence_info *fi;
int ret;
/* can only fence if we can be elected by quorum */
scoutfs_options_read(sb, &opts);
if (opts.quorum_slot_nr == -1) {
ret = 0;
goto out;
}
fi = kzalloc(sizeof(struct fence_info), GFP_KERNEL);
if (!fi) {
ret = -ENOMEM;
goto out;
}
init_waitqueue_head(&fi->waitq);
spin_lock_init(&fi->lock);
INIT_LIST_HEAD(&fi->list);
sbi->fence_info = fi;
fi->kset = kset_create_and_add("fence", NULL, scoutfs_sysfs_sb_dir(sb));
if (!fi->kset) {
ret = -ENOMEM;
goto out;
}
fi->wq = alloc_workqueue("scoutfs_fence",
WQ_UNBOUND | WQ_NON_REENTRANT, 0);
if (!fi->wq) {
ret = -ENOMEM;
goto out;
}
ret = 0;
out:
if (ret)
scoutfs_fence_destroy(sb);
return ret;
}
/*
* Tear down all pending fence requests because the server is shutting down.
*/
void scoutfs_fence_stop(struct super_block *sb)
{
DECLARE_FENCE_INFO(sb, fi);
struct pending_fence *fence;
do {
spin_lock(&fi->lock);
fence = list_first_entry_or_null(&fi->list, struct pending_fence, entry);
if (fence)
list_del_init(&fence->entry);
spin_unlock(&fi->lock);
if (fence) {
destroy_fence(fence);
wake_up(&fi->waitq);
}
} while (fence);
}
void scoutfs_fence_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct fence_info *fi = SCOUTFS_SB(sb)->fence_info;
struct pending_fence *fence;
struct pending_fence *tmp;
if (fi) {
if (fi->wq)
destroy_workqueue(fi->wq);
list_for_each_entry_safe(fence, tmp, &fi->list, entry)
destroy_fence(fence);
if (fi->kset)
kset_unregister(fi->kset);
kfree(fi);
sbi->fence_info = NULL;
}
}


@@ -1,20 +0,0 @@
#ifndef _SCOUTFS_FENCE_H_
#define _SCOUTFS_FENCE_H_
enum {
SCOUTFS_FENCE_CLIENT_RECOVERY,
SCOUTFS_FENCE_CLIENT_RECONNECT,
SCOUTFS_FENCE_QUORUM_BLOCK_LEADER,
};
int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int reason);
int scoutfs_fence_next(struct super_block *sb, u64 *rid, int *reason, bool *error);
int scoutfs_fence_reason_pending(struct super_block *sb, int reason);
int scoutfs_fence_free(struct super_block *sb, u64 rid);
int scoutfs_fence_wait_fenced(struct super_block *sb, long timeout_jiffies);
int scoutfs_fence_setup(struct super_block *sb);
void scoutfs_fence_stop(struct super_block *sb);
void scoutfs_fence_destroy(struct super_block *sb);
#endif
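/*
 * A hedged sketch, not taken from the server code, of how the calls
 * declared above might be driven during recovery; real error handling
 * and the actual resource reclaim are elided, and the example function
 * name is invented.
 */
static int example_fence_and_reclaim(struct super_block *sb, u64 rid, __be32 addr)
{
        int reason;
        bool error;
        int ret;

        ret = scoutfs_fence_start(sb, rid, addr, SCOUTFS_FENCE_CLIENT_RECOVERY);
        if (ret < 0)
                return ret;

        /* returns -ETIMEDOUT if no agent ever writes to the fenced file */
        ret = scoutfs_fence_wait_fenced(sb, msecs_to_jiffies(60 * MSEC_PER_SEC));
        if (ret < 0)
                return ret;

        /* reclaim each request that has been fenced, stopping on errors */
        while (scoutfs_fence_next(sb, &rid, &reason, &error) == 0) {
                if (error)
                        return -EIO;    /* the real server shuts down here */
                /* ... reclaim the resources that were held by rid ... */
                scoutfs_fence_free(sb, rid);
        }

        return 0;
}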


@@ -27,14 +27,8 @@
#include "file.h"
#include "inode.h"
#include "per_task.h"
#include "omap.h"
/*
* Start a high level file read. We check for offline extents in the
* read region here so that we only check the extents once. We use the
* dio count to prevent releasing while we're reading after we've
* checked the extents.
*/
/* TODO: Direct I/O, AIO */
ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos)
{
@@ -48,32 +42,32 @@ ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
int ret;
retry:
/* protect checked extents from release */
mutex_lock(&inode->i_mutex);
atomic_inc(&inode->i_dio_count);
mutex_unlock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
/* protect checked extents from stage/release */
mutex_lock(&inode->i_mutex);
mutex_lock(&si->s_i_mutex);
atomic_inc(&inode->i_dio_count);
mutex_unlock(&si->s_i_mutex);
mutex_unlock(&inode->i_mutex);
ret = scoutfs_data_wait_check_iov(inode, iov, nr_segs, pos,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ,
&dw, inode_lock);
if (ret != 0)
goto out;
} else {
WARN_ON_ONCE(true);
}
ret = generic_file_aio_read(iocb, iov, nr_segs, pos);
out:
inode_dio_done(inode);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
if (scoutfs_per_task_del(&si->pt_data_lock, &pt_ent))
inode_dio_done(inode);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
if (scoutfs_data_wait_found(&dw)) {
@@ -107,11 +101,6 @@ retry:
if (ret)
goto out;
if (scoutfs_inode_worm_denied(inode)) {
ret = -EACCES;
goto out;
}
ret = scoutfs_complete_truncate(inode, inode_lock);
if (ret)
goto out;

File diff suppressed because it is too large


@@ -1,59 +1,55 @@
#ifndef _SCOUTFS_FOREST_H_
#define _SCOUTFS_FOREST_H_
struct scoutfs_alloc;
struct scoutfs_radix_allocator;
struct scoutfs_block_writer;
struct scoutfs_block;
#include "btree.h"
/* caller gives an item to the callback */
enum {
FIC_FS_ROOT = (1 << 0),
FIC_FINALIZED = (1 << 1),
};
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb, struct scoutfs_key *key, u64 seq,
u8 flags, void *val, int val_len, int fic, void *arg);
int scoutfs_forest_lookup(struct super_block *sb, struct scoutfs_key *key,
struct kvec *val, struct scoutfs_lock *lock);
int scoutfs_forest_lookup_exact(struct super_block *sb,
struct scoutfs_key *key, struct kvec *val,
struct scoutfs_lock *lock);
int scoutfs_forest_next(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *last, struct kvec *val,
struct scoutfs_lock *lock);
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *next);
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg);
int scoutfs_forest_set_bloom_bits(struct super_block *sb,
struct scoutfs_lock *lock);
void scoutfs_forest_set_max_seq(struct super_block *sb, u64 max_seq);
int scoutfs_forest_get_max_seq(struct super_block *sb,
struct scoutfs_super_block *super,
u64 *seq);
int scoutfs_forest_insert_list(struct super_block *sb,
struct scoutfs_btree_item_list *lst);
int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id);
void scoutfs_forest_inc_inode_count(struct super_block *sb);
void scoutfs_forest_dec_inode_count(struct super_block *sb);
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count);
int scoutfs_forest_prev(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *first, struct kvec *val,
struct scoutfs_lock *lock);
int scoutfs_forest_create(struct super_block *sb, struct scoutfs_key *key,
struct kvec *val, struct scoutfs_lock *lock);
int scoutfs_forest_create_force(struct super_block *sb,
struct scoutfs_key *key, struct kvec *val,
struct scoutfs_lock *lock);
int scoutfs_forest_update(struct super_block *sb, struct scoutfs_key *key,
struct kvec *val, struct scoutfs_lock *lock);
int scoutfs_forest_delete_dirty(struct super_block *sb,
struct scoutfs_key *key);
int scoutfs_forest_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_forest_delete_force(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_forest_delete_save(struct super_block *sb,
struct scoutfs_key *key,
struct list_head *list,
struct scoutfs_lock *lock);
int scoutfs_forest_restore(struct super_block *sb, struct list_head *list,
struct scoutfs_lock *lock);
void scoutfs_forest_free_batch(struct super_block *sb, struct list_head *list);
void scoutfs_forest_init_btrees(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_log_trees *lt);
void scoutfs_forest_get_btrees(struct super_block *sb,
struct scoutfs_log_trees *lt);
/* > 0 error codes */
#define SCOUTFS_DELTA_COMBINED 1 /* src val was combined, drop src */
#define SCOUTFS_DELTA_COMBINED_NULL 2 /* combined val has no data, drop both */
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
void *src, int src_len);
void scoutfs_forest_clear_lock(struct super_block *sb,
struct scoutfs_lock *lock);
int scoutfs_forest_setup(struct super_block *sb);
void scoutfs_forest_start(struct super_block *sb);
void scoutfs_forest_stop(struct super_block *sb);
void scoutfs_forest_destroy(struct super_block *sb);
#endif

File diff suppressed because it is too large


@@ -1,49 +1,15 @@
#ifndef _SCOUTFS_HASH_H_
#define _SCOUTFS_HASH_H_
/*
* We're using FNV1a for now. It's fine. Ish.
*
* The longer term plan is xxh3 but it looks like it'll take just a bit
* more time to be declared stable and then it needs to be ported to the
* kernel.
*
* - https://fastcompression.blogspot.com/2019/03/presenting-xxh3.html
* - https://github.com/Cyan4973/xxHash/releases/tag/v0.7.4
*/
static inline u32 fnv1a32(const void *data, unsigned int len)
{
u32 hash = 0x811c9dc5;
while (len--) {
hash ^= *(u8 *)(data++);
hash *= 0x01000193;
}
return hash;
}
static inline u64 fnv1a64(const void *data, unsigned int len)
{
u64 hash = 0xcbf29ce484222325ULL;
while (len--) {
hash ^= *(u8 *)(data++);
hash *= 0x100000001b3ULL;
}
return hash;
}
static inline u32 scoutfs_hash32(const void *data, unsigned int len)
{
return fnv1a32(data, len);
}
#include <linux/crc32c.h>
/* XXX replace with xxhash */
static inline u64 scoutfs_hash64(const void *data, unsigned int len)
{
return fnv1a64(data, len);
unsigned int half = (len + 1) / 2;
return crc32c(~0, data, half) |
((u64)crc32c(~0, data + len - half, half) << 32);
}
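/*
 * Worked example of the crc32c-based hash above: for len == 5, half is
 * 3, so the first crc32c covers bytes [0, 2] and the second covers
 * bytes [2, 4]; the middle byte is hashed twice and the two 32-bit
 * results are packed into the low and high halves of the returned u64.
 */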
#endif

File diff suppressed because it is too large


@@ -4,12 +4,17 @@
#include "key.h"
#include "lock.h"
#include "per_task.h"
#include "count.h"
#include "format.h"
#include "data.h"
struct scoutfs_lock;
#define SCOUTFS_INODE_NR_INDICES 2
struct scoutfs_inode_allocator {
spinlock_t lock;
u64 ino;
u64 nr;
};
struct scoutfs_inode_info {
/* read or initialized for each inode instance */
@@ -22,19 +27,12 @@ struct scoutfs_inode_info {
u64 online_blocks;
u64 offline_blocks;
u32 flags;
struct timespec crtime;
struct timespec worm_expire;
/* Prevent readers from racing with xattr_set */
seqlock_t seqlock;
/*
* Protects per-inode extent items, most particularly readers
* who want to serialize writers without holding i_mutex. (only
* used in data.c, it's the only place that understands file
* extent items)
/* We can't use inode->i_mutex to protect i_dio_count due to lock
* ordering in the kernel between i_mutex and mmap_sem. Use this
* as an inner lock.
*/
struct rw_semaphore extent_sem;
struct mutex s_i_mutex;
/*
* The in-memory item info caches the current index item values
@@ -44,26 +42,22 @@ struct scoutfs_inode_info {
*/
struct mutex item_mutex;
bool have_item;
u64 item_majors[SCOUTFS_INODE_NR_INDICES];
u32 item_minors[SCOUTFS_INODE_NR_INDICES];
u64 item_majors[SCOUTFS_INODE_INDEX_NR];
u32 item_minors[SCOUTFS_INODE_INDEX_NR];
/* updated at on each new lock acquisition */
atomic64_t last_refreshed;
/* reset for every new inode instance */
struct scoutfs_inode_allocator ino_alloc;
/* initialized once for slab object */
seqcount_t seqcount;
bool staging; /* holder of i_mutex is staging */
struct scoutfs_per_task pt_data_lock;
struct scoutfs_data_waitq data_waitq;
struct rw_semaphore xattr_rwsem;
struct list_head writeback_entry;
struct scoutfs_lock_coverage ino_lock_cov;
/* drop if i_count hits 0, allows drop while invalidate holds coverage */
bool drop_invalidated;
struct llist_node iput_llnode;
atomic_t iput_count;
struct rb_node writeback_node;
struct inode inode;
};
@@ -82,15 +76,11 @@ struct inode *scoutfs_alloc_inode(struct super_block *sb);
void scoutfs_destroy_inode(struct inode *inode);
int scoutfs_drop_inode(struct inode *inode);
void scoutfs_evict_inode(struct inode *inode);
void scoutfs_inode_queue_iput(struct inode *inode);
int scoutfs_orphan_inode(struct inode *inode);
#define SCOUTFS_IGF_LINKED (1 << 0) /* enoent if nlink == 0 */
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf, int igf);
struct inode *scoutfs_ilookup_nowait(struct super_block *sb, u64 ino);
struct inode *scoutfs_ilookup_nowait_nonewfree(struct super_block *sb, u64 ino);
struct inode *scoutfs_iget(struct super_block *sb, u64 ino);
struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino);
void scoutfs_inode_init_key(struct scoutfs_key *key, u64 ino);
void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
u32 minor, u64 ino);
int scoutfs_inode_index_start(struct super_block *sb, u64 *seq);
@@ -100,18 +90,21 @@ int scoutfs_inode_index_prepare_ino(struct super_block *sb,
struct list_head *list, u64 ino,
umode_t mode);
int scoutfs_inode_index_try_lock_hold(struct super_block *sb,
struct list_head *list, u64 seq, bool allocing);
struct list_head *list, u64 seq,
const struct scoutfs_item_count cnt);
int scoutfs_inode_index_lock_hold(struct inode *inode, struct list_head *list,
bool set_data_seq, bool allocing);
bool set_data_seq,
const struct scoutfs_item_count cnt);
void scoutfs_inode_index_unlock(struct super_block *sb, struct list_head *list);
int scoutfs_dirty_inode_item(struct inode *inode, struct scoutfs_lock *lock);
void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
struct list_head *ind_locks);
int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret);
int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, dev_t rdev,
u64 ino, struct scoutfs_lock *lock, struct inode **inode_ret);
int scoutfs_alloc_ino(struct inode *parent, u64 *ino);
struct inode *scoutfs_new_inode(struct super_block *sb, struct inode *dir,
umode_t mode, dev_t rdev, u64 ino,
struct scoutfs_lock *lock);
void scoutfs_inode_set_meta_seq(struct inode *inode);
void scoutfs_inode_set_data_seq(struct inode *inode);
@@ -124,29 +117,23 @@ u64 scoutfs_inode_data_version(struct inode *inode);
void scoutfs_inode_get_onoff(struct inode *inode, s64 *on, s64 *off);
int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock);
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock);
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock,
int flags);
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat);
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
void scoutfs_inode_schedule_orphan_dwork(struct super_block *sb);
int scoutfs_scan_orphans(struct super_block *sb);
void scoutfs_inode_queue_writeback(struct inode *inode);
int scoutfs_inode_walk_writeback(struct super_block *sb, bool write);
u64 scoutfs_last_ino(struct super_block *sb);
void scoutfs_inode_exit(void);
int scoutfs_inode_init(void);
int scoutfs_inode_setup(struct super_block *sb);
void scoutfs_inode_start(struct super_block *sb);
void scoutfs_inode_orphan_stop(struct super_block *sb);
void scoutfs_inode_flush_iput(struct super_block *sb);
void scoutfs_inode_destroy(struct super_block *sb);
void scoutfs_inode_get_worm(struct inode *inode, struct timespec *ts);
void scoutfs_inode_set_worm(struct inode *inode, u64 expire_sec, u32 expire_nsec);
bool scoutfs_inode_worm_denied(struct inode *inode);
#endif


@@ -12,7 +12,6 @@
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/uaccess.h>
#include <linux/compiler.h>
#include <linux/uio.h>
@@ -21,7 +20,6 @@
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/aio.h>
#include <linux/list_sort.h>
#include "format.h"
#include "key.h"
@@ -29,7 +27,6 @@
#include "ioctl.h"
#include "super.h"
#include "inode.h"
#include "item.h"
#include "forest.h"
#include "data.h"
#include "client.h"
@@ -37,10 +34,6 @@
#include "trans.h"
#include "xattr.h"
#include "hash.h"
#include "srch.h"
#include "alloc.h"
#include "server.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
@@ -116,7 +109,7 @@ static long scoutfs_ioc_walk_inodes(struct file *file, unsigned long arg)
for (nr = 0; nr < walk.nr_entries; ) {
ret = scoutfs_item_next(sb, &key, &last_key, NULL, 0, lock);
ret = scoutfs_forest_next(sb, &key, &last_key, NULL, lock);
if (ret < 0 && ret != -ENOENT)
break;
@@ -275,11 +268,12 @@ out:
static long scoutfs_ioc_release(struct file *file, unsigned long arg)
{
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_ioctl_release args;
struct scoutfs_lock *lock = NULL;
u64 sblock;
u64 eblock;
loff_t start;
loff_t end_inc;
u64 online;
u64 offline;
u64 isize;
@@ -290,11 +284,9 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
trace_scoutfs_ioc_release(sb, scoutfs_ino(inode), &args);
if (args.length == 0)
if (args.count == 0)
return 0;
if (((args.offset + args.length) < args.offset) ||
(args.offset & SCOUTFS_BLOCK_SM_MASK) ||
(args.length & SCOUTFS_BLOCK_SM_MASK))
if ((args.block + args.count) < args.block)
return -EINVAL;
@@ -303,6 +295,7 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
return ret;
mutex_lock(&inode->i_mutex);
mutex_lock(&si->s_i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -327,30 +320,30 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
inode_dio_wait(inode);
/* drop all clean and dirty cached blocks in the range */
truncate_inode_pages_range(&inode->i_data, args.offset,
args.offset + args.length - 1);
start = args.block << SCOUTFS_BLOCK_SHIFT;
end_inc = ((args.block + args.count) << SCOUTFS_BLOCK_SHIFT) - 1;
truncate_inode_pages_range(&inode->i_data, start, end_inc);
sblock = args.offset >> SCOUTFS_BLOCK_SM_SHIFT;
eblock = (args.offset + args.length - 1) >> SCOUTFS_BLOCK_SM_SHIFT;
ret = scoutfs_data_truncate_items(sb, inode, scoutfs_ino(inode),
sblock,
eblock, true,
args.block,
args.block + args.count - 1, true,
lock);
if (ret == 0) {
scoutfs_inode_get_onoff(inode, &online, &offline);
isize = i_size_read(inode);
if (online == 0 && isize) {
sblock = (isize + SCOUTFS_BLOCK_SM_SIZE - 1)
>> SCOUTFS_BLOCK_SM_SHIFT;
start = (isize + SCOUTFS_BLOCK_SIZE - 1)
>> SCOUTFS_BLOCK_SHIFT;
ret = scoutfs_data_truncate_items(sb, inode,
scoutfs_ino(inode),
sblock, U64_MAX,
start, U64_MAX,
false, lock);
}
}
out:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
mutex_unlock(&si->s_i_mutex);
mutex_unlock(&inode->i_mutex);
mnt_drop_write_file(file);
@@ -363,6 +356,7 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_data_wait_err args;
struct scoutfs_lock *lock = NULL;
struct scoutfs_inode_info *si;
struct inode *inode = NULL;
u64 sblock;
u64 eblock;
@@ -381,19 +375,21 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
trace_scoutfs_ioc_data_wait_err(sb, &args);
sblock = args.offset >> SCOUTFS_BLOCK_SM_SHIFT;
eblock = (args.offset + args.count - 1) >> SCOUTFS_BLOCK_SM_SHIFT;
sblock = args.offset >> SCOUTFS_BLOCK_SHIFT;
eblock = (args.offset + args.count - 1) >> SCOUTFS_BLOCK_SHIFT;
if (sblock > eblock)
return -EINVAL;
inode = scoutfs_ilookup_nowait_nonewfree(sb, args.ino);
inode = scoutfs_ilookup(sb, args.ino);
if (!inode) {
ret = -ESTALE;
goto out;
}
si = SCOUTFS_I(inode);
mutex_lock(&inode->i_mutex);
mutex_lock(&si->s_i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -411,6 +407,7 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
unlock:
mutex_unlock(&si->s_i_mutex);
mutex_unlock(&inode->i_mutex);
iput(inode);
out:
@@ -466,24 +463,23 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
trace_scoutfs_ioc_stage(sb, scoutfs_ino(inode), &args);
end_size = args.offset + args.length;
end_size = args.offset + args.count;
/* verify arg constraints that aren't dependent on file */
if (args.length < 0 || (end_size < args.offset) ||
args.offset & SCOUTFS_BLOCK_SM_MASK) {
if (args.count < 0 || (end_size < args.offset) ||
args.offset & SCOUTFS_BLOCK_MASK)
return -EINVAL;
}
if (args.length == 0)
if (args.count == 0)
return 0;
/* the iocb is really only used for the file pointer :P */
init_sync_kiocb(&kiocb, file);
kiocb.ki_pos = args.offset;
kiocb.ki_left = args.length;
kiocb.ki_nbytes = args.length;
kiocb.ki_left = args.count;
kiocb.ki_nbytes = args.count;
iov.iov_base = (void __user *)(unsigned long)args.buf_ptr;
iov.iov_len = args.length;
iov.iov_len = args.count;
ret = mnt_want_write_file(file);
if (ret)
@@ -505,7 +501,7 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
(file->f_flags & (O_APPEND | O_DIRECT | O_DSYNC)) ||
IS_SYNC(file->f_mapping->host) ||
(end_size > isize) ||
((end_size & SCOUTFS_BLOCK_SM_MASK) && (end_size != isize))) {
((end_size & SCOUTFS_BLOCK_MASK) && (end_size != isize))) {
ret = -EINVAL;
goto out;
}
@@ -522,11 +518,11 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
written = 0;
do {
ret = generic_file_buffered_write(&kiocb, &iov, 1, pos, &pos,
args.length, written);
args.count, written);
BUG_ON(ret == -EIOCBQUEUED);
if (ret > 0)
written += ret;
} while (ret > 0 && written < args.length);
} while (ret > 0 && written < args.count);
si->staging = false;
current->backing_dev_info = NULL;
@@ -543,17 +539,19 @@ out:
static long scoutfs_ioc_stat_more(struct file *file, unsigned long arg)
{
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct scoutfs_ioctl_stat_more stm;
if (get_user(stm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
stm.valid_bytes = min_t(u64, stm.valid_bytes,
sizeof(struct scoutfs_ioctl_stat_more));
stm.meta_seq = scoutfs_inode_meta_seq(inode);
stm.data_seq = scoutfs_inode_data_seq(inode);
stm.data_version = scoutfs_inode_data_version(inode);
scoutfs_inode_get_onoff(inode, &stm.online_blocks, &stm.offline_blocks);
stm.crtime_sec = si->crtime.tv_sec;
stm.crtime_nsec = si->crtime.tv_nsec;
if (copy_to_user((void __user *)arg, &stm, sizeof(stm)))
if (copy_to_user((void __user *)arg, &stm, stm.valid_bytes))
return -EFAULT;
return 0;
@@ -653,17 +651,13 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
goto out;
mutex_lock(&inode->i_mutex);
mutex_lock(&si->s_i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (ret)
goto unlock;
if (scoutfs_inode_worm_denied(inode)) {
ret = -EACCES;
goto unlock;
}
/* can only change size/dv on untouched regular files */
if ((sm.i_size != 0 || sm.data_version != 0) &&
((!S_ISREG(inode->i_mode) ||
@@ -681,7 +675,8 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
/* setting only so we don't see 0 data seq with nonzero data_version */
set_data_seq = sm.data_version != 0 ? true : false;
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq, false);
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq,
SIC_SETATTR_MORE());
if (ret)
goto unlock;
@@ -691,8 +686,6 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
i_size_write(inode, sm.i_size);
inode->i_ctime.tv_sec = sm.ctime_sec;
inode->i_ctime.tv_nsec = sm.ctime_nsec;
si->crtime.tv_sec = sm.crtime_sec;
si->crtime.tv_nsec = sm.crtime_nsec;
scoutfs_update_inode_item(inode, lock, &ind_locks);
ret = 0;
@@ -701,6 +694,7 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
unlock:
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
mutex_unlock(&si->s_i_mutex);
mutex_unlock(&inode->i_mutex);
mnt_drop_write_file(file);
out:
@@ -775,20 +769,18 @@ out:
* but we don't check that the callers xattr name contains the tag and
* search for it regardless.
*/
static long scoutfs_ioc_search_xattrs(struct file *file, unsigned long arg)
static long scoutfs_ioc_find_xattrs(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_search_xattrs __user *usx = (void __user *)arg;
struct scoutfs_ioctl_search_xattrs sx;
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_srch_rb_root sroot;
struct scoutfs_srch_rb_node *snode;
u64 __user *uinos;
struct rb_node *node;
struct scoutfs_ioctl_find_xattrs __user *ufx = (void __user *)arg;
struct scoutfs_ioctl_find_xattrs fx;
struct scoutfs_lock *lock = NULL;
struct scoutfs_key last;
struct scoutfs_key key;
char *name = NULL;
bool done = false;
u64 prev_ino = 0;
u64 total = 0;
int total = 0;
u64 hash;
u64 ino;
int ret;
if (!(file->f_mode & FMODE_READ)) {
@@ -801,73 +793,67 @@ static long scoutfs_ioc_search_xattrs(struct file *file, unsigned long arg)
goto out;
}
if (copy_from_user(&sx, usx, sizeof(sx))) {
if (copy_from_user(&fx, ufx, sizeof(fx))) {
ret = -EFAULT;
goto out;
}
uinos = (u64 __user *)sx.inodes_ptr;
if (sx.name_bytes > SCOUTFS_XATTR_MAX_NAME_LEN) {
if (fx.name_bytes > SCOUTFS_XATTR_MAX_NAME_LEN) {
ret = -EINVAL;
goto out;
}
if (sx.nr_inodes == 0 || sx.last_ino < sx.next_ino) {
ret = 0;
goto out;
}
name = kmalloc(sx.name_bytes, GFP_KERNEL);
name = kmalloc(fx.name_bytes, GFP_KERNEL);
if (!name) {
ret = -ENOMEM;
goto out;
}
if (copy_from_user(name, (void __user *)sx.name_ptr, sx.name_bytes)) {
if (copy_from_user(name, (void __user *)fx.name_ptr, fx.name_bytes)) {
ret = -EFAULT;
goto out;
}
if (scoutfs_xattr_parse_tags(sb, name, sx.name_bytes, &tgs) < 0 ||
!tgs.srch) {
ret = -EINVAL;
goto out;
}
hash = scoutfs_hash64(name, fx.name_bytes);
scoutfs_xattr_index_key(&key, hash, fx.next_ino, 0);
scoutfs_xattr_index_key(&last, hash, U64_MAX, U64_MAX);
ino = 0;
ret = scoutfs_srch_search_xattrs(sb, &sroot,
scoutfs_hash64(name, sx.name_bytes),
sx.next_ino, sx.last_ino, &done);
ret = scoutfs_lock_xattr_index(sb, SCOUTFS_LOCK_READ, 0, hash, &lock);
if (ret < 0)
goto out;
prev_ino = 0;
scoutfs_srch_foreach_rb_node(snode, node, &sroot) {
if (prev_ino == snode->ino)
continue;
while (fx.nr_inodes) {
if (put_user(snode->ino, uinos + total)) {
ret = -EFAULT;
ret = scoutfs_forest_next(sb, &key, &last, NULL, lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
break;
}
prev_ino = snode->ino;
if (++total == sx.nr_inodes)
break;
/* xattrs hashes can collide and add multiple entries */
if (le64_to_cpu(key.skxi_ino) != ino) {
ino = le64_to_cpu(key.skxi_ino);
if (put_user(ino, (u64 __user *)fx.inodes_ptr)) {
ret = -EFAULT;
break;
}
fx.inodes_ptr += sizeof(u64);
fx.nr_inodes--;
total++;
ret = 0;
}
scoutfs_key_inc(&key);
}
sx.output_flags = 0;
if (done && total == sroot.nr)
sx.output_flags |= SCOUTFS_SEARCH_XATTRS_OFLAG_END;
if (put_user(sx.output_flags, &usx->output_flags))
ret = -EFAULT;
else
ret = 0;
scoutfs_srch_destroy_rb_root(&sroot);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
out:
kfree(name);
return ret ?: total;
}
@@ -875,534 +861,23 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super;
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_ioctl_statfs_more sfm;
int ret;
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super)
return -ENOMEM;
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
if (get_user(sfm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
sfm.valid_bytes = min_t(u64, sfm.valid_bytes,
sizeof(struct scoutfs_ioctl_statfs_more));
sfm.fsid = le64_to_cpu(super->hdr.fsid);
sfm.rid = sbi->rid;
sfm.total_meta_blocks = le64_to_cpu(super->total_meta_blocks);
sfm.total_data_blocks = le64_to_cpu(super->total_data_blocks);
sfm.reserved_meta_blocks = scoutfs_server_reserved_meta_blocks(sb);
ret = scoutfs_client_get_last_seq(sb, &sfm.committed_seq);
if (ret)
goto out;
if (copy_to_user((void __user *)arg, &sfm, sizeof(sfm)))
ret = -EFAULT;
else
ret = 0;
out:
kfree(super);
return ret;
}
struct copy_alloc_detail_args {
struct scoutfs_ioctl_alloc_detail_entry __user *uade;
u64 nr;
u64 copied;
};
static int copy_alloc_detail_to_user(struct super_block *sb, void *arg,
int owner, u64 id, bool meta, bool avail,
u64 blocks)
{
struct copy_alloc_detail_args *args = arg;
struct scoutfs_ioctl_alloc_detail_entry ade;
if (args->copied == args->nr)
return -EOVERFLOW;
ade.blocks = blocks;
ade.id = id;
ade.meta = !!meta;
ade.avail = !!avail;
if (copy_to_user(&args->uade[args->copied], &ade, sizeof(ade)))
if (copy_to_user((void __user *)arg, &sfm, sfm.valid_bytes))
return -EFAULT;
args->copied++;
return 0;
}
static long scoutfs_ioc_alloc_detail(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_alloc_detail __user *uad = (void __user *)arg;
struct scoutfs_ioctl_alloc_detail ad;
struct copy_alloc_detail_args args;
if (copy_from_user(&ad, uad, sizeof(ad)))
return -EFAULT;
args.uade = (struct scoutfs_ioctl_alloc_detail_entry __user *)
(uintptr_t)ad.entries_ptr;
args.nr = ad.entries_nr;
args.copied = 0;
return scoutfs_alloc_foreach(sb, copy_alloc_detail_to_user, &args) ?:
args.copied;
}
static long scoutfs_ioc_move_blocks(struct file *file, unsigned long arg)
{
struct inode *to = file_inode(file);
struct super_block *sb = to->i_sb;
struct scoutfs_ioctl_move_blocks __user *umb = (void __user *)arg;
struct scoutfs_ioctl_move_blocks mb;
struct file *from_file;
struct inode *from;
int ret;
if (copy_from_user(&mb, umb, sizeof(mb)))
return -EFAULT;
if (mb.len == 0)
return 0;
if (mb.from_off + mb.len < mb.from_off ||
mb.to_off + mb.len < mb.to_off)
return -EOVERFLOW;
from_file = fget(mb.from_fd);
if (!from_file)
return -EBADF;
from = file_inode(from_file);
if (from == to) {
ret = -EINVAL;
goto out;
}
if (from->i_sb != sb) {
ret = -EXDEV;
goto out;
}
if (mb.flags & SCOUTFS_IOC_MB_UNKNOWN) {
ret = -EINVAL;
goto out;
}
ret = mnt_want_write_file(file);
if (ret < 0)
goto out;
ret = scoutfs_data_move_blocks(from, mb.from_off, mb.len,
to, mb.to_off, !!(mb.flags & SCOUTFS_IOC_MB_STAGE),
mb.data_version);
mnt_drop_write_file(file);
out:
fput(from_file);
return ret;
}
static long scoutfs_ioc_resize_devices(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_resize_devices __user *urd = (void __user *)arg;
struct scoutfs_ioctl_resize_devices rd;
struct scoutfs_net_resize_devices nrd;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&rd, urd, sizeof(rd))) {
ret = -EFAULT;
goto out;
}
nrd.new_total_meta_blocks = cpu_to_le64(rd.new_total_meta_blocks);
nrd.new_total_data_blocks = cpu_to_le64(rd.new_total_data_blocks);
ret = scoutfs_client_resize_devices(sb, &nrd);
out:
return ret;
}
struct xattr_total_entry {
struct rb_node node;
struct scoutfs_ioctl_xattr_total xt;
u64 fs_seq;
u64 fs_total;
u64 fs_count;
u64 fin_seq;
u64 fin_total;
s64 fin_count;
u64 log_seq;
u64 log_total;
s64 log_count;
};
static int cmp_xt_entry_name(const struct xattr_total_entry *a,
const struct xattr_total_entry *b)
{
return scoutfs_cmp_u64s(a->xt.name[0], b->xt.name[0]) ?:
scoutfs_cmp_u64s(a->xt.name[1], b->xt.name[1]) ?:
scoutfs_cmp_u64s(a->xt.name[2], b->xt.name[2]);
}
/*
* Record the contribution of the three classes of logged items we can
* see: the item in the fs_root, items from finalized log btrees, and
* items from active log btrees. Once we have the full set the caller
* can decide which of the items contribute to the total it sends to the
* user.
*/
static int read_xattr_total_item(struct super_block *sb, struct scoutfs_key *key,
u64 seq, u8 flags, void *val, int val_len, int fic, void *arg)
{
struct scoutfs_xattr_totl_val *tval = val;
struct xattr_total_entry *ent;
struct xattr_total_entry rd;
struct rb_root *root = arg;
struct rb_node *parent;
struct rb_node **node;
int cmp;
rd.xt.name[0] = le64_to_cpu(key->skxt_a);
rd.xt.name[1] = le64_to_cpu(key->skxt_b);
rd.xt.name[2] = le64_to_cpu(key->skxt_c);
/* find entry matching name */
node = &root->rb_node;
parent = NULL;
cmp = -1;
while (*node) {
parent = *node;
ent = container_of(*node, struct xattr_total_entry, node);
/* sort merge items by key then newest to oldest */
cmp = cmp_xt_entry_name(&rd, ent);
if (cmp < 0)
node = &(*node)->rb_left;
else if (cmp > 0)
node = &(*node)->rb_right;
else
break;
}
/* allocate and insert new node if we need to */
if (cmp != 0) {
ent = kzalloc(sizeof(*ent), GFP_KERNEL);
if (!ent)
return -ENOMEM;
memcpy(&ent->xt.name, &rd.xt.name, sizeof(ent->xt.name));
rb_link_node(&ent->node, parent, node);
rb_insert_color(&ent->node, root);
}
if (fic & FIC_FS_ROOT) {
ent->fs_seq = seq;
ent->fs_total = le64_to_cpu(tval->total);
ent->fs_count = le64_to_cpu(tval->count);
} else if (fic & FIC_FINALIZED) {
ent->fin_seq = seq;
ent->fin_total += le64_to_cpu(tval->total);
ent->fin_count += le64_to_cpu(tval->count);
} else {
ent->log_seq = seq;
ent->log_total += le64_to_cpu(tval->total);
ent->log_count += le64_to_cpu(tval->count);
}
scoutfs_inc_counter(sb, totl_read_item);
return 0;
}
/* these are always _safe, node stores next */
#define for_each_xt_ent(ent, node, root) \
for (node = rb_first(root); \
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
node = rb_next(node), 1); )
#define for_each_xt_ent_reverse(ent, node, root) \
for (node = rb_last(root); \
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
node = rb_prev(node), 1); )
static void free_xt_ent(struct rb_root *root, struct xattr_total_entry *ent)
{
rb_erase(&ent->node, root);
kfree(ent);
}
static void free_all_xt_ents(struct rb_root *root)
{
struct xattr_total_entry *ent;
struct rb_node *node;
for_each_xt_ent(ent, node, root)
free_xt_ent(root, ent);
}
/*
* Starting from the caller's pos_name, copy the names, totals, and
* counts for the .totl. tagged xattrs in the system sorted by their
* name until the user's buffer is full. This only sees xattrs that
* have been committed. It doesn't use locking to force commits and
* block writers so it can be a little bit out of date with respect to
* dirty xattrs in memory across the system.
*
* Our reader has to be careful because the log btree merging code can
* write partial results to the fs_root. This means that a reader can
* see both cases where new finalized logs should be applied to the old
* fs items and where old finalized logs have already been applied to
* the partially merged fs items. Currently active logged items are
* always applied on top of all cases.
*
* These cases are differentiated with a combination of sequence numbers
* in items, the count of contributing xattrs, and a flag
* differentiating finalized and active logged items. This lets us
* recognize all cases, including when finalized logs were merged and
* deleted the fs item.
*
* We're allocating a tracking struct for each totl name we see while
* traversing the item btrees. The forest reader is providing the items
* it finds in leaf blocks that contain the search key. In the worst
* case all of these blocks are full and none of the items overlap. At
* most, figure order a thousand names per mount. But in practice many
* of these factors fall away: leaf blocks aren't fill, leaf items
* overlap, there aren't finalized log btrees, and not all mounts are
* actively changing totals. We're much more likely to only read a
* leaf block's worth of totals that have been long since merged into
* the fs_root.
*/
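/*
 * Worked example of the rules above (numbers invented): if the fs_root
 * item has seq 10 with total 100 / count 4, a finalized log item has
 * seq 12 contributing +20 / +1, and an active log item contributes
 * -5 / 0, the reader reports total 115 and count 5.  Had the finalized
 * seq been 8 (already merged into the fs item), its contribution would
 * be skipped and the result would be 95 and 4.
 */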
static long scoutfs_ioc_read_xattr_totals(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_read_xattr_totals __user *urxt = (void __user *)arg;
struct scoutfs_ioctl_read_xattr_totals rxt;
struct scoutfs_ioctl_xattr_total __user *uxt;
struct xattr_total_entry *ent;
struct scoutfs_key key;
struct scoutfs_key bloom_key;
struct scoutfs_key start;
struct scoutfs_key end;
struct rb_root root = RB_ROOT;
struct rb_node *node;
int count = 0;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&rxt, urxt, sizeof(rxt))) {
ret = -EFAULT;
goto out;
}
uxt = (void __user *)rxt.totals_ptr;
if ((rxt.totals_ptr & (sizeof(__u64) - 1)) ||
(rxt.totals_bytes < sizeof(struct scoutfs_ioctl_xattr_total))) {
ret = -EINVAL;
goto out;
}
scoutfs_key_set_zeros(&bloom_key);
bloom_key.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_xattr_init_totl_key(&start, rxt.pos_name);
while (rxt.totals_bytes >= sizeof(struct scoutfs_ioctl_xattr_total)) {
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
if (scoutfs_key_compare(&start, &end) > 0)
break;
key = start;
ret = scoutfs_forest_read_items(sb, &key, &bloom_key, &start, &end,
read_xattr_total_item, &root);
if (ret < 0) {
if (ret == -ESTALE) {
free_all_xt_ents(&root);
continue;
}
goto out;
}
if (RB_EMPTY_ROOT(&root))
break;
/* trim totals that fall outside of the consistent range */
for_each_xt_ent(ent, node, &root) {
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
if (scoutfs_key_compare(&key, &start) < 0) {
free_xt_ent(&root, ent);
} else {
break;
}
}
for_each_xt_ent_reverse(ent, node, &root) {
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
if (scoutfs_key_compare(&key, &end) > 0) {
free_xt_ent(&root, ent);
} else {
break;
}
}
/* copy resulting unique non-zero totals to userspace */
for_each_xt_ent(ent, node, &root) {
if (rxt.totals_bytes < sizeof(ent->xt))
break;
/* start with the fs item if we have it */
if (ent->fs_seq != 0) {
ent->xt.total = ent->fs_total;
ent->xt.count = ent->fs_count;
scoutfs_inc_counter(sb, totl_read_fs);
}
/* apply finalized logs if they're newer or creating */
if (((ent->fs_seq != 0) && (ent->fin_seq > ent->fs_seq)) ||
((ent->fs_seq == 0) && (ent->fin_count > 0))) {
ent->xt.total += ent->fin_total;
ent->xt.count += ent->fin_count;
scoutfs_inc_counter(sb, totl_read_finalized);
}
/* always apply active logs which must be newer than fs and finalized */
if (ent->log_seq > 0) {
ent->xt.total += ent->log_total;
ent->xt.count += ent->log_count;
scoutfs_inc_counter(sb, totl_read_logged);
}
if (ent->xt.total != 0 || ent->xt.count != 0) {
if (copy_to_user(uxt, &ent->xt, sizeof(ent->xt))) {
ret = -EFAULT;
goto out;
}
uxt++;
rxt.totals_bytes -= sizeof(ent->xt);
count++;
scoutfs_inc_counter(sb, totl_read_copied);
}
free_xt_ent(&root, ent);
}
/* continue after the last possible key read */
start = end;
scoutfs_key_inc(&start);
}
ret = 0;
out:
free_all_xt_ents(&root);
return ret ?: count;
}
static long scoutfs_ioc_get_allocated_inos(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_get_allocated_inos __user *ugai = (void __user *)arg;
struct scoutfs_ioctl_get_allocated_inos gai;
struct scoutfs_lock *lock = NULL;
struct scoutfs_key key;
struct scoutfs_key end;
u64 __user *uinos;
u64 bytes;
u64 ino;
int nr;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&gai, ugai, sizeof(gai))) {
ret = -EFAULT;
goto out;
}
if ((gai.inos_ptr & (sizeof(__u64) - 1)) || (gai.inos_bytes < sizeof(__u64))) {
ret = -EINVAL;
goto out;
}
scoutfs_inode_init_key(&key, gai.start_ino);
scoutfs_inode_init_key(&end, gai.start_ino | SCOUTFS_LOCK_INODE_GROUP_MASK);
uinos = (void __user *)gai.inos_ptr;
bytes = gai.inos_bytes;
nr = 0;
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, gai.start_ino, &lock);
if (ret < 0)
goto out;
while (bytes >= sizeof(*uinos)) {
ret = scoutfs_item_next(sb, &key, &end, NULL, 0, lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
break;
}
if (key.sk_zone != SCOUTFS_FS_ZONE) {
ret = 0;
break;
}
/* all fs items are owned by allocated inodes, and _first is always ino */
ino = le64_to_cpu(key._sk_first);
if (put_user(ino, uinos)) {
ret = -EFAULT;
break;
}
uinos++;
bytes -= sizeof(*uinos);
if (++nr == INT_MAX)
break;
scoutfs_inode_init_key(&key, ino + 1);
}
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
out:
return ret ?: nr;
}
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
@@ -1422,22 +897,12 @@ long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
return scoutfs_ioc_setattr_more(file, arg);
case SCOUTFS_IOC_LISTXATTR_HIDDEN:
return scoutfs_ioc_listxattr_hidden(file, arg);
case SCOUTFS_IOC_SEARCH_XATTRS:
return scoutfs_ioc_search_xattrs(file, arg);
case SCOUTFS_IOC_FIND_XATTRS:
return scoutfs_ioc_find_xattrs(file, arg);
case SCOUTFS_IOC_STATFS_MORE:
return scoutfs_ioc_statfs_more(file, arg);
case SCOUTFS_IOC_DATA_WAIT_ERR:
return scoutfs_ioc_data_wait_err(file, arg);
case SCOUTFS_IOC_ALLOC_DETAIL:
return scoutfs_ioc_alloc_detail(file, arg);
case SCOUTFS_IOC_MOVE_BLOCKS:
return scoutfs_ioc_move_blocks(file, arg);
case SCOUTFS_IOC_RESIZE_DEVICES:
return scoutfs_ioc_resize_devices(file, arg);
case SCOUTFS_IOC_READ_XATTR_TOTALS:
return scoutfs_ioc_read_xattr_totals(file, arg);
case SCOUTFS_IOC_GET_ALLOCATED_INOS:
return scoutfs_ioc_get_allocated_inos(file, arg);
}
return -ENOTTY;


@@ -13,7 +13,8 @@
* This is enforced by pahole scripting in external build environments.
*/
#define SCOUTFS_IOCTL_MAGIC 0xE8 /* arbitrarily chosen hole in ioctl-number.rst */
/* XXX I have no idea how these are chosen. */
#define SCOUTFS_IOCTL_MAGIC 's'
/*
* Packed scoutfs keys rarely cross the ioctl boundary so we have a
@@ -77,7 +78,7 @@ struct scoutfs_ioctl_walk_inodes {
__u8 _pad[11]; /* padded to align walk_inodes_entry total size */
};
enum scoutfs_ino_walk_seq_type {
enum {
SCOUTFS_IOC_WALK_INODES_META_SEQ = 0,
SCOUTFS_IOC_WALK_INODES_DATA_SEQ,
SCOUTFS_IOC_WALK_INODES_UNKNOWN,
@@ -87,7 +88,7 @@ enum scoutfs_ino_walk_seq_type {
* Adds entries to the user's buffer for each inode that is found in the
* given index between the first and last positions.
*/
#define SCOUTFS_IOC_WALK_INODES _IOW(SCOUTFS_IOCTL_MAGIC, 1, \
#define SCOUTFS_IOC_WALK_INODES _IOR(SCOUTFS_IOCTL_MAGIC, 1, \
struct scoutfs_ioctl_walk_inodes)
/*
@@ -162,11 +163,11 @@ struct scoutfs_ioctl_ino_path_result {
__u64 dir_pos;
__u16 path_bytes;
__u8 _pad[6];
__u8 path[];
__u8 path[0];
};
/* Get a single path from the root to the given inode number */
#define SCOUTFS_IOC_INO_PATH _IOW(SCOUTFS_IOCTL_MAGIC, 2, \
#define SCOUTFS_IOC_INO_PATH _IOR(SCOUTFS_IOCTL_MAGIC, 2, \
struct scoutfs_ioctl_ino_path)
/*
@@ -175,8 +176,8 @@ struct scoutfs_ioctl_ino_path_result {
* an offline record is left behind to trigger demand staging if the
* file is read.
*
* The starting file offset and number of bytes to release must be in
* multiples of 4KB.
* The starting block offset and number of blocks to release are in
* units of 4KB blocks.
*
* The specified range can extend past i_size and can straddle sparse
* regions or blocks that are already offline. The only change it makes
@@ -192,8 +193,8 @@ struct scoutfs_ioctl_ino_path_result {
* presentation of the data in the file.
*/
struct scoutfs_ioctl_release {
__u64 offset;
__u64 length;
__u64 block;
__u64 count;
__u64 data_version;
};
@@ -204,7 +205,7 @@ struct scoutfs_ioctl_stage {
__u64 data_version;
__u64 buf_ptr;
__u64 offset;
__s32 length;
__s32 count;
__u32 _pad;
};
@@ -214,16 +215,23 @@ struct scoutfs_ioctl_stage {
/*
* Give the user inode fields that are not otherwise visible. statx()
* isn't always available and xattrs are relatively expensive.
*
 * @valid_bytes stores the number of bytes that are valid in the
 * structure. The caller sets this to the size of the struct that they
 * understand. The kernel then fills in and copies back the smaller of
 * the sizes that it and the caller understand. The caller can tell
 * that a field is set if all of its bytes fall within the valid_bytes
 * that the kernel set on return.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_stat_more {
__u64 valid_bytes;
__u64 meta_seq;
__u64 data_seq;
__u64 data_version;
__u64 online_blocks;
__u64 offline_blocks;
__u64 crtime_sec;
__u32 crtime_nsec;
__u8 _pad[4];
};
#define SCOUTFS_IOC_STAT_MORE _IOR(SCOUTFS_IOCTL_MAGIC, 5, \
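
To make the valid_bytes handshake above concrete, here is a minimal userspace sketch (an illustration only, assuming a local copy of scoutfs_ioctl.h): the caller advertises the struct size it understands and then only trusts fields that lie entirely within the valid_bytes the kernel reports back.

/* Hypothetical sketch of the valid_bytes convention for STAT_MORE. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"

static int get_data_version(int fd, uint64_t *data_version)
{
	struct scoutfs_ioctl_stat_more stm;

	memset(&stm, 0, sizeof(stm));
	stm.valid_bytes = sizeof(stm);	/* size this caller understands */

	if (ioctl(fd, SCOUTFS_IOC_STAT_MORE, &stm) < 0)
		return -1;

	/* a field is usable only if all its bytes fall within valid_bytes */
	if (stm.valid_bytes < offsetof(struct scoutfs_ioctl_stat_more, data_version) +
			      sizeof(stm.data_version))
		return -1;

	*data_version = stm.data_version;
	return 0;
}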
@@ -251,16 +259,15 @@ struct scoutfs_ioctl_data_waiting {
__u8 _pad[6];
};
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U64_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U8_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING _IOW(SCOUTFS_IOCTL_MAGIC, 6, \
#define SCOUTFS_IOC_DATA_WAITING _IOR(SCOUTFS_IOCTL_MAGIC, 6, \
struct scoutfs_ioctl_data_waiting)
/*
* If i_size is set then data_version must be non-zero. If the offline
* flag is set then i_size must be set and a offline extent will be
* created from offset 0 to i_size. The time fields are always applied
* to the inode.
* created from offset 0 to i_size.
*/
struct scoutfs_ioctl_setattr_more {
__u64 data_version;
@@ -268,12 +275,11 @@ struct scoutfs_ioctl_setattr_more {
__u64 flags;
__u64 ctime_sec;
__u32 ctime_nsec;
__u32 crtime_nsec;
__u64 crtime_sec;
__u8 _pad[4];
};
#define SCOUTFS_IOC_SETATTR_MORE_OFFLINE (1 << 0)
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U64_MAX << 1)
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U8_MAX << 1)
#define SCOUTFS_IOC_SETATTR_MORE _IOW(SCOUTFS_IOCTL_MAGIC, 7, \
struct scoutfs_ioctl_setattr_more)
@@ -285,78 +291,57 @@ struct scoutfs_ioctl_listxattr_hidden {
__u32 hash_pos;
};
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOWR(SCOUTFS_IOCTL_MAGIC, 8, \
struct scoutfs_ioctl_listxattr_hidden)
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOR(SCOUTFS_IOCTL_MAGIC, 8, \
struct scoutfs_ioctl_listxattr_hidden)
/*
* Return the inode numbers of inodes which might contain the given
* xattr. The inode may not have a set xattr with that name, the caller
* must check the returned inodes to see if they match.
 * named xattr. The inode may not have a set xattr with that name; the
 * caller must check the returned inodes to see if they match.
*
* @next_ino: The next inode number that could be returned. Initialized
* to 0 when first searching and set to one past the last inode number
* returned to continue searching.
* @last_ino: The last inode number that could be returned. U64_MAX to
* find all inodes.
* @name_ptr: The address of the name of the xattr to search for. It is
* not null terminated.
* @inodes_ptr: The address of the array of uint64_t inode numbers in
* which to store inode numbers that may contain the xattr. EFAULT may
* be returned if this address is not naturally aligned.
* @output_flags: Set as success is returned. If an error is returned
* then this field is undefined and should not be read.
* @name_ptr: The address of the name of the xattr to search for. It does
* not need to be null terminated.
* @inodes_ptr: The address of the array of uint64_t inode numbers in which
* to store inode numbers that may contain the xattr. EFAULT may be returned
* if this address is not naturally aligned.
* @name_bytes: The number of non-null bytes found in the name at name_ptr.
* @nr_inodes: The number of elements in the array found at inodes_ptr.
* @name_bytes: The number of non-null bytes found in the name at
* name_ptr.
*
* This requires the CAP_SYS_ADMIN capability and will return -EPERM if
* it's not granted.
*
* The number of inode numbers stored in the inodes_ptr array is
* returned. If nr_inodes is 0 or last_ino is less than next_ino then 0
* will be immediately returned.
*
* Partial progress can be returned if an error is hit or if nr_inodes
* was larger than the internal limit on the number of inodes returned
* in a search pass. The _END output flag is set if all the results
* including last_ino were searched in this pass.
*
* It's valuable to provide a large inodes array so that all the results
* can be found in one search pass and _END can be set. There are
* significant constant costs for performing each search pass.
*/
struct scoutfs_ioctl_search_xattrs {
struct scoutfs_ioctl_find_xattrs {
__u64 next_ino;
__u64 last_ino;
__u64 name_ptr;
__u64 inodes_ptr;
__u64 output_flags;
__u64 nr_inodes;
__u16 name_bytes;
__u8 _pad[6];
__u16 nr_inodes;
__u8 _pad[4];
};
/* set in output_flags if returned inodes reached last_ino */
#define SCOUTFS_SEARCH_XATTRS_OFLAG_END (1ULL << 0)
#define SCOUTFS_IOC_SEARCH_XATTRS _IOW(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_search_xattrs)
#define SCOUTFS_IOC_FIND_XATTRS _IOR(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_find_xattrs)
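
The comment above describes a batched search. As an illustrative sketch only (the two struct variants in this hunk differ in their trailing fields; this assumes the scoutfs_ioctl_find_xattrs spelling and a local copy of scoutfs_ioctl.h), a caller might loop like this, resuming after the last inode number returned and re-checking each candidate:

/* Hypothetical sketch: walk candidate inodes that may carry a named
 * xattr.  The kernel can return false positives, so each inode must be
 * checked again by the caller. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"

static int find_xattr_inodes(int fd, const char *name)
{
	uint64_t inos[128];		/* naturally aligned u64 array */
	struct scoutfs_ioctl_find_xattrs fx;
	int nr;
	int i;

	memset(&fx, 0, sizeof(fx));
	fx.next_ino = 0;			/* start from the beginning */
	fx.name_ptr = (uintptr_t)name;		/* not null terminated */
	fx.name_bytes = strlen(name);
	fx.inodes_ptr = (uintptr_t)inos;
	fx.nr_inodes = 128;

	for (;;) {
		nr = ioctl(fd, SCOUTFS_IOC_FIND_XATTRS, &fx);
		if (nr <= 0)
			return nr;	/* 0: no more candidates, <0: error */

		for (i = 0; i < nr; i++)
			printf("candidate ino %llu\n",
			       (unsigned long long)inos[i]);

		/* resume just past the last inode number returned */
		fx.next_ino = inos[nr - 1] + 1;
	}
}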
/*
* Give the user information about the filesystem.
*
* @committed_seq: All seqs up to and including this seq have been
* committed. Can be compared with meta_seq and data_seq from inodes in
* stat_more to discover if changes have been committed to disk.
 * @valid_bytes stores the number of bytes that are valid in the
 * structure. The caller sets this to the size of the struct that they
 * understand. The kernel then fills in and copies back the smaller of
 * the sizes that it and the caller understand. The caller can tell
 * that a field is set if all of its bytes fall within the valid_bytes
 * that the kernel set on return.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_statfs_more {
__u64 valid_bytes;
__u64 fsid;
__u64 rid;
__u64 committed_seq;
__u64 total_meta_blocks;
__u64 total_data_blocks;
__u64 reserved_meta_blocks;
};
} __packed;
#define SCOUTFS_IOC_STATFS_MORE _IOR(SCOUTFS_IOCTL_MAGIC, 10, \
struct scoutfs_ioctl_statfs_more)
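
A hedged userspace sketch of statfs_more (illustration only; it assumes a local copy of scoutfs_ioctl.h and the field names shown in this hunk, using the same valid_bytes handshake as stat_more):

/* Hypothetical sketch: fetch filesystem-wide details via STATFS_MORE. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"

static int print_statfs_more(int fd)
{
	struct scoutfs_ioctl_statfs_more sfm;

	memset(&sfm, 0, sizeof(sfm));
	sfm.valid_bytes = sizeof(sfm);	/* size this caller understands */

	if (ioctl(fd, SCOUTFS_IOC_STATFS_MORE, &sfm) < 0)
		return -1;

	printf("fsid %llx rid %llx meta %llu data %llu blocks\n",
	       (unsigned long long)sfm.fsid,
	       (unsigned long long)sfm.rid,
	       (unsigned long long)sfm.total_meta_blocks,
	       (unsigned long long)sfm.total_data_blocks);
	return 0;
}

Where the committed_seq field is present it can be compared with an inode's meta_seq and data_seq from stat_more to tell whether that inode's changes have been committed to disk.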
@@ -376,187 +361,7 @@ struct scoutfs_ioctl_data_wait_err {
__s64 err;
};
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOW(SCOUTFS_IOCTL_MAGIC, 11, \
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOR(SCOUTFS_IOCTL_MAGIC, 11, \
struct scoutfs_ioctl_data_wait_err)
struct scoutfs_ioctl_alloc_detail {
__u64 entries_ptr;
__u64 entries_nr;
};
struct scoutfs_ioctl_alloc_detail_entry {
__u64 id;
__u64 blocks;
__u8 type;
__u8 meta:1,
avail:1;
__u8 __bit_pad:6;
__u8 __pad[6];
};
#define SCOUTFS_IOC_ALLOC_DETAIL _IOW(SCOUTFS_IOCTL_MAGIC, 12, \
struct scoutfs_ioctl_alloc_detail)
/*
* Move extents from one regular file to another at a different offset,
* on the same file system.
*
* from_fd specifies the source file and the ioctl is called on the
* destination file. Both files must have write access. from_off specifies
* the byte offset in the source, to_off is the byte offset in the
* destination, and len is the number of bytes in the region to move. All of
* the offsets and lengths must be in multiples of 4KB, except in the case
* where the from_off + len ends at the i_size of the source
* file. data_version is only used when STAGE flag is set (see below). flags
* field is currently only used to optionally specify STAGE behavior.
*
* This interface only moves extents which are block granular, it does
* not perform RMW of sub-block byte extents and it does not overwrite
* existing extents in the destination. It will split extents in the
* source.
*
* Only extents within i_size on the source are moved. The destination
* i_size will be updated if extents are moved beyond its current
* i_size. The i_size update will maintain final partial blocks in the
* source.
*
* If STAGE flag is not set, it will return an error if either of the files
* have offline extents. It will return 0 when all of the extents in the
* source region have been moved to the destination. Moving extents updates
* the ctime, mtime, meta_seq, data_seq, and data_version fields of both the
* source and destination inodes. If an error is returned then partial
* progress may have been made and inode fields may have been updated.
*
* If STAGE flag is set, as above except destination range must be in an
* offline extent. Fields are updated only for source inode.
*
* Errors specific to this interface include:
*
* EINVAL: from_off, len, or to_off aren't a multiple of 4KB; the source
* and destination files are the same inode; either the source or
* destination is not a regular file; the destination file has
* an existing overlapping extent (if STAGE flag not set); the
* destination range is not in an offline extent (if STAGE set).
* EOVERFLOW: either from_off + len or to_off + len exceeded 64bits.
* EBADF: from_fd isn't a valid open file descriptor.
* EXDEV: the source and destination files are in different filesystems.
* EISDIR: either the source or destination is a directory.
* ENODATA: either the source or destination file have offline extents and
* STAGE flag is not set.
* ESTALE: data_version does not match destination data_version.
*/
#define SCOUTFS_IOC_MB_STAGE (1 << 0)
#define SCOUTFS_IOC_MB_UNKNOWN (U64_MAX << 1)
struct scoutfs_ioctl_move_blocks {
__u64 from_fd;
__u64 from_off;
__u64 len;
__u64 to_off;
__u64 data_version;
__u64 flags;
};
#define SCOUTFS_IOC_MOVE_BLOCKS _IOW(SCOUTFS_IOCTL_MAGIC, 13, \
struct scoutfs_ioctl_move_blocks)
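
The move_blocks contract above is dense; a minimal userspace sketch may help (illustration only, assuming a local copy of scoutfs_ioctl.h and the field names in this hunk). The ioctl is issued on the destination file and names the source through from_fd; with SCOUTFS_IOC_MB_STAGE set the destination range must be offline and data_version must match.

/* Hypothetical sketch: move 4KB-aligned extents from src_fd into the
 * destination file at to_off.  Offsets and length must be multiples of
 * 4KB unless the source region ends at the source file's i_size. */
#include <stdint.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"

static int move_extents(int dst_fd, int src_fd, uint64_t from_off,
			uint64_t len, uint64_t to_off)
{
	struct scoutfs_ioctl_move_blocks mb = {
		.from_fd = src_fd,
		.from_off = from_off,
		.len = len,
		.to_off = to_off,
		.data_version = 0,	/* only checked with the STAGE flag */
		.flags = 0,		/* or SCOUTFS_IOC_MB_STAGE to stage */
	};

	/* called on the destination; both files need write access */
	return ioctl(dst_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
}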
struct scoutfs_ioctl_resize_devices {
__u64 new_total_meta_blocks;
__u64 new_total_data_blocks;
};
#define SCOUTFS_IOC_RESIZE_DEVICES \
_IOW(SCOUTFS_IOCTL_MAGIC, 14, struct scoutfs_ioctl_resize_devices)
#define SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR 3
/*
* Copy global totals of .totl. xattr value payloads to the user. This
* only sees xattrs which have been committed and this doesn't force
 * commits of dirty data throughout the system. This can be out of sync
 * by the number of xattrs that may be dirty in open transactions that
 * are being built throughout the system.
*
* pos_name: The array name of the first total that can be returned.
* The name is derived from the key of the xattrs that contribute to the
* total. For xattrs with a .totl.1.2.3 key, the pos_name[] should be
* {1, 2, 3}.
*
* totals_ptr: An aligned pointer to a buffer that will be filled with
* an array of scoutfs_ioctl_xattr_total structs for each total copied.
*
* totals_bytes: The size of the buffer in bytes. There must be room
* for at least one struct element so that returning 0 can promise that
* there were no more totals to copy after the pos_name.
*
* The number of copied elements is returned and 0 is returned if there
* were no more totals to copy after the pos_name.
*
* In addition to the usual errnos (EIO, EINVAL, EPERM, EFAULT) this
* adds:
*
* EINVAL: The totals_ buffer was not aligned or was not large enough
* for a single struct entry.
*/
struct scoutfs_ioctl_read_xattr_totals {
__u64 pos_name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
__u64 totals_ptr;
__u64 totals_bytes;
};
/*
* An individual total that is given to userspace. The total is the
* sum of all the values in the xattr payloads matching the name. The
* count is the number of xattrs, not number of files, contributing to
* the total.
*/
struct scoutfs_ioctl_xattr_total {
__u64 name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
__u64 total;
__u64 count;
};
#define SCOUTFS_IOC_READ_XATTR_TOTALS \
_IOW(SCOUTFS_IOCTL_MAGIC, 15, struct scoutfs_ioctl_read_xattr_totals)
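
A hedged sketch of the totals interface above (illustration only, assuming a local copy of scoutfs_ioctl.h): callers resume by bumping the position name past the last total that was copied.

/* Hypothetical sketch: dump committed ".totl." xattr totals in batches. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"

static int dump_xattr_totals(int fd)
{
	struct scoutfs_ioctl_xattr_total tots[64];
	struct scoutfs_ioctl_read_xattr_totals rxt = {
		.pos_name = { 0, 0, 0 },	/* start from the first total */
		.totals_ptr = (uintptr_t)tots,	/* must be naturally aligned */
		.totals_bytes = sizeof(tots),	/* room for at least one entry */
	};
	int nr;
	int i;

	for (;;) {
		nr = ioctl(fd, SCOUTFS_IOC_READ_XATTR_TOTALS, &rxt);
		if (nr <= 0)
			return nr;	/* 0: no more totals, <0: error */

		for (i = 0; i < nr; i++)
			printf("totl.%llu.%llu.%llu total %llu count %llu\n",
			       (unsigned long long)tots[i].name[0],
			       (unsigned long long)tots[i].name[1],
			       (unsigned long long)tots[i].name[2],
			       (unsigned long long)tots[i].total,
			       (unsigned long long)tots[i].count);

		/* resume just past the last name copied (carry between
		 * name elements is ignored for brevity in this sketch) */
		memcpy(rxt.pos_name, tots[nr - 1].name, sizeof(rxt.pos_name));
		rxt.pos_name[2]++;
	}
}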
/*
* This fills the caller's inos array with inode numbers that are in use
* after the start ino, within an internal inode group.
*
* This only makes a promise about the state of the inode numbers within
* the first and last numbers returned by one call. At one time, all of
* those inodes were still allocated. They could have changed before
* the call returned. And any numbers outside of the first and last
* (or single) are undefined.
*
* This doesn't iterate over all allocated inodes, it only probes a
* single group that the start inode is within. This interface was
* first introduced to support tests that needed to find out about a
* specific inode, while having some other similarly niche uses. It is
* unsuitable for a consistent iteration over all the inode numbers in
* use.
*
* This test of inode items doesn't serialize with the inode lifetime
* mechanism. It only tells you the numbers of inodes that were once
* active in the system and haven't yet been fully deleted. The inode
* numbers returned could have been in the process of being deleted and
* were already unreachable even before the call started.
*
* @start_ino: the first inode number that could be returned
* @inos_ptr: pointer to an aligned array of 64bit inode numbers
* @inos_bytes: the number of bytes available in the inos_ptr array
*
* Returns errors or the count of inode numbers returned, quite possibly
* including 0.
*/
struct scoutfs_ioctl_get_allocated_inos {
__u64 start_ino;
__u64 inos_ptr;
__u64 inos_bytes;
};
#define SCOUTFS_IOC_GET_ALLOCATED_INOS \
_IOW(SCOUTFS_IOCTL_MAGIC, 16, struct scoutfs_ioctl_get_allocated_inos)
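
A short sketch of the probe described above (illustration only, assuming a local copy of scoutfs_ioctl.h): only the range between the first and last numbers returned by a single call carries any promise, and a return of 0 is normal.

/* Hypothetical sketch: probe the inode group containing start_ino for
 * numbers that were in use at some point during the call. */
#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"

static int probe_allocated_inos(int fd, uint64_t start_ino)
{
	uint64_t inos[64];	/* aligned array of 64bit inode numbers */
	struct scoutfs_ioctl_get_allocated_inos gai = {
		.start_ino = start_ino,
		.inos_ptr = (uintptr_t)inos,
		.inos_bytes = sizeof(inos),
	};
	int nr;
	int i;

	nr = ioctl(fd, SCOUTFS_IOC_GET_ALLOCATED_INOS, &gai);
	if (nr < 0)
		return nr;

	for (i = 0; i < nr; i++)
		printf("ino %llu was allocated\n", (unsigned long long)inos[i]);

	return nr;	/* quite possibly 0 */
}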
#endif

File diff suppressed because it is too large

View File

@@ -1,43 +0,0 @@
#ifndef _SCOUTFS_ITEM_H_
#define _SCOUTFS_ITEM_H_
int scoutfs_item_lookup(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_lookup_within(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_lookup_exact(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock);
int scoutfs_item_next(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *last, void *val, int val_len,
struct scoutfs_lock *lock);
int scoutfs_item_dirty(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock);
int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_delete_force(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_lock *lock);
u64 scoutfs_item_dirty_pages(struct super_block *sb);
int scoutfs_item_write_dirty(struct super_block *sb);
int scoutfs_item_write_done(struct super_block *sb);
bool scoutfs_item_range_cached(struct super_block *sb,
struct scoutfs_key *start,
struct scoutfs_key *end, bool *dirty);
void scoutfs_item_invalidate(struct super_block *sb, struct scoutfs_key *start,
struct scoutfs_key *end);
int scoutfs_item_setup(struct super_block *sb);
void scoutfs_item_destroy(struct super_block *sb);
#endif

View File

@@ -78,14 +78,6 @@ static inline void scoutfs_key_set_zeros(struct scoutfs_key *key)
key->_sk_second = 0;
key->_sk_third = 0;
key->_sk_fourth = 0;
memset(key->__pad, 0, sizeof(key->__pad));
}
static inline bool scoutfs_key_is_zeros(struct scoutfs_key *key)
{
return key->sk_zone == 0 && key->_sk_first == 0 && key->sk_type == 0 &&
key->_sk_second == 0 && key->_sk_third == 0 &&
key->_sk_fourth == 0;
}
static inline void scoutfs_key_copy_or_zeros(struct scoutfs_key *dst,
@@ -105,17 +97,6 @@ static inline void scoutfs_key_set_ones(struct scoutfs_key *key)
key->_sk_second = cpu_to_le64(U64_MAX);
key->_sk_third = cpu_to_le64(U64_MAX);
key->_sk_fourth = U8_MAX;
memset(key->__pad, 0, sizeof(key->__pad));
}
static inline bool scoutfs_key_is_ones(struct scoutfs_key *key)
{
return key->sk_zone == U8_MAX &&
key->_sk_first == cpu_to_le64(U64_MAX) &&
key->sk_type == U8_MAX &&
key->_sk_second == cpu_to_le64(U64_MAX) &&
key->_sk_third == cpu_to_le64(U64_MAX) &&
key->_sk_fourth == U8_MAX;
}
/*
@@ -198,19 +179,29 @@ static inline void scoutfs_key_dec(struct scoutfs_key *key)
key->sk_zone--;
}
/*
* Some key types are used by multiple subsystems and shouldn't have
* duplicate private key init functions.
*/
static inline void scoutfs_key_init_log_trees(struct scoutfs_key *key,
u64 rid, u64 nr)
static inline void scoutfs_key_to_be(struct scoutfs_key_be *be,
struct scoutfs_key *key)
{
*key = (struct scoutfs_key) {
.sk_zone = SCOUTFS_LOG_TREES_ZONE,
.sklt_rid = cpu_to_le64(rid),
.sklt_nr = cpu_to_le64(nr),
};
BUILD_BUG_ON(sizeof(struct scoutfs_key_be) !=
sizeof(struct scoutfs_key));
be->sk_zone = key->sk_zone;
be->_sk_first = le64_to_be64(key->_sk_first);
be->sk_type = key->sk_type;
be->_sk_second = le64_to_be64(key->_sk_second);
be->_sk_third = le64_to_be64(key->_sk_third);
be->_sk_fourth = key->_sk_fourth;
}
static inline void scoutfs_key_from_be(struct scoutfs_key *key,
struct scoutfs_key_be *be)
{
key->sk_zone = be->sk_zone;
key->_sk_first = be64_to_le64(be->_sk_first);
key->sk_type = be->sk_type;
key->_sk_second = be64_to_le64(be->_sk_second);
key->_sk_third = be64_to_le64(be->_sk_third);
key->_sk_fourth = be->_sk_fourth;
}
#endif

kmod/src/kvec.h Normal file (+12 lines)
View File

@@ -0,0 +1,12 @@
#ifndef _SCOUTFS_KVEC_H_
#define _SCOUTFS_KVEC_H_
#include <linux/uio.h>
static inline void kvec_init(struct kvec *kv, void *base, size_t len)
{
kv->iov_base = base;
kv->iov_len = len;
}
#endif

File diff suppressed because it is too large

View File

@@ -6,15 +6,12 @@
#define SCOUTFS_LKF_REFRESH_INODE 0x01 /* update stale inode from item */
#define SCOUTFS_LKF_NONBLOCK 0x02 /* only use already held locks */
#define SCOUTFS_LKF_INTERRUPTIBLE 0x04 /* pending signals return -ERESTARTSYS */
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_INTERRUPTIBLE << 1) - 1))
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_NONBLOCK << 1) - 1))
#define SCOUTFS_LOCK_NR_MODES SCOUTFS_LOCK_INVALID
struct inode_deletion_lock_data;
/*
* A few fields (start, end, refresh_gen, write_seq, granted_mode)
* A few fields (start, end, refresh_gen, write_version, granted_mode)
* are referenced by code outside lock.c.
*/
struct scoutfs_lock {
@@ -24,32 +21,25 @@ struct scoutfs_lock {
struct rb_node node;
struct rb_node range_node;
u64 refresh_gen;
u64 write_seq;
u64 dirty_trans_seq;
u64 write_version;
struct list_head lru_head;
wait_queue_head_t waitq;
struct work_struct shrink_work;
ktime_t grace_deadline;
unsigned long request_pending:1,
invalidate_pending:1;
struct list_head inv_head; /* entry in linfo's list of locks with invalidations */
struct list_head inv_list; /* list of lock's invalidation requests */
struct list_head shrink_head;
spinlock_t cov_list_lock;
struct list_head cov_list;
enum scoutfs_lock_mode mode;
enum scoutfs_lock_mode invalidating_mode;
int mode;
unsigned int waiters[SCOUTFS_LOCK_NR_MODES];
unsigned int users[SCOUTFS_LOCK_NR_MODES];
struct scoutfs_tseq_entry tseq_entry;
/* the forest tracks which log tree last saw bloom bit updates */
atomic64_t forest_bloom_nr;
/* inode deletion tracks some state per lock */
struct inode_deletion_lock_data *inode_deletion_data;
/* the forest btree code stores data per lock */
struct forest_lock_private *forest_private;
};
struct scoutfs_lock_coverage {
@@ -65,29 +55,29 @@ int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
struct scoutfs_key *key);
int scoutfs_lock_inode(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
int scoutfs_lock_inode(struct super_block *sb, int mode, int flags,
struct inode *inode, struct scoutfs_lock **ret_lock);
int scoutfs_lock_ino(struct super_block *sb, enum scoutfs_lock_mode mode, int flags, u64 ino,
int scoutfs_lock_ino(struct super_block *sb, int mode, int flags, u64 ino,
struct scoutfs_lock **ret_lock);
void scoutfs_lock_get_index_item_range(u8 type, u64 major, u64 ino,
struct scoutfs_key *start,
struct scoutfs_key *end);
int scoutfs_lock_inode_index(struct super_block *sb, enum scoutfs_lock_mode mode,
int scoutfs_lock_inode_index(struct super_block *sb, int mode,
u8 type, u64 major, u64 ino,
struct scoutfs_lock **ret_lock);
int scoutfs_lock_inodes(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
int scoutfs_lock_xattr_index(struct super_block *sb, int mode, int flags,
u64 hash, struct scoutfs_lock **ret_lock);
int scoutfs_lock_inodes(struct super_block *sb, int mode, int flags,
struct inode *a, struct scoutfs_lock **a_lock,
struct inode *b, struct scoutfs_lock **b_lock,
struct inode *c, struct scoutfs_lock **c_lock,
struct inode *d, struct scoutfs_lock **D_lock);
int scoutfs_lock_rename(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
int scoutfs_lock_rename(struct super_block *sb, int mode, int flags,
struct scoutfs_lock **lock);
int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 ino, struct scoutfs_lock **lock);
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
int scoutfs_lock_rid(struct super_block *sb, int mode, int flags,
u64 rid, struct scoutfs_lock **lock);
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock,
enum scoutfs_lock_mode mode);
int level);
void scoutfs_lock_init_coverage(struct scoutfs_lock_coverage *cov);
void scoutfs_lock_add_coverage(struct super_block *sb,
@@ -98,13 +88,11 @@ bool scoutfs_lock_is_covered(struct super_block *sb,
void scoutfs_lock_del_coverage(struct super_block *sb,
struct scoutfs_lock_coverage *cov);
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
enum scoutfs_lock_mode mode);
int mode);
void scoutfs_free_unused_locks(struct super_block *sb);
void scoutfs_free_unused_locks(struct super_block *sb, unsigned long nr);
int scoutfs_lock_setup(struct super_block *sb);
void scoutfs_lock_unmount_begin(struct super_block *sb);
void scoutfs_lock_flush_invalidate(struct super_block *sb);
void scoutfs_lock_shutdown(struct super_block *sb);
void scoutfs_lock_destroy(struct super_block *sb);

View File

@@ -20,10 +20,11 @@
#include "tseq.h"
#include "spbm.h"
#include "block.h"
#include "radix.h"
#include "btree.h"
#include "msg.h"
#include "scoutfs_trace.h"
#include "lock_server.h"
#include "recov.h"
/*
* The scoutfs server implements a simple lock service. Client mounts
@@ -56,11 +57,14 @@
* Message requests and responses are reliably delivered in order across
* reconnection.
*
* As a new server comes up it recovers lock state from existing clients
* which were connected to a previous lock server. Recover requests are
 * sent to clients as they connect and they respond with all their
 * locks. Once all clients and locks are accounted for, normal
 * processing can resume.
* The server maintains a persistent record of connected clients. A new
* server instance discovers these and waits for previously connected
* clients to reconnect and recover their state before proceeding. If
 * clients don't reconnect they are forcefully prevented from unsafely
 * accessing the shared persistent storage (fenced according to the
 * rules of the platform, which could range from being powered off to
 * having their switch port disabled to having their local block device
 * set read-only).
*
* The lock server doesn't respond to memory pressure. The only way
* locks are freed is if they are invalidated to null on behalf of a
@@ -74,12 +78,17 @@ struct lock_server_info {
struct super_block *sb;
spinlock_t lock;
struct mutex mutex;
struct rb_root locks_root;
struct scoutfs_spbm recovery_pending;
struct delayed_work recovery_dwork;
struct scoutfs_tseq_tree tseq_tree;
struct dentry *tseq_dentry;
struct scoutfs_tseq_tree stats_tseq_tree;
struct dentry *stats_tseq_dentry;
struct scoutfs_radix_allocator *alloc;
struct scoutfs_block_writer *wri;
};
#define DECLARE_LOCK_SERVER_INFO(sb, name) \
@@ -106,9 +115,12 @@ struct server_lock_node {
struct list_head granted;
struct list_head requested;
struct list_head invalidated;
};
struct scoutfs_tseq_entry stats_tseq_entry;
u64 stats[SLT_NR];
enum {
CLE_GRANTED,
CLE_REQUESTED,
CLE_INVALIDATED,
};
/*
@@ -153,30 +165,30 @@ enum {
*/
static void add_client_entry(struct server_lock_node *snode,
struct list_head *list,
struct client_lock_entry *c_ent)
struct client_lock_entry *clent)
{
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
if (list_empty(&c_ent->head))
list_add_tail(&c_ent->head, list);
if (list_empty(&clent->head))
list_add_tail(&clent->head, list);
else
list_move_tail(&c_ent->head, list);
list_move_tail(&clent->head, list);
c_ent->on_list = list == &snode->granted ? OL_GRANTED :
clent->on_list = list == &snode->granted ? OL_GRANTED :
list == &snode->requested ? OL_REQUESTED :
OL_INVALIDATED;
}
static void free_client_entry(struct lock_server_info *inf,
struct server_lock_node *snode,
struct client_lock_entry *c_ent)
struct client_lock_entry *clent)
{
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
if (!list_empty(&c_ent->head))
list_del_init(&c_ent->head);
scoutfs_tseq_del(&inf->tseq_tree, &c_ent->tseq_entry);
kfree(c_ent);
if (!list_empty(&clent->head))
list_del_init(&clent->head);
scoutfs_tseq_del(&inf->tseq_tree, &clent->tseq_entry);
kfree(clent);
}
static bool invalid_mode(u8 mode)
@@ -298,8 +310,6 @@ static struct server_lock_node *alloc_server_lock(struct lock_server_info *inf,
snode = get_server_lock(inf, key, ins, false);
if (snode != ins)
kfree(ins);
else
scoutfs_tseq_add(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
}
}
@@ -329,23 +339,21 @@ static void put_server_lock(struct lock_server_info *inf,
mutex_unlock(&snode->mutex);
if (should_free) {
scoutfs_tseq_del(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
if (should_free)
kfree(snode);
}
}
static struct client_lock_entry *find_entry(struct server_lock_node *snode,
struct list_head *list,
u64 rid)
{
struct client_lock_entry *c_ent;
struct client_lock_entry *clent;
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
list_for_each_entry(c_ent, list, head) {
if (c_ent->rid == rid)
return c_ent;
list_for_each_entry(clent, list, head) {
if (clent->rid == rid)
return clent;
}
return NULL;
@@ -364,7 +372,7 @@ int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
u64 net_id, struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *c_ent;
struct client_lock_entry *clent;
struct server_lock_node *snode;
int ret;
@@ -376,29 +384,27 @@ int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
goto out;
}
c_ent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!c_ent) {
clent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!clent) {
ret = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&c_ent->head);
c_ent->rid = rid;
c_ent->net_id = net_id;
c_ent->mode = nl->new_mode;
INIT_LIST_HEAD(&clent->head);
clent->rid = rid;
clent->net_id = net_id;
clent->mode = nl->new_mode;
snode = alloc_server_lock(inf, &nl->key);
if (snode == NULL) {
kfree(c_ent);
kfree(clent);
ret = -ENOMEM;
goto out;
}
snode->stats[SLT_REQUEST]++;
c_ent->snode = snode;
add_client_entry(snode, &snode->requested, c_ent);
scoutfs_tseq_add(&inf->tseq_tree, &c_ent->tseq_entry);
clent->snode = snode;
add_client_entry(snode, &snode->requested, clent);
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
ret = process_waiting_requests(sb, snode);
out:
@@ -417,7 +423,7 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *c_ent;
struct client_lock_entry *clent;
struct server_lock_node *snode;
int ret;
@@ -429,27 +435,25 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
goto out;
}
/* XXX should always have a server lock here? */
/* XXX should always have a server lock here? recovery? */
snode = get_server_lock(inf, &nl->key, NULL, false);
if (!snode) {
ret = -EINVAL;
goto out;
}
snode->stats[SLT_RESPONSE]++;
c_ent = find_entry(snode, &snode->invalidated, rid);
if (!c_ent) {
clent = find_entry(snode, &snode->invalidated, rid);
if (!clent) {
put_server_lock(inf, snode);
ret = -EINVAL;
goto out;
}
if (nl->new_mode == SCOUTFS_LOCK_NULL) {
free_client_entry(inf, snode, c_ent);
free_client_entry(inf, snode, clent);
} else {
c_ent->mode = nl->new_mode;
add_client_entry(snode, &snode->granted, c_ent);
clent->mode = nl->new_mode;
add_client_entry(snode, &snode->granted, clent);
}
ret = process_waiting_requests(sb, snode);
@@ -474,9 +478,12 @@ out:
* so we unlock the snode mutex.
*
* All progress must wait for all clients to finish with recovery
 * because we don't know which locks they'll hold. Once recovery
* finishes the server calls us to kick all the locks that were waiting
* during recovery.
* because we don't know which locks they'll hold. The unlocked
* recovery_pending test here is OK. It's filled by setup before
* anything runs. It's emptied by recovery completion. We can get a
* false nonempty result if we race with recovery completion, but that's
* OK because recovery completion processes all the locks that have
* requests after emptying, including the unlikely loser of that race.
*/
static int process_waiting_requests(struct super_block *sb,
struct server_lock_node *snode)
@@ -487,14 +494,15 @@ static int process_waiting_requests(struct super_block *sb,
struct client_lock_entry *req_tmp;
struct client_lock_entry *gr;
struct client_lock_entry *gr_tmp;
u64 seq;
static atomic64_t write_version = ATOMIC64_INIT(0);
u64 wv;
int ret;
BUG_ON(!mutex_is_locked(&snode->mutex));
/* processing waits for all invalidation responses or recovery */
if (!list_empty(&snode->invalidated) ||
scoutfs_recov_next_pending(sb, 0, SCOUTFS_RECOV_LOCKS) != 0) {
!scoutfs_spbm_empty(&inf->recovery_pending)) {
ret = 0;
goto out;
}
@@ -518,7 +526,6 @@ static int process_waiting_requests(struct super_block *sb,
trace_scoutfs_lock_message(sb, SLT_SERVER,
SLT_INVALIDATE, SLT_REQUEST,
gr->rid, 0, &nl);
snode->stats[SLT_INVALIDATE]++;
add_client_entry(snode, &snode->invalidated, gr);
}
@@ -529,7 +536,6 @@ static int process_waiting_requests(struct super_block *sb,
nl.key = snode->key;
nl.new_mode = req->mode;
nl.write_seq = 0;
/* see if there's an existing compatible grant to replace */
gr = find_entry(snode, &snode->granted, req->rid);
@@ -542,9 +548,8 @@ static int process_waiting_requests(struct super_block *sb,
if (nl.new_mode == SCOUTFS_LOCK_WRITE ||
nl.new_mode == SCOUTFS_LOCK_WRITE_ONLY) {
/* doesn't commit seq update, recovered with locks */
seq = scoutfs_server_next_seq(sb);
nl.write_seq = cpu_to_le64(seq);
wv = atomic64_inc_return(&write_version);
nl.write_version = cpu_to_le64(wv);
}
ret = scoutfs_server_lock_response(sb, req->rid,
@@ -555,7 +560,6 @@ static int process_waiting_requests(struct super_block *sb,
trace_scoutfs_lock_message(sb, SLT_SERVER, SLT_GRANT,
SLT_RESPONSE, req->rid,
req->net_id, &nl);
snode->stats[SLT_GRANT]++;
/* don't track null client locks, track all else */
if (req->mode == SCOUTFS_LOCK_NULL)
@@ -573,37 +577,76 @@ out:
/*
* The server received a greeting from a client for the first time. If
* the client is in lock recovery then we send the initial lock request.
* the client had already talked to the server then we must find an
* existing record for it and should begin recovery. If it doesn't have
 * a record then it's timed out and we can't allow it to reconnect. If
 * it's connecting for the first time then we insert a new record.
*
* This is running in concurrent client greeting processing contexts.
*/
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid)
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
bool should_exist)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_lock_client_btree_key cbk;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
if (scoutfs_recov_is_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) {
cbk.rid = cpu_to_be64(rid);
mutex_lock(&inf->mutex);
if (should_exist) {
ret = scoutfs_btree_lookup(sb, &super->lock_clients,
&cbk, sizeof(cbk), &iref);
if (ret == 0)
scoutfs_btree_put_iref(&iref);
} else {
ret = scoutfs_btree_insert(sb, inf->alloc, inf->wri,
&super->lock_clients,
&cbk, sizeof(cbk), NULL, 0);
}
mutex_unlock(&inf->mutex);
if (should_exist && ret == 0) {
scoutfs_key_set_zeros(&key);
ret = scoutfs_server_lock_recover_request(sb, rid, &key);
} else {
ret = 0;
if (ret)
goto out;
}
out:
return ret;
}
/*
* All clients have finished lock recovery, we can make forward process
* on all the queued requests that were waiting on recovery.
* A client sent their last recovery response and can exit recovery. If
* they were the last client in recovery then we can process all the
* server locks that had requests.
*/
int scoutfs_lock_server_finished_recovery(struct super_block *sb)
static int finished_recovery(struct super_block *sb, u64 rid, bool cancel)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct server_lock_node *snode;
struct scoutfs_key key;
bool still_pending;
int ret = 0;
spin_lock(&inf->lock);
scoutfs_spbm_clear(&inf->recovery_pending, rid);
still_pending = !scoutfs_spbm_empty(&inf->recovery_pending);
spin_unlock(&inf->lock);
if (still_pending)
return 0;
if (cancel)
cancel_delayed_work_sync(&inf->recovery_dwork);
scoutfs_key_set_zeros(&key);
scoutfs_info(sb, "all lock clients recovered");
while ((snode = get_server_lock(inf, &key, NULL, true))) {
key = snode->key;
@@ -632,61 +675,58 @@ int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *existing;
struct client_lock_entry *c_ent;
struct client_lock_entry *clent;
struct server_lock_node *snode;
struct scoutfs_key key;
int ret = 0;
int i;
/* client must be in recovery */
if (!scoutfs_recov_is_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) {
spin_lock(&inf->lock);
if (!scoutfs_spbm_test(&inf->recovery_pending, rid))
ret = -EINVAL;
spin_unlock(&inf->lock);
if (ret)
goto out;
}
/* client has sent us all their locks */
if (nlr->nr == 0) {
scoutfs_server_recov_finish(sb, rid, SCOUTFS_RECOV_LOCKS);
ret = 0;
ret = finished_recovery(sb, rid, true);
goto out;
}
for (i = 0; i < le16_to_cpu(nlr->nr); i++) {
c_ent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!c_ent) {
clent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
if (!clent) {
ret = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&c_ent->head);
c_ent->rid = rid;
c_ent->net_id = 0;
c_ent->mode = nlr->locks[i].new_mode;
INIT_LIST_HEAD(&clent->head);
clent->rid = rid;
clent->net_id = 0;
clent->mode = nlr->locks[i].new_mode;
snode = alloc_server_lock(inf, &nlr->locks[i].key);
if (snode == NULL) {
kfree(c_ent);
kfree(clent);
ret = -ENOMEM;
goto out;
}
existing = find_entry(snode, &snode->granted, rid);
if (existing) {
kfree(c_ent);
kfree(clent);
put_server_lock(inf, snode);
ret = -EEXIST;
goto out;
}
c_ent->snode = snode;
add_client_entry(snode, &snode->granted, c_ent);
scoutfs_tseq_add(&inf->tseq_tree, &c_ent->tseq_entry);
clent->snode = snode;
add_client_entry(snode, &snode->granted, clent);
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
put_server_lock(inf, snode);
/* make sure next core seq is greater than all lock write seq */
scoutfs_server_set_seq_if_greater(sb,
le64_to_cpu(nlr->locks[i].write_seq));
}
/* send request for next batch of keys */
@@ -698,16 +738,108 @@ out:
return ret;
}
static int get_rid_and_put_ref(struct scoutfs_btree_item_ref *iref,
u64 *rid)
{
struct scoutfs_lock_client_btree_key *cbk;
int ret;
if (iref->key_len == sizeof(*cbk) && iref->val_len == 0) {
cbk = iref->key;
*rid = be64_to_cpu(cbk->rid);
ret = 0;
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(iref);
return ret;
}
/*
* This work executes if enough time passes without all of the clients
* finishing with recovery and canceling the work. We walk through the
* client records and find any that still have their recovery pending.
*/
static void scoutfs_lock_server_recovery_timeout(struct work_struct *work)
{
struct lock_server_info *inf = container_of(work,
struct lock_server_info,
recovery_dwork.work);
struct super_block *sb = inf->sb;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_lock_client_btree_key cbk;
SCOUTFS_BTREE_ITEM_REF(iref);
bool timed_out;
u64 rid;
int ret;
ret = scoutfs_server_hold_commit(sb);
if (ret)
goto out;
/* we enter recovery if there are any client records */
for (rid = 0; ; rid++) {
cbk.rid = cpu_to_be64(rid);
ret = scoutfs_btree_next(sb, &super->lock_clients,
&cbk, sizeof(cbk), &iref);
if (ret == -ENOENT) {
ret = 0;
break;
}
if (ret == 0)
ret = get_rid_and_put_ref(&iref, &rid);
if (ret < 0)
break;
spin_lock(&inf->lock);
if (scoutfs_spbm_test(&inf->recovery_pending, rid)) {
scoutfs_spbm_clear(&inf->recovery_pending, rid);
timed_out = true;
} else {
timed_out = false;
}
spin_unlock(&inf->lock);
if (!timed_out)
continue;
scoutfs_err(sb, "client rid %016llx lock recovery timed out",
rid);
cbk.rid = cpu_to_be64(rid);
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
&super->lock_clients,
&cbk, sizeof(cbk));
if (ret)
break;
}
ret = scoutfs_server_apply_commit(sb, ret);
out:
/* force processing all pending lock requests */
if (ret == 0)
ret = finished_recovery(sb, 0, false);
if (ret < 0) {
scoutfs_err(sb, "lock server saw err %d while timing out clients, shutting down", ret);
scoutfs_server_abort(sb);
}
}
/*
* A client is leaving the lock service. They aren't using locks and
* won't send any more requests. We tear down all the state we had for
* them. This can be called multiple times for a given client as their
* farewell is resent to new servers. It's OK to not find any state.
* If we fail to delete a persistent entry then we have to shut down and
* hope that the next server has more luck.
*/
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
{
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct client_lock_entry *c_ent;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_lock_client_btree_key cli;
struct client_lock_entry *clent;
struct client_lock_entry *tmp;
struct server_lock_node *snode;
struct scoutfs_key key;
@@ -715,7 +847,20 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
bool freed;
int ret = 0;
cli.rid = cpu_to_be64(rid);
mutex_lock(&inf->mutex);
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
&super->lock_clients, &cli, sizeof(cli));
mutex_unlock(&inf->mutex);
if (ret == -ENOENT) {
ret = 0;
goto out;
}
if (ret < 0)
goto out;
scoutfs_key_set_zeros(&key);
while ((snode = get_server_lock(inf, &key, NULL, true))) {
freed = false;
@@ -724,9 +869,9 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
(list == &snode->requested) ? &snode->invalidated :
NULL) {
list_for_each_entry_safe(c_ent, tmp, list, head) {
if (c_ent->rid == rid) {
free_client_entry(inf, snode, c_ent);
list_for_each_entry_safe(clent, tmp, list, head) {
if (clent->rid == rid) {
free_client_entry(inf, snode, clent);
freed = true;
}
}
@@ -749,7 +894,7 @@ out:
if (ret < 0) {
scoutfs_err(sb, "lock server err %d during client rid %016llx farewell, shutting down",
ret, rid);
scoutfs_server_stop(sb);
scoutfs_server_abort(sb);
}
return ret;
@@ -787,35 +932,36 @@ static char *lock_on_list_string(u8 on_list)
static void lock_server_tseq_show(struct seq_file *m,
struct scoutfs_tseq_entry *ent)
{
struct client_lock_entry *c_ent = container_of(ent,
struct client_lock_entry *clent = container_of(ent,
struct client_lock_entry,
tseq_entry);
struct server_lock_node *snode = c_ent->snode;
struct server_lock_node *snode = clent->snode;
seq_printf(m, SK_FMT" %s %s rid %016llx net_id %llu\n",
SK_ARG(&snode->key), lock_mode_string(c_ent->mode),
lock_on_list_string(c_ent->on_list), c_ent->rid,
c_ent->net_id);
}
static void stats_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
{
struct server_lock_node *snode = container_of(ent, struct server_lock_node,
stats_tseq_entry);
seq_printf(m, SK_FMT" req %llu inv %llu rsp %llu gr %llu\n",
SK_ARG(&snode->key), snode->stats[SLT_REQUEST], snode->stats[SLT_INVALIDATE],
snode->stats[SLT_RESPONSE], snode->stats[SLT_GRANT]);
SK_ARG(&snode->key), lock_mode_string(clent->mode),
lock_on_list_string(clent->on_list), clent->rid,
clent->net_id);
}
/*
* Setup the lock server. This is called before networking can deliver
* requests.
* requests. If we find existing client records then we enter recovery.
* Lock request processing is deferred until recovery is resolved for
 * all the existing clients: either they reconnect and replay locks or
* we time them out.
*/
int scoutfs_lock_server_setup(struct super_block *sb)
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct lock_server_info *inf;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_lock_client_btree_key cbk;
unsigned int nr;
u64 rid;
int ret;
inf = kzalloc(sizeof(struct lock_server_info), GFP_KERNEL);
if (!inf)
@@ -823,9 +969,14 @@ int scoutfs_lock_server_setup(struct super_block *sb)
inf->sb = sb;
spin_lock_init(&inf->lock);
mutex_init(&inf->mutex);
inf->locks_root = RB_ROOT;
scoutfs_spbm_init(&inf->recovery_pending);
INIT_DELAYED_WORK(&inf->recovery_dwork,
scoutfs_lock_server_recovery_timeout);
scoutfs_tseq_tree_init(&inf->tseq_tree, lock_server_tseq_show);
scoutfs_tseq_tree_init(&inf->stats_tseq_tree, stats_tseq_show);
inf->alloc = alloc;
inf->wri = wri;
inf->tseq_dentry = scoutfs_tseq_create("server_locks", sbi->debug_root,
&inf->tseq_tree);
@@ -834,17 +985,39 @@ int scoutfs_lock_server_setup(struct super_block *sb)
return -ENOMEM;
}
inf->stats_tseq_dentry = scoutfs_tseq_create("server_lock_stats", sbi->debug_root,
&inf->stats_tseq_tree);
if (!inf->stats_tseq_dentry) {
debugfs_remove(inf->tseq_dentry);
kfree(inf);
return -ENOMEM;
}
sbi->lock_server_info = inf;
return 0;
/* we enter recovery if there are any client records */
nr = 0;
for (rid = 0; ; rid++) {
cbk.rid = cpu_to_be64(rid);
ret = scoutfs_btree_next(sb, &super->lock_clients,
&cbk, sizeof(cbk), &iref);
if (ret == -ENOENT)
break;
if (ret == 0)
ret = get_rid_and_put_ref(&iref, &rid);
if (ret < 0)
goto out;
ret = scoutfs_spbm_set(&inf->recovery_pending, rid);
if (ret)
goto out;
nr++;
if (rid == U64_MAX)
break;
}
ret = 0;
if (nr) {
schedule_delayed_work(&inf->recovery_dwork,
msecs_to_jiffies(LOCK_SERVER_RECOVERY_MS));
scoutfs_info(sb, "waiting for %u lock clients to recover", nr);
}
out:
return ret;
}
/*
@@ -857,13 +1030,14 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
DECLARE_LOCK_SERVER_INFO(sb, inf);
struct server_lock_node *snode;
struct server_lock_node *stmp;
struct client_lock_entry *c_ent;
struct client_lock_entry *clent;
struct client_lock_entry *ctmp;
LIST_HEAD(list);
if (inf) {
cancel_delayed_work_sync(&inf->recovery_dwork);
debugfs_remove(inf->tseq_dentry);
debugfs_remove(inf->stats_tseq_dentry);
rbtree_postorder_for_each_entry_safe(snode, stmp,
&inf->locks_root, node) {
@@ -873,14 +1047,16 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
list_splice_init(&snode->invalidated, &list);
mutex_lock(&snode->mutex);
list_for_each_entry_safe(c_ent, ctmp, &list, head) {
free_client_entry(inf, snode, c_ent);
list_for_each_entry_safe(clent, ctmp, &list, head) {
free_client_entry(inf, snode, clent);
}
mutex_unlock(&snode->mutex);
kfree(snode);
}
scoutfs_spbm_destroy(&inf->recovery_pending);
kfree(inf);
sbi->lock_server_info = NULL;
}

View File

@@ -3,15 +3,17 @@
int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock_recover *nlr);
int scoutfs_lock_server_finished_recovery(struct super_block *sb);
int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
u64 net_id, struct scoutfs_net_lock *nl);
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid);
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
bool should_exist);
int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid);
int scoutfs_lock_server_setup(struct super_block *sb);
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri);
void scoutfs_lock_server_destroy(struct super_block *sb);
#endif

View File

@@ -4,7 +4,6 @@
#include <linux/bitops.h>
#include "key.h"
#include "counters.h"
#include "super.h"
void __printf(4, 5) scoutfs_msg(struct super_block *sb, const char *prefix,
const char *str, const char *fmt, ...);
@@ -24,9 +23,6 @@ do { \
#define scoutfs_info(sb, fmt, args...) \
scoutfs_msg_check(sb, KERN_INFO, "", fmt, ##args)
#define scoutfs_tprintk(sb, fmt, args...) \
trace_printk(SCSBF " " fmt "\n", SCSB_ARGS(sb), ##args);
#define scoutfs_bug_on(sb, cond, fmt, args...) \
do { \
if (cond) { \

View File

@@ -30,7 +30,6 @@
#include "net.h"
#include "endian_swap.h"
#include "tseq.h"
#include "fence.h"
/*
* scoutfs networking delivers requests and responses between nodes.
@@ -101,7 +100,7 @@ do { \
} while (0)
/* listening and their accepting sockets have a fixed locking order */
enum spin_lock_subtype {
enum {
CONN_LOCK_LISTENER,
CONN_LOCK_ACCEPTED,
};
@@ -331,9 +330,6 @@ static int submit_send(struct super_block *sb,
WARN_ON_ONCE(id == 0 && (flags & SCOUTFS_NET_FLAG_RESPONSE)))
return -EINVAL;
if (scoutfs_forcing_unmount(sb))
return -EIO;
msend = kmalloc(offsetof(struct message_send,
nh.data[data_len]), GFP_NOFS);
if (!msend)
@@ -373,7 +369,6 @@ static int submit_send(struct super_block *sb,
msend->nh.cmd = cmd;
msend->nh.flags = flags;
msend->nh.error = net_err;
memset(msend->nh.__pad, 0, sizeof(msend->nh.__pad));
msend->nh.data_len = cpu_to_le16(data_len);
if (data_len)
memcpy(msend->nh.data, data, data_len);
@@ -424,16 +419,6 @@ static int process_request(struct scoutfs_net_connection *conn,
mrecv->nh.data, le16_to_cpu(mrecv->nh.data_len));
}
static int call_resp_func(struct super_block *sb, struct scoutfs_net_connection *conn,
scoutfs_net_response_t resp_func, void *resp_data,
void *resp, unsigned int resp_len, int error)
{
if (resp_func)
return resp_func(sb, conn, resp, resp_len, error, resp_data);
else
return 0;
}
/*
* An incoming response finds the queued request and calls its response
* function. The response function for a given request will only be
@@ -448,6 +433,7 @@ static int process_response(struct scoutfs_net_connection *conn,
struct message_send *msend;
scoutfs_net_response_t resp_func = NULL;
void *resp_data;
int ret = 0;
spin_lock(&conn->lock);
@@ -462,8 +448,11 @@ static int process_response(struct scoutfs_net_connection *conn,
spin_unlock(&conn->lock);
return call_resp_func(sb, conn, resp_func, resp_data, mrecv->nh.data,
le16_to_cpu(mrecv->nh.data_len), net_err_to_host(mrecv->nh.error));
if (resp_func)
ret = resp_func(sb, conn, mrecv->nh.data,
le16_to_cpu(mrecv->nh.data_len),
net_err_to_host(mrecv->nh.error), resp_data);
return ret;
}
/*
@@ -629,6 +618,8 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
break;
}
trace_scoutfs_recv_clock_sync(nh.clock_sync_id);
data_len = le16_to_cpu(nh.data_len);
scoutfs_inc_counter(sb, net_recv_messages);
@@ -675,15 +666,8 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
scoutfs_tseq_add(&ninf->msg_tseq_tree, &mrecv->tseq_entry);
/*
* Initial received greetings are processed
* synchronously before any other incoming messages.
*
* Incoming requests or responses to the lock client are
* called synchronously to avoid reordering.
*/
if (nh.cmd == SCOUTFS_NET_CMD_GREETING ||
(nh.cmd == SCOUTFS_NET_CMD_LOCK && !conn->listening_conn))
/* synchronously process greeting before next recvmsg */
if (nh.cmd == SCOUTFS_NET_CMD_GREETING)
scoutfs_net_proc_worker(&mrecv->proc_work);
else
queue_work(conn->workq, &mrecv->proc_work);
@@ -783,6 +767,9 @@ static void scoutfs_net_send_worker(struct work_struct *work)
trace_scoutfs_net_send_message(sb, &conn->sockname,
&conn->peername, &msend->nh);
msend->nh.clock_sync_id = scoutfs_clock_sync_id();
trace_scoutfs_send_clock_sync(msend->nh.clock_sync_id);
ret = sendmsg_full(conn->sock, &msend->nh, len);
spin_lock(&conn->lock);
@@ -835,9 +822,11 @@ static void scoutfs_net_destroy_worker(struct work_struct *work)
if (conn->listening_conn && conn->notify_down)
conn->notify_down(sb, conn, conn->info, conn->rid);
/* free all messages, refactor and complete for forced unmount? */
list_splice_init(&conn->resend_queue, &conn->send_queue);
list_for_each_entry_safe(msend, tmp, &conn->send_queue, head)
list_for_each_entry_safe(msend, tmp, &conn->send_queue, head) {
free_msend(ninf, msend);
}
/* accepted sockets are removed from their listener's list */
if (conn->listening_conn) {
@@ -867,31 +856,13 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
}
/*
* By default, TCP would maintain a connection to an unresponsive peer
* for a very long time indeed. We can't do that because quorum
* members will only participate in an election when they don't have a
* healthy connection to a server. We use the KEEPALIVE* and
* TCP_USER_TIMEOUT options to ensure that we'll break an unresponsive
* connection and return to the quorum and client connection paths to
* try and establish a new connection to an active server.
*
* The TCP_KEEP* and TCP_USER_TIMEOUT option interaction is subtle.
* TCP_USER_TIMEOUT only applies if there is unacked written data in the
* send queue. It doesn't work if the connection is idle. Adding
 * keepalive probes with user_timeout set changes how the keepalive
* timeout is calculated. CNT no longer matters. Each time
* additional probes (not the first) are sent the user timeout is
* checked against the last time data was received. If none of the
* keepalives are responded to then eventually the user timeout applies.
*
* Given all this, we start with the overall unresponsive timeout. Then
* we set the probes to start sending towards the end of the timeout.
 * We give it a few tries to get a successful response before the user
 * timeout elapses during the probe timer processing that follows the
 * unsuccessful probes.
* Have a pretty aggressive keepalive timeout of around 10 seconds. The
* TCP keepalives are being processed out of task context so they should
* be responsive even when mounts are under load.
*/
#define UNRESPONSIVE_TIMEOUT_SECS 10
#define UNRESPONSIVE_PROBES 3
#define KEEPCNT 3
#define KEEPIDLE 7
#define KEEPINTVL 1
static int sock_opts_and_names(struct scoutfs_net_connection *conn,
struct socket *sock)
{
@@ -900,7 +871,7 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
int optval;
int ret;
/* we use a keepalive timeout instead of send timeout */
/* but use a keepalive timeout instead of send timeout */
tv.tv_sec = 0;
tv.tv_usec = 0;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
@@ -908,32 +879,24 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
if (ret)
goto out;
/* not checked when user_timeout != 0, but for clarity */
optval = UNRESPONSIVE_PROBES;
optval = KEEPCNT;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
BUILD_BUG_ON(UNRESPONSIVE_PROBES >= UNRESPONSIVE_TIMEOUT_SECS);
optval = UNRESPONSIVE_TIMEOUT_SECS - (UNRESPONSIVE_PROBES);
optval = KEEPIDLE;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = 1;
optval = KEEPINTVL;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = UNRESPONSIVE_TIMEOUT_SECS * MSEC_PER_SEC;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
(char *)&optval, sizeof(optval));
@@ -961,8 +924,6 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
ret = -EAFNOSUPPORT;
if (ret)
goto out;
conn->last_peername = conn->peername;
out:
return ret;
}
@@ -982,6 +943,7 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
struct scoutfs_net_connection *acc_conn;
DECLARE_WAIT_QUEUE_HEAD(waitq);
struct socket *acc_sock;
LIST_HEAD(conn_list);
int ret;
trace_scoutfs_net_listen_work_enter(sb, 0, 0);
@@ -1126,11 +1088,9 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
struct scoutfs_net_connection *listener;
struct scoutfs_net_connection *acc_conn;
scoutfs_net_response_t resp_func;
struct message_send *msend;
struct message_send *tmp;
unsigned long delay;
void *resp_data;
trace_scoutfs_net_shutdown_work_enter(sb, 0, 0);
trace_scoutfs_conn_shutdown_start(conn);
@@ -1176,30 +1136,6 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
/* and wait for accepted conn shutdown work to finish */
wait_event(conn->waitq, empty_accepted_list(conn));
/*
* Forced unmount will cause net submit to fail once it's
* started and it calls shutdown to interrupt any previous
* senders waiting for a response. The response callbacks can
* do quite a lot of work so we're careful to call them outside
* the lock.
*/
if (scoutfs_forcing_unmount(sb)) {
spin_lock(&conn->lock);
list_splice_tail_init(&conn->send_queue, &conn->resend_queue);
while ((msend = list_first_entry_or_null(&conn->resend_queue,
struct message_send, head))) {
resp_func = msend->resp_func;
resp_data = msend->resp_data;
free_msend(ninf, msend);
spin_unlock(&conn->lock);
call_resp_func(sb, conn, resp_func, resp_data, NULL, 0, -ECONNABORTED);
spin_lock(&conn->lock);
}
spin_unlock(&conn->lock);
}
spin_lock(&conn->lock);
/* greetings aren't resent across sockets */
@@ -1269,7 +1205,6 @@ static void scoutfs_net_reconn_free_worker(struct work_struct *work)
unsigned long now = jiffies;
unsigned long deadline = 0;
bool requeue = false;
int ret;
trace_scoutfs_net_reconn_free_work_enter(sb, 0, 0);
@@ -1283,18 +1218,10 @@ restart:
time_after_eq(now, acc->reconn_deadline))) {
set_conn_fl(acc, reconn_freeing);
spin_unlock(&conn->lock);
if (!test_conn_fl(conn, shutting_down)) {
scoutfs_info(sb, "client "SIN_FMT" reconnect timed out, fencing",
SIN_ARG(&acc->last_peername));
ret = scoutfs_fence_start(sb, acc->rid,
acc->last_peername.sin_addr.s_addr,
SCOUTFS_FENCE_CLIENT_RECONNECT);
if (ret) {
scoutfs_err(sb, "client fence returned err %d, shutting down server",
ret);
scoutfs_server_stop(sb);
}
}
if (!test_conn_fl(conn, shutting_down))
scoutfs_info(sb, "client timed out "SIN_FMT" -> "SIN_FMT", can not reconnect",
SIN_ARG(&acc->sockname),
SIN_ARG(&acc->peername));
destroy_conn(acc);
goto restart;
}
@@ -1365,7 +1292,6 @@ scoutfs_net_alloc_conn(struct super_block *sb,
init_waitqueue_head(&conn->waitq);
conn->sockname.sin_family = AF_INET;
conn->peername.sin_family = AF_INET;
conn->last_peername.sin_family = AF_INET;
INIT_LIST_HEAD(&conn->accepted_head);
INIT_LIST_HEAD(&conn->accepted_list);
conn->next_send_seq = 1;
@@ -1532,7 +1458,8 @@ int scoutfs_net_connect(struct super_block *sb,
struct scoutfs_net_connection *conn,
struct sockaddr_in *sin, unsigned long timeout_ms)
{
int ret = 0;
int error = 0;
int ret;
spin_lock(&conn->lock);
conn->connect_sin = *sin;
@@ -1540,8 +1467,10 @@ int scoutfs_net_connect(struct super_block *sb,
spin_unlock(&conn->lock);
queue_work(conn->workq, &conn->connect_work);
wait_event(conn->waitq, connect_result(conn, &ret));
return ret;
ret = wait_event_interruptible(conn->waitq,
connect_result(conn, &error));
return ret ?: error;
}
static void set_valid_greeting(struct scoutfs_net_connection *conn)
@@ -1616,8 +1545,9 @@ void scoutfs_net_client_greeting(struct super_block *sb,
* response and they can disconnect cleanly.
*
* At this point our connection is idle except for send submissions and
* shutdown being queued. We have exclusive access to the previous conn
* once it's shutdown and we set _freeing.
 * shutdown being queued. We have exclusive access to a previous conn
 * once it's shut down and we set _freeing.
*/
void scoutfs_net_server_greeting(struct super_block *sb,
struct scoutfs_net_connection *conn,
@@ -1677,10 +1607,10 @@ restart:
conn->next_send_id = reconn->next_send_id;
atomic64_set(&conn->recv_seq, atomic64_read(&reconn->recv_seq));
/* reconn should be idle while in reconn_wait */
/* greeting response/ack will be on conn send queue */
BUG_ON(!list_empty(&reconn->send_queue));
/* queued greeting response is racing, can be in send or resend queue */
list_splice_tail_init(&reconn->resend_queue, &conn->resend_queue);
BUG_ON(!list_empty(&conn->resend_queue));
list_splice_init(&reconn->resend_queue, &conn->resend_queue);
/* new conn info is unused, swap, old won't call down */
swap(conn->info, reconn->info);
@@ -1772,6 +1702,23 @@ int scoutfs_net_response_node(struct super_block *sb,
NULL, NULL, NULL);
}
/*
* The response function that was submitted with the request is not
* called if the request is canceled here.
*/
void scoutfs_net_cancel_request(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, u64 id)
{
struct message_send *msend;
spin_lock(&conn->lock);
msend = find_request(conn, cmd, id);
if (msend)
complete_send(conn, msend);
spin_unlock(&conn->lock);
}
struct sync_request_completion {
struct completion comp;
void *resp;
@@ -1827,10 +1774,11 @@ int scoutfs_net_sync_request(struct super_block *sb,
ret = scoutfs_net_submit_request(sb, conn, cmd, arg, arg_len,
sync_response, &sreq, &id);
if (ret == 0) {
wait_for_completion(&sreq.comp);
ret = wait_for_completion_interruptible(&sreq.comp);
if (ret == -ERESTARTSYS)
scoutfs_net_cancel_request(sb, conn, cmd, id);
else
ret = sreq.error;
}
return ret;
}

View File

@@ -49,7 +49,6 @@ struct scoutfs_net_connection {
u64 greeting_id;
struct sockaddr_in sockname;
struct sockaddr_in peername;
struct sockaddr_in last_peername;
struct list_head accepted_head;
struct scoutfs_net_connection *listening_conn;
@@ -77,7 +76,7 @@ struct scoutfs_net_connection {
void *info;
};
enum conn_flags {
enum {
CONN_FL_valid_greeting = (1UL << 0), /* other commands can proceed */
CONN_FL_established = (1UL << 1), /* added sends queue send work */
CONN_FL_shutting_down = (1UL << 2), /* shutdown work was queued */
@@ -91,23 +90,18 @@ enum conn_flags {
#define SIN_ARG(sin) sin, be16_to_cpu((sin)->sin_port)
static inline void scoutfs_addr_to_sin(struct sockaddr_in *sin,
union scoutfs_inet_addr *addr)
struct scoutfs_inet_addr *addr)
{
BUG_ON(addr->v4.family != cpu_to_le16(SCOUTFS_AF_IPV4));
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->v4.addr));
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->v4.port));
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->addr));
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->port));
}
static inline void scoutfs_sin_to_addr(union scoutfs_inet_addr *addr, struct sockaddr_in *sin)
static inline void scoutfs_addr_from_sin(struct scoutfs_inet_addr *addr,
struct sockaddr_in *sin)
{
BUG_ON(sin->sin_family != AF_INET);
memset(addr, 0, sizeof(union scoutfs_inet_addr));
addr->v4.family = cpu_to_le16(SCOUTFS_AF_IPV4);
addr->v4.addr = be32_to_le32(sin->sin_addr.s_addr);
addr->v4.port = be16_to_le16(sin->sin_port);
addr->addr = be32_to_le32(sin->sin_addr.s_addr);
addr->port = be16_to_le16(sin->sin_port);
}
struct scoutfs_net_connection *
@@ -134,6 +128,9 @@ int scoutfs_net_submit_request_node(struct super_block *sb,
u64 rid, u8 cmd, void *arg, u16 arg_len,
scoutfs_net_response_t resp_func,
void *resp_data, u64 *id_ret);
void scoutfs_net_cancel_request(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, u64 id);
int scoutfs_net_sync_request(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, void *arg, unsigned arg_len,

View File

@@ -1,872 +0,0 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/rhashtable.h>
#include <linux/rcupdate.h>
#include "format.h"
#include "counters.h"
#include "cmp.h"
#include "inode.h"
#include "client.h"
#include "server.h"
#include "omap.h"
#include "recov.h"
#include "scoutfs_trace.h"
/*
* As a client removes an inode from its cache with an nlink of 0 it
* needs to decide if it is the last client using the inode and should
* fully delete all the inode's items. It needs to know if other mounts
* still have the inode in use.
*
* We need a way to communicate between mounts that an inode is in use.
* We don't want to pay the synchronous per-file locking round trip
* costs associated with per-inode open locks that you'd typically see
* in systems to solve this problem. The first prototypes of this
* tracked open file handles so this was coined the open map, though it
* now tracks cached inodes.
*
* Clients maintain bitmaps that cover groups of inodes. As inodes
* enter the cache their bit is set and as the inode is evicted the bit
* is cleared. As deletion is attempted, either by scanning orphans or
* evicting an inode with an nlink of 0, messages are sent around the
* cluster to get the current bitmaps for that inode's group from all
* active mounts. If the inode's bit is clear then it can be deleted.
*
* This layer maintains a list of client rids to send messages to. The
* server calls us as clients enter and leave the cluster. We can't
* process requests until all clients are present as a server starts up
* so we hook into recovery and delay processing until all previously
* existing clients are recovered or fenced.
*/
struct omap_rid_list {
int nr_rids;
struct list_head head;
};
struct omap_rid_entry {
struct list_head head;
u64 rid;
};
struct omap_info {
/* client */
struct rhashtable group_ht;
/* server */
struct rhashtable req_ht;
struct llist_head requests;
spinlock_t lock;
struct omap_rid_list rids;
atomic64_t next_req_id;
};
#define DECLARE_OMAP_INFO(sb, name) \
struct omap_info *name = SCOUTFS_SB(sb)->omap_info
/*
* The presence of an inode in the inode cache sets its bit in the lock
* group's bitmap.
*
* We don't want to add additional global synchronization of inode cache
* maintenance so these are tracked in an rcu hash table. Once their
* total reaches zero they're removed from the hash and queued for
* freeing and readers should ignore them.
*/
struct omap_group {
struct super_block *sb;
struct rhash_head ht_head;
struct rcu_head rcu;
u64 nr;
spinlock_t lock;
unsigned int total;
__le64 bits[SCOUTFS_OPEN_INO_MAP_LE64S];
};
#define trace_group(sb, which, group, bit_nr) \
do { \
__typeof__(group) _grp = (group); \
__typeof__(bit_nr) _nr = (bit_nr); \
\
trace_scoutfs_omap_group_##which(sb, _grp, _grp->nr, _grp->total, _nr); \
} while (0)
/*
* Each request is initialized with the rids of currently mounted
* clients. As each responds we remove their rid and send the response
* once everyone has contributed.
*
* The request frequency will typically be low, but in a mass rm -rf
* load we will see O(groups * clients) messages flying around.
*/
struct omap_request {
struct llist_node llnode;
struct rhash_head ht_head;
struct rcu_head rcu;
spinlock_t lock;
u64 client_rid;
u64 client_id;
struct omap_rid_list rids;
struct scoutfs_open_ino_map map;
};
static inline void init_rid_list(struct omap_rid_list *list)
{
INIT_LIST_HEAD(&list->head);
list->nr_rids = 0;
}
/*
* Negative searches almost never happen.
*/
static struct omap_rid_entry *find_rid(struct omap_rid_list *list, u64 rid)
{
struct omap_rid_entry *entry;
list_for_each_entry(entry, &list->head, head) {
if (rid == entry->rid)
return entry;
}
return NULL;
}
static int free_rid(struct omap_rid_list *list, struct omap_rid_entry *entry)
{
int nr;
list_del(&entry->head);
nr = --list->nr_rids;
kfree(entry);
return nr;
}
static int copy_rids(struct omap_rid_list *to, struct omap_rid_list *from, spinlock_t *from_lock)
{
struct omap_rid_entry *entry;
struct omap_rid_entry *src;
struct omap_rid_entry *dst;
int nr;
spin_lock(from_lock);
while (to->nr_rids != from->nr_rids) {
nr = from->nr_rids;
spin_unlock(from_lock);
while (to->nr_rids < nr) {
entry = kmalloc(sizeof(struct omap_rid_entry), GFP_NOFS);
if (!entry)
return -ENOMEM;
list_add_tail(&entry->head, &to->head);
to->nr_rids++;
}
while (to->nr_rids > nr) {
entry = list_first_entry(&to->head, struct omap_rid_entry, head);
list_del(&entry->head);
kfree(entry);
to->nr_rids--;
}
spin_lock(from_lock);
}
dst = list_first_entry(&to->head, struct omap_rid_entry, head);
list_for_each_entry(src, &from->head, head) {
dst->rid = src->rid;
dst = list_next_entry(dst, head);
}
spin_unlock(from_lock);
return 0;
}
static void free_rids(struct omap_rid_list *list)
{
struct omap_rid_entry *entry;
struct omap_rid_entry *tmp;
list_for_each_entry_safe(entry, tmp, &list->head, head) {
list_del(&entry->head);
kfree(entry);
}
}
void scoutfs_omap_calc_group_nrs(u64 ino, u64 *group_nr, int *bit_nr)
{
*group_nr = ino >> SCOUTFS_OPEN_INO_MAP_SHIFT;
*bit_nr = ino & SCOUTFS_OPEN_INO_MAP_MASK;
}
static struct omap_group *alloc_group(struct super_block *sb, u64 group_nr)
{
struct omap_group *group;
group = kzalloc(sizeof(struct omap_group), GFP_NOFS);
if (group) {
group->sb = sb;
group->nr = group_nr;
spin_lock_init(&group->lock);
trace_group(sb, alloc, group, -1);
}
return group;
}
static void free_group(struct super_block *sb, struct omap_group *group)
{
trace_group(sb, free, group, -1);
kfree(group);
}
static void free_group_rcu(struct rcu_head *rcu)
{
struct omap_group *group = container_of(rcu, struct omap_group, rcu);
free_group(group->sb, group);
}
static const struct rhashtable_params group_ht_params = {
.key_len = member_sizeof(struct omap_group, nr),
.key_offset = offsetof(struct omap_group, nr),
.head_offset = offsetof(struct omap_group, ht_head),
};
/*
* Track a cached inode in its group.  Our set can be racing with a
* final clear that removes the group from the hash, sets total to
* UINT_MAX, and calls rcu free. We can retry until the dead group is
* no longer visible in the hash table and we can insert a new allocated
* group.
*
* The caller must ensure that the bit is clear; -EEXIST will be
* returned otherwise.
*/
int scoutfs_omap_set(struct super_block *sb, u64 ino)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_group *group;
u64 group_nr;
int bit_nr;
bool found;
int ret = 0;
scoutfs_omap_calc_group_nrs(ino, &group_nr, &bit_nr);
retry:
found = false;
rcu_read_lock();
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
if (group) {
spin_lock(&group->lock);
if (group->total < UINT_MAX) {
found = true;
if (WARN_ON_ONCE(test_and_set_bit_le(bit_nr, group->bits)))
ret = -EEXIST;
else
group->total++;
}
trace_group(sb, inc, group, bit_nr);
spin_unlock(&group->lock);
}
rcu_read_unlock();
if (!found) {
group = alloc_group(sb, group_nr);
if (group) {
ret = rhashtable_lookup_insert_fast(&ominf->group_ht, &group->ht_head,
group_ht_params);
if (ret < 0)
free_group(sb, group);
if (ret == -EEXIST)
ret = 0;
if (ret == -EBUSY) {
/* wait for rehash to finish */
synchronize_rcu();
ret = 0;
}
if (ret == 0)
goto retry;
} else {
ret = -ENOMEM;
}
}
return ret;
}
bool scoutfs_omap_test(struct super_block *sb, u64 ino)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_group *group;
bool ret = false;
u64 group_nr;
int bit_nr;
scoutfs_omap_calc_group_nrs(ino, &group_nr, &bit_nr);
rcu_read_lock();
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
if (group) {
spin_lock(&group->lock);
ret = !!test_bit_le(bit_nr, group->bits);
spin_unlock(&group->lock);
}
rcu_read_unlock();
return ret;
}
/*
* Clear a previously set ino bit. Trying to clear a bit that's already
* clear implies imbalanced set/clear or bugs freeing groups. We only
* free groups here as the last clear drops the group's total to 0.
*/
void scoutfs_omap_clear(struct super_block *sb, u64 ino)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_group *group;
u64 group_nr;
int bit_nr;
scoutfs_omap_calc_group_nrs(ino, &group_nr, &bit_nr);
rcu_read_lock();
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
if (group) {
spin_lock(&group->lock);
WARN_ON_ONCE(!test_bit_le(bit_nr, group->bits));
WARN_ON_ONCE(group->total == 0);
WARN_ON_ONCE(group->total == UINT_MAX);
if (test_and_clear_bit_le(bit_nr, group->bits)) {
if (--group->total == 0) {
group->total = UINT_MAX;
rhashtable_remove_fast(&ominf->group_ht, &group->ht_head,
group_ht_params);
call_rcu(&group->rcu, free_group_rcu);
}
}
trace_group(sb, dec, group, bit_nr);
spin_unlock(&group->lock);
}
rcu_read_unlock();
WARN_ON_ONCE(!group);
}
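/*
 * Illustrative only, not part of the patch: a minimal sketch of the
 * set/clear pairing these helpers expect from the inode cache paths.
 * The function names are made up; the real hooks live in inode.c.
 */
static int example_track_cached_inode(struct super_block *sb, u64 ino)
{
	/* called as an inode enters the cache; -EEXIST means an unbalanced set */
	return scoutfs_omap_set(sb, ino);
}

static void example_untrack_evicted_inode(struct super_block *sb, u64 ino)
{
	/* called from evict; the final clear in a group frees the group */
	scoutfs_omap_clear(sb, ino);
}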
/*
* The server adds rids as it discovers clients. We add them to the
* list of rids to send map requests to.
*/
int scoutfs_omap_add_rid(struct super_block *sb, u64 rid)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_rid_entry *entry;
struct omap_rid_entry *found;
entry = kmalloc(sizeof(struct omap_rid_entry), GFP_NOFS);
if (!entry)
return -ENOMEM;
spin_lock(&ominf->lock);
found = find_rid(&ominf->rids, rid);
if (!found) {
entry->rid = rid;
list_add_tail(&entry->head, &ominf->rids.head);
ominf->rids.nr_rids++;
}
spin_unlock(&ominf->lock);
if (found)
kfree(entry);
return 0;
}
static void free_req(struct omap_request *req)
{
free_rids(&req->rids);
kfree(req);
}
static void free_req_rcu(struct rcu_head *rcu)
{
struct omap_request *req = container_of(rcu, struct omap_request, rcu);
free_req(req);
}
static const struct rhashtable_params req_ht_params = {
.key_len = member_sizeof(struct omap_request, map.args.req_id),
.key_offset = offsetof(struct omap_request, map.args.req_id),
.head_offset = offsetof(struct omap_request, ht_head),
};
/*
* Remove a rid from all the pending requests. If it's the last rid we
* give the caller the details to send a response, they'll call back to
* keep removing.  If their send fails they're going to shut down the
* server so we can queue freeing the request as we give it to them.
*/
static int remove_rid_from_reqs(struct omap_info *ominf, u64 rid, u64 *resp_rid, u64 *resp_id,
struct scoutfs_open_ino_map *map)
{
struct omap_rid_entry *entry;
struct rhashtable_iter iter;
struct omap_request *req;
int ret = 0;
rhashtable_walk_enter(&ominf->req_ht, &iter);
rhashtable_walk_start(&iter);
for (;;) {
req = rhashtable_walk_next(&iter);
if (req == NULL)
break;
if (req == ERR_PTR(-EAGAIN))
continue;
spin_lock(&req->lock);
entry = find_rid(&req->rids, rid);
if (entry && free_rid(&req->rids, entry) == 0) {
*resp_rid = req->client_rid;
*resp_id = req->client_id;
memcpy(map, &req->map, sizeof(struct scoutfs_open_ino_map));
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
call_rcu(&req->rcu, free_req_rcu);
ret = 1;
}
spin_unlock(&req->lock);
if (ret > 0)
break;
}
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
if (ret <= 0) {
*resp_rid = 0;
*resp_id = 0;
}
return ret;
}
/*
* A client has been evicted. Remove its rid from the list and walk
* through all the pending requests and remove its rids, sending the
* response if it was the last rid waiting for a response.
*
* If this returns an error then the server will shut down.
*
* This can be called multiple times by different servers if there are
* errors reclaiming an evicted mount, so we allow asking to remove a
* rid that hasn't been added.
*/
int scoutfs_omap_remove_rid(struct super_block *sb, u64 rid)
{
DECLARE_OMAP_INFO(sb, ominf);
struct scoutfs_open_ino_map *map = NULL;
struct omap_rid_entry *entry;
u64 resp_rid = 0;
u64 resp_id = 0;
int ret;
spin_lock(&ominf->lock);
entry = find_rid(&ominf->rids, rid);
if (entry)
free_rid(&ominf->rids, entry);
spin_unlock(&ominf->lock);
if (!entry) {
ret = 0;
goto out;
}
map = kmalloc(sizeof(struct scoutfs_open_ino_map), GFP_NOFS);
if (!map) {
ret = -ENOMEM;
goto out;
}
/* remove the rid from all pending requests, sending responses if it was final */
for (;;) {
ret = remove_rid_from_reqs(ominf, rid, &resp_rid, &resp_id, map);
if (ret <= 0)
break;
ret = scoutfs_server_send_omap_response(sb, resp_rid, resp_id, map, 0);
if (ret < 0)
break;
}
out:
kfree(map);
return ret;
}
/*
* Handle a single incoming request in the server. This could have been
* delayed by recovery. This only returns an error if we couldn't send
* a processing error response to the client.
*/
static int handle_request(struct super_block *sb, struct omap_request *req)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_rid_list priv_rids;
struct omap_rid_entry *entry;
int ret;
init_rid_list(&priv_rids);
ret = copy_rids(&priv_rids, &ominf->rids, &ominf->lock);
if (ret < 0)
goto out;
/* don't send a request to the client who originated this request */
entry = find_rid(&priv_rids, req->client_rid);
if (entry && free_rid(&priv_rids, entry) == 0) {
ret = scoutfs_server_send_omap_response(sb, req->client_rid, req->client_id,
&req->map, 0);
kfree(req);
req = NULL;
goto out;
}
/* this lock isn't needed but sparse gave warnings with conditional locking */
ret = copy_rids(&req->rids, &priv_rids, &ominf->lock);
if (ret < 0)
goto out;
do {
ret = rhashtable_insert_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
if (ret == -EBUSY)
synchronize_rcu(); /* wait for rehash to finish */
} while (ret == -EBUSY);
if (ret < 0)
goto out;
/*
* We can start getting responses the moment we send the first request.  After
* we send the last request the req can be freed.
*/
while ((entry = list_first_entry_or_null(&priv_rids.head, struct omap_rid_entry, head))) {
ret = scoutfs_server_send_omap_request(sb, entry->rid, &req->map.args);
if (ret < 0) {
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
goto out;
}
free_rid(&priv_rids, entry);
}
ret = 0;
out:
free_rids(&priv_rids);
if (ret < 0) {
ret = scoutfs_server_send_omap_response(sb, req->client_rid, req->client_id,
NULL, ret);
free_req(req);
}
return ret;
}
/*
* Handle all previously received omap requests from clients. Once
* we've finished recovery and can send requests to all clients we can
* handle all pending requests. The handling function frees the request
* and only returns an error if it couldn't send a response to the
* client.
*/
static int handle_requests(struct super_block *sb)
{
DECLARE_OMAP_INFO(sb, ominf);
struct llist_node *requests;
struct omap_request *req;
struct omap_request *tmp;
int ret;
int err;
if (scoutfs_recov_next_pending(sb, 0, SCOUTFS_RECOV_GREETING))
return 0;
ret = 0;
requests = llist_del_all(&ominf->requests);
llist_for_each_entry_safe(req, tmp, requests, llnode) {
err = handle_request(sb, req);
if (err < 0 && ret == 0)
ret = err;
}
return ret;
}
int scoutfs_omap_finished_recovery(struct super_block *sb)
{
return handle_requests(sb);
}
/*
* The server is receiving a request from a client for the bitmap of all
* open inodes around their ino. Queue it for processing which is
* typically immediate and inline but which can be deferred by recovery
* as the server first starts up.
*/
int scoutfs_omap_server_handle_request(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_open_ino_map_args *args)
{
DECLARE_OMAP_INFO(sb, ominf);
struct omap_request *req;
req = kzalloc(sizeof(struct omap_request), GFP_NOFS);
if (req == NULL)
return -ENOMEM;
spin_lock_init(&req->lock);
req->client_rid = rid;
req->client_id = id;
init_rid_list(&req->rids);
req->map.args.group_nr = args->group_nr;
req->map.args.req_id = cpu_to_le64(atomic64_inc_return(&ominf->next_req_id));
llist_add(&req->llnode, &ominf->requests);
return handle_requests(sb);
}
/*
* The client is receiving a request from the server for its map for the
* given group. Look up the group and copy the bits to the map.
*
* The mount originating the request for this bitmap has the inode group
* write locked. We can't be adding links to any inodes in the group
* because that requires the lock.  Inode bits can be set and cleared
* while we're sampling the bitmap. These races are fine, they can't be
* adding cached inodes if nlink is 0 and we don't have the lock. If
* the caller is removing a set bit then they're about to try and delete
* the inode themselves and will first have to acquire the cluster lock
* themselves.
*/
int scoutfs_omap_client_handle_request(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map_args *args)
{
DECLARE_OMAP_INFO(sb, ominf);
u64 group_nr = le64_to_cpu(args->group_nr);
struct scoutfs_open_ino_map *map;
struct omap_group *group;
bool copied = false;
int ret;
map = kmalloc(sizeof(struct scoutfs_open_ino_map), GFP_NOFS);
if (!map)
return -ENOMEM;
map->args = *args;
rcu_read_lock();
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
if (group) {
spin_lock(&group->lock);
trace_group(sb, request, group, -1);
if (group->total > 0 && group->total < UINT_MAX) {
memcpy(map->bits, group->bits, sizeof(map->bits));
copied = true;
}
spin_unlock(&group->lock);
}
rcu_read_unlock();
if (!copied)
memset(map->bits, 0, sizeof(map->bits));
ret = scoutfs_client_send_omap_response(sb, id, map);
kfree(map);
return ret;
}
/*
* The server has received an open ino map response from a client. Find
* the original request that it's serving, OR the response's map bits
* into it, and send a reply if this was the last response from a client
* we were waiting for.
*
* We can get responses for requests we're no longer tracking if, for
* example, sending to a client gets an error. We'll have already sent
* the response to the requesting client so we drop these responses on
* the floor.
*/
int scoutfs_omap_server_handle_response(struct super_block *sb, u64 rid,
struct scoutfs_open_ino_map *resp_map)
{
DECLARE_OMAP_INFO(sb, ominf);
struct scoutfs_open_ino_map *map;
struct omap_rid_entry *entry;
bool send_response = false;
struct omap_request *req;
u64 resp_rid;
u64 resp_id;
int ret;
map = kmalloc(sizeof(struct scoutfs_open_ino_map), GFP_NOFS);
if (!map) {
ret = -ENOMEM;
goto out;
}
rcu_read_lock();
req = rhashtable_lookup(&ominf->req_ht, &resp_map->args.req_id, req_ht_params);
if (req) {
spin_lock(&req->lock);
entry = find_rid(&req->rids, rid);
if (entry) {
bitmap_or((unsigned long *)req->map.bits, (unsigned long *)req->map.bits,
(unsigned long *)resp_map->bits, SCOUTFS_OPEN_INO_MAP_BITS);
if (free_rid(&req->rids, entry) == 0)
send_response = true;
}
spin_unlock(&req->lock);
if (send_response) {
resp_rid = req->client_rid;
resp_id = req->client_id;
memcpy(map, &req->map, sizeof(struct scoutfs_open_ino_map));
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
call_rcu(&req->rcu, free_req_rcu);
}
}
rcu_read_unlock();
if (send_response)
ret = scoutfs_server_send_omap_response(sb, resp_rid, resp_id, map, 0);
else
ret = 0;
kfree(map);
out:
return ret;
}
/*
* The server is shutting down. Free all the server state associated
* with ongoing request processing. Clients who still have requests
* pending will resend them to the next server.
*/
void scoutfs_omap_server_shutdown(struct super_block *sb)
{
DECLARE_OMAP_INFO(sb, ominf);
struct rhashtable_iter iter;
struct llist_node *requests;
struct omap_request *req;
struct omap_request *tmp;
rhashtable_walk_enter(&ominf->req_ht, &iter);
rhashtable_walk_start(&iter);
for (;;) {
req = rhashtable_walk_next(&iter);
if (req == NULL)
break;
if (req == ERR_PTR(-EAGAIN))
continue;
if (req->rids.nr_rids != 0) {
free_rids(&req->rids);
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
call_rcu(&req->rcu, free_req_rcu);
}
}
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
requests = llist_del_all(&ominf->requests);
llist_for_each_entry_safe(req, tmp, requests, llnode)
kfree(req);
synchronize_rcu();
}
int scoutfs_omap_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct omap_info *ominf;
int ret;
ominf = kzalloc(sizeof(struct omap_info), GFP_KERNEL);
if (!ominf) {
ret = -ENOMEM;
goto out;
}
ret = rhashtable_init(&ominf->group_ht, &group_ht_params);
if (ret < 0) {
kfree(ominf);
goto out;
}
ret = rhashtable_init(&ominf->req_ht, &req_ht_params);
if (ret < 0) {
rhashtable_destroy(&ominf->group_ht);
kfree(ominf);
goto out;
}
init_llist_head(&ominf->requests);
spin_lock_init(&ominf->lock);
init_rid_list(&ominf->rids);
atomic64_set(&ominf->next_req_id, 0);
sbi->omap_info = ominf;
ret = 0;
out:
return ret;
}
/*
* To get here the server must have shut down, freeing requests, and
* evict must have been called on all cached inodes so we can just
* synchronize all the pending group frees.
*/
void scoutfs_omap_destroy(struct super_block *sb)
{
DECLARE_OMAP_INFO(sb, ominf);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct rhashtable_iter iter;
if (ominf) {
synchronize_rcu();
/* double check that all the groups deced to 0 and were freed */
rhashtable_walk_enter(&ominf->group_ht, &iter);
rhashtable_walk_start(&iter);
WARN_ON_ONCE(rhashtable_walk_peek(&iter) != NULL);
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
rhashtable_destroy(&ominf->group_ht);
rhashtable_destroy(&ominf->req_ht);
kfree(ominf);
sbi->omap_info = NULL;
}
}
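/*
 * Illustrative only, not part of the patch: a rough sketch of how the
 * deleting client might consume the aggregated map described in the
 * comment at the top of this file.  scoutfs_client_get_omap() and
 * delete_inode_items() are hypothetical stand-ins for the request round
 * trip and item deletion that live outside this file.
 */
static int example_try_delete(struct super_block *sb, u64 ino)
{
	struct scoutfs_open_ino_map *map;
	u64 group_nr;
	int bit_nr;
	int ret;

	scoutfs_omap_calc_group_nrs(ino, &group_nr, &bit_nr);

	map = kmalloc(sizeof(*map), GFP_NOFS);
	if (!map)
		return -ENOMEM;

	/* the server ORs together every mount's bitmap for this group */
	ret = scoutfs_client_get_omap(sb, group_nr, map);
	if (ret == 0 && !test_bit_le(bit_nr, map->bits))
		ret = delete_inode_items(sb, ino);

	kfree(map);
	return ret;
}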

View File

@@ -1,23 +0,0 @@
#ifndef _SCOUTFS_OMAP_H_
#define _SCOUTFS_OMAP_H_
int scoutfs_omap_set(struct super_block *sb, u64 ino);
bool scoutfs_omap_test(struct super_block *sb, u64 ino);
void scoutfs_omap_clear(struct super_block *sb, u64 ino);
int scoutfs_omap_client_handle_request(struct super_block *sb, u64 id,
struct scoutfs_open_ino_map_args *args);
void scoutfs_omap_calc_group_nrs(u64 ino, u64 *group_nr, int *bit_nr);
int scoutfs_omap_add_rid(struct super_block *sb, u64 rid);
int scoutfs_omap_remove_rid(struct super_block *sb, u64 rid);
int scoutfs_omap_finished_recovery(struct super_block *sb);
int scoutfs_omap_server_handle_request(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_open_ino_map_args *args);
int scoutfs_omap_server_handle_response(struct super_block *sb, u64 rid,
struct scoutfs_open_ino_map *resp_map);
void scoutfs_omap_server_shutdown(struct super_block *sb);
int scoutfs_omap_setup(struct super_block *sb);
void scoutfs_omap_destroy(struct super_block *sb);
#endif

View File

@@ -16,7 +16,6 @@
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/debugfs.h>
#include <linux/namei.h>
#include <linux/parser.h>
#include <linux/inet.h>
@@ -26,344 +25,147 @@
#include "msg.h"
#include "options.h"
#include "super.h"
#include "inode.h"
enum {
Opt_metadev_path,
Opt_orphan_scan_delay_ms,
Opt_quorum_slot_nr,
Opt_err,
};
static const match_table_t tokens = {
{Opt_metadev_path, "metadev_path=%s"},
{Opt_orphan_scan_delay_ms, "orphan_scan_delay_ms=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_server_addr, "server_addr=%s"},
{Opt_err, NULL}
};
struct options_info {
seqlock_t seqlock;
struct scoutfs_mount_options opts;
struct scoutfs_sysfs_attrs sysfs_attrs;
struct options_sb_info {
struct dentry *debugfs_dir;
u32 btree_force_tiny_blocks;
};
#define DECLARE_OPTIONS_INFO(sb, name) \
struct options_info *name = SCOUTFS_SB(sb)->options_info
static int parse_bdev_path(struct super_block *sb, substring_t *substr,
char **bdev_path_ret)
u32 scoutfs_option_u32(struct super_block *sb, int token)
{
char *bdev_path;
struct inode *bdev_inode;
struct path path;
bool got_path = false;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct options_sb_info *osi = sbi->options;
switch(token) {
case Opt_btree_force_tiny_blocks:
return osi->btree_force_tiny_blocks;
}
WARN_ON_ONCE(1);
return 0;
}
/* The caller's string is null terminated and can be clobbered */
static int parse_ipv4(struct super_block *sb, char *str,
struct sockaddr_in *sin)
{
unsigned long port = 0;
__be32 addr;
char *c;
int ret;
bdev_path = match_strdup(substr);
if (!bdev_path) {
scoutfs_err(sb, "bdev string dup failed");
ret = -ENOMEM;
goto out;
/* null term port, if specified */
c = strchr(str, ':');
if (c)
*c = '\0';
/* parse addr */
addr = in_aton(str);
if (ipv4_is_multicast(addr) || ipv4_is_lbcast(addr) ||
ipv4_is_zeronet(addr) ||
ipv4_is_local_multicast(addr)) {
scoutfs_err(sb, "invalid unicast ipv4 address: %s", str);
return -EINVAL;
}
ret = kern_path(bdev_path, LOOKUP_FOLLOW, &path);
if (ret) {
scoutfs_err(sb, "path %s not found for bdev: error %d",
bdev_path, ret);
goto out;
}
got_path = true;
bdev_inode = d_inode(path.dentry);
if (!S_ISBLK(bdev_inode->i_mode)) {
scoutfs_err(sb, "path %s for bdev is not a block device",
bdev_path);
ret = -ENOTBLK;
goto out;
/* parse port, if specified */
if (c) {
c++;
ret = kstrtoul(c, 0, &port);
if (ret != 0 || port == 0 || port >= U16_MAX) {
scoutfs_err(sb, "invalid port in ipv4 address: %s", c);
return -EINVAL;
}
}
out:
if (got_path) {
path_put(&path);
}
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = addr;
sin->sin_port = cpu_to_be16(port);
if (ret < 0) {
kfree(bdev_path);
} else {
*bdev_path_ret = bdev_path;
}
return ret;
return 0;
}
static void free_options(struct scoutfs_mount_options *opts)
{
kfree(opts->metadev_path);
}
#define MIN_ORPHAN_SCAN_DELAY_MS 100UL
#define DEFAULT_ORPHAN_SCAN_DELAY_MS (10 * MSEC_PER_SEC)
#define MAX_ORPHAN_SCAN_DELAY_MS (60 * MSEC_PER_SEC)
static void init_default_options(struct scoutfs_mount_options *opts)
{
memset(opts, 0, sizeof(*opts));
opts->quorum_slot_nr = -1;
opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
}
/*
* Parse the option string into our options struct. This can allocate
* memory in the struct. The caller is responsible for always calling
* free_options() when the struct is destroyed, including when we return
* an error.
*/
static int parse_options(struct super_block *sb, char *options, struct scoutfs_mount_options *opts)
int scoutfs_parse_options(struct super_block *sb, char *options,
struct mount_options *parsed)
{
char ipstr[INET_ADDRSTRLEN + 1];
substring_t args[MAX_OPT_ARGS];
int nr;
int token;
char *p;
int ret;
/* Set defaults */
memset(parsed, 0, sizeof(*parsed));
while ((p = strsep(&options, ",")) != NULL) {
if (!*p)
continue;
token = match_token(p, tokens, args);
switch (token) {
case Opt_server_addr:
case Opt_metadev_path:
ret = parse_bdev_path(sb, &args[0], &opts->metadev_path);
match_strlcpy(ipstr, args, ARRAY_SIZE(ipstr));
ret = parse_ipv4(sb, ipstr, &parsed->server_addr);
if (ret < 0)
return ret;
break;
case Opt_orphan_scan_delay_ms:
if (opts->orphan_scan_delay_ms != -1) {
scoutfs_err(sb, "multiple orphan_scan_delay_ms options provided, only provide one.");
return -EINVAL;
}
ret = match_int(args, &nr);
if (ret < 0 ||
nr < MIN_ORPHAN_SCAN_DELAY_MS || nr > MAX_ORPHAN_SCAN_DELAY_MS) {
scoutfs_err(sb, "invalid orphan_scan_delay_ms option, must be between %lu and %lu",
MIN_ORPHAN_SCAN_DELAY_MS, MAX_ORPHAN_SCAN_DELAY_MS);
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->orphan_scan_delay_ms = nr;
break;
case Opt_quorum_slot_nr:
if (opts->quorum_slot_nr != -1) {
scoutfs_err(sb, "multiple quorum_slot_nr options provided, only provide one.");
return -EINVAL;
}
ret = match_int(args, &nr);
if (ret < 0 || nr < 0 || nr >= SCOUTFS_QUORUM_MAX_SLOTS) {
scoutfs_err(sb, "invalid quorum_slot_nr option, must be between 0 and %u",
SCOUTFS_QUORUM_MAX_SLOTS - 1);
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->quorum_slot_nr = nr;
break;
default:
scoutfs_err(sb, "Unknown or malformed option, \"%s\"", p);
return -EINVAL;
scoutfs_err(sb, "Unknown or malformed option, \"%s\"",
p);
break;
}
}
if (!opts->metadev_path) {
scoutfs_err(sb, "Required mount option \"metadev_path\" not found");
return -EINVAL;
}
return 0;
}
void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts)
{
DECLARE_OPTIONS_INFO(sb, optinf);
unsigned int seq;
if (WARN_ON_ONCE(optinf == NULL)) {
/* trying to use options before early setup or after destroy */
init_default_options(opts);
return;
}
do {
seq = read_seqbegin(&optinf->seqlock);
memcpy(opts, &optinf->opts, sizeof(struct scoutfs_mount_options));
} while (read_seqretry(&optinf->seqlock, seq));
}
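/*
 * Illustrative only, not part of the patch: callers simply take a
 * snapshot and use the fields, e.g. to compute the orphan scan delay.
 */
static unsigned long example_scan_delay_jiffies(struct super_block *sb)
{
	struct scoutfs_mount_options opts;

	scoutfs_options_read(sb, &opts);
	return msecs_to_jiffies(opts.orphan_scan_delay_ms);
}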
/*
* Early setup that parses and stores the options so that the rest of
* setup can use them. Full options setup that relies on other
* components will be done later.
*/
int scoutfs_options_early_setup(struct super_block *sb, char *options)
int scoutfs_options_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_mount_options opts;
struct options_info *optinf;
struct options_sb_info *osi;
int ret;
init_default_options(&opts);
osi = kzalloc(sizeof(struct options_sb_info), GFP_KERNEL);
if (!osi)
return -ENOMEM;
ret = parse_options(sb, options, &opts);
if (ret < 0)
goto out;
sbi->options = osi;
optinf = kzalloc(sizeof(struct options_info), GFP_KERNEL);
if (!optinf) {
osi->debugfs_dir = debugfs_create_dir("options", sbi->debug_root);
if (!osi->debugfs_dir) {
ret = -ENOMEM;
goto out;
}
seqlock_init(&optinf->seqlock);
scoutfs_sysfs_init_attrs(sb, &optinf->sysfs_attrs);
write_seqlock(&optinf->seqlock);
optinf->opts = opts;
write_sequnlock(&optinf->seqlock);
sbi->options_info = optinf;
ret = 0;
out:
if (ret < 0)
free_options(&opts);
return ret;
}
int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
{
struct super_block *sb = root->d_sb;
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
seq_printf(seq, ",metadev_path=%s", opts.metadev_path);
seq_printf(seq, ",orphan_scan_delay_ms=%u", opts.orphan_scan_delay_ms);
if (opts.quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts.quorum_slot_nr);
return 0;
}
static ssize_t metadev_path_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%s", opts.metadev_path);
}
SCOUTFS_ATTR_RO(metadev_path);
static ssize_t orphan_scan_delay_ms_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%u", opts.orphan_scan_delay_ms);
}
static ssize_t orphan_scan_delay_ms_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[20]; /* more than enough for octal -U32_MAX */
long val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtol(nullterm, 0, &val);
if (ret < 0 || val < MIN_ORPHAN_SCAN_DELAY_MS || val > MAX_ORPHAN_SCAN_DELAY_MS) {
scoutfs_err(sb, "invalid orphan_scan_delay_ms value written to options sysfs file, must be between %lu and %lu",
MIN_ORPHAN_SCAN_DELAY_MS, MAX_ORPHAN_SCAN_DELAY_MS);
return -EINVAL;
if (!debugfs_create_bool("btree_force_tiny_blocks", 0644,
osi->debugfs_dir,
&osi->btree_force_tiny_blocks)) {
ret = -ENOMEM;
goto out;
}
write_seqlock(&optinf->seqlock);
optinf->opts.orphan_scan_delay_ms = val;
write_sequnlock(&optinf->seqlock);
scoutfs_inode_schedule_orphan_dwork(sb);
return count;
}
SCOUTFS_ATTR_RW(orphan_scan_delay_ms);
static ssize_t quorum_slot_nr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%d\n", opts.quorum_slot_nr);
}
SCOUTFS_ATTR_RO(quorum_slot_nr);
static struct attribute *options_attrs[] = {
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),
SCOUTFS_ATTR_PTR(quorum_slot_nr),
NULL,
};
int scoutfs_options_setup(struct super_block *sb)
{
DECLARE_OPTIONS_INFO(sb, optinf);
int ret;
ret = scoutfs_sysfs_create_attrs(sb, &optinf->sysfs_attrs, options_attrs, "mount_options");
if (ret < 0)
ret = 0;
out:
if (ret)
scoutfs_options_destroy(sb);
return ret;
}
/*
* We remove the sysfs files early in unmount so that they can't try to call other subsystems
* as they're being destroyed.
*/
void scoutfs_options_stop(struct super_block *sb)
{
DECLARE_OPTIONS_INFO(sb, optinf);
if (optinf)
scoutfs_sysfs_destroy_attrs(sb, &optinf->sysfs_attrs);
}
void scoutfs_options_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_OPTIONS_INFO(sb, optinf);
struct options_sb_info *osi = sbi->options;
scoutfs_options_stop(sb);
if (optinf) {
free_options(&optinf->opts);
kfree(optinf);
sbi->options_info = NULL;
if (osi) {
if (osi->debugfs_dir)
debugfs_remove_recursive(osi->debugfs_dir);
kfree(osi);
sbi->options = NULL;
}
}

View File

@@ -5,19 +5,26 @@
#include <linux/in.h>
#include "format.h"
struct scoutfs_mount_options {
char *metadev_path;
unsigned int orphan_scan_delay_ms;
int quorum_slot_nr;
enum {
/*
* For debugging we can quickly create huge trees by limiting
* the number of items in each block as though the blocks were tiny.
*/
Opt_btree_force_tiny_blocks,
Opt_server_addr,
Opt_err,
};
void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts);
int scoutfs_options_show(struct seq_file *seq, struct dentry *root);
struct mount_options {
struct sockaddr_in server_addr;
};
int scoutfs_options_early_setup(struct super_block *sb, char *options);
int scoutfs_parse_options(struct super_block *sb, char *options,
struct mount_options *parsed);
int scoutfs_options_setup(struct super_block *sb);
void scoutfs_options_stop(struct super_block *sb);
void scoutfs_options_destroy(struct super_block *sb);
u32 scoutfs_option_u32(struct super_block *sb, int token);
#define scoutfs_option_bool scoutfs_option_u32
#endif /* _SCOUTFS_OPTIONS_H_ */

File diff suppressed because it is too large

View File

@@ -1,16 +1,10 @@
#ifndef _SCOUTFS_QUORUM_H_
#define _SCOUTFS_QUORUM_H_
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
struct sockaddr_in *sin);
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term);
int scoutfs_quorum_election(struct super_block *sb, ktime_t timeout_abs,
u64 prev_term, u64 *elected_term);
void scoutfs_quorum_clear_leader(struct super_block *sb);
int scoutfs_quorum_setup(struct super_block *sb);
void scoutfs_quorum_shutdown(struct super_block *sb);
void scoutfs_quorum_destroy(struct super_block *sb);
#endif

1546
kmod/src/radix.c Normal file

File diff suppressed because it is too large

45
kmod/src/radix.h Normal file
View File

@@ -0,0 +1,45 @@
#ifndef _SCOUTFS_RADIX_H_
#define _SCOUTFS_RADIX_H_
#include "per_task.h"
struct scoutfs_block_writer;
struct scoutfs_radix_allocator {
struct mutex mutex;
struct scoutfs_radix_root avail;
struct scoutfs_radix_root freed;
};
int scoutfs_radix_alloc(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri, u64 *blkno);
int scoutfs_radix_alloc_data(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_radix_root *root,
int count, u64 *blkno_ret, int *count_ret);
int scoutfs_radix_free(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri, u64 blkno);
int scoutfs_radix_free_data(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_radix_root *root,
u64 blkno, int count);
int scoutfs_radix_merge(struct super_block *sb,
struct scoutfs_radix_allocator *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_radix_root *dst,
struct scoutfs_radix_root *src,
struct scoutfs_radix_root *inp, bool meta, u64 count);
void scoutfs_radix_init_alloc(struct scoutfs_radix_allocator *alloc,
struct scoutfs_radix_root *avail,
struct scoutfs_radix_root *freed);
void scoutfs_radix_root_init(struct super_block *sb,
struct scoutfs_radix_root *root, bool meta);
u64 scoutfs_radix_root_free_bytes(struct super_block *sb,
struct scoutfs_radix_root *root);
u64 scoutfs_radix_bit_leaf_nr(u64 bit);
#endif
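/*
 * Illustrative only, not part of the patch: a hypothetical caller showing
 * how the pieces of the radix allocator API fit together.  Real users
 * wire the avail/freed roots up from persistent structures and a dirty
 * block writer.
 */
static int example_meta_alloc(struct super_block *sb,
			      struct scoutfs_radix_allocator *alloc,
			      struct scoutfs_block_writer *wri,
			      struct scoutfs_radix_root *avail,
			      struct scoutfs_radix_root *freed)
{
	u64 blkno;
	int ret;

	scoutfs_radix_init_alloc(alloc, avail, freed);

	ret = scoutfs_radix_alloc(sb, alloc, wri, &blkno);
	if (ret < 0)
		return ret;

	/* give it back to the freed tree instead of using it */
	return scoutfs_radix_free(sb, alloc, wri, blkno);
}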

View File

@@ -1,305 +0,0 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/rhashtable.h>
#include <linux/rcupdate.h>
#include <linux/list_sort.h>
#include "super.h"
#include "recov.h"
#include "cmp.h"
/*
* There are a few server messages which can't be processed until they
* know that they have state for all possibly active clients. These
* little helpers track which clients have recovered what state and give
* those message handlers a call to check if recovery has completed. We
* track the timeout here, but all we do is call back into the server to
* take steps to evict timed out clients and then let us know that their
* recovery has finished.
*/
struct recov_info {
struct super_block *sb;
spinlock_t lock;
struct list_head pending;
struct timer_list timer;
void (*timeout_fn)(struct super_block *);
};
#define DECLARE_RECOV_INFO(sb, name) \
struct recov_info *name = SCOUTFS_SB(sb)->recov_info
struct recov_pending {
struct list_head head;
u64 rid;
int which;
};
static struct recov_pending *next_pending(struct recov_info *recinf, u64 rid, int which)
{
struct recov_pending *pend;
list_for_each_entry(pend, &recinf->pending, head) {
if (pend->rid > rid && pend->which & which)
return pend;
}
return NULL;
}
static struct recov_pending *lookup_pending(struct recov_info *recinf, u64 rid, int which)
{
struct recov_pending *pend;
pend = next_pending(recinf, rid - 1, which);
if (pend && pend->rid == rid)
return pend;
return NULL;
}
/*
* We keep the pending list sorted by rid so that we can iterate over
* them. The list should be small and shouldn't be used often.
*/
static int cmp_pending_rid(void *priv, struct list_head *A, struct list_head *B)
{
struct recov_pending *a = list_entry(A, struct recov_pending, head);
struct recov_pending *b = list_entry(B, struct recov_pending, head);
return scoutfs_cmp_u64s(a->rid, b->rid);
}
/*
* Record that we'll be waiting for a client to recover something.
* _finished will eventually be called for every _prepare, either
* because recovery naturally finished or because it timed out and the
* server evicted the client.
*/
int scoutfs_recov_prepare(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *alloc;
struct recov_pending *pend;
if (WARN_ON_ONCE(which & SCOUTFS_RECOV_INVALID))
return -EINVAL;
alloc = kmalloc(sizeof(*pend), GFP_NOFS);
if (!alloc)
return -ENOMEM;
spin_lock(&recinf->lock);
pend = lookup_pending(recinf, rid, SCOUTFS_RECOV_ALL);
if (pend) {
pend->which |= which;
} else {
swap(pend, alloc);
pend->rid = rid;
pend->which = which;
list_add_tail(&pend->head, &recinf->pending);
list_sort(NULL, &recinf->pending, cmp_pending_rid);
}
spin_unlock(&recinf->lock);
kfree(alloc);
return 0;
}
/*
* Recovery is only finished once we've begun (which sets the timer) and
* all clients have finished. If we didn't test the timer we could
* claim it finished prematurely as clients are being prepared.
*/
static int recov_finished(struct recov_info *recinf)
{
return !!(recinf->timeout_fn != NULL && list_empty(&recinf->pending));
}
static void timer_callback(struct timer_list *timer)
{
struct recov_info *recinf = from_timer(recinf, timer, timer);
recinf->timeout_fn(recinf->sb);
}
/*
* Begin waiting for recovery once we've prepared all the clients. If
* the timeout period elapses before _finish is called on all prepared
* clients then the timer will call the callback.
*
* Returns > 0 if all the prepared clients finish recovery before begin
* is called.
*/
int scoutfs_recov_begin(struct super_block *sb, void (*timeout_fn)(struct super_block *),
unsigned int timeout_ms)
{
DECLARE_RECOV_INFO(sb, recinf);
int ret;
spin_lock(&recinf->lock);
recinf->timeout_fn = timeout_fn;
recinf->timer.expires = jiffies + msecs_to_jiffies(timeout_ms);
add_timer(&recinf->timer);
ret = recov_finished(recinf);
spin_unlock(&recinf->lock);
if (ret > 0)
del_timer_sync(&recinf->timer);
return ret;
}
/*
* A given client has recovered the given state. If it's finished all
* recovery then we free it, and if all clients have finished recovery
* then we cancel the timeout timer.
*
* Returns > 0 if _begin has been called and all clients have finished.
* The caller will only see > 0 returned once.
*/
int scoutfs_recov_finish(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
int ret = 0;
spin_lock(&recinf->lock);
pend = lookup_pending(recinf, rid, which);
if (pend) {
pend->which &= ~which;
if (pend->which) {
pend = NULL;
} else {
list_del(&pend->head);
ret = recov_finished(recinf);
}
}
spin_unlock(&recinf->lock);
if (ret > 0)
del_timer_sync(&recinf->timer);
kfree(pend);
return ret;
}
/*
* Returns true if the given client is still trying to recover
* the given state.
*/
bool scoutfs_recov_is_pending(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
bool is_pending;
spin_lock(&recinf->lock);
is_pending = lookup_pending(recinf, rid, which) != NULL;
spin_unlock(&recinf->lock);
return is_pending;
}
/*
* Return the next rid after the given rid of a client waiting for the
* given state to be recovered.  Start with rid 0; returns 0 when there
* are no more clients waiting for recovery.
*
* This is inherently racey. Callers are responsible for resolving any
* actions taken based on pending with the recovery finishing, perhaps
* before we return.
*/
u64 scoutfs_recov_next_pending(struct super_block *sb, u64 rid, int which)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
spin_lock(&recinf->lock);
pend = next_pending(recinf, rid, which);
rid = pend ? pend->rid : 0;
spin_unlock(&recinf->lock);
return rid;
}
/*
* The server is shutting down and doesn't need to worry about recovery
* anymore. It'll be built up again by the next server, if needed.
*/
void scoutfs_recov_shutdown(struct super_block *sb)
{
DECLARE_RECOV_INFO(sb, recinf);
struct recov_pending *pend;
struct recov_pending *tmp;
LIST_HEAD(list);
del_timer_sync(&recinf->timer);
spin_lock(&recinf->lock);
list_splice_init(&recinf->pending, &list);
recinf->timeout_fn = NULL;
spin_unlock(&recinf->lock);
list_for_each_entry_safe(pend, tmp, &list, head) {
list_del(&pend->head);
kfree(pend);
}
}
int scoutfs_recov_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct recov_info *recinf;
int ret;
recinf = kzalloc(sizeof(struct recov_info), GFP_KERNEL);
if (!recinf) {
ret = -ENOMEM;
goto out;
}
recinf->sb = sb;
spin_lock_init(&recinf->lock);
INIT_LIST_HEAD(&recinf->pending);
timer_setup(&recinf->timer, timer_callback, 0);
sbi->recov_info = recinf;
ret = 0;
out:
return ret;
}
void scoutfs_recov_destroy(struct super_block *sb)
{
DECLARE_RECOV_INFO(sb, recinf);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (recinf) {
scoutfs_recov_shutdown(sb);
kfree(recinf);
sbi->recov_info = NULL;
}
}
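/*
 * Illustrative only, not part of the patch: a sketch of how a starting
 * server might drive this API.  The rid array, the 30 second timeout and
 * example_recov_timeout() are made up for the example; the real
 * sequencing lives in server.c.
 */
static void example_recov_timeout(struct super_block *sb);

static int example_wait_for_greetings(struct super_block *sb, u64 *rids, int nr)
{
	int ret;
	int i;

	for (i = 0; i < nr; i++) {
		ret = scoutfs_recov_prepare(sb, rids[i], SCOUTFS_RECOV_GREETING);
		if (ret < 0)
			return ret;
	}

	/* > 0 means every prepared client already finished */
	ret = scoutfs_recov_begin(sb, example_recov_timeout, 30 * MSEC_PER_SEC);

	/* as each client greets: scoutfs_recov_finish(sb, rid, SCOUTFS_RECOV_GREETING) */
	return ret < 0 ? ret : 0;
}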

View File

@@ -1,23 +0,0 @@
#ifndef _SCOUTFS_RECOV_H_
#define _SCOUTFS_RECOV_H_
enum {
SCOUTFS_RECOV_GREETING = ( 1 << 0),
SCOUTFS_RECOV_LOCKS = ( 1 << 1),
SCOUTFS_RECOV_INVALID = (~0 << 2),
SCOUTFS_RECOV_ALL = (~SCOUTFS_RECOV_INVALID),
};
int scoutfs_recov_prepare(struct super_block *sb, u64 rid, int which);
int scoutfs_recov_begin(struct super_block *sb, void (*timeout_fn)(struct super_block *),
unsigned int timeout_ms);
int scoutfs_recov_finish(struct super_block *sb, u64 rid, int which);
bool scoutfs_recov_is_pending(struct super_block *sb, u64 rid, int which);
u64 scoutfs_recov_next_pending(struct super_block *sb, u64 rid, int which);
void scoutfs_recov_shutdown(struct super_block *sb);
int scoutfs_recov_setup(struct super_block *sb);
void scoutfs_recov_destroy(struct super_block *sb);
#endif

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -56,31 +56,21 @@ do { \
__entry->name##_data_len, __entry->name##_cmd, __entry->name##_flags, \
__entry->name##_error
u64 scoutfs_server_reserved_meta_blocks(struct super_block *sb);
int scoutfs_server_lock_request(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_server_lock_response(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_net_lock *nl);
int scoutfs_server_lock_response(struct super_block *sb, u64 rid,
u64 id, struct scoutfs_net_lock *nl);
int scoutfs_server_lock_recover_request(struct super_block *sb, u64 rid,
struct scoutfs_key *key);
void scoutfs_server_recov_finish(struct super_block *sb, u64 rid, int which);
int scoutfs_server_hold_commit(struct super_block *sb);
int scoutfs_server_apply_commit(struct super_block *sb, int err);
int scoutfs_server_send_omap_request(struct super_block *sb, u64 rid,
struct scoutfs_open_ino_map_args *args);
int scoutfs_server_send_omap_response(struct super_block *sb, u64 rid, u64 id,
struct scoutfs_open_ino_map *map, int err);
u64 scoutfs_server_seq(struct super_block *sb);
u64 scoutfs_server_next_seq(struct super_block *sb);
void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq);
void scoutfs_server_start(struct super_block *sb, u64 term);
struct sockaddr_in;
struct scoutfs_quorum_elected_info;
int scoutfs_server_start(struct super_block *sb, struct sockaddr_in *sin,
u64 term);
void scoutfs_server_abort(struct super_block *sb);
void scoutfs_server_stop(struct super_block *sb);
void scoutfs_server_stop_wait(struct super_block *sb);
bool scoutfs_server_is_running(struct super_block *sb);
bool scoutfs_server_is_up(struct super_block *sb);
bool scoutfs_server_is_down(struct super_block *sb);
int scoutfs_server_setup(struct super_block *sb);
void scoutfs_server_destroy(struct super_block *sb);

View File

@@ -1,71 +0,0 @@
/*
* A copy of sort() from upstream with a priv argument that's passed
* to comparison, like list_sort().
*/
/* ------------------------ */
/*
* A fast, small, non-recursive O(nlog n) sort for the Linux kernel
*
* Jan 23 2005 Matt Mackall <mpm@selenic.com>
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sort.h>
#include <linux/slab.h>
#include "sort_priv.h"
/**
* sort - sort an array of elements
* @priv: caller's pointer to pass to comparison and swap functions
* @base: pointer to data to sort
* @num: number of elements
* @size: size of each element
* @cmp_func: pointer to comparison function
* @swap_func: pointer to swap function or NULL
*
* This function does a heapsort on the given array. You may provide a
* swap_func function optimized to your element type.
*
* Sorting time is O(n log n) both on average and worst-case. While
* qsort is about 20% faster on average, it suffers from exploitable
* O(n*n) worst-case behavior and extra memory requirements that make
* it less suitable for kernel use.
*/
void sort_priv(void *priv, void *base, size_t num, size_t size,
int (*cmp_func)(void *priv, const void *, const void *),
void (*swap_func)(void *priv, void *, void *, int size))
{
/* pre-scale counters for performance */
int i = (num/2 - 1) * size, n = num * size, c, r;
/* heapify */
for ( ; i >= 0; i -= size) {
for (r = i; r * 2 + size < n; r = c) {
c = r * 2 + size;
if (c < n - size &&
cmp_func(priv, base + c, base + c + size) < 0)
c += size;
if (cmp_func(priv, base + r, base + c) >= 0)
break;
swap_func(priv, base + r, base + c, size);
}
}
/* sort */
for (i = n - size; i > 0; i -= size) {
swap_func(priv, base, base + i, size);
for (r = 0; r * 2 + size < i; r = c) {
c = r * 2 + size;
if (c < i - size &&
cmp_func(priv, base + c, base + c + size) < 0)
c += size;
if (cmp_func(priv, base + r, base + c) >= 0)
break;
swap_func(priv, base + r, base + c, size);
}
}
}

View File

@@ -1,8 +0,0 @@
#ifndef _SCOUTFS_SORT_PRIV_H_
#define _SCOUTFS_SORT_PRIV_H_
void sort_priv(void *priv, void *base, size_t num, size_t size,
int (*cmp_func)(void *priv, const void *, const void *),
void (*swap_func)(void *priv, void *, void *, int size));
#endif
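/*
 * Illustrative only, not part of the patch: sorting an array of u64 keys
 * with a caller-provided context.  The helpers are made up for the
 * example.
 */
static int example_cmp_u64(void *priv, const void *a, const void *b)
{
	const u64 *x = a, *y = b;

	return *x < *y ? -1 : *x > *y ? 1 : 0;
}

static void example_swap_u64(void *priv, void *a, void *b, int size)
{
	swap(*(u64 *)a, *(u64 *)b);
}

/* sort_priv(ctx, keys, nr, sizeof(keys[0]), example_cmp_u64, example_swap_u64); */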

View File

@@ -47,9 +47,9 @@ bool scoutfs_spbm_empty(struct scoutfs_spbm *spbm)
return RB_EMPTY_ROOT(&spbm->root);
}
enum spbm_flags {
enum {
/* if a node isn't found then return an allocated new node */
SPBM_FIND_ALLOC = (1 << 0),
SPBM_FIND_ALLOC = 0x1,
};
static struct spbm_node *find_node(struct scoutfs_spbm *spbm, u64 index,
int flags)

File diff suppressed because it is too large

View File

@@ -1,68 +0,0 @@
#ifndef _SCOUTFS_SRCH_H_
#define _SCOUTFS_SRCH_H_
struct scoutfs_block;
struct scoutfs_srch_rb_root {
struct rb_root root;
struct rb_node *last;
unsigned long nr;
};
struct scoutfs_srch_rb_node {
struct rb_node node;
u64 ino;
u64 id;
};
#define scoutfs_srch_foreach_rb_node(snode, node, sroot) \
for (node = rb_first(&(sroot)->root); \
node && (snode = container_of(node, struct scoutfs_srch_rb_node, \
node), 1); \
node = rb_next(node))
int scoutfs_srch_add(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_srch_file *sfl,
struct scoutfs_block **bl_ret,
u64 hash, u64 ino, u64 id);
void scoutfs_srch_destroy_rb_root(struct scoutfs_srch_rb_root *sroot);
int scoutfs_srch_search_xattrs(struct super_block *sb,
struct scoutfs_srch_rb_root *sroot,
u64 hash, u64 ino, u64 last_ino, bool *done);
int scoutfs_srch_rotate_log(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
struct scoutfs_srch_file *sfl, bool force);
int scoutfs_srch_get_compact(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root,
u64 rid, struct scoutfs_srch_compact *sc);
int scoutfs_srch_update_compact(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root, u64 rid,
struct scoutfs_srch_compact *sc);
int scoutfs_srch_commit_compact(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root, u64 rid,
struct scoutfs_srch_compact *res,
struct scoutfs_alloc_list_head *av,
struct scoutfs_alloc_list_head *fr);
int scoutfs_srch_cancel_compact(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_btree_root *root, u64 rid,
struct scoutfs_alloc_list_head *av,
struct scoutfs_alloc_list_head *fr);
void scoutfs_srch_destroy(struct super_block *sb);
int scoutfs_srch_setup(struct super_block *sb);
#endif
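/*
 * Illustrative only, not part of the patch: walking search results with
 * the iterator macro above.  The pr_info() consumer is made up.
 */
static void example_print_results(struct scoutfs_srch_rb_root *sroot)
{
	struct scoutfs_srch_rb_node *snode;
	struct rb_node *node;

	scoutfs_srch_foreach_rb_node(snode, node, sroot)
		pr_info("ino %llu id %llu\n", snode->ino, snode->id);
}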

View File

@@ -20,6 +20,7 @@
#include <linux/statfs.h>
#include <linux/sched.h>
#include <linux/debugfs.h>
#include <linux/percpu.h>
#include "super.h"
#include "block.h"
@@ -40,45 +41,51 @@
#include "sysfs.h"
#include "quorum.h"
#include "forest.h"
#include "srch.h"
#include "item.h"
#include "alloc.h"
#include "recov.h"
#include "omap.h"
#include "volopt.h"
#include "fence.h"
#include "scoutfs_trace.h"
static struct dentry *scoutfs_debugfs_root;
/* the statfs file fields can be small (and signed?) :/ */
static __statfs_word saturate_truncated_word(u64 files)
{
__statfs_word word = files;
static DEFINE_PER_CPU(u64, clock_sync_ids) = 0;
if (word != files) {
word = ~0ULL;
if (word < 0)
word = (unsigned long)word >> 1;
/*
* Give the caller a unique clock sync id for a message they're about to
* send. We make the ids reasonably globally unique by using randomly
* initialized per-cpu 64bit counters.
*/
__le64 scoutfs_clock_sync_id(void)
{
u64 rnd = 0;
u64 ret;
u64 *id;
retry:
preempt_disable();
id = this_cpu_ptr(&clock_sync_ids);
if (*id == 0) {
if (rnd == 0) {
preempt_enable();
get_random_bytes(&rnd, sizeof(rnd));
goto retry;
}
*id = rnd;
}
return word;
ret = ++(*id);
preempt_enable();
return cpu_to_le64(ret);
}
/*
* The server gives us the current sum of free blocks and the total
* inode count that it can see across all the clients' log trees. It
* won't see allocations and inode creations or deletions that are dirty
* in client memory as it builds a transaction.
* Ask the server for the current statfs fields. The message is very
* cheap so we're not worrying about spinning in statfs flooding the
* server with requests. We can add a cache and stale results if that
* becomes a problem.
*
* We don't have static limits on the number of files so the statfs
* fields for the total possible files and the number free isn't
* particularly helpful. What we do want to report is the number of
* inodes, so we fake a max possible number of inodes given a
* conservative estimate of the total space consumption per file and
* then find the free by subtracting our precise count of active inodes.
* This seems like the least surprising compromise where the file max
* doesn't change and the caller gets the correct count of used inodes.
* We fake the number of free inodes by assuming that we can fill free
* blocks with a certain number of inodes.  We then add the number of
* current inodes to that free count to determine the total possible
* inodes.
*
* The fsid that we report is constructed from the xor of the first two
* and second two little endian u32s that make up the uuid bytes.
@@ -86,52 +93,68 @@ static __statfs_word saturate_truncated_word(u64 files)
static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
{
struct super_block *sb = dentry->d_inode->i_sb;
struct scoutfs_net_statfs nst;
u64 files;
u64 ffree;
struct scoutfs_net_statfs nstatfs;
__le32 uuid[4];
int ret;
scoutfs_inc_counter(sb, statfs);
ret = scoutfs_client_statfs(sb, &nst);
ret = scoutfs_client_statfs(sb, &nstatfs);
if (ret)
goto out;
return ret;
kst->f_bfree = (le64_to_cpu(nst.free_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(nst.free_data_blocks);
kst->f_bfree = le64_to_cpu(nstatfs.bfree);
kst->f_type = SCOUTFS_SUPER_MAGIC;
kst->f_bsize = SCOUTFS_BLOCK_SM_SIZE;
kst->f_blocks = (le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(nst.total_data_blocks);
kst->f_bsize = SCOUTFS_BLOCK_SIZE;
kst->f_blocks = le64_to_cpu(nstatfs.total_blocks);
kst->f_bavail = kst->f_bfree;
files = div_u64(le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_LG_SHIFT, 2048);
ffree = files - le64_to_cpu(nst.inode_count);
kst->f_files = saturate_truncated_word(files);
kst->f_ffree = saturate_truncated_word(ffree);
kst->f_ffree = kst->f_bfree * 16;
kst->f_files = kst->f_ffree + le64_to_cpu(nstatfs.next_ino);
BUILD_BUG_ON(sizeof(uuid) != sizeof(nst.uuid));
memcpy(uuid, nst.uuid, sizeof(uuid));
BUILD_BUG_ON(sizeof(uuid) != sizeof(nstatfs.uuid));
memcpy(uuid, nstatfs.uuid, sizeof(uuid));
kst->f_fsid.val[0] = le32_to_cpu(uuid[0]) ^ le32_to_cpu(uuid[1]);
kst->f_fsid.val[1] = le32_to_cpu(uuid[2]) ^ le32_to_cpu(uuid[3]);
kst->f_namelen = SCOUTFS_NAME_LEN;
kst->f_frsize = SCOUTFS_BLOCK_SM_SIZE;
kst->f_frsize = SCOUTFS_BLOCK_SIZE;
/* the vfs fills f_flags */
ret = 0;
out:
/*
* We don't take cluster locks in statfs which makes it a very
* convenient place to trigger lock reclaim for debugging. We
* try to free as many locks as possible.
*/
if (scoutfs_trigger(sb, STATFS_LOCK_PURGE))
scoutfs_free_unused_locks(sb);
scoutfs_free_unused_locks(sb, -1UL);
return ret;
return 0;
}
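A quick worked example of the estimate described above, using invented numbers: if the server reports bfree == 1,000,000 free blocks and next_ino == 5,000 inodes already issued, the mount reports

	f_ffree = 1,000,000 * 16 = 16,000,000
	f_files = 16,000,000 + 5,000 = 16,005,000

so the used-inode count (f_files - f_ffree) tracks the inodes handed out while the maximum remains a rough, stable estimate.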
static int scoutfs_show_options(struct seq_file *seq, struct dentry *root)
{
struct super_block *sb = root->d_sb;
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
seq_printf(seq, ",server_addr="SIN_FMT, SIN_ARG(&opts->server_addr));
return 0;
}
static ssize_t server_addr_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
return snprintf(buf, PAGE_SIZE, SIN_FMT"\n",
SIN_ARG(&opts->server_addr));
}
SCOUTFS_ATTR_RO(server_addr);
static struct attribute *mount_options_attrs[] = {
SCOUTFS_ATTR_PTR(server_addr),
NULL,
};
static int scoutfs_sync_fs(struct super_block *sb, int wait)
{
trace_scoutfs_sync_fs(sb, wait);
@@ -140,28 +163,6 @@ static int scoutfs_sync_fs(struct super_block *sb, int wait)
return scoutfs_trans_sync(sb, wait);
}
/*
* Data dev is closed by generic code, but we have to explicitly close the meta
* dev.
*/
static void scoutfs_metadev_close(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (sbi->meta_bdev) {
/*
* Some kernels have blkdev_reread_part which calls
* fsync_bdev while holding the bd_mutex which inverts
* the s_umount hold in deactivate_super and blkdev_put
* from kill_sb->put_super.
*/
lockdep_off();
blkdev_put(sbi->meta_bdev, SCOUTFS_META_BDEV_MODE);
lockdep_on();
sbi->meta_bdev = NULL;
}
}
/*
* This destroys all the state that's built up in the sb info during
* mount. It's called by us on errors during mount if we haven't set
@@ -174,67 +175,39 @@ static void scoutfs_put_super(struct super_block *sb)
trace_scoutfs_put_super(sb);
/*
* Wait for invalidation and iput to finish with any lingering
* inode references that escaped the evict_inodes in
* generic_shutdown_super. MS_ACTIVE is clear so final iput
* will always evict.
*/
scoutfs_lock_flush_invalidate(sb);
scoutfs_inode_flush_iput(sb);
WARN_ON_ONCE(!list_empty(&sb->s_inodes));
sbi->shutdown = true;
scoutfs_forest_stop(sb);
scoutfs_srch_destroy(sb);
scoutfs_lock_shutdown(sb);
scoutfs_shutdown_trans(sb);
scoutfs_volopt_destroy(sb);
scoutfs_client_destroy(sb);
scoutfs_inode_destroy(sb);
scoutfs_item_destroy(sb);
scoutfs_forest_destroy(sb);
scoutfs_data_destroy(sb);
scoutfs_quorum_destroy(sb);
scoutfs_unlock(sb, sbi->rid_lock, SCOUTFS_LOCK_WRITE);
sbi->rid_lock = NULL;
scoutfs_shutdown_trans(sb);
scoutfs_client_destroy(sb);
scoutfs_inode_destroy(sb);
scoutfs_forest_destroy(sb);
/* the server locks the listen address and compacts */
scoutfs_lock_shutdown(sb);
scoutfs_server_destroy(sb);
scoutfs_recov_destroy(sb);
scoutfs_net_destroy(sb);
scoutfs_lock_destroy(sb);
scoutfs_omap_destroy(sb);
/* server clears quorum leader flag during shutdown */
scoutfs_quorum_destroy(sb);
scoutfs_block_destroy(sb);
scoutfs_destroy_triggers(sb);
scoutfs_fence_destroy(sb);
scoutfs_options_destroy(sb);
scoutfs_sysfs_destroy_attrs(sb, &sbi->mopts_ssa);
debugfs_remove(sbi->debug_root);
scoutfs_destroy_counters(sb);
scoutfs_destroy_sysfs(sb);
scoutfs_metadev_close(sb);
kfree(sbi);
sb->s_fs_info = NULL;
}
/*
* Record that we're performing a forced unmount. As put_super drives
* destruction of the filesystem we won't issue more network or storage
* operations because we assume that they'll hang. Pending operations
* can return errors when it's possible to do so. We may be racing with
* pending operations which can't be canceled.
*/
static void scoutfs_umount_begin(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
scoutfs_warn(sb, "forcing unmount, can return errors and lose unsynced data");
sbi->forced_unmount = true;
scoutfs_client_net_shutdown(sb);
}
static const struct super_operations scoutfs_super_ops = {
.alloc_inode = scoutfs_alloc_inode,
.drop_inode = scoutfs_drop_inode,
@@ -242,9 +215,8 @@ static const struct super_operations scoutfs_super_ops = {
.destroy_inode = scoutfs_destroy_inode,
.sync_fs = scoutfs_sync_fs,
.statfs = scoutfs_statfs,
.show_options = scoutfs_options_show,
.show_options = scoutfs_show_options,
.put_super = scoutfs_put_super,
.umount_begin = scoutfs_umount_begin,
};
/*
@@ -255,39 +227,19 @@ static const struct super_operations scoutfs_super_ops = {
int scoutfs_write_super(struct super_block *sb,
struct scoutfs_super_block *super)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
le64_add_cpu(&super->hdr.seq, 1);
return scoutfs_block_write_sm(sb, sbi->meta_bdev, SCOUTFS_SUPER_BLKNO,
&super->hdr,
return scoutfs_block_write_sm(sb, SCOUTFS_SUPER_BLKNO, &super->hdr,
sizeof(struct scoutfs_super_block));
}
static bool small_bdev(struct super_block *sb, char *which, u64 blocks,
struct block_device *bdev, int shift)
{
u64 size = (u64)i_size_read(bdev->bd_inode);
u64 count = size >> shift;
if (blocks > count) {
scoutfs_err(sb, "super block records %llu %s blocks, but device %u:%u size %llu only allows %llu blocks",
blocks, which, MAJOR(bdev->bd_dev), MINOR(bdev->bd_dev), size, count);
return true;
}
return false;
}
/*
* Read super, specifying bdev.
* Read the super block. If it's valid store it in the caller's super
* struct.
*/
static int scoutfs_read_super_from_bdev(struct super_block *sb,
struct block_device *bdev,
struct scoutfs_super_block *super_res)
int scoutfs_read_super(struct super_block *sb,
struct scoutfs_super_block *super_res)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super;
__le32 calc;
int ret;
@@ -296,8 +248,9 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
if (!super)
return -ENOMEM;
ret = scoutfs_block_read_sm(sb, bdev, SCOUTFS_SUPER_BLKNO, &super->hdr,
sizeof(struct scoutfs_super_block), &calc);
ret = scoutfs_block_read_sm(sb, SCOUTFS_SUPER_BLKNO, &super->hdr,
sizeof(struct scoutfs_super_block),
&calc);
if (ret < 0)
goto out;
@@ -322,54 +275,32 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
goto out;
}
if (le64_to_cpu(super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN ||
le64_to_cpu(super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX) {
scoutfs_err(sb, "super block has format version %llu outside of supported version range %u-%u",
le64_to_cpu(super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
ret = -EINVAL;
goto out;
}
/*
* fill_supers checks the fmt_vers in both supers and then decides to use it.
* From then on we verify that the supers we read have that version.
*/
if (sbi->fmt_vers != 0 && le64_to_cpu(super->fmt_vers) != sbi->fmt_vers) {
scoutfs_err(sb, "super block has format version %llu than %llu read at mount",
le64_to_cpu(super->fmt_vers), sbi->fmt_vers);
if (super->format_hash != cpu_to_le64(SCOUTFS_FORMAT_HASH)) {
scoutfs_err(sb, "super block has invalid format hash 0x%llx, expected 0x%llx",
le64_to_cpu(super->format_hash),
SCOUTFS_FORMAT_HASH);
ret = -EINVAL;
goto out;
}
/* XXX do we want more rigorous invalid super checking? */
if (small_bdev(sb, "metadata", le64_to_cpu(super->total_meta_blocks), sbi->meta_bdev,
SCOUTFS_BLOCK_LG_SHIFT) ||
small_bdev(sb, "data", le64_to_cpu(super->total_data_blocks), sb->s_bdev,
SCOUTFS_BLOCK_SM_SHIFT)) {
if (super->quorum_count == 0 ||
super->quorum_count > SCOUTFS_QUORUM_MAX_COUNT) {
scoutfs_err(sb, "super block has invalid quorum count %u, must be > 0 and <= %u",
super->quorum_count, SCOUTFS_QUORUM_MAX_COUNT);
ret = -EINVAL;
goto out;
}
*super_res = *super;
ret = 0;
out:
if (ret == 0)
*super_res = *super;
kfree(super);
return ret;
}
/*
* Read the super block from meta dev.
*/
int scoutfs_read_super(struct super_block *sb,
struct scoutfs_super_block *super_res)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return scoutfs_read_super_from_bdev(sb, sbi->meta_bdev, super_res);
}
/*
* This needs to be setup after reading the super because it uses the
* fsid found in the super block.
@@ -406,74 +337,10 @@ static int assign_random_id(struct scoutfs_sb_info *sbi)
return 0;
}
/*
* Ensure superblock copies in metadata and data block devices are valid, and
* fill in in-memory superblock if so.
*/
static int scoutfs_read_supers(struct super_block *sb)
{
struct scoutfs_super_block *meta_super = NULL;
struct scoutfs_super_block *data_super = NULL;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
int ret = 0;
meta_super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
data_super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!meta_super || !data_super) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super_from_bdev(sb, sbi->meta_bdev, meta_super);
if (ret < 0) {
scoutfs_err(sb, "could not get meta_super: error %d", ret);
goto out;
}
ret = scoutfs_read_super_from_bdev(sb, sb->s_bdev, data_super);
if (ret < 0) {
scoutfs_err(sb, "could not get data_super: error %d", ret);
goto out;
}
if (!SCOUTFS_IS_META_BDEV(meta_super)) {
scoutfs_err(sb, "meta_super META flag not set");
ret = -EINVAL;
goto out;
}
if (SCOUTFS_IS_META_BDEV(data_super)) {
scoutfs_err(sb, "data_super META flag set");
ret = -EINVAL;
goto out;
}
if (memcmp(meta_super->uuid, data_super->uuid, SCOUTFS_UUID_BYTES)) {
scoutfs_err(sb, "superblock UUID mismatch");
ret = -EINVAL;
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) != le64_to_cpu(data_super->fmt_vers)) {
scoutfs_err(sb, "meta device format version %llu != data device format version %llu",
le64_to_cpu(meta_super->fmt_vers), le64_to_cpu(data_super->fmt_vers));
goto out;
}
sbi->fmt_vers = le64_to_cpu(meta_super->fmt_vers);
sbi->super = *meta_super;
out:
kfree(meta_super);
kfree(data_super);
return ret;
}
static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
{
struct scoutfs_mount_options opts;
struct block_device *meta_bdev;
struct scoutfs_sb_info *sbi;
struct mount_options opts;
struct inode *inode;
int ret;
@@ -483,7 +350,6 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sb->s_maxbytes = MAX_LFS_FILESIZE;
sb->s_op = &scoutfs_super_ops;
sb->s_export_op = &scoutfs_export_ops;
sb->s_flags |= MS_I_VERSION;
/* btree blocks use long lived bh->b_data refs */
mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
@@ -499,68 +365,54 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
return ret;
spin_lock_init(&sbi->next_ino_lock);
init_waitqueue_head(&sbi->trans_hold_wq);
spin_lock_init(&sbi->data_wait_root.lock);
sbi->data_wait_root.root = RB_ROOT;
spin_lock_init(&sbi->trans_write_lock);
INIT_DELAYED_WORK(&sbi->trans_write_work, scoutfs_trans_write_func);
init_waitqueue_head(&sbi->trans_write_wq);
scoutfs_sysfs_init_attrs(sb, &sbi->mopts_ssa);
/* parse options early for use during setup */
ret = scoutfs_options_early_setup(sb, data);
if (ret < 0)
return ret;
scoutfs_options_read(sb, &opts);
ret = scoutfs_parse_options(sb, data, &opts);
if (ret)
goto out;
ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SM_SIZE);
if (ret != SCOUTFS_BLOCK_SM_SIZE) {
sbi->opts = opts;
ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SIZE);
if (ret != SCOUTFS_BLOCK_SIZE) {
scoutfs_err(sb, "failed to set blocksize, returned %d", ret);
ret = -EIO;
goto out;
}
meta_bdev = blkdev_get_by_path(opts.metadev_path, SCOUTFS_META_BDEV_MODE, sb);
if (IS_ERR(meta_bdev)) {
scoutfs_err(sb, "could not open metadev: error %ld",
PTR_ERR(meta_bdev));
ret = PTR_ERR(meta_bdev);
goto out;
}
sbi->meta_bdev = meta_bdev;
ret = set_blocksize(sbi->meta_bdev, SCOUTFS_BLOCK_SM_SIZE);
if (ret != 0) {
scoutfs_err(sb, "failed to set metadev blocksize, returned %d",
ret);
goto out;
}
ret = scoutfs_read_supers(sb) ?:
ret = scoutfs_read_super(sb, &SCOUTFS_SB(sb)->super) ?:
scoutfs_debugfs_setup(sb) ?:
scoutfs_setup_sysfs(sb) ?:
scoutfs_setup_counters(sb) ?:
scoutfs_options_setup(sb) ?:
scoutfs_sysfs_create_attrs(sb, &sbi->mopts_ssa,
mount_options_attrs, "mount_options") ?:
scoutfs_setup_triggers(sb) ?:
scoutfs_fence_setup(sb) ?:
scoutfs_block_setup(sb) ?:
scoutfs_forest_setup(sb) ?:
scoutfs_item_setup(sb) ?:
scoutfs_inode_setup(sb) ?:
scoutfs_data_setup(sb) ?:
scoutfs_setup_trans(sb) ?:
scoutfs_omap_setup(sb) ?:
scoutfs_lock_setup(sb) ?:
scoutfs_net_setup(sb) ?:
scoutfs_recov_setup(sb) ?:
scoutfs_server_setup(sb) ?:
scoutfs_quorum_setup(sb) ?:
scoutfs_server_setup(sb) ?:
scoutfs_client_setup(sb) ?:
scoutfs_volopt_setup(sb) ?:
scoutfs_srch_setup(sb);
scoutfs_lock_rid(sb, SCOUTFS_LOCK_WRITE, 0, sbi->rid,
&sbi->rid_lock) ?:
scoutfs_trans_get_log_trees(sb);
if (ret)
goto out;
/* this interruptible iget lets hung mount be aborted with ctl-c */
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO, SCOUTFS_LKF_INTERRUPTIBLE, 0);
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
if (ret == -ERESTARTSYS)
ret = -EINTR;
goto out;
}
@@ -570,15 +422,12 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
goto out;
}
/* send requests once iget progress shows we had a server */
ret = scoutfs_trans_get_log_trees(sb);
ret = scoutfs_client_advance_seq(sb, &sbi->trans_seq);
if (ret)
goto out;
/* start up background services that use everything else */
scoutfs_inode_start(sb);
scoutfs_forest_start(sb);
scoutfs_trans_restart_sync_deadline(sb);
// scoutfs_scan_orphans(sb);
ret = 0;
out:
/* on error, generic_shutdown_super calls put_super if s_root */
@@ -599,18 +448,7 @@ static struct dentry *scoutfs_mount(struct file_system_type *fs_type, int flags,
*/
static void scoutfs_kill_sb(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (sbi) {
sbi->unmounting = true;
smp_wmb();
}
if (SCOUTFS_HAS_SBI(sb)) {
scoutfs_options_stop(sb);
scoutfs_inode_orphan_stop(sb);
scoutfs_lock_unmount_begin(sb);
}
trace_scoutfs_kill_sb(sb);
kill_block_super(sb);
}
@@ -643,15 +481,7 @@ static int __init scoutfs_module_init(void)
*/
__asm__ __volatile__ (
".section .note.git_describe,\"a\"\n"
".ascii \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_format_version_min,\"a\"\n"
".ascii \""SCOUTFS_FORMAT_VERSION_MIN_STR"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_format_version_max,\"a\"\n"
".ascii \""SCOUTFS_FORMAT_VERSION_MAX_STR"\\n\"\n"
".string \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".previous\n");
scoutfs_init_counters();
@@ -685,5 +515,3 @@ module_exit(scoutfs_module_exit)
MODULE_AUTHOR("Zach Brown <zab@versity.com>");
MODULE_LICENSE("GPL");
MODULE_INFO(git_describe, SCOUTFS_GIT_DESCRIBE);
MODULE_INFO(scoutfs_format_version_min, SCOUTFS_FORMAT_VERSION_MIN_STR);
MODULE_INFO(scoutfs_format_version_max, SCOUTFS_FORMAT_VERSION_MAX_STR);

View File

@@ -25,26 +25,18 @@ struct options_sb_info;
struct net_info;
struct block_info;
struct forest_info;
struct srch_info;
struct recov_info;
struct omap_info;
struct volopt_info;
struct fence_info;
struct scoutfs_sb_info {
struct super_block *sb;
/* assigned once at the start of each mount, read-only */
u64 rid;
u64 fmt_vers;
struct scoutfs_lock *rid_lock;
struct scoutfs_super_block super;
struct block_device *meta_bdev;
spinlock_t next_ino_lock;
struct options_info *options_info;
struct data_info *data_info;
struct inode_sb_info *inode_sb_info;
struct btree_info *btree_info;
@@ -52,33 +44,39 @@ struct scoutfs_sb_info {
struct quorum_info *quorum_info;
struct block_info *block_info;
struct forest_info *forest_info;
struct srch_info *srch_info;
struct omap_info *omap_info;
struct volopt_info *volopt_info;
struct item_cache_info *item_cache_info;
struct fence_info *fence_info;
wait_queue_head_t trans_hold_wq;
struct task_struct *trans_task;
/* tracks tasks waiting for data extents */
struct scoutfs_data_wait_root data_wait_root;
/* set as transaction opens with trans holders excluded */
spinlock_t trans_write_lock;
u64 trans_write_count;
u64 trans_seq;
int trans_write_ret;
struct delayed_work trans_write_work;
wait_queue_head_t trans_write_wq;
struct workqueue_struct *trans_write_workq;
bool trans_deadline_expired;
struct trans_info *trans_info;
struct lock_info *lock_info;
struct lock_server_info *lock_server_info;
struct client_info *client_info;
struct server_info *server_info;
struct recov_info *recov_info;
struct sysfs_info *sfsinfo;
struct scoutfs_counters *counters;
struct scoutfs_triggers *triggers;
struct mount_options opts;
struct options_sb_info *options;
struct scoutfs_sysfs_attrs mopts_ssa;
struct dentry *debug_root;
bool forced_unmount;
bool unmounting;
bool shutdown;
unsigned long corruption_messages_once[SC_NR_LONGS];
};
@@ -93,33 +91,6 @@ static inline bool SCOUTFS_HAS_SBI(struct super_block *sb)
return (sb != NULL) && (SCOUTFS_SB(sb) != NULL);
}
static inline bool SCOUTFS_IS_META_BDEV(struct scoutfs_super_block *super_block)
{
return !!(le64_to_cpu(super_block->flags) & SCOUTFS_FLAG_IS_META_BDEV);
}
#define SCOUTFS_META_BDEV_MODE (FMODE_READ | FMODE_WRITE | FMODE_EXCL)
static inline bool scoutfs_forcing_unmount(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return sbi->forced_unmount;
}
/*
* True if we're shutting down the system and can be used as a coarse
* indicator that we can avoid doing some work that no longer makes
* sense.
*/
static inline bool scoutfs_unmounting(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
smp_rmb();
return !sbi || sbi->unmounting;
}
/*
* A small string embedded in messages that's used to identify a
* specific mount. It's the three most significant bytes of the fsid
@@ -157,4 +128,6 @@ int scoutfs_write_super(struct super_block *sb,
/* to keep this out of the ioctl.h public interface definition */
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
__le64 scoutfs_clock_sync_id(void);
#endif

View File

@@ -37,25 +37,6 @@ struct attr_funcs {
#define ATTR_FUNCS_RO(_name) \
static struct attr_funcs _name##_attr_funcs = __ATTR_RO(_name)
static ssize_t data_device_maj_min_show(struct kobject *kobj, struct attribute *attr, char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
return snprintf(buf, PAGE_SIZE, "%u:%u\n",
MAJOR(sb->s_bdev->bd_dev), MINOR(sb->s_bdev->bd_dev));
}
ATTR_FUNCS_RO(data_device_maj_min);
static ssize_t format_version_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return snprintf(buf, PAGE_SIZE, "%llu\n", sbi->fmt_vers);
}
ATTR_FUNCS_RO(format_version);
static ssize_t fsid_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
@@ -110,8 +91,6 @@ static ssize_t attr_funcs_show(struct kobject *kobj, struct attribute *attr,
static struct attribute *sb_id_attrs[] = {
&data_device_maj_min_attr_funcs.attr,
&format_version_attr_funcs.attr,
&fsid_attr_funcs.attr,
&rid_attr_funcs.attr,
NULL,
@@ -152,10 +131,9 @@ void scoutfs_sysfs_init_attrs(struct super_block *sb,
* If this returns success then the file will be visible and show can
* be called until unmount.
*/
int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
struct kobject *parent,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...)
int scoutfs_sysfs_create_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...)
{
va_list args;
size_t name_len;
@@ -196,8 +174,8 @@ int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
goto out;
}
ret = kobject_init_and_add(&ssa->kobj, &ssa->ktype, parent,
"%s", ssa->name);
ret = kobject_init_and_add(&ssa->kobj, &ssa->ktype,
scoutfs_sysfs_sb_dir(sb), "%s", ssa->name);
out:
if (ret) {
kfree(ssa->name);

View File

@@ -10,8 +10,6 @@
#define SCOUTFS_ATTR_RO(_name) \
static struct kobj_attribute scoutfs_attr_##_name = __ATTR_RO(_name)
#define SCOUTFS_ATTR_RW(_name) \
static struct kobj_attribute scoutfs_attr_##_name = __ATTR_RW(_name)
#define SCOUTFS_ATTR_PTR(_name) \
&scoutfs_attr_##_name.attr
@@ -36,14 +34,9 @@ struct scoutfs_sysfs_attrs {
void scoutfs_sysfs_init_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa);
int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
struct kobject *parent,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...);
#define scoutfs_sysfs_create_attrs(sb, ssa, attrs, fmt, args...) \
scoutfs_sysfs_create_attrs_parent(sb, scoutfs_sysfs_sb_dir(sb), \
ssa, attrs, fmt, ##args)
int scoutfs_sysfs_create_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa,
struct attribute **attrs, char *fmt, ...);
void scoutfs_sysfs_destroy_attrs(struct super_block *sb,
struct scoutfs_sysfs_attrs *ssa);

View File

@@ -17,7 +17,6 @@
#include <linux/atomic.h>
#include <linux/writeback.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include "super.h"
#include "trans.h"
@@ -26,10 +25,8 @@
#include "counters.h"
#include "client.h"
#include "inode.h"
#include "alloc.h"
#include "radix.h"
#include "block.h"
#include "msg.h"
#include "item.h"
#include "scoutfs_trace.h"
/*
@@ -40,46 +37,51 @@
* track the relationships between dirty blocks so there's only ever one
* transaction being built.
*
* Committing the current dirty transaction can be triggered by sync, a
* regular background commit interval, reaching a dirty block threshold,
* or the transaction running out of its private allocator resources.
* Once all the current holders release the writing func writes out the
* dirty blocks while excluding holders until it finishes.
* The copy of the on-disk super block in the fs sb info has its header
* sequence advanced so that new dirty blocks inherit this dirty
* sequence number. It's only advanced once all those dirty blocks are
* reachable after having first written them all out and then the new
* super with that seq. It's first incremented at mount.
*
* Unfortunately writing holders can nest. We track nested hold callers
* with the per-task journal_info pointer to avoid deadlocks between
* holders that might otherwise wait for a pending commit.
* Unfortunately writers can nest. We don't bother trying to special
* case holding a transaction that you're already holding because that
* requires per-task storage. We just let anyone hold transactions
* regardless of waiters waiting to write, which risks waiters waiting a
* very long time.
*/
/* sync dirty data at least this often */
#define TRANS_SYNC_DELAY (HZ * 10)
/*
* XXX move the rest of the super trans_ fields here.
*/
struct trans_info {
struct super_block *sb;
atomic_t holders;
spinlock_t lock;
unsigned reserved_items;
unsigned reserved_vals;
unsigned holders;
bool writing;
struct scoutfs_log_trees lt;
struct scoutfs_alloc alloc;
struct scoutfs_radix_allocator alloc;
struct scoutfs_block_writer wri;
wait_queue_head_t hold_wq;
struct task_struct *task;
spinlock_t write_lock;
u64 write_count;
int write_ret;
struct delayed_work write_work;
wait_queue_head_t write_wq;
struct workqueue_struct *write_workq;
bool deadline_expired;
};
#define DECLARE_TRANS_INFO(sb, name) \
struct trans_info *name = SCOUTFS_SB(sb)->trans_info
/* avoid the high sign bit out of an abundance of caution */
#define TRANS_HOLDERS_WRITE_FUNC_BIT (1 << 30)
#define TRANS_HOLDERS_COUNT_MASK (TRANS_HOLDERS_WRITE_FUNC_BIT - 1)
static bool drained_holders(struct trans_info *tri)
{
bool drained;
spin_lock(&tri->lock);
tri->writing = true;
drained = tri->holders == 0;
spin_unlock(&tri->lock);
return drained;
}
static int commit_btrees(struct super_block *sb)
{
@@ -101,7 +103,6 @@ static int commit_btrees(struct super_block *sb)
*/
int scoutfs_trans_get_log_trees(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_log_trees lt;
int ret = 0;
@@ -109,16 +110,12 @@ int scoutfs_trans_get_log_trees(struct super_block *sb)
ret = scoutfs_client_get_log_trees(sb, &lt);
if (ret == 0) {
tri->lt = lt;
scoutfs_alloc_init(&tri->alloc, &lt.meta_avail, &lt.meta_freed);
scoutfs_radix_init_alloc(&tri->alloc, &lt.meta_avail,
&lt.meta_freed);
scoutfs_block_writer_init(sb, &tri->wri);
scoutfs_forest_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
scoutfs_data_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
/* first set during mount from 0 to nonzero allows commits */
spin_lock(&tri->write_lock);
sbi->trans_seq = le64_to_cpu(lt.get_trans_seq);
spin_unlock(&tri->write_lock);
}
return ret;
}
@@ -129,36 +126,6 @@ bool scoutfs_trans_has_dirty(struct super_block *sb)
return scoutfs_block_writer_has_dirty(sb, &tri->wri);
}
/*
* This is racing with wait_event conditions, make sure our atomic
* stores and waitqueue loads are ordered.
*/
static void sub_holders_and_wake(struct super_block *sb, int val)
{
DECLARE_TRANS_INFO(sb, tri);
atomic_sub(val, &tri->holders);
smp_mb(); /* make sure sub is visible before we wake */
if (waitqueue_active(&tri->hold_wq))
wake_up(&tri->hold_wq);
}
/*
* called as a wait_event condition, needs to be careful to not change
* task state and is racing with waking paths that sub_return, test, and
* wake.
*/
static bool drained_holders(struct trans_info *tri)
{
int holders;
smp_mb(); /* make sure task in wait_event queue before atomic read */
holders = atomic_read(&tri->holders) & TRANS_HOLDERS_COUNT_MASK;
return holders == 0;
}
/*
* This work func is responsible for writing out all the dirty blocks
* that make up the current dirty transaction. It prevents writers from
@@ -169,93 +136,76 @@ static bool drained_holders(struct trans_info *tri)
* functions that would try to hold the transaction. We record the task
* who's committing the transaction so that holding won't deadlock.
*
* Once we clear the write func bit in holders then waiting holders can
* enter the transaction and continue modifying the transaction. Once
* we start writing we consider the transaction done and won't exit,
* clearing the write func bit, until get_log_trees has opened the next
* transaction. The exception is forced unmount which is allowed to
* generate errors and throw away data.
* Any dirty block had to have allocated a new blkno which would have
* created dirty allocator metadata blocks. We can avoid writing
* entirely if we don't have any dirty metadata blocks. This is
* important because we don't try to serialize this work during
* unmount.. we can execute as the vfs is shutting down.. we need to
* decide that nothing is dirty without calling the vfs at all.
*
* This means that the only way fsync can return an error is if we're in
* forced unmount.
* We first try to sync the dirty inodes and write their dirty data blocks,
* then we write all our dirty metadata blocks, and only when those succeed
* do we write the new super that references all of these newly written blocks.
*
* If there are write errors then blocks are kept dirty in memory and will
* be written again at the next sync.
*/
void scoutfs_trans_write_func(struct work_struct *work)
{
struct trans_info *tri = container_of(work, struct trans_info, write_work.work);
struct super_block *sb = tri->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
bool retrying = false;
char *s = NULL;
struct scoutfs_sb_info *sbi = container_of(work, struct scoutfs_sb_info,
trans_write_work.work);
struct super_block *sb = sbi->sb;
DECLARE_TRANS_INFO(sb, tri);
int ret = 0;
tri->task = current;
sbi->trans_task = current;
/* mark that we're writing so holders wait for us to finish and clear our bit */
atomic_add(TRANS_HOLDERS_WRITE_FUNC_BIT, &tri->holders);
wait_event(sbi->trans_hold_wq, drained_holders(tri));
wait_event(tri->hold_wq, drained_holders(tri));
trace_scoutfs_trans_write_func(sb,
scoutfs_block_writer_dirty_bytes(sb, &tri->wri));
/* mount hasn't opened first transaction yet, still complete sync */
if (sbi->trans_seq == 0) {
ret = 0;
goto out;
if (scoutfs_block_writer_has_dirty(sb, &tri->wri)) {
if (sbi->trans_deadline_expired)
scoutfs_inc_counter(sb, trans_commit_timer);
ret = scoutfs_inode_walk_writeback(sb, true) ?:
scoutfs_block_writer_write(sb, &tri->wri) ?:
scoutfs_inode_walk_writeback(sb, false) ?:
commit_btrees(sb) ?:
scoutfs_client_advance_seq(sb, &sbi->trans_seq) ?:
scoutfs_trans_get_log_trees(sb);
if (ret)
goto out;
} else if (sbi->trans_deadline_expired) {
/*
* If we're not writing data then we only advance the
* seq at the sync deadline interval. This keeps idle
* mounts from pinning a seq and stopping readers of the
* seq indices but doesn't send a message for every sync
* syscall.
*/
ret = scoutfs_client_advance_seq(sb, &sbi->trans_seq);
}
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
goto out;
}
trace_scoutfs_trans_write_func(sb, scoutfs_block_writer_dirty_bytes(sb, &tri->wri),
scoutfs_item_dirty_pages(sb));
if (tri->deadline_expired)
scoutfs_inc_counter(sb, trans_commit_timer);
scoutfs_inc_counter(sb, trans_commit_written);
do {
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb, &tri->alloc,
&tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
if (ret < 0) {
if (!retrying) {
scoutfs_warn(sb, "critical transaction commit failure: %s = %d, retrying",
s, ret);
retrying = true;
}
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
break;
}
msleep(2 * MSEC_PER_SEC);
} else if (retrying) {
scoutfs_info(sb, "retried transaction commit succeeded");
}
} while (ret < 0);
out:
spin_lock(&tri->write_lock);
tri->write_count++;
tri->write_ret = ret;
spin_unlock(&tri->write_lock);
wake_up(&tri->write_wq);
/* XXX this all needs serious work for dealing with errors */
WARN_ON_ONCE(ret);
/* we're done, wake waiting holders */
sub_holders_and_wake(sb, TRANS_HOLDERS_WRITE_FUNC_BIT);
spin_lock(&sbi->trans_write_lock);
sbi->trans_write_count++;
sbi->trans_write_ret = ret;
spin_unlock(&sbi->trans_write_lock);
wake_up(&sbi->trans_write_wq);
tri->task = NULL;
spin_lock(&tri->lock);
tri->writing = false;
spin_unlock(&tri->lock);
wake_up(&sbi->trans_hold_wq);
sbi->trans_task = NULL;
scoutfs_trans_restart_sync_deadline(sb);
}
@@ -266,17 +216,17 @@ struct write_attempt {
};
/* this is called as a wait_event() condition so it can't change task state */
static int write_attempted(struct super_block *sb, struct write_attempt *attempt)
static int write_attempted(struct scoutfs_sb_info *sbi,
struct write_attempt *attempt)
{
DECLARE_TRANS_INFO(sb, tri);
int done = 1;
spin_lock(&tri->write_lock);
if (tri->write_count > attempt->count)
attempt->ret = tri->write_ret;
spin_lock(&sbi->trans_write_lock);
if (sbi->trans_write_count > attempt->count)
attempt->ret = sbi->trans_write_ret;
else
done = 0;
spin_unlock(&tri->write_lock);
spin_unlock(&sbi->trans_write_lock);
return done;
}
@@ -286,12 +236,10 @@ static int write_attempted(struct super_block *sb, struct write_attempt *attempt
* We always have delayed sync work pending but the caller wants it
* to execute immediately.
*/
static void queue_trans_work(struct super_block *sb)
static void queue_trans_work(struct scoutfs_sb_info *sbi)
{
DECLARE_TRANS_INFO(sb, tri);
tri->deadline_expired = false;
mod_delayed_work(tri->write_workq, &tri->write_work, 0);
sbi->trans_deadline_expired = false;
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work, 0);
}
/*
@@ -304,24 +252,26 @@ static void queue_trans_work(struct super_block *sb)
*/
int scoutfs_trans_sync(struct super_block *sb, int wait)
{
DECLARE_TRANS_INFO(sb, tri);
struct write_attempt attempt = { .ret = 0 };
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct write_attempt attempt;
int ret;
if (!wait) {
queue_trans_work(sb);
queue_trans_work(sbi);
return 0;
}
spin_lock(&tri->write_lock);
attempt.count = tri->write_count;
spin_unlock(&tri->write_lock);
spin_lock(&sbi->trans_write_lock);
attempt.count = sbi->trans_write_count;
spin_unlock(&sbi->trans_write_lock);
queue_trans_work(sb);
queue_trans_work(sbi);
wait_event(tri->write_wq, write_attempted(sb, &attempt));
ret = attempt.ret;
ret = wait_event_interruptible(sbi->trans_write_wq,
write_attempted(sbi, &attempt));
if (ret == 0)
ret = attempt.ret;
return ret;
}
@@ -337,208 +287,140 @@ int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
void scoutfs_trans_restart_sync_deadline(struct super_block *sb)
{
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
tri->deadline_expired = true;
mod_delayed_work(tri->write_workq, &tri->write_work,
sbi->trans_deadline_expired = true;
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work,
TRANS_SYNC_DELAY);
}
/*
* We store nested holders in the lower bits of journal_info. We use
* some higher bits as a magic value to detect if something goes
* horribly wrong and it gets clobbered.
* Each thread reserves space in the segment for their dirty items while
* they hold the transaction. This is calculated before the first
* transaction hold is acquired. It includes all the potential nested
* item manipulation that could happen with the transaction held.
* Including nested holds avoids having to deal with writing out partial
* transactions while a caller still holds the transaction.
*/
#define TRANS_JI_MAGIC 0xd5700000
#define TRANS_JI_MAGIC_MASK 0xfff00000
#define TRANS_JI_COUNT_MASK 0x000fffff
/* returns true if a caller already had a holder counted in journal_info */
static bool inc_journal_info_holders(void)
{
unsigned long holders = (unsigned long)current->journal_info;
WARN_ON_ONCE(holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC));
if (holders == 0)
holders = TRANS_JI_MAGIC;
holders++;
current->journal_info = (void *)holders;
return (holders > (TRANS_JI_MAGIC | 1));
}
static void dec_journal_info_holders(void)
{
unsigned long holders = (unsigned long)current->journal_info;
WARN_ON_ONCE(holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC));
WARN_ON_ONCE((holders & TRANS_JI_COUNT_MASK) == 0);
holders--;
if (holders == TRANS_JI_MAGIC)
holders = 0;
current->journal_info = (void *)holders;
}
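As a rough illustration of the packing handled above, a helper like the following (the name is invented and is not part of this diff) could report the nesting depth stored in the low bits of journal_info:

/* sketch only: decode the depth packed by inc_journal_info_holders() */
static inline unsigned long example_trans_ji_depth(void)
{
	unsigned long holders = (unsigned long)current->journal_info;

	if (holders == 0 || (holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC)
		return 0;	/* no hold, or journal_info owned elsewhere */

	return holders & TRANS_JI_COUNT_MASK;
}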
#define SCOUTFS_RESERVATION_MAGIC 0xd57cd13b
struct scoutfs_reservation {
unsigned magic;
unsigned holders;
struct scoutfs_item_count reserved;
struct scoutfs_item_count actual;
};
/*
* This is called as the wait_event condition for holding a transaction.
* Increment the holder count unless the writer is present. We return
* false to wait until the writer finishes and wakes us.
* Try to hold the transaction. If a caller already holds the trans then
* we piggy back on their hold. We wait if the writer is trying to
* write out the transaction. And if our items won't fit then we kick off
* a write.
*
* This can be racing with itself while there's no waiters. We retry
* the cmpxchg instead of returning and waiting.
* This is called as a condition for wait_event. It is very limited in
* the locking (blocking) it can do because the caller has set the task
* state before testing the condition to safely race with waking after
* setting the condition. Our checking the amount of dirty metadata
* blocks and free data blocks is racy, but we don't mind the risk of
* delaying or prematurely forcing commits.
*/
static bool inc_holders_unless_writer(struct trans_info *tri)
{
int holders;
do {
smp_mb(); /* make sure we read after wait puts task in queue */
holders = atomic_read(&tri->holders);
if (holders & TRANS_HOLDERS_WRITE_FUNC_BIT)
return false;
} while (atomic_cmpxchg(&tri->holders, holders, holders + 1) != holders);
return true;
}
/*
* As we drop the last trans holder we try to wake a writing thread that
* was waiting for us to finish.
*/
static void release_holders(struct super_block *sb)
{
dec_journal_info_holders();
sub_holders_and_wake(sb, 1);
}
/*
* The caller has incremented holders so it is blocking commits. We
* make some quick checks to see if we need to trigger and wait for
* another commit before proceeding.
*/
static bool commit_before_hold(struct super_block *sb, struct trans_info *tri)
{
/*
* In theory each dirty item page could be straddling two full
* blocks, requiring 4 allocations for each item cache page.
* That's much too conservative, typically many dirty item cache
* pages that are near each other all land in one block. This
* rough estimate is still so far beyond what typically happens
* that it accounts for having to dirty parent blocks and
* whatever dirtying is done during the transaction hold.
*/
if (scoutfs_alloc_meta_low(sb, &tri->alloc, scoutfs_item_dirty_pages(sb) * 2)) {
scoutfs_inc_counter(sb, trans_commit_dirty_meta_full);
return true;
}
/*
* Extent modifications can use meta allocators without creating
* dirty items so we have to check the meta alloc specifically.
* The size of the client's avail and freed roots are bound so
* we're unlikely to need very many block allocations per
* transaction hold. XXX This should be more precisely tuned.
*/
if (scoutfs_alloc_meta_low(sb, &tri->alloc, 16)) {
scoutfs_inc_counter(sb, trans_commit_meta_alloc_low);
return true;
}
/* if we're low and can't refill then alloc could empty and return enospc */
if (scoutfs_data_alloc_should_refill(sb, SCOUTFS_ALLOC_DATA_REFILL_THRESH)) {
scoutfs_inc_counter(sb, trans_commit_data_alloc_low);
return true;
}
return false;
}
/*
* called as a wait_event condition, needs to be careful to not change
* task state and is racing with waking paths that sub_return, test, and
* wake.
*/
static bool holders_no_writer(struct trans_info *tri)
{
smp_mb(); /* make sure task in wait_event queue before atomic read */
return !(atomic_read(&tri->holders) & TRANS_HOLDERS_WRITE_FUNC_BIT);
}
/*
* Try to hold the transaction. Holding the transaction prevents it
* from being committed. If a transaction is currently being written
* then we'll block until it's done and our hold can be granted.
*
* If a caller already holds the trans then we unconditionally acquire
* our hold and return to avoid deadlocks with our caller, the writing
* thread, and us. We record nested holds in a call stack with the
* journal_info pointer in the task_struct.
*
* The writing thread marks itself as a global trans_task which
* short-circuits all the hold machinery so it can call code that would
* otherwise try to hold transactions while it is writing.
*
* If the caller is adding metadata items that will eventually consume
* free space -- not dirtying existing items or adding deletion items --
* then we can return enospc if our metadata allocator indicates that
* we're low on space.
*/
int scoutfs_hold_trans(struct super_block *sb, bool allocing)
static bool acquired_hold(struct super_block *sb,
struct scoutfs_reservation *rsv,
const struct scoutfs_item_count *cnt)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
u64 seq;
int ret;
bool acquired = false;
unsigned items;
unsigned vals;
if (current == tri->task)
return 0;
spin_lock(&tri->lock);
for (;;) {
/* shouldn't get holders until mount finishes, (not locking for cheap test) */
if (WARN_ON_ONCE(sbi->trans_seq == 0)) {
ret = -EINVAL;
break;
}
trace_scoutfs_trans_acquired_hold(sb, cnt, rsv, rsv->holders,
&rsv->reserved, &rsv->actual,
tri->holders, tri->writing,
tri->reserved_items,
tri->reserved_vals);
/* if a caller already has a hold we acquire unconditionally */
if (inc_journal_info_holders()) {
atomic_inc(&tri->holders);
ret = 0;
break;
}
/* use a caller's existing reservation */
if (rsv->holders)
goto hold;
/* wait until the writer work is finished */
if (!inc_holders_unless_writer(tri)) {
dec_journal_info_holders();
wait_event(tri->hold_wq, holders_no_writer(tri));
continue;
}
/* wait until the writing thread is finished */
if (tri->writing)
goto out;
/* return enospc if server is into reserved blocks and we're allocating */
if (allocing && scoutfs_alloc_test_flag(sb, &tri->alloc, SCOUTFS_ALLOC_FLAG_LOW)) {
release_holders(sb);
ret = -ENOSPC;
break;
}
/* see if we can reserve space for our item count */
items = tri->reserved_items + cnt->items;
vals = tri->reserved_vals + cnt->vals;
/* see if we need to trigger and wait for a commit before holding */
if (commit_before_hold(sb, tri)) {
seq = scoutfs_trans_sample_seq(sb);
release_holders(sb);
queue_trans_work(sb);
wait_event(tri->hold_wq, scoutfs_trans_sample_seq(sb) != seq);
continue;
}
ret = 0;
break;
/* XXX arbitrarily limit to 8 meg transactions */
if (scoutfs_block_writer_dirty_bytes(sb, &tri->wri) >=
(8 * 1024 * 1024)) {
scoutfs_inc_counter(sb, trans_commit_full);
queue_trans_work(sbi);
goto out;
}
trace_scoutfs_hold_trans(sb, current->journal_info, atomic_read(&tri->holders), ret);
/* Try to refill data allocator before premature enospc */
if (scoutfs_data_alloc_free_bytes(sb) <= SCOUTFS_TRANS_DATA_ALLOC_LWM) {
scoutfs_inc_counter(sb, trans_commit_data_alloc_low);
queue_trans_work(sbi);
goto out;
}
tri->reserved_items = items;
tri->reserved_vals = vals;
rsv->reserved.items = cnt->items;
rsv->reserved.vals = cnt->vals;
hold:
rsv->holders++;
tri->holders++;
acquired = true;
out:
spin_unlock(&tri->lock);
return acquired;
}
int scoutfs_hold_trans(struct super_block *sb,
const struct scoutfs_item_count cnt)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv;
int ret;
/*
* Caller shouldn't provide garbage counts, nor counts that
* can't fit in segments by themselves.
*/
if (WARN_ON_ONCE(cnt.items <= 0 || cnt.vals < 0))
return -EINVAL;
if (current == sbi->trans_task)
return 0;
rsv = current->journal_info;
if (rsv == NULL) {
rsv = kzalloc(sizeof(struct scoutfs_reservation), GFP_NOFS);
if (!rsv)
return -ENOMEM;
rsv->magic = SCOUTFS_RESERVATION_MAGIC;
current->journal_info = rsv;
}
BUG_ON(rsv->magic != SCOUTFS_RESERVATION_MAGIC);
ret = wait_event_interruptible(sbi->trans_hold_wq,
acquired_hold(sb, rsv, &cnt));
if (ret && rsv->holders == 0) {
current->journal_info = NULL;
kfree(rsv);
}
return ret;
}
@@ -549,39 +431,86 @@ int scoutfs_hold_trans(struct super_block *sb, bool allocing)
*/
bool scoutfs_trans_held(void)
{
unsigned long holders = (unsigned long)current->journal_info;
struct scoutfs_reservation *rsv = current->journal_info;
return (holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) == TRANS_JI_MAGIC));
}
void scoutfs_release_trans(struct super_block *sb)
{
DECLARE_TRANS_INFO(sb, tri);
if (current == tri->task)
return;
release_holders(sb);
trace_scoutfs_release_trans(sb, current->journal_info, atomic_read(&tri->holders), 0);
return rsv && rsv->magic == SCOUTFS_RESERVATION_MAGIC;
}
/*
* Return the current transaction sequence. Whether this is racing with
* the transaction write thread is entirely dependent on the caller's
* context.
* Record a transaction holder's individual contribution to the dirty
* items in the current transaction. We're making sure that the
* reservation matches the possible item manipulations while they hold
* the reservation.
*
* It is possible and legitimate for an individual contribution to be
* negative if they delete dirty items. The item cache makes sure that
* the total dirty item count doesn't fall below zero.
*/
u64 scoutfs_trans_sample_seq(struct super_block *sb)
void scoutfs_trans_track_item(struct super_block *sb, signed items,
signed vals)
{
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
u64 ret;
struct scoutfs_reservation *rsv = current->journal_info;
spin_lock(&tri->write_lock);
ret = sbi->trans_seq;
spin_unlock(&tri->write_lock);
if (current == sbi->trans_task)
return;
return ret;
BUG_ON(!rsv || rsv->magic != SCOUTFS_RESERVATION_MAGIC);
rsv->actual.items += items;
rsv->actual.vals += vals;
trace_scoutfs_trans_track_item(sb, items, vals, rsv->actual.items,
rsv->actual.vals, rsv->reserved.items,
rsv->reserved.vals);
WARN_ON_ONCE(rsv->actual.items > rsv->reserved.items);
WARN_ON_ONCE(rsv->actual.vals > rsv->reserved.vals);
}
/*
* As we drop the last hold in the reservation we try and wake other
* hold attempts that were waiting for space. As we drop the last trans
* holder we try to wake a writing thread that was waiting for us to
* finish.
*/
void scoutfs_release_trans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_reservation *rsv;
DECLARE_TRANS_INFO(sb, tri);
bool wake = false;
if (current == sbi->trans_task)
return;
rsv = current->journal_info;
BUG_ON(!rsv || rsv->magic != SCOUTFS_RESERVATION_MAGIC);
spin_lock(&tri->lock);
trace_scoutfs_release_trans(sb, rsv, rsv->holders, &rsv->reserved,
&rsv->actual, tri->holders, tri->writing,
tri->reserved_items, tri->reserved_vals);
BUG_ON(rsv->holders <= 0);
BUG_ON(tri->holders <= 0);
if (--rsv->holders == 0) {
tri->reserved_items -= rsv->reserved.items;
tri->reserved_vals -= rsv->reserved.vals;
current->journal_info = NULL;
kfree(rsv);
wake = true;
}
if (--tri->holders == 0)
wake = true;
spin_unlock(&tri->lock);
if (wake)
wake_up(&sbi->trans_hold_wq);
}
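A hedged sketch of a holder's lifecycle with the item_count variant of the API shown in this file; the counts and the function name are invented for illustration and real call sites may differ:

static int example_dirty_one_item(struct super_block *sb)
{
	const struct scoutfs_item_count cnt = { .items = 1, .vals = 128 };
	int ret;

	ret = scoutfs_hold_trans(sb, cnt);
	if (ret)
		return ret;

	/* ... create or dirty items under the hold ... */
	scoutfs_trans_track_item(sb, 1, 128);

	scoutfs_release_trans(sb);
	return 0;
}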
int scoutfs_setup_trans(struct super_block *sb)
@@ -593,17 +522,12 @@ int scoutfs_setup_trans(struct super_block *sb)
if (!tri)
return -ENOMEM;
tri->sb = sb;
atomic_set(&tri->holders, 0);
spin_lock_init(&tri->lock);
scoutfs_block_writer_init(sb, &tri->wri);
spin_lock_init(&tri->write_lock);
INIT_DELAYED_WORK(&tri->write_work, scoutfs_trans_write_func);
init_waitqueue_head(&tri->write_wq);
init_waitqueue_head(&tri->hold_wq);
tri->write_workq = alloc_workqueue("scoutfs_trans", WQ_UNBOUND, 1);
if (!tri->write_workq) {
sbi->trans_write_workq = alloc_workqueue("scoutfs_trans",
WQ_UNBOUND, 1);
if (!sbi->trans_write_workq) {
kfree(tri);
return -ENOMEM;
}
@@ -614,15 +538,8 @@ int scoutfs_setup_trans(struct super_block *sb)
}
/*
* While the vfs will have done an fs level sync before calling
* put_super, we may have done work down in our level after all the fs
* ops were done. An example is final inode deletion in iput, that's
* done in generic_shutdown_super after the sync and before calling our
* put_super.
*
* So we always try to write any remaining dirty transactions before
* shutting down. Typically there won't be any dirty data and the
* worker will just return.
* kill_sb calls sync before getting here so we know that dirty data
* should be in flight. We just have to wait for it to quiesce.
*/
void scoutfs_shutdown_trans(struct super_block *sb)
{
@@ -630,19 +547,13 @@ void scoutfs_shutdown_trans(struct super_block *sb)
DECLARE_TRANS_INFO(sb, tri);
if (tri) {
if (tri->write_workq) {
/* immediately queues pending timer */
flush_delayed_work(&tri->write_work);
/* prevents re-arming if it has to wait */
cancel_delayed_work_sync(&tri->write_work);
destroy_workqueue(tri->write_workq);
/* trans work schedules after shutdown see null */
tri->write_workq = NULL;
}
scoutfs_alloc_prepare_commit(sb, &tri->alloc, &tri->wri);
scoutfs_block_writer_forget_all(sb, &tri->wri);
if (sbi->trans_write_workq) {
cancel_delayed_work_sync(&sbi->trans_write_work);
destroy_workqueue(sbi->trans_write_workq);
/* trans work schedules after shutdown see null */
sbi->trans_write_workq = NULL;
}
kfree(tri);
sbi->trans_info = NULL;
}

View File

@@ -1,16 +1,25 @@
#ifndef _SCOUTFS_TRANS_H_
#define _SCOUTFS_TRANS_H_
/* the server will attempt to fill data allocs for each trans */
#define SCOUTFS_TRANS_DATA_ALLOC_HWM (2ULL * 1024 * 1024 * 1024)
/* the client will force commits if data allocators get too low */
#define SCOUTFS_TRANS_DATA_ALLOC_LWM (256ULL * 1024 * 1024)
#include "count.h"
void scoutfs_trans_write_func(struct work_struct *work);
int scoutfs_trans_sync(struct super_block *sb, int wait);
int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
int datasync);
void scoutfs_trans_restart_sync_deadline(struct super_block *sb);
int scoutfs_hold_trans(struct super_block *sb, bool allocing);
int scoutfs_hold_trans(struct super_block *sb,
const struct scoutfs_item_count cnt);
bool scoutfs_trans_held(void);
void scoutfs_release_trans(struct super_block *sb);
u64 scoutfs_trans_sample_seq(struct super_block *sb);
void scoutfs_trans_track_item(struct super_block *sb, signed items,
signed vals);
int scoutfs_trans_get_log_trees(struct super_block *sb);
bool scoutfs_trans_has_dirty(struct super_block *sb);

View File

@@ -38,7 +38,10 @@ struct scoutfs_triggers {
struct scoutfs_triggers *name = SCOUTFS_SB(sb)->triggers
static char *names[] = {
[SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE] = "block_remove_stale",
[SCOUTFS_TRIGGER_BTREE_STALE_READ] = "btree_stale_read",
[SCOUTFS_TRIGGER_BTREE_ADVANCE_RING_HALF] = "btree_advance_ring_half",
[SCOUTFS_TRIGGER_HARD_STALE_ERROR] = "hard_stale_error",
[SCOUTFS_TRIGGER_SEG_STALE_READ] = "seg_stale_read",
[SCOUTFS_TRIGGER_STATFS_LOCK_PURGE] = "statfs_lock_purge",
};

View File

@@ -1,8 +1,11 @@
#ifndef _SCOUTFS_TRIGGERS_H_
#define _SCOUTFS_TRIGGERS_H_
enum scoutfs_trigger {
SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE,
enum {
SCOUTFS_TRIGGER_BTREE_STALE_READ,
SCOUTFS_TRIGGER_BTREE_ADVANCE_RING_HALF,
SCOUTFS_TRIGGER_HARD_STALE_ERROR,
SCOUTFS_TRIGGER_SEG_STALE_READ,
SCOUTFS_TRIGGER_STATFS_LOCK_PURGE,
SCOUTFS_TRIGGER_NR,
};

View File

@@ -1,20 +0,0 @@
#ifndef _SCOUTFS_UTIL_H_
#define _SCOUTFS_UTIL_H_
/*
* Little utility helpers that probably belong upstream.
*/
static inline void down_write_two(struct rw_semaphore *a,
struct rw_semaphore *b)
{
BUG_ON(a == b);
if (a > b)
swap(a, b);
down_write(a);
down_write_nested(b, SINGLE_DEPTH_NESTING);
}
#endif
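A small usage sketch, not from the diff: a caller holding two rw_semaphores for writing can take them in a stable order with the helper above and then release them in any order.

/* illustration only */
static void example_write_lock_pair(struct rw_semaphore *a,
				    struct rw_semaphore *b)
{
	down_write_two(a, b);

	/* ... work that needs both semaphores held for writing ... */

	up_write(a);
	up_write(b);
}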

View File

@@ -1,188 +0,0 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include "super.h"
#include "client.h"
#include "volopt.h"
/*
* Volume options are exposed through a sysfs directory. Getting and
* setting the values sends rpcs to the server who owns the options in
* the super block.
*/
struct volopt_info {
struct super_block *sb;
struct scoutfs_sysfs_attrs ssa;
};
#define DECLARE_VOLOPT_INFO(sb, name) \
struct volopt_info *name = SCOUTFS_SB(sb)->volopt_info
#define DECLARE_VOLOPT_INFO_KOBJ(kobj, name) \
DECLARE_VOLOPT_INFO(SCOUTFS_SYSFS_ATTRS_SB(kobj), name)
/*
* attribute arrays need to be dense but the options we export could
* well become sparse over time. .show and .store are generic and we
* have a lookup table to map the attributes array indexes to the number
* and name of the option.
*/
static struct volopt_nr_name {
int nr;
char *name;
} volopt_table[] = {
{ SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_NR, "data_alloc_zone_blocks" },
};
/* initialized by setup, pointer array is null terminated */
static struct kobj_attribute volopt_attrs[ARRAY_SIZE(volopt_table)];
static struct attribute *volopt_attr_ptrs[ARRAY_SIZE(volopt_table) + 1];
static void get_opt_data(struct kobj_attribute *attr, struct scoutfs_volume_options *volopt,
u64 *bit, __le64 **opt)
{
size_t index = attr - &volopt_attrs[0];
int nr = volopt_table[index].nr;
*bit = 1ULL << nr;
*opt = &volopt->set_bits + 1 + nr;
}
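To make the pointer arithmetic above concrete, here is the layout it relies on, matching the BUILD_BUG_ON in scoutfs_volopt_setup() below; the opts field name is assumed for this sketch.

/*
 * Assumed layout sketch ("opts" is an illustrative field name):
 *
 *	struct scoutfs_volume_options {
 *		__le64 set_bits;	// one bit per defined option
 *		__le64 opts[64];	// &set_bits + 1 + nr == &opts[nr]
 *	};
 *
 * So "bit" selects the option in set_bits while "*opt" addresses the slot
 * that holds its value.
 */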
static ssize_t volopt_attr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
DECLARE_VOLOPT_INFO_KOBJ(kobj, vinf);
struct super_block *sb = vinf->sb;
struct scoutfs_volume_options volopt;
__le64 *opt;
u64 bit;
int ret;
ret = scoutfs_client_get_volopt(sb, &volopt);
if (ret < 0)
return ret;
get_opt_data(attr, &volopt, &bit, &opt);
if (le64_to_cpu(volopt.set_bits) & bit) {
return snprintf(buf, PAGE_SIZE, "%llu", le64_to_cpup(opt));
} else {
buf[0] = '\0';
return 0;
}
}
static ssize_t volopt_attr_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
DECLARE_VOLOPT_INFO_KOBJ(kobj, vinf);
struct super_block *sb = vinf->sb;
struct scoutfs_volume_options volopt = {0,};
u8 chars[32];
__le64 *opt;
u64 bit;
u64 val;
int ret;
if (count == 0)
return 0;
if (count > sizeof(chars) - 1)
return -ERANGE;
get_opt_data(attr, &volopt, &bit, &opt);
if (buf[0] == '\n' || buf[0] == '\r') {
volopt.set_bits = cpu_to_le64(bit);
ret = scoutfs_client_clear_volopt(sb, &volopt);
} else {
memcpy(chars, buf, count);
chars[count] = '\0';
ret = kstrtoull(chars, 0, &val);
if (ret < 0)
return ret;
volopt.set_bits = cpu_to_le64(bit);
*opt = cpu_to_le64(val);
ret = scoutfs_client_set_volopt(sb, &volopt);
}
if (ret == 0)
ret = count;
return ret;
}
/*
* The volume option sysfs files are slim shims around RPCs so this
* should be called after the client is set up and before it is torn
* down.
*/
int scoutfs_volopt_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct volopt_info *vinf;
int ret;
int i;
/* persistent volume options are always a bitmap u64 then the 64 options */
BUILD_BUG_ON(sizeof(struct scoutfs_volume_options) != (1 + 64) * 8);
vinf = kzalloc(sizeof(struct volopt_info), GFP_KERNEL);
if (!vinf) {
ret = -ENOMEM;
goto out;
}
scoutfs_sysfs_init_attrs(sb, &vinf->ssa);
vinf->sb = sb;
sbi->volopt_info = vinf;
for (i = 0; i < ARRAY_SIZE(volopt_table); i++) {
volopt_attrs[i] = (struct kobj_attribute) {
.attr = { .name = volopt_table[i].name, .mode = S_IWUSR | S_IRUGO },
.show = volopt_attr_show,
.store = volopt_attr_store,
};
volopt_attr_ptrs[i] = &volopt_attrs[i].attr;
}
BUILD_BUG_ON(ARRAY_SIZE(volopt_table) != ARRAY_SIZE(volopt_attr_ptrs) - 1);
volopt_attr_ptrs[i] = NULL;
ret = scoutfs_sysfs_create_attrs(sb, &vinf->ssa, volopt_attr_ptrs, "volume_options");
if (ret < 0)
goto out;
out:
if (ret)
scoutfs_volopt_destroy(sb);
return ret;
}
void scoutfs_volopt_destroy(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct volopt_info *vinf = SCOUTFS_SB(sb)->volopt_info;
if (vinf) {
scoutfs_sysfs_destroy_attrs(sb, &vinf->ssa);
kfree(vinf);
sbi->volopt_info = NULL;
}
}

View File

@@ -1,7 +0,0 @@
#ifndef _SCOUTFS_VOLOPT_H_
#define _SCOUTFS_VOLOPT_H_
int scoutfs_volopt_setup(struct super_block *sb);
void scoutfs_volopt_destroy(struct super_block *sb);
#endif

File diff suppressed because it is too large

View File

@@ -14,17 +14,7 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock);
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
srch:1,
totl:1,
worm:1;
};
int scoutfs_xattr_parse_tags(struct super_block *sb, const char *name,
unsigned int name_len, struct scoutfs_xattr_prefix_tags *tgs);
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name);
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len);
void scoutfs_xattr_index_key(struct scoutfs_key *key,
u64 hash, u64 ino, u64 id);
#endif

10
tests/.gitignore vendored
View File

@@ -1,10 +0,0 @@
src/*.d
src/createmany
src/dumb_renameat2
src/dumb_setxattr
src/handle_cat
src/handle_fsetxattr
src/bulk_create_paths
src/find_xattrs
src/stage_tmpfile
src/create_xattr_loop

View File

@@ -1,53 +0,0 @@
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing -I ../kmod/src
SHELL := /usr/bin/bash
# each binary command is built from a single .c file
BIN := src/createmany \
src/dumb_renameat2 \
src/dumb_setxattr \
src/handle_cat \
src/handle_fsetxattr \
src/bulk_create_paths \
src/stage_tmpfile \
src/find_xattrs \
src/create_xattr_loop
DEPS := $(wildcard src/*.d)
all: $(BIN)
ifneq ($(DEPS),)
-include $(DEPS)
endif
$(BIN): %: %.c Makefile
gcc $(CFLAGS) -MD -MP -MF $*.d $< -o $@
.PHONY: clean
clean:
@rm -f $(BIN) $(DEPS)
#
# Make sure we only have all three items needed for each test: entry in
# sequence, test script in tests/, and output in golden/.
#
.PHONY: check-test-files
check-test-files:
@for t in $$(grep -v "^#" sequence); do \
test -e "tests/$$t" || \
echo "no test for list entry: $$t"; \
t=$${t%%.sh}; \
test -e "golden/$$t" || \
echo "no output for list entry: $$t"; \
done; \
for t in golden/*; do \
t=$$(basename "$$t"); \
grep -q "^$$t.sh$$" sequence || \
echo "output not in list: $$t"; \
done; \
for t in tests/*; do \
t=$$(basename "$$t"); \
test "$$t" == "list" && continue; \
grep -q "^$$t$$" sequence || \
echo "test not in list: $$t"; \
done

View File

@@ -1,124 +0,0 @@
This test suite exercises multi-node scoutfs by using multiple mounts on
one host to simulate multiple nodes across a network.
It also contains a light test wrapper that executes xfstests on one of
the test mounts.
## Invoking Tests
The basic test invocation has to specify the devices for the fs, the
number of mounts to test, whether to create a new fs and insert the
built module, and where to put the results.
# bash ./run-tests.sh \
-M /dev/vda \
-D /dev/vdb \
-i \
-m \
-n 3 \
-q 2 \
-r ./results
All options can be seen by running with -h.
This script is built to test multi-node systems on one host by using
different mounts of the same devices. The script creates a fake block
device in front of each fs block device for each mount that will be
tested. Currently it uses free loop devices and mounts on
/mnt/test.[0-9].
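As a rough sketch of that layering (illustrative only; the harness
composes the real mount options, including quorum and server address
settings, itself):

    # per-mount devices for mount 0, layered over the shared -M/-D devices
    T_MB0=$(losetup -f --show /dev/vda)    # meta bdev for mount 0
    T_DB0=$(losetup -f --show /dev/vdb)    # data bdev for mount 0
    mkdir -p /mnt/test.0
    mount -t scoutfs -o metadev_path=$T_MB0 $T_DB0 /mnt/test.0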
All tests will be run by default. Particular tests can be included or
excluded by providing test name regular expressions with the -I and -E
options. The definitive list of tests and the order in which they'll be
run is found in the sequence file.
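For example, to run only tests whose names match one pattern while
excluding another (the patterns here are purely illustrative):

    # bash ./run-tests.sh [usual options] -I 'createmany' -E 'xfstests'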
## xfstests
The last test that is run checks out, builds, and runs xfstests. It
needs -X and -x options for the xfstests git repo and branch. It also
needs spare devices on which to make scratch scoutfs volumes. The test
verifies that the expected set of xfstests tests ran and passed.
-f /dev/vdc \
-e /dev/vdd \
-X $HOME/git/scoutfs-xfstests \
-x scoutfs \
An xfstests repo that knows about scoutfs is required only so that the
scoutfs-specific cases are sprinkled throughout the xfstests harness.
## Individual Test Invocation
Each test is run in a new bash invocation. A set of directories in the
test volume and in the results path is created for each test. Each
test's working directory isn't managed.
Test output, temp files, and dmesg snapshots are all put in a tmp/ dir
in the results/ dir. Per-test dirs are only destroyed before each test
invocation.
The harness will check for unexpected output in dmesg after each
individual test.
Each test that fails will have its results appended to the fail.log file
in the results/ directory. The details of the failure can be examined
in the directories for each test in results/output/ and results/tmp/.
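For instance, a failure might be investigated starting from something
like this (the test name is illustrative):

    cat results/fail.log
    ls results/output/basic-posix-consistency results/tmp/basic-posix-consistency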
## Writing tests
Tests have access to a set of t\_ prefixed bash functions that are found
in files in funcs/.
Tests complete by calling t\_ functions which indicate the result of the
test and can return a message. If the test passes then its output is
compared with known good output. If the output doesn't match then the
test fails. The t\_ completion functions return specific status codes so
that returning without calling one can be detected.
The golden output has to be consistent across test platforms so there
are a number of filter functions which strip out local details from
command output. t\_filter\_fs is by far the most used; it canonicalizes
fs mount paths and block device details.
Tests can be relatively loose about checking errors. If commands
produce output in failure cases then the test will fail without having
to specifically test for errors on every command execution. Care should
be taken to make sure that blowing through a bunch of commands with no
error checking doesn't produce catastrophic results. Usually tests are
simple and it's fine.
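For concreteness, a minimal test might look roughly like this (a sketch
only, using helpers defined in funcs/):

    echo "== create and stat a file"
    touch "$T_D0/file"
    stat "$T_D0/file" | t_filter_fs

    echo "== cleanup"
    t_quiet rm "$T_D0/file"

    t_pass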
A bare sync will sync all the mounted filesystems and ensure that
no mounts have dirty data. sync -f can be used to sync just a specific
filesystem, though it doesn't exist on all platforms.
The harness doesn't currently ensure that all mounts are restored after
each test invocation. It probably should. Currently it's the
responsibility of the test to restore any mounts it alters and there are
t\_ functions to mount all configured mount points.
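For example, a test that takes down a mount is expected to bring it
back before finishing (the mount number is illustrative):

    t_umount 1
    # ... work that needs mount 1 to be absent ...
    t_mount 1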
## Environment Variables
Tests have a number of exported environment variables that are commonly
used during the test.
| Variable | Description | Origin | Example |
| ---------------- | ------------------- | --------------- | ----------------- |
| T\_MB[0-9] | per-mount meta bdev | created per run | /dev/loop0 |
| T\_DB[0-9] | per-mount data bdev | created per run | /dev/loop1 |
| T\_D[0-9] | per-mount test dir | made for test | /mnt/test.[0-9]/t |
| T\_META\_DEVICE | main FS meta bdev | -M | /dev/vda |
| T\_DATA\_DEVICE | main FS data bdev | -D | /dev/vdb |
| T\_EX\_META\_DEV | scratch meta bdev | -f | /dev/vdd |
| T\_EX\_DATA\_DEV | scratch data bdev | -e | /dev/vdc |
| T\_M[0-9] | mount paths | mounted per run | /mnt/test.[0-9]/ |
| T\_MODULE | built kernel module | created per run | ../kmod/src/..ko |
| T\_NR\_MOUNTS | number of mounts | -n | 3 |
| T\_O[0-9] | mount options | created per run | -o server\_addr= |
| T\_QUORUM | quorum count | -q | 2 |
| T\_TMP | per-test tmp prefix | made for test | results/tmp/t/tmp |
| T\_TMPDIR | per-test tmp dir | made for test | results/tmp/t |
There are also a number of variables that are set in response to options
and are exported but their use is rare so they aren't included here.
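Because the per-mount variables are numbered, they're usually expanded
indirectly, as the mount helpers in funcs/ do; a minimal sketch:

    for i in $(t_fs_nrs); do
        dir="$(eval echo \$T_D$i)"
        echo "mount $i test dir: $dir"
    done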

View File

@@ -1,43 +0,0 @@
#!/usr/bin/bash
#
# This fencing script is used for testing clusters of multiple mounts on
# a single host. It finds mounts to fence by looking for their rids and
# only knows how to "fence" by using forced unmount.
#
echo "$0 running rid '$SCOUTFS_FENCED_REQ_RID' ip '$SCOUTFS_FENCED_REQ_IP' args '$@'"
log() {
echo "$@" > /dev/stderr
}
echo_fail() {
echo "$@" > /dev/stderr
exit 1
}
rid="$SCOUTFS_FENCED_REQ_RID"
for fs in /sys/fs/scoutfs/*; do
[ ! -d "$fs" ] && continue
fs_rid="$(cat $fs/rid)" || \
echo_fail "failed to get rid in $fs"
if [ "$fs_rid" != "$rid" ]; then
continue
fi
nr="$(cat $fs/data_device_maj_min)" || \
echo_fail "failed to get data device major:minor in $fs"
mnts=$(findmnt -l -n -t scoutfs -o TARGET -S $nr) || \
echo_fail "findmnt -t scoutfs -S $nr failed"
for mnt in $mnts; do
umount -f "$mnt" || \
echo_fail "umout -f $mnt failed"
done
done
exit 0

View File

@@ -1,58 +0,0 @@
t_status_msg()
{
echo "$*" > "$T_TMPDIR/status.msg"
}
export T_PASS_STATUS=100
export T_SKIP_STATUS=101
export T_FAIL_STATUS=102
export T_FIRST_STATUS="$T_PASS_STATUS"
export T_LAST_STATUS="$T_FAIL_STATUS"
t_pass()
{
exit $T_PASS_STATUS
}
t_skip()
{
t_status_msg "$@"
exit $T_SKIP_STATUS
}
t_fail()
{
t_status_msg "$@"
exit $T_FAIL_STATUS
}
#
# Quietly run a command during a test. If it succeeds then we have a
# log of its execution but its output isn't included in the test's
# compared output. If it fails then the test fails.
#
t_quiet()
{
echo "# $*" >> "$T_TMPDIR/quiet.log"
"$@" >> "$T_TMPDIR/quiet.log" 2>&1 || \
t_fail "quiet command failed"
}
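# For example (paths here are illustrative), setup steps whose output
# isn't meant to be part of the compared output are wrapped with t_quiet:
#
#	t_quiet mkdir -p "$T_D0/dir"
#	t_quiet cp /etc/hosts "$T_D0/dir/file"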
#
# redirect test output back to the output of the invoking script instead
# of the compared output.
#
t_restore_output()
{
exec >&6 2>&1
}
#
# redirect a command's output back to the compared output after the
# test has restored its output
#
t_compare_output()
{
"$@" >&7 2>&1
}

View File

@@ -1,90 +0,0 @@
# filter out device ids and mount paths
t_filter_fs()
{
sed -e 's@mnt/test\.[0-9]*@mnt/test@g' \
-e 's@Device: [a-fA-F0-9]*h/[0-9]*d@Device: 0h/0d@g'
}
#
# Filter out expected messages. Putting messages here implies that
# tests aren't relying on messages to discover failures.. they're
# directly testing the result of whatever it is that's generating the
# message.
#
t_filter_dmesg()
{
local re
# the kernel can just be noisy
re=" used greatest stack depth: "
# mkfs/mount checks partition tables
re="$re|unknown partition table"
# dm swizzling
re="$re|device doesn't appear to be in the dev hash table"
re="$re|device-mapper:.*uevent:.*version"
re="$re|device-mapper:.*ioctl:.*initialised"
# some tests try invalid devices
re="$re|scoutfs .* error reading super block"
re="$re| EXT4-fs (.*): get root inode failed"
re="$re| EXT4-fs (.*): mount failed"
re="$re| EXT4-fs (.*): no journal found"
re="$re| EXT4-fs (.*): VFS: Can't find ext4 filesystem"
# dropping caches is fine
re="$re| drop_caches: "
# mount and unmount spew a bunch
re="$re|scoutfs.*client connected"
re="$re|scoutfs.*client disconnected"
re="$re|scoutfs.*server starting"
re="$re|scoutfs.*server ready"
re="$re|scoutfs.*server accepted"
re="$re|scoutfs.*server closing"
re="$re|scoutfs.*server shutting down"
re="$re|scoutfs.*server stopped"
# xfstests records test execution in dmesg
re="$re| run fstests "
# tests that drop unmount io trigger fencing
re="$re|scoutfs .* error: fencing "
re="$re|scoutfs .*: waiting for .* clients"
re="$re|scoutfs .*: all clients recovered"
re="$re|scoutfs .* error: client rid.*lock recovery timed out"
# we test bad devices and options
re="$re|scoutfs .* error: Required mount option \"metadev_path\" not found"
re="$re|scoutfs .* error: meta_super META flag not set"
re="$re|scoutfs .* error: could not open metadev:.*"
re="$re|scoutfs .* error: Unknown or malformed option,.*"
# in debugging kernels we can slow things down a bit
re="$re|hrtimer: interrupt took .*"
# fencing tests force unmounts and trigger timeouts
re="$re|scoutfs .* forcing unmount"
re="$re|scoutfs .* reconnect timed out"
re="$re|scoutfs .* recovery timeout expired"
re="$re|scoutfs .* fencing previous leader"
re="$re|scoutfs .* reclaimed resources"
re="$re|scoutfs .* quorum .* error"
re="$re|scoutfs .* error reading quorum block"
re="$re|scoutfs .* error .* writing quorum block"
re="$re|scoutfs .* error .* while checking to delete inode"
re="$re|scoutfs .* error .*writing btree blocks.*"
re="$re|scoutfs .* error .*writing super block.*"
re="$re|scoutfs .* error .* freeing merged btree blocks.*.looping commit del.*upd freeing item"
re="$re|scoutfs .* error .* freeing merged btree blocks.*.final commit del.upd freeing item"
re="$re|scoutfs .* error .*reading quorum block.*to update event.*"
re="$re|scoutfs .* error.*server failed to bind to.*"
# format vers back/compat tries bad mounts
re="$re|scoutfs .* error.*outside of supported version.*"
re="$re|scoutfs .* error.*could not get .*super.*"
egrep -v "($re)"
}

View File

@@ -1,455 +0,0 @@
#
# Ensure that all previously dirty items in memory in all mounts are synced
# and visible in the inode seq indexes. We have to force a sync on every
# node by dirtying data as that's the only way to guarantee advancing
# the sequence number on each node, which limits index visibility. Some
# distros don't have sync -f so we dirty our mounts then sync
# everything.
#
t_sync_seq_index()
{
local m
for m in $T_MS; do
t_quiet touch $m
done
t_quiet sync
}
t_mount_rid()
{
local nr="${1:-0}"
local mnt="$(eval echo \$T_M$nr)"
local rid
rid=$(scoutfs statfs -s rid -p "$mnt")
echo "$rid"
}
#
# Output the "f.$fsid.r.$rid" identifier string for the given path
# in a mounted scoutfs volume.
#
t_ident_from_mnt()
{
local mnt="$1"
local fsid
local rid
fsid=$(scoutfs statfs -s fsid -p "$mnt")
rid=$(scoutfs statfs -s rid -p "$mnt")
echo "f.${fsid:0:6}.r.${rid:0:6}"
}
#
# Output the "f.$fsid.r.$rid" identifier string for the given mount
# number, 0 is used by default if none is specified.
#
t_ident()
{
local nr="${1:-0}"
local mnt="$(eval echo \$T_M$nr)"
t_ident_from_mnt "$mnt"
}
#
# Output the sysfs path for the given mount ident.
#
t_sysfs_path_from_ident()
{
local ident="$1"
echo "/sys/fs/scoutfs/$ident"
}
#
# Output the sysfs path for a path in a mounted fs.
#
t_sysfs_path_from_mnt()
{
local mnt="$1"
t_sysfs_path_from_ident $(t_ident_from_mnt $mnt)
}
#
# Output the mount's sysfs path, defaulting to mount 0 if none is
# specified.
#
t_sysfs_path()
{
local nr="$1"
t_sysfs_path_from_ident $(t_ident $nr)
}
#
# Output the mount's debugfs path, defaulting to mount 0 if none is
# specified.
#
t_debugfs_path()
{
local nr="$1"
echo "/sys/kernel/debug/scoutfs/$(t_ident $nr)"
}
#
# output all the configured test nrs for iteration
#
t_fs_nrs()
{
seq 0 $((T_NR_MOUNTS - 1))
}
#
# outputs "1" if the fs number has "1" in its quorum/is_leader file.
# All other cases output 0, including the fs nr being a client which
# won't have a quorum/ dir.
#
t_fs_is_leader()
{
local nr="$1"
if [ "$(cat $(t_sysfs_path $nr)/quorum/is_leader 2>/dev/null)" == "1" ]; then
echo "1"
else
echo "0"
fi
}
#
# Output the mount nr of the current server. This takes no steps to
# ensure that the server doesn't shut down and have some other mount
# take over.
#
t_server_nr()
{
for i in $(t_fs_nrs); do
if [ "$(t_fs_is_leader $i)" == "1" ]; then
echo $i
return
fi
done
t_fail "t_server_nr didn't find a server"
}
#
# Output the mount nr of the first client that we find. There can be
# no clients if there's only one mount who has to be the server. This
# takes no steps to ensure that the client doesn't become a server at
# any point.
#
t_first_client_nr()
{
for i in $(t_fs_nrs); do
if [ "$(t_fs_is_leader $i)" == "0" ]; then
echo $i
return
fi
done
t_fail "t_first_client_nr didn't find any clients"
}
#
# The number of quorum members needed to form a majority to start the
# server.
#
t_majority_count()
{
if [ "$T_QUORUM" -lt 3 ]; then
echo 1
else
echo $(((T_QUORUM / 2) + 1))
fi
}
t_mount()
{
local nr="$1"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet mount -t scoutfs \$T_O$nr \$T_DB$nr \$T_M$nr
}
t_umount()
{
local nr="$1"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet umount \$T_M$nr
}
t_force_umount()
{
local nr="$1"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet umount -f \$T_M$nr
}
#
# Attempt to mount all the configured mounts, assuming that they're
# not already mounted.
#
t_mount_all()
{
local pids=""
local p
for i in $(t_fs_nrs); do
t_mount $i &
p="$!"
pids="$pids $!"
done
for p in $pids; do
t_quiet wait $p
done
}
#
# Attempt to unmount all the configured mounts, assuming that they're
# all mounted.
#
t_umount_all()
{
local pids=""
local p
for i in $(t_fs_nrs); do
t_umount $i &
p="$!"
pids="$pids $!"
done
for p in $pids; do
t_quiet wait $p
done
}
t_remount_all()
{
t_quiet t_umount_all || t_fail "umounting all failed"
t_quiet t_mount_all || t_fail "mounting all failed"
}
t_reinsert_remount_all()
{
t_quiet t_umount_all || t_fail "umounting all failed"
t_quiet rmmod scoutfs || \
t_fail "rmmod scoutfs failed"
t_quiet insmod "$T_KMOD/src/scoutfs.ko" ||
t_fail "insmod scoutfs failed"
t_quiet t_mount_all || t_fail "mounting all failed"
}
t_trigger_path() {
local nr="$1"
echo "/sys/kernel/debug/scoutfs/$(t_ident $nr)/trigger"
}
t_trigger_get() {
local which="$1"
local nr="$2"
cat "$(t_trigger_path "$nr")/$which"
}
t_trigger_show() {
local which="$1"
local string="$2"
local nr="$3"
echo "trigger $which $string: $(t_trigger_get $which $nr)"
}
t_trigger_arm_silent() {
local which="$1"
local nr="$2"
local path=$(t_trigger_path "$nr")
echo 1 > "$path/$which"
}
t_trigger_arm() {
local which="$1"
local nr="$2"
t_trigger_arm_silent $which $nr
t_trigger_show $which armed $nr
}
#
# output the value of the given counter for the given mount, defaulting
# to mount 0 if a mount isn't specified.
#
t_counter() {
local which="$1"
local nr="$2"
cat "$(t_sysfs_path $nr)/counters/$which"
}
#
# output the difference between the current value of a counter and the
# caller's provided previous value.
#
t_counter_diff_value() {
local which="$1"
local old="$2"
local nr="$3"
local new="$(t_counter $which $nr)"
echo "$((new - old))"
}
#
# output the value of the given counter for the given mount, defaulting
# to mount 0 if a mount isn't specified. For tests which expect a
# specific difference in counters.
#
t_counter_diff() {
local which="$1"
local old="$2"
local nr="$3"
echo "counter $which diff $(t_counter_diff_value $which $old $nr)"
}
#
# output a message indicating whether or not the counter value changed.
# For tests that expect a difference, or not, but the amount of
# difference isn't significant.
#
t_counter_diff_changed() {
local which="$1"
local old="$2"
local nr="$3"
local diff="$(t_counter_diff_value $which $old $nr)"
test "$diff" -eq 0 && \
echo "counter $which didn't change" ||
echo "counter $which changed"
}
#
# See if we can find a local mount with the caller's rid.
#
t_rid_is_mounted() {
local rid="$1"
local fr
for fr in /sys/fs/scoutfs/*; do
if [ "$(cat $fr/rid)" == "$rid" ]; then
return 0
fi
done
return 1
}
#
# A given mount is being fenced if any mount has a fence request pending
# for it which hasn't finished and been removed.
#
t_rid_is_fencing() {
local rid="$1"
local fr
for fr in /sys/fs/scoutfs/*; do
if [ -d "$fr/fence/$rid" ]; then
return 0
fi
done
return 1
}
#
# Wait until the mount identified by the first rid arg is not in any
# states specified by the remaining state description word args.
#
t_wait_if_rid_is() {
local rid="$1"
while ( [[ $* =~ mounted ]] && t_rid_is_mounted $rid ) ||
( [[ $* =~ fencing ]] && t_rid_is_fencing $rid ) ; do
sleep .5
done
}
#
# Wait until any mount identifies itself as the elected leader. We can
# be waiting while tests mount and unmount so mounts may not be mounted
# at the test's expected mount points.
#
t_wait_for_leader() {
local i
while sleep .25; do
for i in $(t_fs_nrs); do
local ldr="$(t_sysfs_path $i 2>/dev/null)/quorum/is_leader"
if [ "$(cat $ldr 2>/dev/null)" == "1" ]; then
return
fi
done
done
}
t_set_sysfs_mount_option() {
local nr="$1"
local name="$2"
local val="$3"
local opt="$(t_sysfs_path $nr)/mount_options/$name"
echo "$val" > "$opt"
}
t_set_all_sysfs_mount_options() {
local name="$1"
local val="$2"
local i
for i in $(t_fs_nrs); do
t_set_sysfs_mount_option $i $name $val
done
}
declare -A _saved_opts
t_save_all_sysfs_mount_options() {
local name="$1"
local ind
local opt
local i
for i in $(t_fs_nrs); do
opt="$(t_sysfs_path $i)/mount_options/$name"
ind="$name_$i"
_saved_opts[$ind]="$(cat $opt)"
done
}
t_restore_all_sysfs_mount_options() {
local name="$1"
local ind
local i
for i in $(t_fs_nrs); do
ind="$name_$i"
t_set_sysfs_mount_option $i $name "${_saved_opts[$ind]}"
done
}

View File

@@ -1,40 +0,0 @@
#
# Make sure that all the base command arguments are found in the path.
# This isn't strictly necessary as the test will naturally fail if the
# command isn't found, but it's nice to fail fast and clearly
# communicate why.
#
t_require_commands() {
local c
for c in "$@"; do
which "$c" >/dev/null 2>&1 || \
t_fail "command $c not found in path"
done
}
#
# make sure that we have at least this many mounts
#
t_require_mounts() {
local req="$1"
test "$T_NR_MOUNTS" -ge "$req" || \
t_skip "$req mounts required, only have $T_NR_MOUNTS"
}
#
# Require that the meta device be at least the size string argument, as
# parsed by numfmt using single char base 2 suffixes (iec).. 64G, etc.
#
t_require_meta_size() {
local dev="$T_META_DEVICE"
local req_iec="$1"
local req_bytes=$(numfmt --from=iec --to=none $req_iec)
local dev_bytes=$(blockdev --getsize64 $dev)
local dev_iec=$(numfmt --from=auto --to=iec $dev_bytes)
test "$dev_bytes" -ge "$req_bytes" || \
t_skip "$dev must be at least $req_iec, is $dev_iec"
}

View File

@@ -1,36 +0,0 @@
== calculate number of files
== create per mount dirs
== generate phase scripts
== round 1: create
== round 1: online
== round 1: verify
== round 1: release
== round 1: offline
== round 1: stage
== round 1: online
== round 1: verify
== round 1: release
== round 1: offline
== round 1: unlink
== round 2: create
== round 2: online
== round 2: verify
== round 2: release
== round 2: offline
== round 2: stage
== round 2: online
== round 2: verify
== round 2: release
== round 2: offline
== round 2: unlink
== round 3: create
== round 3: online
== round 3: verify
== round 3: release
== round 3: offline
== round 3: stage
== round 3: online
== round 3: verify
== round 3: release
== round 3: offline
== round 3: unlink

View File

@@ -1,6 +0,0 @@
== prepare devices, mount point, and logs
== bad devices, bad options
== swapped devices
== both meta devices
== both data devices
== good volume, bad option and good options

View File

@@ -1,53 +0,0 @@
== single block write
online: 1
offline: 0
st_blocks: 8
== single block overwrite
online: 1
offline: 0
st_blocks: 8
== append
online: 2
offline: 0
st_blocks: 16
== release
online: 0
offline: 2
st_blocks: 16
== duplicate release
online: 0
offline: 2
st_blocks: 16
== duplicate release past i_size
online: 0
offline: 2
st_blocks: 16
== stage
online: 2
offline: 0
st_blocks: 16
== duplicate stage
online: 2
offline: 0
st_blocks: 16
== larger file
online: 256
offline: 0
st_blocks: 2048
== partial truncate
online: 128
offline: 0
st_blocks: 1024
== single sparse block
online: 1
offline: 0
st_blocks: 8
== empty file
online: 0
offline: 0
st_blocks: 0
== non-regular file
online: 0
offline: 0
st_blocks: 0
== cleanup

View File

@@ -1,57 +0,0 @@
== root inode updates flow back and forth
== stat of created file matches
== written file contents match
== overwritten file contents match
== appended file contents match
== fiemap matches after racey appends
== unlinked file isn't found
== symlink targets match
/mnt/test/test/basic-posix-consistency/file.targ
/mnt/test/test/basic-posix-consistency/file.targ
/mnt/test/test/basic-posix-consistency/file.targ2
/mnt/test/test/basic-posix-consistency/file.targ2
== new xattrs are visible
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="1"
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="1"
== modified xattrs are updated
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="2"
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="2"
== deleted xattrs
/mnt/test/test/basic-posix-consistency/file: user.xat: No such attribute
/mnt/test/test/basic-posix-consistency/file: user.xat: No such attribute
== readdir after modification
one
two
three
four
one
two
three
four
two
four
two
four
== can delete empty dir
== some easy rename cases
--- file between dirs
--- file within dir
--- dir within dir
--- overwrite file
--- can't overwrite non-empty dir
mv: cannot move /mnt/test/test/basic-posix-consistency/dir/c/clobber to /mnt/test/test/basic-posix-consistency/dir/a/dir: Directory not empty
--- can overwrite empty dir
== path resoluion
== inode indexes match after syncing existing
== inode indexes match after copying and syncing
== inode indexes match after removing and syncing
== concurrent creates make one file
one-file

View File

@@ -1,2 +0,0 @@
== Issue scoutfs df to force block reads to trigger stale invalidation/retry
counter block_cache_remove_stale changed

View File

@@ -1 +0,0 @@
== 60s of unmounting non-quorum clients during recovery

View File

@@ -1,4 +0,0 @@
Run createmany in /mnt/test/test/createmany-parallel/0
Run createmany in /mnt/test/test/createmany-parallel/1
Run createmany in /mnt/test/test/createmany-parallel/2
Run createmany in /mnt/test/test/createmany-parallel/3

View File

@@ -1,3 +0,0 @@
== measure initial createmany
== measure initial createmany
== measure two concurrent createmany runs

View File

@@ -1,2 +0,0 @@
== create large directory with 1220608 files
== randomly renaming 5000 files

View File

@@ -1,2 +0,0 @@
== repeated cross-mount alloc+free, totalling 2x free
== remove empty test file

View File

@@ -1,10 +0,0 @@
== create per node dirs
== touch files on each node
== recreate the files
== turn the files into directories
== rename parent dirs
== rename parent dirs back
== create some hard links
== recreate one of the hard links
== delete the remaining hard link
== race to blow everything away

View File

@@ -1,8 +0,0 @@
== prepare directories and files
== fallocate until enospc
== remove all the files and verify free data blocks
== make small meta fs
== create large xattrs until we fill up metadata
== remove files with xattrs after enospc
== make sure we can create again
== cleanup small meta fs

Some files were not shown because too many files have changed in this diff