Compare commits


1 Commit

Author SHA1 Message Date
Zach Brown
96f2ad29dc Add inode crtime creation time
Add an inode creation time field.  It's created for all new inodes.
It's visible to stat_more.  setattr_more can set it during
restore.

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-08 11:00:30 -07:00
79 changed files with 2166 additions and 4736 deletions

README.md
View File

@@ -1,24 +1,135 @@
# Introduction
scoutfs is a clustered in-kernel Linux filesystem designed to support
large archival systems. It features additional interfaces and metadata
so that archive agents can perform their maintenance workflows without
walking all the files in the namespace. Its cluster support lets
deployments add nodes to satisfy archival tier bandwidth targets.
scoutfs is a clustered in-kernel Linux filesystem designed and built
from the ground up to support large archival systems.
The design goal is to reach file populations in the trillions, with the
archival bandwidth to match, while remaining operational and responsive.
Its key differentiating features are:
Highlights of the design and implementation include:
- Integrated consistent indexing accelerates archival maintenance operations
- Commit logs allow nodes to write concurrently without contention
It meets best-of-breed expectations:
* Fully consistent POSIX semantics between nodes
* Rich metadata to ensure the integrity of metadata references
* Atomic transactions to maintain consistent persistent structures
* Integrated archival metadata replaces syncing to external databases
* Dynamic separation of resources lets nodes write in parallel
* 64bit throughout; no limits on file or directory sizes or counts
* First class kernel implementation for high performance and low latency
* Open GPLv2 implementation
Learn more in the [white paper](https://docs.wixstatic.com/ugd/aaa89b_88a5cc84be0b4d1a90f60d8900834d28.pdf).
# Current Status
**Alpha Open Source Development**
scoutfs is under heavy active development. We're developing it in the
open to give the community an opportunity to affect the design and
implementation.
The core architectural design elements are in place. Much surrounding
functionality hasn't been implemented. It's appropriate for early
adopters and interested developers, not for production use.
In that vein, expect significant incompatible changes to both the
network message format and the persistent structures. Since format
hash-checking has been removed in preparation for release, re-running
mkfs is strongly recommended whenever there is any doubt.
The current kernel module is developed against the RHEL/CentOS 7.x
kernel to minimize the friction of developing and testing with partners'
existing infrastructure. Once we're happy with the design we'll shift
development to the upstream kernel while maintaining distro
compatibility branches.
# Community Mailing List
Please join us on the open scoutfs-devel@scoutfs.org [mailing list
hosted on Google Groups](https://groups.google.com/a/scoutfs.org/forum/#!forum/scoutfs-devel)
for all discussion of scoutfs.
# Quick Start
**The following is a very rough example of the procedure to get up and
running; experience will be needed to fill in the gaps. We're happy to
help on the mailing list.**
The requirements for running scoutfs on a small cluster are:
1. One or more nodes running x86-64 CentOS/RHEL 7.4 (or 7.3)
2. Access to two shared block devices
3. IPv4 connectivity between the nodes
The steps for getting scoutfs mounted and operational are:
1. Get the kernel module running on the nodes
2. Make a new filesystem on the devices with the userspace utilities
3. Mount the devices on all the nodes
In this example we use three nodes. The names of the block devices are
the same on all the nodes. Two of the nodes will be quorum members. A
majority of quorum members must be mounted to elect a leader to run a
server that all the mounts connect to. Note that two quorum members
result in a majority of one, each member itself, so split-brain
elections are possible but so unlikely that it's fine for a
demonstration.
1. Get the Kernel Module and Userspace Binaries
* Either use snapshot RPMs built from git by Versity:
```shell
rpm -i https://scoutfs.s3-us-west-2.amazonaws.com/scoutfs-repo-0.0.1-1.el7_4.noarch.rpm
yum install scoutfs-utils kmod-scoutfs
```
* Or use the binaries built from checked out git repositories:
```shell
yum install kernel-devel
git clone git@github.com:versity/scoutfs.git
make -C scoutfs
modprobe libcrc32c
insmod scoutfs/kmod/src/scoutfs.ko
alias scoutfs=$PWD/scoutfs/utils/src/scoutfs
```
2. Make a New Filesystem (**destroys contents**)
We specify a quorum slot (slot number, address, and port) for each of
the quorum member nodes, followed by the metadata device and the data device.
```shell
scoutfs mkfs -Q 0,$NODE0_ADDR,12345 -Q 1,$NODE1_ADDR,12345 /dev/meta_dev /dev/data_dev
```
3. Mount the Filesystem
First, mount each of the quorum nodes so that they can elect and
start a server for the remaining node to connect to. The slot numbers
were specified with the leading "0,..." and "1,..." in the mkfs options
above.
```shell
mount -t scoutfs -o quorum_slot_nr=$SLOT_NR,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
Then mount the remaining node which can now connect to the running server.
```shell
mount -t scoutfs -o metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
4. For Kicks, Observe the Metadata Change Index
The `meta_seq` index tracks the inodes that are changed in each
transaction.
```shell
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/two; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
```

View File

@@ -1,12 +0,0 @@
Versity ScoutFS Release Notes
=============================
---
v1.0
\
*Nov 8, 2021*
* **Initial Release**
\
Version 1.0 marks the first GA release.

View File

@@ -252,7 +252,6 @@ static struct scoutfs_ext_ops alloc_ext_ops = {
.next = alloc_ext_next,
.insert = alloc_ext_insert,
.remove = alloc_ext_remove,
.insert_overlap_warn = true,
};
static bool invalid_extent(u64 start, u64 end, u64 first, u64 last)
@@ -262,17 +261,20 @@ static bool invalid_extent(u64 start, u64 end, u64 first, u64 last)
static bool invalid_meta_blkno(struct super_block *sb, u64 blkno)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
u64 last_meta = (i_size_read(sbi->meta_bdev->bd_inode) >> SCOUTFS_BLOCK_LG_SHIFT) - 1;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
return invalid_extent(blkno, blkno, SCOUTFS_META_DEV_START_BLKNO, last_meta);
return invalid_extent(blkno, blkno,
le64_to_cpu(super->first_meta_blkno),
le64_to_cpu(super->last_meta_blkno));
}
static bool invalid_data_extent(struct super_block *sb, u64 start, u64 len)
{
u64 last_data = (i_size_read(sb->s_bdev->bd_inode) >> SCOUTFS_BLOCK_SM_SHIFT) - 1;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
return invalid_extent(start, start + len - 1, SCOUTFS_DATA_DEV_START_BLKNO, last_data);
return invalid_extent(start, start + len - 1,
le64_to_cpu(super->first_data_blkno),
le64_to_cpu(super->last_data_blkno));
}
void scoutfs_alloc_init(struct scoutfs_alloc *alloc,
@@ -970,8 +972,6 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
moved += ext.len;
scoutfs_inc_counter(sb, alloc_moved_extent);
trace_scoutfs_alloc_move_extent(sb, &ext);
}
scoutfs_inc_counter(sb, alloc_move);
@@ -980,39 +980,6 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
return ret;
}
/*
* Add new free space to an allocator. _ext_insert will make sure that it doesn't
* overlap with any existing extents. This is done by the server in a transaction that
* also updates total_*_blocks in the super so we don't verify.
*/
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len)
{
struct alloc_ext_args args = {
.alloc = alloc,
.wri = wri,
.root = root,
.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE,
};
return scoutfs_ext_insert(sb, &alloc_ext_ops, &args, start, len, 0, 0);
}
int scoutfs_alloc_remove(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len)
{
struct alloc_ext_args args = {
.alloc = alloc,
.wri = wri,
.root = root,
.zone = SCOUTFS_FREE_EXTENT_BLKNO_ZONE,
};
return scoutfs_ext_remove(sb, &alloc_ext_ops, &args, start, len);
}
/*
* We only trim one block, instead of looping trimming all, because the
* caller is assuming that we do a fixed amount of work when they check
@@ -1059,31 +1026,18 @@ out:
}
/*
* True if the allocator has enough blocks in the avail list and space
in the freed list to be able to perform the caller's operations. If
* false the caller should back off and return partial progress rather
* than completely exhausting the avail list or overflowing the freed
* list.
* True if the allocator has enough free blocks to cow (alloc and free)
* a list block and all the btree blocks that store extent items.
*
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
*
* The caller tells us how many extents they're about to modify and how
* many other additional blocks they may cow manually. And finally, the
* caller could be the first to dirty the avail and freed blocks in the
allocator.
* At most, an extent operation can dirty down three paths of the tree
* to modify a blkno item and two distant order items. We can grow and
* split the root, and then those three paths could share blocks but each
* modify two leaf blocks.
*/
static bool list_has_blocks(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *root, u32 extents, u32 addl_blocks)
static bool list_can_cow(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *root)
{
u32 tree_blocks = (((1 + root->root.height) * 2) * 3) * extents;
u32 most = 1 + tree_blocks + addl_blocks;
u32 most = 1 + (1 + 1 + (3 * (1 + root->root.height + 1)));
if (le32_to_cpu(alloc->avail.first_nr) < most) {
scoutfs_inc_counter(sb, alloc_list_avail_lo);
@@ -1147,7 +1101,8 @@ int scoutfs_alloc_fill_list(struct super_block *sb,
goto out;
lblk = bl->data;
while (le32_to_cpu(lblk->nr) < target && list_has_blocks(sb, alloc, root, 1, 0)) {
while (le32_to_cpu(lblk->nr) < target &&
list_can_cow(sb, alloc, root)) {
ret = scoutfs_ext_alloc(sb, &alloc_ext_ops, &args, 0, 0,
target - le32_to_cpu(lblk->nr), &ext);
@@ -1159,8 +1114,6 @@ int scoutfs_alloc_fill_list(struct super_block *sb,
for (i = 0; i < ext.len; i++)
list_block_add(lhead, lblk, ext.start + i);
trace_scoutfs_alloc_fill_extent(sb, &ext);
}
out:
@@ -1193,7 +1146,7 @@ int scoutfs_alloc_empty_list(struct super_block *sb,
if (WARN_ON_ONCE(lhead_in_alloc(alloc, lhead)))
return -EINVAL;
while (lhead->ref.blkno && list_has_blocks(sb, alloc, args.root, 1, 1)) {
while (lhead->ref.blkno && list_can_cow(sb, alloc, args.root)) {
if (lhead->first_nr == 0) {
ret = trim_empty_first_block(sb, alloc, wri, lhead);
@@ -1229,8 +1182,6 @@ int scoutfs_alloc_empty_list(struct super_block *sb,
break;
list_block_remove(lhead, lblk, ext.len);
trace_scoutfs_alloc_empty_extent(sb, &ext);
}
scoutfs_block_put(sb, bl);
@@ -1333,17 +1284,15 @@ bool scoutfs_alloc_test_flag(struct super_block *sb,
}
/*
* Iterate over the allocator structures referenced by the caller's
* super and call the caller's callback with summaries of the blocks
* found in each structure.
*
* The caller's responsible for the stability of the referenced blocks.
* If the blocks could be stale the caller must deal with retrying when
* it sees ESTALE.
* Call the caller's callback for every persistent allocator structure
* we can find.
*/
int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_block *super,
scoutfs_alloc_foreach_cb_t cb, void *arg)
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg)
{
struct scoutfs_block_ref stale_refs[2] = {{0,}};
struct scoutfs_block_ref refs[2] = {{0,}};
struct scoutfs_super_block *super = NULL;
struct scoutfs_srch_compact *sc;
struct scoutfs_log_merge_request *lmreq;
struct scoutfs_log_merge_complete *lmcomp;
@@ -1356,12 +1305,21 @@ int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_blo
u64 id;
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
sc = kmalloc(sizeof(struct scoutfs_srch_compact), GFP_NOFS);
if (!sc) {
if (!super || !sc) {
ret = -ENOMEM;
goto out;
}
retry:
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
refs[0] = super->logs_root.ref;
refs[1] = super->srch_root.ref;
/* all the server allocators */
ret = cb(sb, arg, SCOUTFS_ALLOC_OWNER_SERVER, 0, true, true,
le64_to_cpu(super->meta_alloc[0].total_len)) ?:
@@ -1504,40 +1462,6 @@ int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_blo
ret = 0;
out:
kfree(sc);
return ret;
}
/*
* Read the current on-disk super and use it to walk the allocators and
* call the caller's callback. This assumes that the super it's reading
* could be stale and will retry if it encounters stale blocks.
*/
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg)
{
struct scoutfs_super_block *super = NULL;
struct scoutfs_block_ref stale_refs[2] = {{0,}};
struct scoutfs_block_ref refs[2] = {{0,}};
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
retry:
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
refs[0] = super->logs_root.ref;
refs[1] = super->srch_root.ref;
ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);
out:
if (ret == -ESTALE) {
if (memcmp(&stale_refs, &refs, sizeof(refs)) == 0) {
ret = -EIO;
@@ -1549,16 +1473,18 @@ out:
}
kfree(super);
kfree(sc);
return ret;
}
struct foreach_cb_args {
scoutfs_alloc_extent_cb_t cb;
void *cb_arg;
};
static int alloc_btree_extent_item_cb(struct super_block *sb, struct scoutfs_key *key, u64 seq,
u8 flags, void *val, int val_len, void *arg)
static int alloc_btree_extent_item_cb(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, void *arg)
{
struct foreach_cb_args *cba = arg;
struct scoutfs_extent ext;

View File

@@ -132,12 +132,6 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
int scoutfs_alloc_remove(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
int scoutfs_alloc_fill_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
@@ -166,8 +160,6 @@ typedef int (*scoutfs_alloc_foreach_cb_t)(struct super_block *sb, void *arg,
bool meta, bool avail, u64 blocks);
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg);
int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_block *super,
scoutfs_alloc_foreach_cb_t cb, void *arg);
typedef void (*scoutfs_alloc_extent_cb_t)(struct super_block *sb, void *cb_arg,
struct scoutfs_extent *ext);
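To illustrate how this iteration interface is consumed, here is a minimal sketch of a `scoutfs_alloc_foreach()` caller that sums available metadata blocks. The callback's full argument list is only partially visible in this hunk, so the leading owner/id parameters are assumptions inferred from the call sites in alloc.c:

```c
/*
 * Hypothetical example, not part of this change: sum the available
 * metadata blocks across every allocator structure visited by
 * scoutfs_alloc_foreach().
 */
struct free_meta_sum {
	u64 blocks;
};

static int sum_free_meta(struct super_block *sb, void *arg, u8 owner,
			 u64 id, bool meta, bool avail, u64 blocks)
{
	struct free_meta_sum *sum = arg;

	/* only count metadata allocators with blocks still available */
	if (meta && avail)
		sum->blocks += blocks;
	return 0;
}

static u64 count_free_meta(struct super_block *sb)
{
	struct free_meta_sum sum = {0};

	/* foreach returns 0 or -errno; treat an error as an unknown sum */
	if (scoutfs_alloc_foreach(sb, sum_free_meta, &sum) < 0)
		return 0;
	return sum.blocks;
}
```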

View File

@@ -645,11 +645,9 @@ static struct block_private *block_read(struct super_block *sb, u64 blkno)
goto out;
}
wait_event(binf->waitq, uptodate_or_error(bp));
if (test_bit(BLOCK_BIT_ERROR, &bp->bits))
ret = wait_event_interruptible(binf->waitq, uptodate_or_error(bp));
if (ret == 0 && test_bit(BLOCK_BIT_ERROR, &bp->bits))
ret = -EIO;
else
ret = 0;
out:
if (ret < 0) {

View File

@@ -30,7 +30,6 @@
#include "avl.h"
#include "hash.h"
#include "sort_priv.h"
#include "forest.h"
#include "scoutfs_trace.h"
@@ -503,8 +502,9 @@ static __le16 insert_value(struct scoutfs_btree_block *bt, __le16 item_off,
* This only consumes free space. It's safe to use references to block
* structures after this call.
*/
static void create_item(struct scoutfs_btree_block *bt, struct scoutfs_key *key, u64 seq, u8 flags,
void *val, unsigned val_len, struct scoutfs_avl_node *parent, int cmp)
static void create_item(struct scoutfs_btree_block *bt,
struct scoutfs_key *key, void *val, unsigned val_len,
struct scoutfs_avl_node *parent, int cmp)
{
struct scoutfs_btree_item *item;
@@ -516,8 +516,6 @@ static void create_item(struct scoutfs_btree_block *bt, struct scoutfs_key *key,
item = end_item(bt);
item->key = *key;
item->seq = cpu_to_le64(seq);
item->flags = flags;
scoutfs_avl_insert(&bt->item_root, parent, &item->node, cmp);
leaf_item_hash_insert(bt, item_key(item), ptr_off(bt, item));
@@ -560,8 +558,6 @@ static void delete_item(struct scoutfs_btree_block *bt,
/* move the final item into the deleted space */
if (end != item) {
item->key = end->key;
item->seq = end->seq;
item->flags = end->flags;
item->val_off = end->val_off;
item->val_len = end->val_len;
leaf_item_hash_change(bt, &end->key, ptr_off(bt, item),
@@ -610,8 +606,8 @@ static void move_items(struct scoutfs_btree_block *dst,
else
next = next_item(src, from);
create_item(dst, item_key(from), le64_to_cpu(from->seq), from->flags,
item_val(src, from), item_val_len(from), par, cmp);
create_item(dst, item_key(from), item_val(src, from),
item_val_len(from), par, cmp);
if (move_right) {
if (par)
@@ -684,7 +680,7 @@ static void create_parent_item(struct scoutfs_btree_block *parent,
scoutfs_avl_search(&parent->item_root, cmp_key_item, key, &cmp, &par,
NULL, NULL);
create_item(parent, key, 0, 0, &ref, sizeof(ref), par, cmp);
create_item(parent, key, &ref, sizeof(ref), par, cmp);
}
/*
@@ -1533,7 +1529,7 @@ int scoutfs_btree_insert(struct super_block *sb,
if (node) {
ret = -EEXIST;
} else {
create_item(bt, key, 0, 0, val, val_len, par, cmp);
create_item(bt, key, val, val_len, par, cmp);
ret = 0;
}
}
@@ -1634,7 +1630,7 @@ int scoutfs_btree_force(struct super_block *sb,
} else {
scoutfs_avl_search(&bt->item_root, cmp_key_item, key,
&cmp, &par, NULL, NULL);
create_item(bt, key, 0, 0, val, val_len, par, cmp);
create_item(bt, key, val, val_len, par, cmp);
}
ret = 0;
@@ -1853,8 +1849,8 @@ int scoutfs_btree_read_items(struct super_block *sb,
if (scoutfs_key_compare(&item->key, end) > 0)
break;
ret = cb(sb, item_key(item), le64_to_cpu(item->seq), item->flags,
item_val(bt, item), item_val_len(item), arg);
ret = cb(sb, item_key(item), item_val(bt, item),
item_val_len(item), arg);
if (ret < 0)
break;
@@ -1874,10 +1870,6 @@ out:
* This can make partial progress before returning an error, leaving
* dirty btree blocks with only some of the caller's items. It's up to
* the caller to resolve this.
*
* This, along with merging, are the only places that seq and flags are
* set in btree items. They're only used for fs items written through
* the item cache and forest of log btrees.
*/
int scoutfs_btree_insert_list(struct super_block *sb,
struct scoutfs_alloc *alloc,
@@ -1903,28 +1895,13 @@ int scoutfs_btree_insert_list(struct super_block *sb,
do {
item = leaf_item_hash_search(sb, bt, &lst->key);
if (item) {
/* try to merge delta values, _NULL not deleted; merge will */
ret = scoutfs_forest_combine_deltas(&lst->key,
item_val(bt, item),
item_val_len(item),
lst->val, lst->val_len);
if (ret < 0) {
scoutfs_block_put(sb, bl);
goto out;
}
item->seq = cpu_to_le64(lst->seq);
item->flags = lst->flags;
if (ret == 0)
update_item_value(bt, item, lst->val, lst->val_len);
else
ret = 0;
update_item_value(bt, item, lst->val,
lst->val_len);
} else {
scoutfs_avl_search(&bt->item_root,
cmp_key_item, &lst->key,
&cmp, &par, NULL, NULL);
create_item(bt, &lst->key, lst->seq, lst->flags, lst->val,
create_item(bt, &lst->key, lst->val,
lst->val_len, par, cmp);
}
@@ -2036,16 +2013,94 @@ int scoutfs_btree_rebalance(struct super_block *sb,
struct merge_pos {
struct rb_node node;
struct scoutfs_btree_root *root;
struct scoutfs_block *bl;
struct scoutfs_btree_block *bt;
struct scoutfs_avl_node *avl;
struct scoutfs_key *key;
u64 seq;
u8 flags;
struct scoutfs_key key;
unsigned int val_len;
u8 *val;
u8 val[SCOUTFS_BTREE_MAX_VAL_LEN];
};
/*
* Find the next item in the mpos's root after its key and make sure
* that it's in its sorted position in the rbtree. We're responsible
* for freeing the mpos if we don't put it back in the pos_root. This
* happens naturally naturally when its item_root has no more items to
* merge.
*/
static int reset_mpos(struct super_block *sb, struct rb_root *pos_root,
struct merge_pos *mpos, struct scoutfs_key *end,
scoutfs_btree_merge_cmp_t merge_cmp)
{
SCOUTFS_BTREE_ITEM_REF(iref);
struct merge_pos *walk;
struct rb_node *parent;
struct rb_node **node;
int key_cmp;
int val_cmp;
int ret;
restart:
if (!RB_EMPTY_NODE(&mpos->node)) {
rb_erase(&mpos->node, pos_root);
RB_CLEAR_NODE(&mpos->node);
}
/* find the next item in the root within end */
ret = scoutfs_btree_next(sb, mpos->root, &mpos->key, &iref);
if (ret == 0) {
if (scoutfs_key_compare(iref.key, end) > 0) {
ret = -ENOENT;
} else {
mpos->key = *iref.key;
mpos->val_len = iref.val_len;
memcpy(mpos->val, iref.val, iref.val_len);
}
scoutfs_btree_put_iref(&iref);
}
if (ret < 0) {
kfree(mpos);
if (ret == -ENOENT)
ret = 0;
goto out;
}
rewalk:
/* sort merge items by key then oldest to newest */
node = &pos_root->rb_node;
parent = NULL;
while (*node) {
parent = *node;
walk = container_of(*node, struct merge_pos, node);
key_cmp = scoutfs_key_compare(&mpos->key, &walk->key);
val_cmp = merge_cmp(mpos->val, mpos->val_len,
walk->val, walk->val_len);
/* drop old versions of logged keys as we discover them */
if (key_cmp == 0) {
scoutfs_inc_counter(sb, btree_merge_drop_old);
if (val_cmp < 0) {
scoutfs_key_inc(&mpos->key);
goto restart;
} else {
BUG_ON(val_cmp == 0);
rb_erase(&walk->node, pos_root);
kfree(walk);
goto rewalk;
}
}
if ((key_cmp ?: val_cmp) < 0)
node = &(*node)->rb_left;
else
node = &(*node)->rb_right;
}
rb_link_node(&mpos->node, parent, node);
rb_insert_color(&mpos->node, pos_root);
ret = 0;
out:
return ret;
}
static struct merge_pos *first_mpos(struct rb_root *root)
{
struct rb_node *node = rb_first(root);
@@ -2054,178 +2109,22 @@ static struct merge_pos *first_mpos(struct rb_root *root)
return NULL;
}
static struct merge_pos *next_mpos(struct merge_pos *mpos)
{
struct rb_node *node;
if (mpos && (node = rb_next(&mpos->node)))
return container_of(node, struct merge_pos, node);
else
return NULL;
}
static void free_mpos(struct super_block *sb, struct merge_pos *mpos)
{
scoutfs_block_put(sb, mpos->bl);
kfree(mpos);
}
static void insert_mpos(struct rb_root *pos_root, struct merge_pos *ins)
{
struct rb_node **node = &pos_root->rb_node;
struct rb_node *parent = NULL;
struct merge_pos *mpos;
int cmp;
parent = NULL;
while (*node) {
parent = *node;
mpos = container_of(*node, struct merge_pos, node);
/* sort merge items by key then newest to oldest */
cmp = scoutfs_key_compare(ins->key, mpos->key) ?:
-scoutfs_cmp(ins->seq, mpos->seq);
if (cmp < 0)
node = &(*node)->rb_left;
else
node = &(*node)->rb_right;
}
rb_link_node(&ins->node, parent, node);
rb_insert_color(&ins->node, pos_root);
}
/*
* Find the next item in the merge_pos root in the caller's range and
* insert it into the rbtree sorted by key and version so that merging
* can find the next newest item at the front of the rbtree. We free
* the mpos on error or if there are no more items in the range.
*/
static int reset_mpos(struct super_block *sb, struct rb_root *pos_root, struct merge_pos *mpos,
struct scoutfs_key *start, struct scoutfs_key *end)
{
struct scoutfs_btree_item *item;
struct scoutfs_avl_node *next;
struct btree_walk_key_range kr;
struct scoutfs_key walk_key;
int ret = 0;
/* always erase before freeing or inserting */
if (!RB_EMPTY_NODE(&mpos->node)) {
rb_erase(&mpos->node, pos_root);
RB_CLEAR_NODE(&mpos->node);
}
/*
* advance to next item via the avl tree. The caller's pos is
* only ever incremented past the last key so we can use next to
* iterate rather than using search to skip past multiple items.
*/
if (mpos->avl)
mpos->avl = scoutfs_avl_next(&mpos->bt->item_root, mpos->avl);
/* find the next leaf with the key if we run out of items */
walk_key = *start;
while (!mpos->avl && !scoutfs_key_is_zeros(&walk_key)) {
scoutfs_block_put(sb, mpos->bl);
mpos->bl = NULL;
ret = btree_walk(sb, NULL, NULL, mpos->root, BTW_NEXT, &walk_key,
0, &mpos->bl, &kr, NULL);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
free_mpos(sb, mpos);
goto out;
}
mpos->bt = mpos->bl->data;
mpos->avl = scoutfs_avl_search(&mpos->bt->item_root, cmp_key_item,
start, NULL, NULL, &next, NULL) ?: next;
if (mpos->avl == NULL)
walk_key = kr.iter_next;
}
/* see if we're out of items within the range */
item = node_item(mpos->avl);
if (!item || scoutfs_key_compare(item_key(item), end) > 0) {
free_mpos(sb, mpos);
ret = 0;
goto out;
}
/* insert the next item within range at its version */
mpos->key = item_key(item);
mpos->seq = le64_to_cpu(item->seq);
mpos->flags = item->flags;
mpos->val_len = item_val_len(item);
mpos->val = item_val(mpos->bt, item);
insert_mpos(pos_root, mpos);
ret = 0;
out:
return ret;
}
/*
* The caller has reset all the merge positions for all the input log
* btree roots and wants the next logged item it should try and merge
* with the items in the fs_root.
*
* We look ahead in the logged item stream to see if we should merge any
* older logged delta items into one result for the caller. We also
* take this opportunity to skip and reset the mpos for any older
* versions of the first item.
*/
static int next_resolved_mpos(struct super_block *sb, struct rb_root *pos_root,
struct scoutfs_key *end, struct merge_pos **mpos_ret)
{
struct merge_pos *mpos;
struct merge_pos *next;
struct scoutfs_key key;
int ret = 0;
while ((mpos = first_mpos(pos_root)) && (next = next_mpos(mpos)) &&
!scoutfs_key_compare(mpos->key, next->key)) {
ret = scoutfs_forest_combine_deltas(mpos->key, mpos->val, mpos->val_len,
next->val, next->val_len);
if (ret < 0)
break;
/* reset advances to the next item */
key = *mpos->key;
scoutfs_key_inc(&key);
/* always skip next combined or older version */
ret = reset_mpos(sb, pos_root, next, &key, end);
if (ret < 0)
break;
if (ret == SCOUTFS_DELTA_COMBINED) {
scoutfs_inc_counter(sb, btree_merge_delta_combined);
} else if (ret == SCOUTFS_DELTA_COMBINED_NULL) {
scoutfs_inc_counter(sb, btree_merge_delta_null);
/* if merging resulted in no info, skip current */
ret = reset_mpos(sb, pos_root, mpos, &key, end);
if (ret < 0)
break;
}
}
*mpos_ret = mpos;
return ret;
}
/*
* Merge items from a number of read-only input roots into a writable
* destination root. The order of the input roots doesn't matter, the
* items are merged in sorted key order.
*
* The merge_cmp callback determines the order that the input items are
* merged in. The is_del callback determines if a merging item should
* be removed from the destination.
*
* subtree indicates that the destination root is in fact one of many
* parent blocks and shouldn't be split or allowed to fall below the
* join low water mark.
*
* drop_val indicates the initial length of the value that should be
* dropped when merging items into destination items.
*
* -ERANGE is returned if the merge doesn't fully exhaust the range, due
* to allocators running low or needing to join/split the parent.
* *next_ret is set to the next key which hasn't been merged so that the
@@ -2239,7 +2138,9 @@ int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_key *next_ret,
struct scoutfs_btree_root *root,
struct list_head *inputs,
bool subtree, int dirty_limit, int alloc_low)
scoutfs_btree_merge_cmp_t merge_cmp,
scoutfs_btree_merge_is_del_t merge_is_del, bool subtree,
int drop_val, int dirty_limit, int alloc_low)
{
struct scoutfs_btree_root_head *rhead;
struct rb_root pos_root = RB_ROOT;
@@ -2248,13 +2149,11 @@ int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_block *bl = NULL;
struct btree_walk_key_range kr;
struct scoutfs_avl_node *par;
struct scoutfs_key next;
struct merge_pos *mpos;
struct merge_pos *tmp;
int walk_val_len;
int walk_flags;
bool is_del;
int delta;
int cmp;
int ret;
@@ -2262,16 +2161,17 @@ int scoutfs_btree_merge(struct super_block *sb,
scoutfs_inc_counter(sb, btree_merge);
list_for_each_entry(rhead, inputs, head) {
mpos = kzalloc(sizeof(*mpos), GFP_NOFS);
mpos = kmalloc(sizeof(*mpos), GFP_NOFS);
if (!mpos) {
ret = -ENOMEM;
goto out;
}
RB_CLEAR_NODE(&mpos->node);
mpos->key = *start;
mpos->root = &rhead->root;
ret = reset_mpos(sb, &pos_root, mpos, start, end);
ret = reset_mpos(sb, &pos_root, mpos, end, merge_cmp);
if (ret < 0)
goto out;
}
@@ -2281,75 +2181,58 @@ int scoutfs_btree_merge(struct super_block *sb,
walk_flags |= BTW_SUBTREE;
walk_val_len = 0;
while ((ret = next_resolved_mpos(sb, &pos_root, end, &mpos)) == 0 && mpos) {
while ((mpos = first_mpos(&pos_root))) {
if (scoutfs_block_writer_dirty_bytes(sb, wri) >= dirty_limit) {
scoutfs_inc_counter(sb, btree_merge_dirty_limit);
ret = -ERANGE;
*next_ret = *mpos->key;
*next_ret = mpos->key;
goto out;
}
if (scoutfs_alloc_meta_low(sb, alloc, alloc_low)) {
scoutfs_inc_counter(sb, btree_merge_alloc_low);
ret = -ERANGE;
*next_ret = *mpos->key;
*next_ret = mpos->key;
goto out;
}
scoutfs_block_put(sb, bl);
bl = NULL;
ret = btree_walk(sb, alloc, wri, root, walk_flags,
mpos->key, walk_val_len, &bl, &kr, NULL);
&mpos->key, walk_val_len, &bl, &kr, NULL);
if (ret < 0) {
if (ret == -ERANGE)
*next_ret = *mpos->key;
*next_ret = mpos->key;
goto out;
}
bt = bl->data;
scoutfs_inc_counter(sb, btree_merge_walk);
/* catch non-root blocks that fell under low, maybe from null deltas */
if (root->ref.blkno != bt->hdr.blkno && !total_above_join_low_water(bt)) {
walk_flags |= BTW_DELETE;
continue;
}
for (; mpos; mpos = first_mpos(&pos_root)) {
while ((ret = next_resolved_mpos(sb, &pos_root, end, &mpos)) == 0 && mpos) {
/* val must have at least what we need to drop */
if (mpos->val_len < drop_val) {
ret = -EIO;
goto out;
}
/* walk to new leaf if we exceed parent ref key */
if (scoutfs_key_compare(mpos->key, &kr.end) > 0)
if (scoutfs_key_compare(&mpos->key, &kr.end) > 0)
break;
/* see if there's an existing item */
item = leaf_item_hash_search(sb, bt, mpos->key);
is_del = !!(mpos->flags & SCOUTFS_ITEM_FLAG_DELETION);
/* see if we're merging delta items */
if (item && !is_del)
delta = scoutfs_forest_combine_deltas(mpos->key,
item_val(bt, item),
item_val_len(item),
mpos->val, mpos->val_len);
else
delta = 0;
if (delta < 0) {
ret = delta;
goto out;
} else if (delta == SCOUTFS_DELTA_COMBINED) {
scoutfs_inc_counter(sb, btree_merge_delta_combined);
} else if (delta == SCOUTFS_DELTA_COMBINED_NULL) {
scoutfs_inc_counter(sb, btree_merge_delta_null);
}
item = leaf_item_hash_search(sb, bt, &mpos->key);
is_del = merge_is_del(mpos->val, mpos->val_len);
trace_scoutfs_btree_merge_items(sb, mpos->root,
mpos->key, mpos->val_len,
&mpos->key, mpos->val_len,
item ? root : NULL,
item ? item_key(item) : NULL,
item ? item_val_len(item) : 0, is_del);
/* rewalk and split if ins/update needs room */
if (!is_del && !delta && !mid_free_item_room(bt, mpos->val_len)) {
if (!is_del && !mid_free_item_room(bt, mpos->val_len)) {
walk_flags |= BTW_INSERT;
walk_val_len = mpos->val_len;
break;
@@ -2358,39 +2241,22 @@ int scoutfs_btree_merge(struct super_block *sb,
/* insert missing non-deletion merge items */
if (!item && !is_del) {
scoutfs_avl_search(&bt->item_root,
cmp_key_item, mpos->key,
cmp_key_item, &mpos->key,
&cmp, &par, NULL, NULL);
create_item(bt, mpos->key, mpos->seq, mpos->flags,
mpos->val, mpos->val_len, par, cmp);
create_item(bt, &mpos->key,
mpos->val + drop_val,
mpos->val_len - drop_val, par, cmp);
scoutfs_inc_counter(sb, btree_merge_insert);
}
/* update existing items */
if (item && !is_del && !delta) {
item->seq = cpu_to_le64(mpos->seq);
item->flags = mpos->flags;
update_item_value(bt, item, mpos->val, mpos->val_len);
if (item && !is_del) {
update_item_value(bt, item,
mpos->val + drop_val,
mpos->val_len - drop_val);
scoutfs_inc_counter(sb, btree_merge_update);
}
/* update combined delta item seq */
if (delta == SCOUTFS_DELTA_COMBINED) {
item->seq = cpu_to_le64(mpos->seq);
}
/*
* combined delta items that aren't needed are
* immediately dropped. We don't back off if
* the deletion would fall under the low water
* mark because we've already modified the
* value, we don't want to retry after a join
* and apply the value a second time.
*/
if (delta == SCOUTFS_DELTA_COMBINED_NULL) {
delete_item(bt, item, NULL);
scoutfs_inc_counter(sb, btree_merge_delta_null);
}
/* delete if merge item was deletion */
if (item && is_del) {
/* rewalk and join if non-root falls under low water mark */
@@ -2407,12 +2273,12 @@ int scoutfs_btree_merge(struct super_block *sb,
walk_flags &= ~(BTW_INSERT | BTW_DELETE);
walk_val_len = 0;
/* finished with this key, skip any older items */
next = *mpos->key;
scoutfs_key_inc(&next);
ret = reset_mpos(sb, &pos_root, mpos, &next, end);
/* finished with this merge item */
scoutfs_key_inc(&mpos->key);
ret = reset_mpos(sb, &pos_root, mpos, end, merge_cmp);
if (ret < 0)
goto out;
mpos = NULL;
}
}
@@ -2420,7 +2286,7 @@ int scoutfs_btree_merge(struct super_block *sb,
out:
scoutfs_block_put(sb, bl);
rbtree_postorder_for_each_entry_safe(mpos, tmp, &pos_root, node) {
free_mpos(sb, mpos);
kfree(mpos);
}
return ret;

View File

@@ -20,15 +20,13 @@ struct scoutfs_btree_item_ref {
/* caller gives an item to the callback */
typedef int (*scoutfs_btree_item_cb)(struct super_block *sb,
struct scoutfs_key *key, u64 seq, u8 flags,
struct scoutfs_key *key,
void *val, int val_len, void *arg);
/* simple singly-linked list of items */
struct scoutfs_btree_item_list {
struct scoutfs_btree_item_list *next;
struct scoutfs_key key;
u64 seq;
u8 flags;
int val_len;
u8 val[0];
};
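As a quick illustration of the reduced item callback signature above (seq and flags are no longer passed through), a hypothetical counting callback for `scoutfs_btree_read_items()` could look like this:

```c
/* Hypothetical example: count the items visited in a key range. */
static int count_items_cb(struct super_block *sb, struct scoutfs_key *key,
			  void *val, int val_len, void *arg)
{
	unsigned long *count = arg;

	(*count)++;
	return 0;	/* a negative return stops the item walk */
}
```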
@@ -110,7 +108,14 @@ struct scoutfs_btree_root_head {
struct list_head head;
struct scoutfs_btree_root root;
};
/*
* Compare the values of merge input items whose keys are equal to
* determine their merge order.
*/
typedef int (*scoutfs_btree_merge_cmp_t)(void *a_val, int a_val_len,
void *b_val, int b_val_len);
/* whether merging item should be removed from destination */
typedef bool (*scoutfs_btree_merge_is_del_t)(void *val, int val_len);
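For concreteness, here is a minimal sketch of a merge_cmp/is_del pair. It assumes, purely for illustration, that every input value begins with a small header carrying a version and a deletion flag; real callers of `scoutfs_btree_merge()` define their own value layout:

```c
/* Hypothetical value header, for illustration only. */
struct example_val_hdr {
	__le64 vers;
	__u8 flags;
} __packed;

#define EXAMPLE_VAL_DELETION (1 << 0)

/* order input items with equal keys by value version, oldest first */
static int example_merge_cmp(void *a_val, int a_val_len,
			     void *b_val, int b_val_len)
{
	struct example_val_hdr *a = a_val;
	struct example_val_hdr *b = b_val;

	return scoutfs_cmp(le64_to_cpu(a->vers), le64_to_cpu(b->vers));
}

/* an item whose newest value is flagged deleted removes the key */
static bool example_merge_is_del(void *val, int val_len)
{
	struct example_val_hdr *hdr = val;

	return !!(hdr->flags & EXAMPLE_VAL_DELETION);
}
```

With callbacks like these, the merge keeps only the newest value for each key as it sorts the input positions, and items whose surviving value is flagged as a deletion remove the matching key from the destination root.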
int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
@@ -119,7 +124,9 @@ int scoutfs_btree_merge(struct super_block *sb,
struct scoutfs_key *next_ret,
struct scoutfs_btree_root *root,
struct list_head *input_list,
bool subtree, int dirty_limit, int alloc_low);
scoutfs_btree_merge_cmp_t merge_cmp,
scoutfs_btree_merge_is_del_t merge_is_del, bool subtree,
int drop_val, int dirty_limit, int alloc_low);
int scoutfs_btree_free_blocks(struct super_block *sb,
struct scoutfs_alloc *alloc,

View File

@@ -32,7 +32,6 @@
#include "endian_swap.h"
#include "quorum.h"
#include "omap.h"
#include "trans.h"
/*
* The client is responsible for maintaining a connection to the server.
@@ -117,6 +116,21 @@ int scoutfs_client_get_roots(struct super_block *sb,
NULL, 0, roots, sizeof(*roots));
}
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
__le64 leseq;
int ret;
ret = scoutfs_net_sync_request(sb, client->conn,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
NULL, 0, &leseq, sizeof(leseq));
if (ret == 0)
*seq = le64_to_cpu(leseq);
return ret;
}
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
@@ -283,40 +297,6 @@ int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_op
volopt, sizeof(*volopt), NULL, 0);
}
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_RESIZE_DEVICES,
nrd, sizeof(*nrd), NULL, 0);
}
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst)
{
struct client_info *client = SCOUTFS_SB(sb)->client_info;
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_STATFS,
NULL, 0, nst, sizeof(*nst));
}
/*
* The server is asking that we trigger a commit of the current log
* trees so that they can ensure an item seq discontinuity between
* finalized log btrees and the next set of open log btrees. If we're
* shutting down then we're already going to perform a final commit.
*/
static int sync_log_trees(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
if (arg_len != 0)
return -EINVAL;
if (!scoutfs_unmounting(sb))
scoutfs_trans_sync(sb, 0);
return scoutfs_net_response(sb, conn, cmd, id, 0, NULL, 0);
}
/* The client is receiving an invalidation request from the server */
static int client_lock(struct super_block *sb,
struct scoutfs_net_connection *conn, u8 cmd, u64 id,
@@ -354,8 +334,7 @@ static int client_greeting(struct super_block *sb,
void *resp, unsigned int resp_len, int error,
void *data)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct client_info *client = sbi->client_info;
struct client_info *client = SCOUTFS_SB(sb)->client_info;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_greeting *gr = resp;
bool new_server;
@@ -372,15 +351,17 @@ static int client_greeting(struct super_block *sb,
}
if (gr->fsid != super->hdr.fsid) {
scoutfs_warn(sb, "server greeting response fsid 0x%llx did not match client fsid 0x%llx",
le64_to_cpu(gr->fsid), le64_to_cpu(super->hdr.fsid));
scoutfs_warn(sb, "server sent fsid 0x%llx, client has 0x%llx",
le64_to_cpu(gr->fsid),
le64_to_cpu(super->hdr.fsid));
ret = -EINVAL;
goto out;
}
if (le64_to_cpu(gr->fmt_vers) != sbi->fmt_vers) {
scoutfs_warn(sb, "server greeting response format version %llu did not match client format version %llu",
le64_to_cpu(gr->fmt_vers), sbi->fmt_vers);
if (gr->version != super->version) {
scoutfs_warn(sb, "server sent format 0x%llx, client has 0x%llx",
le64_to_cpu(gr->version),
le64_to_cpu(super->version));
ret = -EINVAL;
goto out;
}
@@ -506,7 +487,7 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
/* send a greeting to verify endpoints of each connection */
greet.fsid = super->hdr.fsid;
greet.fmt_vers = cpu_to_le64(sbi->fmt_vers);
greet.version = super->version;
greet.server_term = cpu_to_le64(client->server_term);
greet.rid = cpu_to_le64(sbi->rid);
greet.flags = 0;
@@ -527,7 +508,6 @@ out:
}
static scoutfs_net_request_t client_req_funcs[] = {
[SCOUTFS_NET_CMD_SYNC_LOG_TREES] = sync_log_trees,
[SCOUTFS_NET_CMD_LOCK] = client_lock,
[SCOUTFS_NET_CMD_LOCK_RECOVER] = client_lock_recover,
[SCOUTFS_NET_CMD_OPEN_INO_MAP] = client_open_ino_map,
@@ -643,8 +623,10 @@ void scoutfs_client_destroy(struct super_block *sb)
client_farewell_response,
NULL, NULL);
if (ret == 0) {
wait_for_completion(&client->farewell_comp);
ret = client->farewell_error;
ret = wait_for_completion_interruptible(
&client->farewell_comp);
if (ret == 0)
ret = client->farewell_error;
}
if (ret) {
scoutfs_inc_counter(sb, client_farewell_error);

View File

@@ -10,6 +10,7 @@ int scoutfs_client_commit_log_trees(struct super_block *sb,
int scoutfs_client_get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots);
u64 *scoutfs_client_bulk_alloc(struct super_block *sb);
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq);
int scoutfs_client_lock_request(struct super_block *sb,
struct scoutfs_net_lock *nl);
@@ -32,8 +33,6 @@ int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
int scoutfs_client_get_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_set_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd);
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst);
int scoutfs_client_setup(struct super_block *sb);
void scoutfs_client_destroy(struct super_block *sb);

View File

@@ -47,8 +47,6 @@
EXPAND_COUNTER(btree_merge) \
EXPAND_COUNTER(btree_merge_alloc_low) \
EXPAND_COUNTER(btree_merge_delete) \
EXPAND_COUNTER(btree_merge_delta_combined) \
EXPAND_COUNTER(btree_merge_delta_null) \
EXPAND_COUNTER(btree_merge_dirty_limit) \
EXPAND_COUNTER(btree_merge_drop_old) \
EXPAND_COUNTER(btree_merge_insert) \
@@ -90,11 +88,10 @@
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
EXPAND_COUNTER(inode_evict_intr) \
EXPAND_COUNTER(item_clear_dirty) \
EXPAND_COUNTER(item_create) \
EXPAND_COUNTER(item_delete) \
EXPAND_COUNTER(item_delta) \
EXPAND_COUNTER(item_delta_written) \
EXPAND_COUNTER(item_dirty) \
EXPAND_COUNTER(item_invalidate) \
EXPAND_COUNTER(item_invalidate_page) \
@@ -124,8 +121,12 @@
EXPAND_COUNTER(item_write_dirty) \
EXPAND_COUNTER(lock_alloc) \
EXPAND_COUNTER(lock_free) \
EXPAND_COUNTER(lock_grace_extended) \
EXPAND_COUNTER(lock_grace_set) \
EXPAND_COUNTER(lock_grace_wait) \
EXPAND_COUNTER(lock_grant_request) \
EXPAND_COUNTER(lock_grant_response) \
EXPAND_COUNTER(lock_grant_work) \
EXPAND_COUNTER(lock_invalidate_coverage) \
EXPAND_COUNTER(lock_invalidate_inode) \
EXPAND_COUNTER(lock_invalidate_request) \
@@ -178,7 +179,6 @@
EXPAND_COUNTER(srch_add_entry) \
EXPAND_COUNTER(srch_compact_dirty_block) \
EXPAND_COUNTER(srch_compact_entry) \
EXPAND_COUNTER(srch_compact_error) \
EXPAND_COUNTER(srch_compact_flush) \
EXPAND_COUNTER(srch_compact_log_page) \
EXPAND_COUNTER(srch_compact_removed_entry) \
@@ -193,11 +193,6 @@
EXPAND_COUNTER(srch_search_xattrs) \
EXPAND_COUNTER(srch_read_stale) \
EXPAND_COUNTER(statfs) \
EXPAND_COUNTER(totl_read_copied) \
EXPAND_COUNTER(totl_read_finalized) \
EXPAND_COUNTER(totl_read_fs) \
EXPAND_COUNTER(totl_read_item) \
EXPAND_COUNTER(totl_read_logged) \
EXPAND_COUNTER(trans_commit_data_alloc_low) \
EXPAND_COUNTER(trans_commit_dirty_meta_full) \
EXPAND_COUNTER(trans_commit_fsync) \

View File

@@ -207,7 +207,6 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
u64 offset;
s64 ret;
u8 flags;
int err;
int i;
flags = offline ? SEF_OFFLINE : 0;
@@ -247,18 +246,6 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
tr.len = min(ext.len - offset, last - iblock + 1);
tr.flags = ext.flags;
trace_scoutfs_data_extent_truncated(sb, ino, &tr);
ret = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, 0, flags);
if (ret < 0) {
if (WARN_ON_ONCE(ret == -EINVAL)) {
scoutfs_err(sb, "unexpected truncate inconsistency: ino %llu iblock %llu last %llu, start %llu len %llu",
ino, iblock, last, tr.start, tr.len);
}
break;
}
if (tr.map) {
mutex_lock(&datinf->mutex);
ret = scoutfs_free_data(sb, datinf->alloc,
@@ -266,16 +253,16 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
&datinf->data_freed,
tr.map, tr.len);
mutex_unlock(&datinf->mutex);
if (ret < 0) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, tr.map, tr.flags);
if (err < 0)
scoutfs_err(sb, "truncate err %d restoring extent after error %lld: ino %llu start %llu len %llu",
err, ret, ino, tr.start, tr.len);
if (ret < 0)
break;
}
}
trace_scoutfs_data_extent_truncated(sb, ino, &tr);
ret = scoutfs_ext_set(sb, &data_ext_ops, &args,
tr.start, tr.len, 0, flags);
BUG_ON(ret); /* inconsistent, could prealloc items */
iblock += tr.len;
}
@@ -830,7 +817,6 @@ static int scoutfs_write_end(struct file *file, struct address_space *mapping,
scoutfs_inode_inc_data_version(inode);
}
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, wbd->lock, &wbd->ind_locks);
scoutfs_inode_queue_writeback(inode);
}
@@ -1032,11 +1018,8 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
end = (iblock + ret) << SCOUTFS_BLOCK_SM_SHIFT;
if (end > offset + len)
end = offset + len;
if (end > i_size_read(inode)) {
if (end > i_size_read(inode))
i_size_write(inode, end);
inode_inc_iversion(inode);
scoutfs_inode_inc_data_version(inode);
}
}
if (ret >= 0)
scoutfs_update_inode_item(inode, lock, &ind_locks);
@@ -1368,12 +1351,10 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
cur_time = CURRENT_TIME;
if (!is_stage) {
to->i_ctime = to->i_mtime = cur_time;
inode_inc_iversion(to);
scoutfs_inode_inc_data_version(to);
scoutfs_inode_set_data_seq(to);
}
from->i_ctime = from->i_mtime = cur_time;
inode_inc_iversion(from);
scoutfs_inode_inc_data_version(from);
scoutfs_inode_set_data_seq(from);

View File

@@ -38,6 +38,13 @@ struct scoutfs_data_wait {
.err = 0, \
}
struct scoutfs_traced_extent {
u64 iblock;
u64 count;
u64 blkno;
u8 flags;
};
extern const struct address_space_operations scoutfs_file_aops;
extern const struct file_operations scoutfs_file_fops;
struct scoutfs_alloc;

View File

@@ -31,7 +31,6 @@
#include "lock.h"
#include "hash.h"
#include "omap.h"
#include "forest.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -136,8 +135,8 @@ static int alloc_dentry_info(struct dentry *dentry)
{
struct dentry_info *di;
smp_rmb();
if (dentry->d_op == &scoutfs_dentry_ops)
/* XXX read mb? */
if (dentry->d_fsdata)
return 0;
di = kmem_cache_zalloc(dentry_info_cache, GFP_NOFS);
@@ -149,7 +148,6 @@ static int alloc_dentry_info(struct dentry *dentry)
spin_lock(&dentry->d_lock);
if (!dentry->d_fsdata) {
dentry->d_fsdata = di;
smp_wmb();
d_set_d_op(dentry, &scoutfs_dentry_ops);
}
spin_unlock(&dentry->d_lock);
@@ -255,7 +253,7 @@ static u64 dirent_name_hash(const char *name, unsigned int name_len)
((u64)dirent_name_fingerprint(name, name_len) << 32);
}
static bool dirent_names_equal(const char *a_name, unsigned int a_len,
static u64 dirent_names_equal(const char *a_name, unsigned int a_len,
const char *b_name, unsigned int b_len)
{
return a_len == b_len && memcmp(a_name, b_name, a_len) == 0;
@@ -277,7 +275,8 @@ static int lookup_dirent(struct super_block *sb, u64 dir_ino, const char *name,
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
return -ENOMEM;
ret = -ENOMEM;
goto out;
}
init_dirent_key(&key, SCOUTFS_DIRENT_TYPE, dir_ino, hash, 0);
@@ -317,52 +316,6 @@ out:
return ret;
}
/*
* Verify that the caller's dentry still precisely matches our dirent
* items.
*
* The caller has a dentry that the vfs revalidated before they acquired
* their locks. If the dentry is still covered by a lock we immediately
* return 0. If not, we check items and return -ENOENT if a positive
* dentry no longer matches the items or -EEXIST if a negative entry's
* name now has an item.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, struct dentry *dentry,
struct scoutfs_lock *lock)
{
struct dentry_info *di = dentry->d_fsdata;
struct scoutfs_dirent dent = {0,};
const char *name;
u64 dentry_ino;
int name_len;
u64 hash;
int ret;
if (scoutfs_lock_is_covered(sb, &di->lock_cov))
return 0;
dentry_ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
name = dentry->d_name.name;
name_len = dentry->d_name.len;
hash = dirent_name_hash(name, name_len);
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
if (ret < 0 && ret != -ENOENT)
return ret;
if (dentry_ino != le64_to_cpu(dent.ino) || di->hash != le64_to_cpu(dent.hash) ||
di->pos != le64_to_cpu(dent.pos)) {
if (dentry_ino)
ret = -ENOENT;
else
ret = -EEXIST;
} else {
ret = 0;
}
return ret;
}
static int scoutfs_d_revalidate(struct dentry *dentry, unsigned int flags)
{
struct super_block *sb = dentry->d_sb;
@@ -470,7 +423,7 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
{
struct super_block *sb = dir->i_sb;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent dent = {0,};
struct scoutfs_dirent dent;
struct inode *inode;
u64 ino = 0;
u64 hash;
@@ -498,11 +451,9 @@ static struct dentry *scoutfs_lookup(struct inode *dir, struct dentry *dentry,
ret = 0;
} else if (ret == 0) {
ino = le64_to_cpu(dent.ino);
}
if (ret == 0)
update_dentry_info(sb, dentry, le64_to_cpu(dent.hash),
le64_to_cpu(dent.pos), dir_lock);
}
scoutfs_unlock(sb, dir_lock, SCOUTFS_LOCK_READ);
out:
@@ -511,7 +462,7 @@ out:
else if (ino == 0)
inode = NULL;
else
inode = scoutfs_iget(sb, ino, 0);
inode = scoutfs_iget(sb, ino);
/*
* We can't splice dir aliases into the dcache. dir entries
@@ -539,10 +490,10 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
{
struct inode *inode = file_inode(file);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *dir_lock = NULL;
struct scoutfs_dirent *dent = NULL;
struct scoutfs_key last_key;
struct scoutfs_dirent *dent;
struct scoutfs_key key;
struct scoutfs_key last_key;
struct scoutfs_lock *dir_lock;
int name_len;
u64 pos;
int ret;
@@ -552,7 +503,8 @@ static int KC_DECLARE_READDIR(scoutfs_readdir, struct file *file,
dent = alloc_dirent(SCOUTFS_NAME_LEN);
if (!dent) {
return -ENOMEM;
ret = -ENOMEM;
goto out;
}
init_dirent_key(&last_key, SCOUTFS_READDIR_TYPE, scoutfs_ino(inode),
@@ -619,17 +571,18 @@ static int add_entry_items(struct super_block *sb, u64 dir_ino, u64 hash,
u64 ino, umode_t mode, struct scoutfs_lock *dir_lock,
struct scoutfs_lock *inode_lock)
{
struct scoutfs_dirent *dent = NULL;
struct scoutfs_key rdir_key;
struct scoutfs_key ent_key;
struct scoutfs_key lb_key;
bool del_rdir = false;
struct scoutfs_dirent *dent;
bool del_ent = false;
bool del_rdir = false;
int ret;
dent = alloc_dirent(name_len);
if (!dent) {
return -ENOMEM;
ret = -ENOMEM;
goto out;
}
/* initialize the dent */
@@ -816,10 +769,6 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
pos = SCOUTFS_I(dir)->next_readdir_pos++;
ret = add_entry_items(sb, scoutfs_ino(dir), hash, pos,
@@ -835,9 +784,6 @@ static int scoutfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode->i_mtime = inode->i_atime = inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_mtime;
inode_inc_iversion(dir);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
if (S_ISDIR(mode)) {
inc_nlink(inode);
@@ -912,10 +858,6 @@ static int scoutfs_link(struct dentry *old_dentry,
if (ret)
goto out_unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out_unlock;
dir_size = i_size_read(dir) + dentry->d_name.len;
if (inode->i_nlink == 0) {
@@ -963,8 +905,6 @@ retry:
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode->i_ctime = dir->i_mtime;
inc_nlink(inode);
inode_inc_iversion(dir);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -1016,14 +956,6 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
if (ret)
return ret;
ret = alloc_dentry_info(dentry);
if (ret)
goto unlock;
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto unlock;
if (S_ISDIR(inode->i_mode) && i_size_read(inode)) {
ret = -ENOTEMPTY;
goto unlock;
@@ -1061,13 +993,9 @@ retry:
goto out;
}
update_dentry_info(sb, dentry, 0, 0, dir_lock);
dir->i_ctime = ts;
dir->i_mtime = ts;
i_size_write(dir, i_size_read(dir) - dentry->d_name.len);
inode_inc_iversion(dir);
inode_inc_iversion(inode);
inode->i_ctime = ts;
drop_nlink(inode);
@@ -1283,10 +1211,6 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
return PTR_ERR(inode);
si = SCOUTFS_I(inode);
ret = verify_entry(sb, scoutfs_ino(dir), dentry, dir_lock);
if (ret < 0)
goto out;
ret = symlink_item_ops(sb, SYM_CREATE, scoutfs_ino(inode), inode_lock,
symname, name_len);
if (ret)
@@ -1305,13 +1229,10 @@ static int scoutfs_symlink(struct inode *dir, struct dentry *dentry,
i_size_write(dir, i_size_read(dir) + dentry->d_name.len);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
inode_inc_iversion(dir);
inode->i_ctime = dir->i_mtime;
si->crtime = inode->i_ctime;
i_size_write(inode, name_len);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);
@@ -1364,10 +1285,10 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
u64 dir_ino, u64 dir_pos,
struct list_head *list)
{
struct scoutfs_link_backref_entry *ent = NULL;
struct scoutfs_lock *lock = NULL;
struct scoutfs_link_backref_entry *ent;
struct scoutfs_key last_key;
struct scoutfs_key key;
struct scoutfs_lock *lock = NULL;
int len;
int ret;
@@ -1587,6 +1508,26 @@ static int verify_ancestors(struct super_block *sb, u64 p1, u64 p2,
return ret;
}
/*
* Make sure that a dirent from the dir to the inode exists at the name.
* The caller has the name locked in the dir.
*/
static int verify_entry(struct super_block *sb, u64 dir_ino, const char *name,
unsigned name_len, u64 hash, u64 ino,
struct scoutfs_lock *lock)
{
struct scoutfs_dirent dent;
int ret;
ret = lookup_dirent(sb, dir_ino, name, name_len, hash, &dent, lock);
if (ret == 0 && le64_to_cpu(dent.ino) != ino)
ret = -ENOENT;
else if (ret == -ENOENT && ino == 0)
ret = 0;
return ret;
}
/*
* The vfs performs checks on cached inodes and dirents before calling
* here. It doesn't hold any locks so all of those checks can be based
@@ -1681,10 +1622,13 @@ static int scoutfs_rename(struct inode *old_dir, struct dentry *old_dentry,
}
/* make sure that the entries assumed by the argument still exist */
ret = alloc_dentry_info(old_dentry) ?:
alloc_dentry_info(new_dentry) ?:
verify_entry(sb, scoutfs_ino(old_dir), old_dentry, old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry, new_dir_lock);
ret = verify_entry(sb, scoutfs_ino(old_dir), old_dentry->d_name.name,
old_dentry->d_name.len, old_hash,
scoutfs_ino(old_inode), old_dir_lock) ?:
verify_entry(sb, scoutfs_ino(new_dir), new_dentry->d_name.name,
new_dentry->d_name.len, new_hash,
new_inode ? scoutfs_ino(new_inode) : 0,
new_dir_lock);
if (ret)
goto out_unlock;
@@ -1794,13 +1738,6 @@ retry:
if (new_inode)
old_inode->i_ctime = now;
inode_inc_iversion(old_dir);
inode_inc_iversion(old_inode);
if (new_dir != old_dir)
inode_inc_iversion(new_dir);
if (new_inode)
inode_inc_iversion(new_inode);
scoutfs_update_inode_item(old_dir, old_dir_lock, &ind_locks);
scoutfs_update_inode_item(old_inode, old_inode_lock, &ind_locks);
if (new_dir != old_dir)
@@ -1910,8 +1847,6 @@ static int scoutfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mod
insert_inode_hash(inode);
ihold(inode); /* need to update inode modifications in d_tmpfile */
d_tmpfile(dentry, inode);
inode_inc_iversion(inode);
scoutfs_forest_inc_inode_count(sb);
scoutfs_update_inode_item(inode, inode_lock, &ind_locks);
scoutfs_update_inode_item(dir, dir_lock, &ind_locks);

View File

@@ -81,7 +81,7 @@ static struct dentry *scoutfs_fh_to_dentry(struct super_block *sb,
trace_scoutfs_fh_to_dentry(sb, fh_type, sfid);
if (scoutfs_valid_fileid(fh_type))
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino), 0);
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino));
return d_obtain_alias(inode);
}
@@ -100,7 +100,7 @@ static struct dentry *scoutfs_fh_to_parent(struct super_block *sb,
if (scoutfs_valid_fileid(fh_type) &&
fh_type == FILEID_SCOUTFS_WITH_PARENT)
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino), 0);
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino));
return d_obtain_alias(inode);
}
@@ -123,7 +123,7 @@ static struct dentry *scoutfs_get_parent(struct dentry *child)
scoutfs_dir_free_backref_path(sb, &list);
trace_scoutfs_get_parent(sb, inode, ino);
inode = scoutfs_iget(sb, ino, 0);
inode = scoutfs_iget(sb, ino);
return d_obtain_alias(inode);
}

View File

@@ -13,7 +13,6 @@
#include <linux/kernel.h>
#include <linux/fs.h>
#include "msg.h"
#include "ext.h"
#include "counters.h"
#include "scoutfs_trace.h"
@@ -192,9 +191,6 @@ int scoutfs_ext_insert(struct super_block *sb, struct scoutfs_ext_ops *ops,
/* inserting extent must not overlap */
if (found.len && ext_overlap(&ins, found.start, found.len)) {
if (ops->insert_overlap_warn)
scoutfs_err(sb, "inserting extent %llu.%llu overlaps existing %llu.%llu",
start, len, found.start, found.len);
ret = -EINVAL;
goto out;
}
@@ -246,8 +242,6 @@ int scoutfs_ext_remove(struct super_block *sb, struct scoutfs_ext_ops *ops,
/* removed extent must be entirely within found */
if (!scoutfs_ext_inside(start, len, &found)) {
scoutfs_err(sb, "error removing extent %llu.%llu, isn't inside existing %llu.%llu",
start, len, found.start, found.len);
ret = -EINVAL;
goto out;
}


@@ -15,8 +15,6 @@ struct scoutfs_ext_ops {
u64 start, u64 len, u64 map, u8 flags);
int (*remove)(struct super_block *sb, void *arg, u64 start, u64 len,
u64 map, u8 flags);
bool insert_overlap_warn;
};
bool scoutfs_ext_can_merge(struct scoutfs_extent *left,


@@ -376,7 +376,7 @@ int scoutfs_fence_wait_fenced(struct super_block *sb, long timeout_jiffies)
bool error;
long ret;
ret = wait_event_timeout(fi->waitq, all_fenced(fi, &error), timeout_jiffies);
ret = wait_event_interruptible_timeout(fi->waitq, all_fenced(fi, &error), timeout_jiffies);
if (ret == 0)
ret = -ETIMEDOUT;
else if (ret > 0)


@@ -26,7 +26,6 @@
#include "hash.h"
#include "srch.h"
#include "counters.h"
#include "xattr.h"
#include "scoutfs_trace.h"
/*
@@ -66,8 +65,6 @@ struct forest_info {
struct workqueue_struct *workq;
struct delayed_work log_merge_dwork;
atomic64_t inode_count_delta;
};
#define DECLARE_FOREST_INFO(sb, name) \
@@ -224,17 +221,25 @@ out:
}
struct forest_read_items_data {
int fic;
bool is_fs;
scoutfs_forest_item_cb cb;
void *cb_arg;
};
static int forest_read_items(struct super_block *sb, struct scoutfs_key *key, u64 seq, u8 flags,
static int forest_read_items(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, void *arg)
{
struct forest_read_items_data *rid = arg;
struct scoutfs_log_item_value _liv = {0,};
struct scoutfs_log_item_value *liv = &_liv;
return rid->cb(sb, key, seq, flags, val, val_len, rid->fic, rid->cb_arg);
if (!rid->is_fs) {
liv = val;
val += sizeof(struct scoutfs_log_item_value);
val_len -= sizeof(struct scoutfs_log_item_value);
}
return rid->cb(sb, key, liv, val, val_len, rid->cb_arg);
}
/*
@@ -246,16 +251,19 @@ static int forest_read_items(struct super_block *sb, struct scoutfs_key *key, u6
* that covers all the blocks. Any keys outside of this range can't be
* trusted because we didn't visit all the trees to check their items.
*
* We return -ESTALE if we hit stale blocks to give the caller a chance
* to reset their state and retry with a newer version of the btrees.
* If we hit stale blocks and retry we can call the callback for
* duplicate items. This is harmless because the items are stable while
* the caller holds their cluster lock and the caller has to filter out
* item seqs anyway.
*/
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_lock *lock,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg)
{
DECLARE_STALE_TRACKING_SUPER_REFS(prev_refs, refs);
struct forest_read_items_data rid = {
.cb = cb,
.cb_arg = arg,
@@ -267,30 +275,31 @@ int scoutfs_forest_read_items(struct super_block *sb,
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_block *bl;
struct scoutfs_key ltk;
struct scoutfs_key orig_start = *start;
struct scoutfs_key orig_end = *end;
int ret;
int i;
scoutfs_inc_counter(sb, forest_read_items);
calc_bloom_nrs(&bloom, bloom_key);
calc_bloom_nrs(&bloom, &lock->start);
retry:
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
refs.fs_ref = roots.fs_root.ref;
refs.logs_ref = roots.logs_root.ref;
*start = orig_start;
*end = orig_end;
*start = lock->start;
*end = lock->end;
/* start with fs root items */
rid.fic |= FIC_FS_ROOT;
rid.is_fs = true;
ret = scoutfs_btree_read_items(sb, &roots.fs_root, key, start, end,
forest_read_items, &rid);
if (ret < 0)
goto out;
rid.fic &= ~FIC_FS_ROOT;
rid.is_fs = false;
scoutfs_key_init_log_trees(&ltk, 0, 0);
for (;; scoutfs_key_inc(&ltk)) {
@@ -335,40 +344,24 @@ int scoutfs_forest_read_items(struct super_block *sb,
scoutfs_inc_counter(sb, forest_bloom_pass);
if ((le64_to_cpu(lt.flags) & SCOUTFS_LOG_TREES_FINALIZED))
rid.fic |= FIC_FINALIZED;
ret = scoutfs_btree_read_items(sb, &lt.item_root, key, start,
end, forest_read_items, &rid);
if (ret < 0)
goto out;
rid.fic &= ~FIC_FINALIZED;
}
ret = 0;
out:
if (ret == -ESTALE) {
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0)
return -EIO;
prev_refs = refs;
goto retry;
}
return ret;
}
/*
* If the items are deltas then combine the src with the destination
* value and store the result in the destination.
*
* Returns:
* -errno: fatal error, no change
* 0: not delta items, no change
* +ve: SCOUTFS_DELTA_ values indicating when dst and/or src can be dropped
*/
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
void *src, int src_len)
{
if (key->sk_zone == SCOUTFS_XATTR_TOTL_ZONE)
return scoutfs_xattr_combine_totl(dst, dst_len, src, src_len);
return 0;
}
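/*
 * A minimal caller-side sketch of the contract above; ret, key, dst,
 * and src are hypothetical locals and drop_item() is an invented
 * placeholder for however the merge discards an item:
 */
ret = scoutfs_forest_combine_deltas(key, dst, dst_len, src, src_len);
if (ret < 0)
        return ret;             /* fatal error, nothing changed */
if (ret == SCOUTFS_DELTA_COMBINED)
        drop_item(src);         /* src was folded into dst */
if (ret == SCOUTFS_DELTA_COMBINED_NULL) {
        drop_item(src);         /* combined value carries no data */
        drop_item(dst);
}
/* ret == 0: not delta items, both are left unchanged */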
/*
* Make sure that the bloom bits for the lock's start key are all set in
* the current log's bloom block. We record the nr of our log tree in
@@ -525,62 +518,6 @@ int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id)
return ret;
}
void scoutfs_forest_inc_inode_count(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
atomic64_inc(&finf->inode_count_delta);
}
void scoutfs_forest_dec_inode_count(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
atomic64_dec(&finf->inode_count_delta);
}
/*
* Return the total inode count from the super block and all the
* log_btrees it references. This assumes it's working with a block
* reference hierarchy that should be fully consistent. If we see
* ESTALE we've hit persistent corruption.
*/
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count)
{
struct scoutfs_log_trees *lt;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
*inode_count = le64_to_cpu(super->inode_count);
scoutfs_key_init_log_trees(&key, 0, 0);
for (;;) {
ret = scoutfs_btree_next(sb, &super->logs_root, &key, &iref);
if (ret == 0) {
if (iref.val_len == sizeof(*lt)) {
key = *iref.key;
scoutfs_key_inc(&key);
lt = iref.val;
*inode_count += le64_to_cpu(lt->inode_count_delta);
} else {
ret = -EIO;
}
scoutfs_btree_put_iref(&iref);
}
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
else if (ret == -ESTALE)
ret = -EIO;
break;
}
}
return ret;
}
/*
* This is called from transactions as a new transaction opens and is
* serialized with all writers.
@@ -609,8 +546,6 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
WARN_ON_ONCE(finf->srch_bl); /* committing should have put the block */
finf->srch_bl = NULL;
atomic64_set(&finf->inode_count_delta, le64_to_cpu(lt->inode_count_delta));
trace_scoutfs_forest_init_our_log(sb, le64_to_cpu(lt->rid),
le64_to_cpu(lt->nr),
le64_to_cpu(lt->item_root.ref.blkno),
@@ -638,12 +573,30 @@ void scoutfs_forest_get_btrees(struct super_block *sb,
scoutfs_block_put(sb, finf->srch_bl);
finf->srch_bl = NULL;
lt->inode_count_delta = cpu_to_le64(atomic64_read(&finf->inode_count_delta));
trace_scoutfs_forest_prepare_commit(sb, &lt->item_root.ref,
&lt->bloom_ref);
}
/*
* Compare input items to merge by their log item value seq when their
* keys match.
*/
static int merge_cmp(void *a_val, int a_val_len, void *b_val, int b_val_len)
{
struct scoutfs_log_item_value *a = a_val;
struct scoutfs_log_item_value *b = b_val;
/* sort merge items by seq */
return scoutfs_cmp(le64_to_cpu(a->seq), le64_to_cpu(b->seq));
}
static bool merge_is_del(void *val, int val_len)
{
struct scoutfs_log_item_value *liv = val;
return !!(liv->flags & SCOUTFS_LOG_ITEM_FLAG_DELETION);
}
#define LOG_MERGE_DELAY_MS (5 * MSEC_PER_SEC)
/*
@@ -689,7 +642,7 @@ static void scoutfs_forest_log_merge_worker(struct work_struct *work)
scoutfs_alloc_init(&alloc, &req.meta_avail, &req.meta_freed);
scoutfs_block_writer_init(sb, &wri);
/* find finalized input log trees within the input seq */
/* find finalized input log trees up to last_seq */
for (scoutfs_key_init_log_trees(&key, 0, 0); ; scoutfs_key_inc(&key)) {
if (!rhead) {
@@ -705,9 +658,10 @@ static void scoutfs_forest_log_merge_worker(struct work_struct *work)
if (iref.val_len == sizeof(*lt)) {
key = *iref.key;
lt = iref.val;
if (lt->item_root.ref.blkno != 0 &&
(le64_to_cpu(lt->flags) & SCOUTFS_LOG_TREES_FINALIZED) &&
(le64_to_cpu(lt->finalize_seq) < le64_to_cpu(req.input_seq))) {
if ((le64_to_cpu(lt->flags) &
SCOUTFS_LOG_TREES_FINALIZED) &&
(le64_to_cpu(lt->max_item_seq) <=
le64_to_cpu(req.last_seq))) {
rhead->root = lt->item_root;
list_add_tail(&rhead->head, &inputs);
rhead = NULL;
@@ -733,8 +687,10 @@ static void scoutfs_forest_log_merge_worker(struct work_struct *work)
}
ret = scoutfs_btree_merge(sb, &alloc, &wri, &req.start, &req.end,
&next, &comp.root, &inputs,
&next, &comp.root, &inputs, merge_cmp,
merge_is_del,
!!(req.flags & cpu_to_le64(SCOUTFS_LOG_MERGE_REQUEST_SUBTREE)),
sizeof(struct scoutfs_log_item_value),
SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT, 10);
if (ret == -ERANGE) {
comp.remain = next;
@@ -791,6 +747,9 @@ int scoutfs_forest_setup(struct super_block *sb)
goto out;
}
queue_delayed_work(finf->workq, &finf->log_merge_dwork,
msecs_to_jiffies(LOG_MERGE_DELAY_MS));
ret = 0;
out:
if (ret)
@@ -799,14 +758,6 @@ out:
return 0;
}
void scoutfs_forest_start(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);
queue_delayed_work(finf->workq, &finf->log_merge_dwork,
msecs_to_jiffies(LOG_MERGE_DELAY_MS));
}
void scoutfs_forest_stop(struct super_block *sb)
{
DECLARE_FOREST_INFO(sb, finf);


@@ -8,18 +8,16 @@ struct scoutfs_block;
#include "btree.h"
/* caller gives an item to the callback */
enum {
FIC_FS_ROOT = (1 << 0),
FIC_FINALIZED = (1 << 1),
};
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb, struct scoutfs_key *key, u64 seq,
u8 flags, void *val, int val_len, int fic, void *arg);
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_log_item_value *liv,
void *val, int val_len, void *arg);
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *next);
int scoutfs_forest_read_items(struct super_block *sb,
struct scoutfs_lock *lock,
struct scoutfs_key *key,
struct scoutfs_key *bloom_key,
struct scoutfs_key *start,
struct scoutfs_key *end,
scoutfs_forest_item_cb cb, void *arg);
@@ -33,11 +31,6 @@ int scoutfs_forest_insert_list(struct super_block *sb,
struct scoutfs_btree_item_list *lst);
int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id);
void scoutfs_forest_inc_inode_count(struct super_block *sb);
void scoutfs_forest_dec_inode_count(struct super_block *sb);
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count);
void scoutfs_forest_init_btrees(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
@@ -45,14 +38,7 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
void scoutfs_forest_get_btrees(struct super_block *sb,
struct scoutfs_log_trees *lt);
/* > 0 error codes */
#define SCOUTFS_DELTA_COMBINED 1 /* src val was combined, drop src */
#define SCOUTFS_DELTA_COMBINED_NULL 2 /* combined val has no data, drop both */
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
void *src, int src_len);
int scoutfs_forest_setup(struct super_block *sb);
void scoutfs_forest_start(struct super_block *sb);
void scoutfs_forest_stop(struct super_block *sb);
void scoutfs_forest_destroy(struct super_block *sb);


@@ -1,15 +1,8 @@
#ifndef _SCOUTFS_FORMAT_H_
#define _SCOUTFS_FORMAT_H_
/*
* The format version defines the format of structures on devices,
* structures that are communicated over the wire, and the protocol
* behind the structures.
*/
#define SCOUTFS_FORMAT_VERSION_MIN 1
#define SCOUTFS_FORMAT_VERSION_MIN_STR __stringify(SCOUTFS_FORMAT_VERSION_MIN)
#define SCOUTFS_FORMAT_VERSION_MAX 1
#define SCOUTFS_FORMAT_VERSION_MAX_STR __stringify(SCOUTFS_FORMAT_VERSION_MAX)
#define SCOUTFS_INTEROP_VERSION 0ULL
#define SCOUTFS_INTEROP_VERSION_STR __stringify(0)
/* statfs(2) f_type */
#define SCOUTFS_SUPER_MAGIC 0x554f4353 /* "SCOU" */
@@ -175,11 +168,6 @@ struct scoutfs_key {
#define sko_rid _sk_first
#define sko_ino _sk_second
/* xattr totl */
#define skxt_a _sk_first
#define skxt_b _sk_second
#define skxt_c _sk_third
/* inode */
#define ski_ino _sk_first
@@ -207,6 +195,10 @@ struct scoutfs_key {
#define sklt_rid _sk_first
#define sklt_nr _sk_second
/* seqs */
#define skts_trans_seq _sk_first
#define skts_rid _sk_second
/* mounted clients */
#define skmc_rid _sk_first
@@ -252,15 +244,11 @@ struct scoutfs_btree_root {
struct scoutfs_btree_item {
struct scoutfs_avl_node node;
struct scoutfs_key key;
__le64 seq;
__le16 val_off;
__le16 val_len;
__u8 flags;
__u8 __pad[3];
__u8 __pad[4];
};
#define SCOUTFS_ITEM_FLAG_DELETION (1 << 0)
struct scoutfs_btree_block {
struct scoutfs_block_header hdr;
struct scoutfs_avl_root item_root;
@@ -457,12 +445,6 @@ struct scoutfs_srch_compact {
* XXX I imagine we should rename these now that they've evolved to track
* all the btrees that clients use during a transaction. It's not just
* about item logs, it's about clients making changes to trees.
*
* @get_trans_seq, @commit_trans_seq: This pair of sequence numbers
* determines if a transaction is currently open for the mount that owns
* the log_trees struct. get_trans_seq is advanced by the server as the
* transaction is opened. The server sets commit_trans_seq equal to
* get_ as the transaction is committed.
*/
struct scoutfs_log_trees {
struct scoutfs_alloc_list_head meta_avail;
@@ -474,11 +456,7 @@ struct scoutfs_log_trees {
struct scoutfs_srch_file srch_file;
__le64 data_alloc_zone_blocks;
__le64 data_alloc_zones[SCOUTFS_DATA_ALLOC_ZONE_LE64S];
__le64 inode_count_delta;
__le64 get_trans_seq;
__le64 commit_trans_seq;
__le64 max_item_seq;
__le64 finalize_seq;
__le64 rid;
__le64 nr;
__le64 flags;
@@ -486,8 +464,21 @@ struct scoutfs_log_trees {
#define SCOUTFS_LOG_TREES_FINALIZED (1ULL << 0)
/* FS items are limited by the max btree value length */
#define SCOUTFS_MAX_VAL_SIZE SCOUTFS_BTREE_MAX_VAL_LEN
struct scoutfs_log_item_value {
__le64 seq;
__u8 flags;
__u8 __pad[7];
__u8 data[];
};
/*
* FS items are limited by the max btree value length with the log item
* value header.
*/
#define SCOUTFS_MAX_VAL_SIZE \
(SCOUTFS_BTREE_MAX_VAL_LEN - sizeof(struct scoutfs_log_item_value))
#define SCOUTFS_LOG_ITEM_FLAG_DELETION (1 << 0)
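/*
 * A minimal sketch of the resulting layout: a logged item's btree value
 * is this header immediately followed by the fs item value, which is
 * why SCOUTFS_MAX_VAL_SIZE shrinks by the header size.  (buf, seq, val,
 * val_len, and deletion are hypothetical locals, not scoutfs symbols.)
 */
struct scoutfs_log_item_value liv = {
        .seq = cpu_to_le64(seq),
        .flags = deletion ? SCOUTFS_LOG_ITEM_FLAG_DELETION : 0,
};
memcpy(buf, &liv, sizeof(liv));
memcpy(buf + sizeof(liv), val, val_len); /* val_len <= SCOUTFS_MAX_VAL_SIZE */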
struct scoutfs_bloom_block {
struct scoutfs_block_header hdr;
@@ -517,6 +508,7 @@ struct scoutfs_log_merge_status {
struct scoutfs_key next_range_key;
__le64 nr_requests;
__le64 nr_complete;
__le64 last_seq;
__le64 seq;
};
@@ -533,7 +525,7 @@ struct scoutfs_log_merge_request {
struct scoutfs_btree_root root;
struct scoutfs_key start;
struct scoutfs_key end;
__le64 input_seq;
__le64 last_seq;
__le64 rid;
__le64 seq;
__le64 flags;
@@ -583,48 +575,49 @@ struct scoutfs_log_merge_freeing {
/*
* Keys are first sorted by major key zones.
*/
#define SCOUTFS_INODE_INDEX_ZONE 4
#define SCOUTFS_ORPHAN_ZONE 8
#define SCOUTFS_XATTR_TOTL_ZONE 12
#define SCOUTFS_FS_ZONE 16
#define SCOUTFS_LOCK_ZONE 20
#define SCOUTFS_INODE_INDEX_ZONE 1
#define SCOUTFS_ORPHAN_ZONE 2
#define SCOUTFS_FS_ZONE 3
#define SCOUTFS_LOCK_ZONE 4
/* Items only stored in server btrees */
#define SCOUTFS_LOG_TREES_ZONE 24
#define SCOUTFS_MOUNTED_CLIENT_ZONE 28
#define SCOUTFS_SRCH_ZONE 32
#define SCOUTFS_FREE_EXTENT_BLKNO_ZONE 36
#define SCOUTFS_FREE_EXTENT_ORDER_ZONE 40
#define SCOUTFS_LOG_TREES_ZONE 6
#define SCOUTFS_TRANS_SEQ_ZONE 7
#define SCOUTFS_MOUNTED_CLIENT_ZONE 8
#define SCOUTFS_SRCH_ZONE 9
#define SCOUTFS_FREE_EXTENT_BLKNO_ZONE 10
#define SCOUTFS_FREE_EXTENT_ORDER_ZONE 11
/* Items only stored in log merge server btrees */
#define SCOUTFS_LOG_MERGE_STATUS_ZONE 44
#define SCOUTFS_LOG_MERGE_RANGE_ZONE 48
#define SCOUTFS_LOG_MERGE_REQUEST_ZONE 52
#define SCOUTFS_LOG_MERGE_COMPLETE_ZONE 56
#define SCOUTFS_LOG_MERGE_FREEING_ZONE 60
#define SCOUTFS_LOG_MERGE_STATUS_ZONE 12
#define SCOUTFS_LOG_MERGE_RANGE_ZONE 13
#define SCOUTFS_LOG_MERGE_REQUEST_ZONE 14
#define SCOUTFS_LOG_MERGE_COMPLETE_ZONE 15
#define SCOUTFS_LOG_MERGE_FREEING_ZONE 16
/* inode index zone */
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 4
#define SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE 8
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 1
#define SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE 2
#define SCOUTFS_INODE_INDEX_NR 3 /* don't forget to update */
/* orphan zone, redundant type used for clarity */
#define SCOUTFS_ORPHAN_TYPE 4
#define SCOUTFS_ORPHAN_TYPE 1
/* fs zone */
#define SCOUTFS_INODE_TYPE 4
#define SCOUTFS_XATTR_TYPE 8
#define SCOUTFS_DIRENT_TYPE 12
#define SCOUTFS_READDIR_TYPE 16
#define SCOUTFS_LINK_BACKREF_TYPE 20
#define SCOUTFS_SYMLINK_TYPE 24
#define SCOUTFS_DATA_EXTENT_TYPE 28
#define SCOUTFS_INODE_TYPE 1
#define SCOUTFS_XATTR_TYPE 2
#define SCOUTFS_DIRENT_TYPE 3
#define SCOUTFS_READDIR_TYPE 4
#define SCOUTFS_LINK_BACKREF_TYPE 5
#define SCOUTFS_SYMLINK_TYPE 6
#define SCOUTFS_DATA_EXTENT_TYPE 7
/* lock zone, only ever found in lock ranges, never in persistent items */
#define SCOUTFS_RENAME_TYPE 4
#define SCOUTFS_RENAME_TYPE 1
/* srch zone, only in server btrees */
#define SCOUTFS_SRCH_LOG_TYPE 4
#define SCOUTFS_SRCH_BLOCKS_TYPE 8
#define SCOUTFS_SRCH_PENDING_TYPE 12
#define SCOUTFS_SRCH_BUSY_TYPE 16
#define SCOUTFS_SRCH_LOG_TYPE 1
#define SCOUTFS_SRCH_BLOCKS_TYPE 2
#define SCOUTFS_SRCH_PENDING_TYPE 3
#define SCOUTFS_SRCH_BUSY_TYPE 4
/* file data extents have start and len in key */
struct scoutfs_data_extent_val {
@@ -649,17 +642,6 @@ struct scoutfs_xattr {
__u8 name[];
};
/*
* .totl. xattrs are mapped to items. The dotted u64s in the xattr name
* map to the item key. The item value total is the sum of all the
* xattr values. The item value count records the number of xattrs
* contributing to the total and is used when combining logged items to
* determine if totals are being created or destroyed.
*/
struct scoutfs_xattr_totl_val {
__le64 total;
__le64 count;
};
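/*
 * For example, a sketch of the mapping described above: .totl.7.8.9
 * xattrs that each hold the value "100" all fold into a single item in
 * the totl zone.  (key, tval, total, and count are hypothetical locals.)
 */
key.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
key.skxt_a = cpu_to_le64(7);            /* u64s from the dotted name */
key.skxt_b = cpu_to_le64(8);
key.skxt_c = cpu_to_le64(9);
tval.total = cpu_to_le64(total + 100);  /* sum of the xattr values */
tval.count = cpu_to_le64(count + 1);    /* one more contributing xattr */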
/* XXX does this exist upstream somewhere? */
#define member_sizeof(TYPE, MEMBER) (sizeof(((TYPE *)0)->MEMBER))
@@ -743,9 +725,7 @@ enum {
struct scoutfs_quorum_block {
struct scoutfs_block_header hdr;
__le64 write_nr;
struct scoutfs_quorum_block_event {
__le64 write_nr;
__le64 rid;
__le64 term;
struct scoutfs_timespec ts;
@@ -793,14 +773,17 @@ struct scoutfs_volume_options {
struct scoutfs_super_block {
struct scoutfs_block_header hdr;
__le64 id;
__le64 fmt_vers;
__le64 version;
__le64 flags;
__u8 uuid[SCOUTFS_UUID_BYTES];
__le64 seq;
__le64 next_ino;
__le64 inode_count;
__le64 total_meta_blocks; /* both static and dynamic */
__le64 first_meta_blkno; /* first dynamically allocated */
__le64 last_meta_blkno;
__le64 total_data_blocks;
__le64 first_data_blkno;
__le64 last_data_blkno;
struct scoutfs_quorum_config qconf;
struct scoutfs_alloc_root meta_alloc[2];
struct scoutfs_alloc_root data_alloc;
@@ -809,6 +792,7 @@ struct scoutfs_super_block {
struct scoutfs_btree_root fs_root;
struct scoutfs_btree_root logs_root;
struct scoutfs_btree_root log_merge;
struct scoutfs_btree_root trans_seqs;
struct scoutfs_btree_root mounted_clients;
struct scoutfs_btree_root srch_root;
struct scoutfs_volume_options volopt;
@@ -835,6 +819,12 @@ struct scoutfs_super_block {
*
* @offline_blocks: The number of fixed 4k blocks that could be made
* online by staging.
*
* XXX
* - compat flags?
* - version?
* - generation?
* - be more careful with rdev?
*/
struct scoutfs_inode {
__le64 size;
@@ -845,7 +835,6 @@ struct scoutfs_inode {
__le64 offline_blocks;
__le64 next_readdir_pos;
__le64 next_xattr_id;
__le64 version;
__le32 nlink;
__le32 uid;
__le32 gid;
@@ -907,7 +896,6 @@ enum scoutfs_dentry_type {
#define SCOUTFS_XATTR_MAX_NAME_LEN 255
#define SCOUTFS_XATTR_MAX_VAL_LEN 65535
#define SCOUTFS_XATTR_MAX_PART_SIZE SCOUTFS_MAX_VAL_SIZE
#define SCOUTFS_XATTR_MAX_TOTL_U64 23 /* octal U64_MAX */
#define SCOUTFS_XATTR_NR_PARTS(name_len, val_len) \
DIV_ROUND_UP(sizeof(struct scoutfs_xattr) + name_len + val_len, \
@@ -938,7 +926,7 @@ enum scoutfs_dentry_type {
*/
struct scoutfs_net_greeting {
__le64 fsid;
__le64 fmt_vers;
__le64 version;
__le64 server_term;
__le64 rid;
__le64 flags;
@@ -969,6 +957,7 @@ struct scoutfs_net_greeting {
* response messages.
*/
struct scoutfs_net_header {
__le64 clock_sync_id;
__le64 seq;
__le64 recv_seq;
__le64 id;
@@ -988,8 +977,8 @@ enum scoutfs_net_cmd {
SCOUTFS_NET_CMD_ALLOC_INODES,
SCOUTFS_NET_CMD_GET_LOG_TREES,
SCOUTFS_NET_CMD_COMMIT_LOG_TREES,
SCOUTFS_NET_CMD_SYNC_LOG_TREES,
SCOUTFS_NET_CMD_GET_ROOTS,
SCOUTFS_NET_CMD_ADVANCE_SEQ,
SCOUTFS_NET_CMD_GET_LAST_SEQ,
SCOUTFS_NET_CMD_LOCK,
SCOUTFS_NET_CMD_LOCK_RECOVER,
@@ -1001,8 +990,6 @@ enum scoutfs_net_cmd {
SCOUTFS_NET_CMD_GET_VOLOPT,
SCOUTFS_NET_CMD_SET_VOLOPT,
SCOUTFS_NET_CMD_CLEAR_VOLOPT,
SCOUTFS_NET_CMD_RESIZE_DEVICES,
SCOUTFS_NET_CMD_STATFS,
SCOUTFS_NET_CMD_FAREWELL,
SCOUTFS_NET_CMD_UNKNOWN,
};
@@ -1045,20 +1032,6 @@ struct scoutfs_net_roots {
struct scoutfs_btree_root srch_root;
};
struct scoutfs_net_resize_devices {
__le64 new_total_meta_blocks;
__le64 new_total_data_blocks;
};
struct scoutfs_net_statfs {
__u8 uuid[SCOUTFS_UUID_BYTES];
__le64 free_meta_blocks;
__le64 total_meta_blocks;
__le64 free_data_blocks;
__le64 total_data_blocks;
__le64 inode_count;
};
struct scoutfs_net_lock {
struct scoutfs_key key;
__le64 write_seq;
@@ -1085,7 +1058,6 @@ enum scoutfs_lock_trace {
SLT_INVALIDATE,
SLT_REQUEST,
SLT_RESPONSE,
SLT_NR,
};
/*


@@ -35,7 +35,6 @@
#include "cmp.h"
#include "omap.h"
#include "forest.h"
#include "btree.h"
/*
* XXX
@@ -60,7 +59,7 @@ struct inode_sb_info {
bool stopped;
spinlock_t writeback_lock;
struct list_head writeback_list;
struct rb_root writeback_inodes;
struct inode_allocator dir_ino_alloc;
struct inode_allocator ino_alloc;
@@ -69,9 +68,6 @@ struct inode_sb_info {
/* serialize multiple inode ->evict trying to delete same ino's items */
spinlock_t deleting_items_lock;
struct list_head deleting_items_list;
struct work_struct iput_work;
struct llist_head iput_llist;
};
#define DECLARE_INODE_SB_INFO(sb, name) \
@@ -96,9 +92,9 @@ static void scoutfs_inode_ctor(void *obj)
atomic64_set(&si->data_waitq.changed, 0);
init_waitqueue_head(&si->data_waitq.waitq);
init_rwsem(&si->xattr_rwsem);
INIT_LIST_HEAD(&si->writeback_entry);
RB_CLEAR_NODE(&si->writeback_node);
scoutfs_lock_init_coverage(&si->ino_lock_cov);
atomic_set(&si->iput_count, 0);
atomic_set(&si->inv_iput_count, 0);
inode_init_once(&si->inode);
}
@@ -122,14 +118,47 @@ static void scoutfs_i_callback(struct rcu_head *head)
kmem_cache_free(scoutfs_inode_cachep, SCOUTFS_I(inode));
}
static void insert_writeback_inode(struct inode_sb_info *inf,
struct scoutfs_inode_info *ins)
{
struct rb_root *root = &inf->writeback_inodes;
struct rb_node **node = &root->rb_node;
struct rb_node *parent = NULL;
struct scoutfs_inode_info *si;
while (*node) {
parent = *node;
si = container_of(*node, struct scoutfs_inode_info,
writeback_node);
if (ins->ino < si->ino)
node = &(*node)->rb_left;
else if (ins->ino > si->ino)
node = &(*node)->rb_right;
else
BUG();
}
rb_link_node(&ins->writeback_node, parent, node);
rb_insert_color(&ins->writeback_node, root);
}
static void remove_writeback_inode(struct inode_sb_info *inf,
struct scoutfs_inode_info *si)
{
if (!RB_EMPTY_NODE(&si->writeback_node)) {
rb_erase(&si->writeback_node, &inf->writeback_inodes);
RB_CLEAR_NODE(&si->writeback_node);
}
}
void scoutfs_destroy_inode(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
spin_lock(&inf->writeback_lock);
if (!list_empty(&si->writeback_entry))
list_del_init(&si->writeback_entry);
remove_writeback_inode(inf, SCOUTFS_I(inode));
spin_unlock(&inf->writeback_lock);
scoutfs_lock_del_coverage(inode->i_sb, &si->ino_lock_cov);
@@ -186,37 +215,6 @@ static void set_inode_ops(struct inode *inode)
mapping_set_gfp_mask(inode->i_mapping, GFP_USER);
}
static unsigned int item_index_arr_ind(u8 type)
{
switch (type) {
case SCOUTFS_INODE_INDEX_META_SEQ_TYPE: return 0; break;
case SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE: return 1; break;
/* should never get here, we control callers, not untrusted data */
default: BUG(); break;
}
}
static void set_item_major(struct scoutfs_inode_info *si, u8 type, __le64 maj)
{
unsigned int ind = item_index_arr_ind(type);
si->item_majors[ind] = le64_to_cpu(maj);
}
static u64 get_item_major(struct scoutfs_inode_info *si, u8 type)
{
unsigned int ind = item_index_arr_ind(type);
return si->item_majors[ind];
}
static u64 get_item_minor(struct scoutfs_inode_info *si, u8 type)
{
unsigned int ind = item_index_arr_ind(type);
return si->item_minors[ind];
}
/*
* The caller has ensured that the fields in the incoming scoutfs inode
* reflect both the inode item and the inode index items. This happens
@@ -233,8 +231,10 @@ static void set_item_info(struct scoutfs_inode_info *si,
memset(si->item_minors, 0, sizeof(si->item_minors));
si->have_item = true;
set_item_major(si, SCOUTFS_INODE_INDEX_META_SEQ_TYPE, sinode->meta_seq);
set_item_major(si, SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE, sinode->data_seq);
si->item_majors[SCOUTFS_INODE_INDEX_META_SEQ_TYPE] =
le64_to_cpu(sinode->meta_seq);
si->item_majors[SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE] =
le64_to_cpu(sinode->data_seq);
}
static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
@@ -242,7 +242,6 @@ static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
i_size_write(inode, le64_to_cpu(cinode->size));
inode->i_version = le64_to_cpu(cinode->version);
set_nlink(inode, le32_to_cpu(cinode->nlink));
i_uid_write(inode, le32_to_cpu(cinode->uid));
i_gid_write(inode, le32_to_cpu(cinode->gid));
@@ -377,7 +376,6 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
if (truncate)
si->flags |= SCOUTFS_INO_FLAG_TRUNCATE;
scoutfs_inode_set_data_seq(inode);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
scoutfs_release_trans(sb);
@@ -513,7 +511,6 @@ retry:
goto out;
setattr_copy(inode, attr);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
scoutfs_release_trans(sb);
@@ -697,14 +694,14 @@ struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino)
return ilookup5(sb, ino, scoutfs_iget_test, &ino);
}
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf)
struct inode *scoutfs_iget(struct super_block *sb, u64 ino)
{
struct scoutfs_lock *lock = NULL;
struct scoutfs_inode_info *si;
struct inode *inode;
int ret;
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, lkf, ino, &lock);
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, ino, &lock);
if (ret)
return ERR_PTR(ret);
@@ -719,7 +716,6 @@ struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf)
/* XXX ensure refresh, instead clear in drop_inode? */
si = SCOUTFS_I(inode);
atomic64_set(&si->last_refreshed, 0);
inode->i_version = 0;
ret = scoutfs_inode_refresh(inode, lock, 0);
if (ret == 0)
@@ -747,7 +743,6 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
scoutfs_inode_get_onoff(inode, &online_blocks, &offline_blocks);
cinode->size = cpu_to_le64(i_size_read(inode));
cinode->version = cpu_to_le64(inode->i_version);
cinode->nlink = cpu_to_le32(inode->i_nlink);
cinode->uid = cpu_to_le32(i_uid_read(inode));
cinode->gid = cpu_to_le32(i_gid_read(inode));
@@ -824,14 +819,16 @@ static bool will_del_index(struct scoutfs_inode_info *si,
u8 type, u64 major, u32 minor)
{
return si && si->have_item &&
(get_item_major(si, type) != major || get_item_minor(si, type) != minor);
(si->item_majors[type] != major ||
si->item_minors[type] != minor);
}
static bool will_ins_index(struct scoutfs_inode_info *si,
u8 type, u64 major, u32 minor)
{
return !si || !si->have_item ||
(get_item_major(si, type) != major || get_item_minor(si, type) != minor);
(si->item_majors[type] != major ||
si->item_minors[type] != minor);
}
static bool inode_has_index(umode_t mode, u8 type)
@@ -939,14 +936,14 @@ static int update_index_items(struct super_block *sb,
if (ret || !will_del_index(si, type, major, minor))
return ret;
trace_scoutfs_delete_index_item(sb, type, get_item_major(si, type),
get_item_minor(si, type), ino);
trace_scoutfs_delete_index_item(sb, type, si->item_majors[type],
si->item_minors[type], ino);
scoutfs_inode_init_index_key(&del, type, get_item_major(si, type),
get_item_minor(si, type), ino);
scoutfs_inode_init_index_key(&del, type, si->item_majors[type],
si->item_minors[type], ino);
del_lock = find_index_lock(lock_list, type, get_item_major(si, type),
get_item_minor(si, type), ino);
del_lock = find_index_lock(lock_list, type, si->item_majors[type],
si->item_minors[type], ino);
ret = scoutfs_item_delete_force(sb, &del, del_lock);
if (ret) {
err = scoutfs_item_delete(sb, &ins, ins_lock);
@@ -1083,8 +1080,8 @@ static int prepare_index_items(struct scoutfs_inode_info *si,
}
if (will_del_index(si, type, major, minor)) {
ret = add_index_lock(list, ino, type, get_item_major(si, type),
get_item_minor(si, type));
ret = add_index_lock(list, ino, type, si->item_majors[type],
si->item_minors[type]);
if (ret)
return ret;
}
@@ -1103,7 +1100,7 @@ static u64 upd_data_seq(struct scoutfs_sb_info *sbi,
if (!si || !si->have_item || set_data_seq)
return sbi->trans_seq;
return get_item_major(si, SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE);
return si->item_majors[SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE];
}
/*
@@ -1327,6 +1324,22 @@ static int remove_index_items(struct super_block *sb, u64 ino,
return ret;
}
/*
* A quick atomic sample of the last inode number that's been allocated.
*/
u64 scoutfs_last_ino(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
u64 last;
spin_lock(&sbi->next_ino_lock);
last = le64_to_cpu(super->next_ino);
spin_unlock(&sbi->next_ino_lock);
return last;
}
/*
* Return an allocated and unused inode number. Returns -ENOSPC if
* we're out of inodes.
@@ -1606,8 +1619,6 @@ retry:
goto out;
ret = scoutfs_inode_orphan_delete(sb, ino, orph_lock);
if (ret == 0)
scoutfs_forest_dec_inode_count(sb);
out:
del_deleting_ino(inf, &del);
if (release)
@@ -1651,6 +1662,11 @@ void scoutfs_evict_inode(struct inode *inode)
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, orph_lock, SCOUTFS_LOCK_WRITE_ONLY);
}
if (ret == -ERESTARTSYS) {
/* can be in task with pending, could be found as orphan */
scoutfs_inc_counter(sb, inode_evict_intr);
ret = 0;
}
if (ret < 0) {
scoutfs_err(sb, "error %d while checking to delete inode nr %llu, it might linger.",
ret, ino);
@@ -1688,49 +1704,6 @@ int scoutfs_drop_inode(struct inode *inode)
generic_drop_inode(inode);
}
static void iput_worker(struct work_struct *work)
{
struct inode_sb_info *inf = container_of(work, struct inode_sb_info, iput_work);
struct scoutfs_inode_info *si;
struct scoutfs_inode_info *tmp;
struct llist_node *inodes;
bool more;
inodes = llist_del_all(&inf->iput_llist);
llist_for_each_entry_safe(si, tmp, inodes, iput_llnode) {
do {
more = atomic_dec_return(&si->iput_count) > 0;
iput(&si->inode);
} while (more);
}
}
/*
* Final iput can get into evict and perform final inode deletion which
* can delete a lot of items spanning multiple cluster locks and
* transactions. It should be understood as a heavy high level
* operation, more like file writing and less like dropping a refcount.
*
* Unfortunately we also have incentives to use igrab/iput from internal
* contexts that have no business doing that work, like lock
* invalidation or dirty inode writeback during transaction commit.
*
* In those cases we can kick iput off to background work context.
* Nothing stops multiple puts of an inode before the work runs so we
* can track multiple puts in flight.
*/
void scoutfs_inode_queue_iput(struct inode *inode)
{
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
if (atomic_inc_return(&si->iput_count) == 1)
llist_add(&si->iput_llnode, &inf->iput_llist);
smp_wmb(); /* count and list visible before work executes */
schedule_work(&inf->iput_work);
}
/*
* All mounts are performing this work concurrently. We introduce
* significant jitter between them to try and keep them from all
@@ -1759,20 +1732,15 @@ static void schedule_orphan_dwork(struct inode_sb_info *inf)
* the cached inodes pinning the inode fail to delete as they are
* evicted from the cache -- either through crashing or errors.
*
* This work runs in all mounts in the background looking for those
* orphaned inodes that weren't fully deleted.
* This work runs in all mounts in the background looking for orphaned
* inodes that should be deleted.
*
* First, we search for items in the current persistent fs root. We'll
* only find orphan items that made it to the fs root after being merged
* from a mount's log btree. This naturally avoids orphan items that
* exist while inodes have been unlinked but are still cached, including
* O_TMPFILE inodes that are actively used during normal operations.
* Scanning the read-only persistent fs root uses cached blocks and
* avoids the lock contention we'd cause if we tried to use the
* consistent item cache. The downside is that it adds a bit of
* latency. If an orphan was created in error it'll take until the
* mount's log btree is finalized and merged. A crash will have the log
* btree merged after it is fenced.
* We use the forest hint call to read the persistent forest trees
* looking for orphan items without creating lock contention. Orphan
* items exist for O_TMPFILE users and we don't want to force them to
* commit by trying to acquire a conflicting read lock on the orphan zone.
* There's no rush to reclaim deleted items; eventually they will be
* found in the persistent item btrees.
*
* Once we find candidate orphan items we can first check our local
* inode cache for inodes that are already on their way to eviction and
@@ -1780,6 +1748,10 @@ static void schedule_orphan_dwork(struct inode_sb_info *inf)
* the inode. Only if we don't have it cached, and no one else does, do
* we try and read it into our cache and evict it to trigger the final
* inode deletion process.
*
* Orphaned items that make it that far should be very rare. They can
* only exist if all the mounts that were using an inode after it had
* been unlinked (or created with o_tmpfile) didn't unmount cleanly.
*/
static void inode_orphan_scan_worker(struct work_struct *work)
{
@@ -1787,9 +1759,8 @@ static void inode_orphan_scan_worker(struct work_struct *work)
orphan_scan_dwork.work);
struct super_block *sb = inf->sb;
struct scoutfs_open_ino_map omap;
struct scoutfs_net_roots roots;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key last;
struct scoutfs_key next;
struct scoutfs_key key;
struct inode *inode;
u64 group_nr;
@@ -1802,10 +1773,6 @@ static void inode_orphan_scan_worker(struct work_struct *work)
init_orphan_key(&last, U64_MAX);
omap.args.group_nr = cpu_to_le64(U64_MAX);
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
for (ino = SCOUTFS_ROOT_INO + 1; ino != 0; ino++) {
if (inf->stopped) {
ret = 0;
@@ -1814,21 +1781,18 @@ static void inode_orphan_scan_worker(struct work_struct *work)
/* find the next orphan item */
init_orphan_key(&key, ino);
ret = scoutfs_btree_next(sb, &roots.fs_root, &key, &iref);
ret = scoutfs_forest_next_hint(sb, &key, &next);
if (ret < 0) {
if (ret == -ENOENT)
break;
goto out;
}
key = *iref.key;
scoutfs_btree_put_iref(&iref);
if (scoutfs_key_compare(&key, &last) > 0)
if (scoutfs_key_compare(&next, &last) > 0)
break;
scoutfs_inc_counter(sb, orphan_scan_item);
ino = le64_to_cpu(key.sko_ino);
ino = le64_to_cpu(next.sko_ino);
/* locally cached inodes will already be deleted */
inode = scoutfs_ilookup(sb, ino);
@@ -1855,7 +1819,7 @@ static void inode_orphan_scan_worker(struct work_struct *work)
}
/* try to cache and evict the unused inode to delete, can be racing */
inode = scoutfs_iget(sb, ino, 0);
inode = scoutfs_iget(sb, ino);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
if (ret == -ENOENT)
@@ -1884,33 +1848,30 @@ out:
* ourselves in knots trying to call through the high level vfs sync
* methods.
*
* File data block allocations tend to advance through free space so we
* add the inode to the end of the list to roughly encourage sequential
* IO.
*
* This is called by writers who hold the inode and transaction. The
* inode is removed from the list by evict->destroy if it's unlinked
* during the transaction or by committing the transaction. Pruning the
* icache won't try to evict the inode as long as it has dirty buffers.
* inode's presence in the rbtree is removed by destroy_inode, prevented
* by the inode hold, and by committing the transaction, which is
* prevented by holding the transaction. The inode can only go from
* empty to on the rbtree while we're here.
*/
void scoutfs_inode_queue_writeback(struct inode *inode)
{
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
if (list_empty(&si->writeback_entry)) {
if (RB_EMPTY_NODE(&si->writeback_node)) {
spin_lock(&inf->writeback_lock);
if (list_empty(&si->writeback_entry))
list_add_tail(&si->writeback_entry, &inf->writeback_list);
if (RB_EMPTY_NODE(&si->writeback_node))
insert_writeback_inode(inf, si);
spin_unlock(&inf->writeback_lock);
}
}
/*
* Walk our dirty inodes and either start dirty page writeback or wait
* for writeback to complete.
* Walk our dirty inodes in ino order and either start dirty page
* writeback or wait for writeback to complete.
*
* This is called by transaction committing so other writers are
* This is called by transaction commiting so other writers are
* excluded. We're still very careful to iterate over the tree while it
* and the inodes could be changing.
*
@@ -1923,19 +1884,29 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
{
DECLARE_INODE_SB_INFO(sb, inf);
struct scoutfs_inode_info *si;
struct scoutfs_inode_info *tmp;
struct rb_node *node;
struct inode *inode;
struct inode *defer_iput = NULL;
int ret;
spin_lock(&inf->writeback_lock);
list_for_each_entry_safe(si, tmp, &inf->writeback_list, writeback_entry) {
node = rb_first(&inf->writeback_inodes);
while (node) {
si = container_of(node, struct scoutfs_inode_info,
writeback_node);
node = rb_next(node);
inode = igrab(&si->inode);
if (!inode)
continue;
spin_unlock(&inf->writeback_lock);
if (defer_iput) {
iput(defer_iput);
defer_iput = NULL;
}
if (write)
ret = filemap_fdatawrite(inode->i_mapping);
else
@@ -1943,28 +1914,28 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
trace_scoutfs_inode_walk_writeback(sb, scoutfs_ino(inode),
write, ret);
if (ret) {
scoutfs_inode_queue_iput(inode);
iput(inode);
goto out;
}
spin_lock(&inf->writeback_lock);
/* restore tmp after reacquiring lock */
if (WARN_ON_ONCE(list_empty(&si->writeback_entry)))
tmp = list_first_entry(&inf->writeback_list, struct scoutfs_inode_info,
writeback_entry);
if (WARN_ON_ONCE(RB_EMPTY_NODE(&si->writeback_node)))
node = rb_first(&inf->writeback_inodes);
else
tmp = list_next_entry(si, writeback_entry);
node = rb_next(&si->writeback_node);
if (!write)
list_del_init(&si->writeback_entry);
remove_writeback_inode(inf, si);
scoutfs_inode_queue_iput(inode);
/* avoid iput->destroy lock deadlock */
defer_iput = inode;
}
spin_unlock(&inf->writeback_lock);
out:
if (defer_iput)
iput(defer_iput);
return ret;
}
@@ -1979,14 +1950,12 @@ int scoutfs_inode_setup(struct super_block *sb)
inf->sb = sb;
spin_lock_init(&inf->writeback_lock);
INIT_LIST_HEAD(&inf->writeback_list);
inf->writeback_inodes = RB_ROOT;
spin_lock_init(&inf->dir_ino_alloc.lock);
spin_lock_init(&inf->ino_alloc.lock);
INIT_DELAYED_WORK(&inf->orphan_scan_dwork, inode_orphan_scan_worker);
spin_lock_init(&inf->deleting_items_lock);
INIT_LIST_HEAD(&inf->deleting_items_list);
INIT_WORK(&inf->iput_work, iput_worker);
init_llist_head(&inf->iput_llist);
sbi->inode_sb_info = inf;
@@ -1998,18 +1967,15 @@ int scoutfs_inode_setup(struct super_block *sb)
* many other subsystems like networking and the server. We only kick
* it off once everything is ready.
*/
void scoutfs_inode_start(struct super_block *sb)
int scoutfs_inode_start(struct super_block *sb)
{
DECLARE_INODE_SB_INFO(sb, inf);
schedule_orphan_dwork(inf);
return 0;
}
/*
* Orphan scanning can instantiate inodes. We shut it down before
* calling into the vfs to tear down dentries and inodes during unmount.
*/
void scoutfs_inode_orphan_stop(struct super_block *sb)
void scoutfs_inode_stop(struct super_block *sb)
{
DECLARE_INODE_SB_INFO(sb, inf);
@@ -2019,14 +1985,6 @@ void scoutfs_inode_orphan_stop(struct super_block *sb)
}
}
void scoutfs_inode_flush_iput(struct super_block *sb)
{
DECLARE_INODE_SB_INFO(sb, inf);
if (inf)
flush_work(&inf->iput_work);
}
void scoutfs_inode_destroy(struct super_block *sb)
{
struct inode_sb_info *inf = SCOUTFS_SB(sb)->inode_sb_info;


@@ -9,8 +9,6 @@
struct scoutfs_lock;
#define SCOUTFS_INODE_NR_INDICES 2
struct scoutfs_inode_info {
/* read or initialized for each inode instance */
u64 ino;
@@ -40,8 +38,8 @@ struct scoutfs_inode_info {
*/
struct mutex item_mutex;
bool have_item;
u64 item_majors[SCOUTFS_INODE_NR_INDICES];
u32 item_minors[SCOUTFS_INODE_NR_INDICES];
u64 item_majors[SCOUTFS_INODE_INDEX_NR];
u32 item_minors[SCOUTFS_INODE_INDEX_NR];
/* updated at on each new lock acquisition */
atomic64_t last_refreshed;
@@ -52,14 +50,14 @@ struct scoutfs_inode_info {
struct scoutfs_per_task pt_data_lock;
struct scoutfs_data_waitq data_waitq;
struct rw_semaphore xattr_rwsem;
struct list_head writeback_entry;
struct rb_node writeback_node;
struct scoutfs_lock_coverage ino_lock_cov;
/* drop if i_count hits 0, allows drop while invalidate holds coverage */
bool drop_invalidated;
struct llist_node iput_llnode;
atomic_t iput_count;
struct llist_node inv_iput_llnode;
atomic_t inv_iput_count;
struct inode inode;
};
@@ -78,9 +76,8 @@ struct inode *scoutfs_alloc_inode(struct super_block *sb);
void scoutfs_destroy_inode(struct inode *inode);
int scoutfs_drop_inode(struct inode *inode);
void scoutfs_evict_inode(struct inode *inode);
void scoutfs_inode_queue_iput(struct inode *inode);
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf);
struct inode *scoutfs_iget(struct super_block *sb, u64 ino);
struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino);
void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
@@ -129,13 +126,14 @@ int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_
void scoutfs_inode_queue_writeback(struct inode *inode);
int scoutfs_inode_walk_writeback(struct super_block *sb, bool write);
u64 scoutfs_last_ino(struct super_block *sb);
void scoutfs_inode_exit(void);
int scoutfs_inode_init(void);
int scoutfs_inode_setup(struct super_block *sb);
void scoutfs_inode_start(struct super_block *sb);
void scoutfs_inode_orphan_stop(struct super_block *sb);
void scoutfs_inode_flush_iput(struct super_block *sb);
int scoutfs_inode_start(struct super_block *sb);
void scoutfs_inode_stop(struct super_block *sb);
void scoutfs_inode_destroy(struct super_block *sb);
#endif


@@ -21,7 +21,6 @@
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/aio.h>
#include <linux/list_sort.h>
#include "format.h"
#include "key.h"
@@ -40,7 +39,6 @@
#include "srch.h"
#include "alloc.h"
#include "server.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
@@ -546,6 +544,11 @@ static long scoutfs_ioc_stat_more(struct file *file, unsigned long arg)
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct scoutfs_ioctl_stat_more stm;
if (get_user(stm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
stm.valid_bytes = min_t(u64, stm.valid_bytes,
sizeof(struct scoutfs_ioctl_stat_more));
stm.meta_seq = scoutfs_inode_meta_seq(inode);
stm.data_seq = scoutfs_inode_data_seq(inode);
stm.data_version = scoutfs_inode_data_version(inode);
@@ -553,7 +556,7 @@ static long scoutfs_ioc_stat_more(struct file *file, unsigned long arg)
stm.crtime_sec = si->crtime.tv_sec;
stm.crtime_nsec = si->crtime.tv_nsec;
if (copy_to_user((void __user *)arg, &stm, sizeof(stm)))
if (copy_to_user((void __user *)arg, &stm, stm.valid_bytes))
return -EFAULT;
return 0;
@@ -870,18 +873,15 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super;
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_ioctl_statfs_more sfm;
int ret;
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super)
return -ENOMEM;
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
if (get_user(sfm.valid_bytes, (__u64 __user *)arg))
return -EFAULT;
sfm.valid_bytes = min_t(u64, sfm.valid_bytes,
sizeof(struct scoutfs_ioctl_statfs_more));
sfm.fsid = le64_to_cpu(super->hdr.fsid);
sfm.rid = sbi->rid;
sfm.total_meta_blocks = le64_to_cpu(super->total_meta_blocks);
@@ -890,15 +890,12 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
ret = scoutfs_client_get_last_seq(sb, &sfm.committed_seq);
if (ret)
goto out;
return ret;
if (copy_to_user((void __user *)arg, &sfm, sizeof(sfm)))
ret = -EFAULT;
else
ret = 0;
out:
kfree(super);
return ret;
if (copy_to_user((void __user *)arg, &sfm, sfm.valid_bytes))
return -EFAULT;
return 0;
}
struct copy_alloc_detail_args {
@@ -1002,324 +999,6 @@ out:
return ret;
}
static long scoutfs_ioc_resize_devices(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_resize_devices __user *urd = (void __user *)arg;
struct scoutfs_ioctl_resize_devices rd;
struct scoutfs_net_resize_devices nrd;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&rd, urd, sizeof(rd))) {
ret = -EFAULT;
goto out;
}
nrd.new_total_meta_blocks = cpu_to_le64(rd.new_total_meta_blocks);
nrd.new_total_data_blocks = cpu_to_le64(rd.new_total_data_blocks);
ret = scoutfs_client_resize_devices(sb, &nrd);
out:
return ret;
}
struct xattr_total_entry {
struct rb_node node;
struct scoutfs_ioctl_xattr_total xt;
u64 fs_seq;
u64 fs_total;
u64 fs_count;
u64 fin_seq;
u64 fin_total;
s64 fin_count;
u64 log_seq;
u64 log_total;
s64 log_count;
};
static int cmp_xt_entry_name(const struct xattr_total_entry *a,
const struct xattr_total_entry *b)
{
return scoutfs_cmp_u64s(a->xt.name[0], b->xt.name[0]) ?:
scoutfs_cmp_u64s(a->xt.name[1], b->xt.name[1]) ?:
scoutfs_cmp_u64s(a->xt.name[2], b->xt.name[2]);
}
/*
* Record the contribution of the three classes of logged items we can
* see: the item in the fs_root, items from finalized log btrees, and
* items from active log btrees. Once we have the full set the caller
* can decide which of the items contribute to the total it sends to the
* user.
*/
static int read_xattr_total_item(struct super_block *sb, struct scoutfs_key *key,
u64 seq, u8 flags, void *val, int val_len, int fic, void *arg)
{
struct scoutfs_xattr_totl_val *tval = val;
struct xattr_total_entry *ent;
struct xattr_total_entry rd;
struct rb_root *root = arg;
struct rb_node *parent;
struct rb_node **node;
int cmp;
rd.xt.name[0] = le64_to_cpu(key->skxt_a);
rd.xt.name[1] = le64_to_cpu(key->skxt_b);
rd.xt.name[2] = le64_to_cpu(key->skxt_c);
/* find entry matching name */
node = &root->rb_node;
parent = NULL;
cmp = -1;
while (*node) {
parent = *node;
ent = container_of(*node, struct xattr_total_entry, node);
/* sort merge items by key then newest to oldest */
cmp = cmp_xt_entry_name(&rd, ent);
if (cmp < 0)
node = &(*node)->rb_left;
else if (cmp > 0)
node = &(*node)->rb_right;
else
break;
}
/* allocate and insert new node if we need to */
if (cmp != 0) {
ent = kzalloc(sizeof(*ent), GFP_KERNEL);
if (!ent)
return -ENOMEM;
memcpy(&ent->xt.name, &rd.xt.name, sizeof(ent->xt.name));
rb_link_node(&ent->node, parent, node);
rb_insert_color(&ent->node, root);
}
if (fic & FIC_FS_ROOT) {
ent->fs_seq = seq;
ent->fs_total = le64_to_cpu(tval->total);
ent->fs_count = le64_to_cpu(tval->count);
} else if (fic & FIC_FINALIZED) {
ent->fin_seq = seq;
ent->fin_total += le64_to_cpu(tval->total);
ent->fin_count += le64_to_cpu(tval->count);
} else {
ent->log_seq = seq;
ent->log_total += le64_to_cpu(tval->total);
ent->log_count += le64_to_cpu(tval->count);
}
scoutfs_inc_counter(sb, totl_read_item);
return 0;
}
/* these are always _safe, node stores next */
#define for_each_xt_ent(ent, node, root) \
for (node = rb_first(root); \
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
node = rb_next(node), 1); )
#define for_each_xt_ent_reverse(ent, node, root) \
for (node = rb_last(root); \
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
node = rb_prev(node), 1); )
static void free_xt_ent(struct rb_root *root, struct xattr_total_entry *ent)
{
rb_erase(&ent->node, root);
kfree(ent);
}
static void free_all_xt_ents(struct rb_root *root)
{
struct xattr_total_entry *ent;
struct rb_node *node;
for_each_xt_ent(ent, node, root)
free_xt_ent(root, ent);
}
/*
* Starting from the caller's pos_name, copy the names, totals, and
* counts for the .totl. tagged xattrs in the system sorted by their
* name until the user's buffer is full. This only sees xattrs that
* have been committed. It doesn't use locking to force commits and
* block writers so it can be a little bit out of date with respect to
* dirty xattrs in memory across the system.
*
* Our reader has to be careful because the log btree merging code can
* write partial results to the fs_root. This means that a reader can
* see both cases where new finalized logs should be applied to the old
* fs items and where old finalized logs have already been applied to
* the partially merged fs items. Currently active logged items are
* always applied on top of all cases.
*
* These cases are differentiated with a combination of sequence numbers
* in items, the count of contributing xattrs, and a flag
* differentiating finalized and active logged items. This lets us
* recognize all cases, including when finalized logs were merged and
* deleted the fs item.
*
* We're allocating a tracking struct for each totl name we see while
* traversing the item btrees. The forest reader is providing the items
* it finds in leaf blocks that contain the search key. In the worst
* case all of these blocks are full and none of the items overlap. At
* most, figure on the order of a thousand names per mount. But in practice many
* of these factors fall away: leaf blocks aren't full, leaf items
* overlap, there aren't finalized log btrees, and not all mounts are
* actively changing totals. We're much more likely to only read a
* leaf block's worth of totals that have been long since merged into
* the fs_root.
*/
static long scoutfs_ioc_read_xattr_totals(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_read_xattr_totals __user *urxt = (void __user *)arg;
struct scoutfs_ioctl_read_xattr_totals rxt;
struct scoutfs_ioctl_xattr_total __user *uxt;
struct xattr_total_entry *ent;
struct scoutfs_key key;
struct scoutfs_key bloom_key;
struct scoutfs_key start;
struct scoutfs_key end;
struct rb_root root = RB_ROOT;
struct rb_node *node;
int count = 0;
int ret;
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
goto out;
}
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
goto out;
}
if (copy_from_user(&rxt, urxt, sizeof(rxt))) {
ret = -EFAULT;
goto out;
}
uxt = (void __user *)rxt.totals_ptr;
if ((rxt.totals_ptr & (sizeof(__u64) - 1)) ||
(rxt.totals_bytes < sizeof(struct scoutfs_ioctl_xattr_total))) {
ret = -EINVAL;
goto out;
}
scoutfs_key_set_zeros(&bloom_key);
bloom_key.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_xattr_init_totl_key(&start, rxt.pos_name);
while (rxt.totals_bytes >= sizeof(struct scoutfs_ioctl_xattr_total)) {
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
if (scoutfs_key_compare(&start, &end) > 0)
break;
key = start;
ret = scoutfs_forest_read_items(sb, &key, &bloom_key, &start, &end,
read_xattr_total_item, &root);
if (ret < 0) {
if (ret == -ESTALE) {
free_all_xt_ents(&root);
continue;
}
goto out;
}
if (RB_EMPTY_ROOT(&root))
break;
/* trim totals that fall outside of the consistent range */
for_each_xt_ent(ent, node, &root) {
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
if (scoutfs_key_compare(&key, &start) < 0) {
free_xt_ent(&root, ent);
} else {
break;
}
}
for_each_xt_ent_reverse(ent, node, &root) {
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
if (scoutfs_key_compare(&key, &end) > 0) {
free_xt_ent(&root, ent);
} else {
break;
}
}
/* copy resulting unique non-zero totals to userspace */
for_each_xt_ent(ent, node, &root) {
if (rxt.totals_bytes < sizeof(ent->xt))
break;
/* start with the fs item if we have it */
if (ent->fs_seq != 0) {
ent->xt.total = ent->fs_total;
ent->xt.count = ent->fs_count;
scoutfs_inc_counter(sb, totl_read_fs);
}
/* apply finalized logs if they're newer or creating */
if (((ent->fs_seq != 0) && (ent->fin_seq > ent->fs_seq)) ||
((ent->fs_seq == 0) && (ent->fin_count > 0))) {
ent->xt.total += ent->fin_total;
ent->xt.count += ent->fin_count;
scoutfs_inc_counter(sb, totl_read_finalized);
}
/* always apply active logs which must be newer than fs and finalized */
if (ent->log_seq > 0) {
ent->xt.total += ent->log_total;
ent->xt.count += ent->log_count;
scoutfs_inc_counter(sb, totl_read_logged);
}
if (ent->xt.total != 0 || ent->xt.count != 0) {
if (copy_to_user(uxt, &ent->xt, sizeof(ent->xt))) {
ret = -EFAULT;
goto out;
}
uxt++;
rxt.totals_bytes -= sizeof(ent->xt);
count++;
scoutfs_inc_counter(sb, totl_read_copied);
}
free_xt_ent(&root, ent);
}
/* continue after the last possible key read */
start = end;
scoutfs_key_inc(&start);
}
ret = 0;
out:
free_all_xt_ents(&root);
return ret ?: count;
}
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
@@ -1349,10 +1028,6 @@ long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
return scoutfs_ioc_alloc_detail(file, arg);
case SCOUTFS_IOC_MOVE_BLOCKS:
return scoutfs_ioc_move_blocks(file, arg);
case SCOUTFS_IOC_RESIZE_DEVICES:
return scoutfs_ioc_resize_devices(file, arg);
case SCOUTFS_IOC_READ_XATTR_TOTALS:
return scoutfs_ioc_read_xattr_totals(file, arg);
}
return -ENOTTY;


@@ -13,7 +13,8 @@
* This is enforced by pahole scripting in external build environments.
*/
#define SCOUTFS_IOCTL_MAGIC 0xE8 /* arbitrarily chosen hole in ioctl-number.rst */
/* XXX I have no idea how these are chosen. */
#define SCOUTFS_IOCTL_MAGIC 's'
/*
* Packed scoutfs keys rarely cross the ioctl boundary so we have a
@@ -87,7 +88,7 @@ enum scoutfs_ino_walk_seq_type {
* Adds entries to the user's buffer for each inode that is found in the
* given index between the first and last positions.
*/
#define SCOUTFS_IOC_WALK_INODES _IOW(SCOUTFS_IOCTL_MAGIC, 1, \
#define SCOUTFS_IOC_WALK_INODES _IOR(SCOUTFS_IOCTL_MAGIC, 1, \
struct scoutfs_ioctl_walk_inodes)
/*
@@ -166,7 +167,7 @@ struct scoutfs_ioctl_ino_path_result {
};
/* Get a single path from the root to the given inode number */
#define SCOUTFS_IOC_INO_PATH _IOW(SCOUTFS_IOCTL_MAGIC, 2, \
#define SCOUTFS_IOC_INO_PATH _IOR(SCOUTFS_IOCTL_MAGIC, 2, \
struct scoutfs_ioctl_ino_path)
/*
@@ -214,8 +215,18 @@ struct scoutfs_ioctl_stage {
/*
* Give the user inode fields that are not otherwise visible. statx()
* isn't always available and xattrs are relatively expensive.
*
* @valid_bytes stores the number of bytes that are valid in the
* structure. The caller sets this to the size of the struct that they
* understand. The kernel then fills and copies back the min of the
* size they and the user caller understand. The user can tell if a
* field is set if all of its bytes are within the valid_bytes that the
* kernel set on return.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_stat_more {
__u64 valid_bytes;
__u64 meta_seq;
__u64 data_seq;
__u64 data_version;
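/*
 * A sketch of that protocol from a hypothetical userspace caller,
 * assuming the SCOUTFS_IOC_STAT_MORE ioctl number defined elsewhere in
 * this header; fd setup and error handling are omitted:
 */
struct scoutfs_ioctl_stat_more stm = {
        .valid_bytes = sizeof(stm),     /* the size this caller understands */
};
if (ioctl(fd, SCOUTFS_IOC_STAT_MORE, &stm) == 0 &&
    stm.valid_bytes >= offsetof(struct scoutfs_ioctl_stat_more, data_version) +
                       sizeof(stm.data_version))
        printf("data_version %llu\n", (unsigned long long)stm.data_version);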
@@ -253,14 +264,13 @@ struct scoutfs_ioctl_data_waiting {
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U64_MAX << 0)
#define SCOUTFS_IOC_DATA_WAITING _IOW(SCOUTFS_IOCTL_MAGIC, 6, \
#define SCOUTFS_IOC_DATA_WAITING _IOR(SCOUTFS_IOCTL_MAGIC, 6, \
struct scoutfs_ioctl_data_waiting)
/*
* If i_size is set then data_version must be non-zero. If the offline
* flag is set then i_size must be set and a offline extent will be
* created from offset 0 to i_size. The time fields are always applied
* to the inode.
* created from offset 0 to i_size.
*/
struct scoutfs_ioctl_setattr_more {
__u64 data_version;
@@ -285,8 +295,8 @@ struct scoutfs_ioctl_listxattr_hidden {
__u32 hash_pos;
};
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOWR(SCOUTFS_IOCTL_MAGIC, 8, \
struct scoutfs_ioctl_listxattr_hidden)
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOR(SCOUTFS_IOCTL_MAGIC, 8, \
struct scoutfs_ioctl_listxattr_hidden)
/*
* Return the inode numbers of inodes which might contain the given
@@ -339,17 +349,27 @@ struct scoutfs_ioctl_search_xattrs {
/* set in output_flags if returned inodes reached last_ino */
#define SCOUTFS_SEARCH_XATTRS_OFLAG_END (1ULL << 0)
#define SCOUTFS_IOC_SEARCH_XATTRS _IOW(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_search_xattrs)
#define SCOUTFS_IOC_SEARCH_XATTRS _IOR(SCOUTFS_IOCTL_MAGIC, 9, \
struct scoutfs_ioctl_search_xattrs)
/*
* Give the user information about the filesystem.
*
* @valid_bytes stores the number of bytes that are valid in the
* structure. The caller sets this to the size of the struct that they
* understand. The kernel then fills and copies back the smaller of the
* sizes that it and the caller understand. The caller can tell that a
* field is set if all of its bytes fall within the valid_bytes that the
* kernel set on return.
*
* @committed_seq: All seqs up to and including this seq have been
* committed. Can be compared with meta_seq and data_seq from inodes in
* stat_more to discover if changes have been committed to disk.
*
* New fields are only added to the end of the struct.
*/
struct scoutfs_ioctl_statfs_more {
__u64 valid_bytes;
__u64 fsid;
__u64 rid;
__u64 committed_seq;
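
Combining the two structs, a hedged sketch of deciding whether an inode's last change is stable on disk; the `SCOUTFS_IOC_STAT_MORE`/`SCOUTFS_IOC_STATFS_MORE` command names are assumed, not shown in these hunks:

```c
#include <sys/ioctl.h>

/* returns 0 and sets *committed, or -1 on ioctl error */
static int inode_changes_committed(int fd, int *committed)
{
	struct scoutfs_ioctl_stat_more stm = { .valid_bytes = sizeof(stm) };
	struct scoutfs_ioctl_statfs_more sfm = { .valid_bytes = sizeof(sfm) };

	if (ioctl(fd, SCOUTFS_IOC_STAT_MORE, &stm) < 0 ||
	    ioctl(fd, SCOUTFS_IOC_STATFS_MORE, &sfm) < 0)
		return -1;

	/* every seq at or below committed_seq is stable on disk */
	*committed = stm.meta_seq <= sfm.committed_seq &&
		     stm.data_seq <= sfm.committed_seq;
	return 0;
}
```
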
@@ -376,7 +396,7 @@ struct scoutfs_ioctl_data_wait_err {
__s64 err;
};
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOW(SCOUTFS_IOCTL_MAGIC, 11, \
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOR(SCOUTFS_IOCTL_MAGIC, 11, \
struct scoutfs_ioctl_data_wait_err)
@@ -395,7 +415,7 @@ struct scoutfs_ioctl_alloc_detail_entry {
__u8 __pad[6];
};
#define SCOUTFS_IOC_ALLOC_DETAIL _IOW(SCOUTFS_IOCTL_MAGIC, 12, \
#define SCOUTFS_IOC_ALLOC_DETAIL _IOR(SCOUTFS_IOCTL_MAGIC, 12, \
struct scoutfs_ioctl_alloc_detail)
/*
@@ -458,66 +478,7 @@ struct scoutfs_ioctl_move_blocks {
__u64 flags;
};
#define SCOUTFS_IOC_MOVE_BLOCKS _IOW(SCOUTFS_IOCTL_MAGIC, 13, \
#define SCOUTFS_IOC_MOVE_BLOCKS _IOR(SCOUTFS_IOCTL_MAGIC, 13, \
struct scoutfs_ioctl_move_blocks)
struct scoutfs_ioctl_resize_devices {
__u64 new_total_meta_blocks;
__u64 new_total_data_blocks;
};
#define SCOUTFS_IOC_RESIZE_DEVICES \
_IOW(SCOUTFS_IOCTL_MAGIC, 14, struct scoutfs_ioctl_resize_devices)
#define SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR 3
/*
* Copy global totals of .totl. xattr value payloads to the user. This
* only sees xattrs which have been committed and this doesn't force
* commits of dirty data throughout the system. It can be out of sync by
* however many xattrs are dirty in the open transactions being built
* throughout the system.
*
* pos_name: The array name of the first total that can be returned.
* The name is derived from the key of the xattrs that contribute to the
* total. For xattrs with a .totl.1.2.3 key, the pos_name[] should be
* {1, 2, 3}.
*
* totals_ptr: An aligned pointer to a buffer that will be filled with
* an array of scoutfs_ioctl_xattr_total structs for each total copied.
*
* totals_bytes: The size of the buffer in bytes. There must be room
* for at least one struct element so that returning 0 can promise that
* there were no more totals to copy after the pos_name.
*
* The number of copied elements is returned and 0 is returned if there
* were no more totals to copy after the pos_name.
*
* In addition to the usual errnos (EIO, EINVAL, EPERM, EFAULT) this
* adds:
*
* EINVAL: The totals_ buffer was not aligned or was not large enough
* for a single struct entry.
*/
struct scoutfs_ioctl_read_xattr_totals {
__u64 pos_name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
__u64 totals_ptr;
__u64 totals_bytes;
};
/*
* An individual total that is given to userspace. The total is the
* sum of all the values in the xattr payloads matching the name. The
* count is the number of xattrs, not the number of files, contributing to
* the total.
*/
struct scoutfs_ioctl_xattr_total {
__u64 name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
__u64 total;
__u64 count;
};
#define SCOUTFS_IOC_READ_XATTR_TOTALS \
_IOW(SCOUTFS_IOCTL_MAGIC, 15, struct scoutfs_ioctl_read_xattr_totals)
#endif
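
A hedged sketch of walking every committed total with the rules above. The carry-style increment of `pos_name`, to avoid re-reading the last returned name, is an assumption about how callers are expected to iterate:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

#define NR SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR

static int dump_totals(int fd)
{
	/* an array of u64-only structs is naturally aligned for totals_ptr */
	struct scoutfs_ioctl_xattr_total tots[32];
	struct scoutfs_ioctl_read_xattr_totals rxt = {
		.totals_ptr = (uintptr_t)tots,
		.totals_bytes = sizeof(tots),
		/* pos_name starts at all zeros via the initializer */
	};
	int ret;
	int i;

	while ((ret = ioctl(fd, SCOUTFS_IOC_READ_XATTR_TOTALS, &rxt)) > 0) {
		for (i = 0; i < ret; i++)
			printf("%llu.%llu.%llu: total %llu count %llu\n",
			       (unsigned long long)tots[i].name[0],
			       (unsigned long long)tots[i].name[1],
			       (unsigned long long)tots[i].name[2],
			       (unsigned long long)tots[i].total,
			       (unsigned long long)tots[i].count);

		/* resume at the last name, incremented as one wide counter */
		memcpy(rxt.pos_name, tots[ret - 1].name, sizeof(rxt.pos_name));
		for (i = NR - 1; i >= 0; i--)
			if (++rxt.pos_name[i] != 0)
				break;
	}
	return ret;
}
```
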

View File

@@ -127,7 +127,7 @@ struct cached_page {
unsigned long lru_time;
struct list_head dirty_list;
struct list_head dirty_head;
u64 max_seq;
u64 max_liv_seq;
struct page *page;
unsigned int page_off;
unsigned int erased_bytes;
@@ -139,11 +139,10 @@ struct cached_item {
struct list_head dirty_head;
unsigned int dirty:1, /* needs to be written */
persistent:1, /* in btrees, needs deletion item */
deletion:1, /* negative del item for writing */
delta:1; /* item values are combined, freed after write */
deletion:1; /* negative del item for writing */
unsigned int val_len;
struct scoutfs_key key;
u64 seq;
struct scoutfs_log_item_value liv;
char val[0];
};
@@ -387,10 +386,12 @@ static void put_pg(struct super_block *sb, struct cached_page *pg)
}
}
static void update_pg_max_seq(struct cached_page *pg, struct cached_item *item)
static void update_pg_max_liv_seq(struct cached_page *pg, struct cached_item *item)
{
if (item->seq > pg->max_seq)
pg->max_seq = item->seq;
u64 liv_seq = le64_to_cpu(item->liv.seq);
if (liv_seq > pg->max_liv_seq)
pg->max_liv_seq = liv_seq;
}
/*
@@ -400,7 +401,8 @@ static void update_pg_max_seq(struct cached_page *pg, struct cached_item *item)
* page or checking the free space first.
*/
static struct cached_item *alloc_item(struct cached_page *pg,
struct scoutfs_key *key, u64 seq, bool deletion,
struct scoutfs_key *key,
struct scoutfs_log_item_value *liv,
void *val, int val_len)
{
struct cached_item *item;
@@ -415,16 +417,15 @@ static struct cached_item *alloc_item(struct cached_page *pg,
INIT_LIST_HEAD(&item->dirty_head);
item->dirty = 0;
item->persistent = 0;
item->deletion = !!deletion;
item->delta = 0;
item->deletion = !!(liv->flags & SCOUTFS_LOG_ITEM_FLAG_DELETION);
item->val_len = val_len;
item->key = *key;
item->seq = seq;
item->liv = *liv;
if (val_len)
memcpy(item->val, val, val_len);
update_pg_max_seq(pg, item);
update_pg_max_liv_seq(pg, item);
return item;
}
@@ -633,7 +634,7 @@ static void mark_item_dirty(struct super_block *sb,
item->dirty = 1;
}
update_pg_max_seq(pg, item);
update_pg_max_liv_seq(pg, item);
}
static void clear_item_dirty(struct super_block *sb,
@@ -710,7 +711,7 @@ static void move_page_items(struct super_block *sb,
if (stop && scoutfs_key_compare(&from->key, stop) >= 0)
break;
to = alloc_item(right, &from->key, from->seq, from->deletion, from->val,
to = alloc_item(right, &from->key, &from->liv, from->val,
from->val_len);
rbtree_insert(&to->node, par, pnode, &right->item_root);
par = &to->node;
@@ -722,7 +723,7 @@ static void move_page_items(struct super_block *sb,
}
to->persistent = from->persistent;
to->delta = from->delta;
to->deletion = from->deletion;
erase_item(left, from);
}
@@ -1355,11 +1356,11 @@ static void del_active_reader(struct item_cache_info *cinf, struct active_reader
* insert old versions of items into the tree here so that the trees
* don't have to compare seqs.
*/
static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 seq, u8 flags,
void *val, int val_len, int fic, void *arg)
static int read_page_item(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_log_item_value *liv, void *val,
int val_len, void *arg)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const bool deletion = !!(flags & SCOUTFS_ITEM_FLAG_DELETION);
struct rb_root *root = arg;
struct cached_page *right = NULL;
struct cached_page *left = NULL;
@@ -1373,7 +1374,7 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
pg = page_rbtree_walk(sb, root, key, key, NULL, NULL, &p_par, &p_pnode);
found = item_rbtree_walk(&pg->item_root, key, NULL, &par, &pnode);
if (found && (found->seq >= seq))
if (found && (le64_to_cpu(found->liv.seq) >= le64_to_cpu(liv->seq)))
return 0;
if (!page_has_room(pg, val_len)) {
@@ -1387,7 +1388,7 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
&pnode);
}
item = alloc_item(pg, key, seq, deletion, val, val_len);
item = alloc_item(pg, key, liv, val, val_len);
if (!item) {
/* simpler split of private pages, no locking/dirty/lru */
if (!left)
@@ -1410,7 +1411,7 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
put_pg(sb, pg);
pg = scoutfs_key_compare(key, &left->end) <= 0 ? left : right;
item = alloc_item(pg, key, seq, deletion, val, val_len);
item = alloc_item(pg, key, liv, val, val_len);
found = item_rbtree_walk(&pg->item_root, key, NULL, &par,
&pnode);
@@ -1444,11 +1445,6 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
* locks protect the stable items we read. Invalidation is careful not
* to drop pages that have items that we couldn't see because they were
* dirty when we started reading.
*
* The forest item reader is reading stable trees that could be
* overwritten. It can return -ESTALE which we return to the caller who
* will retry the operation and work with a new set of more recent
* btrees.
*/
static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
struct scoutfs_key *key, struct scoutfs_lock *lock)
@@ -1483,9 +1479,8 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
/* set active reader seq before reading persistent roots */
add_active_reader(sb, &active);
start = lock->start;
end = lock->end;
ret = scoutfs_forest_read_items(sb, key, &lock->start, &start, &end, read_page_item, &root);
ret = scoutfs_forest_read_items(sb, lock, key, &start, &end,
read_page_item, &root);
if (ret < 0)
goto out;
@@ -1620,7 +1615,7 @@ retry:
&lock->end);
else
ret = read_pages(sb, cinf, key, lock);
if (ret < 0 && ret != -ESTALE)
if (ret < 0)
goto out;
goto retry;
}
@@ -1824,11 +1819,11 @@ out:
* to the last stable seq and ensure that all the items in open
* transactions and granted locks will have greater seqs.
*/
static u64 item_seq(struct super_block *sb, struct scoutfs_lock *lock)
static __le64 item_seq(struct super_block *sb, struct scoutfs_lock *lock)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return max(sbi->trans_seq, lock->write_seq);
return cpu_to_le64(max(sbi->trans_seq, lock->write_seq));
}
/*
@@ -1863,7 +1858,7 @@ int scoutfs_item_dirty(struct super_block *sb, struct scoutfs_key *key,
if (!item || item->deletion) {
ret = -ENOENT;
} else {
item->seq = item_seq(sb, lock);
item->liv.seq = item_seq(sb, lock);
mark_item_dirty(sb, cinf, pg, NULL, item);
ret = 0;
}
@@ -1883,7 +1878,9 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
int mode, bool force)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
struct scoutfs_log_item_value liv = {
.seq = item_seq(sb, lock),
};
struct cached_item *found;
struct cached_item *item;
struct cached_page *pg;
@@ -1911,7 +1908,7 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
goto unlock;
}
item = alloc_item(pg, key, seq, false, val, val_len);
item = alloc_item(pg, key, &liv, val, val_len);
rbtree_insert(&item->node, par, pnode, &pg->item_root);
mark_item_dirty(sb, cinf, pg, NULL, item);
@@ -1956,7 +1953,9 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
struct scoutfs_log_item_value liv = {
.seq = item_seq(sb, lock),
};
struct cached_item *item;
struct cached_item *found;
struct cached_page *pg;
@@ -1991,10 +1990,10 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
pg->erased_bytes += item_val_bytes(found->val_len) -
item_val_bytes(val_len);
found->val_len = val_len;
found->seq = seq;
found->liv.seq = liv.seq;
mark_item_dirty(sb, cinf, pg, NULL, found);
} else {
item = alloc_item(pg, key, seq, false, val, val_len);
item = alloc_item(pg, key, &liv, val, val_len);
item->persistent = found->persistent;
rbtree_insert(&item->node, par, pnode, &pg->item_root);
mark_item_dirty(sb, cinf, pg, NULL, item);
@@ -2010,77 +2009,6 @@ out:
return ret;
}
/*
* Add a delta item. Delta items are an incremental change relative to
* the current persistent delta items. We never have to read the
* current items so the caller always writes with write only locks. If
* combining the current delta item and the caller's item results in a
* null we can just drop it; we don't have to emit a deletion item.
*/
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
struct cached_item *item;
struct cached_page *pg;
struct rb_node **pnode;
struct rb_node *par;
int ret;
scoutfs_inc_counter(sb, item_delta);
if ((ret = lock_safe(lock, key, SCOUTFS_LOCK_WRITE_ONLY)))
goto out;
ret = scoutfs_forest_set_bloom_bits(sb, lock);
if (ret < 0)
goto out;
ret = get_cached_page(sb, cinf, lock, key, true, true, val_len, &pg);
if (ret < 0)
goto out;
__acquire(pg->rwlock);
item = item_rbtree_walk(&pg->item_root, key, NULL, &par, &pnode);
if (item) {
if (!item->delta) {
ret = -EIO;
goto unlock;
}
ret = scoutfs_forest_combine_deltas(key, item->val, item->val_len, val, val_len);
if (ret <= 0) {
if (ret == 0)
ret = -EIO;
goto unlock;
}
if (ret == SCOUTFS_DELTA_COMBINED) {
item->seq = seq;
mark_item_dirty(sb, cinf, pg, NULL, item);
} else if (ret == SCOUTFS_DELTA_COMBINED_NULL) {
clear_item_dirty(sb, cinf, pg, item);
erase_item(pg, item);
} else {
ret = -EIO;
goto unlock;
}
ret = 0;
} else {
item = alloc_item(pg, key, seq, false, val, val_len);
rbtree_insert(&item->node, par, pnode, &pg->item_root);
mark_item_dirty(sb, cinf, pg, NULL, item);
item->delta = 1;
ret = 0;
}
unlock:
write_unlock(&pg->rwlock);
out:
return ret;
}
/*
* Delete an item from the cache. We can leave behind a dirty deletion
* item if there is a persistent item that needs to be overwritten.
@@ -2093,7 +2021,9 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock, int mode, bool force)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock);
struct scoutfs_log_item_value liv = {
.seq = item_seq(sb, lock),
};
struct cached_item *item;
struct cached_page *pg;
struct rb_node **pnode;
@@ -2121,7 +2051,7 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
}
if (!item) {
item = alloc_item(pg, key, seq, false, NULL, 0);
item = alloc_item(pg, key, &liv, NULL, 0);
rbtree_insert(&item->node, par, pnode, &pg->item_root);
}
@@ -2134,7 +2064,8 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
erase_item(pg, item);
} else {
/* must emit deletion to clobber old persistent item */
item->seq = seq;
item->liv.seq = liv.seq;
item->liv.flags |= SCOUTFS_LOG_ITEM_FLAG_DELETION;
item->deletion = 1;
pg->erased_bytes += item_val_bytes(item->val_len) -
item_val_bytes(0);
@@ -2221,10 +2152,16 @@ int scoutfs_item_write_dirty(struct super_block *sb)
LIST_HEAD(pages);
LIST_HEAD(pos);
u64 max_seq = 0;
int val_len;
int bytes;
int off;
int ret;
/* we're relying on struct layout to prepend item value headers */
BUILD_BUG_ON(offsetof(struct cached_item, val) !=
(offsetof(struct cached_item, liv) +
member_sizeof(struct cached_item, liv)));
if (atomic_read(&cinf->dirty_pages) == 0)
return 0;
@@ -2276,9 +2213,10 @@ int scoutfs_item_write_dirty(struct super_block *sb)
list_sort(NULL, &pg->dirty_list, cmp_item_key);
list_for_each_entry(item, &pg->dirty_list, dirty_head) {
val_len = sizeof(item->liv) + item->val_len;
bytes = offsetof(struct scoutfs_btree_item_list,
val[item->val_len]);
max_seq = max(max_seq, item->seq);
val[val_len]);
max_seq = max(max_seq, le64_to_cpu(item->liv.seq));
if (off + bytes > PAGE_SIZE) {
page = second;
@@ -2294,10 +2232,8 @@ int scoutfs_item_write_dirty(struct super_block *sb)
prev = &lst->next;
lst->key = item->key;
lst->seq = item->seq;
lst->flags = item->deletion ? SCOUTFS_ITEM_FLAG_DELETION : 0;
lst->val_len = item->val_len;
memcpy(lst->val, item->val, item->val_len);
lst->val_len = val_len;
memcpy(lst->val, &item->liv, val_len);
}
spin_lock(&cinf->dirty_lock);
@@ -2355,11 +2291,8 @@ retry:
dirty_head) {
clear_item_dirty(sb, cinf, pg, item);
if (item->delta)
scoutfs_inc_counter(sb, item_delta_written);
/* free deletion items */
if (item->deletion || item->delta)
if (item->deletion)
erase_item(pg, item);
else
item->persistent = 1;
@@ -2529,7 +2462,7 @@ static int item_lru_shrink(struct shrinker *shrink,
list_for_each_entry_safe(pg, tmp, &cinf->lru_list, lru_head) {
if (first_reader_seq <= pg->max_seq) {
if (first_reader_seq <= pg->max_liv_seq) {
scoutfs_inc_counter(sb, item_shrink_page_reader);
continue;
}
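
The dirty-item write path above leans on `liv` sitting immediately before `val[]` so that a single `memcpy()` emits the item-value header and the value bytes together; the `BUILD_BUG_ON` enforces that layout. A standalone sketch of the pattern with generic names, not the scoutfs structures:

```c
#include <stddef.h>
#include <string.h>

struct hdr {
	unsigned long long seq;
	unsigned char flags;
};

struct item {
	int val_len;
	struct hdr h;	/* header must be the last field before val[] */
	char val[];	/* value bytes follow the header in memory */
};

static void emit(char *dst, const struct item *it)
{
	/* the flexible array must start right where the header ends */
	_Static_assert(offsetof(struct item, val) ==
		       offsetof(struct item, h) + sizeof(struct hdr),
		       "header must immediately precede value");

	/* one copy moves header and value as a contiguous region */
	memcpy(dst, &it->h, sizeof(struct hdr) + it->val_len);
}
```
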

View File

@@ -18,8 +18,6 @@ int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_delete_force(struct super_block *sb,

View File

@@ -66,6 +66,8 @@
* relative to that lock state we resend.
*/
#define GRACE_PERIOD_KT ms_to_ktime(10)
/*
* allocated per-super, freed on unmount.
*/
@@ -80,11 +82,15 @@ struct lock_info {
struct list_head lru_list;
unsigned long long lru_nr;
struct workqueue_struct *workq;
struct work_struct inv_work;
struct work_struct grant_work;
struct list_head grant_list;
struct delayed_work inv_dwork;
struct list_head inv_list;
struct work_struct shrink_work;
struct list_head shrink_list;
atomic64_t next_refresh_gen;
struct work_struct inv_iput_work;
struct llist_head inv_iput_llist;
struct dentry *tseq_dentry;
struct scoutfs_tseq_tree tseq_tree;
@@ -120,6 +126,34 @@ static bool lock_modes_match(int granted, int requested)
requested == SCOUTFS_LOCK_READ);
}
/*
* Final iput can get into evict and perform final inode deletion which
* can delete a lot of items under locks and transactions. We really
* don't want to be doing all that in an iput during invalidation. When
* invalidation sees that iput might perform final deletion it puts them
* on a list and queues this work.
*
* Nothing stops multiple puts for multiple invalidations of an inode
* before the work runs so we can track multiple puts in flight.
*/
static void lock_inv_iput_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info, inv_iput_work);
struct scoutfs_inode_info *si;
struct scoutfs_inode_info *tmp;
struct llist_node *inodes;
bool more;
inodes = llist_del_all(&linfo->inv_iput_llist);
llist_for_each_entry_safe(si, tmp, inodes, inv_iput_llnode) {
do {
more = atomic_dec_return(&si->inv_iput_count) > 0;
iput(&si->inode);
} while (more);
}
}
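
A distilled sketch of the count-plus-llist pattern the deferral above uses, with generic names; `put_object()` is a hypothetical stand-in for the final put:

```c
#include <linux/atomic.h>
#include <linux/llist.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct deferred {
	atomic_t count;
	struct llist_node llnode;
};

/* producer: only the 0->1 transition publishes the node, later
 * callers just raise the count that the worker will drain */
static void defer_put(struct deferred *d, struct llist_head *list,
		      struct workqueue_struct *wq, struct work_struct *work)
{
	if (atomic_inc_return(&d->count) == 1)
		llist_add(&d->llnode, list);
	smp_wmb();	/* count and node visible before the work runs */
	queue_work(wq, work);
}

/* worker: one put per counted reference, as in the code above */
static void drain_deferred(struct llist_head *list)
{
	struct llist_node *nodes = llist_del_all(list);
	struct deferred *d;
	struct deferred *tmp;
	bool more;

	llist_for_each_entry_safe(d, tmp, nodes, llnode) {
		do {
			more = atomic_dec_return(&d->count) > 0;
			put_object(d);	/* hypothetical final put */
		} while (more);
	}
}
```
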
/*
* Invalidate cached data associated with an inode whose lock is going
* away.
@@ -160,8 +194,11 @@ static void invalidate_inode(struct super_block *sb, u64 ino)
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink > 0) {
iput(inode);
} else {
/* defer iput to work context so we don't evict inodes from invalidation */
scoutfs_inode_queue_iput(inode);
/* defer iput to work context so we don't evict inodes from invalidation */
if (atomic_inc_return(&si->inv_iput_count) == 1)
llist_add(&si->inv_iput_llnode, &linfo->inv_iput_llist);
smp_wmb(); /* count and list visible before work executes */
queue_work(linfo->workq, &linfo->inv_iput_work);
}
}
}
@@ -251,6 +288,7 @@ static void lock_free(struct lock_info *linfo, struct scoutfs_lock *lock)
BUG_ON(!RB_EMPTY_NODE(&lock->node));
BUG_ON(!RB_EMPTY_NODE(&lock->range_node));
BUG_ON(!list_empty(&lock->lru_head));
BUG_ON(!list_empty(&lock->grant_head));
BUG_ON(!list_empty(&lock->inv_head));
BUG_ON(!list_empty(&lock->shrink_head));
BUG_ON(!list_empty(&lock->cov_list));
@@ -278,8 +316,8 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
RB_CLEAR_NODE(&lock->node);
RB_CLEAR_NODE(&lock->range_node);
INIT_LIST_HEAD(&lock->lru_head);
INIT_LIST_HEAD(&lock->grant_head);
INIT_LIST_HEAD(&lock->inv_head);
INIT_LIST_HEAD(&lock->inv_list);
INIT_LIST_HEAD(&lock->shrink_head);
spin_lock_init(&lock->cov_list_lock);
INIT_LIST_HEAD(&lock->cov_list);
@@ -326,6 +364,23 @@ static bool lock_counts_match(int granted, unsigned int *counts)
return true;
}
/*
* Returns true if there are any mode counts that match with the desired
* mode. There can be other non-matching counts as well but we're only
* testing for the existence of any matching counts.
*/
static bool lock_count_match_exists(int desired, unsigned int *counts)
{
enum scoutfs_lock_mode mode;
for (mode = 0; mode < SCOUTFS_LOCK_NR_MODES; mode++) {
if (counts[mode] && lock_modes_match(desired, mode))
return true;
}
return false;
}
/*
* An idle lock has nothing going on. It can be present in the lru and
* can be freed by the final put when it has a null mode.
@@ -543,15 +598,45 @@ static void put_lock(struct lock_info *linfo,struct scoutfs_lock *lock)
}
/*
* The caller has made a change (set a lock mode) which can let one of the
* invalidating locks make forward progress.
* Locks have a grace period that extends after activity and prevents
* invalidation. It's intended to let nodes do reasonable batches of
* work as locks ping pong between nodes that are doing conflicting
* work.
*/
static void extend_grace(struct super_block *sb, struct scoutfs_lock *lock)
{
ktime_t now = ktime_get();
if (ktime_after(now, lock->grace_deadline))
scoutfs_inc_counter(sb, lock_grace_set);
else
scoutfs_inc_counter(sb, lock_grace_extended);
lock->grace_deadline = ktime_add(now, GRACE_PERIOD_KT);
}
static void queue_grant_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->grant_list))
queue_work(linfo->workq, &linfo->grant_work);
}
/*
* We immediately queue work on the assumption that the caller might
* have made a change (set a lock mode) which can let one of the
* invalidating locks make forward progress, even if other locks are
* waiting for their grace period to elapse. It's a trade-off between
* invalidation latency and burning cpu repeatedly finding that locks
* are still in their grace period.
*/
static void queue_inv_work(struct lock_info *linfo)
{
assert_spin_locked(&linfo->lock);
if (!list_empty(&linfo->inv_list))
queue_work(linfo->workq, &linfo->inv_work);
mod_delayed_work(linfo->workq, &linfo->inv_dwork, 0);
}
/*
@@ -599,13 +684,72 @@ static void bug_on_inconsistent_grant_cache(struct super_block *sb,
}
/*
* The client is receiving a grant response message from the server.
* This is being called synchronously in the networking receive path so
* our work should be quick and reasonably non-blocking.
* Each lock has received a grant response message from the server.
*
* The server's state machine can immediately send an invalidate request
* after sending this grant response. We won't process the incoming
* invalidate request until after processing this grant response.
* Grant responses can be reordered with incoming invalidation requests
* from the server so we have to be careful to only set the new mode
* once the old mode matches.
*
* We extend the grace period as we grant the lock if there is a waiting
* locker who can use the lock. This stops invalidation from pulling
* the granted lock out from under the requester, resulting in a lot of
* churn with no forward progress. Using the grace period avoids having
* to identify a specific waiter and give it an acquired lock. It's
* also very similar to waking up the locker and having it win the race
* against the invalidation. In that case they'd extend the grace
* period anyway as they unlock.
*/
static void lock_grant_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info,
grant_work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
scoutfs_inc_counter(sb, lock_grant_work);
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->grant_list, grant_head) {
nl = &lock->grant_nl;
/* wait for reordered invalidation to finish */
if (lock->mode != nl->old_mode)
continue;
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode,
nl->new_mode);
if (!lock_mode_can_read(nl->old_mode) &&
lock_mode_can_read(nl->new_mode)) {
lock->refresh_gen =
atomic64_inc_return(&linfo->next_refresh_gen);
}
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_seq = le64_to_cpu(nl->write_seq);
if (lock_count_match_exists(nl->new_mode, lock->waiters))
extend_grace(sb, lock);
trace_scoutfs_lock_granted(sb, lock);
list_del_init(&lock->grant_head);
wake_up(&lock->waitq);
put_lock(linfo, lock);
}
/* invalidations might be waiting for our reordered grant */
queue_inv_work(linfo);
spin_unlock(&linfo->lock);
}
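
A toy model of the old-mode guard that both the grant and invalidate workers use, showing how it lets two messages that raced in delivery still apply in the order the server issued them:

```c
#include <stdbool.h>

struct mode_msg {
	int old_mode;
	int new_mode;
};

/* apply a message only once the lock's current mode matches its
 * old_mode; a reordered message simply waits for the other to land */
static bool try_apply(int *lock_mode, const struct mode_msg *m)
{
	if (*lock_mode != m->old_mode)
		return false;	/* reordered; retry after the other message */
	*lock_mode = m->new_mode;
	return true;
}
```

If the server sends grant(read to write) and then invalidate(write to null) but the client sees them reversed, the invalidation stalls at the mode check until the grant applies; the grant worker then re-queues the invalidation work, which can make progress.
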
/*
* The client is receiving a grant response message from the server. We
* find the lock, record the response, and add it to the list for grant
* work to process.
*/
int scoutfs_lock_grant_response(struct super_block *sb,
struct scoutfs_net_lock *nl)
@@ -623,61 +767,64 @@ int scoutfs_lock_grant_response(struct super_block *sb,
trace_scoutfs_lock_grant_response(sb, lock);
BUG_ON(!lock->request_pending);
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode, nl->new_mode);
if (!lock_mode_can_read(nl->old_mode) && lock_mode_can_read(nl->new_mode))
lock->refresh_gen = atomic64_inc_return(&linfo->next_refresh_gen);
lock->request_pending = 0;
lock->mode = nl->new_mode;
lock->write_seq = le64_to_cpu(nl->write_seq);
trace_scoutfs_lock_granted(sb, lock);
wake_up(&lock->waitq);
put_lock(linfo, lock);
lock->grant_nl = *nl;
list_add_tail(&lock->grant_head, &linfo->grant_list);
queue_grant_work(linfo);
spin_unlock(&linfo->lock);
return 0;
}
struct inv_req {
struct list_head head;
struct scoutfs_lock *lock;
u64 net_id;
struct scoutfs_net_lock nl;
};
/*
* Each lock has received a lock invalidation request from the server
* which specifies a new mode for the lock. Our processing state
* machine and server failover and lock recovery can both conspire to
* give us triplicate invalidation requests. The incoming requests for
* a given lock need to be processed in order, but we can process locks
* in any order.
* which specifies a new mode for the lock. The server will only send
* one invalidation request at a time for each lock. The server can
* send another invalidate request after we send the response but before
* we reacquire the lock and finish invalidation.
*
* This is an unsolicited request from the server so it can arrive at
* any time after we make the server aware of the lock. We wait for
* users of the current mode to unlock before invalidating.
* any time after we make the server aware of the lock by initially
* requesting it. We wait for users of the current mode to unlock
* before invalidating.
*
* This can arrive on behalf of our request for a mode that conflicts
* with our current mode. We have to proceed while we have a request
* pending. We can also be racing with shrink requests being sent while
* we're invalidating.
*
* This can be processed concurrently and experience reordering with a
* grant response sent back-to-back from the server. We carefully only
* invalidate once the lock mode matches what the server told us to
* invalidate.
*
* We delay invalidation processing until a grace period has elapsed
* since the last unlock. The intent is to let users do a reasonable
* batch of work before dropping the lock. Continuous unlocking can
* continuously extend the deadline.
*
* Before we start invalidating the lock we set the lock to the new
* mode, preventing further incompatible users of the old mode from
* using the lock while we're invalidating.
*
* This does a lot of serialized inode invalidation in one context and
* performs a lot of repeated calls to sync. It would be nice to get
* some concurrent inode invalidation and to more carefully only call
* sync when needed.
*/
static void lock_invalidate_worker(struct work_struct *work)
{
struct lock_info *linfo = container_of(work, struct lock_info, inv_work);
struct lock_info *linfo = container_of(work, struct lock_info,
inv_dwork.work);
struct super_block *sb = linfo->sb;
struct scoutfs_net_lock *nl;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
struct inv_req *ireq;
unsigned long delay = MAX_JIFFY_OFFSET;
ktime_t now = ktime_get();
ktime_t deadline;
LIST_HEAD(ready);
u64 net_id;
int ret;
scoutfs_inc_counter(sb, lock_invalidate_work);
@@ -685,13 +832,25 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &linfo->inv_list, inv_head) {
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
nl = &ireq->nl;
nl = &lock->inv_nl;
/* wait for reordered grant to finish */
if (lock->mode != nl->old_mode)
continue;
/* wait until incompatible holders unlock */
if (!lock_counts_match(nl->new_mode, lock->users))
continue;
/* skip if grace hasn't elapsed, record earliest */
deadline = lock->grace_deadline;
if (!linfo->shutdown && ktime_before(now, deadline)) {
delay = min(delay,
nsecs_to_jiffies(ktime_to_ns(
ktime_sub(deadline, now))));
scoutfs_inc_counter(linfo->sb, lock_grace_wait);
continue;
}
/* set the new mode, no incompatible users during inval */
lock->mode = nl->new_mode;
@@ -702,12 +861,12 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_unlock(&linfo->lock);
if (list_empty(&ready))
return;
goto out;
/* invalidate once the lock is read */
list_for_each_entry(lock, &ready, inv_head) {
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
nl = &ireq->nl;
nl = &lock->inv_nl;
net_id = lock->inv_net_id;
/* only lock protocol, inv can't call subsystems after shutdown */
if (!linfo->shutdown) {
@@ -715,10 +874,11 @@ static void lock_invalidate_worker(struct work_struct *work)
BUG_ON(ret);
}
/* respond with the key and modes from the request, server might have died */
ret = scoutfs_client_lock_response(sb, ireq->net_id, nl);
if (ret == -ENOTCONN)
ret = 0;
/* allow another request after we respond but before we finish */
lock->inv_net_id = 0;
/* respond with the key and modes from the request */
ret = scoutfs_client_lock_response(sb, net_id, nl);
BUG_ON(ret);
scoutfs_inc_counter(sb, lock_invalidate_response);
@@ -728,87 +888,69 @@ static void lock_invalidate_worker(struct work_struct *work)
spin_lock(&linfo->lock);
list_for_each_entry_safe(lock, tmp, &ready, inv_head) {
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
trace_scoutfs_lock_invalidated(sb, lock);
list_del(&ireq->head);
kfree(ireq);
if (list_empty(&lock->inv_list)) {
if (lock->inv_net_id == 0) {
/* finish if another request didn't arrive */
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
wake_up(&lock->waitq);
} else {
/* another request arrived, back on the list and requeue */
/* another request filled nl/net_id, put it back on the list */
list_move_tail(&lock->inv_head, &linfo->inv_list);
queue_inv_work(linfo);
}
put_lock(linfo, lock);
}
/* grant might have been waiting for invalidate request */
queue_grant_work(linfo);
spin_unlock(&linfo->lock);
out:
/* queue delayed work if invalidations waiting on grace deadline */
if (delay != MAX_JIFFY_OFFSET)
queue_delayed_work(linfo->workq, &linfo->inv_dwork, delay);
}
/*
* Add an incoming invalidation request to the end of the list on the
* lock and queue it for blocking invalidation work. This is being
* called synchronously in the net recv path to avoid reordering with
* grants that were sent immediately before the server sent this
* invalidation.
* Record an incoming invalidate request from the server and add its
* lock to the list for processing. This request can be from a new
* server and race with invalidation that frees locks from an old server.
* It's fine to not find the requested lock and send an immediate
* response.
*
* Incoming invalidation requests are a function of the remote lock
* server's state machine and are slightly decoupled from our lock
* state. We can receive duplicate requests if the server is quick
* enough to send the next request after we send a previous reply, or if
* pending invalidation spans server failover and lock recovery.
*
* Similarly, we can get a request to invalidate a lock we don't have if
* invalidation finished just after lock recovery to a new server.
* Happily we can just reply because we satisfy the invalidation
* response promise to not be using the old lock's mode if the lock
* doesn't exist.
* The invalidation process drops the linfo lock to send responses. The
* moment it does so we can receive another invalidation request (the
* server can ask us to go from write->read then read->null). We allow
* for one chain like this but it's a bug if we receive more concurrent
* invalidation requests than that. The server should only be sending
* one at a time.
*/
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
struct scoutfs_net_lock *nl)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock = NULL;
struct inv_req *ireq;
struct scoutfs_lock *lock;
int ret = 0;
scoutfs_inc_counter(sb, lock_invalidate_request);
ireq = kmalloc(sizeof(struct inv_req), GFP_NOFS);
BUG_ON(!ireq); /* lock server doesn't handle response errors */
if (ireq == NULL) {
ret = -ENOMEM;
goto out;
}
spin_lock(&linfo->lock);
lock = get_lock(sb, &nl->key);
if (lock) {
trace_scoutfs_lock_invalidate_request(sb, lock);
ireq->lock = lock;
ireq->net_id = net_id;
ireq->nl = *nl;
if (list_empty(&lock->inv_list)) {
BUG_ON(lock->inv_net_id != 0);
lock->inv_net_id = net_id;
lock->inv_nl = *nl;
if (list_empty(&lock->inv_head)) {
list_add_tail(&lock->inv_head, &linfo->inv_list);
lock->invalidate_pending = 1;
queue_inv_work(linfo);
}
list_add_tail(&ireq->head, &lock->inv_list);
trace_scoutfs_lock_invalidate_request(sb, lock);
queue_inv_work(linfo);
}
spin_unlock(&linfo->lock);
out:
if (!lock) {
if (!lock)
ret = scoutfs_client_lock_response(sb, net_id, nl);
BUG_ON(ret); /* lock server doesn't fence timed out client requests */
}
return ret;
}
@@ -986,14 +1128,8 @@ static int lock_key_range(struct super_block *sb, enum scoutfs_lock_mode mode, i
trace_scoutfs_lock_wait(sb, lock);
if (flags & SCOUTFS_LKF_INTERRUPTIBLE) {
ret = wait_event_interruptible(lock->waitq,
lock_wait_cond(sb, lock, mode));
} else {
wait_event(lock->waitq, lock_wait_cond(sb, lock, mode));
ret = 0;
}
ret = wait_event_interruptible(lock->waitq,
lock_wait_cond(sb, lock, mode));
spin_lock(&linfo->lock);
if (ret)
break;
@@ -1237,20 +1373,10 @@ int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock)
{
struct scoutfs_key start;
struct scoutfs_key end;
scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
return lock_key_range(sb, mode, flags, &start, &end, lock);
}
/*
* As we unlock we always extend the grace period to give the caller
* another pass at the lock before it's invalidated.
*/
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scoutfs_lock_mode mode)
{
DECLARE_LOCK_INFO(sb, linfo);
@@ -1263,6 +1389,7 @@ void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scou
spin_lock(&linfo->lock);
lock_dec_count(lock->users, mode);
extend_grace(sb, lock);
if (lock_mode_can_write(mode))
lock->dirty_trans_seq = scoutfs_trans_sample_seq(sb);
@@ -1502,18 +1629,10 @@ void scoutfs_lock_unmount_begin(struct super_block *sb)
if (linfo) {
linfo->unmounting = true;
flush_work(&linfo->inv_work);
flush_delayed_work(&linfo->inv_dwork);
}
}
void scoutfs_lock_flush_invalidate(struct super_block *sb)
{
DECLARE_LOCK_INFO(sb, linfo);
if (linfo)
flush_work(&linfo->inv_work);
}
/*
* The caller is going to be shutting down transactions and the client.
* We need to make sure that locking won't call either after we return.
@@ -1577,8 +1696,6 @@ void scoutfs_lock_destroy(struct super_block *sb)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
struct inv_req *ireq_tmp;
struct inv_req *ireq;
struct rb_node *node;
enum scoutfs_lock_mode mode;
@@ -1605,6 +1722,8 @@ void scoutfs_lock_destroy(struct super_block *sb)
spin_unlock(&linfo->lock);
if (linfo->workq) {
/* pending grace work queues normal work */
flush_workqueue(linfo->workq);
/* now all work won't queue itself */
destroy_workqueue(linfo->workq);
}
@@ -1621,21 +1740,15 @@ void scoutfs_lock_destroy(struct super_block *sb)
* of free).
*/
spin_lock(&linfo->lock);
node = rb_first(&linfo->lock_tree);
while (node) {
lock = rb_entry(node, struct scoutfs_lock, node);
node = rb_next(node);
list_for_each_entry_safe(ireq, ireq_tmp, &lock->inv_list, head) {
list_del_init(&ireq->head);
put_lock(linfo, ireq->lock);
kfree(ireq);
}
lock->request_pending = 0;
if (!list_empty(&lock->lru_head))
__lock_del_lru(linfo, lock);
if (!list_empty(&lock->grant_head))
list_del_init(&lock->grant_head);
if (!list_empty(&lock->inv_head)) {
list_del_init(&lock->inv_head);
lock->invalidate_pending = 0;
@@ -1645,7 +1758,6 @@ void scoutfs_lock_destroy(struct super_block *sb)
lock_remove(linfo, lock);
lock_free(linfo, lock);
}
spin_unlock(&linfo->lock);
kfree(linfo);
@@ -1670,11 +1782,15 @@ int scoutfs_lock_setup(struct super_block *sb)
linfo->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&linfo->shrinker);
INIT_LIST_HEAD(&linfo->lru_list);
INIT_WORK(&linfo->inv_work, lock_invalidate_worker);
INIT_WORK(&linfo->grant_work, lock_grant_worker);
INIT_LIST_HEAD(&linfo->grant_list);
INIT_DELAYED_WORK(&linfo->inv_dwork, lock_invalidate_worker);
INIT_LIST_HEAD(&linfo->inv_list);
INIT_WORK(&linfo->shrink_work, lock_shrink_worker);
INIT_LIST_HEAD(&linfo->shrink_list);
atomic64_set(&linfo->next_refresh_gen, 0);
INIT_WORK(&linfo->inv_iput_work, lock_inv_iput_worker);
init_llist_head(&linfo->inv_iput_llist);
scoutfs_tseq_tree_init(&linfo->tseq_tree, lock_tseq_show);
sbi->lock_info = linfo;

View File

@@ -6,8 +6,7 @@
#define SCOUTFS_LKF_REFRESH_INODE 0x01 /* update stale inode from item */
#define SCOUTFS_LKF_NONBLOCK 0x02 /* only use already held locks */
#define SCOUTFS_LKF_INTERRUPTIBLE 0x04 /* pending signals return -ERESTARTSYS */
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_INTERRUPTIBLE << 1) - 1))
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_NONBLOCK << 1) - 1))
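
A worked expansion of the invalid-flags mask; the two sides of this change anchor the mask at different highest flags, but the shape is identical:

```c
/* with flags 0x01, 0x02, 0x04, anchoring at the highest flag gives
 * ((0x04 << 1) - 1) == 0x07, a mask of every valid bit; its
 * complement has a bit set for every flag that isn't defined */
#define LKF_REFRESH_INODE	0x01
#define LKF_NONBLOCK		0x02
#define LKF_INTERRUPTIBLE	0x04
#define LKF_INVALID		(~((LKF_INTERRUPTIBLE << 1) - 1))

/* a caller is rejected when (flags & LKF_INVALID) != 0 */
```
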
#define SCOUTFS_LOCK_NR_MODES SCOUTFS_LOCK_INVALID
@@ -28,11 +27,15 @@ struct scoutfs_lock {
u64 dirty_trans_seq;
struct list_head lru_head;
wait_queue_head_t waitq;
ktime_t grace_deadline;
unsigned long request_pending:1,
invalidate_pending:1;
struct list_head inv_head; /* entry in linfo's list of locks with invalidations */
struct list_head inv_list; /* list of lock's invalidation requests */
struct list_head grant_head;
struct scoutfs_net_lock grant_nl;
struct list_head inv_head;
struct scoutfs_net_lock inv_nl;
u64 inv_net_id;
struct list_head shrink_head;
spinlock_t cov_list_lock;
@@ -84,8 +87,6 @@ int scoutfs_lock_rename(struct super_block *sb, enum scoutfs_lock_mode mode, int
struct scoutfs_lock **lock);
int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 ino, struct scoutfs_lock **lock);
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock);
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock,
enum scoutfs_lock_mode mode);
@@ -104,7 +105,6 @@ void scoutfs_free_unused_locks(struct super_block *sb);
int scoutfs_lock_setup(struct super_block *sb);
void scoutfs_lock_unmount_begin(struct super_block *sb);
void scoutfs_lock_flush_invalidate(struct super_block *sb);
void scoutfs_lock_shutdown(struct super_block *sb);
void scoutfs_lock_destroy(struct super_block *sb);

View File

@@ -78,8 +78,9 @@ struct lock_server_info {
struct scoutfs_tseq_tree tseq_tree;
struct dentry *tseq_dentry;
struct scoutfs_tseq_tree stats_tseq_tree;
struct dentry *stats_tseq_dentry;
struct scoutfs_alloc *alloc;
struct scoutfs_block_writer *wri;
};
#define DECLARE_LOCK_SERVER_INFO(sb, name) \
@@ -106,9 +107,6 @@ struct server_lock_node {
struct list_head granted;
struct list_head requested;
struct list_head invalidated;
struct scoutfs_tseq_entry stats_tseq_entry;
u64 stats[SLT_NR];
};
/*
@@ -298,8 +296,6 @@ static struct server_lock_node *alloc_server_lock(struct lock_server_info *inf,
snode = get_server_lock(inf, key, ins, false);
if (snode != ins)
kfree(ins);
else
scoutfs_tseq_add(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
}
}
@@ -329,10 +325,8 @@ static void put_server_lock(struct lock_server_info *inf,
mutex_unlock(&snode->mutex);
if (should_free) {
scoutfs_tseq_del(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
if (should_free)
kfree(snode);
}
}
static struct client_lock_entry *find_entry(struct server_lock_node *snode,
@@ -394,8 +388,6 @@ int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
goto out;
}
snode->stats[SLT_REQUEST]++;
clent->snode = snode;
add_client_entry(snode, &snode->requested, clent);
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
@@ -436,8 +428,6 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
goto out;
}
snode->stats[SLT_RESPONSE]++;
clent = find_entry(snode, &snode->invalidated, rid);
if (!clent) {
put_server_lock(inf, snode);
@@ -518,7 +508,6 @@ static int process_waiting_requests(struct super_block *sb,
trace_scoutfs_lock_message(sb, SLT_SERVER,
SLT_INVALIDATE, SLT_REQUEST,
gr->rid, 0, &nl);
snode->stats[SLT_INVALIDATE]++;
add_client_entry(snode, &snode->invalidated, gr);
}
@@ -555,7 +544,6 @@ static int process_waiting_requests(struct super_block *sb,
trace_scoutfs_lock_message(sb, SLT_SERVER, SLT_GRANT,
SLT_RESPONSE, req->rid,
req->net_id, &nl);
snode->stats[SLT_GRANT]++;
/* don't track null client locks, track all else */
if (req->mode == SCOUTFS_LOCK_NULL)
@@ -798,21 +786,13 @@ static void lock_server_tseq_show(struct seq_file *m,
clent->net_id);
}
static void stats_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
{
struct server_lock_node *snode = container_of(ent, struct server_lock_node,
stats_tseq_entry);
seq_printf(m, SK_FMT" req %llu inv %llu rsp %llu gr %llu\n",
SK_ARG(&snode->key), snode->stats[SLT_REQUEST], snode->stats[SLT_INVALIDATE],
snode->stats[SLT_RESPONSE], snode->stats[SLT_GRANT]);
}
/*
* Setup the lock server. This is called before networking can deliver
* requests.
*/
int scoutfs_lock_server_setup(struct super_block *sb)
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct lock_server_info *inf;
@@ -825,7 +805,8 @@ int scoutfs_lock_server_setup(struct super_block *sb)
spin_lock_init(&inf->lock);
inf->locks_root = RB_ROOT;
scoutfs_tseq_tree_init(&inf->tseq_tree, lock_server_tseq_show);
scoutfs_tseq_tree_init(&inf->stats_tseq_tree, stats_tseq_show);
inf->alloc = alloc;
inf->wri = wri;
inf->tseq_dentry = scoutfs_tseq_create("server_locks", sbi->debug_root,
&inf->tseq_tree);
@@ -834,14 +815,6 @@ int scoutfs_lock_server_setup(struct super_block *sb)
return -ENOMEM;
}
inf->stats_tseq_dentry = scoutfs_tseq_create("server_lock_stats", sbi->debug_root,
&inf->stats_tseq_tree);
if (!inf->stats_tseq_dentry) {
debugfs_remove(inf->tseq_dentry);
kfree(inf);
return -ENOMEM;
}
sbi->lock_server_info = inf;
return 0;
@@ -863,7 +836,6 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
if (inf) {
debugfs_remove(inf->tseq_dentry);
debugfs_remove(inf->stats_tseq_dentry);
rbtree_postorder_for_each_entry_safe(snode, stmp,
&inf->locks_root, node) {

View File

@@ -11,7 +11,9 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
struct scoutfs_net_lock *nl);
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid);
int scoutfs_lock_server_setup(struct super_block *sb);
int scoutfs_lock_server_setup(struct super_block *sb,
struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri);
void scoutfs_lock_server_destroy(struct super_block *sb);
#endif

View File

@@ -4,7 +4,6 @@
#include <linux/bitops.h>
#include "key.h"
#include "counters.h"
#include "super.h"
void __printf(4, 5) scoutfs_msg(struct super_block *sb, const char *prefix,
const char *str, const char *fmt, ...);
@@ -24,9 +23,6 @@ do { \
#define scoutfs_info(sb, fmt, args...) \
scoutfs_msg_check(sb, KERN_INFO, "", fmt, ##args)
#define scoutfs_tprintk(sb, fmt, args...) \
trace_printk(SCSBF " " fmt "\n", SCSB_ARGS(sb), ##args);
#define scoutfs_bug_on(sb, cond, fmt, args...) \
do { \
if (cond) { \

View File

@@ -629,6 +629,8 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
break;
}
trace_scoutfs_recv_clock_sync(nh.clock_sync_id);
data_len = le16_to_cpu(nh.data_len);
scoutfs_inc_counter(sb, net_recv_messages);
@@ -675,15 +677,8 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
scoutfs_tseq_add(&ninf->msg_tseq_tree, &mrecv->tseq_entry);
/*
* Initial received greetings are processed
* synchronously before any other incoming messages.
*
* Incoming requests or responses to the lock client are
* called synchronously to avoid reordering.
*/
if (nh.cmd == SCOUTFS_NET_CMD_GREETING ||
(nh.cmd == SCOUTFS_NET_CMD_LOCK && !conn->listening_conn))
/* synchronously process greeting before next recvmsg */
if (nh.cmd == SCOUTFS_NET_CMD_GREETING)
scoutfs_net_proc_worker(&mrecv->proc_work);
else
queue_work(conn->workq, &mrecv->proc_work);
@@ -783,6 +778,9 @@ static void scoutfs_net_send_worker(struct work_struct *work)
trace_scoutfs_net_send_message(sb, &conn->sockname,
&conn->peername, &msend->nh);
msend->nh.clock_sync_id = scoutfs_clock_sync_id();
trace_scoutfs_send_clock_sync(msend->nh.clock_sync_id);
ret = sendmsg_full(conn->sock, &msend->nh, len);
spin_lock(&conn->lock);
@@ -875,31 +873,13 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
}
/*
* By default, TCP would maintain a connection to an unresponsive peer
* for a very long time indeed. We can't do that because quorum
* members will only participate in an election when they don't have a
* healthy connection to a server. We use the KEEPALIVE* and
* TCP_USER_TIMEOUT options to ensure that we'll break an unresponsive
* connection and return to the quorum and client connection paths to
* try and establish a new connection to an active server.
*
* The TCP_KEEP* and TCP_USER_TIMEOUT option interaction is subtle.
* TCP_USER_TIMEOUT only applies if there is unacked written data in the
* send queue. It doesn't work if the connection is idle. Adding
* keepalive probes with user_timeout set changes how the keepalive
* timeout is calculated. TCP_KEEPCNT no longer matters. Each time
* additional probes (not the first) are sent the user timeout is
* checked against the last time data was received. If none of the
* keepalives are responded to then eventually the user timeout applies.
*
* Given all this, we start with the overall unresponsive timeout. Then
* we set the probes to start sending towards the end of the timeout.
* We allow a few probes to get a successful response; if none do, the
* user timeout elapses during the probe timer processing that follows
* the unsuccessful probes.
* Have a pretty aggressive keepalive timeout of around 10 seconds. The
* TCP keepalives are being processed out of task context so they should
* be responsive even when mounts are under load.
*/
#define UNRESPONSIVE_TIMEOUT_SECS 10
#define UNRESPONSIVE_PROBES 3
#define KEEPCNT 3
#define KEEPIDLE 7
#define KEEPINTVL 1
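
For reference, a userspace sketch of the same keepalive tuning with the standard `setsockopt()` interface (the kernel code uses `kernel_setsockopt()` with the same constants); the other side of this change additionally arms `TCP_USER_TIMEOUT`:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int set_aggressive_keepalive(int fd)
{
	int on = 1;
	int cnt = 3;	/* KEEPCNT: probes before giving up */
	int idle = 7;	/* KEEPIDLE: idle seconds before first probe */
	int intvl = 1;	/* KEEPINTVL: seconds between probes */

	if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) ||
	    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)))
		return -1;
	return 0;
}
```

With these values a dead peer is detected after roughly 7 + 3 * 1 seconds, matching the "around 10 seconds" target in the comment above.
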
static int sock_opts_and_names(struct scoutfs_net_connection *conn,
struct socket *sock)
{
@@ -908,7 +888,7 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
int optval;
int ret;
/* we use a keepalive timeout instead of send timeout */
/* but use a keepalive timeout instead of send timeout */
tv.tv_sec = 0;
tv.tv_usec = 0;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
@@ -916,32 +896,24 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
if (ret)
goto out;
/* not checked when user_timeout != 0, but for clarity */
optval = UNRESPONSIVE_PROBES;
optval = KEEPCNT;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
BUILD_BUG_ON(UNRESPONSIVE_PROBES >= UNRESPONSIVE_TIMEOUT_SECS);
optval = UNRESPONSIVE_TIMEOUT_SECS - (UNRESPONSIVE_PROBES);
optval = KEEPIDLE;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = 1;
optval = KEEPINTVL;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = UNRESPONSIVE_TIMEOUT_SECS * MSEC_PER_SEC;
ret = kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
(char *)&optval, sizeof(optval));
if (ret)
goto out;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
(char *)&optval, sizeof(optval));
@@ -1514,7 +1486,8 @@ int scoutfs_net_connect(struct super_block *sb,
struct scoutfs_net_connection *conn,
struct sockaddr_in *sin, unsigned long timeout_ms)
{
int ret = 0;
int error = 0;
int ret;
spin_lock(&conn->lock);
conn->connect_sin = *sin;
@@ -1522,8 +1495,10 @@ int scoutfs_net_connect(struct super_block *sb,
spin_unlock(&conn->lock);
queue_work(conn->workq, &conn->connect_work);
wait_event(conn->waitq, connect_result(conn, &ret));
return ret;
ret = wait_event_interruptible(conn->waitq,
connect_result(conn, &error));
return ret ?: error;
}
static void set_valid_greeting(struct scoutfs_net_connection *conn)
@@ -1659,10 +1634,10 @@ restart:
conn->next_send_id = reconn->next_send_id;
atomic64_set(&conn->recv_seq, atomic64_read(&reconn->recv_seq));
/* reconn should be idle while in reconn_wait */
/* greeting response/ack will be on conn send queue */
BUG_ON(!list_empty(&reconn->send_queue));
/* queued greeting response is racing, can be in send or resend queue */
list_splice_tail_init(&reconn->resend_queue, &conn->resend_queue);
BUG_ON(!list_empty(&conn->resend_queue));
list_splice_init(&reconn->resend_queue, &conn->resend_queue);
/* new conn info is unused, swap, old won't call down */
swap(conn->info, reconn->info);
@@ -1826,10 +1801,11 @@ int scoutfs_net_sync_request(struct super_block *sb,
ret = scoutfs_net_submit_request(sb, conn, cmd, arg, arg_len,
sync_response, &sreq, &id);
if (ret == 0) {
wait_for_completion(&sreq.comp);
ret = wait_for_completion_interruptible(&sreq.comp);
if (ret == -ERESTARTSYS)
scoutfs_net_cancel_request(sb, conn, cmd, id);
else
ret = sreq.error;
}
return ret;
}

View File

@@ -97,7 +97,7 @@ struct quorum_host_msg {
struct last_msg {
struct quorum_host_msg msg;
ktime_t ts;
struct timespec64 ts;
};
enum quorum_role { FOLLOWER, CANDIDATE, LEADER };
@@ -209,7 +209,7 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
DECLARE_QUORUM_INFO(sb, qinf);
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
ktime_t now;
struct timespec64 ts;
int i;
struct scoutfs_quorum_message qmes = {
@@ -235,6 +235,7 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
qmes.crc = quorum_message_crc(&qmes);
ts = ktime_to_timespec64(ktime_get());
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i) ||
@@ -242,13 +243,12 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
continue;
scoutfs_quorum_slot_sin(super, i, &sin);
now = ktime_get();
kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
spin_lock(&qinf->show_lock);
qinf->last_send[i].msg.term = term;
qinf->last_send[i].msg.type = type;
qinf->last_send[i].ts = now;
qinf->last_send[i].ts = ts;
spin_unlock(&qinf->show_lock);
if (i == only)
@@ -308,8 +308,6 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
if (ret < 0)
return ret;
now = ktime_get();
if (ret != sizeof(qmes) ||
qmes.crc != quorum_message_crc(&qmes) ||
qmes.fsid != super->hdr.fsid ||
@@ -329,7 +327,7 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
spin_lock(&qinf->show_lock);
qinf->last_recv[msg->from].msg = *msg;
qinf->last_recv[msg->from].ts = now;
qinf->last_recv[msg->from].ts = ktime_to_timespec64(ktime_get());
spin_unlock(&qinf->show_lock);
return 0;
@@ -392,51 +390,6 @@ out:
return ret;
}
/*
* It's really important in raft elections that the term not go
* backwards in time. We achieve this by having each participant record
* the greatest term they've seen in their quorum block. It's also
* important that participants agree on the greatest term. It can
* happen that one gets ahead of the rest, perhaps by being forcefully
* shut down after having just been elected. As everyone starts up it's
* possible to have N-1 have term T-1 while just one participant thinks
* the term is T. That single participant will ignore all messages
* from older terms. If its timeout is greater than the others it can
* immediately override the election of the majority and request votes
* and become elected.
*
* A best-effort workaround is to have everyone try to start from the
* greatest term that they can find in everyone's blocks. If it works
* then you avoid having those with greater terms ignore others. If it
* doesn't work the elections will eventually stabilize after rocky
* periods of fencing from what looks like concurrent elections.
*/
static void read_greatest_term(struct super_block *sb, u64 *term)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_quorum_block blk;
int ret;
int e;
int s;
*term = 0;
for (s = 0; s < SCOUTFS_QUORUM_MAX_SLOTS; s++) {
if (!quorum_slot_present(super, s))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + s, &blk, false);
if (ret < 0)
continue;
for (e = 0; e < ARRAY_SIZE(blk.events); e++) {
if (blk.events[e].rid)
*term = max(*term, le64_to_cpu(blk.events[e].term));
}
}
}
static void set_quorum_block_event(struct super_block *sb, struct scoutfs_quorum_block *blk,
int event, u64 term)
{
@@ -448,10 +401,8 @@ static void set_quorum_block_event(struct super_block *sb, struct scoutfs_quorum
return;
getnstimeofday64(&ts);
le64_add_cpu(&blk->write_nr, 1);
ev = &blk->events[event];
ev->write_nr = blk->write_nr;
ev->rid = cpu_to_le64(sbi->rid);
ev->term = cpu_to_le64(term);
ev->ts.sec = cpu_to_le64(ts.tv_sec);
@@ -605,8 +556,10 @@ out:
ret = err;
}
if (ret < 0)
if (ret < 0) {
scoutfs_err(sb, "error %d attempting to find and fence previous leaders", ret);
scoutfs_inc_counter(sb, quorum_fence_error);
}
return ret;
}
@@ -623,24 +576,29 @@ static void scoutfs_quorum_worker(struct work_struct *work)
struct quorum_info *qinf = container_of(work, struct quorum_info, work);
struct super_block *sb = qinf->sb;
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
struct scoutfs_quorum_block blk;
struct sockaddr_in unused;
struct quorum_host_msg msg;
struct quorum_status qst;
u64 blkno;
int ret;
int err;
/* recording votes from slots as native single word bitmap */
BUILD_BUG_ON(SCOUTFS_QUORUM_MAX_SLOTS > BITS_PER_LONG);
/* get our starting term from our persistent block */
blkno = SCOUTFS_QUORUM_BLKNO + opts->quorum_slot_nr;
ret = read_quorum_block(sb, blkno, &blk, false);
if (ret < 0)
goto out;
/* start out as a follower */
qst.role = FOLLOWER;
qst.term = 0;
qst.term = le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_TERM].term);
qst.vote_for = -1;
qst.vote_bits = 0;
/* read our starting term from greatest in all events in all slots */
read_greatest_term(sb, &qst.term);
/* see if there's a server to chose heartbeat or election timeout */
if (scoutfs_quorum_server_sin(sb, &unused) == 0)
qst.timeout = heartbeat_timeout();
@@ -652,7 +610,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
if (ret < 0)
goto out;
while (!(qinf->shutdown || scoutfs_forcing_unmount(sb))) {
while (!qinf->shutdown) {
ret = recv_msg(sb, &msg, qst.timeout);
if (ret < 0) {
@@ -739,10 +697,11 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* candidates count votes in their term */
if (qst.role == CANDIDATE &&
msg.type == SCOUTFS_QUORUM_MSG_VOTE) {
if (test_and_set_bit(msg.from, &qst.vote_bits)) {
if (test_bit(msg.from, &qst.vote_bits)) {
scoutfs_warn(sb, "already received vote from %u in term %llu, are there multiple mounts with quorum_slot_nr=%u?",
msg.from, qst.term, msg.from);
}
set_bit(msg.from, &qst.vote_bits);
scoutfs_inc_counter(sb, quorum_recv_vote);
}
@@ -774,15 +733,13 @@ static void scoutfs_quorum_worker(struct work_struct *work)
ret = scoutfs_server_start(sb, qst.term);
if (ret < 0) {
clear_bit(QINF_FLAG_SERVER, &qinf->flags);
scoutfs_err(sb, "server startup failed with %d", ret);
/* store our increased term */
err = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP, qst.term,
true);
if (err < 0) {
if (err < 0 && ret == 0)
ret = err;
goto out;
}
ret = 0;
continue;
goto out;
}
}
@@ -828,11 +785,11 @@ static void scoutfs_quorum_worker(struct work_struct *work)
qst.term);
}
/* record that this slot no longer has an active quorum */
/* informational event that we're shutting down, nothing relies on it */
update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_END, qst.term, true);
out:
if (ret < 0) {
scoutfs_err(sb, "quorum service saw error %d, shutting down. This mount is no longer participating in quorum. It should be remounted to restore service.",
scoutfs_err(sb, "quorum service saw error %d, shutting down. Cluster will be degraded until this slot is remounted to restart the quorum service",
ret);
}
}
@@ -958,7 +915,6 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
struct quorum_status qst;
struct last_msg last;
struct timespec64 ts;
const ktime_t now = ktime_get();
size_t size;
int ret;
int i;
@@ -980,9 +936,9 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
qst.vote_for);
snprintf_ret(buf, size, &ret, "vote_bits 0x%lx (count %lu)\n",
qst.vote_bits, hweight_long(qst.vote_bits));
ts = ktime_to_timespec64(ktime_sub(qst.timeout, now));
snprintf_ret(buf, size, &ret, "timeout_in_secs %lld.%09u\n",
(s64)ts.tv_sec, (int)ts.tv_nsec);
ts = ktime_to_timespec64(qst.timeout);
snprintf_ret(buf, size, &ret, "timeout %llu.%u\n",
(u64)ts.tv_sec, (int)ts.tv_nsec);
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
spin_lock(&qinf->show_lock);
@@ -992,11 +948,10 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
if (last.msg.term == 0)
continue;
ts = ktime_to_timespec64(ktime_sub(now, last.ts));
snprintf_ret(buf, size, &ret,
"last_send to %u term %llu type %u secs_since %lld.%09u\n",
"last_send to %u term %llu type %u ts %llu.%u\n",
i, last.msg.term, last.msg.type,
(s64)ts.tv_sec, (int)ts.tv_nsec);
(u64)last.ts.tv_sec, (int)last.ts.tv_nsec);
}
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
@@ -1006,12 +961,10 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
if (last.msg.term == 0)
continue;
ts = ktime_to_timespec64(ktime_sub(now, last.ts));
snprintf_ret(buf, size, &ret,
"last_recv from %u term %llu type %u secs_since %lld.%09u\n",
"last_recv from %u term %llu type %u ts %llu.%u\n",
i, last.msg.term, last.msg.type,
(s64)ts.tv_sec, (int)ts.tv_nsec);
(u64)last.ts.tv_sec, (int)last.ts.tv_nsec);
}
return ret;
@@ -1048,17 +1001,13 @@ static inline bool valid_ipv4_port(__be16 port)
static int verify_quorum_slots(struct super_block *sb)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
char slots[(SCOUTFS_QUORUM_MAX_SLOTS * 3) + 1];
DECLARE_QUORUM_INFO(sb, qinf);
struct sockaddr_in other;
struct sockaddr_in sin;
int found = 0;
int ret;
int i;
int j;
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(super, i))
continue;
@@ -1099,25 +1048,6 @@ static int verify_quorum_slots(struct super_block *sb)
return -EINVAL;
}
if (!quorum_slot_present(super, opts->quorum_slot_nr)) {
char *str = slots;
*str = '\0';
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (quorum_slot_present(super, i)) {
ret = snprintf(str, &slots[ARRAY_SIZE(slots)] - str, "%c%u",
str == slots ? ' ' : ',', i);
if (ret < 2 || ret > 3) {
scoutfs_err(sb, "error gathering populated slots");
return -EINVAL;
}
str += ret;
}
}
scoutfs_err(sb, "quorum_slot_nr=%u option references unused slot, must be one of the following configured slots:%s",
opts->quorum_slot_nr, slots);
return -EINVAL;
}
/*
* Always require a majority except in the pathological cases of
* 1 or 2 members.


@@ -58,6 +58,9 @@ struct lock_info;
__entry->pref##_map, \
__entry->pref##_flags
#define DECLARE_TRACED_EXTENT(name) \
struct scoutfs_traced_extent name = {0}
DECLARE_EVENT_CLASS(scoutfs_ino_ret_class,
TP_PROTO(struct super_block *sb, u64 ino, int ret),
@@ -403,24 +406,21 @@ TRACE_EVENT(scoutfs_sync_fs,
);
TRACE_EVENT(scoutfs_trans_write_func,
TP_PROTO(struct super_block *sb, u64 dirty_block_bytes, u64 dirty_item_pages),
TP_PROTO(struct super_block *sb, unsigned long dirty),
TP_ARGS(sb, dirty_block_bytes, dirty_item_pages),
TP_ARGS(sb, dirty),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, dirty_block_bytes)
__field(__u64, dirty_item_pages)
__field(unsigned long, dirty)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->dirty_block_bytes = dirty_block_bytes;
__entry->dirty_item_pages = dirty_item_pages;
__entry->dirty = dirty;
),
TP_printk(SCSBF" dirty_block_bytes %llu dirty_item_pages %llu",
SCSB_TRACE_ARGS, __entry->dirty_block_bytes, __entry->dirty_item_pages)
TP_printk(SCSBF" dirty %lu", SCSB_TRACE_ARGS, __entry->dirty)
);
DECLARE_EVENT_CLASS(scoutfs_trans_hold_release_class,
@@ -1954,6 +1954,74 @@ TRACE_EVENT(scoutfs_quorum_loop,
__entry->timeout_sec, __entry->timeout_nsec)
);
/*
* We can emit trace events to make it easier to synchronize the
* monotonic clocks in trace logs between nodes. By looking at the send
* and recv times of many messages flowing between nodes we can get
* surprisingly good estimates of the clock offset between them.
*/
DECLARE_EVENT_CLASS(scoutfs_clock_sync_class,
TP_PROTO(__le64 clock_sync_id),
TP_ARGS(clock_sync_id),
TP_STRUCT__entry(
__field(__u64, clock_sync_id)
),
TP_fast_assign(
__entry->clock_sync_id = le64_to_cpu(clock_sync_id);
),
TP_printk("clock_sync_id %016llx", __entry->clock_sync_id)
);
DEFINE_EVENT(scoutfs_clock_sync_class, scoutfs_send_clock_sync,
TP_PROTO(__le64 clock_sync_id),
TP_ARGS(clock_sync_id)
);
DEFINE_EVENT(scoutfs_clock_sync_class, scoutfs_recv_clock_sync,
TP_PROTO(__le64 clock_sync_id),
TP_ARGS(clock_sync_id)
);
TRACE_EVENT(scoutfs_trans_seq_advance,
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
TP_ARGS(sb, rid, trans_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, s_rid)
__field(__u64, trans_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->s_rid = rid;
__entry->trans_seq = trans_seq;
),
TP_printk(SCSBF" rid %016llx trans_seq %llu\n",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->trans_seq)
);
TRACE_EVENT(scoutfs_trans_seq_remove,
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
TP_ARGS(sb, rid, trans_seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, s_rid)
__field(__u64, trans_seq)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->s_rid = rid;
__entry->trans_seq = trans_seq;
),
TP_printk(SCSBF" rid %016llx trans_seq %llu",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->trans_seq)
);
TRACE_EVENT(scoutfs_trans_seq_last,
TP_PROTO(struct super_block *sb, u64 rid, u64 trans_seq),
@@ -1977,9 +2045,9 @@ TRACE_EVENT(scoutfs_trans_seq_last,
TRACE_EVENT(scoutfs_get_log_merge_status,
TP_PROTO(struct super_block *sb, u64 rid, struct scoutfs_key *next_range_key,
u64 nr_requests, u64 nr_complete, u64 seq),
u64 nr_requests, u64 nr_complete, u64 last_seq, u64 seq),
TP_ARGS(sb, rid, next_range_key, nr_requests, nr_complete, seq),
TP_ARGS(sb, rid, next_range_key, nr_requests, nr_complete, last_seq, seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
@@ -1987,6 +2055,7 @@ TRACE_EVENT(scoutfs_get_log_merge_status,
sk_trace_define(next_range_key)
__field(__u64, nr_requests)
__field(__u64, nr_complete)
__field(__u64, last_seq)
__field(__u64, seq)
),
@@ -1996,20 +2065,21 @@ TRACE_EVENT(scoutfs_get_log_merge_status,
sk_trace_assign(next_range_key, next_range_key);
__entry->nr_requests = nr_requests;
__entry->nr_complete = nr_complete;
__entry->last_seq = last_seq;
__entry->seq = seq;
),
TP_printk(SCSBF" rid %016llx next_range_key "SK_FMT" nr_requests %llu nr_complete %llu seq %llu",
TP_printk(SCSBF" rid %016llx next_range_key "SK_FMT" nr_requests %llu nr_complete %llu last_seq %llu seq %llu",
SCSB_TRACE_ARGS, __entry->s_rid, sk_trace_args(next_range_key),
__entry->nr_requests, __entry->nr_complete, __entry->seq)
__entry->nr_requests, __entry->nr_complete, __entry->last_seq, __entry->seq)
);
TRACE_EVENT(scoutfs_get_log_merge_request,
TP_PROTO(struct super_block *sb, u64 rid,
struct scoutfs_btree_root *root, struct scoutfs_key *start,
struct scoutfs_key *end, u64 input_seq, u64 seq),
struct scoutfs_key *end, u64 last_seq, u64 seq),
TP_ARGS(sb, rid, root, start, end, input_seq, seq),
TP_ARGS(sb, rid, root, start, end, last_seq, seq),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
@@ -2019,7 +2089,7 @@ TRACE_EVENT(scoutfs_get_log_merge_request,
__field(__u8, root_height)
sk_trace_define(start)
sk_trace_define(end)
__field(__u64, input_seq)
__field(__u64, last_seq)
__field(__u64, seq)
),
@@ -2031,14 +2101,14 @@ TRACE_EVENT(scoutfs_get_log_merge_request,
__entry->root_height = root->height;
sk_trace_assign(start, start);
sk_trace_assign(end, end);
__entry->input_seq = input_seq;
__entry->last_seq = last_seq;
__entry->seq = seq;
),
TP_printk(SCSBF" rid %016llx root blkno %llu seq %llu height %u start "SK_FMT" end "SK_FMT" input_seq %llu seq %llu",
TP_printk(SCSBF" rid %016llx root blkno %llu seq %llu height %u start "SK_FMT" end "SK_FMT" last_seq %llu seq %llu",
SCSB_TRACE_ARGS, __entry->s_rid, __entry->root_blkno,
__entry->root_seq, __entry->root_height,
sk_trace_args(start), sk_trace_args(end), __entry->input_seq,
sk_trace_args(start), sk_trace_args(end), __entry->last_seq,
__entry->seq)
);
@@ -2541,36 +2611,6 @@ TRACE_EVENT(scoutfs_alloc_move,
__entry->ret)
);
DECLARE_EVENT_CLASS(scoutfs_alloc_extent_class,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
STE_FIELDS(ext)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
STE_ASSIGN(ext, ext);
),
TP_printk(SCSBF" ext "STE_FMT, SCSB_TRACE_ARGS, STE_ENTRY_ARGS(ext))
);
DEFINE_EVENT(scoutfs_alloc_extent_class, scoutfs_alloc_move_extent,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext)
);
DEFINE_EVENT(scoutfs_alloc_extent_class, scoutfs_alloc_fill_extent,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext)
);
DEFINE_EVENT(scoutfs_alloc_extent_class, scoutfs_alloc_empty_extent,
TP_PROTO(struct super_block *sb, struct scoutfs_extent *ext),
TP_ARGS(sb, ext)
);
TRACE_EVENT(scoutfs_item_read_page,
TP_PROTO(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *pg_start, struct scoutfs_key *pg_end),

File diff suppressed because it is too large.


@@ -28,7 +28,6 @@
#include "btree.h"
#include "spbm.h"
#include "client.h"
#include "counters.h"
#include "scoutfs_trace.h"
/*
@@ -1482,11 +1481,10 @@ static int kway_merge(struct super_block *sb,
int ind;
int i;
if (WARN_ON_ONCE(nr <= 0))
if (WARN_ON_ONCE(nr <= 1))
return -EINVAL;
/* always at least one parent for single leaf */
nr_parents = max_t(unsigned long, 1, roundup_pow_of_two(nr) - 1);
nr_parents = roundup_pow_of_two(nr) - 1;
/* root at [1] for easy sib/parent index calc, final pad for odd sib */
nr_nodes = 1 + nr_parents + nr + 1;
tnodes = __vmalloc(nr_nodes * sizeof(struct tourn_node),
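/*
 * Worked example, added for illustration: merging nr = 5 input streams
 * with the roundup_pow_of_two() form gives nr_parents = 8 - 1 = 7, so
 * nr_nodes = 1 + 7 + 5 + 1 = 14 tournament nodes are allocated.
 */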
@@ -2083,7 +2081,7 @@ static int delete_files(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_srch_compact *sc)
{
int ret = 0;
int ret;
int i;
for (i = 0; i < sc->nr; i++) {
@@ -2129,7 +2127,6 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
struct scoutfs_alloc alloc;
unsigned long delay;
int ret;
int err;
sc = kmalloc(sizeof(struct scoutfs_srch_compact), GFP_NOFS);
if (sc == NULL) {
@@ -2168,14 +2165,10 @@ commit:
sc->meta_freed = alloc.freed;
sc->flags |= ret < 0 ? SCOUTFS_SRCH_COMPACT_FLAG_ERROR : 0;
err = scoutfs_client_srch_commit_compact(sb, sc);
if (err < 0 && ret == 0)
ret = err;
ret = scoutfs_client_srch_commit_compact(sb, sc);
out:
/* our allocators and files should be stable */
WARN_ON_ONCE(ret == -ESTALE);
if (ret < 0)
scoutfs_inc_counter(sb, srch_compact_error);
scoutfs_block_writer_forget_all(sb, &wri);
if (!atomic_read(&srinf->shutdown)) {


@@ -20,6 +20,7 @@
#include <linux/statfs.h>
#include <linux/sched.h>
#include <linux/debugfs.h>
#include <linux/percpu.h>
#include "super.h"
#include "block.h"
@@ -51,34 +52,66 @@
static struct dentry *scoutfs_debugfs_root;
/* the statfs file fields can be small (and signed?) :/ */
static __statfs_word saturate_truncated_word(u64 files)
{
__statfs_word word = files;
static DEFINE_PER_CPU(u64, clock_sync_ids) = 0;
if (word != files) {
word = ~0ULL;
if (word < 0)
word = (unsigned long)word >> 1;
/*
* Give the caller a unique clock sync id for a message they're about to
* send. We make the ids reasonably globally unique by using randomly
* initialized per-cpu 64bit counters.
*/
__le64 scoutfs_clock_sync_id(void)
{
u64 rnd = 0;
u64 ret;
u64 *id;
retry:
preempt_disable();
id = this_cpu_ptr(&clock_sync_ids);
if (*id == 0) {
if (rnd == 0) {
preempt_enable();
get_random_bytes(&rnd, sizeof(rnd));
goto retry;
}
*id = rnd;
}
return word;
ret = ++(*id);
preempt_enable();
return cpu_to_le64(ret);
}
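/*
 * Usage sketch, assuming a message header carries a __le64
 * clock_sync_id field; this helper and the field are hypothetical,
 * while the trace events are the ones defined in the trace header
 * above.
 */
static void stamp_or_trace_clock_sync(__le64 *clock_sync_id, bool sending)
{
	if (sending) {
		*clock_sync_id = scoutfs_clock_sync_id();
		trace_scoutfs_send_clock_sync(*clock_sync_id);
	} else {
		trace_scoutfs_recv_clock_sync(*clock_sync_id);
	}
}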
struct statfs_free_blocks {
u64 meta;
u64 data;
};
static int count_free_blocks(struct super_block *sb, void *arg, int owner,
u64 id, bool meta, bool avail, u64 blocks)
{
struct statfs_free_blocks *sfb = arg;
if (meta)
sfb->meta += blocks;
else
sfb->data += blocks;
return 0;
}
/*
* The server gives us the current sum of free blocks and the total
* inode count that it can see across all the clients' log trees. It
* won't see allocations and inode creations or deletions that are dirty
* in client memory as it builds a transaction.
* Build the free block counts by having alloc read all the persistent
* blocks which contain allocators and calling us for each of them.
* Only the super block reads aren't cached, so repeatedly calling
* statfs is like repeated O_DIRECT IO. We could add a cache that
* serves stale results if that IO becomes a problem.
*
* We don't have static limits on the number of files so the statfs
* fields for the total possible files and the number free aren't
* particularly helpful. What we do want to report is the number of
* inodes, so we fake a max possible number of inodes given a
* conservative estimate of the total space consumption per file and
* then find the free by subtracting our precise count of active inodes.
* This seems like the least surprising compromise where the file max
* doesn't change and the caller gets the correct count of used inodes.
* We fake the free inode count by assuming that we can fill free
* blocks with a certain number of inodes. We then add the number of
* current inodes to that free count to determine the total possible
* inodes.
*
* The fsid that we report is constructed from the xor of the first two
* and second two little endian u32s that make up the uuid bytes.
@@ -86,33 +119,41 @@ static __statfs_word saturate_truncated_word(u64 files)
static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
{
struct super_block *sb = dentry->d_inode->i_sb;
struct scoutfs_net_statfs nst;
u64 files;
u64 ffree;
struct scoutfs_super_block *super = NULL;
struct statfs_free_blocks sfb = {0,};
__le32 uuid[4];
int ret;
scoutfs_inc_counter(sb, statfs);
ret = scoutfs_client_statfs(sb, &nst);
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
kst->f_bfree = (le64_to_cpu(nst.free_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(nst.free_data_blocks);
ret = scoutfs_alloc_foreach(sb, count_free_blocks, &sfb);
if (ret < 0)
goto out;
kst->f_bfree = (sfb.meta << SCOUTFS_BLOCK_SM_LG_SHIFT) + sfb.data;
kst->f_type = SCOUTFS_SUPER_MAGIC;
kst->f_bsize = SCOUTFS_BLOCK_SM_SIZE;
kst->f_blocks = (le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(nst.total_data_blocks);
kst->f_blocks = (le64_to_cpu(super->total_meta_blocks) <<
SCOUTFS_BLOCK_SM_LG_SHIFT) +
le64_to_cpu(super->total_data_blocks);
kst->f_bavail = kst->f_bfree;
files = div_u64(le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_LG_SHIFT, 2048);
ffree = files - le64_to_cpu(nst.inode_count);
kst->f_files = saturate_truncated_word(files);
kst->f_ffree = saturate_truncated_word(ffree);
/* arbitrarily assume ~1K / empty file */
kst->f_ffree = sfb.meta * (SCOUTFS_BLOCK_LG_SIZE / 1024);
kst->f_files = kst->f_ffree + le64_to_cpu(super->next_ino);
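/*
 * Worked example with assumed sizes: with 64KiB large blocks each free
 * metadata block is credited with 64 ~1KiB inodes, so one million free
 * meta blocks reports f_ffree = 64M and f_files = 64M + next_ino.
 */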
BUILD_BUG_ON(sizeof(uuid) != sizeof(nst.uuid));
memcpy(uuid, nst.uuid, sizeof(uuid));
BUILD_BUG_ON(sizeof(uuid) != sizeof(super->uuid));
memcpy(uuid, super->uuid, sizeof(uuid));
kst->f_fsid.val[0] = le32_to_cpu(uuid[0]) ^ le32_to_cpu(uuid[1]);
kst->f_fsid.val[1] = le32_to_cpu(uuid[2]) ^ le32_to_cpu(uuid[3]);
kst->f_namelen = SCOUTFS_NAME_LEN;
@@ -121,6 +162,8 @@ static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
/* the vfs fills f_flags */
ret = 0;
out:
kfree(super);
/*
* We don't take cluster locks in statfs which makes it a very
* convenient place to trigger lock reclaim for debugging. We
@@ -187,15 +230,7 @@ static void scoutfs_metadev_close(struct super_block *sb)
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
if (sbi->meta_bdev) {
/*
* Some kernels have blkdev_reread_part which calls
* fsync_bdev while holding the bd_mutex which inverts
* the s_umount hold in deactivate_super and blkdev_put
* from kill_sb->put_super.
*/
lockdep_off();
blkdev_put(sbi->meta_bdev, SCOUTFS_META_BDEV_MODE);
lockdep_on();
sbi->meta_bdev = NULL;
}
}
@@ -212,16 +247,7 @@ static void scoutfs_put_super(struct super_block *sb)
trace_scoutfs_put_super(sb);
/*
* Wait for invalidation and iput to finish with any lingering
* inode references that escaped the evict_inodes in
* generic_shutdown_super. MS_ACTIVE is clear so final iput
* will always evict.
*/
scoutfs_lock_flush_invalidate(sb);
scoutfs_inode_flush_iput(sb);
WARN_ON_ONCE(!list_empty(&sb->s_inodes));
scoutfs_inode_stop(sb);
scoutfs_forest_stop(sb);
scoutfs_srch_destroy(sb);
@@ -302,16 +328,28 @@ int scoutfs_write_super(struct super_block *sb,
sizeof(struct scoutfs_super_block));
}
static bool small_bdev(struct super_block *sb, char *which, u64 blocks,
struct block_device *bdev, int shift)
static bool invalid_blkno_limits(struct super_block *sb, char *which,
u64 start, __le64 first, __le64 last,
struct block_device *bdev, int shift)
{
u64 size = (u64)i_size_read(bdev->bd_inode);
u64 count = size >> shift;
u64 blkno;
if (blocks > count) {
scoutfs_err(sb, "super block records %llu %s blocks, but device %u:%u size %llu only allows %llu blocks",
blocks, which, MAJOR(bdev->bd_dev), MINOR(bdev->bd_dev), size, count);
if (le64_to_cpu(first) < start) {
scoutfs_err(sb, "super block first %s blkno %llu is within first valid blkno %llu",
which, le64_to_cpu(first), start);
return true;
}
if (le64_to_cpu(first) > le64_to_cpu(last)) {
scoutfs_err(sb, "super block first %s blkno %llu is greater than last %s blkno %llu",
which, le64_to_cpu(first), which, le64_to_cpu(last));
return true;
}
blkno = (i_size_read(bdev->bd_inode) >> shift) - 1;
if (le64_to_cpu(last) > blkno) {
scoutfs_err(sb, "super block last %s blkno %llu is beyond device size last blkno %llu",
which, le64_to_cpu(last), blkno);
return true;
}
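/*
 * Worked example with an assumed device: a 1TiB metadata device with
 * 64KiB blocks (shift 16) has (1TiB >> 16) - 1 = 16777215 as its last
 * valid blkno, so a super last_meta_blkno beyond that is rejected.
 */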
@@ -360,32 +398,27 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
goto out;
}
if (le64_to_cpu(super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN ||
le64_to_cpu(super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX) {
scoutfs_err(sb, "super block has format version %llu outside of supported version range %u-%u",
le64_to_cpu(super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
ret = -EINVAL;
goto out;
}
/*
* fill_supers checks the fmt_vers in both supers and then decides to use it.
* From then on we verify that the supers we read have that version.
*/
if (sbi->fmt_vers != 0 && le64_to_cpu(super->fmt_vers) != sbi->fmt_vers) {
scoutfs_err(sb, "super block has format version %llu than %llu read at mount",
le64_to_cpu(super->fmt_vers), sbi->fmt_vers);
if (super->version != cpu_to_le64(SCOUTFS_INTEROP_VERSION)) {
scoutfs_err(sb, "super block has invalid version %llu, expected %llu",
le64_to_cpu(super->version),
SCOUTFS_INTEROP_VERSION);
ret = -EINVAL;
goto out;
}
/* XXX do we want more rigorous invalid super checking? */
if (small_bdev(sb, "metadata", le64_to_cpu(super->total_meta_blocks), sbi->meta_bdev,
SCOUTFS_BLOCK_LG_SHIFT) ||
small_bdev(sb, "data", le64_to_cpu(super->total_data_blocks), sb->s_bdev,
SCOUTFS_BLOCK_SM_SHIFT)) {
if (invalid_blkno_limits(sb, "meta",
SCOUTFS_META_DEV_START_BLKNO,
super->first_meta_blkno,
super->last_meta_blkno, sbi->meta_bdev,
SCOUTFS_BLOCK_LG_SHIFT) ||
invalid_blkno_limits(sb, "data",
SCOUTFS_DATA_DEV_START_BLKNO,
super->first_data_blkno,
super->last_data_blkno, sb->s_bdev,
SCOUTFS_BLOCK_SM_SHIFT)) {
ret = -EINVAL;
}
@@ -492,14 +525,6 @@ static int scoutfs_read_supers(struct super_block *sb)
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) != le64_to_cpu(data_super->fmt_vers)) {
scoutfs_err(sb, "meta device format version %llu != data device format version %llu",
le64_to_cpu(meta_super->fmt_vers), le64_to_cpu(data_super->fmt_vers));
goto out;
}
sbi->fmt_vers = le64_to_cpu(meta_super->fmt_vers);
sbi->super = *meta_super;
out:
kfree(meta_super);
@@ -521,7 +546,6 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sb->s_maxbytes = MAX_LFS_FILESIZE;
sb->s_op = &scoutfs_super_ops;
sb->s_export_op = &scoutfs_export_ops;
sb->s_flags |= MS_I_VERSION;
/* btree blocks use long lived bh->b_data refs */
mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
@@ -537,8 +561,12 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
return ret;
spin_lock_init(&sbi->next_ino_lock);
init_waitqueue_head(&sbi->trans_hold_wq);
spin_lock_init(&sbi->data_wait_root.lock);
sbi->data_wait_root.root = RB_ROOT;
spin_lock_init(&sbi->trans_write_lock);
INIT_DELAYED_WORK(&sbi->trans_write_work, scoutfs_trans_write_func);
init_waitqueue_head(&sbi->trans_write_wq);
scoutfs_sysfs_init_attrs(sb, &sbi->mopts_ssa);
ret = scoutfs_parse_options(sb, data, &opts);
@@ -594,16 +622,15 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
scoutfs_quorum_setup(sb) ?:
scoutfs_client_setup(sb) ?:
scoutfs_volopt_setup(sb) ?:
scoutfs_srch_setup(sb);
scoutfs_trans_get_log_trees(sb) ?:
scoutfs_srch_setup(sb) ?:
scoutfs_inode_start(sb);
if (ret)
goto out;
/* this interruptible iget lets hung mount be aborted with ctl-c */
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO, SCOUTFS_LKF_INTERRUPTIBLE);
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
if (ret == -ERESTARTSYS)
ret = -EINTR;
goto out;
}
@@ -613,14 +640,10 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
goto out;
}
/* send requests once iget progress shows we had a server */
ret = scoutfs_trans_get_log_trees(sb);
ret = scoutfs_client_advance_seq(sb, &sbi->trans_seq);
if (ret)
goto out;
/* start up background services that use everything else */
scoutfs_inode_start(sb);
scoutfs_forest_start(sb);
scoutfs_trans_restart_sync_deadline(sb);
ret = 0;
out:
@@ -642,17 +665,10 @@ static struct dentry *scoutfs_mount(struct file_system_type *fs_type, int flags,
*/
static void scoutfs_kill_sb(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
trace_scoutfs_kill_sb(sb);
if (sbi) {
sbi->unmounting = true;
smp_wmb();
}
if (SCOUTFS_HAS_SBI(sb)) {
scoutfs_inode_orphan_stop(sb);
if (SCOUTFS_HAS_SBI(sb))
scoutfs_lock_unmount_begin(sb);
}
kill_block_super(sb);
}
@@ -685,15 +701,11 @@ static int __init scoutfs_module_init(void)
*/
__asm__ __volatile__ (
".section .note.git_describe,\"a\"\n"
".ascii \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".string \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_format_version_min,\"a\"\n"
".ascii \""SCOUTFS_FORMAT_VERSION_MIN_STR"\\n\"\n"
".previous\n");
__asm__ __volatile__ (
".section .note.scoutfs_format_version_max,\"a\"\n"
".ascii \""SCOUTFS_FORMAT_VERSION_MAX_STR"\\n\"\n"
".section .note.scoutfs_interop_version,\"a\"\n"
".string \""SCOUTFS_INTEROP_VERSION_STR"\\n\"\n"
".previous\n");
scoutfs_init_counters();
@@ -727,5 +739,4 @@ module_exit(scoutfs_module_exit)
MODULE_AUTHOR("Zach Brown <zab@versity.com>");
MODULE_LICENSE("GPL");
MODULE_INFO(git_describe, SCOUTFS_GIT_DESCRIBE);
MODULE_INFO(scoutfs_format_version_min, SCOUTFS_FORMAT_VERSION_MIN_STR);
MODULE_INFO(scoutfs_format_version_max, SCOUTFS_FORMAT_VERSION_MAX_STR);
MODULE_INFO(scoutfs_interop_version, SCOUTFS_INTEROP_VERSION_STR);


@@ -36,7 +36,6 @@ struct scoutfs_sb_info {
/* assigned once at the start of each mount, read-only */
u64 rid;
u64 fmt_vers;
struct scoutfs_super_block super;
@@ -57,11 +56,20 @@ struct scoutfs_sb_info {
struct item_cache_info *item_cache_info;
struct fence_info *fence_info;
wait_queue_head_t trans_hold_wq;
struct task_struct *trans_task;
/* tracks tasks waiting for data extents */
struct scoutfs_data_wait_root data_wait_root;
/* set as transaction opens with trans holders excluded */
spinlock_t trans_write_lock;
u64 trans_write_count;
u64 trans_seq;
int trans_write_ret;
struct delayed_work trans_write_work;
wait_queue_head_t trans_write_wq;
struct workqueue_struct *trans_write_workq;
bool trans_deadline_expired;
struct trans_info *trans_info;
struct lock_info *lock_info;
@@ -81,7 +89,6 @@ struct scoutfs_sb_info {
struct dentry *debug_root;
bool forced_unmount;
bool unmounting;
unsigned long corruption_messages_once[SC_NR_LONGS];
};
@@ -110,19 +117,6 @@ static inline bool scoutfs_forcing_unmount(struct super_block *sb)
return sbi->forced_unmount;
}
/*
* True if we're shutting down the system and can be used as a coarse
* indicator that we can avoid doing some work that no longer makes
* sense.
*/
static inline bool scoutfs_unmounting(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
smp_rmb();
return !sbi || sbi->unmounting;
}
/*
* A small string embedded in messages that's used to identify a
* specific mount. It's the three most significant bytes of the fsid
@@ -160,4 +154,6 @@ int scoutfs_write_super(struct super_block *sb,
/* to keep this out of the ioctl.h public interface definition */
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
__le64 scoutfs_clock_sync_id(void);
#endif


@@ -37,16 +37,6 @@ struct attr_funcs {
#define ATTR_FUNCS_RO(_name) \
static struct attr_funcs _name##_attr_funcs = __ATTR_RO(_name)
static ssize_t format_version_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return snprintf(buf, PAGE_SIZE, "%llu\n", sbi->fmt_vers);
}
ATTR_FUNCS_RO(format_version);
static ssize_t fsid_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
@@ -101,7 +91,6 @@ static ssize_t attr_funcs_show(struct kobject *kobj, struct attribute *attr,
static struct attribute *sb_id_attrs[] = {
&format_version_attr_funcs.attr,
&fsid_attr_funcs.attr,
&rid_attr_funcs.attr,
NULL,


@@ -17,7 +17,6 @@
#include <linux/atomic.h>
#include <linux/writeback.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include "super.h"
#include "trans.h"
@@ -54,24 +53,15 @@
/* sync dirty data at least this often */
#define TRANS_SYNC_DELAY (HZ * 10)
/*
* XXX move the rest of the super trans_ fields here.
*/
struct trans_info {
struct super_block *sb;
atomic_t holders;
struct scoutfs_log_trees lt;
struct scoutfs_alloc alloc;
struct scoutfs_block_writer wri;
wait_queue_head_t hold_wq;
struct task_struct *task;
spinlock_t write_lock;
u64 write_count;
int write_ret;
struct delayed_work write_work;
wait_queue_head_t write_wq;
struct workqueue_struct *write_workq;
bool deadline_expired;
};
#define DECLARE_TRANS_INFO(sb, name) \
@@ -101,7 +91,6 @@ static int commit_btrees(struct super_block *sb)
*/
int scoutfs_trans_get_log_trees(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_log_trees lt;
int ret = 0;
@@ -114,11 +103,6 @@ int scoutfs_trans_get_log_trees(struct super_block *sb)
scoutfs_forest_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
scoutfs_data_init_btrees(sb, &tri->alloc, &tri->wri, &lt);
/* first set during mount from 0 to nonzero allows commits */
spin_lock(&tri->write_lock);
sbi->trans_seq = le64_to_cpu(lt.get_trans_seq);
spin_unlock(&tri->write_lock);
}
return ret;
}
@@ -136,12 +120,13 @@ bool scoutfs_trans_has_dirty(struct super_block *sb)
*/
static void sub_holders_and_wake(struct super_block *sb, int val)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
atomic_sub(val, &tri->holders);
smp_mb(); /* make sure sub is visible before we wake */
if (waitqueue_active(&tri->hold_wq))
wake_up(&tri->hold_wq);
if (waitqueue_active(&sbi->trans_hold_wq))
wake_up(&sbi->trans_hold_wq);
}
/*
@@ -169,93 +154,96 @@ static bool drained_holders(struct trans_info *tri)
* functions that would try to hold the transaction. We record the task
* that is committing the transaction so that holding won't deadlock.
*
* Once we clear the write func bit in holders then waiting holders can
* enter the transaction and continue modifying the transaction. Once
* we start writing we consider the transaction done and won't exit,
* clearing the write func bit, until get_log_trees has opened the next
* transaction. The exception is forced unmount which is allowed to
* generate errors and throw away data.
* Any dirty block had to have allocated a new blkno which would have
* created dirty allocator metadata blocks. We can avoid writing
* entirely if we don't have any dirty metadata blocks. This is
* important because we don't try to serialize this work during
* unmount; we can execute as the vfs is shutting down, so we need to
* decide that nothing is dirty without calling the vfs at all.
*
* This means that the only way fsync can return an error is if we're in
* forced unmount.
* We first try to sync the dirty inodes and write their dirty data blocks,
* then we write all our dirty metadata blocks, and only when those succeed
* do we write the new super that references all of these newly written blocks.
*
* If there are write errors then blocks are kept dirty in memory and will
* be written again at the next sync.
*/
void scoutfs_trans_write_func(struct work_struct *work)
{
struct trans_info *tri = container_of(work, struct trans_info, write_work.work);
struct super_block *sb = tri->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
bool retrying = false;
struct scoutfs_sb_info *sbi = container_of(work, struct scoutfs_sb_info,
trans_write_work.work);
struct super_block *sb = sbi->sb;
DECLARE_TRANS_INFO(sb, tri);
u64 trans_seq = sbi->trans_seq;
char *s = NULL;
int ret = 0;
tri->task = current;
sbi->trans_task = current;
/* mark that we're writing so holders wait for us to finish and clear our bit */
atomic_add(TRANS_HOLDERS_WRITE_FUNC_BIT, &tri->holders);
wait_event(tri->hold_wq, drained_holders(tri));
/* mount hasn't opened first transaction yet, still complete sync */
if (sbi->trans_seq == 0) {
ret = 0;
goto out;
}
wait_event(sbi->trans_hold_wq, drained_holders(tri));
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
goto out;
}
trace_scoutfs_trans_write_func(sb, scoutfs_block_writer_dirty_bytes(sb, &tri->wri),
scoutfs_item_dirty_pages(sb));
trace_scoutfs_trans_write_func(sb,
scoutfs_block_writer_dirty_bytes(sb, &tri->wri));
if (tri->deadline_expired)
if (!scoutfs_block_writer_has_dirty(sb, &tri->wri) &&
!scoutfs_item_dirty_pages(sb)) {
if (sbi->trans_deadline_expired) {
/*
* If we're not writing data then we only advance the
* seq at the sync deadline interval. This keeps idle
* mounts from pinning a seq and stopping readers of the
* seq indices but doesn't send a message for every sync
* syscall.
*/
ret = scoutfs_client_advance_seq(sb, &trans_seq);
if (ret < 0)
s = "clean advance seq";
}
goto err;
}
if (sbi->trans_deadline_expired)
scoutfs_inc_counter(sb, trans_commit_timer);
scoutfs_inc_counter(sb, trans_commit_written);
do {
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb, &tri->alloc,
&tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
if (ret < 0) {
if (!retrying) {
scoutfs_warn(sb, "critical transaction commit failure: %s = %d, retrying",
s, ret);
retrying = true;
}
if (scoutfs_forcing_unmount(sb)) {
ret = -EIO;
break;
}
msleep(2 * MSEC_PER_SEC);
} else if (retrying) {
scoutfs_info(sb, "retried transaction commit succeeded");
}
} while (ret < 0);
/* XXX this all needs serious work for dealing with errors */
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb,
&tri->alloc, &tri->wri)) ?:
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
(s = "commit log trees", commit_btrees(sb)) ?:
scoutfs_item_write_done(sb) ?:
(s = "advance seq", scoutfs_client_advance_seq(sb, &trans_seq)) ?:
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
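/*
 * Each stage above pairs a label with its call via the comma operator,
 * (s = "label", call()), so when the ?: chain stops at the first
 * nonzero (error) return, s names the failing stage for the message.
 */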
err:
if (ret < 0)
scoutfs_err(sb, "critical transaction commit failure: %s, %d",
s, ret);
out:
spin_lock(&tri->write_lock);
tri->write_count++;
tri->write_ret = ret;
spin_unlock(&tri->write_lock);
wake_up(&tri->write_wq);
spin_lock(&sbi->trans_write_lock);
sbi->trans_write_count++;
sbi->trans_write_ret = ret;
sbi->trans_seq = trans_seq;
spin_unlock(&sbi->trans_write_lock);
wake_up(&sbi->trans_write_wq);
/* we're done, wake waiting holders */
sub_holders_and_wake(sb, TRANS_HOLDERS_WRITE_FUNC_BIT);
tri->task = NULL;
sbi->trans_task = NULL;
scoutfs_trans_restart_sync_deadline(sb);
}
@@ -266,17 +254,17 @@ struct write_attempt {
};
/* this is called as a wait_event() condition so it can't change task state */
static int write_attempted(struct super_block *sb, struct write_attempt *attempt)
static int write_attempted(struct scoutfs_sb_info *sbi,
struct write_attempt *attempt)
{
DECLARE_TRANS_INFO(sb, tri);
int done = 1;
spin_lock(&tri->write_lock);
if (tri->write_count > attempt->count)
attempt->ret = tri->write_ret;
spin_lock(&sbi->trans_write_lock);
if (sbi->trans_write_count > attempt->count)
attempt->ret = sbi->trans_write_ret;
else
done = 0;
spin_unlock(&tri->write_lock);
spin_unlock(&sbi->trans_write_lock);
return done;
}
@@ -286,12 +274,10 @@ static int write_attempted(struct super_block *sb, struct write_attempt *attempt
* We always have delayed sync work pending but the caller wants it
* to execute immediately.
*/
static void queue_trans_work(struct super_block *sb)
static void queue_trans_work(struct scoutfs_sb_info *sbi)
{
DECLARE_TRANS_INFO(sb, tri);
tri->deadline_expired = false;
mod_delayed_work(tri->write_workq, &tri->write_work, 0);
sbi->trans_deadline_expired = false;
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work, 0);
}
/*
@@ -304,24 +290,26 @@ static void queue_trans_work(struct super_block *sb)
*/
int scoutfs_trans_sync(struct super_block *sb, int wait)
{
DECLARE_TRANS_INFO(sb, tri);
struct write_attempt attempt = { .ret = 0 };
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct write_attempt attempt;
int ret;
if (!wait) {
queue_trans_work(sb);
queue_trans_work(sbi);
return 0;
}
spin_lock(&tri->write_lock);
attempt.count = tri->write_count;
spin_unlock(&tri->write_lock);
spin_lock(&sbi->trans_write_lock);
attempt.count = sbi->trans_write_count;
spin_unlock(&sbi->trans_write_lock);
queue_trans_work(sb);
queue_trans_work(sbi);
wait_event(tri->write_wq, write_attempted(sb, &attempt));
ret = attempt.ret;
ret = wait_event_interruptible(sbi->trans_write_wq,
write_attempted(sbi, &attempt));
if (ret == 0)
ret = attempt.ret;
return ret;
}
@@ -337,10 +325,10 @@ int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
void scoutfs_trans_restart_sync_deadline(struct super_block *sb)
{
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
tri->deadline_expired = true;
mod_delayed_work(tri->write_workq, &tri->write_work,
sbi->trans_deadline_expired = true;
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work,
TRANS_SYNC_DELAY);
}
@@ -494,16 +482,10 @@ int scoutfs_hold_trans(struct super_block *sb, bool allocing)
u64 seq;
int ret;
if (current == tri->task)
if (current == sbi->trans_task)
return 0;
for (;;) {
/* shouldn't get holders until mount finishes, (not locking for cheap test) */
if (WARN_ON_ONCE(sbi->trans_seq == 0)) {
ret = -EINVAL;
break;
}
/* if a caller already has a hold we acquire unconditionally */
if (inc_journal_info_holders()) {
atomic_inc(&tri->holders);
@@ -514,7 +496,9 @@ int scoutfs_hold_trans(struct super_block *sb, bool allocing)
/* wait until the writer work is finished */
if (!inc_holders_unless_writer(tri)) {
dec_journal_info_holders();
wait_event(tri->hold_wq, holders_no_writer(tri));
ret = wait_event_interruptible(sbi->trans_hold_wq, holders_no_writer(tri));
if (ret < 0)
break;
continue;
}
@@ -529,8 +513,11 @@ int scoutfs_hold_trans(struct super_block *sb, bool allocing)
if (commit_before_hold(sb, tri)) {
seq = scoutfs_trans_sample_seq(sb);
release_holders(sb);
queue_trans_work(sb);
wait_event(tri->hold_wq, scoutfs_trans_sample_seq(sb) != seq);
queue_trans_work(sbi);
ret = wait_event_interruptible(sbi->trans_hold_wq,
scoutfs_trans_sample_seq(sb) != seq);
if (ret < 0)
break;
continue;
}
@@ -556,9 +543,10 @@ bool scoutfs_trans_held(void)
void scoutfs_release_trans(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_TRANS_INFO(sb, tri);
if (current == tri->task)
if (current == sbi->trans_task)
return;
release_holders(sb);
@@ -573,13 +561,12 @@ void scoutfs_release_trans(struct super_block *sb)
*/
u64 scoutfs_trans_sample_seq(struct super_block *sb)
{
DECLARE_TRANS_INFO(sb, tri);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
u64 ret;
spin_lock(&tri->write_lock);
spin_lock(&sbi->trans_write_lock);
ret = sbi->trans_seq;
spin_unlock(&tri->write_lock);
spin_unlock(&sbi->trans_write_lock);
return ret;
}
@@ -593,17 +580,12 @@ int scoutfs_setup_trans(struct super_block *sb)
if (!tri)
return -ENOMEM;
tri->sb = sb;
atomic_set(&tri->holders, 0);
scoutfs_block_writer_init(sb, &tri->wri);
spin_lock_init(&tri->write_lock);
INIT_DELAYED_WORK(&tri->write_work, scoutfs_trans_write_func);
init_waitqueue_head(&tri->write_wq);
init_waitqueue_head(&tri->hold_wq);
tri->write_workq = alloc_workqueue("scoutfs_trans", WQ_UNBOUND, 1);
if (!tri->write_workq) {
sbi->trans_write_workq = alloc_workqueue("scoutfs_trans",
WQ_UNBOUND, 1);
if (!sbi->trans_write_workq) {
kfree(tri);
return -ENOMEM;
}
@@ -630,14 +612,14 @@ void scoutfs_shutdown_trans(struct super_block *sb)
DECLARE_TRANS_INFO(sb, tri);
if (tri) {
if (tri->write_workq) {
if (sbi->trans_write_workq) {
/* immediately queues pending timer */
flush_delayed_work(&tri->write_work);
flush_delayed_work(&sbi->trans_write_work);
/* prevents re-arming if it has to wait */
cancel_delayed_work_sync(&tri->write_work);
destroy_workqueue(tri->write_workq);
cancel_delayed_work_sync(&sbi->trans_write_work);
destroy_workqueue(sbi->trans_write_workq);
/* trans work scheduled after shutdown sees NULL */
tri->write_workq = NULL;
sbi->trans_write_workq = NULL;
}
scoutfs_block_writer_forget_all(sb, &tri->wri);


@@ -97,7 +97,6 @@ static int unknown_prefix(const char *name)
#define HIDE_TAG "hide."
#define SRCH_TAG "srch."
#define TOTL_TAG "totl."
#define TAG_LEN (sizeof(HIDE_TAG) - 1)
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
@@ -120,9 +119,6 @@ int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
} else if (!strncmp(name, SRCH_TAG, TAG_LEN)) {
if (++tgs->srch == 0)
return -EINVAL;
} else if (!strncmp(name, TOTL_TAG, TAG_LEN)) {
if (++tgs->totl == 0)
return -EINVAL;
} else {
/* only reason to use scoutfs. is tags */
if (!found)
@@ -368,7 +364,7 @@ static int change_xattr_items(struct inode *inode, u64 id,
}
/* update dirtied overlapping existing items, last partial first */
for (i = min(old_parts, new_parts) - 1; i >= 0; i--) {
for (i = old_parts - 1; i >= 0; i--) {
off = i * SCOUTFS_XATTR_MAX_PART_SIZE;
bytes = min_t(unsigned int, new_bytes - off,
SCOUTFS_XATTR_MAX_PART_SIZE);
@@ -472,100 +468,6 @@ out:
return ret;
}
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name)
{
scoutfs_key_set_zeros(key);
key->sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
key->skxt_a = cpu_to_le64(name[0]);
key->skxt_b = cpu_to_le64(name[1]);
key->skxt_c = cpu_to_le64(name[2]);
}
/*
* Parse a u64 in any base after null terminating it while forbidding
* the leading + and trailing \n that kstrtoull allows.
*/
static int parse_totl_u64(const char *s, int len, u64 *res)
{
char str[SCOUTFS_XATTR_MAX_TOTL_U64 + 1];
if (len <= 0 || len >= ARRAY_SIZE(str) || s[0] == '+' || s[len - 1] == '\n')
return -EINVAL;
memcpy(str, s, len);
str[len] = '\0';
return kstrtoull(str, 0, res) != 0 ? -EINVAL : 0;
}
/*
* non-destructive relatively quick parse of the last 3 dotted u64s that
* make up the name of the xattr total. -EINVAL is returned if there
* are anything but 3 valid u64 encodings between single dots at the end
* of the name.
*/
static int parse_totl_key(struct scoutfs_key *key, const char *name, int name_len)
{
u64 tot_name[3];
int end = name_len;
int nr = 0;
int len;
int ret;
int i;
/* parse name elements in reverse order from end of xattr name string */
for (i = name_len - 1; i >= 0 && nr < ARRAY_SIZE(tot_name); i--) {
if (name[i] != '.')
continue;
len = end - (i + 1);
ret = parse_totl_u64(&name[i + 1], len, &tot_name[nr]);
if (ret < 0)
goto out;
end = i;
nr++;
}
if (nr == ARRAY_SIZE(tot_name)) {
/* swap to account for parsing in reverse */
swap(tot_name[0], tot_name[2]);
scoutfs_xattr_init_totl_key(key, tot_name);
ret = 0;
} else {
ret = -EINVAL;
}
out:
return ret;
}
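/*
 * Example, added for illustration: an xattr whose name ends in
 * ".1.2.3" parses to tot_name {1, 2, 3}, keying the shared total item
 * that sums every such xattr's value across files.
 */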
static int apply_totl_delta(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_xattr_totl_val *tval, struct scoutfs_lock *lock)
{
if (tval->total == 0 && tval->count == 0)
return 0;
return scoutfs_item_delta(sb, key, tval, sizeof(*tval), lock);
}
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
{
struct scoutfs_xattr_totl_val *s_tval = src;
struct scoutfs_xattr_totl_val *d_tval = dst;
if (src_len != sizeof(*s_tval) || dst_len != src_len)
return -EIO;
le64_add_cpu(&d_tval->total, le64_to_cpu(s_tval->total));
le64_add_cpu(&d_tval->count, le64_to_cpu(s_tval->count));
if (d_tval->total == 0 && d_tval->count == 0)
return SCOUTFS_DELTA_COMBINED_NULL;
return SCOUTFS_DELTA_COMBINED;
}
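/*
 * Illustration: combining a dst of {total 10, count 2} with a src
 * delta of {total -10, count -2} leaves both fields zero, so the
 * caller can drop the item (SCOUTFS_DELTA_COMBINED_NULL).
 */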
/*
* The confusing swiss army knife of creating, modifying, and deleting
* xattrs.
@@ -584,22 +486,16 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_xattr_totl_val tval = {0,};
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_lock *totl_lock = NULL;
size_t name_len = strlen(name);
struct scoutfs_key totl_key;
struct scoutfs_key key;
bool undo_srch = false;
bool undo_totl = false;
LIST_HEAD(ind_locks);
u8 found_parts;
unsigned int bytes;
unsigned int val_len;
u64 ind_seq;
u64 total;
u64 hash = 0;
u64 id = 0;
int ret;
@@ -623,15 +519,11 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
if ((tgs.hide | tgs.srch | tgs.totl) && !capable(CAP_SYS_ADMIN))
if ((tgs.hide || tgs.srch) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (tgs.totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
return ret;
bytes = sizeof(struct scoutfs_xattr) + name_len + size;
/* alloc enough to read old totl value */
xat = __vmalloc(bytes + SCOUTFS_XATTR_MAX_TOTL_U64, GFP_NOFS, PAGE_KERNEL);
xat = __vmalloc(bytes, GFP_NOFS, PAGE_KERNEL);
if (!xat) {
ret = -ENOMEM;
goto out;
@@ -644,9 +536,9 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
down_write(&si->xattr_rwsem);
/* find an existing xattr to delete, including possible totl value */
/* find an existing xattr to delete */
ret = get_next_xattr(inode, &key, xat,
sizeof(struct scoutfs_xattr) + name_len + SCOUTFS_XATTR_MAX_TOTL_U64,
sizeof(struct scoutfs_xattr) + name_len,
name, name_len, 0, 0, lck);
if (ret < 0 && ret != -ENOENT)
goto unlock;
@@ -666,23 +558,9 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
goto unlock;
}
/* s64 count delta if we create or delete */
if (tgs.totl)
tval.count = cpu_to_le64((u64)!!(value) - (u64)!!(ret != -ENOENT));
/* found fields in key will also be used */
found_parts = ret >= 0 ? xattr_nr_parts(xat) : 0;
if (found_parts && tgs.totl) {
/* parse old totl value before we clobber xat buf */
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
ret = parse_totl_u64(&xat->name[xat->name_len], val_len, &total);
if (ret < 0)
goto unlock;
le64_add_cpu(&tval.total, -total);
}
/* prepare our xattr */
if (value) {
if (found_parts)
@@ -694,20 +572,6 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
memset(xat->__pad, 0, sizeof(xat->__pad));
memcpy(xat->name, name, name_len);
memcpy(&xat->name[xat->name_len], value, size);
if (tgs.totl) {
ret = parse_totl_u64(value, size, &total);
if (ret < 0)
goto unlock;
}
le64_add_cpu(&tval.total, total);
}
if (tgs.totl) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret)
goto unlock;
}
retry:
@@ -733,13 +597,6 @@ retry:
undo_srch = true;
}
if (tgs.totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
goto release;
undo_totl = true;
}
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, bytes,
xattr_nr_parts(xat), found_parts, lck);
@@ -763,20 +620,12 @@ release:
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
if (ret < 0 && undo_totl) {
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
BUG_ON(err);
}
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
unlock:
up_write(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
out:
vfree(xat);
@@ -897,22 +746,15 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
{
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *totl_lock = NULL;
struct scoutfs_xattr_totl_val tval;
struct scoutfs_key totl_key;
struct scoutfs_key last;
struct scoutfs_key key;
bool release = false;
unsigned int bytes;
unsigned int val_len;
void *value;
u64 total;
u64 hash;
int ret;
/* need a buffer large enough for all possible names and totl value */
bytes = sizeof(struct scoutfs_xattr) + SCOUTFS_XATTR_MAX_NAME_LEN +
SCOUTFS_XATTR_MAX_TOTL_U64;
/* need a buffer large enough for all possible names */
bytes = sizeof(struct scoutfs_xattr) + SCOUTFS_XATTR_MAX_NAME_LEN;
xat = kmalloc(bytes, GFP_NOFS);
if (!xat) {
ret = -ENOMEM;
@@ -931,37 +773,11 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
break;
}
if (key.skx_part == 0 && (ret < sizeof(struct scoutfs_xattr) ||
ret < offsetof(struct scoutfs_xattr, name[xat->name_len]))) {
ret = -EIO;
break;
}
if (key.skx_part != 0 ||
scoutfs_xattr_parse_tags(xat->name, xat->name_len,
&tgs) != 0)
memset(&tgs, 0, sizeof(tgs));
if (tgs.totl) {
value = &xat->name[xat->name_len];
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
if (val_len != le16_to_cpu(xat->val_len)) {
ret = -EIO;
goto out;
}
ret = parse_totl_key(&totl_key, xat->name, xat->name_len) ?:
parse_totl_u64(value, val_len, &total);
if (ret < 0)
break;
}
if (tgs.totl && totl_lock == NULL) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret < 0)
break;
}
ret = scoutfs_hold_trans(sb, false);
if (ret < 0)
break;
@@ -979,14 +795,6 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
break;
}
if (tgs.totl) {
tval.total = cpu_to_le64(-total);
tval.count = cpu_to_le64(-1LL);
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
break;
}
scoutfs_release_trans(sb);
release = false;
@@ -995,7 +803,6 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
if (release)
scoutfs_release_trans(sb);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
kfree(xat);
out:
return ret;


@@ -16,14 +16,10 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
srch:1,
totl:1;
srch:1;
};
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs);
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name);
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len);
#endif


@@ -40,7 +40,7 @@ t_filter_dmesg()
# mount and unmount spew a bunch
re="$re|scoutfs.*client connected"
re="$re|scoutfs.*client disconnected"
re="$re|scoutfs.*server starting"
re="$re|scoutfs.*server setting up"
re="$re|scoutfs.*server ready"
re="$re|scoutfs.*server accepted"
re="$re|scoutfs.*server closing"


@@ -53,5 +53,3 @@ mv: cannot move /mnt/test/test/basic-posix-consistency/dir/c/clobber to
== inode indexes match after syncing existing
== inode indexes match after copying and syncing
== inode indexes match after removing and syncing
== concurrent creates make one file
one-file


@@ -0,0 +1,4 @@
== create per mount files
== time independent modification
== time concurrent independent modification
== time concurrent conflicting modification


@@ -1,27 +0,0 @@
== make initial small fs
== 0s do nothing
== shrinking fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== existing sizes do nothing
== growing outside device fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== resizing meta works
== resizing data works
== shrinking back fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== resizing again does nothing
== resizing to full works
== cleanup extra fs


@@ -16,4 +16,3 @@ setfattr: /mnt/test/test/simple-xattr-unit/file: Numerical result out of range
setfattr: /mnt/test/test/simple-xattr-unit/file: Argument list too long
=== good length boundaries
=== 500 random lengths
=== alternate val size between interesting sizes


@@ -2,7 +2,6 @@
== update existing xattr
== remove an xattr
== remove xattr with files
== trigger small log merges by rotating single block with unmount
== create entries in current log
== delete small fraction
== remove files


@@ -1,30 +0,0 @@
== single file
1.2.3 = 1, 1
4.5.6 = 1, 1
== multiple files add up
1.2.3 = 2, 2
4.5.6 = 2, 2
== removing xattr updates total
1.2.3 = 2, 2
4.5.6 = 1, 1
== updating xattr updates total
1.2.3 = 11, 2
4.5.6 = 1, 1
== removing files update total
1.2.3 = 10, 1
== multiple files/names in one transaction
1.2.3 = 55, 10
== testing invalid names
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
== testing invalid values
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
setfattr: /mnt/test/test/totl-xattr-tag/invalid: Invalid argument
== larger population that could merge


@@ -254,20 +254,17 @@ test -e "$T_RESULTS" || mkdir -p "$T_RESULTS"
test -d "$T_RESULTS" || \
die "$T_RESULTS dir is not a directory"
# might as well build our stuff with all cpus, assuming idle system
MAKE_ARGS="-j $(getconf _NPROCESSORS_ONLN)"
# build kernel module
msg "building kmod/ dir $T_KMOD"
cmd cd "$T_KMOD"
cmd make $MAKE_ARGS
cmd make
cmd sync
cmd cd -
# build utils
msg "building utils/ dir $T_UTILS"
cmd cd "$T_UTILS"
cmd make $MAKE_ARGS
cmd make
cmd sync
cmd cd -
@@ -284,7 +281,7 @@ fi
# building our test binaries
msg "building test binaries"
cmd make $MAKE_ARGS
cmd make
# set any options implied by others
test -n "$T_MKFS" && T_UNMOUNT=1


@@ -10,7 +10,6 @@ move-blocks.sh
enospc.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
totl-xattr-tag.sh
lock-refleak.sh
lock-shrink-consistency.sh
lock-pr-cw-conflict.sh
@@ -26,10 +25,10 @@ basic-posix-consistency.sh
dirent-consistency.sh
mkdir-rename-rmdir.sh
lock-ex-race-processes.sh
lock-conflicting-batch-commit.sh
cross-mount-data-free.sh
persistent-item-vers.sh
setup-error-teardown.sh
resize-devices.sh
fence-and-reclaim.sh
orphan-inodes.sh
mount-unmount-race.sh


@@ -48,9 +48,8 @@ char buf[SZ];
int main(int argc, char **argv)
{
struct scoutfs_ioctl_release rel = {0};
struct scoutfs_ioctl_release ioctl_args = {0};
struct scoutfs_ioctl_move_blocks mb;
struct scoutfs_ioctl_stat_more stm;
struct sub_tmp_info sub_tmps[8];
int tot_size = 0;
char *dest_file;
@@ -112,19 +111,12 @@ int main(int argc, char **argv)
exit(1);
}
// get current data_version after fallocate's size extensions
ret = ioctl(dest_fd, SCOUTFS_IOC_STAT_MORE, &stm);
if (ret < 0) {
perror("stat_more ioctl error");
exit(1);
}
// release everything in dest file
rel.offset = 0;
rel.length = tot_size;
rel.data_version = stm.data_version;
ioctl_args.offset = 0;
ioctl_args.length = tot_size;
ioctl_args.data_version = 0;
ret = ioctl(dest_fd, SCOUTFS_IOC_RELEASE, &rel);
ret = ioctl(dest_fd, SCOUTFS_IOC_RELEASE, &ioctl_args);
if (ret < 0) {
perror("error");
exit(1);
@@ -138,7 +130,7 @@ int main(int argc, char **argv)
mb.from_off = 0;
mb.len = sub_tmp->length;
mb.to_off = sub_tmp->offset;
mb.data_version = stm.data_version;
mb.data_version = 0;
mb.flags = SCOUTFS_IOC_MB_STAGE;
ret = ioctl(dest_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);


@@ -197,13 +197,4 @@ scoutfs walk-inodes -p "$T_M0" -- data_seq 0 -1 > "$T_TMP.0"
scoutfs walk-inodes -p "$T_M1" -- data_seq 0 -1 > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== concurrent creates make one file"
mkdir "$T_D0/concurrent"
for i in $(t_fs_nrs); do
eval p="\$T_D${i}/concurrent/one-file"
touch "$p" 2>&1 > "$T_TMP.multi-create.$i" &
done
wait
ls "$T_D0/concurrent"
t_pass


@@ -0,0 +1,59 @@
#
# If bulk work accidentally conflicts in the worst way, we'd like it
# not to result in catastrophically bad performance. Make sure that each
# instance of bulk work is given the opportunity to get as much as it
# can into the transaction under a lock before the lock is revoked
# and the transaction is committed.
#
t_require_commands setfattr
t_require_mounts 2
NR=3000
echo "== create per mount files"
for m in 0 1; do
eval dir="\$T_D${m}/dir/$m"
t_quiet mkdir -p "$dir"
for a in $(seq 1 $NR); do touch "$dir/$a"; done
done
echo "== time independent modification"
for m in 0 1; do
eval dir="\$T_D${m}/dir/$m"
START=$SECONDS
for a in $(seq 1 $NR); do
setfattr -n user.test_grace -v $a "$dir/$a"
done
echo "mount $m: $((SECONDS - START))" >> $T_TMP.log
done
echo "== time concurrent independent modification"
START=$SECONDS
for m in 0 1; do
eval dir="\$T_D${m}/dir/$m"
(for a in $(seq 1 $NR); do
setfattr -n user.test_grace -v $a "$dir/$a";
done) &
done
wait
IND="$((SECONDS - START))"
echo "ind: $IND" >> $T_TMP.log
echo "== time concurrent conflicting modification"
START=$SECONDS
for m in 0 1; do
eval dir="\$T_D${m}/dir/0"
(for a in $(seq 1 $NR); do
setfattr -n user.test_grace -v $a "$dir/$a";
done) &
done
wait
CONF="$((SECONDS - START))"
echo "conf: $CONF" >> $T_TMP.log
if [ "$CONF" -gt "$((IND * 5))" ]; then
t_fail "conflicting $CONF secs is more than 5x independent $IND secs"
fi
t_pass


@@ -23,7 +23,9 @@ else
NR_MNTS=$T_NR_MOUNTS
fi
while : ; do
# test until final op mount dir wraps
while [ ${op_mnt[$NR_OPS]} == 0 ]; do
# sequentially perform each op from its mount dir
for op in $(seq 0 $((NR_OPS - 1))); do
m=${op_mnt[$op]}
@@ -43,7 +45,7 @@ while : ; do
# advance through mnt nrs for each op
i=0
while [ $i -lt $NR_OPS ]; do
while [ ${op_mnt[$NR_OPS]} == 0 ]; do
((op_mnt[$i]++))
if [ ${op_mnt[$i]} -ge $NR_MNTS ]; then
op_mnt[$i]=0
@@ -52,9 +54,6 @@ while : ; do
break
fi
done
# done when the last op's mnt nr wrapped
[ $i -ge $NR_OPS ] && break
done
t_pass


@@ -1,149 +0,0 @@
#
# Some basic tests of online resizing metadata and data devices.
#
statfs_total() {
local single="total_$1_blocks"
local mnt="$2"
scoutfs statfs -s $single -p "$mnt"
}
df_free() {
local md="$1"
local mnt="$2"
scoutfs df -p "$mnt" | awk '($1 == "'$md'") { print $5; exit }'
}
same_totals() {
cur_meta_tot=$(statfs_total meta "$SCR")
cur_data_tot=$(statfs_total data "$SCR")
test "$cur_meta_tot" == "$exp_meta_tot" || \
t_fail "cur total_meta_blocks $cur_meta_tot != expected $exp_meta_tot"
test "$cur_data_tot" == "$exp_data_tot" || \
t_fail "cur total_data_blocks $cur_data_tot != expected $exp_data_tot"
}
#
# make sure that the specified devices have grown by doubling. The
# total blocks can be tested exactly but the df reported total needs
# some slop to account for reserved blocks and concurrent allocation.
#
devices_grew() {
cur_meta_tot=$(statfs_total meta "$SCR")
cur_data_tot=$(statfs_total data "$SCR")
cur_meta_df=$(df_free MetaData "$SCR")
cur_data_df=$(df_free Data "$SCR")
local grow_meta_tot=$(echo "$exp_meta_tot * 2" | bc)
local grow_data_tot=$(echo "$exp_data_tot * 2" | bc)
local grow_meta_df=$(echo "($exp_meta_df * 1.95)/1" | bc)
local grow_data_df=$(echo "($exp_data_df * 1.95)/1" | bc)
if [ "$1" == "meta" ]; then
test "$cur_meta_tot" == "$grow_meta_tot" || \
t_fail "cur total_meta_blocks $cur_meta_tot != grown $grow_meta_tot"
test "$cur_meta_df" -lt "$grow_meta_df" && \
t_fail "cur meta df total $cur_meta_df < grown $grow_meta_df"
exp_meta_tot=$cur_meta_tot
exp_meta_df=$cur_meta_df
shift
fi
if [ "$1" == "data" ]; then
test "$cur_data_tot" == "$grow_data_tot" || \
t_fail "cur total_data_blocks $cur_data_tot != grown $grow_data_tot"
test "$cur_data_df" -lt "$grow_data_df" && \
t_fail "cur data df total $cur_data_df < grown $grow_data_df"
exp_data_tot=$cur_data_tot
exp_data_df=$cur_data_df
fi
}
# first calculate small mkfs based on device size
size_meta=$(blockdev --getsize64 "$T_EX_META_DEV")
size_data=$(blockdev --getsize64 "$T_EX_DATA_DEV")
quarter_meta=$(echo "$size_meta / 4" | bc)
quarter_data=$(echo "$size_data / 4" | bc)
# XXX this is all pretty manual, would be nice to have helpers
echo "== make initial small fs"
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m $quarter_meta -d $quarter_data \
"$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
t_fail "mkfs failed"
SCR="/mnt/scoutfs.enospc"
mkdir -p "$SCR"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"
# then calculate sizes based on blocks that mkfs used
quarter_meta=$(echo "$(statfs_total meta "$SCR") * 64 * 1024" | bc)
quarter_data=$(echo "$(statfs_total data "$SCR") * 4 * 1024" | bc)
whole_meta=$(echo "$quarter_meta * 4" | bc)
whole_data=$(echo "$quarter_data * 4" | bc)
outsize_meta=$(echo "$whole_meta * 2" | bc)
outsize_data=$(echo "$whole_data * 2" | bc)
half_meta=$(echo "$whole_meta / 2" | bc)
half_data=$(echo "$whole_data / 2" | bc)
shrink_meta=$(echo "$quarter_meta / 2" | bc)
shrink_data=$(echo "$quarter_data / 2" | bc)
# and save expected values for checks
exp_meta_tot=$(statfs_total meta "$SCR")
exp_meta_df=$(df_free MetaData "$SCR")
exp_data_tot=$(statfs_total data "$SCR")
exp_data_df=$(df_free Data "$SCR")
echo "== 0s do nothing"
scoutfs resize-devices -p "$SCR"
scoutfs resize-devices -p "$SCR" -m 0
scoutfs resize-devices -p "$SCR" -d 0
scoutfs resize-devices -p "$SCR" -m 0 -d 0
echo "== shrinking fails"
scoutfs resize-devices -p "$SCR" -m $shrink_meta
scoutfs resize-devices -p "$SCR" -d $shrink_data
scoutfs resize-devices -p "$SCR" -m $shrink_meta -d $shrink_data
same_totals
echo "== existing sizes do nothing"
scoutfs resize-devices -p "$SCR" -m $quarter_meta
scoutfs resize-devices -p "$SCR" -d $quarter_data
scoutfs resize-devices -p "$SCR" -m $quarter_meta -d $quarter_data
same_totals
echo "== growing outside device fails"
scoutfs resize-devices -p "$SCR" -m $outsize_meta
scoutfs resize-devices -p "$SCR" -d $outsize_data
scoutfs resize-devices -p "$SCR" -m $outsize_meta -d $outsize_data
same_totals
echo "== resizing meta works"
scoutfs resize-devices -p "$SCR" -m $half_meta
devices_grew meta
echo "== resizing data works"
scoutfs resize-devices -p "$SCR" -d $half_data
devices_grew data
echo "== shrinking back fails"
scoutfs resize-devices -p "$SCR" -m $quarter_meta
scoutfs resize-devices -p "$SCR" -m $quarter_data
same_totals
echo "== resizing again does nothing"
scoutfs resize-devices -p "$SCR" -m $half_meta
scoutfs resize-devices -p "$SCR" -m $half_data
same_totals
echo "== resizing to full works"
scoutfs resize-devices -p "$SCR" -m $whole_meta -d $whole_data
devices_grew meta data
echo "== cleanup extra fs"
umount "$SCR"
rmdir "$SCR"
t_pass

View File

@@ -46,35 +46,6 @@ print_and_run() {
"$@" || echo "returned nonzero status: $?"
}
# fill a buffer with strings that identify their byte offset
offs=""
for o in $(seq 0 7 $((65535 - 7))); do
offs+="$(printf "[%5u]" $o)"
done
change_val_sizes() {
local name="$1"
local file="$2"
local from="$3"
local to="$4"
while : ; do
setfattr -x "$name" "$file" > /dev/null 2>&1
setfattr -n "$name" -v "${offs:0:$from}" "$file"
setfattr -n "$name" -v "${offs:0:$to}" "$file"
if ! diff -u <(echo -n "${offs:0:$to}") <(getfattr --absolute-names --only-values -n "$name" $file) ; then
echo "setting $name from $from to $to failed"
fi
if [ $from == $3 ]; then
from=$4
to=$3
else
break
fi
done
}
echo "=== XATTR_ flag combinations"
touch "$FILE"
print_and_run dumb_setxattr -p "$FILE" -n user.test -v val -c -r
@@ -109,17 +80,4 @@ for i in $(seq 1 $NR); do
test_xattr_lengths $name_len $val_len
done
echo "=== alternate val size between interesting sizes"
name="user.test"
ITEM=896
HDR=$((8 + 9))
# one full item apart
change_val_sizes $name "$FILE" $(((ITEM * 2) - HDR)) $(((ITEM * 3) - HDR))
# multiple full items apart
change_val_sizes $name "$FILE" $(((ITEM * 6) - HDR)) $(((ITEM * 9) - HDR))
# item boundary fence posts
change_val_sizes $name "$FILE" $(((ITEM * 5) - HDR - 1)) $(((ITEM * 13) - HDR + 1))
# min and max
change_val_sizes $name "$FILE" 1 65535
t_pass

View File

@@ -17,10 +17,8 @@ diff_srch_find()
local n="$1"
sync
scoutfs search-xattrs "$n" -p "$T_M0" > "$T_TMP.srch" || \
t_fail "search-xattrs failed"
find_xattrs -d "$T_D0" -m "$T_M0" -n "$n" > "$T_TMP.find" || \
t_fail "find_xattrs failed"
scoutfs search-xattrs "$n" -p "$T_M0" > "$T_TMP.srch"
find_xattrs -d "$T_D0" -m "$T_M0" -n "$n" > "$T_TMP.find"
diff -u "$T_TMP.srch" "$T_TMP.find"
}
@@ -42,31 +40,6 @@ echo "== remove xattr with files"
rm -f "$T_D0/"{create,update}
diff_srch_find scoutfs.srch.test
echo "== trigger small log merges by rotating single block with unmount"
sv=$(t_server_nr)
i=1
while [ "$i" -lt "8" ]; do
for nr in $(t_fs_nrs); do
# not checking, can go over limit by fs_nrs
((i++))
if [ $nr == $sv ]; then
continue;
fi
eval path="\$T_D${nr}/single-block-$i"
touch "$path"
setfattr -n scoutfs.srch.single-block-logs -v $i "$path"
t_umount $nr
t_mount $nr
((i++))
done
done
# wait for srch compaction worker delay
sleep 10
rm -rf "$T_D0/single-block-*"
echo "== create entries in current log"
DIR="$T_D0/dir"
NR=$((LOG / 4))

View File

@@ -1,126 +0,0 @@
t_require_commands touch rm setfattr scoutfs find_xattrs
read_xattr_totals()
{
sync
scoutfs read-xattr-totals -p "$T_M0"
}
echo "== single file"
touch "$T_D0/file-1"
setfattr -n scoutfs.totl.test.1.2.3 -v 1 "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.4.5.6 -v 1 "$T_D0/file-1" 2>&1 | t_filter_fs
read_xattr_totals
echo "== multiple files add up"
touch "$T_D0/file-2"
setfattr -n scoutfs.totl.test.1.2.3 -v 1 "$T_D0/file-2" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.4.5.6 -v 1 "$T_D0/file-2" 2>&1 | t_filter_fs
read_xattr_totals
echo "== removing xattr updates total"
setfattr -x scoutfs.totl.test.4.5.6 "$T_D0/file-2" 2>&1 | t_filter_fs
read_xattr_totals
echo "== updating xattr updates total"
setfattr -n scoutfs.totl.test.1.2.3 -v 10 "$T_D0/file-2" 2>&1 | t_filter_fs
read_xattr_totals
echo "== removing files update total"
rm -f "$T_D0/file-1"
read_xattr_totals
rm -f "$T_D0/file-2"
read_xattr_totals
echo "== multiple files/names in one transaction"
for a in $(seq 1 10); do
touch "$T_D0/file-$a"
setfattr -n scoutfs.totl.test.1.2.3 -v $a "$T_D0/file-$a" 2>&1 | t_filter_fs
done
read_xattr_totals
rm -rf "$T_D0"/file-[0-9]*
echo "== testing invalid names"
touch "$T_D0/invalid"
setfattr -n scoutfs.totl.test... -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test..2.3 -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1..3 -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2. -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1 -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2 -v 10 "$T_D0/invalid" 2>&1 | t_filter_fs
echo "== testing invalid values"
setfattr -n scoutfs.totl.test.1.2.3 -v "+1" "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2.3 -v "10." "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2.3 -v "-" "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2.3 -v "junk10" "$T_D0/invalid" 2>&1 | t_filter_fs
setfattr -n scoutfs.totl.test.1.2.3 -v "10junk" "$T_D0/invalid" 2>&1 | t_filter_fs
rm -f "$T_D0/invalid"
echo "== larger population that could merge"
NR=5000
TOTS=100
CHECK=100
PER_DIR=1000
PER_FILE=10
declare -A totals counts
LOTS="$T_D0/lots"
for i in $(seq 0 $PER_DIR $NR); do
p="$LOTS/$((i / PER_DIR))"
mkdir -p $p
done
for i in $(seq 0 $PER_FILE $NR); do
p="$LOTS/$((i / PER_DIR))/file-$((i / PER_FILE))"
touch $p
done
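# apply xattrs in three phases, tracking the expected totals and counts
# in bash arrays and comparing them against read-xattr-totals at checkpoints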
for phase in create update remove; do
for i in $(seq 0 $NR); do
p="$LOTS/$((i / PER_DIR))/file-$((i / PER_FILE))"
t=$((i % TOTS))
n="scoutfs.totl.test-$i.$t.0.0"
case $phase in
create)
v="$i"
setfattr -n "$n" -v "$v" "$p" 2>&1 >> $T_TMP.sfa
((totals[$t]+=$v))
((counts[$t]++))
;;
update)
v=$((i * 3))
delta=$((i * 2))
setfattr -n "$n" -v "$v" "$p" 2>&1 >> $T_TMP.sfa
((totals[$t]+=$delta))
;;
remove)
v=$((i * 3))
setfattr -x "$n" "$p" 2>&1 >> $T_TMP.sfa
((totals[$t]-=$v))
((counts[$t]--))
;;
esac
if [ "$i" -gt 0 -a "$((i % CHECK))" == "0" ]; then
echo "checking $phase $i" > $T_TMP.check_arr
echo "checking $phase $i" > $T_TMP.check_read
( for k in ${!totals[@]}; do
echo "$k.0.0 = ${totals[$k]}, ${counts[$k]}"
done ) | grep -v "= 0, 0$" | sort -n >> $T_TMP.check_arr
sync
read_xattr_totals | sort -n >> $T_TMP.check_read
diff -u $T_TMP.check_arr $T_TMP.check_read || \
t_fail "totals read didn't match expected arrays"
fi
done
done
rm -rf "$T_D0/merging"
t_pass

View File

@@ -7,7 +7,7 @@ message_output()
error_message()
{
message_output "$@" >&2
message_output "$@" >> /dev/stderr
}
error_exit()
@@ -18,7 +18,7 @@ error_exit()
log_message()
{
message_output "$@"
message_output "$@" >> /dev/stdout
}
# restart if we catch hup to re-read the config

View File

@@ -1,3 +0,0 @@
SCOUTFS_FENCED_DELAY=1
SCOUTFS_FENCED_RUN=/usr/libexec/scoutfs-fenced/run/local-force-unmount
SCOUTFS_FENCED_RUN_ARGS=""

View File

@@ -1,11 +0,0 @@
[Unit]
Description=ScoutFS fenced
[Service]
Restart=on-failure
RestartSec=5s
StartLimitBurst=5
ExecStart=/usr/libexec/scoutfs-fenced/scoutfs-fenced
[Install]
WantedBy=default.target

View File

@@ -142,142 +142,7 @@ If the
file is written to then the server cannot make forward progress and
shuts down. The request can similarly enter an errored state if enough
time passes before userspace completes the request.
.SH EXTENDED ATTRIBUTE TAGS
.B scoutfs
adds the
.IB scoutfs.
extended attribute namespace which uses a system of tags to extend the
functionality of extended attributes. Immediately following the
scoutfs. prefix is a series of tag words separated by dots.
Any text starting after the last recognized tag is considered the xattr
name and is not parsed.
.sp
Tags may be combined in any order. Specifying a tag more than once
will return an error. There is no explicit boundary between the end of
the tags and the start of the name, so unknown or incorrect tags will be
successfully parsed as part of the name of the xattr. Tagged attributes
can only be created, updated, or removed with the CAP_SYS_ADMIN
capability. A shell example of each tag follows the list below.
The following tags are currently supported:
.RS
.TP
.B .hide.
Attributes with the .hide. tag are not visible to the
.BR listxattr(2)
system call. They will instead be included in the output of the
.IB LISTXATTR_HIDDEN
ioctl. This is meant to be used by archival management agents to store
metadata that is bound to a specific volume and should not be
transferred with the file by tools that read extended attributes, like
.BR tar(1) .
.TP
.B .srch.
Attributes with the .srch. tag are indexed so that they can be
found by the
.IB SEARCH_XATTRS
ioctl. The search ioctl takes an extended attribute name and returns
the inode number of all the inodes which contain an extended attribute
with that name. The indexing structures behind .srch. tags are designed
to efficiently handle a large number of .srch. attributes per file with
no limits on the number of indexed files.
.TP
.B .totl.
Attributes with the .totl. tag are used to efficiently maintain counts
across all files in the system. The attribute's name must end in three
64bit values separated by dots that specify the global total that the
extended attribute will contribute to. The value of the extended
attribute is a string representation of the 64bit quantity which will be
added to the total. As attributes are added, updated, or removed (and
particularly as a file is finally deleted), the corresponding global
total is also updated by the file system. All the totals with their
name, total value, and a count of contributing attributes can be read
with the
.IB READ_XATTR_TOTALS
ioctl.
.RE
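A minimal shell sketch of each tag, following the usage in the tests
above (the mount path, file, and attribute names are illustrative):

    # tagged xattrs require CAP_SYS_ADMIN
    # .hide.: stored normally but omitted from listxattr(2) results
    setfattr -n scoutfs.hide.agent.state -v local /mnt/scoutfs/file

    # .srch.: indexed so the inodes carrying the name can be found
    setfattr -n scoutfs.srch.backup.pending -v 1 /mnt/scoutfs/file
    scoutfs search-xattrs scoutfs.srch.backup.pending -p /mnt/scoutfs

    # .totl.: the value is added to the global total named by the
    # trailing three dotted 64bit values
    setfattr -n scoutfs.totl.archived.1.2.3 -v 4096 /mnt/scoutfs/file
    scoutfs read-xattr-totals -p /mnt/scoutfs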
.SH FORMAT VERSION
The format version defines the layout and use of structures stored on
devices and passed over the network. The version is incremented for
every change in structures that is not backwards compatible with
previous versions. A single version implies all changes, individual
changes can't be selectively adopted.
.sp
As a new file system is created, the format version is stored in both of
the super blocks written to the metadata and data devices. By default
the greatest supported version is written while an older supported
version may be specified.
.sp
During mount the kernel module verifies that the format versions stored
in both of the super blocks match and are supported. That version
defines the set of features and behavior of all the mounts using the
file system, including the network protocol that is communicated over
the wire.
.sp
Any combination of software release versions that support the current
format version of the file system can safely be used concurrently. This
allows for rolling software updates of multiple mounts using a shared
file system.
.sp
To use new incompatible features added in newer format versions, the super blocks must
be updated. This can currently only be safely performed on a
completely and cleanly unmounted file system. The
.BR scoutfs (8)
.I change-format-version
command can be used with the
.I --offline
option to write a newer supported version into the super blocks. It
will fail if it sees any indication of unresolved mounts that may be
using the devices: either active quorum members working with their
quorum blocks or persistent records of mounted clients that haven't been
resolved. Like creating a new file system, there is no protection
against multiple invocations of the change command corrupting the
system. Once the version is updated older software can no longer use
the file system so this change should be performed with care. Once the
newer format version is successfully written it can be mounted and newer
features can be used.
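A minimal sketch of the offline update described above (the device paths
and the version number are illustrative):

    # every mount must be cleanly unmounted first
    scoutfs change-format-version --offline --format-version 2 \
        /dev/mapper/scoutfs-meta /dev/mapper/scoutfs-data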
.sp
Each layer of the system can show its supported format versions; a
combined example follows the list:
.RS
.TP
.B Userspace utilities
.B scoutfs --help
includes the range of supported format versions for a given release
of the userspace utilities.
.TP
.B Kernel module
.I modinfo MODULE
shows the range of supported versions for a kernel module file in the
.I scoutfs_format_version_min
and
.I scoutfs_format_version_max
fields.
.TP
.B Inserted module
The supported version range of an inserted module can be found in
.I .note.scoutfs_format_version_min
and
.I .note.scoutfs_format_version_max
notes files in the sysfs notes directory for the inserted module,
typically
.I /sys/module/scoutfs/notes/
.TP
.B Metadata and data devices
.I scoutfs print DEVICE
shows the
.I fmt_vers
field in the initial output of the super block on the device.
.TP
.B Mounted filesystem
The version that a mount is using is shown in the
.I format_version
file in the mount's sysfs directory, typically
.I /sys/fs/scoutfs/f.FSID.r.RID/
.RE
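A combined sketch of checking each layer (the FSID, RID, and device path
are illustrative):

    scoutfs --help | grep -i 'format version'
    modinfo scoutfs | grep scoutfs_format_version
    cat /sys/fs/scoutfs/f.FSID.r.RID/format_version
    scoutfs print /dev/mapper/scoutfs-meta | grep fmt_vers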
.SH CORRUPTION DETECTION
A
.B scoutfs

View File

@@ -14,34 +14,6 @@ option will, when the option is omitted, fall back to using the value of the
environment variable. If that variable is also absent the current working
directory will be used.
.TP
.BI "change-format-version [-V, --format-version VERS] [-F|--offline META-DEVICE DATA-DEVICE]"
.sp
Change the format version of an existing file system. The maximum
supported version is used by default. A specific version in the range
can be specified. The range of supported versions is shown in the
output of --help.
.RS 1.0i
.PD 0
.TP
.sp
.B "-F, --offline META-DEVICE DATA-DEVICE"
Change the format version by writing directly to the metadata and data
devices. Like mkfs, this writes directly to the devices without
protection and must only be used on completely unmounted devices. The
command will fail if it sees evidence of active quorum use of the device
or of previously connected clients which haven't been reclaimed. The
only way to avoid these checks is to fully mount and cleanly unmount the
file system.
.sp
This is not an atomic operation because it writes to blocks on two
devices. Write failure can result in the versions becoming out of sync
which will prevent the system from mounting. To recover, the error must
be resolved so the command can be repeated and successfully write to
the super blocks on both devices.
.RE
.PD
.TP
.BI "df [-h|--human-readable] [-p|--path PATH]"
.sp
@@ -60,7 +32,7 @@ A path within a ScoutFS filesystem.
.PD
.TP
.BI "mkfs META-DEVICE DATA-DEVICE {-Q|--quorum-slot} NR,ADDR,PORT [-m|--max-meta-size SIZE] [-d|--max-data-size SIZE] [-z|--data-alloc-zone-blocks BLOCKS] [-f|--force] [-A|--allow-small-size] [-V|--format-version VERS]"
.BI "mkfs META-DEVICE DATA-DEVICE {-Q|--quorum-slot} NR,ADDR,PORT [-m|--max-meta-size SIZE] [-d|--max-data-size SIZE] [-z|--data-alloc-zone-blocks BLOCKS] [-f|--force] [-A|--allow-small-size]"
.sp
Initialize a new ScoutFS filesystem on the target devices. Since ScoutFS uses
separate block devices for its metadata and data storage, two are required.
An example invocation follows the option list.
@@ -127,74 +99,10 @@ Set the data_alloc_zone_blocks volume option, as described in
.TP
.B "-f, --force"
Ignore presence of existing data on the data and metadata devices.
.TP
.B "-V, --format-verson"
Specify the format version to use in the newly created file system.
The range of supported versions is visible in the output of
.BR scoutfs (8)
.I --help
.
.RE
.PD
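A hedged example of creating and mounting a single-quorum filesystem,
following the patterns used in the tests (the devices, address, and port
are illustrative):

    scoutfs mkfs -f -Q 0,10.0.0.1,53000 \
        /dev/mapper/scoutfs-meta /dev/mapper/scoutfs-data
    mount -t scoutfs -o metadev_path=/dev/mapper/scoutfs-meta,quorum_slot_nr=0 \
        /dev/mapper/scoutfs-data /mnt/scoutfs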
.TP
.BI "resize-devices [-p|--path PATH] [-m|--meta-size SIZE] [-d|--data-size SIZE]"
.sp
Resize the metadata or data devices of a mounted ScoutFS filesystem. An
example invocation follows the option list.
.sp
ScoutFS metadata has free extent records and fields in the super block
that reflect the size of the devices in use. This command sends a
request to the server to change the size of the device that can be used
by updating free extents and setting the super block fields.
.sp
The specified sizes are in bytes and are translated into block counts.
If the specified sizes are not a multiple of the metadata or data block
sizes then a message is output and the resized size is truncated down to
the next whole block. Specifying either a size of 0 or the current
device size makes no change. The current size of the devices can be
seen, in units of their respective block sizes, in the total_meta_blocks
and total_data_blocks fields returned by the scoutfs statfs command (via
the statfs_more ioctl).
.sp
Shrinking is not supported. Specifying a smaller size for either device
will return an error and neither device will be resized.
.sp
Specifying a larger size will expand the initial size of the device that
will be used. Free space records are added for the expanded region and
can be used once the resizing transaction is complete.
.sp
The resizing action is performed in a transaction on the server. This
command will hang until a server is elected and running and can service
the request. The server serializes any concurrent requests to resize.
.sp
The new sizes must fit within the current sizes of the mounted devices.
Presumably this command is being performed as part of a larger
coordinated resize of the underlying devices. The device must be
expanded before ScoutFS can use the larger device and ScoutFS must stop
using a region to shrink before it could be removed from the device
(which is not currently supported).
.sp
The resize will be committed by the server before the response is sent
to the client. The system can be using the new device size before the
result is communicated through the client and this command completes.
The client could crash and the server could still have performed the
resize.
.RS 1.0i
.PD 0
.TP
.sp
.B "-p, --path PATH"
A path in the mounted ScoutFS filesystem which will have its devices
resized.
.TP
.B "-m, --meta-size SIZE"
.B "-d, --data-size SIZE"
The new size of the metadata or data device to use, in bytes. Size is given as
an integer followed by a units digit: "K", "M", "G", "T", "P", to denote
kibibytes, mebibytes, etc.
.RE
.PD
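A hedged example of growing both devices of a mounted filesystem after
the underlying block devices have been expanded (the path and sizes are
illustrative):

    scoutfs resize-devices -p /mnt/scoutfs -m 1T -d 16T
    # confirm the new totals, in units of each device's block size
    scoutfs statfs -p /mnt/scoutfs -s total_meta_blocks
    scoutfs statfs -p /mnt/scoutfs -s total_data_blocks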
.BI "stat FILE [-s|--single-field FIELD-NAME]"
.sp
Display ScoutFS-specific metadata fields for the given file.

View File

@@ -56,14 +56,10 @@ install -m 644 -D src/ioctl.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/ioctl.h
install -m 644 -D src/format.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/format.h
install -m 755 -D fenced/scoutfs-fenced $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/scoutfs-fenced
install -m 755 -D fenced/local-force-unmount $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/run/local-force-unmount
install -m 644 -D fenced/scoutfs-fenced.service $RPM_BUILD_ROOT%{_unitdir}/scoutfs-fenced.service
install -m 644 -D fenced/scoutfs-fenced.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-fenced.conf.example
%files
%defattr(644,root,root,755)
%{_mandir}/man*/scoutfs*.gz
%{_unitdir}/scoutfs-fenced.service
%{_sysconfdir}/scoutfs
%defattr(755,root,root,755)
%{_sbindir}/scoutfs
%{_libexecdir}/scoutfs-fenced

View File

@@ -75,9 +75,6 @@ void btree_append_item(struct scoutfs_btree_block *bt,
le16_add_cpu(&bt->total_item_bytes, sizeof(struct scoutfs_btree_item));
item->key = *key;
item->seq = cpu_to_le64(1);
item->flags = 0;
leaf_item_hash_insert(bt, &item->key,
cpu_to_le16((void *)item - (void *)bt));
if (val_len == 0)

View File

@@ -1,287 +0,0 @@
#define _GNU_SOURCE /* O_DIRECT */
#include <unistd.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/time.h>
#include <uuid/uuid.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <assert.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <ctype.h>
#include <inttypes.h>
#include <argp.h>
#include "sparse.h"
#include "cmd.h"
#include "util.h"
#include "format.h"
#include "parse.h"
#include "crc.h"
#include "rand.h"
#include "dev.h"
#include "key.h"
#include "bitops.h"
#include "btree.h"
#include "leaf_item_hash.h"
#include "blkid.h"
#include "quorum.h"
struct change_fmt_vers_args {
char *meta_device;
char *data_device;
u64 fmt_vers;
bool offline;
};
static int do_change_fmt_vers(struct change_fmt_vers_args *args)
{
struct scoutfs_super_block *meta_super = NULL;
struct scoutfs_super_block *data_super = NULL;
struct scoutfs_quorum_block *qblk = NULL;
struct scoutfs_quorum_block_event *beg;
struct scoutfs_quorum_block_event *end;
bool wrote_meta = false;
bool in_use = false;
char uuid_str[37];
int meta_fd = -1;
int data_fd = -1;
int ret;
int i;
meta_fd = open(args->meta_device, O_DIRECT | O_SYNC | O_RDWR | O_EXCL);
if (meta_fd < 0) {
ret = -errno;
fprintf(stderr, "failed to open meta device '%s': %s (%d)\n",
args->meta_device, strerror(errno), errno);
goto out;
}
data_fd = open(args->data_device, O_DIRECT | O_SYNC | O_RDWR | O_EXCL);
if (data_fd < 0) {
ret = -errno;
fprintf(stderr, "failed to open data device '%s': %s (%d)\n",
args->data_device, strerror(errno), errno);
goto out;
}
ret = read_block_verify(meta_fd, SCOUTFS_BLOCK_MAGIC_SUPER, 0, SCOUTFS_SUPER_BLKNO,
SCOUTFS_BLOCK_SM_SHIFT, (void **)&meta_super);
if (ret) {
ret = -errno;
fprintf(stderr, "failed to read meta super block: %s (%d)\n",
strerror(errno), errno);
goto out;
}
ret = read_block_verify(data_fd, SCOUTFS_BLOCK_MAGIC_SUPER,
le64_to_cpu(meta_super->hdr.fsid), SCOUTFS_SUPER_BLKNO,
SCOUTFS_BLOCK_SM_SHIFT, (void **)&data_super);
if (ret) {
ret = -errno;
fprintf(stderr, "failed to read data super block: %s (%d)\n",
strerror(errno), errno);
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) == args->fmt_vers &&
meta_super->fmt_vers == data_super->fmt_vers) {
printf("both metadata and data device format version are already %llu, nothing to do.\n",
args->fmt_vers);
ret = 0;
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN ||
le64_to_cpu(meta_super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX) {
fprintf(stderr, "meta super block has format version %llu outside of supported version range %u-%u",
le64_to_cpu(meta_super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
ret = -EINVAL;
goto out;
}
if (le64_to_cpu(data_super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN ||
le64_to_cpu(data_super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX) {
fprintf(stderr, "data super block has format version %llu outside of supported version range %u-%u",
le64_to_cpu(data_super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
ret = -EINVAL;
goto out;
}
if (meta_super->mounted_clients.ref.blkno != 0) {
fprintf(stderr, "meta superblock mounted clients btree is not empty.\n");
ret = -EBUSY;
in_use = true;
goto out;
}
/* check for active quorum slots */
for (i = 0; i < SCOUTFS_QUORUM_BLOCKS; i++) {
if (!quorum_slot_present(meta_super, i))
continue;
ret = read_block(meta_fd, SCOUTFS_QUORUM_BLKNO + i, SCOUTFS_BLOCK_SM_SHIFT,
(void **)&qblk);
if (ret < 0) {
fprintf(stderr, "error reading quorum block for slot %u\n", i);
goto out;
}
beg = &qblk->events[SCOUTFS_QUORUM_EVENT_BEGIN];
end = &qblk->events[SCOUTFS_QUORUM_EVENT_END];
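/* a begin event newer than its matching end event means a mount in this slot may still be running */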
if (le64_to_cpu(beg->write_nr) > le64_to_cpu(end->write_nr)) {
fprintf(stderr, "mount in quorum slot %u could still be running.\n"
" begin event: write_nr %llu timestamp %llu.%08u\n"
" end event: write_nr %llu timestamp %llu.%08u\n",
i, le64_to_cpu(beg->write_nr), le64_to_cpu(beg->ts.sec),
le32_to_cpu(beg->ts.nsec),
le64_to_cpu(end->write_nr), le64_to_cpu(end->ts.sec),
le32_to_cpu(end->ts.nsec));
ret = -EBUSY;
in_use = true;
goto out;
}
free(qblk);
qblk = NULL;
}
if (le64_to_cpu(meta_super->fmt_vers) != args->fmt_vers) {
meta_super->fmt_vers = cpu_to_le64(args->fmt_vers);
ret = write_block(meta_fd, SCOUTFS_BLOCK_MAGIC_SUPER, meta_super->hdr.fsid, 1,
SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT, &meta_super->hdr);
if (ret)
goto out;
wrote_meta = true;
}
if (le64_to_cpu(data_super->fmt_vers) != args->fmt_vers) {
data_super->fmt_vers = cpu_to_le64(args->fmt_vers);
ret = write_block(data_fd, SCOUTFS_BLOCK_MAGIC_SUPER, data_super->hdr.fsid, 1,
SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT, &data_super->hdr);
if (ret < 0 && wrote_meta) {
fprintf(stderr, "Error writing data super block after writing the meta\n"
"super block. The two super blocks may now be out of sync which\n"
"would prevent mounting. Correct the source of the write error\n"
"and retry changing the version to write both super blocks.\n");
goto out;
}
}
uuid_unparse(meta_super->uuid, uuid_str);
printf("Successfully updated format version for scoutfs filesystem:\n"
" meta device path: %s\n"
" data device path: %s\n"
" fsid: %llx\n"
" uuid: %s\n"
" format version: %llu\n",
args->meta_device,
args->data_device,
le64_to_cpu(meta_super->hdr.fsid),
uuid_str,
le64_to_cpu(meta_super->fmt_vers));
out:
if (in_use)
fprintf(stderr, "The filesystem must be fully recovered and cleanly unmounted to change the format version\n");
if (qblk)
free(qblk);
if (meta_super)
free(meta_super);
if (data_super)
free(data_super);
if (meta_fd != -1)
close(meta_fd);
if (data_fd != -1)
close(data_fd);
return ret;
}
static int parse_opt(int key, char *arg, struct argp_state *state)
{
struct change_fmt_vers_args *args = state->input;
int ret;
switch (key) {
case 'F':
args->offline = true;
break;
case 'V':
ret = parse_u64(arg, &args->fmt_vers);
if (ret)
return ret;
if (args->fmt_vers < SCOUTFS_FORMAT_VERSION_MIN ||
args->fmt_vers > SCOUTFS_FORMAT_VERSION_MAX)
argp_error(state, "format-version %llu is outside supported range of %u-%u",
args->fmt_vers, SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
break;
case ARGP_KEY_ARG:
if (!args->meta_device)
args->meta_device = strdup_or_error(state, arg);
else if (!args->data_device)
args->data_device = strdup_or_error(state, arg);
else
argp_error(state, "more than two device arguments given");
break;
case ARGP_KEY_FINI:
if (!args->offline)
argp_error(state, "must specify --offline");
if (!args->meta_device)
argp_error(state, "no metadata device argument given");
if (!args->data_device)
argp_error(state, "no data device argument given");
break;
default:
break;
}
return 0;
}
static struct argp_option options[] = {
{ "offline", 'F', NULL, 0, "Write format version in offline device super blocks"},
{ "format-version", 'V', "VERS", 0, "Specify a format version within supported range ("SCOUTFS_FORMAT_VERSION_MIN_STR"-"SCOUTFS_FORMAT_VERSION_MAX_STR", default "SCOUTFS_FORMAT_VERSION_MAX_STR")"},
{ NULL }
};
static struct argp argp = {
options,
parse_opt,
"",
"Change format version of an existing ScoutFS filesystem"
};
static int change_fmt_vers_cmd(int argc, char *argv[])
{
struct change_fmt_vers_args change_fmt_vers_args = {
.offline = false,
.fmt_vers = SCOUTFS_FORMAT_VERSION_MAX,
};
int ret;
ret = argp_parse(&argp, argc, argv, 0, NULL, &change_fmt_vers_args);
if (ret)
return ret;
return do_change_fmt_vers(&change_fmt_vers_args);
}
static void __attribute__((constructor)) change_fmt_vers_ctor(void)
{
cmd_register_argp("change-format-version", &argp, GROUP_CORE, change_fmt_vers_cmd);
}

View File

@@ -8,7 +8,6 @@
#include "cmd.h"
#include "util.h"
#include "format.h"
static struct argp_command {
char *name;
@@ -70,9 +69,6 @@ static void usage(void)
fprintf(stderr, "Selected fs defaults to current working directory.\n");
fprintf(stderr, "See <command> --help for more details.\n");
fprintf(stderr, "\nSupported format version: %u-%u\n",
SCOUTFS_FORMAT_VERSION_MIN, SCOUTFS_FORMAT_VERSION_MAX);
fprintf(stderr, "\nCore admin:\n");
print_cmds_for_group(GROUP_CORE);
fprintf(stderr, "\nAdditional Information:\n");

View File

@@ -48,6 +48,7 @@ static int do_df(struct df_args *args)
if (fd < 0)
return fd;
sfm.valid_bytes = sizeof(struct scoutfs_ioctl_statfs_more);
ret = ioctl(fd, SCOUTFS_IOC_STATFS_MORE, &sfm);
if (ret < 0) {
fprintf(stderr, "statfs_more returned %d: error %s (%d)\n",

View File

@@ -32,6 +32,30 @@
#include "leaf_item_hash.h"
#include "blkid.h"
/*
* Update the block header fields and write out the block.
*/
static int write_block(int fd, u32 magic, __le64 fsid, u64 seq, u64 blkno,
int shift, struct scoutfs_block_header *hdr)
{
size_t size = 1ULL << shift;
ssize_t ret;
hdr->magic = cpu_to_le32(magic);
hdr->fsid = fsid;
hdr->blkno = cpu_to_le64(blkno);
hdr->seq = cpu_to_le64(seq);
hdr->crc = cpu_to_le32(crc_block(hdr, size));
ret = pwrite(fd, hdr, size, blkno << shift);
if (ret != size) {
fprintf(stderr, "write to blkno %llu returned %zd: %s (%d)\n",
blkno, ret, strerror(errno), errno);
return -errno;
}
return 0;
}
/*
* Return the order of the length of a free extent, which we define as
@@ -110,7 +134,6 @@ struct mkfs_args {
unsigned long long max_meta_size;
unsigned long long max_data_size;
u64 data_alloc_zone_blocks;
u64 fmt_vers;
bool force;
bool allow_small_size;
int nr_slots;
@@ -213,13 +236,16 @@ static int do_mkfs(struct mkfs_args *args)
/* partially initialize the super so we can use it to init others */
memset(super, 0, SCOUTFS_BLOCK_SM_SIZE);
super->fmt_vers = cpu_to_le64(args->fmt_vers);
super->version = cpu_to_le64(SCOUTFS_INTEROP_VERSION);
uuid_generate(super->uuid);
super->next_ino = cpu_to_le64(round_up(SCOUTFS_ROOT_INO + 1, SCOUTFS_LOCK_INODE_GROUP_NR));
super->inode_count = cpu_to_le64(1);
super->next_ino = cpu_to_le64(SCOUTFS_ROOT_INO + 1);
super->seq = cpu_to_le64(1);
super->total_meta_blocks = cpu_to_le64(last_meta + 1);
super->total_data_blocks = cpu_to_le64(last_data + 1);
super->first_meta_blkno = cpu_to_le64(next_meta);
super->last_meta_blkno = cpu_to_le64(last_meta);
super->total_data_blocks = cpu_to_le64(last_data - first_data + 1);
super->first_data_blkno = cpu_to_le64(first_data);
super->last_data_blkno = cpu_to_le64(last_data);
assert(sizeof(args->slots) ==
member_sizeof(struct scoutfs_super_block, qconf.slots));
@@ -294,7 +320,7 @@ static int do_mkfs(struct mkfs_args *args)
blkno = next_meta++;
ret = write_alloc_root(meta_fd, fsid, &super->data_alloc, bt,
1, blkno, first_data,
last_data - first_data + 1);
le64_to_cpu(super->total_data_blocks));
if (ret < 0)
goto out;
@@ -334,35 +360,49 @@ static int do_mkfs(struct mkfs_args *args)
}
/* write the super block to data dev and meta dev*/
ret = write_block_sync(data_fd, SCOUTFS_BLOCK_MAGIC_SUPER, fsid, 1,
SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT,
&super->hdr);
ret = write_block(data_fd, SCOUTFS_BLOCK_MAGIC_SUPER, fsid, 1,
SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT,
&super->hdr);
if (ret)
goto out;
if (fsync(data_fd)) {
ret = -errno;
fprintf(stderr, "failed to fsync '%s': %s (%d)\n",
args->data_device, strerror(errno), errno);
goto out;
}
super->flags |= cpu_to_le64(SCOUTFS_FLAG_IS_META_BDEV);
ret = write_block_sync(meta_fd, SCOUTFS_BLOCK_MAGIC_SUPER, fsid,
1, SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT,
&super->hdr);
ret = write_block(meta_fd, SCOUTFS_BLOCK_MAGIC_SUPER, fsid,
1, SCOUTFS_SUPER_BLKNO, SCOUTFS_BLOCK_SM_SHIFT,
&super->hdr);
if (ret)
goto out;
if (fsync(meta_fd)) {
ret = -errno;
fprintf(stderr, "failed to fsync '%s': %s (%d)\n",
args->meta_device, strerror(errno), errno);
goto out;
}
uuid_unparse(super->uuid, uuid_str);
printf("Created scoutfs filesystem:\n"
" meta device path: %s\n"
" data device path: %s\n"
" fsid: %llx\n"
" version: %llx\n"
" uuid: %s\n"
" format version: %llu\n"
" 64KB metadata blocks: "SIZE_FMT"\n"
" 4KB data blocks: "SIZE_FMT"\n"
" quorum slots: ",
args->meta_device,
args->data_device,
le64_to_cpu(super->hdr.fsid),
le64_to_cpu(super->version),
uuid_str,
le64_to_cpu(super->fmt_vers),
SIZE_ARGS(le64_to_cpu(super->total_meta_blocks),
SCOUTFS_BLOCK_LG_SIZE),
SIZE_ARGS(le64_to_cpu(super->total_data_blocks),
@@ -486,16 +526,6 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
case 'A':
args->allow_small_size = true;
break;
case 'V':
ret = parse_u64(arg, &args->fmt_vers);
if (ret)
return ret;
if (args->fmt_vers < SCOUTFS_FORMAT_VERSION_MIN ||
args->fmt_vers > SCOUTFS_FORMAT_VERSION_MAX)
argp_error(state, "format-version %llu is outside supported range of %u-%u",
args->fmt_vers, SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
break;
case 'z': /* data-alloc-zone-blocks */
{
ret = parse_u64(arg, &args->data_alloc_zone_blocks);
@@ -539,7 +569,6 @@ static struct argp_option options[] = {
{ "max-meta-size", 'm', "SIZE", 0, "Use a size less than the base metadata device size (bytes or KMGTP units)"},
{ "max-data-size", 'd', "SIZE", 0, "Use a size less than the base data device size (bytes or KMGTP units)"},
{ "data-alloc-zone-blocks", 'z', "BLOCKS", 0, "Divide data device into block zones so each mounts writes to a zone (4KB blocks)"},
{ "format-version", 'V', "version", 0, "Specify a format version within supported range, ("SCOUTFS_FORMAT_VERSION_MIN_STR"-"SCOUTFS_FORMAT_VERSION_MAX_STR", default "SCOUTFS_FORMAT_VERSION_MAX_STR")"},
{ NULL }
};
@@ -552,9 +581,7 @@ static struct argp argp = {
static int mkfs_cmd(int argc, char *argv[])
{
struct mkfs_args mkfs_args = {
.fmt_vers = SCOUTFS_FORMAT_VERSION_MAX,
};
struct mkfs_args mkfs_args = {NULL,};
int ret;
ret = argp_parse(&argp, argc, argv, 0, NULL, &mkfs_args);

View File

@@ -47,14 +47,13 @@ static void print_inode(struct scoutfs_key *key, void *val, int val_len)
{
struct scoutfs_inode *inode = val;
printf(" inode: ino %llu size %llu version %llu nlink %u\n"
printf(" inode: ino %llu size %llu nlink %u\n"
" uid %u gid %u mode 0%o rdev 0x%x flags 0x%x\n"
" next_readdir_pos %llu meta_seq %llu data_seq %llu data_version %llu\n"
" atime %llu.%08u ctime %llu.%08u\n"
" mtime %llu.%08u\n",
le64_to_cpu(key->ski_ino),
le64_to_cpu(inode->size),
le64_to_cpu(inode->version),
le32_to_cpu(inode->nlink), le32_to_cpu(inode->uid),
le32_to_cpu(inode->gid), le32_to_cpu(inode->mode),
le32_to_cpu(inode->rdev),
@@ -76,17 +75,6 @@ static void print_orphan(struct scoutfs_key *key, void *val, int val_len)
printf(" orphan: ino %llu\n", le64_to_cpu(key->sko_ino));
}
static void print_xattr_totl(struct scoutfs_key *key, void *val, int val_len)
{
struct scoutfs_xattr_totl_val *tval = val;
printf(" xattr totl: %llu.%llu.%llu = %lld, %lld\n",
le64_to_cpu(key->skxt_a), le64_to_cpu(key->skxt_b),
le64_to_cpu(key->skxt_c), le64_to_cpu(tval->total),
le64_to_cpu(tval->count));
}
static u8 *global_printable_name(u8 *name, int name_len)
{
static u8 name_buf[SCOUTFS_NAME_LEN + 1];
@@ -175,9 +163,6 @@ static print_func_t find_printer(u8 zone, u8 type)
return print_orphan;
}
if (zone == SCOUTFS_XATTR_TOTL_ZONE)
return print_xattr_totl;
if (zone == SCOUTFS_FS_ZONE) {
switch(type) {
case SCOUTFS_INODE_TYPE: return print_inode;
@@ -193,19 +178,15 @@ static print_func_t find_printer(u8 zone, u8 type)
return NULL;
}
#define flag_char(val, bit, c) \
(((val) & (bit)) ? (c) : '-')
static int print_fs_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
static int print_fs_item(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
print_func_t printer;
printf(" "SK_FMT" %llu %c\n",
SK_ARG(key), seq, flag_char(flags, SCOUTFS_ITEM_FLAG_DELETION, 'd'));
printf(" "SK_FMT"\n", SK_ARG(key));
/* only items in leaf blocks have values */
if (val != NULL && !(flags & SCOUTFS_ITEM_FLAG_DELETION)) {
if (val) {
printer = find_printer(key->sk_zone, key->sk_type);
if (printer)
printer(key, val, val_len);
@@ -217,6 +198,37 @@ static int print_fs_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
return 0;
}
/* same as fs item but with a small header in the value */
static int print_logs_item(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_log_item_value *liv;
print_func_t printer;
printf(" "SK_FMT"\n", SK_ARG(key));
/* only items in leaf blocks have values */
if (val) {
liv = val;
printf(" log_item_value: seq %llu flags %x\n",
le64_to_cpu(liv->seq), liv->flags);
/* deletion items don't have values */
if (!(liv->flags & SCOUTFS_LOG_ITEM_FLAG_DELETION)) {
printer = find_printer(key->sk_zone,
key->sk_type);
if (printer)
printer(key, val + sizeof(*liv),
val_len - sizeof(*liv));
else
printf(" (unknown zone %u type %u)\n",
key->sk_zone, key->sk_type);
}
}
return 0;
}
#define BTREF_F \
"blkno %llu seq %llu"
#define BTREF_A(ref) \
@@ -257,7 +269,7 @@ static int print_fs_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
le64_to_cpu((srf)->ref.seq)
/* same as fs item but with a small header in the value */
static int print_log_trees_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
static int print_log_trees_item(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_log_trees *lt = val;
@@ -277,9 +289,7 @@ static int print_log_trees_item(struct scoutfs_key *key, u64 seq, u8 flags, void
" data_avail: "ALCROOT_F"\n"
" data_freed: "ALCROOT_F"\n"
" srch_file: "SRF_FMT"\n"
" inode_count_delta: %lld\n"
" max_item_seq: %llu\n"
" finalize_seq: %llu\n"
" rid: %016llx\n"
" nr: %llu\n"
" flags: %llx\n"
@@ -295,9 +305,7 @@ static int print_log_trees_item(struct scoutfs_key *key, u64 seq, u8 flags, void
ALCROOT_A(&lt->data_avail),
ALCROOT_A(&lt->data_freed),
SRF_A(&lt->srch_file),
le64_to_cpu(lt->inode_count_delta),
le64_to_cpu(lt->max_item_seq),
le64_to_cpu(lt->finalize_seq),
le64_to_cpu(lt->rid),
le64_to_cpu(lt->nr),
le64_to_cpu(lt->flags),
@@ -320,7 +328,7 @@ static int print_log_trees_item(struct scoutfs_key *key, u64 seq, u8 flags, void
return 0;
}
static int print_srch_root_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
static int print_srch_root_item(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_srch_compact *sc;
@@ -353,7 +361,16 @@ static int print_srch_root_item(struct scoutfs_key *key, u64 seq, u8 flags, void
return 0;
}
static int print_mounted_client_entry(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
static int print_trans_seqs_entry(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
printf(" trans_seq %llu rid %016llx\n",
le64_to_cpu(key->skts_trans_seq), le64_to_cpu(key->skts_rid));
return 0;
}
static int print_mounted_client_entry(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_mounted_client_btree_val *mcv = val;
@@ -368,8 +385,8 @@ static int print_mounted_client_entry(struct scoutfs_key *key, u64 seq, u8 flags
return 0;
}
static int print_log_merge_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg)
static int print_log_merge_item(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_log_merge_status *stat;
struct scoutfs_log_merge_range *rng;
@@ -380,10 +397,12 @@ static int print_log_merge_item(struct scoutfs_key *key, u64 seq, u8 flags, void
switch (key->sk_zone) {
case SCOUTFS_LOG_MERGE_STATUS_ZONE:
stat = val;
printf(" status: next_range_key "SK_FMT" nr_req %llu nr_comp %llu seq %llu\n",
printf(" status: next_range_key "SK_FMT" nr_req %llu nr_comp %llu"
" last_seq %llu seq %llu\n",
SK_ARG(&stat->next_range_key),
le64_to_cpu(stat->nr_requests),
le64_to_cpu(stat->nr_complete),
le64_to_cpu(stat->last_seq),
le64_to_cpu(stat->seq));
break;
case SCOUTFS_LOG_MERGE_RANGE_ZONE:
@@ -395,12 +414,12 @@ static int print_log_merge_item(struct scoutfs_key *key, u64 seq, u8 flags, void
case SCOUTFS_LOG_MERGE_REQUEST_ZONE:
req = val;
printf(" request: logs_root "BTROOT_F" logs_root "BTROOT_F" start "SK_FMT
" end "SK_FMT" input_seq %llu rid %016llx seq %llu flags 0x%llx\n",
" end "SK_FMT" last_seq %llu rid %016llx seq %llu flags 0x%llx\n",
BTROOT_A(&req->logs_root),
BTROOT_A(&req->root),
SK_ARG(&req->start),
SK_ARG(&req->end),
le64_to_cpu(req->input_seq),
le64_to_cpu(req->last_seq),
le64_to_cpu(req->rid),
le64_to_cpu(req->seq),
le64_to_cpu(req->flags));
@@ -432,7 +451,7 @@ static int print_log_merge_item(struct scoutfs_key *key, u64 seq, u8 flags, void
return 0;
}
static int print_alloc_item(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
static int print_alloc_item(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
if (key->sk_zone == SCOUTFS_FREE_EXTENT_BLKNO_ZONE)
@@ -450,7 +469,7 @@ static int print_alloc_item(struct scoutfs_key *key, u64 seq, u8 flags, void *va
return 0;
}
typedef int (*print_item_func)(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
typedef int (*print_item_func)(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg);
static int print_block_ref(struct scoutfs_key *key, void *val,
@@ -458,7 +477,7 @@ static int print_block_ref(struct scoutfs_key *key, void *val,
{
struct scoutfs_block_ref *ref = val;
func(key, 0, 0, NULL, 0, arg);
func(key, NULL, 0, arg);
printf(" ref blkno %llu seq %llu\n",
le64_to_cpu(ref->blkno), le64_to_cpu(ref->seq));
@@ -567,7 +586,7 @@ static int print_btree_block(int fd, struct scoutfs_super_block *super,
if (level)
print_block_ref(key, val, val_len, func, arg);
else
func(key, le64_to_cpu(item->seq), item->flags, val, val_len, arg);
func(key, val, val_len, arg);
}
free(bt);
@@ -725,8 +744,8 @@ struct print_recursion_args {
};
/* same as fs item but with a small header in the value */
static int print_log_trees_roots(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
unsigned val_len, void *arg)
static int print_log_trees_roots(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
struct scoutfs_log_trees *lt = val;
struct print_recursion_args *pa = arg;
@@ -757,14 +776,14 @@ static int print_log_trees_roots(struct scoutfs_key *key, u64 seq, u8 flags, voi
ret = err;
err = print_btree(pa->fd, pa->super, "", &lt->item_root,
print_fs_item, NULL);
print_logs_item, NULL);
if (err && !ret)
ret = err;
return ret;
}
static int print_srch_root_files(struct scoutfs_key *key, u64 seq, u8 flags, void *val,
static int print_srch_root_files(struct scoutfs_key *key, void *val,
unsigned val_len, void *arg)
{
struct print_recursion_args *pa = arg;
@@ -824,7 +843,7 @@ static int print_btree_leaf_items(int fd, struct scoutfs_super_block *super,
break;
continue;
} else {
func(key, le64_to_cpu(item->seq), item->flags, val, val_len, arg);
func(key, val, val_len, arg);
}
node = avl_next(&bt->item_root, node);
@@ -891,15 +910,13 @@ static int print_quorum_blocks(int fd, struct scoutfs_super_block *super)
printf("quorum blkno %llu (slot %llu)\n",
blkno, blkno - SCOUTFS_QUORUM_BLKNO);
print_block_header(&blk->hdr, SCOUTFS_BLOCK_SM_SIZE);
printf(" write_nr %llu\n", le64_to_cpu(blk->write_nr));
for (e = 0; e < array_size(event_names); e++) {
ev = &blk->events[e];
printf(" %12s: rid %016llx term %llu write_nr %llu ts %llu.%08u\n",
printf(" %12s: rid %016llx term %llu ts %llu.%08u\n",
event_names[e], le64_to_cpu(ev->rid), le64_to_cpu(ev->term),
le64_to_cpu(ev->write_nr), le64_to_cpu(ev->ts.sec),
le32_to_cpu(ev->ts.nsec));
le64_to_cpu(ev->ts.sec), le32_to_cpu(ev->ts.nsec));
}
}
@@ -928,13 +945,14 @@ static void print_super_block(struct scoutfs_super_block *super, u64 blkno)
printf("super blkno %llu\n", blkno);
print_block_header(&super->hdr, SCOUTFS_BLOCK_SM_SIZE);
printf(" fmt_vers %llu uuid %s\n",
le64_to_cpu(super->fmt_vers), uuid_str);
printf(" version %llx uuid %s\n",
le64_to_cpu(super->version), uuid_str);
printf(" flags: 0x%016llx\n", le64_to_cpu(super->flags));
/* XXX these are all in a crazy order */
printf(" next_ino %llu inode_count %llu seq %llu\n"
" total_meta_blocks %llu total_data_blocks %llu\n"
printf(" next_ino %llu seq %llu\n"
" total_meta_blocks %llu first_meta_blkno %llu last_meta_blkno %llu\n"
" total_data_blocks %llu first_data_blkno %llu last_data_blkno %llu\n"
" meta_alloc[0]: "ALCROOT_F"\n"
" meta_alloc[1]: "ALCROOT_F"\n"
" data_alloc: "ALCROOT_F"\n"
@@ -945,13 +963,17 @@ static void print_super_block(struct scoutfs_super_block *super, u64 blkno)
" fs_root: "BTR_FMT"\n"
" logs_root: "BTR_FMT"\n"
" log_merge: "BTR_FMT"\n"
" trans_seqs: "BTR_FMT"\n"
" mounted_clients: "BTR_FMT"\n"
" srch_root: "BTR_FMT"\n",
le64_to_cpu(super->next_ino),
le64_to_cpu(super->inode_count),
le64_to_cpu(super->seq),
le64_to_cpu(super->total_meta_blocks),
le64_to_cpu(super->first_meta_blkno),
le64_to_cpu(super->last_meta_blkno),
le64_to_cpu(super->total_data_blocks),
le64_to_cpu(super->first_data_blkno),
le64_to_cpu(super->last_data_blkno),
ALCROOT_A(&super->meta_alloc[0]),
ALCROOT_A(&super->meta_alloc[1]),
ALCROOT_A(&super->data_alloc),
@@ -962,6 +984,7 @@ static void print_super_block(struct scoutfs_super_block *super, u64 blkno)
BTR_ARG(&super->fs_root),
BTR_ARG(&super->logs_root),
BTR_ARG(&super->log_merge),
BTR_ARG(&super->trans_seqs),
BTR_ARG(&super->mounted_clients),
BTR_ARG(&super->srch_root));
@@ -1013,6 +1036,11 @@ static int print_volume(int fd)
if (err && !ret)
ret = err;
err = print_btree(fd, super, "trans_seqs", &super->trans_seqs,
print_trans_seqs_entry, NULL);
if (err && !ret)
ret = err;
err = print_btree(fd, super, "log_merge", &super->log_merge,
print_log_merge_item, NULL);
if (err && !ret)

View File

@@ -1,10 +0,0 @@
#include "sparse.h"
#include "util.h"
#include "format.h"
#include "quorum.h"
bool quorum_slot_present(struct scoutfs_super_block *super, int i)
{
return super->qconf.slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
}

View File

@@ -1,8 +0,0 @@
#ifndef _QUORUM_H_
#define _QUORUM_H_
#include <stdbool.h>
bool quorum_slot_present(struct scoutfs_super_block *super, int i);
#endif

View File

@@ -1,120 +0,0 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include <argp.h>
#include "sparse.h"
#include "parse.h"
#include "util.h"
#include "format.h"
#include "ioctl.h"
#include "cmd.h"
struct xattr_args {
char *path;
};
static int do_read_xattr_totals(struct xattr_args *args)
{
struct scoutfs_ioctl_read_xattr_totals rxt;
struct scoutfs_ioctl_xattr_total *xts = NULL;
struct scoutfs_ioctl_xattr_total *xt;
u64 bytes = 1024 * 1024;
int fd = -1;
int ret;
int i;
xts = malloc(bytes);
if (!xts) {
fprintf(stderr, "xattr total mem alloc failed\n");
ret = -ENOMEM;
goto out;
}
fd = get_path(args->path, O_RDONLY);
if (fd < 0) {
	ret = fd;
	goto out;
}
memset(&rxt, 0, sizeof(rxt));
rxt.totals_ptr = (unsigned long)xts;
rxt.totals_bytes = bytes;
for (;;) {
ret = ioctl(fd, SCOUTFS_IOC_READ_XATTR_TOTALS, &rxt);
if (ret == 0)
break;
if (ret < 0) {
ret = -errno;
fprintf(stderr, "read_xattr_totals ioctl failed: "
"%s (%d)\n", strerror(errno), errno);
goto out;
}
for (i = 0, xt = xts; i < ret; i++, xt++)
printf("%llu.%llu.%llu = %lld, %lld\n",
xt->name[0], xt->name[1], xt->name[2], xt->total, xt->count);
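/*
 * resume after the last returned name; stop once advancing the
 * three-part name wraps past the final possible name
 */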
memcpy(&rxt.pos_name, &xts[ret - 1].name, sizeof(rxt.pos_name));
if (++rxt.pos_name[2] == 0 && ++rxt.pos_name[1] == 0 && ++rxt.pos_name[0] == 0)
break;
}
ret = 0;
out:
if (fd >= 0)
close(fd);
free(xts);
return ret;
};
static int parse_opt(int key, char *arg, struct argp_state *state)
{
struct xattr_args *args = state->input;
switch (key) {
case 'p':
args->path = strdup_or_error(state, arg);
break;
default:
break;
}
return 0;
}
static struct argp_option options[] = {
{ "path", 'p', "PATH", 0, "Path to ScoutFS filesystem"},
{ NULL }
};
static struct argp argp = {
options,
parse_opt,
"",
"Print global value totals of .totl. xattrs"
};
static int read_xattr_totals_cmd(int argc, char **argv)
{
struct xattr_args xattr_args = {NULL};
int ret;
ret = argp_parse(&argp, argc, argv, 0, NULL, &xattr_args);
if (ret)
return ret;
return do_read_xattr_totals(&xattr_args);
}
static void __attribute__((constructor)) read_xattr_totals_ctor(void)
{
cmd_register_argp("read-xattr-totals", &argp, GROUP_INFO, read_xattr_totals_cmd);
}

View File

@@ -1,120 +0,0 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include <argp.h>
#include "sparse.h"
#include "parse.h"
#include "util.h"
#include "format.h"
#include "ioctl.h"
#include "cmd.h"
struct resize_args {
char *path;
u64 meta_size;
u64 data_size;
};
static int do_resize_devices(struct resize_args *args)
{
struct scoutfs_ioctl_resize_devices rd;
int ret;
int fd;
if (args->meta_size & SCOUTFS_BLOCK_LG_MASK) {
printf("metadata device size %llu is not a multiple of %u metadata block size, truncating down to %llu byte size\n",
args->meta_size, SCOUTFS_BLOCK_LG_SIZE,
args->meta_size & ~(u64)SCOUTFS_BLOCK_LG_MASK);
}
if (args->data_size & SCOUTFS_BLOCK_SM_MASK) {
printf("data device size %llu is not a multiple of %u data block size, truncating down to %llu byte size\n",
args->data_size, SCOUTFS_BLOCK_SM_SIZE,
args->data_size & ~(u64)SCOUTFS_BLOCK_SM_MASK);
}
fd = get_path(args->path, O_RDONLY);
if (fd < 0)
return fd;
rd.new_total_meta_blocks = args->meta_size >> SCOUTFS_BLOCK_LG_SHIFT;
rd.new_total_data_blocks = args->data_size >> SCOUTFS_BLOCK_SM_SHIFT;
ret = ioctl(fd, SCOUTFS_IOC_RESIZE_DEVICES, &rd);
if (ret < 0) {
ret = -errno;
fprintf(stderr, "resize_devices ioctl failed: %s (%d)\n", strerror(errno), errno);
}
close(fd);
return ret;
};
static int parse_opt(int key, char *arg, struct argp_state *state)
{
struct resize_args *args = state->input;
int ret;
switch (key) {
case 'm': /* meta-size */
{
ret = parse_human(arg, &args->meta_size);
if (ret)
return ret;
break;
}
case 'd': /* data-size */
{
ret = parse_human(arg, &args->data_size);
if (ret)
return ret;
break;
}
case 'p':
args->path = strdup_or_error(state, arg);
break;
default:
break;
}
return 0;
}
static struct argp_option options[] = {
{ "path", 'p', "PATH", 0, "Path to ScoutFS filesystem"},
{ "meta-size", 'm', "SIZE", 0, "New metadata device size (bytes or KMGTP units)"},
{ "data-size", 'd', "SIZE", 0, "New data device size (bytes or KMGTP units)"},
{ NULL }
};
static struct argp argp = {
options,
parse_opt,
"",
"Online resize of metadata and/or data devices",
};
static int resize_devices_cmd(int argc, char **argv)
{
struct resize_args resize_args = {NULL,};
int ret;
ret = argp_parse(&argp, argc, argv, 0, NULL, &resize_args);
if (ret)
return ret;
return do_resize_devices(&resize_args);
}
static void __attribute__((constructor)) resize_devices_ctor(void)
{
cmd_register_argp("resize-devices", &argp, GROUP_CORE, resize_devices_cmd);
}

View File

@@ -21,7 +21,6 @@
struct setattr_args {
char *filename;
struct timespec ctime;
struct timespec crtime;
u64 data_version;
u64 i_size;
bool offline;
@@ -43,8 +42,6 @@ static int do_setattr(struct setattr_args *args)
sm.ctime_sec = args->ctime.tv_sec;
sm.ctime_nsec = args->ctime.tv_nsec;
sm.crtime_sec = args->crtime.tv_sec;
sm.crtime_nsec = args->crtime.tv_nsec;
sm.data_version = args->data_version;
if (args->offline)
sm.flags |= SCOUTFS_IOC_SETATTR_MORE_OFFLINE;
@@ -76,11 +73,6 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
if (ret)
return ret;
break;
case 'r': /* timespec */
ret = parse_timespec(arg, &args->crtime);
if (ret)
return ret;
break;
case 'V': /* data version */
ret = parse_u64(arg, &args->data_version);
if (ret)
@@ -120,8 +112,7 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
}
static struct argp_option options[] = {
{ "ctime", 't', "TIMESPEC", 0, "Set change time using \"<seconds-since-epoch>.<nanoseconds>\" format"},
{ "crtime", 'r', "TIMESPEC", 0, "Set creation time using \"<seconds-since-epoch>.<nanoseconds>\" format"},
{ "ctime", 't', "TIMESPEC", 0, "Set creation time using \"<seconds-since-epoch>.<nanoseconds>\" format"},
{ "data-version", 'V', "VERSION", 0, "Set data version"},
{ "size", 's', "SIZE", 0, "Set file size (bytes or KMGTP units). Requires --data-version"},
{ "offline", 'o', NULL, 0, "Set file contents as offline, not sparse. Requires --size"},

View File

@@ -131,10 +131,12 @@ static int do_stat(struct stat_args *args)
if (args->is_inode) {
cmd = SCOUTFS_IOC_STAT_MORE;
fields = inode_fields;
st.stm.valid_bytes = sizeof(struct scoutfs_ioctl_stat_more);
pr = print_inode_field;
} else {
cmd = SCOUTFS_IOC_STATFS_MORE;
fields = fs_fields;
st.sfm.valid_bytes = sizeof(struct scoutfs_ioctl_statfs_more);
pr = print_fs_field;
}

View File

@@ -10,8 +10,6 @@
#include <wordexp.h>
#include "util.h"
#include "format.h"
#include "crc.h"
#define ENV_PATH "SCOUTFS_MOUNT_PATH"
@@ -79,16 +77,11 @@ int read_block(int fd, u64 blkno, int shift, void **ret_val)
void *buf;
int ret;
buf = NULL;
*ret_val = NULL;
ret = posix_memalign(&buf, size, size);
if (ret != 0) {
ret = -errno;
fprintf(stderr, "%zu byte aligned buffer allocation failed: %s (%d)\n",
size, strerror(errno), errno);
return ret;
}
buf = malloc(size);
if (!buf)
return -ENOMEM;
ret = pread(fd, buf, size, blkno << shift);
if (ret == -1) {
@@ -105,99 +98,3 @@ int read_block(int fd, u64 blkno, int shift, void **ret_val)
return 0;
}
}
int read_block_crc(int fd, u64 blkno, int shift, void **ret_val)
{
struct scoutfs_block_header *hdr;
size_t size = 1ULL << shift;
int ret;
u32 crc;
ret = read_block(fd, blkno, shift, ret_val);
if (ret == 0) {
hdr = *ret_val;
crc = crc_block(hdr, size);
if (crc != le32_to_cpu(hdr->crc)) {
fprintf(stderr, "crc of read blkno %llu failed, stored %08x != calculated %08x\n",
blkno, le32_to_cpu(hdr->crc), crc);
free(*ret_val);
*ret_val = NULL;
ret = -EIO;
}
}
return ret;
}
int read_block_verify(int fd, u32 magic, u64 fsid, u64 blkno, int shift, void **ret_val)
{
struct scoutfs_block_header *hdr = NULL;
int ret;
ret = read_block_crc(fd, blkno, shift, ret_val);
if (ret == 0) {
hdr = *ret_val;
ret = -EIO;
if (le32_to_cpu(hdr->magic) != magic)
fprintf(stderr, "read blkno %llu has bad magic %08x != expected %08x\n",
blkno, le32_to_cpu(hdr->magic), magic);
else if (fsid != 0 && le64_to_cpu(hdr->fsid) != fsid)
fprintf(stderr, "read blkno %llu has bad fsid %016llx != expected %016llx\n",
blkno, le64_to_cpu(hdr->fsid), fsid);
else if (le32_to_cpu(hdr->blkno) != blkno)
fprintf(stderr, "read blkno %llu has bad blkno %llu != expected %llu\n",
blkno, le64_to_cpu(hdr->blkno), blkno);
else
ret = 0;
if (ret < 0) {
free(*ret_val);
*ret_val = NULL;
}
}
return ret;
}
/*
* Update the block header fields and write out the block.
*/
int write_block(int fd, u32 magic, __le64 fsid, u64 seq, u64 blkno,
int shift, struct scoutfs_block_header *hdr)
{
size_t size = 1ULL << shift;
ssize_t ret;
hdr->magic = cpu_to_le32(magic);
hdr->fsid = fsid;
hdr->blkno = cpu_to_le64(blkno);
hdr->seq = cpu_to_le64(seq);
hdr->crc = cpu_to_le32(crc_block(hdr, size));
ret = pwrite(fd, hdr, size, blkno << shift);
if (ret != size) {
fprintf(stderr, "write to blkno %llu returned %zd: %s (%d)\n",
blkno, ret, strerror(errno), errno);
return -errno;
}
return 0;
}
int write_block_sync(int fd, u32 magic, __le64 fsid, u64 seq, u64 blkno,
int shift, struct scoutfs_block_header *hdr)
{
int ret = write_block(fd, magic, fsid, seq, blkno, shift, hdr);
if (ret != 0)
return ret;
if (fsync(fd)) {
ret = -errno;
fprintf(stderr, "fsync after write to blkno %llu failed: %s (%d)\n",
blkno, strerror(errno), errno);
return ret;
}
return 0;
}

View File

@@ -113,14 +113,6 @@ static inline int memcmp_lens(const void *a, int a_len,
int get_path(char *path, int flags);
int read_block(int fd, u64 blkno, int shift, void **ret_val);
int read_block_crc(int fd, u64 blkno, int shift, void **ret_val);
int read_block_verify(int fd, u32 magic, u64 fsid, u64 blkno, int shift, void **ret_val);
struct scoutfs_block_header;
int write_block(int fd, u32 magic, __le64 fsid, u64 seq, u64 blkno,
int shift, struct scoutfs_block_header *hdr);
int write_block_sync(int fd, u32 magic, __le64 fsid, u64 seq, u64 blkno,
int shift, struct scoutfs_block_header *hdr);
#define __stringify_1(x) #x
#define __stringify(x) __stringify_1(x)