Compare commits

..

15 Commits

Author SHA1 Message Date
Hunter Shaffer
7b121d9860 small btree balancing
While we are filling blocks, the final block may not have enough items
to properly fill it. Here we add a check that stops filling
the block if we have fewer than the minimum number of items.

Signed-off-by: Hunter Shaffer <hunter.shaffer@versity.com>
2025-05-27 14:26:08 -07:00
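
A toy model of the balancing rule described above; the capacity, minimum fill, and item count are illustrative values, not scoutfs constants:

    #include <stdio.h>

    /* Fill fixed-capacity blocks from a stream of items, but hold items
     * back when the remainder would leave the final block under the
     * minimum fill. */
    int main(void)
    {
        const int cap = 8, min_fill = 3, total = 17;
        int placed = 0, block = 0;

        while (placed < total) {
            int remaining = total - placed;
            int take = remaining < cap ? remaining : cap;

            /* stop filling early so the final block meets the minimum */
            if (remaining > cap && remaining - cap < min_fill)
                take = remaining - min_fill;

            printf("block %d gets %d items\n", block++, take);
            placed += take;
        }
        return 0;
    }

With total = 17 the blocks get 8, 6, and 3 items; naive filling would have produced 8, 8, and an underfull 1.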
Auke Kok
200b286b7b Restore crtime.
While doing this I noticed we attempt to restore data/meta_seq, but
that goes nowhere; it's just ignored.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-27 14:26:08 -07:00
Auke Kok
c759928602 Restore project ID, retention et al in restore_copy
We didn't migrate the extra data from inodes on folders before, which
was a gap in testing. Make sure to test with a nested restored folder
to verify that inheritance isn't in the way.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-27 14:26:08 -07:00
Hunter Shaffer
a1e8f38ef3 Limit scoutfs df output in testing
The test scripts for restore_copy and parallel_restore were producing
diffs because of the scoutfs df output. This happened because the
fields other than Used depend on the disk size used. This patch fixes
that by limiting the output to only the type and space used.

Signed-off-by: Hunter Shaffer <hunter.shaffer@versity.com>
2025-05-27 14:26:08 -07:00
Hunter Shaffer
aba4eb12ac Fix Quota hashing
When initializing the key for the quota we were originally given
the address of the pointer to the rule, that is fixed here.
There is also a test case verifying that we are able to perform
operations such as rule deletion and adding a rule to the restored
filesystem.

Signed-off-by: Hunter Shaffer <hunter.shaffer@versity.com>
2025-05-27 14:26:08 -07:00
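
A minimal user-space sketch of the bug class the commit describes: hashing the address of the pointer (&rp) rather than the rule it points to. The hash here is a generic FNV-1a stand-in, not the scoutfs quota hash:

    #include <stdint.h>
    #include <stdio.h>

    struct rule { uint64_t id; };

    /* generic FNV-1a, standing in for the real key hash */
    static uint64_t hash_bytes(const void *p, size_t len)
    {
        const unsigned char *b = p;
        uint64_t h = 1469598103934665603ULL;

        while (len--) {
            h ^= *b++;
            h *= 1099511628211ULL;
        }
        return h;
    }

    int main(void)
    {
        struct rule r = { .id = 42 };
        struct rule *rp = &r;

        /* the bug: hashes the pointer variable's own address */
        uint64_t wrong = hash_bytes(&rp, sizeof(rp));
        /* the fix: hashes the rule the pointer refers to */
        uint64_t right = hash_bytes(rp, sizeof(*rp));

        printf("wrong %016llx right %016llx\n",
               (unsigned long long)wrong, (unsigned long long)right);
        return 0;
    }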
Auke Kok
e4f5fc4682 Restore hardlink count.
The hardlink count of files was previously hard coded to 1. We want to
properly restore hard linked files because it saves space and time.

The test binary restore_copy previously exposed this missed case and is
updated to make use of it.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-27 14:26:08 -07:00
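
A sketch of how a restore walk can detect hardlinked files so they aren't all created with a link count of 1: remember each (st_dev, st_ino) pair and link to the first path on repeats. The fixed-size table and names here are illustrative, not the restore_copy implementation:

    #include <stdio.h>
    #include <sys/stat.h>

    #define MAX_SEEN 1024
    static struct { dev_t dev; ino_t ino; char path[256]; } seen[MAX_SEEN];
    static int nseen;

    /* return the first path seen for this inode, or record it */
    static const char *first_path(const struct stat *st, const char *path)
    {
        for (int i = 0; i < nseen; i++)
            if (seen[i].dev == st->st_dev && seen[i].ino == st->st_ino)
                return seen[i].path;
        if (nseen < MAX_SEEN) {
            seen[nseen].dev = st->st_dev;
            seen[nseen].ino = st->st_ino;
            snprintf(seen[nseen].path, sizeof(seen[nseen].path), "%s", path);
            nseen++;
        }
        return NULL;
    }

    static void restore_file(const char *path, const struct stat *st)
    {
        const char *first = st->st_nlink > 1 ? first_path(st, path) : NULL;

        if (first)
            printf("link %s -> %s\n", path, first);
        else
            printf("create %s\n", path);
    }

    int main(void)
    {
        struct stat st = { .st_dev = 1, .st_ino = 100, .st_nlink = 2 };

        restore_file("a/one", &st); /* first sight: create */
        restore_file("b/two", &st); /* same inode: link */
        return 0;
    }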
Auke Kok
a2eb157a1f Copy a tree using parallel restore library.
This tool copies a source tree (whether it's scoutfs or not)
into an offline scoutfs meta device. It takes only those 2 parameters
and does a single-process walk of the tree to restore all items
while preserving as much of the metadata as possible.

Signed-off-by: Hunter Shaffer <hunter.shaffer@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-27 14:26:08 -07:00
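
A rough sketch of the tool's shape: a single-process nftw() walk that visits every entry of the source tree with its metadata. The actual calls into the parallel restore library are elided:

    #define _XOPEN_SOURCE 500
    #include <ftw.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* visit each entry; a real tool would hand path + stat data to the
     * restore library here instead of printing */
    static int visit(const char *path, const struct stat *st, int type,
                     struct FTW *ftw)
    {
        (void)type;
        printf("%*s%s mode %o uid %d\n", ftw->level * 2, "",
               path, st->st_mode & 07777, (int)st->st_uid);
        return 0; /* nonzero stops the walk */
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <source-tree> <meta-device>\n", argv[0]);
            return 1;
        }
        /* FTW_PHYS: don't follow symlinks, restore them as links */
        return nftw(argv[1], visit, 64, FTW_PHYS) ? 1 : 0;
    }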
Zach Brown
758d5d64e7 Add test for parallel restore
This is the benchmark binary that bulk creates filesystem items and
xattrs, and is heavily threaded to scope the performance of the
library. The test script invokes it to validate some basic constraints.

Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Hunter Shaffer <hunter.shaffer@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-27 14:26:08 -07:00
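
An illustrative shape for that kind of benchmark, not the actual test binary: a few threads each bulk-create a private slice of files so the creation rate can be scoped:

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define THREADS 4
    #define PER_THREAD 1000 /* illustrative counts */

    static void *creator(void *arg)
    {
        long base = (long)arg * PER_THREAD;
        char name[64];

        for (long i = 0; i < PER_THREAD; i++) {
            snprintf(name, sizeof(name), "f-%ld", base + i);
            int fd = open(name, O_CREAT | O_EXCL | O_WRONLY, 0644);
            if (fd >= 0)
                close(fd);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[THREADS];

        for (long i = 0; i < THREADS; i++)
            pthread_create(&t[i], NULL, creator, (void *)i);
        for (int i = 0; i < THREADS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }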
Zach Brown
51dbb7248f Add parallel restore
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Hunter Shaffer <hunter.shaffer@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-27 14:26:08 -07:00
Auke Kok
b53288ffdc Superblock checks for meta and data dev.
We check the superblock magic, crc, and flags. The data device
superblock is checked too, but a little less thoroughly. We check
whether the device is still mounted, since that would make checking
invalid to begin with.
Quorum blocks are validated to have sane contents.

We add a global problem counter so we can trivially measure and
report whether any problem was found at all, instead of iterating
over all the problems and checking each individual count.

We pick the standard exit code values from `fsck` and mirror their
intended behavior. As a result, `fsck.scoutfs` can now be trivially
created by making it a wrapper around `scoutfs check`.

Signed-off-by: Auke Kok <auke.kok@versity.com>
Signed-off-by: Hunter Shaffer <hunter.shaffer@versity.com>
2025-05-27 14:26:08 -07:00
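
A sketch of the global problem counter and fsck-style exit codes described above, assuming the fsck(8) convention (0: no errors, 4: errors left uncorrected, 8: operational error); the problem() name comes from the message but the body is illustrative:

    #include <stdarg.h>
    #include <stdio.h>

    static unsigned long problems; /* bumped by every reported problem */

    static void problem(const char *fmt, ...)
    {
        va_list ap;

        problems++;
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
    }

    int main(void)
    {
        /* ... superblock, quorum block, and btree checks run here,
         * calling problem() for anything suspect ... */
        problem("bad magic in quorum block %d\n", 3); /* example finding */

        /* one counter answers "was anything wrong at all?" */
        return problems ? 4 : 0;
    }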
Auke Kok
17d99bdd0d Add man page content for check.
Adds basic man page content for the `check` subcommand.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-27 14:26:08 -07:00
Auke Kok
9d68c8bba7 Generic block header checks: crc, magic.
Generally, as we call block_get() we should validate that if the block
has a hdr, at a minimum the crc is correct, the magic value is the
expected value passed in, and the fsid matches the superblock. This
function implements just that. It returns -EINVAL and leaves it up to
the caller to report a problem() and handle the outcome. For now the
code just hard fails, which incidentally makes it fail the
clobber-repair.sh tests I wrote.

Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-27 14:26:08 -07:00
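
A user-space sketch of that validation under assumed types: the header layout and checksum are stand-ins for the scoutfs format and crc, and a caller would report a problem() when -EINVAL comes back:

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    struct hdr { uint32_t crc; uint32_t magic; uint64_t fsid; };

    /* stand-in checksum over the block past the crc field */
    static uint32_t crc_block(const unsigned char *blk, size_t size)
    {
        uint32_t sum = 0;

        for (size_t i = sizeof(uint32_t); i < size; i++)
            sum = sum * 31 + blk[i];
        return sum;
    }

    static int check_hdr(const unsigned char *blk, size_t size,
                         uint32_t magic, uint64_t sb_fsid)
    {
        const struct hdr *h = (const struct hdr *)blk;

        if (h->crc != crc_block(blk, size) ||
            h->magic != magic || h->fsid != sb_fsid)
            return -EINVAL; /* caller reports and handles the outcome */
        return 0;
    }

    int main(void)
    {
        unsigned char blk[64] = { 0 };
        struct hdr *h = (struct hdr *)blk;

        h->magic = 0x1234abcd; /* illustrative magic */
        h->fsid = 42;
        h->crc = crc_block(blk, sizeof(blk));

        printf("check_hdr: %d\n", check_hdr(blk, sizeof(blk), 0x1234abcd, 42));
        return 0;
    }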
Zach Brown
cfab835242 Add {read,write}-metadata-image scoutfs commands
Signed-off-by: Zach Brown <zab@versity.com>
2025-05-27 14:26:08 -07:00
Zach Brown
7c7b7e6eb9 Fix partial rename to check_meta_alloc
As I was committing the initial check command I had only partially
completed a rename of the function that checks the metadata allocators.

Signed-off-by: Zach Brown <zab@versity.com>
2025-05-27 14:26:08 -07:00
Zach Brown
47f3025680 Add check command
Signed-off-by: Zach Brown <zab@versity.com>
Signed-off-by: Auke Kok <auke.kok@versity.com>
2025-05-27 14:26:08 -07:00
95 changed files with 9383 additions and 2832 deletions

View File

@@ -1,62 +1,6 @@
Versity ScoutFS Release Notes
=============================
---
v1.26
*Nov 17, 2025*
Add the ino_alloc_per_lock mount option. This changes the number of
inode numbers allocated under each cluster lock and can alleviate lock
contention for some patterns of larger file creation.
Add the tcp_keepalive_timeout_ms mount option. This can enable the
system to survive longer periods of networking outages.
Fix a rare double free of internal btree metadata blocks when merging
log trees. The duplicated freed metadata block numbers would cause
persistent errors in the server, preventing the server from starting and
hanging the system.
Fix the data_wait interface to not require the correct data_version of
the inode when raising an error. This lets callers raise errors when
they're unable to recall the details of the inode to discover its
data_version.
Change scoutfs to more aggressively reclaim cached memory when under
memory pressure. This makes scoutfs behave more like other kernel
components and it integrates better with the reclaim policy heuristics
in the VM core of the kernel.
Change scoutfs to more efficiently transmit and receive socket messages.
Under heavy load this can process messages sufficiently more quickly to
avoid hung task messages for tasks that were waiting for cluster lock
messages to be processed.
Fix faulty server block commit budget calculations that were generating
spurious "holders exceeded alloc budget" console messages.
---
v1.25
*Jun 3, 2025*
Fix a bug that could cause indefinite retries of failed client commits.
Under specific error conditions the client and server's understanding of
the current client commit could get out of sync. The client would retry
commits indefinitely that could never succeed. This manifested as
infinite "critical transaction commit failure" messages in the kernel
log on the client and matching "error <nr> committing client logs" on
the server.
Fix a bug in a specific case of server error handling that could result
in sending references to unwritten blocks to the client. The client
would try to read blocks that hadn't been written and return spurious
errors. This was seen under low free space conditions on the server and
resulted in error messages with error code 116 (The errno enum for
ESTALE, the client's indication that it couldn't read the blocks that it
expected.)
---
v1.24

View File

@@ -5,6 +5,13 @@ ifeq ($(SK_KSRC),)
SK_KSRC := $(shell echo /lib/modules/`uname -r`/build)
endif
# fail if sparse fails if we find it
ifeq ($(shell sparse && echo found),found)
SP =
else
SP = @:
endif
SCOUTFS_GIT_DESCRIBE ?= \
$(shell git describe --all --abbrev=6 --long 2>/dev/null || \
echo no-git)
@@ -29,7 +36,9 @@ TARFILE = scoutfs-kmod-$(RPM_VERSION).tar
all: module
module:
$(MAKE) CHECK=$(CURDIR)/src/sparse-filtered.sh C=1 CF="-D__CHECK_ENDIAN__" $(SCOUTFS_ARGS)
$(MAKE) $(SCOUTFS_ARGS)
$(SP) $(MAKE) C=2 CF="-D__CHECK_ENDIAN__" $(SCOUTFS_ARGS)
modules_install:
$(MAKE) $(SCOUTFS_ARGS) modules_install

View File

@@ -158,6 +158,15 @@ ifneq (,$(shell grep 'sock_create_kern.*struct net' include/linux/net.h))
ccflags-y += -DKC_SOCK_CREATE_KERN_NET=1
endif
#
# v3.18-rc6-1619-gc0371da6047a
#
# iov_iter is now part of struct msghdr
#
ifneq (,$(shell grep 'struct iov_iter.*msg_iter' include/linux/socket.h))
ccflags-y += -DKC_MSGHDR_STRUCT_IOV_ITER=1
endif
#
# v4.17-rc6-7-g95582b008388
#
@@ -278,14 +287,6 @@ ifneq (,$(shell grep 'int ..mknod. .struct user_namespace' include/linux/fs.h))
ccflags-y += -DKC_VFS_METHOD_USER_NAMESPACE_ARG
endif
#
# v6.2-rc1-2-gabf08576afe3
#
# fs: vfs methods use struct mnt_idmap instead of struct user_namespace
ifneq (,$(shell grep 'int vfs_mknod.struct mnt_idmap' include/linux/fs.h))
ccflags-y += -DKC_VFS_METHOD_MNT_IDMAP_ARG
endif
#
# v5.17-rc2-21-g07888c665b40
#
@@ -433,56 +434,3 @@ endif
ifneq (,$(shell grep 'int ..remap_pages..struct vm_area_struct' include/linux/mm.h))
ccflags-y += -DKC_MM_REMAP_PAGES
endif
#
# v3.19-4742-g503c358cf192
#
# list_lru_shrink_count() and list_lru_shrink_walk() introduced
#
ifneq (,$(shell grep 'list_lru_shrink_count.*struct list_lru' include/linux/list_lru.h))
ccflags-y += -DKC_LIST_LRU_SHRINK_COUNT_WALK
endif
#
# v3.19-4757-g3f97b163207c
#
# lru_list_walk_cb lru arg added
#
ifneq (,$(shell grep 'struct list_head \*item, spinlock_t \*lock, void \*cb_arg' include/linux/list_lru.h))
ccflags-y += -DKC_LIST_LRU_WALK_CB_ITEM_LOCK
endif
#
# v6.7-rc4-153-g0a97c01cd20b
#
# list_lru_{add,del} -> list_lru_{add,del}_obj
#
ifneq (,$(shell grep '^bool list_lru_add_obj' include/linux/list_lru.h))
ccflags-y += -DKC_LIST_LRU_ADD_OBJ
endif
#
# v6.12-rc6-227-gda0c02516c50
#
# lru_list_walk_cb lock arg removed
#
ifneq (,$(shell grep 'struct list_lru_one \*list, spinlock_t \*lock, void \*cb_arg' include/linux/list_lru.h))
ccflags-y += -DKC_LIST_LRU_WALK_CB_LIST_LOCK
endif
#
# v5.1-rc4-273-ge9b98e162aa5
#
# introduce stack trace helpers
#
ifneq (,$(shell grep '^unsigned int stack_trace_save' include/linux/stacktrace.h))
ccflags-y += -DKC_STACK_TRACE_SAVE
endif
# v6.1-rc1-4-g7420332a6ff4
#
# .get_acl() method now has dentry arg (and mnt_idmap). The old get_acl has been renamed
# to get_inode_acl() and is still available as well, but has an extra rcu param.
ifneq (,$(shell grep 'struct posix_acl ...get_acl..struct mnt_idmap ., struct dentry' include/linux/fs.h))
ccflags-y += -DKC_GET_ACL_DENTRY
endif

View File

@@ -107,15 +107,8 @@ struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct s
return acl;
}
#ifdef KC_GET_ACL_DENTRY
struct posix_acl *scoutfs_get_acl(KC_VFS_NS_DEF
struct dentry *dentry, int type)
{
struct inode *inode = dentry->d_inode;
#else
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type)
{
#endif
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
struct posix_acl *acl;
@@ -208,15 +201,8 @@ out:
return ret;
}
#ifdef KC_GET_ACL_DENTRY
int scoutfs_set_acl(KC_VFS_NS_DEF
struct dentry *dentry, struct posix_acl *acl, int type)
{
struct inode *inode = dentry->d_inode;
#else
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
{
#endif
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
LIST_HEAD(ind_locks);
@@ -254,12 +240,7 @@ int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value,
if (!IS_POSIXACL(dentry->d_inode))
return -EOPNOTSUPP;
#ifdef KC_GET_ACL_DENTRY
acl = scoutfs_get_acl(KC_VFS_INIT_NS
dentry, type);
#else
acl = scoutfs_get_acl(dentry->d_inode, type);
#endif
if (IS_ERR(acl))
return PTR_ERR(acl);
if (acl == NULL)
@@ -305,11 +286,7 @@ int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *v
}
}
#ifdef KC_GET_ACL_DENTRY
ret = scoutfs_set_acl(KC_VFS_INIT_NS dentry, acl, type);
#else
ret = scoutfs_set_acl(dentry->d_inode, acl, type);
#endif
out:
posix_acl_release(acl);

View File

@@ -1,14 +1,9 @@
#ifndef _SCOUTFS_ACL_H_
#define _SCOUTFS_ACL_H_
#ifdef KC_GET_ACL_DENTRY
struct posix_acl *scoutfs_get_acl(KC_VFS_NS_DEF struct dentry *dentry, int type);
int scoutfs_set_acl(KC_VFS_NS_DEF struct dentry *dentry, struct posix_acl *acl, int type);
#else
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type);
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type);
#endif
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock);
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type);
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
struct scoutfs_lock *lock, struct list_head *ind_locks);
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER

View File

@@ -86,47 +86,18 @@ static u64 smallest_order_length(u64 len)
}
/*
* Moving an extent between trees can dirty blocks in several ways. This
* function calculates worst case number of blocks across these scenarios.
* We treat the alloc and free counts independently, so the values below are
* max(allocated, freed), not the sum.
*
* We track extents with two separate btree items: by block number and by size.
*
* If we're removing an extent from the btree (allocating), we can dirty
* two blocks if the keys are in different leaves. If we wind up merging
* leaves because we fall below the low water mark, we can wind up freeing
* three leaves.
*
* That sequence is as follows, assuming the original keys are removed from
* blocks A and B:
*
* Allocate new dirty A' and B'
* Free old stable A and B
* B' has fallen below the low water mark, so copy B' into A'
* Free B'
*
* An extent insertion (freeing an extent) can dirty up to five distinct items
* in the btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent:
*
* In the by-blkno portion of the btree, we can dirty (allocate for COW) up
* to two blocks- either by merging adjacent extents, which can cause us to
* join leaf blocks; or by an insertion that causes a split.
*
* In the by-size portion, we never merge extents, so normally we just dirty
* a single item with a size insertion. But if we merged adjacent extents in
* the by-blkno portion of the tree, we might be working with three by-size
* items: removing the two old ones that were combined in the merge; and
* adding the new one for the larger, merged size.
*
* Finally, dirtying the paths to these leaves can grow the tree and grow/shrink
* neighbours at each level, so we multiply by the height of the tree after
* accounting for a possible new level.
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
*/
static u32 extent_mod_blocks(u32 height)
{
return ((1 + height) * 3) * 5;
return ((1 + height) * 2) * 3;
}
/*
@@ -857,7 +828,7 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
.zone = SCOUTFS_FREE_EXTENT_ORDER_ZONE,
};
struct scoutfs_extent found;
struct scoutfs_extent ext = {0,};
struct scoutfs_extent ext;
u64 start;
u64 len;
int nr;
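
For a sense of scale, the worst-case budgets computed on the two sides of the extent_mod_blocks() hunk above work out like this for a few tree heights (for height 2, 45 blocks versus 18):

    #include <stdio.h>

    /* the formula the hunk removes and the one it adds */
    static unsigned budget_removed(unsigned h) { return ((1 + h) * 3) * 5; }
    static unsigned budget_added(unsigned h)   { return ((1 + h) * 2) * 3; }

    int main(void)
    {
        for (unsigned h = 1; h <= 4; h++)
            printf("height %u: %u vs %u blocks\n",
                   h, budget_removed(h), budget_added(h));
        return 0;
    }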

View File

@@ -22,8 +22,6 @@
#include <linux/rhashtable.h>
#include <linux/random.h>
#include <linux/sched/mm.h>
#include <linux/list_lru.h>
#include <linux/stacktrace.h>
#include "format.h"
#include "super.h"
@@ -40,12 +38,26 @@
* than the page size. Callers can have their own contexts for tracking
* dirty blocks that are written together. We pin dirty blocks in
* memory and only checksum them all as they're all written.
*
* Memory reclaim is driven by maintaining two very coarse groups of
* blocks. As we access blocks we mark them with an increasing counter
* to discourage them from being reclaimed. We then define a threshold
* at the current counter minus half the population. Recent blocks have
* a counter greater than the threshold, and all other blocks with
* counters less than it are considered older and are candidates for
* reclaim. This results in access updates rarely modifying an atomic
* counter as blocks need to be moved into the recent group, and shrink
* can randomly scan blocks looking for the half of the population that
* will be in the old group. It's reasonably effective, but is
* particularly efficient and avoids contention between concurrent
* accesses and shrinking.
*/
struct block_info {
struct super_block *sb;
atomic_t total_inserted;
atomic64_t access_counter;
struct rhashtable ht;
struct list_lru lru;
wait_queue_head_t waitq;
KC_DEFINE_SHRINKER(shrinker);
struct work_struct free_work;
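
A user-space model of the two-group scheme in the comment above: the threshold is the access counter minus half the inserted population, blocks stamped at or above it are recent, and everything below is a reclaim candidate. Plain integers stand in for the kernel atomics and the numbers are made up:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t access_counter = 1000; /* bumped by promoting accesses */
        int total_inserted = 64;        /* cached block population */
        uint64_t threshold = access_counter - (total_inserted >> 1); /* 968 */

        uint64_t accessed[] = { 999, 975, 940, 0 }; /* per-block stamps */

        for (int i = 0; i < 4; i++)
            printf("block %d (accessed %llu): %s\n", i,
                   (unsigned long long)accessed[i],
                   accessed[i] >= threshold ? "recent" : "old, reclaim candidate");
        return 0;
    }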
@@ -64,15 +76,28 @@ enum block_status_bits {
BLOCK_BIT_PAGE_ALLOC, /* page (possibly high order) allocation */
BLOCK_BIT_VIRT, /* mapped virt allocation */
BLOCK_BIT_CRC_VALID, /* crc has been verified */
BLOCK_BIT_ACCESSED, /* seen by lookup since last lru add/walk */
};
/*
* We want to tie atomic changes in refcounts to whether or not the
* block is still visible in the hash table, so we store the hash
* table's reference up at a known high bit. We could naturally set the
* inserted bit through excessive refcount increments. We don't do
* anything about that but at least warn if we get close.
*
* We're avoiding the high byte for no real good reason, just out of a
* historical fear of implementations that don't provide the full
* precision.
*/
#define BLOCK_REF_INSERTED (1U << 23)
#define BLOCK_REF_FULL (BLOCK_REF_INSERTED >> 1)
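
A toy view of this encoding: the hash table's reference sits at bit 23 and ordinary holders occupy the low bits, so one atomic carries both facts:

    #include <stdio.h>

    #define REF_INSERTED (1U << 23)
    #define REF_FULL     (REF_INSERTED >> 1)

    int main(void)
    {
        unsigned int refcount = REF_INSERTED + 3; /* inserted, 3 holders */

        printf("inserted %s, holders %u, near-full warn %s\n",
               refcount & REF_INSERTED ? "yes" : "no",
               refcount & ~REF_INSERTED,
               refcount & REF_FULL ? "yes" : "no");

        /* the "solo" value block_remove_solo() looks for: inserted plus
         * exactly one other reference */
        printf("solo value 0x%x\n", REF_INSERTED | 1);
        return 0;
    }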
struct block_private {
struct scoutfs_block bl;
struct super_block *sb;
atomic_t refcount;
u64 accessed;
struct rhash_head ht_head;
struct list_head lru_head;
struct list_head dirty_entry;
struct llist_node free_node;
unsigned long bits;
@@ -81,15 +106,13 @@ struct block_private {
struct page *page;
void *virt;
};
unsigned int stack_len;
unsigned long stack[10];
};
#define TRACE_BLOCK(which, bp) \
do { \
__typeof__(bp) _bp = (bp); \
trace_scoutfs_block_##which(_bp->sb, _bp, _bp->bl.blkno, atomic_read(&_bp->refcount), \
atomic_read(&_bp->io_count), _bp->bits); \
atomic_read(&_bp->io_count), _bp->bits, _bp->accessed); \
} while (0)
#define BLOCK_PRIVATE(_bl) \
@@ -103,17 +126,7 @@ static __le32 block_calc_crc(struct scoutfs_block_header *hdr, u32 size)
return cpu_to_le32(calc);
}
static noinline void save_block_stack(struct block_private *bp)
{
bp->stack_len = stack_trace_save(bp->stack, ARRAY_SIZE(bp->stack), 2);
}
static void print_block_stack(struct block_private *bp)
{
stack_trace_print(bp->stack, bp->stack_len, 1);
}
static noinline struct block_private *block_alloc(struct super_block *sb, u64 blkno)
static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
{
struct block_private *bp;
unsigned int nofs_flags;
@@ -163,13 +176,11 @@ static noinline struct block_private *block_alloc(struct super_block *sb, u64 bl
bp->bl.blkno = blkno;
bp->sb = sb;
atomic_set(&bp->refcount, 1);
INIT_LIST_HEAD(&bp->lru_head);
INIT_LIST_HEAD(&bp->dirty_entry);
set_bit(BLOCK_BIT_NEW, &bp->bits);
atomic_set(&bp->io_count, 0);
TRACE_BLOCK(allocate, bp);
save_block_stack(bp);
out:
if (!bp)
@@ -222,85 +233,32 @@ static void block_free_work(struct work_struct *work)
}
/*
* Users of blocks hold a refcount. If putting a refcount drops to zero
* then the block is freed.
*
* Acquiring new references and claiming the exclusive right to tear
* down a block is built around this LIVE_REFCOUNT_BASE refcount value.
* As blocks are initially cached they have the live base added to their
* refcount. Lookups will only increment the refcount and return blocks
* for reference holders while the refcount is >= than the base.
*
* To remove a block from the cache and eventually free it, either by
* the lru walk in the shrinker, or by reference holders, the live base
* is removed and turned into a normal refcount increment that will be
* put by the caller. This can only be done once for a block, and once
* its done lookup will not return any more references.
*/
#define LIVE_REFCOUNT_BASE (INT_MAX ^ (INT_MAX >> 1))
/*
* Inc the refcount while holding an incremented refcount. We can't
* have so many individual reference holders that they pass the live
* base.
* Get a reference to a block while holding an existing reference.
*/
static void block_get(struct block_private *bp)
{
int now = atomic_inc_return(&bp->refcount);
WARN_ON_ONCE((atomic_read(&bp->refcount) & ~BLOCK_REF_INSERTED) <= 0);
BUG_ON(now <= 1);
BUG_ON(now == LIVE_REFCOUNT_BASE);
atomic_inc(&bp->refcount);
}
/*
* if (*v >= u) {
* *v += a;
* return true;
* }
*/
static bool atomic_add_unless_less(atomic_t *v, int a, int u)
* Get a reference to a block as long as it's been inserted in the hash
* table and hasn't been removed.
*/
static struct block_private *block_get_if_inserted(struct block_private *bp)
{
int c;
int cnt;
do {
c = atomic_read(v);
if (c < u)
return false;
} while (atomic_cmpxchg(v, c, c + a) != c);
cnt = atomic_read(&bp->refcount);
WARN_ON_ONCE(cnt & BLOCK_REF_FULL);
if (!(cnt & BLOCK_REF_INSERTED))
return NULL;
return true;
}
} while (atomic_cmpxchg(&bp->refcount, cnt, cnt + 1) != cnt);
static bool block_get_if_live(struct block_private *bp)
{
return atomic_add_unless_less(&bp->refcount, 1, LIVE_REFCOUNT_BASE);
}
/*
* If the refcount still has the live base, subtract it and increment
* the callers refcount that they'll put.
*/
static bool block_get_remove_live(struct block_private *bp)
{
return atomic_add_unless_less(&bp->refcount, (1 - LIVE_REFCOUNT_BASE), LIVE_REFCOUNT_BASE);
}
/*
* Only get the live base refcount if it is the only refcount remaining.
* This means that there are no active refcount holders and the block
* can't be dirty or under IO, which both hold references.
*/
static bool block_get_remove_live_only(struct block_private *bp)
{
int c;
do {
c = atomic_read(&bp->refcount);
if (c != LIVE_REFCOUNT_BASE)
return false;
} while (atomic_cmpxchg(&bp->refcount, c, c - LIVE_REFCOUNT_BASE + 1) != c);
return true;
return bp;
}
/*
@@ -332,81 +290,143 @@ static const struct rhashtable_params block_ht_params = {
};
/*
* Insert the block into the cache so that it's visible for lookups.
* The caller can hold references (including for a dirty block).
*
* We make sure the base is added and the block is in the lru once it's
* in the hash. If hash table insertion fails it'll be briefly visible
* in the lru, but won't be isolated/evicted because we hold an
* incremented refcount in addition to the live base.
* Insert a new block into the hash table. Once it is inserted in the
* hash table readers can start getting references. The caller may have
* multiple refs but the block can't already be inserted.
*/
static int block_insert(struct super_block *sb, struct block_private *bp)
{
DECLARE_BLOCK_INFO(sb, binf);
int ret;
BUG_ON(atomic_read(&bp->refcount) >= LIVE_REFCOUNT_BASE);
atomic_add(LIVE_REFCOUNT_BASE, &bp->refcount);
smp_mb__after_atomic(); /* make sure live base is visible to list_lru walk */
list_lru_add_obj(&binf->lru, &bp->lru_head);
WARN_ON_ONCE(atomic_read(&bp->refcount) & BLOCK_REF_INSERTED);
retry:
atomic_add(BLOCK_REF_INSERTED, &bp->refcount);
ret = rhashtable_lookup_insert_fast(&binf->ht, &bp->ht_head, block_ht_params);
if (ret < 0) {
atomic_sub(BLOCK_REF_INSERTED, &bp->refcount);
if (ret == -EBUSY) {
/* wait for pending rebalance to finish */
synchronize_rcu();
goto retry;
} else {
atomic_sub(LIVE_REFCOUNT_BASE, &bp->refcount);
BUG_ON(atomic_read(&bp->refcount) >= LIVE_REFCOUNT_BASE);
list_lru_del_obj(&binf->lru, &bp->lru_head);
}
} else {
atomic_inc(&binf->total_inserted);
TRACE_BLOCK(insert, bp);
}
return ret;
}
/*
* Indicate to the lru walker that this block has been accessed since it
* was added or last walked.
*/
static void block_accessed(struct super_block *sb, struct block_private *bp)
static u64 accessed_recently(struct block_info *binf)
{
if (!test_and_set_bit(BLOCK_BIT_ACCESSED, &bp->bits))
scoutfs_inc_counter(sb, block_cache_access_update);
return atomic64_read(&binf->access_counter) - (atomic_read(&binf->total_inserted) >> 1);
}
/*
* Remove the block from the cache. When this returns the block won't
* be visible for additional references from lookup.
*
* We always try and remove from the hash table. It's safe to remove a
* block that isn't hashed, it just returns -ENOENT.
*
* This is racing with the lru walk in the shrinker also trying to
* remove idle blocks from the cache. They both try to remove the live
* refcount base and perform their removal and put if they get it.
* Make sure that a block that is being accessed is less likely to be
* reclaimed if it is seen by the shrinker. If the block hasn't been
* accessed recently we update its accessed value.
*/
static void block_remove(struct super_block *sb, struct block_private *bp)
static void block_accessed(struct super_block *sb, struct block_private *bp)
{
DECLARE_BLOCK_INFO(sb, binf);
rhashtable_remove_fast(&binf->ht, &bp->ht_head, block_ht_params);
if (block_get_remove_live(bp)) {
list_lru_del_obj(&binf->lru, &bp->lru_head);
block_put(sb, bp);
if (bp->accessed == 0 || bp->accessed < accessed_recently(binf)) {
scoutfs_inc_counter(sb, block_cache_access_update);
bp->accessed = atomic64_inc_return(&binf->access_counter);
}
}
/*
* The caller wants to remove the block from the hash table and has an
* idea what the refcount should be. If the refcount does still
* indicate that the block is hashed, and we're able to clear that bit,
* then we can remove it from the hash table.
*
* The caller makes sure that it's safe to be referencing this block,
* either with their own held reference (most everything) or by being in
* an rcu grace period (shrink).
*/
static bool block_remove_cnt(struct super_block *sb, struct block_private *bp, int cnt)
{
DECLARE_BLOCK_INFO(sb, binf);
int ret;
if ((cnt & BLOCK_REF_INSERTED) &&
(atomic_cmpxchg(&bp->refcount, cnt, cnt & ~BLOCK_REF_INSERTED) == cnt)) {
TRACE_BLOCK(remove, bp);
ret = rhashtable_remove_fast(&binf->ht, &bp->ht_head, block_ht_params);
WARN_ON_ONCE(ret); /* must have been inserted */
atomic_dec(&binf->total_inserted);
return true;
}
return false;
}
/*
* Try to remove the block from the hash table as long as the refcount
* indicates that it is still in the hash table. This can be racing
* with normal refcount changes so it might have to retry.
*/
static void block_remove(struct super_block *sb, struct block_private *bp)
{
int cnt;
do {
cnt = atomic_read(&bp->refcount);
} while ((cnt & BLOCK_REF_INSERTED) && !block_remove_cnt(sb, bp, cnt));
}
/*
* Take one shot at removing the block from the hash table if it's still
* in the hash table and the caller has the only other reference.
*/
static bool block_remove_solo(struct super_block *sb, struct block_private *bp)
{
return block_remove_cnt(sb, bp, BLOCK_REF_INSERTED | 1);
}
static bool io_busy(struct block_private *bp)
{
smp_rmb(); /* test after adding to wait queue */
return test_bit(BLOCK_BIT_IO_BUSY, &bp->bits);
}
/*
* Called during shutdown with no other users.
*/
static void block_remove_all(struct super_block *sb)
{
DECLARE_BLOCK_INFO(sb, binf);
struct rhashtable_iter iter;
struct block_private *bp;
rhashtable_walk_enter(&binf->ht, &iter);
rhashtable_walk_start(&iter);
for (;;) {
bp = rhashtable_walk_next(&iter);
if (bp == NULL)
break;
if (bp == ERR_PTR(-EAGAIN))
continue;
if (block_get_if_inserted(bp)) {
block_remove(sb, bp);
WARN_ON_ONCE(atomic_read(&bp->refcount) != 1);
block_put(sb, bp);
}
}
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
WARN_ON_ONCE(atomic_read(&binf->total_inserted) != 0);
}
/*
* XXX The io_count and sb fields in the block_private are only used
@@ -468,7 +488,7 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
int ret = 0;
if (scoutfs_forcing_unmount(sb))
return -ENOLINK;
return -EIO;
sector = bp->bl.blkno << (SCOUTFS_BLOCK_LG_SHIFT - 9);
@@ -523,10 +543,6 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
return ret;
}
/*
* Return a block with an elevated refcount if it was present in the
* hash table and its refcount didn't indicate that it was being freed.
*/
static struct block_private *block_lookup(struct super_block *sb, u64 blkno)
{
DECLARE_BLOCK_INFO(sb, binf);
@@ -534,8 +550,8 @@ static struct block_private *block_lookup(struct super_block *sb, u64 blkno)
rcu_read_lock();
bp = rhashtable_lookup(&binf->ht, &blkno, block_ht_params);
if (bp && !block_get_if_live(bp))
bp = NULL;
if (bp)
bp = block_get_if_inserted(bp);
rcu_read_unlock();
return bp;
@@ -696,8 +712,8 @@ retry:
ret = 0;
out:
if (!retried && !IS_ERR_OR_NULL(bp) && !block_is_dirty(bp) &&
(ret == -ESTALE || scoutfs_trigger(sb, BLOCK_REMOVE_STALE))) {
if ((ret == -ESTALE || scoutfs_trigger(sb, BLOCK_REMOVE_STALE)) &&
!retried && !block_is_dirty(bp)) {
retried = true;
scoutfs_inc_counter(sb, block_cache_remove_stale);
block_remove(sb, bp);
@@ -1062,106 +1078,100 @@ static unsigned long block_count_objects(struct shrinker *shrink, struct shrink_
struct super_block *sb = binf->sb;
scoutfs_inc_counter(sb, block_cache_count_objects);
return list_lru_shrink_count(&binf->lru, sc);
}
struct isolate_args {
struct super_block *sb;
struct list_head dispose;
};
#define DECLARE_ISOLATE_ARGS(sb_, name_) \
struct isolate_args name_ = { \
.sb = sb_, \
.dispose = LIST_HEAD_INIT(name_.dispose), \
}
static enum lru_status isolate_lru_block(struct list_head *item, struct list_lru_one *list,
void *cb_arg)
{
struct block_private *bp = container_of(item, struct block_private, lru_head);
struct isolate_args *ia = cb_arg;
TRACE_BLOCK(isolate, bp);
/* rotate accessed blocks to the tail of the list (lazy promotion) */
if (test_and_clear_bit(BLOCK_BIT_ACCESSED, &bp->bits)) {
scoutfs_inc_counter(ia->sb, block_cache_isolate_rotate);
return LRU_ROTATE;
}
/* any refs, including dirty/io, stop us from acquiring lru refcount */
if (!block_get_remove_live_only(bp)) {
scoutfs_inc_counter(ia->sb, block_cache_isolate_skip);
return LRU_SKIP;
}
scoutfs_inc_counter(ia->sb, block_cache_isolate_removed);
list_lru_isolate_move(list, &bp->lru_head, &ia->dispose);
return LRU_REMOVED;
}
static void shrink_dispose_blocks(struct super_block *sb, struct list_head *dispose)
{
struct block_private *bp;
struct block_private *bp__;
list_for_each_entry_safe(bp, bp__, dispose, lru_head) {
list_del_init(&bp->lru_head);
block_remove(sb, bp);
block_put(sb, bp);
}
return shrinker_min_long(atomic_read(&binf->total_inserted));
}
/*
* Remove a number of cached blocks that haven't been used recently.
*
* We don't maintain a strictly ordered LRU to avoid the contention of
* accesses always moving blocks around in some precise global
* structure.
*
* Instead we use counters to divide the blocks into two roughly equal
* groups by how recently they were accessed. We randomly walk all
* inserted blocks looking for any blocks in the older half to remove
* and free. The random walk and there being two groups means that we
* typically only walk a small multiple of the number we're looking for
* before we find them all.
*
* Our rcu walk of blocks can see blocks in all stages of their life
* cycle, from dirty blocks to those with 0 references that are queued
* for freeing. We only want to free idle inserted blocks so we
* atomically remove blocks when the only references are ours and the
* hash table.
*/
static unsigned long block_scan_objects(struct shrinker *shrink, struct shrink_control *sc)
{
struct block_info *binf = KC_SHRINKER_CONTAINER_OF(shrink, struct block_info);
struct super_block *sb = binf->sb;
DECLARE_ISOLATE_ARGS(sb, ia);
unsigned long freed;
struct rhashtable_iter iter;
struct block_private *bp;
bool stop = false;
unsigned long freed = 0;
unsigned long nr = sc->nr_to_scan;
u64 recently;
scoutfs_inc_counter(sb, block_cache_scan_objects);
freed = kc_list_lru_shrink_walk(&binf->lru, sc, isolate_lru_block, &ia);
shrink_dispose_blocks(sb, &ia.dispose);
return freed;
}
recently = accessed_recently(binf);
rhashtable_walk_enter(&binf->ht, &iter);
rhashtable_walk_start(&iter);
static enum lru_status dump_lru_block(struct list_head *item, struct list_lru_one *list,
void *cb_arg)
{
struct block_private *bp = container_of(item, struct block_private, lru_head);
/*
* This isn't great but I don't see a better way. We want to
* walk the hash from a random point so that we're not
* constantly walking over the same region that we've already
* freed old blocks within. The interface doesn't let us do
* this explicitly, but this seems to work? The difference this
* makes is enormous, around a few orders of magnitude fewer
* _nexts per shrink.
*/
if (iter.walker.tbl)
iter.slot = prandom_u32_max(iter.walker.tbl->size);
printk("blkno %llu refcount 0x%x io_count %d bits 0x%lx\n",
bp->bl.blkno, atomic_read(&bp->refcount), atomic_read(&bp->io_count),
bp->bits);
print_block_stack(bp);
while (nr > 0) {
bp = rhashtable_walk_next(&iter);
if (bp == NULL)
break;
if (bp == ERR_PTR(-EAGAIN)) {
/*
* We can be called from reclaim in the allocation
* to resize the hash table itself. We have to
* return so that the caller can proceed and
* enable hash table iteration again.
*/
scoutfs_inc_counter(sb, block_cache_shrink_stop);
stop = true;
break;
}
return LRU_SKIP;
}
scoutfs_inc_counter(sb, block_cache_shrink_next);
/*
* Called during shutdown with no other users. The isolating walk must
* find blocks on the lru that only have references for presence on the
* lru and in the hash table.
*/
static void block_shrink_all(struct super_block *sb)
{
DECLARE_BLOCK_INFO(sb, binf);
DECLARE_ISOLATE_ARGS(sb, ia);
long count;
if (bp->accessed >= recently) {
scoutfs_inc_counter(sb, block_cache_shrink_recent);
continue;
}
count = DIV_ROUND_UP(list_lru_count(&binf->lru), 128) * 2;
do {
kc_list_lru_walk(&binf->lru, isolate_lru_block, &ia, 128);
shrink_dispose_blocks(sb, &ia.dispose);
} while (list_lru_count(&binf->lru) > 0 && --count > 0);
count = list_lru_count(&binf->lru);
if (count > 0) {
scoutfs_err(sb, "failed to isolate/dispose %ld blocks", count);
kc_list_lru_walk(&binf->lru, dump_lru_block, sb, count);
if (block_get_if_inserted(bp)) {
if (block_remove_solo(sb, bp)) {
scoutfs_inc_counter(sb, block_cache_shrink_remove);
TRACE_BLOCK(shrink, bp);
freed++;
nr--;
}
block_put(sb, bp);
}
}
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
if (stop)
return SHRINK_STOP;
else
return freed;
}
struct sm_block_completion {
@@ -1200,7 +1210,7 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, blk_op
BUILD_BUG_ON(PAGE_SIZE < SCOUTFS_BLOCK_SM_SIZE);
if (scoutfs_forcing_unmount(sb))
return -ENOLINK;
return -EIO;
if (WARN_ON_ONCE(len > SCOUTFS_BLOCK_SM_SIZE) ||
WARN_ON_ONCE(!op_is_write(opf) && !blk_crc))
@@ -1266,7 +1276,7 @@ int scoutfs_block_write_sm(struct super_block *sb,
int scoutfs_block_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct block_info *binf = NULL;
struct block_info *binf;
int ret;
binf = kzalloc(sizeof(struct block_info), GFP_KERNEL);
@@ -1275,15 +1285,15 @@ int scoutfs_block_setup(struct super_block *sb)
goto out;
}
ret = list_lru_init(&binf->lru);
if (ret < 0)
goto out;
ret = rhashtable_init(&binf->ht, &block_ht_params);
if (ret < 0)
if (ret < 0) {
kfree(binf);
goto out;
}
binf->sb = sb;
atomic_set(&binf->total_inserted, 0);
atomic64_set(&binf->access_counter, 0);
init_waitqueue_head(&binf->waitq);
KC_INIT_SHRINKER_FUNCS(&binf->shrinker, block_count_objects,
block_scan_objects);
@@ -1295,10 +1305,8 @@ int scoutfs_block_setup(struct super_block *sb)
ret = 0;
out:
if (ret < 0 && binf) {
list_lru_destroy(&binf->lru);
kfree(binf);
}
if (ret)
scoutfs_block_destroy(sb);
return ret;
}
@@ -1310,10 +1318,9 @@ void scoutfs_block_destroy(struct super_block *sb)
if (binf) {
KC_UNREGISTER_SHRINKER(&binf->shrinker);
block_shrink_all(sb);
block_remove_all(sb);
flush_work(&binf->free_work);
rhashtable_destroy(&binf->ht);
list_lru_destroy(&binf->lru);
kfree(binf);
sbi->block_info = NULL;

View File

@@ -435,8 +435,8 @@ static int lookup_mounted_client_item(struct super_block *sb, u64 rid)
if (ret == -ENOENT)
ret = 0;
out:
kfree(super);
out:
return ret;
}

View File

@@ -26,15 +26,17 @@
EXPAND_COUNTER(block_cache_alloc_page_order) \
EXPAND_COUNTER(block_cache_alloc_virt) \
EXPAND_COUNTER(block_cache_end_io_error) \
EXPAND_COUNTER(block_cache_isolate_removed) \
EXPAND_COUNTER(block_cache_isolate_rotate) \
EXPAND_COUNTER(block_cache_isolate_skip) \
EXPAND_COUNTER(block_cache_forget) \
EXPAND_COUNTER(block_cache_free) \
EXPAND_COUNTER(block_cache_free_work) \
EXPAND_COUNTER(block_cache_remove_stale) \
EXPAND_COUNTER(block_cache_count_objects) \
EXPAND_COUNTER(block_cache_scan_objects) \
EXPAND_COUNTER(block_cache_shrink) \
EXPAND_COUNTER(block_cache_shrink_next) \
EXPAND_COUNTER(block_cache_shrink_recent) \
EXPAND_COUNTER(block_cache_shrink_remove) \
EXPAND_COUNTER(block_cache_shrink_stop) \
EXPAND_COUNTER(btree_compact_values) \
EXPAND_COUNTER(btree_compact_values_enomem) \
EXPAND_COUNTER(btree_delete) \
@@ -88,7 +90,6 @@
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
EXPAND_COUNTER(inode_deleted) \
EXPAND_COUNTER(item_cache_count_objects) \
EXPAND_COUNTER(item_cache_scan_objects) \
EXPAND_COUNTER(item_clear_dirty) \
@@ -116,11 +117,10 @@
EXPAND_COUNTER(item_pcpu_page_hit) \
EXPAND_COUNTER(item_pcpu_page_miss) \
EXPAND_COUNTER(item_pcpu_page_miss_keys) \
EXPAND_COUNTER(item_read_pages_barrier) \
EXPAND_COUNTER(item_read_pages_retry) \
EXPAND_COUNTER(item_read_pages_split) \
EXPAND_COUNTER(item_shrink_page) \
EXPAND_COUNTER(item_shrink_page_dirty) \
EXPAND_COUNTER(item_shrink_page_reader) \
EXPAND_COUNTER(item_shrink_page_trylock) \
EXPAND_COUNTER(item_update) \
EXPAND_COUNTER(item_write_dirty) \
@@ -145,7 +145,6 @@
EXPAND_COUNTER(lock_shrink_work) \
EXPAND_COUNTER(lock_unlock) \
EXPAND_COUNTER(lock_wait) \
EXPAND_COUNTER(log_merge_no_finalized) \
EXPAND_COUNTER(log_merge_wait_timeout) \
EXPAND_COUNTER(net_dropped_response) \
EXPAND_COUNTER(net_send_bytes) \
@@ -182,7 +181,6 @@
EXPAND_COUNTER(quorum_send_vote) \
EXPAND_COUNTER(quorum_server_shutdown) \
EXPAND_COUNTER(quorum_term_follower) \
EXPAND_COUNTER(reclaimed_open_logs) \
EXPAND_COUNTER(server_commit_hold) \
EXPAND_COUNTER(server_commit_queue) \
EXPAND_COUNTER(server_commit_worker) \

View File

@@ -2053,9 +2053,6 @@ const struct inode_operations scoutfs_dir_iops = {
#endif
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
#ifdef KC_GET_ACL_DENTRY
.set_acl = scoutfs_set_acl,
#endif
.symlink = scoutfs_symlink,
.permission = scoutfs_permission,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER

View File

@@ -470,7 +470,7 @@ struct scoutfs_srch_compact {
* @get_trans_seq, @commit_trans_seq: These pair of sequence numbers
* determine if a transaction is currently open for the mount that owns
* the log_trees struct. get_trans_seq is advanced by the server as the
* transaction is opened. The server sets commit_trans_seq equal to
* transaction is opened. The server sets comimt_trans_seq equal to
* get_ as the transaction is committed.
*/
struct scoutfs_log_trees {
@@ -1091,8 +1091,7 @@ enum scoutfs_net_cmd {
EXPAND_NET_ERRNO(ENOMEM) \
EXPAND_NET_ERRNO(EIO) \
EXPAND_NET_ERRNO(ENOSPC) \
EXPAND_NET_ERRNO(EINVAL) \
EXPAND_NET_ERRNO(ENOLINK)
EXPAND_NET_ERRNO(EINVAL)
#undef EXPAND_NET_ERRNO
#define EXPAND_NET_ERRNO(which) SCOUTFS_NET_ERR_##which,

View File

@@ -150,9 +150,6 @@ static const struct inode_operations scoutfs_file_iops = {
#endif
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
#ifdef KC_GET_ACL_DENTRY
.set_acl = scoutfs_set_acl,
#endif
.fiemap = scoutfs_data_fiemap,
};
@@ -166,9 +163,6 @@ static const struct inode_operations scoutfs_special_iops = {
#endif
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
#ifdef KC_GET_ACL_DENTRY
.set_acl = scoutfs_set_acl,
#endif
};
/*
@@ -1482,6 +1476,12 @@ static int remove_index_items(struct super_block *sb, u64 ino,
* Return an allocated and unused inode number. Returns -ENOSPC if
* we're out of inodes.
*
* Each parent directory has its own pool of free inode numbers. Items
* are sorted by their inode numbers as they're stored in segments.
* This will tend to group together files that are created in a
* directory at the same time in segments. Concurrent creation across
* different directories will be stored in their own regions.
*
* Inode numbers are never reclaimed. If the inode is evicted or we're
* unmounted the pending inode numbers will be lost. Asking for a
* relatively small number from the server each time will tend to
@@ -1491,18 +1491,12 @@ static int remove_index_items(struct super_block *sb, u64 ino,
int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret)
{
DECLARE_INODE_SB_INFO(sb, inf);
struct scoutfs_mount_options opts;
struct inode_allocator *ia;
u64 ino;
u64 nr;
int ret;
scoutfs_options_read(sb, &opts);
if (is_dir && opts.ino_alloc_per_lock == SCOUTFS_LOCK_INODE_GROUP_NR)
ia = &inf->dir_ino_alloc;
else
ia = &inf->ino_alloc;
ia = is_dir ? &inf->dir_ino_alloc : &inf->ino_alloc;
spin_lock(&ia->lock);
@@ -1523,17 +1517,6 @@ int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret)
*ino_ret = ia->ino++;
ia->nr--;
if (opts.ino_alloc_per_lock != SCOUTFS_LOCK_INODE_GROUP_NR) {
nr = ia->ino & SCOUTFS_LOCK_INODE_GROUP_MASK;
if (nr >= opts.ino_alloc_per_lock) {
nr = SCOUTFS_LOCK_INODE_GROUP_NR - nr;
if (nr > ia->nr)
nr = ia->nr;
ia->ino += nr;
ia->nr -= nr;
}
}
spin_unlock(&ia->lock);
ret = 0;
out:
@@ -1871,9 +1854,6 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
goto out;
ret = delete_inode_items(sb, ino, &sinode, lock, orph_lock);
if (ret == 0)
scoutfs_inc_counter(sb, inode_deleted);
out:
if (clear_trying)
clear_bit(bit_nr, ldata->trying);
@@ -1982,8 +1962,6 @@ static void iput_worker(struct work_struct *work)
while (count-- > 0)
iput(inode);
cond_resched();
/* can't touch inode after final iput */
spin_lock(&inf->iput_lock);
@@ -2205,7 +2183,7 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
struct scoutfs_inode_info *si;
struct scoutfs_inode_info *tmp;
struct inode *inode;
int ret = 0;
int ret;
spin_lock(&inf->writeback_lock);

View File

@@ -441,6 +441,8 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
if (!S_ISREG(inode->i_mode)) {
ret = -EINVAL;
} else if (scoutfs_inode_data_version(inode) != args.data_version) {
ret = -ESTALE;
} else {
ret = scoutfs_data_wait_err(inode, sblock, eblock, args.op,
args.err);
@@ -952,9 +954,6 @@ static int copy_alloc_detail_to_user(struct super_block *sb, void *arg,
if (args->copied == args->nr)
return -EOVERFLOW;
/* .type and .pad need clearing */
memset(&ade, 0, sizeof(struct scoutfs_ioctl_alloc_detail_entry));
ade.blocks = blocks;
ade.id = id;
ade.meta = !!meta;
@@ -1370,7 +1369,7 @@ static long scoutfs_ioc_get_referring_entries(struct file *file, unsigned long a
ent.d_type = bref->d_type;
ent.name_len = name_len;
if (copy_to_user(uent, &ent, offsetof(struct scoutfs_ioctl_dirent, name[0])) ||
if (copy_to_user(uent, &ent, sizeof(struct scoutfs_ioctl_dirent)) ||
copy_to_user(&uent->name[0], bref->dent.name, name_len) ||
put_user('\0', &uent->name[name_len])) {
ret = -EFAULT;

View File

@@ -366,15 +366,10 @@ struct scoutfs_ioctl_statfs_more {
*
* Find current waiters that match the inode, op, and block range to wake
* up and return an error.
*
* (*) ca. v1.25 and earlier required that the data_version passed match
* that of the waiter, but this check is removed. It was never needed
* because no data is modified during this ioctl. Any data_version value
* here is thus since then ignored.
*/
struct scoutfs_ioctl_data_wait_err {
__u64 ino;
__u64 data_version; /* Ignored, see above (*) */
__u64 data_version;
__u64 offset;
__u64 count;
__u64 op;

View File

@@ -86,8 +86,6 @@ struct item_cache_info {
/* often walked, but per-cpu refs are fast path */
rwlock_t rwlock;
struct rb_root pg_root;
/* stop readers from caching stale items behind reclaimed cleaned written items */
u64 read_dirty_barrier;
/* page-granular modification by writers, then exclusive to commit */
spinlock_t dirty_lock;
@@ -98,6 +96,10 @@ struct item_cache_info {
spinlock_t lru_lock;
struct list_head lru_list;
unsigned long lru_pages;
/* written by page readers, read by shrink */
spinlock_t active_lock;
struct list_head active_list;
};
#define DECLARE_ITEM_CACHE_INFO(sb, name) \
@@ -1283,6 +1285,78 @@ static int cache_empty_page(struct super_block *sb,
return 0;
}
/*
* Readers operate independently from dirty items and transactions.
* They read a set of persistent items and insert them into the cache
* when there aren't already pages whose key range contains the items.
* This naturally prefers cached dirty items over stale read items.
*
* We have to deal with the case where dirty items are written and
* invalidated while a read is in flight. The reader won't have seen
* the items that were dirty in their persistent roots as they started
* reading. By the time they insert their read pages the previously
* dirty items have been reclaimed and are not in the cache. The old
* stale items will be inserted in their place, effectively corrupting
* by having the dirty items disappear.
*
* We fix this by tracking the max seq of items in pages. As readers
* start they record the current transaction seq. Invalidation skips
* pages with a max seq greater than the first reader seq because the
* items in the page have to stick around to prevent the reader's stale
* items from being inserted.
*
* This naturally only affects a small set of pages with items that were
* written relatively recently. If we're in memory pressure then we
* probably have a lot of pages and they'll naturally have items that
* were visible to any readers. We don't bother with the complicated and
* expensive further refinement of tracking the ranges that are being
* read and comparing those with pages to invalidate.
*/
struct active_reader {
struct list_head head;
u64 seq;
};
#define INIT_ACTIVE_READER(rdr) \
struct active_reader rdr = { .head = LIST_HEAD_INIT(rdr.head) }
static void add_active_reader(struct super_block *sb, struct active_reader *active)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
BUG_ON(!list_empty(&active->head));
active->seq = scoutfs_trans_sample_seq(sb);
spin_lock(&cinf->active_lock);
list_add_tail(&active->head, &cinf->active_list);
spin_unlock(&cinf->active_lock);
}
static u64 first_active_reader_seq(struct item_cache_info *cinf)
{
struct active_reader *active;
u64 first;
/* only the calling task adds or deletes this active */
spin_lock(&cinf->active_lock);
active = list_first_entry_or_null(&cinf->active_list, struct active_reader, head);
first = active ? active->seq : U64_MAX;
spin_unlock(&cinf->active_lock);
return first;
}
static void del_active_reader(struct item_cache_info *cinf, struct active_reader *active)
{
/* only the calling task adds or deletes this active */
if (!list_empty(&active->head)) {
spin_lock(&cinf->active_lock);
list_del_init(&active->head);
spin_unlock(&cinf->active_lock);
}
}
/*
* Add a newly read item to the pages that we're assembling for
* insertion into the cache. These pages are private, they only exist
@@ -1376,34 +1450,24 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
* and duplicates, we insert any resulting pages which don't overlap
* with existing cached pages.
*
* The forest item reader is reading stable trees that could be
* overwritten. It can return -ESTALE which we return to the caller who
* will retry the operation and work with a new set of more recent
* btrees.
*
* We only insert uncached regions because this is called with cluster
* locks held, but without locking the cache. The regions we read can
* be stale with respect to the current cache, which can be read and
* dirtied by other cluster lock holders on our node, but the cluster
* locks protect the stable items we read.
* locks protect the stable items we read. Invalidation is careful not
* to drop pages that have items that we couldn't see because they were
* dirty when we started reading.
*
* Using the presence of locally written dirty pages to override stale
* read pages only works if, well, the more recent locally written pages
* are still present. Readers are totally decoupled from writers and
* can have a set of items that is very old indeed. In the mean time
* more recent items would have been dirtied locally, committed,
* cleaned, and reclaimed. We have a coarse barrier which ensures that
* readers can't insert items read from old roots from before local data
* was written. If a write completes while a read is in progress the
* read will have to retry. The retried read can use cached blocks so
* we're relying on reads being much faster than writes to reduce the
* overhead to mostly cpu work of recollecting the items from cached
* blocks via a more recent root from the server.
* The forest item reader is reading stable trees that could be
* overwritten. It can return -ESTALE which we return to the caller who
* will retry the operation and work with a new set of more recent
* btrees.
*/
static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
struct scoutfs_key *key, struct scoutfs_lock *lock)
{
struct rb_root root = RB_ROOT;
INIT_ACTIVE_READER(active);
struct cached_page *right = NULL;
struct cached_page *pg;
struct cached_page *rd;
@@ -1416,7 +1480,6 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
struct rb_node *par;
struct rb_node *pg_tmp;
struct rb_node *item_tmp;
u64 rdbar;
int pgi;
int ret;
@@ -1430,9 +1493,8 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
pg->end = lock->end;
rbtree_insert(&pg->node, NULL, &root.rb_node, &root);
read_lock(&cinf->rwlock);
rdbar = cinf->read_dirty_barrier;
read_unlock(&cinf->rwlock);
/* set active reader seq before reading persistent roots */
add_active_reader(sb, &active);
start = lock->start;
end = lock->end;
@@ -1471,13 +1533,6 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
retry:
write_lock(&cinf->rwlock);
/* can't insert if write has cleaned since we read */
if (cinf->read_dirty_barrier != rdbar) {
scoutfs_inc_counter(sb, item_read_pages_barrier);
ret = -ESTALE;
goto unlock;
}
while ((rd = first_page(&root))) {
pg = page_rbtree_walk(sb, &cinf->pg_root, &rd->start, &rd->end,
@@ -1515,12 +1570,12 @@ retry:
}
}
ret = 0;
unlock:
write_unlock(&cinf->rwlock);
ret = 0;
out:
del_active_reader(cinf, &active);
/* free any pages we left dangling on error */
for_each_page_safe(&root, rd, pg_tmp) {
rbtree_erase(&rd->node, &root);
@@ -1580,7 +1635,6 @@ retry:
ret = read_pages(sb, cinf, key, lock);
if (ret < 0 && ret != -ESTALE)
goto out;
scoutfs_inc_counter(sb, item_read_pages_retry);
goto retry;
}
@@ -2347,12 +2401,6 @@ out:
* The caller has successfully committed all the dirty btree blocks that
* contained the currently dirty items. Clear all the dirty items and
* pages.
*
* This strange lock/trylock loop comes from sparse issuing spurious
* mismatched context warnings if we do anything (like unlock and relax)
* in the else branch of the failed trylock. We're jumping through
* hoops to not use the else but still drop and reacquire the dirty_lock
* if the trylock fails.
*/
int scoutfs_item_write_done(struct super_block *sb)
{
@@ -2361,35 +2409,40 @@ int scoutfs_item_write_done(struct super_block *sb)
struct cached_item *tmp;
struct cached_page *pg;
/* don't let read_pages miss written+cleaned items */
write_lock(&cinf->rwlock);
cinf->read_dirty_barrier++;
write_unlock(&cinf->rwlock);
retry:
spin_lock(&cinf->dirty_lock);
while ((pg = list_first_entry_or_null(&cinf->dirty_list, struct cached_page, dirty_head))) {
if (write_trylock(&pg->rwlock)) {
while ((pg = list_first_entry_or_null(&cinf->dirty_list,
struct cached_page,
dirty_head))) {
if (!write_trylock(&pg->rwlock)) {
spin_unlock(&cinf->dirty_lock);
list_for_each_entry_safe(item, tmp, &pg->dirty_list,
dirty_head) {
clear_item_dirty(sb, cinf, pg, item);
if (item->delta)
scoutfs_inc_counter(sb, item_delta_written);
/* free deletion items */
if (item->deletion || item->delta)
erase_item(pg, item);
else
item->persistent = 1;
}
write_unlock(&pg->rwlock);
spin_lock(&cinf->dirty_lock);
cpu_relax();
goto retry;
}
spin_unlock(&cinf->dirty_lock);
list_for_each_entry_safe(item, tmp, &pg->dirty_list,
dirty_head) {
clear_item_dirty(sb, cinf, pg, item);
if (item->delta)
scoutfs_inc_counter(sb, item_delta_written);
/* free deletion items */
if (item->deletion || item->delta)
erase_item(pg, item);
else
item->persistent = 1;
}
write_unlock(&pg->rwlock);
spin_lock(&cinf->dirty_lock);
} while (pg);
}
spin_unlock(&cinf->dirty_lock);
return 0;
@@ -2544,15 +2597,24 @@ static unsigned long item_cache_scan_objects(struct shrinker *shrink,
struct cached_page *tmp;
struct cached_page *pg;
unsigned long freed = 0;
u64 first_reader_seq;
int nr = sc->nr_to_scan;
scoutfs_inc_counter(sb, item_cache_scan_objects);
/* can't invalidate pages with items that weren't visible to first reader */
first_reader_seq = first_active_reader_seq(cinf);
write_lock(&cinf->rwlock);
spin_lock(&cinf->lru_lock);
list_for_each_entry_safe(pg, tmp, &cinf->lru_list, lru_head) {
if (first_reader_seq <= pg->max_seq) {
scoutfs_inc_counter(sb, item_shrink_page_reader);
continue;
}
if (!write_trylock(&pg->rwlock)) {
scoutfs_inc_counter(sb, item_shrink_page_trylock);
continue;
@@ -2619,6 +2681,8 @@ int scoutfs_item_setup(struct super_block *sb)
atomic_set(&cinf->dirty_pages, 0);
spin_lock_init(&cinf->lru_lock);
INIT_LIST_HEAD(&cinf->lru_list);
spin_lock_init(&cinf->active_lock);
INIT_LIST_HEAD(&cinf->active_list);
cinf->pcpu_pages = alloc_percpu(struct item_percpu_pages);
if (!cinf->pcpu_pages)
@@ -2651,6 +2715,8 @@ void scoutfs_item_destroy(struct super_block *sb)
int cpu;
if (cinf) {
BUG_ON(!list_empty(&cinf->active_list));
#ifdef KC_CPU_NOTIFIER
unregister_hotcpu_notifier(&cinf->notifier);
#endif

View File

@@ -81,69 +81,3 @@ kc_generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
return written ? written : status;
}
#endif
#include <linux/list_lru.h>
#ifdef KC_LIST_LRU_WALK_CB_ITEM_LOCK
static enum lru_status kc_isolate(struct list_head *item, spinlock_t *lock, void *cb_arg)
{
struct kc_isolate_args *args = cb_arg;
/* isolate doesn't use list, nr_items updated in caller */
return args->isolate(item, NULL, args->cb_arg);
}
unsigned long kc_list_lru_walk(struct list_lru *lru, kc_list_lru_walk_cb_t isolate, void *cb_arg,
unsigned long nr_to_walk)
{
struct kc_isolate_args args = {
.isolate = isolate,
.cb_arg = cb_arg,
};
return list_lru_walk(lru, kc_isolate, &args, nr_to_walk);
}
unsigned long kc_list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
kc_list_lru_walk_cb_t isolate, void *cb_arg)
{
struct kc_isolate_args args = {
.isolate = isolate,
.cb_arg = cb_arg,
};
return list_lru_shrink_walk(lru, sc, kc_isolate, &args);
}
#endif
#ifdef KC_LIST_LRU_WALK_CB_LIST_LOCK
static enum lru_status kc_isolate(struct list_head *item, struct list_lru_one *list,
spinlock_t *lock, void *cb_arg)
{
struct kc_isolate_args *args = cb_arg;
return args->isolate(item, list, args->cb_arg);
}
unsigned long kc_list_lru_walk(struct list_lru *lru, kc_list_lru_walk_cb_t isolate, void *cb_arg,
unsigned long nr_to_walk)
{
struct kc_isolate_args args = {
.isolate = isolate,
.cb_arg = cb_arg,
};
return list_lru_walk(lru, kc_isolate, &args, nr_to_walk);
}
unsigned long kc_list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
kc_list_lru_walk_cb_t isolate, void *cb_arg)
{
struct kc_isolate_args args = {
.isolate = isolate,
.cb_arg = cb_arg,
};
return list_lru_shrink_walk(lru, sc, kc_isolate, &args);
}
#endif

View File

@@ -263,11 +263,6 @@ typedef unsigned int blk_opf_t;
#define kc__vmalloc __vmalloc
#endif
#ifdef KC_VFS_METHOD_MNT_IDMAP_ARG
#define KC_VFS_NS_DEF struct mnt_idmap *mnt_idmap,
#define KC_VFS_NS mnt_idmap,
#define KC_VFS_INIT_NS &nop_mnt_idmap,
#else
#ifdef KC_VFS_METHOD_USER_NAMESPACE_ARG
#define KC_VFS_NS_DEF struct user_namespace *mnt_user_ns,
#define KC_VFS_NS mnt_user_ns,
@@ -277,7 +272,6 @@ typedef unsigned int blk_opf_t;
#define KC_VFS_NS
#define KC_VFS_INIT_NS
#endif
#endif /* KC_VFS_METHOD_MNT_IDMAP_ARG */
#ifdef KC_BIO_ALLOC_DEV_OPF_ARGS
#define kc_bio_alloc bio_alloc
@@ -416,77 +410,4 @@ static inline vm_fault_t vmf_error(int err)
}
#endif
#include <linux/list_lru.h>
#ifndef KC_LIST_LRU_SHRINK_COUNT_WALK
/* we don't bother with sc->{nid,memcg} (which don't exist in the oldest kernels) */
static inline unsigned long list_lru_shrink_count(struct list_lru *lru,
struct shrink_control *sc)
{
return list_lru_count(lru);
}
static inline unsigned long
list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
list_lru_walk_cb isolate, void *cb_arg)
{
return list_lru_walk(lru, isolate, cb_arg, sc->nr_to_scan);
}
#endif
#ifndef KC_LIST_LRU_ADD_OBJ
#define list_lru_add_obj list_lru_add
#define list_lru_del_obj list_lru_del
#endif
#if defined(KC_LIST_LRU_WALK_CB_LIST_LOCK) || defined(KC_LIST_LRU_WALK_CB_ITEM_LOCK)
struct list_lru_one;
typedef enum lru_status (*kc_list_lru_walk_cb_t)(struct list_head *item, struct list_lru_one *list,
void *cb_arg);
struct kc_isolate_args {
kc_list_lru_walk_cb_t isolate;
void *cb_arg;
};
unsigned long kc_list_lru_walk(struct list_lru *lru, kc_list_lru_walk_cb_t isolate, void *cb_arg,
unsigned long nr_to_walk);
unsigned long kc_list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
kc_list_lru_walk_cb_t isolate, void *cb_arg);
#else
#define kc_list_lru_shrink_walk list_lru_shrink_walk
#endif
#if defined(KC_LIST_LRU_WALK_CB_ITEM_LOCK)
/* the isolate cb moves the item by hand; the walk updates nr_items when _REMOVED is returned */
static inline void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
struct list_head *head)
{
list_move(item, head);
}
#endif
#ifndef KC_STACK_TRACE_SAVE
#include <linux/stacktrace.h>
static inline unsigned int stack_trace_save(unsigned long *store, unsigned int size,
unsigned int skipnr)
{
struct stack_trace trace = {
.entries = store,
.max_entries = size,
.skip = skipnr,
};
save_stack_trace(&trace);
return trace.nr_entries;
}
static inline void stack_trace_print(unsigned long *entries, unsigned int nr_entries, int spaces)
{
struct stack_trace trace = {
.entries = entries,
.nr_entries = nr_entries,
};
print_stack_trace(&trace, spaces);
}
#endif
#endif

View File

@@ -168,6 +168,7 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
enum scoutfs_lock_mode prev, enum scoutfs_lock_mode mode)
{
struct scoutfs_lock_coverage *cov;
struct scoutfs_lock_coverage *tmp;
u64 ino, last;
int ret = 0;
@@ -191,22 +192,19 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
/* have to invalidate if we're not in the only usable case */
if (!(prev == SCOUTFS_LOCK_WRITE && mode == SCOUTFS_LOCK_READ)) {
/*
* Remove cov items to tell users that their cache is
* stale. The unlock pattern avoids bad sparse warnings
* that come from taking the else branch of a failed trylock.
*/
retry:
/* remove cov items to tell users that their cache is stale */
spin_lock(&lock->cov_list_lock);
while ((cov = list_first_entry_or_null(&lock->cov_list,
struct scoutfs_lock_coverage, head))) {
if (spin_trylock(&cov->cov_lock)) {
list_del_init(&cov->head);
cov->lock = NULL;
spin_unlock(&cov->cov_lock);
scoutfs_inc_counter(sb, lock_invalidate_coverage);
list_for_each_entry_safe(cov, tmp, &lock->cov_list, head) {
if (!spin_trylock(&cov->cov_lock)) {
spin_unlock(&lock->cov_list_lock);
cpu_relax();
goto retry;
}
spin_unlock(&lock->cov_list_lock);
spin_lock(&lock->cov_list_lock);
list_del_init(&cov->head);
cov->lock = NULL;
spin_unlock(&cov->cov_lock);
scoutfs_inc_counter(sb, lock_invalidate_coverage);
}
spin_unlock(&lock->cov_list_lock);

View File

@@ -35,12 +35,6 @@ do { \
} \
} while (0) \
#define scoutfs_bug_on_err(sb, err, fmt, args...) \
do { \
__typeof__(err) _err = (err); \
scoutfs_bug_on(sb, _err < 0 && _err != -ENOLINK, fmt, ##args); \
} while (0)
/*
* Each message is only generated once per volume. Remounting resets
* the messages.
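A once-per-volume gate like the one described is typically a per-superblock flag tested atomically, cleared again at remount. A hypothetical self-contained sketch (the driver's actual field names are not shown in this hunk):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* hypothetical: one flag per message id, zeroed at (re)mount */
static atomic_bool printed[4];

static bool first_time(int msg_nr)
{
	/* atomic exchange: only the first caller since the last
	 * reset sees the old value of false */
	return !atomic_exchange(&printed[msg_nr], true);
}

int main(void)
{
	for (int i = 0; i < 3; i++)
		if (first_time(0))
			puts("printed once per volume");
	return 0;
}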

View File

@@ -20,7 +20,6 @@
#include <net/sock.h>
#include <net/tcp.h>
#include <linux/log2.h>
#include <linux/jhash.h>
#include "format.h"
#include "counters.h"
@@ -32,7 +31,6 @@
#include "endian_swap.h"
#include "tseq.h"
#include "fence.h"
#include "options.h"
/*
* scoutfs networking delivers requests and responses between nodes.
@@ -136,7 +134,6 @@ struct message_send {
struct message_recv {
struct scoutfs_tseq_entry tseq_entry;
struct work_struct proc_work;
struct list_head ordered_head;
struct scoutfs_net_connection *conn;
struct scoutfs_net_header nh;
};
@@ -335,7 +332,7 @@ static int submit_send(struct super_block *sb,
return -EINVAL;
if (scoutfs_forcing_unmount(sb))
return -ENOLINK;
return -EIO;
msend = kmalloc(offsetof(struct message_send,
nh.data[data_len]), GFP_NOFS);
@@ -501,51 +498,6 @@ static void scoutfs_net_proc_worker(struct work_struct *work)
trace_scoutfs_net_proc_work_exit(sb, 0, ret);
}
static void scoutfs_net_ordered_proc_worker(struct work_struct *work)
{
struct scoutfs_work_list *wlist = container_of(work, struct scoutfs_work_list, work);
struct message_recv *mrecv;
struct message_recv *mrecv__;
LIST_HEAD(list);
spin_lock(&wlist->lock);
list_splice_init(&wlist->list, &list);
spin_unlock(&wlist->lock);
list_for_each_entry_safe(mrecv, mrecv__, &list, ordered_head) {
list_del_init(&mrecv->ordered_head);
scoutfs_net_proc_worker(&mrecv->proc_work);
}
}
/*
* Some messages require in-order processing. But the scope of the
* ordering isn't global. In the case of lock messages, it's per lock.
* So for these messages we hash them to a number of ordered workers who
* walk a list and call the usual work function in order. This replaced
* two earlier approaches: first having the proc work detect out-of-order
* arrival and re-order, and later calling proc only from the single recv
* work context.
*/
static void queue_ordered_proc(struct scoutfs_net_connection *conn, struct message_recv *mrecv)
{
struct scoutfs_work_list *wlist;
struct scoutfs_net_lock *nl;
u32 h;
if (WARN_ON_ONCE(mrecv->nh.cmd != SCOUTFS_NET_CMD_LOCK ||
le16_to_cpu(mrecv->nh.data_len) != sizeof(struct scoutfs_net_lock)))
return scoutfs_net_proc_worker(&mrecv->proc_work);
nl = (void *)mrecv->nh.data;
h = jhash(&nl->key, sizeof(struct scoutfs_key), 0x6fdd3cd5);
wlist = &conn->ordered_proc_wlists[h % conn->ordered_proc_nr];
spin_lock(&wlist->lock);
list_add_tail(&mrecv->ordered_head, &wlist->list);
spin_unlock(&wlist->lock);
queue_work(conn->workq, &wlist->work);
}
/*
* Free live responses up to and including the seq by marking them dead
* and moving them to the send queue to be freed.
@@ -589,17 +541,33 @@ static void free_acked_responses(struct scoutfs_net_connection *conn, u64 seq)
queue_work(conn->workq, &conn->send_work);
}
static int k_recvmsg(struct socket *sock, void *buf, unsigned len)
static int recvmsg_full(struct socket *sock, void *buf, unsigned len)
{
struct kvec kv = {
.iov_base = buf,
.iov_len = len,
};
struct msghdr msg = {
.msg_flags = MSG_NOSIGNAL,
};
struct msghdr msg;
struct kvec kv;
int ret;
return kernel_recvmsg(sock, &msg, &kv, 1, len, msg.msg_flags);
while (len) {
memset(&msg, 0, sizeof(msg));
msg.msg_flags = MSG_NOSIGNAL;
kv.iov_base = buf;
kv.iov_len = len;
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
#else
iov_iter_init(&msg.msg_iter, READ, (struct iovec *)&kv, len, 1);
#endif
ret = kernel_recvmsg(sock, &msg, &kv, 1, len, msg.msg_flags);
if (ret <= 0)
return -ECONNABORTED;
len -= ret;
buf += ret;
}
return 0;
}
static bool invalid_message(struct scoutfs_net_connection *conn,
@@ -636,72 +604,6 @@ static bool invalid_message(struct scoutfs_net_connection *conn,
return false;
}
static int recv_one_message(struct super_block *sb, struct net_info *ninf,
struct scoutfs_net_connection *conn, struct scoutfs_net_header *nh,
unsigned int data_len)
{
struct message_recv *mrecv;
int ret;
scoutfs_inc_counter(sb, net_recv_messages);
scoutfs_add_counter(sb, net_recv_bytes, nh_bytes(data_len));
trace_scoutfs_net_recv_message(sb, &conn->sockname, &conn->peername, nh);
/* caller's invalid message checked data len */
mrecv = kmalloc(offsetof(struct message_recv, nh.data[data_len]), GFP_NOFS);
if (!mrecv) {
ret = -ENOMEM;
goto out;
}
mrecv->conn = conn;
INIT_WORK(&mrecv->proc_work, scoutfs_net_proc_worker);
INIT_LIST_HEAD(&mrecv->ordered_head);
mrecv->nh = *nh;
if (data_len)
memcpy(mrecv->nh.data, (nh + 1), data_len);
if (nh->cmd == SCOUTFS_NET_CMD_GREETING) {
/* greetings are out of band, no seq mechanics */
set_conn_fl(conn, saw_greeting);
} else if (le64_to_cpu(nh->seq) <=
atomic64_read(&conn->recv_seq)) {
/* drop any resent duplicated messages */
scoutfs_inc_counter(sb, net_recv_dropped_duplicate);
kfree(mrecv);
ret = 0;
goto out;
} else {
/* record that we've received sender's seq */
atomic64_set(&conn->recv_seq, le64_to_cpu(nh->seq));
/* and free our responses that sender has received */
free_acked_responses(conn, le64_to_cpu(nh->recv_seq));
}
scoutfs_tseq_add(&ninf->msg_tseq_tree, &mrecv->tseq_entry);
/*
* Initial received greetings are processed inline
* before any other incoming messages.
*
* Incoming requests or responses to the lock client
* can't handle re-ordering, so they're queued to
* ordered receive processing work.
*/
if (nh->cmd == SCOUTFS_NET_CMD_GREETING)
scoutfs_net_proc_worker(&mrecv->proc_work);
else if (nh->cmd == SCOUTFS_NET_CMD_LOCK && !conn->listening_conn)
queue_ordered_proc(conn, mrecv);
else
queue_work(conn->workq, &mrecv->proc_work);
ret = 0;
out:
return ret;
}
/*
* Always block receiving from the socket. Errors trigger shutting down
* the connection.
@@ -712,72 +614,86 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
struct super_block *sb = conn->sb;
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
struct socket *sock = conn->sock;
struct scoutfs_net_header *nh;
struct page *page = NULL;
struct scoutfs_net_header nh;
struct message_recv *mrecv;
unsigned int data_len;
int hdr_off;
int rx_off;
int size;
int ret;
trace_scoutfs_net_recv_work_enter(sb, 0, 0);
page = alloc_page(GFP_NOFS);
if (!page) {
ret = -ENOMEM;
goto out;
}
hdr_off = 0;
rx_off = 0;
for (;;) {
/* receive the header */
ret = k_recvmsg(sock, page_address(page) + rx_off, PAGE_SIZE - rx_off);
if (ret <= 0) {
ret = -ECONNABORTED;
goto out;
ret = recvmsg_full(sock, &nh, sizeof(nh));
if (ret)
break;
/* receiving an invalid message breaks the connection */
if (invalid_message(conn, &nh)) {
scoutfs_inc_counter(sb, net_recv_invalid_message);
ret = -EBADMSG;
break;
}
rx_off += ret;
data_len = le16_to_cpu(nh.data_len);
for (;;) {
size = rx_off - hdr_off;
if (size < sizeof(struct scoutfs_net_header))
break;
scoutfs_inc_counter(sb, net_recv_messages);
scoutfs_add_counter(sb, net_recv_bytes, nh_bytes(data_len));
trace_scoutfs_net_recv_message(sb, &conn->sockname,
&conn->peername, &nh);
nh = page_address(page) + hdr_off;
/* receiving an invalid message breaks the connection */
if (invalid_message(conn, nh)) {
scoutfs_inc_counter(sb, net_recv_invalid_message);
ret = -EBADMSG;
break;
}
data_len = le16_to_cpu(nh->data_len);
if (sizeof(struct scoutfs_net_header) + data_len > size)
break;
ret = recv_one_message(sb, ninf, conn, nh, data_len);
if (ret < 0)
goto out;
hdr_off += sizeof(struct scoutfs_net_header) + data_len;
/* invalid message checked data len */
mrecv = kmalloc(offsetof(struct message_recv,
nh.data[data_len]), GFP_NOFS);
if (!mrecv) {
ret = -ENOMEM;
break;
}
if ((PAGE_SIZE - rx_off) <
(sizeof(struct scoutfs_net_header) + SCOUTFS_NET_MAX_DATA_LEN)) {
if (size)
memmove(page_address(page), page_address(page) + hdr_off, size);
hdr_off = 0;
rx_off = size;
mrecv->conn = conn;
INIT_WORK(&mrecv->proc_work, scoutfs_net_proc_worker);
mrecv->nh = nh;
/* receive the data payload */
ret = recvmsg_full(sock, mrecv->nh.data, data_len);
if (ret) {
kfree(mrecv);
break;
}
if (nh.cmd == SCOUTFS_NET_CMD_GREETING) {
/* greetings are out of band, no seq mechanics */
set_conn_fl(conn, saw_greeting);
} else if (le64_to_cpu(nh.seq) <=
atomic64_read(&conn->recv_seq)) {
/* drop any resent duplicated messages */
scoutfs_inc_counter(sb, net_recv_dropped_duplicate);
kfree(mrecv);
continue;
} else {
/* record that we've received sender's seq */
atomic64_set(&conn->recv_seq, le64_to_cpu(nh.seq));
/* and free our responses that sender has received */
free_acked_responses(conn, le64_to_cpu(nh.recv_seq));
}
scoutfs_tseq_add(&ninf->msg_tseq_tree, &mrecv->tseq_entry);
/*
* Initial received greetings are processed
* synchronously before any other incoming messages.
*
* Incoming requests or responses to the lock client are
* called synchronously to avoid reordering.
*/
if (nh.cmd == SCOUTFS_NET_CMD_GREETING ||
(nh.cmd == SCOUTFS_NET_CMD_LOCK && !conn->listening_conn))
scoutfs_net_proc_worker(&mrecv->proc_work);
else
queue_work(conn->workq, &mrecv->proc_work);
}
out:
__free_page(page);
if (ret)
scoutfs_inc_counter(sb, net_recv_error);
@@ -787,41 +703,33 @@ out:
trace_scoutfs_net_recv_work_exit(sb, 0, ret);
}
/*
* This consumes the kvec.
*/
static int k_sendmsg_full(struct socket *sock, struct kvec *kv, unsigned long nr_segs, size_t count)
static int sendmsg_full(struct socket *sock, void *buf, unsigned len)
{
int ret = 0;
struct msghdr msg;
struct kvec kv;
int ret;
while (count > 0) {
struct msghdr msg = {
.msg_flags = MSG_NOSIGNAL,
};
while (len) {
memset(&msg, 0, sizeof(msg));
msg.msg_flags = MSG_NOSIGNAL;
kv.iov_base = buf;
kv.iov_len = len;
ret = kernel_sendmsg(sock, &msg, kv, nr_segs, count);
if (ret <= 0) {
ret = -ECONNABORTED;
break;
}
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
#else
iov_iter_init(&msg.msg_iter, WRITE, (struct iovec *)&kv, len, 1);
#endif
ret = kernel_sendmsg(sock, &msg, &kv, 1, len);
if (ret <= 0)
return -ECONNABORTED;
count -= ret;
if (count) {
while (nr_segs > 0 && ret >= kv->iov_len) {
ret -= kv->iov_len;
kv++;
nr_segs--;
}
if (nr_segs > 0 && ret > 0) {
kv->iov_base += ret;
kv->iov_len -= ret;
}
BUG_ON(nr_segs == 0);
}
ret = 0;
len -= ret;
buf += ret;
}
return ret;
return 0;
}
static void free_msend(struct net_info *ninf, struct message_send *msend)
@@ -852,73 +760,54 @@ static void scoutfs_net_send_worker(struct work_struct *work)
struct super_block *sb = conn->sb;
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
struct message_send *msend;
struct message_send *_msend_;
struct kvec kv[16];
unsigned long nr_segs;
size_t count;
int ret = 0;
int len;
int ret;
trace_scoutfs_net_send_work_enter(sb, 0, 0);
for (;;) {
nr_segs = 0;
count = 0;
spin_lock(&conn->lock);
spin_lock(&conn->lock);
list_for_each_entry_safe(msend, _msend_, &conn->send_queue, head) {
if (msend->dead) {
free_msend(ninf, msend);
continue;
}
len = nh_bytes(le16_to_cpu(msend->nh.data_len));
if ((msend->nh.cmd == SCOUTFS_NET_CMD_FAREWELL) &&
nh_is_response(&msend->nh)) {
set_conn_fl(conn, saw_farewell);
}
msend->nh.recv_seq = cpu_to_le64(atomic64_read(&conn->recv_seq));
scoutfs_inc_counter(sb, net_send_messages);
scoutfs_add_counter(sb, net_send_bytes, len);
trace_scoutfs_net_send_message(sb, &conn->sockname,
&conn->peername, &msend->nh);
count += len;
kv[nr_segs].iov_base = &msend->nh;
kv[nr_segs].iov_len = len;
if (++nr_segs == ARRAY_SIZE(kv))
break;
while ((msend = list_first_entry_or_null(&conn->send_queue,
struct message_send, head))) {
if (msend->dead) {
free_msend(ninf, msend);
continue;
}
if ((msend->nh.cmd == SCOUTFS_NET_CMD_FAREWELL) &&
nh_is_response(&msend->nh)) {
set_conn_fl(conn, saw_farewell);
}
msend->nh.recv_seq =
cpu_to_le64(atomic64_read(&conn->recv_seq));
spin_unlock(&conn->lock);
if (nr_segs == 0) {
ret = 0;
goto out;
}
len = nh_bytes(le16_to_cpu(msend->nh.data_len));
ret = k_sendmsg_full(conn->sock, kv, nr_segs, count);
if (ret < 0)
goto out;
scoutfs_inc_counter(sb, net_send_messages);
scoutfs_add_counter(sb, net_send_bytes, len);
trace_scoutfs_net_send_message(sb, &conn->sockname,
&conn->peername, &msend->nh);
ret = sendmsg_full(conn->sock, &msend->nh, len);
spin_lock(&conn->lock);
list_for_each_entry_safe(msend, _msend_, &conn->send_queue, head) {
msend->nh.recv_seq = 0;
/* resend if it wasn't freed while we sent */
if (!msend->dead)
list_move_tail(&msend->head, &conn->resend_queue);
msend->nh.recv_seq = 0;
if (--nr_segs == 0)
break;
}
spin_unlock(&conn->lock);
if (ret)
break;
/* resend if it wasn't freed while we sent */
if (!msend->dead)
list_move_tail(&msend->head, &conn->resend_queue);
}
out:
spin_unlock(&conn->lock);
if (ret) {
scoutfs_inc_counter(sb, net_send_error);
shutdown_conn(conn);
@@ -973,7 +862,6 @@ static void scoutfs_net_destroy_worker(struct work_struct *work)
destroy_workqueue(conn->workq);
scoutfs_tseq_del(&ninf->conn_tseq_tree, &conn->tseq_entry);
kfree(conn->info);
kfree(conn->ordered_proc_wlists);
trace_scoutfs_conn_destroy_free(conn);
kfree(conn);
@@ -999,7 +887,7 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
* The TCP_KEEP* and TCP_USER_TIMEOUT option interaction is subtle.
* TCP_USER_TIMEOUT only applies if there is unacked written data in the
* send queue. It doesn't work if the connection is idle. Adding
* keepalive probes with user_timeout set changes how the keepalive
* timeout is calculated. CNT no longer matters. Each time
* additional probes (not the first) are sent the user timeout is
* checked against the last time data was received. If none of the
@@ -1011,16 +899,14 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
* elapses during the probe timer processing after the unsuccessful
* probes.
*/
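Concretely, with UNRESPONSIVE_TIMEOUT_SECS = 10 and UNRESPONSIVE_PROBES = 3, the idle time before probing is 10 - 3 = 7 seconds and the user timeout is 10000ms, so the timeout elapses while the probes are being processed. A userspace sketch of the same knobs with the standard socket options (assuming one probe per second; TCP_USER_TIMEOUT needs a reasonably recent kernel and glibc):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int set_unresponsive_timeout(int fd)
{
	int idle = 10 - 3;		/* UNRESPONSIVE_TIMEOUT_SECS - PROBES */
	int intvl = 1;			/* assumed: one probe per second */
	int cnt = 3;			/* UNRESPONSIVE_PROBES */
	int user_timeout_ms = 10 * 1000;
	int on = 1;

	if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &user_timeout_ms,
		       sizeof(user_timeout_ms)))
		return -1;
	return 0;
}

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	return fd >= 0 ? set_unresponsive_timeout(fd) : 1;
}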
static int sock_opts_and_names(struct super_block *sb,
struct scoutfs_net_connection *conn,
#define UNRESPONSIVE_TIMEOUT_SECS 10
#define UNRESPONSIVE_PROBES 3
static int sock_opts_and_names(struct scoutfs_net_connection *conn,
struct socket *sock)
{
struct scoutfs_mount_options opts;
int optval;
int ret;
scoutfs_options_read(sb, &opts);
/* we use a keepalive timeout instead of send timeout */
ret = kc_sock_set_sndtimeo(sock, 0);
if (ret)
@@ -1033,7 +919,8 @@ static int sock_opts_and_names(struct super_block *sb,
if (ret)
goto out;
optval = (opts.tcp_keepalive_timeout_ms / MSEC_PER_SEC) - UNRESPONSIVE_PROBES;
BUILD_BUG_ON(UNRESPONSIVE_PROBES >= UNRESPONSIVE_TIMEOUT_SECS);
optval = UNRESPONSIVE_TIMEOUT_SECS - (UNRESPONSIVE_PROBES);
ret = kc_tcp_sock_set_keepidle(sock, optval);
if (ret)
goto out;
@@ -1043,7 +930,7 @@ static int sock_opts_and_names(struct super_block *sb,
if (ret)
goto out;
optval = opts.tcp_keepalive_timeout_ms;
optval = UNRESPONSIVE_TIMEOUT_SECS * MSEC_PER_SEC;
ret = kc_tcp_sock_set_user_timeout(sock, optval);
if (ret)
goto out;
@@ -1105,19 +992,13 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
conn->notify_down,
conn->info_size,
conn->req_funcs, "accepted");
/*
* scoutfs_net_alloc_conn() can only fail due to ENOMEM. There's no
* harm in retrying: kernel_accept() may get enough memory to accept
* a new connection, and if it also fails with ENOMEM it'll shut down
* the conn anyway. So just retry here.
*/
if (!acc_conn) {
sock_release(acc_sock);
ret = -ENOMEM;
continue;
}
ret = sock_opts_and_names(sb, acc_conn, acc_sock);
ret = sock_opts_and_names(acc_conn, acc_sock);
if (ret) {
sock_release(acc_sock);
destroy_conn(acc_conn);
@@ -1188,7 +1069,7 @@ static void scoutfs_net_connect_worker(struct work_struct *work)
if (ret)
goto out;
ret = sock_opts_and_names(sb, conn, sock);
ret = sock_opts_and_names(conn, sock);
if (ret)
goto out;
@@ -1449,30 +1330,25 @@ scoutfs_net_alloc_conn(struct super_block *sb,
{
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
struct scoutfs_net_connection *conn;
unsigned int nr;
unsigned int i;
nr = min_t(unsigned int, num_possible_cpus(),
PAGE_SIZE / sizeof(struct scoutfs_work_list));
conn = kzalloc(sizeof(struct scoutfs_net_connection), GFP_NOFS);
if (conn) {
if (info_size)
conn->info = kzalloc(info_size, GFP_NOFS);
conn->ordered_proc_wlists = kmalloc_array(nr, sizeof(struct scoutfs_work_list),
GFP_NOFS);
conn->workq = alloc_workqueue("scoutfs_net_%s",
WQ_UNBOUND | WQ_NON_REENTRANT, 0,
name_suffix);
}
if (!conn || (info_size && !conn->info) || !conn->workq || !conn->ordered_proc_wlists) {
if (conn) {
kfree(conn->info);
kfree(conn->ordered_proc_wlists);
if (conn->workq)
destroy_workqueue(conn->workq);
if (!conn)
return NULL;
if (info_size) {
conn->info = kzalloc(info_size, GFP_NOFS);
if (!conn->info) {
kfree(conn);
return NULL;
}
}
conn->workq = alloc_workqueue("scoutfs_net_%s",
WQ_UNBOUND | WQ_NON_REENTRANT, 0,
name_suffix);
if (!conn->workq) {
kfree(conn->info);
kfree(conn);
return NULL;
}
@@ -1502,13 +1378,6 @@ scoutfs_net_alloc_conn(struct super_block *sb,
INIT_DELAYED_WORK(&conn->reconn_free_dwork,
scoutfs_net_reconn_free_worker);
conn->ordered_proc_nr = nr;
for (i = 0; i < nr; i++) {
INIT_WORK(&conn->ordered_proc_wlists[i].work, scoutfs_net_ordered_proc_worker);
spin_lock_init(&conn->ordered_proc_wlists[i].lock);
INIT_LIST_HEAD(&conn->ordered_proc_wlists[i].list);
}
scoutfs_tseq_add(&ninf->conn_tseq_tree, &conn->tseq_entry);
trace_scoutfs_conn_alloc(conn);

View File

@@ -1,18 +1,10 @@
#ifndef _SCOUTFS_NET_H_
#define _SCOUTFS_NET_H_
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/in.h>
#include "endian_swap.h"
#include "tseq.h"
struct scoutfs_work_list {
struct work_struct work;
spinlock_t lock;
struct list_head list;
};
struct scoutfs_net_connection;
/* These are called in their own blocking context */
@@ -69,8 +61,6 @@ struct scoutfs_net_connection {
struct list_head resend_queue;
atomic64_t recv_seq;
unsigned int ordered_proc_nr;
struct scoutfs_work_list *ordered_proc_wlists;
struct workqueue_struct *workq;
struct work_struct listen_work;

View File

@@ -592,7 +592,7 @@ static int handle_request(struct super_block *sb, struct omap_request *req)
ret = 0;
out:
free_rids(&priv_rids);
if ((ret < 0) && (req != NULL)) {
if (ret < 0) {
ret = scoutfs_server_send_omap_response(sb, req->client_rid, req->client_id,
NULL, ret);
free_req(req);

View File

@@ -33,14 +33,12 @@ enum {
Opt_acl,
Opt_data_prealloc_blocks,
Opt_data_prealloc_contig_only,
Opt_ino_alloc_per_lock,
Opt_log_merge_wait_timeout_ms,
Opt_metadev_path,
Opt_noacl,
Opt_orphan_scan_delay_ms,
Opt_quorum_heartbeat_timeout_ms,
Opt_quorum_slot_nr,
Opt_tcp_keepalive_timeout_ms,
Opt_err,
};
@@ -48,14 +46,12 @@ static const match_table_t tokens = {
{Opt_acl, "acl"},
{Opt_data_prealloc_blocks, "data_prealloc_blocks=%s"},
{Opt_data_prealloc_contig_only, "data_prealloc_contig_only=%s"},
{Opt_ino_alloc_per_lock, "ino_alloc_per_lock=%s"},
{Opt_log_merge_wait_timeout_ms, "log_merge_wait_timeout_ms=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_noacl, "noacl"},
{Opt_orphan_scan_delay_ms, "orphan_scan_delay_ms=%s"},
{Opt_quorum_heartbeat_timeout_ms, "quorum_heartbeat_timeout_ms=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_tcp_keepalive_timeout_ms, "tcp_keepalive_timeout_ms=%s"},
{Opt_err, NULL}
};
@@ -130,20 +126,16 @@ static void free_options(struct scoutfs_mount_options *opts)
#define MIN_DATA_PREALLOC_BLOCKS 1ULL
#define MAX_DATA_PREALLOC_BLOCKS ((unsigned long long)SCOUTFS_BLOCK_SM_MAX)
#define DEFAULT_TCP_KEEPALIVE_TIMEOUT_MS (60 * MSEC_PER_SEC)
static void init_default_options(struct scoutfs_mount_options *opts)
{
memset(opts, 0, sizeof(*opts));
opts->data_prealloc_blocks = SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS;
opts->data_prealloc_contig_only = 1;
opts->ino_alloc_per_lock = SCOUTFS_LOCK_INODE_GROUP_NR;
opts->log_merge_wait_timeout_ms = DEFAULT_LOG_MERGE_WAIT_TIMEOUT_MS;
opts->orphan_scan_delay_ms = -1;
opts->quorum_heartbeat_timeout_ms = SCOUTFS_QUORUM_DEF_HB_TIMEO_MS;
opts->quorum_slot_nr = -1;
opts->tcp_keepalive_timeout_ms = DEFAULT_TCP_KEEPALIVE_TIMEOUT_MS;
}
static int verify_log_merge_wait_timeout_ms(struct super_block *sb, int ret, int val)
@@ -176,21 +168,6 @@ static int verify_quorum_heartbeat_timeout_ms(struct super_block *sb, int ret, u
return 0;
}
static int verify_tcp_keepalive_timeout_ms(struct super_block *sb, int ret, int val)
{
if (ret < 0) {
scoutfs_err(sb, "failed to parse tcp_keepalive_timeout_ms value");
return -EINVAL;
}
if (val <= (UNRESPONSIVE_PROBES * MSEC_PER_SEC)) {
scoutfs_err(sb, "invalid tcp_keepalive_timeout_ms value %d, must be larger than %lu",
val, (UNRESPONSIVE_PROBES * MSEC_PER_SEC));
return -EINVAL;
}
return 0;
}
/*
* Parse the option string into our options struct. This can allocate
* memory in the struct. The caller is responsible for always calling
@@ -241,26 +218,6 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
opts->data_prealloc_contig_only = nr;
break;
case Opt_ino_alloc_per_lock:
ret = match_int(args, &nr);
if (ret < 0 || nr < 1 || nr > SCOUTFS_LOCK_INODE_GROUP_NR) {
scoutfs_err(sb, "invalid ino_alloc_per_lock option, must be between 1 and %u",
SCOUTFS_LOCK_INODE_GROUP_NR);
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->ino_alloc_per_lock = nr;
break;
case Opt_tcp_keepalive_timeout_ms:
ret = match_int(args, &nr);
ret = verify_tcp_keepalive_timeout_ms(sb, ret, nr);
if (ret < 0)
return ret;
opts->tcp_keepalive_timeout_ms = nr;
break;
case Opt_log_merge_wait_timeout_ms:
ret = match_int(args, &nr);
ret = verify_log_merge_wait_timeout_ms(sb, ret, nr);
@@ -408,14 +365,12 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
seq_puts(seq, ",acl");
seq_printf(seq, ",data_prealloc_blocks=%llu", opts.data_prealloc_blocks);
seq_printf(seq, ",data_prealloc_contig_only=%u", opts.data_prealloc_contig_only);
seq_printf(seq, ",ino_alloc_per_lock=%u", opts.ino_alloc_per_lock);
seq_printf(seq, ",metadev_path=%s", opts.metadev_path);
if (!is_acl)
seq_puts(seq, ",noacl");
seq_printf(seq, ",orphan_scan_delay_ms=%u", opts.orphan_scan_delay_ms);
if (opts.quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts.quorum_slot_nr);
seq_printf(seq, ",tcp_keepalive_timeout_ms=%d", opts.tcp_keepalive_timeout_ms);
return 0;
}
@@ -497,45 +452,6 @@ static ssize_t data_prealloc_contig_only_store(struct kobject *kobj, struct kobj
}
SCOUTFS_ATTR_RW(data_prealloc_contig_only);
static ssize_t ino_alloc_per_lock_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%u", opts.ino_alloc_per_lock);
}
static ssize_t ino_alloc_per_lock_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[20]; /* more than enough for octal -U32_MAX */
long val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtol(nullterm, 0, &val);
if (ret < 0 || val < 1 || val > SCOUTFS_LOCK_INODE_GROUP_NR) {
scoutfs_err(sb, "invalid ino_alloc_per_lock option, must be between 1 and %u",
SCOUTFS_LOCK_INODE_GROUP_NR);
return -EINVAL;
}
write_seqlock(&optinf->seqlock);
optinf->opts.ino_alloc_per_lock = val;
write_sequnlock(&optinf->seqlock);
return count;
}
SCOUTFS_ATTR_RW(ino_alloc_per_lock);
static ssize_t log_merge_wait_timeout_ms_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
@@ -676,7 +592,6 @@ SCOUTFS_ATTR_RO(quorum_slot_nr);
static struct attribute *options_attrs[] = {
SCOUTFS_ATTR_PTR(data_prealloc_blocks),
SCOUTFS_ATTR_PTR(data_prealloc_contig_only),
SCOUTFS_ATTR_PTR(ino_alloc_per_lock),
SCOUTFS_ATTR_PTR(log_merge_wait_timeout_ms),
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),

View File

@@ -8,17 +8,13 @@
struct scoutfs_mount_options {
u64 data_prealloc_blocks;
bool data_prealloc_contig_only;
unsigned int ino_alloc_per_lock;
unsigned int log_merge_wait_timeout_ms;
char *metadev_path;
unsigned int orphan_scan_delay_ms;
int quorum_slot_nr;
u64 quorum_heartbeat_timeout_ms;
int tcp_keepalive_timeout_ms;
};
#define UNRESPONSIVE_PROBES 3
void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts);
int scoutfs_options_show(struct seq_file *seq, struct dentry *root);

View File

@@ -243,6 +243,10 @@ static int send_msg_members(struct super_block *sb, int type, u64 term, int only
};
struct sockaddr_in sin;
struct msghdr mh = {
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
.msg_iov = (struct iovec *)&kv,
.msg_iovlen = 1,
#endif
.msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL,
.msg_name = &sin,
.msg_namelen = sizeof(sin),
@@ -264,7 +268,9 @@ static int send_msg_members(struct super_block *sb, int type, u64 term, int only
scoutfs_quorum_slot_sin(&qinf->qconf, i, &sin);
now = ktime_get();
#ifdef KC_MSGHDR_STRUCT_IOV_ITER
iov_iter_init(&mh.msg_iter, WRITE, (struct iovec *)&kv, sizeof(qmes), 1);
#endif
ret = kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
if (ret != kv.iov_len)
failed++;
@@ -306,6 +312,10 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
.iov_len = sizeof(struct scoutfs_quorum_message),
};
struct msghdr mh = {
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
.msg_iov = (struct iovec *)&kv,
.msg_iovlen = 1,
#endif
.msg_flags = MSG_NOSIGNAL,
};
@@ -323,6 +333,9 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
ret = kc_tcp_sock_set_rcvtimeo(qinf->sock, rel_to);
}
#ifdef KC_MSGHDR_STRUCT_IOV_ITER
iov_iter_init(&mh.msg_iter, READ, (struct iovec *)&kv, sizeof(struct scoutfs_quorum_message), 1);
#endif
ret = kernel_recvmsg(qinf->sock, &mh, &kv, 1, kv.iov_len, mh.msg_flags);
if (ret < 0)
return ret;
@@ -507,10 +520,10 @@ static int update_quorum_block(struct super_block *sb, int event, u64 term, bool
set_quorum_block_event(sb, &blk, event, term);
ret = write_quorum_block(sb, blkno, &blk);
if (ret < 0)
scoutfs_err(sb, "error %d writing quorum block %llu after updating event %d term %llu",
scoutfs_err(sb, "error %d reading quorum block %llu to update event %d term %llu",
ret, blkno, event, term);
} else {
scoutfs_err(sb, "error %d reading quorum block %llu to update event %d term %llu",
scoutfs_err(sb, "error %d writing quorum block %llu after updating event %d term %llu",
ret, blkno, event, term);
}
@@ -809,7 +822,6 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* followers and candidates start new election on timeout */
if (qst.role != LEADER &&
msg.type == SCOUTFS_QUORUM_MSG_INVALID &&
ktime_after(ktime_get(), qst.timeout)) {
/* .. but only if their server has stopped */
if (!scoutfs_server_is_down(sb)) {
@@ -970,10 +982,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
}
/* record that this slot no longer has an active quorum */
err = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_END, qst.term, true);
if (err < 0 && ret == 0)
ret = err;
update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_END, qst.term, true);
out:
if (ret < 0) {
scoutfs_err(sb, "quorum service saw error %d, shutting down. This mount is no longer participating in quorum. It should be remounted to restore service.",
@@ -1062,7 +1071,7 @@ static char *role_str(int role)
[LEADER] = "leader",
};
if (role < 0 || role >= ARRAY_SIZE(roles) || !roles[role])
if (role < 0 || role > ARRAY_SIZE(roles) || !roles[role])
return "invalid";
return roles[role];

View File

@@ -823,14 +823,13 @@ DEFINE_EVENT(scoutfs_lock_info_class, scoutfs_lock_destroy,
);
TRACE_EVENT(scoutfs_xattr_set,
TP_PROTO(struct super_block *sb, __u64 ino, size_t name_len,
const void *value, size_t size, int flags),
TP_PROTO(struct super_block *sb, size_t name_len, const void *value,
size_t size, int flags),
TP_ARGS(sb, ino, name_len, value, size, flags),
TP_ARGS(sb, name_len, value, size, flags),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(size_t, name_len)
__field(const void *, value)
__field(size_t, size)
@@ -839,16 +838,15 @@ TRACE_EVENT(scoutfs_xattr_set,
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->name_len = name_len;
__entry->value = value;
__entry->size = size;
__entry->flags = flags;
),
TP_printk(SCSBF" ino %llu name_len %zu value %p size %zu flags 0x%x",
SCSB_TRACE_ARGS, __entry->ino, __entry->name_len,
__entry->value, __entry->size, __entry->flags)
TP_printk(SCSBF" name_len %zu value %p size %zu flags 0x%x",
SCSB_TRACE_ARGS, __entry->name_len, __entry->value,
__entry->size, __entry->flags)
);
TRACE_EVENT(scoutfs_advance_dirty_super,
@@ -1968,17 +1966,15 @@ DEFINE_EVENT(scoutfs_server_client_count_class, scoutfs_server_client_down,
);
DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
TP_PROTO(struct super_block *sb, int holding, int applying,
int nr_holders, u32 budget,
u32 avail_before, u32 freed_before,
int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, budget, avail_before, freed_before, committing, exceeded),
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing,
exceeded),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(int, holding)
__field(int, applying)
__field(int, nr_holders)
__field(u32, budget)
__field(__u32, avail_before)
__field(__u32, freed_before)
__field(int, committing)
@@ -1989,45 +1985,35 @@ DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
__entry->holding = !!holding;
__entry->applying = !!applying;
__entry->nr_holders = nr_holders;
__entry->budget = budget;
__entry->avail_before = avail_before;
__entry->freed_before = freed_before;
__entry->committing = !!committing;
__entry->exceeded = !!exceeded;
),
TP_printk(SCSBF" holding %u applying %u nr %u budget %u avail_before %u freed_before %u committing %u exceeded %u",
SCSB_TRACE_ARGS, __entry->holding, __entry->applying,
__entry->nr_holders, __entry->budget,
__entry->avail_before, __entry->freed_before,
__entry->committing, __entry->exceeded)
TP_printk(SCSBF" holding %u applying %u nr %u avail_before %u freed_before %u committing %u exceeded %u",
SCSB_TRACE_ARGS, __entry->holding, __entry->applying, __entry->nr_holders,
__entry->avail_before, __entry->freed_before, __entry->committing,
__entry->exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_hold,
TP_PROTO(struct super_block *sb, int holding, int applying,
int nr_holders, u32 budget,
u32 avail_before, u32 freed_before,
int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, budget, avail_before, freed_before, committing, exceeded)
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_apply,
TP_PROTO(struct super_block *sb, int holding, int applying,
int nr_holders, u32 budget,
u32 avail_before, u32 freed_before,
int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, budget, avail_before, freed_before, committing, exceeded)
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_start,
TP_PROTO(struct super_block *sb, int holding, int applying,
int nr_holders, u32 budget,
u32 avail_before, u32 freed_before,
int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, budget, avail_before, freed_before, committing, exceeded)
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_end,
TP_PROTO(struct super_block *sb, int holding, int applying,
int nr_holders, u32 budget,
u32 avail_before, u32 freed_before,
int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, budget, avail_before, freed_before, committing, exceeded)
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
#define slt_symbolic(mode) \
@@ -2465,27 +2451,6 @@ TRACE_EVENT(scoutfs_block_dirty_ref,
__entry->block_blkno, __entry->block_seq)
);
TRACE_EVENT(scoutfs_get_file_block,
TP_PROTO(struct super_block *sb, u64 blkno, int flags),
TP_ARGS(sb, blkno, flags),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, blkno)
__field(int, flags)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->blkno = blkno;
__entry->flags = flags;
),
TP_printk(SCSBF" blkno %llu flags 0x%x",
SCSB_TRACE_ARGS, __entry->blkno, __entry->flags)
);
TRACE_EVENT(scoutfs_block_stale,
TP_PROTO(struct super_block *sb, struct scoutfs_block_ref *ref,
struct scoutfs_block_header *hdr, u32 magic, u32 crc),
@@ -2526,8 +2491,8 @@ TRACE_EVENT(scoutfs_block_stale,
DECLARE_EVENT_CLASS(scoutfs_block_class,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno, int refcount, int io_count,
unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits),
unsigned long bits, __u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, bp)
@@ -2535,6 +2500,7 @@ DECLARE_EVENT_CLASS(scoutfs_block_class,
__field(int, refcount)
__field(int, io_count)
__field(long, bits)
__field(__u64, accessed)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
@@ -2543,65 +2509,71 @@ DECLARE_EVENT_CLASS(scoutfs_block_class,
__entry->refcount = refcount;
__entry->io_count = io_count;
__entry->bits = bits;
__entry->accessed = accessed;
),
TP_printk(SCSBF" bp %p blkno %llu refcount %x io_count %d bits 0x%lx",
TP_printk(SCSBF" bp %p blkno %llu refcount %d io_count %d bits 0x%lx accessed %llu",
SCSB_TRACE_ARGS, __entry->bp, __entry->blkno, __entry->refcount,
__entry->io_count, __entry->bits)
__entry->io_count, __entry->bits, __entry->accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_allocate,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_free,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_insert,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_remove,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_end_io,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_submit,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_invalidate,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_mark_dirty,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_forget,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_shrink,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
);
DEFINE_EVENT(scoutfs_block_class, scoutfs_block_isolate,
TP_PROTO(struct super_block *sb, void *bp, u64 blkno,
int refcount, int io_count, unsigned long bits),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits)
int refcount, int io_count, unsigned long bits,
__u64 accessed),
TP_ARGS(sb, bp, blkno, refcount, io_count, bits, accessed)
);
DECLARE_EVENT_CLASS(scoutfs_ext_next_class,
@@ -3076,27 +3048,6 @@ DEFINE_EVENT(scoutfs_srch_compact_class, scoutfs_srch_compact_client_recv,
TP_ARGS(sb, sc)
);
TRACE_EVENT(scoutfs_ioc_search_xattrs,
TP_PROTO(struct super_block *sb, u64 ino, u64 last_ino),
TP_ARGS(sb, ino, last_ino),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(u64, ino)
__field(u64, last_ino)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->last_ino = last_ino;
),
TP_printk(SCSBF" ino %llu last_ino %llu", SCSB_TRACE_ARGS,
__entry->ino, __entry->last_ino)
);
#endif /* _TRACE_SCOUTFS_H */
/* This part must be outside protection */

View File

@@ -65,7 +65,6 @@ struct commit_users {
struct list_head holding;
struct list_head applying;
unsigned int nr_holders;
u32 budget;
u32 avail_before;
u32 freed_before;
bool committing;
@@ -85,9 +84,8 @@ static void init_commit_users(struct commit_users *cusers)
do { \
__typeof__(cusers) _cusers = (cusers); \
trace_scoutfs_server_commit_##which(sb, !list_empty(&_cusers->holding), \
!list_empty(&_cusers->applying), _cusers->nr_holders, _cusers->budget, \
_cusers->avail_before, _cusers->freed_before, _cusers->committing, \
_cusers->exceeded); \
!list_empty(&_cusers->applying), _cusers->nr_holders, _cusers->avail_before, \
_cusers->freed_before, _cusers->committing, _cusers->exceeded); \
} while (0)
struct server_info {
@@ -305,6 +303,7 @@ static void check_holder_budget(struct super_block *sb, struct server_info *serv
u32 freed_used;
u32 avail_now;
u32 freed_now;
u32 budget;
assert_spin_locked(&cusers->lock);
@@ -319,14 +318,15 @@ static void check_holder_budget(struct super_block *sb, struct server_info *serv
else
freed_used = SCOUTFS_ALLOC_LIST_MAX_BLOCKS - freed_now;
if (avail_used <= cusers->budget && freed_used <= cusers->budget)
budget = cusers->nr_holders * COMMIT_HOLD_ALLOC_BUDGET;
if (avail_used <= budget && freed_used <= budget)
return;
exceeded_once = true;
cusers->exceeded = cusers->nr_holders;
scoutfs_err(sb, "holders exceeded alloc budget %u av: bef %u now %u, fr: bef %u now %u",
cusers->budget, cusers->avail_before, avail_now,
scoutfs_err(sb, "%u holders exceeded alloc budget av: bef %u now %u, fr: bef %u now %u",
cusers->nr_holders, cusers->avail_before, avail_now,
cusers->freed_before, freed_now);
list_for_each_entry(hold, &cusers->holding, entry) {
@@ -349,7 +349,7 @@ static bool hold_commit(struct super_block *sb, struct server_info *server,
{
bool has_room;
bool held;
u32 new_budget;
u32 budget;
u32 av;
u32 fr;
@@ -367,8 +367,8 @@ static bool hold_commit(struct super_block *sb, struct server_info *server,
}
/* +2 for our additional hold and for the final commit work the server does */
new_budget = max(cusers->budget, (cusers->nr_holders + 2) * COMMIT_HOLD_ALLOC_BUDGET);
has_room = av >= new_budget && fr >= new_budget;
budget = (cusers->nr_holders + 2) * COMMIT_HOLD_ALLOC_BUDGET;
has_room = av >= budget && fr >= budget;
/* checking applying so holders drain once an apply caller starts waiting */
held = !cusers->committing && has_room && list_empty(&cusers->applying);
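With the budget now derived purely from the holder count, the room check is easy to evaluate by hand. A worked example, assuming COMMIT_HOLD_ALLOC_BUDGET were 1024 blocks (its actual value isn't shown in this diff): with 3 current holders a new hold needs av and fr of at least (3 + 2) * 1024 = 5120 blocks.

#include <assert.h>
#include <stdbool.h>

/* assumed value for illustration only */
#define COMMIT_HOLD_ALLOC_BUDGET 1024u

/* +2 covers our additional hold and the server's final commit work */
static bool has_room(unsigned int nr_holders, unsigned int av, unsigned int fr)
{
	unsigned int budget = (nr_holders + 2) * COMMIT_HOLD_ALLOC_BUDGET;

	return av >= budget && fr >= budget;
}

int main(void)
{
	assert(has_room(3, 5120, 5120));	/* exactly enough */
	assert(!has_room(3, 5119, 5120));	/* one block short */
	return 0;
}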
@@ -388,7 +388,6 @@ static bool hold_commit(struct super_block *sb, struct server_info *server,
list_add_tail(&hold->entry, &cusers->holding);
cusers->nr_holders++;
cusers->budget = new_budget;
} else if (!has_room && cusers->nr_holders == 0 && !cusers->committing) {
cusers->committing = true;
@@ -517,7 +516,6 @@ static void commit_end(struct super_block *sb, struct commit_users *cusers, int
list_for_each_entry_safe(hold, tmp, &cusers->applying, entry)
list_del_init(&hold->entry);
cusers->committing = false;
cusers->budget = 0;
spin_unlock(&cusers->lock);
wake_up(&cusers->waitq);
@@ -610,7 +608,7 @@ static void scoutfs_server_commit_func(struct work_struct *work)
goto out;
if (scoutfs_forcing_unmount(sb)) {
ret = -ENOLINK;
ret = -EIO;
goto out;
}
@@ -994,11 +992,10 @@ static int for_each_rid_last_lt(struct super_block *sb, struct scoutfs_btree_roo
}
/*
* Log merge range items are stored at the starting fs key of the range
* with the zone overwritten to indicate the log merge item type. This
* day0 mistake loses sorting information for items in the different
* zones in the fs root, so the range items aren't strictly sorted by
* the starting key of their range.
* Log merge range items are stored at the starting fs key of the range.
* The only fs key field that doesn't hold information is the zone, so
* we use the zone to differentiate all types that we store in the log
* merge tree.
*/
static void init_log_merge_key(struct scoutfs_key *key, u8 zone, u64 first,
u64 second)
@@ -1030,51 +1027,6 @@ static int next_log_merge_item_key(struct super_block *sb, struct scoutfs_btree_
return ret;
}
/*
* The range items aren't sorted by their range.start because
* _RANGE_ZONE clobbers the range's zone. We sweep all the items and
* find the range with the next least starting key that's greater than
* the caller's starting key. We have to be careful to iterate over the
* log_merge tree keys because the ranges can overlap as they're mapped
* to the log_merge keys by clobbering their zone.
*/
static int next_log_merge_range(struct super_block *sb, struct scoutfs_btree_root *root,
struct scoutfs_key *start, struct scoutfs_log_merge_range *rng)
{
struct scoutfs_log_merge_range *next;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
key = *start;
key.sk_zone = SCOUTFS_LOG_MERGE_RANGE_ZONE;
scoutfs_key_set_ones(&rng->start);
do {
ret = scoutfs_btree_next(sb, root, &key, &iref);
if (ret == 0) {
if (iref.key->sk_zone != SCOUTFS_LOG_MERGE_RANGE_ZONE) {
ret = -ENOENT;
} else if (iref.val_len != sizeof(struct scoutfs_log_merge_range)) {
ret = -EIO;
} else {
next = iref.val;
if (scoutfs_key_compare(&next->start, &rng->start) < 0 &&
scoutfs_key_compare(&next->start, start) >= 0)
*rng = *next;
key = *iref.key;
scoutfs_key_inc(&key);
}
scoutfs_btree_put_iref(&iref);
}
} while (ret == 0);
if (ret == -ENOENT && !scoutfs_key_is_ones(&rng->start))
ret = 0;
return ret;
}
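The sweep above selects, from unsorted range items, the least start key that is still >= the caller's start, using an all-ones key as the not-found sentinel. The same selection over a plain array, as an illustrative self-contained sketch:

#include <stdint.h>
#include <stdio.h>

/* full sweep: least start >= 'from', mirroring next_log_merge_range() */
static int next_range(const uint64_t *starts, int nr, uint64_t from,
		      uint64_t *found)
{
	uint64_t best = UINT64_MAX;	/* ~ scoutfs_key_set_ones() */
	int ok = 0;

	for (int i = 0; i < nr; i++) {
		if (starts[i] >= from && starts[i] < best) {
			best = starts[i];
			ok = 1;
		}
	}
	if (ok)
		*found = best;
	return ok ? 0 : -1;	/* -1 ~ -ENOENT */
}

int main(void)
{
	uint64_t starts[] = { 40, 10, 30, 20 };
	uint64_t next;

	if (!next_range(starts, 4, 15, &next))
		printf("next range starts at %llu\n",
		       (unsigned long long)next);	/* prints 20 */
	return 0;
}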
static int next_log_merge_item(struct super_block *sb,
struct scoutfs_btree_root *root,
u8 zone, u64 first, u64 second,
@@ -1086,101 +1038,6 @@ static int next_log_merge_item(struct super_block *sb,
return next_log_merge_item_key(sb, root, zone, &key, val, val_len);
}
static int do_finalize_ours(struct super_block *sb,
struct scoutfs_log_trees *lt,
struct commit_hold *hold)
{
struct server_info *server = SCOUTFS_SB(sb)->server_info;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_key key;
char *err_str = NULL;
u64 rid = le64_to_cpu(lt->rid);
bool more;
int ret;
int err;
mutex_lock(&server->srch_mutex);
ret = scoutfs_srch_rotate_log(sb, &server->alloc, &server->wri,
&super->srch_root, &lt->srch_file, true);
mutex_unlock(&server->srch_mutex);
if (ret < 0) {
scoutfs_err(sb, "error rotating srch log for rid %016llx: %d",
rid, ret);
return ret;
}
do {
more = false;
/*
* All of these can return errors, perhaps indicating successful
* partial progress, after having modified the allocator trees.
* We always have to update the roots in the log item.
*/
mutex_lock(&server->alloc_mutex);
ret = (err_str = "splice meta_freed to other_freed",
scoutfs_alloc_splice_list(sb, &server->alloc,
&server->wri, server->other_freed,
&lt->meta_freed)) ?:
(err_str = "splice meta_avail",
scoutfs_alloc_splice_list(sb, &server->alloc,
&server->wri, server->other_freed,
&lt->meta_avail)) ?:
(err_str = "empty data_avail",
alloc_move_empty(sb, &super->data_alloc,
&lt->data_avail,
COMMIT_HOLD_ALLOC_BUDGET / 2)) ?:
(err_str = "empty data_freed",
alloc_move_empty(sb, &super->data_alloc,
&lt->data_freed,
COMMIT_HOLD_ALLOC_BUDGET / 2));
mutex_unlock(&server->alloc_mutex);
/*
* only finalize, allowing merging, once the allocators are
* fully freed
*/
if (ret == 0) {
/* the transaction is no longer open */
le64_add_cpu(&lt->flags, SCOUTFS_LOG_TREES_FINALIZED);
lt->finalize_seq = cpu_to_le64(scoutfs_server_next_seq(sb));
}
scoutfs_key_init_log_trees(&key, rid, le64_to_cpu(lt->nr));
err = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, lt,
sizeof(*lt));
BUG_ON(err != 0); /* alloc, log, srch items out of sync */
if (ret == -EINPROGRESS) {
more = true;
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, hold, 0);
if (ret < 0)
WARN_ON_ONCE(ret < 0);
server_hold_commit(sb, hold);
mutex_lock(&server->logs_mutex);
} else if (ret == 0) {
memset(&lt->item_root, 0, sizeof(lt->item_root));
memset(&lt->bloom_ref, 0, sizeof(lt->bloom_ref));
lt->inode_count_delta = 0;
lt->max_item_seq = 0;
lt->finalize_seq = 0;
le64_add_cpu(&lt->nr, 1);
lt->flags = 0;
}
} while (more);
if (ret < 0) {
scoutfs_err(sb,
"error %d finalizing log trees for rid %016llx: %s",
ret, rid, err_str);
}
return ret;
}
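The err_str pattern in do_finalize_ours() leans on two GNU C extensions used throughout this driver: a comma expression records a label before each step runs, and a ?: b evaluates b only when a is zero, so the chain stops at the first failure with the matching label set. A minimal sketch of the idiom (step names are made up):

#include <stdio.h>

static int step_ok(void)   { return 0; }
static int step_fail(void) { return -5; }

int main(void)
{
	const char *err_str = NULL;
	int ret;

	/* each comma expression sets the label before its step runs;
	 * the GNU a ?: b chain stops at the first nonzero result */
	ret = (err_str = "first step", step_ok()) ?:
	      (err_str = "second step", step_fail()) ?:
	      (err_str = "third step", step_ok());

	if (ret)
		printf("error %d at: %s\n", ret, err_str);	/* second step */
	return 0;
}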
/*
* Finalizing the log btrees for merging needs to be done carefully so
* that items don't appear to go backwards in time.
@@ -1232,6 +1089,7 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
struct scoutfs_log_merge_range rng;
struct scoutfs_mount_options opts;
struct scoutfs_log_trees each_lt;
struct scoutfs_log_trees fin;
unsigned int delay_ms;
unsigned long timeo;
bool saw_finalized;
@@ -1302,7 +1160,6 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
/* done if we're not finalizing and there's no finalized */
if (!finalize_ours && !saw_finalized) {
ret = 0;
scoutfs_inc_counter(sb, log_merge_no_finalized);
break;
}
@@ -1337,11 +1194,32 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
/* Finalize ours if it's visible to others */
if (ours_visible) {
ret = do_finalize_ours(sb, lt, hold);
fin = *lt;
memset(&fin.meta_avail, 0, sizeof(fin.meta_avail));
memset(&fin.meta_freed, 0, sizeof(fin.meta_freed));
memset(&fin.data_avail, 0, sizeof(fin.data_avail));
memset(&fin.data_freed, 0, sizeof(fin.data_freed));
memset(&fin.srch_file, 0, sizeof(fin.srch_file));
le64_add_cpu(&fin.flags, SCOUTFS_LOG_TREES_FINALIZED);
fin.finalize_seq = cpu_to_le64(scoutfs_server_next_seq(sb));
scoutfs_key_init_log_trees(&key, le64_to_cpu(fin.rid),
le64_to_cpu(fin.nr));
ret = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &fin,
sizeof(fin));
if (ret < 0) {
err_str = "finalizing ours";
err_str = "updating finalized log_trees";
break;
}
memset(&lt->item_root, 0, sizeof(lt->item_root));
memset(&lt->bloom_ref, 0, sizeof(lt->bloom_ref));
lt->inode_count_delta = 0;
lt->max_item_seq = 0;
lt->finalize_seq = 0;
le64_add_cpu(&lt->nr, 1);
lt->flags = 0;
}
/* wait a bit for mounts to arrive */
@@ -1728,7 +1606,6 @@ static int server_commit_log_trees(struct super_block *sb,
int ret;
if (arg_len != sizeof(struct scoutfs_log_trees)) {
err_str = "invalid message log_trees size";
ret = -EINVAL;
goto out;
}
@@ -1792,7 +1669,7 @@ static int server_commit_log_trees(struct super_block *sb,
ret = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &lt, sizeof(lt));
BUG_ON(ret < 0); /* dirtying should have guaranteed success, srch item inconsistent */
BUG_ON(ret < 0); /* dirtying should have guaranteed success */
if (ret < 0)
err_str = "updating log trees item";
@@ -1800,10 +1677,11 @@ unlock:
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, ret);
out:
if (ret < 0)
scoutfs_err(sb, "server error %d committing client logs for rid %016llx, nr %llu: %s",
ret, rid, le64_to_cpu(lt.nr), err_str);
scoutfs_err(sb, "server error %d committing client logs for rid %016llx: %s",
ret, rid, err_str);
out:
WARN_ON_ONCE(ret < 0);
return scoutfs_net_response(sb, conn, cmd, id, ret, NULL, 0);
}
@@ -1936,9 +1814,6 @@ static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
out:
mutex_unlock(&server->logs_mutex);
if (ret == 0)
scoutfs_inc_counter(sb, reclaimed_open_logs);
if (ret < 0 && ret != -EINPROGRESS)
scoutfs_err(sb, "server error %d reclaiming log trees for rid %016llx: %s",
ret, rid, err_str);
@@ -2140,7 +2015,7 @@ static int server_srch_get_compact(struct super_block *sb,
apply:
ret = server_apply_commit(sb, &hold, ret);
WARN_ON_ONCE(ret < 0 && ret != -ENOENT && ret != -ENOLINK); /* XXX leaked busy item */
WARN_ON_ONCE(ret < 0 && ret != -ENOENT); /* XXX leaked busy item */
out:
ret = scoutfs_net_response(sb, conn, cmd, id, ret,
sc, sizeof(struct scoutfs_srch_compact));
@@ -2180,7 +2055,7 @@ static int server_srch_commit_compact(struct super_block *sb,
&super->srch_root, rid, sc,
&av, &fr);
mutex_unlock(&server->srch_mutex);
if (ret < 0)
if (ret < 0) /* XXX very bad, leaks allocators */
goto apply;
/* reclaim allocators if they were set by _srch_commit_ */
@@ -2190,10 +2065,10 @@ static int server_srch_commit_compact(struct super_block *sb,
scoutfs_alloc_splice_list(sb, &server->alloc, &server->wri,
server->other_freed, &fr);
mutex_unlock(&server->alloc_mutex);
WARN_ON(ret < 0); /* XXX leaks allocators */
apply:
ret = server_apply_commit(sb, &hold, ret);
out:
WARN_ON(ret < 0); /* XXX leaks allocators */
return scoutfs_net_response(sb, conn, cmd, id, ret, NULL, 0);
}
@@ -2518,9 +2393,10 @@ out:
}
}
/* inconsistent */
scoutfs_bug_on_err(sb, ret,
"server error %d splicing log merge completion: %s", ret, err_str);
if (ret < 0)
scoutfs_err(sb, "server error %d splicing log merge completion: %s", ret, err_str);
BUG_ON(ret); /* inconsistent */
return ret ?: einprogress;
}
@@ -2655,7 +2531,7 @@ static void server_log_merge_free_work(struct work_struct *work)
ret = scoutfs_btree_free_blocks(sb, &server->alloc,
&server->wri, &fr.key,
&fr.root, COMMIT_HOLD_ALLOC_BUDGET / 8);
&fr.root, COMMIT_HOLD_ALLOC_BUDGET / 2);
if (ret < 0) {
err_str = "freeing log btree";
break;
@@ -2674,7 +2550,7 @@ static void server_log_merge_free_work(struct work_struct *work)
/* freed blocks are in allocator, we *have* to update fr */
BUG_ON(ret < 0);
if (server_hold_alloc_used_since(sb, &hold) >= (COMMIT_HOLD_ALLOC_BUDGET * 3) / 4) {
if (server_hold_alloc_used_since(sb, &hold) >= COMMIT_HOLD_ALLOC_BUDGET / 2) {
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, ret);
commit = false;
@@ -2765,7 +2641,10 @@ restart:
/* find the next range, always checking for splicing */
for (;;) {
ret = next_log_merge_range(sb, &super->log_merge, &stat.next_range_key, &rng);
key = stat.next_range_key;
key.sk_zone = SCOUTFS_LOG_MERGE_RANGE_ZONE;
ret = next_log_merge_item_key(sb, &super->log_merge, SCOUTFS_LOG_MERGE_RANGE_ZONE,
&key, &rng, sizeof(rng));
if (ret < 0 && ret != -ENOENT) {
err_str = "finding merge range item";
goto out;

View File

@@ -1,45 +0,0 @@
#!/bin/bash
#
# Unfortunately, kernels can ship with sparse errors that are
# unrelated to us.
#
# The exit status of this filtering wrapper will indicate an error if
# sparse wasn't found or if there were any unfiltered output lines. It
# can hide error exit status from sparse or grep if they don't produce
# output that makes it past the filters.
#
# must have sparse. Fail if it's not found, masking the success path's output.
which sparse > /dev/null || exit 1
# initial unmatchable, additional added as RE+="|..."
RE="$^"
#
# Darn. sparse has multi-line error messages, and I'd rather not bother
# with multi-line filters. So we'll just drop this context.
#
# command-line: note: in included file (through include/linux/netlink.h, include/linux/ethtool.h, include/linux/netdevice.h, include/net/sock.h, /root/scoutfs/kmod/src/kernelcompat.h, builtin):
# fprintf(stderr, "%s: note: in included file%s:\n",
#
RE+="|: note: in included file"
# 3.10.0-1160.119.1.el7.x86_64.debug
# include/linux/posix_acl.h:138:9: warning: incorrect type in assignment (different address spaces)
# include/linux/posix_acl.h:138:9: expected struct posix_acl *<noident>
# include/linux/posix_acl.h:138:9: got struct posix_acl [noderef] <asn:4>*<noident>
RE+="|include/linux/posix_acl.h:"
# 3.10.0-1160.119.1.el7.x86_64.debug
#include/uapi/linux/perf_event.h:146:56: warning: cast truncates bits from constant value (8000000000000000 becomes 0)
RE+="|include/uapi/linux/perf_event.h:"
# 4.18.0-513.24.1.el8_9.x86_64+debug'
#./include/linux/skbuff.h:824:1: warning: directive in macro's argument list
RE+="|include/linux/skbuff.h:"
sparse "$@" |& \
grep -E -v "($RE)" |& \
awk '{ print $0 } END { exit NR > 0 }'
exit $?

View File

@@ -62,7 +62,7 @@
* re-allocated and re-written. Search can restart by checking the
* btree for the current set of files. Compaction reads log files which
* are protected from other compactions by the persistent busy items
* created by the server. Compaction won't see its blocks reused out
* created by the server. Compaction won't see it's blocks reused out
* from under it, but it can encounter stale cached blocks that need to
* be invalidated.
*/
@@ -442,10 +442,6 @@ out:
if (ret == 0 && (flags & GFB_INSERT) && blk >= le64_to_cpu(sfl->blocks))
sfl->blocks = cpu_to_le64(blk + 1);
if (bl) {
trace_scoutfs_get_file_block(sb, bl->blkno, flags);
}
*bl_ret = bl;
return ret;
}
@@ -537,35 +533,23 @@ out:
* the pairs cancel each other out by all readers (the second encoding
* looks like deletion) so they aren't visible to the first/last bounds of
* the block or file.
*
* We use the same entry repeatedly, so the diff between them will be empty.
* This lets us just emit the two-byte count word, leaving the other bytes
* as zero.
*
* Split the desired total len into two pieces, adding any remainder to the
* first four-bit value.
*/
static void append_padded_entry(struct scoutfs_srch_file *sfl,
struct scoutfs_srch_block *srb,
int len)
static int append_padded_entry(struct scoutfs_srch_file *sfl, u64 blk,
struct scoutfs_srch_block *srb, struct scoutfs_srch_entry *sre)
{
int each;
int rem;
u16 lengths = 0;
u8 *buf = srb->entries + le32_to_cpu(srb->entry_bytes);
int ret;
each = (len - 2) >> 1;
rem = (len - 2) & 1;
ret = encode_entry(srb->entries + le32_to_cpu(srb->entry_bytes),
sre, &srb->tail);
if (ret > 0) {
srb->tail = *sre;
le32_add_cpu(&srb->entry_nr, 1);
le32_add_cpu(&srb->entry_bytes, ret);
le64_add_cpu(&sfl->entries, 1);
ret = 0;
}
lengths |= each + rem;
lengths |= each << 4;
memset(buf, 0, len);
put_unaligned_le16(lengths, buf);
le32_add_cpu(&srb->entry_nr, 1);
le32_add_cpu(&srb->entry_bytes, len);
le64_add_cpu(&sfl->entries, 1);
return ret;
}
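To make that length-word arithmetic concrete, here is a small standalone C sketch (hypothetical helper name, not the real on-disk encoder) of how a pad of len bytes splits its len - 2 payload bytes between the two four-bit length fields, with the odd remainder going to the first nibble:

    #include <stdio.h>
    #include <stdint.h>

    /* mirrors the padding math above: a pad entry is a two-byte count
     * word followed by zeroed diff bytes, so only the nibbles matter */
    static uint16_t pad_length_word(int len)
    {
            int each = (len - 2) >> 1;
            int rem = (len - 2) & 1;

            return (uint16_t)((each + rem) | (each << 4));
    }

    int main(void)
    {
            printf("len 10 -> 0x%02x\n", pad_length_word(10)); /* 0x44 */
            printf("len 9  -> 0x%02x\n", pad_length_word(9));  /* 0x34 */
            return 0;
    }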
/*
@@ -576,41 +560,61 @@ static void append_padded_entry(struct scoutfs_srch_file *sfl,
* This is called when there is a single existing entry in the block.
* We have the entire block to work with. We encode pairs of matching
* entries. This hides them from readers (both searches and merging) as
* they're interpreted as creation and deletion and are deleted.
* they're interpreted as creation and deletion and are deleted. We use
* the existing hash value of the first entry in the block but then set
* the inode to an impossibly large number so it doesn't interfere with
* anything.
*
* For simplicity and to maintain sort ordering within the block, we reuse
* the existing entry. This lets us skip the encoding step, because we know
* the diff will be zero. We can zero-pad the resulting entries to hit the
* target offset exactly.
To hit the specific offset we very carefully manage the number of
bytes of change between fields in the entry. We know that if we
change all the bytes of the ino and id we end up with a 20 byte
* (2+8+8,2) encoding of the pair of entries. To have the last entry
* start at the _SAFE_POS offset we know that the final 20 byte pair
* encoding needs to end at 2 bytes (second entry encoding) after the
* _SAFE_POS offset.
*
* Because we can't predict the exact number of entry_bytes when we start,
* we adjust the byte count of subsequent entries until we wind up at a
* multiple of 20 bytes away from our goal and then use that length for
* the remaining entries.
*
* We could just use a single pair of unnaturally large entries to consume
* the needed space, adjusting for an odd number of entry_bytes if necessary.
* The use of 19 or 20 bytes for the entry pair matches what we would see with
* real (non-zero) entries that vary from the existing entry.
* So as we encode pairs we watch the delta of our current offset from
* that desired final offset of 2 past _SAFE_POS. If we're a multiple
* of 20 away then we encode the full 20 byte pairs. If we're not, then
* we drop a byte to encode 19 bytes. That'll slowly change the offset
* to be a multiple of 20 again while encoding large entries.
*/
static void pad_entries_at_safe(struct scoutfs_srch_file *sfl,
static void pad_entries_at_safe(struct scoutfs_srch_file *sfl, u64 blk,
struct scoutfs_srch_block *srb)
{
struct scoutfs_srch_entry sre;
u32 target;
s32 diff;
u64 hash;
u64 ino;
u64 id;
int ret;
hash = le64_to_cpu(srb->tail.hash);
ino = le64_to_cpu(srb->tail.ino) | (1ULL << 62);
id = le64_to_cpu(srb->tail.id);
target = SCOUTFS_SRCH_BLOCK_SAFE_BYTES + 2;
while ((diff = target - le32_to_cpu(srb->entry_bytes)) > 0) {
append_padded_entry(sfl, srb, 10);
ino ^= 1ULL << (7 * 8);
if (diff % 20 == 0) {
append_padded_entry(sfl, srb, 10);
id ^= 1ULL << (7 * 8);
} else {
append_padded_entry(sfl, srb, 9);
id ^= 1ULL << (6 * 8);
}
}
WARN_ON_ONCE(diff != 0);
sre.hash = cpu_to_le64(hash);
sre.ino = cpu_to_le64(ino);
sre.id = cpu_to_le64(id);
ret = append_padded_entry(sfl, blk, srb, &sre);
if (ret == 0)
ret = append_padded_entry(sfl, blk, srb, &sre);
BUG_ON(ret != 0);
diff = target - le32_to_cpu(srb->entry_bytes);
}
}
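As a sanity check of that pair-length walk, this standalone sketch (with assumed stand-in values for the block constants) shows how choosing 20-byte pairs when the remaining distance divides evenly, and 19-byte pairs otherwise, lands exactly on the target offset:

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
            /* stand-ins for SCOUTFS_SRCH_BLOCK_SAFE_BYTES + 2 and the
             * starting entry_bytes; chosen so the target is reachable,
             * as the real caller guarantees */
            unsigned int target = 4096 + 2;
            unsigned int entry_bytes = 37;
            int diff;

            while ((diff = (int)(target - entry_bytes)) > 0) {
                    /* each 19-byte pair shifts diff's remainder mod 20
                     * by one until full 20-byte pairs finish the job */
                    entry_bytes += (diff % 20 == 0) ? 20 : 19;
            }

            assert(entry_bytes == target);
            printf("landed exactly on %u\n", entry_bytes);
            return 0;
    }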
/*
@@ -745,14 +749,14 @@ static int search_log_file(struct super_block *sb,
for (i = 0; i < le32_to_cpu(srb->entry_nr); i++) {
if (pos > SCOUTFS_SRCH_BLOCK_SAFE_BYTES) {
/* can only be inconsistency :/ */
ret = -EIO;
ret = EIO;
break;
}
ret = decode_entry(srb->entries + pos, &sre, &prev);
if (ret <= 0) {
/* can only be inconsistency :/ */
ret = -EIO;
ret = EIO;
break;
}
pos += ret;
@@ -855,15 +859,15 @@ static int search_sorted_file(struct super_block *sb,
if (pos > SCOUTFS_SRCH_BLOCK_SAFE_BYTES) {
/* can only be inconsistency :/ */
ret = -EIO;
goto out;
ret = EIO;
break;
}
ret = decode_entry(srb->entries + pos, &sre, &prev);
if (ret <= 0) {
/* can only be inconsistency :/ */
ret = -EIO;
goto out;
ret = EIO;
break;
}
pos += ret;
prev = sre;
@@ -968,8 +972,6 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
scoutfs_inc_counter(sb, srch_search_xattrs);
trace_scoutfs_ioc_search_xattrs(sb, ino, last_ino);
*done = false;
srch_init_rb_root(sroot);
@@ -1406,7 +1408,7 @@ int scoutfs_srch_commit_compact(struct super_block *sb,
ret = -EIO;
scoutfs_btree_put_iref(&iref);
}
if (ret < 0)
if (ret < 0) /* XXX leaks allocators */
goto out;
/* restore busy to pending if the operation failed */
@@ -1426,8 +1428,10 @@ int scoutfs_srch_commit_compact(struct super_block *sb,
/* update file references if we finished compaction (!deleting) */
if (!(res->flags & SCOUTFS_SRCH_COMPACT_FLAG_DELETE)) {
ret = commit_files(sb, alloc, wri, root, res);
if (ret < 0)
if (ret < 0) {
/* XXX we can't commit, shutdown? */
goto out;
}
/* transition flags for deleting input files */
for (i = 0; i < res->nr; i++) {
@@ -1454,7 +1458,7 @@ update:
le64_to_cpu(pending->id), 0);
ret = scoutfs_btree_insert(sb, alloc, wri, root, &key,
pending, sizeof(*pending));
if (WARN_ON_ONCE(ret < 0)) /* XXX inconsistency */
if (ret < 0)
goto out;
}
@@ -1467,6 +1471,7 @@ update:
BUG_ON(err); /* both busy and pending present */
}
out:
WARN_ON_ONCE(ret < 0); /* XXX inconsistency */
kfree(busy);
return ret;
}
@@ -1664,7 +1669,7 @@ static int kway_merge(struct super_block *sb,
/* end sorted block on _SAFE offset for testing */
if (bl && le32_to_cpu(srb->entry_nr) == 1 && logs_input &&
scoutfs_trigger(sb, SRCH_COMPACT_LOGS_PAD_SAFE)) {
pad_entries_at_safe(sfl, srb);
pad_entries_at_safe(sfl, blk, srb);
scoutfs_block_put(sb, bl);
bl = NULL;
blk++;
@@ -1797,7 +1802,7 @@ static void swap_page_sre(void *A, void *B, int size)
* typically, ~10x worst case).
*
* Because we read and sort all the input files we must perform the full
* compaction in one operation. The server must have given us
* compaction in one operation. The server must have given us a
* sufficiently large avail/freed lists, otherwise we'll return ENOSPC.
*/
static int compact_logs(struct super_block *sb,
@@ -1861,14 +1866,14 @@ static int compact_logs(struct super_block *sb,
if (pos > SCOUTFS_SRCH_BLOCK_SAFE_BYTES) {
/* can only be inconsistency :/ */
ret = -EIO;
goto out;
ret = EIO;
break;
}
ret = decode_entry(srb->entries + pos, sre, &prev);
if (ret <= 0) {
/* can only be inconsistency :/ */
ret = -EIO;
ret = EIO;
goto out;
}
prev = *sre;
@@ -2276,11 +2281,12 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
} else {
ret = -EINVAL;
}
if (ret < 0)
goto commit;
scoutfs_alloc_prepare_commit(sb, &alloc, &wri);
if (ret == 0)
ret = scoutfs_alloc_prepare_commit(sb, &alloc, &wri) ?:
scoutfs_block_writer_write(sb, &wri);
commit:
/* the server won't use our partial compact if _ERROR is set */
sc->meta_avail = alloc.avail;
sc->meta_freed = alloc.freed;
@@ -2297,7 +2303,7 @@ out:
scoutfs_inc_counter(sb, srch_compact_error);
scoutfs_block_writer_forget_all(sb, &wri);
queue_compact_work(srinf, sc != NULL && sc->nr > 0 && ret == 0);
queue_compact_work(srinf, sc->nr > 0 && ret == 0);
kfree(sc);
}

View File

@@ -512,9 +512,9 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sbi = kzalloc(sizeof(struct scoutfs_sb_info), GFP_KERNEL);
sb->s_fs_info = sbi;
sbi->sb = sb;
if (!sbi)
return -ENOMEM;
sbi->sb = sb;
ret = assign_random_id(sbi);
if (ret < 0)

View File

@@ -196,7 +196,7 @@ static int retry_forever(struct super_block *sb, int (*func)(struct super_block
}
if (scoutfs_forcing_unmount(sb)) {
ret = -ENOLINK;
ret = -EIO;
break;
}
@@ -252,7 +252,7 @@ void scoutfs_trans_write_func(struct work_struct *work)
}
if (scoutfs_forcing_unmount(sb)) {
ret = -ENOLINK;
ret = -EIO;
goto out;
}

View File

@@ -742,7 +742,7 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
int ret;
int err;
trace_scoutfs_xattr_set(sb, ino, name_len, value, size, flags);
trace_scoutfs_xattr_set(sb, name_len, value, size, flags);
if (WARN_ON_ONCE(tgs->totl && tgs->indx) ||
WARN_ON_ONCE((tgs->totl | tgs->indx) && !tag_lock))

View File

@@ -15,7 +15,9 @@ BIN := src/createmany \
src/o_tmpfile_umask \
src/o_tmpfile_linkat \
src/mmap_stress \
src/mmap_validate
src/mmap_validate \
src/parallel_restore \
src/restore_copy
DEPS := $(wildcard src/*.d)
@@ -27,8 +29,13 @@ endif
src/mmap_stress: LIBS+=-lpthread
src/parallel_restore_cflags := ../utils/src/scoutfs_parallel_restore.a -lm
src/parallel_restore: ../utils/src/scoutfs_parallel_restore.a
src/restore_copy_cflags := ../utils/src/scoutfs_parallel_restore.a -lm
src/restore_copy : ../utils/src/scoutfs_parallel_restore.a
$(BIN): %: %.c Makefile
gcc $(CFLAGS) -MD -MP -MF $*.d $< -o $@ $(LIBS)
gcc $(CFLAGS) -MD -MP -MF $*.d $< -o $@ $(LIBS) $($(@)_cflags)
.PHONY: clean
clean:

View File

@@ -117,7 +117,6 @@ used during the test.
| T\_NR\_MOUNTS | number of mounts | -n | 3 |
| T\_O[0-9] | mount options | created per run | -o server\_addr= |
| T\_QUORUM | quorum count | -q | 2 |
| T\_EXTRA | per-test file dir | revision controlled | tests/extra/t |
| T\_TMP | per-test tmp prefix | made for test | results/tmp/t/tmp |
| T\_TMPDIR | per-test tmp dir dir | made for test | results/tmp/t |

View File

@@ -1,882 +0,0 @@
Ran:
generic/001
generic/002
generic/004
generic/005
generic/006
generic/007
generic/008
generic/009
generic/011
generic/012
generic/013
generic/014
generic/015
generic/016
generic/018
generic/020
generic/021
generic/022
generic/023
generic/024
generic/025
generic/026
generic/028
generic/029
generic/030
generic/031
generic/032
generic/033
generic/034
generic/035
generic/037
generic/039
generic/040
generic/041
generic/050
generic/052
generic/053
generic/056
generic/057
generic/058
generic/059
generic/060
generic/061
generic/062
generic/063
generic/064
generic/065
generic/066
generic/067
generic/069
generic/070
generic/071
generic/073
generic/076
generic/078
generic/079
generic/080
generic/081
generic/082
generic/084
generic/086
generic/087
generic/088
generic/090
generic/091
generic/092
generic/094
generic/096
generic/097
generic/098
generic/099
generic/101
generic/104
generic/105
generic/106
generic/107
generic/110
generic/111
generic/113
generic/114
generic/115
generic/116
generic/117
generic/118
generic/119
generic/120
generic/121
generic/122
generic/123
generic/124
generic/126
generic/128
generic/129
generic/130
generic/131
generic/134
generic/135
generic/136
generic/138
generic/139
generic/140
generic/141
generic/142
generic/143
generic/144
generic/145
generic/146
generic/147
generic/148
generic/149
generic/150
generic/151
generic/152
generic/153
generic/154
generic/155
generic/156
generic/157
generic/158
generic/159
generic/160
generic/161
generic/162
generic/163
generic/169
generic/171
generic/172
generic/173
generic/174
generic/177
generic/178
generic/179
generic/180
generic/181
generic/182
generic/183
generic/184
generic/185
generic/188
generic/189
generic/190
generic/191
generic/193
generic/194
generic/195
generic/196
generic/197
generic/198
generic/199
generic/200
generic/201
generic/202
generic/203
generic/205
generic/206
generic/207
generic/210
generic/211
generic/212
generic/214
generic/215
generic/216
generic/217
generic/218
generic/219
generic/220
generic/221
generic/222
generic/223
generic/225
generic/227
generic/228
generic/229
generic/230
generic/235
generic/236
generic/237
generic/238
generic/240
generic/244
generic/245
generic/246
generic/247
generic/248
generic/249
generic/250
generic/252
generic/253
generic/254
generic/255
generic/256
generic/257
generic/258
generic/259
generic/260
generic/261
generic/262
generic/263
generic/264
generic/265
generic/266
generic/267
generic/268
generic/271
generic/272
generic/276
generic/277
generic/278
generic/279
generic/281
generic/282
generic/283
generic/284
generic/286
generic/287
generic/288
generic/289
generic/290
generic/291
generic/292
generic/293
generic/294
generic/295
generic/296
generic/301
generic/302
generic/303
generic/304
generic/305
generic/306
generic/307
generic/308
generic/309
generic/312
generic/313
generic/314
generic/315
generic/316
generic/317
generic/319
generic/322
generic/324
generic/325
generic/326
generic/327
generic/328
generic/329
generic/330
generic/331
generic/332
generic/335
generic/336
generic/337
generic/341
generic/342
generic/343
generic/346
generic/348
generic/353
generic/355
generic/358
generic/359
generic/360
generic/361
generic/362
generic/363
generic/364
generic/365
generic/366
generic/367
generic/368
generic/369
generic/370
generic/371
generic/372
generic/373
generic/374
generic/375
generic/376
generic/377
generic/378
generic/379
generic/380
generic/381
generic/382
generic/383
generic/384
generic/385
generic/386
generic/389
generic/391
generic/392
generic/393
generic/394
generic/395
generic/396
generic/397
generic/398
generic/400
generic/401
generic/402
generic/403
generic/404
generic/406
generic/407
generic/408
generic/412
generic/413
generic/414
generic/417
generic/419
generic/420
generic/421
generic/422
generic/424
generic/425
generic/426
generic/427
generic/428
generic/436
generic/437
generic/439
generic/440
generic/443
generic/445
generic/446
generic/448
generic/449
generic/450
generic/451
generic/452
generic/453
generic/454
generic/456
generic/458
generic/460
generic/462
generic/463
generic/465
generic/466
generic/468
generic/469
generic/470
generic/471
generic/474
generic/477
generic/478
generic/479
generic/480
generic/481
generic/483
generic/485
generic/486
generic/487
generic/488
generic/489
generic/490
generic/491
generic/492
generic/498
generic/499
generic/501
generic/502
generic/503
generic/504
generic/505
generic/506
generic/507
generic/508
generic/509
generic/510
generic/511
generic/512
generic/513
generic/514
generic/515
generic/516
generic/517
generic/518
generic/519
generic/520
generic/523
generic/524
generic/525
generic/526
generic/527
generic/528
generic/529
generic/530
generic/531
generic/533
generic/534
generic/535
generic/536
generic/537
generic/538
generic/539
generic/540
generic/541
generic/542
generic/543
generic/544
generic/545
generic/546
generic/547
generic/548
generic/549
generic/550
generic/552
generic/553
generic/555
generic/556
generic/557
generic/566
generic/567
generic/571
generic/572
generic/573
generic/574
generic/575
generic/576
generic/577
generic/578
generic/580
generic/581
generic/582
generic/583
generic/584
generic/586
generic/587
generic/588
generic/591
generic/592
generic/593
generic/594
generic/595
generic/596
generic/597
generic/598
generic/599
generic/600
generic/601
generic/602
generic/603
generic/604
generic/605
generic/606
generic/607
generic/608
generic/609
generic/610
generic/611
generic/612
generic/613
generic/614
generic/618
generic/621
generic/623
generic/624
generic/625
generic/626
generic/628
generic/629
generic/630
generic/632
generic/634
generic/635
generic/637
generic/638
generic/639
generic/640
generic/644
generic/645
generic/646
generic/647
generic/651
generic/652
generic/653
generic/654
generic/655
generic/657
generic/658
generic/659
generic/660
generic/661
generic/662
generic/663
generic/664
generic/665
generic/666
generic/667
generic/668
generic/669
generic/673
generic/674
generic/675
generic/676
generic/677
generic/678
generic/679
generic/680
generic/681
generic/682
generic/683
generic/684
generic/685
generic/686
generic/687
generic/688
generic/689
shared/002
shared/032
Not run:
generic/008
generic/009
generic/012
generic/015
generic/016
generic/018
generic/021
generic/022
generic/025
generic/026
generic/031
generic/033
generic/050
generic/052
generic/058
generic/059
generic/060
generic/061
generic/063
generic/064
generic/078
generic/079
generic/081
generic/082
generic/091
generic/094
generic/096
generic/110
generic/111
generic/113
generic/114
generic/115
generic/116
generic/118
generic/119
generic/121
generic/122
generic/123
generic/128
generic/130
generic/134
generic/135
generic/136
generic/138
generic/139
generic/140
generic/142
generic/143
generic/144
generic/145
generic/146
generic/147
generic/148
generic/149
generic/150
generic/151
generic/152
generic/153
generic/154
generic/155
generic/156
generic/157
generic/158
generic/159
generic/160
generic/161
generic/162
generic/163
generic/171
generic/172
generic/173
generic/174
generic/177
generic/178
generic/179
generic/180
generic/181
generic/182
generic/183
generic/185
generic/188
generic/189
generic/190
generic/191
generic/193
generic/194
generic/195
generic/196
generic/197
generic/198
generic/199
generic/200
generic/201
generic/202
generic/203
generic/205
generic/206
generic/207
generic/210
generic/211
generic/212
generic/214
generic/216
generic/217
generic/218
generic/219
generic/220
generic/222
generic/223
generic/225
generic/227
generic/229
generic/230
generic/235
generic/238
generic/240
generic/244
generic/250
generic/252
generic/253
generic/254
generic/255
generic/256
generic/259
generic/260
generic/261
generic/262
generic/263
generic/264
generic/265
generic/266
generic/267
generic/268
generic/271
generic/272
generic/276
generic/277
generic/278
generic/279
generic/281
generic/282
generic/283
generic/284
generic/287
generic/288
generic/289
generic/290
generic/291
generic/292
generic/293
generic/295
generic/296
generic/301
generic/302
generic/303
generic/304
generic/305
generic/312
generic/314
generic/316
generic/317
generic/324
generic/326
generic/327
generic/328
generic/329
generic/330
generic/331
generic/332
generic/353
generic/355
generic/358
generic/359
generic/361
generic/362
generic/363
generic/364
generic/365
generic/366
generic/367
generic/368
generic/369
generic/370
generic/371
generic/372
generic/373
generic/374
generic/378
generic/379
generic/380
generic/381
generic/382
generic/383
generic/384
generic/385
generic/386
generic/391
generic/392
generic/395
generic/396
generic/397
generic/398
generic/400
generic/402
generic/404
generic/406
generic/407
generic/408
generic/412
generic/413
generic/414
generic/417
generic/419
generic/420
generic/421
generic/422
generic/424
generic/425
generic/427
generic/439
generic/440
generic/446
generic/449
generic/450
generic/451
generic/453
generic/454
generic/456
generic/458
generic/462
generic/463
generic/465
generic/466
generic/468
generic/469
generic/470
generic/471
generic/474
generic/485
generic/487
generic/488
generic/491
generic/492
generic/499
generic/501
generic/503
generic/505
generic/506
generic/507
generic/508
generic/511
generic/513
generic/514
generic/515
generic/516
generic/517
generic/518
generic/519
generic/520
generic/528
generic/530
generic/536
generic/537
generic/538
generic/539
generic/540
generic/541
generic/542
generic/543
generic/544
generic/545
generic/546
generic/548
generic/549
generic/550
generic/552
generic/553
generic/555
generic/556
generic/566
generic/567
generic/572
generic/573
generic/574
generic/575
generic/576
generic/577
generic/578
generic/580
generic/581
generic/582
generic/583
generic/584
generic/586
generic/587
generic/588
generic/591
generic/592
generic/593
generic/594
generic/595
generic/596
generic/597
generic/598
generic/599
generic/600
generic/601
generic/602
generic/603
generic/605
generic/606
generic/607
generic/608
generic/609
generic/610
generic/612
generic/613
generic/621
generic/623
generic/624
generic/625
generic/626
generic/628
generic/629
generic/630
generic/635
generic/644
generic/645
generic/646
generic/647
generic/651
generic/652
generic/653
generic/654
generic/655
generic/657
generic/658
generic/659
generic/660
generic/661
generic/662
generic/663
generic/664
generic/665
generic/666
generic/667
generic/668
generic/669
generic/673
generic/674
generic/675
generic/677
generic/678
generic/679
generic/680
generic/681
generic/682
generic/683
generic/684
generic/685
generic/686
generic/687
generic/688
generic/689
shared/002
shared/032
Passed all 512 tests

View File

@@ -1,44 +0,0 @@
generic/003 # missing atime update in buffered read
generic/075 # file content mismatch failures (fds, etc)
generic/103 # enospc causes trans commit failures
generic/108 # mount fails on failing device?
generic/112 # file content mismatch failures (fds, etc)
generic/213 # enospc causes trans commit failures
generic/318 # can't support user namespaces until v5.11
generic/321 # requires selinux enabled for '+' in ls?
generic/338 # BUG_ON update inode error handling
generic/347 # _dmthin_mount doesn't work?
generic/356 # swap
generic/357 # swap
generic/409 # bind mounts not scripted yet
generic/410 # bind mounts not scripted yet
generic/411 # bind mounts not scripted yet
generic/423 # symlink inode size is strlen() + 1 on scoutfs
generic/430 # xfs_io copy_range missing in el7
generic/431 # xfs_io copy_range missing in el7
generic/432 # xfs_io copy_range missing in el7
generic/433 # xfs_io copy_range missing in el7
generic/434 # xfs_io copy_range missing in el7
generic/441 # dm-mapper
generic/444 # el9's posix_acl_update_mode is buggy ?
generic/467 # open_by_handle ESTALE
generic/472 # swap
generic/484 # dm-mapper
generic/493 # swap
generic/494 # swap
generic/495 # swap
generic/496 # swap
generic/497 # swap
generic/532 # xfs_io statx attrib_mask missing in el7
generic/554 # swap
generic/563 # cgroup+loopdev
generic/564 # xfs_io copy_range missing in el7
generic/565 # xfs_io copy_range missing in el7
generic/568 # falloc not resulting in block count increase
generic/569 # swap
generic/570 # swap
generic/620 # dm-hugedisk
generic/633 # id-mapped mounts missing in el7
generic/636 # swap
generic/641 # swap
generic/643 # swap

View File

@@ -8,34 +8,36 @@
echo "$0 running rid '$SCOUTFS_FENCED_REQ_RID' ip '$SCOUTFS_FENCED_REQ_IP' args '$@'"
echo_fail() {
echo "$@" >> /dev/stderr
log() {
echo "$@" > /dev/stderr
exit 1
}
# silence error messages
quiet_cat()
{
cat "$@" 2>/dev/null
echo_fail() {
echo "$@" > /dev/stderr
exit 1
}
rid="$SCOUTFS_FENCED_REQ_RID"
shopt -s nullglob
for fs in /sys/fs/scoutfs/*; do
fs_rid="$(quiet_cat $fs/rid)"
nr="$(quiet_cat $fs/data_device_maj_min)"
[ ! -d "$fs" -o "$fs_rid" != "$rid" ] && continue
[ ! -d "$fs" ] && continue
mnt=$(findmnt -l -n -t scoutfs -o TARGET -S $nr) || \
echo_fail "findmnt -t scoutfs -S $nr failed"
[ -z "$mnt" ] && continue
if ! umount -qf "$mnt"; then
if [ -d "$fs" ]; then
echo_fail "umount -qf $mnt failed"
fi
fs_rid="$(cat $fs/rid)" || \
echo_fail "failed to get rid in $fs"
if [ "$fs_rid" != "$rid" ]; then
continue
fi
nr="$(cat $fs/data_device_maj_min)" || \
echo_fail "failed to get data device major:minor in $fs"
mnts=$(findmnt -l -n -t scoutfs -o TARGET -S $nr) || \
echo_fail "findmnt -t scoutfs -S $nr failed"
for mnt in $mnts; do
umount -f "$mnt" || \
echo_fail "umout -f $mnt failed"
done
done
exit 0

View File

@@ -64,27 +64,21 @@ t_rc()
}
#
# As run, stdout/err are redirected to a file that will be compared with
# the stored expected golden output of the test. This redirects
# stdout/err in the script to stdout of the invoking run-test. It's
# intended to give visible output of tests without being included in the
# golden output.
# redirect test output back to the output of the invoking script instead
# of the compared output.
#
# (see the goofy "exec" fd manipulation in the main run-tests as it runs
# each test)
#
t_stdout_invoked()
t_restore_output()
{
exec >&6 2>&1
}
#
# This undoes t_stdout_invoked, returning the test's stdout/err to the
# output file as it was when it was launched.
# redirect a command's output back to the compared output after the
# test has restored its output
#
t_stdout_compare()
t_compare_output()
{
exec >&7 2>&1
"$@" >&7 2>&1
}
#

View File

@@ -121,7 +121,6 @@ t_filter_dmesg()
# in debugging kernels we can slow things down a bit
re="$re|hrtimer: interrupt took .*"
re="$re|clocksource: Long readout interval"
# fencing tests force unmounts and trigger timeouts
re="$re|scoutfs .* forcing unmount"
@@ -141,9 +140,6 @@ t_filter_dmesg()
re="$re|scoutfs .* error.*server failed to bind to.*"
re="$re|scoutfs .* critical transaction commit failure.*"
# ENOLINK (-67) indicates an expected forced unmount error
re="$re|scoutfs .* error -67 .*"
# change-devices causes loop device resizing
re="$re|loop: module loaded"
re="$re|loop[0-9].* detected capacity change from.*"
@@ -167,9 +163,6 @@ t_filter_dmesg()
# perf warning that it adjusted sample rate
re="$re|perf: interrupt took too long.*lowering kernel.perf_event_max_sample_rate.*"
# some ci test guests are unresponsive
re="$re|longest quorum heartbeat .* delay"
egrep -v "($re)" | \
ignore_harmless_unwind_kasan_stack_oob
}

View File

@@ -1,3 +1,4 @@
== setting longer hung task timeout
== creating fragmented extents
== unlink file with moved extents to free extents per block
== cleanup

View File

@@ -49,7 +49,7 @@ offline wating should be empty:
0
== truncating does wait
truncate should be waiting for first block:
truncate should no longer be waiting:
trunate should no longer be waiting:
0
== writing waits
should be waiting for write

View File

@@ -0,0 +1,26 @@
== simple mkfs/restore/mount
committed_seq 1120
total_meta_blocks 163840
total_data_blocks 15728640
1440 1440 57120
80 80 400
0: offset: 0 length: 1 flags: O.L
extents: 1
0: offset: 0 length: 1 flags: O.L
extents: 1
0: offset: 0 length: 1 flags: O.L
extents: 1
0: offset: 0 length: 1 flags: O.L
extents: 1
Type Used
MetaData 34721
Data 64
== under ENOSPC
Type Used
MetaData 117073
Data 64
== ENOSPC
== attempt to restore data device
== attempt format_v1 restore
== test if previously mounted
== cleanup

tests/golden/restore_copy Normal file
View File

@@ -0,0 +1,83 @@
== restore_copy content verification
d /mnt/test/data/d
f /mnt/test/data/f
l /mnt/test/data/l -> broken
f /mnt/test/data/h
l /mnt/test/data/F -> f
b /mnt/test/data/b
c /mnt/test/data/c
c /mnt/test/data/u
p /mnt/test/data/p
f /mnt/test/data/f4096
f /mnt/test/data/falloc
f /mnt/test/data/truncate
s /mnt/test/data/s
f /mnt/test/data/mode_t
f /mnt/test/data/uidgid
f /mnt/test/data/retention
f /mnt/test/data/proj
f /mnt/test/data/proj_d/f
d /mnt/test/data/proj_d
d /mnt/test/data
Quota rule: 7 13,L,- 0,L,- 0,L,- I 33 -
Quota rule: 7 11,L,- 0,L,- 0,L,- I 33 -
Quota rule: 7 12,L,- 0,L,- 0,L,- I 33 -
Quota rule: 7 10,L,- 0,L,- 0,L,- I 33 -
Quota rule: 7 15,L,- 0,L,- 0,L,- I 33 -
Quota rule: 7 14,L,- 0,L,- 0,L,- I 33 -
Wrote 1 directories, 0 files, 458752 bytes total
== verify metadata bits on restored fs
total 16516
-rw-r--r--. 1 33333 33333 0 uidgid
crw-r--r--. 1 0 0 2, 2 u
-rw-r--r--. 1 0 0 16777216 truncate
srwxr-xr-x. 1 0 0 0 s
-rw-r--r--. 1 0 0 0 retention
drwxr-xr-x. 2 0 0 1 proj_d
-rw-r--r--. 1 0 0 0 proj
prw-r--r--. 1 0 0 0 p
-rwsrwsrwx. 1 0 0 0 mode_t
lrwxrwxrwx. 1 0 0 7 l -> broken
-rw-r--r--. 2 0 0 0 h
-rw-r--r--. 1 0 0 131072 falloc
-rw-r--r--. 1 0 0 4096 f4096
-rw-r--r--. 2 0 0 0 f
drwxr-xr-x. 2 0 0 0 d
crw-r--r--. 1 0 0 0, 0 c
brw-r--r--. 1 0 0 1, 1 b
lrwxrwxrwx. 1 0 0 2 F -> f
1
12345
0: offset: 0 length: 1 flags: O.L
extents: 1
0: offset: 0 length: 32 flags: O.L
extents: 1
0: offset: 0 length: 4096 flags: O.L
extents: 1
7 15,L,- 0,L,- 0,L,- I 33 -
7 14,L,- 0,L,- 0,L,- I 33 -
7 13,L,- 0,L,- 0,L,- I 33 -
7 12,L,- 0,L,- 0,L,- I 33 -
7 11,L,- 0,L,- 0,L,- I 33 -
7 10,L,- 0,L,- 0,L,- I 33 -
12345
54321
crtime 55555.666666666
crtime 55556.666666666
== verify quota rules on restored fs
7 14,L,- 0,L,- 0,L,- I 33 -
7 13,L,- 0,L,- 0,L,- I 33 -
7 12,L,- 0,L,- 0,L,- I 33 -
7 11,L,- 0,L,- 0,L,- I 33 -
7 10,L,- 0,L,- 0,L,- I 33 -
7 15,L,- 0,L,- 0,L,- I 33 -
7 14,L,- 0,L,- 0,L,- I 33 -
7 13,L,- 0,L,- 0,L,- I 33 -
7 12,L,- 0,L,- 0,L,- I 33 -
7 11,L,- 0,L,- 0,L,- I 33 -
7 10,L,- 0,L,- 0,L,- I 33 -
Type Used
MetaData 34698
Data 64
== umount restored fs and check
== cleanup

View File

@@ -0,0 +1,882 @@
Ran:
generic/001
generic/002
generic/004
generic/005
generic/006
generic/007
generic/008
generic/009
generic/011
generic/012
generic/013
generic/014
generic/015
generic/016
generic/018
generic/020
generic/021
generic/022
generic/023
generic/024
generic/025
generic/026
generic/028
generic/029
generic/030
generic/031
generic/032
generic/033
generic/034
generic/035
generic/037
generic/039
generic/040
generic/041
generic/050
generic/052
generic/053
generic/056
generic/057
generic/058
generic/059
generic/060
generic/061
generic/062
generic/063
generic/064
generic/065
generic/066
generic/067
generic/069
generic/070
generic/071
generic/073
generic/076
generic/078
generic/079
generic/080
generic/081
generic/082
generic/084
generic/086
generic/087
generic/088
generic/090
generic/091
generic/092
generic/094
generic/096
generic/097
generic/098
generic/099
generic/101
generic/104
generic/105
generic/106
generic/107
generic/110
generic/111
generic/113
generic/114
generic/115
generic/116
generic/117
generic/118
generic/119
generic/120
generic/121
generic/122
generic/123
generic/124
generic/126
generic/128
generic/129
generic/130
generic/131
generic/134
generic/135
generic/136
generic/138
generic/139
generic/140
generic/141
generic/142
generic/143
generic/144
generic/145
generic/146
generic/147
generic/148
generic/149
generic/150
generic/151
generic/152
generic/153
generic/154
generic/155
generic/156
generic/157
generic/158
generic/159
generic/160
generic/161
generic/162
generic/163
generic/169
generic/171
generic/172
generic/173
generic/174
generic/177
generic/178
generic/179
generic/180
generic/181
generic/182
generic/183
generic/184
generic/185
generic/188
generic/189
generic/190
generic/191
generic/193
generic/194
generic/195
generic/196
generic/197
generic/198
generic/199
generic/200
generic/201
generic/202
generic/203
generic/205
generic/206
generic/207
generic/210
generic/211
generic/212
generic/214
generic/215
generic/216
generic/217
generic/218
generic/219
generic/220
generic/221
generic/222
generic/223
generic/225
generic/227
generic/228
generic/229
generic/230
generic/235
generic/236
generic/237
generic/238
generic/240
generic/244
generic/245
generic/246
generic/247
generic/248
generic/249
generic/250
generic/252
generic/253
generic/254
generic/255
generic/256
generic/257
generic/258
generic/259
generic/260
generic/261
generic/262
generic/263
generic/264
generic/265
generic/266
generic/267
generic/268
generic/271
generic/272
generic/276
generic/277
generic/278
generic/279
generic/281
generic/282
generic/283
generic/284
generic/286
generic/287
generic/288
generic/289
generic/290
generic/291
generic/292
generic/293
generic/294
generic/295
generic/296
generic/301
generic/302
generic/303
generic/304
generic/305
generic/306
generic/307
generic/308
generic/309
generic/312
generic/313
generic/314
generic/315
generic/316
generic/317
generic/319
generic/322
generic/324
generic/325
generic/326
generic/327
generic/328
generic/329
generic/330
generic/331
generic/332
generic/335
generic/336
generic/337
generic/341
generic/342
generic/343
generic/346
generic/348
generic/353
generic/355
generic/358
generic/359
generic/360
generic/361
generic/362
generic/363
generic/364
generic/365
generic/366
generic/367
generic/368
generic/369
generic/370
generic/371
generic/372
generic/373
generic/374
generic/375
generic/376
generic/377
generic/378
generic/379
generic/380
generic/381
generic/382
generic/383
generic/384
generic/385
generic/386
generic/389
generic/391
generic/392
generic/393
generic/394
generic/395
generic/396
generic/397
generic/398
generic/400
generic/401
generic/402
generic/403
generic/404
generic/406
generic/407
generic/408
generic/412
generic/413
generic/414
generic/417
generic/419
generic/420
generic/421
generic/422
generic/424
generic/425
generic/426
generic/427
generic/428
generic/436
generic/437
generic/439
generic/440
generic/443
generic/445
generic/446
generic/448
generic/449
generic/450
generic/451
generic/452
generic/453
generic/454
generic/456
generic/458
generic/460
generic/462
generic/463
generic/465
generic/466
generic/468
generic/469
generic/470
generic/471
generic/474
generic/477
generic/478
generic/479
generic/480
generic/481
generic/483
generic/485
generic/486
generic/487
generic/488
generic/489
generic/490
generic/491
generic/492
generic/498
generic/499
generic/501
generic/502
generic/503
generic/504
generic/505
generic/506
generic/507
generic/508
generic/509
generic/510
generic/511
generic/512
generic/513
generic/514
generic/515
generic/516
generic/517
generic/518
generic/519
generic/520
generic/523
generic/524
generic/525
generic/526
generic/527
generic/528
generic/529
generic/530
generic/531
generic/533
generic/534
generic/535
generic/536
generic/537
generic/538
generic/539
generic/540
generic/541
generic/542
generic/543
generic/544
generic/545
generic/546
generic/547
generic/548
generic/549
generic/550
generic/552
generic/553
generic/555
generic/556
generic/557
generic/566
generic/567
generic/571
generic/572
generic/573
generic/574
generic/575
generic/576
generic/577
generic/578
generic/580
generic/581
generic/582
generic/583
generic/584
generic/586
generic/587
generic/588
generic/591
generic/592
generic/593
generic/594
generic/595
generic/596
generic/597
generic/598
generic/599
generic/600
generic/601
generic/602
generic/603
generic/604
generic/605
generic/606
generic/607
generic/608
generic/609
generic/610
generic/611
generic/612
generic/613
generic/614
generic/618
generic/621
generic/623
generic/624
generic/625
generic/626
generic/628
generic/629
generic/630
generic/632
generic/634
generic/635
generic/637
generic/638
generic/639
generic/640
generic/644
generic/645
generic/646
generic/647
generic/651
generic/652
generic/653
generic/654
generic/655
generic/657
generic/658
generic/659
generic/660
generic/661
generic/662
generic/663
generic/664
generic/665
generic/666
generic/667
generic/668
generic/669
generic/673
generic/674
generic/675
generic/676
generic/677
generic/678
generic/679
generic/680
generic/681
generic/682
generic/683
generic/684
generic/685
generic/686
generic/687
generic/688
generic/689
shared/002
shared/032
Not run:
generic/008
generic/009
generic/012
generic/015
generic/016
generic/018
generic/021
generic/022
generic/025
generic/026
generic/031
generic/033
generic/050
generic/052
generic/058
generic/059
generic/060
generic/061
generic/063
generic/064
generic/078
generic/079
generic/081
generic/082
generic/091
generic/094
generic/096
generic/110
generic/111
generic/113
generic/114
generic/115
generic/116
generic/118
generic/119
generic/121
generic/122
generic/123
generic/128
generic/130
generic/134
generic/135
generic/136
generic/138
generic/139
generic/140
generic/142
generic/143
generic/144
generic/145
generic/146
generic/147
generic/148
generic/149
generic/150
generic/151
generic/152
generic/153
generic/154
generic/155
generic/156
generic/157
generic/158
generic/159
generic/160
generic/161
generic/162
generic/163
generic/171
generic/172
generic/173
generic/174
generic/177
generic/178
generic/179
generic/180
generic/181
generic/182
generic/183
generic/185
generic/188
generic/189
generic/190
generic/191
generic/193
generic/194
generic/195
generic/196
generic/197
generic/198
generic/199
generic/200
generic/201
generic/202
generic/203
generic/205
generic/206
generic/207
generic/210
generic/211
generic/212
generic/214
generic/216
generic/217
generic/218
generic/219
generic/220
generic/222
generic/223
generic/225
generic/227
generic/229
generic/230
generic/235
generic/238
generic/240
generic/244
generic/250
generic/252
generic/253
generic/254
generic/255
generic/256
generic/259
generic/260
generic/261
generic/262
generic/263
generic/264
generic/265
generic/266
generic/267
generic/268
generic/271
generic/272
generic/276
generic/277
generic/278
generic/279
generic/281
generic/282
generic/283
generic/284
generic/287
generic/288
generic/289
generic/290
generic/291
generic/292
generic/293
generic/295
generic/296
generic/301
generic/302
generic/303
generic/304
generic/305
generic/312
generic/314
generic/316
generic/317
generic/324
generic/326
generic/327
generic/328
generic/329
generic/330
generic/331
generic/332
generic/353
generic/355
generic/358
generic/359
generic/361
generic/362
generic/363
generic/364
generic/365
generic/366
generic/367
generic/368
generic/369
generic/370
generic/371
generic/372
generic/373
generic/374
generic/378
generic/379
generic/380
generic/381
generic/382
generic/383
generic/384
generic/385
generic/386
generic/391
generic/392
generic/395
generic/396
generic/397
generic/398
generic/400
generic/402
generic/404
generic/406
generic/407
generic/408
generic/412
generic/413
generic/414
generic/417
generic/419
generic/420
generic/421
generic/422
generic/424
generic/425
generic/427
generic/439
generic/440
generic/446
generic/449
generic/450
generic/451
generic/453
generic/454
generic/456
generic/458
generic/462
generic/463
generic/465
generic/466
generic/468
generic/469
generic/470
generic/471
generic/474
generic/485
generic/487
generic/488
generic/491
generic/492
generic/499
generic/501
generic/503
generic/505
generic/506
generic/507
generic/508
generic/511
generic/513
generic/514
generic/515
generic/516
generic/517
generic/518
generic/519
generic/520
generic/528
generic/530
generic/536
generic/537
generic/538
generic/539
generic/540
generic/541
generic/542
generic/543
generic/544
generic/545
generic/546
generic/548
generic/549
generic/550
generic/552
generic/553
generic/555
generic/556
generic/566
generic/567
generic/572
generic/573
generic/574
generic/575
generic/576
generic/577
generic/578
generic/580
generic/581
generic/582
generic/583
generic/584
generic/586
generic/587
generic/588
generic/591
generic/592
generic/593
generic/594
generic/595
generic/596
generic/597
generic/598
generic/599
generic/600
generic/601
generic/602
generic/603
generic/605
generic/606
generic/607
generic/608
generic/609
generic/610
generic/612
generic/613
generic/621
generic/623
generic/624
generic/625
generic/626
generic/628
generic/629
generic/630
generic/635
generic/644
generic/645
generic/646
generic/647
generic/651
generic/652
generic/653
generic/654
generic/655
generic/657
generic/658
generic/659
generic/660
generic/661
generic/662
generic/663
generic/664
generic/665
generic/666
generic/667
generic/668
generic/669
generic/673
generic/674
generic/675
generic/677
generic/678
generic/679
generic/680
generic/681
generic/682
generic/683
generic/684
generic/685
generic/686
generic/687
generic/688
generic/689
shared/002
shared/032
Passed all 512 tests

View File

@@ -56,7 +56,6 @@ $(basename $0) options:
| only tests matching will be run. Can be provided multiple
| times
-i | Force removing and inserting the built scoutfs.ko module.
-l <nr> | Loop each test <nr> times while passing, last run counts.
-M <file> | Specify the filesystem's meta data device path that contains
| the file system to be tested. Will be clobbered by -m mkfs.
-m | Run mkfs on the device before mounting and running
@@ -70,7 +69,6 @@ $(basename $0) options:
-r <dir> | Specify the directory in which to store results of
| test runs. The directory will be created if it doesn't
| exist. Previous results will be deleted as each test runs.
-R | shuffle the test order randomly using shuf
-s | Skip git repo checkouts.
-t | Enable trace events that match the given glob argument.
| Multiple options enable multiple globbed events.
@@ -91,8 +89,6 @@ done
# set some T_ defaults
T_TRACE_DUMP="0"
T_TRACE_PRINTK="0"
T_PORT_START="19700"
T_LOOP_ITER="1"
# array declarations to be able to use array ops
declare -a T_TRACE_GLOB
@@ -133,12 +129,6 @@ while true; do
-i)
T_INSMOD="1"
;;
-l)
test -n "$2" || die "-l must have a nr iterations argument"
test "$2" -eq "$2" 2>/dev/null || die "-l <nr> argument must be an integer"
T_LOOP_ITER="$2"
shift
;;
-M)
test -n "$2" || die "-z must have meta device file argument"
T_META_DEVICE="$2"
@@ -174,9 +164,6 @@ while true; do
T_RESULTS="$2"
shift
;;
-R)
T_SHUF="1"
;;
-s)
T_SKIP_CHECKOUT="1"
;;
@@ -274,37 +261,13 @@ for e in T_META_DEVICE T_DATA_DEVICE T_EX_META_DEV T_EX_DATA_DEV T_KMOD T_RESULT
eval $e=\"$(readlink -f "${!e}")\"
done
# try to check ports, though it's not strictly necessary
T_TEST_PORT="$T_PORT_START"
T_SCRATCH_PORT="$((T_PORT_START + 100))"
T_DEV_PORT="$((T_PORT_START + 200))"
read local_start local_end < /proc/sys/net/ipv4/ip_local_port_range
if [ -n "$local_start" -a -n "$local_end" -a "$local_start" -lt "$local_end" ]; then
if [ ! "$T_DEV_PORT" -lt "$local_start" -a ! "$T_TEST_PORT" -gt "$local_end" ]; then
die "listening port range $T_TEST_PORT - $T_DEV_PORT is within local dynamic port range $local_start - $local_end in /proc/sys/net/ipv4/ip_local_port_range"
fi
fi
# permute sequence?
T_SEQUENCE=sequence
if [ -n "$T_SHUF" ]; then
msg "shuffling test order"
shuf sequence -o sequence.shuf
# keep xfstests at the end
if grep -q 'xfstests.sh' sequence.shuf ; then
sed -i '/xfstests.sh/d' sequence.shuf
echo "xfstests.sh" >> sequence.shuf
fi
T_SEQUENCE=sequence.shuf
fi
# include everything by default
test -z "$T_INCLUDE" && T_INCLUDE="-e '.*'"
# (quickly) exclude nothing by default
test -z "$T_EXCLUDE" && T_EXCLUDE="-e '\Zx'"
# eval to strip re ticks but not expand
tests=$(grep -v "^#" $T_SEQUENCE |
tests=$(grep -v "^#" sequence |
eval grep "$T_INCLUDE" | eval grep -v "$T_EXCLUDE")
test -z "$tests" && \
die "no tests found by including $T_INCLUDE and excluding $T_EXCLUDE"
@@ -383,7 +346,7 @@ fi
quo=""
if [ -n "$T_MKFS" ]; then
for i in $(seq -0 $((T_QUORUM - 1))); do
quo="$quo -Q $i,127.0.0.1,$((T_TEST_PORT + i))"
quo="$quo -Q $i,127.0.0.1,$((42000 + i))"
done
msg "making new filesystem with $T_QUORUM quorum members"
@@ -438,30 +401,6 @@ cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/sys/kernel/debug/tracing/buffer_size_kb \
/proc/sys/kernel/ftrace_dump_on_oops
# we can record pids to kill as we exit, we kill in reverse added order
atexit_kill_pids=""
add_atexit_kill_pid()
{
atexit_kill_pids="$1 $atexit_kill_pids"
}
atexit_kill()
{
local pid
# suppress bg function exited messages
exec {ERR}>&2 2>/dev/null
for pid in $atexit_kill_pids; do
if test -e "/proc/$pid/status" ; then
kill "$pid"
wait "$pid"
fi
done
exec 2>&$ERR {ERR}>&-
}
trap atexit_kill EXIT
#
# Build a fenced config that runs scripts out of the repository rather
# than the default system directory
@@ -475,43 +414,26 @@ EOF
export SCOUTFS_FENCED_CONFIG_FILE="$conf"
T_FENCED_LOG="$T_RESULTS/fenced.log"
#
# Run the agent in the background, log its output, and kill it if we
# exit
#
fenced_log()
{
echo "[$(timestamp)] $*" >> "$T_FENCED_LOG"
}
fenced_pid=""
kill_fenced()
{
if test -n "$fenced_pid" -a -d "/proc/$fenced_pid" ; then
fenced_log "killing fenced pid $fenced_pid"
kill "$fenced_pid"
fi
}
trap kill_fenced EXIT
$T_UTILS/fenced/scoutfs-fenced > "$T_FENCED_LOG" 2>&1 &
fenced_pid=$!
add_atexit_kill_pid $fenced_pid
#
# some critical failures will cause fs operations to hang. We can watch
# for evidence of them and cause the system to crash, at least.
#
crash_monitor()
{
local bad=0
while sleep 1; do
if dmesg | grep -q "inserting extent.*overlaps existing"; then
echo "run-tests monitor saw overlapping extent message"
bad=1
fi
if dmesg | grep -q "error indicated by fence action" ; then
echo "run-tests monitor saw fence agent error message"
bad=1
fi
if [ ! -e "/proc/${fenced_pid}/status" ]; then
echo "run-tests monitor didn't see fenced pid $fenced_pid /proc dir"
bad=1
fi
if [ "$bad" != 0 ]; then
echo "run-tests monitor triggering crash"
echo c > /proc/sysrq-trigger
exit 1
fi
done
}
crash_monitor &
add_atexit_kill_pid $!
fenced_log "started fenced pid $fenced_pid in the background"
# setup dm tables
echo "0 $(blockdev --getsz $T_META_DEVICE) linear $T_META_DEVICE 0" > \
@@ -584,7 +506,7 @@ fi
. funcs/filter.sh
# give tests access to built binaries in src/, prefer over installed
export PATH="$PWD/src:$PATH"
PATH="$PWD/src:$PATH"
msg "running tests"
> "$T_RESULTS/skip.log"
@@ -604,110 +526,101 @@ for t in $tests; do
t="tests/$t"
test_name=$(basename "$t" | sed -e 's/.sh$//')
# create a temporary dir and file path for the test
T_TMPDIR="$T_RESULTS/tmp/$test_name"
T_TMP="$T_TMPDIR/tmp"
cmd rm -rf "$T_TMPDIR"
cmd mkdir -p "$T_TMPDIR"
# create a test name dir in the fs, clean up old data as needed
T_DS=""
for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
dir="${T_M[$i]}/test/$test_name"
test $i == 0 && (
test -d "$dir" && cmd rm -rf "$dir"
cmd mkdir -p "$dir"
)
eval T_D$i=$dir
T_D[$i]=$dir
T_DS+="$dir "
done
# export all our T_ variables
for v in ${!T_*}; do
eval export $v
done
export PATH # give test access to scoutfs binary
# prepare to compare output to golden output
test -e "$T_RESULTS/output" || cmd mkdir -p "$T_RESULTS/output"
out="$T_RESULTS/output/$test_name"
> "$T_TMPDIR/status.msg"
golden="golden/$test_name"
# get stats from previous pass
last="$T_RESULTS/last-passed-test-stats"
stats=$(grep -s "^$test_name " "$last" | cut -d " " -f 2-)
test -n "$stats" && stats="last: $stats"
printf " %-30s $stats" "$test_name"
# mark in dmesg as to what test we are running
echo "run scoutfs test $test_name" > /dev/kmsg
# let the test get at its extra files
T_EXTRA="$T_TESTS/extra/$test_name"
# record dmesg before
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.before"
for iter in $(seq 1 $T_LOOP_ITER); do
# give tests stdout and compared output on specific fds
exec 6>&1
exec 7>$out
# create a temporary dir and file path for the test
T_TMPDIR="$T_RESULTS/tmp/$test_name"
T_TMP="$T_TMPDIR/tmp"
cmd rm -rf "$T_TMPDIR"
cmd mkdir -p "$T_TMPDIR"
# run the test with access to our functions
start_secs=$SECONDS
bash -c "for f in funcs/*.sh; do . \$f; done; . $t" >&7 2>&1
sts="$?"
log "test $t exited with status $sts"
stats="$((SECONDS - start_secs))s"
# create a test name dir in the fs, clean up old data as needed
T_DS=""
for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
dir="${T_M[$i]}/test/$test_name"
# close our weird descriptors
exec 6>&-
exec 7>&-
test $i == 0 && (
test -d "$dir" && cmd rm -rf "$dir"
cmd mkdir -p "$dir"
)
eval T_D$i=$dir
T_D[$i]=$dir
T_DS+="$dir "
done
# export all our T_ variables
for v in ${!T_*}; do
eval export $v
done
# prepare to compare output to golden output
test -e "$T_RESULTS/output" || cmd mkdir -p "$T_RESULTS/output"
out="$T_RESULTS/output/$test_name"
> "$T_TMPDIR/status.msg"
golden="golden/$test_name"
# record dmesg before
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.before"
# give tests stdout and compared output on specific fds
exec 6>&1
exec 7>$out
# run the test with access to our functions
start_secs=$SECONDS
bash -c "for f in funcs/*.sh; do . \$f; done; . $t" >&7 2>&1
sts="$?"
log "test $t exited with status $sts"
stats="$((SECONDS - start_secs))s"
# close our weird descriptors
exec 6>&-
exec 7>&-
# compare output if the test returned passed status
if [ "$sts" == "$T_PASS_STATUS" ]; then
if [ ! -e "$golden" ]; then
message="no golden output"
sts=$T_FAIL_STATUS
elif ! cmp -s "$golden" "$out"; then
message="output differs"
sts=$T_FAIL_STATUS
diff -u "$golden" "$out" >> "$T_RESULTS/fail.log"
fi
else
# get message from t_*() functions
message=$(cat "$T_TMPDIR/status.msg")
fi
# see if anything unexpected was added to dmesg
if [ "$sts" == "$T_PASS_STATUS" ]; then
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.after"
diff --old-line-format="" --unchanged-line-format="" \
"$T_TMPDIR/dmesg.before" "$T_TMPDIR/dmesg.after" > \
"$T_TMPDIR/dmesg.new"
if [ -s "$T_TMPDIR/dmesg.new" ]; then
message="unexpected messages in dmesg"
sts=$T_FAIL_STATUS
cat "$T_TMPDIR/dmesg.new" >> "$T_RESULTS/fail.log"
fi
fi
# record unknown exit status
if [ "$sts" -lt "$T_FIRST_STATUS" -o "$sts" -gt "$T_LAST_STATUS" ]; then
message="unknown status: $sts"
# compare output if the test returned passed status
if [ "$sts" == "$T_PASS_STATUS" ]; then
if [ ! -e "$golden" ]; then
message="no golden output"
sts=$T_FAIL_STATUS
elif ! cmp -s "$golden" "$out"; then
message="output differs"
sts=$T_FAIL_STATUS
diff -u "$golden" "$out" >> "$T_RESULTS/fail.log"
fi
else
# get message from t_*() functions
message=$(cat "$T_TMPDIR/status.msg")
fi
# stop looping if we didn't pass
if [ "$sts" != "$T_PASS_STATUS" ]; then
break;
# see if anything unexpected was added to dmesg
if [ "$sts" == "$T_PASS_STATUS" ]; then
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.after"
diff --old-line-format="" --unchanged-line-format="" \
"$T_TMPDIR/dmesg.before" "$T_TMPDIR/dmesg.after" > \
"$T_TMPDIR/dmesg.new"
if [ -s "$T_TMPDIR/dmesg.new" ]; then
message="unexpected messages in dmesg"
sts=$T_FAIL_STATUS
cat "$T_TMPDIR/dmesg.new" >> "$T_RESULTS/fail.log"
fi
done
fi
# record unknown exit status
if [ "$sts" -lt "$T_FIRST_STATUS" -o "$sts" -gt "$T_LAST_STATUS" ]; then
message="unknown status: $sts"
sts=$T_FAIL_STATUS
fi
# show and record the result of the test
if [ "$sts" == "$T_PASS_STATUS" ]; then

View File

@@ -57,4 +57,6 @@ archive-light-cycle.sh
block-stale-reads.sh
inode-deletion.sh
renameat2-noreplace.sh
parallel_restore.sh
restore_copy.sh
xfstests.sh

View File

@@ -19,7 +19,6 @@
#include <sys/types.h>
#include <stdio.h>
#include <sys/stat.h>
#include <inttypes.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
@@ -30,7 +29,7 @@
#include <errno.h>
static int size = 0;
static int duration = 0;
static int count = 0; /* XXX make this duration instead */
struct thread_info {
int nr;
@@ -42,8 +41,6 @@ static void *run_test_func(void *ptr)
void *buf = NULL;
char *addr = NULL;
struct thread_info *tinfo = ptr;
uint64_t seconds = 0;
struct timespec ts;
int c = 0;
int fd;
ssize_t read, written, ret;
@@ -64,15 +61,9 @@ static void *run_test_func(void *ptr)
usleep(100000); /* 0.1sec to allow all threads to start roughly at the same time */
clock_gettime(CLOCK_REALTIME, &ts); /* record start time */
seconds = ts.tv_sec + duration;
for (;;) {
if (++c % 16 == 0) {
clock_gettime(CLOCK_REALTIME, &ts);
if (ts.tv_sec >= seconds)
break;
}
if (++c > count)
break;
switch (rand() % 4) {
case 0: /* pread */
@@ -108,8 +99,6 @@ static void *run_test_func(void *ptr)
memcpy(addr, buf, size); /* noerr */
break;
}
usleep(10000);
}
munmap(addr, size);
@@ -131,7 +120,7 @@ int main(int argc, char **argv)
int i;
if (argc != 8) {
fprintf(stderr, "%s requires 7 arguments - size duration file1 file2 file3 file4 file5\n", argv[0]);
fprintf(stderr, "%s requires 7 arguments - size count file1 file2 file3 file4 file5\n", argv[0]);
exit(-1);
}
@@ -141,9 +130,9 @@ int main(int argc, char **argv)
exit(-1);
}
duration = atoi(argv[2]);
if (duration < 0) {
fprintf(stderr, "invalid duration, must be greater than or equal to 0\n");
count = atoi(argv[2]);
if (count < 0) {
fprintf(stderr, "invalid count, must be greater than 0\n");
exit(-1);
}

View File

@@ -0,0 +1,805 @@
#define _GNU_SOURCE /* O_DIRECT */
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/xattr.h>
#include <ctype.h>
#include <string.h>
#include <errno.h>
#include <limits.h>
#include <time.h>
#include <sys/prctl.h>
#include <signal.h>
#include <sys/socket.h>
#include "../../utils/src/sparse.h"
#include "../../utils/src/util.h"
#include "../../utils/src/list.h"
#include "../../utils/src/parse.h"
#include "../../kmod/src/format.h"
#include "../../utils/src/parallel_restore.h"
/*
* XXX:
* - add a nice description of what's going on
* - mention allocator contention
* - test child process dying handling
* - root dir entry name length is wrong
*/
#define ERRF " errno %d (%s)"
#define ERRA errno, strerror(errno)
#define error_exit(cond, fmt, args...) \
do { \
if (cond) { \
printf("error: "fmt"\n", ##args); \
exit(1); \
} \
} while (0)
#define dprintf(fmt, args...) \
do { \
if (0) \
printf(fmt, ##args); \
} while (0)
#define REG_MODE (S_IFREG | 0644)
#define DIR_MODE (S_IFDIR | 0755)
struct opts {
unsigned long long buf_size;
unsigned long long write_batch;
unsigned long long low_dirs;
unsigned long long high_dirs;
unsigned long long low_files;
unsigned long long high_files;
char *meta_path;
unsigned long long total_files;
bool read_only;
unsigned long long seed;
unsigned long long nr_writers;
};
static void usage(void)
{
printf("usage:\n"
" -b NR | threads write blocks in batches files (100000)\n"
" -d LOW:HIGH | range of subdirs per directory (5:10)\n"
" -f LOW:HIGH | range of files per directory (10:20)\n"
" -m PATH | path to metadata device\n"
" -n NR | total number of files to create (100)\n"
" -r | read-only, all work except writing, measure cpu cost\n"
" -s NR | randomization seed (random)\n"
" -w NR | number of writing processes to fork (online cpus)\n"
);
}
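/*
 * Drain the library writer's dirty blocks to the metadata device,
 * pwrite()ing each buffer at the offset the library hands back; in
 * read-only mode we just account for the bytes without writing.
 */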
static size_t write_bufs(struct opts *opts, struct scoutfs_parallel_restore_writer *wri,
void *buf, size_t buf_size, int dev_fd)
{
size_t total = 0;
size_t count;
off_t off;
int ret;
do {
ret = scoutfs_parallel_restore_write_buf(wri, buf, buf_size, &off, &count);
error_exit(ret, "write buf %d", ret);
if (count > 0) {
if (!opts->read_only)
ret = pwrite(dev_fd, buf, count, off);
else
ret = count;
error_exit(ret != count, "pwrite count %zu ret %d", count, ret);
total += ret;
}
} while (count > 0);
return total;
}
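/*
 * A generated inode plus everything hanging off of it: xattrs for
 * regular files and, for directories, the entries we created along
 * with a count of how many of them are files.
 */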
struct gen_inode {
struct scoutfs_parallel_restore_inode inode;
struct scoutfs_parallel_restore_xattr **xattrs;
u64 nr_xattrs;
struct scoutfs_parallel_restore_entry **entries;
u64 nr_files;
u64 nr_entries;
};
static void free_gino(struct gen_inode *gino)
{
u64 i;
if (gino) {
if (gino->entries) {
for (i = 0; i < gino->nr_entries; i++)
free(gino->entries[i]);
free(gino->entries);
}
if (gino->xattrs) {
for (i = 0; i < gino->nr_xattrs; i++)
free(gino->xattrs[i]);
free(gino->xattrs);
}
free(gino);
}
}
static struct scoutfs_parallel_restore_xattr *
generate_xattr(struct opts *opts, u64 ino, u64 pos, char *name, int name_len, void *value,
int value_len)
{
struct scoutfs_parallel_restore_xattr *xattr;
xattr = malloc(sizeof(struct scoutfs_parallel_restore_xattr) + name_len + value_len);
error_exit(!xattr, "error allocating generated xattr");
*xattr = (struct scoutfs_parallel_restore_xattr) {
.ino = ino,
.pos = pos,
.name_len = name_len,
.value_len = value_len,
};
xattr->name = (void *)(xattr + 1);
xattr->value = (void *)(xattr->name + name_len);
memcpy(xattr->name, name, name_len);
if (value_len)
memcpy(xattr->value, value, value_len);
return xattr;
}
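/*
 * Build an in-memory inode stamped with the current time.  Regular
 * files also get the canned set of hidden srch/totl xattrs below, a
 * 4k size, and are marked offline.
 */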
static struct gen_inode *generate_inode(struct opts *opts, u64 ino, mode_t mode)
{
struct gen_inode *gino;
struct timespec now;
clock_gettime(CLOCK_REALTIME, &now);
gino = calloc(1, sizeof(struct gen_inode));
error_exit(!gino, "failure allocating generated inode");
gino->inode = (struct scoutfs_parallel_restore_inode) {
.ino = ino,
.meta_seq = ino,
.data_seq = 0,
.mode = mode,
.atime = now,
.ctime = now,
.mtime = now,
.crtime = now,
};
/*
* hacky creation of a bunch of xattrs for now.
*/
if ((mode & S_IFMT) == S_IFREG) {
#define NV(n, v) { n, sizeof(n) - 1, v, sizeof(v) - 1, }
struct name_val {
char *name;
int len;
char *value;
int value_len;
} nv[] = {
NV("scoutfs.hide.totl.acct.8314611887310466424.2.0", "1"),
NV("scoutfs.hide.srch.sam_vol_E01001L6_4", ""),
NV("scoutfs.hide.sam_reqcopies", ""),
NV("scoutfs.hide.sam_copy_2", ""),
NV("scoutfs.hide.totl.acct.F01030L6.8314611887310466424.7.30", "1"),
NV("scoutfs.hide.sam_copy_1", ""),
NV("scoutfs.hide.srch.sam_vol_F01030L6_4", ""),
NV("scoutfs.hide.srch.sam_release_cand", ""),
NV("scoutfs.hide.sam_restime", ""),
NV("scoutfs.hide.sam_uuid", ""),
NV("scoutfs.hide.totl.acct.8314611887310466424.3.0", "1"),
NV("scoutfs.hide.srch.sam_vol_F01030L6", ""),
NV("scoutfs.hide.srch.sam_uuid_865939b7-24d6-472f-b85c-7ce7afeb813a", ""),
NV("scoutfs.hide.srch.sam_vol_E01001L6", ""),
NV("scoutfs.hide.totl.acct.E01001L6.8314611887310466424.7.1", "1"),
NV("scoutfs.hide.totl.acct.8314611887310466424.4.0", "1"),
NV("scoutfs.hide.totl.acct.8314611887310466424.11.0", "1"),
NV("scoutfs.hide.totl.acct.8314611887310466424.1.0", "1"),
};
unsigned int nr = array_size(nv);
int i;
		gino->xattrs = calloc(nr, sizeof(struct scoutfs_parallel_restore_xattr *));
		error_exit(!gino->xattrs, "error allocating xattr array");
for (i = 0; i < nr; i++)
gino->xattrs[i] = generate_xattr(opts, ino, i, nv[i].name, nv[i].len,
nv[i].value, nv[i].value_len);
gino->nr_xattrs = nr;
gino->inode.nr_xattrs = nr;
gino->inode.size = 4096;
gino->inode.offline = true;
}
return gino;
}
static struct scoutfs_parallel_restore_entry *
generate_entry(struct opts *opts, char *prefix, u64 nr, u64 dir_ino, u64 pos, u64 ino, mode_t mode)
{
struct scoutfs_parallel_restore_entry *entry;
char buf[PATH_MAX];
int bytes;
bytes = snprintf(buf, sizeof(buf), "%s-%llu", prefix, nr);
entry = malloc(sizeof(struct scoutfs_parallel_restore_entry) + bytes);
error_exit(!entry, "error allocating generated entry");
*entry = (struct scoutfs_parallel_restore_entry) {
.dir_ino = dir_ino,
.pos = pos,
.ino = ino,
.mode = mode,
.name = (void *)(entry + 1),
.name_len = bytes,
};
memcpy(entry->name, buf, bytes);
return entry;
}
static u64 random64(void)
{
return ((u64)lrand48() << 32) | lrand48();
}
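/*
 * random64() yields 62 random bits (lrand48() returns 31 bits);
 * random_range() is inclusive [low, high] and its slight modulo bias is
 * fine for test data.
 */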
static u64 random_range(u64 low, u64 high)
{
return low + (random64() % (high - low + 1));
}
static struct gen_inode *generate_dir(struct opts *opts, u64 dir_ino, u64 ino_start, u64 ino_len,
bool no_dirs)
{
struct scoutfs_parallel_restore_entry *entry;
struct gen_inode *gino;
u64 nr_entries;
u64 nr_files;
u64 nr_dirs;
u64 ino;
char *prefix;
mode_t mode;
u64 i;
nr_dirs = no_dirs ? 0 : random_range(opts->low_dirs, opts->high_dirs);
nr_files = random_range(opts->low_files, opts->high_files);
if (1 + nr_dirs + nr_files > ino_len) {
nr_dirs = no_dirs ? 0 : (ino_len - 1) / 2;
nr_files = (ino_len - 1) - nr_dirs;
}
nr_entries = nr_dirs + nr_files;
gino = generate_inode(opts, dir_ino, DIR_MODE);
error_exit(!gino, "error allocating generated inode");
gino->inode.nr_subdirs = nr_dirs;
gino->nr_files = nr_files;
if (nr_entries) {
gino->entries = calloc(nr_entries, sizeof(struct scoutfs_parallel_restore_entry *));
error_exit(!gino->entries, "error allocating generated inode entries");
gino->nr_entries = nr_entries;
}
mode = DIR_MODE;
prefix = "dir";
for (i = 0; i < nr_entries; i++) {
if (i == nr_dirs) {
mode = REG_MODE;
prefix = "file";
}
ino = ino_start + i;
entry = generate_entry(opts, prefix, ino, gino->inode.ino,
SCOUTFS_DIRENT_FIRST_POS + i, ino, mode);
gino->entries[i] = entry;
gino->inode.total_entry_name_bytes += entry->name_len;
}
return gino;
}
/*
* Restore a generated inode. If it's a directory then we also restore
* all its entries. The caller is going to descend into subdir entries and generate
* those dir inodes. We have to generate and restore all non-dir inodes referenced
* by this inode's entries.
*/
static void restore_inode(struct opts *opts, struct scoutfs_parallel_restore_writer *wri,
struct gen_inode *gino)
{
struct gen_inode *nondir;
int ret;
u64 i;
ret = scoutfs_parallel_restore_add_inode(wri, &gino->inode);
error_exit(ret, "thread add root inode %d", ret);
for (i = 0; i < gino->nr_entries; i++) {
ret = scoutfs_parallel_restore_add_entry(wri, gino->entries[i]);
error_exit(ret, "thread add entry %d", ret);
/* caller only needs subdir entries, generate and free others */
if ((gino->entries[i]->mode & S_IFMT) != S_IFDIR) {
nondir = generate_inode(opts, gino->entries[i]->ino,
gino->entries[i]->mode);
restore_inode(opts, wri, nondir);
free_gino(nondir);
free(gino->entries[i]);
if (i != gino->nr_entries - 1)
gino->entries[i] = gino->entries[gino->nr_entries - 1];
gino->nr_entries--;
gino->nr_files--;
i--;
}
}
for (i = 0; i < gino->nr_xattrs; i++) {
ret = scoutfs_parallel_restore_add_xattr(wri, gino->xattrs[i]);
error_exit(ret, "thread add xattr %d", ret);
}
}
struct writer_args {
struct list_head head;
int dev_fd;
int pair_fd;
struct scoutfs_parallel_restore_slice slice;
u64 writer_nr;
u64 dir_height;
u64 ino_start;
u64 ino_len;
};
struct write_result {
struct scoutfs_parallel_restore_progress prog;
struct scoutfs_parallel_restore_slice slice;
__le64 files_created;
__le64 bytes_written;
};
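/*
 * Writers stream fixed-size write_result records to the parent over a
 * socketpair.  Intermediate records carry progress only; the final
 * record from a writer also carries its remaining slice, which the
 * parent detects by a non-zero slice.meta_len.
 */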
static void write_bufs_and_send(struct opts *opts, struct scoutfs_parallel_restore_writer *wri,
void *buf, size_t buf_size, int dev_fd,
struct write_result *res, bool get_slice, int pair_fd)
{
size_t total;
int ret;
total = write_bufs(opts, wri, buf, buf_size, dev_fd);
le64_add_cpu(&res->bytes_written, total);
ret = scoutfs_parallel_restore_get_progress(wri, &res->prog);
error_exit(ret, "get prog %d", ret);
if (get_slice) {
ret = scoutfs_parallel_restore_get_slice(wri, &res->slice);
error_exit(ret, "thread get slice %d", ret);
}
ret = write(pair_fd, res, sizeof(struct write_result));
error_exit(ret != sizeof(struct write_result), "result send error");
memset(res, 0, sizeof(struct write_result));
}
/*
 * Calculate the number of bytes in the toplevel "top-%llu" entry names
 * for the given number of writers.  For example, 12 writers create
 * "top-0" through "top-11": 12 * 4 bytes of "top-" plus ten one-digit
 * and two two-digit suffixes, for 48 + 14 = 62 bytes.
 */
static u64 topdir_entry_bytes(u64 nr_writers)
{
u64 bytes = (3 + 1) * nr_writers;
u64 limit;
u64 done;
u64 wid;
u64 nr;
for (done = 0, wid = 1, limit = 10; done < nr_writers; done += nr, wid++, limit *= 10) {
nr = min(limit - done, nr_writers - done);
bytes += nr * wid;
}
return bytes;
}
struct dir_pos {
struct gen_inode *gino;
u64 pos;
};
static void writer_proc(struct opts *opts, struct writer_args *args)
{
struct scoutfs_parallel_restore_writer *wri = NULL;
struct scoutfs_parallel_restore_entry *entry;
struct dir_pos *dirs = NULL;
struct write_result res;
struct gen_inode *gino;
void *buf = NULL;
u64 level;
u64 ino;
int ret;
memset(&res, 0, sizeof(res));
	dirs = calloc(args->dir_height, sizeof(struct dir_pos));
	error_exit(!dirs, "error allocating parent dirs "ERRF, ERRA);
errno = posix_memalign((void **)&buf, 4096, opts->buf_size);
error_exit(errno, "error allocating block buf "ERRF, ERRA);
ret = scoutfs_parallel_restore_create_writer(&wri);
error_exit(ret, "create writer %d", ret);
ret = scoutfs_parallel_restore_add_slice(wri, &args->slice);
error_exit(ret, "add slice %d", ret);
/* writer 0 creates the root dir */
if (args->writer_nr == 0) {
gino = generate_inode(opts, SCOUTFS_ROOT_INO, DIR_MODE);
gino->inode.nr_subdirs = opts->nr_writers;
gino->inode.total_entry_name_bytes = topdir_entry_bytes(opts->nr_writers);
ret = scoutfs_parallel_restore_add_inode(wri, &gino->inode);
error_exit(ret, "thread add root inode %d", ret);
free_gino(gino);
}
/* create root entry for our top level dir */
ino = args->ino_start++;
args->ino_len--;
entry = generate_entry(opts, "top", args->writer_nr,
SCOUTFS_ROOT_INO, SCOUTFS_DIRENT_FIRST_POS + args->writer_nr,
ino, DIR_MODE);
ret = scoutfs_parallel_restore_add_entry(wri, entry);
error_exit(ret, "thread top entry %d", ret);
free(entry);
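	/*
	 * Iterative depth-first walk.  level counts from the leaves (0)
	 * up to our top level dir (dir_height - 1); descending into a
	 * subdir entry decrements it, and ascending past the top level
	 * ends the loop, as does running out of inode numbers.
	 */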
level = args->dir_height - 1;
while (args->ino_len > 0 && level < args->dir_height) {
gino = dirs[level].gino;
/* generate and restore if we follow entries */
if (!gino) {
gino = generate_dir(opts, ino, args->ino_start, args->ino_len, level == 0);
args->ino_start += gino->nr_entries;
args->ino_len -= gino->nr_entries;
le64_add_cpu(&res.files_created, gino->nr_files);
restore_inode(opts, wri, gino);
dirs[level].gino = gino;
}
if (dirs[level].pos == gino->nr_entries) {
/* ascend if we're done with this dir */
dirs[level].gino = NULL;
dirs[level].pos = 0;
free_gino(gino);
level++;
} else {
/* otherwise descend into subdir entry */
ino = gino->entries[dirs[level].pos]->ino;
dirs[level].pos++;
level--;
}
/* do a partial write at batch intervals when there's still more to do */
if (le64_to_cpu(res.files_created) >= opts->write_batch && args->ino_len > 0)
write_bufs_and_send(opts, wri, buf, opts->buf_size, args->dev_fd,
&res, false, args->pair_fd);
}
write_bufs_and_send(opts, wri, buf, opts->buf_size, args->dev_fd,
&res, true, args->pair_fd);
scoutfs_parallel_restore_destroy_writer(&wri);
free(dirs);
free(buf);
}
/*
* If any of our children exited with an error code, we hard exit.
* The child processes should themselves report out any errors
* encountered. Any remaining children will receive SIGHUP and
* terminate.
*/
static void sigchld_handler(int signo, siginfo_t *info, void *context)
{
if (info->si_status)
exit(EXIT_FAILURE);
}
static void fork_writer(struct opts *opts, struct writer_args *args)
{
pid_t parent = getpid();
pid_t pid;
int ret;
pid = fork();
error_exit(pid == -1, "fork error");
if (pid != 0)
return;
ret = prctl(PR_SET_PDEATHSIG, SIGHUP);
error_exit(ret < 0, "failed to set parent death sig");
printf("pid %u getpid() %u parent %u getppid() %u\n",
pid, getpid(), parent, getppid());
error_exit(getppid() != parent, "child parent already changed");
writer_proc(opts, args);
exit(0);
}
static int do_restore(struct opts *opts)
{
struct scoutfs_parallel_restore_writer *wri = NULL;
struct scoutfs_parallel_restore_slice *slices = NULL;
struct scoutfs_super_block *super = NULL;
struct write_result res;
struct writer_args *args;
struct timespec begin;
struct timespec end;
LIST_HEAD(writers);
u64 next_ino;
u64 ino_per;
u64 avg_dirs;
u64 avg_files;
u64 dir_height;
u64 tot_files;
u64 tot_bytes;
int pair[2] = {-1, -1};
float secs;
void *buf = NULL;
int dev_fd = -1;
int ret;
int i;
ret = socketpair(PF_LOCAL, SOCK_STREAM, 0, pair);
error_exit(ret, "socketpair error "ERRF, ERRA);
dev_fd = open(opts->meta_path, O_DIRECT | (opts->read_only ? O_RDONLY : (O_RDWR|O_EXCL)));
error_exit(dev_fd < 0, "error opening '%s': "ERRF, opts->meta_path, ERRA);
errno = posix_memalign((void **)&super, 4096, SCOUTFS_BLOCK_SM_SIZE) ?:
posix_memalign((void **)&buf, 4096, opts->buf_size);
error_exit(errno, "error allocating block bufs "ERRF, ERRA);
ret = pread(dev_fd, super, SCOUTFS_BLOCK_SM_SIZE,
SCOUTFS_SUPER_BLKNO << SCOUTFS_BLOCK_SM_SHIFT);
error_exit(ret != SCOUTFS_BLOCK_SM_SIZE, "error reading super, ret %d", ret);
ret = scoutfs_parallel_restore_create_writer(&wri);
error_exit(ret, "create writer %d", ret);
ret = scoutfs_parallel_restore_import_super(wri, super, dev_fd);
error_exit(ret, "import super %d", ret);
slices = calloc(1 + opts->nr_writers, sizeof(struct scoutfs_parallel_restore_slice));
error_exit(!slices, "alloc slices");
scoutfs_parallel_restore_init_slices(wri, slices, 1 + opts->nr_writers);
ret = scoutfs_parallel_restore_add_slice(wri, &slices[0]);
error_exit(ret, "add slices[0] %d", ret);
next_ino = (SCOUTFS_ROOT_INO | SCOUTFS_LOCK_INODE_GROUP_MASK) + 1;
ino_per = opts->total_files / opts->nr_writers;
avg_dirs = (opts->low_dirs + opts->high_dirs) / 2;
avg_files = (opts->low_files + opts->high_files) / 2;
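	/*
	 * Estimate the tree height needed for total_files: each writer's
	 * top dir holds ~avg_files files and every added level of subdirs
	 * multiplies capacity by avg_dirs.
	 */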
dir_height = 1;
tot_files = avg_files * opts->nr_writers;
while (tot_files < opts->total_files) {
dir_height++;
tot_files *= avg_dirs;
}
dprintf("height %llu tot %llu total %llu\n", dir_height, tot_files, opts->total_files);
clock_gettime(CLOCK_MONOTONIC_RAW, &begin);
/* start each writing process */
for (i = 0; i < opts->nr_writers; i++) {
args = calloc(1, sizeof(struct writer_args));
error_exit(!args, "alloc writer args");
args->dev_fd = dev_fd;
args->pair_fd = pair[1];
args->slice = slices[1 + i];
args->writer_nr = i;
args->dir_height = dir_height;
args->ino_start = next_ino;
args->ino_len = ino_per;
list_add_tail(&args->head, &writers);
next_ino += ino_per;
fork_writer(opts, args);
}
/* read results and watch for writers to finish */
tot_files = 0;
tot_bytes = 0;
i = 0;
while (i < opts->nr_writers) {
ret = read(pair[0], &res, sizeof(struct write_result));
error_exit(ret != sizeof(struct write_result), "result read error %d", ret);
ret = scoutfs_parallel_restore_add_progress(wri, &res.prog);
error_exit(ret, "add thr prog %d", ret);
if (res.slice.meta_len != 0) {
ret = scoutfs_parallel_restore_add_slice(wri, &res.slice);
error_exit(ret, "add thr slice %d", ret);
i++;
}
tot_files += le64_to_cpu(res.files_created);
tot_bytes += le64_to_cpu(res.bytes_written);
}
tot_bytes += write_bufs(opts, wri, buf, opts->buf_size, dev_fd);
ret = scoutfs_parallel_restore_export_super(wri, super);
error_exit(ret, "update super %d", ret);
if (!opts->read_only) {
ret = pwrite(dev_fd, super, SCOUTFS_BLOCK_SM_SIZE,
SCOUTFS_SUPER_BLKNO << SCOUTFS_BLOCK_SM_SHIFT);
error_exit(ret != SCOUTFS_BLOCK_SM_SIZE, "error writing super, ret %d", ret);
}
clock_gettime(CLOCK_MONOTONIC_RAW, &end);
scoutfs_parallel_restore_destroy_writer(&wri);
secs = ((float)end.tv_sec + ((float)end.tv_nsec/NSEC_PER_SEC)) -
((float)begin.tv_sec + ((float)begin.tv_nsec/NSEC_PER_SEC));
printf("created %llu files in %llu bytes and %f secs => %f bytes/file, %f files/sec\n",
tot_files, tot_bytes, secs,
(float)tot_bytes / tot_files, (float)tot_files / secs);
if (dev_fd >= 0)
close(dev_fd);
if (pair[0] >= 0)
close(pair[0]);
if (pair[1] >= 0)
close(pair[1]);
free(super);
free(slices);
free(buf);
return 0;
}
static int parse_low_high(char *str, u64 *low_ret, u64 *high_ret)
{
char *sep;
int ret = 0;
	sep = strchr(str, ':');
if (sep) {
*sep = '\0';
ret = parse_u64(sep + 1, high_ret);
}
if (ret == 0)
ret = parse_u64(str, low_ret);
if (sep)
*sep = ':';
return ret;
}
int main(int argc, char **argv)
{
struct opts opts = {
.buf_size = (32 * 1024 * 1024),
.write_batch = 1000000,
.low_dirs = 5,
.high_dirs = 10,
.low_files = 10,
.high_files = 20,
.total_files = 100,
};
struct sigaction act = { 0 };
int ret;
int c;
opts.seed = random64();
opts.nr_writers = sysconf(_SC_NPROCESSORS_ONLN);
while ((c = getopt(argc, argv, "b:d:f:m:n:rs:w:")) != -1) {
switch(c) {
case 'b':
ret = parse_u64(optarg, &opts.write_batch);
error_exit(ret, "error parsing -b '%s'\n", optarg);
error_exit(opts.write_batch == 0, "-b can't be 0");
break;
case 'd':
ret = parse_low_high(optarg, &opts.low_dirs, &opts.high_dirs);
error_exit(ret, "error parsing -d '%s'\n", optarg);
break;
case 'f':
ret = parse_low_high(optarg, &opts.low_files, &opts.high_files);
error_exit(ret, "error parsing -f '%s'\n", optarg);
break;
case 'm':
opts.meta_path = strdup(optarg);
break;
case 'n':
ret = parse_u64(optarg, &opts.total_files);
error_exit(ret, "error parsing -n '%s'\n", optarg);
break;
case 'r':
opts.read_only = true;
break;
case 's':
ret = parse_u64(optarg, &opts.seed);
error_exit(ret, "error parsing -s '%s'\n", optarg);
break;
case 'w':
ret = parse_u64(optarg, &opts.nr_writers);
error_exit(ret, "error parsing -w '%s'\n", optarg);
break;
case '?':
printf("Unknown option '%c'\n", optopt);
usage();
exit(1);
}
}
error_exit(opts.low_dirs > opts.high_dirs, "LOW > HIGH in -d %llu:%llu",
opts.low_dirs, opts.high_dirs);
error_exit(opts.low_files > opts.high_files, "LOW > HIGH in -f %llu:%llu",
opts.low_files, opts.high_files);
error_exit(!opts.meta_path, "must specify metadata device path with -m");
printf("recreate with: -d %llu:%llu -f %llu:%llu -n %llu -s %llu -w %llu\n",
opts.low_dirs, opts.high_dirs, opts.low_files, opts.high_files,
opts.total_files, opts.seed, opts.nr_writers);
act.sa_flags = SA_SIGINFO | SA_RESTART;
act.sa_sigaction = &sigchld_handler;
	error_exit(sigaction(SIGCHLD, &act, NULL) == -1,
		   "error setting up signal handler "ERRF, ERRA);
ret = do_restore(&opts);
free(opts.meta_path);
return ret == 0 ? 0 : 1;
}

963
tests/src/restore_copy.c Normal file
View File

@@ -0,0 +1,963 @@
#define _GNU_SOURCE /* O_DIRECT */
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/xattr.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <ctype.h>
#include <string.h>
#include <errno.h>
#include <limits.h>
#include <time.h>
#include <sys/prctl.h>
#include <sys/socket.h>
#include <sys/signal.h>
#include <sys/statfs.h>
#include <dirent.h>
#include "../../utils/src/sparse.h"
#include "../../utils/src/util.h"
#include "../../utils/src/list.h"
#include "../../utils/src/parse.h"
#include "../../kmod/src/format.h"
#include "../../kmod/src/ioctl.h"
#include "../../utils/src/parallel_restore.h"
/*
* XXX:
*/
#define ERRF " errno %d (%s)"
#define ERRA errno, strerror(errno)
#define error_exit(cond, fmt, args...) \
do { \
if (cond) { \
printf("error: "fmt"\n", ##args); \
exit(1); \
} \
} while (0)
#define REG_MODE (S_IFREG | 0644)
#define DIR_MODE (S_IFDIR | 0755)
#define LNK_MODE (S_IFLNK | 0777)
/*
* At about 1k files we seem to be writing about 1MB of data, so
* set buffer sizes adequately above that.
*/
#define BATCH_FILES 1024
#define BUF_SIZ (2 * 1024 * 1024)
/*
 * We can't make duplicate inodes for hardlinked files, so we need to
 * track them as we generate them.  Not too costly to do, since each
 * record is just an integer, and the linear search shouldn't matter
 * until we get into the millions of entries, hopefully.
 */
static struct list_head hardlinks;
struct hardlink_head {
struct list_head head;
u64 ino;
};
struct opts {
char *meta_path;
char *source_dir;
};
static bool warn_scoutfs = false;
static void usage(void)
{
printf("usage:\n"
" -m PATH | path to metadata device\n"
" -s PATH | path to source directory\n"
);
}
static size_t write_bufs(struct scoutfs_parallel_restore_writer *wri,
void *buf, int dev_fd)
{
size_t total = 0;
size_t count;
off_t off;
int ret;
do {
ret = scoutfs_parallel_restore_write_buf(wri, buf, BUF_SIZ, &off, &count);
error_exit(ret, "write buf %d", ret);
if (count > 0) {
ret = pwrite(dev_fd, buf, count, off);
error_exit(ret != count, "pwrite count %zu ret %d", count, ret);
total += ret;
}
} while (count > 0);
return total;
}
struct write_result {
struct scoutfs_parallel_restore_progress prog;
struct scoutfs_parallel_restore_slice slice;
__le64 files_created;
__le64 dirs_created;
__le64 bytes_written;
bool complete;
};
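/*
 * The writer child streams these records to the parent over a
 * socketpair; the final record carries the child's remaining slice and
 * has complete set so the parent knows to stop reading.
 */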
static void write_bufs_and_send(struct scoutfs_parallel_restore_writer *wri,
void *buf, int dev_fd,
struct write_result *res, bool get_slice, int pair_fd)
{
size_t total;
int ret;
total = write_bufs(wri, buf, dev_fd);
le64_add_cpu(&res->bytes_written, total);
ret = scoutfs_parallel_restore_get_progress(wri, &res->prog);
error_exit(ret, "get prog %d", ret);
if (get_slice) {
ret = scoutfs_parallel_restore_get_slice(wri, &res->slice);
error_exit(ret, "thread get slice %d", ret);
}
ret = write(pair_fd, res, sizeof(struct write_result));
error_exit(ret != sizeof(struct write_result), "result send error");
memset(res, 0, sizeof(struct write_result));
}
/*
* Adding xattrs is supported for files and directories only.
*
* If the filesystem on which the path resides isn't scoutfs, we omit the
* scoutfs specific ioctl to fetch hidden xattrs.
*
 * It is untested whether the hidden xattr ioctl works on directories or symlinks.
*/
static void add_xattrs(struct scoutfs_parallel_restore_writer *wri, char *path, u64 ino, bool is_scoutfs)
{
struct scoutfs_ioctl_listxattr_hidden lxh;
struct scoutfs_parallel_restore_xattr *xattr;
char *buf = NULL;
char *name = NULL;
int fd = -1;
int bytes;
int len;
int value_len;
int ret;
int pos = 0;
if (!is_scoutfs)
goto normal_xattrs;
fd = open(path, O_RDONLY);
error_exit(fd < 0, "open"ERRF, ERRA);
memset(&lxh, 0, sizeof(lxh));
lxh.id_pos = 0;
lxh.hash_pos = 0;
lxh.buf_bytes = 256 * 1024;
buf = malloc(lxh.buf_bytes);
error_exit(!buf, "alloc xattr_hidden buf");
lxh.buf_ptr = (unsigned long)buf;
	/* iterate over scoutfs hidden xattrs: the ioctl packs nul-terminated
	 * names back to back and returns the byte count, 0 when done */
for (;;) {
ret = ioctl(fd, SCOUTFS_IOC_LISTXATTR_HIDDEN, &lxh);
if (ret == 0) /* done */
break;
error_exit(ret < 0, "listxattr_hidden"ERRF, ERRA);
bytes = ret;
error_exit(bytes > lxh.buf_bytes, "listxattr_hidden overflow");
error_exit(buf[bytes - 1] != '\0', "listxattr_hidden didn't term");
name = buf;
do {
len = strlen(name);
error_exit(len == 0, "listxattr_hidden empty name");
error_exit(len > SCOUTFS_XATTR_MAX_NAME_LEN, "listxattr_hidden long name");
/* get value len */
value_len = fgetxattr(fd, name, NULL, 0);
error_exit(value_len < 0, "malloc value hidden"ERRF, ERRA);
/* allocate everything at once */
xattr = malloc(sizeof(struct scoutfs_parallel_restore_xattr) + len + value_len);
error_exit(!xattr, "error allocating generated xattr");
*xattr = (struct scoutfs_parallel_restore_xattr) {
.ino = ino,
.pos = pos++,
.name_len = len,
.value_len = value_len,
};
xattr->name = (void *)(xattr + 1);
xattr->value = (void *)(xattr->name + len);
/* get value into xattr directly */
ret = fgetxattr(fd, name, (void *)(xattr->name + len), value_len);
error_exit(ret != value_len, "fgetxattr value"ERRF, ERRA);
memcpy(xattr->name, name, len);
ret = scoutfs_parallel_restore_add_xattr(wri, xattr);
error_exit(ret, "add hidden xattr %d", ret);
free(xattr);
name += len + 1;
bytes -= len + 1;
} while (bytes > 0);
}
free(buf);
close(fd);
normal_xattrs:
	value_len = listxattr(path, NULL, 0);
	error_exit(value_len < 0, "listxattr size "ERRF, ERRA);
	if (value_len == 0)
		return;
	buf = calloc(1, value_len);
	error_exit(!buf, "alloc listxattr buf");
	ret = listxattr(path, buf, value_len);
	error_exit(ret < 0, "listxattr "ERRF, ERRA);
name = buf;
bytes = ret;
do {
len = strlen(name);
		error_exit(len == 0, "listxattr empty name");
		error_exit(len > SCOUTFS_XATTR_MAX_NAME_LEN, "listxattr long name");
value_len = getxattr(path, name, NULL, 0);
error_exit(value_len < 0, "value "ERRF, ERRA);
xattr = malloc(sizeof(struct scoutfs_parallel_restore_xattr) + len + value_len);
error_exit(!xattr, "error allocating generated xattr");
*xattr = (struct scoutfs_parallel_restore_xattr) {
.ino = ino,
.pos = pos++,
.name_len = len,
.value_len = value_len,
};
xattr->name = (void *)(xattr + 1);
xattr->value = (void *)(xattr->name + len);
ret = getxattr(path, name, (void *)(xattr->name + len), value_len);
error_exit(ret != value_len, "fgetxattr value"ERRF, ERRA);
memcpy(xattr->name, name, len);
ret = scoutfs_parallel_restore_add_xattr(wri, xattr);
error_exit(ret, "add xattr %d", ret);
free(xattr);
name += len + 1;
bytes -= len + 1;
} while (bytes > 0);
free(buf);
}
/*
 * We can't store the same inode multiple times, so we need to account
 * for hardlinks.  Maintain a linked list that records the first
 * hardlinked inode we encounter; every subsequent link to that inode
 * skips inserting the inode and just adds another entry.
 */
static bool is_new_inode_item(bool nlink, u64 ino)
{
struct hardlink_head *hh_tmp;
struct hardlink_head *hh;
if (!nlink)
return true;
	/* linear search, pretty awful, should be a binary tree */
list_for_each_entry_safe(hh, hh_tmp, &hardlinks, head) {
if (hh->ino == ino)
return false;
}
/* insert item */
hh = malloc(sizeof(struct hardlink_head));
error_exit(!hh, "malloc");
hh->ino = ino;
list_add_tail(&hh->head, &hardlinks);
	/*
	 * XXX
	 *
	 * As long as we don't cross filesystem boundaries, once we've
	 * created all N entries of an N-linked inode it can never be
	 * seen again and can be removed from the list.  That would
	 * significantly improve the manageability of the list; all we'd
	 * need is a counter to compare against the nr_links field of the
	 * inode.  See the sketch after this function.
	 */
return true;
}
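/*
 * A minimal sketch of the counted-removal idea above, not used by this
 * tool: remember st_nlink at first sight and drop the inode from the
 * list once all of its entries have been created.  The struct and
 * function names here are hypothetical.
 */
struct counted_hardlink {
	struct list_head head;
	u64 ino;
	u64 nr_links;	/* st_nlink when first seen */
	u64 seen;	/* entries created so far */
};

static bool __attribute__((unused))
is_new_counted_inode(struct list_head *list, u64 ino, u64 nr_links)
{
	struct counted_hardlink *ch;
	struct counted_hardlink *ch_tmp;

	list_for_each_entry_safe(ch, ch_tmp, list, head) {
		if (ch->ino != ino)
			continue;
		/* last link: no more entries can reference this inode */
		if (++ch->seen == ch->nr_links) {
			list_del_init(&ch->head);
			free(ch);
		}
		return false;
	}

	ch = malloc(sizeof(struct counted_hardlink));
	error_exit(!ch, "malloc counted hardlink");
	ch->ino = ino;
	ch->nr_links = nr_links;
	ch->seen = 1;
	list_add_tail(&ch->head, list);
	return true;
}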
/*
* create the inode data for a given path as best as possible
* duplicating the exact data from the source path
*/
static struct scoutfs_parallel_restore_inode *read_inode_data(char *path, u64 ino, bool *nlink, bool is_scoutfs)
{
struct scoutfs_parallel_restore_inode *inode = NULL;
struct scoutfs_ioctl_stat_more stm;
struct scoutfs_ioctl_inode_attr_x iax;
struct stat st;
int ret;
int fd;
inode = calloc(1, sizeof(struct scoutfs_parallel_restore_inode));
error_exit(!inode, "failure allocating inode");
ret = lstat(path, &st);
error_exit(ret, "failure stat inode");
/* use exact inode numbers from path, except for root ino */
if (ino != SCOUTFS_ROOT_INO)
inode->ino = st.st_ino;
else
inode->ino = SCOUTFS_ROOT_INO;
inode->mode = st.st_mode;
inode->uid = st.st_uid;
inode->gid = st.st_gid;
inode->atime = st.st_atim;
inode->ctime = st.st_ctim;
inode->mtime = st.st_mtim;
inode->size = st.st_size;
inode->nlink = st.st_nlink;
inode->rdev = st.st_rdev;
/* scoutfs specific */
inode->meta_seq = 0;
inode->data_seq = 0;
inode->crtime = st.st_ctim;
/* we don't restore data */
if (S_ISREG(inode->mode) && (inode->size > 0))
inode->offline = true;
if (S_ISREG(inode->mode) || S_ISDIR(inode->mode)) {
if (is_scoutfs) {
fd = open(path, O_RDONLY);
			error_exit(fd < 0, "open failure"ERRF, ERRA);
ret = ioctl(fd, SCOUTFS_IOC_STAT_MORE, &stm);
error_exit(ret, "failure SCOUTFS_IOC_STAT_MORE inode");
/* these aren't restored! */
inode->meta_seq = stm.meta_seq;
inode->data_seq = stm.data_seq;
inode->crtime = (struct timespec){.tv_sec = stm.crtime_sec, .tv_nsec = stm.crtime_nsec};
/* project ID, retention bit */
memset(&iax, 0, sizeof(iax));
iax.x_flags = 0;
iax.x_mask = SCOUTFS_IOC_IAX_PROJECT_ID | SCOUTFS_IOC_IAX__BITS;
iax.bits = SCOUTFS_IOC_IAX_B_RETENTION;
ret = ioctl(fd, SCOUTFS_IOC_GET_ATTR_X, &iax);
error_exit(ret, "failure SCOUTFS_IOC_GET_ATTR_X inode");
inode->proj = iax.project_id;
inode->flags |= (iax.bits & SCOUTFS_IOC_IAX_B_RETENTION) ? SCOUTFS_INO_FLAG_RETENTION : 0;
close(fd);
}
}
/* pass whether item is hardlinked or not */
*nlink = (st.st_nlink > 1);
return inode;
}
typedef int (*quota_ioctl_in)(struct scoutfs_ioctl_quota_rule *irules,
struct scoutfs_ioctl_get_quota_rules *gqr,
size_t nr, int fd);
static int get_quota_ioctl(struct scoutfs_ioctl_quota_rule *irules,
struct scoutfs_ioctl_get_quota_rules *rules_in,
size_t nr, int fd)
{
struct scoutfs_ioctl_get_quota_rules *gqr = rules_in;
int ret;
gqr->rules_ptr = (intptr_t)irules;
gqr->rules_nr = nr;
ret = ioctl(fd, SCOUTFS_IOC_GET_QUOTA_RULES, gqr);
error_exit(ret < 0, "quota ioctl error");
return ret;
}
static char opc[] = {
[SQ_OP_DATA] = 'D',
[SQ_OP_INODE] = 'I',
};
static char nsc[] = {
[SQ_NS_LITERAL] = 'L',
[SQ_NS_PROJ] = 'P',
[SQ_NS_UID] = 'U',
[SQ_NS_GID] = 'G',
};
static int insert_quota_rule(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_ioctl_quota_rule *irule)
{
struct scoutfs_parallel_restore_quota_rule *prule = NULL;
int ret;
int i;
prule = calloc(1, sizeof(struct scoutfs_parallel_restore_quota_rule));
error_exit(!prule, "quota rule alloc failed");
prule->limit = irule->limit;
prule->prio = irule->prio;
prule->op = irule->op;
prule->rule_flags = irule->rule_flags;
prule->names[0].val = irule->name_val[0];
prule->names[0].source = irule->name_source[0];
prule->names[0].flags = irule->name_flags[0];
prule->names[1].val = irule->name_val[1];
prule->names[1].source = irule->name_source[1];
prule->names[1].flags = irule->name_flags[1];
prule->names[2].val = irule->name_val[2];
prule->names[2].source = irule->name_source[2];
prule->names[2].flags = irule->name_flags[2];
/* print out the rule */
printf("Quota rule: %3u ", irule->prio);
for (i = 0; i < array_size(irule->name_val); i++) {
printf("%llu,%c,%c ",
irule->name_val[i],
nsc[irule->name_source[i]],
(irule->name_flags[i] & SQ_NF_SELECT) ? 'S' : '-');
}
printf("%c %llu %c\n",
opc[irule->op], irule->limit, (irule->rule_flags & SQ_RF_TOTL_COUNT) ? 'C' : '-');
ret = scoutfs_parallel_restore_add_quota_rule(wri, prule);
error_exit(ret, "quota add rule %d", ret);
free(prule);
return ret;
}
static int restore_quotas(struct scoutfs_parallel_restore_writer *wri,
quota_ioctl_in quota_in, char *path)
{
struct scoutfs_ioctl_get_quota_rules gqr = {{0,}};
struct scoutfs_ioctl_quota_rule *irules = NULL;
size_t rule_alloc = 0;
size_t rule_nr = 0;
size_t rule_count;
size_t i;
int fd = -1;
int ret;
fd = open(path, O_RDONLY);
error_exit(fd < 0, "open"ERRF, ERRA);
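	/*
	 * Fetch rules in batches until the ioctl returns 0: gqr is passed
	 * back in so the kernel can continue from where the previous call
	 * left off, and the array grows 1024 rules at a time.
	 */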
for (;;) {
		if (rule_nr == rule_alloc) {
			rule_alloc += 1024;
			irules = realloc(irules, rule_alloc * sizeof(irules[0]));
			error_exit(!irules, "irule realloc failed rule_nr:%zu alloced:%zu",
				   rule_nr, rule_alloc);
		}
ret = quota_in(&irules[rule_nr], &gqr, rule_alloc - rule_nr, fd);
if (ret == 0)
break;
if (ret < 0)
goto out;
rule_count = ret;
for (i = 0; i < rule_count; i++) {
ret = insert_quota_rule(wri, &irules[i]);
if (ret < 0)
goto out;
}
}
ret = 0;
out:
if (fd >= 0)
close(fd);
if (irules)
free(irules);
return ret;
}
struct writer_args {
struct list_head head;
int dev_fd;
int pair_fd;
struct scoutfs_parallel_restore_slice slice;
};
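/*
 * Recursively restore the tree rooted at path as directory inode ino.
 * Entries, non-dir inodes, and xattrs are added as the directory is
 * read; the directory's own inode is added last, once its subdir count
 * and total entry name bytes are known.
 */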
static void restore_path(struct scoutfs_parallel_restore_writer *wri, struct writer_args *args, struct write_result *res, void *buf, char *path, u64 ino)
{
struct scoutfs_parallel_restore_inode *inode;
struct scoutfs_parallel_restore_entry *entry;
DIR *dirp = NULL;
char *subdir = NULL;
char link[PATH_MAX + 1];
struct dirent *ent;
struct statfs stf;
int ret = 0;
int subdir_count = 0, file_count = 0;
size_t ent_len = 0;
size_t pos = 0;
bool nlink = false;
char ind = '?';
u64 mode;
bool is_scoutfs = false;
/* get fs info once per path */
ret = statfs(path, &stf);
error_exit(ret != 0, "statfs"ERRF, ERRA);
	is_scoutfs = (stf.f_type == 0x554f4353); /* "SCOU": scoutfs f_type magic */
if (!is_scoutfs && !warn_scoutfs) {
warn_scoutfs = true;
fprintf(stderr, "Non-scoutfs source path detected: scoutfs specific features disabled\n");
}
/* traverse the entire tree */
	dirp = opendir(path);
	error_exit(!dirp, "opendir"ERRF, ERRA);
	errno = 0;
while ((ent = readdir(dirp))) {
if (ent->d_type == DT_DIR) {
if ((strcmp(ent->d_name, ".") == 0) ||
(strcmp(ent->d_name, "..") == 0)) {
/* position still matters */
pos++;
continue;
}
/* recurse into subdir */
ret = asprintf(&subdir, "%s/%s", path, ent->d_name);
error_exit(ret == -1, "asprintf subdir"ERRF, ERRA);
restore_path(wri, args, res, buf, subdir, ent->d_ino);
subdir_count++;
ent_len += strlen(ent->d_name);
entry = malloc(sizeof(struct scoutfs_parallel_restore_entry) + strlen(ent->d_name));
error_exit(!entry, "error allocating generated entry");
*entry = (struct scoutfs_parallel_restore_entry) {
.dir_ino = ino,
.pos = pos++,
.ino = ent->d_ino,
.mode = DIR_MODE,
.name = (void *)(entry + 1),
.name_len = strlen(ent->d_name),
};
memcpy(entry->name, ent->d_name, strlen(ent->d_name));
ret = scoutfs_parallel_restore_add_entry(wri, entry);
error_exit(ret, "add entry %d", ret);
free(entry);
add_xattrs(wri, subdir, ent->d_ino, is_scoutfs);
free(subdir);
le64_add_cpu(&res->dirs_created, 1);
} else if (ent->d_type == DT_REG) {
file_count++;
ent_len += strlen(ent->d_name);
entry = malloc(sizeof(struct scoutfs_parallel_restore_entry) + strlen(ent->d_name));
error_exit(!entry, "error allocating generated entry");
*entry = (struct scoutfs_parallel_restore_entry) {
.dir_ino = ino,
.pos = pos++,
.ino = ent->d_ino,
.mode = REG_MODE,
.name = (void *)(entry + 1),
.name_len = strlen(ent->d_name),
};
memcpy(entry->name, ent->d_name, strlen(ent->d_name));
ret = scoutfs_parallel_restore_add_entry(wri, entry);
error_exit(ret, "add entry %d", ret);
free(entry);
ret = asprintf(&subdir, "%s/%s", path, ent->d_name);
error_exit(ret == -1, "asprintf subdir"ERRF, ERRA);
/* file inode */
inode = read_inode_data(subdir, ent->d_ino, &nlink, is_scoutfs);
fprintf(stdout, "f %s/%s\n", path, ent->d_name);
if (is_new_inode_item(nlink, ent->d_ino)) {
ret = scoutfs_parallel_restore_add_inode(wri, inode);
error_exit(ret, "add reg file inode %d", ret);
/* xattrs */
add_xattrs(wri, subdir, ent->d_ino, is_scoutfs);
}
free(inode);
free(subdir);
le64_add_cpu(&res->files_created, 1);
} else if (ent->d_type == DT_LNK) {
/* readlink */
ret = asprintf(&subdir, "%s/%s", path, ent->d_name);
error_exit(ret == -1, "asprintf subdir"ERRF, ERRA);
ent_len += strlen(ent->d_name);
ret = readlink(subdir, link, PATH_MAX);
error_exit(ret < 0, "readlink %d", ret);
/* must 0-terminate if we want to print it */
link[ret] = 0;
entry = malloc(sizeof(struct scoutfs_parallel_restore_entry) + strlen(ent->d_name));
error_exit(!entry, "error allocating generated entry");
*entry = (struct scoutfs_parallel_restore_entry) {
.dir_ino = ino,
.pos = pos++,
.ino = ent->d_ino,
.mode = LNK_MODE,
.name = (void *)(entry + 1),
.name_len = strlen(ent->d_name),
};
memcpy(entry->name, ent->d_name, strlen(ent->d_name));
ret = scoutfs_parallel_restore_add_entry(wri, entry);
error_exit(ret, "add symlink entry %d", ret);
/* link inode */
inode = read_inode_data(subdir, ent->d_ino, &nlink, is_scoutfs);
fprintf(stdout, "l %s/%s -> %s\n", path, ent->d_name, link);
inode->mode = LNK_MODE;
inode->target = link;
inode->target_len = strlen(link) + 1; /* scoutfs null terminates symlinks */
ret = scoutfs_parallel_restore_add_inode(wri, inode);
error_exit(ret, "add syml inode %d", ret);
free(inode);
free(subdir);
le64_add_cpu(&res->files_created, 1);
} else {
/* odd stuff */
switch(ent->d_type) {
case DT_CHR:
ind = 'c';
mode = S_IFCHR;
break;
case DT_BLK:
ind = 'b';
mode = S_IFBLK;
break;
case DT_FIFO:
ind = 'p';
mode = S_IFIFO;
break;
case DT_SOCK:
ind = 's';
mode = S_IFSOCK;
break;
default:
error_exit(true, "Unknown readdir entry type");
}
file_count++;
ent_len += strlen(ent->d_name);
entry = malloc(sizeof(struct scoutfs_parallel_restore_entry) + strlen(ent->d_name));
error_exit(!entry, "error allocating generated entry");
*entry = (struct scoutfs_parallel_restore_entry) {
.dir_ino = ino,
.pos = pos++,
.ino = ent->d_ino,
.mode = mode,
.name = (void *)(entry + 1),
.name_len = strlen(ent->d_name),
};
memcpy(entry->name, ent->d_name, strlen(ent->d_name));
ret = scoutfs_parallel_restore_add_entry(wri, entry);
error_exit(ret, "add entry %d", ret);
free(entry);
ret = asprintf(&subdir, "%s/%s", path, ent->d_name);
error_exit(ret == -1, "asprintf subdir"ERRF, ERRA);
/* file inode */
inode = read_inode_data(subdir, ent->d_ino, &nlink, is_scoutfs);
fprintf(stdout, "%c %s/%s\n", ind, path, ent->d_name);
if (is_new_inode_item(nlink, ent->d_ino)) {
ret = scoutfs_parallel_restore_add_inode(wri, inode);
error_exit(ret, "add reg file inode %d", ret);
}
free(inode);
free(subdir);
le64_add_cpu(&res->files_created, 1);
}
		/* batch out changes, each batch writes about 1MB */
		if (le64_to_cpu(res->files_created) > BATCH_FILES)
			write_bufs_and_send(wri, buf, args->dev_fd, res, false, args->pair_fd);
		/* clear errno so the readdir failure check below is meaningful */
		errno = 0;
	}
	error_exit(errno, "readdir"ERRF, ERRA);
closedir(dirp);
/* create the dir itself */
inode = read_inode_data(path, ino, &nlink, is_scoutfs);
inode->nr_subdirs = subdir_count;
inode->total_entry_name_bytes = ent_len;
fprintf(stdout, "d %s\n", path);
ret = scoutfs_parallel_restore_add_inode(wri, inode);
error_exit(ret, "add dir inode %d", ret);
free(inode);
/* No need to send, we'll send final after last directory is complete */
}
static int do_restore(struct opts *opts)
{
struct scoutfs_parallel_restore_writer *pwri, *wri = NULL;
struct scoutfs_parallel_restore_slice *slices = NULL;
struct scoutfs_super_block *super = NULL;
struct writer_args *args;
struct write_result res;
int pair[2] = {-1, -1};
LIST_HEAD(writers);
void *buf = NULL;
void *bufp = NULL;
int dev_fd = -1;
pid_t pid;
int ret;
u64 tot_bytes;
u64 tot_dirs;
u64 tot_files;
ret = socketpair(PF_LOCAL, SOCK_STREAM, 0, pair);
error_exit(ret, "socketpair error "ERRF, ERRA);
dev_fd = open(opts->meta_path, O_DIRECT | (O_RDWR|O_EXCL));
error_exit(dev_fd < 0, "error opening '%s': "ERRF, opts->meta_path, ERRA);
errno = posix_memalign((void **)&super, 4096, SCOUTFS_BLOCK_SM_SIZE) ?:
posix_memalign((void **)&buf, 4096, BUF_SIZ);
error_exit(errno, "error allocating block bufs "ERRF, ERRA);
ret = pread(dev_fd, super, SCOUTFS_BLOCK_SM_SIZE,
SCOUTFS_SUPER_BLKNO << SCOUTFS_BLOCK_SM_SHIFT);
error_exit(ret != SCOUTFS_BLOCK_SM_SIZE, "error reading super, ret %d", ret);
error_exit((super->flags & SCOUTFS_FLAG_IS_META_BDEV) == 0, "super block is not meta dev");
ret = scoutfs_parallel_restore_create_writer(&wri);
error_exit(ret, "create writer %d", ret);
ret = scoutfs_parallel_restore_import_super(wri, super, dev_fd);
error_exit(ret, "import super %d", ret);
slices = calloc(2, sizeof(struct scoutfs_parallel_restore_slice));
error_exit(!slices, "alloc slices");
scoutfs_parallel_restore_init_slices(wri, slices, 2);
ret = scoutfs_parallel_restore_add_slice(wri, &slices[0]);
error_exit(ret, "add slices[0] %d", ret);
args = calloc(1, sizeof(struct writer_args));
error_exit(!args, "alloc writer args");
args->dev_fd = dev_fd;
args->slice = slices[1];
args->pair_fd = pair[1];
list_add_tail(&args->head, &writers);
/* fork writer process */
pid = fork();
error_exit(pid == -1, "fork error");
if (pid == 0) {
ret = prctl(PR_SET_PDEATHSIG, SIGHUP);
error_exit(ret < 0, "failed to set parent death sig");
errno = posix_memalign((void **)&bufp, 4096, BUF_SIZ);
error_exit(errno, "error allocating block bufp "ERRF, ERRA);
ret = scoutfs_parallel_restore_create_writer(&pwri);
error_exit(ret, "create pwriter %d", ret);
ret = scoutfs_parallel_restore_add_slice(pwri, &args->slice);
error_exit(ret, "add pslice %d", ret);
memset(&res, 0, sizeof(res));
restore_path(pwri, args, &res, bufp, opts->source_dir, SCOUTFS_ROOT_INO);
ret = restore_quotas(pwri, get_quota_ioctl, opts->source_dir);
error_exit(ret, "quota add %d", ret);
res.complete = true;
		write_bufs_and_send(pwri, bufp, args->dev_fd, &res, true, args->pair_fd);
scoutfs_parallel_restore_destroy_writer(&pwri);
free(bufp);
exit(0);
	}
/* read results and wait for writer to finish */
tot_bytes = 0;
tot_dirs = 1;
tot_files = 0;
for (;;) {
ret = read(pair[0], &res, sizeof(struct write_result));
error_exit(ret != sizeof(struct write_result), "result read error %d", ret);
ret = scoutfs_parallel_restore_add_progress(wri, &res.prog);
error_exit(ret, "add thr prog %d", ret);
		tot_bytes += le64_to_cpu(res.bytes_written);
		tot_files += le64_to_cpu(res.files_created);
		tot_dirs += le64_to_cpu(res.dirs_created);
		/* the final record carries our slice and the complete flag */
		if (res.slice.meta_len != 0) {
			ret = scoutfs_parallel_restore_add_slice(wri, &res.slice);
			error_exit(ret, "add thr slice %d", ret);
			if (res.complete)
				break;
		}
	}
tot_bytes += write_bufs(wri, buf, args->dev_fd);
fprintf(stdout, "Wrote %lld directories, %lld files, %lld bytes total\n",
tot_dirs, tot_files, tot_bytes);
/* write super to finalize */
ret = scoutfs_parallel_restore_export_super(wri, super);
error_exit(ret, "update super %d", ret);
ret = pwrite(dev_fd, super, SCOUTFS_BLOCK_SM_SIZE,
SCOUTFS_SUPER_BLKNO << SCOUTFS_BLOCK_SM_SHIFT);
error_exit(ret != SCOUTFS_BLOCK_SM_SIZE, "error writing super, ret %d", ret);
scoutfs_parallel_restore_destroy_writer(&wri);
if (dev_fd >= 0)
close(dev_fd);
	if (pair[0] >= 0)
		close(pair[0]);
	if (pair[1] >= 0)
		close(pair[1]);
free(super);
free(args);
free(slices);
free(buf);
return 0;
}
int main(int argc, char **argv)
{
struct opts opts = (struct opts){ 0 };
struct hardlink_head *hh_tmp;
struct hardlink_head *hh;
int ret;
int c;
INIT_LIST_HEAD(&hardlinks);
	while ((c = getopt(argc, argv, "m:s:")) != -1) {
switch(c) {
case 'm':
opts.meta_path = strdup(optarg);
break;
case 's':
opts.source_dir = strdup(optarg);
break;
case '?':
printf("Unknown option '%c'\n", optopt);
usage();
exit(1);
}
}
error_exit(!opts.meta_path, "must specify metadata device path with -m");
error_exit(!opts.source_dir, "must specify source directory path with -s");
ret = do_restore(&opts);
free(opts.meta_path);
free(opts.source_dir);
list_for_each_entry_safe(hh, hh_tmp, &hardlinks, head) {
list_del_init(&hh->head);
free(hh);
}
return ret == 0 ? 0 : 1;
}

View File

@@ -15,7 +15,7 @@ echo "== prepare devices, mount point, and logs"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"
> $T_TMP.mount.out
scoutfs mkfs -f -Q 0,127.0.0.1,$T_SCRATCH_PORT "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 \
scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 \
|| t_fail "mkfs failed"
echo "== bad devices, bad options"

View File

@@ -11,7 +11,7 @@ truncate -s $sz "$T_TMP.equal"
truncate -s $large_sz "$T_TMP.large"
echo "== make scratch fs"
t_quiet scoutfs mkfs -f -Q 0,127.0.0.1,$T_SCRATCH_PORT "$T_EX_META_DEV" "$T_EX_DATA_DEV"
t_quiet scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"

View File

@@ -57,7 +57,7 @@ test "$before" == "$after" || \
# XXX this is all pretty manual, would be nice to have helpers
echo "== make small meta fs"
# meta device just big enough for reserves and the metadata we'll fill
scoutfs mkfs -A -f -Q 0,127.0.0.1,$T_SCRATCH_PORT -m 10G "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m 10G "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
t_fail "mkfs failed"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"

View File

@@ -11,8 +11,8 @@
# format version.
#
# not supported on el8 or higher
if [ $(source /etc/os-release ; echo ${VERSION_ID:0:1}) -gt 7 ]; then
# not supported on el9!
if [ $(source /etc/os-release ; echo ${VERSION_ID:0:1}) -gt 8 ]; then
t_skip_permitted "Unsupported OS version"
fi
@@ -89,7 +89,7 @@ for vers in $(seq $MIN $((MAX - 1))); do
old_module="$builds/$vers/scoutfs.ko"
echo "mkfs $vers" >> "$T_TMP.log"
t_quiet $old_scoutfs mkfs -f -Q 0,127.0.0.1,$T_SCRATCH_PORT "$T_EX_META_DEV" "$T_EX_DATA_DEV" \
t_quiet $old_scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV" \
|| t_fail "mkfs $vers failed"
echo "mount $vers with $vers" >> "$T_TMP.log"

View File

@@ -10,6 +10,30 @@ EXTENTS_PER_BTREE_BLOCK=600
EXTENTS_PER_LIST_BLOCK=8192
FREED_EXTENTS=$((EXTENTS_PER_BTREE_BLOCK * EXTENTS_PER_LIST_BLOCK))
#
# This test specifically creates a pathologically sparse file that will
# be as expensive as possible to free. This is usually fine on
# dedicated or reasonable hardware, but trying to run this in
# virtualized debug kernels can take a very long time. This test is
# about making sure that the server doesn't fail, not that the platform
# can handle the scale of work that our btree formats happen to require
# while execution is bogged down with use-after-free memory reference
# tracking. So we give the test a lot more breathing room before
# deciding that it's hung.
#
echo "== setting longer hung task timeout"
if [ -w /proc/sys/kernel/hung_task_timeout_secs ]; then
secs=$(cat /proc/sys/kernel/hung_task_timeout_secs)
test "$secs" -gt 0 || \
t_fail "confusing value '$secs' from /proc/sys/kernel/hung_task_timeout_secs"
restore_hung_task_timeout()
{
echo "$secs" > /proc/sys/kernel/hung_task_timeout_secs
}
trap restore_hung_task_timeout EXIT
echo "$((secs * 5))" > /proc/sys/kernel/hung_task_timeout_secs
fi
echo "== creating fragmented extents"
fragmented_data_extents $FREED_EXTENTS $EXTENTS_PER_BTREE_BLOCK "$T_D0/alloc" "$T_D0/move"

View File

@@ -5,7 +5,7 @@
t_require_commands mmap_stress mmap_validate scoutfs xfs_io
echo "== mmap_stress"
mmap_stress 8192 30 "$T_D0/mmap_stress" "$T_D0/mmap_stress" "$T_D0/mmap_stress" "$T_D3/mmap_stress" "$T_D3/mmap_stress" | sed 's/:.*//g' | sort
mmap_stress 8192 2000 "$T_D0/mmap_stress" "$T_D1/mmap_stress" "$T_D2/mmap_stress" "$T_D3/mmap_stress" "$T_D4/mmap_stress" | sed 's/:.*//g' | sort
echo "== basic mmap/read/write consistency checks"
mmap_validate 256 1000 "$T_D0/mmap_val1" "$T_D1/mmap_val1"

View File

@@ -157,7 +157,7 @@ echo "truncate should be waiting for first block:"
expect_wait "$DIR/file" "change_size" $ino 0
scoutfs stage "$DIR/golden" "$DIR/file" -V "$vers" -o 0 -l $BYTES
sleep .1
echo "truncate should no longer be waiting:"
echo "trunate should no longer be waiting:"
scoutfs data-waiting -B 0 -I 0 -p "$DIR" | wc -l
cat "$DIR/golden" > "$DIR/file"
vers=$(scoutfs stat -s data_version "$DIR/file")
@@ -168,13 +168,10 @@ scoutfs release "$DIR/file" -V "$vers" -o 0 -l $BYTES
# overwrite, not truncate+write
dd if="$DIR/other" of="$DIR/file" \
bs=$BS count=$BLOCKS conv=notrunc status=none &
pid="$!"
sleep .1
echo "should be waiting for write"
expect_wait "$DIR/file" "write" $ino 0
scoutfs stage "$DIR/golden" "$DIR/file" -V "$vers" -o 0 -l $BYTES
# wait for the background dd to complete
wait "$pid" 2> /dev/null
cmp "$DIR/file" "$DIR/other"
echo "== cleanup"

View File

@@ -67,49 +67,18 @@ t_mount_all
while test -d $(echo /sys/fs/scoutfs/*/fence/* | cut -d " " -f 1); do
sleep .5
done
sv=$(t_server_nr)
# wait for reclaim_open_log_tree() to complete for each mount
while [ $(t_counter reclaimed_open_logs $sv) -lt $T_NR_MOUNTS ]; do
sleep 1
done
# wait for finalize_and_start_log_merge() to find no active merges in flight
# and not find any finalized trees
while [ $(t_counter log_merge_no_finalized $sv) -lt 1 ]; do
sleep 1
done
# wait for orphan scans to run
t_set_all_sysfs_mount_options orphan_scan_delay_ms 1000
# wait until we see two consecutive orphan scan attempts without
# any inode deletion forward progress in each mount
for nr in $(t_fs_nrs); do
C=0
LOSA=$(t_counter orphan_scan_attempts $nr)
LDOP=$(t_counter inode_deleted $nr)
while [ $C -lt 2 ]; do
sleep 1
OSA=$(t_counter orphan_scan_attempts $nr)
DOP=$(t_counter inode_deleted $nr)
if [ $OSA != $LOSA ]; then
if [ $DOP == $LDOP ]; then
(( C++ ))
else
C=0
fi
fi
LOSA=$OSA
LDOP=$DOP
# also have to wait for delayed log merge work from mount
C=120
while (( C-- )); do
brk=1
for ino in $inos; do
inode_exists $ino && brk=0
done
test $brk -eq 1 && break
sleep 1
done
for ino in $inos; do
inode_exists $ino && echo "$ino still exists"
done

View File

@@ -0,0 +1,74 @@
#
# validate parallel restore library
#
t_require_commands scoutfs parallel_restore find xargs
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"
scratch_mkfs() {
scoutfs mkfs "$@" \
-A -f -Q 0,127.0.0.1,53000 $T_EX_META_DEV $T_EX_DATA_DEV
}
scratch_check() {
# give ample time for writes to commit
sleep 1
sync
scoutfs check -d ${T_TMPDIR}/check.debug $T_EX_META_DEV $T_EX_DATA_DEV
}
scratch_mount() {
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 $T_EX_DATA_DEV $SCR
}
echo "== simple mkfs/restore/mount"
# meta device just big enough for reserves and the metadata we'll fill
scratch_mkfs -V 2 -m 10G -d 60G > $T_TMP.mkfs.out 2>&1 || t_fail "mkfs failed"
parallel_restore -m "$T_EX_META_DEV" > /dev/null || t_fail "parallel_restore"
scratch_check || t_fail "check failed"
scratch_mount
scoutfs statfs -p "$SCR" | grep -v -e 'fsid' -e 'rid'
find "$SCR" -exec scoutfs list-hidden-xattrs {} \; | wc
scoutfs search-xattrs -p "$SCR" scoutfs.hide.srch.sam_vol_F01030L6 -p "$SCR" | wc
find "$SCR" -type f -name "file-*" | head -n 4 | xargs -n 1 scoutfs get-fiemap -L
scoutfs df -p "$SCR" | awk '{print $1, $4}'
scoutfs quota-list -p "$SCR"
umount "$SCR"
scratch_check || t_fail "check after mount failed"
echo "== under ENOSPC"
scratch_mkfs -V 2 -m 10G -d 60G > $T_TMP.mkfs.out 2>&1 || t_fail "mkfs failed"
parallel_restore -m "$T_EX_META_DEV" -n 2000000 > /dev/null || t_fail "parallel_restore"
scratch_check || t_fail "check failed"
scratch_mount
scoutfs df -p "$SCR" | awk '{print $1, $4}'
umount "$SCR"
scratch_check || t_fail "check after mount failed"
echo "== ENOSPC"
scratch_mkfs -V 2 -m 10G -d 60G > $T_TMP.mkfs.out 2>&1 || t_fail "mkfs failed"
parallel_restore -m "$T_EX_META_DEV" -d 600:1000 -f 600:1000 -n 4000000 | grep died 2>&1 && t_fail "parallel_restore"
echo "== attempt to restore data device"
scratch_mkfs -V 2 -m 10G -d 60G > $T_TMP.mkfs.out 2>&1 || t_fail "mkfs failed"
parallel_restore -m "$T_EX_DATA_DEV" | grep died 2>&1 && t_fail "parallel_restore"
echo "== attempt format_v1 restore"
scratch_mkfs -V 1 -m 10G -d 60G > $T_TMP.mkfs.out 2>&1 || t_fail "mkfs failed"
parallel_restore -m "$T_EX_META_DEV" | grep died 2>&1 && t_fail "parallel_restore"
echo "== test if previously mounted"
scratch_mkfs -V 2 -m 10G -d 60G > $T_TMP.mkfs.out 2>&1 || t_fail "mkfs failed"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"
umount "$SCR"
parallel_restore -m "$T_EX_META_DEV" | grep died 2>&1 && t_fail "parallel_restore"
echo "== cleanup"
rmdir "$SCR"
t_pass

View File

@@ -72,7 +72,7 @@ quarter_data=$(echo "$size_data / 4" | bc)
# XXX this is all pretty manual, would be nice to have helpers
echo "== make initial small fs"
scoutfs mkfs -A -f -Q 0,127.0.0.1,$T_SCRATCH_PORT -m $quarter_meta -d $quarter_data \
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m $quarter_meta -d $quarter_data \
"$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
t_fail "mkfs failed"
SCR="$T_TMPDIR/mnt.scratch"

118
tests/tests/restore_copy.sh Normal file
View File

@@ -0,0 +1,118 @@
#
# validate parallel restore library - using restore_copy.c
#
t_require_commands scoutfs restore_copy find xargs
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"
scratch_mkfs() {
scoutfs mkfs "$@" \
-A -f -Q 0,127.0.0.1,53000 $T_EX_META_DEV $T_EX_DATA_DEV
}
scratch_check() {
# give ample time for writes to commit
sleep 1
sync
scoutfs check -d ${T_TMPDIR}/check.debug $T_EX_META_DEV $T_EX_DATA_DEV
}
scratch_mount() {
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 $T_EX_DATA_DEV $SCR
}
echo "== restore_copy content verification"
mkdir "$T_M0/data"
# create all supported inode types:
mkdir -p "$T_M0/data/d"
touch "$T_M0/data/f"
ln -sf "broken" "$T_M0/data/l"
ln "$T_M0/data/f" "$T_M0/data/h"
ln -sf "f" "$T_M0/data/F"
mknod "$T_M0/data/b" b 1 1
mknod "$T_M0/data/c" c 0 0
mknod "$T_M0/data/u" u 2 2
mknod "$T_M0/data/p" p
# some files with data
dd if=/dev/zero of="$T_M0/data/f4096" bs=4096 count=1 status=none
touch "$T_M0/data/falloc" "$T_M0/data/truncate"
xfs_io -C "falloc 65536 65536" "$T_M0/data/falloc"
xfs_io -C "truncate $((4096 * 4096))" "$T_M0/data/truncate"
# socket (could have used python but avoids python/python2/python3 problem)
perl -e "use IO::Socket; my \$s = IO::Socket::UNIX->new(Type=>SOCK_STREAM,Local=>'$T_M0/data/s') or die 'sock';"
# set all mode_t bits
touch "$T_M0/data/mode_t"
chmod 6777 "$T_M0/data/mode_t"
# uid/gid
touch "$T_M0/data/uidgid"
chown 33333:33333 "$T_M0/data/uidgid"
# set retention bit
touch "$T_M0/data/retention"
scoutfs set-attr-x -t 1 "$T_M0/data/retention"
# set project ID
touch "$T_M0/data/proj"
scoutfs set-attr-x -p 12345 "$T_M0/data/proj"
mkdir -p "$T_M0/data/proj_d"
touch "$T_M0/data/proj_d/f"
scoutfs set-attr-x -p 12345 "$T_M0/data/proj_d/f"
scoutfs set-attr-x -p 54321 "$T_M0/data/proj_d"
# quotas
for a in $(seq 10 15); do
scoutfs quota-add -p "$T_M0" -r "7 $a,L,- 0,L,- 0,L,- I 33 -"
done
# crtime
scoutfs set-attr-x -r 55555.666666666 "$T_M0/data/proj_d"
scoutfs set-attr-x -r 55556.666666666 "$T_M0/data/proj_d/f"
# data_seq, meta_seq, data_version is not restored.
scratch_mkfs -V 2 > $T_TMP.mkfs.out 2>&1 || t_fail "mkfs failed"
restore_copy -m $T_EX_META_DEV -s "$T_M0/data" | t_filter_fs
scratch_check || t_fail "check before mount failed"
scratch_mount
echo "== verify metadata bits on restored fs"
inspect() {
ls -Alnr --time-style=+""
scoutfs get-attr-x -t "retention"
scoutfs get-attr-x -p "proj"
scoutfs get-fiemap -L "f4096"
scoutfs get-fiemap -L "falloc"
scoutfs get-fiemap -L "truncate"
scoutfs quota-list -p "."
scoutfs get-attr-x -p "proj_d/f"
scoutfs get-attr-x -p "proj_d"
scoutfs stat proj_d | grep crtime
scoutfs stat proj_d/f | grep crtime
}
( cd "$SCR" ; inspect )
echo "== verify quota rules on restored fs"
scoutfs quota-del -p "$T_M0" -r "7 15,L,- 0,L,- 0,L,- I 33 -" || t_fail "quota-del failed"
scoutfs quota-list -p "$T_M0"
scoutfs quota-add -p "$T_M0" -r "7 15,L,- 0,L,- 0,L,- I 33 -" || t_fail "quota-add failed"
scoutfs quota-list -p "$T_M0"
scoutfs df -p "$SCR" | awk '{print $1, $4}'
echo "== umount restored fs and check"
umount "$SCR"
scratch_check || t_fail "check after mount failed"
#scoutfs print $T_META_DEVICE
#scoutfs print $T_EX_META_DEV
echo "== cleanup"
rmdir "$SCR"
scoutfs set-attr-x -t 0 "$T_M0/data/retention"
rm -rf "$T_M0/data"
scoutfs quota-wipe -p "$T_M0"
t_pass

View File

@@ -50,9 +50,9 @@ t_quiet sync
cat << EOF > local.config
export FSTYP=scoutfs
export MKFS_OPTIONS="-f"
export MKFS_TEST_OPTIONS="-Q 0,127.0.0.1,$T_TEST_PORT"
export MKFS_SCRATCH_OPTIONS="-Q 0,127.0.0.1,$T_SCRATCH_PORT"
export MKFS_DEV_OPTIONS="-Q 0,127.0.0.1,$T_DEV_PORT"
export MKFS_TEST_OPTIONS="-Q 0,127.0.0.1,42000"
export MKFS_SCRATCH_OPTIONS="-Q 0,127.0.0.1,43000"
export MKFS_DEV_OPTIONS="-Q 0,127.0.0.1,44000"
export TEST_DEV=$T_DB0
export TEST_DIR=$T_M0
export SCRATCH_META_DEV=$T_EX_META_DEV
@@ -63,47 +63,73 @@ export MOUNT_OPTIONS="-o quorum_slot_nr=0,metadev_path=$T_MB0"
export TEST_FS_MOUNT_OPTS="-o quorum_slot_nr=0,metadev_path=$T_MB0"
EOF
cp "$T_EXTRA/local.exclude" local.exclude
cat << EOF > local.exclude
generic/003 # missing atime update in buffered read
generic/075 # file content mismatch failures (fds, etc)
generic/103 # enospc causes trans commit failures
generic/108 # mount fails on failing device?
generic/112 # file content mismatch failures (fds, etc)
generic/213 # enospc causes trans commit failures
generic/318 # can't support user namespaces until v5.11
generic/321 # requires selinux enabled for '+' in ls?
generic/338 # BUG_ON update inode error handling
generic/347 # _dmthin_mount doesn't work?
generic/356 # swap
generic/357 # swap
generic/409 # bind mounts not scripted yet
generic/410 # bind mounts not scripted yet
generic/411 # bind mounts not scripted yet
generic/423 # symlink inode size is strlen() + 1 on scoutfs
generic/430 # xfs_io copy_range missing in el7
generic/431 # xfs_io copy_range missing in el7
generic/432 # xfs_io copy_range missing in el7
generic/433 # xfs_io copy_range missing in el7
generic/434 # xfs_io copy_range missing in el7
generic/441 # dm-mapper
generic/444 # el9's posix_acl_update_mode is buggy ?
generic/467 # open_by_handle ESTALE
generic/472 # swap
generic/484 # dm-mapper
generic/493 # swap
generic/494 # swap
generic/495 # swap
generic/496 # swap
generic/497 # swap
generic/532 # xfs_io statx attrib_mask missing in el7
generic/554 # swap
generic/563 # cgroup+loopdev
generic/564 # xfs_io copy_range missing in el7
generic/565 # xfs_io copy_range missing in el7
generic/568 # falloc not resulting in block count increase
generic/569 # swap
generic/570 # swap
generic/620 # dm-hugedisk
generic/633 # id-mapped mounts missing in el7
generic/636 # swap
generic/641 # swap
generic/643 # swap
EOF
t_stdout_invoked
t_restore_output
echo " (showing output of xfstests)"
args="-E local.exclude ${T_XFSTESTS_ARGS:--g quick}"
./check $args
# the fs is unmounted when check finishes
t_stdout_compare
#
# ./check writes the results of the run to check.log. It lists the
# tests it ran, skipped, or failed. Then it writes a line saying
# everything passed or some failed.
#
#
# If XFSTESTS_ARGS were specified then we just pass/fail to match the
# check run.
#
if [ -n "$T_XFSTESTS_ARGS" ]; then
if tail -1 results/check.log | grep -q "Failed"; then
t_fail
else
t_pass
fi
fi
#
# Otherwise, typically, when there were no args then we scrape the most
# recent run and use it as the output to compare to make sure that we
# run the right tests and get the right results.
# ./check writes the results of the run to check.log. It lists
# the tests it ran, skipped, or failed. Then it writes a line saying
# everything passed or some failed. We scrape the most recent run and
# use it as the output to compare to make sure that we run the right
# tests and get the right results.
#
awk '
/^(Ran|Not run|Failures):.*/ {
if (pf) {
res=""
pf=""
}
res = res "\n" $0
}
res = res "\n" $0
}
/^(Passed|Failed).*tests$/ {
pf=$0
@@ -113,14 +139,10 @@ awk '
}' < results/check.log > "$T_TMPDIR/results"
# put a test per line so diff shows tests that differ
grep -E "^(Ran|Not run|Failures):" "$T_TMPDIR/results" | fmt -w 1 > "$T_TMPDIR/results.fmt"
grep -E "^(Passed|Failed).*tests$" "$T_TMPDIR/results" >> "$T_TMPDIR/results.fmt"
egrep "^(Ran|Not run|Failures):" "$T_TMPDIR/results" | \
fmt -w 1 > "$T_TMPDIR/results.fmt"
egrep "^(Passed|Failed).*tests$" "$T_TMPDIR/results" >> "$T_TMPDIR/results.fmt"
diff -u "$T_EXTRA/expected-results" "$T_TMPDIR/results.fmt" > "$T_TMPDIR/results.diff"
if [ -s "$T_TMPDIR/results.diff" ]; then
echo "tests that were skipped/run differed from expected:"
cat "$T_TMPDIR/results.diff"
t_fail
fi
t_compare_output cat "$T_TMPDIR/results.fmt"
t_pass

View File

@@ -7,7 +7,7 @@ FMTIOC_H := format.h ioctl.h
FMTIOC_KMOD := $(addprefix ../kmod/src/,$(FMTIOC_H))
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -g -msse4.2 \
-fno-strict-aliasing \
-I src/ -fno-strict-aliasing \
-DSCOUTFS_FORMAT_HASH=0x$(SCOUTFS_FORMAT_HASH)LLU
ifneq ($(wildcard $(firstword $(FMTIOC_KMOD))),)
@@ -15,10 +15,13 @@ CFLAGS += -I../kmod/src
endif
BIN := src/scoutfs
OBJ := $(patsubst %.c,%.o,$(wildcard src/*.c))
DEPS := $(wildcard */*.d)
OBJ_DIRS := src src/check
OBJ := $(foreach dir,$(OBJ_DIRS),$(patsubst %.c,%.o,$(wildcard $(dir)/*.c)))
DEPS := $(foreach dir,$(OBJ_DIRS),$(wildcard $(dir)/*.d))
all: $(BIN)
AR := src/scoutfs_parallel_restore.a
all: $(BIN) $(AR)
ifneq ($(DEPS),)
-include $(DEPS)
@@ -36,6 +39,10 @@ $(BIN): $(OBJ)
$(QU) [BIN $@]
$(VE)gcc -o $@ $^ -luuid -lm -lcrypto -lblkid
$(AR): $(OBJ)
$(QU) [AR $@]
$(VE)ar rcs $@ $^
%.o %.d: %.c Makefile sparse.sh
$(QU) [CC $<]
$(VE)gcc $(CFLAGS) -MD -MP -MF $*.d -c $< -o $*.o

View File

@@ -7,7 +7,7 @@ message_output()
error_message()
{
message_output "$@" >> /dev/stderr
message_output "$@" >&2
}
error_exit()
@@ -62,28 +62,32 @@ test -x "$SCOUTFS_FENCED_RUN" || \
# files disappear.
#
# generate failure messages to stderr while still echoing 0 for the caller
careful_cat()
{
cat "$@" 2>/dev/null
local path="$@"
cat "$@" || echo 0
}
while sleep $SCOUTFS_FENCED_DELAY; do
shopt -s nullglob
for fence in /sys/fs/scoutfs/*/fence/*; do
# catches unmatched regex when no dirs
if [ ! -d "$fence" ]; then
continue
fi
# skip requests that have been handled
if [ "$(careful_cat $fence/fenced)" == 1 -o \
"$(careful_cat $fence/error)" == 1 ]; then
continue
fi
srv=$(basename $(dirname $(dirname $fence)))
rid="$(cat $fence/rid)"
ip="$(cat $fence/ipv4_addr)"
reason="$(cat $fence/reason)"
log_message "server $srv fencing rid $rid at IP $ip for $reason"
# export _REQ_ vars for run to use
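For reference, the loop above assumes each request directory exposes a small set of attribute files; an illustrative layout (the server and request names here are made up) is:

/sys/fs/scoutfs/<server>/fence/<request>/
	fenced      # 1 once the request has been handled
	error       # 1 if handling the request failed
	rid         # rid of the mount to fence
	ipv4_addr   # last known IPv4 address of that mount
	reason      # why the server asked for fencing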

View File

@@ -55,14 +55,6 @@ with initial sparse regions (perhaps by multiple threads writing to
different regions) and wasted space isn't an issue (perhaps because the
file population contains few small files).
.TP
.B ino_alloc_per_lock=<number>
This option determines how many inode numbers are allocated in the same
cluster lock. The default, and maximum, is 1024. The minimum is 1.
Allocating fewer inodes per lock can allow more parallelism between
mounts because there are more locks that cover the same number of
created files. This can be helpful when working with smaller numbers of
large files.
.TP
.B log_merge_wait_timeout_ms=<number>
This option sets the amount of time, in milliseconds, that log merge
creation can wait before timing out. This setting is per-mount, only
@@ -138,23 +130,6 @@ the server for the filesystem if it is elected leader.
The assigned number must match one of the slots defined with \-Q options
when the filesystem was created with mkfs. If the number assigned
doesn't match a number created during mkfs then the mount will fail.
.TP
.B tcp_keepalive_timeout_ms=<number>
This option sets the amount of time, in milliseconds, that a client
connection will wait for active TCP packets, before deciding that
the connection is dead. This setting is per-mount and only changes
the behavior of that mount.
.sp
The default value of this setting is 60000msec (60s). Any precision
beyond a whole second is likely unrealistic due to the nature of
TCP keepalive mechanisms in the Linux kernel. Valid values are any
value higher than 3000 (3s).
.sp
The TCP keepalive mechanism is complex and observing a lost connection
quickly is important to maintain cluster stability. If the local
network suffers from intermittent outages this option may provide
some respite to overcome these outages without the cluster becoming
desynchronized.
.SH VOLUME OPTIONS
Volume options are persistent options which are stored in the super
block in the metadata device and which apply to all mounts of the volume.

View File

@@ -76,6 +76,41 @@ run when the file system will not be mounted.
.RE
.PD
.TP
.BI "check META-DEVICE DATA-DEVICE [-d|--debug FILE]"
.sp
Performs an offline file system check. The program iterates through all the
data structures on disk directly - the filesystem must not be mounted while
this operation is running.
.RS 1.0i
.PD 0
.sp
.TP
.B "-d, --debug FILE"
An output file where the program can output debug information about the
state of the filesystem as it performs the check. If
.B FILE
is "-", the debug output is written to the Standard Error output.
.RE
.sp
.B RETURN VALUE
The check function can return the following exit codes:
.RS
.TP
\fB 0 \fR - no filesystem issues detected
.TP
\fB 1 \fR - file system issues were corrected
.TP
\fB 4 \fR - file system issues were detected but left uncorrected
.TP
\fB 8 \fR - operational error
.TP
\fB 16 \fR - usage error
.TP
\fB 32 \fR - cancelled by user (SIGINT)
.RE
.PD
.TP
.BI "counters [-t|--table] SYSFS-DIR"
.sp

View File

@@ -54,6 +54,8 @@ cp man/*.8.gz $RPM_BUILD_ROOT%{_mandir}/man8/.
install -m 755 -D src/scoutfs $RPM_BUILD_ROOT%{_sbindir}/scoutfs
install -m 644 -D src/ioctl.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/ioctl.h
install -m 644 -D src/format.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/format.h
install -m 644 -D src/parallel_restore.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/parallel_restore.h
install -m 644 -D src/scoutfs_parallel_restore.a $RPM_BUILD_ROOT%{_libdir}/scoutfs/libscoutfs_parallel_restore.a
install -m 755 -D fenced/scoutfs-fenced $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/scoutfs-fenced
install -m 644 -D fenced/scoutfs-fenced.service $RPM_BUILD_ROOT%{_unitdir}/scoutfs-fenced.service
install -m 644 -D fenced/scoutfs-fenced.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-fenced.conf.example
@@ -70,6 +72,7 @@ install -m 644 -D fenced/scoutfs-fenced.conf.example $RPM_BUILD_ROOT%{_sysconfdi
%files -n scoutfs-devel
%defattr(644,root,root,755)
%{_includedir}/scoutfs
%{_libdir}/scoutfs
%clean
rm -rf %{buildroot}

View File

@@ -1,7 +1,7 @@
#!/bin/bash
# can we find sparse? If not, we're done.
which sparse > /dev/null 2>&1 || exit 0
#
# one of the problems with using sparse in userspace is that it picks up
@@ -22,11 +22,6 @@ RE="$RE|warning: memset with byte count of 4194304"
# some sparse versions don't know about some builtins
RE="$RE|error: undefined identifier '__builtin_fpclassify'"
# on el8, sparse can't handle __has_include for some reason when _GNU_SOURCE
# is defined, and we need that for O_DIRECT.
RE="$RE|note: in included file .through /usr/include/sys/stat.h.:"
RE="$RE|/usr/include/bits/statx.h:30:6: error: "
#
# don't filter out 'too many errors' here, it can signify that
# sparse doesn't understand something and is throwing a *ton*

166
utils/src/check/alloc.c Normal file
View File

@@ -0,0 +1,166 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <sys/mman.h>
#include <errno.h>
#include "sparse.h"
#include "util.h"
#include "format.h"
#include "bitmap.h"
#include "key.h"
#include "alloc.h"
#include "block.h"
#include "btree.h"
#include "extent.h"
#include "iter.h"
#include "sns.h"
/*
* We check the list blocks serially.
*
* XXX:
* - compare ref seqs
* - detect cycles?
*/
int alloc_list_meta_iter(struct scoutfs_alloc_list_head *lhead, extent_cb_t cb, void *cb_arg)
{
struct scoutfs_alloc_list_block *lblk;
struct scoutfs_block_ref ref;
struct block *blk = NULL;
u64 blkno;
int ret;
ref = lhead->ref;
while (ref.blkno) {
blkno = le64_to_cpu(ref.blkno);
ret = cb(blkno, 1, cb_arg);
if (ret < 0) {
ret = xlate_iter_errno(ret);
goto out;
}
ret = block_get(&blk, blkno, 0);
if (ret < 0)
goto out;
lblk = block_buf(blk);
/* XXX verify block */
ret = block_hdr_valid(blk, blkno, 0, SCOUTFS_BLOCK_MAGIC_ALLOC_LIST);
if (ret < 0)
goto out;
/* XXX sort? maybe */
ref = lblk->next;
block_put(&blk);
}
ret = 0;
out:
return ret;
}
int alloc_root_meta_iter(struct scoutfs_alloc_root *root, extent_cb_t cb, void *cb_arg)
{
return btree_meta_iter(&root->root, cb, cb_arg);
}
int alloc_list_extent_iter(struct scoutfs_alloc_list_head *lhead, extent_cb_t cb, void *cb_arg)
{
struct scoutfs_alloc_list_block *lblk;
struct scoutfs_block_ref ref;
struct block *blk = NULL;
u64 blkno;
int ret;
int i;
ref = lhead->ref;
while (ref.blkno) {
blkno = le64_to_cpu(ref.blkno);
ret = block_get(&blk, blkno, 0);
if (ret < 0)
goto out;
sns_push("alloc_list_block", blkno, 0);
lblk = block_buf(blk);
/* XXX verify block */
ret = block_hdr_valid(blk, blkno, 0, SCOUTFS_BLOCK_MAGIC_ALLOC_LIST);
if (ret < 0)
goto out;
/* XXX sort? maybe */
ret = 0;
for (i = 0; i < le32_to_cpu(lblk->nr); i++) {
blkno = le64_to_cpu(lblk->blknos[le32_to_cpu(lblk->start) + i]);
ret = cb(blkno, 1, cb_arg);
if (ret < 0)
break;
}
ref = lblk->next;
block_put(&blk);
sns_pop();
if (ret < 0) {
ret = xlate_iter_errno(ret);
goto out;
}
}
ret = 0;
out:
return ret;
}
static bool valid_free_extent_key(struct scoutfs_key *key)
{
return (key->sk_zone == SCOUTFS_FREE_EXTENT_BLKNO_ZONE ||
key->sk_zone == SCOUTFS_FREE_EXTENT_ORDER_ZONE) &&
(!key->_sk_fourth && !key->sk_type &&
(key->sk_zone == SCOUTFS_FREE_EXTENT_ORDER_ZONE || !key->_sk_third));
}
static int free_item_cb(struct scoutfs_key *key, void *val, u16 val_len, void *cb_arg)
{
struct extent_cb_arg_t *ecba = cb_arg;
u64 start;
u64 len;
/* XXX not sure these eios are what we want */
if (val_len != 0)
return -EIO;
if (!valid_free_extent_key(key))
return -EIO;
if (key->sk_zone == SCOUTFS_FREE_EXTENT_ORDER_ZONE)
return -ECHECK_ITER_DONE;
start = le64_to_cpu(key->skfb_end) - le64_to_cpu(key->skfb_len) + 1;
len = le64_to_cpu(key->skfb_len);
return ecba->cb(start, len, ecba->cb_arg);
}
/*
* Call the callback with each of the primary BLKNO free extents stored
* in item in the given alloc root. It doesn't visit the secondary
* ORDER extents.
*/
int alloc_root_extent_iter(struct scoutfs_alloc_root *root, extent_cb_t cb, void *cb_arg)
{
struct extent_cb_arg_t ecba = { .cb = cb, .cb_arg = cb_arg };
return btree_item_iter(&root->root, free_item_cb, &ecba);
}

12
utils/src/check/alloc.h Normal file
View File

@@ -0,0 +1,12 @@
#ifndef _SCOUTFS_UTILS_CHECK_ALLOC_H
#define _SCOUTFS_UTILS_CHECK_ALLOC_H
#include "extent.h"
int alloc_list_meta_iter(struct scoutfs_alloc_list_head *lhead, extent_cb_t cb, void *cb_arg);
int alloc_root_meta_iter(struct scoutfs_alloc_root *root, extent_cb_t cb, void *cb_arg);
int alloc_list_extent_iter(struct scoutfs_alloc_list_head *lhead, extent_cb_t cb, void *cb_arg);
int alloc_root_extent_iter(struct scoutfs_alloc_root *root, extent_cb_t cb, void *cb_arg);
#endif
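
A minimal usage sketch (assumed, not part of the diff): the extent iterators hand the callback (start, len) pairs, so summing free space is plain accumulation.

#include "sparse.h"
#include "alloc.h"

/* illustrative: total up the free blocks a root's extents describe */
static int sum_free_cb(u64 start, u64 len, void *arg)
{
	u64 *total = arg;

	*total += len;
	return 0;
}

/* e.g.: u64 total = 0; alloc_root_extent_iter(&super->data_alloc, sum_free_cb, &total); */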

613
utils/src/check/block.c Normal file
View File

@@ -0,0 +1,613 @@
#define _ISOC11_SOURCE /* aligned_alloc */
#define _DEFAULT_SOURCE /* syscall() */
#include <stdlib.h>
#include <unistd.h>
#include <stdbool.h>
#include <stdio.h>
#include <errno.h>
#include <sys/syscall.h>
#include <linux/aio_abi.h>
#include "sparse.h"
#include "util.h"
#include "format.h"
#include "list.h"
#include "cmp.h"
#include "hash.h"
#include "block.h"
#include "debug.h"
#include "super.h"
#include "eno.h"
#include "crc.h"
#include "sns.h"
static struct block_data {
struct list_head *hash_lists;
size_t hash_nr;
struct list_head active_head;
struct list_head inactive_head;
struct list_head dirty_list;
size_t nr_active;
size_t nr_inactive;
size_t nr_dirty;
int meta_fd;
size_t max_cached;
size_t nr_events;
aio_context_t ctx;
struct iocb *iocbs;
struct iocb **iocbps;
struct io_event *events;
} global_bdat;
struct block {
struct list_head hash_head;
struct list_head lru_head;
struct list_head dirty_head;
struct list_head submit_head;
unsigned long refcount;
unsigned long uptodate:1,
active:1;
u64 blkno;
void *buf;
size_t size;
};
#define BLK_FMT \
"blkno %llu rc %ld d %u a %u"
#define BLK_ARG(blk) \
(blk)->blkno, (blk)->refcount, !list_empty(&(blk)->dirty_head), blk->active
#define debug_blk(blk, fmt, args...) \
debug(fmt " " BLK_FMT, ##args, BLK_ARG(blk))
/*
* This just allocates and initializes the block. The caller is
* responsible for putting it on the appropriate initial lists and
* managing refcounts.
*/
static struct block *alloc_block(struct block_data *bdat, u64 blkno, size_t size)
{
struct block *blk;
blk = calloc(1, sizeof(struct block));
if (blk) {
blk->buf = aligned_alloc(4096, size); /* XXX static alignment :/ */
if (!blk->buf) {
free(blk);
blk = NULL;
} else {
INIT_LIST_HEAD(&blk->hash_head);
INIT_LIST_HEAD(&blk->lru_head);
INIT_LIST_HEAD(&blk->dirty_head);
INIT_LIST_HEAD(&blk->submit_head);
blk->blkno = blkno;
blk->size = size;
}
}
return blk;
}
static void free_block(struct block_data *bdat, struct block *blk)
{
debug_blk(blk, "free");
if (!list_empty(&blk->lru_head)) {
if (blk->active)
bdat->nr_active--;
else
bdat->nr_inactive--;
list_del(&blk->lru_head);
}
if (!list_empty(&blk->dirty_head)) {
bdat->nr_dirty--;
list_del(&blk->dirty_head);
}
if (!list_empty(&blk->hash_head))
list_del(&blk->hash_head);
if (!list_empty(&blk->submit_head))
list_del(&blk->submit_head);
free(blk->buf);
free(blk);
}
static bool blk_is_dirty(struct block *blk)
{
return !list_empty(&blk->dirty_head);
}
/*
* Rebalance the cache.
*
* First we shrink the cache to limit it to max_cached blocks.
* Logically, we walk from oldest to newest in the inactive list and
* then in the active list. Since these lists are physically one
* list_head list we achieve this with a reverse walk starting from the
* active head.
*
* Then we rebalance the size of the two lists. The constraint is that
* we don't let the active list grow larger than the inactive list. We
* move blocks from the oldest tail of the active list to the newest
* head of the inactive list.
*
* <- [active head] <-> [ .. active list .. ] <-> [inactive head] <-> [ .. inactive list .. ] ->
*/
static void rebalance_cache(struct block_data *bdat)
{
struct block *blk;
struct block *blk_;
list_for_each_entry_safe_reverse(blk, blk_, &bdat->active_head, lru_head) {
if ((bdat->nr_active + bdat->nr_inactive) < bdat->max_cached)
break;
if (&blk->lru_head == &bdat->inactive_head || blk->refcount > 0 ||
blk_is_dirty(blk))
continue;
free_block(bdat, blk);
}
list_for_each_entry_safe_reverse(blk, blk_, &bdat->inactive_head, lru_head) {
if (bdat->nr_active <= bdat->nr_inactive || &blk->lru_head == &bdat->active_head)
break;
list_move(&blk->lru_head, &bdat->inactive_head);
blk->active = 0;
bdat->nr_active--;
bdat->nr_inactive++;
}
}
static void make_active(struct block_data *bdat, struct block *blk)
{
if (!blk->active) {
if (!list_empty(&blk->lru_head)) {
list_move(&blk->lru_head, &bdat->active_head);
bdat->nr_inactive--;
} else {
list_add(&blk->lru_head, &bdat->active_head);
}
blk->active = 1;
bdat->nr_active++;
}
}
static int compar_iocbp(const void *A, const void *B)
{
struct iocb *a = *(struct iocb **)A;
struct iocb *b = *(struct iocb **)B;
return scoutfs_cmp(a->aio_offset, b->aio_offset);
}
static int submit_and_wait(struct block_data *bdat, struct list_head *list)
{
struct io_event *event;
struct iocb *iocb;
struct block *blk;
int ret;
int err;
int nr;
int i;
err = 0;
nr = 0;
list_for_each_entry(blk, list, submit_head) {
iocb = &bdat->iocbs[nr];
bdat->iocbps[nr] = iocb;
memset(iocb, 0, sizeof(struct iocb));
iocb->aio_data = (intptr_t)blk;
iocb->aio_lio_opcode = blk_is_dirty(blk) ? IOCB_CMD_PWRITE : IOCB_CMD_PREAD;
iocb->aio_fildes = bdat->meta_fd;
iocb->aio_buf = (intptr_t)blk->buf;
iocb->aio_nbytes = blk->size;
iocb->aio_offset = blk->blkno * blk->size;
nr++;
debug_blk(blk, "submit");
if ((nr < bdat->nr_events) && blk->submit_head.next != list)
continue;
qsort(bdat->iocbps, nr, sizeof(bdat->iocbps[0]), compar_iocbp);
ret = syscall(__NR_io_submit, bdat->ctx, nr, bdat->iocbps);
if (ret != nr) {
if (ret >= 0)
errno = EIO;
ret = -errno;
fprintf(stderr, "fatal system error submitting async IO: "ENO_FMT"\n",
ENO_ARG(-ret));
goto out;
}
ret = syscall(__NR_io_getevents, bdat->ctx, nr, nr, bdat->events, NULL);
if (ret != nr) {
if (ret >= 0)
errno = EIO;
ret = -errno;
fprintf(stderr, "fatal system error getting IO events: "ENO_FMT"\n",
ENO_ARG(-ret));
goto out;
}
ret = 0;
for (i = 0; i < nr; i++) {
event = &bdat->events[i];
iocb = (struct iocb *)(intptr_t)event->obj;
blk = (struct block *)(intptr_t)event->data;
debug_blk(blk, "complete res %lld", (long long)event->res);
if (event->res >= 0 && event->res != blk->size)
event->res = -EIO;
/* io errors are fatal */
if (event->res < 0) {
ret = event->res;
goto out;
}
if (iocb->aio_lio_opcode == IOCB_CMD_PREAD) {
blk->uptodate = 1;
} else {
list_del_init(&blk->dirty_head);
bdat->nr_dirty--;
}
}
nr = 0;
}
ret = 0;
out:
return ret ?: err;
}
static void inc_refcount(struct block *blk)
{
blk->refcount++;
}
void block_put(struct block **blkp)
{
struct block_data *bdat = &global_bdat;
struct block *blk = *blkp;
if (blk) {
blk->refcount--;
*blkp = NULL;
rebalance_cache(bdat);
}
}
static struct list_head *hash_bucket(struct block_data *bdat, u64 blkno)
{
u32 hash = scoutfs_hash32(&blkno, sizeof(blkno));
return &bdat->hash_lists[hash % bdat->hash_nr];
}
int block_hdr_valid(struct block *blk, u64 blkno, int bf, u32 magic)
{
struct scoutfs_block_header *hdr;
size_t size = (bf & BF_SM) ? SCOUTFS_BLOCK_SM_SIZE : SCOUTFS_BLOCK_LG_SIZE;
int ret;
u32 crc;
ret = block_get(&blk, blkno, bf);
if (ret < 0) {
fprintf(stderr, "error reading block %llu\n", blkno);
goto out;
}
hdr = block_buf(blk);
crc = crc_block(hdr, size);
/*
* a bad CRC is easy to repair, so we pass a different error code
* back. Unless the other data is also wrong - then it's EINVAL
* to signal that this isn't a valid block hdr at all.
*/
if (le32_to_cpu(hdr->crc) != crc)
ret = -EIO; /* keep checking other fields */
if (le32_to_cpu(hdr->magic) != magic)
ret = -EINVAL;
/*
* Our first caller fills in global_super. Until this completes,
* we can't do this check.
*/
if ((blkno != SCOUTFS_SUPER_BLKNO) &&
(hdr->fsid != global_super->hdr.fsid))
ret = -EINVAL;
block_put(&blk);
debug("%s blk_hdr_valid blkno %llu size %lu crc 0x%08x magic 0x%08x ret %d",
sns_str(), blkno, size, le32_to_cpu(hdr->crc), le32_to_cpu(hdr->magic),
ret);
out:
return ret;
}
static struct block *get_or_alloc(struct block_data *bdat, u64 blkno, int bf)
{
struct list_head *bucket = hash_bucket(bdat, blkno);
struct block *search;
struct block *blk;
size_t size;
size = (bf & BF_SM) ? SCOUTFS_BLOCK_SM_SIZE : SCOUTFS_BLOCK_LG_SIZE;
blk = NULL;
list_for_each_entry(search, bucket, hash_head) {
if (search->blkno == blkno && search->size == size) {
blk = search;
break;
}
}
if (!blk) {
blk = alloc_block(bdat, blkno, size);
if (blk) {
list_add(&blk->hash_head, bucket);
list_add(&blk->lru_head, &bdat->inactive_head);
bdat->nr_inactive++;
}
}
if (blk)
inc_refcount(blk);
return blk;
}
/*
* Get a block.
*
* The caller holds a refcount to the block while it's in use that
* prevents it from being removed from the cache. It must be dropped
* with block_put();
*/
int block_get(struct block **blk_ret, u64 blkno, int bf)
{
struct block_data *bdat = &global_bdat;
struct block *blk;
LIST_HEAD(list);
int ret;
blk = get_or_alloc(bdat, blkno, bf);
if (!blk) {
ret = -ENOMEM;
goto out;
}
if ((bf & BF_ZERO)) {
memset(blk->buf, 0, blk->size);
blk->uptodate = 1;
}
if (bf & BF_OVERWRITE)
blk->uptodate = 1;
if (!blk->uptodate) {
list_add(&blk->submit_head, &list);
ret = submit_and_wait(bdat, &list);
list_del_init(&blk->submit_head);
if (ret < 0)
goto out;
}
if ((bf & BF_DIRTY) && !blk_is_dirty(blk)) {
list_add_tail(&blk->dirty_head, &bdat->dirty_list);
bdat->nr_dirty++;
}
make_active(bdat, blk);
rebalance_cache(bdat);
ret = 0;
out:
if (ret < 0)
block_put(&blk);
*blk_ret = blk;
return ret;
}
void *block_buf(struct block *blk)
{
return blk->buf;
}
size_t block_size(struct block *blk)
{
return blk->size;
}
/*
* Drop the block from the cache, regardless of whether it is dirty.
* This is used to avoid writing blocks which were dirtied but then
* later freed.
*
* The block is immediately freed and can't be referenced after this
* returns.
*/
void block_drop(struct block **blkp)
{
struct block_data *bdat = &global_bdat;
free_block(bdat, *blkp);
*blkp = NULL;
rebalance_cache(bdat);
}
/*
* This doesn't quite work for mixing large and small blocks, but that's
* fine, we never do that.
*/
static int compar_u64(const void *A, const void *B)
{
u64 a = *((u64 *)A);
u64 b = *((u64 *)B);
return scoutfs_cmp(a, b);
}
/*
* This read-ahead is synchronous and errors are ignored. If any of the
* blknos aren't present in the cache then we issue concurrent reads for
* them and wait. Any existing cached blocks will be left as is.
*
* We might be trying to read a lot more than the number of events so we
* sort the caller's blknos before iterating over them rather than
* relying on submission sorting the blocks in each submitted set.
*/
void block_readahead(u64 *blknos, size_t nr)
{
struct block_data *bdat = &global_bdat;
struct block *blk;
struct block *blk_;
LIST_HEAD(list);
size_t i;
if (nr == 0)
return;
qsort(blknos, nr, sizeof(blknos[0]), compar_u64);
for (i = 0; i < nr; i++) {
blk = get_or_alloc(bdat, blknos[i], 0);
if (blk) {
if (!blk->uptodate)
list_add_tail(&blk->submit_head, &list);
else
block_put(&blk);
}
}
(void)submit_and_wait(bdat, &list);
list_for_each_entry_safe(blk, blk_, &list, submit_head) {
list_del_init(&blk->submit_head);
block_put(&blk);
}
rebalance_cache(bdat);
}
/*
* The caller's block changes form a consistent transaction. If the number
* of dirty blocks is large enough we issue a write.
*/
int block_try_commit(bool force)
{
struct block_data *bdat = &global_bdat;
struct block *blk;
struct block *blk_;
LIST_HEAD(list);
int ret;
if (!force && bdat->nr_dirty < bdat->nr_events)
return 0;
list_for_each_entry(blk, &bdat->dirty_list, dirty_head) {
list_add_tail(&blk->submit_head, &list);
inc_refcount(blk);
}
ret = submit_and_wait(bdat, &list);
list_for_each_entry_safe(blk, blk_, &list, submit_head) {
list_del_init(&blk->submit_head);
block_put(&blk);
}
if (ret < 0) {
fprintf(stderr, "error writing dirty transaction blocks\n");
goto out;
}
ret = block_get(&blk, SCOUTFS_SUPER_BLKNO, BF_SM | BF_OVERWRITE | BF_DIRTY);
if (ret == 0) {
list_add(&blk->submit_head, &list);
ret = submit_and_wait(bdat, &list);
list_del_init(&blk->submit_head);
block_put(&blk);
} else {
ret = -ENOMEM;
}
if (ret < 0)
fprintf(stderr, "error writing super block to commit transaction\n");
out:
rebalance_cache(bdat);
return ret;
}
int block_setup(int meta_fd, size_t max_cached_bytes, size_t max_dirty_bytes)
{
struct block_data *bdat = &global_bdat;
size_t i;
int ret;
bdat->max_cached = DIV_ROUND_UP(max_cached_bytes, SCOUTFS_BLOCK_LG_SIZE);
bdat->hash_nr = bdat->max_cached / 4;
bdat->nr_events = DIV_ROUND_UP(max_dirty_bytes, SCOUTFS_BLOCK_LG_SIZE);
bdat->iocbs = calloc(bdat->nr_events, sizeof(bdat->iocbs[0]));
bdat->iocbps = calloc(bdat->nr_events, sizeof(bdat->iocbps[0]));
bdat->events = calloc(bdat->nr_events, sizeof(bdat->events[0]));
bdat->hash_lists = calloc(bdat->hash_nr, sizeof(bdat->hash_lists[0]));
if (!bdat->iocbs || !bdat->iocbps || !bdat->events || !bdat->hash_lists) {
ret = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&bdat->active_head);
INIT_LIST_HEAD(&bdat->inactive_head);
INIT_LIST_HEAD(&bdat->dirty_list);
bdat->meta_fd = meta_fd;
list_add(&bdat->inactive_head, &bdat->active_head);
for (i = 0; i < bdat->hash_nr; i++)
INIT_LIST_HEAD(&bdat->hash_lists[i]);
ret = syscall(__NR_io_setup, bdat->nr_events, &bdat->ctx);
out:
if (ret < 0) {
free(bdat->iocbs);
free(bdat->iocbps);
free(bdat->events);
free(bdat->hash_lists);
}
return ret;
}
void block_shutdown(void)
{
struct block_data *bdat = &global_bdat;
syscall(SYS_io_destroy, bdat->ctx);
free(bdat->iocbs);
free(bdat->iocbps);
free(bdat->events);
free(bdat->hash_lists);
}

34
utils/src/check/block.h Normal file
View File

@@ -0,0 +1,34 @@
#ifndef _SCOUTFS_UTILS_CHECK_BLOCK_H_
#define _SCOUTFS_UTILS_CHECK_BLOCK_H_
#include <unistd.h>
#include <stdbool.h>
struct block;
#include "sparse.h"
/* block flags passed to block_get() */
enum {
BF_ZERO = (1 << 0), /* zero contents buf as block is returned */
BF_DIRTY = (1 << 1), /* block will be written with transaction */
BF_SM = (1 << 2), /* small 4k block instead of large 64k block */
BF_OVERWRITE = (1 << 3), /* caller will overwrite contents, don't read */
};
int block_get(struct block **blk_ret, u64 blkno, int bf);
void block_put(struct block **blkp);
void *block_buf(struct block *blk);
size_t block_size(struct block *blk);
void block_drop(struct block **blkp);
void block_readahead(u64 *blknos, size_t nr);
int block_try_commit(bool force);
int block_setup(int meta_fd, size_t max_cached_bytes, size_t max_dirty_bytes);
void block_shutdown(void);
int block_hdr_valid(struct block *blk, u64 blkno, int bf, u32 magic);
#endif
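
A minimal usage sketch, assuming block_setup() has already run against the meta device fd (this helper is illustrative, not part of the source):

#include <stdbool.h>
#include "sparse.h"
#include "block.h"

/* illustrative: dirty a zeroed large block and commit the transaction */
static int write_zeroed_block(u64 blkno)
{
	struct block *blk = NULL;
	int ret;

	/* BF_ZERO skips the read and hands back a zeroed, uptodate buffer */
	ret = block_get(&blk, blkno, BF_ZERO | BF_DIRTY);
	if (ret < 0)
		return ret;

	/* ... fill in the contents through block_buf(blk) ... */

	block_put(&blk);		/* drop the reference block_get() took */
	return block_try_commit(true);	/* force the dirty set out now */
}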

217
utils/src/check/btree.c Normal file
View File

@@ -0,0 +1,217 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <errno.h>
#include "sparse.h"
#include "util.h"
#include "format.h"
#include "key.h"
#include "avl.h"
#include "block.h"
#include "btree.h"
#include "extent.h"
#include "iter.h"
#include "sns.h"
#include "meta.h"
#include "problem.h"
static inline void *item_val(struct scoutfs_btree_block *bt, struct scoutfs_btree_item *item)
{
return (void *)bt + le16_to_cpu(item->val_off);
}
static void readahead_refs(struct scoutfs_btree_block *bt)
{
struct scoutfs_btree_item *item;
struct scoutfs_avl_node *node;
struct scoutfs_block_ref *ref;
u64 *blknos;
u64 blkno;
u16 valid = 0;
u16 nr = le16_to_cpu(bt->nr_items);
int i;
blknos = calloc(nr, sizeof(blknos[0]));
if (!blknos)
return;
node = avl_first(&bt->item_root);
for (i = 0; i < nr; i++) {
item = container_of(node, struct scoutfs_btree_item, node);
ref = item_val(bt, item);
blkno = le64_to_cpu(ref->blkno);
if (valid_meta_blkno(blkno))
blknos[valid++] = blkno;
node = avl_next(&bt->item_root, &item->node);
}
if (valid > 0)
block_readahead(blknos, valid);
free(blknos);
}
/*
* Call the callback on the referenced block. Then if the block
* contains references, read it and recurse into all its references.
*/
static int btree_ref_meta_iter(struct scoutfs_block_ref *ref, unsigned level, extent_cb_t cb,
void *cb_arg)
{
struct scoutfs_btree_item *item;
struct scoutfs_btree_block *bt;
struct scoutfs_avl_node *node;
struct block *blk = NULL;
u64 blkno;
int ret;
int i;
blkno = le64_to_cpu(ref->blkno);
if (!blkno)
return 0;
ret = cb(blkno, 1, cb_arg);
if (ret < 0) {
ret = xlate_iter_errno(ret);
return ret;
}
if (level == 0)
return 0;
ret = block_get(&blk, blkno, 0);
if (ret < 0)
return ret;
ret = block_hdr_valid(blk, blkno, 0, SCOUTFS_BLOCK_MAGIC_BTREE);
if (ret < 0) {
	block_put(&blk);
	return ret;
}
sns_push("btree_parent", blkno, 0);
bt = block_buf(blk);
/* XXX integrate verification with block cache */
if (bt->level != level) {
problem(PB_BTREE_BLOCK_BAD_LEVEL, "expected %u level %u", level, bt->level);
ret = -EINVAL;
goto out;
}
/* read-ahead last level of parents */
if (level == 2)
readahead_refs(bt);
node = avl_first(&bt->item_root);
for (i = 0; i < le16_to_cpu(bt->nr_items); i++) {
item = container_of(node, struct scoutfs_btree_item, node);
ref = item_val(bt, item);
ret = btree_ref_meta_iter(ref, level - 1, cb, cb_arg);
if (ret < 0)
goto out;
node = avl_next(&bt->item_root, &item->node);
}
ret = 0;
out:
block_put(&blk);
sns_pop();
return ret;
}
int btree_meta_iter(struct scoutfs_btree_root *root, extent_cb_t cb, void *cb_arg)
{
/* XXX check root */
if (root->height == 0)
return 0;
return btree_ref_meta_iter(&root->ref, root->height - 1, cb, cb_arg);
}
static int btree_ref_item_iter(struct scoutfs_block_ref *ref, unsigned level,
btree_item_cb_t cb, void *cb_arg)
{
struct scoutfs_btree_item *item;
struct scoutfs_btree_block *bt;
struct scoutfs_avl_node *node;
struct block *blk = NULL;
u64 blkno;
int ret;
int i;
blkno = le64_to_cpu(ref->blkno);
if (!blkno)
return 0;
ret = block_get(&blk, blkno, 0);
if (ret < 0)
return ret;
if (level)
sns_push("btree_parent", blkno, 0);
else
sns_push("btree_leaf", blkno, 0);
ret = block_hdr_valid(blk, blkno, 0, SCOUTFS_BLOCK_MAGIC_BTREE);
if (ret < 0)
	goto out;
bt = block_buf(blk);
/* XXX integrate verification with block cache */
if (bt->level != level) {
problem(PB_BTREE_BLOCK_BAD_LEVEL, "expected %u level %u", level, bt->level);
ret = -EINVAL;
goto out;
}
/* read-ahead leaves that contain items */
if (level == 1)
readahead_refs(bt);
node = avl_first(&bt->item_root);
for (i = 0; i < le16_to_cpu(bt->nr_items); i++) {
item = container_of(node, struct scoutfs_btree_item, node);
if (level) {
ref = item_val(bt, item);
ret = btree_ref_item_iter(ref, level - 1, cb, cb_arg);
} else {
ret = cb(&item->key, item_val(bt, item),
le16_to_cpu(item->val_len), cb_arg);
debug("free item key "SK_FMT" ret %d", SK_ARG(&item->key), ret);
}
if (ret < 0) {
ret = xlate_iter_errno(ret);
goto out;
}
node = avl_next(&bt->item_root, &item->node);
}
ret = 0;
out:
block_put(&blk);
sns_pop();
return ret;
}
int btree_item_iter(struct scoutfs_btree_root *root, btree_item_cb_t cb, void *cb_arg)
{
/* XXX check root */
if (root->height == 0)
return 0;
return btree_ref_item_iter(&root->ref, root->height - 1, cb, cb_arg);
}

14
utils/src/check/btree.h Normal file
View File

@@ -0,0 +1,14 @@
#ifndef _SCOUTFS_UTILS_CHECK_BTREE_H_
#define _SCOUTFS_UTILS_CHECK_BTREE_H_
#include "util.h"
#include "format.h"
#include "extent.h"
typedef int (*btree_item_cb_t)(struct scoutfs_key *key, void *val, u16 val_len, void *cb_arg);
int btree_meta_iter(struct scoutfs_btree_root *root, extent_cb_t cb, void *cb_arg);
int btree_item_iter(struct scoutfs_btree_root *root, btree_item_cb_t cb, void *cb_arg);
#endif
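
As an illustrative sketch (not from the diff), a leaf-item callback plugs into btree_item_iter() like so; returning a negative errno stops the walk:

#include "sparse.h"
#include "btree.h"

/* illustrative: count every leaf item under a btree root */
static int count_item_cb(struct scoutfs_key *key, void *val, u16 val_len, void *cb_arg)
{
	u64 *count = cb_arg;

	(*count)++;
	return 0;
}

/* e.g.: u64 nr = 0; btree_item_iter(&super->fs_root, count_item_cb, &nr); */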

184
utils/src/check/check.c Normal file
View File

@@ -0,0 +1,184 @@
#define _GNU_SOURCE /* O_DIRECT */
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include <assert.h>
#include <stdbool.h>
#include <argp.h>
#include "sparse.h"
#include "parse.h"
#include "util.h"
#include "format.h"
#include "ioctl.h"
#include "cmd.h"
#include "dev.h"
#include "alloc.h"
#include "block.h"
#include "debug.h"
#include "meta.h"
#include "super.h"
#include "problem.h"
struct check_args {
char *meta_device;
char *data_device;
char *debug_path;
};
static int do_check(struct check_args *args)
{
int debug_fd = -1;
int meta_fd = -1;
int data_fd = -1;
int ret;
if (args->debug_path) {
if (strcmp(args->debug_path, "-") == 0)
debug_fd = dup(STDERR_FILENO);
else
debug_fd = open(args->debug_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
if (debug_fd < 0) {
ret = -errno;
fprintf(stderr, "error opening debug output file '%s': %s (%d)\n",
args->debug_path, strerror(errno), errno);
goto out;
}
debug_enable(debug_fd);
}
meta_fd = open(args->meta_device, O_DIRECT | O_RDWR | O_EXCL);
if (meta_fd < 0) {
ret = -errno;
fprintf(stderr, "failed to open meta device '%s': %s (%d)\n",
args->meta_device, strerror(errno), errno);
goto out;
}
data_fd = open(args->data_device, O_DIRECT | O_RDWR | O_EXCL);
if (data_fd < 0) {
ret = -errno;
fprintf(stderr, "failed to open data device '%s': %s (%d)\n",
args->data_device, strerror(errno), errno);
goto out;
}
ret = block_setup(meta_fd, 128 * 1024 * 1024, 32 * 1024 * 1024);
if (ret < 0)
goto out;
/*
* At some point we may convert this to a multi-pass system where we may
* try and repair items, and, as long as repairs are made, we will rerun
* the checks more times. We may need to start counting how many problems we
* fix in the process of these loops, so that we don't stall on unrepairable
* problems and are making actual repair progress. IOW - when we do a full
* check loop without any problems fixed, we stop trying.
*/
ret = check_supers(data_fd) ?:
check_super_in_use(meta_fd) ?:
check_meta_alloc() ?:
check_super_crc();
if (ret < 0)
goto out;
debug("problem count %lu", problems_count());
if (problems_count() > 0)
printf("Problems detected.\n");
out:
/* and tear it all down */
block_shutdown();
super_shutdown();
debug_disable();
if (meta_fd >= 0)
close(meta_fd);
if (data_fd >= 0)
close(data_fd);
if (debug_fd >= 0)
close(debug_fd);
return ret;
}
static int parse_opt(int key, char *arg, struct argp_state *state)
{
struct check_args *args = state->input;
switch (key) {
case 'd':
args->debug_path = strdup_or_error(state, arg);
break;
case 'e':
case ARGP_KEY_ARG:
if (!args->meta_device)
args->meta_device = strdup_or_error(state, arg);
else if (!args->data_device)
args->data_device = strdup_or_error(state, arg);
else
argp_error(state, "more than two device arguments given");
break;
case ARGP_KEY_FINI:
if (!args->meta_device)
argp_error(state, "no metadata device argument given");
if (!args->data_device)
argp_error(state, "no data device argument given");
break;
default:
break;
}
return 0;
}
static struct argp_option options[] = {
{ "debug", 'd', "FILE_PATH", 0, "Path to debug output file, will be created or truncated"},
{ NULL }
};
static struct argp argp = {
options,
parse_opt,
"META-DEVICE DATA-DEVICE",
"Check filesystem consistency"
};
/* Exit codes used by fsck-type programs */
#define FSCK_EX_NONDESTRUCT 1 /* File system errors corrected */
#define FSCK_EX_UNCORRECTED 4 /* File system errors left uncorrected */
#define FSCK_EX_ERROR 8 /* Operational error */
#define FSCK_EX_USAGE 16 /* Usage or syntax error */
static int check_cmd(int argc, char **argv)
{
struct check_args check_args = {NULL};
int ret;
ret = argp_parse(&argp, argc, argv, 0, NULL, &check_args);
if (ret)
exit(FSCK_EX_USAGE);
ret = do_check(&check_args);
if (ret < 0)
ret = FSCK_EX_ERROR;
if (problems_count() > 0)
ret |= FSCK_EX_UNCORRECTED;
exit(ret);
}
static void __attribute__((constructor)) check_ctor(void)
{
cmd_register_argp("check", &argp, GROUP_CORE, check_cmd);
}
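
Because the uncorrected-problems code is OR-ed into the operational result, callers should treat the exit status as a bitmask. An illustrative wrapper (device paths hypothetical) using check.c's FSCK_EX_* values:

scoutfs check /dev/vg/meta /dev/vg/data
rc=$?
(( rc == 0 ))  && echo "no problems detected"
(( rc & 4 ))   && echo "problems detected and left uncorrected"
(( rc & 8 ))   && echo "operational error while checking"
(( rc == 16 )) && echo "usage error"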

16
utils/src/check/debug.c Normal file
View File

@@ -0,0 +1,16 @@
#include <stdlib.h>
#include "debug.h"
int debug_fd = -1;
void debug_enable(int fd)
{
debug_fd = fd;
}
void debug_disable(void)
{
if (debug_fd >= 0)
debug_fd = -1;
}

17
utils/src/check/debug.h Normal file
View File

@@ -0,0 +1,17 @@
#ifndef _SCOUTFS_UTILS_CHECK_DEBUG_H_
#define _SCOUTFS_UTILS_CHECK_DEBUG_H_
#include <stdio.h>
#define debug(fmt, args...) \
do { \
if (debug_fd >= 0) \
dprintf(debug_fd, fmt"\n", ##args); \
} while (0)
extern int debug_fd;
void debug_enable(int fd);
void debug_disable(void);
#endif

9
utils/src/check/eno.h Normal file
View File

@@ -0,0 +1,9 @@
#ifndef _SCOUTFS_UTILS_CHECK_ENO_H_
#define _SCOUTFS_UTILS_CHECK_ENO_H_
#include <errno.h>
#define ENO_FMT "%d (%s)"
#define ENO_ARG(eno) eno, strerror(eno)
#endif

313
utils/src/check/extent.c Normal file
View File

@@ -0,0 +1,313 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <errno.h>
#include "util.h"
#include "lk_rbtree_wrapper.h"
#include "debug.h"
#include "extent.h"
/*
* In-memory extent management in rbtree nodes.
*/
bool extents_overlap(u64 a_start, u64 a_len, u64 b_start, u64 b_len)
{
u64 a_end = a_start + a_len;
u64 b_end = b_start + b_len;
return !((a_end <= b_start) || (b_end <= a_start));
}
static int ext_contains(struct extent_node *ext, u64 start, u64 len)
{
return ext->start <= start && ext->start + ext->len >= start + len;
}
/*
* True if the given extent is bisected by the given range; there's
* leftover containing extents on both the left and right sides of the
* range in the extent.
*/
static int ext_bisected(struct extent_node *ext, u64 start, u64 len)
{
return ext->start < start && ext->start + ext->len > start + len;
}
static struct extent_node *ext_from_rbnode(struct rb_node *rbnode)
{
return rbnode ? container_of(rbnode, struct extent_node, rbnode) : NULL;
}
static struct extent_node *next_ext(struct extent_node *ext)
{
return ext ? ext_from_rbnode(rb_next(&ext->rbnode)) : NULL;
}
static struct extent_node *prev_ext(struct extent_node *ext)
{
return ext ? ext_from_rbnode(rb_prev(&ext->rbnode)) : NULL;
}
struct walk_results {
unsigned bisect_to_leaf:1;
struct extent_node *found;
struct extent_node *next;
struct rb_node *parent;
struct rb_node **node;
};
static void walk_extents(struct extent_root *root, u64 start, u64 len, struct walk_results *wlk)
{
struct rb_node **node = &root->rbroot.rb_node;
struct extent_node *ext;
u64 end = start + len;
int cmp;
wlk->found = NULL;
wlk->next = NULL;
wlk->parent = NULL;
while (*node) {
wlk->parent = *node;
ext = ext_from_rbnode(*node);
cmp = end <= ext->start ? -1 :
start >= ext->start + ext->len ? 1 : 0;
if (cmp < 0) {
node = &ext->rbnode.rb_left;
wlk->next = ext;
} else if (cmp > 0) {
node = &ext->rbnode.rb_right;
} else {
wlk->found = ext;
if (!(wlk->bisect_to_leaf && ext_bisected(ext, start, len)))
break;
/* walk right so we can insert greater right from bisection */
node = &ext->rbnode.rb_right;
}
}
wlk->node = node;
}
/*
* Return an extent that overlaps with the given range.
*/
int extent_lookup(struct extent_root *root, u64 start, u64 len, struct extent_node *found)
{
struct walk_results wlk = { 0, };
int ret;
walk_extents(root, start, len, &wlk);
if (wlk.found) {
memset(found, 0, sizeof(struct extent_node));
found->start = wlk.found->start;
found->len = wlk.found->len;
ret = 0;
} else {
ret = -ENOENT;
}
return ret;
}
/*
* Callers can iterate through direct node references and are entirely
* responsible for consistency when doing so.
*/
struct extent_node *extent_first(struct extent_root *root)
{
struct walk_results wlk = { 0, };
walk_extents(root, 0, 1, &wlk);
return wlk.found ?: wlk.next;
}
struct extent_node *extent_next(struct extent_node *ext)
{
return next_ext(ext);
}
struct extent_node *extent_prev(struct extent_node *ext)
{
return prev_ext(ext);
}
/*
* Insert a new extent into the tree. We can extend existing nodes,
* merge with neighbours, or remove existing extents entirely if we
* insert a range that fully spans existing nodes.
*/
static int walk_insert(struct extent_root *root, u64 start, u64 len, int found_err)
{
struct walk_results wlk = { 0, };
struct extent_node *ext;
struct extent_node *nei;
int ret;
walk_extents(root, start, len, &wlk);
ext = wlk.found;
if (ext && found_err) {
ret = found_err;
goto out;
}
if (!ext) {
ext = malloc(sizeof(struct extent_node));
if (!ext) {
ret = -ENOMEM;
goto out;
}
ext->start = start;
ext->len = len;
rb_link_node(&ext->rbnode, wlk.parent, wlk.node);
rb_insert_color(&ext->rbnode, &root->rbroot);
}
/* start by expanding an existing extent if our range is larger */
if (start < ext->start) {
ext->len += ext->start - start;
ext->start = start;
}
if (ext->start + ext->len < start + len)
ext->len += (start + len) - (ext->start + ext->len);
/* drop any fully spanned neighbors, possibly merging with a final adjacent one */
while ((nei = prev_ext(ext))) {
if (nei->start + nei->len < ext->start)
break;
if (nei->start < ext->start) {
ext->len += ext->start - nei->start;
ext->start = nei->start;
}
rb_erase(&nei->rbnode, &root->rbroot);
free(nei);
}
while ((nei = next_ext(ext))) {
if (ext->start + ext->len < nei->start)
break;
if (ext->start + ext->len < nei->start + nei->len)
ext->len += (nei->start + nei->len) - (ext->start + ext->len);
rb_erase(&nei->rbnode, &root->rbroot);
free(nei);
}
ret = 0;
out:
if (ret < 0)
debug("start %llu len %llu ret %d", start, len, ret);
return ret;
}
/*
* Insert a new extent. The specified extent must not overlap with any
* existing extents or -EEXIST is returned.
*/
int extent_insert_new(struct extent_root *root, u64 start, u64 len)
{
return walk_insert(root, start, len, -EEXIST);
}
/*
* Insert an extent, extending any existing extents that may overlap.
*/
int extent_insert_extend(struct extent_root *root, u64 start, u64 len)
{
return walk_insert(root, start, len, false);
}
/*
* Remove the specified extent from an existing node. The given extent must be fully
* contained in a single node or -ENOENT is returned.
*/
int extent_remove(struct extent_root *root, u64 start, u64 len)
{
struct extent_node *ext;
struct extent_node *ins;
struct walk_results wlk = {
.bisect_to_leaf = 1,
};
int ret;
walk_extents(root, start, len, &wlk);
if (!(ext = wlk.found) || !ext_contains(ext, start, len)) {
ret = -ENOENT;
goto out;
}
if (ext_bisected(ext, start, len)) {
debug("found bisected start %llu len %llu", ext->start, ext->len);
ins = malloc(sizeof(struct extent_node));
if (!ins) {
ret = -ENOMEM;
goto out;
}
ins->start = start + len;
ins->len = (ext->start + ext->len) - ins->start;
rb_link_node(&ins->rbnode, wlk.parent, wlk.node);
rb_insert_color(&ins->rbnode, &root->rbroot);
}
if (start > ext->start) {
ext->len = start - ext->start;
} else if (len < ext->len) {
ext->start += len;
ext->len -= len;
} else {
rb_erase(&ext->rbnode, &root->rbroot);
}
ret = 0;
out:
debug("start %llu len %llu ret %d", start, len, ret);
return ret;
}
void extent_root_init(struct extent_root *root)
{
root->rbroot = RB_ROOT;
root->total = 0;
}
void extent_root_free(struct extent_root *root)
{
struct extent_node *ext;
struct rb_node *node;
struct rb_node *tmp;
for (node = rb_first(&root->rbroot); node && ((tmp = rb_next(node)), 1); node = tmp) {
ext = rb_entry(node, struct extent_node, rbnode);
rb_erase(&ext->rbnode, &root->rbroot);
free(ext);
}
}
void extent_root_print(struct extent_root *root)
{
struct extent_node *ext;
struct rb_node *node;
struct rb_node *tmp;
for (node = rb_first(&root->rbroot); node && ((tmp = rb_next(node)), 1); node = tmp) {
ext = rb_entry(node, struct extent_node, rbnode);
debug(" start %llu len %llu", ext->start, ext->len);
}
}

38
utils/src/check/extent.h Normal file
View File

@@ -0,0 +1,38 @@
#ifndef _SCOUTFS_UTILS_CHECK_EXTENT_H_
#define _SCOUTFS_UTILS_CHECK_EXTENT_H_
#include "lk_rbtree_wrapper.h"
struct extent_root {
struct rb_root rbroot;
u64 total;
};
struct extent_node {
struct rb_node rbnode;
u64 start;
u64 len;
};
typedef int (*extent_cb_t)(u64 start, u64 len, void *arg);
struct extent_cb_arg_t {
extent_cb_t cb;
void *cb_arg;
};
bool extents_overlap(u64 a_start, u64 a_len, u64 b_start, u64 b_len);
int extent_lookup(struct extent_root *root, u64 start, u64 len, struct extent_node *found);
struct extent_node *extent_first(struct extent_root *root);
struct extent_node *extent_next(struct extent_node *ext);
struct extent_node *extent_prev(struct extent_node *ext);
int extent_insert_new(struct extent_root *root, u64 start, u64 len);
int extent_insert_extend(struct extent_root *root, u64 start, u64 len);
int extent_remove(struct extent_root *root, u64 start, u64 len);
void extent_root_init(struct extent_root *root);
void extent_root_free(struct extent_root *root);
void extent_root_print(struct extent_root *root);
#endif
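
A small usage sketch of the extent tree (illustrative, not in the diff), showing how extent_remove() bisects a containing node:

#include "extent.h"

/* illustrative: insert [0,100), punch out [10,15), look up the right piece */
static int extent_demo(void)
{
	struct extent_root root;
	struct extent_node found;
	int ret;

	extent_root_init(&root);

	ret = extent_insert_new(&root, 0, 100) ?:
	      extent_remove(&root, 10, 5) ?:		/* leaves [0,10) and [15,100) */
	      extent_lookup(&root, 15, 1, &found);	/* found = { .start = 15, .len = 85 } */

	extent_root_free(&root);
	return ret;
}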

540
utils/src/check/image.c Normal file
View File

@@ -0,0 +1,540 @@
#define _GNU_SOURCE /* O_DIRECT */
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include <stdbool.h>
#include <argp.h>
#include "sparse.h"
#include "bitmap.h"
#include "parse.h"
#include "util.h"
#include "format.h"
#include "crc.h"
#include "cmd.h"
#include "dev.h"
#include "alloc.h"
#include "block.h"
#include "btree.h"
#include "log_trees.h"
#include "super.h"
/* huh. */
#define OFF_MAX (off_t)((u64)((off_t)~0ULL) >> 1)
#define SCOUTFS_META_IMAGE_HEADER_MAGIC 0x8aee00d098fa60c5ULL
#define SCOUTFS_META_IMAGE_BLOCK_HEADER_MAGIC 0x70bd5e9269effd86ULL
struct scoutfs_meta_image_header {
__le64 magic;
__le64 total_bytes;
__le32 version;
} __packed;
struct scoutfs_meta_image_block_header {
__le64 magic;
__le64 offset;
__le32 size;
__le32 crc;
} __packed;
struct image_args {
char *meta_device;
bool is_read;
bool show_header;
u64 ra_window;
};
struct block_bitmaps {
unsigned long *bits;
u64 size;
u64 count;
};
#define errf(fmt, args...) \
dprintf(STDERR_FILENO, fmt, ##args)
static int set_meta_bit(u64 start, u64 len, void *arg)
{
struct block_bitmaps *bm = arg;
int ret;
if (len != 1) {
ret = -EINVAL;
} else {
if (!test_bit(bm->bits, start)) {
set_bit(bm->bits, start);
bm->count++;
}
ret = 0;
}
return ret;
}
static int get_ref_bits(struct block_bitmaps *bm)
{
struct scoutfs_super_block *super = global_super;
int ret;
u64 i;
/*
* There are almost no small blocks we need to read, so we read
* them as the large blocks that contain them to simplify the
* block reading process.
*/
set_meta_bit(SCOUTFS_SUPER_BLKNO >> SCOUTFS_BLOCK_SM_LG_SHIFT, 1, bm);
for (i = 0; i < SCOUTFS_QUORUM_BLOCKS; i++)
set_meta_bit((SCOUTFS_QUORUM_BLKNO + i) >> SCOUTFS_BLOCK_SM_LG_SHIFT, 1, bm);
ret = alloc_root_meta_iter(&super->meta_alloc[0], set_meta_bit, bm) ?:
alloc_root_meta_iter(&super->meta_alloc[1], set_meta_bit, bm) ?:
alloc_root_meta_iter(&super->data_alloc, set_meta_bit, bm) ?:
alloc_list_meta_iter(&super->server_meta_avail[0], set_meta_bit, bm) ?:
alloc_list_meta_iter(&super->server_meta_avail[1], set_meta_bit, bm) ?:
alloc_list_meta_iter(&super->server_meta_freed[0], set_meta_bit, bm) ?:
alloc_list_meta_iter(&super->server_meta_freed[1], set_meta_bit, bm) ?:
btree_meta_iter(&super->fs_root, set_meta_bit, bm) ?:
btree_meta_iter(&super->logs_root, set_meta_bit, bm) ?:
btree_meta_iter(&super->log_merge, set_meta_bit, bm) ?:
btree_meta_iter(&super->mounted_clients, set_meta_bit, bm) ?:
btree_meta_iter(&super->srch_root, set_meta_bit, bm) ?:
log_trees_meta_iter(set_meta_bit, bm);
return ret;
}
/*
* Note that this temporarily modifies the header that it's given.
*/
static __le32 calc_crc(struct scoutfs_meta_image_block_header *bh, void *buf, size_t size)
{
__le32 saved = bh->crc;
u32 crc = ~0;
bh->crc = 0;
crc = crc32c(crc, bh, sizeof(*bh));
crc = crc32c(crc, buf, size);
bh->crc = saved;
return cpu_to_le32(crc);
}
static void printf_header(struct scoutfs_meta_image_header *hdr)
{
errf("magic: 0x%016llx\n"
"total_bytes: %llu\n"
"version: %u\n",
le64_to_cpu(hdr->magic),
le64_to_cpu(hdr->total_bytes),
le32_to_cpu(hdr->version));
}
typedef ssize_t (*rw_func_t)(int fd, void *buf, size_t count, off_t offset);
static inline ssize_t rw_read(int fd, void *buf, size_t count, off_t offset)
{
return read(fd, buf, count);
}
static inline ssize_t rw_pread(int fd, void *buf, size_t count, off_t offset)
{
return pread(fd, buf, count, offset);
}
static inline ssize_t rw_write(int fd, void *buf, size_t count, off_t offset)
{
return write(fd, buf, count);
}
static inline ssize_t rw_pwrite(int fd, void *buf, size_t count, off_t offset)
{
return pwrite(fd, buf, count, offset);
}
static int rw_full_count(rw_func_t func, u64 *tot, int fd, void *buf, size_t count, off_t offset)
{
ssize_t sret;
while (count > 0) {
sret = func(fd, buf, count, offset);
if (sret <= 0 || sret > count) {
if (sret < 0)
return -errno;
else
return -EIO;
}
if (tot)
*tot += sret;
buf += sret;
count -= sret;
}
return 0;
}
static int read_image(struct image_args *args, int fd, struct block_bitmaps *bm)
{
struct scoutfs_meta_image_block_header bh;
struct scoutfs_meta_image_header hdr;
u64 opening;
void *buf;
off_t off;
u64 bit;
u64 ra;
int ret;
buf = malloc(SCOUTFS_BLOCK_LG_SIZE);
if (!buf) {
ret = -ENOMEM;
goto out;
}
hdr.magic = cpu_to_le64(SCOUTFS_META_IMAGE_HEADER_MAGIC);
hdr.total_bytes = cpu_to_le64(sizeof(hdr) +
(bm->count * (SCOUTFS_BLOCK_LG_SIZE + sizeof(bh))));
hdr.version = cpu_to_le32(1);
if (args->show_header) {
printf_header(&hdr);
ret = 0;
goto out;
}
ret = rw_full_count(rw_write, NULL, STDOUT_FILENO, &hdr, sizeof(hdr), 0);
if (ret < 0)
goto out;
opening = args->ra_window;
ra = 0;
bit = 0;
for (bit = 0; (bit = find_next_set_bit(bm->bits, bit, bm->size)) < bm->size; bit++) {
/* readahead to open the full window, then a block at a time */
do {
ra = find_next_set_bit(bm->bits, ra, bm->size);
if (ra < bm->size) {
off = ra << SCOUTFS_BLOCK_LG_SHIFT;
posix_fadvise(fd, off, SCOUTFS_BLOCK_LG_SIZE, POSIX_FADV_WILLNEED);
ra++;
if (opening)
opening -= min(opening, SCOUTFS_BLOCK_LG_SIZE);
}
} while (opening > 0);
off = bit << SCOUTFS_BLOCK_LG_SHIFT;
ret = rw_full_count(rw_pread, NULL, fd, buf, SCOUTFS_BLOCK_LG_SIZE, off);
if (ret < 0)
goto out;
/*
* Might as well try to drop the pages we've used to
* reduce memory pressure on our read-ahead pages that
* are waiting.
*/
posix_fadvise(fd, off, SCOUTFS_BLOCK_LG_SIZE, POSIX_FADV_DONTNEED);
bh.magic = cpu_to_le64(SCOUTFS_META_IMAGE_BLOCK_HEADER_MAGIC);
bh.offset = cpu_to_le64(off);
bh.size = cpu_to_le32(SCOUTFS_BLOCK_LG_SIZE);
bh.crc = calc_crc(&bh, buf, SCOUTFS_BLOCK_LG_SIZE);
ret = rw_full_count(rw_write, NULL, STDOUT_FILENO, &bh, sizeof(bh), 0) ?:
rw_full_count(rw_write, NULL, STDOUT_FILENO, buf, SCOUTFS_BLOCK_LG_SIZE, 0);
if (ret < 0)
goto out;
}
out:
free(buf);
return ret;
}
static int invalid_header(struct scoutfs_meta_image_header *hdr)
{
if (le64_to_cpu(hdr->magic) != SCOUTFS_META_IMAGE_HEADER_MAGIC) {
errf("bad image header magic 0x%016llx (!= expected %016llx)\n",
le64_to_cpu(hdr->magic), SCOUTFS_META_IMAGE_HEADER_MAGIC);
} else if (le32_to_cpu(hdr->version) != 1) {
errf("unknown image header version %u\n", le32_to_cpu(hdr->version));
} else {
return 0;
}
return -EIO;
}
/*
* Doesn't catch offset+size overflowing, presumes pwrite() will return
* an error.
*/
static int invalid_block_header(struct scoutfs_meta_image_block_header *bh)
{
if (le64_to_cpu(bh->magic) != SCOUTFS_META_IMAGE_BLOCK_HEADER_MAGIC) {
errf("bad block header magic 0x%016llx (!= expected %016llx)\n",
le64_to_cpu(bh->magic), SCOUTFS_META_IMAGE_BLOCK_HEADER_MAGIC);
} else if (le32_to_cpu(bh->size) == 0) {
errf("invalid block header size %u\n", le32_to_cpu(bh->size));
} else if (le32_to_cpu(bh->size) > SIZE_MAX) {
errf("block header size %u too large for size_t (> %zu)\n",
le32_to_cpu(bh->size), (size_t)SIZE_MAX);
} else if (le64_to_cpu(bh->offset) > OFF_MAX) {
errf("block header offset %llu too large for off_t (> %llu)\n",
le64_to_cpu(bh->offset), (u64)OFF_MAX);
} else {
return 0;
}
return -EIO;
}
static int write_image(struct image_args *args, int fd, struct block_bitmaps *bm)
{
struct scoutfs_meta_image_block_header bh;
struct scoutfs_meta_image_header hdr;
size_t writeback_batch = (2 * 1024 * 1024);
size_t buf_size;
size_t dirty;
size_t size;
off_t first;
off_t last;
off_t off;
__le32 calc;
void *buf;
u64 tot;
int ret;
tot = 0;
ret = rw_full_count(rw_read, &tot, STDIN_FILENO, &hdr, sizeof(hdr), 0);
if (ret < 0)
goto out;
if (args->show_header) {
printf_header(&hdr);
ret = 0;
goto out;
}
ret = invalid_header(&hdr);
if (ret < 0)
goto out;
dirty = 0;
first = OFF_MAX;
last = 0;
buf = NULL;
buf_size = 0;
while (tot < le64_to_cpu(hdr.total_bytes)) {
ret = rw_full_count(rw_read, &tot, STDIN_FILENO, &bh, sizeof(bh), 0);
if (ret < 0)
goto out;
ret = invalid_block_header(&bh);
if (ret < 0)
goto out;
size = le32_to_cpu(bh.size);
if (buf_size < size) {
buf = realloc(buf, size);
if (!buf) {
ret = -ENOMEM;
goto out;
}
buf_size = size;
}
ret = rw_full_count(rw_read, &tot, STDIN_FILENO, buf, size, 0);
if (ret < 0)
goto out;
calc = calc_crc(&bh, buf, size);
if (calc != bh.crc) {
errf("crc err");
ret = -EIO;
goto out;
}
off = le64_to_cpu(bh.offset);
ret = rw_full_count(rw_pwrite, NULL, fd, buf, size, off);
if (ret < 0)
goto out;
dirty += size;
first = min(first, off);
last = max(last, off);
if (dirty >= writeback_batch) {
posix_fadvise(fd, first, last + size - first, POSIX_FADV_DONTNEED);
dirty = 0;
first = OFF_MAX;
last = 0;
}
}
ret = fsync(fd);
if (ret < 0) {
ret = -errno;
goto out;
}
out:
return ret;
}
static int do_image(struct image_args *args)
{
struct block_bitmaps bm = { .bits = NULL };
int meta_fd = -1;
u64 dev_size;
mode_t mode;
int ret;
mode = args->is_read ? O_RDONLY : O_RDWR;
meta_fd = open(args->meta_device, mode);
if (meta_fd < 0) {
ret = -errno;
errf("failed to open meta device '%s': %s (%d)\n",
args->meta_device, strerror(errno), errno);
goto out;
}
if (args->is_read) {
ret = flush_device(meta_fd);
if (ret < 0)
goto out;
ret = get_device_size(args->meta_device, meta_fd, &dev_size);
if (ret < 0)
goto out;
bm.size = DIV_ROUND_UP(dev_size, SCOUTFS_BLOCK_LG_SIZE);
bm.bits = calloc(1, round_up(bm.size, BITS_PER_LONG) / 8);
if (!bm.bits) {
ret = -ENOMEM;
goto out;
}
ret = block_setup(meta_fd, 128 * 1024 * 1024, 32 * 1024 * 1024) ?:
check_supers(-1) ?:
get_ref_bits(&bm) ?:
read_image(args, meta_fd, &bm);
block_shutdown();
} else {
ret = write_image(args, meta_fd, &bm);
}
out:
free(bm.bits);
if (meta_fd >= 0)
close(meta_fd);
return ret;
}
static int parse_opt(int key, char *arg, struct argp_state *state)
{
struct image_args *args = state->input;
int ret;
switch (key) {
case 'h':
args->show_header = true;
break;
case 'r':
ret = parse_u64(arg, &args->ra_window);
if (ret)
argp_error(state, "readahead winddoe parse error");
break;
case ARGP_KEY_ARG:
if (!args->meta_device)
args->meta_device = strdup_or_error(state, arg);
else
argp_error(state, "more than two device arguments given");
break;
case ARGP_KEY_FINI:
if (!args->meta_device)
argp_error(state, "no metadata device argument given");
break;
default:
break;
}
return 0;
}
static struct argp_option options[] = {
{ "show-header", 'h', NULL, 0, "Print image header and exit without processing stream" },
{ "readahead", 'r', "NR", 0, "Maintain read-ahead window of NR blocks" },
{ NULL }
};
static struct argp read_image_argp = {
options,
parse_opt,
"META-DEVICE",
"Read metadata image stream from metadata device file"
};
#define DEFAULT_RA_WINDOW (512 * 1024)
static int read_image_cmd(int argc, char **argv)
{
struct image_args image_args = {
.is_read = true,
.ra_window = DEFAULT_RA_WINDOW,
};
int ret;
ret = argp_parse(&read_image_argp, argc, argv, 0, NULL, &image_args);
if (ret)
return ret;
return do_image(&image_args);
}
static struct argp write_image_argp = {
options,
parse_opt,
"META-DEVICE",
"Write metadata image stream to metadata device file"
};
static int write_image_cmd(int argc, char **argv)
{
struct image_args image_args = {
.is_read = false,
.ra_window = DEFAULT_RA_WINDOW,
};
int ret;
ret = argp_parse(&write_image_argp, argc, argv, 0, NULL, &image_args);
if (ret)
return ret;
return do_image(&image_args);
}
static void __attribute__((constructor)) image_ctor(void)
{
cmd_register_argp("read-metadata-image", &read_image_argp, GROUP_CORE, read_image_cmd);
cmd_register_argp("write-metadata-image", &write_image_argp, GROUP_CORE, write_image_cmd);
}
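
An illustrative pipeline for the two commands (device paths are hypothetical): the image is a stream on stdout/stdin, so it can be saved, piped, or inspected:

# capture a metadata image and restore it onto another device
scoutfs read-metadata-image /dev/vg/meta0 > meta.img
scoutfs write-metadata-image /dev/vg/meta1 < meta.img

# print only the stream header on either side
scoutfs read-metadata-image --show-header /dev/vg/meta0
scoutfs write-metadata-image --show-header /dev/vg/meta1 < meta.img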

15
utils/src/check/iter.h Normal file
View File

@@ -0,0 +1,15 @@
#ifndef _SCOUTFS_UTILS_CHECK_ITER_H_
#define _SCOUTFS_UTILS_CHECK_ITER_H_
/*
* Callbacks can return a weird -errno that we'd never otherwise use to
* indicate that iteration should stop early; the iterators translate it
* to 0 for success.
*/
#define ECHECK_ITER_DONE EL2HLT
static inline int xlate_iter_errno(int ret)
{
return ret == -ECHECK_ITER_DONE ? 0 : ret;
}
#endif
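
For illustration (assumed usage, not in the diff), an extent callback can bail out of a walk after the first hit:

#include "extent.h"
#include "iter.h"

/* illustrative: remember the first extent seen, then stop the walk */
static int first_extent_cb(u64 start, u64 len, void *arg)
{
	struct extent_node *first = arg;

	first->start = start;
	first->len = len;
	return -ECHECK_ITER_DONE;	/* xlate_iter_errno() folds this into 0 */
}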

View File

@@ -0,0 +1,98 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include "sparse.h"
#include "util.h"
#include "format.h"
#include "key.h"
#include "alloc.h"
#include "btree.h"
#include "debug.h"
#include "extent.h"
#include "iter.h"
#include "sns.h"
#include "log_trees.h"
#include "super.h"
struct iter_args {
extent_cb_t cb;
void *cb_arg;
};
static int lt_meta_iter(struct scoutfs_key *key, void *val, u16 val_len, void *cb_arg)
{
struct iter_args *ia = cb_arg;
struct scoutfs_log_trees *lt;
int ret;
if (val_len != sizeof(struct scoutfs_log_trees))
; /* XXX */
lt = val;
sns_push("log_trees", le64_to_cpu(lt->rid), le64_to_cpu(lt->nr));
debug("lt rid 0x%16llx nr %llu", le64_to_cpu(lt->rid), le64_to_cpu(lt->nr));
sns_push("meta_avail", 0, 0);
ret = alloc_list_meta_iter(&lt->meta_avail, ia->cb, ia->cb_arg);
sns_pop();
if (ret < 0)
goto out;
sns_push("meta_freed", 0, 0);
ret = alloc_list_meta_iter(&lt->meta_freed, ia->cb, ia->cb_arg);
sns_pop();
if (ret < 0)
goto out;
sns_push("item_root", 0, 0);
ret = btree_meta_iter(&lt->item_root, ia->cb, ia->cb_arg);
sns_pop();
if (ret < 0)
goto out;
if (lt->bloom_ref.blkno) {
sns_push("bloom_ref", 0, 0);
ret = ia->cb(le64_to_cpu(lt->bloom_ref.blkno), 1, ia->cb_arg);
sns_pop();
if (ret < 0) {
ret = xlate_iter_errno(ret);
goto out;
}
}
sns_push("data_avail", 0, 0);
ret = alloc_root_meta_iter(&lt->data_avail, ia->cb, ia->cb_arg);
sns_pop();
if (ret < 0)
goto out;
sns_push("data_freed", 0, 0);
ret = alloc_root_meta_iter(&lt->data_freed, ia->cb, ia->cb_arg);
sns_pop();
if (ret < 0)
goto out;
ret = 0;
out:
sns_pop();
return ret;
}
/*
* Call the callers callback with the extent of all the metadata block references contained
* in log btrees. We walk the logs_root btree items and walk all the metadata structures
* they reference.
*/
int log_trees_meta_iter(extent_cb_t cb, void *cb_arg)
{
struct scoutfs_super_block *super = global_super;
struct iter_args ia = { .cb = cb, .cb_arg = cb_arg };
return btree_item_iter(&super->logs_root, lt_meta_iter, &ia);
}

View File

@@ -0,0 +1,8 @@
#ifndef _SCOUTFS_UTILS_CHECK_LOG_TREES_H_
#define _SCOUTFS_UTILS_CHECK_LOG_TREES_H_
#include "extent.h"
int log_trees_meta_iter(extent_cb_t cb, void *cb_arg);
#endif
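
A sketch of composing this with the extent tree (illustrative, not in the diff): gathering every referenced metadata block, much as meta.c does below.

#include "extent.h"
#include "log_trees.h"

/* illustrative: gather every log-btree block reference into one extent tree */
static int collect_ref_cb(u64 start, u64 len, void *arg)
{
	return extent_insert_extend(arg, start, len);
}

/* e.g.: extent_root_init(&refs); log_trees_meta_iter(collect_ref_cb, &refs); */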

367
utils/src/check/meta.c Normal file
View File

@@ -0,0 +1,367 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <sys/mman.h>
#include <errno.h>
#include "sparse.h"
#include "util.h"
#include "format.h"
#include "bitmap.h"
#include "key.h"
#include "alloc.h"
#include "btree.h"
#include "debug.h"
#include "extent.h"
#include "sns.h"
#include "log_trees.h"
#include "meta.h"
#include "problem.h"
#include "super.h"
static struct meta_data {
struct extent_root meta_refed;
struct extent_root meta_free;
struct {
u64 ref_blocks;
u64 free_extents;
u64 free_blocks;
} stats;
} global_mdat;
bool valid_meta_blkno(u64 blkno)
{
u64 tot = le64_to_cpu(global_super->total_meta_blocks);
return blkno >= SCOUTFS_META_DEV_START_BLKNO && blkno < tot;
}
static bool valid_meta_extent(u64 start, u64 len)
{
u64 tot = le64_to_cpu(global_super->total_meta_blocks);
bool valid;
valid = len > 0 &&
start >= SCOUTFS_META_DEV_START_BLKNO &&
start < tot &&
len <= tot &&
((start + len) <= tot) &&
((start + len) > start);
debug("start %llu len %llu valid %u", start, len, !!valid);
if (!valid)
problem(PB_META_EXTENT_INVALID, "start %llu len %llu", start, len);
return valid;
}
/*
* Track references to individual metadata blocks. This uses the extent
* callback type but is only ever called for single block references.
* Any reference to a block that has already been referenced is
* considered invalid and is ignored. Later repair will resolve
* duplicate references.
*/
static int insert_meta_ref(u64 start, u64 len, void *arg)
{
struct meta_data *mdat = &global_mdat;
struct extent_root *root = arg;
int ret = 0;
/* this is tracking single metadata block references */
if (len != 1) {
ret = -EINVAL;
goto out;
}
if (valid_meta_blkno(start)) {
ret = extent_insert_new(root, start, len);
if (ret == 0)
mdat->stats.ref_blocks++;
else if (ret == -EEXIST)
problem(PB_META_REF_OVERLAPS_EXISTING, "blkno %llu", start);
}
out:
return ret;
}
static int insert_meta_free(u64 start, u64 len, void *arg)
{
struct meta_data *mdat = &global_mdat;
struct extent_root *root = arg;
int ret = 0;
if (valid_meta_extent(start, len)) {
ret = extent_insert_new(root, start, len);
if (ret == 0) {
mdat->stats.free_extents++;
mdat->stats.free_blocks += len;
} else if (ret == -EEXIST) {
problem(PB_META_FREE_OVERLAPS_EXISTING,
"start %llu llen %llu", start, len);
}
}
return ret;
}
/*
* Walk all metadata references in the system. This walk doesn't need
* to read metadata that doesn't contain any metadata references so it
* can skip the bulk of metadata blocks. This gives us the set of
* referenced metadata blocks which we can then use to repair metadata
* allocator structures.
*/
static int get_meta_refs(void)
{
struct meta_data *mdat = &global_mdat;
struct scoutfs_super_block *super = global_super;
int ret;
extent_root_init(&mdat->meta_refed);
/* XXX record reserved blocks around super as referenced */
sns_push("meta_alloc", 0, 0);
ret = alloc_root_meta_iter(&super->meta_alloc[0], insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("meta_alloc", 1, 0);
ret = alloc_root_meta_iter(&super->meta_alloc[1], insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("data_alloc", 1, 0);
ret = alloc_root_meta_iter(&super->data_alloc, insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("server_meta_avail", 0, 0);
ret = alloc_list_meta_iter(&super->server_meta_avail[0],
insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("server_meta_avail", 1, 0);
ret = alloc_list_meta_iter(&super->server_meta_avail[1],
insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("server_meta_freed", 0, 0);
ret = alloc_list_meta_iter(&super->server_meta_freed[0],
insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("server_meta_freed", 1, 0);
ret = alloc_list_meta_iter(&super->server_meta_freed[1],
insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("fs_root", 0, 0);
ret = btree_meta_iter(&super->fs_root, insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("logs_root", 0, 0);
ret = btree_meta_iter(&super->logs_root, insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("log_merge", 0, 0);
ret = btree_meta_iter(&super->log_merge, insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("mounted_clients", 0, 0);
ret = btree_meta_iter(&super->mounted_clients, insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
sns_push("srch_root", 0, 0);
ret = btree_meta_iter(&super->srch_root, insert_meta_ref, &mdat->meta_refed);
sns_pop();
if (ret < 0)
goto out;
ret = log_trees_meta_iter(insert_meta_ref, &mdat->meta_refed);
if (ret < 0)
goto out;
debug("found %llu referenced metadata blocks", mdat->stats.ref_blocks);
ret = 0;
out:
return ret;
}
static int get_meta_free(void)
{
struct meta_data *mdat = &global_mdat;
struct scoutfs_super_block *super = global_super;
int ret;
extent_root_init(&mdat->meta_free);
sns_push("meta_alloc", 0, 0);
ret = alloc_root_extent_iter(&super->meta_alloc[0], insert_meta_free, &mdat->meta_free);
sns_pop();
if (ret < 0)
goto out;
sns_push("meta_alloc", 1, 0);
ret = alloc_root_extent_iter(&super->meta_alloc[1], insert_meta_free, &mdat->meta_free);
sns_pop();
if (ret < 0)
goto out;
sns_push("server_meta_avail", 0, 0);
ret = alloc_list_extent_iter(&super->server_meta_avail[0],
insert_meta_free, &mdat->meta_free);
sns_pop();
if (ret < 0)
goto out;
sns_push("server_meta_avail", 1, 0);
ret = alloc_list_extent_iter(&super->server_meta_avail[1],
insert_meta_free, &mdat->meta_free);
sns_pop();
if (ret < 0)
goto out;
sns_push("server_meta_freed", 0, 0);
ret = alloc_list_extent_iter(&super->server_meta_freed[0],
insert_meta_free, &mdat->meta_free);
sns_pop();
if (ret < 0)
goto out;
sns_push("server_meta_freed", 1, 0);
ret = alloc_list_extent_iter(&super->server_meta_freed[1],
insert_meta_free, &mdat->meta_free);
sns_pop();
if (ret < 0)
goto out;
debug("found %llu free metadata blocks in %llu extents",
mdat->stats.free_blocks, mdat->stats.free_extents);
ret = 0;
out:
return ret;
}
/*
* All the space between referenced blocks must be recorded in the free
* extents. The free extent walk didn't check whether the extents
* overlapped with references, so we do that here. Remember that the
* single-block metadata references were merged into extents as they were
* inserted, so the referenced extents aren't necessarily a single block.
*/
static int compare_refs_and_free(void)
{
struct meta_data *mdat = &global_mdat;
struct extent_node *ref;
struct extent_node *free;
struct extent_node *next;
struct extent_node *prev;
u64 expect;
u64 start;
u64 end;
expect = 0;
ref = extent_first(&mdat->meta_refed);
free = extent_first(&mdat->meta_free);
while (ref || free) {
debug("exp %llu ref %llu.%llu free %llu.%llu",
expect, ref ? ref->start : 0, ref ? ref->len : 0,
free ? free->start : 0, free ? free->len : 0);
/* referenced block marked free: trim the overlap out of free and continue from the same point */
if (ref && free && extents_overlap(ref->start, ref->len, free->start, free->len)) {
debug("ref extent %llu.%llu overlaps free %llu %llu",
ref->start, ref->len, free->start, free->len);
start = max(ref->start, free->start);
end = min(ref->start + ref->len, free->start + free->len);
prev = extent_prev(free);
extent_remove(&mdat->meta_free, start, end - start);
if (prev)
free = extent_next(prev);
else
free = extent_first(&mdat->meta_free);
continue;
}
/* see which extent starts earlier */
if (!free || (ref && ref->start <= free->start))
next = ref;
else
next = free;
/* untracked region before next extent */
if (expect < next->start) {
debug("missing free extent %llu.%llu", expect, next->start - expect);
expect = next->start;
continue;
}
/* didn't overlap, advance past next extent */
expect = next->start + next->len;
if (next == ref)
ref = extent_next(ref);
else
free = extent_next(free);
}
return 0;
}
/*
* Check the metadata allocators by comparing the set of referenced
* blocks with the set of free blocks that are stored in free btree
* items and alloc list blocks.
*/
int check_meta_alloc(void)
{
int ret;
ret = get_meta_refs();
if (ret < 0)
goto out;
ret = get_meta_free();
if (ret < 0)
goto out;
ret = compare_refs_and_free();
if (ret < 0)
goto out;
ret = 0;
out:
return ret;
}
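
The sweep in compare_refs_and_free() is easier to see on a toy input. The standalone model below is a simplification (it drops an overlapped free extent instead of trimming it, and uses plain arrays rather than the extent trees), but it walks two sorted extent lists with the same "expect" cursor: overlapping ref/free ranges are reported, and any gap below the cursor is a missing free extent:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct ext { uint64_t start, len; };

static int overlap(struct ext *a, struct ext *b)
{
	return a->start < b->start + b->len && b->start < a->start + a->len;
}

int main(void)
{
	/* referenced blocks 1-3 and 10; free extents 4-8 and 10-11; block 9 untracked */
	struct ext refs[] = { { 1, 3 }, { 10, 1 } };
	struct ext frees[] = { { 4, 5 }, { 10, 2 } };
	size_t nref = 2, nfree = 2, r = 0, f = 0;
	uint64_t expect = 1;

	while (r < nref || f < nfree) {
		struct ext *next;

		/* a referenced block also marked free is a problem */
		if (r < nref && f < nfree && overlap(&refs[r], &frees[f])) {
			printf("ref %llu.%llu overlaps free %llu.%llu\n",
			       (unsigned long long)refs[r].start,
			       (unsigned long long)refs[r].len,
			       (unsigned long long)frees[f].start,
			       (unsigned long long)frees[f].len);
			f++;	/* the real code trims the free extent instead */
			continue;
		}

		/* advance past whichever extent starts earlier */
		if (f >= nfree || (r < nref && refs[r].start <= frees[f].start))
			next = &refs[r++];
		else
			next = &frees[f++];

		/* untracked space before the next extent is a missing free extent */
		if (expect < next->start)
			printf("missing free extent %llu.%llu\n",
			       (unsigned long long)expect,
			       (unsigned long long)(next->start - expect));
		expect = next->start + next->len;
	}
	return 0;
}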

utils/src/check/meta.h Normal file

@@ -0,0 +1,9 @@
#ifndef _SCOUTFS_UTILS_CHECK_META_H_
#define _SCOUTFS_UTILS_CHECK_META_H_
bool valid_meta_blkno(u64 blkno);
int check_meta_alloc(void);
#endif

utils/src/check/padding.c Normal file

@@ -0,0 +1,23 @@
#include <string.h>
#include <stdbool.h>
#include "util.h"
#include "padding.h"
bool padding_is_zeros(const void *data, size_t sz)
{
static char zeros[32] = {0,};
const size_t batch = array_size(zeros);
while (sz >= batch) {
if (memcmp(data, zeros, batch))
return false;
data += batch;
sz -= batch;
}
if (sz > 0 && memcmp(data, zeros, sz))
return false;
return true;
}
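
A hedged usage sketch: padding_is_zeros() verifying the reserved region of an on-disk structure (example_item is made up; u64/u8 are the typedefs used throughout check):

/* hypothetical on-disk item with a reserved pad region */
struct example_item {
	u64 val;
	u8 pad[12];
};

static bool example_item_valid(struct example_item *item)
{
	/* non-zero padding suggests a corrupt or mis-written item */
	return padding_is_zeros(item->pad, sizeof(item->pad));
}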

utils/src/check/padding.h Normal file

@@ -0,0 +1,6 @@
#ifndef _SCOUTFS_UTILS_CHECK_PADDING_H_
#define _SCOUTFS_UTILS_CHECK_PADDING_H_
bool padding_is_zeros(const void *data, size_t sz);
#endif

utils/src/check/problem.c Normal file

@@ -0,0 +1,44 @@
#include <stdio.h>
#include <stdint.h>
#include "problem.h"
#define PROB_STR(pb) [pb] = #pb
char *prob_strs[] = {
PROB_STR(PB_META_EXTENT_INVALID),
PROB_STR(PB_META_REF_OVERLAPS_EXISTING),
PROB_STR(PB_META_FREE_OVERLAPS_EXISTING),
PROB_STR(PB_BTREE_BLOCK_BAD_LEVEL),
PROB_STR(PB_SB_HDR_CRC_INVALID),
PROB_STR(PB_SB_HDR_MAGIC_INVALID),
PROB_STR(PB_FS_IN_USE),
PROB_STR(PB_MOUNTED_CLIENTS_REF_BLKNO),
PROB_STR(PB_SB_BAD_FLAG),
PROB_STR(PB_SB_BAD_FMT_VERS),
PROB_STR(PB_QCONF_WRONG_VERSION),
PROB_STR(PB_QSLOT_BAD_FAM),
PROB_STR(PB_QSLOT_BAD_PORT),
PROB_STR(PB_QSLOT_NO_ADDR),
PROB_STR(PB_QSLOT_BAD_ADDR),
PROB_STR(PB_DATA_DEV_SB_INVALID),
};
static struct problem_data {
uint64_t counts[PB__NR];
uint64_t count;
} global_pdat;
void problem_record(prob_t pb)
{
struct problem_data *pdat = &global_pdat;
pdat->counts[pb]++;
pdat->count++;
}
uint64_t problems_count(void)
{
struct problem_data *pdat = &global_pdat;
return pdat->count;
}
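
The per-problem counts are private to problem.c, so a final report helper would live alongside them. A sketch of what a (hypothetical) problems_print() could look like, using the prob_strs table above:

void problems_print(FILE *f)
{
	struct problem_data *pdat = &global_pdat;
	int i;

	for (i = 0; i < PB__NR; i++) {
		if (pdat->counts[i])
			fprintf(f, "%s: %llu\n", prob_strs[i],
				(unsigned long long)pdat->counts[i]);
	}
	fprintf(f, "total problems: %llu\n", (unsigned long long)pdat->count);
}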

utils/src/check/problem.h Normal file

@@ -0,0 +1,38 @@
#ifndef _SCOUTFS_UTILS_CHECK_PROBLEM_H_
#define _SCOUTFS_UTILS_CHECK_PROBLEM_H_
#include "debug.h"
#include "sns.h"
typedef enum {
PB_META_EXTENT_INVALID,
PB_META_REF_OVERLAPS_EXISTING,
PB_META_FREE_OVERLAPS_EXISTING,
PB_BTREE_BLOCK_BAD_LEVEL,
PB_SB_HDR_CRC_INVALID,
PB_SB_HDR_MAGIC_INVALID,
PB_FS_IN_USE,
PB_MOUNTED_CLIENTS_REF_BLKNO,
PB_SB_BAD_FLAG,
PB_SB_BAD_FMT_VERS,
PB_QCONF_WRONG_VERSION,
PB_QSLOT_BAD_FAM,
PB_QSLOT_BAD_PORT,
PB_QSLOT_NO_ADDR,
PB_QSLOT_BAD_ADDR,
PB_DATA_DEV_SB_INVALID,
PB__NR,
} prob_t;
extern char *prob_strs[];
#define problem(pb, fmt, ...) \
do { \
debug("problem found: "#pb": %s: "fmt, sns_str(), __VA_ARGS__); \
problem_record(pb); \
} while (0)
void problem_record(prob_t pb);
uint64_t problems_count(void);
#endif

utils/src/check/sns.c Normal file

@@ -0,0 +1,118 @@
#include <stdlib.h>
#include <string.h>
#include "sns.h"
/*
* This "str num stack" is used to describe our location in metadata at
* any given time.
*
* As we descend into structures we push a string describing them,
* perhaps with associated numbers. Pushing and popping is very cheap
* and only rarely do we format the stack into a string, as an arbitrary
* example:
* super.fs_root.btree_parent:1231.btree_leaf:3231
*/
#define SNS_MAX_DEPTH 1000
/* worst case per entry: '.' separator, string, and two ":<16 hex digit>" suffixes */
#define SNS_STR_SIZE (SNS_MAX_DEPTH * (1 + SNS_MAX_STR_LEN + 2 * (1 + 16)) + 1)
static struct sns_data {
unsigned int depth;
struct sns_entry {
char *str;
size_t len;
u64 a;
u64 b;
} ents[SNS_MAX_DEPTH];
char str[SNS_STR_SIZE];
} global_lsdat;
void _sns_push(char *str, size_t len, u64 a, u64 b)
{
struct sns_data *lsdat = &global_lsdat;
if (lsdat->depth < SNS_MAX_DEPTH) {
lsdat->ents[lsdat->depth++] = (struct sns_entry) {
.str = str,
.len = len,
.a = a,
.b = b,
};
}
}
void sns_pop(void)
{
struct sns_data *lsdat = &global_lsdat;
if (lsdat->depth > 0)
lsdat->depth--;
}
static char *append_str(char *pos, char *str, size_t len)
{
memcpy(pos, str, len);
return pos + len;
}
/*
* This is not called for x = 0 so we don't need to emit an initial 0.
* We could emit one by using do {} while instead of while {}. Digits
* are gathered least significant first and copied back out in printing
* order.
*/
static char *append_u64x(char *pos, u64 x)
{
static char hex[] = "0123456789abcdef";
char digits[16];
int nr = 0;
while (x) {
digits[nr++] = hex[x & 0xf];
x >>= 4;
}
while (nr > 0)
*pos++ = digits[--nr];
return pos;
}
static char *append_char(char *pos, char c)
{
*(pos++) = c;
return pos;
}
/*
* Return a pointer to a null terminated string that describes the
* current location stack. The string buffer is global.
*/
char *sns_str(void)
{
struct sns_data *lsdat = &global_lsdat;
struct sns_entry *ent;
char *pos;
int i;
pos = lsdat->str;
for (i = 0; i < lsdat->depth; i++) {
ent = &lsdat->ents[i];
if (i)
pos = append_char(pos, '.');
pos = append_str(pos, ent->str, ent->len);
if (ent->a) {
pos = append_char(pos, ':');
pos = append_u64x(pos, ent->a);
}
if (ent->b) {
pos = append_char(pos, ':');
pos = append_u64x(pos, ent->b);
}
}
*pos = '\0';
return lsdat->str;
}
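
Tying the API together, a descent that pushes as it goes produces the dotted string from the header comment; a small sketch against the sns.h interface (printf assumes stdio.h):

static void example_walk(void)
{
	sns_push("super", 0, 0);
	sns_push("fs_root", 0, 0);
	sns_push("btree_parent", 0x1231, 0);
	printf("%s\n", sns_str());	/* "super.fs_root.btree_parent:1231" */
	sns_pop();
	sns_pop();
	sns_pop();
}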

utils/src/check/sns.h Normal file

@@ -0,0 +1,20 @@
#ifndef _SCOUTFS_UTILS_CHECK_SNS_H_
#define _SCOUTFS_UTILS_CHECK_SNS_H_
#include <assert.h>
#include "sparse.h"
#define SNS_MAX_STR_LEN 20
#define sns_push(str, a, b) \
do { \
build_assert(sizeof(str) - 1 <= SNS_MAX_STR_LEN); \
_sns_push((str), sizeof(str) - 1, a, b); \
} while (0)
void _sns_push(char *str, size_t len, u64 a, u64 b);
void sns_pop(void);
char *sns_str(void);
#endif

utils/src/check/super.c Normal file

@@ -0,0 +1,252 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include "sparse.h"
#include "util.h"
#include "format.h"
#include "crc.h"
#include "block.h"
#include "super.h"
#include "problem.h"
/*
* After we check the super blocks we provide a global buffer to track
* the current super block. It is referenced to get static information
* about the system and is also modified and written as part of
* transactions.
*/
struct scoutfs_super_block *global_super;
/*
* Check the superblock crc. We can't use global_super here since it holds
* only the struct scoutfs_super_block and not the whole block that the crc
* covers, so we reload a copy of the full block here.
*/
int check_super_crc(void)
{
struct scoutfs_super_block *super = NULL;
struct scoutfs_block_header *hdr;
struct block *blk = NULL;
u32 crc;
int ret;
ret = block_get(&blk, SCOUTFS_SUPER_BLKNO, BF_SM | BF_DIRTY);
if (ret < 0) {
fprintf(stderr, "error reading super block\n");
return ret;
}
super = block_buf(blk);
crc = crc_block((struct scoutfs_block_header *)super, block_size(blk));
hdr = &global_super->hdr;
debug("superblock crc 0x%04x calculated 0x%04x " "%s", le32_to_cpu(hdr->crc), crc, le32_to_cpu(hdr->crc) == crc ? "(match)" : "(mismatch)");
if (crc != le32_to_cpu(hdr->crc))
problem(PB_SB_HDR_CRC_INVALID, "crc 0x%04x calculated 0x%04x", le32_to_cpu(hdr->crc), crc);
block_put(&blk);
return 0;
}
/*
* Crude check for the unlikely cases where the fs appears to still be mounted.
*/
int check_super_in_use(int meta_fd)
{
int ret = meta_super_in_use(meta_fd, global_super);
debug("meta_super_in_use ret %d", ret);
if (ret < 0)
problem(PB_FS_IN_USE, "File system appears in use. ret %d", ret);
debug("global_super->mounted_clients.ref.blkno 0x%08llx", global_super->mounted_clients.ref.blkno);
if (global_super->mounted_clients.ref.blkno != 0)
problem(PB_MOUNTED_CLIENTS_REF_BLKNO, "Mounted clients ref blkno 0x%08llx",
global_super->mounted_clients.ref.blkno);
return ret;
}
/*
* quick glance data device superblock checks.
*
* -EIO for crc failures, all others -EINVAL
*
* caller must have run check_supers() first so that global_super is
* setup, so that we can cross-ref to it.
*/
static int check_data_super(int data_fd)
{
struct scoutfs_super_block *super = NULL;
char *buf;
int ret = 0;
u32 crc;
ssize_t size = SCOUTFS_BLOCK_SM_SIZE;
off_t off = SCOUTFS_SUPER_BLKNO << SCOUTFS_BLOCK_SM_SHIFT;
buf = aligned_alloc(4096, size); /* XXX static alignment :/ */
if (!buf)
return -ENOMEM;
memset(buf, 0, size);
if (lseek(data_fd, off, SEEK_SET) != off) {
ret = -errno;
goto out;
}
if (read(data_fd, buf, size) < 0) {
ret = -errno;
goto out;
}
super = (struct scoutfs_super_block *)buf;
crc = crc_block((struct scoutfs_block_header *)buf, size);
debug("data fsid 0x%016llx", le64_to_cpu(super->hdr.fsid));
debug("data super magic 0x%04x", super->hdr.magic);
debug("data crc calc 0x%08x exp 0x%08x %s", crc, le32_to_cpu(super->hdr.crc),
crc == le32_to_cpu(super->hdr.crc) ? "(match)" : "(mismatch)");
debug("data flags %llu fmt_vers %llu", le64_to_cpu(super->flags), le64_to_cpu(super->fmt_vers));
if (crc != le32_to_cpu(super->hdr.crc))
/* tis but a scratch */
ret = -EIO;
if (le64_to_cpu(super->hdr.fsid) != le64_to_cpu(global_super->hdr.fsid))
/* mismatched data bdev? not good */
ret = -EINVAL;
if (le32_to_cpu(super->hdr.magic) != SCOUTFS_BLOCK_MAGIC_SUPER)
/* fsid matched but not a superblock? yikes */
ret = -EINVAL;
if (le64_to_cpu(super->flags) != 0) /* !SCOUTFS_FLAG_IS_META_BDEV */
ret = -EINVAL;
if ((le64_to_cpu(super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN) ||
(le64_to_cpu(super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX))
ret = -EINVAL;
if (ret != 0)
problem(PB_DATA_DEV_SB_INVALID, "data device is invalid or corrupt (%d)", ret);
out:
free(buf);
return ret;
}
/*
* After checking the supers we save a copy in a global buffer that's used by
* other modules to track the current super. It can be modified and written during commits.
*/
int check_supers(int data_fd)
{
struct scoutfs_super_block *super = NULL;
struct block *blk = NULL;
struct scoutfs_quorum_slot *slot = NULL;
struct in_addr in;
uint16_t family;
uint16_t port;
int ret;
sns_push("supers", 0, 0);
global_super = malloc(sizeof(struct scoutfs_super_block));
if (!global_super) {
fprintf(stderr, "error allocating super block buffer\n");
ret = -ENOMEM;
goto out;
}
ret = block_get(&blk, SCOUTFS_SUPER_BLKNO, BF_SM);
if (ret < 0) {
fprintf(stderr, "error reading super block\n");
goto out;
}
ret = block_hdr_valid(blk, SCOUTFS_SUPER_BLKNO, BF_SM, SCOUTFS_BLOCK_MAGIC_SUPER);
super = block_buf(blk);
if (ret < 0) {
if (ret == -EINVAL) {
/* that's really bad */
fprintf(stderr, "superblock invalid magic\n");
goto out;
} else if (ret == -EIO) {
/* just report/count a CRC error */
problem(PB_SB_HDR_CRC_INVALID, "superblock header crc invalid (%d)", ret);
}
}
memcpy(global_super, super, sizeof(struct scoutfs_super_block));
debug("Superblock flag: %llu", global_super->flags);
if (le64_to_cpu(global_super->flags) != SCOUTFS_FLAG_IS_META_BDEV)
problem(PB_SB_BAD_FLAG, "Bad flag: %llu expecting: 1 or 0", global_super->flags);
debug("Superblock fmt_vers: %llu", le64_to_cpu(global_super->fmt_vers));
if ((le64_to_cpu(global_super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN) ||
(le64_to_cpu(global_super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX))
problem(PB_SB_BAD_FMT_VERS, "Bad fmt_vers: %llu outside supported range (%d-%d)",
le64_to_cpu(global_super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
SCOUTFS_FORMAT_VERSION_MAX);
debug("Quorum Config Version: %llu", global_super->qconf.version);
if (le64_to_cpu(global_super->qconf.version) != 1)
problem(PB_QCONF_WRONG_VERSION, "Wrong Version: %llu (expected 1)", global_super->qconf.version);
for (int i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
slot = &global_super->qconf.slots[i];
family = le16_to_cpu(slot->addr.v4.family);
port = le16_to_cpu(slot->addr.v4.port);
in.s_addr = le32_to_cpu(slot->addr.v4.addr);
if (family == SCOUTFS_AF_NONE) {
debug("Quorum slot %u is empty", i);
continue;
}
debug("Quorum slot %u family: %u, port: %u, address: %s", i, family, port, inet_ntoa(in));
if (family != SCOUTFS_AF_IPV4)
problem(PB_QSLOT_BAD_FAM, "Quorum Slot %u doesn't have valid address", i);
if (port == 0)
problem(PB_QSLOT_BAD_PORT, "Quorum Slot %u has bad port", i);
if (!in.s_addr) {
problem(PB_QSLOT_NO_ADDR, "Quorum Slot %u has not been assigned ipv4 address", i);
} else if (!(in.s_addr & 0xff000000)) {
problem(PB_QSLOT_BAD_ADDR, "Quorum Slot %u has invalid ipv4 address", i);
} else if ((in.s_addr & 0xff) == 0xff) {
problem(PB_QSLOT_BAD_ADDR, "Quorum Slot %u has invalid ipv4 address", i);
}
}
debug("super magic 0x%04x", global_super->hdr.magic);
if (le32_to_cpu(global_super->hdr.magic) != SCOUTFS_BLOCK_MAGIC_SUPER)
problem(PB_SB_HDR_MAGIC_INVALID, "superblock magic invalid: 0x%04x is not 0x%04x",
global_super->hdr.magic, SCOUTFS_BLOCK_MAGIC_SUPER);
/* `scoutfs image` command doesn't open data_fd */
if (data_fd < 0)
ret = 0;
else
ret = check_data_super(data_fd);
out:
block_put(&blk);
sns_pop();
return ret;
}
void super_shutdown(void)
{
free(global_super);
}
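
The super checks gate the rest of check since everything else reads global_super. A simplified sketch of a plausible top-level sequence (run_checks is hypothetical; the real driver's ordering and error handling may differ):

static int run_checks(int meta_fd, int data_fd)
{
	int ret;

	ret = check_supers(data_fd);	/* populates global_super */
	if (ret < 0)
		return ret;
	check_super_crc();
	check_super_in_use(meta_fd);
	ret = check_meta_alloc();	/* compares refs against free extents */
	super_shutdown();

	printf("found %llu problems\n", (unsigned long long)problems_count());
	return ret;
}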

utils/src/check/super.h Normal file

@@ -0,0 +1,12 @@
#ifndef _SCOUTFS_UTILS_CHECK_SUPER_H_
#define _SCOUTFS_UTILS_CHECK_SUPER_H_
extern struct scoutfs_super_block *global_super;
int check_super_crc(void);
int check_supers(int data_fd);
int super_commit(void);
int check_super_in_use(int meta_fd);
void super_shutdown(void);
#endif

utils/src/parallel_restore.c Normal file

File diff suppressed because it is too large.

utils/src/parallel_restore.h Normal file

@@ -0,0 +1,126 @@
#ifndef _SCOUTFS_PARALLEL_RESTORE_H_
#define _SCOUTFS_PARALLEL_RESTORE_H_
#include <errno.h>
struct scoutfs_parallel_restore_progress {
struct scoutfs_btree_root fs_items;
struct scoutfs_btree_root root_items;
struct scoutfs_srch_file sfl;
struct scoutfs_block_ref bloom_ref;
__le64 inode_count;
__le64 max_ino;
};
struct scoutfs_parallel_restore_slice {
__le64 fsid;
__le64 meta_start;
__le64 meta_len;
};
struct scoutfs_parallel_restore_entry {
u64 dir_ino;
u64 pos;
u64 ino;
mode_t mode;
char *name;
unsigned int name_len;
};
struct scoutfs_parallel_restore_xattr {
u64 ino;
u64 pos;
char *name;
unsigned int name_len;
void *value;
unsigned int value_len;
};
struct scoutfs_parallel_restore_inode {
/* all inodes */
u64 ino;
u64 meta_seq;
u64 data_seq;
u64 nr_xattrs;
u32 uid;
u32 gid;
u32 mode;
u32 rdev;
u32 flags;
u8 pad[4];
struct timespec atime;
struct timespec ctime;
struct timespec mtime;
struct timespec crtime;
u64 proj;
/* regular files */
u64 data_version;
u64 size;
bool offline;
u32 nlink;
/* only used for directories */
u64 nr_subdirs;
u64 total_entry_name_bytes;
/* only used for symlinks */
char *target;
unsigned int target_len; /* not including null terminator */
};
struct scoutfs_parallel_restore_quota_rule {
u64 limit;
u8 prio;
u8 op;
u8 rule_flags;
struct quota_rule_name {
u64 val;
u8 source;
u8 flags;
} names[3];
char *value;
unsigned int value_len;
};
typedef __typeof__(EINVAL) spr_err_t;
struct scoutfs_parallel_restore_writer;
spr_err_t scoutfs_parallel_restore_create_writer(struct scoutfs_parallel_restore_writer **wrip);
void scoutfs_parallel_restore_destroy_writer(struct scoutfs_parallel_restore_writer **wrip);
spr_err_t scoutfs_parallel_restore_init_slices(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_parallel_restore_slice *slices,
int nr);
spr_err_t scoutfs_parallel_restore_add_slice(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_parallel_restore_slice *slice);
spr_err_t scoutfs_parallel_restore_get_slice(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_parallel_restore_slice *slice);
spr_err_t scoutfs_parallel_restore_add_inode(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_parallel_restore_inode *inode);
spr_err_t scoutfs_parallel_restore_add_entry(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_parallel_restore_entry *entry);
spr_err_t scoutfs_parallel_restore_add_xattr(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_parallel_restore_xattr *xattr);
spr_err_t scoutfs_parallel_restore_get_progress(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_parallel_restore_progress *prog);
spr_err_t scoutfs_parallel_restore_add_progress(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_parallel_restore_progress *prog);
spr_err_t scoutfs_parallel_restore_add_quota_rule(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_parallel_restore_quota_rule *rule);
spr_err_t scoutfs_parallel_restore_write_buf(struct scoutfs_parallel_restore_writer *wri,
void *buf, size_t len, off_t *off_ret,
size_t *count_ret);
spr_err_t scoutfs_parallel_restore_import_super(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_super_block *super, int fd);
spr_err_t scoutfs_parallel_restore_export_super(struct scoutfs_parallel_restore_writer *wri,
struct scoutfs_super_block *super);
#endif
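
The header implies the write loop even though parallel_restore.c is suppressed above: create a writer, hand it a slice of free metadata, add items, then repeatedly drain dirty blocks with scoutfs_parallel_restore_write_buf() and pwrite() them to the offline meta device. A hedged single-threaded sketch; the buffer size, inode values, and positive errno-style returns are assumptions, not taken from the library source:

#include <sys/stat.h>
#include <unistd.h>

static spr_err_t restore_minimal(int meta_fd, struct scoutfs_super_block *super,
				 struct scoutfs_parallel_restore_slice *slice)
{
	struct scoutfs_parallel_restore_writer *wri;
	struct scoutfs_parallel_restore_inode inode = {
		.ino = 4242,			/* hypothetical inode number */
		.mode = S_IFREG | 0644,
		.nlink = 1,
	};
	static char buf[1024 * 1024];		/* assumed large enough for a batch */
	size_t count;
	off_t off;
	spr_err_t err;

	err = scoutfs_parallel_restore_create_writer(&wri);
	if (err)
		return err;

	err = scoutfs_parallel_restore_add_slice(wri, slice);
	if (!err)
		err = scoutfs_parallel_restore_add_inode(wri, &inode);

	/* drain the writer's dirty blocks to the offline meta device */
	while (!err) {
		err = scoutfs_parallel_restore_write_buf(wri, buf, sizeof(buf),
							 &off, &count);
		if (err || count == 0)
			break;
		if (pwrite(meta_fd, buf, count, off) != (ssize_t)count)
			err = EIO;
	}

	if (!err)
		err = scoutfs_parallel_restore_export_super(wri, super);
	scoutfs_parallel_restore_destroy_writer(&wri);
	return err;
}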