Compare commits

..

2 Commits

Author SHA1 Message Date
Zach Brown
ead8be6b8c Have xfstest pass when using args
Our xfstest's golden output includes the full set of xfstests tests we
expect to run when no args are specified.  If args are specified then
the set of tests can change, and the test will always fail when it does.

This fixes that by having the test check the set of executed tests
itself, rather than relying on golden output.  If args are specified
then our xfstest only fails if any of the executed xfstests tests
failed.  Without args, we perform the same scraping of the check output
and compare it against the expected results ourselves.

It would have been a bit much to put that large file inline in the test
file, so we add a dir of per-test files in revision control.  We can
also put the list of exclusions there.

We also clean up the output redirection helper functions to make them
clearer.  After xfstests has executed we redirect output back to the
compared output so that any unexpected output is still caught.
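
A rough sketch of the comparison described above, as it might appear in
the test; the file names, the XFSTESTS_ARGS variable, and the exact
shape of check's summary output are illustrative assumptions, not taken
from the patch:

    # scrape the sets of executed and failed tests from check's output
    ran=$(awk '/^Ran:/ { $1 = ""; print }' check.output)
    failed=$(awk '/^Failures:/ { $1 = ""; print }' check.output)

    if [ -n "$XFSTESTS_ARGS" ]; then
            # args were given: only fail if an executed test failed
            test -z "$failed" || t_fail "xfstests failures:$failed"
    else
            # no args: the executed set must match the expected list
            # kept in revision control
            echo $ran | tr ' ' '\n' | sed '/^$/d' | \
                    diff -u golden/xfstests-expected-tests - || \
                    t_fail "unexpected set of xfstests tests"
    fi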

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-06 16:03:46 -08:00
Zach Brown
ae84271b37 Add crash monitor to run-tests
Add a small background function that runs during the tests and
triggers a crash if it finds catastrophic failure conditions.

This is the second bg task we want to kill on exit, and we can only
have one function run on the EXIT trap, so we create a generic
process-killing trap function.

We feed it the fenced pid as well.  run-tests didn't log much of value
to the fenced log, and we're no longer logging the kills there either,
so we just remove run-tests' fenced logging.
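
The general shape of a single EXIT trap reaping several background
helpers, sketched here for reference only; the actual atexit_kill
function added to run-tests.sh appears in the diff below:

    # pids of background helpers, appended as they're started
    declare -a atexit_kill_pids

    atexit_kill()
    {
            local i
            # kill in reverse of the order the pids were added
            for (( i = ${#atexit_kill_pids[@]} - 1; i >= 0; i-- )); do
                    test -d "/proc/${atexit_kill_pids[$i]}" && \
                            kill "${atexit_kill_pids[$i]}"
            done
    }
    trap atexit_kill EXIT

    # e.g. the crash monitor registers itself after launching
    crash_monitor &
    atexit_kill_pids+=($!)

Registering one trap function that walks an array avoids the limitation
that only the most recently installed handler runs for the EXIT trap.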

Signed-off-by: Zach Brown <zab@versity.com>
2025-11-06 12:11:46 -08:00
19 changed files with 99 additions and 359 deletions

View File

@@ -1,41 +1,6 @@
Versity ScoutFS Release Notes
=============================
---
v1.26
\
*Nov 17, 2025*
Add the ino\_alloc\_per\_lock mount option. This changes the number of
inode numbers allocated under each cluster lock and can alleviate lock
contention for some patterns of larger file creation.
Add the tcp\_keepalive\_timeout\_ms mount option. This can enable the
system to survive longer periods of networking outages.
Fix a rare double free of internal btree metadata blocks when merging
log trees. The duplicated freed metadata block numbers would cause
persistent errors in the server, preventing the server from starting and
hanging the system.
Fix the data\_wait interface to not require the correct data\_version of
the inode when raising an error. This lets callers raise errors when
they're unable to recall the details of the inode to discover its
data\_version.
Change scoutfs to more aggressively reclaim cached memory when under
memory pressure. This makes scoutfs behave more like other kernel
components and it integrates better with the reclaim policy heuristics
in the VM core of the kernel.
Change scoutfs to more efficiently transmit and receive socket messages.
Under heavy load this can process messages sufficiently more quickly to
avoid hung task messages for tasks that were waiting for cluster lock
messages to be processed.
Fix faulty server block commit budget calculations that were generating
spurious "holders exceeded alloc budget" console messages.
---
v1.25
\

View File

@@ -1482,6 +1482,12 @@ static int remove_index_items(struct super_block *sb, u64 ino,
* Return an allocated and unused inode number. Returns -ENOSPC if
* we're out of inode.
*
* Each parent directory has its own pool of free inode numbers. Items
* are sorted by their inode numbers as they're stored in segments.
* This will tend to group together files that are created in a
* directory at the same time in segments. Concurrent creation across
* different directories will be stored in their own regions.
*
* Inode numbers are never reclaimed. If the inode is evicted or we're
* unmounted the pending inode numbers will be lost. Asking for a
* relatively small number from the server each time will tend to
@@ -1491,18 +1497,12 @@ static int remove_index_items(struct super_block *sb, u64 ino,
int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret)
{
DECLARE_INODE_SB_INFO(sb, inf);
struct scoutfs_mount_options opts;
struct inode_allocator *ia;
u64 ino;
u64 nr;
int ret;
scoutfs_options_read(sb, &opts);
if (is_dir && opts.ino_alloc_per_lock == SCOUTFS_LOCK_INODE_GROUP_NR)
ia = &inf->dir_ino_alloc;
else
ia = &inf->ino_alloc;
ia = is_dir ? &inf->dir_ino_alloc : &inf->ino_alloc;
spin_lock(&ia->lock);
@@ -1523,17 +1523,6 @@ int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret)
*ino_ret = ia->ino++;
ia->nr--;
if (opts.ino_alloc_per_lock != SCOUTFS_LOCK_INODE_GROUP_NR) {
nr = ia->ino & SCOUTFS_LOCK_INODE_GROUP_MASK;
if (nr >= opts.ino_alloc_per_lock) {
nr = SCOUTFS_LOCK_INODE_GROUP_NR - nr;
if (nr > ia->nr)
nr = ia->nr;
ia->ino += nr;
ia->nr -= nr;
}
}
spin_unlock(&ia->lock);
ret = 0;
out:

View File

@@ -35,12 +35,6 @@ do { \
} \
} while (0) \
#define scoutfs_bug_on_err(sb, err, fmt, args...) \
do { \
__typeof__(err) _err = (err); \
scoutfs_bug_on(sb, _err < 0 && _err != -ENOLINK, fmt, ##args); \
} while (0)
/*
* Each message is only generated once per volume. Remounting resets
* the messages.

View File

@@ -33,7 +33,6 @@ enum {
Opt_acl,
Opt_data_prealloc_blocks,
Opt_data_prealloc_contig_only,
Opt_ino_alloc_per_lock,
Opt_log_merge_wait_timeout_ms,
Opt_metadev_path,
Opt_noacl,
@@ -48,7 +47,6 @@ static const match_table_t tokens = {
{Opt_acl, "acl"},
{Opt_data_prealloc_blocks, "data_prealloc_blocks=%s"},
{Opt_data_prealloc_contig_only, "data_prealloc_contig_only=%s"},
{Opt_ino_alloc_per_lock, "ino_alloc_per_lock=%s"},
{Opt_log_merge_wait_timeout_ms, "log_merge_wait_timeout_ms=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_noacl, "noacl"},
@@ -138,7 +136,6 @@ static void init_default_options(struct scoutfs_mount_options *opts)
opts->data_prealloc_blocks = SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS;
opts->data_prealloc_contig_only = 1;
opts->ino_alloc_per_lock = SCOUTFS_LOCK_INODE_GROUP_NR;
opts->log_merge_wait_timeout_ms = DEFAULT_LOG_MERGE_WAIT_TIMEOUT_MS;
opts->orphan_scan_delay_ms = -1;
opts->quorum_heartbeat_timeout_ms = SCOUTFS_QUORUM_DEF_HB_TIMEO_MS;
@@ -241,18 +238,6 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
opts->data_prealloc_contig_only = nr;
break;
case Opt_ino_alloc_per_lock:
ret = match_int(args, &nr);
if (ret < 0 || nr < 1 || nr > SCOUTFS_LOCK_INODE_GROUP_NR) {
scoutfs_err(sb, "invalid ino_alloc_per_lock option, must be between 1 and %u",
SCOUTFS_LOCK_INODE_GROUP_NR);
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->ino_alloc_per_lock = nr;
break;
case Opt_tcp_keepalive_timeout_ms:
ret = match_int(args, &nr);
ret = verify_tcp_keepalive_timeout_ms(sb, ret, nr);
@@ -408,7 +393,6 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
seq_puts(seq, ",acl");
seq_printf(seq, ",data_prealloc_blocks=%llu", opts.data_prealloc_blocks);
seq_printf(seq, ",data_prealloc_contig_only=%u", opts.data_prealloc_contig_only);
seq_printf(seq, ",ino_alloc_per_lock=%u", opts.ino_alloc_per_lock);
seq_printf(seq, ",metadev_path=%s", opts.metadev_path);
if (!is_acl)
seq_puts(seq, ",noacl");
@@ -497,45 +481,6 @@ static ssize_t data_prealloc_contig_only_store(struct kobject *kobj, struct kobj
}
SCOUTFS_ATTR_RW(data_prealloc_contig_only);
static ssize_t ino_alloc_per_lock_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%u", opts.ino_alloc_per_lock);
}
static ssize_t ino_alloc_per_lock_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[20]; /* more than enough for octal -U32_MAX */
long val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtol(nullterm, 0, &val);
if (ret < 0 || val < 1 || val > SCOUTFS_LOCK_INODE_GROUP_NR) {
scoutfs_err(sb, "invalid ino_alloc_per_lock option, must be between 1 and %u",
SCOUTFS_LOCK_INODE_GROUP_NR);
return -EINVAL;
}
write_seqlock(&optinf->seqlock);
optinf->opts.ino_alloc_per_lock = val;
write_sequnlock(&optinf->seqlock);
return count;
}
SCOUTFS_ATTR_RW(ino_alloc_per_lock);
static ssize_t log_merge_wait_timeout_ms_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
@@ -676,7 +621,6 @@ SCOUTFS_ATTR_RO(quorum_slot_nr);
static struct attribute *options_attrs[] = {
SCOUTFS_ATTR_PTR(data_prealloc_blocks),
SCOUTFS_ATTR_PTR(data_prealloc_contig_only),
SCOUTFS_ATTR_PTR(ino_alloc_per_lock),
SCOUTFS_ATTR_PTR(log_merge_wait_timeout_ms),
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),

View File

@@ -8,7 +8,6 @@
struct scoutfs_mount_options {
u64 data_prealloc_blocks;
bool data_prealloc_contig_only;
unsigned int ino_alloc_per_lock;
unsigned int log_merge_wait_timeout_ms;
char *metadev_path;
unsigned int orphan_scan_delay_ms;

View File

@@ -994,11 +994,10 @@ static int for_each_rid_last_lt(struct super_block *sb, struct scoutfs_btree_roo
}
/*
* Log merge range items are stored at the starting fs key of the range
* with the zone overwritten to indicate the log merge item type. This
* day0 mistake loses sorting information for items in the different
* zones in the fs root, so the range items aren't strictly sorted by
* the starting key of their range.
* Log merge range items are stored at the starting fs key of the range.
* The only fs key field that doesn't hold information is the zone, so
* we use the zone to differentiate all types that we store in the log
* merge tree.
*/
static void init_log_merge_key(struct scoutfs_key *key, u8 zone, u64 first,
u64 second)
@@ -1030,51 +1029,6 @@ static int next_log_merge_item_key(struct super_block *sb, struct scoutfs_btree_
return ret;
}
/*
* The range items aren't sorted by their range.start because
* _RANGE_ZONE clobbers the range's zone. We sweep all the items and
* find the range with the next least starting key that's greater than
* the caller's starting key. We have to be careful to iterate over the
* log_merge tree keys because the ranges can overlap as they're mapped
* to the log_merge keys by clobbering their zone.
*/
static int next_log_merge_range(struct super_block *sb, struct scoutfs_btree_root *root,
struct scoutfs_key *start, struct scoutfs_log_merge_range *rng)
{
struct scoutfs_log_merge_range *next;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
int ret;
key = *start;
key.sk_zone = SCOUTFS_LOG_MERGE_RANGE_ZONE;
scoutfs_key_set_ones(&rng->start);
do {
ret = scoutfs_btree_next(sb, root, &key, &iref);
if (ret == 0) {
if (iref.key->sk_zone != SCOUTFS_LOG_MERGE_RANGE_ZONE) {
ret = -ENOENT;
} else if (iref.val_len != sizeof(struct scoutfs_log_merge_range)) {
ret = -EIO;
} else {
next = iref.val;
if (scoutfs_key_compare(&next->start, &rng->start) < 0 &&
scoutfs_key_compare(&next->start, start) >= 0)
*rng = *next;
key = *iref.key;
scoutfs_key_inc(&key);
}
scoutfs_btree_put_iref(&iref);
}
} while (ret == 0);
if (ret == -ENOENT && !scoutfs_key_is_ones(&rng->start))
ret = 0;
return ret;
}
static int next_log_merge_item(struct super_block *sb,
struct scoutfs_btree_root *root,
u8 zone, u64 first, u64 second,
@@ -1618,8 +1572,7 @@ static int server_get_log_trees(struct super_block *sb,
goto update;
}
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed,
COMMIT_HOLD_ALLOC_BUDGET / 2);
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 100);
if (ret == -EINPROGRESS)
ret = 0;
if (ret < 0) {
@@ -1729,7 +1682,6 @@ static int server_commit_log_trees(struct super_block *sb,
int ret;
if (arg_len != sizeof(struct scoutfs_log_trees)) {
err_str = "invalid message log_trees size";
ret = -EINVAL;
goto out;
}
@@ -1793,7 +1745,7 @@ static int server_commit_log_trees(struct super_block *sb,
ret = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &lt, sizeof(lt));
BUG_ON(ret < 0); /* dirtying should have guaranteed success, srch item inconsistent */
BUG_ON(ret < 0); /* dirtying should have guaranteed success */
if (ret < 0)
err_str = "updating log trees item";
@@ -1801,10 +1753,11 @@ unlock:
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, ret);
out:
if (ret < 0)
scoutfs_err(sb, "server error %d committing client logs for rid %016llx, nr %llu: %s",
ret, rid, le64_to_cpu(lt.nr), err_str);
out:
WARN_ON_ONCE(ret < 0);
return scoutfs_net_response(sb, conn, cmd, id, ret, NULL, 0);
}
@@ -1914,11 +1867,9 @@ static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
scoutfs_alloc_splice_list(sb, &server->alloc, &server->wri, server->other_freed,
&lt.meta_avail)) ?:
(err_str = "empty data_avail",
alloc_move_empty(sb, &super->data_alloc, &lt.data_avail,
COMMIT_HOLD_ALLOC_BUDGET / 2)) ?:
alloc_move_empty(sb, &super->data_alloc, &lt.data_avail, 100)) ?:
(err_str = "empty data_freed",
alloc_move_empty(sb, &super->data_alloc, &lt.data_freed,
COMMIT_HOLD_ALLOC_BUDGET / 2));
alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 100));
mutex_unlock(&server->alloc_mutex);
/* only finalize, allowing merging, once the allocators are fully freed */
@@ -2143,7 +2094,7 @@ static int server_srch_get_compact(struct super_block *sb,
apply:
ret = server_apply_commit(sb, &hold, ret);
WARN_ON_ONCE(ret < 0 && ret != -ENOENT && ret != -ENOLINK); /* XXX leaked busy item */
WARN_ON_ONCE(ret < 0 && ret != -ENOENT); /* XXX leaked busy item */
out:
ret = scoutfs_net_response(sb, conn, cmd, id, ret,
sc, sizeof(struct scoutfs_srch_compact));
@@ -2521,9 +2472,10 @@ out:
}
}
/* inconsistent */
scoutfs_bug_on_err(sb, ret,
"server error %d splicing log merge completion: %s", ret, err_str);
if (ret < 0)
scoutfs_err(sb, "server error %d splicing log merge completion: %s", ret, err_str);
BUG_ON(ret); /* inconsistent */
return ret ?: einprogress;
}
@@ -2768,7 +2720,10 @@ restart:
/* find the next range, always checking for splicing */
for (;;) {
ret = next_log_merge_range(sb, &super->log_merge, &stat.next_range_key, &rng);
key = stat.next_range_key;
key.sk_zone = SCOUTFS_LOG_MERGE_RANGE_ZONE;
ret = next_log_merge_item_key(sb, &super->log_merge, SCOUTFS_LOG_MERGE_RANGE_ZONE,
&key, &rng, sizeof(rng));
if (ret < 0 && ret != -ENOENT) {
err_str = "finding merge range item";
goto out;
@@ -3039,13 +2994,7 @@ static int server_commit_log_merge(struct super_block *sb,
SCOUTFS_LOG_MERGE_STATUS_ZONE, 0, 0,
&stat, sizeof(stat));
if (ret < 0) {
/*
* During a retransmission, it's possible that the server
* already committed and resolved this log merge. ENOENT
* is expected in that case.
*/
if (ret != -ENOENT)
err_str = "getting merge status item";
err_str = "getting merge status item";
goto out;
}

View File

@@ -8,33 +8,36 @@
echo "$0 running rid '$SCOUTFS_FENCED_REQ_RID' ip '$SCOUTFS_FENCED_REQ_IP' args '$@'"
echo_fail() {
echo "$@" >&2
log() {
echo "$@" > /dev/stderr
exit 1
}
# silence error messages
quiet_cat()
{
cat "$@" 2>/dev/null
echo_fail() {
echo "$@" > /dev/stderr
exit 1
}
rid="$SCOUTFS_FENCED_REQ_RID"
shopt -s nullglob
for fs in /sys/fs/scoutfs/*; do
fs_rid="$(quiet_cat $fs/rid)"
nr="$(quiet_cat $fs/data_device_maj_min)"
[ ! -d "$fs" -o "$fs_rid" != "$rid" ] && continue
[ ! -d "$fs" ] && continue
mnt=$(findmnt -l -n -t scoutfs -o TARGET -S $nr)
[ -z "$mnt" ] && continue
if ! umount -qf "$mnt"; then
if [ -d "$fs" ]; then
echo_fail "umount -qf $mnt failed"
fi
fs_rid="$(cat $fs/rid)" || \
echo_fail "failed to get rid in $fs"
if [ "$fs_rid" != "$rid" ]; then
continue
fi
nr="$(cat $fs/data_device_maj_min)" || \
echo_fail "failed to get data device major:minor in $fs"
mnts=$(findmnt -l -n -t scoutfs -o TARGET -S $nr) || \
echo_fail "findmnt -t scoutfs -S $nr failed"
for mnt in $mnts; do
umount -f "$mnt" || \
echo_fail "umout -f $mnt failed"
done
done
exit 0

View File

@@ -121,7 +121,6 @@ t_filter_dmesg()
# in debugging kernels we can slow things down a bit
re="$re|hrtimer: interrupt took .*"
re="$re|clocksource: Long readout interval"
# fencing tests force unmounts and trigger timeouts
re="$re|scoutfs .* forcing unmount"
@@ -167,12 +166,6 @@ t_filter_dmesg()
# perf warning that it adjusted sample rate
re="$re|perf: interrupt took too long.*lowering kernel.perf_event_max_sample_rate.*"
# some ci test guests are unresponsive
re="$re|longest quorum heartbeat .* delay"
# creating block devices may trigger this
re="$re|block device autoloading is deprecated and will be removed."
egrep -v "($re)" | \
ignore_harmless_unwind_kasan_stack_oob
}

View File

@@ -43,14 +43,9 @@ t_tap_progress()
local testname=$1
local result=$2
local stmsg=""
local diff=""
local dmsg=""
if [[ -s $T_RESULTS/tmp/${testname}/status.msg ]]; then
stmsg="1"
fi
if [[ -s "$T_RESULTS/tmp/${testname}/dmesg.new" ]]; then
dmsg="1"
fi
@@ -66,7 +61,6 @@ t_tap_progress()
echo "# ${testname} ** skipped - permitted **"
else
echo "not ok ${i} - ${testname}"
case ${result} in
101)
echo "# ${testname} ** skipped **"
@@ -76,13 +70,6 @@ t_tap_progress()
;;
esac
if [[ -n "${stmsg}" ]]; then
echo "#"
echo "# status:"
echo "#"
cat $T_RESULTS/tmp/${testname}/status.msg | sed 's/^/# - /'
fi
if [[ -n "${diff}" ]]; then
echo "#"
echo "# diff:"

View File

@@ -39,6 +39,20 @@ cmd() {
die "cmd failed (check the run.log)"
}
# we can record pids to kill as we exit, we kill in reverse added order
declare -a atexit_kill_pids
atexit_kill()
{
local pid
for pid in $(echo ${atexit_kill_pids[*]} | rev); do
if test -e "/proc/$pid/status" ; then
kill "$pid"
fi
done
}
trap atexit_kill EXIT
show_help()
{
cat << EOF
@@ -438,30 +452,6 @@ cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/sys/kernel/debug/tracing/buffer_size_kb \
/proc/sys/kernel/ftrace_dump_on_oops
# we can record pids to kill as we exit, we kill in reverse added order
atexit_kill_pids=""
add_atexit_kill_pid()
{
atexit_kill_pids="$1 $atexit_kill_pids"
}
atexit_kill()
{
local pid
# suppress bg function exited messages
exec {ERR}>&2 2>/dev/null
for pid in $atexit_kill_pids; do
if test -e "/proc/$pid/status" ; then
kill "$pid"
wait "$pid"
fi
done
exec 2>&$ERR {ERR}>&-
}
trap atexit_kill EXIT
#
# Build a fenced config that runs scripts out of the repository rather
# than the default system directory
@@ -477,7 +467,7 @@ T_FENCED_LOG="$T_RESULTS/fenced.log"
$T_UTILS/fenced/scoutfs-fenced > "$T_FENCED_LOG" 2>&1 &
fenced_pid=$!
add_atexit_kill_pid $fenced_pid
atexit_kill_pids+=($fenced_pid)
#
# some critical failures will cause fs operations to hang. We can watch
@@ -506,12 +496,13 @@ crash_monitor()
if [ "$bad" != 0 ]; then
echo "run-tests monitor triggering crash"
echo c > /proc/sysrq-trigger
exit 1
# bg function doesn't reload bash, $$ is parent run-tests.sh
kill -9 $$
fi
done
}
crash_monitor &
add_atexit_kill_pid $!
atexit_kill_pids+=($!)
# setup dm tables
echo "0 $(blockdev --getsz $T_META_DEVICE) linear $T_META_DEVICE 0" > \

View File

@@ -19,7 +19,6 @@
#include <sys/types.h>
#include <stdio.h>
#include <sys/stat.h>
#include <inttypes.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
@@ -30,7 +29,7 @@
#include <errno.h>
static int size = 0;
static int duration = 0;
static int count = 0; /* XXX make this duration instead */
struct thread_info {
int nr;
@@ -42,8 +41,6 @@ static void *run_test_func(void *ptr)
void *buf = NULL;
char *addr = NULL;
struct thread_info *tinfo = ptr;
uint64_t seconds = 0;
struct timespec ts;
int c = 0;
int fd;
ssize_t read, written, ret;
@@ -64,15 +61,9 @@ static void *run_test_func(void *ptr)
usleep(100000); /* 0.1sec to allow all threads to start roughly at the same time */
clock_gettime(CLOCK_REALTIME, &ts); /* record start time */
seconds = ts.tv_sec + duration;
for (;;) {
if (++c % 16 == 0) {
clock_gettime(CLOCK_REALTIME, &ts);
if (ts.tv_sec >= seconds)
break;
}
if (++c > count)
break;
switch (rand() % 4) {
case 0: /* pread */
@@ -108,8 +99,6 @@ static void *run_test_func(void *ptr)
memcpy(addr, buf, size); /* noerr */
break;
}
usleep(10000);
}
munmap(addr, size);
@@ -131,7 +120,7 @@ int main(int argc, char **argv)
int i;
if (argc != 8) {
fprintf(stderr, "%s requires 7 arguments - size duration file1 file2 file3 file4 file5\n", argv[0]);
fprintf(stderr, "%s requires 7 arguments - size count file1 file2 file3 file4 file5\n", argv[0]);
exit(-1);
}
@@ -141,9 +130,9 @@ int main(int argc, char **argv)
exit(-1);
}
duration = atoi(argv[2]);
if (duration < 0) {
fprintf(stderr, "invalid duration, must be greater than or equal to 0\n");
count = atoi(argv[2]);
if (count < 0) {
fprintf(stderr, "invalid count, must be greater than 0\n");
exit(-1);
}

View File

@@ -72,7 +72,7 @@ touch $T_D0/dir/file
mkdir $T_D0/dir/dir
ln -s $T_D0/dir/file $T_D0/dir/symlink
mknod $T_D0/dir/char c 1 3 # null
mknod $T_D0/dir/block b 42 0 # SAMPLE block dev - nonexistant/demo use only number
mknod $T_D0/dir/block b 7 0 # loop0
for name in $(ls -UA $T_D0/dir | sort); do
ino=$(stat -c '%i' $T_D0/dir/$name)
$GRE $ino | filter_types

View File

@@ -5,7 +5,7 @@
t_require_commands mmap_stress mmap_validate scoutfs xfs_io
echo "== mmap_stress"
mmap_stress 8192 30 "$T_D0/mmap_stress" "$T_D0/mmap_stress" "$T_D0/mmap_stress" "$T_D3/mmap_stress" "$T_D3/mmap_stress" | sed 's/:.*//g' | sort
mmap_stress 8192 2000 "$T_D0/mmap_stress" "$T_D1/mmap_stress" "$T_D2/mmap_stress" "$T_D3/mmap_stress" "$T_D4/mmap_stress" | sed 's/:.*//g' | sort
echo "== basic mmap/read/write consistency checks"
mmap_validate 256 1000 "$T_D0/mmap_val1" "$T_D1/mmap_val1"

View File

@@ -8,19 +8,19 @@ t_require_mounts 2
echo "=== renameat2 noreplace flag test"
# give each mount their own dir (lock group) to minimize create contention
mkdir $T_D0/dir0
mkdir $T_D1/dir1
mkdir $T_M0/dir0
mkdir $T_M1/dir1
echo "=== run two asynchronous calls to renameat2 NOREPLACE"
for i in $(seq 0 100); do
# prepare inputs in isolation
touch "$T_D0/dir0/old0"
touch "$T_D1/dir1/old1"
touch "$T_M0/dir0/old0"
touch "$T_M1/dir1/old1"
# race doing noreplace renames, both can't succeed
dumb_renameat2 -n "$T_D0/dir0/old0" "$T_D0/dir0/sharednew" 2> /dev/null &
dumb_renameat2 -n "$T_M0/dir0/old0" "$T_M0/dir0/sharednew" 2> /dev/null &
pid0=$!
dumb_renameat2 -n "$T_D1/dir1/old1" "$T_D1/dir0/sharednew" 2> /dev/null &
dumb_renameat2 -n "$T_M1/dir1/old1" "$T_M1/dir0/sharednew" 2> /dev/null &
pid1=$!
wait $pid0
@@ -31,7 +31,7 @@ for i in $(seq 0 100); do
test "$rc0" == 0 -a "$rc1" == 0 && t_fail "both renames succeeded"
# blow away possible files for either race outcome
rm -f "$T_D0/dir0/old0" "$T_D1/dir1/old1" "$T_D0/dir0/sharednew" "$T_D1/dir1/sharednew"
rm -f "$T_M0/dir0/old0" "$T_M1/dir1/old1" "$T_M0/dir0/sharednew" "$T_M1/dir1/sharednew"
done
t_pass

View File

@@ -62,28 +62,32 @@ test -x "$SCOUTFS_FENCED_RUN" || \
# files disappear.
#
# silence error messages
quiet_cat()
# generate failure messages to stderr while still echoing 0 for the caller
careful_cat()
{
cat "$@" 2>/dev/null
local path="$@"
cat "$@" || echo 0
}
while sleep $SCOUTFS_FENCED_DELAY; do
shopt -s nullglob
for fence in /sys/fs/scoutfs/*/fence/*; do
srv=$(basename $(dirname $(dirname $fence)))
fenced="$(quiet_cat $fence/fenced)"
error="$(quiet_cat $fence/error)"
rid="$(quiet_cat $fence/rid)"
ip="$(quiet_cat $fence/ipv4_addr)"
reason="$(quiet_cat $fence/reason)"
# request dirs can linger then disappear after fenced/error is set
if [ ! -d "$fence" -o "$fenced" == "1" -o "$error" == "1" ]; then
# catches unmatched regex when no dirs
if [ ! -d "$fence" ]; then
continue
fi
# skip requests that have been handled
if [ "$(careful_cat $fence/fenced)" == 1 -o \
"$(careful_cat $fence/error)" == 1 ]; then
continue
fi
srv=$(basename $(dirname $(dirname $fence)))
rid="$(cat $fence/rid)"
ip="$(cat $fence/ipv4_addr)"
reason="$(cat $fence/reason)"
log_message "server $srv fencing rid $rid at IP $ip for $reason"
# export _REQ_ vars for run to use

View File

@@ -55,14 +55,6 @@ with initial sparse regions (perhaps by multiple threads writing to
different regions) and wasted space isn't an issue (perhaps because the
file population contains few small files).
.TP
.B ino_alloc_per_lock=<number>
This option determines how many inode numbers are allocated in the same
cluster lock. The default, and maximum, is 1024. The minimum is 1.
Allocating fewer inodes per lock can allow more parallelism between
mounts because there are more locks that cover the same number of
created files. This can be helpful when working with smaller numbers of
large files.
.TP
.B log_merge_wait_timeout_ms=<number>
This option sets the amount of time, in milliseconds, that log merge
creation can wait before timing out. This setting is per-mount, only

View File

@@ -4,12 +4,6 @@
%{!?_release: %global _release 0.%{pkg_date}git%{pkg_git_hash}}
%if 0%{?rhel} && 0%{?rhel} < 10
%global tuned_profiles_dir %{_prefix}/lib/tuned
%else
%global tuned_profiles_dir %{_prefix}/lib/tuned/profiles
%endif
Name: scoutfs-utils
Summary: scoutfs user space utilities
Version: %{pkg_version}
@@ -63,8 +57,6 @@ install -m 644 -D src/format.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/format.h
install -m 755 -D fenced/scoutfs-fenced $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/scoutfs-fenced
install -m 644 -D fenced/scoutfs-fenced.service $RPM_BUILD_ROOT%{_unitdir}/scoutfs-fenced.service
install -m 644 -D fenced/scoutfs-fenced.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-fenced.conf.example
install -m 644 -D tuned/tuned.conf $RPM_BUILD_ROOT%{tuned_profiles_dir}/scoutfs/tuned.conf
install -m 644 -D tuned/40-scoutfs.conf $RPM_BUILD_ROOT%{_prefix}/lib/tuned/recommend.d/40-scoutfs.conf
%files
%defattr(644,root,root,755)
@@ -74,8 +66,6 @@ install -m 644 -D tuned/40-scoutfs.conf $RPM_BUILD_ROOT%{_prefix}/lib/tuned/reco
%defattr(755,root,root,755)
%{_sbindir}/scoutfs
%{_libexecdir}/scoutfs-fenced
%{tuned_profiles_dir}/scoutfs/tuned.conf
%{_prefix}/lib/tuned/recommend.d/40-scoutfs.conf
%files -n scoutfs-devel
%defattr(644,root,root,755)

View File

@@ -1,9 +0,0 @@
#
# scoutfs tuned recommendation
#
# If the system has support for mounting scoutfs filesystems, which is
# valid for client mounts and quorum mounts. We then always recommend
# the scoutfs profile.
[scoutfs]
/proc/filesystems=scoutfs

View File

@@ -1,40 +0,0 @@
#
# ScoutFS specific tuned profile
#
# The parameters below are a mix of settings present in the throughput-performance
# profile as well as the latency-performance profile. Generally speaking, we
# want to encourage the system to avoid swap and accumulating large amounts of
# dirty data, as this can cause reclaim to lead to congestion.
# Enable this profile with `$ sudo tuned-adm profile scoutfs`
# linux default values are marked with [<value>] for reference.
[main]
summary=Optimize for production scoutfs deployment
description=Configures the system for production scoutfs filesystem server deployment.
# network-throughput sets some larger buffers useful for 40gbe deployments
# network-throughput also inherits throughput-performance
include=network-throughput
[vm]
# throughput-performance sets dirty_bytes to 40% (much larger than linux default), but
# this allows the accumulation of large backlogs of writeback. We prefer to writeback
# often and early to avoid congestion [20%]
dirty_bytes = 10%
# start writing back at this amount [10%]
dirty_background_bytes = 5%
[sysctl]
# the kernel default is 60. Lower it to instruct the kernel that swapping is
# expensive and we want to avoid it. We assume scoutfs deployments have ample
# available RAM. [60]
vm.swappiness = 10
# increase pdflush runs so it can more aggressively write out dirty data [500]
vm.dirty_writeback_centisecs = 300
# decrease time dirty data will linger before being written back [3000]
vm.dirty_expire_centisecs = 2000