scoutfs-tests: initial commit

The first commit of the scoutfs-tests suite which uses multiple mounts
on one host to test multi-node scoutfs.

Signed-off-by: Zach Brown <zab@versity.com>
Author: Zach Brown
Date: 2019-08-02 13:24:42 -07:00
Commit: b9bd7d1293

63 changed files with 4366 additions and 0 deletions

tests/.gitignore

@@ -0,0 +1,4 @@
src/*.d
src/createmany
src/dumb_setxattr
src/handle_cat

tests/Makefile

@@ -0,0 +1,47 @@
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing
SHELL := /usr/bin/bash
# each binary command is built from a single .c file
BIN := src/createmany \
src/dumb_setxattr \
src/handle_cat
DEPS := $(wildcard src/*.d)
all: $(BIN)
ifneq ($(DEPS),)
-include $(DEPS)
endif
$(BIN): %: %.c Makefile
gcc $(CFLAGS) -MD -MP -MF $*.d $< -o $@
.PHONY: clean
clean:
@rm -f $(BIN) $(DEPS)
#
# Make sure each test has all three items it needs: an entry in
# sequence, a test script in tests/, and output in golden/.
#
.PHONY: check-test-files
check-test-files:
@for t in $$(grep -v "^#" sequence); do \
test -e "tests/$$t" || \
echo "no test for list entry: $$t"; \
t=$${t%%.sh}; \
test -e "golden/$$t" || \
echo "no output for list entry: $$t"; \
done; \
for t in golden/*; do \
t=$$(basename "$$t"); \
grep -q "^$$t.sh$$" sequence || \
echo "output not in list: $$t"; \
done; \
for t in tests/*; do \
t=$$(basename "$$t"); \
test "$$t" == "list" && continue; \
grep -q "^$$t$$" sequence || \
echo "test not in list: $$t"; \
done

tests/README.md

@@ -0,0 +1,126 @@
This test suite exercises multi-node scoutfs by using multiple mounts on
one host to simulate multiple nodes across a network.
It also contains a light test wrapper that executes xfstests on one of
the test mounts.
## Invoking Tests
The basic test invocation has to specify the location of locally checked
out git repos for scoutfs software that will be modified by the script,
the number of mounts to test, whether to create a new fs and insert the
built module, and where to put the results.
# bash ./run-tests.sh \
-d /dev/vda \
-i \
-K $HOME/git/scoutfs-kmod-dev \
-k master \
-m \
-n 3 \
-q 2 \
-r ./results \
-U $HOME/git/scoutfs-utils-dev \
-u master
All options can be seen by running with -h.
The script will try to check out a freshly pulled version of the
specified branch in each specified local repository. The repos should
be clean; the script will fetch from origin and check out local
branches that track the corresponding branches on origin.
This script is built to test multi-node systems on one host by using
different mounts of the same device. For each mount to be tested, the
script creates a fake block device in front of the main fs block
device. Currently it claims free loop devices and mounts on
/mnt/test.[0-9].
All tests will be run by default. Particular tests can be included or
excluded by providing test name regular expressions with the -I and -E
options. The definitive list of tests and the order in which they'll be
run is found in the sequence file.
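The -I/-E selection described above might be sketched like this (a standalone illustration; the helper name and its defaults are assumptions, not run-tests.sh's actual code):

```shell
# Hypothetical sketch of -I/-E regex selection over the sequence file.
# select_tests and its defaults are assumptions for illustration only.
select_tests() {
	local include="${1:-.}"  # -I regex; default matches every test
	local exclude="${2:-^$}" # -E regex; default matches no test
	grep -v '^#' sequence | grep -E -- "$include" | grep -Ev -- "$exclude"
}
```

Tests still run in sequence-file order because the file itself remains the source of the list.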
## xfstests
The last test that is run checks out, builds, and runs xfstests. It
needs -X and -x options for the xfstests git repo and branch. The test
verifies that the expected set of xfstests tests ran and passed.
-X $HOME/git/scoutfs-xfstests \
-x scoutfs \
An xfstests repo that knows about scoutfs is required only so that the
scoutfs cases can be sprinkled throughout the xfstests harness.
## Individual Test Invocation
Each test is run in a new bash invocation. A set of directories in the
test volume and in the results path is created for the test. Each
test's working directory isn't managed.
Test output, temp files, and dmesg snapshots are all put in a tmp/ dir
in the results/ dir. Per-test dirs are only destroyed before each test
invocation.
The harness will check for unexpected output in dmesg after each
individual test.
Each test that fails will have its results appended to the fail.log file
in the results/ directory. The details of the failure can be examined
in the directories for each test in results/output/ and results/tmp/.
## Writing tests
Tests have access to a set of t\_ prefixed bash functions that are found
in files in funcs/.
Tests complete by calling t\_ functions which indicate the result of the
test and can return a message. If the test passes then its output is
compared with known good output. If the output doesn't match then the
test fails. The t\_ completion functions return specific status codes so
that returning without calling one can be detected.
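A minimal sketch of that detection, assuming the 100..102 codes defined in funcs/exec.sh (the classify_status helper is invented for illustration, not harness code):

```shell
# Map a test's exit status to a result; anything outside the reserved
# 100..102 range means the test returned without calling a t_
# completion function. classify_status is an illustrative name only.
classify_status() {
	case "$1" in
	100) echo pass ;;
	101) echo skip ;;
	102) echo fail ;;
	*) echo "no completion call" ;;
	esac
}
```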
The golden output has to be consistent across test platforms so there
are a number of filter functions which strip out local details from
command output. t\_filter\_fs, which canonicalizes fs mount paths and
block device details, is by far the most used.
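For example, a filter in the style of t\_filter\_fs rewrites numbered mount paths and stat's device field to fixed strings (this is a standalone copy for illustration, not the harness's own definition):

```shell
# Standalone t_filter_fs-style filter for illustration: canonicalize
# numbered mount paths and stat's "Device:" field.
filter_fs() {
	sed -e 's@mnt/test\.[0-9]*@mnt/test@g' \
	    -e 's@Device: [a-fA-F0-9]*h/[0-9]*d@Device: 0h/0d@g'
}
```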
Tests can be relatively loose about checking errors. If commands
produce output in failure cases then the test will fail without having
to specifically test for errors on every command execution. Care should
be taken to make sure that blowing through a bunch of commands with no
error checking doesn't produce catastrophic results. Usually tests are
simple and it's fine.
A bare sync will sync all the mounted filesystems and ensure that
no mounts have dirty data. sync -f can be used to sync just a specific
filesystem, though it doesn't exist on all platforms.
The harness doesn't currently ensure that all mounts are restored after
each test invocation. It probably should. Currently it's the
responsibility of the test to restore any mounts it alters and there are
t\_ functions to mount all configured mount points.
## Environment Variables
Tests have a number of exported environment variables that are commonly
used during the test.
| Variable | Description | Origin | Example |
| ------------- | ------------------- | --------------- | ------------------ |
| T\_B[0-9] | per-mount blockdev | created per run | /dev/loop0 |
| T\_D[0-9] | per-mount test dir | made for test | /mnt/test.[0-9]/t |
| T\_DEVICE | main FS device | -d | /dev/sda |
| T\_EXDEV | extra scratch dev | -e | /dev/sdb |
| T\_M[0-9] | mount paths | mounted per run | /mnt/test.[0-9]/ |
| T\_NR\_MOUNTS | number of mounts | -n | 3 |
| T\_O[0-9] | mount options | created per run | -o server\_addr= |
| T\_QUORUM | quorum count | -q | 2 |
| T\_TMP | per-test tmp prefix | made for test | results/tmp/t/tmp |
| T\_TMPDIR | per-test tmp dir | made for test | results/tmp/t |
There are also a number of variables that are set in response to options
and are exported but their use is rare so they aren't included here.
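A hypothetical test fragment showing how these variables are typically consumed; the fallback assignments exist only so the sketch stands alone (the harness normally exports real values):

```shell
# Hypothetical fragment: T_TMPDIR and T_TMP normally come from the
# harness; the fallbacks below are assumptions for standalone use.
: "${T_TMPDIR:=$(mktemp -d)}"
: "${T_TMP:=$T_TMPDIR/tmp}"

echo "== write and read back a scratch file"
echo contents > "$T_TMP.scratch"
cat "$T_TMP.scratch"
```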

tests/funcs/exec.sh

@@ -0,0 +1,58 @@
t_status_msg()
{
echo "$*" > "$T_TMPDIR/status.msg"
}
export T_PASS_STATUS=100
export T_SKIP_STATUS=101
export T_FAIL_STATUS=102
export T_FIRST_STATUS="$T_PASS_STATUS"
export T_LAST_STATUS="$T_FAIL_STATUS"
t_pass()
{
exit $T_PASS_STATUS
}
t_skip()
{
t_status_msg "$@"
exit $T_SKIP_STATUS
}
t_fail()
{
t_status_msg "$@"
exit $T_FAIL_STATUS
}
#
# Quietly run a command during a test. If it succeeds then we have a
# log of its execution but its output isn't included in the test's
# compared output. If it fails then the test fails.
#
t_quiet()
{
echo "# $*" >> "$T_TMPDIR/quiet.log"
"$@" >> "$T_TMPDIR/quiet.log" 2>&1 || \
t_fail "quiet command failed"
}
#
# redirect test output back to the output of the invoking script instead
# of the compared output.
#
t_restore_output()
{
exec >&6 2>&1
}
#
# redirect a command's output back to the compared output after the
# test has restored its output
#
t_compare_output()
{
"$@" >&7 2>&1
}

tests/funcs/filter.sh

@@ -0,0 +1,55 @@
# filter out device ids and mount paths
t_filter_fs()
{
sed -e 's@mnt/test\.[0-9]*@mnt/test@g' \
-e 's@Device: [a-fA-F0-9]*h/[0-9]*d@Device: 0h/0d@g'
}
#
# Filter out expected messages. Putting messages here implies that
# tests aren't relying on messages to discover failures; they're
# directly testing the result of whatever it is that's generating the
# message.
#
t_filter_dmesg()
{
local re
# the kernel can just be noisy
re=" used greatest stack depth: "
# mkfs/mount checks partition tables
re="$re|unknown partition table"
# dm swizzling
re="$re|device doesn't appear to be in the dev hash table"
# some tests try invalid devices
re="$re|scoutfs .* error reading super block"
re="$re| EXT4-fs (.*): get root inode failed"
re="$re| EXT4-fs (.*): mount failed"
# dropping caches is fine
re="$re| drop_caches: "
# mount and unmount spew a bunch
re="$re|scoutfs.*client connected"
re="$re|scoutfs.*client disconnected"
re="$re|scoutfs.*server setting up"
re="$re|scoutfs.*server ready"
re="$re|scoutfs.*server accepted"
re="$re|scoutfs.*server closing"
re="$re|scoutfs.*server shutting down"
re="$re|scoutfs.*server stopped"
# xfstests records test execution in dmesg
re="$re| run fstests "
# tests that drop unmount io trigger fencing
re="$re|scoutfs .* error: fencing "
re="$re|scoutfs .* warning: waiting for .* lock clients"
re="$re|scoutfs .* error: client rid.*lock recovery timed out"
egrep -v "($re)"
}

tests/funcs/fs.sh

@@ -0,0 +1,213 @@
#
# Sync all previously dirty items in memory in all mounts and make them
# visible in the inode seq indexes. We have to force a sync on every
# node by dirtying data, as that's the only way to guarantee advancing
# the sequence number on each node, which limits index visibility. Some
# distros don't have sync -f so we dirty our mounts then sync
# everything.
#
t_sync_seq_index()
{
local m
for m in $T_MS; do
t_quiet touch $m
done
t_quiet sync
}
#
# Output the "f.$fsid.r.$rid" identifier string for the given mount
# number, 0 is used by default if none is specified.
#
t_ident()
{
local nr="${1:-0}"
local mnt="$(eval echo \$T_M$nr)"
local fsid
local rid
fsid=$(scoutfs statfs -s fsid "$mnt")
rid=$(scoutfs statfs -s rid "$mnt")
echo "f.${fsid:0:4}.r.${rid:0:4}"
}
#
# Output the mount's sysfs path, defaulting to mount 0 if none is
# specified.
#
t_sysfs_path()
{
local nr="$1"
echo "/sys/fs/scoutfs/$(t_ident $nr)"
}
#
# Output the mount's debugfs path, defaulting to mount 0 if none is
# specified.
#
t_debugfs_path()
{
local nr="$1"
echo "/sys/kernel/debug/scoutfs/$(t_ident $nr)"
}
#
# output all the configured test nrs for iteration
#
t_fs_nrs()
{
seq 0 $((T_NR_MOUNTS - 1))
}
#
# Output the mount nr of the current server. This takes no steps to
# ensure that the server doesn't shut down and have some other mount
# take over.
#
t_server_nr()
{
for i in $(t_fs_nrs); do
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader)" == "1" ]; then
echo $i
return
fi
done
t_fail "t_server_nr didn't find a server"
}
#
# Output the mount nr of the first client that we find. There can be
# no clients if there's only one mount, which has to be the server. This
# takes no steps to ensure that the client doesn't become a server at
# any point.
#
t_first_client_nr()
{
for i in $(t_fs_nrs); do
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader)" == "0" ]; then
echo $i
return
fi
done
t_fail "t_first_client_nr didn't find any clients"
}
t_mount()
{
local nr="$1"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet mount -t scoutfs \$T_O$nr \$T_B$nr \$T_M$nr
}
t_umount()
{
local nr="$1"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet umount \$T_B$nr
}
#
# Attempt to mount all the configured mounts, assuming that they're
# not already mounted.
#
t_mount_all()
{
local pids=""
local p
for i in $(t_fs_nrs); do
t_mount $i &
pids="$pids $!"
done
for p in $pids; do
t_quiet wait $p
done
}
#
# Attempt to unmount all the configured mounts, assuming that they're
# all mounted.
#
t_umount_all()
{
local pids=""
local p
for i in $(t_fs_nrs); do
t_umount $i &
pids="$pids $!"
done
for p in $pids; do
t_quiet wait $p
done
}
t_trigger_path() {
local nr="$1"
echo "/sys/kernel/debug/scoutfs/$(t_ident $nr)/trigger"
}
t_trigger_get() {
local which="$1"
local nr="$2"
cat "$(t_trigger_path "$nr")/$which"
}
t_trigger_show() {
local which="$1"
local string="$2"
local nr="$3"
echo "trigger $which $string: $(t_trigger_get $which $nr)"
}
t_trigger_arm() {
local which="$1"
local nr="$2"
local path=$(t_trigger_path "$nr")
echo 1 > "$path/$which"
t_trigger_show $which armed $nr
}
#
# output the value of the given counter for the given mount, defaulting
# to mount 0 if a mount isn't specified.
#
t_counter() {
local which="$1"
local nr="$2"
cat "$(t_sysfs_path $nr)/counters/$which"
}
#
# output the difference between the given counter's current value and a
# previously sampled value, defaulting to mount 0 if a mount isn't
# specified.
#
t_counter_diff() {
local which="$1"
local old="$2"
local nr="$3"
local new
new="$(t_counter $which $nr)"
echo "counter $which diff $((new - old))"
}

tests/funcs/require.sh

@@ -0,0 +1,25 @@
#
# Make sure that all the base command arguments are found in the path.
# This isn't strictly necessary as the test will naturally fail if the
# command isn't found, but it's nice to fail fast and clearly
# communicate why.
#
t_require_commands() {
local c
for c in "$@"; do
which "$c" >/dev/null 2>&1 || \
t_fail "command $c not found in path"
done
}
#
# make sure that we have at least this many mounts
#
t_require_mounts() {
local req="${1:-1}"
test "$T_NR_MOUNTS" -ge "$req" || \
t_fail "$req mounts required, only have $T_NR_MOUNTS"
}


@@ -0,0 +1,36 @@
== calculate number of files
== create per mount dirs
== generate phase scripts
== round 1: create
== round 1: online
== round 1: verify
== round 1: release
== round 1: offline
== round 1: stage
== round 1: online
== round 1: verify
== round 1: release
== round 1: offline
== round 1: unlink
== round 2: create
== round 2: online
== round 2: verify
== round 2: release
== round 2: offline
== round 2: stage
== round 2: online
== round 2: verify
== round 2: release
== round 2: offline
== round 2: unlink
== round 3: create
== round 3: online
== round 3: verify
== round 3: release
== round 3: offline
== round 3: stage
== round 3: online
== round 3: verify
== round 3: release
== round 3: offline
== round 3: unlink


@@ -0,0 +1,53 @@
== single block write
online: 1
offline: 0
st_blocks: 8
== single block overwrite
online: 1
offline: 0
st_blocks: 8
== append
online: 2
offline: 0
st_blocks: 16
== release
online: 0
offline: 2
st_blocks: 16
== duplicate release
online: 0
offline: 2
st_blocks: 16
== duplicate release past i_size
online: 0
offline: 2
st_blocks: 16
== stage
online: 2
offline: 0
st_blocks: 16
== duplicate stage
online: 2
offline: 0
st_blocks: 16
== larger file
online: 256
offline: 0
st_blocks: 2048
== partial truncate
online: 128
offline: 0
st_blocks: 1024
== single sparse block
online: 1
offline: 0
st_blocks: 8
== empty file
online: 0
offline: 0
st_blocks: 0
== non-regular file
online: 0
offline: 0
st_blocks: 0
== cleanup


@@ -0,0 +1,55 @@
== root inode updates flow back and forth
== stat of created file matches
== written file contents match
== overwritten file contents match
== appended file contents match
== fiemap matches after racy appends
== unlinked file isn't found
== symlink targets match
/mnt/test/test/basic-posix-consistency/file.targ
/mnt/test/test/basic-posix-consistency/file.targ
/mnt/test/test/basic-posix-consistency/file.targ2
/mnt/test/test/basic-posix-consistency/file.targ2
== new xattrs are visible
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="1"
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="1"
== modified xattrs are updated
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="2"
# file: /mnt/test/test/basic-posix-consistency/file
user.xat="2"
== deleted xattrs
/mnt/test/test/basic-posix-consistency/file: user.xat: No such attribute
/mnt/test/test/basic-posix-consistency/file: user.xat: No such attribute
== readdir after modification
one
two
three
four
one
two
three
four
two
four
two
four
== can delete empty dir
== some easy rename cases
--- file between dirs
--- file within dir
--- dir within dir
--- overwrite file
--- can't overwrite non-empty dir
mv: cannot move /mnt/test/test/basic-posix-consistency/dir/c/clobber to /mnt/test/test/basic-posix-consistency/dir/a/dir: Directory not empty
--- can overwrite empty dir
== path resolution
== inode indexes match after syncing existing
== inode indexes match after copying and syncing
== inode indexes match after removing and syncing


@@ -0,0 +1,4 @@
Run createmany in /mnt/test/test/createmany-parallel/0
Run createmany in /mnt/test/test/createmany-parallel/1
Run createmany in /mnt/test/test/createmany-parallel/2
Run createmany in /mnt/test/test/createmany-parallel/3


@@ -0,0 +1,3 @@
== measure initial createmany
== measure initial createmany
== measure two concurrent createmany runs


@@ -0,0 +1,2 @@
== repeated cross-mount alloc+free, totalling 2x free
== remove empty test file


@@ -0,0 +1,10 @@
== create per node dirs
== touch files on each node
== recreate the files
== turn the files into directories
== rename parent dirs
== rename parent dirs back
== create some hard links
== recreate one of the hard links
== delete the remaining hard link
== race to blow everything away


@@ -0,0 +1,4 @@
== create files and sync
== modify files
== mount and unmount
== verify files


@@ -0,0 +1,4 @@
== create per mount files
== time independent modification
== time concurrent independent modification
== time concurrent conflicting modification


@@ -0,0 +1,2 @@
=== setup files ===
=== ping-pong xattr ops ===


@@ -0,0 +1 @@
== race writing and index walking


@@ -0,0 +1,3 @@
== make test dir
== do enough stuff to make lock leaks visible
== make sure nothing has leaked


@@ -0,0 +1,2 @@
=== getcwd after lock revocation
trigger statfs_lock_purge armed: 1


@@ -0,0 +1,15 @@
=== setup test file ===
# file: /mnt/test/test/lock-shrink-consistency/dir/file
user.test="aaa"
=== commit dirty trans and revoke lock ===
trigger statfs_lock_purge armed: 1
trigger statfs_lock_purge after it fired: 0
=== change xattr on other mount ===
# file: /mnt/test/test/lock-shrink-consistency/dir/file
user.test="bbb"
=== verify new xattr under new lock on first mount ===
# file: /mnt/test/test/lock-shrink-consistency/dir/file
user.test="bbb"


@@ -0,0 +1,42 @@
== create files
== waiter shows up in ioctl
offline waiting should be empty:
0
offline waiting should now have one known entry:
== multiple waiters on same block listed once
offline waiting still has one known entry:
== different blocks show up
offline waiting now has two known entries:
== staging wakes everyone
offline waiting should be empty again:
0
== interruption does no harm
offline waiting should now have one known entry:
offline waiting should be empty again:
0
== readahead while offline does no harm
== waiting on interesting blocks works
offline waiting is empty at block 0
0
offline waiting is empty at block 1
0
offline waiting is empty at block 128
0
offline waiting is empty at block 129
0
offline waiting is empty at block 254
0
offline waiting is empty at block 255
0
== contents match when staging blocks forward
== contents match when staging blocks backwards
== truncate to same size doesn't wait
offline waiting should be empty:
0
== truncating does wait
truncate should be waiting for first block:
truncate should no longer be waiting:
0
== writing waits
should be waiting for write
== cleanup


@@ -0,0 +1,3 @@
== create files
== count allocations reading forwards
== count allocations reading backwards


@@ -0,0 +1,9 @@
== dirs shouldn't appear in data_seq queries
== two created files are present and come after each other
found first
found second
== unlinked entries must not be present
== dirty inodes can not be present
== changing metadata must increase meta seq
== changing contents must increase data seq
== make sure dirtying doesn't livelock walk


@@ -0,0 +1,146 @@
== simple whole file multi-block releasing
== release last block that straddles i_size
== release entire file past i_size
== releasing offline extents is fine
== 0 count is fine
== release past i_size is fine
== wrapped blocks fails
release ioctl failed: Invalid argument (22)
scoutfs: release failed: Invalid argument (22)
== releasing non-file fails
ioctl failed on '/mnt/test/test/simple-release-extents/file-char': Inappropriate ioctl for device (25)
release ioctl failed: Inappropriate ioctl for device (25)
scoutfs: release failed: Inappropriate ioctl for device (25)
== releasing a non-scoutfs file fails
ioctl failed on '/dev/null': Inappropriate ioctl for device (25)
release ioctl failed: Inappropriate ioctl for device (25)
scoutfs: release failed: Inappropriate ioctl for device (25)
== releasing bad version fails
release ioctl failed: Stale file handle (116)
scoutfs: release failed: Stale file handle (116)
== verify small release merging
0 0 0: (0 0 1) (1 101 4)
0 0 1: (0 0 2) (2 102 3)
0 0 2: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
0 0 3: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
0 0 4: (0 0 1) (1 101 3) (4 0 1)
0 1 0: (0 0 2) (2 102 3)
0 1 1: (0 0 2) (2 102 3)
0 1 2: (0 0 3) (3 103 2)
0 1 3: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
0 1 4: (0 0 2) (2 102 2) (4 0 1)
0 2 0: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
0 2 1: (0 0 3) (3 103 2)
0 2 2: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
0 2 3: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
0 2 4: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
0 3 0: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
0 3 1: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
0 3 2: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
0 3 3: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
0 3 4: (0 0 1) (1 101 2) (3 0 2)
0 4 0: (0 0 1) (1 101 3) (4 0 1)
0 4 1: (0 0 2) (2 102 2) (4 0 1)
0 4 2: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
0 4 3: (0 0 1) (1 101 2) (3 0 2)
0 4 4: (0 0 1) (1 101 3) (4 0 1)
1 0 0: (0 0 2) (2 102 3)
1 0 1: (0 0 2) (2 102 3)
1 0 2: (0 0 3) (3 103 2)
1 0 3: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
1 0 4: (0 0 2) (2 102 2) (4 0 1)
1 1 0: (0 0 2) (2 102 3)
1 1 1: (0 100 1) (1 0 1) (2 102 3)
1 1 2: (0 100 1) (1 0 2) (3 103 2)
1 1 3: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
1 1 4: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
1 2 0: (0 0 3) (3 103 2)
1 2 1: (0 100 1) (1 0 2) (3 103 2)
1 2 2: (0 100 1) (1 0 2) (3 103 2)
1 2 3: (0 100 1) (1 0 3) (4 104 1)
1 2 4: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
1 3 0: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
1 3 1: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
1 3 2: (0 100 1) (1 0 3) (4 104 1)
1 3 3: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
1 3 4: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
1 4 0: (0 0 2) (2 102 2) (4 0 1)
1 4 1: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
1 4 2: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
1 4 3: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
1 4 4: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
2 0 0: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
2 0 1: (0 0 3) (3 103 2)
2 0 2: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
2 0 3: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
2 0 4: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
2 1 0: (0 0 3) (3 103 2)
2 1 1: (0 100 1) (1 0 2) (3 103 2)
2 1 2: (0 100 1) (1 0 2) (3 103 2)
2 1 3: (0 100 1) (1 0 3) (4 104 1)
2 1 4: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
2 2 0: (0 0 1) (1 101 1) (2 0 1) (3 103 2)
2 2 1: (0 100 1) (1 0 2) (3 103 2)
2 2 2: (0 100 2) (2 0 1) (3 103 2)
2 2 3: (0 100 2) (2 0 2) (4 104 1)
2 2 4: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
2 3 0: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
2 3 1: (0 100 1) (1 0 3) (4 104 1)
2 3 2: (0 100 2) (2 0 2) (4 104 1)
2 3 3: (0 100 2) (2 0 2) (4 104 1)
2 3 4: (0 100 2) (2 0 3)
2 4 0: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
2 4 1: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
2 4 2: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
2 4 3: (0 100 2) (2 0 3)
2 4 4: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
3 0 0: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
3 0 1: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
3 0 2: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
3 0 3: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
3 0 4: (0 0 1) (1 101 2) (3 0 2)
3 1 0: (0 0 2) (2 102 1) (3 0 1) (4 104 1)
3 1 1: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
3 1 2: (0 100 1) (1 0 3) (4 104 1)
3 1 3: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
3 1 4: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
3 2 0: (0 0 1) (1 101 1) (2 0 2) (4 104 1)
3 2 1: (0 100 1) (1 0 3) (4 104 1)
3 2 2: (0 100 2) (2 0 2) (4 104 1)
3 2 3: (0 100 2) (2 0 2) (4 104 1)
3 2 4: (0 100 2) (2 0 3)
3 3 0: (0 0 1) (1 101 2) (3 0 1) (4 104 1)
3 3 1: (0 100 1) (1 0 1) (2 102 1) (3 0 1) (4 104 1)
3 3 2: (0 100 2) (2 0 2) (4 104 1)
3 3 3: (0 100 3) (3 0 1) (4 104 1)
3 3 4: (0 100 3) (3 0 2)
3 4 0: (0 0 1) (1 101 2) (3 0 2)
3 4 1: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
3 4 2: (0 100 2) (2 0 3)
3 4 3: (0 100 3) (3 0 2)
3 4 4: (0 100 3) (3 0 2)
4 0 0: (0 0 1) (1 101 3) (4 0 1)
4 0 1: (0 0 2) (2 102 2) (4 0 1)
4 0 2: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
4 0 3: (0 0 1) (1 101 2) (3 0 2)
4 0 4: (0 0 1) (1 101 3) (4 0 1)
4 1 0: (0 0 2) (2 102 2) (4 0 1)
4 1 1: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
4 1 2: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
4 1 3: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
4 1 4: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
4 2 0: (0 0 1) (1 101 1) (2 0 1) (3 103 1) (4 0 1)
4 2 1: (0 100 1) (1 0 2) (3 103 1) (4 0 1)
4 2 2: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
4 2 3: (0 100 2) (2 0 3)
4 2 4: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
4 3 0: (0 0 1) (1 101 2) (3 0 2)
4 3 1: (0 100 1) (1 0 1) (2 102 1) (3 0 2)
4 3 2: (0 100 2) (2 0 3)
4 3 3: (0 100 3) (3 0 2)
4 3 4: (0 100 3) (3 0 2)
4 4 0: (0 0 1) (1 101 3) (4 0 1)
4 4 1: (0 100 1) (1 0 1) (2 102 2) (4 0 1)
4 4 2: (0 100 2) (2 0 1) (3 103 1) (4 0 1)
4 4 3: (0 100 3) (3 0 2)
4 4 4: (0 100 4) (4 0 1)


@@ -0,0 +1,23 @@
== create/release/stage single block file
== create/release/stage larger file
== multiple release,drop_cache,stage cycles
== release+stage shouldn't change stat, data seq or vers
== stage does change meta_seq
== can't use stage to extend online file
stage returned -1, not 4096: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
== wrapped region fails
stage returned -1, not 4096: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
== non-block aligned offset fails
stage returned -1, not 4095: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
== non-block aligned len within block fails
stage returned -1, not 1024: error Invalid argument (22)
scoutfs: stage failed: Input/output error (5)
== partial final block that writes to i_size does work
== zero length stage doesn't bring blocks online
== stage of non-regular file fails
ioctl failed on '/mnt/test/test/simple-staging/file-char': Inappropriate ioctl for device (25)
stage returned -1, not 1: error Inappropriate ioctl for device (25)
scoutfs: stage failed: Input/output error (5)


@@ -0,0 +1,18 @@
=== XATTR_ flag combinations
dumb_setxattr -p /mnt/test/test/simple-xattr-unit/file -n user.test -v val -c -r
returned -1 errno 22 (Invalid argument)
dumb_setxattr -p /mnt/test/test/simple-xattr-unit/file -n user.test -v val -r
returned -1 errno 61 (No data available)
dumb_setxattr -p /mnt/test/test/simple-xattr-unit/file -n user.test -v val -c
returned 0
dumb_setxattr -p /mnt/test/test/simple-xattr-unit/file -n user.test -v val -c
returned -1 errno 17 (File exists)
dumb_setxattr -p /mnt/test/test/simple-xattr-unit/file -n user.test -v val -r
returned 0
=== bad lengths
setfattr: /mnt/test/test/simple-xattr-unit/file: Operation not supported
setfattr: /mnt/test/test/simple-xattr-unit/file: Numerical result out of range
setfattr: /mnt/test/test/simple-xattr-unit/file: Numerical result out of range
setfattr: /mnt/test/test/simple-xattr-unit/file: Argument list too long
=== good length boundaries
=== 500 random lengths


@@ -0,0 +1,2 @@
== create initial files
== race stage and release


@@ -0,0 +1,39 @@
== create file for xattr ping pong
# file: /mnt/test/test/stale-btree-seg-read/file
user.xat="initial"
== retry btree block read
trigger btree_stale_read armed: 1
# file: /mnt/test/test/stale-btree-seg-read/file
user.xat="btree"
trigger btree_stale_read after: 0
counter btree_stale_read diff 1
== retry segment read
trigger seg_stale_read armed: 1
# file: /mnt/test/test/stale-btree-seg-read/file
user.xat="segment"
trigger seg_stale_read after: 0
counter seg_stale_read diff 1
== get a hard error, then have it work
trigger hard_stale_error armed: 1
getfattr: /mnt/test/test/stale-btree-seg-read/file: Input/output error
trigger hard_stale_error after: 0
counter manifest_hard_stale_error diff 1
# file: /mnt/test/test/stale-btree-seg-read/file
user.xat="err"
== read through multiple stale cached btree blocks
Y
trigger btree_advance_ring_half armed: 1
trigger btree_advance_ring_half after: 0
trigger btree_advance_ring_half armed: 1
trigger btree_advance_ring_half after: 0
trigger btree_advance_ring_half armed: 1
trigger btree_advance_ring_half after: 0
trigger btree_advance_ring_half armed: 1
trigger btree_advance_ring_half after: 0
trigger statfs_lock_purge armed: 1
trigger statfs_lock_purge after: 0
N

tests/golden/xfstests

@@ -0,0 +1,281 @@
Ran:
generic/001
generic/002
generic/005
generic/006
generic/007
generic/011
generic/013
generic/014
generic/020
generic/028
generic/032
generic/034
generic/035
generic/037
generic/039
generic/040
generic/041
generic/053
generic/056
generic/057
generic/062
generic/065
generic/066
generic/067
generic/069
generic/070
generic/071
generic/073
generic/076
generic/084
generic/086
generic/087
generic/088
generic/090
generic/092
generic/098
generic/101
generic/104
generic/106
generic/107
generic/117
generic/124
generic/129
generic/131
generic/169
generic/184
generic/221
generic/228
generic/236
generic/245
generic/249
generic/257
generic/258
generic/286
generic/294
generic/306
generic/307
generic/308
generic/309
generic/313
generic/315
generic/322
generic/335
generic/336
generic/337
generic/341
generic/342
generic/343
generic/348
generic/360
generic/376
generic/377
Not run:
generic/004
generic/008
generic/009
generic/012
generic/015
generic/016
generic/018
generic/021
generic/022
generic/026
generic/031
generic/033
generic/050
generic/052
generic/058
generic/059
generic/060
generic/061
generic/063
generic/064
generic/079
generic/081
generic/082
generic/091
generic/094
generic/096
generic/110
generic/111
generic/113
generic/114
generic/115
generic/116
generic/118
generic/119
generic/121
generic/122
generic/123
generic/128
generic/130
generic/134
generic/135
generic/136
generic/138
generic/139
generic/140
generic/142
generic/143
generic/144
generic/145
generic/146
generic/147
generic/148
generic/149
generic/150
generic/151
generic/152
generic/153
generic/154
generic/155
generic/156
generic/157
generic/158
generic/159
generic/160
generic/161
generic/162
generic/163
generic/171
generic/172
generic/173
generic/174
generic/177
generic/178
generic/179
generic/180
generic/181
generic/182
generic/183
generic/185
generic/188
generic/189
generic/190
generic/191
generic/193
generic/194
generic/195
generic/196
generic/197
generic/198
generic/199
generic/200
generic/201
generic/202
generic/203
generic/205
generic/206
generic/207
generic/210
generic/211
generic/212
generic/214
generic/216
generic/217
generic/218
generic/219
generic/220
generic/222
generic/223
generic/225
generic/227
generic/229
generic/230
generic/235
generic/238
generic/240
generic/244
generic/250
generic/252
generic/253
generic/254
generic/255
generic/256
generic/259
generic/260
generic/261
generic/262
generic/263
generic/264
generic/265
generic/266
generic/267
generic/268
generic/271
generic/272
generic/276
generic/277
generic/278
generic/279
generic/281
generic/282
generic/283
generic/284
generic/287
generic/288
generic/289
generic/290
generic/291
generic/292
generic/293
generic/295
generic/296
generic/301
generic/302
generic/303
generic/304
generic/305
generic/312
generic/314
generic/316
generic/317
generic/318
generic/324
generic/326
generic/327
generic/328
generic/329
generic/330
generic/331
generic/332
generic/353
generic/355
generic/356
generic/357
generic/358
generic/359
generic/361
generic/362
generic/363
generic/364
generic/365
generic/366
generic/367
generic/368
generic/369
generic/370
generic/371
generic/372
generic/373
generic/374
generic/378
generic/379
generic/380
generic/381
generic/382
generic/383
generic/384
generic/385
generic/386
shared/001
shared/002
shared/003
shared/004
shared/032
shared/051
shared/289
Passed all 72 tests

524
tests/run-tests.sh Executable file

@@ -0,0 +1,524 @@
#!/usr/bin/bash
#
# XXX
# - could have helper functions for waiting for pids
# - *always* be gathering traces? just slow ones?
# - would be nice to show running resource consumption
# - sample quorum from super instead of option (wrong w/o -m mkfs)
# - tracing options are not great, should be smarter
#
msg() {
echo "[== $@ ==]"
}
die() {
msg "$@, exiting"
exit 1
}
# output a message with a timestamp to the run.log
log()
{
echo "[$(date '+%F %T.%N')] $*" >> "$T_RESULTS/run.log"
}
# run a logged command, exiting if it fails
cmd() {
log "$*"
"$@" >> "$T_RESULTS/run.log" 2>&1 || \
die "cmd failed (check the run.log)"
}
show_help()
{
cat << EOF
$(basename $0) options:
-a | Abort after the first test failure, leave fs mounted.
-d <file> | Specify the storage device path that contains the
| file system to be tested. Will be clobbered by -m mkfs.
-E <re> | Exclude tests whose file name matches the regular expression.
| Can be provided multiple times
-e <file> | Specify an extra storage device for testing. Will be clobbered.
-I <re> | Include tests whose file name matches the regular expression.
| By default all tests are run. If this is provided then
| only tests matching will be run. Can be provided multiple
| times
-i | Force removing and inserting the built scoutfs.ko module.
-K | scoutfs-kmod-dev git repo. Used to build kernel module.
-k | Branch to checkout in scoutfs-kmod-dev repo.
-m | Run mkfs on the device before mounting and running
| tests. Implies unmounting existing mounts first.
-n | The number of devices and mounts to test.
-p | Exit script after preparing mounts only, don't run tests.
-q <nr> | Specify the quorum count needed to mount. This is
| used when running mkfs and is needed by a few tests.
-r <dir> | Specify the directory in which to store results of
| test runs. The directory will be created if it doesn't
| exist. Previous results will be deleted as each test runs.
-T | Output trace events with printk.
-t | Enable trace events that match the given glob argument.
-U | scoutfs-utils-dev git repo. Used to build utils.
-u | Branch to checkout in scoutfs-utils-dev repo.
-X | xfstests git repo. Used by tests/xfstests.sh.
-x | xfstests git branch to checkout and track.
EOF
}
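Taken together, the options above compose into a full invocation. A hypothetical example (device paths, repo locations, and branch names are assumptions, not values from this commit; the command is echoed rather than executed):

```shell
# Hypothetical: two scratch devices, two mounts, quorum of 2, fresh mkfs,
# repos checked out under $HOME. Echo the command instead of running it.
args="-d /dev/vdb -e /dev/vdc -n 2 -q 2 -m \
 -K $HOME/scoutfs-kmod-dev -k master \
 -U $HOME/scoutfs-utils-dev -u master \
 -r /tmp/scoutfs-results"
echo "./run-tests.sh $args"
```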
# unset all the T_ variables
for v in ${!T_*}; do
eval unset $v
done
while true; do
case $1 in
-a)
T_ABORT="1"
;;
-d)
test -n "$2" || die "-d must have device file argument"
T_DEVICE="$2"
shift
;;
-E)
test -n "$2" || die "-E must have test exclusion regex argument"
T_EXCLUDE+="-e '$2' "
shift
;;
-e)
test -n "$2" || die "-e must have extra device file argument"
T_EXDEV="$2"
shift
;;
-I)
test -n "$2" || die "-I must have test inclusion regex argument"
T_INCLUDE+="-e '$2' "
shift
;;
-i)
T_INSMOD="1"
;;
-K)
test -n "$2" || die "-K must have kmod git repo dir argument"
T_KMOD_REPO="$2"
shift
;;
-k)
test -n "$2" || die "-k must have kmod git branch argument"
T_KMOD_BRANCH="$2"
shift
;;
-m)
T_MKFS="1"
;;
-n)
test -n "$2" || die "-n must have nr mounts argument"
T_NR_MOUNTS="$2"
shift
;;
-p)
T_PREPARE="1"
;;
-q)
test -n "$2" || die "-q must have quorum count argument"
T_QUORUM="$2"
shift
;;
-r)
test -n "$2" || die "-r must have results dir argument"
T_RESULTS="$2"
shift
;;
-T)
T_TRACE_PRINTK="1"
;;
-t)
test -n "$2" || die "-t must have trace glob argument"
T_TRACE_GLOB="$2"
shift
;;
-U)
test -n "$2" || die "-U must have utils git repo dir argument"
T_UTILS_REPO="$2"
shift
;;
-u)
test -n "$2" || die "-u must have utils git branch argument"
T_UTILS_BRANCH="$2"
shift
;;
-X)
test -n "$2" || die "-X requires xfstests git repo dir argument"
T_XFSTESTS_REPO="$2"
shift
;;
-x)
test -n "$2" || die "-x requires xfstests git branch argument"
T_XFSTESTS_BRANCH="$2"
shift
;;
-h|-\?|--help)
show_help
exit 1
;;
--)
break
;;
-?*)
printf 'WARN: Unknown option: %s\n' "$1" >&2
show_help
exit 1
;;
*)
break
;;
esac
shift
done
test -n "$T_DEVICE" || die "must specify -d fs device"
test -e "$T_DEVICE" || die "fs device -d '$T_DEVICE' doesn't exist"
test -n "$T_EXDEV" || die "must specify -e extra device"
test -e "$T_EXDEV" || die "extra device -e '$T_EXDEV' doesn't exist"
test -n "$T_KMOD_REPO" || die "must specify -K kmod repo dir"
test -n "$T_KMOD_BRANCH" || die "must specify -k kmod branch"
test -n "$T_MKFS" -a -z "$T_QUORUM" && die "mkfs (-m) requires quorum (-q)"
test -n "$T_RESULTS" || die "must specify -r results dir"
test -n "$T_UTILS_REPO" || die "must specify -U utils repo dir"
test -n "$T_UTILS_BRANCH" || die "must specify -u utils branch"
test -n "$T_XFSTESTS_REPO" -a -z "$T_XFSTESTS_BRANCH" && \
die "-X xfstests repo requires -x xfstests branch"
test -n "$T_XFSTESTS_BRANCH" -a -z "$T_XFSTESTS_REPO" && \
	die "-x xfstests branch requires -X xfstests repo"
test -n "$T_NR_MOUNTS" || die "must specify -n nr mounts"
test "$T_NR_MOUNTS" -ge 1 -a "$T_NR_MOUNTS" -le 8 || \
die "-n nr mounts must be >= 1 and <= 8"
# canonicalize paths
for e in T_DEVICE T_EXDEV T_KMOD_REPO T_RESULTS T_UTILS_REPO T_XFSTESTS_REPO; do
eval $e=\"$(readlink -f "${!e}")\"
done
# include everything by default
test -z "$T_INCLUDE" && T_INCLUDE="-e '.*'"
# (quickly) exclude nothing by default
test -z "$T_EXCLUDE" && T_EXCLUDE="-e '\Zx'"
# eval to strip re ticks but not expand
tests=$(grep -v "^#" sequence |
eval grep "$T_INCLUDE" | eval grep -v "$T_EXCLUDE")
test -z "$tests" && \
die "no tests found by including $T_INCLUDE and excluding $T_EXCLUDE"
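The eval'd grep filtering above can be exercised on its own; eval is what strips the single quotes that protect the accumulated `-e '<re>'` patterns from expansion (the test names below are from the sequence file):

```shell
INCLUDE="-e 'lock-' "
EXCLUDE="-e 'refleak' "
# include only lock- tests, then drop the refleak one
printf '%s\n' lock-refleak.sh lock-pr-cw-conflict.sh createmany-parallel.sh |
	eval grep "$INCLUDE" | eval grep -v "$EXCLUDE"
```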
# create results dir
test -e "$T_RESULTS" || cmd mkdir -p "$T_RESULTS"
# checkout and build kernel module
if [ -n "$T_KMOD_REPO" ]; then
msg "building kmod repo $T_KMOD_REPO branch $T_KMOD_BRANCH"
cmd cd "$T_KMOD_REPO"
cmd git fetch
cmd git rev-parse --verify "$T_KMOD_BRANCH"
cmd git checkout -B "$T_KMOD_BRANCH" --track origin/$T_KMOD_BRANCH
cmd git pull --rebase
cmd make
cmd sync
cmd cd -
kmod="$T_KMOD_REPO/src/scoutfs.ko"
fi
# checkout and build utils
if [ -n "$T_UTILS_REPO" ]; then
msg "building utils repo $T_UTILS_REPO branch $T_UTILS_BRANCH"
cmd cd "$T_UTILS_REPO"
cmd git fetch
cmd git rev-parse --verify "$T_UTILS_BRANCH"
cmd git checkout -B "$T_UTILS_BRANCH" --track origin/$T_UTILS_BRANCH
cmd git pull --rebase
# might need git clean to remove stale src/*.o after update
cmd make
cmd sync
cmd cd -
# we can now run the built scoutfs binary
PATH="$PATH:$T_UTILS_REPO/src"
fi
# verify xfstests branch
if [ -n "$T_XFSTESTS_REPO" ]; then
msg "verifying xfstests repo $T_XFSTESTS_REPO branch $T_XFSTESTS_BRANCH"
cmd cd "$T_XFSTESTS_REPO"
cmd git rev-parse --verify "$T_XFSTESTS_BRANCH"
cmd cd -
fi
# building our test binaries
msg "building test binaries"
cmd make
# set any options implied by others
test -n "$T_MKFS" && T_UNMOUNT=1
test -n "$T_INSMOD" && T_UNMOUNT=1
#
# unmount concurrently because the final quorum can only unmount once
# they're all unmounting. We unmount all mounts because we might be
# removing the module.
#
unmount_all() {
msg "unmounting all scoutfs mounts"
pids=""
for m in $(findmnt -t scoutfs -o TARGET); do
if [ -d "$m" ]; then
cmd umount "$m" &
pids="$pids $!"
fi
done
for p in $pids; do
cmd wait $p
done
# delete all temp devices
for dev in $(losetup --associated "$T_DEVICE" | cut -d : -f 1); do
if [ -e "$dev" ]; then
cmd losetup -d "$dev"
fi
done
}
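The same background-and-wait pattern is used for both unmounting and mounting; a minimal standalone sketch, with sleep standing in for the per-mount umount:

```shell
pids=""
for i in 1 2 3; do
	sleep 0.1 &		# stand-in for one per-mount umount
	pids="$pids $!"
done
# wait on each pid individually so any failure is reported
for p in $pids; do
	wait "$p" || echo "pid $p failed"
done
echo "all unmounted"
```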
if [ -n "$T_UNMOUNT" ]; then
unmount_all
fi
if [ -n "$T_MKFS" ]; then
cmd scoutfs mkfs -Q "$T_QUORUM" "$T_DEVICE"
fi
if [ -n "$T_INSMOD" ]; then
msg "removing and reinserting scoutfs module"
test -e /sys/module/scoutfs && cmd rmmod scoutfs
cmd modprobe libcrc32c
cmd insmod "$T_KMOD_REPO/src/scoutfs.ko"
fi
if [ -n "$T_TRACE_GLOB" ]; then
msg "enabling trace events"
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
for g in $T_TRACE_GLOB; do
for e in /sys/kernel/debug/tracing/events/scoutfs/$g/enable; do
echo 1 > $e
done
done
if [ -n "$T_TRACE_PRINTK" ]; then
echo 1 > /sys/kernel/debug/tracing/options/trace_printk
fi
echo 1 > /proc/sys/kernel/ftrace_dump_on_oops
cmd cat /sys/kernel/debug/tracing/set_event
cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/proc/sys/kernel/ftrace_dump_on_oops
fi
#
# mount concurrently so that a quorum is present to elect the leader and
# start a server.
#
msg "mounting $T_NR_MOUNTS mounts on $T_DEVICE"
pids=""
for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
opts="-o server_addr=127.0.0.1"
dev=$(losetup --find --show $T_DEVICE)
test -b "$dev" || die "failed to create temp device $dev"
dir="/mnt/test.$i"
test -d "$dir" || cmd mkdir -p "$dir"
msg "mounting $dev on $dir"
cmd mount -t scoutfs $opts "$dev" "$dir" &
p="$!"
pids="$pids $p"
log "background mount $i pid $p"
eval T_O$i=\"$opts\"
T_O[$i]="$opts"
T_OS+="$opts "
eval T_B$i=$dev
T_B[$i]=$dev
T_BS+="$dev "
eval T_M$i=\"$dir\"
T_M[$i]=$dir
T_MS+="$dir "
done
for p in $pids; do
log "waiting for background mount pid $p"
cmd wait $p
done
if [ -n "$T_PREPARE" ]; then
findmnt -t scoutfs
msg "-p given, exiting after preparing mounts"
exit 0
fi
# we need the STATUS definitions and filters
. funcs/exec.sh
. funcs/filter.sh
# give tests access to built binaries in src/
PATH="$PATH:$PWD/src"
msg "running tests"
> "$T_RESULTS/skip.log"
> "$T_RESULTS/fail.log"
passed=0
skipped=0
failed=0
for t in $tests; do
# tests has basenames from sequence, get path and name
t="tests/$t"
test_name=$(basename "$t" | sed -e 's/.sh$//')
# create a temporary dir and file path for the test
T_TMPDIR="$T_RESULTS/tmp/$test_name"
T_TMP="$T_TMPDIR/tmp"
cmd rm -rf "$T_TMPDIR"
cmd mkdir -p "$T_TMPDIR"
# create a test name dir in the fs
T_DS=""
for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
dir="${T_M[$i]}/test/$test_name"
test $i == 0 && cmd mkdir -p "$dir"
eval T_D$i=$dir
T_D[$i]=$dir
T_DS+="$dir "
done
# export all our T_ variables
for v in ${!T_*}; do
eval export $v
done
export PATH # give test access to scoutfs binary
# prepare to compare output to golden output
test -e "$T_RESULTS/output" || cmd mkdir -p "$T_RESULTS/output"
out="$T_RESULTS/output/$test_name"
> "$T_TMPDIR/status.msg"
golden="golden/$test_name"
# get stats from previous pass
last="$T_RESULTS/last-passed-test-stats"
stats=$(grep -s "^$test_name" "$last" | cut -d " " -f 2-)
test -n "$stats" && stats="last: $stats"
printf " %-30s $stats" "$test_name"
# record dmesg before
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.before"
# give tests stdout and compared output on specific fds
exec 6>&1
exec 7>$out
# run the test with access to our functions
start_secs=$SECONDS
bash -c "for f in funcs/*.sh; do . \$f; done; . $t" >&7 2>&1
sts="$?"
log "test $t exited with status $sts"
stats="$((SECONDS - start_secs))s"
# close our weird descriptors
exec 6>&-
exec 7>&-
# compare output if the test returned passed status
if [ "$sts" == "$T_PASS_STATUS" ]; then
if [ ! -e "$golden" ]; then
message="no golden output"
sts=$T_FAIL_STATUS
elif ! cmp -s "$golden" "$out"; then
message="output differs"
sts=$T_FAIL_STATUS
diff -u "$golden" "$out" >> "$T_RESULTS/fail.log"
fi
else
# get message from t_*() functions
message=$(cat "$T_TMPDIR/status.msg")
fi
# see if anything unexpected showed up in dmesg
if [ "$sts" == "$T_PASS_STATUS" ]; then
dmesg | t_filter_dmesg > "$T_TMPDIR/dmesg.after"
diff -u "$T_TMPDIR/dmesg.before" "$T_TMPDIR/dmesg.after" > \
"$T_TMPDIR/dmesg.diff"
if [ -s "$T_TMPDIR/dmesg.diff" ]; then
message="unexpected messages in dmesg"
sts=$T_FAIL_STATUS
cat "$T_TMPDIR/dmesg.diff" >> "$T_RESULTS/fail.log"
fi
fi
# record unknown exit status
if [ "$sts" -lt "$T_FIRST_STATUS" -o "$sts" -gt "$T_LAST_STATUS" ]; then
message="unknown status: $sts"
sts=$T_FAIL_STATUS
fi
# show and record the result of the test
if [ "$sts" == "$T_PASS_STATUS" ]; then
echo " passed: $stats"
((passed++))
# save stats for passed test
grep -s -v "^$test_name" "$last" > "$last.tmp"
echo "$test_name $stats" >> "$last.tmp"
mv -f "$last.tmp" "$last"
elif [ "$sts" == "$T_SKIP_STATUS" ]; then
echo " [ skipped: $message ]"
echo "$test_name $message" >> "$T_RESULTS/skip.log"
((skipped++))
elif [ "$sts" == "$T_FAIL_STATUS" ]; then
echo " [ failed: $message ]"
echo "$test_name $message" >> "$T_RESULTS/fail.log"
((failed++))
test -n "$T_ABORT" && die "aborting after first failure"
fi
done
msg "all tests run: $passed passed, $skipped skipped, $failed failed"
unmount_all
if [ -n "$T_TRACE_GLOB" ]; then
msg "saving traces and disabling tracing"
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
cat /sys/kernel/debug/tracing/trace > "$T_RESULTS/traces"
fi
if [ "$skipped" == 0 -a "$failed" == 0 ]; then
msg "all tests passed"
exit 0
fi
if [ "$skipped" != 0 ]; then
msg "$skipped tests skipped, check skip.log"
fi
if [ "$failed" != 0 ]; then
msg "$failed tests failed, check fail.log"
fi
exit 1

27
tests/sequence Normal file

@@ -0,0 +1,27 @@
export-get-name-parent.sh
basic-block-counts.sh
inode-items-updated.sh
simple-inode-index.sh
simple-staging.sh
simple-release-extents.sh
offline-extent-waiting.sh
simple-xattr-unit.sh
segment-cache-fwd-back-iter.sh
lock-refleak.sh
lock-shrink-consistency.sh
lock-pr-cw-conflict.sh
lock-revoke-getcwd.sh
createmany-parallel.sh
createmany-large-names.sh
stage-release-race-alloc.sh
basic-posix-consistency.sh
dirent-consistency.sh
lock-ex-race-processes.sh
lock-conflicting-batch-commit.sh
cross-mount-data-free.sh
mount-unmount-race.sh
createmany-parallel-mounts.sh
archive-light-cycle.sh
# this creates a ton of tiny btree blocks so we do it last
stale-btree-seg-read.sh
xfstests.sh

208
tests/src/createmany.c Normal file

@@ -0,0 +1,208 @@
/* -*- mode: c; c-basic-offset: 8; indent-tabs-mode: nil; -*-
* vim:expandtab:shiftwidth=8:tabstop=8:
*
* GPL HEADER START
*
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 only,
* as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License version 2 for more details (a copy is included
* in the LICENSE file that accompanied this code).
*
* You should have received a copy of the GNU General Public License
* version 2 along with this program; If not, see
* http://www.sun.com/software/products/lustre/docs/GPLv2.pdf
*
* Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa Clara,
* CA 95054 USA or visit www.sun.com if you need additional information or
* have any questions.
*
* GPL HEADER END
*/
/*
* Copyright (c) 2002, 2010, Oracle and/or its affiliates. All rights reserved.
* Use is subject to license terms.
*/
/*
* This file is part of Lustre, http://www.lustre.org/
* Lustre is a trademark of Sun Microsystems, Inc.
*/
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <time.h>
#include <errno.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
#include <getopt.h>
static void usage(char *prog)
{
printf("usage: %s {-o|-m|-d|-l<tgt>} [-r altpath ] filenamefmt count\n", prog);
printf("       %s {-o|-m|-d|-l<tgt>} [-r altpath ] filenamefmt -seconds\n", prog);
printf(" %s {-o|-m|-d|-l<tgt>} [-r altpath ] filenamefmt start count\n", prog);
exit(EXIT_FAILURE);
}
static char *get_file_name(const char *fmt, long n, int has_fmt_spec)
{
static char filename[4096];
int bytes;
bytes = has_fmt_spec ? snprintf(filename, 4095, fmt, n) :
snprintf(filename, 4095, "%s%ld", fmt, n);
if (bytes >= 4095) {
printf("file name too long\n");
exit(EXIT_FAILURE);
}
return filename;
}
double now(void)
{
struct timeval tv;
gettimeofday(&tv, NULL);
return (double)tv.tv_sec + (double)tv.tv_usec / 1000000;
}
int main(int argc, char ** argv)
{
long i;
int rc = 0, do_open = 0, do_link = 0, do_mkdir = 0;
int do_unlink = 0, do_mknod = 0;
char *filename;
char *fmt = NULL, *fmt_unlink = NULL, *tgt = NULL;
double start, last;
long begin = 0, end = ~0UL >> 1, count = ~0UL >> 1;
int c, has_fmt_spec = 0, unlink_has_fmt_spec = 0;
/* Handle the last argument in form of "-seconds" */
if (argc > 1 && argv[argc - 1][0] == '-') {
char *endp;
argc--;
end = strtol(argv[argc] + 1, &endp, 0);
if (end <= 0 || *endp != '\0')
usage(argv[0]);
end = end + time(NULL);
}
while ((c = getopt(argc, argv, "omdl:r:")) != -1) {
switch(c) {
case 'o':
do_open++;
break;
case 'm':
do_mknod++;
break;
case 'd':
do_mkdir++;
break;
case 'l':
do_link++;
tgt = optarg;
break;
case 'r':
do_unlink++;
fmt_unlink = optarg;
break;
case '?':
printf("Unknown option '%c'\n", optopt);
usage(argv[0]);
}
}
if (do_open + do_mkdir + do_link + do_mknod != 1 ||
do_unlink > 1)
usage(argv[0]);
switch (argc - optind) {
case 3:
begin = strtol(argv[argc - 2], NULL, 0);
case 2:
count = strtol(argv[argc - 1], NULL, 0);
if (end != ~0UL >> 1)
usage(argv[0]);
case 1:
fmt = argv[optind];
break;
default:
usage(argv[0]);
}
start = last = now();
has_fmt_spec = strchr(fmt, '%') != NULL;
if (do_unlink)
unlink_has_fmt_spec = strchr(fmt_unlink, '%') != NULL;
for (i = 0; i < count && time(NULL) < end; i++, begin++) {
filename = get_file_name(fmt, begin, has_fmt_spec);
if (do_open) {
int fd = open(filename, O_CREAT|O_RDWR, 0644);
if (fd < 0) {
printf("open(%s) error: %s\n", filename,
strerror(errno));
rc = errno;
break;
}
close(fd);
} else if (do_link) {
rc = link(tgt, filename);
if (rc) {
printf("link(%s, %s) error: %s\n",
tgt, filename, strerror(errno));
rc = errno;
break;
}
} else if (do_mkdir) {
rc = mkdir(filename, 0755);
if (rc) {
printf("mkdir(%s) error: %s\n",
filename, strerror(errno));
rc = errno;
break;
}
} else {
rc = mknod(filename, S_IFREG| 0444, 0);
if (rc) {
printf("mknod(%s) error: %s\n",
filename, strerror(errno));
rc = errno;
break;
}
}
if (do_unlink) {
filename = get_file_name(fmt_unlink, begin,
unlink_has_fmt_spec);
rc = do_mkdir ? rmdir(filename) : unlink(filename);
if (rc) {
printf("unlink(%s) error: %s\n",
filename, strerror(errno));
rc = errno;
break;
}
}
if (i && (i % 10000) == 0) {
printf(" - created %ld (time %.2f total %.2f last %.2f)"
"\n", i, now(), now() - start, now() - last);
last = now();
}
}
printf("total: %ld creates%s in %.2f seconds: %.2f creates/second\n", i,
do_unlink ? "/deletions" : "",
now() - start, ((double)i / (now() - start)));
return rc;
}

116
tests/src/dumb_setxattr.c Normal file

@@ -0,0 +1,116 @@
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/xattr.h>
#include <ctype.h>
#include <string.h>
#include <errno.h>
/*
* int setxattr(const char *path, const char *name,
* const void *value, size_t size, int flags);
*/
static void exit_usage(void)
{
printf(" -h/-? output this usage message and exit\n"
" -c add XATTR_CREATE to flags\n"
" -f <num> add parsed number to flags (defaults to 0)\n"
" -n <string> xattr name string\n"
" -N <num> raw xattr name pointer\n"
" -p <string> file path string\n"
" -P <num> raw file path pointer\n"
" -r add XATTR_REPLACE to flags\n"
" -s <num> xattr value size (defaults to strlen(-v))\n"
" -v <string> xattr value string\n"
" -V <num> raw xattr value pointer\n");
exit(1);
}
int main(int argc, char **argv)
{
unsigned char opts[256] = {0,};
char *path = NULL;
char *name = NULL;
char *value = NULL;
size_t size = 0;
int flags = 0;
int ret;
int c;
while ((c = getopt(argc, argv, "+cf:n:N:p:P:s:rv:V:")) != -1) {
switch (c) {
case 'c':
flags |= XATTR_CREATE;
break;
case 'f':
flags |= strtol(optarg, NULL, 0);
break;
case 'n':
name = strdup(optarg);
break;
case 'N':
name = (void *)strtol(optarg, NULL, 0);
break;
case 'p':
path = strdup(optarg);
break;
case 'P':
path = (void *)strtol(optarg, NULL, 0);
break;
case 'r':
flags |= XATTR_REPLACE;
break;
case 's':
size = strtoll(optarg, NULL, 0);
break;
case 'v':
value = strdup(optarg);
break;
case 'V':
value = (void *)strtol(optarg, NULL, 0);
break;
case '?':
printf("unknown argument: %c\n", optopt);
case 'h':
exit_usage();
}
opts[c] = 1;
}
if (!opts['p'] && !opts['P']) {
printf("specify path with -p or raw path pointer with -P\n");
exit(1);
}
if (!opts['n'] && !opts['N']) {
printf("specify name with -n or raw name pointer with -N\n");
exit(1);
}
if (!opts['v'] && !opts['V']) {
printf("specify value with -v or raw value pointer with -V\n");
exit(1);
}
if (!opts['s']) {
if (opts['v']) {
size = strlen(value);
} else {
printf("specify size with -s when using -V\n");
exit(1);
}
}
ret = setxattr(path, name, value, size, flags);
if (ret)
printf("returned %d errno %d (%s)\n",
ret, errno, strerror(errno));
else
printf("returned %d\n", ret);
return 0;
}

84
tests/src/handle_cat.c Normal file

@@ -0,0 +1,84 @@
/*
* Given a scoutfs mountpoint and an inode number, open the inode by
* handle and print the contents to stdout.
*
* Copyright (C) 2018 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <inttypes.h>
#include <errno.h>
#include <endian.h>
#include <linux/types.h>
#define FILEID_SCOUTFS 0x81
#define FILEID_SCOUTFS_WITH_PARENT 0x82
struct our_handle {
struct file_handle handle;
/*
* scoutfs file handle can be ino or ino/parent. The
* handle_type field of struct file_handle denotes which
* version is in use. We only use the ino variant here.
*/
__le64 scoutfs_ino;
};
#define SZ 4096
char buf[SZ];
int main(int argc, char **argv)
{
int fd, mntfd, bytes;
char *mnt;
uint64_t ino;
struct our_handle handle;
if (argc < 3) {
printf("%s <mountpoint> <inode #>\n", argv[0]);
return 1;
}
mnt = argv[1];
ino = strtoull(argv[2], NULL, 10);
mntfd = open(mnt, O_RDONLY);
if (mntfd == -1) {
perror("while opening mountpoint");
return 1;
}
handle.handle.handle_bytes = sizeof(struct our_handle);
handle.handle.handle_type = FILEID_SCOUTFS;
handle.scoutfs_ino = htole64(ino);
fd = open_by_handle_at(mntfd, &handle.handle, O_RDONLY);
if (fd == -1) {
perror("while opening inode by handle");
return 1;
}
while ((bytes = read(fd, buf, SZ)) > 0)
write(STDOUT_FILENO, buf, bytes);
close(fd);
close(mntfd);
return 0;
}


@@ -0,0 +1,191 @@
#
# Concurrently perform archive ops on per-mount sets of files.
#
# Each mount has its own directory. Each mount has processes which
# perform operations on the files in the mount directories.
#
# The test is organized as multiple rounds of the processes going
# through phases where they perform an operation on all their files.
#
# The phases are implemented as scripts that perform the operation on
# all the processes' files and which are run concurrently on all the
# mounts during the phase.
#
# The test will raise errors if the scripts produce unexpected output or
# exit with non-zero status.
#
t_require_commands perl md5sum bc cut tr cmp scoutfs
#
# static size config, the rest is derived from mem or fs size
#
ROUNDS=3
PROCS_PER_MOUNT=2
MIN_FILE_BYTES=$((1024 * 1024))
MAX_FILE_BYTES=$((1024 * 1024 * 1024))
hashed_u64()
{
local str="$1"
local hex=$(echo "$str" | md5sum | cut -b1-16 | tr a-z A-Z)
echo "ibase=16; $hex" | bc
}
# random size within min and max for a given file, rounded up to a block
file_bytes()
{
local path="$1"
local nr=$(hashed_u64 "$path bytes")
local bytes=$(echo "($nr % ($MAX_FILE_BYTES - $MIN_FILE_BYTES)) + $MIN_FILE_BYTES" | bc)
echo "(($bytes + 4095) / 4096) * 4096" | bc
}
# run the named script in the background for each process on each mount
# and wait for them to finish
run_scripts()
{
local name="$1"
local script
local pids=""
local pid=""
local rc
local n
local p
for n in $(t_fs_nrs); do
for p in $(seq 1 $PROCS_PER_MOUNT); do
script="$T_D0/$name-$n-$p"
bash "$script" &
rc="$?"
pid="$!"
if [ "$rc" != 0 ]; then
echo failed to run script $script: rc $rc
continue
fi
echo "script $script pid $pid" >> $T_TMP.log
pids="$pids $pid"
done
done
for pid in $pids; do
wait $pid
rc="$?"
if [ "$rc" == "127" ]; then
continue
fi
if [ "$rc" != "0" ]; then
echo "script pid $pid failed: rc $rc"
fi
done
}
#
# Given static processes per mount and min and max file sizes, figure
# out the number of file sizes to work with so that all the files
# are limited by half of either fs size or memory, whichever is lesser.
#
echo "== calculate number of files"
# get meg config from lesser of mem or fs capacity
MEM_MEGS=$(free -m | awk '($1 == "Mem:"){print $2}')
if [ "$MEM_MEGS" -lt 256 -o "$MEM_MEGS" -gt $((1024 * 1024 * 1024)) ]; then
t_fail "host has questionable $MEM_MEGS MiB of mem?"
fi
MEM_MEGS=$((MEM_MEGS / 2))
FS_FREE_BLOCKS=$(stat -f -c '%f' "$T_M0")
FS_BLOCK_SIZE=$(stat -f -c '%S' "$T_M0")
FS_MEGS=$((FS_FREE_BLOCKS * FS_BLOCK_SIZE / (1024 * 1024)))
FS_MEGS=$((FS_MEGS / 2))
if [ "$MEM_MEGS" -lt "$FS_MEGS" ]; then
TARGET_MEGS=$MEM_MEGS
else
TARGET_MEGS=$FS_MEGS
fi
# calculated config
AVG_FILE_BYTES=$(((MIN_FILE_BYTES + MAX_FILE_BYTES) / 2))
TARGET_BYTES=$((TARGET_MEGS * 1024 * 1024))
TARGET_FILES=$((TARGET_BYTES / AVG_FILE_BYTES))
FILES_PER_PROC=$((TARGET_FILES / (PROCS_PER_MOUNT * T_NR_MOUNTS)))
test "$FILES_PER_PROC" -lt 2 && FILES_PER_PROC=2
for a in ROUNDS MIN_FILE_BYTES MAX_FILE_BYTES TARGET_BYTES PROCS_PER_MOUNT \
AVG_FILE_BYTES TARGET_FILES FILES_PER_PROC MEM_MEGS FS_FREE_BLOCKS \
FS_BLOCK_SIZE FS_MEGS TARGET_MEGS; do
eval echo $a=\$$a >> $T_TMP.log
done
echo "== create per mount dirs"
for n in $(t_fs_nrs); do
eval dir="\$T_D${n}/dir/$n"
t_quiet mkdir -p "$dir"
done
#
# Our unique file content pattern is 4k "blocks" written as single
# lines that start with unique identifying values padded with spaces.
#
echo "perl -e 'for (my \$i = 0; \$i < '\$1'; \$i++) { printf(\"mount %020u process %020u file %020u blkno %020u%s\\n\", '\$2', '\$3', '\$4', \$i, \" \" x 3987); }'" > $T_D0/gen
echo "== generate phase scripts"
for n in $(t_fs_nrs); do
for p in $(seq 1 $PROCS_PER_MOUNT); do
gen="$T_D0/gen"
create="$T_D0/create-$n-$p"
> $create
verify="$T_D0/verify-$n-$p"
> $verify
release="$T_D0/release-$n-$p"
> $release
stage="$T_D0/stage-$n-$p"
> $stage
online="$T_D0/online-$n-$p"
> $online
offline="$T_D0/offline-$n-$p"
> $offline
unlink="$T_D0/unlink-$n-$p"
> $unlink
for f in $(seq 1 $FILES_PER_PROC); do
eval path="\$T_D${n}/dir/$n/$p-$f"
bytes=$(file_bytes "$path")
blocks=$(echo "$bytes / 4096" | bc)
echo "bash $gen $blocks $n $p $f > $path" >> $create
echo "cmp $path <(bash $gen $blocks $n $p $f)" >> $verify
echo "vers=\$(scoutfs stat -s data_version $path)" >> $release
echo "scoutfs release $path \$vers 0 $blocks" >> $release
echo "vers=\$(scoutfs stat -s data_version $path)" >> $stage
echo "scoutfs stage $path \$vers 0 $bytes <(bash $gen $blocks $n $p $f)" >> $stage
echo "rm -f $path" >> $unlink
echo "x=\$(scoutfs stat -s online_blocks $path)" >> $online
echo "test \$x == $blocks || echo $path has \$x online blocks, expected $blocks" >> $online
echo "x=\$(scoutfs stat -s offline_blocks $path)" >> $online
echo "test \$x == 0 || echo $path has \$x offline blocks, expected 0" >> $online
sed -e 's/online/SWIZZLE/g' -e 's/offline/online/g' -e 's/SWIZZLE/offline/g' \
< $online > $offline
done
done
done
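The three-step sed at the end of the loop swaps "online" and "offline" through a placeholder so the second substitution can't clobber the first; standalone:

```shell
# direct s/online/offline/ followed by s/offline/online/ would undo itself,
# so route one word through a temporary token
echo "online and offline" |
	sed -e 's/online/SWIZZLE/g' -e 's/offline/online/g' -e 's/SWIZZLE/offline/g'
```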
for i in $(seq 1 $ROUNDS); do
for a in create online verify \
release offline \
stage online verify \
release offline unlink; do
echo "== round $i: $a"
run_scripts "$a"
done
done
t_pass


@@ -0,0 +1,105 @@
#
# Test basic correctness of tracking online, offline, and st_blocks
# counts.
#
t_require_commands scoutfs dd truncate touch mkdir rm rmdir
# if vers is "stat" then we ask stat_more for the data_version
release_vers() {
local file="$1"
local vers="$2"
local block="$3"
local count="$4"
if [ "$vers" == "stat" ]; then
vers=$(scoutfs stat -s data_version "$file")
fi
scoutfs release "$file" "$vers" "$block" "$count"
}
# if vers is "stat" then we ask stat_more for the data_version
stage_vers() {
local file="$1"
local vers="$2"
local offset="$3"
local count="$4"
local contents="$5"
if [ "$vers" == "stat" ]; then
vers=$(scoutfs stat -s data_version "$file")
fi
scoutfs stage "$file" "$vers" "$offset" "$count" "$contents"
}
echo_blocks()
{
echo "online:" $(scoutfs stat -s online_blocks "$1")
echo "offline:" $(scoutfs stat -s offline_blocks "$1")
echo "st_blocks:" $(stat -c '%b' "$1")
}
FILE="$T_D0/file"
DIR="$T_D0/dir"
echo "== single block write"
dd if=/dev/zero of="$FILE" bs=4K count=1 status=none
echo_blocks "$FILE"
echo "== single block overwrite"
dd if=/dev/zero of="$FILE" bs=4K count=1 conv=notrunc status=none
echo_blocks "$FILE"
echo "== append"
dd if=/dev/zero of="$FILE" bs=4K count=1 conv=notrunc oflag=append status=none
echo_blocks "$FILE"
echo "== release"
release_vers "$FILE" stat 0 2
echo_blocks "$FILE"
echo "== duplicate release"
release_vers "$FILE" stat 0 2
echo_blocks "$FILE"
echo "== duplicate release past i_size"
release_vers "$FILE" stat 0 16
echo_blocks "$FILE"
echo "== stage"
stage_vers "$FILE" stat 0 8192 /dev/zero
echo_blocks "$FILE"
echo "== duplicate stage"
stage_vers "$FILE" stat 0 8192 /dev/zero
echo_blocks "$FILE"
echo "== larger file"
dd if=/dev/zero of="$FILE" bs=1M count=1 status=none
echo_blocks "$FILE"
echo "== partial truncate"
truncate -s 512K "$FILE"
echo_blocks "$FILE"
echo "== single sparse block"
rm -f "$FILE"
dd if=/dev/zero of="$FILE" bs=4K count=1 seek=1K status=none
echo_blocks "$FILE"
echo "== empty file"
rm -f "$FILE"
touch "$FILE"
echo_blocks "$FILE"
echo "== non-regular file"
mkdir "$DIR"
echo_blocks "$DIR"
echo "== cleanup"
rm -f "$FILE"
rmdir "$DIR"
t_pass


@@ -0,0 +1,200 @@
#
# Test basic clustered posix consistency. We perform a bunch of
# operations in one mount and verify the results in another.
#
t_require_commands getfattr setfattr dd filefrag diff touch stat scoutfs
t_require_mounts 2
GETFATTR="getfattr --absolute-names"
SETFATTR="setfattr"
DD="dd status=none"
FILEFRAG="filefrag -v -b4096"
echo "== root inode updates flow back and forth"
sleep 1
touch "$T_M1"
stat "$T_M0" 2>&1 | t_filter_fs > "$T_TMP.0"
stat "$T_M1" 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
sleep 1
touch "$T_M0"
stat "$T_M0" 2>&1 | t_filter_fs > "$T_TMP.0"
stat "$T_M1" 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== stat of created file matches"
touch "$T_D0/file"
stat "$T_D0/file" 2>&1 | t_filter_fs > "$T_TMP.0"
stat "$T_D1/file" 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== written file contents match"
$DD if=/dev/urandom of="$T_D0/file" bs=4K count=1024
od -x "$T_D0/file" > "$T_TMP.0"
od -x "$T_D1/file" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== overwritten file contents match"
$DD if=/dev/urandom of="$T_D0/file" bs=4K count=1024 conv=notrunc
od -x "$T_D0/file" > "$T_TMP.0"
od -x "$T_D1/file" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== appended file contents match"
$DD if=/dev/urandom of="$T_D0/file" bs=1 count=1 conv=notrunc oflag=append
od -x "$T_D0/file" > "$T_TMP.0"
od -x "$T_D1/file" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== fiemap matches after racey appends"
for i in $(seq 1 10); do
$DD if=/dev/urandom of="$T_D0/file" bs=4096 count=1 \
conv=notrunc oflag=append &
$DD if=/dev/urandom of="$T_D1/file" bs=4096 count=1 \
conv=notrunc oflag=append &
wait
done
$FILEFRAG "$T_D0/file" | t_filter_fs > "$T_TMP.0"
$FILEFRAG "$T_D1/file" | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== unlinked file isn't found"
rm -f "$T_D0/file"
stat "$T_D0/file" 2>&1 | t_filter_fs > "$T_TMP.0"
stat "$T_D1/file" 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== symlink targets match"
ln -s "$T_D0/file.targ" "$T_D0/file"
readlink "$T_D0/file" | t_filter_fs
readlink "$T_D1/file" | t_filter_fs
rm -f "$T_D1/file"
ln -s "$T_D0/file.targ2" "$T_D0/file"
readlink "$T_D0/file" | t_filter_fs
readlink "$T_D1/file" | t_filter_fs
rm -f "$T_D1/file"
echo "== new xattrs are visible"
touch "$T_D0/file"
$SETFATTR -n user.xat -v 1 "$T_D0/file"
$GETFATTR -n user.xat "$T_D0/file" 2>&1 | t_filter_fs
$GETFATTR -n user.xat "$T_D1/file" 2>&1 | t_filter_fs
echo "== modified xattrs are updated"
$SETFATTR -n user.xat -v 2 "$T_D1/file"
$GETFATTR -n user.xat "$T_D0/file" 2>&1 | t_filter_fs
$GETFATTR -n user.xat "$T_D1/file" 2>&1 | t_filter_fs
echo "== deleted xattrs"
$SETFATTR -x user.xat "$T_D0/file"
$GETFATTR -n user.xat "$T_D0/file" 2>&1 | t_filter_fs
$GETFATTR -n user.xat "$T_D1/file" 2>&1 | t_filter_fs
rm -f "$T_D1/file"
echo "== readdir after modification"
mkdir "$T_D0/dir"
ls -UA "$T_D0/dir"
ls -UA "$T_D1/dir"
touch "$T_D1/dir"/{one,two,three,four}
ls -UA "$T_D0/dir"
ls -UA "$T_D1/dir"
rm -f "$T_D0/dir"/{one,three}
ls -UA "$T_D0/dir"
ls -UA "$T_D1/dir"
rm -f "$T_D0/dir"/{two,four}
ls -UA "$T_D0/dir"
ls -UA "$T_D1/dir"
echo "== can delete empty dir"
rmdir "$T_D1/dir"
echo "== some easy rename cases"
echo "--- file between dirs"
mkdir -p "$T_D0/dir/a"
mkdir -p "$T_D0/dir/b"
touch "$T_D0/dir/a/file"
mv "$T_D1/dir/a/file" "$T_D1/dir/b/file"
find "$T_D0/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.0"
find "$T_D1/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "--- file within dir"
mv "$T_D1/dir/b/file" "$T_D1/dir/b/file2"
find "$T_D0/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.0"
find "$T_D1/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "--- dir within dir"
mv "$T_D0/dir/b" "$T_D0/dir/c"
find "$T_D0/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.0"
find "$T_D1/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "--- overwrite file"
touch "$T_D1/dir/c/file"
mv "$T_D0/dir/c/file2" "$T_D0/dir/c/file"
find "$T_D0/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.0"
find "$T_D1/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "--- can't overwrite non-empty dir"
mkdir "$T_D0/dir/a/dir"
touch "$T_D0/dir/a/dir/nope"
mkdir "$T_D1/dir/c/clobber"
mv -T "$T_D1/dir/c/clobber" "$T_D1/dir/a/dir" 2>&1 | t_filter_fs
find "$T_D0/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.0"
find "$T_D1/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "--- can overwrite empty dir"
rm "$T_D0/dir/a/dir/nope"
mv -T "$T_D1/dir/c/clobber" "$T_D1/dir/a/dir"
find "$T_D0/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.0"
find "$T_D1/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
rm -rf "$T_D0/dir"
echo "== path resolution"
touch "$T_D0/file"
ino=$(stat -c '%i' $T_D0/file)
for i in $(seq 1 1); do
for j in $(seq 1 4); do
lnk="$T_D0/dir/$RANDOM/$RANDOM/$RANDOM/$RANDOM"
mkdir -p $(dirname $lnk)
ln "$T_D0/file" $lnk
scoutfs ino-path $ino "$T_M0" > "$T_TMP.0"
scoutfs ino-path $ino "$T_M1" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
done
done
rm -rf "$T_D0/dir"
echo "== inode indexes match after syncing existing"
t_sync_seq_index
scoutfs walk-inodes meta_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes meta_seq 0 -1 "$T_M1" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
scoutfs walk-inodes data_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes data_seq 0 -1 "$T_M1" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== inode indexes match after copying and syncing"
mkdir "$T_D0/dir"
cp -ar /boot/conf* "$T_D0/dir"
t_sync_seq_index
scoutfs walk-inodes meta_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes meta_seq 0 -1 "$T_M1" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
scoutfs walk-inodes data_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes data_seq 0 -1 "$T_M1" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
echo "== inode indexes match after removing and syncing"
rm -f "$T_D1/dir"/conf*
t_sync_seq_index
scoutfs walk-inodes meta_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes meta_seq 0 -1 "$T_M1" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
scoutfs walk-inodes data_seq 0 -1 "$T_M0" > "$T_TMP.0"
scoutfs walk-inodes data_seq 0 -1 "$T_M1" > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
t_pass


@@ -0,0 +1,22 @@
#
# Create a lot of files with large names. In the past this caught bugs
# in the btree code as it stored manifest entries with large keys that
# stored directory entry names. Now keys are a small fixed size so this
# has less of an effect.
#
t_require_commands createmany
DIRS="0 1 2 3"
COUNT=100000
for i in $DIRS; do
d="$T_D0/$i"
mkdir -p "$d"
# Use an absurdly long file name to blow the dirent key sizes out
./src/createmany -o $d/file_$(printf "a%.0s" {1..195})_$i $COUNT \
>> $T_TMP.log &
done
wait
t_pass


@@ -0,0 +1,47 @@
#
# Test clustered parallel createmany
#
t_require_commands mkdir createmany
t_require_mounts 2
COUNT=50000
# Prep dirs for test. Each mount needs to make their own parent dir for
# the createmany run, otherwise both dirs will end up in the same inode
# group, causing updates to bounce that lock around.
echo "== prep per-mount dirs"
mkdir -p $T_D0/dir/0
mkdir $T_D1/dir/1
echo "== measure initial createmany"
START=$SECONDS
createmany -o "$T_D0/file_" $COUNT >> $T_TMP.full
SINGLE=$((SECONDS - START))
echo single $SINGLE >> $T_TMP.full
echo "== measure two concurrent createmany runs"
START=$SECONDS
createmany -o $T_D0/dir/0/file $COUNT > /dev/null &
pids="$!"
createmany -o $T_D1/dir/1/file $COUNT > /dev/null &
pids="$pids $!"
for p in $pids; do
wait $p
done
BOTH=$((SECONDS - START))
echo both $BOTH >> $T_TMP.full
# Multi-node operation still adds significant overhead, even with our CW
# locks being effectively local to one node for this test. Different
# hardware setups might have a different amount of skew on the result as
# well. Cover for this with a sufficiently large safety factor so
# we're not needlessly tripping up testing. We will still easily
# exceed this factor should the CW locked items go back to fully
# synchronized operation.
FACTOR=200
if [ "$BOTH" -gt $(($SINGLE*$FACTOR)) ]; then
echo "both createmany took $BOTH sec, more than $FACTOR x single $SINGLE sec"
fi
t_pass


@@ -0,0 +1,22 @@
#
# Run createmany in parallel, make sure we don't crash or throw errors
#
t_require_commands createmany
DIRS="0 1 2 3"
COUNT=1000
for i in $DIRS; do
d="$T_D0/$i"
echo "Run createmany in $d" | t_filter_fs
mkdir -p "$d"
createmany -o "$d/file_$i" $COUNT >> $T_TMP.log &
done
wait
for i in $DIRS; do
rm -fr "$T_D0/$i"
done
t_pass


@@ -0,0 +1,26 @@
#
# cross mount freeing
#
# We should be able to continually allocate on one node and free
# on another and have free blocks flow without seeing premature
# enospc failures.
#
t_require_commands stat fallocate truncate
t_require_mounts 2
echo "== repeated cross-mount alloc+free, totalling 2x free"
free_blocks=$(stat -f -c "%a" "$T_M0")
file_blocks=$((free_blocks / 10))
iter=$((free_blocks * 2 / file_blocks))
file_size=$((file_blocks * 4096))
for i in $(seq 1 $iter); do
fallocate -l $file_size "$T_D0/file"
truncate -s 0 "$T_D1/file"
done
echo "== remove empty test file"
t_quiet rm $T_D0/file
t_pass
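The sizing arithmetic above can be sanity-checked in isolation; this is a minimal sketch using a hypothetical free-block count (a stand-in for the real `stat -f` output), showing each file covering a tenth of free space and the loop cycling roughly twice the free space through alloc+free:

```shell
#!/usr/bin/env bash
# Hypothetical free space: 1,000,000 4KiB blocks.
free_blocks=1000000
file_blocks=$((free_blocks / 10))        # each fallocate covers 1/10th of free
iter=$((free_blocks * 2 / file_blocks))  # enough iterations to total 2x free
file_size=$((file_blocks * 4096))        # bytes handed to fallocate -l
echo "$file_blocks $iter $file_size"
```

With these numbers the loop would allocate and free 20 files of roughly 400MB each, which exercises free-block flow between the mounts.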


@@ -0,0 +1,107 @@
#
# basic dirent consistency
#
t_require_mounts 2
# atime isn't consistent
compare_find_stat_on_all_mounts() {
local path
local i
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/dir"
find $path | sort | xargs stat 2>&1 | t_filter_fs | \
grep -v "^Access: [0-9]*" > $T_TMP.stat.$i
done
for i in $(t_fs_nrs); do
diff -u $T_TMP.stat.0 $T_TMP.stat.$i || \
t_fail "node $i find output differed from node 0"
done
}
echo "== create per node dirs"
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/dir/$i"
mkdir -p $path
done
echo "== touch files on each node"
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/dir/$i/$i"
touch $path
done
compare_find_stat_on_all_mounts
echo "== recreate the files"
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/dir/$i/$i"
rm -f $path
touch $path
done
compare_find_stat_on_all_mounts
echo "== turn the files into directories"
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/dir/$i/$i"
rm -f $path
mkdir $path
done
compare_find_stat_on_all_mounts
echo "== rename parent dirs"
for i in $(t_fs_nrs); do
eval before="\$T_D${i}/dir/$i"
eval after="\$T_D${i}/dir/$i-renamed"
mv $before $after
done
compare_find_stat_on_all_mounts
echo "== rename parent dirs back"
for i in $(t_fs_nrs); do
eval before="\$T_D${i}/dir/$i-renamed"
eval after="\$T_D${i}/dir/$i"
mv $before $after
done
compare_find_stat_on_all_mounts
echo "== create some hard links"
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/dir/$i/$i.file"
touch $path
for link in $(seq 1 3); do
ln $path $path-$link
done
done
compare_find_stat_on_all_mounts
echo "== recreate one of the hard links"
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/dir/$i/$i.file-3"
rm -f $path
touch $path
done
compare_find_stat_on_all_mounts
echo "== delete the remaining hard link"
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/dir/$i/$i.file-2"
rm -f $path
done
compare_find_stat_on_all_mounts
echo "== race to blow everything away"
pids=""
echo "[nodes are racing to log std(out|err) now..]" >> $T_TMP.log
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/dir"
rm -rf "$path"/* >> $T_TMP.log 2>&1 &
pids="$pids $!"
done
# failure's fine
wait $pids
compare_find_stat_on_all_mounts
t_pass
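The tests above repeatedly use eval to expand per-mount variables like T_D0/T_D1 by number. This standalone sketch (with hypothetical mount paths) shows that lookup, plus bash's ${!name} indirect expansion as an eval-free equivalent:

```shell
#!/usr/bin/env bash
# Hypothetical stand-ins for the suite's per-mount dirs.
T_D0=/mnt/test.0
T_D1=/mnt/test.1
i=1

# eval-based indirection, as the tests above do
eval path="\$T_D${i}/dir"
echo "$path"

# ${!name} indirect expansion, same result without eval
var="T_D${i}"
echo "${!var}/dir"
```

Both lines print /mnt/test.1/dir; the ${!name} form avoids re-parsing the expanded text, which matters if paths ever contain shell metacharacters.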


@@ -0,0 +1,19 @@
#
# Test operation of scoutfs_get_name and scoutfs_get_parent. We can do
# this by creating a directory, recording its inode number, then
# opening it by handle after a remount.
#
t_require_commands mkdir stat handle_cat
DIR="$T_D0/dir"
mkdir -p "$DIR"
ino=$(stat -c "%i" "$DIR")
t_umount_all
t_mount_all
t_quiet handle_cat "$T_M0" "$ino"
t_pass


@@ -0,0 +1,73 @@
#
# test inode item updating
#
# Our inode updating pattern involves updating in-memory inode
# structures and then explicitly migrating those to dirty persistent
# items. If we forget to update the persistent items then modifications
# to the in-memory inode can be lost as the inode is evicted.
#
# We test this by modifying inodes, unmounting, and comparing the
# mounted inodes to the inodes before the unmount.
#
t_require_commands mkdir stat touch find setfattr mv dd scoutfs
DIR="$T_D0/dir"
stat_paths()
{
while read path; do
echo "=== $path ==="
# XXX atime isn't consistent :/
stat "$path" 2>&1 | grep -v "Access: "
scoutfs stat "$path" 2>&1
done
}
t_quiet mkdir -p "$DIR"
echo "== create files and sync"
dd if=/dev/zero of="$DIR/truncate" bs=4096 count=1 status=none
dd if=/dev/zero of="$DIR/stage" bs=4096 count=1 status=none
vers=$(scoutfs stat -s data_version "$DIR/stage")
scoutfs release "$DIR/stage" $vers 0 1
dd if=/dev/zero of="$DIR/release" bs=4096 count=1 status=none
touch "$DIR/write_end"
mkdir "$DIR"/{mknod_dir,link_dir,unlink_dir,symlink_dir,rename_dir}
touch $DIR/setattr
touch $DIR/xattr_set
sync; sync
echo "== modify files"
truncate -s 0 "$DIR/truncate"
vers=$(scoutfs stat -s data_version "$DIR/stage")
scoutfs stage "$DIR/stage" $vers 0 4096 /dev/zero
vers=$(scoutfs stat -s data_version "$DIR/release")
scoutfs release "$DIR/release" $vers 0 1
dd if=/dev/zero of="$DIR/write_end" bs=4096 count=1 status=none conv=notrunc
touch $DIR/mknod_dir/mknod_file
touch $DIR/link_dir/link_targ
ln $DIR/link_dir/link_targ $DIR/link_dir/link_file
touch $DIR/unlink_dir/unlink_file
rm -f $DIR/unlink_dir/unlink_file
touch $DIR/symlink_dir/symlink_targ
ln -s $DIR/symlink_dir/symlink_targ $DIR/symlink_dir/symlink_file
touch $DIR/rename_dir/rename_from
mv $DIR/rename_dir/rename_from $DIR/rename_dir/rename_to
touch -m --date=@1234 $DIR/setattr
setfattr -n user.test -v val $DIR/xattr_set
find "$DIR"/* > $T_TMP.paths
echo $DIR/unlink_dir/unlink_file >> $T_TMP.paths
echo $DIR/rename_dir/rename_from >> $T_TMP.paths
stat_paths < $T_TMP.paths > $T_TMP.before
echo "== mount and unmount"
t_umount_all
t_mount_all
echo "== verify files"
stat_paths < $T_TMP.paths > $T_TMP.after
diff -u $T_TMP.before $T_TMP.after
t_pass


@@ -0,0 +1,59 @@
#
# If bulk work accidentally conflicts in the worst way we'd like to have
# it not result in catastrophic performance. Make sure that each
# instance of bulk work is given the opportunity to get as much as it
# can into the transaction under a lock before the lock is revoked
# and the transaction is committed.
#
t_require_commands setfattr
t_require_mounts 2
NR=3000
echo "== create per mount files"
for m in 0 1; do
eval dir="\$T_D${m}/dir/$m"
t_quiet mkdir -p "$dir"
for a in $(seq 1 $NR); do touch "$dir/$a"; done
done
echo "== time independent modification"
for m in 0 1; do
eval dir="\$T_D${m}/dir/$m"
START=$SECONDS
for a in $(seq 1 $NR); do
setfattr -n user.test_grace -v $a "$dir/$a"
done
echo "mount $m: $((SECONDS - START))" >> $T_TMP.log
done
echo "== time concurrent independent modification"
START=$SECONDS
for m in 0 1; do
eval dir="\$T_D${m}/dir/$m"
(for a in $(seq 1 $NR); do
setfattr -n user.test_grace -v $a "$dir/$a";
done) &
done
wait
IND="$((SECONDS - START))"
echo "ind: $IND" >> $T_TMP.log
echo "== time concurrent conflicting modification"
START=$SECONDS
for m in 0 1; do
eval dir="\$T_D${m}/dir/0"
(for a in $(seq 1 $NR); do
setfattr -n user.test_grace -v $a "$dir/$a";
done) &
done
wait
CONF="$((SECONDS - START))"
echo "conf: $CONF" >> $T_TMP.log
if [ "$CONF" -gt "$((IND * 5))" ]; then
t_fail "conflicting $CONF secs is more than 5x independent $IND secs"
fi
t_pass


@@ -0,0 +1,32 @@
#
# Multi-mount, multi-process EX locking test. This has uncovered at
# least one race between the downconvert thread and local processes
# wanting a lock.
#
t_require_commands setfattr
DIR="$T_D0/dir"
FILES=4
COUNT=250
echo "=== setup files ==="
mkdir -p $T_D0/dir
for f in $(seq 1 $FILES); do
touch $T_D0/dir/file-$f
done
echo "=== ping-pong xattr ops ==="
pids=""
for f in $(seq 1 $FILES); do
for m in $(t_fs_nrs); do
eval file="\$T_D${m}/dir/file-$f"
(for i in $(seq 1 $COUNT); do
setfattr -n user.test -v mount-$m $file;
done) &
pids="$pids $!"
done
done
wait $pids
t_pass


@@ -0,0 +1,16 @@
#
# make sure pr/cw don't conflict
#
t_require_commands scoutfs
FILE="$T_D0/file"
echo "== race writing and index walking"
for i in $(seq 1 10); do
dd if=/dev/zero of="$FILE" bs=4K count=1 status=none conv=notrunc &
scoutfs walk-inodes data_seq 0 -1 "$T_M0" > /dev/null &
wait
done
t_pass


@@ -0,0 +1,39 @@
#
# make sure we don't leak lock refs
#
# We've had bugs where we leak lock references. We perform a bunch
# of operations and if they're leaking we should see user counts
# related to the number of iterations. The test assumes that the
# system is relatively idle and that there won't be significant
# other users of the locks.
#
t_require_commands mkdir touch stat setfattr getfattr cp mv rm cat awk
DIR="$T_D0/dir"
echo "== make test dir"
mkdir "$DIR"
echo "== do enough stuff to make lock leaks visible"
for i in $(seq 1 20); do
t_quiet touch "$DIR/file"
t_quiet stat "$DIR/file"
t_quiet setfattr -n "user.name" -v "$i" "$DIR/file"
t_quiet getfattr --absolute-names -d "$DIR/file"
echo "pants" >> "$DIR/file"
t_quiet cp "$DIR/file" "$DIR/copied"
t_quiet mv "$DIR/copied" "$DIR/moved"
t_quiet truncate -s 0 "$DIR/moved"
t_quiet rm -f "$DIR/moved"
done
# start 2.2.0.0.0.0 end 2.2.255.18446744073709551615.18446744073709551615.255 refresh_gen 1 mode 2 waiters: rd 0 wr 0 wo 0 users: rd 0 wr 1 wo 0
# users are fields 18, 20, 22
echo "== make sure nothing has leaked"
awk '($18 > 10 || $20 > 10 || $22 > 10) {
print "might have leaked:", $0
}' < "$(t_debugfs_path)/client_locks"
t_pass
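The field numbers in the awk check above can be verified against the sample client_locks line quoted in the comment; this sketch splits that line with awk's default whitespace field rules:

```shell
#!/usr/bin/env bash
# The sample lock line from the comment above, verbatim.
line='start 2.2.0.0.0.0 end 2.2.255.18446744073709551615.18446744073709551615.255 refresh_gen 1 mode 2 waiters: rd 0 wr 0 wo 0 users: rd 0 wr 1 wo 0'
# Fields 18, 20, 22 are the rd/wr/wo user counts.
echo "$line" | awk '{print $18, $20, $22}'
# prints "0 1 0"
```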


@@ -0,0 +1,17 @@
#
# make sure lock revocation doesn't confuse getcwd
#
DIR="$T_D0/dir"
t_quiet mkdir -p "$DIR"
echo "=== getcwd after lock revocation"
cd "$DIR"
t_trigger_arm statfs_lock_purge
stat -f "$T_M0" > /dev/null
strace -e getcwd pwd 2>&1 | grep -i enoent
ls -la /proc/self/cwd | grep "(deleted)"
cd - > /dev/null
t_pass


@@ -0,0 +1,34 @@
#
# Test that lock shrinking properly invalidates metadata so that future
# locks see new data.
#
t_require_commands getfattr
t_require_mounts 2
GETFATTR="getfattr --absolute-names"
# put the inode in its own lock in a new dir with new ino allocation
echo "=== setup test file ==="
t_quiet mkdir -p $T_D0/dir
touch $T_D0/dir/file
setfattr -n user.test -v aaa $T_D0/dir/file
$GETFATTR -n user.test $T_D0/dir/file 2>&1 | t_filter_fs
echo "=== commit dirty trans and revoke lock ==="
t_trigger_arm statfs_lock_purge
stat -f "$T_M0" > /dev/null
t_quiet sync
t_trigger_show statfs_lock_purge "after it fired"
echo "=== change xattr on other mount ==="
setfattr -n user.test -v bbb $T_D1/dir/file
$GETFATTR -n user.test $T_D1/dir/file 2>&1 | t_filter_fs
# This forces the shrinking node to recreate the lock resource. If our
# lock shrinker isn't properly invalidating metadata, we'd get the old
# xattr value here.
echo "=== verify new xattr under new lock on first mount ==="
$GETFATTR -n user.test $T_D0/dir/file 2>&1 | t_filter_fs
t_pass


@@ -0,0 +1,87 @@
#
# stress concurrent mounting and unmounting across mounts
#
# At the start of the test all mounts are mounted. Each iteration
# randomly decides to change each mount or to leave it alone.
#
# Each iteration creates dirty items before unmounting to encourage
# compaction during unmount.
#
# For this test to be meaningful it needs multiple mounts beyond the
# quorum set which can be racing to mount and unmount. A reasonable
# config would be 5 mounts with 3 quorum. But the test will run with
# whatever count it finds.
#
# This assumes that all the mounts are configured as voting servers. We
# could update it to be more clever and know that it can always safely
# unmount mounts that aren't configured as servers.
#
# nothing to do if we can't unmount
test "$T_NR_MOUNTS" == "$T_QUORUM" && t_skip
nr_mounted=$T_NR_MOUNTS
nr_quorum=$T_QUORUM
echo "== create per mount files"
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/$i"
mkdir -p "$path"
touch "$path/$i"
mounted[$i]=1
done
LENGTH=30
echo "== ${LENGTH}s of racing random mount/umount"
END=$((SECONDS + LENGTH))
while [ "$SECONDS" -lt "$END" ]; do
# give some mounts dirty data
for i in $(t_fs_nrs); do
eval path="\$T_D${i}/$i"
dirty=$((RANDOM % 2))
if [ "${mounted[$i]}" == 1 -a "$dirty" == 1 ]; then
touch "$path/$i"
fi
done
pids=""
for i in $(t_fs_nrs); do
change=$((RANDOM % 2))
if [ "$change" == 0 ]; then
continue;
fi
if [ "${mounted[$i]}" == 1 ]; then
if [ "$nr_mounted" -gt "$nr_quorum" ]; then
t_umount $i &
pid=$!
pids="$pids $pid"
mounted[$i]=0
(( nr_mounted-- ))
fi
else
t_mount $i &
pid=$!
pids="$pids $pid"
mounted[$i]=1
(( nr_mounted++ ))
fi
done
echo "waiting (secs $SECONDS)" >> $T_TMP.log
for p in $pids; do
t_quiet wait $p
done
echo "done waiting (secs $SECONDS)" >> $T_TMP.log
done
echo "== mounting any unmounted"
for i in $(t_fs_nrs); do
if [ "${mounted[$i]}" == 0 ]; then
t_mount $i
fi
done
t_pass


@@ -0,0 +1,156 @@
#
# test waiting for offline extents
#
t_require_commands dd cat cp scoutfs xfs_io
DIR="$T_D0/dir"
BLOCKS=256
BS=4096
BYTES=$(($BLOCKS * $BS))
expect_wait()
{
local file=$1
local ops=$2
shift
shift
> $T_TMP.wait.expected
while test -n "$ops" -a -n "$1"; do
echo "ino $1 iblock $2 ops $ops" >> $T_TMP.wait.expected
shift
shift
done
scoutfs data-waiting 0 0 "$file" > $T_TMP.wait.output
diff -u $T_TMP.wait.expected $T_TMP.wait.output
}
t_quiet mkdir -p "$DIR"
echo "== create files"
dd if=/dev/urandom of="$DIR/golden" bs=$BS count=$BLOCKS status=none
cp "$DIR/golden" "$DIR/file"
ino=$(stat -c "%i" "$DIR/file")
vers=$(scoutfs stat -s data_version "$DIR/file")
echo "== waiter shows up in ioctl"
echo "offline waiting should be empty:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
cat "$DIR/file" > /dev/null &
sleep .1
echo "offline waiting should now have one known entry:"
expect_wait "$DIR/file" "read" $ino 0
echo "== multiple waiters on same block listed once"
cat "$DIR/file" > /dev/null &
sleep .1
echo "offline waiting still has one known entry:"
expect_wait "$DIR/file" "read" $ino 0
echo "== different blocks show up"
dd if="$DIR/file" of=/dev/null bs=$BS count=1 skip=1 2> /dev/null &
sleep .1
echo "offline waiting now has two known entries:"
expect_wait "$DIR/file" "read" $ino 0 $ino 1
echo "== staging wakes everyone"
scoutfs stage "$DIR/file" "$vers" 0 $BYTES "$DIR/golden"
sleep .1
echo "offline waiting should be empty again:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
echo "== interruption does no harm"
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
cat "$DIR/file" > /dev/null 2>&1 &
pid="$!"
sleep .1
echo "offline waiting should now have one known entry:"
expect_wait "$DIR/file" "read" $ino 0
kill "$pid"
# silence terminated message
wait "$pid" 2> /dev/null
echo "offline waiting should be empty again:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
echo "== readahead while offline does no harm"
xfs_io -c "fadvise -w 0 $BYTES" "$DIR/file"
scoutfs stage "$DIR/file" "$vers" 0 $BYTES "$DIR/golden"
cmp "$DIR/file" "$DIR/golden"
echo "== waiting on interesting blocks works"
blocks=""
for base in $(echo 0 $(($BLOCKS / 2)) $(($BLOCKS - 2))); do
for off in 0 1; do
blocks="$blocks $(($base + off))"
done
done
for b in $blocks; do
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
dd if="$DIR/file" of=/dev/null \
status=none bs=$BS count=1 skip=$b 2> /dev/null &
sleep .1
scoutfs stage "$DIR/file" "$vers" 0 $BYTES "$DIR/golden"
sleep .1
echo "offline waiting is empty at block $b"
scoutfs data-waiting 0 0 "$DIR" | wc -l
done
echo "== contents match when staging blocks forward"
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
cat "$DIR/file" > "$DIR/forward" &
for b in $(seq 0 1 $((BLOCKS - 1))); do
dd if="$DIR/golden" of="$DIR/block" status=none bs=$BS skip=$b count=1
scoutfs stage "$DIR/file" "$vers" $((b * $BS)) $BS "$DIR/block"
done
sleep .1
cmp "$DIR/golden" "$DIR/forward"
echo "== contents match when staging blocks backwards"
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
cat "$DIR/file" > "$DIR/backward" &
for b in $(seq $((BLOCKS - 1)) -1 0); do
dd if="$DIR/golden" of="$DIR/block" status=none bs=$BS skip=$b count=1
scoutfs stage "$DIR/file" "$vers" $((b * $BS)) $BS "$DIR/block"
done
sleep .1
cmp "$DIR/golden" "$DIR/backward"
echo "== truncate to same size doesn't wait"
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
truncate -s "$BYTES" "$DIR/file" &
sleep .1
echo "offline waiting should be empty:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
echo "== truncating does wait"
truncate -s "$BS" "$DIR/file" &
sleep .1
echo "truncate should be waiting for first block:"
expect_wait "$DIR/file" "change_size" $ino 0
scoutfs stage "$DIR/file" "$vers" 0 $BYTES "$DIR/golden"
sleep .1
echo "truncate should no longer be waiting:"
scoutfs data-waiting 0 0 "$DIR" | wc -l
cat "$DIR/golden" > "$DIR/file"
vers=$(scoutfs stat -s data_version "$DIR/file")
echo "== writing waits"
dd if=/dev/urandom of="$DIR/other" bs=$BS count=$BLOCKS status=none
scoutfs release "$DIR/file" "$vers" 0 $BLOCKS
# overwrite, not truncate+write
dd if="$DIR/other" of="$DIR/file" \
bs=$BS count=$BLOCKS conv=notrunc status=none &
sleep .1
echo "should be waiting for write"
expect_wait "$DIR/file" "write" $ino 0
scoutfs stage "$DIR/file" "$vers" 0 $BYTES "$DIR/golden"
cmp "$DIR/file" "$DIR/other"
echo "== cleanup"
rm -rf "$DIR"
t_pass
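expect_wait above walks its arguments as (ino, iblock) pairs, shifting two at a time to build the expected-output file. A minimal standalone sketch of that pair-walking idiom (emit_pairs is a hypothetical helper name, using shift 2 in place of the two separate shifts):

```shell
#!/usr/bin/env bash
# Emit one "ino ... iblock ... ops ..." line per (ino, iblock) pair,
# mirroring how expect_wait builds its expected-output file.
emit_pairs() {
	local ops="$1"
	shift
	while [ -n "$ops" ] && [ -n "$1" ]; do
		echo "ino $1 iblock $2 ops $ops"
		shift 2
	done
}

emit_pairs read 100 0 100 1
```

This prints one line per pair, so two read waiters on blocks 0 and 1 of inode 100 produce two entries.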


@@ -0,0 +1,39 @@
#
# We had some segment reading patterns that lead to excessive segment
# reading if ops iterated over items in the opposite order that they're
# sorted in segments.
#
# Let's make sure iterating over items in either directions causes the
# item reading path to cache the items in the segments regardless of
# which item caused the miss.
#
# We can use the count of item allocations as a proxy for the bad
# behaviour.
#
t_require_commands mkdir touch stat cat
DIR="$T_D0/dir"
NR=3000
t_quiet mkdir -p "$DIR"
echo "== create files"
for a in $(seq 1 $NR); do t_quiet touch $DIR/$a; done
echo "== count allocations reading forwards"
echo 3 > /proc/sys/vm/drop_caches
for a in $(seq 1 $NR); do stat $DIR/$a > /dev/null; done
FWD=$(t_counter item_alloc)
echo "forward item allocations: $FWD" >> "$T_TMP.log"
echo "== count allocations reading backwards"
echo 3 > /proc/sys/vm/drop_caches
for a in $(seq $NR -1 1); do stat $DIR/$a > /dev/null; done
BWD=$(t_counter item_alloc)
echo "backward item allocations: $BWD" >> "$T_TMP.log"
if [ "$BWD" -gt "$((FWD * 5))" ]; then
echo "backward item iteration allocated $BWD > 5x forward $FWD"
fi
t_pass


@@ -0,0 +1,106 @@
#
# Test basic inode index item behaviour
#
t_require_commands touch mkdir sync scoutfs setfattr dd stat
get_meta_seq()
{
scoutfs stat -s meta_seq "$1"
}
query_index() {
local which="$1"
local first="${2:-0}"
local last="${3:--1}"
scoutfs walk-inodes $which $first $last "$T_M0"
}
# print the major in the index for the ino if it's found
ino_major() {
local which="$1"
local ino="$2"
scoutfs walk-inodes $which 0 -1 "$T_M0" | \
awk '($4 == "'$ino'") {print $2}'
}
DIR="$T_D0/dir"
echo "== dirs shouldn't appear in data_seq queries"
mkdir "$DIR"
ino=$(stat -c "%i" "$DIR")
t_sync_seq_index
query_index data_seq | grep "$ino\>"
echo "== two created files are present and come after each other"
touch "$DIR/first"
t_sync_seq_index
touch "$DIR/second"
t_sync_seq_index
ino=$(stat -c "%i" "$DIR/first")
query_index data_seq | awk '($4 == "'$ino'") {print "found first"}'
ino=$(stat -c "%i" "$DIR/second")
query_index data_seq | awk '($4 == "'$ino'") {print "found second"}'
echo "== unlinked entries must not be present"
touch "$DIR/victim"
ino=$(stat -c "%i" "$DIR/victim")
rm -f "$DIR/victim"
t_sync_seq_index
query_index data_seq | awk '($4 == "'$ino'") {print "found victim"}'
echo "== dirty inodes can not be present"
touch "$DIR/dirty_before"
ino=$(stat -c "%i" "$DIR/dirty_before")
before=$(get_meta_seq "$DIR/dirty_before")
if query_index meta_seq | grep -q "$ino\>"; then
# was dirty while in index if its seq matches newly created
touch "$DIR/dirty_after"
after=$(get_meta_seq "$DIR/dirty_after")
if [ "$before" == "$after" ]; then
echo "ino $ino before $before after $after"
fi
fi
echo "== changing metadata must increase meta seq"
touch "$DIR/meta_file"
ino=$(stat -c "%i" "$DIR/meta_file")
t_sync_seq_index
before=$(ino_major meta_seq $ino)
# no setattr at the time of writing, xattrs update :)
setfattr -n user.scoutfs-testing.meta_seq -v 1 "$DIR/meta_file"
t_sync_seq_index
after=$(ino_major meta_seq $ino)
test "$before" -lt "$after" || \
echo "meta seq after xattr set $after <= before $before"
echo "== changing contents must increase data seq"
echo "first contents" > "$DIR/regular_file"
ino=$(stat -c "%i" "$DIR/regular_file")
t_sync_seq_index
before=$(ino_major data_seq $ino)
echo "more contents" >> "$DIR/regular_file"
t_sync_seq_index
after=$(ino_major data_seq $ino)
test "$before" -lt "$after" || \
echo "data seq after modification $after <= before $before"
#
# we had a bug where sampling the next key in the manifest+segments
# didn't skip past deleted dirty items
#
echo "== make sure dirtying doesn't livelock walk"
dd if=/dev/urandom of="$DIR/dirtying" bs=4K count=1 >> $T_TMP.full 2>&1
nr=1
while [ "$nr" -lt 100 ]; do
echo "dirty/walk attempt $nr" >> $T_TMP.full
sync
dd if=/dev/urandom of="$DIR/dirtying" bs=4K count=1 conv=notrunc \
>> $T_TMP.full 2>&1
scoutfs walk-inodes data_seq 0 -1 $DIR/dirtying >& /dev/null
((nr++))
done
t_pass


@@ -0,0 +1,131 @@
#
# Test that releasing extents creates offline extents.
#
t_require_commands xfs_io filefrag scoutfs mknod
# this test wants to ignore unwritten extents
fiemap_file() {
filefrag -v -b4096 "$1" | grep -v "unwritten"
}
create_file() {
local file="$1"
local size="$2"
t_quiet xfs_io -f -c "pwrite 0 $size" "$file"
}
# if vers is "stat" then we ask stat_more for the data_version
release_vers() {
local file="$1"
local vers="$2"
local block="$3"
local count="$4"
if [ "$vers" == "stat" ]; then
vers=$(scoutfs stat -s data_version "$file")
fi
scoutfs release "$file" "$vers" "$block" "$count"
}
FILE="$T_D0/file"
CHAR="$FILE-char"
echo "== simple whole file multi-block releasing"
create_file "$FILE" 65536
release_vers "$FILE" stat 0 16
rm "$FILE"
echo "== release last block that straddles i_size"
create_file "$FILE" 6144
release_vers "$FILE" stat 1 1
rm "$FILE"
echo "== release entire file past i_size"
create_file "$FILE" 8192
release_vers "$FILE" stat 0 100
# not deleting for the following little tests
echo "== releasing offline extents is fine"
release_vers "$FILE" stat 0 100
echo "== 0 count is fine"
release_vers "$FILE" stat 0 0
echo "== release past i_size is fine"
release_vers "$FILE" stat 100 1
echo "== wrapped blocks fails"
release_vers "$FILE" stat 0x8000000000000000 0x8000000000000000
echo "== releasing non-file fails"
mknod "$CHAR" c 1 3
release_vers "$CHAR" stat 0 1 2>&1 | t_filter_fs
rm "$CHAR"
echo "== releasing a non-scoutfs file fails"
release_vers "/dev/null" stat 0 1
echo "== releasing bad version fails"
release_vers "$FILE" 0 0 1
rm "$FILE"
#
# Finally every combination of releasing three single block extents
# inside a 5 block file, including repeated releases, merges offline
# extents as expected.
#
# We collapse down the resulting extent output so that the golden file
# isn't one of the biggest in the tree. Each extent is listed as
# "(logical physical count)". Offline extents have a physical of 0 and
# real allocations are filtered to start at physical 100.
#
echo "== verify small release merging"
for a in $(seq 0 4); do
for b in $(seq 0 4); do
for c in $(seq 0 4); do
# start with one contiguous extent
create_file "$FILE" $((5 * 4096))
nr=1
while fiemap_file "$FILE" | grep -q " extents found"; do
rm "$FILE"
create_file "$FILE" $((5 * 4096))
((nr++))
if [ $nr == 10 ]; then
t_fail "10 tries to get a single extent?"
fi
done
start=$(fiemap_file "$FILE" | \
awk '($1 == "0:"){print substr($4, 0, length($4)- 2)}')
release_vers "$FILE" stat $a 1
release_vers "$FILE" stat $b 1
release_vers "$FILE" stat $c 1
echo -n "$a $b $c:"
fiemap_file "$FILE" | \
awk 'BEGIN{ORS=""}($1 == (NR - 4)":") {
off=substr($2, 0, length($2)- 2);
phys=substr($4, 0, length($4)- 2);
if (phys > 100) {
phys = 100 + off;
}
len=substr($6, 0, length($6)- 1);
print " (" off, phys, len ")";
}'
echo
rm "$FILE"
done
done
done
t_pass


@@ -0,0 +1,202 @@
#
# Test correctness of the staging operation
#
t_require_commands filefrag dd scoutfs cp cmp rm
fiemap_file() {
filefrag -v -b4096 "$1"
}
create_file() {
local file="$1"
local size="$2"
local blocks=$((size / 4096))
local remainder=$((size % 4096))
if [ "$blocks" != 0 ]; then
dd if=/dev/urandom bs=4096 count=$blocks of="$file" \
>> $T_TMP.full 2>&1
fi
if [ "$remainder" != 0 ]; then
dd if=/dev/urandom bs="$remainder" count=1 of="$file" \
conv=notrunc oflag=append >> $T_TMP.full 2>&1
fi
}
# if vers is "stat" then we ask stat_more for the data_version
release_vers() {
local file="$1"
local vers="$2"
local block="$3"
local count="$4"
if [ "$vers" == "stat" ]; then
vers=$(scoutfs stat -s data_version "$file")
fi
scoutfs release "$file" "$vers" "$block" "$count"
}
# if vers is "stat" then we ask stat_more for the data_version
stage_vers() {
local file="$1"
local vers="$2"
local offset="$3"
local count="$4"
local contents="$5"
if [ "$vers" == "stat" ]; then
vers=$(scoutfs stat -s data_version "$file")
fi
scoutfs stage "$file" "$vers" "$offset" "$count" "$contents"
}
FILE="$T_D0/file"
CHAR="$FILE-char"
echo "== create/release/stage single block file"
create_file "$FILE" 4096
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 1
# make sure there are only offline extents
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
stage_vers "$FILE" stat 0 4096 "$T_TMP"
cmp "$FILE" "$T_TMP"
rm -f "$FILE"
echo "== create/release/stage larger file"
create_file "$FILE" $((4096 * 4096))
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 4096
# make sure there are only offline extents
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
stage_vers "$FILE" stat 0 $((4096 * 4096)) "$T_TMP"
cmp "$FILE" "$T_TMP"
rm -f "$FILE"
echo "== multiple release,drop_cache,stage cycles"
create_file "$FILE" $((4096 * 1024))
cp "$FILE" "$T_TMP"
nr=1
while [ "$nr" -lt 10 ]; do
echo "attempt $nr" >> $seqres.full 2>&1
release_vers "$FILE" stat 0 1024
sync
echo 3 > /proc/sys/vm/drop_caches
stage_vers "$FILE" stat 0 $((4096 * 1024)) "$T_TMP"
cmp "$FILE" "$T_TMP"
sync
((nr++))
done
rm -f "$FILE"
echo "== release+stage shouldn't change stat, data seq or vers"
create_file "$FILE" 4096
cp "$FILE" "$T_TMP"
sync
stat "$FILE" > "$T_TMP.before"
scoutfs stat -s data_seq "$FILE" >> "$T_TMP.before"
scoutfs stat -s data_version "$FILE" >> "$T_TMP.before"
release_vers "$FILE" stat 0 1
stage_vers "$FILE" stat 0 4096 "$T_TMP"
stat "$FILE" > "$T_TMP.after"
scoutfs stat -s data_seq "$FILE" >> "$T_TMP.after"
scoutfs stat -s data_version "$FILE" >> "$T_TMP.after"
diff -u "$T_TMP.before" "$T_TMP.after"
rm -f "$FILE"
echo "== stage does change meta_seq"
create_file "$FILE" 4096
release_vers "$FILE" stat 0 1
sync
before=$(scoutfs stat -s meta_seq "$FILE")
stage_vers "$FILE" stat 0 4096 "$T_TMP"
after=$(scoutfs stat -s meta_seq "$FILE")
test "$before" == "$after" && echo "before $before == after $after"
rm -f "$FILE"
# XXX this now waits, demand staging should be its own test
#echo "== can't write to offline"
#create_file "$FILE" 4096
#release_vers "$FILE" stat 0 1
## make sure there are only offline extents
#fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
#dd if=/dev/zero of="$FILE" conv=notrunc bs=4096 count=1 2>&1 | t_filter_fs
#fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
#rm -f "$FILE"
## XXX not worrying about this yet
#echo "== can't stage online when version matches"
#create_file "$FILE" 4096
#cp "$FILE" "$T_TMP"
#stage_vers "$FILE" stat 0 4096 /dev/zero
#cmp "$FILE" "$T_TMP"
#rm -f "$FILE"
echo "== can't use stage to extend online file"
touch "$FILE"
stage_vers "$FILE" stat 0 4096 /dev/zero
hexdump -C "$FILE"
rm -f "$FILE"
echo "== wrapped region fails"
create_file "$FILE" 4096
stage_vers "$FILE" stat 0xFFFFFFFFFFFFFFFF 4096 /dev/zero
rm -f "$FILE"
echo "== non-block aligned offset fails"
create_file "$FILE" 4096
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 1
stage_vers "$FILE" stat 1 4095 "$T_TMP"
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
rm -f "$FILE"
echo "== non-block aligned len within block fails"
create_file "$FILE" 4096
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 1
stage_vers "$FILE" stat 0 1024 "$T_TMP"
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
rm -f "$FILE"
echo "== partial final block that writes to i_size does work"
create_file "$FILE" 2048
cp "$FILE" "$T_TMP"
release_vers "$FILE" stat 0 1
stage_vers "$FILE" stat 0 2048 "$T_TMP"
cmp "$FILE" "$T_TMP"
rm -f "$FILE"
echo "== zero length stage doesn't bring blocks online"
create_file "$FILE" $((4096 * 100))
release_vers "$FILE" stat 0 100
stage_vers "$FILE" stat 4096 0 /dev/zero
fiemap_file "$FILE" | grep "^[ 0-9]*:" | grep -v "unknown"
rm -f "$FILE"
# XXX yup, needs to be updated for demand staging
##
## today reading offline returns -EIO (via -EINVAL from get_block and
## PageError), we'd want something more clever once this read hangs in
## demand staging
##
#echo "== stage succeeds after read error"
#create_file "$FILE" 4096
#cp "$FILE" "$T_TMP"
#sync
#release_vers "$FILE" stat 0 1
#md5sum "$FILE" 2>&1 | t_filter_fs
#stage_vers "$FILE" stat 0 4096 "$T_TMP"
#cmp "$FILE" "$T_TMP"
#rm -f "$FILE"
echo "== stage of non-regular file fails"
mknod "$CHAR" c 1 3
stage_vers "$CHAR" stat 0 1 /dev/zero 2>&1 | t_filter_fs
rm "$CHAR"
t_pass
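
The create_file helper above splits an arbitrary size into whole 4096-byte dd blocks plus one trailing partial write in append mode. A minimal sketch of that arithmetic, runnable without scoutfs (the size value is just an example):

```shell
# create_file writes size/4096 full blocks with the first dd, then
# size%4096 trailing bytes with a second dd using oflag=append.
size=10000
blocks=$((size / 4096))
remainder=$((size % 4096))
echo "$blocks $remainder"   # -> 2 1808
```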

View File

@@ -0,0 +1,83 @@
#
# simple xattr unit tests
#
t_require_commands hexdump setfattr getfattr cmp touch dumb_setxattr
FILE="$T_D0/file"
NR=500
long_string() {
local chars=$1
local bytes=$(((chars + 1) / 2))
local huge
huge=$(hexdump -vn "$bytes" -e ' /1 "%02x"' /dev/urandom)
echo ${huge:0:$chars}
}
# delete each xattr afterwards so they don't accumulate
test_xattr_lengths() {
local name_len=$1
local val_len=$2
local name="user.$(long_string $name_len)"
local val="$(long_string $val_len)"
echo "key len $name_len val len $val_len" >> "$T_TMP.log"
setfattr -n $name -v \"$val\" "$FILE"
# grep has trouble with enormous args, so we dump the
# name=value to a file and compare with a known good file
getfattr -d --absolute-names "$FILE" | grep "$name" > "$T_TMP.got"
if [ $val_len == 0 ]; then
echo "$name" > "$T_TMP.good"
else
echo "$name=\"$val\"" > "$T_TMP.good"
fi
cmp "$T_TMP.good" "$T_TMP.got" || exit 1
setfattr -x $name "$FILE"
}
print_and_run() {
printf '%s\n' "$*" | t_filter_fs
"$@" || echo "returned nonzero status: $?"
}
echo "=== XATTR_ flag combinations"
touch "$FILE"
print_and_run dumb_setxattr -p "$FILE" -n user.test -v val -c -r
print_and_run dumb_setxattr -p "$FILE" -n user.test -v val -r
print_and_run dumb_setxattr -p "$FILE" -n user.test -v val -c
print_and_run dumb_setxattr -p "$FILE" -n user.test -v val -c
print_and_run dumb_setxattr -p "$FILE" -n user.test -v val -r
rm "$FILE"
echo "=== bad lengths"
touch "$FILE"
setfattr -n \"\" -v val "$FILE" 2>&1 | t_filter_fs
setfattr -n user.$(long_string 256) -v val "$FILE" 2>&1 | t_filter_fs
setfattr -n user.$(long_string 1000) -v val "$FILE" 2>&1 | t_filter_fs
setfattr -n user.name -v $(long_string 65536) "$FILE" 2>&1 | t_filter_fs
# sync to make sure all reserved items are dirtied each time
echo "=== good length boundaries"
# 255 key len - strlen("user.")
for name_len in 1 249 250; do
for val_len in 0 1 254 255 256 65534 65535; do
sync
test_xattr_lengths $name_len $val_len
done
done
echo "=== $NR random lengths"
touch "$FILE"
for i in $(seq 1 $NR); do
name_len=$((1 + (RANDOM % 250)))
val_len=$((RANDOM % 65536))
test_xattr_lengths $name_len $val_len
done
t_pass
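
The long_string helper relies on hexdump printing two hex characters per input byte, so (chars + 1) / 2 bytes always cover the requested length before truncation. The same shape fed /dev/zero instead of /dev/urandom makes the behavior visible deterministically (a sketch, not part of the test):

```shell
# Reading (chars + 1) / 2 bytes yields at least 'chars' hex digits;
# ${huge:0:$chars} then trims to the exact requested length.
long_string_zero() {
    local chars=$1
    local bytes=$(((chars + 1) / 2))
    local huge
    huge=$(hexdump -vn "$bytes" -e ' /1 "%02x"' /dev/zero)
    echo "${huge:0:$chars}"
}
long_string_zero 5   # -> 00000
```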

View File

@@ -0,0 +1,65 @@
#
# concurrent stage and release allocation consistency
#
t_require_commands rm mkdir dd cp cmp mv find scoutfs
EACH=4
NR=$((EACH * 4))
DIR="$T_D0/dir"
BLOCKS=256
BYTES=$(($BLOCKS * 4096))
release_file() {
local path="$1"
local vers=$(scoutfs stat -s data_version "$path")
echo "releasing $path" >> "$T_TMP.log"
scoutfs release "$path" "$vers" 0 $BLOCKS
echo "released $path" >> "$T_TMP.log"
}
stage_file() {
local path="$1"
local vers=$(scoutfs stat -s data_version "$path")
echo "staging $path" >> "$T_TMP.log"
scoutfs stage "$path" "$vers" 0 $BYTES \
"$DIR/good/$(basename $path)"
echo "staged $path" >> "$T_TMP.log"
}
echo "== create initial files"
mkdir -p "$DIR"/{on,off,good}
for i in $(seq 1 $NR); do
dd if=/dev/urandom of="$DIR/good/$i" bs=1MiB count=1 status=none
cp "$DIR/good/$i" "$DIR/on/$i"
done
echo "== race stage and release"
for r in $(seq 1 1000); do
on=$(find "$DIR"/on/* 2>/dev/null | shuf | head -$EACH)
off=$(find "$DIR"/off/* 2>/dev/null | shuf | head -$EACH)
echo r $r on $on off $off >> "$T_TMP.log"
for f in $on; do
release_file $f &
done
for f in $off; do
stage_file $f &
done
wait
[ -n "$on" ] && mv $on "$DIR/off/"
[ -n "$off" ] && mv $off "$DIR/on/"
for f in $(find "$DIR"/on/* 2>/dev/null); do
cmp "$f" "$DIR/good/$(basename $f)"
if [ $? != 0 ]; then
t_fail "file $f bad!"
fi
done
done
t_pass
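
The race loop above repeatedly moves a shuffled batch of files between the on/ and off/ pools once the release or stage on them completes. A minimal sketch of that pool swap using plain files (directory and file names are placeholders):

```shell
# Pick a batch from one pool and move it to the other; the real test
# uses shuf for a random batch, sort here keeps the sketch deterministic.
dir=$(mktemp -d)
mkdir "$dir"/{on,off}
touch "$dir"/on/{1,2,3}
batch=$(find "$dir"/on/* | sort | head -2)
mv $batch "$dir/off/"
ls "$dir/off" | sort | tr '\n' ' '; echo   # -> 1 2
rm -rf "$dir"
```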

View File

@@ -0,0 +1,129 @@
#
# verify stale btree and segment reading
#
t_require_commands touch stat setfattr getfattr createmany
t_require_mounts 2
GETFATTR="getfattr --absolute-names"
SETFATTR="setfattr"
#
# This exercises the soft retry of btree block and segment reads when
# inconsistent cached versions are found. It also ensures that hard
# errors turn into EIO when the blocks and segments reread from
# persistent storage really are inconsistent.
#
# The triggers apply across all execution in the file system. So to
# trigger btree block retries in the client we make sure that the server
# is running on the other node.
#
# We need to quiesce compaction before arming stale segment triggers
# because we don't want them to hit compaction; they're not expected
# there because the server protects compaction input segments.
#
cl=$(t_first_client_nr)
sv=$(t_server_nr)
eval cl_dir="\$T_D${cl}"
eval sv_dir="\$T_D${sv}"
echo "== create file for xattr ping pong"
touch "$sv_dir/file"
$SETFATTR -n user.xat -v initial "$sv_dir/file"
$GETFATTR -n user.xat "$sv_dir/file" 2>&1 | t_filter_fs
echo "== retry btree block read"
$SETFATTR -n user.xat -v btree "$sv_dir/file"
t_trigger_arm btree_stale_read $cl
old=$(t_counter btree_stale_read $cl)
$GETFATTR -n user.xat "$cl_dir/file" 2>&1 | t_filter_fs
t_trigger_show btree_stale_read "after" $cl
t_counter_diff btree_stale_read $old $cl
echo "== retry segment read"
$SETFATTR -n user.xat -v segment "$sv_dir/file"
sync; sleep .5 # hopefully compaction finishes
t_trigger_arm seg_stale_read $cl
old=$(t_counter seg_stale_read $cl)
$GETFATTR -n user.xat "$cl_dir/file" 2>&1 | t_filter_fs
t_trigger_show seg_stale_read "after" $cl
t_counter_diff seg_stale_read $old $cl
echo "== get a hard error, then have it work"
$SETFATTR -n user.xat -v err "$sv_dir/file"
t_trigger_arm hard_stale_error $cl
old=$(t_counter manifest_hard_stale_error $cl)
$GETFATTR -n user.xat "$cl_dir/file" 2>&1 | t_filter_fs
t_trigger_show hard_stale_error "after" $cl
t_counter_diff manifest_hard_stale_error $old $cl
$GETFATTR -n user.xat "$cl_dir/file" 2>&1 | t_filter_fs
#
# we had bugs trying to read the manifest and segments when multiple
# segments and btree blocks were stale in memory but fine on disk.
#
# We can ensure stale cached blocks by reading on one node while
# aggressively advancing the btree ring on another. And we can ensure
# that there are lots of stale btree blocks to walk through by using
# tiny blocks which results in a huge tree.
#
LOTS=500000
INC=1000
stat_lots() {
local top="$1"
local out="$2"
local i
for i in $(seq 1 $INC $LOTS); do
stat "$top/dir/file_$i" | t_filter_fs
done > "$out"
}
advance_next_half() {
local nr="$1"
local which="btree_advance_ring_half"
t_trigger_arm $which $nr
while [ "$(t_trigger_get $which $nr)" == "1" ]; do
touch -a "$T_D0"
sync
sleep .1
done
t_trigger_show $which "after" $nr
}
echo "== read through multiple stale cached btree blocks"
# make sure we create a ton of blocks
echo 1 > "$(t_debugfs_path $sv)/options/btree_force_tiny_blocks"
cat "$(t_debugfs_path $sv)/options/btree_force_tiny_blocks"
# make enough items to create a tall tree
mkdir "$sv_dir/dir"
createmany -o "$sv_dir/dir/file_" $LOTS >> $T_TMP.log
# get our good stat output
stat_lots "$sv_dir" "$T_TMP.good"
# advance next block to half X
advance_next_half "$sv"
# densely fill half X with migration and start to write to half X+1
advance_next_half "$sv"
# read and cache a bunch of blocks in half X
stat_lots "$cl_dir" "$T_TMP.1"
# fill half X+1 with migration, then half X+2 (X!) with migration
advance_next_half "$sv"
advance_next_half "$sv"
# drop item cache by purging locks, forcing manifest reads
t_trigger_arm statfs_lock_purge $cl
stat -f "$cl_dir" > /dev/null
t_trigger_show statfs_lock_purge "after" $cl
# then attempt to read X+2 blocks through stale cached X blocks
stat_lots "$cl_dir" "$T_TMP.2"
# everyone needs to match
diff -u "$T_TMP.good" "$T_TMP.1"
diff -u "$T_TMP.good" "$T_TMP.2"
echo 0 > "$(t_debugfs_path $sv)/options/btree_force_tiny_blocks"
cat "$(t_debugfs_path $sv)/options/btree_force_tiny_blocks"
rm -rf "$sv_dir/dir"
t_pass
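
stat_lots above only samples every INC-th file rather than statting all LOTS of them; seq's three-argument FIRST INCREMENT LAST form produces that stride (smaller numbers here for illustration):

```shell
# Indices 1, 1001, 2001, ... spread the reads across the whole tree
# without touching every file.
LOTS=5000
INC=1000
seq 1 $INC $LOTS | tr '\n' ' '; echo   # -> 1 1001 2001 3001 4001
```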

115
tests/tests/xfstests.sh Normal file
View File

@@ -0,0 +1,115 @@
#
# Run a specific set of tests in xfstests and make sure they pass. This
# references an external existing xfstests repo. It points xfstests at
# the first scoutfs mount and its device.
#
# This is a bit odd as a test because it's really running a bunch of
# tests on its own. We want to see their output as they progress.
# so we restore output to stdout while xfstests is running. We manually
# generate compared output from the check.log that xfstests procuces
# which lists the tests that were run and their result.
#
# _flakey_drop_and_remount drops writes during unmount. This stops a
# server from indicating that it is done, so it will be fenced by the
# next server; it's right there in the comment: "to simulate a crash".
# Our fencing agent would find that the mount isn't actually live
# anymore and would be fine. For now it just barks out a warning
# in dmesg.
#
# make sure we have our config
if [ -z "$T_XFSTESTS_REPO" -o -z "$T_XFSTESTS_BRANCH" ]; then
t_fail "xfstests requires -X repo and -x branch"
fi
t_quiet mkdir -p "$T_TMPDIR/mnt.scratch"
t_quiet cd "$T_XFSTESTS_REPO"
t_quiet git fetch
# this remote use is bad, do better
t_quiet git checkout -B "$T_XFSTESTS_BRANCH" --track "origin/$T_XFSTESTS_BRANCH"
t_quiet make
t_quiet sync
# pwd stays in xfstests dir to build config and run
cat << EOF > local.config
export FSTYP=scoutfs
export MKFS_OPTIONS="-Q 1"
export TEST_DEV=$T_B0
export TEST_DIR=$T_M0
export SCRATCH_DEV=$T_EXDEV
export SCRATCH_MNT="$T_TMPDIR/mnt.scratch"
export MOUNT_OPTIONS="-o server_addr=127.0.0.1"
# is this needed?
export TEST_FS_MOUNT_OPTS="-o server_addr=127.0.0.1"
EOF
cat << EOF > local.exclude
generic/003 # missing atime update in buffered read
generic/023 # renameat2 not implemented
generic/024 # renameat2 not implemented
generic/025 # renameat2 not implemented
generic/029 # mmap missing
generic/030 # mmap missing
generic/075 # file content mismatch failures (fds, etc)
generic/078 # renameat2 not implemented
generic/080 # mmap missing
generic/103 # enospc causes trans commit failures
generic/105 # needs triage: something about acls
generic/108 # mount fails on failing device?
generic/112 # file content mismatch failures (fds, etc)
generic/120 # (can't exec 'cause no mmap)
generic/126 # (can't exec 'cause no mmap)
generic/141 # mmap missing
generic/213 # enospc causes trans commit failures
generic/215 # mmap missing
generic/237 # wrong error return from failing setfacl?
generic/246 # mmap missing
generic/247 # mmap missing
generic/248 # mmap missing
generic/319 # utils output change? update branch?
generic/321 # requires selinux enabled for '+' in ls?
generic/325 # mmap missing
generic/338 # BUG_ON update inode error handling
generic/346 # mmap missing
generic/347 # _dmthin_mount doesn't work?
generic/375 # utils output change? update branch?
EOF
t_restore_output
echo "(showing output of xfstests)"
./check -g quick -E local.exclude
#./check tests/generic/001
#./check tests/generic/006
# the fs is unmounted when check finishes
#
# ./check writes the results of the run to check.log. It lists
# the tests it ran, skipped, or failed. Then it writes a line saying
# everything passed or some failed. We scrape the most recent run and
# use it as the output to compare to make sure that we run the right
# tests and get the right results.
#
awk '
/^(Ran|Not run|Failures):.*/ {
if (pf) {
res=""
pf=""
} res = res "\n" $0
}
/^(Passed|Failed).*tests$/ {
pf=$0
}
END {
print res "\n" pf
}' < results/check.log > "$T_TMPDIR/results"
# put a test per line so diff shows tests that differ
egrep "^(Ran|Not run|Failures):" "$T_TMPDIR/results" | \
fmt -w 1 > "$T_TMPDIR/results.fmt"
egrep "^(Passed|Failed).*tests$" "$T_TMPDIR/results" >> "$T_TMPDIR/results.fmt"
t_compare_output cat "$T_TMPDIR/results.fmt"
t_pass
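
The awk scrape above keeps only the most recent run's summary: each "Passed/Failed ... tests" line marks the end of a run, so the next "Ran:" line resets the accumulator. A sketch with made-up check.log contents (the log lines are assumptions about xfstests' format):

```shell
awk '
/^(Ran|Not run|Failures):.*/ {
    # a populated pf means a previous run finished; start over
    if (pf) { res = ""; pf = "" }
    res = res "\n" $0
}
/^(Passed|Failed).*tests$/ { pf = $0 }
END { print res "\n" pf }
' <<'EOF'
Ran: generic/001 generic/002
Failures: generic/002
Failed 1 of 2 tests
Ran: generic/001 generic/002
Passed all 2 tests
EOF
```

Only the second run's "Ran:" and "Passed all 2 tests" lines survive; the earlier failed run is discarded when the later "Ran:" line arrives.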