scoutfs/tests

This test suite exercises multi-node scoutfs by using multiple mounts on one host to simulate multiple nodes across a network.

It also contains a lightweight wrapper that executes xfstests on one of the test mounts.

Invoking Tests

The basic test invocation has to specify the devices for the fs, the number of mounts to test, whether to create a new fs and insert the built module, and where to put the results.

# bash ./run-tests.sh                       \
    -M /dev/vda                             \
    -D /dev/vdb                             \
    -i                                      \
    -m                                      \
    -n 3                                    \
    -q 2                                    \
    -r ./results

All options can be seen by running with -h.

This script is built to test multi-node systems on one host by using different mounts of the same devices. The script creates a fake block device in front of each fs block device for each mount that will be tested. It creates predictably named device mapper devices and mounts them on /mnt/test.N. These static device names and mount paths limit the script to a single execution per host.
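
For example, with -n 3 the script produces names like the following; the patterns come from the environment variable table at the end of this document:

    /dev/mapper/_scoutfs_test_meta_[0-2]    # per-mount meta bdevs
    /dev/mapper/_scoutfs_test_data_[0-2]    # per-mount data bdevs
    /mnt/test.[0-2]                         # per-mount mount paths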

All tests will be run by default. Particular tests can be included or excluded by providing test name regular expressions with the -I and -E options. The definitive list of tests and the order in which they'll be run is found in the sequence file.
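
For example, the invocation above could be restricted to tests whose names match one expression while skipping another; the regular expressions here are hypothetical, consult the sequence file for real test names:

# bash ./run-tests.sh                       \
    -M /dev/vda                             \
    -D /dev/vdb                             \
    -i                                      \
    -m                                      \
    -n 3                                    \
    -q 2                                    \
    -r ./results                            \
    -I 'mount'                              \
    -E 'xfstests'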

xfstests

The last test that is run checks out, builds, and runs xfstests. It needs -X and -x options for the xfstests git repo and branch. It also needs spare devices on which to make scratch scoutfs volumes. The test verifies that the expected set of xfstests tests ran and passed.

    -f /dev/vdc                             \
    -e /dev/vdd                             \
    -X $HOME/git/scoutfs-xfstests           \
    -x scoutfs                              \

An xfstests repo that knows about scoutfs is only needed so that the scoutfs cases can be sprinkled throughout the xfstests harness.

Individual Test Invocation

Each test is run in a new bash invocation. A set of directories in the test volume and in the results path is created for the test. Each test's working directory isn't managed.

Test output, temp files, and dmesg snapshots are all put in a tmp/ dir in the results/ dir. Per-test dirs are destroyed only before each test invocation, so they remain for inspection after a run.

The harness will check for unexpected output in dmesg after each individual test.

Each test that fails will have its results appended to the fail.log file in the results/ directory. The details of the failure can be examined in the directories for each test in results/output/ and results/tmp/.
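
A rough sketch of the resulting layout; the per-test directory names below are illustrative:

    results/
        fail.log          # failing tests append their results here
        output/<test>/    # captured output per test
        tmp/<test>/       # temp files and dmesg snapshots per test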

Writing Tests

Tests have access to a set of t_ prefixed bash functions that are found in files in funcs/.

Tests complete by calling t_ functions which indicate the result of the test and can return a message. If the test passes then its output is compared with known good output. If the output doesn't match then the test fails. The t_ completion functions return specific status codes so that returning without calling one can be detected.

The golden output has to be consistent across test platforms, so there are a number of filter functions which strip local details from command output. t_filter_fs, which canonicalizes fs mount paths and block device details, is by far the most used.
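
A minimal sketch of a test tying these pieces together; only t_filter_fs is named above, so the completion function t_pass here is an assumption, and the real t_ names live in funcs/:

    #!/usr/bin/bash

    # do some work and print through the filter so mount paths and
    # device details are canonicalized against the golden output
    touch "$T_D0/example-file"
    echo "created: $T_D0/example-file" | t_filter_fs

    # t_pass is an assumed name for a completion function; completion
    # functions return specific status codes so a missing call is caught
    t_pass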

Tests can be relatively loose about checking errors. If commands produce output in failure cases then the test will fail without having to specifically test for errors on every command execution. Care should be taken to make sure that blowing through a bunch of commands with no error checking doesn't produce catastrophic results. Usually tests are simple and it's fine.

A bare sync will sync all the mounted filesystems and ensure that no mounts have dirty data. sync -f can be used to sync just a specific filesystem, though it doesn't exist on all platforms.
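
For example, sync -f takes a path and flushes only the filesystem containing it:

    sync              # flush dirty data on all mounted filesystems
    sync -f "$T_M0"   # flush just the fs behind the first test mount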

The harness doesn't currently ensure that all mounts are restored after each test invocation. It probably should. For now it's the responsibility of the test to restore any mounts it alters, and there are t_ functions to mount all configured mount points.
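
A sketch of that responsibility, assuming a hypothetical helper name t_mount_all; see funcs/ for the real mount helpers:

    # a test that unmounts a node must put the mounts back itself
    umount "$T_M1"
    # ... exercise the remaining mounts ...
    t_mount_all    # hypothetical: remount all configured mount points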

Environment Variables

Tests have a number of exported environment variables that are commonly used during the test.

Variable         Description            Origin               Example
T_MB[0-9]        per-mount meta bdev    created per run      /dev/mapper/_scoutfs_test_meta_[0-9]
T_DB[0-9]        per-mount data bdev    created per run      /dev/mapper/_scoutfs_test_data_[0-9]
T_D[0-9]         per-mount test dir     made for test        /mnt/test.[0-9]/t
T_META_DEVICE    main FS meta bdev      -M                   /dev/vda
T_DATA_DEVICE    main FS data bdev      -D                   /dev/vdb
T_EX_META_DEV    scratch meta bdev      -f                   /dev/vdc
T_EX_DATA_DEV    scratch data bdev      -e                   /dev/vdd
T_M[0-9]         mount paths            mounted per run      /mnt/test.[0-9]/
T_MODULE         built kernel module    created per run      ../kmod/src/..ko
T_NR_MOUNTS      number of mounts       -n                   3
T_O[0-9]         mount options          created per run      -o server_addr=
T_QUORUM         quorum count           -q                   2
T_EXTRA          per-test file dir      revision controlled  tests/extra/t
T_TMP            per-test tmp prefix    made for test        results/tmp/t/tmp
T_TMPDIR         per-test tmp dir       made for test        results/tmp/t

There are also a number of variables that are set in response to options and exported, but their use is rare so they aren't included here.
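
As an illustrative sketch, a test might combine the per-mount variables to exercise cross-mount visibility:

    # write through the first mount, read through the second
    echo contents > "$T_D0/shared-file"
    sync
    cat "$T_D1/shared-file" | t_filter_fs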