scoutfs/tests
Zach Brown 73bf916182 Return ENOSPC as space gets low
Returning ENOSPC is challenging because clients work from allocators
that are only a fraction of the whole, and because we use COW
transactions we need to be able to allocate in order to free.  This
adds support for returning ENOSPC to client posix allocators as free
space gets low.

For metadata, we reserve a number of free blocks for making progress
with client and server transactions which can free space.  The server
sets the low flag in a client's allocator if we start to dip into
reserved blocks.  In the client we add an argument to entering a
transaction which indicates if we're allocating new space (as opposed to
just modifying existing data or freeing).  When an allocating
transaction runs low and the server low flag is set then we return
ENOSPC.

Adding an argument to transaction holders and having it return ENOSPC
gave us the opportunity to clean it up and make it a little clearer.
More work is done outside the wait_event function and it now
specifically waits for a transaction to cycle when it forces a commit
rather than spinning until the transaction worker acquires the lock and
stops it.

For data the same pattern applies except there are no reserved blocks
and we don't COW data so it's a simple case of returning the hard ENOSPC
when the data allocator flag is set.

The server needs to consider the reserved count when refilling the
client's meta_avail allocator and when swapping between the two
meta_avail and meta_free allocators.

We add the reserved metadata block count to statfs_more so that df can
subtract it from the free meta blocks and make it clear when ENOSPC is
going to be returned for metadata allocations.

We increase the minimum device size in mkfs so that small testing
devices provide sufficient reserved blocks.

And finally we add a little test that makes sure we can fill both
metadata and data to ENOSPC and then recover by deleting what we filled.
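The fill-and-recover pattern that test exercises can be sketched as
below; the function names, file sizes, and file naming are illustrative
assumptions, not the real test's.

```shell
# Illustrative sketch of the fill-to-ENOSPC test pattern (names and
# sizes are hypothetical).

# Write files into $1 until a write fails (e.g. with ENOSPC) or $2
# files have been written; echo how many files were created.
fill_until_enospc() {
	local dir="$1" max="$2" i=0
	while [ "$i" -lt "$max" ]; do
		dd if=/dev/zero of="$dir/fill.$i" bs=1M count=4 \
			>/dev/null 2>&1 || break
		i=$((i + 1))
	done
	echo "$i"
}

# Recover by deleting everything we filled and syncing the deletion.
recover_space() {
	rm -f "$1"/fill.*
	sync
}
```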

Signed-off-by: Zach Brown <zab@versity.com>
2021-07-07 14:13:14 -07:00

This test suite exercises multi-node scoutfs by using multiple mounts on one host to simulate multiple nodes across a network.

It also contains a light test wrapper that executes xfstests on one of the test mounts.

Invoking Tests

The basic test invocation has to specify the devices for the fs, the number of mounts to test, whether to create a new fs and insert the built module, and where to put the results.

# bash ./run-tests.sh                       \
    -M /dev/vda                             \
    -D /dev/vdb                             \
    -i                                      \
    -m                                      \
    -n 3                                    \
    -q 2                                    \
    -r ./results

All options can be seen by running with -h.

This script is built to test multi-node systems on one host by using different mounts of the same devices. The script creates a fake block device in front of each fs block device for each mount that will be tested. Currently it creates these from free loop devices and mounts on /mnt/test.[0-9].

All tests will be run by default. Particular tests can be included or excluded by providing test name regular expressions with the -I and -E options. The definitive list of tests and the order in which they'll be run is found in the sequence file.
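For example, a run could be restricted with regexes like these (the patterns here are illustrative; consult the sequence file for real test names):

```shell
# Run only tests matching 'enospc', skipping anything matching
# 'xfstests' (illustrative patterns, not real test names).
bash ./run-tests.sh          \
    -M /dev/vda              \
    -D /dev/vdb              \
    -n 3 -q 2                \
    -r ./results             \
    -I 'enospc'              \
    -E 'xfstests'
```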

xfstests

The last test that is run checks out, builds, and runs xfstests. It needs -X and -x options for the xfstests git repo and branch. It also needs spare devices on which to make scratch scoutfs volumes. The test verifies that the expected set of xfstests tests ran and passed.

    -f /dev/vdc                             \
    -e /dev/vdd                             \
    -X $HOME/git/scoutfs-xfstests           \
    -x scoutfs                              \

An xfstests repo that knows about scoutfs is required only to sprinkle the scoutfs cases throughout the xfstests harness.

Individual Test Invocation

Each test is run in a new bash invocation. A set of directories in the test volume and in the results path are created for the test. Each test's working directory isn't managed.

Test output, temp files, and dmesg snapshots are all put in a tmp/ dir in the results/ dir. Per-test dirs are only destroyed before each test invocation.

The harness will check for unexpected output in dmesg after each individual test.
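That check can be imagined as a diff of dmesg snapshots taken before and after the test; the function and trigger patterns below are a hypothetical sketch, not the harness's actual list.

```shell
# Hypothetical sketch: flag new kernel messages between two dmesg
# snapshot files that look like trouble (the harness's real trigger
# patterns differ).
dmesg_unexpected() {
	diff "$1" "$2" | grep -E '^> .*(WARNING|BUG|Oops)' && return 1
	return 0
}
```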

Each test that fails will have its results appended to the fail.log file in the results/ directory. The details of the failure can be examined in the directories for each test in results/output/ and results/tmp/.

Writing tests

Tests have access to a set of t_ prefixed bash functions that are found in files in funcs/.

Tests complete by calling t_ functions which indicate the result of the test and can return a message. If the test passes then its output is compared with known good output. If the output doesn't match then the test fails. The t_ completion functions return specific status codes so that returning without calling one can be detected.

The golden output has to be consistent across test platforms so there are a number of filter functions which strip out local details from command output. t_filter_fs is by far the most used; it canonicalizes fs mount paths and block device details.
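t_filter_fs's implementation isn't reproduced here, but a minimal stand-in shows the idea; the path and device patterns below are assumptions, not its real ones.

```shell
# Minimal illustrative filter: canonicalize mount paths and loop
# device names so output is stable across hosts (the patterns are
# assumptions, not t_filter_fs's real ones).
filter_fs() {
	sed -e 's|/mnt/test\.[0-9]|MNT|g' \
	    -e 's|/dev/loop[0-9][0-9]*|DEV|g'
}
```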

Tests can be relatively loose about checking errors. If commands produce output in failure cases then the test will fail without having to specifically test for errors on every command execution. Care should be taken to make sure that blowing through a bunch of commands with no error checking doesn't produce catastrophic results. Usually tests are simple and it's fine.

A bare sync will sync all the mounted filesystems and ensure that no mounts have dirty data. sync -f can be used to sync just a specific filesystem, though it doesn't exist on all platforms.
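One way to hedge against missing sync -f support is a small fallback wrapper; this is a sketch, not a harness function.

```shell
# Sync a single filesystem when sync -f is supported, falling back to
# a global sync on platforms whose sync(1) lacks -f.
sync_fs() {
	sync -f "$1" 2>/dev/null || sync
}
```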

The harness doesn't currently ensure that all mounts are restored after each test invocation. It probably should. Currently it's the responsibility of the test to restore any mounts it alters and there are t_ functions to mount all configured mount points.

Environment Variables

Tests have a number of exported environment variables that are commonly used during the test.

Variable       Description          Origin           Example
T_MB[0-9]      per-mount meta bdev  created per run  /dev/loop0
T_DB[0-9]      per-mount data bdev  created per run  /dev/loop1
T_D[0-9]       per-mount test dir   made for test    /mnt/test.[0-9]/t
T_META_DEVICE  main FS meta bdev    -M               /dev/vda
T_DATA_DEVICE  main FS data bdev    -D               /dev/vdb
T_EX_META_DEV  scratch meta bdev    -f               /dev/vdd
T_EX_DATA_DEV  scratch data bdev    -e               /dev/vdc
T_M[0-9]       mount paths          mounted per run  /mnt/test.[0-9]/
T_NR_MOUNTS    number of mounts     -n               3
T_O[0-9]       mount options        created per run  -o server_addr=
T_QUORUM       quorum count         -q               2
T_TMP          per-test tmp prefix  made for test    results/tmp/t/tmp
T_TMPDIR       per-test tmp dir     made for test    results/tmp/t
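A test can use these to iterate over every mount's test directory; the helper name below is hypothetical, not one of the harness's t_ functions.

```shell
# Hypothetical helper: run a command inside each per-mount test dir,
# assuming the harness exported T_NR_MOUNTS and T_D0, T_D1, ... as
# described above.
for_each_test_dir() {
	local i dir
	for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
		eval "dir=\$T_D$i"
		( cd "$dir" && "$@" )
	done
}
```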

There are also a number of variables that are set in response to options and are exported but their use is rare so they aren't included here.