The fence script we use for our single node multi-mount tests only knows
how to fence by using forced unmount to destroy a mount. For now, the
tests also only generate failing nodes that need to be fenced with
forced unmount. This results in the awkward situation where the testing
fence script has nothing to do because the mount is already gone.
When the test fence script has nothing to do, we might not notice if it
isn't run at all. This adds explicit verification to the fencing tests
that the script was really invoked: the fence script logs each
invocation and the test checks that the log was written.
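
A minimal sketch of what that verification can look like, assuming
hypothetical log and helper names (the real fence script and test
harness use their own):

    # In the fence script: append a line for every invocation, even
    # when the mount is already gone and there is nothing to unmount.
    # FENCE_LOG is an assumed path for illustration.
    FENCE_LOG=/tmp/scoutfs-test-fence.log
    echo "$(date +%s) fence invoked for rid $1" >> "$FENCE_LOG"

    # In the test: count log entries before and after triggering
    # fencing and fail if the script never ran.
    : >> "$FENCE_LOG"
    before=$(wc -l < "$FENCE_LOG")
    trigger_fencing        # placeholder for the test's fault injection
    after=$(wc -l < "$FENCE_LOG")
    [ "$after" -gt "$before" ] || { echo "fence script was not run"; exit 1; }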
While we're at it, we take the opportunity to tidy up some of the
scripting around this. We use a sysfs file containing the data device
major:minor numbers so that the fencing script can find and unmount
mounts without having to ask the mounts themselves for their rid, which
may not be possible if they're no longer operational.
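
A hedged sketch of how a fence script can use that, assuming a
hypothetical path for the sysfs file and that the major:minor it
records matches the device number the mount reports in mountinfo:

    # MAJMIN_FILE stands in for the real sysfs file; its exact path
    # is an assumption for illustration.
    majmin=$(cat "$MAJMIN_FILE")

    # Field 3 of /proc/self/mountinfo is the mount's major:minor and
    # field 5 is the mount point; force unmount every match.
    awk -v mm="$majmin" '$3 == mm { print $5 }' /proc/self/mountinfo |
    while read -r mnt; do
        umount -f "$mnt"
    done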
Signed-off-by: Zach Brown <zab@versity.com>
The local-force-unmount fenced fencing script only works when all the
mounts are on the local host, and it fences by force unmounting. It is
only used in our specific local testing scripts. Packaging it as an
example led people to believe that it could be used to cobble together
a multi-host testing network, however temporary.
Move it from being packaged in utils to being private to our tests so
that it doesn't present an attractive nuisance.
Signed-off-by: Zach Brown <zab@versity.com>
This should be good enough to get single node mounts up and running
with fenced with minimal effort. The example config needs to be copied
to /etc/scoutfs/scoutfs-fenced.conf before it is functional, so this
still requires specific opt-in and won't accidentally run on multi-node
systems.
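
The opt-in step is just a copy; the source path below is an assumption
about where the packaged example ends up:

    # Source path is hypothetical; adjust to wherever the package
    # installs the example config.
    cp /usr/share/doc/scoutfs-utils/scoutfs-fenced.conf.example \
        /etc/scoutfs/scoutfs-fenced.conf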
Signed-off-by: Ben McClelland <ben.mcclelland@versity.com>