Zach Brown 03ab5cedb6 clean up createmany-parallel-mounts test
This test is trying to make sure that concurrent work isn't much,
much slower than individual work.  It does this by timing the creation
of a bunch of files in a dir on one mount and then timing the same
creates on two mounts concurrently.  But it messed up the concurrency
pretty badly.

It had the concurrent createmany tasks creating files with a full path.
That means that every create is trying to read all the parent
directories.  The way inode number allocation works means that one of
the mounts is likely to be getting a write lock that includes a shared
parent.  This created a ton of cluster lock contention between the two
tasks.
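
For illustration, the broken concurrent phase had roughly this shape
(the $MNT_0/$MNT_1 variables, the count, and the createmany arguments
are assumptions here, not the test's literal code):

    # each task names its files by full path, so every create
    # resolves the shared parent directories and the two mounts
    # contend on the cluster locks that cover them
    createmany -o "$MNT_0/dir/file-" $NR &
    createmany -o "$MNT_1/dir/file-" $NR &
    wait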

Then it didn't sync the creates between phases.  It could be
accidentally recording the time it took to write out the dirty items
from the single-mount creates as time taken during the parallel
creates.

By syncing between phases and having the createmany tasks create files
relative to their per-mount directories we actually perform concurrent
work and test that we're not creating contention outside of the task
load.
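
A minimal sketch of the corrected flow, under the same assumed names:

    # write back the dirty items from the single-mount phase so
    # their IO isn't charged to the timing of the parallel phase
    sync

    # each task creates relative to its own per-mount directory so
    # creates don't walk shared parents on every call
    (cd "$MNT_0/dir" && createmany -o file- $NR) &
    (cd "$MNT_1/dir" && createmany -o file- $NR) &
    wait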

This became a problem as we switched from loopback devices to device
mapper devices.  The loopback writers were using buffered writes, so we
were masking the IO cost of constantly invalidating and refilling the
item cache by turning the reads into memory copies out of the page
cache.

While we're in here, we also clean up the created files, and we use
t_fail to fail the test before that cleanup so the files still exist
and can be examined.
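
Roughly, the end of the test then looks like this sketch (single_secs,
par_secs, and the 2x threshold are assumptions; t_fail is the suite's
failure helper named above):

    # single_secs and par_secs were measured in the two phases above;
    # fail while the created files still exist so they can be examined
    if [ "$par_secs" -gt "$((single_secs * 2))" ]; then
        t_fail "parallel creates took ${par_secs}s vs ${single_secs}s single"
    fi

    # only clean up once the comparison has passed
    rm -rf "$MNT_0/dir" "$MNT_1/dir"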

Signed-off-by: Zach Brown <zab@versity.com>

Introduction

scoutfs is a clustered in-kernel Linux filesystem designed to support large archival systems. It features additional interfaces and metadata so that archive agents can perform their maintenance workflows without walking all the files in the namespace. Its cluster support lets deployments add nodes to satisfy archival tier bandwidth targets.

The design goal is to reach file populations in the trillions, with the archival bandwidth to match, while remaining operational and responsive.

Highlights of the design and implementation include:

  • Fully consistent POSIX semantics between nodes
  • Atomic transactions to maintain consistent persistent structures
  • Integrated archival metadata replaces syncing to external databases
  • Dynamic separation of resources lets nodes write in parallel
  • 64-bit throughout; no limits on file or directory sizes or counts
  • Open GPLv2 implementation

Community Mailing List

Please join us on the open scoutfs-devel@scoutfs.org mailing list, hosted on Google Groups.
