Compare commits

..

9 Commits

Author SHA1 Message Date
Zach Brown
a1f917d5bb Update tracing with cluster lock changes
Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:06:06 -07:00
Zach Brown
68aa9e5ce6 Directly queue cluster lock work
We had a little helper that tested the list and then scheduled work,
which required holding the spinlock.  This was a little too crude: it
forced scoutfs_unlock() to acquire the invalidate work list spinlock
even though it already held the cluster lock spinlock, could see that
invalidate requests were pending, and could simply queue the
invalidation work directly.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
Zach Brown
28079610ed Two cluster lock LRU lists with less precision
Currently we maintain a single LRU list of cluster locks and every time
we acquire a cluster lock we move it to the head of the LRU, creating
significant contention acquiring the spinlock that protects the LRU
list.

This moves to two LRU lists, a list of cluster locks ready to be
reclaimed and one for locks that are in active use.  We mark locks with
which list they're on and only move them to the active list if they're
on the reclaim list.  We track imbalance between the two lists so that
they're always roughly the same size.

This removes contention maintaining a precise LRU amongst a set of
active cluster locks.  It doesn't address contention creating or
removing locks, which are already very expensive operations.

It also loses strict ordering by access time.  Reclaim has to make it
through the oldest half of locks before getting to the newer half,
though there is no guaranteed ordering amongst the newest half.
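The two-list scheme described above can be sketched in userspace C. This is an illustrative analogue, not the scoutfs code: the list helpers stand in for the kernel's list_head, the counters stand in for the imbalance tracking, and all the names are hypothetical.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal doubly-linked list, standing in for the kernel's list_head. */
struct node { struct node *prev, *next; };

static void list_init(struct node *n) { n->prev = n->next = n; }

static void list_del(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	list_init(n);
}

static void list_add_tail(struct node *n, struct node *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

enum lru_which { LRU_RECLAIM, LRU_ACTIVE };

struct lock {
	struct node lru_head;
	enum lru_which which;	/* which LRU list this lock is on */
};

struct lru {
	struct node reclaim;	/* reclaim candidates, oldest first */
	struct node active;	/* recently used, unordered amongst themselves */
	long nr_reclaim;
	long nr_active;
};

static void lru_init(struct lru *l)
{
	list_init(&l->reclaim);
	list_init(&l->active);
	l->nr_reclaim = l->nr_active = 0;
}

/* New locks start out on the reclaim list. */
static void lru_insert(struct lru *l, struct lock *lck)
{
	list_add_tail(&lck->lru_head, &l->reclaim);
	lck->which = LRU_RECLAIM;
	l->nr_reclaim++;
}

/*
 * Called on every acquisition.  Only locks on the reclaim list move;
 * hits on the active list touch nothing, which is what removes the
 * per-acquisition contention of maintaining one precise LRU.
 */
static void lru_accessed(struct lru *l, struct lock *lck)
{
	if (lck->which != LRU_RECLAIM)
		return;
	list_del(&lck->lru_head);
	list_add_tail(&lck->lru_head, &l->active);
	lck->which = LRU_ACTIVE;
	l->nr_reclaim--;
	l->nr_active++;
}

/* Demote the oldest active locks so the lists stay roughly balanced. */
static void lru_rebalance(struct lru *l)
{
	while (l->nr_active > l->nr_reclaim && l->nr_active > 0) {
		struct lock *lck = (struct lock *)((char *)l->active.next -
					offsetof(struct lock, lru_head));
		list_del(&lck->lru_head);
		list_add_tail(&lck->lru_head, &l->reclaim);
		lck->which = LRU_RECLAIM;
		l->nr_active--;
		l->nr_reclaim++;
	}
}
```

Note how repeated accesses to an already-active lock are free: no list movement and no shared counter updates until a rebalance or reclaim pass runs.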

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
Zach Brown
05cd4ebe92 Lookup cluster locks with an RCU hash table
The previous work that introduced the per-lock spinlock and the
refcount now makes it easy to switch from an rbtree protected by a
spinlock to a hash table protected by RCU read critical sections.
The cluster lock lookup fast path now only dirties fields in the
scoutfs_lock struct itself.

We have to be a little careful when inserting so that users can't get
references to locks that made it into the hash table but which then had
to be removed because they were found to overlap.

Freeing is straightforward: we only have to make sure to free the
locks in RCU grace periods so that read sections can continue to
reference the memory and see the refcount that indicates that the locks
are freeing.

A few remaining places were using the lookup rbtree to walk all locks;
they're converted to using the range tree that we're keeping around to
resolve overlapping ranges, which is also handy for iteration that
isn't performance sensitive.

The LRU still creates contention on the linfo spinlock on every
lookup; fixing that is next.
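The refcount discipline this commit relies on, where lookups must refuse locks whose count has already hit zero, can be sketched with C11 atomics. This is a hedged userspace analogue with hypothetical names; it shows only the try-get logic, not a real rhashtable or RCU read section.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace sketch: only the refcount, not the real scoutfs_lock. */
struct lock { atomic_int refcount; };

/*
 * An RCU lookup can dereference a lock that a concurrent put is in the
 * middle of freeing (the free itself waits for a grace period), so it
 * must not hand out a reference once the count has reached zero.
 * Increment only while the count is observed non-zero.
 */
static bool lock_tryget(struct lock *lck)
{
	int cnt = atomic_load(&lck->refcount);

	while (cnt != 0) {
		if (atomic_compare_exchange_weak(&lck->refcount, &cnt, cnt + 1))
			return true;	/* safely took a reference */
	}
	return false;			/* lock is freeing; retry the lookup */
}

/* Returns true when the caller dropped the final reference. */
static bool lock_put(struct lock *lck)
{
	return atomic_fetch_sub(&lck->refcount, 1) == 1;
}
```

A lookup that sees lock_tryget() fail simply restarts, by which time the freeing lock has been removed from the table and a fresh one can be inserted.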

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
Zach Brown
0041b599ab Protect cluster locks with refcounts
The first pass at managing the cluster lock state machine used a simple
global spinlock.  It's time to break it up.

This adds refcounting to the cluster lock struct.  Rather than managing
global data structures and individual lock state all under a global
spinlock, we use per-structure locks, a lock spinlock, and a lock
refcount.

Active users of the cluster lock hold a reference.  This primarily lets
unlock only check global structures once the refcounts say that it's
time to remove the lock from the structures.  The careful use of the
refcount to avoid locks that are being freed during lookup also paves
the way for using mostly read-only RCU lookup structures soon.

The global LRU is still modified on every lock use; that'll also be
removed in future work.

The linfo spinlock is now only used for the LRU and lookup structures.
Other uses are removed, which requires more careful use of the finer
grained locks that initially just mirrored the use of the linfo
spinlock to keep those introduction patches safe.

The move from a single global lock to more fine grained locks creates
nesting that needs to be managed.  Shrinking and recovery in particular
need to be careful as they transition from spinlocks used to find
cluster locks to getting the cluster lock spinlock.

The presence of freeing locks in the lookup indexes means that some
callers need to retry if they hit freeing locks.  We have to add this
protection to recovery iterating over locks by their key value, but it
wouldn't have made sense to build that around the lookup rbtree as it's
going away.  It makes sense to use the range tree that we're going to
keep using to make sure we don't accidentally introduce locks whose
ranges overlap (which would lead to item cache inconsistency).
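The "unlock only checks global structures once the refcounts say it's time" behavior described above is the classic dec-and-lock pattern. A minimal userspace sketch, with an atomic-flag spinlock and a counter standing in for the mount-wide structures; all names are illustrative, not the scoutfs API.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* A trivial spinlock standing in for the mount-wide linfo spinlock. */
static atomic_flag linfo_lock = ATOMIC_FLAG_INIT;
static void linfo_spin_lock(void)   { while (atomic_flag_test_and_set(&linfo_lock)) ; }
static void linfo_spin_unlock(void) { atomic_flag_clear(&linfo_lock); }

/* Stand-in for membership in the global LRU and lookup structures. */
static int locks_in_structures;

struct cluster_lock { atomic_int refcount; };

static void insert_lock(struct cluster_lock *lck)
{
	atomic_init(&lck->refcount, 1);	/* reference held by the structures */
	linfo_spin_lock();
	locks_in_structures++;
	linfo_spin_unlock();
}

/*
 * The fast path drops a reference without touching global structures.
 * Only the putter that drops the last reference takes the global lock
 * and removes the lock from the LRU and lookup structures.
 */
static bool put_lock(struct cluster_lock *lck)
{
	if (atomic_fetch_sub(&lck->refcount, 1) != 1)
		return false;		/* others still hold references */

	linfo_spin_lock();
	locks_in_structures--;		/* unlink from LRU and lookup */
	linfo_spin_unlock();
	return true;			/* caller may now free the lock */
}
```

The payoff is that the common unlock path never contends on the global lock; only the rare final put does.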

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
Zach Brown
f30e545870 Add per-cluster lock spinlock
Add a spinlock to the scoutfs_lock cluster lock which protects its
state.   This replaces the use of the mount-wide lock_info spinlock.

In practice, for now, this largely just mirrors the continued use of the
lock_info spinlock because it's still needed to protect the mount-wide
structures that are used during put_lock.   That'll be fixed in future
patches as the use of global structures is reduced.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
Zach Brown
cf046f26c4 Cluster lock invalidation and shrink spinlocks
Cluster lock invalidation and shrinking have very similar work flows.
They rarely modify the state of locks and put them on lists for work to
process.  Today the lists and state modification are protected by the
mount-wide lock_info spinlock, which we want to break up.

This creates a little work_list struct that has a work_queue, list, and
lock.   Invalidation and shrinking use this to track locks that are
being processed and protect the list with the new spinlock in the
struct.

This leaves some awkward nesting with the lock_info spinlock because it
still protects individual lock state.  That will be fixed as we move
towards individual lock refcounting and spinlocks.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
Zach Brown
7f9f21317c Merge pull request #90 from versity/zab/multiple_alloc_move_commits
Reclaim log_trees alloc roots in multiple commits
2022-06-08 13:23:01 -07:00
Zach Brown
0d4bf83da3 Reclaim log_trees alloc roots in multiple commits
Client log_trees allocator btrees can build up quite a number of
extents.  In the right circumstances, moving fragmented extents can
require dirtying a large number of paths to leaf blocks in the core
allocator btrees.  It might not be possible to dirty all the blocks
necessary to move all the extents in one commit.

This reworks the extent motion so that it can be performed in multiple
commits if the meta allocator for the commit runs out while it is moving
extents.  It's a minimal fix with as little disruption to the ordering
of commits and locking as possible.  It simply bubbles up an error when
the allocators run out and retries functions that can already be retried
in other circumstances.
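The retry shape this commit describes, where a move that runs out of allocator room reports partial progress and the caller applies the commit and calls again, can be sketched as follows. Purely illustrative: the counter and budget stand in for the real allocator state and the commit's remaining meta blocks, and the function names are hypothetical.

```c
#include <assert.h>
#include <errno.h>

/* Extents remaining in the source tree (stand-in for allocator state). */
static int extents_left;

/* Move what fits in one commit; -EINPROGRESS reports partial progress. */
static int alloc_move_some(int budget)
{
	if (extents_left > budget) {
		extents_left -= budget;
		return -EINPROGRESS;
	}
	extents_left = 0;
	return 0;
}

/* The caller applies each commit and retries until the source is empty. */
static int reclaim_in_commits(int budget, int *commits)
{
	int ret;

	*commits = 0;
	do {
		ret = alloc_move_some(budget);
		(*commits)++;		/* one applied server commit */
	} while (ret == -EINPROGRESS);

	return ret;
}
```

Bubbling up -EINPROGRESS keeps each commit's dirty block count bounded without restructuring the commit and locking order around the move.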

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 11:53:53 -07:00
31 changed files with 631 additions and 1126 deletions

View File

@@ -84,6 +84,21 @@ static u64 smallest_order_length(u64 len)
return 1ULL << (free_extent_order(len) * 3);
}
/*
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
*/
static u32 extent_mod_blocks(u32 height)
{
return ((1 + height) * 2) * 3;
}
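As a worked example of the estimate above: each of the three dirtied leaves has a path of at most 1 + height blocks, and doubling covers both the blocks allocated and freed along those paths, so a tree of height 2 budgets (1 + 2) * 2 * 3 = 18 blocks per extent modification. The helper is repeated here so the arithmetic can stand alone.

```c
#include <assert.h>

/* Same over-estimate as the hunk above: three leaf paths of at most
 * (1 + height) blocks each, doubled to cover both allocs and frees. */
static unsigned int extent_mod_blocks(unsigned int height)
{
	return ((1 + height) * 2) * 3;
}
```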
/*
* Free extents don't have flags and are stored in two indexes sorted by
* block location and by length order, largest first. The location key
@@ -877,6 +892,14 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
* -ENOENT is returned if we run out of extents in the source tree
* before moving the total.
*
* If meta_reserved is non-zero then -EINPROGRESS can be returned if the
* current meta allocator's avail blocks or room for freed blocks would
have fallen under the reserved amount.  The trees could have been
successfully dirtied in this case but the number of blocks moved is
not returned.  The caller is expected to deal with the partial
progress by committing the dirty trees and examining the resulting
* modified trees to see if they need to continue moving extents.
*
* The caller can specify that extents in the source tree should first
* be found based on their zone bitmaps. We'll first try to find
* extents in the exclusive zones, then vacant zones, and then we'll
@@ -891,7 +914,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks)
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved)
{
struct alloc_ext_args args = {
.alloc = alloc,
@@ -941,6 +964,14 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
if (ret < 0)
break;
if (meta_reserved != 0 &&
scoutfs_alloc_meta_low(sb, alloc, meta_reserved +
extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
ret = -EINPROGRESS;
break;
}
/* searching set start/len, finish initializing alloced extent */
ext.map = found.map ? ext.start - found.start + found.map : 0;
ext.flags = found.flags;
@@ -1065,15 +1096,6 @@ out:
* than completely exhausting the avail list or overflowing the freed
* list.
*
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
*
* The caller tells us how many extents they're about to modify and how
* many other additional blocks they may cow manually. And finally, the
* caller could be the first to dirty the avail and freed blocks in the
@@ -1082,7 +1104,7 @@ out:
static bool list_has_blocks(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *root, u32 extents, u32 addl_blocks)
{
u32 tree_blocks = (((1 + root->root.height) * 2) * 3) * extents;
u32 tree_blocks = extent_mod_blocks(root->root.height) * extents;
u32 most = 1 + tree_blocks + addl_blocks;
if (le32_to_cpu(alloc->avail.first_nr) < most) {

View File

@@ -131,7 +131,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks);
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);

View File

@@ -529,11 +529,6 @@ static int scoutfs_get_block(struct inode *inode, sector_t iblock,
goto out;
}
if (create && !si->staging && scoutfs_inode_worm_denied(inode)) {
ret = -EACCES;
goto out;
}
/* convert unwritten to written, could be staging */
if (create && ext.map && (ext.flags & SEF_UNWRITTEN)) {
un.start = iblock;
@@ -1197,11 +1192,6 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
if (ret)
goto out;
if (scoutfs_inode_worm_denied(to)) {
ret = -EACCES;
goto out;
}
if ((from_off & SCOUTFS_BLOCK_SM_MASK) ||
(to_off & SCOUTFS_BLOCK_SM_MASK) ||
((byte_len & SCOUTFS_BLOCK_SM_MASK) &&

View File

@@ -1029,11 +1029,6 @@ static int scoutfs_unlink(struct inode *dir, struct dentry *dentry)
goto unlock;
}
if (scoutfs_inode_worm_denied(inode)) {
ret = -EACCES;
goto unlock;
}
if (should_orphan(inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(inode),
&orph_lock);
@@ -1702,12 +1697,6 @@ static int scoutfs_rename_common(struct inode *old_dir,
goto out_unlock;
}
if ((old_inode && scoutfs_inode_worm_denied(old_inode)) ||
(new_inode && scoutfs_inode_worm_denied(new_inode))) {
ret = -EACCES;
goto out_unlock;
}
if (should_orphan(new_inode)) {
ret = scoutfs_lock_orphan(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, scoutfs_ino(new_inode),
&orph_lock);

View File

@@ -107,11 +107,6 @@ retry:
if (ret)
goto out;
if (scoutfs_inode_worm_denied(inode)) {
ret = -EACCES;
goto out;
}
ret = scoutfs_complete_truncate(inode, inode_lock);
if (ret)
goto out;

View File

@@ -8,7 +8,7 @@
*/
#define SCOUTFS_FORMAT_VERSION_MIN 1
#define SCOUTFS_FORMAT_VERSION_MIN_STR __stringify(SCOUTFS_FORMAT_VERSION_MIN)
#define SCOUTFS_FORMAT_VERSION_MAX 2
#define SCOUTFS_FORMAT_VERSION_MAX 1
#define SCOUTFS_FORMAT_VERSION_MAX_STR __stringify(SCOUTFS_FORMAT_VERSION_MAX)
/* statfs(2) f_type */
@@ -856,12 +856,8 @@ struct scoutfs_inode {
struct scoutfs_timespec ctime;
struct scoutfs_timespec mtime;
struct scoutfs_timespec crtime;
struct scoutfs_timespec worm_level1_expire;
};
#define SCOUTFS_INODE_FMT_V1_BYTES offsetof(struct scoutfs_inode, worm_level1_expire)
#define SCOUTFS_INODE_FMT_V2_BYTES sizeof(struct scoutfs_inode)
#define SCOUTFS_INO_FLAG_TRUNCATE 0x1
#define SCOUTFS_ROOT_INO 1

View File

@@ -84,7 +84,6 @@ static void scoutfs_inode_ctor(void *obj)
{
struct scoutfs_inode_info *si = obj;
seqlock_init(&si->seqlock);
init_rwsem(&si->extent_sem);
mutex_init(&si->item_mutex);
seqcount_init(&si->seqcount);
@@ -214,30 +213,6 @@ static u64 get_item_minor(struct scoutfs_inode_info *si, u8 type)
return si->item_minors[ind];
}
void scoutfs_inode_get_worm(struct inode *inode, struct timespec *ts)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
unsigned int seq;
do {
seq = read_seqbegin(&si->seqlock);
*ts = si->worm_expire;
} while (read_seqretry(&si->seqlock, seq));
}
void scoutfs_inode_set_worm(struct inode *inode, u64 expire_sec, u32 expire_nsec)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
/* we don't deal with native timespec truncating our 64bit .sec */
BUILD_BUG_ON(sizeof(si->worm_expire.tv_sec) != sizeof(expire_sec));
write_seqlock(&si->seqlock);
si->worm_expire.tv_sec = expire_sec;
si->worm_expire.tv_nsec = expire_nsec;
write_sequnlock(&si->seqlock);
}
/*
* The caller has ensured that the fields in the incoming scoutfs inode
* reflect both the inode item and the inode index items. This happens
@@ -258,7 +233,7 @@ static void set_item_info(struct scoutfs_inode_info *si,
set_item_major(si, SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE, sinode->data_seq);
}
static void load_inode(struct inode *inode, struct scoutfs_inode *cinode, int inode_bytes)
static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
@@ -287,12 +262,6 @@ static void load_inode(struct inode *inode, struct scoutfs_inode *cinode, int in
si->crtime.tv_sec = le64_to_cpu(cinode->crtime.sec);
si->crtime.tv_nsec = le32_to_cpu(cinode->crtime.nsec);
if (inode_bytes == SCOUTFS_INODE_FMT_V2_BYTES)
scoutfs_inode_set_worm(inode, le64_to_cpu(cinode->worm_level1_expire.sec),
le32_to_cpu(cinode->worm_level1_expire.nsec));
else
scoutfs_inode_set_worm(inode, 0, 0);
/*
* i_blocks is initialized from online and offline and is then
* maintained as blocks come and go.
@@ -303,36 +272,6 @@ static void load_inode(struct inode *inode, struct scoutfs_inode *cinode, int in
set_item_info(si, cinode);
}
/* Returns the max inode size given format version */
static int max_inode_fmt_ver_bytes(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
int ret = 0;
if (sbi->fmt_vers == 1)
ret = SCOUTFS_INODE_FMT_V1_BYTES;
else if (sbi->fmt_vers == 2)
ret = SCOUTFS_INODE_FMT_V2_BYTES;
return ret;
}
/* Returns if inode bytes is valid for our format version */
static bool valid_inode_fmt_ver_bytes(struct super_block *sb, int bytes)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
int ver;
if (bytes == SCOUTFS_INODE_FMT_V1_BYTES)
ver = 1;
else if (bytes == SCOUTFS_INODE_FMT_V2_BYTES)
ver = 2;
else
ver = 0;
return ver > 0 && ver <= sbi->fmt_vers;
}
void scoutfs_inode_init_key(struct scoutfs_key *key, u64 ino)
{
*key = (struct scoutfs_key) {
@@ -342,23 +281,6 @@ void scoutfs_inode_init_key(struct scoutfs_key *key, u64 ino)
};
}
/*
* Read an inode item into the caller's buffer and return the size that
* we read. Returns errors if the inode size is unsupported or doesn't
* make sense for the format version.
*/
static int lookup_inode_item(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_inode *sinode, struct scoutfs_lock *lock)
{
int ret;
ret = scoutfs_item_lookup_within(sb, key, sinode, sizeof(struct scoutfs_inode), lock);
if (ret >= 0 && !valid_inode_fmt_ver_bytes(sb, ret))
return -EIO;
return ret;
}
/*
* Refresh the vfs inode fields if the lock indicates that the current
* contents could be stale.
@@ -394,13 +316,13 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock)
mutex_lock(&si->item_mutex);
if (atomic64_read(&si->last_refreshed) < refresh_gen) {
ret = lookup_inode_item(sb, &key, &sinode, lock);
if (ret > 0) {
load_inode(inode, &sinode, ret);
ret = scoutfs_item_lookup_exact(sb, &key, &sinode,
sizeof(sinode), lock);
if (ret == 0) {
load_inode(inode, &sinode);
atomic64_set(&si->last_refreshed, refresh_gen);
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
si->drop_invalidated = false;
ret = 0;
}
} else {
ret = 0;
@@ -533,11 +455,6 @@ retry:
if (ret)
goto out;
if (scoutfs_inode_worm_denied(inode)) {
ret = -EACCES;
goto out;
}
attr_size = (attr->ia_valid & ATTR_SIZE) ? attr->ia_size :
i_size_read(inode);
@@ -850,10 +767,9 @@ out:
return inode;
}
static void store_inode(struct scoutfs_inode *cinode, struct inode *inode, int inode_bytes)
static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct timespec ts;
u64 online_blocks;
u64 offline_blocks;
@@ -887,15 +803,6 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode, int i
cinode->crtime.sec = cpu_to_le64(si->crtime.tv_sec);
cinode->crtime.nsec = cpu_to_le32(si->crtime.tv_nsec);
memset(cinode->crtime.__pad, 0, sizeof(cinode->crtime.__pad));
if (inode_bytes == SCOUTFS_INODE_FMT_V2_BYTES) {
scoutfs_inode_get_worm(inode, &ts);
cinode->worm_level1_expire.sec = cpu_to_le64(ts.tv_sec);
cinode->worm_level1_expire.nsec = cpu_to_le32(ts.tv_nsec);
memset(cinode->worm_level1_expire.__pad, 0,
sizeof(cinode->worm_level1_expire.__pad));
}
}
/*
@@ -921,15 +828,13 @@ int scoutfs_dirty_inode_item(struct inode *inode, struct scoutfs_lock *lock)
struct super_block *sb = inode->i_sb;
struct scoutfs_inode sinode;
struct scoutfs_key key;
int inode_bytes;
int ret;
inode_bytes = max_inode_fmt_ver_bytes(sb);
store_inode(&sinode, inode, inode_bytes);
store_inode(&sinode, inode);
scoutfs_inode_init_key(&key, scoutfs_ino(inode));
ret = scoutfs_item_update(sb, &key, &sinode, inode_bytes, lock);
ret = scoutfs_item_update(sb, &key, &sinode, sizeof(sinode), lock);
if (!ret)
trace_scoutfs_dirty_inode(inode);
return ret;
@@ -1130,9 +1035,8 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_inode sinode;
struct scoutfs_key key;
int inode_bytes;
struct scoutfs_inode sinode;
int ret;
int err;
@@ -1141,17 +1045,15 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
/* set the meta version once per trans for any inode updates */
scoutfs_inode_set_meta_seq(inode);
inode_bytes = max_inode_fmt_ver_bytes(sb);
/* only race with other inode field stores once */
store_inode(&sinode, inode, inode_bytes);
store_inode(&sinode, inode);
ret = update_indices(sb, si, ino, inode->i_mode, &sinode, lock_list);
BUG_ON(ret);
scoutfs_inode_init_key(&key, ino);
err = scoutfs_item_update(sb, &key, &sinode, inode_bytes, lock);
err = scoutfs_item_update(sb, &key, &sinode, sizeof(sinode), lock);
if (err) {
scoutfs_err(sb, "inode %llu update err %d", ino, err);
BUG_ON(err);
@@ -1519,10 +1421,9 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
u64 ino, struct scoutfs_lock *lock, struct inode **inode_ret)
{
struct scoutfs_inode_info *si;
struct scoutfs_inode sinode;
struct scoutfs_key key;
struct scoutfs_inode sinode;
struct inode *inode;
int inode_bytes;
int ret;
inode = new_inode(sb);
@@ -1544,8 +1445,6 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
si->drop_invalidated = false;
si->flags = 0;
scoutfs_inode_set_worm(inode, 0, 0);
scoutfs_inode_set_meta_seq(inode);
scoutfs_inode_set_data_seq(inode);
@@ -1556,16 +1455,14 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
inode->i_rdev = rdev;
set_inode_ops(inode);
inode_bytes = max_inode_fmt_ver_bytes(sb);
store_inode(&sinode, inode, inode_bytes);
store_inode(&sinode, inode);
scoutfs_inode_init_key(&key, scoutfs_ino(inode));
ret = scoutfs_omap_set(sb, ino);
if (ret < 0)
goto out;
ret = scoutfs_item_create(sb, &key, &sinode, inode_bytes, lock);
ret = scoutfs_item_create(sb, &key, &sinode, sizeof(sinode), lock);
if (ret < 0)
scoutfs_omap_clear(sb, ino);
out:
@@ -1815,7 +1712,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
}
scoutfs_inode_init_key(&key, ino);
ret = lookup_inode_item(sb, &key, &sinode, lock);
ret = scoutfs_item_lookup_exact(sb, &key, &sinode, sizeof(sinode), lock);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
@@ -2172,25 +2069,6 @@ out:
return ret;
}
/*
* Return true if the inode is protected by worm and the current time is
* before the expiration time.
*/
bool scoutfs_inode_worm_denied(struct inode *inode)
{
struct timespec expire;
struct timespec cur;
scoutfs_inode_get_worm(inode, &expire);
if (expire.tv_sec != 0 || expire.tv_nsec != 0) {
cur = CURRENT_TIME;
if (timespec64_compare(&cur, &expire) < 0)
return true;
}
return false;
}
int scoutfs_inode_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);

View File

@@ -23,10 +23,6 @@ struct scoutfs_inode_info {
u64 offline_blocks;
u32 flags;
struct timespec crtime;
struct timespec worm_expire;
/* Prevent readers from racing with xattr_set */
seqlock_t seqlock;
/*
* Protects per-inode extent items, most particularly readers
@@ -145,8 +141,4 @@ void scoutfs_inode_orphan_stop(struct super_block *sb);
void scoutfs_inode_flush_iput(struct super_block *sb);
void scoutfs_inode_destroy(struct super_block *sb);
void scoutfs_inode_get_worm(struct inode *inode, struct timespec *ts);
void scoutfs_inode_set_worm(struct inode *inode, u64 expire_sec, u32 expire_nsec);
bool scoutfs_inode_worm_denied(struct inode *inode);
#endif

View File

@@ -659,11 +659,6 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
if (ret)
goto unlock;
if (scoutfs_inode_worm_denied(inode)) {
ret = -EACCES;
goto unlock;
}
/* can only change size/dv on untouched regular files */
if ((sm.i_size != 0 || sm.data_version != 0) &&
((!S_ISREG(inode->i_mode) ||
@@ -828,7 +823,7 @@ static long scoutfs_ioc_search_xattrs(struct file *file, unsigned long arg)
goto out;
}
if (scoutfs_xattr_parse_tags(sb, name, sx.name_bytes, &tgs) < 0 ||
if (scoutfs_xattr_parse_tags(name, sx.name_bytes, &tgs) < 0 ||
!tgs.srch) {
ret = -EINVAL;
goto out;

View File

@@ -1697,8 +1697,8 @@ static int copy_val(void *dst, int dst_len, void *src, int src_len)
* The amount of bytes copied is returned which can be 0 or truncated if
* the caller's buffer isn't big enough.
*/
static int item_lookup(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, int len_limit, struct scoutfs_lock *lock)
int scoutfs_item_lookup(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
struct cached_item *item;
@@ -1718,8 +1718,6 @@ static int item_lookup(struct super_block *sb, struct scoutfs_key *key,
item = item_rbtree_walk(&pg->item_root, key, NULL, NULL, NULL);
if (!item || item->deletion)
ret = -ENOENT;
else if (len_limit > 0 && item->val_len > len_limit)
ret = -EIO;
else
ret = copy_val(val, val_len, item->val, item->val_len);
@@ -1728,30 +1726,13 @@ out:
return ret;
}
int scoutfs_item_lookup(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
return item_lookup(sb, key, val, val_len, 0, lock);
}
/*
* Return -EIO if the item we find has a value larger than the caller's
* val_len, rather than truncating and returning the size of the copied
* value.
*/
int scoutfs_item_lookup_within(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
return item_lookup(sb, key, val, val_len, val_len, lock);
}
int scoutfs_item_lookup_exact(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock)
{
int ret;
ret = item_lookup(sb, key, val, val_len, 0, lock);
ret = scoutfs_item_lookup(sb, key, val, val_len, lock);
if (ret == val_len)
ret = 0;
else if (ret >= 0)

View File

@@ -3,8 +3,6 @@
int scoutfs_item_lookup(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_lookup_within(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_lookup_exact(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock);

File diff suppressed because it is too large

View File

@@ -1,6 +1,8 @@
#ifndef _SCOUTFS_LOCK_H_
#define _SCOUTFS_LOCK_H_
#include <linux/rhashtable.h>
#include "key.h"
#include "tseq.h"
@@ -19,20 +21,24 @@ struct inode_deletion_lock_data;
*/
struct scoutfs_lock {
struct super_block *sb;
atomic_t refcount;
spinlock_t lock;
struct rcu_head rcu_head;
struct scoutfs_key start;
struct scoutfs_key end;
struct rb_node node;
struct rhash_head ht_head;
struct rb_node range_node;
u64 refresh_gen;
u64 write_seq;
u64 dirty_trans_seq;
struct list_head lru_head;
int lru_on_list;
wait_queue_head_t waitq;
unsigned long request_pending:1,
invalidate_pending:1;
struct list_head inv_head; /* entry in linfo's list of locks with invalidations */
struct list_head inv_list; /* list of lock's invalidation requests */
struct list_head inv_req_list; /* list of lock's invalidation requests */
struct list_head shrink_head;
spinlock_t cov_list_lock;

View File

@@ -1023,6 +1023,7 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
__field(unsigned char, request_pending)
__field(unsigned char, invalidate_pending)
__field(int, mode)
__field(unsigned int, refcount)
__field(unsigned int, waiters_cw)
__field(unsigned int, waiters_pr)
__field(unsigned int, waiters_ex)
@@ -1038,6 +1039,7 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
__entry->request_pending = lck->request_pending;
__entry->invalidate_pending = lck->invalidate_pending;
__entry->mode = lck->mode;
__entry->refcount = atomic_read(&lck->refcount);
__entry->waiters_pr = lck->waiters[SCOUTFS_LOCK_READ];
__entry->waiters_ex = lck->waiters[SCOUTFS_LOCK_WRITE];
__entry->waiters_cw = lck->waiters[SCOUTFS_LOCK_WRITE_ONLY];
@@ -1045,10 +1047,10 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
__entry->users_ex = lck->users[SCOUTFS_LOCK_WRITE];
__entry->users_cw = lck->users[SCOUTFS_LOCK_WRITE_ONLY];
),
TP_printk(SCSBF" start "SK_FMT" end "SK_FMT" mode %u reqpnd %u invpnd %u rfrgen %llu waiters: pr %u ex %u cw %u users: pr %u ex %u cw %u",
TP_printk(SCSBF" start "SK_FMT" end "SK_FMT" mode %u reqpnd %u invpnd %u rfrgen %llu rfcnt %d waiters: pr %u ex %u cw %u users: pr %u ex %u cw %u",
SCSB_TRACE_ARGS, sk_trace_args(start), sk_trace_args(end),
__entry->mode, __entry->request_pending,
__entry->invalidate_pending, __entry->refresh_gen,
__entry->invalidate_pending, __entry->refresh_gen, __entry->refcount,
__entry->waiters_pr, __entry->waiters_ex, __entry->waiters_cw,
__entry->users_pr, __entry->users_ex, __entry->users_cw)
);

View File

@@ -689,23 +689,18 @@ static int alloc_move_refill_zoned(struct super_block *sb, struct scoutfs_alloc_
return scoutfs_alloc_move(sb, &server->alloc, &server->wri, dst, src,
min(target - le64_to_cpu(dst->total_len),
le64_to_cpu(src->total_len)),
exclusive, vacant, zone_blocks);
}
static inline int alloc_move_refill(struct super_block *sb, struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 lo, u64 target)
{
return alloc_move_refill_zoned(sb, dst, src, lo, target, NULL, NULL, 0);
exclusive, vacant, zone_blocks, 0);
}
static int alloc_move_empty(struct super_block *sb,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src)
struct scoutfs_alloc_root *src, u64 meta_reserved)
{
DECLARE_SERVER_INFO(sb, server);
return scoutfs_alloc_move(sb, &server->alloc, &server->wri,
dst, src, le64_to_cpu(src->total_len), NULL, NULL, 0);
dst, src, le64_to_cpu(src->total_len), NULL, NULL, 0,
meta_reserved);
}
/*
@@ -1344,7 +1339,7 @@ static int server_get_log_trees(struct super_block *sb,
goto unlock;
}
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed);
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 0);
if (ret < 0) {
err_str = "emptying committed data_freed";
goto unlock;
@@ -1544,9 +1539,11 @@ static int server_get_roots(struct super_block *sb,
* read and we finalize the tree so that it will be merged. We reclaim
* all the allocator items.
*
* The caller holds the commit rwsem which means we do all this work in
* one server commit. We'll need to keep the total amount of blocks in
* trees in check.
* The caller holds the commit rwsem which means we have to do our work
in one commit.  The allocator btrees can be very large and very
* fragmented. We return -EINPROGRESS if we couldn't fully reclaim the
* allocators in one commit. The caller should apply the current
* commit and call again in a new commit.
*
* By the time we're evicting a client they've either synced their data
* or have been forcefully removed. The free blocks in the allocator
@@ -1606,9 +1603,9 @@ static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
}
/*
* All of these can return errors after having modified the
* allocator trees. We have to try and update the roots in the
* log item.
* All of these can return errors, perhaps indicating successful
* partial progress, after having modified the allocator trees.
* We always have to update the roots in the log item.
*/
mutex_lock(&server->alloc_mutex);
ret = (err_str = "splice meta_freed to other_freed",
@@ -1618,18 +1615,21 @@ static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
scoutfs_alloc_splice_list(sb, &server->alloc, &server->wri, server->other_freed,
&lt.meta_avail)) ?:
(err_str = "empty data_avail",
alloc_move_empty(sb, &super->data_alloc, &lt.data_avail)) ?:
alloc_move_empty(sb, &super->data_alloc, &lt.data_avail, 100)) ?:
(err_str = "empty data_freed",
alloc_move_empty(sb, &super->data_alloc, &lt.data_freed));
alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 100));
mutex_unlock(&server->alloc_mutex);
/* the transaction is no longer open */
lt.commit_trans_seq = lt.get_trans_seq;
/* only finalize, allowing merging, once the allocators are fully freed */
if (ret == 0) {
/* the transaction is no longer open */
lt.commit_trans_seq = lt.get_trans_seq;
/* the mount is no longer writing to the zones */
zero_data_alloc_zone_bits(&lt);
le64_add_cpu(&lt.flags, SCOUTFS_LOG_TREES_FINALIZED);
lt.finalize_seq = cpu_to_le64(scoutfs_server_next_seq(sb));
/* the mount is no longer writing to the zones */
zero_data_alloc_zone_bits(&lt);
le64_add_cpu(&lt.flags, SCOUTFS_LOG_TREES_FINALIZED);
lt.finalize_seq = cpu_to_le64(scoutfs_server_next_seq(sb));
}
err = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &lt, sizeof(lt));
@@ -1638,7 +1638,7 @@ static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
out:
mutex_unlock(&server->logs_mutex);
if (ret < 0)
if (ret < 0 && ret != -EINPROGRESS)
scoutfs_err(sb, "server error %d reclaiming log trees for rid %016llx: %s",
ret, rid, err_str);
@@ -3536,26 +3536,37 @@ struct farewell_request {
* Reclaim all the resources for a mount which has gone away. It's sent
* us a farewell promising to leave or we actively fenced it.
*
* It's safe to call this multiple times for a given rid. Each
* individual action knows to recognize that it's already been performed
* and return success.
* This can be called multiple times across different servers for
* different reclaim attempts. The existence of the mounted_client item
* triggers reclaim and must be deleted last. Each step knows that it
* can be called multiple times and safely recognizes that its work
* might have already been done.
*
* Some steps (reclaiming large fragmented allocators) may need multiple
* calls to complete. They return -EINPROGRESS which tells us to apply
* the server commit and retry.
*/
static int reclaim_rid(struct super_block *sb, u64 rid)
{
COMMIT_HOLD(hold);
int ret;
int err;
server_hold_commit(sb, &hold);
do {
server_hold_commit(sb, &hold);
/* delete mounted client last, recovery looks for it */
ret = scoutfs_lock_server_farewell(sb, rid) ?:
reclaim_open_log_tree(sb, rid) ?:
cancel_srch_compact(sb, rid) ?:
cancel_log_merge(sb, rid) ?:
scoutfs_omap_remove_rid(sb, rid) ?:
delete_mounted_client(sb, rid);
err = scoutfs_lock_server_farewell(sb, rid) ?:
reclaim_open_log_tree(sb, rid) ?:
cancel_srch_compact(sb, rid) ?:
cancel_log_merge(sb, rid) ?:
scoutfs_omap_remove_rid(sb, rid) ?:
delete_mounted_client(sb, rid);
return server_apply_commit(sb, &hold, ret);
ret = server_apply_commit(sb, &hold, err == -EINPROGRESS ? 0 : err);
} while (err == -EINPROGRESS && ret == 0);
return ret;
}
/*

View File

@@ -79,18 +79,10 @@ static void init_xattr_key(struct scoutfs_key *key, u64 ino, u32 name_hash,
#define SCOUTFS_XATTR_PREFIX "scoutfs."
#define SCOUTFS_XATTR_PREFIX_LEN (sizeof(SCOUTFS_XATTR_PREFIX) - 1)
static int unknown_prefix(const char *name, bool *is_user)
static int unknown_prefix(const char *name)
{
if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
if (is_user)
*is_user = true;
return false;
}
if (is_user)
*is_user = false;
return strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) &&
return strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) &&
strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) &&
strncmp(name, XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN)&&
strncmp(name, SCOUTFS_XATTR_PREFIX, SCOUTFS_XATTR_PREFIX_LEN);
@@ -100,13 +92,11 @@ static int unknown_prefix(const char *name, bool *is_user)
#define HIDE_TAG "hide."
#define SRCH_TAG "srch."
#define TOTL_TAG "totl."
#define WORM_TAG "worm."
#define TAG_LEN (sizeof(HIDE_TAG) - 1)
int scoutfs_xattr_parse_tags(struct super_block *sb, const char *name,
unsigned int name_len, struct scoutfs_xattr_prefix_tags *tgs)
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
bool found;
memset(tgs, 0, sizeof(struct scoutfs_xattr_prefix_tags));
@@ -127,9 +117,6 @@ int scoutfs_xattr_parse_tags(struct super_block *sb, const char *name,
} else if (!strncmp(name, TOTL_TAG, TAG_LEN)) {
if (++tgs->totl == 0)
return -EINVAL;
} else if (!strncmp(name, WORM_TAG, TAG_LEN)) {
if (++tgs->worm == 0 || sbi->fmt_vers < 2)
return -EINVAL;
} else {
/* only reason to use scoutfs. is tags */
if (!found)
@@ -481,7 +468,7 @@ ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
size_t name_len;
int ret;
if (unknown_prefix(name, NULL))
if (unknown_prefix(name))
return -EOPNOTSUPP;
name_len = strlen(name);
@@ -537,22 +524,6 @@ void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name)
key->skxt_c = cpu_to_le64(name[2]);
}
/*
* Currently only support enabling level1 worm by setting a non-zero
* expiration.
*/
static int parse_worm_name(const char *name)
{
static const char worm_name[] = "level1_expire";
char *last_dot;
last_dot = strrchr(name, '.');
if (!last_dot)
return -EINVAL;
return strcmp(worm_name, last_dot + 1) == 0 ? 0 : -EINVAL;
}
/*
* Parse a u64 in any base after null terminating it while forbidding
* the leading + and trailing \n that kstrtoull allows.
@@ -570,66 +541,6 @@ static int parse_totl_u64(const char *s, int len, u64 *res)
return kstrtoull(str, 0, res) != 0 ? -EINVAL : 0;
}
static int parse_worm_u32(const char *s, int len, u32 *res)
{
u64 tmp;
int ret;
ret = parse_totl_u64(s, len, &tmp);
if (ret == 0 && tmp > U32_MAX) {
tmp = 0;
ret = -EINVAL;
}
*res = tmp;
return ret;
}
static int parse_worm_timespec(struct timespec *ts, const char *name, int name_len)
{
char *delim;
u64 sec;
u32 nsec;
int sec_len;
int nsec_len;
int ret;
memset(ts, 0, sizeof(*ts));
if (name_len < 3)
return -EINVAL;
delim = strnchr(name, name_len, '.');
if (!delim)
return -EINVAL;
if (delim == name || delim == (name + name_len - 1))
return -EINVAL;
sec_len = delim - name;
nsec_len = name_len - (sec_len + 1);
/* Check to make sure only one '.' */
if (strnchr(delim + 1, nsec_len, '.'))
return -EINVAL;
ret = parse_totl_u64(name, sec_len, &sec);
if (ret < 0)
return ret;
ret = parse_worm_u32(delim + 1, nsec_len, &nsec);
if (ret < 0)
return ret;
if (sec > S64_MAX || nsec >= NSEC_PER_SEC || (sec == 0 && nsec == 0))
return -EINVAL;
ts->tv_sec = sec;
ts->tv_nsec = nsec;
return 0;
}
/*
* non-destructive relatively quick parse of the last 3 dotted u64s that
* make up the name of the xattr total. -EINVAL is returned if there
@@ -714,25 +625,23 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
{
struct inode *inode = dentry->d_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct scoutfs_xattr_totl_val tval = {0,};
struct scoutfs_lock *totl_lock = NULL;
struct super_block *sb = inode->i_sb;
struct scoutfs_xattr_prefix_tags tgs;
const u64 ino = scoutfs_ino(inode);
struct timespec worm_ts = {0,};
struct scoutfs_xattr_totl_val tval = {0,};
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_lock *totl_lock = NULL;
size_t name_len = strlen(name);
struct scoutfs_key totl_key;
struct scoutfs_key key;
bool undo_srch = false;
bool undo_totl = false;
bool is_user = false;
LIST_HEAD(ind_locks);
u8 found_parts;
unsigned int xat_bytes_totl;
unsigned int xat_bytes;
unsigned int val_len;
u8 found_parts;
u64 ind_seq;
u64 total;
u64 hash = 0;
@@ -752,20 +661,16 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
(flags & ~(XATTR_CREATE | XATTR_REPLACE)))
return -EINVAL;
if (unknown_prefix(name, &is_user))
if (unknown_prefix(name))
return -EOPNOTSUPP;
if (scoutfs_xattr_parse_tags(sb, name, name_len, &tgs) != 0)
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
if ((tgs.hide | tgs.srch | tgs.totl | tgs.worm) && !capable(CAP_SYS_ADMIN))
if ((tgs.hide | tgs.srch | tgs.totl) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (tgs.worm && !tgs.hide)
return -EINVAL;
if ((tgs.totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0)) ||
(tgs.worm && ((ret = parse_worm_name(name)) != 0)))
if (tgs.totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
return ret;
/* allocate enough to always read an existing xattr's totl */
@@ -786,11 +691,6 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
down_write(&si->xattr_rwsem);
if (!S_ISREG(inode->i_mode) && tgs.worm) {
ret = -EINVAL;
goto unlock;
}
/* find an existing xattr to delete, including possible totl value */
ret = get_next_xattr(inode, &key, xat, xat_bytes_totl, name, name_len, 0, 0, lck);
if (ret < 0 && ret != -ENOENT)
@@ -811,12 +711,6 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
goto unlock;
}
/* current worm only protects user. xattrs and expiration xattr itself */
if (scoutfs_inode_worm_denied(inode) && (is_user || tgs.worm)) {
ret = -EACCES;
goto unlock;
}
/* s64 count delta if we create or delete */
if (tgs.totl)
tval.count = cpu_to_le64((u64)!!(value) - (u64)!!(ret != -ENOENT));
@@ -852,22 +746,9 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
ret = parse_totl_u64(value, size, &total);
if (ret < 0)
goto unlock;
le64_add_cpu(&tval.total, total);
}
if (tgs.worm) {
/* can't set multiple times with different names */
scoutfs_inode_get_worm(inode, &worm_ts);
if (worm_ts.tv_sec || worm_ts.tv_nsec) {
ret = -EINVAL;
goto unlock;
}
ret = parse_worm_timespec(&worm_ts, value, size);
if (ret < 0)
goto unlock;
}
le64_add_cpu(&tval.total, total);
}
if (tgs.totl) {
@@ -919,9 +800,6 @@ retry:
if (ret < 0)
goto release;
if (tgs.worm)
scoutfs_inode_set_worm(inode, worm_ts.tv_sec, worm_ts.tv_nsec);
/* XXX do these want i_mutex or anything? */
inode_inc_iversion(inode);
inode->i_ctime = CURRENT_TIME;
@@ -1011,7 +889,7 @@ ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
break;
}
is_hidden = scoutfs_xattr_parse_tags(sb, xat->name, xat->name_len,
is_hidden = scoutfs_xattr_parse_tags(xat->name, xat->name_len,
&tgs) == 0 && tgs.hide;
if (show_hidden == is_hidden) {
@@ -1107,7 +985,8 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
}
if (key.skx_part != 0 ||
scoutfs_xattr_parse_tags(sb, xat->name, xat->name_len, &tgs) != 0)
scoutfs_xattr_parse_tags(xat->name, xat->name_len,
&tgs) != 0)
memset(&tgs, 0, sizeof(tgs));
if (tgs.totl) {

View File

@@ -17,12 +17,11 @@ int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
srch:1,
totl:1,
worm:1;
totl:1;
};
int scoutfs_xattr_parse_tags(struct super_block *sb, const char *name,
unsigned int name_len, struct scoutfs_xattr_prefix_tags *tgs);
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs);
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name);
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len);

View File

@@ -112,7 +112,6 @@ used during the test.
| T\_EX\_META\_DEV | scratch meta bdev | -f | /dev/vdd |
| T\_EX\_DATA\_DEV | scratch data bdev | -e | /dev/vdc |
| T\_M[0-9] | mount paths | mounted per run | /mnt/test.[0-9]/ |
| T\_MODULE | built kernel module | created per run | ../kmod/src/..ko |
| T\_NR\_MOUNTS | number of mounts | -n | 3 |
| T\_O[0-9] | mount options | created per run | -o server\_addr= |
| T\_QUORUM | quorum count | -q | 2 |

View File

@@ -35,7 +35,7 @@ t_fail()
t_quiet()
{
echo "# $*" >> "$T_TMPDIR/quiet.log"
"$@" >> "$T_TMPDIR/quiet.log" 2>&1 || \
"$@" > "$T_TMPDIR/quiet.log" 2>&1 || \
t_fail "quiet command failed"
}

View File

@@ -82,9 +82,5 @@ t_filter_dmesg()
re="$re|scoutfs .* error .*reading quorum block.*to update event.*"
re="$re|scoutfs .* error.*server failed to bind to.*"
# format vers back/compat tries bad mounts
re="$re|scoutfs .* error.*outside of supported version.*"
re="$re|scoutfs .* error.*could not get .*super.*"
egrep -v "($re)"
}

View File

@@ -29,12 +29,13 @@ t_mount_rid()
}
#
# Output the "f.$fsid.r.$rid" identifier string for the given path
# in a mounted scoutfs volume.
# Output the "f.$fsid.r.$rid" identifier string for the given mount
# number, 0 is used by default if none is specified.
#
t_ident_from_mnt()
t_ident()
{
local mnt="$1"
local nr="${1:-0}"
local mnt="$(eval echo \$T_M$nr)"
local fsid
local rid
@@ -44,38 +45,6 @@ t_ident_from_mnt()
echo "f.${fsid:0:6}.r.${rid:0:6}"
}
#
# Output the "f.$fsid.r.$rid" identifier string for the given mount
# number, 0 is used by default if none is specified.
#
t_ident()
{
local nr="${1:-0}"
local mnt="$(eval echo \$T_M$nr)"
t_ident_from_mnt "$mnt"
}
#
# Output the sysfs path for a path in a mounted fs.
#
t_sysfs_path_from_ident()
{
local ident="$1"
echo "/sys/fs/scoutfs/$ident"
}
#
# Output the sysfs path for a path in a mounted fs.
#
t_sysfs_path_from_mnt()
{
local mnt="$1"
t_sysfs_path_from_ident $(t_ident_from_mnt $mnt)
}
#
# Output the mount's sysfs path, defaulting to mount 0 if none is
# specified.
@@ -84,7 +53,7 @@ t_sysfs_path()
{
local nr="$1"
t_sysfs_path_from_ident $(t_ident $nr)
echo "/sys/fs/scoutfs/$(t_ident $nr)"
}
#

View File

@@ -1,4 +0,0 @@
== ensuring utils and module for old versions
== unmounting test fs and removing test module
== testing combinations of old and new format versions
== restoring test module and mount

View File

@@ -1,53 +0,0 @@
== worm xattr creation without .hide. fails
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
== worm xattr creation on dir fails
setfattr: /mnt/test/test/worm-xattr-unit: Invalid argument
== worm xattr creation
== get correct parsed timespec value
== hidden scoutfs xattrs before expire
== user xattr creation before expire fails
setfattr: /mnt/test.0/test/worm-xattr-unit/file-1: Permission denied
== worm xattr deletion before expire fails
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Permission denied
== worm xattr update before expire fails
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Permission denied
== other worm xattr create before expire fails
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Permission denied
== file deletion before expire fails
rm: cannot remove /mnt/test/test/worm-xattr-unit/file-1: Permission denied
== file rename before expire fails
mv: cannot move /mnt/test/test/worm-xattr-unit/file-1 to /mnt/test/test/worm-xattr-unit/file-2: Permission denied
== file write before expire fails
date: write error: Permission denied
== file truncate before expire fails
truncate: failed to truncate /mnt/test/test/worm-xattr-unit/file-1 at 0 bytes: Permission denied
== file inode update before expire fails
touch: setting times of /mnt/test/test/worm-xattr-unit/file-1: Permission denied
== wait until expiration
== file write after expire
== file rename after expire
== other worm xattr create after expire fails
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
== xattr deletion after expire
== invalid all zero expire value
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
== invalid non integer expire value
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
== invalid only dots dots expire value
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
== invalid mixed dots secs expire value
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
== invalid (u32)(u64) nsecs expire value
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
== invalid negative (signed)(u64) secs expire value
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
setfattr: /mnt/test/test/worm-xattr-unit/file-1: Invalid argument
== cleanup

View File

@@ -342,8 +342,7 @@ if [ -n "$T_INSMOD" ]; then
msg "removing and reinserting scoutfs module"
test -e /sys/module/scoutfs && cmd rmmod scoutfs
cmd modprobe libcrc32c
T_MODULE="$T_KMOD/src/scoutfs.ko"
cmd insmod "$T_MODULE"
cmd insmod "$T_KMOD/src/scoutfs.ko"
fi
nr_globs=${#T_TRACE_GLOB[@]}

View File

@@ -9,11 +9,9 @@ fallocate.sh
setattr_more.sh
offline-extent-waiting.sh
move-blocks.sh
format-version-forward-back.sh
enospc.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
worm-xattr-unit.sh
totl-xattr-tag.sh
lock-refleak.sh
lock-shrink-consistency.sh

View File

@@ -1,179 +0,0 @@
#
# Test our basic ability to work with different format versions.
#
# The current code being tested has a range of supported format
# versions. For each of the older supported format versions we have a
# git hash of the commit before the next greater version was introduced.
We build the scoutfs utility and kernel module from the last in-tree
commit that had each lesser supported version as its max supported
version. We use those binaries to test forward and back
# compat as new and old code works with a persistent volume with a given
# format version.
#
mount_has_format_version()
{
local mnt="$1"
local vers="$2"
local sysfs_fmt_vers="$(t_sysfs_path_from_mnt $mnt)/format_version"
test "$(cat $sysfs_fmt_vers)" == "$vers"
}
SCR="/mnt/scoutfs.scratch"
MIN=$(modinfo $T_MODULE | awk '($1 == "scoutfs_format_version_min:"){print $2}')
MAX=$(modinfo $T_MODULE | awk '($1 == "scoutfs_format_version_max:"){print $2}')
echo "min: $MIN max: $MAX" > "$T_TMP.log"
test "$MIN" -gt 0 -a "$MAX" -gt 0 -a "$MIN" -le "$MAX" || \
t_fail "parsed bad versions, min: $MIN max: $MAX"
test "$MIN" == "$MAX" && \
t_skip "only one supported format version: $MIN"
# prepare dir and wipe any weird old partial state
builds="$T_RESULTS/format_version_builds"
mkdir -p "$builds"
echo "== ensuring utils and module for old versions"
declare -A commits
commits[1]=730a84a
for vers in $(seq $MIN $((MAX - 1))); do
dir="$builds/$vers"
platform=$(uname -rp)
buildmark="$dir/buildmark"
commit="${commits[$vers]}"
test -n "$commit" || \
t_fail "no commit for vers $vers"
# have our files for this version
test "$(cat $buildmark 2>&1)" == "$platform" && \
continue
# build as one big sequence of commands that can return failure
(
set -o pipefail
rm -rf $dir &&
mkdir -p $dir/building &&
cd "$T_TESTS/.." &&
git archive --format=tar "$commit" | tar -C "$dir/building" -xf - &&
cd - &&
find $dir &&
make -C "$dir/building" &&
mv $dir/building/utils/src/scoutfs $dir &&
mv $dir/building/kmod/src/scoutfs.ko $dir &&
rm -rf $dir/building &&
echo "$platform" > $buildmark &&
find $dir &&
cat $buildmark
) >> "$T_TMP.log" 2>&1 || t_fail "version $vers build failed"
done
echo "== unmounting test fs and removing test module"
t_quiet t_umount_all
t_quiet rmmod scoutfs
echo "== testing combinations of old and new format versions"
mkdir -p "$SCR"
for vers in $(seq $MIN $((MAX - 1))); do
old_scoutfs="$builds/$vers/scoutfs"
old_module="$builds/$vers/scoutfs.ko"
echo "mkfs $vers" >> "$T_TMP.log"
t_quiet $old_scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV" \
|| t_fail "mkfs $vers failed"
echo "mount $vers with $vers" >> "$T_TMP.log"
t_quiet insmod $old_module
t_quiet mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"
t_quiet mount_has_format_version "$SCR" "$vers"
echo "creating files in $vers" >> "$T_TMP.log"
t_quiet touch "$SCR/file-"{1,2,3}
stat "$SCR"/file-* > "$T_TMP.stat" || \
t_fail "stat in $vers failed"
echo "remounting $vers fs with $MAX" >> "$T_TMP.log"
t_quiet umount "$SCR"
rmmod scoutfs
insmod "$T_MODULE"
t_quiet mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"
t_quiet mount_has_format_version "$SCR" "$vers"
echo "verifying stat in $vers with $MAX" >> "$T_TMP.log"
diff -u "$T_TMP.stat" <(stat "$SCR"/file-*)
echo "keep/update/del existing, create new in $vers" >> "$T_TMP.log"
t_quiet touch "$SCR/file-2"
t_quiet rm -f "$SCR/file-3"
t_quiet touch "$SCR/file-4"
stat "$SCR"/file-* > "$T_TMP.stat" || \
t_fail "stat in $vers failed"
echo "remounting $vers fs with $vers" >> "$T_TMP.log"
t_quiet umount "$SCR"
rmmod scoutfs
insmod "$old_module"
t_quiet mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"
t_quiet mount_has_format_version "$SCR" "$vers"
echo "verifying stat in $vers with $vers" >> "$T_TMP.log"
diff -u "$T_TMP.stat" <(stat "$SCR"/file-*)
echo "changing format vers to $MAX" >> "$T_TMP.log"
t_quiet umount "$SCR"
rmmod scoutfs
t_quiet scoutfs change-format-version -F -V $MAX $T_EX_META_DEV "$T_EX_DATA_DEV"
echo "mount fs $MAX with old $vers should fail" >> "$T_TMP.log"
insmod "$old_module"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR" >> "$T_TMP.log" 2>&1
if [ "$?" == "0" ]; then
umount "$SCR"
t_fail "old code ver $vers able to mount new ver $MAX"
fi
echo "remounting $MAX fs with $MAX" >> "$T_TMP.log"
rmmod scoutfs
insmod "$T_MODULE"
t_quiet mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"
t_quiet mount_has_format_version "$SCR" "$MAX"
echo "verifying stat in $MAX with $MAX" >> "$T_TMP.log"
diff -u "$T_TMP.stat" <(stat "$SCR"/file-*)
echo "keep/update/del existing, create new in $MAX" >> "$T_TMP.log"
t_quiet touch "$SCR/file-2"
t_quiet rm -f "$SCR/file-4"
t_quiet touch "$SCR/file-5"
stat "$SCR"/file-* > "$T_TMP.stat" || \
t_fail "stat in $MAX failed"
echo "remounting $MAX fs with $MAX again" >> "$T_TMP.log"
t_quiet umount "$SCR"
t_quiet mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"
t_quiet mount_has_format_version "$SCR" "$MAX"
echo "verifying stat in $MAX with $MAX again" >> "$T_TMP.log"
diff -u "$T_TMP.stat" <(stat "$SCR"/file-*)
echo "done with old vers $vers" >> "$T_TMP.log"
t_quiet umount "$SCR"
rmmod scoutfs
done
echo "== restoring test module and mount"
insmod "$T_MODULE"
t_mount_all
t_pass

View File

@@ -1,97 +0,0 @@
t_require_commands touch rm setfattr
touch "$T_D0/file-1"
SECS=$(date '+%s')
NSECS=$(date '+%N')
DELAY=10
EXP=$((SECS + DELAY))
echo "== worm xattr creation without .hide. fails"
setfattr -n scoutfs.worm.level1_expire -v $EXP.$NSECS "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== worm xattr creation on dir fails"
setfattr -n scoutfs.hide.worm.level1_expire -v $EXP.$NSECS "$T_D0" 2>&1 | t_filter_fs
echo "== worm xattr creation"
setfattr -n scoutfs.hide.worm.level1_expire -v $EXP.$NSECS "$T_D0/file-1"
echo "== get correct parsed timespec value"
diff -u --ignore-all-space <(echo "$EXP.$NSECS") <(getfattr --absolute-names --only-values -n scoutfs.hide.worm.level1_expire -m - "$T_D0/file-1")
echo "== hidden scoutfs xattrs before expire"
setfattr -n scoutfs.hide.srch.worm_test -v val "$T_D0/file-1"
echo "== user xattr creation before expire fails"
setfattr -n user.worm_test -v val "$T_D0/file-1"
echo "== worm xattr deletion before expire fails"
setfattr -x scoutfs.hide.worm.level1_expire "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== worm xattr update before expire fails"
setfattr -n scoutfs.hide.worm.level1_expire -v $SECS.$NSECS "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== other worm xattr create before expire fails"
setfattr -n scoutfs.hide.worm.other.level1_expire -v 123.456 "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== file deletion before expire fails"
rm -f "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== file rename before expire fails"
mv $T_D0/file-1 $T_D0/file-2 2>&1 | t_filter_fs
echo "== file write before expire fails"
date >> $T_D0/file-1
echo "== file truncate before expire fails"
truncate -s 0 $T_D0/file-1 2>&1 | t_filter_fs
echo "== file inode update before expire fails"
touch $T_D0/file-1 2>&1 | t_filter_fs
echo "== wait until expiration"
sleep $((DELAY + 1))
echo "== file write after expire"
date >> $T_D0/file-1
echo "== file rename after expire"
mv $T_D0/file-1 $T_D0/file-2
mv $T_D0/file-2 $T_D0/file-1
echo "== other worm xattr create after expire fails"
setfattr -n scoutfs.hide.worm.other.level1_expire -v 123.456 "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== xattr deletion after expire"
setfattr -x scoutfs.hide.worm.level1_expire "$T_D0/file-1"
echo "== invalid all zero expire value"
setfattr -n scoutfs.hide.worm.level1_expire -v 0.0 "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== invalid non integer expire value"
setfattr -n scoutfs.hide.worm.level1_expire -v a.a "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== invalid only dots dots expire value"
setfattr -n scoutfs.hide.worm.level1_expire -v . "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.hide.worm.level1_expire -v .. "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.hide.worm.level1_expire -v ... "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== invalid mixed dots secs expire value"
setfattr -n scoutfs.hide.worm.level1_expire -v 11 "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.hide.worm.level1_expire -v .11 "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.hide.worm.level1_expire -v 11. "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.hide.worm.level1_expire -v .11. "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.hide.worm.level1_expire -v .1.1. "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== invalid (u32)(u64) nsecs expire value"
setfattr -n scoutfs.hide.worm.level1_expire -v 1.1000000000 "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.hide.worm.level1_expire -v 1.4294967296 "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.hide.worm.level1_expire -v 1.18446744073709551615 "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== invalid negative (signed)(u64) secs expire value"
setfattr -n scoutfs.hide.worm.level1_expire -v 9223372036854775808.1 "$T_D0/file-1" 2>&1 | t_filter_fs
setfattr -n scoutfs.hide.worm.level1_expire -v 18446744073709551615.1 "$T_D0/file-1" 2>&1 | t_filter_fs
echo "== cleanup"
rm -f "$T_D0/file-1"
t_pass

View File

@@ -212,38 +212,6 @@ name, total value, and a count of contributing attributes can be read
with the
.IB READ_XATTR_TOTALS
ioctl.
.TP
.B .worm.
Attributes with the .worm. tag are used to control WORM (write once,
read many) access restrictions, typically used to comply with operational
regulations. The only currently supported mechanism is controlled by a
single .worm. attribute whose name ends in ".level1_expire". Additional
levels with different enforcement policies may be added and would be
controlled by different attributes.
.sp
The level1 policy is enabled by setting an attribute on a file that
contains the .worm. tag and whose name ends in ".level1_expire". The
attribute name must also include the .hide. tag. As with other scoutfs
tagged attributes, the name may include any other string between the
tags and the final required suffix. Only one level1 expiration
attribute may be set at a time.
.sp
The value of the attribute contains a string representing the kernel
time at which the policy enforcement will expire. The time is formatted
as "seconds.nanoseconds" in GMT. The attribute must be set with the
CAP_SYS_ADMIN capability, perhaps via the root user. Setting an
expiration value of "0.0" will always fail. The policy can only be set
on regular files.
.sp
The file is protected once the expiration attribute is set and cannot
be modified until the expiration time has passed. The file data, its
inode fields, directory entries that link to its inode, untrusted
"user." attributes, and non-hidden scoutfs attributes are all protected
and modification attempts will fail with permission denied.
Trusted system-level attributes like "security." and hidden scoutfs
attributes may still be modified to support ongoing archiving
operations. The worm attribute itself cannot be modified once it is
set and can only be removed once the expiration time has passed.
.RE
.SH FORMAT VERSION
@@ -324,20 +292,6 @@ The version that a mount is using is shown in the
file in the mount's sysfs directory, typically
.I /sys/fs/scoutfs/f.FSID.r.RID/
.RE
.sp
The defined format versions are:
.RS
.TP
.sp
.B 1
Initial format version.
.TP
.B 2
Added level1 WORM file protection for regular files. The
".level1_expire" worm tagged extended attribute was added and the inode
item size was increased to store the parsed expiration time from the
extended attribute.
.RE
.SH CORRUPTION DETECTION
A

View File

@@ -119,16 +119,6 @@ static int do_change_fmt_vers(struct change_fmt_vers_args *args)
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) > args->fmt_vers ||
le64_to_cpu(data_super->fmt_vers) > args->fmt_vers) {
ret = -EPERM;
printf("Downgrade of Meta Format Version: %llu and Data Format Version: %llu to Format Version: %llu is not allowed\n",
le64_to_cpu(meta_super->fmt_vers),
le64_to_cpu(data_super->fmt_vers),
args->fmt_vers);
goto out;
}
if (le64_to_cpu(meta_super->fmt_vers) != args->fmt_vers) {
meta_super->fmt_vers = cpu_to_le64(args->fmt_vers);

View File

@@ -262,10 +262,7 @@ static int do_mkfs(struct mkfs_args *args)
inode.ctime.nsec = inode.atime.nsec;
inode.mtime.sec = inode.atime.sec;
inode.mtime.nsec = inode.atime.nsec;
if (args->fmt_vers == 1)
btree_append_item(bt, &key, &inode, SCOUTFS_INODE_FMT_V1_BYTES);
else
btree_append_item(bt, &key, &inode, SCOUTFS_INODE_FMT_V2_BYTES);
btree_append_item(bt, &key, &inode, sizeof(inode));
ret = write_block(meta_fd, SCOUTFS_BLOCK_MAGIC_BTREE, fsid, 1, blkno,
SCOUTFS_BLOCK_LG_SHIFT, &bt->hdr);

View File

@@ -69,12 +69,6 @@ static void print_inode(struct scoutfs_key *key, void *val, int val_len)
le32_to_cpu(inode->ctime.nsec),
le64_to_cpu(inode->mtime.sec),
le32_to_cpu(inode->mtime.nsec));
if (val_len == SCOUTFS_INODE_FMT_V2_BYTES) {
printf(" worm_level1_expire %llu.%08u\n",
le64_to_cpu(inode->worm_level1_expire.sec),
le32_to_cpu(inode->worm_level1_expire.nsec));
}
}
static void print_orphan(struct scoutfs_key *key, void *val, int val_len)