Compare commits

..

7 Commits

Author SHA1 Message Date
Zach Brown
a1f917d5bb Update tracing with cluster lock changes
Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:06:06 -07:00
Zach Brown
68aa9e5ce6 Directly queue cluster lock work
We had a little helper that scheduled work after testing the list, which
required holding the spinlock.  This was a little too crude: it forced
scoutfs_unlock() to acquire the invalidate work list spinlock even though
it already held the cluster lock spinlock and could already see that
invalidate requests were pending and that the invalidation work should be
queued.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
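
The change described above might look roughly like the sketch below; the
struct and helper names are invented for illustration and are not the actual
scoutfs code.  The point is that once the per-lock spinlock already shows
pending invalidation requests, the work can be queued directly without taking
a second lock.

/*
 * Illustrative sketch only: decide to queue invalidation work while the
 * per-lock spinlock is already held, instead of calling a helper that
 * re-acquired a separate work list spinlock.
 */
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/list.h>

struct example_lock {
	spinlock_t lock;
	struct list_head inv_req_list;	/* pending invalidation requests */
};

static void example_unlock(struct example_lock *lck,
			   struct workqueue_struct *wq,
			   struct work_struct *inv_work)
{
	bool queue;

	spin_lock(&lck->lock);
	/* ... drop the caller's use of the lock ... */
	queue = !list_empty(&lck->inv_req_list);
	spin_unlock(&lck->lock);

	/* no second spinlock is needed just to decide to queue the work */
	if (queue)
		queue_work(wq, inv_work);
}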
Zach Brown
28079610ed Two cluster lock LRU lists with less precision
Currently we maintain a single LRU list of cluster locks and every time
we acquire a cluster lock we move it to the head of the LRU, creating
significant contention acquiring the spinlock that protects the LRU
list.

This moves to two LRU lists: one of cluster locks that are ready to be
reclaimed and one for locks that are in active use.  We record which
list each lock is on and only move a lock to the active list if it's
currently on the reclaim list.  We track the imbalance between the two
lists so that they stay roughly the same size.

This removes contention maintaining a precise LRU amongst a set of
active cluster locks.  It doesn't address contention creating or
removing locks, which are already very expensive operations.

It also loses strict ordering by access time.  Reclaim has to make it
through the oldest half of the locks before getting to the newer half,
and there is no guaranteed ordering amongst the newest half.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
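
As a rough illustration of the two-list scheme, a sketch with made-up names
follows; it is not the scoutfs implementation, just the shape of the idea:
accesses only take the LRU spinlock when a lock needs to move off the reclaim
list, and an imbalance counter keeps the two lists about the same length.

/*
 * Hypothetical two-list LRU sketch.  Hot locks sit on the active list
 * and are left alone on access; only locks found on the reclaim list
 * are moved, so the common case touches no shared list at all.
 */
#include <linux/spinlock.h>
#include <linux/list.h>

enum ex_lru_list { EX_LRU_NONE, EX_LRU_ACTIVE, EX_LRU_RECLAIM };

struct ex_lru_lock {
	struct list_head lru_head;
	enum ex_lru_list on_list;
};

struct ex_lru {
	spinlock_t lock;
	struct list_head active;
	struct list_head reclaim;
	long imbalance;		/* roughly active count minus reclaim count */
};

static void ex_lru_accessed(struct ex_lru *lru, struct ex_lru_lock *lck)
{
	/* common case: already active, no shared state is touched */
	if (lck->on_list != EX_LRU_RECLAIM)
		return;

	spin_lock(&lru->lock);
	/* recheck under the lock before moving off the reclaim list */
	if (lck->on_list == EX_LRU_RECLAIM) {
		list_move(&lck->lru_head, &lru->active);
		lck->on_list = EX_LRU_ACTIVE;
		lru->imbalance++;
	}
	spin_unlock(&lru->lock);
}

static void ex_lru_rebalance(struct ex_lru *lru)
{
	struct ex_lru_lock *lck;

	/* age the oldest active locks onto the reclaim list */
	spin_lock(&lru->lock);
	while (lru->imbalance > 0 && !list_empty(&lru->active)) {
		lck = list_first_entry(&lru->active, struct ex_lru_lock, lru_head);
		list_move_tail(&lck->lru_head, &lru->reclaim);
		lck->on_list = EX_LRU_RECLAIM;
		lru->imbalance--;
	}
	spin_unlock(&lru->lock);
}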
Zach Brown
05cd4ebe92 Lookup cluster locks with an RCU hash table
The previous work to introduce the per-lock spinlock and the refcount
now makes it easy to switch from an rbtree protected by a spinlock to a
hash table protected by RCU read critical sections.  The cluster lock
lookup fast path now only dirties fields in the scoutfs_lock struct
itself.

We have to be a little careful when inserting so that users can't get
references to locks that made it into the hash table but which then had
to be removed because they were found to overlap.

Freeing is straightforward; we only have to make sure to free the locks
after an RCU grace period so that read sections can continue to
reference the memory and see the refcount that indicates that the lock
is being freed.

A few remaining places were using the lookup rbtree to walk all locks;
they're converted to use the range tree that we're keeping around to
resolve overlapping ranges, which is also handy for iteration that
isn't performance sensitive.

The LRU still creates contention on the linfo spinlock on every lookup;
fixing that is next.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
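
A minimal sketch of the lookup pattern follows, with invented names rather
than the actual scoutfs structures: the rhashtable is searched inside an RCU
read section, and a reference is taken only if the refcount hasn't already
dropped to zero, so readers can't resurrect a lock that is on its way to
being freed.

/*
 * Hypothetical RCU hash table lookup with refcount protection; not the
 * scoutfs code, just the general pattern the commit describes.
 */
#include <linux/rhashtable.h>
#include <linux/rcupdate.h>
#include <linux/atomic.h>
#include <linux/slab.h>

struct ex_lock {
	u64 key;
	struct rhash_head ht_head;
	atomic_t refcount;		/* 0 means the lock is being freed */
	struct rcu_head rcu_head;
};

static const struct rhashtable_params ex_ht_params = {
	.key_len = sizeof(u64),
	.key_offset = offsetof(struct ex_lock, key),
	.head_offset = offsetof(struct ex_lock, ht_head),
};

static struct ex_lock *ex_lookup(struct rhashtable *ht, u64 key)
{
	struct ex_lock *lck;

	rcu_read_lock();
	lck = rhashtable_lookup(ht, &key, ex_ht_params);
	/* never hand out a lock whose refcount already fell to zero */
	if (lck && !atomic_inc_not_zero(&lck->refcount))
		lck = NULL;
	rcu_read_unlock();

	return lck;
}

static void ex_free(struct ex_lock *lck)
{
	/* the grace period lets read sections keep dereferencing the memory */
	kfree_rcu(lck, rcu_head);
}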
Zach Brown
0041b599ab Protect cluster locks with refcounts
The first pass at managing the cluster lock state machine used a simple
global spinlock.  It's time to break it up.

This adds refcounting to the cluster lock struct.  Rather than managing
global data structures and individual lock state all under a global
spinlock, we use per-structure locks, a lock spinlock, and a lock
refcount.

Active users of the cluster lock hold a reference.  This primarily lets
unlock avoid checking the global structures until the refcount says that
it's time to remove the lock from them.  The careful use of the refcount
to avoid locks that are being freed during lookup also paves the way for
using mostly read-only RCU lookup structures soon.

The global LRU is still modified on every lock use; that will also be
removed in future work.

The linfo spinlock is now only used for the LRU and lookup structures.
Other uses are removed, which requires more careful use of the
finer-grained locks that initially just mirrored the use of the linfo
spinlock to keep those introductory patches safe.

The move from a single global lock to more fine grained locks creates
nesting that needs to be managed.  Shrinking and recovery in particular
need to be careful as they transition from spinlocks used to find
cluster locks to getting the cluster lock spinlock.

The presence of freeing locks in the lookup indexes means that some
callers need to retry if they hit freeing locks.  We have to add this
protection to recovery, which iterates over locks by their key value,
but it wouldn't have made sense to build that around the lookup rbtree
as it's going away.  It makes sense to use the range tree that we're
going to keep using to make sure we don't accidentally introduce locks
whose ranges overlap (which would lead to item cache inconsistency).

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
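
The get/put pattern sketched below, with hypothetical names, shows the basic
idea the commit describes: active users hold a reference, only the final put
takes the linfo spinlock to remove the lock from the global structures, and
the memory is released after an RCU grace period.

/*
 * Refcounted put sketch (illustrative names, not the scoutfs code).
 */
#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/rbtree.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct ex_linfo {
	spinlock_t lock;		/* only covers LRU and lookup structures */
	struct rb_root range_tree;
};

struct ex_lock {
	atomic_t refcount;
	spinlock_t lock;		/* per-lock state */
	struct rb_node range_node;
	struct rcu_head rcu_head;
};

static void ex_get(struct ex_lock *lck)
{
	atomic_inc(&lck->refcount);
}

static void ex_put(struct ex_linfo *linfo, struct ex_lock *lck)
{
	if (!atomic_dec_and_test(&lck->refcount))
		return;

	/* the final reference removes the lock from the global structures */
	spin_lock(&linfo->lock);
	rb_erase(&lck->range_node, &linfo->range_tree);
	spin_unlock(&linfo->lock);

	kfree_rcu(lck, rcu_head);
}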
Zach Brown
f30e545870 Add per-cluster lock spinlock
Add a spinlock to the scoutfs_lock cluster lock that protects its
state.  This replaces the use of the mount-wide lock_info spinlock.

In practice, for now, this largely just mirrors the continued use of the
lock_info spinlock because it's still needed to protect the mount-wide
structures that are used during put_lock.  That'll be fixed in future
patches as the use of global structures is reduced.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
Zach Brown
cf046f26c4 Cluster lock invalidation and shrink spinlocks
Cluster lock invalidation and shrinking have very similar workflows.
They rarely modify the state of locks and put them on lists for work to
process.  Today the lists and state modification are protected by the
mount-wide lock_info spinlock, which we want to break up.

This creates a little work_list struct that has a work_queue, list, and
lock.  Invalidation and shrinking use this to track locks that are
being processed and protect the list with the new spinlock in the
struct.

This leaves some awkward nesting with the lock_info spinlock because it
still protects individual lock state.  That will be fixed as we move
towards individual lock refcounting and spinlocks.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 14:03:21 -07:00
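
The work_list struct the commit mentions might be shaped roughly like the
sketch below (hypothetical names): a workqueue, a list of locks to process,
and a spinlock that covers only that list, so invalidation and shrinking no
longer need the mount-wide lock_info spinlock to queue their work.

/*
 * Sketch of a small per-purpose work list; names are illustrative.
 */
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/list.h>

struct ex_work_list {
	struct workqueue_struct *workq;
	struct work_struct work;
	struct list_head list;		/* locks waiting to be processed */
	spinlock_t lock;		/* protects only the list above */
};

static void ex_work_list_add(struct ex_work_list *wl, struct list_head *entry)
{
	spin_lock(&wl->lock);
	if (list_empty(entry))
		list_add_tail(entry, &wl->list);
	spin_unlock(&wl->lock);

	queue_work(wl->workq, &wl->work);
}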
18 changed files with 554 additions and 720 deletions

View File

@@ -1,64 +1,6 @@
Versity ScoutFS Release Notes
=============================
---
v1.7
\
*Aug 26, 2022*
* **Fixed possible persistent errors moving freed data extents**
\
Fixed a case where the server could hit persistent errors trying to
move a client's freed extents in one commit. The client had to free
a large number of extents that occupied distant positions in the
global free extent btree. Very large fragmented files could cause
this. The server now moves the freed extents in multiple commits and
can always ensure forward progress.
* **Fixed possible persistent errors from freed duplicate extents**
\
Background orphan deletion wasn't properly synchronizing with
foreground tasks deleting very large files. If a deletion took long
enough then background deletion could also attempt to delete inode items
while the deletion was making progress. This could create duplicate
deletions of data extent items which causes the server to abort when
it later discovers the duplicate extents as it merges free lists.
---
v1.6
\
*Jul 7, 2022*
* **Fix memory leaks in rare corner cases**
\
Analysis tools found a few corner cases that leaked small structures,
generally around error handling or startup and shutdown.
* **Add --skip-likely-huge scoutfs print command option**
\
Add an option to scoutfs print to reduce the size of the output
so that it can be used to see system-wide metadata without being
overwhelmed by file-level details.
---
v1.5
\
*Jun 21, 2022*
* **Fix persistent error during server startup**
\
Fixed a case where the server would always hit a persistent error on
startup, preventing the system from mounting. This required a rare
but valid state across the clients.
* **Fix a client hang that would lead to fencing**
\
The client module's use of in-kernel networking was missing annotation
that could lead to communication hanging. The server would fence the
client when it stopped communicating. This could be identified by the
server fencing a client after it disconnected with no attempt by the
client to reconnect.
---
v1.4
\

View File

@@ -892,11 +892,12 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
* -ENOENT is returned if we run out of extents in the source tree
* before moving the total.
*
* If meta_budget is non-zero then -EINPROGRESS can be returned if the
* the caller's budget is consumed in the allocator during this call
* (though not necessarily by us, we don't have per-thread tracking of
* allocator consumption :/). The call can still have made progress and
* caller is expected commit the dirty trees and examining the resulting
* If meta_reserved is non-zero then -EINPROGRESS can be returned if the
* current meta allocator's avail blocks or room for freed blocks would
* have fallen under the reserved amount. The could have been
* successfully dirtied in this case but the number of blocks moved is
* not returned. The caller is expected to deal with the partial
* progress by committing the dirty trees and examining the resulting
* modified trees to see if they need to continue moving extents.
*
* The caller can specify that extents in the source tree should first
@@ -913,7 +914,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget)
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved)
{
struct alloc_ext_args args = {
.alloc = alloc,
@@ -921,8 +922,6 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
};
struct scoutfs_extent found;
struct scoutfs_extent ext;
u32 avail_start = 0;
u32 freed_start = 0;
u64 moved = 0;
u64 count;
int ret = 0;
@@ -933,9 +932,6 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
vacant = NULL;
}
if (meta_budget != 0)
scoutfs_alloc_meta_remaining(alloc, &avail_start, &freed_start);
while (moved < total) {
count = total - moved;
@@ -968,10 +964,10 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
if (ret < 0)
break;
if (meta_budget != 0 &&
scoutfs_alloc_meta_low_since(alloc, avail_start, freed_start, meta_budget,
extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
if (meta_reserved != 0 &&
scoutfs_alloc_meta_low(sb, alloc, meta_reserved +
extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
ret = -EINPROGRESS;
break;
}
@@ -1355,27 +1351,6 @@ void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total,
} while (read_seqretry(&alloc->seqlock, seq));
}
/*
* Returns true if the caller's consumption of nr from either avail or
* freed would end up exceeding their budget relative to the starting
* remaining snapshot they took.
*/
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
u32 budget, u32 nr)
{
u32 avail_use;
u32 freed_use;
u32 avail;
u32 freed;
scoutfs_alloc_meta_remaining(alloc, &avail, &freed);
avail_use = avail_start - avail;
freed_use = freed_start - freed;
return ((avail_use + nr) > budget) || ((freed_use + nr) > budget);
}
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag)
{

View File

@@ -131,7 +131,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget);
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
@@ -159,8 +159,6 @@ int scoutfs_alloc_splice_list(struct super_block *sb,
bool scoutfs_alloc_meta_low(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 nr);
void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total, u32 *freed_space);
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
u32 budget, u32 nr);
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag);

View File

@@ -1685,7 +1685,6 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
struct scoutfs_lock *lock = NULL;
struct scoutfs_inode sinode;
struct scoutfs_key key;
bool clear_trying = false;
u64 group_nr;
int bit_nr;
int ret;
@@ -1705,7 +1704,6 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
ret = 0;
goto out;
}
clear_trying = true;
/* can't delete if it's cached in local or remote mounts */
if (scoutfs_omap_test(sb, ino) || test_bit_le(bit_nr, ldata->map.bits)) {
@@ -1732,7 +1730,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
ret = delete_inode_items(sb, ino, &sinode, lock, orph_lock);
out:
if (clear_trying)
if (ldata)
clear_bit(bit_nr, ldata->trying);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);

File diff suppressed because it is too large

View File

@@ -1,6 +1,8 @@
#ifndef _SCOUTFS_LOCK_H_
#define _SCOUTFS_LOCK_H_
#include <linux/rhashtable.h>
#include "key.h"
#include "tseq.h"
@@ -19,20 +21,24 @@ struct inode_deletion_lock_data;
*/
struct scoutfs_lock {
struct super_block *sb;
atomic_t refcount;
spinlock_t lock;
struct rcu_head rcu_head;
struct scoutfs_key start;
struct scoutfs_key end;
struct rb_node node;
struct rhash_head ht_head;
struct rb_node range_node;
u64 refresh_gen;
u64 write_seq;
u64 dirty_trans_seq;
struct list_head lru_head;
int lru_on_list;
wait_queue_head_t waitq;
unsigned long request_pending:1,
invalidate_pending:1;
struct list_head inv_head; /* entry in linfo's list of locks with invalidations */
struct list_head inv_list; /* list of lock's invalidation requests */
struct list_head inv_req_list; /* list of lock's invalidation requests */
struct list_head shrink_head;
spinlock_t cov_list_lock;

View File

@@ -355,7 +355,6 @@ static int submit_send(struct super_block *sb,
}
if (rid != 0) {
spin_unlock(&conn->lock);
kfree(msend);
return -ENOTCONN;
}
}
@@ -992,8 +991,6 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
if (ret < 0)
break;
acc_sock->sk->sk_allocation = GFP_NOFS;
/* inherit accepted request funcs from listening conn */
acc_conn = scoutfs_net_alloc_conn(sb, conn->notify_up,
conn->notify_down,
@@ -1056,8 +1053,6 @@ static void scoutfs_net_connect_worker(struct work_struct *work)
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
/* caller specified connect timeout */
tv.tv_sec = conn->connect_timeout_ms / MSEC_PER_SEC;
tv.tv_usec = (conn->connect_timeout_ms % MSEC_PER_SEC) * USEC_PER_MSEC;
@@ -1346,12 +1341,10 @@ scoutfs_net_alloc_conn(struct super_block *sb,
if (!conn)
return NULL;
if (info_size) {
conn->info = kzalloc(info_size, GFP_NOFS);
if (!conn->info) {
kfree(conn);
return NULL;
}
conn->info = kzalloc(info_size, GFP_NOFS);
if (!conn->info) {
kfree(conn);
return NULL;
}
conn->workq = alloc_workqueue("scoutfs_net_%s",
@@ -1457,8 +1450,6 @@ int scoutfs_net_bind(struct super_block *sb,
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
(char *)&optval, sizeof(optval));

View File

@@ -157,15 +157,6 @@ static int free_rid(struct omap_rid_list *list, struct omap_rid_entry *entry)
return nr;
}
static void free_rid_list(struct omap_rid_list *list)
{
struct omap_rid_entry *entry;
struct omap_rid_entry *tmp;
list_for_each_entry_safe(entry, tmp, &list->head, head)
free_rid(list, entry);
}
static int copy_rids(struct omap_rid_list *to, struct omap_rid_list *from, spinlock_t *from_lock)
{
struct omap_rid_entry *entry;
@@ -813,10 +804,6 @@ void scoutfs_omap_server_shutdown(struct super_block *sb)
llist_for_each_entry_safe(req, tmp, requests, llnode)
kfree(req);
spin_lock(&ominf->lock);
free_rid_list(&ominf->rids);
spin_unlock(&ominf->lock);
synchronize_rcu();
}
@@ -877,10 +864,6 @@ void scoutfs_omap_destroy(struct super_block *sb)
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
spin_lock(&ominf->lock);
free_rid_list(&ominf->rids);
spin_unlock(&ominf->lock);
rhashtable_destroy(&ominf->group_ht);
rhashtable_destroy(&ominf->req_ht);
kfree(ominf);

View File

@@ -1023,6 +1023,7 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
__field(unsigned char, request_pending)
__field(unsigned char, invalidate_pending)
__field(int, mode)
__field(unsigned int, refcount)
__field(unsigned int, waiters_cw)
__field(unsigned int, waiters_pr)
__field(unsigned int, waiters_ex)
@@ -1038,6 +1039,7 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
__entry->request_pending = lck->request_pending;
__entry->invalidate_pending = lck->invalidate_pending;
__entry->mode = lck->mode;
__entry->refcount = atomic_read(&lck->refcount);
__entry->waiters_pr = lck->waiters[SCOUTFS_LOCK_READ];
__entry->waiters_ex = lck->waiters[SCOUTFS_LOCK_WRITE];
__entry->waiters_cw = lck->waiters[SCOUTFS_LOCK_WRITE_ONLY];
@@ -1045,10 +1047,10 @@ DECLARE_EVENT_CLASS(scoutfs_lock_class,
__entry->users_ex = lck->users[SCOUTFS_LOCK_WRITE];
__entry->users_cw = lck->users[SCOUTFS_LOCK_WRITE_ONLY];
),
TP_printk(SCSBF" start "SK_FMT" end "SK_FMT" mode %u reqpnd %u invpnd %u rfrgen %llu waiters: pr %u ex %u cw %u users: pr %u ex %u cw %u",
TP_printk(SCSBF" start "SK_FMT" end "SK_FMT" mode %u reqpnd %u invpnd %u rfrgen %llu rfcnt %d waiters: pr %u ex %u cw %u users: pr %u ex %u cw %u",
SCSB_TRACE_ARGS, sk_trace_args(start), sk_trace_args(end),
__entry->mode, __entry->request_pending,
__entry->invalidate_pending, __entry->refresh_gen,
__entry->invalidate_pending, __entry->refresh_gen, __entry->refcount,
__entry->waiters_pr, __entry->waiters_ex, __entry->waiters_cw,
__entry->users_pr, __entry->users_ex, __entry->users_cw)
);

View File

@@ -694,13 +694,13 @@ static int alloc_move_refill_zoned(struct super_block *sb, struct scoutfs_alloc_
static int alloc_move_empty(struct super_block *sb,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 meta_budget)
struct scoutfs_alloc_root *src, u64 meta_reserved)
{
DECLARE_SERVER_INFO(sb, server);
return scoutfs_alloc_move(sb, &server->alloc, &server->wri,
dst, src, le64_to_cpu(src->total_len), NULL, NULL, 0,
meta_budget);
meta_reserved);
}
/*
@@ -1226,82 +1226,6 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
return ret;
}
/*
* The calling get_log_trees ran out of available blocks in its commit's
* metadata allocator while moving extents from the log tree's
* data_freed into the core data_avail. This finishes moving the
* extents in as many additional commits as it takes. The logs mutex
* is nested inside holding commits so we recheck the persistent item
* each time we commit to make sure it's still what we think. The
* caller is still going to send the item to the client so we update the
* caller's each time we make progress. This is a best-effort attempt
* to clean up and it's valid to leave extents in data_freed we don't
* return errors to the caller. The client will continue the work later
* in get_log_trees or as the rid is reclaimed.
*/
static void try_drain_data_freed(struct super_block *sb, struct scoutfs_log_trees *lt)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
const u64 rid = le64_to_cpu(lt->rid);
const u64 nr = le64_to_cpu(lt->nr);
struct scoutfs_log_trees drain;
struct scoutfs_key key;
COMMIT_HOLD(hold);
int ret = 0;
int err;
scoutfs_key_init_log_trees(&key, rid, nr);
while (lt->data_freed.total_len != 0) {
server_hold_commit(sb, &hold);
mutex_lock(&server->logs_mutex);
ret = find_log_trees_item(sb, &super->logs_root, false, rid, U64_MAX, &drain);
if (ret < 0)
break;
/* careful to only keep draining the caller's specific open trans */
if (drain.nr != lt->nr || drain.get_trans_seq != lt->get_trans_seq ||
drain.commit_trans_seq != lt->commit_trans_seq || drain.flags != lt->flags) {
ret = -ENOENT;
break;
}
ret = scoutfs_btree_dirty(sb, &server->alloc, &server->wri,
&super->logs_root, &key);
if (ret < 0)
break;
/* moving can modify and return errors, always update caller and item */
mutex_lock(&server->alloc_mutex);
ret = alloc_move_empty(sb, &super->data_alloc, &drain.data_freed,
COMMIT_HOLD_ALLOC_BUDGET / 2);
mutex_unlock(&server->alloc_mutex);
if (ret == -EINPROGRESS)
ret = 0;
*lt = drain;
err = scoutfs_btree_force(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &drain, sizeof(drain));
BUG_ON(err < 0); /* dirtying must guarantee success */
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, ret);
if (ret < 0) {
ret = 0; /* don't try to abort, ignoring ret */
break;
}
}
/* try to cleanly abort and write any partial dirty btree blocks, but ignore result */
if (ret < 0) {
mutex_unlock(&server->logs_mutex);
server_apply_commit(sb, &hold, 0);
}
}
/*
* Give the client roots to all the trees that they'll use to build
* their transaction.
@@ -1343,7 +1267,6 @@ static int server_get_log_trees(struct super_block *sb,
char *err_str = NULL;
u64 nr;
int ret;
int err;
if (arg_len != 0) {
ret = -EINVAL;
@@ -1387,27 +1310,16 @@ static int server_get_log_trees(struct super_block *sb,
goto unlock;
}
if (ret != -ENOENT) {
/* need to sync lt with respect to changes in other structures */
scoutfs_key_init_log_trees(&key, le64_to_cpu(lt.rid), le64_to_cpu(lt.nr));
ret = scoutfs_btree_dirty(sb, &server->alloc, &server->wri,
&super->logs_root, &key);
if (ret < 0) {
err_str = "dirtying lt btree key";
goto unlock;
}
}
/* drops and re-acquires the mutex and commit if it has to wait */
ret = finalize_and_start_log_merge(sb, &lt, rid, &hold);
if (ret < 0)
goto update;
goto unlock;
if (get_volopt_val(server, SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_NR, &data_zone_blocks)) {
ret = get_data_alloc_zone_bits(sb, rid, exclusive, vacant, data_zone_blocks);
if (ret < 0) {
err_str = "getting alloc zone bits";
goto update;
goto unlock;
}
} else {
data_zone_blocks = 0;
@@ -1424,15 +1336,13 @@ static int server_get_log_trees(struct super_block *sb,
&lt.meta_freed);
if (ret < 0) {
err_str = "splicing committed meta_freed";
goto update;
goto unlock;
}
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 100);
if (ret == -EINPROGRESS)
ret = 0;
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 0);
if (ret < 0) {
err_str = "emptying committed data_freed";
goto update;
goto unlock;
}
ret = scoutfs_alloc_fill_list(sb, &server->alloc, &server->wri,
@@ -1441,7 +1351,7 @@ static int server_get_log_trees(struct super_block *sb,
SCOUTFS_SERVER_META_FILL_TARGET);
if (ret < 0) {
err_str = "filling meta_avail";
goto update;
goto unlock;
}
if (le64_to_cpu(server->meta_avail->total_len) <= scoutfs_server_reserved_meta_blocks(sb))
@@ -1454,7 +1364,7 @@ static int server_get_log_trees(struct super_block *sb,
exclusive, vacant, data_zone_blocks);
if (ret < 0) {
err_str = "refilling data_avail";
goto update;
goto unlock;
}
if (le64_to_cpu(lt.data_avail.total_len) < SCOUTFS_SERVER_DATA_FILL_LO)
@@ -1474,7 +1384,7 @@ static int server_get_log_trees(struct super_block *sb,
if (ret < 0) {
zero_data_alloc_zone_bits(&lt);
err_str = "setting data_avail zone bits";
goto update;
goto unlock;
}
lt.data_alloc_zone_blocks = cpu_to_le64(data_zone_blocks);
@@ -1483,18 +1393,13 @@ static int server_get_log_trees(struct super_block *sb,
/* give the transaction a new seq (must have been ==) */
lt.get_trans_seq = cpu_to_le64(scoutfs_server_next_seq(sb));
update:
/* update client's log tree's item */
scoutfs_key_init_log_trees(&key, le64_to_cpu(lt.rid), le64_to_cpu(lt.nr));
err = scoutfs_btree_force(sb, &server->alloc, &server->wri,
scoutfs_key_init_log_trees(&key, le64_to_cpu(lt.rid),
le64_to_cpu(lt.nr));
ret = scoutfs_btree_force(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &lt, sizeof(lt));
BUG_ON(err < 0); /* can duplicate extents.. move dst in super, still in in lt src */
if (err < 0) {
if (ret == 0) {
ret = err;
err_str = "updating log trees";
}
}
if (ret < 0)
err_str = "updating log trees";
unlock:
if (unlock_alloc)
@@ -1507,10 +1412,6 @@ out:
scoutfs_err(sb, "error %d getting log trees for rid %016llx: %s",
ret, rid, err_str);
/* try to drain excessive data_freed with additional commits, if needed, ignoring err */
if (ret == 0)
try_drain_data_freed(sb, &lt);
return scoutfs_net_response(sb, conn, cmd, id, ret, &lt, sizeof(lt));
}

View File

@@ -496,7 +496,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
ret = assign_random_id(sbi);
if (ret < 0)
goto out;
return ret;
spin_lock_init(&sbi->next_ino_lock);
spin_lock_init(&sbi->data_wait_root.lock);
@@ -505,7 +505,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
/* parse options early for use during setup */
ret = scoutfs_options_early_setup(sb, data);
if (ret < 0)
goto out;
return ret;
scoutfs_options_read(sb, &opts);
ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SM_SIZE);

View File

@@ -10,8 +10,7 @@ BIN := src/createmany \
src/bulk_create_paths \
src/stage_tmpfile \
src/find_xattrs \
src/create_xattr_loop \
src/fragmented_data_extents
src/create_xattr_loop
DEPS := $(wildcard src/*.d)

View File

@@ -1,3 +0,0 @@
== creating fragmented extents
== unlink file with moved extents to free extents per block
== cleanup

View File

@@ -9,7 +9,6 @@ fallocate.sh
setattr_more.sh
offline-extent-waiting.sh
move-blocks.sh
large-fragmented-free.sh
enospc.sh
srch-basic-functionality.sh
simple-xattr-unit.sh

View File

@@ -1,113 +0,0 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
/*
* This creates fragmented data extents.
*
* A file is created that has alternating free and allocated extents.
* This also results in the global allocator having the matching
* fragmented free extent pattern. While that file is being created,
* occasionally an allocated extent is moved to another file. This
* results in a file that has fragmented extents at a given stride that
* can be deleted to create free data extents with a given stride.
*
* We don't have hole punching so to do this quickly we use a goofy
* combination of fallocate, truncate, and our move_blocks ioctl.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <linux/types.h>
#include <assert.h>
#include "ioctl.h"
#define BLOCK_SIZE 4096
int main(int argc, char **argv)
{
struct scoutfs_ioctl_move_blocks mb = {0,};
unsigned long long freed_extents;
unsigned long long move_stride;
unsigned long long i;
int alloc_fd;
int trunc_fd;
off_t off;
int ret;
if (argc != 5) {
printf("%s <freed_extents> <move_stride> <alloc_file> <trunc_file>\n", argv[0]);
return 1;
}
freed_extents = strtoull(argv[1], NULL, 0);
move_stride = strtoull(argv[2], NULL, 0);
alloc_fd = open(argv[3], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
if (alloc_fd == -1) {
fprintf(stderr, "error opening %s: %d (%s)\n", argv[3], errno, strerror(errno));
exit(1);
}
trunc_fd = open(argv[4], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
if (trunc_fd == -1) {
fprintf(stderr, "error opening %s: %d (%s)\n", argv[4], errno, strerror(errno));
exit(1);
}
for (i = 0, off = 0; i < freed_extents; i++, off += BLOCK_SIZE * 2) {
ret = fallocate(alloc_fd, 0, off, BLOCK_SIZE * 2);
if (ret < 0) {
fprintf(stderr, "fallocate at off %llu error: %d (%s)\n",
(unsigned long long)off, errno, strerror(errno));
exit(1);
}
ret = ftruncate(alloc_fd, off + BLOCK_SIZE);
if (ret < 0) {
fprintf(stderr, "truncate to off %llu error: %d (%s)\n",
(unsigned long long)off + BLOCK_SIZE, errno, strerror(errno));
exit(1);
}
if ((i % move_stride) == 0) {
mb.from_fd = alloc_fd;
mb.from_off = off;
mb.len = BLOCK_SIZE;
mb.to_off = i * BLOCK_SIZE;
ret = ioctl(trunc_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
if (ret < 0) {
fprintf(stderr, "move from off %llu error: %d (%s)\n",
(unsigned long long)off,
errno, strerror(errno));
}
}
}
if (alloc_fd > -1)
close(alloc_fd);
if (trunc_fd > -1)
close(trunc_fd);
return 0;
}

View File

@@ -1,22 +0,0 @@
#
# Make sure the server can handle a transaction with a data_freed whose
# blocks all hit different btree blocks in the main free list. It
# probably has to be merged in multiple commits.
#
t_require_commands fragmented_data_extents
EXTENTS_PER_BTREE_BLOCK=600
EXTENTS_PER_LIST_BLOCK=8192
FREED_EXTENTS=$((EXTENTS_PER_BTREE_BLOCK * EXTENTS_PER_LIST_BLOCK))
echo "== creating fragmented extents"
fragmented_data_extents $FREED_EXTENTS $EXTENTS_PER_BTREE_BLOCK "$T_D0/alloc" "$T_D0/move"
echo "== unlink file with moved extents to free extents per block"
rm -f "$T_D0/move"
echo "== cleanup"
rm -f "$T_D0/alloc"
t_pass

View File

@@ -597,7 +597,7 @@ format.
.PD
.TP
.BI "print {-S|--skip-likely-huge} META-DEVICE"
.BI "print META-DEVICE"
.sp
Prints out all of the metadata in the file system. This makes no effort
to ensure that the structures are consistent as they're traversed and
@@ -607,20 +607,6 @@ output.
.PD 0
.TP
.sp
.B "-S, --skip-likely-huge"
Skip printing structures that are likely to be very large. The
structures that are skipped tend to be global and whose size tends to be
related to the size of the volume. Examples of skipped structures include
the global fs items, srch files, and metadata and data
allocators. Similar structures that are not skipped are related to the
number of mounts and are maintained at a relatively reasonable size.
These include per-mount log trees, srch files, allocators, and the
metadata allocators used by server commits.
.sp
Skipping the larger structures limits the print output to a relatively
constant size rather than being a large multiple of the used metadata
space of the volume making the output much more useful for inspection.
.TP
.B "META-DEVICE"
The path to the metadata device for the filesystem whose metadata will be
printed. Since this command reads via the host's buffer cache, it may not

View File

@@ -8,7 +8,6 @@
#include <errno.h>
#include <string.h>
#include <stdarg.h>
#include <stdbool.h>
#include <ctype.h>
#include <uuid/uuid.h>
#include <sys/socket.h>
@@ -990,10 +989,9 @@ static void print_super_block(struct scoutfs_super_block *super, u64 blkno)
struct print_args {
char *meta_device;
bool skip_likely_huge;
};
static int print_volume(int fd, struct print_args *args)
static int print_volume(int fd)
{
struct scoutfs_super_block *super = NULL;
struct print_recursion_args pa;
@@ -1043,26 +1041,23 @@ static int print_volume(int fd, struct print_args *args)
ret = err;
}
if (!args->skip_likely_huge) {
for (i = 0; i < array_size(super->meta_alloc); i++) {
snprintf(str, sizeof(str), "meta_alloc[%u]", i);
err = print_btree(fd, super, str, &super->meta_alloc[i].root,
print_alloc_item, NULL);
if (err && !ret)
ret = err;
}
err = print_btree(fd, super, "data_alloc", &super->data_alloc.root,
for (i = 0; i < array_size(super->meta_alloc); i++) {
snprintf(str, sizeof(str), "meta_alloc[%u]", i);
err = print_btree(fd, super, str, &super->meta_alloc[i].root,
print_alloc_item, NULL);
if (err && !ret)
ret = err;
}
err = print_btree(fd, super, "data_alloc", &super->data_alloc.root,
print_alloc_item, NULL);
if (err && !ret)
ret = err;
err = print_btree(fd, super, "srch_root", &super->srch_root,
print_srch_root_item, NULL);
if (err && !ret)
ret = err;
err = print_btree(fd, super, "logs_root", &super->logs_root,
print_log_trees_item, NULL);
if (err && !ret)
@@ -1070,23 +1065,19 @@ static int print_volume(int fd, struct print_args *args)
pa.super = super;
pa.fd = fd;
if (!args->skip_likely_huge) {
err = print_btree_leaf_items(fd, super, &super->srch_root.ref,
print_srch_root_files, &pa);
if (err && !ret)
ret = err;
}
err = print_btree_leaf_items(fd, super, &super->srch_root.ref,
print_srch_root_files, &pa);
if (err && !ret)
ret = err;
err = print_btree_leaf_items(fd, super, &super->logs_root.ref,
print_log_trees_roots, &pa);
if (err && !ret)
ret = err;
if (!args->skip_likely_huge) {
err = print_btree(fd, super, "fs_root", &super->fs_root,
print_fs_item, NULL);
if (err && !ret)
ret = err;
}
err = print_btree(fd, super, "fs_root", &super->fs_root,
print_fs_item, NULL);
if (err && !ret)
ret = err;
out:
free(super);
@@ -1107,7 +1098,7 @@ static int do_print(struct print_args *args)
return ret;
}
ret = print_volume(fd, args);
ret = print_volume(fd);
close(fd);
return ret;
};
@@ -1117,9 +1108,6 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
struct print_args *args = state->input;
switch (key) {
case 'S':
args->skip_likely_huge = true;
break;
case ARGP_KEY_ARG:
if (!args->meta_device)
args->meta_device = strdup_or_error(state, arg);
@@ -1137,13 +1125,8 @@ static int parse_opt(int key, char *arg, struct argp_state *state)
return 0;
}
static struct argp_option options[] = {
{ "skip-likely-huge", 'S', NULL, 0, "Skip large structures to minimize output size"},
{ NULL }
};
static struct argp argp = {
options,
NULL,
parse_opt,
"META-DEV",
"Print metadata structures"