Compare commits

...

15 Commits

Author SHA1 Message Date
Zach Brown
1abe97351d v1.5 Release
Finish the release notes for the 1.5 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-21 09:46:16 -07:00
Zach Brown
f757e29915 Merge pull request #92 from versity/zab/server_error_assertions
Protect get_log_trees corruption with assertion
2022-06-17 15:29:58 -07:00
Zach Brown
31e474c5fa Protect get_log_trees corruption with assertion
Like a lot of places in the server, get_log_trees() doesn't have the
tools it needs to safely unwind partial changes in the face of an error.

In the worst case, it can have moved extents from the mount's log_trees
item into the server's main data allocator.  The dirty data allocator
reference is in the super block so it can be written later.  The dirty
log_trees reference is on stack, though, so it will be thrown away on
error.  This ends up duplicating extents in the persistent structures
because they're written in the new dirty allocator but still remain in
the unwritten source log_trees allocator.

This change makes it harder for that to happen.  It dirties the
log_trees item and always tries to update it so that the dirty blocks are
consistent if they're later written out.  If we do get an error updating
the item we throw an assertion.  It's not great, but it matches other
similar circumstances in other parts of the server.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-17 14:22:59 -07:00
Zach Brown
dcf8202d7c Merge pull request #91 from versity/zab/tcp_sk_alloc_nofs
Set sk_allocation on TCP sockets
2022-06-15 09:16:59 -07:00
Zach Brown
ae55fa3153 Set sk_allocation on TCP sockets
We were setting sk_allocation on the quorum UDP sockets to prevent
entering reclaim while using sockets but we missed setting it on the
regular messaging TCP sockets.  This could create deadlocks where the
sending socket could enter scoutfs reclaim and wait for server messages
while holding the socket lock, preventing the receive thread from
receiving messages while it blocked on the socket lock.

The fix is to prevent entering the FS to reclaim during socket
allocations.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-14 08:21:19 -07:00
Zach Brown
7f9f21317c Merge pull request #90 from versity/zab/multiple_alloc_move_commits
Reclaim log_trees alloc roots in multiple commits
2022-06-08 13:23:01 -07:00
Zach Brown
0d4bf83da3 Reclaim log_trees alloc roots in multiple commits
Client log_trees allocator btrees can build up quite a number of
extents.  In the right circumstances, moving fragmented extents can
require dirtying a large number of paths to leaf blocks in the core
allocator btrees.  It might not be possible to dirty all the blocks
necessary to move all the extents in one commit.

This reworks the extent motion so that it can be performed in multiple
commits if the meta allocator for the commit runs out while it is moving
extents.  It's a minimal fix with as little disruption to the ordering
of commits and locking as possible.  It simply bubbles up an error when
the allocators run out and retries functions that can already be retried
in other circumstances.

Signed-off-by: Zach Brown <zab@versity.com>
2022-06-08 11:53:53 -07:00
Zach Brown
0a6b1fb304 Merge pull request #88 from versity/zab/v1_4_release
v1.4 Release
2022-05-06 11:23:45 -07:00
Zach Brown
fb7e43dd23 v1.4 Release
Finish the release notes for the 1.4 release.

Signed-off-by: Zach Brown <zab@versity.com>
2022-05-06 09:57:27 -07:00
Zach Brown
45d90a5ae4 Merge pull request #86 from versity/zab/increase_server_commit_block_budget
Increase server commit dirty block budget
2022-05-06 09:47:47 -07:00
Zach Brown
48f1305a8a Increase server commit dirty block budget
We're seeing allocator motion during get_log_trees dirty quite a lot of
blocks, which makes sense.  Let's continue to up the budget.  If we
still need significantly larger budgets we'll want to look into capping
the dirty block use of the allocator extent movers which will mean
changing callers to support partial progress.

Signed-off-by: Zach Brown <zab@versity.com>
2022-05-05 12:11:14 -07:00
Zach Brown
cd4d6502b8 Merge pull request #87 from versity/zab/lock_invalidation_recovery
Zab/lock invalidation recovery
2022-04-28 09:01:16 -07:00
Zach Brown
dff366e1a4 Add lock invalidation and recovery test
Add a test which tries to have lock recovery processed during lock
invalidation on clients.

Signed-off-by: Zach Brown <zab@versity.com>
2022-04-27 12:22:18 -07:00
Zach Brown
ca526e2bc0 Lock recovery uses old mode while invalidating
When a new server starts up it rebuilds its view of all the granted
locks with lock recovery messages.  Clients give the server their
granted lock modes which the server then uses to process all the resent
lock requests from clients.

The lock invalidation work in the client is responsible for
transitioning an old granted mode to a new invalidated mode from an
unsolicited message from the server.  It has to process any client state
that'd be incompatible with the new mode (write dirty data, drop
caches).  While it is doing this work, as an implementation short cut,
it sets the granted lock mode to the new mode so that users that are
compatible with the new invalidated mode can use the lock while it's
being invalidated.  Picture readers reading data while a write lock is
invalidating and writing dirty data.

A problem arises when a lock recover request is processed during lock
invalidation.  The client lock recover request handler sends a response
with the current granted mode.  The server takes this to mean that the
invalidation is done but the client invalidation worker might still be
writing data, dropping caches, etc.  The server will allow the state
machine to advance which can send grants to pending client requests
which believed that the invalidation was done.

All of this can lead to a grant response handler in the client tripping
the assertion that there can not be cached items that were incompatible
with the old mode in a grant from the server.  Invalidation might still
be invalidating caches.  Hitting this bug is very rare and requires a
new server starting up while a client has both a request outstanding and
an invalidation being processed when the lock recover request arrives.

The fix is to record the old mode during invalidation and send that in
lock recover responses.  This can lead the lock server to resend
invalidation requests to the client.  The client already safely handles
duplicate invalidation requests from other failover cases.

Signed-off-by: Zach Brown <zab@versity.com>
2022-04-27 12:20:56 -07:00
Zach Brown
e423d42106 Merge pull request #85 from versity/zab/v1_3_release
v1.3 Release
2022-04-07 12:21:42 -07:00
11 changed files with 222 additions and 68 deletions

View File

@@ -1,6 +1,38 @@
Versity ScoutFS Release Notes
=============================
---
v1.5
\
*Jun 21, 2022*
* **Fix persistent error during server startup**
\
Fixed a case where the server would always hit a persistent error on
startup, preventing the system from mounting. This required a rare
but valid state across the clients.
* **Fix a client hang that would lead to fencing**
\
The client module's use of in-kernel networking was missing an
annotation, which could lead to communication hanging. The server would fence the
client when it stopped communicating. This could be identified by the
server fencing a client after it disconnected with no attempt by the
client to reconnect.
---
v1.4
\
*May 6, 2022*
* **Fix possible client crash during server failover**
\
Fixed a narrow window during server failover and lock recovery that
could cause a client mount to believe that it had an inconsistent item
cache and panic. This required very specific lock state and messaging
patterns between multiple mounts and multiple servers which made it
unlikely to occur in the field.
---
v1.3
\

View File

@@ -84,6 +84,21 @@ static u64 smallest_order_length(u64 len)
return 1ULL << (free_extent_order(len) * 3);
}
/*
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
*/
static u32 extent_mod_blocks(u32 height)
{
return ((1 + height) * 2) * 3;
}
/*
* Free extents don't have flags and are stored in two indexes sorted by
* block location and by length order, largest first. The location key
@@ -877,6 +892,14 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
* -ENOENT is returned if we run out of extents in the source tree
* before moving the total.
*
* If meta_reserved is non-zero then -EINPROGRESS can be returned if the
* current meta allocator's avail blocks or room for freed blocks would
have fallen under the reserved amount. The trees could have been
* successfully dirtied in this case but the number of blocks moved is
* not returned. The caller is expected to deal with the partial
progress by committing the dirty trees and examining the resulting
* modified trees to see if they need to continue moving extents.
*
* The caller can specify that extents in the source tree should first
* be found based on their zone bitmaps. We'll first try to find
* extents in the exclusive zones, then vacant zones, and then we'll
@@ -891,7 +914,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks)
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved)
{
struct alloc_ext_args args = {
.alloc = alloc,
@@ -941,6 +964,14 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
if (ret < 0)
break;
if (meta_reserved != 0 &&
scoutfs_alloc_meta_low(sb, alloc, meta_reserved +
extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
ret = -EINPROGRESS;
break;
}
/* searching set start/len, finish initializing alloced extent */
ext.map = found.map ? ext.start - found.start + found.map : 0;
ext.flags = found.flags;
@@ -1065,15 +1096,6 @@ out:
* than completely exhausting the avail list or overflowing the freed
* list.
*
* An extent modification dirties three distinct leaves of an allocator
* btree as it adds and removes the blkno and size sorted items for the
* old and new lengths of the extent. Dirtying the paths to these
* leaves can grow the tree and grow/shrink neighbours at each level.
* We over-estimate the number of blocks allocated and freed (the paths
* share a root, growth doesn't free) to err on the simpler and safer
* side. The overhead is minimal given the relatively large list blocks
* and relatively short allocator trees.
*
* The caller tells us how many extents they're about to modify and how
* many other additional blocks they may cow manually. And finally, the
* caller could be the first to dirty the avail and freed blocks in the
@@ -1082,7 +1104,7 @@ out:
static bool list_has_blocks(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_alloc_root *root, u32 extents, u32 addl_blocks)
{
u32 tree_blocks = (((1 + root->root.height) * 2) * 3) * extents;
u32 tree_blocks = extent_mod_blocks(root->root.height) * extents;
u32 most = 1 + tree_blocks + addl_blocks;
if (le32_to_cpu(alloc->avail.first_nr) < most) {

View File

@@ -131,7 +131,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks);
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);

View File

@@ -289,6 +289,7 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
lock->sb = sb;
init_waitqueue_head(&lock->waitq);
lock->mode = SCOUTFS_LOCK_NULL;
lock->invalidating_mode = SCOUTFS_LOCK_NULL;
atomic64_set(&lock->forest_bloom_nr, 0);
@@ -666,7 +667,9 @@ struct inv_req {
*
* Before we start invalidating the lock we set the lock to the new
* mode, preventing further incompatible users of the old mode from
* using the lock while we're invalidating.
* using the lock while we're invalidating. We record the previously
* granted mode so that we can send lock recover responses with the old
* granted mode during invalidation.
*/
static void lock_invalidate_worker(struct work_struct *work)
{
@@ -691,7 +694,8 @@ static void lock_invalidate_worker(struct work_struct *work)
if (!lock_counts_match(nl->new_mode, lock->users))
continue;
/* set the new mode, no incompatible users during inval */
/* set the new mode, no incompatible users during inval, recov needs old */
lock->invalidating_mode = lock->mode;
lock->mode = nl->new_mode;
/* move everyone that's ready to our private list */
@@ -734,6 +738,8 @@ static void lock_invalidate_worker(struct work_struct *work)
list_del(&ireq->head);
kfree(ireq);
lock->invalidating_mode = SCOUTFS_LOCK_NULL;
if (list_empty(&lock->inv_list)) {
/* finish if another request didn't arrive */
list_del_init(&lock->inv_head);
@@ -824,6 +830,7 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_net_lock_recover *nlr;
enum scoutfs_lock_mode mode;
struct scoutfs_lock *lock;
struct scoutfs_lock *next;
struct rb_node *node;
@@ -844,10 +851,15 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
for (i = 0; lock && i < SCOUTFS_NET_LOCK_MAX_RECOVER_NR; i++) {
if (lock->invalidating_mode != SCOUTFS_LOCK_NULL)
mode = lock->invalidating_mode;
else
mode = lock->mode;
nlr->locks[i].key = lock->start;
nlr->locks[i].write_seq = cpu_to_le64(lock->write_seq);
nlr->locks[i].old_mode = lock->mode;
nlr->locks[i].new_mode = lock->mode;
nlr->locks[i].old_mode = mode;
nlr->locks[i].new_mode = mode;
node = rb_next(&lock->node);
if (node)

View File

@@ -39,6 +39,7 @@ struct scoutfs_lock {
struct list_head cov_list;
enum scoutfs_lock_mode mode;
enum scoutfs_lock_mode invalidating_mode;
unsigned int waiters[SCOUTFS_LOCK_NR_MODES];
unsigned int users[SCOUTFS_LOCK_NR_MODES];

View File

@@ -991,6 +991,8 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
if (ret < 0)
break;
acc_sock->sk->sk_allocation = GFP_NOFS;
/* inherit accepted request funcs from listening conn */
acc_conn = scoutfs_net_alloc_conn(sb, conn->notify_up,
conn->notify_down,
@@ -1053,6 +1055,8 @@ static void scoutfs_net_connect_worker(struct work_struct *work)
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
/* caller specified connect timeout */
tv.tv_sec = conn->connect_timeout_ms / MSEC_PER_SEC;
tv.tv_usec = (conn->connect_timeout_ms % MSEC_PER_SEC) * USEC_PER_MSEC;
@@ -1450,6 +1454,8 @@ int scoutfs_net_bind(struct super_block *sb,
if (ret)
goto out;
sock->sk->sk_allocation = GFP_NOFS;
optval = 1;
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
(char *)&optval, sizeof(optval));

View File

@@ -246,10 +246,16 @@ static void server_down(struct server_info *server)
/*
* The per-holder allocation block use budget balances batching
* efficiency and concurrency. We can easily have a few holders per
* client trying to make concurrent updates in a commit.
* efficiency and concurrency. The larger this gets, the fewer
* concurrent server operations can be performed in one commit. Commits
* are immediately written after being dirtied so this really only
* limits immediate concurrency under load, not batching over time as
* one might expect if commits were long lived.
*
* The upper bound is determined by the server commit hold path that can
* dirty the most blocks.
*/
#define COMMIT_HOLD_ALLOC_BUDGET 250
#define COMMIT_HOLD_ALLOC_BUDGET 500
struct commit_hold {
struct list_head entry;
@@ -683,23 +689,18 @@ static int alloc_move_refill_zoned(struct super_block *sb, struct scoutfs_alloc_
return scoutfs_alloc_move(sb, &server->alloc, &server->wri, dst, src,
min(target - le64_to_cpu(dst->total_len),
le64_to_cpu(src->total_len)),
exclusive, vacant, zone_blocks);
}
static inline int alloc_move_refill(struct super_block *sb, struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 lo, u64 target)
{
return alloc_move_refill_zoned(sb, dst, src, lo, target, NULL, NULL, 0);
exclusive, vacant, zone_blocks, 0);
}
static int alloc_move_empty(struct super_block *sb,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src)
struct scoutfs_alloc_root *src, u64 meta_reserved)
{
DECLARE_SERVER_INFO(sb, server);
return scoutfs_alloc_move(sb, &server->alloc, &server->wri,
dst, src, le64_to_cpu(src->total_len), NULL, NULL, 0);
dst, src, le64_to_cpu(src->total_len), NULL, NULL, 0,
meta_reserved);
}
/*
@@ -1266,6 +1267,7 @@ static int server_get_log_trees(struct super_block *sb,
char *err_str = NULL;
u64 nr;
int ret;
int err;
if (arg_len != 0) {
ret = -EINVAL;
@@ -1309,16 +1311,27 @@ static int server_get_log_trees(struct super_block *sb,
goto unlock;
}
if (ret != -ENOENT) {
/* need to sync lt with respect to changes in other structures */
scoutfs_key_init_log_trees(&key, le64_to_cpu(lt.rid), le64_to_cpu(lt.nr));
ret = scoutfs_btree_dirty(sb, &server->alloc, &server->wri,
&super->logs_root, &key);
if (ret < 0) {
err_str = "dirtying lt btree key";
goto unlock;
}
}
/* drops and re-acquires the mutex and commit if it has to wait */
ret = finalize_and_start_log_merge(sb, &lt, rid, &hold);
if (ret < 0)
goto unlock;
goto update;
if (get_volopt_val(server, SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_NR, &data_zone_blocks)) {
ret = get_data_alloc_zone_bits(sb, rid, exclusive, vacant, data_zone_blocks);
if (ret < 0) {
err_str = "getting alloc zone bits";
goto unlock;
goto update;
}
} else {
data_zone_blocks = 0;
@@ -1335,13 +1348,13 @@ static int server_get_log_trees(struct super_block *sb,
&lt.meta_freed);
if (ret < 0) {
err_str = "splicing committed meta_freed";
goto unlock;
goto update;
}
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed);
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 0);
if (ret < 0) {
err_str = "emptying committed data_freed";
goto unlock;
goto update;
}
ret = scoutfs_alloc_fill_list(sb, &server->alloc, &server->wri,
@@ -1350,7 +1363,7 @@ static int server_get_log_trees(struct super_block *sb,
SCOUTFS_SERVER_META_FILL_TARGET);
if (ret < 0) {
err_str = "filling meta_avail";
goto unlock;
goto update;
}
if (le64_to_cpu(server->meta_avail->total_len) <= scoutfs_server_reserved_meta_blocks(sb))
@@ -1363,7 +1376,7 @@ static int server_get_log_trees(struct super_block *sb,
exclusive, vacant, data_zone_blocks);
if (ret < 0) {
err_str = "refilling data_avail";
goto unlock;
goto update;
}
if (le64_to_cpu(lt.data_avail.total_len) < SCOUTFS_SERVER_DATA_FILL_LO)
@@ -1383,7 +1396,7 @@ static int server_get_log_trees(struct super_block *sb,
if (ret < 0) {
zero_data_alloc_zone_bits(&lt);
err_str = "setting data_avail zone bits";
goto unlock;
goto update;
}
lt.data_alloc_zone_blocks = cpu_to_le64(data_zone_blocks);
@@ -1392,13 +1405,18 @@ static int server_get_log_trees(struct super_block *sb,
/* give the transaction a new seq (must have been ==) */
lt.get_trans_seq = cpu_to_le64(scoutfs_server_next_seq(sb));
update:
/* update client's log tree's item */
scoutfs_key_init_log_trees(&key, le64_to_cpu(lt.rid),
le64_to_cpu(lt.nr));
ret = scoutfs_btree_force(sb, &server->alloc, &server->wri,
scoutfs_key_init_log_trees(&key, le64_to_cpu(lt.rid), le64_to_cpu(lt.nr));
err = scoutfs_btree_force(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &lt, sizeof(lt));
if (ret < 0)
err_str = "updating log trees";
BUG_ON(err < 0); /* can duplicate extents.. move dst in super, still in lt src */
if (err < 0) {
if (ret == 0) {
ret = err;
err_str = "updating log trees";
}
}
unlock:
if (unlock_alloc)
@@ -1538,9 +1556,11 @@ static int server_get_roots(struct super_block *sb,
* read and we finalize the tree so that it will be merged. We reclaim
* all the allocator items.
*
* The caller holds the commit rwsem which means we do all this work in
* one server commit. We'll need to keep the total amount of blocks in
* trees in check.
* The caller holds the commit rwsem which means we have to do our work
in one commit. The allocator btrees can be very large and very
* fragmented. We return -EINPROGRESS if we couldn't fully reclaim the
* allocators in one commit. The caller should apply the current
* commit and call again in a new commit.
*
* By the time we're evicting a client they've either synced their data
* or have been forcefully removed. The free blocks in the allocator
@@ -1600,9 +1620,9 @@ static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
}
/*
* All of these can return errors after having modified the
* allocator trees. We have to try and update the roots in the
* log item.
* All of these can return errors, perhaps indicating successful
* partial progress, after having modified the allocator trees.
* We always have to update the roots in the log item.
*/
mutex_lock(&server->alloc_mutex);
ret = (err_str = "splice meta_freed to other_freed",
@@ -1612,18 +1632,21 @@ static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
scoutfs_alloc_splice_list(sb, &server->alloc, &server->wri, server->other_freed,
&lt.meta_avail)) ?:
(err_str = "empty data_avail",
alloc_move_empty(sb, &super->data_alloc, &lt.data_avail)) ?:
alloc_move_empty(sb, &super->data_alloc, &lt.data_avail, 100)) ?:
(err_str = "empty data_freed",
alloc_move_empty(sb, &super->data_alloc, &lt.data_freed));
alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 100));
mutex_unlock(&server->alloc_mutex);
/* the transaction is no longer open */
lt.commit_trans_seq = lt.get_trans_seq;
/* only finalize, allowing merging, once the allocators are fully freed */
if (ret == 0) {
/* the transaction is no longer open */
lt.commit_trans_seq = lt.get_trans_seq;
/* the mount is no longer writing to the zones */
zero_data_alloc_zone_bits(&lt);
le64_add_cpu(&lt.flags, SCOUTFS_LOG_TREES_FINALIZED);
lt.finalize_seq = cpu_to_le64(scoutfs_server_next_seq(sb));
/* the mount is no longer writing to the zones */
zero_data_alloc_zone_bits(&lt);
le64_add_cpu(&lt.flags, SCOUTFS_LOG_TREES_FINALIZED);
lt.finalize_seq = cpu_to_le64(scoutfs_server_next_seq(sb));
}
err = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &lt, sizeof(lt));
@@ -1632,7 +1655,7 @@ static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
out:
mutex_unlock(&server->logs_mutex);
if (ret < 0)
if (ret < 0 && ret != -EINPROGRESS)
scoutfs_err(sb, "server error %d reclaiming log trees for rid %016llx: %s",
ret, rid, err_str);
@@ -3530,26 +3553,37 @@ struct farewell_request {
* Reclaim all the resources for a mount which has gone away. It's sent
* us a farewell promising to leave or we actively fenced it.
*
* It's safe to call this multiple times for a given rid. Each
* individual action knows to recognize that it's already been performed
* and return success.
* This can be called multiple times across different servers for
* different reclaim attempts. The existence of the mounted_client item
* triggers reclaim and must be deleted last. Each step knows that it
* can be called multiple times and safely recognizes that its work
* might have already been done.
*
* Some steps (reclaiming large fragmented allocators) may need multiple
* calls to complete. They return -EINPROGRESS which tells us to apply
* the server commit and retry.
*/
static int reclaim_rid(struct super_block *sb, u64 rid)
{
COMMIT_HOLD(hold);
int ret;
int err;
server_hold_commit(sb, &hold);
do {
server_hold_commit(sb, &hold);
/* delete mounted client last, recovery looks for it */
ret = scoutfs_lock_server_farewell(sb, rid) ?:
reclaim_open_log_tree(sb, rid) ?:
cancel_srch_compact(sb, rid) ?:
cancel_log_merge(sb, rid) ?:
scoutfs_omap_remove_rid(sb, rid) ?:
delete_mounted_client(sb, rid);
err = scoutfs_lock_server_farewell(sb, rid) ?:
reclaim_open_log_tree(sb, rid) ?:
cancel_srch_compact(sb, rid) ?:
cancel_log_merge(sb, rid) ?:
scoutfs_omap_remove_rid(sb, rid) ?:
delete_mounted_client(sb, rid);
return server_apply_commit(sb, &hold, ret);
ret = server_apply_commit(sb, &hold, err == -EINPROGRESS ? 0 : err);
} while (err == -EINPROGRESS && ret == 0);
return ret;
}
/*

View File

@@ -0,0 +1,3 @@
== starting background invalidating read/write load
== 60s of lock recovery during invalidating load
== stopping background load

View File

View File

@@ -17,6 +17,7 @@ lock-refleak.sh
lock-shrink-consistency.sh
lock-pr-cw-conflict.sh
lock-revoke-getcwd.sh
lock-recover-invalidate.sh
export-lookup-evict-race.sh
createmany-parallel.sh
createmany-large-names.sh

View File

@@ -0,0 +1,43 @@
#
# trigger server failover and lock recovery during heavy invalidating
# load on multiple mounts
#
majority_nr=$(t_majority_count)
quorum_nr=$T_QUORUM
test "$quorum_nr" == "$majority_nr" && \
t_skip "need remaining majority when leader unmounted"
test "$T_NR_MOUNTS" -lt "$((quorum_nr + 2))" && \
t_skip "need at least 2 non-quorum load mounts"
echo "== starting background invalidating read/write load"
touch "$T_D0/file"
load_pids=""
for i in $(t_fs_nrs); do
if [ "$i" -ge "$quorum_nr" ]; then
eval path="\$T_D${i}/file"
(while true; do touch $path > /dev/null 2>&1; done) &
load_pids="$load_pids $!"
(while true; do stat $path > /dev/null 2>&1; done) &
load_pids="$load_pids $!"
fi
done
# had it reproduce in ~40s on wimpy debug kernel guests
LENGTH=60
echo "== ${LENGTH}s of lock recovery during invalidating load"
END=$((SECONDS + LENGTH))
while [ "$SECONDS" -lt "$END" ]; do
sv=$(t_server_nr)
t_umount $sv
t_mount $sv
# new server had to process greeting for mount to finish
done
echo "== stopping background load"
kill $load_pids
t_pass