mirror of https://github.com/versity/scoutfs.git
synced 2026-01-11 14:10:26 +00:00

Compare commits (2 commits): 0b521a943e, 36a3f04566

128  README.md
@@ -1,24 +1,130 @@
# Introduction

scoutfs is a clustered in-kernel Linux filesystem designed to support
large archival systems. It features additional interfaces and metadata
so that archive agents can perform their maintenance workflows without
walking all the files in the namespace. Its cluster support lets
deployments add nodes to satisfy archival tier bandwidth targets.

scoutfs is a clustered in-kernel Linux filesystem designed and built
from the ground up to support large archival systems.

The design goal is to reach file populations in the trillions, with the
archival bandwidth to match, while remaining operational and responsive.
Its key differentiating features are:

Highlights of the design and implementation include:

- Integrated consistent indexing accelerates archival maintenance operations
- Commit logs allow nodes to write concurrently without contention

It meets best of breed expectations:

* Fully consistent POSIX semantics between nodes
* Rich metadata to ensure the integrity of metadata references
* Atomic transactions to maintain consistent persistent structures
* Integrated archival metadata replaces syncing to external databases
* Dynamic separation of resources lets nodes write in parallel
* 64bit throughout; no limits on file or directory sizes or counts
* First class kernel implementation for high performance and low latency
* Open GPLv2 implementation

Learn more in the [white paper](https://docs.wixstatic.com/ugd/aaa89b_88a5cc84be0b4d1a90f60d8900834d28.pdf).

# Current Status

**Alpha Open Source Development**

scoutfs is under heavy active development. We're developing it in the
open to give the community an opportunity to affect the design and
implementation.

The core architectural design elements are in place. Much surrounding
functionality hasn't been implemented. It's appropriate for early
adopters and interested developers, not for production use.

In that vein, expect significant incompatible changes to both the format
of network messages and persistent structures. To avoid mistakes the
implementation currently calculates a hash of the format and ioctl
header files in the source tree. The kernel module will refuse to mount
a volume created by userspace utilities with a mismatched hash, and it
will refuse to connect to a remote node with a mismatched hash. This
means having to unmount, mkfs, and remount everything across many
functional changes. Once the format is nailed down we'll wire up
forward and back compat machinery and remove this temporary safety
measure.
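
To get a feel for what that hash covers, the module build derives it by
hashing the two header files and truncating the digest (see the
SCOUTFS_FORMAT_HASH rule in the Makefile hunk later in this diff). A
minimal sketch, assuming the checkout layout used in the Quick Start
below, reproduces the value by hand:

```shell
# Same pipeline the kmod Makefile uses to compute SCOUTFS_FORMAT_HASH;
# run from the directory that holds the "scoutfs" checkout. Nodes with
# differing values will refuse to mount the volume or connect to each other.
cat scoutfs/kmod/src/format.h scoutfs/kmod/src/ioctl.h | md5sum | cut -b1-16
```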

The current kernel module is developed against the RHEL/CentOS 7.x
kernel to minimize the friction of developing and testing with partners'
existing infrastructure. Once we're happy with the design we'll shift
development to the upstream kernel while maintaining distro
compatibility branches.

# Community Mailing List

Please join us on the open scoutfs-devel@scoutfs.org [mailing list
hosted on Google Groups](https://groups.google.com/a/scoutfs.org/forum/#!forum/scoutfs-devel)
for all discussion of scoutfs.

# Quick Start

**The following is a very rough example of the procedure for getting up
and running; experience will be needed to fill in the gaps. We're happy
to help on the mailing list.**

The requirements for running scoutfs on a small cluster are:

1. One or more nodes running x86-64 CentOS/RHEL 7.4 (or 7.3)
2. Access to two shared block devices
3. IPv4 connectivity between the nodes

The steps for getting scoutfs mounted and operational are:

1. Get the kernel module running on the nodes
2. Make a new filesystem on the devices with the userspace utilities
3. Mount the devices on all the nodes

In this example we run all of these commands on three nodes. The names
of the block devices are the same on all the nodes.

1. Get the Kernel Module and Userspace Binaries

* Either use snapshot RPMs built from git by Versity:

```shell
rpm -i https://scoutfs.s3-us-west-2.amazonaws.com/scoutfs-repo-0.0.1-1.el7_4.noarch.rpm
yum install scoutfs-utils kmod-scoutfs
```

* Or use the binaries built from checked out git repositories:

```shell
yum install kernel-devel
git clone git@github.com:versity/scoutfs.git
make -C scoutfs
modprobe libcrc32c
insmod scoutfs/kmod/src/scoutfs.ko
alias scoutfs=$PWD/scoutfs/utils/src/scoutfs
```

2. Make a New Filesystem (**destroys contents, no questions asked**)

We specify that two of our three nodes must be present to form a
quorum for the system to function.

```shell
scoutfs mkfs -Q 2 /dev/meta_dev /dev/data_dev
```

3. Mount the Filesystem

Each mounting node provides its local IP address on which it will run
an internal server for the other mounts if it is elected the leader by
the quorum.

```shell
mkdir /mnt/scoutfs
mount -t scoutfs -o server_addr=$NODE_ADDR,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```
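
For example, on the three nodes used in this walkthrough each mount
supplies its own local address; the addresses below are placeholders for
whatever IPv4 addresses the nodes actually use:

```shell
# Run on each node after creating /mnt/scoutfs as above,
# substituting that node's own local address.
# node1
mount -t scoutfs -o server_addr=10.0.0.1,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
# node2
mount -t scoutfs -o server_addr=10.0.0.2,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
# node3
mount -t scoutfs -o server_addr=10.0.0.3,metadev_path=/dev/meta_dev /dev/data_dev /mnt/scoutfs
```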

4. For Kicks, Observe the Metadata Change Index

The `meta_seq` index tracks the inodes that are changed in each
transaction.

```shell
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/two; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
touch /mnt/scoutfs/one; sync
scoutfs walk-inodes meta_seq 0 -1 /mnt/scoutfs
```
233  ReleaseNotes.md
@@ -1,233 +0,0 @@
Versity ScoutFS Release Notes
=============================

---
v1.11
\
*Feb 2, 2023*

Fixed a free extent processing error that could prevent mount from
proceeding when free data extents were sufficiently fragmented. It now
properly handles very fragmented free extent maps.

Fixed a statfs server processing race that could return spurious errors
and shut down the server. With the race closed statfs processing is
reliable.

Fixed a rare livelock in the move\_blocks ioctl. With the right
relationship between ioctl arguments and eventual file extent items the
core loop in the move\_blocks ioctl could get stuck looping on an extent
item and never return. The loop exit conditions were fixed and the loop
will always advance through all extents.

Changed the 'print' scoutfs commands to flush the block cache for the
devices. It was inconvenient to expect cache flushing to be a separate
step to ensure consistency with remote node writes.

---
v1.10
\
*Dec 7, 2022*

Fixed a potential directory entry cache management deadlock that could
occur when many nodes performed heavy metadata write loads across shared
directories and their child subdirectories. The deadlock could halt
invalidation progress on a node, which could then stop use of locks that
needed invalidation on that node, which would result in almost all tasks
hanging on those locks that would never make progress.

Fixed a circumstance where metadata change sequence index item
modification could leave behind old stale metadata sequence items. The
duplication case required concurrent metadata updates across mounts with
particular open transaction patterns so the duplicate items are rare.
They resulted in a small amount of additional load when walking change
indexes but had no effect on correctness.

Fixed a rare case where sparse file extension might not write partial
blocks of zeros which was found in testing. This required using
truncate to extend files past file sizes that end in partial blocks
along with the right transaction commit and memory reclaim patterns.
This never affected regular non-sparse files nor files prepopulated with
fallocate.

---
v1.9
\
*Oct 29, 2022*

Fix VFS cached directory entry consistency verification that could cause
spurious "no such file or directory" (ENOENT) errors from rename over
NFS under certain conditions. The problem was only ever with the
consistency of in-memory cached dentry objects; persistent data was
correct and eventual eviction of the bad cached objects would stop
generating the errors.

---
v1.8
\
*Oct 18, 2022*

Add support for Linux POSIX Access Control Lists, as described in
acl(5). Mount options are added to enable ("acl") and disable ("noacl")
support. The default is to support ACLs. ACLs are stored in the
existing extended attribute scheme so adding support does not require
a format change.

Add options to control data extent preallocation. The default behavior
does not change. The options can relax the limits on preallocation
which will then trigger under more write patterns and increase the risk
of preallocated space which is never used. The options are described in
scoutfs(5).

---
v1.7
\
*Aug 26, 2022*

* **Fixed possible persistent errors moving freed data extents**
\
Fixed a case where the server could hit persistent errors trying to
move a client's freed extents in one commit. The client had to free
a large number of extents that occupied distant positions in the
global free extent btree. Very large fragmented files could cause
this. The server now moves the freed extents in multiple commits and
can always ensure forward progress.

* **Fixed possible persistent errors from freed duplicate extents**
\
Background orphan deletion wasn't properly synchronizing with
foreground tasks deleting very large files. If a deletion took long
enough then background deletion could also attempt to delete inode items
while the deletion was making progress. This could create duplicate
deletions of data extent items which causes the server to abort when
it later discovers the duplicate extents as it merges free lists.

---
v1.6
\
*Jul 7, 2022*

* **Fix memory leaks in rare corner cases**
\
Analysis tools found a few corner cases that leaked small structures,
generally around error handling or startup and shutdown.

* **Add --skip-likely-huge scoutfs print command option**
\
Add an option to scoutfs print to reduce the size of the output
so that it can be used to see system-wide metadata without being
overwhelmed by file-level details.

---
v1.5
\
*Jun 21, 2022*

* **Fix persistent error during server startup**
\
Fixed a case where the server would always hit a persistent error on
startup, preventing the system from mounting. This required a rare
but valid state across the clients.

* **Fix a client hang that would lead to fencing**
\
The client module's use of in-kernel networking was missing annotation
that could lead to communication hanging. The server would fence the
client when it stopped communicating. This could be identified by the
server fencing a client after it disconnected with no attempt by the
client to reconnect.

---
v1.4
\
*May 6, 2022*

* **Fix possible client crash during server failover**
\
Fixed a narrow window during server failover and lock recovery that
could cause a client mount to believe that it had an inconsistent item
cache and panic. This required very specific lock state and messaging
patterns between multiple mounts and multiple servers which made it
unlikely to occur in the field.

---
v1.3
\
*Apr 7, 2022*

* **Fix rare server instability under heavy load**
\
Fixed a case of server instability under heavy load due to concurrent
work fully exhausting metadata block allocation pools reserved for a
single server transaction. This would cause brief interruption as the
server shut down and the next server started up and made progress as
pending work was retried.

* **Fix slow fencing preventing server startup**
\
If a server had to process many fence requests with a slow fencing
mechanism it could be interrupted before it finished. The server
now makes sure heartbeat messages are sent while it is making progress
on fencing requests so that other quorum members don't interrupt the
process.

* **Performance improvement in getxattr and setxattr**
\
Kernel allocation patterns in the getxattr and setxattr
implementations were causing significant contention between CPUs. Their
allocation strategy was changed so that concurrent tasks can call these
xattr methods without degrading performance.

---
v1.2
\
*Mar 14, 2022*

* **Fix deadlock between fallocate() and read() system calls**
\
Fixed a lock inversion that could cause two tasks to deadlock if they
performed fallocate() and read() on a file at the same time. The
deadlock was uninterruptible so the machine needed to be rebooted. This
was relatively rare as fallocate() is usually used to prepare files
before they're used.

* **Fix instability from heavy file deletion workloads**
\
Fixed rare circumstances under which background file deletion cleanup
tasks could try to delete a file while it is being deleted by another
task. Heavy load across multiple nodes, either many files being deleted
or large files being deleted, increased the chances of this happening.
Heavy staging could cause this problem because staging can create many
internal temporary files that need to be deleted.

---
v1.1
\
*Feb 4, 2022*

* **Add scoutfs(1) change-quorum-config command**
\
Add a change-quorum-config command to scoutfs(1) to change the quorum
configuration stored in the metadata device while the file system is
unmounted. This can be used to change the mounts that will
participate in quorum and the IP addresses they use.

* **Fix Rare Risk of Item Cache Corruption**
\
Code review found a rare potential source of item cache corruption.
If this happened it would look as though deleted parts of the filesystem
returned, but only at the time they were deleted. Old deleted items are
not affected. This problem only affected the item cache, never
persistent storage. Unmounting and remounting would drop the bad item
cache and resync it with the correct persistent data.

---
v1.0
\
*Nov 8, 2021*

* **Initial Release**
\
Version 1.0 marks the first GA release.

@@ -16,7 +16,11 @@ SCOUTFS_GIT_DESCRIBE := \
	$(shell git describe --all --abbrev=6 --long 2>/dev/null || \
	echo no-git)

SCOUTFS_FORMAT_HASH := \
	$(shell cat src/format.h src/ioctl.h | md5sum | cut -b1-16)

SCOUTFS_ARGS := SCOUTFS_GIT_DESCRIBE=$(SCOUTFS_GIT_DESCRIBE) \
	SCOUTFS_FORMAT_HASH=$(SCOUTFS_FORMAT_HASH) \
	CONFIG_SCOUTFS_FS=m -C $(SK_KSRC) M=$(CURDIR)/src \
	EXTRA_CFLAGS="-Werror"

@@ -1,6 +1,7 @@
obj-$(CONFIG_SCOUTFS_FS) := scoutfs.o

CFLAGS_super.o = -DSCOUTFS_GIT_DESCRIBE=\"$(SCOUTFS_GIT_DESCRIBE)\"
CFLAGS_super.o = -DSCOUTFS_GIT_DESCRIBE=\"$(SCOUTFS_GIT_DESCRIBE)\" \
		 -DSCOUTFS_FORMAT_HASH=0x$(SCOUTFS_FORMAT_HASH)LLU

CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include

@@ -8,7 +9,6 @@ CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
-include $(src)/Makefile.kernelcompat

scoutfs-y += \
	acl.o \
	avl.o \
	alloc.o \
	block.o \
@@ -19,7 +19,6 @@ scoutfs-y += \
	dir.o \
	export.o \
	ext.o \
	fence.o \
	file.o \
	forest.o \
	inode.o \
@@ -29,11 +28,9 @@ scoutfs-y += \
	lock_server.o \
	msg.o \
	net.o \
	omap.o \
	options.o \
	per_task.o \
	quorum.o \
	recov.o \
	scoutfs_trace.o \
	server.o \
	sort_priv.o \
@@ -44,7 +41,6 @@ scoutfs-y += \
	trans.o \
	triggers.o \
	tseq.o \
	volopt.o \
	xattr.o

#

@@ -34,12 +34,3 @@ endif
ifneq (,$(shell grep 'FMODE_KABI_ITERATE' include/linux/fs.h))
ccflags-y += -DKC_FMODE_KABI_ITERATE
endif

#
# v4.7-rc2-23-g0d4d717f2583
#
# Added user_ns argument to posix_acl_valid
#
ifneq (,$(shell grep 'posix_acl_valid.*user_ns,' include/linux/posix_acl.h))
ccflags-y += -DKC_POSIX_ACL_VALID_USER_NS
endif
355  kmod/src/acl.c
@@ -1,355 +0,0 @@
|
||||
/*
|
||||
* Copyright (C) 2022 Versity Software, Inc. All rights reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU General Public
|
||||
* License v2 as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* General Public License for more details.
|
||||
*/
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/fs.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/xattr.h>
|
||||
#include <linux/posix_acl.h>
|
||||
#include <linux/posix_acl_xattr.h>
|
||||
|
||||
#include "format.h"
|
||||
#include "super.h"
|
||||
#include "scoutfs_trace.h"
|
||||
#include "xattr.h"
|
||||
#include "acl.h"
|
||||
#include "inode.h"
|
||||
#include "trans.h"
|
||||
|
||||
/*
|
||||
* POSIX draft ACLs are stored as full xattr items with the entries
|
||||
* encoded as the kernel's posix_acl_xattr_{header,entry} value structs.
|
||||
*
|
||||
* They're accessed and modified via user facing synthetic xattrs, iops
|
||||
* calls from the kernel, during inode mode changes, and during inode
|
||||
* creation.
|
||||
*
|
||||
* ACL access devolves into xattr access which is relatively expensive
|
||||
* so we maintain the cached native form in the vfs inode. We drop the
|
||||
* cache in lock invalidation which means that cached acl access must
|
||||
* always be performed under cluster locking.
|
||||
*/
|
||||
|
||||
static int acl_xattr_name_len(int type, char **name, size_t *name_len)
|
||||
{
|
||||
int ret = 0;
|
||||
|
||||
switch (type) {
|
||||
case ACL_TYPE_ACCESS:
|
||||
*name = XATTR_NAME_POSIX_ACL_ACCESS;
|
||||
if (name_len)
|
||||
*name_len = sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1;
|
||||
break;
|
||||
case ACL_TYPE_DEFAULT:
|
||||
*name = XATTR_NAME_POSIX_ACL_DEFAULT;
|
||||
if (name_len)
|
||||
*name_len = sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1;
|
||||
break;
|
||||
default:
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock)
|
||||
{
|
||||
struct posix_acl *acl;
|
||||
char *value = NULL;
|
||||
char *name;
|
||||
int ret;
|
||||
|
||||
if (!IS_POSIXACL(inode))
|
||||
return NULL;
|
||||
|
||||
acl = get_cached_acl(inode, type);
|
||||
if (acl != ACL_NOT_CACHED)
|
||||
return acl;
|
||||
|
||||
ret = acl_xattr_name_len(type, &name, NULL);
|
||||
if (ret < 0)
|
||||
return ERR_PTR(ret);
|
||||
|
||||
ret = scoutfs_xattr_get_locked(inode, name, NULL, 0, lock);
|
||||
if (ret > 0) {
|
||||
value = kzalloc(ret, GFP_NOFS);
|
||||
if (!value)
|
||||
ret = -ENOMEM;
|
||||
else
|
||||
ret = scoutfs_xattr_get_locked(inode, name, value, ret, lock);
|
||||
}
|
||||
if (ret > 0) {
|
||||
acl = posix_acl_from_xattr(&init_user_ns, value, ret);
|
||||
} else if (ret == -ENODATA || ret == 0) {
|
||||
acl = NULL;
|
||||
} else {
|
||||
acl = ERR_PTR(ret);
|
||||
}
|
||||
|
||||
/* can set null negative cache */
|
||||
if (!IS_ERR(acl))
|
||||
set_cached_acl(inode, type, acl);
|
||||
|
||||
kfree(value);
|
||||
|
||||
return acl;
|
||||
}
|
||||
|
||||
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type)
|
||||
{
|
||||
struct super_block *sb = inode->i_sb;
|
||||
struct scoutfs_lock *lock = NULL;
|
||||
struct posix_acl *acl;
|
||||
int ret;
|
||||
|
||||
if (!IS_POSIXACL(inode))
|
||||
return NULL;
|
||||
|
||||
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
|
||||
if (ret < 0) {
|
||||
acl = ERR_PTR(ret);
|
||||
} else {
|
||||
acl = scoutfs_get_acl_locked(inode, type, lock);
|
||||
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
|
||||
}
|
||||
|
||||
return acl;
|
||||
}
|
||||
|
||||
/*
|
||||
* The caller has acquired the locks and dirtied the inode, they'll
|
||||
* update the inode item if we return 0.
|
||||
*/
|
||||
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
|
||||
struct scoutfs_lock *lock, struct list_head *ind_locks)
|
||||
{
|
||||
static const struct scoutfs_xattr_prefix_tags tgs = {0,}; /* never scoutfs. prefix */
|
||||
bool set_mode = false;
|
||||
char *value = NULL;
|
||||
umode_t new_mode;
|
||||
size_t name_len;
|
||||
char *name;
|
||||
int size = 0;
|
||||
int ret;
|
||||
|
||||
ret = acl_xattr_name_len(type, &name, &name_len);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
switch (type) {
|
||||
case ACL_TYPE_ACCESS:
|
||||
if (acl) {
|
||||
ret = posix_acl_update_mode(inode, &new_mode, &acl);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
set_mode = true;
|
||||
}
|
||||
break;
|
||||
case ACL_TYPE_DEFAULT:
|
||||
if (!S_ISDIR(inode->i_mode)) {
|
||||
ret = acl ? -EINVAL : 0;
|
||||
goto out;
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
||||
if (acl) {
|
||||
size = posix_acl_xattr_size(acl->a_count);
|
||||
value = kmalloc(size, GFP_NOFS);
|
||||
if (!value) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = posix_acl_to_xattr(&init_user_ns, acl, value, size);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = scoutfs_xattr_set_locked(inode, name, name_len, value, size, 0, &tgs,
|
||||
lock, NULL, ind_locks);
|
||||
if (ret == 0 && set_mode) {
|
||||
inode->i_mode = new_mode;
|
||||
if (!value) {
|
||||
/* can be setting an acl that only affects mode, didn't need xattr */
|
||||
inode_inc_iversion(inode);
|
||||
inode->i_ctime = CURRENT_TIME;
|
||||
}
|
||||
}
|
||||
|
||||
out:
|
||||
if (!ret)
|
||||
set_cached_acl(inode, type, acl);
|
||||
|
||||
kfree(value);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
|
||||
{
|
||||
struct super_block *sb = inode->i_sb;
|
||||
struct scoutfs_lock *lock = NULL;
|
||||
LIST_HEAD(ind_locks);
|
||||
int ret;
|
||||
|
||||
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE, SCOUTFS_LKF_REFRESH_INODE, inode, &lock) ?:
|
||||
scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
|
||||
if (ret == 0) {
|
||||
ret = scoutfs_dirty_inode_item(inode, lock) ?:
|
||||
scoutfs_set_acl_locked(inode, acl, type, lock, &ind_locks);
|
||||
if (ret == 0)
|
||||
scoutfs_update_inode_item(inode, lock, &ind_locks);
|
||||
|
||||
scoutfs_release_trans(sb);
|
||||
scoutfs_inode_index_unlock(sb, &ind_locks);
|
||||
}
|
||||
|
||||
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
|
||||
return ret;
|
||||
}
|
||||
|
||||
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
|
||||
int type)
|
||||
{
|
||||
struct posix_acl *acl;
|
||||
int ret = 0;
|
||||
|
||||
if (!IS_POSIXACL(dentry->d_inode))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
acl = scoutfs_get_acl(dentry->d_inode, type);
|
||||
if (IS_ERR(acl))
|
||||
return PTR_ERR(acl);
|
||||
if (acl == NULL)
|
||||
return -ENODATA;
|
||||
|
||||
ret = posix_acl_to_xattr(&init_user_ns, acl, value, size);
|
||||
posix_acl_release(acl);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
|
||||
int flags, int type)
|
||||
{
|
||||
struct posix_acl *acl = NULL;
|
||||
int ret;
|
||||
|
||||
if (!inode_owner_or_capable(dentry->d_inode))
|
||||
return -EPERM;
|
||||
|
||||
if (!IS_POSIXACL(dentry->d_inode))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
if (value) {
|
||||
acl = posix_acl_from_xattr(&init_user_ns, value, size);
|
||||
if (IS_ERR(acl))
|
||||
return PTR_ERR(acl);
|
||||
|
||||
if (acl) {
|
||||
ret = kc_posix_acl_valid(&init_user_ns, acl);
|
||||
if (ret)
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
ret = scoutfs_set_acl(dentry->d_inode, acl, type);
|
||||
out:
|
||||
posix_acl_release(acl);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Apply the parent's default acl to new inodes access acl and inherit
|
||||
* it as the default for new directories. The caller holds locks and a
|
||||
* transaction.
|
||||
*/
|
||||
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
|
||||
struct scoutfs_lock *lock, struct scoutfs_lock *dir_lock,
|
||||
struct list_head *ind_locks)
|
||||
{
|
||||
struct posix_acl *acl = NULL;
|
||||
int ret = 0;
|
||||
|
||||
if (!S_ISLNK(inode->i_mode)) {
|
||||
if (IS_POSIXACL(dir)) {
|
||||
acl = scoutfs_get_acl_locked(dir, ACL_TYPE_DEFAULT, dir_lock);
|
||||
if (IS_ERR(acl))
|
||||
return PTR_ERR(acl);
|
||||
}
|
||||
|
||||
if (!acl)
|
||||
inode->i_mode &= ~current_umask();
|
||||
}
|
||||
|
||||
if (IS_POSIXACL(dir) && acl) {
|
||||
if (S_ISDIR(inode->i_mode)) {
|
||||
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_DEFAULT,
|
||||
lock, ind_locks);
|
||||
if (ret)
|
||||
goto out;
|
||||
}
|
||||
ret = posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
if (ret > 0)
|
||||
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_ACCESS,
|
||||
lock, ind_locks);
|
||||
} else {
|
||||
cache_no_acl(inode);
|
||||
}
|
||||
out:
|
||||
posix_acl_release(acl);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Update the access ACL based on a newly set mode. If we return an
|
||||
* error then the xattr wasn't changed.
|
||||
*
|
||||
* Annoyingly, setattr_copy has logic that transforms the final set mode
|
||||
* that we want to use to update the acl. But we don't want to modify
|
||||
* the other inode fields while discovering the resulting mode. We're
|
||||
* relying on acl_chmod not caring about the transformation (currently
|
||||
* just clears sgid). It would be better if we could get the resulting
|
||||
* mode to give to acl_chmod without modifying the other inode fields.
|
||||
*
|
||||
* The caller has the inode mutex, a cluster lock, transaction, and will
|
||||
* update the inode item if we return success.
|
||||
*/
|
||||
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
|
||||
struct scoutfs_lock *lock, struct list_head *ind_locks)
|
||||
{
|
||||
struct posix_acl *acl;
|
||||
int ret = 0;
|
||||
|
||||
if (!IS_POSIXACL(inode) || !(attr->ia_valid & ATTR_MODE))
|
||||
return 0;
|
||||
|
||||
if (S_ISLNK(inode->i_mode))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
acl = scoutfs_get_acl_locked(inode, ACL_TYPE_ACCESS, lock);
|
||||
if (IS_ERR_OR_NULL(acl))
|
||||
return PTR_ERR(acl);
|
||||
|
||||
ret = posix_acl_chmod(&acl, GFP_KERNEL, attr->ia_mode);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_ACCESS, lock, ind_locks);
|
||||
posix_acl_release(acl);
|
||||
return ret;
|
||||
}
|
||||
@@ -1,18 +0,0 @@
#ifndef _SCOUTFS_ACL_H_
#define _SCOUTFS_ACL_H_

struct posix_acl *scoutfs_get_acl(struct inode *inode, int type);
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock);
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type);
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
			   struct scoutfs_lock *lock, struct list_head *ind_locks);
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
			  int type);
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
			  int flags, int type);
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
			     struct scoutfs_lock *lock, struct list_head *ind_locks);
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
			    struct scoutfs_lock *lock, struct scoutfs_lock *dir_lock,
			    struct list_head *ind_locks);
#endif
771  kmod/src/alloc.c (diff suppressed because it is too large)
@@ -19,11 +19,14 @@
|
||||
(128ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
|
||||
|
||||
/*
|
||||
* The default size that we'll try to preallocate. This is trying to
|
||||
* hit the limit of large efficient device writes while minimizing
|
||||
* wasted preallocation that is never used.
|
||||
* The largest aligned region that we'll try to allocate at the end of
|
||||
* the file as it's extended. This is also limited to the current file
|
||||
* size so we can only waste at most twice the total file size when
|
||||
* files are less than this. We try to keep this around the point of
|
||||
* diminishing returns in streaming performance of common data devices
|
||||
* to limit waste.
|
||||
*/
|
||||
#define SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS \
|
||||
#define SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT \
|
||||
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
|
||||
|
||||
/*
|
||||
@@ -35,10 +38,6 @@
|
||||
#define SCOUTFS_ALLOC_DATA_LG_THRESH \
|
||||
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
|
||||
|
||||
/* the client will force commits if data allocators get too low */
|
||||
#define SCOUTFS_ALLOC_DATA_REFILL_THRESH \
|
||||
((256ULL * 1024 * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
|
||||
|
||||
/*
|
||||
* Fill client alloc roots to the target when they fall below the lo
|
||||
* threshold.
|
||||
@@ -56,16 +55,15 @@
|
||||
#define SCOUTFS_SERVER_DATA_FILL_LO \
|
||||
(1ULL * 1024 * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
|
||||
|
||||
|
||||
/*
|
||||
* Log merge meta allocations are only used for one request and will
|
||||
* never use more than the dirty limit.
|
||||
* Each of the server meta_alloc roots will try to keep a minimum amount
|
||||
* of free blocks. The server will swap roots when its current avail
|
||||
* falls below the threshold while the freed root is still above it. It
|
||||
* must have room for all the largest allocation attempted in a
|
||||
* transaction on the server.
|
||||
*/
|
||||
#define SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT (64ULL * 1024 * 1024)
|
||||
/* a few extra blocks for alloc blocks */
|
||||
#define SCOUTFS_SERVER_MERGE_FILL_TARGET \
|
||||
((SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT >> SCOUTFS_BLOCK_LG_SHIFT) + 4)
|
||||
#define SCOUTFS_SERVER_MERGE_FILL_LO SCOUTFS_SERVER_MERGE_FILL_TARGET
|
||||
#define SCOUTFS_SERVER_META_ALLOC_MIN \
|
||||
(SCOUTFS_SERVER_META_FILL_TARGET * 2)
|
||||
|
||||
/*
|
||||
* A run-time use of a pair of persistent avail/freed roots as a
|
||||
@@ -74,8 +72,7 @@
|
||||
* transaction.
|
||||
*/
|
||||
struct scoutfs_alloc {
|
||||
/* writers rarely modify list_head avail/freed. readers often check for _meta_alloc_low */
|
||||
seqlock_t seqlock;
|
||||
spinlock_t lock;
|
||||
struct mutex mutex;
|
||||
struct scoutfs_block *dirty_avail_bl;
|
||||
struct scoutfs_block *dirty_freed_bl;
|
||||
@@ -127,14 +124,7 @@ int scoutfs_free_data(struct super_block *sb, struct scoutfs_alloc *alloc,
|
||||
int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri,
|
||||
struct scoutfs_alloc_root *dst,
|
||||
struct scoutfs_alloc_root *src, u64 total,
|
||||
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget);
|
||||
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
|
||||
u64 start, u64 len);
|
||||
int scoutfs_alloc_remove(struct super_block *sb, struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
|
||||
u64 start, u64 len);
|
||||
struct scoutfs_alloc_root *src, u64 total);
|
||||
|
||||
int scoutfs_alloc_fill_list(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
@@ -155,23 +145,11 @@ int scoutfs_alloc_splice_list(struct super_block *sb,
|
||||
|
||||
bool scoutfs_alloc_meta_low(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc, u32 nr);
|
||||
void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total, u32 *freed_space);
|
||||
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
|
||||
u32 budget, u32 nr);
|
||||
bool scoutfs_alloc_test_flag(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc, u32 flag);
|
||||
|
||||
typedef int (*scoutfs_alloc_foreach_cb_t)(struct super_block *sb, void *arg,
|
||||
int owner, u64 id,
|
||||
bool meta, bool avail, u64 blocks);
|
||||
int scoutfs_alloc_foreach(struct super_block *sb,
|
||||
scoutfs_alloc_foreach_cb_t cb, void *arg);
|
||||
int scoutfs_alloc_foreach_super(struct super_block *sb, struct scoutfs_super_block *super,
|
||||
scoutfs_alloc_foreach_cb_t cb, void *arg);
|
||||
|
||||
typedef void (*scoutfs_alloc_extent_cb_t)(struct super_block *sb, void *cb_arg,
|
||||
struct scoutfs_extent *ext);
|
||||
int scoutfs_alloc_extents_cb(struct super_block *sb, struct scoutfs_alloc_root *root,
|
||||
scoutfs_alloc_extent_cb_t cb, void *cb_arg);
|
||||
|
||||
#endif
|
||||
|
||||
836  kmod/src/block.c (diff suppressed because it is too large)
@@ -13,27 +13,27 @@ struct scoutfs_block {
|
||||
void *priv;
|
||||
};
|
||||
|
||||
struct scoutfs_block_saved_refs {
|
||||
struct scoutfs_block_ref refs[2];
|
||||
};
|
||||
__le32 scoutfs_block_calc_crc(struct scoutfs_block_header *hdr, u32 size);
|
||||
bool scoutfs_block_valid_crc(struct scoutfs_block_header *hdr, u32 size);
|
||||
bool scoutfs_block_valid_ref(struct super_block *sb,
|
||||
struct scoutfs_block_header *hdr,
|
||||
__le64 seq, __le64 blkno);
|
||||
|
||||
#define DECLARE_SAVED_REFS(name) \
|
||||
struct scoutfs_block_saved_refs name = {{{0,}}}
|
||||
|
||||
int scoutfs_block_check_stale(struct super_block *sb, int ret,
|
||||
struct scoutfs_block_saved_refs *saved,
|
||||
struct scoutfs_block_ref *a, struct scoutfs_block_ref *b);
|
||||
|
||||
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
|
||||
struct scoutfs_block **bl_ret);
|
||||
struct scoutfs_block *scoutfs_block_create(struct super_block *sb, u64 blkno);
|
||||
struct scoutfs_block *scoutfs_block_read(struct super_block *sb, u64 blkno);
|
||||
void scoutfs_block_invalidate(struct super_block *sb, struct scoutfs_block *bl);
|
||||
bool scoutfs_block_consistent_ref(struct super_block *sb,
|
||||
struct scoutfs_block *bl,
|
||||
__le64 seq, __le64 blkno, u32 magic);
|
||||
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl);
|
||||
|
||||
void scoutfs_block_writer_init(struct super_block *sb,
|
||||
struct scoutfs_block_writer *wri);
|
||||
int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri, struct scoutfs_block_ref *ref,
|
||||
u32 magic, struct scoutfs_block **bl_ret,
|
||||
u64 dirty_blkno, u64 *ref_blkno);
|
||||
void scoutfs_block_writer_mark_dirty(struct super_block *sb,
|
||||
struct scoutfs_block_writer *wri,
|
||||
struct scoutfs_block *bl);
|
||||
bool scoutfs_block_writer_is_dirty(struct super_block *sb,
|
||||
struct scoutfs_block *bl);
|
||||
int scoutfs_block_writer_write(struct super_block *sb,
|
||||
struct scoutfs_block_writer *wri);
|
||||
void scoutfs_block_writer_forget_all(struct super_block *sb,
|
||||
|
||||
1107  kmod/src/btree.c (diff suppressed because it is too large)
@@ -20,15 +20,13 @@ struct scoutfs_btree_item_ref {
|
||||
|
||||
/* caller gives an item to the callback */
|
||||
typedef int (*scoutfs_btree_item_cb)(struct super_block *sb,
|
||||
struct scoutfs_key *key, u64 seq, u8 flags,
|
||||
struct scoutfs_key *key,
|
||||
void *val, int val_len, void *arg);
|
||||
|
||||
/* simple singly-linked list of items */
|
||||
struct scoutfs_btree_item_list {
|
||||
struct scoutfs_btree_item_list *next;
|
||||
struct scoutfs_key key;
|
||||
u64 seq;
|
||||
u8 flags;
|
||||
int val_len;
|
||||
u8 val[0];
|
||||
};
|
||||
@@ -84,49 +82,6 @@ int scoutfs_btree_insert_list(struct super_block *sb,
|
||||
struct scoutfs_btree_root *root,
|
||||
struct scoutfs_btree_item_list *lst);
|
||||
|
||||
int scoutfs_btree_parent_range(struct super_block *sb,
|
||||
struct scoutfs_btree_root *root,
|
||||
struct scoutfs_key *key,
|
||||
struct scoutfs_key *start,
|
||||
struct scoutfs_key *end);
|
||||
int scoutfs_btree_get_parent(struct super_block *sb,
|
||||
struct scoutfs_btree_root *root,
|
||||
struct scoutfs_key *key,
|
||||
struct scoutfs_btree_root *par_root);
|
||||
int scoutfs_btree_set_parent(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri,
|
||||
struct scoutfs_btree_root *root,
|
||||
struct scoutfs_key *key,
|
||||
struct scoutfs_btree_root *par_root);
|
||||
int scoutfs_btree_rebalance(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri,
|
||||
struct scoutfs_btree_root *root,
|
||||
struct scoutfs_key *key);
|
||||
|
||||
/* merge input is a list of roots */
|
||||
struct scoutfs_btree_root_head {
|
||||
struct list_head head;
|
||||
struct scoutfs_btree_root root;
|
||||
};
|
||||
|
||||
int scoutfs_btree_merge(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri,
|
||||
struct scoutfs_key *start,
|
||||
struct scoutfs_key *end,
|
||||
struct scoutfs_key *next_ret,
|
||||
struct scoutfs_btree_root *root,
|
||||
struct list_head *input_list,
|
||||
bool subtree, int dirty_limit, int alloc_low);
|
||||
|
||||
int scoutfs_btree_free_blocks(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri,
|
||||
struct scoutfs_key *key,
|
||||
struct scoutfs_btree_root *root, int free_budget);
|
||||
|
||||
void scoutfs_btree_put_iref(struct scoutfs_btree_item_ref *iref);
|
||||
|
||||
#endif
|
||||
|
||||
@@ -31,15 +31,16 @@
|
||||
#include "net.h"
|
||||
#include "endian_swap.h"
|
||||
#include "quorum.h"
|
||||
#include "omap.h"
|
||||
#include "trans.h"
|
||||
|
||||
/*
|
||||
* The client is responsible for maintaining a connection to the server.
|
||||
* This includes managing quorum elections that determine which client
|
||||
* should run the server that all the clients connect to.
|
||||
*/
|
||||
|
||||
#define CLIENT_CONNECT_DELAY_MS (MSEC_PER_SEC / 10)
|
||||
#define CLIENT_CONNECT_TIMEOUT_MS (1 * MSEC_PER_SEC)
|
||||
#define CLIENT_QUORUM_TIMEOUT_MS (5 * MSEC_PER_SEC)
|
||||
|
||||
struct client_info {
|
||||
struct super_block *sb;
|
||||
@@ -49,9 +50,9 @@ struct client_info {
|
||||
|
||||
struct workqueue_struct *workq;
|
||||
struct delayed_work connect_dwork;
|
||||
unsigned long connect_delay_jiffies;
|
||||
|
||||
u64 server_term;
|
||||
u64 greeting_umb;
|
||||
|
||||
bool sending_farewell;
|
||||
int farewell_error;
|
||||
@@ -117,6 +118,23 @@ int scoutfs_client_get_roots(struct super_block *sb,
|
||||
NULL, 0, roots, sizeof(*roots));
|
||||
}
|
||||
|
||||
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
__le64 before = cpu_to_le64p(seq);
|
||||
__le64 after;
|
||||
int ret;
|
||||
|
||||
ret = scoutfs_net_sync_request(sb, client->conn,
|
||||
SCOUTFS_NET_CMD_ADVANCE_SEQ,
|
||||
&before, sizeof(before),
|
||||
&after, sizeof(after));
|
||||
if (ret == 0)
|
||||
*seq = le64_to_cpu(after);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
@@ -138,7 +156,7 @@ static int client_lock_response(struct super_block *sb,
|
||||
void *resp, unsigned int resp_len,
|
||||
int error, void *data)
|
||||
{
|
||||
if (resp_len != sizeof(struct scoutfs_net_lock))
|
||||
if (resp_len != sizeof(struct scoutfs_net_lock_grant_response))
|
||||
return -EINVAL;
|
||||
|
||||
/* XXX error? */
|
||||
@@ -203,120 +221,6 @@ int scoutfs_client_srch_commit_compact(struct super_block *sb,
|
||||
res, sizeof(*res), NULL, 0);
|
||||
}
|
||||
|
||||
int scoutfs_client_get_log_merge(struct super_block *sb,
|
||||
struct scoutfs_log_merge_request *req)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
|
||||
return scoutfs_net_sync_request(sb, client->conn,
|
||||
SCOUTFS_NET_CMD_GET_LOG_MERGE,
|
||||
NULL, 0, req, sizeof(*req));
|
||||
}
|
||||
|
||||
int scoutfs_client_commit_log_merge(struct super_block *sb,
|
||||
struct scoutfs_log_merge_complete *comp)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
|
||||
return scoutfs_net_sync_request(sb, client->conn,
|
||||
SCOUTFS_NET_CMD_COMMIT_LOG_MERGE,
|
||||
comp, sizeof(*comp), NULL, 0);
|
||||
}
|
||||
|
||||
int scoutfs_client_send_omap_response(struct super_block *sb, u64 id,
|
||||
struct scoutfs_open_ino_map *map)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
|
||||
return scoutfs_net_response(sb, client->conn, SCOUTFS_NET_CMD_OPEN_INO_MAP,
|
||||
id, 0, map, sizeof(*map));
|
||||
}
|
||||
|
||||
/* The client is receiving an omap request from the server */
|
||||
static int client_open_ino_map(struct super_block *sb, struct scoutfs_net_connection *conn,
|
||||
u8 cmd, u64 id, void *arg, u16 arg_len)
|
||||
{
|
||||
if (arg_len != sizeof(struct scoutfs_open_ino_map_args))
|
||||
return -EINVAL;
|
||||
|
||||
return scoutfs_omap_client_handle_request(sb, id, arg);
|
||||
}
|
||||
|
||||
/* The client is sending an omap request to the server */
|
||||
int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
|
||||
struct scoutfs_open_ino_map *map)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
struct scoutfs_open_ino_map_args args = {
|
||||
.group_nr = cpu_to_le64(group_nr),
|
||||
.req_id = 0,
|
||||
};
|
||||
|
||||
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_OPEN_INO_MAP,
|
||||
&args, sizeof(args), map, sizeof(*map));
|
||||
}
|
||||
|
||||
/* The client is asking the server for the current volume options */
|
||||
int scoutfs_client_get_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
|
||||
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_GET_VOLOPT,
|
||||
NULL, 0, volopt, sizeof(*volopt));
|
||||
}
|
||||
|
||||
/* The client is asking the server to update volume options */
|
||||
int scoutfs_client_set_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
|
||||
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_SET_VOLOPT,
|
||||
volopt, sizeof(*volopt), NULL, 0);
|
||||
}
|
||||
|
||||
/* The client is asking the server to clear volume options */
|
||||
int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
|
||||
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_CLEAR_VOLOPT,
|
||||
volopt, sizeof(*volopt), NULL, 0);
|
||||
}
|
||||
|
||||
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
|
||||
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_RESIZE_DEVICES,
|
||||
nrd, sizeof(*nrd), NULL, 0);
|
||||
}
|
||||
|
||||
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
|
||||
return scoutfs_net_sync_request(sb, client->conn, SCOUTFS_NET_CMD_STATFS,
|
||||
NULL, 0, nst, sizeof(*nst));
|
||||
}
|
||||
|
||||
/*
|
||||
* The server is asking that we trigger a commit of the current log
|
||||
* trees so that they can ensure an item seq discontinuity between
|
||||
* finalized log btrees and the next set of open log btrees. If we're
|
||||
* shutting down then we're already going to perform a final commit.
|
||||
*/
|
||||
static int sync_log_trees(struct super_block *sb, struct scoutfs_net_connection *conn,
|
||||
u8 cmd, u64 id, void *arg, u16 arg_len)
|
||||
{
|
||||
if (arg_len != 0)
|
||||
return -EINVAL;
|
||||
|
||||
if (!scoutfs_unmounting(sb))
|
||||
scoutfs_trans_sync(sb, 0);
|
||||
|
||||
return scoutfs_net_response(sb, conn, cmd, id, 0, NULL, 0);
|
||||
}
|
||||
|
||||
/* The client is receiving a invalidation request from the server */
|
||||
static int client_lock(struct super_block *sb,
|
||||
struct scoutfs_net_connection *conn, u8 cmd, u64 id,
|
||||
@@ -354,8 +258,8 @@ static int client_greeting(struct super_block *sb,
|
||||
void *resp, unsigned int resp_len, int error,
|
||||
void *data)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct client_info *client = sbi->client_info;
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
|
||||
struct scoutfs_net_greeting *gr = resp;
|
||||
bool new_server;
|
||||
int ret;
|
||||
@@ -370,16 +274,18 @@ static int client_greeting(struct super_block *sb,
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (gr->fsid != cpu_to_le64(sbi->fsid)) {
|
||||
scoutfs_warn(sb, "server greeting response fsid 0x%llx did not match client fsid 0x%llx",
|
||||
le64_to_cpu(gr->fsid), sbi->fsid);
|
||||
if (gr->fsid != super->hdr.fsid) {
|
||||
scoutfs_warn(sb, "server sent fsid 0x%llx, client has 0x%llx",
|
||||
le64_to_cpu(gr->fsid),
|
||||
le64_to_cpu(super->hdr.fsid));
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (le64_to_cpu(gr->fmt_vers) != sbi->fmt_vers) {
|
||||
scoutfs_warn(sb, "server greeting response format version %llu did not match client format version %llu",
|
||||
le64_to_cpu(gr->fmt_vers), sbi->fmt_vers);
|
||||
if (gr->format_hash != super->format_hash) {
|
||||
scoutfs_warn(sb, "server sent format 0x%llx, client has 0x%llx",
|
||||
le64_to_cpu(gr->format_hash),
|
||||
le64_to_cpu(super->format_hash));
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
@@ -388,31 +294,52 @@ static int client_greeting(struct super_block *sb,
|
||||
scoutfs_net_client_greeting(sb, conn, new_server);
|
||||
|
||||
client->server_term = le64_to_cpu(gr->server_term);
|
||||
client->connect_delay_jiffies = 0;
|
||||
client->greeting_umb = le64_to_cpu(gr->unmount_barrier);
|
||||
ret = 0;
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* The client is deciding if it needs to keep trying to reconnect to
|
||||
* have its farewell request processed. The server removes our mounted
|
||||
* client item last so that if we don't see it we know the server has
|
||||
* processed our farewell and we don't need to reconnect, we can unmount
|
||||
* safely.
|
||||
* This work is responsible for maintaining a connection from the client
|
||||
* to the server. It's queued on mount and disconnect and we requeue
|
||||
* the work if the work fails and we're not shutting down.
|
||||
*
|
||||
* This is peeking at btree blocks that the server could be actively
|
||||
* freeing with cow updates so it can see stale blocks, we just return
|
||||
* the error and we'll retry eventually as the connection times out.
|
||||
* In the typical case a mount reads the super blocks and finds the
|
||||
* address of the currently running server and connects to it.
|
||||
* Non-voting clients who can't connect will keep trying alternating
|
||||
* reading the address and getting connect timeouts.
|
||||
*
|
||||
* Voting mounts will try to elect a leader if they can't connect to the
|
||||
* server. When a quorum can't connect and are able to elect a leader
|
||||
* then a new server is started. The new server will write its address
|
||||
* in the super and everyone will be able to connect.
|
||||
*
|
||||
* There's a tricky bit of coordination required to safely unmount.
|
||||
* Clients need to tell the server that they won't be coming back with a
|
||||
* farewell request. Once a client receives its farewell response it
|
||||
* can exit. But a majority of clients need to stick around to elect a
|
||||
* server to process all their farewell requests. This is coordinated
|
||||
* by having the greeting tell the server that a client is a voter. The
|
||||
* server then holds on to farewell requests from voters until only
|
||||
* requests from the final quorum remain. These farewell responses are
|
||||
* only sent after updating an unmount barrier in the super to indicate
|
||||
* to the final quorum that they can safely exit without having received
|
||||
* a farewell response over the network.
|
||||
*/
|
||||
static int lookup_mounted_client_item(struct super_block *sb, u64 rid)
|
||||
static void scoutfs_client_connect_worker(struct work_struct *work)
|
||||
{
|
||||
struct scoutfs_key key = {
|
||||
.sk_zone = SCOUTFS_MOUNTED_CLIENT_ZONE,
|
||||
.skmc_rid = cpu_to_le64(rid),
|
||||
};
|
||||
struct scoutfs_super_block *super;
|
||||
SCOUTFS_BTREE_ITEM_REF(iref);
|
||||
struct client_info *client = container_of(work, struct client_info,
|
||||
connect_dwork.work);
|
||||
struct super_block *sb = client->sb;
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct scoutfs_super_block *super = NULL;
|
||||
struct mount_options *opts = &sbi->opts;
|
||||
const bool am_voter = opts->server_addr.sin_addr.s_addr != 0;
|
||||
struct scoutfs_net_greeting greet;
|
||||
struct sockaddr_in sin;
|
||||
ktime_t timeout_abs;
|
||||
u64 elected_term;
|
||||
int ret;
|
||||
|
||||
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
|
||||
@@ -425,96 +352,57 @@ static int lookup_mounted_client_item(struct super_block *sb, u64 rid)
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
ret = scoutfs_btree_lookup(sb, &super->mounted_clients, &key, &iref);
|
||||
if (ret == 0) {
|
||||
scoutfs_btree_put_iref(&iref);
|
||||
ret = 1;
|
||||
}
|
||||
if (ret == -ENOENT)
|
||||
ret = 0;
|
||||
|
||||
kfree(super);
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* If we're not seeing successful connections we want to back off. Each
|
||||
* connection attempt starts by setting a long connection work delay.
|
||||
* We only set a shorter delay if we see a greeting response from the
|
||||
* server. At that point we'll try to immediately reconnect if the
|
||||
* connection is broken.
|
||||
*/
|
||||
static void queue_connect_dwork(struct super_block *sb, struct client_info *client)
|
||||
{
|
||||
if (!atomic_read(&client->shutting_down) && !scoutfs_forcing_unmount(sb))
|
||||
queue_delayed_work(client->workq, &client->connect_dwork,
|
||||
client->connect_delay_jiffies);
|
||||
}
|
||||
|
||||
/*
|
||||
* This work is responsible for maintaining a connection from the client
|
||||
* to the server. It's queued on mount and disconnect and we requeue
|
||||
* the work if the work fails and we're not shutting down.
|
||||
*
|
||||
* We ask quorum for an address to try and connect to. If there isn't
|
||||
* one, or it fails, we back off a bit before trying again.
|
||||
*
|
||||
* There's a tricky bit of coordination required to safely unmount.
|
||||
* Clients need to tell the server that they won't be coming back with a
|
||||
* farewell request. Once the server processes a farewell request from
|
||||
* the client it can forget the client. If the connection is broken
|
||||
* before the client gets the farewell response it doesn't want to
|
||||
* reconnect to send it again.. instead the client can read the metadata
|
||||
* device to check for the lack of an item which indicates that the
|
||||
* server has processed its farewell.
|
||||
*/
|
||||
static void scoutfs_client_connect_worker(struct work_struct *work)
|
||||
{
|
||||
struct client_info *client = container_of(work, struct client_info,
|
||||
connect_dwork.work);
|
||||
struct super_block *sb = client->sb;
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct scoutfs_mount_options opts;
|
||||
struct scoutfs_net_greeting greet;
|
||||
struct sockaddr_in sin;
|
||||
bool am_quorum;
|
||||
int ret;
|
||||
|
||||
scoutfs_options_read(sb, &opts);
|
||||
am_quorum = opts.quorum_slot_nr >= 0;
|
||||
|
||||
/* can unmount once server farewell handling removes our item */
|
||||
if (client->sending_farewell &&
|
||||
lookup_mounted_client_item(sb, sbi->rid) == 0) {
|
||||
/* can safely unmount if we see that server processed our farewell */
|
||||
if (am_voter && client->sending_farewell &&
|
||||
(le64_to_cpu(super->unmount_barrier) > client->greeting_umb)) {
|
||||
client->farewell_error = 0;
|
||||
complete(&client->farewell_comp);
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* always wait a bit until a greeting response sets a lower delay */
|
||||
client->connect_delay_jiffies = msecs_to_jiffies(CLIENT_CONNECT_DELAY_MS);
|
||||
/* try to connect to the super's server address */
|
||||
scoutfs_addr_to_sin(&sin, &super->server_addr);
|
||||
if (sin.sin_addr.s_addr != 0 && sin.sin_port != 0)
|
||||
ret = scoutfs_net_connect(sb, client->conn, &sin,
|
||||
CLIENT_CONNECT_TIMEOUT_MS);
|
||||
else
|
||||
ret = -ENOTCONN;
|
||||
|
||||
ret = scoutfs_quorum_server_sin(sb, &sin);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
/* voters try to elect a leader if they couldn't connect */
|
||||
if (ret < 0) {
|
||||
/* non-voters will keep retrying */
|
||||
if (!am_voter)
|
||||
goto out;
|
||||
|
||||
ret = scoutfs_net_connect(sb, client->conn, &sin,
|
||||
CLIENT_CONNECT_TIMEOUT_MS);
|
||||
if (ret < 0)
|
||||
/* make sure local server isn't writing super during votes */
|
||||
scoutfs_server_stop(sb);
|
||||
|
||||
timeout_abs = ktime_add_ms(ktime_get(),
|
||||
CLIENT_QUORUM_TIMEOUT_MS);
|
||||
|
||||
ret = scoutfs_quorum_election(sb, timeout_abs,
|
||||
le64_to_cpu(super->quorum_server_term),
|
||||
&elected_term);
|
||||
/* start the server if we were asked to */
|
||||
if (elected_term > 0)
|
||||
ret = scoutfs_server_start(sb, &opts->server_addr,
|
||||
elected_term);
|
||||
ret = -ENOTCONN;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* send a greeting to verify endpoints of each connection */
|
||||
greet.fsid = cpu_to_le64(sbi->fsid);
|
||||
greet.fmt_vers = cpu_to_le64(sbi->fmt_vers);
|
||||
greet.fsid = super->hdr.fsid;
|
||||
greet.format_hash = super->format_hash;
|
||||
greet.server_term = cpu_to_le64(client->server_term);
|
||||
greet.unmount_barrier = cpu_to_le64(client->greeting_umb);
|
||||
greet.rid = cpu_to_le64(sbi->rid);
|
||||
greet.flags = 0;
|
||||
if (client->sending_farewell)
|
||||
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_FAREWELL);
|
||||
if (am_quorum)
|
||||
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_QUORUM);
|
||||
if (am_voter)
|
||||
greet.flags |= cpu_to_le64(SCOUTFS_NET_GREETING_FLAG_VOTER);
|
||||
|
||||
ret = scoutfs_net_submit_request(sb, client->conn,
|
||||
SCOUTFS_NET_CMD_GREETING,
|
||||
@@ -523,15 +411,17 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
|
||||
if (ret)
|
||||
scoutfs_net_shutdown(sb, client->conn);
|
||||
out:
|
||||
if (ret)
|
||||
queue_connect_dwork(sb, client);
|
||||
kfree(super);
|
||||
|
||||
/* always have a small delay before retrying to avoid storms */
|
||||
if (ret && !atomic_read(&client->shutting_down))
|
||||
queue_delayed_work(client->workq, &client->connect_dwork,
|
||||
msecs_to_jiffies(CLIENT_CONNECT_DELAY_MS));
|
||||
}
|
||||
|
||||
static scoutfs_net_request_t client_req_funcs[] = {
|
||||
[SCOUTFS_NET_CMD_SYNC_LOG_TREES] = sync_log_trees,
|
||||
[SCOUTFS_NET_CMD_LOCK] = client_lock,
|
||||
[SCOUTFS_NET_CMD_LOCK_RECOVER] = client_lock_recover,
|
||||
[SCOUTFS_NET_CMD_OPEN_INO_MAP] = client_open_ino_map,
|
||||
};
|
||||
|
||||
/*
|
||||
@@ -544,7 +434,8 @@ static void client_notify_down(struct super_block *sb,
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
|
||||
queue_connect_dwork(sb, client);
|
||||
if (!atomic_read(&client->shutting_down))
|
||||
queue_delayed_work(client->workq, &client->connect_dwork, 0);
|
||||
}
|
||||
|
||||
int scoutfs_client_setup(struct super_block *sb)
|
||||
@@ -579,7 +470,7 @@ int scoutfs_client_setup(struct super_block *sb)
|
||||
goto out;
|
||||
}
|
||||
|
||||
queue_connect_dwork(sb, client);
|
||||
queue_delayed_work(client->workq, &client->connect_dwork, 0);
|
||||
ret = 0;
|
||||
|
||||
out:
|
||||
@@ -636,7 +527,7 @@ void scoutfs_client_destroy(struct super_block *sb)
|
||||
if (client == NULL)
|
||||
return;
|
||||
|
||||
if (client->server_term != 0 && !scoutfs_forcing_unmount(sb)) {
|
||||
if (client->server_term != 0) {
|
||||
client->sending_farewell = true;
|
||||
ret = scoutfs_net_submit_request(sb, client->conn,
|
||||
SCOUTFS_NET_CMD_FAREWELL,
|
||||
@@ -644,8 +535,10 @@ void scoutfs_client_destroy(struct super_block *sb)
|
||||
client_farewell_response,
|
||||
NULL, NULL);
|
||||
if (ret == 0) {
|
||||
wait_for_completion(&client->farewell_comp);
|
||||
ret = client->farewell_error;
|
||||
ret = wait_for_completion_interruptible(
|
||||
&client->farewell_comp);
|
||||
if (ret == 0)
|
||||
ret = client->farewell_error;
|
||||
}
|
||||
if (ret) {
|
||||
scoutfs_inc_counter(sb, client_farewell_error);
|
||||
@@ -669,11 +562,3 @@ void scoutfs_client_destroy(struct super_block *sb)
|
||||
kfree(client);
|
||||
sbi->client_info = NULL;
|
||||
}
|
||||
|
||||
void scoutfs_client_net_shutdown(struct super_block *sb)
|
||||
{
|
||||
struct client_info *client = SCOUTFS_SB(sb)->client_info;
|
||||
|
||||
if (client && client->conn)
|
||||
scoutfs_net_shutdown(sb, client->conn);
|
||||
}
|
||||
|
||||
@@ -10,6 +10,7 @@ int scoutfs_client_commit_log_trees(struct super_block *sb,
|
||||
int scoutfs_client_get_roots(struct super_block *sb,
|
||||
struct scoutfs_net_roots *roots);
|
||||
u64 *scoutfs_client_bulk_alloc(struct super_block *sb);
|
||||
int scoutfs_client_advance_seq(struct super_block *sb, u64 *seq);
|
||||
int scoutfs_client_get_last_seq(struct super_block *sb, u64 *seq);
|
||||
int scoutfs_client_lock_request(struct super_block *sb,
|
||||
struct scoutfs_net_lock *nl);
|
||||
@@ -21,21 +22,7 @@ int scoutfs_client_srch_get_compact(struct super_block *sb,
|
||||
struct scoutfs_srch_compact *sc);
|
||||
int scoutfs_client_srch_commit_compact(struct super_block *sb,
|
||||
struct scoutfs_srch_compact *res);
|
||||
int scoutfs_client_get_log_merge(struct super_block *sb,
|
||||
struct scoutfs_log_merge_request *req);
|
||||
int scoutfs_client_commit_log_merge(struct super_block *sb,
|
||||
struct scoutfs_log_merge_complete *comp);
|
||||
int scoutfs_client_send_omap_response(struct super_block *sb, u64 id,
|
||||
struct scoutfs_open_ino_map *map);
|
||||
int scoutfs_client_open_ino_map(struct super_block *sb, u64 group_nr,
|
||||
struct scoutfs_open_ino_map *map);
|
||||
int scoutfs_client_get_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
|
||||
int scoutfs_client_set_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
|
||||
int scoutfs_client_clear_volopt(struct super_block *sb, struct scoutfs_volume_options *volopt);
|
||||
int scoutfs_client_resize_devices(struct super_block *sb, struct scoutfs_net_resize_devices *nrd);
|
||||
int scoutfs_client_statfs(struct super_block *sb, struct scoutfs_net_statfs *nst);
|
||||
|
||||
void scoutfs_client_net_shutdown(struct super_block *sb);
|
||||
int scoutfs_client_setup(struct super_block *sb);
|
||||
void scoutfs_client_destroy(struct super_block *sb);
|
||||
|
||||
|
||||
315
kmod/src/count.h
Normal file
@@ -0,0 +1,315 @@
|
||||
#ifndef _SCOUTFS_COUNT_H_
|
||||
#define _SCOUTFS_COUNT_H_
|
||||
|
||||
/*
 * Our estimate of the space consumed while dirtying items is based on
 * the number of items and the size of their values.
 *
 * The estimate is still a read-only input to entering the transaction.
 * We'd like to use it as a clean rhs arg to hold_trans. We define SIC_
 * functions which return the count struct. This lets us have a single
 * arg and avoid bugs in initializing and passing in struct pointers
 * from callers. The internal __count functions are used to compose an
 * estimate out of the sets of items it manipulates. We program in much
 * clearer C instead of in the preprocessor.
 *
 * Compilers are able to collapse the inlines into constants for the
 * constant estimates.
 */
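As a rough usage sketch (not part of this diff; the caller and its name_len argument are hypothetical), a transaction holder passes one of the SIC_ counts by value when entering the transaction, matching the scoutfs_hold_trans() calls that appear in the data.c hunks further down:

static int example_create_entry(struct super_block *sb, unsigned name_len)
{
	int ret;

	/* reserve the worst-case item/value space for a mknod */
	ret = scoutfs_hold_trans(sb, SIC_MKNOD(name_len));
	if (ret)
		return ret;

	/* ... dirty the inode, index, and dirent items here ... */

	scoutfs_release_trans(sb);
	return 0;
}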
||||
|
||||
struct scoutfs_item_count {
|
||||
signed items;
|
||||
signed vals;
|
||||
};
|
||||
|
||||
/* The caller knows exactly what they're doing. */
|
||||
static inline const struct scoutfs_item_count SIC_EXACT(signed items,
|
||||
signed vals)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {
|
||||
.items = items,
|
||||
.vals = vals,
|
||||
};
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
/*
|
||||
* Allocating an inode creates a new set of indexed items.
|
||||
*/
|
||||
static inline void __count_alloc_inode(struct scoutfs_item_count *cnt)
|
||||
{
|
||||
const int nr_indices = SCOUTFS_INODE_INDEX_NR;
|
||||
|
||||
cnt->items += 1 + nr_indices;
|
||||
cnt->vals += sizeof(struct scoutfs_inode);
|
||||
}
|
||||
|
||||
/*
|
||||
* Dirtying an inode dirties the inode item and can delete and create
|
||||
* the full set of indexed items.
|
||||
*/
|
||||
static inline void __count_dirty_inode(struct scoutfs_item_count *cnt)
|
||||
{
|
||||
const int nr_indices = 2 * SCOUTFS_INODE_INDEX_NR;
|
||||
|
||||
cnt->items += 1 + nr_indices;
|
||||
cnt->vals += sizeof(struct scoutfs_inode);
|
||||
}
|
||||
|
||||
static inline const struct scoutfs_item_count SIC_ALLOC_INODE(void)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
|
||||
__count_alloc_inode(&cnt);
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
static inline const struct scoutfs_item_count SIC_DIRTY_INODE(void)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
|
||||
__count_dirty_inode(&cnt);
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
/*
|
||||
* Directory entries are stored in three items.
|
||||
*/
|
||||
static inline void __count_dirents(struct scoutfs_item_count *cnt,
|
||||
unsigned name_len)
|
||||
{
|
||||
cnt->items += 3;
|
||||
cnt->vals += 3 * offsetof(struct scoutfs_dirent, name[name_len]);
|
||||
}
|
||||
|
||||
static inline void __count_sym_target(struct scoutfs_item_count *cnt,
|
||||
unsigned size)
|
||||
{
|
||||
unsigned nr = DIV_ROUND_UP(size, SCOUTFS_MAX_VAL_SIZE);
|
||||
|
||||
cnt->items += nr;
|
||||
cnt->vals += size;
|
||||
}
|
||||
|
||||
static inline void __count_orphan(struct scoutfs_item_count *cnt)
|
||||
{
|
||||
|
||||
cnt->items += 1;
|
||||
}
|
||||
|
||||
static inline void __count_mknod(struct scoutfs_item_count *cnt,
|
||||
unsigned name_len)
|
||||
{
|
||||
__count_alloc_inode(cnt);
|
||||
__count_dirents(cnt, name_len);
|
||||
__count_dirty_inode(cnt);
|
||||
}
|
||||
|
||||
static inline const struct scoutfs_item_count SIC_MKNOD(unsigned name_len)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
|
||||
__count_mknod(&cnt, name_len);
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
/*
|
||||
* Dropping the inode deletes all its items. Potentially enormous numbers
|
||||
* of items (data mapping, xattrs) are deleted in their own transactions.
|
||||
*/
|
||||
static inline const struct scoutfs_item_count SIC_DROP_INODE(int mode,
|
||||
u64 size)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
|
||||
if (S_ISLNK(mode))
|
||||
__count_sym_target(&cnt, size);
|
||||
__count_dirty_inode(&cnt);
|
||||
__count_orphan(&cnt);
|
||||
|
||||
cnt.vals = 0;
|
||||
return cnt;
|
||||
}
|
||||
|
||||
static inline const struct scoutfs_item_count SIC_LINK(unsigned name_len)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
|
||||
__count_dirents(&cnt, name_len);
|
||||
__count_dirty_inode(&cnt);
|
||||
__count_dirty_inode(&cnt);
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
/*
|
||||
* Unlink can add orphan items.
|
||||
*/
|
||||
static inline const struct scoutfs_item_count SIC_UNLINK(unsigned name_len)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
|
||||
__count_dirents(&cnt, name_len);
|
||||
__count_dirty_inode(&cnt);
|
||||
__count_dirty_inode(&cnt);
|
||||
__count_orphan(&cnt);
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
static inline const struct scoutfs_item_count SIC_SYMLINK(unsigned name_len,
|
||||
unsigned size)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
|
||||
__count_mknod(&cnt, name_len);
|
||||
__count_sym_target(&cnt, size);
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
/*
|
||||
* This assumes the worst case of a rename between directories that
|
||||
* unlinks an existing target. That'll be worse than the common case
|
||||
* by a few hundred bytes.
|
||||
*/
|
||||
static inline const struct scoutfs_item_count SIC_RENAME(unsigned old_len,
|
||||
unsigned new_len)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
|
||||
/* dirty dirs and inodes */
|
||||
__count_dirty_inode(&cnt);
|
||||
__count_dirty_inode(&cnt);
|
||||
__count_dirty_inode(&cnt);
|
||||
__count_dirty_inode(&cnt);
|
||||
|
||||
/* unlink old and new, link new */
|
||||
__count_dirents(&cnt, old_len);
|
||||
__count_dirents(&cnt, new_len);
|
||||
__count_dirents(&cnt, new_len);
|
||||
|
||||
/* orphan the existing target */
|
||||
__count_orphan(&cnt);
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
/*
 * Creating an xattr results in a dirty set of items with values that
 * store the xattr header, name, and value. There's always at least one
 * item with the header and name. Any previously existing items are
 * deleted, which dirties their key but removes their value. The two
 * sets of items are indexed by different ids so their items don't
 * overlap.
 */
||||
static inline const struct scoutfs_item_count SIC_XATTR_SET(unsigned old_parts,
|
||||
bool creating,
|
||||
unsigned name_len,
|
||||
unsigned size)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
unsigned int new_parts;
|
||||
|
||||
__count_dirty_inode(&cnt);
|
||||
|
||||
if (old_parts)
|
||||
cnt.items += old_parts;
|
||||
|
||||
if (creating) {
|
||||
new_parts = SCOUTFS_XATTR_NR_PARTS(name_len, size);
|
||||
|
||||
cnt.items += new_parts;
|
||||
cnt.vals += sizeof(struct scoutfs_xattr) + name_len + size;
|
||||
}
|
||||
|
||||
return cnt;
|
||||
}
|
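To make that accounting concrete, a hedged worked example (the sizes are made up): setting a new xattr with a 10-byte name and a 1024-byte value over an old xattr that used 2 parts counts the dirty inode items, plus 2 items for deleting the old parts, plus SCOUTFS_XATTR_NR_PARTS(10, 1024) items for the new parts; only the inode and the new parts contribute value bytes, sizeof(struct scoutfs_inode) and sizeof(struct scoutfs_xattr) + 10 + 1024 respectively.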
||||
|
||||
/*
 * write_begin can have to allocate all the blocks in the page and can
 * have to add a big allocation from the server to do so:
 *  - merge added free extents from the server
 *  - remove a free extent per block
 *  - remove an offline extent for every other block
 *  - add a file extent per block
 */
||||
static inline const struct scoutfs_item_count SIC_WRITE_BEGIN(void)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
unsigned nr_free = (1 + SCOUTFS_BLOCK_SM_PER_PAGE) * 3;
|
||||
unsigned nr_file = (DIV_ROUND_UP(SCOUTFS_BLOCK_SM_PER_PAGE, 2) +
|
||||
SCOUTFS_BLOCK_SM_PER_PAGE) * 3;
|
||||
|
||||
__count_dirty_inode(&cnt);
|
||||
|
||||
cnt.items += nr_free + nr_file;
|
||||
cnt.vals += nr_file;
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
/*
 * Truncating an extent can:
 *  - delete existing file extent,
 *  - create two surrounding file extents,
 *  - add an offline file extent,
 *  - delete two existing free extents
 *  - create a merged free extent
 */
||||
static inline const struct scoutfs_item_count
|
||||
SIC_TRUNC_EXTENT(struct inode *inode)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
unsigned int nr_file = 1 + 2 + 1;
|
||||
unsigned int nr_free = (2 + 1) * 2;
|
||||
|
||||
if (inode)
|
||||
__count_dirty_inode(&cnt);
|
||||
|
||||
cnt.items += nr_file + nr_free;
|
||||
cnt.vals += nr_file;
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
/*
 * Fallocating an extent can, at most:
 *  - allocate from the server: delete two free and insert merged
 *  - free an allocated extent: delete one and create two split
 *  - remove an unallocated file extent: delete one and create two split
 *  - add a fallocated file extent: delete two and insert one merged
 */
||||
static inline const struct scoutfs_item_count SIC_FALLOCATE_ONE(void)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
unsigned int nr_free = ((1 + 2) * 2) * 2;
|
||||
unsigned int nr_file = (1 + 2) * 2;
|
||||
|
||||
__count_dirty_inode(&cnt);
|
||||
|
||||
cnt.items += nr_free + nr_file;
|
||||
cnt.vals += nr_file;
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
/*
|
||||
* ioc_setattr_more can dirty the inode and add a single offline extent.
|
||||
*/
|
||||
static inline const struct scoutfs_item_count SIC_SETATTR_MORE(void)
|
||||
{
|
||||
struct scoutfs_item_count cnt = {0,};
|
||||
|
||||
__count_dirty_inode(&cnt);
|
||||
|
||||
cnt.items++;
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
#endif
|
||||
@@ -20,21 +20,17 @@
|
||||
EXPAND_COUNTER(alloc_list_freed_hi) \
|
||||
EXPAND_COUNTER(alloc_move) \
|
||||
EXPAND_COUNTER(alloc_moved_extent) \
|
||||
EXPAND_COUNTER(alloc_stale_list_block) \
|
||||
EXPAND_COUNTER(block_cache_access_update) \
|
||||
EXPAND_COUNTER(alloc_stale_cached_list_block) \
|
||||
EXPAND_COUNTER(block_cache_access) \
|
||||
EXPAND_COUNTER(block_cache_alloc_failure) \
|
||||
EXPAND_COUNTER(block_cache_alloc_page_order) \
|
||||
EXPAND_COUNTER(block_cache_alloc_virt) \
|
||||
EXPAND_COUNTER(block_cache_end_io_error) \
|
||||
EXPAND_COUNTER(block_cache_forget) \
|
||||
EXPAND_COUNTER(block_cache_free) \
|
||||
EXPAND_COUNTER(block_cache_free_work) \
|
||||
EXPAND_COUNTER(block_cache_remove_stale) \
|
||||
EXPAND_COUNTER(block_cache_invalidate) \
|
||||
EXPAND_COUNTER(block_cache_lru_move) \
|
||||
EXPAND_COUNTER(block_cache_shrink) \
|
||||
EXPAND_COUNTER(block_cache_shrink_next) \
|
||||
EXPAND_COUNTER(block_cache_shrink_recent) \
|
||||
EXPAND_COUNTER(block_cache_shrink_remove) \
|
||||
EXPAND_COUNTER(block_cache_shrink_restart) \
|
||||
EXPAND_COUNTER(btree_compact_values) \
|
||||
EXPAND_COUNTER(btree_compact_values_enomem) \
|
||||
EXPAND_COUNTER(btree_delete) \
|
||||
@@ -44,18 +40,9 @@
|
||||
EXPAND_COUNTER(btree_insert) \
|
||||
EXPAND_COUNTER(btree_leaf_item_hash_search) \
|
||||
EXPAND_COUNTER(btree_lookup) \
|
||||
EXPAND_COUNTER(btree_merge) \
|
||||
EXPAND_COUNTER(btree_merge_alloc_low) \
|
||||
EXPAND_COUNTER(btree_merge_delete) \
|
||||
EXPAND_COUNTER(btree_merge_delta_combined) \
|
||||
EXPAND_COUNTER(btree_merge_delta_null) \
|
||||
EXPAND_COUNTER(btree_merge_dirty_limit) \
|
||||
EXPAND_COUNTER(btree_merge_drop_old) \
|
||||
EXPAND_COUNTER(btree_merge_insert) \
|
||||
EXPAND_COUNTER(btree_merge_update) \
|
||||
EXPAND_COUNTER(btree_merge_walk) \
|
||||
EXPAND_COUNTER(btree_next) \
|
||||
EXPAND_COUNTER(btree_prev) \
|
||||
EXPAND_COUNTER(btree_read_error) \
|
||||
EXPAND_COUNTER(btree_split) \
|
||||
EXPAND_COUNTER(btree_stale_read) \
|
||||
EXPAND_COUNTER(btree_update) \
|
||||
@@ -71,10 +58,10 @@
|
||||
EXPAND_COUNTER(corrupt_symlink_inode_size) \
|
||||
EXPAND_COUNTER(corrupt_symlink_missing_item) \
|
||||
EXPAND_COUNTER(corrupt_symlink_not_null_term) \
|
||||
EXPAND_COUNTER(data_fallocate_enobufs_retry) \
|
||||
EXPAND_COUNTER(data_write_begin_enobufs_retry) \
|
||||
EXPAND_COUNTER(dentry_revalidate_error) \
|
||||
EXPAND_COUNTER(dentry_revalidate_invalid) \
|
||||
EXPAND_COUNTER(dentry_revalidate_locked) \
|
||||
EXPAND_COUNTER(dentry_revalidate_orphan) \
|
||||
EXPAND_COUNTER(dentry_revalidate_rcu) \
|
||||
EXPAND_COUNTER(dentry_revalidate_root) \
|
||||
EXPAND_COUNTER(dentry_revalidate_valid) \
|
||||
@@ -84,15 +71,12 @@
|
||||
EXPAND_COUNTER(ext_op_remove) \
|
||||
EXPAND_COUNTER(forest_bloom_fail) \
|
||||
EXPAND_COUNTER(forest_bloom_pass) \
|
||||
EXPAND_COUNTER(forest_bloom_stale) \
|
||||
EXPAND_COUNTER(forest_read_items) \
|
||||
EXPAND_COUNTER(forest_roots_next_hint) \
|
||||
EXPAND_COUNTER(forest_set_bloom_bits) \
|
||||
EXPAND_COUNTER(item_clear_dirty) \
|
||||
EXPAND_COUNTER(item_create) \
|
||||
EXPAND_COUNTER(item_delete) \
|
||||
EXPAND_COUNTER(item_delta) \
|
||||
EXPAND_COUNTER(item_delta_written) \
|
||||
EXPAND_COUNTER(item_dirty) \
|
||||
EXPAND_COUNTER(item_invalidate) \
|
||||
EXPAND_COUNTER(item_invalidate_page) \
|
||||
@@ -122,8 +106,12 @@
|
||||
EXPAND_COUNTER(item_write_dirty) \
|
||||
EXPAND_COUNTER(lock_alloc) \
|
||||
EXPAND_COUNTER(lock_free) \
|
||||
EXPAND_COUNTER(lock_grace_extended) \
|
||||
EXPAND_COUNTER(lock_grace_set) \
|
||||
EXPAND_COUNTER(lock_grace_wait) \
|
||||
EXPAND_COUNTER(lock_grant_request) \
|
||||
EXPAND_COUNTER(lock_grant_response) \
|
||||
EXPAND_COUNTER(lock_grant_work) \
|
||||
EXPAND_COUNTER(lock_invalidate_coverage) \
|
||||
EXPAND_COUNTER(lock_invalidate_inode) \
|
||||
EXPAND_COUNTER(lock_invalidate_request) \
|
||||
@@ -149,52 +137,39 @@
|
||||
EXPAND_COUNTER(net_recv_invalid_message) \
|
||||
EXPAND_COUNTER(net_recv_messages) \
|
||||
EXPAND_COUNTER(net_unknown_request) \
|
||||
EXPAND_COUNTER(orphan_scan) \
|
||||
EXPAND_COUNTER(orphan_scan_attempts) \
|
||||
EXPAND_COUNTER(orphan_scan_cached) \
|
||||
EXPAND_COUNTER(orphan_scan_error) \
|
||||
EXPAND_COUNTER(orphan_scan_item) \
|
||||
EXPAND_COUNTER(orphan_scan_omap_set) \
|
||||
EXPAND_COUNTER(quorum_candidate_server_stopping) \
|
||||
EXPAND_COUNTER(quorum_elected) \
|
||||
EXPAND_COUNTER(quorum_fence_error) \
|
||||
EXPAND_COUNTER(quorum_fence_leader) \
|
||||
EXPAND_COUNTER(quorum_cycle) \
|
||||
EXPAND_COUNTER(quorum_elected_leader) \
|
||||
EXPAND_COUNTER(quorum_election_timeout) \
|
||||
EXPAND_COUNTER(quorum_failure) \
|
||||
EXPAND_COUNTER(quorum_read_block) \
|
||||
EXPAND_COUNTER(quorum_read_block_error) \
|
||||
EXPAND_COUNTER(quorum_read_invalid_block) \
|
||||
EXPAND_COUNTER(quorum_recv_error) \
|
||||
EXPAND_COUNTER(quorum_recv_heartbeat) \
|
||||
EXPAND_COUNTER(quorum_recv_invalid) \
|
||||
EXPAND_COUNTER(quorum_recv_resignation) \
|
||||
EXPAND_COUNTER(quorum_recv_vote) \
|
||||
EXPAND_COUNTER(quorum_send_heartbeat) \
|
||||
EXPAND_COUNTER(quorum_send_resignation) \
|
||||
EXPAND_COUNTER(quorum_send_request) \
|
||||
EXPAND_COUNTER(quorum_send_vote) \
|
||||
EXPAND_COUNTER(quorum_server_shutdown) \
|
||||
EXPAND_COUNTER(quorum_term_follower) \
|
||||
EXPAND_COUNTER(quorum_saw_super_leader) \
|
||||
EXPAND_COUNTER(quorum_timedout) \
|
||||
EXPAND_COUNTER(quorum_write_block) \
|
||||
EXPAND_COUNTER(quorum_write_block_error) \
|
||||
EXPAND_COUNTER(quorum_fenced) \
|
||||
EXPAND_COUNTER(server_commit_hold) \
|
||||
EXPAND_COUNTER(server_commit_queue) \
|
||||
EXPAND_COUNTER(server_commit_worker) \
|
||||
EXPAND_COUNTER(srch_add_entry) \
|
||||
EXPAND_COUNTER(srch_compact_dirty_block) \
|
||||
EXPAND_COUNTER(srch_compact_entry) \
|
||||
EXPAND_COUNTER(srch_compact_error) \
|
||||
EXPAND_COUNTER(srch_compact_flush) \
|
||||
EXPAND_COUNTER(srch_compact_log_page) \
|
||||
EXPAND_COUNTER(srch_compact_removed_entry) \
|
||||
EXPAND_COUNTER(srch_inconsistent_ref) \
|
||||
EXPAND_COUNTER(srch_rotate_log) \
|
||||
EXPAND_COUNTER(srch_search_log) \
|
||||
EXPAND_COUNTER(srch_search_log_block) \
|
||||
EXPAND_COUNTER(srch_search_retry_empty) \
|
||||
EXPAND_COUNTER(srch_search_sorted) \
|
||||
EXPAND_COUNTER(srch_search_sorted_block) \
|
||||
EXPAND_COUNTER(srch_search_stale_eio) \
|
||||
EXPAND_COUNTER(srch_search_stale_retry) \
|
||||
EXPAND_COUNTER(srch_search_xattrs) \
|
||||
EXPAND_COUNTER(srch_read_stale) \
|
||||
EXPAND_COUNTER(statfs) \
|
||||
EXPAND_COUNTER(totl_read_copied) \
|
||||
EXPAND_COUNTER(totl_read_finalized) \
|
||||
EXPAND_COUNTER(totl_read_fs) \
|
||||
EXPAND_COUNTER(totl_read_item) \
|
||||
EXPAND_COUNTER(totl_read_logged) \
|
||||
EXPAND_COUNTER(trans_commit_data_alloc_low) \
|
||||
EXPAND_COUNTER(trans_commit_dirty_meta_full) \
|
||||
EXPAND_COUNTER(trans_commit_fsync) \
|
||||
|
||||
350
kmod/src/data.c
@@ -37,6 +37,7 @@
|
||||
#include "lock.h"
|
||||
#include "file.h"
|
||||
#include "msg.h"
|
||||
#include "count.h"
|
||||
#include "ext.h"
|
||||
#include "util.h"
|
||||
|
||||
@@ -207,7 +208,6 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
|
||||
u64 offset;
|
||||
s64 ret;
|
||||
u8 flags;
|
||||
int err;
|
||||
int i;
|
||||
|
||||
flags = offline ? SEF_OFFLINE : 0;
|
||||
@@ -247,18 +247,6 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
|
||||
tr.len = min(ext.len - offset, last - iblock + 1);
|
||||
tr.flags = ext.flags;
|
||||
|
||||
trace_scoutfs_data_extent_truncated(sb, ino, &tr);
|
||||
|
||||
ret = scoutfs_ext_set(sb, &data_ext_ops, &args,
|
||||
tr.start, tr.len, 0, flags);
|
||||
if (ret < 0) {
|
||||
if (WARN_ON_ONCE(ret == -EINVAL)) {
|
||||
scoutfs_err(sb, "unexpected truncate inconsistency: ino %llu iblock %llu last %llu, start %llu len %llu",
|
||||
ino, iblock, last, tr.start, tr.len);
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
||||
if (tr.map) {
|
||||
mutex_lock(&datinf->mutex);
|
||||
ret = scoutfs_free_data(sb, datinf->alloc,
|
||||
@@ -266,16 +254,16 @@ static s64 truncate_extents(struct super_block *sb, struct inode *inode,
|
||||
&datinf->data_freed,
|
||||
tr.map, tr.len);
|
||||
mutex_unlock(&datinf->mutex);
|
||||
if (ret < 0) {
|
||||
err = scoutfs_ext_set(sb, &data_ext_ops, &args,
|
||||
tr.start, tr.len, tr.map, tr.flags);
|
||||
if (err < 0)
|
||||
scoutfs_err(sb, "truncate err %d restoring extent after error %lld: ino %llu start %llu len %llu",
|
||||
err, ret, ino, tr.start, tr.len);
|
||||
if (ret < 0)
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
trace_scoutfs_data_extent_truncated(sb, ino, &tr);
|
||||
|
||||
ret = scoutfs_ext_set(sb, &data_ext_ops, &args,
|
||||
tr.start, tr.len, 0, flags);
|
||||
BUG_ON(ret); /* inconsistent, could prealloc items */
|
||||
|
||||
iblock += tr.len;
|
||||
}
|
||||
|
||||
@@ -303,6 +291,7 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
|
||||
u64 ino, u64 iblock, u64 last, bool offline,
|
||||
struct scoutfs_lock *lock)
|
||||
{
|
||||
struct scoutfs_item_count cnt = SIC_TRUNC_EXTENT(inode);
|
||||
struct scoutfs_inode_info *si = NULL;
|
||||
LIST_HEAD(ind_locks);
|
||||
s64 ret = 0;
|
||||
@@ -325,9 +314,10 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
|
||||
|
||||
while (iblock <= last) {
|
||||
if (inode)
|
||||
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, true, false);
|
||||
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks,
|
||||
true, cnt);
|
||||
else
|
||||
ret = scoutfs_hold_trans(sb, false);
|
||||
ret = scoutfs_hold_trans(sb, cnt);
|
||||
if (ret)
|
||||
break;
|
||||
|
||||
@@ -366,27 +356,27 @@ static inline u64 ext_last(struct scoutfs_extent *ext)
|
||||
|
||||
/*
|
||||
* The caller is writing to a logical iblock that doesn't have an
|
||||
* allocated extent. The caller has searched for an extent containing
|
||||
* iblock. If it already existed then it must be unallocated and
|
||||
* offline.
|
||||
* allocated extent.
|
||||
*
|
||||
* We implement two preallocation strategies. Typically we only
|
||||
* preallocate for simple streaming writes and limit preallocation while
|
||||
* the file is small. The largest efficient allocation size is
|
||||
* typically large enough that it would be unreasonable to allocate that
|
||||
* much for all small files.
|
||||
* We always allocate an extent starting at the logical iblock. The
|
||||
* caller has searched for an extent containing iblock. If it already
|
||||
* existed then it must be unallocated and offline.
|
||||
*
|
||||
* Optionally, we can simply preallocate large empty aligned regions.
|
||||
* This can waste a lot of space for small or sparse files but is
|
||||
* reasonable when a file population is known to be large and dense but
|
||||
* known to be written with non-streaming write patterns.
|
||||
* Preallocation is used if we're strictly contiguously extending
|
||||
* writes. That is, if the logical block offset equals the number of
|
||||
* online blocks. We try to preallocate the number of blocks existing
|
||||
* so that small files don't waste inordinate amounts of space and large
|
||||
* files will eventually see large extents. This only works for
|
||||
* contiguous single stream writes or stages of files from the first
|
||||
* block. It doesn't work for concurrent stages, releasing behind
|
||||
* staging, sparse files, multi-node writes, etc. fallocate() is always
|
||||
* a better tool to use.
|
||||
*/
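A condensed sketch of the contiguous-extending preallocation sizing described above (simplified and assembled from the hunks below, not the verbatim function; the parameter names are illustrative):

static u64 example_prealloc_count(u64 iblock, u64 online, u64 next_start,
				  u64 prealloc_limit)
{
	u64 count;

	/* by default try to allocate up to the next existing extent */
	if (next_start > iblock)
		count = next_start - iblock;
	else
		count = prealloc_limit;

	/* never exceed the overall preallocation limit */
	count = min_t(u64, count, prealloc_limit);

	/* only strictly contiguous extending writes preallocate at all */
	if (iblock > 1 && iblock == online)
		count = min(iblock, count);
	else
		count = 1;

	return count;
}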
|
||||
static int alloc_block(struct super_block *sb, struct inode *inode,
|
||||
struct scoutfs_extent *ext, u64 iblock,
|
||||
struct scoutfs_lock *lock)
|
||||
{
|
||||
DECLARE_DATA_INFO(sb, datinf);
|
||||
struct scoutfs_mount_options opts;
|
||||
const u64 ino = scoutfs_ino(inode);
|
||||
struct data_ext_args args = {
|
||||
.ino = ino,
|
||||
@@ -394,22 +384,17 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
|
||||
.lock = lock,
|
||||
};
|
||||
struct scoutfs_extent found;
|
||||
struct scoutfs_extent pre = {0,};
|
||||
bool undo_pre = false;
|
||||
struct scoutfs_extent pre;
|
||||
u64 blkno = 0;
|
||||
u64 online;
|
||||
u64 offline;
|
||||
u8 flags;
|
||||
u64 start;
|
||||
u64 count;
|
||||
u64 rem;
|
||||
int ret;
|
||||
int err;
|
||||
|
||||
trace_scoutfs_data_alloc_block_enter(sb, ino, iblock, ext);
|
||||
|
||||
scoutfs_options_read(sb, &opts);
|
||||
|
||||
/* can only allocate over existing unallocated offline extent */
|
||||
if (WARN_ON_ONCE(ext->len &&
|
||||
!(iblock >= ext->start && iblock <= ext_last(ext) &&
|
||||
@@ -418,106 +403,66 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
|
||||
|
||||
mutex_lock(&datinf->mutex);
|
||||
|
||||
/* default to single allocation at the written block */
|
||||
start = iblock;
|
||||
count = 1;
|
||||
/* copy existing flags for preallocated regions */
|
||||
flags = ext->len ? ext->flags : 0;
|
||||
scoutfs_inode_get_onoff(inode, &online, &offline);
|
||||
|
||||
if (ext->len) {
|
||||
/*
|
||||
* Assume that offline writers are going to be writing
|
||||
* all the offline extents and try to preallocate the
|
||||
* rest of the unwritten extent.
|
||||
*/
|
||||
/* limit preallocation to remaining existing (offline) extent */
|
||||
count = ext->len - (iblock - ext->start);
|
||||
|
||||
} else if (opts.data_prealloc_contig_only) {
|
||||
/*
|
||||
* Only preallocate when a quick test of the online
|
||||
* block counts looks like we're a simple streaming
|
||||
* write. Try to write until the next extent but limit
|
||||
* the preallocation size to the number of online
|
||||
* blocks.
|
||||
*/
|
||||
scoutfs_inode_get_onoff(inode, &online, &offline);
|
||||
if (iblock > 1 && iblock == online) {
|
||||
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
|
||||
iblock, 1, &found);
|
||||
if (ret < 0 && ret != -ENOENT)
|
||||
goto out;
|
||||
if (found.len && found.start > iblock)
|
||||
count = found.start - iblock;
|
||||
else
|
||||
count = opts.data_prealloc_blocks;
|
||||
|
||||
count = min(iblock, count);
|
||||
}
|
||||
|
||||
flags = ext->flags;
|
||||
} else {
|
||||
/*
|
||||
* Preallocation of aligned regions only preallocates if
|
||||
* the aligned region contains no extents at all. This
|
||||
* could be fooled by offline sparse extents but we
|
||||
* don't want to iterate over all offline extents in the
|
||||
* aligned region.
|
||||
*/
|
||||
div64_u64_rem(iblock, opts.data_prealloc_blocks, &rem);
|
||||
start = iblock - rem;
|
||||
count = opts.data_prealloc_blocks;
|
||||
ret = scoutfs_ext_next(sb, &data_ext_ops, &args, start, 1, &found);
|
||||
/* otherwise alloc to next extent */
|
||||
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
|
||||
iblock, 1, &found);
|
||||
if (ret < 0 && ret != -ENOENT)
|
||||
goto out;
|
||||
if (found.len && found.start < start + count)
|
||||
count = 1;
|
||||
if (found.len && found.start > iblock)
|
||||
count = found.start - iblock;
|
||||
else
|
||||
count = SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT;
|
||||
flags = 0;
|
||||
}
|
||||
|
||||
/* overall prealloc limit */
|
||||
count = min_t(u64, count, opts.data_prealloc_blocks);
|
||||
count = min_t(u64, count, SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT);
|
||||
|
||||
/* only strictly contiguous extending writes will try to preallocate */
|
||||
if (iblock > 1 && iblock == online)
|
||||
count = min(iblock, count);
|
||||
else
|
||||
count = 1;
|
||||
|
||||
ret = scoutfs_alloc_data(sb, datinf->alloc, datinf->wri,
|
||||
&datinf->dalloc, count, &blkno, &count);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
/*
|
||||
* An aligned prealloc attempt that gets a smaller extent can
|
||||
* fail to cover iblock, make sure that it does. This is a
|
||||
* pathological case so we don't try to move the window past
|
||||
* iblock. Just enough to cover it, which we know is safe.
|
||||
*/
|
||||
if (start + count <= iblock)
|
||||
start += (iblock - (start + count) + 1);
|
||||
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock, 1, blkno, 0);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
if (count > 1) {
|
||||
pre.start = start;
|
||||
pre.len = count;
|
||||
pre.map = blkno;
|
||||
pre.start = iblock + 1;
|
||||
pre.len = count - 1;
|
||||
pre.map = blkno + 1;
|
||||
pre.flags = flags | SEF_UNWRITTEN;
|
||||
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, pre.start,
|
||||
pre.len, pre.map, pre.flags);
|
||||
if (ret < 0)
|
||||
if (ret < 0) {
|
||||
err = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock,
|
||||
1, 0, flags);
|
||||
BUG_ON(err); /* couldn't restore original */
|
||||
goto out;
|
||||
undo_pre = true;
|
||||
}
|
||||
}
|
||||
|
||||
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock, 1, blkno + (iblock - start), 0);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
/* tell the caller we have a single block, could check next? */
|
||||
ext->start = iblock;
|
||||
ext->len = 1;
|
||||
ext->map = blkno + (iblock - start);
|
||||
ext->map = blkno;
|
||||
ext->flags = 0;
|
||||
ret = 0;
|
||||
out:
|
||||
if (ret < 0 && blkno > 0) {
|
||||
if (undo_pre) {
|
||||
err = scoutfs_ext_set(sb, &data_ext_ops, &args,
|
||||
pre.start, pre.len, 0, flags);
|
||||
BUG_ON(err); /* leaked preallocated extent */
|
||||
}
|
||||
err = scoutfs_free_data(sb, datinf->alloc, datinf->wri,
|
||||
&datinf->data_freed, blkno, count);
|
||||
BUG_ON(err); /* leaked free blocks */
|
||||
@@ -631,8 +576,8 @@ static int scoutfs_get_block_read(struct inode *inode, sector_t iblock,
|
||||
return ret;
|
||||
}
|
||||
|
||||
int scoutfs_get_block_write(struct inode *inode, sector_t iblock, struct buffer_head *bh,
|
||||
int create)
|
||||
static int scoutfs_get_block_write(struct inode *inode, sector_t iblock,
|
||||
struct buffer_head *bh, int create)
|
||||
{
|
||||
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
|
||||
int ret;
|
||||
@@ -808,12 +753,13 @@ static int scoutfs_write_begin(struct file *file,
|
||||
goto out;
|
||||
}
|
||||
|
||||
retry:
|
||||
do {
|
||||
ret = scoutfs_inode_index_start(sb, &ind_seq) ?:
|
||||
scoutfs_inode_index_prepare(sb, &wbd->ind_locks, inode,
|
||||
true) ?:
|
||||
scoutfs_inode_index_try_lock_hold(sb, &wbd->ind_locks, ind_seq, true);
|
||||
scoutfs_inode_index_try_lock_hold(sb, &wbd->ind_locks,
|
||||
ind_seq,
|
||||
SIC_WRITE_BEGIN());
|
||||
} while (ret > 0);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
@@ -822,22 +768,17 @@ retry:
|
||||
flags |= AOP_FLAG_NOFS;
|
||||
|
||||
/* generic write_end updates i_size and calls dirty_inode */
|
||||
ret = scoutfs_dirty_inode_item(inode, wbd->lock) ?:
|
||||
block_write_begin(mapping, pos, len, flags, pagep,
|
||||
scoutfs_get_block_write);
|
||||
if (ret < 0) {
|
||||
ret = scoutfs_dirty_inode_item(inode, wbd->lock);
|
||||
if (ret == 0)
|
||||
ret = block_write_begin(mapping, pos, len, flags, pagep,
|
||||
scoutfs_get_block_write);
|
||||
if (ret)
|
||||
scoutfs_release_trans(sb);
|
||||
scoutfs_inode_index_unlock(sb, &wbd->ind_locks);
|
||||
if (ret == -ENOBUFS) {
|
||||
/* Retry with a new transaction. */
|
||||
scoutfs_inc_counter(sb, data_write_begin_enobufs_retry);
|
||||
goto retry;
|
||||
}
|
||||
}
|
||||
|
||||
out:
|
||||
if (ret < 0)
|
||||
if (ret) {
|
||||
scoutfs_inode_index_unlock(sb, &wbd->ind_locks);
|
||||
kfree(wbd);
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -875,7 +816,6 @@ static int scoutfs_write_end(struct file *file, struct address_space *mapping,
|
||||
scoutfs_inode_inc_data_version(inode);
|
||||
}
|
||||
|
||||
inode_inc_iversion(inode);
|
||||
scoutfs_update_inode_item(inode, wbd->lock, &wbd->ind_locks);
|
||||
scoutfs_inode_queue_writeback(inode);
|
||||
}
|
||||
@@ -1028,6 +968,9 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
|
||||
u64 last;
|
||||
s64 ret;
|
||||
|
||||
mutex_lock(&inode->i_mutex);
|
||||
down_write(&si->extent_sem);
|
||||
|
||||
/* XXX support more flags */
|
||||
if (mode & ~(FALLOC_FL_KEEP_SIZE)) {
|
||||
ret = -EOPNOTSUPP;
|
||||
@@ -1045,22 +988,18 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
|
||||
goto out;
|
||||
}
|
||||
|
||||
mutex_lock(&inode->i_mutex);
|
||||
|
||||
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
|
||||
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
|
||||
if (ret)
|
||||
goto out_mutex;
|
||||
goto out;
|
||||
|
||||
inode_dio_wait(inode);
|
||||
|
||||
down_write(&si->extent_sem);
|
||||
|
||||
if (!(mode & FALLOC_FL_KEEP_SIZE) &&
|
||||
(offset + len > i_size_read(inode))) {
|
||||
ret = inode_newsize_ok(inode, offset + len);
|
||||
if (ret)
|
||||
goto out_extent;
|
||||
goto out;
|
||||
}
|
||||
|
||||
iblock = offset >> SCOUTFS_BLOCK_SM_SHIFT;
|
||||
@@ -1068,9 +1007,10 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
|
||||
|
||||
while(iblock <= last) {
|
||||
|
||||
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
|
||||
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false,
|
||||
SIC_FALLOCATE_ONE());
|
||||
if (ret)
|
||||
goto out_extent;
|
||||
goto out;
|
||||
|
||||
ret = fallocate_extents(sb, inode, iblock, last, lock);
|
||||
|
||||
@@ -1078,37 +1018,26 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
|
||||
end = (iblock + ret) << SCOUTFS_BLOCK_SM_SHIFT;
|
||||
if (end > offset + len)
|
||||
end = offset + len;
|
||||
if (end > i_size_read(inode)) {
|
||||
if (end > i_size_read(inode))
|
||||
i_size_write(inode, end);
|
||||
inode_inc_iversion(inode);
|
||||
scoutfs_inode_inc_data_version(inode);
|
||||
}
|
||||
}
|
||||
if (ret >= 0)
|
||||
scoutfs_update_inode_item(inode, lock, &ind_locks);
|
||||
scoutfs_release_trans(sb);
|
||||
scoutfs_inode_index_unlock(sb, &ind_locks);
|
||||
|
||||
/* txn couldn't meet the request. Let's try with a new txn */
|
||||
if (ret == -ENOBUFS) {
|
||||
scoutfs_inc_counter(sb, data_fallocate_enobufs_retry);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (ret <= 0)
|
||||
goto out_extent;
|
||||
goto out;
|
||||
|
||||
iblock += ret;
|
||||
ret = 0;
|
||||
}
|
||||
|
||||
out_extent:
|
||||
up_write(&si->extent_sem);
|
||||
out_mutex:
|
||||
out:
|
||||
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
|
||||
up_write(&si->extent_sem);
|
||||
mutex_unlock(&inode->i_mutex);
|
||||
|
||||
out:
|
||||
trace_scoutfs_data_fallocate(sb, ino, mode, offset, len, ret);
|
||||
return ret;
|
||||
}
|
||||
@@ -1149,7 +1078,8 @@ int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
|
||||
}
|
||||
|
||||
/* we're updating meta_seq with offline block count */
|
||||
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
|
||||
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, false,
|
||||
SIC_SETATTR_MORE());
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
@@ -1192,14 +1122,13 @@ static void truncate_inode_pages_extent(struct inode *inode, u64 start, u64 len)
|
||||
* explained above the move_blocks ioctl argument structure definition.
|
||||
*
|
||||
* The caller has processed the ioctl args and performed the most basic
|
||||
* argument sanity and inode checks, but we perform more detailed inode
|
||||
* checks once we have the inode lock and refreshed inodes. Our job is
|
||||
* to safely lock the two files and move the extents.
|
||||
* inode checks, but we perform more detailed inode checks once we have
|
||||
* the inode lock and refreshed inodes. Our job is to safely lock the
|
||||
* two files and move the extents.
|
||||
*/
|
||||
#define MOVE_DATA_EXTENTS_PER_HOLD 16
|
||||
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
u64 byte_len, struct inode *to, u64 to_off, bool is_stage,
|
||||
u64 data_version)
|
||||
u64 byte_len, struct inode *to, u64 to_off)
|
||||
{
|
||||
struct scoutfs_inode_info *from_si = SCOUTFS_I(from);
|
||||
struct scoutfs_inode_info *to_si = SCOUTFS_I(to);
|
||||
@@ -1209,7 +1138,6 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
struct data_ext_args from_args;
|
||||
struct data_ext_args to_args;
|
||||
struct scoutfs_extent ext;
|
||||
struct timespec cur_time;
|
||||
LIST_HEAD(locks);
|
||||
bool done = false;
|
||||
loff_t from_size;
|
||||
@@ -1245,24 +1173,10 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (is_stage && (data_version != SCOUTFS_I(to)->data_version)) {
|
||||
ret = -ESTALE;
|
||||
goto out;
|
||||
}
|
||||
|
||||
from_iblock = from_off >> SCOUTFS_BLOCK_SM_SHIFT;
|
||||
count = (byte_len + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
|
||||
to_iblock = to_off >> SCOUTFS_BLOCK_SM_SHIFT;
|
||||
|
||||
/* only move extent blocks inside i_size, careful not to wrap */
|
||||
from_size = i_size_read(from);
|
||||
if (from_off >= from_size) {
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
if (from_off + byte_len > from_size)
|
||||
count = ((from_size - from_off) + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
|
||||
|
||||
if (S_ISDIR(from->i_mode) || S_ISDIR(to->i_mode)) {
|
||||
ret = -EISDIR;
|
||||
goto out;
|
||||
@@ -1281,7 +1195,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
/* can't stage once data_version changes */
|
||||
scoutfs_inode_get_onoff(from, &junk, &from_offline);
|
||||
scoutfs_inode_get_onoff(to, &junk, &to_offline);
|
||||
if (from_offline || (to_offline && !is_stage)) {
|
||||
if (from_offline || to_offline) {
|
||||
ret = -ENODATA;
|
||||
goto out;
|
||||
}
|
||||
@@ -1310,7 +1224,8 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
ret = scoutfs_inode_index_start(sb, &seq) ?:
|
||||
scoutfs_inode_index_prepare(sb, &locks, from, true) ?:
|
||||
scoutfs_inode_index_prepare(sb, &locks, to, true) ?:
|
||||
scoutfs_inode_index_try_lock_hold(sb, &locks, seq, false);
|
||||
scoutfs_inode_index_try_lock_hold(sb, &locks, seq,
|
||||
SIC_EXACT(1, 1));
|
||||
if (ret > 0)
|
||||
continue;
|
||||
if (ret < 0)
|
||||
@@ -1325,8 +1240,6 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
|
||||
/* arbitrarily limit the number of extents per trans hold */
|
||||
for (i = 0; i < MOVE_DATA_EXTENTS_PER_HOLD; i++) {
|
||||
struct scoutfs_extent off_ext;
|
||||
|
||||
/* find the next extent to move */
|
||||
ret = scoutfs_ext_next(sb, &data_ext_ops, &from_args,
|
||||
from_iblock, 1, &ext);
|
||||
@@ -1338,8 +1251,9 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
break;
|
||||
}
|
||||
|
||||
/* done if next extent starts after moving region */
|
||||
if (ext.start >= from_iblock + count) {
|
||||
/* only move extents within count and i_size */
|
||||
if (ext.start >= from_iblock + count ||
|
||||
ext.start >= i_size_read(from)) {
|
||||
done = true;
|
||||
ret = 0;
|
||||
break;
|
||||
@@ -1347,36 +1261,17 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
|
||||
from_start = max(ext.start, from_iblock);
|
||||
map = ext.map + (from_start - ext.start);
|
||||
len = min(from_iblock + count, ext.start + ext.len) - from_start;
|
||||
len = min3(from_iblock + count,
|
||||
round_up((u64)i_size_read(from),
|
||||
SCOUTFS_BLOCK_SM_SIZE),
|
||||
ext.start + ext.len) - from_start;
|
||||
|
||||
to_start = to_iblock + (from_start - from_iblock);
|
||||
|
||||
/* we'd get stuck, shouldn't happen */
|
||||
if (WARN_ON_ONCE(len == 0)) {
|
||||
ret = -EIO;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (is_stage) {
|
||||
ret = scoutfs_ext_next(sb, &data_ext_ops, &to_args,
|
||||
to_start, 1, &off_ext);
|
||||
if (ret)
|
||||
break;
|
||||
|
||||
if (!scoutfs_ext_inside(to_start, len, &off_ext) ||
|
||||
!(off_ext.flags & SEF_OFFLINE)) {
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
|
||||
ret = scoutfs_ext_set(sb, &data_ext_ops, &to_args,
|
||||
to_start, len,
|
||||
map, ext.flags);
|
||||
} else {
|
||||
/* insert the new, fails if it overlaps */
|
||||
ret = scoutfs_ext_insert(sb, &data_ext_ops, &to_args,
|
||||
to_start, len,
|
||||
map, ext.flags);
|
||||
}
|
||||
/* insert the new, fails if it overlaps */
|
||||
ret = scoutfs_ext_insert(sb, &data_ext_ops, &to_args,
|
||||
to_start, len,
|
||||
map, ext.flags);
|
||||
if (ret < 0)
|
||||
break;
|
||||
|
||||
@@ -1384,18 +1279,10 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
ret = scoutfs_ext_set(sb, &data_ext_ops, &from_args,
|
||||
from_start, len, 0, 0);
|
||||
if (ret < 0) {
|
||||
if (is_stage) {
|
||||
/* re-mark dest range as offline */
|
||||
WARN_ON_ONCE(!(off_ext.flags & SEF_OFFLINE));
|
||||
err = scoutfs_ext_set(sb, &data_ext_ops, &to_args,
|
||||
to_start, len,
|
||||
0, off_ext.flags);
|
||||
} else {
|
||||
/* remove inserted new on err */
|
||||
err = scoutfs_ext_remove(sb, &data_ext_ops,
|
||||
&to_args, to_start,
|
||||
len);
|
||||
}
|
||||
/* remove inserted new on err */
|
||||
err = scoutfs_ext_remove(sb, &data_ext_ops,
|
||||
&to_args, to_start,
|
||||
len);
|
||||
BUG_ON(err); /* XXX inconsistent */
|
||||
break;
|
||||
}
|
||||
@@ -1423,17 +1310,12 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
up_write(&from_si->extent_sem);
|
||||
up_write(&to_si->extent_sem);
|
||||
|
||||
cur_time = CURRENT_TIME;
|
||||
if (!is_stage) {
|
||||
to->i_ctime = to->i_mtime = cur_time;
|
||||
inode_inc_iversion(to);
|
||||
scoutfs_inode_inc_data_version(to);
|
||||
scoutfs_inode_set_data_seq(to);
|
||||
}
|
||||
from->i_ctime = from->i_mtime = cur_time;
|
||||
inode_inc_iversion(from);
|
||||
from->i_ctime = from->i_mtime =
|
||||
to->i_ctime = to->i_mtime = CURRENT_TIME;
|
||||
scoutfs_inode_inc_data_version(from);
|
||||
scoutfs_inode_inc_data_version(to);
|
||||
scoutfs_inode_set_data_seq(from);
|
||||
scoutfs_inode_set_data_seq(to);
|
||||
|
||||
scoutfs_update_inode_item(from, from_lock, &locks);
|
||||
scoutfs_update_inode_item(to, to_lock, &locks);
|
||||
@@ -1919,17 +1801,13 @@ int scoutfs_data_prepare_commit(struct super_block *sb)
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Return true if the data allocator is lower than the caller's
|
||||
* requirement and we haven't been told by the server that we're out of
|
||||
* free extents.
|
||||
*/
|
||||
bool scoutfs_data_alloc_should_refill(struct super_block *sb, u64 blocks)
|
||||
u64 scoutfs_data_alloc_free_bytes(struct super_block *sb)
|
||||
{
|
||||
DECLARE_DATA_INFO(sb, datinf);
|
||||
|
||||
return (scoutfs_dalloc_total_len(&datinf->dalloc) < blocks) &&
|
||||
!(le32_to_cpu(datinf->dalloc.root.flags) & SCOUTFS_ALLOC_FLAG_LOW);
|
||||
return scoutfs_dalloc_total_len(&datinf->dalloc) <<
|
||||
SCOUTFS_BLOCK_SM_SHIFT;
|
||||
|
||||
}
|
||||
|
||||
int scoutfs_data_setup(struct super_block *sb)
|
||||
|
||||
@@ -38,14 +38,18 @@ struct scoutfs_data_wait {
|
||||
.err = 0, \
|
||||
}
|
||||
|
||||
struct scoutfs_traced_extent {
|
||||
u64 iblock;
|
||||
u64 count;
|
||||
u64 blkno;
|
||||
u8 flags;
|
||||
};
|
||||
|
||||
extern const struct address_space_operations scoutfs_file_aops;
|
||||
extern const struct file_operations scoutfs_file_fops;
|
||||
struct scoutfs_alloc;
|
||||
struct scoutfs_block_writer;
|
||||
|
||||
int scoutfs_get_block_write(struct inode *inode, sector_t iblock, struct buffer_head *bh,
|
||||
int create);
|
||||
|
||||
int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
|
||||
u64 ino, u64 iblock, u64 last, bool offline,
|
||||
struct scoutfs_lock *lock);
|
||||
@@ -55,8 +59,7 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len);
|
||||
int scoutfs_data_init_offline_extent(struct inode *inode, u64 size,
|
||||
struct scoutfs_lock *lock);
|
||||
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
|
||||
u64 byte_len, struct inode *to, u64 to_off, bool to_stage,
|
||||
u64 data_version);
|
||||
u64 byte_len, struct inode *to, u64 to_off);
|
||||
|
||||
int scoutfs_data_wait_check(struct inode *inode, loff_t pos, loff_t len,
|
||||
u8 sef, u8 op, struct scoutfs_data_wait *ow,
|
||||
@@ -82,7 +85,7 @@ void scoutfs_data_init_btrees(struct super_block *sb,
|
||||
void scoutfs_data_get_btrees(struct super_block *sb,
|
||||
struct scoutfs_log_trees *lt);
|
||||
int scoutfs_data_prepare_commit(struct super_block *sb);
|
||||
bool scoutfs_data_alloc_should_refill(struct super_block *sb, u64 blocks);
|
||||
u64 scoutfs_data_alloc_free_bytes(struct super_block *sb);
|
||||
|
||||
int scoutfs_data_setup(struct super_block *sb);
|
||||
void scoutfs_data_destroy(struct super_block *sb);
|
||||
|
||||
669
kmod/src/dir.c
File diff suppressed because it is too large
@@ -5,18 +5,16 @@
|
||||
#include "lock.h"
|
||||
|
||||
extern const struct file_operations scoutfs_dir_fops;
|
||||
extern const struct inode_operations_wrapper scoutfs_dir_iops;
|
||||
extern const struct inode_operations scoutfs_dir_iops;
|
||||
extern const struct inode_operations scoutfs_symlink_iops;
|
||||
|
||||
extern const struct dentry_operations scoutfs_dentry_ops;
|
||||
|
||||
struct scoutfs_link_backref_entry {
|
||||
struct list_head head;
|
||||
u64 dir_ino;
|
||||
u64 dir_pos;
|
||||
u16 name_len;
|
||||
struct scoutfs_dirent dent;
|
||||
/* the full name is allocated and stored in dent.name[] */
|
||||
/* the full name is allocated and stored in dent.name[0] */
|
||||
};
|
||||
|
||||
int scoutfs_dir_get_backref_path(struct super_block *sb, u64 ino, u64 dir_ino,
|
||||
@@ -31,4 +29,7 @@ int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
|
||||
int scoutfs_symlink_drop(struct super_block *sb, u64 ino,
|
||||
struct scoutfs_lock *lock, u64 i_size);
|
||||
|
||||
int scoutfs_dir_init(void);
|
||||
void scoutfs_dir_exit(void);
|
||||
|
||||
#endif
|
||||
|
||||
@@ -81,7 +81,7 @@ static struct dentry *scoutfs_fh_to_dentry(struct super_block *sb,
|
||||
trace_scoutfs_fh_to_dentry(sb, fh_type, sfid);
|
||||
|
||||
if (scoutfs_valid_fileid(fh_type))
|
||||
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino), 0, SCOUTFS_IGF_LINKED);
|
||||
inode = scoutfs_iget(sb, le64_to_cpu(sfid->ino));
|
||||
|
||||
return d_obtain_alias(inode);
|
||||
}
|
||||
@@ -100,7 +100,7 @@ static struct dentry *scoutfs_fh_to_parent(struct super_block *sb,
|
||||
|
||||
if (scoutfs_valid_fileid(fh_type) &&
|
||||
fh_type == FILEID_SCOUTFS_WITH_PARENT)
|
||||
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino), 0, SCOUTFS_IGF_LINKED);
|
||||
inode = scoutfs_iget(sb, le64_to_cpu(sfid->parent_ino));
|
||||
|
||||
return d_obtain_alias(inode);
|
||||
}
|
||||
@@ -123,7 +123,7 @@ static struct dentry *scoutfs_get_parent(struct dentry *child)
|
||||
scoutfs_dir_free_backref_path(sb, &list);
|
||||
trace_scoutfs_get_parent(sb, inode, ino);
|
||||
|
||||
inode = scoutfs_iget(sb, ino, 0, SCOUTFS_IGF_LINKED);
|
||||
inode = scoutfs_iget(sb, ino);
|
||||
|
||||
return d_obtain_alias(inode);
|
||||
}
|
||||
|
||||
@@ -13,7 +13,6 @@
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/fs.h>
|
||||
|
||||
#include "msg.h"
|
||||
#include "ext.h"
|
||||
#include "counters.h"
|
||||
#include "scoutfs_trace.h"
|
||||
@@ -39,7 +38,7 @@ static bool ext_overlap(struct scoutfs_extent *ext, u64 start, u64 len)
|
||||
return !(e_end < start || ext->start > end);
|
||||
}
|
||||
|
||||
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out)
|
||||
static bool ext_inside(u64 start, u64 len, struct scoutfs_extent *out)
|
||||
{
|
||||
u64 in_end = start + len - 1;
|
||||
u64 out_end = out->start + out->len - 1;
|
||||
@@ -192,9 +191,6 @@ int scoutfs_ext_insert(struct super_block *sb, struct scoutfs_ext_ops *ops,
|
||||
|
||||
/* inserting extent must not overlap */
|
||||
if (found.len && ext_overlap(&ins, found.start, found.len)) {
|
||||
if (ops->insert_overlap_warn)
|
||||
scoutfs_err(sb, "inserting extent %llu.%llu overlaps existing %llu.%llu",
|
||||
start, len, found.start, found.len);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
@@ -245,9 +241,7 @@ int scoutfs_ext_remove(struct super_block *sb, struct scoutfs_ext_ops *ops,
|
||||
goto out;
|
||||
|
||||
/* removed extent must be entirely within found */
|
||||
if (!scoutfs_ext_inside(start, len, &found)) {
|
||||
scoutfs_err(sb, "error removing extent %llu.%llu, isn't inside existing %llu.%llu",
|
||||
start, len, found.start, found.len);
|
||||
if (!ext_inside(start, len, &found)) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
@@ -347,7 +341,7 @@ int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
|
||||
|
||||
if (ret == 0 && ext_overlap(&found, start, len)) {
|
||||
/* set extent must be entirely within found */
|
||||
if (!scoutfs_ext_inside(start, len, &found)) {
|
||||
if (!ext_inside(start, len, &found)) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
@@ -15,8 +15,6 @@ struct scoutfs_ext_ops {
|
||||
u64 start, u64 len, u64 map, u8 flags);
|
||||
int (*remove)(struct super_block *sb, void *arg, u64 start, u64 len,
|
||||
u64 map, u8 flags);
|
||||
|
||||
bool insert_overlap_warn;
|
||||
};
|
||||
|
||||
bool scoutfs_ext_can_merge(struct scoutfs_extent *left,
|
||||
@@ -33,6 +31,5 @@ int scoutfs_ext_alloc(struct super_block *sb, struct scoutfs_ext_ops *ops,
|
||||
struct scoutfs_extent *ext);
|
||||
int scoutfs_ext_set(struct super_block *sb, struct scoutfs_ext_ops *ops,
|
||||
void *arg, u64 start, u64 len, u64 map, u8 flags);
|
||||
bool scoutfs_ext_inside(u64 start, u64 len, struct scoutfs_extent *out);
|
||||
|
||||
#endif
|
||||
|
||||
481
kmod/src/fence.c
@@ -1,481 +0,0 @@
|
||||
/*
|
||||
* Copyright (C) 2019 Versity Software, Inc. All rights reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU General Public
|
||||
* License v2 as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* General Public License for more details.
|
||||
*/
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/fs.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/sched.h>
|
||||
#include <linux/kobject.h>
|
||||
#include <linux/sysfs.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/timer.h>
|
||||
#include <asm/barrier.h>
|
||||
|
||||
#include "super.h"
|
||||
#include "msg.h"
|
||||
#include "sysfs.h"
|
||||
#include "server.h"
|
||||
#include "fence.h"
|
||||
|
||||
/*
 * Fencing ensures that a given mount can no longer write to the
 * metadata or data devices. It's necessary to ensure that it's safe to
 * give another mount access to a resource that is currently owned by a
 * mount that has stopped responding.
 *
 * Fencing is performed in collaboration between the currently elected
 * quorum leader mount and userspace running on its host. The kernel
 * creates fencing requests as it notices that mounts have stopped
 * participating. The fence requests are published as directories in
 * sysfs. Userspace agents watch for directories, take action, and
 * write to files in the directory to indicate that the mount has been
 * fenced. Once the mount is fenced the server can reclaim the
 * resources previously held by the fenced mount.
 *
 * The fence requests contain metadata identifying the specific instance
 * of the mount that needs to be fenced. This lets a fencing agent
 * ensure that a specific mount has been fenced without necessarily
 * destroying the node that was hosting it. Maybe the node had rebooted
 * and the mount is no longer there, maybe the mount can be force
 * unmounted, maybe the node can be configured to isolate the mount from
 * the devices.
 *
 * The fencing mechanism is asynchronous and can fail but the server
 * cannot make progress until it completes. If a fence request times
 * out the server shuts down in the hope that another instance of a
 * server might have more luck fencing a non-responsive mount.
 *
 * Sources of fencing are fundamentally anchored in shared persistent
 * state. It is possible, though unlikely, that servers can fence a
 * node and then themselves fail, leaving the next server to try and
 * fence the mount again.
 */
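A hypothetical userspace sketch of the acknowledgement half of that handshake (the sysfs directory path is illustrative; only the "fenced" attribute name comes from this file, and any write to it marks the rid as fenced):

#include <limits.h>
#include <stdio.h>

/* acknowledge a fence request once the mount has really been isolated */
static int ack_fence_request(const char *request_dir)
{
	char path[PATH_MAX];
	FILE *f;

	snprintf(path, sizeof(path), "%s/fenced", request_dir);
	f = fopen(path, "w");
	if (!f)
		return -1;

	fputs("1\n", f);	/* any write indicates the mount was fenced */
	fclose(f);
	return 0;
}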
|
||||
|
||||
struct fence_info {
|
||||
struct kset *kset;
|
||||
struct kobject fence_dir_kobj;
|
||||
struct workqueue_struct *wq;
|
||||
wait_queue_head_t waitq;
|
||||
spinlock_t lock;
|
||||
struct list_head list;
|
||||
};
|
||||
|
||||
#define DECLARE_FENCE_INFO(sb, name) \
|
||||
struct fence_info *name = SCOUTFS_SB(sb)->fence_info
|
||||
|
||||
struct pending_fence {
|
||||
struct super_block *sb;
|
||||
struct scoutfs_sysfs_attrs ssa;
|
||||
struct list_head entry;
|
||||
struct timer_list timer;
|
||||
|
||||
ktime_t start_kt;
|
||||
__be32 ipv4_addr;
|
||||
bool fenced;
|
||||
bool error;
|
||||
int reason;
|
||||
u64 rid;
|
||||
};
|
||||
|
||||
#define FENCE_FROM_KOBJ(kobj) \
|
||||
container_of(SCOUTFS_SYSFS_ATTRS(kobj), struct pending_fence, ssa)
|
||||
#define DECLARE_FENCE_FROM_KOBJ(name, kobj) \
|
||||
struct pending_fence *name = FENCE_FROM_KOBJ(kobj)
|
||||
|
||||
static void destroy_fence(struct pending_fence *fence)
|
||||
{
|
||||
struct super_block *sb = fence->sb;
|
||||
|
||||
scoutfs_sysfs_destroy_attrs(sb, &fence->ssa);
|
||||
del_timer_sync(&fence->timer);
|
||||
kfree(fence);
|
||||
}
|
||||
|
||||
static ssize_t elapsed_secs_show(struct kobject *kobj,
|
||||
struct kobj_attribute *attr, char *buf)
|
||||
{
|
||||
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
|
||||
ktime_t now = ktime_get();
|
||||
struct timeval tv = { 0, };
|
||||
|
||||
if (ktime_after(now, fence->start_kt))
|
||||
tv = ktime_to_timeval(ktime_sub(now, fence->start_kt));
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%llu", (long long)tv.tv_sec);
|
||||
}
|
||||
SCOUTFS_ATTR_RO(elapsed_secs);
|
||||
|
||||
static ssize_t fenced_show(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%u", !!fence->fenced);
|
||||
}
|
||||
|
||||
/*
 * Any write to the fenced file from userspace indicates that the mount
 * has been safely fenced and can no longer write to the shared device.
 */
static ssize_t fenced_store(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
|
||||
DECLARE_FENCE_INFO(fence->sb, fi);
|
||||
|
||||
if (!fence->fenced) {
|
||||
del_timer_sync(&fence->timer);
|
||||
fence->fenced = true;
|
||||
wake_up(&fi->waitq);
|
||||
}
|
||||
|
||||
return count;
|
||||
}
|
||||
SCOUTFS_ATTR_RW(fenced);
|
||||
|
||||
static ssize_t error_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
|
||||
{
|
||||
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%u", !!fence->error);
|
||||
}
|
||||
|
||||
/*
 * The fencing agent can tell us that it was unable to fence the given
 * mount.  We can't continue if the mount can't be isolated, so we shut
 * down the server.
 */
static ssize_t error_store(struct kobject *kobj, struct kobj_attribute *attr, const char *buf,
|
||||
size_t count)
|
||||
{
|
||||
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
|
||||
struct super_block *sb = fence->sb;
|
||||
DECLARE_FENCE_INFO(fence->sb, fi);
|
||||
|
||||
if (!fence->error) {
|
||||
fence->error = true;
|
||||
scoutfs_err(sb, "error indicated by fence action for rid %016llx", fence->rid);
|
||||
wake_up(&fi->waitq);
|
||||
}
|
||||
|
||||
return count;
|
||||
}
|
||||
SCOUTFS_ATTR_RW(error);
|
||||
|
||||
static ssize_t ipv4_addr_show(struct kobject *kobj,
|
||||
struct kobj_attribute *attr, char *buf)
|
||||
{
|
||||
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%pI4", &fence->ipv4_addr);
|
||||
}
|
||||
SCOUTFS_ATTR_RO(ipv4_addr);
|
||||
|
||||
static ssize_t reason_show(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
|
||||
unsigned r = fence->reason;
|
||||
char *str = "unknown";
|
||||
static char *reasons[] = {
|
||||
[SCOUTFS_FENCE_CLIENT_RECOVERY] = "client_recovery",
|
||||
[SCOUTFS_FENCE_CLIENT_RECONNECT] = "client_reconnect",
|
||||
[SCOUTFS_FENCE_QUORUM_BLOCK_LEADER] = "quorum_block_leader",
|
||||
};
|
||||
|
||||
if (r < ARRAY_SIZE(reasons) && reasons[r])
|
||||
str = reasons[r];
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%s", str);
|
||||
}
|
||||
SCOUTFS_ATTR_RO(reason);
|
||||
|
||||
static ssize_t rid_show(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
DECLARE_FENCE_FROM_KOBJ(fence, kobj);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%016llx", fence->rid);
|
||||
}
|
||||
SCOUTFS_ATTR_RO(rid);
|
||||
|
||||
static struct attribute *fence_attrs[] = {
|
||||
SCOUTFS_ATTR_PTR(elapsed_secs),
|
||||
SCOUTFS_ATTR_PTR(fenced),
|
||||
SCOUTFS_ATTR_PTR(error),
|
||||
SCOUTFS_ATTR_PTR(ipv4_addr),
|
||||
SCOUTFS_ATTR_PTR(reason),
|
||||
SCOUTFS_ATTR_PTR(rid),
|
||||
NULL,
|
||||
};
|
||||
|
||||
#define FENCE_TIMEOUT_MS (MSEC_PER_SEC * 30)
|
||||
|
||||
static void fence_timeout(struct timer_list *timer)
|
||||
{
|
||||
struct pending_fence *fence = from_timer(fence, timer, timer);
|
||||
struct super_block *sb = fence->sb;
|
||||
DECLARE_FENCE_INFO(sb, fi);
|
||||
|
||||
fence->error = true;
|
||||
scoutfs_err(sb, "fence request for rid %016llx was not serviced in %lums, raising error",
|
||||
fence->rid, FENCE_TIMEOUT_MS);
|
||||
wake_up(&fi->waitq);
|
||||
}
|
||||
|
||||
int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int reason)
|
||||
{
|
||||
DECLARE_FENCE_INFO(sb, fi);
|
||||
struct pending_fence *fence;
|
||||
int ret;
|
||||
|
||||
fence = kzalloc(sizeof(struct pending_fence), GFP_NOFS);
|
||||
if (!fence) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
fence->sb = sb;
|
||||
scoutfs_sysfs_init_attrs(sb, &fence->ssa);
|
||||
|
||||
fence->start_kt = ktime_get();
|
||||
fence->ipv4_addr = ipv4_addr;
|
||||
fence->fenced = false;
|
||||
fence->error = false;
|
||||
fence->reason = reason;
|
||||
fence->rid = rid;
|
||||
|
||||
ret = scoutfs_sysfs_create_attrs_parent(sb, &fi->kset->kobj,
|
||||
&fence->ssa, fence_attrs,
|
||||
"%016llx", rid);
|
||||
if (ret < 0) {
|
||||
kfree(fence);
|
||||
goto out;
|
||||
}
|
||||
|
||||
timer_setup(&fence->timer, fence_timeout, 0);
|
||||
fence->timer.expires = jiffies + msecs_to_jiffies(FENCE_TIMEOUT_MS);
|
||||
add_timer(&fence->timer);
|
||||
|
||||
spin_lock(&fi->lock);
|
||||
list_add_tail(&fence->entry, &fi->list);
|
||||
spin_unlock(&fi->lock);
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
 * Give the caller the rid of the next fence request which has been
 * fenced.  This doesn't take a position from which to return the next
 * request because the caller either frees the fence request it's given
 * or shuts down.
 */
int scoutfs_fence_next(struct super_block *sb, u64 *rid, int *reason, bool *error)
|
||||
{
|
||||
DECLARE_FENCE_INFO(sb, fi);
|
||||
struct pending_fence *fence;
|
||||
int ret = -ENOENT;
|
||||
|
||||
spin_lock(&fi->lock);
|
||||
list_for_each_entry(fence, &fi->list, entry) {
|
||||
if (fence->fenced || fence->error) {
|
||||
*rid = fence->rid;
|
||||
*reason = fence->reason;
|
||||
*error = fence->error;
|
||||
ret = 0;
|
||||
break;
|
||||
}
|
||||
}
|
||||
spin_unlock(&fi->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int scoutfs_fence_reason_pending(struct super_block *sb, int reason)
|
||||
{
|
||||
DECLARE_FENCE_INFO(sb, fi);
|
||||
struct pending_fence *fence;
|
||||
bool pending = false;
|
||||
|
||||
spin_lock(&fi->lock);
|
||||
list_for_each_entry(fence, &fi->list, entry) {
|
||||
if (fence->reason == reason) {
|
||||
pending = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
spin_unlock(&fi->lock);
|
||||
|
||||
return pending;
|
||||
}
|
||||
|
||||
int scoutfs_fence_free(struct super_block *sb, u64 rid)
|
||||
{
|
||||
DECLARE_FENCE_INFO(sb, fi);
|
||||
struct pending_fence *fence;
|
||||
int ret = -ENOENT;
|
||||
|
||||
spin_lock(&fi->lock);
|
||||
list_for_each_entry(fence, &fi->list, entry) {
|
||||
if (fence->rid == rid) {
|
||||
list_del_init(&fence->entry);
|
||||
ret = 0;
|
||||
break;
|
||||
}
|
||||
}
|
||||
spin_unlock(&fi->lock);
|
||||
|
||||
if (ret == 0) {
|
||||
destroy_fence(fence);
|
||||
wake_up(&fi->waitq);
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static bool all_fenced(struct fence_info *fi, bool *error)
|
||||
{
|
||||
struct pending_fence *fence;
|
||||
bool all = true;
|
||||
|
||||
*error = false;
|
||||
|
||||
spin_lock(&fi->lock);
|
||||
list_for_each_entry(fence, &fi->list, entry) {
|
||||
if (fence->error) {
|
||||
*error = true;
|
||||
all = true;
|
||||
break;
|
||||
}
|
||||
if (!fence->fenced) {
|
||||
all = false;
|
||||
break;
|
||||
}
|
||||
}
|
||||
spin_unlock(&fi->lock);
|
||||
|
||||
return all;
|
||||
}
|
||||
|
||||
/*
 * The caller waits for all the current requests to be fenced, but not
 * necessarily reclaimed.
 */
int scoutfs_fence_wait_fenced(struct super_block *sb, long timeout_jiffies)
|
||||
{
|
||||
DECLARE_FENCE_INFO(sb, fi);
|
||||
bool error;
|
||||
long ret;
|
||||
|
||||
ret = wait_event_timeout(fi->waitq, all_fenced(fi, &error), timeout_jiffies);
|
||||
if (ret == 0)
|
||||
ret = -ETIMEDOUT;
|
||||
else if (ret > 0)
|
||||
ret = 0;
|
||||
else if (error)
|
||||
ret = -EIO;
|
||||
|
||||
return ret;
|
||||
}
|
||||
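A rough sketch of how a server-side caller might drive this interface end to end; example_recover_mount() and the reclaim step are placeholders for illustration, not functions that exist in this tree.

/* illustrative only: drive a fence request for one departed mount */
static int example_recover_mount(struct super_block *sb, u64 rid, __be32 addr)
{
	u64 fenced_rid;
	int reason;
	bool error;
	int ret;

	ret = scoutfs_fence_start(sb, rid, addr, SCOUTFS_FENCE_CLIENT_RECOVERY);
	if (ret < 0)
		return ret;

	/* wait for a userspace agent to write to "fenced" (or "error") */
	ret = scoutfs_fence_wait_fenced(sb, msecs_to_jiffies(60 * MSEC_PER_SEC));
	if (ret < 0)
		return ret;

	while (scoutfs_fence_next(sb, &fenced_rid, &reason, &error) == 0) {
		if (error)
			return -EIO;
		/* ... reclaim the locks and log trees held by fenced_rid ... */
		scoutfs_fence_free(sb, fenced_rid);
	}

	return 0;
}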
|
||||
/*
 * This must be called early during startup so that it is guaranteed that
 * no other subsystems will try and call fence_start while we're waiting
 * for testing fence requests to complete.
 */
int scoutfs_fence_setup(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct scoutfs_mount_options opts;
|
||||
struct fence_info *fi;
|
||||
int ret;
|
||||
|
||||
/* can only fence if we can be elected by quorum */
|
||||
scoutfs_options_read(sb, &opts);
|
||||
if (opts.quorum_slot_nr == -1) {
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
|
||||
fi = kzalloc(sizeof(struct fence_info), GFP_KERNEL);
|
||||
if (!fi) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
init_waitqueue_head(&fi->waitq);
|
||||
spin_lock_init(&fi->lock);
|
||||
INIT_LIST_HEAD(&fi->list);
|
||||
|
||||
sbi->fence_info = fi;
|
||||
|
||||
fi->kset = kset_create_and_add("fence", NULL, scoutfs_sysfs_sb_dir(sb));
|
||||
if (!fi->kset) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
fi->wq = alloc_workqueue("scoutfs_fence",
|
||||
WQ_UNBOUND | WQ_NON_REENTRANT, 0);
|
||||
if (!fi->wq) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = 0;
|
||||
out:
|
||||
if (ret)
|
||||
scoutfs_fence_destroy(sb);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
 * Tear down all pending fence requests because the server is shutting down.
 */
void scoutfs_fence_stop(struct super_block *sb)
|
||||
{
|
||||
DECLARE_FENCE_INFO(sb, fi);
|
||||
struct pending_fence *fence;
|
||||
|
||||
do {
|
||||
spin_lock(&fi->lock);
|
||||
fence = list_first_entry_or_null(&fi->list, struct pending_fence, entry);
|
||||
if (fence)
|
||||
list_del_init(&fence->entry);
|
||||
spin_unlock(&fi->lock);
|
||||
|
||||
if (fence) {
|
||||
destroy_fence(fence);
|
||||
wake_up(&fi->waitq);
|
||||
}
|
||||
} while (fence);
|
||||
}
|
||||
|
||||
void scoutfs_fence_destroy(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct fence_info *fi = SCOUTFS_SB(sb)->fence_info;
|
||||
struct pending_fence *fence;
|
||||
struct pending_fence *tmp;
|
||||
|
||||
if (fi) {
|
||||
if (fi->wq)
|
||||
destroy_workqueue(fi->wq);
|
||||
list_for_each_entry_safe(fence, tmp, &fi->list, entry)
|
||||
destroy_fence(fence);
|
||||
if (fi->kset)
|
||||
kset_unregister(fi->kset);
|
||||
kfree(fi);
|
||||
sbi->fence_info = NULL;
|
||||
}
|
||||
}
|
||||
@@ -1,20 +0,0 @@
|
||||
#ifndef _SCOUTFS_FENCE_H_
|
||||
#define _SCOUTFS_FENCE_H_
|
||||
|
||||
enum {
|
||||
SCOUTFS_FENCE_CLIENT_RECOVERY,
|
||||
SCOUTFS_FENCE_CLIENT_RECONNECT,
|
||||
SCOUTFS_FENCE_QUORUM_BLOCK_LEADER,
|
||||
};
|
||||
|
||||
int scoutfs_fence_start(struct super_block *sb, u64 rid, __be32 ipv4_addr, int reason);
|
||||
int scoutfs_fence_next(struct super_block *sb, u64 *rid, int *reason, bool *error);
|
||||
int scoutfs_fence_reason_pending(struct super_block *sb, int reason);
|
||||
int scoutfs_fence_free(struct super_block *sb, u64 rid);
|
||||
int scoutfs_fence_wait_fenced(struct super_block *sb, long timeout_jiffies);
|
||||
|
||||
int scoutfs_fence_setup(struct super_block *sb);
|
||||
void scoutfs_fence_stop(struct super_block *sb);
|
||||
void scoutfs_fence_destroy(struct super_block *sb);
|
||||
|
||||
#endif
|
||||
@@ -27,14 +27,8 @@
|
||||
#include "file.h"
|
||||
#include "inode.h"
|
||||
#include "per_task.h"
|
||||
#include "omap.h"
|
||||
|
||||
/*
 * Start a high level file read.  We check for offline extents in the
 * read region here so that we only check the extents once.  We use the
 * dio count to prevent releasing while we're reading after we've
 * checked the extents.
 */
/* TODO: Direct I/O, AIO */
ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
|
||||
unsigned long nr_segs, loff_t pos)
|
||||
{
|
||||
@@ -48,32 +42,30 @@ ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
|
||||
int ret;
|
||||
|
||||
retry:
|
||||
/* protect checked extents from release */
|
||||
mutex_lock(&inode->i_mutex);
|
||||
atomic_inc(&inode->i_dio_count);
|
||||
mutex_unlock(&inode->i_mutex);
|
||||
|
||||
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
|
||||
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
|
||||
/* protect checked extents from stage/release */
|
||||
mutex_lock(&inode->i_mutex);
|
||||
atomic_inc(&inode->i_dio_count);
|
||||
mutex_unlock(&inode->i_mutex);
|
||||
|
||||
ret = scoutfs_data_wait_check_iov(inode, iov, nr_segs, pos,
|
||||
SEF_OFFLINE,
|
||||
SCOUTFS_IOC_DWO_READ,
|
||||
&dw, inode_lock);
|
||||
if (ret != 0)
|
||||
goto out;
|
||||
} else {
|
||||
WARN_ON_ONCE(true);
|
||||
}
|
||||
|
||||
ret = generic_file_aio_read(iocb, iov, nr_segs, pos);
|
||||
|
||||
out:
|
||||
inode_dio_done(inode);
|
||||
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
|
||||
if (scoutfs_per_task_del(&si->pt_data_lock, &pt_ent))
|
||||
inode_dio_done(inode);
|
||||
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
|
||||
|
||||
if (scoutfs_data_wait_found(&dw)) {
|
||||
|
||||
@@ -26,7 +26,6 @@
|
||||
#include "hash.h"
|
||||
#include "srch.h"
|
||||
#include "counters.h"
|
||||
#include "xattr.h"
|
||||
#include "scoutfs_trace.h"
|
||||
|
||||
/*
|
||||
@@ -38,9 +37,9 @@
|
||||
*
|
||||
* The log btrees are modified by multiple transactions over time so
|
||||
* there is no consistent ordering relationship between the items in
|
||||
* different btrees. Each item in a log btree stores a seq for the
|
||||
* item. Readers check log btrees for the most recent seq that it
|
||||
* should use.
|
||||
* different btrees. Each item in a log btree stores a version number
|
||||
* for the item. Readers check log btrees for the most recent version
|
||||
* that it should use.
|
||||
*
|
||||
* The item cache reads items in bulk from stable btrees, and writes a
|
||||
* transaction's worth of dirty items into the item log btree.
|
||||
@@ -53,8 +52,6 @@
|
||||
*/
|
||||
|
||||
struct forest_info {
|
||||
struct super_block *sb;
|
||||
|
||||
struct mutex mutex;
|
||||
struct scoutfs_alloc *alloc;
|
||||
struct scoutfs_block_writer *wri;
|
||||
@@ -63,21 +60,21 @@ struct forest_info {
|
||||
struct mutex srch_mutex;
|
||||
struct scoutfs_srch_file srch_file;
|
||||
struct scoutfs_block *srch_bl;
|
||||
|
||||
struct workqueue_struct *workq;
|
||||
struct delayed_work log_merge_dwork;
|
||||
|
||||
atomic64_t inode_count_delta;
|
||||
};
|
||||
|
||||
#define DECLARE_FOREST_INFO(sb, name) \
|
||||
struct forest_info *name = SCOUTFS_SB(sb)->forest_info
|
||||
|
||||
struct forest_refs {
|
||||
struct scoutfs_block_ref fs_ref;
|
||||
struct scoutfs_block_ref logs_ref;
|
||||
struct scoutfs_btree_ref fs_ref;
|
||||
struct scoutfs_btree_ref logs_ref;
|
||||
};
|
||||
|
||||
/* initialize some refs that initially aren't equal */
|
||||
#define DECLARE_STALE_TRACKING_SUPER_REFS(a, b) \
|
||||
struct forest_refs a = {{cpu_to_le64(0),}}; \
|
||||
struct forest_refs b = {{cpu_to_le64(1),}}
|
||||
|
||||
struct forest_bloom_nrs {
|
||||
unsigned int nrs[SCOUTFS_FOREST_BLOOM_NRS];
|
||||
};
|
||||
@@ -99,16 +96,20 @@ static void calc_bloom_nrs(struct forest_bloom_nrs *bloom,
|
||||
}
|
||||
}
|
||||
|
||||
static struct scoutfs_block *read_bloom_ref(struct super_block *sb, struct scoutfs_block_ref *ref)
|
||||
static struct scoutfs_block *read_bloom_ref(struct super_block *sb,
|
||||
struct scoutfs_btree_ref *ref)
|
||||
{
|
||||
struct scoutfs_block *bl;
|
||||
int ret;
|
||||
|
||||
ret = scoutfs_block_read_ref(sb, ref, SCOUTFS_BLOCK_MAGIC_BLOOM, &bl);
|
||||
if (ret < 0) {
|
||||
if (ret == -ESTALE)
|
||||
scoutfs_inc_counter(sb, forest_bloom_stale);
|
||||
bl = ERR_PTR(ret);
|
||||
bl = scoutfs_block_read(sb, le64_to_cpu(ref->blkno));
|
||||
if (IS_ERR(bl))
|
||||
return bl;
|
||||
|
||||
if (!scoutfs_block_consistent_ref(sb, bl, ref->seq, ref->blkno,
|
||||
SCOUTFS_BLOCK_MAGIC_BLOOM)) {
|
||||
scoutfs_block_invalidate(sb, bl);
|
||||
scoutfs_block_put(sb, bl);
|
||||
return ERR_PTR(-ESTALE);
|
||||
}
|
||||
|
||||
return bl;
|
||||
@@ -131,11 +132,11 @@ static struct scoutfs_block *read_bloom_ref(struct super_block *sb, struct scout
|
||||
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
|
||||
struct scoutfs_key *next)
|
||||
{
|
||||
DECLARE_STALE_TRACKING_SUPER_REFS(prev_refs, refs);
|
||||
struct scoutfs_net_roots roots;
|
||||
struct scoutfs_btree_root item_root;
|
||||
struct scoutfs_log_trees *lt;
|
||||
SCOUTFS_BTREE_ITEM_REF(iref);
|
||||
DECLARE_SAVED_REFS(saved);
|
||||
struct scoutfs_key found;
|
||||
struct scoutfs_key ltk;
|
||||
bool checked_fs;
|
||||
@@ -150,6 +151,8 @@ retry:
|
||||
goto out;
|
||||
|
||||
trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
|
||||
refs.fs_ref = roots.fs_root.ref;
|
||||
refs.logs_ref = roots.logs_root.ref;
|
||||
|
||||
scoutfs_key_init_log_trees(<k, 0, 0);
|
||||
checked_fs = false;
|
||||
@@ -205,25 +208,37 @@ retry:
|
||||
}
|
||||
}
|
||||
|
||||
ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.fs_root.ref, &roots.logs_root.ref);
|
||||
if (ret == -ESTALE)
|
||||
if (ret == -ESTALE) {
|
||||
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0)
|
||||
return -EIO;
|
||||
prev_refs = refs;
|
||||
goto retry;
|
||||
}
|
||||
out:
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
struct forest_read_items_data {
|
||||
int fic;
|
||||
bool is_fs;
|
||||
scoutfs_forest_item_cb cb;
|
||||
void *cb_arg;
|
||||
};
|
||||
|
||||
static int forest_read_items(struct super_block *sb, struct scoutfs_key *key, u64 seq, u8 flags,
|
||||
static int forest_read_items(struct super_block *sb, struct scoutfs_key *key,
|
||||
void *val, int val_len, void *arg)
|
||||
{
|
||||
struct forest_read_items_data *rid = arg;
|
||||
struct scoutfs_log_item_value _liv = {0,};
|
||||
struct scoutfs_log_item_value *liv = &_liv;
|
||||
|
||||
return rid->cb(sb, key, seq, flags, val, val_len, rid->fic, rid->cb_arg);
|
||||
if (!rid->is_fs) {
|
||||
liv = val;
|
||||
val += sizeof(struct scoutfs_log_item_value);
|
||||
val_len -= sizeof(struct scoutfs_log_item_value);
|
||||
}
|
||||
|
||||
return rid->cb(sb, key, liv, val, val_len, rid->cb_arg);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -235,16 +250,19 @@ static int forest_read_items(struct super_block *sb, struct scoutfs_key *key, u6
|
||||
* that covers all the blocks. Any keys outside of this range can't be
|
||||
* trusted because we didn't visit all the trees to check their items.
|
||||
*
|
||||
* We return -ESTALE if we hit stale blocks to give the caller a chance
|
||||
* to reset their state and retry with a newer version of the btrees.
|
||||
* If we hit stale blocks and retry we can call the callback for
|
||||
* duplicate items. This is harmless because the items are stable while
|
||||
* the caller holds their cluster lock and the caller has to filter out
|
||||
* item versions anyway.
|
||||
*/
|
||||
int scoutfs_forest_read_items(struct super_block *sb,
|
||||
struct scoutfs_lock *lock,
|
||||
struct scoutfs_key *key,
|
||||
struct scoutfs_key *bloom_key,
|
||||
struct scoutfs_key *start,
|
||||
struct scoutfs_key *end,
|
||||
scoutfs_forest_item_cb cb, void *arg)
|
||||
{
|
||||
DECLARE_STALE_TRACKING_SUPER_REFS(prev_refs, refs);
|
||||
struct forest_read_items_data rid = {
|
||||
.cb = cb,
|
||||
.cb_arg = arg,
|
||||
@@ -256,30 +274,32 @@ int scoutfs_forest_read_items(struct super_block *sb,
|
||||
SCOUTFS_BTREE_ITEM_REF(iref);
|
||||
struct scoutfs_block *bl;
|
||||
struct scoutfs_key ltk;
|
||||
struct scoutfs_key orig_start = *start;
|
||||
struct scoutfs_key orig_end = *end;
|
||||
int ret;
|
||||
int i;
|
||||
|
||||
scoutfs_inc_counter(sb, forest_read_items);
|
||||
calc_bloom_nrs(&bloom, bloom_key);
|
||||
calc_bloom_nrs(&bloom, &lock->start);
|
||||
|
||||
roots = lock->roots;
|
||||
retry:
|
||||
ret = scoutfs_client_get_roots(sb, &roots);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
|
||||
refs.fs_ref = roots.fs_root.ref;
|
||||
refs.logs_ref = roots.logs_root.ref;
|
||||
|
||||
*start = orig_start;
|
||||
*end = orig_end;
|
||||
*start = lock->start;
|
||||
*end = lock->end;
|
||||
|
||||
/* start with fs root items */
|
||||
rid.fic |= FIC_FS_ROOT;
|
||||
rid.is_fs = true;
|
||||
ret = scoutfs_btree_read_items(sb, &roots.fs_root, key, start, end,
|
||||
forest_read_items, &rid);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
rid.fic &= ~FIC_FS_ROOT;
|
||||
rid.is_fs = false;
|
||||
|
||||
scoutfs_key_init_log_trees(<k, 0, 0);
|
||||
for (;; scoutfs_key_inc(<k)) {
|
||||
@@ -324,40 +344,30 @@ int scoutfs_forest_read_items(struct super_block *sb,
|
||||
|
||||
scoutfs_inc_counter(sb, forest_bloom_pass);
|
||||
|
||||
if ((le64_to_cpu(lt.flags) & SCOUTFS_LOG_TREES_FINALIZED))
|
||||
rid.fic |= FIC_FINALIZED;
|
||||
|
||||
ret = scoutfs_btree_read_items(sb, <.item_root, key, start,
|
||||
end, forest_read_items, &rid);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
rid.fic &= ~FIC_FINALIZED;
|
||||
}
|
||||
|
||||
ret = 0;
|
||||
out:
|
||||
if (ret == -ESTALE) {
|
||||
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0) {
|
||||
ret = -EIO;
|
||||
goto out;
|
||||
}
|
||||
prev_refs = refs;
|
||||
|
||||
ret = scoutfs_client_get_roots(sb, &roots);
|
||||
if (ret)
|
||||
goto out;
|
||||
goto retry;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
 * If the items are deltas then combine the src with the destination
 * value and store the result in the destination.
 *
 * Returns:
 *  -errno: fatal error, no change
 *  0: not delta items, no change
 *  +ve: SCOUTFS_DELTA_ values indicating when dst and/or src can be dropped
 */
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
|
||||
void *src, int src_len)
|
||||
{
|
||||
if (key->sk_zone == SCOUTFS_XATTR_TOTL_ZONE)
|
||||
return scoutfs_xattr_combine_totl(dst, dst_len, src, src_len);
|
||||
|
||||
return 0;
|
||||
}
|
||||
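As an illustration of the delta contract above, combining two .totl. xattr values could look roughly like this, based on the scoutfs_xattr_totl_val layout in format.h; the real scoutfs_xattr_combine_totl() may differ in detail.

/* illustrative only: fold a source .totl. delta value into the destination */
static int example_combine_totl(void *dst, int dst_len, void *src, int src_len)
{
	struct scoutfs_xattr_totl_val *d = dst;
	struct scoutfs_xattr_totl_val *s = src;

	if (dst_len != sizeof(*d) || src_len != sizeof(*s))
		return -EIO;

	le64_add_cpu(&d->total, le64_to_cpu(s->total));
	le64_add_cpu(&d->count, le64_to_cpu(s->count));

	/* a zero count means the combined total no longer exists */
	if (d->count == 0)
		return SCOUTFS_DELTA_COMBINED_NULL;

	return SCOUTFS_DELTA_COMBINED;
}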
|
||||
/*
|
||||
* Make sure that the bloom bits for the lock's start key are all set in
|
||||
* the current log's bloom block. We record the nr of our log tree in
|
||||
@@ -371,14 +381,18 @@ int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_le
|
||||
int scoutfs_forest_set_bloom_bits(struct super_block *sb,
|
||||
struct scoutfs_lock *lock)
|
||||
{
|
||||
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
|
||||
DECLARE_FOREST_INFO(sb, finf);
|
||||
struct scoutfs_block *new_bl = NULL;
|
||||
struct scoutfs_block *bl = NULL;
|
||||
struct scoutfs_bloom_block *bb;
|
||||
struct scoutfs_block_ref *ref;
|
||||
struct scoutfs_btree_ref *ref;
|
||||
struct forest_bloom_nrs bloom;
|
||||
int nr_set = 0;
|
||||
u64 blkno;
|
||||
u64 nr;
|
||||
int ret;
|
||||
int err;
|
||||
int i;
|
||||
|
||||
nr = le64_to_cpu(finf->our_log.nr);
|
||||
@@ -396,11 +410,53 @@ int scoutfs_forest_set_bloom_bits(struct super_block *sb,
|
||||
|
||||
ref = &finf->our_log.bloom_ref;
|
||||
|
||||
ret = scoutfs_block_dirty_ref(sb, finf->alloc, finf->wri, ref, SCOUTFS_BLOCK_MAGIC_BLOOM,
|
||||
&bl, 0, NULL);
|
||||
if (ret < 0)
|
||||
goto unlock;
|
||||
bb = bl->data;
|
||||
if (ref->blkno) {
|
||||
bl = read_bloom_ref(sb, ref);
|
||||
if (IS_ERR(bl)) {
|
||||
ret = PTR_ERR(bl);
|
||||
goto unlock;
|
||||
}
|
||||
bb = bl->data;
|
||||
}
|
||||
|
||||
if (!ref->blkno || !scoutfs_block_writer_is_dirty(sb, bl)) {
|
||||
|
||||
ret = scoutfs_alloc_meta(sb, finf->alloc, finf->wri, &blkno);
|
||||
if (ret < 0)
|
||||
goto unlock;
|
||||
|
||||
new_bl = scoutfs_block_create(sb, blkno);
|
||||
if (IS_ERR(new_bl)) {
|
||||
err = scoutfs_free_meta(sb, finf->alloc, finf->wri,
|
||||
blkno);
|
||||
BUG_ON(err); /* could have dirtied */
|
||||
ret = PTR_ERR(new_bl);
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
if (bl) {
|
||||
err = scoutfs_free_meta(sb, finf->alloc, finf->wri,
|
||||
le64_to_cpu(ref->blkno));
|
||||
BUG_ON(err); /* could have dirtied */
|
||||
memcpy(new_bl->data, bl->data, SCOUTFS_BLOCK_LG_SIZE);
|
||||
} else {
|
||||
memset(new_bl->data, 0, SCOUTFS_BLOCK_LG_SIZE);
|
||||
}
|
||||
|
||||
scoutfs_block_writer_mark_dirty(sb, finf->wri, new_bl);
|
||||
|
||||
scoutfs_block_put(sb, bl);
|
||||
bl = new_bl;
|
||||
bb = bl->data;
|
||||
new_bl = NULL;
|
||||
|
||||
bb->hdr.magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_BLOOM);
|
||||
bb->hdr.fsid = super->hdr.fsid;
|
||||
bb->hdr.blkno = cpu_to_le64(blkno);
|
||||
prandom_bytes(&bb->hdr.seq, sizeof(bb->hdr.seq));
|
||||
ref->blkno = bb->hdr.blkno;
|
||||
ref->seq = bb->hdr.seq;
|
||||
}
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(bloom.nrs); i++) {
|
||||
if (!test_and_set_bit_le(bloom.nrs[i], bb->bits)) {
|
||||
@@ -427,29 +483,29 @@ out:
|
||||
|
||||
/*
|
||||
 * The caller is committing items in the transaction and has found the
|
||||
* greatest item seq amongst them. We store it in the log_trees root
|
||||
* greatest item version amongst them. We store it in the log_trees root
|
||||
* to send to the server.
|
||||
*/
|
||||
void scoutfs_forest_set_max_seq(struct super_block *sb, u64 max_seq)
|
||||
void scoutfs_forest_set_max_vers(struct super_block *sb, u64 max_vers)
|
||||
{
|
||||
DECLARE_FOREST_INFO(sb, finf);
|
||||
|
||||
finf->our_log.max_item_seq = cpu_to_le64(max_seq);
|
||||
finf->our_log.max_item_vers = cpu_to_le64(max_vers);
|
||||
}
|
||||
|
||||
/*
|
||||
* The server is calling during setup to find the greatest item seq
|
||||
* The server is calling during setup to find the greatest item version
|
||||
* amongst all the log tree roots. They have the authoritative current
|
||||
* super.
|
||||
*
|
||||
* Item seqs are only used to compare items in log trees, not in the
|
||||
* main fs tree. All we have to do is find the greatest seq amongst the
|
||||
* log_trees so that the core seq will have a greater seq than all the
|
||||
* items in the log_trees.
|
||||
* Item versions are only used to compare items in log trees, not in the
|
||||
* main fs tree. All we have to do is find the greatest version amongst
|
||||
* the log_trees so that new locks will have a write_version greater
|
||||
* than all the items in the log_trees.
|
||||
*/
|
||||
int scoutfs_forest_get_max_seq(struct super_block *sb,
|
||||
struct scoutfs_super_block *super,
|
||||
u64 *seq)
|
||||
int scoutfs_forest_get_max_vers(struct super_block *sb,
|
||||
struct scoutfs_super_block *super,
|
||||
u64 *vers)
|
||||
{
|
||||
struct scoutfs_log_trees *lt;
|
||||
SCOUTFS_BTREE_ITEM_REF(iref);
|
||||
@@ -457,7 +513,7 @@ int scoutfs_forest_get_max_seq(struct super_block *sb,
|
||||
int ret;
|
||||
|
||||
scoutfs_key_init_log_trees(<k, 0, 0);
|
||||
*seq = 0;
|
||||
*vers = 0;
|
||||
|
||||
for (;; scoutfs_key_inc(<k)) {
|
||||
ret = scoutfs_btree_next(sb, &super->logs_root, <k, &iref);
|
||||
@@ -465,7 +521,8 @@ int scoutfs_forest_get_max_seq(struct super_block *sb,
|
||||
if (iref.val_len == sizeof(struct scoutfs_log_trees)) {
|
||||
ltk = *iref.key;
|
||||
lt = iref.val;
|
||||
*seq = max(*seq, le64_to_cpu(lt->max_item_seq));
|
||||
*vers = max(*vers,
|
||||
le64_to_cpu(lt->max_item_vers));
|
||||
} else {
|
||||
ret = -EIO;
|
||||
}
|
||||
@@ -514,59 +571,6 @@ int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id)
|
||||
return ret;
|
||||
}
|
||||
|
||||
void scoutfs_forest_inc_inode_count(struct super_block *sb)
|
||||
{
|
||||
DECLARE_FOREST_INFO(sb, finf);
|
||||
|
||||
atomic64_inc(&finf->inode_count_delta);
|
||||
}
|
||||
|
||||
void scoutfs_forest_dec_inode_count(struct super_block *sb)
|
||||
{
|
||||
DECLARE_FOREST_INFO(sb, finf);
|
||||
|
||||
atomic64_dec(&finf->inode_count_delta);
|
||||
}
|
||||
|
||||
/*
 * Return the total inode count from the super block and all the
 * log_btrees it references.  ESTALE from read blocks is returned to the
 * caller who is expected to retry or return hard errors.
 */
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
|
||||
u64 *inode_count)
|
||||
{
|
||||
struct scoutfs_log_trees *lt;
|
||||
SCOUTFS_BTREE_ITEM_REF(iref);
|
||||
struct scoutfs_key key;
|
||||
int ret;
|
||||
|
||||
*inode_count = le64_to_cpu(super->inode_count);
|
||||
|
||||
scoutfs_key_init_log_trees(&key, 0, 0);
|
||||
for (;;) {
|
||||
ret = scoutfs_btree_next(sb, &super->logs_root, &key, &iref);
|
||||
if (ret == 0) {
|
||||
if (iref.val_len == sizeof(*lt)) {
|
||||
key = *iref.key;
|
||||
scoutfs_key_inc(&key);
|
||||
lt = iref.val;
|
||||
*inode_count += le64_to_cpu(lt->inode_count_delta);
|
||||
} else {
|
||||
ret = -EIO;
|
||||
}
|
||||
scoutfs_btree_put_iref(&iref);
|
||||
}
|
||||
if (ret < 0) {
|
||||
if (ret == -ENOENT)
|
||||
ret = 0;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* This is called from transactions as a new transaction opens and is
|
||||
* serialized with all writers.
|
||||
@@ -587,7 +591,7 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
|
||||
memset(&finf->our_log, 0, sizeof(finf->our_log));
|
||||
finf->our_log.item_root = lt->item_root;
|
||||
finf->our_log.bloom_ref = lt->bloom_ref;
|
||||
finf->our_log.max_item_seq = lt->max_item_seq;
|
||||
finf->our_log.max_item_vers = lt->max_item_vers;
|
||||
finf->our_log.rid = lt->rid;
|
||||
finf->our_log.nr = lt->nr;
|
||||
finf->srch_file = lt->srch_file;
|
||||
@@ -595,8 +599,6 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
|
||||
	WARN_ON_ONCE(finf->srch_bl); /* committing should have put the block */
|
||||
finf->srch_bl = NULL;
|
||||
|
||||
atomic64_set(&finf->inode_count_delta, le64_to_cpu(lt->inode_count_delta));
|
||||
|
||||
trace_scoutfs_forest_init_our_log(sb, le64_to_cpu(lt->rid),
|
||||
le64_to_cpu(lt->nr),
|
||||
le64_to_cpu(lt->item_root.ref.blkno),
|
||||
@@ -619,137 +621,15 @@ void scoutfs_forest_get_btrees(struct super_block *sb,
|
||||
lt->item_root = finf->our_log.item_root;
|
||||
lt->bloom_ref = finf->our_log.bloom_ref;
|
||||
lt->srch_file = finf->srch_file;
|
||||
lt->max_item_seq = finf->our_log.max_item_seq;
|
||||
lt->max_item_vers = finf->our_log.max_item_vers;
|
||||
|
||||
scoutfs_block_put(sb, finf->srch_bl);
|
||||
finf->srch_bl = NULL;
|
||||
|
||||
lt->inode_count_delta = cpu_to_le64(atomic64_read(&finf->inode_count_delta));
|
||||
|
||||
trace_scoutfs_forest_prepare_commit(sb, <->item_root.ref,
|
||||
<->bloom_ref);
|
||||
}
|
||||
|
||||
#define LOG_MERGE_DELAY_MS (5 * MSEC_PER_SEC)
|
||||
|
||||
/*
 * Regularly try to get a log merge request from the server.  If we get
 * a request we walk the log_trees items to find input trees and pass
 * them to btree_merge.  All of our work is done in dirty blocks
 * allocated from available free blocks that the server gave us.  If we
 * hit an error then we drop our dirty blocks without writing them and
 * send an error flag to the server so they can reclaim our allocators
 * and ignore the rest of our work.
 */
static void scoutfs_forest_log_merge_worker(struct work_struct *work)
|
||||
{
|
||||
struct forest_info *finf = container_of(work, struct forest_info,
|
||||
log_merge_dwork.work);
|
||||
struct super_block *sb = finf->sb;
|
||||
struct scoutfs_btree_root_head *rhead = NULL;
|
||||
struct scoutfs_btree_root_head *tmp;
|
||||
struct scoutfs_log_merge_complete comp;
|
||||
struct scoutfs_log_merge_request req;
|
||||
struct scoutfs_log_trees *lt;
|
||||
struct scoutfs_block_writer wri;
|
||||
struct scoutfs_alloc alloc;
|
||||
SCOUTFS_BTREE_ITEM_REF(iref);
|
||||
struct scoutfs_key next;
|
||||
struct scoutfs_key key;
|
||||
unsigned long delay;
|
||||
LIST_HEAD(inputs);
|
||||
int ret;
|
||||
|
||||
ret = scoutfs_client_get_log_merge(sb, &req);
|
||||
if (ret < 0)
|
||||
goto resched;
|
||||
|
||||
comp.root = req.root;
|
||||
comp.start = req.start;
|
||||
comp.end = req.end;
|
||||
comp.remain = req.end;
|
||||
comp.rid = req.rid;
|
||||
comp.seq = req.seq;
|
||||
comp.flags = 0;
|
||||
|
||||
scoutfs_alloc_init(&alloc, &req.meta_avail, &req.meta_freed);
|
||||
scoutfs_block_writer_init(sb, &wri);
|
||||
|
||||
/* find finalized input log trees within the input seq */
|
||||
for (scoutfs_key_init_log_trees(&key, 0, 0); ; scoutfs_key_inc(&key)) {
|
||||
|
||||
if (!rhead) {
|
||||
rhead = kmalloc(sizeof(*rhead), GFP_NOFS);
|
||||
if (!rhead) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
ret = scoutfs_btree_next(sb, &req.logs_root, &key, &iref);
|
||||
if (ret == 0) {
|
||||
if (iref.val_len == sizeof(*lt)) {
|
||||
key = *iref.key;
|
||||
lt = iref.val;
|
||||
if (lt->item_root.ref.blkno != 0 &&
|
||||
(le64_to_cpu(lt->flags) & SCOUTFS_LOG_TREES_FINALIZED) &&
|
||||
(le64_to_cpu(lt->finalize_seq) < le64_to_cpu(req.input_seq))) {
|
||||
rhead->root = lt->item_root;
|
||||
list_add_tail(&rhead->head, &inputs);
|
||||
rhead = NULL;
|
||||
}
|
||||
} else {
|
||||
ret = -EIO;
|
||||
}
|
||||
scoutfs_btree_put_iref(&iref);
|
||||
}
|
||||
if (ret < 0) {
|
||||
if (ret == -ENOENT) {
|
||||
ret = 0;
|
||||
break;
|
||||
}
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
/* shouldn't be possible, but it's harmless */
|
||||
if (list_empty(&inputs)) {
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = scoutfs_btree_merge(sb, &alloc, &wri, &req.start, &req.end,
|
||||
&next, &comp.root, &inputs,
|
||||
!!(req.flags & cpu_to_le64(SCOUTFS_LOG_MERGE_REQUEST_SUBTREE)),
|
||||
SCOUTFS_LOG_MERGE_DIRTY_BYTE_LIMIT, 10);
|
||||
if (ret == -ERANGE) {
|
||||
comp.remain = next;
|
||||
le64_add_cpu(&comp.flags, SCOUTFS_LOG_MERGE_COMP_REMAIN);
|
||||
ret = 0;
|
||||
}
|
||||
|
||||
out:
|
||||
scoutfs_alloc_prepare_commit(sb, &alloc, &wri);
|
||||
if (ret == 0)
|
||||
ret = scoutfs_block_writer_write(sb, &wri);
|
||||
scoutfs_block_writer_forget_all(sb, &wri);
|
||||
|
||||
comp.meta_avail = alloc.avail;
|
||||
comp.meta_freed = alloc.freed;
|
||||
if (ret < 0)
|
||||
le64_add_cpu(&comp.flags, SCOUTFS_LOG_MERGE_COMP_ERROR);
|
||||
|
||||
ret = scoutfs_client_commit_log_merge(sb, &comp);
|
||||
|
||||
kfree(rhead);
|
||||
list_for_each_entry_safe(rhead, tmp, &inputs, head)
|
||||
kfree(rhead);
|
||||
|
||||
resched:
|
||||
delay = ret == 0 ? 0 : msecs_to_jiffies(LOG_MERGE_DELAY_MS);
|
||||
queue_delayed_work(finf->workq, &finf->log_merge_dwork, delay);
|
||||
}
|
||||
|
||||
int scoutfs_forest_setup(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
@@ -763,20 +643,10 @@ int scoutfs_forest_setup(struct super_block *sb)
|
||||
}
|
||||
|
||||
/* the finf fields will be setup as we open a transaction */
|
||||
finf->sb = sb;
|
||||
mutex_init(&finf->mutex);
|
||||
mutex_init(&finf->srch_mutex);
|
||||
INIT_DELAYED_WORK(&finf->log_merge_dwork,
|
||||
scoutfs_forest_log_merge_worker);
|
||||
|
||||
sbi->forest_info = finf;
|
||||
|
||||
finf->workq = alloc_workqueue("scoutfs_log_merge", WQ_NON_REENTRANT |
|
||||
WQ_UNBOUND | WQ_HIGHPRI, 0);
|
||||
if (!finf->workq) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = 0;
|
||||
out:
|
||||
if (ret)
|
||||
@@ -785,24 +655,6 @@ out:
|
||||
return 0;
|
||||
}
|
||||
|
||||
void scoutfs_forest_start(struct super_block *sb)
|
||||
{
|
||||
DECLARE_FOREST_INFO(sb, finf);
|
||||
|
||||
queue_delayed_work(finf->workq, &finf->log_merge_dwork,
|
||||
msecs_to_jiffies(LOG_MERGE_DELAY_MS));
|
||||
}
|
||||
|
||||
void scoutfs_forest_stop(struct super_block *sb)
|
||||
{
|
||||
DECLARE_FOREST_INFO(sb, finf);
|
||||
|
||||
if (finf && finf->workq) {
|
||||
cancel_delayed_work_sync(&finf->log_merge_dwork);
|
||||
destroy_workqueue(finf->workq);
|
||||
}
|
||||
}
|
||||
|
||||
void scoutfs_forest_destroy(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
@@ -810,7 +662,6 @@ void scoutfs_forest_destroy(struct super_block *sb)
|
||||
|
||||
if (finf) {
|
||||
scoutfs_block_put(sb, finf->srch_bl);
|
||||
|
||||
kfree(finf);
|
||||
sbi->forest_info = NULL;
|
||||
}
|
||||
|
||||
@@ -8,36 +8,29 @@ struct scoutfs_block;
|
||||
#include "btree.h"
|
||||
|
||||
/* caller gives an item to the callback */
|
||||
enum {
|
||||
FIC_FS_ROOT = (1 << 0),
|
||||
FIC_FINALIZED = (1 << 1),
|
||||
};
|
||||
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb, struct scoutfs_key *key, u64 seq,
|
||||
u8 flags, void *val, int val_len, int fic, void *arg);
|
||||
typedef int (*scoutfs_forest_item_cb)(struct super_block *sb,
|
||||
struct scoutfs_key *key,
|
||||
struct scoutfs_log_item_value *liv,
|
||||
void *val, int val_len, void *arg);
|
||||
|
||||
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
|
||||
struct scoutfs_key *next);
|
||||
int scoutfs_forest_read_items(struct super_block *sb,
|
||||
struct scoutfs_lock *lock,
|
||||
struct scoutfs_key *key,
|
||||
struct scoutfs_key *bloom_key,
|
||||
struct scoutfs_key *start,
|
||||
struct scoutfs_key *end,
|
||||
scoutfs_forest_item_cb cb, void *arg);
|
||||
int scoutfs_forest_set_bloom_bits(struct super_block *sb,
|
||||
struct scoutfs_lock *lock);
|
||||
void scoutfs_forest_set_max_seq(struct super_block *sb, u64 max_seq);
|
||||
int scoutfs_forest_get_max_seq(struct super_block *sb,
|
||||
struct scoutfs_super_block *super,
|
||||
u64 *seq);
|
||||
void scoutfs_forest_set_max_vers(struct super_block *sb, u64 max_vers);
|
||||
int scoutfs_forest_get_max_vers(struct super_block *sb,
|
||||
struct scoutfs_super_block *super,
|
||||
u64 *vers);
|
||||
int scoutfs_forest_insert_list(struct super_block *sb,
|
||||
struct scoutfs_btree_item_list *lst);
|
||||
int scoutfs_forest_srch_add(struct super_block *sb, u64 hash, u64 ino, u64 id);
|
||||
|
||||
void scoutfs_forest_inc_inode_count(struct super_block *sb);
|
||||
void scoutfs_forest_dec_inode_count(struct super_block *sb);
|
||||
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
|
||||
u64 *inode_count);
|
||||
|
||||
void scoutfs_forest_init_btrees(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri,
|
||||
@@ -45,15 +38,7 @@ void scoutfs_forest_init_btrees(struct super_block *sb,
|
||||
void scoutfs_forest_get_btrees(struct super_block *sb,
|
||||
struct scoutfs_log_trees *lt);
|
||||
|
||||
/* > 0 error codes */
|
||||
#define SCOUTFS_DELTA_COMBINED 1 /* src val was combined, drop src */
|
||||
#define SCOUTFS_DELTA_COMBINED_NULL 2 /* combined val has no data, drop both */
|
||||
int scoutfs_forest_combine_deltas(struct scoutfs_key *key, void *dst, int dst_len,
|
||||
void *src, int src_len);
|
||||
|
||||
int scoutfs_forest_setup(struct super_block *sb);
|
||||
void scoutfs_forest_start(struct super_block *sb);
|
||||
void scoutfs_forest_stop(struct super_block *sb);
|
||||
void scoutfs_forest_destroy(struct super_block *sb);
|
||||
|
||||
#endif
|
||||
|
||||
@@ -1,16 +1,6 @@
|
||||
#ifndef _SCOUTFS_FORMAT_H_
|
||||
#define _SCOUTFS_FORMAT_H_
|
||||
|
||||
/*
 * The format version defines the format of structures on devices,
 * structures that are communicated over the wire, and the protocol
 * behind the structures.
 */
#define SCOUTFS_FORMAT_VERSION_MIN 1
|
||||
#define SCOUTFS_FORMAT_VERSION_MIN_STR __stringify(SCOUTFS_FORMAT_VERSION_MIN)
|
||||
#define SCOUTFS_FORMAT_VERSION_MAX 1
|
||||
#define SCOUTFS_FORMAT_VERSION_MAX_STR __stringify(SCOUTFS_FORMAT_VERSION_MAX)
|
||||
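For illustration only, a mount-time check against this range might look like the following; the helper name is made up for the example.

/* sketch: is a persistent format version within our supported range? */
static inline int example_format_version_supported(__u64 vers)
{
	return vers >= SCOUTFS_FORMAT_VERSION_MIN &&
	       vers <= SCOUTFS_FORMAT_VERSION_MAX;
}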
|
||||
/* statfs(2) f_type */
|
||||
#define SCOUTFS_SUPER_MAGIC 0x554f4353 /* "SCOU" */
|
||||
|
||||
@@ -21,7 +11,6 @@
|
||||
#define SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK 0x897e4a7d
|
||||
#define SCOUTFS_BLOCK_MAGIC_SRCH_PARENT 0xb23a2a05
|
||||
#define SCOUTFS_BLOCK_MAGIC_ALLOC_LIST 0x8a93ac83
|
||||
#define SCOUTFS_BLOCK_MAGIC_QUORUM 0xbc310868
|
||||
|
||||
/*
|
||||
* The super block, quorum block, and file data allocation granularity
|
||||
@@ -62,19 +51,15 @@
|
||||
#define SCOUTFS_SUPER_BLKNO ((64ULL * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
|
||||
|
||||
/*
|
||||
* A small number of quorum blocks follow the super block, enough of
|
||||
* them to match the starting offset of the super block so the region is
|
||||
* aligned to the power of two that contains it.
|
||||
* A reasonably large region of aligned quorum blocks follow the super
|
||||
* block. Each voting cycle reads the entire region so we don't want it
|
||||
* to be too enormous. 256K seems like a reasonably chunky single IO.
|
||||
* The number of blocks in the region also determines the number of
|
||||
* mounts that have a reasonable probability of not overwriting each
|
||||
* other's random block locations.
|
||||
*/
|
||||
#define SCOUTFS_QUORUM_BLKNO (SCOUTFS_SUPER_BLKNO + 1)
|
||||
#define SCOUTFS_QUORUM_BLOCKS (SCOUTFS_SUPER_BLKNO - 1)
|
||||
|
||||
/*
|
||||
* Free metadata blocks start after the quorum blocks
|
||||
*/
|
||||
#define SCOUTFS_META_DEV_START_BLKNO \
|
||||
((SCOUTFS_QUORUM_BLKNO + SCOUTFS_QUORUM_BLOCKS) >> \
|
||||
SCOUTFS_BLOCK_SM_LG_SHIFT)
|
||||
#define SCOUTFS_QUORUM_BLKNO ((256ULL * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
|
||||
#define SCOUTFS_QUORUM_BLOCKS ((256ULL * 1024) >> SCOUTFS_BLOCK_SM_SHIFT)
|
||||
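/*
 * Worked example (illustrative; assumes 4KB small blocks, i.e.
 * SCOUTFS_BLOCK_SM_SHIFT == 12, which is an assumption made here for
 * the arithmetic): the quorum region then starts at small block
 * 256KB / 4KB = 64 and is 64 blocks long, covering device offsets
 * 256KB through 512KB, with each voting mount writing one random 4KB
 * block within it per cycle.
 */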
|
||||
/*
|
||||
* Start data on the data device aligned as well.
|
||||
@@ -93,33 +78,11 @@ struct scoutfs_timespec {
|
||||
__u8 __pad[4];
|
||||
};
|
||||
|
||||
enum scoutfs_inet_family {
|
||||
SCOUTFS_AF_NONE = 0,
|
||||
SCOUTFS_AF_IPV4 = 1,
|
||||
SCOUTFS_AF_IPV6 = 2,
|
||||
};
|
||||
|
||||
struct scoutfs_inet_addr4 {
|
||||
__le16 family;
|
||||
__le16 port;
|
||||
/* XXX ipv6 */
|
||||
struct scoutfs_inet_addr {
|
||||
__le32 addr;
|
||||
};
|
||||
|
||||
/*
|
||||
* Not yet supported by code.
|
||||
*/
|
||||
struct scoutfs_inet_addr6 {
|
||||
__le16 family;
|
||||
__le16 port;
|
||||
__u8 addr[16];
|
||||
__le32 flow_info;
|
||||
__le32 scope_id;
|
||||
__u8 __pad[4];
|
||||
};
|
||||
|
||||
union scoutfs_inet_addr {
|
||||
struct scoutfs_inet_addr4 v4;
|
||||
struct scoutfs_inet_addr6 v6;
|
||||
__u8 __pad[2];
|
||||
};
|
||||
|
||||
/*
|
||||
@@ -135,15 +98,6 @@ struct scoutfs_block_header {
|
||||
__le64 blkno;
|
||||
};
|
||||
|
||||
/*
|
||||
* A reference to a block. The corresponding fields in the block_header
|
||||
* must match after having read the block contents.
|
||||
*/
|
||||
struct scoutfs_block_ref {
|
||||
__le64 blkno;
|
||||
__le64 seq;
|
||||
};
|
||||
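A sketch of the check this describes: after reading the referenced block, the header fields have to agree with the ref before the contents can be trusted. The helper name is made up; the check actually used elsewhere in the tree is scoutfs_block_consistent_ref().

/* sketch: does a freshly read block header match the ref that found it? */
static inline int example_block_ref_matches(struct scoutfs_block_ref *ref,
					    struct scoutfs_block_header *hdr)
{
	return hdr->blkno == ref->blkno && hdr->seq == ref->seq;
}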
|
||||
/*
|
||||
* scoutfs identifies all file system metadata items by a small key
|
||||
* struct.
|
||||
@@ -175,11 +129,6 @@ struct scoutfs_key {
|
||||
#define sko_rid _sk_first
|
||||
#define sko_ino _sk_second
|
||||
|
||||
/* xattr totl */
|
||||
#define skxt_a _sk_first
|
||||
#define skxt_b _sk_second
|
||||
#define skxt_c _sk_third
|
||||
|
||||
/* inode */
|
||||
#define ski_ino _sk_first
|
||||
|
||||
@@ -207,16 +156,35 @@ struct scoutfs_key {
|
||||
#define sklt_rid _sk_first
|
||||
#define sklt_nr _sk_second
|
||||
|
||||
/* lock clients */
|
||||
#define sklc_rid _sk_first
|
||||
|
||||
/* seqs */
|
||||
#define skts_trans_seq _sk_first
|
||||
#define skts_rid _sk_second
|
||||
|
||||
/* mounted clients */
|
||||
#define skmc_rid _sk_first
|
||||
|
||||
/* free extents by blkno */
|
||||
#define skfb_end _sk_first
|
||||
#define skfb_len _sk_second
|
||||
/* free extents by order */
|
||||
#define skfo_revord _sk_first
|
||||
#define skfo_end _sk_second
|
||||
#define skfo_len _sk_third
|
||||
#define skfb_end _sk_second
|
||||
#define skfb_len _sk_third
|
||||
/* free extents by len */
|
||||
#define skfl_neglen _sk_second
|
||||
#define skfl_blkno _sk_third
|
||||
|
||||
struct scoutfs_radix_block {
|
||||
struct scoutfs_block_header hdr;
|
||||
union {
|
||||
struct scoutfs_radix_ref {
|
||||
__le64 blkno;
|
||||
__le64 seq;
|
||||
__le64 sm_total;
|
||||
__le64 lg_total;
|
||||
} refs[0];
|
||||
__le64 bits[0];
|
||||
};
|
||||
};
|
||||
|
||||
struct scoutfs_avl_root {
|
||||
__le16 node;
|
||||
@@ -239,12 +207,17 @@ struct scoutfs_avl_node {
|
||||
*/
|
||||
#define SCOUTFS_BTREE_MAX_HEIGHT 20
|
||||
|
||||
struct scoutfs_btree_ref {
|
||||
__le64 blkno;
|
||||
__le64 seq;
|
||||
};
|
||||
|
||||
/*
|
||||
* A height of X means that the first block read will have level X-1 and
|
||||
* the leaves will have level 0.
|
||||
*/
|
||||
struct scoutfs_btree_root {
|
||||
struct scoutfs_block_ref ref;
|
||||
struct scoutfs_btree_ref ref;
|
||||
__u8 height;
|
||||
__u8 __pad[7];
|
||||
};
|
||||
@@ -252,15 +225,11 @@ struct scoutfs_btree_root {
|
||||
struct scoutfs_btree_item {
|
||||
struct scoutfs_avl_node node;
|
||||
struct scoutfs_key key;
|
||||
__le64 seq;
|
||||
__le16 val_off;
|
||||
__le16 val_len;
|
||||
__u8 flags;
|
||||
__u8 __pad[3];
|
||||
__u8 __pad[4];
|
||||
};
|
||||
|
||||
#define SCOUTFS_ITEM_FLAG_DELETION (1 << 0)
|
||||
|
||||
struct scoutfs_btree_block {
|
||||
struct scoutfs_block_header hdr;
|
||||
struct scoutfs_avl_root item_root;
|
||||
@@ -269,7 +238,7 @@ struct scoutfs_btree_block {
|
||||
__le16 mid_free_len;
|
||||
__u8 level;
|
||||
__u8 __pad[7];
|
||||
struct scoutfs_btree_item items[];
|
||||
struct scoutfs_btree_item items[0];
|
||||
/* leaf blocks have a fixed size item offset hash table at the end */
|
||||
};
|
||||
|
||||
@@ -289,19 +258,23 @@ struct scoutfs_btree_block {
|
||||
#define SCOUTFS_BTREE_LEAF_ITEM_HASH_BYTES \
|
||||
(SCOUTFS_BTREE_LEAF_ITEM_HASH_NR * sizeof(__le16))
|
||||
|
||||
struct scoutfs_alloc_list_ref {
|
||||
__le64 blkno;
|
||||
__le64 seq;
|
||||
};
|
||||
|
||||
/*
 * first_nr tracks the nr of the first block in the list and is used for
 * allocation sizing.  total_nr is the sum of the nr of all the blocks in
 * the list and is used for calculating total free block counts.
 */
struct scoutfs_alloc_list_head {
|
||||
struct scoutfs_block_ref ref;
|
||||
struct scoutfs_alloc_list_ref ref;
|
||||
__le64 total_nr;
|
||||
__le32 first_nr;
|
||||
__le32 flags;
|
||||
__u8 __pad[4];
|
||||
};
|
||||
|
||||
|
||||
/*
|
||||
* While the main allocator uses extent items in btree blocks, metadata
|
||||
* allocations for a single transaction are recorded in arrays in
|
||||
@@ -315,10 +288,10 @@ struct scoutfs_alloc_list_head {
|
||||
*/
|
||||
struct scoutfs_alloc_list_block {
|
||||
struct scoutfs_block_header hdr;
|
||||
struct scoutfs_block_ref next;
|
||||
struct scoutfs_alloc_list_ref next;
|
||||
__le32 start;
|
||||
__le32 nr;
|
||||
__le64 blknos[]; /* naturally aligned for sorting */
|
||||
__le64 blknos[0]; /* naturally aligned for sorting */
|
||||
};
|
||||
|
||||
#define SCOUTFS_ALLOC_LIST_MAX_BLOCKS \
|
||||
@@ -330,28 +303,20 @@ struct scoutfs_alloc_list_block {
|
||||
*/
|
||||
struct scoutfs_alloc_root {
|
||||
__le64 total_len;
|
||||
__le32 flags;
|
||||
__le32 _pad;
|
||||
struct scoutfs_btree_root root;
|
||||
};
|
||||
|
||||
/* Shared by _alloc_list_head and _alloc_root */
|
||||
#define SCOUTFS_ALLOC_FLAG_LOW (1U << 0)
|
||||
|
||||
/* types of allocators, exposed to alloc_detail ioctl */
|
||||
#define SCOUTFS_ALLOC_OWNER_NONE 0
|
||||
#define SCOUTFS_ALLOC_OWNER_SERVER 1
|
||||
#define SCOUTFS_ALLOC_OWNER_MOUNT 2
|
||||
#define SCOUTFS_ALLOC_OWNER_SRCH 3
|
||||
#define SCOUTFS_ALLOC_OWNER_LOG_MERGE 4
|
||||
|
||||
struct scoutfs_mounted_client_btree_val {
|
||||
union scoutfs_inet_addr addr;
|
||||
__u8 flags;
|
||||
__u8 __pad[7];
|
||||
};
|
||||
|
||||
#define SCOUTFS_MOUNTED_CLIENT_QUORUM (1 << 0)
|
||||
#define SCOUTFS_MOUNTED_CLIENT_VOTER (1 << 0)
|
||||
|
||||
/*
|
||||
* srch files are a contiguous run of blocks with compressed entries
|
||||
@@ -369,10 +334,15 @@ struct scoutfs_srch_entry {
|
||||
|
||||
#define SCOUTFS_SRCH_ENTRY_MAX_BYTES (2 + (sizeof(__u64) * 3))
|
||||
|
||||
struct scoutfs_srch_ref {
|
||||
__le64 blkno;
|
||||
__le64 seq;
|
||||
};
|
||||
|
||||
struct scoutfs_srch_file {
|
||||
struct scoutfs_srch_entry first;
|
||||
struct scoutfs_srch_entry last;
|
||||
struct scoutfs_block_ref ref;
|
||||
struct scoutfs_srch_ref ref;
|
||||
__le64 blocks;
|
||||
__le64 entries;
|
||||
__u8 height;
|
||||
@@ -381,13 +351,13 @@ struct scoutfs_srch_file {
|
||||
|
||||
struct scoutfs_srch_parent {
|
||||
struct scoutfs_block_header hdr;
|
||||
struct scoutfs_block_ref refs[];
|
||||
struct scoutfs_srch_ref refs[0];
|
||||
};
|
||||
|
||||
#define SCOUTFS_SRCH_PARENT_REFS \
|
||||
((SCOUTFS_BLOCK_LG_SIZE - \
|
||||
offsetof(struct scoutfs_srch_parent, refs)) / \
|
||||
sizeof(struct scoutfs_block_ref))
|
||||
sizeof(struct scoutfs_srch_ref))
|
||||
|
||||
struct scoutfs_srch_block {
|
||||
struct scoutfs_block_header hdr;
|
||||
@@ -396,7 +366,7 @@ struct scoutfs_srch_block {
|
||||
struct scoutfs_srch_entry tail;
|
||||
__le32 entry_nr;
|
||||
__le32 entry_bytes;
|
||||
__u8 entries[];
|
||||
__u8 entries[0];
|
||||
};
|
||||
|
||||
/*
|
||||
@@ -449,50 +419,44 @@ struct scoutfs_srch_compact {
|
||||
/* client -> server: compaction failed */
|
||||
#define SCOUTFS_SRCH_COMPACT_FLAG_ERROR (1 << 5)
|
||||
|
||||
#define SCOUTFS_DATA_ALLOC_MAX_ZONES 1024
|
||||
#define SCOUTFS_DATA_ALLOC_ZONE_BYTES DIV_ROUND_UP(SCOUTFS_DATA_ALLOC_MAX_ZONES, 8)
|
||||
#define SCOUTFS_DATA_ALLOC_ZONE_LE64S DIV_ROUND_UP(SCOUTFS_DATA_ALLOC_MAX_ZONES, 64)
|
||||
|
||||
/*
|
||||
* XXX I imagine we should rename these now that they've evolved to track
|
||||
* all the btrees that clients use during a transaction. It's not just
|
||||
* about item logs, it's about clients making changes to trees.
|
||||
*
|
||||
 * @get_trans_seq, @commit_trans_seq: This pair of sequence numbers
 * determines if a transaction is currently open for the mount that owns
 * the log_trees struct.  get_trans_seq is advanced by the server as the
 * transaction is opened.  The server sets commit_trans_seq equal to
 * get_ as the transaction is committed.
*/
|
||||
struct scoutfs_log_trees {
|
||||
struct scoutfs_alloc_list_head meta_avail;
|
||||
struct scoutfs_alloc_list_head meta_freed;
|
||||
struct scoutfs_btree_root item_root;
|
||||
struct scoutfs_block_ref bloom_ref;
|
||||
struct scoutfs_btree_ref bloom_ref;
|
||||
struct scoutfs_alloc_root data_avail;
|
||||
struct scoutfs_alloc_root data_freed;
|
||||
struct scoutfs_srch_file srch_file;
|
||||
__le64 data_alloc_zone_blocks;
|
||||
__le64 data_alloc_zones[SCOUTFS_DATA_ALLOC_ZONE_LE64S];
|
||||
__le64 inode_count_delta;
|
||||
__le64 get_trans_seq;
|
||||
__le64 commit_trans_seq;
|
||||
__le64 max_item_seq;
|
||||
__le64 finalize_seq;
|
||||
__le64 max_item_vers;
|
||||
__le64 rid;
|
||||
__le64 nr;
|
||||
__le64 flags;
|
||||
};
|
||||
|
||||
#define SCOUTFS_LOG_TREES_FINALIZED (1ULL << 0)
|
||||
struct scoutfs_log_item_value {
|
||||
__le64 vers;
|
||||
__u8 flags;
|
||||
__u8 __pad[7];
|
||||
__u8 data[0];
|
||||
};
|
||||
|
||||
/* FS items are limited by the max btree value length */
|
||||
#define SCOUTFS_MAX_VAL_SIZE SCOUTFS_BTREE_MAX_VAL_LEN
|
||||
/*
|
||||
* FS items are limited by the max btree value length with the log item
|
||||
* value header.
|
||||
*/
|
||||
#define SCOUTFS_MAX_VAL_SIZE \
|
||||
(SCOUTFS_BTREE_MAX_VAL_LEN - sizeof(struct scoutfs_log_item_value))
|
||||
|
||||
#define SCOUTFS_LOG_ITEM_FLAG_DELETION (1 << 0)
|
||||
|
||||
struct scoutfs_bloom_block {
|
||||
struct scoutfs_block_header hdr;
|
||||
__le64 total_set;
|
||||
__le64 bits[];
|
||||
__le64 bits[0];
|
||||
};
|
||||
|
||||
/*
|
||||
@@ -509,122 +473,50 @@ struct scoutfs_bloom_block {
|
||||
member_sizeof(struct scoutfs_bloom_block, bits[0]) * 8)
|
||||
#define SCOUTFS_FOREST_BLOOM_FUNC_BITS (SCOUTFS_BLOCK_LG_SHIFT + 3)
|
||||
|
||||
/*
|
||||
* A private server btree item which records the status of a log merge
|
||||
* operation that is in progress.
|
||||
*/
|
||||
struct scoutfs_log_merge_status {
|
||||
struct scoutfs_key next_range_key;
|
||||
__le64 nr_requests;
|
||||
__le64 nr_complete;
|
||||
__le64 seq;
|
||||
};
|
||||
|
||||
/*
|
||||
* A request is sent to the client and stored in a server btree item to
|
||||
* record resources that would be reclaimed if the client failed. It
|
||||
* has all the inputs needed for the client to perform its portion of a
|
||||
* merge.
|
||||
*/
|
||||
struct scoutfs_log_merge_request {
|
||||
struct scoutfs_alloc_list_head meta_avail;
|
||||
struct scoutfs_alloc_list_head meta_freed;
|
||||
struct scoutfs_btree_root logs_root;
|
||||
struct scoutfs_btree_root root;
|
||||
struct scoutfs_key start;
|
||||
struct scoutfs_key end;
|
||||
__le64 input_seq;
|
||||
__le64 rid;
|
||||
__le64 seq;
|
||||
__le64 flags;
|
||||
};
|
||||
|
||||
/* request root is subtree of fs root at parent, restricted merging modifications */
|
||||
#define SCOUTFS_LOG_MERGE_REQUEST_SUBTREE (1ULL << 0)
|
||||
|
||||
/*
|
||||
* The output of a client's merge of log btree items into a subtree
|
||||
* rooted at a parent in the fs_root. The client sends it to the
|
||||
* server, who stores it in a btree item for later splicing/rebalancing.
|
||||
*/
|
||||
struct scoutfs_log_merge_complete {
|
||||
struct scoutfs_alloc_list_head meta_avail;
|
||||
struct scoutfs_alloc_list_head meta_freed;
|
||||
struct scoutfs_btree_root root;
|
||||
struct scoutfs_key start;
|
||||
struct scoutfs_key end;
|
||||
struct scoutfs_key remain;
|
||||
__le64 rid;
|
||||
__le64 seq;
|
||||
__le64 flags;
|
||||
};
|
||||
|
||||
/* merge failed, ignore completion and reclaim stored request */
|
||||
#define SCOUTFS_LOG_MERGE_COMP_ERROR (1ULL << 0)
|
||||
/* merge didn't complete range, restart from remain */
|
||||
#define SCOUTFS_LOG_MERGE_COMP_REMAIN (1ULL << 1)
|
||||
|
||||
/*
|
||||
* Range items record the ranges of the fs keyspace that still need to
|
||||
* be merged. They're added as a merge starts, removed as requests are
|
||||
* sent and added back if the request didn't consume its entire range.
|
||||
*/
|
||||
struct scoutfs_log_merge_range {
|
||||
struct scoutfs_key start;
|
||||
struct scoutfs_key end;
|
||||
};
|
||||
|
||||
struct scoutfs_log_merge_freeing {
|
||||
struct scoutfs_btree_root root;
|
||||
struct scoutfs_key key;
|
||||
__le64 seq;
|
||||
};
|
||||
|
||||
/*
|
||||
* Keys are first sorted by major key zones.
|
||||
*/
|
||||
#define SCOUTFS_INODE_INDEX_ZONE 4
|
||||
#define SCOUTFS_ORPHAN_ZONE 8
|
||||
#define SCOUTFS_XATTR_TOTL_ZONE 12
|
||||
#define SCOUTFS_FS_ZONE 16
|
||||
#define SCOUTFS_LOCK_ZONE 20
|
||||
#define SCOUTFS_INODE_INDEX_ZONE 1
|
||||
#define SCOUTFS_RID_ZONE 2
|
||||
#define SCOUTFS_FS_ZONE 3
|
||||
#define SCOUTFS_LOCK_ZONE 4
|
||||
/* Items only stored in server btrees */
|
||||
#define SCOUTFS_LOG_TREES_ZONE 24
|
||||
#define SCOUTFS_MOUNTED_CLIENT_ZONE 28
|
||||
#define SCOUTFS_SRCH_ZONE 32
|
||||
#define SCOUTFS_FREE_EXTENT_BLKNO_ZONE 36
|
||||
#define SCOUTFS_FREE_EXTENT_ORDER_ZONE 40
|
||||
/* Items only stored in log merge server btrees */
|
||||
#define SCOUTFS_LOG_MERGE_STATUS_ZONE 44
|
||||
#define SCOUTFS_LOG_MERGE_RANGE_ZONE 48
|
||||
#define SCOUTFS_LOG_MERGE_REQUEST_ZONE 52
|
||||
#define SCOUTFS_LOG_MERGE_COMPLETE_ZONE 56
|
||||
#define SCOUTFS_LOG_MERGE_FREEING_ZONE 60
|
||||
#define SCOUTFS_LOG_TREES_ZONE 6
|
||||
#define SCOUTFS_LOCK_CLIENTS_ZONE 7
|
||||
#define SCOUTFS_TRANS_SEQ_ZONE 8
|
||||
#define SCOUTFS_MOUNTED_CLIENT_ZONE 9
|
||||
#define SCOUTFS_SRCH_ZONE 10
|
||||
#define SCOUTFS_FREE_EXTENT_ZONE 11
|
||||
|
||||
/* inode index zone */
|
||||
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 4
|
||||
#define SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE 8
|
||||
#define SCOUTFS_INODE_INDEX_META_SEQ_TYPE 1
|
||||
#define SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE 2
|
||||
#define SCOUTFS_INODE_INDEX_NR 3 /* don't forget to update */
|
||||
|
||||
/* orphan zone, redundant type used for clarity */
|
||||
#define SCOUTFS_ORPHAN_TYPE 4
|
||||
/* rid zone (also used in server alloc btree) */
|
||||
#define SCOUTFS_ORPHAN_TYPE 1
|
||||
|
||||
/* fs zone */
|
||||
#define SCOUTFS_INODE_TYPE 4
|
||||
#define SCOUTFS_XATTR_TYPE 8
|
||||
#define SCOUTFS_DIRENT_TYPE 12
|
||||
#define SCOUTFS_READDIR_TYPE 16
|
||||
#define SCOUTFS_LINK_BACKREF_TYPE 20
|
||||
#define SCOUTFS_SYMLINK_TYPE 24
|
||||
#define SCOUTFS_DATA_EXTENT_TYPE 28
|
||||
#define SCOUTFS_INODE_TYPE 1
|
||||
#define SCOUTFS_XATTR_TYPE 2
|
||||
#define SCOUTFS_DIRENT_TYPE 3
|
||||
#define SCOUTFS_READDIR_TYPE 4
|
||||
#define SCOUTFS_LINK_BACKREF_TYPE 5
|
||||
#define SCOUTFS_SYMLINK_TYPE 6
|
||||
#define SCOUTFS_DATA_EXTENT_TYPE 7
|
||||
|
||||
/* lock zone, only ever found in lock ranges, never in persistent items */
|
||||
#define SCOUTFS_RENAME_TYPE 4
|
||||
#define SCOUTFS_RENAME_TYPE 1
|
||||
|
||||
/* srch zone, only in server btrees */
|
||||
#define SCOUTFS_SRCH_LOG_TYPE 4
|
||||
#define SCOUTFS_SRCH_BLOCKS_TYPE 8
|
||||
#define SCOUTFS_SRCH_PENDING_TYPE 12
|
||||
#define SCOUTFS_SRCH_BUSY_TYPE 16
|
||||
#define SCOUTFS_SRCH_LOG_TYPE 1
|
||||
#define SCOUTFS_SRCH_BLOCKS_TYPE 2
|
||||
#define SCOUTFS_SRCH_PENDING_TYPE 3
|
||||
#define SCOUTFS_SRCH_BUSY_TYPE 4
|
||||
|
||||
/* free extents in allocator btrees in client and server, by blkno or len */
|
||||
#define SCOUTFS_FREE_EXTENT_BLKNO_TYPE 1
|
||||
#define SCOUTFS_FREE_EXTENT_LEN_TYPE 2
|
||||
|
||||
/* file data extents have start and len in key */
|
||||
struct scoutfs_data_extent_val {
|
||||
@@ -646,172 +538,91 @@ struct scoutfs_xattr {
	__le16 val_len;
	__u8 name_len;
	__u8 __pad[5];
	__u8 name[];
	__u8 name[0];
};

/*
 * .totl. xattrs are mapped to items.  The dotted u64s in the xattr name
 * map to the item key.  The item value total is the sum of all the
 * xattr values.  The item value count records the number of xattrs
 * contributing to the total and is used when combining logged items to
 * determine if totals are being created or destroyed.
 */
struct scoutfs_xattr_totl_val {
	__le64 total;
	__le64 count;
};
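
As a rough illustration of the mapping described above (a sketch, not code from this diff): an xattr whose name carries three dotted u64s, e.g. ".totl.7.8.9", contributes its numeric value to the item keyed by {7, 8, 9}, and combining two logged items for that key is just a matter of summing; a count that reaches zero means every contributing xattr is gone and the total is destroyed.

/* hypothetical helper, not part of this tree */
static inline void scoutfs_xattr_totl_combine(struct scoutfs_xattr_totl_val *dst,
					      const struct scoutfs_xattr_totl_val *src)
{
	le64_add_cpu(&dst->total, le64_to_cpu(src->total));
	le64_add_cpu(&dst->count, le64_to_cpu(src->count));
}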
|
||||
/* XXX does this exist upstream somewhere? */
#define member_sizeof(TYPE, MEMBER)	(sizeof(((TYPE *)0)->MEMBER))

#define SCOUTFS_UUID_BYTES 16

#define SCOUTFS_QUORUM_MAX_SLOTS 15

/*
 * To elect a leader, members race to have their variable election
 * timeouts expire.  If they're first to send a vote request with a
 * greater term to a majority of waiting members they'll be elected with
 * a majority.  If the timeouts are too close, the vote may be split and
 * everyone will wait for another cycle of variable timeouts to expire.
 *
 * These determine how long it will take to elect a leader once there's
 * no evidence of a server (no leader quorum blocks on mount; heartbeat
 * timeout expired.)
 * Mounts read all the quorum blocks and write to one random quorum
 * block during a cycle.  The min cycle time limits the per-mount iop
 * load during elections.  The random cycle delay makes it less likely
 * that mounts will read and write at the same time and miss each
 * other's writes.  An election only completes if a quorum of mounts
 * vote for a leader before any of their elections timeout.  This is
 * made less likely by the probability that mounts will overwrite each
 * others random block locations.  The max quorum count limits that
 * probability.  9 mounts only have a 55% chance of writing to unique 4k
 * blocks in a 256k region.  The election timeout is set to include
 * enough cycles to usually complete the election.  Once a leader is
 * elected it spends a number of cycles writing out blocks with itself
 * logged as a leader.  This reduces the possibility that servers
 * will have their log entries overwritten and not be fenced.
 */
#define SCOUTFS_QUORUM_ELECT_MIN_MS	250
#define SCOUTFS_QUORUM_ELECT_VAR_MS	100
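
Two asides on the comment above (illustrative, not part of the diff).  First, the 55% figure checks out: 256KB of quorum blocks at 4KB each gives 64 slots, and the chance that 9 mounts writing uniformly random slots all land in distinct blocks is prod_{i=0..8} (64 - i)/64, roughly 0.55.  Second, a minimal sketch of drawing a variable election timeout from the constants just defined (the helper name and the use of prandom_u32() are assumptions):

static inline unsigned int quorum_elect_timeout_ms(void)
{
	/* somewhere in [ELECT_MIN_MS, ELECT_MIN_MS + ELECT_VAR_MS) */
	return SCOUTFS_QUORUM_ELECT_MIN_MS +
	       (prandom_u32() % SCOUTFS_QUORUM_ELECT_VAR_MS);
}
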
/*
|
||||
* Once a leader is elected they send out heartbeats at regular
|
||||
* intervals to force members to wait the much longer heartbeat timeout.
|
||||
* Once heartbeat timeout expires without receiving a heartbeat they'll
|
||||
* switch over to performing elections.
|
||||
*
|
||||
* These determine how long it could take members to notice that a
|
||||
* leader has gone silent and start to elect a new leader.
|
||||
*/
|
||||
#define SCOUTFS_QUORUM_HB_IVAL_MS 100
|
||||
#define SCOUTFS_QUORUM_HB_TIMEO_MS (5 * MSEC_PER_SEC)
|
||||
|
||||
/*
|
||||
* A newly elected leader will give fencing some time before giving up and
|
||||
* shutting down.
|
||||
*/
|
||||
#define SCOUTFS_QUORUM_FENCE_TO_MS (15 * MSEC_PER_SEC)
|
||||
|
||||
struct scoutfs_quorum_message {
|
||||
__le64 fsid;
|
||||
__le64 version;
|
||||
__le64 term;
|
||||
__u8 type;
|
||||
__u8 from;
|
||||
__u8 __pad[2];
|
||||
__le32 crc;
|
||||
};
|
||||
|
||||
/* a candidate requests a vote */
|
||||
#define SCOUTFS_QUORUM_MSG_REQUEST_VOTE 0
|
||||
/* followers send votes to candidates */
|
||||
#define SCOUTFS_QUORUM_MSG_VOTE 1
|
||||
/* elected leaders broadcast heartbeats to delay elections */
|
||||
#define SCOUTFS_QUORUM_MSG_HEARTBEAT 2
|
||||
/* leaders broadcast as they leave to break heartbeat timeout */
|
||||
#define SCOUTFS_QUORUM_MSG_RESIGNATION 3
|
||||
#define SCOUTFS_QUORUM_MSG_INVALID 4
|
||||
|
||||
/*
|
||||
* The version is currently always 0, but will be used by mounts to
|
||||
* discover that membership has changed.
|
||||
*/
|
||||
struct scoutfs_quorum_config {
|
||||
__le64 version;
|
||||
struct scoutfs_quorum_slot {
|
||||
union scoutfs_inet_addr addr;
|
||||
} slots[SCOUTFS_QUORUM_MAX_SLOTS];
|
||||
};
|
||||
|
||||
enum {
|
||||
SCOUTFS_QUORUM_EVENT_BEGIN, /* quorum service starting up */
|
||||
SCOUTFS_QUORUM_EVENT_TERM, /* updated persistent term */
|
||||
SCOUTFS_QUORUM_EVENT_ELECT, /* won election */
|
||||
SCOUTFS_QUORUM_EVENT_FENCE, /* server fenced others */
|
||||
SCOUTFS_QUORUM_EVENT_STOP, /* server stopped */
|
||||
SCOUTFS_QUORUM_EVENT_END, /* quorum service shutting down */
|
||||
SCOUTFS_QUORUM_EVENT_NR,
|
||||
};
|
||||
#define SCOUTFS_QUORUM_MAX_COUNT 9
|
||||
#define SCOUTFS_QUORUM_CYCLE_LO_MS 10
|
||||
#define SCOUTFS_QUORUM_CYCLE_HI_MS 20
|
||||
#define SCOUTFS_QUORUM_TERM_LO_MS 250
|
||||
#define SCOUTFS_QUORUM_TERM_HI_MS 500
|
||||
#define SCOUTFS_QUORUM_ELECTED_LOG_CYCLES 10
|
||||
|
||||
struct scoutfs_quorum_block {
|
||||
struct scoutfs_block_header hdr;
|
||||
__le64 fsid;
|
||||
__le64 blkno;
|
||||
__le64 term;
|
||||
__le64 write_nr;
|
||||
struct scoutfs_quorum_block_event {
|
||||
__le64 write_nr;
|
||||
__le64 rid;
|
||||
__le64 voter_rid;
|
||||
__le64 vote_for_rid;
|
||||
__le32 crc;
|
||||
__u8 log_nr;
|
||||
__u8 __pad[3];
|
||||
struct scoutfs_quorum_log {
|
||||
__le64 term;
|
||||
struct scoutfs_timespec ts;
|
||||
} events[SCOUTFS_QUORUM_EVENT_NR];
|
||||
__le64 rid;
|
||||
struct scoutfs_inet_addr addr;
|
||||
} log[0];
|
||||
};
|
||||
|
||||
/*
 * Tunable options that apply to the entire system.  They can be set in
 * mkfs or in sysfs files which send an rpc to the server to make the
 * change.  The super version defines the options that exist.
 *
 * @set_bits: bits for each 64bit starting offset after set_bits
 * indicate which logical option is set.
 *
 * @data_alloc_zone_blocks: if set, the data device is logically divided
 * into contiguous zones of this many blocks.  Data allocation will try
 * and isolate allocated extents for each mount to their own zone.  The
 * zone size must be larger than the data alloc high water mark and
 * large enough such that the number of zones is kept within its static
 * limit.
 */
struct scoutfs_volume_options {
	__le64 set_bits;
	__le64 data_alloc_zone_blocks;
	__le64 __future_expansion[63];
};

#define scoutfs_volopt_nr(field) \
	((offsetof(struct scoutfs_volume_options, field) - \
	  (offsetof(struct scoutfs_volume_options, set_bits) + \
	   member_sizeof(struct scoutfs_volume_options, set_bits))) / sizeof(__le64))
#define scoutfs_volopt_bit(field) \
	(1ULL << scoutfs_volopt_nr(field))

#define SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_NR \
	scoutfs_volopt_nr(data_alloc_zone_blocks)
#define SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_BIT \
	scoutfs_volopt_bit(data_alloc_zone_blocks)

#define SCOUTFS_VOLOPT_EXPANSION_BITS \
	(~(scoutfs_volopt_bit(__future_expansion) - 1))
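
Tracing the macros above for the one option that currently exists (a worked example, not new definitions): set_bits is the first member, so data_alloc_zone_blocks sits at offset 8 and

	scoutfs_volopt_nr(data_alloc_zone_blocks)  = (8 - (0 + 8)) / 8 = 0
	scoutfs_volopt_bit(data_alloc_zone_blocks) = 1ULL << 0

so SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_BIT is bit 0 of set_bits, while __future_expansion starts at offset 16, giving option number 1 and making SCOUTFS_VOLOPT_EXPANSION_BITS equal to ~1ULL, i.e. any set_bits flag other than bit 0 is unknown to this format.  A hypothetical check of the option would look like:

	if (le64_to_cpu(volopt->set_bits) &
	    SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_BIT)
		zone_blocks = le64_to_cpu(volopt->data_alloc_zone_blocks);
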
#define SCOUTFS_QUORUM_LOG_MAX \
|
||||
((SCOUTFS_BLOCK_SM_SIZE - sizeof(struct scoutfs_quorum_block)) / \
|
||||
sizeof(struct scoutfs_quorum_log))
|
||||
|
||||
#define SCOUTFS_FLAG_IS_META_BDEV 0x01
|
||||
|
||||
struct scoutfs_super_block {
|
||||
struct scoutfs_block_header hdr;
|
||||
__le64 id;
|
||||
__le64 fmt_vers;
|
||||
__le64 format_hash;
|
||||
__le64 flags;
|
||||
__u8 uuid[SCOUTFS_UUID_BYTES];
|
||||
__le64 seq;
|
||||
__le64 next_ino;
|
||||
__le64 inode_count;
|
||||
__le64 next_trans_seq;
|
||||
__le64 total_meta_blocks; /* both static and dynamic */
|
||||
__le64 first_meta_blkno; /* first dynamically allocated */
|
||||
__le64 last_meta_blkno;
|
||||
__le64 total_data_blocks;
|
||||
struct scoutfs_quorum_config qconf;
|
||||
__le64 first_data_blkno;
|
||||
__le64 last_data_blkno;
|
||||
__le64 quorum_fenced_term;
|
||||
__le64 quorum_server_term;
|
||||
__le64 unmount_barrier;
|
||||
__u8 quorum_count;
|
||||
__u8 __pad[7];
|
||||
struct scoutfs_inet_addr server_addr;
|
||||
struct scoutfs_alloc_root meta_alloc[2];
|
||||
struct scoutfs_alloc_root data_alloc;
|
||||
struct scoutfs_alloc_list_head server_meta_avail[2];
|
||||
struct scoutfs_alloc_list_head server_meta_freed[2];
|
||||
struct scoutfs_btree_root fs_root;
|
||||
struct scoutfs_btree_root logs_root;
|
||||
struct scoutfs_btree_root log_merge;
|
||||
struct scoutfs_btree_root lock_clients;
|
||||
struct scoutfs_btree_root trans_seqs;
|
||||
struct scoutfs_btree_root mounted_clients;
|
||||
struct scoutfs_btree_root srch_root;
|
||||
struct scoutfs_volume_options volopt;
|
||||
};
|
||||
|
||||
#define SCOUTFS_ROOT_INO 1
|
||||
@@ -835,6 +646,13 @@ struct scoutfs_super_block {
|
||||
*
|
||||
* @offline_blocks: The number of fixed 4k blocks that could be made
|
||||
* online by staging.
|
||||
*
|
||||
* XXX
|
||||
* - otime?
|
||||
* - compat flags?
|
||||
* - version?
|
||||
* - generation?
|
||||
* - be more careful with rdev?
|
||||
*/
|
||||
struct scoutfs_inode {
|
||||
__le64 size;
|
||||
@@ -845,7 +663,6 @@ struct scoutfs_inode {
|
||||
__le64 offline_blocks;
|
||||
__le64 next_readdir_pos;
|
||||
__le64 next_xattr_id;
|
||||
__le64 version;
|
||||
__le32 nlink;
|
||||
__le32 uid;
|
||||
__le32 gid;
|
||||
@@ -855,7 +672,6 @@ struct scoutfs_inode {
|
||||
struct scoutfs_timespec atime;
|
||||
struct scoutfs_timespec ctime;
|
||||
struct scoutfs_timespec mtime;
|
||||
struct scoutfs_timespec crtime;
|
||||
};
|
||||
|
||||
#define SCOUTFS_INO_FLAG_TRUNCATE 0x1
|
||||
@@ -879,7 +695,7 @@ struct scoutfs_dirent {
|
||||
__le64 pos;
|
||||
__u8 type;
|
||||
__u8 __pad[7];
|
||||
__u8 name[];
|
||||
__u8 name[0];
|
||||
};
|
||||
|
||||
#define SCOUTFS_NAME_LEN 255
|
||||
@@ -907,7 +723,6 @@ enum scoutfs_dentry_type {
|
||||
#define SCOUTFS_XATTR_MAX_NAME_LEN 255
|
||||
#define SCOUTFS_XATTR_MAX_VAL_LEN 65535
|
||||
#define SCOUTFS_XATTR_MAX_PART_SIZE SCOUTFS_MAX_VAL_SIZE
|
||||
#define SCOUTFS_XATTR_MAX_TOTL_U64 23 /* octal U64_MAX */
|
||||
|
||||
#define SCOUTFS_XATTR_NR_PARTS(name_len, val_len) \
|
||||
DIV_ROUND_UP(sizeof(struct scoutfs_xattr) + name_len + val_len, \
|
||||
@@ -931,6 +746,12 @@ enum scoutfs_dentry_type {
|
||||
* the same server after receiving a greeting response and to a new
|
||||
* server after failover.
|
||||
*
|
||||
* @unmount_barrier: Incremented every time the remaining majority of
|
||||
* quorum members all agree to leave. The server tells a quorum member
|
||||
* the value that it's connecting under so that if the client sees the
|
||||
* value increase in the super block then it knows that the server has
|
||||
* processed its farewell and can safely unmount.
|
||||
*
|
||||
* @rid: The client's random id that was generated once as the mount
|
||||
* started up. This identifies a specific remote mount across
|
||||
* connections and servers. It's set to the client's rid in both the
|
||||
@@ -938,14 +759,15 @@ enum scoutfs_dentry_type {
|
||||
*/
|
||||
struct scoutfs_net_greeting {
|
||||
__le64 fsid;
|
||||
__le64 fmt_vers;
|
||||
__le64 format_hash;
|
||||
__le64 server_term;
|
||||
__le64 unmount_barrier;
|
||||
__le64 rid;
|
||||
__le64 flags;
|
||||
};
|
||||
|
||||
#define SCOUTFS_NET_GREETING_FLAG_FAREWELL (1 << 0)
|
||||
#define SCOUTFS_NET_GREETING_FLAG_QUORUM (1 << 1)
|
||||
#define SCOUTFS_NET_GREETING_FLAG_VOTER (1 << 1)
|
||||
#define SCOUTFS_NET_GREETING_FLAG_INVALID (~(__u64)0 << 2)
|
||||
|
||||
/*
|
||||
@@ -969,6 +791,7 @@ struct scoutfs_net_greeting {
|
||||
* response messages.
|
||||
*/
|
||||
struct scoutfs_net_header {
|
||||
__le64 clock_sync_id;
|
||||
__le64 seq;
|
||||
__le64 recv_seq;
|
||||
__le64 id;
|
||||
@@ -977,7 +800,7 @@ struct scoutfs_net_header {
|
||||
__u8 flags;
|
||||
__u8 error;
|
||||
__u8 __pad[3];
|
||||
__u8 data[];
|
||||
__u8 data[0];
|
||||
};
|
||||
|
||||
#define SCOUTFS_NET_FLAG_RESPONSE (1 << 0)
|
||||
@@ -988,21 +811,13 @@ enum scoutfs_net_cmd {
|
||||
SCOUTFS_NET_CMD_ALLOC_INODES,
|
||||
SCOUTFS_NET_CMD_GET_LOG_TREES,
|
||||
SCOUTFS_NET_CMD_COMMIT_LOG_TREES,
|
||||
SCOUTFS_NET_CMD_SYNC_LOG_TREES,
|
||||
SCOUTFS_NET_CMD_GET_ROOTS,
|
||||
SCOUTFS_NET_CMD_ADVANCE_SEQ,
|
||||
SCOUTFS_NET_CMD_GET_LAST_SEQ,
|
||||
SCOUTFS_NET_CMD_LOCK,
|
||||
SCOUTFS_NET_CMD_LOCK_RECOVER,
|
||||
SCOUTFS_NET_CMD_SRCH_GET_COMPACT,
|
||||
SCOUTFS_NET_CMD_SRCH_COMMIT_COMPACT,
|
||||
SCOUTFS_NET_CMD_GET_LOG_MERGE,
|
||||
SCOUTFS_NET_CMD_COMMIT_LOG_MERGE,
|
||||
SCOUTFS_NET_CMD_OPEN_INO_MAP,
|
||||
SCOUTFS_NET_CMD_GET_VOLOPT,
|
||||
SCOUTFS_NET_CMD_SET_VOLOPT,
|
||||
SCOUTFS_NET_CMD_CLEAR_VOLOPT,
|
||||
SCOUTFS_NET_CMD_RESIZE_DEVICES,
|
||||
SCOUTFS_NET_CMD_STATFS,
|
||||
SCOUTFS_NET_CMD_FAREWELL,
|
||||
SCOUTFS_NET_CMD_UNKNOWN,
|
||||
};
|
||||
@@ -1045,32 +860,23 @@ struct scoutfs_net_roots {
|
||||
struct scoutfs_btree_root srch_root;
|
||||
};
|
||||
|
||||
struct scoutfs_net_resize_devices {
|
||||
__le64 new_total_meta_blocks;
|
||||
__le64 new_total_data_blocks;
|
||||
};
|
||||
|
||||
struct scoutfs_net_statfs {
|
||||
__u8 uuid[SCOUTFS_UUID_BYTES];
|
||||
__le64 free_meta_blocks;
|
||||
__le64 total_meta_blocks;
|
||||
__le64 free_data_blocks;
|
||||
__le64 total_data_blocks;
|
||||
__le64 inode_count;
|
||||
};
|
||||
|
||||
struct scoutfs_net_lock {
|
||||
struct scoutfs_key key;
|
||||
__le64 write_seq;
|
||||
__le64 write_version;
|
||||
__u8 old_mode;
|
||||
__u8 new_mode;
|
||||
__u8 __pad[6];
|
||||
};
|
||||
|
||||
struct scoutfs_net_lock_grant_response {
|
||||
struct scoutfs_net_lock nl;
|
||||
struct scoutfs_net_roots roots;
|
||||
};
|
||||
|
||||
struct scoutfs_net_lock_recover {
|
||||
__le16 nr;
|
||||
__u8 __pad[6];
|
||||
struct scoutfs_net_lock locks[];
|
||||
struct scoutfs_net_lock locks[0];
|
||||
};
|
||||
|
||||
#define SCOUTFS_NET_LOCK_MAX_RECOVER_NR \
|
||||
@@ -1085,7 +891,6 @@ enum scoutfs_lock_trace {
|
||||
SLT_INVALIDATE,
|
||||
SLT_REQUEST,
|
||||
SLT_RESPONSE,
|
||||
SLT_NR,
|
||||
};
|
||||
|
||||
/*
|
||||
@@ -1138,42 +943,4 @@ enum scoutfs_corruption_sources {
|
||||
|
||||
#define SC_NR_LONGS DIV_ROUND_UP(SC_NR_SOURCES, BITS_PER_LONG)
|
||||
|
||||
#define SCOUTFS_OPEN_INO_MAP_SHIFT 10
|
||||
#define SCOUTFS_OPEN_INO_MAP_BITS (1 << SCOUTFS_OPEN_INO_MAP_SHIFT)
|
||||
#define SCOUTFS_OPEN_INO_MAP_MASK (SCOUTFS_OPEN_INO_MAP_BITS - 1)
|
||||
#define SCOUTFS_OPEN_INO_MAP_LE64S (SCOUTFS_OPEN_INO_MAP_BITS / 64)
|
||||
|
||||
/*
|
||||
* The request and response conversation is as follows:
|
||||
*
|
||||
* client[init] -> server:
|
||||
* group_nr = G
|
||||
* req_id = 0 (I)
|
||||
* server -> client[*]
|
||||
* group_nr = G
|
||||
* req_id = R
|
||||
* client[*] -> server
|
||||
* group_nr = G (I)
|
||||
* req_id = R
|
||||
* bits
|
||||
* server -> client[init]
|
||||
* group_nr = G (I)
|
||||
* req_id = R (I)
|
||||
* bits
|
||||
*
|
||||
* Many of the fields in individual messages are ignored ("I") because
|
||||
* the net id or the omap req_id can be used to identify the
|
||||
* conversation. We always include them on the wire to make inspected
|
||||
* messages easier to follow.
|
||||
*/
|
||||
struct scoutfs_open_ino_map_args {
|
||||
__le64 group_nr;
|
||||
__le64 req_id;
|
||||
};
|
||||
|
||||
struct scoutfs_open_ino_map {
|
||||
struct scoutfs_open_ino_map_args args;
|
||||
__le64 bits[SCOUTFS_OPEN_INO_MAP_LE64S];
|
||||
};
|
||||
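
A hedged sketch of how a mount might consult a returned map (the helper is hypothetical; only the SHIFT/MASK constants and the struct layout come from this header): the group number sent in the request is ino >> SCOUTFS_OPEN_INO_MAP_SHIFT, and the low bits of the inode number index into the returned bitmap.

/* hypothetical helper, not part of this tree */
static inline bool scoutfs_open_ino_map_test(const struct scoutfs_open_ino_map *map,
					     u64 ino)
{
	u32 bit = ino & SCOUTFS_OPEN_INO_MAP_MASK;

	/* caller requested args.group_nr = ino >> SCOUTFS_OPEN_INO_MAP_SHIFT */
	return le64_to_cpu(map->bits[bit / 64]) & (1ULL << (bit % 64));
}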
|
||||
#endif
1045	kmod/src/inode.c
File diff suppressed because it is too large
@@ -4,13 +4,12 @@
|
||||
#include "key.h"
|
||||
#include "lock.h"
|
||||
#include "per_task.h"
|
||||
#include "count.h"
|
||||
#include "format.h"
|
||||
#include "data.h"
|
||||
|
||||
struct scoutfs_lock;
|
||||
|
||||
#define SCOUTFS_INODE_NR_INDICES 2
|
||||
|
||||
struct scoutfs_inode_info {
|
||||
/* read or initialized for each inode instance */
|
||||
u64 ino;
|
||||
@@ -22,7 +21,6 @@ struct scoutfs_inode_info {
|
||||
u64 online_blocks;
|
||||
u64 offline_blocks;
|
||||
u32 flags;
|
||||
struct timespec crtime;
|
||||
|
||||
/*
|
||||
* Protects per-inode extent items, most particularly readers
|
||||
@@ -40,8 +38,8 @@ struct scoutfs_inode_info {
|
||||
*/
|
||||
struct mutex item_mutex;
|
||||
bool have_item;
|
||||
u64 item_majors[SCOUTFS_INODE_NR_INDICES];
|
||||
u32 item_minors[SCOUTFS_INODE_NR_INDICES];
|
||||
u64 item_majors[SCOUTFS_INODE_INDEX_NR];
|
||||
u32 item_minors[SCOUTFS_INODE_INDEX_NR];
|
||||
|
||||
/* updated at on each new lock acquisition */
|
||||
atomic64_t last_refreshed;
|
||||
@@ -52,20 +50,11 @@ struct scoutfs_inode_info {
|
||||
struct scoutfs_per_task pt_data_lock;
|
||||
struct scoutfs_data_waitq data_waitq;
|
||||
struct rw_semaphore xattr_rwsem;
|
||||
struct list_head writeback_entry;
|
||||
|
||||
struct scoutfs_lock_coverage ino_lock_cov;
|
||||
|
||||
struct list_head iput_head;
|
||||
unsigned long iput_count;
|
||||
unsigned long iput_flags;
|
||||
struct rb_node writeback_node;
|
||||
|
||||
struct inode inode;
|
||||
};
|
||||
|
||||
/* try to prune dcache aliases with queued iput */
|
||||
#define SI_IPUT_FLAG_PRUNE (1 << 0)
|
||||
|
||||
static inline struct scoutfs_inode_info *SCOUTFS_I(struct inode *inode)
|
||||
{
|
||||
return container_of(inode, struct scoutfs_inode_info, inode);
|
||||
@@ -80,15 +69,11 @@ struct inode *scoutfs_alloc_inode(struct super_block *sb);
|
||||
void scoutfs_destroy_inode(struct inode *inode);
|
||||
int scoutfs_drop_inode(struct inode *inode);
|
||||
void scoutfs_evict_inode(struct inode *inode);
|
||||
void scoutfs_inode_queue_iput(struct inode *inode, unsigned long flags);
|
||||
int scoutfs_orphan_inode(struct inode *inode);
|
||||
|
||||
#define SCOUTFS_IGF_LINKED (1 << 0) /* enoent if nlink == 0 */
|
||||
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf, int igf);
|
||||
struct inode *scoutfs_ilookup_nowait(struct super_block *sb, u64 ino);
|
||||
struct inode *scoutfs_ilookup_nowait_nonewfree(struct super_block *sb, u64 ino);
|
||||
struct inode *scoutfs_iget(struct super_block *sb, u64 ino);
|
||||
struct inode *scoutfs_ilookup(struct super_block *sb, u64 ino);
|
||||
|
||||
|
||||
void scoutfs_inode_init_key(struct scoutfs_key *key, u64 ino);
|
||||
void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
|
||||
u32 minor, u64 ino);
|
||||
int scoutfs_inode_index_start(struct super_block *sb, u64 *seq);
|
||||
@@ -98,9 +83,11 @@ int scoutfs_inode_index_prepare_ino(struct super_block *sb,
|
||||
struct list_head *list, u64 ino,
|
||||
umode_t mode);
|
||||
int scoutfs_inode_index_try_lock_hold(struct super_block *sb,
|
||||
struct list_head *list, u64 seq, bool allocing);
|
||||
struct list_head *list, u64 seq,
|
||||
const struct scoutfs_item_count cnt);
|
||||
int scoutfs_inode_index_lock_hold(struct inode *inode, struct list_head *list,
|
||||
bool set_data_seq, bool allocing);
|
||||
bool set_data_seq,
|
||||
const struct scoutfs_item_count cnt);
|
||||
void scoutfs_inode_index_unlock(struct super_block *sb, struct list_head *list);
|
||||
|
||||
int scoutfs_dirty_inode_item(struct inode *inode, struct scoutfs_lock *lock);
|
||||
@@ -108,8 +95,9 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
|
||||
struct list_head *ind_locks);
|
||||
|
||||
int scoutfs_alloc_ino(struct super_block *sb, bool is_dir, u64 *ino_ret);
|
||||
int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, dev_t rdev,
|
||||
u64 ino, struct scoutfs_lock *lock, struct inode **inode_ret);
|
||||
struct inode *scoutfs_new_inode(struct super_block *sb, struct inode *dir,
|
||||
umode_t mode, dev_t rdev, u64 ino,
|
||||
struct scoutfs_lock *lock);
|
||||
|
||||
void scoutfs_inode_set_meta_seq(struct inode *inode);
|
||||
void scoutfs_inode_set_data_seq(struct inode *inode);
|
||||
@@ -122,27 +110,23 @@ u64 scoutfs_inode_data_version(struct inode *inode);
|
||||
void scoutfs_inode_get_onoff(struct inode *inode, s64 *on, s64 *off);
|
||||
int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock);
|
||||
|
||||
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock);
|
||||
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock,
|
||||
int flags);
|
||||
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
|
||||
struct kstat *stat);
|
||||
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
|
||||
|
||||
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
|
||||
struct scoutfs_lock *primary);
|
||||
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
|
||||
struct scoutfs_lock *primary);
|
||||
void scoutfs_inode_schedule_orphan_dwork(struct super_block *sb);
|
||||
int scoutfs_scan_orphans(struct super_block *sb);
|
||||
|
||||
void scoutfs_inode_queue_writeback(struct inode *inode);
|
||||
int scoutfs_inode_walk_writeback(struct super_block *sb, bool write);
|
||||
|
||||
u64 scoutfs_last_ino(struct super_block *sb);
|
||||
|
||||
void scoutfs_inode_exit(void);
|
||||
int scoutfs_inode_init(void);
|
||||
|
||||
int scoutfs_inode_setup(struct super_block *sb);
|
||||
void scoutfs_inode_start(struct super_block *sb);
|
||||
void scoutfs_inode_orphan_stop(struct super_block *sb);
|
||||
void scoutfs_inode_flush_iput(struct super_block *sb);
|
||||
void scoutfs_inode_destroy(struct super_block *sb);
|
||||
|
||||
#endif
458	kmod/src/ioctl.c
@@ -21,7 +21,6 @@
|
||||
#include <linux/mm.h>
|
||||
#include <linux/sched.h>
|
||||
#include <linux/aio.h>
|
||||
#include <linux/list_sort.h>
|
||||
|
||||
#include "format.h"
|
||||
#include "key.h"
|
||||
@@ -39,8 +38,6 @@
|
||||
#include "hash.h"
|
||||
#include "srch.h"
|
||||
#include "alloc.h"
|
||||
#include "server.h"
|
||||
#include "counters.h"
|
||||
#include "scoutfs_trace.h"
|
||||
|
||||
/*
|
||||
@@ -387,7 +384,7 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
|
||||
if (sblock > eblock)
|
||||
return -EINVAL;
|
||||
|
||||
inode = scoutfs_ilookup_nowait_nonewfree(sb, args.ino);
|
||||
inode = scoutfs_ilookup(sb, args.ino);
|
||||
if (!inode) {
|
||||
ret = -ESTALE;
|
||||
goto out;
|
||||
@@ -543,17 +540,19 @@ out:
|
||||
static long scoutfs_ioc_stat_more(struct file *file, unsigned long arg)
|
||||
{
|
||||
struct inode *inode = file_inode(file);
|
||||
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
|
||||
struct scoutfs_ioctl_stat_more stm;
|
||||
|
||||
if (get_user(stm.valid_bytes, (__u64 __user *)arg))
|
||||
return -EFAULT;
|
||||
|
||||
stm.valid_bytes = min_t(u64, stm.valid_bytes,
|
||||
sizeof(struct scoutfs_ioctl_stat_more));
|
||||
stm.meta_seq = scoutfs_inode_meta_seq(inode);
|
||||
stm.data_seq = scoutfs_inode_data_seq(inode);
|
||||
stm.data_version = scoutfs_inode_data_version(inode);
|
||||
scoutfs_inode_get_onoff(inode, &stm.online_blocks, &stm.offline_blocks);
|
||||
stm.crtime_sec = si->crtime.tv_sec;
|
||||
stm.crtime_nsec = si->crtime.tv_nsec;
|
||||
|
||||
if (copy_to_user((void __user *)arg, &stm, sizeof(stm)))
|
||||
if (copy_to_user((void __user *)arg, &stm, stm.valid_bytes))
|
||||
return -EFAULT;
|
||||
|
||||
return 0;
|
||||
@@ -617,7 +616,6 @@ static long scoutfs_ioc_data_waiting(struct file *file, unsigned long arg)
|
||||
static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
|
||||
{
|
||||
struct inode *inode = file->f_inode;
|
||||
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
|
||||
struct super_block *sb = inode->i_sb;
|
||||
struct scoutfs_ioctl_setattr_more __user *usm = (void __user *)arg;
|
||||
struct scoutfs_ioctl_setattr_more sm;
|
||||
@@ -676,7 +674,8 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
|
||||
|
||||
/* setting only so we don't see 0 data seq with nonzero data_version */
|
||||
set_data_seq = sm.data_version != 0 ? true : false;
|
||||
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq, false);
|
||||
ret = scoutfs_inode_index_lock_hold(inode, &ind_locks, set_data_seq,
|
||||
SIC_SETATTR_MORE());
|
||||
if (ret)
|
||||
goto unlock;
|
||||
|
||||
@@ -686,8 +685,6 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
|
||||
i_size_write(inode, sm.i_size);
|
||||
inode->i_ctime.tv_sec = sm.ctime_sec;
|
||||
inode->i_ctime.tv_nsec = sm.ctime_nsec;
|
||||
si->crtime.tv_sec = sm.crtime_sec;
|
||||
si->crtime.tv_nsec = sm.crtime_nsec;
|
||||
|
||||
scoutfs_update_inode_item(inode, lock, &ind_locks);
|
||||
ret = 0;
|
||||
@@ -870,35 +867,28 @@ static long scoutfs_ioc_statfs_more(struct file *file, unsigned long arg)
|
||||
{
|
||||
struct super_block *sb = file_inode(file)->i_sb;
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct scoutfs_super_block *super;
|
||||
struct scoutfs_super_block *super = &sbi->super;
|
||||
struct scoutfs_ioctl_statfs_more sfm;
|
||||
int ret;
|
||||
|
||||
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
|
||||
if (!super)
|
||||
return -ENOMEM;
|
||||
|
||||
ret = scoutfs_read_super(sb, super);
|
||||
if (ret)
|
||||
goto out;
|
||||
if (get_user(sfm.valid_bytes, (__u64 __user *)arg))
|
||||
return -EFAULT;
|
||||
|
||||
sfm.valid_bytes = min_t(u64, sfm.valid_bytes,
|
||||
sizeof(struct scoutfs_ioctl_statfs_more));
|
||||
sfm.fsid = le64_to_cpu(super->hdr.fsid);
|
||||
sfm.rid = sbi->rid;
|
||||
sfm.total_meta_blocks = le64_to_cpu(super->total_meta_blocks);
|
||||
sfm.total_data_blocks = le64_to_cpu(super->total_data_blocks);
|
||||
sfm.reserved_meta_blocks = scoutfs_server_reserved_meta_blocks(sb);
|
||||
|
||||
ret = scoutfs_client_get_last_seq(sb, &sfm.committed_seq);
|
||||
if (ret)
|
||||
goto out;
|
||||
return ret;
|
||||
|
||||
if (copy_to_user((void __user *)arg, &sfm, sizeof(sfm)))
|
||||
ret = -EFAULT;
|
||||
else
|
||||
ret = 0;
|
||||
out:
|
||||
kfree(super);
|
||||
return ret;
|
||||
if (copy_to_user((void __user *)arg, &sfm, sfm.valid_bytes))
|
||||
return -EFAULT;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct copy_alloc_detail_args {
|
||||
@@ -983,18 +973,12 @@ static long scoutfs_ioc_move_blocks(struct file *file, unsigned long arg)
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (mb.flags & SCOUTFS_IOC_MB_UNKNOWN) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = mnt_want_write_file(file);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
ret = scoutfs_data_move_blocks(from, mb.from_off, mb.len,
|
||||
to, mb.to_off, !!(mb.flags & SCOUTFS_IOC_MB_STAGE),
|
||||
mb.data_version);
|
||||
to, mb.to_off);
|
||||
mnt_drop_write_file(file);
|
||||
out:
|
||||
fput(from_file);
|
||||
@@ -1002,402 +986,6 @@ out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
static long scoutfs_ioc_resize_devices(struct file *file, unsigned long arg)
|
||||
{
|
||||
struct super_block *sb = file_inode(file)->i_sb;
|
||||
struct scoutfs_ioctl_resize_devices __user *urd = (void __user *)arg;
|
||||
struct scoutfs_ioctl_resize_devices rd;
|
||||
struct scoutfs_net_resize_devices nrd;
|
||||
int ret;
|
||||
|
||||
if (!(file->f_mode & FMODE_READ)) {
|
||||
ret = -EBADF;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!capable(CAP_SYS_ADMIN)) {
|
||||
ret = -EPERM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (copy_from_user(&rd, urd, sizeof(rd))) {
|
||||
ret = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
|
||||
nrd.new_total_meta_blocks = cpu_to_le64(rd.new_total_meta_blocks);
|
||||
nrd.new_total_data_blocks = cpu_to_le64(rd.new_total_data_blocks);
|
||||
|
||||
ret = scoutfs_client_resize_devices(sb, &nrd);
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
struct xattr_total_entry {
|
||||
struct rb_node node;
|
||||
struct scoutfs_ioctl_xattr_total xt;
|
||||
u64 fs_seq;
|
||||
u64 fs_total;
|
||||
u64 fs_count;
|
||||
u64 fin_seq;
|
||||
u64 fin_total;
|
||||
s64 fin_count;
|
||||
u64 log_seq;
|
||||
u64 log_total;
|
||||
s64 log_count;
|
||||
};
|
||||
|
||||
static int cmp_xt_entry_name(const struct xattr_total_entry *a,
|
||||
const struct xattr_total_entry *b)
|
||||
|
||||
{
|
||||
return scoutfs_cmp_u64s(a->xt.name[0], b->xt.name[0]) ?:
|
||||
scoutfs_cmp_u64s(a->xt.name[1], b->xt.name[1]) ?:
|
||||
scoutfs_cmp_u64s(a->xt.name[2], b->xt.name[2]);
|
||||
}
|
||||
|
||||
/*
|
||||
* Record the contribution of the three classes of logged items we can
|
||||
* see: the item in the fs_root, items from finalized log btrees, and
|
||||
* items from active log btrees. Once we have the full set the caller
|
||||
* can decide which of the items contribute to the total it sends to the
|
||||
* user.
|
||||
*/
|
||||
static int read_xattr_total_item(struct super_block *sb, struct scoutfs_key *key,
|
||||
u64 seq, u8 flags, void *val, int val_len, int fic, void *arg)
|
||||
{
|
||||
struct scoutfs_xattr_totl_val *tval = val;
|
||||
struct xattr_total_entry *ent;
|
||||
struct xattr_total_entry rd;
|
||||
struct rb_root *root = arg;
|
||||
struct rb_node *parent;
|
||||
struct rb_node **node;
|
||||
int cmp;
|
||||
|
||||
rd.xt.name[0] = le64_to_cpu(key->skxt_a);
|
||||
rd.xt.name[1] = le64_to_cpu(key->skxt_b);
|
||||
rd.xt.name[2] = le64_to_cpu(key->skxt_c);
|
||||
|
||||
/* find entry matching name */
|
||||
node = &root->rb_node;
|
||||
parent = NULL;
|
||||
cmp = -1;
|
||||
while (*node) {
|
||||
parent = *node;
|
||||
ent = container_of(*node, struct xattr_total_entry, node);
|
||||
|
||||
/* sort merge items by key then newest to oldest */
|
||||
cmp = cmp_xt_entry_name(&rd, ent);
|
||||
if (cmp < 0)
|
||||
node = &(*node)->rb_left;
|
||||
else if (cmp > 0)
|
||||
node = &(*node)->rb_right;
|
||||
else
|
||||
break;
|
||||
}
|
||||
|
||||
/* allocate and insert new node if we need to */
|
||||
if (cmp != 0) {
|
||||
ent = kzalloc(sizeof(*ent), GFP_KERNEL);
|
||||
if (!ent)
|
||||
return -ENOMEM;
|
||||
|
||||
memcpy(&ent->xt.name, &rd.xt.name, sizeof(ent->xt.name));
|
||||
|
||||
rb_link_node(&ent->node, parent, node);
|
||||
rb_insert_color(&ent->node, root);
|
||||
}
|
||||
|
||||
if (fic & FIC_FS_ROOT) {
|
||||
ent->fs_seq = seq;
|
||||
ent->fs_total = le64_to_cpu(tval->total);
|
||||
ent->fs_count = le64_to_cpu(tval->count);
|
||||
} else if (fic & FIC_FINALIZED) {
|
||||
ent->fin_seq = seq;
|
||||
ent->fin_total += le64_to_cpu(tval->total);
|
||||
ent->fin_count += le64_to_cpu(tval->count);
|
||||
} else {
|
||||
ent->log_seq = seq;
|
||||
ent->log_total += le64_to_cpu(tval->total);
|
||||
ent->log_count += le64_to_cpu(tval->count);
|
||||
}
|
||||
|
||||
scoutfs_inc_counter(sb, totl_read_item);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* these are always _safe, node stores next */
|
||||
#define for_each_xt_ent(ent, node, root) \
|
||||
for (node = rb_first(root); \
|
||||
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
|
||||
node = rb_next(node), 1); )
|
||||
|
||||
#define for_each_xt_ent_reverse(ent, node, root) \
|
||||
for (node = rb_last(root); \
|
||||
node && (ent = rb_entry(node, struct xattr_total_entry, node), \
|
||||
node = rb_prev(node), 1); )
|
||||
|
||||
static void free_xt_ent(struct rb_root *root, struct xattr_total_entry *ent)
|
||||
{
|
||||
rb_erase(&ent->node, root);
|
||||
kfree(ent);
|
||||
}
|
||||
|
||||
static void free_all_xt_ents(struct rb_root *root)
|
||||
{
|
||||
struct xattr_total_entry *ent;
|
||||
struct rb_node *node;
|
||||
|
||||
for_each_xt_ent(ent, node, root)
|
||||
free_xt_ent(root, ent);
|
||||
}
|
||||
|
||||
/*
|
||||
* Starting from the caller's pos_name, copy the names, totals, and
|
||||
* counts for the .totl. tagged xattrs in the system sorted by their
|
||||
* name until the user's buffer is full. This only sees xattrs that
|
||||
* have been committed. It doesn't use locking to force commits and
|
||||
* block writers so it can be a little bit out of date with respect to
|
||||
* dirty xattrs in memory across the system.
|
||||
*
|
||||
* Our reader has to be careful because the log btree merging code can
|
||||
* write partial results to the fs_root. This means that a reader can
|
||||
* see both cases where new finalized logs should be applied to the old
|
||||
* fs items and where old finalized logs have already been applied to
|
||||
* the partially merged fs items. Currently active logged items are
|
||||
* always applied on top of all cases.
|
||||
*
|
||||
* These cases are differentiated with a combination of sequence numbers
|
||||
* in items, the count of contributing xattrs, and a flag
|
||||
* differentiating finalized and active logged items. This lets us
|
||||
* recognize all cases, including when finalized logs were merged and
|
||||
* deleted the fs item.
|
||||
*
|
||||
* We're allocating a tracking struct for each totl name we see while
|
||||
* traversing the item btrees. The forest reader is providing the items
|
||||
* it finds in leaf blocks that contain the search key. In the worst
|
||||
* case all of these blocks are full and none of the items overlap. At
|
||||
* most, figure order a thousand names per mount. But in practice many
|
||||
* of these factors fall away: leaf blocks aren't full, leaf items
|
||||
* overlap, there aren't finalized log btrees, and not all mounts are
|
||||
* actively changing totals. We're much more likely to only read a
|
||||
* leaf block's worth of totals that have been long since merged into
|
||||
* the fs_root.
|
||||
*/
|
||||
static long scoutfs_ioc_read_xattr_totals(struct file *file, unsigned long arg)
|
||||
{
|
||||
struct super_block *sb = file_inode(file)->i_sb;
|
||||
struct scoutfs_ioctl_read_xattr_totals __user *urxt = (void __user *)arg;
|
||||
struct scoutfs_ioctl_read_xattr_totals rxt;
|
||||
struct scoutfs_ioctl_xattr_total __user *uxt;
|
||||
struct xattr_total_entry *ent;
|
||||
struct scoutfs_key key;
|
||||
struct scoutfs_key bloom_key;
|
||||
struct scoutfs_key start;
|
||||
struct scoutfs_key end;
|
||||
struct rb_root root = RB_ROOT;
|
||||
struct rb_node *node;
|
||||
int count = 0;
|
||||
int ret;
|
||||
|
||||
if (!(file->f_mode & FMODE_READ)) {
|
||||
ret = -EBADF;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!capable(CAP_SYS_ADMIN)) {
|
||||
ret = -EPERM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (copy_from_user(&rxt, urxt, sizeof(rxt))) {
|
||||
ret = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
uxt = (void __user *)rxt.totals_ptr;
|
||||
|
||||
if ((rxt.totals_ptr & (sizeof(__u64) - 1)) ||
|
||||
(rxt.totals_bytes < sizeof(struct scoutfs_ioctl_xattr_total))) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
scoutfs_key_set_zeros(&bloom_key);
|
||||
bloom_key.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
|
||||
scoutfs_xattr_init_totl_key(&start, rxt.pos_name);
|
||||
|
||||
while (rxt.totals_bytes >= sizeof(struct scoutfs_ioctl_xattr_total)) {
|
||||
|
||||
scoutfs_key_set_ones(&end);
|
||||
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
|
||||
if (scoutfs_key_compare(&start, &end) > 0)
|
||||
break;
|
||||
|
||||
key = start;
|
||||
ret = scoutfs_forest_read_items(sb, &key, &bloom_key, &start, &end,
|
||||
read_xattr_total_item, &root);
|
||||
if (ret < 0) {
|
||||
if (ret == -ESTALE) {
|
||||
free_all_xt_ents(&root);
|
||||
continue;
|
||||
}
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (RB_EMPTY_ROOT(&root))
|
||||
break;
|
||||
|
||||
/* trim totals that fall outside of the consistent range */
|
||||
for_each_xt_ent(ent, node, &root) {
|
||||
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
|
||||
if (scoutfs_key_compare(&key, &start) < 0) {
|
||||
free_xt_ent(&root, ent);
|
||||
} else {
|
||||
break;
|
||||
}
|
||||
}
|
||||
for_each_xt_ent_reverse(ent, node, &root) {
|
||||
scoutfs_xattr_init_totl_key(&key, ent->xt.name);
|
||||
if (scoutfs_key_compare(&key, &end) > 0) {
|
||||
free_xt_ent(&root, ent);
|
||||
} else {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/* copy resulting unique non-zero totals to userspace */
|
||||
for_each_xt_ent(ent, node, &root) {
|
||||
if (rxt.totals_bytes < sizeof(ent->xt))
|
||||
break;
|
||||
|
||||
/* start with the fs item if we have it */
|
||||
if (ent->fs_seq != 0) {
|
||||
ent->xt.total = ent->fs_total;
|
||||
ent->xt.count = ent->fs_count;
|
||||
scoutfs_inc_counter(sb, totl_read_fs);
|
||||
}
|
||||
|
||||
/* apply finalized logs if they're newer or creating */
|
||||
if (((ent->fs_seq != 0) && (ent->fin_seq > ent->fs_seq)) ||
|
||||
((ent->fs_seq == 0) && (ent->fin_count > 0))) {
|
||||
ent->xt.total += ent->fin_total;
|
||||
ent->xt.count += ent->fin_count;
|
||||
scoutfs_inc_counter(sb, totl_read_finalized);
|
||||
}
|
||||
|
||||
/* always apply active logs which must be newer than fs and finalized */
|
||||
if (ent->log_seq > 0) {
|
||||
ent->xt.total += ent->log_total;
|
||||
ent->xt.count += ent->log_count;
|
||||
scoutfs_inc_counter(sb, totl_read_logged);
|
||||
}
|
||||
|
||||
if (ent->xt.total != 0 || ent->xt.count != 0) {
|
||||
if (copy_to_user(uxt, &ent->xt, sizeof(ent->xt))) {
|
||||
ret = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
|
||||
uxt++;
|
||||
rxt.totals_bytes -= sizeof(ent->xt);
|
||||
count++;
|
||||
scoutfs_inc_counter(sb, totl_read_copied);
|
||||
}
|
||||
|
||||
free_xt_ent(&root, ent);
|
||||
}
|
||||
|
||||
/* continue after the last possible key read */
|
||||
start = end;
|
||||
scoutfs_key_inc(&start);
|
||||
}
|
||||
|
||||
ret = 0;
|
||||
out:
|
||||
free_all_xt_ents(&root);
|
||||
|
||||
return ret ?: count;
|
||||
}
|
||||
|
||||
static long scoutfs_ioc_get_allocated_inos(struct file *file, unsigned long arg)
|
||||
{
|
||||
struct super_block *sb = file_inode(file)->i_sb;
|
||||
struct scoutfs_ioctl_get_allocated_inos __user *ugai = (void __user *)arg;
|
||||
struct scoutfs_ioctl_get_allocated_inos gai;
|
||||
struct scoutfs_lock *lock = NULL;
|
||||
struct scoutfs_key key;
|
||||
struct scoutfs_key end;
|
||||
u64 __user *uinos;
|
||||
u64 bytes;
|
||||
u64 ino;
|
||||
int nr;
|
||||
int ret;
|
||||
|
||||
if (!(file->f_mode & FMODE_READ)) {
|
||||
ret = -EBADF;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!capable(CAP_SYS_ADMIN)) {
|
||||
ret = -EPERM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (copy_from_user(&gai, ugai, sizeof(gai))) {
|
||||
ret = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if ((gai.inos_ptr & (sizeof(__u64) - 1)) || (gai.inos_bytes < sizeof(__u64))) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
scoutfs_inode_init_key(&key, gai.start_ino);
|
||||
scoutfs_inode_init_key(&end, gai.start_ino | SCOUTFS_LOCK_INODE_GROUP_MASK);
|
||||
uinos = (void __user *)gai.inos_ptr;
|
||||
bytes = gai.inos_bytes;
|
||||
nr = 0;
|
||||
|
||||
ret = scoutfs_lock_ino(sb, SCOUTFS_LOCK_READ, 0, gai.start_ino, &lock);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
while (bytes >= sizeof(*uinos)) {
|
||||
|
||||
ret = scoutfs_item_next(sb, &key, &end, NULL, 0, lock);
|
||||
if (ret < 0) {
|
||||
if (ret == -ENOENT)
|
||||
ret = 0;
|
||||
break;
|
||||
}
|
||||
|
||||
if (key.sk_zone != SCOUTFS_FS_ZONE) {
|
||||
ret = 0;
|
||||
break;
|
||||
}
|
||||
|
||||
/* all fs items are owned by allocated inodes, and _first is always ino */
|
||||
ino = le64_to_cpu(key._sk_first);
|
||||
if (put_user(ino, uinos)) {
|
||||
ret = -EFAULT;
|
||||
break;
|
||||
}
|
||||
|
||||
uinos++;
|
||||
bytes -= sizeof(*uinos);
|
||||
if (++nr == INT_MAX)
|
||||
break;
|
||||
|
||||
scoutfs_inode_init_key(&key, ino + 1);
|
||||
}
|
||||
|
||||
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
|
||||
out:
|
||||
return ret ?: nr;
|
||||
}
|
||||
|
||||
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
switch (cmd) {
|
||||
@@ -1427,12 +1015,6 @@ long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
|
||||
return scoutfs_ioc_alloc_detail(file, arg);
|
||||
case SCOUTFS_IOC_MOVE_BLOCKS:
|
||||
return scoutfs_ioc_move_blocks(file, arg);
|
||||
case SCOUTFS_IOC_RESIZE_DEVICES:
|
||||
return scoutfs_ioc_resize_devices(file, arg);
|
||||
case SCOUTFS_IOC_READ_XATTR_TOTALS:
|
||||
return scoutfs_ioc_read_xattr_totals(file, arg);
|
||||
case SCOUTFS_IOC_GET_ALLOCATED_INOS:
|
||||
return scoutfs_ioc_get_allocated_inos(file, arg);
|
||||
}
|
||||
|
||||
return -ENOTTY;
|
||||
195	kmod/src/ioctl.h
@@ -13,7 +13,8 @@
|
||||
* This is enforced by pahole scripting in external build environments.
|
||||
*/
|
||||
|
||||
#define SCOUTFS_IOCTL_MAGIC 0xE8 /* arbitrarily chosen hole in ioctl-number.rst */
|
||||
/* XXX I have no idea how these are chosen. */
|
||||
#define SCOUTFS_IOCTL_MAGIC 's'
|
||||
|
||||
/*
|
||||
* Packed scoutfs keys rarely cross the ioctl boundary so we have a
|
||||
@@ -87,7 +88,7 @@ enum scoutfs_ino_walk_seq_type {
|
||||
* Adds entries to the user's buffer for each inode that is found in the
|
||||
* given index between the first and last positions.
|
||||
*/
|
||||
#define SCOUTFS_IOC_WALK_INODES _IOW(SCOUTFS_IOCTL_MAGIC, 1, \
|
||||
#define SCOUTFS_IOC_WALK_INODES _IOR(SCOUTFS_IOCTL_MAGIC, 1, \
|
||||
struct scoutfs_ioctl_walk_inodes)
|
||||
|
||||
/*
|
||||
@@ -162,11 +163,11 @@ struct scoutfs_ioctl_ino_path_result {
|
||||
__u64 dir_pos;
|
||||
__u16 path_bytes;
|
||||
__u8 _pad[6];
|
||||
__u8 path[];
|
||||
__u8 path[0];
|
||||
};
|
||||
|
||||
/* Get a single path from the root to the given inode number */
|
||||
#define SCOUTFS_IOC_INO_PATH _IOW(SCOUTFS_IOCTL_MAGIC, 2, \
|
||||
#define SCOUTFS_IOC_INO_PATH _IOR(SCOUTFS_IOCTL_MAGIC, 2, \
|
||||
struct scoutfs_ioctl_ino_path)
|
||||
|
||||
/*
|
||||
@@ -214,16 +215,23 @@ struct scoutfs_ioctl_stage {
|
||||
/*
|
||||
* Give the user inode fields that are not otherwise visible. statx()
|
||||
* isn't always available and xattrs are relatively expensive.
|
||||
*
|
||||
* @valid_bytes stores the number of bytes that are valid in the
|
||||
* structure. The caller sets this to the size of the struct that they
|
||||
* understand. The kernel then fills and copies back the min of the
|
||||
* size they and the user caller understand. The user can tell if a
|
||||
* field is set if all of its bytes are within the valid_bytes that the
|
||||
* kernel set on return.
|
||||
*
|
||||
* New fields are only added to the end of the struct.
|
||||
*/
|
||||
struct scoutfs_ioctl_stat_more {
|
||||
__u64 valid_bytes;
|
||||
__u64 meta_seq;
|
||||
__u64 data_seq;
|
||||
__u64 data_version;
|
||||
__u64 online_blocks;
|
||||
__u64 offline_blocks;
|
||||
__u64 crtime_sec;
|
||||
__u32 crtime_nsec;
|
||||
__u8 _pad[4];
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_STAT_MORE _IOR(SCOUTFS_IOCTL_MAGIC, 5, \
|
||||
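
A minimal userspace sketch of the valid_bytes handshake described above (illustrative; assumes <stddef.h>, <stdio.h>, <sys/ioctl.h> and this header):

static void print_data_version(int fd)
{
	struct scoutfs_ioctl_stat_more stm = {
		.valid_bytes = sizeof(stm),	/* size this caller understands */
	};

	if (ioctl(fd, SCOUTFS_IOC_STAT_MORE, &stm))
		return;

	/* a field is usable only if it lies entirely within valid_bytes */
	if (stm.valid_bytes >= offsetof(struct scoutfs_ioctl_stat_more, data_version) +
			       sizeof(stm.data_version))
		printf("data_version %llu\n",
		       (unsigned long long)stm.data_version);
}
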
@@ -251,16 +259,15 @@ struct scoutfs_ioctl_data_waiting {
|
||||
__u8 _pad[6];
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U64_MAX << 0)
|
||||
#define SCOUTFS_IOC_DATA_WAITING_FLAGS_UNKNOWN (U8_MAX << 0)
|
||||
|
||||
#define SCOUTFS_IOC_DATA_WAITING _IOW(SCOUTFS_IOCTL_MAGIC, 6, \
|
||||
#define SCOUTFS_IOC_DATA_WAITING _IOR(SCOUTFS_IOCTL_MAGIC, 6, \
|
||||
struct scoutfs_ioctl_data_waiting)
|
||||
|
||||
/*
|
||||
* If i_size is set then data_version must be non-zero. If the offline
|
||||
* flag is set then i_size must be set and a offline extent will be
|
||||
* created from offset 0 to i_size. The time fields are always applied
|
||||
* to the inode.
|
||||
* created from offset 0 to i_size.
|
||||
*/
|
||||
struct scoutfs_ioctl_setattr_more {
|
||||
__u64 data_version;
|
||||
@@ -268,12 +275,11 @@ struct scoutfs_ioctl_setattr_more {
|
||||
__u64 flags;
|
||||
__u64 ctime_sec;
|
||||
__u32 ctime_nsec;
|
||||
__u32 crtime_nsec;
|
||||
__u64 crtime_sec;
|
||||
__u8 _pad[4];
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_SETATTR_MORE_OFFLINE (1 << 0)
|
||||
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U64_MAX << 1)
|
||||
#define SCOUTFS_IOC_SETATTR_MORE_UNKNOWN (U8_MAX << 1)
|
||||
|
||||
#define SCOUTFS_IOC_SETATTR_MORE _IOW(SCOUTFS_IOCTL_MAGIC, 7, \
|
||||
struct scoutfs_ioctl_setattr_more)
|
||||
@@ -285,8 +291,8 @@ struct scoutfs_ioctl_listxattr_hidden {
|
||||
__u32 hash_pos;
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOWR(SCOUTFS_IOCTL_MAGIC, 8, \
|
||||
struct scoutfs_ioctl_listxattr_hidden)
|
||||
#define SCOUTFS_IOC_LISTXATTR_HIDDEN _IOR(SCOUTFS_IOCTL_MAGIC, 8, \
|
||||
struct scoutfs_ioctl_listxattr_hidden)
|
||||
|
||||
/*
|
||||
* Return the inode numbers of inodes which might contain the given
|
||||
@@ -339,23 +345,32 @@ struct scoutfs_ioctl_search_xattrs {
|
||||
/* set in output_flags if returned inodes reached last_ino */
|
||||
#define SCOUTFS_SEARCH_XATTRS_OFLAG_END (1ULL << 0)
|
||||
|
||||
#define SCOUTFS_IOC_SEARCH_XATTRS _IOW(SCOUTFS_IOCTL_MAGIC, 9, \
|
||||
struct scoutfs_ioctl_search_xattrs)
|
||||
#define SCOUTFS_IOC_SEARCH_XATTRS _IOR(SCOUTFS_IOCTL_MAGIC, 9, \
|
||||
struct scoutfs_ioctl_search_xattrs)
|
||||
|
||||
/*
|
||||
* Give the user information about the filesystem.
|
||||
*
|
||||
* @valid_bytes stores the number of bytes that are valid in the
|
||||
* structure. The caller sets this to the size of the struct that they
|
||||
* understand. The kernel then fills and copies back the min of the
|
||||
* size they and the user caller understand. The user can tell if a
|
||||
* field is set if all of its bytes are within the valid_bytes that the
|
||||
* kernel set on return.
|
||||
*
|
||||
* @committed_seq: All seqs up to and including this seq have been
|
||||
* committed. Can be compared with meta_seq and data_seq from inodes in
|
||||
* stat_more to discover if changes have been committed to disk.
|
||||
*
|
||||
* New fields are only added to the end of the struct.
|
||||
*/
|
||||
struct scoutfs_ioctl_statfs_more {
|
||||
__u64 valid_bytes;
|
||||
__u64 fsid;
|
||||
__u64 rid;
|
||||
__u64 committed_seq;
|
||||
__u64 total_meta_blocks;
|
||||
__u64 total_data_blocks;
|
||||
__u64 reserved_meta_blocks;
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_STATFS_MORE _IOR(SCOUTFS_IOCTL_MAGIC, 10, \
|
||||
@@ -376,7 +391,7 @@ struct scoutfs_ioctl_data_wait_err {
|
||||
__s64 err;
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOW(SCOUTFS_IOCTL_MAGIC, 11, \
|
||||
#define SCOUTFS_IOC_DATA_WAIT_ERR _IOR(SCOUTFS_IOCTL_MAGIC, 11, \
|
||||
struct scoutfs_ioctl_data_wait_err)
|
||||
|
||||
|
||||
@@ -395,7 +410,7 @@ struct scoutfs_ioctl_alloc_detail_entry {
|
||||
__u8 __pad[6];
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_ALLOC_DETAIL _IOW(SCOUTFS_IOCTL_MAGIC, 12, \
|
||||
#define SCOUTFS_IOC_ALLOC_DETAIL _IOR(SCOUTFS_IOCTL_MAGIC, 12, \
|
||||
struct scoutfs_ioctl_alloc_detail)
|
||||
|
||||
/*
|
||||
@@ -403,13 +418,12 @@ struct scoutfs_ioctl_alloc_detail_entry {
|
||||
* on the same file system.
|
||||
*
|
||||
* from_fd specifies the source file and the ioctl is called on the
|
||||
* destination file. Both files must have write access. from_off specifies
|
||||
* the byte offset in the source, to_off is the byte offset in the
|
||||
* destination, and len is the number of bytes in the region to move. All of
|
||||
* the offsets and lengths must be in multiples of 4KB, except in the case
|
||||
* where the from_off + len ends at the i_size of the source
|
||||
* file. data_version is only used when STAGE flag is set (see below). flags
|
||||
* field is currently only used to optionally specify STAGE behavior.
|
||||
* destination file. Both files must have write access. from_off
|
||||
* specifies the byte offset in the source, to_off is the byte offset in
|
||||
* the destination, and len is the number of bytes in the region to
|
||||
* move. All of the offsets and lengths must be in multiples of 4KB,
|
||||
* except in the case where the from_off + len ends at the i_size of the
|
||||
* source file.
|
||||
*
|
||||
* This interface only moves extents which are block granular, it does
|
||||
* not perform RMW of sub-block byte extents and it does not overwrite
|
||||
@@ -421,142 +435,33 @@ struct scoutfs_ioctl_alloc_detail_entry {
|
||||
* i_size. The i_size update will maintain final partial blocks in the
|
||||
* source.
|
||||
*
|
||||
* If STAGE flag is not set, it will return an error if either of the files
|
||||
* have offline extents. It will return 0 when all of the extents in the
|
||||
* source region have been moved to the destination. Moving extents updates
|
||||
* the ctime, mtime, meta_seq, data_seq, and data_version fields of both the
|
||||
* source and destination inodes. If an error is returned then partial
|
||||
* It will return an error if either of the files have offline extents.
|
||||
* It will return 0 when all of the extents in the source region have
|
||||
* been moved to the destination. Moving extents updates the ctime,
|
||||
* mtime, meta_seq, data_seq, and data_version fields of both the source
|
||||
* and destination inodes. If an error is returned then partial
|
||||
* progress may have been made and inode fields may have been updated.
|
||||
*
|
||||
* If STAGE flag is set, as above except destination range must be in an
|
||||
* offline extent. Fields are updated only for source inode.
|
||||
*
|
||||
* Errors specific to this interface include:
|
||||
*
|
||||
* EINVAL: from_off, len, or to_off aren't a multiple of 4KB; the source
|
||||
* and destination files are the same inode; either the source or
|
||||
* destination is not a regular file; the destination file has
|
||||
* an existing overlapping extent (if STAGE flag not set); the
|
||||
* destination range is not in an offline extent (if STAGE set).
|
||||
* an existing overlapping extent.
|
||||
* EOVERFLOW: either from_off + len or to_off + len exceeded 64bits.
|
||||
* EBADF: from_fd isn't a valid open file descriptor.
|
||||
* EXDEV: the source and destination files are in different filesystems.
|
||||
* EISDIR: either the source or destination is a directory.
|
||||
* ENODATA: either the source or destination file have offline extents and
|
||||
* STAGE flag is not set.
|
||||
* ESTALE: data_version does not match destination data_version.
|
||||
* ENODATA: either the source or destination file have offline extents.
|
||||
*/
|
||||
#define SCOUTFS_IOC_MB_STAGE (1 << 0)
|
||||
#define SCOUTFS_IOC_MB_UNKNOWN (U64_MAX << 1)
|
||||
|
||||
struct scoutfs_ioctl_move_blocks {
|
||||
__u64 from_fd;
|
||||
__u64 from_off;
|
||||
__u64 len;
|
||||
__u64 to_off;
|
||||
__u64 data_version;
|
||||
__u64 flags;
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_MOVE_BLOCKS _IOW(SCOUTFS_IOCTL_MAGIC, 13, \
|
||||
#define SCOUTFS_IOC_MOVE_BLOCKS _IOR(SCOUTFS_IOCTL_MAGIC, 13, \
|
||||
struct scoutfs_ioctl_move_blocks)
|
||||
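
A rough userspace sketch of the call described above (illustrative only; the paths are made up, error handling is elided, and the data_version/flags members appear on only one side of this diff): the ioctl is issued on the destination file and names the source through from_fd.

	int from_fd = open("/mnt/scoutfs/src", O_RDWR);
	int to_fd = open("/mnt/scoutfs/dst", O_RDWR);
	struct scoutfs_ioctl_move_blocks mb = {
		.from_fd = from_fd,
		.from_off = 0,
		.len = 1 << 20,		/* offsets and length in 4KB multiples */
		.to_off = 0,
	};

	ioctl(to_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
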
|
||||
struct scoutfs_ioctl_resize_devices {
|
||||
__u64 new_total_meta_blocks;
|
||||
__u64 new_total_data_blocks;
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_RESIZE_DEVICES \
|
||||
_IOW(SCOUTFS_IOCTL_MAGIC, 14, struct scoutfs_ioctl_resize_devices)
|
||||
|
||||
#define SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR 3
|
||||
|
||||
/*
|
||||
* Copy global totals of .totl. xattr value payloads to the user. This
|
||||
* only sees xattrs which have been committed and this doesn't force
|
||||
* commits of dirty data throughout the system. This can be out of sync
|
||||
* by the amount of xattrs that can be dirty in open transactions that
|
||||
* are being built throughout the system.
|
||||
*
|
||||
* pos_name: The array name of the first total that can be returned.
|
||||
* The name is derived from the key of the xattrs that contribute to the
|
||||
* total. For xattrs with a .totl.1.2.3 key, the pos_name[] should be
|
||||
* {1, 2, 3}.
|
||||
*
|
||||
* totals_ptr: An aligned pointer to a buffer that will be filled with
|
||||
* an array of scoutfs_ioctl_xattr_total structs for each total copied.
|
||||
*
|
||||
* totals_bytes: The size of the buffer in bytes. There must be room
|
||||
* for at least one struct element so that returning 0 can promise that
|
||||
* there were no more totals to copy after the pos_name.
|
||||
*
|
||||
* The number of copied elements is returned and 0 is returned if there
|
||||
* were no more totals to copy after the pos_name.
|
||||
*
|
||||
* In addition to the usual errnos (EIO, EINVAL, EPERM, EFAULT) this
|
||||
* adds:
|
||||
*
|
||||
* EINVAL: The totals_ buffer was not aligned or was not large enough
|
||||
* for a single struct entry.
|
||||
*/
|
||||
struct scoutfs_ioctl_read_xattr_totals {
|
||||
__u64 pos_name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
|
||||
__u64 totals_ptr;
|
||||
__u64 totals_bytes;
|
||||
};
|
||||
|
||||
/*
|
||||
* An individual total that is given to userspace. The total is the
|
||||
* sum of all the values in the xattr payloads matching the name. The
|
||||
* count is the number of xattrs, not number of files, contributing to
|
||||
* the total.
|
||||
*/
|
||||
struct scoutfs_ioctl_xattr_total {
|
||||
__u64 name[SCOUTFS_IOCTL_XATTR_TOTAL_NAME_NR];
|
||||
__u64 total;
|
||||
__u64 count;
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_READ_XATTR_TOTALS \
|
||||
_IOW(SCOUTFS_IOCTL_MAGIC, 15, struct scoutfs_ioctl_read_xattr_totals)
|
||||
|
||||
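
A hedged userspace sketch of paging through totals with pos_name as described above (the buffer size and the resume step are illustrative):

	struct scoutfs_ioctl_xattr_total totals[128];
	struct scoutfs_ioctl_read_xattr_totals rxt = {
		.pos_name = { 0, 0, 0 },	/* start from the first name */
		.totals_ptr = (__u64)(uintptr_t)totals,
		.totals_bytes = sizeof(totals),
	};
	int nr;

	while ((nr = ioctl(fd, SCOUTFS_IOC_READ_XATTR_TOTALS, &rxt)) > 0) {
		/* consume totals[0..nr-1] ... */

		/* resume just past the last name returned */
		memcpy(rxt.pos_name, totals[nr - 1].name, sizeof(rxt.pos_name));
		rxt.pos_name[2]++;	/* naive: ignores carry at U64_MAX */
	}
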
/*
|
||||
* This fills the caller's inos array with inode numbers that are in use
|
||||
* after the start ino, within an internal inode group.
|
||||
*
|
||||
* This only makes a promise about the state of the inode numbers within
|
||||
* the first and last numbers returned by one call. At one time, all of
|
||||
* those inodes were still allocated. They could have changed before
|
||||
* the call returned. And any numbers outside of the first and last
|
||||
* (or single) are undefined.
|
||||
*
|
||||
* This doesn't iterate over all allocated inodes, it only probes a
|
||||
* single group that the start inode is within. This interface was
|
||||
* first introduced to support tests that needed to find out about a
|
||||
* specific inode, while having some other similarly niche uses. It is
|
||||
* unsuitable for a consistent iteration over all the inode numbers in
|
||||
* use.
|
||||
*
|
||||
* This test of inode items doesn't serialize with the inode lifetime
|
||||
* mechanism. It only tells you the numbers of inodes that were once
|
||||
* active in the system and haven't yet been fully deleted. The inode
|
||||
* numbers returned could have been in the process of being deleted and
|
||||
* were already unreachable even before the call started.
|
||||
*
|
||||
* @start_ino: the first inode number that could be returned
|
||||
* @inos_ptr: pointer to an aligned array of 64bit inode numbers
|
||||
* @inos_bytes: the number of bytes available in the inos_ptr array
|
||||
*
|
||||
* Returns errors or the count of inode numbers returned, quite possibly
|
||||
* including 0.
|
||||
*/
|
||||
struct scoutfs_ioctl_get_allocated_inos {
|
||||
__u64 start_ino;
|
||||
__u64 inos_ptr;
|
||||
__u64 inos_bytes;
|
||||
};
|
||||
|
||||
#define SCOUTFS_IOC_GET_ALLOCATED_INOS \
|
||||
_IOW(SCOUTFS_IOCTL_MAGIC, 16, struct scoutfs_ioctl_get_allocated_inos)
|
||||
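
A small illustrative sketch of probing the group containing a known inode number, per the description above (hypothetical userspace code):

	__u64 inos[64];
	struct scoutfs_ioctl_get_allocated_inos gai = {
		.start_ino = ino,
		.inos_ptr = (__u64)(uintptr_t)inos,
		.inos_bytes = sizeof(inos),
	};
	int nr = ioctl(fd, SCOUTFS_IOC_GET_ALLOCATED_INOS, &gai);

	/*
	 * nr > 0: inos[0] through inos[nr - 1] were allocated at some
	 * point during the call; numbers outside that range are undefined.
	 */
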
|
||||
#endif
433	kmod/src/item.c
@@ -95,7 +95,7 @@ struct item_cache_info {
|
||||
|
||||
/* written by page readers, read by shrink */
|
||||
spinlock_t active_lock;
|
||||
struct list_head active_list;
|
||||
struct rb_root active_root;
|
||||
};
|
||||
|
||||
#define DECLARE_ITEM_CACHE_INFO(sb, name) \
|
||||
@@ -127,7 +127,6 @@ struct cached_page {
|
||||
unsigned long lru_time;
|
||||
struct list_head dirty_list;
|
||||
struct list_head dirty_head;
|
||||
u64 max_seq;
|
||||
struct page *page;
|
||||
unsigned int page_off;
|
||||
unsigned int erased_bytes;
|
||||
@@ -139,11 +138,10 @@ struct cached_item {
|
||||
struct list_head dirty_head;
|
||||
unsigned int dirty:1, /* needs to be written */
|
||||
persistent:1, /* in btrees, needs deletion item */
|
||||
deletion:1, /* negative del item for writing */
|
||||
delta:1; /* item vales are combined, freed after write */
|
||||
deletion:1; /* negative del item for writing */
|
||||
unsigned int val_len;
|
||||
struct scoutfs_key key;
|
||||
u64 seq;
|
||||
struct scoutfs_log_item_value liv;
|
||||
char val[0];
|
||||
};
|
||||
|
||||
@@ -151,8 +149,7 @@ struct cached_item {
|
||||
|
||||
static int item_val_bytes(int val_len)
|
||||
{
|
||||
return round_up(offsetof(struct cached_item, val[val_len]),
|
||||
CACHED_ITEM_ALIGN);
|
||||
return round_up(offsetof(struct cached_item, val[val_len]), CACHED_ITEM_ALIGN);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -348,8 +345,7 @@ static struct cached_page *alloc_pg(struct super_block *sb, gfp_t gfp)
|
||||
page = alloc_page(GFP_NOFS | gfp);
|
||||
if (!page || !pg) {
|
||||
kfree(pg);
|
||||
if (page)
|
||||
__free_page(page);
|
||||
__free_page(page);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
@@ -387,12 +383,6 @@ static void put_pg(struct super_block *sb, struct cached_page *pg)
|
||||
}
|
||||
}
|
||||
|
||||
static void update_pg_max_seq(struct cached_page *pg, struct cached_item *item)
|
||||
{
|
||||
if (item->seq > pg->max_seq)
|
||||
pg->max_seq = item->seq;
|
||||
}
|
||||
|
||||
/*
|
||||
* Allocate space for a new item from the free offset at the end of a
|
||||
* cached page. This isn't a blocking allocation, and it's likely that
|
||||
@@ -400,7 +390,8 @@ static void update_pg_max_seq(struct cached_page *pg, struct cached_item *item)
|
||||
* page or checking the free space first.
|
||||
*/
|
||||
static struct cached_item *alloc_item(struct cached_page *pg,
|
||||
struct scoutfs_key *key, u64 seq, bool deletion,
|
||||
struct scoutfs_key *key,
|
||||
struct scoutfs_log_item_value *liv,
|
||||
void *val, int val_len)
|
||||
{
|
||||
struct cached_item *item;
|
||||
@@ -415,24 +406,22 @@ static struct cached_item *alloc_item(struct cached_page *pg,
|
||||
INIT_LIST_HEAD(&item->dirty_head);
|
||||
item->dirty = 0;
|
||||
item->persistent = 0;
|
||||
item->deletion = !!deletion;
|
||||
item->delta = 0;
|
||||
item->deletion = !!(liv->flags & SCOUTFS_LOG_ITEM_FLAG_DELETION);
|
||||
item->val_len = val_len;
|
||||
item->key = *key;
|
||||
item->seq = seq;
|
||||
item->liv = *liv;
|
||||
|
||||
if (val_len)
|
||||
memcpy(item->val, val, val_len);
|
||||
|
||||
update_pg_max_seq(pg, item);
|
||||
|
||||
return item;
|
||||
}
|
||||
|
||||
static void erase_item(struct cached_page *pg, struct cached_item *item)
|
||||
{
|
||||
rbtree_erase(&item->node, &pg->item_root);
|
||||
pg->erased_bytes += item_val_bytes(item->val_len);
|
||||
pg->erased_bytes += round_up(item_val_bytes(item->val_len),
|
||||
CACHED_ITEM_ALIGN);
|
||||
}
|
||||
|
||||
static void lru_add(struct super_block *sb, struct item_cache_info *cinf,
|
||||
@@ -632,8 +621,6 @@ static void mark_item_dirty(struct super_block *sb,
|
||||
list_add_tail(&item->dirty_head, &pg->dirty_list);
|
||||
item->dirty = 1;
|
||||
}
|
||||
|
||||
update_pg_max_seq(pg, item);
|
||||
}
|
||||
|
||||
static void clear_item_dirty(struct super_block *sb,
|
||||
@@ -685,12 +672,6 @@ static void erase_page_items(struct cached_page *pg,
|
||||
* to the dirty list after the left page, and by adding items to the
|
||||
* tail of right's dirty list in key sort order.
|
||||
*
|
||||
* The max_seq of the source page might be larger than all the items
|
||||
* while protecting an erased item from being reclaimed while an older
|
||||
* read is in flight. We don't know where it might be in the source
|
||||
* page so we have to assume that it's in the key range being moved and
|
||||
* update the destination page's max_seq accordingly.
|
||||
*
|
||||
* The caller is responsible for page locking and managing the lru.
|
||||
*/
|
||||
static void move_page_items(struct super_block *sb,
|
||||
@@ -716,7 +697,7 @@ static void move_page_items(struct super_block *sb,
|
||||
if (stop && scoutfs_key_compare(&from->key, stop) >= 0)
|
||||
break;
|
||||
|
||||
to = alloc_item(right, &from->key, from->seq, from->deletion, from->val,
|
||||
to = alloc_item(right, &from->key, &from->liv, from->val,
|
||||
from->val_len);
|
||||
rbtree_insert(&to->node, par, pnode, &right->item_root);
|
||||
par = &to->node;
|
||||
@@ -728,13 +709,10 @@ static void move_page_items(struct super_block *sb,
|
||||
}
|
||||
|
||||
to->persistent = from->persistent;
|
||||
to->delta = from->delta;
|
||||
to->deletion = from->deletion;
|
||||
|
||||
erase_item(left, from);
|
||||
}
|
||||
|
||||
if (left->max_seq > right->max_seq)
|
||||
right->max_seq = left->max_seq;
|
||||
}
|
||||
|
||||
enum page_intersection_type {
|
||||
@@ -874,7 +852,8 @@ static void compact_page_items(struct super_block *sb,
|
||||
|
||||
for (from = first_item(&pg->item_root); from; from = next_item(from)) {
|
||||
to = page_address(empty->page) + page_off;
|
||||
page_off += item_val_bytes(from->val_len);
|
||||
page_off += round_up(item_val_bytes(from->val_len),
|
||||
CACHED_ITEM_ALIGN);
|
||||
|
||||
/* copy the entire item, struct members and all */
|
||||
memcpy(to, from, item_val_bytes(from->val_len));
|
||||
@@ -1281,76 +1260,46 @@ static int cache_empty_page(struct super_block *sb,
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
 * Readers operate independently from dirty items and transactions.
 * They read a set of persistent items and insert them into the cache
 * when there aren't already pages whose key range contains the items.
 * This naturally prefers cached dirty items over stale read items.
 *
 * We have to deal with the case where dirty items are written and
 * invalidated while a read is in flight. The reader won't have seen
 * the items that were dirty in their persistent roots as they started
 * reading. By the time they insert their read pages the previously
 * dirty items have been reclaimed and are not in the cache. The old
 * stale items will be inserted in their place, effectively corrupting
 * the cache by having the dirty items disappear.
 *
 * We fix this by tracking the max seq of items in pages. As readers
 * start they record the current transaction seq. Invalidation skips
 * pages with a max seq greater than the first reader seq because the
 * items in the page have to stick around to prevent the reader's stale
 * items from being inserted.
 *
 * This naturally only affects a small set of pages with items that were
 * written relatively recently. If we're under memory pressure then we
 * probably have a lot of pages and they'll naturally have items that
 * were visible to any readers. We don't bother with the complicated and
 * expensive further refinement of tracking the ranges that are being
 * read and comparing those with pages to invalidate.
 */
|
||||
struct active_reader {
|
||||
struct list_head head;
|
||||
u64 seq;
|
||||
struct rb_node node;
|
||||
struct scoutfs_key start;
|
||||
struct scoutfs_key end;
|
||||
};
|
||||
|
||||
#define INIT_ACTIVE_READER(rdr) \
|
||||
struct active_reader rdr = { .head = LIST_HEAD_INIT(rdr.head) }
|
||||
|
||||
static void add_active_reader(struct super_block *sb, struct active_reader *active)
|
||||
{
|
||||
DECLARE_ITEM_CACHE_INFO(sb, cinf);
|
||||
|
||||
BUG_ON(!list_empty(&active->head));
|
||||
|
||||
active->seq = scoutfs_trans_sample_seq(sb);
|
||||
|
||||
spin_lock(&cinf->active_lock);
|
||||
list_add_tail(&active->head, &cinf->active_list);
|
||||
spin_unlock(&cinf->active_lock);
|
||||
}
|
||||
|
||||
static u64 first_active_reader_seq(struct item_cache_info *cinf)
|
||||
static struct active_reader *active_rbtree_walk(struct rb_root *root,
|
||||
struct scoutfs_key *start,
|
||||
struct scoutfs_key *end,
|
||||
struct rb_node **par,
|
||||
struct rb_node ***pnode)
|
||||
{
|
||||
struct rb_node **node = &root->rb_node;
|
||||
struct rb_node *parent = NULL;
|
||||
struct active_reader *ret = NULL;
|
||||
struct active_reader *active;
|
||||
u64 first;
|
||||
int cmp;
|
||||
|
||||
/* only the calling task adds or deletes this active */
|
||||
spin_lock(&cinf->active_lock);
|
||||
active = list_first_entry_or_null(&cinf->active_list, struct active_reader, head);
|
||||
first = active ? active->seq : U64_MAX;
|
||||
spin_unlock(&cinf->active_lock);
|
||||
while (*node) {
|
||||
parent = *node;
|
||||
active = container_of(*node, struct active_reader, node);
|
||||
|
||||
return first;
|
||||
}
|
||||
|
||||
static void del_active_reader(struct item_cache_info *cinf, struct active_reader *active)
|
||||
{
|
||||
/* only the calling task adds or deletes this active */
|
||||
if (!list_empty(&active->head)) {
|
||||
spin_lock(&cinf->active_lock);
|
||||
list_del_init(&active->head);
|
||||
spin_unlock(&cinf->active_lock);
|
||||
cmp = scoutfs_key_compare_ranges(start, end, &active->start,
|
||||
&active->end);
|
||||
if (cmp < 0) {
|
||||
node = &(*node)->rb_left;
|
||||
} else if (cmp > 0) {
|
||||
node = &(*node)->rb_right;
|
||||
} else {
|
||||
ret = active;
|
||||
node = &(*node)->rb_left;
|
||||
}
|
||||
}
|
||||
|
||||
if (par)
|
||||
*par = parent;
|
||||
if (pnode)
|
||||
*pnode = node;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -1359,16 +1308,16 @@ static void del_active_reader(struct item_cache_info *cinf, struct active_reader
|
||||
* on our root and aren't in dirty or lru lists.
|
||||
*
|
||||
* We need to store deletion items here as we read items from all the
|
||||
* btrees so that they can override older items. The deletion items
|
||||
* will be deleted before we insert the pages into the cache. We don't
|
||||
* insert old versions of items into the tree here so that the trees
|
||||
* don't have to compare seqs.
|
||||
* btrees so that they can override older versions of the items. The
|
||||
* deletion items will be deleted before we insert the pages into the
|
||||
* cache. We don't insert old versions of items into the tree here so
|
||||
* that the trees don't have to compare versions.
|
||||
*/
|
||||
static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 seq, u8 flags,
|
||||
void *val, int val_len, int fic, void *arg)
|
||||
static int read_page_item(struct super_block *sb, struct scoutfs_key *key,
|
||||
struct scoutfs_log_item_value *liv, void *val,
|
||||
int val_len, void *arg)
|
||||
{
|
||||
DECLARE_ITEM_CACHE_INFO(sb, cinf);
|
||||
const bool deletion = !!(flags & SCOUTFS_ITEM_FLAG_DELETION);
|
||||
struct rb_root *root = arg;
|
||||
struct cached_page *right = NULL;
|
||||
struct cached_page *left = NULL;
|
||||
@@ -1382,7 +1331,7 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
|
||||
|
||||
pg = page_rbtree_walk(sb, root, key, key, NULL, NULL, &p_par, &p_pnode);
|
||||
found = item_rbtree_walk(&pg->item_root, key, NULL, &par, &pnode);
|
||||
if (found && (found->seq >= seq))
|
||||
if (found && (le64_to_cpu(found->liv.vers) >= le64_to_cpu(liv->vers)))
|
||||
return 0;
|
||||
|
||||
if (!page_has_room(pg, val_len)) {
|
||||
@@ -1390,13 +1339,10 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
|
||||
/* split needs multiple items, sparse may not have enough */
|
||||
if (!left)
|
||||
return -ENOMEM;
|
||||
|
||||
compact_page_items(sb, pg, left);
|
||||
found = item_rbtree_walk(&pg->item_root, key, NULL, &par,
|
||||
&pnode);
|
||||
}
|
||||
|
||||
item = alloc_item(pg, key, seq, deletion, val, val_len);
|
||||
item = alloc_item(pg, key, liv, val, val_len);
|
||||
if (!item) {
|
||||
/* simpler split of private pages, no locking/dirty/lru */
|
||||
if (!left)
|
||||
@@ -1419,7 +1365,7 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
|
||||
put_pg(sb, pg);
|
||||
|
||||
pg = scoutfs_key_compare(key, &left->end) <= 0 ? left : right;
|
||||
item = alloc_item(pg, key, seq, deletion, val, val_len);
|
||||
item = alloc_item(pg, key, liv, val, val_len);
|
||||
found = item_rbtree_walk(&pg->item_root, key, NULL, &par,
|
||||
&pnode);
|
||||
|
||||
@@ -1450,20 +1396,22 @@ static int read_page_item(struct super_block *sb, struct scoutfs_key *key, u64 s
|
||||
* locks held, but without locking the cache. The regions we read can
|
||||
* be stale with respect to the current cache, which can be read and
|
||||
* dirtied by other cluster lock holders on our node, but the cluster
|
||||
* locks protect the stable items we read. Invalidation is careful not
|
||||
* to drop pages that have items that we couldn't see because they were
|
||||
* dirty when we started reading.
|
||||
* locks protect the stable items we read.
|
||||
*
|
||||
* The forest item reader is reading stable trees that could be
|
||||
* overwritten. It can return -ESTALE which we return to the caller who
|
||||
* will retry the operation and work with a new set of more recent
|
||||
* btrees.
|
||||
* There's also the exciting case where a reader can populate the cache
|
||||
* with stale old persistent data which was read before another local
|
||||
* cluster lock holder was able to read, dirty, write, and then shrink
|
||||
* the cache. In this case the cache couldn't be cleared by lock
|
||||
* invalidation because the caller is actively holding the lock. But
|
||||
* shrinking could evict the cache within the held lock. So we record
|
||||
* that we're an active reader in the range covered by the lock and
|
||||
* shrink will refuse to reclaim any pages that intersect with our read.
|
||||
*/
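The new rule described above is essentially a range-overlap test between a cached page and the set of in-flight reads. A self-contained model of that test, with keys reduced to plain integer ranges (the real code compares scoutfs_key ranges via the active-reader rbtree in this diff), might look like:

```c
#include <stdbool.h>

/* Model of the reclaim rule: keys are reduced to integer ranges. */
struct range { unsigned long start, end; };	/* inclusive bounds */

static bool ranges_overlap(struct range a, struct range b)
{
	return a.start <= b.end && b.start <= a.end;
}

/*
 * A cached page may only be reclaimed when no active reader's locked
 * range overlaps the page: an in-flight read over that range could
 * otherwise re-insert stale persistent items after the page is gone.
 */
static bool page_safe_to_reclaim(struct range page,
				 const struct range *readers, int nr)
{
	for (int i = 0; i < nr; i++) {
		if (ranges_overlap(page, readers[i]))
			return false;
	}
	return true;
}
```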
|
||||
static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
|
||||
struct scoutfs_key *key, struct scoutfs_lock *lock)
|
||||
{
|
||||
struct rb_root root = RB_ROOT;
|
||||
INIT_ACTIVE_READER(active);
|
||||
struct active_reader active;
|
||||
struct cached_page *right = NULL;
|
||||
struct cached_page *pg;
|
||||
struct cached_page *rd;
|
||||
@@ -1479,6 +1427,15 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
|
||||
int pgi;
|
||||
int ret;
|
||||
|
||||
/* stop shrink from freeing new clean data, would let us cache stale */
|
||||
active.start = lock->start;
|
||||
active.end = lock->end;
|
||||
spin_lock(&cinf->active_lock);
|
||||
active_rbtree_walk(&cinf->active_root, &active.start, &active.end,
|
||||
&par, &pnode);
|
||||
rbtree_insert(&active.node, par, pnode, &cinf->active_root);
|
||||
spin_unlock(&cinf->active_lock);
|
||||
|
||||
/* start with an empty page that covers the whole lock */
|
||||
pg = alloc_pg(sb, 0);
|
||||
if (!pg) {
|
||||
@@ -1489,12 +1446,8 @@ static int read_pages(struct super_block *sb, struct item_cache_info *cinf,
|
||||
pg->end = lock->end;
|
||||
rbtree_insert(&pg->node, NULL, &root.rb_node, &root);
|
||||
|
||||
/* set active reader seq before reading persistent roots */
|
||||
add_active_reader(sb, &active);
|
||||
|
||||
start = lock->start;
|
||||
end = lock->end;
|
||||
ret = scoutfs_forest_read_items(sb, key, &lock->start, &start, &end, read_page_item, &root);
|
||||
ret = scoutfs_forest_read_items(sb, lock, key, &start, &end,
|
||||
read_page_item, &root);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
@@ -1538,8 +1491,6 @@ retry:
|
||||
rbtree_erase(&rd->node, &root);
|
||||
rbtree_insert(&rd->node, par, pnode, &cinf->pg_root);
|
||||
lru_accessed(sb, cinf, rd);
|
||||
trace_scoutfs_item_read_page(sb, key, &rd->start,
|
||||
&rd->end);
|
||||
continue;
|
||||
}
|
||||
|
||||
@@ -1570,7 +1521,9 @@ retry:
|
||||
|
||||
ret = 0;
|
||||
out:
|
||||
del_active_reader(cinf, &active);
|
||||
spin_lock(&cinf->active_lock);
|
||||
rbtree_erase(&active.node, &cinf->active_root);
|
||||
spin_unlock(&cinf->active_lock);
|
||||
|
||||
/* free any pages we left dangling on error */
|
||||
for_each_page_safe(&root, rd, pg_tmp) {
|
||||
@@ -1629,7 +1582,7 @@ retry:
|
||||
&lock->end);
|
||||
else
|
||||
ret = read_pages(sb, cinf, key, lock);
|
||||
if (ret < 0 && ret != -ESTALE)
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
goto retry;
|
||||
}
|
||||
@@ -1676,14 +1629,6 @@ static int lock_safe(struct scoutfs_lock *lock, struct scoutfs_key *key,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int optional_lock_mode_match(struct scoutfs_lock *lock, int mode)
|
||||
{
|
||||
if (WARN_ON_ONCE(lock && lock->mode != mode))
|
||||
return -EINVAL;
|
||||
else
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Copy the cached item's value into the caller's value. The number of
|
||||
* bytes copied is returned. A null val returns 0.
|
||||
@@ -1833,28 +1778,6 @@ out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* An item's seq is greater of the client transaction's seq and the
|
||||
* lock's write_seq. This ensures that multiple commits in one lock
|
||||
* grant will have increasing seqs, and new locks in open commits will
|
||||
* also increase the seqs. It lets us limit the inputs of item merging
|
||||
* to the last stable seq and ensure that all the items in open
|
||||
* transactions and granted locks will have greater seqs.
|
||||
*
|
||||
* This is a little awkward for WRITE_ONLY locks which can have much
|
||||
* older versions than the version of locked primary data that they're
|
||||
* operating on behalf of. Callers can optionally provide that primary
|
||||
* lock to get the version from. This ensures that items created under
|
||||
* WRITE_ONLY locks can not have versions less than their primary data.
|
||||
*/
|
||||
static u64 item_seq(struct super_block *sb, struct scoutfs_lock *lock,
|
||||
struct scoutfs_lock *primary)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
|
||||
return max3(sbi->trans_seq, lock->write_seq, primary ? primary->write_seq : 0);
|
||||
}
|
||||
|
||||
/*
|
||||
* Mark the item dirty. Dirtying while holding a transaction pins the
|
||||
* page holding the item and guarantees that the item can be deleted or
|
||||
@@ -1887,8 +1810,8 @@ int scoutfs_item_dirty(struct super_block *sb, struct scoutfs_key *key,
|
||||
if (!item || item->deletion) {
|
||||
ret = -ENOENT;
|
||||
} else {
|
||||
item->seq = item_seq(sb, lock, NULL);
|
||||
mark_item_dirty(sb, cinf, pg, NULL, item);
|
||||
item->liv.vers = cpu_to_le64(lock->write_version);
|
||||
ret = 0;
|
||||
}
|
||||
|
||||
@@ -1904,10 +1827,12 @@ out:
|
||||
*/
|
||||
static int item_create(struct super_block *sb, struct scoutfs_key *key,
|
||||
void *val, int val_len, struct scoutfs_lock *lock,
|
||||
struct scoutfs_lock *primary, int mode, bool force)
|
||||
int mode, bool force)
|
||||
{
|
||||
DECLARE_ITEM_CACHE_INFO(sb, cinf);
|
||||
const u64 seq = item_seq(sb, lock, primary);
|
||||
struct scoutfs_log_item_value liv = {
|
||||
.vers = cpu_to_le64(lock->write_version),
|
||||
};
|
||||
struct cached_item *found;
|
||||
struct cached_item *item;
|
||||
struct cached_page *pg;
|
||||
@@ -1917,8 +1842,7 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
|
||||
|
||||
scoutfs_inc_counter(sb, item_create);
|
||||
|
||||
if ((ret = lock_safe(lock, key, mode)) ||
|
||||
(ret = optional_lock_mode_match(primary, SCOUTFS_LOCK_WRITE)))
|
||||
if ((ret = lock_safe(lock, key, mode)))
|
||||
goto out;
|
||||
|
||||
ret = scoutfs_forest_set_bloom_bits(sb, lock);
|
||||
@@ -1936,7 +1860,7 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
item = alloc_item(pg, key, seq, false, val, val_len);
|
||||
item = alloc_item(pg, key, &liv, val, val_len);
|
||||
rbtree_insert(&item->node, par, pnode, &pg->item_root);
|
||||
mark_item_dirty(sb, cinf, pg, NULL, item);
|
||||
|
||||
@@ -1959,15 +1883,15 @@ out:
|
||||
int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
|
||||
void *val, int val_len, struct scoutfs_lock *lock)
|
||||
{
|
||||
return item_create(sb, key, val, val_len, lock, NULL,
|
||||
return item_create(sb, key, val, val_len, lock,
|
||||
SCOUTFS_LOCK_READ, false);
|
||||
}
|
||||
|
||||
int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
|
||||
void *val, int val_len,
|
||||
struct scoutfs_lock *lock, struct scoutfs_lock *primary)
|
||||
struct scoutfs_lock *lock)
|
||||
{
|
||||
return item_create(sb, key, val, val_len, lock, primary,
|
||||
return item_create(sb, key, val, val_len, lock,
|
||||
SCOUTFS_LOCK_WRITE_ONLY, true);
|
||||
}
|
||||
|
||||
@@ -1981,7 +1905,9 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
|
||||
void *val, int val_len, struct scoutfs_lock *lock)
|
||||
{
|
||||
DECLARE_ITEM_CACHE_INFO(sb, cinf);
|
||||
const u64 seq = item_seq(sb, lock, NULL);
|
||||
struct scoutfs_log_item_value liv = {
|
||||
.vers = cpu_to_le64(lock->write_version),
|
||||
};
|
||||
struct cached_item *item;
|
||||
struct cached_item *found;
|
||||
struct cached_page *pg;
|
||||
@@ -2013,13 +1939,12 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
|
||||
if (val_len)
|
||||
memcpy(found->val, val, val_len);
|
||||
if (val_len < found->val_len)
|
||||
pg->erased_bytes += item_val_bytes(found->val_len) -
|
||||
item_val_bytes(val_len);
|
||||
pg->erased_bytes += found->val_len - val_len;
|
||||
found->val_len = val_len;
|
||||
found->seq = seq;
|
||||
found->liv.vers = liv.vers;
|
||||
mark_item_dirty(sb, cinf, pg, NULL, found);
|
||||
} else {
|
||||
item = alloc_item(pg, key, seq, false, val, val_len);
|
||||
item = alloc_item(pg, key, &liv, val, val_len);
|
||||
item->persistent = found->persistent;
|
||||
rbtree_insert(&item->node, par, pnode, &pg->item_root);
|
||||
mark_item_dirty(sb, cinf, pg, NULL, item);
|
||||
@@ -2035,81 +1960,6 @@ out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Add a delta item. Delta items are an incremental change relative to
|
||||
* the current persistent delta items. We never have to read the
|
||||
* current items so the caller always writes with write only locks. If
|
||||
* combining the current delta item and the caller's item results in a
|
||||
* null we can just drop it, we don't have to emit a deletion item.
|
||||
*
|
||||
* Delta items don't have to worry about creating items with old
|
||||
* versions under write_only locks. The versions don't impact how we
|
||||
* merge two items.
|
||||
*/
|
||||
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
|
||||
void *val, int val_len, struct scoutfs_lock *lock)
|
||||
{
|
||||
DECLARE_ITEM_CACHE_INFO(sb, cinf);
|
||||
const u64 seq = item_seq(sb, lock, NULL);
|
||||
struct cached_item *item;
|
||||
struct cached_page *pg;
|
||||
struct rb_node **pnode;
|
||||
struct rb_node *par;
|
||||
int ret;
|
||||
|
||||
scoutfs_inc_counter(sb, item_delta);
|
||||
|
||||
if ((ret = lock_safe(lock, key, SCOUTFS_LOCK_WRITE_ONLY)))
|
||||
goto out;
|
||||
|
||||
ret = scoutfs_forest_set_bloom_bits(sb, lock);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
ret = get_cached_page(sb, cinf, lock, key, true, true, val_len, &pg);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
__acquire(pg->rwlock);
|
||||
|
||||
item = item_rbtree_walk(&pg->item_root, key, NULL, &par, &pnode);
|
||||
if (item) {
|
||||
if (!item->delta) {
|
||||
ret = -EIO;
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
ret = scoutfs_forest_combine_deltas(key, item->val, item->val_len, val, val_len);
|
||||
if (ret <= 0) {
|
||||
if (ret == 0)
|
||||
ret = -EIO;
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
if (ret == SCOUTFS_DELTA_COMBINED) {
|
||||
item->seq = seq;
|
||||
mark_item_dirty(sb, cinf, pg, NULL, item);
|
||||
} else if (ret == SCOUTFS_DELTA_COMBINED_NULL) {
|
||||
clear_item_dirty(sb, cinf, pg, item);
|
||||
erase_item(pg, item);
|
||||
} else {
|
||||
ret = -EIO;
|
||||
goto unlock;
|
||||
}
|
||||
ret = 0;
|
||||
} else {
|
||||
item = alloc_item(pg, key, seq, false, val, val_len);
|
||||
rbtree_insert(&item->node, par, pnode, &pg->item_root);
|
||||
mark_item_dirty(sb, cinf, pg, NULL, item);
|
||||
item->delta = 1;
|
||||
ret = 0;
|
||||
}
|
||||
|
||||
unlock:
|
||||
write_unlock(&pg->rwlock);
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Delete an item from the cache. We can leave behind a dirty deletion
|
||||
* item if there is a persistent item that needs to be overwritten.
|
||||
@@ -2119,11 +1969,12 @@ out:
|
||||
* deletion item if there isn't one already cached.
|
||||
*/
|
||||
static int item_delete(struct super_block *sb, struct scoutfs_key *key,
|
||||
struct scoutfs_lock *lock, struct scoutfs_lock *primary,
|
||||
int mode, bool force)
|
||||
struct scoutfs_lock *lock, int mode, bool force)
|
||||
{
|
||||
DECLARE_ITEM_CACHE_INFO(sb, cinf);
|
||||
const u64 seq = item_seq(sb, lock, primary);
|
||||
struct scoutfs_log_item_value liv = {
|
||||
.vers = cpu_to_le64(lock->write_version),
|
||||
};
|
||||
struct cached_item *item;
|
||||
struct cached_page *pg;
|
||||
struct rb_node **pnode;
|
||||
@@ -2132,8 +1983,7 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
|
||||
|
||||
scoutfs_inc_counter(sb, item_delete);
|
||||
|
||||
if ((ret = lock_safe(lock, key, mode)) ||
|
||||
(ret = optional_lock_mode_match(primary, SCOUTFS_LOCK_WRITE)))
|
||||
if ((ret = lock_safe(lock, key, mode)))
|
||||
goto out;
|
||||
|
||||
ret = scoutfs_forest_set_bloom_bits(sb, lock);
|
||||
@@ -2152,7 +2002,7 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
|
||||
}
|
||||
|
||||
if (!item) {
|
||||
item = alloc_item(pg, key, seq, false, NULL, 0);
|
||||
item = alloc_item(pg, key, &liv, NULL, 0);
|
||||
rbtree_insert(&item->node, par, pnode, &pg->item_root);
|
||||
}
|
||||
|
||||
@@ -2165,10 +2015,10 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
|
||||
erase_item(pg, item);
|
||||
} else {
|
||||
/* must emit deletion to clobber old persistent item */
|
||||
item->seq = seq;
|
||||
item->liv.vers = cpu_to_le64(lock->write_version);
|
||||
item->liv.flags |= SCOUTFS_LOG_ITEM_FLAG_DELETION;
|
||||
item->deletion = 1;
|
||||
pg->erased_bytes += item_val_bytes(item->val_len) -
|
||||
item_val_bytes(0);
|
||||
pg->erased_bytes += item->val_len;
|
||||
item->val_len = 0;
|
||||
mark_item_dirty(sb, cinf, pg, NULL, item);
|
||||
}
|
||||
@@ -2183,13 +2033,13 @@ out:
|
||||
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
|
||||
struct scoutfs_lock *lock)
|
||||
{
|
||||
return item_delete(sb, key, lock, NULL, SCOUTFS_LOCK_WRITE, false);
|
||||
return item_delete(sb, key, lock, SCOUTFS_LOCK_WRITE, false);
|
||||
}
|
||||
|
||||
int scoutfs_item_delete_force(struct super_block *sb, struct scoutfs_key *key,
|
||||
struct scoutfs_lock *lock, struct scoutfs_lock *primary)
|
||||
struct scoutfs_lock *lock)
|
||||
{
|
||||
return item_delete(sb, key, lock, primary, SCOUTFS_LOCK_WRITE_ONLY, true);
|
||||
return item_delete(sb, key, lock, SCOUTFS_LOCK_WRITE_ONLY, true);
|
||||
}
|
||||
|
||||
u64 scoutfs_item_dirty_pages(struct super_block *sb)
|
||||
@@ -2251,11 +2101,17 @@ int scoutfs_item_write_dirty(struct super_block *sb)
|
||||
struct page *page;
|
||||
LIST_HEAD(pages);
|
||||
LIST_HEAD(pos);
|
||||
u64 max_seq = 0;
|
||||
u64 max_vers = 0;
|
||||
int val_len;
|
||||
int bytes;
|
||||
int off;
|
||||
int ret;
|
||||
|
||||
/* we're relying on struct layout to prepend item value headers */
|
||||
BUILD_BUG_ON(offsetof(struct cached_item, val) !=
|
||||
(offsetof(struct cached_item, liv) +
|
||||
member_sizeof(struct cached_item, liv)));
|
||||
|
||||
if (atomic_read(&cinf->dirty_pages) == 0)
|
||||
return 0;
|
||||
|
||||
@@ -2307,9 +2163,10 @@ int scoutfs_item_write_dirty(struct super_block *sb)
|
||||
list_sort(NULL, &pg->dirty_list, cmp_item_key);
|
||||
|
||||
list_for_each_entry(item, &pg->dirty_list, dirty_head) {
|
||||
val_len = sizeof(item->liv) + item->val_len;
|
||||
bytes = offsetof(struct scoutfs_btree_item_list,
|
||||
val[item->val_len]);
|
||||
max_seq = max(max_seq, item->seq);
|
||||
val[val_len]);
|
||||
max_vers = max(max_vers, le64_to_cpu(item->liv.vers));
|
||||
|
||||
if (off + bytes > PAGE_SIZE) {
|
||||
page = second;
|
||||
@@ -2325,10 +2182,8 @@ int scoutfs_item_write_dirty(struct super_block *sb)
|
||||
prev = &lst->next;
|
||||
|
||||
lst->key = item->key;
|
||||
lst->seq = item->seq;
|
||||
lst->flags = item->deletion ? SCOUTFS_ITEM_FLAG_DELETION : 0;
|
||||
lst->val_len = item->val_len;
|
||||
memcpy(lst->val, item->val, item->val_len);
|
||||
lst->val_len = val_len;
|
||||
memcpy(lst->val, &item->liv, val_len);
|
||||
}
|
||||
|
||||
spin_lock(&cinf->dirty_lock);
|
||||
@@ -2341,8 +2196,8 @@ int scoutfs_item_write_dirty(struct super_block *sb)
|
||||
read_unlock(&pg->rwlock);
|
||||
}
|
||||
|
||||
/* store max item seq in forest's log_trees */
|
||||
scoutfs_forest_set_max_seq(sb, max_seq);
|
||||
/* store max item vers in forest's log_trees */
|
||||
scoutfs_forest_set_max_vers(sb, max_vers);
|
||||
|
||||
/* write all the dirty items into log btree blocks */
|
||||
ret = scoutfs_forest_insert_list(sb, first);
|
||||
@@ -2386,11 +2241,8 @@ retry:
|
||||
dirty_head) {
|
||||
clear_item_dirty(sb, cinf, pg, item);
|
||||
|
||||
if (item->delta)
|
||||
scoutfs_inc_counter(sb, item_delta_written);
|
||||
|
||||
/* free deletion items */
|
||||
if (item->deletion || item->delta)
|
||||
if (item->deletion)
|
||||
erase_item(pg, item);
|
||||
else
|
||||
item->persistent = 1;
|
||||
@@ -2490,8 +2342,6 @@ retry:
|
||||
write_lock(&pg->rwlock);
|
||||
|
||||
pgi = trim_page_intersection(sb, cinf, pg, right, start, end);
|
||||
trace_scoutfs_item_invalidate_page(sb, start, end,
|
||||
&pg->start, &pg->end, pgi);
|
||||
BUG_ON(pgi == PGI_DISJOINT); /* walk wouldn't ret disjoint */
|
||||
|
||||
if (pgi == PGI_INSIDE) {
|
||||
@@ -2514,9 +2364,9 @@ retry:
|
||||
/* inv was entirely inside page, done after bisect */
|
||||
write_trylock_will_succeed(&right->rwlock);
|
||||
rbtree_insert(&right->node, par, pnode, &cinf->pg_root);
|
||||
lru_accessed(sb, cinf, right);
|
||||
write_unlock(&right->rwlock);
|
||||
write_unlock(&pg->rwlock);
|
||||
lru_accessed(sb, cinf, right);
|
||||
right = NULL;
|
||||
break;
|
||||
}
|
||||
@@ -2532,9 +2382,9 @@ retry:
|
||||
|
||||
/*
|
||||
* Shrink the size of the item cache.  We're operating against the fast
|
||||
* path lock ordering and we skip pages if we can't acquire locks. We
|
||||
* can run into dirty pages or pages with items that weren't visible to
|
||||
* the earliest active reader which must be skipped.
|
||||
* path lock ordering and we skip pages if we can't acquire locks.
|
||||
* Similarly, we can run into dirty pages or pages which intersect with
|
||||
* active readers that we can't shrink and also choose to skip.
|
||||
*/
|
||||
static int item_lru_shrink(struct shrinker *shrink,
|
||||
struct shrink_control *sc)
|
||||
@@ -2543,24 +2393,27 @@ static int item_lru_shrink(struct shrinker *shrink,
|
||||
struct item_cache_info,
|
||||
shrinker);
|
||||
struct super_block *sb = cinf->sb;
|
||||
struct active_reader *active;
|
||||
struct cached_page *tmp;
|
||||
struct cached_page *pg;
|
||||
u64 first_reader_seq;
|
||||
LIST_HEAD(list);
|
||||
int nr;
|
||||
|
||||
if (sc->nr_to_scan == 0)
|
||||
goto out;
|
||||
nr = sc->nr_to_scan;
|
||||
|
||||
/* can't invalidate pages with items that weren't visible to first reader */
|
||||
first_reader_seq = first_active_reader_seq(cinf);
|
||||
|
||||
write_lock(&cinf->rwlock);
|
||||
spin_lock(&cinf->lru_lock);
|
||||
|
||||
list_for_each_entry_safe(pg, tmp, &cinf->lru_list, lru_head) {
|
||||
|
||||
if (first_reader_seq <= pg->max_seq) {
|
||||
/* can't invalidate ranges being read, reader might be stale */
|
||||
spin_lock(&cinf->active_lock);
|
||||
active = active_rbtree_walk(&cinf->active_root, &pg->start,
|
||||
&pg->end, NULL, NULL);
|
||||
spin_unlock(&cinf->active_lock);
|
||||
if (active) {
|
||||
scoutfs_inc_counter(sb, item_shrink_page_reader);
|
||||
continue;
|
||||
}
|
||||
@@ -2580,17 +2433,21 @@ static int item_lru_shrink(struct shrinker *shrink,
|
||||
|
||||
__lru_remove(sb, cinf, pg);
|
||||
rbtree_erase(&pg->node, &cinf->pg_root);
|
||||
list_move_tail(&pg->lru_head, &list);
|
||||
invalidate_pcpu_page(pg);
|
||||
write_unlock(&pg->rwlock);
|
||||
|
||||
put_pg(sb, pg);
|
||||
|
||||
if (--nr == 0)
|
||||
break;
|
||||
}
|
||||
|
||||
write_unlock(&cinf->rwlock);
|
||||
spin_unlock(&cinf->lru_lock);
|
||||
|
||||
list_for_each_entry_safe(pg, tmp, &list, lru_head) {
|
||||
list_del_init(&pg->lru_head);
|
||||
put_pg(sb, pg);
|
||||
}
|
||||
out:
|
||||
return min_t(unsigned long, cinf->lru_pages, INT_MAX);
|
||||
}
|
||||
@@ -2629,7 +2486,7 @@ int scoutfs_item_setup(struct super_block *sb)
|
||||
spin_lock_init(&cinf->lru_lock);
|
||||
INIT_LIST_HEAD(&cinf->lru_list);
|
||||
spin_lock_init(&cinf->active_lock);
|
||||
INIT_LIST_HEAD(&cinf->active_list);
|
||||
cinf->active_root = RB_ROOT;
|
||||
|
||||
cinf->pcpu_pages = alloc_percpu(struct item_percpu_pages);
|
||||
if (!cinf->pcpu_pages)
|
||||
@@ -2660,7 +2517,7 @@ void scoutfs_item_destroy(struct super_block *sb)
|
||||
int cpu;
|
||||
|
||||
if (cinf) {
|
||||
BUG_ON(!list_empty(&cinf->active_list));
|
||||
BUG_ON(!RB_EMPTY_ROOT(&cinf->active_root));
|
||||
|
||||
unregister_hotcpu_notifier(&cinf->notifier);
|
||||
unregister_shrinker(&cinf->shrinker);
|
||||
|
||||
@@ -15,15 +15,14 @@ int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
|
||||
void *val, int val_len, struct scoutfs_lock *lock);
|
||||
int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
|
||||
void *val, int val_len,
|
||||
struct scoutfs_lock *lock, struct scoutfs_lock *primary);
|
||||
struct scoutfs_lock *lock);
|
||||
int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
|
||||
void *val, int val_len, struct scoutfs_lock *lock);
|
||||
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
|
||||
void *val, int val_len, struct scoutfs_lock *lock);
|
||||
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
|
||||
struct scoutfs_lock *lock);
|
||||
int scoutfs_item_delete_force(struct super_block *sb, struct scoutfs_key *key,
|
||||
struct scoutfs_lock *lock, struct scoutfs_lock *primary);
|
||||
int scoutfs_item_delete_force(struct super_block *sb,
|
||||
struct scoutfs_key *key,
|
||||
struct scoutfs_lock *lock);
|
||||
|
||||
u64 scoutfs_item_dirty_pages(struct super_block *sb);
|
||||
int scoutfs_item_write_dirty(struct super_block *sb);
|
||||
|
||||
@@ -46,10 +46,4 @@ static inline int dir_emit_dots(struct file *file, void *dirent,
|
||||
}
|
||||
#endif
|
||||
|
||||
#ifdef KC_POSIX_ACL_VALID_USER_NS
|
||||
#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(user_ns, acl)
|
||||
#else
|
||||
#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(acl)
|
||||
#endif
|
||||
|
||||
#endif
|
||||
|
||||
@@ -108,16 +108,6 @@ static inline void scoutfs_key_set_ones(struct scoutfs_key *key)
|
||||
memset(key->__pad, 0, sizeof(key->__pad));
|
||||
}
|
||||
|
||||
static inline bool scoutfs_key_is_ones(struct scoutfs_key *key)
|
||||
{
|
||||
return key->sk_zone == U8_MAX &&
|
||||
key->_sk_first == cpu_to_le64(U64_MAX) &&
|
||||
key->sk_type == U8_MAX &&
|
||||
key->_sk_second == cpu_to_le64(U64_MAX) &&
|
||||
key->_sk_third == cpu_to_le64(U64_MAX) &&
|
||||
key->_sk_fourth == U8_MAX;
|
||||
}
|
||||
|
||||
/*
|
||||
* Return a -1/0/1 comparison of keys.
|
||||
*
|
||||
|
||||
kmod/src/lock.c
@@ -18,7 +18,6 @@
|
||||
#include <linux/mm.h>
|
||||
#include <linux/sort.h>
|
||||
#include <linux/ctype.h>
|
||||
#include <linux/posix_acl.h>
|
||||
|
||||
#include "super.h"
|
||||
#include "lock.h"
|
||||
@@ -35,7 +34,6 @@
|
||||
#include "data.h"
|
||||
#include "xattr.h"
|
||||
#include "item.h"
|
||||
#include "omap.h"
|
||||
|
||||
/*
|
||||
* scoutfs uses a lock service to manage item cache consistency between
|
||||
@@ -67,6 +65,8 @@
|
||||
* relative to that lock state we resend.
|
||||
*/
|
||||
|
||||
#define GRACE_PERIOD_KT ms_to_ktime(2)
|
||||
|
||||
/*
|
||||
* allocated per-super, freed on unmount.
|
||||
*/
|
||||
@@ -74,19 +74,19 @@ struct lock_info {
|
||||
struct super_block *sb;
|
||||
spinlock_t lock;
|
||||
bool shutdown;
|
||||
bool unmounting;
|
||||
struct rb_root lock_tree;
|
||||
struct rb_root lock_range_tree;
|
||||
struct shrinker shrinker;
|
||||
struct list_head lru_list;
|
||||
unsigned long long lru_nr;
|
||||
struct workqueue_struct *workq;
|
||||
struct work_struct inv_work;
|
||||
struct work_struct grant_work;
|
||||
struct list_head grant_list;
|
||||
struct delayed_work inv_dwork;
|
||||
struct list_head inv_list;
|
||||
struct work_struct shrink_work;
|
||||
struct list_head shrink_list;
|
||||
atomic64_t next_refresh_gen;
|
||||
|
||||
struct dentry *tseq_dentry;
|
||||
struct scoutfs_tseq_tree tseq_tree;
|
||||
};
|
||||
@@ -122,37 +122,21 @@ static bool lock_modes_match(int granted, int requested)
|
||||
}
|
||||
|
||||
/*
|
||||
* Invalidate cached data associated with an inode whose lock is going
|
||||
* invalidate cached data associated with an inode whose lock is going
|
||||
* away.
|
||||
*
|
||||
* We try to drop cached dentries and inodes covered by the lock if they
|
||||
* aren't referenced. This removes them from the mount's open map and
|
||||
* allows deletions to be performed by unlink without having to wait for
|
||||
* remote cached inodes to be dropped.
|
||||
*
|
||||
* We kick the d_prune and iput off to async work because they can end
|
||||
* up in final iput and inode eviction item deletion which would
|
||||
* deadlock. d_prune->dput can end up in iput on parents in different
|
||||
* locks entirely.
|
||||
*/
|
||||
static void invalidate_inode(struct super_block *sb, u64 ino)
|
||||
{
|
||||
struct scoutfs_inode_info *si;
|
||||
struct inode *inode;
|
||||
|
||||
inode = scoutfs_ilookup_nowait_nonewfree(sb, ino);
|
||||
inode = scoutfs_ilookup(sb, ino);
|
||||
if (inode) {
|
||||
si = SCOUTFS_I(inode);
|
||||
|
||||
scoutfs_inc_counter(sb, lock_invalidate_inode);
|
||||
if (S_ISREG(inode->i_mode)) {
|
||||
truncate_inode_pages(inode->i_mapping, 0);
|
||||
scoutfs_data_wait_changed(inode);
|
||||
}
|
||||
|
||||
forget_all_cached_acls(inode);
|
||||
|
||||
scoutfs_inode_queue_iput(inode, SI_IPUT_FLAG_PRUNE);
|
||||
iput(inode);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -203,7 +187,6 @@ retry:
|
||||
}
|
||||
spin_unlock(&lock->cov_list_lock);
|
||||
|
||||
/* invalidate inodes after removing coverage so drop/evict aren't covered */
|
||||
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
|
||||
ino = le64_to_cpu(lock->start.ski_ino);
|
||||
last = le64_to_cpu(lock->end.ski_ino);
|
||||
@@ -241,11 +224,11 @@ static void lock_free(struct lock_info *linfo, struct scoutfs_lock *lock)
|
||||
BUG_ON(!RB_EMPTY_NODE(&lock->node));
|
||||
BUG_ON(!RB_EMPTY_NODE(&lock->range_node));
|
||||
BUG_ON(!list_empty(&lock->lru_head));
|
||||
BUG_ON(!list_empty(&lock->grant_head));
|
||||
BUG_ON(!list_empty(&lock->inv_head));
|
||||
BUG_ON(!list_empty(&lock->shrink_head));
|
||||
BUG_ON(!list_empty(&lock->cov_list));
|
||||
|
||||
kfree(lock->inode_deletion_data);
|
||||
kfree(lock);
|
||||
}
|
||||
|
||||
@@ -268,8 +251,8 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
|
||||
RB_CLEAR_NODE(&lock->node);
|
||||
RB_CLEAR_NODE(&lock->range_node);
|
||||
INIT_LIST_HEAD(&lock->lru_head);
|
||||
INIT_LIST_HEAD(&lock->grant_head);
|
||||
INIT_LIST_HEAD(&lock->inv_head);
|
||||
INIT_LIST_HEAD(&lock->inv_list);
|
||||
INIT_LIST_HEAD(&lock->shrink_head);
|
||||
spin_lock_init(&lock->cov_list_lock);
|
||||
INIT_LIST_HEAD(&lock->cov_list);
|
||||
@@ -279,7 +262,6 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
|
||||
lock->sb = sb;
|
||||
init_waitqueue_head(&lock->waitq);
|
||||
lock->mode = SCOUTFS_LOCK_NULL;
|
||||
lock->invalidating_mode = SCOUTFS_LOCK_NULL;
|
||||
|
||||
atomic64_set(&lock->forest_bloom_nr, 0);
|
||||
|
||||
@@ -316,6 +298,23 @@ static bool lock_counts_match(int granted, unsigned int *counts)
|
||||
return true;
|
||||
}
|
||||
|
||||
/*
|
||||
* Returns true if there are any mode counts that match with the desired
|
||||
* mode. There can be other non-matching counts as well but we're only
|
||||
* testing for the existence of any matching counts.
|
||||
*/
|
||||
static bool lock_count_match_exists(int desired, unsigned int *counts)
|
||||
{
|
||||
enum scoutfs_lock_mode mode;
|
||||
|
||||
for (mode = 0; mode < SCOUTFS_LOCK_NR_MODES; mode++) {
|
||||
if (counts[mode] && lock_modes_match(desired, mode))
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
* An idle lock has nothing going on. It can be present in the lru and
|
||||
* can be freed by the final put when it has a null mode.
|
||||
@@ -533,15 +532,45 @@ static void put_lock(struct lock_info *linfo,struct scoutfs_lock *lock)
|
||||
}
|
||||
|
||||
/*
|
||||
* The caller has made a change (set a lock mode) which can let one of the
|
||||
* invalidating locks make forward progress.
|
||||
* Locks have a grace period that extends after activity and prevents
|
||||
* invalidation. It's intended to let nodes do reasonable batches of
|
||||
* work as locks ping pong between nodes that are doing conflicting
|
||||
* work.
|
||||
*/
|
||||
static void extend_grace(struct super_block *sb, struct scoutfs_lock *lock)
|
||||
{
|
||||
ktime_t now = ktime_get();
|
||||
|
||||
if (ktime_after(now, lock->grace_deadline))
|
||||
scoutfs_inc_counter(sb, lock_grace_set);
|
||||
else
|
||||
scoutfs_inc_counter(sb, lock_grace_extended);
|
||||
|
||||
lock->grace_deadline = ktime_add(now, GRACE_PERIOD_KT);
|
||||
}
|
||||
|
||||
static void queue_grant_work(struct lock_info *linfo)
|
||||
{
|
||||
assert_spin_locked(&linfo->lock);
|
||||
|
||||
if (!list_empty(&linfo->grant_list) && !linfo->shutdown)
|
||||
queue_work(linfo->workq, &linfo->grant_work);
|
||||
}
|
||||
|
||||
/*
|
||||
* We immediately queue work on the assumption that the caller might
|
||||
* have made a change (set a lock mode) which can let one of the
|
||||
* invalidating locks make forward progress, even if other locks are
|
||||
* waiting for their grace period to elapse. It's a trade-off between
|
||||
* invalidation latency and burning cpu repeatedly finding that locks
|
||||
* are still in their grace period.
|
||||
*/
|
||||
static void queue_inv_work(struct lock_info *linfo)
|
||||
{
|
||||
assert_spin_locked(&linfo->lock);
|
||||
|
||||
if (!list_empty(&linfo->inv_list))
|
||||
queue_work(linfo->workq, &linfo->inv_work);
|
||||
if (!list_empty(&linfo->inv_list) && !linfo->shutdown)
|
||||
mod_delayed_work(linfo->workq, &linfo->inv_dwork, 0);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -589,17 +618,80 @@ static void bug_on_inconsistent_grant_cache(struct super_block *sb,
|
||||
}
|
||||
|
||||
/*
|
||||
* The client is receiving a grant response message from the server.
|
||||
* This is being called synchronously in the networking receive path so
|
||||
* our work should be quick and reasonably non-blocking.
|
||||
* Each lock has received a grant response message from the server.
|
||||
*
|
||||
* The server's state machine can immediately send an invalidate request
|
||||
* after sending this grant response. We won't process the incoming
|
||||
* invalidate request until after processing this grant response.
|
||||
* Grant responses can be reordered with incoming invalidation requests
|
||||
* from the server so we have to be careful to only set the new mode
|
||||
* once the old mode matches.
|
||||
*
|
||||
* We extend the grace period as we grant the lock if there is a waiting
|
||||
* locker who can use the lock. This stops invalidation from pulling
|
||||
* the granted lock out from under the requester, resulting in a lot of
|
||||
* churn with no forward progress. Using the grace period avoids having
|
||||
* to identify a specific waiter and give it an acquired lock. It's
|
||||
* also very similar to waking up the locker and having it win the race
|
||||
* against the invalidation. In that case they'd extend the grace
|
||||
* period anyway as they unlock.
|
||||
*/
|
||||
static void lock_grant_worker(struct work_struct *work)
|
||||
{
|
||||
struct lock_info *linfo = container_of(work, struct lock_info,
|
||||
grant_work);
|
||||
struct super_block *sb = linfo->sb;
|
||||
struct scoutfs_net_lock_grant_response *gr;
|
||||
struct scoutfs_net_lock *nl;
|
||||
struct scoutfs_lock *lock;
|
||||
struct scoutfs_lock *tmp;
|
||||
|
||||
scoutfs_inc_counter(sb, lock_grant_work);
|
||||
|
||||
spin_lock(&linfo->lock);
|
||||
|
||||
list_for_each_entry_safe(lock, tmp, &linfo->grant_list, grant_head) {
|
||||
gr = &lock->grant_resp;
|
||||
nl = &lock->grant_resp.nl;
|
||||
|
||||
/* wait for reordered invalidation to finish */
|
||||
if (lock->mode != nl->old_mode)
|
||||
continue;
|
||||
|
||||
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode,
|
||||
nl->new_mode);
|
||||
|
||||
if (!lock_mode_can_read(nl->old_mode) &&
|
||||
lock_mode_can_read(nl->new_mode)) {
|
||||
lock->refresh_gen =
|
||||
atomic64_inc_return(&linfo->next_refresh_gen);
|
||||
}
|
||||
|
||||
lock->request_pending = 0;
|
||||
lock->mode = nl->new_mode;
|
||||
lock->write_version = le64_to_cpu(nl->write_version);
|
||||
lock->roots = gr->roots;
|
||||
|
||||
if (lock_count_match_exists(nl->new_mode, lock->waiters))
|
||||
extend_grace(sb, lock);
|
||||
|
||||
trace_scoutfs_lock_granted(sb, lock);
|
||||
list_del_init(&lock->grant_head);
|
||||
wake_up(&lock->waitq);
|
||||
put_lock(linfo, lock);
|
||||
}
|
||||
|
||||
/* invalidations might be waiting for our reordered grant */
|
||||
queue_inv_work(linfo);
|
||||
spin_unlock(&linfo->lock);
|
||||
}
|
||||
|
||||
/*
|
||||
* The client is receiving a grant response message from the server. We
|
||||
* find the lock, record the response, and add it to the list for grant
|
||||
* work to process.
|
||||
*/
|
||||
int scoutfs_lock_grant_response(struct super_block *sb,
|
||||
struct scoutfs_net_lock *nl)
|
||||
struct scoutfs_net_lock_grant_response *gr)
|
||||
{
|
||||
struct scoutfs_net_lock *nl = &gr->nl;
|
||||
DECLARE_LOCK_INFO(sb, linfo);
|
||||
struct scoutfs_lock *lock;
|
||||
|
||||
@@ -613,63 +705,62 @@ int scoutfs_lock_grant_response(struct super_block *sb,
|
||||
trace_scoutfs_lock_grant_response(sb, lock);
|
||||
BUG_ON(!lock->request_pending);
|
||||
|
||||
bug_on_inconsistent_grant_cache(sb, lock, nl->old_mode, nl->new_mode);
|
||||
|
||||
if (!lock_mode_can_read(nl->old_mode) && lock_mode_can_read(nl->new_mode))
|
||||
lock->refresh_gen = atomic64_inc_return(&linfo->next_refresh_gen);
|
||||
|
||||
lock->request_pending = 0;
|
||||
lock->mode = nl->new_mode;
|
||||
lock->write_seq = le64_to_cpu(nl->write_seq);
|
||||
|
||||
trace_scoutfs_lock_granted(sb, lock);
|
||||
wake_up(&lock->waitq);
|
||||
put_lock(linfo, lock);
|
||||
lock->grant_resp = *gr;
|
||||
list_add_tail(&lock->grant_head, &linfo->grant_list);
|
||||
queue_grant_work(linfo);
|
||||
|
||||
spin_unlock(&linfo->lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct inv_req {
|
||||
struct list_head head;
|
||||
struct scoutfs_lock *lock;
|
||||
u64 net_id;
|
||||
struct scoutfs_net_lock nl;
|
||||
};
|
||||
|
||||
/*
|
||||
* Each lock has received a lock invalidation request from the server
|
||||
* which specifies a new mode for the lock. Our processing state
|
||||
* machine and server failover and lock recovery can both conspire to
|
||||
* give us triplicate invalidation requests. The incoming requests for
|
||||
* a given lock need to be processed in order, but we can process locks
|
||||
* in any order.
|
||||
* which specifies a new mode for the lock. The server will only send
|
||||
* one invalidation request at a time for each lock.
|
||||
*
|
||||
* This is an unsolicited request from the server so it can arrive at
|
||||
* any time after we make the server aware of the lock. We wait for
|
||||
* users of the current mode to unlock before invalidating.
|
||||
* any time after we make the server aware of the lock by initially
|
||||
* requesting it. We wait for users of the current mode to unlock
|
||||
* before invalidating.
|
||||
*
|
||||
* This can arrive on behalf of our request for a mode that conflicts
|
||||
* with our current mode. We have to proceed while we have a request
|
||||
* pending. We can also be racing with shrink requests being sent while
|
||||
* we're invalidating.
|
||||
*
|
||||
* This can be processed concurrently and experience reordering with a
|
||||
* grant response sent back-to-back from the server. We carefully only
|
||||
* invalidate once the lock mode matches what the server told us to
|
||||
* invalidate.
|
||||
*
|
||||
* We delay invalidation processing until a grace period has elapsed
|
||||
* since the last unlock. The intent is to let users do a reasonable
|
||||
* batch of work before dropping the lock. Continuous unlocking can
|
||||
* continuously extend the deadline.
|
||||
*
|
||||
* Before we start invalidating the lock we set the lock to the new
|
||||
* mode, preventing further incompatible users of the old mode from
|
||||
* using the lock while we're invalidating. We record the previously
|
||||
* granted mode so that we can send lock recover responses with the old
|
||||
* granted mode during invalidation.
|
||||
* using the lock while we're invalidating.
|
||||
*
|
||||
* This does a lot of serialized inode invalidation in one context and
|
||||
* performs a lot of repeated calls to sync. It would be nice to get
|
||||
* some concurrent inode invalidation and to more carefully only call
|
||||
* sync when needed.
|
||||
*/
|
||||
static void lock_invalidate_worker(struct work_struct *work)
|
||||
{
|
||||
struct lock_info *linfo = container_of(work, struct lock_info, inv_work);
|
||||
struct lock_info *linfo = container_of(work, struct lock_info,
|
||||
inv_dwork.work);
|
||||
struct super_block *sb = linfo->sb;
|
||||
struct scoutfs_net_lock *nl;
|
||||
struct scoutfs_lock *lock;
|
||||
struct scoutfs_lock *tmp;
|
||||
struct inv_req *ireq;
|
||||
unsigned long delay = MAX_JIFFY_OFFSET;
|
||||
ktime_t now = ktime_get();
|
||||
ktime_t deadline;
|
||||
LIST_HEAD(ready);
|
||||
u64 net_id;
|
||||
int ret;
|
||||
|
||||
scoutfs_inc_counter(sb, lock_invalidate_work);
|
||||
@@ -677,15 +768,27 @@ static void lock_invalidate_worker(struct work_struct *work)
|
||||
spin_lock(&linfo->lock);
|
||||
|
||||
list_for_each_entry_safe(lock, tmp, &linfo->inv_list, inv_head) {
|
||||
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
|
||||
nl = &ireq->nl;
|
||||
nl = &lock->inv_nl;
|
||||
|
||||
/* skip if grace hasn't elapsed, record earliest */
|
||||
deadline = lock->grace_deadline;
|
||||
if (ktime_before(now, deadline)) {
|
||||
delay = min(delay,
|
||||
nsecs_to_jiffies(ktime_to_ns(
|
||||
ktime_sub(deadline, now))));
|
||||
scoutfs_inc_counter(linfo->sb, lock_grace_wait);
|
||||
continue;
|
||||
}
|
||||
|
||||
/* wait for reordered grant to finish */
|
||||
if (lock->mode != nl->old_mode)
|
||||
continue;
|
||||
|
||||
/* wait until incompatible holders unlock */
|
||||
if (!lock_counts_match(nl->new_mode, lock->users))
|
||||
continue;
|
||||
|
||||
/* set the new mode, no incompatible users during inval, recov needs old */
|
||||
lock->invalidating_mode = lock->mode;
|
||||
/* set the new mode, no incompatible users during inval */
|
||||
lock->mode = nl->new_mode;
|
||||
|
||||
/* move everyone that's ready to our private list */
|
||||
@@ -695,23 +798,18 @@ static void lock_invalidate_worker(struct work_struct *work)
|
||||
spin_unlock(&linfo->lock);
|
||||
|
||||
if (list_empty(&ready))
|
||||
return;
|
||||
goto out;
|
||||
|
||||
/* invalidate once the lock is ready */
|
||||
list_for_each_entry(lock, &ready, inv_head) {
|
||||
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
|
||||
nl = &ireq->nl;
|
||||
nl = &lock->inv_nl;
|
||||
net_id = lock->inv_net_id;
|
||||
|
||||
/* only lock protocol, inv can't call subsystems after shutdown */
|
||||
if (!linfo->shutdown) {
|
||||
ret = lock_invalidate(sb, lock, nl->old_mode, nl->new_mode);
|
||||
BUG_ON(ret);
|
||||
}
|
||||
ret = lock_invalidate(sb, lock, nl->old_mode, nl->new_mode);
|
||||
BUG_ON(ret);
|
||||
|
||||
/* respond with the key and modes from the request, server might have died */
|
||||
ret = scoutfs_client_lock_response(sb, ireq->net_id, nl);
|
||||
if (ret == -ENOTCONN)
|
||||
ret = 0;
|
||||
/* respond with the key and modes from the request */
|
||||
ret = scoutfs_client_lock_response(sb, net_id, nl);
|
||||
BUG_ON(ret);
|
||||
|
||||
scoutfs_inc_counter(sb, lock_invalidate_response);
|
||||
@@ -721,91 +819,53 @@ static void lock_invalidate_worker(struct work_struct *work)
|
||||
spin_lock(&linfo->lock);
|
||||
|
||||
list_for_each_entry_safe(lock, tmp, &ready, inv_head) {
|
||||
ireq = list_first_entry(&lock->inv_list, struct inv_req, head);
|
||||
list_del_init(&lock->inv_head);
|
||||
|
||||
lock->invalidate_pending = 0;
|
||||
trace_scoutfs_lock_invalidated(sb, lock);
|
||||
|
||||
list_del(&ireq->head);
|
||||
kfree(ireq);
|
||||
|
||||
lock->invalidating_mode = SCOUTFS_LOCK_NULL;
|
||||
|
||||
if (list_empty(&lock->inv_list)) {
|
||||
/* finish if another request didn't arrive */
|
||||
list_del_init(&lock->inv_head);
|
||||
lock->invalidate_pending = 0;
|
||||
wake_up(&lock->waitq);
|
||||
} else {
|
||||
/* another request arrived, back on the list and requeue */
|
||||
list_move_tail(&lock->inv_head, &linfo->inv_list);
|
||||
queue_inv_work(linfo);
|
||||
}
|
||||
|
||||
wake_up(&lock->waitq);
|
||||
put_lock(linfo, lock);
|
||||
}
|
||||
|
||||
/* grant might have been waiting for invalidate request */
|
||||
queue_grant_work(linfo);
|
||||
spin_unlock(&linfo->lock);
|
||||
|
||||
out:
|
||||
/* queue delayed work if invalidations waiting on grace deadline */
|
||||
if (delay != MAX_JIFFY_OFFSET)
|
||||
queue_delayed_work(linfo->workq, &linfo->inv_dwork, delay);
|
||||
}
|
||||
|
||||
/*
|
||||
* Add an incoming invalidation request to the end of the list on the
|
||||
* lock and queue it for blocking invalidation work. This is being
|
||||
* called synchronously in the net recv path to avoid reordering with
|
||||
* grants that were sent immediately before the server sent this
|
||||
* invalidation.
|
||||
* Record an incoming invalidate request from the server and add its lock
|
||||
* to the list for processing.
|
||||
*
|
||||
* Incoming invalidation requests are a function of the remote lock
|
||||
* server's state machine and are slightly decoupled from our lock
|
||||
* state. We can receive duplicate requests if the server is quick
|
||||
* enough to send the next request after we send a previous reply, or if
|
||||
* pending invalidation spans server failover and lock recovery.
|
||||
*
|
||||
* Similarly, we can get a request to invalidate a lock we don't have if
|
||||
* invalidation finished just after lock recovery to a new server.
|
||||
* Happily we can just reply because we satisfy the invalidation
|
||||
* response promise to not be using the old lock's mode if the lock
|
||||
* doesn't exist.
|
||||
* This is trusting the server and will crash if it's sent bad requests :/
|
||||
*/
|
||||
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
|
||||
struct scoutfs_net_lock *nl)
|
||||
{
|
||||
DECLARE_LOCK_INFO(sb, linfo);
|
||||
struct scoutfs_lock *lock = NULL;
|
||||
struct inv_req *ireq;
|
||||
int ret = 0;
|
||||
struct scoutfs_lock *lock;
|
||||
|
||||
scoutfs_inc_counter(sb, lock_invalidate_request);
|
||||
|
||||
ireq = kmalloc(sizeof(struct inv_req), GFP_NOFS);
|
||||
BUG_ON(!ireq); /* lock server doesn't handle response errors */
|
||||
if (ireq == NULL) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
spin_lock(&linfo->lock);
|
||||
lock = get_lock(sb, &nl->key);
|
||||
BUG_ON(!lock);
|
||||
if (lock) {
|
||||
BUG_ON(lock->invalidate_pending);
|
||||
lock->invalidate_pending = 1;
|
||||
lock->inv_nl = *nl;
|
||||
lock->inv_net_id = net_id;
|
||||
list_add_tail(&lock->inv_head, &linfo->inv_list);
|
||||
trace_scoutfs_lock_invalidate_request(sb, lock);
|
||||
ireq->lock = lock;
|
||||
ireq->net_id = net_id;
|
||||
ireq->nl = *nl;
|
||||
if (list_empty(&lock->inv_list)) {
|
||||
list_add_tail(&lock->inv_head, &linfo->inv_list);
|
||||
lock->invalidate_pending = 1;
|
||||
queue_inv_work(linfo);
|
||||
}
|
||||
list_add_tail(&ireq->head, &lock->inv_list);
|
||||
queue_inv_work(linfo);
|
||||
}
|
||||
spin_unlock(&linfo->lock);
|
||||
|
||||
out:
|
||||
if (!lock) {
|
||||
ret = scoutfs_client_lock_response(sb, net_id, nl);
|
||||
BUG_ON(ret); /* lock server doesn't fence timed out client requests */
|
||||
}
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -820,7 +880,6 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
|
||||
{
|
||||
DECLARE_LOCK_INFO(sb, linfo);
|
||||
struct scoutfs_net_lock_recover *nlr;
|
||||
enum scoutfs_lock_mode mode;
|
||||
struct scoutfs_lock *lock;
|
||||
struct scoutfs_lock *next;
|
||||
struct rb_node *node;
|
||||
@@ -841,15 +900,10 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
|
||||
|
||||
for (i = 0; lock && i < SCOUTFS_NET_LOCK_MAX_RECOVER_NR; i++) {
|
||||
|
||||
if (lock->invalidating_mode != SCOUTFS_LOCK_NULL)
|
||||
mode = lock->invalidating_mode;
|
||||
else
|
||||
mode = lock->mode;
|
||||
|
||||
nlr->locks[i].key = lock->start;
|
||||
nlr->locks[i].write_seq = cpu_to_le64(lock->write_seq);
|
||||
nlr->locks[i].old_mode = mode;
|
||||
nlr->locks[i].new_mode = mode;
|
||||
nlr->locks[i].write_version = cpu_to_le64(lock->write_version);
|
||||
nlr->locks[i].old_mode = lock->mode;
|
||||
nlr->locks[i].new_mode = lock->mode;
|
||||
|
||||
node = rb_next(&lock->node);
|
||||
if (node)
|
||||
@@ -942,7 +996,7 @@ static int lock_key_range(struct super_block *sb, enum scoutfs_lock_mode mode, i
|
||||
lock_inc_count(lock->waiters, mode);
|
||||
|
||||
for (;;) {
|
||||
if (WARN_ON_ONCE(linfo->shutdown)) {
|
||||
if (linfo->shutdown) {
|
||||
ret = -ESHUTDOWN;
|
||||
break;
|
||||
}
|
||||
@@ -987,14 +1041,8 @@ static int lock_key_range(struct super_block *sb, enum scoutfs_lock_mode mode, i
|
||||
|
||||
trace_scoutfs_lock_wait(sb, lock);
|
||||
|
||||
if (flags & SCOUTFS_LKF_INTERRUPTIBLE) {
|
||||
ret = wait_event_interruptible(lock->waitq,
|
||||
lock_wait_cond(sb, lock, mode));
|
||||
} else {
|
||||
wait_event(lock->waitq, lock_wait_cond(sb, lock, mode));
|
||||
ret = 0;
|
||||
}
|
||||
|
||||
ret = wait_event_interruptible(lock->waitq,
|
||||
lock_wait_cond(sb, lock, mode));
|
||||
spin_lock(&linfo->lock);
|
||||
if (ret)
|
||||
break;
|
||||
@@ -1051,7 +1099,7 @@ int scoutfs_lock_inode(struct super_block *sb, enum scoutfs_lock_mode mode, int
|
||||
goto out;
|
||||
|
||||
if (flags & SCOUTFS_LKF_REFRESH_INODE) {
|
||||
ret = scoutfs_inode_refresh(inode, *lock);
|
||||
ret = scoutfs_inode_refresh(inode, *lock, flags);
|
||||
if (ret < 0) {
|
||||
scoutfs_unlock(sb, *lock, mode);
|
||||
*lock = NULL;
|
||||
@@ -1212,46 +1260,37 @@ int scoutfs_lock_inode_index(struct super_block *sb, enum scoutfs_lock_mode mode
|
||||
}
|
||||
|
||||
/*
* Orphan items are stored in their own zone which are modified with
* shared write_only locks and are read inconsistently without locks by
* background scanning work.
* The rid lock protects a mount's private persistent items in the rid
* zone. It's held for the duration of the mount. It lets the mount
* modify the rid items at will and signals to other mounts that we're
* still alive and our rid items shouldn't be reclaimed.
*
* Since we only use write_only locks we just lock the entire zone, but
* the api provides the inode in case we ever change the locking scheme.
* Being held for the entire mount prevents other nodes from reclaiming
* our items, like free blocks, when it would make sense for them to be
* able to. Maybe we have a bunch free and they're trying to allocate
* and are getting ENOSPC.
*/
int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int flags, u64 ino,
struct scoutfs_lock **lock)
int scoutfs_lock_rid(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
u64 rid, struct scoutfs_lock **lock)
{
struct scoutfs_key start;
struct scoutfs_key end;

scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_ORPHAN_ZONE;
start.sko_ino = 0;
start.sk_type = SCOUTFS_ORPHAN_TYPE;
start.sk_zone = SCOUTFS_RID_ZONE;
start.sko_rid = cpu_to_le64(rid);

scoutfs_key_set_zeros(&end);
end.sk_zone = SCOUTFS_ORPHAN_ZONE;
end.sko_ino = cpu_to_le64(U64_MAX);
end.sk_type = SCOUTFS_ORPHAN_TYPE;

return lock_key_range(sb, mode, flags, &start, &end, lock);
}
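As a usage illustration of the comment above (not code from this change): a mount would take its rid lock once at mount time and only drop it at unmount, so its rid items stay protected for the whole mount. The call sites below are hypothetical.

```c
/* Hypothetical mount/unmount call sites, for illustration only. */
static int example_hold_rid_lock(struct super_block *sb, u64 rid,
				 struct scoutfs_lock **rid_lock)
{
	/* write_only: we only ever modify our own rid items under it */
	return scoutfs_lock_rid(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, rid, rid_lock);
}

static void example_release_rid_lock(struct super_block *sb,
				     struct scoutfs_lock *rid_lock)
{
	/* other mounts can now reclaim our rid items, e.g. free blocks */
	scoutfs_unlock(sb, rid_lock, SCOUTFS_LOCK_WRITE_ONLY);
}
```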
|
||||
|
||||
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
struct scoutfs_lock **lock)
{
struct scoutfs_key start;
struct scoutfs_key end;

scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
scoutfs_key_set_ones(&end);
end.sk_zone = SCOUTFS_XATTR_TOTL_ZONE;
end.sk_zone = SCOUTFS_RID_ZONE;
end.sko_rid = cpu_to_le64(rid);

return lock_key_range(sb, mode, flags, &start, &end, lock);
}
|
||||
|
||||
/*
* As we unlock we always extend the grace period to give the caller
* another pass at the lock before it's invalidated.
*/
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scoutfs_lock_mode mode)
{
DECLARE_LOCK_INFO(sb, linfo);
@@ -1264,6 +1303,7 @@ void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock, enum scou
spin_lock(&linfo->lock);

lock_dec_count(lock->users, mode);
extend_grace(sb, lock);
if (lock_mode_can_write(mode))
lock->dirty_trans_seq = scoutfs_trans_sample_seq(sb);
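extend_grace() itself isn't part of this hunk; the following is a minimal sketch of what "extending the grace period" could look like, with the interval as a made-up placeholder rather than the real value.

```c
/* Sketch only: SCOUTFS_EXAMPLE_GRACE_NS is an assumed constant, not from the code. */
#define SCOUTFS_EXAMPLE_GRACE_NS	(10 * NSEC_PER_MSEC)

static void example_extend_grace(struct super_block *sb, struct scoutfs_lock *lock)
{
	/* invalidation is deferred until the deadline passes without new users */
	lock->grace_deadline = ktime_add_ns(ktime_get(), SCOUTFS_EXAMPLE_GRACE_NS);
}
```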
|
||||
|
||||
@@ -1438,7 +1478,7 @@ restart:
|
||||
BUG_ON(lock->mode == SCOUTFS_LOCK_NULL);
|
||||
BUG_ON(!list_empty(&lock->shrink_head));
|
||||
|
||||
if (nr-- == 0)
|
||||
if (linfo->shutdown || nr-- == 0)
|
||||
break;
|
||||
|
||||
__lock_del_lru(linfo, lock);
|
||||
@@ -1465,7 +1505,7 @@ out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
void scoutfs_free_unused_locks(struct super_block *sb)
|
||||
void scoutfs_free_unused_locks(struct super_block *sb, unsigned long nr)
|
||||
{
|
||||
struct lock_info *linfo = SCOUTFS_SB(sb)->lock_info;
|
||||
struct shrink_control sc = {
|
||||
@@ -1493,80 +1533,15 @@ static void lock_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
|
||||
}
|
||||
|
||||
/*
* shrink_dcache_for_umount() tears down dentries with no locking. We
* need to make sure that our invalidation won't touch dentries before
* we return and the caller calls the generic vfs unmount path.
*/
void scoutfs_lock_unmount_begin(struct super_block *sb)
{
DECLARE_LOCK_INFO(sb, linfo);

if (linfo) {
linfo->unmounting = true;
flush_work(&linfo->inv_work);
}
}

void scoutfs_lock_flush_invalidate(struct super_block *sb)
{
DECLARE_LOCK_INFO(sb, linfo);

if (linfo)
flush_work(&linfo->inv_work);
}
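A rough sketch of the ordering the comment above implies for the unmount path; the caller isn't shown in this diff, so these call sites are assumptions for illustration only.

```c
/* Hypothetical unmount-side ordering, not the actual scoutfs kill_sb path. */
static void example_kill_sb(struct super_block *sb)
{
	/* stop invalidation from touching dentries before the VFS tears them down */
	scoutfs_lock_unmount_begin(sb);

	/* generic unmount may now call shrink_dcache_for_umount() safely */
	kill_block_super(sb);
}
```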
|
||||
|
||||
static u64 get_held_lock_refresh_gen(struct super_block *sb, struct scoutfs_key *start)
|
||||
{
|
||||
DECLARE_LOCK_INFO(sb, linfo);
|
||||
struct scoutfs_lock *lock;
|
||||
u64 refresh_gen = 0;
|
||||
|
||||
/* this can be called from all manner of places */
|
||||
if (!linfo)
|
||||
return 0;
|
||||
|
||||
spin_lock(&linfo->lock);
|
||||
lock = lock_lookup(sb, start, NULL);
|
||||
if (lock) {
|
||||
if (lock_mode_can_read(lock->mode))
|
||||
refresh_gen = lock->refresh_gen;
|
||||
}
|
||||
spin_unlock(&linfo->lock);
|
||||
|
||||
return refresh_gen;
|
||||
}
|
||||
|
||||
u64 scoutfs_lock_ino_refresh_gen(struct super_block *sb, u64 ino)
|
||||
{
|
||||
struct scoutfs_key start;
|
||||
|
||||
scoutfs_key_set_zeros(&start);
|
||||
start.sk_zone = SCOUTFS_FS_ZONE;
|
||||
start.ski_ino = cpu_to_le64(ino & ~(u64)SCOUTFS_LOCK_INODE_GROUP_MASK);
|
||||
|
||||
return get_held_lock_refresh_gen(sb, &start);
|
||||
}
|
||||
|
||||
/*
|
||||
* The caller is going to be shutting down transactions and the client.
|
||||
* We need to make sure that locking won't call either after we return.
|
||||
* The caller is going to be calling _destroy soon and, critically, is
|
||||
* about to shutdown networking before calling us so that we don't get
|
||||
* any callbacks while we're destroying. We have to ensure that we
|
||||
* won't call networking after this returns.
|
||||
*
|
||||
* At this point all fs callers and internal services that use locks
|
||||
* should have stopped. We won't have any callers initiating lock
|
||||
* transitions and sending requests. We set the shutdown flag to catch
|
||||
* anyone who breaks this rule.
|
||||
*
|
||||
* We unregister the shrinker so that we won't try and send null
|
||||
* requests in response to memory pressure. The locks will all be
|
||||
* unceremoniously dropped once we get a farewell response from the
|
||||
* server which indicates that they destroyed our locking state.
|
||||
*
|
||||
* We will still respond to invalidation requests that have to be
|
||||
* processed to let unmount in other mounts acquire locks and make
|
||||
* progress. However, we don't fully process the invalidation because
|
||||
* we're shutting down. We only update the lock state and send the
|
||||
* response. We shouldn't have any users of locking that require
|
||||
* invalidation correctness at this point.
|
||||
* Internal fs threads can be using locking, and locking can have async
|
||||
* work pending. We use ->shutdown to force callers to return
|
||||
* -ESHUTDOWN and to prevent the future queueing of work that could call
|
||||
* networking. Locks whose work is stopped will be torn down by _destroy.
|
||||
*/
|
||||
void scoutfs_lock_shutdown(struct super_block *sb)
|
||||
{
|
||||
@@ -1579,18 +1554,19 @@ void scoutfs_lock_shutdown(struct super_block *sb)
|
||||
|
||||
trace_scoutfs_lock_shutdown(sb, linfo);
|
||||
|
||||
/* stop the shrinker from queueing work */
|
||||
unregister_shrinker(&linfo->shrinker);
|
||||
flush_work(&linfo->shrink_work);
|
||||
|
||||
/* cause current and future lock calls to return errors */
|
||||
spin_lock(&linfo->lock);
|
||||
|
||||
linfo->shutdown = true;
|
||||
for (node = rb_first(&linfo->lock_tree); node; node = rb_next(node)) {
|
||||
lock = rb_entry(node, struct scoutfs_lock, node);
|
||||
wake_up(&lock->waitq);
|
||||
}
|
||||
|
||||
spin_unlock(&linfo->lock);
|
||||
|
||||
flush_work(&linfo->grant_work);
|
||||
flush_delayed_work(&linfo->inv_dwork);
|
||||
flush_work(&linfo->shrink_work);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -1610,8 +1586,6 @@ void scoutfs_lock_destroy(struct super_block *sb)
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
DECLARE_LOCK_INFO(sb, linfo);
|
||||
struct scoutfs_lock *lock;
|
||||
struct inv_req *ireq_tmp;
|
||||
struct inv_req *ireq;
|
||||
struct rb_node *node;
|
||||
enum scoutfs_lock_mode mode;
|
||||
|
||||
@@ -1620,6 +1594,8 @@ void scoutfs_lock_destroy(struct super_block *sb)
|
||||
|
||||
trace_scoutfs_lock_destroy(sb, linfo);
|
||||
|
||||
/* stop the shrinker from queueing work */
|
||||
unregister_shrinker(&linfo->shrinker);
|
||||
|
||||
/* make sure that no one's actively using locks */
|
||||
spin_lock(&linfo->lock);
|
||||
@@ -1638,6 +1614,8 @@ void scoutfs_lock_destroy(struct super_block *sb)
|
||||
spin_unlock(&linfo->lock);
|
||||
|
||||
if (linfo->workq) {
|
||||
/* pending grace work queues normal work */
|
||||
flush_workqueue(linfo->workq);
|
||||
/* now all work won't queue itself */
|
||||
destroy_workqueue(linfo->workq);
|
||||
}
|
||||
@@ -1654,31 +1632,22 @@ void scoutfs_lock_destroy(struct super_block *sb)
|
||||
* of free).
|
||||
*/
|
||||
spin_lock(&linfo->lock);
|
||||
|
||||
node = rb_first(&linfo->lock_tree);
|
||||
while (node) {
|
||||
lock = rb_entry(node, struct scoutfs_lock, node);
|
||||
node = rb_next(node);
|
||||
|
||||
list_for_each_entry_safe(ireq, ireq_tmp, &lock->inv_list, head) {
|
||||
list_del_init(&ireq->head);
|
||||
put_lock(linfo, ireq->lock);
|
||||
kfree(ireq);
|
||||
}
|
||||
|
||||
lock->request_pending = 0;
|
||||
if (!list_empty(&lock->lru_head))
|
||||
__lock_del_lru(linfo, lock);
|
||||
if (!list_empty(&lock->inv_head)) {
|
||||
if (!list_empty(&lock->grant_head))
|
||||
list_del_init(&lock->grant_head);
|
||||
if (!list_empty(&lock->inv_head))
|
||||
list_del_init(&lock->inv_head);
|
||||
lock->invalidate_pending = 0;
|
||||
}
|
||||
if (!list_empty(&lock->shrink_head))
|
||||
list_del_init(&lock->shrink_head);
|
||||
lock_remove(linfo, lock);
|
||||
lock_free(linfo, lock);
|
||||
}
|
||||
|
||||
spin_unlock(&linfo->lock);
|
||||
|
||||
kfree(linfo);
|
||||
@@ -1703,7 +1672,9 @@ int scoutfs_lock_setup(struct super_block *sb)
|
||||
linfo->shrinker.seeks = DEFAULT_SEEKS;
|
||||
register_shrinker(&linfo->shrinker);
|
||||
INIT_LIST_HEAD(&linfo->lru_list);
|
||||
INIT_WORK(&linfo->inv_work, lock_invalidate_worker);
|
||||
INIT_WORK(&linfo->grant_work, lock_grant_worker);
|
||||
INIT_LIST_HEAD(&linfo->grant_list);
|
||||
INIT_DELAYED_WORK(&linfo->inv_dwork, lock_invalidate_worker);
|
||||
INIT_LIST_HEAD(&linfo->inv_list);
|
||||
INIT_WORK(&linfo->shrink_work, lock_shrink_worker);
|
||||
INIT_LIST_HEAD(&linfo->shrink_list);
|
||||
|
||||
@@ -6,15 +6,12 @@
|
||||
|
||||
#define SCOUTFS_LKF_REFRESH_INODE 0x01 /* update stale inode from item */
|
||||
#define SCOUTFS_LKF_NONBLOCK 0x02 /* only use already held locks */
|
||||
#define SCOUTFS_LKF_INTERRUPTIBLE 0x04 /* pending signals return -ERESTARTSYS */
|
||||
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_INTERRUPTIBLE << 1) - 1))
|
||||
#define SCOUTFS_LKF_INVALID (~((SCOUTFS_LKF_NONBLOCK << 1) - 1))
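Working out the two variants of the invalid mask from the flag values above clarifies what changes between them; this arithmetic is derived from the defines shown, not taken from the source.

```c
/* Worked out from the flag values above, for illustration only.                  */
/* ~((SCOUTFS_LKF_INTERRUPTIBLE << 1) - 1) = ~((0x04 << 1) - 1) = ~0x07          */
/* ~((SCOUTFS_LKF_NONBLOCK      << 1) - 1) = ~((0x02 << 1) - 1) = ~0x03          */
/* With the NONBLOCK-based mask, SCOUTFS_LKF_INTERRUPTIBLE (0x04) falls into the */
/* invalid bits, so only REFRESH_INODE and NONBLOCK remain accepted.             */
```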
|
||||
|
||||
#define SCOUTFS_LOCK_NR_MODES SCOUTFS_LOCK_INVALID
|
||||
|
||||
struct inode_deletion_lock_data;
|
||||
|
||||
/*
|
||||
* A few fields (start, end, refresh_gen, write_seq, granted_mode)
|
||||
* A few fields (start, end, refresh_gen, write_version, granted_mode)
|
||||
* are referenced by code outside lock.c.
|
||||
*/
|
||||
struct scoutfs_lock {
|
||||
@@ -24,22 +21,26 @@ struct scoutfs_lock {
|
||||
struct rb_node node;
|
||||
struct rb_node range_node;
|
||||
u64 refresh_gen;
|
||||
u64 write_seq;
|
||||
u64 write_version;
|
||||
u64 dirty_trans_seq;
|
||||
struct scoutfs_net_roots roots;
|
||||
struct list_head lru_head;
|
||||
wait_queue_head_t waitq;
|
||||
ktime_t grace_deadline;
|
||||
unsigned long request_pending:1,
|
||||
invalidate_pending:1;
|
||||
|
||||
struct list_head inv_head; /* entry in linfo's list of locks with invalidations */
|
||||
struct list_head inv_list; /* list of lock's invalidation requests */
|
||||
struct list_head grant_head;
|
||||
struct scoutfs_net_lock_grant_response grant_resp;
|
||||
struct list_head inv_head;
|
||||
struct scoutfs_net_lock inv_nl;
|
||||
u64 inv_net_id;
|
||||
struct list_head shrink_head;
|
||||
|
||||
spinlock_t cov_list_lock;
|
||||
struct list_head cov_list;
|
||||
|
||||
enum scoutfs_lock_mode mode;
|
||||
enum scoutfs_lock_mode invalidating_mode;
|
||||
unsigned int waiters[SCOUTFS_LOCK_NR_MODES];
|
||||
unsigned int users[SCOUTFS_LOCK_NR_MODES];
|
||||
|
||||
@@ -47,9 +48,6 @@ struct scoutfs_lock {
|
||||
|
||||
/* the forest tracks which log tree last saw bloom bit updates */
|
||||
atomic64_t forest_bloom_nr;
|
||||
|
||||
/* inode deletion tracks some state per lock */
|
||||
struct inode_deletion_lock_data *inode_deletion_data;
|
||||
};
|
||||
|
||||
struct scoutfs_lock_coverage {
|
||||
@@ -59,7 +57,7 @@ struct scoutfs_lock_coverage {
|
||||
};
|
||||
|
||||
int scoutfs_lock_grant_response(struct super_block *sb,
|
||||
struct scoutfs_net_lock *nl);
|
||||
struct scoutfs_net_lock_grant_response *gr);
|
||||
int scoutfs_lock_invalidate_request(struct super_block *sb, u64 net_id,
|
||||
struct scoutfs_net_lock *nl);
|
||||
int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
|
||||
@@ -82,10 +80,8 @@ int scoutfs_lock_inodes(struct super_block *sb, enum scoutfs_lock_mode mode, int
|
||||
struct inode *d, struct scoutfs_lock **D_lock);
|
||||
int scoutfs_lock_rename(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
|
||||
struct scoutfs_lock **lock);
|
||||
int scoutfs_lock_orphan(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
|
||||
u64 ino, struct scoutfs_lock **lock);
|
||||
int scoutfs_lock_xattr_totl(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
|
||||
struct scoutfs_lock **lock);
|
||||
int scoutfs_lock_rid(struct super_block *sb, enum scoutfs_lock_mode mode, int flags,
|
||||
u64 rid, struct scoutfs_lock **lock);
|
||||
void scoutfs_unlock(struct super_block *sb, struct scoutfs_lock *lock,
|
||||
enum scoutfs_lock_mode mode);
|
||||
|
||||
@@ -100,13 +96,9 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
|
||||
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
|
||||
enum scoutfs_lock_mode mode);
|
||||
|
||||
u64 scoutfs_lock_ino_refresh_gen(struct super_block *sb, u64 ino);
|
||||
|
||||
void scoutfs_free_unused_locks(struct super_block *sb);
|
||||
void scoutfs_free_unused_locks(struct super_block *sb, unsigned long nr);
|
||||
|
||||
int scoutfs_lock_setup(struct super_block *sb);
|
||||
void scoutfs_lock_unmount_begin(struct super_block *sb);
|
||||
void scoutfs_lock_flush_invalidate(struct super_block *sb);
|
||||
void scoutfs_lock_shutdown(struct super_block *sb);
|
||||
void scoutfs_lock_destroy(struct super_block *sb);
|
||||
|
||||
|
||||
@@ -20,10 +20,10 @@
|
||||
#include "tseq.h"
|
||||
#include "spbm.h"
|
||||
#include "block.h"
|
||||
#include "btree.h"
|
||||
#include "msg.h"
|
||||
#include "scoutfs_trace.h"
|
||||
#include "lock_server.h"
|
||||
#include "recov.h"
|
||||
|
||||
/*
|
||||
* The scoutfs server implements a simple lock service. Client mounts
|
||||
@@ -56,11 +56,14 @@
|
||||
* Message requests and responses are reliably delivered in order across
|
||||
* reconnection.
|
||||
*
|
||||
* As a new server comes up it recovers lock state from existing clients
|
||||
* which were connected to a previous lock server. Recover requests are
|
||||
* sent to clients as they connect and they respond with all their
|
||||
* locks. Once all clients and locks are accounted for normal
|
||||
* processing can resume.
|
||||
* The server maintains a persistent record of connected clients. A new
|
||||
* server instance discovers these and waits for previously connected
|
||||
* clients to reconnect and recover their state before proceeding. If
|
||||
* clients don't reconnect they are forcefully prevented from unsafely
|
||||
* accessing the shared persistent storage. (fenced, according to the
|
||||
* rules of the platform.. could range from being powered off to having
|
||||
* their switch port disabled to having their local block device set
|
||||
* read-only.)
|
||||
*
|
||||
* The lock server doesn't respond to memory pressure. The only way
|
||||
* locks are freed is if they are invalidated to null on behalf of a
|
||||
@@ -74,12 +77,19 @@ struct lock_server_info {
|
||||
struct super_block *sb;
|
||||
|
||||
spinlock_t lock;
|
||||
struct mutex mutex;
|
||||
struct rb_root locks_root;
|
||||
|
||||
struct scoutfs_spbm recovery_pending;
|
||||
struct delayed_work recovery_dwork;
|
||||
|
||||
struct scoutfs_tseq_tree tseq_tree;
|
||||
struct dentry *tseq_dentry;
|
||||
struct scoutfs_tseq_tree stats_tseq_tree;
|
||||
struct dentry *stats_tseq_dentry;
|
||||
|
||||
struct scoutfs_alloc *alloc;
|
||||
struct scoutfs_block_writer *wri;
|
||||
|
||||
atomic64_t write_version;
|
||||
};
|
||||
|
||||
#define DECLARE_LOCK_SERVER_INFO(sb, name) \
|
||||
@@ -106,9 +116,6 @@ struct server_lock_node {
|
||||
struct list_head granted;
|
||||
struct list_head requested;
|
||||
struct list_head invalidated;
|
||||
|
||||
struct scoutfs_tseq_entry stats_tseq_entry;
|
||||
u64 stats[SLT_NR];
|
||||
};
|
||||
|
||||
/*
|
||||
@@ -153,30 +160,30 @@ enum {
|
||||
*/
|
||||
static void add_client_entry(struct server_lock_node *snode,
|
||||
struct list_head *list,
|
||||
struct client_lock_entry *c_ent)
|
||||
struct client_lock_entry *clent)
|
||||
{
|
||||
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
|
||||
|
||||
if (list_empty(&c_ent->head))
|
||||
list_add_tail(&c_ent->head, list);
|
||||
if (list_empty(&clent->head))
|
||||
list_add_tail(&clent->head, list);
|
||||
else
|
||||
list_move_tail(&c_ent->head, list);
|
||||
list_move_tail(&clent->head, list);
|
||||
|
||||
c_ent->on_list = list == &snode->granted ? OL_GRANTED :
|
||||
clent->on_list = list == &snode->granted ? OL_GRANTED :
|
||||
list == &snode->requested ? OL_REQUESTED :
|
||||
OL_INVALIDATED;
|
||||
}
|
||||
|
||||
static void free_client_entry(struct lock_server_info *inf,
|
||||
struct server_lock_node *snode,
|
||||
struct client_lock_entry *c_ent)
|
||||
struct client_lock_entry *clent)
|
||||
{
|
||||
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
|
||||
|
||||
if (!list_empty(&c_ent->head))
|
||||
list_del_init(&c_ent->head);
|
||||
scoutfs_tseq_del(&inf->tseq_tree, &c_ent->tseq_entry);
|
||||
kfree(c_ent);
|
||||
if (!list_empty(&clent->head))
|
||||
list_del_init(&clent->head);
|
||||
scoutfs_tseq_del(&inf->tseq_tree, &clent->tseq_entry);
|
||||
kfree(clent);
|
||||
}
|
||||
|
||||
static bool invalid_mode(u8 mode)
|
||||
@@ -298,8 +305,6 @@ static struct server_lock_node *alloc_server_lock(struct lock_server_info *inf,
|
||||
snode = get_server_lock(inf, key, ins, false);
|
||||
if (snode != ins)
|
||||
kfree(ins);
|
||||
else
|
||||
scoutfs_tseq_add(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -329,23 +334,21 @@ static void put_server_lock(struct lock_server_info *inf,
|
||||
|
||||
mutex_unlock(&snode->mutex);
|
||||
|
||||
if (should_free) {
|
||||
scoutfs_tseq_del(&inf->stats_tseq_tree, &snode->stats_tseq_entry);
|
||||
if (should_free)
|
||||
kfree(snode);
|
||||
}
|
||||
}
|
||||
|
||||
static struct client_lock_entry *find_entry(struct server_lock_node *snode,
|
||||
struct list_head *list,
|
||||
u64 rid)
|
||||
{
|
||||
struct client_lock_entry *c_ent;
|
||||
struct client_lock_entry *clent;
|
||||
|
||||
WARN_ON_ONCE(!mutex_is_locked(&snode->mutex));
|
||||
|
||||
list_for_each_entry(c_ent, list, head) {
|
||||
if (c_ent->rid == rid)
|
||||
return c_ent;
|
||||
list_for_each_entry(clent, list, head) {
|
||||
if (clent->rid == rid)
|
||||
return clent;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
@@ -364,7 +367,7 @@ int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
|
||||
u64 net_id, struct scoutfs_net_lock *nl)
|
||||
{
|
||||
DECLARE_LOCK_SERVER_INFO(sb, inf);
|
||||
struct client_lock_entry *c_ent;
|
||||
struct client_lock_entry *clent;
|
||||
struct server_lock_node *snode;
|
||||
int ret;
|
||||
|
||||
@@ -376,29 +379,27 @@ int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
|
||||
goto out;
|
||||
}
|
||||
|
||||
c_ent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
|
||||
if (!c_ent) {
|
||||
clent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
|
||||
if (!clent) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
INIT_LIST_HEAD(&c_ent->head);
|
||||
c_ent->rid = rid;
|
||||
c_ent->net_id = net_id;
|
||||
c_ent->mode = nl->new_mode;
|
||||
INIT_LIST_HEAD(&clent->head);
|
||||
clent->rid = rid;
|
||||
clent->net_id = net_id;
|
||||
clent->mode = nl->new_mode;
|
||||
|
||||
snode = alloc_server_lock(inf, &nl->key);
|
||||
if (snode == NULL) {
|
||||
kfree(c_ent);
|
||||
kfree(clent);
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
snode->stats[SLT_REQUEST]++;
|
||||
|
||||
c_ent->snode = snode;
|
||||
add_client_entry(snode, &snode->requested, c_ent);
|
||||
scoutfs_tseq_add(&inf->tseq_tree, &c_ent->tseq_entry);
|
||||
clent->snode = snode;
|
||||
add_client_entry(snode, &snode->requested, clent);
|
||||
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
|
||||
|
||||
ret = process_waiting_requests(sb, snode);
|
||||
out:
|
||||
@@ -417,7 +418,7 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
|
||||
struct scoutfs_net_lock *nl)
|
||||
{
|
||||
DECLARE_LOCK_SERVER_INFO(sb, inf);
|
||||
struct client_lock_entry *c_ent;
|
||||
struct client_lock_entry *clent;
|
||||
struct server_lock_node *snode;
|
||||
int ret;
|
||||
|
||||
@@ -429,27 +430,25 @@ int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* XXX should always have a server lock here? */
|
||||
/* XXX should always have a server lock here? recovery? */
|
||||
snode = get_server_lock(inf, &nl->key, NULL, false);
|
||||
if (!snode) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
snode->stats[SLT_RESPONSE]++;
|
||||
|
||||
c_ent = find_entry(snode, &snode->invalidated, rid);
|
||||
if (!c_ent) {
|
||||
clent = find_entry(snode, &snode->invalidated, rid);
|
||||
if (!clent) {
|
||||
put_server_lock(inf, snode);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (nl->new_mode == SCOUTFS_LOCK_NULL) {
|
||||
free_client_entry(inf, snode, c_ent);
|
||||
free_client_entry(inf, snode, clent);
|
||||
} else {
|
||||
c_ent->mode = nl->new_mode;
|
||||
add_client_entry(snode, &snode->granted, c_ent);
|
||||
clent->mode = nl->new_mode;
|
||||
add_client_entry(snode, &snode->granted, clent);
|
||||
}
|
||||
|
||||
ret = process_waiting_requests(sb, snode);
|
||||
@@ -474,27 +473,31 @@ out:
|
||||
* so we unlock the snode mutex.
|
||||
*
|
||||
* All progress must wait for all clients to finish with recovery
|
||||
* because we don't know which locks they'll hold. Once recover
|
||||
* finishes the server calls us to kick all the locks that were waiting
|
||||
* during recovery.
|
||||
* because we don't know which locks they'll hold. The unlocked
|
||||
* recovery_pending test here is OK. It's filled by setup before
|
||||
* anything runs. It's emptied by recovery completion. We can get a
|
||||
* false nonempty result if we race with recovery completion, but that's
|
||||
* OK because recovery completion processes all the locks that have
|
||||
* requests after emptying, including the unlikely loser of that race.
|
||||
*/
|
||||
static int process_waiting_requests(struct super_block *sb,
|
||||
struct server_lock_node *snode)
|
||||
{
|
||||
DECLARE_LOCK_SERVER_INFO(sb, inf);
|
||||
struct scoutfs_net_lock_grant_response gres;
|
||||
struct scoutfs_net_lock nl;
|
||||
struct client_lock_entry *req;
|
||||
struct client_lock_entry *req_tmp;
|
||||
struct client_lock_entry *gr;
|
||||
struct client_lock_entry *gr_tmp;
|
||||
u64 seq;
|
||||
u64 wv;
|
||||
int ret;
|
||||
|
||||
BUG_ON(!mutex_is_locked(&snode->mutex));
|
||||
|
||||
/* processing waits for all invalidation responses or recovery */
|
||||
if (!list_empty(&snode->invalidated) ||
|
||||
scoutfs_recov_next_pending(sb, 0, SCOUTFS_RECOV_LOCKS) != 0) {
|
||||
!scoutfs_spbm_empty(&inf->recovery_pending)) {
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
@@ -518,7 +521,6 @@ static int process_waiting_requests(struct super_block *sb,
|
||||
trace_scoutfs_lock_message(sb, SLT_SERVER,
|
||||
SLT_INVALIDATE, SLT_REQUEST,
|
||||
gr->rid, 0, &nl);
|
||||
snode->stats[SLT_INVALIDATE]++;
|
||||
|
||||
add_client_entry(snode, &snode->invalidated, gr);
|
||||
}
|
||||
@@ -529,7 +531,6 @@ static int process_waiting_requests(struct super_block *sb,
|
||||
|
||||
nl.key = snode->key;
|
||||
nl.new_mode = req->mode;
|
||||
nl.write_seq = 0;
|
||||
|
||||
/* see if there's an existing compatible grant to replace */
|
||||
gr = find_entry(snode, &snode->granted, req->rid);
|
||||
@@ -542,20 +543,21 @@ static int process_waiting_requests(struct super_block *sb,
|
||||
|
||||
if (nl.new_mode == SCOUTFS_LOCK_WRITE ||
|
||||
nl.new_mode == SCOUTFS_LOCK_WRITE_ONLY) {
|
||||
/* doesn't commit seq update, recovered with locks */
|
||||
seq = scoutfs_server_next_seq(sb);
|
||||
nl.write_seq = cpu_to_le64(seq);
|
||||
wv = atomic64_inc_return(&inf->write_version);
|
||||
nl.write_version = cpu_to_le64(wv);
|
||||
}
|
||||
|
||||
gres.nl = nl;
|
||||
scoutfs_server_get_roots(sb, &gres.roots);
|
||||
|
||||
ret = scoutfs_server_lock_response(sb, req->rid,
|
||||
req->net_id, &nl);
|
||||
req->net_id, &gres);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
trace_scoutfs_lock_message(sb, SLT_SERVER, SLT_GRANT,
|
||||
SLT_RESPONSE, req->rid,
|
||||
req->net_id, &nl);
|
||||
snode->stats[SLT_GRANT]++;
|
||||
|
||||
/* don't track null client locks, track all else */
|
||||
if (req->mode == SCOUTFS_LOCK_NULL)
|
||||
@@ -571,39 +573,85 @@ out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void init_lock_clients_key(struct scoutfs_key *key, u64 rid)
|
||||
{
|
||||
*key = (struct scoutfs_key) {
|
||||
.sk_zone = SCOUTFS_LOCK_CLIENTS_ZONE,
|
||||
.sklc_rid = cpu_to_le64(rid),
|
||||
};
|
||||
}
|
||||
|
||||
/*
|
||||
* The server received a greeting from a client for the first time. If
|
||||
* the client is in lock recovery then we send the initial lock request.
|
||||
* the client had already talked to the server then we must find an
|
||||
* existing record for it and should begin recovery. If it doesn't have
|
||||
* a record then it's timed out and we can't allow it to reconnect. If
* it's connecting for the first time then we insert a new record. If
|
||||
*
|
||||
* This is running in concurrent client greeting processing contexts.
|
||||
*/
|
||||
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid)
|
||||
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
|
||||
bool should_exist)
|
||||
{
|
||||
DECLARE_LOCK_SERVER_INFO(sb, inf);
|
||||
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
|
||||
SCOUTFS_BTREE_ITEM_REF(iref);
|
||||
struct scoutfs_key key;
|
||||
int ret;
|
||||
|
||||
if (scoutfs_recov_is_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) {
|
||||
init_lock_clients_key(&key, rid);
|
||||
|
||||
mutex_lock(&inf->mutex);
|
||||
if (should_exist) {
|
||||
ret = scoutfs_btree_lookup(sb, &super->lock_clients, &key,
|
||||
&iref);
|
||||
if (ret == 0)
|
||||
scoutfs_btree_put_iref(&iref);
|
||||
} else {
|
||||
ret = scoutfs_btree_insert(sb, inf->alloc, inf->wri,
|
||||
&super->lock_clients,
|
||||
&key, NULL, 0);
|
||||
}
|
||||
mutex_unlock(&inf->mutex);
|
||||
|
||||
if (should_exist && ret == 0) {
|
||||
scoutfs_key_set_zeros(&key);
|
||||
ret = scoutfs_server_lock_recover_request(sb, rid, &key);
|
||||
} else {
|
||||
ret = 0;
|
||||
if (ret)
|
||||
goto out;
|
||||
}
|
||||
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* All clients have finished lock recovery, we can make forward progress
|
||||
* on all the queued requests that were waiting on recovery.
|
||||
* A client sent their last recovery response and can exit recovery. If
|
||||
* they were the last client in recovery then we can process all the
|
||||
* server locks that had requests.
|
||||
*/
|
||||
int scoutfs_lock_server_finished_recovery(struct super_block *sb)
|
||||
static int finished_recovery(struct super_block *sb, u64 rid, bool cancel)
|
||||
{
|
||||
DECLARE_LOCK_SERVER_INFO(sb, inf);
|
||||
struct server_lock_node *snode;
|
||||
struct scoutfs_key key;
|
||||
bool still_pending;
|
||||
int ret = 0;
|
||||
|
||||
spin_lock(&inf->lock);
|
||||
scoutfs_spbm_clear(&inf->recovery_pending, rid);
|
||||
still_pending = !scoutfs_spbm_empty(&inf->recovery_pending);
|
||||
spin_unlock(&inf->lock);
|
||||
if (still_pending)
|
||||
return 0;
|
||||
|
||||
if (cancel)
|
||||
cancel_delayed_work_sync(&inf->recovery_dwork);
|
||||
|
||||
scoutfs_key_set_zeros(&key);
|
||||
|
||||
scoutfs_info(sb, "all lock clients recovered");
|
||||
|
||||
while ((snode = get_server_lock(inf, &key, NULL, true))) {
|
||||
|
||||
key = snode->key;
|
||||
@@ -621,6 +669,14 @@ int scoutfs_lock_server_finished_recovery(struct super_block *sb)
|
||||
return ret;
|
||||
}
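The recovery gate above relies on the sparse bitmap (spbm) of rids still in recovery. The spbm signatures aren't part of this diff, so the usage sketch below is an assumption inferred from the calls made here.

```c
/* Sketch of the recovery_pending gate; signatures are assumed, not verbatim. */
static bool example_rid_still_recovering(struct lock_server_info *inf, u64 rid)
{
	bool pending;

	spin_lock(&inf->lock);
	pending = scoutfs_spbm_test(&inf->recovery_pending, rid);
	spin_unlock(&inf->lock);

	return pending;
}
```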
|
||||
|
||||
static void set_max_write_version(struct lock_server_info *inf, u64 new)
|
||||
{
|
||||
u64 old;
|
||||
|
||||
while (new > (old = atomic64_read(&inf->write_version)) &&
|
||||
(atomic64_cmpxchg(&inf->write_version, old, new) != old));
|
||||
}
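set_max_write_version() above is a lock-free monotonic max: recovered locks can arrive concurrently, so the cmpxchg loop only ever raises write_version and never lowers it. A grant path can then hand out the next version as shown below; this mirrors the atomic64_inc_return() use later in this diff and is illustrative only.

```c
/* Illustrative only: the grant path shown later in this diff does the same. */
static u64 example_next_write_version(struct lock_server_info *inf)
{
	/* every WRITE/WRITE_ONLY grant gets a version above all recovered ones */
	return atomic64_inc_return(&inf->write_version);
}
```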
|
||||
|
||||
/*
|
||||
* We sent a lock recover request to the client when we received its
|
||||
* greeting while in recovery. Here we instantiate all the locks it
|
||||
@@ -632,61 +688,62 @@ int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
|
||||
{
|
||||
DECLARE_LOCK_SERVER_INFO(sb, inf);
|
||||
struct client_lock_entry *existing;
|
||||
struct client_lock_entry *c_ent;
|
||||
struct client_lock_entry *clent;
|
||||
struct server_lock_node *snode;
|
||||
struct scoutfs_key key;
|
||||
int ret = 0;
|
||||
int i;
|
||||
|
||||
/* client must be in recovery */
|
||||
if (!scoutfs_recov_is_pending(sb, rid, SCOUTFS_RECOV_LOCKS)) {
|
||||
spin_lock(&inf->lock);
|
||||
if (!scoutfs_spbm_test(&inf->recovery_pending, rid))
|
||||
ret = -EINVAL;
|
||||
spin_unlock(&inf->lock);
|
||||
if (ret)
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* client has sent us all their locks */
|
||||
if (nlr->nr == 0) {
|
||||
scoutfs_server_recov_finish(sb, rid, SCOUTFS_RECOV_LOCKS);
|
||||
ret = 0;
|
||||
ret = finished_recovery(sb, rid, true);
|
||||
goto out;
|
||||
}
|
||||
|
||||
for (i = 0; i < le16_to_cpu(nlr->nr); i++) {
|
||||
c_ent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
|
||||
if (!c_ent) {
|
||||
clent = kzalloc(sizeof(struct client_lock_entry), GFP_NOFS);
|
||||
if (!clent) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
INIT_LIST_HEAD(&c_ent->head);
|
||||
c_ent->rid = rid;
|
||||
c_ent->net_id = 0;
|
||||
c_ent->mode = nlr->locks[i].new_mode;
|
||||
INIT_LIST_HEAD(&clent->head);
|
||||
clent->rid = rid;
|
||||
clent->net_id = 0;
|
||||
clent->mode = nlr->locks[i].new_mode;
|
||||
|
||||
snode = alloc_server_lock(inf, &nlr->locks[i].key);
|
||||
if (snode == NULL) {
|
||||
kfree(c_ent);
|
||||
kfree(clent);
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
existing = find_entry(snode, &snode->granted, rid);
|
||||
if (existing) {
|
||||
kfree(c_ent);
|
||||
kfree(clent);
|
||||
put_server_lock(inf, snode);
|
||||
ret = -EEXIST;
|
||||
goto out;
|
||||
}
|
||||
|
||||
c_ent->snode = snode;
|
||||
add_client_entry(snode, &snode->granted, c_ent);
|
||||
scoutfs_tseq_add(&inf->tseq_tree, &c_ent->tseq_entry);
|
||||
clent->snode = snode;
|
||||
add_client_entry(snode, &snode->granted, clent);
|
||||
scoutfs_tseq_add(&inf->tseq_tree, &clent->tseq_entry);
|
||||
|
||||
put_server_lock(inf, snode);
|
||||
|
||||
/* make sure next core seq is greater than all lock write seq */
|
||||
scoutfs_server_set_seq_if_greater(sb,
|
||||
le64_to_cpu(nlr->locks[i].write_seq));
|
||||
/* make sure next write lock is greater than all recovered */
|
||||
set_max_write_version(inf,
|
||||
le64_to_cpu(nlr->locks[i].write_version));
|
||||
}
|
||||
|
||||
/* send request for next batch of keys */
|
||||
@@ -698,16 +755,102 @@ out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int get_rid_and_put_ref(struct scoutfs_btree_item_ref *iref, u64 *rid)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (iref->val_len == 0) {
|
||||
*rid = le64_to_cpu(iref->key->sklc_rid);
|
||||
ret = 0;
|
||||
} else {
|
||||
ret = -EIO;
|
||||
}
|
||||
scoutfs_btree_put_iref(iref);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* This work executes if enough time passes without all of the clients
|
||||
* finishing with recovery and canceling the work. We walk through the
|
||||
* client records and find any that still have their recovery pending.
|
||||
*/
|
||||
static void scoutfs_lock_server_recovery_timeout(struct work_struct *work)
|
||||
{
|
||||
struct lock_server_info *inf = container_of(work,
|
||||
struct lock_server_info,
|
||||
recovery_dwork.work);
|
||||
struct super_block *sb = inf->sb;
|
||||
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
|
||||
SCOUTFS_BTREE_ITEM_REF(iref);
|
||||
struct scoutfs_key key;
|
||||
bool timed_out;
|
||||
u64 rid;
|
||||
int ret;
|
||||
|
||||
ret = scoutfs_server_hold_commit(sb);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
/* we enter recovery if there are any client records */
|
||||
for (rid = 0; ; rid++) {
|
||||
init_lock_clients_key(&key, rid);
|
||||
ret = scoutfs_btree_next(sb, &super->lock_clients, &key, &iref);
|
||||
if (ret == -ENOENT) {
|
||||
ret = 0;
|
||||
break;
|
||||
}
|
||||
if (ret == 0)
|
||||
ret = get_rid_and_put_ref(&iref, &rid);
|
||||
if (ret < 0)
|
||||
break;
|
||||
|
||||
spin_lock(&inf->lock);
|
||||
if (scoutfs_spbm_test(&inf->recovery_pending, rid)) {
|
||||
scoutfs_spbm_clear(&inf->recovery_pending, rid);
|
||||
timed_out = true;
|
||||
} else {
|
||||
timed_out = false;
|
||||
}
|
||||
spin_unlock(&inf->lock);
|
||||
|
||||
if (!timed_out)
|
||||
continue;
|
||||
|
||||
scoutfs_err(sb, "client rid %016llx lock recovery timed out",
|
||||
rid);
|
||||
|
||||
init_lock_clients_key(&key, rid);
|
||||
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
|
||||
&super->lock_clients, &key);
|
||||
if (ret)
|
||||
break;
|
||||
}
|
||||
|
||||
ret = scoutfs_server_apply_commit(sb, ret);
|
||||
out:
|
||||
/* force processing all pending lock requests */
|
||||
if (ret == 0)
|
||||
ret = finished_recovery(sb, 0, false);
|
||||
|
||||
if (ret < 0) {
|
||||
scoutfs_err(sb, "lock server saw err %d while timing out clients, shutting down", ret);
|
||||
scoutfs_server_abort(sb);
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* A client is leaving the lock service. They aren't using locks and
|
||||
* won't send any more requests. We tear down all the state we had for
|
||||
* them. This can be called multiple times for a given client as their
|
||||
* farewell is resent to new servers. It's OK to not find any state.
|
||||
* If we fail to delete a persistent entry then we have to shut down and
|
||||
* hope that the next server has more luck.
|
||||
*/
|
||||
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
|
||||
{
|
||||
DECLARE_LOCK_SERVER_INFO(sb, inf);
|
||||
struct client_lock_entry *c_ent;
|
||||
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
|
||||
struct client_lock_entry *clent;
|
||||
struct client_lock_entry *tmp;
|
||||
struct server_lock_node *snode;
|
||||
struct scoutfs_key key;
|
||||
@@ -715,7 +858,20 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
|
||||
bool freed;
|
||||
int ret = 0;
|
||||
|
||||
mutex_lock(&inf->mutex);
|
||||
init_lock_clients_key(&key, rid);
|
||||
ret = scoutfs_btree_delete(sb, inf->alloc, inf->wri,
|
||||
&super->lock_clients, &key);
|
||||
mutex_unlock(&inf->mutex);
|
||||
if (ret == -ENOENT) {
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
scoutfs_key_set_zeros(&key);
|
||||
|
||||
while ((snode = get_server_lock(inf, &key, NULL, true))) {
|
||||
|
||||
freed = false;
|
||||
@@ -724,9 +880,9 @@ int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid)
|
||||
(list == &snode->requested) ? &snode->invalidated :
|
||||
NULL) {
|
||||
|
||||
list_for_each_entry_safe(c_ent, tmp, list, head) {
|
||||
if (c_ent->rid == rid) {
|
||||
free_client_entry(inf, snode, c_ent);
|
||||
list_for_each_entry_safe(clent, tmp, list, head) {
|
||||
if (clent->rid == rid) {
|
||||
free_client_entry(inf, snode, clent);
|
||||
freed = true;
|
||||
}
|
||||
}
|
||||
@@ -749,7 +905,7 @@ out:
|
||||
if (ret < 0) {
|
||||
scoutfs_err(sb, "lock server err %d during client rid %016llx farewell, shutting down",
|
||||
ret, rid);
|
||||
scoutfs_server_stop(sb);
|
||||
scoutfs_server_abort(sb);
|
||||
}
|
||||
|
||||
return ret;
|
||||
@@ -787,35 +943,36 @@ static char *lock_on_list_string(u8 on_list)
|
||||
static void lock_server_tseq_show(struct seq_file *m,
|
||||
struct scoutfs_tseq_entry *ent)
|
||||
{
|
||||
struct client_lock_entry *c_ent = container_of(ent,
|
||||
struct client_lock_entry *clent = container_of(ent,
|
||||
struct client_lock_entry,
|
||||
tseq_entry);
|
||||
struct server_lock_node *snode = c_ent->snode;
|
||||
struct server_lock_node *snode = clent->snode;
|
||||
|
||||
seq_printf(m, SK_FMT" %s %s rid %016llx net_id %llu\n",
|
||||
SK_ARG(&snode->key), lock_mode_string(c_ent->mode),
|
||||
lock_on_list_string(c_ent->on_list), c_ent->rid,
|
||||
c_ent->net_id);
|
||||
}
|
||||
|
||||
static void stats_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
|
||||
{
|
||||
struct server_lock_node *snode = container_of(ent, struct server_lock_node,
|
||||
stats_tseq_entry);
|
||||
|
||||
seq_printf(m, SK_FMT" req %llu inv %llu rsp %llu gr %llu\n",
|
||||
SK_ARG(&snode->key), snode->stats[SLT_REQUEST], snode->stats[SLT_INVALIDATE],
|
||||
snode->stats[SLT_RESPONSE], snode->stats[SLT_GRANT]);
|
||||
SK_ARG(&snode->key), lock_mode_string(clent->mode),
|
||||
lock_on_list_string(clent->on_list), clent->rid,
|
||||
clent->net_id);
|
||||
}
|
||||
|
||||
/*
|
||||
* Setup the lock server. This is called before networking can deliver
|
||||
* requests.
|
||||
* requests. If we find existing client records then we enter recovery.
|
||||
* Lock request processing is deferred until recovery is resolved for
|
||||
* all the existing clients, either they reconnect and replay locks or
|
||||
* we time them out.
|
||||
*/
|
||||
int scoutfs_lock_server_setup(struct super_block *sb)
|
||||
int scoutfs_lock_server_setup(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri, u64 max_vers)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
|
||||
struct lock_server_info *inf;
|
||||
SCOUTFS_BTREE_ITEM_REF(iref);
|
||||
struct scoutfs_key key;
|
||||
unsigned int nr;
|
||||
u64 rid;
|
||||
int ret;
|
||||
|
||||
inf = kzalloc(sizeof(struct lock_server_info), GFP_KERNEL);
|
||||
if (!inf)
|
||||
@@ -823,9 +980,15 @@ int scoutfs_lock_server_setup(struct super_block *sb)
|
||||
|
||||
inf->sb = sb;
|
||||
spin_lock_init(&inf->lock);
|
||||
mutex_init(&inf->mutex);
|
||||
inf->locks_root = RB_ROOT;
|
||||
scoutfs_spbm_init(&inf->recovery_pending);
|
||||
INIT_DELAYED_WORK(&inf->recovery_dwork,
|
||||
scoutfs_lock_server_recovery_timeout);
|
||||
scoutfs_tseq_tree_init(&inf->tseq_tree, lock_server_tseq_show);
|
||||
scoutfs_tseq_tree_init(&inf->stats_tseq_tree, stats_tseq_show);
|
||||
inf->alloc = alloc;
|
||||
inf->wri = wri;
|
||||
atomic64_set(&inf->write_version, max_vers); /* inc_return gives +1 */
|
||||
|
||||
inf->tseq_dentry = scoutfs_tseq_create("server_locks", sbi->debug_root,
|
||||
&inf->tseq_tree);
|
||||
@@ -834,17 +997,38 @@ int scoutfs_lock_server_setup(struct super_block *sb)
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
inf->stats_tseq_dentry = scoutfs_tseq_create("server_lock_stats", sbi->debug_root,
|
||||
&inf->stats_tseq_tree);
|
||||
if (!inf->stats_tseq_dentry) {
|
||||
debugfs_remove(inf->tseq_dentry);
|
||||
kfree(inf);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
sbi->lock_server_info = inf;
|
||||
|
||||
return 0;
|
||||
/* we enter recovery if there are any client records */
|
||||
nr = 0;
|
||||
for (rid = 0; ; rid++) {
|
||||
init_lock_clients_key(&key, rid);
|
||||
ret = scoutfs_btree_next(sb, &super->lock_clients, &key, &iref);
|
||||
if (ret == -ENOENT)
|
||||
break;
|
||||
if (ret == 0)
|
||||
ret = get_rid_and_put_ref(&iref, &rid);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
ret = scoutfs_spbm_set(&inf->recovery_pending, rid);
|
||||
if (ret)
|
||||
goto out;
|
||||
nr++;
|
||||
|
||||
if (rid == U64_MAX)
|
||||
break;
|
||||
}
|
||||
ret = 0;
|
||||
|
||||
if (nr) {
|
||||
schedule_delayed_work(&inf->recovery_dwork,
|
||||
msecs_to_jiffies(LOCK_SERVER_RECOVERY_MS));
|
||||
scoutfs_info(sb, "waiting for %u lock clients to recover", nr);
|
||||
}
|
||||
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -857,13 +1041,14 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
|
||||
DECLARE_LOCK_SERVER_INFO(sb, inf);
|
||||
struct server_lock_node *snode;
|
||||
struct server_lock_node *stmp;
|
||||
struct client_lock_entry *c_ent;
|
||||
struct client_lock_entry *clent;
|
||||
struct client_lock_entry *ctmp;
|
||||
LIST_HEAD(list);
|
||||
|
||||
if (inf) {
|
||||
cancel_delayed_work_sync(&inf->recovery_dwork);
|
||||
|
||||
debugfs_remove(inf->tseq_dentry);
|
||||
debugfs_remove(inf->stats_tseq_dentry);
|
||||
|
||||
rbtree_postorder_for_each_entry_safe(snode, stmp,
|
||||
&inf->locks_root, node) {
|
||||
@@ -873,14 +1058,16 @@ void scoutfs_lock_server_destroy(struct super_block *sb)
|
||||
list_splice_init(&snode->invalidated, &list);
|
||||
|
||||
mutex_lock(&snode->mutex);
|
||||
list_for_each_entry_safe(c_ent, ctmp, &list, head) {
|
||||
free_client_entry(inf, snode, c_ent);
|
||||
list_for_each_entry_safe(clent, ctmp, &list, head) {
|
||||
free_client_entry(inf, snode, clent);
|
||||
}
|
||||
mutex_unlock(&snode->mutex);
|
||||
|
||||
kfree(snode);
|
||||
}
|
||||
|
||||
scoutfs_spbm_destroy(&inf->recovery_pending);
|
||||
|
||||
kfree(inf);
|
||||
sbi->lock_server_info = NULL;
|
||||
}
|
||||
|
||||
@@ -3,15 +3,17 @@
|
||||
|
||||
int scoutfs_lock_server_recover_response(struct super_block *sb, u64 rid,
|
||||
struct scoutfs_net_lock_recover *nlr);
|
||||
int scoutfs_lock_server_finished_recovery(struct super_block *sb);
|
||||
int scoutfs_lock_server_request(struct super_block *sb, u64 rid,
|
||||
u64 net_id, struct scoutfs_net_lock *nl);
|
||||
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid);
|
||||
int scoutfs_lock_server_greeting(struct super_block *sb, u64 rid,
|
||||
bool should_exist);
|
||||
int scoutfs_lock_server_response(struct super_block *sb, u64 rid,
|
||||
struct scoutfs_net_lock *nl);
|
||||
int scoutfs_lock_server_farewell(struct super_block *sb, u64 rid);
|
||||
|
||||
int scoutfs_lock_server_setup(struct super_block *sb);
|
||||
int scoutfs_lock_server_setup(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri, u64 max_vers);
|
||||
void scoutfs_lock_server_destroy(struct super_block *sb);
|
||||
|
||||
#endif
|
||||
|
||||
@@ -4,7 +4,6 @@
|
||||
#include <linux/bitops.h>
|
||||
#include "key.h"
|
||||
#include "counters.h"
|
||||
#include "super.h"
|
||||
|
||||
void __printf(4, 5) scoutfs_msg(struct super_block *sb, const char *prefix,
|
||||
const char *str, const char *fmt, ...);
|
||||
@@ -24,9 +23,6 @@ do { \
|
||||
#define scoutfs_info(sb, fmt, args...) \
|
||||
scoutfs_msg_check(sb, KERN_INFO, "", fmt, ##args)
|
||||
|
||||
#define scoutfs_tprintk(sb, fmt, args...) \
|
||||
trace_printk(SCSBF " " fmt "\n", SCSB_ARGS(sb), ##args);
|
||||
|
||||
#define scoutfs_bug_on(sb, cond, fmt, args...) \
|
||||
do { \
|
||||
if (cond) { \
|
||||
|
||||
196 kmod/src/net.c
@@ -30,7 +30,6 @@
|
||||
#include "net.h"
|
||||
#include "endian_swap.h"
|
||||
#include "tseq.h"
|
||||
#include "fence.h"
|
||||
|
||||
/*
|
||||
* scoutfs networking delivers requests and responses between nodes.
|
||||
@@ -331,9 +330,6 @@ static int submit_send(struct super_block *sb,
|
||||
WARN_ON_ONCE(id == 0 && (flags & SCOUTFS_NET_FLAG_RESPONSE)))
|
||||
return -EINVAL;
|
||||
|
||||
if (scoutfs_forcing_unmount(sb))
|
||||
return -EIO;
|
||||
|
||||
msend = kmalloc(offsetof(struct message_send,
|
||||
nh.data[data_len]), GFP_NOFS);
|
||||
if (!msend)
|
||||
@@ -355,7 +351,6 @@ static int submit_send(struct super_block *sb,
|
||||
}
|
||||
if (rid != 0) {
|
||||
spin_unlock(&conn->lock);
|
||||
kfree(msend);
|
||||
return -ENOTCONN;
|
||||
}
|
||||
}
|
||||
@@ -425,16 +420,6 @@ static int process_request(struct scoutfs_net_connection *conn,
|
||||
mrecv->nh.data, le16_to_cpu(mrecv->nh.data_len));
|
||||
}
|
||||
|
||||
static int call_resp_func(struct super_block *sb, struct scoutfs_net_connection *conn,
|
||||
scoutfs_net_response_t resp_func, void *resp_data,
|
||||
void *resp, unsigned int resp_len, int error)
|
||||
{
|
||||
if (resp_func)
|
||||
return resp_func(sb, conn, resp, resp_len, error, resp_data);
|
||||
else
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* An incoming response finds the queued request and calls its response
|
||||
* function. The response function for a given request will only be
|
||||
@@ -449,6 +434,7 @@ static int process_response(struct scoutfs_net_connection *conn,
|
||||
struct message_send *msend;
|
||||
scoutfs_net_response_t resp_func = NULL;
|
||||
void *resp_data;
|
||||
int ret = 0;
|
||||
|
||||
spin_lock(&conn->lock);
|
||||
|
||||
@@ -463,8 +449,11 @@ static int process_response(struct scoutfs_net_connection *conn,
|
||||
|
||||
spin_unlock(&conn->lock);
|
||||
|
||||
return call_resp_func(sb, conn, resp_func, resp_data, mrecv->nh.data,
|
||||
le16_to_cpu(mrecv->nh.data_len), net_err_to_host(mrecv->nh.error));
|
||||
if (resp_func)
|
||||
ret = resp_func(sb, conn, mrecv->nh.data,
|
||||
le16_to_cpu(mrecv->nh.data_len),
|
||||
net_err_to_host(mrecv->nh.error), resp_data);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -630,6 +619,8 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
|
||||
break;
|
||||
}
|
||||
|
||||
trace_scoutfs_recv_clock_sync(nh.clock_sync_id);
|
||||
|
||||
data_len = le16_to_cpu(nh.data_len);
|
||||
|
||||
scoutfs_inc_counter(sb, net_recv_messages);
|
||||
@@ -676,15 +667,8 @@ static void scoutfs_net_recv_worker(struct work_struct *work)
|
||||
|
||||
scoutfs_tseq_add(&ninf->msg_tseq_tree, &mrecv->tseq_entry);
|
||||
|
||||
/*
|
||||
* Initial received greetings are processed
|
||||
* synchronously before any other incoming messages.
|
||||
*
|
||||
* Incoming requests or responses to the lock client are
|
||||
* called synchronously to avoid reordering.
|
||||
*/
|
||||
if (nh.cmd == SCOUTFS_NET_CMD_GREETING ||
|
||||
(nh.cmd == SCOUTFS_NET_CMD_LOCK && !conn->listening_conn))
|
||||
/* synchronously process greeting before next recvmsg */
|
||||
if (nh.cmd == SCOUTFS_NET_CMD_GREETING)
|
||||
scoutfs_net_proc_worker(&mrecv->proc_work);
|
||||
else
|
||||
queue_work(conn->workq, &mrecv->proc_work);
|
||||
@@ -784,6 +768,9 @@ static void scoutfs_net_send_worker(struct work_struct *work)
|
||||
trace_scoutfs_net_send_message(sb, &conn->sockname,
|
||||
&conn->peername, &msend->nh);
|
||||
|
||||
msend->nh.clock_sync_id = scoutfs_clock_sync_id();
|
||||
trace_scoutfs_send_clock_sync(msend->nh.clock_sync_id);
|
||||
|
||||
ret = sendmsg_full(conn->sock, &msend->nh, len);
|
||||
|
||||
spin_lock(&conn->lock);
|
||||
@@ -836,9 +823,11 @@ static void scoutfs_net_destroy_worker(struct work_struct *work)
|
||||
if (conn->listening_conn && conn->notify_down)
|
||||
conn->notify_down(sb, conn, conn->info, conn->rid);
|
||||
|
||||
/* free all messages, refactor and complete for forced unmount? */
|
||||
list_splice_init(&conn->resend_queue, &conn->send_queue);
|
||||
list_for_each_entry_safe(msend, tmp, &conn->send_queue, head)
|
||||
list_for_each_entry_safe(msend, tmp, &conn->send_queue, head) {
|
||||
free_msend(ninf, msend);
|
||||
}
|
||||
|
||||
/* accepted sockets are removed from their listener's list */
|
||||
if (conn->listening_conn) {
|
||||
@@ -868,31 +857,13 @@ static void destroy_conn(struct scoutfs_net_connection *conn)
|
||||
}
|
||||
|
||||
/*
|
||||
* By default, TCP would maintain a connection to an unresponsive peer
|
||||
* for a very long time indeed. We can't do that because quorum
|
||||
* members will only participate in an election when they don't have a
|
||||
* healthy connection to a server. We use the KEEPALIVE* and
|
||||
* TCP_USER_TIMEOUT options to ensure that we'll break an unresponsive
|
||||
* connection and return to the quorum and client connection paths to
|
||||
* try and establish a new connection to an active server.
|
||||
*
|
||||
* The TCP_KEEP* and TCP_USER_TIMEOUT option interaction is subtle.
|
||||
* TCP_USER_TIMEOUT only applies if there is unacked written data in the
|
||||
* send queue. It doesn't work if the connection is idle. Adding
|
||||
* keepalive probes with user_timeout set changes how the keepalive
|
||||
* timeout is calculated. CNT no longer matters. Each time
|
||||
* additional probes (not the first) are sent the user timeout is
|
||||
* checked against the last time data was received. If none of the
|
||||
* keepalives are responded to then eventually the user timeout applies.
|
||||
*
|
||||
* Given all this, we start with the overall unresponsive timeout. Then
|
||||
* we set the probes to start sending towards the end of the timeout.
|
||||
* We give it a few tries for a successful response before the timeout
|
||||
* elapses during the probe timer processing after the unsuccessful
|
||||
* probes.
|
||||
* Have a pretty aggressive keepalive timeout of around 10 seconds. The
|
||||
* TCP keepalives are being processed out of task context so they should
|
||||
* be responsive even when mounts are under load.
|
||||
*/
|
||||
#define UNRESPONSIVE_TIMEOUT_SECS 10
|
||||
#define UNRESPONSIVE_PROBES 3
|
||||
#define KEEPCNT 3
|
||||
#define KEEPIDLE 7
|
||||
#define KEEPINTVL 1
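Assuming the kernel's usual keepalive behaviour, the three values above work out to roughly the 10 second figure mentioned in the comment: the first probe is sent after KEEPIDLE = 7 seconds of idle time, then KEEPCNT = 3 probes go out at KEEPINTVL = 1 second intervals, so an unresponsive peer is dropped after about 7 + 3 × 1 = 10 seconds of silence.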
|
||||
static int sock_opts_and_names(struct scoutfs_net_connection *conn,
|
||||
struct socket *sock)
|
||||
{
|
||||
@@ -901,7 +872,7 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
|
||||
int optval;
|
||||
int ret;
|
||||
|
||||
/* we use a keepalive timeout instead of send timeout */
|
||||
/* but use a keepalive timeout instead of send timeout */
|
||||
tv.tv_sec = 0;
|
||||
tv.tv_usec = 0;
|
||||
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
|
||||
@@ -909,32 +880,24 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
/* not checked when user_timeout != 0, but for clarity */
|
||||
optval = UNRESPONSIVE_PROBES;
|
||||
optval = KEEPCNT;
|
||||
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
|
||||
(char *)&optval, sizeof(optval));
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
BUILD_BUG_ON(UNRESPONSIVE_PROBES >= UNRESPONSIVE_TIMEOUT_SECS);
|
||||
optval = UNRESPONSIVE_TIMEOUT_SECS - (UNRESPONSIVE_PROBES);
|
||||
optval = KEEPIDLE;
|
||||
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE,
|
||||
(char *)&optval, sizeof(optval));
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
optval = 1;
|
||||
optval = KEEPINTVL;
|
||||
ret = kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL,
|
||||
(char *)&optval, sizeof(optval));
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
optval = UNRESPONSIVE_TIMEOUT_SECS * MSEC_PER_SEC;
|
||||
ret = kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
|
||||
(char *)&optval, sizeof(optval));
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
optval = 1;
|
||||
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
|
||||
(char *)&optval, sizeof(optval));
|
||||
@@ -962,8 +925,6 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
|
||||
ret = -EAFNOSUPPORT;
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
conn->last_peername = conn->peername;
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
@@ -983,6 +944,7 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
|
||||
struct scoutfs_net_connection *acc_conn;
|
||||
DECLARE_WAIT_QUEUE_HEAD(waitq);
|
||||
struct socket *acc_sock;
|
||||
LIST_HEAD(conn_list);
|
||||
int ret;
|
||||
|
||||
trace_scoutfs_net_listen_work_enter(sb, 0, 0);
|
||||
@@ -992,8 +954,6 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
|
||||
if (ret < 0)
|
||||
break;
|
||||
|
||||
acc_sock->sk->sk_allocation = GFP_NOFS;
|
||||
|
||||
/* inherit accepted request funcs from listening conn */
|
||||
acc_conn = scoutfs_net_alloc_conn(sb, conn->notify_up,
|
||||
conn->notify_down,
|
||||
@@ -1056,8 +1016,6 @@ static void scoutfs_net_connect_worker(struct work_struct *work)
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
sock->sk->sk_allocation = GFP_NOFS;
|
||||
|
||||
/* caller specified connect timeout */
|
||||
tv.tv_sec = conn->connect_timeout_ms / MSEC_PER_SEC;
|
||||
tv.tv_usec = (conn->connect_timeout_ms % MSEC_PER_SEC) * USEC_PER_MSEC;
|
||||
@@ -1131,11 +1089,9 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
|
||||
struct net_info *ninf = SCOUTFS_SB(sb)->net_info;
|
||||
struct scoutfs_net_connection *listener;
|
||||
struct scoutfs_net_connection *acc_conn;
|
||||
scoutfs_net_response_t resp_func;
|
||||
struct message_send *msend;
|
||||
struct message_send *tmp;
|
||||
unsigned long delay;
|
||||
void *resp_data;
|
||||
|
||||
trace_scoutfs_net_shutdown_work_enter(sb, 0, 0);
|
||||
trace_scoutfs_conn_shutdown_start(conn);
|
||||
@@ -1181,30 +1137,6 @@ static void scoutfs_net_shutdown_worker(struct work_struct *work)
|
||||
/* and wait for accepted conn shutdown work to finish */
|
||||
wait_event(conn->waitq, empty_accepted_list(conn));
|
||||
|
||||
/*
|
||||
* Forced unmount will cause net submit to fail once it's
|
||||
* started and it calls shutdown to interrupt any previous
|
||||
* senders waiting for a response. The response callbacks can
|
||||
* do quite a lot of work so we're careful to call them outside
|
||||
* the lock.
|
||||
*/
|
||||
if (scoutfs_forcing_unmount(sb)) {
|
||||
spin_lock(&conn->lock);
|
||||
list_splice_tail_init(&conn->send_queue, &conn->resend_queue);
|
||||
while ((msend = list_first_entry_or_null(&conn->resend_queue,
|
||||
struct message_send, head))) {
|
||||
resp_func = msend->resp_func;
|
||||
resp_data = msend->resp_data;
|
||||
free_msend(ninf, msend);
|
||||
spin_unlock(&conn->lock);
|
||||
|
||||
call_resp_func(sb, conn, resp_func, resp_data, NULL, 0, -ECONNABORTED);
|
||||
|
||||
spin_lock(&conn->lock);
|
||||
}
|
||||
spin_unlock(&conn->lock);
|
||||
}
|
||||
|
||||
spin_lock(&conn->lock);
|
||||
|
||||
/* greetings aren't resent across sockets */
|
||||
@@ -1274,7 +1206,6 @@ static void scoutfs_net_reconn_free_worker(struct work_struct *work)
|
||||
unsigned long now = jiffies;
|
||||
unsigned long deadline = 0;
|
||||
bool requeue = false;
|
||||
int ret;
|
||||
|
||||
trace_scoutfs_net_reconn_free_work_enter(sb, 0, 0);
|
||||
|
||||
@@ -1288,18 +1219,10 @@ restart:
|
||||
time_after_eq(now, acc->reconn_deadline))) {
|
||||
set_conn_fl(acc, reconn_freeing);
|
||||
spin_unlock(&conn->lock);
|
||||
if (!test_conn_fl(conn, shutting_down)) {
|
||||
scoutfs_info(sb, "client "SIN_FMT" reconnect timed out, fencing",
|
||||
SIN_ARG(&acc->last_peername));
|
||||
ret = scoutfs_fence_start(sb, acc->rid,
|
||||
acc->last_peername.sin_addr.s_addr,
|
||||
SCOUTFS_FENCE_CLIENT_RECONNECT);
|
||||
if (ret) {
|
||||
scoutfs_err(sb, "client fence returned err %d, shutting down server",
|
||||
ret);
|
||||
scoutfs_server_stop(sb);
|
||||
}
|
||||
}
|
||||
if (!test_conn_fl(conn, shutting_down))
|
||||
scoutfs_info(sb, "client timed out "SIN_FMT" -> "SIN_FMT", can not reconnect",
|
||||
SIN_ARG(&acc->sockname),
|
||||
SIN_ARG(&acc->peername));
|
||||
destroy_conn(acc);
|
||||
goto restart;
|
||||
}
|
||||
@@ -1346,12 +1269,10 @@ scoutfs_net_alloc_conn(struct super_block *sb,
|
||||
if (!conn)
|
||||
return NULL;
|
||||
|
||||
if (info_size) {
|
||||
conn->info = kzalloc(info_size, GFP_NOFS);
|
||||
if (!conn->info) {
|
||||
kfree(conn);
|
||||
return NULL;
|
||||
}
|
||||
conn->info = kzalloc(info_size, GFP_NOFS);
|
||||
if (!conn->info) {
|
||||
kfree(conn);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
conn->workq = alloc_workqueue("scoutfs_net_%s",
|
||||
@@ -1372,7 +1293,6 @@ scoutfs_net_alloc_conn(struct super_block *sb,
|
||||
init_waitqueue_head(&conn->waitq);
|
||||
conn->sockname.sin_family = AF_INET;
|
||||
conn->peername.sin_family = AF_INET;
|
||||
conn->last_peername.sin_family = AF_INET;
|
||||
INIT_LIST_HEAD(&conn->accepted_head);
|
||||
INIT_LIST_HEAD(&conn->accepted_list);
|
||||
conn->next_send_seq = 1;
|
||||
@@ -1457,8 +1377,6 @@ int scoutfs_net_bind(struct super_block *sb,
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
sock->sk->sk_allocation = GFP_NOFS;
|
||||
|
||||
optval = 1;
|
||||
ret = kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
|
||||
(char *)&optval, sizeof(optval));
|
||||
@@ -1541,7 +1459,8 @@ int scoutfs_net_connect(struct super_block *sb,
|
||||
struct scoutfs_net_connection *conn,
|
||||
struct sockaddr_in *sin, unsigned long timeout_ms)
|
||||
{
|
||||
int ret = 0;
|
||||
int error = 0;
|
||||
int ret;
|
||||
|
||||
spin_lock(&conn->lock);
|
||||
conn->connect_sin = *sin;
|
||||
@@ -1549,8 +1468,10 @@ int scoutfs_net_connect(struct super_block *sb,
|
||||
spin_unlock(&conn->lock);
|
||||
|
||||
queue_work(conn->workq, &conn->connect_work);
|
||||
wait_event(conn->waitq, connect_result(conn, &ret));
|
||||
return ret;
|
||||
|
||||
ret = wait_event_interruptible(conn->waitq,
|
||||
connect_result(conn, &error));
|
||||
return ret ?: error;
|
||||
}
|
||||
|
||||
static void set_valid_greeting(struct scoutfs_net_connection *conn)
|
||||
@@ -1625,8 +1546,9 @@ void scoutfs_net_client_greeting(struct super_block *sb,
|
||||
* response and they can disconnect cleanly.
|
||||
*
|
||||
* At this point our connection is idle except for send submissions and
|
||||
* shutdown being queued. We have exclusive access to the previous conn
|
||||
* once it's shutdown and we set _freeing.
|
||||
* shutdown being queued. Once we shut down a We completely own a We
|
||||
* have exclusive access to a previous conn once its shutdown and we set
|
||||
* _freeing.
|
||||
*/
|
||||
void scoutfs_net_server_greeting(struct super_block *sb,
|
||||
struct scoutfs_net_connection *conn,
|
||||
@@ -1686,10 +1608,10 @@ restart:
|
||||
conn->next_send_id = reconn->next_send_id;
|
||||
atomic64_set(&conn->recv_seq, atomic64_read(&reconn->recv_seq));
|
||||
|
||||
/* reconn should be idle while in reconn_wait */
|
||||
/* greeting response/ack will be on conn send queue */
|
||||
BUG_ON(!list_empty(&reconn->send_queue));
|
||||
/* queued greeting response is racing, can be in send or resend queue */
|
||||
list_splice_tail_init(&reconn->resend_queue, &conn->resend_queue);
|
||||
BUG_ON(!list_empty(&conn->resend_queue));
|
||||
list_splice_init(&reconn->resend_queue, &conn->resend_queue);
|
||||
|
||||
/* new conn info is unused, swap, old won't call down */
|
||||
swap(conn->info, reconn->info);
|
||||
@@ -1781,6 +1703,23 @@ int scoutfs_net_response_node(struct super_block *sb,
|
||||
NULL, NULL, NULL);
|
||||
}
|
||||
|
||||
/*
|
||||
* The response function that was submitted with the request is not
|
||||
* called if the request is canceled here.
|
||||
*/
|
||||
void scoutfs_net_cancel_request(struct super_block *sb,
|
||||
struct scoutfs_net_connection *conn,
|
||||
u8 cmd, u64 id)
|
||||
{
|
||||
struct message_send *msend;
|
||||
|
||||
spin_lock(&conn->lock);
|
||||
msend = find_request(conn, cmd, id);
|
||||
if (msend)
|
||||
complete_send(conn, msend);
|
||||
spin_unlock(&conn->lock);
|
||||
}
|
||||
|
||||
struct sync_request_completion {
|
||||
struct completion comp;
|
||||
void *resp;
|
||||
@@ -1836,10 +1775,11 @@ int scoutfs_net_sync_request(struct super_block *sb,
|
||||
ret = scoutfs_net_submit_request(sb, conn, cmd, arg, arg_len,
|
||||
sync_response, &sreq, &id);
|
||||
|
||||
if (ret == 0) {
|
||||
wait_for_completion(&sreq.comp);
|
||||
ret = wait_for_completion_interruptible(&sreq.comp);
|
||||
if (ret == -ERESTARTSYS)
|
||||
scoutfs_net_cancel_request(sb, conn, cmd, id);
|
||||
else
|
||||
ret = sreq.error;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -49,7 +49,6 @@ struct scoutfs_net_connection {
|
||||
u64 greeting_id;
|
||||
struct sockaddr_in sockname;
|
||||
struct sockaddr_in peername;
|
||||
struct sockaddr_in last_peername;
|
||||
|
||||
struct list_head accepted_head;
|
||||
struct scoutfs_net_connection *listening_conn;
|
||||
@@ -91,23 +90,19 @@ enum conn_flags {
|
||||
#define SIN_ARG(sin) sin, be16_to_cpu((sin)->sin_port)
|
||||
|
||||
static inline void scoutfs_addr_to_sin(struct sockaddr_in *sin,
|
||||
union scoutfs_inet_addr *addr)
|
||||
struct scoutfs_inet_addr *addr)
|
||||
{
|
||||
BUG_ON(addr->v4.family != cpu_to_le16(SCOUTFS_AF_IPV4));
|
||||
|
||||
sin->sin_family = AF_INET;
|
||||
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->v4.addr));
|
||||
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->v4.port));
|
||||
sin->sin_addr.s_addr = cpu_to_be32(le32_to_cpu(addr->addr));
|
||||
sin->sin_port = cpu_to_be16(le16_to_cpu(addr->port));
|
||||
}
|
||||
|
||||
static inline void scoutfs_sin_to_addr(union scoutfs_inet_addr *addr, struct sockaddr_in *sin)
|
||||
static inline void scoutfs_addr_from_sin(struct scoutfs_inet_addr *addr,
|
||||
struct sockaddr_in *sin)
|
||||
{
|
||||
BUG_ON(sin->sin_family != AF_INET);
|
||||
|
||||
memset(addr, 0, sizeof(union scoutfs_inet_addr));
|
||||
addr->v4.family = cpu_to_le16(SCOUTFS_AF_IPV4);
|
||||
addr->v4.addr = be32_to_le32(sin->sin_addr.s_addr);
|
||||
addr->v4.port = be16_to_le16(sin->sin_port);
|
||||
addr->addr = be32_to_le32(sin->sin_addr.s_addr);
|
||||
addr->port = be16_to_le16(sin->sin_port);
|
||||
memset(addr->__pad, 0, sizeof(addr->__pad));
|
||||
}
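
These helpers bridge two byte orders: the scoutfs wire/disk address is little endian while sockaddr_in wants network (big endian) order, so values round-trip through CPU order. A minimal userspace sketch of the same conversion, using glibc's endian helpers and an example address that is not taken from scoutfs itself:

```
#include <stdio.h>
#include <stdint.h>
#include <endian.h>
#include <arpa/inet.h>

int main(void)
{
	uint32_t host_ip = 0x0a000001;		/* 10.0.0.1 as a host-order value */
	uint32_t wire_le = htole32(host_ip);	/* stored little endian, like addr->v4.addr */
	struct in_addr in;

	/* le -> cpu -> network, mirroring cpu_to_be32(le32_to_cpu(...)) above */
	in.s_addr = htonl(le32toh(wire_le));
	printf("%s\n", inet_ntoa(in));		/* prints 10.0.0.1 */
	return 0;
}
```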
|
||||
|
||||
struct scoutfs_net_connection *
|
||||
@@ -134,6 +129,9 @@ int scoutfs_net_submit_request_node(struct super_block *sb,
|
||||
u64 rid, u8 cmd, void *arg, u16 arg_len,
|
||||
scoutfs_net_response_t resp_func,
|
||||
void *resp_data, u64 *id_ret);
|
||||
void scoutfs_net_cancel_request(struct super_block *sb,
|
||||
struct scoutfs_net_connection *conn,
|
||||
u8 cmd, u64 id);
|
||||
int scoutfs_net_sync_request(struct super_block *sb,
|
||||
struct scoutfs_net_connection *conn,
|
||||
u8 cmd, void *arg, unsigned arg_len,
|
||||
|
||||
889
kmod/src/omap.c
@@ -1,889 +0,0 @@
|
||||
/*
|
||||
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU General Public
|
||||
* License v2 as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* General Public License for more details.
|
||||
*/
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/fs.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/sched.h>
|
||||
#include <linux/rhashtable.h>
|
||||
#include <linux/rcupdate.h>
|
||||
|
||||
#include "format.h"
|
||||
#include "counters.h"
|
||||
#include "cmp.h"
|
||||
#include "inode.h"
|
||||
#include "client.h"
|
||||
#include "server.h"
|
||||
#include "omap.h"
|
||||
#include "recov.h"
|
||||
#include "scoutfs_trace.h"
|
||||
|
||||
/*
|
||||
* As a client removes an inode from its cache with an nlink of 0 it
|
||||
* needs to decide if it is the last client using the inode and should
|
||||
* fully delete all the inode's items. It needs to know if other mounts
|
||||
* still have the inode in use.
|
||||
*
|
||||
* We need a way to communicate between mounts that an inode is in use.
|
||||
* We don't want to pay the synchronous per-file locking round trip
|
||||
* costs associated with per-inode open locks that you'd typically see
|
||||
* in systems to solve this problem. The first prototypes of this
|
||||
* tracked open file handles so this was coined the open map, though it
|
||||
* now tracks cached inodes.
|
||||
*
|
||||
* Clients maintain bitmaps that cover groups of inodes. As inodes
|
||||
* enter the cache their bit is set and as the inode is evicted the bit
|
||||
* is cleared. As deletion is attempted, either by scanning orphans or
|
||||
* evicting an inode with an nlink of 0, messages are sent around the
|
||||
* cluster to get the current bitmaps for that inode's group from all
|
||||
* active mounts. If the inode's bit is clear then it can be deleted.
|
||||
*
|
||||
* This layer maintains a list of client rids to send messages to. The
|
||||
* server calls us as clients enter and leave the cluster. We can't
|
||||
* process requests until all clients are present as a server starts up
|
||||
* so we hook into recovery and delay processing until all previously
|
||||
* existing clients are recovered or fenced.
|
||||
*/
|
||||
|
||||
struct omap_rid_list {
|
||||
int nr_rids;
|
||||
struct list_head head;
|
||||
};
|
||||
|
||||
struct omap_rid_entry {
|
||||
struct list_head head;
|
||||
u64 rid;
|
||||
};
|
||||
|
||||
struct omap_info {
|
||||
/* client */
|
||||
struct rhashtable group_ht;
|
||||
|
||||
/* server */
|
||||
struct rhashtable req_ht;
|
||||
struct llist_head requests;
|
||||
spinlock_t lock;
|
||||
struct omap_rid_list rids;
|
||||
atomic64_t next_req_id;
|
||||
};
|
||||
|
||||
#define DECLARE_OMAP_INFO(sb, name) \
|
||||
struct omap_info *name = SCOUTFS_SB(sb)->omap_info
|
||||
|
||||
/*
|
||||
* The presence of an inode in the inode cache sets its bit in the lock
|
||||
* group's bitmap.
|
||||
*
|
||||
* We don't want to add additional global synchronization of inode cache
|
||||
* maintenance so these are tracked in an rcu hash table. Once their
|
||||
* total reaches zero they're removed from the hash and queued for
|
||||
* freeing and readers should ignore them.
|
||||
*/
|
||||
struct omap_group {
|
||||
struct super_block *sb;
|
||||
struct rhash_head ht_head;
|
||||
struct rcu_head rcu;
|
||||
u64 nr;
|
||||
spinlock_t lock;
|
||||
unsigned int total;
|
||||
__le64 bits[SCOUTFS_OPEN_INO_MAP_LE64S];
|
||||
};
|
||||
|
||||
#define trace_group(sb, which, group, bit_nr) \
|
||||
do { \
|
||||
__typeof__(group) _grp = (group); \
|
||||
__typeof__(bit_nr) _nr = (bit_nr); \
|
||||
\
|
||||
trace_scoutfs_omap_group_##which(sb, _grp, _grp->nr, _grp->total, _nr); \
|
||||
} while (0)
|
||||
|
||||
/*
|
||||
* Each request is initialized with the rids of currently mounted
|
||||
* clients. As each responds we remove their rid and send the response
|
||||
* once everyone has contributed.
|
||||
*
|
||||
* The request frequency will typically be low, but in a mass rm -rf
|
||||
* load we will see O(groups * clients) messages flying around.
|
||||
*/
|
||||
struct omap_request {
|
||||
struct llist_node llnode;
|
||||
struct rhash_head ht_head;
|
||||
struct rcu_head rcu;
|
||||
spinlock_t lock;
|
||||
u64 client_rid;
|
||||
u64 client_id;
|
||||
struct omap_rid_list rids;
|
||||
struct scoutfs_open_ino_map map;
|
||||
};
|
||||
|
||||
static inline void init_rid_list(struct omap_rid_list *list)
|
||||
{
|
||||
INIT_LIST_HEAD(&list->head);
|
||||
list->nr_rids = 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Negative searches almost never happen.
|
||||
*/
|
||||
static struct omap_rid_entry *find_rid(struct omap_rid_list *list, u64 rid)
|
||||
{
|
||||
struct omap_rid_entry *entry;
|
||||
|
||||
list_for_each_entry(entry, &list->head, head) {
|
||||
if (rid == entry->rid)
|
||||
return entry;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static int free_rid(struct omap_rid_list *list, struct omap_rid_entry *entry)
|
||||
{
|
||||
int nr;
|
||||
|
||||
list_del(&entry->head);
|
||||
nr = --list->nr_rids;
|
||||
|
||||
kfree(entry);
|
||||
return nr;
|
||||
}
|
||||
|
||||
static void free_rid_list(struct omap_rid_list *list)
|
||||
{
|
||||
struct omap_rid_entry *entry;
|
||||
struct omap_rid_entry *tmp;
|
||||
|
||||
list_for_each_entry_safe(entry, tmp, &list->head, head)
|
||||
free_rid(list, entry);
|
||||
}
|
||||
|
||||
static int copy_rids(struct omap_rid_list *to, struct omap_rid_list *from, spinlock_t *from_lock)
|
||||
{
|
||||
struct omap_rid_entry *entry;
|
||||
struct omap_rid_entry *src;
|
||||
struct omap_rid_entry *dst;
|
||||
int nr;
|
||||
|
||||
spin_lock(from_lock);
|
||||
|
||||
while (to->nr_rids != from->nr_rids) {
|
||||
nr = from->nr_rids;
|
||||
spin_unlock(from_lock);
|
||||
|
||||
while (to->nr_rids < nr) {
|
||||
entry = kmalloc(sizeof(struct omap_rid_entry), GFP_NOFS);
|
||||
if (!entry)
|
||||
return -ENOMEM;
|
||||
|
||||
list_add_tail(&entry->head, &to->head);
|
||||
to->nr_rids++;
|
||||
}
|
||||
|
||||
while (to->nr_rids > nr) {
|
||||
entry = list_first_entry(&to->head, struct omap_rid_entry, head);
|
||||
list_del(&entry->head);
|
||||
kfree(entry);
|
||||
to->nr_rids--;
|
||||
}
|
||||
|
||||
spin_lock(from_lock);
|
||||
}
|
||||
|
||||
dst = list_first_entry(&to->head, struct omap_rid_entry, head);
|
||||
list_for_each_entry(src, &from->head, head) {
|
||||
dst->rid = src->rid;
|
||||
dst = list_next_entry(dst, head);
|
||||
}
|
||||
|
||||
spin_unlock(from_lock);
|
||||
|
||||
return 0;
|
||||
}
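
copy_rids() has to size its private list to match the source without allocating under the source's spinlock, so it drops the lock to grow or shrink and re-checks after retaking it. A rough userspace sketch of that resize-outside-the-lock loop, with a mutex standing in for the spinlock and an array standing in for the list:

```
#include <pthread.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t src_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t src_rids[] = { 100, 200, 300 };
static int src_nr = 3;

static int copy_rids_example(uint64_t **to, int *to_nr)
{
	uint64_t *p;
	int nr;

	pthread_mutex_lock(&src_lock);
	while (*to_nr != src_nr) {
		nr = src_nr;
		pthread_mutex_unlock(&src_lock);

		/* resize with the lock dropped, then re-check the source */
		p = realloc(*to, nr * sizeof(*p));
		if (nr && !p)
			return -1;
		*to = p;
		*to_nr = nr;

		pthread_mutex_lock(&src_lock);
	}
	memcpy(*to, src_rids, src_nr * sizeof(*src_rids));
	pthread_mutex_unlock(&src_lock);
	return 0;
}

int main(void)
{
	uint64_t *rids = NULL;
	int nr = 0;

	if (copy_rids_example(&rids, &nr) == 0)
		printf("copied %d rids, first %llu\n", nr,
		       (unsigned long long)rids[0]);
	free(rids);
	return 0;
}
```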
|
||||
|
||||
static void free_rids(struct omap_rid_list *list)
|
||||
{
|
||||
struct omap_rid_entry *entry;
|
||||
struct omap_rid_entry *tmp;
|
||||
|
||||
list_for_each_entry_safe(entry, tmp, &list->head, head) {
|
||||
list_del(&entry->head);
|
||||
kfree(entry);
|
||||
}
|
||||
}
|
||||
|
||||
void scoutfs_omap_calc_group_nrs(u64 ino, u64 *group_nr, int *bit_nr)
|
||||
{
|
||||
*group_nr = ino >> SCOUTFS_OPEN_INO_MAP_SHIFT;
|
||||
*bit_nr = ino & SCOUTFS_OPEN_INO_MAP_MASK;
|
||||
}
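
The helper above reduces an inode number to a (group, bit) pair with a shift and mask. A small standalone sketch of the same arithmetic, using assumed values for the shift and mask since the real constants are defined in format.h:

```
#include <stdio.h>
#include <stdint.h>

/* assumed 1024 inodes per group; the real value comes from format.h */
#define EXAMPLE_OPEN_INO_MAP_SHIFT	10
#define EXAMPLE_OPEN_INO_MAP_MASK	((1ULL << EXAMPLE_OPEN_INO_MAP_SHIFT) - 1)

static void example_calc_group_nrs(uint64_t ino, uint64_t *group_nr, int *bit_nr)
{
	*group_nr = ino >> EXAMPLE_OPEN_INO_MAP_SHIFT;
	*bit_nr = ino & EXAMPLE_OPEN_INO_MAP_MASK;
}

int main(void)
{
	uint64_t group_nr;
	int bit_nr;

	example_calc_group_nrs(123456, &group_nr, &bit_nr);
	/* prints "ino 123456 -> group 120, bit 576" with the assumed shift */
	printf("ino 123456 -> group %llu, bit %d\n",
	       (unsigned long long)group_nr, bit_nr);
	return 0;
}
```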
|
||||
|
||||
static struct omap_group *alloc_group(struct super_block *sb, u64 group_nr)
|
||||
{
|
||||
struct omap_group *group;
|
||||
|
||||
group = kzalloc(sizeof(struct omap_group), GFP_NOFS);
|
||||
if (group) {
|
||||
group->sb = sb;
|
||||
group->nr = group_nr;
|
||||
spin_lock_init(&group->lock);
|
||||
|
||||
trace_group(sb, alloc, group, -1);
|
||||
}
|
||||
|
||||
return group;
|
||||
}
|
||||
|
||||
static void free_group(struct super_block *sb, struct omap_group *group)
|
||||
{
|
||||
trace_group(sb, free, group, -1);
|
||||
kfree(group);
|
||||
}
|
||||
|
||||
static void free_group_rcu(struct rcu_head *rcu)
|
||||
{
|
||||
struct omap_group *group = container_of(rcu, struct omap_group, rcu);
|
||||
|
||||
free_group(group->sb, group);
|
||||
}
|
||||
|
||||
static const struct rhashtable_params group_ht_params = {
|
||||
.key_len = member_sizeof(struct omap_group, nr),
|
||||
.key_offset = offsetof(struct omap_group, nr),
|
||||
.head_offset = offsetof(struct omap_group, ht_head),
|
||||
};
|
||||
|
||||
/*
|
||||
* Track a cached inode in its group. Our set can be racing with a
|
||||
* final clear that removes the group from the hash, sets total to
|
||||
* UINT_MAX, and calls rcu free. We can retry until the dead group is
|
||||
* no longer visible in the hash table and we can insert a new allocated
|
||||
* group.
|
||||
*
|
||||
* The caller must ensure that the bit is clear, -EEXIST will be
|
||||
* returned otherwise.
|
||||
*/
|
||||
int scoutfs_omap_set(struct super_block *sb, u64 ino)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct omap_group *group;
|
||||
u64 group_nr;
|
||||
int bit_nr;
|
||||
bool found;
|
||||
int ret = 0;
|
||||
|
||||
scoutfs_omap_calc_group_nrs(ino, &group_nr, &bit_nr);
|
||||
|
||||
retry:
|
||||
found = false;
|
||||
rcu_read_lock();
|
||||
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
|
||||
if (group) {
|
||||
spin_lock(&group->lock);
|
||||
if (group->total < UINT_MAX) {
|
||||
found = true;
|
||||
if (WARN_ON_ONCE(test_and_set_bit_le(bit_nr, group->bits)))
|
||||
ret = -EEXIST;
|
||||
else
|
||||
group->total++;
|
||||
}
|
||||
trace_group(sb, inc, group, bit_nr);
|
||||
spin_unlock(&group->lock);
|
||||
}
|
||||
rcu_read_unlock();
|
||||
|
||||
if (!found) {
|
||||
group = alloc_group(sb, group_nr);
|
||||
if (group) {
|
||||
ret = rhashtable_lookup_insert_fast(&ominf->group_ht, &group->ht_head,
|
||||
group_ht_params);
|
||||
if (ret < 0)
|
||||
free_group(sb, group);
|
||||
if (ret == -EEXIST)
|
||||
ret = 0;
|
||||
if (ret == -EBUSY) {
|
||||
/* wait for rehash to finish */
|
||||
synchronize_rcu();
|
||||
ret = 0;
|
||||
}
|
||||
if (ret == 0)
|
||||
goto retry;
|
||||
} else {
|
||||
ret = -ENOMEM;
|
||||
}
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
bool scoutfs_omap_test(struct super_block *sb, u64 ino)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct omap_group *group;
|
||||
bool ret = false;
|
||||
u64 group_nr;
|
||||
int bit_nr;
|
||||
|
||||
scoutfs_omap_calc_group_nrs(ino, &group_nr, &bit_nr);
|
||||
|
||||
rcu_read_lock();
|
||||
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
|
||||
if (group) {
|
||||
spin_lock(&group->lock);
|
||||
ret = !!test_bit_le(bit_nr, group->bits);
|
||||
spin_unlock(&group->lock);
|
||||
}
|
||||
rcu_read_unlock();
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Clear a previously set ino bit. Trying to clear a bit that's already
|
||||
* clear implies imbalanced set/clear or bugs freeing groups. We only
|
||||
* free groups here as the last clear drops the group's total to 0.
|
||||
*/
|
||||
void scoutfs_omap_clear(struct super_block *sb, u64 ino)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct omap_group *group;
|
||||
u64 group_nr;
|
||||
int bit_nr;
|
||||
|
||||
scoutfs_omap_calc_group_nrs(ino, &group_nr, &bit_nr);
|
||||
|
||||
rcu_read_lock();
|
||||
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
|
||||
if (group) {
|
||||
spin_lock(&group->lock);
|
||||
WARN_ON_ONCE(!test_bit_le(bit_nr, group->bits));
|
||||
WARN_ON_ONCE(group->total == 0);
|
||||
WARN_ON_ONCE(group->total == UINT_MAX);
|
||||
if (test_and_clear_bit_le(bit_nr, group->bits)) {
|
||||
if (--group->total == 0) {
|
||||
group->total = UINT_MAX;
|
||||
rhashtable_remove_fast(&ominf->group_ht, &group->ht_head,
|
||||
group_ht_params);
|
||||
call_rcu(&group->rcu, free_group_rcu);
|
||||
}
|
||||
}
|
||||
trace_group(sb, dec, group, bit_nr);
|
||||
spin_unlock(&group->lock);
|
||||
}
|
||||
rcu_read_unlock();
|
||||
|
||||
WARN_ON_ONCE(!group);
|
||||
}
|
||||
|
||||
/*
|
||||
* The server adds rids as it discovers clients. We add them to the
|
||||
* list of rids to send map requests to.
|
||||
*/
|
||||
int scoutfs_omap_add_rid(struct super_block *sb, u64 rid)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct omap_rid_entry *entry;
|
||||
struct omap_rid_entry *found;
|
||||
|
||||
entry = kmalloc(sizeof(struct omap_rid_entry), GFP_NOFS);
|
||||
if (!entry)
|
||||
return -ENOMEM;
|
||||
|
||||
spin_lock(&ominf->lock);
|
||||
found = find_rid(&ominf->rids, rid);
|
||||
if (!found) {
|
||||
entry->rid = rid;
|
||||
list_add_tail(&entry->head, &ominf->rids.head);
|
||||
ominf->rids.nr_rids++;
|
||||
}
|
||||
spin_unlock(&ominf->lock);
|
||||
|
||||
if (found)
|
||||
kfree(entry);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void free_req(struct omap_request *req)
|
||||
{
|
||||
free_rids(&req->rids);
|
||||
kfree(req);
|
||||
}
|
||||
|
||||
static void free_req_rcu(struct rcu_head *rcu)
|
||||
{
|
||||
struct omap_request *req = container_of(rcu, struct omap_request, rcu);
|
||||
|
||||
free_req(req);
|
||||
}
|
||||
|
||||
static const struct rhashtable_params req_ht_params = {
|
||||
.key_len = member_sizeof(struct omap_request, map.args.req_id),
|
||||
.key_offset = offsetof(struct omap_request, map.args.req_id),
|
||||
.head_offset = offsetof(struct omap_request, ht_head),
|
||||
};
|
||||
|
||||
/*
|
||||
* Remove a rid from all the pending requests. If it's the last rid we
|
||||
* give the caller the details to send a response, they'll call back to
|
||||
* keep removing. If their send fails they're going to shutdown the
|
||||
* server so we can queue freeing the request as we give it to them.
|
||||
*/
|
||||
static int remove_rid_from_reqs(struct omap_info *ominf, u64 rid, u64 *resp_rid, u64 *resp_id,
|
||||
struct scoutfs_open_ino_map *map)
|
||||
{
|
||||
struct omap_rid_entry *entry;
|
||||
struct rhashtable_iter iter;
|
||||
struct omap_request *req;
|
||||
int ret = 0;
|
||||
|
||||
rhashtable_walk_enter(&ominf->req_ht, &iter);
|
||||
rhashtable_walk_start(&iter);
|
||||
|
||||
for (;;) {
|
||||
req = rhashtable_walk_next(&iter);
|
||||
if (req == NULL)
|
||||
break;
|
||||
if (req == ERR_PTR(-EAGAIN))
|
||||
continue;
|
||||
|
||||
spin_lock(&req->lock);
|
||||
entry = find_rid(&req->rids, rid);
|
||||
if (entry && free_rid(&req->rids, entry) == 0) {
|
||||
*resp_rid = req->client_rid;
|
||||
*resp_id = req->client_id;
|
||||
memcpy(map, &req->map, sizeof(struct scoutfs_open_ino_map));
|
||||
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
|
||||
call_rcu(&req->rcu, free_req_rcu);
|
||||
ret = 1;
|
||||
}
|
||||
spin_unlock(&req->lock);
|
||||
if (ret > 0)
|
||||
break;
|
||||
}
|
||||
|
||||
rhashtable_walk_stop(&iter);
|
||||
rhashtable_walk_exit(&iter);
|
||||
|
||||
if (ret <= 0) {
|
||||
*resp_rid = 0;
|
||||
*resp_id = 0;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* A client has been evicted. Remove its rid from the list and walk
|
||||
* through all the pending requests and remove its rids, sending the
|
||||
* response if it was the last rid waiting for a response.
|
||||
*
|
||||
* If this returns an error then the server will shut down.
|
||||
*
|
||||
* This can be called multiple times by different servers if there are
|
||||
* errors reclaiming an evicted mount, so we allow asking to remove a
|
||||
* rid that hasn't been added.
|
||||
*/
|
||||
int scoutfs_omap_remove_rid(struct super_block *sb, u64 rid)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct scoutfs_open_ino_map *map = NULL;
|
||||
struct omap_rid_entry *entry;
|
||||
u64 resp_rid = 0;
|
||||
u64 resp_id = 0;
|
||||
int ret;
|
||||
|
||||
spin_lock(&ominf->lock);
|
||||
entry = find_rid(&ominf->rids, rid);
|
||||
if (entry)
|
||||
free_rid(&ominf->rids, entry);
|
||||
spin_unlock(&ominf->lock);
|
||||
|
||||
if (!entry) {
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
|
||||
map = kmalloc(sizeof(struct scoutfs_open_ino_map), GFP_NOFS);
|
||||
if (!map) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* remove the rid from all pending requests, sending responses if it was final */
|
||||
for (;;) {
|
||||
ret = remove_rid_from_reqs(ominf, rid, &resp_rid, &resp_id, map);
|
||||
if (ret <= 0)
|
||||
break;
|
||||
ret = scoutfs_server_send_omap_response(sb, resp_rid, resp_id, map, 0);
|
||||
if (ret < 0)
|
||||
break;
|
||||
}
|
||||
|
||||
out:
|
||||
kfree(map);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Handle a single incoming request in the server. This could have been
|
||||
* delayed by recovery. This only returns an error if we couldn't send
|
||||
* a processing error response to the client.
|
||||
*/
|
||||
static int handle_request(struct super_block *sb, struct omap_request *req)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct omap_rid_list priv_rids;
|
||||
struct omap_rid_entry *entry;
|
||||
int ret;
|
||||
|
||||
init_rid_list(&priv_rids);
|
||||
|
||||
ret = copy_rids(&priv_rids, &ominf->rids, &ominf->lock);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
/* don't send a request to the client who originated this request */
|
||||
entry = find_rid(&priv_rids, req->client_rid);
|
||||
if (entry && free_rid(&priv_rids, entry) == 0) {
|
||||
ret = scoutfs_server_send_omap_response(sb, req->client_rid, req->client_id,
|
||||
&req->map, 0);
|
||||
kfree(req);
|
||||
req = NULL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* this lock isn't needed but sparse gave warnings with conditional locking */
|
||||
ret = copy_rids(&req->rids, &priv_rids, &ominf->lock);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
do {
|
||||
ret = rhashtable_insert_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
|
||||
if (ret == -EBUSY)
|
||||
synchronize_rcu(); /* wait for rehash to finish */
|
||||
} while (ret == -EBUSY);
|
||||
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
/*
|
||||
* We can start getting responses the moment we send the first response. After
|
||||
* we send the last request the req can be freed.
|
||||
*/
|
||||
while ((entry = list_first_entry_or_null(&priv_rids.head, struct omap_rid_entry, head))) {
|
||||
ret = scoutfs_server_send_omap_request(sb, entry->rid, &req->map.args);
|
||||
if (ret < 0) {
|
||||
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
|
||||
goto out;
|
||||
}
|
||||
|
||||
free_rid(&priv_rids, entry);
|
||||
}
|
||||
|
||||
ret = 0;
|
||||
out:
|
||||
free_rids(&priv_rids);
|
||||
if (ret < 0) {
|
||||
ret = scoutfs_server_send_omap_response(sb, req->client_rid, req->client_id,
|
||||
NULL, ret);
|
||||
free_req(req);
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Handle all previously received omap requests from clients. Once
|
||||
* we've finished recovery and can send requests to all clients we can
|
||||
* handle all pending requests. The handling function frees the request
|
||||
* and only returns an error if it couldn't send a response to the
|
||||
* client.
|
||||
*/
|
||||
static int handle_requests(struct super_block *sb)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct llist_node *requests;
|
||||
struct omap_request *req;
|
||||
struct omap_request *tmp;
|
||||
int ret;
|
||||
int err;
|
||||
|
||||
if (scoutfs_recov_next_pending(sb, 0, SCOUTFS_RECOV_GREETING))
|
||||
return 0;
|
||||
|
||||
ret = 0;
|
||||
requests = llist_del_all(&ominf->requests);
|
||||
|
||||
llist_for_each_entry_safe(req, tmp, requests, llnode) {
|
||||
err = handle_request(sb, req);
|
||||
if (err < 0 && ret == 0)
|
||||
ret = err;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int scoutfs_omap_finished_recovery(struct super_block *sb)
|
||||
{
|
||||
return handle_requests(sb);
|
||||
}
|
||||
|
||||
/*
|
||||
* The server is receiving a request from a client for the bitmap of all
|
||||
* open inodes around their ino. Queue it for processing which is
|
||||
* typically immediate and inline but which can be deferred by recovery
|
||||
* as the server first starts up.
|
||||
*/
|
||||
int scoutfs_omap_server_handle_request(struct super_block *sb, u64 rid, u64 id,
|
||||
struct scoutfs_open_ino_map_args *args)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct omap_request *req;
|
||||
|
||||
req = kzalloc(sizeof(struct omap_request), GFP_NOFS);
|
||||
if (req == NULL)
|
||||
return -ENOMEM;
|
||||
|
||||
spin_lock_init(&req->lock);
|
||||
req->client_rid = rid;
|
||||
req->client_id = id;
|
||||
init_rid_list(&req->rids);
|
||||
req->map.args.group_nr = args->group_nr;
|
||||
req->map.args.req_id = cpu_to_le64(atomic64_inc_return(&ominf->next_req_id));
|
||||
|
||||
llist_add(&req->llnode, &ominf->requests);
|
||||
|
||||
return handle_requests(sb);
|
||||
}
|
||||
|
||||
/*
|
||||
* The client is receiving a request from the server for its map for the
|
||||
* given group. Look up the group and copy the bits to the map.
|
||||
*
|
||||
* The mount originating the request for this bitmap has the inode group
|
||||
* write locked. We can't be adding links to any inodes in the group
|
||||
* because that requires the lock. Inodes bits can be set and cleared
|
||||
* while we're sampling the bitmap. These races are fine, they can't be
|
||||
* adding cached inodes if nlink is 0 and we don't have the lock. If
|
||||
* the caller is removing a set bit then they're about to try and delete
|
||||
* the inode themselves and will first have to acquire the cluster lock
|
||||
* themselves.
|
||||
*/
|
||||
int scoutfs_omap_client_handle_request(struct super_block *sb, u64 id,
|
||||
struct scoutfs_open_ino_map_args *args)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
u64 group_nr = le64_to_cpu(args->group_nr);
|
||||
struct scoutfs_open_ino_map *map;
|
||||
struct omap_group *group;
|
||||
bool copied = false;
|
||||
int ret;
|
||||
|
||||
map = kmalloc(sizeof(struct scoutfs_open_ino_map), GFP_NOFS);
|
||||
if (!map)
|
||||
return -ENOMEM;
|
||||
|
||||
map->args = *args;
|
||||
|
||||
rcu_read_lock();
|
||||
group = rhashtable_lookup(&ominf->group_ht, &group_nr, group_ht_params);
|
||||
if (group) {
|
||||
spin_lock(&group->lock);
|
||||
trace_group(sb, request, group, -1);
|
||||
if (group->total > 0 && group->total < UINT_MAX) {
|
||||
memcpy(map->bits, group->bits, sizeof(map->bits));
|
||||
copied = true;
|
||||
}
|
||||
spin_unlock(&group->lock);
|
||||
}
|
||||
rcu_read_unlock();
|
||||
|
||||
if (!copied)
|
||||
memset(map->bits, 0, sizeof(map->bits));
|
||||
|
||||
ret = scoutfs_client_send_omap_response(sb, id, map);
|
||||
kfree(map);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* The server has received an open ino map response from a client. Find
|
||||
* the original request that it's serving, found by the req_id in the response's map, and
|
||||
* send a reply if this was the last response from a client we were
|
||||
* waiting for.
|
||||
*
|
||||
* We can get responses for requests we're no longer tracking if, for
|
||||
* example, sending to a client gets an error. We'll have already sent
|
||||
* the response to the requesting client so we drop these responses on
|
||||
* the floor.
|
||||
*/
|
||||
int scoutfs_omap_server_handle_response(struct super_block *sb, u64 rid,
|
||||
struct scoutfs_open_ino_map *resp_map)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct scoutfs_open_ino_map *map;
|
||||
struct omap_rid_entry *entry;
|
||||
bool send_response = false;
|
||||
struct omap_request *req;
|
||||
u64 resp_rid;
|
||||
u64 resp_id;
|
||||
int ret;
|
||||
|
||||
map = kmalloc(sizeof(struct scoutfs_open_ino_map), GFP_NOFS);
|
||||
if (!map) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
rcu_read_lock();
|
||||
req = rhashtable_lookup(&ominf->req_ht, &resp_map->args.req_id, req_ht_params);
|
||||
if (req) {
|
||||
spin_lock(&req->lock);
|
||||
entry = find_rid(&req->rids, rid);
|
||||
if (entry) {
|
||||
bitmap_or((unsigned long *)req->map.bits, (unsigned long *)req->map.bits,
|
||||
(unsigned long *)resp_map->bits, SCOUTFS_OPEN_INO_MAP_BITS);
|
||||
if (free_rid(&req->rids, entry) == 0)
|
||||
send_response = true;
|
||||
}
|
||||
spin_unlock(&req->lock);
|
||||
|
||||
if (send_response) {
|
||||
resp_rid = req->client_rid;
|
||||
resp_id = req->client_id;
|
||||
memcpy(map, &req->map, sizeof(struct scoutfs_open_ino_map));
|
||||
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
|
||||
call_rcu(&req->rcu, free_req_rcu);
|
||||
}
|
||||
}
|
||||
rcu_read_unlock();
|
||||
|
||||
if (send_response)
|
||||
ret = scoutfs_server_send_omap_response(sb, resp_rid, resp_id, map, 0);
|
||||
else
|
||||
ret = 0;
|
||||
kfree(map);
|
||||
out:
|
||||
return ret;
|
||||
}
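
Once the last client has responded, every client's bitmap has been ORed into the request's map and the combined result goes back to the requesting mount, which may fully delete the inode only if its bit ended up clear. A tiny sketch of that final check, with an invented map size:

```
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define EXAMPLE_MAP_LE64S	16	/* assumed; the real size is SCOUTFS_OPEN_INO_MAP_LE64S */

static bool ino_still_cached(const uint64_t *combined, int bit_nr)
{
	return (combined[bit_nr / 64] >> (bit_nr % 64)) & 1;
}

int main(void)
{
	uint64_t client_a[EXAMPLE_MAP_LE64S] = { 0 };
	uint64_t client_b[EXAMPLE_MAP_LE64S] = { 0 };
	uint64_t combined[EXAMPLE_MAP_LE64S];
	int i;

	client_b[0] |= 1ULL << 5;	/* some other mount still caches bit 5 */

	for (i = 0; i < EXAMPLE_MAP_LE64S; i++)
		combined[i] = client_a[i] | client_b[i];

	/* prints "bit 5 cached: 1, bit 6 cached: 0" */
	printf("bit 5 cached: %d, bit 6 cached: %d\n",
	       ino_still_cached(combined, 5), ino_still_cached(combined, 6));
	return 0;
}
```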
|
||||
|
||||
/*
|
||||
* The server is shutting down. Free all the server state associated
|
||||
* with ongoing request processing. Clients who still have requests
|
||||
* pending will resend them to the next server.
|
||||
*/
|
||||
void scoutfs_omap_server_shutdown(struct super_block *sb)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct rhashtable_iter iter;
|
||||
struct llist_node *requests;
|
||||
struct omap_request *req;
|
||||
struct omap_request *tmp;
|
||||
|
||||
rhashtable_walk_enter(&ominf->req_ht, &iter);
|
||||
rhashtable_walk_start(&iter);
|
||||
|
||||
for (;;) {
|
||||
req = rhashtable_walk_next(&iter);
|
||||
if (req == NULL)
|
||||
break;
|
||||
if (req == ERR_PTR(-EAGAIN))
|
||||
continue;
|
||||
|
||||
if (req->rids.nr_rids != 0) {
|
||||
free_rids(&req->rids);
|
||||
rhashtable_remove_fast(&ominf->req_ht, &req->ht_head, req_ht_params);
|
||||
call_rcu(&req->rcu, free_req_rcu);
|
||||
}
|
||||
}
|
||||
|
||||
rhashtable_walk_stop(&iter);
|
||||
rhashtable_walk_exit(&iter);
|
||||
|
||||
requests = llist_del_all(&ominf->requests);
|
||||
llist_for_each_entry_safe(req, tmp, requests, llnode)
|
||||
kfree(req);
|
||||
|
||||
spin_lock(&ominf->lock);
|
||||
free_rid_list(&ominf->rids);
|
||||
spin_unlock(&ominf->lock);
|
||||
|
||||
synchronize_rcu();
|
||||
}
|
||||
|
||||
int scoutfs_omap_setup(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct omap_info *ominf;
|
||||
int ret;
|
||||
|
||||
ominf = kzalloc(sizeof(struct omap_info), GFP_KERNEL);
|
||||
if (!ominf) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = rhashtable_init(&ominf->group_ht, &group_ht_params);
|
||||
if (ret < 0) {
|
||||
kfree(ominf);
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = rhashtable_init(&ominf->req_ht, &req_ht_params);
|
||||
if (ret < 0) {
|
||||
rhashtable_destroy(&ominf->group_ht);
|
||||
kfree(ominf);
|
||||
goto out;
|
||||
}
|
||||
|
||||
init_llist_head(&ominf->requests);
|
||||
spin_lock_init(&ominf->lock);
|
||||
init_rid_list(&ominf->rids);
|
||||
atomic64_set(&ominf->next_req_id, 0);
|
||||
|
||||
sbi->omap_info = ominf;
|
||||
ret = 0;
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* To get here the server must have shut down, freeing requests, and
|
||||
* evict must have been called on all cached inodes so we can just
|
||||
* synchronize all the pending group frees.
|
||||
*/
|
||||
void scoutfs_omap_destroy(struct super_block *sb)
|
||||
{
|
||||
DECLARE_OMAP_INFO(sb, ominf);
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct rhashtable_iter iter;
|
||||
|
||||
if (ominf) {
|
||||
synchronize_rcu();
|
||||
|
||||
/* double check that all the groups decremented to 0 and were freed */
|
||||
rhashtable_walk_enter(&ominf->group_ht, &iter);
|
||||
rhashtable_walk_start(&iter);
|
||||
WARN_ON_ONCE(rhashtable_walk_peek(&iter) != NULL);
|
||||
rhashtable_walk_stop(&iter);
|
||||
rhashtable_walk_exit(&iter);
|
||||
|
||||
spin_lock(&ominf->lock);
|
||||
free_rid_list(&ominf->rids);
|
||||
spin_unlock(&ominf->lock);
|
||||
|
||||
rhashtable_destroy(&ominf->group_ht);
|
||||
rhashtable_destroy(&ominf->req_ht);
|
||||
kfree(ominf);
|
||||
sbi->omap_info = NULL;
|
||||
}
|
||||
}
|
||||
@@ -1,23 +0,0 @@
|
||||
#ifndef _SCOUTFS_OMAP_H_
|
||||
#define _SCOUTFS_OMAP_H_
|
||||
|
||||
int scoutfs_omap_set(struct super_block *sb, u64 ino);
|
||||
bool scoutfs_omap_test(struct super_block *sb, u64 ino);
|
||||
void scoutfs_omap_clear(struct super_block *sb, u64 ino);
|
||||
int scoutfs_omap_client_handle_request(struct super_block *sb, u64 id,
|
||||
struct scoutfs_open_ino_map_args *args);
|
||||
void scoutfs_omap_calc_group_nrs(u64 ino, u64 *group_nr, int *bit_nr);
|
||||
|
||||
int scoutfs_omap_add_rid(struct super_block *sb, u64 rid);
|
||||
int scoutfs_omap_remove_rid(struct super_block *sb, u64 rid);
|
||||
int scoutfs_omap_finished_recovery(struct super_block *sb);
|
||||
int scoutfs_omap_server_handle_request(struct super_block *sb, u64 rid, u64 id,
|
||||
struct scoutfs_open_ino_map_args *args);
|
||||
int scoutfs_omap_server_handle_response(struct super_block *sb, u64 rid,
|
||||
struct scoutfs_open_ino_map *resp_map);
|
||||
void scoutfs_omap_server_shutdown(struct super_block *sb);
|
||||
|
||||
int scoutfs_omap_setup(struct super_block *sb);
|
||||
void scoutfs_omap_destroy(struct super_block *sb);
|
||||
|
||||
#endif
|
||||
@@ -26,39 +26,62 @@
|
||||
#include "msg.h"
|
||||
#include "options.h"
|
||||
#include "super.h"
|
||||
#include "inode.h"
|
||||
#include "alloc.h"
|
||||
|
||||
enum {
|
||||
Opt_acl,
|
||||
Opt_data_prealloc_blocks,
|
||||
Opt_data_prealloc_contig_only,
|
||||
Opt_metadev_path,
|
||||
Opt_noacl,
|
||||
Opt_orphan_scan_delay_ms,
|
||||
Opt_quorum_slot_nr,
|
||||
Opt_err,
|
||||
};
|
||||
|
||||
static const match_table_t tokens = {
|
||||
{Opt_acl, "acl"},
|
||||
{Opt_data_prealloc_blocks, "data_prealloc_blocks=%s"},
|
||||
{Opt_data_prealloc_contig_only, "data_prealloc_contig_only=%s"},
|
||||
{Opt_server_addr, "server_addr=%s"},
|
||||
{Opt_metadev_path, "metadev_path=%s"},
|
||||
{Opt_noacl, "noacl"},
|
||||
{Opt_orphan_scan_delay_ms, "orphan_scan_delay_ms=%s"},
|
||||
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
|
||||
{Opt_err, NULL}
|
||||
};
|
||||
|
||||
struct options_info {
|
||||
seqlock_t seqlock;
|
||||
struct scoutfs_mount_options opts;
|
||||
struct scoutfs_sysfs_attrs sysfs_attrs;
|
||||
struct options_sb_info {
|
||||
struct dentry *debugfs_dir;
|
||||
};
|
||||
|
||||
#define DECLARE_OPTIONS_INFO(sb, name) \
|
||||
struct options_info *name = SCOUTFS_SB(sb)->options_info
|
||||
u32 scoutfs_option_u32(struct super_block *sb, int token)
|
||||
{
|
||||
WARN_ON_ONCE(1);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* The caller's string is null terminated and can be clobbered */
|
||||
static int parse_ipv4(struct super_block *sb, char *str,
|
||||
struct sockaddr_in *sin)
|
||||
{
|
||||
unsigned long port = 0;
|
||||
__be32 addr;
|
||||
char *c;
|
||||
int ret;
|
||||
|
||||
/* null term port, if specified */
|
||||
c = strchr(str, ':');
|
||||
if (c)
|
||||
*c = '\0';
|
||||
|
||||
/* parse addr */
|
||||
addr = in_aton(str);
|
||||
if (ipv4_is_multicast(addr) || ipv4_is_lbcast(addr) ||
|
||||
ipv4_is_zeronet(addr) ||
|
||||
ipv4_is_local_multicast(addr)) {
|
||||
scoutfs_err(sb, "invalid unicast ipv4 address: %s", str);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* parse port, if specified */
|
||||
if (c) {
|
||||
c++;
|
||||
ret = kstrtoul(c, 0, &port);
|
||||
if (ret != 0 || port == 0 || port >= U16_MAX) {
|
||||
scoutfs_err(sb, "invalid port in ipv4 address: %s", c);
|
||||
return -EINVAL;
|
||||
}
|
||||
}
|
||||
|
||||
sin->sin_family = AF_INET;
|
||||
sin->sin_addr.s_addr = addr;
|
||||
sin->sin_port = cpu_to_be16(port);
|
||||
|
||||
return 0;
|
||||
}
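
The same addr[:port] split can be sketched in userspace with standard socket APIs; this illustration uses inet_pton and strtoul in place of the kernel's in_aton and kstrtoul, and skips the unicast checks:

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>

static int parse_ipv4_example(char *str, struct sockaddr_in *sin)
{
	unsigned long port = 0;
	char *c;

	/* null terminate the port, if one was given */
	c = strchr(str, ':');
	if (c)
		*c = '\0';

	memset(sin, 0, sizeof(*sin));
	sin->sin_family = AF_INET;
	if (inet_pton(AF_INET, str, &sin->sin_addr) != 1)
		return -1;

	if (c) {
		port = strtoul(c + 1, NULL, 0);
		if (port == 0 || port >= 65535)
			return -1;
	}
	sin->sin_port = htons((unsigned short)port);

	return 0;
}

int main(void)
{
	struct sockaddr_in sin;
	char addr[] = "10.0.0.5:12345";

	if (parse_ipv4_example(addr, &sin) == 0)
		printf("parsed port %u\n", ntohs(sin.sin_port));
	return 0;
}
```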
|
||||
|
||||
static int parse_bdev_path(struct super_block *sb, substring_t *substr,
|
||||
char **bdev_path_ret)
|
||||
@@ -106,133 +129,46 @@ out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void free_options(struct scoutfs_mount_options *opts)
|
||||
{
|
||||
kfree(opts->metadev_path);
|
||||
}
|
||||
|
||||
#define MIN_ORPHAN_SCAN_DELAY_MS 100UL
|
||||
#define DEFAULT_ORPHAN_SCAN_DELAY_MS (10 * MSEC_PER_SEC)
|
||||
#define MAX_ORPHAN_SCAN_DELAY_MS (60 * MSEC_PER_SEC)
|
||||
|
||||
#define MIN_DATA_PREALLOC_BLOCKS 1ULL
|
||||
#define MAX_DATA_PREALLOC_BLOCKS ((unsigned long long)SCOUTFS_BLOCK_SM_MAX)
|
||||
|
||||
static void init_default_options(struct scoutfs_mount_options *opts)
|
||||
{
|
||||
memset(opts, 0, sizeof(*opts));
|
||||
|
||||
opts->data_prealloc_blocks = SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS;
|
||||
opts->data_prealloc_contig_only = 1;
|
||||
opts->quorum_slot_nr = -1;
|
||||
opts->orphan_scan_delay_ms = -1;
|
||||
}
|
||||
|
||||
/*
|
||||
* Parse the option string into our options struct. This can allocate
|
||||
* memory in the struct. The caller is responsible for always calling
|
||||
* free_options() when the struct is destroyed, including when we return
|
||||
* an error.
|
||||
*/
|
||||
static int parse_options(struct super_block *sb, char *options, struct scoutfs_mount_options *opts)
|
||||
int scoutfs_parse_options(struct super_block *sb, char *options,
|
||||
struct mount_options *parsed)
|
||||
{
|
||||
char ipstr[INET_ADDRSTRLEN + 1];
|
||||
substring_t args[MAX_OPT_ARGS];
|
||||
u64 nr64;
|
||||
int nr;
|
||||
int token;
|
||||
char *p;
|
||||
int ret;
|
||||
|
||||
/* Set defaults */
|
||||
memset(parsed, 0, sizeof(*parsed));
|
||||
|
||||
while ((p = strsep(&options, ",")) != NULL) {
|
||||
if (!*p)
|
||||
continue;
|
||||
|
||||
token = match_token(p, tokens, args);
|
||||
switch (token) {
|
||||
case Opt_server_addr:
|
||||
|
||||
case Opt_acl:
|
||||
sb->s_flags |= MS_POSIXACL;
|
||||
break;
|
||||
|
||||
case Opt_data_prealloc_blocks:
|
||||
ret = match_u64(args, &nr64);
|
||||
if (ret < 0 ||
|
||||
nr64 < MIN_DATA_PREALLOC_BLOCKS || nr64 > MAX_DATA_PREALLOC_BLOCKS) {
|
||||
scoutfs_err(sb, "invalid data_prealloc_blocks option, must be between %llu and %llu",
|
||||
MIN_DATA_PREALLOC_BLOCKS, MAX_DATA_PREALLOC_BLOCKS);
|
||||
if (ret == 0)
|
||||
ret = -EINVAL;
|
||||
return ret;
|
||||
}
|
||||
opts->data_prealloc_blocks = nr64;
|
||||
break;
|
||||
|
||||
case Opt_data_prealloc_contig_only:
|
||||
ret = match_int(args, &nr);
|
||||
if (ret < 0 || nr < 0 || nr > 1) {
|
||||
scoutfs_err(sb, "invalid data_prealloc_contig_only option, bool must only be 0 or 1");
|
||||
if (ret == 0)
|
||||
ret = -EINVAL;
|
||||
return ret;
|
||||
}
|
||||
opts->data_prealloc_contig_only = nr;
|
||||
break;
|
||||
|
||||
case Opt_metadev_path:
|
||||
ret = parse_bdev_path(sb, &args[0], &opts->metadev_path);
|
||||
match_strlcpy(ipstr, args, ARRAY_SIZE(ipstr));
|
||||
ret = parse_ipv4(sb, ipstr, &parsed->server_addr);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
break;
|
||||
case Opt_metadev_path:
|
||||
|
||||
case Opt_noacl:
|
||||
sb->s_flags &= ~MS_POSIXACL;
|
||||
break;
|
||||
|
||||
case Opt_orphan_scan_delay_ms:
|
||||
if (opts->orphan_scan_delay_ms != -1) {
|
||||
scoutfs_err(sb, "multiple orphan_scan_delay_ms options provided, only provide one.");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
ret = match_int(args, &nr);
|
||||
if (ret < 0 ||
|
||||
nr < MIN_ORPHAN_SCAN_DELAY_MS || nr > MAX_ORPHAN_SCAN_DELAY_MS) {
|
||||
scoutfs_err(sb, "invalid orphan_scan_delay_ms option, must be between %lu and %lu",
|
||||
MIN_ORPHAN_SCAN_DELAY_MS, MAX_ORPHAN_SCAN_DELAY_MS);
|
||||
if (ret == 0)
|
||||
ret = -EINVAL;
|
||||
ret = parse_bdev_path(sb, &args[0],
|
||||
&parsed->metadev_path);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
}
|
||||
opts->orphan_scan_delay_ms = nr;
|
||||
break;
|
||||
|
||||
case Opt_quorum_slot_nr:
|
||||
if (opts->quorum_slot_nr != -1) {
|
||||
scoutfs_err(sb, "multiple quorum_slot_nr options provided, only provide one.");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
ret = match_int(args, &nr);
|
||||
if (ret < 0 || nr < 0 || nr >= SCOUTFS_QUORUM_MAX_SLOTS) {
|
||||
scoutfs_err(sb, "invalid quorum_slot_nr option, must be between 0 and %u",
|
||||
SCOUTFS_QUORUM_MAX_SLOTS - 1);
|
||||
if (ret == 0)
|
||||
ret = -EINVAL;
|
||||
return ret;
|
||||
}
|
||||
opts->quorum_slot_nr = nr;
|
||||
break;
|
||||
|
||||
default:
|
||||
scoutfs_err(sb, "Unknown or malformed option, \"%s\"", p);
|
||||
return -EINVAL;
|
||||
scoutfs_err(sb, "Unknown or malformed option, \"%s\"",
|
||||
p);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (opts->orphan_scan_delay_ms == -1)
|
||||
opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
|
||||
|
||||
if (!opts->metadev_path) {
|
||||
if (!parsed->metadev_path) {
|
||||
scoutfs_err(sb, "Required mount option \"metadev_path\" not found");
|
||||
return -EINVAL;
|
||||
}
|
||||
@@ -240,267 +176,40 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
|
||||
return 0;
|
||||
}
|
||||
|
||||
void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts)
|
||||
{
|
||||
DECLARE_OPTIONS_INFO(sb, optinf);
|
||||
unsigned int seq;
|
||||
|
||||
if (WARN_ON_ONCE(optinf == NULL)) {
|
||||
/* trying to use options before early setup or after destroy */
|
||||
init_default_options(opts);
|
||||
return;
|
||||
}
|
||||
|
||||
do {
|
||||
seq = read_seqbegin(&optinf->seqlock);
|
||||
memcpy(opts, &optinf->opts, sizeof(struct scoutfs_mount_options));
|
||||
} while (read_seqretry(&optinf->seqlock, seq));
|
||||
}
|
||||
|
||||
/*
|
||||
* Early setup that parses and stores the options so that the rest of
|
||||
* setup can use them. Full options setup that relies on other
|
||||
* components will be done later.
|
||||
*/
|
||||
int scoutfs_options_early_setup(struct super_block *sb, char *options)
|
||||
int scoutfs_options_setup(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct scoutfs_mount_options opts;
|
||||
struct options_info *optinf;
|
||||
struct options_sb_info *osi;
|
||||
int ret;
|
||||
|
||||
init_default_options(&opts);
|
||||
osi = kzalloc(sizeof(struct options_sb_info), GFP_KERNEL);
|
||||
if (!osi)
|
||||
return -ENOMEM;
|
||||
|
||||
ret = parse_options(sb, options, &opts);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
sbi->options = osi;
|
||||
|
||||
optinf = kzalloc(sizeof(struct options_info), GFP_KERNEL);
|
||||
if (!optinf) {
|
||||
osi->debugfs_dir = debugfs_create_dir("options", sbi->debug_root);
|
||||
if (!osi->debugfs_dir) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
seqlock_init(&optinf->seqlock);
|
||||
scoutfs_sysfs_init_attrs(sb, &optinf->sysfs_attrs);
|
||||
|
||||
write_seqlock(&optinf->seqlock);
|
||||
optinf->opts = opts;
|
||||
write_sequnlock(&optinf->seqlock);
|
||||
|
||||
sbi->options_info = optinf;
|
||||
ret = 0;
|
||||
out:
|
||||
if (ret < 0)
|
||||
free_options(&opts);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
|
||||
{
|
||||
struct super_block *sb = root->d_sb;
|
||||
struct scoutfs_mount_options opts;
|
||||
const bool is_acl = !!(sb->s_flags & MS_POSIXACL);
|
||||
|
||||
scoutfs_options_read(sb, &opts);
|
||||
|
||||
if (is_acl)
|
||||
seq_puts(seq, ",acl");
|
||||
seq_printf(seq, ",data_prealloc_blocks=%llu", opts.data_prealloc_blocks);
|
||||
seq_printf(seq, ",data_prealloc_contig_only=%u", opts.data_prealloc_contig_only);
|
||||
seq_printf(seq, ",metadev_path=%s", opts.metadev_path);
|
||||
if (!is_acl)
|
||||
seq_puts(seq, ",noacl");
|
||||
seq_printf(seq, ",orphan_scan_delay_ms=%u", opts.orphan_scan_delay_ms);
|
||||
if (opts.quorum_slot_nr >= 0)
|
||||
seq_printf(seq, ",quorum_slot_nr=%d", opts.quorum_slot_nr);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static ssize_t data_prealloc_blocks_show(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
|
||||
struct scoutfs_mount_options opts;
|
||||
|
||||
scoutfs_options_read(sb, &opts);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%llu", opts.data_prealloc_blocks);
|
||||
}
|
||||
static ssize_t data_prealloc_blocks_store(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
|
||||
DECLARE_OPTIONS_INFO(sb, optinf);
|
||||
char nullterm[30]; /* more than enough for octal -U64_MAX */
|
||||
u64 val;
|
||||
int len;
|
||||
int ret;
|
||||
|
||||
len = min(count, sizeof(nullterm) - 1);
|
||||
memcpy(nullterm, buf, len);
|
||||
nullterm[len] = '\0';
|
||||
|
||||
ret = kstrtoll(nullterm, 0, &val);
|
||||
if (ret < 0 || val < MIN_DATA_PREALLOC_BLOCKS || val > MAX_DATA_PREALLOC_BLOCKS) {
|
||||
scoutfs_err(sb, "invalid data_prealloc_blocks option, must be between %llu and %llu",
|
||||
MIN_DATA_PREALLOC_BLOCKS, MAX_DATA_PREALLOC_BLOCKS);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
write_seqlock(&optinf->seqlock);
|
||||
optinf->opts.data_prealloc_blocks = val;
|
||||
write_sequnlock(&optinf->seqlock);
|
||||
|
||||
return count;
|
||||
}
|
||||
SCOUTFS_ATTR_RW(data_prealloc_blocks);
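
The store handlers copy the sysfs buffer into a small local array and terminate it before parsing, since the incoming buffer isn't guaranteed to end in a NUL. A standalone sketch of that pattern with strtoll standing in for kstrtoll:

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long long parse_store_buf(const char *buf, size_t count)
{
	char nullterm[30];	/* plenty for a 64-bit value */
	size_t len = count < sizeof(nullterm) - 1 ? count : sizeof(nullterm) - 1;

	memcpy(nullterm, buf, len);
	nullterm[len] = '\0';

	return strtoll(nullterm, NULL, 0);
}

int main(void)
{
	char raw[] = { '4', '0', '9', '6', '\n' };	/* not NUL terminated */

	/* prints 4096 */
	printf("%lld\n", parse_store_buf(raw, sizeof(raw)));
	return 0;
}
```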
|
||||
|
||||
static ssize_t data_prealloc_contig_only_show(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
|
||||
struct scoutfs_mount_options opts;
|
||||
|
||||
scoutfs_options_read(sb, &opts);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%u", opts.data_prealloc_contig_only);
|
||||
}
|
||||
static ssize_t data_prealloc_contig_only_store(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
|
||||
DECLARE_OPTIONS_INFO(sb, optinf);
|
||||
char nullterm[20]; /* more than enough for octal -U32_MAX */
|
||||
long val;
|
||||
int len;
|
||||
int ret;
|
||||
|
||||
len = min(count, sizeof(nullterm) - 1);
|
||||
memcpy(nullterm, buf, len);
|
||||
nullterm[len] = '\0';
|
||||
|
||||
ret = kstrtol(nullterm, 0, &val);
|
||||
if (ret < 0 || val < 0 || val > 1) {
|
||||
scoutfs_err(sb, "invalid data_prealloc_contig_only option, bool must be 0 or 1");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
write_seqlock(&optinf->seqlock);
|
||||
optinf->opts.data_prealloc_contig_only = val;
|
||||
write_sequnlock(&optinf->seqlock);
|
||||
|
||||
return count;
|
||||
}
|
||||
SCOUTFS_ATTR_RW(data_prealloc_contig_only);
|
||||
|
||||
static ssize_t metadev_path_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
|
||||
{
|
||||
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
|
||||
struct scoutfs_mount_options opts;
|
||||
|
||||
scoutfs_options_read(sb, &opts);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%s", opts.metadev_path);
|
||||
}
|
||||
SCOUTFS_ATTR_RO(metadev_path);
|
||||
|
||||
static ssize_t orphan_scan_delay_ms_show(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
|
||||
struct scoutfs_mount_options opts;
|
||||
|
||||
scoutfs_options_read(sb, &opts);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%u", opts.orphan_scan_delay_ms);
|
||||
}
|
||||
static ssize_t orphan_scan_delay_ms_store(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
|
||||
DECLARE_OPTIONS_INFO(sb, optinf);
|
||||
char nullterm[20]; /* more than enough for octal -U32_MAX */
|
||||
long val;
|
||||
int len;
|
||||
int ret;
|
||||
|
||||
len = min(count, sizeof(nullterm) - 1);
|
||||
memcpy(nullterm, buf, len);
|
||||
nullterm[len] = '\0';
|
||||
|
||||
ret = kstrtol(nullterm, 0, &val);
|
||||
if (ret < 0 || val < MIN_ORPHAN_SCAN_DELAY_MS || val > MAX_ORPHAN_SCAN_DELAY_MS) {
|
||||
scoutfs_err(sb, "invalid orphan_scan_delay_ms value written to options sysfs file, must be between %lu and %lu",
|
||||
MIN_ORPHAN_SCAN_DELAY_MS, MAX_ORPHAN_SCAN_DELAY_MS);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
write_seqlock(&optinf->seqlock);
|
||||
optinf->opts.orphan_scan_delay_ms = val;
|
||||
write_sequnlock(&optinf->seqlock);
|
||||
|
||||
scoutfs_inode_schedule_orphan_dwork(sb);
|
||||
|
||||
return count;
|
||||
}
|
||||
SCOUTFS_ATTR_RW(orphan_scan_delay_ms);
|
||||
|
||||
static ssize_t quorum_slot_nr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
|
||||
{
|
||||
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
|
||||
struct scoutfs_mount_options opts;
|
||||
|
||||
scoutfs_options_read(sb, &opts);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%d\n", opts.quorum_slot_nr);
|
||||
}
|
||||
SCOUTFS_ATTR_RO(quorum_slot_nr);
|
||||
|
||||
static struct attribute *options_attrs[] = {
|
||||
SCOUTFS_ATTR_PTR(data_prealloc_blocks),
|
||||
SCOUTFS_ATTR_PTR(data_prealloc_contig_only),
|
||||
SCOUTFS_ATTR_PTR(metadev_path),
|
||||
SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),
|
||||
SCOUTFS_ATTR_PTR(quorum_slot_nr),
|
||||
NULL,
|
||||
};
|
||||
|
||||
int scoutfs_options_setup(struct super_block *sb)
|
||||
{
|
||||
DECLARE_OPTIONS_INFO(sb, optinf);
|
||||
int ret;
|
||||
|
||||
ret = scoutfs_sysfs_create_attrs(sb, &optinf->sysfs_attrs, options_attrs, "mount_options");
|
||||
if (ret < 0)
|
||||
if (ret)
|
||||
scoutfs_options_destroy(sb);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* We remove the sysfs files early in unmount so that they can't try to call other subsystems
|
||||
* as they're being destroyed.
|
||||
*/
|
||||
void scoutfs_options_stop(struct super_block *sb)
|
||||
{
|
||||
DECLARE_OPTIONS_INFO(sb, optinf);
|
||||
|
||||
if (optinf)
|
||||
scoutfs_sysfs_destroy_attrs(sb, &optinf->sysfs_attrs);
|
||||
}
|
||||
|
||||
void scoutfs_options_destroy(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
DECLARE_OPTIONS_INFO(sb, optinf);
|
||||
struct options_sb_info *osi = sbi->options;
|
||||
|
||||
scoutfs_options_stop(sb);
|
||||
|
||||
if (optinf) {
|
||||
free_options(&optinf->opts);
|
||||
kfree(optinf);
|
||||
sbi->options_info = NULL;
|
||||
if (osi) {
|
||||
if (osi->debugfs_dir)
|
||||
debugfs_remove_recursive(osi->debugfs_dir);
|
||||
kfree(osi);
|
||||
sbi->options = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -5,21 +5,23 @@
|
||||
#include <linux/in.h>
|
||||
#include "format.h"
|
||||
|
||||
struct scoutfs_mount_options {
|
||||
u64 data_prealloc_blocks;
|
||||
bool data_prealloc_contig_only;
|
||||
char *metadev_path;
|
||||
unsigned int orphan_scan_delay_ms;
|
||||
int quorum_slot_nr;
|
||||
|
||||
enum scoutfs_mount_options {
|
||||
Opt_server_addr,
|
||||
Opt_metadev_path,
|
||||
Opt_err,
|
||||
};
|
||||
|
||||
void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts);
|
||||
int scoutfs_options_show(struct seq_file *seq, struct dentry *root);
|
||||
struct mount_options {
|
||||
struct sockaddr_in server_addr;
|
||||
char *metadev_path;
|
||||
};
|
||||
|
||||
int scoutfs_options_early_setup(struct super_block *sb, char *options);
|
||||
int scoutfs_parse_options(struct super_block *sb, char *options,
|
||||
struct mount_options *parsed);
|
||||
int scoutfs_options_setup(struct super_block *sb);
|
||||
void scoutfs_options_stop(struct super_block *sb);
|
||||
void scoutfs_options_destroy(struct super_block *sb);
|
||||
|
||||
u32 scoutfs_option_u32(struct super_block *sb, int token);
|
||||
#define scoutfs_option_bool scoutfs_option_u32
|
||||
|
||||
#endif /* _SCOUTFS_OPTIONS_H_ */
|
||||
|
||||
1694
kmod/src/quorum.c
File diff suppressed because it is too large
@@ -1,17 +1,10 @@
|
||||
#ifndef _SCOUTFS_QUORUM_H_
|
||||
#define _SCOUTFS_QUORUM_H_
|
||||
|
||||
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
|
||||
|
||||
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
|
||||
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i,
|
||||
struct sockaddr_in *sin);
|
||||
|
||||
int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_config *qconf,
|
||||
u64 term);
|
||||
int scoutfs_quorum_election(struct super_block *sb, ktime_t timeout_abs,
|
||||
u64 prev_term, u64 *elected_term);
|
||||
void scoutfs_quorum_clear_leader(struct super_block *sb);
|
||||
|
||||
int scoutfs_quorum_setup(struct super_block *sb);
|
||||
void scoutfs_quorum_shutdown(struct super_block *sb);
|
||||
void scoutfs_quorum_destroy(struct super_block *sb);
|
||||
|
||||
#endif
|
||||
|
||||
305
kmod/src/recov.c
@@ -1,305 +0,0 @@
|
||||
/*
|
||||
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU General Public
|
||||
* License v2 as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* General Public License for more details.
|
||||
*/
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/fs.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/sched.h>
|
||||
#include <linux/rhashtable.h>
|
||||
#include <linux/rcupdate.h>
|
||||
#include <linux/list_sort.h>
|
||||
|
||||
#include "super.h"
|
||||
#include "recov.h"
|
||||
#include "cmp.h"
|
||||
|
||||
/*
|
||||
* There are a few server messages which can't be processed until they
|
||||
* know that they have state for all possibly active clients. These
|
||||
* little helpers track which clients have recovered what state and give
|
||||
* those message handlers a call to check if recovery has completed. We
|
||||
* track the timeout here, but all we do is call back into the server to
|
||||
* take steps to evict timed out clients and then let us know that their
|
||||
* recovery has finished.
|
||||
*/
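
A compact way to picture the bookkeeping: one pending entry per client rid with a bitmask of the state kinds still outstanding, and recovery is done when nothing remains pending. The flag names and values below are invented for the illustration:

```
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define EX_RECOV_GREETING	(1 << 0)	/* invented flags for the example */
#define EX_RECOV_LOCKS		(1 << 1)

struct ex_pending {
	uint64_t rid;
	int which;		/* 0 means nothing left to recover */
};

static struct ex_pending pending[] = {
	{ .rid = 100, .which = EX_RECOV_GREETING | EX_RECOV_LOCKS },
	{ .rid = 200, .which = EX_RECOV_GREETING },
};

/* a client finished recovering one kind of state */
static void ex_recov_finished(uint64_t rid, int which)
{
	size_t i;

	for (i = 0; i < sizeof(pending) / sizeof(pending[0]); i++) {
		if (pending[i].rid == rid)
			pending[i].which &= ~which;
	}
}

static bool ex_recov_all_finished(void)
{
	size_t i;

	for (i = 0; i < sizeof(pending) / sizeof(pending[0]); i++) {
		if (pending[i].which)
			return false;
	}
	return true;
}

int main(void)
{
	ex_recov_finished(100, EX_RECOV_GREETING | EX_RECOV_LOCKS);
	ex_recov_finished(200, EX_RECOV_GREETING);
	/* prints "all finished: 1" */
	printf("all finished: %d\n", ex_recov_all_finished());
	return 0;
}
```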
|
||||
|
||||
struct recov_info {
|
||||
struct super_block *sb;
|
||||
spinlock_t lock;
|
||||
struct list_head pending;
|
||||
struct timer_list timer;
|
||||
void (*timeout_fn)(struct super_block *);
|
||||
};
|
||||
|
||||
#define DECLARE_RECOV_INFO(sb, name) \
|
||||
struct recov_info *name = SCOUTFS_SB(sb)->recov_info
|
||||
|
||||
struct recov_pending {
|
||||
struct list_head head;
|
||||
u64 rid;
|
||||
int which;
|
||||
};
|
||||
|
||||
static struct recov_pending *next_pending(struct recov_info *recinf, u64 rid, int which)
|
||||
{
|
||||
struct recov_pending *pend;
|
||||
|
||||
list_for_each_entry(pend, &recinf->pending, head) {
|
||||
if (pend->rid > rid && pend->which & which)
|
||||
return pend;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static struct recov_pending *lookup_pending(struct recov_info *recinf, u64 rid, int which)
|
||||
{
|
||||
struct recov_pending *pend;
|
||||
|
||||
pend = next_pending(recinf, rid - 1, which);
|
||||
if (pend && pend->rid == rid)
|
||||
return pend;
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/*
|
||||
* We keep the pending list sorted by rid so that we can iterate over
|
||||
* them. The list should be small and shouldn't be used often.
|
||||
*/
|
||||
static int cmp_pending_rid(void *priv, struct list_head *A, struct list_head *B)
|
||||
{
|
||||
struct recov_pending *a = list_entry(A, struct recov_pending, head);
|
||||
struct recov_pending *b = list_entry(B, struct recov_pending, head);
|
||||
|
||||
return scoutfs_cmp_u64s(a->rid, b->rid);
|
||||
}
|
||||
|
||||
/*
|
||||
* Record that we'll be waiting for a client to recover something.
|
||||
* _finished will eventually be called for every _prepare, either
|
||||
* because recovery naturally finished or because it timed out and the
|
||||
* server evicted the client.
|
||||
*/
|
||||
int scoutfs_recov_prepare(struct super_block *sb, u64 rid, int which)
|
||||
{
|
||||
DECLARE_RECOV_INFO(sb, recinf);
|
||||
struct recov_pending *alloc;
|
||||
struct recov_pending *pend;
|
||||
|
||||
if (WARN_ON_ONCE(which & SCOUTFS_RECOV_INVALID))
|
||||
return -EINVAL;
|
||||
|
||||
alloc = kmalloc(sizeof(*pend), GFP_NOFS);
|
||||
if (!alloc)
|
||||
return -ENOMEM;
|
||||
|
||||
spin_lock(&recinf->lock);
|
||||
|
||||
pend = lookup_pending(recinf, rid, SCOUTFS_RECOV_ALL);
|
||||
if (pend) {
|
||||
pend->which |= which;
|
||||
} else {
|
||||
swap(pend, alloc);
|
||||
pend->rid = rid;
|
||||
pend->which = which;
|
||||
list_add_tail(&pend->head, &recinf->pending);
|
||||
list_sort(NULL, &recinf->pending, cmp_pending_rid);
|
||||
}
|
||||
|
||||
spin_unlock(&recinf->lock);
|
||||
|
||||
kfree(alloc);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Recovery is only finished once we've begun (which sets the timer) and
|
||||
* all clients have finished. If we didn't test the timer we could
|
||||
* claim it finished prematurely as clients are being prepared.
|
||||
*/
|
||||
static int recov_finished(struct recov_info *recinf)
|
||||
{
|
||||
return !!(recinf->timeout_fn != NULL && list_empty(&recinf->pending));
|
||||
}
|
||||
|
||||
static void timer_callback(struct timer_list *timer)
|
||||
{
|
||||
struct recov_info *recinf = from_timer(recinf, timer, timer);
|
||||
|
||||
recinf->timeout_fn(recinf->sb);
|
||||
}
|
||||
|
||||
/*
|
||||
* Begin waiting for recovery once we've prepared all the clients. If
|
||||
* the timeout period elapses before _finish is called on all prepared
|
||||
* clients then the timer will call the callback.
|
||||
*
|
||||
* Returns > 0 if all the prepared clients finish recovery before begin
|
||||
* is called.
|
||||
*/
|
||||
int scoutfs_recov_begin(struct super_block *sb, void (*timeout_fn)(struct super_block *),
|
||||
unsigned int timeout_ms)
|
||||
{
|
||||
DECLARE_RECOV_INFO(sb, recinf);
|
||||
int ret;
|
||||
|
||||
spin_lock(&recinf->lock);
|
||||
|
||||
recinf->timeout_fn = timeout_fn;
|
||||
recinf->timer.expires = jiffies + msecs_to_jiffies(timeout_ms);
|
||||
add_timer(&recinf->timer);
|
||||
|
||||
ret = recov_finished(recinf);
|
||||
|
||||
spin_unlock(&recinf->lock);
|
||||
|
||||
if (ret > 0)
|
||||
del_timer_sync(&recinf->timer);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* A given client has recovered the given state. If it's finished all
|
||||
* recovery then we free it, and if all clients have finished recovery
|
||||
* then we cancel the timeout timer.
|
||||
*
|
||||
* Returns > 0 if _begin has been called and all clients have finished.
|
||||
* The caller will only see > 0 returned once.
|
||||
*/
|
||||
int scoutfs_recov_finish(struct super_block *sb, u64 rid, int which)
|
||||
{
|
||||
DECLARE_RECOV_INFO(sb, recinf);
|
||||
struct recov_pending *pend;
|
||||
int ret = 0;
|
||||
|
||||
spin_lock(&recinf->lock);
|
||||
|
||||
pend = lookup_pending(recinf, rid, which);
|
||||
if (pend) {
|
||||
pend->which &= ~which;
|
||||
if (pend->which) {
|
||||
pend = NULL;
|
||||
} else {
|
||||
list_del(&pend->head);
|
||||
ret = recov_finished(recinf);
|
||||
}
|
||||
}
|
||||
|
||||
spin_unlock(&recinf->lock);
|
||||
|
||||
if (ret > 0)
|
||||
del_timer_sync(&recinf->timer);
|
||||
|
||||
kfree(pend);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Returns true if the given client is still trying to recover
|
||||
* the given state.
|
||||
*/
|
||||
bool scoutfs_recov_is_pending(struct super_block *sb, u64 rid, int which)
|
||||
{
|
||||
DECLARE_RECOV_INFO(sb, recinf);
|
||||
bool is_pending;
|
||||
|
||||
spin_lock(&recinf->lock);
|
||||
is_pending = lookup_pending(recinf, rid, which) != NULL;
|
||||
spin_unlock(&recinf->lock);
|
||||
|
||||
return is_pending;
|
||||
}
|
||||
|
||||
/*
|
||||
* Return the next rid after the given rid of a client waiting for the
|
||||
* given state to be recovered. Start with rid 0, returns 0 when there
|
||||
* are no more clients waiting for recovery.
|
||||
*
|
||||
* This is inherently racey. Callers are responsible for resolving any
|
||||
* actions taken based on pending with the recovery finishing, perhaps
|
||||
* before we return.
|
||||
*/
|
||||
u64 scoutfs_recov_next_pending(struct super_block *sb, u64 rid, int which)
|
||||
{
|
||||
DECLARE_RECOV_INFO(sb, recinf);
|
||||
struct recov_pending *pend;
|
||||
|
||||
spin_lock(&recinf->lock);
|
||||
pend = next_pending(recinf, rid, which);
|
||||
rid = pend ? pend->rid : 0;
|
||||
spin_unlock(&recinf->lock);
|
||||
|
||||
return rid;
|
||||
}
|
||||
|
||||
/*
|
||||
* The server is shutting down and doesn't need to worry about recovery
|
||||
* anymore. It'll be built up again by the next server, if needed.
|
||||
*/
|
||||
void scoutfs_recov_shutdown(struct super_block *sb)
|
||||
{
|
||||
DECLARE_RECOV_INFO(sb, recinf);
|
||||
struct recov_pending *pend;
|
||||
struct recov_pending *tmp;
|
||||
LIST_HEAD(list);
|
||||
|
||||
del_timer_sync(&recinf->timer);
|
||||
|
||||
spin_lock(&recinf->lock);
|
||||
list_splice_init(&recinf->pending, &list);
|
||||
recinf->timeout_fn = NULL;
|
||||
spin_unlock(&recinf->lock);
|
||||
|
||||
list_for_each_entry_safe(pend, tmp, &list, head) {
|
||||
list_del(&pend->head);
|
||||
kfree(pend);
|
||||
}
|
||||
}
|
||||
|
||||
int scoutfs_recov_setup(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct recov_info *recinf;
|
||||
int ret;
|
||||
|
||||
recinf = kzalloc(sizeof(struct recov_info), GFP_KERNEL);
|
||||
if (!recinf) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
recinf->sb = sb;
|
||||
spin_lock_init(&recinf->lock);
|
||||
INIT_LIST_HEAD(&recinf->pending);
|
||||
timer_setup(&recinf->timer, timer_callback, 0);
|
||||
|
||||
sbi->recov_info = recinf;
|
||||
ret = 0;
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
void scoutfs_recov_destroy(struct super_block *sb)
|
||||
{
|
||||
DECLARE_RECOV_INFO(sb, recinf);
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
|
||||
if (recinf) {
|
||||
scoutfs_recov_shutdown(sb);
|
||||
|
||||
kfree(recinf);
|
||||
sbi->recov_info = NULL;
|
||||
}
|
||||
}
|
||||
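To make the control flow of recov.c easier to follow, here is a compact user-space sketch of the same prepare/begin/finish bookkeeping. The names and the array-backed list are illustrative stand-ins for the kernel's sorted list, spinlock, and timer; only the bookkeeping idea carries over, and the sketch keeps no sort order.

```c
/*
 * User-space sketch of the tracking pattern above. prepare() records which
 * state a client still has to recover, finish() clears it, and "all done"
 * is only reported after begin() has (notionally) armed the timeout.
 */
#include <stdbool.h>
#include <stdio.h>

#define RECOV_GREETING (1 << 0)
#define RECOV_LOCKS    (1 << 1)

struct pending { unsigned long long rid; int which; };

static struct pending pend[16];
static int nr_pend;
static bool began;

static void prepare(unsigned long long rid, int which)
{
        for (int i = 0; i < nr_pend; i++) {
                if (pend[i].rid == rid) {
                        pend[i].which |= which;
                        return;
                }
        }
        pend[nr_pend].rid = rid;
        pend[nr_pend].which = which;
        nr_pend++;
}

static bool finish(unsigned long long rid, int which)
{
        for (int i = 0; i < nr_pend; i++) {
                if (pend[i].rid == rid && (pend[i].which & which)) {
                        pend[i].which &= ~which;
                        if (!pend[i].which)
                                pend[i] = pend[--nr_pend];
                        break;
                }
        }
        return began && nr_pend == 0;   /* the "> 0" return in the kernel code */
}

int main(void)
{
        prepare(100, RECOV_GREETING | RECOV_LOCKS);
        prepare(101, RECOV_GREETING);
        began = true;                   /* scoutfs_recov_begin() arms a timer here */
        finish(100, RECOV_GREETING);
        finish(100, RECOV_LOCKS);
        printf("all recovered: %d\n", finish(101, RECOV_GREETING));
        return 0;
}
```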
@@ -1,23 +0,0 @@
#ifndef _SCOUTFS_RECOV_H_
#define _SCOUTFS_RECOV_H_

enum {
        SCOUTFS_RECOV_GREETING = (1 << 0),
        SCOUTFS_RECOV_LOCKS = (1 << 1),

        SCOUTFS_RECOV_INVALID = (~0 << 2),
        SCOUTFS_RECOV_ALL = (~SCOUTFS_RECOV_INVALID),
};

int scoutfs_recov_prepare(struct super_block *sb, u64 rid, int which);
int scoutfs_recov_begin(struct super_block *sb, void (*timeout_fn)(struct super_block *),
                        unsigned int timeout_ms);
int scoutfs_recov_finish(struct super_block *sb, u64 rid, int which);
bool scoutfs_recov_is_pending(struct super_block *sb, u64 rid, int which);
u64 scoutfs_recov_next_pending(struct super_block *sb, u64 rid, int which);
void scoutfs_recov_shutdown(struct super_block *sb);

int scoutfs_recov_setup(struct super_block *sb);
void scoutfs_recov_destroy(struct super_block *sb);

#endif
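The INVALID and ALL masks above are derived from the highest defined state bit, which is what lets scoutfs_recov_prepare() reject unknown bits with a single AND. A tiny self-check of that bit arithmetic, with the constants renamed so it is clearly not the kernel header itself:

```c
/* Quick check of the mask arithmetic above: with two recovery states
 * defined, ALL works out to 0x3 and any higher bit trips INVALID. */
#include <assert.h>

enum {
        RECOV_GREETING = (1 << 0),
        RECOV_LOCKS    = (1 << 1),
        RECOV_INVALID  = (~0 << 2),
        RECOV_ALL      = (~RECOV_INVALID),
};

int main(void)
{
        assert(RECOV_ALL == (RECOV_GREETING | RECOV_LOCKS));
        assert(!((RECOV_GREETING | RECOV_LOCKS) & RECOV_INVALID));
        assert((1 << 5) & RECOV_INVALID);       /* undefined bit is rejected */
        return 0;
}
```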
kmod/src/server.c (3840 lines): file diff suppressed because it is too large
@@ -56,31 +56,23 @@ do { \
        __entry->name##_data_len, __entry->name##_cmd, __entry->name##_flags, \
        __entry->name##_error

u64 scoutfs_server_reserved_meta_blocks(struct super_block *sb);

int scoutfs_server_lock_request(struct super_block *sb, u64 rid,
                                struct scoutfs_net_lock *nl);
int scoutfs_server_lock_response(struct super_block *sb, u64 rid, u64 id,
                                 struct scoutfs_net_lock *nl);
                                 struct scoutfs_net_lock_grant_response *gr);
int scoutfs_server_lock_recover_request(struct super_block *sb, u64 rid,
                                        struct scoutfs_key *key);
void scoutfs_server_recov_finish(struct super_block *sb, u64 rid, int which);
void scoutfs_server_get_roots(struct super_block *sb,
                              struct scoutfs_net_roots *roots);
int scoutfs_server_hold_commit(struct super_block *sb);
int scoutfs_server_apply_commit(struct super_block *sb, int err);

int scoutfs_server_send_omap_request(struct super_block *sb, u64 rid,
                                     struct scoutfs_open_ino_map_args *args);
int scoutfs_server_send_omap_response(struct super_block *sb, u64 rid, u64 id,
                                      struct scoutfs_open_ino_map *map, int err);

u64 scoutfs_server_seq(struct super_block *sb);
u64 scoutfs_server_next_seq(struct super_block *sb);
void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq);

void scoutfs_server_start(struct super_block *sb, struct scoutfs_quorum_config *qconf, u64 term);
struct sockaddr_in;
struct scoutfs_quorum_elected_info;
int scoutfs_server_start(struct super_block *sb, struct sockaddr_in *sin,
                         u64 term);
void scoutfs_server_abort(struct super_block *sb);
void scoutfs_server_stop(struct super_block *sb);
void scoutfs_server_stop_wait(struct super_block *sb);
bool scoutfs_server_is_running(struct super_block *sb);
bool scoutfs_server_is_up(struct super_block *sb);
bool scoutfs_server_is_down(struct super_block *sb);

int scoutfs_server_setup(struct super_block *sb);
void scoutfs_server_destroy(struct super_block *sb);
kmod/src/srch.c (230 lines)
@@ -28,7 +28,6 @@
|
||||
#include "btree.h"
|
||||
#include "spbm.h"
|
||||
#include "client.h"
|
||||
#include "counters.h"
|
||||
#include "scoutfs_trace.h"
|
||||
|
||||
/*
|
||||
@@ -256,9 +255,24 @@ static u8 height_for_blk(u64 blk)
|
||||
return hei;
|
||||
}
|
||||
|
||||
static inline u32 srch_level_magic(int level)
|
||||
static void init_file_block(struct super_block *sb, struct scoutfs_block *bl,
|
||||
int level)
|
||||
{
|
||||
return level ? SCOUTFS_BLOCK_MAGIC_SRCH_PARENT : SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK;
|
||||
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
|
||||
struct scoutfs_block_header *hdr;
|
||||
|
||||
/* don't leak uninit kernel mem.. block should do this for us? */
|
||||
memset(bl->data, 0, SCOUTFS_BLOCK_LG_SIZE);
|
||||
|
||||
hdr = bl->data;
|
||||
hdr->fsid = super->hdr.fsid;
|
||||
hdr->blkno = cpu_to_le64(bl->blkno);
|
||||
prandom_bytes(&hdr->seq, sizeof(hdr->seq));
|
||||
|
||||
if (level)
|
||||
hdr->magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_SRCH_PARENT);
|
||||
else
|
||||
hdr->magic = cpu_to_le32(SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -270,15 +284,39 @@ static inline u32 srch_level_magic(int level)
|
||||
*/
|
||||
static int read_srch_block(struct super_block *sb,
|
||||
struct scoutfs_block_writer *wri, int level,
|
||||
struct scoutfs_block_ref *ref,
|
||||
struct scoutfs_srch_ref *ref,
|
||||
struct scoutfs_block **bl_ret)
|
||||
{
|
||||
u32 magic = srch_level_magic(level);
|
||||
int ret;
|
||||
struct scoutfs_block *bl;
|
||||
int retries = 0;
|
||||
int ret = 0;
|
||||
int mag;
|
||||
|
||||
ret = scoutfs_block_read_ref(sb, ref, magic, bl_ret);
|
||||
if (ret == -ESTALE)
|
||||
mag = level ? SCOUTFS_BLOCK_MAGIC_SRCH_PARENT :
|
||||
SCOUTFS_BLOCK_MAGIC_SRCH_BLOCK;
|
||||
retry:
|
||||
bl = scoutfs_block_read(sb, le64_to_cpu(ref->blkno));
|
||||
if (!IS_ERR_OR_NULL(bl) &&
|
||||
!scoutfs_block_consistent_ref(sb, bl, ref->seq, ref->blkno, mag)) {
|
||||
|
||||
scoutfs_inc_counter(sb, srch_inconsistent_ref);
|
||||
scoutfs_block_writer_forget(sb, wri, bl);
|
||||
scoutfs_block_invalidate(sb, bl);
|
||||
scoutfs_block_put(sb, bl);
|
||||
bl = NULL;
|
||||
|
||||
if (retries++ == 0)
|
||||
goto retry;
|
||||
|
||||
bl = ERR_PTR(-ESTALE);
|
||||
scoutfs_inc_counter(sb, srch_read_stale);
|
||||
}
|
||||
if (IS_ERR(bl)) {
|
||||
ret = PTR_ERR(bl);
|
||||
bl = NULL;
|
||||
}
|
||||
|
||||
*bl_ret = bl;
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -295,7 +333,7 @@ static int read_path_block(struct super_block *sb,
|
||||
{
|
||||
struct scoutfs_block *bl = NULL;
|
||||
struct scoutfs_srch_parent *srp;
|
||||
struct scoutfs_block_ref ref;
|
||||
struct scoutfs_srch_ref ref;
|
||||
int level;
|
||||
int ind;
|
||||
int ret;
|
||||
@@ -354,10 +392,12 @@ static int get_file_block(struct super_block *sb,
|
||||
struct scoutfs_block_header *hdr;
|
||||
struct scoutfs_block *bl = NULL;
|
||||
struct scoutfs_srch_parent *srp;
|
||||
struct scoutfs_block_ref new_root_ref;
|
||||
struct scoutfs_block_ref *ref;
|
||||
struct scoutfs_block *new_bl;
|
||||
struct scoutfs_srch_ref *ref;
|
||||
u64 blkno = 0;
|
||||
int level;
|
||||
int ind;
|
||||
int err;
|
||||
int ret;
|
||||
u8 hei;
|
||||
|
||||
@@ -369,21 +409,29 @@ static int get_file_block(struct super_block *sb,
|
||||
goto out;
|
||||
}
|
||||
|
||||
memset(&new_root_ref, 0, sizeof(new_root_ref));
|
||||
level = sfl->height;
|
||||
|
||||
ret = scoutfs_block_dirty_ref(sb, alloc, wri, &new_root_ref,
|
||||
srch_level_magic(level), &bl, 0, NULL);
|
||||
ret = scoutfs_alloc_meta(sb, alloc, wri, &blkno);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
if (level) {
|
||||
bl = scoutfs_block_create(sb, blkno);
|
||||
if (IS_ERR(bl)) {
|
||||
ret = PTR_ERR(bl);
|
||||
goto out;
|
||||
}
|
||||
blkno = 0;
|
||||
|
||||
scoutfs_block_writer_mark_dirty(sb, wri, bl);
|
||||
|
||||
init_file_block(sb, bl, sfl->height);
|
||||
if (sfl->height) {
|
||||
srp = bl->data;
|
||||
srp->refs[0] = sfl->ref;
|
||||
srp->refs[0].blkno = sfl->ref.blkno;
|
||||
srp->refs[0].seq = sfl->ref.seq;
|
||||
}
|
||||
|
||||
hdr = bl->data;
|
||||
sfl->ref = new_root_ref;
|
||||
sfl->ref.blkno = hdr->blkno;
|
||||
sfl->ref.seq = hdr->seq;
|
||||
sfl->height++;
|
||||
scoutfs_block_put(sb, bl);
|
||||
bl = NULL;
|
||||
@@ -399,13 +447,54 @@ static int get_file_block(struct super_block *sb,
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (flags & GFB_DIRTY)
|
||||
ret = scoutfs_block_dirty_ref(sb, alloc, wri, ref, srch_level_magic(level),
|
||||
&bl, 0, NULL);
|
||||
else
|
||||
ret = scoutfs_block_read_ref(sb, ref, srch_level_magic(level), &bl);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
/* read an existing block */
|
||||
if (ref->blkno) {
|
||||
ret = read_srch_block(sb, wri, level, ref, &bl);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* allocate a new block if we need it */
|
||||
if (!ref->blkno || ((flags & GFB_DIRTY) &&
|
||||
!scoutfs_block_writer_is_dirty(sb, bl))) {
|
||||
ret = scoutfs_alloc_meta(sb, alloc, wri, &blkno);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
new_bl = scoutfs_block_create(sb, blkno);
|
||||
if (IS_ERR(new_bl)) {
|
||||
ret = PTR_ERR(new_bl);
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (bl) {
|
||||
/* cow old block if we have one */
|
||||
ret = scoutfs_free_meta(sb, alloc, wri,
|
||||
bl->blkno);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
memcpy(new_bl->data, bl->data,
|
||||
SCOUTFS_BLOCK_LG_SIZE);
|
||||
scoutfs_block_put(sb, bl);
|
||||
bl = new_bl;
|
||||
hdr = bl->data;
|
||||
hdr->blkno = cpu_to_le64(bl->blkno);
|
||||
prandom_bytes(&hdr->seq, sizeof(hdr->seq));
|
||||
} else {
|
||||
/* init new allocated block */
|
||||
bl = new_bl;
|
||||
init_file_block(sb, bl, level);
|
||||
}
|
||||
|
||||
blkno = 0;
|
||||
scoutfs_block_writer_mark_dirty(sb, wri, bl);
|
||||
|
||||
/* update file or parent block ref */
|
||||
hdr = bl->data;
|
||||
ref->blkno = hdr->blkno;
|
||||
ref->seq = hdr->seq;
|
||||
}
|
||||
|
||||
if (level == 0) {
|
||||
ret = 0;
|
||||
@@ -425,6 +514,12 @@ static int get_file_block(struct super_block *sb,
|
||||
out:
|
||||
scoutfs_block_put(sb, parent);
|
||||
|
||||
/* return allocated blkno on error */
|
||||
if (blkno > 0) {
|
||||
err = scoutfs_free_meta(sb, alloc, wri, blkno);
|
||||
BUG_ON(err); /* radix should have been dirty */
|
||||
}
|
||||
|
||||
if (ret < 0) {
|
||||
scoutfs_block_put(sb, bl);
|
||||
bl = NULL;
|
||||
@@ -861,6 +956,7 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
|
||||
struct scoutfs_srch_rb_root *sroot,
|
||||
u64 hash, u64 ino, u64 last_ino, bool *done)
|
||||
{
|
||||
struct scoutfs_net_roots prev_roots;
|
||||
struct scoutfs_net_roots roots;
|
||||
struct scoutfs_srch_entry start;
|
||||
struct scoutfs_srch_entry end;
|
||||
@@ -868,7 +964,6 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
|
||||
struct scoutfs_log_trees lt;
|
||||
struct scoutfs_srch_file sfl;
|
||||
SCOUTFS_BTREE_ITEM_REF(iref);
|
||||
DECLARE_SAVED_REFS(saved);
|
||||
struct scoutfs_key key;
|
||||
unsigned long limit = SRCH_LIMIT;
|
||||
int ret;
|
||||
@@ -877,6 +972,7 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
|
||||
|
||||
*done = false;
|
||||
srch_init_rb_root(sroot);
|
||||
memset(&prev_roots, 0, sizeof(prev_roots));
|
||||
|
||||
start.hash = cpu_to_le64(hash);
|
||||
start.ino = cpu_to_le64(ino);
|
||||
@@ -891,6 +987,7 @@ retry:
|
||||
ret = scoutfs_client_get_roots(sb, &roots);
|
||||
if (ret)
|
||||
goto out;
|
||||
memset(&roots.fs_root, 0, sizeof(roots.fs_root));
|
||||
|
||||
end = final;
|
||||
|
||||
@@ -966,10 +1063,16 @@ retry:
|
||||
*done = sre_cmp(&end, &final) == 0;
|
||||
ret = 0;
|
||||
out:
|
||||
ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.srch_root.ref,
|
||||
&roots.logs_root.ref);
|
||||
if (ret == -ESTALE)
|
||||
goto retry;
|
||||
if (ret == -ESTALE) {
|
||||
if (memcmp(&prev_roots, &roots, sizeof(roots)) == 0) {
|
||||
scoutfs_inc_counter(sb, srch_search_stale_eio);
|
||||
ret = -EIO;
|
||||
} else {
|
||||
scoutfs_inc_counter(sb, srch_search_stale_retry);
|
||||
prev_roots = roots;
|
||||
goto retry;
|
||||
}
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
@@ -982,27 +1085,18 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri,
|
||||
struct scoutfs_btree_root *root,
|
||||
struct scoutfs_srch_file *sfl, bool force)
|
||||
struct scoutfs_srch_file *sfl)
|
||||
{
|
||||
struct scoutfs_key key;
|
||||
int ret;
|
||||
|
||||
if (sfl->ref.blkno == 0 ||
|
||||
(!force && le64_to_cpu(sfl->blocks) < SCOUTFS_SRCH_LOG_BLOCK_LIMIT))
|
||||
if (le64_to_cpu(sfl->blocks) < SCOUTFS_SRCH_LOG_BLOCK_LIMIT)
|
||||
return 0;
|
||||
|
||||
init_srch_key(&key, SCOUTFS_SRCH_LOG_TYPE,
|
||||
le64_to_cpu(sfl->ref.blkno), 0);
|
||||
ret = scoutfs_btree_insert(sb, alloc, wri, root, &key,
|
||||
sfl, sizeof(*sfl));
|
||||
/*
|
||||
* While it's fine to replay moving the client's logging srch
|
||||
* file to the core btree item, server commits should keep it
|
||||
* from happening. So we'll warn if we see it happen. This can
|
||||
* be removed eventually.
|
||||
*/
|
||||
if (WARN_ON_ONCE(ret == -EEXIST))
|
||||
ret = 0;
|
||||
if (ret == 0) {
|
||||
memset(sfl, 0, sizeof(*sfl));
|
||||
scoutfs_inc_counter(sb, srch_rotate_log);
|
||||
@@ -1104,10 +1198,14 @@ int scoutfs_srch_get_compact(struct super_block *sb,
|
||||
|
||||
for (;;scoutfs_key_inc(&key)) {
|
||||
ret = scoutfs_btree_next(sb, root, &key, &iref);
|
||||
if (ret == -ENOENT) {
|
||||
ret = 0;
|
||||
sc->nr = 0;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (ret == 0) {
|
||||
if (iref.key->sk_type != type) {
|
||||
ret = -ENOENT;
|
||||
} else if (iref.val_len == sizeof(sfl)) {
|
||||
if (iref.val_len == sizeof(struct scoutfs_srch_file)) {
|
||||
key = *iref.key;
|
||||
memcpy(&sfl, iref.val, iref.val_len);
|
||||
} else {
|
||||
@@ -1115,25 +1213,24 @@ int scoutfs_srch_get_compact(struct super_block *sb,
|
||||
}
|
||||
scoutfs_btree_put_iref(&iref);
|
||||
}
|
||||
if (ret < 0) {
|
||||
/* see if we ran out of log files or files entirely */
|
||||
if (ret == -ENOENT) {
|
||||
sc->nr = 0;
|
||||
if (type == SCOUTFS_SRCH_LOG_TYPE) {
|
||||
type = SCOUTFS_SRCH_BLOCKS_TYPE;
|
||||
init_srch_key(&key, type, 0, 0);
|
||||
continue;
|
||||
} else {
|
||||
ret = 0;
|
||||
}
|
||||
}
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* skip any files already being compacted */
|
||||
if (scoutfs_spbm_test(&busy, le64_to_cpu(sfl.ref.blkno)))
|
||||
continue;
|
||||
|
||||
/* see if we ran out of log files or files entirely */
|
||||
if (key.sk_type != type) {
|
||||
sc->nr = 0;
|
||||
if (key.sk_type == SCOUTFS_SRCH_BLOCKS_TYPE) {
|
||||
type = SCOUTFS_SRCH_BLOCKS_TYPE;
|
||||
} else {
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
/* reset if we iterated into the next size category */
|
||||
if (type == SCOUTFS_SRCH_BLOCKS_TYPE) {
|
||||
order = fls64(le64_to_cpu(sfl.blocks)) /
|
||||
@@ -1482,11 +1579,10 @@ static int kway_merge(struct super_block *sb,
        int ind;
        int i;

        if (WARN_ON_ONCE(nr <= 0))
        if (WARN_ON_ONCE(nr <= 1))
                return -EINVAL;

        /* always at least one parent for single leaf */
        nr_parents = max_t(unsigned long, 1, roundup_pow_of_two(nr) - 1);
        nr_parents = roundup_pow_of_two(nr) - 1;
        /* root at [1] for easy sib/parent index calc, final pad for odd sib */
        nr_nodes = 1 + nr_parents + nr + 1;
        tnodes = __vmalloc(nr_nodes * sizeof(struct tourn_node),
@@ -2083,7 +2179,7 @@ static int delete_files(struct super_block *sb, struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri,
|
||||
struct scoutfs_srch_compact *sc)
|
||||
{
|
||||
int ret = 0;
|
||||
int ret;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < sc->nr; i++) {
|
||||
@@ -2129,7 +2225,6 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
|
||||
struct scoutfs_alloc alloc;
|
||||
unsigned long delay;
|
||||
int ret;
|
||||
int err;
|
||||
|
||||
sc = kmalloc(sizeof(struct scoutfs_srch_compact), GFP_NOFS);
|
||||
if (sc == NULL) {
|
||||
@@ -2160,22 +2255,17 @@ static void scoutfs_srch_compact_worker(struct work_struct *work)
|
||||
if (ret < 0)
|
||||
goto commit;
|
||||
|
||||
ret = scoutfs_alloc_prepare_commit(sb, &alloc, &wri) ?:
|
||||
scoutfs_block_writer_write(sb, &wri);
|
||||
ret = scoutfs_block_writer_write(sb, &wri);
|
||||
commit:
|
||||
/* the server won't use our partial compact if _ERROR is set */
|
||||
sc->meta_avail = alloc.avail;
|
||||
sc->meta_freed = alloc.freed;
|
||||
sc->flags |= ret < 0 ? SCOUTFS_SRCH_COMPACT_FLAG_ERROR : 0;
|
||||
|
||||
err = scoutfs_client_srch_commit_compact(sb, sc);
|
||||
if (err < 0 && ret == 0)
|
||||
ret = err;
|
||||
ret = scoutfs_client_srch_commit_compact(sb, sc);
|
||||
out:
|
||||
/* our allocators and files should be stable */
|
||||
WARN_ON_ONCE(ret == -ESTALE);
|
||||
if (ret < 0)
|
||||
scoutfs_inc_counter(sb, srch_compact_error);
|
||||
|
||||
scoutfs_block_writer_forget_all(sb, &wri);
|
||||
if (!atomic_read(&srinf->shutdown)) {
|
||||
|
||||
@@ -37,7 +37,7 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri,
|
||||
struct scoutfs_btree_root *root,
|
||||
struct scoutfs_srch_file *sfl, bool force);
|
||||
struct scoutfs_srch_file *sfl);
|
||||
int scoutfs_srch_get_compact(struct super_block *sb,
|
||||
struct scoutfs_alloc *alloc,
|
||||
struct scoutfs_block_writer *wri,
|
||||
|
||||
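The kway_merge() hunk in the srch.c diff above changes how the tournament tree is sized: one side clamps nr_parents to at least one so a single input stream still gets a root node, while the other rejects nr <= 1 up front and drops the clamp. The sketch below only checks that node-count arithmetic for small input counts; it is an illustration, not the merge itself, and the helper name mirrors the kernel macro rather than being the kernel implementation.

```c
/* Arithmetic check of the tournament-tree sizing used by kway_merge():
 * parents = roundup_pow_of_two(nr) - 1, optionally clamped to at least 1,
 * plus a root slot and one pad slot for an odd sibling. */
#include <stdio.h>

static unsigned long roundup_pow_of_two(unsigned long n)
{
        unsigned long p = 1;

        while (p < n)
                p <<= 1;
        return p;
}

int main(void)
{
        for (unsigned long nr = 1; nr <= 9; nr++) {
                unsigned long parents = roundup_pow_of_two(nr) - 1;

                if (parents < 1)
                        parents = 1;    /* the "single leaf" clamp */
                printf("nr=%lu parents=%lu nodes=%lu\n",
                       nr, parents, 1 + parents + nr + 1);
        }
        return 0;
}
```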
kmod/src/super.c (381 lines)
@@ -20,6 +20,7 @@
|
||||
#include <linux/statfs.h>
|
||||
#include <linux/sched.h>
|
||||
#include <linux/debugfs.h>
|
||||
#include <linux/percpu.h>
|
||||
|
||||
#include "super.h"
|
||||
#include "block.h"
|
||||
@@ -43,43 +44,70 @@
|
||||
#include "srch.h"
|
||||
#include "item.h"
|
||||
#include "alloc.h"
|
||||
#include "recov.h"
|
||||
#include "omap.h"
|
||||
#include "volopt.h"
|
||||
#include "fence.h"
|
||||
#include "xattr.h"
|
||||
#include "scoutfs_trace.h"
|
||||
|
||||
static struct dentry *scoutfs_debugfs_root;
|
||||
|
||||
/* the statfs file fields can be small (and signed?) :/ */
|
||||
static __statfs_word saturate_truncated_word(u64 files)
|
||||
{
|
||||
__statfs_word word = files;
|
||||
static DEFINE_PER_CPU(u64, clock_sync_ids) = 0;
|
||||
|
||||
if (word != files) {
|
||||
word = ~0ULL;
|
||||
if (word < 0)
|
||||
word = (unsigned long)word >> 1;
|
||||
/*
|
||||
* Give the caller a unique clock sync id for a message they're about to
|
||||
* send. We make the ids reasonably globally unique by using randomly
|
||||
* initialized per-cpu 64bit counters.
|
||||
*/
|
||||
__le64 scoutfs_clock_sync_id(void)
|
||||
{
|
||||
u64 rnd = 0;
|
||||
u64 ret;
|
||||
u64 *id;
|
||||
|
||||
retry:
|
||||
preempt_disable();
|
||||
id = this_cpu_ptr(&clock_sync_ids);
|
||||
if (*id == 0) {
|
||||
if (rnd == 0) {
|
||||
preempt_enable();
|
||||
get_random_bytes(&rnd, sizeof(rnd));
|
||||
goto retry;
|
||||
}
|
||||
*id = rnd;
|
||||
}
|
||||
|
||||
return word;
|
||||
ret = ++(*id);
|
||||
preempt_enable();
|
||||
|
||||
return cpu_to_le64(ret);
|
||||
}
|
||||
|
||||
struct statfs_free_blocks {
|
||||
u64 meta;
|
||||
u64 data;
|
||||
};
|
||||
|
||||
static int count_free_blocks(struct super_block *sb, void *arg, int owner,
|
||||
u64 id, bool meta, bool avail, u64 blocks)
|
||||
{
|
||||
struct statfs_free_blocks *sfb = arg;
|
||||
|
||||
if (meta)
|
||||
sfb->meta += blocks;
|
||||
else
|
||||
sfb->data += blocks;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
 * The server gives us the current sum of free blocks and the total
 * inode count that it can see across all the clients' log trees. It
 * won't see allocations and inode creations or deletions that are dirty
 * in client memory as it builds a transaction.
 * Build the free block counts by having alloc read all the persistent
 * blocks which contain allocators and calling us for each of them.
 * Only the super block reads aren't cached so repeatedly calling statfs
 * is like repeated O_DIRECT IO. We can add a cache and stale results
 * if that IO becomes a problem.
 *
 * We don't have static limits on the number of files so the statfs
 * fields for the total possible files and the number free aren't
 * particularly helpful. What we do want to report is the number of
 * inodes, so we fake a max possible number of inodes given a
 * conservative estimate of the total space consumption per file and
 * then find the free by subtracting our precise count of active inodes.
 * This seems like the least surprising compromise where the file max
 * doesn't change and the caller gets the correct count of used inodes.
 * We fake the number of free inodes value by assuming that we can fill
 * free blocks with a certain number of inodes. We then add the number of
 * current inodes to that free count to determine the total possible
 * inodes.
 *
 * The fsid that we report is constructed from the xor of the first two
 * and second two little endian u32s that make up the uuid bytes.
@@ -87,33 +115,41 @@ static __statfs_word saturate_truncated_word(u64 files)
|
||||
static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
|
||||
{
|
||||
struct super_block *sb = dentry->d_inode->i_sb;
|
||||
struct scoutfs_net_statfs nst;
|
||||
u64 files;
|
||||
u64 ffree;
|
||||
struct scoutfs_super_block *super = NULL;
|
||||
struct statfs_free_blocks sfb = {0,};
|
||||
__le32 uuid[4];
|
||||
int ret;
|
||||
|
||||
scoutfs_inc_counter(sb, statfs);
|
||||
|
||||
ret = scoutfs_client_statfs(sb, &nst);
|
||||
super = kzalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
|
||||
if (!super) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = scoutfs_read_super(sb, super);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
kst->f_bfree = (le64_to_cpu(nst.free_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
|
||||
le64_to_cpu(nst.free_data_blocks);
|
||||
ret = scoutfs_alloc_foreach(sb, count_free_blocks, &sfb);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
kst->f_bfree = (sfb.meta << SCOUTFS_BLOCK_SM_LG_SHIFT) + sfb.data;
|
||||
kst->f_type = SCOUTFS_SUPER_MAGIC;
|
||||
kst->f_bsize = SCOUTFS_BLOCK_SM_SIZE;
|
||||
kst->f_blocks = (le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_SM_LG_SHIFT) +
|
||||
le64_to_cpu(nst.total_data_blocks);
|
||||
kst->f_blocks = (le64_to_cpu(super->total_meta_blocks) <<
|
||||
SCOUTFS_BLOCK_SM_LG_SHIFT) +
|
||||
le64_to_cpu(super->total_data_blocks);
|
||||
kst->f_bavail = kst->f_bfree;
|
||||
|
||||
files = div_u64(le64_to_cpu(nst.total_meta_blocks) << SCOUTFS_BLOCK_LG_SHIFT, 2048);
|
||||
ffree = files - le64_to_cpu(nst.inode_count);
|
||||
kst->f_files = saturate_truncated_word(files);
|
||||
kst->f_ffree = saturate_truncated_word(ffree);
|
||||
/* arbitrarily assume ~1K / empty file */
|
||||
kst->f_ffree = sfb.meta * (SCOUTFS_BLOCK_LG_SIZE / 1024);
|
||||
kst->f_files = kst->f_ffree + le64_to_cpu(super->next_ino);
|
||||
|
||||
BUILD_BUG_ON(sizeof(uuid) != sizeof(nst.uuid));
|
||||
memcpy(uuid, nst.uuid, sizeof(uuid));
|
||||
BUILD_BUG_ON(sizeof(uuid) != sizeof(super->uuid));
|
||||
memcpy(uuid, super->uuid, sizeof(uuid));
|
||||
kst->f_fsid.val[0] = le32_to_cpu(uuid[0]) ^ le32_to_cpu(uuid[1]);
|
||||
kst->f_fsid.val[1] = le32_to_cpu(uuid[2]) ^ le32_to_cpu(uuid[3]);
|
||||
kst->f_namelen = SCOUTFS_NAME_LEN;
|
||||
@@ -122,17 +158,57 @@ static int scoutfs_statfs(struct dentry *dentry, struct kstatfs *kst)
|
||||
/* the vfs fills f_flags */
|
||||
ret = 0;
|
||||
out:
|
||||
kfree(super);
|
||||
|
||||
/*
|
||||
* We don't take cluster locks in statfs which makes it a very
|
||||
* convenient place to trigger lock reclaim for debugging. We
|
||||
* try to free as many locks as possible.
|
||||
*/
|
||||
if (scoutfs_trigger(sb, STATFS_LOCK_PURGE))
|
||||
scoutfs_free_unused_locks(sb);
|
||||
scoutfs_free_unused_locks(sb, -1UL);
|
||||
|
||||
return ret;
|
||||
}
|
||||
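scoutfs_statfs() above fakes the inode totals from free metadata space, as the block comment explains. Here is a worked example of that arithmetic with an assumed large-block size and made-up counts; none of these numbers come from the diff or from a real volume.

```c
/* Worked example of the f_files/f_ffree estimate: free inodes are faked as
 * "free metadata blocks times inodes that fit per block at ~1KiB each", and
 * the file total is that plus the inodes already handed out. */
#include <stdio.h>

int main(void)
{
        unsigned long long block_lg_size = 64 * 1024;   /* assumed large block size */
        unsigned long long free_meta_blocks = 1000000;  /* from the allocator walk */
        unsigned long long next_ino = 5000;             /* inodes handed out so far */

        unsigned long long f_ffree = free_meta_blocks * (block_lg_size / 1024);
        unsigned long long f_files = f_ffree + next_ino;

        printf("f_ffree=%llu f_files=%llu\n", f_ffree, f_files);
        return 0;
}
```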
|
||||
static int scoutfs_show_options(struct seq_file *seq, struct dentry *root)
|
||||
{
|
||||
struct super_block *sb = root->d_sb;
|
||||
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
|
||||
|
||||
seq_printf(seq, ",server_addr="SIN_FMT, SIN_ARG(&opts->server_addr));
|
||||
seq_printf(seq, ",metadev_path=%s", opts->metadev_path);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static ssize_t metadev_path_show(struct kobject *kobj,
|
||||
struct kobj_attribute *attr, char *buf)
|
||||
{
|
||||
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
|
||||
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%s", opts->metadev_path);
|
||||
}
|
||||
SCOUTFS_ATTR_RO(metadev_path);
|
||||
|
||||
static ssize_t server_addr_show(struct kobject *kobj,
|
||||
struct kobj_attribute *attr, char *buf)
|
||||
{
|
||||
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
|
||||
struct mount_options *opts = &SCOUTFS_SB(sb)->opts;
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, SIN_FMT"\n",
|
||||
SIN_ARG(&opts->server_addr));
|
||||
}
|
||||
SCOUTFS_ATTR_RO(server_addr);
|
||||
|
||||
static struct attribute *mount_options_attrs[] = {
|
||||
SCOUTFS_ATTR_PTR(metadev_path),
|
||||
SCOUTFS_ATTR_PTR(server_addr),
|
||||
NULL,
|
||||
};
|
||||
|
||||
static int scoutfs_sync_fs(struct super_block *sb, int wait)
|
||||
{
|
||||
trace_scoutfs_sync_fs(sb, wait);
|
||||
@@ -150,15 +226,7 @@ static void scoutfs_metadev_close(struct super_block *sb)
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
|
||||
if (sbi->meta_bdev) {
|
||||
/*
|
||||
* Some kernels have blkdev_reread_part which calls
|
||||
* fsync_bdev while holding the bd_mutex which inverts
|
||||
* the s_umount hold in deactivate_super and blkdev_put
|
||||
* from kill_sb->put_super.
|
||||
*/
|
||||
lockdep_off();
|
||||
blkdev_put(sbi->meta_bdev, SCOUTFS_META_BDEV_MODE);
|
||||
lockdep_on();
|
||||
sbi->meta_bdev = NULL;
|
||||
}
|
||||
}
|
||||
@@ -175,67 +243,44 @@ static void scoutfs_put_super(struct super_block *sb)
|
||||
|
||||
trace_scoutfs_put_super(sb);
|
||||
|
||||
/*
|
||||
* Wait for invalidation and iput to finish with any lingering
|
||||
* inode references that escaped the evict_inodes in
|
||||
* generic_shutdown_super. MS_ACTIVE is clear so final iput
|
||||
* will always evict.
|
||||
*/
|
||||
scoutfs_lock_flush_invalidate(sb);
|
||||
scoutfs_inode_flush_iput(sb);
|
||||
WARN_ON_ONCE(!list_empty(&sb->s_inodes));
|
||||
sbi->shutdown = true;
|
||||
|
||||
scoutfs_forest_stop(sb);
|
||||
scoutfs_data_destroy(sb);
|
||||
scoutfs_srch_destroy(sb);
|
||||
|
||||
scoutfs_lock_shutdown(sb);
|
||||
scoutfs_unlock(sb, sbi->rid_lock, SCOUTFS_LOCK_WRITE);
|
||||
sbi->rid_lock = NULL;
|
||||
|
||||
scoutfs_shutdown_trans(sb);
|
||||
scoutfs_volopt_destroy(sb);
|
||||
scoutfs_client_destroy(sb);
|
||||
scoutfs_inode_destroy(sb);
|
||||
scoutfs_item_destroy(sb);
|
||||
scoutfs_forest_destroy(sb);
|
||||
scoutfs_data_destroy(sb);
|
||||
|
||||
scoutfs_quorum_destroy(sb);
|
||||
/* the server locks the listen address and compacts */
|
||||
scoutfs_lock_shutdown(sb);
|
||||
scoutfs_server_destroy(sb);
|
||||
scoutfs_recov_destroy(sb);
|
||||
scoutfs_net_destroy(sb);
|
||||
scoutfs_lock_destroy(sb);
|
||||
scoutfs_omap_destroy(sb);
|
||||
|
||||
/* server clears quorum leader flag during shutdown */
|
||||
scoutfs_quorum_destroy(sb);
|
||||
|
||||
scoutfs_block_destroy(sb);
|
||||
scoutfs_destroy_triggers(sb);
|
||||
scoutfs_fence_destroy(sb);
|
||||
scoutfs_options_destroy(sb);
|
||||
scoutfs_sysfs_destroy_attrs(sb, &sbi->mopts_ssa);
|
||||
debugfs_remove(sbi->debug_root);
|
||||
scoutfs_destroy_counters(sb);
|
||||
scoutfs_destroy_sysfs(sb);
|
||||
scoutfs_metadev_close(sb);
|
||||
|
||||
kfree(sbi->opts.metadev_path);
|
||||
kfree(sbi);
|
||||
|
||||
sb->s_fs_info = NULL;
|
||||
}
|
||||
|
||||
/*
|
||||
* Record that we're performing a forced unmount. As put_super drives
|
||||
* destruction of the filesystem we won't issue more network or storage
|
||||
* operations because we assume that they'll hang. Pending operations
|
||||
* can return errors when it's possible to do so. We may be racing with
|
||||
* pending operations which can't be canceled.
|
||||
*/
|
||||
static void scoutfs_umount_begin(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
|
||||
scoutfs_warn(sb, "forcing unmount, can return errors and lose unsynced data");
|
||||
sbi->forced_unmount = true;
|
||||
|
||||
scoutfs_client_net_shutdown(sb);
|
||||
}
|
||||
|
||||
static const struct super_operations scoutfs_super_ops = {
|
||||
.alloc_inode = scoutfs_alloc_inode,
|
||||
.drop_inode = scoutfs_drop_inode,
|
||||
@@ -243,9 +288,8 @@ static const struct super_operations scoutfs_super_ops = {
|
||||
.destroy_inode = scoutfs_destroy_inode,
|
||||
.sync_fs = scoutfs_sync_fs,
|
||||
.statfs = scoutfs_statfs,
|
||||
.show_options = scoutfs_options_show,
|
||||
.show_options = scoutfs_show_options,
|
||||
.put_super = scoutfs_put_super,
|
||||
.umount_begin = scoutfs_umount_begin,
|
||||
};
|
||||
|
||||
/*
|
||||
@@ -265,22 +309,6 @@ int scoutfs_write_super(struct super_block *sb,
|
||||
sizeof(struct scoutfs_super_block));
|
||||
}
|
||||
|
||||
static bool small_bdev(struct super_block *sb, char *which, u64 blocks,
|
||||
struct block_device *bdev, int shift)
|
||||
{
|
||||
u64 size = (u64)i_size_read(bdev->bd_inode);
|
||||
u64 count = size >> shift;
|
||||
|
||||
if (blocks > count) {
|
||||
scoutfs_err(sb, "super block records %llu %s blocks, but device %u:%u size %llu only allows %llu blocks",
|
||||
blocks, which, MAJOR(bdev->bd_dev), MINOR(bdev->bd_dev), size, count);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
* Read super, specifying bdev.
|
||||
*/
|
||||
@@ -288,9 +316,9 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
|
||||
struct block_device *bdev,
|
||||
struct scoutfs_super_block *super_res)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct scoutfs_super_block *super;
|
||||
__le32 calc;
|
||||
u64 blkno;
|
||||
int ret;
|
||||
|
||||
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
|
||||
@@ -323,33 +351,59 @@ static int scoutfs_read_super_from_bdev(struct super_block *sb,
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (le64_to_cpu(super->fmt_vers) < SCOUTFS_FORMAT_VERSION_MIN ||
|
||||
le64_to_cpu(super->fmt_vers) > SCOUTFS_FORMAT_VERSION_MAX) {
|
||||
scoutfs_err(sb, "super block has format version %llu outside of supported version range %u-%u",
|
||||
le64_to_cpu(super->fmt_vers), SCOUTFS_FORMAT_VERSION_MIN,
|
||||
SCOUTFS_FORMAT_VERSION_MAX);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/*
|
||||
* fill_supers checks the fmt_vers in both supers and then decides to use it.
|
||||
* From then on we verify that the supers we read have that version.
|
||||
*/
|
||||
if (sbi->fmt_vers != 0 && le64_to_cpu(super->fmt_vers) != sbi->fmt_vers) {
|
||||
scoutfs_err(sb, "super block has format version %llu than %llu read at mount",
|
||||
le64_to_cpu(super->fmt_vers), sbi->fmt_vers);
|
||||
if (super->format_hash != cpu_to_le64(SCOUTFS_FORMAT_HASH)) {
|
||||
scoutfs_err(sb, "super block has invalid format hash 0x%llx, expected 0x%llx",
|
||||
le64_to_cpu(super->format_hash),
|
||||
SCOUTFS_FORMAT_HASH);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* XXX do we want more rigorous invalid super checking? */
|
||||
|
||||
if (small_bdev(sb, "metadata", le64_to_cpu(super->total_meta_blocks), sbi->meta_bdev,
|
||||
SCOUTFS_BLOCK_LG_SHIFT) ||
|
||||
small_bdev(sb, "data", le64_to_cpu(super->total_data_blocks), sb->s_bdev,
|
||||
SCOUTFS_BLOCK_SM_SHIFT)) {
|
||||
if (super->quorum_count == 0 ||
|
||||
super->quorum_count > SCOUTFS_QUORUM_MAX_COUNT) {
|
||||
scoutfs_err(sb, "super block has invalid quorum count %u, must be > 0 and <= %u",
|
||||
super->quorum_count, SCOUTFS_QUORUM_MAX_COUNT);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
blkno = (SCOUTFS_QUORUM_BLKNO + SCOUTFS_QUORUM_BLOCKS) >>
|
||||
SCOUTFS_BLOCK_SM_LG_SHIFT;
|
||||
if (le64_to_cpu(super->first_meta_blkno) < blkno) {
|
||||
scoutfs_err(sb, "super block first meta blkno %llu is within quorum blocks",
|
||||
le64_to_cpu(super->first_meta_blkno));
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (le64_to_cpu(super->first_meta_blkno) >
|
||||
le64_to_cpu(super->last_meta_blkno)) {
|
||||
scoutfs_err(sb, "super block first meta blkno %llu is greater than last meta blkno %llu",
|
||||
le64_to_cpu(super->first_meta_blkno),
|
||||
le64_to_cpu(super->last_meta_blkno));
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (le64_to_cpu(super->first_data_blkno) >
|
||||
le64_to_cpu(super->last_data_blkno)) {
|
||||
scoutfs_err(sb, "super block first data blkno %llu is greater than last data blkno %llu",
|
||||
le64_to_cpu(super->first_data_blkno),
|
||||
le64_to_cpu(super->last_data_blkno));
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
blkno = (i_size_read(sb->s_bdev->bd_inode) >>
|
||||
SCOUTFS_BLOCK_SM_SHIFT) - 1;
|
||||
if (le64_to_cpu(super->last_data_blkno) > blkno) {
|
||||
scoutfs_err(sb, "super block last data blkno %llu is outsite device size last blkno %llu",
|
||||
le64_to_cpu(super->last_data_blkno), blkno);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
out:
|
||||
@@ -455,14 +509,7 @@ static int scoutfs_read_supers(struct super_block *sb)
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (le64_to_cpu(meta_super->fmt_vers) != le64_to_cpu(data_super->fmt_vers)) {
|
||||
scoutfs_err(sb, "meta device format version %llu != data device format version %llu",
|
||||
le64_to_cpu(meta_super->fmt_vers), le64_to_cpu(data_super->fmt_vers));
|
||||
goto out;
|
||||
}
|
||||
|
||||
sbi->fsid = le64_to_cpu(meta_super->hdr.fsid);
|
||||
sbi->fmt_vers = le64_to_cpu(meta_super->fmt_vers);
|
||||
sbi->super = *meta_super;
|
||||
out:
|
||||
kfree(meta_super);
|
||||
kfree(data_super);
|
||||
@@ -471,9 +518,9 @@ out:
|
||||
|
||||
static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
|
||||
{
|
||||
struct scoutfs_mount_options opts;
|
||||
struct block_device *meta_bdev;
|
||||
struct scoutfs_sb_info *sbi;
|
||||
struct mount_options opts;
|
||||
struct block_device *meta_bdev;
|
||||
struct inode *inode;
|
||||
int ret;
|
||||
|
||||
@@ -482,10 +529,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
|
||||
sb->s_magic = SCOUTFS_SUPER_MAGIC;
|
||||
sb->s_maxbytes = MAX_LFS_FILESIZE;
|
||||
sb->s_op = &scoutfs_super_ops;
|
||||
sb->s_d_op = &scoutfs_dentry_ops;
|
||||
sb->s_export_op = &scoutfs_export_ops;
|
||||
sb->s_xattr = scoutfs_xattr_handlers;
|
||||
sb->s_flags |= MS_I_VERSION | MS_POSIXACL;
|
||||
|
||||
/* btree blocks use long lived bh->b_data refs */
|
||||
mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
|
||||
@@ -498,17 +542,22 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
|
||||
|
||||
ret = assign_random_id(sbi);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
return ret;
|
||||
|
||||
spin_lock_init(&sbi->next_ino_lock);
|
||||
init_waitqueue_head(&sbi->trans_hold_wq);
|
||||
spin_lock_init(&sbi->data_wait_root.lock);
|
||||
sbi->data_wait_root.root = RB_ROOT;
|
||||
spin_lock_init(&sbi->trans_write_lock);
|
||||
INIT_DELAYED_WORK(&sbi->trans_write_work, scoutfs_trans_write_func);
|
||||
init_waitqueue_head(&sbi->trans_write_wq);
|
||||
scoutfs_sysfs_init_attrs(sb, &sbi->mopts_ssa);
|
||||
|
||||
/* parse options early for use during setup */
|
||||
ret = scoutfs_options_early_setup(sb, data);
|
||||
if (ret < 0)
|
||||
ret = scoutfs_parse_options(sb, data, &opts);
|
||||
if (ret)
|
||||
goto out;
|
||||
scoutfs_options_read(sb, &opts);
|
||||
|
||||
sbi->opts = opts;
|
||||
|
||||
ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SM_SIZE);
|
||||
if (ret != SCOUTFS_BLOCK_SM_SIZE) {
|
||||
@@ -517,7 +566,9 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
|
||||
goto out;
|
||||
}
|
||||
|
||||
meta_bdev = blkdev_get_by_path(opts.metadev_path, SCOUTFS_META_BDEV_MODE, sb);
|
||||
meta_bdev =
|
||||
blkdev_get_by_path(sbi->opts.metadev_path,
|
||||
SCOUTFS_META_BDEV_MODE, sb);
|
||||
if (IS_ERR(meta_bdev)) {
|
||||
scoutfs_err(sb, "could not open metadev: error %ld",
|
||||
PTR_ERR(meta_bdev));
|
||||
@@ -537,32 +588,30 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
|
||||
scoutfs_setup_sysfs(sb) ?:
|
||||
scoutfs_setup_counters(sb) ?:
|
||||
scoutfs_options_setup(sb) ?:
|
||||
scoutfs_sysfs_create_attrs(sb, &sbi->mopts_ssa,
|
||||
mount_options_attrs, "mount_options") ?:
|
||||
scoutfs_setup_triggers(sb) ?:
|
||||
scoutfs_fence_setup(sb) ?:
|
||||
scoutfs_block_setup(sb) ?:
|
||||
scoutfs_forest_setup(sb) ?:
|
||||
scoutfs_item_setup(sb) ?:
|
||||
scoutfs_inode_setup(sb) ?:
|
||||
scoutfs_data_setup(sb) ?:
|
||||
scoutfs_setup_trans(sb) ?:
|
||||
scoutfs_omap_setup(sb) ?:
|
||||
scoutfs_lock_setup(sb) ?:
|
||||
scoutfs_net_setup(sb) ?:
|
||||
scoutfs_recov_setup(sb) ?:
|
||||
scoutfs_server_setup(sb) ?:
|
||||
scoutfs_quorum_setup(sb) ?:
|
||||
scoutfs_server_setup(sb) ?:
|
||||
scoutfs_client_setup(sb) ?:
|
||||
scoutfs_volopt_setup(sb) ?:
|
||||
scoutfs_lock_rid(sb, SCOUTFS_LOCK_WRITE, 0, sbi->rid,
|
||||
&sbi->rid_lock) ?:
|
||||
scoutfs_trans_get_log_trees(sb) ?:
|
||||
scoutfs_srch_setup(sb);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
/* this interruptible iget lets hung mount be aborted with ctl-c */
|
||||
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO, SCOUTFS_LKF_INTERRUPTIBLE, 0);
|
||||
inode = scoutfs_iget(sb, SCOUTFS_ROOT_INO);
|
||||
if (IS_ERR(inode)) {
|
||||
ret = PTR_ERR(inode);
|
||||
if (ret == -ERESTARTSYS)
|
||||
ret = -EINTR;
|
||||
goto out;
|
||||
}
|
||||
|
||||
@@ -572,15 +621,12 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* send requests once iget progress shows we had a server */
|
||||
ret = scoutfs_trans_get_log_trees(sb);
|
||||
ret = scoutfs_client_advance_seq(sb, &sbi->trans_seq);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
/* start up background services that use everything else */
|
||||
scoutfs_inode_start(sb);
|
||||
scoutfs_forest_start(sb);
|
||||
scoutfs_trans_restart_sync_deadline(sb);
|
||||
// scoutfs_scan_orphans(sb);
|
||||
ret = 0;
|
||||
out:
|
||||
/* on error, generic_shutdown_super calls put_super if s_root */
|
||||
@@ -601,18 +647,7 @@ static struct dentry *scoutfs_mount(struct file_system_type *fs_type, int flags,
|
||||
*/
|
||||
static void scoutfs_kill_sb(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
|
||||
if (sbi) {
|
||||
sbi->unmounting = true;
|
||||
smp_wmb();
|
||||
}
|
||||
|
||||
if (SCOUTFS_HAS_SBI(sb)) {
|
||||
scoutfs_options_stop(sb);
|
||||
scoutfs_inode_orphan_stop(sb);
|
||||
scoutfs_lock_unmount_begin(sb);
|
||||
}
|
||||
trace_scoutfs_kill_sb(sb);
|
||||
|
||||
kill_block_super(sb);
|
||||
}
|
||||
@@ -630,6 +665,7 @@ MODULE_ALIAS_FS("scoutfs");
|
||||
static void teardown_module(void)
|
||||
{
|
||||
debugfs_remove(scoutfs_debugfs_root);
|
||||
scoutfs_dir_exit();
|
||||
scoutfs_inode_exit();
|
||||
scoutfs_sysfs_exit();
|
||||
}
|
||||
@@ -644,15 +680,7 @@ static int __init scoutfs_module_init(void)
|
||||
*/
|
||||
__asm__ __volatile__ (
|
||||
".section .note.git_describe,\"a\"\n"
|
||||
".ascii \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
|
||||
".previous\n");
|
||||
__asm__ __volatile__ (
|
||||
".section .note.scoutfs_format_version_min,\"a\"\n"
|
||||
".ascii \""SCOUTFS_FORMAT_VERSION_MIN_STR"\\n\"\n"
|
||||
".previous\n");
|
||||
__asm__ __volatile__ (
|
||||
".section .note.scoutfs_format_version_max,\"a\"\n"
|
||||
".ascii \""SCOUTFS_FORMAT_VERSION_MAX_STR"\\n\"\n"
|
||||
".string \""SCOUTFS_GIT_DESCRIBE"\\n\"\n"
|
||||
".previous\n");
|
||||
|
||||
scoutfs_init_counters();
|
||||
@@ -667,6 +695,7 @@ static int __init scoutfs_module_init(void)
|
||||
goto out;
|
||||
}
|
||||
ret = scoutfs_inode_init() ?:
|
||||
scoutfs_dir_init() ?:
|
||||
register_filesystem(&scoutfs_fs_type);
|
||||
out:
|
||||
if (ret)
|
||||
@@ -685,5 +714,3 @@ module_exit(scoutfs_module_exit)
|
||||
MODULE_AUTHOR("Zach Brown <zab@versity.com>");
|
||||
MODULE_LICENSE("GPL");
|
||||
MODULE_INFO(git_describe, SCOUTFS_GIT_DESCRIBE);
|
||||
MODULE_INFO(scoutfs_format_version_min, SCOUTFS_FORMAT_VERSION_MIN_STR);
|
||||
MODULE_INFO(scoutfs_format_version_max, SCOUTFS_FORMAT_VERSION_MAX_STR);
|
||||
|
||||
@@ -26,24 +26,20 @@ struct net_info;
|
||||
struct block_info;
|
||||
struct forest_info;
|
||||
struct srch_info;
|
||||
struct recov_info;
|
||||
struct omap_info;
|
||||
struct volopt_info;
|
||||
struct fence_info;
|
||||
|
||||
struct scoutfs_sb_info {
|
||||
struct super_block *sb;
|
||||
|
||||
/* assigned once at the start of each mount, read-only */
|
||||
u64 fsid;
|
||||
u64 rid;
|
||||
u64 fmt_vers;
|
||||
struct scoutfs_lock *rid_lock;
|
||||
|
||||
struct scoutfs_super_block super;
|
||||
|
||||
struct block_device *meta_bdev;
|
||||
|
||||
spinlock_t next_ino_lock;
|
||||
|
||||
struct options_info *options_info;
|
||||
struct data_info *data_info;
|
||||
struct inode_sb_info *inode_sb_info;
|
||||
struct btree_info *btree_info;
|
||||
@@ -52,32 +48,40 @@ struct scoutfs_sb_info {
|
||||
struct block_info *block_info;
|
||||
struct forest_info *forest_info;
|
||||
struct srch_info *srch_info;
|
||||
struct omap_info *omap_info;
|
||||
struct volopt_info *volopt_info;
|
||||
struct item_cache_info *item_cache_info;
|
||||
struct fence_info *fence_info;
|
||||
|
||||
wait_queue_head_t trans_hold_wq;
|
||||
struct task_struct *trans_task;
|
||||
|
||||
/* tracks tasks waiting for data extents */
|
||||
struct scoutfs_data_wait_root data_wait_root;
|
||||
|
||||
/* set as transaction opens with trans holders excluded */
|
||||
spinlock_t trans_write_lock;
|
||||
u64 trans_write_count;
|
||||
u64 trans_seq;
|
||||
int trans_write_ret;
|
||||
struct delayed_work trans_write_work;
|
||||
wait_queue_head_t trans_write_wq;
|
||||
struct workqueue_struct *trans_write_workq;
|
||||
bool trans_deadline_expired;
|
||||
|
||||
struct trans_info *trans_info;
|
||||
struct lock_info *lock_info;
|
||||
struct lock_server_info *lock_server_info;
|
||||
struct client_info *client_info;
|
||||
struct server_info *server_info;
|
||||
struct recov_info *recov_info;
|
||||
struct sysfs_info *sfsinfo;
|
||||
|
||||
struct scoutfs_counters *counters;
|
||||
struct scoutfs_triggers *triggers;
|
||||
|
||||
struct mount_options opts;
|
||||
struct options_sb_info *options;
|
||||
struct scoutfs_sysfs_attrs mopts_ssa;
|
||||
|
||||
struct dentry *debug_root;
|
||||
|
||||
bool forced_unmount;
|
||||
bool unmounting;
|
||||
bool shutdown;
|
||||
|
||||
unsigned long corruption_messages_once[SC_NR_LONGS];
|
||||
};
|
||||
@@ -99,26 +103,6 @@ static inline bool SCOUTFS_IS_META_BDEV(struct scoutfs_super_block *super_block)
|
||||
|
||||
#define SCOUTFS_META_BDEV_MODE (FMODE_READ | FMODE_WRITE | FMODE_EXCL)
|
||||
|
||||
static inline bool scoutfs_forcing_unmount(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
|
||||
return sbi->forced_unmount;
|
||||
}
|
||||
|
||||
/*
|
||||
* True if we're shutting down the system and can be used as a coarse
|
||||
* indicator that we can avoid doing some work that no longer makes
|
||||
* sense.
|
||||
*/
|
||||
static inline bool scoutfs_unmounting(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
|
||||
smp_rmb();
|
||||
return !sbi || sbi->unmounting;
|
||||
}
|
||||
|
||||
/*
|
||||
* A small string embedded in messages that's used to identify a
|
||||
* specific mount. It's the three most significant bytes of the fsid
|
||||
@@ -134,14 +118,14 @@ static inline bool scoutfs_unmounting(struct super_block *sb)
|
||||
(int)(le64_to_cpu(fsid) >> SCSB_SHIFT), \
|
||||
(int)(le64_to_cpu(rid) >> SCSB_SHIFT)
|
||||
#define SCSB_ARGS(sb) \
|
||||
(int)(SCOUTFS_SB(sb)->fsid >> SCSB_SHIFT), \
|
||||
(int)(le64_to_cpu(SCOUTFS_SB(sb)->super.hdr.fsid) >> SCSB_SHIFT), \
|
||||
(int)(SCOUTFS_SB(sb)->rid >> SCSB_SHIFT)
|
||||
#define SCSB_TRACE_FIELDS \
|
||||
__field(__u64, fsid) \
|
||||
__field(__u64, rid)
|
||||
#define SCSB_TRACE_ASSIGN(sb) \
|
||||
__entry->fsid = SCOUTFS_HAS_SBI(sb) ? \
|
||||
SCOUTFS_SB(sb)->fsid : 0; \
|
||||
le64_to_cpu(SCOUTFS_SB(sb)->super.hdr.fsid) : 0;\
|
||||
__entry->rid = SCOUTFS_HAS_SBI(sb) ? \
|
||||
SCOUTFS_SB(sb)->rid : 0;
|
||||
#define SCSB_TRACE_ARGS \
|
||||
@@ -156,4 +140,6 @@ int scoutfs_write_super(struct super_block *sb,
|
||||
/* to keep this out of the ioctl.h public interface definition */
|
||||
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
|
||||
|
||||
__le64 scoutfs_clock_sync_id(void);
|
||||
|
||||
#endif
|
||||
|
||||
@@ -37,32 +37,14 @@ struct attr_funcs {
|
||||
#define ATTR_FUNCS_RO(_name) \
|
||||
static struct attr_funcs _name##_attr_funcs = __ATTR_RO(_name)
|
||||
|
||||
static ssize_t data_device_maj_min_show(struct kobject *kobj, struct attribute *attr, char *buf)
|
||||
{
|
||||
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%u:%u\n",
|
||||
MAJOR(sb->s_bdev->bd_dev), MINOR(sb->s_bdev->bd_dev));
|
||||
}
|
||||
ATTR_FUNCS_RO(data_device_maj_min);
|
||||
|
||||
static ssize_t format_version_show(struct kobject *kobj, struct attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%llu\n", sbi->fmt_vers);
|
||||
}
|
||||
ATTR_FUNCS_RO(format_version);
|
||||
|
||||
static ssize_t fsid_show(struct kobject *kobj, struct attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%016llx\n", sbi->fsid);
|
||||
return snprintf(buf, PAGE_SIZE, "%016llx\n",
|
||||
le64_to_cpu(super->hdr.fsid));
|
||||
}
|
||||
ATTR_FUNCS_RO(fsid);
|
||||
|
||||
@@ -109,8 +91,6 @@ static ssize_t attr_funcs_show(struct kobject *kobj, struct attribute *attr,
|
||||
|
||||
|
||||
static struct attribute *sb_id_attrs[] = {
|
||||
&data_device_maj_min_attr_funcs.attr,
|
||||
&format_version_attr_funcs.attr,
|
||||
&fsid_attr_funcs.attr,
|
||||
&rid_attr_funcs.attr,
|
||||
NULL,
|
||||
@@ -151,10 +131,9 @@ void scoutfs_sysfs_init_attrs(struct super_block *sb,
|
||||
* If this returns success then the file will be visible and show can
|
||||
* be called until unmount.
|
||||
*/
|
||||
int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
|
||||
struct kobject *parent,
|
||||
struct scoutfs_sysfs_attrs *ssa,
|
||||
struct attribute **attrs, char *fmt, ...)
|
||||
int scoutfs_sysfs_create_attrs(struct super_block *sb,
|
||||
struct scoutfs_sysfs_attrs *ssa,
|
||||
struct attribute **attrs, char *fmt, ...)
|
||||
{
|
||||
va_list args;
|
||||
size_t name_len;
|
||||
@@ -195,8 +174,8 @@ int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = kobject_init_and_add(&ssa->kobj, &ssa->ktype, parent,
|
||||
"%s", ssa->name);
|
||||
ret = kobject_init_and_add(&ssa->kobj, &ssa->ktype,
|
||||
scoutfs_sysfs_sb_dir(sb), "%s", ssa->name);
|
||||
out:
|
||||
if (ret) {
|
||||
kfree(ssa->name);
|
||||
@@ -267,7 +246,7 @@ int __init scoutfs_sysfs_init(void)
|
||||
return 0;
|
||||
}
|
||||
|
||||
void scoutfs_sysfs_exit(void)
|
||||
void __exit scoutfs_sysfs_exit(void)
|
||||
{
|
||||
if (scoutfs_kset)
|
||||
kset_unregister(scoutfs_kset);
|
||||
|
||||
@@ -10,8 +10,6 @@
|
||||
|
||||
#define SCOUTFS_ATTR_RO(_name) \
|
||||
static struct kobj_attribute scoutfs_attr_##_name = __ATTR_RO(_name)
|
||||
#define SCOUTFS_ATTR_RW(_name) \
|
||||
static struct kobj_attribute scoutfs_attr_##_name = __ATTR_RW(_name)
|
||||
|
||||
#define SCOUTFS_ATTR_PTR(_name) \
|
||||
&scoutfs_attr_##_name.attr
|
||||
@@ -36,14 +34,9 @@ struct scoutfs_sysfs_attrs {
|
||||
|
||||
void scoutfs_sysfs_init_attrs(struct super_block *sb,
|
||||
struct scoutfs_sysfs_attrs *ssa);
|
||||
int scoutfs_sysfs_create_attrs_parent(struct super_block *sb,
|
||||
struct kobject *parent,
|
||||
struct scoutfs_sysfs_attrs *ssa,
|
||||
struct attribute **attrs, char *fmt, ...);
|
||||
#define scoutfs_sysfs_create_attrs(sb, ssa, attrs, fmt, args...) \
|
||||
scoutfs_sysfs_create_attrs_parent(sb, scoutfs_sysfs_sb_dir(sb), \
|
||||
ssa, attrs, fmt, ##args)
|
||||
|
||||
int scoutfs_sysfs_create_attrs(struct super_block *sb,
|
||||
struct scoutfs_sysfs_attrs *ssa,
|
||||
struct attribute **attrs, char *fmt, ...);
|
||||
void scoutfs_sysfs_destroy_attrs(struct super_block *sb,
|
||||
struct scoutfs_sysfs_attrs *ssa);
|
||||
|
||||
@@ -53,6 +46,6 @@ int scoutfs_setup_sysfs(struct super_block *sb);
|
||||
void scoutfs_destroy_sysfs(struct super_block *sb);
|
||||
|
||||
int __init scoutfs_sysfs_init(void);
|
||||
void scoutfs_sysfs_exit(void);
|
||||
void __exit scoutfs_sysfs_exit(void);
|
||||
|
||||
#endif
|
||||
|
||||
kmod/src/trans.c (637 lines)
@@ -17,7 +17,6 @@
#include <linux/atomic.h>
#include <linux/writeback.h>
#include <linux/slab.h>
#include <linux/delay.h>

#include "super.h"
#include "trans.h"
@@ -40,46 +39,51 @@
 * track the relationships between dirty blocks so there's only ever one
 * transaction being built.
 *
 * Committing the current dirty transaction can be triggered by sync, a
 * regular background commit interval, reaching a dirty block threshold,
 * or the transaction running out of its private allocator resources.
 * Once all the current holders release the writing func writes out the
 * dirty blocks while excluding holders until it finishes.
 * The copy of the on-disk super block in the fs sb info has its header
 * sequence advanced so that new dirty blocks inherit this dirty
 * sequence number. It's only advanced once all those dirty blocks are
 * reachable after having first written them all out and then the new
 * super with that seq. It's first incremented at mount.
 *
 * Unfortunately writing holders can nest. We track nested hold callers
 * with the per-task journal_info pointer to avoid deadlocks between
 * holders that might otherwise wait for a pending commit.
 * Unfortunately writers can nest. We don't bother trying to special
 * case holding a transaction that you're already holding because that
 * requires per-task storage. We just let anyone hold transactions
 * regardless of waiters waiting to write, which risks waiters waiting a
 * very long time.
 */

/* sync dirty data at least this often */
#define TRANS_SYNC_DELAY (HZ * 10)

/*
 * XXX move the rest of the super trans_ fields here.
 */
struct trans_info {
        struct super_block *sb;

        atomic_t holders;
        spinlock_t lock;
        unsigned reserved_items;
        unsigned reserved_vals;
        unsigned holders;
        bool writing;

        struct scoutfs_log_trees lt;
        struct scoutfs_alloc alloc;
        struct scoutfs_block_writer wri;

        wait_queue_head_t hold_wq;
        struct task_struct *task;
        spinlock_t write_lock;
        u64 write_count;
        int write_ret;
        struct delayed_work write_work;
        wait_queue_head_t write_wq;
        struct workqueue_struct *write_workq;
        bool deadline_expired;
};

#define DECLARE_TRANS_INFO(sb, name) \
        struct trans_info *name = SCOUTFS_SB(sb)->trans_info

/* avoid the high sign bit out of an abundance of caution */
#define TRANS_HOLDERS_WRITE_FUNC_BIT (1 << 30)
#define TRANS_HOLDERS_COUNT_MASK (TRANS_HOLDERS_WRITE_FUNC_BIT - 1)
static bool drained_holders(struct trans_info *tri)
{
        bool drained;

        spin_lock(&tri->lock);
        tri->writing = true;
        drained = tri->holders == 0;
        spin_unlock(&tri->lock);

        return drained;
}

static int commit_btrees(struct super_block *sb)
{
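The comment above describes the cycle this file implements: holders block the background commit, the write func drains holders before writing, and nested holds are tracked per task. A caller-side sketch of the hold/release pattern, using the scoutfs_hold_trans(sb, allocing) signature that appears in this diff; the function name and the work done under the hold are placeholders:

        static int example_dirty_items(struct super_block *sb)
        {
                int ret;

                /* waits while the write func owns the transaction */
                ret = scoutfs_hold_trans(sb, true);
                if (ret < 0)
                        return ret;     /* can be -ENOSPC when the meta allocator is low */

                /* ... create or dirty items; helpers called here may hold again ... */

                /* the last release wakes a write func waiting for holders to drain */
                scoutfs_release_trans(sb);
                return 0;
        }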
@@ -101,7 +105,6 @@ static int commit_btrees(struct super_block *sb)
|
||||
*/
|
||||
int scoutfs_trans_get_log_trees(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
struct scoutfs_log_trees lt;
|
||||
int ret = 0;
|
||||
@@ -114,11 +117,6 @@ int scoutfs_trans_get_log_trees(struct super_block *sb)
|
||||
|
||||
scoutfs_forest_init_btrees(sb, &tri->alloc, &tri->wri, <);
|
||||
scoutfs_data_init_btrees(sb, &tri->alloc, &tri->wri, <);
|
||||
|
||||
/* first set during mount from 0 to nonzero allows commits */
|
||||
spin_lock(&tri->write_lock);
|
||||
sbi->trans_seq = le64_to_cpu(lt.get_trans_seq);
|
||||
spin_unlock(&tri->write_lock);
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
@@ -130,35 +128,6 @@ bool scoutfs_trans_has_dirty(struct super_block *sb)
|
||||
return scoutfs_block_writer_has_dirty(sb, &tri->wri);
|
||||
}
|
||||
|
||||
/*
|
||||
* This is racing with wait_event conditions, make sure our atomic
|
||||
* stores and waitqueue loads are ordered.
|
||||
*/
|
||||
static void sub_holders_and_wake(struct super_block *sb, int val)
|
||||
{
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
|
||||
atomic_sub(val, &tri->holders);
|
||||
smp_mb(); /* make sure sub is visible before we wake */
|
||||
if (waitqueue_active(&tri->hold_wq))
|
||||
wake_up(&tri->hold_wq);
|
||||
}
|
||||
|
||||
/*
|
||||
* called as a wait_event condition, needs to be careful to not change
|
||||
* task state and is racing with waking paths that sub_return, test, and
|
||||
* wake.
|
||||
*/
|
||||
static bool drained_holders(struct trans_info *tri)
|
||||
{
|
||||
int holders;
|
||||
|
||||
smp_mb(); /* make sure task in wait_event queue before atomic read */
|
||||
holders = atomic_read(&tri->holders) & TRANS_HOLDERS_COUNT_MASK;
|
||||
|
||||
return holders == 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* This work func is responsible for writing out all the dirty blocks
|
||||
* that make up the current dirty transaction. It prevents writers from
|
||||
@@ -169,93 +138,90 @@ static bool drained_holders(struct trans_info *tri)
|
||||
* functions that would try to hold the transaction. We record the task
|
||||
* whose committing the transaction so that holding won't deadlock.
|
||||
*
|
||||
* Once we clear the write func bit in holders then waiting holders can
|
||||
* enter the transaction and continue modifying the transaction. Once
|
||||
* we start writing we consider the transaction done and won't exit,
|
||||
* clearing the write func bit, until get_log_trees has opened the next
|
||||
* transaction. The exception is forced unmount which is allowed to
|
||||
* generate errors and throw away data.
|
||||
* Any dirty block had to have allocated a new blkno which would have
|
||||
* created dirty allocator metadata blocks. We can avoid writing
|
||||
* entirely if we don't have any dirty metadata blocks. This is
|
||||
* important because we don't try to serialize this work during
|
||||
* unmount.. we can execute as the vfs is shutting down.. we need to
|
||||
* decide that nothing is dirty without calling the vfs at all.
|
||||
*
|
||||
* This means that the only way fsync can return an error is if we're in
|
||||
* forced unmount.
|
||||
* We first try to sync the dirty inodes and write their dirty data blocks,
|
||||
* then we write all our dirty metadata blocks, and only when those succeed
|
||||
* do we write the new super that references all of these newly written blocks.
|
||||
*
|
||||
* If there are write errors then blocks are kept dirty in memory and will
|
||||
* be written again at the next sync.
|
||||
*/
|
||||
void scoutfs_trans_write_func(struct work_struct *work)
|
||||
{
|
||||
struct trans_info *tri = container_of(work, struct trans_info, write_work.work);
|
||||
struct super_block *sb = tri->sb;
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
bool retrying = false;
|
||||
struct scoutfs_sb_info *sbi = container_of(work, struct scoutfs_sb_info,
|
||||
trans_write_work.work);
|
||||
struct super_block *sb = sbi->sb;
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
u64 trans_seq = sbi->trans_seq;
|
||||
char *s = NULL;
|
||||
int ret = 0;
|
||||
|
||||
tri->task = current;
|
||||
sbi->trans_task = current;
|
||||
|
||||
/* mark that we're writing so holders wait for us to finish and clear our bit */
|
||||
atomic_add(TRANS_HOLDERS_WRITE_FUNC_BIT, &tri->holders);
|
||||
wait_event(sbi->trans_hold_wq, drained_holders(tri));
|
||||
|
||||
wait_event(tri->hold_wq, drained_holders(tri));
|
||||
trace_scoutfs_trans_write_func(sb,
|
||||
scoutfs_block_writer_dirty_bytes(sb, &tri->wri));
|
||||
|
||||
/* mount hasn't opened first transaction yet, still complete sync */
|
||||
if (sbi->trans_seq == 0) {
|
||||
ret = 0;
|
||||
if (!scoutfs_block_writer_has_dirty(sb, &tri->wri) &&
|
||||
!scoutfs_item_dirty_pages(sb)) {
|
||||
if (sbi->trans_deadline_expired) {
|
||||
/*
|
||||
* If we're not writing data then we only advance the
|
||||
* seq at the sync deadline interval. This keeps idle
|
||||
* mounts from pinning a seq and stopping readers of the
|
||||
* seq indices but doesn't send a message for every sync
|
||||
* syscall.
|
||||
*/
|
||||
ret = scoutfs_client_advance_seq(sb, &trans_seq);
|
||||
if (ret < 0)
|
||||
s = "clean advance seq";
|
||||
}
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (scoutfs_forcing_unmount(sb)) {
|
||||
ret = -EIO;
|
||||
goto out;
|
||||
}
|
||||
|
||||
trace_scoutfs_trans_write_func(sb, scoutfs_block_writer_dirty_bytes(sb, &tri->wri),
|
||||
scoutfs_item_dirty_pages(sb));
|
||||
|
||||
if (tri->deadline_expired)
|
||||
if (sbi->trans_deadline_expired)
|
||||
scoutfs_inc_counter(sb, trans_commit_timer);
|
||||
|
||||
scoutfs_inc_counter(sb, trans_commit_written);
|
||||
|
||||
do {
|
||||
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
|
||||
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
|
||||
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
|
||||
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb, &tri->alloc,
|
||||
&tri->wri)) ?:
|
||||
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
|
||||
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
|
||||
(s = "commit log trees", commit_btrees(sb)) ?:
|
||||
scoutfs_item_write_done(sb) ?:
|
||||
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
|
||||
if (ret < 0) {
|
||||
if (!retrying) {
|
||||
scoutfs_warn(sb, "critical transaction commit failure: %s = %d, retrying",
|
||||
s, ret);
|
||||
retrying = true;
|
||||
}
|
||||
|
||||
if (scoutfs_forcing_unmount(sb)) {
|
||||
ret = -EIO;
|
||||
break;
|
||||
}
|
||||
|
||||
msleep(2 * MSEC_PER_SEC);
|
||||
|
||||
} else if (retrying) {
|
||||
scoutfs_info(sb, "retried transaction commit succeeded");
|
||||
}
|
||||
|
||||
} while (ret < 0);
|
||||
|
||||
/* XXX this all needs serious work for dealing with errors */
|
||||
ret = (s = "data submit", scoutfs_inode_walk_writeback(sb, true)) ?:
|
||||
(s = "item dirty", scoutfs_item_write_dirty(sb)) ?:
|
||||
(s = "data prepare", scoutfs_data_prepare_commit(sb)) ?:
|
||||
(s = "alloc prepare", scoutfs_alloc_prepare_commit(sb,
|
||||
&tri->alloc, &tri->wri)) ?:
|
||||
(s = "meta write", scoutfs_block_writer_write(sb, &tri->wri)) ?:
|
||||
(s = "data wait", scoutfs_inode_walk_writeback(sb, false)) ?:
|
||||
(s = "commit log trees", commit_btrees(sb)) ?:
|
||||
scoutfs_item_write_done(sb) ?:
|
||||
(s = "advance seq", scoutfs_client_advance_seq(sb, &trans_seq)) ?:
|
||||
(s = "get log trees", scoutfs_trans_get_log_trees(sb));
|
||||
out:
|
||||
spin_lock(&tri->write_lock);
|
||||
tri->write_count++;
|
||||
tri->write_ret = ret;
|
||||
spin_unlock(&tri->write_lock);
|
||||
wake_up(&tri->write_wq);
|
||||
if (ret < 0)
|
||||
scoutfs_err(sb, "critical transaction commit failure: %s, %d",
|
||||
s, ret);
|
||||
|
||||
/* we're done, wake waiting holders */
|
||||
sub_holders_and_wake(sb, TRANS_HOLDERS_WRITE_FUNC_BIT);
|
||||
spin_lock(&sbi->trans_write_lock);
|
||||
sbi->trans_write_count++;
|
||||
sbi->trans_write_ret = ret;
|
||||
sbi->trans_seq = trans_seq;
|
||||
spin_unlock(&sbi->trans_write_lock);
|
||||
wake_up(&sbi->trans_write_wq);
|
||||
|
||||
tri->task = NULL;
|
||||
spin_lock(&tri->lock);
|
||||
tri->writing = false;
|
||||
spin_unlock(&tri->lock);
|
||||
|
||||
wake_up(&sbi->trans_hold_wq);
|
||||
|
||||
sbi->trans_task = NULL;
|
||||
|
||||
scoutfs_trans_restart_sync_deadline(sb);
|
||||
}
|
||||
@@ -266,17 +232,17 @@ struct write_attempt {
|
||||
};
|
||||
|
||||
/* this is called as a wait_event() condition so it can't change task state */
|
||||
static int write_attempted(struct super_block *sb, struct write_attempt *attempt)
|
||||
static int write_attempted(struct scoutfs_sb_info *sbi,
|
||||
struct write_attempt *attempt)
|
||||
{
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
int done = 1;
|
||||
|
||||
spin_lock(&tri->write_lock);
|
||||
if (tri->write_count > attempt->count)
|
||||
attempt->ret = tri->write_ret;
|
||||
spin_lock(&sbi->trans_write_lock);
|
||||
if (sbi->trans_write_count > attempt->count)
|
||||
attempt->ret = sbi->trans_write_ret;
|
||||
else
|
||||
done = 0;
|
||||
spin_unlock(&tri->write_lock);
|
||||
spin_unlock(&sbi->trans_write_lock);
|
||||
|
||||
return done;
|
||||
}
|
||||
@@ -286,12 +252,10 @@ static int write_attempted(struct super_block *sb, struct write_attempt *attempt
|
||||
* We always have delayed sync work pending but the caller wants it
|
||||
* to execute immediately.
|
||||
*/
|
||||
static void queue_trans_work(struct super_block *sb)
|
||||
static void queue_trans_work(struct scoutfs_sb_info *sbi)
|
||||
{
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
|
||||
tri->deadline_expired = false;
|
||||
mod_delayed_work(tri->write_workq, &tri->write_work, 0);
|
||||
sbi->trans_deadline_expired = false;
|
||||
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work, 0);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -304,24 +268,26 @@ static void queue_trans_work(struct super_block *sb)
|
||||
*/
|
||||
int scoutfs_trans_sync(struct super_block *sb, int wait)
|
||||
{
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
struct write_attempt attempt = { .ret = 0 };
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct write_attempt attempt;
|
||||
int ret;
|
||||
|
||||
|
||||
if (!wait) {
|
||||
queue_trans_work(sb);
|
||||
queue_trans_work(sbi);
|
||||
return 0;
|
||||
}
|
||||
|
||||
spin_lock(&tri->write_lock);
|
||||
attempt.count = tri->write_count;
|
||||
spin_unlock(&tri->write_lock);
|
||||
spin_lock(&sbi->trans_write_lock);
|
||||
attempt.count = sbi->trans_write_count;
|
||||
spin_unlock(&sbi->trans_write_lock);
|
||||
|
||||
queue_trans_work(sb);
|
||||
queue_trans_work(sbi);
|
||||
|
||||
wait_event(tri->write_wq, write_attempted(sb, &attempt));
|
||||
ret = attempt.ret;
|
||||
ret = wait_event_interruptible(sbi->trans_write_wq,
|
||||
write_attempted(sbi, &attempt));
|
||||
if (ret == 0)
|
||||
ret = attempt.ret;
|
||||
|
||||
return ret;
|
||||
}
|
||||
@@ -337,91 +303,72 @@ int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
|
||||
|
||||
void scoutfs_trans_restart_sync_deadline(struct super_block *sb)
|
||||
{
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
|
||||
tri->deadline_expired = true;
|
||||
mod_delayed_work(tri->write_workq, &tri->write_work,
|
||||
sbi->trans_deadline_expired = true;
|
||||
mod_delayed_work(sbi->trans_write_workq, &sbi->trans_write_work,
|
||||
TRANS_SYNC_DELAY);
|
||||
}
|
||||
|
||||
/*
 * We store nested holders in the lower bits of journal_info. We use
 * some higher bits as a magic value to detect if something goes
 * horribly wrong and it gets clobbered.
 * Each thread reserves space in the segment for their dirty items while
 * they hold the transaction. This is calculated before the first
 * transaction hold is acquired. It includes all the potential nested
 * item manipulation that could happen with the transaction held.
 * Including nested holds avoids having to deal with writing out partial
 * transactions while a caller still holds the transaction.
 */
#define TRANS_JI_MAGIC 0xd5700000
#define TRANS_JI_MAGIC_MASK 0xfff00000
#define TRANS_JI_COUNT_MASK 0x000fffff

/* returns true if a caller already had a holder counted in journal_info */
static bool inc_journal_info_holders(void)
{
        unsigned long holders = (unsigned long)current->journal_info;

        WARN_ON_ONCE(holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC));

        if (holders == 0)
                holders = TRANS_JI_MAGIC;
        holders++;

        current->journal_info = (void *)holders;
        return (holders > (TRANS_JI_MAGIC | 1));
}

static void dec_journal_info_holders(void)
{
        unsigned long holders = (unsigned long)current->journal_info;

        WARN_ON_ONCE(holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC));
        WARN_ON_ONCE((holders & TRANS_JI_COUNT_MASK) == 0);

        holders--;
        if (holders == TRANS_JI_MAGIC)
                holders = 0;

        current->journal_info = (void *)holders;
}
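The TRANS_JI_* values above pack a magic tag and a nesting count into the task's journal_info word so the inc/dec helpers can detect clobbering. A tiny decoding helper, purely illustrative and not part of the driver, assuming the constants shown above:

        /* journal_info layout: 0xd57NNNNN, magic in the top 12 bits, nesting count below */
        static inline unsigned long trans_ji_nested_count(unsigned long word)
        {
                if (word == 0 || (word & TRANS_JI_MAGIC_MASK) != TRANS_JI_MAGIC)
                        return 0;       /* no hold recorded, or the word was clobbered */
                return word & TRANS_JI_COUNT_MASK;
        }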
#define SCOUTFS_RESERVATION_MAGIC 0xd57cd13b
|
||||
struct scoutfs_reservation {
|
||||
unsigned magic;
|
||||
unsigned holders;
|
||||
struct scoutfs_item_count reserved;
|
||||
struct scoutfs_item_count actual;
|
||||
};
|
||||
|
||||
/*
|
||||
* This is called as the wait_event condition for holding a transaction.
|
||||
* Increment the holder count unless the writer is present. We return
|
||||
* false to wait until the writer finishes and wakes us.
|
||||
* Try to hold the transaction. If a caller already holds the trans then
|
||||
* we piggy back on their hold. We wait if the writer is trying to
|
||||
* write out the transation. And if our items won't fit then we kick off
|
||||
* a write.
|
||||
*
|
||||
* This can be racing with itself while there's no waiters. We retry
|
||||
* the cmpxchg instead of returning and waiting.
|
||||
* This is called as a condition for wait_event. It is very limited in
|
||||
* the locking (blocking) it can do because the caller has set the task
|
||||
* state before testing the condition safely race with waking after
|
||||
* setting the condition. Our checking the amount of dirty metadata
|
||||
* blocks and free data blocks is racy, but we don't mind the risk of
|
||||
* delaying or prematurely forcing commits.
|
||||
*/
|
||||
static bool inc_holders_unless_writer(struct trans_info *tri)
|
||||
static bool acquired_hold(struct super_block *sb,
|
||||
struct scoutfs_reservation *rsv,
|
||||
const struct scoutfs_item_count *cnt)
|
||||
{
|
||||
int holders;
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
bool acquired = false;
|
||||
unsigned items;
|
||||
unsigned vals;
|
||||
|
||||
do {
|
||||
smp_mb(); /* make sure we read after wait puts task in queue */
|
||||
holders = atomic_read(&tri->holders);
|
||||
if (holders & TRANS_HOLDERS_WRITE_FUNC_BIT)
|
||||
return false;
|
||||
spin_lock(&tri->lock);
|
||||
|
||||
} while (atomic_cmpxchg(&tri->holders, holders, holders + 1) != holders);
|
||||
trace_scoutfs_trans_acquired_hold(sb, cnt, rsv, rsv->holders,
|
||||
&rsv->reserved, &rsv->actual,
|
||||
tri->holders, tri->writing,
|
||||
tri->reserved_items,
|
||||
tri->reserved_vals);
|
||||
|
||||
return true;
|
||||
}
|
||||
/* use a caller's existing reservation */
|
||||
if (rsv->holders)
|
||||
goto hold;
|
||||
|
||||
/*
|
||||
* As we drop the last trans holder we try to wake a writing thread that
|
||||
* was waiting for us to finish.
|
||||
*/
|
||||
static void release_holders(struct super_block *sb)
|
||||
{
|
||||
dec_journal_info_holders();
|
||||
sub_holders_and_wake(sb, 1);
|
||||
}
|
||||
/* wait until the writing thread is finished */
|
||||
if (tri->writing)
|
||||
goto out;
|
||||
|
||||
/* see if we can reserve space for our item count */
|
||||
items = tri->reserved_items + cnt->items;
|
||||
vals = tri->reserved_vals + cnt->vals;
|
||||
|
||||
/*
|
||||
* The caller has incremented holders so it is blocking commits. We
|
||||
* make some quick checks to see if we need to trigger and wait for
|
||||
* another commit before proceeding.
|
||||
*/
|
||||
static bool commit_before_hold(struct super_block *sb, struct trans_info *tri)
|
||||
{
|
||||
/*
|
||||
* In theory each dirty item page could be straddling two full
|
||||
* blocks, requiring 4 allocations for each item cache page.
|
||||
@@ -431,9 +378,11 @@ static bool commit_before_hold(struct super_block *sb, struct trans_info *tri)
|
||||
* that it accounts for having to dirty parent blocks and
|
||||
* whatever dirtying is done during the transaction hold.
|
||||
*/
|
||||
if (scoutfs_alloc_meta_low(sb, &tri->alloc, scoutfs_item_dirty_pages(sb) * 2)) {
|
||||
if (scoutfs_alloc_meta_low(sb, &tri->alloc,
|
||||
scoutfs_item_dirty_pages(sb) * 2)) {
|
||||
scoutfs_inc_counter(sb, trans_commit_dirty_meta_full);
|
||||
return true;
|
||||
queue_trans_work(sbi);
|
||||
goto out;
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -445,100 +394,70 @@ static bool commit_before_hold(struct super_block *sb, struct trans_info *tri)
|
||||
*/
|
||||
if (scoutfs_alloc_meta_low(sb, &tri->alloc, 16)) {
|
||||
scoutfs_inc_counter(sb, trans_commit_meta_alloc_low);
|
||||
return true;
|
||||
queue_trans_work(sbi);
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* if we're low and can't refill then alloc could empty and return enospc */
|
||||
if (scoutfs_data_alloc_should_refill(sb, SCOUTFS_ALLOC_DATA_REFILL_THRESH)) {
|
||||
/* Try to refill data allocator before premature enospc */
|
||||
if (scoutfs_data_alloc_free_bytes(sb) <= SCOUTFS_TRANS_DATA_ALLOC_LWM) {
|
||||
scoutfs_inc_counter(sb, trans_commit_data_alloc_low);
|
||||
return true;
|
||||
queue_trans_work(sbi);
|
||||
goto out;
|
||||
}
|
||||
|
||||
return false;
|
||||
tri->reserved_items = items;
|
||||
tri->reserved_vals = vals;
|
||||
|
||||
rsv->reserved.items = cnt->items;
|
||||
rsv->reserved.vals = cnt->vals;
|
||||
|
||||
hold:
|
||||
rsv->holders++;
|
||||
tri->holders++;
|
||||
acquired = true;
|
||||
|
||||
out:
|
||||
|
||||
spin_unlock(&tri->lock);
|
||||
|
||||
return acquired;
|
||||
}
|
||||
|
||||
/*
|
||||
* called as a wait_event condition, needs to be careful to not change
|
||||
* task state and is racing with waking paths that sub_return, test, and
|
||||
* wake.
|
||||
*/
|
||||
static bool holders_no_writer(struct trans_info *tri)
|
||||
{
|
||||
smp_mb(); /* make sure task in wait_event queue before atomic read */
|
||||
return !(atomic_read(&tri->holders) & TRANS_HOLDERS_WRITE_FUNC_BIT);
|
||||
}
|
||||
|
||||
/*
|
||||
* Try to hold the transaction. Holding the transaction prevents it
|
||||
* from being committed. If a transaction is currently being written
|
||||
* then we'll block until it's done and our hold can be granted.
|
||||
*
|
||||
* If a caller already holds the trans then we unconditionally acquire
|
||||
* our hold and return to avoid deadlocks with our caller, the writing
|
||||
* thread, and us. We record nested holds in a call stack with the
|
||||
* journal_info pointer in the task_struct.
|
||||
*
|
||||
* The writing thread marks itself as a global trans_task which
|
||||
* short-circuits all the hold machinery so it can call code that would
|
||||
* otherwise try to hold transactions while it is writing.
|
||||
*
|
||||
* If the caller is adding metadata items that will eventually consume
|
||||
* free space -- not dirtying existing items or adding deletion items --
|
||||
* then we can return enospc if our metadata allocator indicates that
|
||||
* we're low on space.
|
||||
*/
|
||||
int scoutfs_hold_trans(struct super_block *sb, bool allocing)
|
||||
int scoutfs_hold_trans(struct super_block *sb,
|
||||
const struct scoutfs_item_count cnt)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
u64 seq;
|
||||
struct scoutfs_reservation *rsv;
|
||||
int ret;
|
||||
|
||||
if (current == tri->task)
|
||||
/*
|
||||
* Caller shouldn't provide garbage counts, nor counts that
|
||||
* can't fit in segments by themselves.
|
||||
*/
|
||||
if (WARN_ON_ONCE(cnt.items <= 0 || cnt.vals < 0))
|
||||
return -EINVAL;
|
||||
|
||||
if (current == sbi->trans_task)
|
||||
return 0;
|
||||
|
||||
for (;;) {
|
||||
/* shouldn't get holders until mount finishes, (not locking for cheap test) */
|
||||
if (WARN_ON_ONCE(sbi->trans_seq == 0)) {
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
rsv = current->journal_info;
|
||||
if (rsv == NULL) {
|
||||
rsv = kzalloc(sizeof(struct scoutfs_reservation), GFP_NOFS);
|
||||
if (!rsv)
|
||||
return -ENOMEM;
|
||||
|
||||
/* if a caller already has a hold we acquire unconditionally */
|
||||
if (inc_journal_info_holders()) {
|
||||
atomic_inc(&tri->holders);
|
||||
ret = 0;
|
||||
break;
|
||||
}
|
||||
|
||||
/* wait until the writer work is finished */
|
||||
if (!inc_holders_unless_writer(tri)) {
|
||||
dec_journal_info_holders();
|
||||
wait_event(tri->hold_wq, holders_no_writer(tri));
|
||||
continue;
|
||||
}
|
||||
|
||||
/* return enospc if server is into reserved blocks and we're allocating */
|
||||
if (allocing && scoutfs_alloc_test_flag(sb, &tri->alloc, SCOUTFS_ALLOC_FLAG_LOW)) {
|
||||
release_holders(sb);
|
||||
ret = -ENOSPC;
|
||||
break;
|
||||
}
|
||||
|
||||
/* see if we need to trigger and wait for a commit before holding */
|
||||
if (commit_before_hold(sb, tri)) {
|
||||
seq = scoutfs_trans_sample_seq(sb);
|
||||
release_holders(sb);
|
||||
queue_trans_work(sb);
|
||||
wait_event(tri->hold_wq, scoutfs_trans_sample_seq(sb) != seq);
|
||||
continue;
|
||||
}
|
||||
|
||||
ret = 0;
|
||||
break;
|
||||
rsv->magic = SCOUTFS_RESERVATION_MAGIC;
|
||||
current->journal_info = rsv;
|
||||
}
|
||||
|
||||
trace_scoutfs_hold_trans(sb, current->journal_info, atomic_read(&tri->holders), ret);
|
||||
BUG_ON(rsv->magic != SCOUTFS_RESERVATION_MAGIC);
|
||||
|
||||
ret = wait_event_interruptible(sbi->trans_hold_wq,
|
||||
acquired_hold(sb, rsv, &cnt));
|
||||
if (ret && rsv->holders == 0) {
|
||||
current->journal_info = NULL;
|
||||
kfree(rsv);
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -549,21 +468,86 @@ int scoutfs_hold_trans(struct super_block *sb, bool allocing)
|
||||
*/
|
||||
bool scoutfs_trans_held(void)
|
||||
{
|
||||
unsigned long holders = (unsigned long)current->journal_info;
|
||||
struct scoutfs_reservation *rsv = current->journal_info;
|
||||
|
||||
return (holders != 0 && ((holders & TRANS_JI_MAGIC_MASK) == TRANS_JI_MAGIC));
|
||||
return rsv && rsv->magic == SCOUTFS_RESERVATION_MAGIC;
|
||||
}
|
||||
|
||||
void scoutfs_release_trans(struct super_block *sb)
|
||||
/*
|
||||
* Record a transaction holder's individual contribution to the dirty
|
||||
* items in the current transaction. We're making sure that the
|
||||
* reservation matches the possible item manipulations while they hold
|
||||
* the reservation.
|
||||
*
|
||||
* It is possible and legitimate for an individual contribution to be
|
||||
* negative if they delete dirty items. The item cache makes sure that
|
||||
* the total dirty item count doesn't fall below zero.
|
||||
*/
|
||||
void scoutfs_trans_track_item(struct super_block *sb, signed items,
|
||||
signed vals)
|
||||
{
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct scoutfs_reservation *rsv = current->journal_info;
|
||||
|
||||
if (current == tri->task)
|
||||
if (current == sbi->trans_task)
|
||||
return;
|
||||
|
||||
release_holders(sb);
|
||||
BUG_ON(!rsv || rsv->magic != SCOUTFS_RESERVATION_MAGIC);
|
||||
|
||||
trace_scoutfs_release_trans(sb, current->journal_info, atomic_read(&tri->holders), 0);
|
||||
rsv->actual.items += items;
|
||||
rsv->actual.vals += vals;
|
||||
|
||||
trace_scoutfs_trans_track_item(sb, items, vals, rsv->actual.items,
|
||||
rsv->actual.vals, rsv->reserved.items,
|
||||
rsv->reserved.vals);
|
||||
|
||||
WARN_ON_ONCE(rsv->actual.items > rsv->reserved.items);
|
||||
WARN_ON_ONCE(rsv->actual.vals > rsv->reserved.vals);
|
||||
}
|
||||
|
||||
/*
|
||||
* As we drop the last hold in the reservation we try and wake other
|
||||
* hold attempts that were waiting for space. As we drop the last trans
|
||||
* holder we try to wake a writing thread that was waiting for us to
|
||||
* finish.
|
||||
*/
|
||||
void scoutfs_release_trans(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct scoutfs_reservation *rsv;
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
bool wake = false;
|
||||
|
||||
if (current == sbi->trans_task)
|
||||
return;
|
||||
|
||||
rsv = current->journal_info;
|
||||
BUG_ON(!rsv || rsv->magic != SCOUTFS_RESERVATION_MAGIC);
|
||||
|
||||
spin_lock(&tri->lock);
|
||||
|
||||
trace_scoutfs_release_trans(sb, rsv, rsv->holders, &rsv->reserved,
|
||||
&rsv->actual, tri->holders, tri->writing,
|
||||
tri->reserved_items, tri->reserved_vals);
|
||||
|
||||
BUG_ON(rsv->holders <= 0);
|
||||
BUG_ON(tri->holders <= 0);
|
||||
|
||||
if (--rsv->holders == 0) {
|
||||
tri->reserved_items -= rsv->reserved.items;
|
||||
tri->reserved_vals -= rsv->reserved.vals;
|
||||
current->journal_info = NULL;
|
||||
kfree(rsv);
|
||||
wake = true;
|
||||
}
|
||||
|
||||
if (--tri->holders == 0)
|
||||
wake = true;
|
||||
|
||||
spin_unlock(&tri->lock);
|
||||
|
||||
if (wake)
|
||||
wake_up(&sbi->trans_hold_wq);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -573,13 +557,12 @@ void scoutfs_release_trans(struct super_block *sb)
|
||||
*/
|
||||
u64 scoutfs_trans_sample_seq(struct super_block *sb)
|
||||
{
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
u64 ret;
|
||||
|
||||
spin_lock(&tri->write_lock);
|
||||
spin_lock(&sbi->trans_write_lock);
|
||||
ret = sbi->trans_seq;
|
||||
spin_unlock(&tri->write_lock);
|
||||
spin_unlock(&sbi->trans_write_lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@@ -593,17 +576,12 @@ int scoutfs_setup_trans(struct super_block *sb)
|
||||
if (!tri)
|
||||
return -ENOMEM;
|
||||
|
||||
tri->sb = sb;
|
||||
atomic_set(&tri->holders, 0);
|
||||
spin_lock_init(&tri->lock);
|
||||
scoutfs_block_writer_init(sb, &tri->wri);
|
||||
|
||||
spin_lock_init(&tri->write_lock);
|
||||
INIT_DELAYED_WORK(&tri->write_work, scoutfs_trans_write_func);
|
||||
init_waitqueue_head(&tri->write_wq);
|
||||
init_waitqueue_head(&tri->hold_wq);
|
||||
|
||||
tri->write_workq = alloc_workqueue("scoutfs_trans", WQ_UNBOUND, 1);
|
||||
if (!tri->write_workq) {
|
||||
sbi->trans_write_workq = alloc_workqueue("scoutfs_trans",
|
||||
WQ_UNBOUND, 1);
|
||||
if (!sbi->trans_write_workq) {
|
||||
kfree(tri);
|
||||
return -ENOMEM;
|
||||
}
|
||||
@@ -614,15 +592,8 @@ int scoutfs_setup_trans(struct super_block *sb)
|
||||
}
|
||||
|
||||
/*
|
||||
* While the vfs will have done an fs level sync before calling
|
||||
* put_super, we may have done work down in our level after all the fs
|
||||
* ops were done. An example is final inode deletion in iput, that's
|
||||
* done in generic_shutdown_super after the sync and before calling our
|
||||
* put_super.
|
||||
*
|
||||
* So we always try to write any remaining dirty transactions before
|
||||
* shutting down. Typically there won't be any dirty data and the
|
||||
* worker will just return.
|
||||
* kill_sb calls sync before getting here so we know that dirty data
|
||||
* should be in flight. We just have to wait for it to quiesce.
|
||||
*/
|
||||
void scoutfs_shutdown_trans(struct super_block *sb)
|
||||
{
|
||||
@@ -630,19 +601,13 @@ void scoutfs_shutdown_trans(struct super_block *sb)
|
||||
DECLARE_TRANS_INFO(sb, tri);
|
||||
|
||||
if (tri) {
|
||||
if (tri->write_workq) {
|
||||
/* immediately queues pending timer */
|
||||
flush_delayed_work(&tri->write_work);
|
||||
/* prevents re-arming if it has to wait */
|
||||
cancel_delayed_work_sync(&tri->write_work);
|
||||
destroy_workqueue(tri->write_workq);
|
||||
/* trans work schedules after shutdown see null */
|
||||
tri->write_workq = NULL;
|
||||
}
|
||||
|
||||
scoutfs_alloc_prepare_commit(sb, &tri->alloc, &tri->wri);
|
||||
scoutfs_block_writer_forget_all(sb, &tri->wri);
|
||||
|
||||
if (sbi->trans_write_workq) {
|
||||
cancel_delayed_work_sync(&sbi->trans_write_work);
|
||||
destroy_workqueue(sbi->trans_write_workq);
|
||||
/* trans work schedules after shutdown see null */
|
||||
sbi->trans_write_workq = NULL;
|
||||
}
|
||||
kfree(tri);
|
||||
sbi->trans_info = NULL;
|
||||
}
|
||||
|
||||
@@ -1,16 +1,26 @@
#ifndef _SCOUTFS_TRANS_H_
#define _SCOUTFS_TRANS_H_

/* the server will attempt to fill data allocs for each trans */
#define SCOUTFS_TRANS_DATA_ALLOC_HWM (2ULL * 1024 * 1024 * 1024)
/* the client will force commits if data allocators get too low */
#define SCOUTFS_TRANS_DATA_ALLOC_LWM (256ULL * 1024 * 1024)

#include "count.h"

void scoutfs_trans_write_func(struct work_struct *work);
int scoutfs_trans_sync(struct super_block *sb, int wait);
int scoutfs_file_fsync(struct file *file, loff_t start, loff_t end,
                       int datasync);
void scoutfs_trans_restart_sync_deadline(struct super_block *sb);

int scoutfs_hold_trans(struct super_block *sb, bool allocing);
int scoutfs_hold_trans(struct super_block *sb,
                       const struct scoutfs_item_count cnt);
bool scoutfs_trans_held(void);
void scoutfs_release_trans(struct super_block *sb);
u64 scoutfs_trans_sample_seq(struct super_block *sb);
void scoutfs_trans_track_item(struct super_block *sb, signed items,
                              signed vals);

int scoutfs_trans_get_log_trees(struct super_block *sb);
bool scoutfs_trans_has_dirty(struct super_block *sb);
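The two data allocator watermarks above bracket commit behavior: the server tries to fill a mount's data allocator toward the 2GB high-water mark, and the client forces a commit once free data falls to the 256MB low-water mark so the allocator can be refilled before a premature -ENOSPC. A hypothetical helper, not in the driver, that just restates the low-water check shown earlier in the trans.c diff:

        /* illustrative only: mirrors the low-water check made before granting a hold */
        static inline bool data_alloc_needs_commit(u64 free_bytes)
        {
                return free_bytes <= SCOUTFS_TRANS_DATA_ALLOC_LWM;      /* 256MB */
        }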
@@ -38,7 +38,10 @@ struct scoutfs_triggers {
|
||||
struct scoutfs_triggers *name = SCOUTFS_SB(sb)->triggers
|
||||
|
||||
static char *names[] = {
|
||||
[SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE] = "block_remove_stale",
|
||||
[SCOUTFS_TRIGGER_BTREE_STALE_READ] = "btree_stale_read",
|
||||
[SCOUTFS_TRIGGER_BTREE_ADVANCE_RING_HALF] = "btree_advance_ring_half",
|
||||
[SCOUTFS_TRIGGER_HARD_STALE_ERROR] = "hard_stale_error",
|
||||
[SCOUTFS_TRIGGER_SEG_STALE_READ] = "seg_stale_read",
|
||||
[SCOUTFS_TRIGGER_STATFS_LOCK_PURGE] = "statfs_lock_purge",
|
||||
};
|
||||
|
||||
|
||||
@@ -2,7 +2,10 @@
|
||||
#define _SCOUTFS_TRIGGERS_H_
|
||||
|
||||
enum scoutfs_trigger {
|
||||
SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE,
|
||||
SCOUTFS_TRIGGER_BTREE_STALE_READ,
|
||||
SCOUTFS_TRIGGER_BTREE_ADVANCE_RING_HALF,
|
||||
SCOUTFS_TRIGGER_HARD_STALE_ERROR,
|
||||
SCOUTFS_TRIGGER_SEG_STALE_READ,
|
||||
SCOUTFS_TRIGGER_STATFS_LOCK_PURGE,
|
||||
SCOUTFS_TRIGGER_NR,
|
||||
};
|
||||
|
||||
@@ -1,188 +0,0 @@
/*
 * Copyright (C) 2021 Versity Software, Inc. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 */
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>

#include "super.h"
#include "client.h"
#include "volopt.h"

/*
 * Volume options are exposed through a sysfs directory. Getting and
 * setting the values sends rpcs to the server who owns the options in
 * the super block.
 */

struct volopt_info {
        struct super_block *sb;
        struct scoutfs_sysfs_attrs ssa;
};

#define DECLARE_VOLOPT_INFO(sb, name) \
        struct volopt_info *name = SCOUTFS_SB(sb)->volopt_info
#define DECLARE_VOLOPT_INFO_KOBJ(kobj, name) \
        DECLARE_VOLOPT_INFO(SCOUTFS_SYSFS_ATTRS_SB(kobj), name)

/*
 * attribute arrays need to be dense but the options we export could
 * well become sparse over time. .store and .load are generic and we
 * have a lookup table to map the attributes array indexes to the number
 * and name of the option.
 */
static struct volopt_nr_name {
        int nr;
        char *name;
} volopt_table[] = {
        { SCOUTFS_VOLOPT_DATA_ALLOC_ZONE_BLOCKS_NR, "data_alloc_zone_blocks" },
};

/* initialized by setup, pointer array is null terminated */
static struct kobj_attribute volopt_attrs[ARRAY_SIZE(volopt_table)];
static struct attribute *volopt_attr_ptrs[ARRAY_SIZE(volopt_table) + 1];

static void get_opt_data(struct kobj_attribute *attr, struct scoutfs_volume_options *volopt,
                         u64 *bit, __le64 **opt)
{
        size_t index = attr - &volopt_attrs[0];
        int nr = volopt_table[index].nr;

        *bit = 1ULL << nr;
        *opt = &volopt->set_bits + 1 + nr;
}

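get_opt_data() above maps an attribute back to its option slot: option nr selects one bit in set_bits and the nr'th __le64 that follows it in scoutfs_volume_options. A small sketch of reading a single option out of a fetched options struct using that same layout; the helper name is made up for illustration:

        /* returns true and fills *val when option nr is set in *volopt */
        static bool volopt_read(struct scoutfs_volume_options *volopt, int nr, u64 *val)
        {
                __le64 *opt = &volopt->set_bits + 1 + nr;

                if (!(le64_to_cpu(volopt->set_bits) & (1ULL << nr)))
                        return false;

                *val = le64_to_cpup(opt);
                return true;
        }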
static ssize_t volopt_attr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
|
||||
{
|
||||
DECLARE_VOLOPT_INFO_KOBJ(kobj, vinf);
|
||||
struct super_block *sb = vinf->sb;
|
||||
struct scoutfs_volume_options volopt;
|
||||
__le64 *opt;
|
||||
u64 bit;
|
||||
int ret;
|
||||
|
||||
ret = scoutfs_client_get_volopt(sb, &volopt);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
get_opt_data(attr, &volopt, &bit, &opt);
|
||||
|
||||
if (le64_to_cpu(volopt.set_bits) & bit) {
|
||||
return snprintf(buf, PAGE_SIZE, "%llu", le64_to_cpup(opt));
|
||||
} else {
|
||||
buf[0] = '\0';
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
static ssize_t volopt_attr_store(struct kobject *kobj, struct kobj_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
DECLARE_VOLOPT_INFO_KOBJ(kobj, vinf);
|
||||
struct super_block *sb = vinf->sb;
|
||||
struct scoutfs_volume_options volopt = {0,};
|
||||
u8 chars[32];
|
||||
__le64 *opt;
|
||||
u64 bit;
|
||||
u64 val;
|
||||
int ret;
|
||||
|
||||
if (count == 0)
|
||||
return 0;
|
||||
if (count > sizeof(chars) - 1)
|
||||
return -ERANGE;
|
||||
|
||||
get_opt_data(attr, &volopt, &bit, &opt);
|
||||
|
||||
if (buf[0] == '\n' || buf[0] == '\r') {
|
||||
volopt.set_bits = cpu_to_le64(bit);
|
||||
|
||||
ret = scoutfs_client_clear_volopt(sb, &volopt);
|
||||
} else {
|
||||
memcpy(chars, buf, count);
|
||||
chars[count] = '\0';
|
||||
ret = kstrtoull(chars, 0, &val);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
volopt.set_bits = cpu_to_le64(bit);
|
||||
*opt = cpu_to_le64(val);
|
||||
|
||||
ret = scoutfs_client_set_volopt(sb, &volopt);
|
||||
}
|
||||
|
||||
if (ret == 0)
|
||||
ret = count;
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* The volume option sysfs files are slim shims around RPCs so this
|
||||
* should be called after the client is setup and before it is torn
|
||||
* down.
|
||||
*/
|
||||
int scoutfs_volopt_setup(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct volopt_info *vinf;
|
||||
int ret;
|
||||
int i;
|
||||
|
||||
/* persistent volume options are always a bitmap u64 then the 64 options */
|
||||
BUILD_BUG_ON(sizeof(struct scoutfs_volume_options) != (1 + 64) * 8);
|
||||
|
||||
vinf = kzalloc(sizeof(struct volopt_info), GFP_KERNEL);
|
||||
if (!vinf) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
scoutfs_sysfs_init_attrs(sb, &vinf->ssa);
|
||||
vinf->sb = sb;
|
||||
sbi->volopt_info = vinf;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(volopt_table); i++) {
|
||||
volopt_attrs[i] = (struct kobj_attribute) {
|
||||
.attr = { .name = volopt_table[i].name, .mode = S_IWUSR | S_IRUGO },
|
||||
.show = volopt_attr_show,
|
||||
.store = volopt_attr_store,
|
||||
};
|
||||
volopt_attr_ptrs[i] = &volopt_attrs[i].attr;
|
||||
}
|
||||
|
||||
BUILD_BUG_ON(ARRAY_SIZE(volopt_table) != ARRAY_SIZE(volopt_attr_ptrs) - 1);
|
||||
volopt_attr_ptrs[i] = NULL;
|
||||
|
||||
ret = scoutfs_sysfs_create_attrs(sb, &vinf->ssa, volopt_attr_ptrs, "volume_options");
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
out:
|
||||
if (ret)
|
||||
scoutfs_volopt_destroy(sb);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
void scoutfs_volopt_destroy(struct super_block *sb)
|
||||
{
|
||||
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
|
||||
struct volopt_info *vinf = SCOUTFS_SB(sb)->volopt_info;
|
||||
|
||||
if (vinf) {
|
||||
scoutfs_sysfs_destroy_attrs(sb, &vinf->ssa);
|
||||
kfree(vinf);
|
||||
sbi->volopt_info = NULL;
|
||||
}
|
||||
}
|
||||
@@ -1,7 +0,0 @@
#ifndef _SCOUTFS_VOLOPT_H_
#define _SCOUTFS_VOLOPT_H_

int scoutfs_volopt_setup(struct super_block *sb);
void scoutfs_volopt_destroy(struct super_block *sb);

#endif

750 kmod/src/xattr.c (diff suppressed because it is too large)
@@ -1,33 +1,25 @@
|
||||
#ifndef _SCOUTFS_XATTR_H_
|
||||
#define _SCOUTFS_XATTR_H_
|
||||
|
||||
struct scoutfs_xattr_prefix_tags {
|
||||
unsigned long hide:1,
|
||||
srch:1,
|
||||
totl:1;
|
||||
};
|
||||
|
||||
extern const struct xattr_handler *scoutfs_xattr_handlers[];
|
||||
|
||||
int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer, size_t size,
|
||||
struct scoutfs_lock *lck);
|
||||
int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_len,
|
||||
const void *value, size_t size, int flags,
|
||||
const struct scoutfs_xattr_prefix_tags *tgs,
|
||||
struct scoutfs_lock *lck, struct scoutfs_lock *totl_lock,
|
||||
struct list_head *ind_locks);
|
||||
|
||||
ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
|
||||
size_t size);
|
||||
int scoutfs_setxattr(struct dentry *dentry, const char *name,
|
||||
const void *value, size_t size, int flags);
|
||||
int scoutfs_removexattr(struct dentry *dentry, const char *name);
|
||||
ssize_t scoutfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
|
||||
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
|
||||
size_t size, __u32 *hash_pos, __u64 *id_pos,
|
||||
bool e_range, bool show_hidden);
|
||||
|
||||
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
|
||||
struct scoutfs_lock *lock);
|
||||
|
||||
struct scoutfs_xattr_prefix_tags {
|
||||
unsigned long hide:1,
|
||||
srch:1;
|
||||
};
|
||||
|
||||
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
|
||||
struct scoutfs_xattr_prefix_tags *tgs);
|
||||
|
||||
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name);
|
||||
int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len);
|
||||
|
||||
#endif
|
||||
|
||||
5 tests/.gitignore (vendored)
@@ -1,11 +1,6 @@
|
||||
src/*.d
|
||||
src/createmany
|
||||
src/dumb_renameat2
|
||||
src/dumb_setxattr
|
||||
src/handle_cat
|
||||
src/handle_fsetxattr
|
||||
src/bulk_create_paths
|
||||
src/find_xattrs
|
||||
src/stage_tmpfile
|
||||
src/create_xattr_loop
|
||||
src/o_tmpfile_umask
|
||||
|
||||
@@ -1,18 +1,12 @@
|
||||
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing -I ../kmod/src
|
||||
CFLAGS := -Wall -O2 -Werror -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing
|
||||
SHELL := /usr/bin/bash
|
||||
|
||||
# each binary command is built from a single .c file
|
||||
BIN := src/createmany \
|
||||
src/dumb_renameat2 \
|
||||
src/dumb_setxattr \
|
||||
src/handle_cat \
|
||||
src/handle_fsetxattr \
|
||||
src/bulk_create_paths \
|
||||
src/stage_tmpfile \
|
||||
src/find_xattrs \
|
||||
src/create_xattr_loop \
|
||||
src/fragmented_data_extents \
|
||||
src/o_tmpfile_umask
|
||||
src/find_xattrs
|
||||
|
||||
DEPS := $(wildcard src/*.d)
|
||||
|
||||
|
||||
@@ -1,43 +0,0 @@
|
||||
#!/usr/bin/bash
|
||||
|
||||
#
|
||||
# This fencing script is used for testing clusters of multiple mounts on
|
||||
# a single host. It finds mounts to fence by looking for their rids and
|
||||
# only knows how to "fence" by using forced unmount.
|
||||
#
|
||||
|
||||
echo "$0 running rid '$SCOUTFS_FENCED_REQ_RID' ip '$SCOUTFS_FENCED_REQ_IP' args '$@'"
|
||||
|
||||
log() {
|
||||
echo "$@" > /dev/stderr
|
||||
exit 1
|
||||
}
|
||||
|
||||
echo_fail() {
|
||||
echo "$@" > /dev/stderr
|
||||
exit 1
|
||||
}
|
||||
|
||||
rid="$SCOUTFS_FENCED_REQ_RID"
|
||||
|
||||
for fs in /sys/fs/scoutfs/*; do
|
||||
[ ! -d "$fs" ] && continue
|
||||
|
||||
fs_rid="$(cat $fs/rid)" || \
|
||||
echo_fail "failed to get rid in $fs"
|
||||
if [ "$fs_rid" != "$rid" ]; then
|
||||
continue
|
||||
fi
|
||||
|
||||
nr="$(cat $fs/data_device_maj_min)" || \
|
||||
echo_fail "failed to get data device major:minor in $fs"
|
||||
|
||||
mnts=$(findmnt -l -n -t scoutfs -o TARGET -S $nr) || \
|
||||
echo_fail "findmnt -t scoutfs -S $nr failed"
|
||||
for mnt in $mnts; do
|
||||
umount -f "$mnt" || \
|
||||
echo_fail "umout -f $mnt failed"
|
||||
done
|
||||
done
|
||||
|
||||
exit 0
|
||||
@@ -3,7 +3,7 @@
|
||||
t_filter_fs()
|
||||
{
|
||||
sed -e 's@mnt/test\.[0-9]*@mnt/test@g' \
|
||||
-e 's@Device: [a-fA-F0-9]*h/[0-9]*d@Device: 0h/0d@g'
|
||||
-e 's@Device: [a-fA-F0-7]*h/[0-9]*d@Device: 0h/0d@g'
|
||||
}
|
||||
|
||||
#
|
||||
@@ -40,7 +40,7 @@ t_filter_dmesg()
|
||||
# mount and unmount spew a bunch
|
||||
re="$re|scoutfs.*client connected"
|
||||
re="$re|scoutfs.*client disconnected"
|
||||
re="$re|scoutfs.*server starting"
|
||||
re="$re|scoutfs.*server setting up"
|
||||
re="$re|scoutfs.*server ready"
|
||||
re="$re|scoutfs.*server accepted"
|
||||
re="$re|scoutfs.*server closing"
|
||||
@@ -52,35 +52,12 @@ t_filter_dmesg()
|
||||
|
||||
# tests that drop unmount io triggers fencing
|
||||
re="$re|scoutfs .* error: fencing "
|
||||
re="$re|scoutfs .*: waiting for .* clients"
|
||||
re="$re|scoutfs .*: all clients recovered"
|
||||
re="$re|scoutfs .*: waiting for .* lock clients"
|
||||
re="$re|scoutfs .*: all lock clients recovered"
|
||||
re="$re|scoutfs .* error: client rid.*lock recovery timed out"
|
||||
|
||||
# we test bad devices and options
|
||||
# some tests mount w/o options
|
||||
re="$re|scoutfs .* error: Required mount option \"metadev_path\" not found"
|
||||
re="$re|scoutfs .* error: meta_super META flag not set"
|
||||
re="$re|scoutfs .* error: could not open metadev:.*"
|
||||
re="$re|scoutfs .* error: Unknown or malformed option,.*"
|
||||
|
||||
# in debugging kernels we can slow things down a bit
|
||||
re="$re|hrtimer: interrupt took .*"
|
||||
|
||||
# fencing tests force unmounts and trigger timeouts
|
||||
re="$re|scoutfs .* forcing unmount"
|
||||
re="$re|scoutfs .* reconnect timed out"
|
||||
re="$re|scoutfs .* recovery timeout expired"
|
||||
re="$re|scoutfs .* fencing previous leader"
|
||||
re="$re|scoutfs .* reclaimed resources"
|
||||
re="$re|scoutfs .* quorum .* error"
|
||||
re="$re|scoutfs .* error reading quorum block"
|
||||
re="$re|scoutfs .* error .* writing quorum block"
|
||||
re="$re|scoutfs .* error .* while checking to delete inode"
|
||||
re="$re|scoutfs .* error .*writing btree blocks.*"
|
||||
re="$re|scoutfs .* error .*writing super block.*"
|
||||
re="$re|scoutfs .* error .* freeing merged btree blocks.*.looping commit del.*upd freeing item"
|
||||
re="$re|scoutfs .* error .* freeing merged btree blocks.*.final commit del.upd freeing item"
|
||||
re="$re|scoutfs .* error .*reading quorum block.*to update event.*"
|
||||
re="$re|scoutfs .* error.*server failed to bind to.*"
|
||||
|
||||
egrep -v "($re)"
|
||||
}
|
||||
|
||||
@@ -17,17 +17,6 @@ t_sync_seq_index()
|
||||
t_quiet sync
|
||||
}
|
||||
|
||||
t_mount_rid()
|
||||
{
|
||||
local nr="${1:-0}"
|
||||
local mnt="$(eval echo \$T_M$nr)"
|
||||
local rid
|
||||
|
||||
rid=$(scoutfs statfs -s rid -p "$mnt")
|
||||
|
||||
echo "$rid"
|
||||
}
|
||||
|
||||
#
|
||||
# Output the "f.$fsid.r.$rid" identifier string for the given mount
|
||||
# number, 0 is used by default if none is specified.
|
||||
@@ -75,20 +64,6 @@ t_fs_nrs()
|
||||
seq 0 $((T_NR_MOUNTS - 1))
|
||||
}
|
||||
|
||||
#
|
||||
# outputs "1" if the fs number has "1" in its quorum/is_leader file.
|
||||
# All other cases output 0, including the fs nr being a client which
|
||||
# won't have a quorum/ dir.
|
||||
#
|
||||
t_fs_is_leader()
|
||||
{
|
||||
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader 2>/dev/null)" == "1" ]; then
|
||||
echo "1"
|
||||
else
|
||||
echo "0"
|
||||
fi
|
||||
}
|
||||
|
||||
#
|
||||
# Output the mount nr of the current server. This takes no steps to
|
||||
# ensure that the server doesn't shut down and have some other mount
|
||||
@@ -97,7 +72,7 @@ t_fs_is_leader()
|
||||
t_server_nr()
|
||||
{
|
||||
for i in $(t_fs_nrs); do
|
||||
if [ "$(t_fs_is_leader $i)" == "1" ]; then
|
||||
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader)" == "1" ]; then
|
||||
echo $i
|
||||
return
|
||||
fi
|
||||
@@ -115,7 +90,7 @@ t_server_nr()
|
||||
t_first_client_nr()
|
||||
{
|
||||
for i in $(t_fs_nrs); do
|
||||
if [ "$(t_fs_is_leader $i)" == "0" ]; then
|
||||
if [ "$(cat $(t_sysfs_path $i)/quorum/is_leader)" == "0" ]; then
|
||||
echo $i
|
||||
return
|
||||
fi
|
||||
@@ -124,19 +99,6 @@ t_first_client_nr()
|
||||
t_fail "t_first_client_nr didn't find any clients"
|
||||
}
|
||||
|
||||
#
|
||||
# The number of quorum members needed to form a majority to start the
|
||||
# server.
|
||||
#
|
||||
t_majority_count()
|
||||
{
|
||||
if [ "$T_QUORUM" -lt 3 ]; then
|
||||
echo 1
|
||||
else
|
||||
echo $(((T_QUORUM / 2) + 1))
|
||||
fi
|
||||
}
|
||||
|
||||
t_mount()
|
||||
{
|
||||
local nr="$1"
|
||||
@@ -154,17 +116,7 @@ t_umount()
|
||||
test "$nr" -lt "$T_NR_MOUNTS" || \
|
||||
t_fail "fs nr $nr invalid"
|
||||
|
||||
eval t_quiet umount \$T_M$nr
|
||||
}
|
||||
|
||||
t_force_umount()
|
||||
{
|
||||
local nr="$1"
|
||||
|
||||
test "$nr" -lt "$T_NR_MOUNTS" || \
|
||||
t_fail "fs nr $nr invalid"
|
||||
|
||||
eval t_quiet umount -f \$T_M$nr
|
||||
eval t_quiet umount \$T_DB$i
|
||||
}
|
||||
|
||||
#
|
||||
@@ -244,19 +196,12 @@ t_trigger_show() {
|
||||
echo "trigger $which $string: $(t_trigger_get $which $nr)"
|
||||
}
|
||||
|
||||
t_trigger_arm_silent() {
|
||||
t_trigger_arm() {
|
||||
local which="$1"
|
||||
local nr="$2"
|
||||
local path=$(t_trigger_path "$nr")
|
||||
|
||||
echo 1 > "$path/$which"
|
||||
}
|
||||
|
||||
t_trigger_arm() {
|
||||
local which="$1"
|
||||
local nr="$2"
|
||||
|
||||
t_trigger_arm_silent $which $nr
|
||||
t_trigger_show $which armed $nr
|
||||
}
|
||||
|
||||
@@ -271,162 +216,16 @@ t_counter() {
|
||||
cat "$(t_sysfs_path $nr)/counters/$which"
|
||||
}
|
||||
|
||||
#
|
||||
# output the difference between the current value of a counter and the
|
||||
# caller's provided previous value.
|
||||
#
|
||||
t_counter_diff_value() {
|
||||
local which="$1"
|
||||
local old="$2"
|
||||
local nr="$3"
|
||||
local new="$(t_counter $which $nr)"
|
||||
|
||||
echo "$((new - old))"
|
||||
}
|
||||
|
||||
#
|
||||
# output the value of the given counter for the given mount, defaulting
|
||||
# to mount 0 if a mount isn't specified. For tests which expect a
|
||||
# specific difference in counters.
|
||||
# to mount 0 if a mount isn't specified.
|
||||
#
|
||||
t_counter_diff() {
|
||||
local which="$1"
|
||||
local old="$2"
|
||||
local nr="$3"
|
||||
local new
|
||||
|
||||
echo "counter $which diff $(t_counter_diff_value $which $old $nr)"
|
||||
}
|
||||
|
||||
#
|
||||
# output a message indicating whether or not the counter value changed.
|
||||
# For tests that expect a difference, or not, but the amount of
|
||||
# difference isn't significant.
|
||||
#
|
||||
t_counter_diff_changed() {
|
||||
local which="$1"
|
||||
local old="$2"
|
||||
local nr="$3"
|
||||
local diff="$(t_counter_diff_value $which $old $nr)"
|
||||
|
||||
test "$diff" -eq 0 && \
|
||||
echo "counter $which didn't change" ||
|
||||
echo "counter $which changed"
|
||||
}
|
||||
|
||||
#
|
||||
# See if we can find a local mount with the caller's rid.
|
||||
#
|
||||
t_rid_is_mounted() {
|
||||
local rid="$1"
|
||||
local fr="$1"
|
||||
|
||||
for fr in /sys/fs/scoutfs/*; do
|
||||
if [ "$(cat $fr/rid)" == "$rid" ]; then
|
||||
return 0
|
||||
fi
|
||||
done
|
||||
|
||||
return 1
|
||||
}
|
||||
|
||||
#
|
||||
# A given mount is being fenced if any mount has a fence request pending
|
||||
# for it which hasn't finished and been removed.
|
||||
#
|
||||
t_rid_is_fencing() {
|
||||
local rid="$1"
|
||||
local fr
|
||||
|
||||
for fr in /sys/fs/scoutfs/*; do
|
||||
if [ -d "$fr/fence/$rid" ]; then
|
||||
return 0
|
||||
fi
|
||||
done
|
||||
|
||||
return 1
|
||||
}
|
||||
|
||||
#
|
||||
# Wait until the mount identified by the first rid arg is not in any
|
||||
# states specified by the remaining state description word args.
|
||||
#
|
||||
t_wait_if_rid_is() {
|
||||
local rid="$1"
|
||||
|
||||
while ( [[ $* =~ mounted ]] && t_rid_is_mounted $rid ) ||
|
||||
( [[ $* =~ fencing ]] && t_rid_is_fencing $rid ) ; do
|
||||
sleep .5
|
||||
done
|
||||
}
|
||||
|
||||
#
|
||||
# Wait until any mount identifies itself as the elected leader. We can
|
||||
# be waiting while tests mount and unmount so mounts may not be mounted
|
||||
# at the test's expected mount points.
|
||||
#
|
||||
t_wait_for_leader() {
|
||||
local i
|
||||
|
||||
while sleep .25; do
|
||||
for i in $(t_fs_nrs); do
|
||||
local ldr="$(t_sysfs_path $i 2>/dev/null)/quorum/is_leader"
|
||||
if [ "$(cat $ldr 2>/dev/null)" == "1" ]; then
|
||||
return
|
||||
fi
|
||||
done
|
||||
done
|
||||
}
|
||||
|
||||
t_get_sysfs_mount_option() {
|
||||
local nr="$1"
|
||||
local name="$2"
|
||||
local opt="$(t_sysfs_path $nr)/mount_options/$name"
|
||||
|
||||
cat "$opt"
|
||||
}
|
||||
|
||||
t_set_sysfs_mount_option() {
|
||||
local nr="$1"
|
||||
local name="$2"
|
||||
local val="$3"
|
||||
local opt="$(t_sysfs_path $nr)/mount_options/$name"
|
||||
|
||||
echo "$val" > "$opt"
|
||||
}
|
||||
|
||||
t_set_all_sysfs_mount_options() {
|
||||
local name="$1"
|
||||
local val="$2"
|
||||
local i
|
||||
|
||||
for i in $(t_fs_nrs); do
|
||||
t_set_sysfs_mount_option $i $name $val
|
||||
done
|
||||
}
|
||||
|
||||
declare -A _saved_opts
|
||||
t_save_all_sysfs_mount_options() {
|
||||
local name="$1"
|
||||
local ind
|
||||
local opt
|
||||
local i
|
||||
|
||||
for i in $(t_fs_nrs); do
|
||||
opt="$(t_sysfs_path $i)/mount_options/$name"
|
||||
ind="${name}_${i}"
|
||||
|
||||
_saved_opts[$ind]="$(cat $opt)"
|
||||
done
|
||||
}
|
||||
|
||||
t_restore_all_sysfs_mount_options() {
|
||||
local name="$1"
|
||||
local ind
|
||||
local i
|
||||
|
||||
for i in $(t_fs_nrs); do
|
||||
ind="${name}_${i}"
|
||||
|
||||
t_set_sysfs_mount_option $i $name "${_saved_opts[$ind]}"
|
||||
done
|
||||
new="$(t_counter $which $nr)"
|
||||
echo "counter $which diff $((new - old))"
|
||||
}
|
||||
|
||||
@@ -23,18 +23,3 @@ t_require_mounts() {
|
||||
test "$T_NR_MOUNTS" -ge "$req" || \
|
||||
t_skip "$req mounts required, only have $T_NR_MOUNTS"
|
||||
}
|
||||
|
||||
#
|
||||
# Require that the meta device be at least the size string argument, as
|
||||
# parsed by numfmt using single char base 2 suffixes (iec).. 64G, etc.
|
||||
#
|
||||
t_require_meta_size() {
|
||||
local dev="$T_META_DEVICE"
|
||||
local req_iec="$1"
|
||||
local req_bytes=$(numfmt --from=iec --to=none $req_iec)
|
||||
local dev_bytes=$(blockdev --getsize64 $dev)
|
||||
local dev_iec=$(numfmt --from=auto --to=iec $dev_bytes)
|
||||
|
||||
test "$dev_bytes" -ge "$req_bytes" || \
|
||||
t_skip "$dev must be at least $req_iec, is $dev_iec"
|
||||
}
|
||||
|
||||
@@ -1,6 +0,0 @@
|
||||
== prepare devices, mount point, and logs
|
||||
== bad devices, bad options
|
||||
== swapped devices
|
||||
== both meta devices
|
||||
== both data devices
|
||||
== good volume, bad option and good options
|
||||
@@ -53,5 +53,3 @@ mv: cannot move ‘/mnt/test/test/basic-posix-consistency/dir/c/clobber’ to
|
||||
== inode indexes match after syncing existing
|
||||
== inode indexes match after copying and syncing
|
||||
== inode indexes match after removing and syncing
|
||||
== concurrent creates make one file
|
||||
one-file
|
||||
|
||||
@@ -1,6 +0,0 @@
|
||||
== truncate writes zeroed partial end of file block
|
||||
0000000 0a79 0a79 0a79 0a79 0a79 0a79 0a79 0a79
|
||||
*
|
||||
0006144 0000 0000 0000 0000 0000 0000 0000 0000
|
||||
*
|
||||
0012288
|
||||
@@ -1,2 +0,0 @@
|
||||
== Issue scoutfs df to force block reads to trigger stale invalidation/retry
|
||||
counter block_cache_remove_stale changed
|
||||
@@ -1 +0,0 @@
|
||||
== 60s of unmounting non-quorum clients during recovery
|
||||
@@ -1,26 +0,0 @@
== initial writes smaller than prealloc grow to prealloc size
/mnt/test/test/data-prealloc/file-1: 7 extents found
/mnt/test/test/data-prealloc/file-2: 7 extents found
== larger files get full prealloc extents
/mnt/test/test/data-prealloc/file-1: 9 extents found
/mnt/test/test/data-prealloc/file-2: 9 extents found
== non-streaming writes with contig have per-block extents
/mnt/test/test/data-prealloc/file-1: 32 extents found
/mnt/test/test/data-prealloc/file-2: 32 extents found
== any writes to region prealloc get full extents
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
== streaming offline writes get full extents either way
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
== goofy preallocation amounts work
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 3 extents found
/mnt/test/test/data-prealloc/file-2: 3 extents found

@@ -1,8 +0,0 @@
== prepare directories and files
== fallocate until enospc
== remove all the files and verify free data blocks
== make small meta fs
== create large xattrs until we fill up metadata
== remove files with xattrs after enospc
== make sure we can create again
== cleanup small meta fs

@@ -1,3 +0,0 @@
== creating reasonably large per-mount files
== 10s of racing cold reads and fallocate nop
== cleaning up files

@@ -1,5 +0,0 @@
== make sure all mounts can see each other
== force unmount one client, connection timeout, fence nop, mount
== force unmount all non-server, connection timeout, fence nop, mount
== force unmount server, quorum elects new leader, fence nop, mount
== force unmount everything, new server fences all previous

@@ -1,27 +0,0 @@
== basic unlink deletes
ino found in dseq index
ino not found in dseq index
== local open-unlink waits for close to delete
contents after rm: contents
ino found in dseq index
ino not found in dseq index
== multiple local opens are protected
contents after rm 1: contents
contents after rm 2: contents
ino found in dseq index
ino not found in dseq index
== remote unopened unlink deletes
ino not found in dseq index
ino not found in dseq index
== unlink wait for open on other mount
mount 0 contents after mount 1 rm: contents
ino found in dseq index
ino found in dseq index
stat: cannot stat ‘/mnt/test/test/inode-deletion/file’: No such file or directory
ino not found in dseq index
ino not found in dseq index
== lots of deletions use one open map
== open files survive remote scanning orphans
mount 0 contents after mount 1 remounted: contents
ino not found in dseq index
ino not found in dseq index

@@ -1,3 +0,0 @@
== creating fragmented extents
== unlink file with moved extents to free extents per block
== cleanup

4 tests/golden/lock-conflicting-batch-commit Normal file
@@ -0,0 +1,4 @@
== create per mount files
== time independent modification
== time concurrent independent modification
== time concurrent conflicting modification

@@ -1,3 +0,0 @@
== starting background invalidating read/write load
== 60s of lock recovery during invalidating load
== stopping background load

@@ -1,3 +0,0 @@
== create per mount files
== 30s of racing random mount/umount
== mounting any unmounted

@@ -1,26 +0,0 @@
== non-acl O_TMPFILE creation honors umask
umask 022
fstat after open(0777): 0100755
stat after linkat: 0100755
umask 077
fstat after open(0777): 0100700
stat after linkat: 0100700
== stage from tmpfile
total file size 33669120
00000000 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 |AAAAAAAAAAAAAAAA|
*
00400000 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 |BBBBBBBBBBBBBBBB|
*
00801000 43 43 43 43 43 43 43 43 43 43 43 43 43 43 43 43 |CCCCCCCCCCCCCCCC|
*
00c03000 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 |DDDDDDDDDDDDDDDD|
*
01006000 45 45 45 45 45 45 45 45 45 45 45 45 45 45 45 45 |EEEEEEEEEEEEEEEE|
*
0140a000 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 |FFFFFFFFFFFFFFFF|
*
0180f000 47 47 47 47 47 47 47 47 47 47 47 47 47 47 47 47 |GGGGGGGGGGGGGGGG|
*
01c15000 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 |HHHHHHHHHHHHHHHH|
*
0201c000

@@ -1,5 +0,0 @@
== test our inode existance function
== unlinked and opened inodes still exist
== orphan from failed evict deletion is picked up
== orphaned inos in all mounts all deleted
== 30s of racing evict deletion, orphan scanning, and open by handle

@@ -1,2 +0,0 @@
=== renameat2 noreplace flag test
=== run two asynchronous calls to renameat2 NOREPLACE

@@ -1,27 +0,0 @@
== make initial small fs
== 0s do nothing
== shrinking fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== existing sizes do nothing
== growing outside device fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== resizing meta works
== resizing data works
== shrinking back fails
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
resize_devices ioctl failed: Invalid argument (22)
scoutfs: resize-devices failed: Invalid argument (22)
== resizing again does nothing
== resizing to full works
== cleanup extra fs

@@ -7,4 +7,3 @@ found second
== changing metadata must increase meta seq
== changing contents must increase data seq
== make sure dirtying doesn't livelock walk
== concurrent update attempts maintain single entries

@@ -16,4 +16,3 @@ setfattr: /mnt/test/test/simple-xattr-unit/file: Numerical result out of range
setfattr: /mnt/test/test/simple-xattr-unit/file: Argument list too long
=== good length boundaries
=== 500 random lengths
=== alternate val size between interesting sizes

@@ -2,7 +2,6 @@
== update existing xattr
== remove an xattr
== remove xattr with files
== trigger small log merges by rotating single block with unmount
== create entries in current log
== delete small fraction
== remove files

Some files were not shown because too many files have changed in this diff.