Compare commits


1 Commit

Author SHA1 Message Date
Ben McClelland
b898c89c11 add local,ipmi,powerman fenced scripts to utils rpm
This adds the fenced scripts so that we have a place to track these and
get updates out to users. This latest version of the scripts has checks
to validate that power off succeeded rather than just assuming success
based on the power command return status.

Signed-off-by: Ben McClelland <ben.mcclelland@versity.com>
2022-07-06 15:11:21 -07:00
106 changed files with 1980 additions and 6187 deletions

View File

@@ -1,200 +1,6 @@
Versity ScoutFS Release Notes
=============================
---
v1.17
\
*Oct 23, 2023*
Add support for EL8 generation kernels.
---
v1.16
\
*Oct 4, 2023*
Fix an issue where the server could hang on startup if its persistent
allocator structures were left in a specific degraded state by the
previously active server.
---
v1.15
\
*Jul 17, 2023*
Process log btree merge splicing in multiple commits. This prevents a
rare case where pending log merge completions contain more work than can
be done in a single server commit, causing the server to trigger an
assert shortly after starting.
Fix spurious EINVAL from data writes when data\_prealloc\_contig\_only was
set to 0.
---
v1.14
\
*Jun 29, 2023*
Add get\_referring\_entries ioctl for getting directory entries that
refer to an inode.
Fix excessive CPU use in the move\_blocks interface when moving a large
number of extents.
Reduce fragmented data allocation when contig\_only prealloc is not in
use by more consistently allocating multi-block extents within each
aligned prealloc region.
Avoid rare deadlock in metadata block cache reclaim under both heavy
load and memory pressure.
Fix crash when using quorum\_heartbeat\_timeout\_ms mount option.
---
v1.13
\
*May 19, 2023*
Add the quorum\_heartbeat\_timeout\_ms mount option to set the quorum
heartbeat timeout.
Change some task prioritization and allocation behavior of the quorum
agent to help reduce delays in sending and receiving heartbeat messages.
---
v1.12
\
*Apr 17, 2023*
Add the prepare-empty-data-device scoutfs command. A data device can be
unused when no files have data blocks, perhaps because they're archived
and offline. In this case the data device can be swapped out for
another device without changes to the metadata device.
Fix an oversight which limited inode timestamps to second granularity
for some operations. All operations now record timestamps with full
nanosecond precision.
Fix spurious ENOENT failures when renaming from other directories into
the root directory.
---
v1.11
\
*Feb 2, 2023*
Fixed a free extent processing error that could prevent mount from
proceeding when free data extents were sufficiently fragmented. It now
properly handles very fragmented free extent maps.
Fixed a statfs server processing race that could return spurious errors
and shut down the server. With the race closed statfs processing is
reliable.
Fixed a rare livelock in the move\_blocks ioctl. With the right
relationship between ioctl arguments and eventual file extent items the
core loop in the move\_blocks ioctl could get stuck looping on an extent
item and never return. The loop exit conditions were fixed and the loop
will always advance through all extents.
Changed the 'print' scoutfs commands to flush the block cache for the
devices. It was inconvenient to expect cache flushing to be a separate
step to ensure consistency with remote node writes.
---
v1.10
\
*Dec 7, 2022*
Fixed a potential directory entry cache management deadlock that could
occur when many nodes performed heavy metadata write loads across shared
directories and their child subdirectories. The deadlock could halt
invalidation progress on a node which could then stop use of locks that
needed invalidation on that node which would result in almost all tasks
hanging on those locks that would never make progress.
Fixed a circumstance where metadata change sequence index item
modification could leave behind old stale metadata sequence items. The
duplication case required concurrent metadata updates across mounts with
particular open transaction patterns so the duplicate items are rare.
They resulted in a small amount of additional load when walking change
indexes but had no effect on correctness.
Fixed a rare case where sparse file extension might not write partial
blocks of zeros which was found in testing. This required using
truncate to extend files past file sizes that end in partial blocks
along with the right transaction commit and memory reclaim patterns.
This never affected regular non-sparse files nor files prepopulated with
fallocate.
---
v1.9
\
*Oct 29, 2022*
Fix VFS cached directory entry consistency verification that could cause
spurious "no such file or directory" (ENOENT) errors from rename over
NFS under certain conditions. The problem was only ever with the
consistency of in-memory cached dentry objects, persistent data was
correct and eventual eviction of the bad cached objects would stop
generating the errors.
---
v1.8
\
*Oct 18, 2022*
Add support for Linux POSIX Access Control Lists, as described in
acl(5). Mount options are added to enable ("acl") and disable ("noacl")
support. The default is to support ACLs. ACLs are stored in the
existing extended attribute scheme so adding support does not require
a format change.
Add options to control data extent preallocation. The default behavior
does not change. The options can relax the limits on preallocation
which will then trigger under more write patterns and increase the risk
of preallocated space which is never used. The options are described in
scoutfs(5).
---
v1.7
\
*Aug 26, 2022*
* **Fixed possible persistent errors moving freed data extents**
\
Fixed a case where the server could hit persistent errors trying to
move a client's freed extents in one commit. The client had to free
a large number of extents that occupied distant positions in the
global free extent btree. Very large fragmented files could cause
this. The server now moves the freed extents in multiple commits and
can always ensure forward progress.
* **Fixed possible persistent errors from freed duplicate extents**
\
Background orphan deletion wasn't properly synchronizing with
foreground tasks deleting very large files. If a deletion took long
enough then background deletion could also attempt to delete inode items
while the deletion was making progress. This could create duplicate
deletions of data extent items, causing the server to abort when
it later discovers the duplicate extents as it merges free lists.
---
v1.6
\
*Jul 7, 2022*
* **Fix memory leaks in rare corner cases**
\
Analysis tools found a few corner cases that leaked small structures,
generally around error handling or startup and shutdown.
* **Add --skip-likely-huge scoutfs print command option**
\
Add an option to scoutfs print to reduce the size of the output
so that it can be used to see system-wide metadata without being
overwhelmed by file-level details.
---
v1.5
\

View File

@@ -31,12 +31,12 @@ TARFILE = scoutfs-kmod-$(RPM_VERSION).tar
all: module
module:
$(MAKE) $(SCOUTFS_ARGS)
$(SP) $(MAKE) C=2 CF="-D__CHECK_ENDIAN__" $(SCOUTFS_ARGS)
make $(SCOUTFS_ARGS)
$(SP) make C=2 CF="-D__CHECK_ENDIAN__" $(SCOUTFS_ARGS)
modules_install:
$(MAKE) $(SCOUTFS_ARGS) modules_install
make $(SCOUTFS_ARGS) modules_install
%.spec: %.spec.in .FORCE
@@ -50,4 +50,4 @@ dist: scoutfs-kmod.spec
@ tar rf $(TARFILE) --transform="s@\(.*\)@scoutfs-kmod-$(RPM_VERSION)/\1@" scoutfs-kmod.spec
clean:
$(MAKE) $(SCOUTFS_ARGS) clean
make $(SCOUTFS_ARGS) clean

View File

@@ -3,28 +3,16 @@
%define kmod_git_hash @@GITHASH@@
%define pkg_date %(date +%%Y%%m%%d)
# Disable the building of the debug package(s).
%define debug_package %{nil}
# take kernel version or default to uname -r
%{!?kversion: %global kversion %(uname -r)}
%global kernel_version %{kversion}
%if 0%{?el7}
%global kernel_source() /usr/src/kernels/%{kernel_version}.$(arch)
%endif
%if 0%{?el8}
%global kernel_source() /usr/src/kernels/%{kernel_version}
%endif
%global kernel_release() %{kversion}
%{!?_release: %global _release 0.%{pkg_date}git%{kmod_git_hash}}
%if 0%{?el7}
Name: %{kmod_name}
%endif
%if 0%{?el8}
Name: kmod-%{kmod_name}
%endif
Summary: %{kmod_name} kernel module
Version: %{kmod_version}
Release: %{_release}%{?dist}
@@ -32,30 +20,24 @@ License: GPLv2
Group: System/Kernel
URL: http://scoutfs.org/
%if 0%{?el7}
BuildRequires: %{kernel_module_package_buildreqs}
%endif
%if 0%{?el8}
BuildRequires: elfutils-libelf-devel
%endif
BuildRequires: kernel-devel-uname-r = %{kernel_version}
BuildRequires: git
BuildRequires: kernel-devel-uname-r = %{kernel_version}
BuildRequires: module-init-tools
ExclusiveArch: x86_64
Source: %{kmod_name}-kmod-%{kmod_version}.tar
%if 0%{?el7}
# Build only for standard kernel variant(s); for debug packages, append "debug"
# after "default" (separated by space)
%kernel_module_package default
%endif
%global install_mod_dir extra/%{kmod_name}
%if 0%{?el8}
%global flavors_to_build x86_64
%endif
# Disable the building of the debug package(s).
%define debug_package %{nil}
%global install_mod_dir extra/%{name}
%description
%{kmod_name} - kernel module
@@ -84,7 +66,7 @@ export INSTALL_MOD_DIR=%{install_mod_dir}
mkdir -p %{install_mod_dir}
for flavor in %{flavors_to_build}; do
export KSRC=%{kernel_source $flavor}
export KVERSION=%{kversion}
export KVERSION=%{kernel_release $KSRC}
install -d $INSTALL_MOD_PATH/lib/modules/$KVERSION/%{install_mod_dir}
cp $PWD/obj/$flavor/src/scoutfs.ko $INSTALL_MOD_PATH/lib/modules/$KVERSION/%{install_mod_dir}/
done
@@ -92,14 +74,6 @@ done
# mark modules executable so that strip-to-file can strip them
find %{buildroot} -type f -name \*.ko -exec %{__chmod} u+x \{\} \;
%if 0%{?el8}
%files
/lib/modules
%post
weak-modules --add-kernel --no-initramfs
depmod -a
%endif
%clean
rm -rf %{buildroot}

View File

@@ -8,7 +8,6 @@ CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
-include $(src)/Makefile.kernelcompat
scoutfs-y += \
acl.o \
avl.o \
alloc.o \
block.o \
@@ -25,7 +24,6 @@ scoutfs-y += \
inode.o \
ioctl.o \
item.o \
kernelcompat.o \
lock.o \
lock_server.o \
msg.o \

View File

@@ -26,16 +26,6 @@ ifneq (,$(shell grep 'dir_emit_dots' include/linux/fs.h))
ccflags-y += -DKC_DIR_EMIT_DOTS
endif
#
# v3.18-rc2-19-gb5ae6b15bd73
#
# Folds d_materialise_unique into d_splice_alias. Note reversal
# of arguments (Also note Documentation/filesystems/porting.rst)
#
ifneq (,$(shell grep 'd_materialise_unique' include/linux/dcache.h))
ccflags-y += -DKC_D_MATERIALISE_UNIQUE=1
endif
#
# RHEL extended the fop struct so to use it we have to set
# a flag to indicate that the struct is large enough and
@@ -44,217 +34,3 @@ endif
ifneq (,$(shell grep 'FMODE_KABI_ITERATE' include/linux/fs.h))
ccflags-y += -DKC_FMODE_KABI_ITERATE
endif
#
# v4.7-rc2-23-g0d4d717f2583
#
# Added user_ns argument to posix_acl_valid
#
ifneq (,$(shell grep 'posix_acl_valid.*user_namespace' include/linux/posix_acl.h))
ccflags-y += -DKC_POSIX_ACL_VALID_USER_NS
endif
#
# v5.3-12296-g6d2052d188d9
#
# The RBCOMPUTE function is now passed an extra flag, and should return a bool
# to indicate whether the propagated callback should stop or not.
#
ifneq (,$(shell grep 'static inline bool RBNAME.*_compute_max' include/linux/rbtree_augmented.h))
ccflags-y += -DKC_RB_TREE_AUGMENTED_COMPUTE_MAX
endif
#
# v3.13-25-g37bc15392a23
#
# Renames posix_acl_create to __posix_acl_create and provides some
# new interfaces for creating ACLs
#
ifneq (,$(shell grep '__posix_acl_create' include/linux/posix_acl.h))
ccflags-y += -DKC___POSIX_ACL_CREATE
endif
#
# v4.8-rc1-29-g31051c85b5e2
#
# inode_change_ok() removed - replace with setattr_prepare()
#
ifneq (,$(shell grep 'extern int setattr_prepare' include/linux/fs.h))
ccflags-y += -DKC_SETATTR_PREPARE
endif
#
# v4.15-rc3-4-gae5e165d855d
#
# linux/iversion.h needs to manually be included for code that
# manipulates this field.
#
ifneq (,$(shell grep -s 'define _LINUX_IVERSION_H' include/linux/iversion.h))
ccflags-y += -DKC_NEED_LINUX_IVERSION_H=1
endif
# v4.11-12447-g104b4e5139fe
#
# Renamed __percpu_counter_add to percpu_counter_add_batch to clarify
# that the __ wasn't less safe, just took an extra parameter.
#
ifneq (,$(shell grep 'percpu_counter_add_batch' include/linux/percpu_counter.h))
ccflags-y += -DKC_PERCPU_COUNTER_ADD_BATCH
endif
#
# v4.11-4550-g7dea19f9ee63
#
# Introduced memalloc_nofs_{save,restore} preferred instead of _noio_.
#
ifneq (,$(shell grep 'memalloc_nofs_save' include/linux/sched/mm.h))
ccflags-y += -DKC_MEMALLOC_NOFS_SAVE
endif
#
# v4.7-12414-g1eff9d322a44
#
# Renamed bi_rw to bi_opf to force old code to catch up. We use it as a
# single switch between old and new bio structures.
#
ifneq (,$(shell grep 'bi_opf' include/linux/blk_types.h))
ccflags-y += -DKC_BIO_BI_OPF
endif
#
# v4.12-rc2-201-g4e4cbee93d56
#
# Moves to bi_status BLK_STS_ API instead of having a mix of error
# end_io args or bi_error.
#
ifneq (,$(shell grep 'bi_status' include/linux/blk_types.h))
ccflags-y += -DKC_BIO_BI_STATUS
endif
#
# v3.11-8765-ga0b02131c5fc
#
# Remove the old ->shrink() API, ->{scan,count}_objects is preferred.
#
ifneq (,$(shell grep '(*shrink)' include/linux/shrinker.h))
ccflags-y += -DKC_SHRINKER_SHRINK
endif
#
# v3.19-4777-g6bec00352861
#
# backing_dev_info is removed from address_space. Instead we need to use
# inode_to_bdi() inline from <backing-dev.h>.
#
ifneq (,$(shell grep 'struct backing_dev_info.*backing_dev_info' include/linux/fs.h))
ccflags-y += -DKC_LINUX_BACKING_DEV_INFO=1
endif
#
# v4.3-9290-ge409de992e3e
#
# xattr handlers are now passed a struct that contains `flags`
#
ifneq (,$(shell grep 'int...get..const struct xattr_handler.*struct dentry.*dentry,' include/linux/xattr.h))
ccflags-y += -DKC_XATTR_STRUCT_XATTR_HANDLER=1
endif
#
# v4.16-rc1-1-g9b2c45d479d0
#
# kernel_getsockname() and kernel_getpeername dropped addrlen arg
#
ifneq (,$(shell grep 'kernel_getsockname.*,$$' include/linux/net.h))
ccflags-y += -DKC_KERNEL_GETSOCKNAME_ADDRLEN=1
endif
#
# v4.1-rc1-410-geeb1bd5c40ed
#
# Adds a struct net parameter to sock_create_kern
#
ifneq (,$(shell grep 'sock_create_kern.*struct net' include/linux/net.h))
ccflags-y += -DKC_SOCK_CREATE_KERN_NET=1
endif
#
# v3.18-rc6-1619-gc0371da6047a
#
# iov_iter is now part of struct msghdr
#
ifneq (,$(shell grep 'struct iov_iter.*msg_iter' include/linux/socket.h))
ccflags-y += -DKC_MSGHDR_STRUCT_IOV_ITER=1
endif
#
# v4.17-rc6-7-g95582b008388
#
# Kernel has current_time(inode) to uniformly retrieve timespec in the right unit
#
ifneq (,$(shell grep 'extern struct timespec64 current_time' include/linux/fs.h))
ccflags-y += -DKC_CURRENT_TIME_INODE=1
endif
#
# v4.9-12228-g530e9b76ae8f
#
# register_cpu_notifier and family were all removed, to be
# replaced with cpuhp_* API calls.
#
ifneq (,$(shell grep 'define register_hotcpu_notifier' include/linux/cpu.h))
ccflags-y += -DKC_CPU_NOTIFIER
endif
#
# v3.14-rc8-130-gccad2365668f
#
# generic_file_buffered_write is removed, backport it
#
ifneq (,$(shell grep 'extern ssize_t generic_file_buffered_write' include/linux/fs.h))
ccflags-y += -DKC_GENERIC_FILE_BUFFERED_WRITE=1
endif
#
# v5.7-438-g8151b4c8bee4
#
# struct address_space_operations switches away from .readpages to .readahead
#
# RHEL has backported this feature all the way to RHEL8, as part of RHEL_KABI,
# which means we need to detect this very precisely
#
ifneq (,$(shell grep 'readahead.*struct readahead_control' include/linux/fs.h))
ccflags-y += -DKC_FILE_AOPS_READAHEAD
endif
#
# v4.0-rc7-1743-g8436318205b9
#
# .aio_read and .aio_write no longer exist. All reads and writes now use the
# .read_iter and .write_iter methods, or must implement .read and .write (which
# we don't).
#
ifneq (,$(shell grep 'ssize_t.*aio_read' include/linux/fs.h))
ccflags-y += -DKC_LINUX_HAVE_FOP_AIO_READ=1
endif
#
# rhel7 has a custom inode_operations_wrapper struct that is discarded
# entirely in favor of upstream structure since rhel8.
#
ifneq (,$(shell grep 'void.*follow_link.*struct dentry' include/linux/fs.h))
ccflags-y += -DKC_LINUX_HAVE_RHEL_IOPS_WRAPPER=1
endif
ifneq (,$(shell grep 'size_t.*ki_left;' include/linux/aio.h))
ccflags-y += -DKC_LINUX_AIO_KI_LEFT=1
endif
#
# v4.4-rc4-4-g98e9cb5711c6
#
# Introduces a new xattr_handler .name member that can be used to match the
# entire field, instead of just a prefix. For these kernels, we must use
# the new .name field instead.
ifneq (,$(shell grep 'static inline const char .xattr_prefix' include/linux/xattr.h))
ccflags-y += -DKC_XATTR_HANDLER_NAME=1
endif
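
Each probe in this kernelcompat Makefile greps the kernel headers for a symbol and, when it is found, defines a KC_* ccflag that the C sources use to choose between old and new kernel interfaces. Purely as an illustration of that pattern (the real shims live in kernelcompat.h/kernelcompat.o, which this change also drops, and may look different), a flag such as KC_SETATTR_PREPARE could gate a fallback like:

/* Hypothetical compat shim, not the actual scoutfs code: on kernels
 * without setattr_prepare() fall back to the older inode_change_ok()
 * helper it replaced (see the v4.8-rc1-29-g31051c85b5e2 probe above). */
#ifndef KC_SETATTR_PREPARE
static inline int setattr_prepare(struct dentry *dentry, struct iattr *attr)
{
	return inode_change_ok(dentry->d_inode, attr);
}
#endif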

View File

@@ -1,378 +0,0 @@
/*
* Copyright (C) 2022 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/xattr.h>
#include <linux/posix_acl.h>
#include <linux/posix_acl_xattr.h>
#include "format.h"
#include "super.h"
#include "scoutfs_trace.h"
#include "xattr.h"
#include "acl.h"
#include "inode.h"
#include "trans.h"
/*
* POSIX draft ACLs are stored as full xattr items with the entries
* encoded as the kernel's posix_acl_xattr_{header,entry} value structs.
*
* They're accessed and modified via user facing synthetic xattrs, iops
* calls from the kernel, during inode mode changes, and during inode
* creation.
*
* ACL access devolves into xattr access which is relatively expensive
* so we maintain the cached native form in the vfs inode. We drop the
* cache in lock invalidation which means that cached acl access must
* always be performed under cluster locking.
*/
static int acl_xattr_name_len(int type, char **name, size_t *name_len)
{
int ret = 0;
switch (type) {
case ACL_TYPE_ACCESS:
*name = XATTR_NAME_POSIX_ACL_ACCESS;
if (name_len)
*name_len = sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1;
break;
case ACL_TYPE_DEFAULT:
*name = XATTR_NAME_POSIX_ACL_DEFAULT;
if (name_len)
*name_len = sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1;
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock)
{
struct posix_acl *acl;
char *value = NULL;
char *name;
int ret;
#ifndef KC___POSIX_ACL_CREATE
if (!IS_POSIXACL(inode))
return NULL;
acl = get_cached_acl(inode, type);
if (acl != ACL_NOT_CACHED)
return acl;
#endif
ret = acl_xattr_name_len(type, &name, NULL);
if (ret < 0)
return ERR_PTR(ret);
ret = scoutfs_xattr_get_locked(inode, name, NULL, 0, lock);
if (ret > 0) {
value = kzalloc(ret, GFP_NOFS);
if (!value)
ret = -ENOMEM;
else
ret = scoutfs_xattr_get_locked(inode, name, value, ret, lock);
}
if (ret > 0) {
acl = posix_acl_from_xattr(&init_user_ns, value, ret);
} else if (ret == -ENODATA || ret == 0) {
acl = NULL;
} else {
acl = ERR_PTR(ret);
}
#ifndef KC___POSIX_ACL_CREATE
/* can set null negative cache */
if (!IS_ERR(acl))
set_cached_acl(inode, type, acl);
#endif
kfree(value);
return acl;
}
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
struct posix_acl *acl;
int ret;
#ifndef KC___POSIX_ACL_CREATE
if (!IS_POSIXACL(inode))
return NULL;
#endif
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
if (ret < 0) {
acl = ERR_PTR(ret);
} else {
acl = scoutfs_get_acl_locked(inode, type, lock);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
}
return acl;
}
/*
* The caller has acquired the locks and dirtied the inode, they'll
* update the inode item if we return 0.
*/
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
struct scoutfs_lock *lock, struct list_head *ind_locks)
{
static const struct scoutfs_xattr_prefix_tags tgs = {0,}; /* never scoutfs. prefix */
bool set_mode = false;
char *value = NULL;
umode_t new_mode;
size_t name_len;
char *name;
int size = 0;
int ret;
ret = acl_xattr_name_len(type, &name, &name_len);
if (ret < 0)
return ret;
switch (type) {
case ACL_TYPE_ACCESS:
if (acl) {
ret = posix_acl_update_mode(inode, &new_mode, &acl);
if (ret < 0)
goto out;
set_mode = true;
}
break;
case ACL_TYPE_DEFAULT:
if (!S_ISDIR(inode->i_mode)) {
ret = acl ? -EINVAL : 0;
goto out;
}
break;
}
if (acl) {
size = posix_acl_xattr_size(acl->a_count);
value = kmalloc(size, GFP_NOFS);
if (!value) {
ret = -ENOMEM;
goto out;
}
ret = posix_acl_to_xattr(&init_user_ns, acl, value, size);
if (ret < 0)
goto out;
}
ret = scoutfs_xattr_set_locked(inode, name, name_len, value, size, 0, &tgs,
lock, NULL, ind_locks);
if (ret == 0 && set_mode) {
inode->i_mode = new_mode;
if (!value) {
/* can be setting an acl that only affects mode, didn't need xattr */
inode_inc_iversion(inode);
inode->i_ctime = current_time(inode);
}
}
out:
#ifndef KC___POSIX_ACL_CREATE
if (!ret)
set_cached_acl(inode, type, acl);
#endif
kfree(value);
return ret;
}
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
{
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
LIST_HEAD(ind_locks);
int ret;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE, SCOUTFS_LKF_REFRESH_INODE, inode, &lock) ?:
scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
if (ret == 0) {
ret = scoutfs_dirty_inode_item(inode, lock) ?:
scoutfs_set_acl_locked(inode, acl, type, lock, &ind_locks);
if (ret == 0)
scoutfs_update_inode_item(inode, lock, &ind_locks);
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
}
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
return ret;
}
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_get_xattr(const struct xattr_handler *handler, struct dentry *dentry,
struct inode *inode, const char *name, void *value,
size_t size)
{
int type = handler->flags;
#else
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
int type)
{
#endif
struct posix_acl *acl;
int ret = 0;
if (!IS_POSIXACL(dentry->d_inode))
return -EOPNOTSUPP;
acl = scoutfs_get_acl(dentry->d_inode, type);
if (IS_ERR(acl))
return PTR_ERR(acl);
if (acl == NULL)
return -ENODATA;
ret = posix_acl_to_xattr(&init_user_ns, acl, value, size);
posix_acl_release(acl);
return ret;
}
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_set_xattr(const struct xattr_handler *handler, struct dentry *dentry,
struct inode *inode, const char *name, const void *value,
size_t size, int flags)
{
int type = handler->flags;
#else
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
int flags, int type)
{
#endif
struct posix_acl *acl = NULL;
int ret;
if (!inode_owner_or_capable(dentry->d_inode))
return -EPERM;
if (!IS_POSIXACL(dentry->d_inode))
return -EOPNOTSUPP;
if (value) {
acl = posix_acl_from_xattr(&init_user_ns, value, size);
if (IS_ERR(acl))
return PTR_ERR(acl);
if (acl) {
ret = kc_posix_acl_valid(&init_user_ns, acl);
if (ret)
goto out;
}
}
ret = scoutfs_set_acl(dentry->d_inode, acl, type);
out:
posix_acl_release(acl);
return ret;
}
/*
* Apply the parent's default acl to a new inode's access acl and inherit
* it as the default for new directories. The caller holds locks and a
* transaction.
*/
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
struct scoutfs_lock *lock, struct scoutfs_lock *dir_lock,
struct list_head *ind_locks)
{
struct posix_acl *acl = NULL;
int ret = 0;
if (!S_ISLNK(inode->i_mode)) {
if (IS_POSIXACL(dir)) {
acl = scoutfs_get_acl_locked(dir, ACL_TYPE_DEFAULT, dir_lock);
if (IS_ERR(acl))
return PTR_ERR(acl);
}
if (!acl)
inode->i_mode &= ~current_umask();
}
if (IS_POSIXACL(dir) && acl) {
if (S_ISDIR(inode->i_mode)) {
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_DEFAULT,
lock, ind_locks);
if (ret)
goto out;
}
ret = __posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
if (ret < 0)
return ret;
if (ret > 0)
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_ACCESS,
lock, ind_locks);
} else {
cache_no_acl(inode);
}
out:
posix_acl_release(acl);
return ret;
}
/*
* Update the access ACL based on a newly set mode. If we return an
* error then the xattr wasn't changed.
*
* Annoyingly, setattr_copy has logic that transforms the final set mode
* that we want to use to update the acl. But we don't want to modify
* the other inode fields while discovering the resulting mode. We're
* relying on acl_chmod not caring about the transformation (currently
* just clears sgid). It would be better if we could get the resulting
* mode to give to acl_chmod without modifying the other inode fields.
*
* The caller has the inode mutex, a cluster lock, transaction, and will
* update the inode item if we return success.
*/
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
struct scoutfs_lock *lock, struct list_head *ind_locks)
{
struct posix_acl *acl;
int ret = 0;
if (!IS_POSIXACL(inode) || !(attr->ia_valid & ATTR_MODE))
return 0;
if (S_ISLNK(inode->i_mode))
return -EOPNOTSUPP;
acl = scoutfs_get_acl_locked(inode, ACL_TYPE_ACCESS, lock);
if (IS_ERR_OR_NULL(acl))
return PTR_ERR(acl);
ret = __posix_acl_chmod(&acl, GFP_KERNEL, attr->ia_mode);
if (ret)
return ret;
ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_ACCESS, lock, ind_locks);
posix_acl_release(acl);
return ret;
}

View File

@@ -1,27 +0,0 @@
#ifndef _SCOUTFS_ACL_H_
#define _SCOUTFS_ACL_H_
struct posix_acl *scoutfs_get_acl(struct inode *inode, int type);
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock);
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type);
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
struct scoutfs_lock *lock, struct list_head *ind_locks);
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_get_xattr(const struct xattr_handler *, struct dentry *dentry,
struct inode *inode, const char *name, void *value,
size_t size);
int scoutfs_acl_set_xattr(const struct xattr_handler *, struct dentry *dentry,
struct inode *inode, const char *name, const void *value,
size_t size, int flags);
#else
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
int type);
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
int flags, int type);
#endif
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
struct scoutfs_lock *lock, struct list_head *ind_locks);
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
struct scoutfs_lock *lock, struct scoutfs_lock *dir_lock,
struct list_head *ind_locks);
#endif
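
These acl.h declarations pair with the acl.c handlers above; in the KC_XATTR_STRUCT_XATTR_HANDLER case the ACL type is carried in handler->flags. A minimal sketch of how such handlers are typically wired into an xattr_handler table (hypothetical, the actual scoutfs table is not shown in this diff):

/* Illustrative only; scoutfs's real handler table may differ. */
static const struct xattr_handler scoutfs_acl_access_handler = {
	.name	= XATTR_NAME_POSIX_ACL_ACCESS,
	.flags	= ACL_TYPE_ACCESS,
	.get	= scoutfs_acl_get_xattr,
	.set	= scoutfs_acl_set_xattr,
};

static const struct xattr_handler scoutfs_acl_default_handler = {
	.name	= XATTR_NAME_POSIX_ACL_DEFAULT,
	.flags	= ACL_TYPE_DEFAULT,
	.get	= scoutfs_acl_get_xattr,
	.set	= scoutfs_acl_set_xattr,
};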

View File

@@ -892,11 +892,12 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
* -ENOENT is returned if we run out of extents in the source tree
* before moving the total.
*
* If meta_budget is non-zero then -EINPROGRESS can be returned if the
* caller's budget is consumed in the allocator during this call
* (though not necessarily by us, we don't have per-thread tracking of
* allocator consumption :/). The call can still have made progress and
* the caller is expected to commit the dirty trees and examine the resulting
* If meta_reserved is non-zero then -EINPROGRESS can be returned if the
* current meta allocator's avail blocks or room for freed blocks would
* have fallen under the reserved amount. The trees could have been
* successfully dirtied in this case but the number of blocks moved is
* not returned. The caller is expected to deal with the partial
* progress by committing the dirty trees and examining the resulting
* modified trees to see if they need to continue moving extents.
*
* The caller can specify that extents in the source tree should first
@@ -913,7 +914,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget)
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved)
{
struct alloc_ext_args args = {
.alloc = alloc,
@@ -921,8 +922,6 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
};
struct scoutfs_extent found;
struct scoutfs_extent ext;
u32 avail_start = 0;
u32 freed_start = 0;
u64 moved = 0;
u64 count;
int ret = 0;
@@ -933,9 +932,6 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
vacant = NULL;
}
if (meta_budget != 0)
scoutfs_alloc_meta_remaining(alloc, &avail_start, &freed_start);
while (moved < total) {
count = total - moved;
@@ -968,24 +964,14 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
if (ret < 0)
break;
if (meta_budget != 0 &&
scoutfs_alloc_meta_low_since(alloc, avail_start, freed_start, meta_budget,
extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
if (meta_reserved != 0 &&
scoutfs_alloc_meta_low(sb, alloc, meta_reserved +
extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
ret = -EINPROGRESS;
break;
}
/* return partial if the server alloc can't dirty any more */
if (scoutfs_alloc_meta_low(sb, alloc, 50 + extent_mod_blocks(src->root.height) +
extent_mod_blocks(dst->root.height))) {
if (WARN_ON_ONCE(!moved))
ret = -ENOSPC;
else
ret = 0;
break;
}
/* searching set start/len, finish initializing alloced extent */
ext.map = found.map ? ext.start - found.start + found.map : 0;
ext.flags = found.flags;
@@ -1365,27 +1351,6 @@ void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total,
} while (read_seqretry(&alloc->seqlock, seq));
}
/*
* Returns true if the caller's consumption of nr from either avail or
* freed would end up exceeding their budget relative to the starting
* remaining snapshot they took.
*/
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
u32 budget, u32 nr)
{
u32 avail_use;
u32 freed_use;
u32 avail;
u32 freed;
scoutfs_alloc_meta_remaining(alloc, &avail, &freed);
avail_use = avail_start - avail;
freed_use = freed_start - freed;
return ((avail_use + nr) > budget) || ((freed_use + nr) > budget);
}
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag)
{
@@ -1582,10 +1547,12 @@ out:
* call the caller's callback. This assumes that the super it's reading
* could be stale and will retry if it encounters stale blocks.
*/
int scoutfs_alloc_foreach(struct super_block *sb, scoutfs_alloc_foreach_cb_t cb, void *arg)
int scoutfs_alloc_foreach(struct super_block *sb,
scoutfs_alloc_foreach_cb_t cb, void *arg)
{
struct scoutfs_super_block *super = NULL;
DECLARE_SAVED_REFS(saved);
struct scoutfs_block_ref stale_refs[2] = {{0,}};
struct scoutfs_block_ref refs[2] = {{0,}};
int ret;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
@@ -1594,18 +1561,26 @@ int scoutfs_alloc_foreach(struct super_block *sb, scoutfs_alloc_foreach_cb_t cb,
goto out;
}
do {
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
retry:
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);
ret = scoutfs_block_check_stale(sb, ret, &saved, &super->logs_root.ref,
&super->srch_root.ref);
} while (ret == -ESTALE);
refs[0] = super->logs_root.ref;
refs[1] = super->srch_root.ref;
ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);
out:
if (ret == -ESTALE) {
if (memcmp(&stale_refs, &refs, sizeof(refs)) == 0) {
ret = -EIO;
} else {
BUILD_BUG_ON(sizeof(stale_refs) != sizeof(refs));
memcpy(stale_refs, refs, sizeof(stale_refs));
goto retry;
}
}
kfree(super);
return ret;
}
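
With the meta_budget tracking replaced by a simple meta_reserved threshold, scoutfs_alloc_move() now stops with -EINPROGRESS as soon as the meta allocator would dip under the reserve, leaving the caller to commit the dirtied trees and decide whether to continue. A rough sketch of that caller pattern, where RESERVED and commit_dirty_trees() are hypothetical stand-ins for the server's real reserve value and commit path:

/* Hypothetical caller loop; RESERVED and commit_dirty_trees() are
 * illustrative, not scoutfs symbols. */
#define RESERVED 64

static int move_until_done(struct super_block *sb, struct scoutfs_alloc *alloc,
			   struct scoutfs_block_writer *wri,
			   struct scoutfs_alloc_root *dst,
			   struct scoutfs_alloc_root *src, u64 total)
{
	int err;
	int ret;

	do {
		ret = scoutfs_alloc_move(sb, alloc, wri, dst, src, total,
					 NULL, NULL, 0, RESERVED);
		if (ret == 0 || ret == -EINPROGRESS || ret == -ENOENT) {
			/* whatever was moved is only dirty; make it durable */
			err = commit_dirty_trees(sb, wri);
			if (err < 0)
				return err;
		}
		/* on -EINPROGRESS re-examine the trees and keep moving */
	} while (ret == -EINPROGRESS);

	return ret;
}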

View File

@@ -19,11 +19,14 @@
(128ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
* The default size that we'll try to preallocate. This is trying to
* hit the limit of large efficient device writes while minimizing
* wasted preallocation that is never used.
* The largest aligned region that we'll try to allocate at the end of
* the file as it's extended. This is also limited to the current file
* size so we can only waste at most twice the total file size when
* files are less than this. We try to keep this around the point of
* diminishing returns in streaming performance of common data devices
* to limit waste.
*/
#define SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS \
#define SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT \
(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)
/*
@@ -128,7 +131,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 total,
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget);
__le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_reserved);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
u64 start, u64 len);
@@ -156,8 +159,6 @@ int scoutfs_alloc_splice_list(struct super_block *sb,
bool scoutfs_alloc_meta_low(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 nr);
void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total, u32 *freed_space);
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
u32 budget, u32 nr);
bool scoutfs_alloc_test_flag(struct super_block *sb,
struct scoutfs_alloc *alloc, u32 flag);

View File

@@ -21,7 +21,6 @@
#include <linux/blkdev.h>
#include <linux/rhashtable.h>
#include <linux/random.h>
#include <linux/sched/mm.h>
#include "format.h"
#include "super.h"
@@ -31,7 +30,6 @@
#include "scoutfs_trace.h"
#include "alloc.h"
#include "triggers.h"
#include "util.h"
/*
* The scoutfs block cache manages metadata blocks that can be larger
@@ -59,7 +57,7 @@ struct block_info {
atomic64_t access_counter;
struct rhashtable ht;
wait_queue_head_t waitq;
KC_DEFINE_SHRINKER(shrinker);
struct shrinker shrinker;
struct work_struct free_work;
struct llist_head free_llist;
};
@@ -130,7 +128,7 @@ static __le32 block_calc_crc(struct scoutfs_block_header *hdr, u32 size)
static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
{
struct block_private *bp;
unsigned int nofs_flags;
unsigned int noio_flags;
/*
* If we had multiple blocks per page we'd need to be a little
@@ -158,9 +156,9 @@ static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
* spurious reclaim-on dependencies and warnings.
*/
lockdep_off();
nofs_flags = memalloc_nofs_save();
noio_flags = memalloc_noio_save();
bp->virt = __vmalloc(SCOUTFS_BLOCK_LG_SIZE, GFP_NOFS | __GFP_HIGHMEM, PAGE_KERNEL);
memalloc_nofs_restore(nofs_flags);
memalloc_noio_restore(noio_flags);
lockdep_on();
if (!bp->virt) {
@@ -438,10 +436,11 @@ static void block_remove_all(struct super_block *sb)
* possible. Final freeing, verifying checksums, and unlinking errored
* blocks are all done by future users of the blocks.
*/
static void block_end_io(struct super_block *sb, unsigned int opf,
static void block_end_io(struct super_block *sb, int rw,
struct block_private *bp, int err)
{
DECLARE_BLOCK_INFO(sb, binf);
bool is_read = !(rw & WRITE);
if (err) {
scoutfs_inc_counter(sb, block_cache_end_io_error);
@@ -451,7 +450,7 @@ static void block_end_io(struct super_block *sb, unsigned int opf,
if (!atomic_dec_and_test(&bp->io_count))
return;
if (!op_is_write(opf) && !test_bit(BLOCK_BIT_ERROR, &bp->bits))
if (is_read && !test_bit(BLOCK_BIT_ERROR, &bp->bits))
set_bit(BLOCK_BIT_UPTODATE, &bp->bits);
clear_bit(BLOCK_BIT_IO_BUSY, &bp->bits);
@@ -464,13 +463,13 @@ static void block_end_io(struct super_block *sb, unsigned int opf,
wake_up(&binf->waitq);
}
static void KC_DECLARE_BIO_END_IO(block_bio_end_io, struct bio *bio)
static void block_bio_end_io(struct bio *bio, int err)
{
struct block_private *bp = bio->bi_private;
struct super_block *sb = bp->sb;
TRACE_BLOCK(end_io, bp);
block_end_io(sb, kc_bio_get_opf(bio), bp, kc_bio_get_errno(bio));
block_end_io(sb, bio->bi_rw, bp, err);
bio_put(bio);
}
@@ -478,7 +477,7 @@ static void KC_DECLARE_BIO_END_IO(block_bio_end_io, struct bio *bio)
* Kick off IO for a single block.
*/
static int block_submit_bio(struct super_block *sb, struct block_private *bp,
unsigned int opf)
int rw)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct bio *bio = NULL;
@@ -511,9 +510,8 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
break;
}
kc_bio_set_opf(bio, opf);
kc_bio_set_sector(bio, sector + (off >> 9));
bio_set_dev(bio, sbi->meta_bdev);
bio->bi_sector = sector + (off >> 9);
bio->bi_bdev = sbi->meta_bdev;
bio->bi_end_io = block_bio_end_io;
bio->bi_private = bp;
@@ -530,18 +528,18 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
BUG();
if (!bio_add_page(bio, page, PAGE_SIZE, 0)) {
kc_submit_bio(bio);
submit_bio(rw, bio);
bio = NULL;
}
}
if (bio)
kc_submit_bio(bio);
submit_bio(rw, bio);
blk_finish_plug(&plug);
/* let racing end_io know we're done */
block_end_io(sb, opf, bp, ret);
block_end_io(sb, rw, bp, ret);
return ret;
}
@@ -642,7 +640,7 @@ static struct block_private *block_read(struct super_block *sb, u64 blkno)
if (!test_bit(BLOCK_BIT_UPTODATE, &bp->bits) &&
test_and_clear_bit(BLOCK_BIT_NEW, &bp->bits)) {
ret = block_submit_bio(sb, bp, REQ_OP_READ);
ret = block_submit_bio(sb, bp, READ);
if (ret < 0)
goto out;
}
@@ -679,7 +677,7 @@ out:
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_block_header *hdr;
struct block_private *bp = NULL;
bool retried = false;
@@ -703,7 +701,7 @@ retry:
set_bit(BLOCK_BIT_CRC_VALID, &bp->bits);
}
if (hdr->magic != cpu_to_le32(magic) || hdr->fsid != cpu_to_le64(sbi->fsid) ||
if (hdr->magic != cpu_to_le32(magic) || hdr->fsid != super->hdr.fsid ||
hdr->seq != ref->seq || hdr->blkno != ref->blkno) {
ret = -ESTALE;
goto out;
@@ -730,36 +728,6 @@ out:
return ret;
}
static bool stale_refs_match(struct scoutfs_block_ref *caller, struct scoutfs_block_ref *saved)
{
return !caller || (caller->blkno == saved->blkno && caller->seq == saved->seq);
}
/*
* Check if a read of a reference that gave ESTALE should be retried or
* should generate a hard error. If this is the second time we got
* ESTALE from the same refs then we return EIO and the caller should
* stop. As long as we keep seeing different refs we'll return ESTALE
* and the caller can keep trying.
*/
int scoutfs_block_check_stale(struct super_block *sb, int ret,
struct scoutfs_block_saved_refs *saved,
struct scoutfs_block_ref *a, struct scoutfs_block_ref *b)
{
if (ret == -ESTALE) {
if (stale_refs_match(a, &saved->refs[0]) && stale_refs_match(b, &saved->refs[1])){
ret = -EIO;
} else {
if (a)
saved->refs[0] = *a;
if (b)
saved->refs[1] = *b;
}
}
return ret;
}
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl)
{
if (!IS_ERR_OR_NULL(bl))
@@ -829,7 +797,7 @@ int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
u32 magic, struct scoutfs_block **bl_ret,
u64 dirty_blkno, u64 *ref_blkno)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_block *cow_bl = NULL;
struct scoutfs_block *bl = NULL;
struct block_private *exist_bp = NULL;
@@ -897,7 +865,7 @@ int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
hdr = bl->data;
hdr->magic = cpu_to_le32(magic);
hdr->fsid = cpu_to_le64(sbi->fsid);
hdr->fsid = super->hdr.fsid;
hdr->blkno = cpu_to_le64(bl->blkno);
prandom_bytes(&hdr->seq, sizeof(hdr->seq));
@@ -971,7 +939,7 @@ int scoutfs_block_writer_write(struct super_block *sb,
/* retry previous write errors */
clear_bit(BLOCK_BIT_ERROR, &bp->bits);
ret = block_submit_bio(sb, bp, REQ_OP_WRITE);
ret = block_submit_bio(sb, bp, WRITE);
if (ret < 0)
break;
}
@@ -1071,16 +1039,6 @@ u64 scoutfs_block_writer_dirty_bytes(struct super_block *sb,
return wri->nr_dirty_blocks * SCOUTFS_BLOCK_LG_SIZE;
}
static unsigned long block_count_objects(struct shrinker *shrink, struct shrink_control *sc)
{
struct block_info *binf = KC_SHRINKER_CONTAINER_OF(shrink, struct block_info);
struct super_block *sb = binf->sb;
scoutfs_inc_counter(sb, block_cache_count_objects);
return shrinker_min_long(atomic_read(&binf->total_inserted));
}
/*
* Remove a number of cached blocks that haven't been used recently.
*
@@ -1101,19 +1059,25 @@ static unsigned long block_count_objects(struct shrinker *shrink, struct shrink_
* atomically remove blocks when the only references are ours and the
* hash table.
*/
static unsigned long block_scan_objects(struct shrinker *shrink, struct shrink_control *sc)
static int block_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
struct block_info *binf = KC_SHRINKER_CONTAINER_OF(shrink, struct block_info);
struct block_info *binf = container_of(shrink, struct block_info,
shrinker);
struct super_block *sb = binf->sb;
struct rhashtable_iter iter;
struct block_private *bp;
bool stop = false;
unsigned long freed = 0;
unsigned long nr = sc->nr_to_scan;
unsigned long nr;
u64 recently;
scoutfs_inc_counter(sb, block_cache_scan_objects);
nr = sc->nr_to_scan;
if (nr == 0)
goto out;
scoutfs_inc_counter(sb, block_cache_shrink);
nr = DIV_ROUND_UP(nr, SCOUTFS_BLOCK_LG_PAGES_PER);
restart:
recently = accessed_recently(binf);
rhashtable_walk_enter(&binf->ht, &iter);
rhashtable_walk_start(&iter);
@@ -1135,15 +1099,12 @@ static unsigned long block_scan_objects(struct shrinker *shrink, struct shrink_c
if (bp == NULL)
break;
if (bp == ERR_PTR(-EAGAIN)) {
/*
* We can be called from reclaim in the allocation
* to resize the hash table itself. We have to
* return so that the caller can proceed and
* enable hash table iteration again.
*/
scoutfs_inc_counter(sb, block_cache_shrink_stop);
stop = true;
break;
/* hard exit to wait for rcu rebalance to finish */
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
scoutfs_inc_counter(sb, block_cache_shrink_restart);
synchronize_rcu();
goto restart;
}
scoutfs_inc_counter(sb, block_cache_shrink_next);
@@ -1157,7 +1118,6 @@ static unsigned long block_scan_objects(struct shrinker *shrink, struct shrink_c
if (block_remove_solo(sb, bp)) {
scoutfs_inc_counter(sb, block_cache_shrink_remove);
TRACE_BLOCK(shrink, bp);
freed++;
nr--;
}
block_put(sb, bp);
@@ -1166,11 +1126,9 @@ static unsigned long block_scan_objects(struct shrinker *shrink, struct shrink_c
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
if (stop)
return SHRINK_STOP;
else
return freed;
out:
return min_t(u64, (u64)atomic_read(&binf->total_inserted) * SCOUTFS_BLOCK_LG_PAGES_PER,
INT_MAX);
}
struct sm_block_completion {
@@ -1178,11 +1136,11 @@ struct sm_block_completion {
int err;
};
static void KC_DECLARE_BIO_END_IO(sm_block_bio_end_io, struct bio *bio)
static void sm_block_bio_end_io(struct bio *bio, int err)
{
struct sm_block_completion *sbc = bio->bi_private;
sbc->err = kc_bio_get_errno(bio);
sbc->err = err;
complete(&sbc->comp);
bio_put(bio);
}
@@ -1197,8 +1155,9 @@ static void KC_DECLARE_BIO_END_IO(sm_block_bio_end_io, struct bio *bio)
* only layer that sees the full block buffer so we pass the calculated
* crc to the caller for them to check in their context.
*/
static int sm_block_io(struct super_block *sb, struct block_device *bdev, unsigned int opf,
u64 blkno, struct scoutfs_block_header *hdr, size_t len, __le32 *blk_crc)
static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw, u64 blkno,
struct scoutfs_block_header *hdr, size_t len,
__le32 *blk_crc)
{
struct scoutfs_block_header *pg_hdr;
struct sm_block_completion sbc;
@@ -1212,7 +1171,7 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, unsign
return -EIO;
if (WARN_ON_ONCE(len > SCOUTFS_BLOCK_SM_SIZE) ||
WARN_ON_ONCE(!op_is_write(opf) && !blk_crc))
WARN_ON_ONCE(!(rw & WRITE) && !blk_crc))
return -EINVAL;
page = alloc_page(GFP_NOFS);
@@ -1221,7 +1180,7 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, unsign
pg_hdr = page_address(page);
if (op_is_write(opf)) {
if (rw & WRITE) {
memcpy(pg_hdr, hdr, len);
if (len < SCOUTFS_BLOCK_SM_SIZE)
memset((char *)pg_hdr + len, 0,
@@ -1235,9 +1194,8 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, unsign
goto out;
}
kc_bio_set_opf(bio, opf | REQ_SYNC);
kc_bio_set_sector(bio, blkno << (SCOUTFS_BLOCK_SM_SHIFT - 9));
bio_set_dev(bio, bdev);
bio->bi_sector = blkno << (SCOUTFS_BLOCK_SM_SHIFT - 9);
bio->bi_bdev = bdev;
bio->bi_end_io = sm_block_bio_end_io;
bio->bi_private = &sbc;
bio_add_page(bio, page, SCOUTFS_BLOCK_SM_SIZE, 0);
@@ -1245,12 +1203,12 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, unsign
init_completion(&sbc.comp);
sbc.err = 0;
kc_submit_bio(bio);
submit_bio((rw & WRITE) ? WRITE_SYNC : READ_SYNC, bio);
wait_for_completion(&sbc.comp);
ret = sbc.err;
if (ret == 0 && !op_is_write(opf)) {
if (ret == 0 && !(rw & WRITE)) {
memcpy(hdr, pg_hdr, len);
*blk_crc = block_calc_crc(pg_hdr, SCOUTFS_BLOCK_SM_SIZE);
}
@@ -1264,14 +1222,14 @@ int scoutfs_block_read_sm(struct super_block *sb,
struct scoutfs_block_header *hdr, size_t len,
__le32 *blk_crc)
{
return sm_block_io(sb, bdev, REQ_OP_READ, blkno, hdr, len, blk_crc);
return sm_block_io(sb, bdev, READ, blkno, hdr, len, blk_crc);
}
int scoutfs_block_write_sm(struct super_block *sb,
struct block_device *bdev, u64 blkno,
struct scoutfs_block_header *hdr, size_t len)
{
return sm_block_io(sb, bdev, REQ_OP_WRITE, blkno, hdr, len, NULL);
return sm_block_io(sb, bdev, WRITE, blkno, hdr, len, NULL);
}
int scoutfs_block_setup(struct super_block *sb)
@@ -1296,9 +1254,9 @@ int scoutfs_block_setup(struct super_block *sb)
atomic_set(&binf->total_inserted, 0);
atomic64_set(&binf->access_counter, 0);
init_waitqueue_head(&binf->waitq);
KC_INIT_SHRINKER_FUNCS(&binf->shrinker, block_count_objects,
block_scan_objects);
KC_REGISTER_SHRINKER(&binf->shrinker);
binf->shrinker.shrink = block_shrink;
binf->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&binf->shrinker);
INIT_WORK(&binf->free_work, block_free_work);
init_llist_head(&binf->free_llist);
@@ -1318,7 +1276,7 @@ void scoutfs_block_destroy(struct super_block *sb)
struct block_info *binf = SCOUTFS_SB(sb)->block_info;
if (binf) {
KC_UNREGISTER_SHRINKER(&binf->shrinker);
unregister_shrinker(&binf->shrinker);
block_remove_all(sb);
flush_work(&binf->free_work);
rhashtable_destroy(&binf->ht);
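
The block.c hunks above also convert the block cache shrinker from the split count_objects/scan_objects callbacks (hidden behind the KC_*SHRINKER* macros) to the single .shrink callback that older kernels expect. Purely as an illustration of that legacy contract, with my_cache and its helpers as hypothetical stand-ins:

/* Legacy (pre-3.12) shrinker shape: one callback both reports the
 * object count and, when nr_to_scan is non-zero, frees objects.
 * my_cache, my_cache_count() and my_cache_free() are hypothetical. */
struct my_cache {
	struct shrinker shrinker;
	/* ... cached objects ... */
};

static int my_cache_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
	struct my_cache *cache = container_of(shrink, struct my_cache, shrinker);

	if (sc->nr_to_scan)
		my_cache_free(cache, sc->nr_to_scan);

	return my_cache_count(cache);
}

/* registered once at setup, e.g.: */
	cache->shrinker.shrink = my_cache_shrink;
	cache->shrinker.seeks = DEFAULT_SEEKS;
	register_shrinker(&cache->shrinker);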

View File

@@ -13,17 +13,6 @@ struct scoutfs_block {
void *priv;
};
struct scoutfs_block_saved_refs {
struct scoutfs_block_ref refs[2];
};
#define DECLARE_SAVED_REFS(name) \
struct scoutfs_block_saved_refs name = {{{0,}}}
int scoutfs_block_check_stale(struct super_block *sb, int ret,
struct scoutfs_block_saved_refs *saved,
struct scoutfs_block_ref *a, struct scoutfs_block_ref *b);
int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret);
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl);

View File

@@ -356,6 +356,7 @@ static int client_greeting(struct super_block *sb,
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct client_info *client = sbi->client_info;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_greeting *gr = resp;
bool new_server;
int ret;
@@ -370,9 +371,9 @@ static int client_greeting(struct super_block *sb,
goto out;
}
if (gr->fsid != cpu_to_le64(sbi->fsid)) {
if (gr->fsid != super->hdr.fsid) {
scoutfs_warn(sb, "server greeting response fsid 0x%llx did not match client fsid 0x%llx",
le64_to_cpu(gr->fsid), sbi->fsid);
le64_to_cpu(gr->fsid), le64_to_cpu(super->hdr.fsid));
ret = -EINVAL;
goto out;
}
@@ -475,6 +476,7 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_mount_options opts;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
@@ -506,7 +508,7 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
goto out;
/* send a greeting to verify endpoints of each connection */
greet.fsid = cpu_to_le64(sbi->fsid);
greet.fsid = super->hdr.fsid;
greet.fmt_vers = cpu_to_le64(sbi->fmt_vers);
greet.server_term = cpu_to_le64(client->server_term);
greet.rid = cpu_to_le64(sbi->rid);

View File

@@ -30,13 +30,11 @@
EXPAND_COUNTER(block_cache_free) \
EXPAND_COUNTER(block_cache_free_work) \
EXPAND_COUNTER(block_cache_remove_stale) \
EXPAND_COUNTER(block_cache_count_objects) \
EXPAND_COUNTER(block_cache_scan_objects) \
EXPAND_COUNTER(block_cache_shrink) \
EXPAND_COUNTER(block_cache_shrink_next) \
EXPAND_COUNTER(block_cache_shrink_recent) \
EXPAND_COUNTER(block_cache_shrink_remove) \
EXPAND_COUNTER(block_cache_shrink_stop) \
EXPAND_COUNTER(block_cache_shrink_restart) \
EXPAND_COUNTER(btree_compact_values) \
EXPAND_COUNTER(btree_compact_values_enomem) \
EXPAND_COUNTER(btree_delete) \
@@ -77,6 +75,8 @@
EXPAND_COUNTER(data_write_begin_enobufs_retry) \
EXPAND_COUNTER(dentry_revalidate_error) \
EXPAND_COUNTER(dentry_revalidate_invalid) \
EXPAND_COUNTER(dentry_revalidate_locked) \
EXPAND_COUNTER(dentry_revalidate_orphan) \
EXPAND_COUNTER(dentry_revalidate_rcu) \
EXPAND_COUNTER(dentry_revalidate_root) \
EXPAND_COUNTER(dentry_revalidate_valid) \
@@ -90,8 +90,6 @@
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
EXPAND_COUNTER(item_cache_count_objects) \
EXPAND_COUNTER(item_cache_scan_objects) \
EXPAND_COUNTER(item_clear_dirty) \
EXPAND_COUNTER(item_create) \
EXPAND_COUNTER(item_delete) \
@@ -125,7 +123,6 @@
EXPAND_COUNTER(item_update) \
EXPAND_COUNTER(item_write_dirty) \
EXPAND_COUNTER(lock_alloc) \
EXPAND_COUNTER(lock_count_objects) \
EXPAND_COUNTER(lock_free) \
EXPAND_COUNTER(lock_grant_request) \
EXPAND_COUNTER(lock_grant_response) \
@@ -139,7 +136,6 @@
EXPAND_COUNTER(lock_lock_error) \
EXPAND_COUNTER(lock_nonblock_eagain) \
EXPAND_COUNTER(lock_recover_request) \
EXPAND_COUNTER(lock_scan_objects) \
EXPAND_COUNTER(lock_shrink_attempted) \
EXPAND_COUNTER(lock_shrink_aborted) \
EXPAND_COUNTER(lock_shrink_work) \
@@ -172,7 +168,6 @@
EXPAND_COUNTER(quorum_recv_resignation) \
EXPAND_COUNTER(quorum_recv_vote) \
EXPAND_COUNTER(quorum_send_heartbeat) \
EXPAND_COUNTER(quorum_send_heartbeat_dropped) \
EXPAND_COUNTER(quorum_send_resignation) \
EXPAND_COUNTER(quorum_send_request) \
EXPAND_COUNTER(quorum_send_vote) \
@@ -194,6 +189,8 @@
EXPAND_COUNTER(srch_search_retry_empty) \
EXPAND_COUNTER(srch_search_sorted) \
EXPAND_COUNTER(srch_search_sorted_block) \
EXPAND_COUNTER(srch_search_stale_eio) \
EXPAND_COUNTER(srch_search_stale_retry) \
EXPAND_COUNTER(srch_search_xattrs) \
EXPAND_COUNTER(srch_read_stale) \
EXPAND_COUNTER(statfs) \
@@ -238,12 +235,12 @@ struct scoutfs_counters {
#define SCOUTFS_PCPU_COUNTER_BATCH (1 << 30)
#define scoutfs_inc_counter(sb, which) \
percpu_counter_add_batch(&SCOUTFS_SB(sb)->counters->which, 1, \
SCOUTFS_PCPU_COUNTER_BATCH)
__percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, 1, \
SCOUTFS_PCPU_COUNTER_BATCH)
#define scoutfs_add_counter(sb, which, cnt) \
percpu_counter_add_batch(&SCOUTFS_SB(sb)->counters->which, cnt, \
SCOUTFS_PCPU_COUNTER_BATCH)
__percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, cnt, \
SCOUTFS_PCPU_COUNTER_BATCH)
void __init scoutfs_init_counters(void);
int scoutfs_setup_counters(struct super_block *sb);

View File

@@ -307,7 +307,7 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
LIST_HEAD(ind_locks);
s64 ret = 0;
WARN_ON_ONCE(inode && !inode_is_locked(inode));
WARN_ON_ONCE(inode && !mutex_is_locked(&inode->i_mutex));
/* clamp last to the last possible block? */
if (last > SCOUTFS_BLOCK_SM_MAX)
@@ -366,27 +366,27 @@ static inline u64 ext_last(struct scoutfs_extent *ext)
/*
* The caller is writing to a logical iblock that doesn't have an
* allocated extent. The caller has searched for an extent containing
* iblock. If it already existed then it must be unallocated and
* offline.
* allocated extent.
*
* We implement two preallocation strategies. Typically we only
* preallocate for simple streaming writes and limit preallocation while
* the file is small. The largest efficient allocation size is
* typically large enough that it would be unreasonable to allocate that
* much for all small files.
* We always allocate an extent starting at the logical iblock. The
* caller has searched for an extent containing iblock. If it already
* existed then it must be unallocated and offline.
*
* Optionally, we can simply preallocate large empty aligned regions.
* This can waste a lot of space for small or sparse files but is
* reasonable when a file population is known to be large and dense but
* known to be written with non-streaming write patterns.
* Preallocation is used if we're strictly contiguously extending
* writes. That is, if the logical block offset equals the number of
* online blocks. We try to preallocate the number of blocks existing
* so that small files don't waste inordinate amounts of space and large
* files will eventually see large extents. This only works for
* contiguous single stream writes or stages of files from the first
* block. It doesn't work for concurrent stages, releasing behind
* staging, sparse files, multi-node writes, etc. fallocate() is always
* a better tool to use.
*/
static int alloc_block(struct super_block *sb, struct inode *inode,
struct scoutfs_extent *ext, u64 iblock,
struct scoutfs_lock *lock)
{
DECLARE_DATA_INFO(sb, datinf);
struct scoutfs_mount_options opts;
const u64 ino = scoutfs_ino(inode);
struct data_ext_args args = {
.ino = ino,
@@ -394,22 +394,17 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
.lock = lock,
};
struct scoutfs_extent found;
struct scoutfs_extent pre = {0,};
bool undo_pre = false;
struct scoutfs_extent pre;
u64 blkno = 0;
u64 online;
u64 offline;
u8 flags;
u64 start;
u64 count;
u64 rem;
int ret;
int err;
trace_scoutfs_data_alloc_block_enter(sb, ino, iblock, ext);
scoutfs_options_read(sb, &opts);
/* can only allocate over existing unallocated offline extent */
if (WARN_ON_ONCE(ext->len &&
!(iblock >= ext->start && iblock <= ext_last(ext) &&
@@ -418,118 +413,66 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
mutex_lock(&datinf->mutex);
/* default to single allocation at the written block */
start = iblock;
count = 1;
/* copy existing flags for preallocated regions */
flags = ext->len ? ext->flags : 0;
scoutfs_inode_get_onoff(inode, &online, &offline);
if (ext->len) {
/*
* Assume that offline writers are going to be writing
* all the offline extents and try to preallocate the
* rest of the unwritten extent.
*/
/* limit preallocation to remaining existing (offline) extent */
count = ext->len - (iblock - ext->start);
} else if (opts.data_prealloc_contig_only) {
/*
* Only preallocate when a quick test of the online
* block counts looks like we're a simple streaming
* write. Try to write until the next extent but limit
* the preallocation size to the number of online
* blocks.
*/
scoutfs_inode_get_onoff(inode, &online, &offline);
if (iblock > 1 && iblock == online) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
iblock, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (found.len && found.start > iblock)
count = found.start - iblock;
else
count = opts.data_prealloc_blocks;
count = min(iblock, count);
}
flags = ext->flags;
} else {
/*
* Preallocation within aligned regions tries to
* allocate an extent to fill the hole in the region
* that contains iblock. We'd have to add a bit of plumbing
* to find previous extents so we only search for a next
* extent from the front of the region and from iblock.
*/
div64_u64_rem(iblock, opts.data_prealloc_blocks, &rem);
start = iblock - rem;
count = opts.data_prealloc_blocks;
ret = scoutfs_ext_next(sb, &data_ext_ops, &args, start, 1, &found);
/* otherwise alloc to next extent */
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
iblock, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
/* trim count if there's an extent in the region before iblock */
if (found.len && found.start < iblock) {
count -= iblock - start;
start = iblock;
/* see if there's also an extent after iblock */
ret = scoutfs_ext_next(sb, &data_ext_ops, &args, iblock, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
}
/* trim count by next extent after iblock */
if (found.len && found.start > start && found.start < start + count)
count = (found.start - start);
if (found.len && found.start > iblock)
count = found.start - iblock;
else
count = SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT;
flags = 0;
}
/* overall prealloc limit */
count = min_t(u64, count, opts.data_prealloc_blocks);
count = min_t(u64, count, SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT);
/* only strictly contiguous extending writes will try to preallocate */
if (iblock > 1 && iblock == online)
count = min(iblock, count);
else
count = 1;
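/*
 * Illustrative examples of the counts chosen above, assuming
 * data_prealloc_blocks = 64 (an arbitrary value for the arithmetic):
 * writing iblock 110 into an existing offline extent covering blocks
 * 100..163 preallocates the remainder, count = 64 - (110 - 100) = 54;
 * without contig_only, iblock 70 falls in the aligned region starting
 * at 70 - (70 % 64) = 64 with count = 64, which is trimmed to start at
 * 70 if an extent already sits before iblock in the region and trimmed
 * again to end at the next extent after iblock.
 */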
ret = scoutfs_alloc_data(sb, datinf->alloc, datinf->wri,
&datinf->dalloc, count, &blkno, &count);
if (ret < 0)
goto out;
/*
* An aligned prealloc attempt that gets a smaller extent can
* fail to cover iblock; make sure that it does.  This is a
* pathological case, so we don't try to move the window past
* iblock, just enough to cover it, which we know is safe.
*/
if (start + count <= iblock)
start += (iblock - (start + count) + 1);
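/*
 * Illustrative: an aligned attempt asked for start = 64, count = 64 to
 * cover iblock = 70, but the allocator only returned count = 4.  Then
 * start + count = 68 <= 70, so start is bumped by 70 - 68 + 1 = 3 to
 * 67 and the four block extent 67..70 now ends at, and covers, iblock.
 */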
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock, 1, blkno, 0);
if (ret < 0)
goto out;
if (count > 1) {
pre.start = start;
pre.len = count;
pre.map = blkno;
pre.start = iblock + 1;
pre.len = count - 1;
pre.map = blkno + 1;
pre.flags = flags | SEF_UNWRITTEN;
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, pre.start,
pre.len, pre.map, pre.flags);
if (ret < 0)
if (ret < 0) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock,
1, 0, flags);
BUG_ON(err); /* couldn't restore original */
goto out;
undo_pre = true;
}
}
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock, 1, blkno + (iblock - start), 0);
if (ret < 0)
goto out;
/* tell the caller we have a single block, could check next? */
ext->start = iblock;
ext->len = 1;
ext->map = blkno + (iblock - start);
ext->map = blkno;
ext->flags = 0;
ret = 0;
out:
if (ret < 0 && blkno > 0) {
if (undo_pre) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args,
pre.start, pre.len, 0, flags);
BUG_ON(err); /* leaked preallocated extent */
}
err = scoutfs_free_data(sb, datinf->alloc, datinf->wri,
&datinf->data_freed, blkno, count);
BUG_ON(err); /* leaked free blocks */
@@ -558,7 +501,7 @@ static int scoutfs_get_block(struct inode *inode, sector_t iblock,
u64 offset;
int ret;
WARN_ON_ONCE(create && !inode_is_locked(inode));
WARN_ON_ONCE(create && !mutex_is_locked(&inode->i_mutex));
/* make sure caller holds a cluster lock */
lock = scoutfs_per_task_get(&si->pt_data_lock);
@@ -643,8 +586,8 @@ static int scoutfs_get_block_read(struct inode *inode, sector_t iblock,
return ret;
}
int scoutfs_get_block_write(struct inode *inode, sector_t iblock, struct buffer_head *bh,
int create)
static int scoutfs_get_block_write(struct inode *inode, sector_t iblock,
struct buffer_head *bh, int create)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
int ret;
@@ -704,7 +647,7 @@ static int scoutfs_readpage(struct file *file, struct page *page)
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
ret = scoutfs_data_wait_check(inode, page_offset(page),
PAGE_SIZE, SEF_OFFLINE,
PAGE_CACHE_SIZE, SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ, &dw,
inode_lock);
if (ret != 0) {
@@ -729,7 +672,6 @@ static int scoutfs_readpage(struct file *file, struct page *page)
return ret;
}
#ifndef KC_FILE_AOPS_READAHEAD
/*
* This is used for opportunistic read-ahead which can throw the pages
* away if it needs to. If the caller didn't deal with offline extents
@@ -755,14 +697,14 @@ static int scoutfs_readpages(struct file *file, struct address_space *mapping,
list_for_each_entry_safe(page, tmp, pages, lru) {
ret = scoutfs_data_wait_check(inode, page_offset(page),
PAGE_SIZE, SEF_OFFLINE,
PAGE_CACHE_SIZE, SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ, NULL,
inode_lock);
if (ret < 0)
goto out;
if (ret > 0) {
list_del(&page->lru);
put_page(page);
page_cache_release(page);
if (--nr_pages == 0) {
ret = 0;
goto out;
@@ -776,29 +718,6 @@ out:
BUG_ON(!list_empty(pages));
return ret;
}
#else
static void scoutfs_readahead(struct readahead_control *rac)
{
struct inode *inode = rac->file->f_inode;
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *inode_lock = NULL;
int ret;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
if (ret)
return;
ret = scoutfs_data_wait_check(inode, readahead_pos(rac),
readahead_length(rac), SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ, NULL,
inode_lock);
if (ret == 0)
mpage_readahead(rac, scoutfs_get_block_read);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
}
#endif
static int scoutfs_writepage(struct page *page, struct writeback_control *wbc)
{
@@ -1081,7 +1000,7 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
goto out;
}
inode_lock(inode);
mutex_lock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -1142,7 +1061,7 @@ out_extent:
up_write(&si->extent_sem);
out_mutex:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);
mutex_unlock(&inode->i_mutex);
out:
trace_scoutfs_data_fallocate(sb, ino, mode, offset, len, ret);
@@ -1228,9 +1147,9 @@ static void truncate_inode_pages_extent(struct inode *inode, u64 start, u64 len)
* explained above the move_blocks ioctl argument structure definition.
*
* The caller has processed the ioctl args and performed the most basic
* argument sanity and inode checks, but we perform more detailed inode
* checks once we have the inode lock and refreshed inodes. Our job is
* to safely lock the two files and move the extents.
* inode checks, but we perform more detailed inode checks once we have
* the inode lock and refreshed inodes. Our job is to safely lock the
* two files and move the extents.
*/
#define MOVE_DATA_EXTENTS_PER_HOLD 16
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
@@ -1245,7 +1164,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
struct data_ext_args from_args;
struct data_ext_args to_args;
struct scoutfs_extent ext;
struct kc_timespec cur_time;
struct timespec cur_time;
LIST_HEAD(locks);
bool done = false;
loff_t from_size;
@@ -1289,16 +1208,6 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
from_iblock = from_off >> SCOUTFS_BLOCK_SM_SHIFT;
count = (byte_len + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
to_iblock = to_off >> SCOUTFS_BLOCK_SM_SHIFT;
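/*
 * Illustrative only, assuming SCOUTFS_BLOCK_SM_SHIFT is 12 (4 KiB
 * small blocks): from_off = 10000 gives from_iblock = 10000 >> 12 = 2,
 * and byte_len = 5000 rounds up to count = (5000 + 4095) >> 12 = 2.
 */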
from_start = from_iblock;
/* only move extent blocks inside i_size, careful not to wrap */
from_size = i_size_read(from);
if (from_off >= from_size) {
ret = 0;
goto out;
}
if (from_off + byte_len > from_size)
count = ((from_size - from_off) + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
if (S_ISDIR(from->i_mode) || S_ISDIR(to->i_mode)) {
ret = -EISDIR;
@@ -1366,7 +1275,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
/* find the next extent to move */
ret = scoutfs_ext_next(sb, &data_ext_ops, &from_args,
from_start, 1, &ext);
from_iblock, 1, &ext);
if (ret < 0) {
if (ret == -ENOENT) {
done = true;
@@ -1375,8 +1284,9 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
break;
}
/* done if next extent starts after moving region */
if (ext.start >= from_iblock + count) {
/* only move extents within count and i_size */
if (ext.start >= from_iblock + count ||
ext.start >= i_size_read(from)) {
done = true;
ret = 0;
break;
@@ -1384,14 +1294,12 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
from_start = max(ext.start, from_iblock);
map = ext.map + (from_start - ext.start);
len = min(from_iblock + count, ext.start + ext.len) - from_start;
to_start = to_iblock + (from_start - from_iblock);
len = min3(from_iblock + count,
round_up((u64)i_size_read(from),
SCOUTFS_BLOCK_SM_SIZE),
ext.start + ext.len) - from_start;
/* we'd get stuck, shouldn't happen */
if (WARN_ON_ONCE(len == 0)) {
ret = -EIO;
goto out;
}
to_start = to_iblock + (from_start - from_iblock);
if (is_stage) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &to_args,
@@ -1454,19 +1362,13 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
i_size_read(from);
i_size_write(to, to_size);
}
/* find next after moved extent, avoiding wrapping */
if (from_start + len < from_start)
from_start = from_iblock + count + 1;
else
from_start += len;
}
up_write(&from_si->extent_sem);
up_write(&to_si->extent_sem);
cur_time = current_time(from);
cur_time = CURRENT_TIME;
if (!is_stage) {
to->i_ctime = to->i_mtime = cur_time;
inode_inc_iversion(to);
@@ -1553,7 +1455,7 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
if (ret)
goto out;
inode_lock(inode);
mutex_lock(&inode->i_mutex);
down_read(&si->extent_sem);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
@@ -1607,7 +1509,7 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
unlock:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
up_read(&si->extent_sem);
inode_unlock(inode);
mutex_unlock(&inode->i_mutex);
out:
if (ret == 1)
@@ -1807,37 +1709,6 @@ int scoutfs_data_wait_check_iov(struct inode *inode, const struct iovec *iov,
return ret;
}
int scoutfs_data_wait_check_iter(struct inode *inode, loff_t pos, struct iov_iter *iter,
u8 sef, u8 op, struct scoutfs_data_wait *dw,
struct scoutfs_lock *lock)
{
size_t count = iov_iter_count(iter);
size_t off = iter->iov_offset;
const struct iovec *iov;
size_t len;
int ret = 0;
for (iov = iter->iov; count > 0; iov++) {
len = iov->iov_len - off;
if (len == 0)
continue;
/* aren't we waiting on too much data here? */
ret = scoutfs_data_wait_check(inode, pos, len,
sef, op, dw, lock);
if (ret != 0)
break;
pos += len;
count -= len;
off = 0;
}
return ret;
}
int scoutfs_data_wait(struct inode *inode, struct scoutfs_data_wait *dw)
{
DECLARE_DATA_WAIT_ROOT(inode->i_sb, rt);
@@ -1928,11 +1799,7 @@ int scoutfs_data_waiting(struct super_block *sb, u64 ino, u64 iblock,
const struct address_space_operations scoutfs_file_aops = {
.readpage = scoutfs_readpage,
#ifndef KC_FILE_AOPS_READAHEAD
.readpages = scoutfs_readpages,
#else
.readahead = scoutfs_readahead,
#endif
.writepage = scoutfs_writepage,
.writepages = scoutfs_writepages,
.write_begin = scoutfs_write_begin,
@@ -1940,15 +1807,10 @@ const struct address_space_operations scoutfs_file_aops = {
};
const struct file_operations scoutfs_file_fops = {
#ifdef KC_LINUX_HAVE_FOP_AIO_READ
.read = do_sync_read,
.write = do_sync_write,
.aio_read = scoutfs_file_aio_read,
.aio_write = scoutfs_file_aio_write,
#else
.read_iter = scoutfs_file_read_iter,
.write_iter = scoutfs_file_write_iter,
#endif
.unlocked_ioctl = scoutfs_ioctl,
.fsync = scoutfs_file_fsync,
.llseek = scoutfs_file_llseek,

View File

@@ -43,9 +43,6 @@ extern const struct file_operations scoutfs_file_fops;
struct scoutfs_alloc;
struct scoutfs_block_writer;
int scoutfs_get_block_write(struct inode *inode, sector_t iblock, struct buffer_head *bh,
int create);
int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
u64 ino, u64 iblock, u64 last, bool offline,
struct scoutfs_lock *lock);
@@ -65,9 +62,6 @@ int scoutfs_data_wait_check_iov(struct inode *inode, const struct iovec *iov,
unsigned long nr_segs, loff_t pos, u8 sef,
u8 op, struct scoutfs_data_wait *ow,
struct scoutfs_lock *lock);
int scoutfs_data_wait_check_iter(struct inode *inode, loff_t pos, struct iov_iter *iter,
u8 sef, u8 op, struct scoutfs_data_wait *ow,
struct scoutfs_lock *lock);
bool scoutfs_data_wait_found(struct scoutfs_data_wait *ow);
int scoutfs_data_wait(struct inode *inode,
struct scoutfs_data_wait *ow);

File diff suppressed because it is too large

View File

@@ -5,22 +5,14 @@
#include "lock.h"
extern const struct file_operations scoutfs_dir_fops;
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
extern const struct inode_operations_wrapper scoutfs_dir_iops;
#else
extern const struct inode_operations scoutfs_dir_iops;
#endif
extern const struct inode_operations scoutfs_symlink_iops;
extern const struct dentry_operations scoutfs_dentry_ops;
struct scoutfs_link_backref_entry {
struct list_head head;
u64 dir_ino;
u64 dir_pos;
u16 name_len;
u8 d_type;
bool last;
struct scoutfs_dirent dent;
/* the full name is allocated and stored in dent.name[] */
};
@@ -30,10 +22,14 @@ int scoutfs_dir_get_backref_path(struct super_block *sb, u64 ino, u64 dir_ino,
void scoutfs_dir_free_backref_path(struct super_block *sb,
struct list_head *list);
int scoutfs_dir_add_next_linkrefs(struct super_block *sb, u64 ino, u64 dir_ino, u64 dir_pos,
int count, struct list_head *list);
int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
u64 dir_ino, u64 dir_pos,
struct list_head *list);
int scoutfs_symlink_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock, u64 i_size);
int scoutfs_dir_init(void);
void scoutfs_dir_exit(void);
#endif

View File

@@ -114,8 +114,8 @@ static struct dentry *scoutfs_get_parent(struct dentry *child)
int ret;
u64 ino;
ret = scoutfs_dir_add_next_linkrefs(sb, scoutfs_ino(inode), 0, 0, 1, &list);
if (ret < 0)
ret = scoutfs_dir_add_next_linkref(sb, scoutfs_ino(inode), 0, 0, &list);
if (ret)
return ERR_PTR(ret);
ent = list_first_entry(&list, struct scoutfs_link_backref_entry, head);
@@ -138,9 +138,9 @@ static int scoutfs_get_name(struct dentry *parent, char *name,
LIST_HEAD(list);
int ret;
ret = scoutfs_dir_add_next_linkrefs(sb, scoutfs_ino(inode), dir_ino,
0, 1, &list);
if (ret < 0)
ret = scoutfs_dir_add_next_linkref(sb, scoutfs_ino(inode), dir_ino,
0, &list);
if (ret)
return ret;
ret = -ENOENT;

View File

@@ -29,7 +29,6 @@
#include "per_task.h"
#include "omap.h"
#ifdef KC_LINUX_HAVE_FOP_AIO_READ
/*
* Start a high level file read. We check for offline extents in the
* read region here so that we only check the extents once. We use the
@@ -43,27 +42,27 @@ ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;
retry:
/* protect checked extents from release */
inode_lock(inode);
mutex_lock(&inode->i_mutex);
atomic_inc(&inode->i_dio_count);
inode_unlock(inode);
mutex_unlock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
ret = scoutfs_data_wait_check_iov(inode, iov, nr_segs, pos,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ,
&dw, scoutfs_inode_lock);
&dw, inode_lock);
if (ret != 0)
goto out;
} else {
@@ -75,7 +74,7 @@ retry:
out:
inode_dio_done(inode);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_READ);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
@@ -93,7 +92,7 @@ ssize_t scoutfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
struct scoutfs_lock *inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;
@@ -102,22 +101,22 @@ ssize_t scoutfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
return 0;
retry:
inode_lock(inode);
mutex_lock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
if (ret)
goto out;
ret = scoutfs_complete_truncate(inode, scoutfs_inode_lock);
ret = scoutfs_complete_truncate(inode, inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
/* data_version is per inode, whole file must be online */
ret = scoutfs_data_wait_check(inode, 0, i_size_read(inode),
SEF_OFFLINE,
SCOUTFS_IOC_DWO_WRITE,
&dw, scoutfs_inode_lock);
&dw, inode_lock);
if (ret != 0)
goto out;
}
@@ -128,8 +127,8 @@ retry:
out:
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
mutex_unlock(&inode->i_mutex);
if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
@@ -147,113 +146,6 @@ out:
return ret;
}
#else
ssize_t scoutfs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;
retry:
/* protect checked extents from release */
inode_lock(inode);
atomic_inc(&inode->i_dio_count);
inode_unlock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
ret = scoutfs_data_wait_check_iter(inode, iocb->ki_pos, to,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ,
&dw, scoutfs_inode_lock);
if (ret != 0)
goto out;
} else {
WARN_ON_ONCE(true);
}
ret = generic_file_read_iter(iocb, to);
out:
inode_dio_end(inode);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_READ);
if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
if (ret == 0)
goto retry;
}
return ret;
}
ssize_t scoutfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;
int written;
retry:
inode_lock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
if (ret)
goto out;
ret = generic_write_checks(iocb, from);
if (ret <= 0)
goto out;
ret = scoutfs_complete_truncate(inode, scoutfs_inode_lock);
if (ret)
goto out;
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
/* data_version is per inode, whole file must be online */
ret = scoutfs_data_wait_check_iter(inode, iocb->ki_pos, from,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_WRITE,
&dw, scoutfs_inode_lock);
if (ret != 0)
goto out;
}
/* XXX: remove SUID bit */
written = __generic_file_write_iter(iocb, from);
out:
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);
if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
if (ret == 0)
goto retry;
}
if (ret > 0 || ret == -EIOCBQUEUED)
ret = generic_write_sync(iocb, written);
return written ? written : ret;
}
#endif
int scoutfs_permission(struct inode *inode, int mask)
{

View File

@@ -1,15 +1,10 @@
#ifndef _SCOUTFS_FILE_H_
#define _SCOUTFS_FILE_H_
#ifdef KC_LINUX_HAVE_FOP_AIO_READ
ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos);
ssize_t scoutfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos);
#else
ssize_t scoutfs_file_read_iter(struct kiocb *, struct iov_iter *);
ssize_t scoutfs_file_write_iter(struct kiocb *, struct iov_iter *);
#endif
int scoutfs_permission(struct inode *inode, int mask);
loff_t scoutfs_file_llseek(struct file *file, loff_t offset, int whence);

View File

@@ -78,6 +78,11 @@ struct forest_refs {
struct scoutfs_block_ref logs_ref;
};
/* initialize some refs that initially aren't equal */
#define DECLARE_STALE_TRACKING_SUPER_REFS(a, b) \
struct forest_refs a = {{cpu_to_le64(0),}}; \
struct forest_refs b = {{cpu_to_le64(1),}}
struct forest_bloom_nrs {
unsigned int nrs[SCOUTFS_FOREST_BLOOM_NRS];
};
@@ -131,11 +136,11 @@ static struct scoutfs_block *read_bloom_ref(struct super_block *sb, struct scout
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *next)
{
DECLARE_STALE_TRACKING_SUPER_REFS(prev_refs, refs);
struct scoutfs_net_roots roots;
struct scoutfs_btree_root item_root;
struct scoutfs_log_trees *lt;
SCOUTFS_BTREE_ITEM_REF(iref);
DECLARE_SAVED_REFS(saved);
struct scoutfs_key found;
struct scoutfs_key ltk;
bool checked_fs;
@@ -150,6 +155,8 @@ retry:
goto out;
trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
refs.fs_ref = roots.fs_root.ref;
refs.logs_ref = roots.logs_root.ref;
scoutfs_key_init_log_trees(&ltk, 0, 0);
checked_fs = false;
@@ -205,10 +212,14 @@ retry:
}
}
ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.fs_root.ref, &roots.logs_root.ref);
if (ret == -ESTALE)
if (ret == -ESTALE) {
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0)
return -EIO;
prev_refs = refs;
goto retry;
}
out:
return ret;
}
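/*
 * Illustrative walk through the retry above: the first pass records
 * the fs and logs refs it used; if the btree walk then hits a stale
 * block because those roots were replaced, refs won't match prev_refs
 * (which the DECLARE macro initializes to differ), so prev_refs is
 * updated and the lookup retries with fresh roots.  Only a second
 * -ESTALE with identical refs is treated as persistent corruption and
 * returned as -EIO instead of retrying forever.
 */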
@@ -530,8 +541,9 @@ void scoutfs_forest_dec_inode_count(struct super_block *sb)
/*
* Return the total inode count from the super block and all the
* log_btrees it references. ESTALE from read blocks is returned to the
* caller who is expected to retry or return hard errors.
* log_btrees it references. This assumes it's working with a block
* reference hierarchy that should be fully consistent. If we see
* ESTALE we've hit persistent corruption.
*/
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count)
@@ -560,6 +572,8 @@ int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_bloc
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
else if (ret == -ESTALE)
ret = -EIO;
break;
}
}

View File

@@ -683,19 +683,16 @@ struct scoutfs_xattr_totl_val {
#define SCOUTFS_QUORUM_ELECT_VAR_MS 100
/*
* Once a leader is elected they send heartbeat messages to all quorum
* members at regular intervals to force members to wait the much longer
* heartbeat timeout. Once the heartbeat timeout expires without
* receiving a heartbeat message a member will start an election.
* Once a leader is elected they send out heartbeats at regular
* intervals to force members to wait the much longer heartbeat timeout.
* Once the heartbeat timeout expires without receiving a heartbeat
* they'll switch over to performing elections.
*
* These determine how long it could take members to notice that a
* leader has gone silent and start to elect a new leader. The
* heartbeat timeout can be changed at run time by options.
* leader has gone silent and start to elect a new leader.
*/
#define SCOUTFS_QUORUM_HB_IVAL_MS 100
#define SCOUTFS_QUORUM_MIN_HB_TIMEO_MS (2 * MSEC_PER_SEC)
#define SCOUTFS_QUORUM_DEF_HB_TIMEO_MS (10 * MSEC_PER_SEC)
#define SCOUTFS_QUORUM_MAX_HB_TIMEO_MS (60 * MSEC_PER_SEC)
#define SCOUTFS_QUORUM_HB_TIMEO_MS (5 * MSEC_PER_SEC)
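/*
 * Illustrative: with the 100ms heartbeat interval above, the fixed 5
 * second timeout corresponds to roughly 50 missed heartbeats, and the
 * configurable 2 to 60 second range to roughly 20 to 600, before a
 * quorum member gives up on the leader and starts an election.
 */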
/*
* A newly elected leader will give fencing some time before giving up and

View File

@@ -19,8 +19,6 @@
#include <linux/pagemap.h>
#include <linux/sched.h>
#include <linux/list_sort.h>
#include <linux/workqueue.h>
#include <linux/buffer_head.h>
#include "format.h"
#include "super.h"
@@ -38,7 +36,6 @@
#include "omap.h"
#include "forest.h"
#include "btree.h"
#include "acl.h"
/*
* XXX
@@ -69,10 +66,8 @@ struct inode_sb_info {
struct delayed_work orphan_scan_dwork;
struct workqueue_struct *iput_workq;
struct work_struct iput_work;
spinlock_t iput_lock;
struct list_head iput_list;
struct llist_head iput_llist;
};
#define DECLARE_INODE_SB_INFO(sb, name) \
@@ -99,9 +94,7 @@ static void scoutfs_inode_ctor(void *obj)
init_rwsem(&si->xattr_rwsem);
INIT_LIST_HEAD(&si->writeback_entry);
scoutfs_lock_init_coverage(&si->ino_lock_cov);
INIT_LIST_HEAD(&si->iput_head);
si->iput_count = 0;
si->iput_flags = 0;
atomic_set(&si->iput_count, 0);
inode_init_once(&si->inode);
}
@@ -143,26 +136,20 @@ void scoutfs_destroy_inode(struct inode *inode)
static const struct inode_operations scoutfs_file_iops = {
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.removexattr = generic_removexattr,
#endif
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
.removexattr = scoutfs_removexattr,
.fiemap = scoutfs_data_fiemap,
};
static const struct inode_operations scoutfs_special_iops = {
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.removexattr = generic_removexattr,
#endif
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
.listxattr = scoutfs_listxattr,
.get_acl = scoutfs_get_acl,
.removexattr = scoutfs_removexattr,
};
/*
@@ -178,12 +165,8 @@ static void set_inode_ops(struct inode *inode)
inode->i_fop = &scoutfs_file_fops;
break;
case S_IFDIR:
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
inode->i_op = &scoutfs_dir_iops.ops;
inode->i_flags |= S_IOPS_WRAPPER;
#else
inode->i_op = &scoutfs_dir_iops;
#endif
inode->i_fop = &scoutfs_dir_fops;
break;
case S_IFLNK:
@@ -255,7 +238,7 @@ static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
i_size_write(inode, le64_to_cpu(cinode->size));
inode_set_iversion_queried(inode, le64_to_cpu(cinode->version));
inode->i_version = le64_to_cpu(cinode->version);
set_nlink(inode, le32_to_cpu(cinode->nlink));
i_uid_write(inode, le32_to_cpu(cinode->uid));
i_gid_write(inode, le32_to_cpu(cinode->gid));
@@ -339,6 +322,7 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock)
load_inode(inode, &sinode);
atomic64_set(&si->last_refreshed, refresh_gen);
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
si->drop_invalidated = false;
}
} else {
ret = 0;
@@ -348,17 +332,10 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock)
return ret;
}
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat)
{
struct inode *inode = dentry->d_inode;
#else
int scoutfs_getattr(const struct path *path, struct kstat *stat,
u32 request_mask, unsigned int query_flags)
{
struct inode *inode = d_inode(path->dentry);
#endif
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
int ret;
@@ -377,7 +354,6 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
LIST_HEAD(ind_locks);
int ret;
@@ -388,25 +364,17 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
if (ret)
return ret;
scoutfs_per_task_add(&si->pt_data_lock, &pt_ent, lock);
ret = block_truncate_page(inode->i_mapping, new_size, scoutfs_get_block_write);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
if (ret < 0)
goto unlock;
scoutfs_inode_queue_writeback(inode);
if (new_size != i_size_read(inode))
scoutfs_inode_inc_data_version(inode);
truncate_setsize(inode, new_size);
inode->i_ctime = inode->i_mtime = current_time(inode);
inode->i_ctime = inode->i_mtime = CURRENT_TIME;
if (truncate)
si->flags |= SCOUTFS_INO_FLAG_TRUNCATE;
scoutfs_inode_set_data_seq(inode);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
unlock:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
@@ -482,7 +450,8 @@ retry:
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (ret)
return ret;
ret = setattr_prepare(dentry, attr);
ret = inode_change_ok(inode, attr);
if (ret)
goto out;
@@ -510,9 +479,9 @@ retry:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
/* XXX callee locks instead? */
inode_unlock(inode);
mutex_unlock(&inode->i_mutex);
ret = scoutfs_data_wait(inode, &dw);
inode_lock(inode);
mutex_lock(&inode->i_mutex);
if (ret == 0)
goto retry;
@@ -538,15 +507,10 @@ retry:
if (ret)
goto out;
ret = scoutfs_acl_chmod_locked(inode, attr, lock, &ind_locks);
if (ret < 0)
goto release;
setattr_copy(inode, attr);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);
release:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
out:
@@ -764,7 +728,7 @@ struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf, int igf)
/* XXX ensure refresh, instead clear in drop_inode? */
si = SCOUTFS_I(inode);
atomic64_set(&si->last_refreshed, 0);
inode_set_iversion_queried(inode, 0);
inode->i_version = 0;
}
ret = scoutfs_inode_refresh(inode, lock);
@@ -812,7 +776,7 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
scoutfs_inode_get_onoff(inode, &online_blocks, &offline_blocks);
cinode->size = cpu_to_le64(i_size_read(inode));
cinode->version = cpu_to_le64(inode_peek_iversion(inode));
cinode->version = cpu_to_le64(inode->i_version);
cinode->nlink = cpu_to_le32(inode->i_nlink);
cinode->uid = cpu_to_le32(i_uid_read(inode));
cinode->gid = cpu_to_le32(i_gid_read(inode));
@@ -983,8 +947,7 @@ void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
static int update_index_items(struct super_block *sb,
struct scoutfs_inode_info *si, u64 ino, u8 type,
u64 major, u32 minor,
struct list_head *lock_list,
struct scoutfs_lock *primary)
struct list_head *lock_list)
{
struct scoutfs_lock *ins_lock;
struct scoutfs_lock *del_lock;
@@ -1001,7 +964,7 @@ static int update_index_items(struct super_block *sb,
scoutfs_inode_init_index_key(&ins, type, major, minor, ino);
ins_lock = find_index_lock(lock_list, type, major, minor, ino);
ret = scoutfs_item_create_force(sb, &ins, NULL, 0, ins_lock, primary);
ret = scoutfs_item_create_force(sb, &ins, NULL, 0, ins_lock);
if (ret || !will_del_index(si, type, major, minor))
return ret;
@@ -1013,7 +976,7 @@ static int update_index_items(struct super_block *sb,
del_lock = find_index_lock(lock_list, type, get_item_major(si, type),
get_item_minor(si, type), ino);
ret = scoutfs_item_delete_force(sb, &del, del_lock, primary);
ret = scoutfs_item_delete_force(sb, &del, del_lock);
if (ret) {
err = scoutfs_item_delete(sb, &ins, ins_lock);
BUG_ON(err);
@@ -1025,8 +988,7 @@ static int update_index_items(struct super_block *sb,
static int update_indices(struct super_block *sb,
struct scoutfs_inode_info *si, u64 ino, umode_t mode,
struct scoutfs_inode *sinode,
struct list_head *lock_list,
struct scoutfs_lock *primary)
struct list_head *lock_list)
{
struct index_update {
u8 type;
@@ -1046,7 +1008,7 @@ static int update_indices(struct super_block *sb,
continue;
ret = update_index_items(sb, si, ino, upd->type, upd->major,
upd->minor, lock_list, primary);
upd->minor, lock_list);
if (ret)
break;
}
@@ -1086,7 +1048,7 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
/* only race with other inode field stores once */
store_inode(&sinode, inode);
ret = update_indices(sb, si, ino, inode->i_mode, &sinode, lock_list, lock);
ret = update_indices(sb, si, ino, inode->i_mode, &sinode, lock_list);
BUG_ON(ret);
scoutfs_inode_init_key(&key, ino);
@@ -1355,7 +1317,7 @@ void scoutfs_inode_index_unlock(struct super_block *sb, struct list_head *list)
/* this is called on final inode cleanup so enoent is fine */
static int remove_index(struct super_block *sb, u64 ino, u8 type, u64 major,
u32 minor, struct list_head *ind_locks, struct scoutfs_lock *primary)
u32 minor, struct list_head *ind_locks)
{
struct scoutfs_key key;
struct scoutfs_lock *lock;
@@ -1364,7 +1326,7 @@ static int remove_index(struct super_block *sb, u64 ino, u8 type, u64 major,
scoutfs_inode_init_index_key(&key, type, major, minor, ino);
lock = find_index_lock(ind_locks, type, major, minor, ino);
ret = scoutfs_item_delete_force(sb, &key, lock, primary);
ret = scoutfs_item_delete_force(sb, &key, lock);
if (ret == -ENOENT)
ret = 0;
return ret;
@@ -1381,17 +1343,16 @@ static int remove_index(struct super_block *sb, u64 ino, u8 type, u64 major,
*/
static int remove_index_items(struct super_block *sb, u64 ino,
struct scoutfs_inode *sinode,
struct list_head *ind_locks,
struct scoutfs_lock *primary)
struct list_head *ind_locks)
{
umode_t mode = le32_to_cpu(sinode->mode);
int ret;
ret = remove_index(sb, ino, SCOUTFS_INODE_INDEX_META_SEQ_TYPE,
le64_to_cpu(sinode->meta_seq), 0, ind_locks, primary);
le64_to_cpu(sinode->meta_seq), 0, ind_locks);
if (ret == 0 && S_ISREG(mode))
ret = remove_index(sb, ino, SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE,
le64_to_cpu(sinode->data_seq), 0, ind_locks, primary);
le64_to_cpu(sinode->data_seq), 0, ind_locks);
return ret;
}
@@ -1481,6 +1442,7 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
si->have_item = false;
atomic64_set(&si->last_refreshed, lock->refresh_gen);
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
si->drop_invalidated = false;
si->flags = 0;
scoutfs_inode_set_meta_seq(inode);
@@ -1489,7 +1451,7 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
inode->i_ino = ino; /* XXX overflow */
inode_init_owner(inode, dir, mode);
inode_set_bytes(inode, 0);
inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
inode->i_rdev = rdev;
set_inode_ops(inode);
@@ -1523,24 +1485,22 @@ static void init_orphan_key(struct scoutfs_key *key, u64 ino)
* zone under a write only lock while the caller has the inode protected
* by a write lock.
*/
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
struct scoutfs_lock *primary)
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock)
{
struct scoutfs_key key;
init_orphan_key(&key, ino);
return scoutfs_item_create_force(sb, &key, NULL, 0, lock, primary);
return scoutfs_item_create_force(sb, &key, NULL, 0, lock);
}
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
struct scoutfs_lock *primary)
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock)
{
struct scoutfs_key key;
init_orphan_key(&key, ino);
return scoutfs_item_delete_force(sb, &key, lock, primary);
return scoutfs_item_delete_force(sb, &key, lock);
}
/*
@@ -1593,7 +1553,7 @@ retry:
release = true;
ret = remove_index_items(sb, ino, sinode, &ind_locks, lock);
ret = remove_index_items(sb, ino, sinode, &ind_locks);
if (ret)
goto out;
@@ -1608,7 +1568,7 @@ retry:
if (ret < 0)
goto out;
ret = scoutfs_inode_orphan_delete(sb, ino, orph_lock, lock);
ret = scoutfs_inode_orphan_delete(sb, ino, orph_lock);
if (ret < 0)
goto out;
@@ -1725,7 +1685,6 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
struct scoutfs_lock *lock = NULL;
struct scoutfs_inode sinode;
struct scoutfs_key key;
bool clear_trying = false;
u64 group_nr;
int bit_nr;
int ret;
@@ -1745,7 +1704,6 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
ret = 0;
goto out;
}
clear_trying = true;
/* can't delete if it's cached in local or remote mounts */
if (scoutfs_omap_test(sb, ino) || test_bit_le(bit_nr, ldata->map.bits)) {
@@ -1772,7 +1730,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
ret = delete_inode_items(sb, ino, &sinode, lock, orph_lock);
out:
if (clear_trying)
if (ldata)
clear_bit(bit_nr, ldata->trying);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
@@ -1782,18 +1740,18 @@ out:
}
/*
* As we evict an inode we need to decide to try and delete its items
* or not, which is expensive. We only try when we have lock coverage
* and the inode has been unlinked. This catches the common case of
* regular deletion so deletion will be performed in the final unlink
* task. It also catches open-unlink or o_tmpfile that aren't cached on
* other nodes.
* As we drop an inode we need to decide to try and delete its items or
* not, which is expensive. The two common cases we want to get right
* both have cluster lock coverage and don't want to delete. Dropping
* unused inodes during read lock invalidation has the current lock and
* sees a nonzero nlink and knows not to delete. Final iput after a
* local unlink also has a lock, sees a zero nlink, and tries to perform
* item deletion in the task that dropped the last link, as users
* expect.
*
* Inodes being evicted outside of lock coverage, by referenced dentries
* or inodes that survived the attempt to drop them as their lock was
* invalidated, will not try to delete. This means that cross-mount
* open/unlink will almost certainly fall back to the orphan scanner to
* perform final deletion.
* Evicting an inode outside of cluster locking is the odd slow path
* that involves lock contention during use, the worst being the
* cross-mount open-unlink/delete case.
*/
void scoutfs_evict_inode(struct inode *inode)
{
@@ -1809,7 +1767,7 @@ void scoutfs_evict_inode(struct inode *inode)
/* clear before trying to delete tests */
scoutfs_omap_clear(sb, ino);
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink == 0)
if (!scoutfs_lock_is_covered(sb, &si->ino_lock_cov) || inode->i_nlink == 0)
try_delete_inode_items(sb, scoutfs_ino(inode));
}
@@ -1834,56 +1792,30 @@ int scoutfs_drop_inode(struct inode *inode)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const bool covered = scoutfs_lock_is_covered(sb, &si->ino_lock_cov);
trace_scoutfs_drop_inode(sb, scoutfs_ino(inode), inode->i_nlink, inode_unhashed(inode),
covered);
si->drop_invalidated);
return !covered || generic_drop_inode(inode);
return si->drop_invalidated || !scoutfs_lock_is_covered(sb, &si->ino_lock_cov) ||
generic_drop_inode(inode);
}
/*
* These iput workers can be concurrent amongst cpus. This lets us get
* some concurrency when these async final iputs end up performing very
* expensive inode deletion. Typically they're dropping linked inodes
* that lost lock coverage and the iput will evict without deleting.
*
* Keep in mind that the dputs in d_prune can ascend into parents and
* end up performing the final iput->evict deletion on other inodes.
*/
static void iput_worker(struct work_struct *work)
{
struct inode_sb_info *inf = container_of(work, struct inode_sb_info, iput_work);
struct scoutfs_inode_info *si;
struct inode *inode;
unsigned long count;
unsigned long flags;
struct scoutfs_inode_info *tmp;
struct llist_node *inodes;
bool more;
spin_lock(&inf->iput_lock);
while ((si = list_first_entry_or_null(&inf->iput_list, struct scoutfs_inode_info,
iput_head))) {
list_del_init(&si->iput_head);
count = si->iput_count;
flags = si->iput_flags;
si->iput_count = 0;
si->iput_flags = 0;
spin_unlock(&inf->iput_lock);
inodes = llist_del_all(&inf->iput_llist);
inode = &si->inode;
/* can't touch during unmount, dcache destroys w/o locks */
if ((flags & SI_IPUT_FLAG_PRUNE) && !inf->stopped)
d_prune_aliases(inode);
while (count-- > 0)
iput(inode);
/* can't touch inode after final iput */
spin_lock(&inf->iput_lock);
llist_for_each_entry_safe(si, tmp, inodes, iput_llnode) {
do {
more = atomic_dec_return(&si->iput_count) > 0;
iput(&si->inode);
} while (more);
}
spin_unlock(&inf->iput_lock);
}
/*
@@ -1900,21 +1832,15 @@ static void iput_worker(struct work_struct *work)
* Nothing stops multiple puts of an inode before the work runs so we
* can track multiple puts in flight.
*/
void scoutfs_inode_queue_iput(struct inode *inode, unsigned long flags)
void scoutfs_inode_queue_iput(struct inode *inode)
{
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
bool should_queue;
spin_lock(&inf->iput_lock);
si->iput_count++;
si->iput_flags |= flags;
if ((should_queue = list_empty(&si->iput_head)))
list_add_tail(&si->iput_head, &inf->iput_list);
spin_unlock(&inf->iput_lock);
if (should_queue)
queue_work(inf->iput_workq, &inf->iput_work);
if (atomic_inc_return(&si->iput_count) == 1)
llist_add(&si->iput_llnode, &inf->iput_llist);
smp_wmb(); /* count and list visible before work executes */
schedule_work(&inf->iput_work);
}
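/*
 * Illustrative: two racing puts of the same inode before the worker
 * runs.  The first caller's atomic_inc_return() moves iput_count from
 * 0 to 1, so it adds the inode to the llist; the second only bumps the
 * count to 2.  The worker grabs the whole list with llist_del_all()
 * and calls iput() twice, bringing the count back to zero.
 */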
/*
@@ -2118,7 +2044,7 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
trace_scoutfs_inode_walk_writeback(sb, scoutfs_ino(inode),
write, ret);
if (ret) {
scoutfs_inode_queue_iput(inode, 0);
scoutfs_inode_queue_iput(inode);
goto out;
}
@@ -2134,7 +2060,7 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
if (!write)
list_del_init(&si->writeback_entry);
scoutfs_inode_queue_iput(inode, 0);
scoutfs_inode_queue_iput(inode);
}
spin_unlock(&inf->writeback_lock);
@@ -2159,15 +2085,7 @@ int scoutfs_inode_setup(struct super_block *sb)
spin_lock_init(&inf->ino_alloc.lock);
INIT_DELAYED_WORK(&inf->orphan_scan_dwork, inode_orphan_scan_worker);
INIT_WORK(&inf->iput_work, iput_worker);
spin_lock_init(&inf->iput_lock);
INIT_LIST_HEAD(&inf->iput_list);
/* re-entrant, worker locks with itself and queueing */
inf->iput_workq = alloc_workqueue("scoutfs_inode_iput", WQ_UNBOUND, 0);
if (!inf->iput_workq) {
kfree(inf);
return -ENOMEM;
}
init_llist_head(&inf->iput_llist);
sbi->inode_sb_info = inf;
@@ -2203,18 +2121,14 @@ void scoutfs_inode_flush_iput(struct super_block *sb)
DECLARE_INODE_SB_INFO(sb, inf);
if (inf)
flush_workqueue(inf->iput_workq);
flush_work(&inf->iput_work);
}
void scoutfs_inode_destroy(struct super_block *sb)
{
struct inode_sb_info *inf = SCOUTFS_SB(sb)->inode_sb_info;
if (inf) {
if (inf->iput_workq)
destroy_workqueue(inf->iput_workq);
kfree(inf);
}
kfree(inf);
}
void scoutfs_inode_exit(void)

View File

@@ -22,7 +22,7 @@ struct scoutfs_inode_info {
u64 online_blocks;
u64 offline_blocks;
u32 flags;
struct kc_timespec crtime;
struct timespec crtime;
/*
* Protects per-inode extent items, most particularly readers
@@ -56,16 +56,14 @@ struct scoutfs_inode_info {
struct scoutfs_lock_coverage ino_lock_cov;
struct list_head iput_head;
unsigned long iput_count;
unsigned long iput_flags;
/* drop if i_count hits 0, allows drop while invalidate holds coverage */
bool drop_invalidated;
struct llist_node iput_llnode;
atomic_t iput_count;
struct inode inode;
};
/* try to prune dcache aliases with queued iput */
#define SI_IPUT_FLAG_PRUNE (1 << 0)
static inline struct scoutfs_inode_info *SCOUTFS_I(struct inode *inode)
{
return container_of(inode, struct scoutfs_inode_info, inode);
@@ -80,7 +78,7 @@ struct inode *scoutfs_alloc_inode(struct super_block *sb);
void scoutfs_destroy_inode(struct inode *inode);
int scoutfs_drop_inode(struct inode *inode);
void scoutfs_evict_inode(struct inode *inode);
void scoutfs_inode_queue_iput(struct inode *inode, unsigned long flags);
void scoutfs_inode_queue_iput(struct inode *inode);
#define SCOUTFS_IGF_LINKED (1 << 0) /* enoent if nlink == 0 */
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf, int igf);
@@ -123,19 +121,12 @@ void scoutfs_inode_get_onoff(struct inode *inode, s64 *on, s64 *off);
int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock);
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock);
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat);
#else
int scoutfs_getattr(const struct path *path, struct kstat *stat,
u32 request_mask, unsigned int query_flags);
#endif
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
struct scoutfs_lock *primary);
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
struct scoutfs_lock *primary);
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
void scoutfs_inode_schedule_orphan_dwork(struct super_block *sb);
void scoutfs_inode_queue_writeback(struct inode *inode);

View File

@@ -22,7 +22,6 @@
#include <linux/sched.h>
#include <linux/aio.h>
#include <linux/list_sort.h>
#include <linux/backing-dev.h>
#include "format.h"
#include "key.h"
@@ -303,7 +302,7 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
if (ret)
return ret;
inode_lock(inode);
mutex_lock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -352,7 +351,7 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
out:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);
mutex_unlock(&inode->i_mutex);
mnt_drop_write_file(file);
trace_scoutfs_ioc_release_ret(sb, scoutfs_ino(inode), ret);
@@ -394,7 +393,7 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
goto out;
}
inode_lock(inode);
mutex_lock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -412,7 +411,7 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
unlock:
inode_unlock(inode);
mutex_unlock(&inode->i_mutex);
iput(inode);
out:
return ret;
@@ -449,6 +448,7 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
{
struct inode *inode = file_inode(file);
struct super_block *sb = inode->i_sb;
struct address_space *mapping = inode->i_mapping;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
struct scoutfs_ioctl_stage args;
@@ -480,10 +480,8 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
/* the iocb is really only used for the file pointer :P */
init_sync_kiocb(&kiocb, file);
kiocb.ki_pos = args.offset;
#ifdef KC_LINUX_AIO_KI_LEFT
kiocb.ki_left = args.length;
kiocb.ki_nbytes = args.length;
#endif
iov.iov_base = (void __user *)(unsigned long)args.buf_ptr;
iov.iov_len = args.length;
@@ -491,7 +489,7 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
if (ret)
return ret;
inode_lock(inode);
mutex_lock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -518,7 +516,7 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
}
si->staging = true;
current->backing_dev_info = inode_to_bdi(inode);
current->backing_dev_info = mapping->backing_dev_info;
pos = args.offset;
written = 0;
@@ -535,7 +533,7 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
out:
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);
mutex_unlock(&inode->i_mutex);
mnt_drop_write_file(file);
trace_scoutfs_ioc_stage_ret(sb, scoutfs_ino(inode), ret);
@@ -654,7 +652,7 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
if (ret)
goto out;
inode_lock(inode);
mutex_lock(&inode->i_mutex);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -698,7 +696,7 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
unlock:
scoutfs_inode_index_unlock(sb, &ind_locks);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);
mutex_unlock(&inode->i_mutex);
mnt_drop_write_file(file);
out:
@@ -1400,110 +1398,6 @@ out:
return ret ?: nr;
}
/*
* Copy entries that point to an inode to the user's buffer. We copy to
* userspace from copies of the entries that are acquired under a lock
* so that we don't fault while holding cluster locks. It also gives us
* a chance to limit the amount of work under each lock hold.
*/
static long scoutfs_ioc_get_referring_entries(struct file *file, unsigned long arg)
{
struct super_block *sb = file_inode(file)->i_sb;
struct scoutfs_ioctl_get_referring_entries gre;
struct scoutfs_link_backref_entry *bref = NULL;
struct scoutfs_link_backref_entry *bref_tmp;
struct scoutfs_ioctl_dirent __user *uent;
struct scoutfs_ioctl_dirent ent;
LIST_HEAD(list);
u64 copied;
int name_len;
int bytes;
long nr;
int ret;
if (!capable(CAP_DAC_READ_SEARCH))
return -EPERM;
if (copy_from_user(&gre, (void __user *)arg, sizeof(gre)))
return -EFAULT;
uent = (void __user *)(unsigned long)gre.entries_ptr;
copied = 0;
nr = 0;
/* use entry as cursor between calls */
ent.dir_ino = gre.dir_ino;
ent.dir_pos = gre.dir_pos;
for (;;) {
ret = scoutfs_dir_add_next_linkrefs(sb, gre.ino, ent.dir_ino, ent.dir_pos, 1024,
&list);
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
goto out;
}
/* _add_next adds each entry to the head, _reverse for key order */
list_for_each_entry_safe_reverse(bref, bref_tmp, &list, head) {
list_del_init(&bref->head);
name_len = bref->name_len;
bytes = ALIGN(offsetof(struct scoutfs_ioctl_dirent, name[name_len + 1]),
16);
if (copied + bytes > gre.entries_bytes) {
ret = -EINVAL;
goto out;
}
ent.dir_ino = bref->dir_ino;
ent.dir_pos = bref->dir_pos;
ent.ino = gre.ino;
ent.entry_bytes = bytes;
ent.flags = bref->last ? SCOUTFS_IOCTL_DIRENT_FLAG_LAST : 0;
ent.d_type = bref->d_type;
ent.name_len = name_len;
if (copy_to_user(uent, &ent, sizeof(struct scoutfs_ioctl_dirent)) ||
copy_to_user(&uent->name[0], bref->dent.name, name_len) ||
put_user('\0', &uent->name[name_len])) {
ret = -EFAULT;
goto out;
}
kfree(bref);
bref = NULL;
uent = (void __user *)uent + bytes;
copied += bytes;
nr++;
if (nr == LONG_MAX || (ent.flags & SCOUTFS_IOCTL_DIRENT_FLAG_LAST)) {
ret = 0;
goto out;
}
}
/* advance cursor pos from last copied entry */
if (++ent.dir_pos == 0) {
if (++ent.dir_ino == 0) {
ret = 0;
goto out;
}
}
}
ret = 0;
out:
kfree(bref);
list_for_each_entry_safe(bref, bref_tmp, &list, head) {
list_del_init(&bref->head);
kfree(bref);
}
return nr ?: ret;
}
long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
@@ -1539,8 +1433,6 @@ long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
return scoutfs_ioc_read_xattr_totals(file, arg);
case SCOUTFS_IOC_GET_ALLOCATED_INOS:
return scoutfs_ioc_get_allocated_inos(file, arg);
case SCOUTFS_IOC_GET_REFERRING_ENTRIES:
return scoutfs_ioc_get_referring_entries(file, arg);
}
return -ENOTTY;

View File

@@ -559,118 +559,4 @@ struct scoutfs_ioctl_get_allocated_inos {
#define SCOUTFS_IOC_GET_ALLOCATED_INOS \
_IOW(SCOUTFS_IOCTL_MAGIC, 16, struct scoutfs_ioctl_get_allocated_inos)
/*
* Get directory entries that refer to a specific inode.
*
* @ino: The target ino that we're finding referring entries to.
* Constant across all the calls that make up an iteration over all the
* inode's entries.
*
* @dir_ino: The inode number of a directory containing the entry to our
* inode to search from. If this parent directory contains no more
* entries to our inode then we'll search through other parent directory
* inodes in inode order.
*
* @dir_pos: The position in the dir_ino parent directory of the entry
* to our inode to search from. If there is no entry at this position
* then we'll search through other entry positions in increasing order.
* If we exhaust the parent directory then we'll search through
* additional parent directories in inode order.
*
* @entries_ptr: A pointer to the buffer where found entries will be
* stored. The pointer must be aligned to 16 bytes.
*
* @entries_bytes: The size of the buffer that will contain entries.
*
* To start iterating set the desired target ino, dir_ino to 0, dir_pos
* to 0, and set entries_ptr and entries_bytes to a sufficiently large buffer.
* Each entry struct that's stored in the buffer adds some overhead so a
* large multiple of the largest possible name is a reasonable choice.
* (A few multiples of PATH_MAX perhaps.)
*
* Each call returns the total number of entries that were stored in the
* entries buffer. Zero is returned when the search was successful and
* no referring entries were found. The entries can be iterated over by
* advancing each starting struct offset by the total number of bytes in
* each entry. If the _LAST flag is set on an entry then there were no
* more entries referring to the inode at the time of the call and
* iteration can be stopped.
*
* To resume iteration set the next call's starting dir_ino and dir_pos
* to one past the last entry seen. Increment the last entry's dir_pos,
* and if it wrapped to 0, increment its dir_ino.
*
* This does not check that the caller has permission to read the
* entries found in each containing directory. It requires
* CAP_DAC_READ_SEARCH which bypasses path traversal permissions
* checking.
*
* Entries returned by a single call can reflect any combination of
* racing creation and removal of entries. Each entry existed at the
* time it was read though it may have changed in the time it took to
* return from the call. The set of entries returned may no longer
* reflect the current set of entries and may not have existed at the
* same time.
*
* This has no knowledge of the life cycle of the inode. It can return
* 0 when there are no referring entries because either the target inode
* doesn't exist, it is in the process of being deleted, or because it
* is still open while being unlinked.
*
* On success this returns the number of entries filled in the buffer.
* A return of 0 indicates that no entries referred to the inode.
*
* EINVAL is returned when there is a problem with the buffer. Either
* it was not aligned or it was not large enough for the first entry.
*
* Many other errnos indicate hard failure to find the next entry.
*/
struct scoutfs_ioctl_get_referring_entries {
__u64 ino;
__u64 dir_ino;
__u64 dir_pos;
__u64 entries_ptr;
__u64 entries_bytes;
};
/*
* @dir_ino: The inode of the directory containing the entry.
*
* @dir_pos: The readdir f_pos position of the entry within the
* directory.
*
* @ino: The inode number of the target of the entry.
*
* @flags: Flags associated with this entry.
*
* @d_type: Inode type as specified with DT_ enum values in readdir(3).
*
* @entry_bytes: The total bytes taken by the entry in memory, including
* the name and any alignment padding. The start of a following entry
* will be found after this number of bytes.
*
* @name_len: The number of bytes in the name not including the trailing
* null, ala strlen(3).
*
* @name: The null terminated name of the referring entry. In the
* struct definition this array is sized to naturally align the struct.
* That number of padded bytes is not necessarily found in the buffer
* returned by _get_referring_entries.
*/
struct scoutfs_ioctl_dirent {
__u64 dir_ino;
__u64 dir_pos;
__u64 ino;
__u16 entry_bytes;
__u8 flags;
__u8 d_type;
__u8 name_len;
__u8 name[3];
};
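/*
 * Illustrative: assuming no implicit padding before name[], the fixed
 * fields above occupy 29 bytes, so a 5 byte name gives entry_bytes =
 * ALIGN(29 + 5 + 1, 16) = 48 and the next entry starts 48 bytes later.
 */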
#define SCOUTFS_IOCTL_DIRENT_FLAG_LAST (1 << 0)
#define SCOUTFS_IOC_GET_REFERRING_ENTRIES \
_IOW(SCOUTFS_IOCTL_MAGIC, 17, struct scoutfs_ioctl_get_referring_entries)
#endif
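A minimal userspace sketch of the iteration described above; the
"scoutfs_ioctl.h" header name, the buffer size, and the error handling
are illustrative assumptions, not part of scoutfs.  The fd only needs
to be any open descriptor on the mounted filesystem, since the handler
uses it just to reach the superblock.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include "scoutfs_ioctl.h"	/* assumed name for the definitions above */

/* print every entry that refers to ino; fd is any open fd on the fs */
static int print_referring_entries(int fd, uint64_t ino)
{
	struct scoutfs_ioctl_get_referring_entries gre;
	struct scoutfs_ioctl_dirent *ent;
	size_t buf_bytes = 64 * 1024;	/* a few multiples of PATH_MAX */
	uint64_t next_dir_ino = 0;
	uint64_t next_dir_pos = 0;
	void *buf;
	int done = 0;
	int nr;
	int i;

	/* entries_ptr must be aligned to 16 bytes */
	if (posix_memalign(&buf, 16, buf_bytes))
		return -1;

	while (!done) {
		memset(&gre, 0, sizeof(gre));
		gre.ino = ino;
		gre.dir_ino = next_dir_ino;	/* 0,0 starts iteration */
		gre.dir_pos = next_dir_pos;
		gre.entries_ptr = (uintptr_t)buf;
		gre.entries_bytes = buf_bytes;

		nr = ioctl(fd, SCOUTFS_IOC_GET_REFERRING_ENTRIES, &gre);
		if (nr < 0) {
			perror("SCOUTFS_IOC_GET_REFERRING_ENTRIES");
			free(buf);
			return -1;
		}
		if (nr == 0)	/* no referring entries were found */
			break;

		ent = buf;
		for (i = 0; i < nr; i++) {
			printf("dir %llu pos %llu -> %s\n",
			       (unsigned long long)ent->dir_ino,
			       (unsigned long long)ent->dir_pos,
			       (const char *)ent->name);

			next_dir_ino = ent->dir_ino;
			next_dir_pos = ent->dir_pos;
			if (ent->flags & SCOUTFS_IOCTL_DIRENT_FLAG_LAST) {
				done = 1;
				break;
			}
			/* entry_bytes includes the name and alignment padding */
			ent = (void *)ent + ent->entry_bytes;
		}

		/* resume one past the last entry seen */
		if (++next_dir_pos == 0)
			next_dir_ino++;
	}

	free(buf);
	return 0;
}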

View File

@@ -27,7 +27,6 @@
#include "trans.h"
#include "counters.h"
#include "scoutfs_trace.h"
#include "util.h"
/*
* The item cache maintains a consistent view of items that are read
@@ -77,10 +76,8 @@ struct item_cache_info {
/* almost always read, barely written */
struct super_block *sb;
struct item_percpu_pages __percpu *pcpu_pages;
KC_DEFINE_SHRINKER(shrinker);
#ifdef KC_CPU_NOTIFIER
struct shrinker shrinker;
struct notifier_block notifier;
#endif
/* often walked, but per-cpu refs are fast path */
rwlock_t rwlock;
@@ -1679,14 +1676,6 @@ static int lock_safe(struct scoutfs_lock *lock, struct scoutfs_key *key,
return 0;
}
static int optional_lock_mode_match(struct scoutfs_lock *lock, int mode)
{
if (WARN_ON_ONCE(lock && lock->mode != mode))
return -EINVAL;
else
return 0;
}
/*
* Copy the cached item's value into the caller's value. The number of
* bytes copied is returned. A null val returns 0.
@@ -1843,19 +1832,12 @@ out:
* also increase the seqs. It lets us limit the inputs of item merging
* to the last stable seq and ensure that all the items in open
* transactions and granted locks will have greater seqs.
*
* This is a little awkward for WRITE_ONLY locks which can have much
* older versions than the version of locked primary data that they're
* operating on behalf of. Callers can optionally provide that primary
* lock to get the version from. This ensures that items created under
* WRITE_ONLY locks can not have versions less than their primary data.
*/
static u64 item_seq(struct super_block *sb, struct scoutfs_lock *lock,
struct scoutfs_lock *primary)
static u64 item_seq(struct super_block *sb, struct scoutfs_lock *lock)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
return max3(sbi->trans_seq, lock->write_seq, primary ? primary->write_seq : 0);
return max(sbi->trans_seq, lock->write_seq);
}
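/*
 * Illustrative: an item created under a WRITE_ONLY orphan lock with
 * write_seq 10 while the open transaction seq is 20 and the inode's
 * primary WRITE lock has write_seq 25 gets seq max3(20, 10, 25) = 25,
 * so it can never be older than the primary data it was created for.
 */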
/*
@@ -1890,7 +1872,7 @@ int scoutfs_item_dirty(struct super_block *sb, struct scoutfs_key *key,
if (!item || item->deletion) {
ret = -ENOENT;
} else {
item->seq = item_seq(sb, lock, NULL);
item->seq = item_seq(sb, lock);
mark_item_dirty(sb, cinf, pg, NULL, item);
ret = 0;
}
@@ -1907,10 +1889,10 @@ out:
*/
static int item_create(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock,
struct scoutfs_lock *primary, int mode, bool force)
int mode, bool force)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock, primary);
const u64 seq = item_seq(sb, lock);
struct cached_item *found;
struct cached_item *item;
struct cached_page *pg;
@@ -1920,8 +1902,7 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,
scoutfs_inc_counter(sb, item_create);
if ((ret = lock_safe(lock, key, mode)) ||
(ret = optional_lock_mode_match(primary, SCOUTFS_LOCK_WRITE)))
if ((ret = lock_safe(lock, key, mode)))
goto out;
ret = scoutfs_forest_set_bloom_bits(sb, lock);
@@ -1962,15 +1943,15 @@ out:
int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
return item_create(sb, key, val, val_len, lock, NULL,
return item_create(sb, key, val, val_len, lock,
SCOUTFS_LOCK_READ, false);
}
int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock, struct scoutfs_lock *primary)
struct scoutfs_lock *lock)
{
return item_create(sb, key, val, val_len, lock, primary,
return item_create(sb, key, val, val_len, lock,
SCOUTFS_LOCK_WRITE_ONLY, true);
}
@@ -1984,7 +1965,7 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock, NULL);
const u64 seq = item_seq(sb, lock);
struct cached_item *item;
struct cached_item *found;
struct cached_page *pg;
@@ -2044,16 +2025,12 @@ out:
* current items so the caller always writes with write only locks. If
* combining the current delta item and the caller's item results in a
* null we can just drop it, we don't have to emit a deletion item.
*
* Delta items don't have to worry about creating items with old
* versions under write_only locks. The versions don't impact how we
* merge two items.
*/
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock, NULL);
const u64 seq = item_seq(sb, lock);
struct cached_item *item;
struct cached_page *pg;
struct rb_node **pnode;
@@ -2122,11 +2099,10 @@ out:
* deletion item if there isn't one already cached.
*/
static int item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock, struct scoutfs_lock *primary,
int mode, bool force)
struct scoutfs_lock *lock, int mode, bool force)
{
DECLARE_ITEM_CACHE_INFO(sb, cinf);
const u64 seq = item_seq(sb, lock, primary);
const u64 seq = item_seq(sb, lock);
struct cached_item *item;
struct cached_page *pg;
struct rb_node **pnode;
@@ -2135,8 +2111,7 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,
scoutfs_inc_counter(sb, item_delete);
if ((ret = lock_safe(lock, key, mode)) ||
(ret = optional_lock_mode_match(primary, SCOUTFS_LOCK_WRITE)))
if ((ret = lock_safe(lock, key, mode)))
goto out;
ret = scoutfs_forest_set_bloom_bits(sb, lock);
@@ -2186,13 +2161,13 @@ out:
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock)
{
return item_delete(sb, key, lock, NULL, SCOUTFS_LOCK_WRITE, false);
return item_delete(sb, key, lock, SCOUTFS_LOCK_WRITE, false);
}
int scoutfs_item_delete_force(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock, struct scoutfs_lock *primary)
struct scoutfs_lock *lock)
{
return item_delete(sb, key, lock, primary, SCOUTFS_LOCK_WRITE_ONLY, true);
return item_delete(sb, key, lock, SCOUTFS_LOCK_WRITE_ONLY, true);
}
u64 scoutfs_item_dirty_pages(struct super_block *sb)
@@ -2280,7 +2255,7 @@ int scoutfs_item_write_dirty(struct super_block *sb)
ret = -ENOMEM;
goto out;
}
list_add(&page->lru, &pages);
list_add(&page->list, &pages);
first = NULL;
prev = &first;
@@ -2293,7 +2268,7 @@ int scoutfs_item_write_dirty(struct super_block *sb)
ret = -ENOMEM;
goto out;
}
list_add(&second->lru, &pages);
list_add(&second->list, &pages);
}
/* read lock next sorted page, we're only dirty_list user */
@@ -2350,8 +2325,8 @@ int scoutfs_item_write_dirty(struct super_block *sb)
/* write all the dirty items into log btree blocks */
ret = scoutfs_forest_insert_list(sb, first);
out:
list_for_each_entry_safe(page, second, &pages, lru) {
list_del_init(&page->lru);
list_for_each_entry_safe(page, second, &pages, list) {
list_del_init(&page->list);
__free_page(page);
}
@@ -2533,35 +2508,27 @@ retry:
put_pg(sb, right);
}
static unsigned long item_cache_count_objects(struct shrinker *shrink,
struct shrink_control *sc)
{
struct item_cache_info *cinf = KC_SHRINKER_CONTAINER_OF(shrink, struct item_cache_info);
struct super_block *sb = cinf->sb;
scoutfs_inc_counter(sb, item_cache_count_objects);
return shrinker_min_long(cinf->lru_pages);
}
/*
* Shrink the size of the item cache. We're operating against the fast
* path lock ordering and we skip pages if we can't acquire locks. We
* can run into dirty pages or pages with items that weren't visible to
* the earliest active reader which must be skipped.
*/
static unsigned long item_cache_scan_objects(struct shrinker *shrink,
struct shrink_control *sc)
static int item_lru_shrink(struct shrinker *shrink,
struct shrink_control *sc)
{
struct item_cache_info *cinf = KC_SHRINKER_CONTAINER_OF(shrink, struct item_cache_info);
struct item_cache_info *cinf = container_of(shrink,
struct item_cache_info,
shrinker);
struct super_block *sb = cinf->sb;
struct cached_page *tmp;
struct cached_page *pg;
unsigned long freed = 0;
u64 first_reader_seq;
int nr = sc->nr_to_scan;
int nr;
scoutfs_inc_counter(sb, item_cache_scan_objects);
if (sc->nr_to_scan == 0)
goto out;
nr = sc->nr_to_scan;
/* can't invalidate pages with items that weren't visible to first reader */
first_reader_seq = first_active_reader_seq(cinf);
@@ -2593,7 +2560,6 @@ static unsigned long item_cache_scan_objects(struct shrinker *shrink,
rbtree_erase(&pg->node, &cinf->pg_root);
invalidate_pcpu_page(pg);
write_unlock(&pg->rwlock);
freed++;
put_pg(sb, pg);
@@ -2603,11 +2569,10 @@ static unsigned long item_cache_scan_objects(struct shrinker *shrink,
write_unlock(&cinf->rwlock);
spin_unlock(&cinf->lru_lock);
return freed;
out:
return min_t(unsigned long, cinf->lru_pages, INT_MAX);
}
#ifdef KC_CPU_NOTIFIER
static int item_cpu_callback(struct notifier_block *nfb,
unsigned long action, void *hcpu)
{
@@ -2622,7 +2587,6 @@ static int item_cpu_callback(struct notifier_block *nfb,
return NOTIFY_OK;
}
#endif
int scoutfs_item_setup(struct super_block *sb)
{
@@ -2652,13 +2616,11 @@ int scoutfs_item_setup(struct super_block *sb)
for_each_possible_cpu(cpu)
init_pcpu_pages(cinf, cpu);
KC_INIT_SHRINKER_FUNCS(&cinf->shrinker, item_cache_count_objects,
item_cache_scan_objects);
KC_REGISTER_SHRINKER(&cinf->shrinker);
#ifdef KC_CPU_NOTIFIER
cinf->shrinker.shrink = item_lru_shrink;
cinf->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&cinf->shrinker);
cinf->notifier.notifier_call = item_cpu_callback;
register_hotcpu_notifier(&cinf->notifier);
#endif
sbi->item_cache_info = cinf;
return 0;
@@ -2678,10 +2640,8 @@ void scoutfs_item_destroy(struct super_block *sb)
if (cinf) {
BUG_ON(!list_empty(&cinf->active_list));
#ifdef KC_CPU_NOTIFIER
unregister_hotcpu_notifier(&cinf->notifier);
#endif
KC_UNREGISTER_SHRINKER(&cinf->shrinker);
unregister_shrinker(&cinf->shrinker);
for_each_possible_cpu(cpu)
drop_pcpu_pages(sb, cinf, cpu);

View File

@@ -15,15 +15,16 @@ int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len,
struct scoutfs_lock *lock, struct scoutfs_lock *primary);
struct scoutfs_lock *lock);
int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock);
int scoutfs_item_delete_force(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_lock *lock, struct scoutfs_lock *primary);
int scoutfs_item_delete_force(struct super_block *sb,
struct scoutfs_key *key,
struct scoutfs_lock *lock);
u64 scoutfs_item_dirty_pages(struct super_block *sb);
int scoutfs_item_write_dirty(struct super_block *sb);

View File

@@ -1,84 +0,0 @@
#include <linux/uio.h>
#include "kernelcompat.h"
#ifdef KC_SHRINKER_SHRINK
#include <linux/shrinker.h>
/*
* If a target doesn't have that .{count,scan}_objects() interface then
* we have a .shrink() helper that performs the shrink work in terms of
* count/scan.
*/
int kc_shrink_wrapper_fn(struct shrinker *shrink, struct shrink_control *sc)
{
struct kc_shrinker_wrapper *wrapper = container_of(shrink, struct kc_shrinker_wrapper, shrink);
unsigned long nr;
unsigned long rc;
if (sc->nr_to_scan != 0) {
rc = wrapper->scan_objects(shrink, sc);
/* translate magic values to the equivalent for older kernels */
if (rc == SHRINK_STOP)
return -1;
else if (rc == SHRINK_EMPTY)
return 0;
}
nr = wrapper->count_objects(shrink, sc);
return min_t(unsigned long, nr, INT_MAX);
}
#endif
#ifndef KC_CURRENT_TIME_INODE
struct timespec64 kc_current_time(struct inode *inode)
{
struct timespec64 now;
unsigned gran;
getnstimeofday64(&now);
if (unlikely(!inode->i_sb)) {
WARN(1, "current_time() called with uninitialized super_block in the inode");
return now;
}
gran = inode->i_sb->s_time_gran;
/* Avoid division in the common cases 1 ns and 1 s. */
if (gran == 1) {
/* nothing */
} else if (gran == NSEC_PER_SEC) {
now.tv_nsec = 0;
} else if (gran > 1 && gran < NSEC_PER_SEC) {
now.tv_nsec -= now.tv_nsec % gran;
} else {
WARN(1, "illegal file time granularity: %u", gran);
}
return now;
}
#endif
#ifndef KC_GENERIC_FILE_BUFFERED_WRITE
ssize_t
kc_generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos, loff_t *ppos,
size_t count, ssize_t written)
{
struct file *file = iocb->ki_filp;
ssize_t status;
struct iov_iter i;
iov_iter_init(&i, WRITE, iov, nr_segs, count);
status = generic_perform_write(file, &i, pos);
if (likely(status >= 0)) {
written += status;
*ppos = pos + status;
}
return written ? written : status;
}
#endif

View File

@@ -1,35 +1,8 @@
#ifndef _SCOUTFS_KERNELCOMPAT_H_
#define _SCOUTFS_KERNELCOMPAT_H_
#include <linux/kernel.h>
#include <linux/fs.h>
/*
* v4.15-rc3-4-gae5e165d855d
*
* new API for handling inode->i_version. It requires including a new
* header wherever the API is used; we include it here once for
* convenience instead of in every file that needs it.
*/
#ifdef KC_NEED_LINUX_IVERSION_H
#include <linux/iversion.h>
#else
/*
* Kernels before the above version fall back to manipulating
* inode->i_version directly, with degraded methods.
*/
#define inode_set_iversion_queried(inode, val) \
do { \
(inode)->i_version = val; \
} while (0)
#define inode_peek_iversion(inode) \
({ \
(inode)->i_version; \
})
#endif
#ifndef KC_ITERATE_DIR_CONTEXT
#include <linux/fs.h>
typedef filldir_t kc_readdir_ctx_t;
#define KC_DECLARE_READDIR(name, file, dirent, ctx) name(file, dirent, ctx)
#define KC_FOP_READDIR readdir
@@ -73,204 +46,4 @@ static inline int dir_emit_dots(struct file *file, void *dirent,
}
#endif
#ifdef KC_POSIX_ACL_VALID_USER_NS
#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(user_ns, acl)
#else
#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(acl)
#endif
/*
* v3.6-rc1-24-gdbf2576e37da
*
* All workqueues are now non-reentrant, and the bit flag is removed
* shortly after its uses were removed.
*/
#ifndef WQ_NON_REENTRANT
#define WQ_NON_REENTRANT 0
#endif
/*
* v3.18-rc2-19-gb5ae6b15bd73
*
* Folds d_materialise_unique into d_splice_alias. Note reversal
* of arguments (Also note Documentation/filesystems/porting.rst)
*/
#ifndef KC_D_MATERIALISE_UNIQUE
#define d_materialise_unique(dentry, inode) d_splice_alias(inode, dentry)
#endif
/*
* v4.8-rc1-29-g31051c85b5e2
*
* fall back to inode_change_ok() if setattr_prepare() isn't available
*/
#ifndef KC_SETATTR_PREPARE
#define setattr_prepare(dentry, attr) inode_change_ok(d_inode(dentry), attr)
#endif
#ifndef KC___POSIX_ACL_CREATE
#define __posix_acl_create posix_acl_create
#define __posix_acl_chmod posix_acl_chmod
#endif
#ifndef KC_PERCPU_COUNTER_ADD_BATCH
#define percpu_counter_add_batch __percpu_counter_add
#endif
#ifndef KC_MEMALLOC_NOFS_SAVE
#define memalloc_nofs_save memalloc_noio_save
#define memalloc_nofs_restore memalloc_noio_restore
#endif
#ifdef KC_BIO_BI_OPF
#define kc_bio_get_opf(bio) \
({ \
(bio)->bi_opf; \
})
#define kc_bio_set_opf(bio, opf) \
do { \
(bio)->bi_opf = opf; \
} while (0)
#define kc_bio_set_sector(bio, sect) \
do { \
(bio)->bi_iter.bi_sector = sect;\
} while (0)
#define kc_submit_bio(bio) submit_bio(bio)
#else
#define kc_bio_get_opf(bio) \
({ \
(bio)->bi_rw; \
})
#define kc_bio_set_opf(bio, opf) \
do { \
(bio)->bi_rw = opf; \
} while (0)
#define kc_bio_set_sector(bio, sect) \
do { \
(bio)->bi_sector = sect; \
} while (0)
#define kc_submit_bio(bio) \
do { \
submit_bio((bio)->bi_rw, bio); \
} while (0)
#define bio_set_dev(bio, bdev) \
do { \
(bio)->bi_bdev = (bdev); \
} while (0)
#endif
#ifdef KC_BIO_BI_STATUS
#define KC_DECLARE_BIO_END_IO(name, bio) name(bio)
#define kc_bio_get_errno(bio) ({ blk_status_to_errno((bio)->bi_status); })
#else
#define KC_DECLARE_BIO_END_IO(name, bio) name(bio, int _error_arg)
#define kc_bio_get_errno(bio) ({ (int)((void)(bio), _error_arg); })
#endif
/*
* v4.13-rc1-6-ge462ec50cb5f
*
* MS_* (mount) flags from <linux/mount.h> should not be used in the kernel
* anymore from 4.x onwards. Instead, we need to use the SB_* (superblock) flags
*/
#ifndef SB_POSIXACL
#define SB_POSIXACL MS_POSIXACL
#define SB_I_VERSION MS_I_VERSION
#endif
#ifndef KC_CURRENT_TIME_INODE
struct timespec64 kc_current_time(struct inode *inode);
#define current_time kc_current_time
#define kc_timespec timespec
#else
#define kc_timespec timespec64
#endif
#ifndef KC_SHRINKER_SHRINK
#define KC_DEFINE_SHRINKER(name) struct shrinker name
#define KC_INIT_SHRINKER_FUNCS(name, countfn, scanfn) do { \
__typeof__(name) _shrink = (name); \
_shrink->count_objects = (countfn); \
_shrink->scan_objects = (scanfn); \
_shrink->seeks = DEFAULT_SEEKS; \
} while (0)
#define KC_SHRINKER_CONTAINER_OF(ptr, type) container_of(ptr, type, shrinker)
#define KC_REGISTER_SHRINKER(ptr) (register_shrinker(ptr))
#define KC_UNREGISTER_SHRINKER(ptr) (unregister_shrinker(ptr))
#define KC_SHRINKER_FN(ptr) (ptr)
#else
#include <linux/shrinker.h>
#ifndef SHRINK_STOP
#define SHRINK_STOP (~0UL)
#define SHRINK_EMPTY (~0UL - 1)
#endif
int kc_shrink_wrapper_fn(struct shrinker *shrink, struct shrink_control *sc);
struct kc_shrinker_wrapper {
unsigned long (*count_objects)(struct shrinker *, struct shrink_control *sc);
unsigned long (*scan_objects)(struct shrinker *, struct shrink_control *sc);
struct shrinker shrink;
};
#define KC_DEFINE_SHRINKER(name) struct kc_shrinker_wrapper name;
#define KC_INIT_SHRINKER_FUNCS(name, countfn, scanfn) do { \
struct kc_shrinker_wrapper *_wrap = (name); \
_wrap->count_objects = (countfn); \
_wrap->scan_objects = (scanfn); \
_wrap->shrink.shrink = kc_shrink_wrapper_fn; \
_wrap->shrink.seeks = DEFAULT_SEEKS; \
} while (0)
#define KC_SHRINKER_CONTAINER_OF(ptr, type) container_of(container_of(ptr, struct kc_shrinker_wrapper, shrink), type, shrinker)
#define KC_REGISTER_SHRINKER(ptr) (register_shrinker(ptr.shrink))
#define KC_UNREGISTER_SHRINKER(ptr) (unregister_shrinker(ptr.shrink))
#define KC_SHRINKER_FN(ptr) (ptr.shrink)
#endif /* KC_SHRINKER_SHRINK */
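
The KC_* shrinker macros above let one registration pattern build against kernels with either the legacy .shrink() callback or the newer .count_objects()/.scan_objects() pair, which is how the item cache and lock info hunks earlier in this diff use them. Below is a minimal, hypothetical sketch of that pattern; struct example_cache and the example_* functions are illustrative names, locking is omitted, and it assumes the kernelcompat.h definitions above are included.

/* illustrative cache that embeds the compat shrinker; locking omitted */
struct example_cache {
        unsigned long nr_cached;
        KC_DEFINE_SHRINKER(shrinker);
};

static unsigned long example_count_objects(struct shrinker *shrink,
                                           struct shrink_control *sc)
{
        struct example_cache *cache =
                KC_SHRINKER_CONTAINER_OF(shrink, struct example_cache);

        return cache->nr_cached;
}

static unsigned long example_scan_objects(struct shrinker *shrink,
                                          struct shrink_control *sc)
{
        struct example_cache *cache =
                KC_SHRINKER_CONTAINER_OF(shrink, struct example_cache);
        unsigned long freed = 0;

        /* drop up to nr_to_scan cached objects and report how many went */
        while (cache->nr_cached > 0 && freed < sc->nr_to_scan) {
                cache->nr_cached--;
                freed++;
        }

        return freed;
}

static void example_cache_setup(struct example_cache *cache)
{
        KC_INIT_SHRINKER_FUNCS(&cache->shrinker, example_count_objects,
                               example_scan_objects);
        KC_REGISTER_SHRINKER(&cache->shrinker);
}

static void example_cache_destroy(struct example_cache *cache)
{
        KC_UNREGISTER_SHRINKER(&cache->shrinker);
}
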
#ifdef KC_KERNEL_GETSOCKNAME_ADDRLEN
#include <linux/net.h>
#include <linux/inet.h>
static inline int kc_kernel_getsockname(struct socket *sock, struct sockaddr *addr)
{
int addrlen = sizeof(struct sockaddr_in);
int ret = kernel_getsockname(sock, addr, &addrlen);
if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
return -EAFNOSUPPORT;
else if (ret < 0)
return ret;
return sizeof(struct sockaddr_in);
}
static inline int kc_kernel_getpeername(struct socket *sock, struct sockaddr *addr)
{
int addrlen = sizeof(struct sockaddr_in);
int ret = kernel_getpeername(sock, addr, &addrlen);
if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
return -EAFNOSUPPORT;
else if (ret < 0)
return ret;
return sizeof(struct sockaddr_in);
}
#else
#define kc_kernel_getsockname(sock, addr) kernel_getsockname(sock, addr)
#define kc_kernel_getpeername(sock, addr) kernel_getpeername(sock, addr)
#endif
#ifdef KC_SOCK_CREATE_KERN_NET
#define kc_sock_create_kern(family, type, proto, res) sock_create_kern(&init_net, family, type, proto, res)
#else
#define kc_sock_create_kern sock_create_kern
#endif
#ifndef KC_GENERIC_FILE_BUFFERED_WRITE
ssize_t kc_generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos, loff_t *ppos,
size_t count, ssize_t written);
#define generic_file_buffered_write kc_generic_file_buffered_write
#endif
#endif

View File

@@ -12,12 +12,12 @@
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/preempt_mask.h> /* a rhel sched.h needed preempt_offset? */
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/sort.h>
#include <linux/ctype.h>
#include <linux/posix_acl.h>
#include "super.h"
#include "lock.h"
@@ -35,7 +35,6 @@
#include "xattr.h"
#include "item.h"
#include "omap.h"
#include "util.h"
/*
* scoutfs uses a lock service to manage item cache consistency between
@@ -77,7 +76,7 @@ struct lock_info {
bool unmounting;
struct rb_root lock_tree;
struct rb_root lock_range_tree;
KC_DEFINE_SHRINKER(shrinker);
struct shrinker shrinker;
struct list_head lru_list;
unsigned long long lru_nr;
struct workqueue_struct *workq;
@@ -130,13 +129,16 @@ static bool lock_modes_match(int granted, int requested)
* allows deletions to be performed by unlink without having to wait for
* remote cached inodes to be dropped.
*
* We kick the d_prune and iput off to async work because they can end
* up in final iput and inode eviction item deletion which would
* deadlock. d_prune->dput can end up in iput on parents in different
* locks entirely.
* If the cached inode was already deferring final inode deletion then
* we can't perform that inline in invalidation. The locking alone
* would deadlock, and it might also take multiple transactions to fully
* delete an inode with significant metadata. We only perform the iput
* inline if we know that possible eviction can't perform the final
* deletion, otherwise we kick it off to async work.
*/
static void invalidate_inode(struct super_block *sb, u64 ino)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_inode_info *si;
struct inode *inode;
@@ -150,9 +152,17 @@ static void invalidate_inode(struct super_block *sb, u64 ino)
scoutfs_data_wait_changed(inode);
}
forget_all_cached_acls(inode);
/* can't touch during unmount, dcache destroys w/o locks */
if (!linfo->unmounting)
d_prune_aliases(inode);
scoutfs_inode_queue_iput(inode, SI_IPUT_FLAG_PRUNE);
si->drop_invalidated = true;
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink > 0) {
iput(inode);
} else {
/* defer iput to work context so we don't evict inodes from invalidation */
scoutfs_inode_queue_iput(inode);
}
}
}
@@ -188,6 +198,16 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
/* have to invalidate if we're not in the only usable case */
if (!(prev == SCOUTFS_LOCK_WRITE && mode == SCOUTFS_LOCK_READ)) {
retry:
/* invalidate inodes before removing coverage */
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
ino = le64_to_cpu(lock->start.ski_ino);
last = le64_to_cpu(lock->end.ski_ino);
while (ino <= last) {
invalidate_inode(sb, ino);
ino++;
}
}
/* remove cov items to tell users that their cache is stale */
spin_lock(&lock->cov_list_lock);
list_for_each_entry_safe(cov, tmp, &lock->cov_list, head) {
@@ -203,16 +223,6 @@ retry:
}
spin_unlock(&lock->cov_list_lock);
/* invalidate inodes after removing coverage so drop/evict aren't covered */
if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
ino = le64_to_cpu(lock->start.ski_ino);
last = le64_to_cpu(lock->end.ski_ino);
while (ino <= last) {
invalidate_inode(sb, ino);
ino++;
}
}
scoutfs_item_invalidate(sb, &lock->start, &lock->end);
}
@@ -1346,7 +1356,7 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
enum scoutfs_lock_mode mode)
{
signed char lock_mode = READ_ONCE(lock->mode);
signed char lock_mode = ACCESS_ONCE(lock->mode);
return lock_modes_match(lock_mode, mode) &&
scoutfs_key_compare_ranges(key, key,
@@ -1401,17 +1411,6 @@ static void lock_shrink_worker(struct work_struct *work)
}
}
static unsigned long lock_count_objects(struct shrinker *shrink,
struct shrink_control *sc)
{
struct lock_info *linfo = KC_SHRINKER_CONTAINER_OF(shrink, struct lock_info);
struct super_block *sb = linfo->sb;
scoutfs_inc_counter(sb, lock_count_objects);
return shrinker_min_long(linfo->lru_nr);
}
/*
* Start the shrinking process for locks on the lru. If a lock is on
* the lru then it can't have any active users. We don't want to block
@@ -1424,18 +1423,21 @@ static unsigned long lock_count_objects(struct shrinker *shrink,
* mode which will prevent the lock from being freed when the null
* response arrives.
*/
static unsigned long lock_scan_objects(struct shrinker *shrink,
struct shrink_control *sc)
static int scoutfs_lock_shrink(struct shrinker *shrink,
struct shrink_control *sc)
{
struct lock_info *linfo = KC_SHRINKER_CONTAINER_OF(shrink, struct lock_info);
struct lock_info *linfo = container_of(shrink, struct lock_info,
shrinker);
struct super_block *sb = linfo->sb;
struct scoutfs_lock *lock;
struct scoutfs_lock *tmp;
unsigned long freed = 0;
unsigned long nr = sc->nr_to_scan;
unsigned long nr;
bool added = false;
int ret;
scoutfs_inc_counter(sb, lock_scan_objects);
nr = sc->nr_to_scan;
if (nr == 0)
goto out;
spin_lock(&linfo->lock);
@@ -1453,7 +1455,6 @@ restart:
lock->request_pending = 1;
list_add_tail(&lock->shrink_head, &linfo->shrink_list);
added = true;
freed++;
scoutfs_inc_counter(sb, lock_shrink_attempted);
trace_scoutfs_lock_shrink(sb, lock);
@@ -1468,8 +1469,10 @@ restart:
if (added)
queue_work(linfo->workq, &linfo->shrink_work);
trace_scoutfs_lock_shrink_exit(sb, sc->nr_to_scan, freed);
return freed;
out:
ret = min_t(unsigned long, linfo->lru_nr, INT_MAX);
trace_scoutfs_lock_shrink_exit(sb, sc->nr_to_scan, ret);
return ret;
}
void scoutfs_free_unused_locks(struct super_block *sb)
@@ -1480,7 +1483,7 @@ void scoutfs_free_unused_locks(struct super_block *sb)
.nr_to_scan = INT_MAX,
};
lock_scan_objects(KC_SHRINKER_FN(&linfo->shrinker), &sc);
linfo->shrinker.shrink(&linfo->shrinker, &sc);
}
static void lock_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
@@ -1522,38 +1525,6 @@ void scoutfs_lock_flush_invalidate(struct super_block *sb)
flush_work(&linfo->inv_work);
}
static u64 get_held_lock_refresh_gen(struct super_block *sb, struct scoutfs_key *start)
{
DECLARE_LOCK_INFO(sb, linfo);
struct scoutfs_lock *lock;
u64 refresh_gen = 0;
/* this can be called from all manner of places */
if (!linfo)
return 0;
spin_lock(&linfo->lock);
lock = lock_lookup(sb, start, NULL);
if (lock) {
if (lock_mode_can_read(lock->mode))
refresh_gen = lock->refresh_gen;
}
spin_unlock(&linfo->lock);
return refresh_gen;
}
u64 scoutfs_lock_ino_refresh_gen(struct super_block *sb, u64 ino)
{
struct scoutfs_key start;
scoutfs_key_set_zeros(&start);
start.sk_zone = SCOUTFS_FS_ZONE;
start.ski_ino = cpu_to_le64(ino & ~(u64)SCOUTFS_LOCK_INODE_GROUP_MASK);
return get_held_lock_refresh_gen(sb, &start);
}
/*
* The caller is going to be shutting down transactions and the client.
* We need to make sure that locking won't call either after we return.
@@ -1587,7 +1558,7 @@ void scoutfs_lock_shutdown(struct super_block *sb)
trace_scoutfs_lock_shutdown(sb, linfo);
/* stop the shrinker from queueing work */
KC_UNREGISTER_SHRINKER(&linfo->shrinker);
unregister_shrinker(&linfo->shrinker);
flush_work(&linfo->shrink_work);
/* cause current and future lock calls to return errors */
@@ -1706,9 +1677,9 @@ int scoutfs_lock_setup(struct super_block *sb)
spin_lock_init(&linfo->lock);
linfo->lock_tree = RB_ROOT;
linfo->lock_range_tree = RB_ROOT;
KC_INIT_SHRINKER_FUNCS(&linfo->shrinker, lock_count_objects,
lock_scan_objects);
KC_REGISTER_SHRINKER(&linfo->shrinker);
linfo->shrinker.shrink = scoutfs_lock_shrink;
linfo->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&linfo->shrinker);
INIT_LIST_HEAD(&linfo->lru_list);
INIT_WORK(&linfo->inv_work, lock_invalidate_worker);
INIT_LIST_HEAD(&linfo->inv_list);

View File

@@ -100,8 +100,6 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
enum scoutfs_lock_mode mode);
u64 scoutfs_lock_ino_refresh_gen(struct super_block *sb, u64 ino);
void scoutfs_free_unused_locks(struct super_block *sb);
int scoutfs_lock_setup(struct super_block *sb);

View File

@@ -355,7 +355,6 @@ static int submit_send(struct super_block *sb,
}
if (rid != 0) {
spin_unlock(&conn->lock);
kfree(msend);
return -ENOTCONN;
}
}
@@ -549,16 +548,12 @@ static int recvmsg_full(struct socket *sock, void *buf, unsigned len)
while (len) {
memset(&msg, 0, sizeof(msg));
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
msg.msg_flags = MSG_NOSIGNAL;
kv.iov_base = buf;
kv.iov_len = len;
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
#else
iov_iter_init(&msg.msg_iter, READ, (struct iovec *)&kv, len, 1);
#endif
ret = kernel_recvmsg(sock, &msg, &kv, 1, len, msg.msg_flags);
if (ret <= 0)
return -ECONNABORTED;
@@ -711,16 +706,12 @@ static int sendmsg_full(struct socket *sock, void *buf, unsigned len)
while (len) {
memset(&msg, 0, sizeof(msg));
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
msg.msg_flags = MSG_NOSIGNAL;
kv.iov_base = buf;
kv.iov_len = len;
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
msg.msg_iov = (struct iovec *)&kv;
msg.msg_iovlen = 1;
#else
iov_iter_init(&msg.msg_iter, WRITE, (struct iovec *)&kv, len, 1);
#endif
ret = kernel_sendmsg(sock, &msg, &kv, 1, len);
if (ret <= 0)
return -ECONNABORTED;
@@ -905,6 +896,7 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
struct socket *sock)
{
struct timeval tv;
int addrlen;
int optval;
int ret;
@@ -954,18 +946,23 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
if (ret)
goto out;
ret = kc_kernel_getsockname(sock, (struct sockaddr *)&conn->sockname);
if (ret < 0)
addrlen = sizeof(struct sockaddr_in);
ret = kernel_getsockname(sock, (struct sockaddr *)&conn->sockname,
&addrlen);
if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
ret = -EAFNOSUPPORT;
if (ret)
goto out;
ret = kc_kernel_getpeername(sock, (struct sockaddr *)&conn->peername);
if (ret < 0)
addrlen = sizeof(struct sockaddr_in);
ret = kernel_getpeername(sock, (struct sockaddr *)&conn->peername,
&addrlen);
if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
ret = -EAFNOSUPPORT;
if (ret)
goto out;
ret = 0;
conn->last_peername = conn->peername;
out:
return ret;
}
@@ -1054,7 +1051,7 @@ static void scoutfs_net_connect_worker(struct work_struct *work)
trace_scoutfs_net_connect_work_enter(sb, 0, 0);
ret = kc_sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
ret = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
if (ret)
goto out;
@@ -1348,12 +1345,10 @@ scoutfs_net_alloc_conn(struct super_block *sb,
if (!conn)
return NULL;
if (info_size) {
conn->info = kzalloc(info_size, GFP_NOFS);
if (!conn->info) {
kfree(conn);
return NULL;
}
conn->info = kzalloc(info_size, GFP_NOFS);
if (!conn->info) {
kfree(conn);
return NULL;
}
conn->workq = alloc_workqueue("scoutfs_net_%s",
@@ -1455,7 +1450,7 @@ int scoutfs_net_bind(struct super_block *sb,
if (WARN_ON_ONCE(conn->sock))
return -EINVAL;
ret = kc_sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
ret = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
if (ret)
goto out;
@@ -1473,18 +1468,20 @@ int scoutfs_net_bind(struct super_block *sb,
goto out;
ret = kernel_listen(sock, 255);
if (ret < 0)
if (ret)
goto out;
ret = kc_kernel_getsockname(sock, (struct sockaddr *)&conn->sockname);
if (ret < 0)
addrlen = sizeof(struct sockaddr_in);
ret = kernel_getsockname(sock, (struct sockaddr *)&conn->sockname,
&addrlen);
if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
ret = -EAFNOSUPPORT;
if (ret)
goto out;
ret = 0;
conn->sock = sock;
*sin = conn->sockname;
ret = 0;
out:
if (ret < 0 && sock)
sock_release(sock);

View File

@@ -157,15 +157,6 @@ static int free_rid(struct omap_rid_list *list, struct omap_rid_entry *entry)
return nr;
}
static void free_rid_list(struct omap_rid_list *list)
{
struct omap_rid_entry *entry;
struct omap_rid_entry *tmp;
list_for_each_entry_safe(entry, tmp, &list->head, head)
free_rid(list, entry);
}
static int copy_rids(struct omap_rid_list *to, struct omap_rid_list *from, spinlock_t *from_lock)
{
struct omap_rid_entry *entry;
@@ -813,10 +804,6 @@ void scoutfs_omap_server_shutdown(struct super_block *sb)
llist_for_each_entry_safe(req, tmp, requests, llnode)
kfree(req);
spin_lock(&ominf->lock);
free_rid_list(&ominf->rids);
spin_unlock(&ominf->lock);
synchronize_rcu();
}
@@ -877,10 +864,6 @@ void scoutfs_omap_destroy(struct super_block *sb)
rhashtable_walk_stop(&iter);
rhashtable_walk_exit(&iter);
spin_lock(&ominf->lock);
free_rid_list(&ominf->rids);
spin_unlock(&ominf->lock);
rhashtable_destroy(&ominf->group_ht);
rhashtable_destroy(&ominf->req_ht);
kfree(ominf);

View File

@@ -27,28 +27,17 @@
#include "options.h"
#include "super.h"
#include "inode.h"
#include "alloc.h"
enum {
Opt_acl,
Opt_data_prealloc_blocks,
Opt_data_prealloc_contig_only,
Opt_metadev_path,
Opt_noacl,
Opt_orphan_scan_delay_ms,
Opt_quorum_heartbeat_timeout_ms,
Opt_quorum_slot_nr,
Opt_err,
};
static const match_table_t tokens = {
{Opt_acl, "acl"},
{Opt_data_prealloc_blocks, "data_prealloc_blocks=%s"},
{Opt_data_prealloc_contig_only, "data_prealloc_contig_only=%s"},
{Opt_metadev_path, "metadev_path=%s"},
{Opt_noacl, "noacl"},
{Opt_orphan_scan_delay_ms, "orphan_scan_delay_ms=%s"},
{Opt_quorum_heartbeat_timeout_ms, "quorum_heartbeat_timeout_ms=%s"},
{Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
{Opt_err, NULL}
};
@@ -117,33 +106,11 @@ static void free_options(struct scoutfs_mount_options *opts)
#define DEFAULT_ORPHAN_SCAN_DELAY_MS (10 * MSEC_PER_SEC)
#define MAX_ORPHAN_SCAN_DELAY_MS (60 * MSEC_PER_SEC)
#define MIN_DATA_PREALLOC_BLOCKS 1ULL
#define MAX_DATA_PREALLOC_BLOCKS ((unsigned long long)SCOUTFS_BLOCK_SM_MAX)
static void init_default_options(struct scoutfs_mount_options *opts)
{
memset(opts, 0, sizeof(*opts));
opts->data_prealloc_blocks = SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS;
opts->data_prealloc_contig_only = 1;
opts->orphan_scan_delay_ms = -1;
opts->quorum_heartbeat_timeout_ms = SCOUTFS_QUORUM_DEF_HB_TIMEO_MS;
opts->quorum_slot_nr = -1;
}
static int verify_quorum_heartbeat_timeout_ms(struct super_block *sb, int ret, u64 val)
{
if (ret < 0) {
scoutfs_err(sb, "failed to parse quorum_heartbeat_timeout_ms value");
return -EINVAL;
}
if (val < SCOUTFS_QUORUM_MIN_HB_TIMEO_MS || val > SCOUTFS_QUORUM_MAX_HB_TIMEO_MS) {
scoutfs_err(sb, "invalid quorum_heartbeat_timeout_ms value %llu, must be between %lu and %lu",
val, SCOUTFS_QUORUM_MIN_HB_TIMEO_MS, SCOUTFS_QUORUM_MAX_HB_TIMEO_MS);
return -EINVAL;
}
return 0;
opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
}
/*
@@ -155,7 +122,6 @@ static int verify_quorum_heartbeat_timeout_ms(struct super_block *sb, int ret, u
static int parse_options(struct super_block *sb, char *options, struct scoutfs_mount_options *opts)
{
substring_t args[MAX_OPT_ARGS];
u64 nr64;
int nr;
int token;
char *p;
@@ -168,44 +134,12 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
token = match_token(p, tokens, args);
switch (token) {
case Opt_acl:
sb->s_flags |= SB_POSIXACL;
break;
case Opt_data_prealloc_blocks:
ret = match_u64(args, &nr64);
if (ret < 0 ||
nr64 < MIN_DATA_PREALLOC_BLOCKS || nr64 > MAX_DATA_PREALLOC_BLOCKS) {
scoutfs_err(sb, "invalid data_prealloc_blocks option, must be between %llu and %llu",
MIN_DATA_PREALLOC_BLOCKS, MAX_DATA_PREALLOC_BLOCKS);
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->data_prealloc_blocks = nr64;
break;
case Opt_data_prealloc_contig_only:
ret = match_int(args, &nr);
if (ret < 0 || nr < 0 || nr > 1) {
scoutfs_err(sb, "invalid data_prealloc_contig_only option, bool must only be 0 or 1");
if (ret == 0)
ret = -EINVAL;
return ret;
}
opts->data_prealloc_contig_only = nr;
break;
case Opt_metadev_path:
ret = parse_bdev_path(sb, &args[0], &opts->metadev_path);
if (ret < 0)
return ret;
break;
case Opt_noacl:
sb->s_flags &= ~SB_POSIXACL;
break;
case Opt_orphan_scan_delay_ms:
if (opts->orphan_scan_delay_ms != -1) {
scoutfs_err(sb, "multiple orphan_scan_delay_ms options provided, only provide one.");
@@ -224,14 +158,6 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
opts->orphan_scan_delay_ms = nr;
break;
case Opt_quorum_heartbeat_timeout_ms:
ret = match_u64(args, &nr64);
ret = verify_quorum_heartbeat_timeout_ms(sb, ret, nr64);
if (ret < 0)
return ret;
opts->quorum_heartbeat_timeout_ms = nr64;
break;
case Opt_quorum_slot_nr:
if (opts->quorum_slot_nr != -1) {
scoutfs_err(sb, "multiple quorum_slot_nr options provided, only provide one.");
@@ -255,9 +181,6 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
}
}
if (opts->orphan_scan_delay_ms == -1)
opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
if (!opts->metadev_path) {
scoutfs_err(sb, "Required mount option \"metadev_path\" not found");
return -EINVAL;
@@ -327,17 +250,10 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
{
struct super_block *sb = root->d_sb;
struct scoutfs_mount_options opts;
const bool is_acl = !!(sb->s_flags & SB_POSIXACL);
scoutfs_options_read(sb, &opts);
if (is_acl)
seq_puts(seq, ",acl");
seq_printf(seq, ",data_prealloc_blocks=%llu", opts.data_prealloc_blocks);
seq_printf(seq, ",data_prealloc_contig_only=%u", opts.data_prealloc_contig_only);
seq_printf(seq, ",metadev_path=%s", opts.metadev_path);
if (!is_acl)
seq_puts(seq, ",noacl");
seq_printf(seq, ",orphan_scan_delay_ms=%u", opts.orphan_scan_delay_ms);
if (opts.quorum_slot_nr >= 0)
seq_printf(seq, ",quorum_slot_nr=%d", opts.quorum_slot_nr);
@@ -345,83 +261,6 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
return 0;
}
static ssize_t data_prealloc_blocks_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%llu", opts.data_prealloc_blocks);
}
static ssize_t data_prealloc_blocks_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[30]; /* more than enough for octal -U64_MAX */
u64 val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtoll(nullterm, 0, &val);
if (ret < 0 || val < MIN_DATA_PREALLOC_BLOCKS || val > MAX_DATA_PREALLOC_BLOCKS) {
scoutfs_err(sb, "invalid data_prealloc_blocks option, must be between %llu and %llu",
MIN_DATA_PREALLOC_BLOCKS, MAX_DATA_PREALLOC_BLOCKS);
return -EINVAL;
}
write_seqlock(&optinf->seqlock);
optinf->opts.data_prealloc_blocks = val;
write_sequnlock(&optinf->seqlock);
return count;
}
SCOUTFS_ATTR_RW(data_prealloc_blocks);
static ssize_t data_prealloc_contig_only_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%u", opts.data_prealloc_contig_only);
}
static ssize_t data_prealloc_contig_only_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[20]; /* more than enough for octal -U32_MAX */
long val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtol(nullterm, 0, &val);
if (ret < 0 || val < 0 || val > 1) {
scoutfs_err(sb, "invalid data_prealloc_contig_only option, bool must be 0 or 1");
return -EINVAL;
}
write_seqlock(&optinf->seqlock);
optinf->opts.data_prealloc_contig_only = val;
write_sequnlock(&optinf->seqlock);
return count;
}
SCOUTFS_ATTR_RW(data_prealloc_contig_only);
static ssize_t metadev_path_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
@@ -474,43 +313,6 @@ static ssize_t orphan_scan_delay_ms_store(struct kobject *kobj, struct kobj_attr
}
SCOUTFS_ATTR_RW(orphan_scan_delay_ms);
static ssize_t quorum_heartbeat_timeout_ms_show(struct kobject *kobj, struct kobj_attribute *attr,
char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
struct scoutfs_mount_options opts;
scoutfs_options_read(sb, &opts);
return snprintf(buf, PAGE_SIZE, "%llu", opts.quorum_heartbeat_timeout_ms);
}
static ssize_t quorum_heartbeat_timeout_ms_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
DECLARE_OPTIONS_INFO(sb, optinf);
char nullterm[30]; /* more than enough for octal -U64_MAX */
u64 val;
int len;
int ret;
len = min(count, sizeof(nullterm) - 1);
memcpy(nullterm, buf, len);
nullterm[len] = '\0';
ret = kstrtoll(nullterm, 0, &val);
ret = verify_quorum_heartbeat_timeout_ms(sb, ret, val);
if (ret == 0) {
write_seqlock(&optinf->seqlock);
optinf->opts.quorum_heartbeat_timeout_ms = val;
write_sequnlock(&optinf->seqlock);
ret = count;
}
return ret;
}
SCOUTFS_ATTR_RW(quorum_heartbeat_timeout_ms);
static ssize_t quorum_slot_nr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
@@ -523,11 +325,8 @@ static ssize_t quorum_slot_nr_show(struct kobject *kobj, struct kobj_attribute *
SCOUTFS_ATTR_RO(quorum_slot_nr);
static struct attribute *options_attrs[] = {
SCOUTFS_ATTR_PTR(data_prealloc_blocks),
SCOUTFS_ATTR_PTR(data_prealloc_contig_only),
SCOUTFS_ATTR_PTR(metadev_path),
SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),
SCOUTFS_ATTR_PTR(quorum_heartbeat_timeout_ms),
SCOUTFS_ATTR_PTR(quorum_slot_nr),
NULL,
};

View File

@@ -6,12 +6,10 @@
#include "format.h"
struct scoutfs_mount_options {
u64 data_prealloc_blocks;
bool data_prealloc_contig_only;
char *metadev_path;
unsigned int orphan_scan_delay_ms;
int quorum_slot_nr;
u64 quorum_heartbeat_timeout_ms;
};
void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts);

View File

@@ -100,11 +100,6 @@ struct last_msg {
ktime_t ts;
};
struct count_recent {
u64 count;
ktime_t recent;
};
enum quorum_role { FOLLOWER, CANDIDATE, LEADER };
struct quorum_status {
@@ -117,12 +112,8 @@ struct quorum_status {
ktime_t timeout;
};
#define HB_DELAY_NR (SCOUTFS_QUORUM_MAX_HB_TIMEO_MS / MSEC_PER_SEC)
struct quorum_info {
struct super_block *sb;
struct scoutfs_quorum_config qconf;
struct workqueue_struct *workq;
struct work_struct work;
struct socket *sock;
bool shutdown;
@@ -134,8 +125,6 @@ struct quorum_info {
struct quorum_status show_status;
struct last_msg last_send[SCOUTFS_QUORUM_MAX_SLOTS];
struct last_msg last_recv[SCOUTFS_QUORUM_MAX_SLOTS];
struct count_recent *hb_delay;
unsigned long max_hb_delay;
struct scoutfs_sysfs_attrs ssa;
};
@@ -145,18 +134,11 @@ struct quorum_info {
#define DECLARE_QUORUM_INFO_KOBJ(kobj, name) \
DECLARE_QUORUM_INFO(SCOUTFS_SYSFS_ATTRS_SB(kobj), name)
static bool quorum_slot_present(struct scoutfs_quorum_config *qconf, int i)
static bool quorum_slot_present(struct scoutfs_super_block *super, int i)
{
BUG_ON(i < 0 || i > SCOUTFS_QUORUM_MAX_SLOTS);
return qconf->slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
}
static void quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_in *sin)
{
BUG_ON(i < 0 || i >= SCOUTFS_QUORUM_MAX_SLOTS);
scoutfs_addr_to_sin(sin, &qconf->slots[i].addr);
return super->qconf.slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
}
static ktime_t election_timeout(void)
@@ -170,29 +152,29 @@ static ktime_t heartbeat_interval(void)
return ktime_add_ms(ktime_get(), SCOUTFS_QUORUM_HB_IVAL_MS);
}
static ktime_t heartbeat_timeout(struct scoutfs_mount_options *opts)
static ktime_t heartbeat_timeout(void)
{
return ktime_add_ms(ktime_get(), opts->quorum_heartbeat_timeout_ms);
return ktime_add_ms(ktime_get(), SCOUTFS_QUORUM_HB_TIMEO_MS);
}
static int create_socket(struct super_block *sb)
{
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct socket *sock = NULL;
struct sockaddr_in sin;
int addrlen;
int ret;
ret = kc_sock_create_kern(PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
ret = sock_create_kern(PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
if (ret) {
scoutfs_err(sb, "quorum couldn't create udp socket: %d", ret);
goto out;
}
/* rather fail and retry than block waiting for free */
sock->sk->sk_allocation = GFP_ATOMIC;
sock->sk->sk_allocation = GFP_NOFS;
quorum_slot_sin(&qinf->qconf, qinf->our_quorum_slot_nr, &sin);
scoutfs_quorum_slot_sin(super, qinf->our_quorum_slot_nr, &sin);
addrlen = sizeof(sin);
ret = kernel_bind(sock, (struct sockaddr *)&sin, addrlen);
@@ -219,20 +201,16 @@ static __le32 quorum_message_crc(struct scoutfs_quorum_message *qmes)
return cpu_to_le32(crc32c(~0, qmes, len));
}
/*
* Returns the number of failures from sendmsg.
*/
static int send_msg_members(struct super_block *sb, int type, u64 term, int only)
static void send_msg_members(struct super_block *sb, int type, u64 term,
int only)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
DECLARE_QUORUM_INFO(sb, qinf);
int failed = 0;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
ktime_t now;
int ret;
int i;
struct scoutfs_quorum_message qmes = {
.fsid = cpu_to_le64(sbi->fsid),
.fsid = super->hdr.fsid,
.term = cpu_to_le64(term),
.type = type,
.from = qinf->our_quorum_slot_nr,
@@ -243,10 +221,8 @@ static int send_msg_members(struct super_block *sb, int type, u64 term, int only
};
struct sockaddr_in sin;
struct msghdr mh = {
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
.msg_iov = (struct iovec *)&kv,
.msg_iovlen = 1,
#endif
.msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL,
.msg_name = &sin,
.msg_namelen = sizeof(sin),
@@ -256,24 +232,15 @@ static int send_msg_members(struct super_block *sb, int type, u64 term, int only
qmes.crc = quorum_message_crc(&qmes);
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(&qinf->qconf, i) ||
if (!quorum_slot_present(super, i) ||
(only >= 0 && i != only) || i == qinf->our_quorum_slot_nr)
continue;
if (scoutfs_forcing_unmount(sb)) {
failed = 0;
break;
}
scoutfs_quorum_slot_sin(&qinf->qconf, i, &sin);
scoutfs_quorum_slot_sin(super, i, &sin);
now = ktime_get();
#ifdef KC_MSGHDR_STRUCT_IOV_ITER
iov_iter_init(&mh.msg_iter, WRITE, (struct iovec *)&kv, sizeof(qmes), 1);
#endif
ret = kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
if (ret != kv.iov_len)
failed++;
kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
spin_lock(&qinf->show_lock);
qinf->last_send[i].msg.term = term;
@@ -284,8 +251,6 @@ static int send_msg_members(struct super_block *sb, int type, u64 term, int only
if (i == only)
break;
}
return failed;
}
#define send_msg_to(sb, type, term, nr) send_msg_members(sb, type, term, nr)
@@ -301,7 +266,7 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
ktime_t abs_to)
{
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_quorum_message qmes;
struct timeval tv;
ktime_t rel_to;
@@ -313,10 +278,8 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
.iov_len = sizeof(struct scoutfs_quorum_message),
};
struct msghdr mh = {
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
.msg_iov = (struct iovec *)&kv,
.msg_iovlen = 1,
#endif
.msg_flags = MSG_NOSIGNAL,
};
@@ -338,24 +301,18 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
return ret;
}
#ifdef KC_MSGHDR_STRUCT_IOV_ITER
iov_iter_init(&mh.msg_iter, READ, (struct iovec *)&kv, sizeof(struct scoutfs_quorum_message), 1);
#endif
ret = kernel_recvmsg(qinf->sock, &mh, &kv, 1, kv.iov_len, mh.msg_flags);
if (ret < 0)
return ret;
if (scoutfs_forcing_unmount(sb))
return 0;
now = ktime_get();
if (ret != sizeof(qmes) ||
qmes.crc != quorum_message_crc(&qmes) ||
qmes.fsid != cpu_to_le64(sbi->fsid) ||
qmes.fsid != super->hdr.fsid ||
qmes.type >= SCOUTFS_QUORUM_MSG_INVALID ||
qmes.from >= SCOUTFS_QUORUM_MAX_SLOTS ||
!quorum_slot_present(&qinf->qconf, qmes.from)) {
!quorum_slot_present(super, qmes.from)) {
/* should we be trying to open a new socket? */
scoutfs_inc_counter(sb, quorum_recv_invalid);
return -EAGAIN;
@@ -385,7 +342,7 @@ static int read_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_q
bool check_rid)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
const u64 fsid = sbi->fsid;
struct scoutfs_super_block *super = &sbi->super;
const u64 rid = sbi->rid;
char msg[150];
__le32 crc;
@@ -410,9 +367,9 @@ static int read_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_q
else if (le32_to_cpu(blk->hdr.magic) != SCOUTFS_BLOCK_MAGIC_QUORUM)
snprintf(msg, sizeof(msg), "blk magic %08x != %08x",
le32_to_cpu(blk->hdr.magic), SCOUTFS_BLOCK_MAGIC_QUORUM);
else if (blk->hdr.fsid != cpu_to_le64(fsid))
else if (blk->hdr.fsid != super->hdr.fsid)
snprintf(msg, sizeof(msg), "blk fsid %016llx != %016llx",
le64_to_cpu(blk->hdr.fsid), fsid);
le64_to_cpu(blk->hdr.fsid), le64_to_cpu(super->hdr.fsid));
else if (le64_to_cpu(blk->hdr.blkno) != blkno)
snprintf(msg, sizeof(msg), "blk blkno %llu != %llu",
le64_to_cpu(blk->hdr.blkno), blkno);
@@ -453,7 +410,8 @@ out:
*/
static void read_greatest_term(struct super_block *sb, u64 *term)
{
DECLARE_QUORUM_INFO(sb, qinf);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_quorum_block blk;
int ret;
int e;
@@ -462,7 +420,7 @@ static void read_greatest_term(struct super_block *sb, u64 *term)
*term = 0;
for (s = 0; s < SCOUTFS_QUORUM_MAX_SLOTS; s++) {
if (!quorum_slot_present(&qinf->qconf, s))
if (!quorum_slot_present(super, s))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + s, &blk, false);
@@ -556,15 +514,14 @@ static int update_quorum_block(struct super_block *sb, int event, u64 term, bool
* keeps us from being fenced while we allow userspace fencing to take a
* reasonably long time. We still want to time out eventually.
*/
int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_config *qconf,
u64 term)
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
{
#define NR_OLD 2
struct scoutfs_quorum_block_event old[SCOUTFS_QUORUM_MAX_SLOTS][NR_OLD] = {{{0,}}};
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_quorum_block blk;
struct sockaddr_in sin;
const __le64 lefsid = cpu_to_le64(sbi->fsid);
const u64 rid = sbi->rid;
bool fence_started = false;
u64 fenced = 0;
@@ -577,7 +534,7 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_c
BUILD_BUG_ON(SCOUTFS_QUORUM_BLOCKS < SCOUTFS_QUORUM_MAX_SLOTS);
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(qconf, i))
if (!quorum_slot_present(super, i))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
@@ -610,11 +567,11 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_c
continue;
scoutfs_inc_counter(sb, quorum_fence_leader);
quorum_slot_sin(qconf, i, &sin);
scoutfs_quorum_slot_sin(super, i, &sin);
fence_rid = old[i][j].rid;
scoutfs_info(sb, "fencing previous leader "SCSBF" at term %llu in slot %u with address "SIN_FMT,
SCSB_LEFR_ARGS(lefsid, fence_rid),
SCSB_LEFR_ARGS(super->hdr.fsid, fence_rid),
le64_to_cpu(old[i][j].term), i, SIN_ARG(&sin));
ret = scoutfs_fence_start(sb, le64_to_cpu(fence_rid), sin.sin_addr.s_addr,
SCOUTFS_FENCE_QUORUM_BLOCK_LEADER);
@@ -635,71 +592,6 @@ out:
return ret;
}
static void clear_hb_delay(struct quorum_info *qinf)
{
int i;
spin_lock(&qinf->show_lock);
qinf->max_hb_delay = 0;
for (i = 0; i < HB_DELAY_NR; i++) {
qinf->hb_delay[i].recent = ns_to_ktime(0);
qinf->hb_delay[i].count = 0;
}
spin_unlock(&qinf->show_lock);
}
struct hb_recording {
ktime_t prev;
int count;
};
/*
* Record long heartbeat delays. We only record the delay between
* back-to-back send attempts in the leader or back-to-back recv messages
* in the followers. The worker caller sets record_hb when its iteration
* sent or received a heartbeat. An iteration that does anything else
* resets the tracking.
*/
static void record_hb_delay(struct super_block *sb, struct quorum_info *qinf,
struct hb_recording *hbr, bool record_hb, int role)
{
bool log = false;
ktime_t now;
s64 s;
if (!record_hb) {
hbr->count = 0;
return;
}
now = ktime_get();
if (hbr->count < 2 && ++hbr->count < 2) {
hbr->prev = now;
return;
}
s = ktime_ms_delta(now, hbr->prev) / MSEC_PER_SEC;
hbr->prev = now;
if (s <= 0 || s >= HB_DELAY_NR)
return;
spin_lock(&qinf->show_lock);
if (qinf->max_hb_delay < s) {
qinf->max_hb_delay = s;
if (s >= 3)
log = true;
}
qinf->hb_delay[s].recent = now;
qinf->hb_delay[s].count++;
spin_unlock(&qinf->show_lock);
if (log)
scoutfs_info(sb, "longest quorum heartbeat %s delay of %lld sec",
role == LEADER ? "send" : "recv", s);
}
/*
* The main quorum task maintains its private status. It seemed cleaner
* to occasionally copy the status for showing in sysfs/debugfs files
@@ -724,23 +616,16 @@ static void update_show_status(struct quorum_info *qinf, struct quorum_status *q
static void scoutfs_quorum_worker(struct work_struct *work)
{
struct quorum_info *qinf = container_of(work, struct quorum_info, work);
struct scoutfs_mount_options opts;
struct super_block *sb = qinf->sb;
struct sockaddr_in unused;
struct quorum_host_msg msg;
struct quorum_status qst = {0,};
struct hb_recording hbr;
bool record_hb;
int ret;
int err;
memset(&hbr, 0, sizeof(struct hb_recording));
/* recording votes from slots as native single word bitmap */
BUILD_BUG_ON(SCOUTFS_QUORUM_MAX_SLOTS > BITS_PER_LONG);
scoutfs_options_read(sb, &opts);
/* start out as a follower */
qst.role = FOLLOWER;
qst.vote_for = -1;
@@ -750,7 +635,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* see if there's a server to chose heartbeat or election timeout */
if (scoutfs_quorum_server_sin(sb, &unused) == 0)
qst.timeout = heartbeat_timeout(&opts);
qst.timeout = heartbeat_timeout();
else
qst.timeout = election_timeout();
@@ -774,16 +659,14 @@ static void scoutfs_quorum_worker(struct work_struct *work)
ret = 0;
}
scoutfs_options_read(sb, &opts);
record_hb = false;
/* ignore messages from older terms */
if (msg.type != SCOUTFS_QUORUM_MSG_INVALID &&
msg.term < qst.term)
msg.type = SCOUTFS_QUORUM_MSG_INVALID;
trace_scoutfs_quorum_loop(sb, qst.role, qst.term, qst.vote_for,
qst.vote_bits, ktime_to_ns(qst.timeout));
qst.vote_bits,
ktime_to_timespec64(qst.timeout));
/* receiving greater terms resets term, becomes follower */
if (msg.type != SCOUTFS_QUORUM_MSG_INVALID &&
@@ -791,7 +674,6 @@ static void scoutfs_quorum_worker(struct work_struct *work)
if (qst.role == LEADER) {
scoutfs_warn(sb, "saw msg type %u from %u for term %llu while leader in term %llu, shutting down server.",
msg.type, msg.from, msg.term, qst.term);
clear_hb_delay(qinf);
}
qst.role = FOLLOWER;
qst.term = msg.term;
@@ -800,7 +682,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
scoutfs_inc_counter(sb, quorum_term_follower);
if (msg.type == SCOUTFS_QUORUM_MSG_HEARTBEAT)
qst.timeout = heartbeat_timeout(&opts);
qst.timeout = heartbeat_timeout();
else
qst.timeout = election_timeout();
@@ -810,21 +692,6 @@ static void scoutfs_quorum_worker(struct work_struct *work)
goto out;
}
/* receiving heartbeats extends timeout, delaying elections */
if (msg.type == SCOUTFS_QUORUM_MSG_HEARTBEAT) {
qst.timeout = heartbeat_timeout(&opts);
scoutfs_inc_counter(sb, quorum_recv_heartbeat);
record_hb = true;
}
/* receiving a resignation from server starts election */
if (msg.type == SCOUTFS_QUORUM_MSG_RESIGNATION &&
qst.role == FOLLOWER &&
msg.term == qst.term) {
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_recv_resignation);
}
/* followers and candidates start new election on timeout */
if (qst.role != LEADER &&
ktime_after(ktime_get(), qst.timeout)) {
@@ -877,7 +744,6 @@ static void scoutfs_quorum_worker(struct work_struct *work)
qst.timeout = heartbeat_interval();
update_show_status(qinf, &qst);
clear_hb_delay(qinf);
/* record that we've been elected before starting up server */
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_ELECT, qst.term, true);
@@ -886,7 +752,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
qst.server_start_term = qst.term;
qst.server_event = SCOUTFS_QUORUM_EVENT_ELECT;
scoutfs_server_start(sb, &qinf->qconf, qst.term);
scoutfs_server_start(sb, qst.term);
}
/*
@@ -932,7 +798,6 @@ static void scoutfs_quorum_worker(struct work_struct *work)
send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION,
qst.server_start_term);
scoutfs_inc_counter(sb, quorum_send_resignation);
clear_hb_delay(qinf);
}
ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP,
@@ -946,16 +811,24 @@ static void scoutfs_quorum_worker(struct work_struct *work)
/* leaders regularly send heartbeats to delay elections */
if (qst.role == LEADER &&
ktime_after(ktime_get(), qst.timeout)) {
ret = send_msg_others(sb, SCOUTFS_QUORUM_MSG_HEARTBEAT, qst.term);
if (ret > 0) {
scoutfs_add_counter(sb, quorum_send_heartbeat_dropped, ret);
ret = 0;
}
send_msg_others(sb, SCOUTFS_QUORUM_MSG_HEARTBEAT,
qst.term);
qst.timeout = heartbeat_interval();
scoutfs_inc_counter(sb, quorum_send_heartbeat);
record_hb = true;
}
/* receiving heartbeats extends timeout, delaying elections */
if (msg.type == SCOUTFS_QUORUM_MSG_HEARTBEAT) {
qst.timeout = heartbeat_timeout();
scoutfs_inc_counter(sb, quorum_recv_heartbeat);
}
/* receiving a resignation from server starts election */
if (msg.type == SCOUTFS_QUORUM_MSG_RESIGNATION &&
qst.role == FOLLOWER &&
msg.term == qst.term) {
qst.timeout = election_timeout();
scoutfs_inc_counter(sb, quorum_recv_resignation);
}
/* followers vote once per term */
@@ -967,8 +840,6 @@ static void scoutfs_quorum_worker(struct work_struct *work)
msg.from);
scoutfs_inc_counter(sb, quorum_send_vote);
}
record_hb_delay(sb, qinf, &hbr, record_hb, qst.role);
}
update_show_status(qinf, &qst);
@@ -1006,25 +877,16 @@ out:
*/
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
{
struct scoutfs_super_block *super = NULL;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_quorum_block blk;
u64 elect_term;
u64 term = 0;
int ret = 0;
int i;
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
if (!super) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super(sb, super);
if (ret)
goto out;
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(&super->qconf, i))
if (!quorum_slot_present(super, i))
continue;
ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
@@ -1038,7 +900,7 @@ int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
if (elect_term > term &&
elect_term > le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_STOP].term)) {
term = elect_term;
scoutfs_quorum_slot_sin(&super->qconf, i, sin);
scoutfs_quorum_slot_sin(super, i, sin);
continue;
}
}
@@ -1047,7 +909,6 @@ int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
ret = -ENOENT;
out:
kfree(super);
return ret;
}
@@ -1063,9 +924,12 @@ u8 scoutfs_quorum_votes_needed(struct super_block *sb)
return qinf->votes_needed;
}
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_in *sin)
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
struct sockaddr_in *sin)
{
return quorum_slot_sin(qconf, i, sin);
BUG_ON(i < 0 || i >= SCOUTFS_QUORUM_MAX_SLOTS);
scoutfs_addr_to_sin(sin, &super->qconf.slots[i].addr);
}
static char *role_str(int role)
@@ -1105,11 +969,9 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
{
DECLARE_QUORUM_INFO_KOBJ(kobj, qinf);
struct quorum_status qst;
struct count_recent cr;
struct last_msg last;
struct timespec64 ts;
const ktime_t now = ktime_get();
unsigned long ul;
size_t size;
int ret;
int i;
@@ -1167,26 +1029,6 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
(s64)ts.tv_sec, (int)ts.tv_nsec);
}
spin_lock(&qinf->show_lock);
ul = qinf->max_hb_delay;
spin_unlock(&qinf->show_lock);
if (ul)
snprintf_ret(buf, size, &ret, "HB Delay(s) Count Secs Since\n");
for (i = 1; i <= ul && i < HB_DELAY_NR; i++) {
spin_lock(&qinf->show_lock);
cr = qinf->hb_delay[i];
spin_unlock(&qinf->show_lock);
if (cr.count == 0)
continue;
ts = ktime_to_timespec64(ktime_sub(now, cr.recent));
snprintf_ret(buf, size, &ret,
"%11u %9llu %lld.%09u\n",
i, cr.count, (s64)ts.tv_sec, (int)ts.tv_nsec);
}
return ret;
}
SCOUTFS_ATTR_RO(status);
@@ -1218,10 +1060,11 @@ static inline bool valid_ipv4_port(__be16 port)
return port != 0 && be16_to_cpu(port) != U16_MAX;
}
static int verify_quorum_slots(struct super_block *sb, struct quorum_info *qinf,
struct scoutfs_quorum_config *qconf)
static int verify_quorum_slots(struct super_block *sb)
{
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
char slots[(SCOUTFS_QUORUM_MAX_SLOTS * 3) + 1];
DECLARE_QUORUM_INFO(sb, qinf);
struct sockaddr_in other;
struct sockaddr_in sin;
int found = 0;
@@ -1231,10 +1074,10 @@ static int verify_quorum_slots(struct super_block *sb, struct quorum_info *qinf,
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (!quorum_slot_present(qconf, i))
if (!quorum_slot_present(super, i))
continue;
scoutfs_quorum_slot_sin(qconf, i, &sin);
scoutfs_quorum_slot_sin(super, i, &sin);
if (!valid_ipv4_unicast(sin.sin_addr.s_addr)) {
scoutfs_err(sb, "quorum slot #%d has invalid ipv4 unicast address: "SIN_FMT,
@@ -1249,10 +1092,10 @@ static int verify_quorum_slots(struct super_block *sb, struct quorum_info *qinf,
}
for (j = i + 1; j < SCOUTFS_QUORUM_MAX_SLOTS; j++) {
if (!quorum_slot_present(qconf, j))
if (!quorum_slot_present(super, j))
continue;
scoutfs_quorum_slot_sin(qconf, j, &other);
scoutfs_quorum_slot_sin(super, j, &other);
if (sin.sin_addr.s_addr == other.sin_addr.s_addr &&
sin.sin_port == other.sin_port) {
@@ -1270,11 +1113,11 @@ static int verify_quorum_slots(struct super_block *sb, struct quorum_info *qinf,
return -EINVAL;
}
if (!quorum_slot_present(qconf, qinf->our_quorum_slot_nr)) {
if (!quorum_slot_present(super, qinf->our_quorum_slot_nr)) {
char *str = slots;
*str = '\0';
for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
if (quorum_slot_present(qconf, i)) {
if (quorum_slot_present(super, i)) {
ret = snprintf(str, &slots[ARRAY_SIZE(slots)] - str, "%c%u",
str == slots ? ' ' : ',', i);
if (ret < 2 || ret > 3) {
@@ -1298,22 +1141,16 @@ static int verify_quorum_slots(struct super_block *sb, struct quorum_info *qinf,
else
qinf->votes_needed = (found / 2) + 1;
qinf->qconf = *qconf;
return 0;
}
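
The visible branch above sets votes_needed to (found / 2) + 1, the usual majority rule: three populated slots need two votes, five need three (the elided branch presumably handles the degenerate small-count case). A one-function restatement, with a function name that is mine rather than the patch's:

/* a candidate wins once more than half of the populated quorum slots agree */
static unsigned int majority_votes(unsigned int populated_slots)
{
	return populated_slots / 2 + 1;
}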
/*
* Once this schedules the quorum worker it can be elected leader and
* start the server, possibly before this returns. The quorum agent
* would be responsible for tracking the quorum config in the super
* block if it changes. Until then it uses a static config that it reads
* during setup.
* start the server, possibly before this returns.
*/
int scoutfs_quorum_setup(struct super_block *sb)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = NULL;
struct scoutfs_mount_options opts;
struct quorum_info *qinf;
int ret;
@@ -1323,14 +1160,7 @@ int scoutfs_quorum_setup(struct super_block *sb)
return 0;
qinf = kzalloc(sizeof(struct quorum_info), GFP_KERNEL);
super = kmalloc(sizeof(struct scoutfs_super_block), GFP_KERNEL);
if (qinf)
qinf->hb_delay = __vmalloc(HB_DELAY_NR * sizeof(struct count_recent),
GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL);
if (!qinf || !super || !qinf->hb_delay) {
if (qinf)
vfree(qinf->hb_delay);
kfree(qinf);
if (!qinf) {
ret = -ENOMEM;
goto out;
}
@@ -1344,20 +1174,7 @@ int scoutfs_quorum_setup(struct super_block *sb)
sbi->quorum_info = qinf;
qinf->sb = sb;
/* a high priority single threaded context without mem reclaim */
qinf->workq = alloc_workqueue("scoutfs_quorum_work",
WQ_NON_REENTRANT | WQ_UNBOUND |
WQ_HIGHPRI, 1);
if (!qinf->workq) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_read_super(sb, super);
if (ret < 0)
goto out;
ret = verify_quorum_slots(sb, qinf, &super->qconf);
ret = verify_quorum_slots(sb);
if (ret < 0)
goto out;
@@ -1371,13 +1188,12 @@ int scoutfs_quorum_setup(struct super_block *sb)
if (ret < 0)
goto out;
queue_work(qinf->workq, &qinf->work);
schedule_work(&qinf->work);
out:
if (ret)
scoutfs_quorum_destroy(sb);
kfree(super);
return ret;
}
@@ -1401,14 +1217,10 @@ void scoutfs_quorum_destroy(struct super_block *sb)
qinf->shutdown = true;
flush_work(&qinf->work);
if (qinf->workq)
destroy_workqueue(qinf->workq);
scoutfs_sysfs_destroy_attrs(sb, &qinf->ssa);
if (qinf->sock)
sock_release(qinf->sock);
vfree(qinf->hb_delay);
kfree(qinf);
sbi->quorum_info = NULL;
}


@@ -4,11 +4,10 @@
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i,
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
struct sockaddr_in *sin);
int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_config *qconf,
u64 term);
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term);
int scoutfs_quorum_setup(struct super_block *sb);
void scoutfs_quorum_shutdown(struct super_block *sb);


@@ -691,16 +691,16 @@ TRACE_EVENT(scoutfs_evict_inode,
TRACE_EVENT(scoutfs_drop_inode,
TP_PROTO(struct super_block *sb, __u64 ino, unsigned int nlink,
unsigned int unhashed, bool lock_covered),
unsigned int unhashed, bool drop_invalidated),
TP_ARGS(sb, ino, nlink, unhashed, lock_covered),
TP_ARGS(sb, ino, nlink, unhashed, drop_invalidated),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(unsigned int, nlink)
__field(unsigned int, unhashed)
__field(unsigned int, lock_covered)
__field(unsigned int, drop_invalidated)
),
TP_fast_assign(
@@ -708,12 +708,12 @@ TRACE_EVENT(scoutfs_drop_inode,
__entry->ino = ino;
__entry->nlink = nlink;
__entry->unhashed = unhashed;
__entry->lock_covered = !!lock_covered;
__entry->drop_invalidated = !!drop_invalidated;
),
TP_printk(SCSBF" ino %llu nlink %u unhashed %d lock_covered %u", SCSB_TRACE_ARGS,
TP_printk(SCSBF" ino %llu nlink %u unhashed %d drop_invalidated %u", SCSB_TRACE_ARGS,
__entry->ino, __entry->nlink, __entry->unhashed,
__entry->lock_covered)
__entry->drop_invalidated)
);
TRACE_EVENT(scoutfs_inode_walk_writeback,
@@ -817,17 +817,22 @@ TRACE_EVENT(scoutfs_advance_dirty_super,
TP_printk(SCSBF" super seq now %llu", SCSB_TRACE_ARGS, __entry->seq)
);
TRACE_EVENT(scoutfs_dir_add_next_linkref_found,
TRACE_EVENT(scoutfs_dir_add_next_linkref,
TP_PROTO(struct super_block *sb, __u64 ino, __u64 dir_ino,
__u64 dir_pos, unsigned int name_len),
__u64 dir_pos, int ret, __u64 found_dir_ino,
__u64 found_dir_pos, unsigned int name_len),
TP_ARGS(sb, ino, dir_ino, dir_pos, name_len),
TP_ARGS(sb, ino, dir_ino, dir_pos, ret, found_dir_pos, found_dir_ino,
name_len),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(__u64, dir_ino)
__field(__u64, dir_pos)
__field(int, ret)
__field(__u64, found_dir_ino)
__field(__u64, found_dir_pos)
__field(unsigned int, name_len)
),
@@ -836,43 +841,16 @@ TRACE_EVENT(scoutfs_dir_add_next_linkref_found,
__entry->ino = ino;
__entry->dir_ino = dir_ino;
__entry->dir_pos = dir_pos;
__entry->ret = ret;
__entry->found_dir_ino = dir_ino;
__entry->found_dir_pos = dir_pos;
__entry->name_len = name_len;
),
TP_printk(SCSBF" ino %llu dir_ino %llu dir_pos %llu name_len %u",
SCSB_TRACE_ARGS, __entry->ino, __entry->dir_ino,
__entry->dir_pos, __entry->name_len)
);
TRACE_EVENT(scoutfs_dir_add_next_linkrefs,
TP_PROTO(struct super_block *sb, __u64 ino, __u64 dir_ino,
__u64 dir_pos, int count, int nr, int ret),
TP_ARGS(sb, ino, dir_ino, dir_pos, count, nr, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(__u64, ino)
__field(__u64, dir_ino)
__field(__u64, dir_pos)
__field(int, count)
__field(int, nr)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->ino = ino;
__entry->dir_ino = dir_ino;
__entry->dir_pos = dir_pos;
__entry->count = count;
__entry->nr = nr;
__entry->ret = ret;
),
TP_printk(SCSBF" ino %llu dir_ino %llu dir_pos %llu count %d nr %d ret %d",
SCSB_TRACE_ARGS, __entry->ino, __entry->dir_ino,
__entry->dir_pos, __entry->count, __entry->nr, __entry->ret)
TP_printk(SCSBF" ino %llu dir_ino %llu dir_pos %llu ret %d found_dir_ino %llu found_dir_pos %llu name_len %u",
SCSB_TRACE_ARGS, __entry->ino, __entry->dir_pos,
__entry->dir_ino, __entry->ret, __entry->found_dir_pos,
__entry->found_dir_ino, __entry->name_len)
);
TRACE_EVENT(scoutfs_write_begin,
@@ -1439,71 +1417,42 @@ TRACE_EVENT(scoutfs_rename,
);
TRACE_EVENT(scoutfs_d_revalidate,
TP_PROTO(struct super_block *sb, struct dentry *dentry, int flags, u64 dir_ino, int ret),
TP_PROTO(struct super_block *sb,
struct dentry *dentry, int flags, struct dentry *parent,
bool is_covered, int ret),
TP_ARGS(sb, dentry, flags, dir_ino, ret),
TP_ARGS(sb, dentry, flags, parent, is_covered, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, dentry)
__string(name, dentry->d_name.name)
__field(__u64, ino)
__field(__u64, dir_ino)
__field(__u64, parent_ino)
__field(int, flags)
__field(int, is_root)
__field(int, is_covered)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->dentry = dentry;
__assign_str(name, dentry->d_name.name)
__entry->ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
__entry->dir_ino = dir_ino;
__entry->ino = dentry->d_inode ?
scoutfs_ino(dentry->d_inode) : 0;
__entry->parent_ino = parent->d_inode ?
scoutfs_ino(parent->d_inode) : 0;
__entry->flags = flags;
__entry->is_root = IS_ROOT(dentry);
__entry->is_covered = is_covered;
__entry->ret = ret;
),
TP_printk(SCSBF" dentry %p name %s ino %llu dir_ino %llu flags 0x%x s_root %u ret %d",
SCSB_TRACE_ARGS, __entry->dentry, __get_str(name), __entry->ino, __entry->dir_ino,
__entry->flags, __entry->is_root, __entry->ret)
);
TRACE_EVENT(scoutfs_validate_dentry,
TP_PROTO(struct super_block *sb, struct dentry *dentry, u64 dir_ino, u64 dentry_ino,
u64 dent_ino, u64 refresh_gen, int ret),
TP_ARGS(sb, dentry, dir_ino, dentry_ino, dent_ino, refresh_gen, ret),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(void *, dentry)
__field(__u64, dir_ino)
__string(name, dentry->d_name.name)
__field(__u64, dentry_ino)
__field(__u64, dent_ino)
__field(__u64, fsdata_gen)
__field(__u64, refresh_gen)
__field(int, ret)
),
TP_fast_assign(
SCSB_TRACE_ASSIGN(sb);
__entry->dentry = dentry;
__entry->dir_ino = dir_ino;
__assign_str(name, dentry->d_name.name)
__entry->dentry_ino = dentry_ino;
__entry->dent_ino = dent_ino;
__entry->fsdata_gen = (unsigned long long)dentry->d_fsdata;
__entry->refresh_gen = refresh_gen;
__entry->ret = ret;
),
TP_printk(SCSBF" dentry %p dir %llu name %s dentry_ino %llu dent_ino %llu fsdata_gen %llu refresh_gen %llu ret %d",
SCSB_TRACE_ARGS, __entry->dentry, __entry->dir_ino, __get_str(name),
__entry->dentry_ino, __entry->dent_ino, __entry->fsdata_gen,
__entry->refresh_gen, __entry->ret)
TP_printk(SCSBF" name %s ino %llu parent_ino %llu flags 0x%x s_root %u is_covered %u ret %d",
SCSB_TRACE_ARGS, __get_str(name), __entry->ino,
__entry->parent_ino, __entry->flags,
__entry->is_root,
__entry->is_covered,
__entry->ret)
);
DECLARE_EVENT_CLASS(scoutfs_super_lifecycle_class,
@@ -1896,9 +1845,8 @@ DEFINE_EVENT(scoutfs_server_client_count_class, scoutfs_server_client_down,
DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing,
exceeded),
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
__field(int, holding)
@@ -1906,7 +1854,6 @@ DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
__field(int, nr_holders)
__field(__u32, avail_before)
__field(__u32, freed_before)
__field(int, committing)
__field(int, exceeded)
),
TP_fast_assign(
@@ -1916,33 +1863,31 @@ DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
__entry->nr_holders = nr_holders;
__entry->avail_before = avail_before;
__entry->freed_before = freed_before;
__entry->committing = !!committing;
__entry->exceeded = !!exceeded;
),
TP_printk(SCSBF" holding %u applying %u nr %u avail_before %u freed_before %u committing %u exceeded %u",
TP_printk(SCSBF" holding %u applying %u nr %u avail_before %u freed_before %u exceeded %u",
SCSB_TRACE_ARGS, __entry->holding, __entry->applying, __entry->nr_holders,
__entry->avail_before, __entry->freed_before, __entry->committing,
__entry->exceeded)
__entry->avail_before, __entry->freed_before, __entry->exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_hold,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_apply,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_start,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_end,
TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
u32 avail_before, u32 freed_before, int committing, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
u32 avail_before, u32 freed_before, int exceeded),
TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
);
#define slt_symbolic(mode) \
@@ -2024,9 +1969,9 @@ DEFINE_EVENT(scoutfs_quorum_message_class, scoutfs_quorum_recv_message,
TRACE_EVENT(scoutfs_quorum_loop,
TP_PROTO(struct super_block *sb, int role, u64 term, int vote_for,
unsigned long vote_bits, unsigned long long nsecs),
unsigned long vote_bits, struct timespec64 timeout),
TP_ARGS(sb, role, term, vote_for, vote_bits, nsecs),
TP_ARGS(sb, role, term, vote_for, vote_bits, timeout),
TP_STRUCT__entry(
SCSB_TRACE_FIELDS
@@ -2035,7 +1980,8 @@ TRACE_EVENT(scoutfs_quorum_loop,
__field(int, vote_for)
__field(unsigned long, vote_bits)
__field(unsigned long, vote_count)
__field(unsigned long long, nsecs)
__field(unsigned long long, timeout_sec)
__field(int, timeout_nsec)
),
TP_fast_assign(
@@ -2045,13 +1991,14 @@ TRACE_EVENT(scoutfs_quorum_loop,
__entry->vote_for = vote_for;
__entry->vote_bits = vote_bits;
__entry->vote_count = hweight_long(vote_bits);
__entry->nsecs = nsecs;
__entry->timeout_sec = timeout.tv_sec;
__entry->timeout_nsec = timeout.tv_nsec;
),
TP_printk(SCSBF" term %llu role %d vote_for %d vote_bits 0x%lx vote_count %lu timeout %llu",
TP_printk(SCSBF" term %llu role %d vote_for %d vote_bits 0x%lx vote_count %lu timeout %llu.%u",
SCSB_TRACE_ARGS, __entry->term, __entry->role,
__entry->vote_for, __entry->vote_bits, __entry->vote_count,
__entry->nsecs)
__entry->timeout_sec, __entry->timeout_nsec)
);
TRACE_EVENT(scoutfs_trans_seq_last,


@@ -67,7 +67,6 @@ struct commit_users {
unsigned int nr_holders;
u32 avail_before;
u32 freed_before;
bool committing;
bool exceeded;
};
@@ -85,7 +84,7 @@ do { \
__typeof__(cusers) _cusers = (cusers); \
trace_scoutfs_server_commit_##which(sb, !list_empty(&_cusers->holding), \
!list_empty(&_cusers->applying), _cusers->nr_holders, _cusers->avail_before, \
_cusers->freed_before, _cusers->committing, _cusers->exceeded); \
_cusers->freed_before, _cusers->exceeded); \
} while (0)
struct server_info {
@@ -131,9 +130,9 @@ struct server_info {
struct mutex srch_mutex;
struct mutex mounted_clients_mutex;
/* stable super stored from commits, given in locks and rpcs */
seqcount_t stable_seqcount;
struct scoutfs_super_block stable_super;
/* stable versions stored from commits, given in locks and rpcs */
seqcount_t roots_seqcount;
struct scoutfs_net_roots roots;
/* serializing and get and set volume options */
seqcount_t volopt_seqcount;
@@ -144,18 +143,11 @@ struct server_info {
struct work_struct fence_pending_recov_work;
/* while running we check for fenced mounts to reclaim */
struct delayed_work reclaim_dwork;
/* a running server gets a static quorum config from quorum as it starts */
struct scoutfs_quorum_config qconf;
/* a running server maintains a private dirty super */
struct scoutfs_super_block dirty_super;
};
#define DECLARE_SERVER_INFO(sb, name) \
struct server_info *name = SCOUTFS_SB(sb)->server_info
#define DIRTY_SUPER_SB(sb) (&SCOUTFS_SB(sb)->server_info->dirty_super)
/*
* The server tracks each connected client.
*/
@@ -283,14 +275,6 @@ struct commit_hold {
* per-holder allocation consumption tracking. The best we can do is
* flag all the current holders so that as they release we can see
* everyone involved in crossing the limit.
*
* The consumption of space to record freed blocks is tricky. The
* freed_before value was the space available as the holder started.
* But that happens before we actually dirty the first block in the
* freed list. If that block is too full then we just allocate a new
* empty first block. In that case the current remaining here can be a
* lot more than the initial freed_before. We account for that and
* treat freed_before as the maximum capacity.
*/
static void check_holder_budget(struct super_block *sb, struct server_info *server,
struct commit_users *cusers)
@@ -310,13 +294,8 @@ static void check_holder_budget(struct super_block *sb, struct server_info *serv
return;
scoutfs_alloc_meta_remaining(&server->alloc, &avail_now, &freed_now);
avail_used = cusers->avail_before - avail_now;
if (freed_now < cusers->freed_before)
freed_used = cusers->freed_before - freed_now;
else
freed_used = SCOUTFS_ALLOC_LIST_MAX_BLOCKS - freed_now;
freed_used = cusers->freed_before - freed_now;
budget = cusers->nr_holders * COMMIT_HOLD_ALLOC_BUDGET;
if (avail_used <= budget && freed_used <= budget)
return;
@@ -339,18 +318,31 @@ static void check_holder_budget(struct super_block *sb, struct server_info *serv
/*
* We don't have per-holder consumption. We allow commit holders as
* long as the total budget of all the holders doesn't exceed the alloc
* resources that were available. If a hold is waiting for budget
* availability in the allocators then we try and kick off a commit to
* fill and use the next allocators after the current transaction.
* resources that were available
*/
static bool commit_alloc_has_room(struct server_info *server, struct commit_users *cusers,
unsigned int more_holders)
{
u32 avail_before;
u32 freed_before;
u32 budget;
if (cusers->nr_holders > 0) {
avail_before = cusers->avail_before;
freed_before = cusers->freed_before;
} else {
scoutfs_alloc_meta_remaining(&server->alloc, &avail_before, &freed_before);
}
budget = (cusers->nr_holders + more_holders) * COMMIT_HOLD_ALLOC_BUDGET;
return avail_before >= budget && freed_before >= budget;
}
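
commit_alloc_has_room() above is plain budget arithmetic: every current holder, plus however many the caller wants to add, is charged a fixed worst-case allocation budget, and both the available and freed-list counts sampled at the start of the hold window must cover the total. A standalone restatement, with an invented constant standing in for COMMIT_HOLD_ALLOC_BUDGET:

#include <stdbool.h>
#include <stdint.h>

#define PER_HOLDER_BUDGET 32U	/* illustrative; not the real constant's value */

static bool commit_has_room(uint32_t avail_before, uint32_t freed_before,
			    unsigned int nr_holders, unsigned int more_holders)
{
	uint32_t budget = (nr_holders + more_holders) * PER_HOLDER_BUDGET;

	return avail_before >= budget && freed_before >= budget;
}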
static bool hold_commit(struct super_block *sb, struct server_info *server,
struct commit_users *cusers, struct commit_hold *hold)
{
bool has_room;
bool held;
u32 budget;
u32 av;
u32 fr;
bool held = false;
spin_lock(&cusers->lock);
@@ -358,39 +350,19 @@ static bool hold_commit(struct super_block *sb, struct server_info *server,
check_holder_budget(sb, server, cusers);
if (cusers->nr_holders == 0) {
scoutfs_alloc_meta_remaining(&server->alloc, &av, &fr);
} else {
av = cusers->avail_before;
fr = cusers->freed_before;
}
/* +2 for our additional hold and then for the final commit work the server does */
budget = (cusers->nr_holders + 2) * COMMIT_HOLD_ALLOC_BUDGET;
has_room = av >= budget && fr >= budget;
/* checking applying so holders drain once an apply caller starts waiting */
held = !cusers->committing && has_room && list_empty(&cusers->applying);
if (held) {
if (list_empty(&cusers->applying) && commit_alloc_has_room(server, cusers, 2)) {
scoutfs_alloc_meta_remaining(&server->alloc, &hold->avail, &hold->freed);
if (cusers->nr_holders == 0) {
cusers->avail_before = av;
cusers->freed_before = fr;
hold->avail = av;
hold->freed = fr;
cusers->avail_before = hold->avail;
cusers->freed_before = hold->freed;
cusers->exceeded = false;
} else {
scoutfs_alloc_meta_remaining(&server->alloc, &hold->avail, &hold->freed);
}
hold->exceeded = false;
hold->start = ktime_get();
list_add_tail(&hold->entry, &cusers->holding);
cusers->nr_holders++;
} else if (!has_room && cusers->nr_holders == 0 && !cusers->committing) {
cusers->committing = true;
queue_work(server->wq, &server->commit_work);
held = true;
}
spin_unlock(&cusers->lock);
@@ -424,6 +396,7 @@ static int server_apply_commit(struct super_block *sb, struct commit_hold *hold,
DECLARE_SERVER_INFO(sb, server);
struct commit_users *cusers = &server->cusers;
struct timespec ts;
bool start_commit;
spin_lock(&cusers->lock);
@@ -444,15 +417,13 @@ static int server_apply_commit(struct super_block *sb, struct commit_hold *hold,
list_del_init(&hold->entry);
hold->ret = err;
}
cusers->nr_holders--;
if (cusers->nr_holders == 0 && !cusers->committing && !list_empty(&cusers->applying)) {
cusers->committing = true;
queue_work(server->wq, &server->commit_work);
}
start_commit = cusers->nr_holders == 0 && !list_empty(&cusers->applying);
spin_unlock(&cusers->lock);
if (start_commit)
queue_work(server->wq, &server->commit_work);
wait_event(cusers->waitq, list_empty_careful(&hold->entry));
smp_rmb(); /* entry load before ret */
return hold->ret;
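
In the apply path above, a caller that wants its changes persisted moves onto the applying list, drops its hold, and whichever caller drops the final hold while appliers are waiting queues the commit work: the familiar "last one out starts the flush" shape. A small pthread sketch of that shape, with invented names and a plain counter standing in for the holder list:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int nr_holders;
static unsigned int nr_appliers;

static void queue_commit_work(void)
{
	/* stand-in for queue_work(server->wq, &server->commit_work) */
}

static void hold_commit(void)
{
	pthread_mutex_lock(&lock);
	nr_holders++;
	pthread_mutex_unlock(&lock);
}

/* a holder that wants its changes committed becomes a waiting applier */
static void apply_commit(void)
{
	bool start;

	pthread_mutex_lock(&lock);
	nr_appliers++;
	nr_holders--;
	start = (nr_holders == 0 && nr_appliers > 0);
	pthread_mutex_unlock(&lock);

	if (start)
		queue_commit_work();
	/* the real code then waits for the commit work to post its result */
}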
@@ -460,8 +431,8 @@ static int server_apply_commit(struct super_block *sb, struct commit_hold *hold,
/*
* Start a commit from the commit work. We should only have been queued
* while there are no active holders and someone started the commit.
* There may or may not be blocked apply callers waiting for the result.
* while a holder is waiting to apply after all active holders have
* finished.
*/
static int commit_start(struct super_block *sb, struct commit_users *cusers)
{
@@ -470,7 +441,7 @@ static int commit_start(struct super_block *sb, struct commit_users *cusers)
/* make sure holders held off once commit started */
spin_lock(&cusers->lock);
TRACE_COMMIT_USERS(sb, cusers, start);
if (WARN_ON_ONCE(!cusers->committing || cusers->nr_holders != 0))
if (WARN_ON_ONCE(list_empty(&cusers->applying) || cusers->nr_holders != 0))
ret = -EINVAL;
spin_unlock(&cusers->lock);
@@ -493,28 +464,21 @@ static void commit_end(struct super_block *sb, struct commit_users *cusers, int
smp_wmb(); /* ret stores before list updates */
list_for_each_entry_safe(hold, tmp, &cusers->applying, entry)
list_del_init(&hold->entry);
cusers->committing = false;
spin_unlock(&cusers->lock);
wake_up(&cusers->waitq);
}
static void get_stable(struct super_block *sb, struct scoutfs_super_block *super,
struct scoutfs_net_roots *roots)
static void get_roots(struct super_block *sb,
struct scoutfs_net_roots *roots)
{
DECLARE_SERVER_INFO(sb, server);
unsigned int seq;
do {
seq = read_seqcount_begin(&server->stable_seqcount);
if (super)
*super = server->stable_super;
if (roots) {
roots->fs_root = server->stable_super.fs_root;
roots->logs_root = server->stable_super.logs_root;
roots->srch_root = server->stable_super.srch_root;
}
} while (read_seqcount_retry(&server->stable_seqcount, seq));
seq = read_seqcount_begin(&server->roots_seqcount);
*roots = server->roots;
} while (read_seqcount_retry(&server->roots_seqcount, seq));
}
u64 scoutfs_server_seq(struct super_block *sb)
@@ -546,12 +510,17 @@ void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq)
}
}
static void set_stable_super(struct server_info *server, struct scoutfs_super_block *super)
static void set_roots(struct server_info *server,
struct scoutfs_btree_root *fs_root,
struct scoutfs_btree_root *logs_root,
struct scoutfs_btree_root *srch_root)
{
preempt_disable();
write_seqcount_begin(&server->stable_seqcount);
server->stable_super = *super;
write_seqcount_end(&server->stable_seqcount);
write_seqcount_begin(&server->roots_seqcount);
server->roots.fs_root = *fs_root;
server->roots.logs_root = *logs_root;
server->roots.srch_root = *srch_root;
write_seqcount_end(&server->roots_seqcount);
preempt_enable();
}
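
get_roots()/set_roots() above use a seqcount so that readers never block the writer: a reader copies the roots and retries if it saw an odd sequence value or the value changed underneath it. A user-space approximation with C11 atomics, ignoring the barrier and preemption details that the kernel's seqcount API handles (struct layout and names are mine):

#include <stdatomic.h>
#include <stdint.h>

struct roots { uint64_t fs_root, logs_root, srch_root; };

static _Atomic unsigned int roots_seq;	/* even: stable, odd: write in progress */
static struct roots shared_roots;

static void set_roots(const struct roots *r)
{
	atomic_fetch_add_explicit(&roots_seq, 1, memory_order_release);
	shared_roots = *r;
	atomic_fetch_add_explicit(&roots_seq, 1, memory_order_release);
}

static struct roots get_roots(void)
{
	struct roots r;
	unsigned int before, after;

	do {
		before = atomic_load_explicit(&roots_seq, memory_order_acquire);
		r = shared_roots;
		after = atomic_load_explicit(&roots_seq, memory_order_acquire);
	} while ((before & 1) || before != after);

	return r;
}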
@@ -566,7 +535,7 @@ static void set_stable_super(struct server_info *server, struct scoutfs_super_bl
* implement commits with a single pending work func.
*
* Processing paths hold the commit while they're making multiple
* dependent changes. When they're done and want it persistent they
* dependent changes. When they're done and want it persistent they add
* queue the commit work. This work runs, performs the commit, and
* wakes all the applying waiters with the result. Readers can run
* concurrently with these commits.
@@ -576,7 +545,7 @@ static void scoutfs_server_commit_func(struct work_struct *work)
struct server_info *server = container_of(work, struct server_info,
commit_work);
struct super_block *sb = server->sb;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct commit_users *cusers = &server->cusers;
int ret;
@@ -634,7 +603,8 @@ static void scoutfs_server_commit_func(struct work_struct *work)
goto out;
}
set_stable_super(server, super);
set_roots(server, &super->fs_root, &super->logs_root,
&super->srch_root);
/* swizzle the active and idle server alloc/freed heads */
server->other_ind ^= 1;
@@ -671,7 +641,7 @@ static int server_alloc_inodes(struct super_block *sb,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_net_inode_alloc ial = { 0, };
COMMIT_HOLD(hold);
__le64 lecount;
@@ -724,13 +694,13 @@ static int alloc_move_refill_zoned(struct super_block *sb, struct scoutfs_alloc_
static int alloc_move_empty(struct super_block *sb,
struct scoutfs_alloc_root *dst,
struct scoutfs_alloc_root *src, u64 meta_budget)
struct scoutfs_alloc_root *src, u64 meta_reserved)
{
DECLARE_SERVER_INFO(sb, server);
return scoutfs_alloc_move(sb, &server->alloc, &server->wri,
dst, src, le64_to_cpu(src->total_len), NULL, NULL, 0,
meta_budget);
meta_reserved);
}
/*
@@ -839,7 +809,7 @@ static void mod_bitmap_bits(__le64 *dst, u64 dst_zone_blocks,
static int get_data_alloc_zone_bits(struct super_block *sb, u64 rid, __le64 *exclusive,
__le64 *vacant, u64 zone_blocks)
{
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_log_trees *lt;
struct scoutfs_key key;
@@ -1070,7 +1040,7 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
u64 rid, struct commit_hold *hold)
{
struct server_info *server = SCOUTFS_SB(sb)->server_info;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_log_merge_status stat;
struct scoutfs_log_merge_range rng;
struct scoutfs_log_trees each_lt;
@@ -1256,82 +1226,6 @@ static int finalize_and_start_log_merge(struct super_block *sb, struct scoutfs_l
return ret;
}
/*
* The calling get_log_trees ran out of available blocks in its commit's
* metadata allocator while moving extents from the log tree's
* data_freed into the core data_avail. This finishes moving the
* extents in as many additional commits as it takes. The logs mutex
* is nested inside holding commits so we recheck the persistent item
* each time we commit to make sure it's still what we think. The
* caller is still going to send the item to the client so we update the
* caller's each time we make progress. This is a best-effort attempt
* to clean up and it's valid to leave extents in data_freed; we don't
* return errors to the caller. The client will continue the work later
* in get_log_trees or as the rid is reclaimed.
*/
static void try_drain_data_freed(struct super_block *sb, struct scoutfs_log_trees *lt)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
const u64 rid = le64_to_cpu(lt->rid);
const u64 nr = le64_to_cpu(lt->nr);
struct scoutfs_log_trees drain;
struct scoutfs_key key;
COMMIT_HOLD(hold);
int ret = 0;
int err;
scoutfs_key_init_log_trees(&key, rid, nr);
while (lt->data_freed.total_len != 0) {
server_hold_commit(sb, &hold);
mutex_lock(&server->logs_mutex);
ret = find_log_trees_item(sb, &super->logs_root, false, rid, U64_MAX, &drain);
if (ret < 0)
break;
/* careful to only keep draining the caller's specific open trans */
if (drain.nr != lt->nr || drain.get_trans_seq != lt->get_trans_seq ||
drain.commit_trans_seq != lt->commit_trans_seq || drain.flags != lt->flags) {
ret = -ENOENT;
break;
}
ret = scoutfs_btree_dirty(sb, &server->alloc, &server->wri,
&super->logs_root, &key);
if (ret < 0)
break;
/* moving can modify and return errors, always update caller and item */
mutex_lock(&server->alloc_mutex);
ret = alloc_move_empty(sb, &super->data_alloc, &drain.data_freed,
COMMIT_HOLD_ALLOC_BUDGET / 2);
mutex_unlock(&server->alloc_mutex);
if (ret == -EINPROGRESS)
ret = 0;
*lt = drain;
err = scoutfs_btree_force(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &drain, sizeof(drain));
BUG_ON(err < 0); /* dirtying must guarantee success */
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, ret);
if (ret < 0) {
ret = 0; /* don't try to abort, ignoring ret */
break;
}
}
/* try to cleanly abort and write any partial dirty btree blocks, but ignore result */
if (ret < 0) {
mutex_unlock(&server->logs_mutex);
server_apply_commit(sb, &hold, 0);
}
}
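
try_drain_data_freed(), removed by the hunk above, follows a common server pattern: move a bounded slice of work inside one held commit, apply the commit, and loop until the backlog is gone, treating any failure as "stop and let a later pass finish". A generic, runnable sketch of that shape with trivial stand-ins for the commit hold/apply calls and an invented slice size:

#include <stdio.h>

struct backlog { unsigned long remaining; };

/* trivial stand-ins for the real commit hold/apply calls */
static void hold_commit(void) { }
static int apply_commit(int err) { return err; }

static int move_one_slice(struct backlog *bl, unsigned long max_per_commit)
{
	unsigned long n = bl->remaining < max_per_commit ? bl->remaining
							 : max_per_commit;

	bl->remaining -= n;
	printf("moved %lu, %lu left\n", n, bl->remaining);
	return 0;
}

/* best effort: each pass does one bounded slice inside its own commit */
static void drain_in_commits(struct backlog *bl)
{
	while (bl->remaining) {
		hold_commit();
		if (apply_commit(move_one_slice(bl, 128)) < 0)
			break;	/* leave the rest for a later pass */
	}
}

int main(void)
{
	struct backlog bl = { .remaining = 1000 };

	drain_in_commits(&bl);
	return 0;
}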
/*
* Give the client roots to all the trees that they'll use to build
* their transaction.
@@ -1359,7 +1253,7 @@ static int server_get_log_trees(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
u64 rid = scoutfs_net_client_rid(conn);
DECLARE_SERVER_INFO(sb, server);
__le64 exclusive[SCOUTFS_DATA_ALLOC_ZONE_LE64S];
@@ -1457,9 +1351,7 @@ static int server_get_log_trees(struct super_block *sb,
goto update;
}
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 100);
if (ret == -EINPROGRESS)
ret = 0;
ret = alloc_move_empty(sb, &super->data_alloc, &lt.data_freed, 0);
if (ret < 0) {
err_str = "emptying committed data_freed";
goto update;
@@ -1537,10 +1429,6 @@ out:
scoutfs_err(sb, "error %d getting log trees for rid %016llx: %s",
ret, rid, err_str);
/* try to drain excessive data_freed with additional commits, if needed, ignoring err */
if (ret == 0)
try_drain_data_freed(sb, &lt);
return scoutfs_net_response(sb, conn, cmd, id, ret, &lt, sizeof(lt));
}
@@ -1554,7 +1442,7 @@ static int server_commit_log_trees(struct super_block *sb,
struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
const u64 rid = scoutfs_net_client_rid(conn);
DECLARE_SERVER_INFO(sb, server);
SCOUTFS_BTREE_ITEM_REF(iref);
@@ -1609,13 +1497,6 @@ static int server_commit_log_trees(struct super_block *sb,
if (ret < 0 || committed)
goto unlock;
/* make sure _update succeeds before we modify srch items */
ret = scoutfs_btree_dirty(sb, &server->alloc, &server->wri, &super->logs_root, &key);
if (ret < 0) {
err_str = "dirtying lt item";
goto unlock;
}
/* try to rotate the srch log when big enough */
mutex_lock(&server->srch_mutex);
ret = scoutfs_srch_rotate_log(sb, &server->alloc, &server->wri,
@@ -1630,7 +1511,6 @@ static int server_commit_log_trees(struct super_block *sb,
ret = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->logs_root, &key, &lt, sizeof(lt));
BUG_ON(ret < 0); /* dirtying should have guaranteed success */
if (ret < 0)
err_str = "updating log trees item";
@@ -1662,7 +1542,7 @@ static int server_get_roots(struct super_block *sb,
memset(&roots, 0, sizeof(roots));
ret = -EINVAL;
} else {
get_stable(sb, NULL, &roots);
get_roots(sb, &roots);
ret = 0;
}
@@ -1692,7 +1572,7 @@ static int server_get_roots(struct super_block *sb,
*/
static int reclaim_open_log_tree(struct super_block *sb, u64 rid)
{
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
DECLARE_SERVER_INFO(sb, server);
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_log_trees lt;
@@ -1789,8 +1669,9 @@ out:
*/
static int get_stable_trans_seq(struct super_block *sb, u64 *last_seq_ret)
{
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_log_trees *lt;
struct scoutfs_key key;
@@ -1946,8 +1827,9 @@ static int server_srch_get_compact(struct super_block *sb,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
u64 rid = scoutfs_net_client_rid(conn);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_srch_compact *sc = NULL;
COMMIT_HOLD(hold);
int ret;
@@ -2012,7 +1894,8 @@ static int server_srch_commit_compact(struct super_block *sb,
{
DECLARE_SERVER_INFO(sb, server);
u64 rid = scoutfs_net_client_rid(conn);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_srch_compact *sc;
struct scoutfs_alloc_list_head av;
struct scoutfs_alloc_list_head fr;
@@ -2081,48 +1964,28 @@ out:
* reset the next range key if there's still work to do. If the
* operation is complete then we tear down the input log_trees items and
* delete the status.
*
* Processing all the completions can take more than one transaction.
* We return -EINPROGRESS if we have to commit a transaction and the
* caller will apply the commit and immediately call back in so we can
* perform another commit. We need to be very careful to leave the
* status in a state where requests won't be issued at the wrong time
* (by forcing nr_completions to a batch while we delete them).
*/
static int splice_log_merge_completions(struct super_block *sb,
struct scoutfs_log_merge_status *stat,
bool no_ranges)
{
struct server_info *server = SCOUTFS_SB(sb)->server_info;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_log_merge_complete comp;
struct scoutfs_log_merge_freeing fr;
struct scoutfs_log_merge_range rng;
struct scoutfs_log_trees lt = {{{0,}}};
SCOUTFS_BTREE_ITEM_REF(iref);
bool upd_stat = true;
int einprogress = 0;
struct scoutfs_key key;
char *err_str = NULL;
u32 alloc_low;
u32 tmp;
u64 seq;
int ret;
int err;
/* mustn't rebalance fs tree parents while reqs rely on their key bounds */
if (WARN_ON_ONCE(le64_to_cpu(stat->nr_requests) > 0))
return -EIO;
/*
* Be overly conservative about how low the allocator can get
* before we commit. This gives us a lot of work to do in a
* commit while also allowing a pretty big smallest allocator to
* work with the theoretically unbounded alloc list splicing.
*/
scoutfs_alloc_meta_remaining(&server->alloc, &alloc_low, &tmp);
alloc_low = min(alloc_low, tmp) / 4;
/*
* Splice in all the completed subtrees at the initial parent
* blocks in the main fs_tree before rebalancing any of them.
@@ -2144,22 +2007,6 @@ static int splice_log_merge_completions(struct super_block *sb,
seq = le64_to_cpu(comp.seq);
/*
* Use having cleared the lists as an indication that
* we've already set the parents and don't need to dirty
* the btree blocks to do it all over again. This is
* safe because there is always an fs block that the
* merge dirties and frees into the meta_freed list.
*/
if (comp.meta_avail.ref.blkno == 0 && comp.meta_freed.ref.blkno == 0)
continue;
if (scoutfs_alloc_meta_low(sb, &server->alloc, alloc_low)) {
einprogress = -EINPROGRESS;
ret = 0;
goto out;
}
ret = scoutfs_btree_set_parent(sb, &server->alloc, &server->wri,
&super->fs_root, &comp.start,
&comp.root);
@@ -2194,14 +2041,6 @@ static int splice_log_merge_completions(struct super_block *sb,
}
}
/*
* Once we start rebalancing we force the number of completions
* to a batch so that requests won't be issued. Once we're done
* we clear the completion count and requests can flow again.
*/
if (le64_to_cpu(stat->nr_complete) < LOG_MERGE_SPLICE_BATCH)
stat->nr_complete = cpu_to_le64(LOG_MERGE_SPLICE_BATCH);
/*
* Now with all the parent blocks spliced in, rebalance items
* amongst parents that needed to split/join and delete the
@@ -2223,12 +2062,6 @@ static int splice_log_merge_completions(struct super_block *sb,
seq = le64_to_cpu(comp.seq);
if (scoutfs_alloc_meta_low(sb, &server->alloc, alloc_low)) {
einprogress = -EINPROGRESS;
ret = 0;
goto out;
}
/* balance when there was a remaining key range */
if (le64_to_cpu(comp.flags) & SCOUTFS_LOG_MERGE_COMP_REMAIN) {
ret = scoutfs_btree_rebalance(sb, &server->alloc,
@@ -2268,11 +2101,18 @@ static int splice_log_merge_completions(struct super_block *sb,
}
}
/* update the status once all completes are processed */
scoutfs_key_set_zeros(&stat->next_range_key);
stat->nr_complete = 0;
/* update counts and done if there's still ranges to process */
if (!no_ranges) {
scoutfs_key_set_zeros(&stat->next_range_key);
stat->nr_complete = 0;
ret = 0;
init_log_merge_key(&key, SCOUTFS_LOG_MERGE_STATUS_ZONE, 0, 0);
ret = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->log_merge, &key,
stat, sizeof(*stat));
if (ret < 0)
err_str = "update status";
goto out;
}
@@ -2308,12 +2148,6 @@ static int splice_log_merge_completions(struct super_block *sb,
(le64_to_cpu(lt.finalize_seq) < le64_to_cpu(stat->seq))))
continue;
if (scoutfs_alloc_meta_low(sb, &server->alloc, alloc_low)) {
einprogress = -EINPROGRESS;
ret = 0;
goto out;
}
fr.root = lt.item_root;
scoutfs_key_set_zeros(&fr.key);
fr.seq = cpu_to_le64(scoutfs_server_next_seq(sb));
@@ -2347,10 +2181,9 @@ static int splice_log_merge_completions(struct super_block *sb,
}
le64_add_cpu(&super->inode_count, le64_to_cpu(lt.inode_count_delta));
}
/* everything's done, remove the merge operation */
upd_stat = false;
init_log_merge_key(&key, SCOUTFS_LOG_MERGE_STATUS_ZONE, 0, 0);
ret = scoutfs_btree_delete(sb, &server->alloc, &server->wri,
&super->log_merge, &key);
@@ -2359,23 +2192,12 @@ static int splice_log_merge_completions(struct super_block *sb,
else
err_str = "deleting merge status item";
out:
if (upd_stat) {
init_log_merge_key(&key, SCOUTFS_LOG_MERGE_STATUS_ZONE, 0, 0);
err = scoutfs_btree_update(sb, &server->alloc, &server->wri,
&super->log_merge, &key,
stat, sizeof(struct scoutfs_log_merge_status));
if (err && !ret) {
err_str = "updating merge status item";
ret = err;
}
}
if (ret < 0)
scoutfs_err(sb, "server error %d splicing log merge completion: %s", ret, err_str);
BUG_ON(ret); /* inconsistent */
return ret ?: einprogress;
return ret;
}
/*
@@ -2466,7 +2288,7 @@ static void server_log_merge_free_work(struct work_struct *work)
struct server_info *server = container_of(work, struct server_info,
log_merge_free_work);
struct super_block *sb = server->sb;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_log_merge_freeing fr;
struct scoutfs_key key;
COMMIT_HOLD(hold);
@@ -2550,12 +2372,6 @@ static void server_log_merge_free_work(struct work_struct *work)
}
/*
* Clients regularly ask if there is log merge work to do. We process
* completions inline before responding so that we don't create large
* delays between completion processing and the next request. We don't
* mind if the client get_log_merge request sees high latency; the
* blocked caller has nothing else to do.
*
* This will return ENOENT to the client if there is no work to do.
*/
static int server_get_log_merge(struct super_block *sb,
@@ -2564,7 +2380,8 @@ static int server_get_log_merge(struct super_block *sb,
{
DECLARE_SERVER_INFO(sb, server);
u64 rid = scoutfs_net_client_rid(conn);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_log_merge_status stat;
struct scoutfs_log_merge_range rng;
struct scoutfs_log_merge_range remain;
@@ -2623,22 +2440,14 @@ restart:
goto out;
}
/* splice if we have a batch or ran out of ranges */
/* maybe splice now that we know if there's ranges */
no_next = ret == -ENOENT;
no_ranges = scoutfs_key_is_zeros(&stat.next_range_key) && ret == -ENOENT;
if (le64_to_cpu(stat.nr_requests) == 0 &&
(no_next || le64_to_cpu(stat.nr_complete) >= LOG_MERGE_SPLICE_BATCH)) {
ret = splice_log_merge_completions(sb, &stat, no_ranges);
if (ret == -EINPROGRESS) {
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, 0);
if (ret < 0)
goto respond;
server_hold_commit(sb, &hold);
mutex_lock(&server->logs_mutex);
} else if (ret < 0) {
if (ret < 0)
goto out;
}
/* splicing resets key and adds ranges, could finish status */
goto restart;
}
@@ -2840,7 +2649,6 @@ out:
mutex_unlock(&server->logs_mutex);
ret = server_apply_commit(sb, &hold, ret);
respond:
return scoutfs_net_response(sb, conn, cmd, id, ret, &req, sizeof(req));
}
@@ -2856,7 +2664,8 @@ static int server_commit_log_merge(struct super_block *sb,
{
DECLARE_SERVER_INFO(sb, server);
u64 rid = scoutfs_net_client_rid(conn);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_log_merge_request orig_req;
struct scoutfs_log_merge_complete *comp;
struct scoutfs_log_merge_status stat;
@@ -3091,7 +2900,7 @@ static int server_set_volopt(struct super_block *sb, struct scoutfs_net_connecti
u8 cmd, u64 id, void *arg, u16 arg_len)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_volume_options *volopt;
COMMIT_HOLD(hold);
u64 opt;
@@ -3160,7 +2969,7 @@ static int server_clear_volopt(struct super_block *sb, struct scoutfs_net_connec
u8 cmd, u64 id, void *arg, u16 arg_len)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_volume_options *volopt;
COMMIT_HOLD(hold);
__le64 *opt;
@@ -3214,7 +3023,7 @@ static int server_resize_devices(struct super_block *sb, struct scoutfs_net_conn
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_resize_devices *nrd;
COMMIT_HOLD(hold);
u64 meta_tot;
@@ -3321,19 +3130,16 @@ static int count_free_blocks(struct super_block *sb, void *arg, int owner,
}
/*
* We calculate the total inode count and free blocks from the last
* stable super that was written. Other users also walk stable blocks
* so by joining them we don't have to worry about ensuring that we've
* locked all the dirty structures that the summations could reference.
* We handle stale reads by retrying with the most recent stable super.
* We calculate the total inode count and free blocks from the current in-memory dirty
* versions of the super block and log_trees structs, so we have to lock them.
*/
static int server_statfs(struct super_block *sb, struct scoutfs_net_connection *conn,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
struct scoutfs_super_block super;
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_statfs nst = {{0,}};
struct statfs_free_blocks sfb = {0,};
DECLARE_SAVED_REFS(saved);
u64 inode_count;
int ret;
@@ -3342,24 +3148,24 @@ static int server_statfs(struct super_block *sb, struct scoutfs_net_connection *
goto out;
}
do {
get_stable(sb, &super, NULL);
mutex_lock(&server->alloc_mutex);
ret = scoutfs_alloc_foreach_super(sb, super, count_free_blocks, &sfb);
mutex_unlock(&server->alloc_mutex);
if (ret < 0)
goto out;
ret = scoutfs_alloc_foreach_super(sb, &super, count_free_blocks, &sfb) ?:
scoutfs_forest_inode_count(sb, &super, &inode_count);
if (ret < 0 && ret != -ESTALE)
goto out;
mutex_lock(&server->logs_mutex);
ret = scoutfs_forest_inode_count(sb, super, &inode_count);
mutex_unlock(&server->logs_mutex);
if (ret < 0)
goto out;
ret = scoutfs_block_check_stale(sb, ret, &saved, &super.logs_root.ref,
&super.srch_root.ref);
} while (ret == -ESTALE);
BUILD_BUG_ON(sizeof(nst.uuid) != sizeof(super.uuid));
memcpy(nst.uuid, super.uuid, sizeof(nst.uuid));
BUILD_BUG_ON(sizeof(nst.uuid) != sizeof(super->uuid));
memcpy(nst.uuid, super->uuid, sizeof(nst.uuid));
nst.free_meta_blocks = cpu_to_le64(sfb.meta);
nst.total_meta_blocks = super.total_meta_blocks;
nst.total_meta_blocks = super->total_meta_blocks;
nst.free_data_blocks = cpu_to_le64(sfb.data);
nst.total_data_blocks = super.total_data_blocks;
nst.total_data_blocks = super->total_data_blocks;
nst.inode_count = cpu_to_le64(inode_count);
ret = 0;
@@ -3390,7 +3196,7 @@ static int insert_mounted_client(struct super_block *sb, u64 rid, u64 gr_flags,
struct sockaddr_in *sin)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_mounted_client_btree_val mcv;
struct scoutfs_key key;
int ret;
@@ -3416,7 +3222,7 @@ static int lookup_mounted_client_addr(struct super_block *sb, u64 rid,
union scoutfs_inet_addr *addr)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_mounted_client_btree_val *mcv;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
@@ -3450,7 +3256,7 @@ static int lookup_mounted_client_addr(struct super_block *sb, u64 rid,
static int delete_mounted_client(struct super_block *sb, u64 rid)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_key key;
int ret;
@@ -3474,7 +3280,7 @@ static int delete_mounted_client(struct super_block *sb, u64 rid)
static int cancel_srch_compact(struct super_block *sb, u64 rid)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_alloc_list_head av;
struct scoutfs_alloc_list_head fr;
int ret;
@@ -3526,7 +3332,7 @@ static int cancel_srch_compact(struct super_block *sb, u64 rid)
static int cancel_log_merge(struct super_block *sb, u64 rid)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_log_merge_status stat;
struct scoutfs_log_merge_request req;
struct scoutfs_log_merge_range rng;
@@ -3650,7 +3456,7 @@ static int server_greeting(struct super_block *sb,
u8 cmd, u64 id, void *arg, u16 arg_len)
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_net_greeting *gr = arg;
struct scoutfs_net_greeting greet;
DECLARE_SERVER_INFO(sb, server);
@@ -3666,9 +3472,10 @@ static int server_greeting(struct super_block *sb,
goto send_err;
}
if (gr->fsid != cpu_to_le64(sbi->fsid)) {
if (gr->fsid != super->hdr.fsid) {
scoutfs_warn(sb, "client rid %016llx greeting fsid 0x%llx did not match server fsid 0x%llx",
le64_to_cpu(gr->rid), le64_to_cpu(gr->fsid), sbi->fsid);
le64_to_cpu(gr->rid), le64_to_cpu(gr->fsid),
le64_to_cpu(super->hdr.fsid));
ret = -EINVAL;
goto send_err;
}
@@ -3808,7 +3615,7 @@ static void farewell_worker(struct work_struct *work)
struct server_info *server = container_of(work, struct server_info,
farewell_work);
struct super_block *sb = server->sb;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_mounted_client_btree_val *mcv;
struct farewell_request *tmp;
struct farewell_request *fw;
@@ -4170,7 +3977,7 @@ static void recovery_timeout(struct super_block *sb)
static int start_recovery(struct super_block *sb)
{
DECLARE_SERVER_INFO(sb, server);
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
SCOUTFS_BTREE_ITEM_REF(iref);
struct scoutfs_key key;
unsigned int nr = 0;
@@ -4287,7 +4094,8 @@ static void scoutfs_server_worker(struct work_struct *work)
struct server_info *server = container_of(work, struct server_info,
work);
struct super_block *sb = server->sb;
struct scoutfs_super_block *super = DIRTY_SUPER_SB(sb);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_net_connection *conn = NULL;
struct scoutfs_mount_options opts;
DECLARE_WAIT_QUEUE_HEAD(waitq);
@@ -4299,13 +4107,13 @@ static void scoutfs_server_worker(struct work_struct *work)
trace_scoutfs_server_work_enter(sb, 0, 0);
scoutfs_options_read(sb, &opts);
scoutfs_quorum_slot_sin(&server->qconf, opts.quorum_slot_nr, &sin);
scoutfs_quorum_slot_sin(super, opts.quorum_slot_nr, &sin);
scoutfs_info(sb, "server starting at "SIN_FMT, SIN_ARG(&sin));
scoutfs_block_writer_init(sb, &server->wri);
/* first make sure no other servers are still running */
ret = scoutfs_quorum_fence_leaders(sb, &server->qconf, server->term);
ret = scoutfs_quorum_fence_leaders(sb, server->term);
if (ret < 0) {
scoutfs_err(sb, "server error %d attempting to fence previous leaders", ret);
goto out;
@@ -4341,7 +4149,8 @@ static void scoutfs_server_worker(struct work_struct *work)
write_seqcount_end(&server->volopt_seqcount);
atomic64_set(&server->seq_atomic, le64_to_cpu(super->seq));
set_stable_super(server, super);
set_roots(server, &super->fs_root, &super->logs_root,
&super->srch_root);
/* prepare server alloc for this transaction, larger first */
if (le64_to_cpu(super->server_meta_avail[0].total_nr) <
@@ -4435,12 +4244,11 @@ out:
/*
* Start the server but don't wait for it to complete.
*/
void scoutfs_server_start(struct super_block *sb, struct scoutfs_quorum_config *qconf, u64 term)
void scoutfs_server_start(struct super_block *sb, u64 term)
{
DECLARE_SERVER_INFO(sb, server);
if (cmpxchg(&server->status, SERVER_DOWN, SERVER_STARTING) == SERVER_DOWN) {
server->qconf = *qconf;
server->term = term;
queue_work(server->wq, &server->work);
}
@@ -4464,7 +4272,7 @@ void scoutfs_server_stop_wait(struct super_block *sb)
DECLARE_SERVER_INFO(sb, server);
stop_server(server);
flush_work(&server->work);
flush_work_sync(&server->work);
}
int scoutfs_server_setup(struct super_block *sb)
@@ -4492,7 +4300,7 @@ int scoutfs_server_setup(struct super_block *sb)
INIT_WORK(&server->log_merge_free_work, server_log_merge_free_work);
mutex_init(&server->srch_mutex);
mutex_init(&server->mounted_clients_mutex);
seqcount_init(&server->stable_seqcount);
seqcount_init(&server->roots_seqcount);
seqcount_init(&server->volopt_seqcount);
mutex_init(&server->volopt_mutex);
INIT_WORK(&server->fence_pending_recov_work, fence_pending_recov_worker);


@@ -75,7 +75,7 @@ u64 scoutfs_server_seq(struct super_block *sb);
u64 scoutfs_server_next_seq(struct super_block *sb);
void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq);
void scoutfs_server_start(struct super_block *sb, struct scoutfs_quorum_config *qconf, u64 term);
void scoutfs_server_start(struct super_block *sb, u64 term);
void scoutfs_server_stop(struct super_block *sb);
void scoutfs_server_stop_wait(struct super_block *sb);
bool scoutfs_server_is_running(struct super_block *sb);


@@ -861,6 +861,7 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
struct scoutfs_srch_rb_root *sroot,
u64 hash, u64 ino, u64 last_ino, bool *done)
{
struct scoutfs_net_roots prev_roots;
struct scoutfs_net_roots roots;
struct scoutfs_srch_entry start;
struct scoutfs_srch_entry end;
@@ -868,7 +869,6 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
struct scoutfs_log_trees lt;
struct scoutfs_srch_file sfl;
SCOUTFS_BTREE_ITEM_REF(iref);
DECLARE_SAVED_REFS(saved);
struct scoutfs_key key;
unsigned long limit = SRCH_LIMIT;
int ret;
@@ -877,6 +877,7 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
*done = false;
srch_init_rb_root(sroot);
memset(&prev_roots, 0, sizeof(prev_roots));
start.hash = cpu_to_le64(hash);
start.ino = cpu_to_le64(ino);
@@ -891,6 +892,7 @@ retry:
ret = scoutfs_client_get_roots(sb, &roots);
if (ret)
goto out;
memset(&roots.fs_root, 0, sizeof(roots.fs_root));
end = final;
@@ -966,10 +968,16 @@ retry:
*done = sre_cmp(&end, &final) == 0;
ret = 0;
out:
ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.srch_root.ref,
&roots.logs_root.ref);
if (ret == -ESTALE)
goto retry;
if (ret == -ESTALE) {
if (memcmp(&prev_roots, &roots, sizeof(roots)) == 0) {
scoutfs_inc_counter(sb, srch_search_stale_eio);
ret = -EIO;
} else {
scoutfs_inc_counter(sb, srch_search_stale_retry);
prev_roots = roots;
goto retry;
}
}
return ret;
}
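
The error handling added above bounds the retry: a stale search is retried only when the freshly fetched roots differ from the ones the previous attempt used; otherwise the persistent staleness is reported as -EIO instead of looping forever. A self-contained sketch of that guard, with invented stand-ins for fetching roots and walking their blocks:

#include <errno.h>
#include <stdbool.h>
#include <string.h>

struct roots { unsigned long long fs, logs, srch; };

/* stand-ins for asking the server for roots and walking their blocks */
static bool fetch_roots(struct roots *r)
{
	r->fs = 1; r->logs = 1; r->srch = 1;
	return true;
}

static int walk_blocks(const struct roots *r)
{
	(void)r;
	return 0;	/* would return -ESTALE on a stale block read */
}

/* retry a stale read only while the fetched roots keep changing */
static int search_with_bounded_retry(void)
{
	struct roots prev = { 0 };
	struct roots cur;
	int ret;

	for (;;) {
		if (!fetch_roots(&cur))
			return -EIO;
		ret = walk_blocks(&cur);
		if (ret != -ESTALE)
			return ret;
		if (memcmp(&prev, &cur, sizeof(cur)) == 0)
			return -EIO;	/* same snapshot went stale twice: give up */
		prev = cur;
	}
}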
@@ -995,14 +1003,6 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
le64_to_cpu(sfl->ref.blkno), 0);
ret = scoutfs_btree_insert(sb, alloc, wri, root, &key,
sfl, sizeof(*sfl));
/*
* While it's fine to replay moving the client's logging srch
* file to the core btree item, server commits should keep it
* from happening. So we'll warn if we see it happen. This can
* be removed eventually.
*/
if (WARN_ON_ONCE(ret == -EEXIST))
ret = 0;
if (ret == 0) {
memset(sfl, 0, sizeof(*sfl));
scoutfs_inc_counter(sb, srch_rotate_log);
@@ -1747,7 +1747,7 @@ static int compact_logs(struct super_block *sb,
goto out;
}
page->private = 0;
list_add_tail(&page->lru, &pages);
list_add_tail(&page->list, &pages);
nr_pages++;
scoutfs_inc_counter(sb, srch_compact_log_page);
}
@@ -1800,7 +1800,7 @@ static int compact_logs(struct super_block *sb,
/* sort page entries and reset private for _next */
i = 0;
list_for_each_entry(page, &pages, lru) {
list_for_each_entry(page, &pages, list) {
args[i++] = page;
if (atomic_read(&srinf->shutdown)) {
@@ -1821,7 +1821,7 @@ static int compact_logs(struct super_block *sb,
goto out;
/* make sure we finished all the pages */
list_for_each_entry(page, &pages, lru) {
list_for_each_entry(page, &pages, list) {
sre = page_priv_sre(page);
if (page->private < SRES_PER_PAGE && sre->ino != 0) {
ret = -ENOSPC;
@@ -1834,8 +1834,8 @@ static int compact_logs(struct super_block *sb,
out:
scoutfs_block_put(sb, bl);
vfree(args);
list_for_each_entry_safe(page, tmp, &pages, lru) {
list_del(&page->lru);
list_for_each_entry_safe(page, tmp, &pages, list) {
list_del(&page->list);
__free_page(page);
}

View File

@@ -13,7 +13,6 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/slab.h>
#include <linux/pagemap.h>
#include <linux/magic.h>
@@ -48,7 +47,6 @@
#include "omap.h"
#include "volopt.h"
#include "fence.h"
#include "xattr.h"
#include "scoutfs_trace.h"
static struct dentry *scoutfs_debugfs_root;
@@ -179,7 +177,7 @@ static void scoutfs_put_super(struct super_block *sb)
/*
* Wait for invalidation and iput to finish with any lingering
* inode references that escaped the evict_inodes in
* generic_shutdown_super. SB_ACTIVE is clear so final iput
* generic_shutdown_super. MS_ACTIVE is clear so final iput
* will always evict.
*/
scoutfs_lock_flush_invalidate(sb);
@@ -462,8 +460,9 @@ static int scoutfs_read_supers(struct super_block *sb)
goto out;
}
sbi->fsid = le64_to_cpu(meta_super->hdr.fsid);
sbi->fmt_vers = le64_to_cpu(meta_super->fmt_vers);
sbi->super = *meta_super;
out:
kfree(meta_super);
kfree(data_super);
@@ -483,11 +482,8 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
sb->s_magic = SCOUTFS_SUPER_MAGIC;
sb->s_maxbytes = MAX_LFS_FILESIZE;
sb->s_op = &scoutfs_super_ops;
sb->s_d_op = &scoutfs_dentry_ops;
sb->s_export_op = &scoutfs_export_ops;
sb->s_xattr = scoutfs_xattr_handlers;
sb->s_flags |= SB_I_VERSION | SB_POSIXACL;
sb->s_time_gran = 1;
sb->s_flags |= MS_I_VERSION;
/* btree blocks use long lived bh->b_data refs */
mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
@@ -500,7 +496,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
ret = assign_random_id(sbi);
if (ret < 0)
goto out;
return ret;
spin_lock_init(&sbi->next_ino_lock);
spin_lock_init(&sbi->data_wait_root.lock);
@@ -509,7 +505,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
/* parse options early for use during setup */
ret = scoutfs_options_early_setup(sb, data);
if (ret < 0)
goto out;
return ret;
scoutfs_options_read(sb, &opts);
ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SM_SIZE);
@@ -632,6 +628,7 @@ MODULE_ALIAS_FS("scoutfs");
static void teardown_module(void)
{
debugfs_remove(scoutfs_debugfs_root);
scoutfs_dir_exit();
scoutfs_inode_exit();
scoutfs_sysfs_exit();
}
@@ -669,20 +666,21 @@ static int __init scoutfs_module_init(void)
goto out;
}
ret = scoutfs_inode_init() ?:
scoutfs_dir_init() ?:
register_filesystem(&scoutfs_fs_type);
out:
if (ret)
teardown_module();
return ret;
}
module_init(scoutfs_module_init);
module_init(scoutfs_module_init)
static void __exit scoutfs_module_exit(void)
{
unregister_filesystem(&scoutfs_fs_type);
teardown_module();
}
module_exit(scoutfs_module_exit);
module_exit(scoutfs_module_exit)
MODULE_AUTHOR("Zach Brown <zab@versity.com>");
MODULE_LICENSE("GPL");

View File

@@ -35,10 +35,11 @@ struct scoutfs_sb_info {
struct super_block *sb;
/* assigned once at the start of each mount, read-only */
u64 fsid;
u64 rid;
u64 fmt_vers;
struct scoutfs_super_block super;
struct block_device *meta_bdev;
spinlock_t next_ino_lock;
@@ -134,14 +135,14 @@ static inline bool scoutfs_unmounting(struct super_block *sb)
(int)(le64_to_cpu(fsid) >> SCSB_SHIFT), \
(int)(le64_to_cpu(rid) >> SCSB_SHIFT)
#define SCSB_ARGS(sb) \
(int)(SCOUTFS_SB(sb)->fsid >> SCSB_SHIFT), \
(int)(le64_to_cpu(SCOUTFS_SB(sb)->super.hdr.fsid) >> SCSB_SHIFT), \
(int)(SCOUTFS_SB(sb)->rid >> SCSB_SHIFT)
#define SCSB_TRACE_FIELDS \
__field(__u64, fsid) \
__field(__u64, rid)
#define SCSB_TRACE_ASSIGN(sb) \
__entry->fsid = SCOUTFS_HAS_SBI(sb) ? \
SCOUTFS_SB(sb)->fsid : 0; \
le64_to_cpu(SCOUTFS_SB(sb)->super.hdr.fsid) : 0;\
__entry->rid = SCOUTFS_HAS_SBI(sb) ? \
SCOUTFS_SB(sb)->rid : 0;
#define SCSB_TRACE_ARGS \

View File

@@ -60,9 +60,10 @@ static ssize_t fsid_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
return snprintf(buf, PAGE_SIZE, "%016llx\n", sbi->fsid);
return snprintf(buf, PAGE_SIZE, "%016llx\n",
le64_to_cpu(super->hdr.fsid));
}
ATTR_FUNCS_RO(fsid);
@@ -267,7 +268,7 @@ int __init scoutfs_sysfs_init(void)
return 0;
}
void scoutfs_sysfs_exit(void)
void __exit scoutfs_sysfs_exit(void)
{
if (scoutfs_kset)
kset_unregister(scoutfs_kset);

View File

@@ -53,6 +53,6 @@ int scoutfs_setup_sysfs(struct super_block *sb);
void scoutfs_destroy_sysfs(struct super_block *sb);
int __init scoutfs_sysfs_init(void);
void scoutfs_sysfs_exit(void);
void __exit scoutfs_sysfs_exit(void);
#endif

View File

@@ -46,23 +46,6 @@ static struct scoutfs_tseq_entry *tseq_rb_next(struct scoutfs_tseq_entry *ent)
return rb_entry(node, struct scoutfs_tseq_entry, node);
}
#ifdef KC_RB_TREE_AUGMENTED_COMPUTE_MAX
static bool tseq_compute_total(struct scoutfs_tseq_entry *ent, bool exit)
{
loff_t total = 1 + tseq_node_total(ent->node.rb_left) +
tseq_node_total(ent->node.rb_right);
if (exit && ent->total == total)
return true;
ent->total = total;
return false;
}
RB_DECLARE_CALLBACKS(static, tseq_rb_callbacks, struct scoutfs_tseq_entry,
node, total, tseq_compute_total);
#else
static loff_t tseq_compute_total(struct scoutfs_tseq_entry *ent)
{
return 1 + tseq_node_total(ent->node.rb_left) +
@@ -70,8 +53,7 @@ static loff_t tseq_compute_total(struct scoutfs_tseq_entry *ent)
}
RB_DECLARE_CALLBACKS(static, tseq_rb_callbacks, struct scoutfs_tseq_entry,
node, loff_t, total, tseq_compute_total);
#endif
node, loff_t, total, tseq_compute_total)
void scoutfs_tseq_tree_init(struct scoutfs_tseq_tree *tree,
scoutfs_tseq_show_t show)

View File

@@ -17,15 +17,4 @@ static inline void down_write_two(struct rw_semaphore *a,
down_write_nested(b, SINGLE_DEPTH_NESTING);
}
/*
* When returning shrinker counts from scan_objects, we should steer
* clear of the magic SHRINK_STOP and SHRINK_EMPTY values, which are near
~0UL values. Hence, we cap count to LONG_MAX, which is arbitrarily high
enough to avoid them.
*/
static inline long shrinker_min_long(long count)
{
return min(count, LONG_MAX);
}
#endif

View File

@@ -15,7 +15,6 @@
#include <linux/dcache.h>
#include <linux/xattr.h>
#include <linux/crc32c.h>
#include <linux/posix_acl.h>
#include "format.h"
#include "inode.h"
@@ -27,7 +26,6 @@
#include "xattr.h"
#include "lock.h"
#include "hash.h"
#include "acl.h"
#include "scoutfs_trace.h"
/*
@@ -81,6 +79,16 @@ static void init_xattr_key(struct scoutfs_key *key, u64 ino, u32 name_hash,
#define SCOUTFS_XATTR_PREFIX "scoutfs."
#define SCOUTFS_XATTR_PREFIX_LEN (sizeof(SCOUTFS_XATTR_PREFIX) - 1)
static int unknown_prefix(const char *name)
{
return strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) &&
strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) &&
strncmp(name, XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN)&&
strncmp(name, SCOUTFS_XATTR_PREFIX, SCOUTFS_XATTR_PREFIX_LEN);
}
#define HIDE_TAG "hide."
#define SRCH_TAG "srch."
#define TOTL_TAG "totl."
@@ -447,17 +455,22 @@ out:
* Copy the value for the given xattr name into the caller's buffer, if it
* fits. Return the bytes copied or -ERANGE if it doesn't fit.
*/
int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer, size_t size,
struct scoutfs_lock *lck)
ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
size_t size)
{
struct inode *inode = dentry->d_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_key key;
unsigned int xat_bytes;
size_t name_len;
int ret;
if (unknown_prefix(name))
return -EOPNOTSUPP;
name_len = strlen(name);
if (name_len > SCOUTFS_XATTR_MAX_NAME_LEN)
return -ENODATA;
@@ -467,6 +480,10 @@ int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer
if (!xat)
return -ENOMEM;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lck);
if (ret)
goto out;
down_read(&si->xattr_rwsem);
ret = get_next_xattr(inode, &key, xat, xat_bytes, name, name_len, 0, 0, lck);
@@ -492,27 +509,12 @@ int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer
ret = copy_xattr_value(sb, &key, xat, xat_bytes, buffer, size, lck);
unlock:
up_read(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_READ);
out:
kfree(xat);
return ret;
}
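
/*
 * From user space, the -ERANGE contract documented above maps onto the
 * usual getxattr(2) pattern: probe with a zero-sized buffer to learn
 * the value length, then fetch for real.  A minimal sketch; the path
 * and attribute names passed by callers are placeholders.
 */
#include <stdlib.h>
#include <sys/types.h>
#include <sys/xattr.h>

static char *read_xattr(const char *path, const char *name, ssize_t *len_out)
{
	ssize_t len = getxattr(path, name, NULL, 0);   /* size 0: just report the length */
	char *buf;

	if (len < 0)
		return NULL;                           /* ENODATA, EOPNOTSUPP, ... */

	buf = malloc(len + 1);
	if (!buf)
		return NULL;

	/* the value can change between the two calls; robust callers loop on ERANGE */
	len = getxattr(path, name, buf, len);
	if (len < 0) {
		free(buf);
		return NULL;
	}

	buf[len] = '\0';
	*len_out = len;
	return buf;
}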
static int scoutfs_xattr_get(struct dentry *dentry, const char *name, void *buffer, size_t size)
{
struct inode *inode = dentry->d_inode;
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
int ret;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
if (ret == 0) {
ret = scoutfs_xattr_get_locked(inode, name, buffer, size, lock);
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
}
return ret;
}
void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name)
{
scoutfs_key_set_zeros(key);
@@ -617,32 +619,30 @@ int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
* cause creation to fail if the xattr already exists (_CREATE) or
* doesn't already exist (_REPLACE). xattrs can have a zero length
* value.
*
* The caller has acquired cluster locks, holds a transaction, and has
* dirtied the inode item so that they can update it after we modify it.
* The caller has to know the tags to acquire cluster locks before
* holding the transaction so they pass in the parsed tags, or all 0s for
* non scoutfs. prefixes.
*/
int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_len,
const void *value, size_t size, int flags,
const struct scoutfs_xattr_prefix_tags *tgs,
struct scoutfs_lock *lck, struct scoutfs_lock *totl_lock,
struct list_head *ind_locks)
static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags)
{
struct inode *inode = dentry->d_inode;
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
const u64 ino = scoutfs_ino(inode);
struct scoutfs_xattr_totl_val tval = {0,};
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_xattr *xat = NULL;
struct scoutfs_lock *lck = NULL;
struct scoutfs_lock *totl_lock = NULL;
size_t name_len = strlen(name);
struct scoutfs_key totl_key;
struct scoutfs_key key;
bool undo_srch = false;
bool undo_totl = false;
LIST_HEAD(ind_locks);
u8 found_parts;
unsigned int xat_bytes_totl;
unsigned int xat_bytes;
unsigned int val_len;
u64 ind_seq;
u64 total;
u64 hash = 0;
u64 id = 0;
@@ -651,9 +651,6 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
trace_scoutfs_xattr_set(sb, name_len, value, size, flags);
if (WARN_ON_ONCE(tgs->totl && !totl_lock))
return -EINVAL;
/* mirror the syscall's errors for large names and values */
if (name_len > SCOUTFS_XATTR_MAX_NAME_LEN)
return -ERANGE;
@@ -664,10 +661,16 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
(flags & ~(XATTR_CREATE | XATTR_REPLACE)))
return -EINVAL;
if ((tgs->hide | tgs->srch | tgs->totl) && !capable(CAP_SYS_ADMIN))
if (unknown_prefix(name))
return -EOPNOTSUPP;
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
if ((tgs.hide | tgs.srch | tgs.totl) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (tgs->totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
if (tgs.totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
return ret;
/* allocate enough to always read an existing xattr's totl */
@@ -676,44 +679,51 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
/* but store partial first item that only includes the new xattr's value */
xat_bytes = first_item_bytes(name_len, size);
xat = kmalloc(xat_bytes_totl, GFP_NOFS);
if (!xat)
return -ENOMEM;
if (!xat) {
ret = -ENOMEM;
goto out;
}
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lck);
if (ret)
goto out;
down_write(&si->xattr_rwsem);
/* find an existing xattr to delete, including possible totl value */
ret = get_next_xattr(inode, &key, xat, xat_bytes_totl, name, name_len, 0, 0, lck);
if (ret < 0 && ret != -ENOENT)
goto out;
goto unlock;
/* check existence constraint flags */
if (ret == -ENOENT && (flags & XATTR_REPLACE)) {
ret = -ENODATA;
goto out;
goto unlock;
} else if (ret >= 0 && (flags & XATTR_CREATE)) {
ret = -EEXIST;
goto out;
goto unlock;
}
/* not an error to delete something that doesn't exist */
if (ret == -ENOENT && !value) {
ret = 0;
goto out;
goto unlock;
}
/* s64 count delta if we create or delete */
if (tgs->totl)
if (tgs.totl)
tval.count = cpu_to_le64((u64)!!(value) - (u64)!!(ret != -ENOENT));
/* found fields in key will also be used */
found_parts = ret >= 0 ? xattr_nr_parts(xat) : 0;
if (found_parts && tgs->totl) {
if (found_parts && tgs.totl) {
/* parse old totl value before we clobber xat buf */
val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
ret = parse_totl_u64(&xat->name[xat->name_len], val_len, &total);
if (ret < 0)
goto out;
goto unlock;
le64_add_cpu(&tval.total, -total);
}
@@ -732,90 +742,15 @@ int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_
min(size, SCOUTFS_XATTR_MAX_PART_SIZE -
offsetof(struct scoutfs_xattr, name[name_len])));
if (tgs->totl) {
if (tgs.totl) {
ret = parse_totl_u64(value, size, &total);
if (ret < 0)
goto out;
goto unlock;
}
le64_add_cpu(&tval.total, total);
}
if (tgs->srch && !(found_parts && value)) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
hash = scoutfs_hash64(name, name_len);
ret = scoutfs_forest_srch_add(sb, hash, ino, id);
if (ret < 0)
goto out;
undo_srch = true;
}
if (tgs->totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
goto out;
undo_totl = true;
}
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), found_parts, lck);
else if (found_parts)
ret = delete_xattr_items(inode, le64_to_cpu(key.skx_name_hash),
le64_to_cpu(key.skx_id), found_parts,
lck);
else
ret = create_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), lck);
if (ret < 0)
goto out;
/* XXX do these want i_mutex or anything? */
inode_inc_iversion(inode);
inode->i_ctime = current_time(inode);
ret = 0;
out:
if (ret < 0 && undo_srch) {
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
if (ret < 0 && undo_totl) {
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
BUG_ON(err);
}
up_write(&si->xattr_rwsem);
kfree(xat);
return ret;
}
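
/*
 * The existence-constraint flags checked above surface to user space as
 * the standard setxattr(2) flags.  A small sketch of the user-visible
 * behavior; the path and attribute names are placeholders, and the
 * scoutfs.hide. line assumes the CAP_SYS_ADMIN check on tagged names
 * shown above applies to any scoutfs.-prefixed tag.
 */
#include <stdio.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";

	/* create-only: fails with EEXIST if the attribute already exists */
	if (setxattr(path, "user.example", "v1", 2, XATTR_CREATE) < 0)
		perror("setxattr XATTR_CREATE");

	/* replace-only: fails with ENODATA if the attribute is missing */
	if (setxattr(path, "user.missing", "v2", 2, XATTR_REPLACE) < 0)
		perror("setxattr XATTR_REPLACE");

	/* zero-length values are allowed */
	if (setxattr(path, "user.empty", "", 0, 0) < 0)
		perror("setxattr empty value");

	/* hide./srch./totl. tags under the scoutfs. prefix need CAP_SYS_ADMIN */
	if (setxattr(path, "scoutfs.hide.example", "x", 1, 0) < 0)
		perror("setxattr scoutfs.hide.");

	return 0;
}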
static int scoutfs_xattr_set(struct dentry *dentry, const char *name, const void *value,
size_t size, int flags)
{
struct inode *inode = dentry->d_inode;
struct super_block *sb = inode->i_sb;
struct scoutfs_xattr_prefix_tags tgs;
struct scoutfs_lock *totl_lock = NULL;
struct scoutfs_lock *lck = NULL;
size_t name_len = strlen(name);
LIST_HEAD(ind_locks);
u64 ind_seq;
int ret;
if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
return -EINVAL;
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lck);
if (ret)
goto unlock;
if (tgs.totl) {
ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
if (ret)
@@ -835,126 +770,80 @@ retry:
if (ret < 0)
goto release;
ret = scoutfs_xattr_set_locked(dentry->d_inode, name, name_len, value, size, flags, &tgs,
lck, totl_lock, &ind_locks);
if (ret == 0)
scoutfs_update_inode_item(inode, lck, &ind_locks);
if (tgs.srch && !(found_parts && value)) {
if (found_parts)
id = le64_to_cpu(key.skx_id);
hash = scoutfs_hash64(name, name_len);
ret = scoutfs_forest_srch_add(sb, hash, ino, id);
if (ret < 0)
goto release;
undo_srch = true;
}
if (tgs.totl) {
ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
if (ret < 0)
goto release;
undo_totl = true;
}
if (found_parts && value)
ret = change_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), found_parts, lck);
else if (found_parts)
ret = delete_xattr_items(inode, le64_to_cpu(key.skx_name_hash),
le64_to_cpu(key.skx_id), found_parts,
lck);
else
ret = create_xattr_items(inode, id, xat, xat_bytes, value, size,
xattr_nr_parts(xat), lck);
if (ret < 0)
goto release;
/* XXX do these want i_mutex or anything? */
inode_inc_iversion(inode);
inode->i_ctime = CURRENT_TIME;
scoutfs_update_inode_item(inode, lck, &ind_locks);
ret = 0;
release:
if (ret < 0 && undo_srch) {
err = scoutfs_forest_srch_add(sb, hash, ino, id);
BUG_ON(err);
}
if (ret < 0 && undo_totl) {
/* _delta() on dirty items shouldn't fail */
tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
BUG_ON(err);
}
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
unlock:
up_write(&si->xattr_rwsem);
scoutfs_unlock(sb, lck, SCOUTFS_LOCK_WRITE);
scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
out:
kfree(xat);
return ret;
}
#ifndef KC_XATTR_STRUCT_XATTR_HANDLER
/*
* Future kernels have this amazing hack to rewind the name to get the
* skipped prefix. We're back in the stone ages without the handler
* arg, so we Just Know that this is possible. This will become a
* compat hook to either call the kernel's xattr_full_name(handler), or
* our hack to use the flags as the prefix length.
*/
static const char *full_name_hack(const char *name, int len)
int scoutfs_setxattr(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags)
{
return name - len;
}
#endif
if (size == 0)
value = ""; /* set empty value */
static int scoutfs_xattr_get_handler
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
(const struct xattr_handler *handler, struct dentry *dentry,
struct inode *inode, const char *name, void *value,
size_t size)
{
name = xattr_full_name(handler, name);
#else
(struct dentry *dentry, const char *name,
void *value, size_t size, int handler_flags)
{
name = full_name_hack(name, handler_flags);
#endif
return scoutfs_xattr_get(dentry, name, value, size);
}
static int scoutfs_xattr_set_handler
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
(const struct xattr_handler *handler, struct dentry *dentry,
struct inode *inode, const char *name, const void *value,
size_t size, int flags)
{
name = xattr_full_name(handler, name);
#else
(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags, int handler_flags)
{
name = full_name_hack(name, handler_flags);
#endif
return scoutfs_xattr_set(dentry, name, value, size, flags);
}
static const struct xattr_handler scoutfs_xattr_user_handler = {
.prefix = XATTR_USER_PREFIX,
.flags = XATTR_USER_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_scoutfs_handler = {
.prefix = SCOUTFS_XATTR_PREFIX,
.flags = SCOUTFS_XATTR_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_trusted_handler = {
.prefix = XATTR_TRUSTED_PREFIX,
.flags = XATTR_TRUSTED_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_security_handler = {
.prefix = XATTR_SECURITY_PREFIX,
.flags = XATTR_SECURITY_PREFIX_LEN,
.get = scoutfs_xattr_get_handler,
.set = scoutfs_xattr_set_handler,
};
static const struct xattr_handler scoutfs_xattr_acl_access_handler = {
#ifdef KC_XATTR_HANDLER_NAME
.name = XATTR_NAME_POSIX_ACL_ACCESS,
#else
.prefix = XATTR_NAME_POSIX_ACL_ACCESS,
#endif
.flags = ACL_TYPE_ACCESS,
.get = scoutfs_acl_get_xattr,
.set = scoutfs_acl_set_xattr,
};
static const struct xattr_handler scoutfs_xattr_acl_default_handler = {
#ifdef KC_XATTR_HANDLER_NAME
.name = XATTR_NAME_POSIX_ACL_DEFAULT,
#else
.prefix = XATTR_NAME_POSIX_ACL_DEFAULT,
#endif
.flags = ACL_TYPE_DEFAULT,
.get = scoutfs_acl_get_xattr,
.set = scoutfs_acl_set_xattr,
};
const struct xattr_handler *scoutfs_xattr_handlers[] = {
&scoutfs_xattr_user_handler,
&scoutfs_xattr_scoutfs_handler,
&scoutfs_xattr_trusted_handler,
&scoutfs_xattr_security_handler,
&scoutfs_xattr_acl_access_handler,
&scoutfs_xattr_acl_default_handler,
NULL
};
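
/*
 * A compilable illustration of the prefix-rewind trick the handlers
 * above rely on: on kernels whose xattr handlers are not passed the
 * handler pointer, the VFS hands the handler a name with the prefix
 * already stripped, and each handler's .flags field carries its prefix
 * length so the full name can be recovered by stepping the pointer
 * back.  This only works because the stripped name points into the
 * same buffer as the full name, which the comment above says the code
 * "Just Knows".  The strings here are placeholders.
 */
#include <stdio.h>
#include <string.h>

static const char *rewind_name(const char *name, int len)
{
	return name - len;                        /* same trick as full_name_hack() */
}

int main(void)
{
	const char *full = "user.example";            /* name as user space passed it */
	int prefix_len = (int)strlen("user.");        /* handler .flags = XATTR_USER_PREFIX_LEN */
	const char *stripped = full + prefix_len;     /* what the handler is given */

	printf("handler sees: %s\n", stripped);
	printf("recovered:    %s\n", rewind_name(stripped, prefix_len));
	return 0;
}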
int scoutfs_removexattr(struct dentry *dentry, const char *name)
{
return scoutfs_xattr_set(dentry, name, NULL, 0, XATTR_REPLACE);
}
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
size_t size, __u32 *hash_pos, __u64 *id_pos,

View File

@@ -1,29 +1,25 @@
#ifndef _SCOUTFS_XATTR_H_
#define _SCOUTFS_XATTR_H_
ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
size_t size);
int scoutfs_setxattr(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags);
int scoutfs_removexattr(struct dentry *dentry, const char *name);
ssize_t scoutfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
size_t size, __u32 *hash_pos, __u64 *id_pos,
bool e_range, bool show_hidden);
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock);
struct scoutfs_xattr_prefix_tags {
unsigned long hide:1,
srch:1,
totl:1;
};
extern const struct xattr_handler *scoutfs_xattr_handlers[];
int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer, size_t size,
struct scoutfs_lock *lck);
int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_len,
const void *value, size_t size, int flags,
const struct scoutfs_xattr_prefix_tags *tgs,
struct scoutfs_lock *lck, struct scoutfs_lock *totl_lock,
struct list_head *ind_locks);
ssize_t scoutfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
size_t size, __u32 *hash_pos, __u64 *id_pos,
bool e_range, bool show_hidden);
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock);
int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
struct scoutfs_xattr_prefix_tags *tgs);

tests/.gitignore vendored
View File

@@ -8,4 +8,3 @@ src/bulk_create_paths
src/find_xattrs
src/stage_tmpfile
src/create_xattr_loop
src/o_tmpfile_umask

View File

@@ -10,9 +10,7 @@ BIN := src/createmany \
src/bulk_create_paths \
src/stage_tmpfile \
src/find_xattrs \
src/create_xattr_loop \
src/fragmented_data_extents \
src/o_tmpfile_umask
src/create_xattr_loop
DEPS := $(wildcard src/*.d)

View File

@@ -35,22 +35,10 @@ t_fail()
t_quiet()
{
echo "# $*" >> "$T_TMPDIR/quiet.log"
"$@" >> "$T_TMPDIR/quiet.log" 2>&1 || \
"$@" > "$T_TMPDIR/quiet.log" 2>&1 || \
t_fail "quiet command failed"
}
#
# Quietly run a command during a test. The output is logged but only
# the return code is printed, presumably because the output contains
# a lot of invocation specific text that is difficult to filter.
#
t_rc()
{
echo "# $*" >> "$T_TMP.rc.log"
"$@" >> "$T_TMP.rc.log" 2>&1
echo "rc: $?"
}
#
# redirect test output back to the output of the invoking script instead
# of the compared output.

View File

@@ -18,7 +18,6 @@ t_filter_dmesg()
# the kernel can just be noisy
re=" used greatest stack depth: "
re="$re|sched: RT throttling activated"
# mkfs/mount checks partition tables
re="$re|unknown partition table"
@@ -62,7 +61,6 @@ t_filter_dmesg()
re="$re|scoutfs .* error: meta_super META flag not set"
re="$re|scoutfs .* error: could not open metadev:.*"
re="$re|scoutfs .* error: Unknown or malformed option,.*"
re="$re|scoutfs .* error: invalid quorum_heartbeat_timeout_ms value"
# in debugging kernels we can slow things down a bit
re="$re|hrtimer: interrupt took .*"
@@ -83,13 +81,6 @@ t_filter_dmesg()
re="$re|scoutfs .* error .* freeing merged btree blocks.*.final commit del.upd freeing item"
re="$re|scoutfs .* error .*reading quorum block.*to update event.*"
re="$re|scoutfs .* error.*server failed to bind to.*"
re="$re|scoutfs .* critical transaction commit failure.*"
# change-devices causes loop device resizing
re="$re|loop[0-9].* detected capacity change from.*"
# ignore systemd-journal rotating
re="$re|systemd-journald.*"
egrep -v "($re)"
}

View File

@@ -75,15 +75,6 @@ t_fs_nrs()
seq 0 $((T_NR_MOUNTS - 1))
}
#
# output the fs nrs of quorum nodes; we "know" that
# the quorum nrs are the first consecutive nrs
#
t_quorum_nrs()
{
seq 0 $((T_QUORUM - 1))
}
#
# outputs "1" if the fs number has "1" in its quorum/is_leader file.
# All other cases output 0, including the fs nr being a client which
@@ -153,27 +144,7 @@ t_mount()
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval t_quiet mount -t scoutfs \$T_O$nr\$opt \$T_DB$nr \$T_M$nr
}
#
# Mount with an optional mount option string. If the string is empty
# then the saved mount options are used. If the string has contents
# then it is appended to the end of the saved options with a separating
# comma.
#
# Unlike t_mount, this won't inherently fail via t_quiet; errors are
# returned so bad options can be tested.
#
t_mount_opt()
{
local nr="$1"
local opt="${2:+,$2}"
test "$nr" -lt "$T_NR_MOUNTS" || \
t_fail "fs nr $nr invalid"
eval mount -t scoutfs \$T_O$nr\$opt \$T_DB$nr \$T_M$nr
eval t_quiet mount -t scoutfs \$T_O$nr \$T_DB$nr \$T_M$nr
}
t_umount()
@@ -406,21 +377,13 @@ t_wait_for_leader() {
done
}
t_get_sysfs_mount_option() {
local nr="$1"
local name="$2"
local opt="$(t_sysfs_path $nr)/mount_options/$name"
cat "$opt"
}
t_set_sysfs_mount_option() {
local nr="$1"
local name="$2"
local val="$3"
local opt="$(t_sysfs_path $nr)/mount_options/$name"
echo "$val" > "$opt" 2>/dev/null
echo "$val" > "$opt"
}
t_set_all_sysfs_mount_options() {
@@ -442,7 +405,7 @@ t_save_all_sysfs_mount_options() {
for i in $(t_fs_nrs); do
opt="$(t_sysfs_path $i)/mount_options/$name"
ind="${name}_${i}"
ind="$name_$i"
_saved_opts[$ind]="$(cat $opt)"
done
@@ -454,7 +417,7 @@ t_restore_all_sysfs_mount_options() {
local i
for i in $(t_fs_nrs); do
ind="${name}_${i}"
ind="$name_$i"
t_set_sysfs_mount_option $i $name "${_saved_opts[$ind]}"
done

View File

@@ -47,9 +47,8 @@ four
--- dir within dir
--- overwrite file
--- can't overwrite non-empty dir
mv: cannot move '/mnt/test/test/basic-posix-consistency/dir/c/clobber' to '/mnt/test/test/basic-posix-consistency/dir/a/dir': Directory not empty
mv: cannot move /mnt/test/test/basic-posix-consistency/dir/c/clobber to /mnt/test/test/basic-posix-consistency/dir/a/dir: Directory not empty
--- can overwrite empty dir
--- can rename into root
== path resolution
== inode indexes match after syncing existing
== inode indexes match after copying and syncing

View File

@@ -1,6 +0,0 @@
== truncate writes zeroed partial end of file block
0000000 0a79 0a79 0a79 0a79 0a79 0a79 0a79 0a79
*
0006144 0000 0000 0000 0000 0000 0000 0000 0000
*
0012288

View File

@@ -1,27 +0,0 @@
== make tmp sparse data dev files
== make scratch fs
== small new data device fails
rc: 1
== check sees data device errors
rc: 1
rc: 0
== preparing while mounted fails
rc: 1
== preparing without recovery fails
rc: 1
== check sees metadata errors
rc: 1
rc: 1
== preparing with file data fails
rc: 1
== preparing after emptied
rc: 0
== checks pass
rc: 0
rc: 0
== using prepared
== preparing larger and resizing
rc: 0
equal_prepared
large_prepared
resized larger test rc: 0

View File

@@ -1,330 +0,0 @@
== initial writes smaller than prealloc grow to prealloc size
/mnt/test/test/data-prealloc/file-1: 7 extents found
/mnt/test/test/data-prealloc/file-2: 7 extents found
== larger files get full prealloc extents
/mnt/test/test/data-prealloc/file-1: 9 extents found
/mnt/test/test/data-prealloc/file-2: 9 extents found
== non-streaming writes with contig have per-block extents
/mnt/test/test/data-prealloc/file-1: 32 extents found
/mnt/test/test/data-prealloc/file-2: 32 extents found
== any writes to region prealloc get full extents
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
== streaming offline writes get full extents either way
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
== goofy preallocation amounts work
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 3 extents found
/mnt/test/test/data-prealloc/file-2: 3 extents found
== block writes into region allocs hole
wrote blk 24
wrote blk 32
wrote blk 40
wrote blk 55
wrote blk 63
wrote blk 71
wrote blk 72
wrote blk 79
wrote blk 80
wrote blk 87
wrote blk 88
wrote blk 95
before:
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 0 at pos 0
wrote blk 0
0.. 1:
1.. 7: unwritten
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 0 at pos 1
wrote blk 15
0.. 1:
1.. 14: unwritten
15.. 1:
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 0 at pos 2
wrote blk 19
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 1 at pos 0
wrote blk 25
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 1 at pos 1
wrote blk 39
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 1 at pos 2
wrote blk 44
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 2 at pos 0
wrote blk 48
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 2 at pos 1
wrote blk 62
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 2 at pos 2
wrote blk 67
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 3 at pos 0
wrote blk 73
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
73.. 1:
74.. 5: unwritten
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 3 at pos 1
wrote blk 86
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
73.. 1:
74.. 5: unwritten
79.. 2:
86.. 1:
87.. 2:
95.. 1: eof
writing into existing 3 at pos 2
wrote blk 92
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
73.. 1:
74.. 5: unwritten
79.. 2:
86.. 1:
87.. 2:
92.. 1:
93.. 2: unwritten
95.. 1: eof

View File

@@ -1,18 +0,0 @@
== root inode returns nothing
== crazy large unused inode does nothing
== basic entry
file
== rename
renamed
== hard link
file
link
== removal
== different dirs
== file types
type b name block
type c name char
type d name dir
type f name file
type l name symlink
== all name lengths work

View File

@@ -17,7 +17,7 @@ ino not found in dseq index
mount 0 contents after mount 1 rm: contents
ino found in dseq index
ino found in dseq index
stat: cannot stat '/mnt/test/test/inode-deletion/file': No such file or directory
stat: cannot stat /mnt/test/test/inode-deletion/file: No such file or directory
ino not found in dseq index
ino not found in dseq index
== lots of deletions use one open map

View File

@@ -1,3 +0,0 @@
== creating fragmented extents
== unlink file with moved extents to free extents per block
== cleanup

View File

@@ -20,10 +20,10 @@ offline waiting should now have two known entries:
data_wait_err found 2 waiters.
offline waiting should now have 0 known entries:
0
dd: error reading '/mnt/test/test/offline-extent-waiting/dir/file': Input/output error
dd: error reading /mnt/test/test/offline-extent-waiting/dir/file: Input/output error
0+0 records in
0+0 records out
dd: error reading '/mnt/test/test/offline-extent-waiting/dir/file': Input/output error
dd: error reading /mnt/test/test/offline-extent-waiting/dir/file: Input/output error
0+0 records in
0+0 records out
offline waiting should be empty again:

View File

@@ -1,5 +0,0 @@
== bad timeout values fail
== bad mount option fails
== mount option
== sysfs
== reset all options

View File

@@ -7,4 +7,3 @@ found second
== changing metadata must increase meta seq
== changing contents must increase data seq
== make sure dirtying doesn't livelock walk
== concurrent update attempts maintain single entries

View File

@@ -1,11 +1,3 @@
== non-acl O_TMPFILE creation honors umask
umask 022
fstat after open(0777): 0100755
stat after linkat: 0100755
umask 077
fstat after open(0777): 0100700
stat after linkat: 0100700
== stage from tmpfile
total file size 33669120
00000000 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 |AAAAAAAAAAAAAAAA|
*

View File

@@ -40,7 +40,6 @@ generic/092
generic/098
generic/101
generic/104
generic/105
generic/106
generic/107
generic/117
@@ -52,7 +51,6 @@ generic/184
generic/221
generic/228
generic/236
generic/237
generic/245
generic/249
generic/257
@@ -65,7 +63,6 @@ generic/308
generic/309
generic/313
generic/315
generic/319
generic/322
generic/335
generic/336
@@ -75,7 +72,6 @@ generic/342
generic/343
generic/348
generic/360
generic/375
generic/376
generic/377
Not
@@ -241,6 +237,7 @@ generic/312
generic/314
generic/316
generic/317
generic/318
generic/324
generic/326
generic/327
@@ -285,4 +282,4 @@ shared/004
shared/032
shared/051
shared/289
Passed all 79 tests
Passed all 75 tests

View File

@@ -1,8 +1,5 @@
#!/usr/bin/bash
# Force system tools to use ASCII quotes
export LC_ALL=C
#
# XXX
# - could have helper functions for waiting for pids
@@ -61,7 +58,6 @@ $(basename $0) options:
-m | Run mkfs on the device before mounting and running
| tests. Implies unmounting existing mounts first.
-n <nr> | The number of devices and mounts to test.
-o <opts> | Add option string to all mounts during all tests.
-P | Enable trace_printk.
-p | Exit script after preparing mounts only, don't run tests.
-q <nr> | The first <nr> mounts will be quorum members. Must be
@@ -72,7 +68,6 @@ $(basename $0) options:
-s | Skip git repo checkouts.
-t | Enabled trace events that match the given glob argument.
| Multiple options enable multiple globbed events.
-T <nr> | Multiply the original trace buffer size by nr during the run.
-X | xfstests git repo. Used by tests/xfstests.sh.
-x | xfstests git branch to checkout and track.
-y | xfstests ./check additional args
@@ -141,12 +136,6 @@ while true; do
T_NR_MOUNTS="$2"
shift
;;
-o)
test -n "$2" || die "-o must have option string argument"
# always appending to existing options
T_MNT_OPTIONS+=",$2"
shift
;;
-P)
T_TRACE_PRINTK="1"
;;
@@ -171,11 +160,6 @@ while true; do
T_TRACE_GLOB+=("$2")
shift
;;
-T)
test -n "$2" || die "-T must have trace buffer size multiplier argument"
T_TRACE_MULT="$2"
shift
;;
-X)
test -n "$2" || die "-X requires xfstests git repo dir argument"
T_XFSTESTS_REPO="$2"
@@ -361,13 +345,6 @@ if [ -n "$T_INSMOD" ]; then
cmd insmod "$T_KMOD/src/scoutfs.ko"
fi
if [ -n "$T_TRACE_MULT" ]; then
orig_trace_size=$(cat /sys/kernel/debug/tracing/buffer_size_kb)
mult_trace_size=$((orig_trace_size * T_TRACE_MULT))
msg "increasing trace buffer size from $orig_trace_size KiB to $mult_trace_size KiB"
echo $mult_trace_size > /sys/kernel/debug/tracing/buffer_size_kb
fi
nr_globs=${#T_TRACE_GLOB[@]}
if [ $nr_globs -gt 0 ]; then
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
@@ -397,7 +374,6 @@ fi
# always describe tracing in the logs
cmd cat /sys/kernel/debug/tracing/set_event
cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
/sys/kernel/debug/tracing/buffer_size_kb \
/proc/sys/kernel/ftrace_dump_on_oops
#
@@ -454,7 +430,6 @@ for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
if [ "$i" -lt "$T_QUORUM" ]; then
opts="$opts,quorum_slot_nr=$i"
fi
opts="${opts}${T_MNT_OPTIONS}"
msg "mounting $meta_dev|$data_dev on $dir"
cmd mount -t scoutfs $opts "$data_dev" "$dir" &
@@ -629,9 +604,6 @@ if [ -n "$T_TRACE_GLOB" -o -n "$T_TRACE_PRINTK" ]; then
echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
echo 0 > /sys/kernel/debug/tracing/options/trace_printk
cat /sys/kernel/debug/tracing/trace > "$T_RESULTS/traces"
if [ -n "$orig_trace_size" ]; then
echo $orig_trace_size > /sys/kernel/debug/tracing/buffer_size_kb
fi
fi
if [ "$skipped" == 0 -a "$failed" == 0 ]; then

View File

@@ -5,14 +5,10 @@ inode-items-updated.sh
simple-inode-index.sh
simple-staging.sh
simple-release-extents.sh
get-referring-entries.sh
fallocate.sh
basic-truncate.sh
data-prealloc.sh
setattr_more.sh
offline-extent-waiting.sh
move-blocks.sh
large-fragmented-free.sh
enospc.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
@@ -28,7 +24,7 @@ createmany-large-names.sh
createmany-rename-large-dir.sh
stage-release-race-alloc.sh
stage-multi-part.sh
o_tmpfile.sh
stage-tmpfile.sh
basic-posix-consistency.sh
dirent-consistency.sh
mkdir-rename-rmdir.sh
@@ -37,9 +33,7 @@ cross-mount-data-free.sh
persistent-item-vers.sh
setup-error-teardown.sh
resize-devices.sh
change-devices.sh
fence-and-reclaim.sh
quorum-heartbeat-timeout.sh
orphan-inodes.sh
mount-unmount-race.sh
client-unmount-recovery.sh

View File

@@ -1,113 +0,0 @@
/*
* Copyright (C) 2021 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
/*
* This creates fragmented data extents.
*
* A file is created that has alternating free and allocated extents.
* This also results in the global allocator having the matching
* fragmented free extent pattern. While that file is being created,
* occasionally an allocated extent is moved to another file. This
* results in a file that has fragmented extents at a given stride that
* can be deleted to create free data extents with a given stride.
*
* We don't have hole punching so to do this quickly we use a goofy
* combination of fallocate, truncate, and our move_blocks ioctl.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <linux/types.h>
#include <assert.h>
#include "ioctl.h"
#define BLOCK_SIZE 4096
int main(int argc, char **argv)
{
struct scoutfs_ioctl_move_blocks mb = {0,};
unsigned long long freed_extents;
unsigned long long move_stride;
unsigned long long i;
int alloc_fd;
int trunc_fd;
off_t off;
int ret;
if (argc != 5) {
printf("%s <freed_extents> <move_stride> <alloc_file> <trunc_file>\n", argv[0]);
return 1;
}
freed_extents = strtoull(argv[1], NULL, 0);
move_stride = strtoull(argv[2], NULL, 0);
alloc_fd = open(argv[3], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
if (alloc_fd == -1) {
fprintf(stderr, "error opening %s: %d (%s)\n", argv[3], errno, strerror(errno));
exit(1);
}
trunc_fd = open(argv[4], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
if (trunc_fd == -1) {
fprintf(stderr, "error opening %s: %d (%s)\n", argv[4], errno, strerror(errno));
exit(1);
}
for (i = 0, off = 0; i < freed_extents; i++, off += BLOCK_SIZE * 2) {
ret = fallocate(alloc_fd, 0, off, BLOCK_SIZE * 2);
if (ret < 0) {
fprintf(stderr, "fallocate at off %llu error: %d (%s)\n",
(unsigned long long)off, errno, strerror(errno));
exit(1);
}
ret = ftruncate(alloc_fd, off + BLOCK_SIZE);
if (ret < 0) {
fprintf(stderr, "truncate to off %llu error: %d (%s)\n",
(unsigned long long)off + BLOCK_SIZE, errno, strerror(errno));
exit(1);
}
if ((i % move_stride) == 0) {
mb.from_fd = alloc_fd;
mb.from_off = off;
mb.len = BLOCK_SIZE;
mb.to_off = i * BLOCK_SIZE;
ret = ioctl(trunc_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
if (ret < 0) {
fprintf(stderr, "move from off %llu error: %d (%s)\n",
(unsigned long long)off,
errno, strerror(errno));
}
}
}
if (alloc_fd > -1)
close(alloc_fd);
if (trunc_fd > -1)
close(trunc_fd);
return 0;
}

View File

@@ -48,7 +48,7 @@ struct our_handle {
static void exit_usage(void)
{
printf(" -h/-? output this usage message and exit\n"
" -e keep trying on enoent and estale, consider success an error\n"
" -e keep trying on enoent, consider success an error\n"
" -i <num> 64bit inode number for handle open, can be multiple\n"
" -m <string> scoutfs mount path string for ioctl fd\n"
" -n <string> optional xattr name string, defaults to \""DEFAULT_NAME"\"\n"
@@ -149,7 +149,7 @@ int main(int argc, char **argv)
fd = open_by_handle_at(mntfd, &handle.handle, O_RDWR);
if (fd == -1) {
if (!enoent_success_err || ( errno != ENOENT && errno != ESTALE )) {
if (!enoent_success_err || errno != ENOENT) {
perror("open_by_handle_at");
return 1;
}

View File

@@ -1,97 +0,0 @@
/*
* Show the modes of files as we create them with O_TMPFILE and link
* them into the namespace.
*
* Copyright (C) 2022 Versity Software, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License v2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/stat.h>
#include <assert.h>
#include <limits.h>
static void linkat_tmpfile_modes(char *dir, char *lpath, mode_t mode)
{
char proc_self[PATH_MAX];
struct stat st;
int ret;
int fd;
umask(mode);
printf("umask 0%o\n", mode);
fd = open(dir, O_RDWR | O_TMPFILE, 0777);
if (fd < 0) {
perror("open(O_TMPFILE)");
exit(1);
}
ret = fstat(fd, &st);
if (ret < 0) {
perror("fstat");
exit(1);
}
printf("fstat after open(0777): 0%o\n", st.st_mode);
snprintf(proc_self, sizeof(proc_self), "/proc/self/fd/%d", fd);
ret = linkat(AT_FDCWD, proc_self, AT_FDCWD, lpath, AT_SYMLINK_FOLLOW);
if (ret < 0) {
perror("linkat");
exit(1);
}
close(fd);
ret = stat(lpath, &st);
if (ret < 0) {
perror("fstat");
exit(1);
}
printf("stat after linkat: 0%o\n", st.st_mode);
ret = unlink(lpath);
if (ret < 0) {
perror("unlink");
exit(1);
}
}
int main(int argc, char **argv)
{
char *lpath;
char *dir;
if (argc < 3) {
printf("%s <open_dir> <linkat_path>\n", argv[0]);
return 1;
}
dir = argv[1];
lpath = argv[2];
linkat_tmpfile_modes(dir, lpath, 022);
linkat_tmpfile_modes(dir, lpath, 077);
return 0;
}

View File

@@ -12,7 +12,7 @@ mount_fail()
}
echo "== prepare devices, mount point, and logs"
SCR="$T_TMPDIR/mnt.scratch"
SCR="/mnt/scoutfs.extra"
mkdir -p "$SCR"
> $T_TMP.mount.out
scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 \

View File

@@ -149,10 +149,6 @@ find "$T_D0/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.0"
find "$T_D1/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
rm -rf "$T_D0/dir"
echo "--- can rename into root"
touch "$T_D0/rename-into-root"
mv "$T_D0/rename-into-root" "$T_M0/"
rm -f "$T_M0/rename-into-root"
echo "== path resoluion"
touch "$T_D0/file"

View File

@@ -1,21 +0,0 @@
#
# Test basic correctness of truncate.
#
t_require_commands yes dd od truncate
FILE="$T_D0/file"
#
# We forgot to write a dirty block that zeroed the tail of a partial
# final block as we truncated past it.
#
echo "== truncate writes zeroed partial end of file block"
yes | dd of="$FILE" bs=8K count=1 status=none
sync
truncate -s 6K "$FILE"
truncate -s 12K "$FILE"
echo 3 > /proc/sys/vm/drop_caches
od -Ad -x "$FILE"
t_pass

View File

@@ -1,76 +0,0 @@
#
# test changing devices
#
echo "== make tmp sparse data dev files"
sz=$(blockdev --getsize64 "$T_EX_DATA_DEV")
large_sz=$((sz * 2))
touch "$T_TMP."{small,equal,large}
truncate -s 1MB "$T_TMP.small"
truncate -s $sz "$T_TMP.equal"
truncate -s $large_sz "$T_TMP.large"
echo "== make scratch fs"
t_quiet scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"
echo "== small new data device fails"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.small"
echo "== check sees data device errors"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV" "$T_TMP.small"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV"
echo "== preparing while mounted fails"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$T_EX_DATA_DEV" "$SCR"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.equal"
umount "$SCR"
echo "== preparing without recovery fails"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$T_EX_DATA_DEV" "$SCR"
umount -f "$SCR"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.equal"
echo "== check sees metadata errors"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV" "$T_TMP.equal"
echo "== preparing with file data fails"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$T_EX_DATA_DEV" "$SCR"
echo hi > "$SCR"/file
umount "$SCR"
scoutfs print "$T_EX_META_DEV" > "$T_TMP.print"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.equal"
echo "== preparing after emptied"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$T_EX_DATA_DEV" "$SCR"
rm -f "$SCR"/file
umount "$SCR"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.equal"
echo "== checks pass"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV" "$T_TMP.equal"
echo "== using prepared"
scr_loop=$(losetup --find --show "$T_TMP.equal")
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$scr_loop" "$SCR"
touch "$SCR"/equal_prepared
equal_tot=$(scoutfs statfs -s total_data_blocks -p "$SCR")
umount "$SCR"
losetup -d "$scr_loop"
echo "== preparing larger and resizing"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.large"
scr_loop=$(losetup --find --show "$T_TMP.large")
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$scr_loop" "$SCR"
touch "$SCR"/large_prepared
ls "$SCR"
scoutfs resize-devices -p "$SCR" -d $large_sz
large_tot=$(scoutfs statfs -s total_data_blocks -p "$SCR")
test "$large_tot" -gt "$equal_tot" ; echo "resized larger test rc: $?"
umount "$SCR"
losetup -d "$scr_loop"
t_pass

View File

@@ -1,231 +0,0 @@
#
# test that the data prealloc options behave as expected. We write to
# two files a block at a time so that a single file doesn't naturally
# merge adjacent consecutive allocations. (we don't have multiple
# allocation cursors)
#
t_require_commands scoutfs stat filefrag dd touch truncate
write_block()
{
local file="$1"
local blk="$2"
dd if=/dev/zero of="$file" bs=4096 seek=$blk count=1 conv=notrunc status=none
echo "wrote blk $blk"
}
write_forwards()
{
local prefix="$1"
local nr="$2"
local blk
touch "$prefix"-{1,2}
truncate -s 0 "$prefix"-{1,2}
for blk in $(seq 0 1 $((nr - 1))); do
dd if=/dev/zero of="$prefix"-1 bs=4096 seek=$blk count=1 conv=notrunc status=none
dd if=/dev/zero of="$prefix"-2 bs=4096 seek=$blk count=1 conv=notrunc status=none
done
}
write_backwards()
{
local prefix="$1"
local nr="$2"
local blk
touch "$prefix"-{1,2}
truncate -s 0 "$prefix"-{1,2}
for blk in $(seq $((nr - 1)) -1 0); do
dd if=/dev/zero of="$prefix"-1 bs=4096 seek=$blk count=1 conv=notrunc status=none
dd if=/dev/zero of="$prefix"-2 bs=4096 seek=$blk count=1 conv=notrunc status=none
done
}
release_files() {
local prefix="$1"
local size=$(($2 * 4096))
local vers
local f
for f in "$prefix"*; do
size=$(stat -c "%s" "$f")
vers=$(scoutfs stat -s data_version "$f")
scoutfs release "$f" -V "$vers" -o 0 -l $size
done
}
stage_files() {
local prefix="$1"
local nr="$2"
local vers
local f
for blk in $(seq 0 1 $((nr - 1))); do
for f in "$prefix"*; do
vers=$(scoutfs stat -s data_version "$f")
scoutfs stage /dev/zero "$f" -V "$vers" -o $((blk * 4096)) -l 4096
done
done
}
print_extents_found()
{
local prefix="$1"
filefrag "$prefix"* 2>&1 | grep "extent.*found" | t_filter_fs
}
#
# print the logical start, len, and flags if they're there.
#
print_logical_extents()
{
local file="$1"
filefrag -v -b4096 "$file" 2>&1 | t_filter_fs | awk '
($1 ~ /[0-9]+:/) {
if ($NF !~ /[0-9]+:/) {
flags=$NF
} else {
flags=""
}
print $2, $6, flags
}
' | sed 's/last,eof/eof/'
}
t_save_all_sysfs_mount_options data_prealloc_blocks
t_save_all_sysfs_mount_options data_prealloc_contig_only
restore_options()
{
t_restore_all_sysfs_mount_options data_prealloc_blocks
t_restore_all_sysfs_mount_options data_prealloc_contig_only
}
trap restore_options EXIT
prefix="$T_D0/file"
echo "== initial writes smaller than prealloc grow to prealloc size"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 64
print_extents_found $prefix
echo "== larger files get full prealloc extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 128
print_extents_found $prefix
echo "== non-streaming writes with contig have per-block extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_backwards $prefix 32
print_extents_found $prefix
echo "== any writes to region prealloc get full extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 16
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 64
print_extents_found $prefix
write_backwards $prefix 64
print_extents_found $prefix
echo "== streaming offline writes get full extents either way"
t_set_sysfs_mount_option 0 data_prealloc_blocks 16
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 64
release_files $prefix 64
stage_files $prefix 64
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
release_files $prefix 64
stage_files $prefix 64
print_extents_found $prefix
echo "== goofy preallocation amounts work"
t_set_sysfs_mount_option 0 data_prealloc_blocks 7
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 14
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_blocks 13
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 53
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_blocks 1
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 3
print_extents_found $prefix
#
# prepare aligned regions of 8 blocks that we'll write into.
# We'll write into the first, last, and middle block of each
# region, which was prepared with no existing extents, one at
# the start, and one at the end.
#
# Let's keep this last because it creates a ton of output to read
# through. The correct output is tied to preallocation strategy so it
# has to be verified each time we change preallocation.
#
echo "== block writes into region allocs hole"
t_set_sysfs_mount_option 0 data_prealloc_blocks 8
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
touch "$prefix"
truncate -s 0 "$prefix"
# write initial blocks in regions
base=0
for sides in 0 1 2 3; do
for i in 0 1 2; do
case "$sides" in
# none
0) ;;
# left
1) write_block $prefix $((base + 0)) ;;
# right
2) write_block $prefix $((base + 7)) ;;
# both
3) write_block $prefix $((base + 0))
write_block $prefix $((base + 7)) ;;
esac
((base+=8))
done
done
echo before:
print_logical_extents "$prefix"
# now write into the first, middle, and last empty block of each
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
base=0
for sides in 0 1 2 3; do
for i in 0 1 2; do
echo "writing into existing $sides at pos $i"
case "$sides" in
# none
0) left=$base; right=$((base + 7));;
# left
1) left=$((base + 1)); right=$((base + 7));;
# right
2) left=$((base)); right=$((base + 6));;
# both
3) left=$((base + 1)); right=$((base + 6));;
esac
case "$i" in
# start
0) write_block $prefix $left ;;
# end
1) write_block $prefix $right ;;
# mid (both has 6 blocks internally)
2) write_block $prefix $((left + 3)) ;;
esac
print_logical_extents "$prefix"
((base+=8))
done
done
t_pass

View File

@@ -59,7 +59,7 @@ echo "== make small meta fs"
# meta device just big enough for reserves and the metadata we'll fill
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m 10G "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
t_fail "mkfs failed"
SCR="$T_TMPDIR/mnt.scratch"
SCR="/mnt/scoutfs.enospc"
mkdir -p "$SCR"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"

View File

@@ -7,11 +7,14 @@ t_require_mounts 2
#
# Make sure that all mounts can read the results of a write from each
# mount.
# mount. And make sure that the greatest of all the written seqs is
# visible after the writes were committed by remote reads.
#
check_read_write()
{
local expected
local greatest=0
local seq
local path
local saw
local w
@@ -22,6 +25,11 @@ check_read_write()
eval path="\$T_D${w}/written"
echo "$expected" > "$path"
seq=$(scoutfs stat -s meta_seq $path)
if [ "$seq" -gt "$greatest" ]; then
greatest=$seq
fi
for r in $(t_fs_nrs); do
eval path="\$T_D${r}/written"
saw=$(cat "$path")
@@ -30,6 +38,11 @@ check_read_write()
fi
done
done
seq=$(scoutfs statfs -s committed_seq -p $T_D0)
if [ "$seq" -lt "$greatest" ]; then
echo "committed_seq $seq less than greatest $greatest"
fi
}
# verify that fenced ran our testing fence script

View File

@@ -1,99 +0,0 @@
#
# Test _GET_REFERRING_ENTRIES ioctl via the get-referring-entries cli
# command
#
# consistently print only entry names
filter_names() {
exec cut -d ' ' -f 8- | sort
}
# print entries with type characters to match find. not happy with hard
# coding, but abi won't change much.
filter_types() {
exec cut -d ' ' -f 5- | \
sed \
-e 's/type 1 /type p /' \
-e 's/type 2 /type c /' \
-e 's/type 4 /type d /' \
-e 's/type 6 /type b /' \
-e 's/type 8 /type f /' \
-e 's/type 10 /type l /' \
-e 's/type 12 /type s /' \
| \
sort
}
n_chars() {
local n="$1"
printf 'A%.0s' $(eval echo {1..\$n})
}
GRE="scoutfs get-referring-entries -p $T_M0"
echo "== root inode returns nothing"
$GRE 1
echo "== crazy large unused inode does nothing"
$GRE 4611686018427387904 # 1 << 62
echo "== basic entry"
touch $T_D0/file
ino=$(stat -c '%i' $T_D0/file)
$GRE $ino | filter_names
echo "== rename"
mv $T_D0/file $T_D0/renamed
$GRE $ino | filter_names
echo "== hard link"
mv $T_D0/renamed $T_D0/file
ln $T_D0/file $T_D0/link
$GRE $ino | filter_names
echo "== removal"
rm $T_D0/file $T_D0/link
$GRE $ino
echo "== different dirs"
touch $T_D0/file
ino=$(stat -c '%i' $T_D0/file)
for i in $(seq 1 10); do
mkdir $T_D0/dir-$i
ln $T_D0/file $T_D0/dir-$i/file-$i
done
diff -u <(find $T_D0 -type f -printf '%f\n' | sort) <($GRE $ino | filter_names)
rm $T_D0/file
echo "== file types"
mkdir $T_D0/dir
touch $T_D0/dir/file
mkdir $T_D0/dir/dir
ln -s $T_D0/dir/file $T_D0/dir/symlink
mknod $T_D0/dir/char c 1 3 # null
mknod $T_D0/dir/block b 7 0 # loop0
for name in $(ls -UA $T_D0/dir | sort); do
ino=$(stat -c '%i' $T_D0/dir/$name)
$GRE $ino | filter_types
done
rm -rf $T_D0/dir
echo "== all name lengths work"
mkdir $T_D0/dir
touch $T_D0/dir/file
ino=$(stat -c '%i' $T_D0/dir/file)
name=""
> $T_TMP.unsorted
for i in $(seq 1 255); do
name+="a"
echo "$name" >> $T_TMP.unsorted
ln $T_D0/dir/file $T_D0/dir/$name
done
sort $T_TMP.unsorted > $T_TMP.sorted
rm $T_D0/dir/file
$GRE $ino | filter_names > $T_TMP.gre
diff -u $T_TMP.sorted $T_TMP.gre
rm -rf $T_D0/dir
t_pass

View File

@@ -72,7 +72,7 @@ check_ino_index "$ino" "$dseq" "$T_M0"
check_ino_index "$ino" "$dseq" "$T_M1"
exec {FD}>&- # close
# we know that revalidating will unhash the remote dentry
stat "$T_D0/file" 2>&1 | sed 's/cannot statx/cannot stat/' | t_filter_fs
stat "$T_D0/file" 2>&1 | t_filter_fs
check_ino_index "$ino" "$dseq" "$T_M0"
check_ino_index "$ino" "$dseq" "$T_M1"

View File

@@ -1,22 +0,0 @@
#
# Make sure the server can handle a transaction with a data_freed whose
# blocks all hit different btree blocks in the main free list. It
# probably has to be merged in multiple commits.
#
t_require_commands fragmented_data_extents
EXTENTS_PER_BTREE_BLOCK=600
EXTENTS_PER_LIST_BLOCK=8192
FREED_EXTENTS=$((EXTENTS_PER_BTREE_BLOCK * EXTENTS_PER_LIST_BLOCK))
echo "== creating fragmented extents"
fragmented_data_extents $FREED_EXTENTS $EXTENTS_PER_BTREE_BLOCK "$T_D0/alloc" "$T_D0/move"
echo "== unlink file with moved extents to free extents per block"
rm -f "$T_D0/move"
echo "== cleanup"
rm -f "$T_D0/alloc"
t_pass

View File

@@ -1,16 +0,0 @@
#
# basic tests of O_TMPFILE
#
t_require_commands stage_tmpfile hexdump
echo "== non-acl O_TMPFILE creation honors umask"
o_tmpfile_umask "$T_D0" "$T_D0/umask-file"
echo "== stage from tmpfile"
DEST_FILE="$T_D0/dest_file"
stage_tmpfile $T_D0 $DEST_FILE
hexdump -C "$DEST_FILE"
rm -f "$DEST_FILE"
t_pass

View File

@@ -1,117 +0,0 @@
#
# test that the quorum_heartbeat_timeout_ms option affects how long it
# takes to recover from a failed mount.
#
t_require_mounts 2
time_ms()
{
# time_t in seconds, then truncate nanoseconds to the 3 most significant digits
date +%s%3N
}
set_bad_timeout() {
local to="$1"
t_set_sysfs_mount_option 0 quorum_heartbeat_timeout_ms $to && \
t_fail "set bad q hb to $to"
}
set_timeout()
{
local nr="$1"
local how="$2"
local to="$3"
local is
if [ $how == "sysfs" ]; then
t_set_sysfs_mount_option $nr quorum_heartbeat_timeout_ms $to
fi
if [ $how == "mount" ]; then
t_umount $nr
t_mount_opt $nr "quorum_heartbeat_timeout_ms=$to"
fi
is=$(t_get_sysfs_mount_option $nr quorum_heartbeat_timeout_ms)
if [ "$is" != "$to" ]; then
t_fail "tried to set qhbto on $nr via $how to $to but got $is"
fi
}
test_timeout()
{
local how="$1"
local to="$2"
local start
local nr
local sv
local delay
local low
local high
# set timeout on non-server quorum mounts
sv=$(t_server_nr)
for nr in $(t_quorum_nrs); do
if [ $nr -ne $sv ]; then
set_timeout $nr $how $to
fi
done
# give followers time to recv heartbeats and reset timeouts
sleep 1
# tear down the current server/leader
t_force_umount $sv
# see how long it takes for the next leader to start
start=$(time_ms)
t_wait_for_leader
delay=$(($(time_ms) - start))
# kind of fun to have these logged
echo "to $to delay $delay" >> $T_TMP.delay
# restore the mount that we tore down
t_mount $sv
# make sure the new leader delay was reasonable, allowing for some slack
low=$((to - 1000))
high=$((to + 5000))
test "$delay" -lt "$low" && t_fail "delay $delay < low $low (to $to)"
test "$delay" -gt "$high" && t_fail "delay $delay > high $high (to $to)"
}
echo "== bad timeout values fail"
set_bad_timeout 0
set_bad_timeout -1
set_bad_timeout 1000000
echo "== bad mount option fails"
if [ "$(t_server_nr)" == 0 ]; then
nr=1
else
nr=0
fi
t_umount $nr
t_mount_opt $nr "quorum_heartbeat_timeout_ms=1000000" 2>/dev/null && \
t_fail "bad mount option succeeded"
t_mount $nr
echo "== mount option"
def=$(t_get_sysfs_mount_option 0 quorum_heartbeat_timeout_ms)
test_timeout mount $def
test_timeout mount 3000
test_timeout mount $((def + 19000))
echo "== sysfs"
test_timeout sysfs $def
test_timeout sysfs 3000
test_timeout sysfs $((def + 19000))
echo "== reset all options"
t_remount_all
t_pass

View File

@@ -2,8 +2,6 @@
# Some basic tests of online resizing metadata and data devices.
#
t_require_commands bc
statfs_total() {
local single="total_$1_blocks"
local mnt="$2"
@@ -75,7 +73,7 @@ echo "== make initial small fs"
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m $quarter_meta -d $quarter_data \
"$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
t_fail "mkfs failed"
SCR="$T_TMPDIR/mnt.scratch"
SCR="/mnt/scoutfs.enospc"
mkdir -p "$SCR"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
"$T_EX_DATA_DEV" "$SCR"

View File

@@ -55,17 +55,10 @@ scoutfs setattr -t 67305985.999999999 -V 1 -s 1 "$FILE" 2>&1 | t_filter_fs
TZ=GMT stat -c "%z" "$FILE"
rm "$FILE"
#
# With e2fsprogs-v1.42.10-10-g29758d2f, the output of filefrag 'flags' changes
# significantly. First, the _LAST flag is now output. Second, the 'unknown'
# flag is now printed out as 'unknown_loc'. To compensate for this, we check
# and replace the "correct" output for new versions here with the expected
# value.
#
echo "== large offline extents are created"
touch "$FILE"
scoutfs setattr -V 1 -o -s $((10007 * 4096)) "$FILE" 2>&1 | t_filter_fs
filefrag -v -b4096 "$FILE" 2>&1 | sed 's/last,unknown_loc,eof$/unknown,eof/' | t_filter_fs
filefrag -v -b4096 "$FILE" 2>&1 | t_filter_fs
rm "$FILE"
# had a bug where we were creating extents that were too long

View File

@@ -103,34 +103,4 @@ while [ "$nr" -lt 100 ]; do
((nr++))
done
#
# make sure rapid concurrent metadata updates don't create multiple
# meta_seq entries
#
# we had a bug where deletion items created under concurrent_write locks
# could get versions older than the items they're deleting which were
# protected by read/write locks.
#
echo "== concurrent update attempts maintain single entries"
FILES=4
nr=1
while [ "$nr" -lt 10 ]; do
# touch a bunch of files in parallel from all mounts
for i in $(t_fs_nrs); do
eval path="\$T_D${i}"
seq -f "$path/file-%.0f" 1 $FILES | xargs touch &
done
wait || t_fail "concurrent file updates failed"
# make sure no inodes have duplicate entries
sync
scoutfs walk-inodes -p "$T_D0" meta_seq -- 0 -1 | \
grep -v "minor" | \
awk '{print $4}' | \
sort -n | uniq -c | \
awk '($1 != 1)' | \
sort -n
((nr++))
done
t_pass

View File

@@ -27,11 +27,16 @@ test_xattr_lengths() {
echo "key len $name_len val len $val_len" >> "$T_TMP.log"
setfattr -n $name -v \"$val\" "$FILE"
getfattr -d --only-values --absolute-names "$FILE" -n "$name" > "$T_TMP.got"
echo -n "$val" > "$T_TMP.good"
# grep has trouble with enormous args? so we dump the
# name=value to a file and compare with a known good file
getfattr -d --absolute-names "$FILE" | grep "$name" > "$T_TMP.got"
cmp "$T_TMP.good" "$T_TMP.got" || \
t_fail "cmp failed name len $name_len val len $val_len"
if [ $val_len == 0 ]; then
echo "$name" > "$T_TMP.good"
else
echo "$name=\"$val\"" > "$T_TMP.good"
fi
cmp "$T_TMP.good" "$T_TMP.got" || exit 1
setfattr -x $name "$FILE"
}

View File

@@ -0,0 +1,15 @@
#
# Run tmpfile_stage and check the output with hexdump.
#
t_require_commands stage_tmpfile hexdump
DEST_FILE="$T_D0/dest_file"
stage_tmpfile $T_D0 $DEST_FILE
hexdump -C "$DEST_FILE"
rm -fr "$DEST_FILE"
t_pass

View File

@@ -65,6 +65,7 @@ generic/030 # mmap missing
generic/075 # file content mismatch failures (fds, etc)
generic/080 # mmap missing
generic/103 # enospc causes trans commit failures
generic/105 # needs triage: something about acls
generic/108 # mount fails on failing device?
generic/112 # file content mismatch failures (fds, etc)
generic/120 # (can't exec 'cause no mmap)
@@ -72,15 +73,17 @@ generic/126 # (can't exec 'cause no mmap)
generic/141 # mmap missing
generic/213 # enospc causes trans commit failures
generic/215 # mmap missing
generic/237 # wrong error return from failing setfacl?
generic/246 # mmap missing
generic/247 # mmap missing
generic/248 # mmap missing
generic/318 # can't support user namespaces until v5.11
generic/319 # utils output change? update branch?
generic/321 # requires selinux enabled for '+' in ls?
generic/325 # mmap missing
generic/338 # BUG_ON update inode error handling
generic/346 # mmap missing
generic/347 # _dmthin_mount doesn't work?
generic/375 # utils output change? update branch?
EOF
t_restore_output

View File

@@ -0,0 +1,140 @@
#!/usr/bin/bash
# /usr/libexec/scoutfs-fenced/run/ipmi-remote-host
# ipmi configuration
SCOUTFS_IPMI_CONFIG_FILE=${SCOUTFS_IPMI_CONFIG_FILE:-/etc/scoutfs/scoutfs-ipmi.conf}
SCOUTFS_IPMI_HOSTS_FILE=${SCOUTFS_IPMI_HOSTS_FILE:-/etc/scoutfs/scoutfs-ipmi-hosts.conf}
## hosts file format
## SCOUTFS_HOST_IP IPMI_ADDRESS
## ex:
# 192.168.1.1 192.168.10.1
# command setup
IPMI_POWER="/sbin/ipmipower"
SSH_CMD="ssh -o ConnectTimeout=3 -o BatchMode=yes -o StrictHostKeyChecking=no"
LOGGER="/bin/logger -p local3.crit -t scoutfs-fenced"
$LOGGER "ipmi fence script invoked: IP: $SCOUTFS_FENCED_REQ_IP RID: $SCOUTFS_FENCED_REQ_RID TEST: $IPMITEST"
echo_fail() {
echo "$@" >&2
$LOGGER "fence failed: $@"
exit 1
}
echo_log() {
echo "$@" >&2
$LOGGER "fence info: $@"
}
echo_test_pass() {
echo -e "\xE2\x9C\x94 $@"
}
echo_test_fail() {
echo -e "\xE2\x9D\x8C $@"
}
test -n "$SCOUTFS_IPMI_CONFIG_FILE" || \
echo_fail "SCOUTFS_IPMI_CONFIG_FILE isn't set"
test -r "$SCOUTFS_IPMI_CONFIG_FILE" || \
echo_fail "$SCOUTFS_IPMI_CONFIG_FILE isn't readable file"
. "$SCOUTFS_IPMI_CONFIG_FILE"
test -n "$SCOUTFS_IPMI_HOSTS_FILE" || \
echo_fail "SCOUTFS_IPMI_HOSTS_FILE isn't set"
test -r "$SCOUTFS_IPMI_HOSTS_FILE" || \
echo_fail "$SCOUTFS_IPMI_HOSTS_FILE isn't readable file"
test -x "$IPMI_POWER" || \
echo_fail "$IPMI_POWER not found, need to install freeimpi?"
export ip="$SCOUTFS_FENCED_REQ_IP"
export rid="$SCOUTFS_FENCED_REQ_RID"
getIPMIhost () {
host=$(awk -v ip="$1" '$1 == ip {print $2}' "$SCOUTFS_IPMI_HOSTS_FILE") || \
echo_fail "lookup ipmi host failed"
echo "$host"
}
powerOffHost() {
# older versions of ipmipower inverted wait-until-off/wait-until-on, so specify both
$IPMI_POWER $IPMI_OPTS -h "$1" --wait-until-off --wait-until-on --off || \
echo_fail "ipmi power off $1 failed"
ipmioutput=$($IPMI_POWER $IPMI_OPTS -h "$1" --stat) || \
echo_fail "ipmi power stat $1 failed"
if [[ ! "$ipmioutput" =~ off ]]; then
echo_fail "ipmi stat $1 not off"
fi
$LOGGER "ipmi fence power down $1 success"
exit 0
}
if [ -n "$IPMITEST" ]; then
for i in $(awk '!/^($|[[:space:]]*#)/ {print $1}' "$SCOUTFS_IPMI_HOSTS_FILE"); do
if ! $SSH_CMD "$i" /bin/true; then
echo_test_fail "ssh $i"
else
echo_test_pass "ssh $i"
fi
host=$(getIPMIhost "$i")
if [ -z "$host" ]; then
echo_test_fail "ipmi config $i $host"
else
if ! $IPMI_POWER $IPMI_OPTS -h "$host" --stat; then
echo_test_fail "ipmi $i"
else
echo_test_pass "ipmi $i"
fi
fi
done
exit 0
fi
if [ -z "$ip" ]; then
echo_fail "no IP given for fencing"
fi
host=$(getIPMIhost "$ip")
if [ -z "$host" ]; then
echo_fail "no IPMI host found for fence IP"
fi
# first check via ssh if the mount still exists
# if ssh succeeds, we will only power down the node if mounted
if ! output=$($SSH_CMD "$ip" "echo BEGIN; LC_ALL=C egrep -m 1 '(^0x*|^$rid$)' /sys/kernel/boot_params/version /sys/fs/scoutfs/f*r*/rid; echo END"); then
# ssh not working, just power down host
powerOffHost "$host"
fi
if [[ ! "$output" =~ BEGIN ]]; then
# ssh failure
echo_log "no BEGIN"
powerOffHost "$host"
fi
if [[ ! "$output" =~ \/boot_params\/ ]]; then
# ssh failure
echo_log "no boot params"
powerOffHost "$host"
fi
if [[ ! "$output" =~ END ]]; then
# ssh failure
echo_log "no END"
powerOffHost "$host"
fi
if [[ "$output" =~ "rid:$rid" ]]; then
# rid still mounted, power down
echo_log "rid $rid still mounted"
powerOffHost "$host"
fi
$LOGGER "ipmi fence host $ip/$host success (rid $rid not mounted)"
exit 0

View File

@@ -0,0 +1,36 @@
#!/usr/bin/bash
# /usr/libexec/scoutfs-fenced/run/local-force-umount
echo_fail() {
echo "$@" > /dev/stderr
exit 1
}
rid="$SCOUTFS_FENCED_REQ_RID"
#
# Look for a local mount with the rid to fence. Typically we'll at
# least find the mount with the server that requested the fence that
# we're processing. But it's possible that mounts are unmounted
# before, or while, we're running.
#
mnts=$(findmnt -l -n -t scoutfs -o TARGET) || \
echo_fail "findmnt -t scoutfs failed" > /dev/stderr
for mnt in $mnts; do
mnt_rid=$(scoutfs statfs -p "$mnt" -s rid) || \
echo_fail "scoutfs statfs $mnt failed"
if [ "$mnt_rid" == "$rid" ]; then
umount -f "$mnt" || \
echo_fail "umout -f $mnt"
exit 0
fi
done
#
# If the mount doesn't exist on this host then it can't access the
# devices by definition and can be considered fenced.
#
exit 0

View File

@@ -0,0 +1,139 @@
#!/usr/bin/bash
# /usr/libexec/scoutfs-fenced/run/powerman-remote-host
# powerman configuration
SCOUTFS_PM_CONFIG_FILE=${SCOUTFS_PM_CONFIG_FILE:-/etc/scoutfs/scoutfs-pm.conf}
SCOUTFS_PM_HOSTS_FILE=${SCOUTFS_PM_HOSTS_FILE:-/etc/scoutfs/scoutfs-pm-hosts.conf}
## hosts file format
## SCOUTFS_HOST_IP POWERMAN_NODE_NAME
## ex:
# 192.168.1.1 dm1
# command setup
PM_CMD="/usr/bin/pm"
SSH_CMD="ssh -o ConnectTimeout=3 -o BatchMode=yes -o StrictHostKeyChecking=no"
LOGGER="/bin/logger -p local3.crit -t scoutfs-fenced"
$LOGGER "ipmi fence script invoked: IP: $SCOUTFS_FENCED_REQ_IP RID: $SCOUTFS_FENCED_REQ_RID TEST: $IPMITEST"
echo_fail() {
echo "$@" >&2
$LOGGER "fence failed: $@"
exit 1
}
echo_log() {
echo "$@" >&2
$LOGGER "fence info: $@"
}
echo_test_pass() {
echo -e "\xE2\x9C\x94 $@"
}
echo_test_fail() {
echo -e "\xE2\x9D\x8C $@"
}
test -n "$SCOUTFS_PM_CONFIG_FILE" || \
echo_fail "SCOUTFS_PM_CONFIG_FILE isn't set"
test -r "$SCOUTFS_PM_CONFIG_FILE" || \
echo_fail "$SCOUTFS_PM_CONFIG_FILE isn't readable file"
. "$SCOUTFS_PM_CONFIG_FILE"
test -n "$SCOUTFS_PM_HOSTS_FILE" || \
echo_fail "SCOUTFS_PM_HOSTS_FILE isn't set"
test -r "$SCOUTFS_PM_HOSTS_FILE" || \
echo_fail "$SCOUTFS_PM_HOSTS_FILE isn't readable file"
test -x "$PM_CMD" || \
echo_fail "$PMCMD not found, need to install powerman?"
export ip="$SCOUTFS_FENCED_REQ_IP"
rid="$SCOUTFS_FENCED_REQ_RID"
getPMhost () {
host=$(awk -v ip="$1" '$1 == ip {print $2}' "$SCOUTFS_PM_HOSTS_FILE") || \
echo_fail "lookup pm host failed"
echo "$host"
}
powerOffHost() {
$PM_CMD $PM_OPTS "$1" -0 || \
echo_fail "pm power off $host failed"
pmoutput=$($PM_CMD $PM_OPTS "$1" -q | grep "$1") || \
echo_fail "powerman power stat $1 failed"
if [[ ! "$pmoutput" =~ off ]]; then
echo_fail "powerman stat $1 not off"
fi
$LOGGER "powerman fence power down $1 success"
exit 0
}
if [ -n "$PMTEST" ]; then
for i in $(awk '!/^($|[[:space:]]*#)/ {print $1}' "$SCOUTFS_PM_HOSTS_FILE"); do
if ! $SSH_CMD "$i" /bin/true; then
echo_test_fail "ssh $i"
else
echo_test_pass "ssh $i"
fi
host=$(getPMhost "$i")
if [ -z "$host" ]; then
echo_test_fail "pm config $i $host"
else
if ! $PM_CMD $PM_OPTS "$host" -q; then
echo_test_fail "pm $i"
else
echo_test_pass "pm $i"
fi
fi
done
exit 0
fi
if [ -z "$ip" ]; then
echo_fail "no IP given for fencing"
fi
host=$(getPMhost "$ip")
if [ -z "$host" ]; then
echo_fail "no host found for fence IP"
fi
# first check via ssh if the mount still exists
# if ssh succeeds, we will only power down the node if mounted
if ! output=$($SSH_CMD "$ip" "echo BEGIN; LC_ALL=C egrep -m 1 '(^0x*|^$rid$)' /sys/kernel/boot_params/version /sys/fs/scoutfs/f*r*/rid; echo END"); then
# ssh not working, just power down host
powerOffHost "$host"
fi
if [[ ! "$output" =~ BEGIN ]]; then
# ssh failure
echo_log "no BEGIN"
powerOffHost "$host"
fi
if [[ ! "$output" =~ \/boot_params\/ ]]; then
# ssh failure
echo_log "no boot params"
powerOffHost "$host"
fi
if [[ ! "$output" =~ END ]]; then
# ssh failure
echo_log "no END"
powerOffHost "$host"
fi
if [[ "$output" =~ "rid:$rid" ]]; then
# rid still mounted, power down
echo_log "rid $rid still mounted"
powerOffHost "$host"
fi
$LOGGER "powerman fence host $ip/$host success (rid $rid not mounted)"
exit 0

View File

@@ -0,0 +1,11 @@
# /etc/scoutfs/scoutfs-ipmi-hosts.conf
## config file format
##
## SCOUTFS_HOST_IP must match the interface used for scoutfs
## leader/follower communications
##
## SCOUTFS_HOST_IP IPMI_ADDRESS
## ex:
#192.168.1.1 192.168.10.1

View File

@@ -0,0 +1,10 @@
#!/usr/bin/bash
# /etc/scoutfs/scoutfs-ipmi.conf
IPMI_USER="admin"
IPMI_PASSWORD="password"
IPMI_OPTS="-D LAN_2_0 -u $IPMI_USER -p $IPMI_PASSWORD"
# some Intel BMCs need -I 17
# IPMI_OPTS="-D LAN_2_0 -u $IPMI_USER -p $IPMI_PASSWORD -I 17"

View File

@@ -0,0 +1,11 @@
# /etc/scoutfs/scoutfs-pm-hosts.conf
## config file format
##
## SCOUTFS_HOST_IP must match the interface used for scoutfs
## leader/follower communications
##
## SCOUTFS_HOST_IP POWERMAN_NODE_NAME
## ex:
#192.168.1.1 node1

View File

@@ -0,0 +1,8 @@
#!/usr/bin/bash
# /etc/scoutfs/scoutfs-pm.conf
PM_OPTS=""
# optionally specify remote powerman server
#PM_OPTS="-h pm-server.localdomain"

View File

@@ -15,61 +15,12 @@ general mount options described in the
.BR mount (8)
manual page.
.TP
.B acl
The acl mount option enables support for POSIX Access Control Lists
as detailed in
.BR acl (5) .
Support for POSIX ACLs is the default.
.TP
.B data_prealloc_blocks=<blocks>
Set the size of preallocation regions of data files, in 4KiB blocks.
Writes to these regions that contain no extents will attempt to
preallocate the size of the full region. This can waste a lot of space
with small files, files with sparse regions, and files whose final
length isn't a multiple of the preallocation size. The following
data_prealloc_contig_only option, which is the default, restricts this
behaviour to waste less space.
.sp
All the preallocation options can be changed in an active mount by
writing to their respective files in the options directory in the
mount's sysfs directory.
.sp
It is worth noting that it is always more efficient in every way to use
.BR fallocate (2)
to precisely allocate large extents for the resulting size of the file.
Always attempt to enable it in software that supports it.
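For example, a minimal sketch of tuning preallocation on a live mount,
assuming the sysfs option files are named after the mount options and
using placeholder fsid/rid values:
    # grow preallocation regions to 8192 x 4KiB = 32MiB on a live mount
    echo 8192 > /sys/fs/scoutfs/f.<fsid>.r.<rid>/options/data_prealloc_blocks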
.TP
.B data_prealloc_contig_only=<0|1>
This option, currently the default, limits file data preallocation in
two ways. First, it will only preallocate when extending a fully
allocated file. Second, it will limit the size of preallocation to the
existing length of the file. These limits reduce the amount of
preallocation wasted per file at the cost of multiple initial extents in
all files. It only supports simple streaming writes, any other write
pattern will not be recognized and could result in many fragmented
extent allocations.
.sp
This option can be disabled to encourage large allocated extents
regardless of write patterns. This can be helpful if files are written
with initial sparse regions (perhaps by multiple threads writing to
different regions) and wasted space isn't an issue (perhaps because the
file population contains few small files).
.TP
.B metadev_path=<device>
The metadev_path option specifies the path to the block device that
contains the filesystem's metadata.
.sp
This option is required.
.TP
.B noacl
The noacl mount option disables the default support for POSIX Access
Control Lists. Any existing system.posix_acl_default and
system.posix_acl_access extended attributes remain in inodes. They
will appear in listings from
.BR listxattr (2)
but specific retrieval or removal operations will fail. They will be
used for enforcement again if ACL support is later enabled.
.TP
.B orphan_scan_delay_ms=<number>
This option sets the average expected delay, in milliseconds, between
each mount's scan of the global orphaned inode list. Jitter is added to
@@ -85,25 +36,6 @@ the options directory in the mount's sysfs directory. Writing a new
value will cause the next pending orphan scan to be rescheduled
with the newly written delay time.
.TP
.B quorum_heartbeat_timeout_ms=<number>
This option sets the amount of time, in milliseconds, that a quorum
member will wait without receiving heartbeat messages from the current
leader before trying to take over as leader. This setting is per-mount
and only changes the behavior of that mount.
.sp
This determines how long it may take before a failed leader is replaced
by a waiting quorum member. Setting it too low may lead to spurious
fencing as active leaders are prematurely replaced due to task or
network delays that prevent the quorum members from promptly sending and
receiving messages. The ideal setting is the longest acceptable
downtime during server failover. The default is 10000 (10s) and it can
not be less than 2000 or greater than 60000.
.sp
This option can be changed in an active mount by writing to its file in
the options directory in the mount's sysfs directory. Writing a new
value will take effect the next time the quorum agent receives a
heartbeat message and sets the next timeout.
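For example, an illustrative sketch of raising the timeout on one quorum
member; the device paths and fsid/rid values are placeholders:
    # at mount time
    mount -t scoutfs -o metadev_path=/dev/meta_dev,quorum_slot_nr=0,quorum_heartbeat_timeout_ms=15000 \
        /dev/data_dev /mnt/scoutfs
    # or on an active mount, taking effect at the next received heartbeat
    echo 15000 > /sys/fs/scoutfs/f.<fsid>.r.<rid>/options/quorum_heartbeat_timeout_ms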
.TP
.B quorum_slot_nr=<number>
The quorum_slot_nr option assigns a quorum member slot to the mount.
The mount will use the slot assignment to claim exclusive ownership of

View File

@@ -76,97 +76,6 @@ run when the file system will not be mounted.
.RE
.PD
.TP
.BI "counters [-t|--table] SYSFS-DIR"
.sp
Display the counters and their values for a mounted ScoutFS filesystem.
.RS 1.0i
.PD 0
.sp
.TP
.B SYSFS-DIR
The mount's sysfs directory in which to find the
.B counters/
directory which contains files for each counter.
The sysfs directory is
of the form
.I /sys/fs/scoutfs/f.<fsid>.r.<rid>/
\&.
.TP
.B "-t, --table"
Format the counters into a columnar table that fills the width of the display
instead of printing one counter per line.
.RE
.PD
.TP
.BI "data-waiting {-I|--inode} INODE-NUM {-B|--block} BLOCK-NUM [-p|--path PATH]"
.sp
Display all the files and blocks for which there is a task blocked waiting on
offline data.
.sp
The results are sorted by the file's inode number and the
logical block offset that is being waited on.
.sp
Each line of output describes a block in a file that has a task waiting
and is formatted as:
.I "ino <nr> iblock <nr> ops [str]"
\&. The ops string indicates blocked operations seperated by commas and can
include
.B read
for a read operation,
.B write
for a write operation, and
.B change_size
for a truncate or extending write.
.RS 1.0i
.PD 0
.sp
.TP
.B "-I, --inode INODE-NUM"
Start iterating over waiting tasks from the given inode number.
Value of 0 will show all waiting tasks.
.TP
.B "-B, --block BLOCK-NUM"
Start iterating over waiting tasks from the given logical block number
in the starting inode. Value of 0 will show blocks in the first inode
and then continue to show all blocks with tasks waiting in all the
remaining inodes.
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "data-wait-err {-I|--inode} INODE-NUM {-V|--version} VER-NUM {-F|--offset} OFF-NUM {-C|--count} COUNT {-O|--op} OP {-E|--err} ERR [-p|--path PATH]"
.sp
Return error from matching waiters.
.RS 1.0i
.PD 0
.sp
.TP
.B "-C, --count COUNT"
Count.
.TP
.B "-E, --err ERR"
Error.
.TP
.B "-F, --offset OFF-NUM"
Offset. May be expressed in bytes, or with KMGTP (Kibi, Mebi, etc.) size
suffixes.
.TP
.B "-I, --inode INODE-NUM"
Inode number.
.TP
.B "-O, --op OP"
Operation. One of: "read", "write", "change_size".
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "df [-h|--human-readable] [-p|--path PATH]"
.sp
@@ -184,95 +93,6 @@ A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "get-allocated-inos [-i|--ino INO] [-s|--single] [-p|--path PATH]"
.sp
This debugging command prints allocated inode numbers. It only prints
inodes
found in the group that contains the starting inode. The printed inode
numbers aren't necessarily reachable. They could be anywhere in the
process from being unlinked to finally deleted when their items
were found.
.RS 1.0i
.PD 0
.TP
.sp
.B "-i, --ino INO"
The first 64bit inode number which could be printed.
.TP
.B "-s, --single"
Only print the single starting inode when it is allocated, all other allocated
inode numbers will be ignored.
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "get-referring-entries [-p|--path PATH] INO"
.sp
Find directory entries that reference an inode number.
.sp
Display all the directory entries that refer to a given inode. Each
entry includes the inode number of the directory that contains it, the
d_off and d_type values for the entry as described by
.BR readdir (3)
, and the name of the entry.
.RS 1.0i
.PD 0
.TP
.sp
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.TP
.B "INO"
The inode number of the target inode.
.RE
.PD
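For example, a brief sketch with a hypothetical mount point and file:
    ino=$(stat -c '%i' /mnt/scoutfs/dir/file)
    scoutfs get-referring-entries -p /mnt/scoutfs "$ino"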
.TP
.BI "ino-path INODE-NUM [-p|--path PATH]"
.sp
Display all paths that reference an inode number.
.sp
Ongoing filesystem changes, such as renaming a common parent of multiple paths,
can cause displayed paths to be inconsistent.
.RS 1.0i
.PD 0
.sp
.TP
.B "INODE-NUM"
The inode number of the target inode.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "list-hidden-xattrs FILE"
.sp
Display extended attributes starting with the
.BR scoutfs.
prefix and containing the
.BR hide.
tag
which makes them invisible to
.BR listxattr (2) .
The names of each attribute are output, one per line. Their order
is not specified.
.RS 1.0i
.PD 0
.TP
.sp
.B "FILE"
The path to a file within a ScoutFS filesystem. File permissions must allow
reading.
.RE
.PD
.TP
.BI "mkfs META-DEVICE DATA-DEVICE {-Q|--quorum-slot} NR,ADDR,PORT [-m|--max-meta-size SIZE] [-d|--max-data-size SIZE] [-z|--data-alloc-zone-blocks BLOCKS] [-f|--force] [-A|--allow-small-size] [-V|--format-version VERS]"
.sp
@@ -351,79 +171,6 @@ The range of supported versions is visible in the output of
.RE
.PD
.TP
.BI "prepare-empty-data-device {-c|--check} META-DEVICE DATA-DEVICE"
.sp
Prepare an unused device for use as the data device for an existing file
system. This will write an initialized super block to the specified
data device, destroying any existing contents. The specified metadata
device will not be modified. The file system must be fully unmounted
and any client mount recovery must be complete.
.sp
The existing metadata device is read to ensure that it's safe to stop
using the old data device. The data block allocators must indicate that
all data blocks are free. If there are still data blocks referenced by
files then the command will fail. The contents of these files must be
freed for the command to proceed.
.sp
A new super block is written to the new data device. The device can
then be used as the data device to mount the file system. As this
switch is made all client mounts must refer to the new device. The old
device is not modified and still contains a valid data super block that
could be mounted, creating data device writes that wouldn't be read by
mounts using the new device.
.sp
The number of data blocks available to the file system will not change
as the new data device is used. The new device must be large enough to
store all the data blocks that were available on the old device. If the
new device is larger then its added capacity can be used by growing the
new data device with the resize-devices command once it is mounted.
.RS 1.0i
.PD 0
.TP
.sp
.B "-c, --check"
Only check for errors that would prevent a new empty data device from
being used. No changes will be made to the data device. If the data
device is provided then its size will be checked to make sure that it is
large enough. This can be used to test the metadata for data references
before destroying an old empty data device.
.RE
.PD
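An illustrative sequence for swapping in a new, empty data device; the
device paths are placeholders and the filesystem is assumed to be fully
unmounted:
    # dry run: confirm the metadata shows all data blocks free
    scoutfs prepare-empty-data-device -c /dev/scoutfs_meta /dev/new_data
    # write the new data super block, then mount against the new device
    scoutfs prepare-empty-data-device /dev/scoutfs_meta /dev/new_data
    mount -t scoutfs -o metadev_path=/dev/scoutfs_meta,quorum_slot_nr=0 /dev/new_data /mnt/scoutfs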
.TP
.BI "print {-S|--skip-likely-huge} META-DEVICE"
.sp
Prints out all of the metadata in the file system. This makes no effort
to ensure that the structures are consistent as they're traversed, so it
can present structures that appear corrupt because they change while
being printed.
.RS 1.0i
.PD 0
.TP
.sp
.B "-S, --skip-likely-huge"
Skip printing structures that are likely to be very large. The
structures that are skipped tend to be global structures whose size
scales with the size of the volume. Examples of skipped structures
include the global fs items, srch files, and metadata and data
allocators. Similar structures that are not skipped are related to the
number of mounts and are maintained at a relatively reasonable size.
These include per-mount log trees, srch files, allocators, and the
metadata allocators used by server commits.
.sp
Skipping the larger structures limits the print output to a relatively
constant size rather than a large multiple of the used metadata space of
the volume, making the output much more useful for inspection.
.TP
.B "META-DEVICE"
The path to the metadata device for the filesystem whose metadata will be
printed. An attempt will be made to flush the host's buffer cache for
this device with the BLKFLSBUF ioctl, or with posix_fadvise() if
the path refers to a regular file.
.RE
.PD
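For instance, a sketch of capturing a bounded metadata dump for
inspection; the device path and output file are hypothetical:
    scoutfs print -S /dev/scoutfs_meta > /tmp/scoutfs-meta.txt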
.TP
.BI "resize-devices [-p|--path PATH] [-m|--meta-size SIZE] [-d|--data-size SIZE]"
.sp
@@ -482,92 +229,6 @@ kibibytes, mebibytes, etc.
.RE
.PD
.TP
.BI "search-xattrs XATTR-NAME [-p|--path PATH]"
.sp
Display the inode numbers of inodes in the filesystem which may have
an extended attribute with the given name.
.sp
The results may contain false positives. The returned inode numbers
should be checked to verify that the extended attribute is in fact
present on the inode.
.RS 1.0i
.PD 0
.TP
.sp
.B XATTR-NAME
The full name of the extended attribute to search for as
described in the
.BR xattr (7)
manual page.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "setattr FILE [-d, --data-version=VERSION [-s, --size=SIZE [-o, --offline]]] [-t, --ctime=TIMESPEC]"
.sp
Set ScoutFS-specific attributes on a newly created zero-length file.
.RS 1.0i
.PD 0
.sp
.TP
.B "-V, --data-version=VERSION"
Set data version.
.TP
.B "-o, --offline"
Set file contents as offline, not sparse. Requires
.I --size
option also be present.
.TP
.B "-s, --size=SIZE"
Set file size. May be expressed in bytes, or with
KMGTP (Kibi, Mebi, etc.) size suffixes. Requires
.I --data-version
option also be present.
.TP
.B "-t, --ctime=TIMESPEC"
Set creation time using
.I "<seconds-since-epoch>.<nanoseconds>"
format.
.RE
.PD
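A hedged example of marking a freshly created file as offline archive
content; the path, size, and ctime values are illustrative:
    touch /mnt/scoutfs/restored_file
    scoutfs setattr -V 1 -o -s 1G -t 1650000000.0 /mnt/scoutfs/restored_file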
.TP
.BI "stage ARCHIVE-FILE FILE {-V|--version} VERSION [-o, --offset OFF-NUM] [-l, --length LENGTH]"
.sp
.B Stage
(i.e. return to online) the previously-offline contents of a file by copying a
region from another file, the archive, and without updating regular inode
metadata. Any operations that are blocked by the existence of an offline
region will proceed once the region has been staged.
.RS 1.0i
.PD 0
.TP
.sp
.B "ARCHIVE-FILE"
The source file for the file contents being staged.
.TP
.B "FILE"
The regular file whose contents will be staged.
.TP
.B "-V, --version VERSION"
The data_version of the contents to be staged. It must match the
current data_version of the file.
.TP
.B "-o, --offset OFF-NUM"
The starting byte offset of the region to write. May be expressed in bytes, or with
KMGTP (Kibi, Mebi, etc.) size suffixes. Default is 0.
.TP
.B "-l, --length LENGTH"
Length of range (bytes or KMGTP units) of file to stage. Default is the file's
total size.
.RE
.PD
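For example, an illustrative staging of the first 1MiB of an offline
file from an archive copy; the paths and version are placeholders, and
the version must match the file's current data_version:
    scoutfs stage /archive/file.copy /mnt/scoutfs/file -V 1 -o 0 -l 1M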
.TP
.BI "stat FILE [-s|--single-field FIELD-NAME]"
.sp
Display ScoutFS-specific metadata fields for the given file.
@@ -653,6 +314,221 @@ The total number of 4K data blocks in the filesystem.
.RE
.PD
.TP
.BI "counters [-t|--table] SYSFS-DIR"
.sp
Display the counters and their values for a mounted ScoutFS filesystem.
.RS 1.0i
.PD 0
.sp
.TP
.B SYSFS-DIR
The mount's sysfs directory in which to find the
.B counters/
directory which contains files for each counter.
The sysfs directory is
of the form
.I /sys/fs/scoutfs/f.<fsid>.r.<rid>/
\&.
.TP
.B "-t, --table"
Format the counters into a columnar table that fills the width of the display
instead of printing one counter per line.
.RE
.PD
.TP
.BI "search-xattrs XATTR-NAME [-p|--path PATH]"
.sp
Display the inode numbers of inodes in the filesystem which may have
an extended attribute with the given name.
.sp
The results may contain false positives. The returned inode numbers
should be checked to verify that the extended attribute is in fact
present on the inode.
.RS 1.0i
.PD 0
.TP
.sp
.B XATTR-NAME
The full name of the extended attribute to search for as
described in the
.BR xattr (7)
manual page.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
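For example, a hedged sketch with a placeholder attribute name and mount
point; each returned inode should still be verified (e.g. with getfattr)
since the results can include false positives:
    scoutfs search-xattrs user.backup.tag -p /mnt/scoutfs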
.TP
.BI "list-hidden-xattrs FILE"
.sp
Display extended attributes starting with the
.BR scoutfs.
prefix and containing the
.BR hide.
tag
which makes them invisible to
.BR listxattr (2) .
The names of each attribute are output, one per line. Their order
is not specified.
.RS 1.0i
.PD 0
.TP
.sp
.B "FILE"
The path to a file within a ScoutFS filesystem. File permissions must allow
reading.
.RE
.PD
.TP
.BI "walk-inodes {meta_seq|data_seq} FIRST-INODE LAST-INODE [-p|--path PATH]"
.sp
Walk an inode index in the file system and output the inode numbers
that are found between the first and last positions in the index.
.RS 1.0i
.PD 0
.sp
.TP
.BR meta_seq , data_seq
Which index to walk.
.TP
.B "FIRST-INODE"
An integer index value giving starting position of the index walk.
.I 0
is the first possible position.
.TP
.B "LAST-INODE"
An integer index value giving the last position to include in the index walk.
.I \-1
can be given to indicate the last possible position.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
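For instance, a sketch mirroring the test-suite usage, walking the
entire meta_seq index of a hypothetical mount:
    scoutfs walk-inodes -p /mnt/scoutfs meta_seq -- 0 -1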
.TP
.BI "ino-path INODE-NUM [-p|--path PATH]"
.sp
Display all paths that reference an inode number.
.sp
Ongoing filesystem changes, such as renaming a common parent of multiple paths,
can cause displayed paths to be inconsistent.
.RS 1.0i
.PD 0
.sp
.TP
.B "INODE-NUM"
The inode number of the target inode.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "data-waiting {-I|--inode} INODE-NUM {-B|--block} BLOCK-NUM [-p|--path PATH]"
.sp
Display all the files and blocks for which there is a task blocked waiting on
offline data.
.sp
The results are sorted by the file's inode number and the
logical block offset that is being waited on.
.sp
Each line of output describes a block in a file that has a task waiting
and is formatted as:
.I "ino <nr> iblock <nr> ops [str]"
\&. The ops string indicates blocked operations seperated by commas and can
include
.B read
for a read operation,
.B write
for a write operation, and
.B change_size
for a truncate or extending write.
.RS 1.0i
.PD 0
.sp
.TP
.B "-I, --inode INODE-NUM"
Start iterating over waiting tasks from the given inode number.
Value of 0 will show all waiting tasks.
.TP
.B "-B, --block BLOCK-NUM"
Start iterating over waiting tasks from the given logical block number
in the starting inode. Value of 0 will show blocks in the first inode
and then continue to show all blocks with tasks waiting in all the
remaining inodes.
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "data-wait-err {-I|--inode} INODE-NUM {-V|--version} VER-NUM {-F|--offset} OFF-NUM {-C|--count} COUNT {-O|--op} OP {-E|--err} ERR [-p|--path PATH]"
.sp
Return error from matching waiters.
.RS 1.0i
.PD 0
.sp
.TP
.B "-C, --count COUNT"
Count.
.TP
.B "-E, --err ERR"
Error.
.TP
.B "-F, --offset OFF-NUM"
Offset. May be expressed in bytes, or with KMGTP (Kibi, Mebi, etc.) size
suffixes.
.TP
.B "-I, --inode INODE-NUM"
Inode number.
.TP
.B "-O, --op OP"
Operation. One of: "read", "write", "change_size".
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD
.TP
.BI "stage ARCHIVE-FILE FILE {-V|--version} VERSION [-o, --offset OFF-NUM] [-l, --length LENGTH]"
.sp
.B Stage
(i.e. return to online) the previously-offline contents of a file by copying a
region from another file, the archive, and without updating regular inode
metadata. Any operations that are blocked by the existence of an offline
region will proceed once the region has been staged.
.RS 1.0i
.PD 0
.TP
.sp
.B "ARCHIVE-FILE"
The source file for the file contents being staged.
.TP
.B "FILE"
The regular file whose contents will be staged.
.TP
.B "-V, --version VERSION"
The data_version of the contents to be staged. It must match the
current data_version of the file.
.TP
.B "-o, --offset OFF-NUM"
The starting byte offset of the region to write. May be expressed in bytes, or with
KMGTP (Kibi, Mebi, etc.) size suffixes. Default is 0.
.TP
.B "-l, --length LENGTH"
Length of range (bytes or KMGTP units) of file to stage. Default is the file's
total size.
.RE
.PD
.TP
.BI "release FILE {-V|--version} VERSION [-o, --offset OFF-NUM] [-l, --length LENGTH]"
.sp
@@ -692,28 +568,76 @@ total size.
.PD
.TP
.BI "walk-inodes {meta_seq|data_seq} FIRST-INODE LAST-INODE [-p|--path PATH]"
.BI "setattr FILE [-d, --data-version=VERSION [-s, --size=SIZE [-o, --offline]]] [-t, --ctime=TIMESPEC]"
.sp
Walk an inode index in the file system and output the inode numbers
that are found between the first and last positions in the index.
Set ScoutFS-specific attributes on a newly created zero-length file.
.RS 1.0i
.PD 0
.sp
.TP
.BR meta_seq , data_seq
Which index to walk.
.B "-V, --data-version=VERSION"
Set data version.
.TP
.B "FIRST-INODE"
An integer index value giving starting position of the index walk.
.I 0
is the first possible position.
.B "-o, --offline"
Set file contents as offline, not sparse. Requires
.I --size
option also be present.
.TP
.B "LAST-INODE"
An integer index value giving the last position to include in the index walk.
.I \-1
can be given to indicate the last possible position.
.B "-s, --size=SIZE"
Set file size. May be expressed in bytes, or with
KMGTP (Kibi, Mebi, etc.) size suffixes. Requires
.I --data-version
option also be present.
.TP
.B "-p|--path PATH"
.B "-t, --ctime=TIMESPEC"
Set creation time using
.I "<seconds-since-epoch>.<nanoseconds>"
format.
.RE
.PD
.TP
.BI "print META-DEVICE"
.sp
Prints out all of the metadata in the file system. This makes no effort
to ensure that the structures are consistent as they're traversed, so it
can present structures that appear corrupt because they change while
being printed.
.RS 1.0i
.PD 0
.TP
.sp
.B "META-DEVICE"
The path to the metadata device for the filesystem whose metadata will be
printed. Since this command reads through the host's buffer cache, it may
not reflect blocks recently written to the shared block device by another
host unless the
.B blockdev \-\-flushbufs
command is run first.
.RE
.PD
.TP
.BI "get-allocated-inos [-i|--ino INO] [-s|--single] [-p|--path PATH]"
.sp
This debugging command prints allocated inode numbers. It only prints
inodes
found in the group that contains the starting inode. The printed inode
numbers aren't necessarily reachable. They could be anywhere in the
process from being unlinked to finally deleted when their items
were found.
.RS 1.0i
.PD 0
.TP
.sp
.B "-i, --ino INO"
The first 64bit inode number which could be printed.
.TP
.B "-s, --single"
Only print the single starting inode when it is allocated, all other allocated
inode numbers will be ignored.
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

View File

@@ -55,14 +55,21 @@ install -m 755 -D src/scoutfs $RPM_BUILD_ROOT%{_sbindir}/scoutfs
install -m 644 -D src/ioctl.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/ioctl.h
install -m 644 -D src/format.h $RPM_BUILD_ROOT%{_includedir}/scoutfs/format.h
install -m 755 -D fenced/scoutfs-fenced $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/scoutfs-fenced
install -m 755 -D fenced/local-force-unmount $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/run/local-force-unmount
install -m 755 -D fenced/ipmi-remote-host $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/run/ipmi-remote-host
install -m 755 -D fenced/powerman-remote-host $RPM_BUILD_ROOT%{_libexecdir}/scoutfs-fenced/run/powerman-remote-host
install -m 644 -D fenced/scoutfs-fenced.service $RPM_BUILD_ROOT%{_unitdir}/scoutfs-fenced.service
install -m 644 -D fenced/scoutfs-fenced.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-fenced.conf.example
install -m 644 -D fenced/scoutfs-ipmi.conf $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-ipmi.conf
install -m 644 -D fenced/scoutfs-ipmi-hosts.conf $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-ipmi-hosts.conf
install -m 644 -D fenced/scoutfs-pm.conf $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-pm.conf
install -m 644 -D fenced/scoutfs-pm-hosts.conf $RPM_BUILD_ROOT%{_sysconfdir}/scoutfs/scoutfs-pm-hosts.conf
%files
%defattr(644,root,root,755)
%{_mandir}/man*/scoutfs*.gz
/%{_unitdir}/scoutfs-fenced.service
%{_sysconfdir}/scoutfs
%{_unitdir}/scoutfs-fenced.service
%config(noreplace) %{_sysconfdir}/scoutfs
%defattr(755,root,root,755)
%{_sbindir}/scoutfs
%{_libexecdir}/scoutfs-fenced

Some files were not shown because too many files have changed in this diff Show More