Mirror of https://github.com/versity/scoutfs.git, synced 2026-01-09 13:23:14 +00:00
Compare commits
216 Commits
The 216 commits in this comparison, by SHA1 (newest first):

4b2afa61b8, 222ba2cede, c7e97eeb1f, 21c070b42d, 77fbf92968, d5c699c3b4,
b56b8e502c, 5ff372561d, bdecee5e5d, a9281b75fa, 707e1b2d59, 006f429f72,
d71583bcf5, bb835b948d, bcdc4f5423, 7ceb215c91, d4d2b0850b, cf05aefe50,
9f06065ce7, d2c2fece2a, 0e1e55d25b, 293cee9554, a7704e0b56, 819df4be60,
592e3d471f, 29160b0bc6, 11c041d2ea, 46e8dfe884, a9beeaf5da, 205d8ebd4a,
e580f33f82, d480243c11, bafecbc604, 65be4682e3, e88845d185, ec50e66fff,
0e91f9a277, 69068ae2c0, 016dac39bf, e69cf3dec8, d6c143a639, 09ae100254,
50f5077863, cca4fcb788, 1d150da3f0, 28f03d3558, 4275f6e6e5, 70a5b6ffe2,
b89ecd47b4, 4293816764, f0de59a9a3, 1f0a08eacb, dac3f056a5, af868aad9b,
cf4df0ef9f, 81aa58253e, c683ded0e6, f27431b3ae, 28c3cee995, 430960ef3c,
7006a84d96, eafb8621da, 006555d42a, 8e458f9230, 32c0dbce09, 9c9ba651bd,
14eddb6420, 597208324d, 8596c9ad45, 8a705ea380, 4784ccdfd5, 778c2769df,
9e3529060e, 1672b3ecec, 55f9435fad, 072f6868d3, 8a64b46a2f, 14901c39aa,
e095127ae9, a9da27444f, 49fe89741d, 847916860d, 564b942ead, 3d99fda0f6,
6c0ab75477, 89b238a5c4, 05371b83f0, acafb869e7, 74c5fe1115, 2279e9657f,
707752a7bf, 0316c22026, 5a1e5639c2, 950963375b, e52435b993, 2b72c57cb0,
9c67b2a42d, 0b38aeb5a4, 2daf873983, 904c5dce90, 57c6d78df8, 74e9d0f764,
98eb0eb649, 15de0c21c1, 7b65767803, 46640e4ff9, 912906f050, ec02cf442b,
0e9cd1eea5, e18ea24561, 723309ff75, 9bfad7d324, 448e0abacb, 2a6d827e7a,
e7bd1b45dc, 6ded240089, 99a20bc383, 18903ce500, b76e22ffcf, d6863d6832,
bb01a3990f, 409631ceb1, f1264c7e47, a61b8d9961, eac57a1f7a, 5512d5c03e,
8cf7be4651, 3363b4fb79, ddb5cce2a5, 1b0e9c45f4, 2e2ccb6f61, 01c8bba56d,
17cb1fe84b, 78ae87031b, bf93ea73c4, a23e7478a0, 9ba2ee5c88, fe33a492c2,
77c0ff89fb, 7c2d83e2f8, 40aa47c888, c1bd7bcce5, 7720222588, fff07ce19c,
464de56d28, 342c206550, fe4734d019, b1a43bb312, 929703213f, 78279ffb4a,
0b919e2ba7, bb5267f0c9, 6d4916954b, 8e067b3d3f, 87500e8bb5, 41174867ed,
276fbebdac, 03df993e14, 701f1a9538, 71ed4512dc, 57dff347a6, fb7cb057c4,
1b924c501e, aed4313995, 61d86f7718, 717b56698a, c92a7ff705, d05489c670,
4806e8a7b3, b74f3f577d, d5ddf1ecac, e27ea22fe4, 51fe5a4ceb, 3847c4fe63,
ef2daf8857, 064409eb62, ddc5d9f04d, 433a80c6fc, 78405bb5fd, 98e514e5f4,
29538a9f45, 1826048ca3, 798fbb793e, d7b16419ef, f13aba78b1, 3220c2055c,
1cbc927ccb, acb94dd9b7, 233fbb39f3, 198d3cda32, e8c64b4217, 89b64ae1f7,
fc8a5a1b5c, d4c793e010, 8a3058818c, ba9a106f72, 310725eb72, 51a8236316,
f3dd00895b, 49df98f5a8, 15cf3c4134, 1abe97351d, f757e29915, 31e474c5fa,
dcf8202d7c, ae55fa3153, 7f9f21317c, 0d4bf83da3, 0a6b1fb304, fb7e43dd23,
45d90a5ae4, 48f1305a8a, cd4d6502b8, dff366e1a4, ca526e2bc0, e423d42106
ReleaseNotes.md (238 lines changed)
@@ -1,6 +1,244 @@
Versity ScoutFS Release Notes
=============================

---
v1.18
\
*Nov 7, 2023*

Fixed a bug where background srch file compaction could stop making
forward progress if a partial compaction operation was committed at a
specific byte offset in a block. This would cause srch file searches to
be progressively more expensive over time. Once this fix is running,
background compaction will resume, bringing the cost of searches back
down.

---
v1.17
\
*Oct 23, 2023*

Add support for EL8 generation kernels.

---
v1.16
\
*Oct 4, 2023*

Fix an issue where the server could hang on startup if its persistent
allocator structures were left in a specific degraded state by the
previously active server.

---
v1.15
\
*Jul 17, 2023*

Process log btree merge splicing in multiple commits. This prevents a
rare case where pending log merge completions contain more work than can
be done in a single server commit, causing the server to trigger an
assert shortly after starting.

Fix spurious EINVAL from data writes when data\_prealloc\_contig\_only was
set to 0.

---
v1.14
\
*Jun 29, 2023*

Add get\_referring\_entries ioctl for getting directory entries that
refer to an inode.

Fix excessive CPU use in the move\_blocks interface when moving a large
number of extents.

Reduce fragmented data allocation when contig\_only prealloc is not in
use by more consistently allocating multi-block extents within each
aligned prealloc region.

Avoid rare deadlock in metadata block cache reclaim under both heavy
load and memory pressure.

Fix crash when using quorum\_heartbeat\_timeout\_ms mount option.

---
v1.13
\
*May 19, 2023*

Add the quorum\_heartbeat\_timeout\_ms mount option to set the quorum
heartbeat timeout.

Change some task prioritization and allocation behavior of the quorum
agent to help reduce delays in sending and receiving heartbeat messages.

---
v1.12
\
*Apr 17, 2023*

Add the prepare-empty-data-device scoutfs command. A data device can be
unused when no files have data blocks, perhaps because they're archived
and offline. In this case the data device can be swapped out for
another device without changes to the metadata device.

Fix an oversight which limited inode timestamps to second granularity
for some operations. All operations now record timestamps with full
nanosecond precision.

Fix spurious ENOENT failures when renaming from other directories into
the root directory.

---
v1.11
\
*Feb 2, 2023*

Fixed a free extent processing error that could prevent mount from
proceeding when free data extents were sufficiently fragmented. It now
properly handles very fragmented free extent maps.

Fixed a statfs server processing race that could return spurious errors
and shut down the server. With the race closed, statfs processing is
reliable.

Fixed a rare livelock in the move\_blocks ioctl. With the right
relationship between ioctl arguments and eventual file extent items, the
core loop in the move\_blocks ioctl could get stuck looping on an extent
item and never return. The loop exit conditions were fixed and the loop
will always advance through all extents.

Changed the 'print' scoutfs commands to flush the block cache for the
devices. It was inconvenient to expect cache flushing to be a separate
step to ensure consistency with remote node writes.

---
v1.10
\
*Dec 7, 2022*

Fixed a potential directory entry cache management deadlock that could
occur when many nodes performed heavy metadata write loads across shared
directories and their child subdirectories. The deadlock could halt
invalidation progress on a node, which could then block use of locks
that needed invalidation on that node, leaving almost all tasks hanging
on locks that would never make progress.

Fixed a circumstance where metadata change sequence index item
modification could leave behind old stale metadata sequence items. The
duplication case required concurrent metadata updates across mounts with
particular open transaction patterns, so the duplicate items are rare.
They resulted in a small amount of additional load when walking change
indexes but had no effect on correctness.

Fixed a rare case, found in testing, where sparse file extension might
not write partial blocks of zeros. This required using truncate to
extend files past file sizes that end in partial blocks, along with the
right transaction commit and memory reclaim patterns. This never
affected regular non-sparse files nor files prepopulated with
fallocate.

---
v1.9
\
*Oct 29, 2022*

Fix VFS cached directory entry consistency verification that could cause
spurious "no such file or directory" (ENOENT) errors from rename over
NFS under certain conditions. The problem was only ever with the
consistency of in-memory cached dentry objects; persistent data was
correct, and eventual eviction of the bad cached objects would stop
generating the errors.

---
v1.8
\
*Oct 18, 2022*

Add support for Linux POSIX Access Control Lists, as described in
acl(5). Mount options are added to enable ("acl") and disable ("noacl")
support. The default is to support ACLs. ACLs are stored in the
existing extended attribute scheme, so adding support does not require
a format change.

Add options to control data extent preallocation. The default behavior
does not change. The options can relax the limits on preallocation,
which will then trigger under more write patterns and increase the risk
of preallocated space which is never used. The options are described in
scoutfs(5).

---
v1.7
\
*Aug 26, 2022*

* **Fixed possible persistent errors moving freed data extents**
  \
  Fixed a case where the server could hit persistent errors trying to
  move a client's freed extents in one commit. The client had to free
  a large number of extents that occupied distant positions in the
  global free extent btree. Very large fragmented files could cause
  this. The server now moves the freed extents in multiple commits and
  can always ensure forward progress.

* **Fixed possible persistent errors from freed duplicate extents**
  \
  Background orphan deletion wasn't properly synchronizing with
  foreground tasks deleting very large files. If a deletion took long
  enough then background deletion could also attempt to delete inode
  items while the deletion was making progress. This could create
  duplicate deletions of data extent items, which causes the server to
  abort when it later discovers the duplicate extents as it merges free
  lists.

---
v1.6
\
*Jul 7, 2022*

* **Fix memory leaks in rare corner cases**
  \
  Analysis tools found a few corner cases that leaked small structures,
  generally around error handling or startup and shutdown.

* **Add --skip-likely-huge scoutfs print command option**
  \
  Add an option to scoutfs print to reduce the size of the output
  so that it can be used to see system-wide metadata without being
  overwhelmed by file-level details.

---
v1.5
\
*Jun 21, 2022*

* **Fix persistent error during server startup**
  \
  Fixed a case where the server would always hit a persistent error on
  startup, preventing the system from mounting. This required a rare
  but valid state across the clients.

* **Fix a client hang that would lead to fencing**
  \
  The client module's use of in-kernel networking was missing annotation
  that could lead to communication hanging. The server would fence the
  client when it stopped communicating. This could be identified by the
  server fencing a client after it disconnected with no attempt by the
  client to reconnect.

---
v1.4
\
*May 6, 2022*

* **Fix possible client crash during server failover**
  \
  Fixed a narrow window during server failover and lock recovery that
  could cause a client mount to believe that it had an inconsistent item
  cache and panic. This required very specific lock state and messaging
  patterns between multiple mounts and multiple servers which made it
  unlikely to occur in the field.

---
v1.3
\

@@ -31,12 +31,12 @@ TARFILE = scoutfs-kmod-$(RPM_VERSION).tar

all: module

module:
	make $(SCOUTFS_ARGS)
	$(SP) make C=2 CF="-D__CHECK_ENDIAN__" $(SCOUTFS_ARGS)
	$(MAKE) $(SCOUTFS_ARGS)
	$(SP) $(MAKE) C=2 CF="-D__CHECK_ENDIAN__" $(SCOUTFS_ARGS)

modules_install:
	make $(SCOUTFS_ARGS) modules_install
	$(MAKE) $(SCOUTFS_ARGS) modules_install

%.spec: %.spec.in .FORCE

@@ -50,4 +50,4 @@ dist: scoutfs-kmod.spec
	@ tar rf $(TARFILE) --transform="s@\(.*\)@scoutfs-kmod-$(RPM_VERSION)/\1@" scoutfs-kmod.spec

clean:
	make $(SCOUTFS_ARGS) clean
	$(MAKE) $(SCOUTFS_ARGS) clean
@@ -3,16 +3,28 @@
%define kmod_git_hash @@GITHASH@@
%define pkg_date %(date +%%Y%%m%%d)

# Disable the building of the debug package(s).
%define debug_package %{nil}

# take kernel version or default to uname -r
%{!?kversion: %global kversion %(uname -r)}
%global kernel_version %{kversion}

%if 0%{?el7}
%global kernel_source() /usr/src/kernels/%{kernel_version}.$(arch)
%global kernel_release() %{kversion}
%endif
%if 0%{?el8}
%global kernel_source() /usr/src/kernels/%{kernel_version}
%endif

%{!?_release: %global _release 0.%{pkg_date}git%{kmod_git_hash}}

%if 0%{?el7}
Name: %{kmod_name}
%endif
%if 0%{?el8}
Name: kmod-%{kmod_name}
%endif
Summary: %{kmod_name} kernel module
Version: %{kmod_version}
Release: %{_release}%{?dist}
@@ -20,24 +32,30 @@ License: GPLv2
Group: System/Kernel
URL: http://scoutfs.org/

%if 0%{?el7}
BuildRequires: %{kernel_module_package_buildreqs}
BuildRequires: git
%endif
%if 0%{?el8}
BuildRequires: elfutils-libelf-devel
%endif
BuildRequires: kernel-devel-uname-r = %{kernel_version}
BuildRequires: git
BuildRequires: module-init-tools

ExclusiveArch: x86_64

Source: %{kmod_name}-kmod-%{kmod_version}.tar

%if 0%{?el7}
# Build only for standard kernel variant(s); for debug packages, append "debug"
# after "default" (separated by space)
%kernel_module_package default
%endif

# Disable the building of the debug package(s).
%define debug_package %{nil}

%global install_mod_dir extra/%{name}

%global install_mod_dir extra/%{kmod_name}
%if 0%{?el8}
%global flavors_to_build x86_64
%endif

%description
%{kmod_name} - kernel module
@@ -66,7 +84,7 @@ export INSTALL_MOD_DIR=%{install_mod_dir}
mkdir -p %{install_mod_dir}
for flavor in %{flavors_to_build}; do
	export KSRC=%{kernel_source $flavor}
	export KVERSION=%{kernel_release $KSRC}
	export KVERSION=%{kversion}
	install -d $INSTALL_MOD_PATH/lib/modules/$KVERSION/%{install_mod_dir}
	cp $PWD/obj/$flavor/src/scoutfs.ko $INSTALL_MOD_PATH/lib/modules/$KVERSION/%{install_mod_dir}/
done
@@ -74,6 +92,14 @@ done
# mark modules executable so that strip-to-file can strip them
find %{buildroot} -type f -name \*.ko -exec %{__chmod} u+x \{\} \;

%if 0%{?el8}
%files
/lib/modules

%post
weak-modules --add-kernel --no-initramfs
depmod -a
%endif

%clean
rm -rf %{buildroot}
@@ -8,6 +8,7 @@ CFLAGS_scoutfs_trace.o = -I$(src) # define_trace.h double include
-include $(src)/Makefile.kernelcompat

scoutfs-y += \
	acl.o \
	avl.o \
	alloc.o \
	block.o \
@@ -24,6 +25,7 @@ scoutfs-y += \
	inode.o \
	ioctl.o \
	item.o \
	kernelcompat.o \
	lock.o \
	lock_server.o \
	msg.o \
@@ -26,6 +26,16 @@ ifneq (,$(shell grep 'dir_emit_dots' include/linux/fs.h))
ccflags-y += -DKC_DIR_EMIT_DOTS
endif

#
# v3.18-rc2-19-gb5ae6b15bd73
#
# Folds d_materialise_unique into d_splice_alias. Note reversal
# of arguments (Also note Documentation/filesystems/porting.rst)
#
ifneq (,$(shell grep 'd_materialise_unique' include/linux/dcache.h))
ccflags-y += -DKC_D_MATERIALISE_UNIQUE=1
endif

#
# RHEL extended the fop struct, so to use it we have to set
# a flag to indicate that the struct is large enough and
@@ -34,3 +44,217 @@ endif
ifneq (,$(shell grep 'FMODE_KABI_ITERATE' include/linux/fs.h))
ccflags-y += -DKC_FMODE_KABI_ITERATE
endif

#
# v4.7-rc2-23-g0d4d717f2583
#
# Added user_ns argument to posix_acl_valid
#
ifneq (,$(shell grep 'posix_acl_valid.*user_namespace' include/linux/posix_acl.h))
ccflags-y += -DKC_POSIX_ACL_VALID_USER_NS
endif

#
# v5.3-12296-g6d2052d188d9
#
# The RBCOMPUTE function is now passed an extra flag, and should return a bool
# to indicate whether the propagated callback should stop or not.
#
ifneq (,$(shell grep 'static inline bool RBNAME.*_compute_max' include/linux/rbtree_augmented.h))
ccflags-y += -DKC_RB_TREE_AUGMENTED_COMPUTE_MAX
endif

#
# v3.13-25-g37bc15392a23
#
# Renames posix_acl_create to __posix_acl_create and provides some
# new interfaces for creating ACLs
#
ifneq (,$(shell grep '__posix_acl_create' include/linux/posix_acl.h))
ccflags-y += -DKC___POSIX_ACL_CREATE
endif

#
# v4.8-rc1-29-g31051c85b5e2
#
# inode_change_ok() removed - replace with setattr_prepare()
#
ifneq (,$(shell grep 'extern int setattr_prepare' include/linux/fs.h))
ccflags-y += -DKC_SETATTR_PREPARE
endif

#
# v4.15-rc3-4-gae5e165d855d
#
# linux/iversion.h needs to manually be included for code that
# manipulates this field.
#
ifneq (,$(shell grep -s 'define _LINUX_IVERSION_H' include/linux/iversion.h))
ccflags-y += -DKC_NEED_LINUX_IVERSION_H=1
endif

#
# v4.11-12447-g104b4e5139fe
#
# Renamed __percpu_counter_add to percpu_counter_add_batch to clarify
# that the __ wasn't less safe, just took an extra parameter.
#
ifneq (,$(shell grep 'percpu_counter_add_batch' include/linux/percpu_counter.h))
ccflags-y += -DKC_PERCPU_COUNTER_ADD_BATCH
endif

#
# v4.11-4550-g7dea19f9ee63
#
# Introduced memalloc_nofs_{save,restore} preferred instead of _noio_.
#
ifneq (,$(shell grep 'memalloc_nofs_save' include/linux/sched/mm.h))
ccflags-y += -DKC_MEMALLOC_NOFS_SAVE
endif

#
# v4.7-12414-g1eff9d322a44
#
# Renamed bi_rw to bi_opf to force old code to catch up. We use it as a
# single switch between old and new bio structures.
#
ifneq (,$(shell grep 'bi_opf' include/linux/blk_types.h))
ccflags-y += -DKC_BIO_BI_OPF
endif

#
# v4.12-rc2-201-g4e4cbee93d56
#
# Moves to bi_status BLK_STS_ API instead of having a mix of error
# end_io args or bi_error.
#
ifneq (,$(shell grep 'bi_status' include/linux/blk_types.h))
ccflags-y += -DKC_BIO_BI_STATUS
endif

#
# v3.11-8765-ga0b02131c5fc
#
# Remove the old ->shrink() API, ->{scan,count}_objects is preferred.
#
ifneq (,$(shell grep '(*shrink)' include/linux/shrinker.h))
ccflags-y += -DKC_SHRINKER_SHRINK
endif

#
# v3.19-4777-g6bec00352861
#
# backing_dev_info is removed from address_space. Instead we need to use
# inode_to_bdi() inline from <backing-dev.h>.
#
ifneq (,$(shell grep 'struct backing_dev_info.*backing_dev_info' include/linux/fs.h))
ccflags-y += -DKC_LINUX_BACKING_DEV_INFO=1
endif

#
# v4.3-9290-ge409de992e3e
#
# xattr handlers are now passed a struct that contains `flags`
#
ifneq (,$(shell grep 'int...get..const struct xattr_handler.*struct dentry.*dentry,' include/linux/xattr.h))
ccflags-y += -DKC_XATTR_STRUCT_XATTR_HANDLER=1
endif

#
# v4.16-rc1-1-g9b2c45d479d0
#
# kernel_getsockname() and kernel_getpeername dropped addrlen arg
#
ifneq (,$(shell grep 'kernel_getsockname.*,$$' include/linux/net.h))
ccflags-y += -DKC_KERNEL_GETSOCKNAME_ADDRLEN=1
endif

#
# v4.1-rc1-410-geeb1bd5c40ed
#
# Adds a struct net parameter to sock_create_kern
#
ifneq (,$(shell grep 'sock_create_kern.*struct net' include/linux/net.h))
ccflags-y += -DKC_SOCK_CREATE_KERN_NET=1
endif

#
# v3.18-rc6-1619-gc0371da6047a
#
# iov_iter is now part of struct msghdr
#
ifneq (,$(shell grep 'struct iov_iter.*msg_iter' include/linux/socket.h))
ccflags-y += -DKC_MSGHDR_STRUCT_IOV_ITER=1
endif

#
# v4.17-rc6-7-g95582b008388
#
# Kernel has current_time(inode) to uniformly retrieve timespec in the right unit
#
ifneq (,$(shell grep 'extern struct timespec64 current_time' include/linux/fs.h))
ccflags-y += -DKC_CURRENT_TIME_INODE=1
endif

#
# v4.9-12228-g530e9b76ae8f
#
# register_cpu_notifier and family were all removed, to be
# replaced with cpuhp_* API calls.
#
ifneq (,$(shell grep 'define register_hotcpu_notifier' include/linux/cpu.h))
ccflags-y += -DKC_CPU_NOTIFIER
endif

#
# v3.14-rc8-130-gccad2365668f
#
# generic_file_buffered_write is removed, backport it
#
ifneq (,$(shell grep 'extern ssize_t generic_file_buffered_write' include/linux/fs.h))
ccflags-y += -DKC_GENERIC_FILE_BUFFERED_WRITE=1
endif

#
# v5.7-438-g8151b4c8bee4
#
# struct address_space_operations switches away from .readpages to .readahead
#
# RHEL has backported this feature all the way to RHEL8, as part of RHEL_KABI,
# which means we need to detect this very precisely
#
ifneq (,$(shell grep 'readahead.*struct readahead_control' include/linux/fs.h))
ccflags-y += -DKC_FILE_AOPS_READAHEAD
endif

#
# v4.0-rc7-1743-g8436318205b9
#
# .aio_read and .aio_write no longer exist. All reads and writes now use the
# .read_iter and .write_iter methods, or must implement .read and .write (which
# we don't).
#
ifneq (,$(shell grep 'ssize_t.*aio_read' include/linux/fs.h))
ccflags-y += -DKC_LINUX_HAVE_FOP_AIO_READ=1
endif

#
# rhel7 has a custom inode_operations_wrapper struct that is discarded
# entirely in favor of upstream structure since rhel8.
#
ifneq (,$(shell grep 'void.*follow_link.*struct dentry' include/linux/fs.h))
ccflags-y += -DKC_LINUX_HAVE_RHEL_IOPS_WRAPPER=1
endif

ifneq (,$(shell grep 'size_t.*ki_left;' include/linux/aio.h))
ccflags-y += -DKC_LINUX_AIO_KI_LEFT=1
endif

#
# v4.4-rc4-4-g98e9cb5711c6
#
# Introduces a new xattr_handler .name member that can be used to match the
# entire field, instead of just a prefix. For these kernels, we must use
# the new .name field instead.
ifneq (,$(shell grep 'static inline const char .xattr_prefix' include/linux/xattr.h))
ccflags-y += -DKC_XATTR_HANDLER_NAME=1
endif
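These probes only set `KC_*` compile flags; the shims that consume them live in kernelcompat.h, which isn't part of this diff. As a sketch of the pattern, here is one plausible shape for the `kc_posix_acl_valid()` wrapper called from acl.c below, reconstructed from the KC_POSIX_ACL_VALID_USER_NS probe above; it is an assumption, not the actual scoutfs code:

```c
/* Plausible kernelcompat.h shim; an assumption, not the real scoutfs code. */
#include <linux/posix_acl.h>

#ifdef KC_POSIX_ACL_VALID_USER_NS
/* v4.7+ kernels: posix_acl_valid() takes a user_namespace argument */
#define kc_posix_acl_valid(user_ns, acl)	posix_acl_valid(user_ns, acl)
#else
/* older kernels: no namespace argument */
#define kc_posix_acl_valid(user_ns, acl)	posix_acl_valid(acl)
#endif
```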
kmod/src/acl.c (new file, 378 lines)
@@ -0,0 +1,378 @@
/*
 * Copyright (C) 2022 Versity Software, Inc. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 */
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/xattr.h>
#include <linux/posix_acl.h>
#include <linux/posix_acl_xattr.h>

#include "format.h"
#include "super.h"
#include "scoutfs_trace.h"
#include "xattr.h"
#include "acl.h"
#include "inode.h"
#include "trans.h"

/*
 * POSIX draft ACLs are stored as full xattr items with the entries
 * encoded as the kernel's posix_acl_xattr_{header,entry} value structs.
 *
 * They're accessed and modified via user facing synthetic xattrs, iops
 * calls from the kernel, during inode mode changes, and during inode
 * creation.
 *
 * ACL access devolves into xattr access which is relatively expensive
 * so we maintain the cached native form in the vfs inode.  We drop the
 * cache in lock invalidation which means that cached acl access must
 * always be performed under cluster locking.
 */

static int acl_xattr_name_len(int type, char **name, size_t *name_len)
{
	int ret = 0;

	switch (type) {
	case ACL_TYPE_ACCESS:
		*name = XATTR_NAME_POSIX_ACL_ACCESS;
		if (name_len)
			*name_len = sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1;
		break;
	case ACL_TYPE_DEFAULT:
		*name = XATTR_NAME_POSIX_ACL_DEFAULT;
		if (name_len)
			*name_len = sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1;
		break;
	default:
		ret = -EINVAL;
		break;
	}

	return ret;
}

struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock)
{
	struct posix_acl *acl;
	char *value = NULL;
	char *name;
	int ret;

#ifndef KC___POSIX_ACL_CREATE
	if (!IS_POSIXACL(inode))
		return NULL;

	acl = get_cached_acl(inode, type);
	if (acl != ACL_NOT_CACHED)
		return acl;
#endif

	ret = acl_xattr_name_len(type, &name, NULL);
	if (ret < 0)
		return ERR_PTR(ret);

	ret = scoutfs_xattr_get_locked(inode, name, NULL, 0, lock);
	if (ret > 0) {
		value = kzalloc(ret, GFP_NOFS);
		if (!value)
			ret = -ENOMEM;
		else
			ret = scoutfs_xattr_get_locked(inode, name, value, ret, lock);
	}
	if (ret > 0) {
		acl = posix_acl_from_xattr(&init_user_ns, value, ret);
	} else if (ret == -ENODATA || ret == 0) {
		acl = NULL;
	} else {
		acl = ERR_PTR(ret);
	}

#ifndef KC___POSIX_ACL_CREATE
	/* can set null negative cache */
	if (!IS_ERR(acl))
		set_cached_acl(inode, type, acl);
#endif

	kfree(value);

	return acl;
}

struct posix_acl *scoutfs_get_acl(struct inode *inode, int type)
{
	struct super_block *sb = inode->i_sb;
	struct scoutfs_lock *lock = NULL;
	struct posix_acl *acl;
	int ret;

#ifndef KC___POSIX_ACL_CREATE
	if (!IS_POSIXACL(inode))
		return NULL;
#endif

	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
	if (ret < 0) {
		acl = ERR_PTR(ret);
	} else {
		acl = scoutfs_get_acl_locked(inode, type, lock);
		scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
	}

	return acl;
}

/*
 * The caller has acquired the locks and dirtied the inode, they'll
 * update the inode item if we return 0.
 */
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
			   struct scoutfs_lock *lock, struct list_head *ind_locks)
{
	static const struct scoutfs_xattr_prefix_tags tgs = {0,}; /* never scoutfs. prefix */
	bool set_mode = false;
	char *value = NULL;
	umode_t new_mode;
	size_t name_len;
	char *name;
	int size = 0;
	int ret;

	ret = acl_xattr_name_len(type, &name, &name_len);
	if (ret < 0)
		return ret;

	switch (type) {
	case ACL_TYPE_ACCESS:
		if (acl) {
			ret = posix_acl_update_mode(inode, &new_mode, &acl);
			if (ret < 0)
				goto out;
			set_mode = true;
		}
		break;
	case ACL_TYPE_DEFAULT:
		if (!S_ISDIR(inode->i_mode)) {
			ret = acl ? -EINVAL : 0;
			goto out;
		}
		break;
	}

	if (acl) {
		size = posix_acl_xattr_size(acl->a_count);
		value = kmalloc(size, GFP_NOFS);
		if (!value) {
			ret = -ENOMEM;
			goto out;
		}

		ret = posix_acl_to_xattr(&init_user_ns, acl, value, size);
		if (ret < 0)
			goto out;
	}

	ret = scoutfs_xattr_set_locked(inode, name, name_len, value, size, 0, &tgs,
				       lock, NULL, ind_locks);
	if (ret == 0 && set_mode) {
		inode->i_mode = new_mode;
		if (!value) {
			/* can be setting an acl that only affects mode, didn't need xattr */
			inode_inc_iversion(inode);
			inode->i_ctime = current_time(inode);
		}
	}

out:
#ifndef KC___POSIX_ACL_CREATE
	if (!ret)
		set_cached_acl(inode, type, acl);
#endif

	kfree(value);

	return ret;
}

int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
{
	struct super_block *sb = inode->i_sb;
	struct scoutfs_lock *lock = NULL;
	LIST_HEAD(ind_locks);
	int ret;

	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE, SCOUTFS_LKF_REFRESH_INODE, inode, &lock) ?:
	      scoutfs_inode_index_lock_hold(inode, &ind_locks, false, true);
	if (ret == 0) {
		ret = scoutfs_dirty_inode_item(inode, lock) ?:
		      scoutfs_set_acl_locked(inode, acl, type, lock, &ind_locks);
		if (ret == 0)
			scoutfs_update_inode_item(inode, lock, &ind_locks);

		scoutfs_release_trans(sb);
		scoutfs_inode_index_unlock(sb, &ind_locks);
	}

	scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
	return ret;
}

#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_get_xattr(const struct xattr_handler *handler, struct dentry *dentry,
			  struct inode *inode, const char *name, void *value,
			  size_t size)
{
	int type = handler->flags;
#else
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
			  int type)
{
#endif
	struct posix_acl *acl;
	int ret = 0;

	if (!IS_POSIXACL(dentry->d_inode))
		return -EOPNOTSUPP;

	acl = scoutfs_get_acl(dentry->d_inode, type);
	if (IS_ERR(acl))
		return PTR_ERR(acl);
	if (acl == NULL)
		return -ENODATA;

	ret = posix_acl_to_xattr(&init_user_ns, acl, value, size);
	posix_acl_release(acl);

	return ret;
}

#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_set_xattr(const struct xattr_handler *handler, struct dentry *dentry,
			  struct inode *inode, const char *name, const void *value,
			  size_t size, int flags)
{
	int type = handler->flags;
#else
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
			  int flags, int type)
{
#endif
	struct posix_acl *acl = NULL;
	int ret;

	if (!inode_owner_or_capable(dentry->d_inode))
		return -EPERM;

	if (!IS_POSIXACL(dentry->d_inode))
		return -EOPNOTSUPP;

	if (value) {
		acl = posix_acl_from_xattr(&init_user_ns, value, size);
		if (IS_ERR(acl))
			return PTR_ERR(acl);

		if (acl) {
			ret = kc_posix_acl_valid(&init_user_ns, acl);
			if (ret)
				goto out;
		}
	}

	ret = scoutfs_set_acl(dentry->d_inode, acl, type);
out:
	posix_acl_release(acl);

	return ret;
}

/*
 * Apply the parent's default acl to a new inode's access acl and inherit
 * it as the default for new directories.  The caller holds locks and a
 * transaction.
 */
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
			    struct scoutfs_lock *lock, struct scoutfs_lock *dir_lock,
			    struct list_head *ind_locks)
{
	struct posix_acl *acl = NULL;
	int ret = 0;

	if (!S_ISLNK(inode->i_mode)) {
		if (IS_POSIXACL(dir)) {
			acl = scoutfs_get_acl_locked(dir, ACL_TYPE_DEFAULT, dir_lock);
			if (IS_ERR(acl))
				return PTR_ERR(acl);
		}

		if (!acl)
			inode->i_mode &= ~current_umask();
	}

	if (IS_POSIXACL(dir) && acl) {
		if (S_ISDIR(inode->i_mode)) {
			ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_DEFAULT,
						     lock, ind_locks);
			if (ret)
				goto out;
		}
		ret = __posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
		if (ret < 0)
			return ret;
		if (ret > 0)
			ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_ACCESS,
						     lock, ind_locks);
	} else {
		cache_no_acl(inode);
	}
out:
	posix_acl_release(acl);
	return ret;
}

/*
 * Update the access ACL based on a newly set mode.  If we return an
 * error then the xattr wasn't changed.
 *
 * Annoyingly, setattr_copy has logic that transforms the final set mode
 * that we want to use to update the acl.  But we don't want to modify
 * the other inode fields while discovering the resulting mode.  We're
 * relying on acl_chmod not caring about the transformation (currently
 * just clears sgid).  It would be better if we could get the resulting
 * mode to give to acl_chmod without modifying the other inode fields.
 *
 * The caller has the inode mutex, a cluster lock, transaction, and will
 * update the inode item if we return success.
 */
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
			     struct scoutfs_lock *lock, struct list_head *ind_locks)
{
	struct posix_acl *acl;
	int ret = 0;

	if (!IS_POSIXACL(inode) || !(attr->ia_valid & ATTR_MODE))
		return 0;

	if (S_ISLNK(inode->i_mode))
		return -EOPNOTSUPP;

	acl = scoutfs_get_acl_locked(inode, ACL_TYPE_ACCESS, lock);
	if (IS_ERR_OR_NULL(acl))
		return PTR_ERR(acl);

	ret = __posix_acl_chmod(&acl, GFP_KERNEL, attr->ia_mode);
	if (ret)
		return ret;

	ret = scoutfs_set_acl_locked(inode, acl, ACL_TYPE_ACCESS, lock, ind_locks);
	posix_acl_release(acl);
	return ret;
}
kmod/src/acl.h (new file, 27 lines)
@@ -0,0 +1,27 @@
#ifndef _SCOUTFS_ACL_H_
#define _SCOUTFS_ACL_H_

struct posix_acl *scoutfs_get_acl(struct inode *inode, int type);
struct posix_acl *scoutfs_get_acl_locked(struct inode *inode, int type, struct scoutfs_lock *lock);
int scoutfs_set_acl(struct inode *inode, struct posix_acl *acl, int type);
int scoutfs_set_acl_locked(struct inode *inode, struct posix_acl *acl, int type,
			   struct scoutfs_lock *lock, struct list_head *ind_locks);
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
int scoutfs_acl_get_xattr(const struct xattr_handler *, struct dentry *dentry,
			  struct inode *inode, const char *name, void *value,
			  size_t size);
int scoutfs_acl_set_xattr(const struct xattr_handler *, struct dentry *dentry,
			  struct inode *inode, const char *name, const void *value,
			  size_t size, int flags);
#else
int scoutfs_acl_get_xattr(struct dentry *dentry, const char *name, void *value, size_t size,
			  int type);
int scoutfs_acl_set_xattr(struct dentry *dentry, const char *name, const void *value, size_t size,
			  int flags, int type);
#endif
int scoutfs_acl_chmod_locked(struct inode *inode, struct iattr *attr,
			     struct scoutfs_lock *lock, struct list_head *ind_locks);
int scoutfs_init_acl_locked(struct inode *inode, struct inode *dir,
			    struct scoutfs_lock *lock, struct scoutfs_lock *dir_lock,
			    struct list_head *ind_locks);
#endif
kmod/src/alloc.c (111 lines changed)
@@ -84,6 +84,21 @@ static u64 smallest_order_length(u64 len)
	return 1ULL << (free_extent_order(len) * 3);
}

/*
 * An extent modification dirties three distinct leaves of an allocator
 * btree as it adds and removes the blkno and size sorted items for the
 * old and new lengths of the extent.  Dirtying the paths to these
 * leaves can grow the tree and grow/shrink neighbours at each level.
 * We over-estimate the number of blocks allocated and freed (the paths
 * share a root, growth doesn't free) to err on the simpler and safer
 * side.  The overhead is minimal given the relatively large list blocks
 * and relatively short allocator trees.
 */
static u32 extent_mod_blocks(u32 height)
{
	return ((1 + height) * 2) * 3;
}
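To make the budget concrete, here is one way to read the formula; the decomposition and the example height are illustrative rather than taken from the source:

```c
/*
 * Illustrative reading of extent_mod_blocks(), not scoutfs source: an
 * extent modification dirties 3 leaves (removing the old and inserting
 * the new blkno- and size-sorted items), each reached through a path
 * of up to (1 + height) blocks, and every dirtied block is budgeted as
 * both an allocation and a free, hence the factor of 2.
 *
 * For an allocator btree of height 2:
 *	((1 + 2) * 2) * 3 = 18 budgeted blocks per extent modification
 */
```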
/*
 * Free extents don't have flags and are stored in two indexes sorted by
 * block location and by length order, largest first.  The location key
@@ -877,6 +892,13 @@ static int find_zone_extent(struct super_block *sb, struct scoutfs_alloc_root *r
 * -ENOENT is returned if we run out of extents in the source tree
 * before moving the total.
 *
 * If meta_budget is non-zero then -EINPROGRESS can be returned if the
 * caller's budget is consumed in the allocator during this call (though
 * not necessarily by us, we don't have per-thread tracking of allocator
 * consumption :/).  The call can still have made progress and the
 * caller is expected to commit the dirty trees and examine the
 * resulting modified trees to see if they need to continue moving
 * extents.
 *
 * The caller can specify that extents in the source tree should first
 * be found based on their zone bitmaps.  We'll first try to find
 * extents in the exclusive zones, then vacant zones, and then we'll
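Given that contract, a caller would loop roughly as sketched below; `commit_dirty_trees()` and `remaining_to_move()` are hypothetical placeholders for the server's commit path and for re-examining the modified trees, neither of which appears in this diff:

```c
/* Hypothetical sketch of a scoutfs_alloc_move() caller honoring -EINPROGRESS. */
static int move_with_budget(struct super_block *sb, struct scoutfs_alloc *alloc,
			    struct scoutfs_block_writer *wri,
			    struct scoutfs_alloc_root *dst,
			    struct scoutfs_alloc_root *src, u64 meta_budget)
{
	int ret;

	do {
		ret = scoutfs_alloc_move(sb, alloc, wri, dst, src,
					 remaining_to_move(src),	/* hypothetical */
					 NULL, NULL, 0, meta_budget);
		if (ret == -EINPROGRESS) {
			/* progress was made but the budget is spent:
			 * commit the dirty trees, then continue */
			ret = commit_dirty_trees(sb);		/* hypothetical */
			if (ret == 0)
				ret = -EINPROGRESS;
		}
	} while (ret == -EINPROGRESS);

	return ret;
}
```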
@@ -891,7 +913,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
		       struct scoutfs_block_writer *wri,
		       struct scoutfs_alloc_root *dst,
		       struct scoutfs_alloc_root *src, u64 total,
		       __le64 *exclusive, __le64 *vacant, u64 zone_blocks)
		       __le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget)
{
	struct alloc_ext_args args = {
		.alloc = alloc,
@@ -899,6 +921,8 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
	};
	struct scoutfs_extent found;
	struct scoutfs_extent ext;
	u32 avail_start = 0;
	u32 freed_start = 0;
	u64 moved = 0;
	u64 count;
	int ret = 0;
@@ -909,6 +933,9 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
		vacant = NULL;
	}

	if (meta_budget != 0)
		scoutfs_alloc_meta_remaining(alloc, &avail_start, &freed_start);

	while (moved < total) {
		count = total - moved;

@@ -941,6 +968,24 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
		if (ret < 0)
			break;

		if (meta_budget != 0 &&
		    scoutfs_alloc_meta_low_since(alloc, avail_start, freed_start, meta_budget,
						 extent_mod_blocks(src->root.height) +
						 extent_mod_blocks(dst->root.height))) {
			ret = -EINPROGRESS;
			break;
		}

		/* return partial if the server alloc can't dirty any more */
		if (scoutfs_alloc_meta_low(sb, alloc, 50 + extent_mod_blocks(src->root.height) +
					   extent_mod_blocks(dst->root.height))) {
			if (WARN_ON_ONCE(!moved))
				ret = -ENOSPC;
			else
				ret = 0;
			break;
		}

		/* searching set start/len, finish initializing alloced extent */
		ext.map = found.map ? ext.start - found.start + found.map : 0;
		ext.flags = found.flags;
@@ -1065,15 +1110,6 @@ out:
 * than completely exhausting the avail list or overflowing the freed
 * list.
 *
 * An extent modification dirties three distinct leaves of an allocator
 * btree as it adds and removes the blkno and size sorted items for the
 * old and new lengths of the extent.  Dirtying the paths to these
 * leaves can grow the tree and grow/shrink neighbours at each level.
 * We over-estimate the number of blocks allocated and freed (the paths
 * share a root, growth doesn't free) to err on the simpler and safer
 * side.  The overhead is minimal given the relatively large list blocks
 * and relatively short allocator trees.
 *
 * The caller tells us how many extents they're about to modify and how
 * many other additional blocks they may cow manually.  And finally, the
 * caller could be the first to dirty the avail and freed blocks in the
@@ -1082,7 +1118,7 @@ out:
static bool list_has_blocks(struct super_block *sb, struct scoutfs_alloc *alloc,
			    struct scoutfs_alloc_root *root, u32 extents, u32 addl_blocks)
{
	u32 tree_blocks = (((1 + root->root.height) * 2) * 3) * extents;
	u32 tree_blocks = extent_mod_blocks(root->root.height) * extents;
	u32 most = 1 + tree_blocks + addl_blocks;

	if (le32_to_cpu(alloc->avail.first_nr) < most) {
@@ -1329,6 +1365,27 @@ void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total,
	} while (read_seqretry(&alloc->seqlock, seq));
}

/*
 * Returns true if the caller's consumption of nr from either avail or
 * freed would end up exceeding their budget relative to the starting
 * remaining snapshot they took.
 */
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
				  u32 budget, u32 nr)
{
	u32 avail_use;
	u32 freed_use;
	u32 avail;
	u32 freed;

	scoutfs_alloc_meta_remaining(alloc, &avail, &freed);

	avail_use = avail_start - avail;
	freed_use = freed_start - freed;

	return ((avail_use + nr) > budget) || ((freed_use + nr) > budget);
}

bool scoutfs_alloc_test_flag(struct super_block *sb,
			     struct scoutfs_alloc *alloc, u32 flag)
{
@@ -1525,12 +1582,10 @@ out:
 * call the caller's callback.  This assumes that the super it's reading
 * could be stale and will retry if it encounters stale blocks.
 */
int scoutfs_alloc_foreach(struct super_block *sb,
			  scoutfs_alloc_foreach_cb_t cb, void *arg)
int scoutfs_alloc_foreach(struct super_block *sb, scoutfs_alloc_foreach_cb_t cb, void *arg)
{
	struct scoutfs_super_block *super = NULL;
	struct scoutfs_block_ref stale_refs[2] = {{0,}};
	struct scoutfs_block_ref refs[2] = {{0,}};
	DECLARE_SAVED_REFS(saved);
	int ret;

	super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
@@ -1539,26 +1594,18 @@ int scoutfs_alloc_foreach(struct super_block *sb,
		goto out;
	}

retry:
	ret = scoutfs_read_super(sb, super);
	if (ret < 0)
		goto out;
	do {
		ret = scoutfs_read_super(sb, super);
		if (ret < 0)
			goto out;

	refs[0] = super->logs_root.ref;
	refs[1] = super->srch_root.ref;
	ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);

		ret = scoutfs_block_check_stale(sb, ret, &saved, &super->logs_root.ref,
						&super->srch_root.ref);
	} while (ret == -ESTALE);

	ret = scoutfs_alloc_foreach_super(sb, super, cb, arg);
out:
	if (ret == -ESTALE) {
		if (memcmp(&stale_refs, &refs, sizeof(refs)) == 0) {
			ret = -EIO;
		} else {
			BUILD_BUG_ON(sizeof(stale_refs) != sizeof(refs));
			memcpy(stale_refs, refs, sizeof(stale_refs));
			goto retry;
		}
	}

	kfree(super);
	return ret;
}
@@ -19,14 +19,11 @@
	(128ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)

/*
 * The largest aligned region that we'll try to allocate at the end of
 * the file as it's extended.  This is also limited to the current file
 * size so we can only waste at most twice the total file size when
 * files are less than this.  We try to keep this around the point of
 * diminishing returns in streaming performance of common data devices
 * to limit waste.
 * The default size that we'll try to preallocate.  This is trying to
 * hit the limit of large efficient device writes while minimizing
 * wasted preallocation that is never used.
 */
#define SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT \
#define SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS \
	(8ULL * 1024 * 1024 >> SCOUTFS_BLOCK_SM_SHIFT)

/*
@@ -131,7 +128,7 @@ int scoutfs_alloc_move(struct super_block *sb, struct scoutfs_alloc *alloc,
		       struct scoutfs_block_writer *wri,
		       struct scoutfs_alloc_root *dst,
		       struct scoutfs_alloc_root *src, u64 total,
		       __le64 *exclusive, __le64 *vacant, u64 zone_blocks);
		       __le64 *exclusive, __le64 *vacant, u64 zone_blocks, u64 meta_budget);
int scoutfs_alloc_insert(struct super_block *sb, struct scoutfs_alloc *alloc,
			 struct scoutfs_block_writer *wri, struct scoutfs_alloc_root *root,
			 u64 start, u64 len);
@@ -159,6 +156,8 @@ int scoutfs_alloc_splice_list(struct super_block *sb,
bool scoutfs_alloc_meta_low(struct super_block *sb,
			    struct scoutfs_alloc *alloc, u32 nr);
void scoutfs_alloc_meta_remaining(struct scoutfs_alloc *alloc, u32 *avail_total, u32 *freed_space);
bool scoutfs_alloc_meta_low_since(struct scoutfs_alloc *alloc, u32 avail_start, u32 freed_start,
				  u32 budget, u32 nr);
bool scoutfs_alloc_test_flag(struct super_block *sb,
			     struct scoutfs_alloc *alloc, u32 flag);
kmod/src/block.c (160 lines changed)
@@ -21,6 +21,7 @@
#include <linux/blkdev.h>
#include <linux/rhashtable.h>
#include <linux/random.h>
#include <linux/sched/mm.h>

#include "format.h"
#include "super.h"
@@ -30,6 +31,7 @@
#include "scoutfs_trace.h"
#include "alloc.h"
#include "triggers.h"
#include "util.h"

/*
 * The scoutfs block cache manages metadata blocks that can be larger
@@ -57,7 +59,7 @@ struct block_info {
	atomic64_t access_counter;
	struct rhashtable ht;
	wait_queue_head_t waitq;
	struct shrinker shrinker;
	KC_DEFINE_SHRINKER(shrinker);
	struct work_struct free_work;
	struct llist_head free_llist;
};
@@ -128,7 +130,7 @@ static __le32 block_calc_crc(struct scoutfs_block_header *hdr, u32 size)
static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
{
	struct block_private *bp;
	unsigned int noio_flags;
	unsigned int nofs_flags;

	/*
	 * If we had multiple blocks per page we'd need to be a little
@@ -156,9 +158,9 @@ static struct block_private *block_alloc(struct super_block *sb, u64 blkno)
	 * spurious reclaim-on dependencies and warnings.
	 */
	lockdep_off();
	noio_flags = memalloc_noio_save();
	nofs_flags = memalloc_nofs_save();
	bp->virt = __vmalloc(SCOUTFS_BLOCK_LG_SIZE, GFP_NOFS | __GFP_HIGHMEM, PAGE_KERNEL);
	memalloc_noio_restore(noio_flags);
	memalloc_nofs_restore(nofs_flags);
	lockdep_on();

	if (!bp->virt) {
@@ -436,11 +438,10 @@ static void block_remove_all(struct super_block *sb)
 * possible.  Final freeing, verifying checksums, and unlinking errored
 * blocks are all done by future users of the blocks.
 */
static void block_end_io(struct super_block *sb, int rw,
static void block_end_io(struct super_block *sb, unsigned int opf,
			 struct block_private *bp, int err)
{
	DECLARE_BLOCK_INFO(sb, binf);
	bool is_read = !(rw & WRITE);

	if (err) {
		scoutfs_inc_counter(sb, block_cache_end_io_error);
@@ -450,7 +451,7 @@ static void block_end_io(struct super_block *sb, int rw,
	if (!atomic_dec_and_test(&bp->io_count))
		return;

	if (is_read && !test_bit(BLOCK_BIT_ERROR, &bp->bits))
	if (!op_is_write(opf) && !test_bit(BLOCK_BIT_ERROR, &bp->bits))
		set_bit(BLOCK_BIT_UPTODATE, &bp->bits);

	clear_bit(BLOCK_BIT_IO_BUSY, &bp->bits);
@@ -463,13 +464,13 @@ static void block_end_io(struct super_block *sb, int rw,
	wake_up(&binf->waitq);
}

static void block_bio_end_io(struct bio *bio, int err)
static void KC_DECLARE_BIO_END_IO(block_bio_end_io, struct bio *bio)
{
	struct block_private *bp = bio->bi_private;
	struct super_block *sb = bp->sb;

	TRACE_BLOCK(end_io, bp);
	block_end_io(sb, bio->bi_rw, bp, err);
	block_end_io(sb, kc_bio_get_opf(bio), bp, kc_bio_get_errno(bio));
	bio_put(bio);
}

@@ -477,7 +478,7 @@ static void block_bio_end_io(struct bio *bio, int err)
 * Kick off IO for a single block.
 */
static int block_submit_bio(struct super_block *sb, struct block_private *bp,
			    int rw)
			    unsigned int opf)
{
	struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
	struct bio *bio = NULL;
@@ -510,8 +511,9 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
			break;
		}

		bio->bi_sector = sector + (off >> 9);
		bio->bi_bdev = sbi->meta_bdev;
		kc_bio_set_opf(bio, opf);
		kc_bio_set_sector(bio, sector + (off >> 9));
		bio_set_dev(bio, sbi->meta_bdev);
		bio->bi_end_io = block_bio_end_io;
		bio->bi_private = bp;

@@ -528,18 +530,18 @@ static int block_submit_bio(struct super_block *sb, struct block_private *bp,
			BUG();

		if (!bio_add_page(bio, page, PAGE_SIZE, 0)) {
			submit_bio(rw, bio);
			kc_submit_bio(bio);
			bio = NULL;
		}
	}

	if (bio)
		submit_bio(rw, bio);
		kc_submit_bio(bio);

	blk_finish_plug(&plug);

	/* let racing end_io know we're done */
	block_end_io(sb, rw, bp, ret);
	block_end_io(sb, opf, bp, ret);

	return ret;
}
@@ -640,7 +642,7 @@ static struct block_private *block_read(struct super_block *sb, u64 blkno)

	if (!test_bit(BLOCK_BIT_UPTODATE, &bp->bits) &&
	    test_and_clear_bit(BLOCK_BIT_NEW, &bp->bits)) {
		ret = block_submit_bio(sb, bp, READ);
		ret = block_submit_bio(sb, bp, REQ_OP_READ);
		if (ret < 0)
			goto out;
	}
@@ -677,7 +679,7 @@ int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
			   struct scoutfs_block **bl_ret)
{
	struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
	struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
	struct scoutfs_block_header *hdr;
	struct block_private *bp = NULL;
	bool retried = false;
@@ -701,7 +703,7 @@ retry:
		set_bit(BLOCK_BIT_CRC_VALID, &bp->bits);
	}

	if (hdr->magic != cpu_to_le32(magic) || hdr->fsid != super->hdr.fsid ||
	if (hdr->magic != cpu_to_le32(magic) || hdr->fsid != cpu_to_le64(sbi->fsid) ||
	    hdr->seq != ref->seq || hdr->blkno != ref->blkno) {
		ret = -ESTALE;
		goto out;
@@ -728,6 +730,36 @@ out:
	return ret;
}

static bool stale_refs_match(struct scoutfs_block_ref *caller, struct scoutfs_block_ref *saved)
{
	return !caller || (caller->blkno == saved->blkno && caller->seq == saved->seq);
}

/*
 * Check if a read of a reference that gave ESTALE should be retried or
 * should generate a hard error.  If this is the second time we got
 * ESTALE from the same refs then we return EIO and the caller should
 * stop.  As long as we keep seeing different refs we'll return ESTALE
 * and the caller can keep trying.
 */
int scoutfs_block_check_stale(struct super_block *sb, int ret,
			      struct scoutfs_block_saved_refs *saved,
			      struct scoutfs_block_ref *a, struct scoutfs_block_ref *b)
{
	if (ret == -ESTALE) {
		if (stale_refs_match(a, &saved->refs[0]) && stale_refs_match(b, &saved->refs[1])) {
			ret = -EIO;
		} else {
			if (a)
				saved->refs[0] = *a;
			if (b)
				saved->refs[1] = *b;
		}
	}

	return ret;
}
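The reworked scoutfs_alloc_foreach() earlier in this diff is the in-tree example of the intended calling pattern; in sketch form it reduces to the loop below, where `read_and_walk()` is a hypothetical placeholder for the caller's read-and-traverse step:

```c
/* Retry-pattern sketch; read_and_walk() is a hypothetical placeholder. */
DECLARE_SAVED_REFS(saved);
int ret;

do {
	ret = read_and_walk(sb, &ref_a, &ref_b);	/* may fail with -ESTALE */
	ret = scoutfs_block_check_stale(sb, ret, &saved, &ref_a, &ref_b);
} while (ret == -ESTALE);	/* becomes -EIO if the same refs stay stale */
```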

void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl)
{
	if (!IS_ERR_OR_NULL(bl))
@@ -797,7 +829,7 @@ int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,
			    u32 magic, struct scoutfs_block **bl_ret,
			    u64 dirty_blkno, u64 *ref_blkno)
{
	struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
	struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
	struct scoutfs_block *cow_bl = NULL;
	struct scoutfs_block *bl = NULL;
	struct block_private *exist_bp = NULL;
@@ -865,7 +897,7 @@ int scoutfs_block_dirty_ref(struct super_block *sb, struct scoutfs_alloc *alloc,

	hdr = bl->data;
	hdr->magic = cpu_to_le32(magic);
	hdr->fsid = super->hdr.fsid;
	hdr->fsid = cpu_to_le64(sbi->fsid);
	hdr->blkno = cpu_to_le64(bl->blkno);
	prandom_bytes(&hdr->seq, sizeof(hdr->seq));

@@ -939,7 +971,7 @@ int scoutfs_block_writer_write(struct super_block *sb,
	/* retry previous write errors */
	clear_bit(BLOCK_BIT_ERROR, &bp->bits);

	ret = block_submit_bio(sb, bp, WRITE);
	ret = block_submit_bio(sb, bp, REQ_OP_WRITE);
	if (ret < 0)
		break;
}
@@ -1039,6 +1071,16 @@ u64 scoutfs_block_writer_dirty_bytes(struct super_block *sb,
	return wri->nr_dirty_blocks * SCOUTFS_BLOCK_LG_SIZE;
}

static unsigned long block_count_objects(struct shrinker *shrink, struct shrink_control *sc)
{
	struct block_info *binf = KC_SHRINKER_CONTAINER_OF(shrink, struct block_info);
	struct super_block *sb = binf->sb;

	scoutfs_inc_counter(sb, block_cache_count_objects);

	return shrinker_min_long(atomic_read(&binf->total_inserted));
}

/*
 * Remove a number of cached blocks that haven't been used recently.
 *
@@ -1059,25 +1101,19 @@ u64 scoutfs_block_writer_dirty_bytes(struct super_block *sb,
 * atomically remove blocks when the only references are ours and the
 * hash table.
 */
static int block_shrink(struct shrinker *shrink, struct shrink_control *sc)
static unsigned long block_scan_objects(struct shrinker *shrink, struct shrink_control *sc)
{
	struct block_info *binf = container_of(shrink, struct block_info,
					       shrinker);
	struct block_info *binf = KC_SHRINKER_CONTAINER_OF(shrink, struct block_info);
	struct super_block *sb = binf->sb;
	struct rhashtable_iter iter;
	struct block_private *bp;
	unsigned long nr;
	bool stop = false;
	unsigned long freed = 0;
	unsigned long nr = sc->nr_to_scan;
	u64 recently;

	nr = sc->nr_to_scan;
	if (nr == 0)
		goto out;
	scoutfs_inc_counter(sb, block_cache_scan_objects);

	scoutfs_inc_counter(sb, block_cache_shrink);

	nr = DIV_ROUND_UP(nr, SCOUTFS_BLOCK_LG_PAGES_PER);

restart:
	recently = accessed_recently(binf);
	rhashtable_walk_enter(&binf->ht, &iter);
	rhashtable_walk_start(&iter);
@@ -1099,12 +1135,15 @@ restart:
		if (bp == NULL)
			break;
		if (bp == ERR_PTR(-EAGAIN)) {
			/* hard exit to wait for rcu rebalance to finish */
			rhashtable_walk_stop(&iter);
			rhashtable_walk_exit(&iter);
			scoutfs_inc_counter(sb, block_cache_shrink_restart);
			synchronize_rcu();
			goto restart;
			/*
			 * We can be called from reclaim in the allocation
			 * to resize the hash table itself.  We have to
			 * return so that the caller can proceed and
			 * enable hash table iteration again.
			 */
			scoutfs_inc_counter(sb, block_cache_shrink_stop);
			stop = true;
			break;
		}

		scoutfs_inc_counter(sb, block_cache_shrink_next);
@@ -1118,6 +1157,7 @@ restart:
		if (block_remove_solo(sb, bp)) {
			scoutfs_inc_counter(sb, block_cache_shrink_remove);
			TRACE_BLOCK(shrink, bp);
			freed++;
			nr--;
		}
		block_put(sb, bp);
@@ -1126,9 +1166,11 @@ restart:

	rhashtable_walk_stop(&iter);
	rhashtable_walk_exit(&iter);
out:
	return min_t(u64, (u64)atomic_read(&binf->total_inserted) * SCOUTFS_BLOCK_LG_PAGES_PER,
		     INT_MAX);

	if (stop)
		return SHRINK_STOP;
	else
		return freed;
|
||||
}
|
||||
|
||||
struct sm_block_completion {
|
||||
@@ -1136,11 +1178,11 @@ struct sm_block_completion {
|
||||
int err;
|
||||
};
|
||||
|
||||
static void sm_block_bio_end_io(struct bio *bio, int err)
|
||||
static void KC_DECLARE_BIO_END_IO(sm_block_bio_end_io, struct bio *bio)
|
||||
{
|
||||
struct sm_block_completion *sbc = bio->bi_private;
|
||||
|
||||
sbc->err = err;
|
||||
sbc->err = kc_bio_get_errno(bio);
|
||||
complete(&sbc->comp);
|
||||
bio_put(bio);
|
||||
}
|
||||
@@ -1155,9 +1197,8 @@ static void sm_block_bio_end_io(struct bio *bio, int err)
|
||||
* only layer that sees the full block buffer so we pass the calculated
|
||||
* crc to the caller for them to check in their context.
|
||||
*/
|
||||
static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw, u64 blkno,
|
||||
struct scoutfs_block_header *hdr, size_t len,
|
||||
__le32 *blk_crc)
|
||||
static int sm_block_io(struct super_block *sb, struct block_device *bdev, unsigned int opf,
|
||||
u64 blkno, struct scoutfs_block_header *hdr, size_t len, __le32 *blk_crc)
|
||||
{
|
||||
struct scoutfs_block_header *pg_hdr;
|
||||
struct sm_block_completion sbc;
|
||||
@@ -1171,7 +1212,7 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw
|
||||
return -EIO;
|
||||
|
||||
if (WARN_ON_ONCE(len > SCOUTFS_BLOCK_SM_SIZE) ||
|
||||
WARN_ON_ONCE(!(rw & WRITE) && !blk_crc))
|
||||
WARN_ON_ONCE(!op_is_write(opf) && !blk_crc))
|
||||
return -EINVAL;
|
||||
|
||||
page = alloc_page(GFP_NOFS);
|
||||
@@ -1180,7 +1221,7 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw
|
||||
|
||||
pg_hdr = page_address(page);
|
||||
|
||||
if (rw & WRITE) {
|
||||
if (op_is_write(opf)) {
|
||||
memcpy(pg_hdr, hdr, len);
|
||||
if (len < SCOUTFS_BLOCK_SM_SIZE)
|
||||
memset((char *)pg_hdr + len, 0,
|
||||
@@ -1194,8 +1235,9 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw
|
||||
goto out;
|
||||
}
|
||||
|
||||
bio->bi_sector = blkno << (SCOUTFS_BLOCK_SM_SHIFT - 9);
|
||||
bio->bi_bdev = bdev;
|
||||
kc_bio_set_opf(bio, opf | REQ_SYNC);
|
||||
kc_bio_set_sector(bio, blkno << (SCOUTFS_BLOCK_SM_SHIFT - 9));
|
||||
bio_set_dev(bio, bdev);
|
||||
bio->bi_end_io = sm_block_bio_end_io;
|
||||
bio->bi_private = &sbc;
|
||||
bio_add_page(bio, page, SCOUTFS_BLOCK_SM_SIZE, 0);
|
||||
@@ -1203,12 +1245,12 @@ static int sm_block_io(struct super_block *sb, struct block_device *bdev, int rw
|
||||
init_completion(&sbc.comp);
|
||||
sbc.err = 0;
|
||||
|
||||
submit_bio((rw & WRITE) ? WRITE_SYNC : READ_SYNC, bio);
|
||||
kc_submit_bio(bio);
|
||||
|
||||
wait_for_completion(&sbc.comp);
|
||||
ret = sbc.err;
|
||||
|
||||
if (ret == 0 && !(rw & WRITE)) {
|
||||
if (ret == 0 && !op_is_write(opf)) {
|
||||
memcpy(hdr, pg_hdr, len);
|
||||
*blk_crc = block_calc_crc(pg_hdr, SCOUTFS_BLOCK_SM_SIZE);
|
||||
}
|
||||
@@ -1222,14 +1264,14 @@ int scoutfs_block_read_sm(struct super_block *sb,
|
||||
struct scoutfs_block_header *hdr, size_t len,
|
||||
__le32 *blk_crc)
|
||||
{
|
||||
return sm_block_io(sb, bdev, READ, blkno, hdr, len, blk_crc);
|
||||
return sm_block_io(sb, bdev, REQ_OP_READ, blkno, hdr, len, blk_crc);
|
||||
}
|
||||
|
||||
int scoutfs_block_write_sm(struct super_block *sb,
|
||||
struct block_device *bdev, u64 blkno,
|
||||
struct scoutfs_block_header *hdr, size_t len)
|
||||
{
|
||||
return sm_block_io(sb, bdev, WRITE, blkno, hdr, len, NULL);
|
||||
return sm_block_io(sb, bdev, REQ_OP_WRITE, blkno, hdr, len, NULL);
|
||||
}
|
||||
|
||||
int scoutfs_block_setup(struct super_block *sb)
|
||||
@@ -1254,9 +1296,9 @@ int scoutfs_block_setup(struct super_block *sb)
|
||||
atomic_set(&binf->total_inserted, 0);
|
||||
atomic64_set(&binf->access_counter, 0);
|
||||
init_waitqueue_head(&binf->waitq);
|
||||
binf->shrinker.shrink = block_shrink;
|
||||
binf->shrinker.seeks = DEFAULT_SEEKS;
|
||||
register_shrinker(&binf->shrinker);
|
||||
KC_INIT_SHRINKER_FUNCS(&binf->shrinker, block_count_objects,
|
||||
block_scan_objects);
|
||||
KC_REGISTER_SHRINKER(&binf->shrinker);
|
||||
INIT_WORK(&binf->free_work, block_free_work);
|
||||
init_llist_head(&binf->free_llist);
|
||||
|
||||
@@ -1276,7 +1318,7 @@ void scoutfs_block_destroy(struct super_block *sb)
|
||||
struct block_info *binf = SCOUTFS_SB(sb)->block_info;
|
||||
|
||||
if (binf) {
|
||||
unregister_shrinker(&binf->shrinker);
|
||||
KC_UNREGISTER_SHRINKER(&binf->shrinker);
|
||||
block_remove_all(sb);
|
||||
flush_work(&binf->free_work);
|
||||
rhashtable_destroy(&binf->ht);
|
||||
|
||||
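A note on the shrinker conversion above: the KC_ wrappers bridge the kernel's move from the single .shrink callback, which mixed counting and freeing, to the split count_objects/scan_objects pair (upstream since v3.12). A minimal sketch of the modern shape these wrappers target, with illustrative names (demo_nr_cached and demo_evict() are stand-ins, not scoutfs code), using the older struct-embedded register_shrinker() interface:

    #include <linux/shrinker.h>

    /* illustrative cache-size counter; a real cache tracks its own */
    static atomic_long_t demo_nr_cached;
    static struct shrinker demo_shrinker;

    static unsigned long demo_count_objects(struct shrinker *shrink,
                                            struct shrink_control *sc)
    {
            /* count is only a cheap estimate; no walking, no sleeping */
            return atomic_long_read(&demo_nr_cached);
    }

    static unsigned long demo_scan_objects(struct shrinker *shrink,
                                           struct shrink_control *sc)
    {
            unsigned long freed;

            /* demo_evict() stands in for the cache's eviction walk */
            if (!demo_evict(sc->nr_to_scan, &freed))
                    return SHRINK_STOP; /* can't make progress this pass */
            return freed;
    }

    /* registration, typically at setup time */
    demo_shrinker.count_objects = demo_count_objects;
    demo_shrinker.scan_objects = demo_scan_objects;
    demo_shrinker.seeks = DEFAULT_SEEKS;
    register_shrinker(&demo_shrinker);

The count callback is only an estimate, which is why block_count_objects above can return total_inserted without walking the hash table; all of the real eviction work stays in the scan callback.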
@@ -13,6 +13,17 @@ struct scoutfs_block {
void *priv;
};

struct scoutfs_block_saved_refs {
struct scoutfs_block_ref refs[2];
};

#define DECLARE_SAVED_REFS(name) \
struct scoutfs_block_saved_refs name = {{{0,}}}

int scoutfs_block_check_stale(struct super_block *sb, int ret,
struct scoutfs_block_saved_refs *saved,
struct scoutfs_block_ref *a, struct scoutfs_block_ref *b);

int scoutfs_block_read_ref(struct super_block *sb, struct scoutfs_block_ref *ref, u32 magic,
struct scoutfs_block **bl_ret);
void scoutfs_block_put(struct super_block *sb, struct scoutfs_block *bl);
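The saved-refs helpers declared above are meant to wrap retry loops around ref reads. A minimal sketch of the calling pattern, modeled on the forest.c conversion later in this diff; read_both_refs() and the ref variables are illustrative stand-ins for a caller's own read path:

    DECLARE_SAVED_REFS(saved);
    int ret;

    retry:
            ret = read_both_refs(sb, &ref_a, &ref_b); /* may fail with -ESTALE */

            /* a second -ESTALE from the same two refs is promoted to -EIO */
            ret = scoutfs_block_check_stale(sb, ret, &saved, &ref_a, &ref_b);
            if (ret == -ESTALE)
                    goto retry;

This replaces the open-coded prev_refs/memcmp tracking that each caller previously carried.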
@@ -356,7 +356,6 @@ static int client_greeting(struct super_block *sb,
{
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct client_info *client = sbi->client_info;
struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
struct scoutfs_net_greeting *gr = resp;
bool new_server;
int ret;
@@ -371,9 +370,9 @@ static int client_greeting(struct super_block *sb,
goto out;
}

if (gr->fsid != super->hdr.fsid) {
if (gr->fsid != cpu_to_le64(sbi->fsid)) {
scoutfs_warn(sb, "server greeting response fsid 0x%llx did not match client fsid 0x%llx",
le64_to_cpu(gr->fsid), le64_to_cpu(super->hdr.fsid));
le64_to_cpu(gr->fsid), sbi->fsid);
ret = -EINVAL;
goto out;
}
@@ -476,7 +475,6 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
connect_dwork.work);
struct super_block *sb = client->sb;
struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
struct scoutfs_super_block *super = &sbi->super;
struct scoutfs_mount_options opts;
struct scoutfs_net_greeting greet;
struct sockaddr_in sin;
@@ -508,7 +506,7 @@ static void scoutfs_client_connect_worker(struct work_struct *work)
goto out;

/* send a greeting to verify endpoints of each connection */
greet.fsid = super->hdr.fsid;
greet.fsid = cpu_to_le64(sbi->fsid);
greet.fmt_vers = cpu_to_le64(sbi->fmt_vers);
greet.server_term = cpu_to_le64(client->server_term);
greet.rid = cpu_to_le64(sbi->rid);

@@ -30,11 +30,13 @@
EXPAND_COUNTER(block_cache_free) \
EXPAND_COUNTER(block_cache_free_work) \
EXPAND_COUNTER(block_cache_remove_stale) \
EXPAND_COUNTER(block_cache_count_objects) \
EXPAND_COUNTER(block_cache_scan_objects) \
EXPAND_COUNTER(block_cache_shrink) \
EXPAND_COUNTER(block_cache_shrink_next) \
EXPAND_COUNTER(block_cache_shrink_recent) \
EXPAND_COUNTER(block_cache_shrink_remove) \
EXPAND_COUNTER(block_cache_shrink_restart) \
EXPAND_COUNTER(block_cache_shrink_stop) \
EXPAND_COUNTER(btree_compact_values) \
EXPAND_COUNTER(btree_compact_values_enomem) \
EXPAND_COUNTER(btree_delete) \
@@ -75,8 +77,6 @@
EXPAND_COUNTER(data_write_begin_enobufs_retry) \
EXPAND_COUNTER(dentry_revalidate_error) \
EXPAND_COUNTER(dentry_revalidate_invalid) \
EXPAND_COUNTER(dentry_revalidate_locked) \
EXPAND_COUNTER(dentry_revalidate_orphan) \
EXPAND_COUNTER(dentry_revalidate_rcu) \
EXPAND_COUNTER(dentry_revalidate_root) \
EXPAND_COUNTER(dentry_revalidate_valid) \
@@ -90,6 +90,8 @@
EXPAND_COUNTER(forest_read_items) \
EXPAND_COUNTER(forest_roots_next_hint) \
EXPAND_COUNTER(forest_set_bloom_bits) \
EXPAND_COUNTER(item_cache_count_objects) \
EXPAND_COUNTER(item_cache_scan_objects) \
EXPAND_COUNTER(item_clear_dirty) \
EXPAND_COUNTER(item_create) \
EXPAND_COUNTER(item_delete) \
@@ -123,6 +125,7 @@
EXPAND_COUNTER(item_update) \
EXPAND_COUNTER(item_write_dirty) \
EXPAND_COUNTER(lock_alloc) \
EXPAND_COUNTER(lock_count_objects) \
EXPAND_COUNTER(lock_free) \
EXPAND_COUNTER(lock_grant_request) \
EXPAND_COUNTER(lock_grant_response) \
@@ -136,6 +139,7 @@
EXPAND_COUNTER(lock_lock_error) \
EXPAND_COUNTER(lock_nonblock_eagain) \
EXPAND_COUNTER(lock_recover_request) \
EXPAND_COUNTER(lock_scan_objects) \
EXPAND_COUNTER(lock_shrink_attempted) \
EXPAND_COUNTER(lock_shrink_aborted) \
EXPAND_COUNTER(lock_shrink_work) \
@@ -168,6 +172,7 @@
EXPAND_COUNTER(quorum_recv_resignation) \
EXPAND_COUNTER(quorum_recv_vote) \
EXPAND_COUNTER(quorum_send_heartbeat) \
EXPAND_COUNTER(quorum_send_heartbeat_dropped) \
EXPAND_COUNTER(quorum_send_resignation) \
EXPAND_COUNTER(quorum_send_request) \
EXPAND_COUNTER(quorum_send_vote) \
@@ -189,8 +194,6 @@
EXPAND_COUNTER(srch_search_retry_empty) \
EXPAND_COUNTER(srch_search_sorted) \
EXPAND_COUNTER(srch_search_sorted_block) \
EXPAND_COUNTER(srch_search_stale_eio) \
EXPAND_COUNTER(srch_search_stale_retry) \
EXPAND_COUNTER(srch_search_xattrs) \
EXPAND_COUNTER(srch_read_stale) \
EXPAND_COUNTER(statfs) \
@@ -235,12 +238,12 @@ struct scoutfs_counters {
#define SCOUTFS_PCPU_COUNTER_BATCH (1 << 30)

#define scoutfs_inc_counter(sb, which) \
__percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, 1, \
SCOUTFS_PCPU_COUNTER_BATCH)
percpu_counter_add_batch(&SCOUTFS_SB(sb)->counters->which, 1, \
SCOUTFS_PCPU_COUNTER_BATCH)

#define scoutfs_add_counter(sb, which, cnt) \
__percpu_counter_add(&SCOUTFS_SB(sb)->counters->which, cnt, \
SCOUTFS_PCPU_COUNTER_BATCH)
percpu_counter_add_batch(&SCOUTFS_SB(sb)->counters->which, cnt, \
SCOUTFS_PCPU_COUNTER_BATCH)

void __init scoutfs_init_counters(void);
int scoutfs_setup_counters(struct super_block *sb);
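The counter macro change above tracks the v4.13 rename of __percpu_counter_add() to percpu_counter_add_batch(); the arguments and batching semantics are unchanged. On kernels that predate the rename, a compat shim along these lines would keep the new spelling building; the KC_HAVE_ detection macro here is illustrative, not something this diff defines:

    #include <linux/percpu_counter.h>

    /* hypothetical compat shim for pre-4.13 kernels */
    #ifndef KC_HAVE_PERCPU_COUNTER_ADD_BATCH
    #define percpu_counter_add_batch(fbc, amount, batch) \
            __percpu_counter_add(fbc, amount, batch)
    #endif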
276 kmod/src/data.c
@@ -307,7 +307,7 @@ int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
LIST_HEAD(ind_locks);
s64 ret = 0;

WARN_ON_ONCE(inode && !mutex_is_locked(&inode->i_mutex));
WARN_ON_ONCE(inode && !inode_is_locked(inode));

/* clamp last to the last possible block? */
if (last > SCOUTFS_BLOCK_SM_MAX)
@@ -366,27 +366,27 @@ static inline u64 ext_last(struct scoutfs_extent *ext)

/*
* The caller is writing to a logical iblock that doesn't have an
* allocated extent.
* allocated extent. The caller has searched for an extent containing
* iblock. If it already existed then it must be unallocated and
* offline.
*
* We always allocate an extent starting at the logical iblock. The
* caller has searched for an extent containing iblock. If it already
* existed then it must be unallocated and offline.
* We implement two preallocation strategies. Typically we only
* preallocate for simple streaming writes and limit preallocation while
* the file is small. The largest efficient allocation size is
* typically large enough that it would be unreasonable to allocate that
* much for all small files.
*
* Preallocation is used if we're strictly contiguously extending
* writes. That is, if the logical block offset equals the number of
* online blocks. We try to preallocate the number of blocks existing
* so that small files don't waste inordinate amounts of space and large
* files will eventually see large extents. This only works for
* contiguous single stream writes or stages of files from the first
* block. It doesn't work for concurrent stages, releasing behind
* staging, sparse files, multi-node writes, etc. fallocate() is always
* a better tool to use.
* Optionally, we can simply preallocate large empty aligned regions.
* This can waste a lot of space for small or sparse files but is
* reasonable when a file population is known to be large and dense but
* known to be written with non-streaming write patterns.
*/
static int alloc_block(struct super_block *sb, struct inode *inode,
struct scoutfs_extent *ext, u64 iblock,
struct scoutfs_lock *lock)
{
DECLARE_DATA_INFO(sb, datinf);
struct scoutfs_mount_options opts;
const u64 ino = scoutfs_ino(inode);
struct data_ext_args args = {
.ino = ino,
@@ -394,17 +394,22 @@ static int alloc_block(struct super_block *sb, struct inode *inode,
.lock = lock,
};
struct scoutfs_extent found;
struct scoutfs_extent pre;
struct scoutfs_extent pre = {0,};
bool undo_pre = false;
u64 blkno = 0;
u64 online;
u64 offline;
u8 flags;
u64 start;
u64 count;
u64 rem;
int ret;
int err;

trace_scoutfs_data_alloc_block_enter(sb, ino, iblock, ext);

scoutfs_options_read(sb, &opts);

/* can only allocate over existing unallocated offline extent */
if (WARN_ON_ONCE(ext->len &&
!(iblock >= ext->start && iblock <= ext_last(ext) &&
@@ -413,66 +418,118 @@ static int alloc_block(struct super_block *sb, struct inode *inode,

mutex_lock(&datinf->mutex);

scoutfs_inode_get_onoff(inode, &online, &offline);
/* default to single allocation at the written block */
start = iblock;
count = 1;
/* copy existing flags for preallocated regions */
flags = ext->len ? ext->flags : 0;

if (ext->len) {
/* limit preallocation to remaining existing (offline) extent */
/*
* Assume that offline writers are going to be writing
* all the offline extents and try to preallocate the
* rest of the unwritten extent.
*/
count = ext->len - (iblock - ext->start);
flags = ext->flags;

} else if (opts.data_prealloc_contig_only) {
/*
* Only preallocate when a quick test of the online
* block counts looks like we're a simple streaming
* write. Try to write until the next extent but limit
* the preallocation size to the number of online
* blocks.
*/
scoutfs_inode_get_onoff(inode, &online, &offline);
if (iblock > 1 && iblock == online) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
iblock, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (found.len && found.start > iblock)
count = found.start - iblock;
else
count = opts.data_prealloc_blocks;

count = min(iblock, count);
}

} else {
/* otherwise alloc to next extent */
ret = scoutfs_ext_next(sb, &data_ext_ops, &args,
iblock, 1, &found);
/*
* Preallocation within aligned regions tries to
* allocate an extent to fill the hole in the region
* that contains iblock. We'd have to add a bit of plumbing
* to find previous extents so we only search for a next
* extent from the front of the region and from iblock.
*/
div64_u64_rem(iblock, opts.data_prealloc_blocks, &rem);
start = iblock - rem;
count = opts.data_prealloc_blocks;
ret = scoutfs_ext_next(sb, &data_ext_ops, &args, start, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
if (found.len && found.start > iblock)
count = found.start - iblock;
else
count = SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT;
flags = 0;

/* trim count if there's an extent in the region before iblock */
if (found.len && found.start < iblock) {
count -= iblock - start;
start = iblock;
/* see if there's also an extent after iblock */
ret = scoutfs_ext_next(sb, &data_ext_ops, &args, iblock, 1, &found);
if (ret < 0 && ret != -ENOENT)
goto out;
}

/* trim count by next extent after iblock */
if (found.len && found.start > start && found.start < start + count)
count = (found.start - start);
}

/* overall prealloc limit */
count = min_t(u64, count, SCOUTFS_DATA_EXTEND_PREALLOC_LIMIT);

/* only strictly contiguous extending writes will try to preallocate */
if (iblock > 1 && iblock == online)
count = min(iblock, count);
else
count = 1;
count = min_t(u64, count, opts.data_prealloc_blocks);

ret = scoutfs_alloc_data(sb, datinf->alloc, datinf->wri,
&datinf->dalloc, count, &blkno, &count);
if (ret < 0)
goto out;

ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock, 1, blkno, 0);
if (ret < 0)
goto out;
/*
* An aligned prealloc attempt that gets a smaller extent can
* fail to cover iblock, make sure that it does. This is a
* pathological case so we don't try to move the window past
* iblock. Just enough to cover it, which we know is safe.
*/
if (start + count <= iblock)
start += (iblock - (start + count) + 1);

if (count > 1) {
pre.start = iblock + 1;
pre.len = count - 1;
pre.map = blkno + 1;
pre.start = start;
pre.len = count;
pre.map = blkno;
pre.flags = flags | SEF_UNWRITTEN;
ret = scoutfs_ext_set(sb, &data_ext_ops, &args, pre.start,
pre.len, pre.map, pre.flags);
if (ret < 0) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock,
1, 0, flags);
BUG_ON(err); /* couldn't restore original */
if (ret < 0)
goto out;
}
undo_pre = true;
}

ret = scoutfs_ext_set(sb, &data_ext_ops, &args, iblock, 1, blkno + (iblock - start), 0);
if (ret < 0)
goto out;

/* tell the caller we have a single block, could check next? */
ext->start = iblock;
ext->len = 1;
ext->map = blkno;
ext->map = blkno + (iblock - start);
ext->flags = 0;
ret = 0;
out:
if (ret < 0 && blkno > 0) {
if (undo_pre) {
err = scoutfs_ext_set(sb, &data_ext_ops, &args,
pre.start, pre.len, 0, flags);
BUG_ON(err); /* leaked preallocated extent */
}
err = scoutfs_free_data(sb, datinf->alloc, datinf->wri,
&datinf->data_freed, blkno, count);
BUG_ON(err); /* leaked free blocks */
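To make the aligned-region arithmetic above concrete, with illustrative values: if data_prealloc_blocks is 1024 and a write lands at iblock 2500, then div64_u64_rem() gives rem = 2500 mod 1024 = 452, so the window starts at start = 2500 - 452 = 2048 and spans count = 1024 blocks (2048 through 3071). Neighbouring extents then trim that window, and the final scoutfs_ext_set() maps iblock itself at blkno + (iblock - start), block 452 of the new allocation in this example.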
@@ -501,7 +558,7 @@ static int scoutfs_get_block(struct inode *inode, sector_t iblock,
u64 offset;
int ret;

WARN_ON_ONCE(create && !mutex_is_locked(&inode->i_mutex));
WARN_ON_ONCE(create && !inode_is_locked(inode));

/* make sure caller holds a cluster lock */
lock = scoutfs_per_task_get(&si->pt_data_lock);
@@ -586,8 +643,8 @@ static int scoutfs_get_block_read(struct inode *inode, sector_t iblock,
return ret;
}

static int scoutfs_get_block_write(struct inode *inode, sector_t iblock,
struct buffer_head *bh, int create)
int scoutfs_get_block_write(struct inode *inode, sector_t iblock, struct buffer_head *bh,
int create)
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
int ret;
@@ -647,7 +704,7 @@ static int scoutfs_readpage(struct file *file, struct page *page)

if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
ret = scoutfs_data_wait_check(inode, page_offset(page),
PAGE_CACHE_SIZE, SEF_OFFLINE,
PAGE_SIZE, SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ, &dw,
inode_lock);
if (ret != 0) {
@@ -672,6 +729,7 @@ static int scoutfs_readpage(struct file *file, struct page *page)
return ret;
}

#ifndef KC_FILE_AOPS_READAHEAD
/*
* This is used for opportunistic read-ahead which can throw the pages
* away if it needs to. If the caller didn't deal with offline extents
@@ -697,14 +755,14 @@ static int scoutfs_readpages(struct file *file, struct address_space *mapping,

list_for_each_entry_safe(page, tmp, pages, lru) {
ret = scoutfs_data_wait_check(inode, page_offset(page),
PAGE_CACHE_SIZE, SEF_OFFLINE,
PAGE_SIZE, SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ, NULL,
inode_lock);
if (ret < 0)
goto out;
if (ret > 0) {
list_del(&page->lru);
page_cache_release(page);
put_page(page);
if (--nr_pages == 0) {
ret = 0;
goto out;
@@ -718,6 +776,29 @@ out:
BUG_ON(!list_empty(pages));
return ret;
}
#else
static void scoutfs_readahead(struct readahead_control *rac)
{
struct inode *inode = rac->file->f_inode;
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *inode_lock = NULL;
int ret;

ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
if (ret)
return;

ret = scoutfs_data_wait_check(inode, readahead_pos(rac),
readahead_length(rac), SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ, NULL,
inode_lock);
if (ret == 0)
mpage_readahead(rac, scoutfs_get_block_read);

scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
}
#endif

static int scoutfs_writepage(struct page *page, struct writeback_control *wbc)
{
@@ -1000,7 +1081,7 @@ long scoutfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
goto out;
}

mutex_lock(&inode->i_mutex);
inode_lock(inode);

ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -1061,7 +1142,7 @@ out_extent:
up_write(&si->extent_sem);
out_mutex:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
mutex_unlock(&inode->i_mutex);
inode_unlock(inode);

out:
trace_scoutfs_data_fallocate(sb, ino, mode, offset, len, ret);
@@ -1147,9 +1228,9 @@ static void truncate_inode_pages_extent(struct inode *inode, u64 start, u64 len)
* explained above the move_blocks ioctl argument structure definition.
*
* The caller has processed the ioctl args and performed the most basic
* inode checks, but we perform more detailed inode checks once we have
* the inode lock and refreshed inodes. Our job is to safely lock the
* two files and move the extents.
* argument sanity and inode checks, but we perform more detailed inode
* checks once we have the inode lock and refreshed inodes. Our job is
* to safely lock the two files and move the extents.
*/
#define MOVE_DATA_EXTENTS_PER_HOLD 16
int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
@@ -1164,7 +1245,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
struct data_ext_args from_args;
struct data_ext_args to_args;
struct scoutfs_extent ext;
struct timespec cur_time;
struct kc_timespec cur_time;
LIST_HEAD(locks);
bool done = false;
loff_t from_size;
@@ -1208,6 +1289,16 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
from_iblock = from_off >> SCOUTFS_BLOCK_SM_SHIFT;
count = (byte_len + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
to_iblock = to_off >> SCOUTFS_BLOCK_SM_SHIFT;
from_start = from_iblock;

/* only move extent blocks inside i_size, careful not to wrap */
from_size = i_size_read(from);
if (from_off >= from_size) {
ret = 0;
goto out;
}
if (from_off + byte_len > from_size)
count = ((from_size - from_off) + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;

if (S_ISDIR(from->i_mode) || S_ISDIR(to->i_mode)) {
ret = -EISDIR;
@@ -1275,7 +1366,7 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,

/* find the next extent to move */
ret = scoutfs_ext_next(sb, &data_ext_ops, &from_args,
from_iblock, 1, &ext);
from_start, 1, &ext);
if (ret < 0) {
if (ret == -ENOENT) {
done = true;
@@ -1284,9 +1375,8 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
break;
}

/* only move extents within count and i_size */
if (ext.start >= from_iblock + count ||
ext.start >= i_size_read(from)) {
/* done if next extent starts after moving region */
if (ext.start >= from_iblock + count) {
done = true;
ret = 0;
break;
@@ -1294,13 +1384,15 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,

from_start = max(ext.start, from_iblock);
map = ext.map + (from_start - ext.start);
len = min3(from_iblock + count,
round_up((u64)i_size_read(from),
SCOUTFS_BLOCK_SM_SIZE),
ext.start + ext.len) - from_start;

len = min(from_iblock + count, ext.start + ext.len) - from_start;
to_start = to_iblock + (from_start - from_iblock);

/* we'd get stuck, shouldn't happen */
if (WARN_ON_ONCE(len == 0)) {
ret = -EIO;
goto out;
}

if (is_stage) {
ret = scoutfs_ext_next(sb, &data_ext_ops, &to_args,
to_start, 1, &off_ext);
@@ -1362,13 +1454,19 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
i_size_read(from);
i_size_write(to, to_size);
}

/* find next after moved extent, avoiding wrapping */
if (from_start + len < from_start)
from_start = from_iblock + count + 1;
else
from_start += len;
}

up_write(&from_si->extent_sem);
up_write(&to_si->extent_sem);

cur_time = CURRENT_TIME;
cur_time = current_time(from);
if (!is_stage) {
to->i_ctime = to->i_mtime = cur_time;
inode_inc_iversion(to);
@@ -1455,7 +1553,7 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
if (ret)
goto out;

mutex_lock(&inode->i_mutex);
inode_lock(inode);
down_read(&si->extent_sem);

ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
@@ -1509,7 +1607,7 @@ int scoutfs_data_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
unlock:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
up_read(&si->extent_sem);
mutex_unlock(&inode->i_mutex);
inode_unlock(inode);

out:
if (ret == 1)
@@ -1709,6 +1807,37 @@ int scoutfs_data_wait_check_iov(struct inode *inode, const struct iovec *iov,
return ret;
}

int scoutfs_data_wait_check_iter(struct inode *inode, loff_t pos, struct iov_iter *iter,
u8 sef, u8 op, struct scoutfs_data_wait *dw,
struct scoutfs_lock *lock)
{
size_t count = iov_iter_count(iter);
size_t off = iter->iov_offset;
const struct iovec *iov;
size_t len;
int ret = 0;

for (iov = iter->iov; count > 0; iov++) {
len = iov->iov_len - off;
if (len == 0)
continue;

/* aren't we waiting on too much data here ? */
ret = scoutfs_data_wait_check(inode, pos, len,
sef, op, dw, lock);
if (ret != 0)
break;

pos += len;
count -= len;
off = 0;
}

return ret;
}

int scoutfs_data_wait(struct inode *inode, struct scoutfs_data_wait *dw)
{
DECLARE_DATA_WAIT_ROOT(inode->i_sb, rt);
@@ -1799,7 +1928,11 @@ int scoutfs_data_waiting(struct super_block *sb, u64 ino, u64 iblock,

const struct address_space_operations scoutfs_file_aops = {
.readpage = scoutfs_readpage,
#ifndef KC_FILE_AOPS_READAHEAD
.readpages = scoutfs_readpages,
#else
.readahead = scoutfs_readahead,
#endif
.writepage = scoutfs_writepage,
.writepages = scoutfs_writepages,
.write_begin = scoutfs_write_begin,
@@ -1807,10 +1940,15 @@ const struct address_space_operations scoutfs_file_aops = {
};

const struct file_operations scoutfs_file_fops = {
#ifdef KC_LINUX_HAVE_FOP_AIO_READ
.read = do_sync_read,
.write = do_sync_write,
.aio_read = scoutfs_file_aio_read,
.aio_write = scoutfs_file_aio_write,
#else
.read_iter = scoutfs_file_read_iter,
.write_iter = scoutfs_file_write_iter,
#endif
.unlocked_ioctl = scoutfs_ioctl,
.fsync = scoutfs_file_fsync,
.llseek = scoutfs_file_llseek,

@@ -43,6 +43,9 @@ extern const struct file_operations scoutfs_file_fops;
struct scoutfs_alloc;
struct scoutfs_block_writer;

int scoutfs_get_block_write(struct inode *inode, sector_t iblock, struct buffer_head *bh,
int create);

int scoutfs_data_truncate_items(struct super_block *sb, struct inode *inode,
u64 ino, u64 iblock, u64 last, bool offline,
struct scoutfs_lock *lock);
@@ -62,6 +65,9 @@ int scoutfs_data_wait_check_iov(struct inode *inode, const struct iovec *iov,
unsigned long nr_segs, loff_t pos, u8 sef,
u8 op, struct scoutfs_data_wait *ow,
struct scoutfs_lock *lock);
int scoutfs_data_wait_check_iter(struct inode *inode, loff_t pos, struct iov_iter *iter,
u8 sef, u8 op, struct scoutfs_data_wait *ow,
struct scoutfs_lock *lock);
bool scoutfs_data_wait_found(struct scoutfs_data_wait *ow);
int scoutfs_data_wait(struct inode *inode,
struct scoutfs_data_wait *ow);
633 kmod/src/dir.c
File diff suppressed because it is too large
@@ -5,14 +5,22 @@
#include "lock.h"

extern const struct file_operations scoutfs_dir_fops;
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
extern const struct inode_operations_wrapper scoutfs_dir_iops;
#else
extern const struct inode_operations scoutfs_dir_iops;
#endif
extern const struct inode_operations scoutfs_symlink_iops;

extern const struct dentry_operations scoutfs_dentry_ops;

struct scoutfs_link_backref_entry {
struct list_head head;
u64 dir_ino;
u64 dir_pos;
u16 name_len;
u8 d_type;
bool last;
struct scoutfs_dirent dent;
/* the full name is allocated and stored in dent.name[] */
};
@@ -22,14 +30,10 @@ int scoutfs_dir_get_backref_path(struct super_block *sb, u64 ino, u64 dir_ino,
void scoutfs_dir_free_backref_path(struct super_block *sb,
struct list_head *list);

int scoutfs_dir_add_next_linkref(struct super_block *sb, u64 ino,
u64 dir_ino, u64 dir_pos,
struct list_head *list);
int scoutfs_dir_add_next_linkrefs(struct super_block *sb, u64 ino, u64 dir_ino, u64 dir_pos,
int count, struct list_head *list);

int scoutfs_symlink_drop(struct super_block *sb, u64 ino,
struct scoutfs_lock *lock, u64 i_size);

int scoutfs_dir_init(void);
void scoutfs_dir_exit(void);

#endif

@@ -114,8 +114,8 @@ static struct dentry *scoutfs_get_parent(struct dentry *child)
int ret;
u64 ino;

ret = scoutfs_dir_add_next_linkref(sb, scoutfs_ino(inode), 0, 0, &list);
if (ret)
ret = scoutfs_dir_add_next_linkrefs(sb, scoutfs_ino(inode), 0, 0, 1, &list);
if (ret < 0)
return ERR_PTR(ret);

ent = list_first_entry(&list, struct scoutfs_link_backref_entry, head);
@@ -138,9 +138,9 @@ static int scoutfs_get_name(struct dentry *parent, char *name,
LIST_HEAD(list);
int ret;

ret = scoutfs_dir_add_next_linkref(sb, scoutfs_ino(inode), dir_ino,
0, &list);
if (ret)
ret = scoutfs_dir_add_next_linkrefs(sb, scoutfs_ino(inode), dir_ino,
0, 1, &list);
if (ret < 0)
return ret;

ret = -ENOENT;
138 kmod/src/file.c
@@ -29,6 +29,7 @@
#include "per_task.h"
#include "omap.h"

#ifdef KC_LINUX_HAVE_FOP_AIO_READ
/*
* Start a high level file read. We check for offline extents in the
* read region here so that we only check the extents once. We use the
@@ -42,27 +43,27 @@ ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;

retry:
/* protect checked extents from release */
mutex_lock(&inode->i_mutex);
inode_lock(inode);
atomic_inc(&inode->i_dio_count);
mutex_unlock(&inode->i_mutex);
inode_unlock(inode);

ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
if (ret)
goto out;

if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
ret = scoutfs_data_wait_check_iov(inode, iov, nr_segs, pos,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ,
&dw, inode_lock);
&dw, scoutfs_inode_lock);
if (ret != 0)
goto out;
} else {
@@ -74,7 +75,7 @@ retry:
out:
inode_dio_done(inode);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_READ);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_READ);

if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
@@ -92,7 +93,7 @@ ssize_t scoutfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *inode_lock = NULL;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;
@@ -101,22 +102,22 @@ ssize_t scoutfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
return 0;

retry:
mutex_lock(&inode->i_mutex);
inode_lock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &inode_lock);
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
if (ret)
goto out;

ret = scoutfs_complete_truncate(inode, inode_lock);
ret = scoutfs_complete_truncate(inode, scoutfs_inode_lock);
if (ret)
goto out;

if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, inode_lock)) {
if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
/* data_version is per inode, whole file must be online */
ret = scoutfs_data_wait_check(inode, 0, i_size_read(inode),
SEF_OFFLINE,
SCOUTFS_IOC_DWO_WRITE,
&dw, inode_lock);
&dw, scoutfs_inode_lock);
if (ret != 0)
goto out;
}
@@ -127,8 +128,8 @@ retry:

out:
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, inode_lock, SCOUTFS_LOCK_WRITE);
mutex_unlock(&inode->i_mutex);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);

if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
@@ -146,6 +147,113 @@ out:

return ret;
}
#else
ssize_t scoutfs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;

retry:
/* protect checked extents from release */
inode_lock(inode);
atomic_inc(&inode->i_dio_count);
inode_unlock(inode);

ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
if (ret)
goto out;

if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
ret = scoutfs_data_wait_check_iter(inode, iocb->ki_pos, to,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_READ,
&dw, scoutfs_inode_lock);
if (ret != 0)
goto out;
} else {
WARN_ON_ONCE(true);
}

ret = generic_file_read_iter(iocb, to);

out:
inode_dio_end(inode);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_READ);

if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
if (ret == 0)
goto retry;
}
return ret;
}

ssize_t scoutfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file_inode(file);
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *scoutfs_inode_lock = NULL;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
DECLARE_DATA_WAIT(dw);
int ret;
int written;

retry:
inode_lock(inode);
ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
SCOUTFS_LKF_REFRESH_INODE, inode, &scoutfs_inode_lock);
if (ret)
goto out;

ret = generic_write_checks(iocb, from);
if (ret <= 0)
goto out;

ret = scoutfs_complete_truncate(inode, scoutfs_inode_lock);
if (ret)
goto out;

if (scoutfs_per_task_add_excl(&si->pt_data_lock, &pt_ent, scoutfs_inode_lock)) {
/* data_version is per inode, whole file must be online */
ret = scoutfs_data_wait_check_iter(inode, iocb->ki_pos, from,
SEF_OFFLINE,
SCOUTFS_IOC_DWO_WRITE,
&dw, scoutfs_inode_lock);
if (ret != 0)
goto out;
}

/* XXX: remove SUID bit */

written = __generic_file_write_iter(iocb, from);

out:
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
scoutfs_unlock(sb, scoutfs_inode_lock, SCOUTFS_LOCK_WRITE);
inode_unlock(inode);

if (scoutfs_data_wait_found(&dw)) {
ret = scoutfs_data_wait(inode, &dw);
if (ret == 0)
goto retry;
}

if (ret > 0 || ret == -EIOCBQUEUED)
ret = generic_write_sync(iocb, written);

return written ? written : ret;
}
#endif

int scoutfs_permission(struct inode *inode, int mask)
{

@@ -1,10 +1,15 @@
#ifndef _SCOUTFS_FILE_H_
#define _SCOUTFS_FILE_H_

#ifdef KC_LINUX_HAVE_FOP_AIO_READ
ssize_t scoutfs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos);
ssize_t scoutfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos);
#else
ssize_t scoutfs_file_read_iter(struct kiocb *, struct iov_iter *);
ssize_t scoutfs_file_write_iter(struct kiocb *, struct iov_iter *);
#endif
int scoutfs_permission(struct inode *inode, int mask);
loff_t scoutfs_file_llseek(struct file *file, loff_t offset, int whence);
@@ -78,11 +78,6 @@ struct forest_refs {
struct scoutfs_block_ref logs_ref;
};

/* initialize some refs that initially aren't equal */
#define DECLARE_STALE_TRACKING_SUPER_REFS(a, b) \
struct forest_refs a = {{cpu_to_le64(0),}}; \
struct forest_refs b = {{cpu_to_le64(1),}}

struct forest_bloom_nrs {
unsigned int nrs[SCOUTFS_FOREST_BLOOM_NRS];
};
@@ -136,11 +131,11 @@ static struct scoutfs_block *read_bloom_ref(struct super_block *sb, struct scout
int scoutfs_forest_next_hint(struct super_block *sb, struct scoutfs_key *key,
struct scoutfs_key *next)
{
DECLARE_STALE_TRACKING_SUPER_REFS(prev_refs, refs);
struct scoutfs_net_roots roots;
struct scoutfs_btree_root item_root;
struct scoutfs_log_trees *lt;
SCOUTFS_BTREE_ITEM_REF(iref);
DECLARE_SAVED_REFS(saved);
struct scoutfs_key found;
struct scoutfs_key ltk;
bool checked_fs;
@@ -155,8 +150,6 @@ retry:
goto out;

trace_scoutfs_forest_using_roots(sb, &roots.fs_root, &roots.logs_root);
refs.fs_ref = roots.fs_root.ref;
refs.logs_ref = roots.logs_root.ref;
scoutfs_key_init_log_trees(&ltk, 0, 0);
checked_fs = false;
@@ -212,14 +205,10 @@ retry:
}
}

if (ret == -ESTALE) {
if (memcmp(&prev_refs, &refs, sizeof(refs)) == 0)
return -EIO;
prev_refs = refs;
ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.fs_root.ref, &roots.logs_root.ref);
if (ret == -ESTALE)
goto retry;
}
out:

return ret;
}

@@ -541,9 +530,8 @@ void scoutfs_forest_dec_inode_count(struct super_block *sb)

/*
* Return the total inode count from the super block and all the
* log_btrees it references. This assumes it's working with a block
* reference hierarchy that should be fully consistent. If we see
* ESTALE we've hit persistent corruption.
* log_btrees it references. ESTALE from read blocks is returned to the
* caller who is expected to retry or return hard errors.
*/
int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_block *super,
u64 *inode_count)
@@ -572,8 +560,6 @@ int scoutfs_forest_inode_count(struct super_block *sb, struct scoutfs_super_bloc
if (ret < 0) {
if (ret == -ENOENT)
ret = 0;
else if (ret == -ESTALE)
ret = -EIO;
break;
}
}

@@ -683,16 +683,19 @@ struct scoutfs_xattr_totl_val {
#define SCOUTFS_QUORUM_ELECT_VAR_MS 100

/*
* Once a leader is elected they send out heartbeats at regular
* intervals to force members to wait the much longer heartbeat timeout.
* Once heartbeat timeout expires without receiving a heartbeat they'll
* switch over the performing elections.
* Once a leader is elected they send heartbeat messages to all quorum
* members at regular intervals to force members to wait the much longer
* heartbeat timeout. Once the heartbeat timeout expires without
* receiving a heartbeat message a member will start an election.
*
* These determine how long it could take members to notice that a
* leader has gone silent and start to elect a new leader.
* leader has gone silent and start to elect a new leader. The
* heartbeat timeout can be changed at run time by options.
*/
#define SCOUTFS_QUORUM_HB_IVAL_MS 100
#define SCOUTFS_QUORUM_HB_TIMEO_MS (5 * MSEC_PER_SEC)
#define SCOUTFS_QUORUM_MIN_HB_TIMEO_MS (2 * MSEC_PER_SEC)
#define SCOUTFS_QUORUM_DEF_HB_TIMEO_MS (10 * MSEC_PER_SEC)
#define SCOUTFS_QUORUM_MAX_HB_TIMEO_MS (60 * MSEC_PER_SEC)

/*
* A newly elected leader will give fencing some time before giving up and
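For a sense of scale on the timeouts above: heartbeats go out every 100ms, so the default 10s timeout lets roughly 100 consecutive heartbeats go missing before a member starts an election. The 2s floor keeps the timeout well above a handful of lost packets, and the 60s ceiling bounds how far failover detection can be stretched by the run-time option.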
228 kmod/src/inode.c
@@ -19,6 +19,8 @@
#include <linux/pagemap.h>
#include <linux/sched.h>
#include <linux/list_sort.h>
#include <linux/workqueue.h>
#include <linux/buffer_head.h>

#include "format.h"
#include "super.h"
@@ -36,6 +38,7 @@
#include "omap.h"
#include "forest.h"
#include "btree.h"
#include "acl.h"

/*
* XXX
@@ -66,8 +69,10 @@ struct inode_sb_info {

struct delayed_work orphan_scan_dwork;

struct workqueue_struct *iput_workq;
struct work_struct iput_work;
struct llist_head iput_llist;
spinlock_t iput_lock;
struct list_head iput_list;
};

#define DECLARE_INODE_SB_INFO(sb, name) \
@@ -94,7 +99,9 @@ static void scoutfs_inode_ctor(void *obj)
init_rwsem(&si->xattr_rwsem);
INIT_LIST_HEAD(&si->writeback_entry);
scoutfs_lock_init_coverage(&si->ino_lock_cov);
atomic_set(&si->iput_count, 0);
INIT_LIST_HEAD(&si->iput_head);
si->iput_count = 0;
si->iput_flags = 0;

inode_init_once(&si->inode);
}
@@ -136,20 +143,26 @@ void scoutfs_destroy_inode(struct inode *inode)
static const struct inode_operations scoutfs_file_iops = {
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.removexattr = generic_removexattr,
#endif
.listxattr = scoutfs_listxattr,
.removexattr = scoutfs_removexattr,
.get_acl = scoutfs_get_acl,
.fiemap = scoutfs_data_fiemap,
};

static const struct inode_operations scoutfs_special_iops = {
.getattr = scoutfs_getattr,
.setattr = scoutfs_setattr,
.setxattr = scoutfs_setxattr,
.getxattr = scoutfs_getxattr,
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.removexattr = generic_removexattr,
#endif
.listxattr = scoutfs_listxattr,
.removexattr = scoutfs_removexattr,
.get_acl = scoutfs_get_acl,
};

/*
@@ -165,8 +178,12 @@ static void set_inode_ops(struct inode *inode)
inode->i_fop = &scoutfs_file_fops;
break;
case S_IFDIR:
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
inode->i_op = &scoutfs_dir_iops.ops;
inode->i_flags |= S_IOPS_WRAPPER;
#else
inode->i_op = &scoutfs_dir_iops;
#endif
inode->i_fop = &scoutfs_dir_fops;
break;
case S_IFLNK:
@@ -238,7 +255,7 @@ static void load_inode(struct inode *inode, struct scoutfs_inode *cinode)
struct scoutfs_inode_info *si = SCOUTFS_I(inode);

i_size_write(inode, le64_to_cpu(cinode->size));
inode->i_version = le64_to_cpu(cinode->version);
inode_set_iversion_queried(inode, le64_to_cpu(cinode->version));
set_nlink(inode, le32_to_cpu(cinode->nlink));
i_uid_write(inode, le32_to_cpu(cinode->uid));
i_gid_write(inode, le32_to_cpu(cinode->gid));
@@ -322,7 +339,6 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock)
load_inode(inode, &sinode);
atomic64_set(&si->last_refreshed, refresh_gen);
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
si->drop_invalidated = false;
}
} else {
ret = 0;
@@ -332,10 +348,17 @@ int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock)
return ret;
}

#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat)
{
struct inode *inode = dentry->d_inode;
#else
int scoutfs_getattr(const struct path *path, struct kstat *stat,
u32 request_mask, unsigned int query_flags)
{
struct inode *inode = d_inode(path->dentry);
#endif
struct super_block *sb = inode->i_sb;
struct scoutfs_lock *lock = NULL;
int ret;
@@ -354,6 +377,7 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
{
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
struct super_block *sb = inode->i_sb;
SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
LIST_HEAD(ind_locks);
int ret;

@@ -364,17 +388,25 @@ static int set_inode_size(struct inode *inode, struct scoutfs_lock *lock,
if (ret)
return ret;

scoutfs_per_task_add(&si->pt_data_lock, &pt_ent, lock);
ret = block_truncate_page(inode->i_mapping, new_size, scoutfs_get_block_write);
scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
if (ret < 0)
goto unlock;
scoutfs_inode_queue_writeback(inode);

if (new_size != i_size_read(inode))
scoutfs_inode_inc_data_version(inode);

truncate_setsize(inode, new_size);
inode->i_ctime = inode->i_mtime = CURRENT_TIME;
inode->i_ctime = inode->i_mtime = current_time(inode);
if (truncate)
si->flags |= SCOUTFS_INO_FLAG_TRUNCATE;
scoutfs_inode_set_data_seq(inode);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);

unlock:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);

@@ -450,8 +482,7 @@ retry:
SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
if (ret)
return ret;

ret = inode_change_ok(inode, attr);
ret = setattr_prepare(dentry, attr);
if (ret)
goto out;

@@ -479,9 +510,9 @@ retry:
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);

/* XXX callee locks instead? */
mutex_unlock(&inode->i_mutex);
inode_unlock(inode);
ret = scoutfs_data_wait(inode, &dw);
mutex_lock(&inode->i_mutex);
inode_lock(inode);

if (ret == 0)
goto retry;
@@ -507,10 +538,15 @@ retry:
if (ret)
goto out;

ret = scoutfs_acl_chmod_locked(inode, attr, lock, &ind_locks);
if (ret < 0)
goto release;

setattr_copy(inode, attr);
inode_inc_iversion(inode);
scoutfs_update_inode_item(inode, lock, &ind_locks);

release:
scoutfs_release_trans(sb);
scoutfs_inode_index_unlock(sb, &ind_locks);
out:
@@ -728,7 +764,7 @@ struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf, int igf)
/* XXX ensure refresh, instead clear in drop_inode? */
si = SCOUTFS_I(inode);
atomic64_set(&si->last_refreshed, 0);
inode->i_version = 0;
inode_set_iversion_queried(inode, 0);
}

ret = scoutfs_inode_refresh(inode, lock);
@@ -776,7 +812,7 @@ static void store_inode(struct scoutfs_inode *cinode, struct inode *inode)
scoutfs_inode_get_onoff(inode, &online_blocks, &offline_blocks);

cinode->size = cpu_to_le64(i_size_read(inode));
cinode->version = cpu_to_le64(inode->i_version);
cinode->version = cpu_to_le64(inode_peek_iversion(inode));
cinode->nlink = cpu_to_le32(inode->i_nlink);
cinode->uid = cpu_to_le32(i_uid_read(inode));
cinode->gid = cpu_to_le32(i_gid_read(inode));
@@ -947,7 +983,8 @@ void scoutfs_inode_init_index_key(struct scoutfs_key *key, u8 type, u64 major,
static int update_index_items(struct super_block *sb,
struct scoutfs_inode_info *si, u64 ino, u8 type,
u64 major, u32 minor,
struct list_head *lock_list)
struct list_head *lock_list,
struct scoutfs_lock *primary)
{
struct scoutfs_lock *ins_lock;
struct scoutfs_lock *del_lock;
@@ -964,7 +1001,7 @@ static int update_index_items(struct super_block *sb,
scoutfs_inode_init_index_key(&ins, type, major, minor, ino);

ins_lock = find_index_lock(lock_list, type, major, minor, ino);
ret = scoutfs_item_create_force(sb, &ins, NULL, 0, ins_lock);
ret = scoutfs_item_create_force(sb, &ins, NULL, 0, ins_lock, primary);
if (ret || !will_del_index(si, type, major, minor))
return ret;

@@ -976,7 +1013,7 @@ static int update_index_items(struct super_block *sb,

del_lock = find_index_lock(lock_list, type, get_item_major(si, type),
get_item_minor(si, type), ino);
ret = scoutfs_item_delete_force(sb, &del, del_lock);
ret = scoutfs_item_delete_force(sb, &del, del_lock, primary);
if (ret) {
err = scoutfs_item_delete(sb, &ins, ins_lock);
BUG_ON(err);
@@ -988,7 +1025,8 @@ static int update_indices(struct super_block *sb,
static int update_indices(struct super_block *sb,
struct scoutfs_inode_info *si, u64 ino, umode_t mode,
struct scoutfs_inode *sinode,
struct list_head *lock_list)
struct list_head *lock_list,
struct scoutfs_lock *primary)
{
struct index_update {
u8 type;
@@ -1008,7 +1046,7 @@ static int update_indices(struct super_block *sb,
continue;

ret = update_index_items(sb, si, ino, upd->type, upd->major,
upd->minor, lock_list);
upd->minor, lock_list, primary);
if (ret)
break;
}
@@ -1048,7 +1086,7 @@ void scoutfs_update_inode_item(struct inode *inode, struct scoutfs_lock *lock,
/* only race with other inode field stores once */
store_inode(&sinode, inode);

ret = update_indices(sb, si, ino, inode->i_mode, &sinode, lock_list);
ret = update_indices(sb, si, ino, inode->i_mode, &sinode, lock_list, lock);
BUG_ON(ret);

scoutfs_inode_init_key(&key, ino);
@@ -1317,7 +1355,7 @@ void scoutfs_inode_index_unlock(struct super_block *sb, struct list_head *list)

/* this is called on final inode cleanup so enoent is fine */
static int remove_index(struct super_block *sb, u64 ino, u8 type, u64 major,
u32 minor, struct list_head *ind_locks)
u32 minor, struct list_head *ind_locks, struct scoutfs_lock *primary)
{
struct scoutfs_key key;
struct scoutfs_lock *lock;
@@ -1326,7 +1364,7 @@ static int remove_index(struct super_block *sb, u64 ino, u8 type, u64 major,
scoutfs_inode_init_index_key(&key, type, major, minor, ino);

lock = find_index_lock(ind_locks, type, major, minor, ino);
ret = scoutfs_item_delete_force(sb, &key, lock);
ret = scoutfs_item_delete_force(sb, &key, lock, primary);
if (ret == -ENOENT)
ret = 0;
return ret;
@@ -1343,16 +1381,17 @@ static int remove_index(struct super_block *sb, u64 ino, u8 type, u64 major,
*/
static int remove_index_items(struct super_block *sb, u64 ino,
struct scoutfs_inode *sinode,
struct list_head *ind_locks)
struct list_head *ind_locks,
struct scoutfs_lock *primary)
{
umode_t mode = le32_to_cpu(sinode->mode);
int ret;

ret = remove_index(sb, ino, SCOUTFS_INODE_INDEX_META_SEQ_TYPE,
le64_to_cpu(sinode->meta_seq), 0, ind_locks);
le64_to_cpu(sinode->meta_seq), 0, ind_locks, primary);
if (ret == 0 && S_ISREG(mode))
ret = remove_index(sb, ino, SCOUTFS_INODE_INDEX_DATA_SEQ_TYPE,
le64_to_cpu(sinode->data_seq), 0, ind_locks);
le64_to_cpu(sinode->data_seq), 0, ind_locks, primary);
return ret;
}

@@ -1442,7 +1481,6 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
|
||||
si->have_item = false;
|
||||
atomic64_set(&si->last_refreshed, lock->refresh_gen);
|
||||
scoutfs_lock_add_coverage(sb, lock, &si->ino_lock_cov);
|
||||
si->drop_invalidated = false;
|
||||
si->flags = 0;
|
||||
|
||||
scoutfs_inode_set_meta_seq(inode);
|
||||
@@ -1451,7 +1489,7 @@ int scoutfs_new_inode(struct super_block *sb, struct inode *dir, umode_t mode, d
|
||||
inode->i_ino = ino; /* XXX overflow */
|
||||
inode_init_owner(inode, dir, mode);
|
||||
inode_set_bytes(inode, 0);
|
||||
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
|
||||
inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
|
||||
inode->i_rdev = rdev;
|
||||
set_inode_ops(inode);
|
||||
|
||||
@@ -1485,22 +1523,24 @@ static void init_orphan_key(struct scoutfs_key *key, u64 ino)
|
||||
* zone under a write only lock while the caller has the inode protected
|
||||
* by a write lock.
|
||||
*/
|
||||
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock)
|
||||
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
|
||||
struct scoutfs_lock *primary)
|
||||
{
|
||||
struct scoutfs_key key;
|
||||
|
||||
init_orphan_key(&key, ino);
|
||||
|
||||
return scoutfs_item_create_force(sb, &key, NULL, 0, lock);
|
||||
return scoutfs_item_create_force(sb, &key, NULL, 0, lock, primary);
|
||||
}
|
||||
|
||||
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock)
|
||||
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
|
||||
struct scoutfs_lock *primary)
|
||||
{
|
||||
struct scoutfs_key key;
|
||||
|
||||
init_orphan_key(&key, ino);
|
||||
|
||||
return scoutfs_item_delete_force(sb, &key, lock);
|
||||
return scoutfs_item_delete_force(sb, &key, lock, primary);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -1553,7 +1593,7 @@ retry:
|
||||
|
||||
release = true;
|
||||
|
||||
ret = remove_index_items(sb, ino, sinode, &ind_locks);
|
||||
ret = remove_index_items(sb, ino, sinode, &ind_locks, lock);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
@@ -1568,7 +1608,7 @@ retry:
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
ret = scoutfs_inode_orphan_delete(sb, ino, orph_lock);
|
||||
ret = scoutfs_inode_orphan_delete(sb, ino, orph_lock, lock);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
@@ -1685,6 +1725,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
|
||||
struct scoutfs_lock *lock = NULL;
|
||||
struct scoutfs_inode sinode;
|
||||
struct scoutfs_key key;
|
||||
bool clear_trying = false;
|
||||
u64 group_nr;
|
||||
int bit_nr;
|
||||
int ret;
|
||||
@@ -1704,6 +1745,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
clear_trying = true;
|
||||
|
||||
/* can't delete if it's cached in local or remote mounts */
|
||||
if (scoutfs_omap_test(sb, ino) || test_bit_le(bit_nr, ldata->map.bits)) {
|
||||
@@ -1730,7 +1772,7 @@ static int try_delete_inode_items(struct super_block *sb, u64 ino)
|
||||
|
||||
ret = delete_inode_items(sb, ino, &sinode, lock, orph_lock);
|
||||
out:
|
||||
if (ldata)
|
||||
if (clear_trying)
|
||||
clear_bit(bit_nr, ldata->trying);
|
||||
|
||||
scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
|
||||
@@ -1740,18 +1782,18 @@ out:
|
||||
}
|
||||
|
||||
/*
|
||||
* As we drop an inode we need to decide to try and delete its items or
|
||||
* not, which is expensive. The two common cases we want to get right
|
||||
* both have cluster lock coverage and don't want to delete. Dropping
|
||||
* unused inodes during read lock invalidation has the current lock and
|
||||
* sees a nonzero nlink and knows not to delete. Final iput after a
|
||||
* local unlink also has a lock, sees a zero nlink, and tries to perform
|
||||
* item deletion in the task that dropped the last link, as users
|
||||
* expect.
|
||||
* As we evicted an inode we need to decide to try and delete its items
|
||||
* or not, which is expensive. We only try when we have lock coverage
|
||||
* and the inode has been unlinked. This catches the common case of
|
||||
* regular deletion so deletion will be performed in the final unlink
|
||||
* task. It also catches open-unlink or o_tmpfile that aren't cached on
|
||||
* other nodes.
|
||||
*
|
||||
* Evicting an inode outside of cluster locking is the odd slow path
|
||||
* that involves lock contention during use the worst cross-mount
|
||||
* open-unlink/delete case.
|
||||
* Inodes being evicted outside of lock coverage, by referenced dentries
|
||||
* or inodes that survived the attempt to drop them as their lock was
|
||||
* invalidated, will not try to delete. This means that cross-mount
|
||||
* open/unlink will almost certainly fall back to the orphan scanner to
|
||||
* perform final deletion.
|
||||
*/
|
||||
void scoutfs_evict_inode(struct inode *inode)
|
||||
{
|
||||
@@ -1767,7 +1809,7 @@ void scoutfs_evict_inode(struct inode *inode)
|
||||
/* clear before trying to delete tests */
|
||||
scoutfs_omap_clear(sb, ino);
|
||||
|
||||
if (!scoutfs_lock_is_covered(sb, &si->ino_lock_cov) || inode->i_nlink == 0)
|
||||
if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink == 0)
|
||||
try_delete_inode_items(sb, scoutfs_ino(inode));
|
||||
}
|
||||
|
||||
@@ -1792,30 +1834,56 @@ int scoutfs_drop_inode(struct inode *inode)
|
||||
{
|
||||
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
|
||||
struct super_block *sb = inode->i_sb;
|
||||
const bool covered = scoutfs_lock_is_covered(sb, &si->ino_lock_cov);
|
||||
|
||||
trace_scoutfs_drop_inode(sb, scoutfs_ino(inode), inode->i_nlink, inode_unhashed(inode),
|
||||
si->drop_invalidated);
|
||||
covered);
|
||||
|
||||
return si->drop_invalidated || !scoutfs_lock_is_covered(sb, &si->ino_lock_cov) ||
|
||||
generic_drop_inode(inode);
|
||||
return !covered || generic_drop_inode(inode);
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* These iput workers can be concurrent amongst cpus. This lets us get
|
||||
* some concurrency when these async final iputs end up performing very
|
||||
* expensive inode deletion. Typically they're dropping linked inodes
|
||||
* that lost lock coverage and the iput will evict without deleting.
|
||||
*
|
||||
* Keep in mind that the dputs in d_prune can ascend into parents and
|
||||
* end up performing the final iput->evict deletion on other inodes.
|
||||
*/
|
||||
static void iput_worker(struct work_struct *work)
|
||||
{
|
||||
struct inode_sb_info *inf = container_of(work, struct inode_sb_info, iput_work);
|
||||
struct scoutfs_inode_info *si;
|
||||
struct scoutfs_inode_info *tmp;
|
||||
struct llist_node *inodes;
|
||||
bool more;
|
||||
struct inode *inode;
|
||||
unsigned long count;
|
||||
unsigned long flags;
|
||||
|
||||
inodes = llist_del_all(&inf->iput_llist);
|
||||
spin_lock(&inf->iput_lock);
|
||||
while ((si = list_first_entry_or_null(&inf->iput_list, struct scoutfs_inode_info,
|
||||
iput_head))) {
|
||||
list_del_init(&si->iput_head);
|
||||
count = si->iput_count;
|
||||
flags = si->iput_flags;
|
||||
si->iput_count = 0;
|
||||
si->iput_flags = 0;
|
||||
spin_unlock(&inf->iput_lock);
|
||||
|
||||
llist_for_each_entry_safe(si, tmp, inodes, iput_llnode) {
|
||||
do {
|
||||
more = atomic_dec_return(&si->iput_count) > 0;
|
||||
iput(&si->inode);
|
||||
} while (more);
|
||||
inode = &si->inode;
|
||||
|
||||
/* can't touch during unmount, dcache destroys w/o locks */
|
||||
if ((flags & SI_IPUT_FLAG_PRUNE) && !inf->stopped)
|
||||
d_prune_aliases(inode);
|
||||
|
||||
while (count-- > 0)
|
||||
iput(inode);
|
||||
|
||||
/* can't touch inode after final iput */
|
||||
|
||||
spin_lock(&inf->iput_lock);
|
||||
}
|
||||
spin_unlock(&inf->iput_lock);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -1832,15 +1900,21 @@ static void iput_worker(struct work_struct *work)
|
||||
* Nothing stops multiple puts of an inode before the work runs so we
|
||||
* can track multiple puts in flight.
|
||||
*/
|
||||
void scoutfs_inode_queue_iput(struct inode *inode)
|
||||
void scoutfs_inode_queue_iput(struct inode *inode, unsigned long flags)
|
||||
{
|
||||
DECLARE_INODE_SB_INFO(inode->i_sb, inf);
|
||||
struct scoutfs_inode_info *si = SCOUTFS_I(inode);
|
||||
bool should_queue;
|
||||
|
||||
if (atomic_inc_return(&si->iput_count) == 1)
|
||||
llist_add(&si->iput_llnode, &inf->iput_llist);
|
||||
smp_wmb(); /* count and list visible before work executes */
|
||||
schedule_work(&inf->iput_work);
|
||||
spin_lock(&inf->iput_lock);
|
||||
si->iput_count++;
|
||||
si->iput_flags |= flags;
|
||||
if ((should_queue = list_empty(&si->iput_head)))
|
||||
list_add_tail(&si->iput_head, &inf->iput_list);
|
||||
spin_unlock(&inf->iput_lock);
|
||||
|
||||
if (should_queue)
|
||||
queue_work(inf->iput_workq, &inf->iput_work);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -2044,7 +2118,7 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
|
||||
trace_scoutfs_inode_walk_writeback(sb, scoutfs_ino(inode),
|
||||
write, ret);
|
||||
if (ret) {
|
||||
scoutfs_inode_queue_iput(inode);
|
||||
scoutfs_inode_queue_iput(inode, 0);
|
||||
goto out;
|
||||
}
|
||||
|
||||
@@ -2060,7 +2134,7 @@ int scoutfs_inode_walk_writeback(struct super_block *sb, bool write)
|
||||
if (!write)
|
||||
list_del_init(&si->writeback_entry);
|
||||
|
||||
scoutfs_inode_queue_iput(inode);
|
||||
scoutfs_inode_queue_iput(inode, 0);
|
||||
}
|
||||
|
||||
spin_unlock(&inf->writeback_lock);
|
||||
@@ -2085,7 +2159,15 @@ int scoutfs_inode_setup(struct super_block *sb)
|
||||
spin_lock_init(&inf->ino_alloc.lock);
|
||||
INIT_DELAYED_WORK(&inf->orphan_scan_dwork, inode_orphan_scan_worker);
|
||||
INIT_WORK(&inf->iput_work, iput_worker);
|
||||
init_llist_head(&inf->iput_llist);
|
||||
spin_lock_init(&inf->iput_lock);
|
||||
INIT_LIST_HEAD(&inf->iput_list);
|
||||
|
||||
/* re-entrant, worker locks with itself and queueing */
|
||||
inf->iput_workq = alloc_workqueue("scoutfs_inode_iput", WQ_UNBOUND, 0);
|
||||
if (!inf->iput_workq) {
|
||||
kfree(inf);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
sbi->inode_sb_info = inf;
|
||||
|
||||
@@ -2121,14 +2203,18 @@ void scoutfs_inode_flush_iput(struct super_block *sb)
|
||||
DECLARE_INODE_SB_INFO(sb, inf);
|
||||
|
||||
if (inf)
|
||||
flush_work(&inf->iput_work);
|
||||
flush_workqueue(inf->iput_workq);
|
||||
}
|
||||
|
||||
void scoutfs_inode_destroy(struct super_block *sb)
|
||||
{
|
||||
struct inode_sb_info *inf = SCOUTFS_SB(sb)->inode_sb_info;
|
||||
|
||||
kfree(inf);
|
||||
if (inf) {
|
||||
if (inf->iput_workq)
|
||||
destroy_workqueue(inf->iput_workq);
|
||||
kfree(inf);
|
||||
}
|
||||
}
|
||||
|
||||
void scoutfs_inode_exit(void)
|
||||
|
||||
@@ -22,7 +22,7 @@ struct scoutfs_inode_info {
|
||||
u64 online_blocks;
|
||||
u64 offline_blocks;
|
||||
u32 flags;
|
||||
struct timespec crtime;
|
||||
struct kc_timespec crtime;
|
||||
|
||||
/*
|
||||
* Protects per-inode extent items, most particularly readers
|
||||
@@ -56,14 +56,16 @@ struct scoutfs_inode_info {
|
||||
|
||||
struct scoutfs_lock_coverage ino_lock_cov;
|
||||
|
||||
/* drop if i_count hits 0, allows drop while invalidate holds coverage */
|
||||
bool drop_invalidated;
|
||||
struct llist_node iput_llnode;
|
||||
atomic_t iput_count;
|
||||
struct list_head iput_head;
|
||||
unsigned long iput_count;
|
||||
unsigned long iput_flags;
|
||||
|
||||
struct inode inode;
|
||||
};
|
||||
|
||||
/* try to prune dcache aliases with queued iput */
|
||||
#define SI_IPUT_FLAG_PRUNE (1 << 0)
|
||||
|
||||
static inline struct scoutfs_inode_info *SCOUTFS_I(struct inode *inode)
|
||||
{
|
||||
return container_of(inode, struct scoutfs_inode_info, inode);
|
||||
@@ -78,7 +80,7 @@ struct inode *scoutfs_alloc_inode(struct super_block *sb);
|
||||
void scoutfs_destroy_inode(struct inode *inode);
|
||||
int scoutfs_drop_inode(struct inode *inode);
|
||||
void scoutfs_evict_inode(struct inode *inode);
|
||||
void scoutfs_inode_queue_iput(struct inode *inode);
|
||||
void scoutfs_inode_queue_iput(struct inode *inode, unsigned long flags);
|
||||
|
||||
#define SCOUTFS_IGF_LINKED (1 << 0) /* enoent if nlink == 0 */
|
||||
struct inode *scoutfs_iget(struct super_block *sb, u64 ino, int lkf, int igf);
|
||||
@@ -121,12 +123,19 @@ void scoutfs_inode_get_onoff(struct inode *inode, s64 *on, s64 *off);
|
||||
int scoutfs_complete_truncate(struct inode *inode, struct scoutfs_lock *lock);
|
||||
|
||||
int scoutfs_inode_refresh(struct inode *inode, struct scoutfs_lock *lock);
|
||||
#ifdef KC_LINUX_HAVE_RHEL_IOPS_WRAPPER
|
||||
int scoutfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
|
||||
struct kstat *stat);
|
||||
#else
|
||||
int scoutfs_getattr(const struct path *path, struct kstat *stat,
|
||||
u32 request_mask, unsigned int query_flags);
|
||||
#endif
|
||||
int scoutfs_setattr(struct dentry *dentry, struct iattr *attr);
|
||||
|
||||
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
|
||||
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock);
|
||||
int scoutfs_inode_orphan_create(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
|
||||
struct scoutfs_lock *primary);
|
||||
int scoutfs_inode_orphan_delete(struct super_block *sb, u64 ino, struct scoutfs_lock *lock,
|
||||
struct scoutfs_lock *primary);
|
||||
void scoutfs_inode_schedule_orphan_dwork(struct super_block *sb);
|
||||
|
||||
void scoutfs_inode_queue_writeback(struct inode *inode);
|
||||
|
||||
kmod/src/ioctl.c (128 changed lines)
@@ -22,6 +22,7 @@
 #include <linux/sched.h>
 #include <linux/aio.h>
 #include <linux/list_sort.h>
+#include <linux/backing-dev.h>

 #include "format.h"
 #include "key.h"
@@ -302,7 +303,7 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)
	if (ret)
		return ret;

-	mutex_lock(&inode->i_mutex);
+	inode_lock(inode);

	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
				 SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -351,7 +352,7 @@ static long scoutfs_ioc_release(struct file *file, unsigned long arg)

out:
	scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
-	mutex_unlock(&inode->i_mutex);
+	inode_unlock(inode);
	mnt_drop_write_file(file);

	trace_scoutfs_ioc_release_ret(sb, scoutfs_ino(inode), ret);
@@ -393,7 +394,7 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)
		goto out;
	}

-	mutex_lock(&inode->i_mutex);
+	inode_lock(inode);

	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ,
				 SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -411,7 +412,7 @@ static long scoutfs_ioc_data_wait_err(struct file *file, unsigned long arg)

	scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
unlock:
-	mutex_unlock(&inode->i_mutex);
+	inode_unlock(inode);
	iput(inode);
out:
	return ret;
@@ -448,7 +449,6 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
{
	struct inode *inode = file_inode(file);
	struct super_block *sb = inode->i_sb;
-	struct address_space *mapping = inode->i_mapping;
	struct scoutfs_inode_info *si = SCOUTFS_I(inode);
	SCOUTFS_DECLARE_PER_TASK_ENTRY(pt_ent);
	struct scoutfs_ioctl_stage args;
@@ -480,8 +480,10 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
	/* the iocb is really only used for the file pointer :P */
	init_sync_kiocb(&kiocb, file);
	kiocb.ki_pos = args.offset;
+#ifdef KC_LINUX_AIO_KI_LEFT
	kiocb.ki_left = args.length;
	kiocb.ki_nbytes = args.length;
+#endif
	iov.iov_base = (void __user *)(unsigned long)args.buf_ptr;
	iov.iov_len = args.length;

@@ -489,7 +491,7 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
	if (ret)
		return ret;

-	mutex_lock(&inode->i_mutex);
+	inode_lock(inode);

	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
				 SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -516,7 +518,7 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
	}

	si->staging = true;
-	current->backing_dev_info = mapping->backing_dev_info;
+	current->backing_dev_info = inode_to_bdi(inode);

	pos = args.offset;
	written = 0;
@@ -533,7 +535,7 @@ static long scoutfs_ioc_stage(struct file *file, unsigned long arg)
out:
	scoutfs_per_task_del(&si->pt_data_lock, &pt_ent);
	scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
-	mutex_unlock(&inode->i_mutex);
+	inode_unlock(inode);
	mnt_drop_write_file(file);

	trace_scoutfs_ioc_stage_ret(sb, scoutfs_ino(inode), ret);
@@ -652,7 +654,7 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
	if (ret)
		goto out;

-	mutex_lock(&inode->i_mutex);
+	inode_lock(inode);

	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
				 SCOUTFS_LKF_REFRESH_INODE, inode, &lock);
@@ -696,7 +698,7 @@ static long scoutfs_ioc_setattr_more(struct file *file, unsigned long arg)
unlock:
	scoutfs_inode_index_unlock(sb, &ind_locks);
	scoutfs_unlock(sb, lock, SCOUTFS_LOCK_WRITE);
-	mutex_unlock(&inode->i_mutex);
+	inode_unlock(inode);
	mnt_drop_write_file(file);
out:

@@ -1398,6 +1400,110 @@ out:
	return ret ?: nr;
}

+/*
+ * Copy entries that point to an inode to the user's buffer.  We copy to
+ * userspace from copies of the entries that are acquired under a lock
+ * so that we don't fault while holding cluster locks.  It also gives us
+ * a chance to limit the amount of work under each lock hold.
+ */
+static long scoutfs_ioc_get_referring_entries(struct file *file, unsigned long arg)
+{
+	struct super_block *sb = file_inode(file)->i_sb;
+	struct scoutfs_ioctl_get_referring_entries gre;
+	struct scoutfs_link_backref_entry *bref = NULL;
+	struct scoutfs_link_backref_entry *bref_tmp;
+	struct scoutfs_ioctl_dirent __user *uent;
+	struct scoutfs_ioctl_dirent ent;
+	LIST_HEAD(list);
+	u64 copied;
+	int name_len;
+	int bytes;
+	long nr;
+	int ret;
+
+	if (!capable(CAP_DAC_READ_SEARCH))
+		return -EPERM;
+
+	if (copy_from_user(&gre, (void __user *)arg, sizeof(gre)))
+		return -EFAULT;
+
+	uent = (void __user *)(unsigned long)gre.entries_ptr;
+	copied = 0;
+	nr = 0;
+
+	/* use entry as cursor between calls */
+	ent.dir_ino = gre.dir_ino;
+	ent.dir_pos = gre.dir_pos;
+
+	for (;;) {
+		ret = scoutfs_dir_add_next_linkrefs(sb, gre.ino, ent.dir_ino, ent.dir_pos, 1024,
+						    &list);
+		if (ret < 0) {
+			if (ret == -ENOENT)
+				ret = 0;
+			goto out;
+		}
+
+		/* _add_next adds each entry to the head, _reverse for key order */
+		list_for_each_entry_safe_reverse(bref, bref_tmp, &list, head) {
+			list_del_init(&bref->head);
+
+			name_len = bref->name_len;
+			bytes = ALIGN(offsetof(struct scoutfs_ioctl_dirent, name[name_len + 1]),
+				      16);
+			if (copied + bytes > gre.entries_bytes) {
+				ret = -EINVAL;
+				goto out;
+			}
+
+			ent.dir_ino = bref->dir_ino;
+			ent.dir_pos = bref->dir_pos;
+			ent.ino = gre.ino;
+			ent.entry_bytes = bytes;
+			ent.flags = bref->last ? SCOUTFS_IOCTL_DIRENT_FLAG_LAST : 0;
+			ent.d_type = bref->d_type;
+			ent.name_len = name_len;
+
+			if (copy_to_user(uent, &ent, sizeof(struct scoutfs_ioctl_dirent)) ||
+			    copy_to_user(&uent->name[0], bref->dent.name, name_len) ||
+			    put_user('\0', &uent->name[name_len])) {
+				ret = -EFAULT;
+				goto out;
+			}
+
+			kfree(bref);
+			bref = NULL;
+
+			uent = (void __user *)uent + bytes;
+			copied += bytes;
+			nr++;
+
+			if (nr == LONG_MAX || (ent.flags & SCOUTFS_IOCTL_DIRENT_FLAG_LAST)) {
+				ret = 0;
+				goto out;
+			}
+		}
+
+		/* advance cursor pos from last copied entry */
+		if (++ent.dir_pos == 0) {
+			if (++ent.dir_ino == 0) {
+				ret = 0;
+				goto out;
+			}
+		}
+	}
+
+	ret = 0;
+out:
+	kfree(bref);
+	list_for_each_entry_safe(bref, bref_tmp, &list, head) {
+		list_del_init(&bref->head);
+		kfree(bref);
+	}
+
+	return nr ?: ret;
+}
+
 long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
	switch (cmd) {
@@ -1433,6 +1539,8 @@ long scoutfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
		return scoutfs_ioc_read_xattr_totals(file, arg);
	case SCOUTFS_IOC_GET_ALLOCATED_INOS:
		return scoutfs_ioc_get_allocated_inos(file, arg);
+	case SCOUTFS_IOC_GET_REFERRING_ENTRIES:
+		return scoutfs_ioc_get_referring_entries(file, arg);
	}

	return -ENOTTY;
kmod/src/ioctl.h (114 changed lines)
@@ -559,4 +559,118 @@ struct scoutfs_ioctl_get_allocated_inos {
 #define SCOUTFS_IOC_GET_ALLOCATED_INOS \
	_IOW(SCOUTFS_IOCTL_MAGIC, 16, struct scoutfs_ioctl_get_allocated_inos)

+/*
+ * Get directory entries that refer to a specific inode.
+ *
+ * @ino: The target ino that we're finding referring entries to.
+ * Constant across all the calls that make up an iteration over all the
+ * inode's entries.
+ *
+ * @dir_ino: The inode number of a directory containing the entry to our
+ * inode to search from.  If this parent directory contains no more
+ * entries to our inode then we'll search through other parent directory
+ * inodes in inode order.
+ *
+ * @dir_pos: The position in the dir_ino parent directory of the entry
+ * to our inode to search from.  If there is no entry at this position
+ * then we'll search through other entry positions in increasing order.
+ * If we exhaust the parent directory then we'll search through
+ * additional parent directories in inode order.
+ *
+ * @entries_ptr: A pointer to the buffer where found entries will be
+ * stored.  The pointer must be aligned to 16 bytes.
+ *
+ * @entries_bytes: The size of the buffer that will contain entries.
+ *
+ * To start iterating set the desired target ino, dir_ino to 0, dir_pos
+ * to 0, and set result_ptr and _bytes to a sufficiently large buffer.
+ * Each entry struct that's stored in the buffer adds some overhead so a
+ * large multiple of the largest possible name is a reasonable choice.
+ * (A few multiples of PATH_MAX perhaps.)
+ *
+ * Each call returns the total number of entries that were stored in the
+ * entries buffer.  Zero is returned when the search was successful and
+ * no referring entries were found.  The entries can be iterated over by
+ * advancing each starting struct offset by the total number of bytes in
+ * each entry.  If the _LAST flag is set on an entry then there were no
+ * more entries referring to the inode at the time of the call and
+ * iteration can be stopped.
+ *
+ * To resume iteration set the next call's starting dir_ino and dir_pos
+ * to one past the last entry seen.  Increment the last entry's dir_pos,
+ * and if it wrapped to 0, increment its dir_ino.
+ *
+ * This does not check that the caller has permission to read the
+ * entries found in each containing directory.  It requires
+ * CAP_DAC_READ_SEARCH which bypasses path traversal permissions
+ * checking.
+ *
+ * Entries returned by a single call can reflect any combination of
+ * racing creation and removal of entries.  Each entry existed at the
+ * time it was read though it may have changed in the time it took to
+ * return from the call.  The set of entries returned may no longer
+ * reflect the current set of entries and may not have existed at the
+ * same time.
+ *
+ * This has no knowledge of the life cycle of the inode.  It can return
+ * 0 when there are no referring entries because either the target inode
+ * doesn't exist, it is in the process of being deleted, or because it
+ * is still open while being unlinked.
+ *
+ * On success this returns the number of entries filled in the buffer.
+ * A return of 0 indicates that no entries referred to the inode.
+ *
+ * EINVAL is returned when there is a problem with the buffer.  Either
+ * it was not aligned or it was not large enough for the first entry.
+ *
+ * Many other errnos indicate hard failure to find the next entry.
+ */
+struct scoutfs_ioctl_get_referring_entries {
+	__u64 ino;
+	__u64 dir_ino;
+	__u64 dir_pos;
+	__u64 entries_ptr;
+	__u64 entries_bytes;
+};
+
+/*
+ * @dir_ino: The inode of the directory containing the entry.
+ *
+ * @dir_pos: The readdir f_pos position of the entry within the
+ * directory.
+ *
+ * @ino: The inode number of the target of the entry.
+ *
+ * @flags: Flags associated with this entry.
+ *
+ * @d_type: Inode type as specified with DT_ enum values in readdir(3).
+ *
+ * @entry_bytes: The total bytes taken by the entry in memory, including
+ * the name and any alignment padding.  The start of a following entry
+ * will be found after this number of bytes.
+ *
+ * @name_len: The number of bytes in the name not including the trailing
+ * null, ala strlen(3).
+ *
+ * @name: The null terminated name of the referring entry.  In the
+ * struct definition this array is sized to naturally align the struct.
+ * That number of padded bytes are not necessarily found in the buffer
+ * returned by _get_referring_entries;
+ */
+struct scoutfs_ioctl_dirent {
+	__u64 dir_ino;
+	__u64 dir_pos;
+	__u64 ino;
+	__u16 entry_bytes;
+	__u8 flags;
+	__u8 d_type;
+	__u8 name_len;
+	__u8 name[3];
+};
+
+#define SCOUTFS_IOCTL_DIRENT_FLAG_LAST (1 << 0)
+
+#define SCOUTFS_IOC_GET_REFERRING_ENTRIES \
+	_IOW(SCOUTFS_IOCTL_MAGIC, 17, struct scoutfs_ioctl_get_referring_entries)
+
 #endif
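The iteration protocol documented above is easiest to see end to end from userspace. What follows is a minimal hypothetical sketch, not part of the patch: it assumes the struct and ioctl definitions above are installed where userspace can include them, and the buffer size, include path, and error handling are illustrative only. The fd can be any open file on the scoutfs mount.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>

#include "ioctl.h"	/* scoutfs ioctl definitions; path is an assumption */

/* print every entry that refers to ino, following the cursor protocol */
static int print_referring_entries(int fd, unsigned long long ino)
{
	struct scoutfs_ioctl_get_referring_entries gre;
	struct scoutfs_ioctl_dirent *ent;
	struct scoutfs_ioctl_dirent *last;
	size_t buf_bytes = 4 * 4096;	/* "a few multiples of PATH_MAX" */
	void *buf;
	long nr;
	long i;

	/* entries_ptr must be 16-byte aligned */
	if (posix_memalign(&buf, 16, buf_bytes))
		return -1;

	memset(&gre, 0, sizeof(gre));	/* dir_ino/dir_pos start at 0 */
	gre.ino = ino;
	gre.entries_ptr = (unsigned long long)(unsigned long)buf;
	gre.entries_bytes = buf_bytes;

	for (;;) {
		nr = ioctl(fd, SCOUTFS_IOC_GET_REFERRING_ENTRIES, &gre);
		if (nr < 0) {
			free(buf);
			return -1;
		}
		if (nr == 0)
			break;	/* no referring entries remain */

		last = NULL;
		ent = buf;
		for (i = 0; i < nr; i++) {
			printf("dir %llu pos %llu name %s\n",
			       (unsigned long long)ent->dir_ino,
			       (unsigned long long)ent->dir_pos,
			       (const char *)ent->name);
			if (ent->flags & SCOUTFS_IOCTL_DIRENT_FLAG_LAST) {
				free(buf);
				return 0;
			}
			last = ent;
			ent = (void *)((char *)ent + ent->entry_bytes);
		}

		/* resume one past the last entry seen */
		gre.dir_ino = last->dir_ino;
		gre.dir_pos = last->dir_pos + 1;
		if (gre.dir_pos == 0)
			gre.dir_ino++;
	}

	free(buf);
	return 0;
}

Note how the cursor advance mirrors the wrap rule in the comment above: dir_pos is incremented first, and dir_ino only when dir_pos wraps to zero.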
kmod/src/item.c (114 changed lines)
@@ -27,6 +27,7 @@
 #include "trans.h"
 #include "counters.h"
 #include "scoutfs_trace.h"
+#include "util.h"

 /*
  * The item cache maintains a consistent view of items that are read
@@ -76,8 +77,10 @@ struct item_cache_info {
	/* almost always read, barely written */
	struct super_block *sb;
	struct item_percpu_pages __percpu *pcpu_pages;
-	struct shrinker shrinker;
+	KC_DEFINE_SHRINKER(shrinker);
+#ifdef KC_CPU_NOTIFIER
	struct notifier_block notifier;
+#endif

	/* often walked, but per-cpu refs are fast path */
	rwlock_t rwlock;
@@ -1676,6 +1679,14 @@ static int lock_safe(struct scoutfs_lock *lock, struct scoutfs_key *key,
	return 0;
}

+static int optional_lock_mode_match(struct scoutfs_lock *lock, int mode)
+{
+	if (WARN_ON_ONCE(lock && lock->mode != mode))
+		return -EINVAL;
+	else
+		return 0;
+}
+
 /*
  * Copy the cached item's value into the caller's value.  The number of
  * bytes copied is returned.  A null val returns 0.
@@ -1832,12 +1843,19 @@ out:
 * also increase the seqs.  It lets us limit the inputs of item merging
 * to the last stable seq and ensure that all the items in open
 * transactions and granted locks will have greater seqs.
+ *
+ * This is a little awkward for WRITE_ONLY locks which can have much
+ * older versions than the version of locked primary data that they're
+ * operating on behalf of.  Callers can optionally provide that primary
+ * lock to get the version from.  This ensures that items created under
+ * WRITE_ONLY locks can not have versions less than their primary data.
 */
-static u64 item_seq(struct super_block *sb, struct scoutfs_lock *lock)
+static u64 item_seq(struct super_block *sb, struct scoutfs_lock *lock,
+		    struct scoutfs_lock *primary)
{
	struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);

-	return max(sbi->trans_seq, lock->write_seq);
+	return max3(sbi->trans_seq, lock->write_seq, primary ? primary->write_seq : 0);
}

/*
@@ -1872,7 +1890,7 @@ int scoutfs_item_dirty(struct super_block *sb, struct scoutfs_key *key,
	if (!item || item->deletion) {
		ret = -ENOENT;
	} else {
-		item->seq = item_seq(sb, lock);
+		item->seq = item_seq(sb, lock, NULL);
		mark_item_dirty(sb, cinf, pg, NULL, item);
		ret = 0;
	}
@@ -1889,10 +1907,10 @@ out:
 */
static int item_create(struct super_block *sb, struct scoutfs_key *key,
		       void *val, int val_len, struct scoutfs_lock *lock,
-		       int mode, bool force)
+		       struct scoutfs_lock *primary, int mode, bool force)
{
	DECLARE_ITEM_CACHE_INFO(sb, cinf);
-	const u64 seq = item_seq(sb, lock);
+	const u64 seq = item_seq(sb, lock, primary);
	struct cached_item *found;
	struct cached_item *item;
	struct cached_page *pg;
@@ -1902,7 +1920,8 @@ static int item_create(struct super_block *sb, struct scoutfs_key *key,

	scoutfs_inc_counter(sb, item_create);

-	if ((ret = lock_safe(lock, key, mode)))
+	if ((ret = lock_safe(lock, key, mode)) ||
+	    (ret = optional_lock_mode_match(primary, SCOUTFS_LOCK_WRITE)))
		goto out;

	ret = scoutfs_forest_set_bloom_bits(sb, lock);
@@ -1943,15 +1962,15 @@ out:
int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
			void *val, int val_len, struct scoutfs_lock *lock)
{
-	return item_create(sb, key, val, val_len, lock,
+	return item_create(sb, key, val, val_len, lock, NULL,
			   SCOUTFS_LOCK_READ, false);
}

int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
			      void *val, int val_len,
-			      struct scoutfs_lock *lock)
+			      struct scoutfs_lock *lock, struct scoutfs_lock *primary)
{
-	return item_create(sb, key, val, val_len, lock,
+	return item_create(sb, key, val, val_len, lock, primary,
			   SCOUTFS_LOCK_WRITE_ONLY, true);
}

@@ -1965,7 +1984,7 @@ int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
			void *val, int val_len, struct scoutfs_lock *lock)
{
	DECLARE_ITEM_CACHE_INFO(sb, cinf);
-	const u64 seq = item_seq(sb, lock);
+	const u64 seq = item_seq(sb, lock, NULL);
	struct cached_item *item;
	struct cached_item *found;
	struct cached_page *pg;
@@ -2025,12 +2044,16 @@ out:
 * current items so the caller always writes with write only locks.  If
 * combining the current delta item and the caller's item results in a
 * null we can just drop it, we don't have to emit a deletion item.
+ *
+ * Delta items don't have to worry about creating items with old
+ * versions under write_only locks.  The versions don't impact how we
+ * merge two items.
 */
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
		       void *val, int val_len, struct scoutfs_lock *lock)
{
	DECLARE_ITEM_CACHE_INFO(sb, cinf);
-	const u64 seq = item_seq(sb, lock);
+	const u64 seq = item_seq(sb, lock, NULL);
	struct cached_item *item;
	struct cached_page *pg;
	struct rb_node **pnode;
@@ -2099,10 +2122,11 @@ out:
 * deletion item if there isn't one already cached.
 */
static int item_delete(struct super_block *sb, struct scoutfs_key *key,
-		       struct scoutfs_lock *lock, int mode, bool force)
+		       struct scoutfs_lock *lock, struct scoutfs_lock *primary,
+		       int mode, bool force)
{
	DECLARE_ITEM_CACHE_INFO(sb, cinf);
-	const u64 seq = item_seq(sb, lock);
+	const u64 seq = item_seq(sb, lock, primary);
	struct cached_item *item;
	struct cached_page *pg;
	struct rb_node **pnode;
@@ -2111,7 +2135,8 @@ static int item_delete(struct super_block *sb, struct scoutfs_key *key,

	scoutfs_inc_counter(sb, item_delete);

-	if ((ret = lock_safe(lock, key, mode)))
+	if ((ret = lock_safe(lock, key, mode)) ||
+	    (ret = optional_lock_mode_match(primary, SCOUTFS_LOCK_WRITE)))
		goto out;

	ret = scoutfs_forest_set_bloom_bits(sb, lock);
@@ -2161,13 +2186,13 @@ out:
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
			struct scoutfs_lock *lock)
{
-	return item_delete(sb, key, lock, SCOUTFS_LOCK_WRITE, false);
+	return item_delete(sb, key, lock, NULL, SCOUTFS_LOCK_WRITE, false);
}

int scoutfs_item_delete_force(struct super_block *sb, struct scoutfs_key *key,
-			      struct scoutfs_lock *lock)
+			      struct scoutfs_lock *lock, struct scoutfs_lock *primary)
{
-	return item_delete(sb, key, lock, SCOUTFS_LOCK_WRITE_ONLY, true);
+	return item_delete(sb, key, lock, primary, SCOUTFS_LOCK_WRITE_ONLY, true);
}

u64 scoutfs_item_dirty_pages(struct super_block *sb)
@@ -2255,7 +2280,7 @@ int scoutfs_item_write_dirty(struct super_block *sb)
		ret = -ENOMEM;
		goto out;
	}
-	list_add(&page->list, &pages);
+	list_add(&page->lru, &pages);

	first = NULL;
	prev = &first;
@@ -2268,7 +2293,7 @@ int scoutfs_item_write_dirty(struct super_block *sb)
			ret = -ENOMEM;
			goto out;
		}
-		list_add(&second->list, &pages);
+		list_add(&second->lru, &pages);
	}

	/* read lock next sorted page, we're only dirty_list user */
@@ -2325,8 +2350,8 @@ int scoutfs_item_write_dirty(struct super_block *sb)
	/* write all the dirty items into log btree blocks */
	ret = scoutfs_forest_insert_list(sb, first);
out:
-	list_for_each_entry_safe(page, second, &pages, list) {
-		list_del_init(&page->list);
+	list_for_each_entry_safe(page, second, &pages, lru) {
+		list_del_init(&page->lru);
		__free_page(page);
	}

@@ -2508,27 +2533,35 @@ retry:
		put_pg(sb, right);
}

+static unsigned long item_cache_count_objects(struct shrinker *shrink,
+					      struct shrink_control *sc)
+{
+	struct item_cache_info *cinf = KC_SHRINKER_CONTAINER_OF(shrink, struct item_cache_info);
+	struct super_block *sb = cinf->sb;
+
+	scoutfs_inc_counter(sb, item_cache_count_objects);
+
+	return shrinker_min_long(cinf->lru_pages);
+}
+
 /*
  * Shrink the size the item cache.  We're operating against the fast
  * path lock ordering and we skip pages if we can't acquire locks.  We
  * can run into dirty pages or pages with items that weren't visible to
  * the earliest active reader which must be skipped.
  */
-static int item_lru_shrink(struct shrinker *shrink,
-			   struct shrink_control *sc)
+static unsigned long item_cache_scan_objects(struct shrinker *shrink,
+					     struct shrink_control *sc)
{
-	struct item_cache_info *cinf = container_of(shrink,
-						    struct item_cache_info,
-						    shrinker);
+	struct item_cache_info *cinf = KC_SHRINKER_CONTAINER_OF(shrink, struct item_cache_info);
	struct super_block *sb = cinf->sb;
	struct cached_page *tmp;
	struct cached_page *pg;
+	unsigned long freed = 0;
	u64 first_reader_seq;
-	int nr;
+	int nr = sc->nr_to_scan;

-	if (sc->nr_to_scan == 0)
-		goto out;
-	nr = sc->nr_to_scan;
+	scoutfs_inc_counter(sb, item_cache_scan_objects);

	/* can't invalidate pages with items that weren't visible to first reader */
	first_reader_seq = first_active_reader_seq(cinf);
@@ -2560,6 +2593,7 @@ static unsigned long item_cache_scan_objects(struct shrinker *shrink,
		rbtree_erase(&pg->node, &cinf->pg_root);
		invalidate_pcpu_page(pg);
		write_unlock(&pg->rwlock);
+		freed++;

		put_pg(sb, pg);

@@ -2569,10 +2603,11 @@ static unsigned long item_cache_scan_objects(struct shrinker *shrink,

	write_unlock(&cinf->rwlock);
	spin_unlock(&cinf->lru_lock);
-out:
-	return min_t(unsigned long, cinf->lru_pages, INT_MAX);
+
+	return freed;
}

+#ifdef KC_CPU_NOTIFIER
static int item_cpu_callback(struct notifier_block *nfb,
			     unsigned long action, void *hcpu)
{
@@ -2587,6 +2622,7 @@ static int item_cpu_callback(struct notifier_block *nfb,

	return NOTIFY_OK;
}
+#endif

int scoutfs_item_setup(struct super_block *sb)
{
@@ -2616,11 +2652,13 @@ int scoutfs_item_setup(struct super_block *sb)
	for_each_possible_cpu(cpu)
		init_pcpu_pages(cinf, cpu);

-	cinf->shrinker.shrink = item_lru_shrink;
-	cinf->shrinker.seeks = DEFAULT_SEEKS;
-	register_shrinker(&cinf->shrinker);
+	KC_INIT_SHRINKER_FUNCS(&cinf->shrinker, item_cache_count_objects,
+			       item_cache_scan_objects);
+	KC_REGISTER_SHRINKER(&cinf->shrinker);
+#ifdef KC_CPU_NOTIFIER
	cinf->notifier.notifier_call = item_cpu_callback;
	register_hotcpu_notifier(&cinf->notifier);
+#endif

	sbi->item_cache_info = cinf;
	return 0;
@@ -2640,8 +2678,10 @@ void scoutfs_item_destroy(struct super_block *sb)
	if (cinf) {
		BUG_ON(!list_empty(&cinf->active_list));

+#ifdef KC_CPU_NOTIFIER
		unregister_hotcpu_notifier(&cinf->notifier);
-		unregister_shrinker(&cinf->shrinker);
+#endif
+		KC_UNREGISTER_SHRINKER(&cinf->shrinker);

		for_each_possible_cpu(cpu)
			drop_pcpu_pages(sb, cinf, cpu);

@@ -15,16 +15,15 @@ int scoutfs_item_create(struct super_block *sb, struct scoutfs_key *key,
			void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_create_force(struct super_block *sb, struct scoutfs_key *key,
			      void *val, int val_len,
-			      struct scoutfs_lock *lock);
+			      struct scoutfs_lock *lock, struct scoutfs_lock *primary);
int scoutfs_item_update(struct super_block *sb, struct scoutfs_key *key,
			void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delta(struct super_block *sb, struct scoutfs_key *key,
		       void *val, int val_len, struct scoutfs_lock *lock);
int scoutfs_item_delete(struct super_block *sb, struct scoutfs_key *key,
			struct scoutfs_lock *lock);
-int scoutfs_item_delete_force(struct super_block *sb,
-			      struct scoutfs_key *key,
-			      struct scoutfs_lock *lock);
+int scoutfs_item_delete_force(struct super_block *sb, struct scoutfs_key *key,
+			      struct scoutfs_lock *lock, struct scoutfs_lock *primary);

u64 scoutfs_item_dirty_pages(struct super_block *sb);
int scoutfs_item_write_dirty(struct super_block *sb);
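An aside on the item_seq() change above: the new rule is a plain three-way max, so an item written under a WRITE_ONLY lock can never be stamped with a version older than the primary data it serves. An illustration with invented numbers:

/* illustration only, values invented: trans_seq 100, the WRITE_ONLY
 * lock's write_seq 40, and the primary WRITE lock's write_seq 120 */
u64 seq = max3(100ULL, 40ULL, 120ULL);	/* 120; the stale 40 never wins */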
kmod/src/kernelcompat.c (new file, 84 lines)
@@ -0,0 +1,84 @@
+
+#include <linux/uio.h>
+
+#include "kernelcompat.h"
+
+#ifdef KC_SHRINKER_SHRINK
+#include <linux/shrinker.h>
+/*
+ * If a target doesn't have that .{count,scan}_objects() interface then
+ * we have a .shrink() helper that performs the shrink work in terms of
+ * count/scan.
+ */
+int kc_shrink_wrapper_fn(struct shrinker *shrink, struct shrink_control *sc)
+{
+	struct kc_shrinker_wrapper *wrapper = container_of(shrink, struct kc_shrinker_wrapper, shrink);
+	unsigned long nr;
+	unsigned long rc;
+
+	if (sc->nr_to_scan != 0) {
+		rc = wrapper->scan_objects(shrink, sc);
+		/* translate magic values to the equivalent for older kernels */
+		if (rc == SHRINK_STOP)
+			return -1;
+		else if (rc == SHRINK_EMPTY)
+			return 0;
+	}
+
+	nr = wrapper->count_objects(shrink, sc);
+
+	return min_t(unsigned long, nr, INT_MAX);
+}
+#endif
+
+#ifndef KC_CURRENT_TIME_INODE
+struct timespec64 kc_current_time(struct inode *inode)
+{
+	struct timespec64 now;
+	unsigned gran;
+
+	getnstimeofday64(&now);
+
+	if (unlikely(!inode->i_sb)) {
+		WARN(1, "current_time() called with uninitialized super_block in the inode");
+		return now;
+	}
+
+	gran = inode->i_sb->s_time_gran;
+
+	/* Avoid division in the common cases 1 ns and 1 s. */
+	if (gran == 1) {
+		/* nothing */
+	} else if (gran == NSEC_PER_SEC) {
+		now.tv_nsec = 0;
+	} else if (gran > 1 && gran < NSEC_PER_SEC) {
+		now.tv_nsec -= now.tv_nsec % gran;
+	} else {
+		WARN(1, "illegal file time granularity: %u", gran);
+	}
+
+	return now;
+}
+#endif
+
+#ifndef KC_GENERIC_FILE_BUFFERED_WRITE
+ssize_t
+kc_generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
+			       unsigned long nr_segs, loff_t pos, loff_t *ppos,
+			       size_t count, ssize_t written)
+{
+	struct file *file = iocb->ki_filp;
+	ssize_t status;
+	struct iov_iter i;
+
+	iov_iter_init(&i, WRITE, iov, nr_segs, count);
+	status = generic_perform_write(file, &i, pos);
+
+	if (likely(status >= 0)) {
+		written += status;
+		*ppos = pos + status;
+	}
+
+	return written ? written : status;
+}
+#endif
@@ -1,8 +1,35 @@
 #ifndef _SCOUTFS_KERNELCOMPAT_H_
 #define _SCOUTFS_KERNELCOMPAT_H_

+#include <linux/kernel.h>
+#include <linux/fs.h>
+
+/*
+ * v4.15-rc3-4-gae5e165d855d
+ *
+ * new API for handling inode->i_version.  This forces us to
+ * include this API where we need.  We include it here for
+ * convenience instead of where it's needed.
+ */
+#ifdef KC_NEED_LINUX_IVERSION_H
+#include <linux/iversion.h>
+#else
+/*
+ * Kernels before above version will need to fall back to
+ * manipulating inode->i_version as previous with degraded
+ * methods.
+ */
+#define inode_set_iversion_queried(inode, val)	\
+do {						\
+	(inode)->i_version = val;		\
+} while (0)
+#define inode_peek_iversion(inode)		\
+({						\
+	(inode)->i_version;			\
+})
+#endif
+
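For reference, a tiny hypothetical sketch (not in the patch) of how callers stay portable by going through only the two wrappers above, mirroring the inode.c changes earlier in this compare:

/* sketch: portable i_version handling via the compat macros above */
static void example_invalidate_iversion(struct inode *inode)
{
	/* real iversion API on new kernels, raw i_version on old ones */
	inode_set_iversion_queried(inode, 0);
}

static __le64 example_store_iversion(struct inode *inode)
{
	return cpu_to_le64(inode_peek_iversion(inode));
}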
 #ifndef KC_ITERATE_DIR_CONTEXT
 typedef filldir_t kc_readdir_ctx_t;
 #define KC_DECLARE_READDIR(name, file, dirent, ctx) name(file, dirent, ctx)
 #define KC_FOP_READDIR readdir
@@ -46,4 +73,204 @@ static inline int dir_emit_dots(struct file *file, void *dirent,
 }
 #endif

+#ifdef KC_POSIX_ACL_VALID_USER_NS
+#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(user_ns, acl)
+#else
+#define kc_posix_acl_valid(user_ns, acl) posix_acl_valid(acl)
+#endif
+
+/*
+ * v3.6-rc1-24-gdbf2576e37da
+ *
+ * All workqueues are now non-reentrant, and the bit flag is removed
+ * shortly after its uses were removed.
+ */
+#ifndef WQ_NON_REENTRANT
+#define WQ_NON_REENTRANT 0
+#endif
+
+/*
+ * v3.18-rc2-19-gb5ae6b15bd73
+ *
+ * Folds d_materialise_unique into d_splice_alias.  Note reversal
+ * of arguments (Also note Documentation/filesystems/porting.rst)
+ */
+#ifndef KC_D_MATERIALISE_UNIQUE
+#define d_materialise_unique(dentry, inode) d_splice_alias(inode, dentry)
+#endif
+
+/*
+ * v4.8-rc1-29-g31051c85b5e2
+ *
+ * fall back to inode_change_ok() if setattr_prepare() isn't available
+ */
+#ifndef KC_SETATTR_PREPARE
+#define setattr_prepare(dentry, attr) inode_change_ok(d_inode(dentry), attr)
+#endif
+
+#ifndef KC___POSIX_ACL_CREATE
+#define __posix_acl_create posix_acl_create
+#define __posix_acl_chmod posix_acl_chmod
+#endif
+
+#ifndef KC_PERCPU_COUNTER_ADD_BATCH
+#define percpu_counter_add_batch __percpu_counter_add
+#endif
+
+#ifndef KC_MEMALLOC_NOFS_SAVE
+#define memalloc_nofs_save memalloc_noio_save
+#define memalloc_nofs_restore memalloc_noio_restore
+#endif
+
+#ifdef KC_BIO_BI_OPF
+#define kc_bio_get_opf(bio)		\
+({					\
+	(bio)->bi_opf;			\
+})
+#define kc_bio_set_opf(bio, opf)	\
+do {					\
+	(bio)->bi_opf = opf;		\
+} while (0)
+#define kc_bio_set_sector(bio, sect)	\
+do {					\
+	(bio)->bi_iter.bi_sector = sect;\
+} while (0)
+#define kc_submit_bio(bio) submit_bio(bio)
+#else
+#define kc_bio_get_opf(bio)		\
+({					\
+	(bio)->bi_rw;			\
+})
+#define kc_bio_set_opf(bio, opf)	\
+do {					\
+	(bio)->bi_rw = opf;		\
+} while (0)
+#define kc_bio_set_sector(bio, sect)	\
+do {					\
+	(bio)->bi_sector = sect;	\
+} while (0)
+#define kc_submit_bio(bio)		\
+do {					\
+	submit_bio((bio)->bi_rw, bio);	\
+} while (0)
+#define bio_set_dev(bio, bdev)		\
+do {					\
+	(bio)->bi_bdev = (bdev);	\
+} while (0)
+#endif
+
+#ifdef KC_BIO_BI_STATUS
+#define KC_DECLARE_BIO_END_IO(name, bio) name(bio)
+#define kc_bio_get_errno(bio) ({ blk_status_to_errno((bio)->bi_status); })
+#else
+#define KC_DECLARE_BIO_END_IO(name, bio) name(bio, int _error_arg)
+#define kc_bio_get_errno(bio) ({ (int)((void)(bio), _error_arg); })
+#endif
+
+/*
+ * v4.13-rc1-6-ge462ec50cb5f
+ *
+ * MS_* (mount) flags from <linux/mount.h> should not be used in the kernel
+ * anymore from 4.x onwards.  Instead, we need to use the SB_* (superblock) flags
+ */
+#ifndef SB_POSIXACL
+#define SB_POSIXACL MS_POSIXACL
+#define SB_I_VERSION MS_I_VERSION
+#endif
+
+#ifndef KC_CURRENT_TIME_INODE
+struct timespec64 kc_current_time(struct inode *inode);
+#define current_time kc_current_time
+#define kc_timespec timespec
+#else
+#define kc_timespec timespec64
+#endif
+
+#ifndef KC_SHRINKER_SHRINK
+
+#define KC_DEFINE_SHRINKER(name) struct shrinker name
+#define KC_INIT_SHRINKER_FUNCS(name, countfn, scanfn) do {	\
+	__typeof__(name) _shrink = (name);			\
+	_shrink->count_objects = (countfn);			\
+	_shrink->scan_objects = (scanfn);			\
+	_shrink->seeks = DEFAULT_SEEKS;				\
+} while (0)
+
+#define KC_SHRINKER_CONTAINER_OF(ptr, type) container_of(ptr, type, shrinker)
+#define KC_REGISTER_SHRINKER(ptr) (register_shrinker(ptr))
+#define KC_UNREGISTER_SHRINKER(ptr) (unregister_shrinker(ptr))
+#define KC_SHRINKER_FN(ptr) (ptr)
+#else
+
+#include <linux/shrinker.h>
+#ifndef SHRINK_STOP
+#define SHRINK_STOP (~0UL)
+#define SHRINK_EMPTY (~0UL - 1)
+#endif
+
+int kc_shrink_wrapper_fn(struct shrinker *shrink, struct shrink_control *sc);
+struct kc_shrinker_wrapper {
+	unsigned long (*count_objects)(struct shrinker *, struct shrink_control *sc);
+	unsigned long (*scan_objects)(struct shrinker *, struct shrink_control *sc);
+	struct shrinker shrink;
+};
+
+#define KC_DEFINE_SHRINKER(name) struct kc_shrinker_wrapper name;
+#define KC_INIT_SHRINKER_FUNCS(name, countfn, scanfn) do {	\
+	struct kc_shrinker_wrapper *_wrap = (name);		\
+	_wrap->count_objects = (countfn);			\
+	_wrap->scan_objects = (scanfn);				\
+	_wrap->shrink.shrink = kc_shrink_wrapper_fn;		\
+	_wrap->shrink.seeks = DEFAULT_SEEKS;			\
+} while (0)
+#define KC_SHRINKER_CONTAINER_OF(ptr, type) container_of(container_of(ptr, struct kc_shrinker_wrapper, shrink), type, shrinker)
+#define KC_REGISTER_SHRINKER(ptr) (register_shrinker(ptr.shrink))
+#define KC_UNREGISTER_SHRINKER(ptr) (unregister_shrinker(ptr.shrink))
+#define KC_SHRINKER_FN(ptr) (ptr.shrink)
+
+#endif /* KC_SHRINKER_SHRINK */
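To make the two branches above concrete, here is a hypothetical cache (all names invented) wired up through the KC_* macros the same way item.c is converted earlier in this compare. On new kernels KC_DEFINE_SHRINKER expands to a plain struct shrinker; on old kernels it expands to the kc_shrinker_wrapper whose .shrink() callback dispatches to the count/scan pair:

/* sketch: a cache adopting the compat shrinker macros above */
struct example_cache {
	unsigned long nr_objects;
	/* member must be named "shrinker" for KC_SHRINKER_CONTAINER_OF */
	KC_DEFINE_SHRINKER(shrinker);
};

static unsigned long example_count_objects(struct shrinker *shrink,
					   struct shrink_control *sc)
{
	struct example_cache *cache = KC_SHRINKER_CONTAINER_OF(shrink, struct example_cache);

	return cache->nr_objects;
}

static unsigned long example_scan_objects(struct shrinker *shrink,
					  struct shrink_control *sc)
{
	/* free up to sc->nr_to_scan objects and return how many were freed */
	return 0;
}

static void example_cache_setup(struct example_cache *cache)
{
	KC_INIT_SHRINKER_FUNCS(&cache->shrinker, example_count_objects,
			       example_scan_objects);
	KC_REGISTER_SHRINKER(&cache->shrinker);
}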
+#ifdef KC_KERNEL_GETSOCKNAME_ADDRLEN
+#include <linux/net.h>
+#include <linux/inet.h>
+static inline int kc_kernel_getsockname(struct socket *sock, struct sockaddr *addr)
+{
+	int addrlen = sizeof(struct sockaddr_in);
+	int ret = kernel_getsockname(sock, addr, &addrlen);
+	if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
+		return -EAFNOSUPPORT;
+	else if (ret < 0)
+		return ret;
+
+	return sizeof(struct sockaddr_in);
+}
+static inline int kc_kernel_getpeername(struct socket *sock, struct sockaddr *addr)
+{
+	int addrlen = sizeof(struct sockaddr_in);
+	int ret = kernel_getpeername(sock, addr, &addrlen);
+	if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
+		return -EAFNOSUPPORT;
+	else if (ret < 0)
+		return ret;
+
+	return sizeof(struct sockaddr_in);
+}
+#else
+#define kc_kernel_getsockname(sock, addr) kernel_getsockname(sock, addr)
+#define kc_kernel_getpeername(sock, addr) kernel_getpeername(sock, addr)
+#endif
+
+#ifdef KC_SOCK_CREATE_KERN_NET
+#define kc_sock_create_kern(family, type, proto, res) sock_create_kern(&init_net, family, type, proto, res)
+#else
+#define kc_sock_create_kern sock_create_kern
+#endif
+
+#ifndef KC_GENERIC_FILE_BUFFERED_WRITE
+ssize_t kc_generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
+				       unsigned long nr_segs, loff_t pos, loff_t *ppos,
+				       size_t count, ssize_t written);
+#define generic_file_buffered_write kc_generic_file_buffered_write
+#endif
+
 #endif
kmod/src/lock.c (145 changed lines)
@@ -12,12 +12,12 @@
 */
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/preempt_mask.h> /* a rhel shed.h needed preempt_offset? */
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/sort.h>
#include <linux/ctype.h>
#include <linux/posix_acl.h>

#include "super.h"
#include "lock.h"
@@ -35,6 +35,7 @@
#include "xattr.h"
#include "item.h"
#include "omap.h"
#include "util.h"

/*
 * scoutfs uses a lock service to manage item cache consistency between
@@ -76,7 +77,7 @@ struct lock_info {
    bool unmounting;
    struct rb_root lock_tree;
    struct rb_root lock_range_tree;
    struct shrinker shrinker;
    KC_DEFINE_SHRINKER(shrinker);
    struct list_head lru_list;
    unsigned long long lru_nr;
    struct workqueue_struct *workq;
@@ -129,16 +130,13 @@ static bool lock_modes_match(int granted, int requested)
 * allows deletions to be performed by unlink without having to wait for
 * remote cached inodes to be dropped.
 *
 * If the cached inode was already deferring final inode deletion then
 * we can't perform that inline in invalidation. The locking alone
 * deadlock, and it might also take multiple transactions to fully
 * delete an inode with significant metadata. We only perform the iput
 * inline if we know that possible eviction can't perform the final
 * deletion, otherwise we kick it off to async work.
 * We kick the d_prune and iput off to async work because they can end
 * up in final iput and inode eviction item deletion which would
 * deadlock. d_prune->dput can end up in iput on parents in different
 * locks entirely.
 */
static void invalidate_inode(struct super_block *sb, u64 ino)
{
    DECLARE_LOCK_INFO(sb, linfo);
    struct scoutfs_inode_info *si;
    struct inode *inode;

@@ -152,17 +150,9 @@ static void invalidate_inode(struct super_block *sb, u64 ino)
        scoutfs_data_wait_changed(inode);
    }

    /* can't touch during unmount, dcache destroys w/o locks */
    if (!linfo->unmounting)
        d_prune_aliases(inode);
    forget_all_cached_acls(inode);

    si->drop_invalidated = true;
    if (scoutfs_lock_is_covered(sb, &si->ino_lock_cov) && inode->i_nlink > 0) {
        iput(inode);
    } else {
        /* defer iput to work context so we don't evict inodes from invalidation */
        scoutfs_inode_queue_iput(inode);
    }
    scoutfs_inode_queue_iput(inode, SI_IPUT_FLAG_PRUNE);
    }
}

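The hunk above moves the final iput out of the invalidation path entirely. A hedged sketch of the general deferred-iput pattern, so the motivation is concrete; iput_worker() and struct deferred_iput below are hypothetical stand-ins, not scoutfs code. Dropping the last inode reference can trigger eviction and item-deletion transactions, which must not run under the lock invalidation machinery, so the reference is dropped from work context instead:

/* Hypothetical sketch of deferring iput to a workqueue. */
struct deferred_iput {
    struct work_struct work;
    struct inode *inode;
};

static void iput_worker(struct work_struct *work)
{
    struct deferred_iput *di = container_of(work, struct deferred_iput, work);

    /* may be the final reference: eviction and item deletion run
     * here, safely outside the lock invalidation context */
    iput(di->inode);
    kfree(di);
}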
@@ -198,16 +188,6 @@ static int lock_invalidate(struct super_block *sb, struct scoutfs_lock *lock,
    /* have to invalidate if we're not in the only usable case */
    if (!(prev == SCOUTFS_LOCK_WRITE && mode == SCOUTFS_LOCK_READ)) {
retry:
        /* invalidate inodes before removing coverage */
        if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
            ino = le64_to_cpu(lock->start.ski_ino);
            last = le64_to_cpu(lock->end.ski_ino);
            while (ino <= last) {
                invalidate_inode(sb, ino);
                ino++;
            }
        }

        /* remove cov items to tell users that their cache is stale */
        spin_lock(&lock->cov_list_lock);
        list_for_each_entry_safe(cov, tmp, &lock->cov_list, head) {
@@ -223,6 +203,16 @@ retry:
        }
        spin_unlock(&lock->cov_list_lock);

        /* invalidate inodes after removing coverage so drop/evict aren't covered */
        if (lock->start.sk_zone == SCOUTFS_FS_ZONE) {
            ino = le64_to_cpu(lock->start.ski_ino);
            last = le64_to_cpu(lock->end.ski_ino);
            while (ino <= last) {
                invalidate_inode(sb, ino);
                ino++;
            }
        }

        scoutfs_item_invalidate(sb, &lock->start, &lock->end);
    }

@@ -289,6 +279,7 @@ static struct scoutfs_lock *lock_alloc(struct super_block *sb,
    lock->sb = sb;
    init_waitqueue_head(&lock->waitq);
    lock->mode = SCOUTFS_LOCK_NULL;
    lock->invalidating_mode = SCOUTFS_LOCK_NULL;

    atomic64_set(&lock->forest_bloom_nr, 0);

@@ -666,7 +657,9 @@ struct inv_req {
 *
 * Before we start invalidating the lock we set the lock to the new
 * mode, preventing further incompatible users of the old mode from
 * using the lock while we're invalidating.
 * using the lock while we're invalidating. We record the previously
 * granted mode so that we can send lock recover responses with the old
 * granted mode during invalidation.
 */
static void lock_invalidate_worker(struct work_struct *work)
{
@@ -691,7 +684,8 @@ static void lock_invalidate_worker(struct work_struct *work)
        if (!lock_counts_match(nl->new_mode, lock->users))
            continue;

        /* set the new mode, no incompatible users during inval */
        /* set the new mode, no incompatible users during inval, recov needs old */
        lock->invalidating_mode = lock->mode;
        lock->mode = nl->new_mode;

        /* move everyone that's ready to our private list */
@@ -734,6 +728,8 @@ static void lock_invalidate_worker(struct work_struct *work)
        list_del(&ireq->head);
        kfree(ireq);

        lock->invalidating_mode = SCOUTFS_LOCK_NULL;

        if (list_empty(&lock->inv_list)) {
            /* finish if another request didn't arrive */
            list_del_init(&lock->inv_head);
@@ -824,6 +820,7 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,
{
    DECLARE_LOCK_INFO(sb, linfo);
    struct scoutfs_net_lock_recover *nlr;
    enum scoutfs_lock_mode mode;
    struct scoutfs_lock *lock;
    struct scoutfs_lock *next;
    struct rb_node *node;
@@ -844,10 +841,15 @@ int scoutfs_lock_recover_request(struct super_block *sb, u64 net_id,

    for (i = 0; lock && i < SCOUTFS_NET_LOCK_MAX_RECOVER_NR; i++) {

        if (lock->invalidating_mode != SCOUTFS_LOCK_NULL)
            mode = lock->invalidating_mode;
        else
            mode = lock->mode;

        nlr->locks[i].key = lock->start;
        nlr->locks[i].write_seq = cpu_to_le64(lock->write_seq);
        nlr->locks[i].old_mode = lock->mode;
        nlr->locks[i].new_mode = lock->mode;
        nlr->locks[i].old_mode = mode;
        nlr->locks[i].new_mode = mode;

        node = rb_next(&lock->node);
        if (node)
@@ -1344,7 +1346,7 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
                            enum scoutfs_lock_mode mode)
{
    signed char lock_mode = ACCESS_ONCE(lock->mode);
    signed char lock_mode = READ_ONCE(lock->mode);

    return lock_modes_match(lock_mode, mode) &&
           scoutfs_key_compare_ranges(key, key,
@@ -1399,6 +1401,17 @@ static void lock_shrink_worker(struct work_struct *work)
    }
}

static unsigned long lock_count_objects(struct shrinker *shrink,
                                        struct shrink_control *sc)
{
    struct lock_info *linfo = KC_SHRINKER_CONTAINER_OF(shrink, struct lock_info);
    struct super_block *sb = linfo->sb;

    scoutfs_inc_counter(sb, lock_count_objects);

    return shrinker_min_long(linfo->lru_nr);
}

/*
 * Start the shrinking process for locks on the lru. If a lock is on
 * the lru then it can't have any active users. We don't want to block
@@ -1411,21 +1424,18 @@ static void lock_shrink_worker(struct work_struct *work)
 * mode which will prevent the lock from being freed when the null
 * response arrives.
 */
static int scoutfs_lock_shrink(struct shrinker *shrink,
                               struct shrink_control *sc)
static unsigned long lock_scan_objects(struct shrinker *shrink,
                                       struct shrink_control *sc)
{
    struct lock_info *linfo = container_of(shrink, struct lock_info,
                                           shrinker);
    struct lock_info *linfo = KC_SHRINKER_CONTAINER_OF(shrink, struct lock_info);
    struct super_block *sb = linfo->sb;
    struct scoutfs_lock *lock;
    struct scoutfs_lock *tmp;
    unsigned long nr;
    unsigned long freed = 0;
    unsigned long nr = sc->nr_to_scan;
    bool added = false;
    int ret;

    nr = sc->nr_to_scan;
    if (nr == 0)
        goto out;
    scoutfs_inc_counter(sb, lock_scan_objects);

    spin_lock(&linfo->lock);

@@ -1443,6 +1453,7 @@ restart:
        lock->request_pending = 1;
        list_add_tail(&lock->shrink_head, &linfo->shrink_list);
        added = true;
        freed++;

        scoutfs_inc_counter(sb, lock_shrink_attempted);
        trace_scoutfs_lock_shrink(sb, lock);
@@ -1457,10 +1468,8 @@ restart:
    if (added)
        queue_work(linfo->workq, &linfo->shrink_work);

out:
    ret = min_t(unsigned long, linfo->lru_nr, INT_MAX);
    trace_scoutfs_lock_shrink_exit(sb, sc->nr_to_scan, ret);
    return ret;
    trace_scoutfs_lock_shrink_exit(sb, sc->nr_to_scan, freed);
    return freed;
}

void scoutfs_free_unused_locks(struct super_block *sb)
@@ -1471,7 +1480,7 @@ void scoutfs_free_unused_locks(struct super_block *sb)
        .nr_to_scan = INT_MAX,
    };

    linfo->shrinker.shrink(&linfo->shrinker, &sc);
    lock_scan_objects(KC_SHRINKER_FN(&linfo->shrinker), &sc);
}

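These hunks convert the single legacy .shrink callback into the count_objects/scan_objects pair that the kernel's shrinker API has used since v3.12, behind KC_* compat macros. The real macro definitions live in scoutfs's compat header and are not shown in this diff, so the sketch below of what they might expand to on a count/scan kernel is an assumption, including the KC_SHRINKER_HAS_COUNT_SCAN feature-test name:

/* Hedged sketch only; names and details are assumptions, not the
 * actual scoutfs kc.h definitions. */
#ifdef KC_SHRINKER_HAS_COUNT_SCAN              /* assumed feature test */
#define KC_DEFINE_SHRINKER(name)    struct shrinker name
#define KC_SHRINKER_CONTAINER_OF(shrink, type) \
    container_of(shrink, type, shrinker)
#define KC_INIT_SHRINKER_FUNCS(shrink, count_fn, scan_fn)   \
    do {                                                    \
        (shrink)->count_objects = count_fn;                 \
        (shrink)->scan_objects = scan_fn;                   \
        (shrink)->seeks = DEFAULT_SEEKS;                    \
    } while (0)
#endif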
static void lock_tseq_show(struct seq_file *m, struct scoutfs_tseq_entry *ent)
@@ -1513,6 +1522,38 @@ void scoutfs_lock_flush_invalidate(struct super_block *sb)
    flush_work(&linfo->inv_work);
}

static u64 get_held_lock_refresh_gen(struct super_block *sb, struct scoutfs_key *start)
{
    DECLARE_LOCK_INFO(sb, linfo);
    struct scoutfs_lock *lock;
    u64 refresh_gen = 0;

    /* this can be called from all manner of places */
    if (!linfo)
        return 0;

    spin_lock(&linfo->lock);
    lock = lock_lookup(sb, start, NULL);
    if (lock) {
        if (lock_mode_can_read(lock->mode))
            refresh_gen = lock->refresh_gen;
    }
    spin_unlock(&linfo->lock);

    return refresh_gen;
}

u64 scoutfs_lock_ino_refresh_gen(struct super_block *sb, u64 ino)
{
    struct scoutfs_key start;

    scoutfs_key_set_zeros(&start);
    start.sk_zone = SCOUTFS_FS_ZONE;
    start.ski_ino = cpu_to_le64(ino & ~(u64)SCOUTFS_LOCK_INODE_GROUP_MASK);

    return get_held_lock_refresh_gen(sb, &start);
}

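The masking above maps an inode number to the start key of its lock group, so every inode in a group resolves to the same held lock. A worked sketch of that computation; the real value of SCOUTFS_LOCK_INODE_GROUP_MASK is not shown in this diff, so 1023 below (a 1024-inode group) is an assumed example value:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t group_mask = 1023;      /* assumed, for illustration */
    uint64_t ino = 5000;

    /* 5000 & ~1023 == 4096: every inode in [4096, 5119] shares a lock */
    printf("group start for ino %llu: %llu\n",
           (unsigned long long)ino,
           (unsigned long long)(ino & ~group_mask));
    return 0;
}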
/*
 * The caller is going to be shutting down transactions and the client.
 * We need to make sure that locking won't call either after we return.
@@ -1546,7 +1587,7 @@ void scoutfs_lock_shutdown(struct super_block *sb)
    trace_scoutfs_lock_shutdown(sb, linfo);

    /* stop the shrinker from queueing work */
    unregister_shrinker(&linfo->shrinker);
    KC_UNREGISTER_SHRINKER(&linfo->shrinker);
    flush_work(&linfo->shrink_work);

    /* cause current and future lock calls to return errors */
@@ -1665,9 +1706,9 @@ int scoutfs_lock_setup(struct super_block *sb)
    spin_lock_init(&linfo->lock);
    linfo->lock_tree = RB_ROOT;
    linfo->lock_range_tree = RB_ROOT;
    linfo->shrinker.shrink = scoutfs_lock_shrink;
    linfo->shrinker.seeks = DEFAULT_SEEKS;
    register_shrinker(&linfo->shrinker);
    KC_INIT_SHRINKER_FUNCS(&linfo->shrinker, lock_count_objects,
                           lock_scan_objects);
    KC_REGISTER_SHRINKER(&linfo->shrinker);
    INIT_LIST_HEAD(&linfo->lru_list);
    INIT_WORK(&linfo->inv_work, lock_invalidate_worker);
    INIT_LIST_HEAD(&linfo->inv_list);

@@ -39,6 +39,7 @@ struct scoutfs_lock {
    struct list_head cov_list;

    enum scoutfs_lock_mode mode;
    enum scoutfs_lock_mode invalidating_mode;
    unsigned int waiters[SCOUTFS_LOCK_NR_MODES];
    unsigned int users[SCOUTFS_LOCK_NR_MODES];

@@ -99,6 +100,8 @@ void scoutfs_lock_del_coverage(struct super_block *sb,
bool scoutfs_lock_protected(struct scoutfs_lock *lock, struct scoutfs_key *key,
                            enum scoutfs_lock_mode mode);

u64 scoutfs_lock_ino_refresh_gen(struct super_block *sb, u64 ino);

void scoutfs_free_unused_locks(struct super_block *sb);

int scoutfs_lock_setup(struct super_block *sb);

@@ -355,6 +355,7 @@ static int submit_send(struct super_block *sb,
    }
    if (rid != 0) {
        spin_unlock(&conn->lock);
        kfree(msend);
        return -ENOTCONN;
    }
}
@@ -548,12 +549,16 @@ static int recvmsg_full(struct socket *sock, void *buf, unsigned len)

    while (len) {
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = (struct iovec *)&kv;
        msg.msg_iovlen = 1;
        msg.msg_flags = MSG_NOSIGNAL;
        kv.iov_base = buf;
        kv.iov_len = len;

#ifndef KC_MSGHDR_STRUCT_IOV_ITER
        msg.msg_iov = (struct iovec *)&kv;
        msg.msg_iovlen = 1;
#else
        iov_iter_init(&msg.msg_iter, READ, (struct iovec *)&kv, len, 1);
#endif
        ret = kernel_recvmsg(sock, &msg, &kv, 1, len, msg.msg_flags);
        if (ret <= 0)
            return -ECONNABORTED;
@@ -706,12 +711,16 @@ static int sendmsg_full(struct socket *sock, void *buf, unsigned len)

    while (len) {
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = (struct iovec *)&kv;
        msg.msg_iovlen = 1;
        msg.msg_flags = MSG_NOSIGNAL;
        kv.iov_base = buf;
        kv.iov_len = len;

#ifndef KC_MSGHDR_STRUCT_IOV_ITER
        msg.msg_iov = (struct iovec *)&kv;
        msg.msg_iovlen = 1;
#else
        iov_iter_init(&msg.msg_iter, WRITE, (struct iovec *)&kv, len, 1);
#endif
        ret = kernel_sendmsg(sock, &msg, &kv, 1, len);
        if (ret <= 0)
            return -ECONNABORTED;
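Both hunks guard the msghdr setup because struct msghdr lost its msg_iov/msg_iovlen fields when the iov_iter conversion landed (around v3.19); newer kernels carry the buffer in msg.msg_iter instead. A sketch of the same compat pattern in isolation; example_send_buf() is a hypothetical name, the KC_MSGHDR_STRUCT_IOV_ITER test is scoutfs's, and the iov_iter_init() argument order follows the diff above:

/* Hedged sketch of the msghdr iov_iter compat pattern. */
static int example_send_buf(struct socket *sock, void *buf, size_t len)
{
    struct kvec kv = { .iov_base = buf, .iov_len = len };
    struct msghdr msg = { .msg_flags = MSG_NOSIGNAL };

#ifndef KC_MSGHDR_STRUCT_IOV_ITER
    msg.msg_iov = (struct iovec *)&kv;
    msg.msg_iovlen = 1;
#else
    iov_iter_init(&msg.msg_iter, WRITE, (struct iovec *)&kv, len, 1);
#endif
    /* kernel_sendmsg() also takes the kvec directly on current kernels */
    return kernel_sendmsg(sock, &msg, &kv, 1, len);
}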
@@ -896,7 +905,6 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
                               struct socket *sock)
{
    struct timeval tv;
    int addrlen;
    int optval;
    int ret;

@@ -946,23 +954,18 @@ static int sock_opts_and_names(struct scoutfs_net_connection *conn,
    if (ret)
        goto out;

    addrlen = sizeof(struct sockaddr_in);
    ret = kernel_getsockname(sock, (struct sockaddr *)&conn->sockname,
                             &addrlen);
    if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
        ret = -EAFNOSUPPORT;
    if (ret)
    ret = kc_kernel_getsockname(sock, (struct sockaddr *)&conn->sockname);
    if (ret < 0)
        goto out;

    addrlen = sizeof(struct sockaddr_in);
    ret = kernel_getpeername(sock, (struct sockaddr *)&conn->peername,
                             &addrlen);
    if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
        ret = -EAFNOSUPPORT;
    if (ret)
    ret = kc_kernel_getpeername(sock, (struct sockaddr *)&conn->peername);
    if (ret < 0)
        goto out;

    ret = 0;

    conn->last_peername = conn->peername;

out:
    return ret;
}
@@ -991,6 +994,8 @@ static void scoutfs_net_listen_worker(struct work_struct *work)
    if (ret < 0)
        break;

    acc_sock->sk->sk_allocation = GFP_NOFS;

    /* inherit accepted request funcs from listening conn */
    acc_conn = scoutfs_net_alloc_conn(sb, conn->notify_up,
                                      conn->notify_down,
@@ -1049,10 +1054,12 @@ static void scoutfs_net_connect_worker(struct work_struct *work)

    trace_scoutfs_net_connect_work_enter(sb, 0, 0);

    ret = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
    ret = kc_sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
    if (ret)
        goto out;

    sock->sk->sk_allocation = GFP_NOFS;

    /* caller specified connect timeout */
    tv.tv_sec = conn->connect_timeout_ms / MSEC_PER_SEC;
    tv.tv_usec = (conn->connect_timeout_ms % MSEC_PER_SEC) * USEC_PER_MSEC;
@@ -1341,10 +1348,12 @@ scoutfs_net_alloc_conn(struct super_block *sb,
    if (!conn)
        return NULL;

    conn->info = kzalloc(info_size, GFP_NOFS);
    if (!conn->info) {
        kfree(conn);
        return NULL;
    if (info_size) {
        conn->info = kzalloc(info_size, GFP_NOFS);
        if (!conn->info) {
            kfree(conn);
            return NULL;
        }
    }

    conn->workq = alloc_workqueue("scoutfs_net_%s",
@@ -1446,10 +1455,12 @@ int scoutfs_net_bind(struct super_block *sb,
    if (WARN_ON_ONCE(conn->sock))
        return -EINVAL;

    ret = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
    ret = kc_sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
    if (ret)
        goto out;

    sock->sk->sk_allocation = GFP_NOFS;

    optval = 1;
    ret = kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
                            (char *)&optval, sizeof(optval));
@@ -1462,20 +1473,18 @@ int scoutfs_net_bind(struct super_block *sb,
    goto out;

    ret = kernel_listen(sock, 255);
    if (ret)
    if (ret < 0)
        goto out;

    addrlen = sizeof(struct sockaddr_in);
    ret = kernel_getsockname(sock, (struct sockaddr *)&conn->sockname,
                             &addrlen);
    if (ret == 0 && addrlen != sizeof(struct sockaddr_in))
        ret = -EAFNOSUPPORT;
    if (ret)
    ret = kc_kernel_getsockname(sock, (struct sockaddr *)&conn->sockname);
    if (ret < 0)
        goto out;

    ret = 0;

    conn->sock = sock;
    *sin = conn->sockname;
    ret = 0;

out:
    if (ret < 0 && sock)
        sock_release(sock);

@@ -157,6 +157,15 @@ static int free_rid(struct omap_rid_list *list, struct omap_rid_entry *entry)
    return nr;
}

static void free_rid_list(struct omap_rid_list *list)
{
    struct omap_rid_entry *entry;
    struct omap_rid_entry *tmp;

    list_for_each_entry_safe(entry, tmp, &list->head, head)
        free_rid(list, entry);
}

static int copy_rids(struct omap_rid_list *to, struct omap_rid_list *from, spinlock_t *from_lock)
{
    struct omap_rid_entry *entry;
@@ -804,6 +813,10 @@ void scoutfs_omap_server_shutdown(struct super_block *sb)
    llist_for_each_entry_safe(req, tmp, requests, llnode)
        kfree(req);

    spin_lock(&ominf->lock);
    free_rid_list(&ominf->rids);
    spin_unlock(&ominf->lock);

    synchronize_rcu();
}

@@ -864,6 +877,10 @@ void scoutfs_omap_destroy(struct super_block *sb)
    rhashtable_walk_stop(&iter);
    rhashtable_walk_exit(&iter);

    spin_lock(&ominf->lock);
    free_rid_list(&ominf->rids);
    spin_unlock(&ominf->lock);

    rhashtable_destroy(&ominf->group_ht);
    rhashtable_destroy(&ominf->req_ht);
    kfree(ominf);

@@ -27,17 +27,28 @@
#include "options.h"
#include "super.h"
#include "inode.h"
#include "alloc.h"

enum {
    Opt_acl,
    Opt_data_prealloc_blocks,
    Opt_data_prealloc_contig_only,
    Opt_metadev_path,
    Opt_noacl,
    Opt_orphan_scan_delay_ms,
    Opt_quorum_heartbeat_timeout_ms,
    Opt_quorum_slot_nr,
    Opt_err,
};

static const match_table_t tokens = {
    {Opt_acl, "acl"},
    {Opt_data_prealloc_blocks, "data_prealloc_blocks=%s"},
    {Opt_data_prealloc_contig_only, "data_prealloc_contig_only=%s"},
    {Opt_metadev_path, "metadev_path=%s"},
    {Opt_noacl, "noacl"},
    {Opt_orphan_scan_delay_ms, "orphan_scan_delay_ms=%s"},
    {Opt_quorum_heartbeat_timeout_ms, "quorum_heartbeat_timeout_ms=%s"},
    {Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
    {Opt_err, NULL}
};
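The tokens table drives the kernel's match_token() parser from <linux/parser.h>: each comma-separated option in the mount string is matched against the patterns, and any %s capture lands in args[] for match_int()/match_u64() to convert. A minimal sketch of that loop for one option; parse_one() is a hypothetical name, not the scoutfs function:

/* Minimal sketch of the match_token() loop the table above feeds. */
static int parse_one(struct super_block *sb, char *p,
                     struct scoutfs_mount_options *opts)
{
    substring_t args[MAX_OPT_ARGS];
    int token;

    /* "quorum_slot_nr=0" matches {Opt_quorum_slot_nr, "quorum_slot_nr=%s"},
     * leaving "0" in args[0] for match_int() to convert */
    token = match_token(p, tokens, args);
    switch (token) {
    case Opt_noacl:
        sb->s_flags &= ~SB_POSIXACL;
        return 0;
    default:
        return -EINVAL;
    }
}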
@@ -106,11 +117,33 @@ static void free_options(struct scoutfs_mount_options *opts)
#define DEFAULT_ORPHAN_SCAN_DELAY_MS (10 * MSEC_PER_SEC)
#define MAX_ORPHAN_SCAN_DELAY_MS (60 * MSEC_PER_SEC)

#define MIN_DATA_PREALLOC_BLOCKS 1ULL
#define MAX_DATA_PREALLOC_BLOCKS ((unsigned long long)SCOUTFS_BLOCK_SM_MAX)

static void init_default_options(struct scoutfs_mount_options *opts)
{
    memset(opts, 0, sizeof(*opts));

    opts->data_prealloc_blocks = SCOUTFS_DATA_PREALLOC_DEFAULT_BLOCKS;
    opts->data_prealloc_contig_only = 1;
    opts->orphan_scan_delay_ms = -1;
    opts->quorum_heartbeat_timeout_ms = SCOUTFS_QUORUM_DEF_HB_TIMEO_MS;
    opts->quorum_slot_nr = -1;
    opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;
}

static int verify_quorum_heartbeat_timeout_ms(struct super_block *sb, int ret, u64 val)
{
    if (ret < 0) {
        scoutfs_err(sb, "failed to parse quorum_heartbeat_timeout_ms value");
        return -EINVAL;
    }
    if (val < SCOUTFS_QUORUM_MIN_HB_TIMEO_MS || val > SCOUTFS_QUORUM_MAX_HB_TIMEO_MS) {
        scoutfs_err(sb, "invalid quorum_heartbeat_timeout_ms value %llu, must be between %lu and %lu",
                    val, SCOUTFS_QUORUM_MIN_HB_TIMEO_MS, SCOUTFS_QUORUM_MAX_HB_TIMEO_MS);
        return -EINVAL;
    }

    return 0;
}

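The validator deliberately takes both the parse result and the value, so the mount-option parser and the sysfs store path later in this diff funnel through one range check. A sketch of that call shape; example_set_hb_timeout() is a hypothetical wrapper, not scoutfs code:

/* Sketch: a parse failure and an out-of-range value both become -EINVAL
 * inside the shared validator, whichever caller we came from. */
static int example_set_hb_timeout(struct super_block *sb, const char *str,
                                  struct scoutfs_mount_options *opts)
{
    u64 val;
    int ret;

    ret = kstrtoull(str, 0, &val);
    ret = verify_quorum_heartbeat_timeout_ms(sb, ret, val);
    if (ret == 0)
        opts->quorum_heartbeat_timeout_ms = val;
    return ret;
}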
/*
@@ -122,6 +155,7 @@ static void init_default_options(struct scoutfs_mount_options *opts)
static int parse_options(struct super_block *sb, char *options, struct scoutfs_mount_options *opts)
{
    substring_t args[MAX_OPT_ARGS];
    u64 nr64;
    int nr;
    int token;
    char *p;
@@ -134,12 +168,44 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
    token = match_token(p, tokens, args);
    switch (token) {

    case Opt_acl:
        sb->s_flags |= SB_POSIXACL;
        break;

    case Opt_data_prealloc_blocks:
        ret = match_u64(args, &nr64);
        if (ret < 0 ||
            nr64 < MIN_DATA_PREALLOC_BLOCKS || nr64 > MAX_DATA_PREALLOC_BLOCKS) {
            scoutfs_err(sb, "invalid data_prealloc_blocks option, must be between %llu and %llu",
                        MIN_DATA_PREALLOC_BLOCKS, MAX_DATA_PREALLOC_BLOCKS);
            if (ret == 0)
                ret = -EINVAL;
            return ret;
        }
        opts->data_prealloc_blocks = nr64;
        break;

    case Opt_data_prealloc_contig_only:
        ret = match_int(args, &nr);
        if (ret < 0 || nr < 0 || nr > 1) {
            scoutfs_err(sb, "invalid data_prealloc_contig_only option, bool must only be 0 or 1");
            if (ret == 0)
                ret = -EINVAL;
            return ret;
        }
        opts->data_prealloc_contig_only = nr;
        break;

    case Opt_metadev_path:
        ret = parse_bdev_path(sb, &args[0], &opts->metadev_path);
        if (ret < 0)
            return ret;
        break;

    case Opt_noacl:
        sb->s_flags &= ~SB_POSIXACL;
        break;

    case Opt_orphan_scan_delay_ms:
        if (opts->orphan_scan_delay_ms != -1) {
            scoutfs_err(sb, "multiple orphan_scan_delay_ms options provided, only provide one.");
@@ -158,6 +224,14 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
        opts->orphan_scan_delay_ms = nr;
        break;

    case Opt_quorum_heartbeat_timeout_ms:
        ret = match_u64(args, &nr64);
        ret = verify_quorum_heartbeat_timeout_ms(sb, ret, nr64);
        if (ret < 0)
            return ret;
        opts->quorum_heartbeat_timeout_ms = nr64;
        break;

    case Opt_quorum_slot_nr:
        if (opts->quorum_slot_nr != -1) {
            scoutfs_err(sb, "multiple quorum_slot_nr options provided, only provide one.");
@@ -181,6 +255,9 @@ static int parse_options(struct super_block *sb, char *options, struct scoutfs_m
        }
    }

    if (opts->orphan_scan_delay_ms == -1)
        opts->orphan_scan_delay_ms = DEFAULT_ORPHAN_SCAN_DELAY_MS;

    if (!opts->metadev_path) {
        scoutfs_err(sb, "Required mount option \"metadev_path\" not found");
        return -EINVAL;
@@ -250,10 +327,17 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
{
    struct super_block *sb = root->d_sb;
    struct scoutfs_mount_options opts;
    const bool is_acl = !!(sb->s_flags & SB_POSIXACL);

    scoutfs_options_read(sb, &opts);

    if (is_acl)
        seq_puts(seq, ",acl");
    seq_printf(seq, ",data_prealloc_blocks=%llu", opts.data_prealloc_blocks);
    seq_printf(seq, ",data_prealloc_contig_only=%u", opts.data_prealloc_contig_only);
    seq_printf(seq, ",metadev_path=%s", opts.metadev_path);
    if (!is_acl)
        seq_puts(seq, ",noacl");
    seq_printf(seq, ",orphan_scan_delay_ms=%u", opts.orphan_scan_delay_ms);
    if (opts.quorum_slot_nr >= 0)
        seq_printf(seq, ",quorum_slot_nr=%d", opts.quorum_slot_nr);
@@ -261,6 +345,83 @@ int scoutfs_options_show(struct seq_file *seq, struct dentry *root)
    return 0;
}

static ssize_t data_prealloc_blocks_show(struct kobject *kobj, struct kobj_attribute *attr,
                                         char *buf)
{
    struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
    struct scoutfs_mount_options opts;

    scoutfs_options_read(sb, &opts);

    return snprintf(buf, PAGE_SIZE, "%llu", opts.data_prealloc_blocks);
}
static ssize_t data_prealloc_blocks_store(struct kobject *kobj, struct kobj_attribute *attr,
                                          const char *buf, size_t count)
{
    struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
    DECLARE_OPTIONS_INFO(sb, optinf);
    char nullterm[30]; /* more than enough for octal -U64_MAX */
    u64 val;
    int len;
    int ret;

    len = min(count, sizeof(nullterm) - 1);
    memcpy(nullterm, buf, len);
    nullterm[len] = '\0';

    ret = kstrtoll(nullterm, 0, &val);
    if (ret < 0 || val < MIN_DATA_PREALLOC_BLOCKS || val > MAX_DATA_PREALLOC_BLOCKS) {
        scoutfs_err(sb, "invalid data_prealloc_blocks option, must be between %llu and %llu",
                    MIN_DATA_PREALLOC_BLOCKS, MAX_DATA_PREALLOC_BLOCKS);
        return -EINVAL;
    }

    write_seqlock(&optinf->seqlock);
    optinf->opts.data_prealloc_blocks = val;
    write_sequnlock(&optinf->seqlock);

    return count;
}
SCOUTFS_ATTR_RW(data_prealloc_blocks);

static ssize_t data_prealloc_contig_only_show(struct kobject *kobj, struct kobj_attribute *attr,
                                              char *buf)
{
    struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
    struct scoutfs_mount_options opts;

    scoutfs_options_read(sb, &opts);

    return snprintf(buf, PAGE_SIZE, "%u", opts.data_prealloc_contig_only);
}
static ssize_t data_prealloc_contig_only_store(struct kobject *kobj, struct kobj_attribute *attr,
                                               const char *buf, size_t count)
{
    struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
    DECLARE_OPTIONS_INFO(sb, optinf);
    char nullterm[20]; /* more than enough for octal -U32_MAX */
    long val;
    int len;
    int ret;

    len = min(count, sizeof(nullterm) - 1);
    memcpy(nullterm, buf, len);
    nullterm[len] = '\0';

    ret = kstrtol(nullterm, 0, &val);
    if (ret < 0 || val < 0 || val > 1) {
        scoutfs_err(sb, "invalid data_prealloc_contig_only option, bool must be 0 or 1");
        return -EINVAL;
    }

    write_seqlock(&optinf->seqlock);
    optinf->opts.data_prealloc_contig_only = val;
    write_sequnlock(&optinf->seqlock);

    return count;
}
SCOUTFS_ATTR_RW(data_prealloc_contig_only);

static ssize_t metadev_path_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
    struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
@@ -313,6 +474,43 @@ static ssize_t orphan_scan_delay_ms_store(struct kobject *kobj, struct kobj_attr
}
SCOUTFS_ATTR_RW(orphan_scan_delay_ms);

static ssize_t quorum_heartbeat_timeout_ms_show(struct kobject *kobj, struct kobj_attribute *attr,
                                                char *buf)
{
    struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
    struct scoutfs_mount_options opts;

    scoutfs_options_read(sb, &opts);

    return snprintf(buf, PAGE_SIZE, "%llu", opts.quorum_heartbeat_timeout_ms);
}
static ssize_t quorum_heartbeat_timeout_ms_store(struct kobject *kobj, struct kobj_attribute *attr,
                                                 const char *buf, size_t count)
{
    struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
    DECLARE_OPTIONS_INFO(sb, optinf);
    char nullterm[30]; /* more than enough for octal -U64_MAX */
    u64 val;
    int len;
    int ret;

    len = min(count, sizeof(nullterm) - 1);
    memcpy(nullterm, buf, len);
    nullterm[len] = '\0';

    ret = kstrtoll(nullterm, 0, &val);
    ret = verify_quorum_heartbeat_timeout_ms(sb, ret, val);
    if (ret == 0) {
        write_seqlock(&optinf->seqlock);
        optinf->opts.quorum_heartbeat_timeout_ms = val;
        write_sequnlock(&optinf->seqlock);
        ret = count;
    }

    return ret;
}
SCOUTFS_ATTR_RW(quorum_heartbeat_timeout_ms);

static ssize_t quorum_slot_nr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
    struct super_block *sb = SCOUTFS_SYSFS_ATTRS_SB(kobj);
@@ -325,8 +523,11 @@ static ssize_t quorum_slot_nr_show(struct kobject *kobj, struct kobj_attribute *
SCOUTFS_ATTR_RO(quorum_slot_nr);

static struct attribute *options_attrs[] = {
    SCOUTFS_ATTR_PTR(data_prealloc_blocks),
    SCOUTFS_ATTR_PTR(data_prealloc_contig_only),
    SCOUTFS_ATTR_PTR(metadev_path),
    SCOUTFS_ATTR_PTR(orphan_scan_delay_ms),
    SCOUTFS_ATTR_PTR(quorum_heartbeat_timeout_ms),
    SCOUTFS_ATTR_PTR(quorum_slot_nr),
    NULL,
};

@@ -6,10 +6,12 @@
#include "format.h"

struct scoutfs_mount_options {
    u64 data_prealloc_blocks;
    bool data_prealloc_contig_only;
    char *metadev_path;
    unsigned int orphan_scan_delay_ms;
    int quorum_slot_nr;

    u64 quorum_heartbeat_timeout_ms;
};

void scoutfs_options_read(struct super_block *sb, struct scoutfs_mount_options *opts);

@@ -100,6 +100,11 @@ struct last_msg {
    ktime_t ts;
};

struct count_recent {
    u64 count;
    ktime_t recent;
};

enum quorum_role { FOLLOWER, CANDIDATE, LEADER };

struct quorum_status {
@@ -112,8 +117,12 @@ struct quorum_status {
    ktime_t timeout;
};

#define HB_DELAY_NR (SCOUTFS_QUORUM_MAX_HB_TIMEO_MS / MSEC_PER_SEC)

struct quorum_info {
    struct super_block *sb;
    struct scoutfs_quorum_config qconf;
    struct workqueue_struct *workq;
    struct work_struct work;
    struct socket *sock;
    bool shutdown;
@@ -125,6 +134,8 @@ struct quorum_info {
    struct quorum_status show_status;
    struct last_msg last_send[SCOUTFS_QUORUM_MAX_SLOTS];
    struct last_msg last_recv[SCOUTFS_QUORUM_MAX_SLOTS];
    struct count_recent *hb_delay;
    unsigned long max_hb_delay;

    struct scoutfs_sysfs_attrs ssa;
};
@@ -134,11 +145,18 @@ struct quorum_info {
#define DECLARE_QUORUM_INFO_KOBJ(kobj, name) \
    DECLARE_QUORUM_INFO(SCOUTFS_SYSFS_ATTRS_SB(kobj), name)

static bool quorum_slot_present(struct scoutfs_super_block *super, int i)
static bool quorum_slot_present(struct scoutfs_quorum_config *qconf, int i)
{
    BUG_ON(i < 0 || i > SCOUTFS_QUORUM_MAX_SLOTS);

    return super->qconf.slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
    return qconf->slots[i].addr.v4.family == cpu_to_le16(SCOUTFS_AF_IPV4);
}

static void quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_in *sin)
{
    BUG_ON(i < 0 || i >= SCOUTFS_QUORUM_MAX_SLOTS);

    scoutfs_addr_to_sin(sin, &qconf->slots[i].addr);
}

static ktime_t election_timeout(void)
@@ -152,29 +170,29 @@ static ktime_t heartbeat_interval(void)
    return ktime_add_ms(ktime_get(), SCOUTFS_QUORUM_HB_IVAL_MS);
}

static ktime_t heartbeat_timeout(void)
static ktime_t heartbeat_timeout(struct scoutfs_mount_options *opts)
{
    return ktime_add_ms(ktime_get(), SCOUTFS_QUORUM_HB_TIMEO_MS);
    return ktime_add_ms(ktime_get(), opts->quorum_heartbeat_timeout_ms);
}

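heartbeat_timeout() now derives the absolute ktime deadline from the quorum_heartbeat_timeout_ms mount option rather than a compile-time constant, so with quorum_heartbeat_timeout_ms=12000 a follower that last saw a heartbeat at time T will not call an election before T+12s. A sketch of how such a deadline is consumed; example_should_elect() is a hypothetical name:

/* Sketch: the deadline was set via heartbeat_timeout(opts) when the
 * last heartbeat arrived; elections only start once it has passed. */
static bool example_should_elect(ktime_t deadline)
{
    return ktime_after(ktime_get(), deadline);
}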
static int create_socket(struct super_block *sb)
{
    DECLARE_QUORUM_INFO(sb, qinf);
    struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
    struct socket *sock = NULL;
    struct sockaddr_in sin;
    int addrlen;
    int ret;

    ret = sock_create_kern(PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
    ret = kc_sock_create_kern(PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
    if (ret) {
        scoutfs_err(sb, "quorum couldn't create udp socket: %d", ret);
        goto out;
    }

    sock->sk->sk_allocation = GFP_NOFS;
    /* rather fail and retry than block waiting for free */
    sock->sk->sk_allocation = GFP_ATOMIC;

    scoutfs_quorum_slot_sin(super, qinf->our_quorum_slot_nr, &sin);
    quorum_slot_sin(&qinf->qconf, qinf->our_quorum_slot_nr, &sin);

    addrlen = sizeof(sin);
    ret = kernel_bind(sock, (struct sockaddr *)&sin, addrlen);
@@ -201,16 +219,20 @@ static __le32 quorum_message_crc(struct scoutfs_quorum_message *qmes)
    return cpu_to_le32(crc32c(~0, qmes, len));
}

static void send_msg_members(struct super_block *sb, int type, u64 term,
                             int only)
/*
 * Returns the number of failures from sendmsg.
 */
static int send_msg_members(struct super_block *sb, int type, u64 term, int only)
{
    struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
    DECLARE_QUORUM_INFO(sb, qinf);
    struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
    int failed = 0;
    ktime_t now;
    int ret;
    int i;

    struct scoutfs_quorum_message qmes = {
        .fsid = super->hdr.fsid,
        .fsid = cpu_to_le64(sbi->fsid),
        .term = cpu_to_le64(term),
        .type = type,
        .from = qinf->our_quorum_slot_nr,
@@ -221,8 +243,10 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
    };
    struct sockaddr_in sin;
    struct msghdr mh = {
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
        .msg_iov = (struct iovec *)&kv,
        .msg_iovlen = 1,
#endif
        .msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL,
        .msg_name = &sin,
        .msg_namelen = sizeof(sin),
@@ -232,15 +256,24 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,

    qmes.crc = quorum_message_crc(&qmes);

    for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
        if (!quorum_slot_present(super, i) ||
        if (!quorum_slot_present(&qinf->qconf, i) ||
            (only >= 0 && i != only) || i == qinf->our_quorum_slot_nr)
            continue;

        scoutfs_quorum_slot_sin(super, i, &sin);
        if (scoutfs_forcing_unmount(sb)) {
            failed = 0;
            break;
        }

        scoutfs_quorum_slot_sin(&qinf->qconf, i, &sin);
        now = ktime_get();
        kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
#ifdef KC_MSGHDR_STRUCT_IOV_ITER
        iov_iter_init(&mh.msg_iter, WRITE, (struct iovec *)&kv, sizeof(qmes), 1);
#endif
        ret = kernel_sendmsg(qinf->sock, &mh, &kv, 1, kv.iov_len);
        if (ret != kv.iov_len)
            failed++;

        spin_lock(&qinf->show_lock);
        qinf->last_send[i].msg.term = term;
@@ -251,6 +284,8 @@ static void send_msg_members(struct super_block *sb, int type, u64 term,
        if (i == only)
            break;
    }

    return failed;
}

#define send_msg_to(sb, type, term, nr) send_msg_members(sb, type, term, nr)
@@ -266,7 +301,7 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
                    ktime_t abs_to)
{
    DECLARE_QUORUM_INFO(sb, qinf);
    struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
    struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
    struct scoutfs_quorum_message qmes;
    struct timeval tv;
    ktime_t rel_to;
@@ -278,8 +313,10 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
        .iov_len = sizeof(struct scoutfs_quorum_message),
    };
    struct msghdr mh = {
#ifndef KC_MSGHDR_STRUCT_IOV_ITER
        .msg_iov = (struct iovec *)&kv,
        .msg_iovlen = 1,
#endif
        .msg_flags = MSG_NOSIGNAL,
    };

@@ -301,18 +338,24 @@ static int recv_msg(struct super_block *sb, struct quorum_host_msg *msg,
        return ret;
    }

#ifdef KC_MSGHDR_STRUCT_IOV_ITER
    iov_iter_init(&mh.msg_iter, READ, (struct iovec *)&kv, sizeof(struct scoutfs_quorum_message), 1);
#endif
    ret = kernel_recvmsg(qinf->sock, &mh, &kv, 1, kv.iov_len, mh.msg_flags);
    if (ret < 0)
        return ret;

    if (scoutfs_forcing_unmount(sb))
        return 0;

    now = ktime_get();

    if (ret != sizeof(qmes) ||
        qmes.crc != quorum_message_crc(&qmes) ||
        qmes.fsid != super->hdr.fsid ||
        qmes.fsid != cpu_to_le64(sbi->fsid) ||
        qmes.type >= SCOUTFS_QUORUM_MSG_INVALID ||
        qmes.from >= SCOUTFS_QUORUM_MAX_SLOTS ||
        !quorum_slot_present(super, qmes.from)) {
        !quorum_slot_present(&qinf->qconf, qmes.from)) {
        /* should we be trying to open a new socket? */
        scoutfs_inc_counter(sb, quorum_recv_invalid);
        return -EAGAIN;
@@ -342,7 +385,7 @@ static int read_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_q
                             bool check_rid)
{
    struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
    struct scoutfs_super_block *super = &sbi->super;
    const u64 fsid = sbi->fsid;
    const u64 rid = sbi->rid;
    char msg[150];
    __le32 crc;
@@ -367,9 +410,9 @@ static int read_quorum_block(struct super_block *sb, u64 blkno, struct scoutfs_q
    else if (le32_to_cpu(blk->hdr.magic) != SCOUTFS_BLOCK_MAGIC_QUORUM)
        snprintf(msg, sizeof(msg), "blk magic %08x != %08x",
                 le32_to_cpu(blk->hdr.magic), SCOUTFS_BLOCK_MAGIC_QUORUM);
    else if (blk->hdr.fsid != super->hdr.fsid)
    else if (blk->hdr.fsid != cpu_to_le64(fsid))
        snprintf(msg, sizeof(msg), "blk fsid %016llx != %016llx",
                 le64_to_cpu(blk->hdr.fsid), le64_to_cpu(super->hdr.fsid));
                 le64_to_cpu(blk->hdr.fsid), fsid);
    else if (le64_to_cpu(blk->hdr.blkno) != blkno)
        snprintf(msg, sizeof(msg), "blk blkno %llu != %llu",
                 le64_to_cpu(blk->hdr.blkno), blkno);
@@ -410,8 +453,7 @@ out:
 */
static void read_greatest_term(struct super_block *sb, u64 *term)
{
    struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
    struct scoutfs_super_block *super = &sbi->super;
    DECLARE_QUORUM_INFO(sb, qinf);
    struct scoutfs_quorum_block blk;
    int ret;
    int e;
@@ -420,7 +462,7 @@ static void read_greatest_term(struct super_block *sb, u64 *term)
    *term = 0;

    for (s = 0; s < SCOUTFS_QUORUM_MAX_SLOTS; s++) {
        if (!quorum_slot_present(super, s))
        if (!quorum_slot_present(&qinf->qconf, s))
            continue;

        ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + s, &blk, false);
@@ -514,14 +556,15 @@ static int update_quorum_block(struct super_block *sb, int event, u64 term, bool
 * keeps us from being fenced while we allow userspace fencing to take a
 * reasonably long time. We still want to timeout eventually.
 */
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_config *qconf,
                                 u64 term)
{
#define NR_OLD 2
    struct scoutfs_quorum_block_event old[SCOUTFS_QUORUM_MAX_SLOTS][NR_OLD] = {{{0,}}};
    struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
    struct scoutfs_super_block *super = &sbi->super;
    struct scoutfs_quorum_block blk;
    struct sockaddr_in sin;
    const __le64 lefsid = cpu_to_le64(sbi->fsid);
    const u64 rid = sbi->rid;
    bool fence_started = false;
    u64 fenced = 0;
@@ -534,7 +577,7 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
    BUILD_BUG_ON(SCOUTFS_QUORUM_BLOCKS < SCOUTFS_QUORUM_MAX_SLOTS);

    for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
        if (!quorum_slot_present(super, i))
        if (!quorum_slot_present(qconf, i))
            continue;

        ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
@@ -567,11 +610,11 @@ int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term)
            continue;

        scoutfs_inc_counter(sb, quorum_fence_leader);
        scoutfs_quorum_slot_sin(super, i, &sin);
        quorum_slot_sin(qconf, i, &sin);
        fence_rid = old[i][j].rid;

        scoutfs_info(sb, "fencing previous leader "SCSBF" at term %llu in slot %u with address "SIN_FMT,
                     SCSB_LEFR_ARGS(super->hdr.fsid, fence_rid),
                     SCSB_LEFR_ARGS(lefsid, fence_rid),
                     le64_to_cpu(old[i][j].term), i, SIN_ARG(&sin));
        ret = scoutfs_fence_start(sb, le64_to_cpu(fence_rid), sin.sin_addr.s_addr,
                                  SCOUTFS_FENCE_QUORUM_BLOCK_LEADER);
@@ -592,6 +635,71 @@ out:
    return ret;
}

static void clear_hb_delay(struct quorum_info *qinf)
{
    int i;

    spin_lock(&qinf->show_lock);
    qinf->max_hb_delay = 0;
    for (i = 0; i < HB_DELAY_NR; i++) {
        qinf->hb_delay[i].recent = ns_to_ktime(0);
        qinf->hb_delay[i].count = 0;
    }
    spin_unlock(&qinf->show_lock);
}

struct hb_recording {
    ktime_t prev;
    int count;
};

/*
 * Record long heartbeat delays. We only record the delay between back
 * to back send attempts in the leader or back to back recv messages in
 * the followers. The worker caller sets record_hb when their iteration
 * sent or received a heartbeat. An iteration that does anything else
 * resets the tracking.
 */
static void record_hb_delay(struct super_block *sb, struct quorum_info *qinf,
                            struct hb_recording *hbr, bool record_hb, int role)
{
    bool log = false;
    ktime_t now;
    s64 s;

    if (!record_hb) {
        hbr->count = 0;
        return;
    }

    now = ktime_get();

    if (hbr->count < 2 && ++hbr->count < 2) {
        hbr->prev = now;
        return;
    }

    s = ktime_ms_delta(now, hbr->prev) / MSEC_PER_SEC;
    hbr->prev = now;

    if (s <= 0 || s >= HB_DELAY_NR)
        return;

    spin_lock(&qinf->show_lock);
    if (qinf->max_hb_delay < s) {
        qinf->max_hb_delay = s;
        if (s >= 3)
            log = true;
    }
    qinf->hb_delay[s].recent = now;
    qinf->hb_delay[s].count++;
    spin_unlock(&qinf->show_lock);

    if (log)
        scoutfs_info(sb, "longest quorum heartbeat %s delay of %lld sec",
                     role == LEADER ? "send" : "recv", s);
}

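record_hb_delay() buckets gaps between back-to-back heartbeats by whole seconds: a 4300ms gap increments hb_delay[4] and may raise max_hb_delay, while sub-second gaps (bucket 0) and gaps at or beyond HB_DELAY_NR are deliberately skipped. A tiny worked sketch of that arithmetic; example_hb_bucket() is a hypothetical name:

/* Sketch of the bucketing above: 4300ms maps to whole-second bucket 4. */
static s64 example_hb_bucket(s64 gap_ms)
{
    /* buckets 0 and >= HB_DELAY_NR are skipped by record_hb_delay() */
    return gap_ms / MSEC_PER_SEC;
}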
/*
 * The main quorum task maintains its private status. It seemed cleaner
 * to occasionally copy the status for showing in sysfs/debugfs files
@@ -616,16 +724,23 @@ static void update_show_status(struct quorum_info *qinf, struct quorum_status *q
static void scoutfs_quorum_worker(struct work_struct *work)
{
    struct quorum_info *qinf = container_of(work, struct quorum_info, work);
    struct scoutfs_mount_options opts;
    struct super_block *sb = qinf->sb;
    struct sockaddr_in unused;
    struct quorum_host_msg msg;
    struct quorum_status qst = {0,};
    struct hb_recording hbr;
    bool record_hb;
    int ret;
    int err;

    memset(&hbr, 0, sizeof(struct hb_recording));

    /* recording votes from slots as native single word bitmap */
    BUILD_BUG_ON(SCOUTFS_QUORUM_MAX_SLOTS > BITS_PER_LONG);

    scoutfs_options_read(sb, &opts);

    /* start out as a follower */
    qst.role = FOLLOWER;
    qst.vote_for = -1;
@@ -635,7 +750,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)

    /* see if there's a server to chose heartbeat or election timeout */
    if (scoutfs_quorum_server_sin(sb, &unused) == 0)
        qst.timeout = heartbeat_timeout();
        qst.timeout = heartbeat_timeout(&opts);
    else
        qst.timeout = election_timeout();

@@ -659,14 +774,16 @@ static void scoutfs_quorum_worker(struct work_struct *work)
        ret = 0;
    }

    scoutfs_options_read(sb, &opts);
    record_hb = false;

    /* ignore messages from older terms */
    if (msg.type != SCOUTFS_QUORUM_MSG_INVALID &&
        msg.term < qst.term)
        msg.type = SCOUTFS_QUORUM_MSG_INVALID;

    trace_scoutfs_quorum_loop(sb, qst.role, qst.term, qst.vote_for,
                              qst.vote_bits,
                              ktime_to_timespec64(qst.timeout));
                              qst.vote_bits, ktime_to_ns(qst.timeout));

    /* receiving greater terms resets term, becomes follower */
    if (msg.type != SCOUTFS_QUORUM_MSG_INVALID &&
@@ -674,6 +791,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
        if (qst.role == LEADER) {
            scoutfs_warn(sb, "saw msg type %u from %u for term %llu while leader in term %llu, shutting down server.",
                         msg.type, msg.from, msg.term, qst.term);
            clear_hb_delay(qinf);
        }
        qst.role = FOLLOWER;
        qst.term = msg.term;
@@ -682,7 +800,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
        scoutfs_inc_counter(sb, quorum_term_follower);

        if (msg.type == SCOUTFS_QUORUM_MSG_HEARTBEAT)
            qst.timeout = heartbeat_timeout();
            qst.timeout = heartbeat_timeout(&opts);
        else
            qst.timeout = election_timeout();

@@ -692,6 +810,21 @@ static void scoutfs_quorum_worker(struct work_struct *work)
        goto out;
    }

    /* receiving heartbeats extends timeout, delaying elections */
    if (msg.type == SCOUTFS_QUORUM_MSG_HEARTBEAT) {
        qst.timeout = heartbeat_timeout(&opts);
        scoutfs_inc_counter(sb, quorum_recv_heartbeat);
        record_hb = true;
    }

    /* receiving a resignation from server starts election */
    if (msg.type == SCOUTFS_QUORUM_MSG_RESIGNATION &&
        qst.role == FOLLOWER &&
        msg.term == qst.term) {
        qst.timeout = election_timeout();
        scoutfs_inc_counter(sb, quorum_recv_resignation);
    }

    /* followers and candidates start new election on timeout */
    if (qst.role != LEADER &&
        ktime_after(ktime_get(), qst.timeout)) {
@@ -744,6 +877,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
        qst.timeout = heartbeat_interval();

        update_show_status(qinf, &qst);
        clear_hb_delay(qinf);

        /* record that we've been elected before starting up server */
        ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_ELECT, qst.term, true);
@@ -752,7 +886,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)

        qst.server_start_term = qst.term;
        qst.server_event = SCOUTFS_QUORUM_EVENT_ELECT;
        scoutfs_server_start(sb, qst.term);
        scoutfs_server_start(sb, &qinf->qconf, qst.term);
    }

    /*
@@ -798,6 +932,7 @@ static void scoutfs_quorum_worker(struct work_struct *work)
        send_msg_others(sb, SCOUTFS_QUORUM_MSG_RESIGNATION,
                        qst.server_start_term);
        scoutfs_inc_counter(sb, quorum_send_resignation);
        clear_hb_delay(qinf);
    }

    ret = update_quorum_block(sb, SCOUTFS_QUORUM_EVENT_STOP,
@@ -811,24 +946,16 @@ static void scoutfs_quorum_worker(struct work_struct *work)
    /* leaders regularly send heartbeats to delay elections */
    if (qst.role == LEADER &&
        ktime_after(ktime_get(), qst.timeout)) {
        send_msg_others(sb, SCOUTFS_QUORUM_MSG_HEARTBEAT,
                        qst.term);
        ret = send_msg_others(sb, SCOUTFS_QUORUM_MSG_HEARTBEAT, qst.term);
        if (ret > 0) {
            scoutfs_add_counter(sb, quorum_send_heartbeat_dropped, ret);
            ret = 0;
        }

        qst.timeout = heartbeat_interval();
        scoutfs_inc_counter(sb, quorum_send_heartbeat);
    }
        record_hb = true;

    /* receiving heartbeats extends timeout, delaying elections */
    if (msg.type == SCOUTFS_QUORUM_MSG_HEARTBEAT) {
        qst.timeout = heartbeat_timeout();
        scoutfs_inc_counter(sb, quorum_recv_heartbeat);
    }

    /* receiving a resignation from server starts election */
    if (msg.type == SCOUTFS_QUORUM_MSG_RESIGNATION &&
        qst.role == FOLLOWER &&
        msg.term == qst.term) {
        qst.timeout = election_timeout();
        scoutfs_inc_counter(sb, quorum_recv_resignation);
    }

    /* followers vote once per term */
@@ -840,6 +967,8 @@ static void scoutfs_quorum_worker(struct work_struct *work)
        msg.from);
        scoutfs_inc_counter(sb, quorum_send_vote);
    }

    record_hb_delay(sb, qinf, &hbr, record_hb, qst.role);
}

update_show_status(qinf, &qst);
@@ -877,16 +1006,25 @@ out:
 */
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
{
    struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
    struct scoutfs_super_block *super = &sbi->super;
    struct scoutfs_super_block *super = NULL;
    struct scoutfs_quorum_block blk;
    u64 elect_term;
    u64 term = 0;
    int ret = 0;
    int i;

    super = kmalloc(sizeof(struct scoutfs_super_block), GFP_NOFS);
    if (!super) {
        ret = -ENOMEM;
        goto out;
    }

    ret = scoutfs_read_super(sb, super);
    if (ret)
        goto out;

    for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
        if (!quorum_slot_present(super, i))
        if (!quorum_slot_present(&super->qconf, i))
            continue;

        ret = read_quorum_block(sb, SCOUTFS_QUORUM_BLKNO + i, &blk, false);
@@ -900,7 +1038,7 @@ int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
        if (elect_term > term &&
            elect_term > le64_to_cpu(blk.events[SCOUTFS_QUORUM_EVENT_STOP].term)) {
            term = elect_term;
            scoutfs_quorum_slot_sin(super, i, sin);
            scoutfs_quorum_slot_sin(&super->qconf, i, sin);
            continue;
        }
    }
@@ -909,6 +1047,7 @@ int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin)
    ret = -ENOENT;

out:
    kfree(super);
    return ret;
}

@@ -924,12 +1063,9 @@ u8 scoutfs_quorum_votes_needed(struct super_block *sb)
    return qinf->votes_needed;
}

void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
                             struct sockaddr_in *sin)
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i, struct sockaddr_in *sin)
{
    BUG_ON(i < 0 || i >= SCOUTFS_QUORUM_MAX_SLOTS);

    scoutfs_addr_to_sin(sin, &super->qconf.slots[i].addr);
    return quorum_slot_sin(qconf, i, sin);
}

static char *role_str(int role)
@@ -969,9 +1105,11 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
{
    DECLARE_QUORUM_INFO_KOBJ(kobj, qinf);
    struct quorum_status qst;
    struct count_recent cr;
    struct last_msg last;
    struct timespec64 ts;
    const ktime_t now = ktime_get();
    unsigned long ul;
    size_t size;
    int ret;
    int i;
@@ -1029,6 +1167,26 @@ static ssize_t status_show(struct kobject *kobj, struct kobj_attribute *attr,
        (s64)ts.tv_sec, (int)ts.tv_nsec);
    }

    spin_lock(&qinf->show_lock);
    ul = qinf->max_hb_delay;
    spin_unlock(&qinf->show_lock);
    if (ul)
        snprintf_ret(buf, size, &ret, "HB Delay(s) Count Secs Since\n");

    for (i = 1; i <= ul && i < HB_DELAY_NR; i++) {
        spin_lock(&qinf->show_lock);
        cr = qinf->hb_delay[i];
        spin_unlock(&qinf->show_lock);

        if (cr.count == 0)
            continue;

        ts = ktime_to_timespec64(ktime_sub(now, cr.recent));
        snprintf_ret(buf, size, &ret,
                     "%11u %9llu %lld.%09u\n",
                     i, cr.count, (s64)ts.tv_sec, (int)ts.tv_nsec);
    }

    return ret;
}
SCOUTFS_ATTR_RO(status);
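For context, the new sysfs table would render roughly as sketched in the comment below; the values are invented for illustration and only the column layout follows the "%11u %9llu %lld.%09u" format string above:

/* Illustrative rendering of the table emitted above (values invented):
 *
 *   HB Delay(s) Count Secs Since
 *             3         7 12.000314159
 *             4         1 98.000271828
 */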
@@ -1060,11 +1218,10 @@ static inline bool valid_ipv4_port(__be16 port)
    return port != 0 && be16_to_cpu(port) != U16_MAX;
}

static int verify_quorum_slots(struct super_block *sb)
static int verify_quorum_slots(struct super_block *sb, struct quorum_info *qinf,
                               struct scoutfs_quorum_config *qconf)
{
    struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
    char slots[(SCOUTFS_QUORUM_MAX_SLOTS * 3) + 1];
    DECLARE_QUORUM_INFO(sb, qinf);
    struct sockaddr_in other;
    struct sockaddr_in sin;
    int found = 0;
@@ -1074,10 +1231,10 @@ static int verify_quorum_slots(struct super_block *sb)

    for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
        if (!quorum_slot_present(super, i))
        if (!quorum_slot_present(qconf, i))
            continue;

        scoutfs_quorum_slot_sin(super, i, &sin);
        scoutfs_quorum_slot_sin(qconf, i, &sin);

        if (!valid_ipv4_unicast(sin.sin_addr.s_addr)) {
            scoutfs_err(sb, "quorum slot #%d has invalid ipv4 unicast address: "SIN_FMT,
@@ -1092,10 +1249,10 @@ static int verify_quorum_slots(struct super_block *sb)
        }

        for (j = i + 1; j < SCOUTFS_QUORUM_MAX_SLOTS; j++) {
            if (!quorum_slot_present(super, j))
            if (!quorum_slot_present(qconf, j))
                continue;

            scoutfs_quorum_slot_sin(super, j, &other);
            scoutfs_quorum_slot_sin(qconf, j, &other);

            if (sin.sin_addr.s_addr == other.sin_addr.s_addr &&
                sin.sin_port == other.sin_port) {
@@ -1113,11 +1270,11 @@ static int verify_quorum_slots(struct super_block *sb)
        return -EINVAL;
    }

    if (!quorum_slot_present(super, qinf->our_quorum_slot_nr)) {
    if (!quorum_slot_present(qconf, qinf->our_quorum_slot_nr)) {
        char *str = slots;
        *str = '\0';
        for (i = 0; i < SCOUTFS_QUORUM_MAX_SLOTS; i++) {
            if (quorum_slot_present(super, i)) {
            if (quorum_slot_present(qconf, i)) {
                ret = snprintf(str, &slots[ARRAY_SIZE(slots)] - str, "%c%u",
                               str == slots ? ' ' : ',', i);
                if (ret < 2 || ret > 3) {
@@ -1141,16 +1298,22 @@ static int verify_quorum_slots(struct super_block *sb)
    else
        qinf->votes_needed = (found / 2) + 1;

    qinf->qconf = *qconf;

    return 0;
}

/*
 * Once this schedules the quorum worker it can be elected leader and
 * start the server, possibly before this returns.
 * start the server, possibly before this returns. The quorum agent
 * would be responsible for tracking the quorum config in the super
 * block if it changes. Until then uses a static config that it reads
 * during setup.
 */
int scoutfs_quorum_setup(struct super_block *sb)
{
    struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);
    struct scoutfs_super_block *super = NULL;
    struct scoutfs_mount_options opts;
    struct quorum_info *qinf;
    int ret;
@@ -1160,7 +1323,14 @@ int scoutfs_quorum_setup(struct super_block *sb)
    return 0;

    qinf = kzalloc(sizeof(struct quorum_info), GFP_KERNEL);
    if (!qinf) {
    super = kmalloc(sizeof(struct scoutfs_super_block), GFP_KERNEL);
    if (qinf)
        qinf->hb_delay = __vmalloc(HB_DELAY_NR * sizeof(struct count_recent),
                                   GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL);
    if (!qinf || !super || !qinf->hb_delay) {
        if (qinf)
            vfree(qinf->hb_delay);
        kfree(qinf);
        ret = -ENOMEM;
        goto out;
    }
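The hb_delay histogram is sized from the maximum permitted heartbeat timeout, one bucket per whole second. A sizing sketch with an assumed constant, since SCOUTFS_QUORUM_MAX_HB_TIMEO_MS is not shown in this diff:

/* Sizing sketch only: if SCOUTFS_QUORUM_MAX_HB_TIMEO_MS were 60000,
 * HB_DELAY_NR would be 60 one-second buckets, i.e. a small allocation
 * for which __vmalloc() is a convenience rather than a necessity. */
size_t hb_delay_bytes = (60000 / 1000) * sizeof(struct count_recent);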
@@ -1174,7 +1344,20 @@ int scoutfs_quorum_setup(struct super_block *sb)
|
||||
sbi->quorum_info = qinf;
|
||||
qinf->sb = sb;
|
||||
|
||||
ret = verify_quorum_slots(sb);
|
||||
/* a high priority single threaded context without mem reclaim */
|
||||
qinf->workq = alloc_workqueue("scoutfs_quorum_work",
|
||||
WQ_NON_REENTRANT | WQ_UNBOUND |
|
||||
WQ_HIGHPRI, 1);
|
||||
if (!qinf->workq) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = scoutfs_read_super(sb, super);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
ret = verify_quorum_slots(sb, qinf, &super->qconf);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
@@ -1188,12 +1371,13 @@ int scoutfs_quorum_setup(struct super_block *sb)
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
schedule_work(&qinf->work);
|
||||
queue_work(qinf->workq, &qinf->work);
|
||||
|
||||
out:
|
||||
if (ret)
|
||||
scoutfs_quorum_destroy(sb);
|
||||
|
||||
kfree(super);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -1217,10 +1401,14 @@ void scoutfs_quorum_destroy(struct super_block *sb)
|
||||
qinf->shutdown = true;
|
||||
flush_work(&qinf->work);
|
||||
|
||||
if (qinf->workq)
|
||||
destroy_workqueue(qinf->workq);
|
||||
|
||||
scoutfs_sysfs_destroy_attrs(sb, &qinf->ssa);
|
||||
if (qinf->sock)
|
||||
sock_release(qinf->sock);
|
||||
|
||||
vfree(qinf->hb_delay);
|
||||
kfree(qinf);
|
||||
sbi->quorum_info = NULL;
|
||||
}
|
||||
|
||||
@@ -4,10 +4,11 @@
|
||||
int scoutfs_quorum_server_sin(struct super_block *sb, struct sockaddr_in *sin);
|
||||
|
||||
u8 scoutfs_quorum_votes_needed(struct super_block *sb);
|
||||
void scoutfs_quorum_slot_sin(struct scoutfs_super_block *super, int i,
|
||||
void scoutfs_quorum_slot_sin(struct scoutfs_quorum_config *qconf, int i,
|
||||
struct sockaddr_in *sin);
|
||||
|
||||
int scoutfs_quorum_fence_leaders(struct super_block *sb, u64 term);
|
||||
int scoutfs_quorum_fence_leaders(struct super_block *sb, struct scoutfs_quorum_config *qconf,
|
||||
u64 term);
|
||||
|
||||
int scoutfs_quorum_setup(struct super_block *sb);
|
||||
void scoutfs_quorum_shutdown(struct super_block *sb);
|
||||
|
||||
@@ -691,16 +691,16 @@ TRACE_EVENT(scoutfs_evict_inode,

TRACE_EVENT(scoutfs_drop_inode,
	TP_PROTO(struct super_block *sb, __u64 ino, unsigned int nlink,
		 unsigned int unhashed, bool drop_invalidated),
		 unsigned int unhashed, bool lock_covered),

	TP_ARGS(sb, ino, nlink, unhashed, drop_invalidated),
	TP_ARGS(sb, ino, nlink, unhashed, lock_covered),

	TP_STRUCT__entry(
		SCSB_TRACE_FIELDS
		__field(__u64, ino)
		__field(unsigned int, nlink)
		__field(unsigned int, unhashed)
		__field(unsigned int, drop_invalidated)
		__field(unsigned int, lock_covered)
	),

	TP_fast_assign(
@@ -708,12 +708,12 @@ TRACE_EVENT(scoutfs_drop_inode,
		__entry->ino = ino;
		__entry->nlink = nlink;
		__entry->unhashed = unhashed;
		__entry->drop_invalidated = !!drop_invalidated;
		__entry->lock_covered = !!lock_covered;
	),

	TP_printk(SCSBF" ino %llu nlink %u unhashed %d drop_invalidated %u", SCSB_TRACE_ARGS,
	TP_printk(SCSBF" ino %llu nlink %u unhashed %d lock_covered %u", SCSB_TRACE_ARGS,
		  __entry->ino, __entry->nlink, __entry->unhashed,
		  __entry->drop_invalidated)
		  __entry->lock_covered)
);

TRACE_EVENT(scoutfs_inode_walk_writeback,
@@ -817,22 +817,17 @@ TRACE_EVENT(scoutfs_advance_dirty_super,
	TP_printk(SCSBF" super seq now %llu", SCSB_TRACE_ARGS, __entry->seq)
);

TRACE_EVENT(scoutfs_dir_add_next_linkref,
TRACE_EVENT(scoutfs_dir_add_next_linkref_found,
	TP_PROTO(struct super_block *sb, __u64 ino, __u64 dir_ino,
		 __u64 dir_pos, int ret, __u64 found_dir_ino,
		 __u64 found_dir_pos, unsigned int name_len),
		 __u64 dir_pos, unsigned int name_len),

	TP_ARGS(sb, ino, dir_ino, dir_pos, ret, found_dir_pos, found_dir_ino,
		name_len),
	TP_ARGS(sb, ino, dir_ino, dir_pos, name_len),

	TP_STRUCT__entry(
		SCSB_TRACE_FIELDS
		__field(__u64, ino)
		__field(__u64, dir_ino)
		__field(__u64, dir_pos)
		__field(int, ret)
		__field(__u64, found_dir_ino)
		__field(__u64, found_dir_pos)
		__field(unsigned int, name_len)
	),

@@ -841,16 +836,43 @@ TRACE_EVENT(scoutfs_dir_add_next_linkref,
		__entry->ino = ino;
		__entry->dir_ino = dir_ino;
		__entry->dir_pos = dir_pos;
		__entry->ret = ret;
		__entry->found_dir_ino = dir_ino;
		__entry->found_dir_pos = dir_pos;
		__entry->name_len = name_len;
	),

	TP_printk(SCSBF" ino %llu dir_ino %llu dir_pos %llu ret %d found_dir_ino %llu found_dir_pos %llu name_len %u",
		  SCSB_TRACE_ARGS, __entry->ino, __entry->dir_pos,
		  __entry->dir_ino, __entry->ret, __entry->found_dir_pos,
		  __entry->found_dir_ino, __entry->name_len)
	TP_printk(SCSBF" ino %llu dir_ino %llu dir_pos %llu name_len %u",
		  SCSB_TRACE_ARGS, __entry->ino, __entry->dir_ino,
		  __entry->dir_pos, __entry->name_len)
);

TRACE_EVENT(scoutfs_dir_add_next_linkrefs,
	TP_PROTO(struct super_block *sb, __u64 ino, __u64 dir_ino,
		 __u64 dir_pos, int count, int nr, int ret),

	TP_ARGS(sb, ino, dir_ino, dir_pos, count, nr, ret),

	TP_STRUCT__entry(
		SCSB_TRACE_FIELDS
		__field(__u64, ino)
		__field(__u64, dir_ino)
		__field(__u64, dir_pos)
		__field(int, count)
		__field(int, nr)
		__field(int, ret)
	),

	TP_fast_assign(
		SCSB_TRACE_ASSIGN(sb);
		__entry->ino = ino;
		__entry->dir_ino = dir_ino;
		__entry->dir_pos = dir_pos;
		__entry->count = count;
		__entry->nr = nr;
		__entry->ret = ret;
	),

	TP_printk(SCSBF" ino %llu dir_ino %llu dir_pos %llu count %d nr %d ret %d",
		  SCSB_TRACE_ARGS, __entry->ino, __entry->dir_ino,
		  __entry->dir_pos, __entry->count, __entry->nr, __entry->ret)
);

TRACE_EVENT(scoutfs_write_begin,
@@ -1417,42 +1439,71 @@ TRACE_EVENT(scoutfs_rename,
);

TRACE_EVENT(scoutfs_d_revalidate,
	TP_PROTO(struct super_block *sb,
		 struct dentry *dentry, int flags, struct dentry *parent,
		 bool is_covered, int ret),
	TP_PROTO(struct super_block *sb, struct dentry *dentry, int flags, u64 dir_ino, int ret),

	TP_ARGS(sb, dentry, flags, parent, is_covered, ret),
	TP_ARGS(sb, dentry, flags, dir_ino, ret),

	TP_STRUCT__entry(
		SCSB_TRACE_FIELDS
		__field(void *, dentry)
		__string(name, dentry->d_name.name)
		__field(__u64, ino)
		__field(__u64, parent_ino)
		__field(__u64, dir_ino)
		__field(int, flags)
		__field(int, is_root)
		__field(int, is_covered)
		__field(int, ret)
	),

	TP_fast_assign(
		SCSB_TRACE_ASSIGN(sb);
		__entry->dentry = dentry;
		__assign_str(name, dentry->d_name.name)
		__entry->ino = dentry->d_inode ?
			       scoutfs_ino(dentry->d_inode) : 0;
		__entry->parent_ino = parent->d_inode ?
				      scoutfs_ino(parent->d_inode) : 0;
		__entry->ino = dentry->d_inode ? scoutfs_ino(dentry->d_inode) : 0;
		__entry->dir_ino = dir_ino;
		__entry->flags = flags;
		__entry->is_root = IS_ROOT(dentry);
		__entry->is_covered = is_covered;
		__entry->ret = ret;
	),

	TP_printk(SCSBF" name %s ino %llu parent_ino %llu flags 0x%x s_root %u is_covered %u ret %d",
		  SCSB_TRACE_ARGS, __get_str(name), __entry->ino,
		  __entry->parent_ino, __entry->flags,
		  __entry->is_root,
		  __entry->is_covered,
		  __entry->ret)
	TP_printk(SCSBF" dentry %p name %s ino %llu dir_ino %llu flags 0x%x s_root %u ret %d",
		  SCSB_TRACE_ARGS, __entry->dentry, __get_str(name), __entry->ino, __entry->dir_ino,
		  __entry->flags, __entry->is_root, __entry->ret)
);

TRACE_EVENT(scoutfs_validate_dentry,
	TP_PROTO(struct super_block *sb, struct dentry *dentry, u64 dir_ino, u64 dentry_ino,
		 u64 dent_ino, u64 refresh_gen, int ret),

	TP_ARGS(sb, dentry, dir_ino, dentry_ino, dent_ino, refresh_gen, ret),

	TP_STRUCT__entry(
		SCSB_TRACE_FIELDS
		__field(void *, dentry)
		__field(__u64, dir_ino)
		__string(name, dentry->d_name.name)
		__field(__u64, dentry_ino)
		__field(__u64, dent_ino)
		__field(__u64, fsdata_gen)
		__field(__u64, refresh_gen)
		__field(int, ret)
	),

	TP_fast_assign(
		SCSB_TRACE_ASSIGN(sb);
		__entry->dentry = dentry;
		__entry->dir_ino = dir_ino;
		__assign_str(name, dentry->d_name.name)
		__entry->dentry_ino = dentry_ino;
		__entry->dent_ino = dent_ino;
		__entry->fsdata_gen = (unsigned long long)dentry->d_fsdata;
		__entry->refresh_gen = refresh_gen;
		__entry->ret = ret;
	),

	TP_printk(SCSBF" dentry %p dir %llu name %s dentry_ino %llu dent_ino %llu fsdata_gen %llu refresh_gen %llu ret %d",
		  SCSB_TRACE_ARGS, __entry->dentry, __entry->dir_ino, __get_str(name),
		  __entry->dentry_ino, __entry->dent_ino, __entry->fsdata_gen,
		  __entry->refresh_gen, __entry->ret)
);

DECLARE_EVENT_CLASS(scoutfs_super_lifecycle_class,
@@ -1845,8 +1896,9 @@ DEFINE_EVENT(scoutfs_server_client_count_class, scoutfs_server_client_down,

DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
	TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
		 u32 avail_before, u32 freed_before, int exceeded),
	TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded),
		 u32 avail_before, u32 freed_before, int committing, int exceeded),
	TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing,
		exceeded),
	TP_STRUCT__entry(
		SCSB_TRACE_FIELDS
		__field(int, holding)
@@ -1854,6 +1906,7 @@ DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
		__field(int, nr_holders)
		__field(__u32, avail_before)
		__field(__u32, freed_before)
		__field(int, committing)
		__field(int, exceeded)
	),
	TP_fast_assign(
@@ -1863,31 +1916,33 @@ DECLARE_EVENT_CLASS(scoutfs_server_commit_users_class,
		__entry->nr_holders = nr_holders;
		__entry->avail_before = avail_before;
		__entry->freed_before = freed_before;
		__entry->committing = !!committing;
		__entry->exceeded = !!exceeded;
	),
	TP_printk(SCSBF" holding %u applying %u nr %u avail_before %u freed_before %u exceeded %u",
	TP_printk(SCSBF" holding %u applying %u nr %u avail_before %u freed_before %u committing %u exceeded %u",
		  SCSB_TRACE_ARGS, __entry->holding, __entry->applying, __entry->nr_holders,
		  __entry->avail_before, __entry->freed_before, __entry->exceeded)
		  __entry->avail_before, __entry->freed_before, __entry->committing,
		  __entry->exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_hold,
	TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
		 u32 avail_before, u32 freed_before, int exceeded),
	TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
		 u32 avail_before, u32 freed_before, int committing, int exceeded),
	TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_apply,
	TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
		 u32 avail_before, u32 freed_before, int exceeded),
	TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
		 u32 avail_before, u32 freed_before, int committing, int exceeded),
	TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_start,
	TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
		 u32 avail_before, u32 freed_before, int exceeded),
	TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
		 u32 avail_before, u32 freed_before, int committing, int exceeded),
	TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);
DEFINE_EVENT(scoutfs_server_commit_users_class, scoutfs_server_commit_end,
	TP_PROTO(struct super_block *sb, int holding, int applying, int nr_holders,
		 u32 avail_before, u32 freed_before, int exceeded),
	TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, exceeded)
		 u32 avail_before, u32 freed_before, int committing, int exceeded),
	TP_ARGS(sb, holding, applying, nr_holders, avail_before, freed_before, committing, exceeded)
);

#define slt_symbolic(mode) \
@@ -1969,9 +2024,9 @@ DEFINE_EVENT(scoutfs_quorum_message_class, scoutfs_quorum_recv_message,

TRACE_EVENT(scoutfs_quorum_loop,
	TP_PROTO(struct super_block *sb, int role, u64 term, int vote_for,
		 unsigned long vote_bits, struct timespec64 timeout),
		 unsigned long vote_bits, unsigned long long nsecs),

	TP_ARGS(sb, role, term, vote_for, vote_bits, timeout),
	TP_ARGS(sb, role, term, vote_for, vote_bits, nsecs),

	TP_STRUCT__entry(
		SCSB_TRACE_FIELDS
@@ -1980,8 +2035,7 @@ TRACE_EVENT(scoutfs_quorum_loop,
		__field(int, vote_for)
		__field(unsigned long, vote_bits)
		__field(unsigned long, vote_count)
		__field(unsigned long long, timeout_sec)
		__field(int, timeout_nsec)
		__field(unsigned long long, nsecs)
	),

	TP_fast_assign(
@@ -1991,14 +2045,13 @@ TRACE_EVENT(scoutfs_quorum_loop,
		__entry->vote_for = vote_for;
		__entry->vote_bits = vote_bits;
		__entry->vote_count = hweight_long(vote_bits);
		__entry->timeout_sec = timeout.tv_sec;
		__entry->timeout_nsec = timeout.tv_nsec;
		__entry->nsecs = nsecs;
	),

	TP_printk(SCSBF" term %llu role %d vote_for %d vote_bits 0x%lx vote_count %lu timeout %llu.%u",
	TP_printk(SCSBF" term %llu role %d vote_for %d vote_bits 0x%lx vote_count %lu timeout %llu",
		  SCSB_TRACE_ARGS, __entry->term, __entry->role,
		  __entry->vote_for, __entry->vote_bits, __entry->vote_count,
		  __entry->timeout_sec, __entry->timeout_nsec)
		  __entry->nsecs)
);

TRACE_EVENT(scoutfs_trans_seq_last,

File diff suppressed because it is too large
@@ -75,7 +75,7 @@ u64 scoutfs_server_seq(struct super_block *sb);
u64 scoutfs_server_next_seq(struct super_block *sb);
void scoutfs_server_set_seq_if_greater(struct super_block *sb, u64 seq);

void scoutfs_server_start(struct super_block *sb, u64 term);
void scoutfs_server_start(struct super_block *sb, struct scoutfs_quorum_config *qconf, u64 term);
void scoutfs_server_stop(struct super_block *sb);
void scoutfs_server_stop_wait(struct super_block *sb);
bool scoutfs_server_is_running(struct super_block *sb);

156 kmod/src/srch.c
@@ -30,6 +30,7 @@
#include "client.h"
#include "counters.h"
#include "scoutfs_trace.h"
#include "triggers.h"

/*
 * This srch subsystem gives us a way to find inodes that have a given
@@ -520,6 +521,95 @@ out:
	return ret;
}

/*
 * Padded entries are encoded in pairs after an existing entry.  All of
 * the pairs cancel each other out for all readers (the second encoding
 * looks like a deletion), so they aren't visible in the first/last
 * bounds of the block or file.
 */
static int append_padded_entry(struct scoutfs_srch_file *sfl, u64 blk,
			       struct scoutfs_srch_block *srb, struct scoutfs_srch_entry *sre)
{
	int ret;

	ret = encode_entry(srb->entries + le32_to_cpu(srb->entry_bytes),
			   sre, &srb->tail);
	if (ret > 0) {
		srb->tail = *sre;
		le32_add_cpu(&srb->entry_nr, 1);
		le32_add_cpu(&srb->entry_bytes, ret);
		le64_add_cpu(&sfl->entries, 1);
		ret = 0;
	}

	return ret;
}

/*
 * This is called by a testing trigger to create a very specific case of
 * encoded entry offsets.  We want the last entry in the block to start
 * precisely at the _SAFE_BYTES offset.
 *
 * This is called when there is a single existing entry in the block.
 * We have the entire block to work with.  We encode pairs of matching
 * entries.  This hides them from readers (both searches and merging) as
 * they're interpreted as creation and deletion and are deleted.  We use
 * the existing hash value of the first entry in the block but then set
 * the inode to an impossibly large number so it doesn't interfere with
 * anything.
 *
 * To hit the specific offset we very carefully manage the number of
 * bytes of change between fields in the entry.  We know that if we
 * change all the bytes of the ino and id we end up with a 20 byte
 * (2+8+8,2) encoding of the pair of entries.  To have the last entry
 * start at the _SAFE_BYTES offset we know that the final 20 byte pair
 * encoding needs to end at 2 bytes (second entry encoding) after the
 * _SAFE_BYTES offset.
 *
 * So as we encode pairs we watch the delta of our current offset from
 * that desired final offset of 2 past _SAFE_BYTES.  If we're a multiple
 * of 20 away then we encode the full 20 byte pairs.  If we're not, then
 * we drop a byte to encode 19 bytes.  That'll slowly change the offset
 * to be a multiple of 20 again while encoding large entries.
 */
static void pad_entries_at_safe(struct scoutfs_srch_file *sfl, u64 blk,
				struct scoutfs_srch_block *srb)
{
	struct scoutfs_srch_entry sre;
	u32 target;
	s32 diff;
	u64 hash;
	u64 ino;
	u64 id;
	int ret;

	hash = le64_to_cpu(srb->tail.hash);
	ino = le64_to_cpu(srb->tail.ino) | (1ULL << 62);
	id = le64_to_cpu(srb->tail.id);

	target = SCOUTFS_SRCH_BLOCK_SAFE_BYTES + 2;

	while ((diff = target - le32_to_cpu(srb->entry_bytes)) > 0) {
		ino ^= 1ULL << (7 * 8);
		if (diff % 20 == 0) {
			id ^= 1ULL << (7 * 8);
		} else {
			id ^= 1ULL << (6 * 8);
		}

		sre.hash = cpu_to_le64(hash);
		sre.ino = cpu_to_le64(ino);
		sre.id = cpu_to_le64(id);

		ret = append_padded_entry(sfl, blk, srb, &sre);
		if (ret == 0)
			ret = append_padded_entry(sfl, blk, srb, &sre);
		BUG_ON(ret != 0);

		diff = target - le32_to_cpu(srb->entry_bytes);
	}
}
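The 20/19 byte accounting described in the comment above can be sanity-checked with a standalone sketch (illustration only; the target below is a stand-in for SCOUTFS_SRCH_BLOCK_SAFE_BYTES + 2 rather than the real constant):

#include <stdio.h>

int main(void)
{
	unsigned int target = 100002;	/* stand-in for _SAFE_BYTES + 2 */
	unsigned int entry_bytes = 37;	/* arbitrary starting offset */

	while (entry_bytes < target) {
		unsigned int diff = target - entry_bytes;
		/* full 20-byte pair when the distance divides evenly,
		 * otherwise drop a byte to nudge the remainder */
		entry_bytes += (diff % 20 == 0) ? 20 : 19;
	}
	printf("landed at %u, target %u\n", entry_bytes, target);
	return 0;
}

Each 19-byte step advances the remaining distance mod 20 by one, so after at most 19 such steps the distance is a multiple of 20 and the loop lands exactly on the target.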

/*
 * The caller is dropping an ino/id because the tracking rbtree is full.
 * This loses information so we can't return any entries at or after the
@@ -861,7 +951,6 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
			       struct scoutfs_srch_rb_root *sroot,
			       u64 hash, u64 ino, u64 last_ino, bool *done)
{
	struct scoutfs_net_roots prev_roots;
	struct scoutfs_net_roots roots;
	struct scoutfs_srch_entry start;
	struct scoutfs_srch_entry end;
@@ -869,6 +958,7 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,
	struct scoutfs_log_trees lt;
	struct scoutfs_srch_file sfl;
	SCOUTFS_BTREE_ITEM_REF(iref);
	DECLARE_SAVED_REFS(saved);
	struct scoutfs_key key;
	unsigned long limit = SRCH_LIMIT;
	int ret;
@@ -877,7 +967,6 @@ int scoutfs_srch_search_xattrs(struct super_block *sb,

	*done = false;
	srch_init_rb_root(sroot);
	memset(&prev_roots, 0, sizeof(prev_roots));

	start.hash = cpu_to_le64(hash);
	start.ino = cpu_to_le64(ino);
@@ -892,7 +981,6 @@ retry:
	ret = scoutfs_client_get_roots(sb, &roots);
	if (ret)
		goto out;
	memset(&roots.fs_root, 0, sizeof(roots.fs_root));

	end = final;

@@ -968,16 +1056,10 @@ retry:
	*done = sre_cmp(&end, &final) == 0;
	ret = 0;
out:
	if (ret == -ESTALE) {
		if (memcmp(&prev_roots, &roots, sizeof(roots)) == 0) {
			scoutfs_inc_counter(sb, srch_search_stale_eio);
			ret = -EIO;
		} else {
			scoutfs_inc_counter(sb, srch_search_stale_retry);
			prev_roots = roots;
			goto retry;
		}
	}
	ret = scoutfs_block_check_stale(sb, ret, &saved, &roots.srch_root.ref,
					&roots.logs_root.ref);
	if (ret == -ESTALE)
		goto retry;

	return ret;
}
@@ -995,6 +1077,9 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
	struct scoutfs_key key;
	int ret;

	if (sfl->ref.blkno && !force && scoutfs_trigger(sb, SRCH_FORCE_LOG_ROTATE))
		force = true;

	if (sfl->ref.blkno == 0 ||
	    (!force && le64_to_cpu(sfl->blocks) < SCOUTFS_SRCH_LOG_BLOCK_LIMIT))
		return 0;
@@ -1003,6 +1088,14 @@ int scoutfs_srch_rotate_log(struct super_block *sb,
			  le64_to_cpu(sfl->ref.blkno), 0);
	ret = scoutfs_btree_insert(sb, alloc, wri, root, &key,
				   sfl, sizeof(*sfl));
	/*
	 * While it's fine to replay moving the client's logging srch
	 * file to the core btree item, server commits should keep it
	 * from happening.  So we'll warn if we see it happen.  This can
	 * be removed eventually.
	 */
	if (WARN_ON_ONCE(ret == -EEXIST))
		ret = 0;
	if (ret == 0) {
		memset(sfl, 0, sizeof(*sfl));
		scoutfs_inc_counter(sb, srch_rotate_log);
@@ -1462,7 +1555,7 @@ static int kway_merge(struct super_block *sb,
		      struct scoutfs_block_writer *wri,
		      struct scoutfs_srch_file *sfl,
		      kway_get_t kway_get, kway_advance_t kway_adv,
		      void **args, int nr)
		      void **args, int nr, bool logs_input)
{
	DECLARE_SRCH_INFO(sb, srinf);
	struct scoutfs_srch_block *srb = NULL;
@@ -1567,6 +1660,15 @@ static int kway_merge(struct super_block *sb,
				blk++;
			}

			/* end sorted block on _SAFE offset for testing */
			if (bl && le32_to_cpu(srb->entry_nr) == 1 && logs_input &&
			    scoutfs_trigger(sb, SRCH_COMPACT_LOGS_PAD_SAFE)) {
				pad_entries_at_safe(sfl, blk, srb);
				scoutfs_block_put(sb, bl);
				bl = NULL;
				blk++;
			}

			scoutfs_inc_counter(sb, srch_compact_entry);

		} else {
@@ -1609,6 +1711,8 @@ static int kway_merge(struct super_block *sb,
			empty++;
			ret = 0;
		} else if (ret < 0) {
			if (ret == -ENOANO) /* just testing trigger */
				ret = 0;
			goto out;
		}

@@ -1747,7 +1851,7 @@ static int compact_logs(struct super_block *sb,
			goto out;
		}
		page->private = 0;
		list_add_tail(&page->list, &pages);
		list_add_tail(&page->lru, &pages);
		nr_pages++;
		scoutfs_inc_counter(sb, srch_compact_log_page);
	}
@@ -1800,7 +1904,7 @@ static int compact_logs(struct super_block *sb,

	/* sort page entries and reset private for _next */
	i = 0;
	list_for_each_entry(page, &pages, list) {
	list_for_each_entry(page, &pages, lru) {
		args[i++] = page;

		if (atomic_read(&srinf->shutdown)) {
@@ -1816,12 +1920,12 @@ static int compact_logs(struct super_block *sb,
	}

	ret = kway_merge(sb, alloc, wri, &sc->out, kway_get_page, kway_adv_page,
			 args, nr_pages);
			 args, nr_pages, true);
	if (ret < 0)
		goto out;

	/* make sure we finished all the pages */
	list_for_each_entry(page, &pages, list) {
	list_for_each_entry(page, &pages, lru) {
		sre = page_priv_sre(page);
		if (page->private < SRES_PER_PAGE && sre->ino != 0) {
			ret = -ENOSPC;
@@ -1834,8 +1938,8 @@ static int compact_logs(struct super_block *sb,
out:
	scoutfs_block_put(sb, bl);
	vfree(args);
	list_for_each_entry_safe(page, tmp, &pages, list) {
		list_del(&page->list);
	list_for_each_entry_safe(page, tmp, &pages, lru) {
		list_del(&page->lru);
		__free_page(page);
	}

@@ -1874,12 +1978,18 @@ static int kway_get_reader(struct super_block *sb,
	srb = rdr->bl->data;

	if (rdr->pos > SCOUTFS_SRCH_BLOCK_SAFE_BYTES ||
	    rdr->skip >= SCOUTFS_SRCH_BLOCK_SAFE_BYTES ||
	    rdr->skip > SCOUTFS_SRCH_BLOCK_SAFE_BYTES ||
	    rdr->skip >= le32_to_cpu(srb->entry_bytes)) {
		/* XXX inconsistency */
		return -EIO;
	}

	if (rdr->decoded_bytes == 0 && rdr->pos == SCOUTFS_SRCH_BLOCK_SAFE_BYTES &&
	    scoutfs_trigger(sb, SRCH_MERGE_STOP_SAFE)) {
		/* only used in testing */
		return -ENOANO;
	}

	/* decode entry, possibly skipping start of the block */
	while (rdr->decoded_bytes == 0 || rdr->pos < rdr->skip) {
		ret = decode_entry(srb->entries + rdr->pos,
@@ -1969,7 +2079,7 @@ static int compact_sorted(struct super_block *sb,
	}

	ret = kway_merge(sb, alloc, wri, &sc->out, kway_get_reader,
			 kway_adv_reader, args, nr);
			 kway_adv_reader, args, nr, false);

	sc->flags |= SCOUTFS_SRCH_COMPACT_FLAG_DONE;
	for (i = 0; i < nr; i++) {
@@ -2179,7 +2289,7 @@ out:

	scoutfs_block_writer_forget_all(sb, &wri);
	if (!atomic_read(&srinf->shutdown)) {
		delay = ret == 0 ? 0 : msecs_to_jiffies(SRCH_COMPACT_DELAY_MS);
		delay = (sc->nr > 0 && ret == 0) ? 0 : msecs_to_jiffies(SRCH_COMPACT_DELAY_MS);
		queue_delayed_work(srinf->workq, &srinf->compact_dwork, delay);
	}

@@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/slab.h>
#include <linux/pagemap.h>
#include <linux/magic.h>
@@ -47,6 +48,7 @@
#include "omap.h"
#include "volopt.h"
#include "fence.h"
#include "xattr.h"
#include "scoutfs_trace.h"

static struct dentry *scoutfs_debugfs_root;
@@ -177,7 +179,7 @@ static void scoutfs_put_super(struct super_block *sb)
	/*
	 * Wait for invalidation and iput to finish with any lingering
	 * inode references that escaped the evict_inodes in
	 * generic_shutdown_super.  MS_ACTIVE is clear so final iput
	 * generic_shutdown_super.  SB_ACTIVE is clear so final iput
	 * will always evict.
	 */
	scoutfs_lock_flush_invalidate(sb);
@@ -460,9 +462,8 @@ static int scoutfs_read_supers(struct super_block *sb)
		goto out;
	}

	sbi->fsid = le64_to_cpu(meta_super->hdr.fsid);
	sbi->fmt_vers = le64_to_cpu(meta_super->fmt_vers);
	sbi->super = *meta_super;
out:
	kfree(meta_super);
	kfree(data_super);
@@ -482,8 +483,11 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
	sb->s_magic = SCOUTFS_SUPER_MAGIC;
	sb->s_maxbytes = MAX_LFS_FILESIZE;
	sb->s_op = &scoutfs_super_ops;
	sb->s_d_op = &scoutfs_dentry_ops;
	sb->s_export_op = &scoutfs_export_ops;
	sb->s_flags |= MS_I_VERSION;
	sb->s_xattr = scoutfs_xattr_handlers;
	sb->s_flags |= SB_I_VERSION | SB_POSIXACL;
	sb->s_time_gran = 1;

	/* btree blocks use long lived bh->b_data refs */
	mapping_set_gfp_mask(sb->s_bdev->bd_inode->i_mapping, GFP_NOFS);
@@ -496,7 +500,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)

	ret = assign_random_id(sbi);
	if (ret < 0)
		return ret;
		goto out;

	spin_lock_init(&sbi->next_ino_lock);
	spin_lock_init(&sbi->data_wait_root.lock);
@@ -505,7 +509,7 @@ static int scoutfs_fill_super(struct super_block *sb, void *data, int silent)
	/* parse options early for use during setup */
	ret = scoutfs_options_early_setup(sb, data);
	if (ret < 0)
		return ret;
		goto out;
	scoutfs_options_read(sb, &opts);

	ret = sb_set_blocksize(sb, SCOUTFS_BLOCK_SM_SIZE);
@@ -628,7 +632,6 @@ MODULE_ALIAS_FS("scoutfs");
static void teardown_module(void)
{
	debugfs_remove(scoutfs_debugfs_root);
	scoutfs_dir_exit();
	scoutfs_inode_exit();
	scoutfs_sysfs_exit();
}
@@ -666,21 +669,20 @@ static int __init scoutfs_module_init(void)
		goto out;
	}
	ret = scoutfs_inode_init() ?:
	      scoutfs_dir_init() ?:
	      register_filesystem(&scoutfs_fs_type);
out:
	if (ret)
		teardown_module();
	return ret;
}
module_init(scoutfs_module_init)
module_init(scoutfs_module_init);

static void __exit scoutfs_module_exit(void)
{
	unregister_filesystem(&scoutfs_fs_type);
	teardown_module();
}
module_exit(scoutfs_module_exit)
module_exit(scoutfs_module_exit);

MODULE_AUTHOR("Zach Brown <zab@versity.com>");
MODULE_LICENSE("GPL");

@@ -35,11 +35,10 @@ struct scoutfs_sb_info {
	struct super_block *sb;

	/* assigned once at the start of each mount, read-only */
	u64 fsid;
	u64 rid;
	u64 fmt_vers;

	struct scoutfs_super_block super;

	struct block_device *meta_bdev;

	spinlock_t next_ino_lock;
@@ -135,14 +134,14 @@ static inline bool scoutfs_unmounting(struct super_block *sb)
	(int)(le64_to_cpu(fsid) >> SCSB_SHIFT), \
	(int)(le64_to_cpu(rid) >> SCSB_SHIFT)
#define SCSB_ARGS(sb) \
	(int)(le64_to_cpu(SCOUTFS_SB(sb)->super.hdr.fsid) >> SCSB_SHIFT), \
	(int)(SCOUTFS_SB(sb)->fsid >> SCSB_SHIFT), \
	(int)(SCOUTFS_SB(sb)->rid >> SCSB_SHIFT)
#define SCSB_TRACE_FIELDS \
	__field(__u64, fsid) \
	__field(__u64, rid)
#define SCSB_TRACE_ASSIGN(sb) \
	__entry->fsid = SCOUTFS_HAS_SBI(sb) ? \
			le64_to_cpu(SCOUTFS_SB(sb)->super.hdr.fsid) : 0;\
			SCOUTFS_SB(sb)->fsid : 0; \
	__entry->rid = SCOUTFS_HAS_SBI(sb) ? \
		       SCOUTFS_SB(sb)->rid : 0;
#define SCSB_TRACE_ARGS \

@@ -60,10 +60,9 @@ static ssize_t fsid_show(struct kobject *kobj, struct attribute *attr,
			 char *buf)
{
	struct super_block *sb = KOBJ_TO_SB(kobj, sb_id_kobj);
	struct scoutfs_super_block *super = &SCOUTFS_SB(sb)->super;
	struct scoutfs_sb_info *sbi = SCOUTFS_SB(sb);

	return snprintf(buf, PAGE_SIZE, "%016llx\n",
			le64_to_cpu(super->hdr.fsid));
	return snprintf(buf, PAGE_SIZE, "%016llx\n", sbi->fsid);
}
ATTR_FUNCS_RO(fsid);

@@ -268,7 +267,7 @@ int __init scoutfs_sysfs_init(void)
	return 0;
}

void __exit scoutfs_sysfs_exit(void)
void scoutfs_sysfs_exit(void)
{
	if (scoutfs_kset)
		kset_unregister(scoutfs_kset);

@@ -53,6 +53,6 @@ int scoutfs_setup_sysfs(struct super_block *sb);
void scoutfs_destroy_sysfs(struct super_block *sb);

int __init scoutfs_sysfs_init(void);
void __exit scoutfs_sysfs_exit(void);
void scoutfs_sysfs_exit(void);

#endif

@@ -39,6 +39,9 @@ struct scoutfs_triggers {

static char *names[] = {
	[SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE] = "block_remove_stale",
	[SCOUTFS_TRIGGER_SRCH_COMPACT_LOGS_PAD_SAFE] = "srch_compact_logs_pad_safe",
	[SCOUTFS_TRIGGER_SRCH_FORCE_LOG_ROTATE] = "srch_force_log_rotate",
	[SCOUTFS_TRIGGER_SRCH_MERGE_STOP_SAFE] = "srch_merge_stop_safe",
	[SCOUTFS_TRIGGER_STATFS_LOCK_PURGE] = "statfs_lock_purge",
};

@@ -3,6 +3,9 @@

enum scoutfs_trigger {
	SCOUTFS_TRIGGER_BLOCK_REMOVE_STALE,
	SCOUTFS_TRIGGER_SRCH_COMPACT_LOGS_PAD_SAFE,
	SCOUTFS_TRIGGER_SRCH_FORCE_LOG_ROTATE,
	SCOUTFS_TRIGGER_SRCH_MERGE_STOP_SAFE,
	SCOUTFS_TRIGGER_STATFS_LOCK_PURGE,
	SCOUTFS_TRIGGER_NR,
};

@@ -46,6 +46,23 @@ static struct scoutfs_tseq_entry *tseq_rb_next(struct scoutfs_tseq_entry *ent)
	return rb_entry(node, struct scoutfs_tseq_entry, node);
}

#ifdef KC_RB_TREE_AUGMENTED_COMPUTE_MAX
static bool tseq_compute_total(struct scoutfs_tseq_entry *ent, bool exit)
{
	loff_t total = 1 + tseq_node_total(ent->node.rb_left) +
		       tseq_node_total(ent->node.rb_right);

	if (exit && ent->total == total)
		return true;

	ent->total = total;
	return false;
}

RB_DECLARE_CALLBACKS(static, tseq_rb_callbacks, struct scoutfs_tseq_entry,
		     node, total, tseq_compute_total);
#else

static loff_t tseq_compute_total(struct scoutfs_tseq_entry *ent)
{
	return 1 + tseq_node_total(ent->node.rb_left) +
@@ -53,7 +70,8 @@ static loff_t tseq_compute_total(struct scoutfs_tseq_entry *ent)
}

RB_DECLARE_CALLBACKS(static, tseq_rb_callbacks, struct scoutfs_tseq_entry,
		     node, loff_t, total, tseq_compute_total)
		     node, loff_t, total, tseq_compute_total);
#endif

void scoutfs_tseq_tree_init(struct scoutfs_tseq_tree *tree,
			    scoutfs_tseq_show_t show)

@@ -17,4 +17,15 @@ static inline void down_write_two(struct rw_semaphore *a,
	down_write_nested(b, SINGLE_DEPTH_NESTING);
}

/*
 * When returning shrinker counts from scan_objects, we should steer
 * clear of the magic SHRINK_STOP and SHRINK_EMPTY values, which are
 * near ~0UL.  Hence, we cap the count at LONG_MAX, which is arbitrarily
 * high enough to avoid them.
 */
static inline long shrinker_min_long(long count)
{
	return min(count, LONG_MAX);
}

#endif

343 kmod/src/xattr.c
@@ -15,6 +15,7 @@
#include <linux/dcache.h>
#include <linux/xattr.h>
#include <linux/crc32c.h>
#include <linux/posix_acl.h>

#include "format.h"
#include "inode.h"
@@ -26,6 +27,7 @@
#include "xattr.h"
#include "lock.h"
#include "hash.h"
#include "acl.h"
#include "scoutfs_trace.h"

/*
@@ -79,16 +81,6 @@ static void init_xattr_key(struct scoutfs_key *key, u64 ino, u32 name_hash,
#define SCOUTFS_XATTR_PREFIX "scoutfs."
#define SCOUTFS_XATTR_PREFIX_LEN (sizeof(SCOUTFS_XATTR_PREFIX) - 1)

static int unknown_prefix(const char *name)
{
	return strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
	       strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) &&
	       strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) &&
	       strncmp(name, XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN) &&
	       strncmp(name, SCOUTFS_XATTR_PREFIX, SCOUTFS_XATTR_PREFIX_LEN);
}

#define HIDE_TAG "hide."
#define SRCH_TAG "srch."
#define TOTL_TAG "totl."
@@ -455,22 +447,17 @@ out:
 * Copy the value for the given xattr name into the caller's buffer, if it
 * fits.  Return the bytes copied or -ERANGE if it doesn't fit.
 */
ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
			 size_t size)
int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer, size_t size,
			     struct scoutfs_lock *lck)
{
	struct inode *inode = dentry->d_inode;
	struct scoutfs_inode_info *si = SCOUTFS_I(inode);
	struct super_block *sb = inode->i_sb;
	struct scoutfs_xattr *xat = NULL;
	struct scoutfs_lock *lck = NULL;
	struct scoutfs_key key;
	unsigned int xat_bytes;
	size_t name_len;
	int ret;

	if (unknown_prefix(name))
		return -EOPNOTSUPP;

	name_len = strlen(name);
	if (name_len > SCOUTFS_XATTR_MAX_NAME_LEN)
		return -ENODATA;
@@ -480,10 +467,6 @@ ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
	if (!xat)
		return -ENOMEM;

	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lck);
	if (ret)
		goto out;

	down_read(&si->xattr_rwsem);

	ret = get_next_xattr(inode, &key, xat, xat_bytes, name, name_len, 0, 0, lck);
@@ -509,12 +492,27 @@ ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
	ret = copy_xattr_value(sb, &key, xat, xat_bytes, buffer, size, lck);
unlock:
	up_read(&si->xattr_rwsem);
	scoutfs_unlock(sb, lck, SCOUTFS_LOCK_READ);
out:

	kfree(xat);
	return ret;
}

static int scoutfs_xattr_get(struct dentry *dentry, const char *name, void *buffer, size_t size)
{
	struct inode *inode = dentry->d_inode;
	struct super_block *sb = inode->i_sb;
	struct scoutfs_lock *lock = NULL;
	int ret;

	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_READ, 0, inode, &lock);
	if (ret == 0) {
		ret = scoutfs_xattr_get_locked(inode, name, buffer, size, lock);
		scoutfs_unlock(sb, lock, SCOUTFS_LOCK_READ);
	}

	return ret;
}

void scoutfs_xattr_init_totl_key(struct scoutfs_key *key, u64 *name)
{
	scoutfs_key_set_zeros(key);
@@ -619,30 +617,32 @@ int scoutfs_xattr_combine_totl(void *dst, int dst_len, void *src, int src_len)
 * cause creation to fail if the xattr already exists (_CREATE) or
 * doesn't already exist (_REPLACE).  xattrs can have a zero length
 * value.
 *
 * The caller has acquired cluster locks, holds a transaction, and has
 * dirtied the inode item so that they can update it after we modify it.
 * The caller has to know the tags to acquire cluster locks before
 * holding the transaction, so they pass in the parsed tags, or all 0s
 * for non-scoutfs. prefixes.
 */
static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
			     const void *value, size_t size, int flags)
int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_len,
			     const void *value, size_t size, int flags,
			     const struct scoutfs_xattr_prefix_tags *tgs,
			     struct scoutfs_lock *lck, struct scoutfs_lock *totl_lock,
			     struct list_head *ind_locks)
{
	struct inode *inode = dentry->d_inode;
	struct scoutfs_inode_info *si = SCOUTFS_I(inode);
	struct super_block *sb = inode->i_sb;
	const u64 ino = scoutfs_ino(inode);
	struct scoutfs_xattr_totl_val tval = {0,};
	struct scoutfs_xattr_prefix_tags tgs;
	struct scoutfs_xattr *xat = NULL;
	struct scoutfs_lock *lck = NULL;
	struct scoutfs_lock *totl_lock = NULL;
	size_t name_len = strlen(name);
	struct scoutfs_key totl_key;
	struct scoutfs_key key;
	bool undo_srch = false;
	bool undo_totl = false;
	LIST_HEAD(ind_locks);
	u8 found_parts;
	unsigned int xat_bytes_totl;
	unsigned int xat_bytes;
	unsigned int val_len;
	u64 ind_seq;
	u64 total;
	u64 hash = 0;
	u64 id = 0;
@@ -651,6 +651,9 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,

	trace_scoutfs_xattr_set(sb, name_len, value, size, flags);

	if (WARN_ON_ONCE(tgs->totl && !totl_lock))
		return -EINVAL;

	/* mirror the syscall's errors for large names and values */
	if (name_len > SCOUTFS_XATTR_MAX_NAME_LEN)
		return -ERANGE;
@@ -661,16 +664,10 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
	    (flags & ~(XATTR_CREATE | XATTR_REPLACE)))
		return -EINVAL;

	if (unknown_prefix(name))
		return -EOPNOTSUPP;

	if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
		return -EINVAL;

	if ((tgs.hide | tgs.srch | tgs.totl) && !capable(CAP_SYS_ADMIN))
	if ((tgs->hide | tgs->srch | tgs->totl) && !capable(CAP_SYS_ADMIN))
		return -EPERM;

	if (tgs.totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
	if (tgs->totl && ((ret = parse_totl_key(&totl_key, name, name_len)) != 0))
		return ret;

	/* allocate enough to always read an existing xattr's totl */
@@ -679,51 +676,44 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
	/* but store partial first item that only includes the new xattr's value */
	xat_bytes = first_item_bytes(name_len, size);
	xat = kmalloc(xat_bytes_totl, GFP_NOFS);
	if (!xat) {
		ret = -ENOMEM;
		goto out;
	}

	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
				 SCOUTFS_LKF_REFRESH_INODE, inode, &lck);
	if (ret)
		goto out;
	if (!xat)
		return -ENOMEM;

	down_write(&si->xattr_rwsem);

	/* find an existing xattr to delete, including possible totl value */
	ret = get_next_xattr(inode, &key, xat, xat_bytes_totl, name, name_len, 0, 0, lck);
	if (ret < 0 && ret != -ENOENT)
		goto unlock;
		goto out;

	/* check existence constraint flags */
	if (ret == -ENOENT && (flags & XATTR_REPLACE)) {
		ret = -ENODATA;
		goto unlock;
		goto out;
	} else if (ret >= 0 && (flags & XATTR_CREATE)) {
		ret = -EEXIST;
		goto unlock;
		goto out;
	}

	/* not an error to delete something that doesn't exist */
	if (ret == -ENOENT && !value) {
		ret = 0;
		goto unlock;
		goto out;
	}

	/* s64 count delta if we create or delete */
	if (tgs.totl)
	if (tgs->totl)
		tval.count = cpu_to_le64((u64)!!(value) - (u64)!!(ret != -ENOENT));

	/* found fields in key will also be used */
	found_parts = ret >= 0 ? xattr_nr_parts(xat) : 0;

	if (found_parts && tgs.totl) {
	if (found_parts && tgs->totl) {
		/* parse old totl value before we clobber xat buf */
		val_len = ret - offsetof(struct scoutfs_xattr, name[xat->name_len]);
		ret = parse_totl_u64(&xat->name[xat->name_len], val_len, &total);
		if (ret < 0)
			goto unlock;
			goto out;

		le64_add_cpu(&tval.total, -total);
	}
@@ -742,15 +732,90 @@ static int scoutfs_xattr_set(struct dentry *dentry, const char *name,
		       min(size, SCOUTFS_XATTR_MAX_PART_SIZE -
			   offsetof(struct scoutfs_xattr, name[name_len])));

	if (tgs.totl) {
	if (tgs->totl) {
		ret = parse_totl_u64(value, size, &total);
		if (ret < 0)
			goto unlock;
			goto out;
	}

		le64_add_cpu(&tval.total, total);
	}

	if (tgs->srch && !(found_parts && value)) {
		if (found_parts)
			id = le64_to_cpu(key.skx_id);
		hash = scoutfs_hash64(name, name_len);
		ret = scoutfs_forest_srch_add(sb, hash, ino, id);
		if (ret < 0)
			goto out;
		undo_srch = true;
	}

	if (tgs->totl) {
		ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
		if (ret < 0)
			goto out;
		undo_totl = true;
	}

	if (found_parts && value)
		ret = change_xattr_items(inode, id, xat, xat_bytes, value, size,
					 xattr_nr_parts(xat), found_parts, lck);
	else if (found_parts)
		ret = delete_xattr_items(inode, le64_to_cpu(key.skx_name_hash),
					 le64_to_cpu(key.skx_id), found_parts,
					 lck);
	else
		ret = create_xattr_items(inode, id, xat, xat_bytes, value, size,
					 xattr_nr_parts(xat), lck);
	if (ret < 0)
		goto out;

	/* XXX do these want i_mutex or anything? */
	inode_inc_iversion(inode);
	inode->i_ctime = current_time(inode);
	ret = 0;

out:
	if (ret < 0 && undo_srch) {
		err = scoutfs_forest_srch_add(sb, hash, ino, id);
		BUG_ON(err);
	}
	if (ret < 0 && undo_totl) {
		/* _delta() on dirty items shouldn't fail */
		tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
		tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
		err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
		BUG_ON(err);
	}

	up_write(&si->xattr_rwsem);
	kfree(xat);

	return ret;
}
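The (u64)!!(value) - (u64)!!(ret != -ENOENT) delta above collapses the create/replace/delete cases into one expression. A standalone sketch (illustration only; the helper name is hypothetical):

#include <stdio.h>

/* +1 on create (new value, nothing found), -1 on delete (no value,
 * existing found), 0 on replace (both); hypothetical helper name */
static long totl_count_delta(int have_value, int found_existing)
{
	return (long)!!have_value - (long)!!found_existing;
}

int main(void)
{
	printf("create %+ld replace %+ld delete %+ld\n",
	       totl_count_delta(1, 0), totl_count_delta(1, 1),
	       totl_count_delta(0, 1));
	return 0;
}

This prints create +1, replace +0, delete -1, matching how the totl item's count tracks the number of contributing xattrs.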

static int scoutfs_xattr_set(struct dentry *dentry, const char *name, const void *value,
			     size_t size, int flags)
{
	struct inode *inode = dentry->d_inode;
	struct super_block *sb = inode->i_sb;
	struct scoutfs_xattr_prefix_tags tgs;
	struct scoutfs_lock *totl_lock = NULL;
	struct scoutfs_lock *lck = NULL;
	size_t name_len = strlen(name);
	LIST_HEAD(ind_locks);
	u64 ind_seq;
	int ret;

	if (scoutfs_xattr_parse_tags(name, name_len, &tgs) != 0)
		return -EINVAL;

	ret = scoutfs_lock_inode(sb, SCOUTFS_LOCK_WRITE,
				 SCOUTFS_LKF_REFRESH_INODE, inode, &lck);
	if (ret)
		goto unlock;

	if (tgs.totl) {
		ret = scoutfs_lock_xattr_totl(sb, SCOUTFS_LOCK_WRITE_ONLY, 0, &totl_lock);
		if (ret)
@@ -770,80 +835,126 @@ retry:
	if (ret < 0)
		goto release;

	if (tgs.srch && !(found_parts && value)) {
		if (found_parts)
			id = le64_to_cpu(key.skx_id);
		hash = scoutfs_hash64(name, name_len);
		ret = scoutfs_forest_srch_add(sb, hash, ino, id);
		if (ret < 0)
			goto release;
		undo_srch = true;
	}

	if (tgs.totl) {
		ret = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
		if (ret < 0)
			goto release;
		undo_totl = true;
	}

	if (found_parts && value)
		ret = change_xattr_items(inode, id, xat, xat_bytes, value, size,
					 xattr_nr_parts(xat), found_parts, lck);
	else if (found_parts)
		ret = delete_xattr_items(inode, le64_to_cpu(key.skx_name_hash),
					 le64_to_cpu(key.skx_id), found_parts,
					 lck);
	else
		ret = create_xattr_items(inode, id, xat, xat_bytes, value, size,
					 xattr_nr_parts(xat), lck);
	if (ret < 0)
		goto release;

	/* XXX do these want i_mutex or anything? */
	inode_inc_iversion(inode);
	inode->i_ctime = CURRENT_TIME;
	scoutfs_update_inode_item(inode, lck, &ind_locks);
	ret = 0;
	ret = scoutfs_xattr_set_locked(dentry->d_inode, name, name_len, value, size, flags, &tgs,
				       lck, totl_lock, &ind_locks);
	if (ret == 0)
		scoutfs_update_inode_item(inode, lck, &ind_locks);

release:
	if (ret < 0 && undo_srch) {
		err = scoutfs_forest_srch_add(sb, hash, ino, id);
		BUG_ON(err);
	}
	if (ret < 0 && undo_totl) {
		/* _delta() on dirty items shouldn't fail */
		tval.total = cpu_to_le64(-le64_to_cpu(tval.total));
		tval.count = cpu_to_le64(-le64_to_cpu(tval.count));
		err = apply_totl_delta(sb, &totl_key, &tval, totl_lock);
		BUG_ON(err);
	}

	scoutfs_release_trans(sb);
	scoutfs_inode_index_unlock(sb, &ind_locks);
unlock:
	up_write(&si->xattr_rwsem);
	scoutfs_unlock(sb, lck, SCOUTFS_LOCK_WRITE);
	scoutfs_unlock(sb, totl_lock, SCOUTFS_LOCK_WRITE_ONLY);
out:
	kfree(xat);

	return ret;
}

int scoutfs_setxattr(struct dentry *dentry, const char *name,
		     const void *value, size_t size, int flags)
#ifndef KC_XATTR_STRUCT_XATTR_HANDLER
/*
 * Future kernels have this amazing hack to rewind the name to get the
 * skipped prefix.  We're back in the stone ages without the handler
 * arg, so we Just Know that this is possible.  This will become a
 * compat hook to either call the kernel's xattr_full_name(handler), or
 * our hack to use the flags as the prefix length.
 */
static const char *full_name_hack(const char *name, int len)
{
	if (size == 0)
		value = ""; /* set empty value */
	return name - len;
}
#endif
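The rewind trick above relies on the name pointer the handler receives pointing just past its prefix inside the full name string. A standalone sketch (illustration only, with hypothetical strings):

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *full = "user.example";	/* hypothetical xattr name */
	size_t plen = strlen("user.");		/* prefix length, as stored in .flags */
	const char *skipped = full + plen;	/* what the handler is given */

	/* rewinding by the prefix length recovers the full name */
	printf("handler sees \"%s\", rewound \"%s\"\n", skipped, skipped - plen);
	return 0;
}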

static int scoutfs_xattr_get_handler
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
	(const struct xattr_handler *handler, struct dentry *dentry,
	 struct inode *inode, const char *name, void *value,
	 size_t size)
{
	name = xattr_full_name(handler, name);
#else
	(struct dentry *dentry, const char *name,
	 void *value, size_t size, int handler_flags)
{
	name = full_name_hack(name, handler_flags);
#endif
	return scoutfs_xattr_get(dentry, name, value, size);
}

static int scoutfs_xattr_set_handler
#ifdef KC_XATTR_STRUCT_XATTR_HANDLER
	(const struct xattr_handler *handler, struct dentry *dentry,
	 struct inode *inode, const char *name, const void *value,
	 size_t size, int flags)
{
	name = xattr_full_name(handler, name);
#else
	(struct dentry *dentry, const char *name,
	 const void *value, size_t size, int flags, int handler_flags)
{
	name = full_name_hack(name, handler_flags);
#endif
	return scoutfs_xattr_set(dentry, name, value, size, flags);
}

int scoutfs_removexattr(struct dentry *dentry, const char *name)
{
	return scoutfs_xattr_set(dentry, name, NULL, 0, XATTR_REPLACE);
}
static const struct xattr_handler scoutfs_xattr_user_handler = {
	.prefix = XATTR_USER_PREFIX,
	.flags = XATTR_USER_PREFIX_LEN,
	.get = scoutfs_xattr_get_handler,
	.set = scoutfs_xattr_set_handler,
};

static const struct xattr_handler scoutfs_xattr_scoutfs_handler = {
	.prefix = SCOUTFS_XATTR_PREFIX,
	.flags = SCOUTFS_XATTR_PREFIX_LEN,
	.get = scoutfs_xattr_get_handler,
	.set = scoutfs_xattr_set_handler,
};

static const struct xattr_handler scoutfs_xattr_trusted_handler = {
	.prefix = XATTR_TRUSTED_PREFIX,
	.flags = XATTR_TRUSTED_PREFIX_LEN,
	.get = scoutfs_xattr_get_handler,
	.set = scoutfs_xattr_set_handler,
};

static const struct xattr_handler scoutfs_xattr_security_handler = {
	.prefix = XATTR_SECURITY_PREFIX,
	.flags = XATTR_SECURITY_PREFIX_LEN,
	.get = scoutfs_xattr_get_handler,
	.set = scoutfs_xattr_set_handler,
};

static const struct xattr_handler scoutfs_xattr_acl_access_handler = {
#ifdef KC_XATTR_HANDLER_NAME
	.name = XATTR_NAME_POSIX_ACL_ACCESS,
#else
	.prefix = XATTR_NAME_POSIX_ACL_ACCESS,
#endif
	.flags = ACL_TYPE_ACCESS,
	.get = scoutfs_acl_get_xattr,
	.set = scoutfs_acl_set_xattr,
};

static const struct xattr_handler scoutfs_xattr_acl_default_handler = {
#ifdef KC_XATTR_HANDLER_NAME
	.name = XATTR_NAME_POSIX_ACL_DEFAULT,
#else
	.prefix = XATTR_NAME_POSIX_ACL_DEFAULT,
#endif
	.flags = ACL_TYPE_DEFAULT,
	.get = scoutfs_acl_get_xattr,
	.set = scoutfs_acl_set_xattr,
};

const struct xattr_handler *scoutfs_xattr_handlers[] = {
	&scoutfs_xattr_user_handler,
	&scoutfs_xattr_scoutfs_handler,
	&scoutfs_xattr_trusted_handler,
	&scoutfs_xattr_security_handler,
	&scoutfs_xattr_acl_access_handler,
	&scoutfs_xattr_acl_default_handler,
	NULL
};

ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
			    size_t size, __u32 *hash_pos, __u64 *id_pos,

@@ -1,25 +1,29 @@
#ifndef _SCOUTFS_XATTR_H_
#define _SCOUTFS_XATTR_H_

ssize_t scoutfs_getxattr(struct dentry *dentry, const char *name, void *buffer,
			 size_t size);
int scoutfs_setxattr(struct dentry *dentry, const char *name,
		     const void *value, size_t size, int flags);
int scoutfs_removexattr(struct dentry *dentry, const char *name);
ssize_t scoutfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
			    size_t size, __u32 *hash_pos, __u64 *id_pos,
			    bool e_range, bool show_hidden);

int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
		       struct scoutfs_lock *lock);

struct scoutfs_xattr_prefix_tags {
	unsigned long hide:1,
		      srch:1,
		      totl:1;
};

extern const struct xattr_handler *scoutfs_xattr_handlers[];

int scoutfs_xattr_get_locked(struct inode *inode, const char *name, void *buffer, size_t size,
			     struct scoutfs_lock *lck);
int scoutfs_xattr_set_locked(struct inode *inode, const char *name, size_t name_len,
			     const void *value, size_t size, int flags,
			     const struct scoutfs_xattr_prefix_tags *tgs,
			     struct scoutfs_lock *lck, struct scoutfs_lock *totl_lock,
			     struct list_head *ind_locks);

ssize_t scoutfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
ssize_t scoutfs_list_xattrs(struct inode *inode, char *buffer,
			    size_t size, __u32 *hash_pos, __u64 *id_pos,
			    bool e_range, bool show_hidden);
int scoutfs_xattr_drop(struct super_block *sb, u64 ino,
		       struct scoutfs_lock *lock);

int scoutfs_xattr_parse_tags(const char *name, unsigned int name_len,
			     struct scoutfs_xattr_prefix_tags *tgs);

1 tests/.gitignore (vendored)
@@ -8,3 +8,4 @@ src/bulk_create_paths
src/find_xattrs
src/stage_tmpfile
src/create_xattr_loop
src/o_tmpfile_umask

@@ -10,7 +10,9 @@ BIN := src/createmany \
	src/bulk_create_paths \
	src/stage_tmpfile \
	src/find_xattrs \
	src/create_xattr_loop
	src/create_xattr_loop \
	src/fragmented_data_extents \
	src/o_tmpfile_umask

DEPS := $(wildcard src/*.d)

@@ -35,10 +35,22 @@ t_fail()
t_quiet()
{
	echo "# $*" >> "$T_TMPDIR/quiet.log"
	"$@" > "$T_TMPDIR/quiet.log" 2>&1 || \
	"$@" >> "$T_TMPDIR/quiet.log" 2>&1 || \
		t_fail "quiet command failed"
}

#
# Quietly run a command during a test.  The output is logged but only
# the return code is printed, presumably because the output contains
# a lot of invocation specific text that is difficult to filter.
#
t_rc()
{
	echo "# $*" >> "$T_TMP.rc.log"
	"$@" >> "$T_TMP.rc.log" 2>&1
	echo "rc: $?"
}

#
# redirect test output back to the output of the invoking script instead
# of the compared output.

@@ -18,6 +18,7 @@ t_filter_dmesg()

	# the kernel can just be noisy
	re=" used greatest stack depth: "
	re="$re|sched: RT throttling activated"

	# mkfs/mount checks partition tables
	re="$re|unknown partition table"
@@ -61,6 +62,7 @@ t_filter_dmesg()
	re="$re|scoutfs .* error: meta_super META flag not set"
	re="$re|scoutfs .* error: could not open metadev:.*"
	re="$re|scoutfs .* error: Unknown or malformed option,.*"
	re="$re|scoutfs .* error: invalid quorum_heartbeat_timeout_ms value"

	# in debugging kernels we can slow things down a bit
	re="$re|hrtimer: interrupt took .*"
@@ -81,6 +83,13 @@ t_filter_dmesg()
	re="$re|scoutfs .* error .* freeing merged btree blocks.*.final commit del.upd freeing item"
	re="$re|scoutfs .* error .*reading quorum block.*to update event.*"
	re="$re|scoutfs .* error.*server failed to bind to.*"
	re="$re|scoutfs .* critical transaction commit failure.*"

	# change-devices causes loop device resizing
	re="$re|loop[0-9].* detected capacity change from.*"

	# ignore systemd-journal rotating
	re="$re|systemd-journald.*"

	egrep -v "($re)"
}

@@ -75,6 +75,15 @@ t_fs_nrs()
	seq 0 $((T_NR_MOUNTS - 1))
}

#
# output the fs nrs of quorum nodes, we "know" that
# the quorum nrs are the first consecutive nrs
#
t_quorum_nrs()
{
	seq 0 $((T_QUORUM - 1))
}

#
|
||||
# outputs "1" if the fs number has "1" in its quorum/is_leader file.
|
||||
# All other cases output 0, including the fs nr being a client which
|
||||
@@ -144,7 +153,27 @@ t_mount()
|
||||
test "$nr" -lt "$T_NR_MOUNTS" || \
|
||||
t_fail "fs nr $nr invalid"
|
||||
|
||||
eval t_quiet mount -t scoutfs \$T_O$nr \$T_DB$nr \$T_M$nr
|
||||
eval t_quiet mount -t scoutfs \$T_O$nr\$opt \$T_DB$nr \$T_M$nr
|
||||
}
|
||||
|
||||
#
|
||||
# Mount with an optional mount option string. If the string is empty
|
||||
# then the saved mount options are used. If the string has contents
|
||||
# then it is appended to the end of the saved options with a separating
|
||||
# comma.
|
||||
#
|
||||
# Unlike t_mount this won't inherently fail in t_quiet, errors are
|
||||
# returned so bad options can be tested.
|
||||
#
|
||||
t_mount_opt()
|
||||
{
|
||||
local nr="$1"
|
||||
local opt="${2:+,$2}"
|
||||
|
||||
test "$nr" -lt "$T_NR_MOUNTS" || \
|
||||
t_fail "fs nr $nr invalid"
|
||||
|
||||
eval mount -t scoutfs \$T_O$nr\$opt \$T_DB$nr \$T_M$nr
|
||||
}
|
||||
|
||||
t_umount()
|
||||
@@ -236,6 +265,15 @@ t_trigger_get() {
|
||||
cat "$(t_trigger_path "$nr")/$which"
|
||||
}
|
||||
|
||||
t_trigger_set() {
|
||||
local which="$1"
|
||||
local nr="$2"
|
||||
local val="$3"
|
||||
local path=$(t_trigger_path "$nr")
|
||||
|
||||
echo "$val" > "$path/$which"
|
||||
}

t_trigger_show() {
	local which="$1"
	local string="$2"
@@ -247,9 +285,8 @@ t_trigger_show() {
t_trigger_arm_silent() {
	local which="$1"
	local nr="$2"
	local path=$(t_trigger_path "$nr")

	echo 1 > "$path/$which"
	t_trigger_set "$which" "$nr" 1
}

t_trigger_arm() {
@@ -377,13 +414,21 @@ t_wait_for_leader() {
	done
}

t_get_sysfs_mount_option() {
	local nr="$1"
	local name="$2"
	local opt="$(t_sysfs_path $nr)/mount_options/$name"

	cat "$opt"
}

t_set_sysfs_mount_option() {
	local nr="$1"
	local name="$2"
	local val="$3"
	local opt="$(t_sysfs_path $nr)/mount_options/$name"

	echo "$val" > "$opt"
	echo "$val" > "$opt" 2>/dev/null
}
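
These sysfs helpers pair naturally with the save/restore wrappers in the next
hunk. A sketch of the pattern the data-prealloc test below uses to change an
option for one mount and put everything back on exit:

	t_save_all_sysfs_mount_options data_prealloc_blocks
	trap 't_restore_all_sysfs_mount_options data_prealloc_blocks' EXIT
	t_set_sysfs_mount_option 0 data_prealloc_blocks 32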

t_set_all_sysfs_mount_options() {
@@ -405,7 +450,7 @@ t_save_all_sysfs_mount_options() {

	for i in $(t_fs_nrs); do
		opt="$(t_sysfs_path $i)/mount_options/$name"
		ind="$name_$i"
		ind="${name}_${i}"

		_saved_opts[$ind]="$(cat $opt)"
	done
@@ -417,7 +462,7 @@ t_restore_all_sysfs_mount_options() {
	local i

	for i in $(t_fs_nrs); do
		ind="$name_$i"
		ind="${name}_${i}"

		t_set_sysfs_mount_option $i $name "${_saved_opts[$ind]}"
	done

@@ -47,8 +47,9 @@ four
--- dir within dir
--- overwrite file
--- can't overwrite non-empty dir
mv: cannot move ‘/mnt/test/test/basic-posix-consistency/dir/c/clobber’ to ‘/mnt/test/test/basic-posix-consistency/dir/a/dir’: Directory not empty
mv: cannot move '/mnt/test/test/basic-posix-consistency/dir/c/clobber' to '/mnt/test/test/basic-posix-consistency/dir/a/dir': Directory not empty
--- can overwrite empty dir
--- can rename into root
== path resolution
== inode indexes match after syncing existing
== inode indexes match after copying and syncing

tests/golden/basic-truncate (Normal file)
@@ -0,0 +1,6 @@
== truncate writes zeroed partial end of file block
0000000 0a79 0a79 0a79 0a79 0a79 0a79 0a79 0a79
*
0006144 0000 0000 0000 0000 0000 0000 0000 0000
*
0012288

tests/golden/change-devices (Normal file)
@@ -0,0 +1,27 @@
== make tmp sparse data dev files
== make scratch fs
== small new data device fails
rc: 1
== check sees data device errors
rc: 1
rc: 0
== preparing while mounted fails
rc: 1
== preparing without recovery fails
rc: 1
== check sees metadata errors
rc: 1
rc: 1
== preparing with file data fails
rc: 1
== preparing after emptied
rc: 0
== checks pass
rc: 0
rc: 0
== using prepared
== preparing larger and resizing
rc: 0
equal_prepared
large_prepared
resized larger test rc: 0

tests/golden/data-prealloc (Normal file)
@@ -0,0 +1,330 @@
== initial writes smaller than prealloc grow to prealloc size
/mnt/test/test/data-prealloc/file-1: 7 extents found
/mnt/test/test/data-prealloc/file-2: 7 extents found
== larger files get full prealloc extents
/mnt/test/test/data-prealloc/file-1: 9 extents found
/mnt/test/test/data-prealloc/file-2: 9 extents found
== non-streaming writes with contig have per-block extents
/mnt/test/test/data-prealloc/file-1: 32 extents found
/mnt/test/test/data-prealloc/file-2: 32 extents found
== any writes to region prealloc get full extents
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
== streaming offline writes get full extents either way
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
/mnt/test/test/data-prealloc/file-1: 4 extents found
/mnt/test/test/data-prealloc/file-2: 4 extents found
== goofy preallocation amounts work
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 5 extents found
/mnt/test/test/data-prealloc/file-2: 5 extents found
/mnt/test/test/data-prealloc/file-1: 3 extents found
/mnt/test/test/data-prealloc/file-2: 3 extents found
== block writes into region allocs hole
wrote blk 24
wrote blk 32
wrote blk 40
wrote blk 55
wrote blk 63
wrote blk 71
wrote blk 72
wrote blk 79
wrote blk 80
wrote blk 87
wrote blk 88
wrote blk 95
before:
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 0 at pos 0
wrote blk 0
0.. 1:
1.. 7: unwritten
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 0 at pos 1
wrote blk 15
0.. 1:
1.. 14: unwritten
15.. 1:
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 0 at pos 2
wrote blk 19
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 1 at pos 0
wrote blk 25
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 1 at pos 1
wrote blk 39
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 1 at pos 2
wrote blk 44
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 2 at pos 0
wrote blk 48
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 2 at pos 1
wrote blk 62
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 2 at pos 2
wrote blk 67
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 3 at pos 0
wrote blk 73
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
73.. 1:
74.. 5: unwritten
79.. 2:
87.. 2:
95.. 1: eof
writing into existing 3 at pos 1
wrote blk 86
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
73.. 1:
74.. 5: unwritten
79.. 2:
86.. 1:
87.. 2:
95.. 1: eof
writing into existing 3 at pos 2
wrote blk 92
0.. 1:
1.. 14: unwritten
15.. 1:
16.. 3: unwritten
19.. 1:
20.. 4: unwritten
24.. 1:
25.. 1:
26.. 6: unwritten
32.. 1:
39.. 1:
40.. 1:
44.. 1:
45.. 3: unwritten
48.. 1:
49.. 6: unwritten
55.. 1:
56.. 6: unwritten
62.. 1:
63.. 1:
64.. 3: unwritten
67.. 1:
68.. 3: unwritten
71.. 2:
73.. 1:
74.. 5: unwritten
79.. 2:
86.. 1:
87.. 2:
92.. 1:
93.. 2: unwritten
95.. 1: eof

tests/golden/get-referring-entries (Normal file)
@@ -0,0 +1,18 @@
== root inode returns nothing
== crazy large unused inode does nothing
== basic entry
file
== rename
renamed
== hard link
file
link
== removal
== different dirs
== file types
type b name block
type c name char
type d name dir
type f name file
type l name symlink
== all name lengths work

@@ -17,7 +17,7 @@ ino not found in dseq index
mount 0 contents after mount 1 rm: contents
ino found in dseq index
ino found in dseq index
stat: cannot stat ‘/mnt/test/test/inode-deletion/file’: No such file or directory
stat: cannot stat '/mnt/test/test/inode-deletion/file': No such file or directory
ino not found in dseq index
ino not found in dseq index
== lots of deletions use one open map

tests/golden/large-fragmented-free (Normal file)
@@ -0,0 +1,3 @@
== creating fragmented extents
== unlink file with moved extents to free extents per block
== cleanup

tests/golden/lock-recover-invalidate (Normal file)
@@ -0,0 +1,3 @@
== starting background invalidating read/write load
== 60s of lock recovery during invalidating load
== stopping background load

tests/golden/lock-rever-invalidate (Normal file, empty)

@@ -1,3 +1,11 @@
== non-acl O_TMPFILE creation honors umask
umask 022
fstat after open(0777): 0100755
stat after linkat: 0100755
umask 077
fstat after open(0777): 0100700
stat after linkat: 0100700
== stage from tmpfile
total file size 33669120
00000000 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 |AAAAAAAAAAAAAAAA|
*
@@ -20,10 +20,10 @@ offline waiting should now have two known entries:
data_wait_err found 2 waiters.
offline waiting should now have 0 known entries:
0
dd: error reading ‘/mnt/test/test/offline-extent-waiting/dir/file’: Input/output error
dd: error reading '/mnt/test/test/offline-extent-waiting/dir/file': Input/output error
0+0 records in
0+0 records out
dd: error reading ‘/mnt/test/test/offline-extent-waiting/dir/file’: Input/output error
dd: error reading '/mnt/test/test/offline-extent-waiting/dir/file': Input/output error
0+0 records in
0+0 records out
offline waiting should be empty again:

tests/golden/quorum-heartbeat-timeout (Normal file)
@@ -0,0 +1,5 @@
== bad timeout values fail
== bad mount option fails
== mount option
== sysfs
== reset all options

@@ -7,3 +7,4 @@ found second
== changing metadata must increase meta seq
== changing contents must increase data seq
== make sure dirtying doesn't livelock walk
== concurrent update attempts maintain single entries

tests/golden/srch-safe-merge-pos (Normal file)
@@ -0,0 +1,81 @@
== snapshot errors
== arm compaction triggers
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_merge_stop_safe armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_merge_stop_safe armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_merge_stop_safe armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_merge_stop_safe armed: 1
trigger srch_compact_logs_pad_safe armed: 1
trigger srch_merge_stop_safe armed: 1
== force lots of small rotated log files for compaction
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
trigger srch_force_log_rotate armed: 1
== wait for compaction
== test and disarm compaction triggers
== verify triggers and errors
== cleanup

@@ -40,6 +40,7 @@ generic/092
generic/098
generic/101
generic/104
generic/105
generic/106
generic/107
generic/117
@@ -51,6 +52,7 @@ generic/184
generic/221
generic/228
generic/236
generic/237
generic/245
generic/249
generic/257
@@ -63,6 +65,7 @@ generic/308
generic/309
generic/313
generic/315
generic/319
generic/322
generic/335
generic/336
@@ -72,6 +75,7 @@ generic/342
generic/343
generic/348
generic/360
generic/375
generic/376
generic/377
Not
@@ -237,7 +241,6 @@ generic/312
generic/314
generic/316
generic/317
generic/318
generic/324
generic/326
generic/327
@@ -282,4 +285,4 @@ shared/004
shared/032
shared/051
shared/289
Passed all 75 tests
Passed all 79 tests

@@ -1,5 +1,8 @@
#!/usr/bin/bash

# Force system tools to use ASCII quotes
export LC_ALL=C

#
# XXX
# - could have helper functions for waiting for pids
@@ -58,6 +61,7 @@ $(basename $0) options:
	-m        | Run mkfs on the device before mounting and running
	          | tests. Implies unmounting existing mounts first.
	-n <nr>   | The number of devices and mounts to test.
	-o <opts> | Add option string to all mounts during all tests.
	-P        | Enable trace_printk.
	-p        | Exit script after preparing mounts only, don't run tests.
	-q <nr>   | The first <nr> mounts will be quorum members. Must be
@@ -68,6 +72,7 @@ $(basename $0) options:
	-s        | Skip git repo checkouts.
	-t        | Enable trace events that match the given glob argument.
	          | Multiple options enable multiple globbed events.
	-T <nr>   | Multiply the original trace buffer size by nr during the run.
	-X        | xfstests git repo. Used by tests/xfstests.sh.
	-x        | xfstests git branch to checkout and track.
	-y        | xfstests ./check additional args
@@ -136,6 +141,12 @@ while true; do
		T_NR_MOUNTS="$2"
		shift
		;;
	-o)
		test -n "$2" || die "-o must have option string argument"
		# always appending to existing options
		T_MNT_OPTIONS+=",$2"
		shift
		;;
	-P)
		T_TRACE_PRINTK="1"
		;;
@@ -160,6 +171,11 @@ while true; do
		T_TRACE_GLOB+=("$2")
		shift
		;;
	-T)
		test -n "$2" || die "-T must have trace buffer size multiplier argument"
		T_TRACE_MULT="$2"
		shift
		;;
	-X)
		test -n "$2" || die "-X requires xfstests git repo dir argument"
		T_XFSTESTS_REPO="$2"
@@ -345,6 +361,13 @@ if [ -n "$T_INSMOD" ]; then
	cmd insmod "$T_KMOD/src/scoutfs.ko"
fi

if [ -n "$T_TRACE_MULT" ]; then
	orig_trace_size=$(cat /sys/kernel/debug/tracing/buffer_size_kb)
	mult_trace_size=$((orig_trace_size * T_TRACE_MULT))
	msg "increasing trace buffer size from $orig_trace_size KiB to $mult_trace_size KiB"
	echo $mult_trace_size > /sys/kernel/debug/tracing/buffer_size_kb
fi

nr_globs=${#T_TRACE_GLOB[@]}
if [ $nr_globs -gt 0 ]; then
	echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
@@ -374,6 +397,7 @@ fi
# always describe tracing in the logs
cmd cat /sys/kernel/debug/tracing/set_event
cmd grep . /sys/kernel/debug/tracing/options/trace_printk \
	/sys/kernel/debug/tracing/buffer_size_kb \
	/proc/sys/kernel/ftrace_dump_on_oops

#
@@ -430,6 +454,7 @@ for i in $(seq 0 $((T_NR_MOUNTS - 1))); do
	if [ "$i" -lt "$T_QUORUM" ]; then
		opts="$opts,quorum_slot_nr=$i"
	fi
	opts="${opts}${T_MNT_OPTIONS}"

	msg "mounting $meta_dev|$data_dev on $dir"
	cmd mount -t scoutfs $opts "$data_dev" "$dir" &
@@ -604,6 +629,9 @@ if [ -n "$T_TRACE_GLOB" -o -n "$T_TRACE_PRINTK" ]; then
	echo 0 > /sys/kernel/debug/tracing/events/scoutfs/enable
	echo 0 > /sys/kernel/debug/tracing/options/trace_printk
	cat /sys/kernel/debug/tracing/trace > "$T_RESULTS/traces"
	if [ -n "$orig_trace_size" ]; then
		echo $orig_trace_size > /sys/kernel/debug/tracing/buffer_size_kb
	fi
fi

if [ "$skipped" == 0 -a "$failed" == 0 ]; then

@@ -5,11 +5,16 @@ inode-items-updated.sh
simple-inode-index.sh
simple-staging.sh
simple-release-extents.sh
get-referring-entries.sh
fallocate.sh
basic-truncate.sh
data-prealloc.sh
setattr_more.sh
offline-extent-waiting.sh
move-blocks.sh
large-fragmented-free.sh
enospc.sh
srch-safe-merge-pos.sh
srch-basic-functionality.sh
simple-xattr-unit.sh
totl-xattr-tag.sh
@@ -17,13 +22,14 @@ lock-refleak.sh
lock-shrink-consistency.sh
lock-pr-cw-conflict.sh
lock-revoke-getcwd.sh
lock-recover-invalidate.sh
export-lookup-evict-race.sh
createmany-parallel.sh
createmany-large-names.sh
createmany-rename-large-dir.sh
stage-release-race-alloc.sh
stage-multi-part.sh
stage-tmpfile.sh
o_tmpfile.sh
basic-posix-consistency.sh
dirent-consistency.sh
mkdir-rename-rmdir.sh
@@ -32,7 +38,9 @@ cross-mount-data-free.sh
persistent-item-vers.sh
setup-error-teardown.sh
resize-devices.sh
change-devices.sh
fence-and-reclaim.sh
quorum-heartbeat-timeout.sh
orphan-inodes.sh
mount-unmount-race.sh
client-unmount-recovery.sh

tests/src/fragmented_data_extents.c (Normal file)
@@ -0,0 +1,113 @@
/*
 * Copyright (C) 2021 Versity Software, Inc. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 */

/*
 * This creates fragmented data extents.
 *
 * A file is created that has alternating free and allocated extents.
 * This also results in the global allocator having the matching
 * fragmented free extent pattern. While that file is being created,
 * occasionally an allocated extent is moved to another file. This
 * results in a file that has fragmented extents at a given stride that
 * can be deleted to create free data extents with a given stride.
 *
 * We don't have hole punching so to do this quickly we use a goofy
 * combination of fallocate, truncate, and our move_blocks ioctl.
 */

#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <linux/types.h>
#include <assert.h>

#include "ioctl.h"

#define BLOCK_SIZE 4096

int main(int argc, char **argv)
{
	struct scoutfs_ioctl_move_blocks mb = {0,};
	unsigned long long freed_extents;
	unsigned long long move_stride;
	unsigned long long i;
	int alloc_fd;
	int trunc_fd;
	off_t off;
	int ret;

	if (argc != 5) {
		printf("%s <freed_extents> <move_stride> <alloc_file> <trunc_file>\n", argv[0]);
		return 1;
	}

	freed_extents = strtoull(argv[1], NULL, 0);
	move_stride = strtoull(argv[2], NULL, 0);

	alloc_fd = open(argv[3], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
	if (alloc_fd == -1) {
		fprintf(stderr, "error opening %s: %d (%s)\n", argv[3], errno, strerror(errno));
		exit(1);
	}

	trunc_fd = open(argv[4], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
	if (trunc_fd == -1) {
		fprintf(stderr, "error opening %s: %d (%s)\n", argv[4], errno, strerror(errno));
		exit(1);
	}

	for (i = 0, off = 0; i < freed_extents; i++, off += BLOCK_SIZE * 2) {

		ret = fallocate(alloc_fd, 0, off, BLOCK_SIZE * 2);
		if (ret < 0) {
			fprintf(stderr, "fallocate at off %llu error: %d (%s)\n",
				(unsigned long long)off, errno, strerror(errno));
			exit(1);
		}

		ret = ftruncate(alloc_fd, off + BLOCK_SIZE);
		if (ret < 0) {
			fprintf(stderr, "truncate to off %llu error: %d (%s)\n",
				(unsigned long long)off + BLOCK_SIZE, errno, strerror(errno));
			exit(1);
		}

		if ((i % move_stride) == 0) {
			mb.from_fd = alloc_fd;
			mb.from_off = off;
			mb.len = BLOCK_SIZE;
			mb.to_off = i * BLOCK_SIZE;

			ret = ioctl(trunc_fd, SCOUTFS_IOC_MOVE_BLOCKS, &mb);
			if (ret < 0) {
				fprintf(stderr, "move from off %llu error: %d (%s)\n",
					(unsigned long long)off,
					errno, strerror(errno));
			}
		}
	}

	if (alloc_fd > -1)
		close(alloc_fd);
	if (trunc_fd > -1)
		close(trunc_fd);

	return 0;
}

@@ -48,7 +48,7 @@ struct our_handle {
static void exit_usage(void)
{
	printf(" -h/-?		output this usage message and exit\n"
	       " -e		keep trying on enoent, consider success an error\n"
	       " -e		keep trying on enoent and estale, consider success an error\n"
	       " -i <num>	64bit inode number for handle open, can be multiple\n"
	       " -m <string>	scoutfs mount path string for ioctl fd\n"
	       " -n <string>	optional xattr name string, defaults to \""DEFAULT_NAME"\"\n"
@@ -149,7 +149,7 @@ int main(int argc, char **argv)

	fd = open_by_handle_at(mntfd, &handle.handle, O_RDWR);
	if (fd == -1) {
		if (!enoent_success_err || errno != ENOENT) {
		if (!enoent_success_err || (errno != ENOENT && errno != ESTALE)) {
			perror("open_by_handle_at");
			return 1;
		}

tests/src/o_tmpfile_umask.c (Normal file)
@@ -0,0 +1,97 @@
/*
 * Show the modes of files as we create them with O_TMPFILE and link
 * them into the namespace.
 *
 * Copyright (C) 2022 Versity Software, Inc. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 */

#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/stat.h>
#include <assert.h>
#include <limits.h>

static void linkat_tmpfile_modes(char *dir, char *lpath, mode_t mode)
{
	char proc_self[PATH_MAX];
	struct stat st;
	int ret;
	int fd;

	umask(mode);
	printf("umask 0%o\n", mode);

	fd = open(dir, O_RDWR | O_TMPFILE, 0777);
	if (fd < 0) {
		perror("open(O_TMPFILE)");
		exit(1);
	}

	ret = fstat(fd, &st);
	if (ret < 0) {
		perror("fstat");
		exit(1);
	}

	printf("fstat after open(0777): 0%o\n", st.st_mode);

	snprintf(proc_self, sizeof(proc_self), "/proc/self/fd/%d", fd);

	ret = linkat(AT_FDCWD, proc_self, AT_FDCWD, lpath, AT_SYMLINK_FOLLOW);
	if (ret < 0) {
		perror("linkat");
		exit(1);
	}

	close(fd);

	ret = stat(lpath, &st);
	if (ret < 0) {
		perror("fstat");
		exit(1);
	}

	printf("stat after linkat: 0%o\n", st.st_mode);

	ret = unlink(lpath);
	if (ret < 0) {
		perror("unlink");
		exit(1);
	}
}

int main(int argc, char **argv)
{
	char *lpath;
	char *dir;

	if (argc < 3) {
		printf("%s <open_dir> <linkat_path>\n", argv[0]);
		return 1;
	}

	dir = argv[1];
	lpath = argv[2];

	linkat_tmpfile_modes(dir, lpath, 022);
	linkat_tmpfile_modes(dir, lpath, 077);

	return 0;
}

@@ -12,7 +12,7 @@ mount_fail()
}

echo "== prepare devices, mount point, and logs"
SCR="/mnt/scoutfs.extra"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"
> $T_TMP.mount.out
scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 \

@@ -149,6 +149,10 @@ find "$T_D0/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.0"
find "$T_D1/dir" -ls 2>&1 | t_filter_fs > "$T_TMP.1"
diff -u "$T_TMP.0" "$T_TMP.1"
rm -rf "$T_D0/dir"
echo "--- can rename into root"
touch "$T_D0/rename-into-root"
mv "$T_D0/rename-into-root" "$T_M0/"
rm -f "$T_M0/rename-into-root"

echo "== path resolution"
touch "$T_D0/file"

tests/tests/basic-truncate.sh (Normal file)
@@ -0,0 +1,26 @@
#
# Test basic correctness of truncate.
#

t_require_commands yes dd od truncate

FILE="$T_D0/file"

#
# We forgot to write a dirty block that zeroed the tail of a partial
# final block as we truncated past it.
#
echo "== truncate writes zeroed partial end of file block"
yes | dd of="$FILE" bs=8K count=1 status=none iflag=fullblock
sync

# not passing iflag=fullblock causes the file occasionally to just be
# 4K, so just to be safe we should at least check size once
test `stat --printf="%s\n" "$FILE"` -eq 8192 || t_fail "test file incorrect start size"

truncate -s 6K "$FILE"
truncate -s 12K "$FILE"
echo 3 > /proc/sys/vm/drop_caches
od -Ad -x "$FILE"

t_pass

tests/tests/change-devices.sh (Normal file)
@@ -0,0 +1,76 @@
#
# test changing devices
#

echo "== make tmp sparse data dev files"
sz=$(blockdev --getsize64 "$T_EX_DATA_DEV")
large_sz=$((sz * 2))
touch "$T_TMP."{small,equal,large}
truncate -s 1MB "$T_TMP.small"
truncate -s $sz "$T_TMP.equal"
truncate -s $large_sz "$T_TMP.large"

echo "== make scratch fs"
t_quiet scoutfs mkfs -f -Q 0,127.0.0.1,53000 "$T_EX_META_DEV" "$T_EX_DATA_DEV"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"

echo "== small new data device fails"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.small"

echo "== check sees data device errors"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV" "$T_TMP.small"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV"

echo "== preparing while mounted fails"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$T_EX_DATA_DEV" "$SCR"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.equal"
umount "$SCR"

echo "== preparing without recovery fails"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$T_EX_DATA_DEV" "$SCR"
umount -f "$SCR"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.equal"

echo "== check sees metadata errors"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV" "$T_TMP.equal"

echo "== preparing with file data fails"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$T_EX_DATA_DEV" "$SCR"
echo hi > "$SCR"/file
umount "$SCR"
scoutfs print "$T_EX_META_DEV" > "$T_TMP.print"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.equal"

echo "== preparing after emptied"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$T_EX_DATA_DEV" "$SCR"
rm -f "$SCR"/file
umount "$SCR"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.equal"

echo "== checks pass"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV"
t_rc scoutfs prepare-empty-data-device --check "$T_EX_META_DEV" "$T_TMP.equal"

echo "== using prepared"
scr_loop=$(losetup --find --show "$T_TMP.equal")
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$scr_loop" "$SCR"
touch "$SCR"/equal_prepared
equal_tot=$(scoutfs statfs -s total_data_blocks -p "$SCR")
umount "$SCR"
losetup -d "$scr_loop"

echo "== preparing larger and resizing"
t_rc scoutfs prepare-empty-data-device "$T_EX_META_DEV" "$T_TMP.large"
scr_loop=$(losetup --find --show "$T_TMP.large")
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 "$scr_loop" "$SCR"
touch "$SCR"/large_prepared
ls "$SCR"
scoutfs resize-devices -p "$SCR" -d $large_sz
large_tot=$(scoutfs statfs -s total_data_blocks -p "$SCR")
test "$large_tot" -gt "$equal_tot" ; echo "resized larger test rc: $?"
umount "$SCR"
losetup -d "$scr_loop"

t_pass

tests/tests/data-prealloc.sh (Normal file)
@@ -0,0 +1,231 @@
#
# test that the data prealloc options behave as expected. We write to
# two files a block at a time so that a single file doesn't naturally
# merge adjacent consecutive allocations. (we don't have multiple
# allocation cursors)
#
t_require_commands scoutfs stat filefrag dd touch truncate

write_block()
{
	local file="$1"
	local blk="$2"

	dd if=/dev/zero of="$file" bs=4096 seek=$blk count=1 conv=notrunc status=none
	echo "wrote blk $blk"
}

write_forwards()
{
	local prefix="$1"
	local nr="$2"
	local blk

	touch "$prefix"-{1,2}
	truncate -s 0 "$prefix"-{1,2}

	for blk in $(seq 0 1 $((nr - 1))); do
		dd if=/dev/zero of="$prefix"-1 bs=4096 seek=$blk count=1 conv=notrunc status=none
		dd if=/dev/zero of="$prefix"-2 bs=4096 seek=$blk count=1 conv=notrunc status=none
	done
}

write_backwards()
{
	local prefix="$1"
	local nr="$2"
	local blk

	touch "$prefix"-{1,2}
	truncate -s 0 "$prefix"-{1,2}

	for blk in $(seq $((nr - 1)) -1 0); do
		dd if=/dev/zero of="$prefix"-1 bs=4096 seek=$blk count=1 conv=notrunc status=none
		dd if=/dev/zero of="$prefix"-2 bs=4096 seek=$blk count=1 conv=notrunc status=none
	done
}

release_files() {
	local prefix="$1"
	local size=$(($2 * 4096))
	local vers
	local f

	for f in "$prefix"*; do
		size=$(stat -c "%s" "$f")
		vers=$(scoutfs stat -s data_version "$f")
		scoutfs release "$f" -V "$vers" -o 0 -l $size
	done
}

stage_files() {
	local prefix="$1"
	local nr="$2"
	local vers
	local f

	for blk in $(seq 0 1 $((nr - 1))); do
		for f in "$prefix"*; do
			vers=$(scoutfs stat -s data_version "$f")
			scoutfs stage /dev/zero "$f" -V "$vers" -o $((blk * 4096)) -l 4096
		done
	done
}

print_extents_found()
{
	local prefix="$1"

	filefrag "$prefix"* 2>&1 | grep "extent.*found" | t_filter_fs
}

#
# print the logical start, len, and flags if they're there.
#
print_logical_extents()
{
	local file="$1"

	filefrag -v -b4096 "$file" 2>&1 | t_filter_fs | awk '
		($1 ~ /[0-9]+:/) {
			if ($NF !~ /[0-9]+:/) {
				flags=$NF
			} else {
				flags=""
			}
			print $2, $6, flags
		}
	' | sed 's/last,eof/eof/'
}
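
For context, a hedged sketch of what this filter consumes and emits; the exact
filefrag -v column layout varies across e2fsprogs versions, so the row below
is illustrative rather than authoritative:

	# filefrag -v -b4096 rows look roughly like:
	#  ext:  logical_offset:  physical_offset:  length:  expected:  flags:
	#    0:      24..    24:    12345..  12345:      1:             last,eof
	# the awk keeps field 2 (logical start), field 6 (length), and trailing
	# flags, and the sed collapses "last,eof", printing:
	#   24.. 1: eof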

t_save_all_sysfs_mount_options data_prealloc_blocks
t_save_all_sysfs_mount_options data_prealloc_contig_only
restore_options()
{
	t_restore_all_sysfs_mount_options data_prealloc_blocks
	t_restore_all_sysfs_mount_options data_prealloc_contig_only
}
trap restore_options EXIT

prefix="$T_D0/file"

echo "== initial writes smaller than prealloc grow to prealloc size"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 64
print_extents_found $prefix

echo "== larger files get full prealloc extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 128
print_extents_found $prefix

echo "== non-streaming writes with contig have per-block extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 32
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_backwards $prefix 32
print_extents_found $prefix

echo "== any writes to region prealloc get full extents"
t_set_sysfs_mount_option 0 data_prealloc_blocks 16
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 64
print_extents_found $prefix
write_backwards $prefix 64
print_extents_found $prefix

echo "== streaming offline writes get full extents either way"
t_set_sysfs_mount_option 0 data_prealloc_blocks 16
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 64
release_files $prefix 64
stage_files $prefix 64
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
release_files $prefix 64
stage_files $prefix 64
print_extents_found $prefix

echo "== goofy preallocation amounts work"
t_set_sysfs_mount_option 0 data_prealloc_blocks 7
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
write_forwards $prefix 14
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_blocks 13
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 53
print_extents_found $prefix
t_set_sysfs_mount_option 0 data_prealloc_blocks 1
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
write_forwards $prefix 3
print_extents_found $prefix

#
# prepare aligned regions of 8 blocks that we'll write into.
# We'll write into the first, last, and middle block of each
# region which was prepared with no existing extents, one at
# the start, and one at the end.
#
# Let's keep this last because it creates a ton of output to read
# through. The correct output is tied to preallocation strategy so it
# has to be verified each time we change preallocation.
#
echo "== block writes into region allocs hole"
t_set_sysfs_mount_option 0 data_prealloc_blocks 8
t_set_sysfs_mount_option 0 data_prealloc_contig_only 1
touch "$prefix"
truncate -s 0 "$prefix"

# write initial blocks in regions
base=0
for sides in 0 1 2 3; do
	for i in 0 1 2; do
		case "$sides" in
		# none
		0) ;;
		# left
		1) write_block $prefix $((base + 0)) ;;
		# right
		2) write_block $prefix $((base + 7)) ;;
		# both
		3) write_block $prefix $((base + 0))
		   write_block $prefix $((base + 7)) ;;
		esac
		((base+=8))
	done
done

echo before:
print_logical_extents "$prefix"

# now write into the first, middle, and last empty block of each
t_set_sysfs_mount_option 0 data_prealloc_contig_only 0
base=0
for sides in 0 1 2 3; do
	for i in 0 1 2; do
		echo "writing into existing $sides at pos $i"
		case "$sides" in
		# none
		0) left=$base; right=$((base + 7));;
		# left
		1) left=$((base + 1)); right=$((base + 7));;
		# right
		2) left=$((base)); right=$((base + 6));;
		# both
		3) left=$((base + 1)); right=$((base + 6));;
		esac
		case "$i" in
		# start
		0) write_block $prefix $left ;;
		# end
		1) write_block $prefix $right ;;
		# mid (both has 6 blocks internally)
		2) write_block $prefix $((left + 3)) ;;
		esac
		print_logical_extents "$prefix"
		((base+=8))
	done
done

t_pass

@@ -59,7 +59,7 @@ echo "== make small meta fs"
# meta device just big enough for reserves and the metadata we'll fill
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m 10G "$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
	t_fail "mkfs failed"
SCR="/mnt/scoutfs.enospc"
SCR="$T_TMPDIR/mnt.scratch"
mkdir -p "$SCR"
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
	"$T_EX_DATA_DEV" "$SCR"

@@ -7,14 +7,11 @@ t_require_mounts 2

#
# Make sure that all mounts can read the results of a write from each
# mount.  And make sure that the greatest of all the written seqs is
# visible after the writes were commited by remote reads.
# mount.
#
check_read_write()
{
	local expected
	local greatest=0
	local seq
	local path
	local saw
	local w
@@ -25,11 +22,6 @@ check_read_write()
	eval path="\$T_D${w}/written"
	echo "$expected" > "$path"

	seq=$(scoutfs stat -s meta_seq $path)
	if [ "$seq" -gt "$greatest" ]; then
		greatest=$seq
	fi

	for r in $(t_fs_nrs); do
		eval path="\$T_D${r}/written"
		saw=$(cat "$path")
@@ -38,11 +30,6 @@ check_read_write()
		fi
	done
done

seq=$(scoutfs statfs -s committed_seq -p $T_D0)
if [ "$seq" -lt "$greatest" ]; then
	echo "committed_seq $seq less than greatest $greatest"
fi
}

# verify that fenced ran our testing fence script

tests/tests/get-referring-entries.sh (Normal file)
@@ -0,0 +1,99 @@

#
# Test _GET_REFERRING_ENTRIES ioctl via the get-referring-entries cli
# command
#

# consistently print only entry names
filter_names() {
	exec cut -d ' ' -f 8- | sort
}

# print entries with type characters to match find. not happy with hard
# coding, but abi won't change much.
filter_types() {
	exec cut -d ' ' -f 5- | \
		sed \
		-e 's/type 1 /type p /' \
		-e 's/type 2 /type c /' \
		-e 's/type 4 /type d /' \
		-e 's/type 6 /type b /' \
		-e 's/type 8 /type f /' \
		-e 's/type 10 /type l /' \
		-e 's/type 12 /type s /' \
		| \
		sort
}
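
For reference, the numeric types remapped here are the kernel's DT_* directory
entry type constants (DT_FIFO=1, DT_CHR=2, DT_DIR=4, DT_BLK=6, DT_REG=8,
DT_LNK=10, DT_SOCK=12), which is why the hard coding is safe in practice: the
values are part of the kernel ABI and effectively never change.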

n_chars() {
	local n="$1"
	printf 'A%.0s' $(eval echo {1..\$n})
}
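
A quick sketch of the brace-expansion trick n_chars relies on: printf repeats
its format once per argument, and the %.0s conversion consumes each expanded
word while printing zero characters of it, so only the literal 'A' repeats.

	name=$(n_chars 255)   # builds a NAME_MAX length entry name
	echo ${#name}         # prints 255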

GRE="scoutfs get-referring-entries -p $T_M0"

echo "== root inode returns nothing"
$GRE 1

echo "== crazy large unused inode does nothing"
$GRE 4611686018427387904 # 1 << 62

echo "== basic entry"
touch $T_D0/file
ino=$(stat -c '%i' $T_D0/file)
$GRE $ino | filter_names

echo "== rename"
mv $T_D0/file $T_D0/renamed
$GRE $ino | filter_names

echo "== hard link"
mv $T_D0/renamed $T_D0/file
ln $T_D0/file $T_D0/link
$GRE $ino | filter_names

echo "== removal"
rm $T_D0/file $T_D0/link
$GRE $ino

echo "== different dirs"
touch $T_D0/file
ino=$(stat -c '%i' $T_D0/file)
for i in $(seq 1 10); do
	mkdir $T_D0/dir-$i
	ln $T_D0/file $T_D0/dir-$i/file-$i
done
diff -u <(find $T_D0 -type f -printf '%f\n' | sort) <($GRE $ino | filter_names)
rm $T_D0/file

echo "== file types"
mkdir $T_D0/dir
touch $T_D0/dir/file
mkdir $T_D0/dir/dir
ln -s $T_D0/dir/file $T_D0/dir/symlink
mknod $T_D0/dir/char c 1 3 # null
mknod $T_D0/dir/block b 7 0 # loop0
for name in $(ls -UA $T_D0/dir | sort); do
	ino=$(stat -c '%i' $T_D0/dir/$name)
	$GRE $ino | filter_types
done
rm -rf $T_D0/dir

echo "== all name lengths work"
mkdir $T_D0/dir
touch $T_D0/dir/file
ino=$(stat -c '%i' $T_D0/dir/file)
name=""
> $T_TMP.unsorted
for i in $(seq 1 255); do
	name+="a"
	echo "$name" >> $T_TMP.unsorted
	ln $T_D0/dir/file $T_D0/dir/$name
done
sort $T_TMP.unsorted > $T_TMP.sorted
rm $T_D0/dir/file
$GRE $ino | filter_names > $T_TMP.gre
diff -u $T_TMP.sorted $T_TMP.gre
rm -rf $T_D0/dir

t_pass

@@ -72,7 +72,7 @@ check_ino_index "$ino" "$dseq" "$T_M0"
check_ino_index "$ino" "$dseq" "$T_M1"
exec {FD}>&- # close
# we know that revalidating will unhash the remote dentry
stat "$T_D0/file" 2>&1 | t_filter_fs
stat "$T_D0/file" 2>&1 | sed 's/cannot statx/cannot stat/' | t_filter_fs
check_ino_index "$ino" "$dseq" "$T_M0"
check_ino_index "$ino" "$dseq" "$T_M1"

tests/tests/large-fragmented-free.sh (Normal file)
@@ -0,0 +1,22 @@
#
# Make sure the server can handle a transaction with a data_freed whose
# blocks all hit different btree blocks in the main free list. It
# probably has to be merged in multiple commits.
#

t_require_commands fragmented_data_extents

EXTENTS_PER_BTREE_BLOCK=600
EXTENTS_PER_LIST_BLOCK=8192
FREED_EXTENTS=$((EXTENTS_PER_BTREE_BLOCK * EXTENTS_PER_LIST_BLOCK))

echo "== creating fragmented extents"
fragmented_data_extents $FREED_EXTENTS $EXTENTS_PER_BTREE_BLOCK "$T_D0/alloc" "$T_D0/move"

echo "== unlink file with moved extents to free extents per block"
rm -f "$T_D0/move"

echo "== cleanup"
rm -f "$T_D0/alloc"

t_pass

tests/tests/lock-recover-invalidate.sh (Normal file)
@@ -0,0 +1,43 @@
#
# trigger server failover and lock recovery during heavy invalidating
# load on multiple mounts
#

majority_nr=$(t_majority_count)
quorum_nr=$T_QUORUM

test "$quorum_nr" == "$majority_nr" && \
	t_skip "need remaining majority when leader unmounted"

test "$T_NR_MOUNTS" -lt "$((quorum_nr + 2))" && \
	t_skip "need at least 2 non-quorum load mounts"

echo "== starting background invalidating read/write load"
touch "$T_D0/file"
load_pids=""
for i in $(t_fs_nrs); do
	if [ "$i" -ge "$quorum_nr" ]; then
		eval path="\$T_D${i}/file"

		(while true; do touch $path > /dev/null 2>&1; done) &
		load_pids="$load_pids $!"
		(while true; do stat $path > /dev/null 2>&1; done) &
		load_pids="$load_pids $!"
	fi
done

# had it reproduce in ~40s on wimpy debug kernel guests
LENGTH=60
echo "== ${LENGTH}s of lock recovery during invalidating load"
END=$((SECONDS + LENGTH))
while [ "$SECONDS" -lt "$END" ]; do
	sv=$(t_server_nr)
	t_umount $sv
	t_mount $sv
	# new server had to process greeting for mount to finish
done

echo "== stopping background load"
kill $load_pids

t_pass

tests/tests/o_tmpfile.sh (Normal file)
@@ -0,0 +1,16 @@
#
# basic tests of O_TMPFILE
#

t_require_commands stage_tmpfile hexdump

echo "== non-acl O_TMPFILE creation honors umask"
o_tmpfile_umask "$T_D0" "$T_D0/umask-file"

echo "== stage from tmpfile"
DEST_FILE="$T_D0/dest_file"
stage_tmpfile $T_D0 $DEST_FILE
hexdump -C "$DEST_FILE"
rm -f "$DEST_FILE"

t_pass

tests/tests/quorum-heartbeat-timeout.sh (Normal file)
@@ -0,0 +1,117 @@
#
# test that the quorum_heartbeat_timeout_ms option affects how long it
# takes to recover from a failed mount.
#

t_require_mounts 2

time_ms()
{
	# time_t in seconds, then truncate nanoseconds to the 3 most significant digits
	date +%s%3N
}
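
A brief sketch of how this millisecond clock gets used below: date's %3N gives
the first three digits of the nanosecond field, so subtracting two samples
yields elapsed wall time in milliseconds.

	start=$(time_ms)
	t_wait_for_leader
	delay=$(($(time_ms) - start))   # roughly the failover time in ms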
|
||||
|
||||
set_bad_timeout() {
|
||||
local to="$1"
|
||||
t_set_sysfs_mount_option 0 quorum_heartbeat_timeout_ms $to && \
|
||||
t_fail "set bad q hb to $to"
|
||||
}
|
||||
|
||||
set_timeout()
|
||||
{
|
||||
local nr="$1"
|
||||
local how="$2"
|
||||
local to="$3"
|
||||
local is
|
||||
|
||||
if [ $how == "sysfs" ]; then
|
||||
t_set_sysfs_mount_option $nr quorum_heartbeat_timeout_ms $to
|
||||
fi
|
||||
if [ $how == "mount" ]; then
|
||||
t_umount $nr
|
||||
t_mount_opt $nr "quorum_heartbeat_timeout_ms=$to"
|
||||
fi
|
||||
|
||||
is=$(t_get_sysfs_mount_option $nr quorum_heartbeat_timeout_ms)
|
||||
|
||||
if [ "$is" != "$to" ]; then
|
||||
t_fail "tried to set qhbto on $nr via $how to $to but got $is"
|
||||
fi
|
||||
}
|
||||
|
||||
test_timeout()
|
||||
{
|
||||
local how="$1"
|
||||
local to="$2"
|
||||
local start
|
||||
local nr
|
||||
local sv
|
||||
local delay
|
||||
local low
|
||||
local high
|
||||
|
||||
# set timeout on non-server quorum mounts
|
||||
sv=$(t_server_nr)
|
||||
for nr in $(t_quorum_nrs); do
|
||||
if [ $nr -ne $sv ]; then
|
||||
set_timeout $nr $how $to
|
||||
fi
|
||||
done
|
||||
|
||||
# give followers time to recv heartbeats and reset timeouts
|
||||
sleep 1
|
||||
|
||||
# tear down the current server/leader
|
||||
t_force_umount $sv
|
||||
|
||||
# see how long it takes for the next leader to start
|
||||
start=$(time_ms)
|
||||
t_wait_for_leader
|
||||
delay=$(($(time_ms) - start))
|
||||
|
||||
# kind of fun to have these logged
|
||||
echo "to $to delay $delay" >> $T_TMP.delay
|
||||
|
||||
# restore the mount that we tore down
|
||||
t_mount $sv
|
||||
|
||||
# make sure the new leader delay was reasonable, allowing for some slack
|
||||
low=$((to - 1000))
|
||||
high=$((to + 5000))
|
||||
|
||||
# make sure the new leader delay was reasonable
|
||||
test "$delay" -lt "$low" && t_fail "delay $delay < low $low (to $to)"
|
||||
test "$delay" -gt "$high" && t_fail "delay $delay > high $high (to $to)"
|
||||
}
|
||||
|
||||
echo "== bad timeout values fail"
|
||||
set_bad_timeout 0
|
||||
set_bad_timeout -1
|
||||
set_bad_timeout 1000000
|
||||
|
||||
echo "== bad mount option fails"
|
||||
if [ "$(t_server_nr)" == 0 ]; then
|
||||
nr=1
|
||||
else
|
||||
nr=0
|
||||
fi
|
||||
t_umount $nr
|
||||
t_mount_opt $nr "quorum_heartbeat_timeout_ms=1000000" 2>/dev/null && \
|
||||
t_fail "bad mount option succeeded"
|
||||
t_mount $nr
|
||||
|
||||
echo "== mount option"
|
||||
def=$(t_get_sysfs_mount_option 0 quorum_heartbeat_timeout_ms)
|
||||
test_timeout mount $def
|
||||
test_timeout mount 3000
|
||||
test_timeout mount $((def + 19000))
|
||||
|
||||
echo "== sysfs"
|
||||
test_timeout sysfs $def
|
||||
test_timeout sysfs 3000
|
||||
test_timeout sysfs $((def + 19000))
|
||||
|
||||
echo "== reset all options"
|
||||
t_remount_all
|
||||
|
||||
t_pass
|
||||
@@ -2,6 +2,8 @@
|
||||
# Some basic tests of online resizing metadata and data devices.
|
||||
#
|
||||
|
||||
t_require_commands bc
|
||||
|
||||
statfs_total() {
|
||||
local single="total_$1_blocks"
|
||||
local mnt="$2"
|
||||
@@ -73,7 +75,7 @@ echo "== make initial small fs"
|
||||
scoutfs mkfs -A -f -Q 0,127.0.0.1,53000 -m $quarter_meta -d $quarter_data \
|
||||
"$T_EX_META_DEV" "$T_EX_DATA_DEV" > $T_TMP.mkfs.out 2>&1 || \
|
||||
t_fail "mkfs failed"
|
||||
SCR="/mnt/scoutfs.enospc"
|
||||
SCR="$T_TMPDIR/mnt.scratch"
|
||||
mkdir -p "$SCR"
|
||||
mount -t scoutfs -o metadev_path=$T_EX_META_DEV,quorum_slot_nr=0 \
|
||||
"$T_EX_DATA_DEV" "$SCR"
|
||||
|
||||
@@ -55,10 +55,17 @@ scoutfs setattr -t 67305985.999999999 -V 1 -s 1 "$FILE" 2>&1 | t_filter_fs
|
||||
TZ=GMT stat -c "%z" "$FILE"
|
||||
rm "$FILE"
|
||||
|
||||
#
|
||||
# With e2fsprogs-v1.42.10-10-g29758d2f, the output of filefrag 'flags' changes
|
||||
# significantly. First, the _LAST flag is now output. Second, the 'unknown'
|
||||
# flag is now printed out as 'unknown_loc'. To compensate for this, we check
|
||||
# and replace the "correct" output for new versions here with the expected
|
||||
# value.
|
||||
#
|
||||
echo "== large offline extents are created"
|
||||
touch "$FILE"
|
||||
scoutfs setattr -V 1 -o -s $((10007 * 4096)) "$FILE" 2>&1 | t_filter_fs
|
||||
filefrag -v -b4096 "$FILE" 2>&1 | t_filter_fs
|
||||
filefrag -v -b4096 "$FILE" 2>&1 | sed 's/last,unknown_loc,eof$/unknown,eof/' | t_filter_fs
|
||||
rm "$FILE"
|
||||
|
||||
# had a bug where we were creating extents that were too long
|
||||
|
||||
@@ -103,4 +103,34 @@ while [ "$nr" -lt 100 ]; do
|
||||
((nr++))
|
||||
done
|
||||
|
||||
#
|
||||
# make sure rapid concurrent metadata updates don't create multiple
|
||||
# meta_seq entries
|
||||
#
|
||||
# we had a bug where deletion items created under concurrent_write locks
|
||||
# could get versions older than the items they're deleting which were
|
||||
# protected by read/write locks.
|
||||
#
|
||||
echo "== concurrent update attempts maintain single entries"
|
||||
FILES=4
|
||||
nr=1
|
||||
while [ "$nr" -lt 10 ]; do
|
||||
# touch a bunch of files in parallel from all mounts
|
||||
for i in $(t_fs_nrs); do
|
||||
eval path="\$T_D${i}"
|
||||
seq -f "$path/file-%.0f" 1 $FILES | xargs touch &
|
||||
done
|
||||
wait || t_fail "concurrent file updates failed"
|
||||
|
||||
# make sure no inodes have duplicate entries
|
||||
sync
|
||||
scoutfs walk-inodes -p "$T_D0" meta_seq -- 0 -1 | \
|
||||
grep -v "minor" | \
|
||||
awk '{print $4}' | \
|
||||
sort -n | uniq -c | \
|
||||
awk '($1 != 1)' | \
|
||||
sort -n
|
||||
((nr++))
|
||||
done
|
||||
|
||||
t_pass
|
||||
|
||||

@@ -27,16 +27,11 @@ test_xattr_lengths() {
	echo "key len $name_len val len $val_len" >> "$T_TMP.log"
	setfattr -n $name -v \"$val\" "$FILE"

	# grep has trouble with enormous args, so we dump the
	# name=value to a file and compare with a known good file
	getfattr -d --absolute-names "$FILE" | grep "$name" > "$T_TMP.got"
	getfattr -d --only-values --absolute-names "$FILE" -n "$name" > "$T_TMP.got"
	echo -n "$val" > "$T_TMP.good"

	if [ $val_len == 0 ]; then
		echo "$name" > "$T_TMP.good"
	else
		echo "$name=\"$val\"" > "$T_TMP.good"
	fi
	cmp "$T_TMP.good" "$T_TMP.got" || exit 1
	cmp "$T_TMP.good" "$T_TMP.got" || \
		t_fail "cmp failed name len $name_len val len $val_len"

	setfattr -x $name "$FILE"
}

tests/tests/srch-safe-merge-pos.sh (new file, 69 lines)
@@ -0,0 +1,69 @@
#
# There was a bug where srch file compaction could get stuck if a
# partial compaction finished at the specific _SAFE_BYTES offset in a
# block. Resuming from that position would return an error and
# compaction would stop making forward progress.
#
# We use triggers to make sure that we create the circumstance where a
# sorted srch block ends at the _SAFE_BYTES offset and that a merge
# request stops with a partial block at that specific offset. We then
# watch error counters to make sure compaction doesn't get stuck.
#

# forcing rotation, so just a few
NR=10
SEQF="%.20g"
COMPACT_NR=4

echo "== snapshot errors"
declare -a err
for nr in $(t_fs_nrs); do
	err[$nr]=$(t_counter srch_compact_error $nr)
done

echo "== arm compaction triggers"
for nr in $(t_fs_nrs); do
	t_trigger_arm srch_compact_logs_pad_safe $nr
	t_trigger_arm srch_merge_stop_safe $nr
done

echo "== force lots of small rotated log files for compaction"
sv=$(t_server_nr)
iter=1
while [ $iter -le $((COMPACT_NR * COMPACT_NR * COMPACT_NR)) ]; do
	t_trigger_arm srch_force_log_rotate $sv

	seq -f "f-$iter-$SEQF" 1 10 | src/bulk_create_paths -S -d "$T_D0" > /dev/null
	sync

	test "$(t_trigger_get srch_force_log_rotate $sv)" == "0" || \
		t_fail "srch_force_log_rotate didn't trigger"

	((iter++))
done

echo "== wait for compaction"
sleep 15

echo "== test and disarm compaction triggers"
pad=0
merge_stop=0
for nr in $(t_fs_nrs); do
	test "$(t_trigger_get srch_compact_logs_pad_safe $nr)" == "0" && pad=1
	t_trigger_set srch_compact_logs_pad_safe $nr 0
	test "$(t_trigger_get srch_merge_stop_safe $nr)" == "0" && merge_stop=1
	t_trigger_set srch_merge_stop_safe $nr 0
done

echo "== verify triggers and errors"
test $pad == 1 || t_fail "srch_compact_logs_pad_safe didn't trigger"
test $merge_stop == 1 || t_fail "srch_merge_stop_safe didn't trigger"
for nr in $(t_fs_nrs); do
	test "$(t_counter srch_compact_error $nr)" == "${err[$nr]}" || \
		t_fail "srch_compact_error counter increased on mount $nr"
done

echo "== cleanup"
find "$T_D0" -type f -name 'f-*' -delete

t_pass

@@ -1,15 +0,0 @@
#
# Run stage_tmpfile and check the output with hexdump.
#

t_require_commands stage_tmpfile hexdump

DEST_FILE="$T_D0/dest_file"

stage_tmpfile $T_D0 $DEST_FILE

hexdump -C "$DEST_FILE"

rm -fr "$DEST_FILE"

t_pass

@@ -65,7 +65,6 @@ generic/030	# mmap missing
generic/075	# file content mismatch failures (fds, etc)
generic/080	# mmap missing
generic/103	# enospc causes trans commit failures
generic/105	# needs triage: something about acls
generic/108	# mount fails on failing device?
generic/112	# file content mismatch failures (fds, etc)
generic/120	# (can't exec 'cause no mmap)
@@ -73,17 +72,15 @@ generic/126	# (can't exec 'cause no mmap)
generic/141	# mmap missing
generic/213	# enospc causes trans commit failures
generic/215	# mmap missing
generic/237	# wrong error return from failing setfacl?
generic/246	# mmap missing
generic/247	# mmap missing
generic/248	# mmap missing
generic/319	# utils output change? update branch?
generic/318	# can't support user namespaces until v5.11
generic/321	# requires selinux enabled for '+' in ls?
generic/325	# mmap missing
generic/338	# BUG_ON update inode error handling
generic/346	# mmap missing
generic/347	# _dmthin_mount doesn't work?
generic/375	# utils output change? update branch?
EOF

t_restore_output

@@ -15,12 +15,61 @@ general mount options described in the
.BR mount (8)
manual page.
.TP
.B acl
The acl mount option enables support for POSIX Access Control Lists
as detailed in
.BR acl (5) .
Support for POSIX ACLs is the default.
.TP
.B data_prealloc_blocks=<blocks>
Set the size of preallocation regions of data files, in 4KiB blocks.
Writes to these regions that contain no extents will attempt to
preallocate the size of the full region. This can waste a lot of space
with small files, files with sparse regions, and files whose final
length isn't a multiple of the preallocation size. The following
data_prealloc_contig_only option, which is the default, restricts this
behaviour to waste less space.
.sp
All the preallocation options can be changed in an active mount by
writing to their respective files in the options directory in the
mount's sysfs directory.
.sp
It is worth noting that it is always more efficient in every way to use
.BR fallocate (2)
to precisely allocate large extents for the resulting size of the file.
Always attempt to enable it in software that supports it.
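.sp
For example, a minimal sketch of preallocating a file's final size up front
with the fallocate utility, which wraps the system call; the path and size
are placeholders:
.sp
.nf
    fallocate -l 100G /mnt/scoutfs/archive.img
.fi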
.TP
.B data_prealloc_contig_only=<0|1>
This option, currently the default, limits file data preallocation in
two ways. First, it will only preallocate when extending a fully
allocated file. Second, it will limit the size of preallocation to the
existing length of the file. These limits reduce the amount of
preallocation wasted per file at the cost of multiple initial extents in
all files. It only supports simple streaming writes; any other write
pattern will not be recognized and could result in many fragmented
extent allocations.
.sp
This option can be disabled to encourage large allocated extents
regardless of write patterns. This can be helpful if files are written
with initial sparse regions (perhaps by multiple threads writing to
different regions) and wasted space isn't an issue (perhaps because the
file population contains few small files).
.TP
.B metadev_path=<device>
The metadev_path option specifies the path to the block device that
contains the filesystem's metadata.
.sp
This option is required.
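.sp
For example, a sketch of a mount invocation mirroring the one used in the
test suite, with placeholder device paths and quorum slot 0:
.sp
.nf
    mount -t scoutfs -o metadev_path=/dev/sdb,quorum_slot_nr=0 /dev/sdc /mnt/scoutfs
.fi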
.TP
.B noacl
The noacl mount option disables the default support for POSIX Access
Control Lists. Any existing system.posix_acl_default and
system.posix_acl_access extended attributes remain in inodes. They
will appear in listings from
.BR listxattr (2)
but specific retrieval or removal operations will fail. They will be
used for enforcement again if ACL support is later enabled.
.TP
.B orphan_scan_delay_ms=<number>
This option sets the average expected delay, in milliseconds, between
each mount's scan of the global orphaned inode list. Jitter is added to
@@ -36,6 +85,25 @@ the options directory in the mount's sysfs directory. Writing a new
value will cause the next pending orphan scan to be rescheduled
with the newly written delay time.
.TP
.B quorum_heartbeat_timeout_ms=<number>
This option sets the amount of time, in milliseconds, that a quorum
member will wait without receiving heartbeat messages from the current
leader before trying to take over as leader. This setting is per-mount
and only changes the behavior of that mount.
.sp
This determines how long it may take before a failed leader is replaced
by a waiting quorum member. Setting it too low may lead to spurious
fencing as active leaders are prematurely replaced due to task or
network delays that prevent the quorum members from promptly sending and
receiving messages. The ideal setting is the longest acceptable
downtime during server failover. The default is 10000 (10s) and it can
not be less than 2000 or greater than 60000.
.sp
This option can be changed in an active mount by writing to its file in
the options directory in the mount's sysfs directory. Writing a new
value will take effect the next time the quorum agent receives a
heartbeat message and sets the next timeout.
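.sp
For example, a sketch of raising the timeout to 20 seconds at runtime; the
fsid and rid path components are placeholders, and the option file name is
assumed to match the mount option name:
.sp
.nf
    echo 20000 > /sys/fs/scoutfs/f.<fsid>.r.<rid>/options/quorum_heartbeat_timeout_ms
.fi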
.TP
.B quorum_slot_nr=<number>
The quorum_slot_nr option assigns a quorum member slot to the mount.
The mount will use the slot assignment to claim exclusive ownership of

@@ -76,6 +76,97 @@ run when the file system will not be mounted.
.RE
.PD

.TP
.BI "counters [-t|--table] SYSFS-DIR"
.sp
Display the counters and their values for a mounted ScoutFS filesystem.
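.sp
For example, a sketch with placeholder fsid and rid values in the sysfs path:
.sp
.nf
    scoutfs counters -t /sys/fs/scoutfs/f.<fsid>.r.<rid>/
.fi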
.RS 1.0i
.PD 0
.sp
.TP
.B SYSFS-DIR
The mount's sysfs directory in which to find the
.B counters/
directory, which in turn contains a file for each counter.
The sysfs directory is
of the form
.I /sys/fs/scoutfs/f.<fsid>.r.<rid>/
\&.
.TP
.B "-t, --table"
Format the counters into a columnar table that fills the width of the display
instead of printing one counter per line.
.RE
.PD

.TP
.BI "data-waiting {-I|--inode} INODE-NUM {-B|--block} BLOCK-NUM [-p|--path PATH]"
.sp
Display all the files and blocks for which there is a task blocked waiting on
offline data.
.sp
The results are sorted by the file's inode number and the
logical block offset that is being waited on.
.sp
Each line of output describes a block in a file that has a task waiting
and is formatted as:
.I "ino <nr> iblock <nr> ops [str]"
\&. The ops string indicates blocked operations separated by commas and can
include
.B read
for a read operation,
.B write
for a write operation, and
.B change_size
for a truncate or extending write.
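.sp
For example, a sketch that lists every waiting task under a placeholder mount
point by starting from inode 0 and block 0:
.sp
.nf
    scoutfs data-waiting -I 0 -B 0 -p /mnt/scoutfs
.fi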
.RS 1.0i
.PD 0
.sp
.TP
.B "-I, --inode INODE-NUM"
Start iterating over waiting tasks from the given inode number.
A value of 0 will show all waiting tasks.
.TP
.B "-B, --block BLOCK-NUM"
Start iterating over waiting tasks from the given logical block number
in the starting inode. A value of 0 will show blocks in the first inode
and then continue to show all blocks with tasks waiting in all the
remaining inodes.
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "data-wait-err {-I|--inode} INODE-NUM {-V|--version} VER-NUM {-F|--offset} OFF-NUM {-C|--count} COUNT {-O|--op} OP {-E|--err} ERR [-p|--path PATH]"
.sp
Return an error from matching data waiters.
.RS 1.0i
.PD 0
.sp
.TP
.B "-C, --count COUNT"
Count.
.TP
.B "-E, --err ERR"
Error.
.TP
.B "-F, --offset OFF-NUM"
Offset. May be expressed in bytes, or with KMGTP (Kibi, Mebi, etc.) size
suffixes.
.TP
.B "-I, --inode INODE-NUM"
Inode number.
.TP
.B "-O, --op OP"
Operation. One of: "read", "write", "change_size".
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "df [-h|--human-readable] [-p|--path PATH]"
.sp
@@ -93,6 +184,95 @@ A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "get-allocated-inos [-i|--ino INO] [-s|--single] [-p|--path PATH]"
.sp
This debugging command prints allocated inode numbers. It only prints inodes
found in the group that contains the starting inode. The printed inode
numbers aren't necessarily reachable. They could be anywhere in the
process from being unlinked to finally being deleted at the time their
items were found.
.RS 1.0i
.PD 0
.sp
.TP
.B "-i, --ino INO"
The first 64-bit inode number which could be printed.
.TP
.B "-s, --single"
Only print the single starting inode when it is allocated; all other allocated
inode numbers will be ignored.
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "get-referring-entries [-p|--path PATH] INO"
.sp
Find directory entries that reference an inode number.
.sp
Display all the directory entries that refer to a given inode. Each
entry includes the inode number of the directory that contains it, the
d_off and d_type values for the entry as described by
.BR readdir (3) ,
and the name of the entry.
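.sp
For example, a sketch that lists the entries referring to a placeholder inode
number under a placeholder mount point:
.sp
.nf
    scoutfs get-referring-entries -p /mnt/scoutfs 8675309
.fi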
.RS 1.0i
.PD 0
.sp
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.TP
.B "INO"
The inode number of the target inode.
.RE
.PD

.TP
.BI "ino-path INODE-NUM [-p|--path PATH]"
.sp
Display all paths that reference an inode number.
.sp
Ongoing filesystem changes, such as renaming a common parent of multiple paths,
can cause displayed paths to be inconsistent.
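.sp
For example, a sketch that prints the paths currently referring to a
placeholder inode number:
.sp
.nf
    scoutfs ino-path 8675309 -p /mnt/scoutfs
.fi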
.RS 1.0i
.PD 0
.sp
.TP
.B "INODE-NUM"
The inode number of the target inode.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "list-hidden-xattrs FILE"
.sp
Display extended attributes starting with the
.BR scoutfs.
prefix and containing the
.BR hide.
tag,
which makes them invisible to
.BR listxattr (2) .
The name of each attribute is output, one per line. Their order
is not specified.
.RS 1.0i
.PD 0
.sp
.TP
.B "FILE"
The path to a file within a ScoutFS filesystem. File permissions must allow
reading.
.RE
.PD

.TP
.BI "mkfs META-DEVICE DATA-DEVICE {-Q|--quorum-slot} NR,ADDR,PORT [-m|--max-meta-size SIZE] [-d|--max-data-size SIZE] [-z|--data-alloc-zone-blocks BLOCKS] [-f|--force] [-A|--allow-small-size] [-V|--format-version VERS]"
.sp
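For example, a sketch based on the invocation used in the test suite, with
placeholder devices and a single quorum slot on the loopback address:
.sp
.nf
    scoutfs mkfs -f -Q 0,127.0.0.1,53000 /dev/sdb /dev/sdc
.fi
.sp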
@@ -171,6 +351,79 @@ The range of supported versions is visible in the output of
.RE
.PD

.TP
.BI "prepare-empty-data-device {-c|--check} META-DEVICE DATA-DEVICE"
.sp
Prepare an unused device for use as the data device for an existing file
system. This will write an initialized super block to the specified
data device, destroying any existing contents. The specified metadata
device will not be modified. The file system must be fully unmounted
and any client mount recovery must be complete.
.sp
The existing metadata device is read to ensure that it's safe to stop
using the old data device. The data block allocators must indicate that
all data blocks are free. If there are still data blocks referenced by
files then the command will fail. The contents of these files must be
freed for the command to proceed.
.sp
A new super block is written to the new data device. The device can
then be used as the data device to mount the file system. As this
switch is made, all client mounts must refer to the new device. The old
device is not modified and still contains a valid data super block that
could be mounted, which would create data device writes that wouldn't be
read by mounts using the new device.
.sp
The number of data blocks available to the file system will not change
as the new data device is used. The new device must be large enough to
store all the data blocks that were available on the old device. If the
new device is larger, then its added capacity can be used by growing the
new data device with the resize-devices command once it is mounted.
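.sp
For example, a sketch that only checks whether a placeholder replacement data
device could be used, without writing to it:
.sp
.nf
    scoutfs prepare-empty-data-device -c /dev/sdb /dev/sdd
.fi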
.RS 1.0i
.PD 0
.sp
.TP
.B "-c, --check"
Only check for errors that would prevent a new empty data device from
being used. No changes will be made to the data device. If the data
device is provided then its size will be checked to make sure that it is
large enough. This can be used to test the metadata for data references
before destroying an old empty data device.
.RE
.PD

.TP
.BI "print [-S|--skip-likely-huge] META-DEVICE"
.sp
Prints out all of the metadata in the file system. This makes no effort
to ensure that the structures are consistent as they're traversed and
can present structures that seem corrupt as they change as they're
output.
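.sp
For example, a sketch that captures a volume's metadata while skipping the
likely-huge structures; the metadata device and output file are placeholders:
.sp
.nf
    scoutfs print -S /dev/sdb > metadata.txt
.fi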
.RS 1.0i
.PD 0
.sp
.TP
.B "-S, --skip-likely-huge"
Skip printing structures that are likely to be very large. The
structures that are skipped tend to be global, and their size tends to be
related to the size of the volume. Examples of skipped structures include
the global fs items, srch files, and metadata and data
allocators. Similar structures that are not skipped are related to the
number of mounts and are maintained at a relatively reasonable size.
These include per-mount log trees, srch files, allocators, and the
metadata allocators used by server commits.
.sp
Skipping the larger structures limits the print output to a relatively
constant size rather than being a large multiple of the used metadata
space of the volume, making the output much more useful for inspection.
.TP
.B "META-DEVICE"
The path to the metadata device for the filesystem whose metadata will be
printed. An attempt will be made to flush the host's buffer cache for
this device with the BLKFLSBUF ioctl, or with posix_fadvise() if
the path refers to a regular file.
.RE
.PD

.TP
.BI "resize-devices [-p|--path PATH] [-m|--meta-size SIZE] [-d|--data-size SIZE]"
.sp
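For example, a sketch that grows the data device of a placeholder mount to a
placeholder size:
.sp
.nf
    scoutfs resize-devices -p /mnt/scoutfs -d 2T
.fi
.sp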
@@ -229,6 +482,92 @@ kibibytes, mebibytes, etc.
.RE
.PD

.TP
.BI "search-xattrs XATTR-NAME [-p|--path PATH]"
.sp
Display the inode numbers of inodes in the filesystem which may have
an extended attribute with the given name.
.sp
The results may contain false positives. The returned inode numbers
should be checked to verify that the extended attribute is in fact
present on the inode.
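.sp
For example, a sketch using a hypothetical attribute name and a placeholder
mount point:
.sp
.nf
    scoutfs search-xattrs user.archive_id -p /mnt/scoutfs
.fi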
.RS 1.0i
.PD 0
.sp
.TP
.B XATTR-NAME
The full name of the extended attribute to search for as
described in the
.BR xattr (7)
manual page.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "setattr FILE [-V, --data-version=VERSION [-s, --size=SIZE [-o, --offline]]] [-t, --ctime=TIMESPEC]"
.sp
Set ScoutFS-specific attributes on a newly created zero-length file.
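.sp
For example, a sketch taken from the test suite that marks a new file's
contents offline at a given size and data version; the path is a placeholder:
.sp
.nf
    scoutfs setattr -V 1 -o -s 40988672 /mnt/scoutfs/file
.fi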
.RS 1.0i
.PD 0
.sp
.TP
.B "-V, --data-version=VERSION"
Set the data version.
.TP
.B "-o, --offline"
Set the file contents as offline, not sparse. Requires that the
.I --size
option also be present.
.TP
.B "-s, --size=SIZE"
Set the file size. May be expressed in bytes, or with
KMGTP (Kibi, Mebi, etc.) size suffixes. Requires that the
.I --data-version
option also be present.
.TP
.B "-t, --ctime=TIMESPEC"
Set the creation time using the
.I "<seconds-since-epoch>.<nanoseconds>"
format.
.RE
.PD

.TP
.BI "stage ARCHIVE-FILE FILE {-V|--version} VERSION [-o, --offset OFF-NUM] [-l, --length LENGTH]"
.sp
.B Stage
(i.e. return to online) the previously-offline contents of a file by copying a
region from another file, the archive, without updating regular inode
metadata. Any operations that are blocked by the existence of an offline
region will proceed once the region has been staged.
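.sp
For example, a sketch that stages the first mebibyte of a file back online
from an archive copy; both paths are placeholders:
.sp
.nf
    scoutfs stage /archive/file.copy /mnt/scoutfs/file -V 1 -o 0 -l 1M
.fi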
.RS 1.0i
.PD 0
.sp
.TP
.B "ARCHIVE-FILE"
The source file for the file contents being staged.
.TP
.B "FILE"
The regular file whose contents will be staged.
.TP
.B "-V, --version VERSION"
The data_version of the contents to be staged. It must match the
current data_version of the file.
.TP
.B "-o, --offset OFF-NUM"
The starting byte offset of the region to write. May be expressed in bytes, or with
KMGTP (Kibi, Mebi, etc.) size suffixes. Default is 0.
.TP
.B "-l, --length LENGTH"
Length of the range (bytes or KMGTP units) of the file to stage. Default is the file's
total size.
.RE
.PD

.TP
.BI "stat FILE [-s|--single-field FIELD-NAME]"
.sp
Display ScoutFS-specific metadata fields for the given file.
@@ -314,221 +653,6 @@ The total number of 4K data blocks in the filesystem.
.RE
.PD

.TP
.BI "counters [-t|--table] SYSFS-DIR"
.sp
Display the counters and their values for a mounted ScoutFS filesystem.
.RS 1.0i
.PD 0
.sp
.TP
.B SYSFS-DIR
The mount's sysfs directory in which to find the
.B counters/
directory, which in turn contains a file for each counter.
The sysfs directory is
of the form
.I /sys/fs/scoutfs/f.<fsid>.r.<rid>/
\&.
.TP
.B "-t, --table"
Format the counters into a columnar table that fills the width of the display
instead of printing one counter per line.
.RE
.PD

.TP
.BI "search-xattrs XATTR-NAME [-p|--path PATH]"
.sp
Display the inode numbers of inodes in the filesystem which may have
an extended attribute with the given name.
.sp
The results may contain false positives. The returned inode numbers
should be checked to verify that the extended attribute is in fact
present on the inode.
.RS 1.0i
.PD 0
.sp
.TP
.B XATTR-NAME
The full name of the extended attribute to search for as
described in the
.BR xattr (7)
manual page.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "list-hidden-xattrs FILE"
.sp
Display extended attributes starting with the
.BR scoutfs.
prefix and containing the
.BR hide.
tag,
which makes them invisible to
.BR listxattr (2) .
The name of each attribute is output, one per line. Their order
is not specified.
.RS 1.0i
.PD 0
.sp
.TP
.B "FILE"
The path to a file within a ScoutFS filesystem. File permissions must allow
reading.
.RE
.PD

.TP
.BI "walk-inodes {meta_seq|data_seq} FIRST-INODE LAST-INODE [-p|--path PATH]"
.sp
Walk an inode index in the file system and output the inode numbers
that are found between the first and last positions in the index.
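.sp
For example, a sketch from the test suite that walks the entire meta_seq index
of a placeholder mount; the "--" separates the options from the negative last
position:
.sp
.nf
    scoutfs walk-inodes -p /mnt/scoutfs meta_seq -- 0 -1
.fi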
.RS 1.0i
.PD 0
.sp
.TP
.BR meta_seq , data_seq
Which index to walk.
.TP
.B "FIRST-INODE"
An integer index value giving the starting position of the index walk.
.I 0
is the first possible position.
.TP
.B "LAST-INODE"
An integer index value giving the last position to include in the index walk.
.I \-1
can be given to indicate the last possible position.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "ino-path INODE-NUM [-p|--path PATH]"
.sp
Display all paths that reference an inode number.
.sp
Ongoing filesystem changes, such as renaming a common parent of multiple paths,
can cause displayed paths to be inconsistent.
.RS 1.0i
.PD 0
.sp
.TP
.B "INODE-NUM"
The inode number of the target inode.
.TP
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "data-waiting {-I|--inode} INODE-NUM {-B|--block} BLOCK-NUM [-p|--path PATH]"
.sp
Display all the files and blocks for which there is a task blocked waiting on
offline data.
.sp
The results are sorted by the file's inode number and the
logical block offset that is being waited on.
.sp
Each line of output describes a block in a file that has a task waiting
and is formatted as:
.I "ino <nr> iblock <nr> ops [str]"
\&. The ops string indicates blocked operations separated by commas and can
include
.B read
for a read operation,
.B write
for a write operation, and
.B change_size
for a truncate or extending write.
.RS 1.0i
.PD 0
.sp
.TP
.B "-I, --inode INODE-NUM"
Start iterating over waiting tasks from the given inode number.
A value of 0 will show all waiting tasks.
.TP
.B "-B, --block BLOCK-NUM"
Start iterating over waiting tasks from the given logical block number
in the starting inode. A value of 0 will show blocks in the first inode
and then continue to show all blocks with tasks waiting in all the
remaining inodes.
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "data-wait-err {-I|--inode} INODE-NUM {-V|--version} VER-NUM {-F|--offset} OFF-NUM {-C|--count} COUNT {-O|--op} OP {-E|--err} ERR [-p|--path PATH]"
.sp
Return an error from matching data waiters.
.RS 1.0i
.PD 0
.sp
.TP
.B "-C, --count COUNT"
Count.
.TP
.B "-E, --err ERR"
Error.
.TP
.B "-F, --offset OFF-NUM"
Offset. May be expressed in bytes, or with KMGTP (Kibi, Mebi, etc.) size
suffixes.
.TP
.B "-I, --inode INODE-NUM"
Inode number.
.TP
.B "-O, --op OP"
Operation. One of: "read", "write", "change_size".
.TP
.B "-p, --path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

.TP
.BI "stage ARCHIVE-FILE FILE {-V|--version} VERSION [-o, --offset OFF-NUM] [-l, --length LENGTH]"
.sp
.B Stage
(i.e. return to online) the previously-offline contents of a file by copying a
region from another file, the archive, without updating regular inode
metadata. Any operations that are blocked by the existence of an offline
region will proceed once the region has been staged.
.RS 1.0i
.PD 0
.sp
.TP
.B "ARCHIVE-FILE"
The source file for the file contents being staged.
.TP
.B "FILE"
The regular file whose contents will be staged.
.TP
.B "-V, --version VERSION"
The data_version of the contents to be staged. It must match the
current data_version of the file.
.TP
.B "-o, --offset OFF-NUM"
The starting byte offset of the region to write. May be expressed in bytes, or with
KMGTP (Kibi, Mebi, etc.) size suffixes. Default is 0.
.TP
.B "-l, --length LENGTH"
Length of the range (bytes or KMGTP units) of the file to stage. Default is the file's
total size.
.RE
.PD

.TP
.BI "release FILE {-V|--version} VERSION [-o, --offset OFF-NUM] [-l, --length LENGTH]"
.sp
@@ -568,76 +692,28 @@ total size.
.PD

.TP
.BI "setattr FILE [-d, --data-version=VERSION [-s, --size=SIZE [-o, --offline]]] [-t, --ctime=TIMESPEC]"
.BI "walk-inodes {meta_seq|data_seq} FIRST-INODE LAST-INODE [-p|--path PATH]"
.sp
Set ScoutFS-specific attributes on a newly created zero-length file.
Walk an inode index in the file system and output the inode numbers
that are found between the first and last positions in the index.
.RS 1.0i
.PD 0
.sp
.TP
.B "-V, --data-version=VERSION"
Set data version.
.BR meta_seq , data_seq
Which index to walk.
.TP
.B "-o, --offline"
Set file contents as offline, not sparse. Requires
.I --size
option also be present.
.B "FIRST-INODE"
An integer index value giving the starting position of the index walk.
.I 0
is the first possible position.
.TP
.B "-s, --size=SIZE"
Set file size. May be expressed in bytes, or with
KMGTP (Kibi, Mebi, etc.) size suffixes. Requires
.I --data-version
option also be present.
.B "LAST-INODE"
An integer index value giving the last position to include in the index walk.
.I \-1
can be given to indicate the last possible position.
.TP
.B "-t, --ctime=TIMESPEC"
Set creation time using
.I "<seconds-since-epoch>.<nanoseconds>"
format.
.RE
.PD

.TP
.BI "print META-DEVICE"
.sp
Prints out all of the metadata in the file system. This makes no effort
to ensure that the structures are consistent as they're traversed and
can present structures that seem corrupt as they change as they're
output.
.RS 1.0i
.PD 0
.sp
.TP
.B "META-DEVICE"
The path to the metadata device for the filesystem whose metadata will be
printed. Since this command reads via the host's buffer cache, it may not
reflect the current blocks in the filesystem, possibly written to the shared
block devices from another host, unless the
.B blockdev \-\-flushbufs
command is used first.
.RE
.PD

.TP
.BI "get-allocated-inos [-i|--ino INO] [-s|--single] [-p|--path PATH]"
.sp
This debugging command prints allocated inode numbers. It only prints inodes
found in the group that contains the starting inode. The printed inode
numbers aren't necessarily reachable. They could be anywhere in the
process from being unlinked to finally being deleted at the time their
items were found.
.RS 1.0i
.PD 0
.sp
.TP
.B "-i, --ino INO"
The first 64-bit inode number which could be printed.
.TP
.B "-s, --single"
Only print the single starting inode when it is allocated; all other allocated
inode numbers will be ignored.
.TP
.B "-p, --path PATH"
.B "-p|--path PATH"
A path within a ScoutFS filesystem.
.RE
.PD

@@ -61,7 +61,7 @@ install -m 644 -D fenced/scoutfs-fenced.conf.example $RPM_BUILD_ROOT%{_sysconfdi
%files
%defattr(644,root,root,755)
%{_mandir}/man*/scoutfs*.gz
%{_unitdir}/scoutfs-fenced.service
/%{_unitdir}/scoutfs-fenced.service
%{_sysconfdir}/scoutfs
%defattr(755,root,root,755)
%{_sbindir}/scoutfs

Some files were not shown because too many files have changed in this diff.