* fix(weed/command): address unhandled errors
* fix(command): don't log graceful-shutdown sentinels; plug response-body leak
- s3: the unix-socket Serve path treated http.ErrServerClosed as fatal;
it is now excluded like the other Serve/ServeTLS paths in this file.
- mq_agent, mq_broker: filter grpc.ErrServerStopped so clean shutdown
doesn't log as an error.
- worker_runtime: a recently added decodeErr early-continue skipped
resp.Body.Close(); drop the early-continue, since the existing check
below already surfaces the decode error.
- mount_std: the pre-mount Unmount commonly fails when nothing is
mounted; demote to V(1) Infof.
- fuse_std: tidy panic message to match sibling cases.
* fix(mq_broker): filter grpc.ErrServerStopped on localhost listener
The localhost listener goroutine logged any Serve error unconditionally,
which includes grpc.ErrServerStopped on graceful shutdown. Match the
main listener's check so clean stops don't surface as errors.
---------
Co-authored-by: Chris Lu <chris.lu@gmail.com>
* feat(mount): cap write buffer with -writeBufferSizeMB
Without a bound on the per-mount write pipeline, sustained upload
failures (e.g. volume server returning "Volume Size Exceeded" while
the master hasn't yet rotated assignments) let sealed chunks pile up
across open file handles until the swap directory — by default
os.TempDir() — fills the disk. A report on 4.19 saw /tmp fill to
1.8 TB during a large rclone sync.
Add a global WriteBufferAccountant shared across every UploadPipeline
in a mount. Creating a new page chunk (memory or swap) first reserves
ChunkSize bytes; when the cap is reached the writer blocks until an
uploader finishes and releases, turning swap overflow into natural
FUSE-level backpressure instead of unbounded disk growth.
The new -writeBufferSizeMB flag (also accepted via fuse.conf) defaults
to 0 = unlimited, preserving current behavior. Reserve drops
chunksLock while blocking so uploader goroutines — which take
chunksLock on completion before calling Release — cannot deadlock,
and an oversized reservation on an empty accountant succeeds to avoid
single-handle starvation.
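A minimal sketch of the accountant's blocking semantics, under assumed
field names (the real type and its chunksLock interplay live in the
mount's page-writer code; only the sync package is needed):

    // Assumed shape of the shared write-budget accountant.
    type WriteBufferAccountant struct {
        mu    sync.Mutex
        cond  *sync.Cond
        limit int64 // 0 = unlimited
        used  int64
    }

    func NewWriteBufferAccountant(limitBytes int64) *WriteBufferAccountant {
        a := &WriteBufferAccountant{limit: limitBytes}
        a.cond = sync.NewCond(&a.mu)
        return a
    }

    // Reserve blocks until n bytes fit under the cap. An oversized request
    // against an empty accountant succeeds so a single handle whose chunk
    // exceeds the whole budget cannot starve itself.
    func (a *WriteBufferAccountant) Reserve(n int64) {
        if a.limit <= 0 {
            return // unlimited mode preserves current behavior
        }
        a.mu.Lock()
        defer a.mu.Unlock()
        for a.used+n > a.limit && a.used > 0 {
            a.cond.Wait() // woken by Release from a finishing uploader
        }
        a.used += n
    }

    func (a *WriteBufferAccountant) Release(n int64) {
        if a.limit <= 0 {
            return
        }
        a.mu.Lock()
        a.used -= n
        a.mu.Unlock()
        a.cond.Broadcast()
    }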
* fix(mount): plug write-budget leaks in pipeline Shutdown
Review on #9066 caught two accounting bugs on the Destroy() path:
1. Writable-chunk leak (high). SaveDataAt() reserves ChunkSize before
inserting into writableChunks, but Shutdown() only iterated
sealedChunks. Truncate / metadata-invalidation flows call Destroy()
(via ResetDirtyPages) without flushing first, so any dirty but
unsealed chunks would permanently shrink the global write budget.
Shutdown now frees and releases writable chunks too.
2. Double release with racing uploader (medium). Shutdown called
accountant.Release directly after FreeReference, while the async
uploader goroutine did the same on normal completion — under a
Destroy-before-flush race this could underflow the accountant and
let later writes exceed the configured cap. Move accounting into
SealedChunk.FreeReference itself: the refcount-zero transition is
exactly-once by construction, so any number of FreeReference calls
release the slot precisely once.
Add regression tests for the writable-leak and the FreeReference
idempotency guarantee.
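A sketch of the exactly-once release described in (2), with assumed
field names on SealedChunk (callers are assumed to hold the pipeline's
chunksLock, as in the existing refcount code):

    // Budget accounting now lives on the refcount-zero transition, which
    // fires exactly once no matter how many FreeReference calls race.
    func (sc *SealedChunk) FreeReference(messageOnFree string) {
        sc.referenceCounter--
        if sc.referenceCounter == 0 {
            sc.chunk.FreeResource()
            if sc.accountant != nil {
                sc.accountant.Release(sc.chunkSize) // the only release site
            }
        }
    }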
* test(mount): remove sleep-based race in accountant blocking test
Address review nits on #9066:
- Replace the time.Sleep(50ms) proxy for "goroutine entered Reserve" with
a started channel the goroutine closes immediately before calling
Reserve (sketched below). Reserve cannot make progress until Release
is called, so landed is guaranteed false after the handshake, with no
arbitrary wait.
- Short-circuit WriteBufferAccountant.Used() in unlimited mode for
consistency with Reserve/Release, avoiding a mutex round-trip.
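A fragment sketching the handshake, with an atomic.Bool standing in for
the test's landed variable to stay race-detector clean:

    started := make(chan struct{})
    var landed atomic.Bool
    go func() {
        close(started) // handshake: about to block in Reserve
        accountant.Reserve(chunkSize)
        landed.Store(true)
    }()
    <-started
    // The budget is full, so Reserve cannot have returned yet.
    if landed.Load() {
        t.Fatal("Reserve returned before Release")
    }
    accountant.Release(chunkSize)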
* test(mount): add end-to-end write-buffer cap integration test
Exercises the full write-budget plumbing with a small cap (4 chunks of
64 KiB = 256 KiB) shared across three UploadPipelines fed by six
concurrent writers. A gated saveFn models the "volume server rejecting
uploads" condition from the original report: no sealed chunk can drain
until the test opens the gate. A background sampler records the peak
value of accountant.Used() throughout the run.
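The gate can be a plain channel; a sketch with a simplified saveFn
signature (the real pipeline passes more context per chunk):

    gate := make(chan struct{})
    saveFn := func(reader io.Reader, offset int64, size int64, cleanupFn func()) {
        <-gate // models the rejecting volume server: nothing drains yet
        io.Copy(io.Discard, reader)
        cleanupFn()
    }
    // ... writers fill the budget and block in Reserve ...
    close(gate) // open the gate: uploads drain, Release unblocks writers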
The test asserts:
- writers fill the budget and then block on Reserve (Used() stays at
the cap while stalled)
- Used() never exceeds the configured cap even under concurrent
pressure from multiple pipelines
- after the gate opens, writers drain to zero
- peak observed Used() matches the cap (262144 bytes in this run)
While wiring this up, the race detector surfaced a pre-existing data
race on UploadPipeline.uploaderCount: the two glog.V(4) lines around
the atomic Add sites read the field non-atomically. Capture the new
value from AddInt32 and log that instead — one-liner each, no
behavioral change.
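The fix in sketch form (log text illustrative):

    // Before (racy): the logging read is a separate non-atomic load.
    atomic.AddInt32(&up.uploaderCount, 1)
    glog.V(4).Infof("uploaderCount %d", up.uploaderCount)

    // After: log the post-add value AddInt32 already returns.
    n := atomic.AddInt32(&up.uploaderCount, 1)
    glog.V(4).Infof("uploaderCount %d", n)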
* test(fuse): end-to-end integration test for -writeBufferSizeMB
Exercise the new write-buffer cap against a real weed mount so CI
(fuse-integration.yml) covers the FUSE→upload-pipeline→filer path, not
just the in-package unit tests. Uses a 4 MiB cap with 2 MiB chunks so
every subtest's total write demand is multiples of the budget and
Reserve/Release must drive forward progress for writes to complete.
Subtests:
- ConcurrentLargeWrites: six parallel 6 MiB files (36 MiB total, ~18
chunk allocations) through the same mount, verifies every byte
round-trips.
- SingleFileExceedingCap: one 20 MiB file (10 chunks) through a single
handle, catching any self-deadlock when the pipeline's own earlier
chunks already fill the global budget.
- DoesNotDeadlockAfterPressure: final small write with a 30s timeout,
catching budget-slot leaks that would otherwise hang subsequent
writes on a still-full accountant.
Ran locally on Darwin with macfuse against a real weed mini + mount:
=== RUN TestWriteBufferCap
--- PASS: TestWriteBufferCap (1.82s)
* test(fuse): loosen write-buffer cap e2e test + fail-fast on hang
On Linux CI the previous configuration (-writeBufferSizeMB=4,
-concurrentWriters=4 against a 20 MiB single-handle write)
deterministically hung the "Run FUSE Integration Tests" step to the
45-minute workflow timeout, while on macOS / macfuse the same test
completes in ~2 seconds (see run 24386197483). The Linux hang shows
up after TestWriteBufferCap/ConcurrentLargeWrites completes cleanly,
then TestWriteBufferCap/SingleFileExceedingCap starts and never
emits its PASS line.
Change:
- Loosen the cap to 16 MiB (8 × 2 MiB chunk slots) and drop the
custom -concurrentWriters override. The subtests still drive demand
well above the cap (32 MiB concurrent, 12 MiB single-handle), so
Reserve/Release is still on every chunk-allocation path; the cap
just gives the pipeline enough headroom that interactions with the
per-file writableChunkLimit and the go-fuse MaxWrite batching don't
wedge a single-handle writer on a slow runner.
- Wrap every os.WriteFile in a writeWithTimeout helper (sketched after
this list) that dumps every live goroutine on timeout. If this ever
re-regresses, CI surfaces the actual stuck goroutines instead of a
45-minute walltime.
- Also guard the concurrent-writer goroutines with the same timeout +
stack dump.
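A sketch of the helper's shape (name from this commit; buffer size and
message text assumed):

    func writeWithTimeout(t *testing.T, path string, data []byte, timeout time.Duration) {
        done := make(chan error, 1)
        go func() { done <- os.WriteFile(path, data, 0644) }()
        select {
        case err := <-done:
            if err != nil {
                t.Fatalf("write %s: %v", path, err)
            }
        case <-time.After(timeout):
            buf := make([]byte, 1<<20)
            n := runtime.Stack(buf, true) // true = every live goroutine
            t.Fatalf("write %s timed out after %v\n%s", path, timeout, buf[:n])
        }
    }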
The in-package unit test TestWriteBufferCap_SharedAcrossPipelines
remains the deterministic, controlled verification of the blocking
Reserve/Release path — this e2e test is now a smoke test for
correctness and absence of deadlocks through a real FUSE mount, which
is all it should be.
* fix: address PR #9066 review — idempotent FreeReference, subtest watchdog, larger single-handle test
FreeReference on SealedChunk now early-returns when referenceCounter is
already <= 0. The existing == 0 body guard already made side effects
idempotent, but the counter itself would still decrement into the
negatives on a double-call — ugly and a latent landmine for any future
caller that does math on the counter. Make double-call a strict no-op.
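In sketch form, consistent with the earlier FreeReference sketch:

    func (sc *SealedChunk) FreeReference(messageOnFree string) {
        if sc.referenceCounter <= 0 {
            return // double-call is now a strict no-op
        }
        sc.referenceCounter--
        if sc.referenceCounter == 0 {
            // free resources and release the budget slot, as before
        }
    }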
test(fuse): per-subtest watchdog + larger single-handle test
- Add runSubtestWithWatchdog and wrap every TestWriteBufferCap subtest
with a 3-minute deadline. Individual writes were already
timeout-wrapped but the readback loops and surrounding bookkeeping
were not, leaving a gap where a subtest body could still hang. On
watchdog fire, every live goroutine is dumped so CI surfaces the
wedge instead of a 45-minute walltime.
- Bump testLargeFileUnderCap from 12 MiB → 20 MiB (10 chunks) to
exceed the 16 MiB cap (8 slots) again and actually exercise
Reserve/Release backpressure on a single file handle. The earlier
e2e hang was under much tighter params (-writeBufferSizeMB=4,
-concurrentWriters=4, writable limit 4); with the current loosened
config the pressure is gentle and the goroutine-dump-on-timeout
safety net is in place if it ever regresses.
Declined: adding an observable peak-Used() assertion to the e2e test.
The mount runs as a subprocess so its in-process WriteBufferAccountant
state isn't reachable from the test without adding a metrics/RPC
surface. The deterministic peak-vs-cap verification already lives in
the in-package unit test TestWriteBufferCap_SharedAcrossPipelines.
Recorded this rationale inline in TestWriteBufferCap's doc comment.
* test(fuse): capture mount pprof goroutine dump on write-timeout
The previous run (24388549058) hung on LargeFileUnderCap and the
test-side dumpAllGoroutines only showed the test process — the test's
syscall.Write is blocked in the kernel waiting for FUSE to respond,
which tells us nothing about where the MOUNT is stuck. The mount runs
as a subprocess so its in-process stacks aren't reachable from the
test.
Enable the mount's pprof endpoint via -debug=true -debug.port=<free>,
allocate the port from the test, and on write-timeout fetch
/debug/pprof/goroutine?debug=2 from the mount process and log it. This
gives CI the only view that can actually diagnose a write-buffer
backpressure deadlock (writer goroutines blocked on Reserve, uploader
goroutines stalled on something, etc).
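The fetch itself is plain net/http against the standard pprof endpoint;
a sketch (debugPort is the test-allocated port described above):

    url := fmt.Sprintf("http://127.0.0.1:%d/debug/pprof/goroutine?debug=2", debugPort)
    resp, err := http.Get(url)
    if err != nil {
        t.Logf("could not fetch mount goroutine dump: %v", err)
        return
    }
    defer resp.Body.Close()
    dump, _ := io.ReadAll(resp.Body)
    t.Logf("mount goroutines:\n%s", dump)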
Kept fileSize at 20 MiB so the Linux CI run will still hit the hang
(if it's genuinely there) and produce an actionable mount-side dump;
the alternative — silently shrinking the test below the cap — would
lose the regression signal entirely.
* review: constructor-inject accountant + subtest watchdog body on main
Two PR-#9066 review fixes:
1. NewUploadPipeline now takes the WriteBufferAccountant as a
constructor parameter; SetWriteBufferAccountant is removed. In
practice the previous setter was only called once during
newMemoryChunkPages, before any goroutine could touch the
pipeline, so there was no actual race — but constructor injection
makes the "accountant is fixed at construction time" invariant
explicit and eliminates the possibility of a future caller
mutating it mid-flight. All three call sites (real + two tests)
updated; the legacy TestUploadPipeline passes a nil accountant,
preserving backward-compatible unlimited-mode behavior.
2. runSubtestWithWatchdog now runs body on the subtest main goroutine
and starts a watcher goroutine that only calls goroutine-safe t
methods (t.Log, t.Logf, t.Errorf). The previous version ran body
on a spawned goroutine, which meant any require.* assertion or
writeWithTimeout's t.Fatalf inside body was called from a non-test
goroutine, which Go's testing docs explicitly disallow. The watcher no longer
interrupts body (it can't), so body must return on its own —
which it does via writeWithTimeout's internal 90s timeout firing
t.Fatalf on (now) the main goroutine. The watchdog still provides
the critical diagnostic: on timeout it dumps both test-side and
mount-side (via pprof) goroutine stacks and marks the test failed
via t.Errorf.
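A sketch of the resulting shape (names from this commit; internals
assumed):

    func runSubtestWithWatchdog(t *testing.T, name string, timeout time.Duration, body func(t *testing.T)) {
        t.Run(name, func(t *testing.T) {
            done := make(chan struct{})
            go func() { // watcher: goroutine-safe t methods only
                select {
                case <-done:
                case <-time.After(timeout):
                    buf := make([]byte, 1<<20)
                    n := runtime.Stack(buf, true)
                    t.Errorf("watchdog fired after %v", timeout)
                    t.Logf("test goroutines:\n%s", buf[:n])
                    // mount-side pprof dump goes here too
                }
            }()
            defer close(done)
            body(t) // main goroutine: t.Fatalf / require.* are legal here
        })
    }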
* fix(mount): IsComplete must detect coverage across adjacent intervals
Linux FUSE caps per-op writes at FUSE_MAX_PAGES_PER_REQ (typically
1 MiB on x86_64) regardless of go-fuse's requested MaxWrite, so a
2 MiB chunk filled by a sequential writer arrives as two adjacent
1 MiB write ops. addInterval in ChunkWrittenIntervalList does not
merge adjacent intervals, so the resulting list has two elements
{[0,1M], [1M,2M]} — fully covered, but list.size()==2.
IsComplete previously returned `list.size() == 1 &&
list.head.next.isComplete(chunkSize)`, which required a single
interval covering [0, chunkSize). Under that rule, chunks filled by
adjacent writes never reach IsComplete==true, so maybeMoveToSealed
never fires, and the chunks sit in writableChunks until
FlushAll/close. SaveContent handles the adjacency correctly via its
inline merge loop, so uploads work once they're triggered — but
IsComplete is the gate that triggers them.
This was a latent bug: without the write-buffer cap, the overflow
path kicks in at writableChunkLimit (default 128) and force-seals
chunks, hiding the leak. #9066's -writeBufferSizeMB adds a tighter
global cap, and with the 8-slot / 20 MiB test configuration the budget
trips long before overflow. The writer blocks in Reserve, waiting for a slot
that never frees because no uploader ever ran — observed in the CI
run 24390596623 mount pprof dump: goroutine 1 stuck in
WriteBufferAccountant.Reserve → cond.Wait, zero uploader goroutines
anywhere in the 89-goroutine dump.
Walk the (sorted) interval list tracking the furthest covered
offset; return true if coverage reaches chunkSize with no gaps. This
correctly handles adjacent intervals, overlapping intervals, and
out-of-order inserts. Added TestIsComplete_AdjacentIntervals
covering single-write, two adjacent halves (both orderings), eight
adjacent eighths, gaps, missing edges, and overlaps.
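The walk, sketched against the interval list's sentinel structure
(field names assumed):

    func (list *ChunkWrittenIntervalList) IsComplete(chunkSize int64) bool {
        covered := int64(0)
        for t := list.head.next; t != list.tail; t = t.next {
            if t.StartOffset > covered {
                return false // gap before this interval
            }
            if t.stopOffset > covered {
                covered = t.stopOffset // adjacent or overlapping: extend
            }
        }
        return covered >= chunkSize
    }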
* test(fuse): route mount glog to stderr + dump mount on any write error
Run 24392087737 (with the IsComplete fix) no longer hangs on Linux —
huge progress. Now TestWriteBufferCap/LargeFileUnderCap fails with
'close(...write_buffer_cap_large.bin): input/output error', meaning
a chunk upload failed and pages.lastErr propagated via FlushData to
close(). But the mount log in the CI artifact is empty because weed
mount's glog defaults to /tmp/weed.* files, which the CI upload step
never sees, so we can't tell WHICH upload failed or WHY.
Add -logtostderr=true -v=2 to MountOptions so glog output goes to
the mount process's stderr, which the framework's startProcess
redirects into f.logDir/mount.log, which the framework's DumpLogs
then prints to the test output on failure. The -v=2 floor enables
saveDataAsChunk upload errors (currently logged at V(0)) plus the
medium-level write_pipeline/upload traces without drowning the log
in V(4) noise.
Also dump MOUNT goroutines on any writeWithTimeout error (not just
timeout). The IsComplete fix means we now get explicit errors
instead of silent hangs, and the goroutine dump at the error moment
shows in-flight upload state (pending sealed chunks, retry loops,
etc) that a post-failure log alone can't capture.
* Make weed-fuse compatible with systemd-mount series
* fix: add missing type annotation on skipAutofs param in FreeBSD build
The parameter was declared without a type, causing a compile error on FreeBSD.
* fix: guard hasAutofs nil dereference and make FsName conditional on autofs mode
- Check option.hasAutofs for nil before dereferencing to prevent panic
when RunMount is called without the flag initialized.
- Only set FsName to "fuse" when autofs mode is active; otherwise
preserve the descriptive server:path name for mount/df output.
- Fix typo: recogize -> recognize.
* fix: consistent error handling for autofs option and log ignored _netdev
- Replace panic with fmt.Fprintf+return false for autofs parse errors,
matching the pattern used by other fuse option parsers (sketched below).
- Log when _netdev option is silently stripped to aid debugging.
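A sketch of the error-handling pattern, with illustrative surrounding
names (the option wiring is simplified):

    case "autofs":
        v, err := strconv.ParseBool(value)
        if err != nil {
            fmt.Fprintf(os.Stderr, "invalid autofs value %q: %v\n", value, err)
            return false // reject the option instead of panicking
        }
        *option.hasAutofs = v // field wiring illustrative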
---------
Co-authored-by: Chris Lu <chris.lu@gmail.com>
* mount: refresh and evict hot dir cache
* mount: guard dir update window and extend TTL
* mount: reuse timestamp for cache mark
* Apply suggestion from @gemini-code-assist[bot]
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* mount: make dir cache tuning configurable
* mount: dedupe dir update notices
* mount: restore invalidate-all cache helper
* mount: keep hot dir tuning constants
* mount: centralize cache state reset
* mount: mark refresh completion time
* mount: allow disabling idle eviction
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* This adds support for the new FUSE performance options to the 'weed fuse' command,
matching the functionality available in 'weed mount'.
Added options:
- writebackCache: Enable FUSE writeback cache for improved write performance
- asyncDio: Enable async direct I/O for better concurrency
- cacheSymlink: Enable symlink caching to reduce metadata lookups
- sys.novncache: (macOS only) Disable vnode name caching to avoid stale data
These options can now be used with mount -t weed:
mount -t weed fuse /mnt -o "filer=localhost:8888,writebackCache=true,asyncDio=true"
This ensures feature parity between 'weed mount' and 'weed fuse' commands.
Changes:
Modified weed/command/fuse.go to add a function GetFuseCommandName to return the name of the fuse command.
Modified weed/weed.go to conditionally initialize the global HTTP client only if the command is not "fuse" (see the sketch after this list).
Modified weed/command/fuse_std.go to parse parameters and ensure the global HTTP client is initialized for the fuse command.
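A sketch of the weed.go conditional (the init helper name is
hypothetical; GetFuseCommandName is the function added above):

    // Skip eager HTTP-client init for the fuse command; fuse_std.go
    // initializes it after parsing the fstab-style parameters.
    if len(os.Args) < 2 || os.Args[1] != command.GetFuseCommandName() {
        initGlobalHttpClient() // hypothetical name for the real init call
    }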
Tests:
Use /etc/fstab like:
fuse /repos fuse.weed filer=192.168.1.101:7202,filer.path=/hpc/repos,config_dir=/etc/seaweedfs/seaweedfs_01 0 0
fuse /opt/ohpc/pub fuse.weed filer=192.168.1.102:7202,filer.path=/hpc_cluster/pub,config_dir=/etc/seaweedfs/seaweedfs_02 0 0
Co-authored-by: zhangxl56 <zhangxl56@lenovo.com>
* mount: improve read throughput with parallel chunk fetching
This addresses issue #7504 where a single weed mount FUSE instance
does not fully utilize node network bandwidth when reading large files.
Changes:
- Add -concurrentReaders mount option (default: 16) to control the
maximum number of parallel chunk fetches during read operations
- Implement parallel section reading in ChunkGroup.ReadDataAt() using
errgroup for better throughput when reading across multiple sections
- Enhance ReaderCache with MaybeCacheMany() to prefetch multiple chunks
ahead in parallel during sequential reads (now prefetches 4 chunks)
- Increase ReaderCache limit dynamically based on concurrentReaders
to support higher read parallelism
The bottleneck was that chunks were being read sequentially even when
they reside on different volume servers. By introducing parallel chunk
fetching, a single mount instance can now better saturate available
network bandwidth.
Fixes: #7504
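The section fan-out is a straightforward errgroup pattern; a sketch
with a hypothetical sectionRange type standing in for the real
section/offset bookkeeping:

    // readDataAtParallel fetches each touched file section concurrently,
    // bounded by concurrentReaders.
    func readDataAtParallel(sections []sectionRange, buff []byte, concurrentReaders int) error {
        g := new(errgroup.Group)
        g.SetLimit(concurrentReaders) // bound the parallel chunk fetches
        for _, r := range sections {
            r := r // per-iteration copy for the closure
            g.Go(func() error {
                _, err := r.section.readDataAt(buff[r.bufStart:r.bufStop], r.fileOffset)
                return err
            })
        }
        return g.Wait() // first error wins
    }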
* fmt
* Address review comments: make prefetch configurable, improve error handling
Changes:
1. Add DefaultPrefetchCount constant (4) to reader_at.go
2. Add GetPrefetchCount() method to ChunkGroup that derives prefetch count
from concurrentReaders (1/4 ratio, min 1, max 8)
3. Pass prefetch count through NewChunkReaderAtFromClient
4. Fix error handling in readDataAtParallel to prioritize errgroup error
5. Update all callers to use DefaultPrefetchCount constant
For mount operations, prefetch scales with -concurrentReaders:
- concurrentReaders=16 (default) -> prefetch=4
- concurrentReaders=32 -> prefetch=8 (capped)
- concurrentReaders=4 -> prefetch=1
For non-mount paths (WebDAV, query engine, MQ), uses DefaultPrefetchCount.
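The derivation in sketch form (field name assumed):

    // prefetch = concurrentReaders/4, clamped to [1, 8]:
    // 16 -> 4, 32 -> 8 (capped), 4 -> 1.
    func (group *ChunkGroup) GetPrefetchCount() int {
        p := group.concurrentReaders / 4
        if p < 1 {
            p = 1
        }
        if p > 8 {
            p = 8
        }
        return p
    }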
* fmt
* Refactor: use variadic parameter instead of new function name
Use NewChunkGroup with optional concurrentReaders parameter instead of
creating a separate NewChunkGroupWithConcurrency function.
This maintains backward compatibility - existing callers without the
parameter get the default of 16 concurrent readers.
* Use explicit concurrentReaders parameter instead of variadic
* Refactor: use MaybeCache with count parameter instead of new MaybeCacheMany function
* Address nitpick review comments
- Add upper bound (128) on concurrentReaders to prevent excessive goroutine fan-out (sketched below)
- Cap readerCacheLimit at 256 accordingly
- Fix SetChunks: use Lock() instead of RLock() since we are writing to group.sections
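The clamps, sketched (the readerCacheLimit scaling shown is
illustrative; only the 128/256 ceilings come from the review):

    if concurrentReaders > 128 {
        concurrentReaders = 128 // bound goroutine fan-out
    }
    readerCacheLimit := concurrentReaders * 2 // illustrative derivation
    if readerCacheLimit > 256 {
        readerCacheLimit = 256
    }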