* feat(mount): pre-allocate file IDs in pool for writeback cache mode

  When writeback caching is enabled, chunk uploads no longer block on a per-chunk AssignVolume RPC. Instead, a FileIdPool pre-allocates file IDs in batches using a single AssignVolume(Count=N, ExpectedDataSize=ChunkSize) call and hands them out instantly to upload workers. The pool size is 2x ConcurrentWriters, refilled in the background when it drops below ConcurrentWriters. Entries expire after 25s to respect the JWT TTL. Sequential needle keys are generated from the base file ID returned by the master, so one Assign RPC produces N usable IDs.

  This cuts per-chunk upload latency from 2 RTTs (assign + upload) to 1 RTT (upload only), with the assign cost amortized across the batch. (Sketches of this pool and the other mechanisms follow the commit list.)

* test: add benchmarks for file ID pool vs direct assign

  Benchmarks measure:

  - Pool Get vs direct AssignVolume at various simulated latencies
  - Batch assign scaling (Count=1 through Count=32)
  - Concurrent pool access with 1-64 workers

  Results on Apple M4:

  - Pool Get: constant ~3ns regardless of assign latency
  - Batch=16: 15.7x more IDs/sec than individual assigns
  - 64 concurrent workers: 19M IDs/sec throughput

* fix(mount): address review feedback on file ID pool

  1. Fix a race condition in Get(): use sync.Cond so callers wait for an in-flight refill instead of returning an error when the pool is empty.
  2. Match the default pool size to the async flush worker count (128, not 16) when ConcurrentWriters is unset.
  3. Add logging to UploadWithAssignFunc for consistency with UploadWithRetry.
  4. Document that pooled assigns omit the Path field, bypassing path-based storage rules (filer.conf). This is an intentional tradeoff for writeback cache performance.
  5. Fix the flaky expiry test: widen the time margin from 50ms to 1s.
  6. Add TestFileIdPoolGetWaitsForRefill to verify concurrent waiters.

* fix(mount): use individual Count=1 assigns to get per-fid JWTs

  The master generates one JWT per AssignResponse, bound to the base file ID (master_grpc_server_assign.go:158). The volume server validates that the JWT's Fid matches the upload exactly (volume_server_handlers.go:367). Using Count=N and deriving sequential IDs would fail this check.

  Switch to individual Count=1 RPCs over a single gRPC connection. This still amortizes connection overhead while obtaining a correct per-fid JWT for each entry. Partial batches are accepted if some requests fail. Remove the now-unused needle import, since sequential ID generation is gone.

* fix(mount): separate pprof from FUSE protocol debug logging

  The -debug flag was enabling both the pprof HTTP server and the noisy go-fuse protocol logging (rx/tx lines for every FUSE operation). This makes profiling impractical because the log output dominates.

  Split into two flags:

  - -debug: enables the pprof HTTP server only (for profiling)
  - -debug.fuse: enables raw FUSE protocol request/response logging

* perf(mount): replace LevelDB read+write with in-memory overlay for dir mtime

  Profiling showed TouchDirMtimeCtime at 0.22s: every create/rename/unlink in a directory did a LevelDB FindEntry (read) plus UpdateEntry (write) just to bump the parent directory's mtime/ctime.

  Replace this with an in-memory map (the same pattern as the existing atime overlay):

  - touchDirMtimeCtimeLocal now stores inode→timestamp in dirMtimeMap
  - applyInMemoryDirMtime overlays the timestamp onto GetAttr/Lookup output
  - No LevelDB I/O on the mutation hot path

  The overlay only advances timestamps forward (max of stored vs overlay), so stale entries are harmless. The map is bounded at 8192 entries.
* perf(mount): skip self-originated metadata subscription events in writeback mode

  With writeback caching, this mount is the single writer. All local mutations are already applied to the local meta cache (via applyLocalMetadataEvent or a direct InsertEntry). The filer subscription then delivers the same event back, causing redundant work: proto.Clone, enqueueing to the apply loop, the dedup ring check, and sometimes redundant LevelDB writes when the dedup ring misses (deferred creates).

  Check EventNotification.Signatures against selfSignature and skip events that originated from this mount. This eliminates the redundant processing for every self-originated mutation.

* perf(mount): increase kernel FUSE cache TTL in writeback cache mode

  With writeback caching, this mount is the single writer, so the local meta cache is authoritative. Increase EntryValid and AttrValid from 1s to 10s so the kernel doesn't re-issue Lookup/GetAttr for every path component and stat call.

  This reduces FUSE /dev/fuse round-trips, which dominate the profile at 38% of CPU (syscall.rawsyscalln). Each saved round-trip eliminates a kernel→userspace→kernel transition. Normal (non-writeback) mode retains the 1s TTL for multi-mount consistency.
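
A rough sketch of the pool described in the first and third commits above. The names (FileIdPool, PooledFileId, NewFileIdPool, startRefillLocked), the struct layout, and the exact refill and expiry handling are assumptions for illustration, not the committed SeaweedFS code. It only shows the shape of the mechanism: a bounded slice of pre-assigned fids, a low-water refill trigger at ConcurrentWriters, 25s expiry to stay inside the JWT TTL, and a sync.Cond so a Get on an empty pool waits for the in-flight refill instead of erroring.

```go
// A minimal sketch, not the committed code: names, fields, and policy details
// are assumptions made for illustration.
package mount

import (
	"sync"
	"time"
)

// PooledFileId carries one pre-assigned file ID plus the upload location and
// the JWT the volume server will accept for exactly that fid.
type PooledFileId struct {
	FileId, UploadUrl, Jwt string
	assignedAt             time.Time
}

type FileIdPool struct {
	mu        sync.Mutex
	cond      *sync.Cond
	entries   []PooledFileId
	low       int           // refill when the pool drops below this (ConcurrentWriters)
	target    int           // refill back up to this (2x ConcurrentWriters)
	maxAge    time.Duration // e.g. 25s, to stay inside the JWT TTL
	refilling bool
	lastErr   error
	assign    func(n int) ([]PooledFileId, error) // performs the AssignVolume RPCs
}

// NewFileIdPool sizes the pool off ConcurrentWriters: the refill target is 2x
// the writer count, with a refill triggered once the pool drops below 1x.
func NewFileIdPool(concurrentWriters int, assign func(int) ([]PooledFileId, error)) *FileIdPool {
	p := &FileIdPool{
		low:    concurrentWriters,
		target: 2 * concurrentWriters,
		maxAge: 25 * time.Second,
		assign: assign,
	}
	p.cond = sync.NewCond(&p.mu)
	return p
}

// Get hands out one pre-assigned file ID. When the pool is empty it waits for
// the in-flight refill instead of returning an error (the sync.Cond fix).
func (p *FileIdPool) Get() (PooledFileId, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for {
		// Drop entries older than maxAge so a stale JWT is never handed out.
		now, kept := time.Now(), p.entries[:0]
		for _, e := range p.entries {
			if now.Sub(e.assignedAt) < p.maxAge {
				kept = append(kept, e)
			}
		}
		p.entries = kept

		if n := len(p.entries); n > 0 {
			fid := p.entries[n-1]
			p.entries = p.entries[:n-1]
			if n-1 < p.low {
				p.startRefillLocked() // top up in the background before running dry
			}
			return fid, nil
		}

		p.startRefillLocked()
		p.cond.Wait() // releases p.mu while blocked, reacquires before returning
		if len(p.entries) == 0 && p.lastErr != nil {
			return PooledFileId{}, p.lastErr
		}
	}
}

// startRefillLocked starts at most one background refill; callers hold p.mu.
func (p *FileIdPool) startRefillLocked() {
	if p.refilling {
		return
	}
	p.refilling, p.lastErr = true, nil
	need := p.target - len(p.entries)
	go func() {
		fresh, err := p.assign(need)
		p.mu.Lock()
		p.refilling, p.lastErr = false, err
		p.entries = append(p.entries, fresh...)
		p.cond.Broadcast() // wake any Get() blocked on an empty pool
		p.mu.Unlock()
	}()
}
```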
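
An illustrative micro-benchmark in the spirit of the committed ones, written against the pool sketch above (same package, in a _test.go file); the 2ms simulated assign latency and the writer count are stand-ins, not the actual test parameters or results.

```go
package mount

import (
	"fmt"
	"testing"
	"time"
)

// BenchmarkFileIdPoolGetConcurrent hammers Get from parallel goroutines while
// the assign callback simulates a slow AssignVolume round trip.
func BenchmarkFileIdPoolGetConcurrent(b *testing.B) {
	p := NewFileIdPool(16, func(n int) ([]PooledFileId, error) {
		time.Sleep(2 * time.Millisecond) // simulated AssignVolume latency
		out := make([]PooledFileId, n)
		for i := range out {
			out[i] = PooledFileId{FileId: fmt.Sprintf("3,%02x", i), assignedAt: time.Now()}
		}
		return out, nil
	})
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			if _, err := p.Get(); err != nil {
				b.Error(err)
				return
			}
		}
	})
}
```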
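
The per-fid JWT fix changes how a refill gathers its entries. A minimal sketch, reusing the PooledFileId type from the pool sketch and assuming a hypothetical assignOne callback that performs one AssignVolume(Count=1) RPC on an already-open gRPC connection to the filer:

```go
// assignBatch issues n individual Count=1 assigns on one shared connection.
// Each response carries a JWT bound to exactly that fid, which the volume
// server checks against the upload; a partial batch is still accepted.
func assignBatch(n int, assignOne func() (PooledFileId, error)) ([]PooledFileId, error) {
	fresh := make([]PooledFileId, 0, n)
	var firstErr error
	for i := 0; i < n; i++ {
		fid, err := assignOne()
		if err != nil {
			if firstErr == nil {
				firstErr = err
			}
			continue // keep whatever assigns did succeed
		}
		fresh = append(fresh, fid)
	}
	if len(fresh) == 0 {
		return nil, firstErr
	}
	return fresh, nil
}
```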
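
A standalone sketch of the flag split, using the standard flag package and go-fuse's MountOptions.Debug field; the listen address and the way weed mount actually wires these options into its own command framework are illustrative assumptions.

```go
package main

import (
	"flag"
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux

	"github.com/hanwen/go-fuse/v2/fuse"
)

var (
	debug     = flag.Bool("debug", false, "serve pprof over HTTP for profiling")
	debugFuse = flag.Bool("debug.fuse", false, "log raw FUSE protocol requests/responses")
)

func fuseMountOptions() *fuse.MountOptions {
	if *debug {
		// Profiling only: no per-operation rx/tx lines drowning the output.
		go func() {
			log.Println(http.ListenAndServe("localhost:6061", nil)) // illustrative address
		}()
	}
	return &fuse.MountOptions{
		Debug: *debugFuse, // go-fuse protocol logging is now opted into separately
	}
}
```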
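
A sketch of the in-memory dir-mtime overlay; the method names mirror the commit message (touchDirMtimeCtimeLocal, applyInMemoryDirMtime), but the struct, the inode keying, and the drop-when-full bound are assumptions for illustration.

```go
package mount

import (
	"sync"
	"time"
)

const maxDirMtimeEntries = 8192 // keep the overlay bounded

type dirMtimeOverlay struct {
	mu sync.Mutex
	m  map[uint64]int64 // inode -> latest mtime/ctime in unix nanoseconds
}

// touchDirMtimeCtimeLocal records the bump in memory, with no LevelDB I/O on
// the create/rename/unlink hot path.
func (o *dirMtimeOverlay) touchDirMtimeCtimeLocal(inode uint64, ts time.Time) {
	o.mu.Lock()
	defer o.mu.Unlock()
	if o.m == nil {
		o.m = make(map[uint64]int64)
	}
	if _, exists := o.m[inode]; !exists && len(o.m) >= maxDirMtimeEntries {
		return // assumed policy: drop new inodes when full; existing ones still advance
	}
	if ns := ts.UnixNano(); ns > o.m[inode] {
		o.m[inode] = ns // only ever move forward
	}
}

// applyInMemoryDirMtime overlays the recorded timestamp onto attributes read
// from the store; taking the max means a stale overlay entry is harmless.
func (o *dirMtimeOverlay) applyInMemoryDirMtime(inode uint64, storedMtimeNs int64) int64 {
	o.mu.Lock()
	defer o.mu.Unlock()
	if overlay, ok := o.m[inode]; ok && overlay > storedMtimeNs {
		return overlay
	}
	return storedMtimeNs
}
```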
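
The self-event skip reduces to a membership check on the event's Signatures. A small sketch, assuming the filer_pb.EventNotification message and a selfSignature recorded when the mount subscribes; the surrounding subscription apply loop is omitted.

```go
package mount

import "github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"

// shouldSkipSelfEvent reports whether a subscription event originated from
// this mount. In writeback mode the mutation was already applied to the local
// meta cache, so re-applying the echoed event is pure overhead.
func shouldSkipSelfEvent(notification *filer_pb.EventNotification, selfSignature int32) bool {
	for _, sig := range notification.Signatures {
		if sig == selfSignature {
			return true
		}
	}
	return false
}
```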
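
A sketch of the TTL selection for the last commit, assuming a hypothetical cacheTTL helper and writeback flag; go-fuse's EntryOut exposes SetEntryTimeout and SetAttrTimeout, which is where the EntryValid/AttrValid values end up.

```go
package mount

import (
	"time"

	"github.com/hanwen/go-fuse/v2/fuse"
)

// cacheTTL picks how long the kernel may trust Lookup/GetAttr results.
func cacheTTL(writebackCache bool) time.Duration {
	if writebackCache {
		// Single-writer mount: the local meta cache is authoritative, so fewer
		// /dev/fuse round-trips are worth the longer TTL.
		return 10 * time.Second
	}
	return time.Second // short TTL preserved for multi-mount consistency
}

// fillEntryOut stamps the chosen TTL onto a Lookup reply.
func fillEntryOut(out *fuse.EntryOut, writebackCache bool) {
	ttl := cacheTTL(writebackCache)
	out.SetEntryTimeout(ttl)
	out.SetAttrTimeout(ttl)
}
```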