Chris Lu b37bbf541a feat(master): drain pending size before marking volume readonly (#9036)
* feat(master): drain pending size before marking volume readonly

When vacuum, volume move, or EC encoding marks a volume readonly,
in-flight assigned bytes may still be pending. This adds a drain step:
immediately remove the volume from the writable list (stopping new
assigns), then wait for the pending size to decay below 4MB, with a
30s timeout.

- Add volumeSizeTracking struct consolidating effectiveSize,
  reportedSize, and compactRevision into a single map
- Add GetPendingSize, waitForPendingDrain, DrainAndRemoveFromWritable,
  DrainAndSetVolumeReadOnly to VolumeLayout
- UpdateVolumeSize detects compaction via compactRevision change and
  resets effectiveSize instead of decaying
- Wire the drain into vacuum (topology_vacuum.go) and the volume
  mark-readonly path (master_grpc_server_volume.go)
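
A minimal sketch of the drain step, reusing the names listed above; the
struct fields, lock, and polling cadence are assumptions, not the actual
implementation:

```go
package topology

import (
	"sync"
	"time"
)

// VolumeId is a stand-in for needle.VolumeId in the real codebase.
type VolumeId uint32

// volumeSizeTracking consolidates per-volume size state, as described above.
type volumeSizeTracking struct {
	effectiveSize   uint64 // reported size plus not-yet-landed assigned bytes
	reportedSize    uint64 // last size seen in a volume server heartbeat
	compactRevision uint16 // bumped when the volume is compacted (vacuumed)
}

// VolumeLayout is reduced here to the fields this sketch needs.
type VolumeLayout struct {
	accessLock sync.RWMutex
	sizeMap    map[VolumeId]*volumeSizeTracking
	writables  []VolumeId
}

const (
	drainThreshold = 4 * 1024 * 1024 // initial threshold; lowered to 2MB below
	drainTimeout   = 30 * time.Second
)

// GetPendingSize returns bytes assigned but not yet visible in the reported size.
func (vl *VolumeLayout) GetPendingSize(vid VolumeId) uint64 {
	vl.accessLock.RLock()
	defer vl.accessLock.RUnlock()
	if t, ok := vl.sizeMap[vid]; ok && t.effectiveSize > t.reportedSize {
		return t.effectiveSize - t.reportedSize
	}
	return 0
}

// DrainAndRemoveFromWritable stops new assigns immediately, then waits for the
// pending size to decay below the threshold or for the timeout to expire.
func (vl *VolumeLayout) DrainAndRemoveFromWritable(vid VolumeId) {
	vl.removeFromWritable(vid)
	deadline := time.Now().Add(drainTimeout)
	for vl.GetPendingSize(vid) >= drainThreshold && time.Now().Before(deadline) {
		time.Sleep(500 * time.Millisecond) // pending decays as heartbeats arrive
	}
}

func (vl *VolumeLayout) removeFromWritable(vid VolumeId) {
	vl.accessLock.Lock()
	defer vl.accessLock.Unlock()
	for i, w := range vl.writables {
		if w == vid {
			vl.writables = append(vl.writables[:i], vl.writables[i+1:]...)
			return
		}
	}
}
```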

* fix: use 2MB pending size drain threshold

* fix: check crowded state on initial UpdateVolumeSize registration

* fix: respect context cancellation in drain, relax test timing

- DrainAndSetVolumeReadOnly now accepts context.Context and returns
  early on cancellation (for gRPC handler timeout/cancel)
- waitForPendingDrain uses select on ctx.Done instead of time.Sleep
- Increase concurrent heartbeat test timeout from 10s to 15s for CI
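
Roughly, the context-aware wait replaces the sleep with a select, so a
cancelled gRPC request stops the drain early. This extends the sketch above
(adding a context import); the polling interval is again an assumption:

```go
// waitForPendingDrain polls until pending bytes fall below the threshold,
// returning early when the caller's context is cancelled (gRPC timeout/cancel).
func (vl *VolumeLayout) waitForPendingDrain(ctx context.Context, vid VolumeId) error {
	timeout := time.After(drainTimeout)
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for vl.GetPendingSize(vid) >= drainThreshold {
		select {
		case <-ctx.Done():
			return ctx.Err() // caller gave up; don't keep the handler blocked
		case <-timeout:
			return nil // drain window expired; proceed anyway
		case <-ticker.C:
			// loop re-checks the pending size
		}
	}
	return nil
}
```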

* fix: use time-based dedup so decay runs even when reported size is unchanged

The value-based dedup (same reportedSize + compactRevision = skip) prevented
decay from running when pending bytes existed but no writes had landed on
disk yet. The reported size stayed the same across heartbeats, so the excess
never decayed.

Fix: dedup replicas within the same heartbeat cycle using a 2-second time
window instead of comparing values. This allows decay to run once per
heartbeat cycle even when the reported size is unchanged.
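
A sketch of the time-window dedup, extending the earlier sketch: it assumes
volumeSizeTracking gains a lastUpdate time.Time field, and the halving decay
here is illustrative, not the actual decay formula:

```go
const dedupWindow = 2 * time.Second // replicas reporting within one cycle are deduped

// UpdateVolumeSize (sketch): dedup is by time, not value, so decay can run
// once per heartbeat cycle even when reportedSize has not changed.
func (vl *VolumeLayout) UpdateVolumeSize(vid VolumeId, reportedSize uint64, compactRevision uint16) {
	vl.accessLock.Lock()
	defer vl.accessLock.Unlock()
	t, ok := vl.sizeMap[vid]
	if !ok {
		t = &volumeSizeTracking{effectiveSize: reportedSize, reportedSize: reportedSize}
		vl.sizeMap[vid] = t // initial registration (crowded-state check would go here)
	}
	now := time.Now()
	if now.Sub(t.lastUpdate) < dedupWindow {
		return // another replica already reported in this heartbeat cycle
	}
	t.lastUpdate = now
	switch {
	case compactRevision != t.compactRevision:
		// compaction shrank the volume: reset instead of decaying
		t.compactRevision = compactRevision
		t.effectiveSize = reportedSize
	case reportedSize >= t.effectiveSize:
		t.effectiveSize = reportedSize
	default:
		// decay the pending excess toward the reported size (illustrative rate)
		t.effectiveSize = reportedSize + (t.effectiveSize-reportedSize)/2
	}
	t.reportedSize = reportedSize
}
```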

Also confirmed finding 1 (draining re-add race) is a false positive:
- Vacuum: ensureCorrectWritables only runs for ReadOnly-changed volumes
- Move/EC: readonlyVolumes flag prevents re-adding during drain

* fix: make VolumeMarkReadonly non-blocking to fix EC integration test timeout

The DrainAndSetVolumeReadOnly call in the VolumeMarkReadonly gRPC handler
blocked for up to 30s waiting for pending bytes to decay. In integration
tests (and in real clusters during EC encoding), this caused timeouts
because multiple volumes are marked readonly sequentially and heartbeats
may not arrive fast enough to decay the pending size within the drain
window.

Fix: VolumeMarkReadonly now calls SetVolumeReadOnly immediately (stopping
new assigns) and only logs a warning if pending bytes remain. The drain
wait is kept only for vacuum (DrainAndRemoveFromWritable), which runs
inside the master's own goroutine pool.

Remove DrainAndSetVolumeReadOnly as it's no longer used.
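
The non-blocking path might look roughly like this, reusing the sketched
VolumeLayout; SetVolumeReadOnly's body is simplified, and log.Printf stands
in for the glog warning the real handler would emit:

```go
// SetVolumeReadOnly (sketch): remove from writables so assigns stop at once.
func (vl *VolumeLayout) SetVolumeReadOnly(vid VolumeId) {
	vl.removeFromWritable(vid)
}

// markReadonlyNonBlocking shows the new VolumeMarkReadonly behavior: no drain
// wait inside the gRPC handler; leftover pending bytes only produce a warning.
func markReadonlyNonBlocking(vl *VolumeLayout, vid VolumeId) {
	vl.SetVolumeReadOnly(vid)
	if pending := vl.GetPendingSize(vid); pending > 0 {
		log.Printf("volume %d marked readonly with %d bytes still pending", vid, pending)
	}
}
```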

* fix: relax test timing, rename test, add post-condition assert

* test: add vacuum integration tests with CI workflow

Full-cluster integration test for vacuum, modeled on the EC integration
tests. Starts a real master + 2 volume servers, uploads data, deletes
entries to create garbage, runs volume.vacuum via shell command, and
verifies garbage cleanup and data integrity.

Test flow (see the Go sketch after this list):
1. Start cluster (master + 2 volume servers)
2. Upload 10 files to create volume with data
3. Delete 5 files to create ~50% garbage
4. Verify garbage ratio > 10%
5. Run volume.vacuum command
6. Verify garbage cleaned up
7. Verify remaining 5 files are still accessible
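
A hypothetical skeleton of that flow; every helper name here (startCluster,
uploadFiles, garbageRatio, and so on) is invented for illustration and is
not the actual test code:

```go
func TestVacuumCleansGarbage(t *testing.T) {
	cluster := startCluster(t, 1, 2) // step 1: one master + 2 volume servers
	defer cluster.Stop()

	payloads := uploadFiles(t, cluster, 10) // step 2
	deleteFiles(t, cluster, payloads[:5])   // step 3: ~50% garbage

	if ratio := garbageRatio(t, cluster); ratio <= 0.10 {
		t.Fatalf("want >10%% garbage before vacuum, got %.0f%%", ratio*100) // step 4
	}
	runShell(t, cluster, "volume.vacuum")         // step 5
	verifyGarbageCleaned(t, cluster)              // step 6
	verifyReadable(t, cluster, payloads[5:])      // step 7
}
```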

The CI workflow runs on pushes and PRs to master with a 15-minute
timeout, and collects logs on failure via artifact upload.

* fix: use 500KB files and delete 75% to exceed vacuum garbage threshold

* fix: add shell lock before vacuum command, fix compilation error

* fix: strengthen vacuum integration test assertions

- waitForServer: use net.DialTimeout instead of grpc.NewClient for a
  real TCP readiness check (see the sketch below)
- verify_garbage_before_vacuum: t.Fatal instead of warning when no
  garbage detected
- verify_cleanup_after_vacuum: t.Fatal if no server reported the
  volume or cleanup wasn't verified
- verify_remaining_data: read actual file contents via HTTP and
  compare byte-for-byte against original uploaded payloads
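
The readiness check might be sketched like this (imports: net, testing,
time); the retry cadence and per-dial timeout are assumptions:

```go
// waitForServer dials the TCP port directly: grpc.NewClient constructs a lazy
// client without connecting, so a successful dial is the stronger readiness signal.
func waitForServer(t *testing.T, addr string, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if conn, err := net.DialTimeout("tcp", addr, time.Second); err == nil {
			conn.Close()
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	t.Fatalf("server %s not ready within %v", addr, timeout)
}
```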

* fix: use http.Client with timeout and close body before retry
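
A sketch of that pattern (imports: fmt, io, net/http, time); the timeout
value, retry count, and helper name are assumptions:

```go
var httpClient = &http.Client{Timeout: 10 * time.Second} // bounded per-request timeout

// fetchWithRetry closes the response body before each retry so the underlying
// connection is released instead of leaking across attempts.
func fetchWithRetry(url string, attempts int) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := httpClient.Get(url)
		if err != nil {
			lastErr = err
			continue
		}
		body, readErr := io.ReadAll(resp.Body)
		resp.Body.Close() // close before any retry
		if readErr == nil && resp.StatusCode == http.StatusOK {
			return body, nil
		}
		lastErr = fmt.Errorf("GET %s: status %d, read err %v", url, resp.StatusCode, readErr)
	}
	return nil, lastErr
}
```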