Mirror of https://github.com/seaweedfs/seaweedfs.git (synced 2026-05-13 21:31:32 +00:00)
* test(vacuum): fix flaky TestVacuumIntegration across multiple volumes

  The test assumed all uploaded files landed in a single volume and tracked only the last file's volume id. With -volumeSizeLimitMB 10 and 16x500KB files, the master can spread uploads across volumes, so the tracked id could point to a volume with no deletes and thus 0% garbage, causing verify_garbage_before_vacuum to fail even though vacuum ran correctly on the other volume. Track the set of volumes where deletes actually occurred and verify garbage/cleanup against all of them. Also add a short retry loop on the pre-vacuum check to absorb heartbeat jitter.

* test(vacuum): require all dirty volumes ready; retry cleanup check

  Address review feedback: the pre-vacuum check now waits until every volume in dirtyVolumes reports garbage > threshold (not just the first), and the post-vacuum cleanup check retries per-volume with a deadline instead of relying on a fixed sleep, since vacuum + heartbeat reporting is asynchronous.

* test(vacuum): deterministic dirty volumes order, aggregate cleanup failures

  - Sort dirtyVolumes after building from the set so logs and iteration are stable across runs.
  - In verify_cleanup_after_vacuum, track per-volume failure reasons in a map and report all still-failing volumes on timeout instead of only the last one that happened to be written to lastErr.