* fix(volume): don't fatal on missing .idx for remote-tiered volume
A .vif left behind without its .idx (orphaned by a crashed move, a
partial copy, or a hand-edit) would trip glog.Fatalf in checkIdxFile
and take the whole volume server down on boot, killing every healthy
volume it hosts. For remote-tiered volumes, treat this as a per-volume
load error so the server can come up and the operator can clean up the
stray .vif.
Refs #9331.
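A minimal sketch of the change, with a simplified Volume stand-in (the
real checkIdxFile lives in weed/storage and its signature differs):

```go
package storage

import (
	"fmt"
	"os"
)

// Illustrative stand-in for the real Volume type in weed/storage.
type Volume struct {
	Id            uint32
	indexFileName string
	hasRemoteFile bool
}

func (v *Volume) HasRemoteFile() bool { return v.hasRemoteFile }

// checkIdxFile sketch: a missing .idx on a remote-tiered volume becomes
// a per-volume load error instead of a process-wide glog.Fatalf.
func checkIdxFile(v *Volume) error {
	if _, err := os.Stat(v.indexFileName); os.IsNotExist(err) && v.HasRemoteFile() {
		// Before the fix: glog.Fatalf here killed every volume on boot.
		return fmt.Errorf("remote-tiered volume %d: missing index file %s",
			v.Id, v.indexFileName)
	}
	return nil
}
```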
* fix(balance): skip remote-tiered volumes in admin balance detection
The admin/worker balance detector had no equivalent of the shell-side
guard ("does not move volume in remote storage" in
command_volume_balance.go), so it scheduled moves on remote-tiered
volumes. The "move" copies .idx/.vif to the destination and then calls
Volume.Destroy on the source, which calls backendStorage.DeleteFile —
deleting the remote object the destination's new .vif now points at.
Populate HasRemoteCopy on the metrics emitted by both the admin
maintenance scanner and the worker's master poll, then drop those
volumes at the top of Detection.
Fixes #9331.
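A sketch of the guard, with a simplified metric type standing in for
the real health metrics emitted by the scanner and the worker poll:

```go
package maintenance

// Simplified stand-in for the metric emitted by the admin maintenance
// scanner and the worker's master poll.
type VolumeHealthMetrics struct {
	VolumeID      uint32
	HasRemoteCopy bool
}

// filterBalanceCandidates sketch: drop remote-tiered volumes at the top
// of Detection, mirroring the shell-side "does not move volume in
// remote storage" guard in command_volume_balance.go.
func filterBalanceCandidates(metrics []*VolumeHealthMetrics) []*VolumeHealthMetrics {
	var candidates []*VolumeHealthMetrics
	for _, m := range metrics {
		if m.HasRemoteCopy {
			continue // a move would destroy the shared remote object
		}
		candidates = append(candidates, m)
	}
	return candidates
}
```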
* Apply suggestion from @gemini-code-assist[bot]
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* fix(volume): keep remote data on volume-move-driven delete
The on-source delete after a volume move (admin/worker balance and
shell volume.move) ran Volume.Destroy with no way to opt out of the
remote-object cleanup. Volume.Destroy unconditionally calls
backendStorage.DeleteFile for remote-tiered volumes, so a successful
move would copy .idx/.vif to the destination and then nuke the cloud
object the destination's new .vif was already pointing at.
Add VolumeDeleteRequest.keep_remote_data and plumb it through
Store.DeleteVolume / DiskLocation.DeleteVolume / Volume.Destroy. The
balance task and shell volume.move set it to true; the post-tier-upload
cleanup of other replicas and the over-replication trim in
volume.fix.replication also set it to true since the remote object is
still referenced. Other real-delete callers keep the default. The
delete-before-receive path in VolumeCopy also sets it: the inbound copy
carries a .vif that may reference the same cloud object as the
existing volume.
Refs #9331.
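A sketch of the Destroy end of the plumbing, with simplified stand-ins
for the storage and backend types:

```go
package storage

import "fmt"

// Simplified stand-ins; the real types live in weed/storage and
// weed/storage/backend.
type BackendStorage interface{ DeleteFile(key string) error }

type Volume struct {
	Id        uint32
	remoteKey string         // set when the volume is tiered out
	backend   BackendStorage // nil for purely local volumes
}

func (v *Volume) HasRemoteFile() bool { return v.backend != nil }

func (v *Volume) removeLocalFiles() error { return nil } // .dat/.idx/.vif cleanup elided

// Destroy sketch: keepRemoteData lets move-driven deletes drop only the
// local files while the destination's .vif keeps referencing the cloud
// object; real deletes still clean it up.
func (v *Volume) Destroy(keepRemoteData bool) error {
	if v.HasRemoteFile() && !keepRemoteData {
		if err := v.backend.DeleteFile(v.remoteKey); err != nil {
			return fmt.Errorf("delete remote object %s: %w", v.remoteKey, err)
		}
	}
	return v.removeLocalFiles()
}
```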
* test(storage): in-process remote-tier integration tests
Cover the four operations the user is most likely to run against a
cloud-tiered volume — balance/move, vacuum, EC encode, EC decode — by
registering a local-disk-backed BackendStorage as the "remote" tier and
exercising the real Volume / DiskLocation / EC encoder code paths.
Locks in:
- Destroy(keepRemoteData=true) preserves the remote object (move case)
- Destroy(keepRemoteData=false) deletes it (real-delete case)
- Vacuum/compact on a remote-tier volume never deletes the remote object
- EC encode requires the local .dat (callers must download first)
- EC encode + rebuild round-trips after a tier-down
Tests run in-process and finish in under a second total — no cluster,
binary, or external storage required.
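A self-contained sketch of the first two assertions, with a temp-dir
backend standing in for the registered BackendStorage and a destroy
helper standing in for Volume.Destroy:

```go
package storage_test

import (
	"os"
	"path/filepath"
	"testing"
)

// localDiskBackend plays the "remote" tier, backed by a temp dir, in
// the spirit of the in-process tests' local-disk BackendStorage.
type localDiskBackend struct{ dir string }

func (b *localDiskBackend) DeleteFile(key string) error {
	return os.Remove(filepath.Join(b.dir, key))
}

func (b *localDiskBackend) objectExists(key string) bool {
	_, err := os.Stat(filepath.Join(b.dir, key))
	return err == nil
}

// destroy mimics Volume.Destroy's remote handling: delete the remote
// object only when keepRemoteData is false.
func destroy(b *localDiskBackend, key string, keepRemoteData bool) error {
	if !keepRemoteData {
		return b.DeleteFile(key)
	}
	return nil
}

func TestDestroyRemoteObjectHandling(t *testing.T) {
	b := &localDiskBackend{dir: t.TempDir()}
	key := "v7.dat"
	if err := os.WriteFile(filepath.Join(b.dir, key), []byte("tiered"), 0o644); err != nil {
		t.Fatal(err)
	}

	if err := destroy(b, key, true); err != nil { // move case
		t.Fatal(err)
	}
	if !b.objectExists(key) {
		t.Fatal("move-driven destroy deleted the remote object")
	}

	if err := destroy(b, key, false); err != nil { // real-delete case
		t.Fatal(err)
	}
	if b.objectExists(key) {
		t.Fatal("real delete left the remote object behind")
	}
}
```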
* fix(rust-volume): keep remote data on volume-move-driven delete
Mirror the Go fix in seaweed-volume: plumb keep_remote_data through
grpc volume_delete → Store.delete_volume → DiskLocation.delete_volume
→ Volume.destroy, and skip the s3-tier delete_file call when the flag
is set. The pre-receive cleanup in volume_copy passes true for the
same reason as the Go side: the inbound copy carries a .vif that may
reference the same cloud object as the existing volume.
The Rust loader already warns rather than fataling on a stray .vif
without an .idx (volume.rs load_index_inmemory / load_index_redb), so
no counterpart to the Go fatal-on-missing-idx fix is needed.
Refs #9331.
* fix(volume): preserve remote tier on IO-error eviction; fix EC test target
Two review nits:
- Store.MaybeAddVolumes' periodic cleanup pass deleted IO-errored
volumes with keepRemoteData=false, so a transient local fault on a
remote-tiered volume would also nuke the cloud object. Track the
delete reason via a parallel slice and pass keepRemoteData=v.HasRemoteFile()
for IO-error evictions; TTL-expired evictions still pass false.
- TestRemoteTier_ECEncodeDecode_AfterDownload deleted shards 0..3 but
called them "parity" — by the klauspost/reedsolomon convention shards
0..DataShardsCount-1 are data and DataShardsCount..TotalShardsCount-1
are parity. Switch the loop to delete the parity range so the
intent matches the indices.
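A sketch of the eviction bookkeeping, with simplified types:

```go
package storage

type Volume struct{ hasRemoteFile bool }

func (v *Volume) HasRemoteFile() bool { return v.hasRemoteFile }

type Store struct{}

func (s *Store) deleteVolume(v *Volume, keepRemoteData bool) { /* elided */ }

type evictReason int

const (
	evictTTLExpired evictReason = iota
	evictIOError
)

// cleanupPass sketch: track the delete reason via a parallel slice so
// IO-error evictions preserve the remote object while TTL-expired
// evictions still delete it.
func (s *Store) cleanupPass(toDelete []*Volume, reasons []evictReason) {
	for i, v := range toDelete { // reasons[i] explains toDelete[i]
		keepRemoteData := false
		if reasons[i] == evictIOError {
			keepRemoteData = v.HasRemoteFile()
		}
		s.deleteVolume(v, keepRemoteData)
	}
}
```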
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* volume.tier.move: fulfill target replication before deleting old replicas
When -toReplication is specified, volume.tier.move now creates all
required replicas on the destination tier before deleting old replicas.
This closes the data-loss window where only one copy existed on the
target tier while awaiting volume.fix.replication.
If replication fulfillment fails, old replicas are preserved and marked
writable so the volume remains accessible.
Also extracts replicateVolumeToServer and configureVolumeReplication
helpers to reduce duplication across volume.tier.move and
volume.fix.replication.
Fixes #8937.
* volume.tier.move: always fulfill replication before deleting old replicas
When -toReplication is specified, use that replication setting.
Otherwise, read the volume's existing replication from the super block.
In both cases, all required replicas are created on the destination
tier before old replicas are deleted.
If replication fulfillment fails (e.g. not enough destination nodes),
old replicas are preserved and marked writable so no data is lost.
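A sketch of the resulting ordering; volumeRef and the helper stubs are
illustrative (markOldReplicasWritable is hypothetical), not the
command's real signatures:

```go
package shell

import "fmt"

type volumeRef struct{ id uint32 }

// Stubbed helpers; ensureReplicationFulfilled is named after the real
// helper, the others are hypothetical.
func readSuperBlockReplication(v volumeRef) (string, error)            { return "001", nil }
func ensureReplicationFulfilled(v volumeRef, replication string) error { return nil }
func markOldReplicasWritable(v volumeRef)                              {}
func deleteOldReplicas(v volumeRef) error                              { return nil }

// tierMove sketch: resolve the target replication, fulfill it on the
// destination tier, and only then delete the old replicas, so the
// volume is never down to a single copy on the target tier.
func tierMove(v volumeRef, toReplication string) error {
	replication := toReplication // -toReplication wins when specified
	if replication == "" {
		var err error
		if replication, err = readSuperBlockReplication(v); err != nil {
			return err
		}
	}
	if err := ensureReplicationFulfilled(v, replication); err != nil {
		markOldReplicasWritable(v) // keep old replicas, no data loss
		return fmt.Errorf("replication not fulfilled, old replicas preserved: %w", err)
	}
	return deleteOldReplicas(v)
}
```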
* volume.tier.move: address review feedback on ensureReplicationFulfilled
- Add 5s delay before re-collecting topology to allow master heartbeat
propagation after the move
- Add nil guard for targetTierReplicas to prevent panic if the moved
replica is not yet visible in the topology
- Treat configureVolumeReplication failure as a hard error instead of a
warning, so the rollback logic preserves old replicas
* volume.tier.move: harden replication config error handling
- Make configureVolumeReplication failure on the primary moved replica a
hard error that aborts the move, instead of logging and continuing
- Configure replication metadata on all existing target-tier replicas
(not just newly created ones) when -toReplication is specified
- Deletion of old replicas cannot affect new replicas since the
locations list only contains pre-move servers (verified, no change)
* volume.tier.move: fix cleanup deleting fulfilled replicas and broken recovery
Fix 1: The cleanup loop now preserves pre-existing target-tier replicas
that ensureReplicationFulfilled counted toward the replication target.
Previously, a mixed-tier volume with an existing replica on the target
tier could have that replica deleted right after being counted as
fulfilled, leaving the volume under-replicated.
ensureReplicationFulfilled now returns a preserveServers set that the
deletion loop checks before removing any old replica.
Fix 2: Failure paths after LiveMoveVolume (which deletes the source
replica) now use restoreSurvivingReplicasWritable instead of
markVolumeReplicasWritable. The old helper stopped on first error, so
attempting to mark the already-deleted source writable would prevent
all surviving replicas from being restored. The new helper skips the
deleted source and continues through all remaining locations, logging
per-replica errors instead of aborting.
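A sketch of the preserveServers handoff, with simplified types:

```go
package shell

type serverAddr string

// deletePreMoveReplicas sketch: ensureReplicationFulfilled returns the
// set of pre-existing target-tier replicas it counted toward the
// replication target, and the deletion loop skips them.
func deletePreMoveReplicas(locations []serverAddr, preserveServers map[serverAddr]bool) {
	for _, loc := range locations {
		if preserveServers[loc] {
			// Deleting a counted replica would leave the volume
			// under-replicated right after being declared fulfilled.
			continue
		}
		// delete the replica at loc (elided)
	}
}
```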
* volume.tier.move: mark preserved replicas writable, skip nodes with existing volume
Fix 1: Preserved pre-existing target-tier replicas were left read-only
after the move completed. They were marked read-only at the start
(along with all other replicas) but never restored since the old code
deleted them. Now they are explicitly marked writable before cleanup.
Fix 2: The fulfillment loop could pick a candidate node that already
hosts this volume on a different disk type, causing a VolumeCopy
conflict. Added a guard that skips any node already hosting the volume
(on any disk) before attempting replication.
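A sketch of the candidate guard, with simplified topology types:

```go
package shell

type node struct {
	addr    string
	volumes map[uint32]bool // volume id present on any disk type
}

func (n node) hasVolume(vid uint32) bool { return n.volumes[vid] }

// pickReplicationTarget sketch: skip any node that already hosts the
// volume on any disk type, since a VolumeCopy to it would conflict.
func pickReplicationTarget(candidates []node, vid uint32) (node, bool) {
	for _, n := range candidates {
		if n.hasVolume(vid) {
			continue
		}
		return n, true
	}
	return node{}, false
}
```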
* feat(weed.move): add a speed-limit parameter for moving files
* fix(weed.move): set the default value of ioBytePerSecond to vs.compactionBytePerSecond
Co-authored-by: zhihao.qu <zhihao.qu@ly.com>
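A sketch of the pacing idea; this sleep-based throttledWriter is
illustrative, not the actual ioBytePerSecond implementation:

```go
package command

import (
	"io"
	"time"
)

// throttledWriter caps the byte rate of a copy stream by pacing writes
// against a one-second window.
type throttledWriter struct {
	w            io.Writer
	bytesPerSec  int64 // 0 means unlimited
	writtenInSec int64
	windowStart  time.Time
}

func (t *throttledWriter) Write(p []byte) (int, error) {
	if t.bytesPerSec > 0 {
		if time.Since(t.windowStart) >= time.Second {
			t.windowStart, t.writtenInSec = time.Now(), 0
		}
		if t.writtenInSec+int64(len(p)) > t.bytesPerSec {
			// Budget exhausted: wait out the rest of the window.
			time.Sleep(time.Second - time.Since(t.windowStart))
			t.windowStart, t.writtenInSec = time.Now(), 0
		}
	}
	n, err := t.w.Write(p)
	t.writtenInSec += int64(n)
	return n, err
}
```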