Chris Lu ae08e77979 fix(scheduler): give worker tasks a real per-attempt execution deadline (#9041)
* fix(scheduler): give worker tasks a real per-attempt execution deadline

The plugin scheduler derived the per-attempt execution deadline as
DetectionTimeoutSeconds * 2, which capped every worker task at twice
the cluster-scan budget regardless of actual work. For volume_balance
batches this was 240s — far too short for 20 large volume copies, so
every attempt died at "context deadline exceeded" and all in-flight
sub-RPCs surfaced as "context canceled". Retries restarted from move 1
and hit the same wall.

Add an explicit ExecutionTimeoutSeconds field to the plugin proto and
make each handler declare its own baseline (1800s for vacuum, balance,
EC; 3600s for iceberg). Size-aware handlers also emit an
estimated_runtime_seconds parameter on each proposal so the scheduler
extends the per-attempt deadline based on actual workload:

- volume_balance batch: max(largest single move, total / concurrency)
  at 5 min/GB, so a skewed batch with one big volume isn't averaged
  away.
- volume_balance single, vacuum (already), erasure_coding (10 min/GB),
  ec_balance (5 min/GB): per-volume budgets.

admin_script and iceberg keep the configurable handler default since
their workloads are opaque to the detector.
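The batch rule above can be sketched in Go. This is a hypothetical illustration, not the actual SeaweedFS code: the function name, the slice-of-sizes input, and the 5 min/GB constant are assumptions taken from the description.

```go
package main

import "fmt"

const secondsPerGB = 5 * 60 // assumed 5 min/GB copy rate

// estimateBatchSeconds returns the larger of the longest single move
// and the total work divided across concurrent slots, at the assumed
// per-GB rate, so one big volume is never averaged away.
func estimateBatchSeconds(moveSizesGB []float64, concurrency int) float64 {
	var total, largest float64
	for _, gb := range moveSizesGB {
		total += gb
		if gb > largest {
			largest = gb
		}
	}
	perSlot := total / float64(concurrency)
	if largest > perSlot {
		return largest * secondsPerGB
	}
	return perSlot * secondsPerGB
}

func main() {
	// One 40 GB volume among tiny moves dominates the estimate.
	fmt.Println(estimateBatchSeconds([]float64{40, 1, 1, 1}, 4)) // 12000
}
```

With an even batch the per-slot term wins instead, so the estimate degrades gracefully to total/concurrency.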

* fix(scheduler): apply descriptor defaults to existing persisted configs

The previous commit added execution_timeout_seconds to the proto and
each handler's descriptor defaults, but two paths still left existing
deployments broken:

1. deriveSchedulerAdminRuntime returned stored AdminRuntime configs
   as-is. Persisted configs from older versions have no
   execution_timeout_seconds, so the scheduler fell back to the 90s
   default — worse than the prior 240s behavior. Overlay descriptor
   defaults for any zero numeric fields when loading.

2. The admin form did not round-trip execution_timeout_seconds, so a
   normal save would clear it back to zero. Add the input field, the
   fillAdminSettings/collectAdminSettings hooks, and as defense in
   depth reapply descriptor defaults in UpdatePluginJobTypeConfigAPI
   before persisting so a stale form can never silently clobber a
   baseline.
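The overlay in point 1 amounts to filling zero-valued numeric fields from the descriptor defaults before use. A minimal sketch, with hypothetical struct and function names standing in for the real proto-generated types:

```go
package main

import "fmt"

// JobConfig is a stand-in for the persisted plugin job-type config.
type JobConfig struct {
	DetectionTimeoutSeconds int32
	ExecutionTimeoutSeconds int32
}

// applyDescriptorDefaults fills any zero numeric field from the
// handler's descriptor defaults. Older deployments persisted configs
// before execution_timeout_seconds existed, so zero means "unset".
func applyDescriptorDefaults(stored, defaults JobConfig) JobConfig {
	if stored.DetectionTimeoutSeconds == 0 {
		stored.DetectionTimeoutSeconds = defaults.DetectionTimeoutSeconds
	}
	if stored.ExecutionTimeoutSeconds == 0 {
		stored.ExecutionTimeoutSeconds = defaults.ExecutionTimeoutSeconds
	}
	return stored
}

func main() {
	old := JobConfig{DetectionTimeoutSeconds: 120} // pre-upgrade config
	merged := applyDescriptorDefaults(old, JobConfig{
		DetectionTimeoutSeconds: 60,
		ExecutionTimeoutSeconds: 1800,
	})
	fmt.Println(merged.ExecutionTimeoutSeconds) // 1800
}
```

Running the same overlay again in the update API (point 2) is what makes it defense in depth: a form that posts a zero can never clobber a non-zero baseline.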

* fix(volume_balance): account for partial scheduling rounds in batch estimate

With N moves and C slots, the busiest slot processes ceil(N/C) moves,
not N/C. Dividing total seconds by C underestimates wall-clock time
whenever N is not a multiple of C — e.g. 6 moves at concurrency 5
needs 2 rounds, not 1.2. Use avg * ceil(N/C) so partial rounds are
counted as full ones.
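The rounds calculation reduces to integer ceiling division; a sketch, with an assumed function name:

```go
package main

import "fmt"

// numRounds: with N moves and C slots, the busiest slot processes
// ceil(N/C) moves, computed here with integer arithmetic.
func numRounds(moves, concurrency int) int {
	return (moves + concurrency - 1) / concurrency
}

func main() {
	fmt.Println(numRounds(6, 5))  // 2: the partial round counts as a full one
	fmt.Println(numRounds(10, 5)) // 2: exact multiples are unchanged
}
```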

* fix(volume_balance): scale minBudget per wave instead of per move

Orchestration overhead (setup/teardown for the parallel move runner)
happens once per wave, not once per move. Use numRounds*60 as the
floor instead of len(moves)*60 so the minimum doesn't inflate
linearly with batch size when individual moves are tiny.
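Putting the two fixes together, the per-attempt budget is the estimate floored at one overhead unit per wave. This is an assumed composition of the pieces described above, with hypothetical names and a 60s overhead constant taken from the text:

```go
package main

import "fmt"

const perWaveOverheadSeconds = 60 // assumed fixed setup/teardown cost per wave

// budgetSeconds floors the runtime estimate at numRounds*60, so the
// minimum scales with waves of the parallel move runner, not with the
// number of moves.
func budgetSeconds(estimated float64, numMoves, concurrency int) float64 {
	numRounds := (numMoves + concurrency - 1) / concurrency
	minBudget := float64(numRounds * perWaveOverheadSeconds)
	if estimated < minBudget {
		return minBudget
	}
	return estimated
}

func main() {
	// 20 tiny moves at concurrency 5: 4 waves -> 240s floor, not the
	// 1200s a per-move floor of len(moves)*60 would have produced.
	fmt.Println(budgetSeconds(30, 20, 5)) // 240
}
```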
2026-04-13 01:15:53 -07:00