seaweedfs/weed/admin/plugin/registry.go
Chris Lu 884b0bcbfd feat(s3/lifecycle): cluster rate-limit allocation (Phase 3) (#9456)
* feat(s3/lifecycle): cluster rate-limit allocation (Phase 3)

Admin computes a per-worker share of cluster_deletes_per_second at
ExecuteJob time and ships it to the worker via
ClusterContext.Metadata. The worker reads the share, constructs a
golang.org/x/time/rate.Limiter, and passes it to dailyrun.Run via
cfg.Limiter (Phase 2 already plumbed the field). Phase 5 deletes the
streaming path; until then streaming ignores the cap.

Why allocate at admin: the cluster cap is a single knob operators
care about. Dividing it locally per worker would either need
out-of-band coordination or accept N× the configured budget. Admin
is the only party that knows how many execute-capable workers there
are, so it owns the math.

Admin side (weed/admin/plugin):
- Registry.CountCapableExecutors(jobType) returns the number of
  non-stale workers with CanExecute=true.
- New file cluster_rate_limit.go: decorateClusterContextForJob clones
  the input ClusterContext and injects two metadata keys for
  s3_lifecycle. cloneClusterContext duplicates Metadata so per-job
  decoration doesn't race shared base state (a rough sketch follows
  this list).
- executeJobWithExecutor calls the decorator after loading the admin
  config; other job types pass through unchanged.
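
A rough sketch of that decorator, assuming placeholder metadata key strings, a simplified ClusterContext, and strconv for formatting (none of these names are the real constants, and the divisor later becomes min(executors, maxJobsPerDetection) in the follow-up fix below):

    // Illustrative sketch only: the ClusterContext shape, helper signature, and
    // metadata key strings are placeholders, not the shared constants.
    type ClusterContext struct {
        Metadata map[string]string
    }

    func cloneClusterContext(in *ClusterContext) *ClusterContext {
        out := &ClusterContext{Metadata: make(map[string]string, len(in.Metadata))}
        for k, v := range in.Metadata {
            out.Metadata[k] = v
        }
        return out
    }

    func decorateClusterContextForJob(jobType string, base *ClusterContext, rps float64, burst, executors int) *ClusterContext {
        if jobType != "s3_lifecycle" || rps <= 0 || executors <= 0 {
            return base // other job types, rps=0, or no capable executor: pass through unchanged
        }
        out := cloneClusterContext(base) // never mutate the shared base metadata
        out.Metadata["cluster_deletes_per_second"] = strconv.FormatFloat(rps/float64(executors), 'f', -1, 64)
        if burst > 0 { // burst=0 is omitted so the worker-side default (2 × rate) applies
            share := burst / executors
            if share < 1 {
                share = 1 // burst floor of 1
            }
            out.Metadata["cluster_deletes_burst"] = strconv.Itoa(share)
        }
        return out
    }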

Worker side (weed/worker/tasks/s3_lifecycle):
- New cluster_rate_limit.go declares the constants both sides agree
  on (admin-config field names, metadata keys). Plain strings on the
  admin side keep weed/admin/plugin free of a dependency on the
  s3_lifecycle worker package; the two sets of constants are pinned
  to identical values and a mismatch would silently disable rate
  limiting.
- handler.go executeDailyReplay reads ClusterContext.Metadata,
  builds a rate.Limiter, and passes it into dailyrun.Config{Limiter}
  (sketched after this list). Missing/empty/non-positive values → no
  limiter (legacy unlimited behavior). Burst defaults to 2 × rate,
  clamped to ≥1 to avoid a bucket that never refills.
- Admin form gains two fields under "Scope": cluster_deletes_per_second
  (rate, 0 = unlimited) and cluster_deletes_burst (0 = 2 × rate).
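
Under the same naming caveat (the package name, function name, and key strings below are placeholders, not the real constants), the worker-side construction reduces to roughly:

    package sketch

    import (
        "strconv"

        "golang.org/x/time/rate"
    )

    // limiterFromMetadata builds the per-worker limiter from the admin-provided share.
    func limiterFromMetadata(md map[string]string) *rate.Limiter {
        rps, err := strconv.ParseFloat(md["cluster_deletes_per_second"], 64)
        if err != nil || rps <= 0 {
            return nil // missing/empty/non-positive value: no limiter, legacy unlimited behavior
        }
        burst := 0
        if b, err := strconv.Atoi(md["cluster_deletes_burst"]); err == nil {
            burst = b
        }
        if burst <= 0 {
            burst = int(2 * rps) // burst defaults to 2 × rate
        }
        if burst < 1 {
            burst = 1 // a tiny rate still gets a bucket that can admit one delete
        }
        return rate.NewLimiter(rate.Limit(rps), burst)
    }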

Metric:
- New S3LifecycleDispatchLimiterWaitSeconds histogram observes how
  long each Limiter.Wait blocks before a LifecycleDelete RPC.
  Operators tune the cap by reading p95 — near-zero means the cap
  isn't binding, a long tail at 1/rate means it is.
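
Roughly, the dispatch loop wraps the wait like this (only the histogram name comes from this change; the surrounding code is illustrative):

    if limiter != nil {
        start := time.Now()
        if err := limiter.Wait(ctx); err != nil {
            return err // context cancelled or deadline hit while waiting for a token
        }
        S3LifecycleDispatchLimiterWaitSeconds.Observe(time.Since(start).Seconds())
    }
    // ... issue the LifecycleDelete RPC for this object ...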

Tests:
- weed/admin/plugin/cluster_rate_limit_test.go: 9 cases covering
  pass-through for non-allocator job types, rps=0 / no-executors
  skip, even sharing, burst sharing, burst=0 omit (worker default
  kicks in), burst floor of 1, no mutation of input metadata, nil
  input.
- weed/worker/tasks/s3_lifecycle/cluster_rate_limit_test.go: 7 cases
  covering nil/empty/missing metadata, non-positive/invalid rate,
  positive rate builds correctly, burst missing defaults to 2× rate,
  tiny rate clamps burst to ≥1.

Build clean. Phase 2 (#9446) and Phase 4 engine (#9447) are the
parents; this branch stacks on Phase 2 since it consumes
dailyrun.Config{Limiter} which lands there.

* fix(s3/lifecycle): divide cluster budget by active workers, not all capable

Gemini pointed out that s3_lifecycle has MaxJobsPerDetection=1
(handler.go:189) — it's a singleton job, only one worker is ever active.
Dividing the cluster_deletes_per_second budget by the count of capable
executors gave the single active worker just 1/N of the configured cap.

Pass adminRuntime.MaxJobsPerDetection through to the decorator. Divisor
is now min(executors, maxJobsPerDetection), clamped to >=1. For
s3_lifecycle (maxJobs=1) the active worker gets the full budget; for a
hypothetical parallel-dispatch job (maxJobs>1) the budget divides
across the running-set.
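
A sketch of the new divisor, with an illustrative function name but the arithmetic described above:

    func rateDivisor(capableExecutors, maxJobsPerDetection int) int {
        d := capableExecutors
        if maxJobsPerDetection > 0 && maxJobsPerDetection < d {
            d = maxJobsPerDetection // split the budget only across workers that can run at once
        }
        if d < 1 {
            d = 1 // never divide by zero; allocation is skipped when no executor exists
        }
        return d
    }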

Tests swap the SharedEvenly case for three pinned scenarios:
  - SingletonJobGetsFullBudget: maxJobs=1 across 4 executors => 100/1
  - SharedEvenlyWhenParallelLimited: maxJobs=4 across 4 executors => 25/worker
  - MaxJobsExceedsExecutors: maxJobs=10 across 4 executors => divisor 4

* feat(s3/lifecycle): drop Worker Count knob from admin config form

The "Worker Count" admin field controlled in-process pipeline goroutines
across the 16-shard space — per-worker tuning, not a cluster-wide scope
concern. Operators looking at the form alongside Cluster Delete Rate
reasonably misread it as the number of workers in the cluster.

Drop the form field and DefaultValues entry. cfg.Workers is now hardcoded
to shardPipelineGoroutines (=1) inside ParseConfig; the rest of the
plumbing through dailyrun.Config.Workers stays so a future need can
re-introduce it as a worker-local knob (or just bump the constant).
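
A minimal sketch of where the value is pinned now (the helper name is illustrative; dailyrun.Config.Workers is the real field):

    const shardPipelineGoroutines = 1 // per-worker pipeline parallelism across the 16-shard space

    func applyPipelineDefaults(cfg *dailyrun.Config) {
        cfg.Workers = shardPipelineGoroutines // no longer operator-tunable from the admin form
    }
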

handler_test.go pins that "workers" must NOT appear in the form so the
removal doesn't silently regress.
2026-05-11 19:17:06 -07:00


package plugin

import (
	"fmt"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

const defaultWorkerStaleTimeout = 2 * time.Minute

// WorkerSession contains tracked worker metadata and plugin status.
type WorkerSession struct {
	WorkerID        string
	WorkerInstance  string
	Address         string
	WorkerVersion   string
	ProtocolVersion string
	ConnectedAt     time.Time
	LastSeenAt      time.Time
	Capabilities    map[string]*plugin_pb.JobTypeCapability
	Heartbeat       *plugin_pb.WorkerHeartbeat
}

// Registry tracks connected plugin workers and capability-based selection.
type Registry struct {
	mu             sync.RWMutex
	sessions       map[string]*WorkerSession
	staleAfter     time.Duration
	detectorCursor map[string]int
	executorCursor map[string]int
}

// NewRegistry creates an empty Registry with the default stale timeout.
func NewRegistry() *Registry {
	return &Registry{
		sessions:       make(map[string]*WorkerSession),
		staleAfter:     defaultWorkerStaleTimeout,
		detectorCursor: make(map[string]int),
		executorCursor: make(map[string]int),
	}
}

// UpsertFromHello registers or refreshes the session for a worker from its hello
// message and returns a copy of the stored session.
func (r *Registry) UpsertFromHello(hello *plugin_pb.WorkerHello) *WorkerSession {
	now := time.Now()
	caps := make(map[string]*plugin_pb.JobTypeCapability, len(hello.Capabilities))
	for _, c := range hello.Capabilities {
		if c == nil || c.JobType == "" {
			continue
		}
		caps[c.JobType] = cloneJobTypeCapability(c)
	}
	r.mu.Lock()
	defer r.mu.Unlock()
	session, ok := r.sessions[hello.WorkerId]
	if !ok {
		session = &WorkerSession{
			WorkerID:    hello.WorkerId,
			ConnectedAt: now,
		}
		r.sessions[hello.WorkerId] = session
	}
	session.WorkerInstance = hello.WorkerInstanceId
	session.Address = hello.Address
	session.WorkerVersion = hello.WorkerVersion
	session.ProtocolVersion = hello.ProtocolVersion
	session.LastSeenAt = now
	session.Capabilities = caps
	return cloneWorkerSession(session)
}

// Remove drops the session tracked for the given worker ID.
func (r *Registry) Remove(workerID string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.sessions, workerID)
}

// UpdateHeartbeat stores a copy of the latest heartbeat for a known worker and refreshes its last-seen time.
func (r *Registry) UpdateHeartbeat(workerID string, heartbeat *plugin_pb.WorkerHeartbeat) {
	r.mu.Lock()
	defer r.mu.Unlock()
	session, ok := r.sessions[workerID]
	if !ok {
		return
	}
	session.Heartbeat = cloneWorkerHeartbeat(heartbeat)
	session.LastSeenAt = time.Now()
}

// Get returns a copy of the session for workerID if it exists and is not stale.
func (r *Registry) Get(workerID string) (*WorkerSession, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	session, ok := r.sessions[workerID]
	if !ok || r.isSessionStaleLocked(session, time.Now()) {
		return nil, false
	}
	return cloneWorkerSession(session), true
}

// List returns copies of all non-stale sessions, sorted by worker ID.
func (r *Registry) List() []*WorkerSession {
	r.mu.RLock()
	defer r.mu.RUnlock()
	out := make([]*WorkerSession, 0, len(r.sessions))
	now := time.Now()
	for _, s := range r.sessions {
		if r.isSessionStaleLocked(s, now) {
			continue
		}
		out = append(out, cloneWorkerSession(s))
	}
	sort.Slice(out, func(i, j int) bool {
		return out[i].WorkerID < out[j].WorkerID
	})
	return out
}

// HasCapableWorker checks if any non-stale worker session has a capability for the given job type.
// A worker is capable if its capabilities include the job type with CanDetect or CanExecute true.
func (r *Registry) HasCapableWorker(jobType string) bool {
	r.mu.RLock()
	defer r.mu.RUnlock()
	now := time.Now()
	for _, session := range r.sessions {
		if r.isSessionStaleLocked(session, now) {
			continue
		}
		capability := session.Capabilities[jobType]
		if capability == nil {
			continue
		}
		if capability.CanDetect || capability.CanExecute {
			return true
		}
	}
	return false
}

// CountCapableExecutors returns the number of non-stale workers that
// can EXECUTE the given job type. Used by per-job-type cluster
// allocators (e.g. the s3_lifecycle delete-rate divider) to compute a
// per-worker share at dispatch time. Returns 0 when no executor is
// available — callers should treat that as "skip allocation" rather
// than dividing by zero.
func (r *Registry) CountCapableExecutors(jobType string) int {
	r.mu.RLock()
	defer r.mu.RUnlock()
	now := time.Now()
	n := 0
	for _, session := range r.sessions {
		if r.isSessionStaleLocked(session, now) {
			continue
		}
		capability := session.Capabilities[jobType]
		if capability == nil || !capability.CanExecute {
			continue
		}
		n++
	}
	return n
}

// DetectableJobTypes returns sorted job types that currently have at least one detect-capable worker.
func (r *Registry) DetectableJobTypes() []string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	jobTypes := make(map[string]struct{})
	now := time.Now()
	for _, session := range r.sessions {
		if r.isSessionStaleLocked(session, now) {
			continue
		}
		for jobType, capability := range session.Capabilities {
			if capability == nil || !capability.CanDetect {
				continue
			}
			jobTypes[jobType] = struct{}{}
		}
	}
	out := make([]string, 0, len(jobTypes))
	for jobType := range jobTypes {
		out = append(out, jobType)
	}
	sort.Strings(out)
	return out
}

// JobTypes returns sorted job types known by connected workers regardless of capability kind.
func (r *Registry) JobTypes() []string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	jobTypes := make(map[string]struct{})
	now := time.Now()
	for _, session := range r.sessions {
		if r.isSessionStaleLocked(session, now) {
			continue
		}
		for jobType := range session.Capabilities {
			if jobType == "" {
				continue
			}
			jobTypes[jobType] = struct{}{}
		}
	}
	out := make([]string, 0, len(jobTypes))
	for jobType := range jobTypes {
		out = append(out, jobType)
	}
	sort.Strings(out)
	return out
}

// PickSchemaProvider picks one worker for schema requests.
// Preference order:
// 1) workers that can detect this job type
// 2) workers that can execute this job type
// tie-break: more free slots, then lexical worker ID.
func (r *Registry) PickSchemaProvider(jobType string) (*WorkerSession, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	var candidates []*WorkerSession
	now := time.Now()
	for _, s := range r.sessions {
		if r.isSessionStaleLocked(s, now) {
			continue
		}
		capability := s.Capabilities[jobType]
		if capability == nil {
			continue
		}
		if capability.CanDetect || capability.CanExecute {
			candidates = append(candidates, s)
		}
	}
	if len(candidates) == 0 {
		return nil, fmt.Errorf("no worker available for schema job_type=%s", jobType)
	}
	sort.Slice(candidates, func(i, j int) bool {
		a := candidates[i]
		b := candidates[j]
		ac := a.Capabilities[jobType]
		bc := b.Capabilities[jobType]
		// Prefer detect-capable providers first.
		if ac.CanDetect != bc.CanDetect {
			return ac.CanDetect
		}
		aSlots := availableDetectionSlots(a, ac) + availableExecutionSlots(a, ac)
		bSlots := availableDetectionSlots(b, bc) + availableExecutionSlots(b, bc)
		if aSlots != bSlots {
			return aSlots > bSlots
		}
		return a.WorkerID < b.WorkerID
	})
	return cloneWorkerSession(candidates[0]), nil
}

// PickDetector picks one detector worker for a job type.
func (r *Registry) PickDetector(jobType string) (*WorkerSession, error) {
	return r.pickByKind(jobType, true)
}

// PickExecutor picks one executor worker for a job type.
func (r *Registry) PickExecutor(jobType string) (*WorkerSession, error) {
	return r.pickByKind(jobType, false)
}

// ListExecutors returns sorted executor candidates for one job type.
// Ordering is by most available execution slots, then lexical worker ID.
// The top tie group is rotated round-robin to prevent sticky assignment.
func (r *Registry) ListExecutors(jobType string) ([]*WorkerSession, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	candidates := r.collectByKindLocked(jobType, false, time.Now())
	if len(candidates) == 0 {
		return nil, fmt.Errorf("no executor worker available for job_type=%s", jobType)
	}
	sortByKind(candidates, jobType, false)
	r.rotateTopCandidatesLocked(candidates, jobType, false)
	out := make([]*WorkerSession, 0, len(candidates))
	for _, candidate := range candidates {
		out = append(out, cloneWorkerSession(candidate))
	}
	return out, nil
}

func (r *Registry) pickByKind(jobType string, detect bool) (*WorkerSession, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	candidates := r.collectByKindLocked(jobType, detect, time.Now())
	if len(candidates) == 0 {
		kind := "executor"
		if detect {
			kind = "detector"
		}
		return nil, fmt.Errorf("no %s worker available for job_type=%s", kind, jobType)
	}
	sortByKind(candidates, jobType, detect)
	r.rotateTopCandidatesLocked(candidates, jobType, detect)
	return cloneWorkerSession(candidates[0]), nil
}

func (r *Registry) collectByKindLocked(jobType string, detect bool, now time.Time) []*WorkerSession {
	var candidates []*WorkerSession
	for _, session := range r.sessions {
		if r.isSessionStaleLocked(session, now) {
			continue
		}
		capability := session.Capabilities[jobType]
		if capability == nil {
			continue
		}
		if detect && capability.CanDetect {
			candidates = append(candidates, session)
		}
		if !detect && capability.CanExecute {
			candidates = append(candidates, session)
		}
	}
	return candidates
}

// isSessionStaleLocked reports whether a session has not been seen (or connected)
// within staleAfter; a nil session is always stale, and a non-positive staleAfter
// disables staleness checks.
func (r *Registry) isSessionStaleLocked(session *WorkerSession, now time.Time) bool {
	if session == nil {
		return true
	}
	if r.staleAfter <= 0 {
		return false
	}
	lastSeen := session.LastSeenAt
	if lastSeen.IsZero() {
		lastSeen = session.ConnectedAt
	}
	if lastSeen.IsZero() {
		return false
	}
	return now.Sub(lastSeen) > r.staleAfter
}

// sortByKind orders candidates by most available slots of the requested kind, then by worker ID.
func sortByKind(candidates []*WorkerSession, jobType string, detect bool) {
	sort.Slice(candidates, func(i, j int) bool {
		a := candidates[i]
		b := candidates[j]
		ac := a.Capabilities[jobType]
		bc := b.Capabilities[jobType]
		aSlots := availableSlotsByKind(a, ac, detect)
		bSlots := availableSlotsByKind(b, bc, detect)
		if aSlots != bSlots {
			return aSlots > bSlots
		}
		return a.WorkerID < b.WorkerID
	})
}

// rotateTopCandidatesLocked rotates the leading group of candidates tied on available
// slots using a per-job-type cursor, so equally loaded workers take turns being picked first.
func (r *Registry) rotateTopCandidatesLocked(candidates []*WorkerSession, jobType string, detect bool) {
	if len(candidates) < 2 {
		return
	}
	capability := candidates[0].Capabilities[jobType]
	topSlots := availableSlotsByKind(candidates[0], capability, detect)
	tieEnd := 1
	for tieEnd < len(candidates) {
		nextCapability := candidates[tieEnd].Capabilities[jobType]
		if availableSlotsByKind(candidates[tieEnd], nextCapability, detect) != topSlots {
			break
		}
		tieEnd++
	}
	if tieEnd <= 1 {
		return
	}
	cursorKey := strings.TrimSpace(jobType)
	if cursorKey == "" {
		cursorKey = "*"
	}
	var offset int
	if detect {
		offset = r.detectorCursor[cursorKey] % tieEnd
		r.detectorCursor[cursorKey] = (offset + 1) % tieEnd
	} else {
		offset = r.executorCursor[cursorKey] % tieEnd
		r.executorCursor[cursorKey] = (offset + 1) % tieEnd
	}
	if offset == 0 {
		return
	}
	prefix := append([]*WorkerSession(nil), candidates[:tieEnd]...)
	for i := 0; i < tieEnd; i++ {
		candidates[i] = prefix[(i+offset)%tieEnd]
	}
}

func availableSlotsByKind(
	session *WorkerSession,
	capability *plugin_pb.JobTypeCapability,
	detect bool,
) int {
	if detect {
		return availableDetectionSlots(session, capability)
	}
	return availableExecutionSlots(session, capability)
}

func availableDetectionSlots(session *WorkerSession, capability *plugin_pb.JobTypeCapability) int {
	if session.Heartbeat != nil && session.Heartbeat.DetectionSlotsTotal > 0 {
		free := int(session.Heartbeat.DetectionSlotsTotal - session.Heartbeat.DetectionSlotsUsed)
		if free < 0 {
			return 0
		}
		return free
	}
	if capability.MaxDetectionConcurrency > 0 {
		return int(capability.MaxDetectionConcurrency)
	}
	return 1
}

func availableExecutionSlots(session *WorkerSession, capability *plugin_pb.JobTypeCapability) int {
	if session.Heartbeat != nil && session.Heartbeat.ExecutionSlotsTotal > 0 {
		free := int(session.Heartbeat.ExecutionSlotsTotal - session.Heartbeat.ExecutionSlotsUsed)
		if free < 0 {
			return 0
		}
		return free
	}
	if capability.MaxExecutionConcurrency > 0 {
		return int(capability.MaxExecutionConcurrency)
	}
	return 1
}

func cloneWorkerSession(in *WorkerSession) *WorkerSession {
	if in == nil {
		return nil
	}
	out := *in
	out.Capabilities = make(map[string]*plugin_pb.JobTypeCapability, len(in.Capabilities))
	for jobType, cap := range in.Capabilities {
		out.Capabilities[jobType] = cloneJobTypeCapability(cap)
	}
	out.Heartbeat = cloneWorkerHeartbeat(in.Heartbeat)
	return &out
}

func cloneJobTypeCapability(in *plugin_pb.JobTypeCapability) *plugin_pb.JobTypeCapability {
	if in == nil {
		return nil
	}
	out := *in
	return &out
}

func cloneWorkerHeartbeat(in *plugin_pb.WorkerHeartbeat) *plugin_pb.WorkerHeartbeat {
	if in == nil {
		return nil
	}
	out := *in
	if in.RunningWork != nil {
		out.RunningWork = make([]*plugin_pb.RunningWork, 0, len(in.RunningWork))
		for _, rw := range in.RunningWork {
			if rw == nil {
				continue
			}
			clone := *rw
			out.RunningWork = append(out.RunningWork, &clone)
		}
	}
	if in.QueuedJobsByType != nil {
		out.QueuedJobsByType = make(map[string]int32, len(in.QueuedJobsByType))
		for k, v := range in.QueuedJobsByType {
			out.QueuedJobsByType[k] = v
		}
	}
	if in.Metadata != nil {
		out.Metadata = make(map[string]string, len(in.Metadata))
		for k, v := range in.Metadata {
			out.Metadata[k] = v
		}
	}
	return &out
}