seaweedfs/weed/command/plugin_worker_test.go
Chris Lu 1f6f473995 refactor(worker): co-locate plugin handlers with their task packages (#9301)
* refactor(worker): co-locate plugin handlers with their task packages

Move every per-task plugin handler from weed/plugin/worker/ into the
matching weed/worker/tasks/<name>/ package, so each task owns its
detection, scheduling, execution, and plugin handler in one place.

Step 0 (within pluginworker, no behavior change): extract shared helpers
that previously lived inside individual handler files into dedicated
files and export the ones now consumed across packages.

  - activity.go: BuildExecutorActivity, BuildDetectorActivity
  - config.go: ReadStringConfig/Double/Int64/Bytes/StringList, MapTaskPriority
  - interval.go: ShouldSkipDetectionByInterval
  - volume_state.go: VolumeState + consts, FilterMetricsByVolumeState/Location
  - collection_filter.go: CollectionFilterMode + consts
  - volume_metrics.go: export CollectVolumeMetricsFromMasters,
    MasterAddressCandidates, FetchVolumeList
  - testing_senders_test.go: shared test stubs

Phase 1: move the per-task plugin handlers (and the iceberg subpackage)
into their task packages.

  weed/plugin/worker/vacuum_handler.go         -> weed/worker/tasks/vacuum/plugin_handler.go
  weed/plugin/worker/ec_balance_handler.go     -> weed/worker/tasks/ec_balance/plugin_handler.go
  weed/plugin/worker/erasure_coding_handler.go -> weed/worker/tasks/erasure_coding/plugin_handler.go
  weed/plugin/worker/volume_balance_handler.go -> weed/worker/tasks/balance/plugin_handler.go
  weed/plugin/worker/iceberg/                  -> weed/worker/tasks/iceberg/

  weed/plugin/worker/handlers/handlers.go now blank-imports all five
  task subpackages so their init() registrations fire.

  weed/command/mini.go and the worker tests construct the handler with
  vacuum.DefaultMaxExecutionConcurrency (the constant moved with the
  vacuum handler).

admin_script remains in weed/plugin/worker/ because there is no
underlying weed/worker/tasks/admin_script/ package to merge with.

* refactor(worker): update test/plugin_workers imports for moved handlers

Three handler constructors moved out of pluginworker into their task
packages; update the integration test files in test/plugin_workers/
to import from the new locations:

  pluginworker.NewVacuumHandler        -> vacuum.NewVacuumHandler
  pluginworker.NewVolumeBalanceHandler -> balance.NewVolumeBalanceHandler
  pluginworker.NewErasureCodingHandler -> erasure_coding.NewErasureCodingHandler

The pluginworker import is kept where the file still uses
pluginworker.WorkerOptions / pluginworker.JobHandler.

* refactor(worker): update test/s3tables iceberg import path

The iceberg subpackage moved from weed/plugin/worker/iceberg/ to
weed/worker/tasks/iceberg/. test/s3tables/maintenance/maintenance_integration_test.go
still imported the old path, breaking S3 Tables / RisingWave / Trino /
Spark / Iceberg-catalog / STS integration test builds.

Mirrors the OSS-side fix needed by every job in the run that
transitively imports test/s3tables/maintenance.

* chore: gofmt PR-touched files

The S3 Tables Format Check job runs `gofmt -l` over weed/s3api/s3tables
and test/s3tables, then fails if anything is unformatted. Files this
PR moved or modified had import-grouping and trailing-spacing issues
introduced by perl-based renames; reformat them with gofmt -w.

Touched files:
  test/plugin_workers/erasure_coding/{detection,execution}_test.go
  test/s3tables/maintenance/maintenance_integration_test.go
  weed/plugin/worker/handlers/handlers.go
  weed/worker/tasks/{balance,ec_balance,erasure_coding,vacuum}/plugin_handler*.go

* refactor(worker): bounds-checked int conversions for plugin config values

CodeQL flagged 18 go/incorrect-integer-conversion warnings on the moved
plugin handler files: results of pluginworker.ReadInt64Config (which
ultimately calls strconv.ParseInt with bit size 64) were being narrowed
to int32/uint32/int without an upper-bound check, so a malicious or
malformed admin/worker config value could overflow the target type.

Add three helpers in weed/plugin/worker/config.go that wrap
ReadInt64Config and clamp out-of-range values back to the caller's
fallback:

  ReadInt32Config (math.MinInt32 .. math.MaxInt32)
  ReadUint32Config (0 .. math.MaxUint32)
  ReadIntConfig    (math.MinInt32 .. math.MaxInt32, platform-portable)

Update each flagged call site in the four moved task packages to use
the bounds-checked helper. For protobuf uint32 fields (volume IDs)
the variable type also becomes uint32, removing the trailing
uint32(volumeID) casts and changing the "missing volume_id" check
from `<= 0` to `== 0`.

Touched files:
  weed/plugin/worker/config.go
  weed/worker/tasks/balance/plugin_handler.go
  weed/worker/tasks/erasure_coding/plugin_handler.go
  weed/worker/tasks/vacuum/plugin_handler.go

* refactor(worker): use ReadIntConfig for clamped derive-worker-config helpers

CodeQL still flagged three call sites where ReadInt64Config was being
narrowed to int after a value-range clamp (max_concurrent_moves <= 50,
batch_size <= 100, min_server_count >= 2). The clamp is correct but
CodeQL's flow analysis didn't recognize the bound, so it flagged them
as unbounded narrowing.

Switch to ReadIntConfig (already int32-bounded by the helper) for
those three sites and drop the now-redundant int64 intermediate
variables.

Also drop the now-unused `> math.MaxInt32` clamp in
ec_balance.deriveECBalanceWorkerConfig (the helper covers it).
2026-05-02 18:03:13 -07:00


package command

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"os"
	"path/filepath"
	"strings"
	"testing"

	pluginworker "github.com/seaweedfs/seaweedfs/weed/plugin/worker"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func TestBuildPluginWorkerHandlerExplicitTypes(t *testing.T) {
	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
	testMaxConcurrency := int(vacuum.DefaultMaxExecutionConcurrency)
	for _, jobType := range []string{"vacuum", "volume_balance", "erasure_coding", "admin_script", "iceberg_maintenance"} {
		handlers, err := buildPluginWorkerHandlers(jobType, dialOption, testMaxConcurrency, "")
		if err != nil {
			t.Fatalf("buildPluginWorkerHandlers(%s) err = %v", jobType, err)
		}
		if len(handlers) != 1 {
			t.Fatalf("expected 1 handler for %s, got %d", jobType, len(handlers))
		}
	}
}

func TestBuildPluginWorkerHandlerAliases(t *testing.T) {
	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
	testMaxConcurrency := int(vacuum.DefaultMaxExecutionConcurrency)
	for _, alias := range []string{"balance", "ec", "iceberg", "admin", "script"} {
		handlers, err := buildPluginWorkerHandlers(alias, dialOption, testMaxConcurrency, "")
		if err != nil {
			t.Fatalf("buildPluginWorkerHandlers(%s) err = %v", alias, err)
		}
		if len(handlers) != 1 {
			t.Fatalf("expected 1 handler for alias %s, got %d", alias, len(handlers))
		}
	}
}

func TestBuildPluginWorkerHandlerUnknown(t *testing.T) {
	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
	_, err := buildPluginWorkerHandlers("unknown", dialOption, 1, "")
	if err == nil {
		t.Fatalf("expected error for unknown job type")
	}
}

func TestBuildPluginWorkerHandlers(t *testing.T) {
	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
	testMaxConcurrency := int(vacuum.DefaultMaxExecutionConcurrency)
	handlers, err := buildPluginWorkerHandlers("vacuum,volume_balance,erasure_coding", dialOption, testMaxConcurrency, "")
	if err != nil {
		t.Fatalf("buildPluginWorkerHandlers(list) err = %v", err)
	}
	if len(handlers) != 3 {
		t.Fatalf("expected 3 handlers, got %d", len(handlers))
	}
	handlers, err = buildPluginWorkerHandlers("balance,ec,vacuum,balance", dialOption, testMaxConcurrency, "")
	if err != nil {
		t.Fatalf("buildPluginWorkerHandlers(aliases) err = %v", err)
	}
	if len(handlers) != 3 {
		t.Fatalf("expected deduped 3 handlers, got %d", len(handlers))
	}
	_, err = buildPluginWorkerHandlers("unknown,vacuum", dialOption, testMaxConcurrency, "")
	if err == nil {
		t.Fatalf("expected unsupported job type error")
	}
}

func TestBuildPluginWorkerHandlersCategories(t *testing.T) {
	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
	testMaxConcurrency := int(vacuum.DefaultMaxExecutionConcurrency)
	allHandlers, err := buildPluginWorkerHandlers("all", dialOption, testMaxConcurrency, "")
	if err != nil {
		t.Fatalf("buildPluginWorkerHandlers(all) err = %v", err)
	}
	// "all" must include at least vacuum and erasure_coding (one default, one heavy)
	allNames := handlerJobTypes(allHandlers)
	for _, required := range []string{"vacuum", "erasure_coding", "iceberg_maintenance"} {
		if !allNames[required] {
			t.Fatalf("'all' missing expected job type %q, got %v", required, allNames)
		}
	}
	defaultHandlers, err := buildPluginWorkerHandlers("default", dialOption, testMaxConcurrency, "")
	if err != nil {
		t.Fatalf("buildPluginWorkerHandlers(default) err = %v", err)
	}
	defaultNames := handlerJobTypes(defaultHandlers)
	heavyHandlers, err := buildPluginWorkerHandlers("heavy", dialOption, testMaxConcurrency, "")
	if err != nil {
		t.Fatalf("buildPluginWorkerHandlers(heavy) err = %v", err)
	}
	heavyNames := handlerJobTypes(heavyHandlers)
	// default and heavy must both be non-empty subsets of all
	if len(defaultNames) == 0 {
		t.Fatalf("'default' resolved no handlers")
	}
	if len(heavyNames) == 0 {
		t.Fatalf("'heavy' resolved no handlers")
	}
	for name := range defaultNames {
		if !allNames[name] {
			t.Fatalf("default handler %q not in 'all'", name)
		}
	}
	for name := range heavyNames {
		if !allNames[name] {
			t.Fatalf("heavy handler %q not in 'all'", name)
		}
	}
	// default and heavy must be disjoint and their union must equal all
	for name := range defaultNames {
		if heavyNames[name] {
			t.Fatalf("handler %q appears in both default and heavy", name)
		}
	}
	if len(defaultNames)+len(heavyNames) != len(allNames) {
		t.Fatalf("union(default=%d, heavy=%d) != all(%d)", len(defaultNames), len(heavyNames), len(allNames))
	}
	// mix category + explicit: "default,iceberg" adds one heavy to default set
	mixedHandlers, err := buildPluginWorkerHandlers("default,iceberg", dialOption, testMaxConcurrency, "")
	if err != nil {
		t.Fatalf("buildPluginWorkerHandlers(default,iceberg) err = %v", err)
	}
	if len(mixedHandlers) != len(defaultHandlers)+1 {
		t.Fatalf("expected default+1 handlers for 'default,iceberg', got %d (default=%d)", len(mixedHandlers), len(defaultHandlers))
	}
}

func TestPluginWorkerDefaultJobTypes(t *testing.T) {
	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
	testMaxConcurrency := int(vacuum.DefaultMaxExecutionConcurrency)
	// defaultPluginWorkerJobTypes is "all", so it should match the "all" category exactly
	defaultHandlers, err := buildPluginWorkerHandlers(defaultPluginWorkerJobTypes, dialOption, testMaxConcurrency, "")
	if err != nil {
		t.Fatalf("buildPluginWorkerHandlers(default setting) err = %v", err)
	}
	allHandlers, err := buildPluginWorkerHandlers("all", dialOption, testMaxConcurrency, "")
	if err != nil {
		t.Fatalf("buildPluginWorkerHandlers(all) err = %v", err)
	}
	if len(defaultHandlers) != len(allHandlers) {
		t.Fatalf("default setting resolved %d handlers, 'all' resolved %d", len(defaultHandlers), len(allHandlers))
	}
}

// handlerJobTypes returns the set of job type names from a slice of handlers.
func handlerJobTypes(handlers []pluginworker.JobHandler) map[string]bool {
	m := make(map[string]bool, len(handlers))
	for _, h := range handlers {
		m[h.Capability().JobType] = true
	}
	return m
}

func TestResolvePluginWorkerID(t *testing.T) {
	dir := t.TempDir()
	explicit, err := resolvePluginWorkerID("worker-x", dir)
	if err != nil {
		t.Fatalf("resolvePluginWorkerID(explicit) err = %v", err)
	}
	if explicit != "worker-x" {
		t.Fatalf("expected explicit id, got %q", explicit)
	}
	generated, err := resolvePluginWorkerID("", dir)
	if err != nil {
		t.Fatalf("resolvePluginWorkerID(generate) err = %v", err)
	}
	if generated == "" {
		t.Fatalf("expected generated id")
	}
	if len(generated) < 2 || generated[:2] != "w-" {
		t.Fatalf("expected generated id prefix w-, got %q", generated)
	}
	persistedPath := filepath.Join(dir, "worker.id")
	if _, statErr := os.Stat(persistedPath); statErr != nil {
		t.Fatalf("expected persisted worker id file: %v", statErr)
	}
	reused, err := resolvePluginWorkerID("", dir)
	if err != nil {
		t.Fatalf("resolvePluginWorkerID(reuse) err = %v", err)
	}
	if reused != generated {
		t.Fatalf("expected reused id %q, got %q", generated, reused)
	}
}

func TestParsePluginWorkerAdminAddress(t *testing.T) {
	host, httpPort, hasExplicitGrpcPort, err := parsePluginWorkerAdminAddress("localhost:23646")
	if err != nil {
		t.Fatalf("parsePluginWorkerAdminAddress(localhost:23646) err = %v", err)
	}
	if host != "localhost" || httpPort != 23646 || hasExplicitGrpcPort {
		t.Fatalf("unexpected parse result: host=%q httpPort=%d hasExplicit=%v", host, httpPort, hasExplicitGrpcPort)
	}
	host, httpPort, hasExplicitGrpcPort, err = parsePluginWorkerAdminAddress("localhost:23646.33646")
	if err != nil {
		t.Fatalf("parsePluginWorkerAdminAddress(localhost:23646.33646) err = %v", err)
	}
	if host != "localhost" || httpPort != 23646 || !hasExplicitGrpcPort {
		t.Fatalf("unexpected dotted parse result: host=%q httpPort=%d hasExplicit=%v", host, httpPort, hasExplicitGrpcPort)
	}
	if _, _, _, err = parsePluginWorkerAdminAddress("localhost"); err == nil {
		t.Fatalf("expected parse error for invalid address")
	}
}

func TestResolvePluginWorkerAdminServerUsesStatusGrpcPort(t *testing.T) {
	const grpcPort = 35432
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path != "/api/plugin/status" {
			http.NotFound(w, r)
			return
		}
		_, _ = w.Write([]byte(fmt.Sprintf(`{"worker_grpc_port":%d}`, grpcPort)))
	}))
	defer server.Close()
	adminAddress := strings.TrimPrefix(server.URL, "http://")
	host, httpPort, _, err := parsePluginWorkerAdminAddress(adminAddress)
	if err != nil {
		t.Fatalf("parsePluginWorkerAdminAddress(%s) err = %v", adminAddress, err)
	}
	resolved := resolvePluginWorkerAdminServer(adminAddress)
	expected := fmt.Sprintf("%s:%d.%d", host, httpPort, grpcPort)
	if resolved != expected {
		t.Fatalf("unexpected resolved admin address: got=%q want=%q", resolved, expected)
	}
}

func TestResolvePluginWorkerAdminServerKeepsDefaultGrpcOffset(t *testing.T) {
	// server is declared before NewServer so the handler closure can read
	// server.URL once requests start arriving.
	var server *httptest.Server
	server = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path != "/api/plugin/status" {
			http.NotFound(w, r)
			return
		}
		address := strings.TrimPrefix(server.URL, "http://")
		_, httpPort, _, parseErr := parsePluginWorkerAdminAddress(address)
		if parseErr != nil {
			http.Error(w, parseErr.Error(), http.StatusInternalServerError)
			return
		}
		_, _ = w.Write([]byte(fmt.Sprintf(`{"worker_grpc_port":%d}`, httpPort+10000)))
	}))
	defer server.Close()
	adminAddress := strings.TrimPrefix(server.URL, "http://")
	resolved := resolvePluginWorkerAdminServer(adminAddress)
	if resolved != adminAddress {
		t.Fatalf("expected admin address to remain unchanged, got=%q want=%q", resolved, adminAddress)
	}
}